From: Lorenz Stechauner
To: Proxmox VE development discussion, Dylan Whyte
Date: Tue, 14 Sep 2021 09:48:17 +0200
Message-ID: <00b3106f-bbeb-66c2-3bb3-2440dfee7a64@proxmox.com>
References: <20210913160036.148321-1-d.whyte@proxmox.com>
In-Reply-To: <20210913160036.148321-1-d.whyte@proxmox.com>
Subject: Re: [pve-devel] [PATCH pve-docs 1/2] pmxcfs: language and style fixup

patch looks good to me in general. see inline for notes

On 13.09.21 18:00, Dylan Whyte wrote:
> minor language fixup
> replace usage of 'Proxmox VE' with '{pve}'
>
> Signed-off-by: Dylan Whyte
> ---
>  pmxcfs.adoc | 62 ++++++++++++++++++++++++++---------------------------
>  1 file changed, 30 insertions(+), 32 deletions(-)
>
> diff --git a/pmxcfs.adoc b/pmxcfs.adoc
> index d4579a7..c0327a2 100644
> --- a/pmxcfs.adoc
> +++ b/pmxcfs.adoc
> @@ -30,17 +30,17 @@ cluster nodes using `corosync`. We use this to store all PVE related
>  configuration files.
>
>  Although the file system stores all data inside a persistent database
> -on disk, a copy of the data resides in RAM. That imposes restriction
> +on disk, a copy of the data resides in RAM. This imposes restrictions
>  on the maximum size, which is currently 30MB. This is still enough to
>  store the configuration of several thousand virtual machines.
>
>  This system provides the following advantages:
>
> -* seamless replication of all configuration to all nodes in real time
> -* provides strong consistency checks to avoid duplicate VM IDs
> -* read-only when a node loses quorum
> -* automatic updates of the corosync cluster configuration to all nodes
> -* includes a distributed locking mechanism
> +* Seamless replication of all configuration to all nodes in real time
> +* Provides strong consistency checks to avoid duplicate VM IDs
> +* Read-only when a node loses quorum
> +* Automatic updates of the corosync cluster configuration to all nodes
> +* Includes a distributed locking mechanism
>
>
>  POSIX Compatibility
> @@ -49,13 +49,13 @@ POSIX Compatibility
>  The file system is based on FUSE, so the behavior is POSIX like. But
>  some feature are simply not implemented, because we do not need them:
>
> -* you can just generate normal files and directories, but no symbolic
> +* You can just generate normal files and directories, but no symbolic
>    links, ...
>
> -* you can't rename non-empty directories (because this makes it easier
> +* You can't rename non-empty directories (because this makes it easier
>    to guarantee that VMIDs are unique).
>
> -* you can't change file permissions (permissions are based on path)
> +* You can't change file permissions (permissions are based on paths)
>
>  * `O_EXCL` creates were not atomic (like old NFS)
>
> @@ -67,13 +67,11 @@ File Access Rights
>
>  All files and directories are owned by user `root` and have group
>  `www-data`. Only root has write permissions, but group `www-data` can
> -read most files. Files below the following paths:
> +read most files. Files below the following paths are only accessible by root:
>
>   /etc/pve/priv/
>   /etc/pve/nodes/${NAME}/priv/
>
> -are only accessible by root.
> -
>
>  Technology
>  ----------
> @@ -157,25 +155,25 @@ And disable verbose syslog messages with:
>  Recovery
>  --------
>
> -If you have major problems with your Proxmox VE host, e.g. hardware
> -issues, it could be helpful to just copy the pmxcfs database file
> -`/var/lib/pve-cluster/config.db` and move it to a new Proxmox VE
> +If you have major problems with your {pve} host, for example hardware
> +issues, it could be helpful to copy the pmxcfs database file
> +`/var/lib/pve-cluster/config.db`, and move it to a new {pve}
>  host. On the new host (with nothing running), you need to stop the
> -`pve-cluster` service and replace the `config.db` file (needed permissions
> -`0600`). Second, adapt `/etc/hostname` and `/etc/hosts` according to the
> -lost Proxmox VE host, then reboot and check. (And don't forget your
> -VM/CT data)
> +`pve-cluster` service and replace the `config.db` file (required permissions
> +`0600`). Following this, adapt `/etc/hostname` and `/etc/hosts` according to the
> +lost {pve} host, then reboot and check (and don't forget your
> +VM/CT data).
>
>
> -Remove Cluster configuration
> +Remove Cluster Configuration
>  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
> -The recommended way is to reinstall the node after you removed it from
> -your cluster. This makes sure that all secret cluster/ssh keys and any
> +The recommended way is to reinstall the node after you remove it from
> +your cluster. This ensures that all secret cluster/ssh keys and any
>  shared configuration data is destroyed.
>
>  In some cases, you might prefer to put a node back to local mode without
> -reinstall, which is described in
> +reinstalling, which is described in
>  <>
>
>
> @@ -183,28 +181,28 @@ Recovering/Moving Guests from Failed Nodes
>  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
>  For the guest configuration files in `nodes//qemu-server/` (VMs) and
> -`nodes//lxc/` (containers), {pve} sees the containing node `` as
> +`nodes//lxc/` (containers), {pve} sees the containing node `` as the
>  owner of the respective guest. This concept enables the usage of local locks
>  instead of expensive cluster-wide locks for preventing concurrent guest
>  configuration changes.
>
> -As a consequence, if the owning node of a guest fails (e.g., because of a power
> -outage, fencing event, ..), a regular migration is not possible (even if all
> -the disks are located on shared storage) because such a local lock on the
> +As a consequence, if the owning node of a guest fails (for example, due to a power
> +outage, fencing event, etc.), a regular migration is not possible (even if all
> +the disks are located on shared storage), because such a local lock on the
>  (dead) owning node is unobtainable. This is not a problem for HA-managed
>  guests, as {pve}'s High Availability stack includes the necessary
>  (cluster-wide) locking and watchdog functionality to ensure correct and
>  automatic recovery of guests from fenced nodes.
>
>  If a non-HA-managed guest has only shared disks (and no other local resources
> -which are only available on the failed node are configured), a manual recovery
> +which are only available on the failed node), a manual recovery
>  is possible by simply moving the guest configuration file from the failed
> -node's directory in `/etc/pve/` to an alive node's directory (which changes the
> +node's directory in `/etc/pve/` to an online node's directory (which changes the
>  logical owner or location of the guest).
>
>  For example, recovering the VM with ID `100` from a dead `node1` to another
> -node `node2` works with the following command executed when logged in as root
> -on any member node of the cluster:
> +node `node2` works by running the following command as root on any member node
> +of the cluster:
>
>   mv /etc/pve/nodes/node1/qemu-server/100.conf /etc/pve/nodes/node2/
>
> @@ -213,7 +211,7 @@ that the failed source node is really powered off/fenced. Otherwise {pve}'s
>  locking principles are violated by the `mv` command, which can have unexpected
>  consequences.
>
> -WARNING: Guest with local disks (or other local resources which are only
> +WARNING: Guests with local disks (or other local resources which are only
>  available on the dead node) are not recoverable like this. Either wait for the

maybe write offline instead of dead? (like above, alive -> online)

>  failed node to rejoin the cluster or restore such guests from backups.
>
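
one more thought, not strictly part of this patch: the recovery paragraph lists
several manual steps (stop the service, replace the database, fix permissions,
adapt hostname/hosts). a short literal block might make it easier to follow.
rough sketch only, assuming a standard {pve} install and that the database
saved from the failed host was copied to /root/config.db (hypothetical path):

  # on the new host, with nothing else running
  systemctl stop pve-cluster
  cp /root/config.db /var/lib/pve-cluster/config.db
  chown root:root /var/lib/pve-cluster/config.db
  chmod 0600 /var/lib/pve-cluster/config.db
  # adapt /etc/hostname and /etc/hosts to match the lost host, then
  reboot

could be a follow-up patch, no need to block this one on it.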