From mboxrd@z Thu Jan  1 00:00:00 1970
From: Dylan Whyte
To: pve-devel@lists.proxmox.com
Date: Tue, 14 Sep 2021 18:14:33 +0200
Message-Id: <20210914161434.176937-1-d.whyte@proxmox.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [pve-devel] [PATCH v2 pve-docs 1/2] pmxcfs: language and style fixup
List-Id: Proxmox VE development discussion

minor language fixup

replace usage of 'Proxmox VE' with '{pve}'

Signed-off-by: Dylan Whyte
---

Thanks for the feedback @lorenz!

changes v2:
- Refer to offline nodes as 'offline', rather than 'dead'

 pmxcfs.adoc | 68 ++++++++++++++++++++++++++---------------------------
 1 file changed, 33 insertions(+), 35 deletions(-)

diff --git a/pmxcfs.adoc b/pmxcfs.adoc
index d4579a7..1fdf9cb 100644
--- a/pmxcfs.adoc
+++ b/pmxcfs.adoc
@@ -30,17 +30,17 @@ cluster nodes using `corosync`. We use this to store all PVE related
 configuration files.
 
 Although the file system stores all data inside a persistent database
-on disk, a copy of the data resides in RAM. That imposes restriction
+on disk, a copy of the data resides in RAM. This imposes restrictions
 on the maximum size, which is currently 30MB. This is still enough to
 store the configuration of several thousand virtual machines.
 
 This system provides the following advantages:
 
-* seamless replication of all configuration to all nodes in real time
-* provides strong consistency checks to avoid duplicate VM IDs
-* read-only when a node loses quorum
-* automatic updates of the corosync cluster configuration to all nodes
-* includes a distributed locking mechanism
+* Seamless replication of all configuration to all nodes in real time
+* Provides strong consistency checks to avoid duplicate VM IDs
+* Read-only when a node loses quorum
+* Automatic updates of the corosync cluster configuration to all nodes
+* Includes a distributed locking mechanism
 
 
 POSIX Compatibility
@@ -49,13 +49,13 @@ POSIX Compatibility
 The file system is based on FUSE, so the behavior is POSIX like. But
 some feature are simply not implemented, because we do not need them:
 
-* you can just generate normal files and directories, but no symbolic
+* You can just generate normal files and directories, but no symbolic
   links, ...
 
-* you can't rename non-empty directories (because this makes it easier
+* You can't rename non-empty directories (because this makes it easier
   to guarantee that VMIDs are unique).
 
-* you can't change file permissions (permissions are based on path)
+* You can't change file permissions (permissions are based on paths)
 
 * `O_EXCL` creates were not atomic (like old NFS)
@@ -67,13 +67,11 @@ File Access Rights
 
 All files and directories are owned by user `root` and have group
 `www-data`. Only root has write permissions, but group `www-data` can
-read most files. Files below the following paths:
+read most files. Files below the following paths are only accessible by root:
 
  /etc/pve/priv/
  /etc/pve/nodes/${NAME}/priv/
 
-are only accessible by root.
-
 Technology
 ----------
@@ -157,25 +155,25 @@ And disable verbose syslog messages with:
 Recovery
 --------
 
-If you have major problems with your Proxmox VE host, e.g. hardware
-issues, it could be helpful to just copy the pmxcfs database file
-`/var/lib/pve-cluster/config.db` and move it to a new Proxmox VE
+If you have major problems with your {pve} host, for example hardware
+issues, it could be helpful to copy the pmxcfs database file
+`/var/lib/pve-cluster/config.db`, and move it to a new {pve}
 host. On the new host (with nothing running), you need to stop the
-`pve-cluster` service and replace the `config.db` file (needed permissions
-`0600`). Second, adapt `/etc/hostname` and `/etc/hosts` according to the
-lost Proxmox VE host, then reboot and check. (And don't forget your
-VM/CT data)
+`pve-cluster` service and replace the `config.db` file (required permissions
+`0600`). Following this, adapt `/etc/hostname` and `/etc/hosts` according to the
+lost {pve} host, then reboot and check (and don't forget your
+VM/CT data).
 
 
-Remove Cluster configuration
+Remove Cluster Configuration
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-The recommended way is to reinstall the node after you removed it from
-your cluster. This makes sure that all secret cluster/ssh keys and any
+The recommended way is to reinstall the node after you remove it from
+your cluster. This ensures that all secret cluster/ssh keys and any
 shared configuration data is destroyed.
 
 In some cases, you might prefer to put a node back to local mode without
-reinstall, which is described in
+reinstalling, which is described in
 <>
 
@@ -183,28 +181,28 @@ Recovering/Moving Guests from Failed Nodes
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 For the guest configuration files in `nodes//qemu-server/` (VMs) and
-`nodes//lxc/` (containers), {pve} sees the containing node `` as
+`nodes//lxc/` (containers), {pve} sees the containing node `` as
 the owner of the respective guest. This concept enables the usage of local
 locks instead of expensive cluster-wide locks for preventing concurrent guest
 configuration changes.
 
-As a consequence, if the owning node of a guest fails (e.g., because of a power
-outage, fencing event, ..), a regular migration is not possible (even if all
-the disks are located on shared storage) because such a local lock on the
-(dead) owning node is unobtainable. This is not a problem for HA-managed
+As a consequence, if the owning node of a guest fails (for example, due to a power
+outage, fencing event, etc.), a regular migration is not possible (even if all
+the disks are located on shared storage), because such a local lock on the
+(offline) owning node is unobtainable. This is not a problem for HA-managed
 guests, as {pve}'s High Availability stack includes the necessary
 (cluster-wide) locking and watchdog functionality to ensure correct and
 automatic recovery of guests from fenced nodes.
 
 If a non-HA-managed guest has only shared disks (and no other local resources
-which are only available on the failed node are configured), a manual recovery
+which are only available on the failed node), a manual recovery
 is possible by simply moving the guest configuration file from the failed
-node's directory in `/etc/pve/` to an alive node's directory (which changes the
+node's directory in `/etc/pve/` to an online node's directory (which changes the
 logical owner or location of the guest).
 
-For example, recovering the VM with ID `100` from a dead `node1` to another
-node `node2` works with the following command executed when logged in as root
-on any member node of the cluster:
+For example, recovering the VM with ID `100` from an offline `node1` to another
+node `node2` works by running the following command as root on any member node
+of the cluster:
 
  mv /etc/pve/nodes/node1/qemu-server/100.conf /etc/pve/nodes/node2/
 
@@ -213,8 +211,8 @@ that the failed source node is really powered off/fenced. Otherwise {pve}'s
 locking principles are violated by the `mv` command, which can have unexpected
 consequences.
 
-WARNING: Guest with local disks (or other local resources which are only
-available on the dead node) are not recoverable like this. Either wait for the
+WARNING: Guests with local disks (or other local resources which are only
+available on the offline node) are not recoverable like this. Either wait for the
 failed node to rejoin the cluster or restore such guests from backups.
 
 ifdef::manvolnum[]
-- 
2.30.2
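
For illustration, one possible way to carry out the recovery steps described in
the Recovery hunk above, as a rough sketch run on the new host. The patch text
itself does not prescribe exact commands; the source file name
`/root/config.db.recovered` and the assumption that the salvaged database was
already copied over are placeholders, while the service name, database path and
the `0600` permissions come from the text:

  # stop pmxcfs so the database file can be swapped out safely
  systemctl stop pve-cluster

  # put the salvaged database in place with the expected owner and permissions
  cp /root/config.db.recovered /var/lib/pve-cluster/config.db
  chown root:root /var/lib/pve-cluster/config.db
  chmod 0600 /var/lib/pve-cluster/config.db

  # adapt /etc/hostname and /etc/hosts to match the lost host, then reboot
  # and check that /etc/pve is populated again (guest disk data is not part
  # of config.db and has to be restored separately)
  reboot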
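
The same manual recovery pattern applies to containers, whose configuration
files live in the owning node's `lxc/` directory as noted in the hunk above.
A sketch, reusing `node1`/`node2` from the example in the patch; the container
ID `101`, the quorum check and the explicit `lxc/` target directory are
illustrative choices, not taken from the patch:

  # confirm the remaining nodes are quorate and the failed node is really
  # powered off/fenced before touching /etc/pve
  pvecm status

  # move the container configuration from the offline node1 to the online node2
  mv /etc/pve/nodes/node1/lxc/101.conf /etc/pve/nodes/node2/lxc/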