From: Lorenz Stechauner <l.stechauner@proxmox.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>,
	Dylan Whyte <d.whyte@proxmox.com>
Subject: Re: [pve-devel] [PATCH pve-docs 1/2] pmxcfs: language and style fixup
Date: Tue, 14 Sep 2021 09:48:17 +0200	[thread overview]
Message-ID: <00b3106f-bbeb-66c2-3bb3-2440dfee7a64@proxmox.com> (raw)
In-Reply-To: <20210913160036.148321-1-d.whyte@proxmox.com>

Patch looks good to me in general. See inline for notes.


On 13.09.21 18:00, Dylan Whyte wrote:
> minor language fixup
> replace usage of 'Proxmox VE' with '{pve}'
>
> Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
> ---
>   pmxcfs.adoc | 62 ++++++++++++++++++++++++++---------------------------
>   1 file changed, 30 insertions(+), 32 deletions(-)
>
> diff --git a/pmxcfs.adoc b/pmxcfs.adoc
> index d4579a7..c0327a2 100644
> --- a/pmxcfs.adoc
> +++ b/pmxcfs.adoc
> @@ -30,17 +30,17 @@ cluster nodes using `corosync`. We use this to store all PVE related
>   configuration files.
>   
>   Although the file system stores all data inside a persistent database
> -on disk, a copy of the data resides in RAM. That imposes restriction
> +on disk, a copy of the data resides in RAM. This imposes restrictions
>   on the maximum size, which is currently 30MB. This is still enough to
>   store the configuration of several thousand virtual machines.
>   
>   This system provides the following advantages:
>   
> -* seamless replication of all configuration to all nodes in real time
> -* provides strong consistency checks to avoid duplicate VM IDs
> -* read-only when a node loses quorum
> -* automatic updates of the corosync cluster configuration to all nodes
> -* includes a distributed locking mechanism
> +* Seamless replication of all configuration to all nodes in real time
> +* Provides strong consistency checks to avoid duplicate VM IDs
> +* Read-only when a node loses quorum
> +* Automatic updates of the corosync cluster configuration to all nodes
> +* Includes a distributed locking mechanism
>   
>   
>   POSIX Compatibility
> @@ -49,13 +49,13 @@ POSIX Compatibility
>   The file system is based on FUSE, so the behavior is POSIX like. But
>   some feature are simply not implemented, because we do not need them:
>   
> -* you can just generate normal files and directories, but no symbolic
> +* You can just generate normal files and directories, but no symbolic
>     links, ...
>   
> -* you can't rename non-empty directories (because this makes it easier
> +* You can't rename non-empty directories (because this makes it easier
>     to guarantee that VMIDs are unique).
>   
> -* you can't change file permissions (permissions are based on path)
> +* You can't change file permissions (permissions are based on paths)
>   
>   * `O_EXCL` creates were not atomic (like old NFS)
>   
> @@ -67,13 +67,11 @@ File Access Rights
>   
>   All files and directories are owned by user `root` and have group
>   `www-data`. Only root has write permissions, but group `www-data` can
> -read most files. Files below the following paths:
> +read most files. Files below the following paths are only accessible by root:
>   
>    /etc/pve/priv/
>    /etc/pve/nodes/${NAME}/priv/
>   
> -are only accessible by root.
> -
>   
>   Technology
>   ----------
> @@ -157,25 +155,25 @@ And disable verbose syslog messages with:
>   Recovery
>   --------
>   
> -If you have major problems with your Proxmox VE host, e.g. hardware
> -issues, it could be helpful to just copy the pmxcfs database file
> -`/var/lib/pve-cluster/config.db` and move it to a new Proxmox VE
> +If you have major problems with your {pve} host, for example hardware
> +issues, it could be helpful to copy the pmxcfs database file
> +`/var/lib/pve-cluster/config.db`, and move it to a new {pve}
>   host. On the new host (with nothing running), you need to stop the
> -`pve-cluster` service and replace the `config.db` file (needed permissions
> -`0600`). Second, adapt `/etc/hostname` and `/etc/hosts` according to the
> -lost Proxmox VE host, then reboot and check. (And don't forget your
> -VM/CT data)
> +`pve-cluster` service and replace the `config.db` file (required permissions
> +`0600`). Following this, adapt `/etc/hostname` and `/etc/hosts` according to the
> +lost {pve} host, then reboot and check (and don't forget your
> +VM/CT data).
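maybe worth spelling out the actual commands here at some point? a rough
sketch of how I read the flow (the /root/config.db source path is just
for illustration):

 # on the new host, stop the cluster file system first
 systemctl stop pve-cluster
 # put the copied database in place and fix up the permissions
 cp /root/config.db /var/lib/pve-cluster/config.db
 chmod 0600 /var/lib/pve-cluster/config.db
 systemctl start pve-cluster

not a blocker for this patch, just an idea for a follow-up.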
>   
>   
> -Remove Cluster configuration
> +Remove Cluster Configuration
>   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>   
> -The recommended way is to reinstall the node after you removed it from
> -your cluster. This makes sure that all secret cluster/ssh keys and any
> +The recommended way is to reinstall the node after you remove it from
> +your cluster. This ensures that all secret cluster/ssh keys and any
>   shared configuration data is destroyed.
>   
>   In some cases, you might prefer to put a node back to local mode without
> -reinstall, which is described in
> +reinstalling, which is described in
>   <<pvecm_separate_node_without_reinstall,Separate A Node Without Reinstalling>>
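since we only link to the separate-node section: also not needed for this
patch, but the local-mode part there boils down to roughly the following,
if I remember the linked section correctly (please double-check against
it before relying on this):

 systemctl stop pve-cluster corosync
 pmxcfs -l                  # start pmxcfs in local mode
 rm /etc/pve/corosync.conf
 rm -r /etc/corosync/*
 killall pmxcfs
 systemctl start pve-cluster

keeping just the xref is fine too, of course.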
>   
>   
> @@ -183,28 +181,28 @@ Recovering/Moving Guests from Failed Nodes
>   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>   
>   For the guest configuration files in `nodes/<NAME>/qemu-server/` (VMs) and
> -`nodes/<NAME>/lxc/` (containers), {pve} sees the containing node `<NAME>` as
> +`nodes/<NAME>/lxc/` (containers), {pve} sees the containing node `<NAME>` as the
>   owner of the respective guest. This concept enables the usage of local locks
>   instead of expensive cluster-wide locks for preventing concurrent guest
>   configuration changes.
>   
> -As a consequence, if the owning node of a guest fails (e.g., because of a power
> -outage, fencing event, ..), a regular migration is not possible (even if all
> -the disks are located on shared storage) because such a local lock on the
> +As a consequence, if the owning node of a guest fails (for example, due to a power
> +outage, fencing event, etc.), a regular migration is not possible (even if all
> +the disks are located on shared storage), because such a local lock on the
>   (dead) owning node is unobtainable. This is not a problem for HA-managed
>   guests, as {pve}'s High Availability stack includes the necessary
>   (cluster-wide) locking and watchdog functionality to ensure correct and
>   automatic recovery of guests from fenced nodes.
>   
>   If a non-HA-managed guest has only shared disks (and no other local resources
> -which are only available on the failed node are configured), a manual recovery
> +which are only available on the failed node), a manual recovery
>   is possible by simply moving the guest configuration file from the failed
> -node's directory in `/etc/pve/` to an alive node's directory (which changes the
> +node's directory in `/etc/pve/` to an online node's directory (which changes the
>   logical owner or location of the guest).
>   
>   For example, recovering the VM with ID `100` from a dead `node1` to another
> -node `node2` works with the following command executed when logged in as root
> -on any member node of the cluster:
> +node `node2` works by running the following command as root on any member node
> +of the cluster:
>   
>    mv /etc/pve/nodes/node1/qemu-server/100.conf /etc/pve/nodes/node2/
>   
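maybe also worth noting that the container case works analogously, e.g.
something like:

 mv /etc/pve/nodes/node1/lxc/100.conf /etc/pve/nodes/node2/lxc/

(with the same caveat below about the source node really being powered
off/fenced first)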
> @@ -213,7 +211,7 @@ that the failed source node is really powered off/fenced. Otherwise {pve}'s
>   locking principles are violated by the `mv` command, which can have unexpected
>   consequences.
>   
> -WARNING: Guest with local disks (or other local resources which are only
> +WARNING: Guests with local disks (or other local resources which are only
>   available on the dead node) are not recoverable like this. Either wait for the
maybe write "offline" instead of "dead" here? (to match the alive -> online change above)
>   failed node to rejoin the cluster or restore such guests from backups.
>   



