From: Manuel Federanko <m.federanko@proxmox.com>
To: pdm-devel@lists.proxmox.com
Date: Thu, 22 Jan 2026 12:21:05 +0100
Message-ID: <20260122112106.118611-4-m.federanko@proxmox.com>
In-Reply-To: <20260122112106.118611-1-m.federanko@proxmox.com>
References: <20260122112106.118611-1-m.federanko@proxmox.com>
X-Mailer: git-send-email 2.47.3
Subject: [pdm-devel] [PATCH datacenter-manager 3/3] docs: make commands, file paths and config options inline code blocks.

Signed-off-by: Manuel Federanko <m.federanko@proxmox.com>
---
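A note for reviewers, not part of the change itself: in reStructuredText,
a single-backtick span is interpreted text, rendered through the default
role (plain docutils turns it into a <cite> element, usually shown in
italics), while a double-backtick span is an inline literal rendered in
monospace, which is the correct markup for commands, paths and option
names. A quick way to see the difference from a shell, assuming docutils
is available (package python3-docutils on Debian) -- the sample sentence
is made up purely for illustration:

  $ echo 'run `zfs` vs run ``zfs``' | rst2html | grep -o '<p>.*</p>'
  <p>run <cite>zfs</cite> vs run <tt class="docutils literal">zfs</tt></p>

The exact tags depend on the docutils version and writer, and Sphinx may
render the default role differently depending on configuration, but the
literal vs. non-literal distinction is the same.
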
 docs/local-zfs.rst | 56 ++++++++++++++++++++++++----------------------
 1 file changed, 28 insertions(+), 28 deletions(-)

diff --git a/docs/local-zfs.rst b/docs/local-zfs.rst
index 6b7a2f2..cd702bb 100644
--- a/docs/local-zfs.rst
+++ b/docs/local-zfs.rst
@@ -46,7 +46,7 @@ ZFS Administration
 ~~~~~~~~~~~~~~~~~~
 
 This section gives you some usage examples for common tasks. ZFS itself is really powerful and
-provides many options. The main commands to manage ZFS are `zfs` and `zpool`. Both commands come
+provides many options. The main commands to manage ZFS are ``zfs`` and ``zpool``. Both commands come
 with extensive manual pages, which can be read with:
 
 .. code-block:: console
@@ -57,8 +57,8 @@
 Create a new zpool
 ^^^^^^^^^^^^^^^^^^
 
-To create a new pool, at least one disk is needed. The `ashift` should have the same sector-size (2
-power of `ashift`) or larger as the underlying disk.
+To create a new pool, at least one disk is needed. The ``ashift`` should have the same sector-size (2
+power of ``ashift``) or larger as the underlying disk.
 
 .. code-block:: console
 
@@ -115,7 +115,7 @@ Create a new pool with cache (L2ARC)
 It is possible to use a dedicated cache drive partition to increase the read performance (use
 SSDs).
 
-For `<device>`, you can use multiple devices, as is shown in
+For ``<device>``, you can use multiple devices, as is shown in
 "Create a new pool with RAID*".
 
 .. code-block:: console
@@ -128,7 +128,7 @@ Create a new pool with log (ZIL)
 It is possible to use a dedicated cache drive partition to increase the write performance (use
 SSDs).
 
-For `<device>`, you can use multiple devices, as is shown in "Create a new pool with RAID*".
+For ``<device>``, you can use multiple devices, as is shown in "Create a new pool with RAID*".
 
 .. code-block:: console
 
@@ -138,8 +138,8 @@ Add cache and log to an existing pool
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 You can add cache and log devices to a pool after its creation. In this example, we will use a
-single drive for both cache and log. First, you need to create 2 partitions on the SSD with `parted`
-or `gdisk`
+single drive for both cache and log. First, you need to create 2 partitions on the SSD with ``parted``
+or ``gdisk``
 
 .. important:: Always use GPT partition tables.
 
@@ -162,8 +162,8 @@ Changing a failed device
 Changing a failed bootable device
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-Depending on how `Proxmox Datacenter Manager`_ was installed, it is either using `grub` or
-`systemd-boot` as a bootloader.
+Depending on how `Proxmox Datacenter Manager`_ was installed, it is either using ``grub`` or
+``systemd-boot`` as a bootloader.
 
 In either case, the first steps of copying the partition table, reissuing GUIDs and replacing the
 ZFS partition are the same. To make the system bootable from the new disk, different steps are
@@ -178,19 +178,19 @@ needed which depend on the bootloader in use.
 .. NOTE:: Use the `zpool status -v` command to monitor how far the resilvering process of the new
    disk has progressed.
 
-With `systemd-boot`:
+With ``systemd-boot``:
 
 .. code-block:: console
 
   # proxmox-boot-tool format <new disk's ESP>
   # proxmox-boot-tool init <new disk's ESP>
 
-.. NOTE:: `ESP` stands for EFI System Partition, which is setup as partition #2 on bootable disks
+.. NOTE:: ``ESP`` stands for EFI System Partition, which is setup as partition #2 on bootable disks
    by the Proxmox Datacenter Manager installer. For details, see :ref:`Setting up a new partition
    for use as synced ESP <systembooting-proxmox-boot-setup>`.
 
-With `grub`:
+With ``grub``:
 
-Usually `grub.cfg` is located in `/boot/grub/grub.cfg`
+Usually ``grub.cfg`` is located in ``/boot/grub/grub.cfg``
 
 .. code-block:: console
@@ -222,7 +222,7 @@ Limit ZFS memory usage
 
 It is good to use at most 50 percent (which is the default) of the system memory for ZFS ARC, to
 prevent performance degradation of the host. Use your preferred editor to change the configuration
-in `/etc/modprobe.d/zfs.conf` and insert:
+in ``/etc/modprobe.d/zfs.conf`` and insert:
 
 .. code-block:: console
 
@@ -230,10 +230,10 @@ in `/etc/modprobe.d/zfs.conf` and insert:
 
 The above example limits the usage to 8 GiB ('8 * 2^30^').
 
-.. IMPORTANT:: In case your desired `zfs_arc_max` value is lower than or equal to `zfs_arc_min`
-   (which defaults to 1/32 of the system memory), `zfs_arc_max` will be ignored. Thus, for it to
-   work in this case, you must set `zfs_arc_min` to at most `zfs_arc_max - 1`. This would require
-   updating the configuration in `/etc/modprobe.d/zfs.conf`, with:
+.. IMPORTANT:: In case your desired ``zfs_arc_max`` value is lower than or equal to ``zfs_arc_min``
+   (which defaults to 1/32 of the system memory), ``zfs_arc_max`` will be ignored. Thus, for it to
+   work in this case, you must set ``zfs_arc_min`` to at most ``zfs_arc_max - 1``. This would require
+   updating the configuration in ``/etc/modprobe.d/zfs.conf``, with:
 
 .. code-block:: console
 
@@ -241,7 +241,7 @@ The above example limits the usage to 8 GiB ('8 * 2^30^').
   options zfs zfs_arc_max=8589934592
 
 This example setting limits the usage to 8 GiB ('8 * 2^30^') on systems with more than 256 GiB of
-total memory, where simply setting `zfs_arc_max` alone would not work.
+total memory, where simply setting ``zfs_arc_max`` alone would not work.
 
 .. IMPORTANT:: If your root file system is ZFS, you must update your initramfs every time
    this value changes.
@@ -268,7 +268,7 @@ A good value for servers is 10:
 
   # sysctl -w vm.swappiness=10
 
-To make the swappiness persistent, open `/etc/sysctl.conf` with an editor of your choice and add the
+To make the swappiness persistent, open ``/etc/sysctl.conf`` with an editor of your choice and add the
 following line:
 
 .. code-block:: console
@@ -297,8 +297,8 @@ To activate compression:
 
   # zpool set compression=lz4 <pool>
 
-We recommend using the `lz4` algorithm, since it adds very little CPU overhead. Other algorithms
-such as `lzjb`, `zstd` and `gzip-N` (where `N` is an integer from `1-9` representing the compression
+We recommend using the ``lz4`` algorithm, since it adds very little CPU overhead. Other algorithms
+such as ``lzjb``, ``zstd`` and ``gzip-N`` (where ``N`` is an integer from ``1-9`` representing the compression
 ratio, where 1 is fastest and 9 is best compression) are also available. Depending on the algorithm
 and how compressible the data is, having compression enabled can even increase I/O performance.
 
@@ -341,16 +341,16 @@ Adding a `special` device to an existing pool with RAID-1:
 
   # zpool add <pool> special mirror <device1> <device2>
 
-ZFS datasets expose the `special_small_blocks=<size>` property. `size` can be `0` to disable storing
-small file blocks on the `special` device, or a power of two in the range between `512B` to `128K`.
+ZFS datasets expose the ``special_small_blocks=<size>`` property. ``size`` can be ``0`` to disable storing
+small file blocks on the `special` device, or a power of two in the range between ``512B`` to ``128K``.
 After setting this property, new file blocks smaller than `size` will be allocated on the `special`
 device.
 
-.. IMPORTANT:: If the value for `special_small_blocks` is greater than or equal to the `recordsize`
-   (default `128K`) of the dataset, *all* data will be written to the `special` device, so be
+.. IMPORTANT:: If the value for ``special_small_blocks`` is greater than or equal to the ``recordsize``
+   (default ``128K``) of the dataset, *all* data will be written to the `special` device, so be
    careful!
 
-Setting the `special_small_blocks` property on a pool will change the default value of that property
+Setting the ``special_small_blocks`` property on a pool will change the default value of that property
 for all child ZFS datasets (for example, all containers in the pool will opt in for small file
 blocks).
 
@@ -398,5 +398,5 @@ then, update the `initramfs` by running:
 
 and finally, reboot the node.
 
-Another workaround to this problem is enabling the `zfs-import-scan.service`, which searches and
+Another workaround to this problem is enabling the ``zfs-import-scan.service``, which searches and
 imports pools via device scanning (usually slower).
-- 
2.47.3