From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: from firstgate.proxmox.com (firstgate.proxmox.com [IPv6:2a01:7e0:0:424::9]) by lore.proxmox.com (Postfix) with ESMTPS id 245321FF168 for ; Tue, 26 Nov 2024 12:44:01 +0100 (CET)
Received: from firstgate.proxmox.com (localhost [127.0.0.1]) by firstgate.proxmox.com (Proxmox) with ESMTP id 6EB782EBEE; Tue, 26 Nov 2024 12:44:01 +0100 (CET)
From: Hannes Laimer 
To: pbs-devel@lists.proxmox.com
Date: Tue, 26 Nov 2024 12:43:22 +0100
Message-Id: <20241126114323.105838-5-h.laimer@proxmox.com>
X-Mailer: git-send-email 2.39.5
In-Reply-To: <20241126114323.105838-1-h.laimer@proxmox.com>
References: <20241126114323.105838-1-h.laimer@proxmox.com>
MIME-Version: 1.0
Subject: [pbs-devel] [PATCH proxmox-backup 4/5] docs: add information for removable datastores
X-BeenThere: pbs-devel@lists.proxmox.com
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Proxmox Backup Server development discussion
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
Reply-To: Proxmox Backup Server development discussion
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Errors-To: pbs-devel-bounces@lists.proxmox.com
Sender: "pbs-devel"

Specifically about jobs and how they behave when the datastore is not
mounted, how to create and use devices with multiple datastores on
multiple PBS instances, and options for handling failed unmounts.

Signed-off-by: Hannes Laimer 
---
 docs/storage.rst | 31 ++++++++++++++++++++++++++-----
 1 file changed, 26 insertions(+), 5 deletions(-)

diff --git a/docs/storage.rst b/docs/storage.rst
index 361af4420..5cd8704c4 100644
--- a/docs/storage.rst
+++ b/docs/storage.rst
@@ -176,16 +176,32 @@ datastores, should be either ``ext4`` or ``xfs``.
 It is also possible to create them on completely unused disks through "Administration" >
 "Disks / Storage" > "Directory", using this method the disk will be partitioned and
 formatted automatically for the datastore.
 
-Devices with only one datastore on them will be mounted automatically. It is possible to create a
-removable datastore on one PBS and use it on multiple instances, the device just has to be added
-on each instance as a removable datastore by checking "reuse datastore" on creation.
-If the device already contains a datastore at the specified path it'll just be added as
-a new datastore to the PBS instance and will be mounted whenever plugged in. Unmounting has
+Devices with only one datastore on them will be mounted automatically. Unmounting has
 to be done through the UI by clicking "Unmount" on the summary page or using the CLI.
+If unmounting fails, the reason is logged in the unmount task, and the datastore will stay
+in maintenance mode ``unmounting``, which prevents any IO operations. If that happens, the
+maintenance mode has to be reset manually using:
+
+.. code-block:: console
+
+  # proxmox-backup-manager datastore update store1 --maintenance-mode offline
+
+to prevent any IO, or to clear it use:
+
+.. code-block:: console
+
+  # proxmox-backup-manager datastore update store1 --delete maintenance-mode
+
 A single device can house multiple datastores, the only limitation is that they are not allowed to be nested.
 
+Removable datastores are created on the device at the relative path specified on creation.
+In order to use a datastore on multiple PBS instances, it has to be created on one and added
+with ``Reuse existing datastore`` checked on the others. The path set on creation is how
+multiple datastores on a single device are identified, so when adding the datastore on a new
+PBS instance, the path has to match what was set on creation.
+
 .. code-block:: console
 
   # proxmox-backup-manager datastore unmount store1
@@ -202,6 +218,11 @@ All datastores present on a device can be listed using ``proxmox-backup-debug``.
 
   # proxmox-backup-debug inspect device /dev/...
 
+Verify jobs are skipped if the removable datastore is not mounted when they are scheduled.
+Sync jobs start, but fail with an error saying the datastore was not mounted. The reason is
+that syncs not happening as scheduled should at least be noticeable. GC and pruning, like
+verification, are skipped without a failed task if the datastore is not mounted.
+
 Managing Datastores
 ^^^^^^^^^^^^^^^^^^^
-- 
2.39.5


_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
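The multi-instance workflow described in the patch would look roughly like the following sketch. The option names ``--backing-device`` and ``--reuse-datastore``, as well as the values ``store1``, ``backup-dir`` and the partition UUID, are assumptions for illustration and may differ from the actual CLI introduced by this series:

.. code-block:: console

  # proxmox-backup-manager datastore create store1 backup-dir --backing-device "<partition-uuid>"

On every additional PBS instance, the same name and relative path would be used together with the reuse option, so the datastore already present on the device is added instead of a new one being initialized:

.. code-block:: console

  # proxmox-backup-manager datastore create store1 backup-dir --backing-device "<partition-uuid>" --reuse-datastore true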