From: Frank Thommen
To: Proxmox VE user list
Date: Sat, 7 Sep 2024 21:04:09 +0200
Subject: [PVE-User] "nearfull" status in PVE Dashboard not consistent

Dear all,

I am currently in the process of adding SSDs for DB/WAL to our
"converged" 3-node Ceph cluster. After having done so on two of the
three nodes, the PVE Ceph dashboard now reports "5 pool(s) nearfull":

  HEALTH_WARN: 5 pool(s) nearfull
  pool 'pve-pool1' is nearfull
  pool 'cephfs_data' is nearfull
  pool 'cephfs_metadata' is nearfull
  pool '.mgr' is nearfull
  pool '.rgw.root' is nearfull

(see also the attached Ceph_dashboard_nearfull_warning.jpg). The
storage in general is 73% full ("40.87 TiB of 55.67 TiB").

However, when looking at the pool overview in PVE, the pools don't
seem to be very full at all. Some of them are even reported as being
completely empty (see the attached Ceph_pool_overview.jpg).

Please note: all Ceph manipulations have been done from the PVE UI, as
we are not very experienced with the Ceph CLI. We are running PVE
8.2.3, and Ceph runs on version 17.2.7.

Is this inconsistency normal, or is it a problem? And if the latter,
(how) can it be fixed?

Cheers,
Frank
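
P.S. In case it helps with the diagnosis: from what I understand, the
per-pool "nearfull" flag is driven by per-OSD fullness rather than by
how much data the pool itself holds, which might explain why nearly
empty pools are flagged. The following read-only commands, taken from
the Ceph documentation, should show the relevant numbers; this is only
a sketch, and the exact output may differ on our Quincy (17.2.7)
cluster:

  # full health message, listing what triggers the warning
  ceph health detail

  # raw and per-pool usage as Ceph itself accounts it
  ceph df detail

  # per-OSD utilisation; if any single OSD exceeds the nearfull
  # ratio (default 0.85), pools mapped to it are flagged nearfull
  ceph osd df tree

  # the currently configured full/backfillfull/nearfull ratios
  ceph osd dump | grep ratio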