Date: Wed, 14 May 2025 11:06:57 +0200 (CEST)
From: Fabian Grünbichler
To: Proxmox VE development discussion, Fiona Ebner
Message-ID: <451129351.14846.1747213617524@webmail.proxmox.com>
In-Reply-To: <9248a2f1-64be-4c7d-85c8-2cc31dde7133@proxmox.com>
Subject: Re: [pve-devel] [PATCH storage 2/2] rbd plugin: status: explain why percentage value can be different from Ceph

> Fiona Ebner wrote on 14.05.2025 at 10:22 CEST:
>
> On 13.05.25 at 15:31, Fiona Ebner wrote:
> > Signed-off-by: Fiona Ebner
> > ---
> >  src/PVE/Storage/RBDPlugin.pm | 6 ++++++
> >  1 file changed, 6 insertions(+)
> >
> > diff --git a/src/PVE/Storage/RBDPlugin.pm b/src/PVE/Storage/RBDPlugin.pm
> > index 154fa00..b56f8e4 100644
> > --- a/src/PVE/Storage/RBDPlugin.pm
> > +++ b/src/PVE/Storage/RBDPlugin.pm
> > @@ -703,6 +703,12 @@ sub status {
> >
> >      # max_avail -> max available space for data w/o replication in the pool
> >      # stored -> amount of user data w/o replication in the pool
> > +    # NOTE These values are used because they are most natural from a user perspective.
> > +    # However, the %USED/percent_used value in Ceph is calculated from values before factoring out
> > +    # replication, namely 'bytes_used / (bytes_used + avail_raw)'. In certain setups, e.g. with LZ4
> > +    # compression, this percentage can be noticeably different from the percentage
> > +    # 'stored / (stored + max_avail)' shown in the Proxmox VE CLI/UI. See also src/mon/PGMap.cc from
> > +    # the Ceph source code, which also mentions that 'stored' is an approximation.
> >      my $free = $d->{stats}->{max_avail};
> >      my $used = $d->{stats}->{stored};
> >      my $total = $used + $free;
>
> Thinking about this again, I don't think continuing to use 'stored' is
> best after all, because that is before compression. And this is where
> the mismatch really comes from AFAICT. For highly compressible data, the
> mismatch between actual usage on the storage and 'stored' can be very
> big (in a quick test using the 'yes' command to fill an RBD image, I got
> stored = 2 * (used / replication_count)). And here in the storage stats
> we are interested in the usage on the storage, not the actual amount of
> data written by the user. For ZFS we also don't use 'logicalused', but
> 'used'.

but for ZFS, we actually use the "logical" view provided by `zfs list/get`,
not the "physical" view provided by `zpool list/get` (and even the latter
would already account for redundancy).

e.g., with a test pool consisting of three mirrored vdevs of size 1G, with
a single dataset filled with a file with 512MB of random data:

$ zpool list -v testpool
NAME                 SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
testpool             960M   513M   447M        -         -    42%    53%  1.00x    ONLINE  -
  mirror-0           960M   513M   447M        -         -    42%  53.4%      -    ONLINE
    /tmp/vdev1.img     1G      -      -        -         -      -      -      -    ONLINE
    /tmp/vdev2.img     1G      -      -        -         -      -      -      -    ONLINE
    /tmp/vdev3.img     1G      -      -        -         -      -      -      -    ONLINE

and what we use for the storage status:

$ zfs get available,used testpool/data
NAME           PROPERTY   VALUE  SOURCE
testpool/data  available  319M   -
testpool/data  used       512M   -

if we switch away from `stored`, we'd have to account for replication
ourselves to match that, right? and we don't have that information readily
available (and also no idea how to handle EC pools?)? wouldn't we just
exchange one wrong set of numbers for another (differently) wrong set of
numbers?

FWIW, we already provide raw numbers in the pool view, and could maybe
expand that view to provide more details?

e.g., for my test rbd pool the pool view shows 50.29% used amounting to
163.43 GiB, whereas the storage status says 51.38% used amounting to
61.11 GB of 118.94 GB. with the default 3/2 replication, `ceph df detail`
says:

{
    "name": "rbd",
    "id": 2,
    "stats": {
        "stored": 61108710142,               => /1000/1000/1000 == storage used
        "stored_data": 61108699136,
        "stored_omap": 11006,
        "objects": 15579,
        "kb_used": 171373017,
        "bytes_used": 175485968635,          => /1024/1024/1024 == pool used
        "data_bytes_used": 175485935616,
        "omap_bytes_used": 33019,
        "percent_used": 0.5028545260429382,  => rounded, this is the pool view percentage
        "max_avail": 57831211008,            => (this + stored)/1000/1000/1000 == storage total
        "quota_objects": 0,
        "quota_bytes": 0,
        "dirty": 0,
        "rd": 253354,
        "rd_bytes": 38036885504,
        "wr": 75833,
        "wr_bytes": 33857918976,
        "compress_bytes_used": 0,
        "compress_under_bytes": 0,
        "stored_raw": 183326130176,
        "avail_raw": 173493638191
    }
},
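as a quick cross-check, a minimal Perl sketch (not part of the patch; the
field names and values are copied verbatim from the `ceph df detail` output
above) that reproduces both percentages:

#!/usr/bin/perl
use strict;
use warnings;

# values copied from the `ceph df detail` output above
my $stats = {
    stored     => 61108710142,   # user data w/o replication
    max_avail  => 57831211008,   # available space w/o replication
    bytes_used => 175485968635,  # raw usage, after replication
    avail_raw  => 173493638191,  # raw available space
};

# storage status percentage: replication already factored out
my $pve = $stats->{stored} / ($stats->{stored} + $stats->{max_avail});

# Ceph's percent_used: computed from raw values
my $ceph = $stats->{bytes_used} / ($stats->{bytes_used} + $stats->{avail_raw});

printf "storage status: %.2f%% (%.2f GB of %.2f GB)\n",
    100 * $pve, $stats->{stored} / 1e9,
    ($stats->{stored} + $stats->{max_avail}) / 1e9;
printf "pool view:      %.2f%% (%.2f GiB used)\n",
    100 * $ceph, $stats->{bytes_used} / 1024**3;

this prints 51.38% (61.11 GB of 118.94 GB) and 50.29% (163.43 GiB used),
matching the storage status and the pool view respectively.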
> From src/osd/osd_types.h:
>
> > int64_t data_stored = 0;                ///< Bytes actually stored by the user
> > int64_t data_compressed = 0;            ///< Bytes stored after compression
> > int64_t data_compressed_allocated = 0;  ///< Bytes allocated for compressed data
> > int64_t data_compressed_original = 0;   ///< Bytes that were compressed
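to illustrate the divergence the NOTE in the patch describes, a minimal
sketch with made-up round numbers (assuming, simplified, a 3-way replicated
pool where compression halves the data and max_avail is roughly avail_raw
divided by the replication factor):

#!/usr/bin/perl
use strict;
use warnings;

# hypothetical round numbers, all in GiB
my $replication = 3;
my $stored      = 50;                          # user data, before compression
my $compressed  = 25;                          # after compression, per replica
my $avail_raw   = 225;                         # raw space left on the OSDs

my $bytes_used = $compressed * $replication;   # 75 GiB raw usage
my $max_avail  = $avail_raw / $replication;    # ~75 GiB (simplification)

# Ceph's %USED works on raw values, so compression is already reflected
my $ceph = $bytes_used / ($bytes_used + $avail_raw);  # 0.25
# the storage status uses 'stored', which is before compression
my $pve  = $stored / ($stored + $max_avail);          # 0.40

printf "Ceph %%USED: %.0f%%, storage status: %.0f%%\n", 100 * $ceph, 100 * $pve;

so the more compressible the data, the further the two numbers drift apart,
matching the 'yes'-command observation quoted above.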