From: Marco Gaiarin <gaio@lilliput.linux.it>
To: Matthieu Dreistadt via pve-user <pve-user@lists.proxmox.com>
Cc: pve-user@lists.proxmox.com
Subject: Re: [PVE-User] Analysis of free space...
Date: Tue, 30 Sep 2025 18:55:35 +0200 [thread overview]
Message-ID: <5letql-mot2.ln1@leia.lilliput.linux.it> (raw)
In-Reply-To: <mailman.453.1758989246.390.pve-user@lists.proxmox.com>; from SmartGate on Tue, Sep 30, 2025 at 22:06:01PM +0200
Hello! Matthieu Dreistadt via pve-user
wrote, on that day...
> you can check "zfs list -o space", which will give you a more detailed
> view of what is using the space:
[...]
> Used = overall used
> Usedsnap = Used by Snapshots
> Usedds = Used by the dataset itself (not counting snapshots, only live data)
> Usedchild = Used by datasets/zvols further down in the same path (in my
> example, rpool has the same amount of Used and Usedchild space, since
> there is nothing directly inside of rpool itself)
Thanks for the hint. Anyway:
root@lamprologus:~# zfs list -o space | grep ^rpool-data/
rpool-data/vm-100-disk-0    11.6T  1.07T  0B  1.07T  0B  0B
rpool-data/vm-100-disk-1    11.6T  1.81T  0B  1.81T  0B  0B
rpool-data/vm-100-disk-10   11.6T  1.42T  0B  1.42T  0B  0B
rpool-data/vm-100-disk-11   11.6T  1.86T  0B  1.86T  0B  0B
rpool-data/vm-100-disk-12   11.6T  1.64T  0B  1.64T  0B  0B
rpool-data/vm-100-disk-13   11.6T  2.23T  0B  2.23T  0B  0B
rpool-data/vm-100-disk-14   11.6T  1.96T  0B  1.96T  0B  0B
rpool-data/vm-100-disk-15   11.6T  1.83T  0B  1.83T  0B  0B
rpool-data/vm-100-disk-16   11.6T  1.89T  0B  1.89T  0B  0B
rpool-data/vm-100-disk-17   11.6T  2.05T  0B  2.05T  0B  0B
rpool-data/vm-100-disk-18   11.6T  3.39T  0B  3.39T  0B  0B
rpool-data/vm-100-disk-19   11.6T  3.40T  0B  3.40T  0B  0B
rpool-data/vm-100-disk-2    11.6T  1.31T  0B  1.31T  0B  0B
rpool-data/vm-100-disk-20   11.6T  3.36T  0B  3.36T  0B  0B
rpool-data/vm-100-disk-21   11.6T  2.50T  0B  2.50T  0B  0B
rpool-data/vm-100-disk-22   11.6T  3.22T  0B  3.22T  0B  0B
rpool-data/vm-100-disk-23   11.6T  2.73T  0B  2.73T  0B  0B
rpool-data/vm-100-disk-24   11.6T  2.53T  0B  2.53T  0B  0B
rpool-data/vm-100-disk-3    11.6T   213K  0B   213K  0B  0B
rpool-data/vm-100-disk-4    11.6T   213K  0B   213K  0B  0B
rpool-data/vm-100-disk-5    11.6T  1.48T  0B  1.48T  0B  0B
rpool-data/vm-100-disk-6    11.6T  1.35T  0B  1.35T  0B  0B
rpool-data/vm-100-disk-7    11.6T   930G  0B   930G  0B  0B
rpool-data/vm-100-disk-8    11.6T  1.26T  0B  1.26T  0B  0B
rpool-data/vm-100-disk-9    11.6T  1.30T  0B  1.30T  0B  0B
It seems that I really was able to put 3.40T of real data on a 2T volume...
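A sketch of a follow-up check (assuming the pool layout is not already known
from earlier in the thread) is to compare the configured zvol size with what
it actually consumes on disk, e.g.:

  zfs get volsize,volblocksize,used,referenced,refreservation rpool-data/vm-100-disk-19
  zpool list -o name,size,alloc,free,frag,cap rpool-data

If rpool-data is a RAIDZ pool, USED on a zvol can legitimately exceed its
volsize, because parity and allocation padding (especially with a small
volblocksize) are charged to the dataset; the output above should show
whether that is where the extra space went.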
--
_______________________________________________
pve-user mailing list
pve-user@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user