public inbox for pve-user@lists.proxmox.com
From: Matthieu Dreistadt via pve-user <pve-user@lists.proxmox.com>
To: pve-user@lists.proxmox.com
Cc: Matthieu Dreistadt <matthieu@3-stadt.de>
Subject: Re: [PVE-User] Analysis of free space...
Date: Sat, 27 Sep 2025 18:06:42 +0200
Message-ID: <mailman.453.1758989246.390.pve-user@lists.proxmox.com>
In-Reply-To: <4sidql-6r83.ln1@leia.lilliput.linux.it>

Hi Marco,

you can check "zfs list -o space", which will give you a more detailed 
view of what is using the space:

root@xxx:~# zfs list -o space
NAME                          AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
rpool                          507G   354G        0B    104K             0B       354G
rpool/ROOT                     507G  4.40G        0B     96K             0B      4.40G
rpool/ROOT/pve-1               507G  4.40G     1.05G   3.35G             0B         0B
rpool/data                     507G   312G        0B    112K             0B       312G
rpool/data/subvol-105-disk-0  8.62G  11.4G     49.2M   11.4G             0B         0B
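
If the pool has many datasets, it should also be possible to limit the
output to one branch with the recursive flag, e.g. (using the dataset
from my example above):

root@xxx:~# zfs list -o space -r rpool/data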

Used = overall used space
Usedsnap = used by snapshots
Usedds = used by the dataset itself (only live data, not counting snapshots)
Usedrefreserv = used by a refreservation set on the dataset (would be 
freed if the refreservation were removed)
Usedchild = used by datasets/zvols further down in the same path (in my 
example, rpool has the same amount of Used and Usedchild space, since 
there is nothing stored directly in rpool itself)
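
If you want the breakdown for a single dataset, the same numbers should
also be available as per-dataset properties, e.g. (the subvol name is
just the one from my example output above):

root@xxx:~# zfs get \
  used,usedbysnapshots,usedbydataset,usedbychildren,usedbyrefreservation \
  rpool/data/subvol-105-disk-0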

Cheers,
Matthieu

On 24.09.2025 at 18:29, Marco Gaiarin wrote:
> Hello! Marco Gaiarin
>    On that day it was said...
>
>> Uh, wait... we actually forgot to enable 'discard' on the volumes, and we
>> enabled it afterwards (and rebooted the VM).
>> I'll check refreservation property and report back.
> No, the volumes all seem to have refreservation set to 'none', as expected;
> the current situation is:
>
>   root@lamprologus:~# zfs list | grep ^rpool-data
>   rpool-data                  54.2T  3.84T   171K  /rpool-data
>   rpool-data/vm-100-disk-0    1.11T  3.84T  1.11T  -
>   rpool-data/vm-100-disk-1    2.32T  3.84T  2.32T  -
>   rpool-data/vm-100-disk-10   1.82T  3.84T  1.82T  -
>   rpool-data/vm-100-disk-11   2.03T  3.84T  2.03T  -
>   rpool-data/vm-100-disk-12   1.96T  3.84T  1.96T  -
>   rpool-data/vm-100-disk-13   2.48T  3.84T  2.48T  -
>   rpool-data/vm-100-disk-14   2.21T  3.84T  2.21T  -
>   rpool-data/vm-100-disk-15   2.42T  3.84T  2.42T  -
>   rpool-data/vm-100-disk-16   2.15T  3.84T  2.15T  -
>   rpool-data/vm-100-disk-17   2.14T  3.84T  2.14T  -
>   rpool-data/vm-100-disk-18   3.39T  3.84T  3.39T  -
>   rpool-data/vm-100-disk-19   3.40T  3.84T  3.40T  -
>   rpool-data/vm-100-disk-2    1.32T  3.84T  1.32T  -
>   rpool-data/vm-100-disk-20   3.36T  3.84T  3.36T  -
>   rpool-data/vm-100-disk-21   2.50T  3.84T  2.50T  -
>   rpool-data/vm-100-disk-22   3.22T  3.84T  3.22T  -
>   rpool-data/vm-100-disk-23   2.73T  3.84T  2.73T  -
>   rpool-data/vm-100-disk-24   2.53T  3.84T  2.53T  -
>   rpool-data/vm-100-disk-3     213K  3.84T   213K  -
>   rpool-data/vm-100-disk-4     213K  3.84T   213K  -
>   rpool-data/vm-100-disk-5    2.33T  3.84T  2.33T  -
>   rpool-data/vm-100-disk-6    2.28T  3.84T  2.28T  -
>   rpool-data/vm-100-disk-7    2.13T  3.84T  2.13T  -
>   rpool-data/vm-100-disk-8    2.29T  3.84T  2.29T  -
>   rpool-data/vm-100-disk-9    2.11T  3.84T  2.11T  -
>
> and a random volume (but all are similar):
>
>   root@lamprologus:~# zfs get all rpool-data/vm-100-disk-18 | grep refreservation
>   rpool-data/vm-100-disk-18  refreservation        none                   default
>   rpool-data/vm-100-disk-18  usedbyrefreservation  0B                     -
>
> Another strange thing is that all are 2TB volumes:
>
>   root@lamprologus:~# cat /etc/pve/qemu-server/100.conf | grep vm-100-disk-19
>   scsi20: rpool-data:vm-100-disk-19,backup=0,discard=on,replicate=0,size=2000G
>
> but:
>
>   root@lamprologus:~# zfs list rpool-data/vm-100-disk-19
>   NAME                        USED  AVAIL  REFER  MOUNTPOINT
>   rpool-data/vm-100-disk-19  3.40T  3.84T  3.40T  -
>
> Why is 'USED' 3.40T?
>
>
> Thanks.
>



Thread overview: 13+ messages
2025-09-17 13:27 Marco Gaiarin
2025-09-17 23:45 ` Gilou
2025-09-22 14:14   ` Marco Gaiarin
2025-09-22 14:17   ` Marco Gaiarin
2025-09-24 16:29     ` Marco Gaiarin
2025-09-27 15:42       ` Alwin Antreich via pve-user
2025-09-30 16:26         ` Marco Gaiarin
2025-09-27 16:06       ` Matthieu Dreistadt via pve-user [this message]
2025-09-30 16:55         ` Marco Gaiarin
2025-09-26 14:57     ` Marco Gaiarin
2025-09-18 13:44 ` Alwin Antreich via pve-user
2025-09-22 14:16   ` Marco Gaiarin
2025-09-25  8:26     ` Marco Gaiarin
