public inbox for pve-user@lists.proxmox.com
* [PVE-User] Analysis of free space...
@ 2025-09-17 13:27 Marco Gaiarin
  2025-09-17 23:45 ` Gilou
  2025-09-18 13:44 ` Alwin Antreich via pve-user
  0 siblings, 2 replies; 13+ messages in thread
From: Marco Gaiarin @ 2025-09-17 13:27 UTC (permalink / raw)
  To: pve-user


We have a clustered PVE 8 server with 66TB of disk storage and (for now) a
single VM; all volumes have the 'discard' option enabled, and the VM
(RHEL 8) supports and has trim enabled.

We moved some data around between volumes, and now we have 20TB used in the
VM and 56TB used on the ZFS pool that hosts those volumes.

Of course we ran a manual 'trim' to be sure, but nothing changed (or rather,
some space was reclaimed, but not as much as we expected).


Is there a tool or methodology to find out where the space is stuck, and how?


Thanks.

-- 




* Re: [PVE-User] Analysis of free space...
  2025-09-17 13:27 [PVE-User] Analysis of free space Marco Gaiarin
@ 2025-09-17 23:45 ` Gilou
  2025-09-22 14:14   ` Marco Gaiarin
  2025-09-22 14:17   ` Marco Gaiarin
  2025-09-18 13:44 ` Alwin Antreich via pve-user
  1 sibling, 2 replies; 13+ messages in thread
From: Gilou @ 2025-09-17 23:45 UTC (permalink / raw)
  To: pve-user

On 17/09/2025 at 15:27, Marco Gaiarin wrote:
> 
> We have a clustered PVE 8 server with 66TB of disk storage and (for now) a
> single VM; all volumes have the 'discard' option enabled, and the VM
> (RHEL 8) supports and has trim enabled.
> 
> We moved some data around between volumes, and now we have 20TB used in the
> VM and 56TB used on the ZFS pool that hosts those volumes.
> 
> Of course we ran a manual 'trim' to be sure, but nothing changed (or rather,
> some space was reclaimed, but not as much as we expected).
> 
> 
> Is there a tool or methodology to find out where the space is stuck, and how?
Hi,

What's the underlying storage? Local ZFS?
Does it have thin provisioning / sparse enabled?

You can check this forum post:
https://forum.proxmox.com/threads/zfs-enable-thin-provisioning.41549/

Also, inside the VM, make sure that the whole stack is discard-aware: LVM,
filesystem and so on.
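
For illustration, checks along these lines would cover both sides (the VM ID
and storage section below are only placeholders):

 # on the PVE host: is 'discard' set on the virtual disks, and is the
 # zfspool storage thin-provisioned ('sparse')?
 qm config <vmid> | grep -i discard
 grep -A4 '^zfspool:' /etc/pve/storage.cfg

 # inside the VM: do the virtual disks advertise discard support?
 # (non-zero DISC-GRAN / DISC-MAX means discards can be passed down)
 lsblk --discard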

Regards,
Gilou


* Re: [PVE-User] Analysis of free space...
  2025-09-17 13:27 [PVE-User] Analysis of free space Marco Gaiarin
  2025-09-17 23:45 ` Gilou
@ 2025-09-18 13:44 ` Alwin Antreich via pve-user
  2025-09-22 14:16   ` Marco Gaiarin
  1 sibling, 1 reply; 13+ messages in thread
From: Alwin Antreich via pve-user @ 2025-09-18 13:44 UTC (permalink / raw)
  To: Proxmox VE user list; +Cc: Alwin Antreich

From: "Alwin Antreich" <alwin@antreich.com>
To: "Proxmox VE user list" <pve-user@lists.proxmox.com>
Subject: Re: [PVE-User] Analysis of free space...
Date: Thu, 18 Sep 2025 13:44:04 +0000
Message-ID: <63783ca5c3f39fab913e615393ceea9c55bf147e@antreich.com>

Hi Marco,


September 17, 2025 at 3:27 PM, "Marco Gaiarin" <gaio@lilliput.linux.it> wrote:


> 
> We have a clustered PVE 8 server with 66TB of disk storage and (for now) a
> single VM; all volumes have the 'discard' option enabled, and the VM
> (RHEL 8) supports and has trim enabled.
> 
> We moved some data around between volumes, and now we have 20TB used in the
> VM and 56TB used on the ZFS pool that hosts those volumes.
> 
> Of course we ran a manual 'trim' to be sure, but nothing changed (or rather,
> some space was reclaimed, but not as much as we expected).
> 
> Is there a tool or methodology to find out where the space is stuck, and how?
> 
Are you running EXT4 on the guest?
https://forum.proxmox.com/threads/help-with-trim-on-virtio-scsi-single.123819/post-542552
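
To check from inside the guest, something along these lines should tell
(illustrative commands; the mount point is just an example):

 findmnt -t xfs,ext4 -o TARGET,SOURCE,FSTYPE,OPTIONS
 fstrim -v /srv/data       # verbose: reports how many bytes were trimmed
 fstrim --all --verbose    # or trim every supported mounted filesystem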

Cheers,
Alwin


* Re: [PVE-User] Analysis of free space...
  2025-09-17 23:45 ` Gilou
@ 2025-09-22 14:14   ` Marco Gaiarin
  2025-09-22 14:17   ` Marco Gaiarin
  1 sibling, 0 replies; 13+ messages in thread
From: Marco Gaiarin @ 2025-09-22 14:14 UTC (permalink / raw)
  To: Gilou; +Cc: pve-user

Mandi! Gilou
  In chel di` si favelave...

> What's the underlying storage? local zfs?
> Does it have thin provisioning / sparse enabled?

Yes, storage is ZFS, and 'Thin provision' is enabled.


> Also inside the VM, make sure that all the stack is discard aware, LVM, 
> filesystem and all..

The VM runs RHEL 8, so it's fairly recent and has full support for discard.
And, as stated, trimming did free a few TB of data, but not as much as we
expected...

The VM does use LVM, but from what I can find discard is 'transparent'
through LVM, i.e. there's no need/way to 'enable' or 'disable' it... as long
as it is supported by the underlying storage...
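
As far as I understand it, the only LVM-side knob is issue_discards in
lvm.conf, and that one only applies to space freed by lvremove/lvreduce, not
to discards passed down from a filesystem sitting on an LV. A quick,
illustrative way to check the current value:

 lvmconfig devices/issue_discards
 # or:
 grep issue_discards /etc/lvm/lvm.conf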

-- 




* Re: [PVE-User] Analysis of free space...
  2025-09-18 13:44 ` Alwin Antreich via pve-user
@ 2025-09-22 14:16   ` Marco Gaiarin
  2025-09-25  8:26     ` Marco Gaiarin
  0 siblings, 1 reply; 13+ messages in thread
From: Marco Gaiarin @ 2025-09-22 14:16 UTC (permalink / raw)
  To: Alwin Antreich via pve-user; +Cc: pve-user

Mandi! Alwin Antreich via pve-user
  In chel di` si favelave...

> Are you running EXT4 on the guest?
> https://forum.proxmox.com/threads/help-with-trim-on-virtio-scsi-single.123819/post-542552

I think it is XFS. I'll check.

-- 




* Re: [PVE-User] Analysis of free space...
  2025-09-17 23:45 ` Gilou
  2025-09-22 14:14   ` Marco Gaiarin
@ 2025-09-22 14:17   ` Marco Gaiarin
  2025-09-24 16:29     ` Marco Gaiarin
  2025-09-26 14:57     ` Marco Gaiarin
  1 sibling, 2 replies; 13+ messages in thread
From: Marco Gaiarin @ 2025-09-22 14:17 UTC (permalink / raw)
  To: Gilou; +Cc: pve-user

Mandi! Gilou
  In chel di` si favelave...

> You can check that forum post: 
> https://forum.proxmox.com/threads/zfs-enable-thin-provisioning.41549/

Uh, wait... we actually forgot to enable 'discard' on the volumes, and only
enabled it later (but we did reboot the VM afterwards).

I'll check the refreservation property and report back.

-- 




* Re: [PVE-User] Analysis of free space...
  2025-09-22 14:17   ` Marco Gaiarin
@ 2025-09-24 16:29     ` Marco Gaiarin
  2025-09-27 15:42       ` Alwin Antreich via pve-user
  2025-09-27 16:06       ` Matthieu Dreistadt via pve-user
  2025-09-26 14:57     ` Marco Gaiarin
  1 sibling, 2 replies; 13+ messages in thread
From: Marco Gaiarin @ 2025-09-24 16:29 UTC (permalink / raw)
  To: Marco Gaiarin; +Cc: pve-user

Mandi! Marco Gaiarin
  In chel di` si favelave...

> Uh, wait... we actually forgot to enable 'discard' on the volumes, and only
> enabled it later (but we did reboot the VM afterwards).
> I'll check the refreservation property and report back.

No, the volumes all seem to have refreservation set to 'none', as expected;
the current situation is:

 root@lamprologus:~# zfs list | grep ^rpool-data
 rpool-data                  54.2T  3.84T   171K  /rpool-data
 rpool-data/vm-100-disk-0    1.11T  3.84T  1.11T  -
 rpool-data/vm-100-disk-1    2.32T  3.84T  2.32T  -
 rpool-data/vm-100-disk-10   1.82T  3.84T  1.82T  -
 rpool-data/vm-100-disk-11   2.03T  3.84T  2.03T  -
 rpool-data/vm-100-disk-12   1.96T  3.84T  1.96T  -
 rpool-data/vm-100-disk-13   2.48T  3.84T  2.48T  -
 rpool-data/vm-100-disk-14   2.21T  3.84T  2.21T  -
 rpool-data/vm-100-disk-15   2.42T  3.84T  2.42T  -
 rpool-data/vm-100-disk-16   2.15T  3.84T  2.15T  -
 rpool-data/vm-100-disk-17   2.14T  3.84T  2.14T  -
 rpool-data/vm-100-disk-18   3.39T  3.84T  3.39T  -
 rpool-data/vm-100-disk-19   3.40T  3.84T  3.40T  -
 rpool-data/vm-100-disk-2    1.32T  3.84T  1.32T  -
 rpool-data/vm-100-disk-20   3.36T  3.84T  3.36T  -
 rpool-data/vm-100-disk-21   2.50T  3.84T  2.50T  -
 rpool-data/vm-100-disk-22   3.22T  3.84T  3.22T  -
 rpool-data/vm-100-disk-23   2.73T  3.84T  2.73T  -
 rpool-data/vm-100-disk-24   2.53T  3.84T  2.53T  -
 rpool-data/vm-100-disk-3     213K  3.84T   213K  -
 rpool-data/vm-100-disk-4     213K  3.84T   213K  -
 rpool-data/vm-100-disk-5    2.33T  3.84T  2.33T  -
 rpool-data/vm-100-disk-6    2.28T  3.84T  2.28T  -
 rpool-data/vm-100-disk-7    2.13T  3.84T  2.13T  -
 rpool-data/vm-100-disk-8    2.29T  3.84T  2.29T  -
 rpool-data/vm-100-disk-9    2.11T  3.84T  2.11T  -

and a random volume (but all are similar):

 root@lamprologus:~# zfs get all rpool-data/vm-100-disk-18 | grep refreservation
 rpool-data/vm-100-disk-18  refreservation        none                   default
 rpool-data/vm-100-disk-18  usedbyrefreservation  0B                     -

Another strange thing is that all are 2TB volumes:

 root@lamprologus:~# cat /etc/pve/qemu-server/100.conf | grep vm-100-disk-19
 scsi20: rpool-data:vm-100-disk-19,backup=0,discard=on,replicate=0,size=2000G

but:

 root@lamprologus:~# zfs list rpool-data/vm-100-disk-19
 NAME                        USED  AVAIL  REFER  MOUNTPOINT
 rpool-data/vm-100-disk-19  3.40T  3.84T  3.40T  -

why is 'USED' 3.40T?
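
(To dig into that, it might help to compare the zvol's nominal size with the
various space accountings; a possible check on the same volume:)

 # volsize = nominal size; logicalused ~ data as written by the guest;
 # used = space actually allocated on the pool (including overhead)
 zfs get volsize,volblocksize,used,logicalused,referenced,refreservation \
     rpool-data/vm-100-disk-19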


Thanks.

-- 




* Re: [PVE-User] Analysis of free space...
  2025-09-22 14:16   ` Marco Gaiarin
@ 2025-09-25  8:26     ` Marco Gaiarin
  0 siblings, 0 replies; 13+ messages in thread
From: Marco Gaiarin @ 2025-09-25  8:26 UTC (permalink / raw)
  To: Marco Gaiarin; +Cc: pve-user

Mandi! Marco Gaiarin
  In chel di` si favelave...

> I think is XFS. I'll check.

I confirm, FS is XFS.

-- 




* Re: [PVE-User] Analysis of free space...
  2025-09-22 14:17   ` Marco Gaiarin
  2025-09-24 16:29     ` Marco Gaiarin
@ 2025-09-26 14:57     ` Marco Gaiarin
  1 sibling, 0 replies; 13+ messages in thread
From: Marco Gaiarin @ 2025-09-26 14:57 UTC (permalink / raw)
  To: Marco Gaiarin; +Cc: pve-user


> Uh, wait... we actually forgot to enable 'discard' on the volumes, and only
> enabled it later (but we did reboot the VM afterwards).

Still stuck. I've rechecked every step and will summarize here:

1) the ZFS storage had 'Thin provision' enabled from the start

2) we created a bunch of volumes for the VM and, using LVM, put them into
 some XFS-formatted logical volumes; 'discard' was not checked on the volumes.

3) we deleted roughly 20+TB of data from the VM, then tried to copy some more
 data onto the VM volumes, but the storage filled up.

4) we enabled 'discard' on the volumes, rebooted the VM and the physical
 node, and did some manual trims: a few TB were freed, but not 20+TB.

5) we found another 8TB of junk data and deleted it; after that, a manual
 trim freed 8TB on the storage.


Evidently data copied to (and deleted from) the volumes before we set
'discard=on' is not handled by trim, and we need some way to force an FS
'rescan'...


How? Thanks.
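
(One rough way to see whether the guest's trims reach ZFS at all would be to
watch the per-volume usage on the host while a trim runs in the guest;
dataset name taken from the listing in my previous mail:)

 watch -n 10 'zfs list -r -o name,used,logicalused rpool-data'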

-- 




* Re: [PVE-User] Analysis of free space...
  2025-09-24 16:29     ` Marco Gaiarin
@ 2025-09-27 15:42       ` Alwin Antreich via pve-user
  2025-09-30 16:26         ` Marco Gaiarin
  2025-09-27 16:06       ` Matthieu Dreistadt via pve-user
  1 sibling, 1 reply; 13+ messages in thread
From: Alwin Antreich via pve-user @ 2025-09-27 15:42 UTC (permalink / raw)
  To: Proxmox VE user list; +Cc: Alwin Antreich

From: Alwin Antreich <alwin@antreich.com>
To: Proxmox VE user list <pve-user@lists.proxmox.com>
Subject: Re: [PVE-User] Analysis of free space...
Date: Sat, 27 Sep 2025 17:42:09 +0200
Message-ID: <9EB24CE1-C837-48F1-A938-8B38DECEA40B@antreich.com>

Hi Marco,

On 24 September 2025 18:29:26 CEST, Marco Gaiarin <gaio@lilliput.linux.it> wrote:
>Mandi! Marco Gaiarin
>  In chel di` si favelave...
>
>> Uh, wait... we actually forgot to enable 'discard' on the volumes, and only
>> enabled it later (but we did reboot the VM afterwards).
>> I'll check the refreservation property and report back.
>
>No, the volumes all seem to have refreservation set to 'none', as expected;
>the current situation is:
>
> root@lamprologus:~# zfs list | grep ^rpool-data
> rpool-data                  54.2T  3.84T   171K  /rpool-data
> rpool-data/vm-100-disk-0    1.11T  3.84T  1.11T  -
> rpool-data/vm-100-disk-1    2.32T  3.84T  2.32T  -
> rpool-data/vm-100-disk-10   1.82T  3.84T  1.82T  -
> rpool-data/vm-100-disk-11   2.03T  3.84T  2.03T  -
> rpool-data/vm-100-disk-12   1.96T  3.84T  1.96T  -
> rpool-data/vm-100-disk-13   2.48T  3.84T  2.48T  -
> rpool-data/vm-100-disk-14   2.21T  3.84T  2.21T  -
> rpool-data/vm-100-disk-15   2.42T  3.84T  2.42T  -
> rpool-data/vm-100-disk-16   2.15T  3.84T  2.15T  -
> rpool-data/vm-100-disk-17   2.14T  3.84T  2.14T  -
> rpool-data/vm-100-disk-18   3.39T  3.84T  3.39T  -
> rpool-data/vm-100-disk-19   3.40T  3.84T  3.40T  -
> rpool-data/vm-100-disk-2    1.32T  3.84T  1.32T  -
> rpool-data/vm-100-disk-20   3.36T  3.84T  3.36T  -
> rpool-data/vm-100-disk-21   2.50T  3.84T  2.50T  -
> rpool-data/vm-100-disk-22   3.22T  3.84T  3.22T  -
> rpool-data/vm-100-disk-23   2.73T  3.84T  2.73T  -
> rpool-data/vm-100-disk-24   2.53T  3.84T  2.53T  -
> rpool-data/vm-100-disk-3     213K  3.84T   213K  -
> rpool-data/vm-100-disk-4     213K  3.84T   213K  -
> rpool-data/vm-100-disk-5    2.33T  3.84T  2.33T  -
> rpool-data/vm-100-disk-6    2.28T  3.84T  2.28T  -
> rpool-data/vm-100-disk-7    2.13T  3.84T  2.13T  -
> rpool-data/vm-100-disk-8    2.29T  3.84T  2.29T  -
> rpool-data/vm-100-disk-9    2.11T  3.84T  2.11T  -
>
>and a random volume (but all are similar):
>
> root@lamprologus:~# zfs get all rpool-data/vm-100-disk-18 | grep refreservation
> rpool-data/vm-100-disk-18  refreservation        none                   default
> rpool-data/vm-100-disk-18  usedbyrefreservation  0B                     -
>
>Another strange thing is that all are 2TB volumes:
>
> root@lamprologus:~# cat /etc/pve/qemu-server/100.conf | grep vm-100-disk-19
> scsi20: rpool-data:vm-100-disk-19,backup=0,discard=on,replicate=0,size=2000G
>
>but:
>
> root@lamprologus:~# zfs list rpool-data/vm-100-disk-19
> NAME                        USED  AVAIL  REFER  MOUNTPOINT
> rpool-data/vm-100-disk-19  3.40T  3.84T  3.40T  -
>
>why is 'USED' 3.40T?
Snapshots by any chance?

Cheers,
Alwin


* Re: [PVE-User] Analysis of free space...
  2025-09-24 16:29     ` Marco Gaiarin
  2025-09-27 15:42       ` Alwin Antreich via pve-user
@ 2025-09-27 16:06       ` Matthieu Dreistadt via pve-user
  2025-09-30 16:55         ` Marco Gaiarin
  1 sibling, 1 reply; 13+ messages in thread
From: Matthieu Dreistadt via pve-user @ 2025-09-27 16:06 UTC (permalink / raw)
  To: pve-user; +Cc: Matthieu Dreistadt

From: Matthieu Dreistadt <matthieu@3-stadt.de>
To: pve-user@lists.proxmox.com
Subject: Re: [PVE-User] Analysis of free space...
Date: Sat, 27 Sep 2025 18:06:42 +0200
Message-ID: <831baaaa-3d7d-487c-be79-6162a05745b4@3-stadt.de>

Hi Marco,

you can check "zfs list -o space", which will give you a more detailed 
view of what is using the space:

root@xxx:~# zfs list -o space
NAME                          AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
rpool                          507G   354G        0B    104K             0B       354G
rpool/ROOT                     507G  4.40G        0B     96K             0B      4.40G
rpool/ROOT/pve-1               507G  4.40G     1.05G   3.35G             0B         0B
rpool/data                     507G   312G        0B    112K             0B       312G
rpool/data/subvol-105-disk-0  8.62G  11.4G     49.2M   11.4G             0B         0B

Used = overall used
Usedsnap = Used by Snapshots
Usedds = Used by the dataset itself (not counting snapshots, only live data)
Usedchild = Used by datasets/zvols further down in the same path (in my 
example, rpool has the same amount of Used and Usedchild space, since 
there is nothing directly inside of rpool itself)

Cheers,
Matthieu

Am 24.09.2025 um 18:29 schrieb Marco Gaiarin:
> Mandi! Marco Gaiarin
>    In chel di` si favelave...
>
>> Uh, wait... we actually forgot to enable 'discard' on the volumes, and only
>> enabled it later (but we did reboot the VM afterwards).
>> I'll check the refreservation property and report back.
> No, the volumes all seem to have refreservation set to 'none', as expected;
> the current situation is:
>
>   root@lamprologus:~# zfs list | grep ^rpool-data
>   rpool-data                  54.2T  3.84T   171K  /rpool-data
>   rpool-data/vm-100-disk-0    1.11T  3.84T  1.11T  -
>   rpool-data/vm-100-disk-1    2.32T  3.84T  2.32T  -
>   rpool-data/vm-100-disk-10   1.82T  3.84T  1.82T  -
>   rpool-data/vm-100-disk-11   2.03T  3.84T  2.03T  -
>   rpool-data/vm-100-disk-12   1.96T  3.84T  1.96T  -
>   rpool-data/vm-100-disk-13   2.48T  3.84T  2.48T  -
>   rpool-data/vm-100-disk-14   2.21T  3.84T  2.21T  -
>   rpool-data/vm-100-disk-15   2.42T  3.84T  2.42T  -
>   rpool-data/vm-100-disk-16   2.15T  3.84T  2.15T  -
>   rpool-data/vm-100-disk-17   2.14T  3.84T  2.14T  -
>   rpool-data/vm-100-disk-18   3.39T  3.84T  3.39T  -
>   rpool-data/vm-100-disk-19   3.40T  3.84T  3.40T  -
>   rpool-data/vm-100-disk-2    1.32T  3.84T  1.32T  -
>   rpool-data/vm-100-disk-20   3.36T  3.84T  3.36T  -
>   rpool-data/vm-100-disk-21   2.50T  3.84T  2.50T  -
>   rpool-data/vm-100-disk-22   3.22T  3.84T  3.22T  -
>   rpool-data/vm-100-disk-23   2.73T  3.84T  2.73T  -
>   rpool-data/vm-100-disk-24   2.53T  3.84T  2.53T  -
>   rpool-data/vm-100-disk-3     213K  3.84T   213K  -
>   rpool-data/vm-100-disk-4     213K  3.84T   213K  -
>   rpool-data/vm-100-disk-5    2.33T  3.84T  2.33T  -
>   rpool-data/vm-100-disk-6    2.28T  3.84T  2.28T  -
>   rpool-data/vm-100-disk-7    2.13T  3.84T  2.13T  -
>   rpool-data/vm-100-disk-8    2.29T  3.84T  2.29T  -
>   rpool-data/vm-100-disk-9    2.11T  3.84T  2.11T  -
>
> and a random volume (but all are similar):
>
>   root@lamprologus:~# zfs get all rpool-data/vm-100-disk-18 | grep refreservation
>   rpool-data/vm-100-disk-18  refreservation        none                   default
>   rpool-data/vm-100-disk-18  usedbyrefreservation  0B                     -
>
> Another strange thing is that all are 2TB volumes:
>
>   root@lamprologus:~# cat /etc/pve/qemu-server/100.conf | grep vm-100-disk-19
>   scsi20: rpool-data:vm-100-disk-19,backup=0,discard=on,replicate=0,size=2000G
>
> but:
>
>   root@lamprologus:~# zfs list rpool-data/vm-100-disk-19
>   NAME                        USED  AVAIL  REFER  MOUNTPOINT
>   rpool-data/vm-100-disk-19  3.40T  3.84T  3.40T  -
>
> why is 'USED' 3.40T?
>
>
> Thanks.
>




* Re: [PVE-User] Analysis of free space...
  2025-09-27 15:42       ` Alwin Antreich via pve-user
@ 2025-09-30 16:26         ` Marco Gaiarin
  0 siblings, 0 replies; 13+ messages in thread
From: Marco Gaiarin @ 2025-09-30 16:26 UTC (permalink / raw)
  To: Alwin Antreich via pve-user; +Cc: pve-user

Mandi! Alwin Antreich via pve-user
  In chel di` si favelave...

> Snapshots by any chance?

Uh, sorry, forgot to specify. No.

	root@lamprologus:~# zfs list -t snapshot rpool-data
	no datasets available
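
(Note: without -r that only looks at the parent dataset; a recursive check
would also cover the vm-100-disk-* zvols, just to be thorough:)

	zfs list -r -t snapshot rpool-data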

-- 




* Re: [PVE-User] Analysis of free space...
  2025-09-27 16:06       ` Matthieu Dreistadt via pve-user
@ 2025-09-30 16:55         ` Marco Gaiarin
  0 siblings, 0 replies; 13+ messages in thread
From: Marco Gaiarin @ 2025-09-30 16:55 UTC (permalink / raw)
  To: Matthieu Dreistadt via pve-user; +Cc: pve-user

Mandi! Matthieu Dreistadt via pve-user
  In chel di` si favelave...

> you can check "zfs list -o space", which will give you a more detailed 
> view of what is using the space:
[...]
> Used = overall used
> Usedsnap = Used by Snapshots
> Usedds = Used by the dataset itself (not counting snapshots, only live data)
> Usedchild = Used by datasets/zvols further down in the same path (in my 
> example, rpool has the same amount of Used and Usedchild space, since 
> there is nothing directly inside of rpool itself)

Thanks for the hint. Anyway:

 root@lamprologus:~# zfs list -o space | grep ^rpool-data/
 rpool-data/vm-100-disk-0    11.6T  1.07T        0B   1.07T             0B         0B
 rpool-data/vm-100-disk-1    11.6T  1.81T        0B   1.81T             0B         0B
 rpool-data/vm-100-disk-10   11.6T  1.42T        0B   1.42T             0B         0B
 rpool-data/vm-100-disk-11   11.6T  1.86T        0B   1.86T             0B         0B
 rpool-data/vm-100-disk-12   11.6T  1.64T        0B   1.64T             0B         0B
 rpool-data/vm-100-disk-13   11.6T  2.23T        0B   2.23T             0B         0B
 rpool-data/vm-100-disk-14   11.6T  1.96T        0B   1.96T             0B         0B
 rpool-data/vm-100-disk-15   11.6T  1.83T        0B   1.83T             0B         0B
 rpool-data/vm-100-disk-16   11.6T  1.89T        0B   1.89T             0B         0B
 rpool-data/vm-100-disk-17   11.6T  2.05T        0B   2.05T             0B         0B
 rpool-data/vm-100-disk-18   11.6T  3.39T        0B   3.39T             0B         0B
 rpool-data/vm-100-disk-19   11.6T  3.40T        0B   3.40T             0B         0B
 rpool-data/vm-100-disk-2    11.6T  1.31T        0B   1.31T             0B         0B
 rpool-data/vm-100-disk-20   11.6T  3.36T        0B   3.36T             0B         0B
 rpool-data/vm-100-disk-21   11.6T  2.50T        0B   2.50T             0B         0B
 rpool-data/vm-100-disk-22   11.6T  3.22T        0B   3.22T             0B         0B
 rpool-data/vm-100-disk-23   11.6T  2.73T        0B   2.73T             0B         0B
 rpool-data/vm-100-disk-24   11.6T  2.53T        0B   2.53T             0B         0B
 rpool-data/vm-100-disk-3    11.6T   213K        0B    213K             0B         0B
 rpool-data/vm-100-disk-4    11.6T   213K        0B    213K             0B         0B
 rpool-data/vm-100-disk-5    11.6T  1.48T        0B   1.48T             0B         0B
 rpool-data/vm-100-disk-6    11.6T  1.35T        0B   1.35T             0B         0B
 rpool-data/vm-100-disk-7    11.6T   930G        0B    930G             0B         0B
 rpool-data/vm-100-disk-8    11.6T  1.26T        0B   1.26T             0B         0B
 rpool-data/vm-100-disk-9    11.6T  1.30T        0B   1.30T             0B         0B

it seems I really was able to put 3.40T of real data on a 2T volume...
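
(A sanity check from the guest side could be to confirm that the virtual
disk is still presented as 2000G, i.e. that nothing resized it; the device
name below is only an example:)

 lsblk -b -o NAME,SIZE,TYPE /dev/sdu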

-- 




Thread overview: 13+ messages
2025-09-17 13:27 [PVE-User] Analysis of free space Marco Gaiarin
2025-09-17 23:45 ` Gilou
2025-09-22 14:14   ` Marco Gaiarin
2025-09-22 14:17   ` Marco Gaiarin
2025-09-24 16:29     ` Marco Gaiarin
2025-09-27 15:42       ` Alwin Antreich via pve-user
2025-09-30 16:26         ` Marco Gaiarin
2025-09-27 16:06       ` Matthieu Dreistadt via pve-user
2025-09-30 16:55         ` Marco Gaiarin
2025-09-26 14:57     ` Marco Gaiarin
2025-09-18 13:44 ` Alwin Antreich via pve-user
2025-09-22 14:16   ` Marco Gaiarin
2025-09-25  8:26     ` Marco Gaiarin
