* [PVE-User] Data limits not being respected on zfs vm volume
@ 2022-04-20 2:54 Lindsay Mathieson
2022-04-20 8:25 ` Fabian Grünbichler
2022-04-20 16:25 ` GM
0 siblings, 2 replies; 3+ messages in thread
From: Lindsay Mathieson @ 2022-04-20 2:54 UTC (permalink / raw)
To: pve-user
This is really odd - I was downloading a large amount of data in a Debian
VM last night, something went wrong (my problem), the download didn't stop
and it filled up the volume.
Shouldn't be a problem, as the virtual disk only exists to store temporary data:
* vm-100-disk-1
* 256GB
* 1 Partition, formatted and mounted as EXT4
* Located under rpool/data
Trouble is, it kept expanding past 256GB, using up all the free space on
the host boot drive. This morning everything was down and I had to
delete the volume to get a functioning system.
zfs list of volumes and snapshots:
NAME                                     USED  AVAIL  REFER  MOUNTPOINT
rpool                                    450G     0B   104K  /rpool
rpool/ROOT                              17.3G     0B    96K  /rpool/ROOT
rpool/ROOT/pve-1                        17.3G     0B  17.3G  /
rpool/data                               432G     0B   128K  /rpool/data
rpool/data/basevol-101-disk-0            563M     0B   563M  /rpool/data/basevol-101-disk-0
rpool/data/basevol-102-disk-0            562M     0B   562M  /rpool/data/basevol-102-disk-0
rpool/data/subvol-151-disk-0             911M     0B   911M  /rpool/data/subvol-151-disk-0
rpool/data/subvol-152-disk-0             712M     0B   712M  /rpool/data/subvol-152-disk-0
rpool/data/subvol-153-disk-0             712M     0B   712M  /rpool/data/subvol-153-disk-0
rpool/data/subvol-154-disk-0             710M     0B   710M  /rpool/data/subvol-154-disk-0
rpool/data/subvol-155-disk-0             838M     0B   838M  /rpool/data/subvol-155-disk-0
rpool/data/vm-100-disk-0                47.3G     0B  45.0G  -
_*rpool/data/vm-100-disk-1               338G     0B   235G  -*_
rpool/data/vm-100-state-fsck            2.05G     0B  2.05G  -
rpool/data/vm-201-disk-0                40.1G     0B  38.0G  -
rpool/data/vm-201-disk-1                 176K     0B   104K  -
root@px-server:~#
NAME                                      USED  AVAIL  REFER  MOUNTPOINT
rpool/data/basevol-101-disk-0@__base__      8K      -   563M  -
rpool/data/basevol-102-disk-0@__base__      8K      -   562M  -
rpool/data/vm-100-disk-0@fsck            2.32G      -  42.7G  -
rpool/data/vm-100-disk-1@fsck             103G      -   164G  -
rpool/data/vm-201-disk-0@BIOSChange      2.12G      -  37.7G  -
rpool/data/vm-201-disk-1@BIOSChange        72K      -    96K  -
VM fstab:

UUID=7928b71b-a00e-4614-b239-d5cc9bf311d6        /              ext4         errors=remount-ro  0  1
# swap was on /dev/sda5 during installation
# UUID=26f4eae9-7855-4561-b75b-1405cc5eec3e      none           swap         sw                 0  0
/dev/sr0                                         /media/cdrom0  udf,iso9660  user,noauto        0  0
_*PARTUUID=bd9ca0da-fbde-4bfc-852b-f6b7db86292a  /mnt/temp      ext4         errors=remount-ro  0  1*_
# moosefs
mfsmount  /mnt/plex  fuse  defaults,_netdev,mfsdelayedinit,mfssubfolder=plex  0  0
How was this even possible?
NB: the process downloading the data was running in Docker, hosted on the
Debian VM.
--
Lindsay Mathieson
* Re: [PVE-User] Data limits not being respected on zfs vm volume
2022-04-20 2:54 [PVE-User] Data limits not being respected on zfs vm volume Lindsay Mathieson
@ 2022-04-20 8:25 ` Fabian Grünbichler
2022-04-20 16:25 ` GM
1 sibling, 0 replies; 3+ messages in thread
From: Fabian Grünbichler @ 2022-04-20 8:25 UTC (permalink / raw)
To: Proxmox VE user list
On April 20, 2022 4:54 am, Lindsay Mathieson wrote:
> This is really odd - I was downloading a large amount of data in a Debian
> VM last night, something went wrong (my problem), the download didn't stop
> and it filled up the volume.
>
>
> Shouldn't be a problem, as the virtual disk only exists to store temporary data:
>
> * vm-100-disk-1
> * 256GB
> * 1 Partition, formatted and mounted as EXT4
> * Located under rpool/data
>
>
> Trouble is, it kept expanding past 256GB, using up all the free space on
> the host boot drive. This morning everything was down and I had to
> delete the volume to get a functioning system.
>
> zfs list of volumes and snapshots:
>
> NAME                                     USED  AVAIL  REFER  MOUNTPOINT
> rpool                                    450G     0B   104K  /rpool
> rpool/ROOT                              17.3G     0B    96K  /rpool/ROOT
> rpool/ROOT/pve-1                        17.3G     0B  17.3G  /
> rpool/data                               432G     0B   128K  /rpool/data
> rpool/data/basevol-101-disk-0            563M     0B   563M  /rpool/data/basevol-101-disk-0
> rpool/data/basevol-102-disk-0            562M     0B   562M  /rpool/data/basevol-102-disk-0
> rpool/data/subvol-151-disk-0             911M     0B   911M  /rpool/data/subvol-151-disk-0
> rpool/data/subvol-152-disk-0             712M     0B   712M  /rpool/data/subvol-152-disk-0
> rpool/data/subvol-153-disk-0             712M     0B   712M  /rpool/data/subvol-153-disk-0
> rpool/data/subvol-154-disk-0             710M     0B   710M  /rpool/data/subvol-154-disk-0
> rpool/data/subvol-155-disk-0             838M     0B   838M  /rpool/data/subvol-155-disk-0
> rpool/data/vm-100-disk-0                47.3G     0B  45.0G  -
> _*rpool/data/vm-100-disk-1               338G     0B   235G  -*_
used 338G, referred 235G - so you either have snapshots, or raidz overhead
taking up the extra space.
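
One way to see where the USED figure comes from is ZFS's per-category
space accounting (a quick sketch - vm-100-disk-1 has since been deleted,
so substitute any zvol that still exists):

    # break USED down into dataset, snapshot and refreservation shares
    zfs get used,referenced,usedbydataset,usedbysnapshots,usedbyrefreservation \
        rpool/data/vm-100-disk-1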
> rpool/data/vm-100-state-fsck            2.05G     0B  2.05G  -
> rpool/data/vm-201-disk-0                40.1G     0B  38.0G  -
> rpool/data/vm-201-disk-1                 176K     0B   104K  -
> root@px-server:~#
>
>
> NAME                                      USED  AVAIL  REFER  MOUNTPOINT
> rpool/data/basevol-101-disk-0@__base__      8K      -   563M  -
> rpool/data/basevol-102-disk-0@__base__      8K      -   562M  -
> rpool/data/vm-100-disk-0@fsck            2.32G      -  42.7G  -
> rpool/data/vm-100-disk-1@fsck             103G      -   164G  -
the snapshots take up about 105G in total, and the 103G used by
vm-100-disk-1@fsck lines up nicely with 338G - 235G = 103G (it doesn't
have to, snapshot space accounting is a bit complicated).
> rpool/data/vm-201-disk-0@BIOSChange      2.12G      -  37.7G  -
> rpool/data/vm-201-disk-1@BIOSChange        72K      -    96K  -
>
> How was this even possible?
see above. is the zvol thin-provisioned? if yes, then likely the
snapshots are at fault. for regular zvols, creating a snapshot would
already take care of having enough space at snapshot creation time, and
such a situation cannot arise. with thin-provisioned storage it's always
possible to overcommit and run out of space.
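
A quick way to check this from the host (the storage name and the zvol
path below are only examples, adjust them to your setup):

    # "sparse 1" in the zfspool storage definition means thin provisioning
    grep -A 10 '^zfspool:' /etc/pve/storage.cfg

    # a thin zvol has no refreservation backing its volsize
    zfs get volsize,refreservation rpool/data/vm-100-disk-0

With a full refreservation the worst case is reserved up front (and a
snapshot fails at creation time if it would not fit); with
"refreservation none" the zvol and its snapshots can grow until the pool
is full.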
* Re: [PVE-User] Data limits not being respected on zfs vm volume
2022-04-20 2:54 [PVE-User] Data limits not being respected on zfs vm volume Lindsay Mathieson
2022-04-20 8:25 ` Fabian Grünbichler
@ 2022-04-20 16:25 ` GM
1 sibling, 0 replies; 3+ messages in thread
From: GM @ 2022-04-20 16:25 UTC (permalink / raw)
To: Proxmox VE user list
>
> Trouble is, it kept expanding past 256GB, using up all the free space on
> the host boot drive. This morning everything was down and I had to
> delete the volume to get a functioning system.
Just to add to this: you could prevent this from happening in the future
by setting a "reservation" or "refreservation" on the "rpool/ROOT/pve-1"
dataset (see man zfsprops for details).
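
A minimal example (the 8G is just an illustration, size it to whatever
the host needs to keep running):

    # keep space aside for the host root filesystem so a runaway guest
    # or snapshot cannot consume the last bytes of the pool
    zfs set refreservation=8G rpool/ROOT/pve-1
    zfs get reservation,refreservation rpool/ROOT/pve-1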