From: "Fabian Grünbichler" <f.gruenbichler@proxmox.com>
To: Proxmox VE user list <pve-user@lists.proxmox.com>
Subject: Re: [PVE-User] Data limits not being respected on zfs vm volume
Date: Wed, 20 Apr 2022 10:25:20 +0200
Message-ID: <1650442842.sqdn5m4pt2.astroid@nora.none>
In-Reply-To: <b1372d2e-7c5f-de3d-e8be-ce58fb2ab4b2@gmail.com>
On April 20, 2022 4:54 am, Lindsay Mathieson wrote:
> This is really odd - I was downloading a large amount of data in a Debian
> VM last night, something went wrong (my problem), the download didn't stop
> and filled up the volume.
>
>
> Shouldn't be a problem, as the virtual disk only exists to store temporary data:
>
> * vm-100-disk-1
> * 256GB
> * 1 Partition, formatted and mounted as EXT4
> * Located under rpool/data
>
>
> Trouble is, it kept expanding past 256GB, using up all the free space on
> the host boot drive. This morning everything was down and I had to
> delete the volume to get a functioning system.
>
> zfs list of volumes and snapshots:
>
> NAME                           USED  AVAIL  REFER  MOUNTPOINT
> rpool                          450G     0B   104K  /rpool
> rpool/ROOT                    17.3G     0B    96K  /rpool/ROOT
> rpool/ROOT/pve-1              17.3G     0B  17.3G  /
> rpool/data                     432G     0B   128K  /rpool/data
> rpool/data/basevol-101-disk-0  563M     0B   563M  /rpool/data/basevol-101-disk-0
> rpool/data/basevol-102-disk-0  562M     0B   562M  /rpool/data/basevol-102-disk-0
> rpool/data/subvol-151-disk-0   911M     0B   911M  /rpool/data/subvol-151-disk-0
> rpool/data/subvol-152-disk-0   712M     0B   712M  /rpool/data/subvol-152-disk-0
> rpool/data/subvol-153-disk-0   712M     0B   712M  /rpool/data/subvol-153-disk-0
> rpool/data/subvol-154-disk-0   710M     0B   710M  /rpool/data/subvol-154-disk-0
> rpool/data/subvol-155-disk-0   838M     0B   838M  /rpool/data/subvol-155-disk-0
> rpool/data/vm-100-disk-0      47.3G     0B  45.0G  -
> rpool/data/vm-100-disk-1       338G     0B   235G  -
used 338G, referred 235G - so you either have snapshots, or raidz overhead
taking up the extra space.
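to see where the space actually goes, 'zfs list -o space' breaks the usage
down per category (a quick sketch, using the dataset name from your listing):

    # prints AVAIL/USED plus USEDSNAP, USEDDS, USEDREFRESERV and USEDCHILD
    zfs list -o space rpool/data/vm-100-disk-1

if USEDSNAP dominates, the snapshots are to blame rather than raidz padding.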
> rpool/data/vm-100-state-fsck  2.05G     0B  2.05G  -
> rpool/data/vm-201-disk-0      40.1G     0B  38.0G  -
> rpool/data/vm-201-disk-1       176K     0B   104K  -
> root@px-server:~#
>
>
> NAME                                     USED  AVAIL  REFER  MOUNTPOINT
> rpool/data/basevol-101-disk-0@__base__     8K      -   563M  -
> rpool/data/basevol-102-disk-0@__base__     8K      -   562M  -
> rpool/data/vm-100-disk-0@fsck           2.32G      -  42.7G  -
> rpool/data/vm-100-disk-1@fsck            103G      -   164G  -
the snapshots take up at least 105G in total, which lines up nicely with
338G - 235G = 103G for this zvol (it doesn't have to line up exactly;
snapshot space accounting is a bit complicated).
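if that 'fsck' snapshot is no longer needed, removing it should give you most
of that space back (a sketch, assuming nothing else still needs the snapshot):

    # remove the snapshot (and its saved RAM state) via the PVE tooling
    qm delsnapshot 100 fsck

    # or directly on the single dataset, bypassing PVE's bookkeeping
    zfs destroy rpool/data/vm-100-disk-1@fsck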
> rpool/data/vm-201-disk-0@BIOSChange     2.12G      -  37.7G  -
> rpool/data/vm-201-disk-1@BIOSChange       72K      -    96K  -
>
> How was this even possible?
see above. is the zvol thin-provisioned? if yes, then the snapshots are
likely at fault. for regular (fully reserved) zvols, creating a snapshot
already ensures that enough space is reserved at snapshot creation time, so
such a situation cannot arise. with thin-provisioned storage it's always
possible to overcommit and run out of space.
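you can tell the two cases apart from the refreservation property (a quick
check, dataset name taken from your listing):

    # a fully reserved zvol shows refreservation roughly equal to volsize;
    # a thin-provisioned one shows refreservation=none
    zfs get volsize,refreservation rpool/data/vm-100-disk-1

whether PVE allocates zvols thin is controlled by the 'sparse' flag of the
zfspool storage entry in /etc/pve/storage.cfg.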