Date: Wed, 20 Apr 2022 10:25:20 +0200
From: Fabian Grünbichler
To: Proxmox VE user list
Subject: Re: [PVE-User] Data limits not being respected on zfs vm volume

On April 20, 2022 4:54 am, Lindsay Mathieson wrote:
> This is really odd - was downloading a large amount of data in a debian
> VM last night, something went wrong (my problem), it didn't stop and
> filled up the volume.
>
> Shouldn't be a problem, as the virtual disk only exists to store temporary data:
>
> * vm-100-disk-1
> * 256GB
> * 1 Partition, formatted and mounted as EXT4
> * Located under rpool/data
>
> Trouble is, it kept expanding past 256GB, using up all the free space on
> the host boot drive. This morning everything was down and I had to
> delete the volume to get a functioning system.
>
> zfs list of volumes and snapshots:
>
> NAME                            USED  AVAIL  REFER  MOUNTPOINT
> rpool                           450G     0B   104K  /rpool
> rpool/ROOT                     17.3G     0B    96K  /rpool/ROOT
> rpool/ROOT/pve-1               17.3G     0B  17.3G  /
> rpool/data                      432G     0B   128K  /rpool/data
> rpool/data/basevol-101-disk-0   563M     0B   563M  /rpool/data/basevol-101-disk-0
> rpool/data/basevol-102-disk-0   562M     0B   562M  /rpool/data/basevol-102-disk-0
> rpool/data/subvol-151-disk-0    911M     0B   911M  /rpool/data/subvol-151-disk-0
> rpool/data/subvol-152-disk-0    712M     0B   712M  /rpool/data/subvol-152-disk-0
> rpool/data/subvol-153-disk-0    712M     0B   712M  /rpool/data/subvol-153-disk-0
> rpool/data/subvol-154-disk-0    710M     0B   710M  /rpool/data/subvol-154-disk-0
> rpool/data/subvol-155-disk-0    838M     0B   838M  /rpool/data/subvol-155-disk-0
> rpool/data/vm-100-disk-0       47.3G     0B  45.0G  -
> rpool/data/vm-100-disk-1        338G     0B   235G  -

used 338G, referred 235G - so you either have snapshots or raidz overhead
taking up the extra space.
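a quick way to see where that difference comes from is ZFS's per-dataset
space accounting - something along these lines, with the dataset name taken
from your listing:

  # zfs list -o space rpool/data/vm-100-disk-1
  # zfs get usedbysnapshots,usedbydataset,usedbyrefreservation rpool/data/vm-100-disk-1

USEDSNAP / usedbysnapshots is space held only by snapshots, USEDDS /
usedbydataset is what the current volume contents reference, and
USEDREFRESERV is space reserved up front for a non-sparse zvol.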
> rpool/data/vm-100-state-fsck   2.05G     0B  2.05G  -
> rpool/data/vm-201-disk-0       40.1G     0B  38.0G  -
> rpool/data/vm-201-disk-1        176K     0B   104K  -
> root@px-server:~#
>
> NAME                                     USED  AVAIL  REFER  MOUNTPOINT
> rpool/data/basevol-101-disk-0@__base__     8K      -   563M  -
> rpool/data/basevol-102-disk-0@__base__     8K      -   562M  -
> rpool/data/vm-100-disk-0@fsck           2.32G      -  42.7G  -
> rpool/data/vm-100-disk-1@fsck            103G      -   164G  -

snapshots taking up 105G at least, which lines up nicely with
338 - 235 = 103G (it doesn't have to line up exactly, snapshot space
accounting is a bit complicated).

> rpool/data/vm-201-disk-0@BIOSChange     2.12G      -  37.7G  -
> rpool/data/vm-201-disk-1@BIOSChange       72K      -    96K  -
>
> How was this even possible?

see above. is the zvol thin-provisioned? if yes, then the snapshots are
likely at fault. for regular zvols, creating a snapshot already takes care
of reserving enough space at snapshot creation time, so such a situation
cannot arise. with thin-provisioned storage it's always possible to
overcommit and run out of space.
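to check whether a zvol is thin-provisioned, compare its volsize and
refreservation - a sparse (thin) zvol reports refreservation=none, while a
regular one reserves roughly its full volsize up front. for example, with
the name from the listing above:

  # zfs get volsize,refreservation rpool/data/vm-100-disk-1

and if the old 'fsck' snapshot is no longer needed, removing it should give
most of that space back - ideally through PVE (e.g. 'qm delsnapshot 100 fsck')
so the VM configuration stays in sync, rather than running
'zfs destroy rpool/data/vm-100-disk-1@fsck' directly.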