From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 23 Mar 2026 13:50:14 +0100
Subject: Re: [PATCH storage 1/2] fix #7339: lvmthick: add worker to free
 space of to be deleted VMs
From: Michael Köppl <m.koeppl@proxmox.com>
To: "Lukas Sichert",
Content-Type: text/plain; charset=UTF-8
X-Mailer: aerc 0.21.0
References: <20260323101506.56098-1-l.sichert@proxmox.com>
 <20260323101506.56098-2-l.sichert@proxmox.com>
In-Reply-To: <20260323101506.56098-2-l.sichert@proxmox.com>
List-Id: Proxmox VE development discussion

noticed one more thing here.

On Mon Mar 23, 2026 at 11:14 AM CET, Lukas Sichert wrote:
>     # lock shared storage
>     $plugin->cluster_lock_storage(
> @@ -1206,16 +1206,24 @@ sub vdisk_free {
>
>             my (undef, undef, undef, undef, undef, $isBase, $format) =
>                 $plugin->parse_volname($volname);
> -            $cleanup_worker = $plugin->free_image($storeid, $scfg, $volname, $isBase, $format);
> +            $discard_worker = $plugin->free_image($storeid, $scfg, $volname, $isBase, $format);
>         },
>     );
>
> -    return if !$cleanup_worker;
> +    return if !$discard_worker;
>
>     my $rpcenv = PVE::RPCEnvironment::get();
>     my $authuser = $rpcenv->get_user();
>
> -    $rpcenv->fork_worker('imgdel', undef, $authuser, $cleanup_worker);
> +    if ($scfg->{saferemove} && $scfg->{issue_blkdiscard}) {
> +        $rpcenv->fork_worker('diskzerodiscard', undef, $authuser, $discard_worker);
> +    } elsif ($scfg->{saferemove}) {
> +        $rpcenv->fork_worker('diskzero', undef, $authuser, $discard_worker);
> +    } elsif ($scfg->{issue_blkdiscard}) {
> +        $rpcenv->fork_worker('diskdiscard', undef, $authuser, $discard_worker);
> +    } else {
> +        die "config must have changed during execution. Disk can't be deleted safely";

This changes the current behavior in a way that affects other storage plugins.
vdisk_free is used for various storage types, and this part assumes
specifics of the LVM plugin (for the LVM plugin it is true that if
$discard_worker is set, at least one of the two options must be set) and
will simply not fork a worker for any storage plugin where this is not
the case. From what I can see right now, none of the other plugins
return a discard worker (?), but that might not always be the case. I
think it would make sense to still run the 'imgdel' worker from before
in the else case, since $discard_worker was obviously defined.

I'm also not sure about the error message. The storage config is only
read once at the beginning of the function, so even if it did change in
the meantime, you would not detect it here.

> +    }
> }
>
> sub vdisk_list {
> diff --git a/src/PVE/Storage/LVMPlugin.pm b/src/PVE/Storage/LVMPlugin.pm
> index 32a8339..b0b94a6 100644
> --- a/src/PVE/Storage/LVMPlugin.pm
> +++ b/src/PVE/Storage/LVMPlugin.pm
> @@ -352,12 +352,12 @@ my sub free_lvm_volumes {
>
>     # we need to zero out LVM data for security reasons
>     # and to allow thin provisioning
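To make the suggestion for the else branch in vdisk_free concrete, an
untested sketch (variable names taken from the hunk quoted above):

    } else {
        # plugin returned a cleanup worker without either LVM-specific
        # option being set in $scfg: keep the pre-patch behavior and
        # run the plain 'imgdel' worker instead of dying
        $rpcenv->fork_worker('imgdel', undef, $authuser, $discard_worker);
    }

That way plugins other than LVM that start returning a worker from
free_image at some point would still get it executed.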