Date: Fri, 9 May 2025 12:29:59 +0200 (CEST)
From: Fabian Grünbichler <f.gruenbichler@proxmox.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH pve-storage 4/5] storage: vdisk_free: remove external snapshots
Message-ID: <654056903.12913.1746786599614@webmail.proxmox.com>
In-Reply-To: <mailman.34.1745322752.394.pve-devel@lists.proxmox.com>
References: <20250422115141.808427-1-alexandre.derumier@groupe-cyllene.com>
 <mailman.34.1745322752.394.pve-devel@lists.proxmox.com>

> On 22.04.2025 13:51 CEST, Alexandre Derumier via pve-devel <pve-devel@lists.proxmox.com> wrote:
>
> add a $include_snapshots param to free_image to
> remove the whole chain of snapshots when deleting the main image.

rbd, zfs, btrfs, lvmthin and qcow2 internal snapshots already all behave like this by default. Shouldn't we just implement this for external snapshots as well, without the need to opt into it?
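For example (a rough, untested sketch: the helper name below is made up, the selection logic is lifted straight from this patch), the chain collection could live in a small helper that both Plugin.pm and LVMPlugin.pm call unconditionally:

sub volnames_for_removal {
    # sketch only: collect the external snapshot chain of $volname plus the
    # volume itself, ordered so that snapshots are removed before the image
    my ($class, $scfg, $storeid, $volname) = @_;

    my $volnames = [];

    my $snapshots = $class->volume_snapshot_info($scfg, $storeid, $volname);
    for my $snapid (
        sort { $snapshots->{$b}->{order} <=> $snapshots->{$a}->{order} }
        keys %$snapshots
    ) {
        my $snap = $snapshots->{$snapid};
        next if $snapid eq 'current'; # the live image is appended below
        next if !$snap->{volid};      # skip entries without a volume
        next if !$snap->{ext};        # only external snapshot files/LVs
        my (undef, $snap_volname) = PVE::Storage::parse_volume_id($snap->{volid});
        push @$volnames, $snap_volname;
    }

    push @$volnames, $volname;

    return $volnames;
}

free_image in both plugins would then always loop over that list, and the extra $include_snapshots parameter (and the hard-coded '1' passed in by vdisk_free) could be dropped.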
>
> Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
> ---
>  src/PVE/Storage.pm           |  2 +-
>  src/PVE/Storage/LVMPlugin.pm | 72 ++++++++++++++++++++++++------------
>  src/PVE/Storage/Plugin.pm    | 27 +++++++++++---
>  3 files changed, 71 insertions(+), 30 deletions(-)
>
> diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
> index db9d190..55a9a43 100755
> --- a/src/PVE/Storage.pm
> +++ b/src/PVE/Storage.pm
> @@ -1059,7 +1059,7 @@ sub vdisk_free {
>
>          my (undef, undef, undef, undef, undef, $isBase, $format) =
>              $plugin->parse_volname($volname);
> -        $cleanup_worker = $plugin->free_image($storeid, $scfg, $volname, $isBase, $format);
> +        $cleanup_worker = $plugin->free_image($storeid, $scfg, $volname, $isBase, $format, 1);
>      });
>
>      return if !$cleanup_worker;
> diff --git a/src/PVE/Storage/LVMPlugin.pm b/src/PVE/Storage/LVMPlugin.pm
> index 8ee337a..b20fe98 100644
> --- a/src/PVE/Storage/LVMPlugin.pm
> +++ b/src/PVE/Storage/LVMPlugin.pm
> @@ -463,10 +463,34 @@ sub alloc_snap_image {
>  }
>
>  sub free_image {
> -    my ($class, $storeid, $scfg, $volname, $isBase) = @_;
> +    my ($class, $storeid, $scfg, $volname, $isBase, $format, $include_snapshots) = @_;
>
>      my $vg = $scfg->{vgname};
>
> +    my $name = ($class->parse_volname($volname))[1];
> +
> +    #activate volumes && snapshot volumes
> +    my $path = $class->path($scfg, $volname, $storeid);
> +    $path = "\@pve-$name" if $format && $format eq 'qcow2';
> +    my $cmd = ['/sbin/lvchange', '-aly', $path];
> +    run_command($cmd, errmsg => "can't activate LV '$path' to zero-out its data");
> +    $cmd = ['/sbin/lvchange', '--refresh', $path];
> +    run_command($cmd, errmsg => "can't refresh LV '$path' to zero-out its data");
> +
> +    my $volnames = [];
> +    if ($include_snapshots) {
> +        my $snapshots = $class->volume_snapshot_info($scfg, $storeid, $volname);
> +        for my $snapid (sort { $snapshots->{$b}->{order} <=> $snapshots->{$a}->{order} } keys %$snapshots) {
> +            my $snap = $snapshots->{$snapid};
> +            next if $snapid eq 'current';
> +            next if !$snap->{volid};
> +            next if !$snap->{ext};
> +            my ($snap_storeid, $snap_volname) = PVE::Storage::parse_volume_id($snap->{volid});
> +            push @$volnames, $snap_volname;
> +        }
> +    }
> +    push @$volnames, $volname;
> +
>      # we need to zero out LVM data for security reasons
>      # and to allow thin provisioning
>
> @@ -478,40 +502,40 @@ sub free_image {
>          if ($scfg->{saferemove_throughput}) {
>              $throughput = $scfg->{saferemove_throughput};
>          }
> -
> -        my $cmd = [
> +        for my $name (@$volnames) {
> +            my $cmd = [
>              '/usr/bin/cstream',
>              '-i', '/dev/zero',
> -            '-o', "/dev/$vg/del-$volname",
> +            '-o', "/dev/$vg/del-$name",
>              '-T', '10',
>              '-v', '1',
>              '-b', '1048576',
>              '-t', "$throughput"
> -        ];
> -        eval { run_command($cmd, errmsg => "zero out finished (note: 'No space left on device' is ok here)"); };
> -        warn $@ if $@;
> -
> -        $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
> -            my $cmd = ['/sbin/lvremove', '-f', "$vg/del-$volname"];
> -            run_command($cmd, errmsg => "lvremove '$vg/del-$volname' error");
> -        });
> -        print "successfully removed volume $volname ($vg/del-$volname)\n";
> +            ];
> +            eval { run_command($cmd, errmsg => "zero out finished (note: 'No space left on device' is ok here)"); };
> +            warn $@ if $@;
> +
> +            $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
> +                my $cmd = ['/sbin/lvremove', '-f', "$vg/del-$name"];
> +                run_command($cmd, errmsg => "lvremove '$vg/del-$name' error");
> +            });
> +            print "successfully removed volume $name ($vg/del-$name)\n";
> +        }
>      };
>
> -    my $cmd = ['/sbin/lvchange', '-aly', "$vg/$volname"];
> -    run_command($cmd, errmsg => "can't activate LV '$vg/$volname' to zero-out its data");
> -    $cmd = ['/sbin/lvchange', '--refresh', "$vg/$volname"];
> -    run_command($cmd, errmsg => "can't refresh LV '$vg/$volname' to zero-out its data");
> -
>      if ($scfg->{saferemove}) {
> -        # avoid long running task, so we only rename here
> -        $cmd = ['/sbin/lvrename', $vg, $volname, "del-$volname"];
> -        run_command($cmd, errmsg => "lvrename '$vg/$volname' error");
> +        for my $name (@$volnames) {
> +            # avoid long running task, so we only rename here
> +            my $cmd = ['/sbin/lvrename', $vg, $name, "del-$name"];
> +            run_command($cmd, errmsg => "lvrename '$vg/$name' error");
> +        }
>          return $zero_out_worker;
>      } else {
> -        my $tmpvg = $scfg->{vgname};
> -        $cmd = ['/sbin/lvremove', '-f', "$tmpvg/$volname"];
> -        run_command($cmd, errmsg => "lvremove '$tmpvg/$volname' error");
> +        for my $name (@$volnames) {
> +            my $tmpvg = $scfg->{vgname};
> +            my $cmd = ['/sbin/lvremove', '-f', "$tmpvg/$name"];
> +            run_command($cmd, errmsg => "lvremove '$tmpvg/$name' error");
> +        }
>      }
>
>      return undef;
> diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
> index 3f83fae..0319ab2 100644
> --- a/src/PVE/Storage/Plugin.pm
> +++ b/src/PVE/Storage/Plugin.pm
> @@ -959,7 +959,7 @@ sub alloc_snap_image {
>  }
>
>  sub free_image {
> -    my ($class, $storeid, $scfg, $volname, $isBase, $format) = @_;
> +    my ($class, $storeid, $scfg, $volname, $isBase, $format, $include_snapshots) = @_;
>
>      die "cannot remove protected volume '$volname' on '$storeid'\n"
>          if $class->get_volume_attribute($scfg, $storeid, $volname, 'protected');
> @@ -975,12 +975,29 @@ sub free_image {
>      if (defined($format) && ($format eq 'subvol')) {
>          File::Path::remove_tree($path);
>      } else {
> -        if (!(-f $path || -l $path)) {
> -            warn "disk image '$path' does not exist\n";
> -            return undef;
> +
> +        my $volnames = [];
> +        if ($include_snapshots) {
> +            my $snapshots = $class->volume_snapshot_info($scfg, $storeid, $volname);
> +            for my $snapid (sort { $snapshots->{$b}->{order} <=> $snapshots->{$a}->{order} } keys %$snapshots) {
> +                my $snap = $snapshots->{$snapid};
> +                next if $snapid eq 'current';
> +                next if !$snap->{volid};
> +                next if !$snap->{ext};
> +                my ($snap_storeid, $snap_volname) = PVE::Storage::parse_volume_id($snap->{volid});
> +                push @$volnames, $snap_volname;
> +            }
>          }
> +        push @$volnames, $volname;
>
> -        unlink($path) || die "unlink '$path' failed - $!\n";
> +        for my $name (@$volnames) {
> +            my $path = $class->filesystem_path($scfg, $name);
> +            if (!(-f $path || -l $path)) {
> +                warn "disk image '$path' does not exist\n";
> +            } else {
> +                unlink($path) || die "unlink '$path' failed - $!\n";
> +            }
> +        }
>      }
>
>      # try to cleanup directory to not clutter storage with empty $vmid dirs if
> --
> 2.39.5