From: Fiona Ebner <f.ebner@proxmox.com>
To: pve-devel@lists.proxmox.com
Date: Tue, 11 Feb 2025 12:20:44 +0100
Message-Id: <20250211112045.37214-1-f.ebner@proxmox.com>
X-Mailer: git-send-email 2.39.5
Subject: [pve-devel] [PATCH container v2] api: restore: allow keeping not backed-up volumes

Same rationale as in pve-manager commit 5f855ccf ("ui: restore: improve
warning for restoring container with same ID"): it's surprising to
(new) users that all owned mount point volumes are erased upon
container restore, even those that are not currently selected for
backup. This is different from VM restore, where volumes attached at
drives not present in the backup are kept around as unused volumes.

Many users have been tripped up by this over the years (e.g.
[0][1][2]). While the warning added by pve-manager commit 5f855ccf
helps, the fact is that this behavior still leads to new reports about
lost data and thus very bad UX. This patch adds an option to bring the
behavior more in line with VM restore.
A container backup does not contain detailed information about which
mount point volumes were included, so rely on the 'backup' flag to
determine which ones were included and which were not. Note that this
is a bit more careful than VM restore, which only checks whether a
volume with the same key is included in the backup and does not also
consider the current 'backup' flag.

Remove snapshots from the kept volumes, since there are no snapshots
anymore after a restore.

Note that this does not change the fact that mount point volumes
(according to the configuration contained in the backup) will still be
allocated, so more space is required in scenarios where some volumes
are kept.

The long-term plan is to allow selecting actions for volumes
individually.

[0]: https://bugzilla.proxmox.com/show_bug.cgi?id=3783
[1]: https://forum.proxmox.com/threads/109707/post-745415
[2]: https://forum.proxmox.com/threads/111760/post-482045

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
Changes in v2:
* add an API parameter to make the new behavior opt-in (and have the
  UI opt-in by default with the next patch)

 src/PVE/API2/LXC.pm | 80 ++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 79 insertions(+), 1 deletion(-)

diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 7cb5122..128566b 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -16,6 +16,7 @@ use PVE::DataCenterConfig;
 use PVE::AccessControl;
 use PVE::Firewall;
 use PVE::Storage;
+use PVE::RESTEnvironment qw(log_warn);
 use PVE::RESTHandler;
 use PVE::RPCEnvironment;
 use PVE::ReplicationConfig;
@@ -52,6 +53,56 @@ my $check_storage_access_migrate = sub {
         if !$scfg->{content}->{rootdir};
 };
 
+my sub restore_keep_non_backup_volumes {
+    my ($storecfg, $old_conf, $volumes_in_backup) = @_;
+
+    my $kept_volumes = []; # an array to preserve the order
+    my $kept_volumes_hash = {}; # hash to simplify check for presence
+
+    PVE::LXC::Config->foreach_volume_full($old_conf, { include_unused => 1 }, sub {
+        my ($ms, $mountpoint) = @_;
+
+        return if $mountpoint->{type} ne 'volume';
+
+        my ($keep, $reason);
+        # keep if either not previously backed up or not currently set for backup
+        if (!$volumes_in_backup->{$ms}) {
+            ($keep, $reason) = (1, "not previously backed up");
+        } elsif (!PVE::LXC::Config->mountpoint_backup_enabled($ms, $mountpoint)) {
+            ($keep, $reason) = (1, "not currently backed up");
+        }
+
+        if ($keep) {
+            my $volid = $mountpoint->{volume};
+            $kept_volumes_hash->{$volid} = 1;
+            push $kept_volumes->@*, $volid;
+
+            delete $old_conf->{$ms};
+
+            my $description = "'$ms' ($volid";
+            $description .= ",mp=$mountpoint->{mp}" if $mountpoint->{mp};
+            $description .= ")";
+            print "keeping $description as unused - $reason\n";
+        }
+    });
+
+    # after the restore, there are no snapshots anymore
+    for my $snapname (keys $old_conf->{snapshots}->%*) {
+        PVE::LXC::Config->foreach_volume($old_conf->{snapshots}->{$snapname}, sub {
+            my ($ms, $mountpoint) = @_;
+
+            my $volid = $mountpoint->{volume};
+
+            return if !$kept_volumes_hash->{$volid};
+
+            eval { PVE::Storage::volume_snapshot_delete($storecfg, $volid, $snapname); };
+            log_warn("unable to remove snapshot '$snapname' from kept volume '$volid' - $@") if $@;
+        });
+    }
+
+    return $kept_volumes;
+}
+
 __PACKAGE__->register_method ({
     subclass => "PVE::API2::LXC::Config",
     path => '{vmid}/config',
@@ -198,6 +249,14 @@ __PACKAGE__->register_method({
             default => 0,
             description => "Start the CT after its creation finished successfully.",
         },
+        'restore-safeguard-mp-volumes' => {
+            optional => 1,
+            type => 'boolean',
+            default => 0,
+            description => "Restore only - Preserve mount point volumes that are not included"
+                ." in the backup or do not currently have the 'backup' flag set as 'unused'"
+                ." volumes.",
+        },
     }),
 },
 returns => {
@@ -216,6 +275,7 @@ __PACKAGE__->register_method({
        my $ignore_unpack_errors = extract_param($param, 'ignore-unpack-errors');
        my $bwlimit = extract_param($param, 'bwlimit');
        my $start_after_create = extract_param($param, 'start');
+       my $restore_safeguard_mp_volumes = extract_param($param, 'restore-safeguard-mp-volumes');
 
        my $basecfg_fn = PVE::LXC::Config->config_file($vmid);
        my $same_container_exists = -f $basecfg_fn;
@@ -381,6 +441,8 @@ __PACKAGE__->register_method({
            my $was_template;
 
            my $vollist = [];
+           my $volumes_in_backup = {};
+           my $kept_volumes = [];
            eval {
                my $orig_mp_param; # only used if $restore
                if ($restore) {
@@ -428,6 +490,8 @@ __PACKAGE__->register_method({
                        my ($ms, $mountpoint) = @_;
                        my $type = $mountpoint->{type};
                        if ($type eq 'volume') {
+                           $volumes_in_backup->{$ms} = 1
+                               if PVE::LXC::Config->mountpoint_backup_enabled($ms, $mountpoint);
                            die "unable to detect disk size - please specify $ms (size)\n"
                                if !defined($mountpoint->{size});
                            my $disksize = $mountpoint->{size} / (1024 * 1024 * 1024); # create_disks expects GB as unit size
@@ -463,7 +527,11 @@ __PACKAGE__->register_method({
 
                    # we always have the 'create' lock so check for more than 1 entry
                    if (scalar(keys %$old_conf) > 1) {
-                       # destroy old container volumes
+                       # destroy old container volumes - keep not backed-up ones if requested
+                       if ($restore_safeguard_mp_volumes) {
+                           $kept_volumes = restore_keep_non_backup_volumes(
+                               $storage_cfg, $old_conf, $volumes_in_backup);
+                       }
                        PVE::LXC::destroy_lxc_container(
                            $storage_cfg, $vmid, $old_conf, { lock => 'create' });
                    }
@@ -497,6 +565,13 @@ __PACKAGE__->register_method({
                foreach my $mp (keys %$delayed_mp_param) {
                    $conf->{$mp} = $delayed_mp_param->{$mp};
                }
+
+               # register kept volumes as unused
+               for my $volid ($kept_volumes->@*) {
+                   eval { PVE::LXC::Config->add_unused_volume($conf, $volid); };
+                   log_warn("orphaned volume '$volid' - $@") if $@;
+               }
+
                # If the template flag was set, we try to convert again to template after restore
                if ($was_template) {
                    print STDERR "Convert restored container to template...\n";
@@ -510,6 +585,9 @@ __PACKAGE__->register_method({
            warn $@ if $@;
            PVE::LXC::destroy_disks($storage_cfg, $vollist);
            if ($destroy_config_on_error) {
+               log_warn("orphaned volumes: " . join(',', $kept_volumes->@*))
+                   if scalar($kept_volumes->@*) > 0;
+
                eval { PVE::LXC::Config->destroy_config($vmid) };
                warn $@ if $@;
-- 
2.39.5
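
To illustrate the intended behavior with a made-up example (container ID,
storage and volume IDs are placeholders, not part of the patch): suppose the
existing container 123 has

    rootfs: local-lvm:vm-123-disk-0,size=8G
    mp0: local-lvm:vm-123-disk-1,mp=/srv/data,size=32G
    mp1: local-lvm:vm-123-disk-2,mp=/srv/scratch,size=100G,backup=0

and the backup being restored only contains rootfs and mp0. With
'restore-safeguard-mp-volumes' set, the restore task log would contain a line
along the lines of

    keeping 'mp1' (local-lvm:vm-123-disk-2,mp=/srv/scratch) as unused - not previously backed up

and vm-123-disk-2 would be re-registered as an 'unusedX' entry in the restored
configuration instead of being erased; any snapshots on it are removed, since
there are no snapshots anymore after a restore.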
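
A minimal sketch of opting in via the API, e.g. with pvesh (node name, VMID,
storage and archive name are made up; apart from the new
'restore-safeguard-mp-volumes' parameter, these are pre-existing parameters of
the create/restore endpoint):

    # restore over the existing container, keeping its not backed-up mount
    # point volumes around as unused volumes instead of erasing them
    pvesh create /nodes/node1/lxc --vmid 123 \
        --ostemplate 'local:backup/vzdump-lxc-123-2025_02_11-12_00_00.tar.zst' \
        --restore 1 --force 1 \
        --restore-safeguard-mp-volumes 1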