From: Fiona Ebner <f.ebner@proxmox.com>
To: pve-devel@lists.proxmox.com
Date: Fri, 23 Jan 2026 15:34:27 +0100
Message-ID: <20260123143454.150800-8-f.ebner@proxmox.com>
In-Reply-To: <20260123143454.150800-1-f.ebner@proxmox.com>
References: <20260123143454.150800-1-f.ebner@proxmox.com>
Subject: [pve-devel] [PATCH container v3 7/8] api: restore: allow keeping not backed-up volumes

Same rationale as in pve-manager commit 5f855ccf ("ui: restore: improve
warning for restoring container with same ID"): it's surprising to (new)
users that all owned mount point volumes are erased upon container
restore, even those that are not currently selected for backup. This is
different from VM restore, where volumes attached to drives not present
in the backup are kept around as unused volumes.

Many users have been tripped up by this over the years (e.g.
[0][1][2]). While the warning added by pve-manager commit 5f855ccf
helps, there are still new reports about lost data caused by this
behavior, so the UX remains very bad.

This patch adds an option to bring the behavior more in line with VM
restore. A container backup does not contain detailed information about
which mount point volumes were included, so rely on the 'backup' flag
to determine which ones were included and which were not. Note that
this is a bit more careful than VM restore, which only checks whether a
volume with the same key is included in the backup and does not also
consider the current 'backup' flag.

Remove snapshots from the kept volumes, since there are no snapshots
after a restore.

Note that this does not change the fact that mount point volumes
(according to the configuration contained in the backup) will still be
allocated, so more space is required in scenarios where some volumes
are kept. The long-term plan is to allow selecting the action for each
volume individually.

[0]: https://bugzilla.proxmox.com/show_bug.cgi?id=3783
[1]: https://forum.proxmox.com/threads/109707/post-745415
[2]: https://forum.proxmox.com/threads/111760/post-482045

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
Changes in v3:
* Rebase on master and then on refactorings.
* Don't preserve all volumes when API mp/rootfs parameters are given;
  instead, also check which volumes are present in the backup.
* Require the 'restore' parameter to be present when
  'restore-safeguard-mp-volumes' is specified.
* Add POD for restore_keep_non_backup_volumes().
* Slightly improve log messages.
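For illustration, restoring over an existing container could opt in to
the new safeguard roughly like this (hypothetical invocation; node
name, VMID and archive name are made up):

    pvesh create /nodes/pve1/lxc \
        --vmid 100 \
        --ostemplate local:backup/vzdump-lxc-100.tar.zst \
        --restore 1 \
        --force 1 \
        --restore-safeguard-mp-volumes 1

Because the new parameter declares "requires => 'restore'" in its
schema, passing it without --restore 1 is rejected during parameter
verification.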
 src/PVE/API2/LXC.pm | 163 ++++++++++++++++++++++++++++++++++++++------
 1 file changed, 144 insertions(+), 19 deletions(-)

diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index fc278a7..05704cc 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -15,6 +15,7 @@ use PVE::Firewall;
 use PVE::GuestHelpers;
 use PVE::INotify;
 use PVE::JSONSchema qw(get_standard_option);
+use PVE::RESTEnvironment qw(log_warn);
 use PVE::RESTHandler;
 use PVE::RPCEnvironment;
 use PVE::RRD;
@@ -69,6 +70,76 @@ my $check_storage_access_migrate = sub {
         if !$scfg->{content}->{rootdir};
 };
 
+=head3 restore_keep_non_backup_volumes
+
+    my $kept_volumes = restore_keep_non_backup_volumes(
+        $storage_cfg, $old_conf, $volumes_in_backup,
+    );
+
+Modifies C<$old_conf> dropping the config keys for volumes that are kept and returns an array
+reference with those volumes. The C<$volumes_in_backup> parameter is a hash reference with an entry
+for each mount point present in the backup.
+
+=cut
+
+my sub restore_keep_non_backup_volumes {
+    my ($storecfg, $old_conf, $volumes_in_backup) = @_;
+
+    my $kept_volumes = []; # an array to preserve the order
+    my $kept_volumes_hash = {}; # hash to simplify check for presence
+
+    PVE::LXC::Config->foreach_volume_full(
+        $old_conf,
+        { include_unused => 1 },
+        sub {
+            my ($ms, $mountpoint) = @_;
+
+            return if $mountpoint->{type} ne 'volume';
+
+            my ($keep, $reason);
+            # keep if either not previously backed up or not currently marked for backup
+            if (!$volumes_in_backup->{$ms}) {
+                ($keep, $reason) = (1, "not previously backed up");
+            } elsif (!PVE::LXC::Config->mountpoint_backup_enabled($ms, $mountpoint)) {
+                ($keep, $reason) = (1, "not currently backed up");
+            }
+
+            if ($keep) {
+                my $volid = $mountpoint->{volume};
+                $kept_volumes_hash->{$volid} = 1;
+                push $kept_volumes->@*, $volid;
+
+                delete $old_conf->{$ms};
+
+                my $description = "'$ms' ($volid";
+                $description .= ",mp=$mountpoint->{mp}" if $mountpoint->{mp};
+                $description .= ")";
+                print "NOTE: keeping $description as unused - $reason\n";
+            }
+        },
+    );
+
+    # after the restore, there are no snapshots anymore
+    for my $snapname (keys $old_conf->{snapshots}->%*) {
+        PVE::LXC::Config->foreach_volume(
+            $old_conf->{snapshots}->{$snapname},
+            sub {
+                my ($ms, $mountpoint) = @_;
+
+                my $volid = $mountpoint->{volume};
+
+                return if !$kept_volumes_hash->{$volid};
+
+                eval { PVE::Storage::volume_snapshot_delete($storecfg, $volid, $snapname); };
+                log_warn("unable to remove snapshot '$snapname' from kept volume '$volid' - $@")
+                    if $@;
+            },
+        );
+    }
+
+    return $kept_volumes;
+}
+
 __PACKAGE__->register_method({
     subclass => "PVE::API2::LXC::Config",
     path => '{vmid}/config',
@@ -135,6 +206,27 @@ __PACKAGE__->register_method({
     },
 });
 
+my sub create_ct_get_volumes_in_backup {
+    my ($orig_mp_param) = @_;
+
+    my $volumes_in_backup = {};
+
+    PVE::LXC::Config->foreach_volume(
+        $orig_mp_param,
+        sub {
+            my ($ms, $mountpoint) = @_;
+            my $type = $mountpoint->{type};
+            if ($type eq 'volume') {
+                if (PVE::LXC::Config->mountpoint_backup_enabled($ms, $mountpoint)) {
+                    $volumes_in_backup->{$ms} = 1;
+                }
+            }
+        },
+    );
+
+    return $volumes_in_backup;
+}
+
 my sub create_ct_determine_mp_param {
     my (
         $storage_cfg, $vmid, $archive, $api_mp_param, $orig_mp_param, $restore, $storage,
@@ -144,20 +236,24 @@ my sub create_ct_determine_mp_param {
     my $mp_param;
     my $delayed_mp_param = {};
 
-    if (scalar(keys $api_mp_param->%*)) {
-        $mp_param = $api_mp_param;
-        return ($mp_param, $delayed_mp_param);
-    }
-
     if (!$restore) {
-        $mp_param = { rootfs => "$storage:4" }; # defaults to 4GB
-        return ($mp_param, $delayed_mp_param);
+        # default is 4 GiB rootfs on the specified storage
+        $mp_param = scalar(keys $api_mp_param->%*) ? $api_mp_param : { rootfs => "$storage:4" };
+        return ($mp_param, $delayed_mp_param, {});
     }
 
     if (!defined($orig_mp_param)) {
         print "recovering backed-up configuration from '$archive'\n";
         (undef, $orig_mp_param) = PVE::LXC::Create::recover_config($storage_cfg, $archive, $vmid);
     }
+
+    my $volumes_in_backup = create_ct_get_volumes_in_backup($orig_mp_param);
+
+    if (scalar(keys $api_mp_param->%*)) {
+        $mp_param = $api_mp_param;
+        return ($mp_param, $delayed_mp_param, $volumes_in_backup);
+    }
+
     $mp_param = $orig_mp_param;
     die "rootfs configuration could not be recovered, please check and specify manually!\n"
         if !defined($mp_param->{rootfs});
@@ -200,7 +296,7 @@ my sub create_ct_determine_mp_param {
         },
     );
 
-    return ($mp_param, $delayed_mp_param);
+    return ($mp_param, $delayed_mp_param, $volumes_in_backup);
 }
 
 __PACKAGE__->register_method({
@@ -298,6 +394,16 @@ __PACKAGE__->register_method({
                 default => 0,
                 description => "Add the CT as a HA resource after it was created.",
             },
+            'restore-safeguard-mp-volumes' => {
+                optional => 1,
+                type => 'boolean',
+                requires => 'restore',
+                default => 0,
+                description =>
+                    "Preserve mount point volumes that are not included in the backup or do not"
+                    . " currently have the 'backup' flag set, keeping them as 'unused' entries in"
+                    . " the configuration.",
+            },
         }),
     },
     returns => {
@@ -317,6 +423,8 @@ __PACKAGE__->register_method({
         my $bwlimit = extract_param($param, 'bwlimit');
         my $start_after_create = extract_param($param, 'start');
         my $ha_managed = extract_param($param, 'ha-managed');
+        my $restore_safeguard_mp_volumes =
+            extract_param($param, 'restore-safeguard-mp-volumes');
 
         my $basecfg_fn = PVE::LXC::Config->config_file($vmid);
         my $same_container_exists = -f $basecfg_fn;
@@ -487,6 +595,7 @@ __PACKAGE__->register_method({
         my $was_template;
 
         my $vollist = [];
+        my $kept_volumes = [];
         eval {
             my $orig_mp_param; # only used if $restore
             if ($restore) {
@@ -531,16 +640,17 @@
                 }
             }
 
-            my ($mp_param, $delayed_mp_param) = create_ct_determine_mp_param(
-                $storage_cfg,
-                $vmid,
-                $archive,
-                $api_mp_param,
-                $orig_mp_param,
-                $restore,
-                $storage,
-                $is_root,
-            );
+            my ($mp_param, $delayed_mp_param, $volumes_in_backup) =
+                create_ct_determine_mp_param(
+                    $storage_cfg,
+                    $vmid,
+                    $archive,
+                    $api_mp_param,
+                    $orig_mp_param,
+                    $restore,
+                    $storage,
+                    $is_root,
+                );
 
             # up until here we did not modify the container, besides the lock
             $destroy_config_on_error = 1;
@@ -549,7 +659,12 @@
 
                 # we always have the 'create' lock so check for more than 1 entry
                 if (scalar(keys %$old_conf) > 1) {
-                    # destroy old container volumes
+                    # destroy old container volumes - keep not backed-up ones if requested
+                    if ($restore_safeguard_mp_volumes) {
+                        $kept_volumes = restore_keep_non_backup_volumes(
+                            $storage_cfg, $old_conf, $volumes_in_backup,
+                        );
+                    }
                     PVE::LXC::destroy_lxc_container(
                         $storage_cfg,
                         $vmid,
@@ -619,6 +734,13 @@
             foreach my $mp (keys %$delayed_mp_param) {
                 $conf->{$mp} = $delayed_mp_param->{$mp};
             }
+
+            # register kept volumes as unused
+            for my $volid ($kept_volumes->@*) {
+                eval { PVE::LXC::Config->add_unused_volume($conf, $volid); };
+                log_warn("orphaned volume '$volid' - $@") if $@;
+            }
+
             # If the template flag was set, we try to convert again to template after restore
             if ($was_template) {
                 print STDERR "Convert restored container to template...\n";
@@ -632,6 +754,9 @@
             warn $@ if $@;
             PVE::LXC::destroy_disks($storage_cfg, $vollist);
             if ($destroy_config_on_error) {
+                log_warn("orphaned volumes: " . join(',', $kept_volumes->@*))
+                    if scalar($kept_volumes->@*) > 0;
+
                 eval { PVE::LXC::Config->destroy_config($vmid) };
                 warn $@ if $@;
 
-- 
2.47.3
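To illustrate the intended effect (hypothetical example; storage and
volume names are made up, and mp0 is assumed to have had the 'backup'
flag unset already when the backup was taken):

    # configuration before the restore:
    rootfs: local-lvm:vm-100-disk-0,size=8G
    mp0: local-lvm:vm-100-disk-1,mp=/data,size=32G,backup=0

    # with restore-safeguard-mp-volumes set, the restore task log
    # then contains a line like:
    NOTE: keeping 'mp0' (local-lvm:vm-100-disk-1,mp=/data) as unused - not previously backed up

    # configuration after the restore (rootfs and mp0 freshly allocated
    # according to the backed-up configuration):
    rootfs: local-lvm:vm-100-disk-2,size=8G
    mp0: local-lvm:vm-100-disk-3,mp=/data,size=32G,backup=0
    unused0: local-lvm:vm-100-disk-1

As noted in the commit message, the data on vm-100-disk-1 survives as
an unused volume, but it keeps occupying space in addition to the newly
allocated volumes.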