[pve-devel] [PATCH container v2] api: restore: allow keeping not backed-up volumes
From: Fiona Ebner @ 2025-02-11 11:20 UTC
To: pve-devel

Same rationale as in pve-manager commit 5f855ccf ("ui: restore:
improve warning for restoring container with same ID"): it's
surprising to (new) users that all owned mount point volumes are
erased upon container restore, even those that are not currently
selected for backup. This is different from VM restore, where volumes
attached to drives not present in the backup will be kept around as
unused volumes.

Many users have been tripped up by this over the years (e.g.
[0][1][2]). While the warning added by pve-manager commit 5f855ccf
helps, new reports about lost data, and thus very bad UX, still come
in because of this behavior.

This patch adds an option to bring the behavior more in line with VM
restore. A container backup does not record in detail which mount
point volumes were included, so rely on the 'backup' flag to determine
which ones were included and which were not. Note this is a bit more
careful than VM restore, which only checks whether a volume with the
same key is included in the backup and does not also consider the
current 'backup' flag.

Remove snapshots from the kept volumes, since there are no snapshots
after restore.

Note that this does not change the fact that mount point volumes
(according to the configuration contained in the backup) will still be
allocated, so more space is required in scenarios where some volumes
are kept.
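
For example (hypothetical numbers): if an 8G mp0 is kept as unused and
the backed-up configuration also defines an 8G mp0, the restore
allocates a fresh 8G volume, so roughly 16G is in use afterwards
instead of 8G.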

The long term plan is to allow selecting actions for volumes
individually.

[0]: https://bugzilla.proxmox.com/show_bug.cgi?id=3783
[1]: https://forum.proxmox.com/threads/109707/post-745415
[2]: https://forum.proxmox.com/threads/111760/post-482045

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---

Changes in v2:
    * add an API parameter to make the new behavior opt-in (and have
      the UI opt-in by default with the next patch)
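
A usage sketch from the CLI side (illustrative, not part of this
patch; assuming pct picks up the new API parameter like other
create/restore options, and with a shortened archive name):

    # keep mount point volumes not covered by the backup as 'unused'
    pct restore 123 local:backup/vzdump-lxc-123.tar.zst \
        --restore-safeguard-mp-volumes 1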

 src/PVE/API2/LXC.pm | 80 ++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 79 insertions(+), 1 deletion(-)

diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 7cb5122..128566b 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -16,6 +16,7 @@ use PVE::DataCenterConfig;
 use PVE::AccessControl;
 use PVE::Firewall;
 use PVE::Storage;
+use PVE::RESTEnvironment qw(log_warn);
 use PVE::RESTHandler;
 use PVE::RPCEnvironment;
 use PVE::ReplicationConfig;
@@ -52,6 +53,56 @@ my $check_storage_access_migrate = sub {
 	if !$scfg->{content}->{rootdir};
 };
 
+my sub restore_keep_non_backup_volumes {
+    my ($storecfg, $old_conf, $volumes_in_backup) = @_;
+
+    my $kept_volumes = []; # an array to preserve the order
+    my $kept_volumes_hash = {}; # hash to simplify check for presence
+
+    PVE::LXC::Config->foreach_volume_full($old_conf, { include_unused => 1 }, sub {
+	my ($ms, $mountpoint) = @_;
+
+	return if $mountpoint->{type} ne 'volume';
+
+	my ($keep, $reason);
+	# keep if either not previously backed up or not currently set for backup
+	if (!$volumes_in_backup->{$ms}) {
+	    ($keep, $reason) = (1, "not previously backed up");
+	} elsif (!PVE::LXC::Config->mountpoint_backup_enabled($ms, $mountpoint)) {
+	    ($keep, $reason) = (1, "not currently backed up");
+	}
+
+	if ($keep) {
+	    my $volid = $mountpoint->{volume};
+	    $kept_volumes_hash->{$volid} = 1;
+	    push $kept_volumes->@*, $volid;
+
+	    delete $old_conf->{$ms};
+
+	    my $description = "'$ms' ($volid";
+	    $description .= ",mp=$mountpoint->{mp}" if $mountpoint->{mp};
+	    $description .= ")";
+	    print "keeping $description as unused - $reason\n";
+	}
+    });
+
+    # after the restore, there are no snapshots anymore
+    for my $snapname (keys $old_conf->{snapshots}->%*) {
+	PVE::LXC::Config->foreach_volume($old_conf->{snapshots}->{$snapname}, sub {
+	    my ($ms, $mountpoint) = @_;
+
+	    my $volid = $mountpoint->{volume};
+
+	    return if !$kept_volumes_hash->{$volid};
+
+	    eval { PVE::Storage::volume_snapshot_delete($storecfg, $volid, $snapname); };
+	    log_warn("unable to remove snapshot '$snapname' from kept volume '$volid' - $@") if $@;
+	});
+    }
+
+    return $kept_volumes;
+}
+
 __PACKAGE__->register_method ({
     subclass => "PVE::API2::LXC::Config",
     path => '{vmid}/config',
@@ -198,6 +249,14 @@ __PACKAGE__->register_method({
 		default => 0,
 		description => "Start the CT after its creation finished successfully.",
 	    },
+	    'restore-safeguard-mp-volumes' => {
+		optional => 1,
+		type => 'boolean',
+		default => 0,
+		description => "Restore only - Preserve mount point volumes that are not included"
+		    ." in the backup, or that do not currently have the 'backup' flag set, as"
+		    ." 'unused' volumes.",
+	    },
 	}),
     },
     returns => {
@@ -216,6 +275,7 @@ __PACKAGE__->register_method({
 	my $ignore_unpack_errors = extract_param($param, 'ignore-unpack-errors');
 	my $bwlimit = extract_param($param, 'bwlimit');
 	my $start_after_create = extract_param($param, 'start');
+	my $restore_safeguard_mp_volumes = extract_param($param, 'restore-safeguard-mp-volumes');
 
 	my $basecfg_fn = PVE::LXC::Config->config_file($vmid);
 	my $same_container_exists = -f $basecfg_fn;
@@ -381,6 +441,8 @@ __PACKAGE__->register_method({
 	    my $was_template;
 
 	    my $vollist = [];
+	    my $volumes_in_backup = {};
+	    my $kept_volumes = [];
 	    eval {
 		my $orig_mp_param; # only used if $restore
 		if ($restore) {
@@ -428,6 +490,8 @@ __PACKAGE__->register_method({
 			    my ($ms, $mountpoint) = @_;
 			    my $type = $mountpoint->{type};
 			    if ($type eq 'volume') {
+				$volumes_in_backup->{$ms} = 1
+				    if PVE::LXC::Config->mountpoint_backup_enabled($ms, $mountpoint);
 				die "unable to detect disk size - please specify $ms (size)\n"
 				    if !defined($mountpoint->{size});
 				my $disksize = $mountpoint->{size} / (1024 * 1024 * 1024); # create_disks expects GB as unit size
@@ -463,7 +527,11 @@ __PACKAGE__->register_method({
 
 		# we always have the 'create' lock so check for more than 1 entry
 		if (scalar(keys %$old_conf) > 1) {
-		    # destroy old container volumes
+		    # destroy old container volumes - keep not backed-up ones if requested
+		    if ($restore_safeguard_mp_volumes) {
+			$kept_volumes = restore_keep_non_backup_volumes(
+			    $storage_cfg, $old_conf, $volumes_in_backup);
+		    }
 		    PVE::LXC::destroy_lxc_container($storage_cfg, $vmid, $old_conf, { lock => 'create' });
 		}
 
@@ -497,6 +565,13 @@ __PACKAGE__->register_method({
 		foreach my $mp (keys %$delayed_mp_param) {
 		    $conf->{$mp} = $delayed_mp_param->{$mp};
 		}
+
+		# register kept volumes as unused
+		for my $volid ($kept_volumes->@*) {
+		    eval { PVE::LXC::Config->add_unused_volume($conf, $volid); };
+		    log_warn("orphaned volume '$volid' - $@") if $@;
+		}
+
 		# If the template flag was set, we try to convert again to template after restore
 		if ($was_template) {
 		    print STDERR "Convert restored container to template...\n";
@@ -510,6 +585,9 @@ __PACKAGE__->register_method({
 		warn $@ if $@;
 		PVE::LXC::destroy_disks($storage_cfg, $vollist);
 		if ($destroy_config_on_error) {
+		    log_warn("orphaned volumes: " . join(',', $kept_volumes->@*))
+			if scalar($kept_volumes->@*) > 0;
+
 		    eval { PVE::LXC::Config->destroy_config($vmid) };
 		    warn $@ if $@;
 
-- 
2.39.5

[pve-devel] [PATCH manager v2] ui: restore: enable safeguarding of mount point volumes by default
From: Fiona Ebner @ 2025-02-11 11:20 UTC
To: pve-devel

Same rationale as in pve-manager commit 5f855ccf ("ui: restore:
improve warning for restoring container with same ID"): it's
surprising to (new) users that all owned mount point volumes are
erased upon container restore, even those that are not currently
selected for backup. This is different from VM restore, where volumes
attached to drives not present in the backup will be kept around as
unused volumes.

Opt in to the new option by default to change this behavior.

Remove the special message printed regarding the removal of mount
point volumes to avoid confusion/complexity. With the checkbox for
safeguarding present, attention is already directed towards this
behavior. And since the checkbox is enabled by default, one needs to
explicitly opt out for not backed-up mount points to be erased. For
backed-up mount points, it should be clear that they are overwritten
by the restore, since the confirm dialog already states "This will
permanently erase current CT data".

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---

New in v2.

Dependency bump for pve-container is needed.
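
For reviewers, a sketch of the request the UI ends up sending when the
checkbox stays checked (endpoint and other parameters abbreviated;
only the new parameter matters here):

    POST /api2/extjs/nodes/{node}/lxc
        ... restore=1&restore-safeguard-mp-volumes=1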

 www/manager6/window/Restore.js | 18 +++++++++++++++---
 1 file changed, 15 insertions(+), 3 deletions(-)

diff --git a/www/manager6/window/Restore.js b/www/manager6/window/Restore.js
index 6efe1313..3d759d6a 100644
--- a/www/manager6/window/Restore.js
+++ b/www/manager6/window/Restore.js
@@ -47,6 +47,9 @@ Ext.define('PVE.window.Restore', {
 	    if (values.storage) {
 		params.storage = values.storage;
 	    }
+	    if (values['restore-safeguard-mp-volumes']) {
+		params['restore-safeguard-mp-volumes'] = 1;
+	    }
 
 	    ['bwlimit', 'cores', 'name', 'memory', 'sockets'].forEach(opt => {
 		if ((values[opt] ?? '') !== '') {
@@ -96,9 +99,6 @@ Ext.define('PVE.window.Restore', {
 		    gettext('This will permanently erase current {0} data.'),
 		    view.vmtype === 'lxc' ? 'CT' : 'VM',
 		)}`;
-		if (view.vmtype === 'lxc') {
-		    confirmMsg += `<br>${gettext('Mount point volumes are also erased.')}`;
-		}
 		Ext.Msg.confirm(gettext('Confirm'), confirmMsg, function(btn) {
 		    if (btn === 'yes') {
 			executeRestore();
@@ -278,6 +278,18 @@ Ext.define('PVE.window.Restore', {
 			},
 		    ],
 		},
+		{
+		    xtype: 'proxmoxcheckbox',
+		    name: 'restore-safeguard-mp-volumes',
+		    itemId: 'restoreSafeguardMpVolumes',
+		    fieldLabel: gettext('Safeguard Mount Point Volumes'),
+		    flex: 1,
+		    checked: true,
+		    autoEl: {
+			tag: 'div',
+			'data-qtip': gettext("Preserve mount point volumes that are not included in the backup as 'unused'."),
+		    },
+		},
 	    );
 	} else if (me.vmtype === 'qemu') {
 	    items.push({
-- 
2.39.5