* [pve-devel] [PATCH container] api: restore: keep not backed-up volumes
From: Fiona Ebner @ 2025-02-05 14:35 UTC
To: pve-devel

Same rationale as in pve-manager commit 5f855ccf ("ui: restore:
improve warning for restoring container with same ID"): it's
surprising to (new) users that all owned mount point volumes are
erased upon container restore, even those that are not currently
selected for backup. This is different from VM restore, where volumes
attached at drives not present in the backup will be kept around as
unused volumes.

Many users got tripped up by this over the years (e.g. [0][1][2]).
While the warning added by pve-manager commit 5f855ccf helps, the fact is
that there are still new reports about lost data and thus very bad UX,
because of this behavior.

This patch brings the behavior more in line with VM restore. A
container backup does not contain the detailed information about which
mount point volumes were included, so rely on the 'backup' flag to
determine which ones were included and which were not. Note this is
a bit more careful than VM restore, which only checks whether a volume
with the same key is included in the backup and does not also consider
the current 'backup' flag.
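
To make the decision rule concrete, a minimal standalone sketch follows
(an illustration, not code from the patch; the helper name and the
example data are made up, only the two conditions mirror the description
above and the hunk further down):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Simplified restatement of the keep/discard rule: keep a mount point
    # volume if it either was not part of the backup or is not currently
    # flagged for backup in the existing configuration.
    sub should_keep_volume {
        my ($ms, $volumes_in_backup, $backup_enabled_now) = @_;

        return "not previously backed up" if !$volumes_in_backup->{$ms};
        return "not currently backed up" if !$backup_enabled_now;
        return; # part of the backup and still flagged -> gets destroyed
    }

    # Example: mp0 was included in the backup and still has backup=1,
    # mp1 was never included.
    my $volumes_in_backup = { mp0 => 1 };
    for my $case (['mp0', 1], ['mp1', 0]) {
        my ($ms, $flag) = @$case;
        my $reason = should_keep_volume($ms, $volumes_in_backup, $flag);
        print "$ms: ", ($reason ? "keep ($reason)" : "destroy"), "\n";
    }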

Remove snapshots from the kept volumes; there are no snapshots after
restore.

Note that this does not change the fact that mount point volumes
(according to the configuration contained in the backup) will be
allocated and thus more space is required in scenarios where some
volumes are kept.

The long term plan is to allow selecting actions for volumes
individually. For now, use a safer default.

[0]: https://bugzilla.proxmox.com/show_bug.cgi?id=3783
[1]: https://forum.proxmox.com/threads/109707/post-745415
[2]: https://forum.proxmox.com/threads/111760/post-482045

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---

Since other users might rely on the current removal, we probably want
to wait with this until either the next point release or even until
PVE 9.

src/PVE/API2/LXC.pm | 69 ++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 68 insertions(+), 1 deletion(-)
diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 7cb5122..fb952c0 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -16,6 +16,7 @@ use PVE::DataCenterConfig;
use PVE::AccessControl;
use PVE::Firewall;
use PVE::Storage;
+use PVE::RESTEnvironment qw(log_warn);
use PVE::RESTHandler;
use PVE::RPCEnvironment;
use PVE::ReplicationConfig;
@@ -52,6 +53,56 @@ my $check_storage_access_migrate = sub {
if !$scfg->{content}->{rootdir};
};
+my sub restore_keep_non_backup_volumes {
+ my ($storecfg, $old_conf, $volumes_in_backup) = @_;
+
+ my $kept_volumes = []; # an array to preserve the order
+ my $kept_volumes_hash = {}; # hash to simplify check for presence
+
+ PVE::LXC::Config->foreach_volume_full($old_conf, { include_unused => 1 }, sub {
+ my ($ms, $mountpoint) = @_;
+
+ return if $mountpoint->{type} ne 'volume';
+
+ my ($keep, $reason);
+ # keep if either not previously backed up or not currently set for backup
+ if (!$volumes_in_backup->{$ms}) {
+ ($keep, $reason) = (1, "not previously backed up");
+ } elsif (!PVE::LXC::Config->mountpoint_backup_enabled($ms, $mountpoint)) {
+ ($keep, $reason) = (1, "not currently backed up");
+ }
+
+ if ($keep) {
+ my $volid = $mountpoint->{volume};
+ $kept_volumes_hash->{$volid} = 1;
+ push $kept_volumes->@*, $volid;
+
+ delete $old_conf->{$ms};
+
+ my $description = "'$ms' ($volid";
+ $description .= ",mp=$mountpoint->{mp}" if $mountpoint->{mp};
+ $description .= ")";
+ print "keeping $description as unused - $reason\n";
+ }
+ });
+
+ # after the restore, there are no snapshots anymore
+ for my $snapname (keys $old_conf->{snapshots}->%*) {
+ PVE::LXC::Config->foreach_volume($old_conf->{snapshots}->{$snapname}, sub {
+ my ($ms, $mountpoint) = @_;
+
+ my $volid = $mountpoint->{volume};
+
+ return if !$kept_volumes_hash->{$volid};
+
+ eval { PVE::Storage::volume_snapshot_delete($storecfg, $volid, $snapname); };
+ log_warn("unable to remove snapshot '$snapname' from kept volume '$volid' - $@") if $@;
+ });
+ }
+
+ return $kept_volumes;
+}
+
__PACKAGE__->register_method ({
subclass => "PVE::API2::LXC::Config",
path => '{vmid}/config',
@@ -381,6 +432,8 @@ __PACKAGE__->register_method({
my $was_template;
my $vollist = [];
+ my $volumes_in_backup = {};
+ my $kept_volumes = [];
eval {
my $orig_mp_param; # only used if $restore
if ($restore) {
@@ -428,6 +481,8 @@ __PACKAGE__->register_method({
my ($ms, $mountpoint) = @_;
my $type = $mountpoint->{type};
if ($type eq 'volume') {
+ $volumes_in_backup->{$ms} = 1
+ if PVE::LXC::Config->mountpoint_backup_enabled($ms, $mountpoint);
die "unable to detect disk size - please specify $ms (size)\n"
if !defined($mountpoint->{size});
my $disksize = $mountpoint->{size} / (1024 * 1024 * 1024); # create_disks expects GB as unit size
@@ -463,7 +518,9 @@ __PACKAGE__->register_method({
# we always have the 'create' lock so check for more than 1 entry
if (scalar(keys %$old_conf) > 1) {
- # destroy old container volumes
+ # destroy old container volumes, but keep not backed-up ones
+ $kept_volumes = restore_keep_non_backup_volumes(
+ $storage_cfg, $old_conf, $volumes_in_backup);
PVE::LXC::destroy_lxc_container($storage_cfg, $vmid, $old_conf, { lock => 'create' });
}
@@ -497,6 +554,13 @@ __PACKAGE__->register_method({
foreach my $mp (keys %$delayed_mp_param) {
$conf->{$mp} = $delayed_mp_param->{$mp};
}
+
+ # register kept volumes as unused
+ for my $volid ($kept_volumes->@*) {
+ eval { PVE::LXC::Config->add_unused_volume($conf, $volid); };
+ log_warn("orphaned volume '$volid' - $@") if $@;
+ }
+
# If the template flag was set, we try to convert again to template after restore
if ($was_template) {
print STDERR "Convert restored container to template...\n";
@@ -510,6 +574,9 @@ __PACKAGE__->register_method({
warn $@ if $@;
PVE::LXC::destroy_disks($storage_cfg, $vollist);
if ($destroy_config_on_error) {
+ log_warn("orphaned volumes: " . join(',', $kept_volumes->@*))
+ if scalar($kept_volumes->@*) > 0;
+
eval { PVE::LXC::Config->destroy_config($vmid) };
warn $@ if $@;
--
2.39.5
* Re: [pve-devel] [PATCH container] api: restore: keep not backed-up volumes
From: Thomas Lamprecht @ 2025-02-10 16:05 UTC
To: Proxmox VE development discussion, Fiona Ebner

On 05.02.25 at 15:35, Fiona Ebner wrote:
> Same rationale as in pve-manager commit 5f855ccf ("ui: restore:
> improve warning for restoring container with same ID"): it's
> surprising to (new) users that all owned mount point volumes are
> erased upon container restore, even those that are not currently
> selected for backup. This is different from VM restore, where volumes
> attached at drives not present in the backup will be kept around as
> unused volumes.
>
> Many users got tripped up by this over the years (e.g. [0][1][2]).
> While the warning added by pve-manager commit 5f855ccf helps, the fact is
> that there are still new reports about lost data and thus very bad UX,
> because of this behavior.
>
> This patch brings the behavior more in line with VM restore. A
> container backup does not contain the detailed information about which
> mount point volumes were included, so rely on the 'backup' flag to
> determine which ones were included and which were not. Note this is
> a bit more careful than VM restore, which only checks whether a volume
> with the same key is included in the backup and does not also consider
> the current 'backup' flag.
>
> Remove snapshots from the kept volumes; there are no snapshots after
> restore.
>
> Note that this does not change the fact that mount point volumes
> (according to the configuration contained in the backup) will be
> allocated and thus more space is required in scenarios where some
> volumes are kept.
>
> The long term plan is to allow selecting actions for volumes
> individually. For now, use a safer default.
>
> [0]: https://bugzilla.proxmox.com/show_bug.cgi?id=3783
> [1]: https://forum.proxmox.com/threads/109707/post-745415
> [2]: https://forum.proxmox.com/threads/111760/post-482045
>
> Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
> ---
>
> Since other users might rely on the current removal, we probably want
> to wait with this until either the next point release or even until
> PVE 9.
>

Another option might be to make this opt-out in the UI, i.e. not
per volume (can be done later) but for the whole restore.
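
Such a whole-restore opt-out could, for example, be a boolean API
parameter; a rough sketch below (purely hypothetical: the parameter
name, default and wiring are assumptions for discussion, not existing
PVE API and not part of this patch):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical schema entry for the restore call - name and default
    # are assumptions for discussion only, not existing Proxmox VE API.
    my $extra_param = {
        'keep-not-backed-up' => {
            type => 'boolean',
            optional => 1,
            default => 1,
            description => "On restore, keep owned mount point volumes that"
                . " are not part of the backup as unused volumes instead of"
                . " erasing them.",
        },
    };

    # The restore path would then only fall back to the old behavior
    # (destroying all owned volumes) when the caller explicitly opts out:
    my $param = {}; # as passed in by an API caller
    my $keep = $param->{'keep-not-backed-up'}
        // $extra_param->{'keep-not-backed-up'}->{default};
    print "would ", ($keep ? "keep" : "destroy"), " not backed-up volumes\n";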

Btw., pve-manager commit 5f855ccf would need to be reverted alongside
this to avoid making the UX even more confusing.