From: Fabian Ebner <f.ebner@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH guest-common 2/3] partially fix 3111: snapshot rollback: fix removing replication snapshots
Date: Mon, 12 Apr 2021 13:37:16 +0200
Message-ID: <20210412113717.25356-2-f.ebner@proxmox.com>
In-Reply-To: <20210412113717.25356-1-f.ebner@proxmox.com>

Get the replicatable volumes from the snapshot config rather than the current
config, and further filter them to those that will actually be rolled back.

Previously, a volume that only had replication snapshots (e.g. because it was
added after the non-replication snapshot was taken, or the vmstate volume) would
end up without any snapshots whatsoever. Then, on the next replication run, such
a volume would lead to an error, because replication tried to do a full sync,
but the target volume still existed.
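
To spell out why that is fatal for the next run, here is a conceptual,
standalone sketch of the decision replication ends up facing (a hypothetical
helper for illustration only, not the actual PVE::Replication code):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical shorthand for the choice replication has to make once a
    # volume has lost all of its snapshots.
    sub pick_sync_mode {
        my ($source_snapshots, $target_volume_exists) = @_;

        # a remaining snapshot is a common base -> incremental sync is possible
        return 'incremental' if @$source_snapshots;

        # no snapshots left on the source -> only a full sync would work,
        # but that cannot proceed while the target volume still exists
        die "full sync not possible: target volume already exists\n"
            if $target_volume_exists;

        return 'full';
    }

    # a volume whose only (replication) snapshots were just removed:
    eval { pick_sync_mode([], 1) };
    print "replication run fails: $@" if $@;
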
This should be enough for many real-world scenarios, but it is not a complete
fix: it is still possible to run into the problem by removing the last
(non-replication) snapshot after a rollback, before replication has had a
chance to run once.

The list of volids is not required to be sorted by volid for prepare().
(It is now ordered by how foreach_volume() iterates rather than sorted by name.)
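
For illustration, the filtering introduced by the hunk below boils down to the
following standalone sketch (the volume IDs are made up, and plain data
structures stand in for foreach_volume()/volid_key()):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # pretend result of get_replicatable_volumes() for the *snapshot* config
    my $volumes = {
        'local-zfs:vm-100-disk-0' => 1,
    };

    # pretend volumes referenced by the snapshot config, in the order
    # foreach_volume() would visit them
    my @snap_volids = ('local-zfs:vm-100-disk-0', 'local-lvm:vm-100-disk-1');

    # keep only volumes that are replicatable *and* part of the rollback
    my $volids = [];
    for my $volid (@snap_volids) {
        push @{$volids}, $volid if $volumes->{$volid};
    }

    print "replication snapshots would be removed from: @{$volids}\n";
    # only local-zfs:vm-100-disk-0; a volume present only in the current
    # config (added after the snapshot) is never visited here, so it keeps
    # its replication snapshot and the next run can still sync incrementally
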
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---
PVE/AbstractConfig.pm | 17 +++++++++++++----
1 file changed, 13 insertions(+), 4 deletions(-)
diff --git a/PVE/AbstractConfig.pm b/PVE/AbstractConfig.pm
index 3348d8a..c4d1d6c 100644
--- a/PVE/AbstractConfig.pm
+++ b/PVE/AbstractConfig.pm
@@ -974,13 +974,22 @@ sub snapshot_rollback {
         if ($prepare) {
             my $repl_conf = PVE::ReplicationConfig->new();
             if ($repl_conf->check_for_existing_jobs($vmid, 1)) {
-                # remove all replication snapshots
-                my $volumes = $class->get_replicatable_volumes($storecfg, $vmid, $conf, 1);
-                my $sorted_volids = [ sort keys %$volumes ];
+                # remove replication snapshots on volumes affected by rollback *only*!
+                my $volumes = $class->get_replicatable_volumes($storecfg, $vmid, $snap, 1);
+
+                my $volids = [];
+                $class->foreach_volume($snap, sub {
+                    my ($vs, $volume) = @_;
+
+                    my $volid_key = $class->volid_key();
+                    my $volid = $volume->{$volid_key};
+
+                    push @{$volids}, $volid if $volumes->{$volid};
+                });
 
                 # remove all local replication snapshots (jobid => undef)
                 my $logfunc = sub { my $line = shift; chomp $line; print "$line\n"; };
-                PVE::Replication::prepare($storecfg, $sorted_volids, undef, 1, undef, $logfunc);
+                PVE::Replication::prepare($storecfg, $volids, undef, 1, undef, $logfunc);
             }
 
             $class->foreach_volume($snap, sub {
--
2.20.1