* [pve-devel] [PATCH manager 1/4] remove all stale replicated volumes
From: Fabian Ebner @ 2020-10-01 11:15 UTC
To: pve-devel
Commit 0433b86df6dfdf1d64ee09322719a02a91690707 introduced a
regression where only those stale replicated volumes whose snapshot
carried an older timestamp would be cleaned up. This restores the
previous behavior, where all stale replicated volumes, i.e. those
with a replication snapshot but no longer present in $wanted_volids,
are cleaned up.
Before this patch, a volume removed from the guest config would only
be cleaned up the second time the replication ran afterwards.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---
PVE/CLI/pvesr.pm | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/PVE/CLI/pvesr.pm b/PVE/CLI/pvesr.pm
index cb79e2bf..1f175470 100644
--- a/PVE/CLI/pvesr.pm
+++ b/PVE/CLI/pvesr.pm
@@ -137,8 +137,9 @@ __PACKAGE__->register_method ({
         push @$volids, map { $_->{volid} } @$images;
     }
     my ($last_snapshots, $cleaned_replicated_volumes) = PVE::Replication::prepare($storecfg, $volids, $jobid, $last_sync, $parent_snapname, $logfunc);
-    foreach my $volid (keys %$cleaned_replicated_volumes) {
-        if (!$wanted_volids->{$volid}) {
+    foreach my $volid (@{$volids}) {
+        if (($last_snapshots->{$volid} || $cleaned_replicated_volumes->{$volid})
+            && !$wanted_volids->{$volid}) {
             $logfunc->("$jobid: delete stale volume '$volid'");
             PVE::Storage::vdisk_free($storecfg, $volid);
             delete $last_snapshots->{$volid};
--
2.20.1
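
For illustration, the new cleanup condition can be exercised in isolation.
The following is a minimal standalone Perl sketch with made-up volume ids,
not the actual pvesr code path:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # volumes that still carry a replication snapshot from the last run
    my $last_snapshots = {
        'local-zfs:vm-100-disk-0' => 1,
        'local-zfs:vm-100-disk-1' => 1,
    };
    # volumes whose replication snapshots prepare() just removed
    my $cleaned_replicated_volumes = {};
    # volumes still referenced by the guest configuration
    my $wanted_volids = { 'local-zfs:vm-100-disk-0' => 1 };

    # disk-1 was removed from the config but still has a replication
    # snapshot, so it must be freed on the very first run afterwards
    for my $volid ('local-zfs:vm-100-disk-0', 'local-zfs:vm-100-disk-1') {
        if (($last_snapshots->{$volid} || $cleaned_replicated_volumes->{$volid})
            && !$wanted_volids->{$volid}) {
            print "delete stale volume '$volid'\n";
        }
    }

In the pre-patch version, the loop only visited keys of
%$cleaned_replicated_volumes, so a volume that appears in $last_snapshots
alone, like disk-1 above, would not have been freed until a later run.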
* [pve-devel] [PATCH guest-common 2/4] also consider storages from replication state on full removal
From: Fabian Ebner @ 2020-10-01 11:15 UTC
To: pve-devel
This prevents left-over volumes in the following situation:
1. a previous replication with volumes on two different storages A and B exists
2. all volumes on storage B are removed from the guest configuration
3. a full removal of the replication job is scheduled before the next normal replication runs
4. the replication target is then left with the volumes on B
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---
PVE/Replication.pm | 1 +
1 file changed, 1 insertion(+)
diff --git a/PVE/Replication.pm b/PVE/Replication.pm
index ae0f145..15845bb 100644
--- a/PVE/Replication.pm
+++ b/PVE/Replication.pm
@@ -245,6 +245,7 @@ sub replicate {
 
     if ($remove_job eq 'full' && $jobcfg->{target} ne $local_node) {
         # remove all remote volumes
         my @store_list = map { (PVE::Storage::parse_volume_id($_))[0] } @$sorted_volids;
+        push @store_list, @{$state->{storeid_list}};
 
         my %hash = map { $_ => 1 } @store_list;
--
2.20.1
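
The effect of the added line can be reproduced standalone (made-up storage
ids; PVE::Storage is not needed for the deduplication itself):

    use strict;
    use warnings;

    # storages derived from the volumes currently in the guest config
    my @store_list = ('storage-A');

    # storages recorded in the replication state from earlier runs;
    # storage-B only survives here after its volumes were removed
    # from the config (step 2 of the scenario above)
    my $state = { storeid_list => [ 'storage-A', 'storage-B' ] };
    push @store_list, @{$state->{storeid_list}};

    # deduplicate via a hash, as the surrounding code does
    my %hash = map { $_ => 1 } @store_list;
    print join(', ', sort keys %hash), "\n";    # storage-A, storage-B

Without the added push, storage-B would never be passed to the target node
for cleanup.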
* [pve-devel] [PATCH guest-common 3/4] rely only on storeid_list from replication state on full removal
From: Fabian Ebner @ 2020-10-01 11:15 UTC
To: pve-devel
Using the information from the replication state alone should be more correct.
For example, the configuration might contain a new, not yet replicated volume
when the full removal happens, which would cause unneeded scanning on the
target node.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---
Could be squashed with the previous patch.
There could be an edge case where the information from
the config might be useful: namely if the replication state is
missing/corrupt and full removal happens immediately without
normal replication happening in between. But IMHO it's not worth
keeping the extra code just for that...
PVE/Replication.pm | 9 +--------
1 file changed, 1 insertion(+), 8 deletions(-)
diff --git a/PVE/Replication.pm b/PVE/Replication.pm
index 15845bb..60cfc67 100644
--- a/PVE/Replication.pm
+++ b/PVE/Replication.pm
@@ -243,15 +243,8 @@ sub replicate {
     $logfunc->("start job removal - mode '${remove_job}'");
 
     if ($remove_job eq 'full' && $jobcfg->{target} ne $local_node) {
-        # remove all remote volumes
-        my @store_list = map { (PVE::Storage::parse_volume_id($_))[0] } @$sorted_volids;
-        push @store_list, @{$state->{storeid_list}};
-
-        my %hash = map { $_ => 1 } @store_list;
-
         my $ssh_info = PVE::SSHInfo::get_ssh_info($jobcfg->{target});
-        remote_prepare_local_job($ssh_info, $jobid, $vmid, [], [ keys %hash ], 1, undef, 1, $logfunc);
-
+        remote_prepare_local_job($ssh_info, $jobid, $vmid, [], $state->{storeid_list}, 1, undef, 1, $logfunc);
     }
     # remove all local replication snapshots (lastsync => 0)
     prepare($storecfg, $sorted_volids, $jobid, 1, undef, $logfunc);
--
2.20.1
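
To make the commit message's example concrete, here is a standalone sketch
contrasting the two sources of storage ids (ids are made up, and
parse_volume_id is emulated with a split, since volids have the form
'<storeid>:<volname>'):

    use strict;
    use warnings;

    # a volume on storage-C was just added to the config,
    # but has not been replicated yet
    my @sorted_volids = ('storage-A:vm-100-disk-0', 'storage-C:vm-100-disk-1');
    my @from_config = map { (split(/:/, $_, 2))[0] } @sorted_volids;

    # the replication state lists exactly the storages that
    # hold replicated volumes from previous runs
    my $state = { storeid_list => [ 'storage-A', 'storage-B' ] };

    print "config-derived: @from_config\n";               # storage-A storage-C
    print "state-derived:  @{$state->{storeid_list}}\n";  # storage-A storage-B

Scanning storage-C on the target would be pointless, while storage-B is only
known from the state anyway, so the state alone is the better source.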
* [pve-devel] [PATCH guest-common 4/4] cleanup storeid_list creation
From: Fabian Ebner @ 2020-10-01 11:15 UTC
To: pve-devel
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---
Not sure if the expression for map{} is too cluttered here...
PVE/Replication.pm | 8 ++------
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/PVE/Replication.pm b/PVE/Replication.pm
index 60cfc67..132e8bb 100644
--- a/PVE/Replication.pm
+++ b/PVE/Replication.pm
@@ -262,12 +262,8 @@ sub replicate {
     my ($base_snapshots, $last_snapshots, $last_sync_snapname) = find_common_replication_snapshot(
         $ssh_info, $jobid, $vmid, $storecfg, $sorted_volids, $state->{storeid_list}, $last_sync, $parent_snapname, $logfunc);
 
-    my $storeid_hash = {};
-    foreach my $volid (@$sorted_volids) {
-        my ($storeid) = PVE::Storage::parse_volume_id($volid);
-        $storeid_hash->{$storeid} = 1;
-    }
-    $state->{storeid_list} = [ sort keys %$storeid_hash ];
+    my %storeid_hash = map { (PVE::Storage::parse_volume_id($_))[0] => 1 } @$sorted_volids;
+    $state->{storeid_list} = [ sort keys %storeid_hash ];
 
     # freeze filesystem for data consistency
     if ($freezefs) {
--
2.20.1
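
For reference, the map{} expression in a self-contained form
(parse_volume_id again emulated with a split on ':'):

    use strict;
    use warnings;

    my @sorted_volids = (
        'local-zfs:vm-100-disk-0',
        'local-zfs:vm-100-disk-1',
        'nfs-backup:100/vm-100-disk-2.qcow2',
    );

    # key the hash by storage id to deduplicate, then sort for a stable list
    my %storeid_hash = map { (split(/:/, $_, 2))[0] => 1 } @sorted_volids;
    my $storeid_list = [ sort keys %storeid_hash ];

    print join(', ', @$storeid_list), "\n";    # local-zfs, nfs-backup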