* [pve-devel] [RFC PATCH guest-common 1/2] ReplicationState: purge state from non local vms
@ 2022-05-24 11:41 Dominik Csapak
2022-05-24 11:41 ` [pve-devel] [PATCH guest-common 2/2] ReplicationState: deterministically order replication jobs Dominik Csapak
` (2 more replies)
0 siblings, 3 replies; 7+ messages in thread
From: Dominik Csapak @ 2022-05-24 11:41 UTC (permalink / raw)
To: pve-devel
when running replication, we don't want to keep replication states for
non-local vms. Normally this would not be a problem, since on migration,
we transfer the states anyway, but when the ha-manager steals a vm, it
cannot do that. In that case, having an old state lying around is
harmful, since the code does not expect the state to be out-of-sync
with the actual snapshots on disk.
One such problem is the following:
Replicate vm 100 from node A to nodes B and C, and activate HA. When node
A dies, the vm will be relocated to e.g. node B and replication starts from
there. If node B now has an old state lying around for its sync to node
C, it might delete the common base snapshots of B and C and can no longer
sync.
Deleting the state for all non-local guests fixes that issue, since
replication always starts fresh, and the potentially existing old state
cannot be valid anyway since we just relocated the vm here (from a dead
node).
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
i tested it in various configurations with live migration, offline
migration, ha relocation, etc. and did not find an issue, but since
the check was already there and commented out, maybe someone knows a
reason why this might not be a good idea, so sending as RFC
src/PVE/ReplicationState.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/PVE/ReplicationState.pm b/src/PVE/ReplicationState.pm
index 0a5e410..8eebb42 100644
--- a/src/PVE/ReplicationState.pm
+++ b/src/PVE/ReplicationState.pm
@@ -215,7 +215,7 @@ sub purge_old_states {
my $tid = $plugin->get_unique_target_id($jobcfg);
my $vmid = $jobcfg->{guest};
$used_tids->{$vmid}->{$tid} = 1
- if defined($vms->{ids}->{$vmid}); # && $vms->{ids}->{$vmid}->{node} eq $local_node;
+ if defined($vms->{ids}->{$vmid}) && $vms->{ids}->{$vmid}->{node} eq $local_node;
}
my $purge_state = sub {
--
2.30.2
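To make the effect of the one-line change clearer, here is a rough sketch of
the purge flow it sits in - simplified from purge_old_states in
ReplicationState.pm, with some surrounding details paraphrased, so treat it
as illustrative rather than verbatim:
---
my $local_node = PVE::INotify::nodename();
my $cfg = PVE::ReplicationConfig->new();
my $vms = PVE::Cluster::get_vmlist();
my $used_tids = {};

foreach my $jobid (sort keys %{$cfg->{ids}}) {
    my $jobcfg = $cfg->{ids}->{$jobid};
    my $tid = PVE::ReplicationConfig->lookup($jobcfg->{type})->get_unique_target_id($jobcfg);
    my $vmid = $jobcfg->{guest};
    # only mark target ids of guests that are currently on this node;
    # a stolen guest is skipped on purpose, so its stale state falls
    # through to the purge below and replication starts fresh
    $used_tids->{$vmid}->{$tid} = 1
        if defined($vms->{ids}->{$vmid}) && $vms->{ids}->{$vmid}->{node} eq $local_node;
}

# ... every recorded state whose (vmid, tid) was not marked above
# is then dropped by the $purge_state callback
---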
* [pve-devel] [PATCH guest-common 2/2] ReplicationState: deterministically order replication jobs
2022-05-24 11:41 [pve-devel] [RFC PATCH guest-common 1/2] ReplicationState: purge state from non local vms Dominik Csapak
@ 2022-05-24 11:41 ` Dominik Csapak
2022-05-25 14:30 ` Thomas Lamprecht
2022-05-24 11:43 ` [pve-devel] [RFC PATCH guest-common 1/2] ReplicationState: purge state from non local vms Dominik Csapak
[not found] ` <1654180326.vcply64b2j.astroid@nora.none>
2 siblings, 1 reply; 7+ messages in thread
From: Dominik Csapak @ 2022-05-24 11:41 UTC (permalink / raw)
To: pve-devel
if we have multiple jobs for the same vmid with the same schedule,
the last_sync, next_sync and vmid will always be the same, so the order
depends on the iteration order of the $jobs hash (which is random; thanks,
perl).
to have a fixed order, also take the jobid into consideration
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/PVE/ReplicationState.pm | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/src/PVE/ReplicationState.pm b/src/PVE/ReplicationState.pm
index 8eebb42..ae6b1fb 100644
--- a/src/PVE/ReplicationState.pm
+++ b/src/PVE/ReplicationState.pm
@@ -322,7 +322,9 @@ sub get_next_job {
return $res if $res != 0;
$res = $joba->{next_sync} <=> $jobb->{next_sync};
return $res if $res != 0;
- return $joba->{guest} <=> $jobb->{guest};
+ $res = $joba->{guest} <=> $jobb->{guest};
+ return $res if $res != 0;
+ return $a cmp $b;
};
foreach my $jobid (sort $sort_func keys %$jobs) {
--
2.30.2
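To illustrate the fix, a small self-contained sketch (the job data is made
up, and last_iteration is inlined into the job hashes for brevity; in the
real get_next_job it comes from a separate state lookup): when everything
else compares equal, only the final $a cmp $b on the job ids keeps the
order stable across runs:
---
#!/usr/bin/perl
use strict;
use warnings;

# two jobs for guest 100 with identical schedule and state (made-up data)
my $jobs = {
    '100-0' => { guest => 100, next_sync => 1000, last_iteration => 1 },
    '100-1' => { guest => 100, next_sync => 1000, last_iteration => 1 },
};

my $sort_func = sub {
    my ($joba, $jobb) = ($jobs->{$a}, $jobs->{$b});
    return $joba->{last_iteration} <=> $jobb->{last_iteration}
        || $joba->{next_sync} <=> $jobb->{next_sync}
        || $joba->{guest} <=> $jobb->{guest}
        || $a cmp $b;  # without this, perl's hash order decides
};

# prints "100-0 100-1" on every run, independent of hash randomization
print join(' ', sort $sort_func keys %$jobs), "\n";
---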
* Re: [pve-devel] [RFC PATCH guest-common 1/2] ReplicationState: purge state from non local vms
2022-05-24 11:41 [pve-devel] [RFC PATCH guest-common 1/2] ReplicationState: purge state from non local vms Dominik Csapak
2022-05-24 11:41 ` [pve-devel] [PATCH guest-common 2/2] ReplicationState: deterministically order replication jobs Dominik Csapak
@ 2022-05-24 11:43 ` Dominik Csapak
[not found] ` <1654180326.vcply64b2j.astroid@nora.none>
2 siblings, 0 replies; 7+ messages in thread
From: Dominik Csapak @ 2022-05-24 11:43 UTC (permalink / raw)
To: pve-devel
forgot to mention in the commit message, i believe this is the issue the user
runs into here:
https://forum.proxmox.com/threads/zfs-replication-sometimes-fails.104134/
* Re: [pve-devel] [PATCH guest-common 2/2] ReplicationState: deterministically order replication jobs
2022-05-24 11:41 ` [pve-devel] [PATCH guest-common 2/2] ReplicationState: deterministically order replication jobs Dominik Csapak
@ 2022-05-25 14:30 ` Thomas Lamprecht
2022-05-27 6:23 ` Dominik Csapak
0 siblings, 1 reply; 7+ messages in thread
From: Thomas Lamprecht @ 2022-05-25 14:30 UTC (permalink / raw)
To: Proxmox VE development discussion, Dominik Csapak
On 24/05/2022 13:41, Dominik Csapak wrote:
> if we have multiple jobs for the same vmid with the same schedule,
> the last_sync, next_sync and vmid will always be the same, so the order
> depends on the iteration order of the $jobs hash (which is random; thanks,
> perl).
>
> to have a fixed order, also take the jobid into consideration
>
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
> src/PVE/ReplicationState.pm | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/src/PVE/ReplicationState.pm b/src/PVE/ReplicationState.pm
> index 8eebb42..ae6b1fb 100644
> --- a/src/PVE/ReplicationState.pm
> +++ b/src/PVE/ReplicationState.pm
> @@ -322,7 +322,9 @@ sub get_next_job {
> return $res if $res != 0;
> $res = $joba->{next_sync} <=> $jobb->{next_sync};
> return $res if $res != 0;
> - return $joba->{guest} <=> $jobb->{guest};
> + $res = $joba->{guest} <=> $jobb->{guest};
> + return $res if $res != 0;
> + return $a cmp $b;
nit, but couldn't this be
return $joba->{guest} <=> $jobb->{guest} || $a cmp $b;
instead? The right side of the logical OR only gets evaluated if the left
side's result is 0 (well, also on undef and the empty string "", but those
cannot happen with the spaceship operator).
> };
>
> foreach my $jobid (sort $sort_func keys %$jobs) {
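(A minimal standalone illustration of the idiom Thomas describes - not from
the patch, data made up:)
---
# <=> returns -1, 0 or 1, and || only falls through on a false value,
# so each comparison acts as a tie-break for the one before it
my @jobs = ({ guest => 100, id => 'job-b' }, { guest => 100, id => 'job-a' });
my @sorted = sort { $a->{guest} <=> $b->{guest} || $a->{id} cmp $b->{id} } @jobs;
# guests compare equal (0), so the id comparison decides: job-a, job-b
---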
* Re: [pve-devel] [PATCH guest-common 2/2] ReplicationState: deterministically order replication jobs
2022-05-25 14:30 ` Thomas Lamprecht
@ 2022-05-27 6:23 ` Dominik Csapak
2022-05-27 7:22 ` Thomas Lamprecht
0 siblings, 1 reply; 7+ messages in thread
From: Dominik Csapak @ 2022-05-27 6:23 UTC (permalink / raw)
To: Thomas Lamprecht, Proxmox VE development discussion
On 5/25/22 16:30, Thomas Lamprecht wrote:
> On 24/05/2022 13:41, Dominik Csapak wrote:
>> if we have multiple jobs for the same vmid with the same schedule,
>> the last_sync, next_sync and vmid will always be the same, so the order
>> depends on the iteration order of the $jobs hash (which is random; thanks,
>> perl).
>>
>> to have a fixed order, also take the jobid into consideration
>>
>> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
>> ---
>> src/PVE/ReplicationState.pm | 4 +++-
>> 1 file changed, 3 insertions(+), 1 deletion(-)
>>
>> diff --git a/src/PVE/ReplicationState.pm b/src/PVE/ReplicationState.pm
>> index 8eebb42..ae6b1fb 100644
>> --- a/src/PVE/ReplicationState.pm
>> +++ b/src/PVE/ReplicationState.pm
>> @@ -322,7 +322,9 @@ sub get_next_job {
>> return $res if $res != 0;
>> $res = $joba->{next_sync} <=> $jobb->{next_sync};
>> return $res if $res != 0;
>> - return $joba->{guest} <=> $jobb->{guest};
>> + $res = $joba->{guest} <=> $jobb->{guest};
>> + return $res if $res != 0;
>> + return $a cmp $b;
>
> nit, but couldn't this be
>
> return $joba->{guest} <=> $jobb->{guest} || $a cmp $b;
>
> instead? The right side of the logical OR only gets evaluated if the left
> side's result is 0 (well, also on undef and the empty string "", but those
> cannot happen with the spaceship operator).
>
yeah sure, i just blindly copied from the lines above. do we want
to change that pattern for all of them? like this:
---
return $sa->{last_iteration} <=> $sb->{last_iteration} ||
    $joba->{next_sync} <=> $jobb->{next_sync} ||
    $joba->{guest} <=> $jobb->{guest} ||
    $a cmp $b;
---
>> };
>>
>> foreach my $jobid (sort $sort_func keys %$jobs) {
>
* Re: [pve-devel] [PATCH guest-common 2/2] ReplicationState: deterministically order replication jobs
2022-05-27 6:23 ` Dominik Csapak
@ 2022-05-27 7:22 ` Thomas Lamprecht
0 siblings, 0 replies; 7+ messages in thread
From: Thomas Lamprecht @ 2022-05-27 7:22 UTC (permalink / raw)
To: Dominik Csapak, Proxmox VE development discussion
On 27/05/2022 08:23, Dominik Csapak wrote:
>>
>> nit, but couldn't this be
>>
>> return $joba->{guest} <=> $jobb->{guest} || $a cmp $b;
>>
>> instead? The right side of the logical OR only gets evaluated if the left
>> side's result is 0 (well, also on undef and the empty string "", but those
>> cannot happen with the spaceship operator).
>>
>
> yeah sure, i just blindly copied from the lines above. do we want
> to change that pattern for all of them? like this:
>
> ---
> return $sa->{last_iteration} <=> $sb->{last_iteration} ||
>     $joba->{next_sync} <=> $jobb->{next_sync} ||
>     $joba->{guest} <=> $jobb->{guest} ||
>     $a cmp $b;
> ---
would be fine for me, but just for that we don't need a v2, and I'd rather
like some comment/review from Fabian (or anybody else who has worked more
closely with replication) - I mean, on the other hand, this one could be
applied independently too...
* Re: [pve-devel] [RFC PATCH guest-common 1/2] ReplicationState: purge state from non local vms
[not found] ` <1654180326.vcply64b2j.astroid@nora.none>
@ 2022-06-03 6:45 ` Thomas Lamprecht
0 siblings, 0 replies; 7+ messages in thread
From: Thomas Lamprecht @ 2022-06-03 6:45 UTC (permalink / raw)
To: Proxmox VE development discussion, Fabian Grünbichler
On 02/06/2022 16:33, Fabian Grünbichler wrote:
>> Replicate vm 100 from node A to nodes B and C, and activate HA. When node
>> A dies, the vm will be relocated to e.g. node B and replication starts from
>> there. If node B now has an old state lying around for its sync to node
>> C, it might delete the common base snapshots of B and C and can no longer
>> sync.
>>
>> Deleting the state for all non-local guests fixes that issue, since
>> replication always starts fresh, and the potentially existing old state
>> cannot be valid anyway since we just relocated the vm here (from a dead
>> node).
>>
>> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> the logic seems sound, the state *is* invalid/outdated once the guest
> has been stolen...
>
> Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
>
Thanks! @Dominik, can you please send a v2 with Fabian's R-b and the nit from
patch 2/2 addressed? thanks!