From mboxrd@z Thu Jan  1 00:00:00 1970
From: Fabian Ebner <f.ebner@proxmox.com>
To: pve-devel@lists.proxmox.com
Date: Thu, 30 Jul 2020 13:29:15 +0200
Message-Id: <20200730112915.15402-3-f.ebner@proxmox.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200730112915.15402-1-f.ebner@proxmox.com>
References: <20200730112915.15402-1-f.ebner@proxmox.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [pve-devel] [PATCH/RFC qemu-server 3/3] Fix checks for transferring replication state/switching job target
List-Id: Proxmox VE development discussion <pve-devel.lists.proxmox.com>

When there are offline disks, $self->{replicated_volumes} is auto-vivified
to {} by the check
    next if $self->{replicated_volumes}->{$volid};
in sync_disks(), and {} evaluates to true in a boolean context.

Now the replication job information is retrieved once in prepare(), and
that job information, rather than whether any volumes were replicated, is
used to decide whether to make the calls or not.

For offline migration to a non-replication target, there are no
$self->{replicated_volumes}, but the state should be transferred
nonetheless. For online migration to a non-replication target, replication
is broken afterwards anyway, so it makes little difference whether the
state is transferred or not.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---
Hope I'm not misinterpreting when these calls should or shouldn't be made.
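For context, the pitfall described above can be reproduced in isolation. This is a minimal sketch; $self and $volid here are stand-ins for illustration, not the actual migration object or a real volume ID:

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $self  = {};                          # stand-in for the migration object
my $volid = 'local-zfs:vm-100-disk-0';   # illustrative volume ID

# Merely *reading* a key of the nested hash dereferences the outer level
# and auto-vivifies $self->{replicated_volumes} to an empty hashref:
my $unused = $self->{replicated_volumes}->{$volid};

print exists $self->{replicated_volumes} ? "auto-vivified\n" : "absent\n";

# Any reference, even one to an empty hash, is true in boolean context,
# so a guard like `if $self->{replicated_volumes}` now passes:
print $self->{replicated_volumes} ? "truthy\n" : "falsy\n";
```

This is why gating the transfer_replication_state()/switch_replication_job_target() calls on $self->{replicated_volumes} misfires once sync_disks() has merely looked the key up, and why the patch switches the guards to $self->{is_replicated} and $self->{replication_jobcfg} instead.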
 PVE/QemuMigrate.pm | 21 ++++++++++-----------
 1 file changed, 10 insertions(+), 11 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index a20e1c7..6097ef2 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -220,6 +220,10 @@ sub prepare {
     # test if VM exists
     my $conf = $self->{vmconf} = PVE::QemuConfig->load_config($vmid);

+    my $repl_conf = PVE::ReplicationConfig->new();
+    $self->{replication_jobcfg} = $repl_conf->find_local_replication_job($vmid, $self->{node});
+    $self->{is_replicated} = $repl_conf->check_for_existing_jobs($vmid, 1);
+
     PVE::QemuConfig->check_lock($conf);

     my $running = 0;
@@ -227,10 +231,7 @@ sub prepare {
 	die "can't migrate running VM without --online\n" if !$online;
 	$running = $pid;

-	my $repl_conf = PVE::ReplicationConfig->new();
-	my $is_replicated = $repl_conf->check_for_existing_jobs($vmid, 1);
-	my $is_replicated_to_target = defined($repl_conf->find_local_replication_job($vmid, $self->{node}));
-	if ($is_replicated && !$is_replicated_to_target) {
+	if ($self->{is_replicated} && !$self->{replication_jobcfg}) {
 	    if ($self->{opts}->{force}) {
 		$self->log('warn', "WARNING: Node '$self->{node}' is not a replication target. Existing " .
 			   "replication jobs will fail after migration!\n");
@@ -362,9 +363,7 @@ sub sync_disks {
 	});
     }

-    my $rep_cfg = PVE::ReplicationConfig->new();
-    my $replication_jobcfg = $rep_cfg->find_local_replication_job($vmid, $self->{node});
-    my $replicatable_volumes = !$replication_jobcfg ? {}
+    my $replicatable_volumes = !$self->{replication_jobcfg} ? {}
 	: PVE::QemuConfig->get_replicatable_volumes($storecfg, $vmid, $conf, 0, 1);

     my $test_volid = sub {
@@ -489,7 +488,7 @@ sub sync_disks {
 	    }
 	}

-	if ($replication_jobcfg) {
+	if ($self->{replication_jobcfg}) {
 	    if ($self->{running}) {

 		my $version = PVE::QemuServer::kvm_user_version();
@@ -523,7 +522,7 @@ sub sync_disks {
 	    my $start_time = time();
 	    my $logfunc = sub { $self->log('info', shift) };
 	    $self->{replicated_volumes} = PVE::Replication::run_replication(
-		'PVE::QemuConfig', $replication_jobcfg, $start_time, $start_time, $logfunc);
+		'PVE::QemuConfig', $self->{replication_jobcfg}, $start_time, $start_time, $logfunc);
 	}

 	# sizes in config have to be accurate for remote node to correctly
@@ -1193,7 +1192,7 @@ sub phase3_cleanup {
 	}
     }
     # transfer replication state before move config
-    $self->transfer_replication_state() if $self->{replicated_volumes};
+    $self->transfer_replication_state() if $self->{is_replicated};

     # move config to remote node
     my $conffile = PVE::QemuConfig->config_file($vmid);
@@ -1202,7 +1201,7 @@ sub phase3_cleanup {
     die "Failed to move config to node '$self->{node}' - rename failed: $!\n"
 	if !rename($conffile, $newconffile);

-    $self->switch_replication_job_target() if $self->{replicated_volumes};
+    $self->switch_replication_job_target() if $self->{replication_jobcfg};

     if ($self->{livemigration}) {
 	if ($self->{stopnbd}) {
-- 
2.20.1