From mboxrd@z Thu Jan 1 00:00:00 1970
From: Fabian Ebner
To: pve-devel@lists.proxmox.com
Date: Thu, 29 Oct 2020 14:31:31 +0100
Message-Id: <20201029133132.28100-8-f.ebner@proxmox.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201029133132.28100-1-f.ebner@proxmox.com>
References: <20201029133132.28100-1-f.ebner@proxmox.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [pve-devel] [PATCH v2 qemu-server 7/8] fix checks for transferring replication state/switching job target
List-Id: Proxmox VE development discussion

In some cases, $self->{replicated_volumes} will be auto-vivified to {}
by checks like

    next if $self->{replicated_volumes}->{$volid};

and then {} would evaluate to true in a boolean context.

Now, the replication information is retrieved once in prepare() and
used to decide whether to make the calls or not.

Signed-off-by: Fabian Ebner
---

Changes from v1:
    * use 'if $self->{is_replicated}' both times, so that even if we
      migrated to a non-replication target, the source parameters are
      updated in the jobs

 PVE/QemuMigrate.pm | 21 ++++++++++-----------
 1 file changed, 10 insertions(+), 11 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index a8bcd15..3fb2850 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -220,6 +220,10 @@ sub prepare {
     # test if VM exists
     my $conf = $self->{vmconf} = PVE::QemuConfig->load_config($vmid);
 
+    my $repl_conf = PVE::ReplicationConfig->new();
+    $self->{replication_jobcfg} = $repl_conf->find_local_replication_job($vmid, $self->{node});
+    $self->{is_replicated} = $repl_conf->check_for_existing_jobs($vmid, 1);
+
     PVE::QemuConfig->check_lock($conf);
 
     my $running = 0;
@@ -227,10 +231,7 @@ sub prepare {
 	die "can't migrate running VM without --online\n" if !$online;
 	$running = $pid;
 
-	my $repl_conf = PVE::ReplicationConfig->new();
-	my $is_replicated = $repl_conf->check_for_existing_jobs($vmid, 1);
-	my $is_replicated_to_target = defined($repl_conf->find_local_replication_job($vmid, $self->{node}));
-	if ($is_replicated && !$is_replicated_to_target) {
+	if ($self->{is_replicated} && !$self->{replication_jobcfg}) {
 	    if ($self->{opts}->{force}) {
 		$self->log('warn', "WARNING: Node '$self->{node}' is not a replication target. Existing " .
 		    "replication jobs will fail after migration!\n");
@@ -362,9 +363,7 @@ sub sync_disks {
 	});
     }
 
-    my $rep_cfg = PVE::ReplicationConfig->new();
-    my $replication_jobcfg = $rep_cfg->find_local_replication_job($vmid, $self->{node});
-    my $replicatable_volumes = !$replication_jobcfg ? {}
+    my $replicatable_volumes = !$self->{replication_jobcfg} ? {}
 	: PVE::QemuConfig->get_replicatable_volumes($storecfg, $vmid, $conf, 0, 1);
 
     my $test_volid = sub {
@@ -489,7 +488,7 @@ sub sync_disks {
 	    }
 	}
 
-	if ($replication_jobcfg) {
+	if ($self->{replication_jobcfg}) {
 	    if ($self->{running}) {
 		my $version = PVE::QemuServer::kvm_user_version();
 
@@ -523,7 +522,7 @@ sub sync_disks {
 	    my $start_time = time();
 	    my $logfunc = sub { $self->log('info', shift) };
 	    $self->{replicated_volumes} = PVE::Replication::run_replication(
-		'PVE::QemuConfig', $replication_jobcfg, $start_time, $start_time, $logfunc);
+		'PVE::QemuConfig', $self->{replication_jobcfg}, $start_time, $start_time, $logfunc);
 	}
 
 	# sizes in config have to be accurate for remote node to correctly
@@ -1193,7 +1192,7 @@ sub phase3_cleanup {
     }
 
     # transfer replication state before move config
-    $self->transfer_replication_state() if $self->{replicated_volumes};
+    $self->transfer_replication_state() if $self->{is_replicated};
 
     # move config to remote node
     my $conffile = PVE::QemuConfig->config_file($vmid);
@@ -1202,7 +1201,7 @@ sub phase3_cleanup {
     die "Failed to move config to node '$self->{node}' - rename failed: $!\n"
 	if !rename($conffile, $newconffile);
 
-    $self->switch_replication_job_target() if $self->{replicated_volumes};
+    $self->switch_replication_job_target() if $self->{is_replicated};
 
     if ($self->{livemigration}) {
 	if ($self->{stopnbd}) {
-- 
2.20.1
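
[Editor's note, not part of the patch: the autovivification pitfall the commit message describes is easy to reproduce standalone. Merely *reading* a nested hash element in Perl creates the intermediate hashref, so a later boolean check on $self->{replicated_volumes} sees a truthy {} even when no volume was ever replicated. A minimal illustrative sketch, using a made-up volume ID:]

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $self = {};

# Reading a nested element autovivifies the intermediate hashref,
# even though this is a pure rvalue access and nothing is assigned.
my $dummy = $self->{replicated_volumes}->{'local-zfs:vm-100-disk-0'};

# $self->{replicated_volumes} now exists as an empty hashref {}, and a
# hashref is always true in boolean context - exactly the situation that
# made 'if $self->{replicated_volumes}' unreliable before this patch.
print exists $self->{replicated_volumes} ? "autovivified\n" : "absent\n";
print $self->{replicated_volumes}        ? "truthy\n"       : "falsy\n";
# prints: autovivified / truthy
```

This is why the patch switches the phase3_cleanup() guards to the explicit $self->{is_replicated} flag computed once in prepare(), instead of testing a hash that intermediate code may have vivified as a side effect.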