From: Fabian Ebner <f.ebner@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH/RFC v2 qemu-server 8/8] don't migrate replicated VM whose replication job is marked for removal
Date: Thu, 29 Oct 2020 14:31:32 +0100 [thread overview]
Message-ID: <20201029133132.28100-9-f.ebner@proxmox.com> (raw)
In-Reply-To: <20201029133132.28100-1-f.ebner@proxmox.com>
While the migration didn't actually fail, we probably want to avoid the following behavior:
With remove_job=full:
* run_replication called during migration causes the replicated volumes to
be removed
* migration continues by fully copying all volumes
With remove_job=local:
* run_replication called during migration causes the job (and local
replication snapshots) to be removed
* migration continues by fully copying all volumes and renaming them to
avoid collisions with the still-existing remote volumes (a short detection
sketch follows below)
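
For illustration, a minimal standalone sketch of how this state is detected
from the parsed replication config; it mirrors the check added in the hunk
below, and $vmid/$target_node are placeholder variables rather than names
taken from this patch:

    use PVE::ReplicationConfig;

    # Hedged sketch mirroring the check added in prepare() below. A job that
    # has been marked for removal carries a remove_job property set to either
    # 'local' or 'full'; in both cases running replication during migration
    # would tear down state the migration still relies on.
    my $repl_conf = PVE::ReplicationConfig->new();
    my $jobcfg = $repl_conf->find_local_replication_job($vmid, $target_node);

    if ($jobcfg && defined($jobcfg->{remove_job})) {
        die "refusing to migrate replicated VM whose replication job is marked for removal\n";
    }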
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---
New in v2
Alternatively, we could throw out the remove_job property before calling
run_replication during migration, use the replicated volumes, and let the
scheduled pvesr call remove the job after migration.
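
A rough sketch of that alternative, purely for illustration and not part of
this patch; it assumes the job config hash can simply be copied and stripped
of the marker before run_replication sees it:

    # Hypothetical variant (not implemented here): hand run_replication() a
    # copy of the job config without the removal marker, so the replicated
    # volumes stay usable for the migration, and leave the actual removal to
    # the next scheduled pvesr run.
    if ($self->{replication_jobcfg} && defined($self->{replication_jobcfg}->{remove_job})) {
        my $jobcfg = { %{ $self->{replication_jobcfg} } };    # shallow copy
        delete $jobcfg->{remove_job};
        $self->{replication_jobcfg} = $jobcfg;
    }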
PVE/QemuMigrate.pm | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 3fb2850..c6623e1 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -224,6 +224,10 @@ sub prepare {
     $self->{replication_jobcfg} = $repl_conf->find_local_replication_job($vmid, $self->{node});
     $self->{is_replicated} = $repl_conf->check_for_existing_jobs($vmid, 1);
 
+    if ($self->{replication_jobcfg} && defined($self->{replication_jobcfg}->{remove_job})) {
+        die "refusing to migrate replicated VM whose replication job is marked for removal\n";
+    }
+
     PVE::QemuConfig->check_lock($conf);
 
     my $running = 0;
--
2.20.1
Thread overview: 10+ messages
2020-10-29 13:31 [pve-devel] [PATCH-SERIES v2] some replication-related improvements Fabian Ebner
2020-10-29 13:31 ` [pve-devel] [PATCH v2 guest-common 1/8] job_status: read only after acquiring the lock Fabian Ebner
2020-10-29 13:31 ` [pve-devel] [PATCH v2 guest-common 2/8] clarify what the source property is used for in a replication job Fabian Ebner
2020-10-29 13:31 ` [pve-devel] [PATCH v2 guest-common 3/8] also update sources in switch_replication_job_target Fabian Ebner
2020-10-29 13:31 ` [pve-devel] [PATCH v2 guest-common 4/8] create nolock variant for switch_replication_job_target Fabian Ebner
2020-10-29 13:31 ` [pve-devel] [PATCH v2 guest-common 5/8] job_status: simplify fixup of jobs for stolen guests Fabian Ebner
2020-10-29 13:31 ` [pve-devel] [PATCH v2 qemu-server 6/8] Repeat check for replication target in locked section Fabian Ebner
2020-10-29 13:31 ` [pve-devel] [PATCH v2 qemu-server 7/8] fix checks for transfering replication state/switching job target Fabian Ebner
2020-10-29 13:31 ` Fabian Ebner [this message]
2020-11-09 9:48 ` [pve-devel] applied-series: [PATCH-SERIES v2] some replication-related improvements Fabian Grünbichler