From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: from firstgate.proxmox.com (firstgate.proxmox.com [212.224.123.68])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits))
 (No client certificate requested)
 by lists.proxmox.com (Postfix) with ESMTPS id 65BAB92D6
 for ; Thu, 17 Nov 2022 14:34:59 +0100 (CET)
Received: from firstgate.proxmox.com (localhost [127.0.0.1])
 by firstgate.proxmox.com (Proxmox) with ESMTP id 3E41B2CFA3
 for ; Thu, 17 Nov 2022 14:34:29 +0100 (CET)
Received: from proxmox-new.maurer-it.com (proxmox-new.maurer-it.com [94.136.29.106])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits))
 (No client certificate requested)
 by firstgate.proxmox.com (Proxmox) with ESMTPS
 for ; Thu, 17 Nov 2022 14:34:27 +0100 (CET)
Received: from proxmox-new.maurer-it.com (localhost.localdomain [127.0.0.1])
 by proxmox-new.maurer-it.com (Proxmox) with ESMTP id D586243DBA
 for ; Thu, 17 Nov 2022 14:34:26 +0100 (CET)
From: =?UTF-8?q?Fabian=20Gr=C3=BCnbichler?=
To: pve-devel@lists.proxmox.com
Date: Thu, 17 Nov 2022 14:33:46 +0100
Message-Id: <20221117133346.737686-11-f.gruenbichler@proxmox.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20221117133346.737686-1-f.gruenbichler@proxmox.com>
References: <20221117133346.737686-1-f.gruenbichler@proxmox.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-SPAM-LEVEL: Spam detection results: 0
	AWL 0.137 Adjusted score from AWL reputation of From: address
	BAYES_00 -1.9 Bayes spam probability is 0 to 1%
	KAM_DMARC_STATUS 0.01 Test Rule for DKIM or SPF Failure with Strict Alignment
	SPF_HELO_NONE 0.001 SPF: HELO does not publish an SPF Record
	SPF_PASS -0.001 SPF: sender matches SPF record
Subject: [pve-devel] [PATCH qemu-server v7 7/7] qm: add remote-migrate command
X-BeenThere:
 pve-devel@lists.proxmox.com
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Proxmox VE development discussion
List-Unsubscribe: ,
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: ,
X-List-Received-Date: Thu, 17 Nov 2022 13:34:59 -0000

which wraps the remote_migrate_vm API endpoint, but itself performs the
precondition checks that can be done up front. this now just leaves the FP
retrieval and target node name lookup to the sync part of the API endpoint,
which should be doable in <30s.

an example invocation:

$ qm remote-migrate 1234 4321 'host=123.123.123.123,apitoken=PVEAPIToken=user@pve!incoming=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee,fingerprint=aa:bb:cc:dd:ee:ff:aa:bb:cc:dd:ee:ff:aa:bb:cc:dd:ee:ff:aa:bb:cc:dd:ee:ff:aa:bb:cc:dd:ee:ff:aa:bb' --target-bridge vmbr0 --target-storage zfs-a:rbd-b,nfs-c:dir-d,zfs-e --online

will migrate the local VM 1234 to the host 123.123.123.123 using the given
API token, mapping the VMID to 4321 on the target cluster, all its virtual
NICs to the target vm bridge 'vmbr0', any volumes on storage zfs-a to storage
rbd-b, any on storage nfs-c to storage dir-d, and any other volumes to
storage zfs-e. the source VM will be stopped but remain on the source
node/cluster after the migration has finished.

Signed-off-by: Fabian Grünbichler
---

Notes:
    v7:
    - fix example in commit message
    - rebase on top of PVE::CLI::qm changes

    v6:
    - mark as experimental
    - drop `with-local-disks` parameter from API, always set to true
    - add example invocation to commit message

    v5: rename to 'remote-migrate'

 PVE/API2/Qemu.pm |  31 -------------
 PVE/CLI/qm.pm    | 113 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 113 insertions(+), 31 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 6836c557..b0c40fa5 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -4543,17 +4543,6 @@ __PACKAGE__->register_method({
 	    $param->{online} = 0;
 	}
 
-	# FIXME: fork worker hear to avoid timeout? or poll these periodically
-	# in pvestatd and access cached info here? all of the below is actually
-	# checked at the remote end anyway once we call the mtunnel endpoint,
-	# we could also punt it to the client and not do it here at all..
-	my $resources = $api_client->get("/cluster/resources", { type => 'vm' });
-	if (grep { defined($_->{vmid}) && $_->{vmid} eq $target_vmid } @$resources) {
-	    raise_param_exc({ target_vmid => "Guest with ID '$target_vmid' already exists on remote cluster" });
-	}
-
-	my $storages = $api_client->get("/nodes/localhost/storage", { enabled => 1 });
-
 	my $storecfg = PVE::Storage::config();
 	my $target_storage = extract_param($param, 'target-storage');
 	my $storagemap = eval { PVE::JSONSchema::parse_idmap($target_storage, 'pve-storage-id') };
@@ -4565,26 +4554,6 @@ __PACKAGE__->register_method({
 	raise_param_exc({ 'target-bridge' => "failed to parse bridge map: $@" })
 	    if $@;
 
-	my $check_remote_storage = sub {
-	    my ($storage) = @_;
-	    my $found = [ grep { $_->{storage} eq $storage } @$storages ];
-	    die "remote: storage '$storage' does not exist!\n"
-		if !@$found;
-
-	    $found = @$found[0];
-
-	    my $content_types = [ PVE::Tools::split_list($found->{content}) ];
-	    die "remote: storage '$storage' cannot store images\n"
-		if !grep { $_ eq 'images' } @$content_types;
-	};
-
-	foreach my $target_sid (values %{$storagemap->{entries}}) {
-	    $check_remote_storage->($target_sid);
-	}
-
-	$check_remote_storage->($storagemap->{default})
-	    if $storagemap->{default};
-
 	die "remote migration requires explicit storage mapping!\n"
 	    if $storagemap->{identity};
 
diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
index 6655842e..66feecce 100755
--- a/PVE/CLI/qm.pm
+++ b/PVE/CLI/qm.pm
@@ -15,6 +15,7 @@ use POSIX qw(strftime);
 use Term::ReadLine;
 use URI::Escape;
 
+use PVE::APIClient::LWP;
 use PVE::Cluster;
 use PVE::Exception qw(raise_param_exc);
 use PVE::GuestHelpers;
@@ -159,6 +160,117 @@ __PACKAGE__->register_method ({
 	return;
     }});
 
+
+__PACKAGE__->register_method({
+    name => 'remote_migrate_vm',
+    path => 'remote_migrate_vm',
+    method => 'POST',
+    description => "Migrate virtual machine to a remote cluster. Creates a new migration task. EXPERIMENTAL feature!",
+    permissions => {
+	check => ['perm', '/vms/{vmid}', [ 'VM.Migrate' ]],
+    },
+    parameters => {
+	additionalProperties => 0,
+	properties => {
+	    node => get_standard_option('pve-node'),
+	    vmid => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
+	    'target-vmid' => get_standard_option('pve-vmid', { optional => 1 }),
+	    'target-endpoint' => get_standard_option('proxmox-remote', {
+		description => "Remote target endpoint",
+	    }),
+	    online => {
+		type => 'boolean',
+		description => "Use online/live migration if VM is running. Ignored if VM is stopped.",
+		optional => 1,
+	    },
+	    delete => {
+		type => 'boolean',
+		description => "Delete the original VM and related data after successful migration. By default the original VM is kept on the source cluster in a stopped state.",
+		optional => 1,
+		default => 0,
+	    },
+	    'target-storage' => get_standard_option('pve-targetstorage', {
+		completion => \&PVE::QemuServer::complete_migration_storage,
+		optional => 0,
+	    }),
+	    'target-bridge' => {
+		type => 'string',
+		description => "Mapping from source to target bridges. Providing only a single bridge ID maps all source bridges to that bridge. Providing the special value '1' will map each source bridge to itself.",
+		format => 'bridge-pair-list',
+	    },
+	    bwlimit => {
+		description => "Override I/O bandwidth limit (in KiB/s).",
+		optional => 1,
+		type => 'integer',
+		minimum => '0',
+		default => 'migrate limit from datacenter or storage config',
+	    },
+	},
+    },
+    returns => {
+	type => 'string',
+	description => "the task ID.",
+    },
+    code => sub {
+	my ($param) = @_;
+
+	my $rpcenv = PVE::RPCEnvironment::get();
+	my $authuser = $rpcenv->get_user();
+
+	my $source_vmid = $param->{vmid};
+	my $target_endpoint = $param->{'target-endpoint'};
+	my $target_vmid = $param->{'target-vmid'} // $source_vmid;
+
+	my $remote = PVE::JSONSchema::parse_property_string('proxmox-remote', $target_endpoint);
+
+	# TODO: move this as helper somewhere appropriate?
+	my $conn_args = {
+	    protocol => 'https',
+	    host => $remote->{host},
+	    port => $remote->{port} // 8006,
+	    apitoken => $remote->{apitoken},
+	};
+
+	$conn_args->{cached_fingerprints} = { uc($remote->{fingerprint}) => 1 }
+	    if defined($remote->{fingerprint});
+
+	my $api_client = PVE::APIClient::LWP->new(%$conn_args);
+	my $resources = $api_client->get("/cluster/resources", { type => 'vm' });
+	if (grep { defined($_->{vmid}) && $_->{vmid} eq $target_vmid } @$resources) {
+	    raise_param_exc({ target_vmid => "Guest with ID '$target_vmid' already exists on remote cluster" });
+	}
+
+	my $storages = $api_client->get("/nodes/localhost/storage", { enabled => 1 });
+
+	my $storecfg = PVE::Storage::config();
+	my $target_storage = $param->{'target-storage'};
+	my $storagemap = eval { PVE::JSONSchema::parse_idmap($target_storage, 'pve-storage-id') };
+	raise_param_exc({ 'target-storage' => "failed to parse storage map: $@" })
+	    if $@;
+
+	my $check_remote_storage = sub {
+	    my ($storage) = @_;
+	    my $found = [ grep { $_->{storage} eq $storage } @$storages ];
+	    die "remote: storage '$storage' does not exist!\n"
+		if !@$found;
+
+	    $found = @$found[0];
+
+	    my $content_types = [ PVE::Tools::split_list($found->{content}) ];
+	    die "remote: storage '$storage' cannot store images\n"
+		if !grep { $_ eq 'images' } @$content_types;
+	};
+
+	foreach my $target_sid (values %{$storagemap->{entries}}) {
+	    $check_remote_storage->($target_sid);
+	}
+
+	$check_remote_storage->($storagemap->{default})
+	    if $storagemap->{default};
+
+	return PVE::API2::Qemu->remote_migrate_vm($param);
+    }});
+
 __PACKAGE__->register_method ({
     name => 'status',
     path => 'status',
@@ -900,6 +1012,7 @@ our $cmddef = {
     clone => [ "PVE::API2::Qemu", 'clone_vm', ['vmid', 'newid'], { %node }, $upid_exit ],
     migrate => [ "PVE::API2::Qemu", 'migrate_vm', ['vmid', 'target'], { %node }, $upid_exit ],
+    'remote-migrate' => [ __PACKAGE__, 'remote_migrate_vm', ['vmid', 'target-vmid', 'target-endpoint'], { %node }, $upid_exit ],
     set => [ "PVE::API2::Qemu", 'update_vm', ['vmid'], { %node } ],
-- 
2.30.2
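A note on the --target-storage argument used in the example invocation above: it is a comma-separated list of 'source:target' pairs, optionally with a bare storage ID acting as the catch-all default. The real parsing happens in PVE::JSONSchema::parse_idmap; the `resolve_storage` function below is a hypothetical standalone sketch of that resolution logic, for illustration only and not part of the patch:

```shell
# Sketch of how a storage-pair-list such as 'zfs-a:rbd-b,nfs-c:dir-d,zfs-e'
# resolves a source storage ID to a target storage ID.
# Usage: resolve_storage <map> <source-storage>
resolve_storage() {
    local map="$1" source="$2" default="" entry
    local IFS=','
    # split the map on commas; each entry is either 'src:dst' or a bare default
    for entry in $map; do
        case "$entry" in
            *:*)
                # explicit pair: match on the part before the first colon
                if [ "${entry%%:*}" = "$source" ]; then
                    printf '%s\n' "${entry#*:}"
                    return 0
                fi
                ;;
            *)
                # bare storage ID: remember it as the default mapping
                default="$entry"
                ;;
        esac
    done
    [ -n "$default" ] && printf '%s\n' "$default"
}

resolve_storage 'zfs-a:rbd-b,nfs-c:dir-d,zfs-e' zfs-a   # rbd-b
resolve_storage 'zfs-a:rbd-b,nfs-c:dir-d,zfs-e' local   # zfs-e (default)
```

With the map from the commit message, volumes on zfs-a land on rbd-b, volumes on nfs-c on dir-d, and volumes on any other storage fall through to zfs-e; without a bare default entry, unmapped storages simply have no target, which is why the API rejects identity maps and demands an explicit mapping.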