From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Thu, 11 Nov 2021 12:04:07 +0100
From: Fabian Grünbichler
To: Fabian Ebner, pve-devel@lists.proxmox.com
References: <20211105130359.40803-1-f.gruenbichler@proxmox.com> <20211105130359.40803-20-f.gruenbichler@proxmox.com> <06755979-9217-f572-384a-2825631f4f8f@proxmox.com>
In-Reply-To: <06755979-9217-f572-384a-2825631f4f8f@proxmox.com>
User-Agent: astroid/0.15.0 (https://github.com/astroidmail/astroid)
Message-Id: <1636625314.61g1nly397.astroid@nora.none>
Subject: Re: [pve-devel] [PATCH qemu-server 07/10] mtunnel: add API endpoints
List-Id: Proxmox VE development discussion

On November 9, 2021 1:46 pm, Fabian Ebner wrote:
> Am 05.11.21 um 14:03 schrieb Fabian Grünbichler:
>> the following two endpoints are used for migration on the remote side
>> 
>> POST /nodes/NODE/qemu/VMID/mtunnel
>> 
>> which creates and locks an empty VM config, and spawns the main qmtunnel
>> worker which binds to a VM-specific UNIX socket.
>> 
>> this worker handles JSON-encoded migration commands coming in via this
>> UNIX socket:
>> - config (set target VM config)
>> -- checks permissions for updating config
>> -- strips pending changes and snapshots
>> -- sets (optional) firewall config
>> - disk (allocate disk for NBD migration)
>> -- checks permission for target storage
>> -- returns drive string for allocated volume
>> - disk-import (import 'pvesm export' stream for offline migration)
>> -- checks permission for target storage
>> -- forks a child running 'pvesm import' reading from a UNIX socket
>> -- only one import allowed to run at any given moment
>> - query-disk-import
>> -- checks output of 'pvesm import' for volume ID message
>> -- returns volid + success, or 'pending', or 'error'
>> - start (returning migration info)
>> - fstrim (via agent)
>> - bwlimit (query bwlimit for storage)
>> - ticket (creates a ticket for a WS connection to a specific socket)
>> - resume
>> - stop
>> - nbdstop
>> - unlock
>> - quit (+ cleanup)
>> 
>> this worker serves as a replacement for both 'qm mtunnel' and various
>> manual calls via SSH.
>> the API call will return a ticket valid for
>> connecting to the worker's UNIX socket via a websocket connection.
>> 
>> GET+WebSocket upgrade /nodes/NODE/qemu/VMID/mtunnelwebsocket
>> 
>> gets called for connecting to a UNIX socket via websocket forwarding,
>> i.e. once for the main command mtunnel, and once each for the memory
>> migration and each NBD drive-mirror/storage migration.
>> 
>> access is guarded by a short-lived ticket binding the authenticated user
>> to the socket path. such tickets can be requested over the main mtunnel,
>> which keeps track of socket paths currently used by that
>> mtunnel/migration instance.
>> 
>> each command handler should check privileges for the requested action if
>> necessary.
>> 
>> Signed-off-by: Fabian Grünbichler
>> ---
>> 
>> Notes:
>>     requires
>>     - pve-storage with UNIX import support
>>     - pve-access-control with tunnel ticket support
>>     - pve-http-server with websocket fixes
>> 
>>  PVE/API2/Qemu.pm | 627 +++++++++++++++++++++++++++++++++++++++++++++++
>>  1 file changed, 627 insertions(+)
>> 
>> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
>> index faf028b..a1a1813 100644
>> --- a/PVE/API2/Qemu.pm
>> +++ b/PVE/API2/Qemu.pm
>> @@ -6,8 +6,13 @@ use Cwd 'abs_path';
>>  use Net::SSLeay;
>>  use POSIX;
>>  use IO::Socket::IP;
>> +use IO::Socket::UNIX;
>> +use IPC::Open3;
>> +use JSON;
>> +use MIME::Base64;
>>  use URI::Escape;
>>  use Crypt::OpenSSL::Random;
>> +use Socket qw(SOCK_STREAM);
>>  
>>  use PVE::Cluster qw (cfs_read_file cfs_write_file);;
>>  use PVE::RRD;
>> @@ -856,6 +861,7 @@ __PACKAGE__->register_method({
>>  	    { subdir => 'spiceproxy' },
>>  	    { subdir => 'sendkey' },
>>  	    { subdir => 'firewall' },
>> +	    { subdir => 'mtunnel' },
>>  	];
>>  
>>  	return $res;
>> @@ -4428,4 +4434,625 @@ __PACKAGE__->register_method({
>>  	return PVE::QemuServer::Cloudinit::dump_cloudinit_config($conf, $param->{vmid}, $param->{type});
>>      }});
>>  
>> +__PACKAGE__->register_method({
>> +    name => 'mtunnel',
>> +    path => '{vmid}/mtunnel',
>> +    method => 'POST',
>> +    protected => 1,
>> +    proxyto => 'node',
>> +    description => 'Migration tunnel endpoint - only for internal use by VM migration.',
>> +    permissions => {
>> +	check => ['perm', '/vms/{vmid}', [ 'VM.Allocate' ]],
>> +	description => "You need 'VM.Allocate' permissions on /vms/{vmid}. Further permission checks happen during the actual migration.",
>> +    },
>> +    parameters => {
>> +	additionalProperties => 0,
>> +	properties => {
>> +	    node => get_standard_option('pve-node'),
>> +	    vmid => get_standard_option('pve-vmid'),
>> +	    storages => {
>> +		type => 'string',
>> +		format => 'pve-storage-id-list',
>> +		optional => 1,
>> +		description => 'List of storages to check permission and availability. Will be checked again for all actually used storages during migration.',
>> +	    },
>> +	},
>> +    },
>> +    returns => {
>> +	additionalProperties => 0,
>> +	properties => {
>> +	    upid => { type => 'string' },
>> +	    ticket => { type => 'string' },
>> +	    socket => { type => 'string' },
>> +	},
>> +    },
>> +    code => sub {
>> +	my ($param) = @_;
>> +
>> +	my $rpcenv = PVE::RPCEnvironment::get();
>> +	my $authuser = $rpcenv->get_user();
>> +
>> +	my $node = extract_param($param, 'node');
>> +	my $vmid = extract_param($param, 'vmid');
>> +
>> +	my $storages = extract_param($param, 'storages');
>> +
>> +	my $storecfg = PVE::Storage::config();
>> +	foreach my $storeid (PVE::Tools::split_list($storages)) {
>> +	    $check_storage_access_migrate->($rpcenv, $authuser, $storecfg, $storeid, $node);
>> +	}
>> +
>> +	PVE::Cluster::check_cfs_quorum();
>> +
>> +	my $socket_addr = "/run/qemu-server/$vmid.mtunnel";
>> +
>> +	my $lock = 'create';
>> +	eval { PVE::QemuConfig->create_and_lock_config($vmid, 0, $lock); };
>> +
>> +	raise_param_exc({ vmid => "unable to create empty VM config - $@"})
>> +	    if $@;
>> +
>> +	my $realcmd = sub {
>> +	    my $pveproxy_uid;
>> +
>> +	    my $state = {
>> +		storecfg => PVE::Storage::config(),
>> +		lock => $lock,
>> +	    };
>> +
>> +	    my $run_locked = sub {
>> +		my ($code, $params) = @_;
>> +		return PVE::QemuConfig->lock_config($vmid, sub {
>> +		    my $conf = PVE::QemuConfig->load_config($vmid);
>> +
>> +		    $state->{conf} = $conf;
>> +
>> +		    die "Encountered wrong lock - aborting mtunnel command handling.\n"
>> +			if $state->{lock} && !PVE::QemuConfig->has_lock($conf, $state->{lock});
>> +
>> +		    return $code->($params);
>> +		});
>> +	    };
>> +
>> +	    my $cmd_desc = {
>> +		bwlimit => {
>> +		    storage => {
>> +			type => 'string',
>> +			format => 'pve-storage-id',
>> +			description => "Storage for which bwlimit is queried",
>> +		    },
>> +		    bwlimit => {
>> +			description => "Override I/O bandwidth limit (in KiB/s).",
>> +			optional => 1,
>> +			type => 'integer',
>> +			minimum => '0',
>> +		    },
>> +		},
>> +		config => {
>> +		    conf => {
>> +			type => 'string',
>> +			description => 'Full VM config, adapted for target cluster/node',
>> +		    },
>> +		    'firewall-conf' => {
> 
> Here and thus for parsing, it's 'firewall-conf', but in the command
> handler 'firewall-config' is accessed.
> 

thanks!
the joys of additionalProperties defaulting to 1 ;)

>> +			type => 'string',
>> +			description => 'VM firewall config',
>> +			optional => 1,
>> +		    },
>> +		},
>> +		disk => {
>> +		    format => PVE::JSONSchema::get_standard_option('pve-qm-image-format'),
>> +		    storage => {
>> +			type => 'string',
>> +			format => 'pve-storage-id',
>> +		    },
>> +		    drive => {
>> +			type => 'object',
>> +			description => 'parsed drive information without volid and format',
>> +		    },
>> +		},
>> +		'disk-import' => {
>> +		    volname => {
>> +			type => 'string',
>> +			description => 'volume name to use prefered target volume name',
> 
> Nit: I wasn't able to parse this description ;) (also missing r in
> preferred)
> 

probably because it's missing an 'as':

'volume name to use as preferred target volume name'

as in, we try to keep that name, but if it's already taken you get a
different one if allow-rename is set, or an error otherwise ;)

>> +		    },
>> +		    format => PVE::JSONSchema::get_standard_option('pve-qm-image-format'),
>> +		    'export-formats' => {
>> +			type => 'string',
>> +			description => 'list of supported export formats',
>> +		    },
>> +		    storage => {
>> +			type => 'string',
>> +			format => 'pve-storage-id',
>> +		    },
>> +		    'with-snapshots' => {
>> +			description =>
>> +			    "Whether the stream includes intermediate snapshots",
>> +			type => 'boolean',
>> +			optional => 1,
>> +			default => 0,
>> +		    },
>> +		    'allow-rename' => {
>> +			description => "Choose a new volume ID if the requested " .
>> +			    "volume ID already exists, instead of throwing an error.",
>> +			type => 'boolean',
>> +			optional => 1,
>> +			default => 0,
>> +		    },
>> +		},
>> +		start => {
>> +		    start_params => {
>> +			type => 'object',
>> +			description => 'params passed to vm_start_nolock',
>> +		    },
>> +		    migrate_opts => {
>> +			type => 'object',
>> +			description => 'migrate_opts passed to vm_start_nolock',
>> +		    },
>> +		},
>> +		ticket => {
>> +		    path => {
>> +			type => 'string',
>> +			description => 'socket path for which the ticket should be valid. must be known to current mtunnel instance.',
>> +		    },
>> +		},
>> +		quit => {
>> +		    cleanup => {
>> +			type => 'boolean',
>> +			description => 'remove VM config and disks, aborting migration',
>> +			default => 0,
>> +		    },
>> +		},
>> +	    };
>> +
>> +	    my $cmd_handlers = {
>> +		'version' => sub {
>> +		    # compared against other end's version
>> +		    # bump/reset both for breaking changes
>> +		    # bump tunnel only for opt-in changes
>> +		    return {
>> +			api => 2,
>> +			age => 0,
>> +		    };
>> +		},
>> +		'config' => sub {
>> +		    my ($params) = @_;
>> +
>> +		    # parse and write out VM FW config if given
>> +		    if (my $fw_conf = $params->{'firewall-config'}) {
>> +			my ($path, $fh) = PVE::Tools::tempfile_contents($fw_conf, 700);
>> +
>> +			my $empty_conf = {
>> +			    rules => [],
>> +			    options => {},
>> +			    aliases => {},
>> +			    ipset => {} ,
>> +			    ipset_comments => {},
>> +			};
>> +			my $cluster_fw_conf = PVE::Firewall::load_clusterfw_conf();
>> +
>> +			# TODO: add flag for strict parsing?
>> +			# TODO: add import sub that does all this given raw content?
>> +			my $vmfw_conf = PVE::Firewall::generic_fw_config_parser($path, $cluster_fw_conf, $empty_conf, 'vm');
>> +			$vmfw_conf->{vmid} = $vmid;
>> +			PVE::Firewall::save_vmfw_conf($vmid, $vmfw_conf);
>> +
>> +			$state->{cleanup}->{fw} = 1;
>> +		    }
>> +
>> +		    PVE::QemuConfig->remove_lock($vmid, 'create');
>> +
>> +		    # TODO add flag for strict parsing?
>> +		    my $new_conf = PVE::QemuServer::parse_vm_config("incoming/qemu-server/$vmid.conf", $params->{conf});
>> +		    delete $new_conf->{lock};
>> +		    delete $new_conf->{digest};
>> +
>> +		    # TODO handle properly?
>> +		    delete $new_conf->{snapshots};
>> +		    delete $new_conf->{pending};
> 
> 'parent' should also be deleted if the snapshots are.
> 

yes

>> +
>> +		    # not handled by update_vm_api
>> +		    my $vmgenid = delete $new_conf->{vmgenid};
>> +		    my $meta = delete $new_conf->{meta};
>> +
>> +		    $new_conf->{vmid} = $vmid;
>> +		    $new_conf->{node} = $node;
>> +
>> +		    $update_vm_api->($new_conf, 1);
>> +
>> +		    my $conf = PVE::QemuConfig->load_config($vmid);
>> +		    $conf->{lock} = 'migrate';
>> +		    $conf->{vmgenid} = $vmgenid;
>> +		    $conf->{meta} = $meta;
>> +		    PVE::QemuConfig->write_config($vmid, $conf);
>> +
>> +		    $state->{lock} = 'migrate';
>> +
>> +		    return;
>> +		},
>> +		'bwlimit' => sub {
>> +		    my ($params) = @_;
>> +
>> +		    my $bwlimit = PVE::Storage::get_bandwidth_limit('migration', [$params->{storage}], $params->{bwlimit});
>> +		    return { bwlimit => $bwlimit };
>> +
>> +		},
>> +		'disk' => sub {
>> +		    my ($params) = @_;
> 
> Feels like some deduplication between here and
> vm_migrate_alloc_nbd_disks should be possible.
> 

yes, I seem to have forgotten to do that (this series predates
vm_migrate_alloc_nbd_disks, but I now remember thinking back then that
this is a good addition and I should fold it in)

adapted it a bit and merged the two.
>> +
>> +		    my $format = $params->{format};
>> +		    my $storeid = $params->{storage};
>> +		    my $drive = $params->{drive};
>> +
>> +		    $check_storage_access_migrate->($rpcenv, $authuser, $state->{storecfg}, $storeid, $node);
>> +
>> +		    my ($default_format, $valid_formats) = PVE::Storage::storage_default_format($state->{storecfg}, $storeid);
>> +		    my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
>> +		    $format = $default_format
>> +			if !grep {$format eq $_} @{$valid_formats};
>> +
>> +		    my $size = int($drive->{size})/1024;
>> +		    my $newvolid = PVE::Storage::vdisk_alloc($state->{storecfg}, $storeid, $vmid, $format, undef, $size);
>> +
>> +		    my $newdrive = $drive;
>> +		    $newdrive->{format} = $format;
>> +		    $newdrive->{file} = $newvolid;
>> +
>> +		    $state->{cleanup}->{volumes}->{$newvolid} = 1;
>> +		    my $drivestr = PVE::QemuServer::print_drive($newdrive);
>> +		    return {
>> +			drivestr => $drivestr,
>> +			volid => $newvolid,
>> +		    };
>> +		},
>> +		'disk-import' => sub {
>> +		    my ($params) = @_;
> 
> Similarly here with storage_migrate. Having the checks and deciding on
> name+format be its own function would also make it possible to abort
> early, which is especially useful if there are multiple disks. But would
> require a precondition handler for remote migration of course.
> 

yeah, this part (and some of the counterpart in QemuMigrate) will move
to the storage layer one way or another for re-using in pve-container
and the replication code.

>> +
>> +		    die "disk import already running as PID '$state->{disk_import}->{pid}'\n"
>> +			if $state->{disk_import}->{pid};
>> +
>> +		    my $format = $params->{format};
>> +		    my $storeid = $params->{storage};
>> +		    $check_storage_access_migrate->($rpcenv, $authuser, $state->{storecfg}, $storeid, $node);
>> +
>> +		    my $with_snapshots = $params->{'with-snapshots'} ? 1 : 0;
>> +
>> +		    my ($default_format, $valid_formats) = PVE::Storage::storage_default_format($state->{storecfg}, $storeid);
>> +		    my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
>> +		    die "unsupported format '$format' for storage '$storeid'\n"
>> +			if !grep {$format eq $_} @{$valid_formats};
>> +
>> +		    my $volname = $params->{volname};
>> +
>> +		    # get target volname, taken from PVE::Storage
>> +		    (my $name_without_extension = $volname) =~ s/\.$format$//;
>> +		    if ($scfg->{path}) {
>> +			$volname = "$vmid/$name_without_extension.$format";
>> +		    } else {
>> +			$volname = "$name_without_extension";
>> +		    }
> 
> This is just a best-effort for guessing a valid volname that was
> intended only as a fall-back when target and source storage have
> different types. If the storage type is the same, the volname should be
> kept, so that e.g. an external plugin with $scfg->{path} and no
> extension also works.

but we don't have a guarantee that type foo on cluster A and type foo on
cluster B are identical, support the same formats, etc. (might be a
different version with different support, or a different plugin
altogether). I think this part can improve when we improve our name
handling in general, but I'd leave it like it is atm..
>> +
>> +		    my $migration_snapshot;
>> +		    if ($scfg->{type} eq 'zfspool' || $scfg->{type} eq 'btrfs') {
>> +			$migration_snapshot = '__migration__';
>> +		    }
>> +
>> +		    my $volid = "$storeid:$volname";
>> +
>> +		    # find common import/export format, taken from PVE::Storage
>> +		    my @import_formats = PVE::Storage::volume_import_formats($state->{storecfg}, $volid, $migration_snapshot, undef, $with_snapshots);
>> +		    my @export_formats = PVE::Tools::split_list($params->{'export-formats'});
>> +		    my %import_hash = map { $_ => 1 } @import_formats;
>> +		    my @common = grep { $import_hash{$_} } @export_formats;
>> +		    die "no matching import/export format found for storage '$storeid'\n"
>> +			if !@common;
>> +		    $format = $common[0];
>> +
>> +		    my $input = IO::File->new();
>> +		    my $info = IO::File->new();
>> +		    my $unix = "/run/qemu-server/$vmid.storage";
>> +
>> +		    my $import_cmd = ['pvesm', 'import', $volid, $format, "unix://$unix", '-with-snapshots', $with_snapshots];
>> +		    if ($params->{'allow-rename'}) {
>> +			push @$import_cmd, '-allow-rename', $params->{'allow-rename'};
>> +		    }
>> +		    if ($migration_snapshot) {
>> +			push @$import_cmd, '-delete-snapshot', $migration_snapshot;
> 
> Missing '-snapshot $migration_snapshot'? While the parameter is ignored
> by our ZFSPoolPlugin, the BTRFSPlugin aborts if it's not specified
> AFAICS. And external plugins might require it too.

done

> 
> In general, we'll need to be careful not to introduce mismatches between
> the import and the export parameters. Might it be better if the client
> would pass along (most of) the parameters for the import command (which
> basically is how it's done for the existing storage_migrate)?
> 

see next mail

>> +		    }
>> +
>> +		    unlink $unix;
>> +		    my $cpid = open3($input, $info, $info, @{$import_cmd})
>> +			or die "failed to spawn disk-import child - $!\n";
>> +
>> +		    $state->{disk_import}->{pid} = $cpid;
>> +		    my $ready;
>> +		    eval {
>> +			PVE::Tools::run_with_timeout(5, sub { $ready = <$info>; });
>> +		    };
>> +		    die "failed to read readyness from disk import child: $@\n" if $@;
>> +		    print "$ready\n";
>> +
>> +		    chown $pveproxy_uid, -1, $unix;
>> +
>> +		    $state->{disk_import}->{fh} = $info;
>> +		    $state->{disk_import}->{socket} = $unix;
>> +
>> +		    $state->{sockets}->{$unix} = 1;
>> +
>> +		    return {
>> +			socket => $unix,
>> +			format => $format,
>> +		    };
>> +		},
>> +		'query-disk-import' => sub {
>> +		    my ($params) = @_;
>> +
>> +		    die "no disk import running\n"
>> +			if !$state->{disk_import}->{pid};
>> +
>> +		    my $pattern = PVE::Storage::volume_imported_message(undef, 1);
>> +		    my $result;
>> +		    eval {
>> +			my $fh = $state->{disk_import}->{fh};
>> +			PVE::Tools::run_with_timeout(5, sub { $result = <$fh>; });
>> +			print "disk-import: $result\n" if $result;
>> +		    };
>> +		    if ($result && $result =~ $pattern) {
>> +			my $volid = $1;
>> +			waitpid($state->{disk_import}->{pid}, 0);
>> +
>> +			my $unix = $state->{disk_import}->{socket};
>> +			unlink $unix;
>> +			delete $state->{sockets}->{$unix};
>> +			delete $state->{disk_import};
> 
> $volid should be registered for potential cleanup.
> 

done

>> +			return {
>> +			    status => "complete",
>> +			    volid => $volid,
>> +			};
>> +		    } elsif (!$result && waitpid($state->{disk_import}->{pid}, WNOHANG)) {
>> +			my $unix = $state->{disk_import}->{socket};
>> +			unlink $unix;
>> +			delete $state->{sockets}->{$unix};
>> +			delete $state->{disk_import};
>> +
>> +			return {
>> +			    status => "error",
>> +			};
>> +		    } else {
>> +			return {
>> +			    status => "pending",
>> +			};
>> +		    }
>> +		},
>> +		'start' => sub {
>> +		    my ($params) = @_;
>> +
>> +		    my $info = PVE::QemuServer::vm_start_nolock(
>> +			$state->{storecfg},
>> +			$vmid,
>> +			$state->{conf},
>> +			$params->{start_params},
>> +			$params->{migrate_opts},
>> +		    );
>> +
>> +
>> +		    if ($info->{migrate}->{proto} ne 'unix') {
>> +			PVE::QemuServer::vm_stop(undef, $vmid, 1, 1);
>> +			die "migration over non-UNIX sockets not possible\n";
>> +		    }
>> +
>> +		    my $socket = $info->{migrate}->{addr};
>> +		    chown $pveproxy_uid, -1, $socket;
>> +		    $state->{sockets}->{$socket} = 1;
>> +
>> +		    my $unix_sockets = $info->{migrate}->{unix_sockets};
>> +		    foreach my $socket (@$unix_sockets) {
>> +			chown $pveproxy_uid, -1, $socket;
>> +			$state->{sockets}->{$socket} = 1;
>> +		    }
>> +		    return $info;
>> +		},
>> +		'fstrim' => sub {
>> +		    if (PVE::QemuServer::qga_check_running($vmid)) {
>> +			eval { mon_cmd($vmid, "guest-fstrim") };
>> +			warn "fstrim failed: $@\n" if $@;
>> +		    }
>> +		    return;
>> +		},
>> +		'stop' => sub {
>> +		    PVE::QemuServer::vm_stop(undef, $vmid, 1, 1);
>> +		    return;
>> +		},
>> +		'nbdstop' => sub {
>> +		    PVE::QemuServer::nbd_stop($vmid);
>> +		    return;
>> +		},
>> +		'resume' => sub {
>> +		    if (PVE::QemuServer::check_running($vmid, 1)) {
>> +			PVE::QemuServer::vm_resume($vmid, 1, 1);
>> +		    } else {
>> +			die "VM $vmid not running\n";
>> +		    }
>> +		    return;
>> +		},
>> +		'unlock' => sub {
>> +		    PVE::QemuConfig->remove_lock($vmid, $state->{lock});
>> +		    delete $state->{lock};
>> +		    return;
>> +		},
>> +		'ticket' => sub {
>> +		    my ($params) = @_;
>> +
>> +		    my $path = $params->{path};
>> +
>> +		    die "Not allowed to generate ticket for unknown socket '$path'\n"
>> +			if !defined($state->{sockets}->{$path});
>> +
>> +		    return { ticket => PVE::AccessControl::assemble_tunnel_ticket($authuser, "/socket/$path") };
>> +		},
>> +		'quit' => sub {
>> +		    my ($params) = @_;
>> +
>> +		    if ($params->{cleanup}) {
>> +			if ($state->{cleanup}->{fw}) {
>> +			    PVE::Firewall::remove_vmfw_conf($vmid);
>> +			}
>> +
>> +			if (my @volumes = keys $state->{cleanup}->{volumes}->$%) {
> 
> keys on scalar? This is fixed in a later patch, but...

yeah, that was a rebase gone wrong ;)

> 
>> +			    PVE::Storage::foreach_volid(@volumes, sub {
> 
> ...PVE::Storage::foreach_volid does not have this signature. It needs
> what vdisk_list returns. A simple 'for' should be good enough here.
> 

ack, I guess that was the source of a stray volume I had in one of my
last tests..

>> +				my ($volid, $sid, $volname, $d) = @_;
>> +
>> +				print "freeing volume '$volid' as part of cleanup\n";
>> +				eval { PVE::Storage::vdisk_free($storecfg, $volid) };
>> +				warn $@ if $@;
>> +			    });
>> +			}
>> +
>> +			PVE::QemuServer::destroy_vm($state->{storecfg}, $vmid, 1);
>> +		    }
>> +
>> +		    $state->{exit} = 1;
>> +		    return;
>> +		},
>> +	    };
>> +
>> +	    $run_locked->(sub {
>> +		my $socket_addr = "/run/qemu-server/$vmid.mtunnel";
>> +		unlink $socket_addr;
>> +
>> +		$state->{socket} = IO::Socket::UNIX->new(
>> +		    Type => SOCK_STREAM(),
>> +		    Local => $socket_addr,
>> +		    Listen => 1,
>> +		);
>> +
>> +		$pveproxy_uid = getpwnam('www-data')
>> +		    or die "Failed to resolve user 'www-data' to numeric UID\n";
>> +		chown $pveproxy_uid, -1, $socket_addr;
>> +	    });
>> +
>> +	    print "mtunnel started\n";
>> +
>> +	    my $conn = $state->{socket}->accept();
>> +
>> +	    $state->{conn} = $conn;
>> +
>> +	    my $reply_err = sub {
>> +		my ($msg) = @_;
>> +
>> +		my $reply = JSON::encode_json({
>> +		    success => JSON::false,
>> +		    msg => $msg,
>> +		});
>> +		$conn->print("$reply\n");
>> +		$conn->flush();
>> +	    };
>> +
>> +	    my $reply_ok = sub {
>> +		my ($res) = @_;
>> +
>> +		$res->{success} = JSON::true;
>> +		my $reply = JSON::encode_json($res);
>> +		$conn->print("$reply\n");
>> +		$conn->flush();
>> +	    };
>> +
>> +	    while (my $line = <$conn>) {
>> +		chomp $line;
>> +
>> +		# untaint, we validate below if needed
>> +		($line) = $line =~ /^(.*)$/;
>> +		my $parsed = eval { JSON::decode_json($line) };
>> +		if ($@) {
>> +		    $reply_err->("failed to parse command - $@");
>> +		    next;
>> +		}
>> +
>> +		my $cmd = delete $parsed->{cmd};
>> +		if (!defined($cmd)) {
>> +		    $reply_err->("'cmd' missing");
>> +		} elsif (my $handler = $cmd_handlers->{$cmd}) {
>> +		    print "received command '$cmd'\n";
>> +		    eval {
>> +			if ($cmd_desc->{$cmd}) {
>> +			    PVE::JSONSchema::validate($cmd_desc->{$cmd}, $parsed);
>> +			} else {
>> +			    $parsed = {};
>> +			}
>> +			my $res = $run_locked->($handler, $parsed);
>> +			$reply_ok->($res);
>> +		    };
>> +		    $reply_err->("failed to handle '$cmd' command - $@")
>> +			if $@;
>> +		} else {
>> +		    $reply_err->("unknown command '$cmd' given");
>> +		}
>> +
>> +		if ($state->{exit}) {
>> +		    $state->{conn}->close();
>> +		    $state->{socket}->close();
>> +		    last;
>> +		}
>> +	    }
>> +
>> +	    print "mtunnel exited\n";
>> +	};
>> +
>> +	my $ticket = PVE::AccessControl::assemble_tunnel_ticket($authuser, "/socket/$socket_addr");
>> +	my $upid = $rpcenv->fork_worker('qmtunnel', $vmid, $authuser, $realcmd);
>> +
>> +	return {
>> +	    ticket => $ticket,
>> +	    upid => $upid,
>> +	    socket => $socket_addr,
>> +	};
>> +    }});
>> +
>> +__PACKAGE__->register_method({
>> +    name => 'mtunnelwebsocket',
>> +    path => '{vmid}/mtunnelwebsocket',
>> +    method => 'GET',
>> +    proxyto => 'node',
>> +    permissions => {
>> +	description => "You need to pass a ticket valid for the selected socket. Tickets can be created via the mtunnel API call, which will check permissions accordingly.",
>> +	user => 'all', # check inside
>> +    },
>> +    description => 'Migration tunnel endpoint for websocket upgrade - only for internal use by VM migration.',
>> +    parameters => {
>> +	additionalProperties => 0,
>> +	properties => {
>> +	    node => get_standard_option('pve-node'),
>> +	    vmid => get_standard_option('pve-vmid'),
>> +	    socket => {
>> +		type => "string",
>> +		description => "unix socket to forward to",
>> +	    },
>> +	    ticket => {
>> +		type => "string",
>> +		description => "ticket return by initial 'mtunnel' API call, or retrieved via 'ticket' tunnel command",
>> +	    },
>> +	},
>> +    },
>> +    returns => {
>> +	type => "object",
>> +	properties => {
>> +	    port => { type => 'string', optional => 1 },
>> +	    socket => { type => 'string', optional => 1 },
>> +	},
>> +    },
>> +    code => sub {
>> +	my ($param) = @_;
>> +
>> +	my $rpcenv = PVE::RPCEnvironment::get();
>> +	my $authuser = $rpcenv->get_user();
>> +
>> +	my $vmid = $param->{vmid};
>> +	# check VM exists
>> +	PVE::QemuConfig->load_config($vmid);
>> +
>> +	my $socket = $param->{socket};
>> +	PVE::AccessControl::verify_tunnel_ticket($param->{ticket}, $authuser, "/socket/$socket");
>> +
>> +	return { socket => $socket };
>> +    }});
>> +
>>  1;
>> 
> 
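
[editor's note: the tunnel's wire protocol in the quoted main loop — newline-delimited JSON requests carrying a `cmd` field, replies carrying `success` plus either a payload or a `msg` — can be sketched in a few lines. The command name "version" and the reply shape come from the patch; the handler table and function name here are illustrative Python, not the patch's Perl:]

```python
import json

# stand-in for the patch's $cmd_handlers table
HANDLERS = {
    "version": lambda params: {"api": 2, "age": 0},
}

def handle_line(line: str) -> str:
    """Process one newline-delimited JSON command, mirroring the quoted
    main loop: parse errors, a missing 'cmd', and unknown commands all
    yield a success=false reply instead of tearing down the tunnel."""
    try:
        parsed = json.loads(line)
    except ValueError as err:
        return json.dumps({"success": False, "msg": f"failed to parse command - {err}"})
    cmd = parsed.pop("cmd", None)
    if cmd is None:
        return json.dumps({"success": False, "msg": "'cmd' missing"})
    handler = HANDLERS.get(cmd)
    if handler is None:
        return json.dumps({"success": False, "msg": f"unknown command '{cmd}' given"})
    res = handler(parsed)
    res["success"] = True
    return json.dumps(res)
```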