From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 18 Jan 2022 11:19:41 +0100
From: Fabian Grünbichler
To: Fabian Ebner, pve-devel@lists.proxmox.com
References: <20220113100831.34113-1-f.ebner@proxmox.com> <1642425420.013zr4zixi.astroid@nora.none>
Message-Id: <1642500820.1ksez8i2ql.astroid@nora.none>
Subject: Re: [pve-devel] [RFC v10 qemu-server/manager] API for disk import and OVF
List-Id: Proxmox VE development discussion

On January 18, 2022 10:08 am, Fabian Ebner wrote:
> On 17.01.22 at 16:43, Fabian Grünbichler wrote:
>> On January 13, 2022 11:08 am, Fabian Ebner wrote:
>>> Extend qm importdisk/importovf functionality to the API.
>>>
>>> Used Dominic's latest version[0] as a starting point. The GUI part
>>> still needs to be rebased/updated, so it's not included here.
>>>
>>> Changes from v9:
>>>
>>> * Split the patch into smaller parts.
>>>
>>> * Some minor (style) fixes/improvements (see individual patches).
>>>
>>> * Drop the $manifest_only parameter for parse_ovf.
>>>
>>>   Instead, untaint the path when calling file_size_info, which makes
>>>   the call also work via the API, which uses the -T switch. If we do
>>>   want to keep $manifest_only, I'd argue that it should also skip the
>>>   file existence check and not only the file size check. Opinions?
>>>
>>> * Re-use the do_import function.
>>>
>>>   Rather than duplicating most of it. The downside is the need to
>>>   add a new parameter for skipping the configuration update. But I
>>>   suppose the plan is to have qm import switch to the new API at
>>>   some point, and then do_import can be simplified.
>>>
>>> * Instead of adding an import-sources parameter to the API, use a new
>>>   import-from property for the disk that's only available with
>>>   import/alloc-enabled API endpoints via its own version of the schema.
>>>
>>>   This avoids the split across regular drive key parameters and
>>>   'import-sources', which avoids quite a bit of cross-checking
>>>   between the two and parsing/passing around the latter.
>>>
>>>   The big downsides are:
>>>   * Schema handling is a bit messy.
>>>   * Need to special-case print_drive, because we do an intermediate
>>>     parse/print to clean up drive paths. At first glance, this seems
>>>     not too easy to avoid without complicating things elsewhere.
>>>   * Using the import-aware parse_drive in parse_volume, because that
>>>     is used via the foreach_volume iterators handling the parameters
>>>     of the import-enabled endpoints. Could be avoided by using for
>>>     loops with the import-aware parse_drive instead of foreach_volume.
>>>
>>>   Counter-arguments for using a single schema (citing Fabian G.):
>>>   * docs/schema dump/api docs: it shouldn't look like you can put
>>>     that everywhere we use the config schema
>>>   * it shouldn't have nasty side-effects if someone puts it into the
>>>     config
>>>
>>> After all, the 'import-from' disk property approach turned out to be
>>> uglier than I hoped it would.
>>>
>>> My problem with the 'import-sources' API parameter approach (see [0]
>>> for details) is that it requires specifying both
>>>     scsi0: :-1,
>>>     import-sources: scsi0=/path/or/volid/for/source
>>> leading to a less than ideal user interface, and to parameter
>>> handling that needs cross-checks to verify that the right combination
>>> is specified, while passing both around at the same time.
>>>
>>> Another approach would be adding a new special volid syntax using
>>>     my $IMPORT_DISK_RE = qr!^(([^/:\s]+):)import:(.*)$!;
>>> allowing for e.g.
>>>     qm set 126 -scsi1 rbdkvm:import:myzpool:vm-114-disk-0,aio=native
>>>     qm set 126 -scsi2 rbdkvm:import:/dev/zvol/myzpool/vm-114-disk-1,backup=0
>>> Yes, it's a hack, but it would avoid the pain points of both other
>>> approaches and be very simple. See the end of the mail for a POC.
>>
>> the biggest downside is that this 'invades' the storage-plugin-owned
>> volume ID namespace (further than the pre-existing, documented new-disk
>> syntax does) - a volume ID is defined as anything starting with a
>> storage identifier, followed by ':', followed by an arbitrary string
>> (where the semantics of that string are up to the plugin).
>>
>> while I don't think there is a storage plugin out there that somehow
>> starts its volume IDs with 'import:' by default, it still doesn't feel
>> very nice.. there might be plugins that are not very strict in their
>> parsing and might accept such an 'import volid' as a regular volid,
>> parsing the latter part (which can be a regular volid after all!)
>> instead of the full thing. That would make code paths that should not
>> accept import volids accept them and just use the source without
>> importing, with potentially disastrous consequences :-/
>>
>
> Ok, good point!
>
>> that being said - other than this bigger conceptual question, I only
>> found nits/fixup/followup material - so if we want to take the current
>> RFC, I'd give it a more in-depth test spin in the next few days and
>> apply it with the small stuff addressed ;)
>>
>
> Fine by me. I could also re-spin with the import-sources API parameter
> if that's preferred. If we go with the import-from approach, besides
> addressing your nits, I'd also not make parse_volume unconditionally
> parse with the extended schema, but replace the foreach_volume
> iterators in the appropriate places.
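To make the namespace collision concrete, here is a minimal standalone Perl sketch (my own illustration, not code from the series) showing how the proposed $IMPORT_DISK_RE reading and a naive split-on-first-colon volid parse disagree about the very same string:

```perl
use strict;
use warnings;

# The proposed special import syntax from the cover letter.
my $IMPORT_DISK_RE = qr!^(([^/:\s]+):)import:(.*)$!;

my $volid = 'rbdkvm:import:myzpool:vm-114-disk-0';

# Import-aware parsing: storage 'rbdkvm', import source
# 'myzpool:vm-114-disk-0' (itself a valid volume name).
if ($volid =~ $IMPORT_DISK_RE) {
    print "import: storage '$2', source '$3'\n";
}

# A lax plugin-side parse that just splits on the first ':' sees the
# same string as a regular volid on storage 'rbdkvm' with volume name
# 'import:myzpool:vm-114-disk-0' - so a code path that should reject
# import volids could silently accept one.
my ($storeid, $volname) = split /:/, $volid, 2;
print "naive: storage '$storeid', volume name '$volname'\n";
```

Both parses "succeed", which is exactly the ambiguity argued about above: nothing in the volid grammar reserves the 'import:' prefix for qemu-server.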
I actually prefer the import-from approach to the old one, even with the
duplicate-schema downside - it makes it more explicit where we want to
allow 'magically' allocating new volumes, which was always a bit
confusing when you didn't have a clear picture of the code in your
head..

>
>>>
>>> [0]: https://lists.proxmox.com/pipermail/pve-devel/2021-June/048564.html
>>>
>>> pve-manager:
>>>
>>> Fabian Ebner (1):
>>>   api: nodes: add readovf endpoint
>>>
>>>  PVE/API2/Nodes.pm | 7 +++++++
>>>  1 file changed, 7 insertions(+)
>>>
>>> qemu-server:
>>>
>>> Dominic Jäger (1):
>>>   api: support VM disk import
>>>
>>> Fabian Ebner (6):
>>>   schema: add pve-volume-id-or-absolute-path
>>>   parse ovf: untaint path when calling file_size_info
>>>   api: add endpoint for parsing .ovf files
>>>   image convert: allow block device as source
>>>   schema: drive: use separate schema when disk allocation is possible
>>>   api: create disks: factor out common part from if/else
>>>
>>>  PVE/API2/Qemu.pm             | 86 +++++++++++++++++++++++----------
>>>  PVE/API2/Qemu/Makefile       |  2 +-
>>>  PVE/API2/Qemu/OVF.pm         | 55 +++++++++++++++++++++
>>>  PVE/QemuConfig.pm            |  2 +-
>>>  PVE/QemuServer.pm            | 57 +++++++++++++++++++---
>>>  PVE/QemuServer/Drive.pm      | 92 +++++++++++++++++++++----------
>>>  PVE/QemuServer/ImportDisk.pm |  2 +-
>>>  PVE/QemuServer/OVF.pm        |  9 ++--
>>>  8 files changed, 242 insertions(+), 63 deletions(-)
>>>  create mode 100644 PVE/API2/Qemu/OVF.pm
>>>
>>> --
>>> 2.30.2
>>>
>>> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
>>> index 6992f6f..6a22899 100644
>>> --- a/PVE/API2/Qemu.pm
>>> +++ b/PVE/API2/Qemu.pm
>>> @@ -21,8 +21,9 @@ use PVE::ReplicationConfig;
>>>  use PVE::GuestHelpers;
>>>  use PVE::QemuConfig;
>>>  use PVE::QemuServer;
>>> -use PVE::QemuServer::Drive;
>>>  use PVE::QemuServer::CPUConfig;
>>> +use PVE::QemuServer::Drive;
>>> +use PVE::QemuServer::ImportDisk;
>>>  use PVE::QemuServer::Monitor qw(mon_cmd);
>>>  use PVE::QemuServer::Machine;
>>>  use PVE::QemuMigrate;
>>> @@ -64,6 +65,7 @@ my $resolve_cdrom_alias = sub {
>>>  };
>>>
>>>  my $NEW_DISK_RE = qr!^(([^/:\s]+):)?(\d+(\.\d+)?)$!;
>>> +my $IMPORT_DISK_RE = qr!^(([^/:\s]+):)import:(.*)$!;
>>>  my $check_storage_access = sub {
>>>      my ($rpcenv, $authuser, $storecfg, $vmid, $settings, $default_storage) = @_;
>>>
>>> @@ -86,6 +88,9 @@ my $check_storage_access = sub {
>>>      my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
>>>      raise_param_exc({ storage => "storage '$storeid' does not support vm images"})
>>>          if !$scfg->{content}->{images};
>>> +    } elsif (!$isCDROM && ($volid =~ $IMPORT_DISK_RE)) {
>>> +        warn "check access matched: $2 and $3\n";
>>> +        warn "TODO implement import access check!\n";
>>>      } else {
>>>      PVE::Storage::check_volume_access($rpcenv, $authuser, $storecfg, $vmid, $volid);
>>>      }
>>> @@ -212,6 +217,29 @@ my $create_disks = sub {
>>>      $disk->{size} = PVE::Tools::convert_size($size, 'kb' => 'b');
>>>      delete $disk->{format}; # no longer needed
>>>      $res->{$ds} = PVE::QemuServer::print_drive($disk);
>>> +    } elsif ($volid =~ $IMPORT_DISK_RE) {
>>> +        my ($storeid, $source) = ($2, $3);
>>> +
>>> +        $source = PVE::Storage::abs_filesystem_path($storecfg, $source, 1);
>>> +        my $src_size = PVE::Storage::file_size_info($source);
>>> +        die "Could not get file size of $source" if !defined($src_size);
>>> +
>>> +        my (undef, $dst_volid) = PVE::QemuServer::ImportDisk::do_import(
>>> +            $source,
>>> +            $vmid,
>>> +            $storeid,
>>> +            {
>>> +                drive_name => $ds,
>>> +                format => $disk->{format},
>>> +                'skip-config-update' => 1,
>>> +            },
>>> +        );
>>> +
>>> +        push @$vollist, $dst_volid;
>>> +        $disk->{file} = $dst_volid;
>>> +        $disk->{size} = $src_size;
>>> +        delete $disk->{format}; # no longer needed
>>> +        $res->{$ds} = PVE::QemuServer::print_drive($disk);
>>>      } else {
>>>
>>>      PVE::Storage::check_volume_access($rpcenv, $authuser, $storecfg, $vmid, $volid);
>>> diff --git a/PVE/QemuServer/ImportDisk.pm b/PVE/QemuServer/ImportDisk.pm
>>> index 51ad52e..7557cac 100755
>>> --- a/PVE/QemuServer/ImportDisk.pm
>>> +++ b/PVE/QemuServer/ImportDisk.pm
>>> @@ -71,7 +71,7 @@ sub do_import {
>>>      PVE::Storage::activate_volumes($storecfg, [$dst_volid]);
>>>      PVE::QemuServer::qemu_img_convert($src_path, $dst_volid, $src_size, undef, $zeroinit);
>>>      PVE::Storage::deactivate_volumes($storecfg, [$dst_volid]);
>>> -    PVE::QemuConfig->lock_config($vmid, $create_drive);
>>> +    PVE::QemuConfig->lock_config($vmid, $create_drive) if !$params->{'skip-config-update'};
>>>  };
>>>  if (my $err = $@) {
>>>      eval { PVE::Storage::vdisk_free($storecfg, $dst_volid) };
>>> --
>>> 2.30.2
>>>
>>>
>>> _______________________________________________
>>> pve-devel mailing list
>>> pve-devel@lists.proxmox.com
>>> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>>>
>>
>