public inbox for pve-devel@lists.proxmox.com
 help / color / mirror / Atom feed
* [pve-devel] [PATCH storage/qemu-server/pve-manager] implement ova/ovf import for directory type storages
@ 2024-04-16 13:18 Dominik Csapak
  2024-04-16 13:18 ` [pve-devel] [PATCH storage 1/9] copy OVF.pm from qemu-server Dominik Csapak
                   ` (17 more replies)
  0 siblings, 18 replies; 67+ messages in thread
From: Dominik Csapak @ 2024-04-16 13:18 UTC (permalink / raw)
  To: pve-devel

This series enables importing ova/ovf from directory based storages,
inclusive upload/download via the webui (ova only).

It also improves the ovf importer by parsing the ostype, nics, bootorder
(and firmware from vmware exported files).

I currently opted to move the OVF.pm to pve-storage, since there is no
real other place where we could put it. Building a seperate package
from qemu-servers git repo would also not be ideal, since we still
have a cyclic dev dependency then
(If someone has a better idea how to handle that, please do tell, and
i can do that in a v2)

There are surely some wrinkles left i did not think of, but all in all,
it should be pretty usable. E.g. i downloaded some ovas, uploaded them
on my cephfs in my virtual cluster, and successfully imported that with
live-import.

The biggest caveat when importing from ovas is that we have to
temporarily extract the disk images. I opted for doing that into the
import storage, but if we have a better idea where to put that, i can
implement it in a v2 (or as a follow up). For example, we could add a
new 'tmpdir' parameter to the create call and use that for extractig.

pve-storage:

Dominik Csapak (9):
  copy OVF.pm from qemu-server
  plugin: dir: implement import content type
  plugin: dir: handle ova files for import
  ovf: implement parsing the ostype
  ovf: implement parsing out firmware type
  ovf: implement rudimentary boot order
  ovf: implement parsing nics
  api: allow ova upload/download
  plugin: enable import for nfs/btfs/cifs/cephfs

 src/PVE/API2/Storage/Status.pm                |  15 +-
 src/PVE/Storage.pm                            |  78 +++-
 src/PVE/Storage/BTRFSPlugin.pm                |   5 +
 src/PVE/Storage/CIFSPlugin.pm                 |   6 +-
 src/PVE/Storage/CephFSPlugin.pm               |   6 +-
 src/PVE/Storage/DirPlugin.pm                  |  53 ++-
 src/PVE/Storage/Makefile                      |   1 +
 src/PVE/Storage/NFSPlugin.pm                  |   6 +-
 src/PVE/Storage/OVF.pm                        | 381 ++++++++++++++++++
 src/PVE/Storage/Plugin.pm                     |  23 +-
 src/test/Makefile                             |   5 +-
 src/test/ovf_manifests/Win10-Liz-disk1.vmdk   | Bin 0 -> 65536 bytes
 src/test/ovf_manifests/Win10-Liz.ovf          | 142 +++++++
 .../ovf_manifests/Win10-Liz_no_default_ns.ovf | 143 +++++++
 .../ovf_manifests/Win_2008_R2_two-disks.ovf   | 145 +++++++
 src/test/ovf_manifests/disk1.vmdk             | Bin 0 -> 65536 bytes
 src/test/ovf_manifests/disk2.vmdk             | Bin 0 -> 65536 bytes
 src/test/parse_volname_test.pm                |  13 +
 src/test/path_to_volume_id_test.pm            |  16 +
 src/test/run_ovf_tests.pl                     |  83 ++++
 20 files changed, 1112 insertions(+), 9 deletions(-)
 create mode 100644 src/PVE/Storage/OVF.pm
 create mode 100644 src/test/ovf_manifests/Win10-Liz-disk1.vmdk
 create mode 100755 src/test/ovf_manifests/Win10-Liz.ovf
 create mode 100755 src/test/ovf_manifests/Win10-Liz_no_default_ns.ovf
 create mode 100755 src/test/ovf_manifests/Win_2008_R2_two-disks.ovf
 create mode 100644 src/test/ovf_manifests/disk1.vmdk
 create mode 100644 src/test/ovf_manifests/disk2.vmdk
 create mode 100755 src/test/run_ovf_tests.pl

qemu-server:

Dominik Csapak (3):
  api: delete unused OVF.pm
  use OVF from Storage
  api: create: implement extracting disks when needed for import-from

 PVE/API2/Qemu.pm                              |  26 +-
 PVE/API2/Qemu/Makefile                        |   2 +-
 PVE/API2/Qemu/OVF.pm                          |  53 ----
 PVE/CLI/qm.pm                                 |   4 +-
 PVE/QemuServer.pm                             |   5 +-
 PVE/QemuServer/Helpers.pm                     |   9 +
 PVE/QemuServer/Makefile                       |   1 -
 PVE/QemuServer/OVF.pm                         | 242 ------------------
 test/Makefile                                 |   5 +-
 test/ovf_manifests/Win10-Liz-disk1.vmdk       | Bin 65536 -> 0 bytes
 test/ovf_manifests/Win10-Liz.ovf              | 142 ----------
 .../ovf_manifests/Win10-Liz_no_default_ns.ovf | 142 ----------
 test/ovf_manifests/Win_2008_R2_two-disks.ovf  | 145 -----------
 test/ovf_manifests/disk1.vmdk                 | Bin 65536 -> 0 bytes
 test/ovf_manifests/disk2.vmdk                 | Bin 65536 -> 0 bytes
 test/run_ovf_tests.pl                         |  71 -----
 16 files changed, 37 insertions(+), 810 deletions(-)
 delete mode 100644 PVE/API2/Qemu/OVF.pm
 delete mode 100644 PVE/QemuServer/OVF.pm
 delete mode 100644 test/ovf_manifests/Win10-Liz-disk1.vmdk
 delete mode 100755 test/ovf_manifests/Win10-Liz.ovf
 delete mode 100755 test/ovf_manifests/Win10-Liz_no_default_ns.ovf
 delete mode 100755 test/ovf_manifests/Win_2008_R2_two-disks.ovf
 delete mode 100644 test/ovf_manifests/disk1.vmdk
 delete mode 100644 test/ovf_manifests/disk2.vmdk
 delete mode 100755 test/run_ovf_tests.pl

pve-manager:

Dominik Csapak (4):
  ui: fix special 'import' icon for non-esxi storages
  ui: guest import: add ova-needs-extracting warning text
  ui: enable import content type for relevant storages
  ui: enable upload/download buttons for 'import' type storages

 www/manager6/Utils.js                    | 3 ++-
 www/manager6/form/ContentTypeSelector.js | 2 +-
 www/manager6/storage/Browser.js          | 7 ++++++-
 www/manager6/storage/CephFSEdit.js       | 2 +-
 www/manager6/window/GuestImport.js       | 1 +
 www/manager6/window/UploadToStorage.js   | 1 +
 6 files changed, 12 insertions(+), 4 deletions(-)

-- 
2.39.2





^ permalink raw reply	[flat|nested] 67+ messages in thread

* [pve-devel] [PATCH storage 1/9] copy OVF.pm from qemu-server
  2024-04-16 13:18 [pve-devel] [PATCH storage/qemu-server/pve-manager] implement ova/ovf import for directory type storages Dominik Csapak
@ 2024-04-16 13:18 ` Dominik Csapak
  2024-04-16 15:02   ` Thomas Lamprecht
  2024-04-16 13:18 ` [pve-devel] [PATCH storage 2/9] plugin: dir: implement import content type Dominik Csapak
                   ` (16 subsequent siblings)
  17 siblings, 1 reply; 67+ messages in thread
From: Dominik Csapak @ 2024-04-16 13:18 UTC (permalink / raw)
  To: pve-devel

copies the OVF.pm and relevant ovf tests from qemu-server.
We need it here, and it uses PVE::Storage already, and since there is no
intermediary package/repository we could put it, it seems fitting in
here.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 src/PVE/Storage/Makefile                      |   1 +
 src/PVE/Storage/OVF.pm                        | 242 ++++++++++++++++++
 src/test/Makefile                             |   5 +-
 src/test/ovf_manifests/Win10-Liz-disk1.vmdk   | Bin 0 -> 65536 bytes
 src/test/ovf_manifests/Win10-Liz.ovf          | 142 ++++++++++
 .../ovf_manifests/Win10-Liz_no_default_ns.ovf | 142 ++++++++++
 .../ovf_manifests/Win_2008_R2_two-disks.ovf   | 145 +++++++++++
 src/test/ovf_manifests/disk1.vmdk             | Bin 0 -> 65536 bytes
 src/test/ovf_manifests/disk2.vmdk             | Bin 0 -> 65536 bytes
 src/test/run_ovf_tests.pl                     |  71 +++++
 10 files changed, 747 insertions(+), 1 deletion(-)
 create mode 100644 src/PVE/Storage/OVF.pm
 create mode 100644 src/test/ovf_manifests/Win10-Liz-disk1.vmdk
 create mode 100755 src/test/ovf_manifests/Win10-Liz.ovf
 create mode 100755 src/test/ovf_manifests/Win10-Liz_no_default_ns.ovf
 create mode 100755 src/test/ovf_manifests/Win_2008_R2_two-disks.ovf
 create mode 100644 src/test/ovf_manifests/disk1.vmdk
 create mode 100644 src/test/ovf_manifests/disk2.vmdk
 create mode 100755 src/test/run_ovf_tests.pl

diff --git a/src/PVE/Storage/Makefile b/src/PVE/Storage/Makefile
index d5cc942..2daa0da 100644
--- a/src/PVE/Storage/Makefile
+++ b/src/PVE/Storage/Makefile
@@ -14,6 +14,7 @@ SOURCES= \
 	PBSPlugin.pm \
 	BTRFSPlugin.pm \
 	LvmThinPlugin.pm \
+	OVF.pm \
 	ESXiPlugin.pm
 
 .PHONY: install
diff --git a/src/PVE/Storage/OVF.pm b/src/PVE/Storage/OVF.pm
new file mode 100644
index 0000000..90ca453
--- /dev/null
+++ b/src/PVE/Storage/OVF.pm
@@ -0,0 +1,242 @@
+# Open Virtualization Format import routines
+# https://www.dmtf.org/standards/ovf
+package PVE::Storage::OVF;
+
+use strict;
+use warnings;
+
+use XML::LibXML;
+use File::Spec;
+use File::Basename;
+use Data::Dumper;
+use Cwd 'realpath';
+
+use PVE::Tools;
+use PVE::Storage;
+
+# map OVF resources types to descriptive strings
+# this will allow us to explore the xml tree without using magic numbers
+# http://schemas.dmtf.org/wbem/cim-html/2/CIM_ResourceAllocationSettingData.html
+my @resources = (
+    { id => 1, dtmf_name => 'Other' },
+    { id => 2, dtmf_name => 'Computer System' },
+    { id => 3, dtmf_name => 'Processor' },
+    { id => 4, dtmf_name => 'Memory' },
+    { id => 5, dtmf_name => 'IDE Controller', pve_type => 'ide' },
+    { id => 6, dtmf_name => 'Parallel SCSI HBA', pve_type => 'scsi' },
+    { id => 7, dtmf_name => 'FC HBA' },
+    { id => 8, dtmf_name => 'iSCSI HBA' },
+    { id => 9, dtmf_name => 'IB HCA' },
+    { id => 10, dtmf_name => 'Ethernet Adapter' },
+    { id => 11, dtmf_name => 'Other Network Adapter' },
+    { id => 12, dtmf_name => 'I/O Slot' },
+    { id => 13, dtmf_name => 'I/O Device' },
+    { id => 14, dtmf_name => 'Floppy Drive' },
+    { id => 15, dtmf_name => 'CD Drive' },
+    { id => 16, dtmf_name => 'DVD drive' },
+    { id => 17, dtmf_name => 'Disk Drive' },
+    { id => 18, dtmf_name => 'Tape Drive' },
+    { id => 19, dtmf_name => 'Storage Extent' },
+    { id => 20, dtmf_name => 'Other storage device', pve_type => 'sata'},
+    { id => 21, dtmf_name => 'Serial port' },
+    { id => 22, dtmf_name => 'Parallel port' },
+    { id => 23, dtmf_name => 'USB Controller' },
+    { id => 24, dtmf_name => 'Graphics controller' },
+    { id => 25, dtmf_name => 'IEEE 1394 Controller' },
+    { id => 26, dtmf_name => 'Partitionable Unit' },
+    { id => 27, dtmf_name => 'Base Partitionable Unit' },
+    { id => 28, dtmf_name => 'Power' },
+    { id => 29, dtmf_name => 'Cooling Capacity' },
+    { id => 30, dtmf_name => 'Ethernet Switch Port' },
+    { id => 31, dtmf_name => 'Logical Disk' },
+    { id => 32, dtmf_name => 'Storage Volume' },
+    { id => 33, dtmf_name => 'Ethernet Connection' },
+    { id => 34, dtmf_name => 'DMTF reserved' },
+    { id => 35, dtmf_name => 'Vendor Reserved'}
+);
+
+sub find_by {
+    my ($key, $param) = @_;
+    foreach my $resource (@resources) {
+	if ($resource->{$key} eq $param) {
+	    return ($resource);
+	}
+    }
+    return;
+}
+
+sub dtmf_name_to_id {
+    my ($dtmf_name) = @_;
+    my $found = find_by('dtmf_name', $dtmf_name);
+    if ($found) {
+	return $found->{id};
+    } else {
+	return;
+    }
+}
+
+sub id_to_pve {
+    my ($id) = @_;
+    my $resource = find_by('id', $id);
+    if ($resource) {
+	return $resource->{pve_type};
+    } else {
+	return;
+    }
+}
+
+# returns two references, $qm which holds qm.conf style key/values, and \@disks
+sub parse_ovf {
+    my ($ovf, $debug) = @_;
+
+    my $dom = XML::LibXML->load_xml(location => $ovf, no_blanks => 1);
+
+    # register the xml namespaces in a xpath context object
+    # 'ovf' is the default namespace so it will prepended to each xml element
+    my $xpc = XML::LibXML::XPathContext->new($dom);
+    $xpc->registerNs('ovf', 'http://schemas.dmtf.org/ovf/envelope/1');
+    $xpc->registerNs('rasd', 'http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData');
+    $xpc->registerNs('vssd', 'http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData');
+
+
+    # hash to save qm.conf parameters
+    my $qm;
+
+    #array to save a disk list
+    my @disks;
+
+    # easy xpath
+    # walk down the dom until we find the matching XML element
+    my $xpath_find_name = "/ovf:Envelope/ovf:VirtualSystem/ovf:Name";
+    my $ovf_name = $xpc->findvalue($xpath_find_name);
+
+    if ($ovf_name) {
+	# PVE::QemuServer::confdesc requires a valid DNS name
+	($qm->{name} = $ovf_name) =~ s/[^a-zA-Z0-9\-\.]//g;
+    } else {
+	warn "warning: unable to parse the VM name in this OVF manifest, generating a default value\n";
+    }
+
+    # middle level xpath
+    # element[child] search the elements which have this [child]
+    my $processor_id = dtmf_name_to_id('Processor');
+    my $xpath_find_vcpu_count = "/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/ovf:Item[rasd:ResourceType=${processor_id}]/rasd:VirtualQuantity";
+    $qm->{'cores'} = $xpc->findvalue($xpath_find_vcpu_count);
+
+    my $memory_id = dtmf_name_to_id('Memory');
+    my $xpath_find_memory = ("/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/ovf:Item[rasd:ResourceType=${memory_id}]/rasd:VirtualQuantity");
+    $qm->{'memory'} = $xpc->findvalue($xpath_find_memory);
+
+    # middle level xpath
+    # here we expect multiple results, so we do not read the element value with
+    # findvalue() but store multiple elements with findnodes()
+    my $disk_id = dtmf_name_to_id('Disk Drive');
+    my $xpath_find_disks="/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/ovf:Item[rasd:ResourceType=${disk_id}]";
+    my @disk_items = $xpc->findnodes($xpath_find_disks);
+
+    # disks metadata is split in four different xml elements:
+    # * as an Item node of type DiskDrive in the VirtualHardwareSection
+    # * as an Disk node in the DiskSection
+    # * as a File node in the References section
+    # * each Item node also holds a reference to its owning controller
+    #
+    # we iterate over the list of Item nodes of type disk drive, and for each item,
+    # find the corresponding Disk node, and File node and owning controller
+    # when all the nodes has been found out, we copy the relevant information to
+    # a $pve_disk hash ref, which we push to @disks;
+
+    foreach my $item_node (@disk_items) {
+
+	my $disk_node;
+	my $file_node;
+	my $controller_node;
+	my $pve_disk;
+
+	print "disk item:\n", $item_node->toString(1), "\n" if $debug;
+
+	# from Item, find corresponding Disk node
+	# here the dot means the search should start from the current element in dom
+	my $host_resource = $xpc->findvalue('rasd:HostResource', $item_node);
+	my $disk_section_path;
+	my $disk_id;
+
+	# RFC 3986 "2.3.  Unreserved Characters"
+	my $valid_uripath_chars = qr/[[:alnum:]]|[\-\._~]/;
+
+	if ($host_resource =~ m|^ovf:/(${valid_uripath_chars}+)/(${valid_uripath_chars}+)$|) {
+	    $disk_section_path = $1;
+	    $disk_id = $2;
+	} else {
+	   warn "invalid host ressource $host_resource, skipping\n";
+	   next;
+	}
+	printf "disk section path: $disk_section_path and disk id: $disk_id\n" if $debug;
+
+	# tricky xpath
+	# @ means we filter the result query based on a the value of an item attribute ( @ = attribute)
+	# @ needs to be escaped to prevent Perl double quote interpolation
+	my $xpath_find_fileref = sprintf("/ovf:Envelope/ovf:DiskSection/\
+ovf:Disk[\@ovf:diskId='%s']/\@ovf:fileRef", $disk_id);
+	my $fileref = $xpc->findvalue($xpath_find_fileref);
+
+	my $valid_url_chars = qr@${valid_uripath_chars}|/@;
+	if (!$fileref || $fileref !~ m/^${valid_url_chars}+$/) {
+	    warn "invalid host ressource $host_resource, skipping\n";
+	    next;
+	}
+
+	# from Disk Node, find corresponding filepath
+	my $xpath_find_filepath = sprintf("/ovf:Envelope/ovf:References/ovf:File[\@ovf:id='%s']/\@ovf:href", $fileref);
+	my $filepath = $xpc->findvalue($xpath_find_filepath);
+	if (!$filepath) {
+	    warn "invalid file reference $fileref, skipping\n";
+	    next;
+	}
+	print "file path: $filepath\n" if $debug;
+
+	# from Item, find owning Controller type
+	my $controller_id = $xpc->findvalue('rasd:Parent', $item_node);
+	my $xpath_find_parent_type = sprintf("/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/\
+ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
+	my $controller_type = $xpc->findvalue($xpath_find_parent_type);
+	if (!$controller_type) {
+	    warn "invalid or missing controller: $controller_type, skipping\n";
+	    next;
+	}
+	print "owning controller type: $controller_type\n" if $debug;
+
+	# extract corresponding Controller node details
+	my $adress_on_controller = $xpc->findvalue('rasd:AddressOnParent', $item_node);
+	my $pve_disk_address = id_to_pve($controller_type) . $adress_on_controller;
+
+	# resolve symlinks and relative path components
+	# and die if the diskimage is not somewhere under the $ovf path
+	my $ovf_dir = realpath(dirname(File::Spec->rel2abs($ovf)));
+	my $backing_file_path = realpath(join ('/', $ovf_dir, $filepath));
+	if ($backing_file_path !~ /^\Q${ovf_dir}\E/) {
+	    die "error parsing $filepath, are you using a symlink ?\n";
+	}
+
+	if (!-e $backing_file_path) {
+	    die "error parsing $filepath, file seems not to exist at $backing_file_path\n";
+	}
+
+	($backing_file_path) = $backing_file_path =~ m|^(/.*)|; # untaint
+
+	my $virtual_size = PVE::Storage::file_size_info($backing_file_path);
+	die "error parsing $backing_file_path, cannot determine file size\n"
+	    if !$virtual_size;
+
+	$pve_disk = {
+	    disk_address => $pve_disk_address,
+	    backing_file => $backing_file_path,
+	    virtual_size => $virtual_size
+	};
+	push @disks, $pve_disk;
+
+    }
+
+    return {qm => $qm, disks => \@disks};
+}
+
+1;
diff --git a/src/test/Makefile b/src/test/Makefile
index c54b10f..12991da 100644
--- a/src/test/Makefile
+++ b/src/test/Makefile
@@ -1,6 +1,6 @@
 all: test
 
-test: test_zfspoolplugin test_disklist test_bwlimit test_plugin
+test: test_zfspoolplugin test_disklist test_bwlimit test_plugin test_ovf
 
 test_zfspoolplugin: run_test_zfspoolplugin.pl
 	./run_test_zfspoolplugin.pl
@@ -13,3 +13,6 @@ test_bwlimit: run_bwlimit_tests.pl
 
 test_plugin: run_plugin_tests.pl
 	./run_plugin_tests.pl
+
+test_ovf: run_ovf_tests.pl
+	./run_ovf_tests.pl
diff --git a/src/test/ovf_manifests/Win10-Liz-disk1.vmdk b/src/test/ovf_manifests/Win10-Liz-disk1.vmdk
new file mode 100644
index 0000000000000000000000000000000000000000..662354a3d1333a2f6c4364005e53bfe7cd8b9044
GIT binary patch
literal 65536
zcmeIvy>HV%7zbeUF`dK)46s<q+^B&Pi6H|eMIb;zP1VdMK8V$P$u?2T)IS|NNta69
zSXw<No$qu%-}$}AUq|21A0<ihr0Gwa-nQ%QGfCR@wmshsN%A;JUhL<u_T%+U7Sd<o
zW^TMU0^M{}R2S(eR@1Ur*Q@eVF^^#r%c@u{hyC#J%V?Nq?*?z)=UG^1Wn9+n(yx6B
z(=ujtJiA)QVP~;guI5EOE2iV-%_??6=%y!^b+aeU_aA6Z4X2azC>{U!a5_FoJCkDB
zKRozW{5{B<Li)YUBEQ&fJe$RRZCRbA$5|CacQiT<A<uvIHbq(g$>yIY=etVNVcI$B
zY@^?CwTN|j)tg?;i)G&AZFqPqoW(5P2K~XUq>9sqVVe!!?y@Y;)^#k~TefEvd2_XU
z^M@5mfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk4^iOdL%ftb
z5g<T-009C72;3>~`p!f^fB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZ
zfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&U
zAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C7
z2oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N
p0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK;Zui`~%h>XmtPp

literal 0
HcmV?d00001

diff --git a/src/test/ovf_manifests/Win10-Liz.ovf b/src/test/ovf_manifests/Win10-Liz.ovf
new file mode 100755
index 0000000..bf4b41a
--- /dev/null
+++ b/src/test/ovf_manifests/Win10-Liz.ovf
@@ -0,0 +1,142 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--Generated by VMware ovftool 4.1.0 (build-2982904), UTC time: 2017-02-07T13:50:15.265014Z-->
+<Envelope vmw:buildId="build-2982904" xmlns="http://schemas.dmtf.org/ovf/envelope/1" xmlns:cim="http://schemas.dmtf.org/wbem/wscim/1/common" xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1" xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" xmlns:vmw="http://www.vmware.com/schema/ovf" xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
+  <References>
+    <File ovf:href="Win10-Liz-disk1.vmdk" ovf:id="file1" ovf:size="9155243008"/>
+  </References>
+  <DiskSection>
+    <Info>Virtual disk information</Info>
+    <Disk ovf:capacity="128" ovf:capacityAllocationUnits="byte * 2^30" ovf:diskId="vmdisk1" ovf:fileRef="file1" ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized" ovf:populatedSize="16798056448"/>
+  </DiskSection>
+  <NetworkSection>
+    <Info>The list of logical networks</Info>
+    <Network ovf:name="bridged">
+      <Description>The bridged network</Description>
+    </Network>
+  </NetworkSection>
+  <VirtualSystem ovf:id="vm">
+    <Info>A virtual machine</Info>
+    <Name>Win10-Liz</Name>
+    <OperatingSystemSection ovf:id="1" vmw:osType="windows9_64Guest">
+      <Info>The kind of installed guest operating system</Info>
+    </OperatingSystemSection>
+    <VirtualHardwareSection>
+      <Info>Virtual hardware requirements</Info>
+      <System>
+        <vssd:ElementName>Virtual Hardware Family</vssd:ElementName>
+        <vssd:InstanceID>0</vssd:InstanceID>
+        <vssd:VirtualSystemIdentifier>Win10-Liz</vssd:VirtualSystemIdentifier>
+        <vssd:VirtualSystemType>vmx-11</vssd:VirtualSystemType>
+      </System>
+      <Item>
+        <rasd:AllocationUnits>hertz * 10^6</rasd:AllocationUnits>
+        <rasd:Description>Number of Virtual CPUs</rasd:Description>
+        <rasd:ElementName>4 virtual CPU(s)</rasd:ElementName>
+        <rasd:InstanceID>1</rasd:InstanceID>
+        <rasd:ResourceType>3</rasd:ResourceType>
+        <rasd:VirtualQuantity>4</rasd:VirtualQuantity>
+      </Item>
+      <Item>
+        <rasd:AllocationUnits>byte * 2^20</rasd:AllocationUnits>
+        <rasd:Description>Memory Size</rasd:Description>
+        <rasd:ElementName>6144MB of memory</rasd:ElementName>
+        <rasd:InstanceID>2</rasd:InstanceID>
+        <rasd:ResourceType>4</rasd:ResourceType>
+        <rasd:VirtualQuantity>6144</rasd:VirtualQuantity>
+      </Item>
+      <Item>
+        <rasd:Address>0</rasd:Address>
+        <rasd:Description>SATA Controller</rasd:Description>
+        <rasd:ElementName>sataController0</rasd:ElementName>
+        <rasd:InstanceID>3</rasd:InstanceID>
+        <rasd:ResourceSubType>vmware.sata.ahci</rasd:ResourceSubType>
+        <rasd:ResourceType>20</rasd:ResourceType>
+      </Item>
+      <Item ovf:required="false">
+        <rasd:Address>0</rasd:Address>
+        <rasd:Description>USB Controller (XHCI)</rasd:Description>
+        <rasd:ElementName>usb3</rasd:ElementName>
+        <rasd:InstanceID>4</rasd:InstanceID>
+        <rasd:ResourceSubType>vmware.usb.xhci</rasd:ResourceSubType>
+        <rasd:ResourceType>23</rasd:ResourceType>
+      </Item>
+      <Item ovf:required="false">
+        <rasd:Address>0</rasd:Address>
+        <rasd:Description>USB Controller (EHCI)</rasd:Description>
+        <rasd:ElementName>usb</rasd:ElementName>
+        <rasd:InstanceID>5</rasd:InstanceID>
+        <rasd:ResourceSubType>vmware.usb.ehci</rasd:ResourceSubType>
+        <rasd:ResourceType>23</rasd:ResourceType>
+        <vmw:Config ovf:required="false" vmw:key="ehciEnabled" vmw:value="true"/>
+      </Item>
+      <Item>
+        <rasd:Address>0</rasd:Address>
+        <rasd:Description>SCSI Controller</rasd:Description>
+        <rasd:ElementName>scsiController0</rasd:ElementName>
+        <rasd:InstanceID>6</rasd:InstanceID>
+        <rasd:ResourceSubType>lsilogicsas</rasd:ResourceSubType>
+        <rasd:ResourceType>6</rasd:ResourceType>
+      </Item>
+      <Item ovf:required="false">
+        <rasd:AutomaticAllocation>true</rasd:AutomaticAllocation>
+        <rasd:ElementName>serial0</rasd:ElementName>
+        <rasd:InstanceID>7</rasd:InstanceID>
+        <rasd:ResourceType>21</rasd:ResourceType>
+        <vmw:Config ovf:required="false" vmw:key="yieldOnPoll" vmw:value="false"/>
+      </Item>
+      <Item>
+        <rasd:AddressOnParent>0</rasd:AddressOnParent>
+        <rasd:ElementName>disk0</rasd:ElementName>
+        <rasd:HostResource>ovf:/disk/vmdisk1</rasd:HostResource>
+        <rasd:InstanceID>8</rasd:InstanceID>
+        <rasd:Parent>6</rasd:Parent>
+        <rasd:ResourceType>17</rasd:ResourceType>
+      </Item>
+      <Item>
+        <rasd:AddressOnParent>2</rasd:AddressOnParent>
+        <rasd:AutomaticAllocation>true</rasd:AutomaticAllocation>
+        <rasd:Connection>bridged</rasd:Connection>
+        <rasd:Description>E1000e ethernet adapter on &quot;bridged&quot;</rasd:Description>
+        <rasd:ElementName>ethernet0</rasd:ElementName>
+        <rasd:InstanceID>9</rasd:InstanceID>
+        <rasd:ResourceSubType>E1000e</rasd:ResourceSubType>
+        <rasd:ResourceType>10</rasd:ResourceType>
+        <vmw:Config ovf:required="false" vmw:key="wakeOnLanEnabled" vmw:value="false"/>
+      </Item>
+      <Item ovf:required="false">
+        <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
+        <rasd:ElementName>sound</rasd:ElementName>
+        <rasd:InstanceID>10</rasd:InstanceID>
+        <rasd:ResourceSubType>vmware.soundcard.hdaudio</rasd:ResourceSubType>
+        <rasd:ResourceType>1</rasd:ResourceType>
+      </Item>
+      <Item ovf:required="false">
+        <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
+        <rasd:ElementName>video</rasd:ElementName>
+        <rasd:InstanceID>11</rasd:InstanceID>
+        <rasd:ResourceType>24</rasd:ResourceType>
+        <vmw:Config ovf:required="false" vmw:key="enable3DSupport" vmw:value="true"/>
+      </Item>
+      <Item ovf:required="false">
+        <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
+        <rasd:ElementName>vmci</rasd:ElementName>
+        <rasd:InstanceID>12</rasd:InstanceID>
+        <rasd:ResourceSubType>vmware.vmci</rasd:ResourceSubType>
+        <rasd:ResourceType>1</rasd:ResourceType>
+      </Item>
+      <Item ovf:required="false">
+        <rasd:AddressOnParent>1</rasd:AddressOnParent>
+        <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
+        <rasd:ElementName>cdrom0</rasd:ElementName>
+        <rasd:InstanceID>13</rasd:InstanceID>
+        <rasd:Parent>3</rasd:Parent>
+        <rasd:ResourceType>15</rasd:ResourceType>
+      </Item>
+      <vmw:Config ovf:required="false" vmw:key="memoryHotAddEnabled" vmw:value="true"/>
+      <vmw:Config ovf:required="false" vmw:key="powerOpInfo.powerOffType" vmw:value="soft"/>
+      <vmw:Config ovf:required="false" vmw:key="powerOpInfo.resetType" vmw:value="soft"/>
+      <vmw:Config ovf:required="false" vmw:key="powerOpInfo.suspendType" vmw:value="soft"/>
+      <vmw:Config ovf:required="false" vmw:key="tools.syncTimeWithHost" vmw:value="false"/>
+    </VirtualHardwareSection>
+  </VirtualSystem>
+</Envelope>
diff --git a/src/test/ovf_manifests/Win10-Liz_no_default_ns.ovf b/src/test/ovf_manifests/Win10-Liz_no_default_ns.ovf
new file mode 100755
index 0000000..b93540f
--- /dev/null
+++ b/src/test/ovf_manifests/Win10-Liz_no_default_ns.ovf
@@ -0,0 +1,142 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--Generated by VMware ovftool 4.1.0 (build-2982904), UTC time: 2017-02-07T13:50:15.265014Z-->
+<Envelope vmw:buildId="build-2982904" xmlns="http://schemas.dmtf.org/ovf/envelope/1" xmlns:cim="http://schemas.dmtf.org/wbem/wscim/1/common" xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1" xmlns:vmw="http://www.vmware.com/schema/ovf" xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
+  <References>
+    <File ovf:href="Win10-Liz-disk1.vmdk" ovf:id="file1" ovf:size="9155243008"/>
+  </References>
+  <DiskSection>
+    <Info>Virtual disk information</Info>
+    <Disk ovf:capacity="128" ovf:capacityAllocationUnits="byte * 2^30" ovf:diskId="vmdisk1" ovf:fileRef="file1" ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized" ovf:populatedSize="16798056448"/>
+  </DiskSection>
+  <NetworkSection>
+    <Info>The list of logical networks</Info>
+    <Network ovf:name="bridged">
+      <Description>The bridged network</Description>
+    </Network>
+  </NetworkSection>
+  <VirtualSystem ovf:id="vm">
+    <Info>A virtual machine</Info>
+    <Name>Win10-Liz</Name>
+    <OperatingSystemSection ovf:id="1" vmw:osType="windows9_64Guest">
+      <Info>The kind of installed guest operating system</Info>
+    </OperatingSystemSection>
+    <VirtualHardwareSection>
+      <Info>Virtual hardware requirements</Info>
+      <System>
+        <vssd:ElementName>Virtual Hardware Family</vssd:ElementName>
+        <vssd:InstanceID>0</vssd:InstanceID>
+        <vssd:VirtualSystemIdentifier>Win10-Liz</vssd:VirtualSystemIdentifier>
+        <vssd:VirtualSystemType>vmx-11</vssd:VirtualSystemType>
+      </System>
+      <Item>
+        <rasd:AllocationUnits xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">hertz * 10^6</rasd:AllocationUnits>
+        <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">Number of Virtual CPUs</rasd:Description>
+        <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">4 virtual CPU(s)</rasd:ElementName>
+        <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">1</rasd:InstanceID>
+        <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">3</rasd:ResourceType>
+        <rasd:VirtualQuantity xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">4</rasd:VirtualQuantity>
+      </Item>
+      <Item>
+        <rasd:AllocationUnits xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">byte * 2^20</rasd:AllocationUnits>
+        <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">Memory Size</rasd:Description>
+        <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">6144MB of memory</rasd:ElementName>
+        <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">2</rasd:InstanceID>
+        <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">4</rasd:ResourceType>
+        <rasd:VirtualQuantity xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">6144</rasd:VirtualQuantity>
+      </Item>
+      <Item>
+        <rasd:Address xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">0</rasd:Address>
+        <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">SATA Controller</rasd:Description>
+        <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">sataController0</rasd:ElementName>
+        <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">3</rasd:InstanceID>
+        <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">vmware.sata.ahci</rasd:ResourceSubType>
+        <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">20</rasd:ResourceType>
+      </Item>
+      <Item ovf:required="false">
+        <rasd:Address xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">0</rasd:Address>
+        <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">USB Controller (XHCI)</rasd:Description>
+        <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">usb3</rasd:ElementName>
+        <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">4</rasd:InstanceID>
+        <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">vmware.usb.xhci</rasd:ResourceSubType>
+        <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">23</rasd:ResourceType>
+      </Item>
+      <Item ovf:required="false">
+        <rasd:Address xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">0</rasd:Address>
+        <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">USB Controller (EHCI)</rasd:Description>
+        <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">usb</rasd:ElementName>
+        <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">5</rasd:InstanceID>
+        <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">vmware.usb.ehci</rasd:ResourceSubType>
+        <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">23</rasd:ResourceType>
+        <vmw:Config ovf:required="false" vmw:key="ehciEnabled" vmw:value="true"/>
+      </Item>
+      <Item>
+        <rasd:Address xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">0</rasd:Address>
+        <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">SCSI Controller</rasd:Description>
+        <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">scsiController0</rasd:ElementName>
+        <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">6</rasd:InstanceID>
+        <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">lsilogicsas</rasd:ResourceSubType>
+        <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">6</rasd:ResourceType>
+      </Item>
+      <Item ovf:required="false">
+        <rasd:AutomaticAllocation xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">true</rasd:AutomaticAllocation>
+        <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">serial0</rasd:ElementName>
+        <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">7</rasd:InstanceID>
+        <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">21</rasd:ResourceType>
+        <vmw:Config ovf:required="false" vmw:key="yieldOnPoll" vmw:value="false"/>
+      </Item>
+      <Item>
+        <rasd:AddressOnParent xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" >0</rasd:AddressOnParent>
+        <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" >disk0</rasd:ElementName>
+        <rasd:HostResource xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" >ovf:/disk/vmdisk1</rasd:HostResource>
+        <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">8</rasd:InstanceID>
+        <rasd:Parent xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">6</rasd:Parent>
+        <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">17</rasd:ResourceType>
+      </Item>
+      <Item>
+        <rasd:AddressOnParent xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">2</rasd:AddressOnParent>
+        <rasd:AutomaticAllocation xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">true</rasd:AutomaticAllocation>
+        <rasd:Connection xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">bridged</rasd:Connection>
+        <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">E1000e ethernet adapter on &quot;bridged&quot;</rasd:Description>
+        <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">ethernet0</rasd:ElementName>
+        <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">9</rasd:InstanceID>
+        <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">E1000e</rasd:ResourceSubType>
+        <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">10</rasd:ResourceType>
+        <vmw:Config ovf:required="false" vmw:key="wakeOnLanEnabled" vmw:value="false"/>
+      </Item>
+      <Item ovf:required="false">
+        <rasd:AutomaticAllocation xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">false</rasd:AutomaticAllocation>
+        <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">sound</rasd:ElementName>
+        <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">10</rasd:InstanceID>
+        <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">vmware.soundcard.hdaudio</rasd:ResourceSubType>
+        <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">1</rasd:ResourceType>
+      </Item>
+      <Item ovf:required="false">
+        <rasd:AutomaticAllocation xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">false</rasd:AutomaticAllocation>
+        <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">video</rasd:ElementName>
+        <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">11</rasd:InstanceID>
+        <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">24</rasd:ResourceType>
+        <vmw:Config ovf:required="false" vmw:key="enable3DSupport" vmw:value="true"/>
+      </Item>
+      <Item ovf:required="false">
+        <rasd:AutomaticAllocation xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">false</rasd:AutomaticAllocation>
+        <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">vmci</rasd:ElementName>
+        <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">12</rasd:InstanceID>
+        <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">vmware.vmci</rasd:ResourceSubType>
+        <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">1</rasd:ResourceType>
+      </Item>
+      <Item ovf:required="false">
+        <rasd:AddressOnParent xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">1</rasd:AddressOnParent>
+        <rasd:AutomaticAllocation xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">false</rasd:AutomaticAllocation>
+        <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">cdrom0</rasd:ElementName>
+        <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">13</rasd:InstanceID>
+        <rasd:Parent xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">3</rasd:Parent>
+        <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">15</rasd:ResourceType>
+      </Item>
+      <vmw:Config ovf:required="false" vmw:key="memoryHotAddEnabled" vmw:value="true"/>
+      <vmw:Config ovf:required="false" vmw:key="powerOpInfo.powerOffType" vmw:value="soft"/>
+      <vmw:Config ovf:required="false" vmw:key="powerOpInfo.resetType" vmw:value="soft"/>
+      <vmw:Config ovf:required="false" vmw:key="powerOpInfo.suspendType" vmw:value="soft"/>
+      <vmw:Config ovf:required="false" vmw:key="tools.syncTimeWithHost" vmw:value="false"/>
+    </VirtualHardwareSection>
+  </VirtualSystem>
+</Envelope>
diff --git a/src/test/ovf_manifests/Win_2008_R2_two-disks.ovf b/src/test/ovf_manifests/Win_2008_R2_two-disks.ovf
new file mode 100755
index 0000000..a563aab
--- /dev/null
+++ b/src/test/ovf_manifests/Win_2008_R2_two-disks.ovf
@@ -0,0 +1,145 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--Generated by VMware ovftool 4.1.0 (build-2982904), UTC time: 2017-02-27T15:09:29.768974Z-->
+<Envelope vmw:buildId="build-2982904" xmlns="http://schemas.dmtf.org/ovf/envelope/1" xmlns:cim="http://schemas.dmtf.org/wbem/wscim/1/common" xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1" xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" xmlns:vmw="http://www.vmware.com/schema/ovf" xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
+  <References>
+    <File ovf:href="disk1.vmdk" ovf:id="file1" ovf:size="3481968640"/>
+    <File ovf:href="disk2.vmdk" ovf:id="file2" ovf:size="68096"/>
+  </References>
+  <DiskSection>
+    <Info>Virtual disk information</Info>
+    <Disk ovf:capacity="40" ovf:capacityAllocationUnits="byte * 2^30" ovf:diskId="vmdisk1" ovf:fileRef="file1" ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized" ovf:populatedSize="7684882432"/>
+    <Disk ovf:capacity="1" ovf:capacityAllocationUnits="byte * 2^30" ovf:diskId="vmdisk2" ovf:fileRef="file2" ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized" ovf:populatedSize="0"/>
+  </DiskSection>
+  <NetworkSection>
+    <Info>The list of logical networks</Info>
+    <Network ovf:name="bridged">
+      <Description>The bridged network</Description>
+    </Network>
+  </NetworkSection>
+  <VirtualSystem ovf:id="vm">
+    <Info>A virtual machine</Info>
+    <Name>Win_2008-R2x64</Name>
+    <OperatingSystemSection ovf:id="103" vmw:osType="windows7Server64Guest">
+      <Info>The kind of installed guest operating system</Info>
+    </OperatingSystemSection>
+    <VirtualHardwareSection>
+      <Info>Virtual hardware requirements</Info>
+      <System>
+        <vssd:ElementName>Virtual Hardware Family</vssd:ElementName>
+        <vssd:InstanceID>0</vssd:InstanceID>
+        <vssd:VirtualSystemIdentifier>Win_2008-R2x64</vssd:VirtualSystemIdentifier>
+        <vssd:VirtualSystemType>vmx-11</vssd:VirtualSystemType>
+      </System>
+      <Item>
+        <rasd:AllocationUnits>hertz * 10^6</rasd:AllocationUnits>
+        <rasd:Description>Number of Virtual CPUs</rasd:Description>
+        <rasd:ElementName>1 virtual CPU(s)</rasd:ElementName>
+        <rasd:InstanceID>1</rasd:InstanceID>
+        <rasd:ResourceType>3</rasd:ResourceType>
+        <rasd:VirtualQuantity>1</rasd:VirtualQuantity>
+      </Item>
+      <Item>
+        <rasd:AllocationUnits>byte * 2^20</rasd:AllocationUnits>
+        <rasd:Description>Memory Size</rasd:Description>
+        <rasd:ElementName>2048MB of memory</rasd:ElementName>
+        <rasd:InstanceID>2</rasd:InstanceID>
+        <rasd:ResourceType>4</rasd:ResourceType>
+        <rasd:VirtualQuantity>2048</rasd:VirtualQuantity>
+      </Item>
+      <Item>
+        <rasd:Address>0</rasd:Address>
+        <rasd:Description>SATA Controller</rasd:Description>
+        <rasd:ElementName>sataController0</rasd:ElementName>
+        <rasd:InstanceID>3</rasd:InstanceID>
+        <rasd:ResourceSubType>vmware.sata.ahci</rasd:ResourceSubType>
+        <rasd:ResourceType>20</rasd:ResourceType>
+      </Item>
+      <Item ovf:required="false">
+        <rasd:Address>0</rasd:Address>
+        <rasd:Description>USB Controller (EHCI)</rasd:Description>
+        <rasd:ElementName>usb</rasd:ElementName>
+        <rasd:InstanceID>4</rasd:InstanceID>
+        <rasd:ResourceSubType>vmware.usb.ehci</rasd:ResourceSubType>
+        <rasd:ResourceType>23</rasd:ResourceType>
+        <vmw:Config ovf:required="false" vmw:key="ehciEnabled" vmw:value="true"/>
+      </Item>
+      <Item>
+        <rasd:Address>0</rasd:Address>
+        <rasd:Description>SCSI Controller</rasd:Description>
+        <rasd:ElementName>scsiController0</rasd:ElementName>
+        <rasd:InstanceID>5</rasd:InstanceID>
+        <rasd:ResourceSubType>lsilogicsas</rasd:ResourceSubType>
+        <rasd:ResourceType>6</rasd:ResourceType>
+      </Item>
+      <Item ovf:required="false">
+        <rasd:AutomaticAllocation>true</rasd:AutomaticAllocation>
+        <rasd:ElementName>serial0</rasd:ElementName>
+        <rasd:InstanceID>6</rasd:InstanceID>
+        <rasd:ResourceType>21</rasd:ResourceType>
+        <vmw:Config ovf:required="false" vmw:key="yieldOnPoll" vmw:value="false"/>
+      </Item>
+      <Item>
+        <rasd:AddressOnParent>0</rasd:AddressOnParent>
+        <rasd:ElementName>disk0</rasd:ElementName>
+        <rasd:HostResource>ovf:/disk/vmdisk1</rasd:HostResource>
+        <rasd:InstanceID>7</rasd:InstanceID>
+        <rasd:Parent>5</rasd:Parent>
+        <rasd:ResourceType>17</rasd:ResourceType>
+      </Item>
+      <Item>
+        <rasd:AddressOnParent>1</rasd:AddressOnParent>
+        <rasd:ElementName>disk1</rasd:ElementName>
+        <rasd:HostResource>ovf:/disk/vmdisk2</rasd:HostResource>
+        <rasd:InstanceID>8</rasd:InstanceID>
+        <rasd:Parent>5</rasd:Parent>
+        <rasd:ResourceType>17</rasd:ResourceType>
+      </Item>
+      <Item>
+        <rasd:AddressOnParent>2</rasd:AddressOnParent>
+        <rasd:AutomaticAllocation>true</rasd:AutomaticAllocation>
+        <rasd:Connection>bridged</rasd:Connection>
+        <rasd:Description>E1000 ethernet adapter on &quot;bridged&quot;</rasd:Description>
+        <rasd:ElementName>ethernet0</rasd:ElementName>
+        <rasd:InstanceID>9</rasd:InstanceID>
+        <rasd:ResourceSubType>E1000</rasd:ResourceSubType>
+        <rasd:ResourceType>10</rasd:ResourceType>
+        <vmw:Config ovf:required="false" vmw:key="wakeOnLanEnabled" vmw:value="false"/>
+      </Item>
+      <Item ovf:required="false">
+        <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
+        <rasd:ElementName>sound</rasd:ElementName>
+        <rasd:InstanceID>10</rasd:InstanceID>
+        <rasd:ResourceSubType>vmware.soundcard.hdaudio</rasd:ResourceSubType>
+        <rasd:ResourceType>1</rasd:ResourceType>
+      </Item>
+      <Item ovf:required="false">
+        <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
+        <rasd:ElementName>video</rasd:ElementName>
+        <rasd:InstanceID>11</rasd:InstanceID>
+        <rasd:ResourceType>24</rasd:ResourceType>
+        <vmw:Config ovf:required="false" vmw:key="enable3DSupport" vmw:value="true"/>
+      </Item>
+      <Item ovf:required="false">
+        <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
+        <rasd:ElementName>vmci</rasd:ElementName>
+        <rasd:InstanceID>12</rasd:InstanceID>
+        <rasd:ResourceSubType>vmware.vmci</rasd:ResourceSubType>
+        <rasd:ResourceType>1</rasd:ResourceType>
+      </Item>
+      <Item ovf:required="false">
+        <rasd:AddressOnParent>1</rasd:AddressOnParent>
+        <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
+        <rasd:ElementName>cdrom0</rasd:ElementName>
+        <rasd:InstanceID>13</rasd:InstanceID>
+        <rasd:Parent>3</rasd:Parent>
+        <rasd:ResourceType>15</rasd:ResourceType>
+      </Item>
+      <vmw:Config ovf:required="false" vmw:key="cpuHotAddEnabled" vmw:value="true"/>
+      <vmw:Config ovf:required="false" vmw:key="memoryHotAddEnabled" vmw:value="true"/>
+      <vmw:Config ovf:required="false" vmw:key="powerOpInfo.powerOffType" vmw:value="soft"/>
+      <vmw:Config ovf:required="false" vmw:key="powerOpInfo.resetType" vmw:value="soft"/>
+      <vmw:Config ovf:required="false" vmw:key="powerOpInfo.suspendType" vmw:value="soft"/>
+      <vmw:Config ovf:required="false" vmw:key="tools.syncTimeWithHost" vmw:value="false"/>
+    </VirtualHardwareSection>
+  </VirtualSystem>
+</Envelope>
diff --git a/src/test/ovf_manifests/disk1.vmdk b/src/test/ovf_manifests/disk1.vmdk
new file mode 100644
index 0000000000000000000000000000000000000000..8660602343a1a955f9bcf2e6beaed99316dd8167
GIT binary patch
literal 65536
zcmeIvy>HV%7zbeUF`dK)46s<q9ua7|WuUkSgpg2EwX+*viPe0`HWAtSr(-ASlAWQ|
zbJF?F{+-YFKK_yYyn2=-$&0qXY<t)4ch@B8o_Fo_en^t%N%H0}e|H$~AF`0X3J-JR
zqY>z*Sy|tuS*)j3xo%d~*K!`iCRTO1T8@X|%lB-2b2}Q2@{gmi&a1d=x<|K%7N%9q
zn|Qfh$8m45TCV10Gb^W)c4ZxVA@tMpzfJp2S{y#m?iwzx)01@a>+{9rJna?j=ZAyM
zqPW{FznsOxiSi~-&+<BkewLkuP!u<VO<6U6^7*&xtNr=XaoRiS?V{gtwTMl%9Za|L
za#^%_7k)SjXE85!!SM7bspGUQewUqo+Glx@ubWtPwRL-yMO)CL`L7O2fB*pk1PBly
zK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pkPg~&a(=JbS1PBlyK!5-N0!ISx
zkM7+PAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+
z009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBly
zK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF
z5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk
d1PBlyK!5-N0t5&UAV7cs0RjXF5cr=0{{Ra(Wheju

literal 0
HcmV?d00001

diff --git a/src/test/ovf_manifests/disk2.vmdk b/src/test/ovf_manifests/disk2.vmdk
new file mode 100644
index 0000000000000000000000000000000000000000..c4634513348b392202898374f1c8d2d51d565b27
GIT binary patch
literal 65536
zcmeIvy>HV%7zbeUF`dK)46s<q9#IIDI%J@v2!xPOQ?;`jUy0Rx$u<$$`lr`+(j_}X
ztLLQio&7tX?|uAp{Oj^rk|Zyh{<7(9yX&q=(mrq7>)ntf&y(cMe*SJh-aTX?eH9+&
z#z!O2Psc@dn~q~OEsJ%%D!&!;7&fu2iq&#-6u$l#k8ZBB&%=0f64qH6mv#5(X4k^B
zj9DEow(B_REmq6byr^fzbkeM>VlRY#diJkw-bwTQ2bx{O`BgehC%?a(PtMX_-hBS!
zV6(_?yX6<NxIa-=XX$BH#n2y*PeaJ_>%pcd>%ZCj`_<*{eCa6d4SQYmC$1K;F1Lf}
zc3v#=CU3(J2jMJcc^4cVA0$<rHpO?@@uyvu<=MK9Wm{XjSCKabJ(~aOpacjIAV7cs
z0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAn>#W-ahT}R7ZdS0RjXF5Fl_M
z@c!W5Edc@q2oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk
z1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs
z0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZ
zfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&U
eAV7cs0RjXF5FkK+009C72oNAZfB=F2DR2*l=VfOA

literal 0
HcmV?d00001

diff --git a/src/test/run_ovf_tests.pl b/src/test/run_ovf_tests.pl
new file mode 100755
index 0000000..1ef78cc
--- /dev/null
+++ b/src/test/run_ovf_tests.pl
@@ -0,0 +1,71 @@
+#!/usr/bin/perl
+
+use strict;
+use warnings;
+use lib qw(..); # prepend .. to @INC so we use the local version of PVE packages
+
+use FindBin '$Bin';
+use PVE::Storage::OVF;
+use Test::More;
+
+use Data::Dumper;
+
+my $test_manifests = join ('/', $Bin, 'ovf_manifests');
+
+print "parsing ovfs\n";
+
+my $win2008 = eval { PVE::Storage::OVF::parse_ovf("$test_manifests/Win_2008_R2_two-disks.ovf") };
+if (my $err = $@) {
+    fail('parse win2008');
+    warn("error: $err\n");
+} else {
+    ok('parse win2008');
+}
+my $win10 = eval { PVE::Storage::OVF::parse_ovf("$test_manifests/Win10-Liz.ovf") };
+if (my $err = $@) {
+    fail('parse win10');
+    warn("error: $err\n");
+} else {
+    ok('parse win10');
+}
+my $win10noNs = eval { PVE::Storage::OVF::parse_ovf("$test_manifests/Win10-Liz_no_default_ns.ovf") };
+if (my $err = $@) {
+    fail("parse win10 no default rasd NS");
+    warn("error: $err\n");
+} else {
+    ok('parse win10 no default rasd NS');
+}
+
+print "testing disks\n";
+
+is($win2008->{disks}->[0]->{disk_address}, 'scsi0', 'multidisk vm has the correct first disk controller');
+is($win2008->{disks}->[0]->{backing_file}, "$test_manifests/disk1.vmdk", 'multidisk vm has the correct first disk backing device');
+is($win2008->{disks}->[0]->{virtual_size}, 2048, 'multidisk vm has the correct first disk size');
+
+is($win2008->{disks}->[1]->{disk_address}, 'scsi1', 'multidisk vm has the correct second disk controller');
+is($win2008->{disks}->[1]->{backing_file}, "$test_manifests/disk2.vmdk", 'multidisk vm has the correct second disk backing device');
+is($win2008->{disks}->[1]->{virtual_size}, 2048, 'multidisk vm has the correct second disk size');
+
+is($win10->{disks}->[0]->{disk_address}, 'scsi0', 'single disk vm has the correct disk controller');
+is($win10->{disks}->[0]->{backing_file}, "$test_manifests/Win10-Liz-disk1.vmdk", 'single disk vm has the correct disk backing device');
+is($win10->{disks}->[0]->{virtual_size}, 2048, 'single disk vm has the correct size');
+
+is($win10noNs->{disks}->[0]->{disk_address}, 'scsi0', 'single disk vm (no default rasd NS) has the correct disk controller');
+is($win10noNs->{disks}->[0]->{backing_file}, "$test_manifests/Win10-Liz-disk1.vmdk", 'single disk vm (no default rasd NS) has the correct disk backing device');
+is($win10noNs->{disks}->[0]->{virtual_size}, 2048, 'single disk vm (no default rasd NS) has the correct size');
+
+print "\ntesting vm.conf extraction\n";
+
+is($win2008->{qm}->{name}, 'Win2008-R2x64', 'win2008 VM name is correct');
+is($win2008->{qm}->{memory}, '2048', 'win2008 VM memory is correct');
+is($win2008->{qm}->{cores}, '1', 'win2008 VM cores are correct');
+
+is($win10->{qm}->{name}, 'Win10-Liz', 'win10 VM name is correct');
+is($win10->{qm}->{memory}, '6144', 'win10 VM memory is correct');
+is($win10->{qm}->{cores}, '4', 'win10 VM cores are correct');
+
+is($win10noNs->{qm}->{name}, 'Win10-Liz', 'win10 VM (no default rasd NS) name is correct');
+is($win10noNs->{qm}->{memory}, '6144', 'win10 VM (no default rasd NS) memory is correct');
+is($win10noNs->{qm}->{cores}, '4', 'win10 VM (no default rasd NS) cores are correct');
+
+done_testing();
-- 
2.39.2





^ permalink raw reply	[flat|nested] 67+ messages in thread

* [pve-devel] [PATCH storage 2/9] plugin: dir: implement import content type
  2024-04-16 13:18 [pve-devel] [PATCH storage/qemu-server/pve-manager] implement ova/ovf import for directory type storages Dominik Csapak
  2024-04-16 13:18 ` [pve-devel] [PATCH storage 1/9] copy OVF.pm from qemu-server Dominik Csapak
@ 2024-04-16 13:18 ` Dominik Csapak
  2024-04-17 10:07   ` Fiona Ebner
  2024-04-17 12:46   ` Fabian Grünbichler
  2024-04-16 13:18 ` [pve-devel] [PATCH storage 3/9] plugin: dir: handle ova files for import Dominik Csapak
                   ` (15 subsequent siblings)
  17 siblings, 2 replies; 67+ messages in thread
From: Dominik Csapak @ 2024-04-16 13:18 UTC (permalink / raw)
  To: pve-devel

in DirPlugin and not Plugin (because of cyclic dependency of
Plugin -> OVF -> Storage -> Plugin otherwise)

only ovf is currently supported (though ova will be shown in import
listing), expects the files to not be in a subdir, and adjacent to the
ovf file.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 src/PVE/Storage.pm                 |  8 ++++++-
 src/PVE/Storage/DirPlugin.pm       | 37 +++++++++++++++++++++++++++++-
 src/PVE/Storage/OVF.pm             |  2 ++
 src/PVE/Storage/Plugin.pm          | 18 ++++++++++++++-
 src/test/parse_volname_test.pm     | 13 +++++++++++
 src/test/path_to_volume_id_test.pm | 16 +++++++++++++
 6 files changed, 91 insertions(+), 3 deletions(-)

diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
index 40314a8..f8ea93d 100755
--- a/src/PVE/Storage.pm
+++ b/src/PVE/Storage.pm
@@ -114,6 +114,8 @@ our $VZTMPL_EXT_RE_1 = qr/\.tar\.(gz|xz|zst)/i;
 
 our $BACKUP_EXT_RE_2 = qr/\.(tgz|(?:tar|vma)(?:\.(${\PVE::Storage::Plugin::COMPRESSOR_RE}))?)/;
 
+our $IMPORT_EXT_RE_1 = qr/\.(ov[af])/;
+
 # FIXME remove with PVE 8.0, add versioned breaks for pve-manager
 our $vztmpl_extension_re = $VZTMPL_EXT_RE_1;
 
@@ -612,6 +614,7 @@ sub path_to_volume_id {
 	my $backupdir = $plugin->get_subdir($scfg, 'backup');
 	my $privatedir = $plugin->get_subdir($scfg, 'rootdir');
 	my $snippetsdir = $plugin->get_subdir($scfg, 'snippets');
+	my $importdir = $plugin->get_subdir($scfg, 'import');
 
 	if ($path =~ m!^$imagedir/(\d+)/([^/\s]+)$!) {
 	    my $vmid = $1;
@@ -640,6 +643,9 @@ sub path_to_volume_id {
 	} elsif ($path =~ m!^$snippetsdir/([^/]+)$!) {
 	    my $name = $1;
 	    return ('snippets', "$sid:snippets/$name");
+	} elsif ($path =~ m!^$importdir/([^/]+${IMPORT_EXT_RE_1})$!) {
+	    my $name = $1;
+	    return ('import', "$sid:import/$name");
 	}
     }
 
@@ -2170,7 +2176,7 @@ sub normalize_content_filename {
 # If a storage provides an 'import' content type, it should be able to provide
 # an object implementing the import information interface.
 sub get_import_metadata {
-    my ($cfg, $volid) = @_;
+    my ($cfg, $volid, $target) = @_;
 
     my ($storeid, $volname) = parse_volume_id($volid);
 
diff --git a/src/PVE/Storage/DirPlugin.pm b/src/PVE/Storage/DirPlugin.pm
index 2efa8d5..4dc7708 100644
--- a/src/PVE/Storage/DirPlugin.pm
+++ b/src/PVE/Storage/DirPlugin.pm
@@ -10,6 +10,7 @@ use IO::File;
 use POSIX;
 
 use PVE::Storage::Plugin;
+use PVE::Storage::OVF;
 use PVE::JSONSchema qw(get_standard_option);
 
 use base qw(PVE::Storage::Plugin);
@@ -22,7 +23,7 @@ sub type {
 
 sub plugindata {
     return {
-	content => [ { images => 1, rootdir => 1, vztmpl => 1, iso => 1, backup => 1, snippets => 1, none => 1 },
+	content => [ { images => 1, rootdir => 1, vztmpl => 1, iso => 1, backup => 1, snippets => 1, none => 1, import => 1 },
 		     { images => 1,  rootdir => 1 }],
 	format => [ { raw => 1, qcow2 => 1, vmdk => 1, subvol => 1 } , 'raw' ],
     };
@@ -247,4 +248,38 @@ sub check_config {
     return $opts;
 }
 
+sub get_import_metadata {
+    my ($class, $scfg, $volname, $storeid, $target) = @_;
+
+    if ($volname !~ m!^([^/]+)/.*${PVE::Storage::IMPORT_EXT_RE_1}$!) {
+	die "volume '$volname' does not look like an importable vm config\n";
+    }
+
+    my $path = $class->path($scfg, $volname, $storeid, undef);
+
+    # NOTE: all types must be added to the return schema of the import-metadata API endpoint
+    my $warnings = [];
+
+    my $res = PVE::Storage::OVF::parse_ovf($path, $isOva);
+    my $disks = {};
+    for my $disk ($res->{disks}->@*) {
+	my $id = $disk->{disk_address};
+	my $size = $disk->{virtual_size};
+	my $path = $disk->{relative_path};
+	$disks->{$id} = {
+	    volid => "$storeid:import/$path",
+	    defined($size) ? (size => $size) : (),
+	};
+    }
+
+    return {
+	type => 'vm',
+	source => $volname,
+	'create-args' => $res->{qm},
+	'disks' => $disks,
+	warnings => $warnings,
+	net => [],
+    };
+}
+
 1;
diff --git a/src/PVE/Storage/OVF.pm b/src/PVE/Storage/OVF.pm
index 90ca453..4a322b9 100644
--- a/src/PVE/Storage/OVF.pm
+++ b/src/PVE/Storage/OVF.pm
@@ -222,6 +222,7 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
 	}
 
 	($backing_file_path) = $backing_file_path =~ m|^(/.*)|; # untaint
+	($filepath) = $filepath =~ m|^(.*)|; # untaint
 
 	my $virtual_size = PVE::Storage::file_size_info($backing_file_path);
 	die "error parsing $backing_file_path, cannot determine file size\n"
@@ -231,6 +232,7 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
 	    disk_address => $pve_disk_address,
 	    backing_file => $backing_file_path,
 	    virtual_size => $virtual_size
+	    relative_path => $filepath,
 	};
 	push @disks, $pve_disk;
 
diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
index 22a9729..deaf8b2 100644
--- a/src/PVE/Storage/Plugin.pm
+++ b/src/PVE/Storage/Plugin.pm
@@ -654,6 +654,10 @@ sub parse_volname {
 	return ('backup', $fn);
     } elsif ($volname =~ m!^snippets/([^/]+)$!) {
 	return ('snippets', $1);
+    } elsif ($volname =~ m!^import/([^/]+$PVE::Storage::IMPORT_EXT_RE_1)$!) {
+	return ('import', $1);
+    } elsif ($volname =~ m!^import/([^/]+\.(raw|vmdk|qcow2))$!) {
+	return ('images', $1, 0, undef, undef, undef, $2);
     }
 
     die "unable to parse directory volume name '$volname'\n";
@@ -666,6 +670,7 @@ my $vtype_subdirs = {
     vztmpl => 'template/cache',
     backup => 'dump',
     snippets => 'snippets',
+    import => 'import',
 };
 
 sub get_vtype_subdirs {
@@ -691,6 +696,11 @@ sub filesystem_path {
     my ($vtype, $name, $vmid, undef, undef, $isBase, $format) =
 	$class->parse_volname($volname);
 
+    if (defined($vmid) && $vmid == 0) {
+	# import volumes?
+	$vtype = 'import';
+    }
+
     # Note: qcow2/qed has internal snapshot, so path is always
     # the same (with or without snapshot => same file).
     die "can't snapshot this image format\n"
@@ -1227,7 +1237,7 @@ sub list_images {
     return $res;
 }
 
-# list templates ($tt = <iso|vztmpl|backup|snippets>)
+# list templates ($tt = <iso|vztmpl|backup|snippets|import>)
 my $get_subdir_files = sub {
     my ($sid, $path, $tt, $vmid) = @_;
 
@@ -1283,6 +1293,10 @@ my $get_subdir_files = sub {
 		volid => "$sid:snippets/". basename($fn),
 		format => 'snippet',
 	    };
+	} elsif ($tt eq 'import') {
+	    next if $fn !~ m!/([^/]+$PVE::Storage::IMPORT_EXT_RE_1)$!i;
+
+	    $info = { volid => "$sid:import/$1", format => "$2" };
 	}
 
 	$info->{size} = $st->size;
@@ -1317,6 +1331,8 @@ sub list_volumes {
 		$data = $get_subdir_files->($storeid, $path, 'backup', $vmid);
 	    } elsif ($type eq 'snippets') {
 		$data = $get_subdir_files->($storeid, $path, 'snippets');
+	    } elsif ($type eq 'import') {
+		$data = $get_subdir_files->($storeid, $path, 'import');
 	    }
 	}
 
diff --git a/src/test/parse_volname_test.pm b/src/test/parse_volname_test.pm
index d6ac885..59819f0 100644
--- a/src/test/parse_volname_test.pm
+++ b/src/test/parse_volname_test.pm
@@ -81,6 +81,19 @@ my $tests = [
 	expected    => ['snippets', 'hookscript.pl'],
     },
     #
+    #
+    #
+    {
+	description => "Import, ova",
+	volname     => 'import/import.ova',
+	expected    => ['import', 'import.ova'],
+    },
+    {
+	description => "Import, ovf",
+	volname     => 'import/import.ovf',
+	expected    => ['import', 'import.ovf'],
+    },
+    #
     # failed matches
     #
     {
diff --git a/src/test/path_to_volume_id_test.pm b/src/test/path_to_volume_id_test.pm
index 8149c88..8bc1bf8 100644
--- a/src/test/path_to_volume_id_test.pm
+++ b/src/test/path_to_volume_id_test.pm
@@ -174,6 +174,22 @@ my @tests = (
 	    'local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.xz',
 	],
     },
+    {
+	description => 'Import, ova',
+	volname     => "$storage_dir/import/import.ova",
+	expected    => [
+	    'import',
+	    'local:import/import.ova',
+	],
+    },
+    {
+	description => 'Import, ovf',
+	volname     => "$storage_dir/import/import.ovf",
+	expected    => [
+	    'import',
+	    'local:import/import.ovf',
+	],
+    },
 
     # no matches, path or files with failures
     {
-- 
2.39.2





^ permalink raw reply	[flat|nested] 67+ messages in thread

* [pve-devel] [PATCH storage 3/9] plugin: dir: handle ova files for import
  2024-04-16 13:18 [pve-devel] [PATCH storage/qemu-server/pve-manager] implement ova/ovf import for directory type storages Dominik Csapak
  2024-04-16 13:18 ` [pve-devel] [PATCH storage 1/9] copy OVF.pm from qemu-server Dominik Csapak
  2024-04-16 13:18 ` [pve-devel] [PATCH storage 2/9] plugin: dir: implement import content type Dominik Csapak
@ 2024-04-16 13:18 ` Dominik Csapak
  2024-04-17 10:52   ` Fiona Ebner
  2024-04-17 12:45   ` Fabian Grünbichler
  2024-04-16 13:18 ` [pve-devel] [PATCH storage 4/9] ovf: implement parsing the ostype Dominik Csapak
                   ` (14 subsequent siblings)
  17 siblings, 2 replies; 67+ messages in thread
From: Dominik Csapak @ 2024-04-16 13:18 UTC (permalink / raw)
  To: pve-devel

since we want to handle ova files (which are only ovf+vmdks bundled in a
tar file) for import, add code that handles that.

we introduce a valid volname for files contained in ovas like this:

 storage:import/archive.ova/disk-1.vmdk

by basically treating the last part of the path as the name for the
contained disk we want.

we then provide 3 functions to use for that:

* copy_needs_extraction: determines from the given volid (like above) if
  that needs extraction to copy it, currently only 'import' vtype +
  defined format returns true here (if we have more options in the
  future, we can of course easily extend that)

* extract_disk_from_import_file: this actually extracts the file from
  the archive. Currently only ova is supported, so the extraction with
  'tar' is hardcoded, but again we can easily extend/modify that should
  we need to.

  we currently extract into the import storage in a directory named:
  `.tmp_<pid>_<targetvmid>` which should not clash with concurrent
  operations (though we do extract it multiple times then)

  alternatively we could implement either a 'tmpstorage' parameter,
  or use e.g. '/var/tmp/' or similar, but re-using the current storage
  seemed ok.

* cleanup_extracted_image: intended to cleanup the extracted images from
  above, including the surrounding temporary directory

we have to modify the `parse_ovf` a bit to handle the missing disk
images, and we parse the size out of the ovf part (since this is
informal only, it should be no problem if we cannot parse it sometimes)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 src/PVE/API2/Storage/Status.pm |  1 +
 src/PVE/Storage.pm             | 59 ++++++++++++++++++++++++++++++++++
 src/PVE/Storage/DirPlugin.pm   | 13 +++++++-
 src/PVE/Storage/OVF.pm         | 53 ++++++++++++++++++++++++++----
 src/PVE/Storage/Plugin.pm      |  5 +++
 5 files changed, 123 insertions(+), 8 deletions(-)

diff --git a/src/PVE/API2/Storage/Status.pm b/src/PVE/API2/Storage/Status.pm
index f7e324f..77ed57c 100644
--- a/src/PVE/API2/Storage/Status.pm
+++ b/src/PVE/API2/Storage/Status.pm
@@ -749,6 +749,7 @@ __PACKAGE__->register_method({
 				'efi-state-lost',
 				'guest-is-running',
 				'nvme-unsupported',
+				'ova-needs-extracting',
 				'ovmf-with-lsi-unsupported',
 				'serial-port-socket-only',
 			    ],
diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
index f8ea93d..bc073ef 100755
--- a/src/PVE/Storage.pm
+++ b/src/PVE/Storage.pm
@@ -2189,4 +2189,63 @@ sub get_import_metadata {
     return $plugin->get_import_metadata($scfg, $volname, $storeid);
 }
 
+sub copy_needs_extraction {
+    my ($volid) = @_;
+    my ($storeid, $volname) = parse_volume_id($volid);
+    my $cfg = config();
+    my $scfg = storage_config($cfg, $storeid);
+    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
+
+    my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $file_format) =
+	$plugin->parse_volname($volname);
+
+    return $vtype eq 'import' && defined($file_format);
+}
+
+sub extract_disk_from_import_file {
+    my ($volid, $vmid) = @_;
+
+    my ($storeid, $volname) = parse_volume_id($volid);
+    my $cfg = config();
+    my $scfg = storage_config($cfg, $storeid);
+    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
+
+    my ($vtype, $name, undef, undef, undef, undef, $file_format) =
+	$plugin->parse_volname($volname);
+
+    die "only files with content type 'import' can be extracted\n"
+	if $vtype ne 'import' || !defined($file_format);
+
+    # extract the inner file from the name
+    if ($volid =~ m!${name}/([^/]+)$!) {
+	$name = $1;
+    }
+
+    my ($source_file) = $plugin->path($scfg, $volname, $storeid);
+
+    my $destdir = $plugin->get_subdir($scfg, 'import');
+    my $pid = $$;
+    $destdir .= "/.tmp_${pid}_${vmid}";
+    mkdir $destdir;
+
+    ($source_file) = $source_file =~ m|^(/.*)|; # untaint
+
+    run_command(['tar', '-x', '-C', $destdir, '-f', $source_file, $name]);
+
+    return "$destdir/$name";
+}
+
+sub cleanup_extracted_image {
+    my ($source) = @_;
+
+    if ($source =~ m|^(/.+/\.tmp_[0-9]+_[0-9]+)/[^/]+$|) {
+	my $tmpdir = $1;
+
+	unlink $source or $! == ENOENT or die "removing image $source failed: $!\n";
+	rmdir $tmpdir or $! == ENOENT or die "removing tmpdir $tmpdir failed: $!\n";
+    } else {
+	die "invalid extraced image path '$source'\n";
+    }
+}
+
 1;
diff --git a/src/PVE/Storage/DirPlugin.pm b/src/PVE/Storage/DirPlugin.pm
index 4dc7708..50ceab7 100644
--- a/src/PVE/Storage/DirPlugin.pm
+++ b/src/PVE/Storage/DirPlugin.pm
@@ -260,14 +260,25 @@ sub get_import_metadata {
     # NOTE: all types must be added to the return schema of the import-metadata API endpoint
     my $warnings = [];
 
+    my $isOva = 0;
+    if ($path =~ m!\.ova!) {
+	$isOva = 1;
+	push @$warnings, { type => 'ova-needs-extracting' };
+    }
     my $res = PVE::Storage::OVF::parse_ovf($path, $isOva);
     my $disks = {};
     for my $disk ($res->{disks}->@*) {
 	my $id = $disk->{disk_address};
 	my $size = $disk->{virtual_size};
 	my $path = $disk->{relative_path};
+	my $volid;
+	if ($isOva) {
+	    $volid = "$storeid:$volname/$path";
+	} else {
+	    $volid = "$storeid:import/$path",
+	}
 	$disks->{$id} = {
-	    volid => "$storeid:import/$path",
+	    volid => $volid,
 	    defined($size) ? (size => $size) : (),
 	};
     }
diff --git a/src/PVE/Storage/OVF.pm b/src/PVE/Storage/OVF.pm
index 4a322b9..fb850a8 100644
--- a/src/PVE/Storage/OVF.pm
+++ b/src/PVE/Storage/OVF.pm
@@ -85,11 +85,37 @@ sub id_to_pve {
     }
 }
 
+# technically defined in DSP0004 (https://www.dmtf.org/dsp/DSP0004) as an ABNF
+# but realistically this always takes the form of 'bytes * base^exponent'
+sub try_parse_capacity_unit {
+    my ($unit_text) = @_;
+
+    if ($unit_text =~ m/^\s*byte\s*\*\s*([0-9]+)\s*\^\s*([0-9]+)\s*$/) {
+	my $base = $1;
+	my $exp = $2;
+	return $base ** $exp;
+    }
+
+    return undef;
+}
+
 # returns two references, $qm which holds qm.conf style key/values, and \@disks
 sub parse_ovf {
-    my ($ovf, $debug) = @_;
+    my ($ovf, $isOva, $debug) = @_;
+
+    # we have to ignore missing disk images for ova
+    my $dom;
+    if ($isOva) {
+	my $raw = "";
+	PVE::Tools::run_command(['tar', '-xO', '--wildcards', '--occurrence=1', '-f', $ovf, '*.ovf'], outfunc => sub {
+	    my $line = shift;
+	    $raw .= $line;
+	});
+	$dom = XML::LibXML->load_xml(string => $raw, no_blanks => 1);
+    } else {
+	$dom = XML::LibXML->load_xml(location => $ovf, no_blanks => 1);
+    }
 
-    my $dom = XML::LibXML->load_xml(location => $ovf, no_blanks => 1);
 
     # register the xml namespaces in a xpath context object
     # 'ovf' is the default namespace so it will prepended to each xml element
@@ -177,7 +203,17 @@ sub parse_ovf {
 	# @ needs to be escaped to prevent Perl double quote interpolation
 	my $xpath_find_fileref = sprintf("/ovf:Envelope/ovf:DiskSection/\
 ovf:Disk[\@ovf:diskId='%s']/\@ovf:fileRef", $disk_id);
+	my $xpath_find_capacity = sprintf("/ovf:Envelope/ovf:DiskSection/\
+ovf:Disk[\@ovf:diskId='%s']/\@ovf:capacity", $disk_id);
+	my $xpath_find_capacity_unit = sprintf("/ovf:Envelope/ovf:DiskSection/\
+ovf:Disk[\@ovf:diskId='%s']/\@ovf:capacityAllocationUnits", $disk_id);
 	my $fileref = $xpc->findvalue($xpath_find_fileref);
+	my $capacity = $xpc->findvalue($xpath_find_capacity);
+	my $capacity_unit = $xpc->findvalue($xpath_find_capacity_unit);
+	my $virtual_size;
+	if (my $factor = try_parse_capacity_unit($capacity_unit)) {
+	    $virtual_size = $capacity * $factor;
+	}
 
 	my $valid_url_chars = qr@${valid_uripath_chars}|/@;
 	if (!$fileref || $fileref !~ m/^${valid_url_chars}+$/) {
@@ -217,23 +253,26 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
 	    die "error parsing $filepath, are you using a symlink ?\n";
 	}
 
-	if (!-e $backing_file_path) {
+	if (!-e $backing_file_path && !$isOva) {
 	    die "error parsing $filepath, file seems not to exist at $backing_file_path\n";
 	}
 
 	($backing_file_path) = $backing_file_path =~ m|^(/.*)|; # untaint
 	($filepath) = $filepath =~ m|^(.*)|; # untaint
 
-	my $virtual_size = PVE::Storage::file_size_info($backing_file_path);
-	die "error parsing $backing_file_path, cannot determine file size\n"
-	    if !$virtual_size;
+	if (!$isOva) {
+	    my $size = PVE::Storage::file_size_info($backing_file_path);
+	    die "error parsing $backing_file_path, cannot determine file size\n"
+		if !$size;
 
+	    $virtual_size = $size;
+	}
 	$pve_disk = {
 	    disk_address => $pve_disk_address,
 	    backing_file => $backing_file_path,
-	    virtual_size => $virtual_size
 	    relative_path => $filepath,
 	};
+	$pve_disk->{virtual_size} = $virtual_size if defined($virtual_size);
 	push @disks, $pve_disk;
 
     }
diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
index deaf8b2..ea069ab 100644
--- a/src/PVE/Storage/Plugin.pm
+++ b/src/PVE/Storage/Plugin.pm
@@ -654,6 +654,11 @@ sub parse_volname {
 	return ('backup', $fn);
     } elsif ($volname =~ m!^snippets/([^/]+)$!) {
 	return ('snippets', $1);
+    } elsif ($volname =~ m!^import/([^/]+\.ova)\/([^/]+)$!) {
+	my $archive = $1;
+	my $file = $2;
+	my (undef, $format, undef) = parse_name_dir($file);
+	return ('import', $archive, 0, undef, undef, undef, $format);
     } elsif ($volname =~ m!^import/([^/]+$PVE::Storage::IMPORT_EXT_RE_1)$!) {
 	return ('import', $1);
     } elsif ($volname =~ m!^import/([^/]+\.(raw|vmdk|qcow2))$!) {
-- 
2.39.2





^ permalink raw reply	[flat|nested] 67+ messages in thread

* [pve-devel] [PATCH storage 4/9] ovf: implement parsing the ostype
  2024-04-16 13:18 [pve-devel] [PATCH storage/qemu-server/pve-manager] implement ova/ovf import for directory type storages Dominik Csapak
                   ` (2 preceding siblings ...)
  2024-04-16 13:18 ` [pve-devel] [PATCH storage 3/9] plugin: dir: handle ova files for import Dominik Csapak
@ 2024-04-16 13:18 ` Dominik Csapak
  2024-04-17 11:32   ` Fiona Ebner
  2024-04-16 13:18 ` [pve-devel] [PATCH storage 5/9] ovf: implement parsing out firmware type Dominik Csapak
                   ` (13 subsequent siblings)
  17 siblings, 1 reply; 67+ messages in thread
From: Dominik Csapak @ 2024-04-16 13:18 UTC (permalink / raw)
  To: pve-devel

use the standards info about the ostypes to map to our own
(see comment for link to the relevant part of the dmtf schema)

every type that is not listed we map to 'other', so no need to have it
in a list.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 src/PVE/Storage/OVF.pm    | 69 +++++++++++++++++++++++++++++++++++++++
 src/test/run_ovf_tests.pl |  3 ++
 2 files changed, 72 insertions(+)

diff --git a/src/PVE/Storage/OVF.pm b/src/PVE/Storage/OVF.pm
index fb850a8..dd8431a 100644
--- a/src/PVE/Storage/OVF.pm
+++ b/src/PVE/Storage/OVF.pm
@@ -55,6 +55,71 @@ my @resources = (
     { id => 35, dtmf_name => 'Vendor Reserved'}
 );
 
+# see https://schemas.dmtf.org/wbem/cim-html/2.55.0+/CIM_OperatingSystem.html
+my $ostype_ids = {
+    18 => 'winxp', # 'WINNT',
+    29 => 'solaris', # 'Solaris',
+    36 => 'l26', # 'LINUX',
+    58 => 'w2k', # 'Windows 2000',
+    67 => 'wxp', #'Windows XP',
+    69 => 'w2k3', # 'Microsoft Windows Server 2003',
+    70 => 'w2k3', # 'Microsoft Windows Server 2003 64-Bit',
+    71 => 'wxp', # 'Windows XP 64-Bit',
+    72 => 'wxp', # 'Windows XP Embedded',
+    73 => 'wvista', # 'Windows Vista',
+    74 => 'wvista', # 'Windows Vista 64-Bit',
+    75 => 'wxp', # 'Windows Embedded for Point of Service', ??
+    76 => 'w2k8', # 'Microsoft Windows Server 2008',
+    77 => 'w2k8', # 'Microsoft Windows Server 2008 64-Bit',
+    79 => 'l26', # 'RedHat Enterprise Linux',
+    80 => 'l26', # 'RedHat Enterprise Linux 64-Bit',
+    81 => 'solaris', #'Solaris 64-Bit',
+    82 => 'l26', # 'SUSE',
+    83 => 'l26', # 'SUSE 64-Bit',
+    84 => 'l26', # 'SLES',
+    85 => 'l26', # 'SLES 64-Bit',
+    87 => 'l26', # 'Novell Linux Desktop',
+    89 => 'l26', # 'Mandriva',
+    90 => 'l26', # 'Mandriva 64-Bit',
+    91 => 'l26', # 'TurboLinux',
+    92 => 'l26', # 'TurboLinux 64-Bit',
+    93 => 'l26', # 'Ubuntu',
+    94 => 'l26', # 'Ubuntu 64-Bit',
+    95 => 'l26', # 'Debian',
+    96 => 'l26', # 'Debian 64-Bit',
+    97 => 'l24', # 'Linux 2.4.x',
+    98 => 'l24', # 'Linux 2.4.x 64-Bit',
+    99 => 'l26', # 'Linux 2.6.x',
+    100 => 'l26', # 'Linux 2.6.x 64-Bit',
+    101 => 'l26', # 'Linux 64-Bit',
+    103 => 'win7', # 'Microsoft Windows Server 2008 R2',
+    105 => 'win7', # 'Microsoft Windows 7',
+    106 => 'l26', # 'CentOS 32-bit',
+    107 => 'l26', # 'CentOS 64-bit',
+    108 => 'l26', # 'Oracle Linux 32-bit',
+    109 => 'l26', # 'Oracle Linux 64-bit',
+    111 => 'win8', # 'Microsoft Windows Server 2011', ??
+    112 => 'win8', # 'Microsoft Windows Server 2012',
+    113 => 'win8', # 'Microsoft Windows 8',
+    114 => 'win8', # 'Microsoft Windows 8 64-bit',
+    115 => 'win8', # 'Microsoft Windows Server 2012 R2',
+    116 => 'win10', # 'Microsoft Windows Server 2016',
+    117 => 'win8', # 'Microsoft Windows 8.1',
+    118 => 'win8', # 'Microsoft Windows 8.1 64-bit',
+    119 => 'win10', # 'Microsoft Windows 10',
+    120 => 'win10', # 'Microsoft Windows 10 64-bit',
+    121 => 'win10', # 'Microsoft Windows Server 2019',
+    122 => 'win11', # 'Microsoft Windows 11 64-bit',
+    123 => 'win11', # 'Microsoft Windows Server 2022',
+    # others => 'other',
+};
+
+sub get_ostype {
+    my ($id) = @_;
+
+    return $ostype_ids->{$id} // 'other';
+}
+
 sub find_by {
     my ($key, $param) = @_;
     foreach my $resource (@resources) {
@@ -160,6 +225,10 @@ sub parse_ovf {
     my $xpath_find_disks="/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/ovf:Item[rasd:ResourceType=${disk_id}]";
     my @disk_items = $xpc->findnodes($xpath_find_disks);
 
+    my $xpath_find_ostype_id = "/ovf:Envelope/ovf:VirtualSystem/ovf:OperatingSystemSection/\@ovf:id";
+    my $ostype_id = $xpc->findvalue($xpath_find_ostype_id);
+    $qm->{ostype} = get_ostype($ostype_id);
+
     # disks metadata is split in four different xml elements:
     # * as an Item node of type DiskDrive in the VirtualHardwareSection
     # * as an Disk node in the DiskSection
diff --git a/src/test/run_ovf_tests.pl b/src/test/run_ovf_tests.pl
index 1ef78cc..e949c15 100755
--- a/src/test/run_ovf_tests.pl
+++ b/src/test/run_ovf_tests.pl
@@ -59,13 +59,16 @@ print "\ntesting vm.conf extraction\n";
 is($win2008->{qm}->{name}, 'Win2008-R2x64', 'win2008 VM name is correct');
 is($win2008->{qm}->{memory}, '2048', 'win2008 VM memory is correct');
 is($win2008->{qm}->{cores}, '1', 'win2008 VM cores are correct');
+is($win2008->{qm}->{ostype}, 'win7', 'win2008 VM ostype is correcty');
 
 is($win10->{qm}->{name}, 'Win10-Liz', 'win10 VM name is correct');
 is($win10->{qm}->{memory}, '6144', 'win10 VM memory is correct');
 is($win10->{qm}->{cores}, '4', 'win10 VM cores are correct');
+is($win10->{qm}->{ostype}, 'other', 'win10 VM ostype is correct');
 
 is($win10noNs->{qm}->{name}, 'Win10-Liz', 'win10 VM (no default rasd NS) name is correct');
 is($win10noNs->{qm}->{memory}, '6144', 'win10 VM (no default rasd NS) memory is correct');
 is($win10noNs->{qm}->{cores}, '4', 'win10 VM (no default rasd NS) cores are correct');
+is($win10noNs->{qm}->{ostype}, 'other', 'win10 VM (no default rasd NS) ostype is correct');
 
 done_testing();
-- 
2.39.2





^ permalink raw reply	[flat|nested] 67+ messages in thread

* [pve-devel] [PATCH storage 5/9] ovf: implement parsing out firmware type
  2024-04-16 13:18 [pve-devel] [PATCH storage/qemu-server/pve-manager] implement ova/ovf import for directory type storages Dominik Csapak
                   ` (3 preceding siblings ...)
  2024-04-16 13:18 ` [pve-devel] [PATCH storage 4/9] ovf: implement parsing the ostype Dominik Csapak
@ 2024-04-16 13:18 ` Dominik Csapak
  2024-04-17 11:43   ` Fiona Ebner
  2024-04-16 13:18 ` [pve-devel] [PATCH storage 6/9] ovf: implement rudimentary boot order Dominik Csapak
                   ` (12 subsequent siblings)
  17 siblings, 1 reply; 67+ messages in thread
From: Dominik Csapak @ 2024-04-16 13:18 UTC (permalink / raw)
  To: pve-devel

it seems there is no part of the ovf standard that handles which type of
bios there is (at least i could not find it). Every ovf/ova i tested
either has no info about it, or has it in a vmware specific property
which we pare here.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 src/PVE/Storage/DirPlugin.pm                       | 5 +++++
 src/PVE/Storage/OVF.pm                             | 5 +++++
 src/test/ovf_manifests/Win10-Liz_no_default_ns.ovf | 1 +
 src/test/run_ovf_tests.pl                          | 1 +
 4 files changed, 12 insertions(+)

diff --git a/src/PVE/Storage/DirPlugin.pm b/src/PVE/Storage/DirPlugin.pm
index 50ceab7..8a248c7 100644
--- a/src/PVE/Storage/DirPlugin.pm
+++ b/src/PVE/Storage/DirPlugin.pm
@@ -283,6 +283,11 @@ sub get_import_metadata {
 	};
     }
 
+    if (defined($res->{qm}->{bios}) && $res->{qm}->{bios} eq 'ovmf') {
+	$disks->{efidisk0} = 1;
+	push @$warnings, { type => 'efi-state-lost', key => 'bios', value => 'ovmf' };
+    }
+
     return {
 	type => 'vm',
 	source => $volname,
diff --git a/src/PVE/Storage/OVF.pm b/src/PVE/Storage/OVF.pm
index dd8431a..f56c34d 100644
--- a/src/PVE/Storage/OVF.pm
+++ b/src/PVE/Storage/OVF.pm
@@ -229,6 +229,11 @@ sub parse_ovf {
     my $ostype_id = $xpc->findvalue($xpath_find_ostype_id);
     $qm->{ostype} = get_ostype($ostype_id);
 
+    # vmware specific firmware config, seems to not be standardized in ovf ?
+    my $xpath_find_firmware = "/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/vmw:Config[\@vmw:key=\"firmware\"]/\@vmw:value";
+    my $firmware = $xpc->findvalue($xpath_find_firmware) || 'seabios';
+    $qm->{bios} = 'ovmf' if $firmware eq 'efi';
+
     # disks metadata is split in four different xml elements:
     # * as an Item node of type DiskDrive in the VirtualHardwareSection
     # * as an Disk node in the DiskSection
diff --git a/src/test/ovf_manifests/Win10-Liz_no_default_ns.ovf b/src/test/ovf_manifests/Win10-Liz_no_default_ns.ovf
index b93540f..10ccaf1 100755
--- a/src/test/ovf_manifests/Win10-Liz_no_default_ns.ovf
+++ b/src/test/ovf_manifests/Win10-Liz_no_default_ns.ovf
@@ -137,6 +137,7 @@
       <vmw:Config ovf:required="false" vmw:key="powerOpInfo.resetType" vmw:value="soft"/>
       <vmw:Config ovf:required="false" vmw:key="powerOpInfo.suspendType" vmw:value="soft"/>
       <vmw:Config ovf:required="false" vmw:key="tools.syncTimeWithHost" vmw:value="false"/>
+      <vmw:Config ovf:required="false" vmw:key="firmware" vmw:value="efi"/>
     </VirtualHardwareSection>
   </VirtualSystem>
 </Envelope>
diff --git a/src/test/run_ovf_tests.pl b/src/test/run_ovf_tests.pl
index e949c15..a8b2d1e 100755
--- a/src/test/run_ovf_tests.pl
+++ b/src/test/run_ovf_tests.pl
@@ -70,5 +70,6 @@ is($win10noNs->{qm}->{name}, 'Win10-Liz', 'win10 VM (no default rasd NS) name is
 is($win10noNs->{qm}->{memory}, '6144', 'win10 VM (no default rasd NS) memory is correct');
 is($win10noNs->{qm}->{cores}, '4', 'win10 VM (no default rasd NS) cores are correct');
 is($win10noNs->{qm}->{ostype}, 'other', 'win10 VM (no default rasd NS) ostype is correct');
+is($win10noNs->{qm}->{bios}, 'ovmf', 'win10 VM (no default rasd NS) bios is correct');
 
 done_testing();
-- 
2.39.2





^ permalink raw reply	[flat|nested] 67+ messages in thread

* [pve-devel] [PATCH storage 6/9] ovf: implement rudimentary boot order
  2024-04-16 13:18 [pve-devel] [PATCH storage/qemu-server/pve-manager] implement ova/ovf import for directory type storages Dominik Csapak
                   ` (4 preceding siblings ...)
  2024-04-16 13:18 ` [pve-devel] [PATCH storage 5/9] ovf: implement parsing out firmware type Dominik Csapak
@ 2024-04-16 13:18 ` Dominik Csapak
  2024-04-17 11:54   ` Fiona Ebner
  2024-04-16 13:19 ` [pve-devel] [PATCH storage 7/9] ovf: implement parsing nics Dominik Csapak
                   ` (11 subsequent siblings)
  17 siblings, 1 reply; 67+ messages in thread
From: Dominik Csapak @ 2024-04-16 13:18 UTC (permalink / raw)
  To: pve-devel

simply add all parsed disks to the boot order in the order we encounter
them (similar to the esxi plugin).

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 src/PVE/Storage/OVF.pm    | 6 ++++++
 src/test/run_ovf_tests.pl | 3 +++
 2 files changed, 9 insertions(+)

diff --git a/src/PVE/Storage/OVF.pm b/src/PVE/Storage/OVF.pm
index f56c34d..f438de2 100644
--- a/src/PVE/Storage/OVF.pm
+++ b/src/PVE/Storage/OVF.pm
@@ -245,6 +245,8 @@ sub parse_ovf {
     # when all the nodes has been found out, we copy the relevant information to
     # a $pve_disk hash ref, which we push to @disks;
 
+    my $boot = [];
+
     foreach my $item_node (@disk_items) {
 
 	my $disk_node;
@@ -348,6 +350,10 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
 	};
 	$pve_disk->{virtual_size} = $virtual_size if defined($virtual_size);
 	push @disks, $pve_disk;
+	push @$boot, $pve_disk_address;
+    }
+
+    $qm->{boot} = "order=" . join(';', @$boot);
 
     }
 
diff --git a/src/test/run_ovf_tests.pl b/src/test/run_ovf_tests.pl
index a8b2d1e..8cf5662 100755
--- a/src/test/run_ovf_tests.pl
+++ b/src/test/run_ovf_tests.pl
@@ -56,16 +56,19 @@ is($win10noNs->{disks}->[0]->{virtual_size}, 2048, 'single disk vm (no default r
 
 print "\ntesting vm.conf extraction\n";
 
+is($win2008->{qm}->{boot}, 'order=scsi0;scsi1', 'win2008 VM boot is correct');
 is($win2008->{qm}->{name}, 'Win2008-R2x64', 'win2008 VM name is correct');
 is($win2008->{qm}->{memory}, '2048', 'win2008 VM memory is correct');
 is($win2008->{qm}->{cores}, '1', 'win2008 VM cores are correct');
 is($win2008->{qm}->{ostype}, 'win7', 'win2008 VM ostype is correcty');
 
+is($win10->{qm}->{boot}, 'order=scsi0', 'win10 VM boot is correct');
 is($win10->{qm}->{name}, 'Win10-Liz', 'win10 VM name is correct');
 is($win10->{qm}->{memory}, '6144', 'win10 VM memory is correct');
 is($win10->{qm}->{cores}, '4', 'win10 VM cores are correct');
 is($win10->{qm}->{ostype}, 'other', 'win10 VM ostype is correct');
 
+is($win10noNs->{qm}->{boot}, 'order=scsi0', 'win10 VM (no default rasd NS) boot is correct');
 is($win10noNs->{qm}->{name}, 'Win10-Liz', 'win10 VM (no default rasd NS) name is correct');
 is($win10noNs->{qm}->{memory}, '6144', 'win10 VM (no default rasd NS) memory is correct');
 is($win10noNs->{qm}->{cores}, '4', 'win10 VM (no default rasd NS) cores are correct');
-- 
2.39.2





^ permalink raw reply	[flat|nested] 67+ messages in thread

* [pve-devel] [PATCH storage 7/9] ovf: implement parsing nics
  2024-04-16 13:18 [pve-devel] [PATCH storage/qemu-server/pve-manager] implement ova/ovf import for directory type storages Dominik Csapak
                   ` (5 preceding siblings ...)
  2024-04-16 13:18 ` [pve-devel] [PATCH storage 6/9] ovf: implement rudimentary boot order Dominik Csapak
@ 2024-04-16 13:19 ` Dominik Csapak
  2024-04-17 12:09   ` Fiona Ebner
  2024-04-18  8:22   ` Fiona Ebner
  2024-04-16 13:19 ` [pve-devel] [PATCH storage 8/9] api: allow ova upload/download Dominik Csapak
                   ` (10 subsequent siblings)
  17 siblings, 2 replies; 67+ messages in thread
From: Dominik Csapak @ 2024-04-16 13:19 UTC (permalink / raw)
  To: pve-devel

by iterating over the relevant parts and trying to parse out the
'ResourceSubType'. The content of that is not standardized, but I only
ever found examples that are compatible with vmware, meaning it's
either 'e1000', 'e1000e' or 'vmxnet3' (in various capitalizations; thus
the `lc()`)

As a fallback i used vmxnet3, since i guess most OVAs are tuned for
vmware.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 src/PVE/Storage/DirPlugin.pm |  2 +-
 src/PVE/Storage/OVF.pm       | 20 +++++++++++++++++++-
 src/test/run_ovf_tests.pl    |  5 +++++
 3 files changed, 25 insertions(+), 2 deletions(-)

diff --git a/src/PVE/Storage/DirPlugin.pm b/src/PVE/Storage/DirPlugin.pm
index 8a248c7..21c8350 100644
--- a/src/PVE/Storage/DirPlugin.pm
+++ b/src/PVE/Storage/DirPlugin.pm
@@ -294,7 +294,7 @@ sub get_import_metadata {
 	'create-args' => $res->{qm},
 	'disks' => $disks,
 	warnings => $warnings,
-	net => [],
+	net => $res->{net},
     };
 }
 
diff --git a/src/PVE/Storage/OVF.pm b/src/PVE/Storage/OVF.pm
index f438de2..c3e7ed9 100644
--- a/src/PVE/Storage/OVF.pm
+++ b/src/PVE/Storage/OVF.pm
@@ -120,6 +120,12 @@ sub get_ostype {
     return $ostype_ids->{$id} // 'other';
 }
 
+my $allowed_nic_models = [
+    'e1000',
+    'e1000e',
+    'vmxnet3',
+];
+
 sub find_by {
     my ($key, $param) = @_;
     foreach my $resource (@resources) {
@@ -355,9 +361,21 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
 
     $qm->{boot} = "order=" . join(';', @$boot);
 
+    my $nic_id = dtmf_name_to_id('Ethernet Adapter');
+    my $xpath_find_nics = "/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/ovf:Item[rasd:ResourceType=${nic_id}]";
+    my @nic_items = $xpc->findnodes($xpath_find_nics);
+
+    my $net = {};
+
+    my $net_count = 0;
+    foreach my $item_node (@nic_items) {
+	my $model = $xpc->findvalue('rasd:ResourceSubType', $item_node);
+	$model = lc($model);
+	$model = 'vmxnet3' if ! grep $model, @$allowed_nic_models;
+	$net->{"net${net_count}"} = { model => $model };
     }
 
-    return {qm => $qm, disks => \@disks};
+    return {qm => $qm, disks => \@disks, net => $net};
 }
 
 1;
diff --git a/src/test/run_ovf_tests.pl b/src/test/run_ovf_tests.pl
index 8cf5662..d9a7b4b 100755
--- a/src/test/run_ovf_tests.pl
+++ b/src/test/run_ovf_tests.pl
@@ -54,6 +54,11 @@ is($win10noNs->{disks}->[0]->{disk_address}, 'scsi0', 'single disk vm (no defaul
 is($win10noNs->{disks}->[0]->{backing_file}, "$test_manifests/Win10-Liz-disk1.vmdk", 'single disk vm (no default rasd NS) has the correct disk backing device');
 is($win10noNs->{disks}->[0]->{virtual_size}, 2048, 'single disk vm (no default rasd NS) has the correct size');
 
+print "testing nics\n";
+is($win2008->{net}->{net0}->{model}, 'e1000', 'win2008 has correct nic model');
+is($win10->{net}->{net0}->{model}, 'e1000e', 'win10 has correct nic model');
+is($win10noNs->{net}->{net0}->{model}, 'e1000e', 'win10 (no default rasd NS) has correct nic model');
+
 print "\ntesting vm.conf extraction\n";
 
 is($win2008->{qm}->{boot}, 'order=scsi0;scsi1', 'win2008 VM boot is correct');
-- 
2.39.2





^ permalink raw reply	[flat|nested] 67+ messages in thread

* [pve-devel] [PATCH storage 8/9] api: allow ova upload/download
  2024-04-16 13:18 [pve-devel] [PATCH storage/qemu-server/pve-manager] implement ova/ovf import for directory type storages Dominik Csapak
                   ` (6 preceding siblings ...)
  2024-04-16 13:19 ` [pve-devel] [PATCH storage 7/9] ovf: implement parsing nics Dominik Csapak
@ 2024-04-16 13:19 ` Dominik Csapak
  2024-04-18  8:05   ` Fiona Ebner
  2024-04-16 13:19 ` [pve-devel] [PATCH storage 9/9] plugin: enable import for nfs/btfs/cifs/cephfs Dominik Csapak
                   ` (9 subsequent siblings)
  17 siblings, 1 reply; 67+ messages in thread
From: Dominik Csapak @ 2024-04-16 13:19 UTC (permalink / raw)
  To: pve-devel

introducing a seperate regex that only contains ova, since
upload/downloading ovfs does not make sense (since the disks are then
missing).

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 src/PVE/API2/Storage/Status.pm | 14 ++++++++++++--
 src/PVE/Storage.pm             | 11 +++++++++++
 2 files changed, 23 insertions(+), 2 deletions(-)

diff --git a/src/PVE/API2/Storage/Status.pm b/src/PVE/API2/Storage/Status.pm
index 77ed57c..14d6fe8 100644
--- a/src/PVE/API2/Storage/Status.pm
+++ b/src/PVE/API2/Storage/Status.pm
@@ -382,7 +382,7 @@ __PACKAGE__->register_method ({
 	    content => {
 		description => "Content type.",
 		type => 'string', format => 'pve-storage-content',
-		enum => ['iso', 'vztmpl'],
+		enum => ['iso', 'vztmpl', 'import'],
 	    },
 	    filename => {
 		description => "The name of the file to create. Caution: This will be normalized!",
@@ -448,6 +448,11 @@ __PACKAGE__->register_method ({
 		raise_param_exc({ filename => "wrong file extension" });
 	    }
 	    $path = PVE::Storage::get_vztmpl_dir($cfg, $param->{storage});
+	} elsif ($content eq 'import') {
+	    if ($filename !~ m![^/]+$PVE::Storage::UPLOAD_IMPORT_EXT_RE_1$!) {
+		raise_param_exc({ filename => "wrong file extension" });
+	    }
+	    $path = PVE::Storage::get_import_dir($cfg, $param->{storage});
 	} else {
 	    raise_param_exc({ content => "upload content type '$content' not allowed" });
 	}
@@ -572,7 +577,7 @@ __PACKAGE__->register_method({
 	    content => {
 		description => "Content type.", # TODO: could be optional & detected in most cases
 		type => 'string', format => 'pve-storage-content',
-		enum => ['iso', 'vztmpl'],
+		enum => ['iso', 'vztmpl', 'import'],
 	    },
 	    filename => {
 		description => "The name of the file to create. Caution: This will be normalized!",
@@ -642,6 +647,11 @@ __PACKAGE__->register_method({
 		raise_param_exc({ filename => "wrong file extension" });
 	    }
 	    $path = PVE::Storage::get_vztmpl_dir($cfg, $storage);
+	} elsif ($content eq 'import') {
+	    if ($filename !~ m![^/]+$PVE::Storage::UPLOAD_IMPORT_EXT_RE_1$!) {
+		raise_param_exc({ filename => "wrong file extension" });
+	    }
+	    $path = PVE::Storage::get_import_dir($cfg, $param->{storage});
 	} else {
 	    raise_param_exc({ content => "upload content-type '$content' is not allowed" });
 	}
diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
index bc073ef..c90dd42 100755
--- a/src/PVE/Storage.pm
+++ b/src/PVE/Storage.pm
@@ -116,6 +116,8 @@ our $BACKUP_EXT_RE_2 = qr/\.(tgz|(?:tar|vma)(?:\.(${\PVE::Storage::Plugin::COMPR
 
 our $IMPORT_EXT_RE_1 = qr/\.(ov[af])/;
 
+our $UPLOAD_IMPORT_EXT_RE_1 = qr/\.(ova)/;
+
 # FIXME remove with PVE 8.0, add versioned breaks for pve-manager
 our $vztmpl_extension_re = $VZTMPL_EXT_RE_1;
 
@@ -462,6 +464,15 @@ sub get_iso_dir {
     return $plugin->get_subdir($scfg, 'iso');
 }
 
+sub get_import_dir {
+    my ($cfg, $storeid) = @_;
+
+    my $scfg = storage_config($cfg, $storeid);
+    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
+
+    return $plugin->get_subdir($scfg, 'import');
+}
+
 sub get_vztmpl_dir {
     my ($cfg, $storeid) = @_;
 
-- 
2.39.2





^ permalink raw reply	[flat|nested] 67+ messages in thread

* [pve-devel] [PATCH storage 9/9] plugin: enable import for nfs/btfs/cifs/cephfs
  2024-04-16 13:18 [pve-devel] [PATCH storage/qemu-server/pve-manager] implement ova/ovf import for directory type storages Dominik Csapak
                   ` (7 preceding siblings ...)
  2024-04-16 13:19 ` [pve-devel] [PATCH storage 8/9] api: allow ova upload/download Dominik Csapak
@ 2024-04-16 13:19 ` Dominik Csapak
  2024-04-18  8:43   ` Fiona Ebner
  2024-04-16 13:19 ` [pve-devel] [PATCH qemu-server 1/3] api: delete unused OVF.pm Dominik Csapak
                   ` (8 subsequent siblings)
  17 siblings, 1 reply; 67+ messages in thread
From: Dominik Csapak @ 2024-04-16 13:19 UTC (permalink / raw)
  To: pve-devel

and reuse the DirPlugin implementation

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 src/PVE/Storage/BTRFSPlugin.pm  | 5 +++++
 src/PVE/Storage/CIFSPlugin.pm   | 6 +++++-
 src/PVE/Storage/CephFSPlugin.pm | 6 +++++-
 src/PVE/Storage/NFSPlugin.pm    | 6 +++++-
 4 files changed, 20 insertions(+), 3 deletions(-)

diff --git a/src/PVE/Storage/BTRFSPlugin.pm b/src/PVE/Storage/BTRFSPlugin.pm
index 42815cb..b7e3f82 100644
--- a/src/PVE/Storage/BTRFSPlugin.pm
+++ b/src/PVE/Storage/BTRFSPlugin.pm
@@ -40,6 +40,7 @@ sub plugindata {
 		backup => 1,
 		snippets => 1,
 		none => 1,
+		import => 1,
 	    },
 	    { images => 1, rootdir => 1 },
 	],
@@ -930,4 +931,8 @@ sub volume_import {
     return "$storeid:$volname";
 }
 
+sub get_import_metadata {
+    return PVE::Storage::DirPlugin::get_import_metadata(@_);
+}
+
 1
diff --git a/src/PVE/Storage/CIFSPlugin.pm b/src/PVE/Storage/CIFSPlugin.pm
index 2184471..475065a 100644
--- a/src/PVE/Storage/CIFSPlugin.pm
+++ b/src/PVE/Storage/CIFSPlugin.pm
@@ -99,7 +99,7 @@ sub type {
 sub plugindata {
     return {
 	content => [ { images => 1, rootdir => 1, vztmpl => 1, iso => 1,
-		   backup => 1, snippets => 1}, { images => 1 }],
+		   backup => 1, snippets => 1, import => 1}, { images => 1 }],
 	format => [ { raw => 1, qcow2 => 1, vmdk => 1 } , 'raw' ],
     };
 }
@@ -314,4 +314,8 @@ sub update_volume_attribute {
     return PVE::Storage::DirPlugin::update_volume_attribute(@_);
 }
 
+sub get_import_metadata {
+    return PVE::Storage::DirPlugin::get_import_metadata(@_);
+}
+
 1;
diff --git a/src/PVE/Storage/CephFSPlugin.pm b/src/PVE/Storage/CephFSPlugin.pm
index 8aad518..36c64ea 100644
--- a/src/PVE/Storage/CephFSPlugin.pm
+++ b/src/PVE/Storage/CephFSPlugin.pm
@@ -116,7 +116,7 @@ sub type {
 
 sub plugindata {
     return {
-	content => [ { vztmpl => 1, iso => 1, backup => 1, snippets => 1},
+	content => [ { vztmpl => 1, iso => 1, backup => 1, snippets => 1, import => 1 },
 		     { backup => 1 }],
     };
 }
@@ -261,4 +261,8 @@ sub update_volume_attribute {
     return PVE::Storage::DirPlugin::update_volume_attribute(@_);
 }
 
+sub get_import_metadata {
+    return PVE::Storage::DirPlugin::get_import_metadata(@_);
+}
+
 1;
diff --git a/src/PVE/Storage/NFSPlugin.pm b/src/PVE/Storage/NFSPlugin.pm
index f2e4c0d..72e9c6d 100644
--- a/src/PVE/Storage/NFSPlugin.pm
+++ b/src/PVE/Storage/NFSPlugin.pm
@@ -53,7 +53,7 @@ sub type {
 
 sub plugindata {
     return {
-	content => [ { images => 1, rootdir => 1, vztmpl => 1, iso => 1, backup => 1, snippets => 1 },
+	content => [ { images => 1, rootdir => 1, vztmpl => 1, iso => 1, backup => 1, snippets => 1, import => 1 },
 		     { images => 1 }],
 	format => [ { raw => 1, qcow2 => 1, vmdk => 1 } , 'raw' ],
     };
@@ -223,4 +223,8 @@ sub update_volume_attribute {
     return PVE::Storage::DirPlugin::update_volume_attribute(@_);
 }
 
+sub get_import_metadata {
+    return PVE::Storage::DirPlugin::get_import_metadata(@_);
+}
+
 1;
-- 
2.39.2





^ permalink raw reply	[flat|nested] 67+ messages in thread

* [pve-devel] [PATCH qemu-server 1/3] api: delete unused OVF.pm
  2024-04-16 13:18 [pve-devel] [PATCH storage/qemu-server/pve-manager] implement ova/ovf import for directory type storages Dominik Csapak
                   ` (8 preceding siblings ...)
  2024-04-16 13:19 ` [pve-devel] [PATCH storage 9/9] plugin: enable import for nfs/btfs/cifs/cephfs Dominik Csapak
@ 2024-04-16 13:19 ` Dominik Csapak
  2024-04-18  8:52   ` Fiona Ebner
  2024-04-16 13:19 ` [pve-devel] [PATCH qemu-server 2/3] use OVF from Storage Dominik Csapak
                   ` (7 subsequent siblings)
  17 siblings, 1 reply; 67+ messages in thread
From: Dominik Csapak @ 2024-04-16 13:19 UTC (permalink / raw)
  To: pve-devel

the api part was never in use by anything

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 PVE/API2/Qemu/Makefile |  2 +-
 PVE/API2/Qemu/OVF.pm   | 53 ------------------------------------------
 2 files changed, 1 insertion(+), 54 deletions(-)
 delete mode 100644 PVE/API2/Qemu/OVF.pm

diff --git a/PVE/API2/Qemu/Makefile b/PVE/API2/Qemu/Makefile
index bdd4762b..5d4abda6 100644
--- a/PVE/API2/Qemu/Makefile
+++ b/PVE/API2/Qemu/Makefile
@@ -1,4 +1,4 @@
-SOURCES=Agent.pm CPU.pm Machine.pm OVF.pm
+SOURCES=Agent.pm CPU.pm Machine.pm
 
 .PHONY: install
 install:
diff --git a/PVE/API2/Qemu/OVF.pm b/PVE/API2/Qemu/OVF.pm
deleted file mode 100644
index cc0ef2da..00000000
--- a/PVE/API2/Qemu/OVF.pm
+++ /dev/null
@@ -1,53 +0,0 @@
-package PVE::API2::Qemu::OVF;
-
-use strict;
-use warnings;
-
-use PVE::JSONSchema qw(get_standard_option);
-use PVE::QemuServer::OVF;
-use PVE::RESTHandler;
-
-use base qw(PVE::RESTHandler);
-
-__PACKAGE__->register_method ({
-    name => 'readovf',
-    path => '',
-    method => 'GET',
-    proxyto => 'node',
-    description => "Read an .ovf manifest.",
-    protected => 1,
-    parameters => {
-	additionalProperties => 0,
-	properties => {
-	    node => get_standard_option('pve-node'),
-	    manifest => {
-		description => "Path to .ovf manifest.",
-		type => 'string',
-	    },
-	},
-    },
-    returns => {
-	type => 'object',
-	additionalProperties => 1,
-	properties => PVE::QemuServer::json_ovf_properties(),
-	description => "VM config according to .ovf manifest.",
-    },
-    code => sub {
-	my ($param) = @_;
-
-	my $manifest = $param->{manifest};
-	die "check for file $manifest failed - $!\n" if !-f $manifest;
-
-	my $parsed = PVE::QemuServer::OVF::parse_ovf($manifest);
-	my $result;
-	$result->{cores} = $parsed->{qm}->{cores};
-	$result->{name} =  $parsed->{qm}->{name};
-	$result->{memory} = $parsed->{qm}->{memory};
-	my $disks = $parsed->{disks};
-	for my $disk (@$disks) {
-	    $result->{$disk->{disk_address}} = $disk->{backing_file};
-	}
-	return $result;
-    }});
-
-1;
-- 
2.39.2





^ permalink raw reply	[flat|nested] 67+ messages in thread

* [pve-devel] [PATCH qemu-server 2/3] use OVF from Storage
  2024-04-16 13:18 [pve-devel] [PATCH storage/qemu-server/pve-manager] implement ova/ovf import for directory type storages Dominik Csapak
                   ` (9 preceding siblings ...)
  2024-04-16 13:19 ` [pve-devel] [PATCH qemu-server 1/3] api: delete unused OVF.pm Dominik Csapak
@ 2024-04-16 13:19 ` Dominik Csapak
  2024-04-18  9:07   ` Fiona Ebner
  2024-04-16 13:19 ` [pve-devel] [PATCH qemu-server 3/3] api: create: implement extracting disks when needed for import-from Dominik Csapak
                   ` (6 subsequent siblings)
  17 siblings, 1 reply; 67+ messages in thread
From: Dominik Csapak @ 2024-04-16 13:19 UTC (permalink / raw)
  To: pve-devel

and delete it here (incl tests; they live in pve-storage now).

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 PVE/CLI/qm.pm                                 |   4 +-
 PVE/QemuServer/Makefile                       |   1 -
 PVE/QemuServer/OVF.pm                         | 242 ------------------
 test/Makefile                                 |   5 +-
 test/ovf_manifests/Win10-Liz-disk1.vmdk       | Bin 65536 -> 0 bytes
 test/ovf_manifests/Win10-Liz.ovf              | 142 ----------
 .../ovf_manifests/Win10-Liz_no_default_ns.ovf | 142 ----------
 test/ovf_manifests/Win_2008_R2_two-disks.ovf  | 145 -----------
 test/ovf_manifests/disk1.vmdk                 | Bin 65536 -> 0 bytes
 test/ovf_manifests/disk2.vmdk                 | Bin 65536 -> 0 bytes
 test/run_ovf_tests.pl                         |  71 -----
 11 files changed, 3 insertions(+), 749 deletions(-)
 delete mode 100644 PVE/QemuServer/OVF.pm
 delete mode 100644 test/ovf_manifests/Win10-Liz-disk1.vmdk
 delete mode 100755 test/ovf_manifests/Win10-Liz.ovf
 delete mode 100755 test/ovf_manifests/Win10-Liz_no_default_ns.ovf
 delete mode 100755 test/ovf_manifests/Win_2008_R2_two-disks.ovf
 delete mode 100644 test/ovf_manifests/disk1.vmdk
 delete mode 100644 test/ovf_manifests/disk2.vmdk
 delete mode 100755 test/run_ovf_tests.pl

diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
index b105830f..d1d35800 100755
--- a/PVE/CLI/qm.pm
+++ b/PVE/CLI/qm.pm
@@ -28,13 +28,13 @@ use PVE::Tools qw(extract_param file_get_contents);
 
 use PVE::API2::Qemu::Agent;
 use PVE::API2::Qemu;
+use PVE::Storage::OVF;
 use PVE::QemuConfig;
 use PVE::QemuServer::Drive;
 use PVE::QemuServer::Helpers;
 use PVE::QemuServer::Agent qw(agent_available);
 use PVE::QemuServer::ImportDisk;
 use PVE::QemuServer::Monitor qw(mon_cmd);
-use PVE::QemuServer::OVF;
 use PVE::QemuServer;
 
 use PVE::CLIHandler;
@@ -729,7 +729,7 @@ __PACKAGE__->register_method ({
 	my $storecfg = PVE::Storage::config();
 	PVE::Storage::storage_check_enabled($storecfg, $storeid);
 
-	my $parsed = PVE::QemuServer::OVF::parse_ovf($ovf_file);
+	my $parsed = PVE::Storage::OVF::parse_ovf($ovf_file);
 
 	if ($dryrun) {
 	    print to_json($parsed, { pretty => 1, canonical => 1});
diff --git a/PVE/QemuServer/Makefile b/PVE/QemuServer/Makefile
index ac26e56f..89d12091 100644
--- a/PVE/QemuServer/Makefile
+++ b/PVE/QemuServer/Makefile
@@ -2,7 +2,6 @@ SOURCES=PCI.pm		\
 	USB.pm		\
 	Memory.pm	\
 	ImportDisk.pm	\
-	OVF.pm		\
 	Cloudinit.pm	\
 	Agent.pm	\
 	Helpers.pm	\
diff --git a/PVE/QemuServer/OVF.pm b/PVE/QemuServer/OVF.pm
deleted file mode 100644
index b97b0520..00000000
--- a/PVE/QemuServer/OVF.pm
+++ /dev/null
@@ -1,242 +0,0 @@
-# Open Virtualization Format import routines
-# https://www.dmtf.org/standards/ovf
-package PVE::QemuServer::OVF;
-
-use strict;
-use warnings;
-
-use XML::LibXML;
-use File::Spec;
-use File::Basename;
-use Data::Dumper;
-use Cwd 'realpath';
-
-use PVE::Tools;
-use PVE::Storage;
-
-# map OVF resources types to descriptive strings
-# this will allow us to explore the xml tree without using magic numbers
-# http://schemas.dmtf.org/wbem/cim-html/2/CIM_ResourceAllocationSettingData.html
-my @resources = (
-    { id => 1, dtmf_name => 'Other' },
-    { id => 2, dtmf_name => 'Computer System' },
-    { id => 3, dtmf_name => 'Processor' },
-    { id => 4, dtmf_name => 'Memory' },
-    { id => 5, dtmf_name => 'IDE Controller', pve_type => 'ide' },
-    { id => 6, dtmf_name => 'Parallel SCSI HBA', pve_type => 'scsi' },
-    { id => 7, dtmf_name => 'FC HBA' },
-    { id => 8, dtmf_name => 'iSCSI HBA' },
-    { id => 9, dtmf_name => 'IB HCA' },
-    { id => 10, dtmf_name => 'Ethernet Adapter' },
-    { id => 11, dtmf_name => 'Other Network Adapter' },
-    { id => 12, dtmf_name => 'I/O Slot' },
-    { id => 13, dtmf_name => 'I/O Device' },
-    { id => 14, dtmf_name => 'Floppy Drive' },
-    { id => 15, dtmf_name => 'CD Drive' },
-    { id => 16, dtmf_name => 'DVD drive' },
-    { id => 17, dtmf_name => 'Disk Drive' },
-    { id => 18, dtmf_name => 'Tape Drive' },
-    { id => 19, dtmf_name => 'Storage Extent' },
-    { id => 20, dtmf_name => 'Other storage device', pve_type => 'sata'},
-    { id => 21, dtmf_name => 'Serial port' },
-    { id => 22, dtmf_name => 'Parallel port' },
-    { id => 23, dtmf_name => 'USB Controller' },
-    { id => 24, dtmf_name => 'Graphics controller' },
-    { id => 25, dtmf_name => 'IEEE 1394 Controller' },
-    { id => 26, dtmf_name => 'Partitionable Unit' },
-    { id => 27, dtmf_name => 'Base Partitionable Unit' },
-    { id => 28, dtmf_name => 'Power' },
-    { id => 29, dtmf_name => 'Cooling Capacity' },
-    { id => 30, dtmf_name => 'Ethernet Switch Port' },
-    { id => 31, dtmf_name => 'Logical Disk' },
-    { id => 32, dtmf_name => 'Storage Volume' },
-    { id => 33, dtmf_name => 'Ethernet Connection' },
-    { id => 34, dtmf_name => 'DMTF reserved' },
-    { id => 35, dtmf_name => 'Vendor Reserved'}
-);
-
-sub find_by {
-    my ($key, $param) = @_;
-    foreach my $resource (@resources) {
-	if ($resource->{$key} eq $param) {
-	    return ($resource);
-	}
-    }
-    return;
-}
-
-sub dtmf_name_to_id {
-    my ($dtmf_name) = @_;
-    my $found = find_by('dtmf_name', $dtmf_name);
-    if ($found) {
-	return $found->{id};
-    } else {
-	return;
-    }
-}
-
-sub id_to_pve {
-    my ($id) = @_;
-    my $resource = find_by('id', $id);
-    if ($resource) {
-	return $resource->{pve_type};
-    } else {
-	return;
-    }
-}
-
-# returns two references, $qm which holds qm.conf style key/values, and \@disks
-sub parse_ovf {
-    my ($ovf, $debug) = @_;
-
-    my $dom = XML::LibXML->load_xml(location => $ovf, no_blanks => 1);
-
-    # register the xml namespaces in a xpath context object
-    # 'ovf' is the default namespace so it will prepended to each xml element
-    my $xpc = XML::LibXML::XPathContext->new($dom);
-    $xpc->registerNs('ovf', 'http://schemas.dmtf.org/ovf/envelope/1');
-    $xpc->registerNs('rasd', 'http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData');
-    $xpc->registerNs('vssd', 'http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData');
-
-
-    # hash to save qm.conf parameters
-    my $qm;
-
-    #array to save a disk list
-    my @disks;
-
-    # easy xpath
-    # walk down the dom until we find the matching XML element
-    my $xpath_find_name = "/ovf:Envelope/ovf:VirtualSystem/ovf:Name";
-    my $ovf_name = $xpc->findvalue($xpath_find_name);
-
-    if ($ovf_name) {
-	# PVE::QemuServer::confdesc requires a valid DNS name
-	($qm->{name} = $ovf_name) =~ s/[^a-zA-Z0-9\-\.]//g;
-    } else {
-	warn "warning: unable to parse the VM name in this OVF manifest, generating a default value\n";
-    }
-
-    # middle level xpath
-    # element[child] search the elements which have this [child]
-    my $processor_id = dtmf_name_to_id('Processor');
-    my $xpath_find_vcpu_count = "/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/ovf:Item[rasd:ResourceType=${processor_id}]/rasd:VirtualQuantity";
-    $qm->{'cores'} = $xpc->findvalue($xpath_find_vcpu_count);
-
-    my $memory_id = dtmf_name_to_id('Memory');
-    my $xpath_find_memory = ("/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/ovf:Item[rasd:ResourceType=${memory_id}]/rasd:VirtualQuantity");
-    $qm->{'memory'} = $xpc->findvalue($xpath_find_memory);
-
-    # middle level xpath
-    # here we expect multiple results, so we do not read the element value with
-    # findvalue() but store multiple elements with findnodes()
-    my $disk_id = dtmf_name_to_id('Disk Drive');
-    my $xpath_find_disks="/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/ovf:Item[rasd:ResourceType=${disk_id}]";
-    my @disk_items = $xpc->findnodes($xpath_find_disks);
-
-    # disks metadata is split in four different xml elements:
-    # * as an Item node of type DiskDrive in the VirtualHardwareSection
-    # * as an Disk node in the DiskSection
-    # * as a File node in the References section
-    # * each Item node also holds a reference to its owning controller
-    #
-    # we iterate over the list of Item nodes of type disk drive, and for each item,
-    # find the corresponding Disk node, and File node and owning controller
-    # when all the nodes has been found out, we copy the relevant information to
-    # a $pve_disk hash ref, which we push to @disks;
-
-    foreach my $item_node (@disk_items) {
-
-	my $disk_node;
-	my $file_node;
-	my $controller_node;
-	my $pve_disk;
-
-	print "disk item:\n", $item_node->toString(1), "\n" if $debug;
-
-	# from Item, find corresponding Disk node
-	# here the dot means the search should start from the current element in dom
-	my $host_resource = $xpc->findvalue('rasd:HostResource', $item_node);
-	my $disk_section_path;
-	my $disk_id;
-
-	# RFC 3986 "2.3.  Unreserved Characters"
-	my $valid_uripath_chars = qr/[[:alnum:]]|[\-\._~]/;
-
-	if ($host_resource =~ m|^ovf:/(${valid_uripath_chars}+)/(${valid_uripath_chars}+)$|) {
-	    $disk_section_path = $1;
-	    $disk_id = $2;
-	} else {
-	   warn "invalid host ressource $host_resource, skipping\n";
-	   next;
-	}
-	printf "disk section path: $disk_section_path and disk id: $disk_id\n" if $debug;
-
-	# tricky xpath
-	# @ means we filter the result query based on a the value of an item attribute ( @ = attribute)
-	# @ needs to be escaped to prevent Perl double quote interpolation
-	my $xpath_find_fileref = sprintf("/ovf:Envelope/ovf:DiskSection/\
-ovf:Disk[\@ovf:diskId='%s']/\@ovf:fileRef", $disk_id);
-	my $fileref = $xpc->findvalue($xpath_find_fileref);
-
-	my $valid_url_chars = qr@${valid_uripath_chars}|/@;
-	if (!$fileref || $fileref !~ m/^${valid_url_chars}+$/) {
-	    warn "invalid host ressource $host_resource, skipping\n";
-	    next;
-	}
-
-	# from Disk Node, find corresponding filepath
-	my $xpath_find_filepath = sprintf("/ovf:Envelope/ovf:References/ovf:File[\@ovf:id='%s']/\@ovf:href", $fileref);
-	my $filepath = $xpc->findvalue($xpath_find_filepath);
-	if (!$filepath) {
-	    warn "invalid file reference $fileref, skipping\n";
-	    next;
-	}
-	print "file path: $filepath\n" if $debug;
-
-	# from Item, find owning Controller type
-	my $controller_id = $xpc->findvalue('rasd:Parent', $item_node);
-	my $xpath_find_parent_type = sprintf("/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/\
-ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
-	my $controller_type = $xpc->findvalue($xpath_find_parent_type);
-	if (!$controller_type) {
-	    warn "invalid or missing controller: $controller_type, skipping\n";
-	    next;
-	}
-	print "owning controller type: $controller_type\n" if $debug;
-
-	# extract corresponding Controller node details
-	my $adress_on_controller = $xpc->findvalue('rasd:AddressOnParent', $item_node);
-	my $pve_disk_address = id_to_pve($controller_type) . $adress_on_controller;
-
-	# resolve symlinks and relative path components
-	# and die if the diskimage is not somewhere under the $ovf path
-	my $ovf_dir = realpath(dirname(File::Spec->rel2abs($ovf)));
-	my $backing_file_path = realpath(join ('/', $ovf_dir, $filepath));
-	if ($backing_file_path !~ /^\Q${ovf_dir}\E/) {
-	    die "error parsing $filepath, are you using a symlink ?\n";
-	}
-
-	if (!-e $backing_file_path) {
-	    die "error parsing $filepath, file seems not to exist at $backing_file_path\n";
-	}
-
-	($backing_file_path) = $backing_file_path =~ m|^(/.*)|; # untaint
-
-	my $virtual_size = PVE::Storage::file_size_info($backing_file_path);
-	die "error parsing $backing_file_path, cannot determine file size\n"
-	    if !$virtual_size;
-
-	$pve_disk = {
-	    disk_address => $pve_disk_address,
-	    backing_file => $backing_file_path,
-	    virtual_size => $virtual_size
-	};
-	push @disks, $pve_disk;
-
-    }
-
-    return {qm => $qm, disks => \@disks};
-}
-
-1;
diff --git a/test/Makefile b/test/Makefile
index 9e6d39e8..65ed7bc4 100644
--- a/test/Makefile
+++ b/test/Makefile
@@ -1,14 +1,11 @@
 all: test
 
-test: test_snapshot test_ovf test_cfg_to_cmd test_pci_addr_conflicts test_qemu_img_convert test_migration test_restore_config
+test: test_snapshot test_cfg_to_cmd test_pci_addr_conflicts test_qemu_img_convert test_migration test_restore_config
 
 test_snapshot: run_snapshot_tests.pl
 	./run_snapshot_tests.pl
 	./test_get_replicatable_volumes.pl
 
-test_ovf: run_ovf_tests.pl
-	./run_ovf_tests.pl
-
 test_cfg_to_cmd: run_config2command_tests.pl cfg2cmd/*.conf
 	perl -I../ ./run_config2command_tests.pl
 
diff --git a/test/ovf_manifests/Win10-Liz-disk1.vmdk b/test/ovf_manifests/Win10-Liz-disk1.vmdk
deleted file mode 100644
index 662354a3d1333a2f6c4364005e53bfe7cd8b9044..0000000000000000000000000000000000000000
GIT binary patch
literal 0
HcmV?d00001

literal 65536
zcmeIvy>HV%7zbeUF`dK)46s<q+^B&Pi6H|eMIb;zP1VdMK8V$P$u?2T)IS|NNta69
zSXw<No$qu%-}$}AUq|21A0<ihr0Gwa-nQ%QGfCR@wmshsN%A;JUhL<u_T%+U7Sd<o
zW^TMU0^M{}R2S(eR@1Ur*Q@eVF^^#r%c@u{hyC#J%V?Nq?*?z)=UG^1Wn9+n(yx6B
z(=ujtJiA)QVP~;guI5EOE2iV-%_??6=%y!^b+aeU_aA6Z4X2azC>{U!a5_FoJCkDB
zKRozW{5{B<Li)YUBEQ&fJe$RRZCRbA$5|CacQiT<A<uvIHbq(g$>yIY=etVNVcI$B
zY@^?CwTN|j)tg?;i)G&AZFqPqoW(5P2K~XUq>9sqVVe!!?y@Y;)^#k~TefEvd2_XU
z^M@5mfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk4^iOdL%ftb
z5g<T-009C72;3>~`p!f^fB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZ
zfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&U
zAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C7
z2oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N
p0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK;Zui`~%h>XmtPp

diff --git a/test/ovf_manifests/Win10-Liz.ovf b/test/ovf_manifests/Win10-Liz.ovf
deleted file mode 100755
index 46642c04..00000000
--- a/test/ovf_manifests/Win10-Liz.ovf
+++ /dev/null
@@ -1,142 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--Generated by VMware ovftool 4.1.0 (build-2982904), UTC time: 2017-02-07T13:50:15.265014Z-->
-<Envelope vmw:buildId="build-2982904" xmlns="http://schemas.dmtf.org/ovf/envelope/1" xmlns:cim="http://schemas.dmtf.org/wbem/wscim/1/common" xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1" xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" xmlns:vmw="http://www.vmware.com/schema/ovf" xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
-  <References>
-    <File ovf:href="Win10-Liz-disk1.vmdk" ovf:id="file1" ovf:size="9155243008"/>
-  </References>
-  <DiskSection>
-    <Info>Virtual disk information</Info>
-    <Disk ovf:capacity="128" ovf:capacityAllocationUnits="byte * 2^30" ovf:diskId="vmdisk1" ovf:fileRef="file1" ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized" ovf:populatedSize="16798056448"/>
-  </DiskSection>
-  <NetworkSection>
-    <Info>The list of logical networks</Info>
-    <Network ovf:name="bridged">
-      <Description>The bridged network</Description>
-    </Network>
-  </NetworkSection>
-  <VirtualSystem ovf:id="vm">
-    <Info>A virtual machine</Info>
-    <Name>Win10-Liz</Name>
-    <OperatingSystemSection ovf:id="1" vmw:osType="windows9_64Guest">
-      <Info>The kind of installed guest operating system</Info>
-    </OperatingSystemSection>
-    <VirtualHardwareSection>
-      <Info>Virtual hardware requirements</Info>
-      <System>
-        <vssd:ElementName>Virtual Hardware Family</vssd:ElementName>
-        <vssd:InstanceID>0</vssd:InstanceID>
-        <vssd:VirtualSystemIdentifier>Win10-Liz</vssd:VirtualSystemIdentifier>
-        <vssd:VirtualSystemType>vmx-11</vssd:VirtualSystemType>
-      </System>
-      <Item>
-        <rasd:AllocationUnits>hertz * 10^6</rasd:AllocationUnits>
-        <rasd:Description>Number of Virtual CPUs</rasd:Description>
-        <rasd:ElementName>4 virtual CPU(s)</rasd:ElementName>
-        <rasd:InstanceID>1</rasd:InstanceID>
-        <rasd:ResourceType>3</rasd:ResourceType>
-        <rasd:VirtualQuantity>4</rasd:VirtualQuantity>
-      </Item>
-      <Item>
-        <rasd:AllocationUnits>byte * 2^20</rasd:AllocationUnits>
-        <rasd:Description>Memory Size</rasd:Description>
-        <rasd:ElementName>6144MB of memory</rasd:ElementName>
-        <rasd:InstanceID>2</rasd:InstanceID>
-        <rasd:ResourceType>4</rasd:ResourceType>
-        <rasd:VirtualQuantity>6144</rasd:VirtualQuantity>
-      </Item>
-      <Item>
-        <rasd:Address>0</rasd:Address>
-        <rasd:Description>SATA Controller</rasd:Description>
-        <rasd:ElementName>sataController0</rasd:ElementName>
-        <rasd:InstanceID>3</rasd:InstanceID>
-        <rasd:ResourceSubType>vmware.sata.ahci</rasd:ResourceSubType>
-        <rasd:ResourceType>20</rasd:ResourceType>
-      </Item>
-      <Item ovf:required="false">
-        <rasd:Address>0</rasd:Address>
-        <rasd:Description>USB Controller (XHCI)</rasd:Description>
-        <rasd:ElementName>usb3</rasd:ElementName>
-        <rasd:InstanceID>4</rasd:InstanceID>
-        <rasd:ResourceSubType>vmware.usb.xhci</rasd:ResourceSubType>
-        <rasd:ResourceType>23</rasd:ResourceType>
-      </Item>
-      <Item ovf:required="false">
-        <rasd:Address>0</rasd:Address>
-        <rasd:Description>USB Controller (EHCI)</rasd:Description>
-        <rasd:ElementName>usb</rasd:ElementName>
-        <rasd:InstanceID>5</rasd:InstanceID>
-        <rasd:ResourceSubType>vmware.usb.ehci</rasd:ResourceSubType>
-        <rasd:ResourceType>23</rasd:ResourceType>
-        <vmw:Config ovf:required="false" vmw:key="ehciEnabled" vmw:value="true"/>
-      </Item>
-      <Item>
-        <rasd:Address>0</rasd:Address>
-        <rasd:Description>SCSI Controller</rasd:Description>
-        <rasd:ElementName>scsiController0</rasd:ElementName>
-        <rasd:InstanceID>6</rasd:InstanceID>
-        <rasd:ResourceSubType>lsilogicsas</rasd:ResourceSubType>
-        <rasd:ResourceType>6</rasd:ResourceType>
-      </Item>
-      <Item ovf:required="false">
-        <rasd:AutomaticAllocation>true</rasd:AutomaticAllocation>
-        <rasd:ElementName>serial0</rasd:ElementName>
-        <rasd:InstanceID>7</rasd:InstanceID>
-        <rasd:ResourceType>21</rasd:ResourceType>
-        <vmw:Config ovf:required="false" vmw:key="yieldOnPoll" vmw:value="false"/>
-      </Item>
-      <Item>
-        <rasd:AddressOnParent>0</rasd:AddressOnParent>
-        <rasd:ElementName>disk0</rasd:ElementName>
-        <rasd:HostResource>ovf:/disk/vmdisk1</rasd:HostResource>
-        <rasd:InstanceID>8</rasd:InstanceID>
-        <rasd:Parent>6</rasd:Parent>
-        <rasd:ResourceType>17</rasd:ResourceType>
-      </Item>
-      <Item>
-        <rasd:AddressOnParent>2</rasd:AddressOnParent>
-        <rasd:AutomaticAllocation>true</rasd:AutomaticAllocation>
-        <rasd:Connection>bridged</rasd:Connection>
-        <rasd:Description>E1000e ethernet adapter on &quot;bridged&quot;</rasd:Description>
-        <rasd:ElementName>ethernet0</rasd:ElementName>
-        <rasd:InstanceID>9</rasd:InstanceID>
-        <rasd:ResourceSubType>E1000e</rasd:ResourceSubType>
-        <rasd:ResourceType>10</rasd:ResourceType>
-        <vmw:Config ovf:required="false" vmw:key="wakeOnLanEnabled" vmw:value="false"/>
-      </Item>
-      <Item ovf:required="false">
-        <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
-        <rasd:ElementName>sound</rasd:ElementName>
-        <rasd:InstanceID>10</rasd:InstanceID>
-        <rasd:ResourceSubType>vmware.soundcard.hdaudio</rasd:ResourceSubType>
-        <rasd:ResourceType>1</rasd:ResourceType>
-      </Item>
-      <Item ovf:required="false">
-        <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
-        <rasd:ElementName>video</rasd:ElementName>
-        <rasd:InstanceID>11</rasd:InstanceID>
-        <rasd:ResourceType>24</rasd:ResourceType>
-        <vmw:Config ovf:required="false" vmw:key="enable3DSupport" vmw:value="true"/>
-      </Item>
-      <Item ovf:required="false">
-        <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
-        <rasd:ElementName>vmci</rasd:ElementName>
-        <rasd:InstanceID>12</rasd:InstanceID>
-        <rasd:ResourceSubType>vmware.vmci</rasd:ResourceSubType>
-        <rasd:ResourceType>1</rasd:ResourceType>
-      </Item>
-      <Item ovf:required="false">
-        <rasd:AddressOnParent>1</rasd:AddressOnParent>
-        <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
-        <rasd:ElementName>cdrom0</rasd:ElementName>
-        <rasd:InstanceID>13</rasd:InstanceID>
-        <rasd:Parent>3</rasd:Parent>
-        <rasd:ResourceType>15</rasd:ResourceType>
-      </Item>
-      <vmw:Config ovf:required="false" vmw:key="memoryHotAddEnabled" vmw:value="true"/>
-      <vmw:Config ovf:required="false" vmw:key="powerOpInfo.powerOffType" vmw:value="soft"/>
-      <vmw:Config ovf:required="false" vmw:key="powerOpInfo.resetType" vmw:value="soft"/>
-      <vmw:Config ovf:required="false" vmw:key="powerOpInfo.suspendType" vmw:value="soft"/>
-      <vmw:Config ovf:required="false" vmw:key="tools.syncTimeWithHost" vmw:value="false"/>
-    </VirtualHardwareSection>
-  </VirtualSystem>
-</Envelope>
\ No newline at end of file
diff --git a/test/ovf_manifests/Win10-Liz_no_default_ns.ovf b/test/ovf_manifests/Win10-Liz_no_default_ns.ovf
deleted file mode 100755
index b93540f4..00000000
--- a/test/ovf_manifests/Win10-Liz_no_default_ns.ovf
+++ /dev/null
@@ -1,142 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--Generated by VMware ovftool 4.1.0 (build-2982904), UTC time: 2017-02-07T13:50:15.265014Z-->
-<Envelope vmw:buildId="build-2982904" xmlns="http://schemas.dmtf.org/ovf/envelope/1" xmlns:cim="http://schemas.dmtf.org/wbem/wscim/1/common" xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1" xmlns:vmw="http://www.vmware.com/schema/ovf" xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
-  <References>
-    <File ovf:href="Win10-Liz-disk1.vmdk" ovf:id="file1" ovf:size="9155243008"/>
-  </References>
-  <DiskSection>
-    <Info>Virtual disk information</Info>
-    <Disk ovf:capacity="128" ovf:capacityAllocationUnits="byte * 2^30" ovf:diskId="vmdisk1" ovf:fileRef="file1" ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized" ovf:populatedSize="16798056448"/>
-  </DiskSection>
-  <NetworkSection>
-    <Info>The list of logical networks</Info>
-    <Network ovf:name="bridged">
-      <Description>The bridged network</Description>
-    </Network>
-  </NetworkSection>
-  <VirtualSystem ovf:id="vm">
-    <Info>A virtual machine</Info>
-    <Name>Win10-Liz</Name>
-    <OperatingSystemSection ovf:id="1" vmw:osType="windows9_64Guest">
-      <Info>The kind of installed guest operating system</Info>
-    </OperatingSystemSection>
-    <VirtualHardwareSection>
-      <Info>Virtual hardware requirements</Info>
-      <System>
-        <vssd:ElementName>Virtual Hardware Family</vssd:ElementName>
-        <vssd:InstanceID>0</vssd:InstanceID>
-        <vssd:VirtualSystemIdentifier>Win10-Liz</vssd:VirtualSystemIdentifier>
-        <vssd:VirtualSystemType>vmx-11</vssd:VirtualSystemType>
-      </System>
-      <Item>
-        <rasd:AllocationUnits xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">hertz * 10^6</rasd:AllocationUnits>
-        <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">Number of Virtual CPUs</rasd:Description>
-        <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">4 virtual CPU(s)</rasd:ElementName>
-        <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">1</rasd:InstanceID>
-        <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">3</rasd:ResourceType>
-        <rasd:VirtualQuantity xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">4</rasd:VirtualQuantity>
-      </Item>
-      <Item>
-        <rasd:AllocationUnits xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">byte * 2^20</rasd:AllocationUnits>
-        <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">Memory Size</rasd:Description>
-        <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">6144MB of memory</rasd:ElementName>
-        <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">2</rasd:InstanceID>
-        <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">4</rasd:ResourceType>
-        <rasd:VirtualQuantity xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">6144</rasd:VirtualQuantity>
-      </Item>
-      <Item>
-        <rasd:Address xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">0</rasd:Address>
-        <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">SATA Controller</rasd:Description>
-        <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">sataController0</rasd:ElementName>
-        <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">3</rasd:InstanceID>
-        <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">vmware.sata.ahci</rasd:ResourceSubType>
-        <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">20</rasd:ResourceType>
-      </Item>
-      <Item ovf:required="false">
-        <rasd:Address xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">0</rasd:Address>
-        <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">USB Controller (XHCI)</rasd:Description>
-        <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">usb3</rasd:ElementName>
-        <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">4</rasd:InstanceID>
-        <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">vmware.usb.xhci</rasd:ResourceSubType>
-        <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">23</rasd:ResourceType>
-      </Item>
-      <Item ovf:required="false">
-        <rasd:Address xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">0</rasd:Address>
-        <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">USB Controller (EHCI)</rasd:Description>
-        <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">usb</rasd:ElementName>
-        <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">5</rasd:InstanceID>
-        <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">vmware.usb.ehci</rasd:ResourceSubType>
-        <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">23</rasd:ResourceType>
-        <vmw:Config ovf:required="false" vmw:key="ehciEnabled" vmw:value="true"/>
-      </Item>
-      <Item>
-        <rasd:Address xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">0</rasd:Address>
-        <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">SCSI Controller</rasd:Description>
-        <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">scsiController0</rasd:ElementName>
-        <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">6</rasd:InstanceID>
-        <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">lsilogicsas</rasd:ResourceSubType>
-        <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">6</rasd:ResourceType>
-      </Item>
-      <Item ovf:required="false">
-        <rasd:AutomaticAllocation xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">true</rasd:AutomaticAllocation>
-        <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">serial0</rasd:ElementName>
-        <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">7</rasd:InstanceID>
-        <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">21</rasd:ResourceType>
-        <vmw:Config ovf:required="false" vmw:key="yieldOnPoll" vmw:value="false"/>
-      </Item>
-      <Item>
-        <rasd:AddressOnParent xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" >0</rasd:AddressOnParent>
-        <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" >disk0</rasd:ElementName>
-        <rasd:HostResource xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" >ovf:/disk/vmdisk1</rasd:HostResource>
-        <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">8</rasd:InstanceID>
-        <rasd:Parent xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">6</rasd:Parent>
-        <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">17</rasd:ResourceType>
-      </Item>
-      <Item>
-        <rasd:AddressOnParent xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">2</rasd:AddressOnParent>
-        <rasd:AutomaticAllocation xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">true</rasd:AutomaticAllocation>
-        <rasd:Connection xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">bridged</rasd:Connection>
-        <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">E1000e ethernet adapter on &quot;bridged&quot;</rasd:Description>
-        <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">ethernet0</rasd:ElementName>
-        <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">9</rasd:InstanceID>
-        <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">E1000e</rasd:ResourceSubType>
-        <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">10</rasd:ResourceType>
-        <vmw:Config ovf:required="false" vmw:key="wakeOnLanEnabled" vmw:value="false"/>
-      </Item>
-      <Item ovf:required="false">
-        <rasd:AutomaticAllocation xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">false</rasd:AutomaticAllocation>
-        <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">sound</rasd:ElementName>
-        <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">10</rasd:InstanceID>
-        <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">vmware.soundcard.hdaudio</rasd:ResourceSubType>
-        <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">1</rasd:ResourceType>
-      </Item>
-      <Item ovf:required="false">
-        <rasd:AutomaticAllocation xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">false</rasd:AutomaticAllocation>
-        <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">video</rasd:ElementName>
-        <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">11</rasd:InstanceID>
-        <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">24</rasd:ResourceType>
-        <vmw:Config ovf:required="false" vmw:key="enable3DSupport" vmw:value="true"/>
-      </Item>
-      <Item ovf:required="false">
-        <rasd:AutomaticAllocation xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">false</rasd:AutomaticAllocation>
-        <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">vmci</rasd:ElementName>
-        <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">12</rasd:InstanceID>
-        <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">vmware.vmci</rasd:ResourceSubType>
-        <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">1</rasd:ResourceType>
-      </Item>
-      <Item ovf:required="false">
-        <rasd:AddressOnParent xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">1</rasd:AddressOnParent>
-        <rasd:AutomaticAllocation xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">false</rasd:AutomaticAllocation>
-        <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">cdrom0</rasd:ElementName>
-        <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">13</rasd:InstanceID>
-        <rasd:Parent xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">3</rasd:Parent>
-        <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">15</rasd:ResourceType>
-      </Item>
-      <vmw:Config ovf:required="false" vmw:key="memoryHotAddEnabled" vmw:value="true"/>
-      <vmw:Config ovf:required="false" vmw:key="powerOpInfo.powerOffType" vmw:value="soft"/>
-      <vmw:Config ovf:required="false" vmw:key="powerOpInfo.resetType" vmw:value="soft"/>
-      <vmw:Config ovf:required="false" vmw:key="powerOpInfo.suspendType" vmw:value="soft"/>
-      <vmw:Config ovf:required="false" vmw:key="tools.syncTimeWithHost" vmw:value="false"/>
-    </VirtualHardwareSection>
-  </VirtualSystem>
-</Envelope>
diff --git a/test/ovf_manifests/Win_2008_R2_two-disks.ovf b/test/ovf_manifests/Win_2008_R2_two-disks.ovf
deleted file mode 100755
index a563aabb..00000000
--- a/test/ovf_manifests/Win_2008_R2_two-disks.ovf
+++ /dev/null
@@ -1,145 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--Generated by VMware ovftool 4.1.0 (build-2982904), UTC time: 2017-02-27T15:09:29.768974Z-->
-<Envelope vmw:buildId="build-2982904" xmlns="http://schemas.dmtf.org/ovf/envelope/1" xmlns:cim="http://schemas.dmtf.org/wbem/wscim/1/common" xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1" xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" xmlns:vmw="http://www.vmware.com/schema/ovf" xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
-  <References>
-    <File ovf:href="disk1.vmdk" ovf:id="file1" ovf:size="3481968640"/>
-    <File ovf:href="disk2.vmdk" ovf:id="file2" ovf:size="68096"/>
-  </References>
-  <DiskSection>
-    <Info>Virtual disk information</Info>
-    <Disk ovf:capacity="40" ovf:capacityAllocationUnits="byte * 2^30" ovf:diskId="vmdisk1" ovf:fileRef="file1" ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized" ovf:populatedSize="7684882432"/>
-    <Disk ovf:capacity="1" ovf:capacityAllocationUnits="byte * 2^30" ovf:diskId="vmdisk2" ovf:fileRef="file2" ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized" ovf:populatedSize="0"/>
-  </DiskSection>
-  <NetworkSection>
-    <Info>The list of logical networks</Info>
-    <Network ovf:name="bridged">
-      <Description>The bridged network</Description>
-    </Network>
-  </NetworkSection>
-  <VirtualSystem ovf:id="vm">
-    <Info>A virtual machine</Info>
-    <Name>Win_2008-R2x64</Name>
-    <OperatingSystemSection ovf:id="103" vmw:osType="windows7Server64Guest">
-      <Info>The kind of installed guest operating system</Info>
-    </OperatingSystemSection>
-    <VirtualHardwareSection>
-      <Info>Virtual hardware requirements</Info>
-      <System>
-        <vssd:ElementName>Virtual Hardware Family</vssd:ElementName>
-        <vssd:InstanceID>0</vssd:InstanceID>
-        <vssd:VirtualSystemIdentifier>Win_2008-R2x64</vssd:VirtualSystemIdentifier>
-        <vssd:VirtualSystemType>vmx-11</vssd:VirtualSystemType>
-      </System>
-      <Item>
-        <rasd:AllocationUnits>hertz * 10^6</rasd:AllocationUnits>
-        <rasd:Description>Number of Virtual CPUs</rasd:Description>
-        <rasd:ElementName>1 virtual CPU(s)</rasd:ElementName>
-        <rasd:InstanceID>1</rasd:InstanceID>
-        <rasd:ResourceType>3</rasd:ResourceType>
-        <rasd:VirtualQuantity>1</rasd:VirtualQuantity>
-      </Item>
-      <Item>
-        <rasd:AllocationUnits>byte * 2^20</rasd:AllocationUnits>
-        <rasd:Description>Memory Size</rasd:Description>
-        <rasd:ElementName>2048MB of memory</rasd:ElementName>
-        <rasd:InstanceID>2</rasd:InstanceID>
-        <rasd:ResourceType>4</rasd:ResourceType>
-        <rasd:VirtualQuantity>2048</rasd:VirtualQuantity>
-      </Item>
-      <Item>
-        <rasd:Address>0</rasd:Address>
-        <rasd:Description>SATA Controller</rasd:Description>
-        <rasd:ElementName>sataController0</rasd:ElementName>
-        <rasd:InstanceID>3</rasd:InstanceID>
-        <rasd:ResourceSubType>vmware.sata.ahci</rasd:ResourceSubType>
-        <rasd:ResourceType>20</rasd:ResourceType>
-      </Item>
-      <Item ovf:required="false">
-        <rasd:Address>0</rasd:Address>
-        <rasd:Description>USB Controller (EHCI)</rasd:Description>
-        <rasd:ElementName>usb</rasd:ElementName>
-        <rasd:InstanceID>4</rasd:InstanceID>
-        <rasd:ResourceSubType>vmware.usb.ehci</rasd:ResourceSubType>
-        <rasd:ResourceType>23</rasd:ResourceType>
-        <vmw:Config ovf:required="false" vmw:key="ehciEnabled" vmw:value="true"/>
-      </Item>
-      <Item>
-        <rasd:Address>0</rasd:Address>
-        <rasd:Description>SCSI Controller</rasd:Description>
-        <rasd:ElementName>scsiController0</rasd:ElementName>
-        <rasd:InstanceID>5</rasd:InstanceID>
-        <rasd:ResourceSubType>lsilogicsas</rasd:ResourceSubType>
-        <rasd:ResourceType>6</rasd:ResourceType>
-      </Item>
-      <Item ovf:required="false">
-        <rasd:AutomaticAllocation>true</rasd:AutomaticAllocation>
-        <rasd:ElementName>serial0</rasd:ElementName>
-        <rasd:InstanceID>6</rasd:InstanceID>
-        <rasd:ResourceType>21</rasd:ResourceType>
-        <vmw:Config ovf:required="false" vmw:key="yieldOnPoll" vmw:value="false"/>
-      </Item>
-      <Item>
-        <rasd:AddressOnParent>0</rasd:AddressOnParent>
-        <rasd:ElementName>disk0</rasd:ElementName>
-        <rasd:HostResource>ovf:/disk/vmdisk1</rasd:HostResource>
-        <rasd:InstanceID>7</rasd:InstanceID>
-        <rasd:Parent>5</rasd:Parent>
-        <rasd:ResourceType>17</rasd:ResourceType>
-      </Item>
-      <Item>
-        <rasd:AddressOnParent>1</rasd:AddressOnParent>
-        <rasd:ElementName>disk1</rasd:ElementName>
-        <rasd:HostResource>ovf:/disk/vmdisk2</rasd:HostResource>
-        <rasd:InstanceID>8</rasd:InstanceID>
-        <rasd:Parent>5</rasd:Parent>
-        <rasd:ResourceType>17</rasd:ResourceType>
-      </Item>
-      <Item>
-        <rasd:AddressOnParent>2</rasd:AddressOnParent>
-        <rasd:AutomaticAllocation>true</rasd:AutomaticAllocation>
-        <rasd:Connection>bridged</rasd:Connection>
-        <rasd:Description>E1000 ethernet adapter on &quot;bridged&quot;</rasd:Description>
-        <rasd:ElementName>ethernet0</rasd:ElementName>
-        <rasd:InstanceID>9</rasd:InstanceID>
-        <rasd:ResourceSubType>E1000</rasd:ResourceSubType>
-        <rasd:ResourceType>10</rasd:ResourceType>
-        <vmw:Config ovf:required="false" vmw:key="wakeOnLanEnabled" vmw:value="false"/>
-      </Item>
-      <Item ovf:required="false">
-        <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
-        <rasd:ElementName>sound</rasd:ElementName>
-        <rasd:InstanceID>10</rasd:InstanceID>
-        <rasd:ResourceSubType>vmware.soundcard.hdaudio</rasd:ResourceSubType>
-        <rasd:ResourceType>1</rasd:ResourceType>
-      </Item>
-      <Item ovf:required="false">
-        <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
-        <rasd:ElementName>video</rasd:ElementName>
-        <rasd:InstanceID>11</rasd:InstanceID>
-        <rasd:ResourceType>24</rasd:ResourceType>
-        <vmw:Config ovf:required="false" vmw:key="enable3DSupport" vmw:value="true"/>
-      </Item>
-      <Item ovf:required="false">
-        <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
-        <rasd:ElementName>vmci</rasd:ElementName>
-        <rasd:InstanceID>12</rasd:InstanceID>
-        <rasd:ResourceSubType>vmware.vmci</rasd:ResourceSubType>
-        <rasd:ResourceType>1</rasd:ResourceType>
-      </Item>
-      <Item ovf:required="false">
-        <rasd:AddressOnParent>1</rasd:AddressOnParent>
-        <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
-        <rasd:ElementName>cdrom0</rasd:ElementName>
-        <rasd:InstanceID>13</rasd:InstanceID>
-        <rasd:Parent>3</rasd:Parent>
-        <rasd:ResourceType>15</rasd:ResourceType>
-      </Item>
-      <vmw:Config ovf:required="false" vmw:key="cpuHotAddEnabled" vmw:value="true"/>
-      <vmw:Config ovf:required="false" vmw:key="memoryHotAddEnabled" vmw:value="true"/>
-      <vmw:Config ovf:required="false" vmw:key="powerOpInfo.powerOffType" vmw:value="soft"/>
-      <vmw:Config ovf:required="false" vmw:key="powerOpInfo.resetType" vmw:value="soft"/>
-      <vmw:Config ovf:required="false" vmw:key="powerOpInfo.suspendType" vmw:value="soft"/>
-      <vmw:Config ovf:required="false" vmw:key="tools.syncTimeWithHost" vmw:value="false"/>
-    </VirtualHardwareSection>
-  </VirtualSystem>
-</Envelope>
diff --git a/test/ovf_manifests/disk1.vmdk b/test/ovf_manifests/disk1.vmdk
deleted file mode 100644
index 8660602343a1a955f9bcf2e6beaed99316dd8167..0000000000000000000000000000000000000000
GIT binary patch
literal 0
HcmV?d00001

literal 65536
zcmeIvy>HV%7zbeUF`dK)46s<q9ua7|WuUkSgpg2EwX+*viPe0`HWAtSr(-ASlAWQ|
zbJF?F{+-YFKK_yYyn2=-$&0qXY<t)4ch@B8o_Fo_en^t%N%H0}e|H$~AF`0X3J-JR
zqY>z*Sy|tuS*)j3xo%d~*K!`iCRTO1T8@X|%lB-2b2}Q2@{gmi&a1d=x<|K%7N%9q
zn|Qfh$8m45TCV10Gb^W)c4ZxVA@tMpzfJp2S{y#m?iwzx)01@a>+{9rJna?j=ZAyM
zqPW{FznsOxiSi~-&+<BkewLkuP!u<VO<6U6^7*&xtNr=XaoRiS?V{gtwTMl%9Za|L
za#^%_7k)SjXE85!!SM7bspGUQewUqo+Glx@ubWtPwRL-yMO)CL`L7O2fB*pk1PBly
zK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pkPg~&a(=JbS1PBlyK!5-N0!ISx
zkM7+PAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+
z009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBly
zK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF
z5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk
d1PBlyK!5-N0t5&UAV7cs0RjXF5cr=0{{Ra(Wheju

diff --git a/test/ovf_manifests/disk2.vmdk b/test/ovf_manifests/disk2.vmdk
deleted file mode 100644
index c4634513348b392202898374f1c8d2d51d565b27..0000000000000000000000000000000000000000
GIT binary patch
literal 0
HcmV?d00001

literal 65536
zcmeIvy>HV%7zbeUF`dK)46s<q9#IIDI%J@v2!xPOQ?;`jUy0Rx$u<$$`lr`+(j_}X
ztLLQio&7tX?|uAp{Oj^rk|Zyh{<7(9yX&q=(mrq7>)ntf&y(cMe*SJh-aTX?eH9+&
z#z!O2Psc@dn~q~OEsJ%%D!&!;7&fu2iq&#-6u$l#k8ZBB&%=0f64qH6mv#5(X4k^B
zj9DEow(B_REmq6byr^fzbkeM>VlRY#diJkw-bwTQ2bx{O`BgehC%?a(PtMX_-hBS!
zV6(_?yX6<NxIa-=XX$BH#n2y*PeaJ_>%pcd>%ZCj`_<*{eCa6d4SQYmC$1K;F1Lf}
zc3v#=CU3(J2jMJcc^4cVA0$<rHpO?@@uyvu<=MK9Wm{XjSCKabJ(~aOpacjIAV7cs
z0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAn>#W-ahT}R7ZdS0RjXF5Fl_M
z@c!W5Edc@q2oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk
z1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs
z0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZ
zfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&U
eAV7cs0RjXF5FkK+009C72oNAZfB=F2DR2*l=VfOA

diff --git a/test/run_ovf_tests.pl b/test/run_ovf_tests.pl
deleted file mode 100755
index ff6c7863..00000000
--- a/test/run_ovf_tests.pl
+++ /dev/null
@@ -1,71 +0,0 @@
-#!/usr/bin/perl
-
-use strict;
-use warnings;
-use lib qw(..); # prepend .. to @INC so we use the local version of PVE packages
-
-use FindBin '$Bin';
-use PVE::QemuServer::OVF;
-use Test::More;
-
-use Data::Dumper;
-
-my $test_manifests = join ('/', $Bin, 'ovf_manifests');
-
-print "parsing ovfs\n";
-
-my $win2008 = eval { PVE::QemuServer::OVF::parse_ovf("$test_manifests/Win_2008_R2_two-disks.ovf") };
-if (my $err = $@) {
-    fail('parse win2008');
-    warn("error: $err\n");
-} else {
-    ok('parse win2008');
-}
-my $win10 = eval { PVE::QemuServer::OVF::parse_ovf("$test_manifests/Win10-Liz.ovf") };
-if (my $err = $@) {
-    fail('parse win10');
-    warn("error: $err\n");
-} else {
-    ok('parse win10');
-}
-my $win10noNs = eval { PVE::QemuServer::OVF::parse_ovf("$test_manifests/Win10-Liz_no_default_ns.ovf") };
-if (my $err = $@) {
-    fail("parse win10 no default rasd NS");
-    warn("error: $err\n");
-} else {
-    ok('parse win10 no default rasd NS');
-}
-
-print "testing disks\n";
-
-is($win2008->{disks}->[0]->{disk_address}, 'scsi0', 'multidisk vm has the correct first disk controller');
-is($win2008->{disks}->[0]->{backing_file}, "$test_manifests/disk1.vmdk", 'multidisk vm has the correct first disk backing device');
-is($win2008->{disks}->[0]->{virtual_size}, 2048, 'multidisk vm has the correct first disk size');
-
-is($win2008->{disks}->[1]->{disk_address}, 'scsi1', 'multidisk vm has the correct second disk controller');
-is($win2008->{disks}->[1]->{backing_file}, "$test_manifests/disk2.vmdk", 'multidisk vm has the correct second disk backing device');
-is($win2008->{disks}->[1]->{virtual_size}, 2048, 'multidisk vm has the correct second disk size');
-
-is($win10->{disks}->[0]->{disk_address}, 'scsi0', 'single disk vm has the correct disk controller');
-is($win10->{disks}->[0]->{backing_file}, "$test_manifests/Win10-Liz-disk1.vmdk", 'single disk vm has the correct disk backing device');
-is($win10->{disks}->[0]->{virtual_size}, 2048, 'single disk vm has the correct size');
-
-is($win10noNs->{disks}->[0]->{disk_address}, 'scsi0', 'single disk vm (no default rasd NS) has the correct disk controller');
-is($win10noNs->{disks}->[0]->{backing_file}, "$test_manifests/Win10-Liz-disk1.vmdk", 'single disk vm (no default rasd NS) has the correct disk backing device');
-is($win10noNs->{disks}->[0]->{virtual_size}, 2048, 'single disk vm (no default rasd NS) has the correct size');
-
-print "\ntesting vm.conf extraction\n";
-
-is($win2008->{qm}->{name}, 'Win2008-R2x64', 'win2008 VM name is correct');
-is($win2008->{qm}->{memory}, '2048', 'win2008 VM memory is correct');
-is($win2008->{qm}->{cores}, '1', 'win2008 VM cores are correct');
-
-is($win10->{qm}->{name}, 'Win10-Liz', 'win10 VM name is correct');
-is($win10->{qm}->{memory}, '6144', 'win10 VM memory is correct');
-is($win10->{qm}->{cores}, '4', 'win10 VM cores are correct');
-
-is($win10noNs->{qm}->{name}, 'Win10-Liz', 'win10 VM (no default rasd NS) name is correct');
-is($win10noNs->{qm}->{memory}, '6144', 'win10 VM (no default rasd NS) memory is correct');
-is($win10noNs->{qm}->{cores}, '4', 'win10 VM (no default rasd NS) cores are correct');
-
-done_testing();
-- 
2.39.2





^ permalink raw reply	[flat|nested] 67+ messages in thread

* [pve-devel] [PATCH qemu-server 3/3] api: create: implement extracting disks when needed for import-from
  2024-04-16 13:18 [pve-devel] [PATCH storage/qemu-server/pve-manager] implement ova/ovf import for directory type storages Dominik Csapak
                   ` (10 preceding siblings ...)
  2024-04-16 13:19 ` [pve-devel] [PATCH qemu-server 2/3] use OVF from Storage Dominik Csapak
@ 2024-04-16 13:19 ` Dominik Csapak
  2024-04-18  9:41   ` Fiona Ebner
  2024-04-16 13:19 ` [pve-devel] [PATCH manager 1/4] ui: fix special 'import' icon for non-esxi storages Dominik Csapak
                   ` (5 subsequent siblings)
  17 siblings, 1 reply; 67+ messages in thread
From: Dominik Csapak @ 2024-04-16 13:19 UTC (permalink / raw)
  To: pve-devel

when 'import-from' contains a disk image that needs extraction
(currently only from an 'ova' archive), do that in 'create_disks'
and overwrite the '$source' volid.

Collect the names into a 'delete_sources' list, that we use later
to clean it up again (either when we're finished with importing or in an
error case).

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 PVE/API2/Qemu.pm          | 26 ++++++++++++++++++++------
 PVE/QemuServer.pm         |  5 ++++-
 PVE/QemuServer/Helpers.pm |  9 +++++++++
 3 files changed, 33 insertions(+), 7 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index f3ce83d6..afdb507f 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -161,8 +161,8 @@ my $check_storage_access = sub {
 	    my $src_vmid;
 	    if (PVE::Storage::parse_volume_id($src_image, 1)) { # PVE-managed volume
 		(my $vtype, undef, $src_vmid) = PVE::Storage::parse_volname($storecfg, $src_image);
-		raise_param_exc({ $ds => "$src_image has wrong type '$vtype' - not an image" })
-		    if $vtype ne 'images';
+		raise_param_exc({ $ds => "$src_image has wrong type '$vtype' - needs to be 'images' or 'import'" })
+		    if $vtype ne 'images' && $vtype ne 'import';
 	    }
 
 	    if ($src_vmid) { # might be actively used by VM and will be copied via clone_disk()
@@ -335,6 +335,7 @@ my sub create_disks : prototype($$$$$$$$$$) {
     my $res = {};
 
     my $live_import_mapping = {};
+    my $delete_sources = [];
 
     my $code = sub {
 	my ($ds, $disk) = @_;
@@ -391,6 +392,13 @@ my sub create_disks : prototype($$$$$$$$$$) {
 
 		$needs_creation = $live_import;
 
+		if (PVE::Storage::copy_needs_extraction($source)) { # needs extraction beforehand
+		    print "extracting $source\n";
+		    $source = PVE::Storage::extract_disk_from_import_file($source, $vmid);
+		    print "finished extracting to $source\n";
+		    push @$delete_sources, $source;
+		}
+
 		if (PVE::Storage::parse_volume_id($source, 1)) { # PVE-managed volume
 		    if ($live_import && $ds ne 'efidisk0') {
 			my $path = PVE::Storage::path($storecfg, $source)
@@ -514,13 +522,14 @@ my sub create_disks : prototype($$$$$$$$$$) {
 	    eval { PVE::Storage::vdisk_free($storecfg, $volid); };
 	    warn $@ if $@;
 	}
+	PVE::QemuServer::Helpers::cleanup_extracted_images($delete_sources);
 	die $err;
     }
 
     # don't return empty import mappings
     $live_import_mapping = undef if !%$live_import_mapping;
 
-    return ($vollist, $res, $live_import_mapping);
+    return ($vollist, $res, $live_import_mapping, $delete_sources);
 };
 
 my $check_cpu_model_access = sub {
@@ -1079,6 +1088,7 @@ __PACKAGE__->register_method({
 
 	my $createfn = sub {
 	    my $live_import_mapping = {};
+	    my $delete_sources = [];
 
 	    # ensure no old replication state are exists
 	    PVE::ReplicationState::delete_guest_states($vmid);
@@ -1096,7 +1106,7 @@ __PACKAGE__->register_method({
 
 		my $vollist = [];
 		eval {
-		    ($vollist, my $created_opts, $live_import_mapping) = create_disks(
+		    ($vollist, my $created_opts, $live_import_mapping, $delete_sources) = create_disks(
 			$rpcenv,
 			$authuser,
 			$conf,
@@ -1148,6 +1158,7 @@ __PACKAGE__->register_method({
 			eval { PVE::Storage::vdisk_free($storecfg, $volid); };
 			warn $@ if $@;
 		    }
+		    PVE::QemuServer::Helpers::cleanup_extracted_images($delete_sources);
 		    die "$emsg $err";
 		}
 
@@ -1164,7 +1175,7 @@ __PACKAGE__->register_method({
 		warn $@ if $@;
 		return;
 	    } else {
-		return $live_import_mapping;
+		return ($live_import_mapping, $delete_sources);
 	    }
 	};
 
@@ -1191,7 +1202,7 @@ __PACKAGE__->register_method({
 	    $code = sub {
 		# If a live import was requested the create function returns
 		# the mapping for the startup.
-		my $live_import_mapping = eval { $createfn->() };
+		my ($live_import_mapping, $delete_sources) = eval { $createfn->() };
 		if (my $err = $@) {
 		    eval {
 			my $conffile = PVE::QemuConfig->config_file($vmid);
@@ -1213,7 +1224,10 @@ __PACKAGE__->register_method({
 			$vmid,
 			$conf,
 			$import_options,
+			$delete_sources,
 		    );
+		} else {
+		    PVE::QemuServer::Helpers::cleanup_extracted_images($delete_sources);
 		}
 	    };
 	}
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index abe175a4..01133f39 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -7292,7 +7292,7 @@ sub pbs_live_restore {
 # therefore already handled in the `$create_disks()` call happening in the
 # `create` api call
 sub live_import_from_files {
-    my ($mapping, $vmid, $conf, $restore_options) = @_;
+    my ($mapping, $vmid, $conf, $restore_options, $delete_sources) = @_;
 
     my $live_restore_backing = {};
     for my $dev (keys %$mapping) {
@@ -7353,6 +7353,8 @@ sub live_import_from_files {
 	    mon_cmd($vmid, 'blockdev-del', 'node-name' => "drive-$ds-restore");
 	}
 
+	PVE::QemuServer::Helpers::cleanup_extracted_images($delete_sources);
+
 	close($qmeventd_fd);
     };
 
@@ -7361,6 +7363,7 @@ sub live_import_from_files {
     if ($err) {
 	warn "An error occurred during live-restore: $err\n";
 	_do_vm_stop($storecfg, $vmid, 1, 1, 10, 0, 1);
+	PVE::QemuServer::Helpers::cleanup_extracted_images($delete_sources);
 	die "live-restore failed\n";
     }
 
diff --git a/PVE/QemuServer/Helpers.pm b/PVE/QemuServer/Helpers.pm
index 0afb6317..40b90a6e 100644
--- a/PVE/QemuServer/Helpers.pm
+++ b/PVE/QemuServer/Helpers.pm
@@ -225,4 +225,13 @@ sub windows_version {
     return $winversion;
 }
 
+sub cleanup_extracted_images {
+    my ($delete_sources) = @_;
+
+    for my $source (@$delete_sources) {
+	eval { PVE::Storage::cleanup_extracted_image($source) };
+	warn $@ if $@;
+    }
+}
+
 1;
-- 
2.39.2





^ permalink raw reply	[flat|nested] 67+ messages in thread

* [pve-devel] [PATCH manager 1/4] ui: fix special 'import' icon for non-esxi storages
  2024-04-16 13:18 [pve-devel] [PATCH storage/qemu-server/pve-manager] implement ova/ovf import for directory type storages Dominik Csapak
                   ` (11 preceding siblings ...)
  2024-04-16 13:19 ` [pve-devel] [PATCH qemu-server 3/3] api: create: implement extracting disks when needed for import-from Dominik Csapak
@ 2024-04-16 13:19 ` Dominik Csapak
  2024-04-16 13:19 ` [pve-devel] [PATCH manager 2/4] ui: guest import: add ova-needs-extracting warning text Dominik Csapak
                   ` (4 subsequent siblings)
  17 siblings, 0 replies; 67+ messages in thread
From: Dominik Csapak @ 2024-04-16 13:19 UTC (permalink / raw)
  To: pve-devel

we only want to show that icon in the tree when the storage is solely
used for importing, not when it's just one of several content types.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 www/manager6/Utils.js | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index 287d651a..782bb59f 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -1244,7 +1244,7 @@ Ext.define('PVE.Utils', {
 	    // templates
 	    objType = 'template';
 	    status = type;
-	} else if (type === 'storage' && record.content.indexOf('import') !== -1) {
+	} else if (type === 'storage' && record.content === 'import') {
 	    return 'fa fa-cloud-download';
 	} else {
 	    // everything else
-- 
2.39.2





^ permalink raw reply	[flat|nested] 67+ messages in thread

* [pve-devel] [PATCH manager 2/4] ui: guest import: add ova-needs-extracting warning text
  2024-04-16 13:18 [pve-devel] [PATCH storage/qemu-server/pve-manager] implement ova/ovf import for directory type storages Dominik Csapak
                   ` (12 preceding siblings ...)
  2024-04-16 13:19 ` [pve-devel] [PATCH manager 1/4] ui: fix special 'import' icon for non-esxi storages Dominik Csapak
@ 2024-04-16 13:19 ` Dominik Csapak
  2024-04-16 13:19 ` [pve-devel] [PATCH manager 3/4] ui: enable import content type for relevant storages Dominik Csapak
                   ` (3 subsequent siblings)
  17 siblings, 0 replies; 67+ messages in thread
From: Dominik Csapak @ 2024-04-16 13:19 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 www/manager6/window/GuestImport.js | 1 +
 1 file changed, 1 insertion(+)

diff --git a/www/manager6/window/GuestImport.js b/www/manager6/window/GuestImport.js
index 944d275b..ad28b616 100644
--- a/www/manager6/window/GuestImport.js
+++ b/www/manager6/window/GuestImport.js
@@ -930,6 +930,7 @@ Ext.define('PVE.window.GuestImport', {
 		    gettext('EFI state cannot be imported, you may need to reconfigure the boot order (see {0})'),
 		    '<a href="https://pve.proxmox.com/wiki/OVMF/UEFI_Boot_Entries">OVMF/UEFI Boot Entries</a>',
 		),
+		'ova-needs-extracting': gettext('Importing from an OVA requires extracting the contained disks into the import storage.'),
 	    };
             let message = warningsCatalogue[w.type];
 	    if (!w.type || !message) {
-- 
2.39.2





^ permalink raw reply	[flat|nested] 67+ messages in thread

* [pve-devel] [PATCH manager 3/4] ui: enable import content type for relevant storages
  2024-04-16 13:18 [pve-devel] [PATCH storage/qemu-server/pve-manager] implement ova/ovf import for directory type storages Dominik Csapak
                   ` (13 preceding siblings ...)
  2024-04-16 13:19 ` [pve-devel] [PATCH manager 2/4] ui: guest import: add ova-needs-extracting warning text Dominik Csapak
@ 2024-04-16 13:19 ` Dominik Csapak
  2024-04-16 13:19 ` [pve-devel] [PATCH manager 4/4] ui: enable upload/download buttons for 'import' type storages Dominik Csapak
                   ` (2 subsequent siblings)
  17 siblings, 0 replies; 67+ messages in thread
From: Dominik Csapak @ 2024-04-16 13:19 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 www/manager6/Utils.js                    | 1 +
 www/manager6/form/ContentTypeSelector.js | 2 +-
 www/manager6/storage/CephFSEdit.js       | 2 +-
 3 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index 782bb59f..f30c9586 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -690,6 +690,7 @@ Ext.define('PVE.Utils', {
 	'iso': gettext('ISO image'),
 	'rootdir': gettext('Container'),
 	'snippets': gettext('Snippets'),
+	'import': gettext('Import'),
     },
 
     volume_is_qemu_backup: function(volid, format) {
diff --git a/www/manager6/form/ContentTypeSelector.js b/www/manager6/form/ContentTypeSelector.js
index d0fa0b08..431bd948 100644
--- a/www/manager6/form/ContentTypeSelector.js
+++ b/www/manager6/form/ContentTypeSelector.js
@@ -10,7 +10,7 @@ Ext.define('PVE.form.ContentTypeSelector', {
 	me.comboItems = [];
 
 	if (me.cts === undefined) {
-	    me.cts = ['images', 'iso', 'vztmpl', 'backup', 'rootdir', 'snippets'];
+	    me.cts = ['images', 'iso', 'vztmpl', 'backup', 'rootdir', 'snippets', 'import'];
 	}
 
 	Ext.Array.each(me.cts, function(ct) {
diff --git a/www/manager6/storage/CephFSEdit.js b/www/manager6/storage/CephFSEdit.js
index 6a95a00a..2cdcf7cd 100644
--- a/www/manager6/storage/CephFSEdit.js
+++ b/www/manager6/storage/CephFSEdit.js
@@ -92,7 +92,7 @@ Ext.define('PVE.storage.CephFSInputPanel', {
 	me.column2 = [
 	    {
 		xtype: 'pveContentTypeSelector',
-		cts: ['backup', 'iso', 'vztmpl', 'snippets'],
+		cts: ['backup', 'iso', 'vztmpl', 'snippets', 'import'],
 		fieldLabel: gettext('Content'),
 		name: 'content',
 		value: 'backup',
-- 
2.39.2





^ permalink raw reply	[flat|nested] 67+ messages in thread

* [pve-devel] [PATCH manager 4/4] ui: enable upload/download buttons for 'import' type storages
  2024-04-16 13:18 [pve-devel] [PATCH storage/qemu-server/pve-manager] implement ova/ovf import for directory type storages Dominik Csapak
                   ` (14 preceding siblings ...)
  2024-04-16 13:19 ` [pve-devel] [PATCH manager 3/4] ui: enable import content type for relevant storages Dominik Csapak
@ 2024-04-16 13:19 ` Dominik Csapak
  2024-04-17 12:37   ` Fabian Grünbichler
  2024-04-18 11:20   ` Fiona Ebner
  2024-04-17 13:11 ` [pve-devel] [PATCH storage/qemu-server/pve-manager] implement ova/ovf import for directory " Fabian Grünbichler
  2024-04-18  9:27 ` Dominik Csapak
  17 siblings, 2 replies; 67+ messages in thread
From: Dominik Csapak @ 2024-04-16 13:19 UTC (permalink / raw)
  To: pve-devel

but only for non esxi ones, since that does not allow
uploading/downloading there

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 www/manager6/storage/Browser.js        | 7 ++++++-
 www/manager6/window/UploadToStorage.js | 1 +
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/www/manager6/storage/Browser.js b/www/manager6/storage/Browser.js
index 2123141d..77d106c1 100644
--- a/www/manager6/storage/Browser.js
+++ b/www/manager6/storage/Browser.js
@@ -28,7 +28,9 @@ Ext.define('PVE.storage.Browser', {
 	let res = storageInfo.data;
 	let plugin = res.plugintype;
 
-	me.items = plugin !== 'esxi' ? [
+	let isEsxi = plugin === 'esxi';
+
+	me.items = !isEsxi ? [
 	    {
 		title: gettext('Summary'),
 		xtype: 'pveStorageSummary',
@@ -144,6 +146,9 @@ Ext.define('PVE.storage.Browser', {
 		    content: 'import',
 		    useCustomRemoveButton: true, // hide default remove button
 		    showColumns: ['name', 'format'],
+		    enableUploadButton: enableUpload && !isEsxi,
+		    enableDownloadUrlButton: enableDownloadUrl && !isEsxi,
+		    useUploadButton: !isEsxi,
 		    itemdblclick: (view, record) => createGuestImportWindow(record),
 		    tbar: [
 			{
diff --git a/www/manager6/window/UploadToStorage.js b/www/manager6/window/UploadToStorage.js
index 3c5bba88..79a6e8a6 100644
--- a/www/manager6/window/UploadToStorage.js
+++ b/www/manager6/window/UploadToStorage.js
@@ -11,6 +11,7 @@ Ext.define('PVE.window.UploadToStorage', {
     acceptedExtensions: {
 	iso: ['.img', '.iso'],
 	vztmpl: ['.tar.gz', '.tar.xz', '.tar.zst'],
+	'import': ['ova'],
     },
 
     cbindData: function(initialConfig) {
-- 
2.39.2





^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH storage 1/9] copy OVF.pm from qemu-server
  2024-04-16 13:18 ` [pve-devel] [PATCH storage 1/9] copy OVF.pm from qemu-server Dominik Csapak
@ 2024-04-16 15:02   ` Thomas Lamprecht
  2024-04-17  9:19     ` Fiona Ebner
  0 siblings, 1 reply; 67+ messages in thread
From: Thomas Lamprecht @ 2024-04-16 15:02 UTC (permalink / raw)
  To: Proxmox VE development discussion, Dominik Csapak

Am 16/04/2024 um 15:18 schrieb Dominik Csapak:
> copies the OVF.pm and relevant ovf tests from qemu-server.
> We need it here, and it uses PVE::Storage already, and since there is no
> intermediary package/repository we could put it, it seems fitting in
> here.
> 
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
>  src/PVE/Storage/Makefile                      |   1 +
>  src/PVE/Storage/OVF.pm                        | 242 ++++++++++++++++++
>  src/test/Makefile                             |   5 +-
>  src/test/ovf_manifests/Win10-Liz-disk1.vmdk   | Bin 0 -> 65536 bytes
>  src/test/ovf_manifests/Win10-Liz.ovf          | 142 ++++++++++
>  .../ovf_manifests/Win10-Liz_no_default_ns.ovf | 142 ++++++++++
>  .../ovf_manifests/Win_2008_R2_two-disks.ovf   | 145 +++++++++++
>  src/test/ovf_manifests/disk1.vmdk             | Bin 0 -> 65536 bytes
>  src/test/ovf_manifests/disk2.vmdk             | Bin 0 -> 65536 bytes
>  src/test/run_ovf_tests.pl                     |  71 +++++
>  10 files changed, 747 insertions(+), 1 deletion(-)
>  create mode 100644 src/PVE/Storage/OVF.pm
>  create mode 100644 src/test/ovf_manifests/Win10-Liz-disk1.vmdk
>  create mode 100755 src/test/ovf_manifests/Win10-Liz.ovf
>  create mode 100755 src/test/ovf_manifests/Win10-Liz_no_default_ns.ovf
>  create mode 100755 src/test/ovf_manifests/Win_2008_R2_two-disks.ovf
>  create mode 100644 src/test/ovf_manifests/disk1.vmdk
>  create mode 100644 src/test/ovf_manifests/disk2.vmdk
>  create mode 100755 src/test/run_ovf_tests.pl
> 
> diff --git a/src/PVE/Storage/Makefile b/src/PVE/Storage/Makefile
> index d5cc942..2daa0da 100644
> --- a/src/PVE/Storage/Makefile
> +++ b/src/PVE/Storage/Makefile
> @@ -14,6 +14,7 @@ SOURCES= \
>  	PBSPlugin.pm \
>  	BTRFSPlugin.pm \
>  	LvmThinPlugin.pm \
> +	OVF.pm \
>  	ESXiPlugin.pm
>  
>  .PHONY: install
> diff --git a/src/PVE/Storage/OVF.pm b/src/PVE/Storage/OVF.pm
> new file mode 100644
> index 0000000..90ca453
> --- /dev/null
> +++ b/src/PVE/Storage/OVF.pm
> @@ -0,0 +1,242 @@
> +# Open Virtualization Format import routines
> +# https://www.dmtf.org/standards/ovf
> +package PVE::Storage::OVF;
> +


high-level nit: this, and most of the ESXi one, should go into another module
name space, e.g. PVE::GuestImport:: (or if that's to long, or we really are sure
that other stuff can be imported (I doubt it), then just PVE::Import might be
fine too).




^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH storage 1/9] copy OVF.pm from qemu-server
  2024-04-16 15:02   ` Thomas Lamprecht
@ 2024-04-17  9:19     ` Fiona Ebner
  2024-04-17  9:26       ` Thomas Lamprecht
  0 siblings, 1 reply; 67+ messages in thread
From: Fiona Ebner @ 2024-04-17  9:19 UTC (permalink / raw)
  To: Proxmox VE development discussion, Thomas Lamprecht, Dominik Csapak

Am 16.04.24 um 17:02 schrieb Thomas Lamprecht:
> Am 16/04/2024 um 15:18 schrieb Dominik Csapak:
>> copies the OVF.pm and relevant ovf tests from qemu-server.
>> We need it here, and it uses PVE::Storage already, and since there is no
>> intermediary package/repository we could put it, it seems fitting in
>> here.
>>
>> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>

Except for the location of the module:

Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
> 
> high-level nit: this, and most of the ESXi one, should go into another module
> name space, e.g. PVE::GuestImport:: (or if that's to long, or we really are sure
> that other stuff can be imported (I doubt it), then just PVE::Import might be
> fine too).
> 

Hmm, ESXiPlugin.pm is a storage plugin, so it does fit. But no
objections to moving it from my side either. And fully agree that OVF.pm
should live somewhere else, it is not a storage plugin.




^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH storage 1/9] copy OVF.pm from qemu-server
  2024-04-17  9:19     ` Fiona Ebner
@ 2024-04-17  9:26       ` Thomas Lamprecht
  0 siblings, 0 replies; 67+ messages in thread
From: Thomas Lamprecht @ 2024-04-17  9:26 UTC (permalink / raw)
  To: Fiona Ebner, Proxmox VE development discussion, Dominik Csapak

Am 17/04/2024 um 11:19 schrieb Fiona Ebner:
> Am 16.04.24 um 17:02 schrieb Thomas Lamprecht:
>> high-level nit: this, and most of the ESXi one, should go into another module
>> name space, e.g. PVE::GuestImport:: (or if that's to long, or we really are sure
>> that other stuff can be imported (I doubt it), then just PVE::Import might be
>> fine too).
>>
> 
> Hmm, ESXiPlugin.pm is a storage plugin, so it does fit. But no

Yes, ESXiPlugin _is_ a storage plugin, and it must stay there, but about
80% of it's code is not related to being a storage plugin but for importing
only, parts of it might be even shareable with other such import related
stuff. So what I meant with "**most** of the ESXi one" is that I'd separate
these parts from the storage plugin specific code, not moving it completely.

> objections to moving it from my side either. And fully agree that OVF.pm
> should live somewhere else, it is not a storage plugin.





^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH storage 2/9] plugin: dir: implement import content type
  2024-04-16 13:18 ` [pve-devel] [PATCH storage 2/9] plugin: dir: implement import content type Dominik Csapak
@ 2024-04-17 10:07   ` Fiona Ebner
  2024-04-17 10:07     ` Fiona Ebner
  2024-04-17 13:13     ` Dominik Csapak
  2024-04-17 12:46   ` Fabian Grünbichler
  1 sibling, 2 replies; 67+ messages in thread
From: Fiona Ebner @ 2024-04-17 10:07 UTC (permalink / raw)
  To: Proxmox VE development discussion, Dominik Csapak

Am 16.04.24 um 15:18 schrieb Dominik Csapak:
> in DirPlugin and not Plugin (because of cyclic dependency of
> Plugin -> OVF -> Storage -> Plugin otherwise)
> 
> only ovf is currently supported (though ova will be shown in import
> listing), expects the files to not be in a subdir, and adjacent to the
> ovf file.
> 
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
>  src/PVE/Storage.pm                 |  8 ++++++-
>  src/PVE/Storage/DirPlugin.pm       | 37 +++++++++++++++++++++++++++++-
>  src/PVE/Storage/OVF.pm             |  2 ++
>  src/PVE/Storage/Plugin.pm          | 18 ++++++++++++++-
>  src/test/parse_volname_test.pm     | 13 +++++++++++
>  src/test/path_to_volume_id_test.pm | 16 +++++++++++++
>  6 files changed, 91 insertions(+), 3 deletions(-)
> 
> diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
> index 40314a8..f8ea93d 100755
> --- a/src/PVE/Storage.pm
> +++ b/src/PVE/Storage.pm
> @@ -114,6 +114,8 @@ our $VZTMPL_EXT_RE_1 = qr/\.tar\.(gz|xz|zst)/i;
>  
>  our $BACKUP_EXT_RE_2 = qr/\.(tgz|(?:tar|vma)(?:\.(${\PVE::Storage::Plugin::COMPRESSOR_RE}))?)/;
>  
> +our $IMPORT_EXT_RE_1 = qr/\.(ov[af])/;
> +
>  # FIXME remove with PVE 8.0, add versioned breaks for pve-manager
>  our $vztmpl_extension_re = $VZTMPL_EXT_RE_1;
>  
> @@ -612,6 +614,7 @@ sub path_to_volume_id {
>  	my $backupdir = $plugin->get_subdir($scfg, 'backup');
>  	my $privatedir = $plugin->get_subdir($scfg, 'rootdir');
>  	my $snippetsdir = $plugin->get_subdir($scfg, 'snippets');
> +	my $importdir = $plugin->get_subdir($scfg, 'import');
>  
>  	if ($path =~ m!^$imagedir/(\d+)/([^/\s]+)$!) {
>  	    my $vmid = $1;
> @@ -640,6 +643,9 @@ sub path_to_volume_id {
>  	} elsif ($path =~ m!^$snippetsdir/([^/]+)$!) {
>  	    my $name = $1;
>  	    return ('snippets', "$sid:snippets/$name");
> +	} elsif ($path =~ m!^$importdir/([^/]+${IMPORT_EXT_RE_1})$!) {
> +	    my $name = $1;
> +	    return ('import', "$sid:import/$name");
>  	}
>      }
>  
> @@ -2170,7 +2176,7 @@ sub normalize_content_filename {
>  # If a storage provides an 'import' content type, it should be able to provide
>  # an object implementing the import information interface.
>  sub get_import_metadata {
> -    my ($cfg, $volid) = @_;
> +    my ($cfg, $volid, $target) = @_;
>  

$target is added here but not passed along when calling the plugin's
function

>      my ($storeid, $volname) = parse_volume_id($volid);
>  

Pre-existing and not directly related, but in the ESXi plugin the
prototype seems wrong too:

sub get_import_metadata : prototype($$$$$) {
    my ($class, $scfg, $volname, $storeid) = @_;


> diff --git a/src/PVE/Storage/DirPlugin.pm b/src/PVE/Storage/DirPlugin.pm
> index 2efa8d5..4dc7708 100644
> --- a/src/PVE/Storage/DirPlugin.pm
> +++ b/src/PVE/Storage/DirPlugin.pm
> @@ -10,6 +10,7 @@ use IO::File;
>  use POSIX;
>  
>  use PVE::Storage::Plugin;
> +use PVE::Storage::OVF;
>  use PVE::JSONSchema qw(get_standard_option);
>  
>  use base qw(PVE::Storage::Plugin);
> @@ -22,7 +23,7 @@ sub type {
>  
>  sub plugindata {
>      return {
> -	content => [ { images => 1, rootdir => 1, vztmpl => 1, iso => 1, backup => 1, snippets => 1, none => 1 },
> +	content => [ { images => 1, rootdir => 1, vztmpl => 1, iso => 1, backup => 1, snippets => 1, none => 1, import => 1 },
>  		     { images => 1,  rootdir => 1 }],
>  	format => [ { raw => 1, qcow2 => 1, vmdk => 1, subvol => 1 } , 'raw' ],
>      };
> @@ -247,4 +248,38 @@ sub check_config {
>      return $opts;
>  }
>  
> +sub get_import_metadata {
> +    my ($class, $scfg, $volname, $storeid, $target) = @_;
> +
> +    if ($volname !~ m!^([^/]+)/.*${PVE::Storage::IMPORT_EXT_RE_1}$!) {
> +	die "volume '$volname' does not look like an importable vm config\n";
> +    }
> +
> +    my $path = $class->path($scfg, $volname, $storeid, undef);
> +
> +    # NOTE: all types must be added to the return schema of the import-metadata API endpoint

To be extra clear (was confused for a moment): "all types of warnings"

> +    my $warnings = [];
> +
> +    my $res = PVE::Storage::OVF::parse_ovf($path, $isOva);

$isOva does not exist yet (only added by a later patch).

> +    my $disks = {};
> +    for my $disk ($res->{disks}->@*) {
> +	my $id = $disk->{disk_address};
> +	my $size = $disk->{virtual_size};
> +	my $path = $disk->{relative_path};
> +	$disks->{$id} = {
> +	    volid => "$storeid:import/$path",
> +	    defined($size) ? (size => $size) : (),
> +	};
> +    }
> +
> +    return {
> +	type => 'vm',
> +	source => $volname,
> +	'create-args' => $res->{qm},
> +	'disks' => $disks,
> +	warnings => $warnings,
> +	net => [],
> +    };
> +}
> +
>  1;
> diff --git a/src/PVE/Storage/OVF.pm b/src/PVE/Storage/OVF.pm
> index 90ca453..4a322b9 100644
> --- a/src/PVE/Storage/OVF.pm
> +++ b/src/PVE/Storage/OVF.pm
> @@ -222,6 +222,7 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
>  	}
>  
>  	($backing_file_path) = $backing_file_path =~ m|^(/.*)|; # untaint
> +	($filepath) = $filepath =~ m|^(.*)|; # untaint


Hmm, $backing_file_path is the result after going through realpath(),
$filepath is from before. We do check it's not a symlink, so I might be
a bit paranoid, but still, rather than doing a blanket untaint, you
could just use basename() (either here or not return anything new and do
it at the use-site).

>  
>  	my $virtual_size = PVE::Storage::file_size_info($backing_file_path);
>  	die "error parsing $backing_file_path, cannot determine file size\n"
> @@ -231,6 +232,7 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
>  	    disk_address => $pve_disk_address,
>  	    backing_file => $backing_file_path,
>  	    virtual_size => $virtual_size
> +	    relative_path => $filepath,
>  	};
>  	push @disks, $pve_disk;
>  > diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
> index 22a9729..deaf8b2 100644
> --- a/src/PVE/Storage/Plugin.pm
> +++ b/src/PVE/Storage/Plugin.pm
> @@ -654,6 +654,10 @@ sub parse_volname {
>  	return ('backup', $fn);
>      } elsif ($volname =~ m!^snippets/([^/]+)$!) {
>  	return ('snippets', $1);
> +    } elsif ($volname =~ m!^import/([^/]+$PVE::Storage::IMPORT_EXT_RE_1)$!) {
> +	return ('import', $1);
> +    } elsif ($volname =~ m!^import/([^/]+\.(raw|vmdk|qcow2))$!) {
> +	return ('images', $1, 0, undef, undef, undef, $2);

Hmm, $vmid=0, because we have currently have assumptions that each
volume has an associated guest ID? At least might be worth a comment
(also in API description if those volumes can somehow reach there).

>      }
>  
>      die "unable to parse directory volume name '$volname'\n";
> @@ -666,6 +670,7 @@ my $vtype_subdirs = {
>      vztmpl => 'template/cache',
>      backup => 'dump',
>      snippets => 'snippets',
> +    import => 'import',
>  };
>  
>  sub get_vtype_subdirs {
> @@ -691,6 +696,11 @@ sub filesystem_path {
>      my ($vtype, $name, $vmid, undef, undef, $isBase, $format) =
>  	$class->parse_volname($volname);
>  
> +    if (defined($vmid) && $vmid == 0) {
> +	# import volumes?
> +	$vtype = 'import';
> +    }

It is rather hacky :/ At least we could check whether it's an volname
with "import/" instead of relying on $vmid==0 to set the $vtype.

But why return type 'images' in parse_volname() if you override it here
if it's an import image? There should be some comments with the
rationale why it's done like this.

> +
>      # Note: qcow2/qed has internal snapshot, so path is always
>      # the same (with or without snapshot => same file).
>      die "can't snapshot this image format\n"
> @@ -1227,7 +1237,7 @@ sub list_images {
>      return $res;
>  }
>  
> -# list templates ($tt = <iso|vztmpl|backup|snippets>)
> +# list templates ($tt = <iso|vztmpl|backup|snippets|import>)
>  my $get_subdir_files = sub {
>      my ($sid, $path, $tt, $vmid) = @_;
>  
> @@ -1283,6 +1293,10 @@ my $get_subdir_files = sub {
>  		volid => "$sid:snippets/". basename($fn),
>  		format => 'snippet',
>  	    };
> +	} elsif ($tt eq 'import') {
> +	    next if $fn !~ m!/([^/]+$PVE::Storage::IMPORT_EXT_RE_1)$!i;
> +
> +	    $info = { volid => "$sid:import/$1", format => "$2" };
>  	}
>  
>  	$info->{size} = $st->size;
> @@ -1317,6 +1331,8 @@ sub list_volumes {
>  		$data = $get_subdir_files->($storeid, $path, 'backup', $vmid);
>  	    } elsif ($type eq 'snippets') {
>  		$data = $get_subdir_files->($storeid, $path, 'snippets');
> +	    } elsif ($type eq 'import') {
> +		$data = $get_subdir_files->($storeid, $path, 'import');
>  	    }
>  	}
>  
> diff --git a/src/test/parse_volname_test.pm b/src/test/parse_volname_test.pm
> index d6ac885..59819f0 100644
> --- a/src/test/parse_volname_test.pm
> +++ b/src/test/parse_volname_test.pm
> @@ -81,6 +81,19 @@ my $tests = [
>  	expected    => ['snippets', 'hookscript.pl'],
>      },
>      #
> +    #
> +    #
> +    {
> +	description => "Import, ova",
> +	volname     => 'import/import.ova',
> +	expected    => ['import', 'import.ova'],
> +    },
> +    {
> +	description => "Import, ovf",
> +	volname     => 'import/import.ovf',
> +	expected    => ['import', 'import.ovf'],
> +    },
> +    #
>      # failed matches
>      #

Would be nice to also test for failure (with a wrong extension).

>      {
> diff --git a/src/test/path_to_volume_id_test.pm b/src/test/path_to_volume_id_test.pm
> index 8149c88..8bc1bf8 100644
> --- a/src/test/path_to_volume_id_test.pm
> +++ b/src/test/path_to_volume_id_test.pm
> @@ -174,6 +174,22 @@ my @tests = (
>  	    'local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.xz',
>  	],
>      },
> +    {
> +	description => 'Import, ova',
> +	volname     => "$storage_dir/import/import.ova",
> +	expected    => [
> +	    'import',
> +	    'local:import/import.ova',
> +	],
> +    },
> +    {
> +	description => 'Import, ovf',
> +	volname     => "$storage_dir/import/import.ovf",
> +	expected    => [
> +	    'import',
> +	    'local:import/import.ovf',
> +	],
> +    },
>  
>      # no matches, path or files with failures
>      {


Would be nice to also test for failure (with a wrong extension).


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH storage 2/9] plugin: dir: implement import content type
  2024-04-17 10:07   ` Fiona Ebner
@ 2024-04-17 10:07     ` Fiona Ebner
  2024-04-17 13:13     ` Dominik Csapak
  1 sibling, 0 replies; 67+ messages in thread
From: Fiona Ebner @ 2024-04-17 10:07 UTC (permalink / raw)
  To: Proxmox VE development discussion, Dominik Csapak

Am 16.04.24 um 15:18 schrieb Dominik Csapak:
> in DirPlugin and not Plugin (because of cyclic dependency of
> Plugin -> OVF -> Storage -> Plugin otherwise)
> 
> only ovf is currently supported (though ova will be shown in import
> listing), expects the files to not be in a subdir, and adjacent to the
> ovf file.
> 
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
>  src/PVE/Storage.pm                 |  8 ++++++-
>  src/PVE/Storage/DirPlugin.pm       | 37 +++++++++++++++++++++++++++++-
>  src/PVE/Storage/OVF.pm             |  2 ++
>  src/PVE/Storage/Plugin.pm          | 18 ++++++++++++++-
>  src/test/parse_volname_test.pm     | 13 +++++++++++
>  src/test/path_to_volume_id_test.pm | 16 +++++++++++++
>  6 files changed, 91 insertions(+), 3 deletions(-)
> 
> diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
> index 40314a8..f8ea93d 100755
> --- a/src/PVE/Storage.pm
> +++ b/src/PVE/Storage.pm
> @@ -114,6 +114,8 @@ our $VZTMPL_EXT_RE_1 = qr/\.tar\.(gz|xz|zst)/i;
>  
>  our $BACKUP_EXT_RE_2 = qr/\.(tgz|(?:tar|vma)(?:\.(${\PVE::Storage::Plugin::COMPRESSOR_RE}))?)/;
>  
> +our $IMPORT_EXT_RE_1 = qr/\.(ov[af])/;
> +
>  # FIXME remove with PVE 8.0, add versioned breaks for pve-manager
>  our $vztmpl_extension_re = $VZTMPL_EXT_RE_1;
>  
> @@ -612,6 +614,7 @@ sub path_to_volume_id {
>  	my $backupdir = $plugin->get_subdir($scfg, 'backup');
>  	my $privatedir = $plugin->get_subdir($scfg, 'rootdir');
>  	my $snippetsdir = $plugin->get_subdir($scfg, 'snippets');
> +	my $importdir = $plugin->get_subdir($scfg, 'import');
>  
>  	if ($path =~ m!^$imagedir/(\d+)/([^/\s]+)$!) {
>  	    my $vmid = $1;
> @@ -640,6 +643,9 @@ sub path_to_volume_id {
>  	} elsif ($path =~ m!^$snippetsdir/([^/]+)$!) {
>  	    my $name = $1;
>  	    return ('snippets', "$sid:snippets/$name");
> +	} elsif ($path =~ m!^$importdir/([^/]+${IMPORT_EXT_RE_1})$!) {
> +	    my $name = $1;
> +	    return ('import', "$sid:import/$name");
>  	}
>      }
>  
> @@ -2170,7 +2176,7 @@ sub normalize_content_filename {
>  # If a storage provides an 'import' content type, it should be able to provide
>  # an object implementing the import information interface.
>  sub get_import_metadata {
> -    my ($cfg, $volid) = @_;
> +    my ($cfg, $volid, $target) = @_;
>  

$target is added here but not passed along when calling the plugin's
function

>      my ($storeid, $volname) = parse_volume_id($volid);
>  

Pre-existing and not directly related, but in the ESXi plugin the
prototype seems wrong too:

sub get_import_metadata : prototype($$$$$) {
    my ($class, $scfg, $volname, $storeid) = @_;


> diff --git a/src/PVE/Storage/DirPlugin.pm b/src/PVE/Storage/DirPlugin.pm
> index 2efa8d5..4dc7708 100644
> --- a/src/PVE/Storage/DirPlugin.pm
> +++ b/src/PVE/Storage/DirPlugin.pm
> @@ -10,6 +10,7 @@ use IO::File;
>  use POSIX;
>  
>  use PVE::Storage::Plugin;
> +use PVE::Storage::OVF;
>  use PVE::JSONSchema qw(get_standard_option);
>  
>  use base qw(PVE::Storage::Plugin);
> @@ -22,7 +23,7 @@ sub type {
>  
>  sub plugindata {
>      return {
> -	content => [ { images => 1, rootdir => 1, vztmpl => 1, iso => 1, backup => 1, snippets => 1, none => 1 },
> +	content => [ { images => 1, rootdir => 1, vztmpl => 1, iso => 1, backup => 1, snippets => 1, none => 1, import => 1 },
>  		     { images => 1,  rootdir => 1 }],
>  	format => [ { raw => 1, qcow2 => 1, vmdk => 1, subvol => 1 } , 'raw' ],
>      };
> @@ -247,4 +248,38 @@ sub check_config {
>      return $opts;
>  }
>  
> +sub get_import_metadata {
> +    my ($class, $scfg, $volname, $storeid, $target) = @_;
> +
> +    if ($volname !~ m!^([^/]+)/.*${PVE::Storage::IMPORT_EXT_RE_1}$!) {
> +	die "volume '$volname' does not look like an importable vm config\n";
> +    }
> +
> +    my $path = $class->path($scfg, $volname, $storeid, undef);
> +
> +    # NOTE: all types must be added to the return schema of the import-metadata API endpoint

To be extra clear (was confused for a moment): "all types of warnings"

> +    my $warnings = [];
> +
> +    my $res = PVE::Storage::OVF::parse_ovf($path, $isOva);

$isOva does not exist yet (only added by a later patch).

> +    my $disks = {};
> +    for my $disk ($res->{disks}->@*) {
> +	my $id = $disk->{disk_address};
> +	my $size = $disk->{virtual_size};
> +	my $path = $disk->{relative_path};
> +	$disks->{$id} = {
> +	    volid => "$storeid:import/$path",
> +	    defined($size) ? (size => $size) : (),
> +	};
> +    }
> +
> +    return {
> +	type => 'vm',
> +	source => $volname,
> +	'create-args' => $res->{qm},
> +	'disks' => $disks,
> +	warnings => $warnings,
> +	net => [],
> +    };
> +}
> +
>  1;
> diff --git a/src/PVE/Storage/OVF.pm b/src/PVE/Storage/OVF.pm
> index 90ca453..4a322b9 100644
> --- a/src/PVE/Storage/OVF.pm
> +++ b/src/PVE/Storage/OVF.pm
> @@ -222,6 +222,7 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
>  	}
>  
>  	($backing_file_path) = $backing_file_path =~ m|^(/.*)|; # untaint
> +	($filepath) = $filepath =~ m|^(.*)|; # untaint


Hmm, $backing_file_path is the result after going through realpath(),
$filepath is from before. We do check it's not a symlink, so I might be
a bit paranoid, but still, rather than doing a blanket untaint, you
could just use basename() (either here or not return anything new and do
it at the use-site).

>  
>  	my $virtual_size = PVE::Storage::file_size_info($backing_file_path);
>  	die "error parsing $backing_file_path, cannot determine file size\n"
> @@ -231,6 +232,7 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
>  	    disk_address => $pve_disk_address,
>  	    backing_file => $backing_file_path,
>  	    virtual_size => $virtual_size
> +	    relative_path => $filepath,
>  	};
>  	push @disks, $pve_disk;
>  > diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
> index 22a9729..deaf8b2 100644
> --- a/src/PVE/Storage/Plugin.pm
> +++ b/src/PVE/Storage/Plugin.pm
> @@ -654,6 +654,10 @@ sub parse_volname {
>  	return ('backup', $fn);
>      } elsif ($volname =~ m!^snippets/([^/]+)$!) {
>  	return ('snippets', $1);
> +    } elsif ($volname =~ m!^import/([^/]+$PVE::Storage::IMPORT_EXT_RE_1)$!) {
> +	return ('import', $1);
> +    } elsif ($volname =~ m!^import/([^/]+\.(raw|vmdk|qcow2))$!) {
> +	return ('images', $1, 0, undef, undef, undef, $2);

Hmm, $vmid=0, because we have currently have assumptions that each
volume has an associated guest ID? At least might be worth a comment
(also in API description if those volumes can somehow reach there).

>      }
>  
>      die "unable to parse directory volume name '$volname'\n";
> @@ -666,6 +670,7 @@ my $vtype_subdirs = {
>      vztmpl => 'template/cache',
>      backup => 'dump',
>      snippets => 'snippets',
> +    import => 'import',
>  };
>  
>  sub get_vtype_subdirs {
> @@ -691,6 +696,11 @@ sub filesystem_path {
>      my ($vtype, $name, $vmid, undef, undef, $isBase, $format) =
>  	$class->parse_volname($volname);
>  
> +    if (defined($vmid) && $vmid == 0) {
> +	# import volumes?
> +	$vtype = 'import';
> +    }

It is rather hacky :/ At least we could check whether it's an volname
with "import/" instead of relying on $vmid==0 to set the $vtype.

But why return type 'images' in parse_volname() if you override it here
if it's an import image? There should be some comments with the
rationale why it's done like this.

> +
>      # Note: qcow2/qed has internal snapshot, so path is always
>      # the same (with or without snapshot => same file).
>      die "can't snapshot this image format\n"
> @@ -1227,7 +1237,7 @@ sub list_images {
>      return $res;
>  }
>  
> -# list templates ($tt = <iso|vztmpl|backup|snippets>)
> +# list templates ($tt = <iso|vztmpl|backup|snippets|import>)
>  my $get_subdir_files = sub {
>      my ($sid, $path, $tt, $vmid) = @_;
>  
> @@ -1283,6 +1293,10 @@ my $get_subdir_files = sub {
>  		volid => "$sid:snippets/". basename($fn),
>  		format => 'snippet',
>  	    };
> +	} elsif ($tt eq 'import') {
> +	    next if $fn !~ m!/([^/]+$PVE::Storage::IMPORT_EXT_RE_1)$!i;
> +
> +	    $info = { volid => "$sid:import/$1", format => "$2" };
>  	}
>  
>  	$info->{size} = $st->size;
> @@ -1317,6 +1331,8 @@ sub list_volumes {
>  		$data = $get_subdir_files->($storeid, $path, 'backup', $vmid);
>  	    } elsif ($type eq 'snippets') {
>  		$data = $get_subdir_files->($storeid, $path, 'snippets');
> +	    } elsif ($type eq 'import') {
> +		$data = $get_subdir_files->($storeid, $path, 'import');
>  	    }
>  	}
>  
> diff --git a/src/test/parse_volname_test.pm b/src/test/parse_volname_test.pm
> index d6ac885..59819f0 100644
> --- a/src/test/parse_volname_test.pm
> +++ b/src/test/parse_volname_test.pm
> @@ -81,6 +81,19 @@ my $tests = [
>  	expected    => ['snippets', 'hookscript.pl'],
>      },
>      #
> +    #
> +    #
> +    {
> +	description => "Import, ova",
> +	volname     => 'import/import.ova',
> +	expected    => ['import', 'import.ova'],
> +    },
> +    {
> +	description => "Import, ovf",
> +	volname     => 'import/import.ovf',
> +	expected    => ['import', 'import.ovf'],
> +    },
> +    #
>      # failed matches
>      #

Would be nice to also test for failure (with a wrong extension).

>      {
> diff --git a/src/test/path_to_volume_id_test.pm b/src/test/path_to_volume_id_test.pm
> index 8149c88..8bc1bf8 100644
> --- a/src/test/path_to_volume_id_test.pm
> +++ b/src/test/path_to_volume_id_test.pm
> @@ -174,6 +174,22 @@ my @tests = (
>  	    'local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.xz',
>  	],
>      },
> +    {
> +	description => 'Import, ova',
> +	volname     => "$storage_dir/import/import.ova",
> +	expected    => [
> +	    'import',
> +	    'local:import/import.ova',
> +	],
> +    },
> +    {
> +	description => 'Import, ovf',
> +	volname     => "$storage_dir/import/import.ovf",
> +	expected    => [
> +	    'import',
> +	    'local:import/import.ovf',
> +	],
> +    },
>  
>      # no matches, path or files with failures
>      {


Would be nice to also test for failure (with a wrong extension).




^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH storage 3/9] plugin: dir: handle ova files for import
  2024-04-16 13:18 ` [pve-devel] [PATCH storage 3/9] plugin: dir: handle ova files for import Dominik Csapak
@ 2024-04-17 10:52   ` Fiona Ebner
  2024-04-17 13:07     ` Dominik Csapak
  2024-04-17 12:45   ` Fabian Grünbichler
  1 sibling, 1 reply; 67+ messages in thread
From: Fiona Ebner @ 2024-04-17 10:52 UTC (permalink / raw)
  To: Proxmox VE development discussion, Dominik Csapak

Am 16.04.24 um 15:18 schrieb Dominik Csapak:
> since we want to handle ova files (which are only ovf+vmdks bundled in a
> tar file) for import, add code that handles that.
> 
> we introduce a valid volname for files contained in ovas like this:
> 
>  storage:import/archive.ova/disk-1.vmdk
> 
> by basically treating the last part of the path as the name for the
> contained disk we want.
> 
> we then provide 3 functions to use for that:
> 
> * copy_needs_extraction: determines from the given volid (like above) if
>   that needs extraction to copy it, currently only 'import' vtype +
>   defined format returns true here (if we have more options in the
>   future, we can of course easily extend that)
> 
> * extract_disk_from_import_file: this actually extracts the file from
>   the archive. Currently only ova is supported, so the extraction with
>   'tar' is hardcoded, but again we can easily extend/modify that should
>   we need to.
> 
>   we currently extract into the import storage in a directory named:
>   `.tmp_<pid>_<targetvmid>` which should not clash with concurrent
>   operations (though we do extract it multiple times then)
> 

Could we do "extract upon upload", "tar upon download" instead? Sure
some people surely want to drop the ova manually, but we could tell them
they need to extract it first too. Depending on the amount of headache
this would save us, it might be worth it.

>   alternatively we could implement either a 'tmpstorage' parameter,
>   or use e.g. '/var/tmp/' or similar, but re-using the current storage
>   seemed ok.
> 
> * cleanup_extracted_image: intended to cleanup the extracted images from
>   above, including the surrounding temporary directory
> 
> we have to modify the `parse_ovf` a bit to handle the missing disk
> images, and we parse the size out of the ovf part (since this is
> informal only, it should be no problem if we cannot parse it sometimes)
> 
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
>  src/PVE/API2/Storage/Status.pm |  1 +
>  src/PVE/Storage.pm             | 59 ++++++++++++++++++++++++++++++++++
>  src/PVE/Storage/DirPlugin.pm   | 13 +++++++-
>  src/PVE/Storage/OVF.pm         | 53 ++++++++++++++++++++++++++----
>  src/PVE/Storage/Plugin.pm      |  5 +++
>  5 files changed, 123 insertions(+), 8 deletions(-)
> 
> diff --git a/src/PVE/API2/Storage/Status.pm b/src/PVE/API2/Storage/Status.pm
> index f7e324f..77ed57c 100644
> --- a/src/PVE/API2/Storage/Status.pm
> +++ b/src/PVE/API2/Storage/Status.pm
> @@ -749,6 +749,7 @@ __PACKAGE__->register_method({
>  				'efi-state-lost',
>  				'guest-is-running',
>  				'nvme-unsupported',
> +				'ova-needs-extracting',
>  				'ovmf-with-lsi-unsupported',
>  				'serial-port-socket-only',
>  			    ],
> diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
> index f8ea93d..bc073ef 100755
> --- a/src/PVE/Storage.pm
> +++ b/src/PVE/Storage.pm
> @@ -2189,4 +2189,63 @@ sub get_import_metadata {
>      return $plugin->get_import_metadata($scfg, $volname, $storeid);
>  }
>  

Shouldn't the following three functions call into plugin methods
instead? That'd seem much more future-proof to me.

> +sub copy_needs_extraction {
> +    my ($volid) = @_;
> +    my ($storeid, $volname) = parse_volume_id($volid);
> +    my $cfg = config();
> +    my $scfg = storage_config($cfg, $storeid);
> +    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
> +
> +    my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $file_format) =
> +	$plugin->parse_volname($volname);
> +
> +    return $vtype eq 'import' && defined($file_format);

E.g this seems rather hacky, and puts a weird coupling on a future
import plugin's parse_volname() function (presence of $file_format).

> +}
> +
> +sub extract_disk_from_import_file {
> +    my ($volid, $vmid) = @_;
> +
> +    my ($storeid, $volname) = parse_volume_id($volid);
> +    my $cfg = config();
> +    my $scfg = storage_config($cfg, $storeid);
> +    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
> +
> +    my ($vtype, $name, undef, undef, undef, undef, $file_format) =
> +	$plugin->parse_volname($volname);
> +
> +    die "only files with content type 'import' can be extracted\n"
> +	if $vtype ne 'import' || !defined($file_format);
> +
> +    # extract the inner file from the name
> +    if ($volid =~ m!${name}/([^/]+)$!) {
> +	$name = $1;
> +    }
> +
> +    my ($source_file) = $plugin->path($scfg, $volname, $storeid);
> +
> +    my $destdir = $plugin->get_subdir($scfg, 'import');
> +    my $pid = $$;
> +    $destdir .= "/.tmp_${pid}_${vmid}";
> +    mkdir $destdir;
> +
> +    ($source_file) = $source_file =~ m|^(/.*)|; # untaint
> +
> +    run_command(['tar', '-x', '-C', $destdir, '-f', $source_file, $name]);
> +
> +    return "$destdir/$name";
> +}
> +
> +sub cleanup_extracted_image {
> +    my ($source) = @_;
> +
> +    if ($source =~ m|^(/.+/\.tmp_[0-9]+_[0-9]+)/[^/]+$|) {
> +	my $tmpdir = $1;
> +
> +	unlink $source or $! == ENOENT or die "removing image $source failed: $!\n";
> +	rmdir $tmpdir or $! == ENOENT or die "removing tmpdir $tmpdir failed: $!\n";
> +    } else {
> +	die "invalid extraced image path '$source'\n";
> +    }
> +}
> +
>  1;
> diff --git a/src/PVE/Storage/DirPlugin.pm b/src/PVE/Storage/DirPlugin.pm
> index 4dc7708..50ceab7 100644
> --- a/src/PVE/Storage/DirPlugin.pm
> +++ b/src/PVE/Storage/DirPlugin.pm
> @@ -260,14 +260,25 @@ sub get_import_metadata {
>      # NOTE: all types must be added to the return schema of the import-metadata API endpoint
>      my $warnings = [];
>  
> +    my $isOva = 0;
> +    if ($path =~ m!\.ova!) {

Would be nicer if parse_volname() would return the $file_format and we
chould check for that. Also missing the $ in the regex, so you'd
mismatch a weird filename like ABC.ovaXYZ.ovf or?

> +	$isOva = 1;
> +	push @$warnings, { type => 'ova-needs-extracting' };
> +    }
>      my $res = PVE::Storage::OVF::parse_ovf($path, $isOva);
>      my $disks = {};
>      for my $disk ($res->{disks}->@*) {
>  	my $id = $disk->{disk_address};
>  	my $size = $disk->{virtual_size};
>  	my $path = $disk->{relative_path};
> +	my $volid;
> +	if ($isOva) {
> +	    $volid = "$storeid:$volname/$path";
> +	} else {
> +	    $volid = "$storeid:import/$path",
> +	}
>  	$disks->{$id} = {
> -	    volid => "$storeid:import/$path",
> +	    volid => $volid,
>  	    defined($size) ? (size => $size) : (),
>  	};
>      }
> diff --git a/src/PVE/Storage/OVF.pm b/src/PVE/Storage/OVF.pm
> index 4a322b9..fb850a8 100644
> --- a/src/PVE/Storage/OVF.pm
> +++ b/src/PVE/Storage/OVF.pm
> @@ -85,11 +85,37 @@ sub id_to_pve {
>      }
>  }
>  
> +# technically defined in DSP0004 (https://www.dmtf.org/dsp/DSP0004) as an ABNF
> +# but realistically this always takes the form of 'bytes * base^exponent'

The comment wrongly says 'bytes' instead of 'byte' (your test examples
confirm this).

> +sub try_parse_capacity_unit {
> +    my ($unit_text) = @_;
> +
> +    if ($unit_text =~ m/^\s*byte\s*\*\s*([0-9]+)\s*\^\s*([0-9]+)\s*$/) {

Fun regex :P

> +	my $base = $1;
> +	my $exp = $2;
> +	return $base ** $exp;
> +    }
> +
> +    return undef;
> +}
> +

(...)

> diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
> index deaf8b2..ea069ab 100644
> --- a/src/PVE/Storage/Plugin.pm
> +++ b/src/PVE/Storage/Plugin.pm
> @@ -654,6 +654,11 @@ sub parse_volname {
>  	return ('backup', $fn);
>      } elsif ($volname =~ m!^snippets/([^/]+)$!) {
>  	return ('snippets', $1);
> +    } elsif ($volname =~ m!^import/([^/]+\.ova)\/([^/]+)$!) {
> +	my $archive = $1;
> +	my $file = $2;
> +	my (undef, $format, undef) = parse_name_dir($file);
> +	return ('import', $archive, 0, undef, undef, undef, $format);

So we return the same $name for different things here? Not super happy
with that either. If we were to get creative we could say the archive is
the "base" of the image, but surely also comes with caveats.

>      } elsif ($volname =~ m!^import/([^/]+$PVE::Storage::IMPORT_EXT_RE_1)$!) {
>  	return ('import', $1);
>      } elsif ($volname =~ m!^import/([^/]+\.(raw|vmdk|qcow2))$!) {


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH storage 4/9] ovf: implement parsing the ostype
  2024-04-16 13:18 ` [pve-devel] [PATCH storage 4/9] ovf: implement parsing the ostype Dominik Csapak
@ 2024-04-17 11:32   ` Fiona Ebner
  2024-04-17 13:14     ` Dominik Csapak
  0 siblings, 1 reply; 67+ messages in thread
From: Fiona Ebner @ 2024-04-17 11:32 UTC (permalink / raw)
  To: Proxmox VE development discussion, Dominik Csapak

Am 16.04.24 um 15:18 schrieb Dominik Csapak:
> use the standards info about the ostypes to map to our own
> (see comment for link to the relevant part of the dmtf schema)
> 
> every type that is not listed we map to 'other', so no need to have it
> in a list.
> 
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>

> diff --git a/src/test/run_ovf_tests.pl b/src/test/run_ovf_tests.pl
> index 1ef78cc..e949c15 100755
> --- a/src/test/run_ovf_tests.pl
> +++ b/src/test/run_ovf_tests.pl
> @@ -59,13 +59,16 @@ print "\ntesting vm.conf extraction\n";
>  is($win2008->{qm}->{name}, 'Win2008-R2x64', 'win2008 VM name is correct');
>  is($win2008->{qm}->{memory}, '2048', 'win2008 VM memory is correct');
>  is($win2008->{qm}->{cores}, '1', 'win2008 VM cores are correct');
> +is($win2008->{qm}->{ostype}, 'win7', 'win2008 VM ostype is correcty');
>  
>  is($win10->{qm}->{name}, 'Win10-Liz', 'win10 VM name is correct');
>  is($win10->{qm}->{memory}, '6144', 'win10 VM memory is correct');
>  is($win10->{qm}->{cores}, '4', 'win10 VM cores are correct');
> +is($win10->{qm}->{ostype}, 'other', 'win10 VM ostype is correct');

Yes, 'other', because the ovf config has id=1, but is there a special
reason why? Maybe worth a comment here and below to avoid potential
confusion.

>  
>  is($win10noNs->{qm}->{name}, 'Win10-Liz', 'win10 VM (no default rasd NS) name is correct');
>  is($win10noNs->{qm}->{memory}, '6144', 'win10 VM (no default rasd NS) memory is correct');
>  is($win10noNs->{qm}->{cores}, '4', 'win10 VM (no default rasd NS) cores are correct');
> +is($win10noNs->{qm}->{ostype}, 'other', 'win10 VM (no default rasd NS) ostype is correct');
>  
>  done_testing();


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH storage 5/9] ovf: implement parsing out firmware type
  2024-04-16 13:18 ` [pve-devel] [PATCH storage 5/9] ovf: implement parsing out firmware type Dominik Csapak
@ 2024-04-17 11:43   ` Fiona Ebner
  0 siblings, 0 replies; 67+ messages in thread
From: Fiona Ebner @ 2024-04-17 11:43 UTC (permalink / raw)
  To: Proxmox VE development discussion, Dominik Csapak

Am 16.04.24 um 15:18 schrieb Dominik Csapak:
> it seems there is no part of the ovf standard that handles which type of
> bios there is (at least i could not find it). Every ovf/ova i tested
> either has no info about it, or has it in a vmware specific property
> which we pare here.

s/pare/parse/

> 
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>

Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH storage 6/9] ovf: implement rudimentary boot order
  2024-04-16 13:18 ` [pve-devel] [PATCH storage 6/9] ovf: implement rudimentary boot order Dominik Csapak
@ 2024-04-17 11:54   ` Fiona Ebner
  2024-04-17 13:15     ` Dominik Csapak
  0 siblings, 1 reply; 67+ messages in thread
From: Fiona Ebner @ 2024-04-17 11:54 UTC (permalink / raw)
  To: Proxmox VE development discussion, Dominik Csapak

Am 16.04.24 um 15:18 schrieb Dominik Csapak:
> simply add all parsed disks to the boot order in the order we encounter
> them (similar to the esxi plugin).
> 
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
>  src/PVE/Storage/OVF.pm    | 6 ++++++
>  src/test/run_ovf_tests.pl | 3 +++
>  2 files changed, 9 insertions(+)
> 
> diff --git a/src/PVE/Storage/OVF.pm b/src/PVE/Storage/OVF.pm
> index f56c34d..f438de2 100644
> --- a/src/PVE/Storage/OVF.pm
> +++ b/src/PVE/Storage/OVF.pm
> @@ -245,6 +245,8 @@ sub parse_ovf {
>      # when all the nodes has been found out, we copy the relevant information to
>      # a $pve_disk hash ref, which we push to @disks;
>  
> +    my $boot = [];

Nit: might be better to name it more verbosely since it's a long
function, e.g. boot_order, boot_disk_keys, or similar

> +
>      foreach my $item_node (@disk_items) {
>  
>  	my $disk_node;
> @@ -348,6 +350,10 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
>  	};
>  	$pve_disk->{virtual_size} = $virtual_size if defined($virtual_size);
>  	push @disks, $pve_disk;
> +	push @$boot, $pve_disk_address;
> +    }

This bracket should not be here and the next line below the next bracket
(fixed by the next patch).

> +
> +    $qm->{boot} = "order=" . join(';', @$boot);

Won't this fail later if there are no disks?


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH storage 7/9] ovf: implement parsing nics
  2024-04-16 13:19 ` [pve-devel] [PATCH storage 7/9] ovf: implement parsing nics Dominik Csapak
@ 2024-04-17 12:09   ` Fiona Ebner
  2024-04-17 13:16     ` Dominik Csapak
  2024-04-18  8:22   ` Fiona Ebner
  1 sibling, 1 reply; 67+ messages in thread
From: Fiona Ebner @ 2024-04-17 12:09 UTC (permalink / raw)
  To: Proxmox VE development discussion, Dominik Csapak

Am 16.04.24 um 15:19 schrieb Dominik Csapak:
> by iterating over the relevant parts and trying to parse out the
> 'ResourceSubType'. The content of that is not standardized, but I only
> ever found examples that are compatible with vmware, meaning it's
> either 'e1000', 'e1000e' or 'vmxnet3' (in various capitalizations; thus
> the `lc()`)
> 
> As a fallback i used vmxnet3, since i guess most OVAs are tuned for
> vmware.

I'm not familiar enough with the OVA/OVF ecosystem, but is this really
the best default. I'd kinda expect e1000(e) to cause less issues in case
we were not able to get the type from the OVA/OVF. And people coming
from VMWare are likely going to use the dedicated importer.

> 
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
>  src/PVE/Storage/DirPlugin.pm |  2 +-
>  src/PVE/Storage/OVF.pm       | 20 +++++++++++++++++++-
>  src/test/run_ovf_tests.pl    |  5 +++++
>  3 files changed, 25 insertions(+), 2 deletions(-)
> 
> diff --git a/src/PVE/Storage/DirPlugin.pm b/src/PVE/Storage/DirPlugin.pm
> index 8a248c7..21c8350 100644
> --- a/src/PVE/Storage/DirPlugin.pm
> +++ b/src/PVE/Storage/DirPlugin.pm
> @@ -294,7 +294,7 @@ sub get_import_metadata {
>  	'create-args' => $res->{qm},
>  	'disks' => $disks,
>  	warnings => $warnings,
> -	net => [],
> +	net => $res->{net},
>      };
>  }
>  
> diff --git a/src/PVE/Storage/OVF.pm b/src/PVE/Storage/OVF.pm
> index f438de2..c3e7ed9 100644
> --- a/src/PVE/Storage/OVF.pm
> +++ b/src/PVE/Storage/OVF.pm
> @@ -120,6 +120,12 @@ sub get_ostype {
>      return $ostype_ids->{$id} // 'other';
>  }
>  
> +my $allowed_nic_models = [
> +    'e1000',
> +    'e1000e',
> +    'vmxnet3',
> +];
> +
>  sub find_by {
>      my ($key, $param) = @_;
>      foreach my $resource (@resources) {
> @@ -355,9 +361,21 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
>  
>      $qm->{boot} = "order=" . join(';', @$boot);
>  
> +    my $nic_id = dtmf_name_to_id('Ethernet Adapter');
> +    my $xpath_find_nics = "/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/ovf:Item[rasd:ResourceType=${nic_id}]";
> +    my @nic_items = $xpc->findnodes($xpath_find_nics);
> +
> +    my $net = {};
> +
> +    my $net_count = 0;
> +    foreach my $item_node (@nic_items) {

Style nit: please use for instead of foreach

> +	my $model = $xpc->findvalue('rasd:ResourceSubType', $item_node);
> +	$model = lc($model);
> +	$model = 'vmxnet3' if ! grep $model, @$allowed_nic_models;


> +	$net->{"net${net_count}"} = { model => $model };
>      }

$net_count is never increased.

>  
> -    return {qm => $qm, disks => \@disks};
> +    return {qm => $qm, disks => \@disks, net => $net};
>  }
>  
>  1;
> diff --git a/src/test/run_ovf_tests.pl b/src/test/run_ovf_tests.pl
> index 8cf5662..d9a7b4b 100755
> --- a/src/test/run_ovf_tests.pl
> +++ b/src/test/run_ovf_tests.pl
> @@ -54,6 +54,11 @@ is($win10noNs->{disks}->[0]->{disk_address}, 'scsi0', 'single disk vm (no defaul
>  is($win10noNs->{disks}->[0]->{backing_file}, "$test_manifests/Win10-Liz-disk1.vmdk", 'single disk vm (no default rasd NS) has the correct disk backing device');
>  is($win10noNs->{disks}->[0]->{virtual_size}, 2048, 'single disk vm (no default rasd NS) has the correct size');
>  
> +print "testing nics\n";
> +is($win2008->{net}->{net0}->{model}, 'e1000', 'win2008 has correct nic model');
> +is($win10->{net}->{net0}->{model}, 'e1000e', 'win10 has correct nic model');
> +is($win10noNs->{net}->{net0}->{model}, 'e1000e', 'win10 (no default rasd NS) has correct nic model');
> +
>  print "\ntesting vm.conf extraction\n";
>  
>  is($win2008->{qm}->{boot}, 'order=scsi0;scsi1', 'win2008 VM boot is correct');


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH manager 4/4] ui: enable upload/download buttons for 'import' type storages
  2024-04-16 13:19 ` [pve-devel] [PATCH manager 4/4] ui: enable upload/download buttons for 'import' type storages Dominik Csapak
@ 2024-04-17 12:37   ` Fabian Grünbichler
  2024-04-18 11:20   ` Fiona Ebner
  1 sibling, 0 replies; 67+ messages in thread
From: Fabian Grünbichler @ 2024-04-17 12:37 UTC (permalink / raw)
  To: Proxmox VE development discussion

On April 16, 2024 3:19 pm, Dominik Csapak wrote:
> but only for non esxi ones, since that does not allow
> uploading/downloading there

what about a remove button? :)

> 
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
>  www/manager6/storage/Browser.js        | 7 ++++++-
>  www/manager6/window/UploadToStorage.js | 1 +
>  2 files changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/www/manager6/storage/Browser.js b/www/manager6/storage/Browser.js
> index 2123141d..77d106c1 100644
> --- a/www/manager6/storage/Browser.js
> +++ b/www/manager6/storage/Browser.js
> @@ -28,7 +28,9 @@ Ext.define('PVE.storage.Browser', {
>  	let res = storageInfo.data;
>  	let plugin = res.plugintype;
>  
> -	me.items = plugin !== 'esxi' ? [
> +	let isEsxi = plugin === 'esxi';
> +
> +	me.items = !isEsxi ? [
>  	    {
>  		title: gettext('Summary'),
>  		xtype: 'pveStorageSummary',
> @@ -144,6 +146,9 @@ Ext.define('PVE.storage.Browser', {
>  		    content: 'import',
>  		    useCustomRemoveButton: true, // hide default remove button
>  		    showColumns: ['name', 'format'],
> +		    enableUploadButton: enableUpload && !isEsxi,
> +		    enableDownloadUrlButton: enableDownloadUrl && !isEsxi,
> +		    useUploadButton: !isEsxi,
>  		    itemdblclick: (view, record) => createGuestImportWindow(record),
>  		    tbar: [
>  			{
> diff --git a/www/manager6/window/UploadToStorage.js b/www/manager6/window/UploadToStorage.js
> index 3c5bba88..79a6e8a6 100644
> --- a/www/manager6/window/UploadToStorage.js
> +++ b/www/manager6/window/UploadToStorage.js
> @@ -11,6 +11,7 @@ Ext.define('PVE.window.UploadToStorage', {
>      acceptedExtensions: {
>  	iso: ['.img', '.iso'],
>  	vztmpl: ['.tar.gz', '.tar.xz', '.tar.zst'],
> +	'import': ['ova'],
>      },
>  
>      cbindData: function(initialConfig) {
> -- 
> 2.39.2
> 
> 
> 
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 
> 
> 


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH storage 3/9] plugin: dir: handle ova files for import
  2024-04-16 13:18 ` [pve-devel] [PATCH storage 3/9] plugin: dir: handle ova files for import Dominik Csapak
  2024-04-17 10:52   ` Fiona Ebner
@ 2024-04-17 12:45   ` Fabian Grünbichler
  2024-04-17 13:10     ` Dominik Csapak
  1 sibling, 1 reply; 67+ messages in thread
From: Fabian Grünbichler @ 2024-04-17 12:45 UTC (permalink / raw)
  To: Proxmox VE development discussion

On April 16, 2024 3:18 pm, Dominik Csapak wrote:
> since we want to handle ova files (which are only ovf+vmdks bundled in a
> tar file) for import, add code that handles that.
> 
> we introduce a valid volname for files contained in ovas like this:
> 
>  storage:import/archive.ova/disk-1.vmdk
> 
> by basically treating the last part of the path as the name for the
> contained disk we want.
> 
> we then provide 3 functions to use for that:
> 
> * copy_needs_extraction: determines from the given volid (like above) if
>   that needs extraction to copy it, currently only 'import' vtype +
>   defined format returns true here (if we have more options in the
>   future, we can of course easily extend that)
> 
> * extract_disk_from_import_file: this actually extracts the file from
>   the archive. Currently only ova is supported, so the extraction with
>   'tar' is hardcoded, but again we can easily extend/modify that should
>   we need to.
> 
>   we currently extract into the import storage in a directory named:
>   `.tmp_<pid>_<targetvmid>` which should not clash with concurrent
>   operations (though we do extract it multiple times then)
> 
>   alternatively we could implement either a 'tmpstorage' parameter,
>   or use e.g. '/var/tmp/' or similar, but re-using the current storage
>   seemed ok.
> 
> * cleanup_extracted_image: intended to cleanup the extracted images from
>   above, including the surrounding temporary directory

the helpers could also all live in qemu-server for now, which would also
make extending it to use a different storage, or direct importing via a
pipe easier? see below ;)

> 
> we have to modify the `parse_ovf` a bit to handle the missing disk
> images, and we parse the size out of the ovf part (since this is
> informal only, it should be no problem if we cannot parse it sometimes)
> 
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
>  src/PVE/API2/Storage/Status.pm |  1 +
>  src/PVE/Storage.pm             | 59 ++++++++++++++++++++++++++++++++++
>  src/PVE/Storage/DirPlugin.pm   | 13 +++++++-
>  src/PVE/Storage/OVF.pm         | 53 ++++++++++++++++++++++++++----
>  src/PVE/Storage/Plugin.pm      |  5 +++
>  5 files changed, 123 insertions(+), 8 deletions(-)
> 
> diff --git a/src/PVE/API2/Storage/Status.pm b/src/PVE/API2/Storage/Status.pm
> index f7e324f..77ed57c 100644
> --- a/src/PVE/API2/Storage/Status.pm
> +++ b/src/PVE/API2/Storage/Status.pm
> @@ -749,6 +749,7 @@ __PACKAGE__->register_method({
>  				'efi-state-lost',
>  				'guest-is-running',
>  				'nvme-unsupported',
> +				'ova-needs-extracting',
>  				'ovmf-with-lsi-unsupported',
>  				'serial-port-socket-only',
>  			    ],
> diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
> index f8ea93d..bc073ef 100755
> --- a/src/PVE/Storage.pm
> +++ b/src/PVE/Storage.pm
> @@ -2189,4 +2189,63 @@ sub get_import_metadata {
>      return $plugin->get_import_metadata($scfg, $volname, $storeid);
>  }
>  
> +sub copy_needs_extraction {
> +    my ($volid) = @_;
> +    my ($storeid, $volname) = parse_volume_id($volid);
> +    my $cfg = config();
> +    my $scfg = storage_config($cfg, $storeid);
> +    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
> +
> +    my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $file_format) =
> +	$plugin->parse_volname($volname);
> +
> +    return $vtype eq 'import' && defined($file_format);
> +}

not sure this one is needed? it could also just be a call to
PVE::Storage::parse_volname in qemu-server?

> +
> +sub extract_disk_from_import_file {

similarly, this is basically PVE::Storage::get_import_dir + the
run_command call, and could live in qemu-server?

> +    my ($volid, $vmid) = @_;
> +
> +    my ($storeid, $volname) = parse_volume_id($volid);
> +    my $cfg = config();
> +    my $scfg = storage_config($cfg, $storeid);
> +    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
> +
> +    my ($vtype, $name, undef, undef, undef, undef, $file_format) =
> +	$plugin->parse_volname($volname);
> +
> +    die "only files with content type 'import' can be extracted\n"
> +	if $vtype ne 'import' || !defined($file_format);
> +
> +    # extract the inner file from the name
> +    if ($volid =~ m!${name}/([^/]+)$!) {
> +	$name = $1;

we should probably be very conservative here and only allow [-_a-z0-9]
as a start - or something similar rather restrictive..

> +    }
> +
> +    my ($source_file) = $plugin->path($scfg, $volname, $storeid);
> +
> +    my $destdir = $plugin->get_subdir($scfg, 'import');
> +    my $pid = $$;
> +    $destdir .= "/.tmp_${pid}_${vmid}";
> +    mkdir $destdir;
> +
> +    ($source_file) = $source_file =~ m|^(/.*)|; # untaint

again a rather interesting untaint ;)

> +
> +    run_command(['tar', '-x', '-C', $destdir, '-f', $source_file, $name]);

if $name was a symlink in the archive, you've now created a symlink
pointing wherever..

> +
> +    return "$destdir/$name";

and this returns an absolute path to it, and now we are in trouble land
;) we should check that the file is a real file here..

> +}
> +
> +sub cleanup_extracted_image {

same for this?

> +    my ($source) = @_;
> +
> +    if ($source =~ m|^(/.+/\.tmp_[0-9]+_[0-9]+)/[^/]+$|) {
> +	my $tmpdir = $1;
> +
> +	unlink $source or $! == ENOENT or die "removing image $source failed: $!\n";
> +	rmdir $tmpdir or $! == ENOENT or die "removing tmpdir $tmpdir failed: $!\n";
> +    } else {
> +	die "invalid extraced image path '$source'\n";

nit: typo

these are also not discoverable if the error handling in qemu-server
failed for some reason.. might be a source of unwanted space
consumption..

> +    }
> +}
> +
>  1;
> diff --git a/src/PVE/Storage/DirPlugin.pm b/src/PVE/Storage/DirPlugin.pm
> index 4dc7708..50ceab7 100644
> --- a/src/PVE/Storage/DirPlugin.pm
> +++ b/src/PVE/Storage/DirPlugin.pm
> @@ -260,14 +260,25 @@ sub get_import_metadata {
>      # NOTE: all types must be added to the return schema of the import-metadata API endpoint
>      my $warnings = [];
>  
> +    my $isOva = 0;
> +    if ($path =~ m!\.ova!) {
> +	$isOva = 1;
> +	push @$warnings, { type => 'ova-needs-extracting' };
> +    }
>      my $res = PVE::Storage::OVF::parse_ovf($path, $isOva);
>      my $disks = {};
>      for my $disk ($res->{disks}->@*) {
>  	my $id = $disk->{disk_address};
>  	my $size = $disk->{virtual_size};
>  	my $path = $disk->{relative_path};
> +	my $volid;
> +	if ($isOva) {
> +	    $volid = "$storeid:$volname/$path";
> +	} else {
> +	    $volid = "$storeid:import/$path",
> +	}
>  	$disks->{$id} = {
> -	    volid => "$storeid:import/$path",
> +	    volid => $volid,
>  	    defined($size) ? (size => $size) : (),
>  	};
>      }
> diff --git a/src/PVE/Storage/OVF.pm b/src/PVE/Storage/OVF.pm
> index 4a322b9..fb850a8 100644
> --- a/src/PVE/Storage/OVF.pm
> +++ b/src/PVE/Storage/OVF.pm
> @@ -85,11 +85,37 @@ sub id_to_pve {
>      }
>  }
>  
> +# technically defined in DSP0004 (https://www.dmtf.org/dsp/DSP0004) as an ABNF
> +# but realistically this always takes the form of 'bytes * base^exponent'
> +sub try_parse_capacity_unit {
> +    my ($unit_text) = @_;
> +
> +    if ($unit_text =~ m/^\s*byte\s*\*\s*([0-9]+)\s*\^\s*([0-9]+)\s*$/) {
> +	my $base = $1;
> +	my $exp = $2;
> +	return $base ** $exp;
> +    }
> +
> +    return undef;
> +}
> +
>  # returns two references, $qm which holds qm.conf style key/values, and \@disks
>  sub parse_ovf {
> -    my ($ovf, $debug) = @_;
> +    my ($ovf, $isOva, $debug) = @_;
> +
> +    # we have to ignore missing disk images for ova
> +    my $dom;
> +    if ($isOva) {
> +	my $raw = "";
> +	PVE::Tools::run_command(['tar', '-xO', '--wildcards', '--occurrence=1', '-f', $ovf, '*.ovf'], outfunc => sub {
> +	    my $line = shift;
> +	    $raw .= $line;
> +	});
> +	$dom = XML::LibXML->load_xml(string => $raw, no_blanks => 1);
> +    } else {
> +	$dom = XML::LibXML->load_xml(location => $ovf, no_blanks => 1);
> +    }
>  
> -    my $dom = XML::LibXML->load_xml(location => $ovf, no_blanks => 1);
>  
>      # register the xml namespaces in a xpath context object
>      # 'ovf' is the default namespace so it will prepended to each xml element
> @@ -177,7 +203,17 @@ sub parse_ovf {
>  	# @ needs to be escaped to prevent Perl double quote interpolation
>  	my $xpath_find_fileref = sprintf("/ovf:Envelope/ovf:DiskSection/\
>  ovf:Disk[\@ovf:diskId='%s']/\@ovf:fileRef", $disk_id);
> +	my $xpath_find_capacity = sprintf("/ovf:Envelope/ovf:DiskSection/\
> +ovf:Disk[\@ovf:diskId='%s']/\@ovf:capacity", $disk_id);
> +	my $xpath_find_capacity_unit = sprintf("/ovf:Envelope/ovf:DiskSection/\
> +ovf:Disk[\@ovf:diskId='%s']/\@ovf:capacityAllocationUnits", $disk_id);
>  	my $fileref = $xpc->findvalue($xpath_find_fileref);
> +	my $capacity = $xpc->findvalue($xpath_find_capacity);
> +	my $capacity_unit = $xpc->findvalue($xpath_find_capacity_unit);
> +	my $virtual_size;
> +	if (my $factor = try_parse_capacity_unit($capacity_unit)) {
> +	    $virtual_size = $capacity * $factor;
> +	}
>  
>  	my $valid_url_chars = qr@${valid_uripath_chars}|/@;
>  	if (!$fileref || $fileref !~ m/^${valid_url_chars}+$/) {
> @@ -217,23 +253,26 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
>  	    die "error parsing $filepath, are you using a symlink ?\n";
>  	}
>  
> -	if (!-e $backing_file_path) {
> +	if (!-e $backing_file_path && !$isOva) {

this is actually not enough, the realpath call above can already fail if
$filepath points to a file in a subdir (note that realpath will only
check the path components, not the file itself).

e.g.:

error parsing foo/bar/chr-6.49.13-disk1.vmdk, are you using a symlink ? (500)

we could also tighten what we allow as filepath here, in addition to the
extraction code.

>  	    die "error parsing $filepath, file seems not to exist at $backing_file_path\n";
>  	}
>  
>  	($backing_file_path) = $backing_file_path =~ m|^(/.*)|; # untaint
>  	($filepath) = $filepath =~ m|^(.*)|; # untaint
>  
> -	my $virtual_size = PVE::Storage::file_size_info($backing_file_path);
> -	die "error parsing $backing_file_path, cannot determine file size\n"
> -	    if !$virtual_size;
> +	if (!$isOva) {
> +	    my $size = PVE::Storage::file_size_info($backing_file_path);
> +	    die "error parsing $backing_file_path, cannot determine file size\n"
> +		if !$size;
>  
> +	    $virtual_size = $size;
> +	}
>  	$pve_disk = {
>  	    disk_address => $pve_disk_address,
>  	    backing_file => $backing_file_path,
> -	    virtual_size => $virtual_size
>  	    relative_path => $filepath,
>  	};
> +	$pve_disk->{virtual_size} = $virtual_size if defined($virtual_size);
>  	push @disks, $pve_disk;
>  
>      }
> diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
> index deaf8b2..ea069ab 100644
> --- a/src/PVE/Storage/Plugin.pm
> +++ b/src/PVE/Storage/Plugin.pm
> @@ -654,6 +654,11 @@ sub parse_volname {
>  	return ('backup', $fn);
>      } elsif ($volname =~ m!^snippets/([^/]+)$!) {
>  	return ('snippets', $1);
> +    } elsif ($volname =~ m!^import/([^/]+\.ova)\/([^/]+)$!) {
> +	my $archive = $1;
> +	my $file = $2;
> +	my (undef, $format, undef) = parse_name_dir($file);
> +	return ('import', $archive, 0, undef, undef, undef, $format);
>      } elsif ($volname =~ m!^import/([^/]+$PVE::Storage::IMPORT_EXT_RE_1)$!) {
>  	return ('import', $1);
>      } elsif ($volname =~ m!^import/([^/]+\.(raw|vmdk|qcow2))$!) {
> -- 
> 2.39.2
> 
> 
> 
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 
> 
> 


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH storage 2/9] plugin: dir: implement import content type
  2024-04-16 13:18 ` [pve-devel] [PATCH storage 2/9] plugin: dir: implement import content type Dominik Csapak
  2024-04-17 10:07   ` Fiona Ebner
@ 2024-04-17 12:46   ` Fabian Grünbichler
  1 sibling, 0 replies; 67+ messages in thread
From: Fabian Grünbichler @ 2024-04-17 12:46 UTC (permalink / raw)
  To: Proxmox VE development discussion

On April 16, 2024 3:18 pm, Dominik Csapak wrote:
> in DirPlugin and not Plugin (because of cyclic dependency of
> Plugin -> OVF -> Storage -> Plugin otherwise)
> 
> only ovf is currently supported (though ova will be shown in import
> listing), expects the files to not be in a subdir, and adjacent to the
> ovf file.
> 
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
>  src/PVE/Storage.pm                 |  8 ++++++-
>  src/PVE/Storage/DirPlugin.pm       | 37 +++++++++++++++++++++++++++++-
>  src/PVE/Storage/OVF.pm             |  2 ++
>  src/PVE/Storage/Plugin.pm          | 18 ++++++++++++++-
>  src/test/parse_volname_test.pm     | 13 +++++++++++
>  src/test/path_to_volume_id_test.pm | 16 +++++++++++++
>  6 files changed, 91 insertions(+), 3 deletions(-)
> 
> diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
> index 40314a8..f8ea93d 100755
> --- a/src/PVE/Storage.pm
> +++ b/src/PVE/Storage.pm
> @@ -114,6 +114,8 @@ our $VZTMPL_EXT_RE_1 = qr/\.tar\.(gz|xz|zst)/i;
>  
>  our $BACKUP_EXT_RE_2 = qr/\.(tgz|(?:tar|vma)(?:\.(${\PVE::Storage::Plugin::COMPRESSOR_RE}))?)/;
>  
> +our $IMPORT_EXT_RE_1 = qr/\.(ov[af])/;
> +
>  # FIXME remove with PVE 8.0, add versioned breaks for pve-manager
>  our $vztmpl_extension_re = $VZTMPL_EXT_RE_1;
>  
> @@ -612,6 +614,7 @@ sub path_to_volume_id {
>  	my $backupdir = $plugin->get_subdir($scfg, 'backup');
>  	my $privatedir = $plugin->get_subdir($scfg, 'rootdir');
>  	my $snippetsdir = $plugin->get_subdir($scfg, 'snippets');
> +	my $importdir = $plugin->get_subdir($scfg, 'import');
>  
>  	if ($path =~ m!^$imagedir/(\d+)/([^/\s]+)$!) {
>  	    my $vmid = $1;
> @@ -640,6 +643,9 @@ sub path_to_volume_id {
>  	} elsif ($path =~ m!^$snippetsdir/([^/]+)$!) {
>  	    my $name = $1;
>  	    return ('snippets', "$sid:snippets/$name");
> +	} elsif ($path =~ m!^$importdir/([^/]+${IMPORT_EXT_RE_1})$!) {
> +	    my $name = $1;
> +	    return ('import', "$sid:import/$name");
>  	}
>      }
>  
> @@ -2170,7 +2176,7 @@ sub normalize_content_filename {
>  # If a storage provides an 'import' content type, it should be able to provide
>  # an object implementing the import information interface.
>  sub get_import_metadata {
> -    my ($cfg, $volid) = @_;
> +    my ($cfg, $volid, $target) = @_;
>  
>      my ($storeid, $volname) = parse_volume_id($volid);
>  
> diff --git a/src/PVE/Storage/DirPlugin.pm b/src/PVE/Storage/DirPlugin.pm
> index 2efa8d5..4dc7708 100644
> --- a/src/PVE/Storage/DirPlugin.pm
> +++ b/src/PVE/Storage/DirPlugin.pm
> @@ -10,6 +10,7 @@ use IO::File;
>  use POSIX;
>  
>  use PVE::Storage::Plugin;
> +use PVE::Storage::OVF;
>  use PVE::JSONSchema qw(get_standard_option);
>  
>  use base qw(PVE::Storage::Plugin);
> @@ -22,7 +23,7 @@ sub type {
>  
>  sub plugindata {
>      return {
> -	content => [ { images => 1, rootdir => 1, vztmpl => 1, iso => 1, backup => 1, snippets => 1, none => 1 },
> +	content => [ { images => 1, rootdir => 1, vztmpl => 1, iso => 1, backup => 1, snippets => 1, none => 1, import => 1 },
>  		     { images => 1,  rootdir => 1 }],
>  	format => [ { raw => 1, qcow2 => 1, vmdk => 1, subvol => 1 } , 'raw' ],
>      };
> @@ -247,4 +248,38 @@ sub check_config {
>      return $opts;
>  }
>  
> +sub get_import_metadata {
> +    my ($class, $scfg, $volname, $storeid, $target) = @_;
> +
> +    if ($volname !~ m!^([^/]+)/.*${PVE::Storage::IMPORT_EXT_RE_1}$!) {
> +	die "volume '$volname' does not look like an importable vm config\n";
> +    }

shouldn't this happen in parse_volname? or rather, why is this different
than the code there?

> +
> +    my $path = $class->path($scfg, $volname, $storeid, undef);
> +
> +    # NOTE: all types must be added to the return schema of the import-metadata API endpoint
> +    my $warnings = [];
> +
> +    my $res = PVE::Storage::OVF::parse_ovf($path, $isOva);

nit: $isOva doesn't yet exist in this patch, neither as variable here,
nor as parameter in parse_ovf ;)

> +    my $disks = {};
> +    for my $disk ($res->{disks}->@*) {
> +	my $id = $disk->{disk_address};
> +	my $size = $disk->{virtual_size};
> +	my $path = $disk->{relative_path};

see below

> +	$disks->{$id} = {
> +	    volid => "$storeid:import/$path",
> +	    defined($size) ? (size => $size) : (),
> +	};
> +    }
> +
> +    return {
> +	type => 'vm',
> +	source => $volname,
> +	'create-args' => $res->{qm},
> +	'disks' => $disks,
> +	warnings => $warnings,
> +	net => [],
> +    };
> +}
> +
>  1;
> diff --git a/src/PVE/Storage/OVF.pm b/src/PVE/Storage/OVF.pm
> index 90ca453..4a322b9 100644
> --- a/src/PVE/Storage/OVF.pm
> +++ b/src/PVE/Storage/OVF.pm
> @@ -222,6 +222,7 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
>  	}
>  
>  	($backing_file_path) = $backing_file_path =~ m|^(/.*)|; # untaint
> +	($filepath) = $filepath =~ m|^(.*)|; # untaint

nit: that's a weird untaint ;) maybe add the `$` at least to prevent
future bugs?

>  
>  	my $virtual_size = PVE::Storage::file_size_info($backing_file_path);
>  	die "error parsing $backing_file_path, cannot determine file size\n"
> @@ -231,6 +232,7 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
>  	    disk_address => $pve_disk_address,
>  	    backing_file => $backing_file_path,
>  	    virtual_size => $virtual_size
> +	    relative_path => $filepath,

nothing actually ensures $filepath doesn't contain an absolute path
(that is also valid when concatenating it with the ovs dir).. right now
it would then choke on the double `//`, but I haven't tried all the
wonky stuff you could possibly do..

wouldn't it be more safe to transform back from $backing_file_path to
ensure there is no mismatch/potential for confusion? also, I am not sure
if this wouldn't allow injecting stuff (if $filepath contains weird
characters, or its resolved variant $backing_file_path does) - should we
limit ourselves to a certain character set that we know to be okay as
part of a volid?

I have to admit I didn't follow the XML DTD stuff in detail (it's rather
convoluted), so maybe parts of this are handled there, but I am not sure
I would want to rely on that in any case ;)

>  	};
>  	push @disks, $pve_disk;
>  
> diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
> index 22a9729..deaf8b2 100644
> --- a/src/PVE/Storage/Plugin.pm
> +++ b/src/PVE/Storage/Plugin.pm
> @@ -654,6 +654,10 @@ sub parse_volname {
>  	return ('backup', $fn);
>      } elsif ($volname =~ m!^snippets/([^/]+)$!) {
>  	return ('snippets', $1);
> +    } elsif ($volname =~ m!^import/([^/]+$PVE::Storage::IMPORT_EXT_RE_1)$!) {
> +	return ('import', $1);
> +    } elsif ($volname =~ m!^import/([^/]+\.(raw|vmdk|qcow2))$!) {
> +	return ('images', $1, 0, undef, undef, undef, $2);
>      }
>  
>      die "unable to parse directory volume name '$volname'\n";
> @@ -666,6 +670,7 @@ my $vtype_subdirs = {
>      vztmpl => 'template/cache',
>      backup => 'dump',
>      snippets => 'snippets',
> +    import => 'import',
>  };
>  
>  sub get_vtype_subdirs {
> @@ -691,6 +696,11 @@ sub filesystem_path {
>      my ($vtype, $name, $vmid, undef, undef, $isBase, $format) =
>  	$class->parse_volname($volname);
>  
> +    if (defined($vmid) && $vmid == 0) {
> +	# import volumes?
> +	$vtype = 'import';
> +    }
> +
>      # Note: qcow2/qed has internal snapshot, so path is always
>      # the same (with or without snapshot => same file).
>      die "can't snapshot this image format\n"
> @@ -1227,7 +1237,7 @@ sub list_images {
>      return $res;
>  }
>  
> -# list templates ($tt = <iso|vztmpl|backup|snippets>)
> +# list templates ($tt = <iso|vztmpl|backup|snippets|import>)
>  my $get_subdir_files = sub {
>      my ($sid, $path, $tt, $vmid) = @_;
>  
> @@ -1283,6 +1293,10 @@ my $get_subdir_files = sub {
>  		volid => "$sid:snippets/". basename($fn),
>  		format => 'snippet',
>  	    };
> +	} elsif ($tt eq 'import') {
> +	    next if $fn !~ m!/([^/]+$PVE::Storage::IMPORT_EXT_RE_1)$!i;
> +
> +	    $info = { volid => "$sid:import/$1", format => "$2" };
>  	}
>  
>  	$info->{size} = $st->size;
> @@ -1317,6 +1331,8 @@ sub list_volumes {
>  		$data = $get_subdir_files->($storeid, $path, 'backup', $vmid);
>  	    } elsif ($type eq 'snippets') {
>  		$data = $get_subdir_files->($storeid, $path, 'snippets');
> +	    } elsif ($type eq 'import') {
> +		$data = $get_subdir_files->($storeid, $path, 'import');
>  	    }
>  	}
>  
> diff --git a/src/test/parse_volname_test.pm b/src/test/parse_volname_test.pm
> index d6ac885..59819f0 100644
> --- a/src/test/parse_volname_test.pm
> +++ b/src/test/parse_volname_test.pm
> @@ -81,6 +81,19 @@ my $tests = [
>  	expected    => ['snippets', 'hookscript.pl'],
>      },
>      #
> +    #
> +    #
> +    {
> +	description => "Import, ova",
> +	volname     => 'import/import.ova',
> +	expected    => ['import', 'import.ova'],
> +    },
> +    {
> +	description => "Import, ovf",
> +	volname     => 'import/import.ovf',
> +	expected    => ['import', 'import.ovf'],
> +    },
> +    #
>      # failed matches
>      #
>      {
> diff --git a/src/test/path_to_volume_id_test.pm b/src/test/path_to_volume_id_test.pm
> index 8149c88..8bc1bf8 100644
> --- a/src/test/path_to_volume_id_test.pm
> +++ b/src/test/path_to_volume_id_test.pm
> @@ -174,6 +174,22 @@ my @tests = (
>  	    'local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.xz',
>  	],
>      },
> +    {
> +	description => 'Import, ova',
> +	volname     => "$storage_dir/import/import.ova",
> +	expected    => [
> +	    'import',
> +	    'local:import/import.ova',
> +	],
> +    },
> +    {
> +	description => 'Import, ovf',
> +	volname     => "$storage_dir/import/import.ovf",
> +	expected    => [
> +	    'import',
> +	    'local:import/import.ovf',
> +	],
> +    },
>  
>      # no matches, path or files with failures
>      {
> -- 
> 2.39.2
> 
> 
> 
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 
> 
> 


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH storage 3/9] plugin: dir: handle ova files for import
  2024-04-17 10:52   ` Fiona Ebner
@ 2024-04-17 13:07     ` Dominik Csapak
  2024-04-17 13:39       ` Fabian Grünbichler
  2024-04-18  7:22       ` Fiona Ebner
  0 siblings, 2 replies; 67+ messages in thread
From: Dominik Csapak @ 2024-04-17 13:07 UTC (permalink / raw)
  To: Fiona Ebner, Proxmox VE development discussion

On 4/17/24 12:52, Fiona Ebner wrote:
> Am 16.04.24 um 15:18 schrieb Dominik Csapak:
>> since we want to handle ova files (which are only ovf+vmdks bundled in a
>> tar file) for import, add code that handles that.
>>
>> we introduce a valid volname for files contained in ovas like this:
>>
>>   storage:import/archive.ova/disk-1.vmdk
>>
>> by basically treating the last part of the path as the name for the
>> contained disk we want.
>>
>> we then provide 3 functions to use for that:
>>
>> * copy_needs_extraction: determines from the given volid (like above) if
>>    that needs extraction to copy it, currently only 'import' vtype +
>>    defined format returns true here (if we have more options in the
>>    future, we can of course easily extend that)
>>
>> * extract_disk_from_import_file: this actually extracts the file from
>>    the archive. Currently only ova is supported, so the extraction with
>>    'tar' is hardcoded, but again we can easily extend/modify that should
>>    we need to.
>>
>>    we currently extract into the import storage in a directory named:
>>    `.tmp_<pid>_<targetvmid>` which should not clash with concurrent
>>    operations (though we do extract it multiple times then)
>>
> 
> Could we do "extract upon upload", "tar upon download" instead? Sure
> some people surely want to drop the ova manually, but we could tell them
> they need to extract it first too. Depending on the amount of headache
> this would save us, it might be worth it.

we could, but this opens a whole other can of worms, namely
what to do with conflicting filenames for different ovas?

we'd then either have to magically match the paths from the ovfs
to some subdir that don't overlap

or we'd have to abort everytime we encounter identical disk names

IMHO this would be less practical than just extract on demand...

> 
>>    alternatively we could implement either a 'tmpstorage' parameter,
>>    or use e.g. '/var/tmp/' or similar, but re-using the current storage
>>    seemed ok.
>>
>> * cleanup_extracted_image: intended to cleanup the extracted images from
>>    above, including the surrounding temporary directory
>>
>> we have to modify the `parse_ovf` a bit to handle the missing disk
>> images, and we parse the size out of the ovf part (since this is
>> informal only, it should be no problem if we cannot parse it sometimes)
>>
>> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
>> ---
>>   src/PVE/API2/Storage/Status.pm |  1 +
>>   src/PVE/Storage.pm             | 59 ++++++++++++++++++++++++++++++++++
>>   src/PVE/Storage/DirPlugin.pm   | 13 +++++++-
>>   src/PVE/Storage/OVF.pm         | 53 ++++++++++++++++++++++++++----
>>   src/PVE/Storage/Plugin.pm      |  5 +++
>>   5 files changed, 123 insertions(+), 8 deletions(-)
>>
>> diff --git a/src/PVE/API2/Storage/Status.pm b/src/PVE/API2/Storage/Status.pm
>> index f7e324f..77ed57c 100644
>> --- a/src/PVE/API2/Storage/Status.pm
>> +++ b/src/PVE/API2/Storage/Status.pm
>> @@ -749,6 +749,7 @@ __PACKAGE__->register_method({
>>   				'efi-state-lost',
>>   				'guest-is-running',
>>   				'nvme-unsupported',
>> +				'ova-needs-extracting',
>>   				'ovmf-with-lsi-unsupported',
>>   				'serial-port-socket-only',
>>   			    ],
>> diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
>> index f8ea93d..bc073ef 100755
>> --- a/src/PVE/Storage.pm
>> +++ b/src/PVE/Storage.pm
>> @@ -2189,4 +2189,63 @@ sub get_import_metadata {
>>       return $plugin->get_import_metadata($scfg, $volname, $storeid);
>>   }
>>   
> 
> Shouldn't the following three functions call into plugin methods
> instead? That'd seem much more future-proof to me.

could be, i just did not want to extend the plugin api for that
but as fabian wrote, maybe we should put them in qemu-server
altogether for now?

(after thinking about it a bit, i'd be in favor of putting it in
qemu-server, because mainly i don't want to add to the plugin api further)

what do you think @fiona @fabian?

> 
>> +sub copy_needs_extraction {
>> +    my ($volid) = @_;
>> +    my ($storeid, $volname) = parse_volume_id($volid);
>> +    my $cfg = config();
>> +    my $scfg = storage_config($cfg, $storeid);
>> +    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
>> +
>> +    my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $file_format) =
>> +	$plugin->parse_volname($volname);
>> +
>> +    return $vtype eq 'import' && defined($file_format);
> 
> E.g this seems rather hacky, and puts a weird coupling on a future
> import plugin's parse_volname() function (presence of $file_format).

would it be better to check the volid again for '.ova/something$' ?
or do you have a better idea?
(especially if we want to have this maybe in qemu-server)

> 
>> +}
>> +
>> +sub extract_disk_from_import_file {
>> +    my ($volid, $vmid) = @_;
>> +
>> +    my ($storeid, $volname) = parse_volume_id($volid);
>> +    my $cfg = config();
>> +    my $scfg = storage_config($cfg, $storeid);
>> +    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
>> +
>> +    my ($vtype, $name, undef, undef, undef, undef, $file_format) =
>> +	$plugin->parse_volname($volname);
>> +
>> +    die "only files with content type 'import' can be extracted\n"
>> +	if $vtype ne 'import' || !defined($file_format);
>> +
>> +    # extract the inner file from the name
>> +    if ($volid =~ m!${name}/([^/]+)$!) {
>> +	$name = $1;
>> +    }
>> +
>> +    my ($source_file) = $plugin->path($scfg, $volname, $storeid);
>> +
>> +    my $destdir = $plugin->get_subdir($scfg, 'import');
>> +    my $pid = $$;
>> +    $destdir .= "/.tmp_${pid}_${vmid}";
>> +    mkdir $destdir;
>> +
>> +    ($source_file) = $source_file =~ m|^(/.*)|; # untaint
>> +
>> +    run_command(['tar', '-x', '-C', $destdir, '-f', $source_file, $name]);
>> +
>> +    return "$destdir/$name";
>> +}
>> +
>> +sub cleanup_extracted_image {
>> +    my ($source) = @_;
>> +
>> +    if ($source =~ m|^(/.+/\.tmp_[0-9]+_[0-9]+)/[^/]+$|) {
>> +	my $tmpdir = $1;
>> +
>> +	unlink $source or $! == ENOENT or die "removing image $source failed: $!\n";
>> +	rmdir $tmpdir or $! == ENOENT or die "removing tmpdir $tmpdir failed: $!\n";
>> +    } else {
>> +	die "invalid extraced image path '$source'\n";
>> +    }
>> +}
>> +
>>   1;
>> diff --git a/src/PVE/Storage/DirPlugin.pm b/src/PVE/Storage/DirPlugin.pm
>> index 4dc7708..50ceab7 100644
>> --- a/src/PVE/Storage/DirPlugin.pm
>> +++ b/src/PVE/Storage/DirPlugin.pm
>> @@ -260,14 +260,25 @@ sub get_import_metadata {
>>       # NOTE: all types must be added to the return schema of the import-metadata API endpoint
>>       my $warnings = [];
>>   
>> +    my $isOva = 0;
>> +    if ($path =~ m!\.ova!) {
> 
> Would be nicer if parse_volname() would return the $file_format and we
> chould check for that. Also missing the $ in the regex, so you'd
> mismatch a weird filename like ABC.ovaXYZ.ovf or?

yeah the $ is missing, and yes, we could return ova/ovf as format there
as we want to change the 'needs extracting' check anyway


> 
>> +	$isOva = 1;
>> +	push @$warnings, { type => 'ova-needs-extracting' };
>> +    }
>>       my $res = PVE::Storage::OVF::parse_ovf($path, $isOva);
>>       my $disks = {};
>>       for my $disk ($res->{disks}->@*) {
>>   	my $id = $disk->{disk_address};
>>   	my $size = $disk->{virtual_size};
>>   	my $path = $disk->{relative_path};
>> +	my $volid;
>> +	if ($isOva) {
>> +	    $volid = "$storeid:$volname/$path";
>> +	} else {
>> +	    $volid = "$storeid:import/$path",
>> +	}
>>   	$disks->{$id} = {
>> -	    volid => "$storeid:import/$path",
>> +	    volid => $volid,
>>   	    defined($size) ? (size => $size) : (),
>>   	};
>>       }
>> diff --git a/src/PVE/Storage/OVF.pm b/src/PVE/Storage/OVF.pm
>> index 4a322b9..fb850a8 100644
>> --- a/src/PVE/Storage/OVF.pm
>> +++ b/src/PVE/Storage/OVF.pm
>> @@ -85,11 +85,37 @@ sub id_to_pve {
>>       }
>>   }
>>   
>> +# technically defined in DSP0004 (https://www.dmtf.org/dsp/DSP0004) as an ABNF
>> +# but realistically this always takes the form of 'bytes * base^exponent'
> 
> The comment wrongly says 'bytes' instead of 'byte' (your test examples
> confirm this).
> 
>> +sub try_parse_capacity_unit {
>> +    my ($unit_text) = @_;
>> +
>> +    if ($unit_text =~ m/^\s*byte\s*\*\s*([0-9]+)\s*\^\s*([0-9]+)\s*$/) {
> 
> Fun regex :P
> 
>> +	my $base = $1;
>> +	my $exp = $2;
>> +	return $base ** $exp;
>> +    }
>> +
>> +    return undef;
>> +}
>> +
> 
> (...)
> 
>> diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
>> index deaf8b2..ea069ab 100644
>> --- a/src/PVE/Storage/Plugin.pm
>> +++ b/src/PVE/Storage/Plugin.pm
>> @@ -654,6 +654,11 @@ sub parse_volname {
>>   	return ('backup', $fn);
>>       } elsif ($volname =~ m!^snippets/([^/]+)$!) {
>>   	return ('snippets', $1);
>> +    } elsif ($volname =~ m!^import/([^/]+\.ova)\/([^/]+)$!) {
>> +	my $archive = $1;
>> +	my $file = $2;
>> +	my (undef, $format, undef) = parse_name_dir($file);
>> +	return ('import', $archive, 0, undef, undef, undef, $format);
> 
> So we return the same $name for different things here? Not super happy
> with that either. If we were to get creative we could say the archive is
> the "base" of the image, but surely also comes with caveats.

i'll change this in a v2 should not be necessary

> 
>>       } elsif ($volname =~ m!^import/([^/]+$PVE::Storage::IMPORT_EXT_RE_1)$!) {
>>   	return ('import', $1);
>>       } elsif ($volname =~ m!^import/([^/]+\.(raw|vmdk|qcow2))$!) {



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH storage 3/9] plugin: dir: handle ova files for import
  2024-04-17 12:45   ` Fabian Grünbichler
@ 2024-04-17 13:10     ` Dominik Csapak
  2024-04-17 13:52       ` Fabian Grünbichler
  0 siblings, 1 reply; 67+ messages in thread
From: Dominik Csapak @ 2024-04-17 13:10 UTC (permalink / raw)
  To: Proxmox VE development discussion, Fabian Grünbichler

On 4/17/24 14:45, Fabian Grünbichler wrote:
> On April 16, 2024 3:18 pm, Dominik Csapak wrote:
>> since we want to handle ova files (which are only ovf+vmdks bundled in a
>> tar file) for import, add code that handles that.
>>
>> we introduce a valid volname for files contained in ovas like this:
>>
>>   storage:import/archive.ova/disk-1.vmdk
>>
>> by basically treating the last part of the path as the name for the
>> contained disk we want.
>>
>> we then provide 3 functions to use for that:
>>
>> * copy_needs_extraction: determines from the given volid (like above) if
>>    that needs extraction to copy it, currently only 'import' vtype +
>>    defined format returns true here (if we have more options in the
>>    future, we can of course easily extend that)
>>
>> * extract_disk_from_import_file: this actually extracts the file from
>>    the archive. Currently only ova is supported, so the extraction with
>>    'tar' is hardcoded, but again we can easily extend/modify that should
>>    we need to.
>>
>>    we currently extract into the import storage in a directory named:
>>    `.tmp_<pid>_<targetvmid>` which should not clash with concurrent
>>    operations (though we do extract it multiple times then)
>>
>>    alternatively we could implement either a 'tmpstorage' parameter,
>>    or use e.g. '/var/tmp/' or similar, but re-using the current storage
>>    seemed ok.
>>
>> * cleanup_extracted_image: intended to cleanup the extracted images from
>>    above, including the surrounding temporary directory
> 
> the helpers could also all live in qemu-server for now, which would also
> make extending it to use a different storage, or direct importing via a
> pipe easier? see below ;)
> 
>>
>> we have to modify the `parse_ovf` a bit to handle the missing disk
>> images, and we parse the size out of the ovf part (since this is
>> informal only, it should be no problem if we cannot parse it sometimes)
>>
>> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
>> ---
>>   src/PVE/API2/Storage/Status.pm |  1 +
>>   src/PVE/Storage.pm             | 59 ++++++++++++++++++++++++++++++++++
>>   src/PVE/Storage/DirPlugin.pm   | 13 +++++++-
>>   src/PVE/Storage/OVF.pm         | 53 ++++++++++++++++++++++++++----
>>   src/PVE/Storage/Plugin.pm      |  5 +++
>>   5 files changed, 123 insertions(+), 8 deletions(-)
>>
>> diff --git a/src/PVE/API2/Storage/Status.pm b/src/PVE/API2/Storage/Status.pm
>> index f7e324f..77ed57c 100644
>> --- a/src/PVE/API2/Storage/Status.pm
>> +++ b/src/PVE/API2/Storage/Status.pm
>> @@ -749,6 +749,7 @@ __PACKAGE__->register_method({
>>   				'efi-state-lost',
>>   				'guest-is-running',
>>   				'nvme-unsupported',
>> +				'ova-needs-extracting',
>>   				'ovmf-with-lsi-unsupported',
>>   				'serial-port-socket-only',
>>   			    ],
>> diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
>> index f8ea93d..bc073ef 100755
>> --- a/src/PVE/Storage.pm
>> +++ b/src/PVE/Storage.pm
>> @@ -2189,4 +2189,63 @@ sub get_import_metadata {
>>       return $plugin->get_import_metadata($scfg, $volname, $storeid);
>>   }
>>   
>> +sub copy_needs_extraction {
>> +    my ($volid) = @_;
>> +    my ($storeid, $volname) = parse_volume_id($volid);
>> +    my $cfg = config();
>> +    my $scfg = storage_config($cfg, $storeid);
>> +    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
>> +
>> +    my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $file_format) =
>> +	$plugin->parse_volname($volname);
>> +
>> +    return $vtype eq 'import' && defined($file_format);
>> +}
> 
> not sure this one is needed? it could also just be a call to
> PVE::Storage::parse_volname in qemu-server?
> 
>> +
>> +sub extract_disk_from_import_file {
> 
> similarly, this is basically PVE::Storage::get_import_dir + the
> run_command call, and could live in qemu-server?
> 
>> +    my ($volid, $vmid) = @_;
>> +
>> +    my ($storeid, $volname) = parse_volume_id($volid);
>> +    my $cfg = config();
>> +    my $scfg = storage_config($cfg, $storeid);
>> +    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
>> +
>> +    my ($vtype, $name, undef, undef, undef, undef, $file_format) =
>> +	$plugin->parse_volname($volname);
>> +
>> +    die "only files with content type 'import' can be extracted\n"
>> +	if $vtype ne 'import' || !defined($file_format);
>> +
>> +    # extract the inner file from the name
>> +    if ($volid =~ m!${name}/([^/]+)$!) {
>> +	$name = $1;
> 
> we should probably be very conservative here and only allow [-_a-z0-9]
> as a start - or something similar rather restrictive..
> 
>> +    }
>> +
>> +    my ($source_file) = $plugin->path($scfg, $volname, $storeid);
>> +
>> +    my $destdir = $plugin->get_subdir($scfg, 'import');
>> +    my $pid = $$;
>> +    $destdir .= "/.tmp_${pid}_${vmid}";
>> +    mkdir $destdir;
>> +
>> +    ($source_file) = $source_file =~ m|^(/.*)|; # untaint
> 
> again a rather interesting untaint ;)
> 
>> +
>> +    run_command(['tar', '-x', '-C', $destdir, '-f', $source_file, $name]);
> 
> if $name was a symlink in the archive, you've now created a symlink
> pointing wherever..
> 
>> +
>> +    return "$destdir/$name";
> 
> and this returns an absolute path to it, and now we are in trouble land
> ;) we should check that the file is a real file here..
> 
>> +}
>> +
>> +sub cleanup_extracted_image {
> 
> same for this?
> 
>> +    my ($source) = @_;
>> +
>> +    if ($source =~ m|^(/.+/\.tmp_[0-9]+_[0-9]+)/[^/]+$|) {
>> +	my $tmpdir = $1;
>> +
>> +	unlink $source or $! == ENOENT or die "removing image $source failed: $!\n";
>> +	rmdir $tmpdir or $! == ENOENT or die "removing tmpdir $tmpdir failed: $!\n";
>> +    } else {
>> +	die "invalid extraced image path '$source'\n";
> 
> nit: typo
> 
> these are also not discoverable if the error handling in qemu-server
> failed for some reason.. might be a source of unwanted space
> consumption..

any suggestions for better handling that cleanup?
we could put it at the beginning of each cleanup step, that should
at least make sure we cleaned up the temporary images

> 
>> +    }
>> +}
>> +
>>   1;
>> diff --git a/src/PVE/Storage/DirPlugin.pm b/src/PVE/Storage/DirPlugin.pm
>> index 4dc7708..50ceab7 100644
>> --- a/src/PVE/Storage/DirPlugin.pm
>> +++ b/src/PVE/Storage/DirPlugin.pm
>> @@ -260,14 +260,25 @@ sub get_import_metadata {
>>       # NOTE: all types must be added to the return schema of the import-metadata API endpoint
>>       my $warnings = [];
>>   
>> +    my $isOva = 0;
>> +    if ($path =~ m!\.ova!) {
>> +	$isOva = 1;
>> +	push @$warnings, { type => 'ova-needs-extracting' };
>> +    }
>>       my $res = PVE::Storage::OVF::parse_ovf($path, $isOva);
>>       my $disks = {};
>>       for my $disk ($res->{disks}->@*) {
>>   	my $id = $disk->{disk_address};
>>   	my $size = $disk->{virtual_size};
>>   	my $path = $disk->{relative_path};
>> +	my $volid;
>> +	if ($isOva) {
>> +	    $volid = "$storeid:$volname/$path";
>> +	} else {
>> +	    $volid = "$storeid:import/$path",
>> +	}
>>   	$disks->{$id} = {
>> -	    volid => "$storeid:import/$path",
>> +	    volid => $volid,
>>   	    defined($size) ? (size => $size) : (),
>>   	};
>>       }
>> diff --git a/src/PVE/Storage/OVF.pm b/src/PVE/Storage/OVF.pm
>> index 4a322b9..fb850a8 100644
>> --- a/src/PVE/Storage/OVF.pm
>> +++ b/src/PVE/Storage/OVF.pm
>> @@ -85,11 +85,37 @@ sub id_to_pve {
>>       }
>>   }
>>   
>> +# technically defined in DSP0004 (https://www.dmtf.org/dsp/DSP0004) as an ABNF
>> +# but realistically this always takes the form of 'bytes * base^exponent'
>> +sub try_parse_capacity_unit {
>> +    my ($unit_text) = @_;
>> +
>> +    if ($unit_text =~ m/^\s*byte\s*\*\s*([0-9]+)\s*\^\s*([0-9]+)\s*$/) {
>> +	my $base = $1;
>> +	my $exp = $2;
>> +	return $base ** $exp;
>> +    }
>> +
>> +    return undef;
>> +}
>> +
>>   # returns two references, $qm which holds qm.conf style key/values, and \@disks
>>   sub parse_ovf {
>> -    my ($ovf, $debug) = @_;
>> +    my ($ovf, $isOva, $debug) = @_;
>> +
>> +    # we have to ignore missing disk images for ova
>> +    my $dom;
>> +    if ($isOva) {
>> +	my $raw = "";
>> +	PVE::Tools::run_command(['tar', '-xO', '--wildcards', '--occurrence=1', '-f', $ovf, '*.ovf'], outfunc => sub {
>> +	    my $line = shift;
>> +	    $raw .= $line;
>> +	});
>> +	$dom = XML::LibXML->load_xml(string => $raw, no_blanks => 1);
>> +    } else {
>> +	$dom = XML::LibXML->load_xml(location => $ovf, no_blanks => 1);
>> +    }
>>   
>> -    my $dom = XML::LibXML->load_xml(location => $ovf, no_blanks => 1);
>>   
>>       # register the xml namespaces in a xpath context object
>>       # 'ovf' is the default namespace so it will prepended to each xml element
>> @@ -177,7 +203,17 @@ sub parse_ovf {
>>   	# @ needs to be escaped to prevent Perl double quote interpolation
>>   	my $xpath_find_fileref = sprintf("/ovf:Envelope/ovf:DiskSection/\
>>   ovf:Disk[\@ovf:diskId='%s']/\@ovf:fileRef", $disk_id);
>> +	my $xpath_find_capacity = sprintf("/ovf:Envelope/ovf:DiskSection/\
>> +ovf:Disk[\@ovf:diskId='%s']/\@ovf:capacity", $disk_id);
>> +	my $xpath_find_capacity_unit = sprintf("/ovf:Envelope/ovf:DiskSection/\
>> +ovf:Disk[\@ovf:diskId='%s']/\@ovf:capacityAllocationUnits", $disk_id);
>>   	my $fileref = $xpc->findvalue($xpath_find_fileref);
>> +	my $capacity = $xpc->findvalue($xpath_find_capacity);
>> +	my $capacity_unit = $xpc->findvalue($xpath_find_capacity_unit);
>> +	my $virtual_size;
>> +	if (my $factor = try_parse_capacity_unit($capacity_unit)) {
>> +	    $virtual_size = $capacity * $factor;
>> +	}
>>   
>>   	my $valid_url_chars = qr@${valid_uripath_chars}|/@;
>>   	if (!$fileref || $fileref !~ m/^${valid_url_chars}+$/) {
>> @@ -217,23 +253,26 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
>>   	    die "error parsing $filepath, are you using a symlink ?\n";
>>   	}
>>   
>> -	if (!-e $backing_file_path) {
>> +	if (!-e $backing_file_path && !$isOva) {
> 
> this is actually not enough, the realpath call above can already fail if
> $filepath points to a file in a subdir (note that realpath will only
> check the path components, not the file itself).
> 
> e.g.:
> 
> error parsing foo/bar/chr-6.49.13-disk1.vmdk, are you using a symlink ? (500)
> 
> we could also tighten what we allow as filepath here, in addition to the
> extraction code.
> 
>>   	    die "error parsing $filepath, file seems not to exist at $backing_file_path\n";
>>   	}
>>   
>>   	($backing_file_path) = $backing_file_path =~ m|^(/.*)|; # untaint
>>   	($filepath) = $filepath =~ m|^(.*)|; # untaint
>>   
>> -	my $virtual_size = PVE::Storage::file_size_info($backing_file_path);
>> -	die "error parsing $backing_file_path, cannot determine file size\n"
>> -	    if !$virtual_size;
>> +	if (!$isOva) {
>> +	    my $size = PVE::Storage::file_size_info($backing_file_path);
>> +	    die "error parsing $backing_file_path, cannot determine file size\n"
>> +		if !$size;
>>   
>> +	    $virtual_size = $size;
>> +	}
>>   	$pve_disk = {
>>   	    disk_address => $pve_disk_address,
>>   	    backing_file => $backing_file_path,
>> -	    virtual_size => $virtual_size
>>   	    relative_path => $filepath,
>>   	};
>> +	$pve_disk->{virtual_size} = $virtual_size if defined($virtual_size);
>>   	push @disks, $pve_disk;
>>   
>>       }
>> diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
>> index deaf8b2..ea069ab 100644
>> --- a/src/PVE/Storage/Plugin.pm
>> +++ b/src/PVE/Storage/Plugin.pm
>> @@ -654,6 +654,11 @@ sub parse_volname {
>>   	return ('backup', $fn);
>>       } elsif ($volname =~ m!^snippets/([^/]+)$!) {
>>   	return ('snippets', $1);
>> +    } elsif ($volname =~ m!^import/([^/]+\.ova)\/([^/]+)$!) {
>> +	my $archive = $1;
>> +	my $file = $2;
>> +	my (undef, $format, undef) = parse_name_dir($file);
>> +	return ('import', $archive, 0, undef, undef, undef, $format);
>>       } elsif ($volname =~ m!^import/([^/]+$PVE::Storage::IMPORT_EXT_RE_1)$!) {
>>   	return ('import', $1);
>>       } elsif ($volname =~ m!^import/([^/]+\.(raw|vmdk|qcow2))$!) {
>> -- 
>> 2.39.2
>>
>>
>>
>> _______________________________________________
>> pve-devel mailing list
>> pve-devel@lists.proxmox.com
>> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>>
>>
>>
> 
> 
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 
> 



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH storage/qemu-server/pve-manager] implement ova/ovf import for directory type storages
  2024-04-16 13:18 [pve-devel] [PATCH storage/qemu-server/pve-manager] implement ova/ovf import for directory type storages Dominik Csapak
                   ` (15 preceding siblings ...)
  2024-04-16 13:19 ` [pve-devel] [PATCH manager 4/4] ui: enable upload/download buttons for 'import' type storages Dominik Csapak
@ 2024-04-17 13:11 ` Fabian Grünbichler
  2024-04-17 13:19   ` Dominik Csapak
  2024-04-18  9:27 ` Dominik Csapak
  17 siblings, 1 reply; 67+ messages in thread
From: Fabian Grünbichler @ 2024-04-17 13:11 UTC (permalink / raw)
  To: Proxmox VE development discussion

On April 16, 2024 3:18 pm, Dominik Csapak wrote:
> This series enables importing ova/ovf from directory based storages,
> inclusive upload/download via the webui (ova only).
> 
> It also improves the ovf importer by parsing the ostype, nics, bootorder
> (and firmware from vmware exported files).
> 
> I currently opted to move the OVF.pm to pve-storage, since there is no
> real other place where we could put it. Building a seperate package
> from qemu-servers git repo would also not be ideal, since we still
> have a cyclic dev dependency then
> (If someone has a better idea how to handle that, please do tell, and
> i can do that in a v2)
> 
> There are surely some wrinkles left i did not think of, but all in all,
> it should be pretty usable. E.g. i downloaded some ovas, uploaded them
> on my cephfs in my virtual cluster, and successfully imported that with
> live-import.
> 
> The biggest caveat when importing from ovas is that we have to
> temporarily extract the disk images. I opted for doing that into the
> import storage, but if we have a better idea where to put that, i can
> implement it in a v2 (or as a follow up). For example, we could add a
> new 'tmpdir' parameter to the create call and use that for extractig.

something is wrong with the permissions, since the import images are not
added to check_volume_access, I can now upload an OVA, but not see it
afterwards ;)

I guess if a user has upload rights for improt images
(Datastore.AllocateTemplate), they should also be able to see and use
(and remove) import images?


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH storage 2/9] plugin: dir: implement import content type
  2024-04-17 10:07   ` Fiona Ebner
  2024-04-17 10:07     ` Fiona Ebner
@ 2024-04-17 13:13     ` Dominik Csapak
  1 sibling, 0 replies; 67+ messages in thread
From: Dominik Csapak @ 2024-04-17 13:13 UTC (permalink / raw)
  To: Fiona Ebner, Proxmox VE development discussion

On 4/17/24 12:07, Fiona Ebner wrote:
> Am 16.04.24 um 15:18 schrieb Dominik Csapak:
>> in DirPlugin and not Plugin (because of cyclic dependency of
>> Plugin -> OVF -> Storage -> Plugin otherwise)
>>
>> only ovf is currently supported (though ova will be shown in import
>> listing), expects the files to not be in a subdir, and adjacent to the
>> ovf file.
>>
>> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
>> ---
>>   src/PVE/Storage.pm                 |  8 ++++++-
>>   src/PVE/Storage/DirPlugin.pm       | 37 +++++++++++++++++++++++++++++-
>>   src/PVE/Storage/OVF.pm             |  2 ++
>>   src/PVE/Storage/Plugin.pm          | 18 ++++++++++++++-
>>   src/test/parse_volname_test.pm     | 13 +++++++++++
>>   src/test/path_to_volume_id_test.pm | 16 +++++++++++++
>>   6 files changed, 91 insertions(+), 3 deletions(-)
>>
>> diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
>> index 40314a8..f8ea93d 100755
>> --- a/src/PVE/Storage.pm
>> +++ b/src/PVE/Storage.pm
>> @@ -114,6 +114,8 @@ our $VZTMPL_EXT_RE_1 = qr/\.tar\.(gz|xz|zst)/i;
>>   
>>   our $BACKUP_EXT_RE_2 = qr/\.(tgz|(?:tar|vma)(?:\.(${\PVE::Storage::Plugin::COMPRESSOR_RE}))?)/;
>>   
>> +our $IMPORT_EXT_RE_1 = qr/\.(ov[af])/;
>> +
>>   # FIXME remove with PVE 8.0, add versioned breaks for pve-manager
>>   our $vztmpl_extension_re = $VZTMPL_EXT_RE_1;
>>   
>> @@ -612,6 +614,7 @@ sub path_to_volume_id {
>>   	my $backupdir = $plugin->get_subdir($scfg, 'backup');
>>   	my $privatedir = $plugin->get_subdir($scfg, 'rootdir');
>>   	my $snippetsdir = $plugin->get_subdir($scfg, 'snippets');
>> +	my $importdir = $plugin->get_subdir($scfg, 'import');
>>   
>>   	if ($path =~ m!^$imagedir/(\d+)/([^/\s]+)$!) {
>>   	    my $vmid = $1;
>> @@ -640,6 +643,9 @@ sub path_to_volume_id {
>>   	} elsif ($path =~ m!^$snippetsdir/([^/]+)$!) {
>>   	    my $name = $1;
>>   	    return ('snippets', "$sid:snippets/$name");
>> +	} elsif ($path =~ m!^$importdir/([^/]+${IMPORT_EXT_RE_1})$!) {
>> +	    my $name = $1;
>> +	    return ('import', "$sid:import/$name");
>>   	}
>>       }
>>   
>> @@ -2170,7 +2176,7 @@ sub normalize_content_filename {
>>   # If a storage provides an 'import' content type, it should be able to provide
>>   # an object implementing the import information interface.
>>   sub get_import_metadata {
>> -    my ($cfg, $volid) = @_;
>> +    my ($cfg, $volid, $target) = @_;
>>   
> 
> $target is added here but not passed along when calling the plugin's
> function

leftover from a previous iteration of the patches

> 
>>       my ($storeid, $volname) = parse_volume_id($volid);
>>   
> 
> Pre-existing and not directly related, but in the ESXi plugin the
> prototype seems wrong too:
> 
> sub get_import_metadata : prototype($$$$$) {
>      my ($class, $scfg, $volname, $storeid) = @_;

same here

> 
> 
>> diff --git a/src/PVE/Storage/DirPlugin.pm b/src/PVE/Storage/DirPlugin.pm
>> index 2efa8d5..4dc7708 100644
>> --- a/src/PVE/Storage/DirPlugin.pm
>> +++ b/src/PVE/Storage/DirPlugin.pm
>> @@ -10,6 +10,7 @@ use IO::File;
>>   use POSIX;
>>   
>>   use PVE::Storage::Plugin;
>> +use PVE::Storage::OVF;
>>   use PVE::JSONSchema qw(get_standard_option);
>>   
>>   use base qw(PVE::Storage::Plugin);
>> @@ -22,7 +23,7 @@ sub type {
>>   
>>   sub plugindata {
>>       return {
>> -	content => [ { images => 1, rootdir => 1, vztmpl => 1, iso => 1, backup => 1, snippets => 1, none => 1 },
>> +	content => [ { images => 1, rootdir => 1, vztmpl => 1, iso => 1, backup => 1, snippets => 1, none => 1, import => 1 },
>>   		     { images => 1,  rootdir => 1 }],
>>   	format => [ { raw => 1, qcow2 => 1, vmdk => 1, subvol => 1 } , 'raw' ],
>>       };
>> @@ -247,4 +248,38 @@ sub check_config {
>>       return $opts;
>>   }
>>   
>> +sub get_import_metadata {
>> +    my ($class, $scfg, $volname, $storeid, $target) = @_;
>> +
>> +    if ($volname !~ m!^([^/]+)/.*${PVE::Storage::IMPORT_EXT_RE_1}$!) {
>> +	die "volume '$volname' does not look like an importable vm config\n";
>> +    }
>> +
>> +    my $path = $class->path($scfg, $volname, $storeid, undef);
>> +
>> +    # NOTE: all types must be added to the return schema of the import-metadata API endpoint
> 
> To be extra clear (was confused for a moment): "all types of warnings"
> 
>> +    my $warnings = [];
>> +
>> +    my $res = PVE::Storage::OVF::parse_ovf($path, $isOva);
> 
> $isOva does not exist yet (only added by a later patch).
> 
>> +    my $disks = {};
>> +    for my $disk ($res->{disks}->@*) {
>> +	my $id = $disk->{disk_address};
>> +	my $size = $disk->{virtual_size};
>> +	my $path = $disk->{relative_path};
>> +	$disks->{$id} = {
>> +	    volid => "$storeid:import/$path",
>> +	    defined($size) ? (size => $size) : (),
>> +	};
>> +    }
>> +
>> +    return {
>> +	type => 'vm',
>> +	source => $volname,
>> +	'create-args' => $res->{qm},
>> +	'disks' => $disks,
>> +	warnings => $warnings,
>> +	net => [],
>> +    };
>> +}
>> +
>>   1;
>> diff --git a/src/PVE/Storage/OVF.pm b/src/PVE/Storage/OVF.pm
>> index 90ca453..4a322b9 100644
>> --- a/src/PVE/Storage/OVF.pm
>> +++ b/src/PVE/Storage/OVF.pm
>> @@ -222,6 +222,7 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
>>   	}
>>   
>>   	($backing_file_path) = $backing_file_path =~ m|^(/.*)|; # untaint
>> +	($filepath) = $filepath =~ m|^(.*)|; # untaint
> 
> 
> Hmm, $backing_file_path is the result after going through realpath(),
> $filepath is from before. We do check it's not a symlink, so I might be
> a bit paranoid, but still, rather than doing a blanket untaint, you
> could just use basename() (either here or not return anything new and do
> it at the use-site).
> 
>>   
>>   	my $virtual_size = PVE::Storage::file_size_info($backing_file_path);
>>   	die "error parsing $backing_file_path, cannot determine file size\n"
>> @@ -231,6 +232,7 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
>>   	    disk_address => $pve_disk_address,
>>   	    backing_file => $backing_file_path,
>>   	    virtual_size => $virtual_size
>> +	    relative_path => $filepath,
>>   	};
>>   	push @disks, $pve_disk;
>>   > diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
>> index 22a9729..deaf8b2 100644
>> --- a/src/PVE/Storage/Plugin.pm
>> +++ b/src/PVE/Storage/Plugin.pm
>> @@ -654,6 +654,10 @@ sub parse_volname {
>>   	return ('backup', $fn);
>>       } elsif ($volname =~ m!^snippets/([^/]+)$!) {
>>   	return ('snippets', $1);
>> +    } elsif ($volname =~ m!^import/([^/]+$PVE::Storage::IMPORT_EXT_RE_1)$!) {
>> +	return ('import', $1);
>> +    } elsif ($volname =~ m!^import/([^/]+\.(raw|vmdk|qcow2))$!) {
>> +	return ('images', $1, 0, undef, undef, undef, $2);
> 
> Hmm, $vmid=0, because we have currently have assumptions that each
> volume has an associated guest ID? At least might be worth a comment
> (also in API description if those volumes can somehow reach there).

i mimicked the way ESXIPlugin handles that, there we also do the same
with vmdks vs vmx files

i.e. vmdks are returned as 'images' with a format and vmx
files are just returned plain without a file format

i can change it for the dir plugin, since i have to handle
the ova contained images differently anyway and that requires
adaption in qemu-server

then we can simply do that check there differently
(vtype import + fileformat vmdk/qcow2/raw for example)


> 
>>       }
>>   
>>       die "unable to parse directory volume name '$volname'\n";
>> @@ -666,6 +670,7 @@ my $vtype_subdirs = {
>>       vztmpl => 'template/cache',
>>       backup => 'dump',
>>       snippets => 'snippets',
>> +    import => 'import',
>>   };
>>   
>>   sub get_vtype_subdirs {
>> @@ -691,6 +696,11 @@ sub filesystem_path {
>>       my ($vtype, $name, $vmid, undef, undef, $isBase, $format) =
>>   	$class->parse_volname($volname);
>>   
>> +    if (defined($vmid) && $vmid == 0) {
>> +	# import volumes?
>> +	$vtype = 'import';
>> +    }
> 
> It is rather hacky :/ At least we could check whether it's an volname
> with "import/" instead of relying on $vmid==0 to set the $vtype.
> 
> But why return type 'images' in parse_volname() if you override it here
> if it's an import image? There should be some comments with the
> rationale why it's done like this.
> 

not needed when i return the images as 'import' type with a file format

>> +
>>       # Note: qcow2/qed has internal snapshot, so path is always
>>       # the same (with or without snapshot => same file).
>>       die "can't snapshot this image format\n"
>> @@ -1227,7 +1237,7 @@ sub list_images {
>>       return $res;
>>   }
>>   
>> -# list templates ($tt = <iso|vztmpl|backup|snippets>)
>> +# list templates ($tt = <iso|vztmpl|backup|snippets|import>)
>>   my $get_subdir_files = sub {
>>       my ($sid, $path, $tt, $vmid) = @_;
>>   
>> @@ -1283,6 +1293,10 @@ my $get_subdir_files = sub {
>>   		volid => "$sid:snippets/". basename($fn),
>>   		format => 'snippet',
>>   	    };
>> +	} elsif ($tt eq 'import') {
>> +	    next if $fn !~ m!/([^/]+$PVE::Storage::IMPORT_EXT_RE_1)$!i;
>> +
>> +	    $info = { volid => "$sid:import/$1", format => "$2" };
>>   	}
>>   
>>   	$info->{size} = $st->size;
>> @@ -1317,6 +1331,8 @@ sub list_volumes {
>>   		$data = $get_subdir_files->($storeid, $path, 'backup', $vmid);
>>   	    } elsif ($type eq 'snippets') {
>>   		$data = $get_subdir_files->($storeid, $path, 'snippets');
>> +	    } elsif ($type eq 'import') {
>> +		$data = $get_subdir_files->($storeid, $path, 'import');
>>   	    }
>>   	}
>>   
>> diff --git a/src/test/parse_volname_test.pm b/src/test/parse_volname_test.pm
>> index d6ac885..59819f0 100644
>> --- a/src/test/parse_volname_test.pm
>> +++ b/src/test/parse_volname_test.pm
>> @@ -81,6 +81,19 @@ my $tests = [
>>   	expected    => ['snippets', 'hookscript.pl'],
>>       },
>>       #
>> +    #
>> +    #
>> +    {
>> +	description => "Import, ova",
>> +	volname     => 'import/import.ova',
>> +	expected    => ['import', 'import.ova'],
>> +    },
>> +    {
>> +	description => "Import, ovf",
>> +	volname     => 'import/import.ovf',
>> +	expected    => ['import', 'import.ovf'],
>> +    },
>> +    #
>>       # failed matches
>>       #
> 
> Would be nice to also test for failure (with a wrong extension).
> 
>>       {
>> diff --git a/src/test/path_to_volume_id_test.pm b/src/test/path_to_volume_id_test.pm
>> index 8149c88..8bc1bf8 100644
>> --- a/src/test/path_to_volume_id_test.pm
>> +++ b/src/test/path_to_volume_id_test.pm
>> @@ -174,6 +174,22 @@ my @tests = (
>>   	    'local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.xz',
>>   	],
>>       },
>> +    {
>> +	description => 'Import, ova',
>> +	volname     => "$storage_dir/import/import.ova",
>> +	expected    => [
>> +	    'import',
>> +	    'local:import/import.ova',
>> +	],
>> +    },
>> +    {
>> +	description => 'Import, ovf',
>> +	volname     => "$storage_dir/import/import.ovf",
>> +	expected    => [
>> +	    'import',
>> +	    'local:import/import.ovf',
>> +	],
>> +    },
>>   
>>       # no matches, path or files with failures
>>       {
> 
> 
> Would be nice to also test for failure (with a wrong extension).

sure. will do in a v2


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH storage 4/9] ovf: implement parsing the ostype
  2024-04-17 11:32   ` Fiona Ebner
@ 2024-04-17 13:14     ` Dominik Csapak
  2024-04-18  7:31       ` Fiona Ebner
  0 siblings, 1 reply; 67+ messages in thread
From: Dominik Csapak @ 2024-04-17 13:14 UTC (permalink / raw)
  To: Fiona Ebner, Proxmox VE development discussion

On 4/17/24 13:32, Fiona Ebner wrote:
> Am 16.04.24 um 15:18 schrieb Dominik Csapak:
>> use the standards info about the ostypes to map to our own
>> (see comment for link to the relevant part of the dmtf schema)
>>
>> every type that is not listed we map to 'other', so no need to have it
>> in a list.
>>
>> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
>>
> Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
> 
>> diff --git a/src/test/run_ovf_tests.pl b/src/test/run_ovf_tests.pl
>> index 1ef78cc..e949c15 100755
>> --- a/src/test/run_ovf_tests.pl
>> +++ b/src/test/run_ovf_tests.pl
>> @@ -59,13 +59,16 @@ print "\ntesting vm.conf extraction\n";
>>   is($win2008->{qm}->{name}, 'Win2008-R2x64', 'win2008 VM name is correct');
>>   is($win2008->{qm}->{memory}, '2048', 'win2008 VM memory is correct');
>>   is($win2008->{qm}->{cores}, '1', 'win2008 VM cores are correct');
>> +is($win2008->{qm}->{ostype}, 'win7', 'win2008 VM ostype is correcty');
>>   
>>   is($win10->{qm}->{name}, 'Win10-Liz', 'win10 VM name is correct');
>>   is($win10->{qm}->{memory}, '6144', 'win10 VM memory is correct');
>>   is($win10->{qm}->{cores}, '4', 'win10 VM cores are correct');
>> +is($win10->{qm}->{ostype}, 'other', 'win10 VM ostype is correct');
> 
> Yes, 'other', because the ovf config has id=1, but is there a special
> reason why? Maybe worth a comment here and below to avoid potential
> confusion.

my guess is that the ovf spec did not include windows 10 yet (or something
similar like the esxi exporter not knowing the newest spec)

and i did not want to change the testcase just for this

> 
>>   
>>   is($win10noNs->{qm}->{name}, 'Win10-Liz', 'win10 VM (no default rasd NS) name is correct');
>>   is($win10noNs->{qm}->{memory}, '6144', 'win10 VM (no default rasd NS) memory is correct');
>>   is($win10noNs->{qm}->{cores}, '4', 'win10 VM (no default rasd NS) cores are correct');
>> +is($win10noNs->{qm}->{ostype}, 'other', 'win10 VM (no default rasd NS) ostype is correct');
>>   
>>   done_testing();



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH storage 6/9] ovf: implement rudimentary boot order
  2024-04-17 11:54   ` Fiona Ebner
@ 2024-04-17 13:15     ` Dominik Csapak
  0 siblings, 0 replies; 67+ messages in thread
From: Dominik Csapak @ 2024-04-17 13:15 UTC (permalink / raw)
  To: Fiona Ebner, Proxmox VE development discussion

On 4/17/24 13:54, Fiona Ebner wrote:
> Am 16.04.24 um 15:18 schrieb Dominik Csapak:
>> simply add all parsed disks to the boot order in the order we encounter
>> them (similar to the esxi plugin).
>>
>> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
>> ---
>>   src/PVE/Storage/OVF.pm    | 6 ++++++
>>   src/test/run_ovf_tests.pl | 3 +++
>>   2 files changed, 9 insertions(+)
>>
>> diff --git a/src/PVE/Storage/OVF.pm b/src/PVE/Storage/OVF.pm
>> index f56c34d..f438de2 100644
>> --- a/src/PVE/Storage/OVF.pm
>> +++ b/src/PVE/Storage/OVF.pm
>> @@ -245,6 +245,8 @@ sub parse_ovf {
>>       # when all the nodes has been found out, we copy the relevant information to
>>       # a $pve_disk hash ref, which we push to @disks;
>>   
>> +    my $boot = [];
> 
> Nit: might be better to name it more verbosely since it's a long
> function, e.g. boot_order, boot_disk_keys, or similar
> 
>> +
>>       foreach my $item_node (@disk_items) {
>>   
>>   	my $disk_node;
>> @@ -348,6 +350,10 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
>>   	};
>>   	$pve_disk->{virtual_size} = $virtual_size if defined($virtual_size);
>>   	push @disks, $pve_disk;
>> +	push @$boot, $pve_disk_address;
>> +    }
> 
> This bracket should not be here and the next line below the next bracket
> (fixed by the next patch).
> 
>> +
>> +    $qm->{boot} = "order=" . join(';', @$boot);
> 
> Won't this fail later if there are no disks?

yes, oops, will check if boot(_order) is empty


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH storage 7/9] ovf: implement parsing nics
  2024-04-17 12:09   ` Fiona Ebner
@ 2024-04-17 13:16     ` Dominik Csapak
  0 siblings, 0 replies; 67+ messages in thread
From: Dominik Csapak @ 2024-04-17 13:16 UTC (permalink / raw)
  To: Fiona Ebner, Proxmox VE development discussion

On 4/17/24 14:09, Fiona Ebner wrote:
> Am 16.04.24 um 15:19 schrieb Dominik Csapak:
>> by iterating over the relevant parts and trying to parse out the
>> 'ResourceSubType'. The content of that is not standardized, but I only
>> ever found examples that are compatible with vmware, meaning it's
>> either 'e1000', 'e1000e' or 'vmxnet3' (in various capitalizations; thus
>> the `lc()`)
>>
>> As a fallback i used vmxnet3, since i guess most OVAs are tuned for
>> vmware.
> 
> I'm not familiar enough with the OVA/OVF ecosystem, but is this really
> the best default. I'd kinda expect e1000(e) to cause less issues in case
> we were not able to get the type from the OVA/OVF. And people coming
> from VMWare are likely going to use the dedicated importer.

i did choose that, since from what i saw looking for ovas, they are mostly
tailored for vmware consumption, so i thought it'd make sense to use
that as default.

not opposed to use e1000 though. i think in practice it won't make much difference

> 
>>
>> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
>> ---
>>   src/PVE/Storage/DirPlugin.pm |  2 +-
>>   src/PVE/Storage/OVF.pm       | 20 +++++++++++++++++++-
>>   src/test/run_ovf_tests.pl    |  5 +++++
>>   3 files changed, 25 insertions(+), 2 deletions(-)
>>
>> diff --git a/src/PVE/Storage/DirPlugin.pm b/src/PVE/Storage/DirPlugin.pm
>> index 8a248c7..21c8350 100644
>> --- a/src/PVE/Storage/DirPlugin.pm
>> +++ b/src/PVE/Storage/DirPlugin.pm
>> @@ -294,7 +294,7 @@ sub get_import_metadata {
>>   	'create-args' => $res->{qm},
>>   	'disks' => $disks,
>>   	warnings => $warnings,
>> -	net => [],
>> +	net => $res->{net},
>>       };
>>   }
>>   
>> diff --git a/src/PVE/Storage/OVF.pm b/src/PVE/Storage/OVF.pm
>> index f438de2..c3e7ed9 100644
>> --- a/src/PVE/Storage/OVF.pm
>> +++ b/src/PVE/Storage/OVF.pm
>> @@ -120,6 +120,12 @@ sub get_ostype {
>>       return $ostype_ids->{$id} // 'other';
>>   }
>>   
>> +my $allowed_nic_models = [
>> +    'e1000',
>> +    'e1000e',
>> +    'vmxnet3',
>> +];
>> +
>>   sub find_by {
>>       my ($key, $param) = @_;
>>       foreach my $resource (@resources) {
>> @@ -355,9 +361,21 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
>>   
>>       $qm->{boot} = "order=" . join(';', @$boot);
>>   
>> +    my $nic_id = dtmf_name_to_id('Ethernet Adapter');
>> +    my $xpath_find_nics = "/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/ovf:Item[rasd:ResourceType=${nic_id}]";
>> +    my @nic_items = $xpc->findnodes($xpath_find_nics);
>> +
>> +    my $net = {};
>> +
>> +    my $net_count = 0;
>> +    foreach my $item_node (@nic_items) {
> 
> Style nit: please use for instead of foreach
> 
>> +	my $model = $xpc->findvalue('rasd:ResourceSubType', $item_node);
>> +	$model = lc($model);
>> +	$model = 'vmxnet3' if ! grep $model, @$allowed_nic_models;
> 
> 
>> +	$net->{"net${net_count}"} = { model => $model };
>>       }
> 
> $net_count is never increased.
> 
>>   
>> -    return {qm => $qm, disks => \@disks};
>> +    return {qm => $qm, disks => \@disks, net => $net};
>>   }
>>   
>>   1;
>> diff --git a/src/test/run_ovf_tests.pl b/src/test/run_ovf_tests.pl
>> index 8cf5662..d9a7b4b 100755
>> --- a/src/test/run_ovf_tests.pl
>> +++ b/src/test/run_ovf_tests.pl
>> @@ -54,6 +54,11 @@ is($win10noNs->{disks}->[0]->{disk_address}, 'scsi0', 'single disk vm (no defaul
>>   is($win10noNs->{disks}->[0]->{backing_file}, "$test_manifests/Win10-Liz-disk1.vmdk", 'single disk vm (no default rasd NS) has the correct disk backing device');
>>   is($win10noNs->{disks}->[0]->{virtual_size}, 2048, 'single disk vm (no default rasd NS) has the correct size');
>>   
>> +print "testing nics\n";
>> +is($win2008->{net}->{net0}->{model}, 'e1000', 'win2008 has correct nic model');
>> +is($win10->{net}->{net0}->{model}, 'e1000e', 'win10 has correct nic model');
>> +is($win10noNs->{net}->{net0}->{model}, 'e1000e', 'win10 (no default rasd NS) has correct nic model');
>> +
>>   print "\ntesting vm.conf extraction\n";
>>   
>>   is($win2008->{qm}->{boot}, 'order=scsi0;scsi1', 'win2008 VM boot is correct');



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH storage/qemu-server/pve-manager] implement ova/ovf import for directory type storages
  2024-04-17 13:11 ` [pve-devel] [PATCH storage/qemu-server/pve-manager] implement ova/ovf import for directory " Fabian Grünbichler
@ 2024-04-17 13:19   ` Dominik Csapak
  2024-04-18  6:40     ` Fabian Grünbichler
  0 siblings, 1 reply; 67+ messages in thread
From: Dominik Csapak @ 2024-04-17 13:19 UTC (permalink / raw)
  To: Proxmox VE development discussion, Fabian Grünbichler

On 4/17/24 15:11, Fabian Grünbichler wrote:
> On April 16, 2024 3:18 pm, Dominik Csapak wrote:
>> This series enables importing ova/ovf from directory based storages,
>> inclusive upload/download via the webui (ova only).
>>
>> It also improves the ovf importer by parsing the ostype, nics, bootorder
>> (and firmware from vmware exported files).
>>
>> I currently opted to move the OVF.pm to pve-storage, since there is no
>> real other place where we could put it. Building a seperate package
>> from qemu-servers git repo would also not be ideal, since we still
>> have a cyclic dev dependency then
>> (If someone has a better idea how to handle that, please do tell, and
>> i can do that in a v2)
>>
>> There are surely some wrinkles left i did not think of, but all in all,
>> it should be pretty usable. E.g. i downloaded some ovas, uploaded them
>> on my cephfs in my virtual cluster, and successfully imported that with
>> live-import.
>>
>> The biggest caveat when importing from ovas is that we have to
>> temporarily extract the disk images. I opted for doing that into the
>> import storage, but if we have a better idea where to put that, i can
>> implement it in a v2 (or as a follow up). For example, we could add a
>> new 'tmpdir' parameter to the create call and use that for extractig.
> 
> something is wrong with the permissions, since the import images are not
> added to check_volume_access, I can now upload an OVA, but not see it
> afterwards ;)
> 
> I guess if a user has upload rights for improt images
> (Datastore.AllocateTemplate), they should also be able to see and use
> (and remove) import images?
> 

ah yes, i forgot to add it there.

but FWICS isos can have the same problem?
upload only requires 'Datastore.AllocateTemplate' but seeing them requires
'Datastore.AllocateSpace' or 'Datastore.Audit'

is that a mistake?

> 
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 
> 



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH storage 3/9] plugin: dir: handle ova files for import
  2024-04-17 13:07     ` Dominik Csapak
@ 2024-04-17 13:39       ` Fabian Grünbichler
  2024-04-18  7:22       ` Fiona Ebner
  1 sibling, 0 replies; 67+ messages in thread
From: Fabian Grünbichler @ 2024-04-17 13:39 UTC (permalink / raw)
  To: Fiona Ebner, Proxmox VE development discussion

On April 17, 2024 3:07 pm, Dominik Csapak wrote:
> On 4/17/24 12:52, Fiona Ebner wrote:
>> Am 16.04.24 um 15:18 schrieb Dominik Csapak:
>>> since we want to handle ova files (which are only ovf+vmdks bundled in a
>>> tar file) for import, add code that handles that.
>>>
>>> we introduce a valid volname for files contained in ovas like this:
>>>
>>>   storage:import/archive.ova/disk-1.vmdk
>>>
>>> by basically treating the last part of the path as the name for the
>>> contained disk we want.
>>>
>>> we then provide 3 functions to use for that:
>>>
>>> * copy_needs_extraction: determines from the given volid (like above) if
>>>    that needs extraction to copy it, currently only 'import' vtype +
>>>    defined format returns true here (if we have more options in the
>>>    future, we can of course easily extend that)
>>>
>>> * extract_disk_from_import_file: this actually extracts the file from
>>>    the archive. Currently only ova is supported, so the extraction with
>>>    'tar' is hardcoded, but again we can easily extend/modify that should
>>>    we need to.
>>>
>>>    we currently extract into the import storage in a directory named:
>>>    `.tmp_<pid>_<targetvmid>` which should not clash with concurrent
>>>    operations (though we do extract it multiple times then)
>>>
>> 
>> Could we do "extract upon upload", "tar upon download" instead? Sure
>> some people surely want to drop the ova manually, but we could tell them
>> they need to extract it first too. Depending on the amount of headache
>> this would save us, it might be worth it.
> 
> we could, but this opens a whole other can of worms, namely
> what to do with conflicting filenames for different ovas?
> 
> we'd then either have to magically match the paths from the ovfs
> to some subdir that don't overlap

we could just use the ova name as dir name, and never store the ova
under that name but use some tmp placeholder for that ;)

> 
> or we'd have to abort everytime we encounter identical disk names
> 
> IMHO this would be less practical than just extract on demand...
> 
>> 
>>>    alternatively we could implement either a 'tmpstorage' parameter,
>>>    or use e.g. '/var/tmp/' or similar, but re-using the current storage
>>>    seemed ok.
>>>
>>> * cleanup_extracted_image: intended to cleanup the extracted images from
>>>    above, including the surrounding temporary directory
>>>
>>> we have to modify the `parse_ovf` a bit to handle the missing disk
>>> images, and we parse the size out of the ovf part (since this is
>>> informal only, it should be no problem if we cannot parse it sometimes)
>>>
>>> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
>>> ---
>>>   src/PVE/API2/Storage/Status.pm |  1 +
>>>   src/PVE/Storage.pm             | 59 ++++++++++++++++++++++++++++++++++
>>>   src/PVE/Storage/DirPlugin.pm   | 13 +++++++-
>>>   src/PVE/Storage/OVF.pm         | 53 ++++++++++++++++++++++++++----
>>>   src/PVE/Storage/Plugin.pm      |  5 +++
>>>   5 files changed, 123 insertions(+), 8 deletions(-)
>>>
>>> diff --git a/src/PVE/API2/Storage/Status.pm b/src/PVE/API2/Storage/Status.pm
>>> index f7e324f..77ed57c 100644
>>> --- a/src/PVE/API2/Storage/Status.pm
>>> +++ b/src/PVE/API2/Storage/Status.pm
>>> @@ -749,6 +749,7 @@ __PACKAGE__->register_method({
>>>   				'efi-state-lost',
>>>   				'guest-is-running',
>>>   				'nvme-unsupported',
>>> +				'ova-needs-extracting',
>>>   				'ovmf-with-lsi-unsupported',
>>>   				'serial-port-socket-only',
>>>   			    ],
>>> diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
>>> index f8ea93d..bc073ef 100755
>>> --- a/src/PVE/Storage.pm
>>> +++ b/src/PVE/Storage.pm
>>> @@ -2189,4 +2189,63 @@ sub get_import_metadata {
>>>       return $plugin->get_import_metadata($scfg, $volname, $storeid);
>>>   }
>>>   
>> 
>> Shouldn't the following three functions call into plugin methods
>> instead? That'd seem much more future-proof to me.
> 
> could be, i just did not want to extend the plugin api for that
> but as fabian wrote, maybe we should put them in qemu-server
> altogether for now?
> 
> (after thinking about it a bit, i'd be in favor of putting it in
> qemu-server, because mainly i don't want to add to the plugin api further)
> 
> what do you think @fiona @fabian?

another alternative would be to put them into the non-storage-plugin OVF
helper module?

>>> +sub copy_needs_extraction {
>>> +    my ($volid) = @_;
>>> +    my ($storeid, $volname) = parse_volume_id($volid);
>>> +    my $cfg = config();
>>> +    my $scfg = storage_config($cfg, $storeid);
>>> +    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
>>> +
>>> +    my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $file_format) =
>>> +	$plugin->parse_volname($volname);
>>> +
>>> +    return $vtype eq 'import' && defined($file_format);
>> 
>> E.g this seems rather hacky, and puts a weird coupling on a future
>> import plugin's parse_volname() function (presence of $file_format).
> 
> would it be better to check the volid again for '.ova/something$' ?
> or do you have a better idea?
> (especially if we want to have this maybe in qemu-server)

hmm, could parse_volname return 'ova' as format? or 'ova+vmdk'? we don't
actually need the format for extracting, and afterwards we get it from
the extract file name anyway?

>>> +}
>>> +
>>> +sub extract_disk_from_import_file {
>>> +    my ($volid, $vmid) = @_;
>>> +
>>> +    my ($storeid, $volname) = parse_volume_id($volid);
>>> +    my $cfg = config();
>>> +    my $scfg = storage_config($cfg, $storeid);
>>> +    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
>>> +
>>> +    my ($vtype, $name, undef, undef, undef, undef, $file_format) =
>>> +	$plugin->parse_volname($volname);
>>> +
>>> +    die "only files with content type 'import' can be extracted\n"
>>> +	if $vtype ne 'import' || !defined($file_format);
>>> +
>>> +    # extract the inner file from the name
>>> +    if ($volid =~ m!${name}/([^/]+)$!) {
>>> +	$name = $1;
>>> +    }
>>> +
>>> +    my ($source_file) = $plugin->path($scfg, $volname, $storeid);
>>> +
>>> +    my $destdir = $plugin->get_subdir($scfg, 'import');
>>> +    my $pid = $$;
>>> +    $destdir .= "/.tmp_${pid}_${vmid}";
>>> +    mkdir $destdir;
>>> +
>>> +    ($source_file) = $source_file =~ m|^(/.*)|; # untaint
>>> +
>>> +    run_command(['tar', '-x', '-C', $destdir, '-f', $source_file, $name]);
>>> +
>>> +    return "$destdir/$name";
>>> +}
>>> +
>>> +sub cleanup_extracted_image {
>>> +    my ($source) = @_;
>>> +
>>> +    if ($source =~ m|^(/.+/\.tmp_[0-9]+_[0-9]+)/[^/]+$|) {
>>> +	my $tmpdir = $1;
>>> +
>>> +	unlink $source or $! == ENOENT or die "removing image $source failed: $!\n";
>>> +	rmdir $tmpdir or $! == ENOENT or die "removing tmpdir $tmpdir failed: $!\n";
>>> +    } else {
>>> +	die "invalid extraced image path '$source'\n";
>>> +    }
>>> +}
>>> +
>>>   1;
>>> diff --git a/src/PVE/Storage/DirPlugin.pm b/src/PVE/Storage/DirPlugin.pm
>>> index 4dc7708..50ceab7 100644
>>> --- a/src/PVE/Storage/DirPlugin.pm
>>> +++ b/src/PVE/Storage/DirPlugin.pm
>>> @@ -260,14 +260,25 @@ sub get_import_metadata {
>>>       # NOTE: all types must be added to the return schema of the import-metadata API endpoint
>>>       my $warnings = [];
>>>   
>>> +    my $isOva = 0;
>>> +    if ($path =~ m!\.ova!) {
>> 
>> Would be nicer if parse_volname() would return the $file_format and we
>> chould check for that. Also missing the $ in the regex, so you'd
>> mismatch a weird filename like ABC.ovaXYZ.ovf or?
> 
> yeah the $ is missing, and yes, we could return ova/ovf as format there
> as we want to change the 'needs extracting' check anyway
> 
> 
>> 
>>> +	$isOva = 1;
>>> +	push @$warnings, { type => 'ova-needs-extracting' };
>>> +    }
>>>       my $res = PVE::Storage::OVF::parse_ovf($path, $isOva);
>>>       my $disks = {};
>>>       for my $disk ($res->{disks}->@*) {
>>>   	my $id = $disk->{disk_address};
>>>   	my $size = $disk->{virtual_size};
>>>   	my $path = $disk->{relative_path};
>>> +	my $volid;
>>> +	if ($isOva) {
>>> +	    $volid = "$storeid:$volname/$path";
>>> +	} else {
>>> +	    $volid = "$storeid:import/$path",
>>> +	}
>>>   	$disks->{$id} = {
>>> -	    volid => "$storeid:import/$path",
>>> +	    volid => $volid,
>>>   	    defined($size) ? (size => $size) : (),
>>>   	};
>>>       }
>>> diff --git a/src/PVE/Storage/OVF.pm b/src/PVE/Storage/OVF.pm
>>> index 4a322b9..fb850a8 100644
>>> --- a/src/PVE/Storage/OVF.pm
>>> +++ b/src/PVE/Storage/OVF.pm
>>> @@ -85,11 +85,37 @@ sub id_to_pve {
>>>       }
>>>   }
>>>   
>>> +# technically defined in DSP0004 (https://www.dmtf.org/dsp/DSP0004) as an ABNF
>>> +# but realistically this always takes the form of 'bytes * base^exponent'
>> 
>> The comment wrongly says 'bytes' instead of 'byte' (your test examples
>> confirm this).
>> 
>>> +sub try_parse_capacity_unit {
>>> +    my ($unit_text) = @_;
>>> +
>>> +    if ($unit_text =~ m/^\s*byte\s*\*\s*([0-9]+)\s*\^\s*([0-9]+)\s*$/) {
>> 
>> Fun regex :P
>> 
>>> +	my $base = $1;
>>> +	my $exp = $2;
>>> +	return $base ** $exp;
>>> +    }
>>> +
>>> +    return undef;
>>> +}
>>> +
>> 
>> (...)
>> 
>>> diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
>>> index deaf8b2..ea069ab 100644
>>> --- a/src/PVE/Storage/Plugin.pm
>>> +++ b/src/PVE/Storage/Plugin.pm
>>> @@ -654,6 +654,11 @@ sub parse_volname {
>>>   	return ('backup', $fn);
>>>       } elsif ($volname =~ m!^snippets/([^/]+)$!) {
>>>   	return ('snippets', $1);
>>> +    } elsif ($volname =~ m!^import/([^/]+\.ova)\/([^/]+)$!) {
>>> +	my $archive = $1;
>>> +	my $file = $2;
>>> +	my (undef, $format, undef) = parse_name_dir($file);
>>> +	return ('import', $archive, 0, undef, undef, undef, $format);
>> 
>> So we return the same $name for different things here? Not super happy
>> with that either. If we were to get creative we could say the archive is
>> the "base" of the image, but surely also comes with caveats.
> 
> i'll change this in a v2 should not be necessary
> 
>> 
>>>       } elsif ($volname =~ m!^import/([^/]+$PVE::Storage::IMPORT_EXT_RE_1)$!) {
>>>   	return ('import', $1);
>>>       } elsif ($volname =~ m!^import/([^/]+\.(raw|vmdk|qcow2))$!) {
> 
> 
> 
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 
> 
> 


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH storage 3/9] plugin: dir: handle ova files for import
  2024-04-17 13:10     ` Dominik Csapak
@ 2024-04-17 13:52       ` Fabian Grünbichler
  2024-04-17 14:07         ` Dominik Csapak
  0 siblings, 1 reply; 67+ messages in thread
From: Fabian Grünbichler @ 2024-04-17 13:52 UTC (permalink / raw)
  To: Dominik Csapak, Proxmox VE development discussion

On April 17, 2024 3:10 pm, Dominik Csapak wrote:
> On 4/17/24 14:45, Fabian Grünbichler wrote:
>> On April 16, 2024 3:18 pm, Dominik Csapak wrote:
>>> +sub cleanup_extracted_image {
>> 
>> same for this?
>> 
>>> +    my ($source) = @_;
>>> +
>>> +    if ($source =~ m|^(/.+/\.tmp_[0-9]+_[0-9]+)/[^/]+$|) {
>>> +	my $tmpdir = $1;
>>> +
>>> +	unlink $source or $! == ENOENT or die "removing image $source failed: $!\n";
>>> +	rmdir $tmpdir or $! == ENOENT or die "removing tmpdir $tmpdir failed: $!\n";
>>> +    } else {
>>> +	die "invalid extraced image path '$source'\n";
>> 
>> nit: typo
>> 
>> these are also not discoverable if the error handling in qemu-server
>> failed for some reason.. might be a source of unwanted space
>> consumption..
> 
> any suggestions for better handling that cleanup?
> we could put it at the beginning of each cleanup step, that should
> at least make sure we cleaned up the temporary images

we could extract them into images/XXX/vm-XXX-disk-.. directly (or
rename/move them there after extraction), that way at least they could
be cleaned up via the storage API or rescan + delete (and via a regular
vdisk_free in qemu-server, instead of requiring a special helper).

other than that, I don't think we have an easy way of
- exposing them in list & free_image
- while ensuring nobody deletes them while the import is still going on
  (the target VM ownership checks ensure that at least via the UI if we
  make it an owned volume)

it would also allow skipping the conversion if the storage+format
already match the target spec as well..


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH storage 3/9] plugin: dir: handle ova files for import
  2024-04-17 13:52       ` Fabian Grünbichler
@ 2024-04-17 14:07         ` Dominik Csapak
  2024-04-18  6:46           ` Fabian Grünbichler
  0 siblings, 1 reply; 67+ messages in thread
From: Dominik Csapak @ 2024-04-17 14:07 UTC (permalink / raw)
  To: Fabian Grünbichler, Proxmox VE development discussion

On 4/17/24 15:52, Fabian Grünbichler wrote:
> On April 17, 2024 3:10 pm, Dominik Csapak wrote:
>> On 4/17/24 14:45, Fabian Grünbichler wrote:
>>> On April 16, 2024 3:18 pm, Dominik Csapak wrote:
>>>> +sub cleanup_extracted_image {
>>>
>>> same for this?
>>>
>>>> +    my ($source) = @_;
>>>> +
>>>> +    if ($source =~ m|^(/.+/\.tmp_[0-9]+_[0-9]+)/[^/]+$|) {
>>>> +	my $tmpdir = $1;
>>>> +
>>>> +	unlink $source or $! == ENOENT or die "removing image $source failed: $!\n";
>>>> +	rmdir $tmpdir or $! == ENOENT or die "removing tmpdir $tmpdir failed: $!\n";
>>>> +    } else {
>>>> +	die "invalid extraced image path '$source'\n";
>>>
>>> nit: typo
>>>
>>> these are also not discoverable if the error handling in qemu-server
>>> failed for some reason.. might be a source of unwanted space
>>> consumption..
>>
>> any suggestions for better handling that cleanup?
>> we could put it at the beginning of each cleanup step, that should
>> at least make sure we cleaned up the temporary images
> 
> we could extract them into images/XXX/vm-XXX-disk-.. directly (or
> rename/move them there after extraction), that way at least they could
> be cleaned up via the storage API or rescan + delete (and via a regular
> vdisk_free in qemu-server, instead of requiring a special helper).
> 
> other than that, I don't think we have an easy way of
> - exposing them in list & free_image
> - while ensuring nobody deletes them while the import is still going on
>    (the target VM ownership checks ensure that at least via the UI if we
>    make it an owned volume)
> 
> it would also allow skipping the conversion if the storage+format
> already match the target spec as well..

mhmm that could work, but what if the storage does not have
the 'images' content type enabled? should we simply fail then?


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH storage/qemu-server/pve-manager] implement ova/ovf import for directory type storages
  2024-04-17 13:19   ` Dominik Csapak
@ 2024-04-18  6:40     ` Fabian Grünbichler
  0 siblings, 0 replies; 67+ messages in thread
From: Fabian Grünbichler @ 2024-04-18  6:40 UTC (permalink / raw)
  To: Dominik Csapak, Proxmox VE development discussion

> Dominik Csapak <d.csapak@proxmox.com> hat am 17.04.2024 15:19 CEST geschrieben:
> On 4/17/24 15:11, Fabian Grünbichler wrote:
> > On April 16, 2024 3:18 pm, Dominik Csapak wrote:
> >> This series enables importing ova/ovf from directory based storages,
> >> inclusive upload/download via the webui (ova only).
> >>
> >> It also improves the ovf importer by parsing the ostype, nics, bootorder
> >> (and firmware from vmware exported files).
> >>
> >> I currently opted to move the OVF.pm to pve-storage, since there is no
> >> real other place where we could put it. Building a seperate package
> >> from qemu-servers git repo would also not be ideal, since we still
> >> have a cyclic dev dependency then
> >> (If someone has a better idea how to handle that, please do tell, and
> >> i can do that in a v2)
> >>
> >> There are surely some wrinkles left i did not think of, but all in all,
> >> it should be pretty usable. E.g. i downloaded some ovas, uploaded them
> >> on my cephfs in my virtual cluster, and successfully imported that with
> >> live-import.
> >>
> >> The biggest caveat when importing from ovas is that we have to
> >> temporarily extract the disk images. I opted for doing that into the
> >> import storage, but if we have a better idea where to put that, i can
> >> implement it in a v2 (or as a follow up). For example, we could add a
> >> new 'tmpdir' parameter to the create call and use that for extractig.
> > 
> > something is wrong with the permissions, since the import images are not
> > added to check_volume_access, I can now upload an OVA, but not see it
> > afterwards ;)
> > 
> > I guess if a user has upload rights for improt images
> > (Datastore.AllocateTemplate), they should also be able to see and use
> > (and remove) import images?
> > 
> 
> ah yes, i forgot to add it there.
> 
> but FWICS isos can have the same problem?
> upload only requires 'Datastore.AllocateTemplate' but seeing them requires
> 'Datastore.AllocateSpace' or 'Datastore.Audit'
> 
> is that a mistake?

that's a slightly less problematic variant of a similar issue, yes. Datastore.AllocateSpace and Datastore.Audit are the "weaker cousins" of Datastore.AllocateTemplate, in most configurations if you have the latter you'll also have (one of) the former. IMHO it wouldn't hurt to allow Datastore.AllocateTemplate users access to iso files (and container templates), since they can upload them the behaviour is weird as it is.

for the OVA files right now you'd need Datastore.Allocate, which is a higher privilege than those others. I guess treating OVA files like iso and templates w.r.t. ACLs kind of makes sense, even if they have a slightly bigger attack surface behind the scenes. this would also allow just giving trusted admins the option to upload new OVAs, while allowing users to create VMs based on that trusted set.


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH storage 3/9] plugin: dir: handle ova files for import
  2024-04-17 14:07         ` Dominik Csapak
@ 2024-04-18  6:46           ` Fabian Grünbichler
  0 siblings, 0 replies; 67+ messages in thread
From: Fabian Grünbichler @ 2024-04-18  6:46 UTC (permalink / raw)
  To: Dominik Csapak, Proxmox VE development discussion

> Dominik Csapak <d.csapak@proxmox.com> hat am 17.04.2024 16:07 CEST geschrieben:
> On 4/17/24 15:52, Fabian Grünbichler wrote:
> > On April 17, 2024 3:10 pm, Dominik Csapak wrote:
> >> On 4/17/24 14:45, Fabian Grünbichler wrote:
> >>> On April 16, 2024 3:18 pm, Dominik Csapak wrote:
> >>>> +sub cleanup_extracted_image {
> >>>
> >>> same for this?
> >>>
> >>>> +    my ($source) = @_;
> >>>> +
> >>>> +    if ($source =~ m|^(/.+/\.tmp_[0-9]+_[0-9]+)/[^/]+$|) {
> >>>> +	my $tmpdir = $1;
> >>>> +
> >>>> +	unlink $source or $! == ENOENT or die "removing image $source failed: $!\n";
> >>>> +	rmdir $tmpdir or $! == ENOENT or die "removing tmpdir $tmpdir failed: $!\n";
> >>>> +    } else {
> >>>> +	die "invalid extraced image path '$source'\n";
> >>>
> >>> nit: typo
> >>>
> >>> these are also not discoverable if the error handling in qemu-server
> >>> failed for some reason.. might be a source of unwanted space
> >>> consumption..
> >>
> >> any suggestions for better handling that cleanup?
> >> we could put it at the beginning of each cleanup step, that should
> >> at least make sure we cleaned up the temporary images
> > 
> > we could extract them into images/XXX/vm-XXX-disk-.. directly (or
> > rename/move them there after extraction), that way at least they could
> > be cleaned up via the storage API or rescan + delete (and via a regular
> > vdisk_free in qemu-server, instead of requiring a special helper).
> > 
> > other than that, I don't think we have an easy way of
> > - exposing them in list & free_image
> > - while ensuring nobody deletes them while the import is still going on
> >    (the target VM ownership checks ensure that at least via the UI if we
> >    make it an owned volume)
> > 
> > it would also allow skipping the conversion if the storage+format
> > already match the target spec as well..
> 
> mhmm that could work, but what if the storage does not have
> the 'images' content type enabled? should we simply fail then?

right, that would make it a bit limiting. we could clear the tmpdir on reboots? ;)

it might also be nice (as a follow-up?) to make the tmpdir configurable and/or see what limitations direct streaming actually has (other than live-import not working) - because if the OVA is on NFS/.. right now we incur a lot of back and forth copying..

I wonder if live-import even makes much sense here - if I have to copy/extract the disks anyway before starting the live-import (which then does another copy), I can just as well do a regular import and start the VM afterwards, especially if that saves me one copy action?


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH storage 3/9] plugin: dir: handle ova files for import
  2024-04-17 13:07     ` Dominik Csapak
  2024-04-17 13:39       ` Fabian Grünbichler
@ 2024-04-18  7:22       ` Fiona Ebner
  2024-04-18  7:25         ` Fiona Ebner
  2024-04-18  8:55         ` Fabian Grünbichler
  1 sibling, 2 replies; 67+ messages in thread
From: Fiona Ebner @ 2024-04-18  7:22 UTC (permalink / raw)
  To: Dominik Csapak, Proxmox VE development discussion

Am 17.04.24 um 15:07 schrieb Dominik Csapak:
> On 4/17/24 12:52, Fiona Ebner wrote:
>> Am 16.04.24 um 15:18 schrieb Dominik Csapak:
>>>
>>>    we currently extract into the import storage in a directory named:
>>>    `.tmp_<pid>_<targetvmid>` which should not clash with concurrent
>>>    operations (though we do extract it multiple times then)
>>>
>>
>> Could we do "extract upon upload", "tar upon download" instead? Sure
>> some people surely want to drop the ova manually, but we could tell them
>> they need to extract it first too. Depending on the amount of headache
>> this would save us, it might be worth it.
> 
> we could, but this opens a whole other can of worms, namely
> what to do with conflicting filenames for different ovas?
> 
> we'd then either have to magically match the paths from the ovfs
> to some subdir that don't overlap
> 
> or we'd have to abort everytime we encounter identical disk names
> 
> IMHO this would be less practical than just extract on demand...
> 

Yes, I was thinking about just having a subdir named based on the ova
file (e.g. just strip the extension).

>>> diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
>>> index f8ea93d..bc073ef 100755
>>> --- a/src/PVE/Storage.pm
>>> +++ b/src/PVE/Storage.pm
>>> @@ -2189,4 +2189,63 @@ sub get_import_metadata {
>>>       return $plugin->get_import_metadata($scfg, $volname, $storeid);
>>>   }
>>>   
>>
>> Shouldn't the following three functions call into plugin methods
>> instead? That'd seem much more future-proof to me.
> 
> could be, i just did not want to extend the plugin api for that
> but as fabian wrote, maybe we should put them in qemu-server
> altogether for now?
> 
> (after thinking about it a bit, i'd be in favor of putting it in
> qemu-server, because mainly i don't want to add to the plugin api further)
> 
> what do you think @fiona @fabian?
> 

Doesn't that kinda defeat the purpose to move OVF here? Ideally
qemu-server just uses the import storage API without any knowledge about
how the import content is organized by the storage layer. I mean we
could potentially avoid extending the plugin API by doing the "extract
upon upload". I'd prefer to extend the plugin API, because other future
plugins might also want to offer archive-based import, but if we really
don't want to do it for now, fine by me too.

>>
>>> +sub copy_needs_extraction {
>>> +    my ($volid) = @_;
>>> +    my ($storeid, $volname) = parse_volume_id($volid);
>>> +    my $cfg = config();
>>> +    my $scfg = storage_config($cfg, $storeid);
>>> +    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
>>> +
>>> +    my ($vtype, $name, $vmid, $basename, $basevmid, $isBase,
>>> $file_format) =
>>> +    $plugin->parse_volname($volname);
>>> +
>>> +    return $vtype eq 'import' && defined($file_format);
>>
>> E.g this seems rather hacky, and puts a weird coupling on a future
>> import plugin's parse_volname() function (presence of $file_format).
> 
> would it be better to check the volid again for '.ova/something$' ?
> or do you have a better idea?
> (especially if we want to have this maybe in qemu-server)
> 

IMHO, it's the plugin's job to decide this. The plugin should know how
the import content is organized and nobody else needs to know.


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH storage 3/9] plugin: dir: handle ova files for import
  2024-04-18  7:22       ` Fiona Ebner
@ 2024-04-18  7:25         ` Fiona Ebner
  2024-04-18  8:55         ` Fabian Grünbichler
  1 sibling, 0 replies; 67+ messages in thread
From: Fiona Ebner @ 2024-04-18  7:25 UTC (permalink / raw)
  To: Dominik Csapak, Proxmox VE development discussion

Am 18.04.24 um 09:22 schrieb Fiona Ebner:
>>>> diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
>>>> index f8ea93d..bc073ef 100755
>>>> --- a/src/PVE/Storage.pm
>>>> +++ b/src/PVE/Storage.pm
>>>> @@ -2189,4 +2189,63 @@ sub get_import_metadata {
>>>>       return $plugin->get_import_metadata($scfg, $volname, $storeid);
>>>>   }
>>>>   
>>>
>>> Shouldn't the following three functions call into plugin methods
>>> instead? That'd seem much more future-proof to me.
>>
>> could be, i just did not want to extend the plugin api for that
>> but as fabian wrote, maybe we should put them in qemu-server
>> altogether for now?
>>
>> (after thinking about it a bit, i'd be in favor of putting it in
>> qemu-server, because mainly i don't want to add to the plugin api further)
>>
>> what do you think @fiona @fabian?
>>
> 
> Doesn't that kinda defeat the purpose to move OVF here? Ideally
> qemu-server just uses the import storage API without any knowledge about
> how the import content is organized by the storage layer. I mean we
> could potentially avoid extending the plugin API by doing the "extract
> upon upload". I'd prefer to extend the plugin API, because other future

To clarify, here I mean: "I'd prefer to extend the plugin API if we
don't go for "extract upon upload"".

> plugins might also want to offer archive-based import, but if we really
> don't want to do it for now, fine by me too.
> 


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH storage 4/9] ovf: implement parsing the ostype
  2024-04-17 13:14     ` Dominik Csapak
@ 2024-04-18  7:31       ` Fiona Ebner
  0 siblings, 0 replies; 67+ messages in thread
From: Fiona Ebner @ 2024-04-18  7:31 UTC (permalink / raw)
  To: Dominik Csapak, Proxmox VE development discussion

Am 17.04.24 um 15:14 schrieb Dominik Csapak:
> On 4/17/24 13:32, Fiona Ebner wrote:
>> Am 16.04.24 um 15:18 schrieb Dominik Csapak:
>>> use the standards info about the ostypes to map to our own
>>> (see comment for link to the relevant part of the dmtf schema)
>>>
>>> every type that is not listed we map to 'other', so no need to have it
>>> in a list.
>>>
>>> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
>>>
>> Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
>>
>>> diff --git a/src/test/run_ovf_tests.pl b/src/test/run_ovf_tests.pl
>>> index 1ef78cc..e949c15 100755
>>> --- a/src/test/run_ovf_tests.pl
>>> +++ b/src/test/run_ovf_tests.pl
>>> @@ -59,13 +59,16 @@ print "\ntesting vm.conf extraction\n";
>>>   is($win2008->{qm}->{name}, 'Win2008-R2x64', 'win2008 VM name is
>>> correct');
>>>   is($win2008->{qm}->{memory}, '2048', 'win2008 VM memory is correct');
>>>   is($win2008->{qm}->{cores}, '1', 'win2008 VM cores are correct');
>>> +is($win2008->{qm}->{ostype}, 'win7', 'win2008 VM ostype is correcty');
>>>     is($win10->{qm}->{name}, 'Win10-Liz', 'win10 VM name is correct');
>>>   is($win10->{qm}->{memory}, '6144', 'win10 VM memory is correct');
>>>   is($win10->{qm}->{cores}, '4', 'win10 VM cores are correct');
>>> +is($win10->{qm}->{ostype}, 'other', 'win10 VM ostype is correct');
>>
>> Yes, 'other', because the ovf config has id=1, but is there a special
>> reason why? Maybe worth a comment here and below to avoid potential
>> confusion.
> 
> my guess is that the ovf spec did not include windows 10 yet (or something
> similar like the esxi exporter not knowing the newest spec)
> 
> and i did not want to change the testcase just for this
> 

That's fine. But everybody reading this in the future will wonder "why
is win10 not detected as win10?", so a comment would be nice to have.

>>
>>>     is($win10noNs->{qm}->{name}, 'Win10-Liz', 'win10 VM (no default
>>> rasd NS) name is correct');
>>>   is($win10noNs->{qm}->{memory}, '6144', 'win10 VM (no default rasd
>>> NS) memory is correct');
>>>   is($win10noNs->{qm}->{cores}, '4', 'win10 VM (no default rasd NS)
>>> cores are correct');
>>> +is($win10noNs->{qm}->{ostype}, 'other', 'win10 VM (no default rasd
>>> NS) ostype is correct');
>>>     done_testing();
> 
> 


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH storage 8/9] api: allow ova upload/download
  2024-04-16 13:19 ` [pve-devel] [PATCH storage 8/9] api: allow ova upload/download Dominik Csapak
@ 2024-04-18  8:05   ` Fiona Ebner
  0 siblings, 0 replies; 67+ messages in thread
From: Fiona Ebner @ 2024-04-18  8:05 UTC (permalink / raw)
  To: Proxmox VE development discussion, Dominik Csapak

Am 16.04.24 um 15:19 schrieb Dominik Csapak:
> introducing a seperate regex that only contains ova, since

s/seperate/separate/

> upload/downloading ovfs does not make sense (since the disks are then
> missing).
> 
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>

With my single comment below addressed:

Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>

> ---
>  src/PVE/API2/Storage/Status.pm | 14 ++++++++++++--
>  src/PVE/Storage.pm             | 11 +++++++++++
>  2 files changed, 23 insertions(+), 2 deletions(-)
> 
> diff --git a/src/PVE/API2/Storage/Status.pm b/src/PVE/API2/Storage/Status.pm
> index 77ed57c..14d6fe8 100644
> --- a/src/PVE/API2/Storage/Status.pm
> +++ b/src/PVE/API2/Storage/Status.pm
> @@ -382,7 +382,7 @@ __PACKAGE__->register_method ({

Description above here should be updated to mention OVAs

>  	    content => {
>  		description => "Content type.",
>  		type => 'string', format => 'pve-storage-content',
> -		enum => ['iso', 'vztmpl'],
> +		enum => ['iso', 'vztmpl', 'import'],
>  	    },
>  	    filename => {
>  		description => "The name of the file to create. Caution: This will be normalized!",


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH storage 7/9] ovf: implement parsing nics
  2024-04-16 13:19 ` [pve-devel] [PATCH storage 7/9] ovf: implement parsing nics Dominik Csapak
  2024-04-17 12:09   ` Fiona Ebner
@ 2024-04-18  8:22   ` Fiona Ebner
  1 sibling, 0 replies; 67+ messages in thread
From: Fiona Ebner @ 2024-04-18  8:22 UTC (permalink / raw)
  To: Proxmox VE development discussion, Dominik Csapak

Am 16.04.24 um 15:19 schrieb Dominik Csapak:
> @@ -355,9 +361,21 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
>  
>      $qm->{boot} = "order=" . join(';', @$boot);
>  
> +    my $nic_id = dtmf_name_to_id('Ethernet Adapter');
> +    my $xpath_find_nics = "/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/ovf:Item[rasd:ResourceType=${nic_id}]";
> +    my @nic_items = $xpc->findnodes($xpath_find_nics);
> +
> +    my $net = {};
> +
> +    my $net_count = 0;
> +    foreach my $item_node (@nic_items) {
> +	my $model = $xpc->findvalue('rasd:ResourceSubType', $item_node);
> +	$model = lc($model);
> +	$model = 'vmxnet3' if ! grep $model, @$allowed_nic_models;

Noticed another issue while testing. This doesn't work and should be

> $model = 'vmxnet3' if !grep { $_ eq $model } @$allowed_nic_models;

> +	$net->{"net${net_count}"} = { model => $model };
>      }
>  
> -    return {qm => $qm, disks => \@disks};
> +    return {qm => $qm, disks => \@disks, net => $net};
>  }
>  
>  1;


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH storage 9/9] plugin: enable import for nfs/btfs/cifs/cephfs
  2024-04-16 13:19 ` [pve-devel] [PATCH storage 9/9] plugin: enable import for nfs/btfs/cifs/cephfs Dominik Csapak
@ 2024-04-18  8:43   ` Fiona Ebner
  0 siblings, 0 replies; 67+ messages in thread
From: Fiona Ebner @ 2024-04-18  8:43 UTC (permalink / raw)
  To: Proxmox VE development discussion, Dominik Csapak

s/btfs/btrfs/

What about GlusterFS? Or is more required to add it there?

Am 16.04.24 um 15:19 schrieb Dominik Csapak:
> and reuse the DirPlugin implementation
> 
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>

Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH qemu-server 1/3] api: delete unused OVF.pm
  2024-04-16 13:19 ` [pve-devel] [PATCH qemu-server 1/3] api: delete unused OVF.pm Dominik Csapak
@ 2024-04-18  8:52   ` Fiona Ebner
  2024-04-18  8:57     ` Dominik Csapak
  0 siblings, 1 reply; 67+ messages in thread
From: Fiona Ebner @ 2024-04-18  8:52 UTC (permalink / raw)
  To: Proxmox VE development discussion, Dominik Csapak

Am 16.04.24 um 15:19 schrieb Dominik Csapak:
> the api part was never in use by anything
> 

We don't know for sure if there is not some external client that makes
use of it. But I also think we can drop it and wait for somebody to
complain.

> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>

Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH storage 3/9] plugin: dir: handle ova files for import
  2024-04-18  7:22       ` Fiona Ebner
  2024-04-18  7:25         ` Fiona Ebner
@ 2024-04-18  8:55         ` Fabian Grünbichler
  1 sibling, 0 replies; 67+ messages in thread
From: Fabian Grünbichler @ 2024-04-18  8:55 UTC (permalink / raw)
  To: Dominik Csapak, Proxmox VE development discussion

On April 18, 2024 9:22 am, Fiona Ebner wrote:
> Am 17.04.24 um 15:07 schrieb Dominik Csapak:
>> On 4/17/24 12:52, Fiona Ebner wrote:
>>> Am 16.04.24 um 15:18 schrieb Dominik Csapak:
>>>>
>>>>    we currently extract into the import storage in a directory named:
>>>>    `.tmp_<pid>_<targetvmid>` which should not clash with concurrent
>>>>    operations (though we do extract it multiple times then)
>>>>
>>>
>>> Could we do "extract upon upload", "tar upon download" instead? Sure
>>> some people surely want to drop the ova manually, but we could tell them
>>> they need to extract it first too. Depending on the amount of headache
>>> this would save us, it might be worth it.
>> 
>> we could, but this opens a whole other can of worms, namely
>> what to do with conflicting filenames for different ovas?
>> 
>> we'd then either have to magically match the paths from the ovfs
>> to some subdir that don't overlap
>> 
>> or we'd have to abort everytime we encounter identical disk names
>> 
>> IMHO this would be less practical than just extract on demand...
>> 
> 
> Yes, I was thinking about just having a subdir named based on the ova
> file (e.g. just strip the extension).
> 
>>>> diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
>>>> index f8ea93d..bc073ef 100755
>>>> --- a/src/PVE/Storage.pm
>>>> +++ b/src/PVE/Storage.pm
>>>> @@ -2189,4 +2189,63 @@ sub get_import_metadata {
>>>>       return $plugin->get_import_metadata($scfg, $volname, $storeid);
>>>>   }
>>>>   
>>>
>>> Shouldn't the following three functions call into plugin methods
>>> instead? That'd seem much more future-proof to me.
>> 
>> could be, i just did not want to extend the plugin api for that
>> but as fabian wrote, maybe we should put them in qemu-server
>> altogether for now?
>> 
>> (after thinking about it a bit, i'd be in favor of putting it in
>> qemu-server, because mainly i don't want to add to the plugin api further)
>> 
>> what do you think @fiona @fabian?
>> 
> 
> Doesn't that kinda defeat the purpose to move OVF here? Ideally
> qemu-server just uses the import storage API without any knowledge about
> how the import content is organized by the storage layer. I mean we
> could potentially avoid extending the plugin API by doing the "extract
> upon upload". I'd prefer to extend the plugin API, because other future
> plugins might also want to offer archive-based import, but if we really
> don't want to do it for now, fine by me too.

I am not convinved of that - the import sits in a weird place between
storage and qemu-server, it's basically a layering violation already ;)
and we have lots of other places where qemu-server second-guesses the
storage layer/does custom things..

>>>> +sub copy_needs_extraction {
>>>> +    my ($volid) = @_;
>>>> +    my ($storeid, $volname) = parse_volume_id($volid);
>>>> +    my $cfg = config();
>>>> +    my $scfg = storage_config($cfg, $storeid);
>>>> +    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
>>>> +
>>>> +    my ($vtype, $name, $vmid, $basename, $basevmid, $isBase,
>>>> $file_format) =
>>>> +    $plugin->parse_volname($volname);
>>>> +
>>>> +    return $vtype eq 'import' && defined($file_format);
>>>
>>> E.g this seems rather hacky, and puts a weird coupling on a future
>>> import plugin's parse_volname() function (presence of $file_format).
>> 
>> would it be better to check the volid again for '.ova/something$' ?
>> or do you have a better idea?
>> (especially if we want to have this maybe in qemu-server)
>> 
> 
> IMHO, it's the plugin's job to decide this. The plugin should know how
> the import content is organized and nobody else needs to know.

I'd dislike moving it into the plugin API for the same reason I dislike
it being in PVE::Storage. it should live in some import-specific module
(whether that lives in pve-storage or qemu-server).


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH qemu-server 1/3] api: delete unused OVF.pm
  2024-04-18  8:52   ` Fiona Ebner
@ 2024-04-18  8:57     ` Dominik Csapak
  2024-04-18  9:03       ` Fiona Ebner
  0 siblings, 1 reply; 67+ messages in thread
From: Dominik Csapak @ 2024-04-18  8:57 UTC (permalink / raw)
  To: Fiona Ebner, Proxmox VE development discussion

On 4/18/24 10:52, Fiona Ebner wrote:
> Am 16.04.24 um 15:19 schrieb Dominik Csapak:
>> the api part was never in use by anything
>>
> 
> We don't know for sure if there is not some external client that makes
> use of it. But I also think we can drop it and wait for somebody to
> complain.
> 
>> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> 
> Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>

AFAICS it was not included in the api tree since it was not used anywhere
so it was just shipped but never used ?


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH qemu-server 1/3] api: delete unused OVF.pm
  2024-04-18  8:57     ` Dominik Csapak
@ 2024-04-18  9:03       ` Fiona Ebner
  0 siblings, 0 replies; 67+ messages in thread
From: Fiona Ebner @ 2024-04-18  9:03 UTC (permalink / raw)
  To: Dominik Csapak, Proxmox VE development discussion

Am 18.04.24 um 10:57 schrieb Dominik Csapak:
> On 4/18/24 10:52, Fiona Ebner wrote:
>> Am 16.04.24 um 15:19 schrieb Dominik Csapak:
>>> the api part was never in use by anything
>>>
>>
>> We don't know for sure if there is not some external client that makes
>> use of it. But I also think we can drop it and wait for somebody to
>> complain.
>>
>>> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
>>
>> Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
> 
> AFAICS it was not included in the api tree since it was not used anywhere
> so it was just shipped but never used ?
> 

Oh, you're right. That would've been done by the old UI series for OVF
import which never made it in.


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH qemu-server 2/3] use OVF from Storage
  2024-04-16 13:19 ` [pve-devel] [PATCH qemu-server 2/3] use OVF from Storage Dominik Csapak
@ 2024-04-18  9:07   ` Fiona Ebner
  0 siblings, 0 replies; 67+ messages in thread
From: Fiona Ebner @ 2024-04-18  9:07 UTC (permalink / raw)
  To: Proxmox VE development discussion, Dominik Csapak

Am 16.04.24 um 15:19 schrieb Dominik Csapak:
> and delete it here (incl tests; they live in pve-storage now).
> 
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>

Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>

> diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
> index b105830f..d1d35800 100755
> --- a/PVE/CLI/qm.pm
> +++ b/PVE/CLI/qm.pm
> @@ -28,13 +28,13 @@ use PVE::Tools qw(extract_param file_get_contents);
>  
>  use PVE::API2::Qemu::Agent;
>  use PVE::API2::Qemu;
> +use PVE::Storage::OVF;

Nit: not ordered alphabetically

>  use PVE::QemuConfig;
>  use PVE::QemuServer::Drive;
>  use PVE::QemuServer::Helpers;
>  use PVE::QemuServer::Agent qw(agent_available);
>  use PVE::QemuServer::ImportDisk;
>  use PVE::QemuServer::Monitor qw(mon_cmd);
> -use PVE::QemuServer::OVF;
>  use PVE::QemuServer;
>  
>  use PVE::CLIHandler;


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH storage/qemu-server/pve-manager] implement ova/ovf import for directory type storages
  2024-04-16 13:18 [pve-devel] [PATCH storage/qemu-server/pve-manager] implement ova/ovf import for directory type storages Dominik Csapak
                   ` (16 preceding siblings ...)
  2024-04-17 13:11 ` [pve-devel] [PATCH storage/qemu-server/pve-manager] implement ova/ovf import for directory " Fabian Grünbichler
@ 2024-04-18  9:27 ` Dominik Csapak
  2024-04-18 10:35   ` Fiona Ebner
  17 siblings, 1 reply; 67+ messages in thread
From: Dominik Csapak @ 2024-04-18  9:27 UTC (permalink / raw)
  To: pve-devel

ok after a bit of thinking and discussing off-list
the plan to go forward from my side is this:

(please tell if there is something obviously wrong with it or you'd
strongly prefer something differently)

extract on demand vs on upload:
  i'd go with extract on demand because managing the extraction of the tarball + subdir etc is not 
really a win in my book, since we have to have most safeguards anyway and we have the same
issue of where to store/map it etc. also it's not convenient for users that have already
a bunch of ovas and want to store them in a central place, now they'd have to extract
them just for us (and importing should be as small a hassle as it can be)

for placing the extract code, i'd strongly prefer the (future) PVE::GuestImport namespace in 
pve-storage, as that does not pollute the plugin api with irrelevant stuff and is relatively
far away from qemu-server (so we could reuse it later for other things if we need to)

as for the extraction/cleanup step:
i'll reuse the 'images' part of the storage to extract it there with a valid vm disk name.
that way if the cleanup fails, it can be deleted from the ui at least
if the storage does not have an images content type or the user does not have the relevant
privileges, i'd force the user to provide (with a new parameter for creating) a file based
storage with content type images. that can be shown in the gui only for ova imports.
(and would default to the import storage if possible)

i think that were all the "big" questions for this series, please do tell if i forgot something ;)
if no one objects to these things (or has even better ideas to solve some of these),
i'd get started on that ASAP


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH qemu-server 3/3] api: create: implement extracting disks when needed for import-from
  2024-04-16 13:19 ` [pve-devel] [PATCH qemu-server 3/3] api: create: implement extracting disks when needed for import-from Dominik Csapak
@ 2024-04-18  9:41   ` Fiona Ebner
  2024-04-18  9:48     ` Dominik Csapak
  0 siblings, 1 reply; 67+ messages in thread
From: Fiona Ebner @ 2024-04-18  9:41 UTC (permalink / raw)
  To: Proxmox VE development discussion, Dominik Csapak

Am 16.04.24 um 15:19 schrieb Dominik Csapak:
> @@ -391,6 +392,13 @@ my sub create_disks : prototype($$$$$$$$$$) {
>  
>  		$needs_creation = $live_import;
>  
> +		if (PVE::Storage::copy_needs_extraction($source)) { # needs extraction beforehand
> +		    print "extracting $source\n";
> +		    $source = PVE::Storage::extract_disk_from_import_file($source, $vmid);
> +		    print "finished extracting to $source\n";
> +		    push @$delete_sources, $source;
> +		}
> +

This breaks import from an absolute path: copy_needs_extraction()
expects to be called with a PVE-managed volid, so the above should be
moved into the if below.

>  		if (PVE::Storage::parse_volume_id($source, 1)) { # PVE-managed volume
>  		    if ($live_import && $ds ne 'efidisk0') {
>  			my $path = PVE::Storage::path($storecfg, $source)
> @@ -514,13 +522,14 @@ my sub create_disks : prototype($$$$$$$$$$) {
>  	    eval { PVE::Storage::vdisk_free($storecfg, $volid); };
>  	    warn $@ if $@;
>  	}
> +	PVE::QemuServer::Helpers::cleanup_extracted_images($delete_sources);
>  	die $err;
>      }
>  
>      # don't return empty import mappings
>      $live_import_mapping = undef if !%$live_import_mapping;
>  
> -    return ($vollist, $res, $live_import_mapping);
> +    return ($vollist, $res, $live_import_mapping, $delete_sources);
>  };
>  
>  my $check_cpu_model_access = sub {

The second caller of create_disks(), i.e. when updating an existing VM,
is not updated to handle $delete_sources. (You can also do a disk
import-from from an OVA for an existing VM).

When I tested that my suspicion is correct I didn't notice initially
that the temporary dirs were hidden, should we really make them so hard
to find?


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH qemu-server 3/3] api: create: implement extracting disks when needed for import-from
  2024-04-18  9:41   ` Fiona Ebner
@ 2024-04-18  9:48     ` Dominik Csapak
  2024-04-18  9:55       ` Fiona Ebner
  0 siblings, 1 reply; 67+ messages in thread
From: Dominik Csapak @ 2024-04-18  9:48 UTC (permalink / raw)
  To: Fiona Ebner, Proxmox VE development discussion

On 4/18/24 11:41, Fiona Ebner wrote:
> Am 16.04.24 um 15:19 schrieb Dominik Csapak:
>> @@ -391,6 +392,13 @@ my sub create_disks : prototype($$$$$$$$$$) {
>>   
>>   		$needs_creation = $live_import;
>>   
>> +		if (PVE::Storage::copy_needs_extraction($source)) { # needs extraction beforehand
>> +		    print "extracting $source\n";
>> +		    $source = PVE::Storage::extract_disk_from_import_file($source, $vmid);
>> +		    print "finished extracting to $source\n";
>> +		    push @$delete_sources, $source;
>> +		}
>> +
> 
> This breaks import from an absolute path: copy_needs_extraction()
> expects to be called with a PVE-managed volid, so the above should be
> moved into the if below.

true, that will be fixed in the next iteration since we then get a
pve managed volid back after extraction
(see my answer to the cover letter)

> 
>>   		if (PVE::Storage::parse_volume_id($source, 1)) { # PVE-managed volume
>>   		    if ($live_import && $ds ne 'efidisk0') {
>>   			my $path = PVE::Storage::path($storecfg, $source)
>> @@ -514,13 +522,14 @@ my sub create_disks : prototype($$$$$$$$$$) {
>>   	    eval { PVE::Storage::vdisk_free($storecfg, $volid); };
>>   	    warn $@ if $@;
>>   	}
>> +	PVE::QemuServer::Helpers::cleanup_extracted_images($delete_sources);
>>   	die $err;
>>       }
>>   
>>       # don't return empty import mappings
>>       $live_import_mapping = undef if !%$live_import_mapping;
>>   
>> -    return ($vollist, $res, $live_import_mapping);
>> +    return ($vollist, $res, $live_import_mapping, $delete_sources);
>>   };
>>   
>>   my $check_cpu_model_access = sub {
> 
> The second caller of create_disks(), i.e. when updating an existing VM,
> is not updated to handle $delete_sources. (You can also do a disk
> import-from from an OVA for an existing VM).
> 
> When I tested that my suspicion is correct I didn't notice initially
> that the temporary dirs were hidden, should we really make them so hard
> to find?

see my recent answer to the cover letter, this shouldn't be an issue when
we put the extracted image into a valid image path on the storage


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH qemu-server 3/3] api: create: implement extracting disks when needed for import-from
  2024-04-18  9:48     ` Dominik Csapak
@ 2024-04-18  9:55       ` Fiona Ebner
  2024-04-18  9:58         ` Dominik Csapak
  0 siblings, 1 reply; 67+ messages in thread
From: Fiona Ebner @ 2024-04-18  9:55 UTC (permalink / raw)
  To: Dominik Csapak, Proxmox VE development discussion



Am 18.04.24 um 11:48 schrieb Dominik Csapak:
> On 4/18/24 11:41, Fiona Ebner wrote:
>> Am 16.04.24 um 15:19 schrieb Dominik Csapak:
>>> @@ -391,6 +392,13 @@ my sub create_disks : prototype($$$$$$$$$$) {
>>>             $needs_creation = $live_import;
>>>   +        if (PVE::Storage::copy_needs_extraction($source)) { #
>>> needs extraction beforehand
>>> +            print "extracting $source\n";
>>> +            $source =
>>> PVE::Storage::extract_disk_from_import_file($source, $vmid);
>>> +            print "finished extracting to $source\n";
>>> +            push @$delete_sources, $source;
>>> +        }
>>> +
>>
>> This breaks import from an absolute path: copy_needs_extraction()
>> expects to be called with a PVE-managed volid, so the above should be
>> moved into the if below.
> 
> true, that will be fixed in the next iteration since we then get a
> pve managed volid back after extraction
> (see my answer to the cover letter)
> 

Sorry, I don't understand. The breakage is for import from an absolute
path, because copy_needs_extraction() cannot be called on an absolute
path. Why does it matter whether extraction returns a managed volid or not?

>>
>>>           if (PVE::Storage::parse_volume_id($source, 1)) { #
>>> PVE-managed volume
>>>               if ($live_import && $ds ne 'efidisk0') {
>>>               my $path = PVE::Storage::path($storecfg, $source)
>>> @@ -514,13 +522,14 @@ my sub create_disks : prototype($$$$$$$$$$) {
>>>           eval { PVE::Storage::vdisk_free($storecfg, $volid); };
>>>           warn $@ if $@;
>>>       }
>>> +   
>>> PVE::QemuServer::Helpers::cleanup_extracted_images($delete_sources);
>>>       die $err;
>>>       }
>>>         # don't return empty import mappings
>>>       $live_import_mapping = undef if !%$live_import_mapping;
>>>   -    return ($vollist, $res, $live_import_mapping);
>>> +    return ($vollist, $res, $live_import_mapping, $delete_sources);
>>>   };
>>>     my $check_cpu_model_access = sub {
>>
>> The second caller of create_disks(), i.e. when updating an existing VM,
>> is not updated to handle $delete_sources. (You can also do a disk
>> import-from from an OVA for an existing VM).
>>
>> When I tested that my suspicion is correct I didn't notice initially
>> that the temporary dirs were hidden, should we really make them so hard
>> to find?
> 
> see my recent answer to the cover letter, this shouldn't be an issue when
> we put the extracted image into a valid image path on the storage
> 

But we should still attempt cleanup and not just ignore the
$delete_sources for the second caller.


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH qemu-server 3/3] api: create: implement extracting disks when needed for import-from
  2024-04-18  9:55       ` Fiona Ebner
@ 2024-04-18  9:58         ` Dominik Csapak
  2024-04-18 10:01           ` Fiona Ebner
  0 siblings, 1 reply; 67+ messages in thread
From: Dominik Csapak @ 2024-04-18  9:58 UTC (permalink / raw)
  To: Fiona Ebner, Proxmox VE development discussion

On 4/18/24 11:55, Fiona Ebner wrote:
> 
> 
> Am 18.04.24 um 11:48 schrieb Dominik Csapak:
>> On 4/18/24 11:41, Fiona Ebner wrote:
>>> Am 16.04.24 um 15:19 schrieb Dominik Csapak:
>>>> @@ -391,6 +392,13 @@ my sub create_disks : prototype($$$$$$$$$$) {
>>>>              $needs_creation = $live_import;
>>>>    +        if (PVE::Storage::copy_needs_extraction($source)) { #
>>>> needs extraction beforehand
>>>> +            print "extracting $source\n";
>>>> +            $source =
>>>> PVE::Storage::extract_disk_from_import_file($source, $vmid);
>>>> +            print "finished extracting to $source\n";
>>>> +            push @$delete_sources, $source;
>>>> +        }
>>>> +
>>>
>>> This breaks import from an absolute path: copy_needs_extraction()
>>> expects to be called with a PVE-managed volid, so the above should be
>>> moved into the if below.
>>
>> true, that will be fixed in the next iteration since we then get a
>> pve managed volid back after extraction
>> (see my answer to the cover letter)
>>
> 
> Sorry, I don't understand. The breakage is for import from an absolute
> path, because copy_needs_extraction() cannot be called on an absolute
> path. Why does it matter whether extraction returns a managed volid or not?
> 

sorry i was a step further along in my mind ^^

the reason i put it here was that we got an absolute path back, which
would have been complicated when i'd have put it in the branch

so with my next patch i'll return a volid again and we can safely put
it there as you suggested

>>>
>>>>            if (PVE::Storage::parse_volume_id($source, 1)) { #
>>>> PVE-managed volume
>>>>                if ($live_import && $ds ne 'efidisk0') {
>>>>                my $path = PVE::Storage::path($storecfg, $source)
>>>> @@ -514,13 +522,14 @@ my sub create_disks : prototype($$$$$$$$$$) {
>>>>            eval { PVE::Storage::vdisk_free($storecfg, $volid); };
>>>>            warn $@ if $@;
>>>>        }
>>>> +
>>>> PVE::QemuServer::Helpers::cleanup_extracted_images($delete_sources);
>>>>        die $err;
>>>>        }
>>>>          # don't return empty import mappings
>>>>        $live_import_mapping = undef if !%$live_import_mapping;
>>>>    -    return ($vollist, $res, $live_import_mapping);
>>>> +    return ($vollist, $res, $live_import_mapping, $delete_sources);
>>>>    };
>>>>      my $check_cpu_model_access = sub {
>>>
>>> The second caller of create_disks(), i.e. when updating an existing VM,
>>> is not updated to handle $delete_sources. (You can also do a disk
>>> import-from from an OVA for an existing VM).
>>>
>>> When I tested that my suspicion is correct I didn't notice initially
>>> that the temporary dirs were hidden, should we really make them so hard
>>> to find?
>>
>> see my recent answer to the cover letter, this shouldn't be an issue when
>> we put the extracted image into a valid image path on the storage
>>
> 
> But we should still attempt cleanup and not just ignore the
> $delete_sources for the second caller.

of course we have to clean up for the other case, i just meant
accidentally left over images can be more easily found and deleted

sorry for the confusion!


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH qemu-server 3/3] api: create: implement extracting disks when needed for import-from
  2024-04-18  9:58         ` Dominik Csapak
@ 2024-04-18 10:01           ` Fiona Ebner
  0 siblings, 0 replies; 67+ messages in thread
From: Fiona Ebner @ 2024-04-18 10:01 UTC (permalink / raw)
  To: Dominik Csapak, Proxmox VE development discussion

Am 18.04.24 um 11:58 schrieb Dominik Csapak:
> On 4/18/24 11:55, Fiona Ebner wrote:
>>
>>
>> Am 18.04.24 um 11:48 schrieb Dominik Csapak:
>>> On 4/18/24 11:41, Fiona Ebner wrote:
>>>> Am 16.04.24 um 15:19 schrieb Dominik Csapak:
>>>>> @@ -391,6 +392,13 @@ my sub create_disks : prototype($$$$$$$$$$) {
>>>>>              $needs_creation = $live_import;
>>>>>    +        if (PVE::Storage::copy_needs_extraction($source)) { #
>>>>> needs extraction beforehand
>>>>> +            print "extracting $source\n";
>>>>> +            $source =
>>>>> PVE::Storage::extract_disk_from_import_file($source, $vmid);
>>>>> +            print "finished extracting to $source\n";
>>>>> +            push @$delete_sources, $source;
>>>>> +        }
>>>>> +
>>>>
>>>> This breaks import from an absolute path: copy_needs_extraction()
>>>> expects to be called with a PVE-managed volid, so the above should be
>>>> moved into the if below.
>>>
>>> true, that will be fixed in the next iteration since we then get a
>>> pve managed volid back after extraction
>>> (see my answer to the cover letter)
>>>
>>
>> Sorry, I don't understand. The breakage is for import from an absolute
>> path, because copy_needs_extraction() cannot be called on an absolute
>> path. Why does it matter whether extraction returns a managed volid or
>> not?
>>
> 
> sorry i was a step further along in my mind ^^
> 
> the reason i put it here was that we got an absolute path back, which
> would have been complicated when i'd have put it in the branch
> 
> so with my next patch i'll return a volid again and we can safely put
> it there as you suggested
> 

Ah, I see :)

>>>>
>>>>>            if (PVE::Storage::parse_volume_id($source, 1)) { #
>>>>> PVE-managed volume
>>>>>                if ($live_import && $ds ne 'efidisk0') {
>>>>>                my $path = PVE::Storage::path($storecfg, $source)
>>>>> @@ -514,13 +522,14 @@ my sub create_disks : prototype($$$$$$$$$$) {
>>>>>            eval { PVE::Storage::vdisk_free($storecfg, $volid); };
>>>>>            warn $@ if $@;
>>>>>        }
>>>>> +
>>>>> PVE::QemuServer::Helpers::cleanup_extracted_images($delete_sources);
>>>>>        die $err;
>>>>>        }
>>>>>          # don't return empty import mappings
>>>>>        $live_import_mapping = undef if !%$live_import_mapping;
>>>>>    -    return ($vollist, $res, $live_import_mapping);
>>>>> +    return ($vollist, $res, $live_import_mapping, $delete_sources);
>>>>>    };
>>>>>      my $check_cpu_model_access = sub {
>>>>
>>>> The second caller of create_disks(), i.e. when updating an existing VM,
>>>> is not updated to handle $delete_sources. (You can also do a disk
>>>> import-from from an OVA for an existing VM).
>>>>
>>>> When I tested that my suspicion is correct I didn't notice initially
>>>> that the temporary dirs were hidden, should we really make them so hard
>>>> to find?
>>>
>>> see my recent answer to the cover letter, this shouldn't be an issue
>>> when
>>> we put the extracted image into a valid image path on the storage
>>>
>>
>> But we should still attempt cleanup and not just ignore the
>> $delete_sources for the second caller.
> 
> of course we have to clean up for the other case, i just meant
> accidentally left over images can be more easily found and deleted
> 
> sorry for the confusion!
> 

Sorry for not understanding ;) I see what you mean now, thanks for the
explanations!


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH storage/qemu-server/pve-manager] implement ova/ovf import for directory type storages
  2024-04-18  9:27 ` Dominik Csapak
@ 2024-04-18 10:35   ` Fiona Ebner
  2024-04-18 11:10     ` Dominik Csapak
  2024-04-18 11:17     ` Fabian Grünbichler
  0 siblings, 2 replies; 67+ messages in thread
From: Fiona Ebner @ 2024-04-18 10:35 UTC (permalink / raw)
  To: Proxmox VE development discussion, Dominik Csapak

Am 18.04.24 um 11:27 schrieb Dominik Csapak:
> ok after a bit of thinking and discussing off-list
> the plan to go forward from my side is this:
> 
> (please tell if there is something obviously wrong with it or you'd
> strongly prefer something differently)
> 
> extract on demand vs on upload:
>  i'd go with extract on demand because managing the extraction of the
> tarball + subdir etc is not really a win in my book, since we have to
> have most safeguards anyway and we have the same
> issue of where to store/map it etc. also it's not convenient for users
> that have already
> a bunch of ovas and want to store them in a central place, now they'd
> have to extract
> them just for us (and importing should be as small a hassle as it can be)
> 

The upside is that it would avoid all the extra cleanup handling and be
more efficient for users that want to import from a single OVA multiple
times. But you are right, the downside is also very big.

I'm thinking now, is there no way to expose a file in an OVA/tar without
actually extracting it? I.e. something like

> root@pve8a1 ~ # cat B         
> secret
> root@pve8a1 ~ # tar cf arch.tar A B dir
> root@pve8a1 ~ # losetup --offset 1536 --sizelimit 512 --read-only --show -f arch.tar
> /dev/loop0
> root@pve8a1 ~ # cat /dev/loop0
> secret

but that doesn't seem to work with sizelimit < 512. Not claiming losetup
is a good mechanism for that, just to illustrate the idea.


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH storage/qemu-server/pve-manager] implement ova/ovf import for directory type storages
  2024-04-18 10:35   ` Fiona Ebner
@ 2024-04-18 11:10     ` Dominik Csapak
  2024-04-18 11:13       ` Fiona Ebner
  2024-04-18 11:17     ` Fabian Grünbichler
  1 sibling, 1 reply; 67+ messages in thread
From: Dominik Csapak @ 2024-04-18 11:10 UTC (permalink / raw)
  To: Fiona Ebner, Proxmox VE development discussion

On 4/18/24 12:35, Fiona Ebner wrote:
> Am 18.04.24 um 11:27 schrieb Dominik Csapak:
>> ok after a bit of thinking and discussing off-list
>> the plan to go forward from my side is this:
>>
>> (please tell if there is something obviously wrong with it or you'd
>> strongly prefer something differently)
>>
>> extract on demand vs on upload:
>>   i'd go with extract on demand because managing the extraction of the
>> tarball + subdir etc is not really a win in my book, since we have to
>> have most safeguards anyway and we have the same
>> issue of where to store/map it etc. also it's not convenient for users
>> that have already
>> a bunch of ovas and want to store them in a central place, now they'd
>> have to extract
>> them just for us (and importing should be as small a hassle as it can be)
>>
> 
> The upside is that it would avoid all the extra cleanup handling and be
> more efficient for users that want to import from a single OVA multiple
> times. But you are right, the downside is also very big.
> 
> I'm thinking now, is there no way to expose a file in an OVA/tar without
> actually extracting it? I.e. something like
> 
>> root@pve8a1 ~ # cat B
>> secret
>> root@pve8a1 ~ # tar cf arch.tar A B dir
>> root@pve8a1 ~ # losetup --offset 1536 --sizelimit 512 --read-only --show -f arch.tar
>> /dev/loop0
>> root@pve8a1 ~ # cat /dev/loop0
>> secret
> 
> but that doesn't seem to work with sizelimit < 512. Not claiming losetup
> is a good mechanism for that, just to illustrate the idea.


AFAIU that only works for uncompressed tars and ova can by definition
be compressed, so that won't work reliably

there is archivemount[0], which can fuse mount archives

i tested that, and it's *very* slow for randomly accessing the files inside
(i guess because it must seek much further back to get the compression
state correct)

i tested a qemu-img convert from such a file and it took >10 minutes
for a file that would normally be extracted + converted in under a minute

0: https://github.com/cybernoid/archivemount


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH storage/qemu-server/pve-manager] implement ova/ovf import for directory type storages
  2024-04-18 11:10     ` Dominik Csapak
@ 2024-04-18 11:13       ` Fiona Ebner
  0 siblings, 0 replies; 67+ messages in thread
From: Fiona Ebner @ 2024-04-18 11:13 UTC (permalink / raw)
  To: Dominik Csapak, Proxmox VE development discussion



Am 18.04.24 um 13:10 schrieb Dominik Csapak:
> On 4/18/24 12:35, Fiona Ebner wrote:
>> Am 18.04.24 um 11:27 schrieb Dominik Csapak:
>>> ok after a bit of thinking and discussing off-list
>>> the plan to go forward from my side is this:
>>>
>>> (please tell if there is something obviously wrong with it or you'd
>>> strongly prefer something differently)
>>>
>>> extract on demand vs on upload:
>>>   i'd go with extract on demand because managing the extraction of the
>>> tarball + subdir etc is not really a win in my book, since we have to
>>> have most safeguards anyway and we have the same
>>> issue of where to store/map it etc. also it's not convenient for users
>>> that have already
>>> a bunch of ovas and want to store them in a central place, now they'd
>>> have to extract
>>> them just for us (and importing should be as small a hassle as it can
>>> be)
>>>
>>
>> The upside is that it would avoid all the extra cleanup handling and be
>> more efficient for users that want to import from a single OVA multiple
>> times. But you are right, the downside is also very big.
>>
>> I'm thinking now, is there no way to expose a file in an OVA/tar without
>> actually extracting it? I.e. something like
>>
>>> root@pve8a1 ~ # cat B
>>> secret
>>> root@pve8a1 ~ # tar cf arch.tar A B dir
>>> root@pve8a1 ~ # losetup --offset 1536 --sizelimit 512 --read-only
>>> --show -f arch.tar
>>> /dev/loop0
>>> root@pve8a1 ~ # cat /dev/loop0
>>> secret
>>
>> but that doesn't seem to work with sizelimit < 512. Not claiming losetup
>> is a good mechanism for that, just to illustrate the idea.
> 
> 
> AFAIU that only works for uncompressed tars and ova can by definition
> be compressed, so that won't work reliably
> 

Okay, I didn't know that. Then it's not going to be nice and easy of course.

> there is archivemount[0], which can fuse mount archives
> 
> i tested that, and it's *very* slow for randomly accessing the files inside
> (i guess because it must seek much further back to get the compression
> state correct)
> 

Yes, makes sense.

> i tested a qemu-img convert from such a file and it took >10 minutes
> for a file that would normally be extracted + converted in under a minute
> 
> 0: https://github.com/cybernoid/archivemount
> 


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH storage/qemu-server/pve-manager] implement ova/ovf import for directory type storages
  2024-04-18 10:35   ` Fiona Ebner
  2024-04-18 11:10     ` Dominik Csapak
@ 2024-04-18 11:17     ` Fabian Grünbichler
  1 sibling, 0 replies; 67+ messages in thread
From: Fabian Grünbichler @ 2024-04-18 11:17 UTC (permalink / raw)
  To: Dominik Csapak, Proxmox VE development discussion

On April 18, 2024 12:35 pm, Fiona Ebner wrote:
> Am 18.04.24 um 11:27 schrieb Dominik Csapak:
>> ok after a bit of thinking and discussing off-list
>> the plan to go forward from my side is this:
>> 
>> (please tell if there is something obviously wrong with it or you'd
>> strongly prefer something differently)
>> 
>> extract on demand vs on upload:
>>  i'd go with extract on demand because managing the extraction of the
>> tarball + subdir etc is not really a win in my book, since we have to
>> have most safeguards anyway and we have the same
>> issue of where to store/map it etc. also it's not convenient for users
>> that have already
>> a bunch of ovas and want to store them in a central place, now they'd
>> have to extract
>> them just for us (and importing should be as small a hassle as it can be)
>> 
> 
> The upside is that it would avoid all the extra cleanup handling and be
> more efficient for users that want to import from a single OVA multiple
> times. But you are right, the downside is also very big.
> 
> I'm thinking now, is there no way to expose a file in an OVA/tar without
> actually extracting it? I.e. something like
> 
>> root@pve8a1 ~ # cat B         
>> secret
>> root@pve8a1 ~ # tar cf arch.tar A B dir
>> root@pve8a1 ~ # losetup --offset 1536 --sizelimit 512 --read-only --show -f arch.tar
>> /dev/loop0
>> root@pve8a1 ~ # cat /dev/loop0
>> secret
> 
> but that doesn't seem to work with sizelimit < 512. Not claiming losetup
> is a good mechanism for that, just to illustrate the idea.

there are a few projects that basically offer "tar browsing via FUSE" in
some fashion, not sure about their overhead and/or compat with various
tar variants and features though..


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH manager 4/4] ui: enable upload/download buttons for 'import' type storages
  2024-04-16 13:19 ` [pve-devel] [PATCH manager 4/4] ui: enable upload/download buttons for 'import' type storages Dominik Csapak
  2024-04-17 12:37   ` Fabian Grünbichler
@ 2024-04-18 11:20   ` Fiona Ebner
  2024-04-18 11:23     ` Dominik Csapak
  1 sibling, 1 reply; 67+ messages in thread
From: Fiona Ebner @ 2024-04-18 11:20 UTC (permalink / raw)
  To: Proxmox VE development discussion, Dominik Csapak

Am 16.04.24 um 15:19 schrieb Dominik Csapak:
> diff --git a/www/manager6/window/UploadToStorage.js b/www/manager6/window/UploadToStorage.js
> index 3c5bba88..79a6e8a6 100644
> --- a/www/manager6/window/UploadToStorage.js
> +++ b/www/manager6/window/UploadToStorage.js
> @@ -11,6 +11,7 @@ Ext.define('PVE.window.UploadToStorage', {
>      acceptedExtensions: {
>  	iso: ['.img', '.iso'],
>  	vztmpl: ['.tar.gz', '.tar.xz', '.tar.zst'],
> +	'import': ['ova'],

Nit: not ordered alphabetically, single quotes not required for key

Missing dot before ova

>      },
>  
>      cbindData: function(initialConfig) {


Apart from that, all pve-manager patches:

Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH manager 4/4] ui: enable upload/download buttons for 'import' type storages
  2024-04-18 11:20   ` Fiona Ebner
@ 2024-04-18 11:23     ` Dominik Csapak
  2024-04-18 11:26       ` Fiona Ebner
  0 siblings, 1 reply; 67+ messages in thread
From: Dominik Csapak @ 2024-04-18 11:23 UTC (permalink / raw)
  To: Fiona Ebner, Proxmox VE development discussion

On 4/18/24 13:20, Fiona Ebner wrote:
> Am 16.04.24 um 15:19 schrieb Dominik Csapak:
>> diff --git a/www/manager6/window/UploadToStorage.js b/www/manager6/window/UploadToStorage.js
>> index 3c5bba88..79a6e8a6 100644
>> --- a/www/manager6/window/UploadToStorage.js
>> +++ b/www/manager6/window/UploadToStorage.js
>> @@ -11,6 +11,7 @@ Ext.define('PVE.window.UploadToStorage', {
>>       acceptedExtensions: {
>>   	iso: ['.img', '.iso'],
>>   	vztmpl: ['.tar.gz', '.tar.xz', '.tar.zst'],
>> +	'import': ['ova'],
> 
> Nit: not ordered alphabetically, single quotes not required for key

generally you're right about the quotes, but in this case required
as 'import' is a reserved name and eslint will complain about
unquoted reserved words in that context ;)

> 
> Missing dot before ova
> 
>>       },
>>   
>>       cbindData: function(initialConfig) {
> 
> 
> Apart from that, all pve-manager patches:
> 
> Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [pve-devel] [PATCH manager 4/4] ui: enable upload/download buttons for 'import' type storages
  2024-04-18 11:23     ` Dominik Csapak
@ 2024-04-18 11:26       ` Fiona Ebner
  0 siblings, 0 replies; 67+ messages in thread
From: Fiona Ebner @ 2024-04-18 11:26 UTC (permalink / raw)
  To: Dominik Csapak, Proxmox VE development discussion

Am 18.04.24 um 13:23 schrieb Dominik Csapak:
> On 4/18/24 13:20, Fiona Ebner wrote:
>> Am 16.04.24 um 15:19 schrieb Dominik Csapak:
>>> diff --git a/www/manager6/window/UploadToStorage.js
>>> b/www/manager6/window/UploadToStorage.js
>>> index 3c5bba88..79a6e8a6 100644
>>> --- a/www/manager6/window/UploadToStorage.js
>>> +++ b/www/manager6/window/UploadToStorage.js
>>> @@ -11,6 +11,7 @@ Ext.define('PVE.window.UploadToStorage', {
>>>       acceptedExtensions: {
>>>       iso: ['.img', '.iso'],
>>>       vztmpl: ['.tar.gz', '.tar.xz', '.tar.zst'],
>>> +    'import': ['ova'],
>>
>> Nit: not ordered alphabetically, single quotes not required for key
> 
> generally you're right about the quotes, but in this case required
> as 'import' is a reserved name and eslint will complain about
> unquoted reserved words in that context ;)
> 

Well, you (or rather JS) got me :)

>>
>> Missing dot before ova
>>
>>>       },
>>>         cbindData: function(initialConfig) {
>>
>>
>> Apart from that, all pve-manager patches:
>>
>> Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
> 
> 


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 67+ messages in thread

end of thread, other threads:[~2024-04-18 11:27 UTC | newest]

Thread overview: 67+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-04-16 13:18 [pve-devel] [PATCH storage/qemu-server/pve-manager] implement ova/ovf import for directory type storages Dominik Csapak
2024-04-16 13:18 ` [pve-devel] [PATCH storage 1/9] copy OVF.pm from qemu-server Dominik Csapak
2024-04-16 15:02   ` Thomas Lamprecht
2024-04-17  9:19     ` Fiona Ebner
2024-04-17  9:26       ` Thomas Lamprecht
2024-04-16 13:18 ` [pve-devel] [PATCH storage 2/9] plugin: dir: implement import content type Dominik Csapak
2024-04-17 10:07   ` Fiona Ebner
2024-04-17 10:07     ` Fiona Ebner
2024-04-17 13:13     ` Dominik Csapak
2024-04-17 12:46   ` Fabian Grünbichler
2024-04-16 13:18 ` [pve-devel] [PATCH storage 3/9] plugin: dir: handle ova files for import Dominik Csapak
2024-04-17 10:52   ` Fiona Ebner
2024-04-17 13:07     ` Dominik Csapak
2024-04-17 13:39       ` Fabian Grünbichler
2024-04-18  7:22       ` Fiona Ebner
2024-04-18  7:25         ` Fiona Ebner
2024-04-18  8:55         ` Fabian Grünbichler
2024-04-17 12:45   ` Fabian Grünbichler
2024-04-17 13:10     ` Dominik Csapak
2024-04-17 13:52       ` Fabian Grünbichler
2024-04-17 14:07         ` Dominik Csapak
2024-04-18  6:46           ` Fabian Grünbichler
2024-04-16 13:18 ` [pve-devel] [PATCH storage 4/9] ovf: implement parsing the ostype Dominik Csapak
2024-04-17 11:32   ` Fiona Ebner
2024-04-17 13:14     ` Dominik Csapak
2024-04-18  7:31       ` Fiona Ebner
2024-04-16 13:18 ` [pve-devel] [PATCH storage 5/9] ovf: implement parsing out firmware type Dominik Csapak
2024-04-17 11:43   ` Fiona Ebner
2024-04-16 13:18 ` [pve-devel] [PATCH storage 6/9] ovf: implement rudimentary boot order Dominik Csapak
2024-04-17 11:54   ` Fiona Ebner
2024-04-17 13:15     ` Dominik Csapak
2024-04-16 13:19 ` [pve-devel] [PATCH storage 7/9] ovf: implement parsing nics Dominik Csapak
2024-04-17 12:09   ` Fiona Ebner
2024-04-17 13:16     ` Dominik Csapak
2024-04-18  8:22   ` Fiona Ebner
2024-04-16 13:19 ` [pve-devel] [PATCH storage 8/9] api: allow ova upload/download Dominik Csapak
2024-04-18  8:05   ` Fiona Ebner
2024-04-16 13:19 ` [pve-devel] [PATCH storage 9/9] plugin: enable import for nfs/btfs/cifs/cephfs Dominik Csapak
2024-04-18  8:43   ` Fiona Ebner
2024-04-16 13:19 ` [pve-devel] [PATCH qemu-server 1/3] api: delete unused OVF.pm Dominik Csapak
2024-04-18  8:52   ` Fiona Ebner
2024-04-18  8:57     ` Dominik Csapak
2024-04-18  9:03       ` Fiona Ebner
2024-04-16 13:19 ` [pve-devel] [PATCH qemu-server 2/3] use OVF from Storage Dominik Csapak
2024-04-18  9:07   ` Fiona Ebner
2024-04-16 13:19 ` [pve-devel] [PATCH qemu-server 3/3] api: create: implement extracting disks when needed for import-from Dominik Csapak
2024-04-18  9:41   ` Fiona Ebner
2024-04-18  9:48     ` Dominik Csapak
2024-04-18  9:55       ` Fiona Ebner
2024-04-18  9:58         ` Dominik Csapak
2024-04-18 10:01           ` Fiona Ebner
2024-04-16 13:19 ` [pve-devel] [PATCH manager 1/4] ui: fix special 'import' icon for non-esxi storages Dominik Csapak
2024-04-16 13:19 ` [pve-devel] [PATCH manager 2/4] ui: guest import: add ova-needs-extracting warning text Dominik Csapak
2024-04-16 13:19 ` [pve-devel] [PATCH manager 3/4] ui: enable import content type for relevant storages Dominik Csapak
2024-04-16 13:19 ` [pve-devel] [PATCH manager 4/4] ui: enable upload/download buttons for 'import' type storages Dominik Csapak
2024-04-17 12:37   ` Fabian Grünbichler
2024-04-18 11:20   ` Fiona Ebner
2024-04-18 11:23     ` Dominik Csapak
2024-04-18 11:26       ` Fiona Ebner
2024-04-17 13:11 ` [pve-devel] [PATCH storage/qemu-server/pve-manager] implement ova/ovf import for directory " Fabian Grünbichler
2024-04-17 13:19   ` Dominik Csapak
2024-04-18  6:40     ` Fabian Grünbichler
2024-04-18  9:27 ` Dominik Csapak
2024-04-18 10:35   ` Fiona Ebner
2024-04-18 11:10     ` Dominik Csapak
2024-04-18 11:13       ` Fiona Ebner
2024-04-18 11:17     ` Fabian Grünbichler

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox
Service provided by Proxmox Server Solutions GmbH | Privacy | Legal