* [pve-devel] [PATCH storage/qemu-server/manager v3] implement ova/ovf import for file based storages
@ 2024-04-29 11:21 Dominik Csapak
2024-04-29 11:21 ` [pve-devel] [PATCH storage v3 01/10] copy OVF.pm from qemu-server Dominik Csapak
` (23 more replies)
0 siblings, 24 replies; 38+ messages in thread
From: Dominik Csapak @ 2024-04-29 11:21 UTC (permalink / raw)
To: pve-devel
This series enables importing ova/ovf from directory based storages,
including upload/download via the webui (ova only).
It also improves the ovf importer by parsing the ostype, nics, bootorder
(and firmware from vmware exported files).
I opted to move the OVF.pm to pve-storage, since there is no other
sensible place where we could put it. I put it in a new module
'GuestImport'.
We now extract the images into either a given target storage or into the
import storage's 'images' dir, so accidentally left-over images are
discoverable by the ui/cli.
changes from v2:
* use better 'format' values for embedded images (e.g. ova+vmdk)
* use this format to decide if images should be extracted
* consistent use of the 'safe character' classes when listing
and parsing
* also list vmdk/qcow2/raw images in content listing
(this will be useful when we have a gui for the 'import-from'
in the wizard/disk edit for vms)
* a few gui adaptations
changes from v1:
* move ovf code to GuestImport
* move extract/checking code to GuestImport
* don't return 'image' types from import volumes
* allow only 'safe' characters for filenames of ova/ovfs and inside
* check for non-regular files (e.g. symlinks) after extraction
* add new 'import-extraction-storage' for import
* rename panel in gui for directory storages
* typo fixes
* and probably more, see the individual patches for details
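The post-extraction checks mentioned in the changelog (rejecting symlinks and other non-regular files, and making sure nothing escapes the extraction directory) can be sketched roughly as follows. This is a minimal Python illustration of the general approach only, not the series' actual Perl implementation; the function name and error messages are made up:

```python
import os

def check_extracted(base_dir: str, relpath: str) -> str:
    """Return the absolute path of an extracted file, or raise if it looks unsafe."""
    base = os.path.realpath(base_dir)
    unresolved = os.path.join(base, relpath)
    # reject symlinks a malicious archive may have smuggled in
    if os.path.islink(unresolved):
        raise ValueError(f"{relpath} is a symlink")
    # resolve '..' components and make sure the file stays inside base_dir
    path = os.path.realpath(unresolved)
    if os.path.commonpath([base, path]) != base:
        raise ValueError(f"{relpath} escapes {base_dir}")
    if not os.path.isfile(path):
        raise ValueError(f"{relpath} is not a regular file")
    return path
```

Checking the unresolved path for symlinks first, then the resolved path for containment, covers both a symlink planted by the archive and a plain `../` traversal in a member name.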
pve-storage:
Dominik Csapak (10):
copy OVF.pm from qemu-server
plugin: dir: implement import content type
plugin: dir: handle ova files for import
ovf: implement parsing the ostype
ovf: implement parsing out firmware type
ovf: implement rudimentary boot order
ovf: implement parsing nics
api: allow ova upload/download
plugin: enable import for nfs/btrfs/cifs/cephfs/glusterfs
add 'import' content type to 'check_volume_access'
src/PVE/API2/Storage/Status.pm | 19 +-
src/PVE/GuestImport.pm | 100 +++++
src/PVE/GuestImport/Makefile | 3 +
src/PVE/GuestImport/OVF.pm | 383 ++++++++++++++++++
src/PVE/Makefile | 2 +
src/PVE/Storage.pm | 21 +-
src/PVE/Storage/BTRFSPlugin.pm | 5 +
src/PVE/Storage/CIFSPlugin.pm | 6 +-
src/PVE/Storage/CephFSPlugin.pm | 6 +-
src/PVE/Storage/DirPlugin.pm | 52 ++-
src/PVE/Storage/GlusterfsPlugin.pm | 6 +-
src/PVE/Storage/Makefile | 1 +
src/PVE/Storage/NFSPlugin.pm | 6 +-
src/PVE/Storage/Plugin.pm | 16 +-
src/test/Makefile | 5 +-
src/test/ovf_manifests/Win10-Liz-disk1.vmdk | Bin 0 -> 65536 bytes
src/test/ovf_manifests/Win10-Liz.ovf | 142 +++++++
.../ovf_manifests/Win10-Liz_no_default_ns.ovf | 143 +++++++
.../ovf_manifests/Win_2008_R2_two-disks.ovf | 145 +++++++
src/test/ovf_manifests/disk1.vmdk | Bin 0 -> 65536 bytes
src/test/ovf_manifests/disk2.vmdk | Bin 0 -> 65536 bytes
src/test/parse_volname_test.pm | 33 ++
src/test/path_to_volume_id_test.pm | 21 +
src/test/run_ovf_tests.pl | 85 ++++
24 files changed, 1188 insertions(+), 12 deletions(-)
create mode 100644 src/PVE/GuestImport.pm
create mode 100644 src/PVE/GuestImport/Makefile
create mode 100644 src/PVE/GuestImport/OVF.pm
create mode 100644 src/test/ovf_manifests/Win10-Liz-disk1.vmdk
create mode 100755 src/test/ovf_manifests/Win10-Liz.ovf
create mode 100755 src/test/ovf_manifests/Win10-Liz_no_default_ns.ovf
create mode 100755 src/test/ovf_manifests/Win_2008_R2_two-disks.ovf
create mode 100644 src/test/ovf_manifests/disk1.vmdk
create mode 100644 src/test/ovf_manifests/disk2.vmdk
create mode 100755 src/test/run_ovf_tests.pl
qemu-server:
Dominik Csapak (4):
api: delete unused OVF.pm
use OVF from Storage
api: create: implement extracting disks when needed for import-from
api: create: add 'import-extraction-storage' parameter
PVE/API2/Qemu.pm | 92 ++++++-
PVE/API2/Qemu/Makefile | 2 +-
PVE/API2/Qemu/OVF.pm | 53 ----
PVE/CLI/qm.pm | 4 +-
PVE/QemuServer.pm | 5 +-
PVE/QemuServer/Helpers.pm | 10 +
PVE/QemuServer/Makefile | 1 -
PVE/QemuServer/OVF.pm | 242 ------------------
test/Makefile | 5 +-
test/ovf_manifests/Win10-Liz-disk1.vmdk | Bin 65536 -> 0 bytes
test/ovf_manifests/Win10-Liz.ovf | 142 ----------
.../ovf_manifests/Win10-Liz_no_default_ns.ovf | 142 ----------
test/ovf_manifests/Win_2008_R2_two-disks.ovf | 145 -----------
test/ovf_manifests/disk1.vmdk | Bin 65536 -> 0 bytes
test/ovf_manifests/disk2.vmdk | Bin 65536 -> 0 bytes
test/run_ovf_tests.pl | 71 -----
16 files changed, 96 insertions(+), 818 deletions(-)
delete mode 100644 PVE/API2/Qemu/OVF.pm
delete mode 100644 PVE/QemuServer/OVF.pm
delete mode 100644 test/ovf_manifests/Win10-Liz-disk1.vmdk
delete mode 100755 test/ovf_manifests/Win10-Liz.ovf
delete mode 100755 test/ovf_manifests/Win10-Liz_no_default_ns.ovf
delete mode 100755 test/ovf_manifests/Win_2008_R2_two-disks.ovf
delete mode 100644 test/ovf_manifests/disk1.vmdk
delete mode 100644 test/ovf_manifests/disk2.vmdk
delete mode 100755 test/run_ovf_tests.pl
pve-manager:
Dominik Csapak (9):
ui: fix special 'import' icon for non-esxi storages
ui: guest import: add ova-needs-extracting warning text
ui: enable import content type for relevant storages
ui: enable upload/download/remove buttons for 'import' type storages
ui: disable 'import' button for non importable formats
ui: import: improve rendering of volume names
ui: guest import: add storage selector for ova extraction storage
ui: guest import: change icon/text for non-esxi import storage
ui: import: show size for dir-based storages
www/manager6/Utils.js | 11 +++++++++--
www/manager6/form/ContentTypeSelector.js | 2 +-
www/manager6/storage/Browser.js | 25 ++++++++++++++++++------
www/manager6/storage/CephFSEdit.js | 2 +-
www/manager6/storage/GlusterFsEdit.js | 2 +-
www/manager6/window/GuestImport.js | 24 +++++++++++++++++++++++
www/manager6/window/UploadToStorage.js | 1 +
7 files changed, 56 insertions(+), 11 deletions(-)
--
2.39.2
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
* [pve-devel] [PATCH storage v3 01/10] copy OVF.pm from qemu-server
2024-04-29 11:21 [pve-devel] [PATCH storage/qemu-server/manager v3] implement ova/ovf import for file based storages Dominik Csapak
@ 2024-04-29 11:21 ` Dominik Csapak
2024-05-22 8:56 ` Fabian Grünbichler
2024-05-22 9:35 ` Fabian Grünbichler
2024-04-29 11:21 ` [pve-devel] [PATCH storage v3 02/10] plugin: dir: implement import content type Dominik Csapak
` (22 subsequent siblings)
23 siblings, 2 replies; 38+ messages in thread
From: Dominik Csapak @ 2024-04-29 11:21 UTC (permalink / raw)
To: pve-devel
copies the OVF.pm and relevant ovf tests from qemu-server.
We need it here, it already uses PVE::Storage, and since there is no
intermediary package/repository where we could put it, it seems fitting
here.
Put it in a new GuestImport module.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/PVE/GuestImport/Makefile | 3 +
src/PVE/GuestImport/OVF.pm | 242 ++++++++++++++++++
src/PVE/Makefile | 1 +
src/PVE/Storage/Makefile | 1 +
src/test/Makefile | 5 +-
src/test/ovf_manifests/Win10-Liz-disk1.vmdk | Bin 0 -> 65536 bytes
src/test/ovf_manifests/Win10-Liz.ovf | 142 ++++++++++
.../ovf_manifests/Win10-Liz_no_default_ns.ovf | 142 ++++++++++
.../ovf_manifests/Win_2008_R2_two-disks.ovf | 145 +++++++++++
src/test/ovf_manifests/disk1.vmdk | Bin 0 -> 65536 bytes
src/test/ovf_manifests/disk2.vmdk | Bin 0 -> 65536 bytes
src/test/run_ovf_tests.pl | 71 +++++
12 files changed, 751 insertions(+), 1 deletion(-)
create mode 100644 src/PVE/GuestImport/Makefile
create mode 100644 src/PVE/GuestImport/OVF.pm
create mode 100644 src/test/ovf_manifests/Win10-Liz-disk1.vmdk
create mode 100755 src/test/ovf_manifests/Win10-Liz.ovf
create mode 100755 src/test/ovf_manifests/Win10-Liz_no_default_ns.ovf
create mode 100755 src/test/ovf_manifests/Win_2008_R2_two-disks.ovf
create mode 100644 src/test/ovf_manifests/disk1.vmdk
create mode 100644 src/test/ovf_manifests/disk2.vmdk
create mode 100755 src/test/run_ovf_tests.pl
diff --git a/src/PVE/GuestImport/Makefile b/src/PVE/GuestImport/Makefile
new file mode 100644
index 0000000..5948384
--- /dev/null
+++ b/src/PVE/GuestImport/Makefile
@@ -0,0 +1,3 @@
+.PHONY: install
+install:
+ install -D -m 0644 OVF.pm ${DESTDIR}${PERLDIR}/PVE/GuestImport/OVF.pm
diff --git a/src/PVE/GuestImport/OVF.pm b/src/PVE/GuestImport/OVF.pm
new file mode 100644
index 0000000..055ebf5
--- /dev/null
+++ b/src/PVE/GuestImport/OVF.pm
@@ -0,0 +1,242 @@
+# Open Virtualization Format import routines
+# https://www.dmtf.org/standards/ovf
+package PVE::GuestImport::OVF;
+
+use strict;
+use warnings;
+
+use XML::LibXML;
+use File::Spec;
+use File::Basename;
+use Data::Dumper;
+use Cwd 'realpath';
+
+use PVE::Tools;
+use PVE::Storage;
+
+# map OVF resources types to descriptive strings
+# this will allow us to explore the xml tree without using magic numbers
+# http://schemas.dmtf.org/wbem/cim-html/2/CIM_ResourceAllocationSettingData.html
+my @resources = (
+ { id => 1, dtmf_name => 'Other' },
+ { id => 2, dtmf_name => 'Computer System' },
+ { id => 3, dtmf_name => 'Processor' },
+ { id => 4, dtmf_name => 'Memory' },
+ { id => 5, dtmf_name => 'IDE Controller', pve_type => 'ide' },
+ { id => 6, dtmf_name => 'Parallel SCSI HBA', pve_type => 'scsi' },
+ { id => 7, dtmf_name => 'FC HBA' },
+ { id => 8, dtmf_name => 'iSCSI HBA' },
+ { id => 9, dtmf_name => 'IB HCA' },
+ { id => 10, dtmf_name => 'Ethernet Adapter' },
+ { id => 11, dtmf_name => 'Other Network Adapter' },
+ { id => 12, dtmf_name => 'I/O Slot' },
+ { id => 13, dtmf_name => 'I/O Device' },
+ { id => 14, dtmf_name => 'Floppy Drive' },
+ { id => 15, dtmf_name => 'CD Drive' },
+ { id => 16, dtmf_name => 'DVD drive' },
+ { id => 17, dtmf_name => 'Disk Drive' },
+ { id => 18, dtmf_name => 'Tape Drive' },
+ { id => 19, dtmf_name => 'Storage Extent' },
+ { id => 20, dtmf_name => 'Other storage device', pve_type => 'sata'},
+ { id => 21, dtmf_name => 'Serial port' },
+ { id => 22, dtmf_name => 'Parallel port' },
+ { id => 23, dtmf_name => 'USB Controller' },
+ { id => 24, dtmf_name => 'Graphics controller' },
+ { id => 25, dtmf_name => 'IEEE 1394 Controller' },
+ { id => 26, dtmf_name => 'Partitionable Unit' },
+ { id => 27, dtmf_name => 'Base Partitionable Unit' },
+ { id => 28, dtmf_name => 'Power' },
+ { id => 29, dtmf_name => 'Cooling Capacity' },
+ { id => 30, dtmf_name => 'Ethernet Switch Port' },
+ { id => 31, dtmf_name => 'Logical Disk' },
+ { id => 32, dtmf_name => 'Storage Volume' },
+ { id => 33, dtmf_name => 'Ethernet Connection' },
+ { id => 34, dtmf_name => 'DMTF reserved' },
+ { id => 35, dtmf_name => 'Vendor Reserved'}
+);
+
+sub find_by {
+ my ($key, $param) = @_;
+ foreach my $resource (@resources) {
+ if ($resource->{$key} eq $param) {
+ return ($resource);
+ }
+ }
+ return;
+}
+
+sub dtmf_name_to_id {
+ my ($dtmf_name) = @_;
+ my $found = find_by('dtmf_name', $dtmf_name);
+ if ($found) {
+ return $found->{id};
+ } else {
+ return;
+ }
+}
+
+sub id_to_pve {
+ my ($id) = @_;
+ my $resource = find_by('id', $id);
+ if ($resource) {
+ return $resource->{pve_type};
+ } else {
+ return;
+ }
+}
+
+# returns two references, $qm which holds qm.conf style key/values, and \@disks
+sub parse_ovf {
+ my ($ovf, $debug) = @_;
+
+ my $dom = XML::LibXML->load_xml(location => $ovf, no_blanks => 1);
+
+ # register the xml namespaces in a xpath context object
+ # 'ovf' is the default namespace so it will be prepended to each xml element
+ my $xpc = XML::LibXML::XPathContext->new($dom);
+ $xpc->registerNs('ovf', 'http://schemas.dmtf.org/ovf/envelope/1');
+ $xpc->registerNs('rasd', 'http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData');
+ $xpc->registerNs('vssd', 'http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData');
+
+
+ # hash to save qm.conf parameters
+ my $qm;
+
+ #array to save a disk list
+ my @disks;
+
+ # easy xpath
+ # walk down the dom until we find the matching XML element
+ my $xpath_find_name = "/ovf:Envelope/ovf:VirtualSystem/ovf:Name";
+ my $ovf_name = $xpc->findvalue($xpath_find_name);
+
+ if ($ovf_name) {
+ # PVE::QemuServer::confdesc requires a valid DNS name
+ ($qm->{name} = $ovf_name) =~ s/[^a-zA-Z0-9\-\.]//g;
+ } else {
+ warn "warning: unable to parse the VM name in this OVF manifest, generating a default value\n";
+ }
+
+ # middle level xpath
+ # element[child] search the elements which have this [child]
+ my $processor_id = dtmf_name_to_id('Processor');
+ my $xpath_find_vcpu_count = "/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/ovf:Item[rasd:ResourceType=${processor_id}]/rasd:VirtualQuantity";
+ $qm->{'cores'} = $xpc->findvalue($xpath_find_vcpu_count);
+
+ my $memory_id = dtmf_name_to_id('Memory');
+ my $xpath_find_memory = ("/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/ovf:Item[rasd:ResourceType=${memory_id}]/rasd:VirtualQuantity");
+ $qm->{'memory'} = $xpc->findvalue($xpath_find_memory);
+
+ # middle level xpath
+ # here we expect multiple results, so we do not read the element value with
+ # findvalue() but store multiple elements with findnodes()
+ my $disk_id = dtmf_name_to_id('Disk Drive');
+ my $xpath_find_disks="/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/ovf:Item[rasd:ResourceType=${disk_id}]";
+ my @disk_items = $xpc->findnodes($xpath_find_disks);
+
+ # disk metadata is split across four different xml elements:
+ # * as an Item node of type DiskDrive in the VirtualHardwareSection
+ # * as a Disk node in the DiskSection
+ # * as a File node in the References section
+ # * each Item node also holds a reference to its owning controller
+ #
+ # we iterate over the list of Item nodes of type disk drive, and for each item,
+ # find the corresponding Disk node, File node and owning controller
+ # once all the nodes have been found, we copy the relevant information to
+ # a $pve_disk hash ref, which we push to @disks;
+
+ foreach my $item_node (@disk_items) {
+
+ my $disk_node;
+ my $file_node;
+ my $controller_node;
+ my $pve_disk;
+
+ print "disk item:\n", $item_node->toString(1), "\n" if $debug;
+
+ # from Item, find corresponding Disk node
+ # here the dot means the search should start from the current element in dom
+ my $host_resource = $xpc->findvalue('rasd:HostResource', $item_node);
+ my $disk_section_path;
+ my $disk_id;
+
+ # RFC 3986 "2.3. Unreserved Characters"
+ my $valid_uripath_chars = qr/[[:alnum:]]|[\-\._~]/;
+
+ if ($host_resource =~ m|^ovf:/(${valid_uripath_chars}+)/(${valid_uripath_chars}+)$|) {
+ $disk_section_path = $1;
+ $disk_id = $2;
+ } else {
+ warn "invalid host resource $host_resource, skipping\n";
+ next;
+ }
+ print "disk section path: $disk_section_path and disk id: $disk_id\n" if $debug;
+
+ # tricky xpath
+ # @ means we filter the result query based on the value of an item attribute ( @ = attribute)
+ # @ needs to be escaped to prevent Perl double quote interpolation
+ my $xpath_find_fileref = sprintf("/ovf:Envelope/ovf:DiskSection/\
+ovf:Disk[\@ovf:diskId='%s']/\@ovf:fileRef", $disk_id);
+ my $fileref = $xpc->findvalue($xpath_find_fileref);
+
+ my $valid_url_chars = qr@${valid_uripath_chars}|/@;
+ if (!$fileref || $fileref !~ m/^${valid_url_chars}+$/) {
+ warn "invalid file reference $fileref, skipping\n";
+ next;
+ }
+
+ # from Disk Node, find corresponding filepath
+ my $xpath_find_filepath = sprintf("/ovf:Envelope/ovf:References/ovf:File[\@ovf:id='%s']/\@ovf:href", $fileref);
+ my $filepath = $xpc->findvalue($xpath_find_filepath);
+ if (!$filepath) {
+ warn "invalid file reference $fileref, skipping\n";
+ next;
+ }
+ print "file path: $filepath\n" if $debug;
+
+ # from Item, find owning Controller type
+ my $controller_id = $xpc->findvalue('rasd:Parent', $item_node);
+ my $xpath_find_parent_type = sprintf("/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/\
+ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
+ my $controller_type = $xpc->findvalue($xpath_find_parent_type);
+ if (!$controller_type) {
+ warn "invalid or missing controller for instance $controller_id, skipping\n";
+ next;
+ }
+ print "owning controller type: $controller_type\n" if $debug;
+
+ # extract corresponding Controller node details
+ my $address_on_controller = $xpc->findvalue('rasd:AddressOnParent', $item_node);
+ my $pve_disk_address = id_to_pve($controller_type) . $address_on_controller;
+
+ # resolve symlinks and relative path components
+ # and die if the diskimage is not somewhere under the $ovf path
+ my $ovf_dir = realpath(dirname(File::Spec->rel2abs($ovf)));
+ my $backing_file_path = realpath(join ('/', $ovf_dir, $filepath));
+ if ($backing_file_path !~ /^\Q${ovf_dir}\E(?:\/|$)/) {
+ die "error parsing $filepath, are you using a symlink?\n";
+ }
+
+ if (!-e $backing_file_path) {
+ die "error parsing $filepath, the file does not seem to exist at $backing_file_path\n";
+ }
+
+ ($backing_file_path) = $backing_file_path =~ m|^(/.*)|; # untaint
+
+ my $virtual_size = PVE::Storage::file_size_info($backing_file_path);
+ die "error parsing $backing_file_path, cannot determine file size\n"
+ if !$virtual_size;
+
+ $pve_disk = {
+ disk_address => $pve_disk_address,
+ backing_file => $backing_file_path,
+ virtual_size => $virtual_size
+ };
+ push @disks, $pve_disk;
+
+ }
+
+ return {qm => $qm, disks => \@disks};
+}
+
+1;
diff --git a/src/PVE/Makefile b/src/PVE/Makefile
index d438804..e15a275 100644
--- a/src/PVE/Makefile
+++ b/src/PVE/Makefile
@@ -6,6 +6,7 @@ install:
install -D -m 0644 Diskmanage.pm ${DESTDIR}${PERLDIR}/PVE/Diskmanage.pm
install -D -m 0644 CephConfig.pm ${DESTDIR}${PERLDIR}/PVE/CephConfig.pm
make -C Storage install
+ make -C GuestImport install
make -C API2 install
make -C CLI install
diff --git a/src/PVE/Storage/Makefile b/src/PVE/Storage/Makefile
index d5cc942..2daa0da 100644
--- a/src/PVE/Storage/Makefile
+++ b/src/PVE/Storage/Makefile
@@ -14,6 +14,7 @@ SOURCES= \
PBSPlugin.pm \
BTRFSPlugin.pm \
LvmThinPlugin.pm \
+ OVF.pm \
ESXiPlugin.pm
.PHONY: install
diff --git a/src/test/Makefile b/src/test/Makefile
index c54b10f..12991da 100644
--- a/src/test/Makefile
+++ b/src/test/Makefile
@@ -1,6 +1,6 @@
all: test
-test: test_zfspoolplugin test_disklist test_bwlimit test_plugin
+test: test_zfspoolplugin test_disklist test_bwlimit test_plugin test_ovf
test_zfspoolplugin: run_test_zfspoolplugin.pl
./run_test_zfspoolplugin.pl
@@ -13,3 +13,6 @@ test_bwlimit: run_bwlimit_tests.pl
test_plugin: run_plugin_tests.pl
./run_plugin_tests.pl
+
+test_ovf: run_ovf_tests.pl
+ ./run_ovf_tests.pl
diff --git a/src/test/ovf_manifests/Win10-Liz-disk1.vmdk b/src/test/ovf_manifests/Win10-Liz-disk1.vmdk
new file mode 100644
index 0000000000000000000000000000000000000000..662354a3d1333a2f6c4364005e53bfe7cd8b9044
GIT binary patch
literal 65536
zcmeIvy>HV%7zbeUF`dK)46s<q+^B&Pi6H|eMIb;zP1VdMK8V$P$u?2T)IS|NNta69
zSXw<No$qu%-}$}AUq|21A0<ihr0Gwa-nQ%QGfCR@wmshsN%A;JUhL<u_T%+U7Sd<o
zW^TMU0^M{}R2S(eR@1Ur*Q@eVF^^#r%c@u{hyC#J%V?Nq?*?z)=UG^1Wn9+n(yx6B
z(=ujtJiA)QVP~;guI5EOE2iV-%_??6=%y!^b+aeU_aA6Z4X2azC>{U!a5_FoJCkDB
zKRozW{5{B<Li)YUBEQ&fJe$RRZCRbA$5|CacQiT<A<uvIHbq(g$>yIY=etVNVcI$B
zY@^?CwTN|j)tg?;i)G&AZFqPqoW(5P2K~XUq>9sqVVe!!?y@Y;)^#k~TefEvd2_XU
z^M@5mfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk4^iOdL%ftb
z5g<T-009C72;3>~`p!f^fB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZ
zfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&U
zAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C7
z2oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N
p0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK;Zui`~%h>XmtPp
literal 0
HcmV?d00001
diff --git a/src/test/ovf_manifests/Win10-Liz.ovf b/src/test/ovf_manifests/Win10-Liz.ovf
new file mode 100755
index 0000000..bf4b41a
--- /dev/null
+++ b/src/test/ovf_manifests/Win10-Liz.ovf
@@ -0,0 +1,142 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--Generated by VMware ovftool 4.1.0 (build-2982904), UTC time: 2017-02-07T13:50:15.265014Z-->
+<Envelope vmw:buildId="build-2982904" xmlns="http://schemas.dmtf.org/ovf/envelope/1" xmlns:cim="http://schemas.dmtf.org/wbem/wscim/1/common" xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1" xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" xmlns:vmw="http://www.vmware.com/schema/ovf" xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
+ <References>
+ <File ovf:href="Win10-Liz-disk1.vmdk" ovf:id="file1" ovf:size="9155243008"/>
+ </References>
+ <DiskSection>
+ <Info>Virtual disk information</Info>
+ <Disk ovf:capacity="128" ovf:capacityAllocationUnits="byte * 2^30" ovf:diskId="vmdisk1" ovf:fileRef="file1" ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized" ovf:populatedSize="16798056448"/>
+ </DiskSection>
+ <NetworkSection>
+ <Info>The list of logical networks</Info>
+ <Network ovf:name="bridged">
+ <Description>The bridged network</Description>
+ </Network>
+ </NetworkSection>
+ <VirtualSystem ovf:id="vm">
+ <Info>A virtual machine</Info>
+ <Name>Win10-Liz</Name>
+ <OperatingSystemSection ovf:id="1" vmw:osType="windows9_64Guest">
+ <Info>The kind of installed guest operating system</Info>
+ </OperatingSystemSection>
+ <VirtualHardwareSection>
+ <Info>Virtual hardware requirements</Info>
+ <System>
+ <vssd:ElementName>Virtual Hardware Family</vssd:ElementName>
+ <vssd:InstanceID>0</vssd:InstanceID>
+ <vssd:VirtualSystemIdentifier>Win10-Liz</vssd:VirtualSystemIdentifier>
+ <vssd:VirtualSystemType>vmx-11</vssd:VirtualSystemType>
+ </System>
+ <Item>
+ <rasd:AllocationUnits>hertz * 10^6</rasd:AllocationUnits>
+ <rasd:Description>Number of Virtual CPUs</rasd:Description>
+ <rasd:ElementName>4 virtual CPU(s)</rasd:ElementName>
+ <rasd:InstanceID>1</rasd:InstanceID>
+ <rasd:ResourceType>3</rasd:ResourceType>
+ <rasd:VirtualQuantity>4</rasd:VirtualQuantity>
+ </Item>
+ <Item>
+ <rasd:AllocationUnits>byte * 2^20</rasd:AllocationUnits>
+ <rasd:Description>Memory Size</rasd:Description>
+ <rasd:ElementName>6144MB of memory</rasd:ElementName>
+ <rasd:InstanceID>2</rasd:InstanceID>
+ <rasd:ResourceType>4</rasd:ResourceType>
+ <rasd:VirtualQuantity>6144</rasd:VirtualQuantity>
+ </Item>
+ <Item>
+ <rasd:Address>0</rasd:Address>
+ <rasd:Description>SATA Controller</rasd:Description>
+ <rasd:ElementName>sataController0</rasd:ElementName>
+ <rasd:InstanceID>3</rasd:InstanceID>
+ <rasd:ResourceSubType>vmware.sata.ahci</rasd:ResourceSubType>
+ <rasd:ResourceType>20</rasd:ResourceType>
+ </Item>
+ <Item ovf:required="false">
+ <rasd:Address>0</rasd:Address>
+ <rasd:Description>USB Controller (XHCI)</rasd:Description>
+ <rasd:ElementName>usb3</rasd:ElementName>
+ <rasd:InstanceID>4</rasd:InstanceID>
+ <rasd:ResourceSubType>vmware.usb.xhci</rasd:ResourceSubType>
+ <rasd:ResourceType>23</rasd:ResourceType>
+ </Item>
+ <Item ovf:required="false">
+ <rasd:Address>0</rasd:Address>
+ <rasd:Description>USB Controller (EHCI)</rasd:Description>
+ <rasd:ElementName>usb</rasd:ElementName>
+ <rasd:InstanceID>5</rasd:InstanceID>
+ <rasd:ResourceSubType>vmware.usb.ehci</rasd:ResourceSubType>
+ <rasd:ResourceType>23</rasd:ResourceType>
+ <vmw:Config ovf:required="false" vmw:key="ehciEnabled" vmw:value="true"/>
+ </Item>
+ <Item>
+ <rasd:Address>0</rasd:Address>
+ <rasd:Description>SCSI Controller</rasd:Description>
+ <rasd:ElementName>scsiController0</rasd:ElementName>
+ <rasd:InstanceID>6</rasd:InstanceID>
+ <rasd:ResourceSubType>lsilogicsas</rasd:ResourceSubType>
+ <rasd:ResourceType>6</rasd:ResourceType>
+ </Item>
+ <Item ovf:required="false">
+ <rasd:AutomaticAllocation>true</rasd:AutomaticAllocation>
+ <rasd:ElementName>serial0</rasd:ElementName>
+ <rasd:InstanceID>7</rasd:InstanceID>
+ <rasd:ResourceType>21</rasd:ResourceType>
+ <vmw:Config ovf:required="false" vmw:key="yieldOnPoll" vmw:value="false"/>
+ </Item>
+ <Item>
+ <rasd:AddressOnParent>0</rasd:AddressOnParent>
+ <rasd:ElementName>disk0</rasd:ElementName>
+ <rasd:HostResource>ovf:/disk/vmdisk1</rasd:HostResource>
+ <rasd:InstanceID>8</rasd:InstanceID>
+ <rasd:Parent>6</rasd:Parent>
+ <rasd:ResourceType>17</rasd:ResourceType>
+ </Item>
+ <Item>
+ <rasd:AddressOnParent>2</rasd:AddressOnParent>
+ <rasd:AutomaticAllocation>true</rasd:AutomaticAllocation>
+ <rasd:Connection>bridged</rasd:Connection>
+ <rasd:Description>E1000e ethernet adapter on "bridged"</rasd:Description>
+ <rasd:ElementName>ethernet0</rasd:ElementName>
+ <rasd:InstanceID>9</rasd:InstanceID>
+ <rasd:ResourceSubType>E1000e</rasd:ResourceSubType>
+ <rasd:ResourceType>10</rasd:ResourceType>
+ <vmw:Config ovf:required="false" vmw:key="wakeOnLanEnabled" vmw:value="false"/>
+ </Item>
+ <Item ovf:required="false">
+ <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
+ <rasd:ElementName>sound</rasd:ElementName>
+ <rasd:InstanceID>10</rasd:InstanceID>
+ <rasd:ResourceSubType>vmware.soundcard.hdaudio</rasd:ResourceSubType>
+ <rasd:ResourceType>1</rasd:ResourceType>
+ </Item>
+ <Item ovf:required="false">
+ <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
+ <rasd:ElementName>video</rasd:ElementName>
+ <rasd:InstanceID>11</rasd:InstanceID>
+ <rasd:ResourceType>24</rasd:ResourceType>
+ <vmw:Config ovf:required="false" vmw:key="enable3DSupport" vmw:value="true"/>
+ </Item>
+ <Item ovf:required="false">
+ <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
+ <rasd:ElementName>vmci</rasd:ElementName>
+ <rasd:InstanceID>12</rasd:InstanceID>
+ <rasd:ResourceSubType>vmware.vmci</rasd:ResourceSubType>
+ <rasd:ResourceType>1</rasd:ResourceType>
+ </Item>
+ <Item ovf:required="false">
+ <rasd:AddressOnParent>1</rasd:AddressOnParent>
+ <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
+ <rasd:ElementName>cdrom0</rasd:ElementName>
+ <rasd:InstanceID>13</rasd:InstanceID>
+ <rasd:Parent>3</rasd:Parent>
+ <rasd:ResourceType>15</rasd:ResourceType>
+ </Item>
+ <vmw:Config ovf:required="false" vmw:key="memoryHotAddEnabled" vmw:value="true"/>
+ <vmw:Config ovf:required="false" vmw:key="powerOpInfo.powerOffType" vmw:value="soft"/>
+ <vmw:Config ovf:required="false" vmw:key="powerOpInfo.resetType" vmw:value="soft"/>
+ <vmw:Config ovf:required="false" vmw:key="powerOpInfo.suspendType" vmw:value="soft"/>
+ <vmw:Config ovf:required="false" vmw:key="tools.syncTimeWithHost" vmw:value="false"/>
+ </VirtualHardwareSection>
+ </VirtualSystem>
+</Envelope>
diff --git a/src/test/ovf_manifests/Win10-Liz_no_default_ns.ovf b/src/test/ovf_manifests/Win10-Liz_no_default_ns.ovf
new file mode 100755
index 0000000..b93540f
--- /dev/null
+++ b/src/test/ovf_manifests/Win10-Liz_no_default_ns.ovf
@@ -0,0 +1,142 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--Generated by VMware ovftool 4.1.0 (build-2982904), UTC time: 2017-02-07T13:50:15.265014Z-->
+<Envelope vmw:buildId="build-2982904" xmlns="http://schemas.dmtf.org/ovf/envelope/1" xmlns:cim="http://schemas.dmtf.org/wbem/wscim/1/common" xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1" xmlns:vmw="http://www.vmware.com/schema/ovf" xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
+ <References>
+ <File ovf:href="Win10-Liz-disk1.vmdk" ovf:id="file1" ovf:size="9155243008"/>
+ </References>
+ <DiskSection>
+ <Info>Virtual disk information</Info>
+ <Disk ovf:capacity="128" ovf:capacityAllocationUnits="byte * 2^30" ovf:diskId="vmdisk1" ovf:fileRef="file1" ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized" ovf:populatedSize="16798056448"/>
+ </DiskSection>
+ <NetworkSection>
+ <Info>The list of logical networks</Info>
+ <Network ovf:name="bridged">
+ <Description>The bridged network</Description>
+ </Network>
+ </NetworkSection>
+ <VirtualSystem ovf:id="vm">
+ <Info>A virtual machine</Info>
+ <Name>Win10-Liz</Name>
+ <OperatingSystemSection ovf:id="1" vmw:osType="windows9_64Guest">
+ <Info>The kind of installed guest operating system</Info>
+ </OperatingSystemSection>
+ <VirtualHardwareSection>
+ <Info>Virtual hardware requirements</Info>
+ <System>
+ <vssd:ElementName>Virtual Hardware Family</vssd:ElementName>
+ <vssd:InstanceID>0</vssd:InstanceID>
+ <vssd:VirtualSystemIdentifier>Win10-Liz</vssd:VirtualSystemIdentifier>
+ <vssd:VirtualSystemType>vmx-11</vssd:VirtualSystemType>
+ </System>
+ <Item>
+ <rasd:AllocationUnits xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">hertz * 10^6</rasd:AllocationUnits>
+ <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">Number of Virtual CPUs</rasd:Description>
+ <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">4 virtual CPU(s)</rasd:ElementName>
+ <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">1</rasd:InstanceID>
+ <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">3</rasd:ResourceType>
+ <rasd:VirtualQuantity xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">4</rasd:VirtualQuantity>
+ </Item>
+ <Item>
+ <rasd:AllocationUnits xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">byte * 2^20</rasd:AllocationUnits>
+ <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">Memory Size</rasd:Description>
+ <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">6144MB of memory</rasd:ElementName>
+ <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">2</rasd:InstanceID>
+ <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">4</rasd:ResourceType>
+ <rasd:VirtualQuantity xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">6144</rasd:VirtualQuantity>
+ </Item>
+ <Item>
+ <rasd:Address xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">0</rasd:Address>
+ <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">SATA Controller</rasd:Description>
+ <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">sataController0</rasd:ElementName>
+ <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">3</rasd:InstanceID>
+ <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">vmware.sata.ahci</rasd:ResourceSubType>
+ <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">20</rasd:ResourceType>
+ </Item>
+ <Item ovf:required="false">
+ <rasd:Address xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">0</rasd:Address>
+ <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">USB Controller (XHCI)</rasd:Description>
+ <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">usb3</rasd:ElementName>
+ <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">4</rasd:InstanceID>
+ <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">vmware.usb.xhci</rasd:ResourceSubType>
+ <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">23</rasd:ResourceType>
+ </Item>
+ <Item ovf:required="false">
+ <rasd:Address xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">0</rasd:Address>
+ <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">USB Controller (EHCI)</rasd:Description>
+ <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">usb</rasd:ElementName>
+ <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">5</rasd:InstanceID>
+ <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">vmware.usb.ehci</rasd:ResourceSubType>
+ <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">23</rasd:ResourceType>
+ <vmw:Config ovf:required="false" vmw:key="ehciEnabled" vmw:value="true"/>
+ </Item>
+ <Item>
+ <rasd:Address xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">0</rasd:Address>
+ <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">SCSI Controller</rasd:Description>
+ <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">scsiController0</rasd:ElementName>
+ <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">6</rasd:InstanceID>
+ <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">lsilogicsas</rasd:ResourceSubType>
+ <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">6</rasd:ResourceType>
+ </Item>
+ <Item ovf:required="false">
+ <rasd:AutomaticAllocation xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">true</rasd:AutomaticAllocation>
+ <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">serial0</rasd:ElementName>
+ <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">7</rasd:InstanceID>
+ <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">21</rasd:ResourceType>
+ <vmw:Config ovf:required="false" vmw:key="yieldOnPoll" vmw:value="false"/>
+ </Item>
+ <Item>
+ <rasd:AddressOnParent xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" >0</rasd:AddressOnParent>
+ <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" >disk0</rasd:ElementName>
+ <rasd:HostResource xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" >ovf:/disk/vmdisk1</rasd:HostResource>
+ <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">8</rasd:InstanceID>
+ <rasd:Parent xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">6</rasd:Parent>
+ <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">17</rasd:ResourceType>
+ </Item>
+ <Item>
+ <rasd:AddressOnParent xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">2</rasd:AddressOnParent>
+ <rasd:AutomaticAllocation xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">true</rasd:AutomaticAllocation>
+ <rasd:Connection xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">bridged</rasd:Connection>
+ <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">E1000e ethernet adapter on "bridged"</rasd:Description>
+ <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">ethernet0</rasd:ElementName>
+ <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">9</rasd:InstanceID>
+ <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">E1000e</rasd:ResourceSubType>
+ <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">10</rasd:ResourceType>
+ <vmw:Config ovf:required="false" vmw:key="wakeOnLanEnabled" vmw:value="false"/>
+ </Item>
+ <Item ovf:required="false">
+ <rasd:AutomaticAllocation xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">false</rasd:AutomaticAllocation>
+ <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">sound</rasd:ElementName>
+ <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">10</rasd:InstanceID>
+ <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">vmware.soundcard.hdaudio</rasd:ResourceSubType>
+ <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">1</rasd:ResourceType>
+ </Item>
+ <Item ovf:required="false">
+ <rasd:AutomaticAllocation xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">false</rasd:AutomaticAllocation>
+ <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">video</rasd:ElementName>
+ <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">11</rasd:InstanceID>
+ <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">24</rasd:ResourceType>
+ <vmw:Config ovf:required="false" vmw:key="enable3DSupport" vmw:value="true"/>
+ </Item>
+ <Item ovf:required="false">
+ <rasd:AutomaticAllocation xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">false</rasd:AutomaticAllocation>
+ <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">vmci</rasd:ElementName>
+ <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">12</rasd:InstanceID>
+ <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">vmware.vmci</rasd:ResourceSubType>
+ <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">1</rasd:ResourceType>
+ </Item>
+ <Item ovf:required="false">
+ <rasd:AddressOnParent xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">1</rasd:AddressOnParent>
+ <rasd:AutomaticAllocation xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">false</rasd:AutomaticAllocation>
+ <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">cdrom0</rasd:ElementName>
+ <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">13</rasd:InstanceID>
+ <rasd:Parent xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">3</rasd:Parent>
+ <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">15</rasd:ResourceType>
+ </Item>
+ <vmw:Config ovf:required="false" vmw:key="memoryHotAddEnabled" vmw:value="true"/>
+ <vmw:Config ovf:required="false" vmw:key="powerOpInfo.powerOffType" vmw:value="soft"/>
+ <vmw:Config ovf:required="false" vmw:key="powerOpInfo.resetType" vmw:value="soft"/>
+ <vmw:Config ovf:required="false" vmw:key="powerOpInfo.suspendType" vmw:value="soft"/>
+ <vmw:Config ovf:required="false" vmw:key="tools.syncTimeWithHost" vmw:value="false"/>
+ </VirtualHardwareSection>
+ </VirtualSystem>
+</Envelope>
diff --git a/src/test/ovf_manifests/Win_2008_R2_two-disks.ovf b/src/test/ovf_manifests/Win_2008_R2_two-disks.ovf
new file mode 100755
index 0000000..a563aab
--- /dev/null
+++ b/src/test/ovf_manifests/Win_2008_R2_two-disks.ovf
@@ -0,0 +1,145 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--Generated by VMware ovftool 4.1.0 (build-2982904), UTC time: 2017-02-27T15:09:29.768974Z-->
+<Envelope vmw:buildId="build-2982904" xmlns="http://schemas.dmtf.org/ovf/envelope/1" xmlns:cim="http://schemas.dmtf.org/wbem/wscim/1/common" xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1" xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" xmlns:vmw="http://www.vmware.com/schema/ovf" xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
+ <References>
+ <File ovf:href="disk1.vmdk" ovf:id="file1" ovf:size="3481968640"/>
+ <File ovf:href="disk2.vmdk" ovf:id="file2" ovf:size="68096"/>
+ </References>
+ <DiskSection>
+ <Info>Virtual disk information</Info>
+ <Disk ovf:capacity="40" ovf:capacityAllocationUnits="byte * 2^30" ovf:diskId="vmdisk1" ovf:fileRef="file1" ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized" ovf:populatedSize="7684882432"/>
+ <Disk ovf:capacity="1" ovf:capacityAllocationUnits="byte * 2^30" ovf:diskId="vmdisk2" ovf:fileRef="file2" ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized" ovf:populatedSize="0"/>
+ </DiskSection>
+ <NetworkSection>
+ <Info>The list of logical networks</Info>
+ <Network ovf:name="bridged">
+ <Description>The bridged network</Description>
+ </Network>
+ </NetworkSection>
+ <VirtualSystem ovf:id="vm">
+ <Info>A virtual machine</Info>
+ <Name>Win_2008-R2x64</Name>
+ <OperatingSystemSection ovf:id="103" vmw:osType="windows7Server64Guest">
+ <Info>The kind of installed guest operating system</Info>
+ </OperatingSystemSection>
+ <VirtualHardwareSection>
+ <Info>Virtual hardware requirements</Info>
+ <System>
+ <vssd:ElementName>Virtual Hardware Family</vssd:ElementName>
+ <vssd:InstanceID>0</vssd:InstanceID>
+ <vssd:VirtualSystemIdentifier>Win_2008-R2x64</vssd:VirtualSystemIdentifier>
+ <vssd:VirtualSystemType>vmx-11</vssd:VirtualSystemType>
+ </System>
+ <Item>
+ <rasd:AllocationUnits>hertz * 10^6</rasd:AllocationUnits>
+ <rasd:Description>Number of Virtual CPUs</rasd:Description>
+ <rasd:ElementName>1 virtual CPU(s)</rasd:ElementName>
+ <rasd:InstanceID>1</rasd:InstanceID>
+ <rasd:ResourceType>3</rasd:ResourceType>
+ <rasd:VirtualQuantity>1</rasd:VirtualQuantity>
+ </Item>
+ <Item>
+ <rasd:AllocationUnits>byte * 2^20</rasd:AllocationUnits>
+ <rasd:Description>Memory Size</rasd:Description>
+ <rasd:ElementName>2048MB of memory</rasd:ElementName>
+ <rasd:InstanceID>2</rasd:InstanceID>
+ <rasd:ResourceType>4</rasd:ResourceType>
+ <rasd:VirtualQuantity>2048</rasd:VirtualQuantity>
+ </Item>
+ <Item>
+ <rasd:Address>0</rasd:Address>
+ <rasd:Description>SATA Controller</rasd:Description>
+ <rasd:ElementName>sataController0</rasd:ElementName>
+ <rasd:InstanceID>3</rasd:InstanceID>
+ <rasd:ResourceSubType>vmware.sata.ahci</rasd:ResourceSubType>
+ <rasd:ResourceType>20</rasd:ResourceType>
+ </Item>
+ <Item ovf:required="false">
+ <rasd:Address>0</rasd:Address>
+ <rasd:Description>USB Controller (EHCI)</rasd:Description>
+ <rasd:ElementName>usb</rasd:ElementName>
+ <rasd:InstanceID>4</rasd:InstanceID>
+ <rasd:ResourceSubType>vmware.usb.ehci</rasd:ResourceSubType>
+ <rasd:ResourceType>23</rasd:ResourceType>
+ <vmw:Config ovf:required="false" vmw:key="ehciEnabled" vmw:value="true"/>
+ </Item>
+ <Item>
+ <rasd:Address>0</rasd:Address>
+ <rasd:Description>SCSI Controller</rasd:Description>
+ <rasd:ElementName>scsiController0</rasd:ElementName>
+ <rasd:InstanceID>5</rasd:InstanceID>
+ <rasd:ResourceSubType>lsilogicsas</rasd:ResourceSubType>
+ <rasd:ResourceType>6</rasd:ResourceType>
+ </Item>
+ <Item ovf:required="false">
+ <rasd:AutomaticAllocation>true</rasd:AutomaticAllocation>
+ <rasd:ElementName>serial0</rasd:ElementName>
+ <rasd:InstanceID>6</rasd:InstanceID>
+ <rasd:ResourceType>21</rasd:ResourceType>
+ <vmw:Config ovf:required="false" vmw:key="yieldOnPoll" vmw:value="false"/>
+ </Item>
+ <Item>
+ <rasd:AddressOnParent>0</rasd:AddressOnParent>
+ <rasd:ElementName>disk0</rasd:ElementName>
+ <rasd:HostResource>ovf:/disk/vmdisk1</rasd:HostResource>
+ <rasd:InstanceID>7</rasd:InstanceID>
+ <rasd:Parent>5</rasd:Parent>
+ <rasd:ResourceType>17</rasd:ResourceType>
+ </Item>
+ <Item>
+ <rasd:AddressOnParent>1</rasd:AddressOnParent>
+ <rasd:ElementName>disk1</rasd:ElementName>
+ <rasd:HostResource>ovf:/disk/vmdisk2</rasd:HostResource>
+ <rasd:InstanceID>8</rasd:InstanceID>
+ <rasd:Parent>5</rasd:Parent>
+ <rasd:ResourceType>17</rasd:ResourceType>
+ </Item>
+ <Item>
+ <rasd:AddressOnParent>2</rasd:AddressOnParent>
+ <rasd:AutomaticAllocation>true</rasd:AutomaticAllocation>
+ <rasd:Connection>bridged</rasd:Connection>
+ <rasd:Description>E1000 ethernet adapter on "bridged"</rasd:Description>
+ <rasd:ElementName>ethernet0</rasd:ElementName>
+ <rasd:InstanceID>9</rasd:InstanceID>
+ <rasd:ResourceSubType>E1000</rasd:ResourceSubType>
+ <rasd:ResourceType>10</rasd:ResourceType>
+ <vmw:Config ovf:required="false" vmw:key="wakeOnLanEnabled" vmw:value="false"/>
+ </Item>
+ <Item ovf:required="false">
+ <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
+ <rasd:ElementName>sound</rasd:ElementName>
+ <rasd:InstanceID>10</rasd:InstanceID>
+ <rasd:ResourceSubType>vmware.soundcard.hdaudio</rasd:ResourceSubType>
+ <rasd:ResourceType>1</rasd:ResourceType>
+ </Item>
+ <Item ovf:required="false">
+ <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
+ <rasd:ElementName>video</rasd:ElementName>
+ <rasd:InstanceID>11</rasd:InstanceID>
+ <rasd:ResourceType>24</rasd:ResourceType>
+ <vmw:Config ovf:required="false" vmw:key="enable3DSupport" vmw:value="true"/>
+ </Item>
+ <Item ovf:required="false">
+ <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
+ <rasd:ElementName>vmci</rasd:ElementName>
+ <rasd:InstanceID>12</rasd:InstanceID>
+ <rasd:ResourceSubType>vmware.vmci</rasd:ResourceSubType>
+ <rasd:ResourceType>1</rasd:ResourceType>
+ </Item>
+ <Item ovf:required="false">
+ <rasd:AddressOnParent>1</rasd:AddressOnParent>
+ <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
+ <rasd:ElementName>cdrom0</rasd:ElementName>
+ <rasd:InstanceID>13</rasd:InstanceID>
+ <rasd:Parent>3</rasd:Parent>
+ <rasd:ResourceType>15</rasd:ResourceType>
+ </Item>
+ <vmw:Config ovf:required="false" vmw:key="cpuHotAddEnabled" vmw:value="true"/>
+ <vmw:Config ovf:required="false" vmw:key="memoryHotAddEnabled" vmw:value="true"/>
+ <vmw:Config ovf:required="false" vmw:key="powerOpInfo.powerOffType" vmw:value="soft"/>
+ <vmw:Config ovf:required="false" vmw:key="powerOpInfo.resetType" vmw:value="soft"/>
+ <vmw:Config ovf:required="false" vmw:key="powerOpInfo.suspendType" vmw:value="soft"/>
+ <vmw:Config ovf:required="false" vmw:key="tools.syncTimeWithHost" vmw:value="false"/>
+ </VirtualHardwareSection>
+ </VirtualSystem>
+</Envelope>
diff --git a/src/test/ovf_manifests/disk1.vmdk b/src/test/ovf_manifests/disk1.vmdk
new file mode 100644
index 0000000000000000000000000000000000000000..8660602343a1a955f9bcf2e6beaed99316dd8167
GIT binary patch
literal 65536
zcmeIvy>HV%7zbeUF`dK)46s<q9ua7|WuUkSgpg2EwX+*viPe0`HWAtSr(-ASlAWQ|
zbJF?F{+-YFKK_yYyn2=-$&0qXY<t)4ch@B8o_Fo_en^t%N%H0}e|H$~AF`0X3J-JR
zqY>z*Sy|tuS*)j3xo%d~*K!`iCRTO1T8@X|%lB-2b2}Q2@{gmi&a1d=x<|K%7N%9q
zn|Qfh$8m45TCV10Gb^W)c4ZxVA@tMpzfJp2S{y#m?iwzx)01@a>+{9rJna?j=ZAyM
zqPW{FznsOxiSi~-&+<BkewLkuP!u<VO<6U6^7*&xtNr=XaoRiS?V{gtwTMl%9Za|L
za#^%_7k)SjXE85!!SM7bspGUQewUqo+Glx@ubWtPwRL-yMO)CL`L7O2fB*pk1PBly
zK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pkPg~&a(=JbS1PBlyK!5-N0!ISx
zkM7+PAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+
z009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBly
zK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF
z5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk
d1PBlyK!5-N0t5&UAV7cs0RjXF5cr=0{{Ra(Wheju
literal 0
HcmV?d00001
diff --git a/src/test/ovf_manifests/disk2.vmdk b/src/test/ovf_manifests/disk2.vmdk
new file mode 100644
index 0000000000000000000000000000000000000000..c4634513348b392202898374f1c8d2d51d565b27
GIT binary patch
literal 65536
zcmeIvy>HV%7zbeUF`dK)46s<q9#IIDI%J@v2!xPOQ?;`jUy0Rx$u<$$`lr`+(j_}X
ztLLQio&7tX?|uAp{Oj^rk|Zyh{<7(9yX&q=(mrq7>)ntf&y(cMe*SJh-aTX?eH9+&
z#z!O2Psc@dn~q~OEsJ%%D!&!;7&fu2iq&#-6u$l#k8ZBB&%=0f64qH6mv#5(X4k^B
zj9DEow(B_REmq6byr^fzbkeM>VlRY#diJkw-bwTQ2bx{O`BgehC%?a(PtMX_-hBS!
zV6(_?yX6<NxIa-=XX$BH#n2y*PeaJ_>%pcd>%ZCj`_<*{eCa6d4SQYmC$1K;F1Lf}
zc3v#=CU3(J2jMJcc^4cVA0$<rHpO?@@uyvu<=MK9Wm{XjSCKabJ(~aOpacjIAV7cs
z0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAn>#W-ahT}R7ZdS0RjXF5Fl_M
z@c!W5Edc@q2oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk
z1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs
z0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZ
zfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&U
eAV7cs0RjXF5FkK+009C72oNAZfB=F2DR2*l=VfOA
literal 0
HcmV?d00001
diff --git a/src/test/run_ovf_tests.pl b/src/test/run_ovf_tests.pl
new file mode 100755
index 0000000..5a80ab2
--- /dev/null
+++ b/src/test/run_ovf_tests.pl
@@ -0,0 +1,71 @@
+#!/usr/bin/perl
+
+use strict;
+use warnings;
+use lib qw(..); # prepend .. to @INC so we use the local version of PVE packages
+
+use FindBin '$Bin';
+use PVE::GuestImport::OVF;
+use Test::More;
+
+use Data::Dumper;
+
+my $test_manifests = join ('/', $Bin, 'ovf_manifests');
+
+print "parsing ovfs\n";
+
+my $win2008 = eval { PVE::GuestImport::OVF::parse_ovf("$test_manifests/Win_2008_R2_two-disks.ovf") };
+if (my $err = $@) {
+ fail('parse win2008');
+ warn("error: $err\n");
+} else {
+ ok('parse win2008');
+}
+my $win10 = eval { PVE::GuestImport::OVF::parse_ovf("$test_manifests/Win10-Liz.ovf") };
+if (my $err = $@) {
+ fail('parse win10');
+ warn("error: $err\n");
+} else {
+ ok('parse win10');
+}
+my $win10noNs = eval { PVE::GuestImport::OVF::parse_ovf("$test_manifests/Win10-Liz_no_default_ns.ovf") };
+if (my $err = $@) {
+ fail("parse win10 no default rasd NS");
+ warn("error: $err\n");
+} else {
+ ok('parse win10 no default rasd NS');
+}
+
+print "testing disks\n";
+
+is($win2008->{disks}->[0]->{disk_address}, 'scsi0', 'multidisk vm has the correct first disk controller');
+is($win2008->{disks}->[0]->{backing_file}, "$test_manifests/disk1.vmdk", 'multidisk vm has the correct first disk backing device');
+is($win2008->{disks}->[0]->{virtual_size}, 2048, 'multidisk vm has the correct first disk size');
+
+is($win2008->{disks}->[1]->{disk_address}, 'scsi1', 'multidisk vm has the correct second disk controller');
+is($win2008->{disks}->[1]->{backing_file}, "$test_manifests/disk2.vmdk", 'multidisk vm has the correct second disk backing device');
+is($win2008->{disks}->[1]->{virtual_size}, 2048, 'multidisk vm has the correct second disk size');
+
+is($win10->{disks}->[0]->{disk_address}, 'scsi0', 'single disk vm has the correct disk controller');
+is($win10->{disks}->[0]->{backing_file}, "$test_manifests/Win10-Liz-disk1.vmdk", 'single disk vm has the correct disk backing device');
+is($win10->{disks}->[0]->{virtual_size}, 2048, 'single disk vm has the correct size');
+
+is($win10noNs->{disks}->[0]->{disk_address}, 'scsi0', 'single disk vm (no default rasd NS) has the correct disk controller');
+is($win10noNs->{disks}->[0]->{backing_file}, "$test_manifests/Win10-Liz-disk1.vmdk", 'single disk vm (no default rasd NS) has the correct disk backing device');
+is($win10noNs->{disks}->[0]->{virtual_size}, 2048, 'single disk vm (no default rasd NS) has the correct size');
+
+print "\ntesting vm.conf extraction\n";
+
+is($win2008->{qm}->{name}, 'Win2008-R2x64', 'win2008 VM name is correct');
+is($win2008->{qm}->{memory}, '2048', 'win2008 VM memory is correct');
+is($win2008->{qm}->{cores}, '1', 'win2008 VM cores are correct');
+
+is($win10->{qm}->{name}, 'Win10-Liz', 'win10 VM name is correct');
+is($win10->{qm}->{memory}, '6144', 'win10 VM memory is correct');
+is($win10->{qm}->{cores}, '4', 'win10 VM cores are correct');
+
+is($win10noNs->{qm}->{name}, 'Win10-Liz', 'win10 VM (no default rasd NS) name is correct');
+is($win10noNs->{qm}->{memory}, '6144', 'win10 VM (no default rasd NS) memory is correct');
+is($win10noNs->{qm}->{cores}, '4', 'win10 VM (no default rasd NS) cores are correct');
+
+done_testing();
--
2.39.2
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
* [pve-devel] [PATCH storage v3 02/10] plugin: dir: implement import content type
2024-04-29 11:21 [pve-devel] [PATCH storage/qemu-server/manager v3] implement ova/ovf import for file based storages Dominik Csapak
2024-04-29 11:21 ` [pve-devel] [PATCH storage v3 01/10] copy OVF.pm from qemu-server Dominik Csapak
@ 2024-04-29 11:21 ` Dominik Csapak
2024-05-22 9:24 ` Fabian Grünbichler
2024-04-29 11:21 ` [pve-devel] [PATCH storage v3 03/10] plugin: dir: handle ova files for import Dominik Csapak
From: Dominik Csapak @ 2024-04-29 11:21 UTC (permalink / raw)
To: pve-devel
in DirPlugin and not Plugin (otherwise there would be a cyclic
dependency: Plugin -> OVF -> Storage -> Plugin)
only ovf import is currently supported (though ova files will show up in
the import listing); the disk images are expected to sit adjacent to the
ovf file, not in a subdirectory.
listed are all ovf/qcow2/raw/vmdk files: ovf because it can be imported,
and the rest because they can be used with the 'import-from' feature of
qemu-server.
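as a rough illustration, the listing/parsing rule amounts to matching a
safe-character file name with one of the allowed extensions; here is a
hypothetical Python translation (not the patch's Perl) of the
`SAFE_CHAR_CLASS_RE`/`IMPORT_EXT_RE_1` matching:

```python
import re

# hypothetical Python mirror of $SAFE_CHAR_CLASS_RE and $IMPORT_EXT_RE_1
SAFE_CHAR = r"[a-zA-Z0-9\-\.\+\=\_]"
IMPORT_EXT = r"\.(ovf|qcow2|raw|vmdk)"

def parse_import_volname(volname):
    """Return (name, format) for a valid import volname, else None."""
    m = re.match(rf"^import/({SAFE_CHAR}+{IMPORT_EXT})$", volname)
    if not m:
        return None
    return m.group(1), m.group(2)

print(parse_import_volname("import/import.ovf"))  # ('import.ovf', 'ovf')
print(parse_import_volname("import/test.foo"))    # None (extension not allowed)
print(parse_import_volname("import/../etc.ovf"))  # None ('/' is not a safe char)
```

note how the restricted character class is what keeps sub- and parent
directories out of valid volnames, which matches the negative test cases
added below.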
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/PVE/GuestImport/OVF.pm | 3 +++
src/PVE/Storage.pm | 8 +++++++
src/PVE/Storage/DirPlugin.pm | 36 +++++++++++++++++++++++++++++-
src/PVE/Storage/Plugin.pm | 11 ++++++++-
src/test/parse_volname_test.pm | 18 +++++++++++++++
src/test/path_to_volume_id_test.pm | 21 +++++++++++++++++
6 files changed, 95 insertions(+), 2 deletions(-)
diff --git a/src/PVE/GuestImport/OVF.pm b/src/PVE/GuestImport/OVF.pm
index 055ebf5..0eb5e9c 100644
--- a/src/PVE/GuestImport/OVF.pm
+++ b/src/PVE/GuestImport/OVF.pm
@@ -222,6 +222,8 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
}
($backing_file_path) = $backing_file_path =~ m|^(/.*)|; # untaint
+ ($filepath) = $filepath =~ m|^(${PVE::Storage::SAFE_CHAR_CLASS_RE}+)$|; # untaint & check no sub/parent dirs
+ die "invalid path\n" if !$filepath;
my $virtual_size = PVE::Storage::file_size_info($backing_file_path);
die "error parsing $backing_file_path, cannot determine file size\n"
@@ -231,6 +233,7 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
disk_address => $pve_disk_address,
backing_file => $backing_file_path,
-	virtual_size => $virtual_size
+	virtual_size => $virtual_size,
+	relative_path => $filepath,
};
push @disks, $pve_disk;
diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
index f19a115..1ed91c2 100755
--- a/src/PVE/Storage.pm
+++ b/src/PVE/Storage.pm
@@ -114,6 +114,10 @@ our $VZTMPL_EXT_RE_1 = qr/\.tar\.(gz|xz|zst)/i;
our $BACKUP_EXT_RE_2 = qr/\.(tgz|(?:tar|vma)(?:\.(${\PVE::Storage::Plugin::COMPRESSOR_RE}))?)/;
+our $IMPORT_EXT_RE_1 = qr/\.(ovf|qcow2|raw|vmdk)/;
+
+our $SAFE_CHAR_CLASS_RE = qr/[a-zA-Z0-9\-\.\+\=\_]/;
+
# FIXME remove with PVE 8.0, add versioned breaks for pve-manager
our $vztmpl_extension_re = $VZTMPL_EXT_RE_1;
@@ -612,6 +616,7 @@ sub path_to_volume_id {
my $backupdir = $plugin->get_subdir($scfg, 'backup');
my $privatedir = $plugin->get_subdir($scfg, 'rootdir');
my $snippetsdir = $plugin->get_subdir($scfg, 'snippets');
+ my $importdir = $plugin->get_subdir($scfg, 'import');
if ($path =~ m!^$imagedir/(\d+)/([^/\s]+)$!) {
my $vmid = $1;
@@ -640,6 +645,9 @@ sub path_to_volume_id {
} elsif ($path =~ m!^$snippetsdir/([^/]+)$!) {
my $name = $1;
return ('snippets', "$sid:snippets/$name");
+ } elsif ($path =~ m!^$importdir/(${SAFE_CHAR_CLASS_RE}+${IMPORT_EXT_RE_1})$!) {
+ my $name = $1;
+ return ('import', "$sid:import/$name");
}
}
diff --git a/src/PVE/Storage/DirPlugin.pm b/src/PVE/Storage/DirPlugin.pm
index 2efa8d5..3e3b1e7 100644
--- a/src/PVE/Storage/DirPlugin.pm
+++ b/src/PVE/Storage/DirPlugin.pm
@@ -10,6 +10,7 @@ use IO::File;
use POSIX;
use PVE::Storage::Plugin;
+use PVE::GuestImport::OVF;
use PVE::JSONSchema qw(get_standard_option);
use base qw(PVE::Storage::Plugin);
@@ -22,7 +23,7 @@ sub type {
sub plugindata {
return {
- content => [ { images => 1, rootdir => 1, vztmpl => 1, iso => 1, backup => 1, snippets => 1, none => 1 },
+ content => [ { images => 1, rootdir => 1, vztmpl => 1, iso => 1, backup => 1, snippets => 1, none => 1, import => 1 },
{ images => 1, rootdir => 1 }],
format => [ { raw => 1, qcow2 => 1, vmdk => 1, subvol => 1 } , 'raw' ],
};
@@ -247,4 +248,37 @@ sub check_config {
return $opts;
}
+sub get_import_metadata {
+ my ($class, $scfg, $volname, $storeid) = @_;
+
+ my ($vtype, $name, undef, undef, undef, undef, $fmt) = $class->parse_volname($volname);
+ die "invalid content type '$vtype'\n" if $vtype ne 'import';
+ die "invalid format\n" if $fmt ne 'ova' && $fmt ne 'ovf';
+
+ # NOTE: all types of warnings must be added to the return schema of the import-metadata API endpoint
+ my $warnings = [];
+
+ my $path = $class->path($scfg, $volname, $storeid, undef);
+ my $res = PVE::GuestImport::OVF::parse_ovf($path);
+ my $disks = {};
+ for my $disk ($res->{disks}->@*) {
+ my $id = $disk->{disk_address};
+ my $size = $disk->{virtual_size};
+ my $path = $disk->{relative_path};
+ $disks->{$id} = {
+ volid => "$storeid:import/$path",
+ defined($size) ? (size => $size) : (),
+ };
+ }
+
+ return {
+ type => 'vm',
+ source => $volname,
+ 'create-args' => $res->{qm},
+ 'disks' => $disks,
+ warnings => $warnings,
+ net => [],
+ };
+}
+
1;
diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
index 22a9729..33f0f3a 100644
--- a/src/PVE/Storage/Plugin.pm
+++ b/src/PVE/Storage/Plugin.pm
@@ -654,6 +654,8 @@ sub parse_volname {
return ('backup', $fn);
} elsif ($volname =~ m!^snippets/([^/]+)$!) {
return ('snippets', $1);
+ } elsif ($volname =~ m!^import/(${PVE::Storage::SAFE_CHAR_CLASS_RE}+$PVE::Storage::IMPORT_EXT_RE_1)$!) {
+ return ('import', $1, undef, undef, undef, undef, $2);
}
die "unable to parse directory volume name '$volname'\n";
@@ -666,6 +668,7 @@ my $vtype_subdirs = {
vztmpl => 'template/cache',
backup => 'dump',
snippets => 'snippets',
+ import => 'import',
};
sub get_vtype_subdirs {
@@ -1227,7 +1230,7 @@ sub list_images {
return $res;
}
-# list templates ($tt = <iso|vztmpl|backup|snippets>)
+# list templates ($tt = <iso|vztmpl|backup|snippets|import>)
my $get_subdir_files = sub {
my ($sid, $path, $tt, $vmid) = @_;
@@ -1283,6 +1286,10 @@ my $get_subdir_files = sub {
volid => "$sid:snippets/". basename($fn),
format => 'snippet',
};
+ } elsif ($tt eq 'import') {
+ next if $fn !~ m!/(${PVE::Storage::SAFE_CHAR_CLASS_RE}+$PVE::Storage::IMPORT_EXT_RE_1)$!i;
+
+ $info = { volid => "$sid:import/$1", format => "$2" };
}
$info->{size} = $st->size;
@@ -1317,6 +1324,8 @@ sub list_volumes {
$data = $get_subdir_files->($storeid, $path, 'backup', $vmid);
} elsif ($type eq 'snippets') {
$data = $get_subdir_files->($storeid, $path, 'snippets');
+ } elsif ($type eq 'import') {
+ $data = $get_subdir_files->($storeid, $path, 'import');
}
}
diff --git a/src/test/parse_volname_test.pm b/src/test/parse_volname_test.pm
index d6ac885..a8c746f 100644
--- a/src/test/parse_volname_test.pm
+++ b/src/test/parse_volname_test.pm
@@ -81,6 +81,19 @@ my $tests = [
expected => ['snippets', 'hookscript.pl'],
},
#
+ # Import
+ #
+ {
+ description => "Import, ova",
+ volname => 'import/import.ova',
+ expected => ['import', 'import.ova', undef, undef, undef ,undef, 'ova'],
+ },
+ {
+ description => "Import, ovf",
+ volname => 'import/import.ovf',
+ expected => ['import', 'import.ovf', undef, undef, undef ,undef, 'ovf'],
+ },
+ #
# failed matches
#
{
@@ -123,6 +136,11 @@ my $tests = [
volname => "$vmid/base-$vmid-disk-0.qcow2/ssss/vm-$vmid-disk-0.qcow2",
expected => "unable to parse volume filename 'base-$vmid-disk-0.qcow2/ssss/vm-$vmid-disk-0.qcow2'\n",
},
+ {
+ description => "Failed match: import dir but no ova/ovf/disk image",
+ volname => "import/test.foo",
+ expected => "unable to parse directory volume name 'import/test.foo'\n",
+ },
];
# create more test cases for VM disk images matches
diff --git a/src/test/path_to_volume_id_test.pm b/src/test/path_to_volume_id_test.pm
index 8149c88..0d238f9 100644
--- a/src/test/path_to_volume_id_test.pm
+++ b/src/test/path_to_volume_id_test.pm
@@ -174,6 +174,22 @@ my @tests = (
'local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.xz',
],
},
+ {
+ description => 'Import, ova',
+ volname => "$storage_dir/import/import.ova",
+ expected => [
+ 'import',
+ 'local:import/import.ova',
+ ],
+ },
+ {
+ description => 'Import, ovf',
+ volname => "$storage_dir/import/import.ovf",
+ expected => [
+ 'import',
+ 'local:import/import.ovf',
+ ],
+ },
# no matches, path or files with failures
{
@@ -231,6 +247,11 @@ my @tests = (
volname => "$storage_dir/images/ssss/vm-1234-disk-0.qcow2",
expected => [''],
},
+ {
+ description => 'Import, non ova/ovf/disk image in import dir',
+ volname => "$storage_dir/import/test.foo",
+ expected => [''],
+ },
);
plan tests => scalar @tests + 1;
--
2.39.2
* [pve-devel] [PATCH storage v3 03/10] plugin: dir: handle ova files for import
2024-04-29 11:21 [pve-devel] [PATCH storage/qemu-server/manager v3] implement ova/ovf import for file based storages Dominik Csapak
2024-04-29 11:21 ` [pve-devel] [PATCH storage v3 01/10] copy OVF.pm from qemu-server Dominik Csapak
2024-04-29 11:21 ` [pve-devel] [PATCH storage v3 02/10] plugin: dir: implement import content type Dominik Csapak
@ 2024-04-29 11:21 ` Dominik Csapak
2024-05-22 10:08 ` Fabian Grünbichler
2024-05-22 13:13 ` Fabian Grünbichler
2024-04-29 11:21 ` [pve-devel] [PATCH storage v3 04/10] ovf: implement parsing the ostype Dominik Csapak
From: Dominik Csapak @ 2024-04-29 11:21 UTC (permalink / raw)
To: pve-devel
since we want to handle ova files (which are just an ovf descriptor plus
its disk images bundled into a tar archive) for import, add code for that.
we introduce a valid volname for files contained in ovas like this:
storage:import/archive.ova/disk-1.vmdk
by basically treating the last part of the path as the name for the
contained disk we want.
in that case we return 'import' as the type, with 'ova+vmdk' (or
ova+qcow2/ova+raw) as the format. (encoding the container in the format
value requires extending the 'format' parsing for all storages/formats,
because the value runs through a format verification check at least once)
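the volname scheme described above can be sketched like this
(hypothetical Python mirroring the patch's Perl regexes; names are
illustrative):

```python
import re

# hypothetical mirror of the patch's safe-character class
SAFE_CHAR = r"[a-zA-Z0-9\-\.\+\=\_]"

def parse_ova_volname(volname):
    """Split 'import/archive.ova/disk.vmdk' into (archive, inner file, format)."""
    m = re.match(rf"^import/({SAFE_CHAR}+\.ova)/({SAFE_CHAR}+\.(vmdk|qcow2|raw))$", volname)
    if not m:
        return None
    # the format value carries the container, e.g. 'ova+vmdk'
    return m.group(1), m.group(2), f"ova+{m.group(3)}"

print(parse_ova_volname("import/archive.ova/disk-1.vmdk"))
# ('archive.ova', 'disk-1.vmdk', 'ova+vmdk')
```

treating the last path component as the name of the contained disk keeps
the outer parsing unchanged while still pinpointing one file inside the
archive.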
we then provide 3 functions to use for that:
* copy_needs_extraction: determines from the given volid (like above) if
that needs extraction to copy it, currently only 'import' vtype + a
volid with the above format returns true
* extract_disk_from_import_file: this actually extracts the file from
the archive. Currently only ova is supported, so the extraction with
'tar' is hardcoded, but again we can easily extend/modify that should
we need to.
we currently extract into either the import storage or a given
target storage, in the images directory, so if the cleanup does not
happen the user can still see and interact with the image via
api/cli/gui
* cleanup_extracted_image: intended to cleanup the extracted images from
above
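As an illustration, the single-member extraction plus the regular-file
check can be sketched in Python (the patch itself shells out to `tar`;
the member name and bytes here are made up):

```python
import io
import tarfile

# Build a tiny in-memory "ova" (just a tar) with one disk member
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode='w') as tar:
    data = b'fake-vmdk-bytes'
    info = tarfile.TarInfo(name='disk-1.vmdk')
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))
buf.seek(0)

# Extract only the named inner file, refusing symlinks and other
# non-regular members, mirroring the check after extraction
with tarfile.open(fileobj=buf, mode='r') as tar:
    member = tar.getmember('disk-1.vmdk')
    assert member.isreg(), 'only regular files are allowed'
    extracted = tar.extractfile(member).read()

print(extracted)  # b'fake-vmdk-bytes'
```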
we have to modify `parse_ovf` a bit to handle the missing disk
images, and we parse the size out of the ovf data (since this is
informational only, it should be no problem if we cannot parse it
sometimes)
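The capacityAllocationUnits handling added by the patch boils down to
the following Python sketch of the Perl helper (the unit string is an
example value as seen in typical OVFs):

```python
import re

def try_parse_capacity_unit(unit_text):
    # DSP0004 defines programmatic units as an ABNF, but in practice
    # this always takes the form 'byte * base^exponent'
    m = re.match(r'^\s*byte\s*\*\s*([0-9]+)\s*\^\s*([0-9]+)\s*$', unit_text)
    if m:
        return int(m.group(1)) ** int(m.group(2))
    return None  # unknown unit, caller falls back to not setting a size

print(try_parse_capacity_unit('byte * 2^20'))  # 1048576
```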
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/PVE/API2/Storage/Status.pm | 1 +
src/PVE/GuestImport.pm | 100 +++++++++++++++++++++++++++++++++
src/PVE/GuestImport/OVF.pm | 53 ++++++++++++++---
src/PVE/Makefile | 1 +
src/PVE/Storage.pm | 2 +-
src/PVE/Storage/DirPlugin.pm | 15 ++++-
src/PVE/Storage/Plugin.pm | 5 ++
src/test/parse_volname_test.pm | 15 +++++
8 files changed, 182 insertions(+), 10 deletions(-)
create mode 100644 src/PVE/GuestImport.pm
diff --git a/src/PVE/API2/Storage/Status.pm b/src/PVE/API2/Storage/Status.pm
index dc6cc69..acde730 100644
--- a/src/PVE/API2/Storage/Status.pm
+++ b/src/PVE/API2/Storage/Status.pm
@@ -749,6 +749,7 @@ __PACKAGE__->register_method({
'efi-state-lost',
'guest-is-running',
'nvme-unsupported',
+ 'ova-needs-extracting',
'ovmf-with-lsi-unsupported',
'serial-port-socket-only',
],
diff --git a/src/PVE/GuestImport.pm b/src/PVE/GuestImport.pm
new file mode 100644
index 0000000..d405e30
--- /dev/null
+++ b/src/PVE/GuestImport.pm
@@ -0,0 +1,100 @@
+package PVE::GuestImport;
+
+use strict;
+use warnings;
+
+use File::Path;
+
+use PVE::Storage;
+use PVE::Tools qw(run_command);
+
+sub copy_needs_extraction {
+ my ($volid) = @_;
+ my $cfg = PVE::Storage::config();
+ my ($vtype, $name, undef, undef, undef, undef, $fmt) = PVE::Storage::parse_volname($cfg, $volid);
+
+ # only volumes inside ovas need extraction
+ return $vtype eq 'import' && $fmt =~ m/^ova\+(.*)$/;
+}
+
+sub extract_disk_from_import_file {
+ my ($volid, $vmid, $target_storeid) = @_;
+
+ my ($source_storeid, $volname) = PVE::Storage::parse_volume_id($volid);
+ $target_storeid //= $source_storeid;
+ my $cfg = PVE::Storage::config();
+ my $source_scfg = PVE::Storage::storage_config($cfg, $source_storeid);
+ my $source_plugin = PVE::Storage::Plugin->lookup($source_scfg->{type});
+
+ my ($vtype, $name, undef, undef, undef, undef, $fmt) =
+ $source_plugin->parse_volname($volname);
+
+ die "only files with content type 'import' can be extracted\n"
+ if $vtype ne 'import' || $fmt !~ m/^ova\+/;
+
+ # extract the inner file from the name
+ my $archive;
+ my $inner_file;
+ if ($name =~ m!^(.*\.ova)/(${PVE::Storage::SAFE_CHAR_CLASS_RE}+)$!) {
+ $archive = "import/$1";
+ $inner_file = $2;
+ ($fmt) = $fmt =~ /^ova\+(.*)$/;
+ } else {
+ die "cannot extract $volid - invalid volname $volname\n";
+ }
+
+ my ($ova_path) = $source_plugin->path($source_scfg, $archive, $source_storeid);
+
+ my $target_scfg = PVE::Storage::storage_config($cfg, $target_storeid);
+ my $target_plugin = PVE::Storage::Plugin->lookup($target_scfg->{type});
+
+ my $destdir = $target_plugin->get_subdir($target_scfg, 'images');
+
+ my $pid = $$;
+ $destdir .= "/tmp_${pid}_${vmid}";
+ mkpath $destdir;
+
+ ($ova_path) = $ova_path =~ m|^(.*)$|; # untaint
+
+ my $source_path = "$destdir/$inner_file";
+ my $target_path;
+ my $target_volname;
+ eval {
+ run_command(['tar', '-x', '--force-local', '-C', $destdir, '-f', $ova_path, $inner_file]);
+
+ # check for symlinks and other non regular files
+ if (-l $source_path || ! -f $source_path) {
+ die "only regular files are allowed\n";
+ }
+
+ my $target_diskname
+ = $target_plugin->find_free_diskname($target_storeid, $target_scfg, $vmid, $fmt, 1);
+ $target_volname = "$vmid/" . $target_diskname;
+ $target_path = $target_plugin->filesystem_path($target_scfg, $target_volname);
+
+ print "renaming $source_path to $target_path\n";
+ my $imagedir = $target_plugin->get_subdir($target_scfg, 'images');
+ mkpath "$imagedir/$vmid";
+
+ rename($source_path, $target_path) or die "unable to move - $!\n";
+ };
+ if (my $err = $@) {
+ unlink $source_path;
+ unlink $target_path if defined($target_path);
+ rmdir $destdir;
+ die "error during extraction: $err\n";
+ }
+
+ rmdir $destdir;
+
+ return "$target_storeid:$target_volname";
+}
+
+sub cleanup_extracted_image {
+ my ($source) = @_;
+
+ my $cfg = PVE::Storage::config();
+ PVE::Storage::vdisk_free($cfg, $source);
+}
+
+1;
diff --git a/src/PVE/GuestImport/OVF.pm b/src/PVE/GuestImport/OVF.pm
index 0eb5e9c..6b79078 100644
--- a/src/PVE/GuestImport/OVF.pm
+++ b/src/PVE/GuestImport/OVF.pm
@@ -85,11 +85,37 @@ sub id_to_pve {
}
}
+# technically defined in DSP0004 (https://www.dmtf.org/dsp/DSP0004) as an ABNF
+# but realistically this always takes the form of 'byte * base^exponent'
+sub try_parse_capacity_unit {
+ my ($unit_text) = @_;
+
+ if ($unit_text =~ m/^\s*byte\s*\*\s*([0-9]+)\s*\^\s*([0-9]+)\s*$/) {
+ my $base = $1;
+ my $exp = $2;
+ return $base ** $exp;
+ }
+
+ return undef;
+}
+
# returns two references, $qm which holds qm.conf style key/values, and \@disks
sub parse_ovf {
- my ($ovf, $debug) = @_;
+ my ($ovf, $isOva, $debug) = @_;
+
+ # we have to ignore missing disk images for ova
+ my $dom;
+ if ($isOva) {
+ my $raw = "";
+ PVE::Tools::run_command(['tar', '-xO', '--wildcards', '--occurrence=1', '-f', $ovf, '*.ovf'], outfunc => sub {
+ my $line = shift;
+ $raw .= $line;
+ });
+ $dom = XML::LibXML->load_xml(string => $raw, no_blanks => 1);
+ } else {
+ $dom = XML::LibXML->load_xml(location => $ovf, no_blanks => 1);
+ }
- my $dom = XML::LibXML->load_xml(location => $ovf, no_blanks => 1);
# register the xml namespaces in a xpath context object
# 'ovf' is the default namespace so it will prepended to each xml element
@@ -177,7 +203,17 @@ sub parse_ovf {
# @ needs to be escaped to prevent Perl double quote interpolation
my $xpath_find_fileref = sprintf("/ovf:Envelope/ovf:DiskSection/\
ovf:Disk[\@ovf:diskId='%s']/\@ovf:fileRef", $disk_id);
+ my $xpath_find_capacity = sprintf("/ovf:Envelope/ovf:DiskSection/\
+ovf:Disk[\@ovf:diskId='%s']/\@ovf:capacity", $disk_id);
+ my $xpath_find_capacity_unit = sprintf("/ovf:Envelope/ovf:DiskSection/\
+ovf:Disk[\@ovf:diskId='%s']/\@ovf:capacityAllocationUnits", $disk_id);
my $fileref = $xpc->findvalue($xpath_find_fileref);
+ my $capacity = $xpc->findvalue($xpath_find_capacity);
+ my $capacity_unit = $xpc->findvalue($xpath_find_capacity_unit);
+ my $virtual_size;
+ if (my $factor = try_parse_capacity_unit($capacity_unit)) {
+ $virtual_size = $capacity * $factor;
+ }
my $valid_url_chars = qr@${valid_uripath_chars}|/@;
if (!$fileref || $fileref !~ m/^${valid_url_chars}+$/) {
@@ -217,7 +253,7 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
die "error parsing $filepath, are you using a symlink ?\n";
}
- if (!-e $backing_file_path) {
+ if (!-e $backing_file_path && !$isOva) {
die "error parsing $filepath, file seems not to exist at $backing_file_path\n";
}
@@ -225,16 +261,19 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
($filepath) = $filepath =~ m|^(${PVE::Storage::SAFE_CHAR_CLASS_RE}+)$|; # untaint & check no sub/parent dirs
die "invalid path\n" if !$filepath;
- my $virtual_size = PVE::Storage::file_size_info($backing_file_path);
- die "error parsing $backing_file_path, cannot determine file size\n"
- if !$virtual_size;
+ if (!$isOva) {
+ my $size = PVE::Storage::file_size_info($backing_file_path);
+ die "error parsing $backing_file_path, cannot determine file size\n"
+ if !$size;
+ $virtual_size = $size;
+ }
$pve_disk = {
disk_address => $pve_disk_address,
backing_file => $backing_file_path,
- virtual_size => $virtual_size
relative_path => $filepath,
};
+ $pve_disk->{virtual_size} = $virtual_size if defined($virtual_size);
push @disks, $pve_disk;
}
diff --git a/src/PVE/Makefile b/src/PVE/Makefile
index e15a275..0af3081 100644
--- a/src/PVE/Makefile
+++ b/src/PVE/Makefile
@@ -5,6 +5,7 @@ install:
install -D -m 0644 Storage.pm ${DESTDIR}${PERLDIR}/PVE/Storage.pm
install -D -m 0644 Diskmanage.pm ${DESTDIR}${PERLDIR}/PVE/Diskmanage.pm
install -D -m 0644 CephConfig.pm ${DESTDIR}${PERLDIR}/PVE/CephConfig.pm
+ install -D -m 0644 GuestImport.pm ${DESTDIR}${PERLDIR}/PVE/GuestImport.pm
make -C Storage install
make -C GuestImport install
make -C API2 install
diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
index 1ed91c2..adc1b45 100755
--- a/src/PVE/Storage.pm
+++ b/src/PVE/Storage.pm
@@ -114,7 +114,7 @@ our $VZTMPL_EXT_RE_1 = qr/\.tar\.(gz|xz|zst)/i;
our $BACKUP_EXT_RE_2 = qr/\.(tgz|(?:tar|vma)(?:\.(${\PVE::Storage::Plugin::COMPRESSOR_RE}))?)/;
-our $IMPORT_EXT_RE_1 = qr/\.(ovf|qcow2|raw|vmdk)/;
+our $IMPORT_EXT_RE_1 = qr/\.(ova|ovf|qcow2|raw|vmdk)/;
our $SAFE_CHAR_CLASS_RE = qr/[a-zA-Z0-9\-\.\+\=\_]/;
diff --git a/src/PVE/Storage/DirPlugin.pm b/src/PVE/Storage/DirPlugin.pm
index 3e3b1e7..ea89464 100644
--- a/src/PVE/Storage/DirPlugin.pm
+++ b/src/PVE/Storage/DirPlugin.pm
@@ -258,15 +258,26 @@ sub get_import_metadata {
# NOTE: all types of warnings must be added to the return schema of the import-metadata API endpoint
my $warnings = [];
+ my $isOva = 0;
+ if ($name =~ m/\.ova$/) {
+ $isOva = 1;
+ push @$warnings, { type => 'ova-needs-extracting' };
+ }
my $path = $class->path($scfg, $volname, $storeid, undef);
- my $res = PVE::GuestImport::OVF::parse_ovf($path);
+ my $res = PVE::GuestImport::OVF::parse_ovf($path, $isOva);
my $disks = {};
for my $disk ($res->{disks}->@*) {
my $id = $disk->{disk_address};
my $size = $disk->{virtual_size};
my $path = $disk->{relative_path};
+ my $volid;
+ if ($isOva) {
+ $volid = "$storeid:$volname/$path";
+ } else {
+ $volid = "$storeid:import/$path",
+ }
$disks->{$id} = {
- volid => "$storeid:import/$path",
+ volid => $volid,
defined($size) ? (size => $size) : (),
};
}
diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
index 33f0f3a..640d156 100644
--- a/src/PVE/Storage/Plugin.pm
+++ b/src/PVE/Storage/Plugin.pm
@@ -654,6 +654,11 @@ sub parse_volname {
return ('backup', $fn);
} elsif ($volname =~ m!^snippets/([^/]+)$!) {
return ('snippets', $1);
+ } elsif ($volname =~ m!^import/(${PVE::Storage::SAFE_CHAR_CLASS_RE}+\.ova\/(${PVE::Storage::SAFE_CHAR_CLASS_RE}+))$!) {
+ my $archive = $1;
+ my $file = $2;
+ my (undef, $format, undef) = parse_name_dir($file);
+ return ('import', $archive, undef, undef, undef, undef, "ova+$format");
} elsif ($volname =~ m!^import/(${PVE::Storage::SAFE_CHAR_CLASS_RE}+$PVE::Storage::IMPORT_EXT_RE_1)$!) {
return ('import', $1, undef, undef, undef, undef, $2);
}
diff --git a/src/test/parse_volname_test.pm b/src/test/parse_volname_test.pm
index a8c746f..bc7b4e8 100644
--- a/src/test/parse_volname_test.pm
+++ b/src/test/parse_volname_test.pm
@@ -93,6 +93,21 @@ my $tests = [
volname => 'import/import.ovf',
expected => ['import', 'import.ovf', undef, undef, undef ,undef, 'ovf'],
},
+ {
description => "Import, inner file of ova",
+ volname => 'import/import.ova/disk.qcow2',
+ expected => ['import', 'import.ova/disk.qcow2', undef, undef, undef, undef, 'ova+qcow2'],
+ },
+ {
description => "Import, inner file of ova",
+ volname => 'import/import.ova/disk.vmdk',
+ expected => ['import', 'import.ova/disk.vmdk', undef, undef, undef, undef, 'ova+vmdk'],
+ },
+ {
description => "Import, inner file of ova",
+ volname => 'import/import.ova/disk.raw',
+ expected => ['import', 'import.ova/disk.raw', undef, undef, undef, undef, 'ova+raw'],
+ },
#
# failed matches
#
--
2.39.2
* [pve-devel] [PATCH storage v3 04/10] ovf: implement parsing the ostype
2024-04-29 11:21 [pve-devel] [PATCH storage/qemu-server/manager v3] implement ova/ovf import for file based storages Dominik Csapak
` (2 preceding siblings ...)
2024-04-29 11:21 ` [pve-devel] [PATCH storage v3 03/10] plugin: dir: handle ova files for import Dominik Csapak
@ 2024-04-29 11:21 ` Dominik Csapak
2024-04-29 11:21 ` [pve-devel] [PATCH storage v3 05/10] ovf: implement parsing out firmware type Dominik Csapak
` (19 subsequent siblings)
23 siblings, 0 replies; 38+ messages in thread
From: Dominik Csapak @ 2024-04-29 11:21 UTC (permalink / raw)
To: pve-devel
use the standard's info about the ostypes to map to our own
(see the comment for a link to the relevant part of the dmtf schema).
every type that is not listed is mapped to 'other', so there is no need
to have those in the list.
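For illustration, the mapping with its 'other' fallback amounts to the
following Python sketch (table excerpt only; the full id table is in
the patch below):

```python
# excerpt of the CIM_OperatingSystem id -> PVE ostype table
OSTYPE_IDS = {
    96: 'l26',    # 'Debian 64-Bit'
    77: 'w2k8',   # 'Microsoft Windows Server 2008 64-Bit'
    105: 'win7',  # 'Microsoft Windows 7'
}

def get_ostype(cim_id):
    # anything not listed maps to 'other'
    return OSTYPE_IDS.get(cim_id, 'other')

print(get_ostype(96), get_ostype(999))  # l26 other
```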
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/PVE/GuestImport/OVF.pm | 69 ++++++++++++++++++++++++++++++++++++++
src/test/run_ovf_tests.pl | 5 +++
2 files changed, 74 insertions(+)
diff --git a/src/PVE/GuestImport/OVF.pm b/src/PVE/GuestImport/OVF.pm
index 6b79078..cf08cb6 100644
--- a/src/PVE/GuestImport/OVF.pm
+++ b/src/PVE/GuestImport/OVF.pm
@@ -55,6 +55,71 @@ my @resources = (
{ id => 35, dtmf_name => 'Vendor Reserved'}
);
+# see https://schemas.dmtf.org/wbem/cim-html/2.55.0+/CIM_OperatingSystem.html
+my $ostype_ids = {
+ 18 => 'winxp', # 'WINNT',
+ 29 => 'solaris', # 'Solaris',
+ 36 => 'l26', # 'LINUX',
+ 58 => 'w2k', # 'Windows 2000',
+ 67 => 'wxp', #'Windows XP',
+ 69 => 'w2k3', # 'Microsoft Windows Server 2003',
+ 70 => 'w2k3', # 'Microsoft Windows Server 2003 64-Bit',
+ 71 => 'wxp', # 'Windows XP 64-Bit',
+ 72 => 'wxp', # 'Windows XP Embedded',
+ 73 => 'wvista', # 'Windows Vista',
+ 74 => 'wvista', # 'Windows Vista 64-Bit',
+ 75 => 'wxp', # 'Windows Embedded for Point of Service', ??
+ 76 => 'w2k8', # 'Microsoft Windows Server 2008',
+ 77 => 'w2k8', # 'Microsoft Windows Server 2008 64-Bit',
+ 79 => 'l26', # 'RedHat Enterprise Linux',
+ 80 => 'l26', # 'RedHat Enterprise Linux 64-Bit',
+ 81 => 'solaris', #'Solaris 64-Bit',
+ 82 => 'l26', # 'SUSE',
+ 83 => 'l26', # 'SUSE 64-Bit',
+ 84 => 'l26', # 'SLES',
+ 85 => 'l26', # 'SLES 64-Bit',
+ 87 => 'l26', # 'Novell Linux Desktop',
+ 89 => 'l26', # 'Mandriva',
+ 90 => 'l26', # 'Mandriva 64-Bit',
+ 91 => 'l26', # 'TurboLinux',
+ 92 => 'l26', # 'TurboLinux 64-Bit',
+ 93 => 'l26', # 'Ubuntu',
+ 94 => 'l26', # 'Ubuntu 64-Bit',
+ 95 => 'l26', # 'Debian',
+ 96 => 'l26', # 'Debian 64-Bit',
+ 97 => 'l24', # 'Linux 2.4.x',
+ 98 => 'l24', # 'Linux 2.4.x 64-Bit',
+ 99 => 'l26', # 'Linux 2.6.x',
+ 100 => 'l26', # 'Linux 2.6.x 64-Bit',
+ 101 => 'l26', # 'Linux 64-Bit',
+ 103 => 'win7', # 'Microsoft Windows Server 2008 R2',
+ 105 => 'win7', # 'Microsoft Windows 7',
+ 106 => 'l26', # 'CentOS 32-bit',
+ 107 => 'l26', # 'CentOS 64-bit',
+ 108 => 'l26', # 'Oracle Linux 32-bit',
+ 109 => 'l26', # 'Oracle Linux 64-bit',
+ 111 => 'win8', # 'Microsoft Windows Server 2011', ??
+ 112 => 'win8', # 'Microsoft Windows Server 2012',
+ 113 => 'win8', # 'Microsoft Windows 8',
+ 114 => 'win8', # 'Microsoft Windows 8 64-bit',
+ 115 => 'win8', # 'Microsoft Windows Server 2012 R2',
+ 116 => 'win10', # 'Microsoft Windows Server 2016',
+ 117 => 'win8', # 'Microsoft Windows 8.1',
+ 118 => 'win8', # 'Microsoft Windows 8.1 64-bit',
+ 119 => 'win10', # 'Microsoft Windows 10',
+ 120 => 'win10', # 'Microsoft Windows 10 64-bit',
+ 121 => 'win10', # 'Microsoft Windows Server 2019',
+ 122 => 'win11', # 'Microsoft Windows 11 64-bit',
+ 123 => 'win11', # 'Microsoft Windows Server 2022',
+ # others => 'other',
+};
+
+sub get_ostype {
+ my ($id) = @_;
+
+ return $ostype_ids->{$id} // 'other';
+}
+
sub find_by {
my ($key, $param) = @_;
foreach my $resource (@resources) {
@@ -160,6 +225,10 @@ sub parse_ovf {
my $xpath_find_disks="/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/ovf:Item[rasd:ResourceType=${disk_id}]";
my @disk_items = $xpc->findnodes($xpath_find_disks);
+ my $xpath_find_ostype_id = "/ovf:Envelope/ovf:VirtualSystem/ovf:OperatingSystemSection/\@ovf:id";
+ my $ostype_id = $xpc->findvalue($xpath_find_ostype_id);
+ $qm->{ostype} = get_ostype($ostype_id);
+
# disks metadata is split in four different xml elements:
# * as an Item node of type DiskDrive in the VirtualHardwareSection
# * as an Disk node in the DiskSection
diff --git a/src/test/run_ovf_tests.pl b/src/test/run_ovf_tests.pl
index 5a80ab2..c433c9d 100755
--- a/src/test/run_ovf_tests.pl
+++ b/src/test/run_ovf_tests.pl
@@ -59,13 +59,18 @@ print "\ntesting vm.conf extraction\n";
is($win2008->{qm}->{name}, 'Win2008-R2x64', 'win2008 VM name is correct');
is($win2008->{qm}->{memory}, '2048', 'win2008 VM memory is correct');
is($win2008->{qm}->{cores}, '1', 'win2008 VM cores are correct');
+is($win2008->{qm}->{ostype}, 'win7', 'win2008 VM ostype is correct');
is($win10->{qm}->{name}, 'Win10-Liz', 'win10 VM name is correct');
is($win10->{qm}->{memory}, '6144', 'win10 VM memory is correct');
is($win10->{qm}->{cores}, '4', 'win10 VM cores are correct');
+# older esxi/ovf standard used 'other' for windows10
+is($win10->{qm}->{ostype}, 'other', 'win10 VM ostype is correct');
is($win10noNs->{qm}->{name}, 'Win10-Liz', 'win10 VM (no default rasd NS) name is correct');
is($win10noNs->{qm}->{memory}, '6144', 'win10 VM (no default rasd NS) memory is correct');
is($win10noNs->{qm}->{cores}, '4', 'win10 VM (no default rasd NS) cores are correct');
+# older esxi/ovf standard used 'other' for windows10
+is($win10noNs->{qm}->{ostype}, 'other', 'win10 VM (no default rasd NS) ostype is correct');
done_testing();
--
2.39.2
* [pve-devel] [PATCH storage v3 05/10] ovf: implement parsing out firmware type
2024-04-29 11:21 [pve-devel] [PATCH storage/qemu-server/manager v3] implement ova/ovf import for file based storages Dominik Csapak
` (3 preceding siblings ...)
2024-04-29 11:21 ` [pve-devel] [PATCH storage v3 04/10] ovf: implement parsing the ostype Dominik Csapak
@ 2024-04-29 11:21 ` Dominik Csapak
2024-04-29 11:21 ` [pve-devel] [PATCH storage v3 06/10] ovf: implement rudimentary boot order Dominik Csapak
` (18 subsequent siblings)
23 siblings, 0 replies; 38+ messages in thread
From: Dominik Csapak @ 2024-04-29 11:21 UTC (permalink / raw)
To: pve-devel
it seems there is no part of the ovf standard that specifies which
type of bios is used (at least I could not find it). Every ovf/ova I
tested either has no info about it, or has it in a vmware-specific
property, which we parse here.
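A Python sketch of that lookup, using a minimal embedded snippet (the
vmware namespace URI and the inline XML are assumptions modeled on the
vmw:Config key the patch queries):

```python
import xml.etree.ElementTree as ET

VMW = 'http://www.vmware.com/schema/ovf'  # assumed vmw namespace URI
ovf_snippet = f'''<VirtualHardwareSection xmlns:vmw="{VMW}">
  <vmw:Config vmw:key="firmware" vmw:value="efi"/>
</VirtualHardwareSection>'''

root = ET.fromstring(ovf_snippet)
firmware = 'seabios'  # default when the vmware-specific key is absent
for cfg in root.findall(f'{{{VMW}}}Config'):
    if cfg.get(f'{{{VMW}}}key') == 'firmware':
        firmware = cfg.get(f'{{{VMW}}}value')

# only an 'efi' value switches the guest config to ovmf
bios = 'ovmf' if firmware == 'efi' else 'seabios'
print(bios)  # ovmf
```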
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/PVE/GuestImport/OVF.pm | 5 +++++
src/PVE/Storage/DirPlugin.pm | 5 +++++
src/test/ovf_manifests/Win10-Liz_no_default_ns.ovf | 1 +
src/test/run_ovf_tests.pl | 1 +
4 files changed, 12 insertions(+)
diff --git a/src/PVE/GuestImport/OVF.pm b/src/PVE/GuestImport/OVF.pm
index cf08cb6..767590e 100644
--- a/src/PVE/GuestImport/OVF.pm
+++ b/src/PVE/GuestImport/OVF.pm
@@ -229,6 +229,11 @@ sub parse_ovf {
my $ostype_id = $xpc->findvalue($xpath_find_ostype_id);
$qm->{ostype} = get_ostype($ostype_id);
+ # vmware specific firmware config, seems to not be standardized in ovf ?
+ my $xpath_find_firmware = "/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/vmw:Config[\@vmw:key=\"firmware\"]/\@vmw:value";
+ my $firmware = $xpc->findvalue($xpath_find_firmware) || 'seabios';
+ $qm->{bios} = 'ovmf' if $firmware eq 'efi';
+
# disks metadata is split in four different xml elements:
# * as an Item node of type DiskDrive in the VirtualHardwareSection
# * as an Disk node in the DiskSection
diff --git a/src/PVE/Storage/DirPlugin.pm b/src/PVE/Storage/DirPlugin.pm
index ea89464..b98b603 100644
--- a/src/PVE/Storage/DirPlugin.pm
+++ b/src/PVE/Storage/DirPlugin.pm
@@ -282,6 +282,11 @@ sub get_import_metadata {
};
}
+ if (defined($res->{qm}->{bios}) && $res->{qm}->{bios} eq 'ovmf') {
+ $disks->{efidisk0} = 1;
+ push @$warnings, { type => 'efi-state-lost', key => 'bios', value => 'ovmf' };
+ }
+
return {
type => 'vm',
source => $volname,
diff --git a/src/test/ovf_manifests/Win10-Liz_no_default_ns.ovf b/src/test/ovf_manifests/Win10-Liz_no_default_ns.ovf
index b93540f..10ccaf1 100755
--- a/src/test/ovf_manifests/Win10-Liz_no_default_ns.ovf
+++ b/src/test/ovf_manifests/Win10-Liz_no_default_ns.ovf
@@ -137,6 +137,7 @@
<vmw:Config ovf:required="false" vmw:key="powerOpInfo.resetType" vmw:value="soft"/>
<vmw:Config ovf:required="false" vmw:key="powerOpInfo.suspendType" vmw:value="soft"/>
<vmw:Config ovf:required="false" vmw:key="tools.syncTimeWithHost" vmw:value="false"/>
+ <vmw:Config ovf:required="false" vmw:key="firmware" vmw:value="efi"/>
</VirtualHardwareSection>
</VirtualSystem>
</Envelope>
diff --git a/src/test/run_ovf_tests.pl b/src/test/run_ovf_tests.pl
index c433c9d..e92258d 100755
--- a/src/test/run_ovf_tests.pl
+++ b/src/test/run_ovf_tests.pl
@@ -72,5 +72,6 @@ is($win10noNs->{qm}->{memory}, '6144', 'win10 VM (no default rasd NS) memory is
is($win10noNs->{qm}->{cores}, '4', 'win10 VM (no default rasd NS) cores are correct');
# older esxi/ovf standard used 'other' for windows10
is($win10noNs->{qm}->{ostype}, 'other', 'win10 VM (no default rasd NS) ostype is correct');
+is($win10noNs->{qm}->{bios}, 'ovmf', 'win10 VM (no default rasd NS) bios is correct');
done_testing();
--
2.39.2
* [pve-devel] [PATCH storage v3 06/10] ovf: implement rudimentary boot order
2024-04-29 11:21 [pve-devel] [PATCH storage/qemu-server/manager v3] implement ova/ovf import for file based storages Dominik Csapak
` (4 preceding siblings ...)
2024-04-29 11:21 ` [pve-devel] [PATCH storage v3 05/10] ovf: implement parsing out firmware type Dominik Csapak
@ 2024-04-29 11:21 ` Dominik Csapak
2024-04-29 11:21 ` [pve-devel] [PATCH storage v3 07/10] ovf: implement parsing nics Dominik Csapak
` (17 subsequent siblings)
23 siblings, 0 replies; 38+ messages in thread
From: Dominik Csapak @ 2024-04-29 11:21 UTC (permalink / raw)
To: pve-devel
simply add all parsed disks to the boot order in the order we encounter
them (similar to the esxi plugin).
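The resulting boot property is just a join over the disk addresses in
parse order, e.g. (Python sketch with example addresses):

```python
disk_addresses = ['scsi0', 'scsi1']  # in the order the disks were parsed

# only set boot when at least one disk was found
boot = 'order=' + ';'.join(disk_addresses) if disk_addresses else None
print(boot)  # order=scsi0;scsi1
```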
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/PVE/GuestImport/OVF.pm | 6 +++++-
src/test/run_ovf_tests.pl | 3 +++
2 files changed, 8 insertions(+), 1 deletion(-)
diff --git a/src/PVE/GuestImport/OVF.pm b/src/PVE/GuestImport/OVF.pm
index 767590e..f0609de 100644
--- a/src/PVE/GuestImport/OVF.pm
+++ b/src/PVE/GuestImport/OVF.pm
@@ -245,6 +245,8 @@ sub parse_ovf {
# when all the nodes has been found out, we copy the relevant information to
# a $pve_disk hash ref, which we push to @disks;
+ my $boot_order = [];
+
foreach my $item_node (@disk_items) {
my $disk_node;
@@ -349,9 +351,11 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
};
$pve_disk->{virtual_size} = $virtual_size if defined($virtual_size);
push @disks, $pve_disk;
-
+ push @$boot_order, $pve_disk_address;
}
+ $qm->{boot} = "order=" . join(';', @$boot_order) if scalar(@$boot_order) > 0;
+
return {qm => $qm, disks => \@disks};
}
diff --git a/src/test/run_ovf_tests.pl b/src/test/run_ovf_tests.pl
index e92258d..3b04100 100755
--- a/src/test/run_ovf_tests.pl
+++ b/src/test/run_ovf_tests.pl
@@ -56,17 +56,20 @@ is($win10noNs->{disks}->[0]->{virtual_size}, 2048, 'single disk vm (no default r
print "\ntesting vm.conf extraction\n";
+is($win2008->{qm}->{boot}, 'order=scsi0;scsi1', 'win2008 VM boot is correct');
is($win2008->{qm}->{name}, 'Win2008-R2x64', 'win2008 VM name is correct');
is($win2008->{qm}->{memory}, '2048', 'win2008 VM memory is correct');
is($win2008->{qm}->{cores}, '1', 'win2008 VM cores are correct');
is($win2008->{qm}->{ostype}, 'win7', 'win2008 VM ostype is correct');
+is($win10->{qm}->{boot}, 'order=scsi0', 'win10 VM boot is correct');
is($win10->{qm}->{name}, 'Win10-Liz', 'win10 VM name is correct');
is($win10->{qm}->{memory}, '6144', 'win10 VM memory is correct');
is($win10->{qm}->{cores}, '4', 'win10 VM cores are correct');
# older esxi/ovf standard used 'other' for windows10
is($win10->{qm}->{ostype}, 'other', 'win10 VM ostype is correct');
+is($win10noNs->{qm}->{boot}, 'order=scsi0', 'win10 VM (no default rasd NS) boot is correct');
is($win10noNs->{qm}->{name}, 'Win10-Liz', 'win10 VM (no default rasd NS) name is correct');
is($win10noNs->{qm}->{memory}, '6144', 'win10 VM (no default rasd NS) memory is correct');
is($win10noNs->{qm}->{cores}, '4', 'win10 VM (no default rasd NS) cores are correct');
--
2.39.2
* [pve-devel] [PATCH storage v3 07/10] ovf: implement parsing nics
2024-04-29 11:21 [pve-devel] [PATCH storage/qemu-server/manager v3] implement ova/ovf import for file based storages Dominik Csapak
` (5 preceding siblings ...)
2024-04-29 11:21 ` [pve-devel] [PATCH storage v3 06/10] ovf: implement rudimentary boot order Dominik Csapak
@ 2024-04-29 11:21 ` Dominik Csapak
2024-04-29 11:21 ` [pve-devel] [PATCH storage v3 08/10] api: allow ova upload/download Dominik Csapak
` (16 subsequent siblings)
23 siblings, 0 replies; 38+ messages in thread
From: Dominik Csapak @ 2024-04-29 11:21 UTC (permalink / raw)
To: pve-devel
by iterating over the relevant parts and trying to parse out the
'ResourceSubType'. The content of that is not standardized, but I only
ever found examples that are compatible with vmware, meaning it's
either 'e1000', 'e1000e' or 'vmxnet3' (in various capitalizations,
thus the `lc()`). As a fallback I use e1000, since that is our default
too and should work for most guest operating systems.
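The normalization described above amounts to this Python sketch
('PCNet32' is a made-up non-matching subtype for illustration):

```python
ALLOWED_NIC_MODELS = {'e1000', 'e1000e', 'vmxnet3'}

def normalize_nic_model(resource_subtype):
    # vendors capitalize the subtype inconsistently, so lowercase first
    model = resource_subtype.lower()
    # anything outside the known set falls back to the e1000 default
    return model if model in ALLOWED_NIC_MODELS else 'e1000'

print(normalize_nic_model('E1000E'), normalize_nic_model('PCNet32'))
```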
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/PVE/GuestImport/OVF.pm | 23 ++++++++++++++++++++++-
src/PVE/Storage/DirPlugin.pm | 2 +-
src/test/run_ovf_tests.pl | 5 +++++
3 files changed, 28 insertions(+), 2 deletions(-)
diff --git a/src/PVE/GuestImport/OVF.pm b/src/PVE/GuestImport/OVF.pm
index f0609de..d7e3ce4 100644
--- a/src/PVE/GuestImport/OVF.pm
+++ b/src/PVE/GuestImport/OVF.pm
@@ -120,6 +120,12 @@ sub get_ostype {
return $ostype_ids->{$id} // 'other';
}
+my $allowed_nic_models = [
+ 'e1000',
+ 'e1000e',
+ 'vmxnet3',
+];
+
sub find_by {
my ($key, $param) = @_;
foreach my $resource (@resources) {
@@ -356,7 +362,22 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
$qm->{boot} = "order=" . join(';', @$boot_order) if scalar(@$boot_order) > 0;
- return {qm => $qm, disks => \@disks};
+ my $nic_id = dtmf_name_to_id('Ethernet Adapter');
+ my $xpath_find_nics = "/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/ovf:Item[rasd:ResourceType=${nic_id}]";
+ my @nic_items = $xpc->findnodes($xpath_find_nics);
+
+ my $net = {};
+
+ my $net_count = 0;
+ for my $item_node (@nic_items) {
+ my $model = $xpc->findvalue('rasd:ResourceSubType', $item_node);
+ $model = lc($model);
+ $model = 'e1000' if ! grep { $_ eq $model } @$allowed_nic_models;
+ $net->{"net${net_count}"} = { model => $model };
+ $net_count++;
+ }
+
+ return {qm => $qm, disks => \@disks, net => $net};
}
1;
diff --git a/src/PVE/Storage/DirPlugin.pm b/src/PVE/Storage/DirPlugin.pm
index b98b603..6a6b5e9 100644
--- a/src/PVE/Storage/DirPlugin.pm
+++ b/src/PVE/Storage/DirPlugin.pm
@@ -293,7 +293,7 @@ sub get_import_metadata {
'create-args' => $res->{qm},
'disks' => $disks,
warnings => $warnings,
- net => [],
+ net => $res->{net},
};
}
diff --git a/src/test/run_ovf_tests.pl b/src/test/run_ovf_tests.pl
index 3b04100..b8fa4b1 100755
--- a/src/test/run_ovf_tests.pl
+++ b/src/test/run_ovf_tests.pl
@@ -54,6 +54,11 @@ is($win10noNs->{disks}->[0]->{disk_address}, 'scsi0', 'single disk vm (no defaul
is($win10noNs->{disks}->[0]->{backing_file}, "$test_manifests/Win10-Liz-disk1.vmdk", 'single disk vm (no default rasd NS) has the correct disk backing device');
is($win10noNs->{disks}->[0]->{virtual_size}, 2048, 'single disk vm (no default rasd NS) has the correct size');
+print "testing nics\n";
+is($win2008->{net}->{net0}->{model}, 'e1000', 'win2008 has correct nic model');
+is($win10->{net}->{net0}->{model}, 'e1000e', 'win10 has correct nic model');
+is($win10noNs->{net}->{net0}->{model}, 'e1000e', 'win10 (no default rasd NS) has correct nic model');
+
print "\ntesting vm.conf extraction\n";
is($win2008->{qm}->{boot}, 'order=scsi0;scsi1', 'win2008 VM boot is correct');
--
2.39.2
* [pve-devel] [PATCH storage v3 08/10] api: allow ova upload/download
2024-04-29 11:21 [pve-devel] [PATCH storage/qemu-server/manager v3] implement ova/ovf import for file based storages Dominik Csapak
` (6 preceding siblings ...)
2024-04-29 11:21 ` [pve-devel] [PATCH storage v3 07/10] ovf: implement parsing nics Dominik Csapak
@ 2024-04-29 11:21 ` Dominik Csapak
2024-05-22 10:20 ` Fabian Grünbichler
2024-04-29 11:21 ` [pve-devel] [PATCH storage v3 09/10] plugin: enable import for nfs/btrfs/cifs/cephfs/glusterfs Dominik Csapak
` (15 subsequent siblings)
23 siblings, 1 reply; 38+ messages in thread
From: Dominik Csapak @ 2024-04-29 11:21 UTC (permalink / raw)
To: pve-devel
introduce a separate regex that only contains ova, since
uploading/downloading ovfs does not make sense (the disks would be
missing).
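The stricter upload regex behaves like this (Python sketch; the
filenames are examples, and the trailing anchor mirrors the
`[^/]+$RE$` match in Status.pm):

```python
import re

# upload/download allows only .ova, unlike the broader import regex
UPLOAD_IMPORT_EXT_RE = re.compile(r'[^/]+\.(ova)$')

for name in ('appliance.ova', 'appliance.ovf', 'disk.vmdk'):
    print(name, bool(UPLOAD_IMPORT_EXT_RE.search(name)))
```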
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/PVE/API2/Storage/Status.pm | 18 ++++++++++++++----
src/PVE/Storage.pm | 11 +++++++++++
2 files changed, 25 insertions(+), 4 deletions(-)
diff --git a/src/PVE/API2/Storage/Status.pm b/src/PVE/API2/Storage/Status.pm
index acde730..6c0c1e5 100644
--- a/src/PVE/API2/Storage/Status.pm
+++ b/src/PVE/API2/Storage/Status.pm
@@ -369,7 +369,7 @@ __PACKAGE__->register_method ({
name => 'upload',
path => '{storage}/upload',
method => 'POST',
- description => "Upload templates and ISO images.",
+ description => "Upload templates, ISO images and OVAs.",
permissions => {
check => ['perm', '/storage/{storage}', ['Datastore.AllocateTemplate']],
},
@@ -382,7 +382,7 @@ __PACKAGE__->register_method ({
content => {
description => "Content type.",
type => 'string', format => 'pve-storage-content',
- enum => ['iso', 'vztmpl'],
+ enum => ['iso', 'vztmpl', 'import'],
},
filename => {
description => "The name of the file to create. Caution: This will be normalized!",
@@ -448,6 +448,11 @@ __PACKAGE__->register_method ({
raise_param_exc({ filename => "wrong file extension" });
}
$path = PVE::Storage::get_vztmpl_dir($cfg, $param->{storage});
+ } elsif ($content eq 'import') {
+ if ($filename !~ m![^/]+$PVE::Storage::UPLOAD_IMPORT_EXT_RE_1$!) {
+ raise_param_exc({ filename => "wrong file extension" });
+ }
+ $path = PVE::Storage::get_import_dir($cfg, $param->{storage});
} else {
raise_param_exc({ content => "upload content type '$content' not allowed" });
}
@@ -544,7 +549,7 @@ __PACKAGE__->register_method({
name => 'download_url',
path => '{storage}/download-url',
method => 'POST',
- description => "Download templates and ISO images by using an URL.",
+ description => "Download templates, ISO images and OVAs by using an URL.",
proxyto => 'node',
permissions => {
description => 'Requires allocation access on the storage and as this allows one to probe'
@@ -572,7 +577,7 @@ __PACKAGE__->register_method({
content => {
description => "Content type.", # TODO: could be optional & detected in most cases
type => 'string', format => 'pve-storage-content',
- enum => ['iso', 'vztmpl'],
+ enum => ['iso', 'vztmpl', 'import'],
},
filename => {
description => "The name of the file to create. Caution: This will be normalized!",
@@ -642,6 +647,11 @@ __PACKAGE__->register_method({
raise_param_exc({ filename => "wrong file extension" });
}
$path = PVE::Storage::get_vztmpl_dir($cfg, $storage);
+ } elsif ($content eq 'import') {
+ if ($filename !~ m![^/]+$PVE::Storage::UPLOAD_IMPORT_EXT_RE_1$!) {
+ raise_param_exc({ filename => "wrong file extension" });
+ }
+ $path = PVE::Storage::get_import_dir($cfg, $param->{storage});
} else {
raise_param_exc({ content => "upload content-type '$content' is not allowed" });
}
diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
index adc1b45..31b2ad5 100755
--- a/src/PVE/Storage.pm
+++ b/src/PVE/Storage.pm
@@ -116,6 +116,8 @@ our $BACKUP_EXT_RE_2 = qr/\.(tgz|(?:tar|vma)(?:\.(${\PVE::Storage::Plugin::COMPR
our $IMPORT_EXT_RE_1 = qr/\.(ova|ovf|qcow2|raw|vmdk)/;
+our $UPLOAD_IMPORT_EXT_RE_1 = qr/\.(ova)/;
+
our $SAFE_CHAR_CLASS_RE = qr/[a-zA-Z0-9\-\.\+\=\_]/;
# FIXME remove with PVE 8.0, add versioned breaks for pve-manager
@@ -464,6 +466,15 @@ sub get_iso_dir {
return $plugin->get_subdir($scfg, 'iso');
}
+sub get_import_dir {
+ my ($cfg, $storeid) = @_;
+
+ my $scfg = storage_config($cfg, $storeid);
+ my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
+
+ return $plugin->get_subdir($scfg, 'import');
+}
+
sub get_vztmpl_dir {
my ($cfg, $storeid) = @_;
--
2.39.2
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 38+ messages in thread
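The patch above gates uploaded filenames on an `.ova` extension (via `$UPLOAD_IMPORT_EXT_RE_1`) before resolving the storage's import directory. A minimal Python sketch of the equivalent check — the regex and helper name here are illustrative stand-ins for the Perl originals, not the actual API:

```python
import re

# Illustrative equivalent of the Perl gate m![^/]+\.(ova)$!:
# at least one non-slash character followed by the ".ova" suffix,
# the only archive format accepted for 'import' upload.
UPLOAD_IMPORT_EXT_RE = re.compile(r'[^/]+\.ova$')

def check_upload_filename(filename: str) -> bool:
    """Return True if the filename passes the 'import' upload extension gate."""
    return UPLOAD_IMPORT_EXT_RE.search(filename) is not None

print(check_upload_filename("appliance.ova"))  # True
print(check_upload_filename("disk.qcow2"))     # False
```

Note that, as in the Perl version, only the suffix is validated here; the filename is separately normalized before being written under the import directory.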
* [pve-devel] [PATCH storage v3 09/10] plugin: enable import for nfs/btrfs/cifs/cephfs/glusterfs
2024-04-29 11:21 [pve-devel] [PATCH storage/qemu-server/manager v3] implement ova/ovf import for file based storages Dominik Csapak
` (7 preceding siblings ...)
2024-04-29 11:21 ` [pve-devel] [PATCH storage v3 08/10] api: allow ova upload/download Dominik Csapak
@ 2024-04-29 11:21 ` Dominik Csapak
2024-04-29 11:21 ` [pve-devel] [PATCH storage v3 10/10] add 'import' content type to 'check_volume_access' Dominik Csapak
` (14 subsequent siblings)
23 siblings, 0 replies; 38+ messages in thread
From: Dominik Csapak @ 2024-04-29 11:21 UTC (permalink / raw)
To: pve-devel
and reuse the DirPlugin implementation
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/PVE/Storage/BTRFSPlugin.pm | 5 +++++
src/PVE/Storage/CIFSPlugin.pm | 6 +++++-
src/PVE/Storage/CephFSPlugin.pm | 6 +++++-
src/PVE/Storage/GlusterfsPlugin.pm | 6 +++++-
src/PVE/Storage/NFSPlugin.pm | 6 +++++-
5 files changed, 25 insertions(+), 4 deletions(-)
diff --git a/src/PVE/Storage/BTRFSPlugin.pm b/src/PVE/Storage/BTRFSPlugin.pm
index 42815cb..b7e3f82 100644
--- a/src/PVE/Storage/BTRFSPlugin.pm
+++ b/src/PVE/Storage/BTRFSPlugin.pm
@@ -40,6 +40,7 @@ sub plugindata {
backup => 1,
snippets => 1,
none => 1,
+ import => 1,
},
{ images => 1, rootdir => 1 },
],
@@ -930,4 +931,8 @@ sub volume_import {
return "$storeid:$volname";
}
+sub get_import_metadata {
+ return PVE::Storage::DirPlugin::get_import_metadata(@_);
+}
+
1
diff --git a/src/PVE/Storage/CIFSPlugin.pm b/src/PVE/Storage/CIFSPlugin.pm
index 2184471..475065a 100644
--- a/src/PVE/Storage/CIFSPlugin.pm
+++ b/src/PVE/Storage/CIFSPlugin.pm
@@ -99,7 +99,7 @@ sub type {
sub plugindata {
return {
content => [ { images => 1, rootdir => 1, vztmpl => 1, iso => 1,
- backup => 1, snippets => 1}, { images => 1 }],
+ backup => 1, snippets => 1, import => 1}, { images => 1 }],
format => [ { raw => 1, qcow2 => 1, vmdk => 1 } , 'raw' ],
};
}
@@ -314,4 +314,8 @@ sub update_volume_attribute {
return PVE::Storage::DirPlugin::update_volume_attribute(@_);
}
+sub get_import_metadata {
+ return PVE::Storage::DirPlugin::get_import_metadata(@_);
+}
+
1;
diff --git a/src/PVE/Storage/CephFSPlugin.pm b/src/PVE/Storage/CephFSPlugin.pm
index 8aad518..36c64ea 100644
--- a/src/PVE/Storage/CephFSPlugin.pm
+++ b/src/PVE/Storage/CephFSPlugin.pm
@@ -116,7 +116,7 @@ sub type {
sub plugindata {
return {
- content => [ { vztmpl => 1, iso => 1, backup => 1, snippets => 1},
+ content => [ { vztmpl => 1, iso => 1, backup => 1, snippets => 1, import => 1 },
{ backup => 1 }],
};
}
@@ -261,4 +261,8 @@ sub update_volume_attribute {
return PVE::Storage::DirPlugin::update_volume_attribute(@_);
}
+sub get_import_metadata {
+ return PVE::Storage::DirPlugin::get_import_metadata(@_);
+}
+
1;
diff --git a/src/PVE/Storage/GlusterfsPlugin.pm b/src/PVE/Storage/GlusterfsPlugin.pm
index 2b7f9e1..9d17180 100644
--- a/src/PVE/Storage/GlusterfsPlugin.pm
+++ b/src/PVE/Storage/GlusterfsPlugin.pm
@@ -97,7 +97,7 @@ sub type {
sub plugindata {
return {
- content => [ { images => 1, vztmpl => 1, iso => 1, backup => 1, snippets => 1},
+ content => [ { images => 1, vztmpl => 1, iso => 1, backup => 1, snippets => 1, import => 1},
{ images => 1 }],
format => [ { raw => 1, qcow2 => 1, vmdk => 1 } , 'raw' ],
};
@@ -352,4 +352,8 @@ sub check_connection {
return defined($server) ? 1 : 0;
}
+sub get_import_metadata {
+ return PVE::Storage::DirPlugin::get_import_metadata(@_);
+}
+
1;
diff --git a/src/PVE/Storage/NFSPlugin.pm b/src/PVE/Storage/NFSPlugin.pm
index f2e4c0d..72e9c6d 100644
--- a/src/PVE/Storage/NFSPlugin.pm
+++ b/src/PVE/Storage/NFSPlugin.pm
@@ -53,7 +53,7 @@ sub type {
sub plugindata {
return {
- content => [ { images => 1, rootdir => 1, vztmpl => 1, iso => 1, backup => 1, snippets => 1 },
+ content => [ { images => 1, rootdir => 1, vztmpl => 1, iso => 1, backup => 1, snippets => 1, import => 1 },
{ images => 1 }],
format => [ { raw => 1, qcow2 => 1, vmdk => 1 } , 'raw' ],
};
@@ -223,4 +223,8 @@ sub update_volume_attribute {
return PVE::Storage::DirPlugin::update_volume_attribute(@_);
}
+sub get_import_metadata {
+ return PVE::Storage::DirPlugin::get_import_metadata(@_);
+}
+
1;
--
2.39.2
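Each plugin advertises its supported content types via the hash returned by `plugindata()`; the patch above adds `import => 1` to that hash for the network/filesystem plugins and delegates `get_import_metadata` to the `DirPlugin` implementation. A Python sketch of how such a capability map gates a content type — plugin names and the lookup helper are hypothetical, for illustration only:

```python
# Illustrative capability maps mirroring each plugin's plugindata() content
# hash: keys are content types, truthy values mark support.
PLUGIN_CONTENT = {
    'nfs':    {'images': 1, 'rootdir': 1, 'vztmpl': 1, 'iso': 1,
               'backup': 1, 'snippets': 1, 'import': 1},
    'cephfs': {'vztmpl': 1, 'iso': 1, 'backup': 1, 'snippets': 1, 'import': 1},
}

def supports_content(plugin: str, content: str) -> bool:
    """True if the plugin advertises support for the given content type."""
    return bool(PLUGIN_CONTENT.get(plugin, {}).get(content))

print(supports_content('nfs', 'import'))     # True
print(supports_content('cephfs', 'images'))  # False
```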
* [pve-devel] [PATCH storage v3 10/10] add 'import' content type to 'check_volume_access'
2024-04-29 11:21 [pve-devel] [PATCH storage/qemu-server/manager v3] implement ova/ovf import for file based storages Dominik Csapak
` (8 preceding siblings ...)
2024-04-29 11:21 ` [pve-devel] [PATCH storage v3 09/10] plugin: enable import for nfs/btrfs/cifs/cephfs/glusterfs Dominik Csapak
@ 2024-04-29 11:21 ` Dominik Csapak
2024-04-29 11:21 ` [pve-devel] [PATCH qemu-server v3 1/4] api: delete unused OVF.pm Dominik Csapak
` (13 subsequent siblings)
23 siblings, 0 replies; 38+ messages in thread
From: Dominik Csapak @ 2024-04-29 11:21 UTC (permalink / raw)
To: pve-devel
in the same branch as 'vztmpl' and 'iso'
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/PVE/Storage.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
index 31b2ad5..fe29842 100755
--- a/src/PVE/Storage.pm
+++ b/src/PVE/Storage.pm
@@ -540,7 +540,7 @@ sub check_volume_access {
return if $rpcenv->check($user, "/storage/$sid", ['Datastore.Allocate'], 1);
- if ($vtype eq 'iso' || $vtype eq 'vztmpl') {
+ if ($vtype eq 'iso' || $vtype eq 'vztmpl' || $vtype eq 'import') {
# require at least read access to storage, (custom) templates/ISOs could be sensitive
$rpcenv->check_any($user, "/storage/$sid", ['Datastore.AllocateSpace', 'Datastore.Audit']);
} elsif (defined($ownervm) && defined($vmid) && ($ownervm == $vmid)) {
--
2.39.2
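The one-line change above folds `import` into the same read-access branch as `iso` and `vztmpl` in `check_volume_access`: any of `Datastore.AllocateSpace` or `Datastore.Audit` on the storage suffices. A hedged Python sketch of that branching (helper and privilege-list shape are illustrative, not the RPC environment's real interface):

```python
# Content types whose volumes are readable with basic storage access;
# mirrors the branch `$vtype eq 'iso' || $vtype eq 'vztmpl' || $vtype eq 'import'`.
SHARED_READABLE = {'iso', 'vztmpl', 'import'}

def required_privs(vtype: str) -> list[str]:
    """Privileges (any one suffices) needed to access a volume of this type."""
    if vtype in SHARED_READABLE:
        # (custom) templates/ISOs/import images could be sensitive,
        # so at least read access to the storage is required
        return ['Datastore.AllocateSpace', 'Datastore.Audit']
    # other types fall through to the owner-VM / Datastore.Allocate checks
    return ['Datastore.Allocate']

print(required_privs('import'))
```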
* [pve-devel] [PATCH qemu-server v3 1/4] api: delete unused OVF.pm
2024-04-29 11:21 [pve-devel] [PATCH storage/qemu-server/manager v3] implement ova/ovf import for file based storages Dominik Csapak
` (9 preceding siblings ...)
2024-04-29 11:21 ` [pve-devel] [PATCH storage v3 10/10] add 'import' content type to 'check_volume_access' Dominik Csapak
@ 2024-04-29 11:21 ` Dominik Csapak
2024-05-22 10:25 ` Fabian Grünbichler
2024-04-29 11:21 ` [pve-devel] [PATCH qemu-server v3 2/4] use OVF from Storage Dominik Csapak
` (12 subsequent siblings)
23 siblings, 1 reply; 38+ messages in thread
From: Dominik Csapak @ 2024-04-29 11:21 UTC (permalink / raw)
To: pve-devel
the API part was never used by anything
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
PVE/API2/Qemu/Makefile | 2 +-
PVE/API2/Qemu/OVF.pm | 53 ------------------------------------------
2 files changed, 1 insertion(+), 54 deletions(-)
delete mode 100644 PVE/API2/Qemu/OVF.pm
diff --git a/PVE/API2/Qemu/Makefile b/PVE/API2/Qemu/Makefile
index bdd4762b..5d4abda6 100644
--- a/PVE/API2/Qemu/Makefile
+++ b/PVE/API2/Qemu/Makefile
@@ -1,4 +1,4 @@
-SOURCES=Agent.pm CPU.pm Machine.pm OVF.pm
+SOURCES=Agent.pm CPU.pm Machine.pm
.PHONY: install
install:
diff --git a/PVE/API2/Qemu/OVF.pm b/PVE/API2/Qemu/OVF.pm
deleted file mode 100644
index cc0ef2da..00000000
--- a/PVE/API2/Qemu/OVF.pm
+++ /dev/null
@@ -1,53 +0,0 @@
-package PVE::API2::Qemu::OVF;
-
-use strict;
-use warnings;
-
-use PVE::JSONSchema qw(get_standard_option);
-use PVE::QemuServer::OVF;
-use PVE::RESTHandler;
-
-use base qw(PVE::RESTHandler);
-
-__PACKAGE__->register_method ({
- name => 'readovf',
- path => '',
- method => 'GET',
- proxyto => 'node',
- description => "Read an .ovf manifest.",
- protected => 1,
- parameters => {
- additionalProperties => 0,
- properties => {
- node => get_standard_option('pve-node'),
- manifest => {
- description => "Path to .ovf manifest.",
- type => 'string',
- },
- },
- },
- returns => {
- type => 'object',
- additionalProperties => 1,
- properties => PVE::QemuServer::json_ovf_properties(),
- description => "VM config according to .ovf manifest.",
- },
- code => sub {
- my ($param) = @_;
-
- my $manifest = $param->{manifest};
- die "check for file $manifest failed - $!\n" if !-f $manifest;
-
- my $parsed = PVE::QemuServer::OVF::parse_ovf($manifest);
- my $result;
- $result->{cores} = $parsed->{qm}->{cores};
- $result->{name} = $parsed->{qm}->{name};
- $result->{memory} = $parsed->{qm}->{memory};
- my $disks = $parsed->{disks};
- for my $disk (@$disks) {
- $result->{$disk->{disk_address}} = $disk->{backing_file};
- }
- return $result;
- }});
-
-1;
--
2.39.2
* [pve-devel] [PATCH qemu-server v3 2/4] use OVF from Storage
2024-04-29 11:21 [pve-devel] [PATCH storage/qemu-server/manager v3] implement ova/ovf import for file based storages Dominik Csapak
` (10 preceding siblings ...)
2024-04-29 11:21 ` [pve-devel] [PATCH qemu-server v3 1/4] api: delete unused OVF.pm Dominik Csapak
@ 2024-04-29 11:21 ` Dominik Csapak
2024-04-29 11:21 ` [pve-devel] [PATCH qemu-server v3 3/4] api: create: implement extracting disks when needed for import-from Dominik Csapak
` (11 subsequent siblings)
23 siblings, 0 replies; 38+ messages in thread
From: Dominik Csapak @ 2024-04-29 11:21 UTC (permalink / raw)
To: pve-devel
and delete it here (including the tests; they live in pve-storage now).
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
PVE/CLI/qm.pm | 4 +-
PVE/QemuServer/Makefile | 1 -
PVE/QemuServer/OVF.pm | 242 ------------------
test/Makefile | 5 +-
test/ovf_manifests/Win10-Liz-disk1.vmdk | Bin 65536 -> 0 bytes
test/ovf_manifests/Win10-Liz.ovf | 142 ----------
.../ovf_manifests/Win10-Liz_no_default_ns.ovf | 142 ----------
test/ovf_manifests/Win_2008_R2_two-disks.ovf | 145 -----------
test/ovf_manifests/disk1.vmdk | Bin 65536 -> 0 bytes
test/ovf_manifests/disk2.vmdk | Bin 65536 -> 0 bytes
test/run_ovf_tests.pl | 71 -----
11 files changed, 3 insertions(+), 749 deletions(-)
delete mode 100644 PVE/QemuServer/OVF.pm
delete mode 100644 test/ovf_manifests/Win10-Liz-disk1.vmdk
delete mode 100755 test/ovf_manifests/Win10-Liz.ovf
delete mode 100755 test/ovf_manifests/Win10-Liz_no_default_ns.ovf
delete mode 100755 test/ovf_manifests/Win_2008_R2_two-disks.ovf
delete mode 100644 test/ovf_manifests/disk1.vmdk
delete mode 100644 test/ovf_manifests/disk2.vmdk
delete mode 100755 test/run_ovf_tests.pl
diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
index b105830f..2b85d072 100755
--- a/PVE/CLI/qm.pm
+++ b/PVE/CLI/qm.pm
@@ -28,13 +28,13 @@ use PVE::Tools qw(extract_param file_get_contents);
use PVE::API2::Qemu::Agent;
use PVE::API2::Qemu;
+use PVE::GuestImport::OVF;
use PVE::QemuConfig;
use PVE::QemuServer::Drive;
use PVE::QemuServer::Helpers;
use PVE::QemuServer::Agent qw(agent_available);
use PVE::QemuServer::ImportDisk;
use PVE::QemuServer::Monitor qw(mon_cmd);
-use PVE::QemuServer::OVF;
use PVE::QemuServer;
use PVE::CLIHandler;
@@ -729,7 +729,7 @@ __PACKAGE__->register_method ({
my $storecfg = PVE::Storage::config();
PVE::Storage::storage_check_enabled($storecfg, $storeid);
- my $parsed = PVE::QemuServer::OVF::parse_ovf($ovf_file);
+ my $parsed = PVE::GuestImport::OVF::parse_ovf($ovf_file);
if ($dryrun) {
print to_json($parsed, { pretty => 1, canonical => 1});
diff --git a/PVE/QemuServer/Makefile b/PVE/QemuServer/Makefile
index ac26e56f..89d12091 100644
--- a/PVE/QemuServer/Makefile
+++ b/PVE/QemuServer/Makefile
@@ -2,7 +2,6 @@ SOURCES=PCI.pm \
USB.pm \
Memory.pm \
ImportDisk.pm \
- OVF.pm \
Cloudinit.pm \
Agent.pm \
Helpers.pm \
diff --git a/PVE/QemuServer/OVF.pm b/PVE/QemuServer/OVF.pm
deleted file mode 100644
index b97b0520..00000000
--- a/PVE/QemuServer/OVF.pm
+++ /dev/null
@@ -1,242 +0,0 @@
-# Open Virtualization Format import routines
-# https://www.dmtf.org/standards/ovf
-package PVE::QemuServer::OVF;
-
-use strict;
-use warnings;
-
-use XML::LibXML;
-use File::Spec;
-use File::Basename;
-use Data::Dumper;
-use Cwd 'realpath';
-
-use PVE::Tools;
-use PVE::Storage;
-
-# map OVF resources types to descriptive strings
-# this will allow us to explore the xml tree without using magic numbers
-# http://schemas.dmtf.org/wbem/cim-html/2/CIM_ResourceAllocationSettingData.html
-my @resources = (
- { id => 1, dtmf_name => 'Other' },
- { id => 2, dtmf_name => 'Computer System' },
- { id => 3, dtmf_name => 'Processor' },
- { id => 4, dtmf_name => 'Memory' },
- { id => 5, dtmf_name => 'IDE Controller', pve_type => 'ide' },
- { id => 6, dtmf_name => 'Parallel SCSI HBA', pve_type => 'scsi' },
- { id => 7, dtmf_name => 'FC HBA' },
- { id => 8, dtmf_name => 'iSCSI HBA' },
- { id => 9, dtmf_name => 'IB HCA' },
- { id => 10, dtmf_name => 'Ethernet Adapter' },
- { id => 11, dtmf_name => 'Other Network Adapter' },
- { id => 12, dtmf_name => 'I/O Slot' },
- { id => 13, dtmf_name => 'I/O Device' },
- { id => 14, dtmf_name => 'Floppy Drive' },
- { id => 15, dtmf_name => 'CD Drive' },
- { id => 16, dtmf_name => 'DVD drive' },
- { id => 17, dtmf_name => 'Disk Drive' },
- { id => 18, dtmf_name => 'Tape Drive' },
- { id => 19, dtmf_name => 'Storage Extent' },
- { id => 20, dtmf_name => 'Other storage device', pve_type => 'sata'},
- { id => 21, dtmf_name => 'Serial port' },
- { id => 22, dtmf_name => 'Parallel port' },
- { id => 23, dtmf_name => 'USB Controller' },
- { id => 24, dtmf_name => 'Graphics controller' },
- { id => 25, dtmf_name => 'IEEE 1394 Controller' },
- { id => 26, dtmf_name => 'Partitionable Unit' },
- { id => 27, dtmf_name => 'Base Partitionable Unit' },
- { id => 28, dtmf_name => 'Power' },
- { id => 29, dtmf_name => 'Cooling Capacity' },
- { id => 30, dtmf_name => 'Ethernet Switch Port' },
- { id => 31, dtmf_name => 'Logical Disk' },
- { id => 32, dtmf_name => 'Storage Volume' },
- { id => 33, dtmf_name => 'Ethernet Connection' },
- { id => 34, dtmf_name => 'DMTF reserved' },
- { id => 35, dtmf_name => 'Vendor Reserved'}
-);
-
-sub find_by {
- my ($key, $param) = @_;
- foreach my $resource (@resources) {
- if ($resource->{$key} eq $param) {
- return ($resource);
- }
- }
- return;
-}
-
-sub dtmf_name_to_id {
- my ($dtmf_name) = @_;
- my $found = find_by('dtmf_name', $dtmf_name);
- if ($found) {
- return $found->{id};
- } else {
- return;
- }
-}
-
-sub id_to_pve {
- my ($id) = @_;
- my $resource = find_by('id', $id);
- if ($resource) {
- return $resource->{pve_type};
- } else {
- return;
- }
-}
-
-# returns two references, $qm which holds qm.conf style key/values, and \@disks
-sub parse_ovf {
- my ($ovf, $debug) = @_;
-
- my $dom = XML::LibXML->load_xml(location => $ovf, no_blanks => 1);
-
- # register the xml namespaces in a xpath context object
- # 'ovf' is the default namespace so it will prepended to each xml element
- my $xpc = XML::LibXML::XPathContext->new($dom);
- $xpc->registerNs('ovf', 'http://schemas.dmtf.org/ovf/envelope/1');
- $xpc->registerNs('rasd', 'http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData');
- $xpc->registerNs('vssd', 'http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData');
-
-
- # hash to save qm.conf parameters
- my $qm;
-
- #array to save a disk list
- my @disks;
-
- # easy xpath
- # walk down the dom until we find the matching XML element
- my $xpath_find_name = "/ovf:Envelope/ovf:VirtualSystem/ovf:Name";
- my $ovf_name = $xpc->findvalue($xpath_find_name);
-
- if ($ovf_name) {
- # PVE::QemuServer::confdesc requires a valid DNS name
- ($qm->{name} = $ovf_name) =~ s/[^a-zA-Z0-9\-\.]//g;
- } else {
- warn "warning: unable to parse the VM name in this OVF manifest, generating a default value\n";
- }
-
- # middle level xpath
- # element[child] search the elements which have this [child]
- my $processor_id = dtmf_name_to_id('Processor');
- my $xpath_find_vcpu_count = "/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/ovf:Item[rasd:ResourceType=${processor_id}]/rasd:VirtualQuantity";
- $qm->{'cores'} = $xpc->findvalue($xpath_find_vcpu_count);
-
- my $memory_id = dtmf_name_to_id('Memory');
- my $xpath_find_memory = ("/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/ovf:Item[rasd:ResourceType=${memory_id}]/rasd:VirtualQuantity");
- $qm->{'memory'} = $xpc->findvalue($xpath_find_memory);
-
- # middle level xpath
- # here we expect multiple results, so we do not read the element value with
- # findvalue() but store multiple elements with findnodes()
- my $disk_id = dtmf_name_to_id('Disk Drive');
- my $xpath_find_disks="/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/ovf:Item[rasd:ResourceType=${disk_id}]";
- my @disk_items = $xpc->findnodes($xpath_find_disks);
-
- # disks metadata is split in four different xml elements:
- # * as an Item node of type DiskDrive in the VirtualHardwareSection
- # * as an Disk node in the DiskSection
- # * as a File node in the References section
- # * each Item node also holds a reference to its owning controller
- #
- # we iterate over the list of Item nodes of type disk drive, and for each item,
- # find the corresponding Disk node, and File node and owning controller
- # when all the nodes has been found out, we copy the relevant information to
- # a $pve_disk hash ref, which we push to @disks;
-
- foreach my $item_node (@disk_items) {
-
- my $disk_node;
- my $file_node;
- my $controller_node;
- my $pve_disk;
-
- print "disk item:\n", $item_node->toString(1), "\n" if $debug;
-
- # from Item, find corresponding Disk node
- # here the dot means the search should start from the current element in dom
- my $host_resource = $xpc->findvalue('rasd:HostResource', $item_node);
- my $disk_section_path;
- my $disk_id;
-
- # RFC 3986 "2.3. Unreserved Characters"
- my $valid_uripath_chars = qr/[[:alnum:]]|[\-\._~]/;
-
- if ($host_resource =~ m|^ovf:/(${valid_uripath_chars}+)/(${valid_uripath_chars}+)$|) {
- $disk_section_path = $1;
- $disk_id = $2;
- } else {
- warn "invalid host ressource $host_resource, skipping\n";
- next;
- }
- printf "disk section path: $disk_section_path and disk id: $disk_id\n" if $debug;
-
- # tricky xpath
- # @ means we filter the result query based on a the value of an item attribute ( @ = attribute)
- # @ needs to be escaped to prevent Perl double quote interpolation
- my $xpath_find_fileref = sprintf("/ovf:Envelope/ovf:DiskSection/\
-ovf:Disk[\@ovf:diskId='%s']/\@ovf:fileRef", $disk_id);
- my $fileref = $xpc->findvalue($xpath_find_fileref);
-
- my $valid_url_chars = qr@${valid_uripath_chars}|/@;
- if (!$fileref || $fileref !~ m/^${valid_url_chars}+$/) {
- warn "invalid host ressource $host_resource, skipping\n";
- next;
- }
-
- # from Disk Node, find corresponding filepath
- my $xpath_find_filepath = sprintf("/ovf:Envelope/ovf:References/ovf:File[\@ovf:id='%s']/\@ovf:href", $fileref);
- my $filepath = $xpc->findvalue($xpath_find_filepath);
- if (!$filepath) {
- warn "invalid file reference $fileref, skipping\n";
- next;
- }
- print "file path: $filepath\n" if $debug;
-
- # from Item, find owning Controller type
- my $controller_id = $xpc->findvalue('rasd:Parent', $item_node);
- my $xpath_find_parent_type = sprintf("/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/\
-ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
- my $controller_type = $xpc->findvalue($xpath_find_parent_type);
- if (!$controller_type) {
- warn "invalid or missing controller: $controller_type, skipping\n";
- next;
- }
- print "owning controller type: $controller_type\n" if $debug;
-
- # extract corresponding Controller node details
- my $adress_on_controller = $xpc->findvalue('rasd:AddressOnParent', $item_node);
- my $pve_disk_address = id_to_pve($controller_type) . $adress_on_controller;
-
- # resolve symlinks and relative path components
- # and die if the diskimage is not somewhere under the $ovf path
- my $ovf_dir = realpath(dirname(File::Spec->rel2abs($ovf)));
- my $backing_file_path = realpath(join ('/', $ovf_dir, $filepath));
- if ($backing_file_path !~ /^\Q${ovf_dir}\E/) {
- die "error parsing $filepath, are you using a symlink ?\n";
- }
-
- if (!-e $backing_file_path) {
- die "error parsing $filepath, file seems not to exist at $backing_file_path\n";
- }
-
- ($backing_file_path) = $backing_file_path =~ m|^(/.*)|; # untaint
-
- my $virtual_size = PVE::Storage::file_size_info($backing_file_path);
- die "error parsing $backing_file_path, cannot determine file size\n"
- if !$virtual_size;
-
- $pve_disk = {
- disk_address => $pve_disk_address,
- backing_file => $backing_file_path,
- virtual_size => $virtual_size
- };
- push @disks, $pve_disk;
-
- }
-
- return {qm => $qm, disks => \@disks};
-}
-
-1;
diff --git a/test/Makefile b/test/Makefile
index 9e6d39e8..65ed7bc4 100644
--- a/test/Makefile
+++ b/test/Makefile
@@ -1,14 +1,11 @@
all: test
-test: test_snapshot test_ovf test_cfg_to_cmd test_pci_addr_conflicts test_qemu_img_convert test_migration test_restore_config
+test: test_snapshot test_cfg_to_cmd test_pci_addr_conflicts test_qemu_img_convert test_migration test_restore_config
test_snapshot: run_snapshot_tests.pl
./run_snapshot_tests.pl
./test_get_replicatable_volumes.pl
-test_ovf: run_ovf_tests.pl
- ./run_ovf_tests.pl
-
test_cfg_to_cmd: run_config2command_tests.pl cfg2cmd/*.conf
perl -I../ ./run_config2command_tests.pl
diff --git a/test/ovf_manifests/Win10-Liz-disk1.vmdk b/test/ovf_manifests/Win10-Liz-disk1.vmdk
deleted file mode 100644
index 662354a3d1333a2f6c4364005e53bfe7cd8b9044..0000000000000000000000000000000000000000
GIT binary patch
literal 0
HcmV?d00001
literal 65536
zcmeIvy>HV%7zbeUF`dK)46s<q+^B&Pi6H|eMIb;zP1VdMK8V$P$u?2T)IS|NNta69
zSXw<No$qu%-}$}AUq|21A0<ihr0Gwa-nQ%QGfCR@wmshsN%A;JUhL<u_T%+U7Sd<o
zW^TMU0^M{}R2S(eR@1Ur*Q@eVF^^#r%c@u{hyC#J%V?Nq?*?z)=UG^1Wn9+n(yx6B
z(=ujtJiA)QVP~;guI5EOE2iV-%_??6=%y!^b+aeU_aA6Z4X2azC>{U!a5_FoJCkDB
zKRozW{5{B<Li)YUBEQ&fJe$RRZCRbA$5|CacQiT<A<uvIHbq(g$>yIY=etVNVcI$B
zY@^?CwTN|j)tg?;i)G&AZFqPqoW(5P2K~XUq>9sqVVe!!?y@Y;)^#k~TefEvd2_XU
z^M@5mfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk4^iOdL%ftb
z5g<T-009C72;3>~`p!f^fB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZ
zfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&U
zAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C7
z2oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N
p0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK;Zui`~%h>XmtPp
diff --git a/test/ovf_manifests/Win10-Liz.ovf b/test/ovf_manifests/Win10-Liz.ovf
deleted file mode 100755
index 46642c04..00000000
--- a/test/ovf_manifests/Win10-Liz.ovf
+++ /dev/null
@@ -1,142 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--Generated by VMware ovftool 4.1.0 (build-2982904), UTC time: 2017-02-07T13:50:15.265014Z-->
-<Envelope vmw:buildId="build-2982904" xmlns="http://schemas.dmtf.org/ovf/envelope/1" xmlns:cim="http://schemas.dmtf.org/wbem/wscim/1/common" xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1" xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" xmlns:vmw="http://www.vmware.com/schema/ovf" xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
- <References>
- <File ovf:href="Win10-Liz-disk1.vmdk" ovf:id="file1" ovf:size="9155243008"/>
- </References>
- <DiskSection>
- <Info>Virtual disk information</Info>
- <Disk ovf:capacity="128" ovf:capacityAllocationUnits="byte * 2^30" ovf:diskId="vmdisk1" ovf:fileRef="file1" ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized" ovf:populatedSize="16798056448"/>
- </DiskSection>
- <NetworkSection>
- <Info>The list of logical networks</Info>
- <Network ovf:name="bridged">
- <Description>The bridged network</Description>
- </Network>
- </NetworkSection>
- <VirtualSystem ovf:id="vm">
- <Info>A virtual machine</Info>
- <Name>Win10-Liz</Name>
- <OperatingSystemSection ovf:id="1" vmw:osType="windows9_64Guest">
- <Info>The kind of installed guest operating system</Info>
- </OperatingSystemSection>
- <VirtualHardwareSection>
- <Info>Virtual hardware requirements</Info>
- <System>
- <vssd:ElementName>Virtual Hardware Family</vssd:ElementName>
- <vssd:InstanceID>0</vssd:InstanceID>
- <vssd:VirtualSystemIdentifier>Win10-Liz</vssd:VirtualSystemIdentifier>
- <vssd:VirtualSystemType>vmx-11</vssd:VirtualSystemType>
- </System>
- <Item>
- <rasd:AllocationUnits>hertz * 10^6</rasd:AllocationUnits>
- <rasd:Description>Number of Virtual CPUs</rasd:Description>
- <rasd:ElementName>4 virtual CPU(s)</rasd:ElementName>
- <rasd:InstanceID>1</rasd:InstanceID>
- <rasd:ResourceType>3</rasd:ResourceType>
- <rasd:VirtualQuantity>4</rasd:VirtualQuantity>
- </Item>
- <Item>
- <rasd:AllocationUnits>byte * 2^20</rasd:AllocationUnits>
- <rasd:Description>Memory Size</rasd:Description>
- <rasd:ElementName>6144MB of memory</rasd:ElementName>
- <rasd:InstanceID>2</rasd:InstanceID>
- <rasd:ResourceType>4</rasd:ResourceType>
- <rasd:VirtualQuantity>6144</rasd:VirtualQuantity>
- </Item>
- <Item>
- <rasd:Address>0</rasd:Address>
- <rasd:Description>SATA Controller</rasd:Description>
- <rasd:ElementName>sataController0</rasd:ElementName>
- <rasd:InstanceID>3</rasd:InstanceID>
- <rasd:ResourceSubType>vmware.sata.ahci</rasd:ResourceSubType>
- <rasd:ResourceType>20</rasd:ResourceType>
- </Item>
- <Item ovf:required="false">
- <rasd:Address>0</rasd:Address>
- <rasd:Description>USB Controller (XHCI)</rasd:Description>
- <rasd:ElementName>usb3</rasd:ElementName>
- <rasd:InstanceID>4</rasd:InstanceID>
- <rasd:ResourceSubType>vmware.usb.xhci</rasd:ResourceSubType>
- <rasd:ResourceType>23</rasd:ResourceType>
- </Item>
- <Item ovf:required="false">
- <rasd:Address>0</rasd:Address>
- <rasd:Description>USB Controller (EHCI)</rasd:Description>
- <rasd:ElementName>usb</rasd:ElementName>
- <rasd:InstanceID>5</rasd:InstanceID>
- <rasd:ResourceSubType>vmware.usb.ehci</rasd:ResourceSubType>
- <rasd:ResourceType>23</rasd:ResourceType>
- <vmw:Config ovf:required="false" vmw:key="ehciEnabled" vmw:value="true"/>
- </Item>
- <Item>
- <rasd:Address>0</rasd:Address>
- <rasd:Description>SCSI Controller</rasd:Description>
- <rasd:ElementName>scsiController0</rasd:ElementName>
- <rasd:InstanceID>6</rasd:InstanceID>
- <rasd:ResourceSubType>lsilogicsas</rasd:ResourceSubType>
- <rasd:ResourceType>6</rasd:ResourceType>
- </Item>
- <Item ovf:required="false">
- <rasd:AutomaticAllocation>true</rasd:AutomaticAllocation>
- <rasd:ElementName>serial0</rasd:ElementName>
- <rasd:InstanceID>7</rasd:InstanceID>
- <rasd:ResourceType>21</rasd:ResourceType>
- <vmw:Config ovf:required="false" vmw:key="yieldOnPoll" vmw:value="false"/>
- </Item>
- <Item>
- <rasd:AddressOnParent>0</rasd:AddressOnParent>
- <rasd:ElementName>disk0</rasd:ElementName>
- <rasd:HostResource>ovf:/disk/vmdisk1</rasd:HostResource>
- <rasd:InstanceID>8</rasd:InstanceID>
- <rasd:Parent>6</rasd:Parent>
- <rasd:ResourceType>17</rasd:ResourceType>
- </Item>
- <Item>
- <rasd:AddressOnParent>2</rasd:AddressOnParent>
- <rasd:AutomaticAllocation>true</rasd:AutomaticAllocation>
- <rasd:Connection>bridged</rasd:Connection>
- <rasd:Description>E1000e ethernet adapter on "bridged"</rasd:Description>
- <rasd:ElementName>ethernet0</rasd:ElementName>
- <rasd:InstanceID>9</rasd:InstanceID>
- <rasd:ResourceSubType>E1000e</rasd:ResourceSubType>
- <rasd:ResourceType>10</rasd:ResourceType>
- <vmw:Config ovf:required="false" vmw:key="wakeOnLanEnabled" vmw:value="false"/>
- </Item>
- <Item ovf:required="false">
- <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
- <rasd:ElementName>sound</rasd:ElementName>
- <rasd:InstanceID>10</rasd:InstanceID>
- <rasd:ResourceSubType>vmware.soundcard.hdaudio</rasd:ResourceSubType>
- <rasd:ResourceType>1</rasd:ResourceType>
- </Item>
- <Item ovf:required="false">
- <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
- <rasd:ElementName>video</rasd:ElementName>
- <rasd:InstanceID>11</rasd:InstanceID>
- <rasd:ResourceType>24</rasd:ResourceType>
- <vmw:Config ovf:required="false" vmw:key="enable3DSupport" vmw:value="true"/>
- </Item>
- <Item ovf:required="false">
- <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
- <rasd:ElementName>vmci</rasd:ElementName>
- <rasd:InstanceID>12</rasd:InstanceID>
- <rasd:ResourceSubType>vmware.vmci</rasd:ResourceSubType>
- <rasd:ResourceType>1</rasd:ResourceType>
- </Item>
- <Item ovf:required="false">
- <rasd:AddressOnParent>1</rasd:AddressOnParent>
- <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
- <rasd:ElementName>cdrom0</rasd:ElementName>
- <rasd:InstanceID>13</rasd:InstanceID>
- <rasd:Parent>3</rasd:Parent>
- <rasd:ResourceType>15</rasd:ResourceType>
- </Item>
- <vmw:Config ovf:required="false" vmw:key="memoryHotAddEnabled" vmw:value="true"/>
- <vmw:Config ovf:required="false" vmw:key="powerOpInfo.powerOffType" vmw:value="soft"/>
- <vmw:Config ovf:required="false" vmw:key="powerOpInfo.resetType" vmw:value="soft"/>
- <vmw:Config ovf:required="false" vmw:key="powerOpInfo.suspendType" vmw:value="soft"/>
- <vmw:Config ovf:required="false" vmw:key="tools.syncTimeWithHost" vmw:value="false"/>
- </VirtualHardwareSection>
- </VirtualSystem>
-</Envelope>
\ No newline at end of file
diff --git a/test/ovf_manifests/Win10-Liz_no_default_ns.ovf b/test/ovf_manifests/Win10-Liz_no_default_ns.ovf
deleted file mode 100755
index b93540f4..00000000
--- a/test/ovf_manifests/Win10-Liz_no_default_ns.ovf
+++ /dev/null
@@ -1,142 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--Generated by VMware ovftool 4.1.0 (build-2982904), UTC time: 2017-02-07T13:50:15.265014Z-->
-<Envelope vmw:buildId="build-2982904" xmlns="http://schemas.dmtf.org/ovf/envelope/1" xmlns:cim="http://schemas.dmtf.org/wbem/wscim/1/common" xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1" xmlns:vmw="http://www.vmware.com/schema/ovf" xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
- <References>
- <File ovf:href="Win10-Liz-disk1.vmdk" ovf:id="file1" ovf:size="9155243008"/>
- </References>
- <DiskSection>
- <Info>Virtual disk information</Info>
- <Disk ovf:capacity="128" ovf:capacityAllocationUnits="byte * 2^30" ovf:diskId="vmdisk1" ovf:fileRef="file1" ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized" ovf:populatedSize="16798056448"/>
- </DiskSection>
- <NetworkSection>
- <Info>The list of logical networks</Info>
- <Network ovf:name="bridged">
- <Description>The bridged network</Description>
- </Network>
- </NetworkSection>
- <VirtualSystem ovf:id="vm">
- <Info>A virtual machine</Info>
- <Name>Win10-Liz</Name>
- <OperatingSystemSection ovf:id="1" vmw:osType="windows9_64Guest">
- <Info>The kind of installed guest operating system</Info>
- </OperatingSystemSection>
- <VirtualHardwareSection>
- <Info>Virtual hardware requirements</Info>
- <System>
- <vssd:ElementName>Virtual Hardware Family</vssd:ElementName>
- <vssd:InstanceID>0</vssd:InstanceID>
- <vssd:VirtualSystemIdentifier>Win10-Liz</vssd:VirtualSystemIdentifier>
- <vssd:VirtualSystemType>vmx-11</vssd:VirtualSystemType>
- </System>
- <Item>
- <rasd:AllocationUnits xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">hertz * 10^6</rasd:AllocationUnits>
- <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">Number of Virtual CPUs</rasd:Description>
- <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">4 virtual CPU(s)</rasd:ElementName>
- <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">1</rasd:InstanceID>
- <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">3</rasd:ResourceType>
- <rasd:VirtualQuantity xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">4</rasd:VirtualQuantity>
- </Item>
- <Item>
- <rasd:AllocationUnits xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">byte * 2^20</rasd:AllocationUnits>
- <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">Memory Size</rasd:Description>
- <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">6144MB of memory</rasd:ElementName>
- <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">2</rasd:InstanceID>
- <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">4</rasd:ResourceType>
- <rasd:VirtualQuantity xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">6144</rasd:VirtualQuantity>
- </Item>
- <Item>
- <rasd:Address xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">0</rasd:Address>
- <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">SATA Controller</rasd:Description>
- <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">sataController0</rasd:ElementName>
- <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">3</rasd:InstanceID>
- <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">vmware.sata.ahci</rasd:ResourceSubType>
- <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">20</rasd:ResourceType>
- </Item>
- <Item ovf:required="false">
- <rasd:Address xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">0</rasd:Address>
- <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">USB Controller (XHCI)</rasd:Description>
- <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">usb3</rasd:ElementName>
- <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">4</rasd:InstanceID>
- <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">vmware.usb.xhci</rasd:ResourceSubType>
- <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">23</rasd:ResourceType>
- </Item>
- <Item ovf:required="false">
- <rasd:Address xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">0</rasd:Address>
- <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">USB Controller (EHCI)</rasd:Description>
- <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">usb</rasd:ElementName>
- <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">5</rasd:InstanceID>
- <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">vmware.usb.ehci</rasd:ResourceSubType>
- <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">23</rasd:ResourceType>
- <vmw:Config ovf:required="false" vmw:key="ehciEnabled" vmw:value="true"/>
- </Item>
- <Item>
- <rasd:Address xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">0</rasd:Address>
- <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">SCSI Controller</rasd:Description>
- <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">scsiController0</rasd:ElementName>
- <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">6</rasd:InstanceID>
- <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">lsilogicsas</rasd:ResourceSubType>
- <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">6</rasd:ResourceType>
- </Item>
- <Item ovf:required="false">
- <rasd:AutomaticAllocation xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">true</rasd:AutomaticAllocation>
- <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">serial0</rasd:ElementName>
- <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">7</rasd:InstanceID>
- <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">21</rasd:ResourceType>
- <vmw:Config ovf:required="false" vmw:key="yieldOnPoll" vmw:value="false"/>
- </Item>
- <Item>
- <rasd:AddressOnParent xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" >0</rasd:AddressOnParent>
- <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" >disk0</rasd:ElementName>
- <rasd:HostResource xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" >ovf:/disk/vmdisk1</rasd:HostResource>
- <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">8</rasd:InstanceID>
- <rasd:Parent xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">6</rasd:Parent>
- <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">17</rasd:ResourceType>
- </Item>
- <Item>
- <rasd:AddressOnParent xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">2</rasd:AddressOnParent>
- <rasd:AutomaticAllocation xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">true</rasd:AutomaticAllocation>
- <rasd:Connection xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">bridged</rasd:Connection>
- <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">E1000e ethernet adapter on "bridged"</rasd:Description>
- <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">ethernet0</rasd:ElementName>
- <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">9</rasd:InstanceID>
- <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">E1000e</rasd:ResourceSubType>
- <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">10</rasd:ResourceType>
- <vmw:Config ovf:required="false" vmw:key="wakeOnLanEnabled" vmw:value="false"/>
- </Item>
- <Item ovf:required="false">
- <rasd:AutomaticAllocation xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">false</rasd:AutomaticAllocation>
- <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">sound</rasd:ElementName>
- <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">10</rasd:InstanceID>
- <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">vmware.soundcard.hdaudio</rasd:ResourceSubType>
- <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">1</rasd:ResourceType>
- </Item>
- <Item ovf:required="false">
- <rasd:AutomaticAllocation xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">false</rasd:AutomaticAllocation>
- <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">video</rasd:ElementName>
- <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">11</rasd:InstanceID>
- <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">24</rasd:ResourceType>
- <vmw:Config ovf:required="false" vmw:key="enable3DSupport" vmw:value="true"/>
- </Item>
- <Item ovf:required="false">
- <rasd:AutomaticAllocation xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">false</rasd:AutomaticAllocation>
- <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">vmci</rasd:ElementName>
- <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">12</rasd:InstanceID>
- <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">vmware.vmci</rasd:ResourceSubType>
- <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">1</rasd:ResourceType>
- </Item>
- <Item ovf:required="false">
- <rasd:AddressOnParent xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">1</rasd:AddressOnParent>
- <rasd:AutomaticAllocation xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">false</rasd:AutomaticAllocation>
- <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">cdrom0</rasd:ElementName>
- <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">13</rasd:InstanceID>
- <rasd:Parent xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">3</rasd:Parent>
- <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">15</rasd:ResourceType>
- </Item>
- <vmw:Config ovf:required="false" vmw:key="memoryHotAddEnabled" vmw:value="true"/>
- <vmw:Config ovf:required="false" vmw:key="powerOpInfo.powerOffType" vmw:value="soft"/>
- <vmw:Config ovf:required="false" vmw:key="powerOpInfo.resetType" vmw:value="soft"/>
- <vmw:Config ovf:required="false" vmw:key="powerOpInfo.suspendType" vmw:value="soft"/>
- <vmw:Config ovf:required="false" vmw:key="tools.syncTimeWithHost" vmw:value="false"/>
- </VirtualHardwareSection>
- </VirtualSystem>
-</Envelope>
diff --git a/test/ovf_manifests/Win_2008_R2_two-disks.ovf b/test/ovf_manifests/Win_2008_R2_two-disks.ovf
deleted file mode 100755
index a563aabb..00000000
--- a/test/ovf_manifests/Win_2008_R2_two-disks.ovf
+++ /dev/null
@@ -1,145 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--Generated by VMware ovftool 4.1.0 (build-2982904), UTC time: 2017-02-27T15:09:29.768974Z-->
-<Envelope vmw:buildId="build-2982904" xmlns="http://schemas.dmtf.org/ovf/envelope/1" xmlns:cim="http://schemas.dmtf.org/wbem/wscim/1/common" xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1" xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" xmlns:vmw="http://www.vmware.com/schema/ovf" xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
- <References>
- <File ovf:href="disk1.vmdk" ovf:id="file1" ovf:size="3481968640"/>
- <File ovf:href="disk2.vmdk" ovf:id="file2" ovf:size="68096"/>
- </References>
- <DiskSection>
- <Info>Virtual disk information</Info>
- <Disk ovf:capacity="40" ovf:capacityAllocationUnits="byte * 2^30" ovf:diskId="vmdisk1" ovf:fileRef="file1" ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized" ovf:populatedSize="7684882432"/>
- <Disk ovf:capacity="1" ovf:capacityAllocationUnits="byte * 2^30" ovf:diskId="vmdisk2" ovf:fileRef="file2" ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized" ovf:populatedSize="0"/>
- </DiskSection>
- <NetworkSection>
- <Info>The list of logical networks</Info>
- <Network ovf:name="bridged">
- <Description>The bridged network</Description>
- </Network>
- </NetworkSection>
- <VirtualSystem ovf:id="vm">
- <Info>A virtual machine</Info>
- <Name>Win_2008-R2x64</Name>
- <OperatingSystemSection ovf:id="103" vmw:osType="windows7Server64Guest">
- <Info>The kind of installed guest operating system</Info>
- </OperatingSystemSection>
- <VirtualHardwareSection>
- <Info>Virtual hardware requirements</Info>
- <System>
- <vssd:ElementName>Virtual Hardware Family</vssd:ElementName>
- <vssd:InstanceID>0</vssd:InstanceID>
- <vssd:VirtualSystemIdentifier>Win_2008-R2x64</vssd:VirtualSystemIdentifier>
- <vssd:VirtualSystemType>vmx-11</vssd:VirtualSystemType>
- </System>
- <Item>
- <rasd:AllocationUnits>hertz * 10^6</rasd:AllocationUnits>
- <rasd:Description>Number of Virtual CPUs</rasd:Description>
- <rasd:ElementName>1 virtual CPU(s)</rasd:ElementName>
- <rasd:InstanceID>1</rasd:InstanceID>
- <rasd:ResourceType>3</rasd:ResourceType>
- <rasd:VirtualQuantity>1</rasd:VirtualQuantity>
- </Item>
- <Item>
- <rasd:AllocationUnits>byte * 2^20</rasd:AllocationUnits>
- <rasd:Description>Memory Size</rasd:Description>
- <rasd:ElementName>2048MB of memory</rasd:ElementName>
- <rasd:InstanceID>2</rasd:InstanceID>
- <rasd:ResourceType>4</rasd:ResourceType>
- <rasd:VirtualQuantity>2048</rasd:VirtualQuantity>
- </Item>
- <Item>
- <rasd:Address>0</rasd:Address>
- <rasd:Description>SATA Controller</rasd:Description>
- <rasd:ElementName>sataController0</rasd:ElementName>
- <rasd:InstanceID>3</rasd:InstanceID>
- <rasd:ResourceSubType>vmware.sata.ahci</rasd:ResourceSubType>
- <rasd:ResourceType>20</rasd:ResourceType>
- </Item>
- <Item ovf:required="false">
- <rasd:Address>0</rasd:Address>
- <rasd:Description>USB Controller (EHCI)</rasd:Description>
- <rasd:ElementName>usb</rasd:ElementName>
- <rasd:InstanceID>4</rasd:InstanceID>
- <rasd:ResourceSubType>vmware.usb.ehci</rasd:ResourceSubType>
- <rasd:ResourceType>23</rasd:ResourceType>
- <vmw:Config ovf:required="false" vmw:key="ehciEnabled" vmw:value="true"/>
- </Item>
- <Item>
- <rasd:Address>0</rasd:Address>
- <rasd:Description>SCSI Controller</rasd:Description>
- <rasd:ElementName>scsiController0</rasd:ElementName>
- <rasd:InstanceID>5</rasd:InstanceID>
- <rasd:ResourceSubType>lsilogicsas</rasd:ResourceSubType>
- <rasd:ResourceType>6</rasd:ResourceType>
- </Item>
- <Item ovf:required="false">
- <rasd:AutomaticAllocation>true</rasd:AutomaticAllocation>
- <rasd:ElementName>serial0</rasd:ElementName>
- <rasd:InstanceID>6</rasd:InstanceID>
- <rasd:ResourceType>21</rasd:ResourceType>
- <vmw:Config ovf:required="false" vmw:key="yieldOnPoll" vmw:value="false"/>
- </Item>
- <Item>
- <rasd:AddressOnParent>0</rasd:AddressOnParent>
- <rasd:ElementName>disk0</rasd:ElementName>
- <rasd:HostResource>ovf:/disk/vmdisk1</rasd:HostResource>
- <rasd:InstanceID>7</rasd:InstanceID>
- <rasd:Parent>5</rasd:Parent>
- <rasd:ResourceType>17</rasd:ResourceType>
- </Item>
- <Item>
- <rasd:AddressOnParent>1</rasd:AddressOnParent>
- <rasd:ElementName>disk1</rasd:ElementName>
- <rasd:HostResource>ovf:/disk/vmdisk2</rasd:HostResource>
- <rasd:InstanceID>8</rasd:InstanceID>
- <rasd:Parent>5</rasd:Parent>
- <rasd:ResourceType>17</rasd:ResourceType>
- </Item>
- <Item>
- <rasd:AddressOnParent>2</rasd:AddressOnParent>
- <rasd:AutomaticAllocation>true</rasd:AutomaticAllocation>
- <rasd:Connection>bridged</rasd:Connection>
- <rasd:Description>E1000 ethernet adapter on "bridged"</rasd:Description>
- <rasd:ElementName>ethernet0</rasd:ElementName>
- <rasd:InstanceID>9</rasd:InstanceID>
- <rasd:ResourceSubType>E1000</rasd:ResourceSubType>
- <rasd:ResourceType>10</rasd:ResourceType>
- <vmw:Config ovf:required="false" vmw:key="wakeOnLanEnabled" vmw:value="false"/>
- </Item>
- <Item ovf:required="false">
- <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
- <rasd:ElementName>sound</rasd:ElementName>
- <rasd:InstanceID>10</rasd:InstanceID>
- <rasd:ResourceSubType>vmware.soundcard.hdaudio</rasd:ResourceSubType>
- <rasd:ResourceType>1</rasd:ResourceType>
- </Item>
- <Item ovf:required="false">
- <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
- <rasd:ElementName>video</rasd:ElementName>
- <rasd:InstanceID>11</rasd:InstanceID>
- <rasd:ResourceType>24</rasd:ResourceType>
- <vmw:Config ovf:required="false" vmw:key="enable3DSupport" vmw:value="true"/>
- </Item>
- <Item ovf:required="false">
- <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
- <rasd:ElementName>vmci</rasd:ElementName>
- <rasd:InstanceID>12</rasd:InstanceID>
- <rasd:ResourceSubType>vmware.vmci</rasd:ResourceSubType>
- <rasd:ResourceType>1</rasd:ResourceType>
- </Item>
- <Item ovf:required="false">
- <rasd:AddressOnParent>1</rasd:AddressOnParent>
- <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
- <rasd:ElementName>cdrom0</rasd:ElementName>
- <rasd:InstanceID>13</rasd:InstanceID>
- <rasd:Parent>3</rasd:Parent>
- <rasd:ResourceType>15</rasd:ResourceType>
- </Item>
- <vmw:Config ovf:required="false" vmw:key="cpuHotAddEnabled" vmw:value="true"/>
- <vmw:Config ovf:required="false" vmw:key="memoryHotAddEnabled" vmw:value="true"/>
- <vmw:Config ovf:required="false" vmw:key="powerOpInfo.powerOffType" vmw:value="soft"/>
- <vmw:Config ovf:required="false" vmw:key="powerOpInfo.resetType" vmw:value="soft"/>
- <vmw:Config ovf:required="false" vmw:key="powerOpInfo.suspendType" vmw:value="soft"/>
- <vmw:Config ovf:required="false" vmw:key="tools.syncTimeWithHost" vmw:value="false"/>
- </VirtualHardwareSection>
- </VirtualSystem>
-</Envelope>
diff --git a/test/ovf_manifests/disk1.vmdk b/test/ovf_manifests/disk1.vmdk
deleted file mode 100644
index 8660602343a1a955f9bcf2e6beaed99316dd8167..0000000000000000000000000000000000000000
GIT binary patch
literal 0
HcmV?d00001
literal 65536
zcmeIvy>HV%7zbeUF`dK)46s<q9ua7|WuUkSgpg2EwX+*viPe0`HWAtSr(-ASlAWQ|
zbJF?F{+-YFKK_yYyn2=-$&0qXY<t)4ch@B8o_Fo_en^t%N%H0}e|H$~AF`0X3J-JR
zqY>z*Sy|tuS*)j3xo%d~*K!`iCRTO1T8@X|%lB-2b2}Q2@{gmi&a1d=x<|K%7N%9q
zn|Qfh$8m45TCV10Gb^W)c4ZxVA@tMpzfJp2S{y#m?iwzx)01@a>+{9rJna?j=ZAyM
zqPW{FznsOxiSi~-&+<BkewLkuP!u<VO<6U6^7*&xtNr=XaoRiS?V{gtwTMl%9Za|L
za#^%_7k)SjXE85!!SM7bspGUQewUqo+Glx@ubWtPwRL-yMO)CL`L7O2fB*pk1PBly
zK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pkPg~&a(=JbS1PBlyK!5-N0!ISx
zkM7+PAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+
z009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBly
zK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF
z5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk
d1PBlyK!5-N0t5&UAV7cs0RjXF5cr=0{{Ra(Wheju
diff --git a/test/ovf_manifests/disk2.vmdk b/test/ovf_manifests/disk2.vmdk
deleted file mode 100644
index c4634513348b392202898374f1c8d2d51d565b27..0000000000000000000000000000000000000000
GIT binary patch
literal 0
HcmV?d00001
literal 65536
zcmeIvy>HV%7zbeUF`dK)46s<q9#IIDI%J@v2!xPOQ?;`jUy0Rx$u<$$`lr`+(j_}X
ztLLQio&7tX?|uAp{Oj^rk|Zyh{<7(9yX&q=(mrq7>)ntf&y(cMe*SJh-aTX?eH9+&
z#z!O2Psc@dn~q~OEsJ%%D!&!;7&fu2iq&#-6u$l#k8ZBB&%=0f64qH6mv#5(X4k^B
zj9DEow(B_REmq6byr^fzbkeM>VlRY#diJkw-bwTQ2bx{O`BgehC%?a(PtMX_-hBS!
zV6(_?yX6<NxIa-=XX$BH#n2y*PeaJ_>%pcd>%ZCj`_<*{eCa6d4SQYmC$1K;F1Lf}
zc3v#=CU3(J2jMJcc^4cVA0$<rHpO?@@uyvu<=MK9Wm{XjSCKabJ(~aOpacjIAV7cs
z0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAn>#W-ahT}R7ZdS0RjXF5Fl_M
z@c!W5Edc@q2oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk
z1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs
z0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZ
zfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&U
eAV7cs0RjXF5FkK+009C72oNAZfB=F2DR2*l=VfOA
diff --git a/test/run_ovf_tests.pl b/test/run_ovf_tests.pl
deleted file mode 100755
index ff6c7863..00000000
--- a/test/run_ovf_tests.pl
+++ /dev/null
@@ -1,71 +0,0 @@
-#!/usr/bin/perl
-
-use strict;
-use warnings;
-use lib qw(..); # prepend .. to @INC so we use the local version of PVE packages
-
-use FindBin '$Bin';
-use PVE::QemuServer::OVF;
-use Test::More;
-
-use Data::Dumper;
-
-my $test_manifests = join ('/', $Bin, 'ovf_manifests');
-
-print "parsing ovfs\n";
-
-my $win2008 = eval { PVE::QemuServer::OVF::parse_ovf("$test_manifests/Win_2008_R2_two-disks.ovf") };
-if (my $err = $@) {
- fail('parse win2008');
- warn("error: $err\n");
-} else {
- ok('parse win2008');
-}
-my $win10 = eval { PVE::QemuServer::OVF::parse_ovf("$test_manifests/Win10-Liz.ovf") };
-if (my $err = $@) {
- fail('parse win10');
- warn("error: $err\n");
-} else {
- ok('parse win10');
-}
-my $win10noNs = eval { PVE::QemuServer::OVF::parse_ovf("$test_manifests/Win10-Liz_no_default_ns.ovf") };
-if (my $err = $@) {
- fail("parse win10 no default rasd NS");
- warn("error: $err\n");
-} else {
- ok('parse win10 no default rasd NS');
-}
-
-print "testing disks\n";
-
-is($win2008->{disks}->[0]->{disk_address}, 'scsi0', 'multidisk vm has the correct first disk controller');
-is($win2008->{disks}->[0]->{backing_file}, "$test_manifests/disk1.vmdk", 'multidisk vm has the correct first disk backing device');
-is($win2008->{disks}->[0]->{virtual_size}, 2048, 'multidisk vm has the correct first disk size');
-
-is($win2008->{disks}->[1]->{disk_address}, 'scsi1', 'multidisk vm has the correct second disk controller');
-is($win2008->{disks}->[1]->{backing_file}, "$test_manifests/disk2.vmdk", 'multidisk vm has the correct second disk backing device');
-is($win2008->{disks}->[1]->{virtual_size}, 2048, 'multidisk vm has the correct second disk size');
-
-is($win10->{disks}->[0]->{disk_address}, 'scsi0', 'single disk vm has the correct disk controller');
-is($win10->{disks}->[0]->{backing_file}, "$test_manifests/Win10-Liz-disk1.vmdk", 'single disk vm has the correct disk backing device');
-is($win10->{disks}->[0]->{virtual_size}, 2048, 'single disk vm has the correct size');
-
-is($win10noNs->{disks}->[0]->{disk_address}, 'scsi0', 'single disk vm (no default rasd NS) has the correct disk controller');
-is($win10noNs->{disks}->[0]->{backing_file}, "$test_manifests/Win10-Liz-disk1.vmdk", 'single disk vm (no default rasd NS) has the correct disk backing device');
-is($win10noNs->{disks}->[0]->{virtual_size}, 2048, 'single disk vm (no default rasd NS) has the correct size');
-
-print "\ntesting vm.conf extraction\n";
-
-is($win2008->{qm}->{name}, 'Win2008-R2x64', 'win2008 VM name is correct');
-is($win2008->{qm}->{memory}, '2048', 'win2008 VM memory is correct');
-is($win2008->{qm}->{cores}, '1', 'win2008 VM cores are correct');
-
-is($win10->{qm}->{name}, 'Win10-Liz', 'win10 VM name is correct');
-is($win10->{qm}->{memory}, '6144', 'win10 VM memory is correct');
-is($win10->{qm}->{cores}, '4', 'win10 VM cores are correct');
-
-is($win10noNs->{qm}->{name}, 'Win10-Liz', 'win10 VM (no default rasd NS) name is correct');
-is($win10noNs->{qm}->{memory}, '6144', 'win10 VM (no default rasd NS) memory is correct');
-is($win10noNs->{qm}->{cores}, '4', 'win10 VM (no default rasd NS) cores are correct');
-
-done_testing();
--
2.39.2
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
* [pve-devel] [PATCH qemu-server v3 3/4] api: create: implement extracting disks when needed for import-from
2024-04-29 11:21 [pve-devel] [PATCH storage/qemu-server/manager v3] implement ova/ovf import for file based storages Dominik Csapak
` (11 preceding siblings ...)
2024-04-29 11:21 ` [pve-devel] [PATCH qemu-server v3 2/4] use OVF from Storage Dominik Csapak
@ 2024-04-29 11:21 ` Dominik Csapak
2024-05-22 12:55 ` Fabian Grünbichler
2024-04-29 11:21 ` [pve-devel] [PATCH qemu-server v3 4/4] api: create: add 'import-extraction-storage' parameter Dominik Csapak
` (10 subsequent siblings)
23 siblings, 1 reply; 38+ messages in thread
From: Dominik Csapak @ 2024-04-29 11:21 UTC (permalink / raw)
To: pve-devel
when 'import-from' contains a disk image that needs extraction
(currently only from an 'ova' archive), do that in 'create_disks'
and overwrite the '$source' volid.
Collect the extracted volids into a 'delete_sources' list, which we use
later to clean them up again (either when we're finished with importing
or in an error case).
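The resulting flow in 'create_disks' looks roughly like this (a
simplified sketch, not the verbatim patch; the helper names are the
ones introduced in the storage series):

```perl
# simplified sketch of the extraction flow in create_disks()
my $delete_sources = [];

if (PVE::Storage::parse_volume_id($source, 1)) { # PVE-managed volume
    if (PVE::GuestImport::copy_needs_extraction($source)) {
        # extract the disk image from the ova and switch $source
        # over to the volid of the extracted image
        $source = PVE::GuestImport::extract_disk_from_import_file($source, $vmid);
        push @$delete_sources, $source;
    }
}

# ... import or live-import the disk from $source ...

# both on success and in the error path, the temporary extracted
# images are removed again:
PVE::QemuServer::Helpers::cleanup_extracted_images($delete_sources);
```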
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
PVE/API2/Qemu.pm | 44 ++++++++++++++++++++++++++++++---------
PVE/QemuServer.pm | 5 ++++-
PVE/QemuServer/Helpers.pm | 10 +++++++++
3 files changed, 48 insertions(+), 11 deletions(-)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 2a349c8c..d32967dc 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -24,6 +24,7 @@ use PVE::JSONSchema qw(get_standard_option);
use PVE::RESTHandler;
use PVE::ReplicationConfig;
use PVE::GuestHelpers qw(assert_tag_permissions);
+use PVE::GuestImport;
use PVE::QemuConfig;
use PVE::QemuServer;
use PVE::QemuServer::Cloudinit;
@@ -159,10 +160,19 @@ my $check_storage_access = sub {
if (my $src_image = $drive->{'import-from'}) {
my $src_vmid;
- if (PVE::Storage::parse_volume_id($src_image, 1)) { # PVE-managed volume
- (my $vtype, undef, $src_vmid) = PVE::Storage::parse_volname($storecfg, $src_image);
- raise_param_exc({ $ds => "$src_image has wrong type '$vtype' - not an image" })
- if $vtype ne 'images';
+ if (my ($storeid, $volname) = PVE::Storage::parse_volume_id($src_image, 1)) { # PVE-managed volume
+ my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
+ my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
+ (my $vtype, undef, $src_vmid) = $plugin->parse_volname($volname);
+
+ raise_param_exc({ $ds => "$src_image has wrong type '$vtype' - needs to be 'images' or 'import'" })
+ if $vtype ne 'images' && $vtype ne 'import';
+
+ if (PVE::GuestImport::copy_needs_extraction($src_image)) {
+ raise_param_exc({ $ds => "$src_image is not on a storage with 'images' content type."})
+ if !$scfg->{content}->{images};
+ $rpcenv->check($authuser, "/storage/$storeid", ['Datastore.AllocateSpace']);
+ }
}
if ($src_vmid) { # might be actively used by VM and will be copied via clone_disk()
@@ -335,6 +345,7 @@ my sub create_disks : prototype($$$$$$$$$$) {
my $res = {};
my $live_import_mapping = {};
+ my $delete_sources = [];
my $code = sub {
my ($ds, $disk) = @_;
@@ -392,6 +403,12 @@ my sub create_disks : prototype($$$$$$$$$$) {
$needs_creation = $live_import;
if (PVE::Storage::parse_volume_id($source, 1)) { # PVE-managed volume
+ if (PVE::GuestImport::copy_needs_extraction($source)) { # needs extraction beforehand
+ print "extracting $source\n";
+ $source = PVE::GuestImport::extract_disk_from_import_file($source, $vmid);
+ print "finished extracting to $source\n";
+ push @$delete_sources, $source;
+ }
if ($live_import && $ds ne 'efidisk0') {
my $path = PVE::Storage::path($storecfg, $source)
or die "failed to get a path for '$source'\n";
@@ -514,13 +531,14 @@ my sub create_disks : prototype($$$$$$$$$$) {
eval { PVE::Storage::vdisk_free($storecfg, $volid); };
warn $@ if $@;
}
+ PVE::QemuServer::Helpers::cleanup_extracted_images($delete_sources);
die $err;
}
# don't return empty import mappings
$live_import_mapping = undef if !%$live_import_mapping;
- return ($vollist, $res, $live_import_mapping);
+ return ($vollist, $res, $live_import_mapping, $delete_sources);
};
my $check_cpu_model_access = sub {
@@ -1079,6 +1097,7 @@ __PACKAGE__->register_method({
my $createfn = sub {
my $live_import_mapping = {};
+ my $delete_sources = [];
# ensure no old replication state are exists
PVE::ReplicationState::delete_guest_states($vmid);
@@ -1096,7 +1115,7 @@ __PACKAGE__->register_method({
my $vollist = [];
eval {
- ($vollist, my $created_opts, $live_import_mapping) = create_disks(
+ ($vollist, my $created_opts, $live_import_mapping, $delete_sources) = create_disks(
$rpcenv,
$authuser,
$conf,
@@ -1149,6 +1168,7 @@ __PACKAGE__->register_method({
eval { PVE::Storage::vdisk_free($storecfg, $volid); };
warn $@ if $@;
}
+ PVE::QemuServer::Helpers::cleanup_extracted_images($delete_sources);
die "$emsg $err";
}
@@ -1165,7 +1185,7 @@ __PACKAGE__->register_method({
warn $@ if $@;
return;
} else {
- return $live_import_mapping;
+ return ($live_import_mapping, $delete_sources);
}
};
@@ -1192,7 +1212,7 @@ __PACKAGE__->register_method({
$code = sub {
# If a live import was requested the create function returns
# the mapping for the startup.
- my $live_import_mapping = eval { $createfn->() };
+ my ($live_import_mapping, $delete_sources) = eval { $createfn->() };
if (my $err = $@) {
eval {
my $conffile = PVE::QemuConfig->config_file($vmid);
@@ -1214,7 +1234,10 @@ __PACKAGE__->register_method({
$vmid,
$conf,
$import_options,
+ $delete_sources,
);
+ } else {
+ PVE::QemuServer::Helpers::cleanup_extracted_images($delete_sources);
}
};
}
@@ -1939,8 +1962,7 @@ my $update_vm_api = sub {
assert_scsi_feature_compatibility($opt, $conf, $storecfg, $param->{$opt})
if $opt =~ m/^scsi\d+$/;
-
- my (undef, $created_opts) = create_disks(
+ my (undef, $created_opts, undef, $delete_sources) = create_disks(
$rpcenv,
$authuser,
$conf,
@@ -1954,6 +1976,8 @@ my $update_vm_api = sub {
);
$conf->{pending}->{$_} = $created_opts->{$_} for keys $created_opts->%*;
+ PVE::QemuServer::Helpers::cleanup_extracted_images($delete_sources);
+
# default legacy boot order implies all cdroms anyway
if (@bootorder) {
# append new CD drives to bootorder to mark them bootable
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 82e7d6a6..4bd0ae85 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -7303,7 +7303,7 @@ sub pbs_live_restore {
# therefore already handled in the `$create_disks()` call happening in the
# `create` api call
sub live_import_from_files {
- my ($mapping, $vmid, $conf, $restore_options) = @_;
+ my ($mapping, $vmid, $conf, $restore_options, $delete_sources) = @_;
my $live_restore_backing = {};
for my $dev (keys %$mapping) {
@@ -7364,6 +7364,8 @@ sub live_import_from_files {
mon_cmd($vmid, 'blockdev-del', 'node-name' => "drive-$ds-restore");
}
+ PVE::QemuServer::Helpers::cleanup_extracted_images($delete_sources);
+
close($qmeventd_fd);
};
@@ -7372,6 +7374,7 @@ sub live_import_from_files {
if ($err) {
warn "An error occurred during live-restore: $err\n";
_do_vm_stop($storecfg, $vmid, 1, 1, 10, 0, 1);
+ PVE::QemuServer::Helpers::cleanup_extracted_images($delete_sources);
die "live-restore failed\n";
}
diff --git a/PVE/QemuServer/Helpers.pm b/PVE/QemuServer/Helpers.pm
index 0afb6317..f6bec1d4 100644
--- a/PVE/QemuServer/Helpers.pm
+++ b/PVE/QemuServer/Helpers.pm
@@ -6,6 +6,7 @@ use warnings;
use File::stat;
use JSON;
+use PVE::GuestImport;
use PVE::INotify;
use PVE::ProcFSTools;
@@ -225,4 +226,13 @@ sub windows_version {
return $winversion;
}
+sub cleanup_extracted_images {
+ my ($delete_sources) = @_;
+
+ for my $source (@$delete_sources) {
+ eval { PVE::GuestImport::cleanup_extracted_image($source) };
+ warn $@ if $@;
+ }
+}
+
1;
--
2.39.2
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 38+ messages in thread
* [pve-devel] [PATCH qemu-server v3 4/4] api: create: add 'import-extraction-storage' parameter
2024-04-29 11:21 [pve-devel] [PATCH storage/qemu-server/manager v3] implement ova/ovf import for file based storages Dominik Csapak
` (12 preceding siblings ...)
2024-04-29 11:21 ` [pve-devel] [PATCH qemu-server v3 3/4] api: create: implement extracting disks when needed for import-from Dominik Csapak
@ 2024-04-29 11:21 ` Dominik Csapak
2024-05-22 12:16 ` Fabian Grünbichler
2024-04-29 11:21 ` [pve-devel] [PATCH manager v3 1/9] ui: fix special 'import' icon for non-esxi storages Dominik Csapak
` (9 subsequent siblings)
23 siblings, 1 reply; 38+ messages in thread
From: Dominik Csapak @ 2024-04-29 11:21 UTC (permalink / raw)
To: pve-devel
this overrides the target storage for the optional disk extraction done
for 'import-from'. This way, if the source storage does not support the
content type 'images', one can specify an alternative one.
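A minimal sketch of the fallback behavior this parameter enables (the function name is illustrative, not an actual helper from the patch):

```javascript
// Illustrative only: when no 'import-extraction-storage' is given,
// the extraction target falls back to the storage holding the import
// source; an explicit parameter overrides that.
function resolveExtractionStorage(sourceStorage, extractionStorage) {
    return extractionStorage ?? sourceStorage;
}

console.log(resolveExtractionStorage('nfs-import', undefined)); // nfs-import
console.log(resolveExtractionStorage('nfs-import', 'local')); // local
```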
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
PVE/API2/Qemu.pm | 56 +++++++++++++++++++++++++++++++++++++++++-------
1 file changed, 48 insertions(+), 8 deletions(-)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index d32967dc..74d0e240 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -128,7 +128,9 @@ my $check_drive_param = sub {
};
my $check_storage_access = sub {
- my ($rpcenv, $authuser, $storecfg, $vmid, $settings, $default_storage) = @_;
+ my ($rpcenv, $authuser, $storecfg, $vmid, $settings, $default_storage, $extraction_storage) = @_;
+
+ my $needs_extraction = 0;
$foreach_volume_with_alloc->($settings, sub {
my ($ds, $drive) = @_;
@@ -169,9 +171,13 @@ my $check_storage_access = sub {
if $vtype ne 'images' && $vtype ne 'import';
if (PVE::GuestImport::copy_needs_extraction($src_image)) {
- raise_param_exc({ $ds => "$src_image is not on an storage with 'images' content type."})
- if !$scfg->{content}->{images};
- $rpcenv->check($authuser, "/storage/$storeid", ['Datastore.AllocateSpace']);
+ $needs_extraction = 1;
+ if (!defined($extraction_storage)) {
+ raise_param_exc({ $ds => "$src_image is not on an storage with 'images'"
+ ." content type and no 'import-extraction-storage' was given."})
+ if !$scfg->{content}->{images};
+ $rpcenv->check($authuser, "/storage/$storeid", ['Datastore.AllocateSpace']);
+ }
}
}
@@ -183,6 +189,14 @@ my $check_storage_access = sub {
}
});
+ if ($needs_extraction && defined($extraction_storage)) {
+ my $scfg = PVE::Storage::storage_config($storecfg, $extraction_storage);
+ raise_param_exc({ 'import-extraction-storage' => "$extraction_storage does not support"
+ ." 'images' content type or is not file based."})
+ if !$scfg->{content}->{images} || !$scfg->{path};
+ $rpcenv->check($authuser, "/storage/$extraction_storage", ['Datastore.AllocateSpace']);
+ }
+
$rpcenv->check($authuser, "/storage/$settings->{vmstatestorage}", ['Datastore.AllocateSpace'])
if defined($settings->{vmstatestorage});
};
@@ -326,7 +340,7 @@ my $import_from_volid = sub {
# Note: $pool is only needed when creating a VM, because pool permissions
# are automatically inherited if VM already exists inside a pool.
-my sub create_disks : prototype($$$$$$$$$$) {
+my sub create_disks : prototype($$$$$$$$$$$) {
my (
$rpcenv,
$authuser,
@@ -338,6 +352,7 @@ my sub create_disks : prototype($$$$$$$$$$) {
$settings,
$default_storage,
$is_live_import,
+ $extraction_storage,
) = @_;
my $vollist = [];
@@ -405,7 +420,8 @@ my sub create_disks : prototype($$$$$$$$$$) {
if (PVE::Storage::parse_volume_id($source, 1)) { # PVE-managed volume
if (PVE::GuestImport::copy_needs_extraction($source)) { # needs extraction beforehand
print "extracting $source\n";
- $source = PVE::GuestImport::extract_disk_from_import_file($source, $vmid);
+ $source = PVE::GuestImport::extract_disk_from_import_file(
+ $source, $vmid, $extraction_storage);
print "finished extracting to $source\n";
push @$delete_sources, $source;
}
@@ -925,6 +941,12 @@ __PACKAGE__->register_method({
default => 0,
description => "Start VM after it was created successfully.",
},
+ 'import-extraction-storage' => get_standard_option('pve-storage-id', {
+ description => "Storage to put extracted images when using 'import-from' that"
+ ." needs extraction",
+ optional => 1,
+ completion => \&PVE::QemuServer::complete_storage,
+ }),
},
1, # with_disk_alloc
),
@@ -951,6 +973,7 @@ __PACKAGE__->register_method({
my $storage = extract_param($param, 'storage');
my $unique = extract_param($param, 'unique');
my $live_restore = extract_param($param, 'live-restore');
+ my $extraction_storage = extract_param($param, 'import-extraction-storage');
if (defined(my $ssh_keys = $param->{sshkeys})) {
$ssh_keys = URI::Escape::uri_unescape($ssh_keys);
@@ -1010,7 +1033,8 @@ __PACKAGE__->register_method({
if (scalar(keys $param->%*) > 0) {
&$resolve_cdrom_alias($param);
- &$check_storage_access($rpcenv, $authuser, $storecfg, $vmid, $param, $storage);
+ &$check_storage_access(
+ $rpcenv, $authuser, $storecfg, $vmid, $param, $storage, $extraction_storage);
&$check_vm_modify_config_perm($rpcenv, $authuser, $vmid, $pool, [ keys %$param]);
@@ -1126,6 +1150,7 @@ __PACKAGE__->register_method({
$param,
$storage,
$live_restore,
+ $extraction_storage
);
$conf->{$_} = $created_opts->{$_} for keys $created_opts->%*;
@@ -1672,6 +1697,8 @@ my $update_vm_api = sub {
my $skip_cloud_init = extract_param($param, 'skip_cloud_init');
+ my $extraction_storage = extract_param($param, 'import-extraction-storage');
+
if (defined(my $cipassword = $param->{cipassword})) {
# Same logic as in cloud-init (but with the regex fixed...)
$param->{cipassword} = PVE::Tools::encrypt_pw($cipassword)
@@ -1791,7 +1818,7 @@ my $update_vm_api = sub {
&$check_vm_modify_config_perm($rpcenv, $authuser, $vmid, undef, [keys %$param]);
- &$check_storage_access($rpcenv, $authuser, $storecfg, $vmid, $param);
+ &$check_storage_access($rpcenv, $authuser, $storecfg, $vmid, $param, $extraction_storage);
PVE::QemuServer::check_bridge_access($rpcenv, $authuser, $param);
@@ -1973,6 +2000,7 @@ my $update_vm_api = sub {
{$opt => $param->{$opt}},
undef,
undef,
+ $extraction_storage,
);
$conf->{pending}->{$_} = $created_opts->{$_} for keys $created_opts->%*;
@@ -2170,6 +2198,12 @@ __PACKAGE__->register_method({
maximum => 30,
optional => 1,
},
+ 'import-extraction-storage' => get_standard_option('pve-storage-id', {
+ description => "Storage to put extracted images when using 'import-from' that"
+ ." needs extraction",
+ optional => 1,
+ completion => \&PVE::QemuServer::complete_storage,
+ }),
},
1, # with_disk_alloc
),
@@ -2220,6 +2254,12 @@ __PACKAGE__->register_method({
maxLength => 40,
optional => 1,
},
+ 'import-extraction-storage' => get_standard_option('pve-storage-id', {
+ description => "Storage to put extracted images when using 'import-from' that"
+ ." needs extraction",
+ optional => 1,
+ completion => \&PVE::QemuServer::complete_storage,
+ }),
},
1, # with_disk_alloc
),
--
2.39.2
* [pve-devel] [PATCH manager v3 1/9] ui: fix special 'import' icon for non-esxi storages
2024-04-29 11:21 [pve-devel] [PATCH storage/qemu-server/manager v3] implement ova/ovf import for file based storages Dominik Csapak
` (13 preceding siblings ...)
2024-04-29 11:21 ` [pve-devel] [PATCH qemu-server v3 4/4] api: create: add 'import-extraction-storage' parameter Dominik Csapak
@ 2024-04-29 11:21 ` Dominik Csapak
2024-04-29 11:21 ` [pve-devel] [PATCH manager v3 2/9] ui: guest import: add ova-needs-extracting warning text Dominik Csapak
` (8 subsequent siblings)
23 siblings, 0 replies; 38+ messages in thread
From: Dominik Csapak @ 2024-04-29 11:21 UTC (permalink / raw)
To: pve-devel
we only want to show that icon in the tree when the storage is solely
used for importing, not when it's just one of several content types.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
www/manager6/Utils.js | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index f5608944..1310b04d 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -1244,7 +1244,7 @@ Ext.define('PVE.Utils', {
// templates
objType = 'template';
status = type;
- } else if (type === 'storage' && record.content.indexOf('import') !== -1) {
+ } else if (type === 'storage' && record.content === 'import') {
return 'fa fa-cloud-download';
} else {
// everything else
--
2.39.2
* [pve-devel] [PATCH manager v3 2/9] ui: guest import: add ova-needs-extracting warning text
2024-04-29 11:21 [pve-devel] [PATCH storage/qemu-server/manager v3] implement ova/ovf import for file based storages Dominik Csapak
` (14 preceding siblings ...)
2024-04-29 11:21 ` [pve-devel] [PATCH manager v3 1/9] ui: fix special 'import' icon for non-esxi storages Dominik Csapak
@ 2024-04-29 11:21 ` Dominik Csapak
2024-04-29 11:21 ` [pve-devel] [PATCH manager v3 3/9] ui: enable import content type for relevant storages Dominik Csapak
` (7 subsequent siblings)
23 siblings, 0 replies; 38+ messages in thread
From: Dominik Csapak @ 2024-04-29 11:21 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
www/manager6/window/GuestImport.js | 1 +
1 file changed, 1 insertion(+)
diff --git a/www/manager6/window/GuestImport.js b/www/manager6/window/GuestImport.js
index 4bedc211..76ba6dc8 100644
--- a/www/manager6/window/GuestImport.js
+++ b/www/manager6/window/GuestImport.js
@@ -937,6 +937,7 @@ Ext.define('PVE.window.GuestImport', {
gettext('EFI state cannot be imported, you may need to reconfigure the boot order (see {0})'),
'<a href="https://pve.proxmox.com/wiki/OVMF/UEFI_Boot_Entries">OVMF/UEFI Boot Entries</a>',
),
+ 'ova-needs-extracting': gettext('Importing from an OVA requires extra space while extracting the contained disks into the import or selected storage.'),
};
let message = warningsCatalogue[w.type];
if (!w.type || !message) {
--
2.39.2
* [pve-devel] [PATCH manager v3 3/9] ui: enable import content type for relevant storages
2024-04-29 11:21 [pve-devel] [PATCH storage/qemu-server/manager v3] implement ova/ovf import for file based storages Dominik Csapak
` (15 preceding siblings ...)
2024-04-29 11:21 ` [pve-devel] [PATCH manager v3 2/9] ui: guest import: add ova-needs-extracting warning text Dominik Csapak
@ 2024-04-29 11:21 ` Dominik Csapak
2024-04-29 11:21 ` [pve-devel] [PATCH manager v3 4/9] ui: enable upload/download/remove buttons for 'import' type storages Dominik Csapak
` (6 subsequent siblings)
23 siblings, 0 replies; 38+ messages in thread
From: Dominik Csapak @ 2024-04-29 11:21 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
www/manager6/Utils.js | 1 +
www/manager6/form/ContentTypeSelector.js | 2 +-
www/manager6/storage/CephFSEdit.js | 2 +-
www/manager6/storage/GlusterFsEdit.js | 2 +-
4 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index 1310b04d..ff2fae25 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -690,6 +690,7 @@ Ext.define('PVE.Utils', {
'iso': gettext('ISO image'),
'rootdir': gettext('Container'),
'snippets': gettext('Snippets'),
+ 'import': gettext('Import'),
},
volume_is_qemu_backup: function(volid, format) {
diff --git a/www/manager6/form/ContentTypeSelector.js b/www/manager6/form/ContentTypeSelector.js
index d0fa0b08..431bd948 100644
--- a/www/manager6/form/ContentTypeSelector.js
+++ b/www/manager6/form/ContentTypeSelector.js
@@ -10,7 +10,7 @@ Ext.define('PVE.form.ContentTypeSelector', {
me.comboItems = [];
if (me.cts === undefined) {
- me.cts = ['images', 'iso', 'vztmpl', 'backup', 'rootdir', 'snippets'];
+ me.cts = ['images', 'iso', 'vztmpl', 'backup', 'rootdir', 'snippets', 'import'];
}
Ext.Array.each(me.cts, function(ct) {
diff --git a/www/manager6/storage/CephFSEdit.js b/www/manager6/storage/CephFSEdit.js
index 6a95a00a..2cdcf7cd 100644
--- a/www/manager6/storage/CephFSEdit.js
+++ b/www/manager6/storage/CephFSEdit.js
@@ -92,7 +92,7 @@ Ext.define('PVE.storage.CephFSInputPanel', {
me.column2 = [
{
xtype: 'pveContentTypeSelector',
- cts: ['backup', 'iso', 'vztmpl', 'snippets'],
+ cts: ['backup', 'iso', 'vztmpl', 'snippets', 'import'],
fieldLabel: gettext('Content'),
name: 'content',
value: 'backup',
diff --git a/www/manager6/storage/GlusterFsEdit.js b/www/manager6/storage/GlusterFsEdit.js
index 8155d9c2..df7fe23f 100644
--- a/www/manager6/storage/GlusterFsEdit.js
+++ b/www/manager6/storage/GlusterFsEdit.js
@@ -99,7 +99,7 @@ Ext.define('PVE.storage.GlusterFsInputPanel', {
},
{
xtype: 'pveContentTypeSelector',
- cts: ['images', 'iso', 'backup', 'vztmpl', 'snippets'],
+ cts: ['images', 'iso', 'backup', 'vztmpl', 'snippets', 'import'],
name: 'content',
value: 'images',
multiSelect: true,
--
2.39.2
* [pve-devel] [PATCH manager v3 4/9] ui: enable upload/download/remove buttons for 'import' type storages
2024-04-29 11:21 [pve-devel] [PATCH storage/qemu-server/manager v3] implement ova/ovf import for file based storages Dominik Csapak
` (16 preceding siblings ...)
2024-04-29 11:21 ` [pve-devel] [PATCH manager v3 3/9] ui: enable import content type for relevant storages Dominik Csapak
@ 2024-04-29 11:21 ` Dominik Csapak
2024-04-29 11:21 ` [pve-devel] [PATCH manager v3 5/9] ui: disable 'import' button for non importable formats Dominik Csapak
` (5 subsequent siblings)
23 siblings, 0 replies; 38+ messages in thread
From: Dominik Csapak @ 2024-04-29 11:21 UTC (permalink / raw)
To: pve-devel
but only for non-esxi ones, since esxi storages do not allow
uploading/downloading.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
www/manager6/storage/Browser.js | 9 +++++++--
www/manager6/window/UploadToStorage.js | 1 +
2 files changed, 8 insertions(+), 2 deletions(-)
diff --git a/www/manager6/storage/Browser.js b/www/manager6/storage/Browser.js
index 2123141d..934ce706 100644
--- a/www/manager6/storage/Browser.js
+++ b/www/manager6/storage/Browser.js
@@ -28,7 +28,9 @@ Ext.define('PVE.storage.Browser', {
let res = storageInfo.data;
let plugin = res.plugintype;
- me.items = plugin !== 'esxi' ? [
+ let isEsxi = plugin === 'esxi';
+
+ me.items = !isEsxi ? [
{
title: gettext('Summary'),
xtype: 'pveStorageSummary',
@@ -142,8 +144,11 @@ Ext.define('PVE.storage.Browser', {
iconCls: 'fa fa-desktop',
itemId: 'contentImport',
content: 'import',
- useCustomRemoveButton: true, // hide default remove button
+ useCustomRemoveButton: isEsxi, // hide default remove button for esxi
showColumns: ['name', 'format'],
+ enableUploadButton: enableUpload && !isEsxi,
+ enableDownloadUrlButton: enableDownloadUrl && !isEsxi,
+ useUploadButton: !isEsxi,
itemdblclick: (view, record) => createGuestImportWindow(record),
tbar: [
{
diff --git a/www/manager6/window/UploadToStorage.js b/www/manager6/window/UploadToStorage.js
index 3c5bba88..cdf548a8 100644
--- a/www/manager6/window/UploadToStorage.js
+++ b/www/manager6/window/UploadToStorage.js
@@ -9,6 +9,7 @@ Ext.define('PVE.window.UploadToStorage', {
title: gettext('Upload'),
acceptedExtensions: {
+ 'import': ['.ova'],
iso: ['.img', '.iso'],
vztmpl: ['.tar.gz', '.tar.xz', '.tar.zst'],
},
--
2.39.2
* [pve-devel] [PATCH manager v3 5/9] ui: disable 'import' button for non importable formats
2024-04-29 11:21 [pve-devel] [PATCH storage/qemu-server/manager v3] implement ova/ovf import for file based storages Dominik Csapak
` (17 preceding siblings ...)
2024-04-29 11:21 ` [pve-devel] [PATCH manager v3 4/9] ui: enable upload/download/remove buttons for 'import' type storages Dominik Csapak
@ 2024-04-29 11:21 ` Dominik Csapak
2024-04-29 11:21 ` [pve-devel] [PATCH manager v3 6/9] ui: import: improve rendering of volume names Dominik Csapak
` (4 subsequent siblings)
23 siblings, 0 replies; 38+ messages in thread
From: Dominik Csapak @ 2024-04-29 11:21 UTC (permalink / raw)
To: pve-devel
importable formats are currently ova/ovf/vmx
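As a standalone sketch, the format check mirrors the `enableFn` added below:

```javascript
// Mirrors the importability check from the patch: only these formats
// can be fed to the guest import wizard.
const importableFormats = ['ova', 'ovf', 'vmx'];
const isImportable = format => importableFormats.indexOf(format) !== -1;

console.log(isImportable('ova')); // true
console.log(isImportable('qcow2')); // false
```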
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
www/manager6/storage/Browser.js | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/www/manager6/storage/Browser.js b/www/manager6/storage/Browser.js
index 934ce706..822257e7 100644
--- a/www/manager6/storage/Browser.js
+++ b/www/manager6/storage/Browser.js
@@ -124,6 +124,7 @@ Ext.define('PVE.storage.Browser', {
});
}
if (contents.includes('import')) {
+ let isImportable = format => ['ova', 'ovf', 'vmx'].indexOf(format) !== -1;
let createGuestImportWindow = (selection) => {
if (!selection) {
return;
@@ -149,13 +150,18 @@ Ext.define('PVE.storage.Browser', {
enableUploadButton: enableUpload && !isEsxi,
enableDownloadUrlButton: enableDownloadUrl && !isEsxi,
useUploadButton: !isEsxi,
- itemdblclick: (view, record) => createGuestImportWindow(record),
+ itemdblclick: (view, record) => {
+ if (isImportable(record.data.format)) {
+ createGuestImportWindow(record);
+ }
+ },
tbar: [
{
xtype: 'proxmoxButton',
disabled: true,
text: gettext('Import'),
iconCls: 'fa fa-cloud-download',
+ enableFn: rec => isImportable(rec.data.format),
handler: function() {
let grid = this.up('pveStorageContentView');
let selection = grid.getSelection()?.[0];
--
2.39.2
* [pve-devel] [PATCH manager v3 6/9] ui: import: improve rendering of volume names
2024-04-29 11:21 [pve-devel] [PATCH storage/qemu-server/manager v3] implement ova/ovf import for file based storages Dominik Csapak
` (18 preceding siblings ...)
2024-04-29 11:21 ` [pve-devel] [PATCH manager v3 5/9] ui: disable 'import' button for non importable formats Dominik Csapak
@ 2024-04-29 11:21 ` Dominik Csapak
2024-04-29 11:21 ` [pve-devel] [PATCH manager v3 7/9] ui: guest import: add storage selector for ova extraction storage Dominik Csapak
` (3 subsequent siblings)
23 siblings, 0 replies; 38+ messages in thread
From: Dominik Csapak @ 2024-04-29 11:21 UTC (permalink / raw)
To: pve-devel
for directory-based storages, we don't need to show the 'import/' part
of the volume names, as it is implied there
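The rendering rule can be sketched as a small helper (the function name is illustrative; the regexes are the ones used in the diff below):

```javascript
// Sketch of the volume name rendering for 'import' content:
function renderImportVolid(volid) {
    if (/^.*?:import\//.test(volid)) {
        // dir-based storage: strip the storage id and the implied 'import/' dir
        return volid.replace(/^.*?:import\//, '');
    }
    // esxi storage: strip only the storage id
    return volid.replace(/^.*?:/, '');
}

console.log(renderImportVolid('local:import/appliance.ova')); // appliance.ova
console.log(renderImportVolid('esxi:ha-datacenter/vm1/vm1.vmx')); // ha-datacenter/vm1/vm1.vmx
```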
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
www/manager6/Utils.js | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index ff2fae25..ea6e30e8 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -1024,7 +1024,13 @@ Ext.define('PVE.Utils', {
Ext.String.leftPad(data.channel, 2, '0') +
" ID " + data.id + " LUN " + data.lun;
} else if (data.content === 'import') {
- result = data.volid.replace(/^.*?:/, '');
+ if (data.volid.match(/^.*?:import\//)) {
+ // dir-based storages
+ result = data.volid.replace(/^.*?:import\//, '');
+ } else {
+ // esxi storage
+ result = data.volid.replace(/^.*?:/, '');
+ }
} else {
result = data.volid.replace(/^.*?:(.*?\/)?/, '');
}
--
2.39.2
* [pve-devel] [PATCH manager v3 7/9] ui: guest import: add storage selector for ova extraction storage
2024-04-29 11:21 [pve-devel] [PATCH storage/qemu-server/manager v3] implement ova/ovf import for file based storages Dominik Csapak
` (19 preceding siblings ...)
2024-04-29 11:21 ` [pve-devel] [PATCH manager v3 6/9] ui: import: improve rendering of volume names Dominik Csapak
@ 2024-04-29 11:21 ` Dominik Csapak
2024-04-29 11:21 ` [pve-devel] [PATCH manager v3 8/9] ui: guest import: change icon/text for non-esxi import storage Dominik Csapak
` (2 subsequent siblings)
23 siblings, 0 replies; 38+ messages in thread
From: Dominik Csapak @ 2024-04-29 11:21 UTC (permalink / raw)
To: pve-devel
but only when we detect the 'ova-needs-extracting' warning.
This can be used to select the storage where the disks contained in an
OVA will be temporarily extracted.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
www/manager6/window/GuestImport.js | 23 +++++++++++++++++++++++
1 file changed, 23 insertions(+)
diff --git a/www/manager6/window/GuestImport.js b/www/manager6/window/GuestImport.js
index 76ba6dc8..972f715b 100644
--- a/www/manager6/window/GuestImport.js
+++ b/www/manager6/window/GuestImport.js
@@ -303,6 +303,7 @@ Ext.define('PVE.window.GuestImport', {
os: 'l26',
maxCdDrives: false,
uniqueMACAdresses: false,
+ isOva: false,
warnings: [],
},
@@ -432,6 +433,10 @@ Ext.define('PVE.window.GuestImport', {
}
}
+ if (config['import-extraction-storage'] === '') {
+ delete config['import-extraction-storage'];
+ }
+
return config;
},
@@ -553,6 +558,22 @@ Ext.define('PVE.window.GuestImport', {
allowBlank: false,
fieldLabel: gettext('Default Bridge'),
},
+ {
+ xtype: 'pveStorageSelector',
+ reference: 'extractionStorage',
+ fieldLabel: gettext('Extraction Storage'),
+ storageContent: 'images',
+ emptyText: gettext('Import Storage'),
+ autoSelect: false,
+ name: 'import-extraction-storage',
+ disabled: true,
+ hidden: true,
+ allowBlank: true,
+ bind: {
+ disabled: '{!isOva}',
+ hidden: '{!isOva}',
+ },
+ },
],
columnB: [
@@ -925,6 +946,7 @@ Ext.define('PVE.window.GuestImport', {
me.lookup('defaultStorage').setNodename(me.nodename);
me.lookup('defaultBridge').setNodename(me.nodename);
+ me.lookup('extractionStorage').setNodename(me.nodename);
let renderWarning = w => {
const warningsCatalogue = {
@@ -1006,6 +1028,7 @@ Ext.define('PVE.window.GuestImport', {
}
me.getViewModel().set('warnings', data.warnings.map(w => renderWarning(w)));
+ me.getViewModel().set('isOva', data.warnings.map(w => w.type).indexOf('ova-needs-extracting') !== -1);
let osinfo = PVE.Utils.get_kvm_osinfo(me.vmConfig.ostype ?? '');
let prepareForVirtIO = (me.vmConfig.ostype ?? '').startsWith('w') && (me.vmConfig.bios ?? '').indexOf('ovmf') !== -1;
--
2.39.2
* [pve-devel] [PATCH manager v3 8/9] ui: guest import: change icon/text for non-esxi import storage
2024-04-29 11:21 [pve-devel] [PATCH storage/qemu-server/manager v3] implement ova/ovf import for file based storages Dominik Csapak
` (20 preceding siblings ...)
2024-04-29 11:21 ` [pve-devel] [PATCH manager v3 7/9] ui: guest import: add storage selector for ova extraction storage Dominik Csapak
@ 2024-04-29 11:21 ` Dominik Csapak
2024-04-29 11:21 ` [pve-devel] [PATCH manager v3 9/9] ui: import: show size for dir-based storages Dominik Csapak
2024-05-24 13:38 ` [pve-devel] [PATCH storage/qemu-server/manager v3] implement ova/ovf import for file based storages Dominik Csapak
23 siblings, 0 replies; 38+ messages in thread
From: Dominik Csapak @ 2024-04-29 11:21 UTC (permalink / raw)
To: pve-devel
since 'virtual guests' only makes sense for a hypervisor, not e.g. a
directory containing OVAs.
also change the icon from 'desktop' to 'cloud-download' in the
non-esxi case
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
www/manager6/storage/Browser.js | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/www/manager6/storage/Browser.js b/www/manager6/storage/Browser.js
index 822257e7..763abc70 100644
--- a/www/manager6/storage/Browser.js
+++ b/www/manager6/storage/Browser.js
@@ -141,8 +141,10 @@ Ext.define('PVE.storage.Browser', {
};
me.items.push({
xtype: 'pveStorageContentView',
- title: gettext('Virtual Guests'),
- iconCls: 'fa fa-desktop',
+ // each gettext needs to be in a separate line
+ title: isEsxi ? gettext('Virtual Guests')
+ : gettext('Import'),
+ iconCls: isEsxi ? 'fa fa-desktop' : 'fa fa-cloud-download',
itemId: 'contentImport',
content: 'import',
useCustomRemoveButton: isEsxi, // hide default remove button for esxi
--
2.39.2
* [pve-devel] [PATCH manager v3 9/9] ui: import: show size for dir-based storages
2024-04-29 11:21 [pve-devel] [PATCH storage/qemu-server/manager v3] implement ova/ovf import for file based storages Dominik Csapak
` (21 preceding siblings ...)
2024-04-29 11:21 ` [pve-devel] [PATCH manager v3 8/9] ui: guest import: change icon/text for non-esxi import storage Dominik Csapak
@ 2024-04-29 11:21 ` Dominik Csapak
2024-05-24 13:38 ` [pve-devel] [PATCH storage/qemu-server/manager v3] implement ova/ovf import for file based storages Dominik Csapak
23 siblings, 0 replies; 38+ messages in thread
From: Dominik Csapak @ 2024-04-29 11:21 UTC (permalink / raw)
To: pve-devel
since there we already have the size information
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
www/manager6/storage/Browser.js | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/www/manager6/storage/Browser.js b/www/manager6/storage/Browser.js
index 763abc70..c0b66acc 100644
--- a/www/manager6/storage/Browser.js
+++ b/www/manager6/storage/Browser.js
@@ -148,7 +148,7 @@ Ext.define('PVE.storage.Browser', {
itemId: 'contentImport',
content: 'import',
useCustomRemoveButton: isEsxi, // hide default remove button for esxi
- showColumns: ['name', 'format'],
+ showColumns: isEsxi ? ['name', 'format'] : ['name', 'size', 'format'],
enableUploadButton: enableUpload && !isEsxi,
enableDownloadUrlButton: enableDownloadUrl && !isEsxi,
useUploadButton: !isEsxi,
--
2.39.2
* Re: [pve-devel] [PATCH storage v3 01/10] copy OVF.pm from qemu-server
2024-04-29 11:21 ` [pve-devel] [PATCH storage v3 01/10] copy OVF.pm from qemu-server Dominik Csapak
@ 2024-05-22 8:56 ` Fabian Grünbichler
2024-05-22 9:35 ` Fabian Grünbichler
1 sibling, 0 replies; 38+ messages in thread
From: Fabian Grünbichler @ 2024-05-22 8:56 UTC (permalink / raw)
To: Proxmox VE development discussion
On April 29, 2024 1:21 pm, Dominik Csapak wrote:
> copies the OVF.pm and relevant ovf tests from qemu-server.
> We need it here, and it uses PVE::Storage already, and since there is no
> intermediary package/repository we could put it, it seems fitting in
> here.
>
> Put it in a new GuestImport module
>
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
> src/PVE/GuestImport/Makefile | 3 +
> src/PVE/GuestImport/OVF.pm | 242 ++++++++++++++++++
> src/PVE/Makefile | 1 +
> src/PVE/Storage/Makefile | 1 +
> src/test/Makefile | 5 +-
> src/test/ovf_manifests/Win10-Liz-disk1.vmdk | Bin 0 -> 65536 bytes
> src/test/ovf_manifests/Win10-Liz.ovf | 142 ++++++++++
> .../ovf_manifests/Win10-Liz_no_default_ns.ovf | 142 ++++++++++
> .../ovf_manifests/Win_2008_R2_two-disks.ovf | 145 +++++++++++
> src/test/ovf_manifests/disk1.vmdk | Bin 0 -> 65536 bytes
> src/test/ovf_manifests/disk2.vmdk | Bin 0 -> 65536 bytes
> src/test/run_ovf_tests.pl | 71 +++++
> 12 files changed, 751 insertions(+), 1 deletion(-)
> create mode 100644 src/PVE/GuestImport/Makefile
> create mode 100644 src/PVE/GuestImport/OVF.pm
> create mode 100644 src/test/ovf_manifests/Win10-Liz-disk1.vmdk
> create mode 100755 src/test/ovf_manifests/Win10-Liz.ovf
> create mode 100755 src/test/ovf_manifests/Win10-Liz_no_default_ns.ovf
> create mode 100755 src/test/ovf_manifests/Win_2008_R2_two-disks.ovf
> create mode 100644 src/test/ovf_manifests/disk1.vmdk
> create mode 100644 src/test/ovf_manifests/disk2.vmdk
> create mode 100755 src/test/run_ovf_tests.pl
>
> diff --git a/src/PVE/GuestImport/Makefile b/src/PVE/GuestImport/Makefile
> new file mode 100644
> index 0000000..5948384
> --- /dev/null
> +++ b/src/PVE/GuestImport/Makefile
> @@ -0,0 +1,3 @@
> +.PHONY: install
> +install:
> + install -D -m 0644 OVF.pm ${DESTDIR}${PERLDIR}/PVE/GuestImport/OVF.pm
> diff --git a/src/PVE/GuestImport/OVF.pm b/src/PVE/GuestImport/OVF.pm
> new file mode 100644
> index 0000000..055ebf5
> --- /dev/null
> +++ b/src/PVE/GuestImport/OVF.pm
> @@ -0,0 +1,242 @@
> +# Open Virtualization Format import routines
> +# https://www.dmtf.org/standards/ovf
> +package PVE::GuestImport::OVF;
> +
> +use strict;
> +use warnings;
> +
> +use XML::LibXML;
this means the libxml-libxml-perl dependency should also move from
qemu-server to libpve-storage-perl
> +use File::Spec;
> +use File::Basename;
> +use Data::Dumper;
not used?
> +use Cwd 'realpath';
> +
> +use PVE::Tools;
> +use PVE::Storage;
this one here makes a circular dependency, since the DirPlugin then uses
this module.. it is within the repository though, which we have quite
often, but it's a bit of a bummer..
> +
> [..]
* Re: [pve-devel] [PATCH storage v3 02/10] plugin: dir: implement import content type
2024-04-29 11:21 ` [pve-devel] [PATCH storage v3 02/10] plugin: dir: implement import content type Dominik Csapak
@ 2024-05-22 9:24 ` Fabian Grünbichler
0 siblings, 0 replies; 38+ messages in thread
From: Fabian Grünbichler @ 2024-05-22 9:24 UTC (permalink / raw)
To: Proxmox VE development discussion
On April 29, 2024 1:21 pm, Dominik Csapak wrote:
> in DirPlugin and not Plugin (because of cyclic dependency of
> Plugin -> OVF -> Storage -> Plugin otherwise)
>
> only ovf is currently supported (though ova will be shown in import
> listing), expects the files to not be in a subdir, and adjacent to the
> ovf file.
>
> listed will be all ovf/qcow2/raw/vmdk files.
> ovf because it can be imported, and the rest because they can be used
> in the 'import-from' part of qemu-server.
>
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
> src/PVE/GuestImport/OVF.pm | 3 +++
> src/PVE/Storage.pm | 8 +++++++
> src/PVE/Storage/DirPlugin.pm | 36 +++++++++++++++++++++++++++++-
> src/PVE/Storage/Plugin.pm | 11 ++++++++-
> src/test/parse_volname_test.pm | 18 +++++++++++++++
> src/test/path_to_volume_id_test.pm | 21 +++++++++++++++++
> 6 files changed, 95 insertions(+), 2 deletions(-)
>
> diff --git a/src/PVE/GuestImport/OVF.pm b/src/PVE/GuestImport/OVF.pm
> index 055ebf5..0eb5e9c 100644
> --- a/src/PVE/GuestImport/OVF.pm
> +++ b/src/PVE/GuestImport/OVF.pm
> @@ -222,6 +222,8 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
> }
>
> ($backing_file_path) = $backing_file_path =~ m|^(/.*)|; # untaint
> + ($filepath) = $filepath =~ m|^(${PVE::Storage::SAFE_CHAR_CLASS_RE}+)$|; # untaint & check no sub/parent dirs
> + die "invalid path\n" if !$filepath;
>
> my $virtual_size = PVE::Storage::file_size_info($backing_file_path);
> die "error parsing $backing_file_path, cannot determine file size\n"
> @@ -231,6 +233,7 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
> disk_address => $pve_disk_address,
> backing_file => $backing_file_path,
> virtual_size => $virtual_size
> + relative_path => $filepath,
syntax error here (cleaned up in next patch)
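for reference, the corrected hash literal presumably just needs the
missing comma (a sketch of what the next patch cleans up):

```perl
$pve_disk = {
    disk_address => $pve_disk_address,
    backing_file => $backing_file_path,
    virtual_size => $virtual_size,   # <- this comma was missing
    relative_path => $filepath,
};
```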
> };
> push @disks, $pve_disk;
>
> diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
> index f19a115..1ed91c2 100755
> --- a/src/PVE/Storage.pm
> +++ b/src/PVE/Storage.pm
> @@ -114,6 +114,10 @@ our $VZTMPL_EXT_RE_1 = qr/\.tar\.(gz|xz|zst)/i;
>
> our $BACKUP_EXT_RE_2 = qr/\.(tgz|(?:tar|vma)(?:\.(${\PVE::Storage::Plugin::COMPRESSOR_RE}))?)/;
>
> +our $IMPORT_EXT_RE_1 = qr/\.(ovf|qcow2|raw|vmdk)/;
> +
> +our $SAFE_CHAR_CLASS_RE = qr/[a-zA-Z0-9\-\.\+\=\_]/;
> +
> # FIXME remove with PVE 8.0, add versioned breaks for pve-manager
> our $vztmpl_extension_re = $VZTMPL_EXT_RE_1;
>
> @@ -612,6 +616,7 @@ sub path_to_volume_id {
> my $backupdir = $plugin->get_subdir($scfg, 'backup');
> my $privatedir = $plugin->get_subdir($scfg, 'rootdir');
> my $snippetsdir = $plugin->get_subdir($scfg, 'snippets');
> + my $importdir = $plugin->get_subdir($scfg, 'import');
>
> if ($path =~ m!^$imagedir/(\d+)/([^/\s]+)$!) {
> my $vmid = $1;
> @@ -640,6 +645,9 @@ sub path_to_volume_id {
> } elsif ($path =~ m!^$snippetsdir/([^/]+)$!) {
> my $name = $1;
> return ('snippets', "$sid:snippets/$name");
> + } elsif ($path =~ m!^$importdir/(${SAFE_CHAR_CLASS_RE}+${IMPORT_EXT_RE_1})$!) {
> + my $name = $1;
> + return ('import', "$sid:import/$name");
> }
> }
>
> diff --git a/src/PVE/Storage/DirPlugin.pm b/src/PVE/Storage/DirPlugin.pm
> index 2efa8d5..3e3b1e7 100644
> --- a/src/PVE/Storage/DirPlugin.pm
> +++ b/src/PVE/Storage/DirPlugin.pm
> @@ -10,6 +10,7 @@ use IO::File;
> use POSIX;
>
> use PVE::Storage::Plugin;
> +use PVE::GuestImport::OVF;
> use PVE::JSONSchema qw(get_standard_option);
>
> use base qw(PVE::Storage::Plugin);
> @@ -22,7 +23,7 @@ sub type {
>
> sub plugindata {
> return {
> - content => [ { images => 1, rootdir => 1, vztmpl => 1, iso => 1, backup => 1, snippets => 1, none => 1 },
> + content => [ { images => 1, rootdir => 1, vztmpl => 1, iso => 1, backup => 1, snippets => 1, none => 1, import => 1 },
> { images => 1, rootdir => 1 }],
> format => [ { raw => 1, qcow2 => 1, vmdk => 1, subvol => 1 } , 'raw' ],
> };
> @@ -247,4 +248,37 @@ sub check_config {
> return $opts;
> }
>
> +sub get_import_metadata {
> + my ($class, $scfg, $volname, $storeid) = @_;
> +
> + my ($vtype, $name, undef, undef, undef, undef, $fmt) = $class->parse_volname($volname);
> + die "invalid content type '$vtype'\n" if $vtype ne 'import';
> + die "invalid format\n" if $fmt ne 'ova' && $fmt ne 'ovf';
> +
> + # NOTE: all types of warnings must be added to the return schema of the import-metadata API endpoint
> + my $warnings = [];
> +
> + my $path = $class->path($scfg, $volname, $storeid, undef);
> + my $res = PVE::GuestImport::OVF::parse_ovf($path);
> + my $disks = {};
> + for my $disk ($res->{disks}->@*) {
> + my $id = $disk->{disk_address};
> + my $size = $disk->{virtual_size};
> + my $path = $disk->{relative_path};
> + $disks->{$id} = {
> + volid => "$storeid:import/$path",
> + defined($size) ? (size => $size) : (),
> + };
> + }
> +
> + return {
> + type => 'vm',
> + source => $volname,
> + 'create-args' => $res->{qm},
> + 'disks' => $disks,
> + warnings => $warnings,
> + net => [],
> + };
> +}
> +
> 1;
> diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
> index 22a9729..33f0f3a 100644
> --- a/src/PVE/Storage/Plugin.pm
> +++ b/src/PVE/Storage/Plugin.pm
> @@ -654,6 +654,8 @@ sub parse_volname {
> return ('backup', $fn);
> } elsif ($volname =~ m!^snippets/([^/]+)$!) {
> return ('snippets', $1);
> + } elsif ($volname =~ m!^import/(${PVE::Storage::SAFE_CHAR_CLASS_RE}+$PVE::Storage::IMPORT_EXT_RE_1)$!) {
> + return ('import', $1, undef, undef, undef, undef, $2);
> }
>
> die "unable to parse directory volume name '$volname'\n";
> @@ -666,6 +668,7 @@ my $vtype_subdirs = {
> vztmpl => 'template/cache',
> backup => 'dump',
> snippets => 'snippets',
> + import => 'import',
> };
>
> sub get_vtype_subdirs {
> @@ -1227,7 +1230,7 @@ sub list_images {
> return $res;
> }
>
> -# list templates ($tt = <iso|vztmpl|backup|snippets>)
> +# list templates ($tt = <iso|vztmpl|backup|snippets|import>)
> my $get_subdir_files = sub {
> my ($sid, $path, $tt, $vmid) = @_;
>
> @@ -1283,6 +1286,10 @@ my $get_subdir_files = sub {
> volid => "$sid:snippets/". basename($fn),
> format => 'snippet',
> };
> + } elsif ($tt eq 'import') {
> + next if $fn !~ m!/(${PVE::Storage::SAFE_CHAR_CLASS_RE}+$PVE::Storage::IMPORT_EXT_RE_1)$!i;
> +
> + $info = { volid => "$sid:import/$1", format => "$2" };
> }
>
> $info->{size} = $st->size;
> @@ -1317,6 +1324,8 @@ sub list_volumes {
> $data = $get_subdir_files->($storeid, $path, 'backup', $vmid);
> } elsif ($type eq 'snippets') {
> $data = $get_subdir_files->($storeid, $path, 'snippets');
> + } elsif ($type eq 'import') {
> + $data = $get_subdir_files->($storeid, $path, 'import');
> }
> }
>
> diff --git a/src/test/parse_volname_test.pm b/src/test/parse_volname_test.pm
> index d6ac885..a8c746f 100644
> --- a/src/test/parse_volname_test.pm
> +++ b/src/test/parse_volname_test.pm
> @@ -81,6 +81,19 @@ my $tests = [
> expected => ['snippets', 'hookscript.pl'],
> },
> #
> + # Import
> + #
> + {
> + description => "Import, ova",
> + volname => 'import/import.ova',
> + expected => ['import', 'import.ova', undef, undef, undef ,undef, 'ova'],
> + },
with the syntax error above cleaned up, this test fails, since OVA is
not yet recognized as a format at this point in the series (similarly
for the tests below).
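a minimal sketch of the fix (assuming 'ova' simply joins the extension
alternation here; the actual series may handle it differently, e.g. in
a later patch):

```perl
# hypothetical: also accept .ova in the import extension match so
# parse_volname can return 'ova' as the format
our $IMPORT_EXT_RE_1 = qr/\.(ova|ovf|qcow2|raw|vmdk)/;
```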
> + {
> + description => "Import, ovf",
> + volname => 'import/import.ovf',
> + expected => ['import', 'import.ovf', undef, undef, undef ,undef, 'ovf'],
> + },
> + #
> # failed matches
> #
> {
> @@ -123,6 +136,11 @@ my $tests = [
> volname => "$vmid/base-$vmid-disk-0.qcow2/ssss/vm-$vmid-disk-0.qcow2",
> expected => "unable to parse volume filename 'base-$vmid-disk-0.qcow2/ssss/vm-$vmid-disk-0.qcow2'\n",
> },
> + {
> + description => "Failed match: import dir but no ova/ovf/disk image",
> + volname => "import/test.foo",
> + expected => "unable to parse directory volume name 'import/test.foo'\n",
> + },
> ];
>
> # create more test cases for VM disk images matches
> diff --git a/src/test/path_to_volume_id_test.pm b/src/test/path_to_volume_id_test.pm
> index 8149c88..0d238f9 100644
> --- a/src/test/path_to_volume_id_test.pm
> +++ b/src/test/path_to_volume_id_test.pm
> @@ -174,6 +174,22 @@ my @tests = (
> 'local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.xz',
> ],
> },
> + {
> + description => 'Import, ova',
> + volname => "$storage_dir/import/import.ova",
> + expected => [
> + 'import',
> + 'local:import/import.ova',
> + ],
> + },
> + {
> + description => 'Import, ovf',
> + volname => "$storage_dir/import/import.ovf",
> + expected => [
> + 'import',
> + 'local:import/import.ovf',
> + ],
> + },
>
> # no matches, path or files with failures
> {
> @@ -231,6 +247,11 @@ my @tests = (
> volname => "$storage_dir/images/ssss/vm-1234-disk-0.qcow2",
> expected => [''],
> },
> + {
> + description => 'Import, non ova/ovf/disk image in import dir',
> + volname => "$storage_dir/import/test.foo",
> + expected => [''],
> + },
> );
>
> plan tests => scalar @tests + 1;
> --
> 2.39.2
>
>
>
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [pve-devel] [PATCH storage v3 01/10] copy OVF.pm from qemu-server
2024-04-29 11:21 ` [pve-devel] [PATCH storage v3 01/10] copy OVF.pm from qemu-server Dominik Csapak
2024-05-22 8:56 ` Fabian Grünbichler
@ 2024-05-22 9:35 ` Fabian Grünbichler
1 sibling, 0 replies; 38+ messages in thread
From: Fabian Grünbichler @ 2024-05-22 9:35 UTC (permalink / raw)
To: Proxmox VE development discussion
On April 29, 2024 1:21 pm, Dominik Csapak wrote:
> copies the OVF.pm and relevant ovf tests from qemu-server.
> We need it here, and it uses PVE::Storage already, and since there is no
> intermediary package/repository we could put it, it seems fitting in
> here.
>
> Put it in a new GuestImport module
>
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
> src/PVE/GuestImport/Makefile | 3 +
> src/PVE/GuestImport/OVF.pm | 242 ++++++++++++++++++
> src/PVE/Makefile | 1 +
> src/PVE/Storage/Makefile | 1 +
> src/test/Makefile | 5 +-
> src/test/ovf_manifests/Win10-Liz-disk1.vmdk | Bin 0 -> 65536 bytes
> src/test/ovf_manifests/Win10-Liz.ovf | 142 ++++++++++
> .../ovf_manifests/Win10-Liz_no_default_ns.ovf | 142 ++++++++++
> .../ovf_manifests/Win_2008_R2_two-disks.ovf | 145 +++++++++++
> src/test/ovf_manifests/disk1.vmdk | Bin 0 -> 65536 bytes
> src/test/ovf_manifests/disk2.vmdk | Bin 0 -> 65536 bytes
> src/test/run_ovf_tests.pl | 71 +++++
> 12 files changed, 751 insertions(+), 1 deletion(-)
> create mode 100644 src/PVE/GuestImport/Makefile
> create mode 100644 src/PVE/GuestImport/OVF.pm
> create mode 100644 src/test/ovf_manifests/Win10-Liz-disk1.vmdk
> create mode 100755 src/test/ovf_manifests/Win10-Liz.ovf
> create mode 100755 src/test/ovf_manifests/Win10-Liz_no_default_ns.ovf
> create mode 100755 src/test/ovf_manifests/Win_2008_R2_two-disks.ovf
> create mode 100644 src/test/ovf_manifests/disk1.vmdk
> create mode 100644 src/test/ovf_manifests/disk2.vmdk
> create mode 100755 src/test/run_ovf_tests.pl
>
> diff --git a/src/PVE/GuestImport/Makefile b/src/PVE/GuestImport/Makefile
> new file mode 100644
> index 0000000..5948384
> --- /dev/null
> +++ b/src/PVE/GuestImport/Makefile
> @@ -0,0 +1,3 @@
> +.PHONY: install
> +install:
> + install -D -m 0644 OVF.pm ${DESTDIR}${PERLDIR}/PVE/GuestImport/OVF.pm
> diff --git a/src/PVE/GuestImport/OVF.pm b/src/PVE/GuestImport/OVF.pm
> new file mode 100644
> index 0000000..055ebf5
> --- /dev/null
> +++ b/src/PVE/GuestImport/OVF.pm
> @@ -0,0 +1,242 @@
> +# Open Virtualization Format import routines
> +# https://www.dmtf.org/standards/ovf
> +package PVE::GuestImport::OVF;
> +
> +use strict;
> +use warnings;
> +
> +use XML::LibXML;
> +use File::Spec;
> +use File::Basename;
> +use Data::Dumper;
> +use Cwd 'realpath';
> +
> +use PVE::Tools;
> +use PVE::Storage;
> +
> +# map OVF resources types to descriptive strings
> +# this will allow us to explore the xml tree without using magic numbers
> +# http://schemas.dmtf.org/wbem/cim-html/2/CIM_ResourceAllocationSettingData.html
> +my @resources = (
> + { id => 1, dtmf_name => 'Other' },
> + { id => 2, dtmf_name => 'Computer System' },
> + { id => 3, dtmf_name => 'Processor' },
> + { id => 4, dtmf_name => 'Memory' },
> + { id => 5, dtmf_name => 'IDE Controller', pve_type => 'ide' },
> + { id => 6, dtmf_name => 'Parallel SCSI HBA', pve_type => 'scsi' },
> + { id => 7, dtmf_name => 'FC HBA' },
> + { id => 8, dtmf_name => 'iSCSI HBA' },
> + { id => 9, dtmf_name => 'IB HCA' },
> + { id => 10, dtmf_name => 'Ethernet Adapter' },
> + { id => 11, dtmf_name => 'Other Network Adapter' },
> + { id => 12, dtmf_name => 'I/O Slot' },
> + { id => 13, dtmf_name => 'I/O Device' },
> + { id => 14, dtmf_name => 'Floppy Drive' },
> + { id => 15, dtmf_name => 'CD Drive' },
> + { id => 16, dtmf_name => 'DVD drive' },
> + { id => 17, dtmf_name => 'Disk Drive' },
> + { id => 18, dtmf_name => 'Tape Drive' },
> + { id => 19, dtmf_name => 'Storage Extent' },
> + { id => 20, dtmf_name => 'Other storage device', pve_type => 'sata'},
> + { id => 21, dtmf_name => 'Serial port' },
> + { id => 22, dtmf_name => 'Parallel port' },
> + { id => 23, dtmf_name => 'USB Controller' },
> + { id => 24, dtmf_name => 'Graphics controller' },
> + { id => 25, dtmf_name => 'IEEE 1394 Controller' },
> + { id => 26, dtmf_name => 'Partitionable Unit' },
> + { id => 27, dtmf_name => 'Base Partitionable Unit' },
> + { id => 28, dtmf_name => 'Power' },
> + { id => 29, dtmf_name => 'Cooling Capacity' },
> + { id => 30, dtmf_name => 'Ethernet Switch Port' },
> + { id => 31, dtmf_name => 'Logical Disk' },
> + { id => 32, dtmf_name => 'Storage Volume' },
> + { id => 33, dtmf_name => 'Ethernet Connection' },
> + { id => 34, dtmf_name => 'DMTF reserved' },
> + { id => 35, dtmf_name => 'Vendor Reserved'}
> +);
> +
> +sub find_by {
> + my ($key, $param) = @_;
> + foreach my $resource (@resources) {
> + if ($resource->{$key} eq $param) {
> + return ($resource);
> + }
> + }
> + return;
> +}
> +
> +sub dtmf_name_to_id {
> + my ($dtmf_name) = @_;
> + my $found = find_by('dtmf_name', $dtmf_name);
> + if ($found) {
> + return $found->{id};
> + } else {
> + return;
> + }
> +}
> +
> +sub id_to_pve {
> + my ($id) = @_;
> + my $resource = find_by('id', $id);
> + if ($resource) {
> + return $resource->{pve_type};
> + } else {
> + return;
> + }
> +}
> +
> +# returns two references, $qm which holds qm.conf style key/values, and \@disks
> +sub parse_ovf {
> + my ($ovf, $debug) = @_;
> +
> + my $dom = XML::LibXML->load_xml(location => $ovf, no_blanks => 1);
> +
> + # register the xml namespaces in a xpath context object
> + # 'ovf' is the default namespace so it will be prepended to each xml element
> + my $xpc = XML::LibXML::XPathContext->new($dom);
> + $xpc->registerNs('ovf', 'http://schemas.dmtf.org/ovf/envelope/1');
> + $xpc->registerNs('rasd', 'http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData');
> + $xpc->registerNs('vssd', 'http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData');
> +
> +
> + # hash to save qm.conf parameters
> + my $qm;
> +
> + #array to save a disk list
> + my @disks;
> +
> + # easy xpath
> + # walk down the dom until we find the matching XML element
> + my $xpath_find_name = "/ovf:Envelope/ovf:VirtualSystem/ovf:Name";
> + my $ovf_name = $xpc->findvalue($xpath_find_name);
> +
> + if ($ovf_name) {
> + # PVE::QemuServer::confdesc requires a valid DNS name
> + ($qm->{name} = $ovf_name) =~ s/[^a-zA-Z0-9\-\.]//g;
> + } else {
> + warn "warning: unable to parse the VM name in this OVF manifest, generating a default value\n";
> + }
> +
> + # middle level xpath
> + # element[child] search the elements which have this [child]
> + my $processor_id = dtmf_name_to_id('Processor');
> + my $xpath_find_vcpu_count = "/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/ovf:Item[rasd:ResourceType=${processor_id}]/rasd:VirtualQuantity";
> + $qm->{'cores'} = $xpc->findvalue($xpath_find_vcpu_count);
> +
> + my $memory_id = dtmf_name_to_id('Memory');
> + my $xpath_find_memory = ("/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/ovf:Item[rasd:ResourceType=${memory_id}]/rasd:VirtualQuantity");
> + $qm->{'memory'} = $xpc->findvalue($xpath_find_memory);
> +
> + # middle level xpath
> + # here we expect multiple results, so we do not read the element value with
> + # findvalue() but store multiple elements with findnodes()
> + my $disk_id = dtmf_name_to_id('Disk Drive');
> + my $xpath_find_disks="/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/ovf:Item[rasd:ResourceType=${disk_id}]";
> + my @disk_items = $xpc->findnodes($xpath_find_disks);
> +
> + # disks metadata is split in four different xml elements:
> + # * as an Item node of type DiskDrive in the VirtualHardwareSection
> + # * as an Disk node in the DiskSection
> + # * as a File node in the References section
> + # * each Item node also holds a reference to its owning controller
> + #
> + # we iterate over the list of Item nodes of type disk drive, and for each item,
> + # find the corresponding Disk node, and File node and owning controller
> + # when all the nodes have been found, we copy the relevant information to
> + # a $pve_disk hash ref, which we push to @disks;
> +
> + foreach my $item_node (@disk_items) {
> +
> + my $disk_node;
> + my $file_node;
> + my $controller_node;
> + my $pve_disk;
> +
> + print "disk item:\n", $item_node->toString(1), "\n" if $debug;
> +
> + # from Item, find corresponding Disk node
> + # here the dot means the search should start from the current element in dom
> + my $host_resource = $xpc->findvalue('rasd:HostResource', $item_node);
> + my $disk_section_path;
> + my $disk_id;
> +
> + # RFC 3986 "2.3. Unreserved Characters"
> + my $valid_uripath_chars = qr/[[:alnum:]]|[\-\._~]/;
> +
> + if ($host_resource =~ m|^ovf:/(${valid_uripath_chars}+)/(${valid_uripath_chars}+)$|) {
> + $disk_section_path = $1;
> + $disk_id = $2;
> + } else {
> + warn "invalid host resource $host_resource, skipping\n";
> + next;
> + }
> + printf "disk section path: $disk_section_path and disk id: $disk_id\n" if $debug;
> +
> + # tricky xpath
> + # @ means we filter the result query based on the value of an item attribute ( @ = attribute)
> + # @ needs to be escaped to prevent Perl double quote interpolation
> + my $xpath_find_fileref = sprintf("/ovf:Envelope/ovf:DiskSection/\
> +ovf:Disk[\@ovf:diskId='%s']/\@ovf:fileRef", $disk_id);
> + my $fileref = $xpc->findvalue($xpath_find_fileref);
> +
> + my $valid_url_chars = qr@${valid_uripath_chars}|/@;
> + if (!$fileref || $fileref !~ m/^${valid_url_chars}+$/) {
> + warn "invalid host resource $host_resource, skipping\n";
> + next;
> + }
> +
> + # from Disk Node, find corresponding filepath
> + my $xpath_find_filepath = sprintf("/ovf:Envelope/ovf:References/ovf:File[\@ovf:id='%s']/\@ovf:href", $fileref);
> + my $filepath = $xpc->findvalue($xpath_find_filepath);
> + if (!$filepath) {
> + warn "invalid file reference $fileref, skipping\n";
> + next;
> + }
> + print "file path: $filepath\n" if $debug;
> +
> + # from Item, find owning Controller type
> + my $controller_id = $xpc->findvalue('rasd:Parent', $item_node);
> + my $xpath_find_parent_type = sprintf("/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/\
> +ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
> + my $controller_type = $xpc->findvalue($xpath_find_parent_type);
> + if (!$controller_type) {
> + warn "invalid or missing controller: $controller_type, skipping\n";
> + next;
> + }
> + print "owning controller type: $controller_type\n" if $debug;
> +
> + # extract corresponding Controller node details
> + my $adress_on_controller = $xpc->findvalue('rasd:AddressOnParent', $item_node);
> + my $pve_disk_address = id_to_pve($controller_type) . $adress_on_controller;
> +
> + # resolve symlinks and relative path components
> + # and die if the diskimage is not somewhere under the $ovf path
> + my $ovf_dir = realpath(dirname(File::Spec->rel2abs($ovf)));
should we maybe just enforce that $ovf must already be an absolute
path? there is only one pre-existing caller, and the new one passes in
the result of $plugin->path, which should be an absolute path as well..
also, the usage of realpath here and below lacks error handling (it
returns undef and sets $! in case of an error).
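a sketch of how the missing error handling could look (assuming the
defined-or operator, i.e. perl >= 5.10):

```perl
# hypothetical: die with a useful message instead of silently
# continuing with an undef path when realpath() fails
my $ovf_dir = realpath(dirname(File::Spec->rel2abs($ovf)))
    // die "failed to resolve OVF directory of '$ovf' - $!\n";

my $backing_file_path = realpath(join('/', $ovf_dir, $filepath))
    // die "failed to resolve disk image '$filepath' - $!\n";
```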
> + my $backing_file_path = realpath(join ('/', $ovf_dir, $filepath));
> + if ($backing_file_path !~ /^\Q${ovf_dir}\E/) {
> + die "error parsing $filepath, are you using a symlink ?\n";
> + }
with the later changes, the only thing this does is check that $filepath
is not '..' or '.'. we could just enforce that instead? well, actually,
'.' is not even handled by this (in the context of an OVA file
having such a reference), it just trips up parse_volname later when it
tries to get the format out of it, which thankfully happens before
extraction ;)
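an explicit check along those lines could look like this (hypothetical
sketch, covering '.' as well):

```perl
# hypothetical: reject relative path tricks directly instead of
# relying on the realpath prefix comparison
die "invalid disk image reference '$filepath'\n"
    if $filepath eq '.' || $filepath eq '..' || $filepath =~ m!/!;
```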
> +
> + if (!-e $backing_file_path) {
> + die "error parsing $filepath, file seems not to exist at $backing_file_path\n";
> + }
> +
> + ($backing_file_path) = $backing_file_path =~ m|^(/.*)|; # untaint
> +
> + my $virtual_size = PVE::Storage::file_size_info($backing_file_path);
> + die "error parsing $backing_file_path, cannot determine file size\n"
> + if !$virtual_size;
> +
> + $pve_disk = {
> + disk_address => $pve_disk_address,
> + backing_file => $backing_file_path,
> + virtual_size => $virtual_size
> + };
> + push @disks, $pve_disk;
> +
> + }
> +
> + return {qm => $qm, disks => \@disks};
> +}
> +
> +1;
> diff --git a/src/PVE/Makefile b/src/PVE/Makefile
> index d438804..e15a275 100644
> --- a/src/PVE/Makefile
> +++ b/src/PVE/Makefile
> @@ -6,6 +6,7 @@ install:
> install -D -m 0644 Diskmanage.pm ${DESTDIR}${PERLDIR}/PVE/Diskmanage.pm
> install -D -m 0644 CephConfig.pm ${DESTDIR}${PERLDIR}/PVE/CephConfig.pm
> make -C Storage install
> + make -C GuestImport install
> make -C API2 install
> make -C CLI install
>
> diff --git a/src/PVE/Storage/Makefile b/src/PVE/Storage/Makefile
> index d5cc942..2daa0da 100644
> --- a/src/PVE/Storage/Makefile
> +++ b/src/PVE/Storage/Makefile
> @@ -14,6 +14,7 @@ SOURCES= \
> PBSPlugin.pm \
> BTRFSPlugin.pm \
> LvmThinPlugin.pm \
> + OVF.pm \
> ESXiPlugin.pm
>
> .PHONY: install
> diff --git a/src/test/Makefile b/src/test/Makefile
> index c54b10f..12991da 100644
> --- a/src/test/Makefile
> +++ b/src/test/Makefile
> @@ -1,6 +1,6 @@
> all: test
>
> -test: test_zfspoolplugin test_disklist test_bwlimit test_plugin
> +test: test_zfspoolplugin test_disklist test_bwlimit test_plugin test_ovf
>
> test_zfspoolplugin: run_test_zfspoolplugin.pl
> ./run_test_zfspoolplugin.pl
> @@ -13,3 +13,6 @@ test_bwlimit: run_bwlimit_tests.pl
>
> test_plugin: run_plugin_tests.pl
> ./run_plugin_tests.pl
> +
> +test_ovf: run_ovf_tests.pl
> + ./run_ovf_tests.pl
> diff --git a/src/test/ovf_manifests/Win10-Liz-disk1.vmdk b/src/test/ovf_manifests/Win10-Liz-disk1.vmdk
> new file mode 100644
> index 0000000000000000000000000000000000000000..662354a3d1333a2f6c4364005e53bfe7cd8b9044
> GIT binary patch
> literal 65536
> zcmeIvy>HV%7zbeUF`dK)46s<q+^B&Pi6H|eMIb;zP1VdMK8V$P$u?2T)IS|NNta69
> zSXw<No$qu%-}$}AUq|21A0<ihr0Gwa-nQ%QGfCR@wmshsN%A;JUhL<u_T%+U7Sd<o
> zW^TMU0^M{}R2S(eR@1Ur*Q@eVF^^#r%c@u{hyC#J%V?Nq?*?z)=UG^1Wn9+n(yx6B
> z(=ujtJiA)QVP~;guI5EOE2iV-%_??6=%y!^b+aeU_aA6Z4X2azC>{U!a5_FoJCkDB
> zKRozW{5{B<Li)YUBEQ&fJe$RRZCRbA$5|CacQiT<A<uvIHbq(g$>yIY=etVNVcI$B
> zY@^?CwTN|j)tg?;i)G&AZFqPqoW(5P2K~XUq>9sqVVe!!?y@Y;)^#k~TefEvd2_XU
> z^M@5mfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk4^iOdL%ftb
> z5g<T-009C72;3>~`p!f^fB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZ
> zfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&U
> zAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C7
> z2oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N
> p0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK;Zui`~%h>XmtPp
>
> literal 0
> HcmV?d00001
>
> diff --git a/src/test/ovf_manifests/Win10-Liz.ovf b/src/test/ovf_manifests/Win10-Liz.ovf
> new file mode 100755
> index 0000000..bf4b41a
> --- /dev/null
> +++ b/src/test/ovf_manifests/Win10-Liz.ovf
> @@ -0,0 +1,142 @@
> +<?xml version="1.0" encoding="UTF-8"?>
> +<!--Generated by VMware ovftool 4.1.0 (build-2982904), UTC time: 2017-02-07T13:50:15.265014Z-->
> +<Envelope vmw:buildId="build-2982904" xmlns="http://schemas.dmtf.org/ovf/envelope/1" xmlns:cim="http://schemas.dmtf.org/wbem/wscim/1/common" xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1" xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" xmlns:vmw="http://www.vmware.com/schema/ovf" xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
> + <References>
> + <File ovf:href="Win10-Liz-disk1.vmdk" ovf:id="file1" ovf:size="9155243008"/>
> + </References>
> + <DiskSection>
> + <Info>Virtual disk information</Info>
> + <Disk ovf:capacity="128" ovf:capacityAllocationUnits="byte * 2^30" ovf:diskId="vmdisk1" ovf:fileRef="file1" ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized" ovf:populatedSize="16798056448"/>
> + </DiskSection>
> + <NetworkSection>
> + <Info>The list of logical networks</Info>
> + <Network ovf:name="bridged">
> + <Description>The bridged network</Description>
> + </Network>
> + </NetworkSection>
> + <VirtualSystem ovf:id="vm">
> + <Info>A virtual machine</Info>
> + <Name>Win10-Liz</Name>
> + <OperatingSystemSection ovf:id="1" vmw:osType="windows9_64Guest">
> + <Info>The kind of installed guest operating system</Info>
> + </OperatingSystemSection>
> + <VirtualHardwareSection>
> + <Info>Virtual hardware requirements</Info>
> + <System>
> + <vssd:ElementName>Virtual Hardware Family</vssd:ElementName>
> + <vssd:InstanceID>0</vssd:InstanceID>
> + <vssd:VirtualSystemIdentifier>Win10-Liz</vssd:VirtualSystemIdentifier>
> + <vssd:VirtualSystemType>vmx-11</vssd:VirtualSystemType>
> + </System>
> + <Item>
> + <rasd:AllocationUnits>hertz * 10^6</rasd:AllocationUnits>
> + <rasd:Description>Number of Virtual CPUs</rasd:Description>
> + <rasd:ElementName>4 virtual CPU(s)</rasd:ElementName>
> + <rasd:InstanceID>1</rasd:InstanceID>
> + <rasd:ResourceType>3</rasd:ResourceType>
> + <rasd:VirtualQuantity>4</rasd:VirtualQuantity>
> + </Item>
> + <Item>
> + <rasd:AllocationUnits>byte * 2^20</rasd:AllocationUnits>
> + <rasd:Description>Memory Size</rasd:Description>
> + <rasd:ElementName>6144MB of memory</rasd:ElementName>
> + <rasd:InstanceID>2</rasd:InstanceID>
> + <rasd:ResourceType>4</rasd:ResourceType>
> + <rasd:VirtualQuantity>6144</rasd:VirtualQuantity>
> + </Item>
> + <Item>
> + <rasd:Address>0</rasd:Address>
> + <rasd:Description>SATA Controller</rasd:Description>
> + <rasd:ElementName>sataController0</rasd:ElementName>
> + <rasd:InstanceID>3</rasd:InstanceID>
> + <rasd:ResourceSubType>vmware.sata.ahci</rasd:ResourceSubType>
> + <rasd:ResourceType>20</rasd:ResourceType>
> + </Item>
> + <Item ovf:required="false">
> + <rasd:Address>0</rasd:Address>
> + <rasd:Description>USB Controller (XHCI)</rasd:Description>
> + <rasd:ElementName>usb3</rasd:ElementName>
> + <rasd:InstanceID>4</rasd:InstanceID>
> + <rasd:ResourceSubType>vmware.usb.xhci</rasd:ResourceSubType>
> + <rasd:ResourceType>23</rasd:ResourceType>
> + </Item>
> + <Item ovf:required="false">
> + <rasd:Address>0</rasd:Address>
> + <rasd:Description>USB Controller (EHCI)</rasd:Description>
> + <rasd:ElementName>usb</rasd:ElementName>
> + <rasd:InstanceID>5</rasd:InstanceID>
> + <rasd:ResourceSubType>vmware.usb.ehci</rasd:ResourceSubType>
> + <rasd:ResourceType>23</rasd:ResourceType>
> + <vmw:Config ovf:required="false" vmw:key="ehciEnabled" vmw:value="true"/>
> + </Item>
> + <Item>
> + <rasd:Address>0</rasd:Address>
> + <rasd:Description>SCSI Controller</rasd:Description>
> + <rasd:ElementName>scsiController0</rasd:ElementName>
> + <rasd:InstanceID>6</rasd:InstanceID>
> + <rasd:ResourceSubType>lsilogicsas</rasd:ResourceSubType>
> + <rasd:ResourceType>6</rasd:ResourceType>
> + </Item>
> + <Item ovf:required="false">
> + <rasd:AutomaticAllocation>true</rasd:AutomaticAllocation>
> + <rasd:ElementName>serial0</rasd:ElementName>
> + <rasd:InstanceID>7</rasd:InstanceID>
> + <rasd:ResourceType>21</rasd:ResourceType>
> + <vmw:Config ovf:required="false" vmw:key="yieldOnPoll" vmw:value="false"/>
> + </Item>
> + <Item>
> + <rasd:AddressOnParent>0</rasd:AddressOnParent>
> + <rasd:ElementName>disk0</rasd:ElementName>
> + <rasd:HostResource>ovf:/disk/vmdisk1</rasd:HostResource>
> + <rasd:InstanceID>8</rasd:InstanceID>
> + <rasd:Parent>6</rasd:Parent>
> + <rasd:ResourceType>17</rasd:ResourceType>
> + </Item>
> + <Item>
> + <rasd:AddressOnParent>2</rasd:AddressOnParent>
> + <rasd:AutomaticAllocation>true</rasd:AutomaticAllocation>
> + <rasd:Connection>bridged</rasd:Connection>
> + <rasd:Description>E1000e ethernet adapter on "bridged"</rasd:Description>
> + <rasd:ElementName>ethernet0</rasd:ElementName>
> + <rasd:InstanceID>9</rasd:InstanceID>
> + <rasd:ResourceSubType>E1000e</rasd:ResourceSubType>
> + <rasd:ResourceType>10</rasd:ResourceType>
> + <vmw:Config ovf:required="false" vmw:key="wakeOnLanEnabled" vmw:value="false"/>
> + </Item>
> + <Item ovf:required="false">
> + <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
> + <rasd:ElementName>sound</rasd:ElementName>
> + <rasd:InstanceID>10</rasd:InstanceID>
> + <rasd:ResourceSubType>vmware.soundcard.hdaudio</rasd:ResourceSubType>
> + <rasd:ResourceType>1</rasd:ResourceType>
> + </Item>
> + <Item ovf:required="false">
> + <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
> + <rasd:ElementName>video</rasd:ElementName>
> + <rasd:InstanceID>11</rasd:InstanceID>
> + <rasd:ResourceType>24</rasd:ResourceType>
> + <vmw:Config ovf:required="false" vmw:key="enable3DSupport" vmw:value="true"/>
> + </Item>
> + <Item ovf:required="false">
> + <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
> + <rasd:ElementName>vmci</rasd:ElementName>
> + <rasd:InstanceID>12</rasd:InstanceID>
> + <rasd:ResourceSubType>vmware.vmci</rasd:ResourceSubType>
> + <rasd:ResourceType>1</rasd:ResourceType>
> + </Item>
> + <Item ovf:required="false">
> + <rasd:AddressOnParent>1</rasd:AddressOnParent>
> + <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
> + <rasd:ElementName>cdrom0</rasd:ElementName>
> + <rasd:InstanceID>13</rasd:InstanceID>
> + <rasd:Parent>3</rasd:Parent>
> + <rasd:ResourceType>15</rasd:ResourceType>
> + </Item>
> + <vmw:Config ovf:required="false" vmw:key="memoryHotAddEnabled" vmw:value="true"/>
> + <vmw:Config ovf:required="false" vmw:key="powerOpInfo.powerOffType" vmw:value="soft"/>
> + <vmw:Config ovf:required="false" vmw:key="powerOpInfo.resetType" vmw:value="soft"/>
> + <vmw:Config ovf:required="false" vmw:key="powerOpInfo.suspendType" vmw:value="soft"/>
> + <vmw:Config ovf:required="false" vmw:key="tools.syncTimeWithHost" vmw:value="false"/>
> + </VirtualHardwareSection>
> + </VirtualSystem>
> +</Envelope>
> diff --git a/src/test/ovf_manifests/Win10-Liz_no_default_ns.ovf b/src/test/ovf_manifests/Win10-Liz_no_default_ns.ovf
> new file mode 100755
> index 0000000..b93540f
> --- /dev/null
> +++ b/src/test/ovf_manifests/Win10-Liz_no_default_ns.ovf
> @@ -0,0 +1,142 @@
> +<?xml version="1.0" encoding="UTF-8"?>
> +<!--Generated by VMware ovftool 4.1.0 (build-2982904), UTC time: 2017-02-07T13:50:15.265014Z-->
> +<Envelope vmw:buildId="build-2982904" xmlns="http://schemas.dmtf.org/ovf/envelope/1" xmlns:cim="http://schemas.dmtf.org/wbem/wscim/1/common" xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1" xmlns:vmw="http://www.vmware.com/schema/ovf" xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
> + <References>
> + <File ovf:href="Win10-Liz-disk1.vmdk" ovf:id="file1" ovf:size="9155243008"/>
> + </References>
> + <DiskSection>
> + <Info>Virtual disk information</Info>
> + <Disk ovf:capacity="128" ovf:capacityAllocationUnits="byte * 2^30" ovf:diskId="vmdisk1" ovf:fileRef="file1" ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized" ovf:populatedSize="16798056448"/>
> + </DiskSection>
> + <NetworkSection>
> + <Info>The list of logical networks</Info>
> + <Network ovf:name="bridged">
> + <Description>The bridged network</Description>
> + </Network>
> + </NetworkSection>
> + <VirtualSystem ovf:id="vm">
> + <Info>A virtual machine</Info>
> + <Name>Win10-Liz</Name>
> + <OperatingSystemSection ovf:id="1" vmw:osType="windows9_64Guest">
> + <Info>The kind of installed guest operating system</Info>
> + </OperatingSystemSection>
> + <VirtualHardwareSection>
> + <Info>Virtual hardware requirements</Info>
> + <System>
> + <vssd:ElementName>Virtual Hardware Family</vssd:ElementName>
> + <vssd:InstanceID>0</vssd:InstanceID>
> + <vssd:VirtualSystemIdentifier>Win10-Liz</vssd:VirtualSystemIdentifier>
> + <vssd:VirtualSystemType>vmx-11</vssd:VirtualSystemType>
> + </System>
> + <Item>
> + <rasd:AllocationUnits xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">hertz * 10^6</rasd:AllocationUnits>
> + <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">Number of Virtual CPUs</rasd:Description>
> + <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">4 virtual CPU(s)</rasd:ElementName>
> + <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">1</rasd:InstanceID>
> + <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">3</rasd:ResourceType>
> + <rasd:VirtualQuantity xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">4</rasd:VirtualQuantity>
> + </Item>
> + <Item>
> + <rasd:AllocationUnits xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">byte * 2^20</rasd:AllocationUnits>
> + <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">Memory Size</rasd:Description>
> + <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">6144MB of memory</rasd:ElementName>
> + <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">2</rasd:InstanceID>
> + <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">4</rasd:ResourceType>
> + <rasd:VirtualQuantity xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">6144</rasd:VirtualQuantity>
> + </Item>
> + <Item>
> + <rasd:Address xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">0</rasd:Address>
> + <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">SATA Controller</rasd:Description>
> + <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">sataController0</rasd:ElementName>
> + <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">3</rasd:InstanceID>
> + <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">vmware.sata.ahci</rasd:ResourceSubType>
> + <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">20</rasd:ResourceType>
> + </Item>
> + <Item ovf:required="false">
> + <rasd:Address xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">0</rasd:Address>
> + <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">USB Controller (XHCI)</rasd:Description>
> + <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">usb3</rasd:ElementName>
> + <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">4</rasd:InstanceID>
> + <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">vmware.usb.xhci</rasd:ResourceSubType>
> + <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">23</rasd:ResourceType>
> + </Item>
> + <Item ovf:required="false">
> + <rasd:Address xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">0</rasd:Address>
> + <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">USB Controller (EHCI)</rasd:Description>
> + <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">usb</rasd:ElementName>
> + <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">5</rasd:InstanceID>
> + <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">vmware.usb.ehci</rasd:ResourceSubType>
> + <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">23</rasd:ResourceType>
> + <vmw:Config ovf:required="false" vmw:key="ehciEnabled" vmw:value="true"/>
> + </Item>
> + <Item>
> + <rasd:Address xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">0</rasd:Address>
> + <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">SCSI Controller</rasd:Description>
> + <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">scsiController0</rasd:ElementName>
> + <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">6</rasd:InstanceID>
> + <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">lsilogicsas</rasd:ResourceSubType>
> + <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">6</rasd:ResourceType>
> + </Item>
> + <Item ovf:required="false">
> + <rasd:AutomaticAllocation xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">true</rasd:AutomaticAllocation>
> + <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">serial0</rasd:ElementName>
> + <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">7</rasd:InstanceID>
> + <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">21</rasd:ResourceType>
> + <vmw:Config ovf:required="false" vmw:key="yieldOnPoll" vmw:value="false"/>
> + </Item>
> + <Item>
> + <rasd:AddressOnParent xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" >0</rasd:AddressOnParent>
> + <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" >disk0</rasd:ElementName>
> + <rasd:HostResource xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" >ovf:/disk/vmdisk1</rasd:HostResource>
> + <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">8</rasd:InstanceID>
> + <rasd:Parent xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">6</rasd:Parent>
> + <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">17</rasd:ResourceType>
> + </Item>
> + <Item>
> + <rasd:AddressOnParent xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">2</rasd:AddressOnParent>
> + <rasd:AutomaticAllocation xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">true</rasd:AutomaticAllocation>
> + <rasd:Connection xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">bridged</rasd:Connection>
> + <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">E1000e ethernet adapter on "bridged"</rasd:Description>
> + <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">ethernet0</rasd:ElementName>
> + <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">9</rasd:InstanceID>
> + <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">E1000e</rasd:ResourceSubType>
> + <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">10</rasd:ResourceType>
> + <vmw:Config ovf:required="false" vmw:key="wakeOnLanEnabled" vmw:value="false"/>
> + </Item>
> + <Item ovf:required="false">
> + <rasd:AutomaticAllocation xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">false</rasd:AutomaticAllocation>
> + <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">sound</rasd:ElementName>
> + <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">10</rasd:InstanceID>
> + <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">vmware.soundcard.hdaudio</rasd:ResourceSubType>
> + <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">1</rasd:ResourceType>
> + </Item>
> + <Item ovf:required="false">
> + <rasd:AutomaticAllocation xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">false</rasd:AutomaticAllocation>
> + <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">video</rasd:ElementName>
> + <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">11</rasd:InstanceID>
> + <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">24</rasd:ResourceType>
> + <vmw:Config ovf:required="false" vmw:key="enable3DSupport" vmw:value="true"/>
> + </Item>
> + <Item ovf:required="false">
> + <rasd:AutomaticAllocation xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">false</rasd:AutomaticAllocation>
> + <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">vmci</rasd:ElementName>
> + <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">12</rasd:InstanceID>
> + <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">vmware.vmci</rasd:ResourceSubType>
> + <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">1</rasd:ResourceType>
> + </Item>
> + <Item ovf:required="false">
> + <rasd:AddressOnParent xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">1</rasd:AddressOnParent>
> + <rasd:AutomaticAllocation xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">false</rasd:AutomaticAllocation>
> + <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">cdrom0</rasd:ElementName>
> + <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">13</rasd:InstanceID>
> + <rasd:Parent xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">3</rasd:Parent>
> + <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">15</rasd:ResourceType>
> + </Item>
> + <vmw:Config ovf:required="false" vmw:key="memoryHotAddEnabled" vmw:value="true"/>
> + <vmw:Config ovf:required="false" vmw:key="powerOpInfo.powerOffType" vmw:value="soft"/>
> + <vmw:Config ovf:required="false" vmw:key="powerOpInfo.resetType" vmw:value="soft"/>
> + <vmw:Config ovf:required="false" vmw:key="powerOpInfo.suspendType" vmw:value="soft"/>
> + <vmw:Config ovf:required="false" vmw:key="tools.syncTimeWithHost" vmw:value="false"/>
> + </VirtualHardwareSection>
> + </VirtualSystem>
> +</Envelope>
> diff --git a/src/test/ovf_manifests/Win_2008_R2_two-disks.ovf b/src/test/ovf_manifests/Win_2008_R2_two-disks.ovf
> new file mode 100755
> index 0000000..a563aab
> --- /dev/null
> +++ b/src/test/ovf_manifests/Win_2008_R2_two-disks.ovf
> @@ -0,0 +1,145 @@
> +<?xml version="1.0" encoding="UTF-8"?>
> +<!--Generated by VMware ovftool 4.1.0 (build-2982904), UTC time: 2017-02-27T15:09:29.768974Z-->
> +<Envelope vmw:buildId="build-2982904" xmlns="http://schemas.dmtf.org/ovf/envelope/1" xmlns:cim="http://schemas.dmtf.org/wbem/wscim/1/common" xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1" xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" xmlns:vmw="http://www.vmware.com/schema/ovf" xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
> + <References>
> + <File ovf:href="disk1.vmdk" ovf:id="file1" ovf:size="3481968640"/>
> + <File ovf:href="disk2.vmdk" ovf:id="file2" ovf:size="68096"/>
> + </References>
> + <DiskSection>
> + <Info>Virtual disk information</Info>
> + <Disk ovf:capacity="40" ovf:capacityAllocationUnits="byte * 2^30" ovf:diskId="vmdisk1" ovf:fileRef="file1" ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized" ovf:populatedSize="7684882432"/>
> + <Disk ovf:capacity="1" ovf:capacityAllocationUnits="byte * 2^30" ovf:diskId="vmdisk2" ovf:fileRef="file2" ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized" ovf:populatedSize="0"/>
> + </DiskSection>
> + <NetworkSection>
> + <Info>The list of logical networks</Info>
> + <Network ovf:name="bridged">
> + <Description>The bridged network</Description>
> + </Network>
> + </NetworkSection>
> + <VirtualSystem ovf:id="vm">
> + <Info>A virtual machine</Info>
> + <Name>Win_2008-R2x64</Name>
> + <OperatingSystemSection ovf:id="103" vmw:osType="windows7Server64Guest">
> + <Info>The kind of installed guest operating system</Info>
> + </OperatingSystemSection>
> + <VirtualHardwareSection>
> + <Info>Virtual hardware requirements</Info>
> + <System>
> + <vssd:ElementName>Virtual Hardware Family</vssd:ElementName>
> + <vssd:InstanceID>0</vssd:InstanceID>
> + <vssd:VirtualSystemIdentifier>Win_2008-R2x64</vssd:VirtualSystemIdentifier>
> + <vssd:VirtualSystemType>vmx-11</vssd:VirtualSystemType>
> + </System>
> + <Item>
> + <rasd:AllocationUnits>hertz * 10^6</rasd:AllocationUnits>
> + <rasd:Description>Number of Virtual CPUs</rasd:Description>
> + <rasd:ElementName>1 virtual CPU(s)</rasd:ElementName>
> + <rasd:InstanceID>1</rasd:InstanceID>
> + <rasd:ResourceType>3</rasd:ResourceType>
> + <rasd:VirtualQuantity>1</rasd:VirtualQuantity>
> + </Item>
> + <Item>
> + <rasd:AllocationUnits>byte * 2^20</rasd:AllocationUnits>
> + <rasd:Description>Memory Size</rasd:Description>
> + <rasd:ElementName>2048MB of memory</rasd:ElementName>
> + <rasd:InstanceID>2</rasd:InstanceID>
> + <rasd:ResourceType>4</rasd:ResourceType>
> + <rasd:VirtualQuantity>2048</rasd:VirtualQuantity>
> + </Item>
> + <Item>
> + <rasd:Address>0</rasd:Address>
> + <rasd:Description>SATA Controller</rasd:Description>
> + <rasd:ElementName>sataController0</rasd:ElementName>
> + <rasd:InstanceID>3</rasd:InstanceID>
> + <rasd:ResourceSubType>vmware.sata.ahci</rasd:ResourceSubType>
> + <rasd:ResourceType>20</rasd:ResourceType>
> + </Item>
> + <Item ovf:required="false">
> + <rasd:Address>0</rasd:Address>
> + <rasd:Description>USB Controller (EHCI)</rasd:Description>
> + <rasd:ElementName>usb</rasd:ElementName>
> + <rasd:InstanceID>4</rasd:InstanceID>
> + <rasd:ResourceSubType>vmware.usb.ehci</rasd:ResourceSubType>
> + <rasd:ResourceType>23</rasd:ResourceType>
> + <vmw:Config ovf:required="false" vmw:key="ehciEnabled" vmw:value="true"/>
> + </Item>
> + <Item>
> + <rasd:Address>0</rasd:Address>
> + <rasd:Description>SCSI Controller</rasd:Description>
> + <rasd:ElementName>scsiController0</rasd:ElementName>
> + <rasd:InstanceID>5</rasd:InstanceID>
> + <rasd:ResourceSubType>lsilogicsas</rasd:ResourceSubType>
> + <rasd:ResourceType>6</rasd:ResourceType>
> + </Item>
> + <Item ovf:required="false">
> + <rasd:AutomaticAllocation>true</rasd:AutomaticAllocation>
> + <rasd:ElementName>serial0</rasd:ElementName>
> + <rasd:InstanceID>6</rasd:InstanceID>
> + <rasd:ResourceType>21</rasd:ResourceType>
> + <vmw:Config ovf:required="false" vmw:key="yieldOnPoll" vmw:value="false"/>
> + </Item>
> + <Item>
> + <rasd:AddressOnParent>0</rasd:AddressOnParent>
> + <rasd:ElementName>disk0</rasd:ElementName>
> + <rasd:HostResource>ovf:/disk/vmdisk1</rasd:HostResource>
> + <rasd:InstanceID>7</rasd:InstanceID>
> + <rasd:Parent>5</rasd:Parent>
> + <rasd:ResourceType>17</rasd:ResourceType>
> + </Item>
> + <Item>
> + <rasd:AddressOnParent>1</rasd:AddressOnParent>
> + <rasd:ElementName>disk1</rasd:ElementName>
> + <rasd:HostResource>ovf:/disk/vmdisk2</rasd:HostResource>
> + <rasd:InstanceID>8</rasd:InstanceID>
> + <rasd:Parent>5</rasd:Parent>
> + <rasd:ResourceType>17</rasd:ResourceType>
> + </Item>
> + <Item>
> + <rasd:AddressOnParent>2</rasd:AddressOnParent>
> + <rasd:AutomaticAllocation>true</rasd:AutomaticAllocation>
> + <rasd:Connection>bridged</rasd:Connection>
> + <rasd:Description>E1000 ethernet adapter on "bridged"</rasd:Description>
> + <rasd:ElementName>ethernet0</rasd:ElementName>
> + <rasd:InstanceID>9</rasd:InstanceID>
> + <rasd:ResourceSubType>E1000</rasd:ResourceSubType>
> + <rasd:ResourceType>10</rasd:ResourceType>
> + <vmw:Config ovf:required="false" vmw:key="wakeOnLanEnabled" vmw:value="false"/>
> + </Item>
> + <Item ovf:required="false">
> + <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
> + <rasd:ElementName>sound</rasd:ElementName>
> + <rasd:InstanceID>10</rasd:InstanceID>
> + <rasd:ResourceSubType>vmware.soundcard.hdaudio</rasd:ResourceSubType>
> + <rasd:ResourceType>1</rasd:ResourceType>
> + </Item>
> + <Item ovf:required="false">
> + <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
> + <rasd:ElementName>video</rasd:ElementName>
> + <rasd:InstanceID>11</rasd:InstanceID>
> + <rasd:ResourceType>24</rasd:ResourceType>
> + <vmw:Config ovf:required="false" vmw:key="enable3DSupport" vmw:value="true"/>
> + </Item>
> + <Item ovf:required="false">
> + <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
> + <rasd:ElementName>vmci</rasd:ElementName>
> + <rasd:InstanceID>12</rasd:InstanceID>
> + <rasd:ResourceSubType>vmware.vmci</rasd:ResourceSubType>
> + <rasd:ResourceType>1</rasd:ResourceType>
> + </Item>
> + <Item ovf:required="false">
> + <rasd:AddressOnParent>1</rasd:AddressOnParent>
> + <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
> + <rasd:ElementName>cdrom0</rasd:ElementName>
> + <rasd:InstanceID>13</rasd:InstanceID>
> + <rasd:Parent>3</rasd:Parent>
> + <rasd:ResourceType>15</rasd:ResourceType>
> + </Item>
> + <vmw:Config ovf:required="false" vmw:key="cpuHotAddEnabled" vmw:value="true"/>
> + <vmw:Config ovf:required="false" vmw:key="memoryHotAddEnabled" vmw:value="true"/>
> + <vmw:Config ovf:required="false" vmw:key="powerOpInfo.powerOffType" vmw:value="soft"/>
> + <vmw:Config ovf:required="false" vmw:key="powerOpInfo.resetType" vmw:value="soft"/>
> + <vmw:Config ovf:required="false" vmw:key="powerOpInfo.suspendType" vmw:value="soft"/>
> + <vmw:Config ovf:required="false" vmw:key="tools.syncTimeWithHost" vmw:value="false"/>
> + </VirtualHardwareSection>
> + </VirtualSystem>
> +</Envelope>
> diff --git a/src/test/ovf_manifests/disk1.vmdk b/src/test/ovf_manifests/disk1.vmdk
> new file mode 100644
> index 0000000000000000000000000000000000000000..8660602343a1a955f9bcf2e6beaed99316dd8167
> GIT binary patch
> literal 65536
> zcmeIvy>HV%7zbeUF`dK)46s<q9ua7|WuUkSgpg2EwX+*viPe0`HWAtSr(-ASlAWQ|
> zbJF?F{+-YFKK_yYyn2=-$&0qXY<t)4ch@B8o_Fo_en^t%N%H0}e|H$~AF`0X3J-JR
> zqY>z*Sy|tuS*)j3xo%d~*K!`iCRTO1T8@X|%lB-2b2}Q2@{gmi&a1d=x<|K%7N%9q
> zn|Qfh$8m45TCV10Gb^W)c4ZxVA@tMpzfJp2S{y#m?iwzx)01@a>+{9rJna?j=ZAyM
> zqPW{FznsOxiSi~-&+<BkewLkuP!u<VO<6U6^7*&xtNr=XaoRiS?V{gtwTMl%9Za|L
> za#^%_7k)SjXE85!!SM7bspGUQewUqo+Glx@ubWtPwRL-yMO)CL`L7O2fB*pk1PBly
> zK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pkPg~&a(=JbS1PBlyK!5-N0!ISx
> zkM7+PAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+
> z009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBly
> zK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF
> z5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk
> d1PBlyK!5-N0t5&UAV7cs0RjXF5cr=0{{Ra(Wheju
>
> literal 0
> HcmV?d00001
>
> diff --git a/src/test/ovf_manifests/disk2.vmdk b/src/test/ovf_manifests/disk2.vmdk
> new file mode 100644
> index 0000000000000000000000000000000000000000..c4634513348b392202898374f1c8d2d51d565b27
> GIT binary patch
> literal 65536
> zcmeIvy>HV%7zbeUF`dK)46s<q9#IIDI%J@v2!xPOQ?;`jUy0Rx$u<$$`lr`+(j_}X
> ztLLQio&7tX?|uAp{Oj^rk|Zyh{<7(9yX&q=(mrq7>)ntf&y(cMe*SJh-aTX?eH9+&
> z#z!O2Psc@dn~q~OEsJ%%D!&!;7&fu2iq&#-6u$l#k8ZBB&%=0f64qH6mv#5(X4k^B
> zj9DEow(B_REmq6byr^fzbkeM>VlRY#diJkw-bwTQ2bx{O`BgehC%?a(PtMX_-hBS!
> zV6(_?yX6<NxIa-=XX$BH#n2y*PeaJ_>%pcd>%ZCj`_<*{eCa6d4SQYmC$1K;F1Lf}
> zc3v#=CU3(J2jMJcc^4cVA0$<rHpO?@@uyvu<=MK9Wm{XjSCKabJ(~aOpacjIAV7cs
> z0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAn>#W-ahT}R7ZdS0RjXF5Fl_M
> z@c!W5Edc@q2oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk
> z1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs
> z0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZ
> zfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&U
> eAV7cs0RjXF5FkK+009C72oNAZfB=F2DR2*l=VfOA
>
> literal 0
> HcmV?d00001
>
> diff --git a/src/test/run_ovf_tests.pl b/src/test/run_ovf_tests.pl
> new file mode 100755
> index 0000000..5a80ab2
> --- /dev/null
> +++ b/src/test/run_ovf_tests.pl
> @@ -0,0 +1,71 @@
> +#!/usr/bin/perl
> +
> +use strict;
> +use warnings;
> +use lib qw(..); # prepend .. to @INC so we use the local version of PVE packages
> +
> +use FindBin '$Bin';
> +use PVE::GuestImport::OVF;
> +use Test::More;
> +
> +use Data::Dumper;
> +
> +my $test_manifests = join ('/', $Bin, 'ovf_manifests');
> +
> +print "parsing ovfs\n";
> +
> +my $win2008 = eval { PVE::GuestImport::OVF::parse_ovf("$test_manifests/Win_2008_R2_two-disks.ovf") };
> +if (my $err = $@) {
> + fail('parse win2008');
> + warn("error: $err\n");
> +} else {
> + ok('parse win2008');
> +}
> +my $win10 = eval { PVE::GuestImport::OVF::parse_ovf("$test_manifests/Win10-Liz.ovf") };
> +if (my $err = $@) {
> + fail('parse win10');
> + warn("error: $err\n");
> +} else {
> + ok('parse win10');
> +}
> +my $win10noNs = eval { PVE::GuestImport::OVF::parse_ovf("$test_manifests/Win10-Liz_no_default_ns.ovf") };
> +if (my $err = $@) {
> + fail("parse win10 no default rasd NS");
> + warn("error: $err\n");
> +} else {
> + ok('parse win10 no default rasd NS');
> +}
> +
> +print "testing disks\n";
> +
> +is($win2008->{disks}->[0]->{disk_address}, 'scsi0', 'multidisk vm has the correct first disk controller');
> +is($win2008->{disks}->[0]->{backing_file}, "$test_manifests/disk1.vmdk", 'multidisk vm has the correct first disk backing device');
> +is($win2008->{disks}->[0]->{virtual_size}, 2048, 'multidisk vm has the correct first disk size');
> +
> +is($win2008->{disks}->[1]->{disk_address}, 'scsi1', 'multidisk vm has the correct second disk controller');
> +is($win2008->{disks}->[1]->{backing_file}, "$test_manifests/disk2.vmdk", 'multidisk vm has the correct second disk backing device');
> +is($win2008->{disks}->[1]->{virtual_size}, 2048, 'multidisk vm has the correct second disk size');
> +
> +is($win10->{disks}->[0]->{disk_address}, 'scsi0', 'single disk vm has the correct disk controller');
> +is($win10->{disks}->[0]->{backing_file}, "$test_manifests/Win10-Liz-disk1.vmdk", 'single disk vm has the correct disk backing device');
> +is($win10->{disks}->[0]->{virtual_size}, 2048, 'single disk vm has the correct size');
> +
> +is($win10noNs->{disks}->[0]->{disk_address}, 'scsi0', 'single disk vm (no default rasd NS) has the correct disk controller');
> +is($win10noNs->{disks}->[0]->{backing_file}, "$test_manifests/Win10-Liz-disk1.vmdk", 'single disk vm (no default rasd NS) has the correct disk backing device');
> +is($win10noNs->{disks}->[0]->{virtual_size}, 2048, 'single disk vm (no default rasd NS) has the correct size');
> +
> +print "\ntesting vm.conf extraction\n";
> +
> +is($win2008->{qm}->{name}, 'Win2008-R2x64', 'win2008 VM name is correct');
> +is($win2008->{qm}->{memory}, '2048', 'win2008 VM memory is correct');
> +is($win2008->{qm}->{cores}, '1', 'win2008 VM cores are correct');
> +
> +is($win10->{qm}->{name}, 'Win10-Liz', 'win10 VM name is correct');
> +is($win10->{qm}->{memory}, '6144', 'win10 VM memory is correct');
> +is($win10->{qm}->{cores}, '4', 'win10 VM cores are correct');
> +
> +is($win10noNs->{qm}->{name}, 'Win10-Liz', 'win10 VM (no default rasd NS) name is correct');
> +is($win10noNs->{qm}->{memory}, '6144', 'win10 VM (no default rasd NS) memory is correct');
> +is($win10noNs->{qm}->{cores}, '4', 'win10 VM (no default rasd NS) cores are correct');
> +
> +done_testing();
> --
> 2.39.2
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
* Re: [pve-devel] [PATCH storage v3 03/10] plugin: dir: handle ova files for import
2024-04-29 11:21 ` [pve-devel] [PATCH storage v3 03/10] plugin: dir: handle ova files for import Dominik Csapak
@ 2024-05-22 10:08 ` Fabian Grünbichler
2024-05-23 10:40 ` Dominik Csapak
2024-05-22 13:13 ` Fabian Grünbichler
1 sibling, 1 reply; 38+ messages in thread
From: Fabian Grünbichler @ 2024-05-22 10:08 UTC (permalink / raw)
To: Proxmox VE development discussion
On April 29, 2024 1:21 pm, Dominik Csapak wrote:
> since we want to handle ova files (which are only ovf+images bundled in
> a tar file) for import, add code that handles that.
>
> we introduce a valid volname for files contained in ovas like this:
>
> storage:import/archive.ova/disk-1.vmdk
>
> by basically treating the last part of the path as the name for the
> contained disk we want.
>
> in that case we return 'import' as type with 'vmdk/qcow2/raw' as format
> (we cannot use something like 'ova+vmdk' without extending the 'format'
> parsing for all storages/formats, since the value runs through a
> verify-format check at least once)
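
For illustration, a volname of the shape described above can be split with a
pattern along these lines — note this is a simplified standalone sketch, not
code from the patch: the character class is a stand-in for the real
SAFE_CHAR_CLASS_RE, and parse_import_volname is a hypothetical helper name:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# illustrative only: simplified stand-in for the patch's SAFE_CHAR_CLASS_RE
my $safe = qr/[A-Za-z0-9_.-]/;

# hypothetical helper: split an 'import' volname of the form
# "import/archive.ova/disk-1.vmdk" into the archive and the inner file name
sub parse_import_volname {
    my ($volname) = @_;
    if ($volname =~ m!^import/(${safe}+\.ova)/(${safe}+)$!) {
        return { archive => $1, inner_file => $2 };
    }
    return undef;
}

my $res = parse_import_volname('import/archive.ova/disk-1.vmdk');
print "archive: $res->{archive}, inner file: $res->{inner_file}\n" if $res;
```

Since the safe character class excludes '/', the archive part can only span a
single path component, which is what keeps the last path segment unambiguous.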
>
> we then provide 3 functions to use for that:
>
> * copy_needs_extraction: determines from the given volid (like above) if
> that needs extraction to copy it, currently only 'import' vtype + a
> volid with the above format returns true
>
> * extract_disk_from_import_file: this actually extracts the file from
> the archive. Currently only ova is supported, so the extraction with
> 'tar' is hardcoded, but again we can easily extend/modify that should
> we need to.
>
> we currently extract into either the import storage or a given
> target storage in the images directory so if the cleanup does not
> happen, the user can still see and interact with the image via
> api/cli/gui
>
> * cleanup_extracted_image: intended to cleanup the extracted images from
> above
>
> we have to modify `parse_ovf` a bit to handle the missing disk
> images, and we parse the size out of the ovf part (since this is
> informational only, it should be no problem if we cannot parse it sometimes)
>
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
> src/PVE/API2/Storage/Status.pm | 1 +
> src/PVE/GuestImport.pm | 100 +++++++++++++++++++++++++++++++++
> src/PVE/GuestImport/OVF.pm | 53 ++++++++++++++---
> src/PVE/Makefile | 1 +
> src/PVE/Storage.pm | 2 +-
> src/PVE/Storage/DirPlugin.pm | 15 ++++-
> src/PVE/Storage/Plugin.pm | 5 ++
> src/test/parse_volname_test.pm | 15 +++++
> 8 files changed, 182 insertions(+), 10 deletions(-)
> create mode 100644 src/PVE/GuestImport.pm
>
> diff --git a/src/PVE/API2/Storage/Status.pm b/src/PVE/API2/Storage/Status.pm
> index dc6cc69..acde730 100644
> --- a/src/PVE/API2/Storage/Status.pm
> +++ b/src/PVE/API2/Storage/Status.pm
> @@ -749,6 +749,7 @@ __PACKAGE__->register_method({
> 'efi-state-lost',
> 'guest-is-running',
> 'nvme-unsupported',
> + 'ova-needs-extracting',
> 'ovmf-with-lsi-unsupported',
> 'serial-port-socket-only',
> ],
> diff --git a/src/PVE/GuestImport.pm b/src/PVE/GuestImport.pm
> new file mode 100644
> index 0000000..d405e30
> --- /dev/null
> +++ b/src/PVE/GuestImport.pm
> @@ -0,0 +1,100 @@
> +package PVE::GuestImport;
> +
> +use strict;
> +use warnings;
> +
> +use File::Path;
> +
> +use PVE::Storage;
another circular module dependency..
> +use PVE::Tools qw(run_command);
> +
> +sub copy_needs_extraction {
> + my ($volid) = @_;
> + my $cfg = PVE::Storage::config();
> + my ($vtype, $name, undef, undef, undef, undef, $fmt) = PVE::Storage::parse_volname($cfg, $volid);
> +
> + # only volumes inside ovas need extraction
> + return $vtype eq 'import' && $fmt =~ m/^ova\+(.*)$/;
> +}
this could just as well live in qemu-server, there are only two call sites
in one module there.. one of them even already has the parsed volname ;)
> +
> +sub extract_disk_from_import_file {
I don't really like that this is using lots of plugin stuff..
> + my ($volid, $vmid, $target_storeid) = @_;
> +
> + my ($source_storeid, $volname) = PVE::Storage::parse_volume_id($volid);
> + $target_storeid //= $source_storeid;
> + my $cfg = PVE::Storage::config();
> + my $source_scfg = PVE::Storage::storage_config($cfg, $source_storeid);
> + my $source_plugin = PVE::Storage::Plugin->lookup($source_scfg->{type});
> +
> + my ($vtype, $name, undef, undef, undef, undef, $fmt) =
> + $source_plugin->parse_volname($volname);
could be PVE::Storage::parse_volname
> +
> + die "only files with content type 'import' can be extracted\n"
> + if $vtype ne 'import' || $fmt !~ m/^ova\+/;
> +
> + # extract the inner file from the name
> + my $archive;
> + my $inner_file;
> + if ($name =~ m!^(.*\.ova)/(${PVE::Storage::SAFE_CHAR_CLASS_RE}+)$!) {
> + $archive = "import/$1";
> + $inner_file = $2;
> + ($fmt) = $fmt =~ /^ova\+(.*)$/;
> + } else {
> + die "cannot extract $volid - invalid volname $volname\n";
> + }
> +
> + my ($ova_path) = $source_plugin->path($source_scfg, $archive, $source_storeid);
could be PVE::Storage::path
> +
> + my $target_scfg = PVE::Storage::storage_config($cfg, $target_storeid);
> + my $target_plugin = PVE::Storage::Plugin->lookup($target_scfg->{type});
> +
> + my $destdir = $target_plugin->get_subdir($target_scfg, 'images');
could be PVE::Storage::get_image_dir
> +
> + my $pid = $$;
> + $destdir .= "/tmp_${pid}_${vmid}";
> + mkpath $destdir;
> +
> + ($ova_path) = $ova_path =~ m|^(.*)$|; # untaint
> +
> + my $source_path = "$destdir/$inner_file";
> + my $target_path;
> + my $target_volname;
> + eval {
> + run_command(['tar', '-x', '--force-local', '-C', $destdir, '-f', $ova_path, $inner_file]);
> +
> + # check for symlinks and other non regular files
> + if (-l $source_path || ! -f $source_path) {
> + die "only regular files are allowed\n";
> + }
> +
> + my $target_diskname
> + = $target_plugin->find_free_diskname($target_storeid, $target_scfg, $vmid, $fmt, 1);
this here requires holding a lock until the diskname is actually used
(the rename below), else it's racy..
> + $target_volname = "$vmid/" . $target_diskname;
this encodes a fact about volname semantics that might not be a given
for external, dir-based plugins (not sure if we want to worry about that
though, or how to avoid it ;)).
> + $target_path = $target_plugin->filesystem_path($target_scfg, $target_volname);
this should be equivalent to PVE::Storage::path for DirPlugin based
storages?
> +
> + print "renaming $source_path to $target_path\n";
> + my $imagedir = $target_plugin->get_subdir($target_scfg, 'images');
we already did this above, but see comment there ;)
> + mkpath "$imagedir/$vmid";
> +
> + rename($source_path, $target_path) or die "unable to move - $!\n";
> + };
> + if (my $err = $@) {
> + unlink $source_path;
> + unlink $target_path if defined($target_path);
isn't this pretty much impossible to hit? the last thing we do in the
eval block is the rename - if that failed, $target_path can't exist yet.
if it didn't fail, we can't end up here?
> + rmdir $destdir;
this and unlink $source_path could just be a remove_tree on $destdir
instead, with less chance of leaving stuff around?
> + die "error during extraction: $err\n";
> + }
> +
> + rmdir $destdir;
could also be a remove_tree, just to be on the safe side?
> +
> + return "$target_storeid:$target_volname";
> +}
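as an aside, the extract-and-verify step above can be exercised standalone; a rough shell sketch (GNU tar assumed; the paths and file names here are demo values, not the real storage layout):

```shell
# build a fake OVA first - an OVA is just a tar archive (demo files)
demo=$(mktemp -d)
cd "$demo"
printf 'fake disk data' > disk.vmdk
tar -cf import.ova disk.vmdk

# extract only the requested inner file into a temporary directory,
# similar to what extract_disk_from_import_file does
mkdir extracted
tar -x --force-local -C extracted -f import.ova disk.vmdk

# reject symlinks and other non-regular files after extraction
if [ ! -L extracted/disk.vmdk ] && [ -f extracted/disk.vmdk ]; then
    echo "regular file, ok"
fi
```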
> +
> +sub cleanup_extracted_image {
> + my ($source) = @_;
> +
> + my $cfg = PVE::Storage::config();
> + PVE::Storage::vdisk_free($cfg, $source);
> +}
why do we need this helper, and not just call vdisk_free directly in
qemu-server (we do that in tons of places there as part of error
handling for freshly allocated volumes)?
> +
> +1;
> diff --git a/src/PVE/GuestImport/OVF.pm b/src/PVE/GuestImport/OVF.pm
> index 0eb5e9c..6b79078 100644
> --- a/src/PVE/GuestImport/OVF.pm
> +++ b/src/PVE/GuestImport/OVF.pm
> @@ -85,11 +85,37 @@ sub id_to_pve {
> }
> }
>
> +# technically defined in DSP0004 (https://www.dmtf.org/dsp/DSP0004) as an ABNF
> +# but realistically this always takes the form of 'byte * base^exponent'
> +sub try_parse_capacity_unit {
> + my ($unit_text) = @_;
> +
> + if ($unit_text =~ m/^\s*byte\s*\*\s*([0-9]+)\s*\^\s*([0-9]+)\s*$/) {
> + my $base = $1;
> + my $exp = $2;
> + return $base ** $exp;
> + }
> +
> + return undef;
> +}
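for context: the unit strings this regex targets typically look like 'byte * 2^20' (capacity counted in MiB) or 'byte * 2^30' (GiB); a rough bash port for illustration (not the actual Perl code):

```shell
# rough bash analogue of try_parse_capacity_unit:
# matches 'byte * base ^ exponent' with optional whitespace
parse_capacity_unit() {
    if [[ $1 =~ ^[[:space:]]*byte[[:space:]]*\*[[:space:]]*([0-9]+)[[:space:]]*\^[[:space:]]*([0-9]+)[[:space:]]*$ ]]; then
        echo $(( BASH_REMATCH[1] ** BASH_REMATCH[2] ))
    else
        return 1
    fi
}

parse_capacity_unit 'byte * 2^20'   # -> 1048576
parse_capacity_unit 'byte * 2^30'   # -> 1073741824
```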
> +
> # returns two references, $qm which holds qm.conf style key/values, and \@disks
> sub parse_ovf {
> - my ($ovf, $debug) = @_;
> + my ($ovf, $isOva, $debug) = @_;
> +
> + # we have to ignore missing disk images for ova
> + my $dom;
> + if ($isOva) {
> + my $raw = "";
> + PVE::Tools::run_command(['tar', '-xO', '--wildcards', '--occurrence=1', '-f', $ovf, '*.ovf'], outfunc => sub {
> + my $line = shift;
> + $raw .= $line;
> + });
> + $dom = XML::LibXML->load_xml(string => $raw, no_blanks => 1);
> + } else {
> + $dom = XML::LibXML->load_xml(location => $ovf, no_blanks => 1);
> + }
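the tar invocation quoted above can be tried in isolation like this (GNU tar; appliance.ovf/appliance.ova are made-up demo names):

```shell
# a minimal OVA: a tar archive with an .ovf descriptor plus disk images
demo=$(mktemp -d)
cd "$demo"
printf '<Envelope/>' > appliance.ovf
printf 'disk' > disk.vmdk
tar -cf appliance.ova appliance.ovf disk.vmdk

# stream only the first *.ovf member to stdout, as parse_ovf does for OVAs
tar -xO --wildcards --occurrence=1 -f appliance.ova '*.ovf'
```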
>
> - my $dom = XML::LibXML->load_xml(location => $ovf, no_blanks => 1);
>
> # register the xml namespaces in a xpath context object
> # 'ovf' is the default namespace so it will prepended to each xml element
> @@ -177,7 +203,17 @@ sub parse_ovf {
> # @ needs to be escaped to prevent Perl double quote interpolation
> my $xpath_find_fileref = sprintf("/ovf:Envelope/ovf:DiskSection/\
> ovf:Disk[\@ovf:diskId='%s']/\@ovf:fileRef", $disk_id);
> + my $xpath_find_capacity = sprintf("/ovf:Envelope/ovf:DiskSection/\
> +ovf:Disk[\@ovf:diskId='%s']/\@ovf:capacity", $disk_id);
> + my $xpath_find_capacity_unit = sprintf("/ovf:Envelope/ovf:DiskSection/\
> +ovf:Disk[\@ovf:diskId='%s']/\@ovf:capacityAllocationUnits", $disk_id);
> my $fileref = $xpc->findvalue($xpath_find_fileref);
> + my $capacity = $xpc->findvalue($xpath_find_capacity);
> + my $capacity_unit = $xpc->findvalue($xpath_find_capacity_unit);
> + my $virtual_size;
> + if (my $factor = try_parse_capacity_unit($capacity_unit)) {
> + $virtual_size = $capacity * $factor;
> + }
>
> my $valid_url_chars = qr@${valid_uripath_chars}|/@;
> if (!$fileref || $fileref !~ m/^${valid_url_chars}+$/) {
> @@ -217,7 +253,7 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
> die "error parsing $filepath, are you using a symlink ?\n";
> }
>
> - if (!-e $backing_file_path) {
> + if (!-e $backing_file_path && !$isOva) {
> die "error parsing $filepath, file seems not to exist at $backing_file_path\n";
> }
>
> @@ -225,16 +261,19 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
> ($filepath) = $filepath =~ m|^(${PVE::Storage::SAFE_CHAR_CLASS_RE}+)$|; # untaint & check no sub/parent dirs
> die "invalid path\n" if !$filepath;
>
> - my $virtual_size = PVE::Storage::file_size_info($backing_file_path);
> - die "error parsing $backing_file_path, cannot determine file size\n"
> - if !$virtual_size;
> + if (!$isOva) {
> + my $size = PVE::Storage::file_size_info($backing_file_path);
> + die "error parsing $backing_file_path, cannot determine file size\n"
> + if !$size;
>
> + $virtual_size = $size;
> + }
> $pve_disk = {
> disk_address => $pve_disk_address,
> backing_file => $backing_file_path,
> - virtual_size => $virtual_size
> relative_path => $filepath,
> };
> + $pve_disk->{virtual_size} = $virtual_size if defined($virtual_size);
> push @disks, $pve_disk;
>
> }
> diff --git a/src/PVE/Makefile b/src/PVE/Makefile
> index e15a275..0af3081 100644
> --- a/src/PVE/Makefile
> +++ b/src/PVE/Makefile
> @@ -5,6 +5,7 @@ install:
> install -D -m 0644 Storage.pm ${DESTDIR}${PERLDIR}/PVE/Storage.pm
> install -D -m 0644 Diskmanage.pm ${DESTDIR}${PERLDIR}/PVE/Diskmanage.pm
> install -D -m 0644 CephConfig.pm ${DESTDIR}${PERLDIR}/PVE/CephConfig.pm
> + install -D -m 0644 GuestImport.pm ${DESTDIR}${PERLDIR}/PVE/GuestImport.pm
> make -C Storage install
> make -C GuestImport install
> make -C API2 install
> diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
> index 1ed91c2..adc1b45 100755
> --- a/src/PVE/Storage.pm
> +++ b/src/PVE/Storage.pm
> @@ -114,7 +114,7 @@ our $VZTMPL_EXT_RE_1 = qr/\.tar\.(gz|xz|zst)/i;
>
> our $BACKUP_EXT_RE_2 = qr/\.(tgz|(?:tar|vma)(?:\.(${\PVE::Storage::Plugin::COMPRESSOR_RE}))?)/;
>
> -our $IMPORT_EXT_RE_1 = qr/\.(ovf|qcow2|raw|vmdk)/;
> +our $IMPORT_EXT_RE_1 = qr/\.(ova|ovf|qcow2|raw|vmdk)/;
>
> our $SAFE_CHAR_CLASS_RE = qr/[a-zA-Z0-9\-\.\+\=\_]/;
>
> diff --git a/src/PVE/Storage/DirPlugin.pm b/src/PVE/Storage/DirPlugin.pm
> index 3e3b1e7..ea89464 100644
> --- a/src/PVE/Storage/DirPlugin.pm
> +++ b/src/PVE/Storage/DirPlugin.pm
> @@ -258,15 +258,26 @@ sub get_import_metadata {
> # NOTE: all types of warnings must be added to the return schema of the import-metadata API endpoint
> my $warnings = [];
>
> + my $isOva = 0;
> + if ($name =~ m/\.ova$/) {
> + $isOva = 1;
> + push @$warnings, { type => 'ova-needs-extracting' };
> + }
> my $path = $class->path($scfg, $volname, $storeid, undef);
> - my $res = PVE::GuestImport::OVF::parse_ovf($path);
> + my $res = PVE::GuestImport::OVF::parse_ovf($path, $isOva);
> my $disks = {};
> for my $disk ($res->{disks}->@*) {
> my $id = $disk->{disk_address};
> my $size = $disk->{virtual_size};
> my $path = $disk->{relative_path};
> + my $volid;
> + if ($isOva) {
> + $volid = "$storeid:$volname/$path";
> + } else {
> + $volid = "$storeid:import/$path",
> + }
> $disks->{$id} = {
> - volid => "$storeid:import/$path",
> + volid => $volid,
> defined($size) ? (size => $size) : (),
> };
> }
> diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
> index 33f0f3a..640d156 100644
> --- a/src/PVE/Storage/Plugin.pm
> +++ b/src/PVE/Storage/Plugin.pm
> @@ -654,6 +654,11 @@ sub parse_volname {
> return ('backup', $fn);
> } elsif ($volname =~ m!^snippets/([^/]+)$!) {
> return ('snippets', $1);
> + } elsif ($volname =~ m!^import/(${PVE::Storage::SAFE_CHAR_CLASS_RE}+\.ova\/(${PVE::Storage::SAFE_CHAR_CLASS_RE}+))$!) {
> + my $archive = $1;
> + my $file = $2;
> + my (undef, $format, undef) = parse_name_dir($file);
> + return ('import', $archive, undef, undef, undef, undef, "ova+$format");
these could be improved if the format was already checked in the elsif
condition I think, since the error message of parse_name_dir is a bit
opaque/lacking context.. also, parse_name_dir allows subvol, which we
don't want to allow here I think?
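for illustration, the new branch roughly matches volnames like this (bash sketch; the character class only approximates SAFE_CHAR_CLASS_RE):

```shell
# rough analogue of the new parse_volname branch for files inside an OVA
parse_ova_volname() {
    if [[ $1 =~ ^import/([A-Za-z0-9.+=_-]+\.ova)/([A-Za-z0-9.+=_-]+)$ ]]; then
        echo "archive=${BASH_REMATCH[1]} file=${BASH_REMATCH[2]}"
    else
        return 1
    fi
}

parse_ova_volname 'import/import.ova/disk.qcow2'
# archive=import.ova file=disk.qcow2
```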
> } elsif ($volname =~ m!^import/(${PVE::Storage::SAFE_CHAR_CLASS_RE}+$PVE::Storage::IMPORT_EXT_RE_1)$!) {
> return ('import', $1, undef, undef, undef, undef, $2);
> }
> diff --git a/src/test/parse_volname_test.pm b/src/test/parse_volname_test.pm
> index a8c746f..bc7b4e8 100644
> --- a/src/test/parse_volname_test.pm
> +++ b/src/test/parse_volname_test.pm
> @@ -93,6 +93,21 @@ my $tests = [
> volname => 'import/import.ovf',
> expected => ['import', 'import.ovf', undef, undef, undef ,undef, 'ovf'],
> },
> + {
> + description => "Import, inner file of ova",
> + volname => 'import/import.ova/disk.qcow2',
> + expected => ['import', 'import.ova/disk.qcow2', undef, undef, undef, undef, 'ova+qcow2'],
> + },
> + {
> + description => "Import, inner file of ova",
> + volname => 'import/import.ova/disk.vmdk',
> + expected => ['import', 'import.ova/disk.vmdk', undef, undef, undef, undef, 'ova+vmdk'],
> + },
> + {
> + description => "Import, inner file of ova",
> + volname => 'import/import.ova/disk.raw',
> + expected => ['import', 'import.ova/disk.raw', undef, undef, undef, undef, 'ova+raw'],
> + },
> #
> # failed matches
> #
> --
> 2.39.2
>
>
>
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
>
>
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [pve-devel] [PATCH storage v3 08/10] api: allow ova upload/download
2024-04-29 11:21 ` [pve-devel] [PATCH storage v3 08/10] api: allow ova upload/download Dominik Csapak
@ 2024-05-22 10:20 ` Fabian Grünbichler
0 siblings, 0 replies; 38+ messages in thread
From: Fabian Grünbichler @ 2024-05-22 10:20 UTC (permalink / raw)
To: Proxmox VE development discussion
On April 29, 2024 1:21 pm, Dominik Csapak wrote:
> introducing a separate regex that only contains ova, since
> upload/downloading ovfs does not make sense (since the disks are then
> missing).
>
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
> src/PVE/API2/Storage/Status.pm | 18 ++++++++++++++----
> src/PVE/Storage.pm | 11 +++++++++++
> 2 files changed, 25 insertions(+), 4 deletions(-)
>
> diff --git a/src/PVE/API2/Storage/Status.pm b/src/PVE/API2/Storage/Status.pm
> index acde730..6c0c1e5 100644
> --- a/src/PVE/API2/Storage/Status.pm
> +++ b/src/PVE/API2/Storage/Status.pm
> @@ -369,7 +369,7 @@ __PACKAGE__->register_method ({
> name => 'upload',
> path => '{storage}/upload',
> method => 'POST',
> - description => "Upload templates and ISO images.",
> + description => "Upload templates, ISO images and OVAs.",
> permissions => {
> check => ['perm', '/storage/{storage}', ['Datastore.AllocateTemplate']],
> },
> @@ -382,7 +382,7 @@ __PACKAGE__->register_method ({
> content => {
> description => "Content type.",
> type => 'string', format => 'pve-storage-content',
> - enum => ['iso', 'vztmpl'],
> + enum => ['iso', 'vztmpl', 'import'],
> },
> filename => {
> description => "The name of the file to create. Caution: This will be normalized!",
> @@ -448,6 +448,11 @@ __PACKAGE__->register_method ({
> raise_param_exc({ filename => "wrong file extension" });
> }
> $path = PVE::Storage::get_vztmpl_dir($cfg, $param->{storage});
> + } elsif ($content eq 'import') {
> + if ($filename !~ m![^/]+$PVE::Storage::UPLOAD_IMPORT_EXT_RE_1$!) {
> + raise_param_exc({ filename => "wrong file extension" });
> + }
> + $path = PVE::Storage::get_import_dir($cfg, $param->{storage});
> } else {
> raise_param_exc({ content => "upload content type '$content' not allowed" });
> }
> @@ -544,7 +549,7 @@ __PACKAGE__->register_method({
> name => 'download_url',
> path => '{storage}/download-url',
> method => 'POST',
> - description => "Download templates and ISO images by using an URL.",
> + description => "Download templates, ISO images and OVAs by using an URL.",
> proxyto => 'node',
> permissions => {
> description => 'Requires allocation access on the storage and as this allows one to probe'
> @@ -572,7 +577,7 @@ __PACKAGE__->register_method({
> content => {
> description => "Content type.", # TODO: could be optional & detected in most cases
> type => 'string', format => 'pve-storage-content',
> - enum => ['iso', 'vztmpl'],
> + enum => ['iso', 'vztmpl', 'import'],
> },
> filename => {
> description => "The name of the file to create. Caution: This will be normalized!",
> @@ -642,6 +647,11 @@ __PACKAGE__->register_method({
> raise_param_exc({ filename => "wrong file extension" });
> }
> $path = PVE::Storage::get_vztmpl_dir($cfg, $storage);
> + } elsif ($content eq 'import') {
> + if ($filename !~ m![^/]+$PVE::Storage::UPLOAD_IMPORT_EXT_RE_1$!) {
was a bit stumped here, but the others have it as well - $filename is
normalized first and that removes any slashes anyway. this also means
uploaded OVAs only have a subset of characters compared to what we
accept otherwise. do we still want to be extra-cautious in case we relax
the normalization in the future, and check for the same characters we
allow otherwise? would be rather weird if users can upload files but
possibly not even see them afterwards ^^
> + raise_param_exc({ filename => "wrong file extension" });
> + }
> + $path = PVE::Storage::get_import_dir($cfg, $param->{storage});
> } else {
> raise_param_exc({ content => "upload content-type '$content' is not allowed" });
> }
> diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
> index adc1b45..31b2ad5 100755
> --- a/src/PVE/Storage.pm
> +++ b/src/PVE/Storage.pm
> @@ -116,6 +116,8 @@ our $BACKUP_EXT_RE_2 = qr/\.(tgz|(?:tar|vma)(?:\.(${\PVE::Storage::Plugin::COMPR
>
> our $IMPORT_EXT_RE_1 = qr/\.(ova|ovf|qcow2|raw|vmdk)/;
>
> +our $UPLOAD_IMPORT_EXT_RE_1 = qr/\.(ova)/;
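to make the intent concrete, a small shell check of which filenames the new upload regex would accept (bash sketch; anchors added the way the API code does with `[^/]+...$`):

```shell
# mimic $filename =~ m![^/]+$UPLOAD_IMPORT_EXT_RE_1$! from the API code
is_uploadable_import() {
    [[ $1 =~ [^/]+\.ova$ ]]
}

for f in appliance.ova appliance.ovf disk.qcow2; do
    if is_uploadable_import "$f"; then echo "accept $f"; else echo "reject $f"; fi
done
```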
> +
> our $SAFE_CHAR_CLASS_RE = qr/[a-zA-Z0-9\-\.\+\=\_]/;
>
> # FIXME remove with PVE 8.0, add versioned breaks for pve-manager
> @@ -464,6 +466,15 @@ sub get_iso_dir {
> return $plugin->get_subdir($scfg, 'iso');
> }
>
> +sub get_import_dir {
> + my ($cfg, $storeid) = @_;
> +
> + my $scfg = storage_config($cfg, $storeid);
> + my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
> +
> + return $plugin->get_subdir($scfg, 'import');
> +}
> +
> sub get_vztmpl_dir {
> my ($cfg, $storeid) = @_;
>
> --
> 2.39.2
>
>
>
* Re: [pve-devel] [PATCH qemu-server v3 1/4] api: delete unused OVF.pm
2024-04-29 11:21 ` [pve-devel] [PATCH qemu-server v3 1/4] api: delete unused OVF.pm Dominik Csapak
@ 2024-05-22 10:25 ` Fabian Grünbichler
2024-05-22 10:26 ` Fabian Grünbichler
0 siblings, 1 reply; 38+ messages in thread
From: Fabian Grünbichler @ 2024-05-22 10:25 UTC (permalink / raw)
To: Proxmox VE development discussion
On April 29, 2024 1:21 pm, Dominik Csapak wrote:
> the api part was never in use by anything
>
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
> PVE/API2/Qemu/Makefile | 2 +-
> PVE/API2/Qemu/OVF.pm | 53 ------------------------------------------
as noted in the pve-storage patch, this should also drop
libxml-libxml-perl from d/control here..
> 2 files changed, 1 insertion(+), 54 deletions(-)
> delete mode 100644 PVE/API2/Qemu/OVF.pm
>
* Re: [pve-devel] [PATCH qemu-server v3 1/4] api: delete unused OVF.pm
2024-05-22 10:25 ` Fabian Grünbichler
@ 2024-05-22 10:26 ` Fabian Grünbichler
0 siblings, 0 replies; 38+ messages in thread
From: Fabian Grünbichler @ 2024-05-22 10:26 UTC (permalink / raw)
To: Proxmox VE development discussion
On May 22, 2024 12:25 pm, Fabian Grünbichler wrote:
> On April 29, 2024 1:21 pm, Dominik Csapak wrote:
>> the api part was never in use by anything
>>
>> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
>> ---
>> PVE/API2/Qemu/Makefile | 2 +-
>> PVE/API2/Qemu/OVF.pm | 53 ------------------------------------------
>
> as noted in the pve-storage patch, this should also drop
> libxml-libxml-perl from d/control here..
sorry, this was meant for the next patch in qemu-server ;)
* Re: [pve-devel] [PATCH qemu-server v3 4/4] api: create: add 'import-extraction-storage' parameter
2024-04-29 11:21 ` [pve-devel] [PATCH qemu-server v3 4/4] api: create: add 'import-extraction-storage' parameter Dominik Csapak
@ 2024-05-22 12:16 ` Fabian Grünbichler
0 siblings, 0 replies; 38+ messages in thread
From: Fabian Grünbichler @ 2024-05-22 12:16 UTC (permalink / raw)
To: Proxmox VE development discussion
On April 29, 2024 1:21 pm, Dominik Csapak wrote:
> this is to override the target extraction storage for the optional disk
> extraction for 'import-from'. This way, if the storage does not
> support the content type 'images', one can give an alternative one.
>
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
> PVE/API2/Qemu.pm | 56 +++++++++++++++++++++++++++++++++++++++++-------
> 1 file changed, 48 insertions(+), 8 deletions(-)
>
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index d32967dc..74d0e240 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -128,7 +128,9 @@ my $check_drive_param = sub {
> };
>
> my $check_storage_access = sub {
> - my ($rpcenv, $authuser, $storecfg, $vmid, $settings, $default_storage) = @_;
> + my ($rpcenv, $authuser, $storecfg, $vmid, $settings, $default_storage, $extraction_storage) = @_;
> +
> + my $needs_extraction = 0;
this is not needed
>
> $foreach_volume_with_alloc->($settings, sub {
> my ($ds, $drive) = @_;
> @@ -169,9 +171,13 @@ my $check_storage_access = sub {
> if $vtype ne 'images' && $vtype ne 'import';
>
> if (PVE::GuestImport::copy_needs_extraction($src_image)) {
> - raise_param_exc({ $ds => "$src_image is not on an storage with 'images' content type."})
> - if !$scfg->{content}->{images};
> - $rpcenv->check($authuser, "/storage/$storeid", ['Datastore.AllocateSpace']);
> + $needs_extraction = 1;
> + if (!defined($extraction_storage)) {
> + raise_param_exc({ $ds => "$src_image is not on an storage with 'images'"
> + ." content type and no 'import-extraction-storage' was given."})
> + if !$scfg->{content}->{images};
> + $rpcenv->check($authuser, "/storage/$storeid", ['Datastore.AllocateSpace']);
> + }
> }
> }
>
> @@ -183,6 +189,14 @@ my $check_storage_access = sub {
> }
> });
>
> + if ($needs_extraction && defined($extraction_storage)) {
> + my $scfg = PVE::Storage::storage_config($storecfg, $extraction_storage);
> + raise_param_exc({ 'import-extraction-storage' => "$extraction_storage does not support"
> + ." 'images' content type or is not file based."})
> + if !$scfg->{content}->{images} || !$scfg->{path};
> + $rpcenv->check($authuser, "/storage/$extraction_storage", ['Datastore.AllocateSpace']);
> + }
> +
because this can just be moved up to / merged with the code where the
no-explicit-extraction-storage case is handled..
> $rpcenv->check($authuser, "/storage/$settings->{vmstatestorage}", ['Datastore.AllocateSpace'])
> if defined($settings->{vmstatestorage});
> };
> @@ -326,7 +340,7 @@ my $import_from_volid = sub {
>
> # Note: $pool is only needed when creating a VM, because pool permissions
> # are automatically inherited if VM already exists inside a pool.
> -my sub create_disks : prototype($$$$$$$$$$) {
> +my sub create_disks : prototype($$$$$$$$$$$) {
> my (
> $rpcenv,
> $authuser,
> @@ -338,6 +352,7 @@ my sub create_disks : prototype($$$$$$$$$$) {
> $settings,
> $default_storage,
> $is_live_import,
> + $extraction_storage,
> ) = @_;
>
> my $vollist = [];
> @@ -405,7 +420,8 @@ my sub create_disks : prototype($$$$$$$$$$) {
> if (PVE::Storage::parse_volume_id($source, 1)) { # PVE-managed volume
> if (PVE::GuestImport::copy_needs_extraction($source)) { # needs extraction beforehand
> print "extracting $source\n";
should we mention the storage here as well?
> - $source = PVE::GuestImport::extract_disk_from_import_file($source, $vmid);
> + $source = PVE::GuestImport::extract_disk_from_import_file(
> + $source, $vmid, $extraction_storage);
> print "finished extracting to $source\n";
> push @$delete_sources, $source;
> }
> @@ -925,6 +941,12 @@ __PACKAGE__->register_method({
> default => 0,
> description => "Start VM after it was created successfully.",
> },
> + 'import-extraction-storage' => get_standard_option('pve-storage-id', {
> + description => "Storage to put extracted images when using 'import-from' that"
> + ." needs extraction",
something something "temporary" ;)
maybe
"Storage for temporarily extracted `import-from` image files (default:
import source storage)."
or something like that?
> + optional => 1,
> + completion => \&PVE::QemuServer::complete_storage,
> + }),
> },
> 1, # with_disk_alloc
> ),
> @@ -951,6 +973,7 @@ __PACKAGE__->register_method({
> my $storage = extract_param($param, 'storage');
> my $unique = extract_param($param, 'unique');
> my $live_restore = extract_param($param, 'live-restore');
> + my $extraction_storage = extract_param($param, 'import-extraction-storage');
>
> if (defined(my $ssh_keys = $param->{sshkeys})) {
> $ssh_keys = URI::Escape::uri_unescape($ssh_keys);
> @@ -1010,7 +1033,8 @@ __PACKAGE__->register_method({
> if (scalar(keys $param->%*) > 0) {
> &$resolve_cdrom_alias($param);
>
> - &$check_storage_access($rpcenv, $authuser, $storecfg, $vmid, $param, $storage);
> + &$check_storage_access(
> + $rpcenv, $authuser, $storecfg, $vmid, $param, $storage, $extraction_storage);
>
> &$check_vm_modify_config_perm($rpcenv, $authuser, $vmid, $pool, [ keys %$param]);
>
> @@ -1126,6 +1150,7 @@ __PACKAGE__->register_method({
> $param,
> $storage,
> $live_restore,
> + $extraction_storage
> );
> $conf->{$_} = $created_opts->{$_} for keys $created_opts->%*;
>
> @@ -1672,6 +1697,8 @@ my $update_vm_api = sub {
>
> my $skip_cloud_init = extract_param($param, 'skip_cloud_init');
>
> + my $extraction_storage = extract_param($param, 'import-extraction-storage');
> +
> if (defined(my $cipassword = $param->{cipassword})) {
> # Same logic as in cloud-init (but with the regex fixed...)
> $param->{cipassword} = PVE::Tools::encrypt_pw($cipassword)
> @@ -1791,7 +1818,7 @@ my $update_vm_api = sub {
>
> &$check_vm_modify_config_perm($rpcenv, $authuser, $vmid, undef, [keys %$param]);
>
> - &$check_storage_access($rpcenv, $authuser, $storecfg, $vmid, $param);
> + &$check_storage_access($rpcenv, $authuser, $storecfg, $vmid, $param, $extraction_storage);
perl strikes again - this is missing an undef!
>
> PVE::QemuServer::check_bridge_access($rpcenv, $authuser, $param);
>
> @@ -1973,6 +2000,7 @@ my $update_vm_api = sub {
> {$opt => $param->{$opt}},
> undef,
> undef,
> + $extraction_storage,
> );
> $conf->{pending}->{$_} = $created_opts->{$_} for keys $created_opts->%*;
>
> @@ -2170,6 +2198,12 @@ __PACKAGE__->register_method({
> maximum => 30,
> optional => 1,
> },
> + 'import-extraction-storage' => get_standard_option('pve-storage-id', {
> + description => "Storage to put extracted images when using 'import-from' that"
> + ." needs extraction",
same as above..
> + optional => 1,
> + completion => \&PVE::QemuServer::complete_storage,
> + }),
> },
> 1, # with_disk_alloc
> ),
> @@ -2220,6 +2254,12 @@ __PACKAGE__->register_method({
> maxLength => 40,
> optional => 1,
> },
> + 'import-extraction-storage' => get_standard_option('pve-storage-id', {
> + description => "Storage to put extracted images when using 'import-from' that"
> + ." needs extraction",
> + optional => 1,
> + completion => \&PVE::QemuServer::complete_storage,
> + }),
here as well, but do we really need this here? by definition the PUT
variant is wrong for such a use case..
> },
> 1, # with_disk_alloc
> ),
> --
> 2.39.2
>
>
>
* Re: [pve-devel] [PATCH qemu-server v3 3/4] api: create: implement extracting disks when needed for import-from
2024-04-29 11:21 ` [pve-devel] [PATCH qemu-server v3 3/4] api: create: implement extracting disks when needed for import-from Dominik Csapak
@ 2024-05-22 12:55 ` Fabian Grünbichler
0 siblings, 0 replies; 38+ messages in thread
From: Fabian Grünbichler @ 2024-05-22 12:55 UTC (permalink / raw)
To: Proxmox VE development discussion
On April 29, 2024 1:21 pm, Dominik Csapak wrote:
> when 'import-from' contains a disk image that needs extraction
> (currently only from an 'ova' archive), do that in 'create_disks'
> and overwrite the '$source' volid.
>
> Collect the names into a 'delete_sources' list, that we use later
> to clean it up again (either when we're finished with importing or in an
> error case).
>
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
> PVE/API2/Qemu.pm | 44 ++++++++++++++++++++++++++++++---------
> PVE/QemuServer.pm | 5 ++++-
> PVE/QemuServer/Helpers.pm | 10 +++++++++
> 3 files changed, 48 insertions(+), 11 deletions(-)
>
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index 2a349c8c..d32967dc 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -24,6 +24,7 @@ use PVE::JSONSchema qw(get_standard_option);
> use PVE::RESTHandler;
> use PVE::ReplicationConfig;
> use PVE::GuestHelpers qw(assert_tag_permissions);
> +use PVE::GuestImport;
> use PVE::QemuConfig;
> use PVE::QemuServer;
> use PVE::QemuServer::Cloudinit;
> @@ -159,10 +160,19 @@ my $check_storage_access = sub {
>
> if (my $src_image = $drive->{'import-from'}) {
> my $src_vmid;
> - if (PVE::Storage::parse_volume_id($src_image, 1)) { # PVE-managed volume
> - (my $vtype, undef, $src_vmid) = PVE::Storage::parse_volname($storecfg, $src_image);
> - raise_param_exc({ $ds => "$src_image has wrong type '$vtype' - not an image" })
> - if $vtype ne 'images';
> + if (my ($storeid, $volname) = PVE::Storage::parse_volume_id($src_image, 1)) { # PVE-managed volume
> + my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
> + my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
> + (my $vtype, undef, $src_vmid) = $plugin->parse_volname($volname);
please use PVE::Storage instead!
> +
> + raise_param_exc({ $ds => "$src_image has wrong type '$vtype' - needs to be 'images' or 'import'" })
> + if $vtype ne 'images' && $vtype ne 'import';
> +
> + if (PVE::GuestImport::copy_needs_extraction($src_image)) {
as noted in the patch introducing that helper, it could just be
inlined here..
> + raise_param_exc({ $ds => "$src_image is not on an storage with 'images' content type."})
> + if !$scfg->{content}->{images};
> + $rpcenv->check($authuser, "/storage/$storeid", ['Datastore.AllocateSpace']);
> + }
> }
>
> if ($src_vmid) { # might be actively used by VM and will be copied via clone_disk()
> @@ -335,6 +345,7 @@ my sub create_disks : prototype($$$$$$$$$$) {
> my $res = {};
>
> my $live_import_mapping = {};
> + my $delete_sources = [];
we already have a list of created volumes here that are cleaned up on
error ($vollist), so this is just to also clean them up after importing
if that worked? and then, it's basically just for live importing (since
for non-live imports, we can just free the volume right after the import
was successful?)? but live imports already have their own hash anyway
($live_import_mapping), we could just annotate the volume there?
>
> my $code = sub {
> my ($ds, $disk) = @_;
> @@ -392,6 +403,12 @@ my sub create_disks : prototype($$$$$$$$$$) {
> $needs_creation = $live_import;
>
> if (PVE::Storage::parse_volume_id($source, 1)) { # PVE-managed volume
> + if (PVE::GuestImport::copy_needs_extraction($source)) { # needs extraction beforehand
> + print "extracting $source\n";
> + $source = PVE::GuestImport::extract_disk_from_import_file($source, $vmid);
> + print "finished extracting to $source\n";
this is a bit hard to follow, it might be more readable to do
my $extracted_volid = ..;
$source = $extracted_volid;
even if the end result is the same, it makes it much more explicit what
is happening here with $source?
> + push @$delete_sources, $source;
this could just push to $vollist I think..
> + }
> if ($live_import && $ds ne 'efidisk0') {
> my $path = PVE::Storage::path($storecfg, $source)
> or die "failed to get a path for '$source'\n";
> @@ -514,13 +531,14 @@ my sub create_disks : prototype($$$$$$$$$$) {
> eval { PVE::Storage::vdisk_free($storecfg, $volid); };
> warn $@ if $@;
> }
> + PVE::QemuServer::Helpers::cleanup_extracted_images($delete_sources);
then this would not be needed, since we cleanup all the freshly
allocated volumes above anyway..
> die $err;
> }
>
> # don't return empty import mappings
> $live_import_mapping = undef if !%$live_import_mapping;
>
> - return ($vollist, $res, $live_import_mapping);
> + return ($vollist, $res, $live_import_mapping, $delete_sources);
this can then be dropped as well..
> };
>
> my $check_cpu_model_access = sub {
> @@ -1079,6 +1097,7 @@ __PACKAGE__->register_method({
>
> my $createfn = sub {
> my $live_import_mapping = {};
> + my $delete_sources = [];
so can this
>
> # ensure no old replication state are exists
> PVE::ReplicationState::delete_guest_states($vmid);
> @@ -1096,7 +1115,7 @@ __PACKAGE__->register_method({
>
> my $vollist = [];
> eval {
> - ($vollist, my $created_opts, $live_import_mapping) = create_disks(
> + ($vollist, my $created_opts, $live_import_mapping, $delete_sources) = create_disks(
and this
> $rpcenv,
> $authuser,
> $conf,
> @@ -1149,6 +1168,7 @@ __PACKAGE__->register_method({
> eval { PVE::Storage::vdisk_free($storecfg, $volid); };
> warn $@ if $@;
> }
> + PVE::QemuServer::Helpers::cleanup_extracted_images($delete_sources);
and this :)
> die "$emsg $err";
> }
>
> @@ -1165,7 +1185,7 @@ __PACKAGE__->register_method({
> warn $@ if $@;
> return;
> } else {
> - return $live_import_mapping;
> + return ($live_import_mapping, $delete_sources);
as well as this
> }
> };
>
> @@ -1192,7 +1212,7 @@ __PACKAGE__->register_method({
> $code = sub {
> # If a live import was requested the create function returns
> # the mapping for the startup.
> - my $live_import_mapping = eval { $createfn->() };
> + my ($live_import_mapping, $delete_sources) = eval { $createfn->() };
this
> if (my $err = $@) {
> eval {
> my $conffile = PVE::QemuConfig->config_file($vmid);
> @@ -1214,7 +1234,10 @@ __PACKAGE__->register_method({
> $vmid,
> $conf,
> $import_options,
> + $delete_sources,
this
> );
> + } else {
> + PVE::QemuServer::Helpers::cleanup_extracted_images($delete_sources);
and this?
> }
> };
> }
> @@ -1939,8 +1962,7 @@ my $update_vm_api = sub {
>
> assert_scsi_feature_compatibility($opt, $conf, $storecfg, $param->{$opt})
> if $opt =~ m/^scsi\d+$/;
> -
> - my (undef, $created_opts) = create_disks(
> + my (undef, $created_opts, undef, $delete_sources) = create_disks(
not needed either
> $rpcenv,
> $authuser,
> $conf,
> @@ -1954,6 +1976,8 @@ my $update_vm_api = sub {
> );
> $conf->{pending}->{$_} = $created_opts->{$_} for keys $created_opts->%*;
>
> + PVE::QemuServer::Helpers::cleanup_extracted_images($delete_sources);
same here
> +
> # default legacy boot order implies all cdroms anyway
> if (@bootorder) {
> # append new CD drives to bootorder to mark them bootable
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index 82e7d6a6..4bd0ae85 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -7303,7 +7303,7 @@ sub pbs_live_restore {
> # therefore already handled in the `$create_disks()` call happening in the
> # `create` api call
> sub live_import_from_files {
> - my ($mapping, $vmid, $conf, $restore_options) = @_;
> + my ($mapping, $vmid, $conf, $restore_options, $delete_sources) = @_;
here
>
> my $live_restore_backing = {};
> for my $dev (keys %$mapping) {
> @@ -7364,6 +7364,8 @@ sub live_import_from_files {
> mon_cmd($vmid, 'blockdev-del', 'node-name' => "drive-$ds-restore");
> }
>
> + PVE::QemuServer::Helpers::cleanup_extracted_images($delete_sources);
and this could then just free based on a flag in the mapping..
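a rough sketch of that idea (the `delete-after-finish` key and the mapping layout here are assumptions for illustration, not existing code):

```perl
# hypothetical: mark temporary extracted images directly in the live-import
# mapping entry instead of threading a separate $delete_sources list through
$mapping->{$dev} = {
    path => $extracted_volid,
    'delete-after-finish' => 1, # assumed flag name
};

# ...then free the flagged sources once the live import is done:
for my $dev (keys %$mapping) {
    next if !$mapping->{$dev}->{'delete-after-finish'};
    eval { PVE::Storage::vdisk_free($storecfg, $mapping->{$dev}->{path}) };
    warn $@ if $@;
}
```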
> +
> close($qmeventd_fd);
> };
>
> @@ -7372,6 +7374,7 @@ sub live_import_from_files {
> if ($err) {
> warn "An error occurred during live-restore: $err\n";
> _do_vm_stop($storecfg, $vmid, 1, 1, 10, 0, 1);
> + PVE::QemuServer::Helpers::cleanup_extracted_images($delete_sources);
and here as well..
> die "live-restore failed\n";
> }
>
> diff --git a/PVE/QemuServer/Helpers.pm b/PVE/QemuServer/Helpers.pm
> index 0afb6317..f6bec1d4 100644
> --- a/PVE/QemuServer/Helpers.pm
> +++ b/PVE/QemuServer/Helpers.pm
> @@ -6,6 +6,7 @@ use warnings;
> use File::stat;
> use JSON;
>
> +use PVE::GuestImport;
> use PVE::INotify;
> use PVE::ProcFSTools;
>
> @@ -225,4 +226,13 @@ sub windows_version {
> return $winversion;
> }
>
> +sub cleanup_extracted_images {
> + my ($delete_sources) = @_;
> +
> + for my $source (@$delete_sources) {
> + eval { PVE::GuestImport::cleanup_extracted_image($source) };
> + warn $@ if $@;
> + }
> +}
> +
and this can then be dropped, since it's just a wrapper around a helper
that is itself just a wrapper of vdisk_free..
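i.e. the call sites could then just inline the loop, something like:

```perl
# free any temporarily extracted source images directly via vdisk_free,
# matching the error-handling style used elsewhere in qemu-server
for my $volid (@$delete_sources) {
    eval { PVE::Storage::vdisk_free($storecfg, $volid) };
    warn $@ if $@;
}
```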
> 1;
> --
> 2.39.2
>
>
>
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
>
>
* Re: [pve-devel] [PATCH storage v3 03/10] plugin: dir: handle ova files for import
2024-04-29 11:21 ` [pve-devel] [PATCH storage v3 03/10] plugin: dir: handle ova files for import Dominik Csapak
2024-05-22 10:08 ` Fabian Grünbichler
@ 2024-05-22 13:13 ` Fabian Grünbichler
1 sibling, 0 replies; 38+ messages in thread
From: Fabian Grünbichler @ 2024-05-22 13:13 UTC (permalink / raw)
To: Proxmox VE development discussion
On April 29, 2024 1:21 pm, Dominik Csapak wrote:
> diff --git a/src/PVE/GuestImport.pm b/src/PVE/GuestImport.pm
> new file mode 100644
> index 0000000..d405e30
> --- /dev/null
> +++ b/src/PVE/GuestImport.pm
> @@ -0,0 +1,100 @@
> +package PVE::GuestImport;
> +
> +use strict;
> +use warnings;
> +
> +use File::Path;
> +
> +use PVE::Storage;
> +use PVE::Tools qw(run_command);
> +
> +sub copy_needs_extraction {
> + my ($volid) = @_;
> + my $cfg = PVE::Storage::config();
> + my ($vtype, $name, undef, undef, undef, undef, $fmt) = PVE::Storage::parse_volname($cfg, $volid);
> +
> + # only volumes inside ovas need extraction
> + return $vtype eq 'import' && $fmt =~ m/^ova\+(.*)$/;
> +}
> +
> +sub extract_disk_from_import_file {
> + my ($volid, $vmid, $target_storeid) = @_;
> +
> + my ($source_storeid, $volname) = PVE::Storage::parse_volume_id($volid);
> + $target_storeid //= $source_storeid;
> + my $cfg = PVE::Storage::config();
> + my $source_scfg = PVE::Storage::storage_config($cfg, $source_storeid);
> + my $source_plugin = PVE::Storage::Plugin->lookup($source_scfg->{type});
> +
> + my ($vtype, $name, undef, undef, undef, undef, $fmt) =
> + $source_plugin->parse_volname($volname);
> +
> + die "only files with content type 'import' can be extracted\n"
> + if $vtype ne 'import' || $fmt !~ m/^ova\+/;
> +
> + # extract the inner file from the name
> + my $archive;
> + my $inner_file;
> + if ($name =~ m!^(.*\.ova)/(${PVE::Storage::SAFE_CHAR_CLASS_RE}+)$!) {
> + $archive = "import/$1";
> + $inner_file = $2;
> + ($fmt) = $fmt =~ /^ova\+(.*)$/;
> + } else {
> + die "cannot extract $volid - invalid volname $volname\n";
> + }
> +
> + my ($ova_path) = $source_plugin->path($source_scfg, $archive, $source_storeid);
> +
> + my $target_scfg = PVE::Storage::storage_config($cfg, $target_storeid);
> + my $target_plugin = PVE::Storage::Plugin->lookup($target_scfg->{type});
> +
> + my $destdir = $target_plugin->get_subdir($target_scfg, 'images');
> +
> + my $pid = $$;
> + $destdir .= "/tmp_${pid}_${vmid}";
> + mkpath $destdir;
> +
> + ($ova_path) = $ova_path =~ m|^(.*)$|; # untaint
> +
> + my $source_path = "$destdir/$inner_file";
> + my $target_path;
> + my $target_volname;
> + eval {
> + run_command(['tar', '-x', '--force-local', '-C', $destdir, '-f', $ova_path, $inner_file]);
> +
> + # check for symlinks and other non regular files
> + if (-l $source_path || ! -f $source_path) {
> + die "only regular files are allowed\n";
> + }
> +
> + my $target_diskname
> + = $target_plugin->find_free_diskname($target_storeid, $target_scfg, $vmid, $fmt, 1);
thought some more about this part. I don't think we currently consider
find_free_diskname to be public API for consumption outside of plugins
(rightfully so, IMHO).
I wonder how we could avoid that problem here. we could extend the
existing rename feature to allow moving from an arbitrary path to the
target storage (but that is risky, since it might mean we are copying
and not moving/renaming, unless we add extra checks)?
or we could handle the "temp extracted volume" as a volume, allowing a
regular PVE::Storage::rename_volume call to work?
that might risk breaking existing external plugins, but we could bump
APIVER and APIAGE and check APIVER to only allow storages as extraction
target that explicitly opted into it?
> + $target_volname = "$vmid/" . $target_diskname;
> + $target_path = $target_plugin->filesystem_path($target_scfg, $target_volname);
> +
> + print "renaming $source_path to $target_path\n";
> + my $imagedir = $target_plugin->get_subdir($target_scfg, 'images');
> + mkpath "$imagedir/$vmid";
> +
> + rename($source_path, $target_path) or die "unable to move - $!\n";
> + };
> + if (my $err = $@) {
> + unlink $source_path;
> + unlink $target_path if defined($target_path);
> + rmdir $destdir;
> + die "error during extraction: $err\n";
> + }
> +
> + rmdir $destdir;
> +
> + return "$target_storeid:$target_volname";
> +}
> +
> +sub cleanup_extracted_image {
> + my ($source) = @_;
> +
> + my $cfg = PVE::Storage::config();
> + PVE::Storage::vdisk_free($cfg, $source);
> +}
> +
> +1;
* Re: [pve-devel] [PATCH storage v3 03/10] plugin: dir: handle ova files for import
2024-05-22 10:08 ` Fabian Grünbichler
@ 2024-05-23 10:40 ` Dominik Csapak
2024-05-23 12:25 ` Fabian Grünbichler
0 siblings, 1 reply; 38+ messages in thread
From: Dominik Csapak @ 2024-05-23 10:40 UTC (permalink / raw)
To: Proxmox VE development discussion, Fabian Grünbichler
On 5/22/24 12:08, Fabian Grünbichler wrote:
> On April 29, 2024 1:21 pm, Dominik Csapak wrote:
>> since we want to handle ova files (which are only ovf+images bundled in
>> a tar file) for import, add code that handles that.
>>
>> we introduce a valid volname for files contained in ovas like this:
>>
>> storage:import/archive.ova/disk-1.vmdk
>>
>> by basically treating the last part of the path as the name for the
>> contained disk we want.
>>
>> in that case we return 'import' as type with 'vmdk/qcow2/raw' as format
>> (we cannot use something like 'ova+vmdk' without extending the 'format'
>> parsing to that for all storages/formats. This is because it runs
>> though a verify format check at least once)
>>
>> we then provide 3 functions to use for that:
>>
>> * copy_needs_extraction: determines from the given volid (like above) if
>> that needs extraction to copy it, currently only 'import' vtype + a
>> volid with the above format returns true
>>
>> * extract_disk_from_import_file: this actually extracts the file from
>> the archive. Currently only ova is supported, so the extraction with
>> 'tar' is hardcoded, but again we can easily extend/modify that should
>> we need to.
>>
>> we currently extract into either the import storage or a given
>> target storage in the images directory so if the cleanup does not
>> happen, the user can still see and interact with the image via
>> api/cli/gui
>>
>> * cleanup_extracted_image: intended to cleanup the extracted images from
>> above
>>
>> we have to modify the `parse_ovf` a bit to handle the missing disk
>> images, and we parse the size out of the ovf part (since this is
>> informal only, it should be no problem if we cannot parse it sometimes)
>>
>> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
>> ---
>> src/PVE/API2/Storage/Status.pm | 1 +
>> src/PVE/GuestImport.pm | 100 +++++++++++++++++++++++++++++++++
>> src/PVE/GuestImport/OVF.pm | 53 ++++++++++++++---
>> src/PVE/Makefile | 1 +
>> src/PVE/Storage.pm | 2 +-
>> src/PVE/Storage/DirPlugin.pm | 15 ++++-
>> src/PVE/Storage/Plugin.pm | 5 ++
>> src/test/parse_volname_test.pm | 15 +++++
>> 8 files changed, 182 insertions(+), 10 deletions(-)
>> create mode 100644 src/PVE/GuestImport.pm
>>
>> diff --git a/src/PVE/API2/Storage/Status.pm b/src/PVE/API2/Storage/Status.pm
>> index dc6cc69..acde730 100644
>> --- a/src/PVE/API2/Storage/Status.pm
>> +++ b/src/PVE/API2/Storage/Status.pm
>> @@ -749,6 +749,7 @@ __PACKAGE__->register_method({
>> 'efi-state-lost',
>> 'guest-is-running',
>> 'nvme-unsupported',
>> + 'ova-needs-extracting',
>> 'ovmf-with-lsi-unsupported',
>> 'serial-port-socket-only',
>> ],
>> diff --git a/src/PVE/GuestImport.pm b/src/PVE/GuestImport.pm
>> new file mode 100644
>> index 0000000..d405e30
>> --- /dev/null
>> +++ b/src/PVE/GuestImport.pm
>> @@ -0,0 +1,100 @@
>> +package PVE::GuestImport;
>> +
>> +use strict;
>> +use warnings;
>> +
>> +use File::Path;
>> +
>> +use PVE::Storage;
>
> another circular module dependency..
>
why do you think so? nothing in pve-storage uses PVE::GuestImport, only PVE::GuestImport::OVF?
>> +use PVE::Tools qw(run_command);
>> +
>> +sub copy_needs_extraction {
>> + my ($volid) = @_;
>> + my $cfg = PVE::Storage::config();
>> + my ($vtype, $name, undef, undef, undef, undef, $fmt) = PVE::Storage::parse_volname($cfg, $volid);
>> +
>> + # only volumes inside ovas need extraction
>> + return $vtype eq 'import' && $fmt =~ m/^ova\+(.*)$/;
>> +}
>
> this could just as well live in qemu-server, there's only two call sites
> in one module there.. one of them even already has the parsed volname ;)
true
>
>> +
>> +sub extract_disk_from_import_file {
>
> I don't really like that this is using lots of plugin stuff..
>
>> + my ($volid, $vmid, $target_storeid) = @_;
>> +
>> + my ($source_storeid, $volname) = PVE::Storage::parse_volume_id($volid);
>> + $target_storeid //= $source_storeid;
>> + my $cfg = PVE::Storage::config();
>> + my $source_scfg = PVE::Storage::storage_config($cfg, $source_storeid);
>> + my $source_plugin = PVE::Storage::Plugin->lookup($source_scfg->{type});
>> +
>> + my ($vtype, $name, undef, undef, undef, undef, $fmt) =
>> + $source_plugin->parse_volname($volname);
>
> could be PVE::Storage::parse_volname
>
>> +
>> + die "only files with content type 'import' can be extracted\n"
>> + if $vtype ne 'import' || $fmt !~ m/^ova\+/;
>> +
>> + # extract the inner file from the name
>> + my $archive;
>> + my $inner_file;
>> + if ($name =~ m!^(.*\.ova)/(${PVE::Storage::SAFE_CHAR_CLASS_RE}+)$!) {
>> + $archive = "import/$1";
>> + $inner_file = $2;
>> + ($fmt) = $fmt =~ /^ova\+(.*)$/;
>> + } else {
>> + die "cannot extract $volid - invalid volname $volname\n";
>> + }
>> +
>> + my ($ova_path) = $source_plugin->path($source_scfg, $archive, $source_storeid);
>
> could be PVE::Storage::path
>
>> +
>> + my $target_scfg = PVE::Storage::storage_config($cfg, $target_storeid);
>> + my $target_plugin = PVE::Storage::Plugin->lookup($target_scfg->{type});
>> +
>> + my $destdir = $target_plugin->get_subdir($target_scfg, 'images');
>
> could be PVE::Storage::get_image_dir
>
>> +
>> + my $pid = $$;
>> + $destdir .= "/tmp_${pid}_${vmid}";
>> + mkpath $destdir;
>> +
>> + ($ova_path) = $ova_path =~ m|^(.*)$|; # untaint
>> +
>> + my $source_path = "$destdir/$inner_file";
>> + my $target_path;
>> + my $target_volname;
>> + eval {
>> + run_command(['tar', '-x', '--force-local', '-C', $destdir, '-f', $ova_path, $inner_file]);
>> +
>> + # check for symlinks and other non regular files
>> + if (-l $source_path || ! -f $source_path) {
>> + die "only regular files are allowed\n";
>> + }
>> +
>> + my $target_diskname
>> + = $target_plugin->find_free_diskname($target_storeid, $target_scfg, $vmid, $fmt, 1);
>
> these here requires holding a lock until the diskname is actually used
> (the rename below), else it's racey..
we do hold a lock over VM creation in the only path where this is called, and it is vmid-specific...
so is this really a problem?
>
>> + $target_volname = "$vmid/" . $target_diskname;
>
> this encodes a fact about volname semantics that might not be a given
> for external, dir-based plugins (not sure if we want to worry about that
> though, or how to avoid it ;)).
i mean we could call 'alloc' with a very small size instead
and simply "overwrite" it? then we'd also get around things like
mkpath and imagedir etc.
>
>> + $target_path = $target_plugin->filesystem_path($target_scfg, $target_volname);
>
> this should be equivalent to PVE::Storage::path for DirPlugin based
> storages?
>
>> +
>> + print "renaming $source_path to $target_path\n";
>> + my $imagedir = $target_plugin->get_subdir($target_scfg, 'images');
>
> we already did this above, but see comment there ;)
true ;)
>
>> + mkpath "$imagedir/$vmid";
>> +
>> + rename($source_path, $target_path) or die "unable to move - $!\n";
>> + };
>> + if (my $err = $@) {
>> + unlink $source_path;
>> + unlink $target_path if defined($target_path);
>
> isn't this pretty much impossible to happen? the last thing we do in the
> eval block is the rename - if that failed, $target_path can't exist yet.
> if it didn't fail, we can't end up here?
that probably depends on the underlying filesystem, no? not
every fs has POSIX rename semantics, I guess?
in that case we'd clean up the file, and if it does not exist, it doesn't hurt
>
>> + rmdir $destdir;
>
> this and unlink $source_path could just be a remove_tree on $destdir
> instead, with less chances of leaving stuff around?
this is true of course and removes the issue above
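e.g. (File::Path's remove_tree also covers partially extracted leftovers):

```perl
use File::Path qw(remove_tree);

# one call replaces the unlink/unlink/rmdir error path and the final rmdir
remove_tree($destdir);
```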
>
>> + die "error during extraction: $err\n";
>> + }
>> +
>> + rmdir $destdir;
>
> could also be a remove_tree, just to be on the safe side?
yup
>
>> +
>> + return "$target_storeid:$target_volname";
>> +}
>> +
>> +sub cleanup_extracted_image {
>> + my ($source) = @_;
>> +
>> + my $cfg = PVE::Storage::config();
>> + PVE::Storage::vdisk_free($cfg, $source);
>> +}
>
> why do we need this helper, and not just call vdisk_free directly in
> qemu-server (we do that in tons of places there as part of error
> handling for freshly allocated volumes)?
ok makes sense
>
>> +
>> +1;
>> diff --git a/src/PVE/GuestImport/OVF.pm b/src/PVE/GuestImport/OVF.pm
>> index 0eb5e9c..6b79078 100644
>> --- a/src/PVE/GuestImport/OVF.pm
>> +++ b/src/PVE/GuestImport/OVF.pm
>> @@ -85,11 +85,37 @@ sub id_to_pve {
>> }
>> }
>>
>> +# technically defined in DSP0004 (https://www.dmtf.org/dsp/DSP0004) as an ABNF
>> +# but realistically this always takes the form of 'byte * base^exponent'
>> +sub try_parse_capacity_unit {
>> + my ($unit_text) = @_;
>> +
>> + if ($unit_text =~ m/^\s*byte\s*\*\s*([0-9]+)\s*\^\s*([0-9]+)\s*$/) {
>> + my $base = $1;
>> + my $exp = $2;
>> + return $base ** $exp;
>> + }
>> +
>> + return undef;
>> +}
>> +
>> # returns two references, $qm which holds qm.conf style key/values, and \@disks
>> sub parse_ovf {
>> - my ($ovf, $debug) = @_;
>> + my ($ovf, $isOva, $debug) = @_;
>> +
>> + # we have to ignore missing disk images for ova
>> + my $dom;
>> + if ($isOva) {
>> + my $raw = "";
>> + PVE::Tools::run_command(['tar', '-xO', '--wildcards', '--occurrence=1', '-f', $ovf, '*.ovf'], outfunc => sub {
>> + my $line = shift;
>> + $raw .= $line;
>> + });
>> + $dom = XML::LibXML->load_xml(string => $raw, no_blanks => 1);
>> + } else {
>> + $dom = XML::LibXML->load_xml(location => $ovf, no_blanks => 1);
>> + }
>>
>> - my $dom = XML::LibXML->load_xml(location => $ovf, no_blanks => 1);
>>
>> # register the xml namespaces in a xpath context object
>> # 'ovf' is the default namespace so it will prepended to each xml element
>> @@ -177,7 +203,17 @@ sub parse_ovf {
>> # @ needs to be escaped to prevent Perl double quote interpolation
>> my $xpath_find_fileref = sprintf("/ovf:Envelope/ovf:DiskSection/\
>> ovf:Disk[\@ovf:diskId='%s']/\@ovf:fileRef", $disk_id);
>> + my $xpath_find_capacity = sprintf("/ovf:Envelope/ovf:DiskSection/\
>> +ovf:Disk[\@ovf:diskId='%s']/\@ovf:capacity", $disk_id);
>> + my $xpath_find_capacity_unit = sprintf("/ovf:Envelope/ovf:DiskSection/\
>> +ovf:Disk[\@ovf:diskId='%s']/\@ovf:capacityAllocationUnits", $disk_id);
>> my $fileref = $xpc->findvalue($xpath_find_fileref);
>> + my $capacity = $xpc->findvalue($xpath_find_capacity);
>> + my $capacity_unit = $xpc->findvalue($xpath_find_capacity_unit);
>> + my $virtual_size;
>> + if (my $factor = try_parse_capacity_unit($capacity_unit)) {
>> + $virtual_size = $capacity * $factor;
>> + }
>>
>> my $valid_url_chars = qr@${valid_uripath_chars}|/@;
>> if (!$fileref || $fileref !~ m/^${valid_url_chars}+$/) {
>> @@ -217,7 +253,7 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
>> die "error parsing $filepath, are you using a symlink ?\n";
>> }
>>
>> - if (!-e $backing_file_path) {
>> + if (!-e $backing_file_path && !$isOva) {
>> die "error parsing $filepath, file seems not to exist at $backing_file_path\n";
>> }
>>
>> @@ -225,16 +261,19 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
>> ($filepath) = $filepath =~ m|^(${PVE::Storage::SAFE_CHAR_CLASS_RE}+)$|; # untaint & check no sub/parent dirs
>> die "invalid path\n" if !$filepath;
>>
>> - my $virtual_size = PVE::Storage::file_size_info($backing_file_path);
>> - die "error parsing $backing_file_path, cannot determine file size\n"
>> - if !$virtual_size;
>> + if (!$isOva) {
>> + my $size = PVE::Storage::file_size_info($backing_file_path);
>> + die "error parsing $backing_file_path, cannot determine file size\n"
>> + if !$size;
>>
>> + $virtual_size = $size;
>> + }
>> $pve_disk = {
>> disk_address => $pve_disk_address,
>> backing_file => $backing_file_path,
>> - virtual_size => $virtual_size
>> relative_path => $filepath,
>> };
>> + $pve_disk->{virtual_size} = $virtual_size if defined($virtual_size);
>> push @disks, $pve_disk;
>>
>> }
>> diff --git a/src/PVE/Makefile b/src/PVE/Makefile
>> index e15a275..0af3081 100644
>> --- a/src/PVE/Makefile
>> +++ b/src/PVE/Makefile
>> @@ -5,6 +5,7 @@ install:
>> install -D -m 0644 Storage.pm ${DESTDIR}${PERLDIR}/PVE/Storage.pm
>> install -D -m 0644 Diskmanage.pm ${DESTDIR}${PERLDIR}/PVE/Diskmanage.pm
>> install -D -m 0644 CephConfig.pm ${DESTDIR}${PERLDIR}/PVE/CephConfig.pm
>> + install -D -m 0644 GuestImport.pm ${DESTDIR}${PERLDIR}/PVE/GuestImport.pm
>> make -C Storage install
>> make -C GuestImport install
>> make -C API2 install
>> diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
>> index 1ed91c2..adc1b45 100755
>> --- a/src/PVE/Storage.pm
>> +++ b/src/PVE/Storage.pm
>> @@ -114,7 +114,7 @@ our $VZTMPL_EXT_RE_1 = qr/\.tar\.(gz|xz|zst)/i;
>>
>> our $BACKUP_EXT_RE_2 = qr/\.(tgz|(?:tar|vma)(?:\.(${\PVE::Storage::Plugin::COMPRESSOR_RE}))?)/;
>>
>> -our $IMPORT_EXT_RE_1 = qr/\.(ovf|qcow2|raw|vmdk)/;
>> +our $IMPORT_EXT_RE_1 = qr/\.(ova|ovf|qcow2|raw|vmdk)/;
>>
>> our $SAFE_CHAR_CLASS_RE = qr/[a-zA-Z0-9\-\.\+\=\_]/;
>>
>> diff --git a/src/PVE/Storage/DirPlugin.pm b/src/PVE/Storage/DirPlugin.pm
>> index 3e3b1e7..ea89464 100644
>> --- a/src/PVE/Storage/DirPlugin.pm
>> +++ b/src/PVE/Storage/DirPlugin.pm
>> @@ -258,15 +258,26 @@ sub get_import_metadata {
>> # NOTE: all types of warnings must be added to the return schema of the import-metadata API endpoint
>> my $warnings = [];
>>
>> + my $isOva = 0;
>> + if ($name =~ m/\.ova$/) {
>> + $isOva = 1;
>> + push @$warnings, { type => 'ova-needs-extracting' };
>> + }
>> my $path = $class->path($scfg, $volname, $storeid, undef);
>> - my $res = PVE::GuestImport::OVF::parse_ovf($path);
>> + my $res = PVE::GuestImport::OVF::parse_ovf($path, $isOva);
>> my $disks = {};
>> for my $disk ($res->{disks}->@*) {
>> my $id = $disk->{disk_address};
>> my $size = $disk->{virtual_size};
>> my $path = $disk->{relative_path};
>> + my $volid;
>> + if ($isOva) {
>> + $volid = "$storeid:$volname/$path";
>> + } else {
>> + $volid = "$storeid:import/$path",
>> + }
>> $disks->{$id} = {
>> - volid => "$storeid:import/$path",
>> + volid => $volid,
>> defined($size) ? (size => $size) : (),
>> };
>> }
>> diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
>> index 33f0f3a..640d156 100644
>> --- a/src/PVE/Storage/Plugin.pm
>> +++ b/src/PVE/Storage/Plugin.pm
>> @@ -654,6 +654,11 @@ sub parse_volname {
>> return ('backup', $fn);
>> } elsif ($volname =~ m!^snippets/([^/]+)$!) {
>> return ('snippets', $1);
>> + } elsif ($volname =~ m!^import/(${PVE::Storage::SAFE_CHAR_CLASS_RE}+\.ova\/(${PVE::Storage::SAFE_CHAR_CLASS_RE}+))$!) {
>> + my $archive = $1;
>> + my $file = $2;
>> + my (undef, $format, undef) = parse_name_dir($file);
>> + return ('import', $archive, undef, undef, undef, undef, "ova+$format");
>
> these could be improved if the format was already checked in the elsif
> condition I think, since the error message of parse_name_dir is a bit
> opaque/lacking context.. also, parse_name_dir allows subvol, which we
> don't want to allow here I think?
ok yeah that makes sense
>
>> } elsif ($volname =~ m!^import/(${PVE::Storage::SAFE_CHAR_CLASS_RE}+$PVE::Storage::IMPORT_EXT_RE_1)$!) {
>> return ('import', $1, undef, undef, undef, undef, $2);
>> }
>> diff --git a/src/test/parse_volname_test.pm b/src/test/parse_volname_test.pm
>> index a8c746f..bc7b4e8 100644
>> --- a/src/test/parse_volname_test.pm
>> +++ b/src/test/parse_volname_test.pm
>> @@ -93,6 +93,21 @@ my $tests = [
>> volname => 'import/import.ovf',
>> expected => ['import', 'import.ovf', undef, undef, undef ,undef, 'ovf'],
>> },
>> + {
>> + description => "Import, inner file of ova",
>> + volname => 'import/import.ova/disk.qcow2',
>> + expected => ['import', 'import.ova/disk.qcow2', undef, undef, undef, undef, 'ova+qcow2'],
>> + },
>> + {
>> + description => "Import, inner file of ova",
>> + volname => 'import/import.ova/disk.vmdk',
>> + expected => ['import', 'import.ova/disk.vmdk', undef, undef, undef, undef, 'ova+vmdk'],
>> + },
>> + {
>> + description => "Import, inner file of ova",
>> + volname => 'import/import.ova/disk.raw',
>> + expected => ['import', 'import.ova/disk.raw', undef, undef, undef, undef, 'ova+raw'],
>> + },
>> #
>> # failed matches
>> #
>> --
>> 2.39.2
>>
>>
>>
* Re: [pve-devel] [PATCH storage v3 03/10] plugin: dir: handle ova files for import
2024-05-23 10:40 ` Dominik Csapak
@ 2024-05-23 12:25 ` Fabian Grünbichler
2024-05-23 12:32 ` Dominik Csapak
0 siblings, 1 reply; 38+ messages in thread
From: Fabian Grünbichler @ 2024-05-23 12:25 UTC (permalink / raw)
To: Dominik Csapak, Proxmox VE development discussion
On May 23, 2024 12:40 pm, Dominik Csapak wrote:
> On 5/22/24 12:08, Fabian Grünbichler wrote:
>> On April 29, 2024 1:21 pm, Dominik Csapak wrote:
>>> since we want to handle ova files (which are only ovf+images bundled in
>>> a tar file) for import, add code that handles that.
>>>
>>> we introduce a valid volname for files contained in ovas like this:
>>>
>>> storage:import/archive.ova/disk-1.vmdk
>>>
>>> by basically treating the last part of the path as the name for the
>>> contained disk we want.
>>>
>>> in that case we return 'import' as type with 'vmdk/qcow2/raw' as format
>>> (we cannot use something like 'ova+vmdk' without extending the 'format'
>>> parsing to that for all storages/formats. This is because it runs
>>> though a verify format check at least once)
>>>
>>> we then provide 3 functions to use for that:
>>>
>>> * copy_needs_extraction: determines from the given volid (like above) if
>>> that needs extraction to copy it, currently only 'import' vtype + a
>>> volid with the above format returns true
>>>
>>> * extract_disk_from_import_file: this actually extracts the file from
>>> the archive. Currently only ova is supported, so the extraction with
>>> 'tar' is hardcoded, but again we can easily extend/modify that should
>>> we need to.
>>>
>>> we currently extract into either the import storage or a given
>>> target storage in the images directory so if the cleanup does not
>>> happen, the user can still see and interact with the image via
>>> api/cli/gui
>>>
>>> * cleanup_extracted_image: intended to cleanup the extracted images from
>>> above
>>>
>>> we have to modify the `parse_ovf` a bit to handle the missing disk
>>> images, and we parse the size out of the ovf part (since this is
>>> informal only, it should be no problem if we cannot parse it sometimes)
>>>
>>> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
>>> ---
>>> src/PVE/API2/Storage/Status.pm | 1 +
>>> src/PVE/GuestImport.pm | 100 +++++++++++++++++++++++++++++++++
>>> src/PVE/GuestImport/OVF.pm | 53 ++++++++++++++---
>>> src/PVE/Makefile | 1 +
>>> src/PVE/Storage.pm | 2 +-
>>> src/PVE/Storage/DirPlugin.pm | 15 ++++-
>>> src/PVE/Storage/Plugin.pm | 5 ++
>>> src/test/parse_volname_test.pm | 15 +++++
>>> 8 files changed, 182 insertions(+), 10 deletions(-)
>>> create mode 100644 src/PVE/GuestImport.pm
>>>
>>> diff --git a/src/PVE/API2/Storage/Status.pm b/src/PVE/API2/Storage/Status.pm
>>> index dc6cc69..acde730 100644
>>> --- a/src/PVE/API2/Storage/Status.pm
>>> +++ b/src/PVE/API2/Storage/Status.pm
>>> @@ -749,6 +749,7 @@ __PACKAGE__->register_method({
>>> 'efi-state-lost',
>>> 'guest-is-running',
>>> 'nvme-unsupported',
>>> + 'ova-needs-extracting',
>>> 'ovmf-with-lsi-unsupported',
>>> 'serial-port-socket-only',
>>> ],
>>> diff --git a/src/PVE/GuestImport.pm b/src/PVE/GuestImport.pm
>>> new file mode 100644
>>> index 0000000..d405e30
>>> --- /dev/null
>>> +++ b/src/PVE/GuestImport.pm
>>> @@ -0,0 +1,100 @@
>>> +package PVE::GuestImport;
>>> +
>>> +use strict;
>>> +use warnings;
>>> +
>>> +use File::Path;
>>> +
>>> +use PVE::Storage;
>>
>> another circular module dependency..
>>
true, sorry for the noise! :)
> why do you think so? nothing in storage is using PVE::GuestImport only PVE::GuestImport::OVF ?
>
>>> +use PVE::Tools qw(run_command);
>>> +
>>> +sub copy_needs_extraction {
>>> + my ($volid) = @_;
>>> + my $cfg = PVE::Storage::config();
>>> + my ($vtype, $name, undef, undef, undef, undef, $fmt) = PVE::Storage::parse_volname($cfg, $volid);
>>> +
>>> + # only volumes inside ovas need extraction
>>> + return $vtype eq 'import' && $fmt =~ m/^ova\+(.*)$/;
>>> +}
>>
>> this could just as well live in qemu-server, there's only two call sites
>> in one module there.. one of them even already has the parsed volname ;)
>
> true
>
>>
>>> +
>>> +sub extract_disk_from_import_file {
>>
>> I don't really like that this is using lots of plugin stuff..
>>
>>> + my ($volid, $vmid, $target_storeid) = @_;
>>> +
>>> + my ($source_storeid, $volname) = PVE::Storage::parse_volume_id($volid);
>>> + $target_storeid //= $source_storeid;
>>> + my $cfg = PVE::Storage::config();
>>> + my $source_scfg = PVE::Storage::storage_config($cfg, $source_storeid);
>>> + my $source_plugin = PVE::Storage::Plugin->lookup($source_scfg->{type});
>>> +
>>> + my ($vtype, $name, undef, undef, undef, undef, $fmt) =
>>> + $source_plugin->parse_volname($volname);
>>
>> could be PVE::Storage::parse_volname
>>
>>> +
>>> + die "only files with content type 'import' can be extracted\n"
>>> + if $vtype ne 'import' || $fmt !~ m/^ova\+/;
>>> +
>>> + # extract the inner file from the name
>>> + my $archive;
>>> + my $inner_file;
>>> + if ($name =~ m!^(.*\.ova)/(${PVE::Storage::SAFE_CHAR_CLASS_RE}+)$!) {
>>> + $archive = "import/$1";
>>> + $inner_file = $2;
>>> + ($fmt) = $fmt =~ /^ova\+(.*)$/;
>>> + } else {
>>> + die "cannot extract $volid - invalid volname $volname\n";
>>> + }
>>> +
>>> + my ($ova_path) = $source_plugin->path($source_scfg, $archive, $source_storeid);
>>
>> could be PVE::Storage::path
>>
>>> +
>>> + my $target_scfg = PVE::Storage::storage_config($cfg, $target_storeid);
>>> + my $target_plugin = PVE::Storage::Plugin->lookup($target_scfg->{type});
>>> +
>>> + my $destdir = $target_plugin->get_subdir($target_scfg, 'images');
>>
>> could be PVE::Storage::get_image_dir
>>
>>> +
>>> + my $pid = $$;
>>> + $destdir .= "/tmp_${pid}_${vmid}";
>>> + mkpath $destdir;
>>> +
>>> + ($ova_path) = $ova_path =~ m|^(.*)$|; # untaint
>>> +
>>> + my $source_path = "$destdir/$inner_file";
>>> + my $target_path;
>>> + my $target_volname;
>>> + eval {
>>> + run_command(['tar', '-x', '--force-local', '-C', $destdir, '-f', $ova_path, $inner_file]);
>>> +
>>> + # check for symlinks and other non regular files
>>> + if (-l $source_path || ! -f $source_path) {
>>> + die "only regular files are allowed\n";
>>> + }
>>> +
>>> + my $target_diskname
>>> + = $target_plugin->find_free_diskname($target_storeid, $target_scfg, $vmid, $fmt, 1);
>>
>> these here requires holding a lock until the diskname is actually used
>> (the rename below), else it's racey..
>
> we do have a lock over vm creation in the only path this is called and it is vmid specific...
> so is this really a problem?
yes, every disk allocation needs to hold the storage lock to avoid two
actions in parallel thinking they own a "new" disk name that hasn't yet
been allocated properly.
it's possible to allocate new volumes without going over a guest
specific API after all, and there are automation use cases doing just
that (pre-allocating the volumes, then creating/updating the VM).
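for the current approach, that would mean something like this sketch (the cluster_lock_storage invocation mirrors how the plugins guard allocations; the exact signature is assumed here):

```perl
# hold the storage lock across find_free_diskname and the rename, the
# same way regular volume allocations do
my $target_volname = $target_plugin->cluster_lock_storage(
    $target_storeid, $target_scfg->{shared}, undef, sub {
        my $name = $target_plugin->find_free_diskname(
            $target_storeid, $target_scfg, $vmid, $fmt, 1);
        my $volname = "$vmid/$name";
        my $path = $target_plugin->filesystem_path($target_scfg, $volname);
        rename($source_path, $path) or die "unable to move - $!\n";
        return $volname;
    });
```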
>>> + $target_volname = "$vmid/" . $target_diskname;
>>
>> this encodes a fact about volname semantics that might not be a given
>> for external, dir-based plugins (not sure if we want to worry about that
>> though, or how to avoid it ;)).
>
> i mean we could call 'alloc' with a very small size instead
> and simply "overwrite" it? then we'd also get around things like
> mkpath and imagedir etc.
that might actually be nice(r) than the current approach since it avoids
the volname format issue entirely. the only downside is that we then
briefly have a "wrong" disk visible, but since the VM has to be locked
at that point there shouldn't be too much harm in that?
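a minimal sketch of the alloc-then-overwrite variant (assuming vdisk_alloc's size argument is in KiB and a small dummy allocation is acceptable until the extracted image replaces it):

```perl
# allocate a (tiny) volume the normal way, then replace its content
my $target_volid = PVE::Storage::vdisk_alloc(
    $cfg, $target_storeid, $vmid, $fmt, undef, 1024); # 1 MiB placeholder
my $target_path = PVE::Storage::path($cfg, $target_volid);
rename($source_path, $target_path)
    or die "unable to move '$source_path' - $!\n";
```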
>>> + $target_path = $target_plugin->filesystem_path($target_scfg, $target_volname);
>>
>> this should be equivalent to PVE::Storage::path for DirPlugin based
>> storages?
>>
>>> +
>>> + print "renaming $source_path to $target_path\n";
>>> + my $imagedir = $target_plugin->get_subdir($target_scfg, 'images');
>>
>> we already did this above, but see comment there ;)
>
> true ;)
>
>>
>>> + mkpath "$imagedir/$vmid";
>>> +
>>> + rename($source_path, $target_path) or die "unable to move - $!\n";
>>> + };
>>> + if (my $err = $@) {
>>> + unlink $source_path;
>>> + unlink $target_path if defined($target_path);
>>
>> isn't this pretty much impossible to happen? the last thing we do in the
>> eval block is the rename - if that failed, $target_path can't exist yet.
>> if it didn't fail, we can't end up here?
>
> that probably depends on the underlying filesystem, no? not
> every FS has POSIX rename semantics, I guess?
I think we can assume an intra-FS rename to either work and have an
effect, or not work and not have an effect on anything we want to
support as dir storage? :)
> in that case we'd clean up the file, and if it does not exist, it doesn't hurt
> but
sure, but error handling tends to get more complicated over time, so not
having nops in there reduces the complexity somewhat IMHO.
>>> + rmdir $destdir;
>>
>> this and unlink $source_path could just be a remove_tree on $destdir
>> instead, with less chances of leaving stuff around?
>
> this is true of course and removes the issue above
>
>>
>>> + die "error during extraction: $err\n";
>>> + }
>>> +
>>> + rmdir $destdir;
>>
>> could also be a remove_tree, just to be on the safe side?
>
> yup
>
>>
>>> +
>>> + return "$target_storeid:$target_volname";
>>> +}
>>> +
>>> +sub cleanup_extracted_image {
>>> + my ($source) = @_;
>>> +
>>> + my $cfg = PVE::Storage::config();
>>> + PVE::Storage::vdisk_free($cfg, $source);
>>> +}
>>
>> why do we need this helper, and not just call vdisk_free directly in
>> qemu-server (we do that in tons of places there as part of error
>> handling for freshly allocated volumes)?
>
> ok makes sense
>
>>
>>> +
>>> +1;
>>> diff --git a/src/PVE/GuestImport/OVF.pm b/src/PVE/GuestImport/OVF.pm
>>> index 0eb5e9c..6b79078 100644
>>> --- a/src/PVE/GuestImport/OVF.pm
>>> +++ b/src/PVE/GuestImport/OVF.pm
>>> @@ -85,11 +85,37 @@ sub id_to_pve {
>>> }
>>> }
>>>
>>> +# technically defined in DSP0004 (https://www.dmtf.org/dsp/DSP0004) as an ABNF
>>> +# but realistically this always takes the form of 'byte * base^exponent'
>>> +sub try_parse_capacity_unit {
>>> + my ($unit_text) = @_;
>>> +
>>> + if ($unit_text =~ m/^\s*byte\s*\*\s*([0-9]+)\s*\^\s*([0-9]+)\s*$/) {
>>> + my $base = $1;
>>> + my $exp = $2;
>>> + return $base ** $exp;
>>> + }
>>> +
>>> + return undef;
>>> +}
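For reference, this maps e.g. the common unit string 'byte * 2^30' (capacity given in GiB) to its byte multiplier. A rough Python equivalent of the same regex, not part of the patch:

```python
import re

def try_parse_capacity_unit(unit_text):
    # DSP0004 technically allows a full ABNF here, but exported OVFs
    # realistically always use the form 'byte * base^exponent'
    m = re.match(r'^\s*byte\s*\*\s*([0-9]+)\s*\^\s*([0-9]+)\s*$', unit_text)
    if m:
        return int(m.group(1)) ** int(m.group(2))
    return None

print(try_parse_capacity_unit('byte * 2^30'))  # 1073741824
print(try_parse_capacity_unit('percent'))      # None -> caller leaves virtual_size unset
```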
>>> +
>>> # returns two references, $qm which holds qm.conf style key/values, and \@disks
>>> sub parse_ovf {
>>> - my ($ovf, $debug) = @_;
>>> + my ($ovf, $isOva, $debug) = @_;
>>> +
>>> + # we have to ignore missing disk images for ova
>>> + my $dom;
>>> + if ($isOva) {
>>> + my $raw = "";
>>> + PVE::Tools::run_command(['tar', '-xO', '--wildcards', '--occurrence=1', '-f', $ovf, '*.ovf'], outfunc => sub {
>>> + my $line = shift;
>>> + $raw .= $line;
>>> + });
>>> + $dom = XML::LibXML->load_xml(string => $raw, no_blanks => 1);
>>> + } else {
>>> + $dom = XML::LibXML->load_xml(location => $ovf, no_blanks => 1);
>>> + }
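The tar invocation above streams only the first embedded `.ovf` descriptor to stdout instead of unpacking the whole archive. A standalone Python sketch of the same call (assumes GNU tar, since `--wildcards` and `--occurrence` are GNU extensions):

```python
import subprocess

def read_ovf_from_ova(ova_path):
    # an OVA is a plain tar archive; extract the first *.ovf member
    # to stdout and return it as a string
    result = subprocess.run(
        ['tar', '-xO', '--wildcards', '--occurrence=1', '-f', ova_path, '*.ovf'],
        check=True, capture_output=True,
    )
    return result.stdout.decode()
```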
>>>
>>> - my $dom = XML::LibXML->load_xml(location => $ovf, no_blanks => 1);
>>>
>>> # register the xml namespaces in a xpath context object
>>> # 'ovf' is the default namespace so it will prepended to each xml element
>>> @@ -177,7 +203,17 @@ sub parse_ovf {
>>> # @ needs to be escaped to prevent Perl double quote interpolation
>>> my $xpath_find_fileref = sprintf("/ovf:Envelope/ovf:DiskSection/\
>>> ovf:Disk[\@ovf:diskId='%s']/\@ovf:fileRef", $disk_id);
>>> + my $xpath_find_capacity = sprintf("/ovf:Envelope/ovf:DiskSection/\
>>> +ovf:Disk[\@ovf:diskId='%s']/\@ovf:capacity", $disk_id);
>>> + my $xpath_find_capacity_unit = sprintf("/ovf:Envelope/ovf:DiskSection/\
>>> +ovf:Disk[\@ovf:diskId='%s']/\@ovf:capacityAllocationUnits", $disk_id);
>>> my $fileref = $xpc->findvalue($xpath_find_fileref);
>>> + my $capacity = $xpc->findvalue($xpath_find_capacity);
>>> + my $capacity_unit = $xpc->findvalue($xpath_find_capacity_unit);
>>> + my $virtual_size;
>>> + if (my $factor = try_parse_capacity_unit($capacity_unit)) {
>>> + $virtual_size = $capacity * $factor;
>>> + }
>>>
>>> my $valid_url_chars = qr@${valid_uripath_chars}|/@;
>>> if (!$fileref || $fileref !~ m/^${valid_url_chars}+$/) {
>>> @@ -217,7 +253,7 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
>>> die "error parsing $filepath, are you using a symlink ?\n";
>>> }
>>>
>>> - if (!-e $backing_file_path) {
>>> + if (!-e $backing_file_path && !$isOva) {
>>> die "error parsing $filepath, file seems not to exist at $backing_file_path\n";
>>> }
>>>
>>> @@ -225,16 +261,19 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
>>> ($filepath) = $filepath =~ m|^(${PVE::Storage::SAFE_CHAR_CLASS_RE}+)$|; # untaint & check no sub/parent dirs
>>> die "invalid path\n" if !$filepath;
>>>
>>> - my $virtual_size = PVE::Storage::file_size_info($backing_file_path);
>>> - die "error parsing $backing_file_path, cannot determine file size\n"
>>> - if !$virtual_size;
>>> + if (!$isOva) {
>>> + my $size = PVE::Storage::file_size_info($backing_file_path);
>>> + die "error parsing $backing_file_path, cannot determine file size\n"
>>> + if !$size;
>>>
>>> + $virtual_size = $size;
>>> + }
>>> $pve_disk = {
>>> disk_address => $pve_disk_address,
>>> backing_file => $backing_file_path,
>>> - virtual_size => $virtual_size
>>> relative_path => $filepath,
>>> };
>>> + $pve_disk->{virtual_size} = $virtual_size if defined($virtual_size);
>>> push @disks, $pve_disk;
>>>
>>> }
>>> diff --git a/src/PVE/Makefile b/src/PVE/Makefile
>>> index e15a275..0af3081 100644
>>> --- a/src/PVE/Makefile
>>> +++ b/src/PVE/Makefile
>>> @@ -5,6 +5,7 @@ install:
>>> install -D -m 0644 Storage.pm ${DESTDIR}${PERLDIR}/PVE/Storage.pm
>>> install -D -m 0644 Diskmanage.pm ${DESTDIR}${PERLDIR}/PVE/Diskmanage.pm
>>> install -D -m 0644 CephConfig.pm ${DESTDIR}${PERLDIR}/PVE/CephConfig.pm
>>> + install -D -m 0644 GuestImport.pm ${DESTDIR}${PERLDIR}/PVE/GuestImport.pm
>>> make -C Storage install
>>> make -C GuestImport install
>>> make -C API2 install
>>> diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
>>> index 1ed91c2..adc1b45 100755
>>> --- a/src/PVE/Storage.pm
>>> +++ b/src/PVE/Storage.pm
>>> @@ -114,7 +114,7 @@ our $VZTMPL_EXT_RE_1 = qr/\.tar\.(gz|xz|zst)/i;
>>>
>>> our $BACKUP_EXT_RE_2 = qr/\.(tgz|(?:tar|vma)(?:\.(${\PVE::Storage::Plugin::COMPRESSOR_RE}))?)/;
>>>
>>> -our $IMPORT_EXT_RE_1 = qr/\.(ovf|qcow2|raw|vmdk)/;
>>> +our $IMPORT_EXT_RE_1 = qr/\.(ova|ovf|qcow2|raw|vmdk)/;
>>>
>>> our $SAFE_CHAR_CLASS_RE = qr/[a-zA-Z0-9\-\.\+\=\_]/;
>>>
>>> diff --git a/src/PVE/Storage/DirPlugin.pm b/src/PVE/Storage/DirPlugin.pm
>>> index 3e3b1e7..ea89464 100644
>>> --- a/src/PVE/Storage/DirPlugin.pm
>>> +++ b/src/PVE/Storage/DirPlugin.pm
>>> @@ -258,15 +258,26 @@ sub get_import_metadata {
>>> # NOTE: all types of warnings must be added to the return schema of the import-metadata API endpoint
>>> my $warnings = [];
>>>
>>> + my $isOva = 0;
>>> + if ($name =~ m/\.ova$/) {
>>> + $isOva = 1;
>>> + push @$warnings, { type => 'ova-needs-extracting' };
>>> + }
>>> my $path = $class->path($scfg, $volname, $storeid, undef);
>>> - my $res = PVE::GuestImport::OVF::parse_ovf($path);
>>> + my $res = PVE::GuestImport::OVF::parse_ovf($path, $isOva);
>>> my $disks = {};
>>> for my $disk ($res->{disks}->@*) {
>>> my $id = $disk->{disk_address};
>>> my $size = $disk->{virtual_size};
>>> my $path = $disk->{relative_path};
>>> + my $volid;
>>> + if ($isOva) {
>>> + $volid = "$storeid:$volname/$path";
>>> + } else {
>>> + $volid = "$storeid:import/$path";
>>> + }
>>> $disks->{$id} = {
>>> - volid => "$storeid:import/$path",
>>> + volid => $volid,
>>> defined($size) ? (size => $size) : (),
>>> };
>>> }
>>> diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
>>> index 33f0f3a..640d156 100644
>>> --- a/src/PVE/Storage/Plugin.pm
>>> +++ b/src/PVE/Storage/Plugin.pm
>>> @@ -654,6 +654,11 @@ sub parse_volname {
>>> return ('backup', $fn);
>>> } elsif ($volname =~ m!^snippets/([^/]+)$!) {
>>> return ('snippets', $1);
>>> + } elsif ($volname =~ m!^import/(${PVE::Storage::SAFE_CHAR_CLASS_RE}+\.ova\/(${PVE::Storage::SAFE_CHAR_CLASS_RE}+))$!) {
>>> + my $archive = $1;
>>> + my $file = $2;
>>> + my (undef, $format, undef) = parse_name_dir($file);
>>> + return ('import', $archive, undef, undef, undef, undef, "ova+$format");
>>
>> these could be improved if the format was already checked in the elsif
>> condition I think, since the error message of parse_name_dir is a bit
>> opaque/lacking context.. also, parse_name_dir allows subvol, which we
>> don't want to allow here I think?
>
> ok yeah that makes sense
>
>>
>>> } elsif ($volname =~ m!^import/(${PVE::Storage::SAFE_CHAR_CLASS_RE}+$PVE::Storage::IMPORT_EXT_RE_1)$!) {
>>> return ('import', $1, undef, undef, undef, undef, $2);
>>> }
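To make the two `import/` branches above concrete, here is a hypothetical Python sketch of the matching logic, with the extension check pulled into the match itself (as suggested in the review, so `subvol` is never accepted); `SAFE` mirrors `SAFE_CHAR_CLASS_RE`:

```python
import re

SAFE = r'[A-Za-z0-9\-.+=_]'   # mirrors PVE::Storage::SAFE_CHAR_CLASS_RE

def parse_import_volname(volname):
    # inner file of an OVA archive -> format 'ova+<ext>'
    m = re.match(rf'^import/({SAFE}+\.ova)/({SAFE}+)\.(qcow2|raw|vmdk)$', volname)
    if m:
        archive, stem, fmt = m.groups()
        return ('import', f'{archive}/{stem}.{fmt}', f'ova+{fmt}')
    # plain file in the import/ dir -> format is its own extension
    m = re.match(rf'^import/({SAFE}+\.(ova|ovf|qcow2|raw|vmdk))$', volname)
    if m:
        return ('import', m.group(1), m.group(2))
    return None
```

Note that path traversal attempts fail both matches, since `/` is not in the safe character class.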
>>> diff --git a/src/test/parse_volname_test.pm b/src/test/parse_volname_test.pm
>>> index a8c746f..bc7b4e8 100644
>>> --- a/src/test/parse_volname_test.pm
>>> +++ b/src/test/parse_volname_test.pm
>>> @@ -93,6 +93,21 @@ my $tests = [
>>> volname => 'import/import.ovf',
>>> expected => ['import', 'import.ovf', undef, undef, undef ,undef, 'ovf'],
>>> },
>>> + {
>>> + description => "Import, inner file of ova",
>>> + volname => 'import/import.ova/disk.qcow2',
>>> + expected => ['import', 'import.ova/disk.qcow2', undef, undef, undef, undef, 'ova+qcow2'],
>>> + },
>>> + {
>>> + description => "Import, inner file of ova",
>>> + volname => 'import/import.ova/disk.vmdk',
>>> + expected => ['import', 'import.ova/disk.vmdk', undef, undef, undef, undef, 'ova+vmdk'],
>>> + },
>>> + {
>>> + description => "Import, inner file of ova",
>>> + volname => 'import/import.ova/disk.raw',
>>> + expected => ['import', 'import.ova/disk.raw', undef, undef, undef, undef, 'ova+raw'],
>>> + },
>>> #
>>> # failed matches
>>> #
>>> --
>>> 2.39.2
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
* Re: [pve-devel] [PATCH storage v3 03/10] plugin: dir: handle ova files for import
2024-05-23 12:25 ` Fabian Grünbichler
@ 2024-05-23 12:32 ` Dominik Csapak
0 siblings, 0 replies; 38+ messages in thread
From: Dominik Csapak @ 2024-05-23 12:32 UTC (permalink / raw)
To: Fabian Grünbichler, Proxmox VE development discussion
On 5/23/24 14:25, Fabian Grünbichler wrote:
> On May 23, 2024 12:40 pm, Dominik Csapak wrote:
>> On 5/22/24 12:08, Fabian Grünbichler wrote:
>>> On April 29, 2024 1:21 pm, Dominik Csapak wrote:
[snip]
>>>> + $target_volname = "$vmid/" . $target_diskname;
>>>
>>> this encodes a fact about volname semantics that might not be a given
>>> for external, dir-based plugins (not sure if we want to worry about that
>>> though, or how to avoid it ;)).
>>
>> i mean we could call 'alloc' with a very small size instead
>> and simply "overwrite" it? then we'd also get around things like
>> mkpath and imagedir etc.
>
> that might actually be nice(r) than the current approach since it avoids
> the volname format issue entirely. the only downside is that we then
> briefly have a "wrong" disk visible, but since the VM has to be locked
> at that point there shouldn't be too much harm in that?
>
we also have a 'wrong' disk after extraction, before the import step, though,
so yes, I do think that approach should work fine
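The "alloc a small volume, then overwrite it" idea could look roughly like this; all helper names are hypothetical, and the atomic overwrite relies on the placeholder and the extracted image being on the same filesystem:

```python
import os

def import_into_preallocated(alloc_small_volume, path_of, extracted_path):
    # 1) allocate a (tiny) volume through the storage layer's normal
    #    alloc path, which owns naming, locking, and mkpath
    volid = alloc_small_volume()
    # 2) atomically replace the placeholder file with the extracted image;
    #    os.replace() is atomic when source and target share a filesystem
    os.replace(extracted_path, path_of(volid))
    return volid
```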
>>>> + $target_path = $target_plugin->filesystem_path($target_scfg, $target_volname);
>>>
>>> this should be equivalent to PVE::Storage::path for DirPlugin based
>>> storages?
>>>
>>>> +
>>>> + print "renaming $source_path to $target_path\n";
>>>> + my $imagedir = $target_plugin->get_subdir($target_scfg, 'images');
>>>
>>> we already did this above, but see comment there ;)
>>
>> true ;)
>>
>>>
>>>> + mkpath "$imagedir/$vmid";
>>>> +
>>>> + rename($source_path, $target_path) or die "unable to move - $!\n";
>>>> + };
>>>> + if (my $err = $@) {
>>>> + unlink $source_path;
>>>> + unlink $target_path if defined($target_path);
>>>
>>> isn't this pretty much impossible to happen? the last thing we do in the
>>> eval block is the rename - if that failed, $target_path can't exist yet.
>>> if it didn't fail, we can't end up here?
>>
>> that probably depends on the underlying filesystem, no? not
>> every FS has POSIX rename semantics, I guess?
>
> I think we can assume an intra-FS rename to either work and have an
> effect, or not work and not have an effect on anything we want to
> support as dir storage? :)
>
>> in that case we'd clean up the file, and if it does not exist, it doesn't hurt
>> but
>
> sure, but error handling tends to get more complicated over time, so not
> having nops in there reduces the complexity somewhat IMHO.
fine with me :)
* Re: [pve-devel] [PATCH storage/qemu-server/manager v3] implement ova/ovf import for file based storages
2024-04-29 11:21 [pve-devel] [PATCH storage/qemu-server/manager v3] implement ova/ovf import for file based storages Dominik Csapak
` (22 preceding siblings ...)
2024-04-29 11:21 ` [pve-devel] [PATCH manager v3 9/9] ui: import: show size for dir-based storages Dominik Csapak
@ 2024-05-24 13:38 ` Dominik Csapak
23 siblings, 0 replies; 38+ messages in thread
From: Dominik Csapak @ 2024-05-24 13:38 UTC (permalink / raw)
To: pve-devel
sent a v4 of this
Thread overview: 38+ messages
2024-04-29 11:21 [pve-devel] [PATCH storage/qemu-server/manager v3] implement ova/ovf import for file based storages Dominik Csapak
2024-04-29 11:21 ` [pve-devel] [PATCH storage v3 01/10] copy OVF.pm from qemu-server Dominik Csapak
2024-05-22 8:56 ` Fabian Grünbichler
2024-05-22 9:35 ` Fabian Grünbichler
2024-04-29 11:21 ` [pve-devel] [PATCH storage v3 02/10] plugin: dir: implement import content type Dominik Csapak
2024-05-22 9:24 ` Fabian Grünbichler
2024-04-29 11:21 ` [pve-devel] [PATCH storage v3 03/10] plugin: dir: handle ova files for import Dominik Csapak
2024-05-22 10:08 ` Fabian Grünbichler
2024-05-23 10:40 ` Dominik Csapak
2024-05-23 12:25 ` Fabian Grünbichler
2024-05-23 12:32 ` Dominik Csapak
2024-05-22 13:13 ` Fabian Grünbichler
2024-04-29 11:21 ` [pve-devel] [PATCH storage v3 04/10] ovf: implement parsing the ostype Dominik Csapak
2024-04-29 11:21 ` [pve-devel] [PATCH storage v3 05/10] ovf: implement parsing out firmware type Dominik Csapak
2024-04-29 11:21 ` [pve-devel] [PATCH storage v3 06/10] ovf: implement rudimentary boot order Dominik Csapak
2024-04-29 11:21 ` [pve-devel] [PATCH storage v3 07/10] ovf: implement parsing nics Dominik Csapak
2024-04-29 11:21 ` [pve-devel] [PATCH storage v3 08/10] api: allow ova upload/download Dominik Csapak
2024-05-22 10:20 ` Fabian Grünbichler
2024-04-29 11:21 ` [pve-devel] [PATCH storage v3 09/10] plugin: enable import for nfs/btrfs/cifs/cephfs/glusterfs Dominik Csapak
2024-04-29 11:21 ` [pve-devel] [PATCH storage v3 10/10] add 'import' content type to 'check_volume_access' Dominik Csapak
2024-04-29 11:21 ` [pve-devel] [PATCH qemu-server v3 1/4] api: delete unused OVF.pm Dominik Csapak
2024-05-22 10:25 ` Fabian Grünbichler
2024-05-22 10:26 ` Fabian Grünbichler
2024-04-29 11:21 ` [pve-devel] [PATCH qemu-server v3 2/4] use OVF from Storage Dominik Csapak
2024-04-29 11:21 ` [pve-devel] [PATCH qemu-server v3 3/4] api: create: implement extracting disks when needed for import-from Dominik Csapak
2024-05-22 12:55 ` Fabian Grünbichler
2024-04-29 11:21 ` [pve-devel] [PATCH qemu-server v3 4/4] api: create: add 'import-extraction-storage' parameter Dominik Csapak
2024-05-22 12:16 ` Fabian Grünbichler
2024-04-29 11:21 ` [pve-devel] [PATCH manager v3 1/9] ui: fix special 'import' icon for non-esxi storages Dominik Csapak
2024-04-29 11:21 ` [pve-devel] [PATCH manager v3 2/9] ui: guest import: add ova-needs-extracting warning text Dominik Csapak
2024-04-29 11:21 ` [pve-devel] [PATCH manager v3 3/9] ui: enable import content type for relevant storages Dominik Csapak
2024-04-29 11:21 ` [pve-devel] [PATCH manager v3 4/9] ui: enable upload/download/remove buttons for 'import' type storages Dominik Csapak
2024-04-29 11:21 ` [pve-devel] [PATCH manager v3 5/9] ui: disable 'import' button for non importable formats Dominik Csapak
2024-04-29 11:21 ` [pve-devel] [PATCH manager v3 6/9] ui: import: improve rendering of volume names Dominik Csapak
2024-04-29 11:21 ` [pve-devel] [PATCH manager v3 7/9] ui: guest import: add storage selector for ova extraction storage Dominik Csapak
2024-04-29 11:21 ` [pve-devel] [PATCH manager v3 8/9] ui: guest import: change icon/text for non-esxi import storage Dominik Csapak
2024-04-29 11:21 ` [pve-devel] [PATCH manager v3 9/9] ui: import: show size for dir-based storages Dominik Csapak
2024-05-24 13:38 ` [pve-devel] [PATCH storage/qemu-server/manager v3] implement ova/ovf import for file based storages Dominik Csapak