* [pve-devel] [PATCH storage v6 01/12] copy OVF.pm from qemu-server
2024-11-15 15:17 [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages Dominik Csapak
@ 2024-11-15 15:17 ` Dominik Csapak
2024-11-17 15:50 ` [pve-devel] applied: " Thomas Lamprecht
2024-11-15 15:17 ` [pve-devel] [PATCH storage v6 02/12] plugin: dir: implement import content type Dominik Csapak
` (29 subsequent siblings)
30 siblings, 1 reply; 68+ messages in thread
From: Dominik Csapak @ 2024-11-15 15:17 UTC (permalink / raw)
To: pve-devel
copies the OVF.pm and relevant ovf tests from qemu-server.
We need it here, it already uses PVE::Storage, and since there is no
intermediary package/repository we could put it in, it seems fitting
here.
Put it in a new GuestImport module.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
changes from v5:
* omitted leftover hunk in makefile
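As a quick illustration (not part of this patch): a minimal sketch of how the
new module is meant to be consumed; the manifest path is made up, and the
printed fields follow the return structure of parse_ovf() in the diff below.

#!/usr/bin/perl
# minimal usage sketch, not part of the patch; the manifest path is hypothetical
use strict;
use warnings;

use PVE::GuestImport::OVF;

my $res = PVE::GuestImport::OVF::parse_ovf('/var/lib/vz/import/appliance.ovf');

# qm.conf style key/values parsed from the manifest (name, cores, memory)
for my $key (sort keys %{ $res->{qm} }) {
    print "$key: $res->{qm}->{$key}\n";
}

# one entry per 'Disk Drive' item of the VirtualHardwareSection
for my $disk ($res->{disks}->@*) {
    print "$disk->{disk_address}: $disk->{backing_file} ($disk->{virtual_size})\n";
}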
debian/control | 2 +
src/PVE/GuestImport/Makefile | 3 +
src/PVE/GuestImport/OVF.pm | 241 ++++++++++++++++++
src/PVE/Makefile | 1 +
src/test/Makefile | 5 +-
src/test/ovf_manifests/Win10-Liz-disk1.vmdk | Bin 0 -> 65536 bytes
src/test/ovf_manifests/Win10-Liz.ovf | 142 +++++++++++
.../ovf_manifests/Win10-Liz_no_default_ns.ovf | 142 +++++++++++
.../ovf_manifests/Win_2008_R2_two-disks.ovf | 145 +++++++++++
src/test/ovf_manifests/disk1.vmdk | Bin 0 -> 65536 bytes
src/test/ovf_manifests/disk2.vmdk | Bin 0 -> 65536 bytes
src/test/run_ovf_tests.pl | 71 ++++++
12 files changed, 751 insertions(+), 1 deletion(-)
create mode 100644 src/PVE/GuestImport/Makefile
create mode 100644 src/PVE/GuestImport/OVF.pm
create mode 100644 src/test/ovf_manifests/Win10-Liz-disk1.vmdk
create mode 100755 src/test/ovf_manifests/Win10-Liz.ovf
create mode 100755 src/test/ovf_manifests/Win10-Liz_no_default_ns.ovf
create mode 100755 src/test/ovf_manifests/Win_2008_R2_two-disks.ovf
create mode 100644 src/test/ovf_manifests/disk1.vmdk
create mode 100644 src/test/ovf_manifests/disk2.vmdk
create mode 100755 src/test/run_ovf_tests.pl
diff --git a/debian/control b/debian/control
index 35dd0ae..3198757 100644
--- a/debian/control
+++ b/debian/control
@@ -10,6 +10,7 @@ Build-Depends: debhelper-compat (= 13),
libpve-common-perl (>= 8.2.3),
librados2-perl,
libtest-mockmodule-perl,
+ libxml-libxml-perl,
lintian,
perl,
pve-cluster (>= 5.0-32),
@@ -39,6 +40,7 @@ Depends: bzip2,
libpve-cluster-perl (>= 8.0.6),
libpve-common-perl (>= 8.2.3),
librados2-perl,
+ libxml-libxml-perl,
lvm2,
lzop,
nfs-common,
diff --git a/src/PVE/GuestImport/Makefile b/src/PVE/GuestImport/Makefile
new file mode 100644
index 0000000..5948384
--- /dev/null
+++ b/src/PVE/GuestImport/Makefile
@@ -0,0 +1,3 @@
+.PHONY: install
+install:
+ install -D -m 0644 OVF.pm ${DESTDIR}${PERLDIR}/PVE/GuestImport/OVF.pm
diff --git a/src/PVE/GuestImport/OVF.pm b/src/PVE/GuestImport/OVF.pm
new file mode 100644
index 0000000..3950289
--- /dev/null
+++ b/src/PVE/GuestImport/OVF.pm
@@ -0,0 +1,241 @@
+# Open Virtualization Format import routines
+# https://www.dmtf.org/standards/ovf
+package PVE::GuestImport::OVF;
+
+use strict;
+use warnings;
+
+use XML::LibXML;
+use File::Spec;
+use File::Basename;
+use Cwd 'realpath';
+
+use PVE::Tools;
+use PVE::Storage;
+
+# map OVF resources types to descriptive strings
+# this will allow us to explore the xml tree without using magic numbers
+# http://schemas.dmtf.org/wbem/cim-html/2/CIM_ResourceAllocationSettingData.html
+my @resources = (
+ { id => 1, dtmf_name => 'Other' },
+ { id => 2, dtmf_name => 'Computer System' },
+ { id => 3, dtmf_name => 'Processor' },
+ { id => 4, dtmf_name => 'Memory' },
+ { id => 5, dtmf_name => 'IDE Controller', pve_type => 'ide' },
+ { id => 6, dtmf_name => 'Parallel SCSI HBA', pve_type => 'scsi' },
+ { id => 7, dtmf_name => 'FC HBA' },
+ { id => 8, dtmf_name => 'iSCSI HBA' },
+ { id => 9, dtmf_name => 'IB HCA' },
+ { id => 10, dtmf_name => 'Ethernet Adapter' },
+ { id => 11, dtmf_name => 'Other Network Adapter' },
+ { id => 12, dtmf_name => 'I/O Slot' },
+ { id => 13, dtmf_name => 'I/O Device' },
+ { id => 14, dtmf_name => 'Floppy Drive' },
+ { id => 15, dtmf_name => 'CD Drive' },
+ { id => 16, dtmf_name => 'DVD drive' },
+ { id => 17, dtmf_name => 'Disk Drive' },
+ { id => 18, dtmf_name => 'Tape Drive' },
+ { id => 19, dtmf_name => 'Storage Extent' },
+ { id => 20, dtmf_name => 'Other storage device', pve_type => 'sata'},
+ { id => 21, dtmf_name => 'Serial port' },
+ { id => 22, dtmf_name => 'Parallel port' },
+ { id => 23, dtmf_name => 'USB Controller' },
+ { id => 24, dtmf_name => 'Graphics controller' },
+ { id => 25, dtmf_name => 'IEEE 1394 Controller' },
+ { id => 26, dtmf_name => 'Partitionable Unit' },
+ { id => 27, dtmf_name => 'Base Partitionable Unit' },
+ { id => 28, dtmf_name => 'Power' },
+ { id => 29, dtmf_name => 'Cooling Capacity' },
+ { id => 30, dtmf_name => 'Ethernet Switch Port' },
+ { id => 31, dtmf_name => 'Logical Disk' },
+ { id => 32, dtmf_name => 'Storage Volume' },
+ { id => 33, dtmf_name => 'Ethernet Connection' },
+ { id => 34, dtmf_name => 'DMTF reserved' },
+ { id => 35, dtmf_name => 'Vendor Reserved'}
+);
+
+sub find_by {
+ my ($key, $param) = @_;
+ foreach my $resource (@resources) {
+ if ($resource->{$key} eq $param) {
+ return ($resource);
+ }
+ }
+ return;
+}
+
+sub dtmf_name_to_id {
+ my ($dtmf_name) = @_;
+ my $found = find_by('dtmf_name', $dtmf_name);
+ if ($found) {
+ return $found->{id};
+ } else {
+ return;
+ }
+}
+
+sub id_to_pve {
+ my ($id) = @_;
+ my $resource = find_by('id', $id);
+ if ($resource) {
+ return $resource->{pve_type};
+ } else {
+ return;
+ }
+}
+
+# returns two references, $qm which holds qm.conf style key/values, and \@disks
+sub parse_ovf {
+ my ($ovf, $debug) = @_;
+
+ my $dom = XML::LibXML->load_xml(location => $ovf, no_blanks => 1);
+
+ # register the xml namespaces in a xpath context object
+ # 'ovf' is the default namespace so it will prepended to each xml element
+ my $xpc = XML::LibXML::XPathContext->new($dom);
+ $xpc->registerNs('ovf', 'http://schemas.dmtf.org/ovf/envelope/1');
+ $xpc->registerNs('rasd', 'http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData');
+ $xpc->registerNs('vssd', 'http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData');
+
+
+ # hash to save qm.conf parameters
+ my $qm;
+
+ #array to save a disk list
+ my @disks;
+
+ # easy xpath
+ # walk down the dom until we find the matching XML element
+ my $xpath_find_name = "/ovf:Envelope/ovf:VirtualSystem/ovf:Name";
+ my $ovf_name = $xpc->findvalue($xpath_find_name);
+
+ if ($ovf_name) {
+ # PVE::QemuServer::confdesc requires a valid DNS name
+ ($qm->{name} = $ovf_name) =~ s/[^a-zA-Z0-9\-\.]//g;
+ } else {
+ warn "warning: unable to parse the VM name in this OVF manifest, generating a default value\n";
+ }
+
+ # middle level xpath
+ # element[child] search the elements which have this [child]
+ my $processor_id = dtmf_name_to_id('Processor');
+ my $xpath_find_vcpu_count = "/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/ovf:Item[rasd:ResourceType=${processor_id}]/rasd:VirtualQuantity";
+ $qm->{'cores'} = $xpc->findvalue($xpath_find_vcpu_count);
+
+ my $memory_id = dtmf_name_to_id('Memory');
+ my $xpath_find_memory = ("/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/ovf:Item[rasd:ResourceType=${memory_id}]/rasd:VirtualQuantity");
+ $qm->{'memory'} = $xpc->findvalue($xpath_find_memory);
+
+ # middle level xpath
+ # here we expect multiple results, so we do not read the element value with
+ # findvalue() but store multiple elements with findnodes()
+ my $disk_id = dtmf_name_to_id('Disk Drive');
+ my $xpath_find_disks="/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/ovf:Item[rasd:ResourceType=${disk_id}]";
+ my @disk_items = $xpc->findnodes($xpath_find_disks);
+
+ # disks metadata is split in four different xml elements:
+ # * as an Item node of type DiskDrive in the VirtualHardwareSection
+ # * as an Disk node in the DiskSection
+ # * as a File node in the References section
+ # * each Item node also holds a reference to its owning controller
+ #
+ # we iterate over the list of Item nodes of type disk drive, and for each item,
+ # find the corresponding Disk node, and File node and owning controller
+ # when all the nodes has been found out, we copy the relevant information to
+ # a $pve_disk hash ref, which we push to @disks;
+
+ foreach my $item_node (@disk_items) {
+
+ my $disk_node;
+ my $file_node;
+ my $controller_node;
+ my $pve_disk;
+
+ print "disk item:\n", $item_node->toString(1), "\n" if $debug;
+
+ # from Item, find corresponding Disk node
+ # here the dot means the search should start from the current element in dom
+ my $host_resource = $xpc->findvalue('rasd:HostResource', $item_node);
+ my $disk_section_path;
+ my $disk_id;
+
+ # RFC 3986 "2.3. Unreserved Characters"
+ my $valid_uripath_chars = qr/[[:alnum:]]|[\-\._~]/;
+
+ if ($host_resource =~ m|^ovf:/(${valid_uripath_chars}+)/(${valid_uripath_chars}+)$|) {
+ $disk_section_path = $1;
+ $disk_id = $2;
+ } else {
+ warn "invalid host resource $host_resource, skipping\n";
+ next;
+ }
+ printf "disk section path: $disk_section_path and disk id: $disk_id\n" if $debug;
+
+ # tricky xpath
+ # @ means we filter the result query based on a the value of an item attribute ( @ = attribute)
+ # @ needs to be escaped to prevent Perl double quote interpolation
+ my $xpath_find_fileref = sprintf("/ovf:Envelope/ovf:DiskSection/\
+ovf:Disk[\@ovf:diskId='%s']/\@ovf:fileRef", $disk_id);
+ my $fileref = $xpc->findvalue($xpath_find_fileref);
+
+ my $valid_url_chars = qr@${valid_uripath_chars}|/@;
+ if (!$fileref || $fileref !~ m/^${valid_url_chars}+$/) {
+ warn "invalid host resource $host_resource, skipping\n";
+ next;
+ }
+
+ # from Disk Node, find corresponding filepath
+ my $xpath_find_filepath = sprintf("/ovf:Envelope/ovf:References/ovf:File[\@ovf:id='%s']/\@ovf:href", $fileref);
+ my $filepath = $xpc->findvalue($xpath_find_filepath);
+ if (!$filepath) {
+ warn "invalid file reference $fileref, skipping\n";
+ next;
+ }
+ print "file path: $filepath\n" if $debug;
+
+ # from Item, find owning Controller type
+ my $controller_id = $xpc->findvalue('rasd:Parent', $item_node);
+ my $xpath_find_parent_type = sprintf("/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/\
+ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
+ my $controller_type = $xpc->findvalue($xpath_find_parent_type);
+ if (!$controller_type) {
+ warn "invalid or missing controller: $controller_type, skipping\n";
+ next;
+ }
+ print "owning controller type: $controller_type\n" if $debug;
+
+ # extract corresponding Controller node details
+ my $adress_on_controller = $xpc->findvalue('rasd:AddressOnParent', $item_node);
+ my $pve_disk_address = id_to_pve($controller_type) . $adress_on_controller;
+
+ # resolve symlinks and relative path components
+ # and die if the diskimage is not somewhere under the $ovf path
+ my $ovf_dir = realpath(dirname(File::Spec->rel2abs($ovf)));
+ my $backing_file_path = realpath(join ('/', $ovf_dir, $filepath));
+ if ($backing_file_path !~ /^\Q${ovf_dir}\E/) {
+ die "error parsing $filepath, are you using a symlink ?\n";
+ }
+
+ if (!-e $backing_file_path) {
+ die "error parsing $filepath, file seems not to exist at $backing_file_path\n";
+ }
+
+ ($backing_file_path) = $backing_file_path =~ m|^(/.*)|; # untaint
+
+ my $virtual_size = PVE::Storage::file_size_info($backing_file_path);
+ die "error parsing $backing_file_path, cannot determine file size\n"
+ if !$virtual_size;
+
+ $pve_disk = {
+ disk_address => $pve_disk_address,
+ backing_file => $backing_file_path,
+ virtual_size => $virtual_size
+ };
+ push @disks, $pve_disk;
+
+ }
+
+ return {qm => $qm, disks => \@disks};
+}
+
+1;
diff --git a/src/PVE/Makefile b/src/PVE/Makefile
index d438804..e15a275 100644
--- a/src/PVE/Makefile
+++ b/src/PVE/Makefile
@@ -6,6 +6,7 @@ install:
install -D -m 0644 Diskmanage.pm ${DESTDIR}${PERLDIR}/PVE/Diskmanage.pm
install -D -m 0644 CephConfig.pm ${DESTDIR}${PERLDIR}/PVE/CephConfig.pm
make -C Storage install
+ make -C GuestImport install
make -C API2 install
make -C CLI install
diff --git a/src/test/Makefile b/src/test/Makefile
index c54b10f..12991da 100644
--- a/src/test/Makefile
+++ b/src/test/Makefile
@@ -1,6 +1,6 @@
all: test
-test: test_zfspoolplugin test_disklist test_bwlimit test_plugin
+test: test_zfspoolplugin test_disklist test_bwlimit test_plugin test_ovf
test_zfspoolplugin: run_test_zfspoolplugin.pl
./run_test_zfspoolplugin.pl
@@ -13,3 +13,6 @@ test_bwlimit: run_bwlimit_tests.pl
test_plugin: run_plugin_tests.pl
./run_plugin_tests.pl
+
+test_ovf: run_ovf_tests.pl
+ ./run_ovf_tests.pl
diff --git a/src/test/ovf_manifests/Win10-Liz-disk1.vmdk b/src/test/ovf_manifests/Win10-Liz-disk1.vmdk
new file mode 100644
index 0000000000000000000000000000000000000000..662354a3d1333a2f6c4364005e53bfe7cd8b9044
GIT binary patch
literal 65536
zcmeIvy>HV%7zbeUF`dK)46s<q+^B&Pi6H|eMIb;zP1VdMK8V$P$u?2T)IS|NNta69
zSXw<No$qu%-}$}AUq|21A0<ihr0Gwa-nQ%QGfCR@wmshsN%A;JUhL<u_T%+U7Sd<o
zW^TMU0^M{}R2S(eR@1Ur*Q@eVF^^#r%c@u{hyC#J%V?Nq?*?z)=UG^1Wn9+n(yx6B
z(=ujtJiA)QVP~;guI5EOE2iV-%_??6=%y!^b+aeU_aA6Z4X2azC>{U!a5_FoJCkDB
zKRozW{5{B<Li)YUBEQ&fJe$RRZCRbA$5|CacQiT<A<uvIHbq(g$>yIY=etVNVcI$B
zY@^?CwTN|j)tg?;i)G&AZFqPqoW(5P2K~XUq>9sqVVe!!?y@Y;)^#k~TefEvd2_XU
z^M@5mfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk4^iOdL%ftb
z5g<T-009C72;3>~`p!f^fB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZ
zfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&U
zAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C7
z2oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N
p0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK;Zui`~%h>XmtPp
literal 0
HcmV?d00001
diff --git a/src/test/ovf_manifests/Win10-Liz.ovf b/src/test/ovf_manifests/Win10-Liz.ovf
new file mode 100755
index 0000000..bf4b41a
--- /dev/null
+++ b/src/test/ovf_manifests/Win10-Liz.ovf
@@ -0,0 +1,142 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--Generated by VMware ovftool 4.1.0 (build-2982904), UTC time: 2017-02-07T13:50:15.265014Z-->
+<Envelope vmw:buildId="build-2982904" xmlns="http://schemas.dmtf.org/ovf/envelope/1" xmlns:cim="http://schemas.dmtf.org/wbem/wscim/1/common" xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1" xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" xmlns:vmw="http://www.vmware.com/schema/ovf" xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
+ <References>
+ <File ovf:href="Win10-Liz-disk1.vmdk" ovf:id="file1" ovf:size="9155243008"/>
+ </References>
+ <DiskSection>
+ <Info>Virtual disk information</Info>
+ <Disk ovf:capacity="128" ovf:capacityAllocationUnits="byte * 2^30" ovf:diskId="vmdisk1" ovf:fileRef="file1" ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized" ovf:populatedSize="16798056448"/>
+ </DiskSection>
+ <NetworkSection>
+ <Info>The list of logical networks</Info>
+ <Network ovf:name="bridged">
+ <Description>The bridged network</Description>
+ </Network>
+ </NetworkSection>
+ <VirtualSystem ovf:id="vm">
+ <Info>A virtual machine</Info>
+ <Name>Win10-Liz</Name>
+ <OperatingSystemSection ovf:id="1" vmw:osType="windows9_64Guest">
+ <Info>The kind of installed guest operating system</Info>
+ </OperatingSystemSection>
+ <VirtualHardwareSection>
+ <Info>Virtual hardware requirements</Info>
+ <System>
+ <vssd:ElementName>Virtual Hardware Family</vssd:ElementName>
+ <vssd:InstanceID>0</vssd:InstanceID>
+ <vssd:VirtualSystemIdentifier>Win10-Liz</vssd:VirtualSystemIdentifier>
+ <vssd:VirtualSystemType>vmx-11</vssd:VirtualSystemType>
+ </System>
+ <Item>
+ <rasd:AllocationUnits>hertz * 10^6</rasd:AllocationUnits>
+ <rasd:Description>Number of Virtual CPUs</rasd:Description>
+ <rasd:ElementName>4 virtual CPU(s)</rasd:ElementName>
+ <rasd:InstanceID>1</rasd:InstanceID>
+ <rasd:ResourceType>3</rasd:ResourceType>
+ <rasd:VirtualQuantity>4</rasd:VirtualQuantity>
+ </Item>
+ <Item>
+ <rasd:AllocationUnits>byte * 2^20</rasd:AllocationUnits>
+ <rasd:Description>Memory Size</rasd:Description>
+ <rasd:ElementName>6144MB of memory</rasd:ElementName>
+ <rasd:InstanceID>2</rasd:InstanceID>
+ <rasd:ResourceType>4</rasd:ResourceType>
+ <rasd:VirtualQuantity>6144</rasd:VirtualQuantity>
+ </Item>
+ <Item>
+ <rasd:Address>0</rasd:Address>
+ <rasd:Description>SATA Controller</rasd:Description>
+ <rasd:ElementName>sataController0</rasd:ElementName>
+ <rasd:InstanceID>3</rasd:InstanceID>
+ <rasd:ResourceSubType>vmware.sata.ahci</rasd:ResourceSubType>
+ <rasd:ResourceType>20</rasd:ResourceType>
+ </Item>
+ <Item ovf:required="false">
+ <rasd:Address>0</rasd:Address>
+ <rasd:Description>USB Controller (XHCI)</rasd:Description>
+ <rasd:ElementName>usb3</rasd:ElementName>
+ <rasd:InstanceID>4</rasd:InstanceID>
+ <rasd:ResourceSubType>vmware.usb.xhci</rasd:ResourceSubType>
+ <rasd:ResourceType>23</rasd:ResourceType>
+ </Item>
+ <Item ovf:required="false">
+ <rasd:Address>0</rasd:Address>
+ <rasd:Description>USB Controller (EHCI)</rasd:Description>
+ <rasd:ElementName>usb</rasd:ElementName>
+ <rasd:InstanceID>5</rasd:InstanceID>
+ <rasd:ResourceSubType>vmware.usb.ehci</rasd:ResourceSubType>
+ <rasd:ResourceType>23</rasd:ResourceType>
+ <vmw:Config ovf:required="false" vmw:key="ehciEnabled" vmw:value="true"/>
+ </Item>
+ <Item>
+ <rasd:Address>0</rasd:Address>
+ <rasd:Description>SCSI Controller</rasd:Description>
+ <rasd:ElementName>scsiController0</rasd:ElementName>
+ <rasd:InstanceID>6</rasd:InstanceID>
+ <rasd:ResourceSubType>lsilogicsas</rasd:ResourceSubType>
+ <rasd:ResourceType>6</rasd:ResourceType>
+ </Item>
+ <Item ovf:required="false">
+ <rasd:AutomaticAllocation>true</rasd:AutomaticAllocation>
+ <rasd:ElementName>serial0</rasd:ElementName>
+ <rasd:InstanceID>7</rasd:InstanceID>
+ <rasd:ResourceType>21</rasd:ResourceType>
+ <vmw:Config ovf:required="false" vmw:key="yieldOnPoll" vmw:value="false"/>
+ </Item>
+ <Item>
+ <rasd:AddressOnParent>0</rasd:AddressOnParent>
+ <rasd:ElementName>disk0</rasd:ElementName>
+ <rasd:HostResource>ovf:/disk/vmdisk1</rasd:HostResource>
+ <rasd:InstanceID>8</rasd:InstanceID>
+ <rasd:Parent>6</rasd:Parent>
+ <rasd:ResourceType>17</rasd:ResourceType>
+ </Item>
+ <Item>
+ <rasd:AddressOnParent>2</rasd:AddressOnParent>
+ <rasd:AutomaticAllocation>true</rasd:AutomaticAllocation>
+ <rasd:Connection>bridged</rasd:Connection>
+ <rasd:Description>E1000e ethernet adapter on "bridged"</rasd:Description>
+ <rasd:ElementName>ethernet0</rasd:ElementName>
+ <rasd:InstanceID>9</rasd:InstanceID>
+ <rasd:ResourceSubType>E1000e</rasd:ResourceSubType>
+ <rasd:ResourceType>10</rasd:ResourceType>
+ <vmw:Config ovf:required="false" vmw:key="wakeOnLanEnabled" vmw:value="false"/>
+ </Item>
+ <Item ovf:required="false">
+ <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
+ <rasd:ElementName>sound</rasd:ElementName>
+ <rasd:InstanceID>10</rasd:InstanceID>
+ <rasd:ResourceSubType>vmware.soundcard.hdaudio</rasd:ResourceSubType>
+ <rasd:ResourceType>1</rasd:ResourceType>
+ </Item>
+ <Item ovf:required="false">
+ <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
+ <rasd:ElementName>video</rasd:ElementName>
+ <rasd:InstanceID>11</rasd:InstanceID>
+ <rasd:ResourceType>24</rasd:ResourceType>
+ <vmw:Config ovf:required="false" vmw:key="enable3DSupport" vmw:value="true"/>
+ </Item>
+ <Item ovf:required="false">
+ <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
+ <rasd:ElementName>vmci</rasd:ElementName>
+ <rasd:InstanceID>12</rasd:InstanceID>
+ <rasd:ResourceSubType>vmware.vmci</rasd:ResourceSubType>
+ <rasd:ResourceType>1</rasd:ResourceType>
+ </Item>
+ <Item ovf:required="false">
+ <rasd:AddressOnParent>1</rasd:AddressOnParent>
+ <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
+ <rasd:ElementName>cdrom0</rasd:ElementName>
+ <rasd:InstanceID>13</rasd:InstanceID>
+ <rasd:Parent>3</rasd:Parent>
+ <rasd:ResourceType>15</rasd:ResourceType>
+ </Item>
+ <vmw:Config ovf:required="false" vmw:key="memoryHotAddEnabled" vmw:value="true"/>
+ <vmw:Config ovf:required="false" vmw:key="powerOpInfo.powerOffType" vmw:value="soft"/>
+ <vmw:Config ovf:required="false" vmw:key="powerOpInfo.resetType" vmw:value="soft"/>
+ <vmw:Config ovf:required="false" vmw:key="powerOpInfo.suspendType" vmw:value="soft"/>
+ <vmw:Config ovf:required="false" vmw:key="tools.syncTimeWithHost" vmw:value="false"/>
+ </VirtualHardwareSection>
+ </VirtualSystem>
+</Envelope>
diff --git a/src/test/ovf_manifests/Win10-Liz_no_default_ns.ovf b/src/test/ovf_manifests/Win10-Liz_no_default_ns.ovf
new file mode 100755
index 0000000..b93540f
--- /dev/null
+++ b/src/test/ovf_manifests/Win10-Liz_no_default_ns.ovf
@@ -0,0 +1,142 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--Generated by VMware ovftool 4.1.0 (build-2982904), UTC time: 2017-02-07T13:50:15.265014Z-->
+<Envelope vmw:buildId="build-2982904" xmlns="http://schemas.dmtf.org/ovf/envelope/1" xmlns:cim="http://schemas.dmtf.org/wbem/wscim/1/common" xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1" xmlns:vmw="http://www.vmware.com/schema/ovf" xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
+ <References>
+ <File ovf:href="Win10-Liz-disk1.vmdk" ovf:id="file1" ovf:size="9155243008"/>
+ </References>
+ <DiskSection>
+ <Info>Virtual disk information</Info>
+ <Disk ovf:capacity="128" ovf:capacityAllocationUnits="byte * 2^30" ovf:diskId="vmdisk1" ovf:fileRef="file1" ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized" ovf:populatedSize="16798056448"/>
+ </DiskSection>
+ <NetworkSection>
+ <Info>The list of logical networks</Info>
+ <Network ovf:name="bridged">
+ <Description>The bridged network</Description>
+ </Network>
+ </NetworkSection>
+ <VirtualSystem ovf:id="vm">
+ <Info>A virtual machine</Info>
+ <Name>Win10-Liz</Name>
+ <OperatingSystemSection ovf:id="1" vmw:osType="windows9_64Guest">
+ <Info>The kind of installed guest operating system</Info>
+ </OperatingSystemSection>
+ <VirtualHardwareSection>
+ <Info>Virtual hardware requirements</Info>
+ <System>
+ <vssd:ElementName>Virtual Hardware Family</vssd:ElementName>
+ <vssd:InstanceID>0</vssd:InstanceID>
+ <vssd:VirtualSystemIdentifier>Win10-Liz</vssd:VirtualSystemIdentifier>
+ <vssd:VirtualSystemType>vmx-11</vssd:VirtualSystemType>
+ </System>
+ <Item>
+ <rasd:AllocationUnits xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">hertz * 10^6</rasd:AllocationUnits>
+ <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">Number of Virtual CPUs</rasd:Description>
+ <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">4 virtual CPU(s)</rasd:ElementName>
+ <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">1</rasd:InstanceID>
+ <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">3</rasd:ResourceType>
+ <rasd:VirtualQuantity xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">4</rasd:VirtualQuantity>
+ </Item>
+ <Item>
+ <rasd:AllocationUnits xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">byte * 2^20</rasd:AllocationUnits>
+ <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">Memory Size</rasd:Description>
+ <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">6144MB of memory</rasd:ElementName>
+ <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">2</rasd:InstanceID>
+ <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">4</rasd:ResourceType>
+ <rasd:VirtualQuantity xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">6144</rasd:VirtualQuantity>
+ </Item>
+ <Item>
+ <rasd:Address xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">0</rasd:Address>
+ <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">SATA Controller</rasd:Description>
+ <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">sataController0</rasd:ElementName>
+ <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">3</rasd:InstanceID>
+ <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">vmware.sata.ahci</rasd:ResourceSubType>
+ <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">20</rasd:ResourceType>
+ </Item>
+ <Item ovf:required="false">
+ <rasd:Address xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">0</rasd:Address>
+ <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">USB Controller (XHCI)</rasd:Description>
+ <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">usb3</rasd:ElementName>
+ <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">4</rasd:InstanceID>
+ <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">vmware.usb.xhci</rasd:ResourceSubType>
+ <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">23</rasd:ResourceType>
+ </Item>
+ <Item ovf:required="false">
+ <rasd:Address xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">0</rasd:Address>
+ <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">USB Controller (EHCI)</rasd:Description>
+ <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">usb</rasd:ElementName>
+ <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">5</rasd:InstanceID>
+ <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">vmware.usb.ehci</rasd:ResourceSubType>
+ <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">23</rasd:ResourceType>
+ <vmw:Config ovf:required="false" vmw:key="ehciEnabled" vmw:value="true"/>
+ </Item>
+ <Item>
+ <rasd:Address xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">0</rasd:Address>
+ <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">SCSI Controller</rasd:Description>
+ <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">scsiController0</rasd:ElementName>
+ <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">6</rasd:InstanceID>
+ <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">lsilogicsas</rasd:ResourceSubType>
+ <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">6</rasd:ResourceType>
+ </Item>
+ <Item ovf:required="false">
+ <rasd:AutomaticAllocation xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">true</rasd:AutomaticAllocation>
+ <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">serial0</rasd:ElementName>
+ <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">7</rasd:InstanceID>
+ <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">21</rasd:ResourceType>
+ <vmw:Config ovf:required="false" vmw:key="yieldOnPoll" vmw:value="false"/>
+ </Item>
+ <Item>
+ <rasd:AddressOnParent xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" >0</rasd:AddressOnParent>
+ <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" >disk0</rasd:ElementName>
+ <rasd:HostResource xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" >ovf:/disk/vmdisk1</rasd:HostResource>
+ <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">8</rasd:InstanceID>
+ <rasd:Parent xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">6</rasd:Parent>
+ <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">17</rasd:ResourceType>
+ </Item>
+ <Item>
+ <rasd:AddressOnParent xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">2</rasd:AddressOnParent>
+ <rasd:AutomaticAllocation xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">true</rasd:AutomaticAllocation>
+ <rasd:Connection xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">bridged</rasd:Connection>
+ <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">E1000e ethernet adapter on "bridged"</rasd:Description>
+ <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">ethernet0</rasd:ElementName>
+ <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">9</rasd:InstanceID>
+ <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">E1000e</rasd:ResourceSubType>
+ <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">10</rasd:ResourceType>
+ <vmw:Config ovf:required="false" vmw:key="wakeOnLanEnabled" vmw:value="false"/>
+ </Item>
+ <Item ovf:required="false">
+ <rasd:AutomaticAllocation xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">false</rasd:AutomaticAllocation>
+ <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">sound</rasd:ElementName>
+ <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">10</rasd:InstanceID>
+ <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">vmware.soundcard.hdaudio</rasd:ResourceSubType>
+ <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">1</rasd:ResourceType>
+ </Item>
+ <Item ovf:required="false">
+ <rasd:AutomaticAllocation xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">false</rasd:AutomaticAllocation>
+ <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">video</rasd:ElementName>
+ <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">11</rasd:InstanceID>
+ <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">24</rasd:ResourceType>
+ <vmw:Config ovf:required="false" vmw:key="enable3DSupport" vmw:value="true"/>
+ </Item>
+ <Item ovf:required="false">
+ <rasd:AutomaticAllocation xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">false</rasd:AutomaticAllocation>
+ <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">vmci</rasd:ElementName>
+ <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">12</rasd:InstanceID>
+ <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">vmware.vmci</rasd:ResourceSubType>
+ <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">1</rasd:ResourceType>
+ </Item>
+ <Item ovf:required="false">
+ <rasd:AddressOnParent xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">1</rasd:AddressOnParent>
+ <rasd:AutomaticAllocation xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">false</rasd:AutomaticAllocation>
+ <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">cdrom0</rasd:ElementName>
+ <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">13</rasd:InstanceID>
+ <rasd:Parent xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">3</rasd:Parent>
+ <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">15</rasd:ResourceType>
+ </Item>
+ <vmw:Config ovf:required="false" vmw:key="memoryHotAddEnabled" vmw:value="true"/>
+ <vmw:Config ovf:required="false" vmw:key="powerOpInfo.powerOffType" vmw:value="soft"/>
+ <vmw:Config ovf:required="false" vmw:key="powerOpInfo.resetType" vmw:value="soft"/>
+ <vmw:Config ovf:required="false" vmw:key="powerOpInfo.suspendType" vmw:value="soft"/>
+ <vmw:Config ovf:required="false" vmw:key="tools.syncTimeWithHost" vmw:value="false"/>
+ </VirtualHardwareSection>
+ </VirtualSystem>
+</Envelope>
diff --git a/src/test/ovf_manifests/Win_2008_R2_two-disks.ovf b/src/test/ovf_manifests/Win_2008_R2_two-disks.ovf
new file mode 100755
index 0000000..a563aab
--- /dev/null
+++ b/src/test/ovf_manifests/Win_2008_R2_two-disks.ovf
@@ -0,0 +1,145 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--Generated by VMware ovftool 4.1.0 (build-2982904), UTC time: 2017-02-27T15:09:29.768974Z-->
+<Envelope vmw:buildId="build-2982904" xmlns="http://schemas.dmtf.org/ovf/envelope/1" xmlns:cim="http://schemas.dmtf.org/wbem/wscim/1/common" xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1" xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" xmlns:vmw="http://www.vmware.com/schema/ovf" xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
+ <References>
+ <File ovf:href="disk1.vmdk" ovf:id="file1" ovf:size="3481968640"/>
+ <File ovf:href="disk2.vmdk" ovf:id="file2" ovf:size="68096"/>
+ </References>
+ <DiskSection>
+ <Info>Virtual disk information</Info>
+ <Disk ovf:capacity="40" ovf:capacityAllocationUnits="byte * 2^30" ovf:diskId="vmdisk1" ovf:fileRef="file1" ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized" ovf:populatedSize="7684882432"/>
+ <Disk ovf:capacity="1" ovf:capacityAllocationUnits="byte * 2^30" ovf:diskId="vmdisk2" ovf:fileRef="file2" ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized" ovf:populatedSize="0"/>
+ </DiskSection>
+ <NetworkSection>
+ <Info>The list of logical networks</Info>
+ <Network ovf:name="bridged">
+ <Description>The bridged network</Description>
+ </Network>
+ </NetworkSection>
+ <VirtualSystem ovf:id="vm">
+ <Info>A virtual machine</Info>
+ <Name>Win_2008-R2x64</Name>
+ <OperatingSystemSection ovf:id="103" vmw:osType="windows7Server64Guest">
+ <Info>The kind of installed guest operating system</Info>
+ </OperatingSystemSection>
+ <VirtualHardwareSection>
+ <Info>Virtual hardware requirements</Info>
+ <System>
+ <vssd:ElementName>Virtual Hardware Family</vssd:ElementName>
+ <vssd:InstanceID>0</vssd:InstanceID>
+ <vssd:VirtualSystemIdentifier>Win_2008-R2x64</vssd:VirtualSystemIdentifier>
+ <vssd:VirtualSystemType>vmx-11</vssd:VirtualSystemType>
+ </System>
+ <Item>
+ <rasd:AllocationUnits>hertz * 10^6</rasd:AllocationUnits>
+ <rasd:Description>Number of Virtual CPUs</rasd:Description>
+ <rasd:ElementName>1 virtual CPU(s)</rasd:ElementName>
+ <rasd:InstanceID>1</rasd:InstanceID>
+ <rasd:ResourceType>3</rasd:ResourceType>
+ <rasd:VirtualQuantity>1</rasd:VirtualQuantity>
+ </Item>
+ <Item>
+ <rasd:AllocationUnits>byte * 2^20</rasd:AllocationUnits>
+ <rasd:Description>Memory Size</rasd:Description>
+ <rasd:ElementName>2048MB of memory</rasd:ElementName>
+ <rasd:InstanceID>2</rasd:InstanceID>
+ <rasd:ResourceType>4</rasd:ResourceType>
+ <rasd:VirtualQuantity>2048</rasd:VirtualQuantity>
+ </Item>
+ <Item>
+ <rasd:Address>0</rasd:Address>
+ <rasd:Description>SATA Controller</rasd:Description>
+ <rasd:ElementName>sataController0</rasd:ElementName>
+ <rasd:InstanceID>3</rasd:InstanceID>
+ <rasd:ResourceSubType>vmware.sata.ahci</rasd:ResourceSubType>
+ <rasd:ResourceType>20</rasd:ResourceType>
+ </Item>
+ <Item ovf:required="false">
+ <rasd:Address>0</rasd:Address>
+ <rasd:Description>USB Controller (EHCI)</rasd:Description>
+ <rasd:ElementName>usb</rasd:ElementName>
+ <rasd:InstanceID>4</rasd:InstanceID>
+ <rasd:ResourceSubType>vmware.usb.ehci</rasd:ResourceSubType>
+ <rasd:ResourceType>23</rasd:ResourceType>
+ <vmw:Config ovf:required="false" vmw:key="ehciEnabled" vmw:value="true"/>
+ </Item>
+ <Item>
+ <rasd:Address>0</rasd:Address>
+ <rasd:Description>SCSI Controller</rasd:Description>
+ <rasd:ElementName>scsiController0</rasd:ElementName>
+ <rasd:InstanceID>5</rasd:InstanceID>
+ <rasd:ResourceSubType>lsilogicsas</rasd:ResourceSubType>
+ <rasd:ResourceType>6</rasd:ResourceType>
+ </Item>
+ <Item ovf:required="false">
+ <rasd:AutomaticAllocation>true</rasd:AutomaticAllocation>
+ <rasd:ElementName>serial0</rasd:ElementName>
+ <rasd:InstanceID>6</rasd:InstanceID>
+ <rasd:ResourceType>21</rasd:ResourceType>
+ <vmw:Config ovf:required="false" vmw:key="yieldOnPoll" vmw:value="false"/>
+ </Item>
+ <Item>
+ <rasd:AddressOnParent>0</rasd:AddressOnParent>
+ <rasd:ElementName>disk0</rasd:ElementName>
+ <rasd:HostResource>ovf:/disk/vmdisk1</rasd:HostResource>
+ <rasd:InstanceID>7</rasd:InstanceID>
+ <rasd:Parent>5</rasd:Parent>
+ <rasd:ResourceType>17</rasd:ResourceType>
+ </Item>
+ <Item>
+ <rasd:AddressOnParent>1</rasd:AddressOnParent>
+ <rasd:ElementName>disk1</rasd:ElementName>
+ <rasd:HostResource>ovf:/disk/vmdisk2</rasd:HostResource>
+ <rasd:InstanceID>8</rasd:InstanceID>
+ <rasd:Parent>5</rasd:Parent>
+ <rasd:ResourceType>17</rasd:ResourceType>
+ </Item>
+ <Item>
+ <rasd:AddressOnParent>2</rasd:AddressOnParent>
+ <rasd:AutomaticAllocation>true</rasd:AutomaticAllocation>
+ <rasd:Connection>bridged</rasd:Connection>
+ <rasd:Description>E1000 ethernet adapter on "bridged"</rasd:Description>
+ <rasd:ElementName>ethernet0</rasd:ElementName>
+ <rasd:InstanceID>9</rasd:InstanceID>
+ <rasd:ResourceSubType>E1000</rasd:ResourceSubType>
+ <rasd:ResourceType>10</rasd:ResourceType>
+ <vmw:Config ovf:required="false" vmw:key="wakeOnLanEnabled" vmw:value="false"/>
+ </Item>
+ <Item ovf:required="false">
+ <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
+ <rasd:ElementName>sound</rasd:ElementName>
+ <rasd:InstanceID>10</rasd:InstanceID>
+ <rasd:ResourceSubType>vmware.soundcard.hdaudio</rasd:ResourceSubType>
+ <rasd:ResourceType>1</rasd:ResourceType>
+ </Item>
+ <Item ovf:required="false">
+ <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
+ <rasd:ElementName>video</rasd:ElementName>
+ <rasd:InstanceID>11</rasd:InstanceID>
+ <rasd:ResourceType>24</rasd:ResourceType>
+ <vmw:Config ovf:required="false" vmw:key="enable3DSupport" vmw:value="true"/>
+ </Item>
+ <Item ovf:required="false">
+ <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
+ <rasd:ElementName>vmci</rasd:ElementName>
+ <rasd:InstanceID>12</rasd:InstanceID>
+ <rasd:ResourceSubType>vmware.vmci</rasd:ResourceSubType>
+ <rasd:ResourceType>1</rasd:ResourceType>
+ </Item>
+ <Item ovf:required="false">
+ <rasd:AddressOnParent>1</rasd:AddressOnParent>
+ <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
+ <rasd:ElementName>cdrom0</rasd:ElementName>
+ <rasd:InstanceID>13</rasd:InstanceID>
+ <rasd:Parent>3</rasd:Parent>
+ <rasd:ResourceType>15</rasd:ResourceType>
+ </Item>
+ <vmw:Config ovf:required="false" vmw:key="cpuHotAddEnabled" vmw:value="true"/>
+ <vmw:Config ovf:required="false" vmw:key="memoryHotAddEnabled" vmw:value="true"/>
+ <vmw:Config ovf:required="false" vmw:key="powerOpInfo.powerOffType" vmw:value="soft"/>
+ <vmw:Config ovf:required="false" vmw:key="powerOpInfo.resetType" vmw:value="soft"/>
+ <vmw:Config ovf:required="false" vmw:key="powerOpInfo.suspendType" vmw:value="soft"/>
+ <vmw:Config ovf:required="false" vmw:key="tools.syncTimeWithHost" vmw:value="false"/>
+ </VirtualHardwareSection>
+ </VirtualSystem>
+</Envelope>
diff --git a/src/test/ovf_manifests/disk1.vmdk b/src/test/ovf_manifests/disk1.vmdk
new file mode 100644
index 0000000000000000000000000000000000000000..8660602343a1a955f9bcf2e6beaed99316dd8167
GIT binary patch
literal 65536
zcmeIvy>HV%7zbeUF`dK)46s<q9ua7|WuUkSgpg2EwX+*viPe0`HWAtSr(-ASlAWQ|
zbJF?F{+-YFKK_yYyn2=-$&0qXY<t)4ch@B8o_Fo_en^t%N%H0}e|H$~AF`0X3J-JR
zqY>z*Sy|tuS*)j3xo%d~*K!`iCRTO1T8@X|%lB-2b2}Q2@{gmi&a1d=x<|K%7N%9q
zn|Qfh$8m45TCV10Gb^W)c4ZxVA@tMpzfJp2S{y#m?iwzx)01@a>+{9rJna?j=ZAyM
zqPW{FznsOxiSi~-&+<BkewLkuP!u<VO<6U6^7*&xtNr=XaoRiS?V{gtwTMl%9Za|L
za#^%_7k)SjXE85!!SM7bspGUQewUqo+Glx@ubWtPwRL-yMO)CL`L7O2fB*pk1PBly
zK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pkPg~&a(=JbS1PBlyK!5-N0!ISx
zkM7+PAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+
z009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBly
zK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF
z5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk
d1PBlyK!5-N0t5&UAV7cs0RjXF5cr=0{{Ra(Wheju
literal 0
HcmV?d00001
diff --git a/src/test/ovf_manifests/disk2.vmdk b/src/test/ovf_manifests/disk2.vmdk
new file mode 100644
index 0000000000000000000000000000000000000000..c4634513348b392202898374f1c8d2d51d565b27
GIT binary patch
literal 65536
zcmeIvy>HV%7zbeUF`dK)46s<q9#IIDI%J@v2!xPOQ?;`jUy0Rx$u<$$`lr`+(j_}X
ztLLQio&7tX?|uAp{Oj^rk|Zyh{<7(9yX&q=(mrq7>)ntf&y(cMe*SJh-aTX?eH9+&
z#z!O2Psc@dn~q~OEsJ%%D!&!;7&fu2iq&#-6u$l#k8ZBB&%=0f64qH6mv#5(X4k^B
zj9DEow(B_REmq6byr^fzbkeM>VlRY#diJkw-bwTQ2bx{O`BgehC%?a(PtMX_-hBS!
zV6(_?yX6<NxIa-=XX$BH#n2y*PeaJ_>%pcd>%ZCj`_<*{eCa6d4SQYmC$1K;F1Lf}
zc3v#=CU3(J2jMJcc^4cVA0$<rHpO?@@uyvu<=MK9Wm{XjSCKabJ(~aOpacjIAV7cs
z0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAn>#W-ahT}R7ZdS0RjXF5Fl_M
z@c!W5Edc@q2oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk
z1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs
z0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZ
zfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&U
eAV7cs0RjXF5FkK+009C72oNAZfB=F2DR2*l=VfOA
literal 0
HcmV?d00001
diff --git a/src/test/run_ovf_tests.pl b/src/test/run_ovf_tests.pl
new file mode 100755
index 0000000..5a80ab2
--- /dev/null
+++ b/src/test/run_ovf_tests.pl
@@ -0,0 +1,71 @@
+#!/usr/bin/perl
+
+use strict;
+use warnings;
+use lib qw(..); # prepend .. to @INC so we use the local version of PVE packages
+
+use FindBin '$Bin';
+use PVE::GuestImport::OVF;
+use Test::More;
+
+use Data::Dumper;
+
+my $test_manifests = join ('/', $Bin, 'ovf_manifests');
+
+print "parsing ovfs\n";
+
+my $win2008 = eval { PVE::GuestImport::OVF::parse_ovf("$test_manifests/Win_2008_R2_two-disks.ovf") };
+if (my $err = $@) {
+ fail('parse win2008');
+ warn("error: $err\n");
+} else {
+ ok('parse win2008');
+}
+my $win10 = eval { PVE::GuestImport::OVF::parse_ovf("$test_manifests/Win10-Liz.ovf") };
+if (my $err = $@) {
+ fail('parse win10');
+ warn("error: $err\n");
+} else {
+ ok('parse win10');
+}
+my $win10noNs = eval { PVE::GuestImport::OVF::parse_ovf("$test_manifests/Win10-Liz_no_default_ns.ovf") };
+if (my $err = $@) {
+ fail("parse win10 no default rasd NS");
+ warn("error: $err\n");
+} else {
+ ok('parse win10 no default rasd NS');
+}
+
+print "testing disks\n";
+
+is($win2008->{disks}->[0]->{disk_address}, 'scsi0', 'multidisk vm has the correct first disk controller');
+is($win2008->{disks}->[0]->{backing_file}, "$test_manifests/disk1.vmdk", 'multidisk vm has the correct first disk backing device');
+is($win2008->{disks}->[0]->{virtual_size}, 2048, 'multidisk vm has the correct first disk size');
+
+is($win2008->{disks}->[1]->{disk_address}, 'scsi1', 'multidisk vm has the correct second disk controller');
+is($win2008->{disks}->[1]->{backing_file}, "$test_manifests/disk2.vmdk", 'multidisk vm has the correct second disk backing device');
+is($win2008->{disks}->[1]->{virtual_size}, 2048, 'multidisk vm has the correct second disk size');
+
+is($win10->{disks}->[0]->{disk_address}, 'scsi0', 'single disk vm has the correct disk controller');
+is($win10->{disks}->[0]->{backing_file}, "$test_manifests/Win10-Liz-disk1.vmdk", 'single disk vm has the correct disk backing device');
+is($win10->{disks}->[0]->{virtual_size}, 2048, 'single disk vm has the correct size');
+
+is($win10noNs->{disks}->[0]->{disk_address}, 'scsi0', 'single disk vm (no default rasd NS) has the correct disk controller');
+is($win10noNs->{disks}->[0]->{backing_file}, "$test_manifests/Win10-Liz-disk1.vmdk", 'single disk vm (no default rasd NS) has the correct disk backing device');
+is($win10noNs->{disks}->[0]->{virtual_size}, 2048, 'single disk vm (no default rasd NS) has the correct size');
+
+print "\ntesting vm.conf extraction\n";
+
+is($win2008->{qm}->{name}, 'Win2008-R2x64', 'win2008 VM name is correct');
+is($win2008->{qm}->{memory}, '2048', 'win2008 VM memory is correct');
+is($win2008->{qm}->{cores}, '1', 'win2008 VM cores are correct');
+
+is($win10->{qm}->{name}, 'Win10-Liz', 'win10 VM name is correct');
+is($win10->{qm}->{memory}, '6144', 'win10 VM memory is correct');
+is($win10->{qm}->{cores}, '4', 'win10 VM cores are correct');
+
+is($win10noNs->{qm}->{name}, 'Win10-Liz', 'win10 VM (no default rasd NS) name is correct');
+is($win10noNs->{qm}->{memory}, '6144', 'win10 VM (no default rasd NS) memory is correct');
+is($win10noNs->{qm}->{cores}, '4', 'win10 VM (no default rasd NS) cores are correct');
+
+done_testing();
--
2.39.5
* [pve-devel] applied: [PATCH storage v6 01/12] copy OVF.pm from qemu-server
2024-11-15 15:17 ` [pve-devel] [PATCH storage v6 01/12] copy OVF.pm from qemu-server Dominik Csapak
@ 2024-11-17 15:50 ` Thomas Lamprecht
0 siblings, 0 replies; 68+ messages in thread
From: Thomas Lamprecht @ 2024-11-17 15:50 UTC (permalink / raw)
To: Proxmox VE development discussion, Dominik Csapak
On 15.11.24 16:17, Dominik Csapak wrote:
> copies the OVF.pm and relevant ovf tests from qemu-server.
> We need it here, and it uses PVE::Storage already, and since there is no
> intermediary package/repository we could put it, it seems fitting in
> here.
>
> Put it in a new GuestImport module
>
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
> changes from v5:
> * omitted leftover hunk in makefile
>
> debian/control | 2 +
> src/PVE/GuestImport/Makefile | 3 +
> src/PVE/GuestImport/OVF.pm | 241 ++++++++++++++++++
> src/PVE/Makefile | 1 +
> src/test/Makefile | 5 +-
> src/test/ovf_manifests/Win10-Liz-disk1.vmdk | Bin 0 -> 65536 bytes
> src/test/ovf_manifests/Win10-Liz.ovf | 142 +++++++++++
> .../ovf_manifests/Win10-Liz_no_default_ns.ovf | 142 +++++++++++
> .../ovf_manifests/Win_2008_R2_two-disks.ovf | 145 +++++++++++
> src/test/ovf_manifests/disk1.vmdk | Bin 0 -> 65536 bytes
> src/test/ovf_manifests/disk2.vmdk | Bin 0 -> 65536 bytes
> src/test/run_ovf_tests.pl | 71 ++++++
> 12 files changed, 751 insertions(+), 1 deletion(-)
> create mode 100644 src/PVE/GuestImport/Makefile
> create mode 100644 src/PVE/GuestImport/OVF.pm
> create mode 100644 src/test/ovf_manifests/Win10-Liz-disk1.vmdk
> create mode 100755 src/test/ovf_manifests/Win10-Liz.ovf
> create mode 100755 src/test/ovf_manifests/Win10-Liz_no_default_ns.ovf
> create mode 100755 src/test/ovf_manifests/Win_2008_R2_two-disks.ovf
> create mode 100644 src/test/ovf_manifests/disk1.vmdk
> create mode 100644 src/test/ovf_manifests/disk2.vmdk
> create mode 100755 src/test/run_ovf_tests.pl
>
>
applied, with the commit message reworded to avoid '.pm' file endings (they
add no information here) and to add more background on why the storage
package was chosen, thanks!
* [pve-devel] [PATCH storage v6 02/12] plugin: dir: implement import content type
2024-11-15 15:17 [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages Dominik Csapak
2024-11-15 15:17 ` [pve-devel] [PATCH storage v6 01/12] copy OVF.pm from qemu-server Dominik Csapak
@ 2024-11-15 15:17 ` Dominik Csapak
2024-11-18 12:16 ` Fiona Ebner
2024-11-15 15:17 ` [pve-devel] [PATCH storage v6 03/12] plugin: dir: handle ova files for import Dominik Csapak
` (28 subsequent siblings)
30 siblings, 1 reply; 68+ messages in thread
From: Dominik Csapak @ 2024-11-15 15:17 UTC (permalink / raw)
To: pve-devel
in DirPlugin and not Plugin (because of cyclic dependency of
Plugin -> OVF -> Storage -> Plugin otherwise)
only ovf is currently supported (though ova files will be shown in the import
listing); the referenced files are expected not to be in a subdir, but
adjacent to the ovf file.
listed will be all ovf/qcow2/raw/vmdk files: ovf because it can be imported,
and the rest because they can be used via the 'import-from' mechanism of
qemu-server.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
changes from v2:
* moved check for .ova into next patch
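For illustration (not part of the patch): a sketch of what the extended
parse_volname() returns for an import volume, mirroring the parse_volname
test case added below.

use strict;
use warnings;

use PVE::Storage::DirPlugin;

# same destructuring that get_import_metadata() below uses
my ($vtype, $name, undef, undef, undef, undef, $fmt) =
    PVE::Storage::DirPlugin->parse_volname('import/import.ovf');

# $vtype eq 'import', $name eq 'import.ovf', $fmt eq 'ovf'

The corresponding volid would be e.g. 'local:import/import.ovf' (the storage
name is an example); qcow2/raw/vmdk entries listed the same way are what
qemu-server's 'import-from' can then reference, as noted above.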
src/PVE/GuestImport/OVF.pm | 5 ++++-
src/PVE/Storage.pm | 8 +++++++
src/PVE/Storage/DirPlugin.pm | 36 +++++++++++++++++++++++++++++-
src/PVE/Storage/Plugin.pm | 11 ++++++++-
src/test/parse_volname_test.pm | 13 +++++++++++
src/test/path_to_volume_id_test.pm | 13 +++++++++++
6 files changed, 83 insertions(+), 3 deletions(-)
diff --git a/src/PVE/GuestImport/OVF.pm b/src/PVE/GuestImport/OVF.pm
index 3950289..29dfaad 100644
--- a/src/PVE/GuestImport/OVF.pm
+++ b/src/PVE/GuestImport/OVF.pm
@@ -221,6 +221,8 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
}
($backing_file_path) = $backing_file_path =~ m|^(/.*)|; # untaint
+ ($filepath) = $filepath =~ m|^(${PVE::Storage::SAFE_CHAR_CLASS_RE}+)$|; # untaint & check no sub/parent dirs
+ die "invalid path\n" if !$filepath;
my $virtual_size = PVE::Storage::file_size_info($backing_file_path);
die "error parsing $backing_file_path, cannot determine file size\n"
@@ -229,7 +231,8 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
$pve_disk = {
disk_address => $pve_disk_address,
backing_file => $backing_file_path,
- virtual_size => $virtual_size
+ virtual_size => $virtual_size,
+ relative_path => $filepath,
};
push @disks, $pve_disk;
diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
index b876651..78a3405 100755
--- a/src/PVE/Storage.pm
+++ b/src/PVE/Storage.pm
@@ -114,6 +114,10 @@ our $VZTMPL_EXT_RE_1 = qr/\.tar\.(gz|xz|zst|bz2)/i;
our $BACKUP_EXT_RE_2 = qr/\.(tgz|(?:tar|vma)(?:\.(${\PVE::Storage::Plugin::COMPRESSOR_RE}))?)/;
+our $IMPORT_EXT_RE_1 = qr/\.(ovf|qcow2|raw|vmdk)/;
+
+our $SAFE_CHAR_CLASS_RE = qr/[a-zA-Z0-9\-\.\+\=\_]/;
+
# FIXME remove with PVE 9.0, add versioned breaks for pve-manager
our $vztmpl_extension_re = $VZTMPL_EXT_RE_1;
@@ -612,6 +616,7 @@ sub path_to_volume_id {
my $backupdir = $plugin->get_subdir($scfg, 'backup');
my $privatedir = $plugin->get_subdir($scfg, 'rootdir');
my $snippetsdir = $plugin->get_subdir($scfg, 'snippets');
+ my $importdir = $plugin->get_subdir($scfg, 'import');
if ($path =~ m!^$imagedir/(\d+)/([^/\s]+)$!) {
my $vmid = $1;
@@ -640,6 +645,9 @@ sub path_to_volume_id {
} elsif ($path =~ m!^$snippetsdir/([^/]+)$!) {
my $name = $1;
return ('snippets', "$sid:snippets/$name");
+ } elsif ($path =~ m!^$importdir/(${SAFE_CHAR_CLASS_RE}+${IMPORT_EXT_RE_1})$!) {
+ my $name = $1;
+ return ('import', "$sid:import/$name");
}
}
diff --git a/src/PVE/Storage/DirPlugin.pm b/src/PVE/Storage/DirPlugin.pm
index 2efa8d5..efbca0c 100644
--- a/src/PVE/Storage/DirPlugin.pm
+++ b/src/PVE/Storage/DirPlugin.pm
@@ -10,6 +10,7 @@ use IO::File;
use POSIX;
use PVE::Storage::Plugin;
+use PVE::GuestImport::OVF;
use PVE::JSONSchema qw(get_standard_option);
use base qw(PVE::Storage::Plugin);
@@ -22,7 +23,7 @@ sub type {
sub plugindata {
return {
- content => [ { images => 1, rootdir => 1, vztmpl => 1, iso => 1, backup => 1, snippets => 1, none => 1 },
+ content => [ { images => 1, rootdir => 1, vztmpl => 1, iso => 1, backup => 1, snippets => 1, none => 1, import => 1 },
{ images => 1, rootdir => 1 }],
format => [ { raw => 1, qcow2 => 1, vmdk => 1, subvol => 1 } , 'raw' ],
};
@@ -247,4 +248,37 @@ sub check_config {
return $opts;
}
+sub get_import_metadata {
+ my ($class, $scfg, $volname, $storeid) = @_;
+
+ my ($vtype, $name, undef, undef, undef, undef, $fmt) = $class->parse_volname($volname);
+ die "invalid content type '$vtype'\n" if $vtype ne 'import';
+ die "invalid format\n" if $fmt ne 'ovf';
+
+ # NOTE: all types of warnings must be added to the return schema of the import-metadata API endpoint
+ my $warnings = [];
+
+ my $path = $class->path($scfg, $volname, $storeid, undef);
+ my $res = PVE::GuestImport::OVF::parse_ovf($path);
+ my $disks = {};
+ for my $disk ($res->{disks}->@*) {
+ my $id = $disk->{disk_address};
+ my $size = $disk->{virtual_size};
+ my $path = $disk->{relative_path};
+ $disks->{$id} = {
+ volid => "$storeid:import/$path",
+ defined($size) ? (size => $size) : (),
+ };
+ }
+
+ return {
+ type => 'vm',
+ source => $volname,
+ 'create-args' => $res->{qm},
+ 'disks' => $disks,
+ warnings => $warnings,
+ net => [],
+ };
+}
+
1;
diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
index 437365a..3655e6a 100644
--- a/src/PVE/Storage/Plugin.pm
+++ b/src/PVE/Storage/Plugin.pm
@@ -663,6 +663,8 @@ sub parse_volname {
return ('backup', $fn);
} elsif ($volname =~ m!^snippets/([^/]+)$!) {
return ('snippets', $1);
+ } elsif ($volname =~ m!^import/(${PVE::Storage::SAFE_CHAR_CLASS_RE}+$PVE::Storage::IMPORT_EXT_RE_1)$!) {
+ return ('import', $1, undef, undef, undef, undef, $2);
}
die "unable to parse directory volume name '$volname'\n";
@@ -675,6 +677,7 @@ my $vtype_subdirs = {
vztmpl => 'template/cache',
backup => 'dump',
snippets => 'snippets',
+ import => 'import',
};
sub get_vtype_subdirs {
@@ -1269,7 +1272,7 @@ sub list_images {
return $res;
}
-# list templates ($tt = <iso|vztmpl|backup|snippets>)
+# list templates ($tt = <iso|vztmpl|backup|snippets|import>)
my $get_subdir_files = sub {
my ($sid, $path, $tt, $vmid) = @_;
@@ -1325,6 +1328,10 @@ my $get_subdir_files = sub {
volid => "$sid:snippets/". basename($fn),
format => 'snippet',
};
+ } elsif ($tt eq 'import') {
+ next if $fn !~ m!/(${PVE::Storage::SAFE_CHAR_CLASS_RE}+$PVE::Storage::IMPORT_EXT_RE_1)$!i;
+
+ $info = { volid => "$sid:import/$1", format => "$2" };
}
$info->{size} = $st->size;
@@ -1359,6 +1366,8 @@ sub list_volumes {
$data = $get_subdir_files->($storeid, $path, 'backup', $vmid);
} elsif ($type eq 'snippets') {
$data = $get_subdir_files->($storeid, $path, 'snippets');
+ } elsif ($type eq 'import') {
+ $data = $get_subdir_files->($storeid, $path, 'import');
}
}
diff --git a/src/test/parse_volname_test.pm b/src/test/parse_volname_test.pm
index 6c5ba04..92e984f 100644
--- a/src/test/parse_volname_test.pm
+++ b/src/test/parse_volname_test.pm
@@ -86,6 +86,14 @@ my $tests = [
expected => ['snippets', 'hookscript.pl'],
},
#
+ # Import
+ #
+ {
+ description => "Import, ovf",
+ volname => 'import/import.ovf',
+ expected => ['import', 'import.ovf', undef, undef, undef ,undef, 'ovf'],
+ },
+ #
# failed matches
#
{
@@ -123,6 +131,11 @@ my $tests = [
volname => "$vmid/base-$vmid-disk-0.qcow2/ssss/vm-$vmid-disk-0.qcow2",
expected => "unable to parse volume filename 'base-$vmid-disk-0.qcow2/ssss/vm-$vmid-disk-0.qcow2'\n",
},
+ {
+ description => "Failed match: import dir but no ova/ovf/disk image",
+ volname => "import/test.foo",
+ expected => "unable to parse directory volume name 'import/test.foo'\n",
+ },
];
# create more test cases for VM disk images matches
diff --git a/src/test/path_to_volume_id_test.pm b/src/test/path_to_volume_id_test.pm
index 3198752..d954f4b 100644
--- a/src/test/path_to_volume_id_test.pm
+++ b/src/test/path_to_volume_id_test.pm
@@ -190,6 +190,14 @@ my @tests = (
'local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.xz',
],
},
+ {
+ description => 'Import, ovf',
+ volname => "$storage_dir/import/import.ovf",
+ expected => [
+ 'import',
+ 'local:import/import.ovf',
+ ],
+ },
# no matches, path or files with failures
{
@@ -237,6 +245,11 @@ my @tests = (
volname => "$storage_dir/images/ssss/vm-1234-disk-0.qcow2",
expected => [''],
},
+ {
+ description => 'Import, non ova/ovf/disk image in import dir',
+ volname => "$storage_dir/import/test.foo",
+ expected => [''],
+ },
);
plan tests => scalar @tests + 1;
--
2.39.5
* [pve-devel] [PATCH storage v6 03/12] plugin: dir: handle ova files for import
2024-11-15 15:17 [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages Dominik Csapak
2024-11-15 15:17 ` [pve-devel] [PATCH storage v6 01/12] copy OVF.pm from qemu-server Dominik Csapak
2024-11-15 15:17 ` [pve-devel] [PATCH storage v6 02/12] plugin: dir: implement import content type Dominik Csapak
@ 2024-11-15 15:17 ` Dominik Csapak
2024-11-18 12:17 ` Fiona Ebner
2024-11-15 15:17 ` [pve-devel] [PATCH storage v6 04/12] ovf: improve and simplify path checking code Dominik Csapak
` (27 subsequent siblings)
30 siblings, 1 reply; 68+ messages in thread
From: Dominik Csapak @ 2024-11-15 15:17 UTC (permalink / raw)
To: pve-devel
since we want to handle ova files (which are only ovf+images bundled in
a tar file) for import, add code that handles that.
we introduce a valid volname for files contained in ovas like this:
storage:import/archive.ova/disk-1.vmdk
by basically treating the last part of the path as the name for the
contained disk we want.
in that case we return 'import' as type with 'vmdk/qcow2/raw' as format
(we cannot use something like 'ova+vmdk' without extending the 'format'
parsing to that for all storages/formats. This is because it runs
through a verify-format check at least once)
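for illustration, with the parse_volname hunk below such a volname is
split up roughly like this (sketch only, assuming a directory storage
named 'local'):
    my $cfg = PVE::Storage::config();
    my ($vtype, $name, undef, undef, undef, undef, $fmt) =
        PVE::Storage::parse_volname($cfg, 'local:import/archive.ova/disk-1.vmdk');
    # $vtype = 'import', $name = 'archive.ova/disk-1.vmdk', $fmt = 'ova+vmdk'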
we then provide a function to use for that:
* extract_disk_from_import_file: this actually extracts the file from
the archive. Currently only ova is supported, so the extraction with
'tar' is hardcoded, but again we can easily extend/modify that should
we need to.
we currently extract into either the import storage or a given target
storage, in the images directory, so if the cleanup does not happen,
the user can still see and interact with the image via api/cli/gui
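a (hypothetical) call site then looks roughly like this, extracting the
disk for VM 100 into the images dir of the storage the ova lives on:
    my $new_volid = PVE::GuestImport::extract_disk_from_import_file(
        'local:import/archive.ova/disk-1.vmdk', 100);
    # e.g. 'local:100/vm-100-disk-0.vmdk', allocated via vdisk_alloc and then
    # overwritten by renaming the extracted file into place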
we have to modify `parse_ovf` a bit to handle the missing disk
images, and we parse the size out of the ovf part (since this is
informational only, it should be no problem if we cannot parse it sometimes)
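e.g. the common 'byte * 2^30' allocation unit combined with a capacity
of 16 then gives (sketch of the helper added below):
    my $factor = PVE::GuestImport::OVF::try_parse_capacity_unit('byte * 2^30');
    my $virtual_size = 16 * $factor;   # 17179869184 bytes, i.e. 16 GiB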
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
changes from v5:
* adapted commit message to reflect actual changes
* split up errors
* removed unnecessary untaint
* improved error message with context
* check $fmt instead of $name for ova formats
src/PVE/API2/Storage/Status.pm | 1 +
src/PVE/GuestImport.pm | 79 ++++++++++++++++++++++++++++++
src/PVE/GuestImport/OVF.pm | 52 +++++++++++++++++---
src/PVE/Makefile | 1 +
src/PVE/Storage.pm | 4 +-
src/PVE/Storage/DirPlugin.pm | 17 +++++--
src/PVE/Storage/Plugin.pm | 4 ++
src/test/parse_volname_test.pm | 20 ++++++++
src/test/path_to_volume_id_test.pm | 8 +++
9 files changed, 176 insertions(+), 10 deletions(-)
create mode 100644 src/PVE/GuestImport.pm
diff --git a/src/PVE/API2/Storage/Status.pm b/src/PVE/API2/Storage/Status.pm
index f86e5d3..bdf1c18 100644
--- a/src/PVE/API2/Storage/Status.pm
+++ b/src/PVE/API2/Storage/Status.pm
@@ -749,6 +749,7 @@ __PACKAGE__->register_method({
'efi-state-lost',
'guest-is-running',
'nvme-unsupported',
+ 'ova-needs-extracting',
'ovmf-with-lsi-unsupported',
'serial-port-socket-only',
],
diff --git a/src/PVE/GuestImport.pm b/src/PVE/GuestImport.pm
new file mode 100644
index 0000000..f7ebf92
--- /dev/null
+++ b/src/PVE/GuestImport.pm
@@ -0,0 +1,79 @@
+package PVE::GuestImport;
+
+use strict;
+use warnings;
+
+use File::Path;
+
+use PVE::Storage;
+use PVE::Tools qw(run_command);
+
+sub extract_disk_from_import_file {
+ my ($volid, $vmid, $target_storeid) = @_;
+
+ my ($source_storeid, $volname) = PVE::Storage::parse_volume_id($volid);
+ $target_storeid //= $source_storeid;
+ my $cfg = PVE::Storage::config();
+
+ my ($vtype, $name, undef, undef, undef, undef, $fmt) =
+ PVE::Storage::parse_volname($cfg, $volid);
+
+ die "only files with content type 'import' can be extracted\n"
+ if $vtype ne 'import';
+
+ die "only files from 'ova' format can be extracted\n"
+ if $fmt !~ m/^ova\+/;
+
+ # extract the inner file from the name
+ my $archive_volid;
+ my $inner_file;
+ my $inner_fmt;
+ if ($name =~ m!^(.*\.ova)/(${PVE::Storage::SAFE_CHAR_CLASS_RE}+)$!) {
+ $archive_volid = "$source_storeid:import/$1";
+ $inner_file = $2;
+ ($inner_fmt) = $fmt =~ /^ova\+(.*)$/;
+ } else {
+ die "cannot extract $volid - invalid volname $volname\n";
+ }
+
+ my $ova_path = PVE::Storage::path($cfg, $archive_volid);
+
+ my $tmpdir = PVE::Storage::get_image_dir($cfg, $target_storeid, $vmid);
+ my $pid = $$;
+ $tmpdir .= "/tmp_${pid}_${vmid}";
+ mkpath $tmpdir;
+
+ my $source_path = "$tmpdir/$inner_file";
+ my $target_path;
+ my $target_volid;
+ eval {
+ run_command(['tar', '-x', '--force-local', '-C', $tmpdir, '-f', $ova_path, $inner_file]);
+
+ # check for symlinks and other non regular files
+ if (-l $source_path || ! -f $source_path) {
+ die "extracted file '$inner_file' from archive '$archive_volid' is not a regular file\n";
+ }
+
+ # check potentially untrusted image file!
+ PVE::Storage::file_size_info($source_path, undef, 1);
+
+ # create temporary 1M image that will get overwritten by the rename
+ # to reserve the filename and take care of locking
+ $target_volid = PVE::Storage::vdisk_alloc($cfg, $target_storeid, $vmid, $inner_fmt, undef, 1024);
+ $target_path = PVE::Storage::path($cfg, $target_volid);
+
+ print "renaming $source_path to $target_path\n";
+
+ rename($source_path, $target_path) or die "unable to move - $!\n";
+ };
+ if (my $err = $@) {
+ File::Path::remove_tree($tmpdir);
+ die "error during extraction: $err\n";
+ }
+
+ File::Path::remove_tree($tmpdir);
+
+ return $target_volid;
+}
+
+1;
diff --git a/src/PVE/GuestImport/OVF.pm b/src/PVE/GuestImport/OVF.pm
index 29dfaad..c7bff5f 100644
--- a/src/PVE/GuestImport/OVF.pm
+++ b/src/PVE/GuestImport/OVF.pm
@@ -84,11 +84,37 @@ sub id_to_pve {
}
}
+# technically defined in DSP0004 (https://www.dmtf.org/dsp/DSP0004) as an ABNF
+# but realistically this always takes the form of 'byte * base^exponent'
+sub try_parse_capacity_unit {
+ my ($unit_text) = @_;
+
+ if ($unit_text =~ m/^\s*byte\s*\*\s*([0-9]+)\s*\^\s*([0-9]+)\s*$/) {
+ my $base = $1;
+ my $exp = $2;
+ return $base ** $exp;
+ }
+
+ return undef;
+}
+
# returns two references, $qm which holds qm.conf style key/values, and \@disks
sub parse_ovf {
- my ($ovf, $debug) = @_;
+ my ($ovf, $isOva, $debug) = @_;
+
+ # we have to ignore missing disk images for ova
+ my $dom;
+ if ($isOva) {
+ my $raw = "";
+ PVE::Tools::run_command(['tar', '-xO', '--wildcards', '--occurrence=1', '-f', $ovf, '*.ovf'], outfunc => sub {
+ my $line = shift;
+ $raw .= $line;
+ });
+ $dom = XML::LibXML->load_xml(string => $raw, no_blanks => 1);
+ } else {
+ $dom = XML::LibXML->load_xml(location => $ovf, no_blanks => 1);
+ }
- my $dom = XML::LibXML->load_xml(location => $ovf, no_blanks => 1);
# register the xml namespaces in a xpath context object
# 'ovf' is the default namespace so it will prepended to each xml element
@@ -176,7 +202,17 @@ sub parse_ovf {
# @ needs to be escaped to prevent Perl double quote interpolation
my $xpath_find_fileref = sprintf("/ovf:Envelope/ovf:DiskSection/\
ovf:Disk[\@ovf:diskId='%s']/\@ovf:fileRef", $disk_id);
+ my $xpath_find_capacity = sprintf("/ovf:Envelope/ovf:DiskSection/\
+ovf:Disk[\@ovf:diskId='%s']/\@ovf:capacity", $disk_id);
+ my $xpath_find_capacity_unit = sprintf("/ovf:Envelope/ovf:DiskSection/\
+ovf:Disk[\@ovf:diskId='%s']/\@ovf:capacityAllocationUnits", $disk_id);
my $fileref = $xpc->findvalue($xpath_find_fileref);
+ my $capacity = $xpc->findvalue($xpath_find_capacity);
+ my $capacity_unit = $xpc->findvalue($xpath_find_capacity_unit);
+ my $virtual_size;
+ if (my $factor = try_parse_capacity_unit($capacity_unit)) {
+ $virtual_size = $capacity * $factor;
+ }
my $valid_url_chars = qr@${valid_uripath_chars}|/@;
if (!$fileref || $fileref !~ m/^${valid_url_chars}+$/) {
@@ -216,7 +252,7 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
die "error parsing $filepath, are you using a symlink ?\n";
}
- if (!-e $backing_file_path) {
+ if (!-e $backing_file_path && !$isOva) {
die "error parsing $filepath, file seems not to exist at $backing_file_path\n";
}
@@ -224,16 +260,20 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
($filepath) = $filepath =~ m|^(${PVE::Storage::SAFE_CHAR_CLASS_RE}+)$|; # untaint & check no sub/parent dirs
die "invalid path\n" if !$filepath;
- my $virtual_size = PVE::Storage::file_size_info($backing_file_path);
- die "error parsing $backing_file_path, cannot determine file size\n"
- if !$virtual_size;
+ if (!$isOva) {
+ my $size = PVE::Storage::file_size_info($backing_file_path);
+ die "error parsing $backing_file_path, cannot determine file size\n"
+ if !$size;
+ $virtual_size = $size;
+ }
$pve_disk = {
disk_address => $pve_disk_address,
backing_file => $backing_file_path,
virtual_size => $virtual_size,
relative_path => $filepath,
};
+ $pve_disk->{virtual_size} = $virtual_size if defined($virtual_size);
push @disks, $pve_disk;
}
diff --git a/src/PVE/Makefile b/src/PVE/Makefile
index e15a275..0af3081 100644
--- a/src/PVE/Makefile
+++ b/src/PVE/Makefile
@@ -5,6 +5,7 @@ install:
install -D -m 0644 Storage.pm ${DESTDIR}${PERLDIR}/PVE/Storage.pm
install -D -m 0644 Diskmanage.pm ${DESTDIR}${PERLDIR}/PVE/Diskmanage.pm
install -D -m 0644 CephConfig.pm ${DESTDIR}${PERLDIR}/PVE/CephConfig.pm
+ install -D -m 0644 GuestImport.pm ${DESTDIR}${PERLDIR}/PVE/GuestImport.pm
make -C Storage install
make -C GuestImport install
make -C API2 install
diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
index 78a3405..4df1a84 100755
--- a/src/PVE/Storage.pm
+++ b/src/PVE/Storage.pm
@@ -114,10 +114,12 @@ our $VZTMPL_EXT_RE_1 = qr/\.tar\.(gz|xz|zst|bz2)/i;
our $BACKUP_EXT_RE_2 = qr/\.(tgz|(?:tar|vma)(?:\.(${\PVE::Storage::Plugin::COMPRESSOR_RE}))?)/;
-our $IMPORT_EXT_RE_1 = qr/\.(ovf|qcow2|raw|vmdk)/;
+our $IMPORT_EXT_RE_1 = qr/\.(ova|ovf|qcow2|raw|vmdk)/;
our $SAFE_CHAR_CLASS_RE = qr/[a-zA-Z0-9\-\.\+\=\_]/;
+our $OVA_CONTENT_RE_1 = qr/${SAFE_CHAR_CLASS_RE}+\.(qcow2|raw|vmdk)/;
+
# FIXME remove with PVE 9.0, add versioned breaks for pve-manager
our $vztmpl_extension_re = $VZTMPL_EXT_RE_1;
diff --git a/src/PVE/Storage/DirPlugin.pm b/src/PVE/Storage/DirPlugin.pm
index efbca0c..04a0485 100644
--- a/src/PVE/Storage/DirPlugin.pm
+++ b/src/PVE/Storage/DirPlugin.pm
@@ -253,20 +253,31 @@ sub get_import_metadata {
my ($vtype, $name, undef, undef, undef, undef, $fmt) = $class->parse_volname($volname);
die "invalid content type '$vtype'\n" if $vtype ne 'import';
- die "invalid format\n" if $fmt ne 'ovf';
+ die "invalid format\n" if $fmt ne 'ova' && $fmt ne 'ovf';
# NOTE: all types of warnings must be added to the return schema of the import-metadata API endpoint
my $warnings = [];
+ my $isOva = 0;
+ if ($fmt =~ m/^ova/) {
+ $isOva = 1;
+ push @$warnings, { type => 'ova-needs-extracting' };
+ }
my $path = $class->path($scfg, $volname, $storeid, undef);
- my $res = PVE::GuestImport::OVF::parse_ovf($path);
+ my $res = PVE::GuestImport::OVF::parse_ovf($path, $isOva);
my $disks = {};
for my $disk ($res->{disks}->@*) {
my $id = $disk->{disk_address};
my $size = $disk->{virtual_size};
my $path = $disk->{relative_path};
+ my $volid;
+ if ($isOva) {
+ $volid = "$storeid:$volname/$path";
+ } else {
+ $volid = "$storeid:import/$path",
+ }
$disks->{$id} = {
- volid => "$storeid:import/$path",
+ volid => $volid,
defined($size) ? (size => $size) : (),
};
}
diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
index 3655e6a..eed764d 100644
--- a/src/PVE/Storage/Plugin.pm
+++ b/src/PVE/Storage/Plugin.pm
@@ -663,6 +663,10 @@ sub parse_volname {
return ('backup', $fn);
} elsif ($volname =~ m!^snippets/([^/]+)$!) {
return ('snippets', $1);
+ } elsif ($volname =~ m!^import/(${PVE::Storage::SAFE_CHAR_CLASS_RE}+\.ova\/${PVE::Storage::OVA_CONTENT_RE_1})$!) {
+ my $archive = $1;
+ my $format = $2;
+ return ('import', $archive, undef, undef, undef, undef, "ova+$format");
} elsif ($volname =~ m!^import/(${PVE::Storage::SAFE_CHAR_CLASS_RE}+$PVE::Storage::IMPORT_EXT_RE_1)$!) {
return ('import', $1, undef, undef, undef, undef, $2);
}
diff --git a/src/test/parse_volname_test.pm b/src/test/parse_volname_test.pm
index 92e984f..eecd7df 100644
--- a/src/test/parse_volname_test.pm
+++ b/src/test/parse_volname_test.pm
@@ -88,11 +88,31 @@ my $tests = [
#
# Import
#
+ {
+ description => "Import, ova",
+ volname => 'import/import.ova',
+ expected => ['import', 'import.ova', undef, undef, undef ,undef, 'ova'],
+ },
{
description => "Import, ovf",
volname => 'import/import.ovf',
expected => ['import', 'import.ovf', undef, undef, undef ,undef, 'ovf'],
},
+ {
+ description => "Import, innner file of ova",
+ volname => 'import/import.ova/disk.qcow2',
+ expected => ['import', 'import.ova/disk.qcow2', undef, undef, undef, undef, 'ova+qcow2'],
+ },
+ {
+ description => "Import, innner file of ova",
+ volname => 'import/import.ova/disk.vmdk',
+ expected => ['import', 'import.ova/disk.vmdk', undef, undef, undef, undef, 'ova+vmdk'],
+ },
+ {
+ description => "Import, innner file of ova",
+ volname => 'import/import.ova/disk.raw',
+ expected => ['import', 'import.ova/disk.raw', undef, undef, undef, undef, 'ova+raw'],
+ },
#
# failed matches
#
diff --git a/src/test/path_to_volume_id_test.pm b/src/test/path_to_volume_id_test.pm
index d954f4b..23c5a23 100644
--- a/src/test/path_to_volume_id_test.pm
+++ b/src/test/path_to_volume_id_test.pm
@@ -190,6 +190,14 @@ my @tests = (
'local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.xz',
],
},
+ {
+ description => 'Import, ova',
+ volname => "$storage_dir/import/import.ova",
+ expected => [
+ 'import',
+ 'local:import/import.ova',
+ ],
+ },
{
description => 'Import, ovf',
volname => "$storage_dir/import/import.ovf",
--
2.39.5
* Re: [pve-devel] [PATCH storage v6 03/12] plugin: dir: handle ova files for import
2024-11-15 15:17 ` [pve-devel] [PATCH storage v6 03/12] plugin: dir: handle ova files for import Dominik Csapak
@ 2024-11-18 12:17 ` Fiona Ebner
0 siblings, 0 replies; 68+ messages in thread
From: Fiona Ebner @ 2024-11-18 12:17 UTC (permalink / raw)
To: Proxmox VE development discussion, Dominik Csapak
On 15.11.24 16:17, Dominik Csapak wrote:
> since we want to handle ova files (which are only ovf+images bundled in
> a tar file) for import, add code that handles that.
>
> we introduce a valid volname for files contained in ovas like this:
>
> storage:import/archive.ova/disk-1.vmdk
>
> by basically treating the last part of the path as the name for the
> contained disk we want.
>
> in that case we return 'import' as type with 'vmdk/qcow2/raw' as format
> (we cannot use something like 'ova+vmdk' without extending the 'format'
> parsing to that for all storages/formats. This is because it runs
> though a verify format check at least once)
>
> we then provide a function to use for that:
>
> * extract_disk_from_import_file: this actually extracts the file from
> the archive. Currently only ova is supported, so the extraction with
> 'tar' is hardcoded, but again we can easily extend/modify that should
> we need to.
>
> we currently extract into the either the import storage or a given
> target storage in the images directory so if the cleanup does not
> happen, the user can still see and interact with the image via
> api/cli/gui
>
>
> we have to modify the `parse_ovf` a bit to handle the missing disk
> images, and we parse the size out of the ovf part (since this is
> informal only, it should be no problem if we cannot parse it sometimes)
>
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
One minor nit below, but:
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
> diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
> index 3655e6a..eed764d 100644
> --- a/src/PVE/Storage/Plugin.pm
> +++ b/src/PVE/Storage/Plugin.pm
> @@ -663,6 +663,10 @@ sub parse_volname {
> return ('backup', $fn);
> } elsif ($volname =~ m!^snippets/([^/]+)$!) {
> return ('snippets', $1);
> + } elsif ($volname =~ m!^import/(${PVE::Storage::SAFE_CHAR_CLASS_RE}+\.ova\/${PVE::Storage::OVA_CONTENT_RE_1})$!) {
> + my $archive = $1;
Nit: That's the volname for the disk inside the archive, so 'archive' is
not the best variable name.
> + my $format = $2;
> + return ('import', $archive, undef, undef, undef, undef, "ova+$format");
> } elsif ($volname =~ m!^import/(${PVE::Storage::SAFE_CHAR_CLASS_RE}+$PVE::Storage::IMPORT_EXT_RE_1)$!) {
> return ('import', $1, undef, undef, undef, undef, $2);
> }
* [pve-devel] [PATCH storage v6 04/12] ovf: improve and simplify path checking code
2024-11-15 15:17 [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages Dominik Csapak
` (2 preceding siblings ...)
2024-11-15 15:17 ` [pve-devel] [PATCH storage v6 03/12] plugin: dir: handle ova files for import Dominik Csapak
@ 2024-11-15 15:17 ` Dominik Csapak
2024-11-18 12:25 ` Fiona Ebner
2024-11-15 15:17 ` [pve-devel] [PATCH storage v6 05/12] ovf: implement parsing the ostype Dominik Csapak
` (26 subsequent siblings)
30 siblings, 1 reply; 68+ messages in thread
From: Dominik Csapak @ 2024-11-15 15:17 UTC (permalink / raw)
To: pve-devel
moves the filepath code a bit closer to where it's actually used
checks the contained path before trying to find its absolute path
properly adds error handling to realpath
instead of checking the combined ovf_path + filepath, just make sure
the filepath can't point to anything besides a file in this directory
by checking for '.' and '..' (slashes are not allowed in SAFE_CHAR_CLASS_RE)
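to illustrate, a check using the same character class roughly accepts or
rejects like this (symlinks are caught later by the realpath check):
    for my $href ('disk1.vmdk', '../escape.vmdk', '..') {
        my ($ok) = $href =~ m|^([a-zA-Z0-9\-\.\+\=\_]+)$|;
        $ok = undef if defined($ok) && ($ok eq '.' || $ok eq '..');
        print "$href => ", (defined($ok) ? "accepted" : "rejected"), "\n";
    }
    # disk1.vmdk => accepted, ../escape.vmdk and .. => rejected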
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
changes from v5:
* reintroduce check for symlinks
src/PVE/GuestImport/OVF.pm | 33 ++++++++++++++++++---------------
1 file changed, 18 insertions(+), 15 deletions(-)
diff --git a/src/PVE/GuestImport/OVF.pm b/src/PVE/GuestImport/OVF.pm
index c7bff5f..966dcd1 100644
--- a/src/PVE/GuestImport/OVF.pm
+++ b/src/PVE/GuestImport/OVF.pm
@@ -220,15 +220,6 @@ ovf:Disk[\@ovf:diskId='%s']/\@ovf:capacityAllocationUnits", $disk_id);
next;
}
- # from Disk Node, find corresponding filepath
- my $xpath_find_filepath = sprintf("/ovf:Envelope/ovf:References/ovf:File[\@ovf:id='%s']/\@ovf:href", $fileref);
- my $filepath = $xpc->findvalue($xpath_find_filepath);
- if (!$filepath) {
- warn "invalid file reference $fileref, skipping\n";
- next;
- }
- print "file path: $filepath\n" if $debug;
-
# from Item, find owning Controller type
my $controller_id = $xpc->findvalue('rasd:Parent', $item_node);
my $xpath_find_parent_type = sprintf("/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/\
@@ -244,22 +235,34 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
my $adress_on_controller = $xpc->findvalue('rasd:AddressOnParent', $item_node);
my $pve_disk_address = id_to_pve($controller_type) . $adress_on_controller;
+ # from Disk Node, find corresponding filepath
+ my $xpath_find_filepath = sprintf("/ovf:Envelope/ovf:References/ovf:File[\@ovf:id='%s']/\@ovf:href", $fileref);
+ my $filepath = $xpc->findvalue($xpath_find_filepath);
+ if (!$filepath) {
+ warn "invalid file reference $fileref, skipping\n";
+ next;
+ }
+ print "file path: $filepath\n" if $debug;
+ my $original_filepath = $filepath;
+ ($filepath) = $filepath =~ m|^(${PVE::Storage::SAFE_CHAR_CLASS_RE}+)$|; # untaint & check no sub/parent dirs
+ die "referenced path '$original_filepath' is invalid\n" if !$filepath || $filepath eq "." || $filepath eq "..";
+
# resolve symlinks and relative path components
# and die if the diskimage is not somewhere under the $ovf path
- my $ovf_dir = realpath(dirname(File::Spec->rel2abs($ovf)));
- my $backing_file_path = realpath(join ('/', $ovf_dir, $filepath));
+ my $ovf_dir = realpath(dirname(File::Spec->rel2abs($ovf)))
+ or die "could not get absolute path of $ovf: $!\n";
+ my $backing_file_path = realpath(join ('/', $ovf_dir, $filepath))
+ or die "could not get absolute path of $filepath: $!\n";
if ($backing_file_path !~ /^\Q${ovf_dir}\E/) {
die "error parsing $filepath, are you using a symlink ?\n";
}
+ ($backing_file_path) = $backing_file_path =~ m|^(/.*)|; # untaint
+
if (!-e $backing_file_path && !$isOva) {
die "error parsing $filepath, file seems not to exist at $backing_file_path\n";
}
- ($backing_file_path) = $backing_file_path =~ m|^(/.*)|; # untaint
- ($filepath) = $filepath =~ m|^(${PVE::Storage::SAFE_CHAR_CLASS_RE}+)$|; # untaint & check no sub/parent dirs
- die "invalid path\n" if !$filepath;
-
if (!$isOva) {
my $size = PVE::Storage::file_size_info($backing_file_path);
die "error parsing $backing_file_path, cannot determine file size\n"
--
2.39.5
* [pve-devel] [PATCH storage v6 05/12] ovf: implement parsing the ostype
2024-11-15 15:17 [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages Dominik Csapak
` (3 preceding siblings ...)
2024-11-15 15:17 ` [pve-devel] [PATCH storage v6 04/12] ovf: improve and simplify path checking code Dominik Csapak
@ 2024-11-15 15:17 ` Dominik Csapak
2024-11-15 15:17 ` [pve-devel] [PATCH storage v6 06/12] ovf: implement parsing out firmware type Dominik Csapak
` (25 subsequent siblings)
30 siblings, 0 replies; 68+ messages in thread
From: Dominik Csapak @ 2024-11-15 15:17 UTC (permalink / raw)
To: pve-devel
use the standard's info about the ostypes to map to our own
(see the comment for a link to the relevant part of the dmtf schema).
every type that is not listed is mapped to 'other', so there is no need
to have it in a list.
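e.g. (using the mapping added below):
    print PVE::GuestImport::OVF::get_ostype(105), "\n";   # 'win7'  (Microsoft Windows 7)
    print PVE::GuestImport::OVF::get_ostype(42), "\n";    # 'other' (id not in the list)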
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/GuestImport/OVF.pm | 69 ++++++++++++++++++++++++++++++++++++++
src/test/run_ovf_tests.pl | 5 +++
2 files changed, 74 insertions(+)
diff --git a/src/PVE/GuestImport/OVF.pm b/src/PVE/GuestImport/OVF.pm
index 966dcd1..a760e1e 100644
--- a/src/PVE/GuestImport/OVF.pm
+++ b/src/PVE/GuestImport/OVF.pm
@@ -54,6 +54,71 @@ my @resources = (
{ id => 35, dtmf_name => 'Vendor Reserved'}
);
+# see https://schemas.dmtf.org/wbem/cim-html/2.55.0+/CIM_OperatingSystem.html
+my $ostype_ids = {
+ 18 => 'winxp', # 'WINNT',
+ 29 => 'solaris', # 'Solaris',
+ 36 => 'l26', # 'LINUX',
+ 58 => 'w2k', # 'Windows 2000',
+ 67 => 'wxp', #'Windows XP',
+ 69 => 'w2k3', # 'Microsoft Windows Server 2003',
+ 70 => 'w2k3', # 'Microsoft Windows Server 2003 64-Bit',
+ 71 => 'wxp', # 'Windows XP 64-Bit',
+ 72 => 'wxp', # 'Windows XP Embedded',
+ 73 => 'wvista', # 'Windows Vista',
+ 74 => 'wvista', # 'Windows Vista 64-Bit',
+ 75 => 'wxp', # 'Windows Embedded for Point of Service', ??
+ 76 => 'w2k8', # 'Microsoft Windows Server 2008',
+ 77 => 'w2k8', # 'Microsoft Windows Server 2008 64-Bit',
+ 79 => 'l26', # 'RedHat Enterprise Linux',
+ 80 => 'l26', # 'RedHat Enterprise Linux 64-Bit',
+ 81 => 'solaris', #'Solaris 64-Bit',
+ 82 => 'l26', # 'SUSE',
+ 83 => 'l26', # 'SUSE 64-Bit',
+ 84 => 'l26', # 'SLES',
+ 85 => 'l26', # 'SLES 64-Bit',
+ 87 => 'l26', # 'Novell Linux Desktop',
+ 89 => 'l26', # 'Mandriva',
+ 90 => 'l26', # 'Mandriva 64-Bit',
+ 91 => 'l26', # 'TurboLinux',
+ 92 => 'l26', # 'TurboLinux 64-Bit',
+ 93 => 'l26', # 'Ubuntu',
+ 94 => 'l26', # 'Ubuntu 64-Bit',
+ 95 => 'l26', # 'Debian',
+ 96 => 'l26', # 'Debian 64-Bit',
+ 97 => 'l24', # 'Linux 2.4.x',
+ 98 => 'l24', # 'Linux 2.4.x 64-Bit',
+ 99 => 'l26', # 'Linux 2.6.x',
+ 100 => 'l26', # 'Linux 2.6.x 64-Bit',
+ 101 => 'l26', # 'Linux 64-Bit',
+ 103 => 'win7', # 'Microsoft Windows Server 2008 R2',
+ 105 => 'win7', # 'Microsoft Windows 7',
+ 106 => 'l26', # 'CentOS 32-bit',
+ 107 => 'l26', # 'CentOS 64-bit',
+ 108 => 'l26', # 'Oracle Linux 32-bit',
+ 109 => 'l26', # 'Oracle Linux 64-bit',
+ 111 => 'win8', # 'Microsoft Windows Server 2011', ??
+ 112 => 'win8', # 'Microsoft Windows Server 2012',
+ 113 => 'win8', # 'Microsoft Windows 8',
+ 114 => 'win8', # 'Microsoft Windows 8 64-bit',
+ 115 => 'win8', # 'Microsoft Windows Server 2012 R2',
+ 116 => 'win10', # 'Microsoft Windows Server 2016',
+ 117 => 'win8', # 'Microsoft Windows 8.1',
+ 118 => 'win8', # 'Microsoft Windows 8.1 64-bit',
+ 119 => 'win10', # 'Microsoft Windows 10',
+ 120 => 'win10', # 'Microsoft Windows 10 64-bit',
+ 121 => 'win10', # 'Microsoft Windows Server 2019',
+ 122 => 'win11', # 'Microsoft Windows 11 64-bit',
+ 123 => 'win11', # 'Microsoft Windows Server 2022',
+ # others => 'other',
+};
+
+sub get_ostype {
+ my ($id) = @_;
+
+ return $ostype_ids->{$id} // 'other';
+}
+
sub find_by {
my ($key, $param) = @_;
foreach my $resource (@resources) {
@@ -159,6 +224,10 @@ sub parse_ovf {
my $xpath_find_disks="/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/ovf:Item[rasd:ResourceType=${disk_id}]";
my @disk_items = $xpc->findnodes($xpath_find_disks);
+ my $xpath_find_ostype_id = "/ovf:Envelope/ovf:VirtualSystem/ovf:OperatingSystemSection/\@ovf:id";
+ my $ostype_id = $xpc->findvalue($xpath_find_ostype_id);
+ $qm->{ostype} = get_ostype($ostype_id);
+
# disks metadata is split in four different xml elements:
# * as an Item node of type DiskDrive in the VirtualHardwareSection
# * as an Disk node in the DiskSection
diff --git a/src/test/run_ovf_tests.pl b/src/test/run_ovf_tests.pl
index 5a80ab2..c433c9d 100755
--- a/src/test/run_ovf_tests.pl
+++ b/src/test/run_ovf_tests.pl
@@ -59,13 +59,18 @@ print "\ntesting vm.conf extraction\n";
is($win2008->{qm}->{name}, 'Win2008-R2x64', 'win2008 VM name is correct');
is($win2008->{qm}->{memory}, '2048', 'win2008 VM memory is correct');
is($win2008->{qm}->{cores}, '1', 'win2008 VM cores are correct');
+is($win2008->{qm}->{ostype}, 'win7', 'win2008 VM ostype is correcty');
is($win10->{qm}->{name}, 'Win10-Liz', 'win10 VM name is correct');
is($win10->{qm}->{memory}, '6144', 'win10 VM memory is correct');
is($win10->{qm}->{cores}, '4', 'win10 VM cores are correct');
+# older esxi/ovf standard used 'other' for windows10
+is($win10->{qm}->{ostype}, 'other', 'win10 VM ostype is correct');
is($win10noNs->{qm}->{name}, 'Win10-Liz', 'win10 VM (no default rasd NS) name is correct');
is($win10noNs->{qm}->{memory}, '6144', 'win10 VM (no default rasd NS) memory is correct');
is($win10noNs->{qm}->{cores}, '4', 'win10 VM (no default rasd NS) cores are correct');
+# older esxi/ovf standard used 'other' for windows10
+is($win10noNs->{qm}->{ostype}, 'other', 'win10 VM (no default rasd NS) ostype is correct');
done_testing();
--
2.39.5
* [pve-devel] [PATCH storage v6 06/12] ovf: implement parsing out firmware type
2024-11-15 15:17 [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages Dominik Csapak
` (4 preceding siblings ...)
2024-11-15 15:17 ` [pve-devel] [PATCH storage v6 05/12] ovf: implement parsing the ostype Dominik Csapak
@ 2024-11-15 15:17 ` Dominik Csapak
2024-11-15 15:17 ` [pve-devel] [PATCH storage v6 07/12] ovf: implement rudimentary boot order Dominik Csapak
` (24 subsequent siblings)
30 siblings, 0 replies; 68+ messages in thread
From: Dominik Csapak @ 2024-11-15 15:17 UTC (permalink / raw)
To: pve-devel
it seems there is no part of the ovf standard that specifies which type
of bios is used (at least I could not find it). Every ovf/ova I tested
either has no info about it, or has it in a vmware-specific property,
which we parse here.
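for reference, the property in question looks like this in the ovf (see
also the adapted test below):
    <vmw:Config ovf:required="false" vmw:key="firmware" vmw:value="efi"/>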
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/GuestImport/OVF.pm | 5 +++++
src/PVE/Storage/DirPlugin.pm | 5 +++++
src/test/ovf_manifests/Win10-Liz_no_default_ns.ovf | 1 +
src/test/run_ovf_tests.pl | 1 +
4 files changed, 12 insertions(+)
diff --git a/src/PVE/GuestImport/OVF.pm b/src/PVE/GuestImport/OVF.pm
index a760e1e..08e9d0f 100644
--- a/src/PVE/GuestImport/OVF.pm
+++ b/src/PVE/GuestImport/OVF.pm
@@ -228,6 +228,11 @@ sub parse_ovf {
my $ostype_id = $xpc->findvalue($xpath_find_ostype_id);
$qm->{ostype} = get_ostype($ostype_id);
+ # vmware specific firmware config, seems to not be standardized in ovf ?
+ my $xpath_find_firmware = "/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/vmw:Config[\@vmw:key=\"firmware\"]/\@vmw:value";
+ my $firmware = $xpc->findvalue($xpath_find_firmware) || 'seabios';
+ $qm->{bios} = 'ovmf' if $firmware eq 'efi';
+
# disks metadata is split in four different xml elements:
# * as an Item node of type DiskDrive in the VirtualHardwareSection
# * as an Disk node in the DiskSection
diff --git a/src/PVE/Storage/DirPlugin.pm b/src/PVE/Storage/DirPlugin.pm
index 04a0485..0c32242 100644
--- a/src/PVE/Storage/DirPlugin.pm
+++ b/src/PVE/Storage/DirPlugin.pm
@@ -282,6 +282,11 @@ sub get_import_metadata {
};
}
+ if (defined($res->{qm}->{bios}) && $res->{qm}->{bios} eq 'ovmf') {
+ $disks->{efidisk0} = 1;
+ push @$warnings, { type => 'efi-state-lost', key => 'bios', value => 'ovmf' };
+ }
+
return {
type => 'vm',
source => $volname,
diff --git a/src/test/ovf_manifests/Win10-Liz_no_default_ns.ovf b/src/test/ovf_manifests/Win10-Liz_no_default_ns.ovf
index b93540f..10ccaf1 100755
--- a/src/test/ovf_manifests/Win10-Liz_no_default_ns.ovf
+++ b/src/test/ovf_manifests/Win10-Liz_no_default_ns.ovf
@@ -137,6 +137,7 @@
<vmw:Config ovf:required="false" vmw:key="powerOpInfo.resetType" vmw:value="soft"/>
<vmw:Config ovf:required="false" vmw:key="powerOpInfo.suspendType" vmw:value="soft"/>
<vmw:Config ovf:required="false" vmw:key="tools.syncTimeWithHost" vmw:value="false"/>
+ <vmw:Config ovf:required="false" vmw:key="firmware" vmw:value="efi"/>
</VirtualHardwareSection>
</VirtualSystem>
</Envelope>
diff --git a/src/test/run_ovf_tests.pl b/src/test/run_ovf_tests.pl
index c433c9d..e92258d 100755
--- a/src/test/run_ovf_tests.pl
+++ b/src/test/run_ovf_tests.pl
@@ -72,5 +72,6 @@ is($win10noNs->{qm}->{memory}, '6144', 'win10 VM (no default rasd NS) memory is
is($win10noNs->{qm}->{cores}, '4', 'win10 VM (no default rasd NS) cores are correct');
# older esxi/ovf standard used 'other' for windows10
is($win10noNs->{qm}->{ostype}, 'other', 'win10 VM (no default rasd NS) ostype is correct');
+is($win10noNs->{qm}->{bios}, 'ovmf', 'win10 VM (no default rasd NS) bios is correct');
done_testing();
--
2.39.5
* [pve-devel] [PATCH storage v6 07/12] ovf: implement rudimentary boot order
2024-11-15 15:17 [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages Dominik Csapak
` (5 preceding siblings ...)
2024-11-15 15:17 ` [pve-devel] [PATCH storage v6 06/12] ovf: implement parsing out firmware type Dominik Csapak
@ 2024-11-15 15:17 ` Dominik Csapak
2024-11-15 15:17 ` [pve-devel] [PATCH storage v6 08/12] ovf: implement parsing nics Dominik Csapak
` (23 subsequent siblings)
30 siblings, 0 replies; 68+ messages in thread
From: Dominik Csapak @ 2024-11-15 15:17 UTC (permalink / raw)
To: pve-devel
simply add all parsed disks to the boot order in the order we encounter
them (similar to the esxi plugin).
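e.g. for the two-disk test ovf this simply ends up as (sketch):
    my $qm = {};
    my $boot_order = ['scsi0', 'scsi1'];
    $qm->{boot} = "order=" . join(';', @$boot_order);   # 'order=scsi0;scsi1'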
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/GuestImport/OVF.pm | 6 +++++-
src/test/run_ovf_tests.pl | 3 +++
2 files changed, 8 insertions(+), 1 deletion(-)
diff --git a/src/PVE/GuestImport/OVF.pm b/src/PVE/GuestImport/OVF.pm
index 08e9d0f..d08cc51 100644
--- a/src/PVE/GuestImport/OVF.pm
+++ b/src/PVE/GuestImport/OVF.pm
@@ -244,6 +244,8 @@ sub parse_ovf {
# when all the nodes has been found out, we copy the relevant information to
# a $pve_disk hash ref, which we push to @disks;
+ my $boot_order = [];
+
foreach my $item_node (@disk_items) {
my $disk_node;
@@ -352,9 +354,11 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
};
$pve_disk->{virtual_size} = $virtual_size if defined($virtual_size);
push @disks, $pve_disk;
-
+ push @$boot_order, $pve_disk_address;
}
+ $qm->{boot} = "order=" . join(';', @$boot_order) if scalar(@$boot_order) > 0;
+
return {qm => $qm, disks => \@disks};
}
diff --git a/src/test/run_ovf_tests.pl b/src/test/run_ovf_tests.pl
index e92258d..3b04100 100755
--- a/src/test/run_ovf_tests.pl
+++ b/src/test/run_ovf_tests.pl
@@ -56,17 +56,20 @@ is($win10noNs->{disks}->[0]->{virtual_size}, 2048, 'single disk vm (no default r
print "\ntesting vm.conf extraction\n";
+is($win2008->{qm}->{boot}, 'order=scsi0;scsi1', 'win2008 VM boot is correct');
is($win2008->{qm}->{name}, 'Win2008-R2x64', 'win2008 VM name is correct');
is($win2008->{qm}->{memory}, '2048', 'win2008 VM memory is correct');
is($win2008->{qm}->{cores}, '1', 'win2008 VM cores are correct');
is($win2008->{qm}->{ostype}, 'win7', 'win2008 VM ostype is correcty');
+is($win10->{qm}->{boot}, 'order=scsi0', 'win10 VM boot is correct');
is($win10->{qm}->{name}, 'Win10-Liz', 'win10 VM name is correct');
is($win10->{qm}->{memory}, '6144', 'win10 VM memory is correct');
is($win10->{qm}->{cores}, '4', 'win10 VM cores are correct');
# older esxi/ovf standard used 'other' for windows10
is($win10->{qm}->{ostype}, 'other', 'win10 VM ostype is correct');
+is($win10noNs->{qm}->{boot}, 'order=scsi0', 'win10 VM (no default rasd NS) boot is correct');
is($win10noNs->{qm}->{name}, 'Win10-Liz', 'win10 VM (no default rasd NS) name is correct');
is($win10noNs->{qm}->{memory}, '6144', 'win10 VM (no default rasd NS) memory is correct');
is($win10noNs->{qm}->{cores}, '4', 'win10 VM (no default rasd NS) cores are correct');
--
2.39.5
* [pve-devel] [PATCH storage v6 08/12] ovf: implement parsing nics
2024-11-15 15:17 [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages Dominik Csapak
` (6 preceding siblings ...)
2024-11-15 15:17 ` [pve-devel] [PATCH storage v6 07/12] ovf: implement rudimentary boot order Dominik Csapak
@ 2024-11-15 15:17 ` Dominik Csapak
2024-11-15 15:17 ` [pve-devel] [PATCH storage v6 09/12] api: allow ova upload/download Dominik Csapak
` (22 subsequent siblings)
30 siblings, 0 replies; 68+ messages in thread
From: Dominik Csapak @ 2024-11-15 15:17 UTC (permalink / raw)
To: pve-devel
by iterating over the relevant parts and trying to parse out the
'ResourceSubType'. The content of that is not standardized, but I only
ever found examples that are compatible with vmware, meaning it's
either 'e1000', 'e1000e' or 'vmxnet3' (in various capitalizations; thus
the `lc()`).
As a fallback I used e1000, since that is our default too, and it should
work for most guest operating systems.
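e.g. how a few ResourceSubType values map with that fallback (sketch,
capitalization as found in the wild):
    for my $sub_type ('E1000e', 'VmxNet3', 'PCNet32') {
        my $model = lc($sub_type);
        $model = 'e1000' if !grep { $_ eq $model } qw(e1000 e1000e vmxnet3);
        print "$sub_type => $model\n";   # e1000e, vmxnet3, e1000 (fallback)
    }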
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/GuestImport/OVF.pm | 23 ++++++++++++++++++++++-
src/PVE/Storage/DirPlugin.pm | 2 +-
src/test/run_ovf_tests.pl | 5 +++++
3 files changed, 28 insertions(+), 2 deletions(-)
diff --git a/src/PVE/GuestImport/OVF.pm b/src/PVE/GuestImport/OVF.pm
index d08cc51..4af58ed 100644
--- a/src/PVE/GuestImport/OVF.pm
+++ b/src/PVE/GuestImport/OVF.pm
@@ -119,6 +119,12 @@ sub get_ostype {
return $ostype_ids->{$id} // 'other';
}
+my $allowed_nic_models = [
+ 'e1000',
+ 'e1000e',
+ 'vmxnet3',
+];
+
sub find_by {
my ($key, $param) = @_;
foreach my $resource (@resources) {
@@ -359,7 +365,22 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
$qm->{boot} = "order=" . join(';', @$boot_order) if scalar(@$boot_order) > 0;
- return {qm => $qm, disks => \@disks};
+ my $nic_id = dtmf_name_to_id('Ethernet Adapter');
+ my $xpath_find_nics = "/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/ovf:Item[rasd:ResourceType=${nic_id}]";
+ my @nic_items = $xpc->findnodes($xpath_find_nics);
+
+ my $net = {};
+
+ my $net_count = 0;
+ for my $item_node (@nic_items) {
+ my $model = $xpc->findvalue('rasd:ResourceSubType', $item_node);
+ $model = lc($model);
+ $model = 'e1000' if ! grep { $_ eq $model } @$allowed_nic_models;
+ $net->{"net${net_count}"} = { model => $model };
+ $net_count++;
+ }
+
+ return {qm => $qm, disks => \@disks, net => $net};
}
1;
diff --git a/src/PVE/Storage/DirPlugin.pm b/src/PVE/Storage/DirPlugin.pm
index 0c32242..fb23e0a 100644
--- a/src/PVE/Storage/DirPlugin.pm
+++ b/src/PVE/Storage/DirPlugin.pm
@@ -293,7 +293,7 @@ sub get_import_metadata {
'create-args' => $res->{qm},
'disks' => $disks,
warnings => $warnings,
- net => [],
+ net => $res->{net},
};
}
diff --git a/src/test/run_ovf_tests.pl b/src/test/run_ovf_tests.pl
index 3b04100..b8fa4b1 100755
--- a/src/test/run_ovf_tests.pl
+++ b/src/test/run_ovf_tests.pl
@@ -54,6 +54,11 @@ is($win10noNs->{disks}->[0]->{disk_address}, 'scsi0', 'single disk vm (no defaul
is($win10noNs->{disks}->[0]->{backing_file}, "$test_manifests/Win10-Liz-disk1.vmdk", 'single disk vm (no default rasd NS) has the correct disk backing device');
is($win10noNs->{disks}->[0]->{virtual_size}, 2048, 'single disk vm (no default rasd NS) has the correct size');
+print "testing nics\n";
+is($win2008->{net}->{net0}->{model}, 'e1000', 'win2008 has correct nic model');
+is($win10->{net}->{net0}->{model}, 'e1000e', 'win10 has correct nic model');
+is($win10noNs->{net}->{net0}->{model}, 'e1000e', 'win10 (no default rasd NS) has correct nic model');
+
print "\ntesting vm.conf extraction\n";
is($win2008->{qm}->{boot}, 'order=scsi0;scsi1', 'win2008 VM boot is correct');
--
2.39.5
* [pve-devel] [PATCH storage v6 09/12] api: allow ova upload/download
2024-11-15 15:17 [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages Dominik Csapak
` (7 preceding siblings ...)
2024-11-15 15:17 ` [pve-devel] [PATCH storage v6 08/12] ovf: implement parsing nics Dominik Csapak
@ 2024-11-15 15:17 ` Dominik Csapak
2024-11-18 12:42 ` Fiona Ebner
2024-11-15 15:17 ` [pve-devel] [PATCH storage v6 10/12] plugin: enable import for nfs/btrfs/cifs/cephfs/glusterfs Dominik Csapak
` (21 subsequent siblings)
30 siblings, 1 reply; 68+ messages in thread
From: Dominik Csapak @ 2024-11-15 15:17 UTC (permalink / raw)
To: pve-devel
introducing a separate regex that only contains ova, since
uploading/downloading ovfs does not make sense (the disks would then be
missing).
Add a sanity check after up-/downloading the ova file (and delete it if
it does not match).
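on the filename level this means only plain '.ova' names get through,
roughly (sketch using the same character class as below):
    for my $fn ('appliance.ova', 'appliance.ovf', 'disk 1.qcow2') {
        my $ok = $fn =~ m![a-zA-Z0-9\-\.\+\=\_]+\.(ova)$!;
        print "$fn => ", ($ok ? "accepted" : "rejected"), "\n";
    }
    # appliance.ova => accepted, appliance.ovf and 'disk 1.qcow2' => rejected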
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
changes from v2:
* add sanity check for ova content after up/downloading
src/PVE/API2/Storage/Status.pm | 69 +++++++++++++++++++++++++++++++---
src/PVE/Storage.pm | 11 ++++++
2 files changed, 75 insertions(+), 5 deletions(-)
diff --git a/src/PVE/API2/Storage/Status.pm b/src/PVE/API2/Storage/Status.pm
index bdf1c18..8bbb5a7 100644
--- a/src/PVE/API2/Storage/Status.pm
+++ b/src/PVE/API2/Storage/Status.pm
@@ -41,6 +41,24 @@ __PACKAGE__->register_method ({
path => '{storage}/file-restore',
});
+my sub assert_ova_contents {
+ my ($file) = @_;
+
+ # test if it's really a tar file with an ovf file inside
+ my $hasOvf = 0;
+ run_command(['tar', '-t', '-f', $file], outfunc => sub {
+ my ($line) = @_;
+
+ if ($line =~ m/\.ovf$/) {
+ $hasOvf = 1;
+ }
+ });
+
+ die "ova archive has no .ovf file inside\n" if !$hasOvf;
+
+ return undef;
+};
+
__PACKAGE__->register_method ({
name => 'index',
path => '',
@@ -369,7 +387,7 @@ __PACKAGE__->register_method ({
name => 'upload',
path => '{storage}/upload',
method => 'POST',
- description => "Upload templates and ISO images.",
+ description => "Upload templates, ISO images and OVAs.",
permissions => {
check => ['perm', '/storage/{storage}', ['Datastore.AllocateTemplate']],
},
@@ -382,7 +400,7 @@ __PACKAGE__->register_method ({
content => {
description => "Content type.",
type => 'string', format => 'pve-storage-content',
- enum => ['iso', 'vztmpl'],
+ enum => ['iso', 'vztmpl', 'import'],
},
filename => {
description => "The name of the file to create. Caution: This will be normalized!",
@@ -437,6 +455,7 @@ __PACKAGE__->register_method ({
my $filename = PVE::Storage::normalize_content_filename($param->{filename});
my $path;
+ my $isOva = 0;
if ($content eq 'iso') {
if ($filename !~ m![^/]+$PVE::Storage::ISO_EXT_RE_0$!) {
@@ -448,6 +467,16 @@ __PACKAGE__->register_method ({
raise_param_exc({ filename => "wrong file extension" });
}
$path = PVE::Storage::get_vztmpl_dir($cfg, $param->{storage});
+ } elsif ($content eq 'import') {
+ if ($filename !~ m!${PVE::Storage::SAFE_CHAR_CLASS_RE}+$PVE::Storage::UPLOAD_IMPORT_EXT_RE_1$!) {
+ raise_param_exc({ filename => "invalid filename or wrong extension" });
+ }
+
+ if ($filename =~ m/\.ova$/) {
+ $isOva = 1;
+ }
+
+ $path = PVE::Storage::get_import_dir($cfg, $param->{storage});
} else {
raise_param_exc({ content => "upload content type '$content' not allowed" });
}
@@ -510,6 +539,10 @@ __PACKAGE__->register_method ({
die "checksum mismatch: got '$checksum_got' != expect '$checksum'\n";
}
}
+
+ if ($isOva) {
+ assert_ova_contents($tmpfilename);
+ }
};
if (my $err = $@) {
# unlinks only the temporary file from the http server
@@ -544,7 +577,7 @@ __PACKAGE__->register_method({
name => 'download_url',
path => '{storage}/download-url',
method => 'POST',
- description => "Download templates and ISO images by using an URL.",
+ description => "Download templates, ISO images and OVAs by using an URL.",
proxyto => 'node',
permissions => {
description => 'Requires allocation access on the storage and as this allows one to probe'
@@ -572,7 +605,7 @@ __PACKAGE__->register_method({
content => {
description => "Content type.", # TODO: could be optional & detected in most cases
type => 'string', format => 'pve-storage-content',
- enum => ['iso', 'vztmpl'],
+ enum => ['iso', 'vztmpl', 'import'],
},
filename => {
description => "The name of the file to create. Caution: This will be normalized!",
@@ -632,6 +665,8 @@ __PACKAGE__->register_method({
my $filename = PVE::Storage::normalize_content_filename($param->{filename});
my $path;
+ my $isOva = 0;
+
if ($content eq 'iso') {
if ($filename !~ m![^/]+$PVE::Storage::ISO_EXT_RE_0$!) {
raise_param_exc({ filename => "wrong file extension" });
@@ -642,6 +677,16 @@ __PACKAGE__->register_method({
raise_param_exc({ filename => "wrong file extension" });
}
$path = PVE::Storage::get_vztmpl_dir($cfg, $storage);
+ } elsif ($content eq 'import') {
+ if ($filename !~ m!${PVE::Storage::SAFE_CHAR_CLASS_RE}+$PVE::Storage::UPLOAD_IMPORT_EXT_RE_1$!) {
+ raise_param_exc({ filename => "invalid filename or wrong extension" });
+ }
+
+ if ($filename =~ m/\.ova$/) {
+ $isOva = 1;
+ }
+
+ $path = PVE::Storage::get_import_dir($cfg, $param->{storage});
} else {
raise_param_exc({ content => "upload content-type '$content' is not allowed" });
}
@@ -669,7 +714,21 @@ __PACKAGE__->register_method({
die "no decompression method found\n" if !$info->{decompressor};
$opts->{decompression_command} = $info->{decompressor};
}
- PVE::Tools::download_file_from_url("$path/$filename", $url, $opts);
+
+ my $target_path = "$path/$filename";
+ PVE::Tools::download_file_from_url($target_path, $url, $opts);
+
+ if ($isOva) {
+ eval {
+ assert_ova_contents($target_path);
+ };
+ if (my $err = $@) {
+ # unlinks only the temporary file from the http server
+ unlink $target_path or $! == ENOENT
+ or warn "unable to clean up temporory file '$target_path' - $!\n";
+ die $err;
+ }
+ }
};
my $worker_id = PVE::Tools::encode_text($filename); # must not pass : or the like as w-ID
diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
index 4df1a84..c6a8894 100755
--- a/src/PVE/Storage.pm
+++ b/src/PVE/Storage.pm
@@ -116,6 +116,8 @@ our $BACKUP_EXT_RE_2 = qr/\.(tgz|(?:tar|vma)(?:\.(${\PVE::Storage::Plugin::COMPR
our $IMPORT_EXT_RE_1 = qr/\.(ova|ovf|qcow2|raw|vmdk)/;
+our $UPLOAD_IMPORT_EXT_RE_1 = qr/\.(ova)/;
+
our $SAFE_CHAR_CLASS_RE = qr/[a-zA-Z0-9\-\.\+\=\_]/;
our $OVA_CONTENT_RE_1 = qr/${SAFE_CHAR_CLASS_RE}+\.(qcow2|raw|vmdk)/;
@@ -466,6 +468,15 @@ sub get_iso_dir {
return $plugin->get_subdir($scfg, 'iso');
}
+sub get_import_dir {
+ my ($cfg, $storeid) = @_;
+
+ my $scfg = storage_config($cfg, $storeid);
+ my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
+
+ return $plugin->get_subdir($scfg, 'import');
+}
+
sub get_vztmpl_dir {
my ($cfg, $storeid) = @_;
--
2.39.5
* Re: [pve-devel] [PATCH storage v6 09/12] api: allow ova upload/download
2024-11-15 15:17 ` [pve-devel] [PATCH storage v6 09/12] api: allow ova upload/download Dominik Csapak
@ 2024-11-18 12:42 ` Fiona Ebner
0 siblings, 0 replies; 68+ messages in thread
From: Fiona Ebner @ 2024-11-18 12:42 UTC (permalink / raw)
To: Proxmox VE development discussion, Dominik Csapak
On 15.11.24 16:17, Dominik Csapak wrote:
> introducing a separate regex that only contains ova, since
> upload/downloading ovfs does not make sense (since the disks are then
> missing).
>
> Add a sanity check after up/downloading the ova file (and delete if it
> does not match).
>
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
> changes from v2:
> * add sanity check for ova content after up/downloading
>
> src/PVE/API2/Storage/Status.pm | 69 +++++++++++++++++++++++++++++++---
> src/PVE/Storage.pm | 11 ++++++
> 2 files changed, 75 insertions(+), 5 deletions(-)
>
> diff --git a/src/PVE/API2/Storage/Status.pm b/src/PVE/API2/Storage/Status.pm
> index bdf1c18..8bbb5a7 100644
> --- a/src/PVE/API2/Storage/Status.pm
> +++ b/src/PVE/API2/Storage/Status.pm
> @@ -41,6 +41,24 @@ __PACKAGE__->register_method ({
> path => '{storage}/file-restore',
> });
>
> +my sub assert_ova_contents {
> + my ($file) = @_;
> +
> + # test if it's really a tar file with an ovf file inside
> + my $hasOvf = 0;
> + run_command(['tar', '-t', '-f', $file], outfunc => sub {
> + my ($line) = @_;
> +
> + if ($line =~ m/\.ovf$/) {
> + $hasOvf = 1;
> + }
> + });
Style nit: wrong indentation
> +
> + die "ova archive has no .ovf file inside\n" if !$hasOvf;
> +
> + return undef;
Nit: I'd prefer a truthy-return, but no big deal
> +};
Style nit: no need for semicolon
> +
> __PACKAGE__->register_method ({
> name => 'index',
> path => '',
> @@ -369,7 +387,7 @@ __PACKAGE__->register_method ({
> name => 'upload',
> path => '{storage}/upload',
> method => 'POST',
> - description => "Upload templates and ISO images.",
> + description => "Upload templates, ISO images and OVAs.",
> permissions => {
> check => ['perm', '/storage/{storage}', ['Datastore.AllocateTemplate']],
> },
> @@ -382,7 +400,7 @@ __PACKAGE__->register_method ({
> content => {
> description => "Content type.",
> type => 'string', format => 'pve-storage-content',
> - enum => ['iso', 'vztmpl'],
> + enum => ['iso', 'vztmpl', 'import'],
> },
> filename => {
> description => "The name of the file to create. Caution: This will be normalized!",
> @@ -437,6 +455,7 @@ __PACKAGE__->register_method ({
> my $filename = PVE::Storage::normalize_content_filename($param->{filename});
>
> my $path;
> + my $isOva = 0;
>
> if ($content eq 'iso') {
> if ($filename !~ m![^/]+$PVE::Storage::ISO_EXT_RE_0$!) {
> @@ -448,6 +467,16 @@ __PACKAGE__->register_method ({
> raise_param_exc({ filename => "wrong file extension" });
> }
> $path = PVE::Storage::get_vztmpl_dir($cfg, $param->{storage});
> + } elsif ($content eq 'import') {
> + if ($filename !~ m!${PVE::Storage::SAFE_CHAR_CLASS_RE}+$PVE::Storage::UPLOAD_IMPORT_EXT_RE_1$!) {
> + raise_param_exc({ filename => "invalid filename or wrong extension" });
> + }
> +
> + if ($filename =~ m/\.ova$/) {
Nit: we already get the extension from the above match, and could use
that. Also, it's always an ova, so no need for special casing.
> + $isOva = 1;
> + }
> +
> + $path = PVE::Storage::get_import_dir($cfg, $param->{storage});
> } else {
> raise_param_exc({ content => "upload content type '$content' not allowed" });
> }
---snip---
> @@ -669,7 +714,21 @@ __PACKAGE__->register_method({
> die "no decompression method found\n" if !$info->{decompressor};
> $opts->{decompression_command} = $info->{decompressor};
> }
> - PVE::Tools::download_file_from_url("$path/$filename", $url, $opts);
> +
> + my $target_path = "$path/$filename";
> + PVE::Tools::download_file_from_url($target_path, $url, $opts);
As already discussed off-list, would be nice to do the check before the
file is put in its final location ;)
> +
> + if ($isOva) {
> + eval {
> + assert_ova_contents($target_path);
> + };
> + if (my $err = $@) {
> + # unlinks only the temporary file from the http server
> + unlink $target_path or $! == ENOENT
> + or warn "unable to clean up temporory file '$target_path' - $!\n";
> + die $err;
> + }
> + }
> };
>
> my $worker_id = PVE::Tools::encode_text($filename); # must not pass : or the like as w-ID
* [pve-devel] [PATCH storage v6 10/12] plugin: enable import for nfs/btrfs/cifs/cephfs/glusterfs
2024-11-15 15:17 [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages Dominik Csapak
` (8 preceding siblings ...)
2024-11-15 15:17 ` [pve-devel] [PATCH storage v6 09/12] api: allow ova upload/download Dominik Csapak
@ 2024-11-15 15:17 ` Dominik Csapak
2024-11-15 15:17 ` [pve-devel] [PATCH storage v6 11/12] add 'import' content type to 'check_volume_access' Dominik Csapak
` (20 subsequent siblings)
30 siblings, 0 replies; 68+ messages in thread
From: Dominik Csapak @ 2024-11-15 15:17 UTC (permalink / raw)
To: pve-devel
and reuse the DirPlugin implementation
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/Storage/BTRFSPlugin.pm | 5 +++++
src/PVE/Storage/CIFSPlugin.pm | 6 +++++-
src/PVE/Storage/CephFSPlugin.pm | 6 +++++-
src/PVE/Storage/GlusterfsPlugin.pm | 6 +++++-
src/PVE/Storage/NFSPlugin.pm | 6 +++++-
5 files changed, 25 insertions(+), 4 deletions(-)
diff --git a/src/PVE/Storage/BTRFSPlugin.pm b/src/PVE/Storage/BTRFSPlugin.pm
index abc5bba..d28e681 100644
--- a/src/PVE/Storage/BTRFSPlugin.pm
+++ b/src/PVE/Storage/BTRFSPlugin.pm
@@ -40,6 +40,7 @@ sub plugindata {
backup => 1,
snippets => 1,
none => 1,
+ import => 1,
},
{ images => 1, rootdir => 1 },
],
@@ -963,4 +964,8 @@ sub rename_volume {
return "${storeid}:$target_volname";
}
+sub get_import_metadata {
+ return PVE::Storage::DirPlugin::get_import_metadata(@_);
+}
+
1
diff --git a/src/PVE/Storage/CIFSPlugin.pm b/src/PVE/Storage/CIFSPlugin.pm
index 2184471..475065a 100644
--- a/src/PVE/Storage/CIFSPlugin.pm
+++ b/src/PVE/Storage/CIFSPlugin.pm
@@ -99,7 +99,7 @@ sub type {
sub plugindata {
return {
content => [ { images => 1, rootdir => 1, vztmpl => 1, iso => 1,
- backup => 1, snippets => 1}, { images => 1 }],
+ backup => 1, snippets => 1, import => 1}, { images => 1 }],
format => [ { raw => 1, qcow2 => 1, vmdk => 1 } , 'raw' ],
};
}
@@ -314,4 +314,8 @@ sub update_volume_attribute {
return PVE::Storage::DirPlugin::update_volume_attribute(@_);
}
+sub get_import_metadata {
+ return PVE::Storage::DirPlugin::get_import_metadata(@_);
+}
+
1;
diff --git a/src/PVE/Storage/CephFSPlugin.pm b/src/PVE/Storage/CephFSPlugin.pm
index 8aad518..36c64ea 100644
--- a/src/PVE/Storage/CephFSPlugin.pm
+++ b/src/PVE/Storage/CephFSPlugin.pm
@@ -116,7 +116,7 @@ sub type {
sub plugindata {
return {
- content => [ { vztmpl => 1, iso => 1, backup => 1, snippets => 1},
+ content => [ { vztmpl => 1, iso => 1, backup => 1, snippets => 1, import => 1 },
{ backup => 1 }],
};
}
@@ -261,4 +261,8 @@ sub update_volume_attribute {
return PVE::Storage::DirPlugin::update_volume_attribute(@_);
}
+sub get_import_metadata {
+ return PVE::Storage::DirPlugin::get_import_metadata(@_);
+}
+
1;
diff --git a/src/PVE/Storage/GlusterfsPlugin.pm b/src/PVE/Storage/GlusterfsPlugin.pm
index 2b7f9e1..9d17180 100644
--- a/src/PVE/Storage/GlusterfsPlugin.pm
+++ b/src/PVE/Storage/GlusterfsPlugin.pm
@@ -97,7 +97,7 @@ sub type {
sub plugindata {
return {
- content => [ { images => 1, vztmpl => 1, iso => 1, backup => 1, snippets => 1},
+ content => [ { images => 1, vztmpl => 1, iso => 1, backup => 1, snippets => 1, import => 1},
{ images => 1 }],
format => [ { raw => 1, qcow2 => 1, vmdk => 1 } , 'raw' ],
};
@@ -352,4 +352,8 @@ sub check_connection {
return defined($server) ? 1 : 0;
}
+sub get_import_metadata {
+ return PVE::Storage::DirPlugin::get_import_metadata(@_);
+}
+
1;
diff --git a/src/PVE/Storage/NFSPlugin.pm b/src/PVE/Storage/NFSPlugin.pm
index f2e4c0d..72e9c6d 100644
--- a/src/PVE/Storage/NFSPlugin.pm
+++ b/src/PVE/Storage/NFSPlugin.pm
@@ -53,7 +53,7 @@ sub type {
sub plugindata {
return {
- content => [ { images => 1, rootdir => 1, vztmpl => 1, iso => 1, backup => 1, snippets => 1 },
+ content => [ { images => 1, rootdir => 1, vztmpl => 1, iso => 1, backup => 1, snippets => 1, import => 1 },
{ images => 1 }],
format => [ { raw => 1, qcow2 => 1, vmdk => 1 } , 'raw' ],
};
@@ -223,4 +223,8 @@ sub update_volume_attribute {
return PVE::Storage::DirPlugin::update_volume_attribute(@_);
}
+sub get_import_metadata {
+ return PVE::Storage::DirPlugin::get_import_metadata(@_);
+}
+
1;
--
2.39.5
* [pve-devel] [PATCH storage v6 11/12] add 'import' content type to 'check_volume_access'
2024-11-15 15:17 [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages Dominik Csapak
` (9 preceding siblings ...)
2024-11-15 15:17 ` [pve-devel] [PATCH storage v6 10/12] plugin: enable import for nfs/btrfs/cifs/cephfs/glusterfs Dominik Csapak
@ 2024-11-15 15:17 ` Dominik Csapak
2024-11-18 12:58 ` Fiona Ebner
2024-11-15 15:17 ` [pve-devel] [PATCH storage v6 12/12] plugin: file_size_info: don't ignore base path with whitespace Dominik Csapak
` (19 subsequent siblings)
30 siblings, 1 reply; 68+ messages in thread
From: Dominik Csapak @ 2024-11-15 15:17 UTC (permalink / raw)
To: pve-devel
in the same branch as 'vztmpl' and 'iso'
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/PVE/Storage.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
index c6a8894..31faa5e 100755
--- a/src/PVE/Storage.pm
+++ b/src/PVE/Storage.pm
@@ -542,7 +542,7 @@ sub check_volume_access {
return if $rpcenv->check($user, "/storage/$sid", ['Datastore.Allocate'], 1);
- if ($vtype eq 'iso' || $vtype eq 'vztmpl') {
+ if ($vtype eq 'iso' || $vtype eq 'vztmpl' || $vtype eq 'import') {
# require at least read access to storage, (custom) templates/ISOs could be sensitive
$rpcenv->check_any($user, "/storage/$sid", ['Datastore.AllocateSpace', 'Datastore.Audit']);
} elsif (defined($ownervm) && defined($vmid) && ($ownervm == $vmid)) {
--
2.39.5
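
For readers not familiar with check_volume_access(), the branch changed above
boils down to the following standalone sketch (the helper name is invented and
the fall-through handling for guest-owned volumes is left out):

# $rpcenv is the usual PVE::RPCEnvironment instance; check() and
# check_any() are its existing permission helpers
sub check_content_type_access {
    my ($rpcenv, $user, $sid, $vtype) = @_;

    # Datastore.Allocate on the storage grants access to everything on it
    return if $rpcenv->check($user, "/storage/$sid", ['Datastore.Allocate'], 1);

    if ($vtype eq 'iso' || $vtype eq 'vztmpl' || $vtype eq 'import') {
        # require at least read access to the storage, (custom) templates,
        # ISOs and import sources could be sensitive
        $rpcenv->check_any($user, "/storage/$sid", ['Datastore.AllocateSpace', 'Datastore.Audit']);
    } else {
        # guest-owned volume types are handled separately in the real code
        die "no permission to access volume of type '$vtype'\n";
    }
}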
* [pve-devel] [PATCH storage v6 12/12] plugin: file_size_info: don't ignore base path with whitespace
2024-11-15 15:17 [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages Dominik Csapak
` (10 preceding siblings ...)
2024-11-15 15:17 ` [pve-devel] [PATCH storage v6 11/12] add 'import' content type to 'check_volume_access' Dominik Csapak
@ 2024-11-15 15:17 ` Dominik Csapak
2024-11-17 15:16 ` Thomas Lamprecht
2024-11-15 15:17 ` [pve-devel] [PATCH qemu-server v6 1/6] disk import: add additional safeguards for imported image files Dominik Csapak
` (18 subsequent siblings)
30 siblings, 1 reply; 68+ messages in thread
From: Dominik Csapak @ 2024-11-15 15:17 UTC (permalink / raw)
To: pve-devel
if the base image (parent) of an image contains whitespace in its path
(e.g. a space), the current untainting would not match and it would seem
there was no parent.
Fix that by adapting the untaint regex.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/PVE/Storage/Plugin.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
index eed764d..761783f 100644
--- a/src/PVE/Storage/Plugin.pm
+++ b/src/PVE/Storage/Plugin.pm
@@ -1031,7 +1031,7 @@ sub file_size_info {
($format) = ($format =~ /^(\S+)$/); # untaint
die "format '$format' includes whitespace\n" if !defined($format);
if (defined($parent)) {
- ($parent) = ($parent =~ /^(\S+)$/); # untaint
+ ($parent) = ($parent =~ /^(.*)$/); # untaint
}
return wantarray ? ($size, $format, $used, $parent, $st->ctime) : $size;
}
--
2.39.5
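
To make the failure mode concrete, a tiny standalone example (the path is made up):

#!/usr/bin/perl
use strict;
use warnings;

my $parent = '/mnt/my datastore/images/100/base-100-disk-0.qcow2';

# old untaint: \S+ cannot match the space, the anchored regex fails and
# $old ends up undef -- callers then think there is no base image at all
my ($old) = ($parent =~ /^(\S+)$/);
print 'old regex: ', defined($old) ? $old : '<no parent detected>', "\n";

# new untaint from this patch: accept the path as-is, only strip the taint
my ($new) = ($parent =~ /^(.*)$/);
print "new regex: $new\n";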
* Re: [pve-devel] [PATCH storage v6 12/12] plugin: file_size_info: don't ignore base path with whitespace
2024-11-15 15:17 ` [pve-devel] [PATCH storage v6 12/12] plugin: file_size_info: don't ignore base path with whitespace Dominik Csapak
@ 2024-11-17 15:16 ` Thomas Lamprecht
2024-11-18 7:42 ` Dominik Csapak
0 siblings, 1 reply; 68+ messages in thread
From: Thomas Lamprecht @ 2024-11-17 15:16 UTC (permalink / raw)
To: Proxmox VE development discussion, Dominik Csapak
On 15.11.24 at 16:17, Dominik Csapak wrote:
> if the base image (parent) of an image contains whitespace in its path
> (e.g. a space), the current untainting would not match and it would seem
> there was no parent.
do we really want to allow all whitespace, like newlines, too? Those can sometimes cause odd
things when printing to the CLI or the like, so maybe just add the space character explicitly?
Like with: /^([ \S]+)$/
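
A quick standalone check of what that pattern accepts and rejects (the example
paths are made up, illustration only):

# the suggested class accepts a plain space but still rejects embedded
# newlines and other whitespace
for my $p ("/mnt/my datastore/disk.qcow2", "/mnt/evil\npath/disk.qcow2") {
    my ($ok) = ($p =~ /^([ \S]+)$/);
    (my $shown = $p) =~ s/\n/\\n/g;
    print "$shown => ", defined($ok) ? "accepted" : "rejected", "\n";
}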
>
> Fix that by adapting the untaint regex
>
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
> src/PVE/Storage/Plugin.pm | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
> index eed764d..761783f 100644
> --- a/src/PVE/Storage/Plugin.pm
> +++ b/src/PVE/Storage/Plugin.pm
> @@ -1031,7 +1031,7 @@ sub file_size_info {
> ($format) = ($format =~ /^(\S+)$/); # untaint
> die "format '$format' includes whitespace\n" if !defined($format);
> if (defined($parent)) {
> - ($parent) = ($parent =~ /^(\S+)$/); # untaint
> + ($parent) = ($parent =~ /^(.*)$/); # untaint
> }
> return wantarray ? ($size, $format, $used, $parent, $st->ctime) : $size;
> }
* Re: [pve-devel] [PATCH storage v6 12/12] plugin: file_size_info: don't ignore base path with whitespace
2024-11-17 15:16 ` Thomas Lamprecht
@ 2024-11-18 7:42 ` Dominik Csapak
2024-11-18 7:48 ` Thomas Lamprecht
0 siblings, 1 reply; 68+ messages in thread
From: Dominik Csapak @ 2024-11-18 7:42 UTC (permalink / raw)
To: Thomas Lamprecht, Proxmox VE development discussion
On 11/17/24 16:16, Thomas Lamprecht wrote:
> On 15.11.24 at 16:17, Dominik Csapak wrote:
>> if the base image (parent) of an image contains whitespace in its path
>> (e.g. a space), the current untainting would not match and it would seem
>> there was no parent.
>
> do we really want to allow all whitespace, like newlines, too? Those can sometimes cause odd
> things when printing to the CLI or the like, so maybe just add the space character explicitly?
>
> Like with: /^([ \S]+)$/
>
mhmm, i agree that there might be some characters that can cause problems.
in that case I'd rather just 'die' if we encounter a base image with problematic characters,
instead of treating it as having no parent?
I can't exactly remember the context of this patch, but we now disallow
base images for imported volumes altogether, so not sure if it is still necessary
to allow such paths for parents
(file based storages can't have a space in the path, and neither can volume ids
created with our api)
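
A minimal sketch of that alternative (the error text and the rejected character
set are placeholders, not a concrete proposal):

if (defined($parent)) {
    # refuse suspicious characters instead of silently dropping the parent;
    # which characters to reject exactly is up for discussion
    die "base image path '$parent' contains unsupported whitespace\n"
        if $parent =~ /[^ \S]/;
    ($parent) = ($parent =~ /^(.*)$/); # untaint
}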
>>
>> Fix that by adapting the untaint regex
>>
>> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
>> ---
>> src/PVE/Storage/Plugin.pm | 2 +-
>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
>> index eed764d..761783f 100644
>> --- a/src/PVE/Storage/Plugin.pm
>> +++ b/src/PVE/Storage/Plugin.pm
>> @@ -1031,7 +1031,7 @@ sub file_size_info {
>> ($format) = ($format =~ /^(\S+)$/); # untaint
>> die "format '$format' includes whitespace\n" if !defined($format);
>> if (defined($parent)) {
>> - ($parent) = ($parent =~ /^(\S+)$/); # untaint
>> + ($parent) = ($parent =~ /^(.*)$/); # untaint
>> }
>> return wantarray ? ($size, $format, $used, $parent, $st->ctime) : $size;
>> }
>
* Re: [pve-devel] [PATCH storage v6 12/12] plugin: file_size_info: don't ignore base path with whitespace
2024-11-18 7:42 ` Dominik Csapak
@ 2024-11-18 7:48 ` Thomas Lamprecht
0 siblings, 0 replies; 68+ messages in thread
From: Thomas Lamprecht @ 2024-11-18 7:48 UTC (permalink / raw)
To: Dominik Csapak, Proxmox VE development discussion
On 18.11.24 at 08:42, Dominik Csapak wrote:
> On 11/17/24 16:16, Thomas Lamprecht wrote:
>> On 15.11.24 at 16:17, Dominik Csapak wrote:
>>> if the base image (parent) of an image contains whitespace in its path
>>> (e.g. a space), the current untainting would not match and it would seem
>>> there was no parent.
>>
>> do we really want to allow all whitespace, like newlines, too? Those can sometimes cause odd
>> things when printing to the CLI or the like, so maybe just add the space character explicitly?
>>
>> Like with: /^([ \S]+)$/
>>
>
> mhmm, i agree that there might be some characters that can cause problems.
>
> in that case I'd rather just 'die' if we encounter a base image with problematic characters,
> instead of treating it as having no parent?
yeah, that's the nicer approach in general, we just need to be somewhat certain
that it cannot happen on existing systems and cause some bad regression during
a release. if we can imagine how this could break such systems, then maybe just
warn now and change that to a die for PVE 9?
That said, from the top of my head it doesn't seem very likely that this can
easily happen, so if you think so too then it's fine by me to die now already.
>
> I can't exactly remember the context of this patch, but we now disallow
> base images for imported volumes altogether, so not sure if it is still necessary
> to allow such paths for parents
> (file based storages can't have a space in the path, and neither can volume ids
> created with our api)
yeah, this probably doesn't matter much anymore, but being explicit about the
error here would still be better and possibly save some dev/support from some
debugging headache in the future.
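
For the transitional variant, something along these lines (warning text and the
exact check are illustrative only):

if (defined($parent)) {
    if ($parent =~ /[^ \S]/) { # whitespace other than a plain space
        # warn for now so existing setups keep working, and turn this
        # into a die with the next major release (PVE 9)
        warn "base image path '$parent' contains unexpected whitespace, "
            ."this will become a hard error in a future release\n";
    }
    ($parent) = ($parent =~ /^(.*)$/); # untaint
}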
* [pve-devel] [PATCH qemu-server v6 1/6] disk import: add additional safeguards for imported image files
2024-11-15 15:17 [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages Dominik Csapak
` (11 preceding siblings ...)
2024-11-15 15:17 ` [pve-devel] [PATCH storage v6 12/12] plugin: file_size_info: don't ignore base path with whitespace Dominik Csapak
@ 2024-11-15 15:17 ` Dominik Csapak
2024-11-18 13:08 ` Fiona Ebner
2024-11-15 15:17 ` [pve-devel] [PATCH qemu-server v6 2/6] api: delete unused OVF.pm Dominik Csapak
` (17 subsequent siblings)
30 siblings, 1 reply; 68+ messages in thread
From: Dominik Csapak @ 2024-11-15 15:17 UTC (permalink / raw)
To: pve-devel
From: Fabian Grünbichler <f.gruenbichler@proxmox.com>
creating non-raw disk images with arbitrary content is only possible with raw
access to the storage, but checking for references to external files doesn't
hurt, at least for non pve-managed volumes.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
[ DC: removed prolematic checks for pve-managed volumes ]
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
PVE/API2/Qemu.pm | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 1c3cb271..b9c63af8 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -413,12 +413,15 @@ my sub create_disks : prototype($$$$$$$$$$) {
$needs_creation = $live_import;
- if (PVE::Storage::parse_volume_id($source, 1)) { # PVE-managed volume
+ my ($source_storage, $source_volid) = PVE::Storage::parse_volume_id($source, 1);
+
+ if ($source_storage) { # PVE-managed volume
if ($live_import && $ds ne 'efidisk0') {
my $path = PVE::Storage::path($storecfg, $source)
or die "failed to get a path for '$source'\n";
$source = $path;
($size, my $source_format) = PVE::Storage::file_size_info($source);
+
die "could not get file size of $source\n" if !$size;
$live_import_mapping->{$ds} = {
path => $source,
@@ -442,7 +445,8 @@ my sub create_disks : prototype($$$$$$$$$$) {
}
} else {
$source = PVE::Storage::abs_filesystem_path($storecfg, $source, 1);
- ($size, my $source_format) = PVE::Storage::file_size_info($source);
+ # check potentially untrusted image file!
+ ($size, my $source_format) = PVE::Storage::file_size_info($source, undef, 1);
die "could not get file size of $source\n" if !$size;
if ($live_import && $ds ne 'efidisk0') {
--
2.39.5
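
Grounded in the hunk above: parse_volume_id() in list context yields the storage
ID and volume name, and the third argument to file_size_info() enables the
stricter checks for potentially untrusted image files. A small usage sketch (the
volume ID and fallback path are invented examples):

use strict;
use warnings;

use PVE::Storage;

my $storecfg = PVE::Storage::config();

# made-up example: a PVE-managed import volume, or any plain path via argv
my $source = $ARGV[0] // 'local:import/appliance.ova/disk-0.vmdk';

my ($sid, $volname) = PVE::Storage::parse_volume_id($source, 1);

my ($size, $format);
if ($sid) {
    # PVE-managed volume: resolve it to a path, its content is under our control
    my $path = PVE::Storage::path($storecfg, $source)
        or die "failed to get a path for '$source'\n";
    ($size, $format) = PVE::Storage::file_size_info($path);
} else {
    # plain path supplied by the caller: run the checks for potentially
    # untrusted image files (third argument)
    ($size, $format) = PVE::Storage::file_size_info($source, undef, 1);
}
die "could not get file size of $source\n" if !$size;
printf "%s: format=%s, size=%d bytes\n", $source, $format, $size;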
* Re: [pve-devel] [PATCH qemu-server v6 1/6] disk import: add additional safeguards for imported image files
2024-11-15 15:17 ` [pve-devel] [PATCH qemu-server v6 1/6] disk import: add additional safeguards for imported image files Dominik Csapak
@ 2024-11-18 13:08 ` Fiona Ebner
0 siblings, 0 replies; 68+ messages in thread
From: Fiona Ebner @ 2024-11-18 13:08 UTC (permalink / raw)
To: Proxmox VE development discussion, Dominik Csapak
On 15.11.24 at 16:17, Dominik Csapak wrote:
> From: Fabian Grünbichler <f.gruenbichler@proxmox.com>
>
> creating non-raw disk images with arbitrary content is only possible with raw
> access to the storage, but checking for references to external files doesn't
> hurt, at least for non pve-managed volumes.
>
> Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
> [ DC: removed prolematic checks for pve-managed volumes ]
typo: s/prolematic/problematic/
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Other than the nit below:
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
> ---
> PVE/API2/Qemu.pm | 8 ++++++--
> 1 file changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index 1c3cb271..b9c63af8 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -413,12 +413,15 @@ my sub create_disks : prototype($$$$$$$$$$) {
>
> $needs_creation = $live_import;
>
> - if (PVE::Storage::parse_volume_id($source, 1)) { # PVE-managed volume
> + my ($source_storage, $source_volid) = PVE::Storage::parse_volume_id($source, 1);
> +
> + if ($source_storage) { # PVE-managed volume
> if ($live_import && $ds ne 'efidisk0') {
> my $path = PVE::Storage::path($storecfg, $source)
> or die "failed to get a path for '$source'\n";
> $source = $path;
> ($size, my $source_format) = PVE::Storage::file_size_info($source);
> +
> die "could not get file size of $source\n" if !$size;
> $live_import_mapping->{$ds} = {
> path => $source,
Nit: this hunk doesn't do anything now and could be squashed into the
other patch
> @@ -442,7 +445,8 @@ my sub create_disks : prototype($$$$$$$$$$) {
> }
> } else {
> $source = PVE::Storage::abs_filesystem_path($storecfg, $source, 1);
> - ($size, my $source_format) = PVE::Storage::file_size_info($source);
> + # check potentially untrusted image file!
> + ($size, my $source_format) = PVE::Storage::file_size_info($source, undef, 1);
> die "could not get file size of $source\n" if !$size;
>
> if ($live_import && $ds ne 'efidisk0') {
* [pve-devel] [PATCH qemu-server v6 2/6] api: delete unused OVF.pm
2024-11-15 15:17 [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages Dominik Csapak
` (12 preceding siblings ...)
2024-11-15 15:17 ` [pve-devel] [PATCH qemu-server v6 1/6] disk import: add additional safeguards for imported image files Dominik Csapak
@ 2024-11-15 15:17 ` Dominik Csapak
2024-11-17 15:18 ` [pve-devel] applied: " Thomas Lamprecht
2024-11-15 15:17 ` [pve-devel] [PATCH qemu-server v6 3/6] use OVF from Storage Dominik Csapak
` (16 subsequent siblings)
30 siblings, 1 reply; 68+ messages in thread
From: Dominik Csapak @ 2024-11-15 15:17 UTC (permalink / raw)
To: pve-devel
the api part was never in use by anything
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
PVE/API2/Qemu/Makefile | 2 +-
PVE/API2/Qemu/OVF.pm | 53 ------------------------------------------
2 files changed, 1 insertion(+), 54 deletions(-)
delete mode 100644 PVE/API2/Qemu/OVF.pm
diff --git a/PVE/API2/Qemu/Makefile b/PVE/API2/Qemu/Makefile
index bdd4762b..5d4abda6 100644
--- a/PVE/API2/Qemu/Makefile
+++ b/PVE/API2/Qemu/Makefile
@@ -1,4 +1,4 @@
-SOURCES=Agent.pm CPU.pm Machine.pm OVF.pm
+SOURCES=Agent.pm CPU.pm Machine.pm
.PHONY: install
install:
diff --git a/PVE/API2/Qemu/OVF.pm b/PVE/API2/Qemu/OVF.pm
deleted file mode 100644
index cc0ef2da..00000000
--- a/PVE/API2/Qemu/OVF.pm
+++ /dev/null
@@ -1,53 +0,0 @@
-package PVE::API2::Qemu::OVF;
-
-use strict;
-use warnings;
-
-use PVE::JSONSchema qw(get_standard_option);
-use PVE::QemuServer::OVF;
-use PVE::RESTHandler;
-
-use base qw(PVE::RESTHandler);
-
-__PACKAGE__->register_method ({
- name => 'readovf',
- path => '',
- method => 'GET',
- proxyto => 'node',
- description => "Read an .ovf manifest.",
- protected => 1,
- parameters => {
- additionalProperties => 0,
- properties => {
- node => get_standard_option('pve-node'),
- manifest => {
- description => "Path to .ovf manifest.",
- type => 'string',
- },
- },
- },
- returns => {
- type => 'object',
- additionalProperties => 1,
- properties => PVE::QemuServer::json_ovf_properties(),
- description => "VM config according to .ovf manifest.",
- },
- code => sub {
- my ($param) = @_;
-
- my $manifest = $param->{manifest};
- die "check for file $manifest failed - $!\n" if !-f $manifest;
-
- my $parsed = PVE::QemuServer::OVF::parse_ovf($manifest);
- my $result;
- $result->{cores} = $parsed->{qm}->{cores};
- $result->{name} = $parsed->{qm}->{name};
- $result->{memory} = $parsed->{qm}->{memory};
- my $disks = $parsed->{disks};
- for my $disk (@$disks) {
- $result->{$disk->{disk_address}} = $disk->{backing_file};
- }
- return $result;
- }});
-
-1;
--
2.39.5
* [pve-devel] [PATCH qemu-server v6 3/6] use OVF from Storage
2024-11-15 15:17 [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages Dominik Csapak
` (13 preceding siblings ...)
2024-11-15 15:17 ` [pve-devel] [PATCH qemu-server v6 2/6] api: delete unused OVF.pm Dominik Csapak
@ 2024-11-15 15:17 ` Dominik Csapak
2024-11-17 17:42 ` Thomas Lamprecht
2024-11-15 15:17 ` [pve-devel] [PATCH qemu-server v6 4/6] api: create: implement extracting disks when needed for import-from Dominik Csapak
` (15 subsequent siblings)
30 siblings, 1 reply; 68+ messages in thread
From: Dominik Csapak @ 2024-11-15 15:17 UTC (permalink / raw)
To: pve-devel
and delete it here (incl tests; they live in pve-storage now).
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
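As a usage reminder, the moved parser keeps its return shape of qm.conf style
keys plus a disk list; a short sketch (the manifest path is just an example):

use strict;
use warnings;

use PVE::GuestImport::OVF;

my $parsed = PVE::GuestImport::OVF::parse_ovf('/var/lib/vz/import/Win10-Liz.ovf');

# $parsed->{qm} holds qm.conf style keys (name, cores, memory),
# $parsed->{disks} is a list of { disk_address, backing_file, virtual_size }
print "name: $parsed->{qm}->{name} cores: $parsed->{qm}->{cores} memory: $parsed->{qm}->{memory}\n";
for my $disk (@{ $parsed->{disks} }) {
    print "$disk->{disk_address}: $disk->{backing_file} ($disk->{virtual_size} bytes)\n";
}
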
PVE/CLI/qm.pm | 4 +-
PVE/QemuServer/Makefile | 1 -
PVE/QemuServer/OVF.pm | 242 ------------------
debian/control | 2 -
test/Makefile | 5 +-
test/ovf_manifests/Win10-Liz-disk1.vmdk | Bin 65536 -> 0 bytes
test/ovf_manifests/Win10-Liz.ovf | 142 ----------
.../ovf_manifests/Win10-Liz_no_default_ns.ovf | 142 ----------
test/ovf_manifests/Win_2008_R2_two-disks.ovf | 145 -----------
test/ovf_manifests/disk1.vmdk | Bin 65536 -> 0 bytes
test/ovf_manifests/disk2.vmdk | Bin 65536 -> 0 bytes
test/run_ovf_tests.pl | 71 -----
12 files changed, 3 insertions(+), 751 deletions(-)
delete mode 100644 PVE/QemuServer/OVF.pm
delete mode 100644 test/ovf_manifests/Win10-Liz-disk1.vmdk
delete mode 100755 test/ovf_manifests/Win10-Liz.ovf
delete mode 100755 test/ovf_manifests/Win10-Liz_no_default_ns.ovf
delete mode 100755 test/ovf_manifests/Win_2008_R2_two-disks.ovf
delete mode 100644 test/ovf_manifests/disk1.vmdk
delete mode 100644 test/ovf_manifests/disk2.vmdk
delete mode 100755 test/run_ovf_tests.pl
diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
index 47b87782..6c442449 100755
--- a/PVE/CLI/qm.pm
+++ b/PVE/CLI/qm.pm
@@ -28,13 +28,13 @@ use PVE::Tools qw(extract_param file_get_contents);
use PVE::API2::Qemu::Agent;
use PVE::API2::Qemu;
+use PVE::GuestImport::OVF;
use PVE::QemuConfig;
use PVE::QemuServer::Drive;
use PVE::QemuServer::Helpers;
use PVE::QemuServer::Agent qw(agent_available);
use PVE::QemuServer::ImportDisk;
use PVE::QemuServer::Monitor qw(mon_cmd);
-use PVE::QemuServer::OVF;
use PVE::QemuServer::QMPHelpers;
use PVE::QemuServer;
@@ -730,7 +730,7 @@ __PACKAGE__->register_method ({
my $storecfg = PVE::Storage::config();
PVE::Storage::storage_check_enabled($storecfg, $storeid);
- my $parsed = PVE::QemuServer::OVF::parse_ovf($ovf_file);
+ my $parsed = PVE::GuestImport::OVF::parse_ovf($ovf_file);
if ($dryrun) {
print to_json($parsed, { pretty => 1, canonical => 1});
diff --git a/PVE/QemuServer/Makefile b/PVE/QemuServer/Makefile
index ac26e56f..89d12091 100644
--- a/PVE/QemuServer/Makefile
+++ b/PVE/QemuServer/Makefile
@@ -2,7 +2,6 @@ SOURCES=PCI.pm \
USB.pm \
Memory.pm \
ImportDisk.pm \
- OVF.pm \
Cloudinit.pm \
Agent.pm \
Helpers.pm \
diff --git a/PVE/QemuServer/OVF.pm b/PVE/QemuServer/OVF.pm
deleted file mode 100644
index eb9cf8e8..00000000
--- a/PVE/QemuServer/OVF.pm
+++ /dev/null
@@ -1,242 +0,0 @@
-# Open Virtualization Format import routines
-# https://www.dmtf.org/standards/ovf
-package PVE::QemuServer::OVF;
-
-use strict;
-use warnings;
-
-use XML::LibXML;
-use File::Spec;
-use File::Basename;
-use Data::Dumper;
-use Cwd 'realpath';
-
-use PVE::Tools;
-use PVE::Storage;
-
-# map OVF resources types to descriptive strings
-# this will allow us to explore the xml tree without using magic numbers
-# http://schemas.dmtf.org/wbem/cim-html/2/CIM_ResourceAllocationSettingData.html
-my @resources = (
- { id => 1, dtmf_name => 'Other' },
- { id => 2, dtmf_name => 'Computer System' },
- { id => 3, dtmf_name => 'Processor' },
- { id => 4, dtmf_name => 'Memory' },
- { id => 5, dtmf_name => 'IDE Controller', pve_type => 'ide' },
- { id => 6, dtmf_name => 'Parallel SCSI HBA', pve_type => 'scsi' },
- { id => 7, dtmf_name => 'FC HBA' },
- { id => 8, dtmf_name => 'iSCSI HBA' },
- { id => 9, dtmf_name => 'IB HCA' },
- { id => 10, dtmf_name => 'Ethernet Adapter' },
- { id => 11, dtmf_name => 'Other Network Adapter' },
- { id => 12, dtmf_name => 'I/O Slot' },
- { id => 13, dtmf_name => 'I/O Device' },
- { id => 14, dtmf_name => 'Floppy Drive' },
- { id => 15, dtmf_name => 'CD Drive' },
- { id => 16, dtmf_name => 'DVD drive' },
- { id => 17, dtmf_name => 'Disk Drive' },
- { id => 18, dtmf_name => 'Tape Drive' },
- { id => 19, dtmf_name => 'Storage Extent' },
- { id => 20, dtmf_name => 'Other storage device', pve_type => 'sata'},
- { id => 21, dtmf_name => 'Serial port' },
- { id => 22, dtmf_name => 'Parallel port' },
- { id => 23, dtmf_name => 'USB Controller' },
- { id => 24, dtmf_name => 'Graphics controller' },
- { id => 25, dtmf_name => 'IEEE 1394 Controller' },
- { id => 26, dtmf_name => 'Partitionable Unit' },
- { id => 27, dtmf_name => 'Base Partitionable Unit' },
- { id => 28, dtmf_name => 'Power' },
- { id => 29, dtmf_name => 'Cooling Capacity' },
- { id => 30, dtmf_name => 'Ethernet Switch Port' },
- { id => 31, dtmf_name => 'Logical Disk' },
- { id => 32, dtmf_name => 'Storage Volume' },
- { id => 33, dtmf_name => 'Ethernet Connection' },
- { id => 34, dtmf_name => 'DMTF reserved' },
- { id => 35, dtmf_name => 'Vendor Reserved'}
-);
-
-sub find_by {
- my ($key, $param) = @_;
- foreach my $resource (@resources) {
- if ($resource->{$key} eq $param) {
- return ($resource);
- }
- }
- return;
-}
-
-sub dtmf_name_to_id {
- my ($dtmf_name) = @_;
- my $found = find_by('dtmf_name', $dtmf_name);
- if ($found) {
- return $found->{id};
- } else {
- return;
- }
-}
-
-sub id_to_pve {
- my ($id) = @_;
- my $resource = find_by('id', $id);
- if ($resource) {
- return $resource->{pve_type};
- } else {
- return;
- }
-}
-
-# returns two references, $qm which holds qm.conf style key/values, and \@disks
-sub parse_ovf {
- my ($ovf, $debug) = @_;
-
- my $dom = XML::LibXML->load_xml(location => $ovf, no_blanks => 1);
-
- # register the xml namespaces in a xpath context object
- # 'ovf' is the default namespace so it will prepended to each xml element
- my $xpc = XML::LibXML::XPathContext->new($dom);
- $xpc->registerNs('ovf', 'http://schemas.dmtf.org/ovf/envelope/1');
- $xpc->registerNs('rasd', 'http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData');
- $xpc->registerNs('vssd', 'http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData');
-
-
- # hash to save qm.conf parameters
- my $qm;
-
- #array to save a disk list
- my @disks;
-
- # easy xpath
- # walk down the dom until we find the matching XML element
- my $xpath_find_name = "/ovf:Envelope/ovf:VirtualSystem/ovf:Name";
- my $ovf_name = $xpc->findvalue($xpath_find_name);
-
- if ($ovf_name) {
- # PVE::QemuServer::confdesc requires a valid DNS name
- ($qm->{name} = $ovf_name) =~ s/[^a-zA-Z0-9\-\.]//g;
- } else {
- warn "warning: unable to parse the VM name in this OVF manifest, generating a default value\n";
- }
-
- # middle level xpath
- # element[child] search the elements which have this [child]
- my $processor_id = dtmf_name_to_id('Processor');
- my $xpath_find_vcpu_count = "/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/ovf:Item[rasd:ResourceType=${processor_id}]/rasd:VirtualQuantity";
- $qm->{'cores'} = $xpc->findvalue($xpath_find_vcpu_count);
-
- my $memory_id = dtmf_name_to_id('Memory');
- my $xpath_find_memory = ("/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/ovf:Item[rasd:ResourceType=${memory_id}]/rasd:VirtualQuantity");
- $qm->{'memory'} = $xpc->findvalue($xpath_find_memory);
-
- # middle level xpath
- # here we expect multiple results, so we do not read the element value with
- # findvalue() but store multiple elements with findnodes()
- my $disk_id = dtmf_name_to_id('Disk Drive');
- my $xpath_find_disks="/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/ovf:Item[rasd:ResourceType=${disk_id}]";
- my @disk_items = $xpc->findnodes($xpath_find_disks);
-
- # disks metadata is split in four different xml elements:
- # * as an Item node of type DiskDrive in the VirtualHardwareSection
- # * as an Disk node in the DiskSection
- # * as a File node in the References section
- # * each Item node also holds a reference to its owning controller
- #
- # we iterate over the list of Item nodes of type disk drive, and for each item,
- # find the corresponding Disk node, and File node and owning controller
- # when all the nodes has been found out, we copy the relevant information to
- # a $pve_disk hash ref, which we push to @disks;
-
- foreach my $item_node (@disk_items) {
-
- my $disk_node;
- my $file_node;
- my $controller_node;
- my $pve_disk;
-
- print "disk item:\n", $item_node->toString(1), "\n" if $debug;
-
- # from Item, find corresponding Disk node
- # here the dot means the search should start from the current element in dom
- my $host_resource = $xpc->findvalue('rasd:HostResource', $item_node);
- my $disk_section_path;
- my $disk_id;
-
- # RFC 3986 "2.3. Unreserved Characters"
- my $valid_uripath_chars = qr/[[:alnum:]]|[\-\._~]/;
-
- if ($host_resource =~ m|^ovf:/(${valid_uripath_chars}+)/(${valid_uripath_chars}+)$|) {
- $disk_section_path = $1;
- $disk_id = $2;
- } else {
- warn "invalid host resource $host_resource, skipping\n";
- next;
- }
- printf "disk section path: $disk_section_path and disk id: $disk_id\n" if $debug;
-
- # tricky xpath
- # @ means we filter the result query based on a the value of an item attribute ( @ = attribute)
- # @ needs to be escaped to prevent Perl double quote interpolation
- my $xpath_find_fileref = sprintf("/ovf:Envelope/ovf:DiskSection/\
-ovf:Disk[\@ovf:diskId='%s']/\@ovf:fileRef", $disk_id);
- my $fileref = $xpc->findvalue($xpath_find_fileref);
-
- my $valid_url_chars = qr@${valid_uripath_chars}|/@;
- if (!$fileref || $fileref !~ m/^${valid_url_chars}+$/) {
- warn "invalid host resource $host_resource, skipping\n";
- next;
- }
-
- # from Disk Node, find corresponding filepath
- my $xpath_find_filepath = sprintf("/ovf:Envelope/ovf:References/ovf:File[\@ovf:id='%s']/\@ovf:href", $fileref);
- my $filepath = $xpc->findvalue($xpath_find_filepath);
- if (!$filepath) {
- warn "invalid file reference $fileref, skipping\n";
- next;
- }
- print "file path: $filepath\n" if $debug;
-
- # from Item, find owning Controller type
- my $controller_id = $xpc->findvalue('rasd:Parent', $item_node);
- my $xpath_find_parent_type = sprintf("/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/\
-ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
- my $controller_type = $xpc->findvalue($xpath_find_parent_type);
- if (!$controller_type) {
- warn "invalid or missing controller: $controller_type, skipping\n";
- next;
- }
- print "owning controller type: $controller_type\n" if $debug;
-
- # extract corresponding Controller node details
- my $adress_on_controller = $xpc->findvalue('rasd:AddressOnParent', $item_node);
- my $pve_disk_address = id_to_pve($controller_type) . $adress_on_controller;
-
- # resolve symlinks and relative path components
- # and die if the diskimage is not somewhere under the $ovf path
- my $ovf_dir = realpath(dirname(File::Spec->rel2abs($ovf)));
- my $backing_file_path = realpath(join ('/', $ovf_dir, $filepath));
- if ($backing_file_path !~ /^\Q${ovf_dir}\E/) {
- die "error parsing $filepath, are you using a symlink ?\n";
- }
-
- if (!-e $backing_file_path) {
- die "error parsing $filepath, file seems not to exist at $backing_file_path\n";
- }
-
- ($backing_file_path) = $backing_file_path =~ m|^(/.*)|; # untaint
-
- my $virtual_size = PVE::Storage::file_size_info($backing_file_path);
- die "error parsing $backing_file_path, cannot determine file size\n"
- if !$virtual_size;
-
- $pve_disk = {
- disk_address => $pve_disk_address,
- backing_file => $backing_file_path,
- virtual_size => $virtual_size
- };
- push @disks, $pve_disk;
-
- }
-
- return {qm => $qm, disks => \@disks};
-}
-
-1;
diff --git a/debian/control b/debian/control
index aa5f4c6d..33012650 100644
--- a/debian/control
+++ b/debian/control
@@ -14,7 +14,6 @@ Build-Depends: debhelper-compat (= 13),
libtest-mockmodule-perl,
liburi-perl,
libuuid-perl,
- libxml-libxml-perl,
lintian,
perl,
pkg-config,
@@ -44,7 +43,6 @@ Depends: dbus,
libterm-readline-gnu-perl,
liburi-perl,
libuuid-perl,
- libxml-libxml-perl,
perl (>= 5.10.0-19),
proxmox-websocket-tunnel,
pve-cluster,
diff --git a/test/Makefile b/test/Makefile
index 9e6d39e8..65ed7bc4 100644
--- a/test/Makefile
+++ b/test/Makefile
@@ -1,14 +1,11 @@
all: test
-test: test_snapshot test_ovf test_cfg_to_cmd test_pci_addr_conflicts test_qemu_img_convert test_migration test_restore_config
+test: test_snapshot test_cfg_to_cmd test_pci_addr_conflicts test_qemu_img_convert test_migration test_restore_config
test_snapshot: run_snapshot_tests.pl
./run_snapshot_tests.pl
./test_get_replicatable_volumes.pl
-test_ovf: run_ovf_tests.pl
- ./run_ovf_tests.pl
-
test_cfg_to_cmd: run_config2command_tests.pl cfg2cmd/*.conf
perl -I../ ./run_config2command_tests.pl
diff --git a/test/ovf_manifests/Win10-Liz-disk1.vmdk b/test/ovf_manifests/Win10-Liz-disk1.vmdk
deleted file mode 100644
index 662354a3d1333a2f6c4364005e53bfe7cd8b9044..0000000000000000000000000000000000000000
GIT binary patch
literal 0
HcmV?d00001
literal 65536
zcmeIvy>HV%7zbeUF`dK)46s<q+^B&Pi6H|eMIb;zP1VdMK8V$P$u?2T)IS|NNta69
zSXw<No$qu%-}$}AUq|21A0<ihr0Gwa-nQ%QGfCR@wmshsN%A;JUhL<u_T%+U7Sd<o
zW^TMU0^M{}R2S(eR@1Ur*Q@eVF^^#r%c@u{hyC#J%V?Nq?*?z)=UG^1Wn9+n(yx6B
z(=ujtJiA)QVP~;guI5EOE2iV-%_??6=%y!^b+aeU_aA6Z4X2azC>{U!a5_FoJCkDB
zKRozW{5{B<Li)YUBEQ&fJe$RRZCRbA$5|CacQiT<A<uvIHbq(g$>yIY=etVNVcI$B
zY@^?CwTN|j)tg?;i)G&AZFqPqoW(5P2K~XUq>9sqVVe!!?y@Y;)^#k~TefEvd2_XU
z^M@5mfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk4^iOdL%ftb
z5g<T-009C72;3>~`p!f^fB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZ
zfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&U
zAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C7
z2oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N
p0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK;Zui`~%h>XmtPp
diff --git a/test/ovf_manifests/Win10-Liz.ovf b/test/ovf_manifests/Win10-Liz.ovf
deleted file mode 100755
index 46642c04..00000000
--- a/test/ovf_manifests/Win10-Liz.ovf
+++ /dev/null
@@ -1,142 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--Generated by VMware ovftool 4.1.0 (build-2982904), UTC time: 2017-02-07T13:50:15.265014Z-->
-<Envelope vmw:buildId="build-2982904" xmlns="http://schemas.dmtf.org/ovf/envelope/1" xmlns:cim="http://schemas.dmtf.org/wbem/wscim/1/common" xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1" xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" xmlns:vmw="http://www.vmware.com/schema/ovf" xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
- <References>
- <File ovf:href="Win10-Liz-disk1.vmdk" ovf:id="file1" ovf:size="9155243008"/>
- </References>
- <DiskSection>
- <Info>Virtual disk information</Info>
- <Disk ovf:capacity="128" ovf:capacityAllocationUnits="byte * 2^30" ovf:diskId="vmdisk1" ovf:fileRef="file1" ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized" ovf:populatedSize="16798056448"/>
- </DiskSection>
- <NetworkSection>
- <Info>The list of logical networks</Info>
- <Network ovf:name="bridged">
- <Description>The bridged network</Description>
- </Network>
- </NetworkSection>
- <VirtualSystem ovf:id="vm">
- <Info>A virtual machine</Info>
- <Name>Win10-Liz</Name>
- <OperatingSystemSection ovf:id="1" vmw:osType="windows9_64Guest">
- <Info>The kind of installed guest operating system</Info>
- </OperatingSystemSection>
- <VirtualHardwareSection>
- <Info>Virtual hardware requirements</Info>
- <System>
- <vssd:ElementName>Virtual Hardware Family</vssd:ElementName>
- <vssd:InstanceID>0</vssd:InstanceID>
- <vssd:VirtualSystemIdentifier>Win10-Liz</vssd:VirtualSystemIdentifier>
- <vssd:VirtualSystemType>vmx-11</vssd:VirtualSystemType>
- </System>
- <Item>
- <rasd:AllocationUnits>hertz * 10^6</rasd:AllocationUnits>
- <rasd:Description>Number of Virtual CPUs</rasd:Description>
- <rasd:ElementName>4 virtual CPU(s)</rasd:ElementName>
- <rasd:InstanceID>1</rasd:InstanceID>
- <rasd:ResourceType>3</rasd:ResourceType>
- <rasd:VirtualQuantity>4</rasd:VirtualQuantity>
- </Item>
- <Item>
- <rasd:AllocationUnits>byte * 2^20</rasd:AllocationUnits>
- <rasd:Description>Memory Size</rasd:Description>
- <rasd:ElementName>6144MB of memory</rasd:ElementName>
- <rasd:InstanceID>2</rasd:InstanceID>
- <rasd:ResourceType>4</rasd:ResourceType>
- <rasd:VirtualQuantity>6144</rasd:VirtualQuantity>
- </Item>
- <Item>
- <rasd:Address>0</rasd:Address>
- <rasd:Description>SATA Controller</rasd:Description>
- <rasd:ElementName>sataController0</rasd:ElementName>
- <rasd:InstanceID>3</rasd:InstanceID>
- <rasd:ResourceSubType>vmware.sata.ahci</rasd:ResourceSubType>
- <rasd:ResourceType>20</rasd:ResourceType>
- </Item>
- <Item ovf:required="false">
- <rasd:Address>0</rasd:Address>
- <rasd:Description>USB Controller (XHCI)</rasd:Description>
- <rasd:ElementName>usb3</rasd:ElementName>
- <rasd:InstanceID>4</rasd:InstanceID>
- <rasd:ResourceSubType>vmware.usb.xhci</rasd:ResourceSubType>
- <rasd:ResourceType>23</rasd:ResourceType>
- </Item>
- <Item ovf:required="false">
- <rasd:Address>0</rasd:Address>
- <rasd:Description>USB Controller (EHCI)</rasd:Description>
- <rasd:ElementName>usb</rasd:ElementName>
- <rasd:InstanceID>5</rasd:InstanceID>
- <rasd:ResourceSubType>vmware.usb.ehci</rasd:ResourceSubType>
- <rasd:ResourceType>23</rasd:ResourceType>
- <vmw:Config ovf:required="false" vmw:key="ehciEnabled" vmw:value="true"/>
- </Item>
- <Item>
- <rasd:Address>0</rasd:Address>
- <rasd:Description>SCSI Controller</rasd:Description>
- <rasd:ElementName>scsiController0</rasd:ElementName>
- <rasd:InstanceID>6</rasd:InstanceID>
- <rasd:ResourceSubType>lsilogicsas</rasd:ResourceSubType>
- <rasd:ResourceType>6</rasd:ResourceType>
- </Item>
- <Item ovf:required="false">
- <rasd:AutomaticAllocation>true</rasd:AutomaticAllocation>
- <rasd:ElementName>serial0</rasd:ElementName>
- <rasd:InstanceID>7</rasd:InstanceID>
- <rasd:ResourceType>21</rasd:ResourceType>
- <vmw:Config ovf:required="false" vmw:key="yieldOnPoll" vmw:value="false"/>
- </Item>
- <Item>
- <rasd:AddressOnParent>0</rasd:AddressOnParent>
- <rasd:ElementName>disk0</rasd:ElementName>
- <rasd:HostResource>ovf:/disk/vmdisk1</rasd:HostResource>
- <rasd:InstanceID>8</rasd:InstanceID>
- <rasd:Parent>6</rasd:Parent>
- <rasd:ResourceType>17</rasd:ResourceType>
- </Item>
- <Item>
- <rasd:AddressOnParent>2</rasd:AddressOnParent>
- <rasd:AutomaticAllocation>true</rasd:AutomaticAllocation>
- <rasd:Connection>bridged</rasd:Connection>
- <rasd:Description>E1000e ethernet adapter on "bridged"</rasd:Description>
- <rasd:ElementName>ethernet0</rasd:ElementName>
- <rasd:InstanceID>9</rasd:InstanceID>
- <rasd:ResourceSubType>E1000e</rasd:ResourceSubType>
- <rasd:ResourceType>10</rasd:ResourceType>
- <vmw:Config ovf:required="false" vmw:key="wakeOnLanEnabled" vmw:value="false"/>
- </Item>
- <Item ovf:required="false">
- <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
- <rasd:ElementName>sound</rasd:ElementName>
- <rasd:InstanceID>10</rasd:InstanceID>
- <rasd:ResourceSubType>vmware.soundcard.hdaudio</rasd:ResourceSubType>
- <rasd:ResourceType>1</rasd:ResourceType>
- </Item>
- <Item ovf:required="false">
- <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
- <rasd:ElementName>video</rasd:ElementName>
- <rasd:InstanceID>11</rasd:InstanceID>
- <rasd:ResourceType>24</rasd:ResourceType>
- <vmw:Config ovf:required="false" vmw:key="enable3DSupport" vmw:value="true"/>
- </Item>
- <Item ovf:required="false">
- <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
- <rasd:ElementName>vmci</rasd:ElementName>
- <rasd:InstanceID>12</rasd:InstanceID>
- <rasd:ResourceSubType>vmware.vmci</rasd:ResourceSubType>
- <rasd:ResourceType>1</rasd:ResourceType>
- </Item>
- <Item ovf:required="false">
- <rasd:AddressOnParent>1</rasd:AddressOnParent>
- <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
- <rasd:ElementName>cdrom0</rasd:ElementName>
- <rasd:InstanceID>13</rasd:InstanceID>
- <rasd:Parent>3</rasd:Parent>
- <rasd:ResourceType>15</rasd:ResourceType>
- </Item>
- <vmw:Config ovf:required="false" vmw:key="memoryHotAddEnabled" vmw:value="true"/>
- <vmw:Config ovf:required="false" vmw:key="powerOpInfo.powerOffType" vmw:value="soft"/>
- <vmw:Config ovf:required="false" vmw:key="powerOpInfo.resetType" vmw:value="soft"/>
- <vmw:Config ovf:required="false" vmw:key="powerOpInfo.suspendType" vmw:value="soft"/>
- <vmw:Config ovf:required="false" vmw:key="tools.syncTimeWithHost" vmw:value="false"/>
- </VirtualHardwareSection>
- </VirtualSystem>
-</Envelope>
\ No newline at end of file
diff --git a/test/ovf_manifests/Win10-Liz_no_default_ns.ovf b/test/ovf_manifests/Win10-Liz_no_default_ns.ovf
deleted file mode 100755
index b93540f4..00000000
--- a/test/ovf_manifests/Win10-Liz_no_default_ns.ovf
+++ /dev/null
@@ -1,142 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--Generated by VMware ovftool 4.1.0 (build-2982904), UTC time: 2017-02-07T13:50:15.265014Z-->
-<Envelope vmw:buildId="build-2982904" xmlns="http://schemas.dmtf.org/ovf/envelope/1" xmlns:cim="http://schemas.dmtf.org/wbem/wscim/1/common" xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1" xmlns:vmw="http://www.vmware.com/schema/ovf" xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
- <References>
- <File ovf:href="Win10-Liz-disk1.vmdk" ovf:id="file1" ovf:size="9155243008"/>
- </References>
- <DiskSection>
- <Info>Virtual disk information</Info>
- <Disk ovf:capacity="128" ovf:capacityAllocationUnits="byte * 2^30" ovf:diskId="vmdisk1" ovf:fileRef="file1" ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized" ovf:populatedSize="16798056448"/>
- </DiskSection>
- <NetworkSection>
- <Info>The list of logical networks</Info>
- <Network ovf:name="bridged">
- <Description>The bridged network</Description>
- </Network>
- </NetworkSection>
- <VirtualSystem ovf:id="vm">
- <Info>A virtual machine</Info>
- <Name>Win10-Liz</Name>
- <OperatingSystemSection ovf:id="1" vmw:osType="windows9_64Guest">
- <Info>The kind of installed guest operating system</Info>
- </OperatingSystemSection>
- <VirtualHardwareSection>
- <Info>Virtual hardware requirements</Info>
- <System>
- <vssd:ElementName>Virtual Hardware Family</vssd:ElementName>
- <vssd:InstanceID>0</vssd:InstanceID>
- <vssd:VirtualSystemIdentifier>Win10-Liz</vssd:VirtualSystemIdentifier>
- <vssd:VirtualSystemType>vmx-11</vssd:VirtualSystemType>
- </System>
- <Item>
- <rasd:AllocationUnits xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">hertz * 10^6</rasd:AllocationUnits>
- <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">Number of Virtual CPUs</rasd:Description>
- <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">4 virtual CPU(s)</rasd:ElementName>
- <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">1</rasd:InstanceID>
- <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">3</rasd:ResourceType>
- <rasd:VirtualQuantity xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">4</rasd:VirtualQuantity>
- </Item>
- <Item>
- <rasd:AllocationUnits xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">byte * 2^20</rasd:AllocationUnits>
- <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">Memory Size</rasd:Description>
- <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">6144MB of memory</rasd:ElementName>
- <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">2</rasd:InstanceID>
- <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">4</rasd:ResourceType>
- <rasd:VirtualQuantity xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">6144</rasd:VirtualQuantity>
- </Item>
- <Item>
- <rasd:Address xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">0</rasd:Address>
- <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">SATA Controller</rasd:Description>
- <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">sataController0</rasd:ElementName>
- <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">3</rasd:InstanceID>
- <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">vmware.sata.ahci</rasd:ResourceSubType>
- <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">20</rasd:ResourceType>
- </Item>
- <Item ovf:required="false">
- <rasd:Address xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">0</rasd:Address>
- <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">USB Controller (XHCI)</rasd:Description>
- <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">usb3</rasd:ElementName>
- <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">4</rasd:InstanceID>
- <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">vmware.usb.xhci</rasd:ResourceSubType>
- <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">23</rasd:ResourceType>
- </Item>
- <Item ovf:required="false">
- <rasd:Address xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">0</rasd:Address>
- <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">USB Controller (EHCI)</rasd:Description>
- <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">usb</rasd:ElementName>
- <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">5</rasd:InstanceID>
- <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">vmware.usb.ehci</rasd:ResourceSubType>
- <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">23</rasd:ResourceType>
- <vmw:Config ovf:required="false" vmw:key="ehciEnabled" vmw:value="true"/>
- </Item>
- <Item>
- <rasd:Address xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">0</rasd:Address>
- <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">SCSI Controller</rasd:Description>
- <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">scsiController0</rasd:ElementName>
- <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">6</rasd:InstanceID>
- <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">lsilogicsas</rasd:ResourceSubType>
- <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">6</rasd:ResourceType>
- </Item>
- <Item ovf:required="false">
- <rasd:AutomaticAllocation xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">true</rasd:AutomaticAllocation>
- <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">serial0</rasd:ElementName>
- <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">7</rasd:InstanceID>
- <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">21</rasd:ResourceType>
- <vmw:Config ovf:required="false" vmw:key="yieldOnPoll" vmw:value="false"/>
- </Item>
- <Item>
- <rasd:AddressOnParent xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" >0</rasd:AddressOnParent>
- <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" >disk0</rasd:ElementName>
- <rasd:HostResource xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" >ovf:/disk/vmdisk1</rasd:HostResource>
- <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">8</rasd:InstanceID>
- <rasd:Parent xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">6</rasd:Parent>
- <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">17</rasd:ResourceType>
- </Item>
- <Item>
- <rasd:AddressOnParent xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">2</rasd:AddressOnParent>
- <rasd:AutomaticAllocation xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">true</rasd:AutomaticAllocation>
- <rasd:Connection xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">bridged</rasd:Connection>
- <rasd:Description xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">E1000e ethernet adapter on "bridged"</rasd:Description>
- <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">ethernet0</rasd:ElementName>
- <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">9</rasd:InstanceID>
- <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">E1000e</rasd:ResourceSubType>
- <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">10</rasd:ResourceType>
- <vmw:Config ovf:required="false" vmw:key="wakeOnLanEnabled" vmw:value="false"/>
- </Item>
- <Item ovf:required="false">
- <rasd:AutomaticAllocation xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">false</rasd:AutomaticAllocation>
- <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">sound</rasd:ElementName>
- <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">10</rasd:InstanceID>
- <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">vmware.soundcard.hdaudio</rasd:ResourceSubType>
- <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">1</rasd:ResourceType>
- </Item>
- <Item ovf:required="false">
- <rasd:AutomaticAllocation xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">false</rasd:AutomaticAllocation>
- <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">video</rasd:ElementName>
- <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">11</rasd:InstanceID>
- <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">24</rasd:ResourceType>
- <vmw:Config ovf:required="false" vmw:key="enable3DSupport" vmw:value="true"/>
- </Item>
- <Item ovf:required="false">
- <rasd:AutomaticAllocation xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">false</rasd:AutomaticAllocation>
- <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">vmci</rasd:ElementName>
- <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">12</rasd:InstanceID>
- <rasd:ResourceSubType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">vmware.vmci</rasd:ResourceSubType>
- <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">1</rasd:ResourceType>
- </Item>
- <Item ovf:required="false">
- <rasd:AddressOnParent xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">1</rasd:AddressOnParent>
- <rasd:AutomaticAllocation xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">false</rasd:AutomaticAllocation>
- <rasd:ElementName xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">cdrom0</rasd:ElementName>
- <rasd:InstanceID xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">13</rasd:InstanceID>
- <rasd:Parent xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">3</rasd:Parent>
- <rasd:ResourceType xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">15</rasd:ResourceType>
- </Item>
- <vmw:Config ovf:required="false" vmw:key="memoryHotAddEnabled" vmw:value="true"/>
- <vmw:Config ovf:required="false" vmw:key="powerOpInfo.powerOffType" vmw:value="soft"/>
- <vmw:Config ovf:required="false" vmw:key="powerOpInfo.resetType" vmw:value="soft"/>
- <vmw:Config ovf:required="false" vmw:key="powerOpInfo.suspendType" vmw:value="soft"/>
- <vmw:Config ovf:required="false" vmw:key="tools.syncTimeWithHost" vmw:value="false"/>
- </VirtualHardwareSection>
- </VirtualSystem>
-</Envelope>
diff --git a/test/ovf_manifests/Win_2008_R2_two-disks.ovf b/test/ovf_manifests/Win_2008_R2_two-disks.ovf
deleted file mode 100755
index a563aabb..00000000
--- a/test/ovf_manifests/Win_2008_R2_two-disks.ovf
+++ /dev/null
@@ -1,145 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--Generated by VMware ovftool 4.1.0 (build-2982904), UTC time: 2017-02-27T15:09:29.768974Z-->
-<Envelope vmw:buildId="build-2982904" xmlns="http://schemas.dmtf.org/ovf/envelope/1" xmlns:cim="http://schemas.dmtf.org/wbem/wscim/1/common" xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1" xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" xmlns:vmw="http://www.vmware.com/schema/ovf" xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
- <References>
- <File ovf:href="disk1.vmdk" ovf:id="file1" ovf:size="3481968640"/>
- <File ovf:href="disk2.vmdk" ovf:id="file2" ovf:size="68096"/>
- </References>
- <DiskSection>
- <Info>Virtual disk information</Info>
- <Disk ovf:capacity="40" ovf:capacityAllocationUnits="byte * 2^30" ovf:diskId="vmdisk1" ovf:fileRef="file1" ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized" ovf:populatedSize="7684882432"/>
- <Disk ovf:capacity="1" ovf:capacityAllocationUnits="byte * 2^30" ovf:diskId="vmdisk2" ovf:fileRef="file2" ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized" ovf:populatedSize="0"/>
- </DiskSection>
- <NetworkSection>
- <Info>The list of logical networks</Info>
- <Network ovf:name="bridged">
- <Description>The bridged network</Description>
- </Network>
- </NetworkSection>
- <VirtualSystem ovf:id="vm">
- <Info>A virtual machine</Info>
- <Name>Win_2008-R2x64</Name>
- <OperatingSystemSection ovf:id="103" vmw:osType="windows7Server64Guest">
- <Info>The kind of installed guest operating system</Info>
- </OperatingSystemSection>
- <VirtualHardwareSection>
- <Info>Virtual hardware requirements</Info>
- <System>
- <vssd:ElementName>Virtual Hardware Family</vssd:ElementName>
- <vssd:InstanceID>0</vssd:InstanceID>
- <vssd:VirtualSystemIdentifier>Win_2008-R2x64</vssd:VirtualSystemIdentifier>
- <vssd:VirtualSystemType>vmx-11</vssd:VirtualSystemType>
- </System>
- <Item>
- <rasd:AllocationUnits>hertz * 10^6</rasd:AllocationUnits>
- <rasd:Description>Number of Virtual CPUs</rasd:Description>
- <rasd:ElementName>1 virtual CPU(s)</rasd:ElementName>
- <rasd:InstanceID>1</rasd:InstanceID>
- <rasd:ResourceType>3</rasd:ResourceType>
- <rasd:VirtualQuantity>1</rasd:VirtualQuantity>
- </Item>
- <Item>
- <rasd:AllocationUnits>byte * 2^20</rasd:AllocationUnits>
- <rasd:Description>Memory Size</rasd:Description>
- <rasd:ElementName>2048MB of memory</rasd:ElementName>
- <rasd:InstanceID>2</rasd:InstanceID>
- <rasd:ResourceType>4</rasd:ResourceType>
- <rasd:VirtualQuantity>2048</rasd:VirtualQuantity>
- </Item>
- <Item>
- <rasd:Address>0</rasd:Address>
- <rasd:Description>SATA Controller</rasd:Description>
- <rasd:ElementName>sataController0</rasd:ElementName>
- <rasd:InstanceID>3</rasd:InstanceID>
- <rasd:ResourceSubType>vmware.sata.ahci</rasd:ResourceSubType>
- <rasd:ResourceType>20</rasd:ResourceType>
- </Item>
- <Item ovf:required="false">
- <rasd:Address>0</rasd:Address>
- <rasd:Description>USB Controller (EHCI)</rasd:Description>
- <rasd:ElementName>usb</rasd:ElementName>
- <rasd:InstanceID>4</rasd:InstanceID>
- <rasd:ResourceSubType>vmware.usb.ehci</rasd:ResourceSubType>
- <rasd:ResourceType>23</rasd:ResourceType>
- <vmw:Config ovf:required="false" vmw:key="ehciEnabled" vmw:value="true"/>
- </Item>
- <Item>
- <rasd:Address>0</rasd:Address>
- <rasd:Description>SCSI Controller</rasd:Description>
- <rasd:ElementName>scsiController0</rasd:ElementName>
- <rasd:InstanceID>5</rasd:InstanceID>
- <rasd:ResourceSubType>lsilogicsas</rasd:ResourceSubType>
- <rasd:ResourceType>6</rasd:ResourceType>
- </Item>
- <Item ovf:required="false">
- <rasd:AutomaticAllocation>true</rasd:AutomaticAllocation>
- <rasd:ElementName>serial0</rasd:ElementName>
- <rasd:InstanceID>6</rasd:InstanceID>
- <rasd:ResourceType>21</rasd:ResourceType>
- <vmw:Config ovf:required="false" vmw:key="yieldOnPoll" vmw:value="false"/>
- </Item>
- <Item>
- <rasd:AddressOnParent>0</rasd:AddressOnParent>
- <rasd:ElementName>disk0</rasd:ElementName>
- <rasd:HostResource>ovf:/disk/vmdisk1</rasd:HostResource>
- <rasd:InstanceID>7</rasd:InstanceID>
- <rasd:Parent>5</rasd:Parent>
- <rasd:ResourceType>17</rasd:ResourceType>
- </Item>
- <Item>
- <rasd:AddressOnParent>1</rasd:AddressOnParent>
- <rasd:ElementName>disk1</rasd:ElementName>
- <rasd:HostResource>ovf:/disk/vmdisk2</rasd:HostResource>
- <rasd:InstanceID>8</rasd:InstanceID>
- <rasd:Parent>5</rasd:Parent>
- <rasd:ResourceType>17</rasd:ResourceType>
- </Item>
- <Item>
- <rasd:AddressOnParent>2</rasd:AddressOnParent>
- <rasd:AutomaticAllocation>true</rasd:AutomaticAllocation>
- <rasd:Connection>bridged</rasd:Connection>
- <rasd:Description>E1000 ethernet adapter on "bridged"</rasd:Description>
- <rasd:ElementName>ethernet0</rasd:ElementName>
- <rasd:InstanceID>9</rasd:InstanceID>
- <rasd:ResourceSubType>E1000</rasd:ResourceSubType>
- <rasd:ResourceType>10</rasd:ResourceType>
- <vmw:Config ovf:required="false" vmw:key="wakeOnLanEnabled" vmw:value="false"/>
- </Item>
- <Item ovf:required="false">
- <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
- <rasd:ElementName>sound</rasd:ElementName>
- <rasd:InstanceID>10</rasd:InstanceID>
- <rasd:ResourceSubType>vmware.soundcard.hdaudio</rasd:ResourceSubType>
- <rasd:ResourceType>1</rasd:ResourceType>
- </Item>
- <Item ovf:required="false">
- <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
- <rasd:ElementName>video</rasd:ElementName>
- <rasd:InstanceID>11</rasd:InstanceID>
- <rasd:ResourceType>24</rasd:ResourceType>
- <vmw:Config ovf:required="false" vmw:key="enable3DSupport" vmw:value="true"/>
- </Item>
- <Item ovf:required="false">
- <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
- <rasd:ElementName>vmci</rasd:ElementName>
- <rasd:InstanceID>12</rasd:InstanceID>
- <rasd:ResourceSubType>vmware.vmci</rasd:ResourceSubType>
- <rasd:ResourceType>1</rasd:ResourceType>
- </Item>
- <Item ovf:required="false">
- <rasd:AddressOnParent>1</rasd:AddressOnParent>
- <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
- <rasd:ElementName>cdrom0</rasd:ElementName>
- <rasd:InstanceID>13</rasd:InstanceID>
- <rasd:Parent>3</rasd:Parent>
- <rasd:ResourceType>15</rasd:ResourceType>
- </Item>
- <vmw:Config ovf:required="false" vmw:key="cpuHotAddEnabled" vmw:value="true"/>
- <vmw:Config ovf:required="false" vmw:key="memoryHotAddEnabled" vmw:value="true"/>
- <vmw:Config ovf:required="false" vmw:key="powerOpInfo.powerOffType" vmw:value="soft"/>
- <vmw:Config ovf:required="false" vmw:key="powerOpInfo.resetType" vmw:value="soft"/>
- <vmw:Config ovf:required="false" vmw:key="powerOpInfo.suspendType" vmw:value="soft"/>
- <vmw:Config ovf:required="false" vmw:key="tools.syncTimeWithHost" vmw:value="false"/>
- </VirtualHardwareSection>
- </VirtualSystem>
-</Envelope>
diff --git a/test/ovf_manifests/disk1.vmdk b/test/ovf_manifests/disk1.vmdk
deleted file mode 100644
index 8660602343a1a955f9bcf2e6beaed99316dd8167..0000000000000000000000000000000000000000
GIT binary patch
literal 0
HcmV?d00001
literal 65536
zcmeIvy>HV%7zbeUF`dK)46s<q9ua7|WuUkSgpg2EwX+*viPe0`HWAtSr(-ASlAWQ|
zbJF?F{+-YFKK_yYyn2=-$&0qXY<t)4ch@B8o_Fo_en^t%N%H0}e|H$~AF`0X3J-JR
zqY>z*Sy|tuS*)j3xo%d~*K!`iCRTO1T8@X|%lB-2b2}Q2@{gmi&a1d=x<|K%7N%9q
zn|Qfh$8m45TCV10Gb^W)c4ZxVA@tMpzfJp2S{y#m?iwzx)01@a>+{9rJna?j=ZAyM
zqPW{FznsOxiSi~-&+<BkewLkuP!u<VO<6U6^7*&xtNr=XaoRiS?V{gtwTMl%9Za|L
za#^%_7k)SjXE85!!SM7bspGUQewUqo+Glx@ubWtPwRL-yMO)CL`L7O2fB*pk1PBly
zK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pkPg~&a(=JbS1PBlyK!5-N0!ISx
zkM7+PAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+
z009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBly
zK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF
z5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk
d1PBlyK!5-N0t5&UAV7cs0RjXF5cr=0{{Ra(Wheju
diff --git a/test/ovf_manifests/disk2.vmdk b/test/ovf_manifests/disk2.vmdk
deleted file mode 100644
index c4634513348b392202898374f1c8d2d51d565b27..0000000000000000000000000000000000000000
GIT binary patch
literal 0
HcmV?d00001
literal 65536
zcmeIvy>HV%7zbeUF`dK)46s<q9#IIDI%J@v2!xPOQ?;`jUy0Rx$u<$$`lr`+(j_}X
ztLLQio&7tX?|uAp{Oj^rk|Zyh{<7(9yX&q=(mrq7>)ntf&y(cMe*SJh-aTX?eH9+&
z#z!O2Psc@dn~q~OEsJ%%D!&!;7&fu2iq&#-6u$l#k8ZBB&%=0f64qH6mv#5(X4k^B
zj9DEow(B_REmq6byr^fzbkeM>VlRY#diJkw-bwTQ2bx{O`BgehC%?a(PtMX_-hBS!
zV6(_?yX6<NxIa-=XX$BH#n2y*PeaJ_>%pcd>%ZCj`_<*{eCa6d4SQYmC$1K;F1Lf}
zc3v#=CU3(J2jMJcc^4cVA0$<rHpO?@@uyvu<=MK9Wm{XjSCKabJ(~aOpacjIAV7cs
z0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAn>#W-ahT}R7ZdS0RjXF5Fl_M
z@c!W5Edc@q2oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk
z1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs
z0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZ
zfB*pk1PBlyK!5-N0t5&UAV7cs0RjXF5FkK+009C72oNAZfB*pk1PBlyK!5-N0t5&U
eAV7cs0RjXF5FkK+009C72oNAZfB=F2DR2*l=VfOA
diff --git a/test/run_ovf_tests.pl b/test/run_ovf_tests.pl
deleted file mode 100755
index ff6c7863..00000000
--- a/test/run_ovf_tests.pl
+++ /dev/null
@@ -1,71 +0,0 @@
-#!/usr/bin/perl
-
-use strict;
-use warnings;
-use lib qw(..); # prepend .. to @INC so we use the local version of PVE packages
-
-use FindBin '$Bin';
-use PVE::QemuServer::OVF;
-use Test::More;
-
-use Data::Dumper;
-
-my $test_manifests = join ('/', $Bin, 'ovf_manifests');
-
-print "parsing ovfs\n";
-
-my $win2008 = eval { PVE::QemuServer::OVF::parse_ovf("$test_manifests/Win_2008_R2_two-disks.ovf") };
-if (my $err = $@) {
- fail('parse win2008');
- warn("error: $err\n");
-} else {
- ok('parse win2008');
-}
-my $win10 = eval { PVE::QemuServer::OVF::parse_ovf("$test_manifests/Win10-Liz.ovf") };
-if (my $err = $@) {
- fail('parse win10');
- warn("error: $err\n");
-} else {
- ok('parse win10');
-}
-my $win10noNs = eval { PVE::QemuServer::OVF::parse_ovf("$test_manifests/Win10-Liz_no_default_ns.ovf") };
-if (my $err = $@) {
- fail("parse win10 no default rasd NS");
- warn("error: $err\n");
-} else {
- ok('parse win10 no default rasd NS');
-}
-
-print "testing disks\n";
-
-is($win2008->{disks}->[0]->{disk_address}, 'scsi0', 'multidisk vm has the correct first disk controller');
-is($win2008->{disks}->[0]->{backing_file}, "$test_manifests/disk1.vmdk", 'multidisk vm has the correct first disk backing device');
-is($win2008->{disks}->[0]->{virtual_size}, 2048, 'multidisk vm has the correct first disk size');
-
-is($win2008->{disks}->[1]->{disk_address}, 'scsi1', 'multidisk vm has the correct second disk controller');
-is($win2008->{disks}->[1]->{backing_file}, "$test_manifests/disk2.vmdk", 'multidisk vm has the correct second disk backing device');
-is($win2008->{disks}->[1]->{virtual_size}, 2048, 'multidisk vm has the correct second disk size');
-
-is($win10->{disks}->[0]->{disk_address}, 'scsi0', 'single disk vm has the correct disk controller');
-is($win10->{disks}->[0]->{backing_file}, "$test_manifests/Win10-Liz-disk1.vmdk", 'single disk vm has the correct disk backing device');
-is($win10->{disks}->[0]->{virtual_size}, 2048, 'single disk vm has the correct size');
-
-is($win10noNs->{disks}->[0]->{disk_address}, 'scsi0', 'single disk vm (no default rasd NS) has the correct disk controller');
-is($win10noNs->{disks}->[0]->{backing_file}, "$test_manifests/Win10-Liz-disk1.vmdk", 'single disk vm (no default rasd NS) has the correct disk backing device');
-is($win10noNs->{disks}->[0]->{virtual_size}, 2048, 'single disk vm (no default rasd NS) has the correct size');
-
-print "\ntesting vm.conf extraction\n";
-
-is($win2008->{qm}->{name}, 'Win2008-R2x64', 'win2008 VM name is correct');
-is($win2008->{qm}->{memory}, '2048', 'win2008 VM memory is correct');
-is($win2008->{qm}->{cores}, '1', 'win2008 VM cores are correct');
-
-is($win10->{qm}->{name}, 'Win10-Liz', 'win10 VM name is correct');
-is($win10->{qm}->{memory}, '6144', 'win10 VM memory is correct');
-is($win10->{qm}->{cores}, '4', 'win10 VM cores are correct');
-
-is($win10noNs->{qm}->{name}, 'Win10-Liz', 'win10 VM (no default rasd NS) name is correct');
-is($win10noNs->{qm}->{memory}, '6144', 'win10 VM (no default rasd NS) memory is correct');
-is($win10noNs->{qm}->{cores}, '4', 'win10 VM (no default rasd NS) cores are correct');
-
-done_testing();
--
2.39.5
* Re: [pve-devel] [PATCH qemu-server v6 3/6] use OVF from Storage
2024-11-15 15:17 ` [pve-devel] [PATCH qemu-server v6 3/6] use OVF from Storage Dominik Csapak
@ 2024-11-17 17:42 ` Thomas Lamprecht
0 siblings, 0 replies; 68+ messages in thread
From: Thomas Lamprecht @ 2024-11-17 17:42 UTC (permalink / raw)
To: Proxmox VE development discussion, Dominik Csapak
On 15.11.24 at 16:17, Dominik Csapak wrote:
> @@ -28,13 +28,13 @@ use PVE::Tools qw(extract_param file_get_contents);
>
> use PVE::API2::Qemu::Agent;
> use PVE::API2::Qemu;
> +use PVE::GuestImport::OVF;
nit: we normally group into three:
- perl modules not from us
- our modules from other packages
- modules from this package
Seems like the added entry should go to the other group above now.
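For illustration only, a sketch of that grouping convention applied to the use block
touched by this hunk — which group each module belongs to is an assumption here, not
something stated in the patch, and the CPAN example is a placeholder:
# perl modules not from us (CPAN/core) would go first, e.g.:
use URI::Escape;    # illustrative placeholder only
# our modules from other packages
use PVE::GuestImport::OVF;
use PVE::Tools qw(extract_param file_get_contents);
# modules from this package
use PVE::API2::Qemu;
use PVE::QemuConfig;
use PVE::QemuServer::Drive;
use PVE::QemuServer;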
> use PVE::QemuConfig;
> use PVE::QemuServer::Drive;
> use PVE::QemuServer::Helpers;
> use PVE::QemuServer::Agent qw(agent_available);
> use PVE::QemuServer::ImportDisk;
> use PVE::QemuServer::Monitor qw(mon_cmd);
> -use PVE::QemuServer::OVF;
> use PVE::QemuServer::QMPHelpers;
> use PVE::QemuServer;
* [pve-devel] [PATCH qemu-server v6 4/6] api: create: implement extracting disks when needed for import-from
2024-11-15 15:17 [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages Dominik Csapak
` (14 preceding siblings ...)
2024-11-15 15:17 ` [pve-devel] [PATCH qemu-server v6 3/6] use OVF from Storage Dominik Csapak
@ 2024-11-15 15:17 ` Dominik Csapak
2024-11-18 13:31 ` Fiona Ebner
2024-11-15 15:17 ` [pve-devel] [PATCH qemu-server v6 5/6] api: create: add 'import-extraction-storage' parameter Dominik Csapak
` (14 subsequent siblings)
30 siblings, 1 reply; 68+ messages in thread
From: Dominik Csapak @ 2024-11-15 15:17 UTC (permalink / raw)
To: pve-devel
when 'import-from' contains a disk image that needs extraction
(currently only from an 'ova' archive), do that in 'create_disks'
and overwrite the '$source' volid.
Collect the names into a 'delete_sources' list that we use later
to clean them up again (either when we're finished with importing or in an
error case).
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
changes from v5:
* changed the pve-managed check to be correct
PVE/API2/Qemu.pm | 51 +++++++++++++++++++++++++++++++++------
PVE/QemuServer.pm | 12 +++++++++
PVE/QemuServer/Helpers.pm | 5 ++++
3 files changed, 60 insertions(+), 8 deletions(-)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index b9c63af8..1aa42585 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -24,6 +24,7 @@ use PVE::JSONSchema qw(get_standard_option);
use PVE::RESTHandler;
use PVE::ReplicationConfig;
use PVE::GuestHelpers qw(assert_tag_permissions);
+use PVE::GuestImport;
use PVE::QemuConfig;
use PVE::QemuServer;
use PVE::QemuServer::Cloudinit;
@@ -163,10 +164,20 @@ my $check_storage_access = sub {
if (my $src_image = $drive->{'import-from'}) {
my $src_vmid;
- if (PVE::Storage::parse_volume_id($src_image, 1)) { # PVE-managed volume
- (my $vtype, undef, $src_vmid) = PVE::Storage::parse_volname($storecfg, $src_image);
- raise_param_exc({ $ds => "$src_image has wrong type '$vtype' - not an image" })
- if $vtype ne 'images';
+ my ($storeid, $volname) = PVE::Storage::parse_volume_id($src_image, 1);
+ if ($storeid) { # PVE-managed volume
+ my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
+ my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
+ (my $vtype, undef, $src_vmid, undef, undef, undef, my $fmt) = $plugin->parse_volname($volname);
+
+ raise_param_exc({ $ds => "$src_image has wrong type '$vtype' - needs to be 'images' or 'import'" })
+ if $vtype ne 'images' && $vtype ne 'import';
+
+ if (PVE::QemuServer::Helpers::needs_extraction($vtype, $fmt)) {
+ raise_param_exc({ $ds => "$src_image is not on an storage with 'images' content type."})
+ if !$scfg->{content}->{images};
+ $rpcenv->check($authuser, "/storage/$storeid", ['Datastore.AllocateSpace']);
+ }
}
if ($src_vmid) { # might be actively used by VM and will be copied via clone_disk()
@@ -416,6 +427,23 @@ my sub create_disks : prototype($$$$$$$$$$) {
my ($source_storage, $source_volid) = PVE::Storage::parse_volume_id($source, 1);
if ($source_storage) { # PVE-managed volume
+ my ($vtype, undef, undef, undef, undef, undef, $fmt)
+ = PVE::Storage::parse_volname($storecfg, $source);
+ my $needs_extraction = PVE::QemuServer::Helpers::needs_extraction($vtype, $fmt);
+ if ($needs_extraction) {
+ print "extracting $source\n";
+ my $extracted_volid
+ = PVE::GuestImport::extract_disk_from_import_file($source, $vmid);
+ print "finished extracting to $extracted_volid\n";
+ push @$vollist, $extracted_volid;
+ $source = $extracted_volid;
+
+ my (undef, undef, undef, $parent)
+ = PVE::Storage::volume_size_info($storecfg, $source);
+ die "importing from extracted images with backing file ($parent) not allowed\n"
+ if $parent;
+ }
+
if ($live_import && $ds ne 'efidisk0') {
my $path = PVE::Storage::path($storecfg, $source)
or die "failed to get a path for '$source'\n";
@@ -424,9 +452,11 @@ my sub create_disks : prototype($$$$$$$$$$) {
die "could not get file size of $source\n" if !$size;
$live_import_mapping->{$ds} = {
- path => $source,
+ path => $path,
format => $source_format,
};
+ $live_import_mapping->{$ds}->{'delete-after-finish'} = $source
+ if $needs_extraction;
} else {
my $dest_info = {
vmid => $vmid,
@@ -438,8 +468,14 @@ my sub create_disks : prototype($$$$$$$$$$) {
$dest_info->{efisize} = PVE::QemuServer::get_efivars_size($conf, $disk)
if $ds eq 'efidisk0';
- ($dst_volid, $size) = eval {
- $import_from_volid->($storecfg, $source, $dest_info, $vollist);
+ eval {
+ ($dst_volid, $size)
+ = $import_from_volid->($storecfg, $source, $dest_info, $vollist);
+
+ # remove extracted volumes after importing
+ PVE::Storage::vdisk_free($storecfg, $source) if $needs_extraction;
+ print "cleaned up extracted image $source\n";
+ @$vollist = grep { $_ ne $source } @$vollist;
};
die "cannot import from '$source' - $@" if $@;
}
@@ -1964,7 +2000,6 @@ my $update_vm_api = sub {
assert_scsi_feature_compatibility($opt, $conf, $storecfg, $param->{$opt})
if $opt =~ m/^scsi\d+$/;
-
my (undef, $created_opts) = create_disks(
$rpcenv,
$authuser,
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index cb1e0b82..706343a4 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -7400,6 +7400,7 @@ sub live_import_from_files {
my ($mapping, $vmid, $conf, $restore_options) = @_;
my $live_restore_backing = {};
+ my $sources_to_remove = [];
for my $dev (keys %$mapping) {
die "disk not support for live-restoring: '$dev'\n"
if !is_valid_drivename($dev) || $dev =~ /^(?:efidisk|tpmstate)/;
@@ -7420,6 +7421,9 @@ sub live_import_from_files {
. ",read-only=on"
. ",file.driver=file,file.filename=$path"
};
+
+ my $source_volid = $info->{'delete-after-finish'};
+ push $sources_to_remove->@*, $source_volid if defined($source_volid);
};
my $storecfg = PVE::Storage::config();
@@ -7464,6 +7468,14 @@ sub live_import_from_files {
my $err = $@;
+ for my $volid ($sources_to_remove->@*) {
+ eval {
+ PVE::Storage::vdisk_free($storecfg, $volid);
+ print "cleaned up extracted image $volid\n";
+ };
+ warn "An error occurred while cleaning up source images: $@\n" if $@;
+ }
+
if ($err) {
warn "An error occurred during live-restore: $err\n";
_do_vm_stop($storecfg, $vmid, 1, 1, 10, 0, 1);
diff --git a/PVE/QemuServer/Helpers.pm b/PVE/QemuServer/Helpers.pm
index 0afb6317..15e2496c 100644
--- a/PVE/QemuServer/Helpers.pm
+++ b/PVE/QemuServer/Helpers.pm
@@ -225,4 +225,9 @@ sub windows_version {
return $winversion;
}
+sub needs_extraction {
+ my ($vtype, $fmt) = @_;
+ return $vtype eq 'import' && $fmt =~ m/^ova\+(.*)$/;
+}
+
1;
--
2.39.5
* Re: [pve-devel] [PATCH qemu-server v6 4/6] api: create: implement extracting disks when needed for import-from
2024-11-15 15:17 ` [pve-devel] [PATCH qemu-server v6 4/6] api: create: implement extracting disks when needed for import-from Dominik Csapak
@ 2024-11-18 13:31 ` Fiona Ebner
2024-11-18 13:36 ` Dominik Csapak
0 siblings, 1 reply; 68+ messages in thread
From: Fiona Ebner @ 2024-11-18 13:31 UTC (permalink / raw)
To: Proxmox VE development discussion, Dominik Csapak
On 15.11.24 at 16:17, Dominik Csapak wrote:
> @@ -416,6 +427,23 @@ my sub create_disks : prototype($$$$$$$$$$) {
> my ($source_storage, $source_volid) = PVE::Storage::parse_volume_id($source, 1);
>
> if ($source_storage) { # PVE-managed volume
> + my ($vtype, undef, undef, undef, undef, undef, $fmt)
> + = PVE::Storage::parse_volname($storecfg, $source);
> + my $needs_extraction = PVE::QemuServer::Helpers::needs_extraction($vtype, $fmt);
> + if ($needs_extraction) {
> + print "extracting $source\n";
> + my $extracted_volid
> + = PVE::GuestImport::extract_disk_from_import_file($source, $vmid);
> + print "finished extracting to $extracted_volid\n";
> + push @$vollist, $extracted_volid;
> + $source = $extracted_volid;
> +
> + my (undef, undef, undef, $parent)
> + = PVE::Storage::volume_size_info($storecfg, $source);
> + die "importing from extracted images with backing file ($parent) not allowed\n"
> + if $parent;
> + }
> +
> if ($live_import && $ds ne 'efidisk0') {
> my $path = PVE::Storage::path($storecfg, $source)
> or die "failed to get a path for '$source'\n";
Below here is a $source = $path
> @@ -424,9 +452,11 @@ my sub create_disks : prototype($$$$$$$$$$) {
>
> die "could not get file size of $source\n" if !$size;
> $live_import_mapping->{$ds} = {
> - path => $source,
> + path => $path,
So this doesn't change anything. It's nicer to read though :P
> format => $source_format,
> };
> + $live_import_mapping->{$ds}->{'delete-after-finish'} = $source
But here you already have $path assigned to $source rather than the
original volume ID. Doesn't vdisk_free() fail later then?
> + if $needs_extraction;
> } else {
> my $dest_info = {
> vmid => $vmid,
> @@ -438,8 +468,14 @@ my sub create_disks : prototype($$$$$$$$$$) {
> $dest_info->{efisize} = PVE::QemuServer::get_efivars_size($conf, $disk)
> if $ds eq 'efidisk0';
>
> - ($dst_volid, $size) = eval {
> - $import_from_volid->($storecfg, $source, $dest_info, $vollist);
> + eval {
> + ($dst_volid, $size)
> + = $import_from_volid->($storecfg, $source, $dest_info, $vollist);
> +
> + # remove extracted volumes after importing
> + PVE::Storage::vdisk_free($storecfg, $source) if $needs_extraction;
> + print "cleaned up extracted image $source\n";
> + @$vollist = grep { $_ ne $source } @$vollist;
> };
> die "cannot import from '$source' - $@" if $@;
> }
> @@ -1964,7 +2000,6 @@ my $update_vm_api = sub {
>
> assert_scsi_feature_compatibility($opt, $conf, $storecfg, $param->{$opt})
> if $opt =~ m/^scsi\d+$/;
> -
> my (undef, $created_opts) = create_disks(
> $rpcenv,
> $authuser,
Unrelated hunk should not be here
* Re: [pve-devel] [PATCH qemu-server v6 4/6] api: create: implement extracting disks when needed for import-from
2024-11-18 13:31 ` Fiona Ebner
@ 2024-11-18 13:36 ` Dominik Csapak
0 siblings, 0 replies; 68+ messages in thread
From: Dominik Csapak @ 2024-11-18 13:36 UTC (permalink / raw)
To: Fiona Ebner, Proxmox VE development discussion
On 11/18/24 14:31, Fiona Ebner wrote:
> On 15.11.24 at 16:17, Dominik Csapak wrote:
>> @@ -416,6 +427,23 @@ my sub create_disks : prototype($$$$$$$$$$) {
>> my ($source_storage, $source_volid) = PVE::Storage::parse_volume_id($source, 1);
>>
>> if ($source_storage) { # PVE-managed volume
>> + my ($vtype, undef, undef, undef, undef, undef, $fmt)
>> + = PVE::Storage::parse_volname($storecfg, $source);
>> + my $needs_extraction = PVE::QemuServer::Helpers::needs_extraction($vtype, $fmt);
>> + if ($needs_extraction) {
>> + print "extracting $source\n";
>> + my $extracted_volid
>> + = PVE::GuestImport::extract_disk_from_import_file($source, $vmid);
>> + print "finished extracting to $extracted_volid\n";
>> + push @$vollist, $extracted_volid;
>> + $source = $extracted_volid;
>> +
>> + my (undef, undef, undef, $parent)
>> + = PVE::Storage::volume_size_info($storecfg, $source);
>> + die "importing from extracted images with backing file ($parent) not allowed\n"
>> + if $parent;
>> + }
>> +
>> if ($live_import && $ds ne 'efidisk0') {
>> my $path = PVE::Storage::path($storecfg, $source)
>> or die "failed to get a path for '$source'\n";
>
> Below here is a $source = $path
>
>> @@ -424,9 +452,11 @@ my sub create_disks : prototype($$$$$$$$$$) {
>>
>> die "could not get file size of $source\n" if !$size;
>> $live_import_mapping->{$ds} = {
>> - path => $source,
>> + path => $path,
>
> So this doesn't change anything. It's nicer to read though :P
>
>> format => $source_format,
>> };
>> + $live_import_mapping->{$ds}->{'delete-after-finish'} = $source
>
> But here you already have $path assigned to $source rather than the
> original volume ID. Doesn't vdisk_free() fail later then?
yep, I noticed that 5 minutes ago ;)
I'll change it so that $source does not get overwritten and we save the volid
to delete instead of the path.
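Roughly sketched — not the actual follow-up patch, just the idea described above,
reusing the names from the quoted hunk:
my $path = PVE::Storage::path($storecfg, $source)
    or die "failed to get a path for '$source'\n";

($size, my $source_format) = PVE::Storage::file_size_info($path);
die "could not get file size of $path\n" if !$size;

$live_import_mapping->{$ds} = {
    path => $path,
    format => $source_format,
};
# keep the volume ID (not the path) so the later vdisk_free() call still works
$live_import_mapping->{$ds}->{'delete-after-finish'} = $source
    if $needs_extraction;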
>
>> + if $needs_extraction;
>> } else {
>> my $dest_info = {
>> vmid => $vmid,
>> @@ -438,8 +468,14 @@ my sub create_disks : prototype($$$$$$$$$$) {
>> $dest_info->{efisize} = PVE::QemuServer::get_efivars_size($conf, $disk)
>> if $ds eq 'efidisk0';
>>
>> - ($dst_volid, $size) = eval {
>> - $import_from_volid->($storecfg, $source, $dest_info, $vollist);
>> + eval {
>> + ($dst_volid, $size)
>> + = $import_from_volid->($storecfg, $source, $dest_info, $vollist);
>> +
>> + # remove extracted volumes after importing
>> + PVE::Storage::vdisk_free($storecfg, $source) if $needs_extraction;
>> + print "cleaned up extracted image $source\n";
>> + @$vollist = grep { $_ ne $source } @$vollist;
>> };
>> die "cannot import from '$source' - $@" if $@;
>> }
>> @@ -1964,7 +2000,6 @@ my $update_vm_api = sub {
>>
>> assert_scsi_feature_compatibility($opt, $conf, $storecfg, $param->{$opt})
>> if $opt =~ m/^scsi\d+$/;
>> -
>> my (undef, $created_opts) = create_disks(
>> $rpcenv,
>> $authuser,
>
> Unrelated hunk should not be here
>
* [pve-devel] [PATCH qemu-server v6 5/6] api: create: add 'import-extraction-storage' parameter
2024-11-15 15:17 [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages Dominik Csapak
` (15 preceding siblings ...)
2024-11-15 15:17 ` [pve-devel] [PATCH qemu-server v6 4/6] api: create: implement extracting disks when needed for import-from Dominik Csapak
@ 2024-11-15 15:17 ` Dominik Csapak
2024-11-17 16:13 ` Thomas Lamprecht
2024-11-15 15:17 ` [pve-devel] [PATCH qemu-server v6 6/6] api: check untrusted image files for import content type Dominik Csapak
` (13 subsequent siblings)
30 siblings, 1 reply; 68+ messages in thread
From: Dominik Csapak @ 2024-11-15 15:17 UTC (permalink / raw)
To: pve-devel
this is to override the target extraction storage for the optional disk
extraction for 'import-from'. This way, if the storage does not
support the content type 'images', one can give an alternative one.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
PVE/API2/Qemu.pm | 46 +++++++++++++++++++++++++++++++++++++---------
1 file changed, 37 insertions(+), 9 deletions(-)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 1aa42585..58aaabbe 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -132,7 +132,7 @@ my $check_drive_param = sub {
};
my $check_storage_access = sub {
- my ($rpcenv, $authuser, $storecfg, $vmid, $settings, $default_storage) = @_;
+ my ($rpcenv, $authuser, $storecfg, $vmid, $settings, $default_storage, $extraction_storage) = @_;
$foreach_volume_with_alloc->($settings, sub {
my ($ds, $drive) = @_;
@@ -174,9 +174,18 @@ my $check_storage_access = sub {
if $vtype ne 'images' && $vtype ne 'import';
if (PVE::QemuServer::Helpers::needs_extraction($vtype, $fmt)) {
- raise_param_exc({ $ds => "$src_image is not on an storage with 'images' content type."})
- if !$scfg->{content}->{images};
- $rpcenv->check($authuser, "/storage/$storeid", ['Datastore.AllocateSpace']);
+ if (defined($extraction_storage)) {
+ my $extraction_scfg = PVE::Storage::storage_config($storecfg, $extraction_storage);
+ raise_param_exc({ 'import-extraction-storage' => "$extraction_storage does not support"
+ ." 'images' content type or is not file based."})
+ if !$extraction_scfg->{content}->{images} || !$extraction_scfg->{path};
+ $rpcenv->check($authuser, "/storage/$extraction_storage", ['Datastore.AllocateSpace']);
+ } else {
+ raise_param_exc({ $ds => "$src_image is not on an storage with 'images'"
+ ." content type and no 'import-extraction-storage' was given."})
+ if !$scfg->{content}->{images};
+ $rpcenv->check($authuser, "/storage/$storeid", ['Datastore.AllocateSpace']);
+ }
}
}
@@ -349,7 +358,7 @@ my sub prohibit_tpm_version_change {
# Note: $pool is only needed when creating a VM, because pool permissions
# are automatically inherited if VM already exists inside a pool.
-my sub create_disks : prototype($$$$$$$$$$) {
+my sub create_disks : prototype($$$$$$$$$$$) {
my (
$rpcenv,
$authuser,
@@ -361,6 +370,7 @@ my sub create_disks : prototype($$$$$$$$$$) {
$settings,
$default_storage,
$is_live_import,
+ $extraction_storage,
) = @_;
my $vollist = [];
@@ -432,8 +442,8 @@ my sub create_disks : prototype($$$$$$$$$$) {
my $needs_extraction = PVE::QemuServer::Helpers::needs_extraction($vtype, $fmt);
if ($needs_extraction) {
print "extracting $source\n";
- my $extracted_volid
- = PVE::GuestImport::extract_disk_from_import_file($source, $vmid);
+ my $extracted_volid = PVE::GuestImport::extract_disk_from_import_file(
+ $source, $vmid, $extraction_storage);
print "finished extracting to $extracted_volid\n";
push @$vollist, $extracted_volid;
$source = $extracted_volid;
@@ -973,6 +983,12 @@ __PACKAGE__->register_method({
default => 0,
description => "Start VM after it was created successfully.",
},
+ 'import-extraction-storage' => get_standard_option('pve-storage-id', {
+ description => "Storage for temporarily extracted images 'import-from' image"
+ ." files (default: import source storage)",
+ optional => 1,
+ completion => \&PVE::QemuServer::complete_storage,
+ }),
},
1, # with_disk_alloc
),
@@ -999,6 +1015,7 @@ __PACKAGE__->register_method({
my $storage = extract_param($param, 'storage');
my $unique = extract_param($param, 'unique');
my $live_restore = extract_param($param, 'live-restore');
+ my $extraction_storage = extract_param($param, 'import-extraction-storage');
if (defined(my $ssh_keys = $param->{sshkeys})) {
$ssh_keys = URI::Escape::uri_unescape($ssh_keys);
@@ -1058,7 +1075,8 @@ __PACKAGE__->register_method({
if (scalar(keys $param->%*) > 0) {
&$resolve_cdrom_alias($param);
- &$check_storage_access($rpcenv, $authuser, $storecfg, $vmid, $param, $storage);
+ &$check_storage_access(
+ $rpcenv, $authuser, $storecfg, $vmid, $param, $storage, $extraction_storage);
&$check_vm_modify_config_perm($rpcenv, $authuser, $vmid, $pool, [ keys %$param]);
@@ -1173,6 +1191,7 @@ __PACKAGE__->register_method({
$param,
$storage,
$live_restore,
+ $extraction_storage
);
$conf->{$_} = $created_opts->{$_} for keys $created_opts->%*;
@@ -1715,6 +1734,8 @@ my $update_vm_api = sub {
my $skip_cloud_init = extract_param($param, 'skip_cloud_init');
+ my $extraction_storage = extract_param($param, 'import-extraction-storage');
+
my @paramarr = (); # used for log message
foreach my $key (sort keys %$param) {
my $value = $key eq 'cipassword' ? '<hidden>' : $param->{$key};
@@ -1828,7 +1849,7 @@ my $update_vm_api = sub {
&$check_vm_modify_config_perm($rpcenv, $authuser, $vmid, undef, [keys %$param]);
- &$check_storage_access($rpcenv, $authuser, $storecfg, $vmid, $param);
+ &$check_storage_access($rpcenv, $authuser, $storecfg, $vmid, $param, undef, $extraction_storage);
PVE::QemuServer::check_bridge_access($rpcenv, $authuser, $param);
@@ -2011,6 +2032,7 @@ my $update_vm_api = sub {
{$opt => $param->{$opt}},
undef,
undef,
+ $extraction_storage,
);
$conf->{pending}->{$_} = $created_opts->{$_} for keys $created_opts->%*;
@@ -2213,6 +2235,12 @@ __PACKAGE__->register_method({
maximum => 30,
optional => 1,
},
+ 'import-extraction-storage' => get_standard_option('pve-storage-id', {
+ description => "Storage for temporarily extracted images 'import-from' image"
+ ." files (default: import source storage)",
+ optional => 1,
+ completion => \&PVE::QemuServer::complete_storage,
+ }),
},
1, # with_disk_alloc
),
--
2.39.5
* Re: [pve-devel] [PATCH qemu-server v6 5/6] api: create: add 'import-extraction-storage' parameter
2024-11-15 15:17 ` [pve-devel] [PATCH qemu-server v6 5/6] api: create: add 'import-extraction-storage' parameter Dominik Csapak
@ 2024-11-17 16:13 ` Thomas Lamprecht
0 siblings, 0 replies; 68+ messages in thread
From: Thomas Lamprecht @ 2024-11-17 16:13 UTC (permalink / raw)
To: Proxmox VE development discussion, Dominik Csapak
On 15.11.24 at 16:17, Dominik Csapak wrote:
> this is to override the target extraction storage for the optional disk
> extraction for 'import-from'. This way, if the storage does not
> support the content type 'images', one can give an alternative one.
>
looks OK to me besides some styling/naming things I commented inline
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
> PVE/API2/Qemu.pm | 46 +++++++++++++++++++++++++++++++++++++---------
> 1 file changed, 37 insertions(+), 9 deletions(-)
>
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index 1aa42585..58aaabbe 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -174,9 +174,18 @@ my $check_storage_access = sub {
> if $vtype ne 'images' && $vtype ne 'import';
>
> if (PVE::QemuServer::Helpers::needs_extraction($vtype, $fmt)) {
> - raise_param_exc({ $ds => "$src_image is not on an storage with 'images' content type."})
> - if !$scfg->{content}->{images};
> - $rpcenv->check($authuser, "/storage/$storeid", ['Datastore.AllocateSpace']);
> + if (defined($extraction_storage)) {
> + my $extraction_scfg = PVE::Storage::storage_config($storecfg, $extraction_storage);
> + raise_param_exc({ 'import-extraction-storage' => "$extraction_storage does not support"
> + ." 'images' content type or is not file based."})
wrapping two things at once (the string and the object/method call parenthesis) is not
ideal readability-wise.
> + if !$extraction_scfg->{content}->{images} || !$extraction_scfg->{path};
please avoid post-ifs on expressions that already wrap multiple lines.
Above could look something like:
if (!$extraction_scfg->{content}->{images} || !$extraction_scfg->{path}) {
raise_param_exc({
'import-extraction-storage' => "$extraction_storage does not support"
." 'images' content type or is not file based.",
});
}
> + $rpcenv->check($authuser, "/storage/$extraction_storage", ['Datastore.AllocateSpace']);
> + } else {
> + raise_param_exc({ $ds => "$src_image is not on an storage with 'images'"
> + ." content type and no 'import-extraction-storage' was given."})
> + if !$scfg->{content}->{images};
same here, in general you could unify the code paths more, they're basically the same,
they just need upfront
my $extraction_scfg = defined($extraction_storage)
? PVE::Storage::storage_config($storecfg, $extraction_storage)
: $scfg;
and
$rpcenv->check($authuser, "/storage/" . ($extraction_storage // $storeid), ['Datastore.AllocateSpace']);
IMO more is gained by making it easy to see that the same things are checked
than by keeping explicit branches just to make loading the extra storage config
a bit more explicit.
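Assembled, the unified variant could then read roughly as follows — only a sketch of
the suggestion above, glossing over the slightly different error messages that the two
branches currently raise:
if (PVE::QemuServer::Helpers::needs_extraction($vtype, $fmt)) {
    # fall back to the source storage's config if no extraction storage was given
    my $extraction_scfg = defined($extraction_storage)
        ? PVE::Storage::storage_config($storecfg, $extraction_storage)
        : $scfg;

    if (!$extraction_scfg->{content}->{images} || !$extraction_scfg->{path}) {
        raise_param_exc({
            'import-extraction-storage' => "storage does not support 'images'"
                ." content type or is not file based.",
        });
    }

    $rpcenv->check(
        $authuser, "/storage/" . ($extraction_storage // $storeid), ['Datastore.AllocateSpace']);
}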
> + $rpcenv->check($authuser, "/storage/$storeid", ['Datastore.AllocateSpace']);
> + }
> }
> }
>
> @@ -973,6 +983,12 @@ __PACKAGE__->register_method({
> default => 0,
> description => "Start VM after it was created successfully.",
> },
> + 'import-extraction-storage' => get_standard_option('pve-storage-id', {
Would prefer:
import-working-storage
> + description => "Storage for temporarily extracted images 'import-from' image"
"images 'import-from' image files" seem like it misses some words in between.
> + ." files (default: import source storage)",
btw. why not prefer the target storage if applicable, i.e. if it's a file-based storage,
and only fall back to the import one if not? We can change that later though.
In any case it might be nice to mention that the storage needs to be a file-based
one, I think.
Maybe something like:
"A file-based storage with 'images' content-type enabled, into which images are extracted during import as an intermediate step for further processing. Defaults to ..."
> + optional => 1,
> + completion => \&PVE::QemuServer::complete_storage,
> + }),
> },
> 1, # with_disk_alloc
> ),
> @@ -2213,6 +2235,12 @@ __PACKAGE__->register_method({
> maximum => 30,
> optional => 1,
> },
> + 'import-extraction-storage' => get_standard_option('pve-storage-id', {
> + description => "Storage for temporarily extracted images 'import-from' image"
> + ." files (default: import source storage)",
same here as above
> + optional => 1,
> + completion => \&PVE::QemuServer::complete_storage,
> + }),
> },
> 1, # with_disk_alloc
> ),
* [pve-devel] [PATCH qemu-server v6 6/6] api: check untrusted image files for import content type
2024-11-15 15:17 [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages Dominik Csapak
` (16 preceding siblings ...)
2024-11-15 15:17 ` [pve-devel] [PATCH qemu-server v6 5/6] api: create: add 'import-extraction-storage' parameter Dominik Csapak
@ 2024-11-15 15:17 ` Dominik Csapak
2024-11-18 14:48 ` Fiona Ebner
2024-11-15 15:17 ` [pve-devel] [PATCH manager v6 1/9] ui: fix special 'import' icon for non-esxi storages Dominik Csapak
` (12 subsequent siblings)
30 siblings, 1 reply; 68+ messages in thread
From: Dominik Csapak @ 2024-11-15 15:17 UTC (permalink / raw)
To: pve-devel
check to-be-imported files for external references if they are of
content type 'import'.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
new in v6
PVE/API2/Qemu.pm | 11 ++++++++++-
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 58aaabbe..cbbd1e36 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -440,6 +440,7 @@ my sub create_disks : prototype($$$$$$$$$$$) {
my ($vtype, undef, undef, undef, undef, undef, $fmt)
= PVE::Storage::parse_volname($storecfg, $source);
my $needs_extraction = PVE::QemuServer::Helpers::needs_extraction($vtype, $fmt);
+ my $untrusted = $vtype eq 'import' ? 1 : 0;
if ($needs_extraction) {
print "extracting $source\n";
my $extracted_volid = PVE::GuestImport::extract_disk_from_import_file(
@@ -458,7 +459,8 @@ my sub create_disks : prototype($$$$$$$$$$$) {
my $path = PVE::Storage::path($storecfg, $source)
or die "failed to get a path for '$source'\n";
$source = $path;
- ($size, my $source_format) = PVE::Storage::file_size_info($source);
+ # check potentially untrusted image file for import vtype
+ ($size, my $source_format) = PVE::Storage::file_size_info($source, undef, $untrusted);
die "could not get file size of $source\n" if !$size;
$live_import_mapping->{$ds} = {
@@ -468,6 +470,13 @@ my sub create_disks : prototype($$$$$$$$$$$) {
$live_import_mapping->{$ds}->{'delete-after-finish'} = $source
if $needs_extraction;
} else {
+ # check potentially untrusted image file for import vtype
+ if ($untrusted) {
+ my $scfg = PVE::Storage::storage_config($storecfg, $source_storage);
+ my $path = PVE::Storage::path($storecfg, $source);
+ PVE::Storage::file_size_info($path, undef, 1);
+ }
+
my $dest_info = {
vmid => $vmid,
drivename => $ds,
--
2.39.5
* Re: [pve-devel] [PATCH qemu-server v6 6/6] api: check untrusted image files for import content type
2024-11-15 15:17 ` [pve-devel] [PATCH qemu-server v6 6/6] api: check untrusted image files for import content type Dominik Csapak
@ 2024-11-18 14:48 ` Fiona Ebner
0 siblings, 0 replies; 68+ messages in thread
From: Fiona Ebner @ 2024-11-18 14:48 UTC (permalink / raw)
To: Proxmox VE development discussion, Dominik Csapak
On 15.11.24 at 16:17, Dominik Csapak wrote:
> @@ -468,6 +470,13 @@ my sub create_disks : prototype($$$$$$$$$$$) {
> $live_import_mapping->{$ds}->{'delete-after-finish'} = $source
> if $needs_extraction;
> } else {
> + # check potentially untrusted image file for import vtype
> + if ($untrusted) {
> + my $scfg = PVE::Storage::storage_config($storecfg, $source_storage);
$scfg is unused/not required
> + my $path = PVE::Storage::path($storecfg, $source);
> + PVE::Storage::file_size_info($path, undef, 1);
> + }
> +
> my $dest_info = {
> vmid => $vmid,
> drivename => $ds,
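With the unused variable dropped, the hunk would presumably shrink to something like
(a sketch of the suggested cleanup, not the actual follow-up):
# check potentially untrusted image file for import vtype
if ($untrusted) {
    my $path = PVE::Storage::path($storecfg, $source);
    PVE::Storage::file_size_info($path, undef, 1);
}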
* [pve-devel] [PATCH manager v6 1/9] ui: fix special 'import' icon for non-esxi storages
2024-11-15 15:17 [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages Dominik Csapak
` (17 preceding siblings ...)
2024-11-15 15:17 ` [pve-devel] [PATCH qemu-server v6 6/6] api: check untrusted image files for import content type Dominik Csapak
@ 2024-11-15 15:17 ` Dominik Csapak
2024-11-17 16:21 ` [pve-devel] applied: " Thomas Lamprecht
2024-11-15 15:17 ` [pve-devel] [PATCH manager v6 2/9] ui: guest import: add ova-needs-extracting warning text Dominik Csapak
` (11 subsequent siblings)
30 siblings, 1 reply; 68+ messages in thread
From: Dominik Csapak @ 2024-11-15 15:17 UTC (permalink / raw)
To: pve-devel
we only want to show that icon in the tree when the storage is solely
used for importing, not when it's just one of several content types.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
www/manager6/Utils.js | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index da8870a6..3691d66a 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -1246,7 +1246,7 @@ Ext.define('PVE.Utils', {
// templates
objType = 'template';
status = type;
- } else if (type === 'storage' && record.content.indexOf('import') !== -1) {
+ } else if (type === 'storage' && record.content === 'import') {
return 'fa fa-cloud-download';
} else {
// everything else
--
2.39.5
* [pve-devel] applied: [PATCH manager v6 1/9] ui: fix special 'import' icon for non-esxi storages
2024-11-15 15:17 ` [pve-devel] [PATCH manager v6 1/9] ui: fix special 'import' icon for non-esxi storages Dominik Csapak
@ 2024-11-17 16:21 ` Thomas Lamprecht
2024-11-18 8:47 ` Dominik Csapak
0 siblings, 1 reply; 68+ messages in thread
From: Thomas Lamprecht @ 2024-11-17 16:21 UTC (permalink / raw)
To: Proxmox VE development discussion, Dominik Csapak
On 15.11.24 at 16:17, Dominik Csapak wrote:
> we only want to show that icon in the tree when the storage is solely
> used for importing, not when it's just one of several content types.
>
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
> www/manager6/Utils.js | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
>
applied this one, thanks!
albeit, for storages that just have the import content type defined it
would still show the icon until one then adds another content-type, so
it could still be slightly confusing, but IMO it's an edge case we can
bother with if somebody really reports it.
* Re: [pve-devel] applied: [PATCH manager v6 1/9] ui: fix special 'import' icon for non-esxi storages
2024-11-17 16:21 ` [pve-devel] applied: " Thomas Lamprecht
@ 2024-11-18 8:47 ` Dominik Csapak
2024-11-18 9:56 ` Thomas Lamprecht
0 siblings, 1 reply; 68+ messages in thread
From: Dominik Csapak @ 2024-11-18 8:47 UTC (permalink / raw)
To: Thomas Lamprecht, Proxmox VE development discussion
On 11/17/24 17:21, Thomas Lamprecht wrote:
> On 15.11.24 at 16:17, Dominik Csapak wrote:
>> we only want to show that icon in the tree when the storage is solely
>> used for importing, not when it's just one of several content types.
>>
>> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
>> ---
>> www/manager6/Utils.js | 2 +-
>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>
>>
>
> applied this one, thanks!
>
> albeit, for storages that just have the import content type defined it
> would still show the icon until one then adds another content-type, so
> it could still be slightly confusing, but IMO it's an edge case we can
> bother with if somebody really reports it.
actually IIRC this was intentional, so that 'import-only' storages stand
out a bit, if you don't like it, i can change it ofc
* Re: [pve-devel] applied: [PATCH manager v6 1/9] ui: fix special 'import' icon for non-esxi storages
2024-11-18 8:47 ` Dominik Csapak
@ 2024-11-18 9:56 ` Thomas Lamprecht
0 siblings, 0 replies; 68+ messages in thread
From: Thomas Lamprecht @ 2024-11-18 9:56 UTC (permalink / raw)
To: Dominik Csapak, Proxmox VE development discussion
On 18.11.24 at 09:47, Dominik Csapak wrote:
> On 11/17/24 17:21, Thomas Lamprecht wrote:
>> albeit, for storages that just have the import content type defined it
>> would still show the icon until one then adds another content-type, so
>> could be still slightly confusing, but IMO it's and edge case we can
>> bother with if somebody really reports it.
>
> actually IIRC this was intentional, so that 'import-only' storages stand
> out a bit, if you don't like it, i can change it ofc
I just find it odd that the icon can change due to some adaptation of the settings,
like if e.g. one adds a storage for just import first and then after a while also
allows managing ISOs/CT templates; once that changes, the accustomed icon of that
storage entry changes too, and thus it might be harder to spot, especially if one
has a few other shared storages that share the same icon.
For ESXi it was chosen because it's unlikely, close to completely impossible,
that we will ever expose that storage for something else; that's why it can
be fixed to a different icon. For file-based storages that cannot be said.
IMO it might be nicer to show the available content types in some other form,
like small icons with tooltips for each enabled content type beside the name.
For the short term I do not see any pressing need to change that, but it's
maybe something to polish a bit during calmer times.
* [pve-devel] [PATCH manager v6 2/9] ui: guest import: add ova-needs-extracting warning text
2024-11-15 15:17 [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages Dominik Csapak
` (18 preceding siblings ...)
2024-11-15 15:17 ` [pve-devel] [PATCH manager v6 1/9] ui: fix special 'import' icon for non-esxi storages Dominik Csapak
@ 2024-11-15 15:17 ` Dominik Csapak
2024-11-17 16:29 ` Thomas Lamprecht
2024-11-15 15:17 ` [pve-devel] [PATCH manager v6 3/9] ui: enable import content type for relevant storages Dominik Csapak
` (10 subsequent siblings)
30 siblings, 1 reply; 68+ messages in thread
From: Dominik Csapak @ 2024-11-15 15:17 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
www/manager6/window/GuestImport.js | 1 +
1 file changed, 1 insertion(+)
diff --git a/www/manager6/window/GuestImport.js b/www/manager6/window/GuestImport.js
index 2577ece2..1483d97f 100644
--- a/www/manager6/window/GuestImport.js
+++ b/www/manager6/window/GuestImport.js
@@ -937,6 +937,7 @@ Ext.define('PVE.window.GuestImport', {
gettext('EFI state cannot be imported, you may need to reconfigure the boot order (see {0})'),
'<a href="https://pve.proxmox.com/wiki/OVMF/UEFI_Boot_Entries">OVMF/UEFI Boot Entries</a>',
),
+ 'ova-needs-extracting': gettext('Importing from an OVA requires extra space while extracting the contained disks into the import or selected storage.'),
};
let message = warningsCatalogue[w.type];
if (!w.type || !message) {
--
2.39.5
* Re: [pve-devel] [PATCH manager v6 2/9] ui: guest import: add ova-needs-extracting warning text
2024-11-15 15:17 ` [pve-devel] [PATCH manager v6 2/9] ui: guest import: add ova-needs-extracting warning text Dominik Csapak
@ 2024-11-17 16:29 ` Thomas Lamprecht
0 siblings, 0 replies; 68+ messages in thread
From: Thomas Lamprecht @ 2024-11-17 16:29 UTC (permalink / raw)
To: Proxmox VE development discussion, Dominik Csapak
Would squash that into the patch using it [0], but no hard feelings.
[0] "ui: guest import: add storage selector for ova extraction storage"
On 15.11.24 at 16:17, Dominik Csapak wrote:
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
> www/manager6/window/GuestImport.js | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/www/manager6/window/GuestImport.js b/www/manager6/window/GuestImport.js
> index 2577ece2..1483d97f 100644
> --- a/www/manager6/window/GuestImport.js
> +++ b/www/manager6/window/GuestImport.js
> @@ -937,6 +937,7 @@ Ext.define('PVE.window.GuestImport', {
> gettext('EFI state cannot be imported, you may need to reconfigure the boot order (see {0})'),
> '<a href="https://pve.proxmox.com/wiki/OVMF/UEFI_Boot_Entries">OVMF/UEFI Boot Entries</a>',
> ),
> + 'ova-needs-extracting': gettext('Importing from an OVA requires extra space while extracting the contained disks into the import or selected storage.'),
maybe one of the following two variants is a bit easier to follow:
'Importing an OVA temporarily requires additional space on the working storage while extracting the contained disks for further processing.'
or
'Importing an OVA temporarily requires additional space on the working storage while the disks are being extracted for further processing.'
I left out mentioning which storage, as there's a dedicated field for overriding it,
with a fitting emptyText that should already make the default clear.
* [pve-devel] [PATCH manager v6 3/9] ui: enable import content type for relevant storages
2024-11-15 15:17 [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages Dominik Csapak
` (19 preceding siblings ...)
2024-11-15 15:17 ` [pve-devel] [PATCH manager v6 2/9] ui: guest import: add ova-needs-extracting warning text Dominik Csapak
@ 2024-11-15 15:17 ` Dominik Csapak
2024-11-15 15:17 ` [pve-devel] [PATCH manager v6 4/9] ui: enable upload/download/remove buttons for 'import' type storages Dominik Csapak
` (9 subsequent siblings)
30 siblings, 0 replies; 68+ messages in thread
From: Dominik Csapak @ 2024-11-15 15:17 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
www/manager6/Utils.js | 1 +
www/manager6/form/ContentTypeSelector.js | 2 +-
www/manager6/storage/CephFSEdit.js | 2 +-
www/manager6/storage/GlusterFsEdit.js | 2 +-
4 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index 3691d66a..08d88c83 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -691,6 +691,7 @@ Ext.define('PVE.Utils', {
'iso': gettext('ISO image'),
'rootdir': gettext('Container'),
'snippets': gettext('Snippets'),
+ 'import': gettext('Import'),
},
volume_is_qemu_backup: function(volid, format) {
diff --git a/www/manager6/form/ContentTypeSelector.js b/www/manager6/form/ContentTypeSelector.js
index d0fa0b08..431bd948 100644
--- a/www/manager6/form/ContentTypeSelector.js
+++ b/www/manager6/form/ContentTypeSelector.js
@@ -10,7 +10,7 @@ Ext.define('PVE.form.ContentTypeSelector', {
me.comboItems = [];
if (me.cts === undefined) {
- me.cts = ['images', 'iso', 'vztmpl', 'backup', 'rootdir', 'snippets'];
+ me.cts = ['images', 'iso', 'vztmpl', 'backup', 'rootdir', 'snippets', 'import'];
}
Ext.Array.each(me.cts, function(ct) {
diff --git a/www/manager6/storage/CephFSEdit.js b/www/manager6/storage/CephFSEdit.js
index 6a95a00a..2cdcf7cd 100644
--- a/www/manager6/storage/CephFSEdit.js
+++ b/www/manager6/storage/CephFSEdit.js
@@ -92,7 +92,7 @@ Ext.define('PVE.storage.CephFSInputPanel', {
me.column2 = [
{
xtype: 'pveContentTypeSelector',
- cts: ['backup', 'iso', 'vztmpl', 'snippets'],
+ cts: ['backup', 'iso', 'vztmpl', 'snippets', 'import'],
fieldLabel: gettext('Content'),
name: 'content',
value: 'backup',
diff --git a/www/manager6/storage/GlusterFsEdit.js b/www/manager6/storage/GlusterFsEdit.js
index 8155d9c2..df7fe23f 100644
--- a/www/manager6/storage/GlusterFsEdit.js
+++ b/www/manager6/storage/GlusterFsEdit.js
@@ -99,7 +99,7 @@ Ext.define('PVE.storage.GlusterFsInputPanel', {
},
{
xtype: 'pveContentTypeSelector',
- cts: ['images', 'iso', 'backup', 'vztmpl', 'snippets'],
+ cts: ['images', 'iso', 'backup', 'vztmpl', 'snippets', 'import'],
name: 'content',
value: 'images',
multiSelect: true,
--
2.39.5
* [pve-devel] [PATCH manager v6 4/9] ui: enable upload/download/remove buttons for 'import' type storages
2024-11-15 15:17 [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages Dominik Csapak
` (20 preceding siblings ...)
2024-11-15 15:17 ` [pve-devel] [PATCH manager v6 3/9] ui: enable import content type for relevant storages Dominik Csapak
@ 2024-11-15 15:17 ` Dominik Csapak
2024-11-15 15:17 ` [pve-devel] [PATCH manager v6 5/9] ui: disable 'import' button for non importable formats Dominik Csapak
` (8 subsequent siblings)
30 siblings, 0 replies; 68+ messages in thread
From: Dominik Csapak @ 2024-11-15 15:17 UTC (permalink / raw)
To: pve-devel
but only for non-esxi ones, since those do not allow
uploading/downloading
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
www/manager6/storage/Browser.js | 9 +++++++--
www/manager6/window/UploadToStorage.js | 1 +
2 files changed, 8 insertions(+), 2 deletions(-)
diff --git a/www/manager6/storage/Browser.js b/www/manager6/storage/Browser.js
index 2123141d..934ce706 100644
--- a/www/manager6/storage/Browser.js
+++ b/www/manager6/storage/Browser.js
@@ -28,7 +28,9 @@ Ext.define('PVE.storage.Browser', {
let res = storageInfo.data;
let plugin = res.plugintype;
- me.items = plugin !== 'esxi' ? [
+ let isEsxi = plugin === 'esxi';
+
+ me.items = !isEsxi ? [
{
title: gettext('Summary'),
xtype: 'pveStorageSummary',
@@ -142,8 +144,11 @@ Ext.define('PVE.storage.Browser', {
iconCls: 'fa fa-desktop',
itemId: 'contentImport',
content: 'import',
- useCustomRemoveButton: true, // hide default remove button
+ useCustomRemoveButton: isEsxi, // hide default remove button for esxi
showColumns: ['name', 'format'],
+ enableUploadButton: enableUpload && !isEsxi,
+ enableDownloadUrlButton: enableDownloadUrl && !isEsxi,
+ useUploadButton: !isEsxi,
itemdblclick: (view, record) => createGuestImportWindow(record),
tbar: [
{
diff --git a/www/manager6/window/UploadToStorage.js b/www/manager6/window/UploadToStorage.js
index 3c5bba88..cdf548a8 100644
--- a/www/manager6/window/UploadToStorage.js
+++ b/www/manager6/window/UploadToStorage.js
@@ -9,6 +9,7 @@ Ext.define('PVE.window.UploadToStorage', {
title: gettext('Upload'),
acceptedExtensions: {
+ 'import': ['.ova'],
iso: ['.img', '.iso'],
vztmpl: ['.tar.gz', '.tar.xz', '.tar.zst'],
},
--
2.39.5
* [pve-devel] [PATCH manager v6 5/9] ui: disable 'import' button for non importable formats
2024-11-15 15:17 [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages Dominik Csapak
` (21 preceding siblings ...)
2024-11-15 15:17 ` [pve-devel] [PATCH manager v6 4/9] ui: enable upload/download/remove buttons for 'import' type storages Dominik Csapak
@ 2024-11-15 15:17 ` Dominik Csapak
2024-11-15 15:17 ` [pve-devel] [PATCH manager v6 6/9] ui: import: improve rendering of volume names Dominik Csapak
` (7 subsequent siblings)
30 siblings, 0 replies; 68+ messages in thread
From: Dominik Csapak @ 2024-11-15 15:17 UTC (permalink / raw)
To: pve-devel
importable formats are currently ova/ovf/vmx
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
www/manager6/storage/Browser.js | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/www/manager6/storage/Browser.js b/www/manager6/storage/Browser.js
index 934ce706..822257e7 100644
--- a/www/manager6/storage/Browser.js
+++ b/www/manager6/storage/Browser.js
@@ -124,6 +124,7 @@ Ext.define('PVE.storage.Browser', {
});
}
if (contents.includes('import')) {
+ let isImportable = format => ['ova', 'ovf', 'vmx'].indexOf(format) !== -1;
let createGuestImportWindow = (selection) => {
if (!selection) {
return;
@@ -149,13 +150,18 @@ Ext.define('PVE.storage.Browser', {
enableUploadButton: enableUpload && !isEsxi,
enableDownloadUrlButton: enableDownloadUrl && !isEsxi,
useUploadButton: !isEsxi,
- itemdblclick: (view, record) => createGuestImportWindow(record),
+ itemdblclick: (view, record) => {
+ if (isImportable(record.data.format)) {
+ createGuestImportWindow(record);
+ }
+ },
tbar: [
{
xtype: 'proxmoxButton',
disabled: true,
text: gettext('Import'),
iconCls: 'fa fa-cloud-download',
+ enableFn: rec => isImportable(rec.data.format),
handler: function() {
let grid = this.up('pveStorageContentView');
let selection = grid.getSelection()?.[0];
--
2.39.5
* [pve-devel] [PATCH manager v6 6/9] ui: import: improve rendering of volume names
2024-11-15 15:17 [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages Dominik Csapak
` (22 preceding siblings ...)
2024-11-15 15:17 ` [pve-devel] [PATCH manager v6 5/9] ui: disable 'import' button for non importable formats Dominik Csapak
@ 2024-11-15 15:17 ` Dominik Csapak
2024-11-15 15:17 ` [pve-devel] [PATCH manager v6 7/9] ui: guest import: add storage selector for ova extraction storage Dominik Csapak
` (6 subsequent siblings)
30 siblings, 0 replies; 68+ messages in thread
From: Dominik Csapak @ 2024-11-15 15:17 UTC (permalink / raw)
To: pve-devel
for directory storages, we don't need the 'import/' part of the volume names,
as that prefix is implied for dir-based storages
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
www/manager6/Utils.js | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index 08d88c83..97dbbae2 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -1025,7 +1025,13 @@ Ext.define('PVE.Utils', {
Ext.String.leftPad(data.channel, 2, '0') +
" ID " + data.id + " LUN " + data.lun;
} else if (data.content === 'import') {
- result = data.volid.replace(/^.*?:/, '');
+ if (data.volid.match(/^.*?:import\//)) {
+ // dir-based storages
+ result = data.volid.replace(/^.*?:import\//, '');
+ } else {
+ // esxi storage
+ result = data.volid.replace(/^.*?:/, '');
+ }
} else {
result = data.volid.replace(/^.*?:(.*?\/)?/, '');
}
--
2.39.5
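As a quick illustration of the rendering logic above, a standalone sketch with
made-up example volids (the real code runs inside PVE.Utils; this is just the
same regexes in isolation):

// strip the storage prefix; for dir-based storages also strip the 'import/' part
function renderImportVolumeName(volid) {
    if (/^.*?:import\//.test(volid)) {
        // dir-based storage, e.g. 'local:import/test.ova'
        return volid.replace(/^.*?:import\//, '');
    }
    // esxi storage, volid has no 'import/' part
    return volid.replace(/^.*?:/, '');
}

console.log(renderImportVolumeName('local:import/test.ova'));                // 'test.ova'
console.log(renderImportVolumeName('esxi1:ha-datacenter/store1/vm/vm.vmx')); // 'ha-datacenter/store1/vm/vm.vmx'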
* [pve-devel] [PATCH manager v6 7/9] ui: guest import: add storage selector for ova extraction storage
2024-11-15 15:17 [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages Dominik Csapak
` (23 preceding siblings ...)
2024-11-15 15:17 ` [pve-devel] [PATCH manager v6 6/9] ui: import: improve rendering of volume names Dominik Csapak
@ 2024-11-15 15:17 ` Dominik Csapak
2024-11-17 16:31 ` Thomas Lamprecht
2024-11-15 15:17 ` [pve-devel] [PATCH manager v6 8/9] ui: guest import: change icon/text for non-esxi import storage Dominik Csapak
` (5 subsequent siblings)
30 siblings, 1 reply; 68+ messages in thread
From: Dominik Csapak @ 2024-11-15 15:17 UTC (permalink / raw)
To: pve-devel
but only when we detect the 'ova-needs-extracting' warning.
This can be used to select the storage to which the disks contained in an
OVA will be temporarily extracted.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
www/manager6/window/GuestImport.js | 23 +++++++++++++++++++++++
1 file changed, 23 insertions(+)
diff --git a/www/manager6/window/GuestImport.js b/www/manager6/window/GuestImport.js
index 1483d97f..56417f27 100644
--- a/www/manager6/window/GuestImport.js
+++ b/www/manager6/window/GuestImport.js
@@ -303,6 +303,7 @@ Ext.define('PVE.window.GuestImport', {
os: 'l26',
maxCdDrives: false,
uniqueMACAdresses: false,
+ isOva: false,
warnings: [],
},
@@ -432,6 +433,10 @@ Ext.define('PVE.window.GuestImport', {
}
}
+ if (config['import-extraction-storage'] === '') {
+ delete config['import-extraction-storage'];
+ }
+
return config;
},
@@ -553,6 +558,22 @@ Ext.define('PVE.window.GuestImport', {
allowBlank: false,
fieldLabel: gettext('Default Bridge'),
},
+ {
+ xtype: 'pveStorageSelector',
+ reference: 'extractionStorage',
+ fieldLabel: gettext('Extraction Storage'),
+ storageContent: 'images',
+ emptyText: gettext('Import Storage'),
+ autoSelect: false,
+ name: 'import-extraction-storage',
+ disabled: true,
+ hidden: true,
+ allowBlank: true,
+ bind: {
+ disabled: '{!isOva}',
+ hidden: '{!isOva}',
+ },
+ },
],
columnB: [
@@ -925,6 +946,7 @@ Ext.define('PVE.window.GuestImport', {
me.lookup('defaultStorage').setNodename(me.nodename);
me.lookup('defaultBridge').setNodename(me.nodename);
+ me.lookup('extractionStorage').setNodename(me.nodename);
let renderWarning = w => {
const warningsCatalogue = {
@@ -1006,6 +1028,7 @@ Ext.define('PVE.window.GuestImport', {
}
me.getViewModel().set('warnings', data.warnings.map(w => renderWarning(w)));
+ me.getViewModel().set('isOva', data.warnings.map(w => w.type).indexOf('ova-needs-extracting') !== -1);
let osinfo = PVE.Utils.get_kvm_osinfo(me.vmConfig.ostype ?? '');
let prepareForVirtIO = (me.vmConfig.ostype ?? '').startsWith('w') && (me.vmConfig.bios ?? '').indexOf('ovmf') !== -1;
--
2.39.5
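A small plain-JavaScript sketch of the two pieces added here (the warning list
and the config object are made up; in the real dialog this lives in the ExtJS
view model and submit handler):

// derive the isOva flag from the warnings the import API returned
const warnings = [{ type: 'ova-needs-extracting' }, { type: 'some-other-warning' }];
const isOva = warnings.map(w => w.type).indexOf('ova-needs-extracting') !== -1;
console.log(isOva); // true -> extraction storage selector becomes visible/enabled

// drop the parameter again when the selector was left empty
const config = { name: 'imported-vm', 'import-extraction-storage': '' };
if (config['import-extraction-storage'] === '') {
    delete config['import-extraction-storage'];
}
console.log(config); // { name: 'imported-vm' }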
* Re: [pve-devel] [PATCH manager v6 7/9] ui: guest import: add storage selector for ova extraction storage
2024-11-15 15:17 ` [pve-devel] [PATCH manager v6 7/9] ui: guest import: add storage selector for ova extraction storage Dominik Csapak
@ 2024-11-17 16:31 ` Thomas Lamprecht
0 siblings, 0 replies; 68+ messages in thread
From: Thomas Lamprecht @ 2024-11-17 16:31 UTC (permalink / raw)
To: Proxmox VE development discussion, Dominik Csapak
Am 15.11.24 um 16:17 schrieb Dominik Csapak:
> but only when we detect the 'ova-needs-extraction' warning.
> This can be used to select the storage where the disks contained in an
> OVA will be extracted to temporarily.
>
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
> www/manager6/window/GuestImport.js | 23 +++++++++++++++++++++++
> 1 file changed, 23 insertions(+)
>
> diff --git a/www/manager6/window/GuestImport.js b/www/manager6/window/GuestImport.js
> index 1483d97f..56417f27 100644
> --- a/www/manager6/window/GuestImport.js
> +++ b/www/manager6/window/GuestImport.js
> @@ -303,6 +303,7 @@ Ext.define('PVE.window.GuestImport', {
> os: 'l26',
> maxCdDrives: false,
> uniqueMACAdresses: false,
> + isOva: false,
> warnings: [],
> },
>
> @@ -432,6 +433,10 @@ Ext.define('PVE.window.GuestImport', {
> }
> }
>
> + if (config['import-extraction-storage'] === '') {
> + delete config['import-extraction-storage'];
> + }
> +
> return config;
> },
>
> @@ -553,6 +558,22 @@ Ext.define('PVE.window.GuestImport', {
> allowBlank: false,
> fieldLabel: gettext('Default Bridge'),
> },
> + {
> + xtype: 'pveStorageSelector',
> + reference: 'extractionStorage',
> + fieldLabel: gettext('Extraction Storage'),
This reads a bit strange to me, but the alternatives off the top of my mind aren't
perfect either, FWIW:
'Working Storage'
or
'Import Working Storage'
or
'Extraction Target Storage'
> + storageContent: 'images',
> + emptyText: gettext('Import Storage'),
> + autoSelect: false,
> + name: 'import-extraction-storage',
> + disabled: true,
> + hidden: true,
> + allowBlank: true,
> + bind: {
> + disabled: '{!isOva}',
> + hidden: '{!isOva}',
> + },
> + },
> ],
>
> columnB: [
> @@ -925,6 +946,7 @@ Ext.define('PVE.window.GuestImport', {
>
> me.lookup('defaultStorage').setNodename(me.nodename);
> me.lookup('defaultBridge').setNodename(me.nodename);
> + me.lookup('extractionStorage').setNodename(me.nodename);
>
> let renderWarning = w => {
> const warningsCatalogue = {
> @@ -1006,6 +1028,7 @@ Ext.define('PVE.window.GuestImport', {
> }
>
> me.getViewModel().set('warnings', data.warnings.map(w => renderWarning(w)));
> + me.getViewModel().set('isOva', data.warnings.map(w => w.type).indexOf('ova-needs-extracting') !== -1);
>
> let osinfo = PVE.Utils.get_kvm_osinfo(me.vmConfig.ostype ?? '');
> let prepareForVirtIO = (me.vmConfig.ostype ?? '').startsWith('w') && (me.vmConfig.bios ?? '').indexOf('ovmf') !== -1;
* [pve-devel] [PATCH manager v6 8/9] ui: guest import: change icon/text for non-esxi import storage
2024-11-15 15:17 [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages Dominik Csapak
` (24 preceding siblings ...)
2024-11-15 15:17 ` [pve-devel] [PATCH manager v6 7/9] ui: guest import: add storage selector for ova extraction storage Dominik Csapak
@ 2024-11-15 15:17 ` Dominik Csapak
2024-11-15 15:17 ` [pve-devel] [PATCH manager v6 9/9] ui: import: show size for dir-based storages Dominik Csapak
` (4 subsequent siblings)
30 siblings, 0 replies; 68+ messages in thread
From: Dominik Csapak @ 2024-11-15 15:17 UTC (permalink / raw)
To: pve-devel
since 'Virtual Guests' only makes sense for a hypervisor, not e.g. a
directory containing OVAs.
Also change the icon from 'desktop' to 'cloud-download' in the
non-ESXi case.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
www/manager6/storage/Browser.js | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/www/manager6/storage/Browser.js b/www/manager6/storage/Browser.js
index 822257e7..763abc70 100644
--- a/www/manager6/storage/Browser.js
+++ b/www/manager6/storage/Browser.js
@@ -141,8 +141,10 @@ Ext.define('PVE.storage.Browser', {
};
me.items.push({
xtype: 'pveStorageContentView',
- title: gettext('Virtual Guests'),
- iconCls: 'fa fa-desktop',
+ // each gettext needs to be in a separate line
+ title: isEsxi ? gettext('Virtual Guests')
+ : gettext('Import'),
+ iconCls: isEsxi ? 'fa fa-desktop' : 'fa fa-cloud-download',
itemId: 'contentImport',
content: 'import',
useCustomRemoveButton: isEsxi, // hide default remove button for esxi
--
2.39.5
* [pve-devel] [PATCH manager v6 9/9] ui: import: show size for dir-based storages
2024-11-15 15:17 [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages Dominik Csapak
` (25 preceding siblings ...)
2024-11-15 15:17 ` [pve-devel] [PATCH manager v6 8/9] ui: guest import: change icon/text for non-esxi import storage Dominik Csapak
@ 2024-11-15 15:17 ` Dominik Csapak
2024-11-17 16:37 ` [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages Thomas Lamprecht
` (3 subsequent siblings)
30 siblings, 0 replies; 68+ messages in thread
From: Dominik Csapak @ 2024-11-15 15:17 UTC (permalink / raw)
To: pve-devel
since we already have the size information there
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
www/manager6/storage/Browser.js | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/www/manager6/storage/Browser.js b/www/manager6/storage/Browser.js
index 763abc70..c0b66acc 100644
--- a/www/manager6/storage/Browser.js
+++ b/www/manager6/storage/Browser.js
@@ -148,7 +148,7 @@ Ext.define('PVE.storage.Browser', {
itemId: 'contentImport',
content: 'import',
useCustomRemoveButton: isEsxi, // hide default remove button for esxi
- showColumns: ['name', 'format'],
+ showColumns: isEsxi ? ['name', 'format'] : ['name', 'size', 'format'],
enableUploadButton: enableUpload && !isEsxi,
enableDownloadUrlButton: enableDownloadUrl && !isEsxi,
useUploadButton: !isEsxi,
--
2.39.5
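To illustrate how the last two patches hang together, a sketch of the
isEsxi-dependent presentation bits in isolation (plain JavaScript; the
buildImportPanelConfig name is made up and gettext is omitted):

function buildImportPanelConfig(isEsxi) {
    return {
        // ESXi storages list whole guests, file-based storages list import files
        title: isEsxi ? 'Virtual Guests' : 'Import',
        iconCls: isEsxi ? 'fa fa-desktop' : 'fa fa-cloud-download',
        // dir-based storages know the file size, so show the column there
        showColumns: isEsxi ? ['name', 'format'] : ['name', 'size', 'format'],
    };
}

console.log(buildImportPanelConfig(true).title);        // 'Virtual Guests'
console.log(buildImportPanelConfig(false).showColumns); // [ 'name', 'size', 'format' ]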
* Re: [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages
2024-11-15 15:17 [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages Dominik Csapak
` (26 preceding siblings ...)
2024-11-15 15:17 ` [pve-devel] [PATCH manager v6 9/9] ui: import: show size for dir-based storages Dominik Csapak
@ 2024-11-17 16:37 ` Thomas Lamprecht
2024-11-18 13:06 ` Lukas Wagner
` (2 subsequent siblings)
30 siblings, 0 replies; 68+ messages in thread
From: Thomas Lamprecht @ 2024-11-17 16:37 UTC (permalink / raw)
To: Proxmox VE development discussion, Dominik Csapak
Am 15.11.24 um 16:17 schrieb Dominik Csapak:
> This series enables importing ova/ovf from directory based storages,
> inclusive upload/download via the webui (ova only).
>
> It also improves the ovf importer by parsing the ostype, nics, bootorder
> (and firmware from vmware exported files).
>
> I opted to move the OVF.pm to pve-storage, since there is no
> real other place where we could put it. I put it in a new module
> 'GuestImport'
>
> We now extract the images into either a given target storage or in the
> import storage in the 'images' dir so accidentally left over images
> are discoverable by the ui/cli.
>
> This version is half rebased on fabians hardening series:
> https://lore.proxmox.com/pve-devel/20241104104221.228730-1-f.gruenbichler@proxmox.com/
>
> I sent the qemu-server patch from fabian again but omitted some
> problematic checks. I add them later with a check
> against the import vtype again (last patch in qemu-server)
>
> changes from v5:
> * removed leftover hunks in makefile
> * moved ova checks to correct patch
> * split up error messages for unexpected format
> * remove unnecessary untaint
> * reword error message
> * reintroduce symlink check in ova/ovf check
> * added sanity check for ovas after uploading/downloading
> * added new patch for checking import vtypes
> * fixed issue with files with absolute path
Looks like it's in quite good shape; I left a few comments on a few patches, but
mostly for some style/naming stuff.
It would be great if Fiona, and possibly also Fabian, could take another look. Some
testing from others would naturally also be nice; maybe you can organise that.
* Re: [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages
2024-11-15 15:17 [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages Dominik Csapak
` (27 preceding siblings ...)
2024-11-17 16:37 ` [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages Thomas Lamprecht
@ 2024-11-18 13:06 ` Lukas Wagner
2024-11-18 13:18 ` Dominik Csapak
2024-11-18 14:35 ` Daniel Herzig
2024-11-18 15:33 ` Dominik Csapak
30 siblings, 1 reply; 68+ messages in thread
From: Lukas Wagner @ 2024-11-18 13:06 UTC (permalink / raw)
To: Proxmox VE development discussion, d.csapak
On Fri Nov 15, 2024 at 4:17 PM CET, Dominik Csapak wrote:
> This series enables importing ova/ovf from directory based storages,
> inclusive upload/download via the webui (ova only).
>
> It also improves the ovf importer by parsing the ostype, nics, bootorder
> (and firmware from vmware exported files).
>
> I opted to move the OVF.pm to pve-storage, since there is no
> real other place where we could put it. I put it in a new module
> 'GuestImport'
>
> We now extract the images into either a given target storage or in the
> import storage in the 'images' dir so accidentally left over images
> are discoverable by the ui/cli.
>
> This version is half rebased on fabians hardening series:
> https://lore.proxmox.com/pve-devel/20241104104221.228730-1-f.gruenbichler@proxmox.com/
>
> I sent the qemu-server patch from fabian again but omitted some
> problematic checks. I add them later with a check
> against the import vtype again (last patch in qemu-server)
Hi,
I gave this series a quick test on the respective latest master branches.
Looking good so far, but here are a couple of things I noticed:
- In the UI, checking 'Live Import' does not seem to have any effect
(is live import even available for OVA import?)
Also, the 'Start a previously stopped VM on Proxmox VE' text does
not make much sense in the context of OVAs, FWICT.
- A help button for the dialog would be useful
- Some documentation should be added before this feature is released
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
* Re: [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages
2024-11-18 13:06 ` Lukas Wagner
@ 2024-11-18 13:18 ` Dominik Csapak
2024-11-18 13:39 ` Lukas Wagner
0 siblings, 1 reply; 68+ messages in thread
From: Dominik Csapak @ 2024-11-18 13:18 UTC (permalink / raw)
To: Lukas Wagner, Proxmox VE development discussion
On 11/18/24 14:06, Lukas Wagner wrote:
> On Fri Nov 15, 2024 at 4:17 PM CET, Dominik Csapak wrote:
>> This series enables importing ova/ovf from directory based storages,
>> inclusive upload/download via the webui (ova only).
>>
>> It also improves the ovf importer by parsing the ostype, nics, bootorder
>> (and firmware from vmware exported files).
>>
>> I opted to move the OVF.pm to pve-storage, since there is no
>> real other place where we could put it. I put it in a new module
>> 'GuestImport'
>>
>> We now extract the images into either a given target storage or in the
>> import storage in the 'images' dir so accidentally left over images
>> are discoverable by the ui/cli.
>>
>> This version is half rebased on fabians hardening series:
>> https://lore.proxmox.com/pve-devel/20241104104221.228730-1-f.gruenbichler@proxmox.com/
>>
>> I sent the qemu-server patch from fabian again but omitted some
>> problematic checks. I add them later with a check
>> against the import vtype again (last patch in qemu-server)
>
> Hi,
> gave this series a quick test on the respective latest master branches.
>
> Looking good so far, but a couple of things that I've noticed were:
> - In the UI, checking 'Live Import' does not seem to have any effect
> (is live import even available for OVA import?)
Yes, this works here; under what conditions does it not work for you?
> Also, the 'Start a previously stopped VM on Proxmox VE' text does
> not make much sense in the context of OVAs, FWICT.
True, I'll change the text for OVA imports.
> - A help button for the dialog would be useful
> - Some documentation should be added before this feature is released
Yes, also true. I'll prepare something.
Thanks!
>
> Tested-by: Lukas Wagner <l.wagner@proxmox.com>
* Re: [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages
2024-11-18 13:18 ` Dominik Csapak
@ 2024-11-18 13:39 ` Lukas Wagner
2024-11-18 13:44 ` Dominik Csapak
0 siblings, 1 reply; 68+ messages in thread
From: Lukas Wagner @ 2024-11-18 13:39 UTC (permalink / raw)
To: Proxmox VE development discussion, d.csapak
On Mon Nov 18, 2024 at 2:18 PM CET, Dominik Csapak wrote:
> On 11/18/24 14:06, Lukas Wagner wrote:
> > On Fri Nov 15, 2024 at 4:17 PM CET, Dominik Csapak wrote:
> >> This series enables importing ova/ovf from directory based storages,
> >> inclusive upload/download via the webui (ova only).
> >>
> >> It also improves the ovf importer by parsing the ostype, nics, bootorder
> >> (and firmware from vmware exported files).
> >>
> >> I opted to move the OVF.pm to pve-storage, since there is no
> >> real other place where we could put it. I put it in a new module
> >> 'GuestImport'
> >>
> >> We now extract the images into either a given target storage or in the
> >> import storage in the 'images' dir so accidentally left over images
> >> are discoverable by the ui/cli.
> >>
> >> This version is half rebased on fabians hardening series:
> >> https://lore.proxmox.com/pve-devel/20241104104221.228730-1-f.gruenbichler@proxmox.com/
> >>
> >> I sent the qemu-server patch from fabian again but omitted some
> >> problematic checks. I add them later with a check
> >> against the import vtype again (last patch in qemu-server)
> >
> > Hi,
> > gave this series a quick test on the respective latest master branches.
> >
> > Looking good so far, but a couple of things that I've noticed were:
> > - In the UI, checking 'Live Import' does not seem to have any effect
> > (is live import even available for OVA import?)
>
> yes this works here, under what condition does it not for you?
>
nothing special, I tested the feature using the Home Assistant .ova from
[1]. Downloaded the OVA to my local storage, pressed "import", did not
change *any* settings apart from ticking "live import".
The import works, but the VM is not started. Starting the VM manually
works fine, also the tasks log does not show anything of concern.
In the browser network requests I saw that 'live-restore' is set to '1'
in the import POST request.
[1] https://www.home-assistant.io/installation/alternative/
* Re: [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages
2024-11-18 13:39 ` Lukas Wagner
@ 2024-11-18 13:44 ` Dominik Csapak
2024-11-18 13:53 ` Dominik Csapak
0 siblings, 1 reply; 68+ messages in thread
From: Dominik Csapak @ 2024-11-18 13:44 UTC (permalink / raw)
To: Lukas Wagner, Proxmox VE development discussion
On 11/18/24 14:39, Lukas Wagner wrote:
> On Mon Nov 18, 2024 at 2:18 PM CET, Dominik Csapak wrote:
>> On 11/18/24 14:06, Lukas Wagner wrote:
>>> On Fri Nov 15, 2024 at 4:17 PM CET, Dominik Csapak wrote:
>>>> This series enables importing ova/ovf from directory based storages,
>>>> inclusive upload/download via the webui (ova only).
>>>>
>>>> It also improves the ovf importer by parsing the ostype, nics, bootorder
>>>> (and firmware from vmware exported files).
>>>>
>>>> I opted to move the OVF.pm to pve-storage, since there is no
>>>> real other place where we could put it. I put it in a new module
>>>> 'GuestImport'
>>>>
>>>> We now extract the images into either a given target storage or in the
>>>> import storage in the 'images' dir so accidentally left over images
>>>> are discoverable by the ui/cli.
>>>>
>>>> This version is half rebased on fabians hardening series:
>>>> https://lore.proxmox.com/pve-devel/20241104104221.228730-1-f.gruenbichler@proxmox.com/
>>>>
>>>> I sent the qemu-server patch from fabian again but omitted some
>>>> problematic checks. I add them later with a check
>>>> against the import vtype again (last patch in qemu-server)
>>>
>>> Hi,
>>> gave this series a quick test on the respective latest master branches.
>>>
>>> Looking good so far, but a couple of things that I've noticed were:
>>> - In the UI, checking 'Live Import' does not seem to have any effect
>>> (is live import even available for OVA import?)
>>
>> yes this works here, under what condition does it not for you?
>>
>
> nothing special, I tested the feature using the Home Assistant .ova from
> [1]. Downloaded the OVA to my local storage, pressed "import", did not
> change *any* settings apart from ticking "live import".
> The import works, but the VM is not started. Starting the VM manually
> works fine, also the tasks log does not show anything of concern.
> In the browser network requests I saw that 'live-restore' is set to '1'
> in the import POST request.
>
> [1] https://www.home-assistant.io/installation/alternative/
Hmm, I can reproduce this with that OVA image; maybe it has something to do with our
OVF import not detecting the main disk?
(At least here it does not detect any disk.)
* Re: [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages
2024-11-18 13:44 ` Dominik Csapak
@ 2024-11-18 13:53 ` Dominik Csapak
2024-11-19 8:15 ` Lukas Wagner
0 siblings, 1 reply; 68+ messages in thread
From: Dominik Csapak @ 2024-11-18 13:53 UTC (permalink / raw)
To: Lukas Wagner, Proxmox VE development discussion
On 11/18/24 14:44, Dominik Csapak wrote:
> On 11/18/24 14:39, Lukas Wagner wrote:
>> On Mon Nov 18, 2024 at 2:18 PM CET, Dominik Csapak wrote:
>>> On 11/18/24 14:06, Lukas Wagner wrote:
>>>> On Fri Nov 15, 2024 at 4:17 PM CET, Dominik Csapak wrote:
>>>>> This series enables importing ova/ovf from directory based storages,
>>>>> inclusive upload/download via the webui (ova only).
>>>>>
>>>>> It also improves the ovf importer by parsing the ostype, nics, bootorder
>>>>> (and firmware from vmware exported files).
>>>>>
>>>>> I opted to move the OVF.pm to pve-storage, since there is no
>>>>> real other place where we could put it. I put it in a new module
>>>>> 'GuestImport'
>>>>>
>>>>> We now extract the images into either a given target storage or in the
>>>>> import storage in the 'images' dir so accidentally left over images
>>>>> are discoverable by the ui/cli.
>>>>>
>>>>> This version is half rebased on fabians hardening series:
>>>>> https://lore.proxmox.com/pve-devel/20241104104221.228730-1-f.gruenbichler@proxmox.com/
>>>>>
>>>>> I sent the qemu-server patch from fabian again but omitted some
>>>>> problematic checks. I add them later with a check
>>>>> against the import vtype again (last patch in qemu-server)
>>>>
>>>> Hi,
>>>> gave this series a quick test on the respective latest master branches.
>>>>
>>>> Looking good so far, but a couple of things that I've noticed were:
>>>> - In the UI, checking 'Live Import' does not seem to have any effect
>>>> (is live import even available for OVA import?)
>>>
>>> yes this works here, under what condition does it not for you?
>>>
>>
>> nothing special, I tested the feature using the Home Assistant .ova from
>> [1]. Downloaded the OVA to my local storage, pressed "import", did not
>> change *any* settings apart from ticking "live import".
>> The import works, but the VM is not started. Starting the VM manually
>> works fine, also the tasks log does not show anything of concern.
>> In the browser network requests I saw that 'live-restore' is set to '1'
>> in the import POST request.
>>
>> [1] https://www.home-assistant.io/installation/alternative/
>
>
> mhmm can reproduce with that ova image, maybe it has something to do with our
> ovf import not detecting the main disk?
> (at least here it does not detect any disk)
>
>
OK, two things here:
* This OVA seems to be malformed: it has a HostResource of '/disk/vmidsk1', while
the spec says it should be of the form 'ovf:/disk/vmdisk1', which is what we look for
(we could make the 'ovf:' prefix optional, though?)
* Seemingly live-import does not start the VM if there was no disk, but AFAICS
this was pre-existing and not something my series introduced (though I'm not sure either way)
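Purely for illustration (the actual parsing happens in Perl in
PVE/GuestImport/OVF.pm, not in JavaScript): a sketch of what accepting both the
spec-conformant and the prefix-less HostResource form could look like.

// accept 'ovf:/disk/vmdisk1' (per spec) as well as the relaxed '/disk/vmdisk1'
// form seen in some OVAs; return the disk id, or null for anything else
function parseHostResource(hostResource) {
    const match = hostResource.match(/^(?:ovf:)?\/disk\/(.+)$/);
    return match ? match[1] : null;
}

console.log(parseHostResource('ovf:/disk/vmdisk1')); // 'vmdisk1'
console.log(parseHostResource('/disk/vmdisk1'));     // 'vmdisk1' (relaxed form)
console.log(parseHostResource('/file/file1'));       // null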
* Re: [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages
2024-11-18 13:53 ` Dominik Csapak
@ 2024-11-19 8:15 ` Lukas Wagner
2024-11-19 8:44 ` Dominik Csapak
2024-11-19 8:48 ` Thomas Lamprecht
0 siblings, 2 replies; 68+ messages in thread
From: Lukas Wagner @ 2024-11-19 8:15 UTC (permalink / raw)
To: Proxmox VE development discussion, d.csapak
On Mon Nov 18, 2024 at 2:53 PM CET, Dominik Csapak wrote:
> >> nothing special, I tested the feature using the Home Assistant .ova from
> >> [1]. Downloaded the OVA to my local storage, pressed "import", did not
> >> change *any* settings apart from ticking "live import".
> >> The import works, but the VM is not started. Starting the VM manually
> >> works fine, also the tasks log does not show anything of concern.
> >> In the browser network requests I saw that 'live-restore' is set to '1'
> >> in the import POST request.
> >>
> >> [1] https://www.home-assistant.io/installation/alternative/
> >
> >
> > mhmm can reproduce with that ova image, maybe it has something to do with our
> > ovf import not detecting the main disk?
> > (at least here it does not detect any disk)
> >
> >
>
> ok two things here:
>
> * this ova is malformed it seems, it has a HostResource of '/disk/vmidsk1' when
> the spec says it should be of the form 'ovf:/disk/vmdisk1' which is what we look for
> (we could make that optional though?)
>
> * seemingly live-import does not start the vm if there was no disk, but AFAICS
> this was pre-existing and not something my series introduced (but not sure either)
>
Ah, thanks for the investigation. Maybe it'd be worth checking whether
other hypervisors accept the malformed resource definition and, if that is
the case, adding support for it to PVE as well.
* Re: [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages
2024-11-19 8:15 ` Lukas Wagner
@ 2024-11-19 8:44 ` Dominik Csapak
2024-11-19 8:48 ` Thomas Lamprecht
1 sibling, 0 replies; 68+ messages in thread
From: Dominik Csapak @ 2024-11-19 8:44 UTC (permalink / raw)
To: Lukas Wagner, Proxmox VE development discussion
On 11/19/24 09:15, Lukas Wagner wrote:
> On Mon Nov 18, 2024 at 2:53 PM CET, Dominik Csapak wrote:
>>>> nothing special, I tested the feature using the Home Assistant .ova from
>>>> [1]. Downloaded the OVA to my local storage, pressed "import", did not
>>>> change *any* settings apart from ticking "live import".
>>>> The import works, but the VM is not started. Starting the VM manually
>>>> works fine, also the tasks log does not show anything of concern.
>>>> In the browser network requests I saw that 'live-restore' is set to '1'
>>>> in the import POST request.
>>>>
>>>> [1] https://www.home-assistant.io/installation/alternative/
>>>
>>>
>>> mhmm can reproduce with that ova image, maybe it has something to do with our
>>> ovf import not detecting the main disk?
>>> (at least here it does not detect any disk)
>>>
>>>
>>
>> ok two things here:
>>
>> * this ova is malformed it seems, it has a HostResource of '/disk/vmidsk1' when
>> the spec says it should be of the form 'ovf:/disk/vmdisk1' which is what we look for
>> (we could make that optional though?)
>>
>> * seemingly live-import does not start the vm if there was no disk, but AFAICS
>> this was pre-existing and not something my series introduced (but not sure either)
>>
>
> Ah, thanks for the investigation. Maybe it'd be worth to check whether
> other hypervisors accept the malformed resource definition and then add
> support to PVE as well if this is the case.
Actually, Thomas already fixed that yesterday:
https://git.proxmox.com/?p=pve-storage.git;a=commitdiff;h=f2a6bd278896f4354f88cf828dc690d186126741
* Re: [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages
2024-11-19 8:15 ` Lukas Wagner
2024-11-19 8:44 ` Dominik Csapak
@ 2024-11-19 8:48 ` Thomas Lamprecht
2024-11-20 16:32 ` Gilberto Ferreira via pve-devel
1 sibling, 1 reply; 68+ messages in thread
From: Thomas Lamprecht @ 2024-11-19 8:48 UTC (permalink / raw)
To: Proxmox VE development discussion, Lukas Wagner, d.csapak
Am 19.11.24 um 09:15 schrieb Lukas Wagner:
> On Mon Nov 18, 2024 at 2:53 PM CET, Dominik Csapak wrote:
>>>> nothing special, I tested the feature using the Home Assistant .ova from
>>>> [1]. Downloaded the OVA to my local storage, pressed "import", did not
>>>> change *any* settings apart from ticking "live import".
>>>> The import works, but the VM is not started. Starting the VM manually
>>>> works fine, also the tasks log does not show anything of concern.
>>>> In the browser network requests I saw that 'live-restore' is set to '1'
>>>> in the import POST request.
>>>>
>>>> [1] https://www.home-assistant.io/installation/alternative/
>>>
>>>
>>> mhmm can reproduce with that ova image, maybe it has something to do with our
>>> ovf import not detecting the main disk?
>>> (at least here it does not detect any disk)
>>>
>>>
>>
>> ok two things here:
>>
>> * this ova is malformed it seems, it has a HostResource of '/disk/vmidsk1' when
>> the spec says it should be of the form 'ovf:/disk/vmdisk1' which is what we look for
>> (we could make that optional though?)
>>
>> * seemingly live-import does not start the vm if there was no disk, but AFAICS
>> this was pre-existing and not something my series introduced (but not sure either)
>>
>
> Ah, thanks for the investigation. Maybe it'd be worth to check whether
> other hypervisors accept the malformed resource definition and then add
> support to PVE as well if this is the case.
Btw., it should work now already; the GNS3 image that Filip linked to in his reply
used the same format, so this seems to be relatively common.
I used the GNS3 one yesterday as a test case to integrate some detection quirks; I
also had to accept whitespace in the OVA disk names (we normalize those to standard
PVE volume names through import anyway).
I then also imported a HAOS OVA image. That works in general, but the disk was not
added as a boot device, and the disk bus was LSI; I needed to change both to make the
disk available for OVMF. IIRC the OS type was detected as "Others" on import, and I changed
that, but it seems not all defaults changed with it – maybe one could look into that?
HAOS is a common use case, I alone installed it twice this year; back then I had to
use the qcow2 image and manually import that like a cave man though, hehe
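As an aside, a purely illustrative JavaScript sketch of the normalization idea
(the real handling is Perl in pve-storage, and the exact allowed character set
there may differ; the second file name is made up):

// map an OVA-internal disk file name, which may contain spaces or parentheses,
// onto a conservative character set similar to a PVE volume name
function normalizeDiskName(name) {
    return name
        .replace(/\.vmdk$/i, '')            // drop the extension, the format is tracked separately
        .replace(/[^A-Za-z0-9_.-]+/g, '-')  // collapse spaces, parentheses, ... to '-'
        .replace(/-{2,}/g, '-')             // squeeze repeated dashes
        .replace(/^-+|-+$/g, '');           // trim leading/trailing dashes
}

console.log(normalizeDiskName('GNS3 VM-disk1.vmdk'));   // 'GNS3-VM-disk1'
console.log(normalizeDiskName('Some Disk (1).vmdk'));   // 'Some-Disk-1'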
* Re: [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages
2024-11-19 8:48 ` Thomas Lamprecht
@ 2024-11-20 16:32 ` Gilberto Ferreira via pve-devel
2024-11-20 16:57 ` Gilberto Ferreira via pve-devel
0 siblings, 1 reply; 68+ messages in thread
From: Gilberto Ferreira via pve-devel @ 2024-11-20 16:32 UTC (permalink / raw)
To: Proxmox VE development discussion; +Cc: Gilberto Ferreira, Lukas Wagner
From: Gilberto Ferreira <gilberto.nunes32@gmail.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Cc: Lukas Wagner <l.wagner@proxmox.com>, d.csapak@proxmox.com
Subject: Re: [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages
Date: Wed, 20 Nov 2024 13:32:05 -0300
Message-ID: <CAOKSTBvy5hemAM1+9rV8Y_Z1vZ_gCy5JtUJSxFmRTJVQDLbtNw@mail.gmail.com>
Hi there.
Sorry for the email, but I tested this import option with an Ubuntu OVA
which has a space in the vmdk name, and I believe that could lead to errors.
I downloaded it from here:
https://razaoinfo.dl.sourceforge.net/project/osboxes/v/vb/55-U-u/OVA-Versions/24.04/Ubuntu-24.04-64bit-VB.ova
When I tried to import it, I got error 500!
Unfortunately I didn't get the entire error message or any logs.
Is this some kind of error or a bug?
Nevertheless, I would like to say thanks for this amazing feature.
Cheers
On Tue, 19 Nov 2024 at 05:49, Thomas Lamprecht <t.lamprecht@proxmox.com> wrote:
> Am 19.11.24 um 09:15 schrieb Lukas Wagner:
> > On Mon Nov 18, 2024 at 2:53 PM CET, Dominik Csapak wrote:
> >>>> nothing special, I tested the feature using the Home Assistant .ova
> from
> >>>> [1]. Downloaded the OVA to my local storage, pressed "import", did not
> >>>> change *any* settings apart from ticking "live import".
> >>>> The import works, but the VM is not started. Starting the VM manually
> >>>> works fine, also the tasks log does not show anything of concern.
> >>>> In the browser network requests I saw that 'live-restore' is set to
> '1'
> >>>> in the import POST request.
> >>>>
> >>>> [1] https://www.home-assistant.io/installation/alternative/
> >>>
> >>>
> >>> mhmm can reproduce with that ova image, maybe it has something to do
> with our
> >>> ovf import not detecting the main disk?
> >>> (at least here it does not detect any disk)
> >>>
> >>>
> >>
> >> ok two things here:
> >>
> >> * this ova is malformed it seems, it has a HostResource of
> '/disk/vmidsk1' when
> >> the spec says it should be of the form 'ovf:/disk/vmdisk1' which is
> what we look for
> >> (we could make that optional though?)
> >>
> >> * seemingly live-import does not start the vm if there was no disk, but
> AFAICS
> >> this was pre-existing and not something my series introduced (but not
> sure either)
> >>
> >
> > Ah, thanks for the investigation. Maybe it'd be worth to check whether
> > other hypervisors accept the malformed resource definition and then add
> > support to PVE as well if this is the case.
>
>
> Btw. it should work now already, the GNS3 image that Filip linked to in
> his reply
> used the same format, so this seems to be relatively common.
>
> I used the GNS3 one yesterday as test case to integrate some detection
> quirks, I
> also had to accept whitespace in the OVA disk names (we normalize those as
> standard
> PVE volume name through import anyway).
>
> I then also imported A HAOS OVA image, that works in general, but the disk
> was not
> added as boot device, and the disk bus was LSI, I needed to change both to
> make the
> disk available for OVMF. IIRC the OS type was detected "Others" on import,
> I changed
> that, but seems not all defaults changed with it – maybe one could look
> into that?
> HAOS is a common usecase, I alone installed it twice this year, back then
> I had to
> use the qcow2 image and manually import that like a cave man though hehe
* Re: [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages
2024-11-20 16:32 ` Gilberto Ferreira via pve-devel
@ 2024-11-20 16:57 ` Gilberto Ferreira via pve-devel
2024-11-21 8:24 ` Dominik Csapak
0 siblings, 1 reply; 68+ messages in thread
From: Gilberto Ferreira via pve-devel @ 2024-11-20 16:57 UTC (permalink / raw)
To: Proxmox VE development discussion; +Cc: Gilberto Ferreira, Lukas Wagner
From: Gilberto Ferreira <gilberto.nunes32@gmail.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Cc: Lukas Wagner <l.wagner@proxmox.com>
Subject: Re: [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages
Date: Wed, 20 Nov 2024 13:57:18 -0300
Message-ID: <CAOKSTBvadQhX9mRcs9pemLW2o+g5BDQ2bq+5BWu-N04MyM6nAw@mail.gmail.com>
Ok. Here is the error message:
referenced path 'Ubuntu 24.04 (64bit)-disk001.vmdk' is invalid (500)
Hope this helps.
Thanks
On Wed, 20 Nov 2024 at 13:32, Gilberto Ferreira via pve-devel <pve-devel@lists.proxmox.com> wrote:
> Hi there.
> Sorry for the email, but I tested this import option with an Ubuntu ova
> which has space in the vmdk name and I believe that could lead to errors.
> I had downloaded from here:
>
> https://razaoinfo.dl.sourceforge.net/project/osboxes/v/vb/55-U-u/OVA-Versions/24.04/Ubuntu-24.04-64bit-VB.ova
> When I tried to import got error 500!
> Unfortunately I didn't get the entire error message or any logs.
> Is it some kind of error or bug?
>
> Nevertheless, I would like to say thanks for this amazing feature.
>
> Cheers
>
>
>
>
>
>
> On Tue, 19 Nov 2024 at 05:49, Thomas Lamprecht <t.lamprecht@proxmox.com> wrote:
>
> > Am 19.11.24 um 09:15 schrieb Lukas Wagner:
> > > On Mon Nov 18, 2024 at 2:53 PM CET, Dominik Csapak wrote:
> > >>>> nothing special, I tested the feature using the Home Assistant .ova
> > from
> > >>>> [1]. Downloaded the OVA to my local storage, pressed "import", did
> not
> > >>>> change *any* settings apart from ticking "live import".
> > >>>> The import works, but the VM is not started. Starting the VM
> manually
> > >>>> works fine, also the tasks log does not show anything of concern.
> > >>>> In the browser network requests I saw that 'live-restore' is set to
> > '1'
> > >>>> in the import POST request.
> > >>>>
> > >>>> [1] https://www.home-assistant.io/installation/alternative/
> > >>>
> > >>>
> > >>> mhmm can reproduce with that ova image, maybe it has something to do
> > with our
> > >>> ovf import not detecting the main disk?
> > >>> (at least here it does not detect any disk)
> > >>>
> > >>>
> > >>
> > >> ok two things here:
> > >>
> > >> * this ova is malformed it seems, it has a HostResource of
> > '/disk/vmidsk1' when
> > >> the spec says it should be of the form 'ovf:/disk/vmdisk1' which is
> > what we look for
> > >> (we could make that optional though?)
> > >>
> > >> * seemingly live-import does not start the vm if there was no disk,
> but
> > AFAICS
> > >> this was pre-existing and not something my series introduced (but not
> > sure either)
> > >>
> > >
> > > Ah, thanks for the investigation. Maybe it'd be worth to check whether
> > > other hypervisors accept the malformed resource definition and then add
> > > support to PVE as well if this is the case.
> >
> >
> > Btw. it should work now already, the GNS3 image that Filip linked to in
> > his reply
> > used the same format, so this seems to be relatively common.
> >
> > I used the GNS3 one yesterday as test case to integrate some detection
> > quirks, I
> > also had to accept whitespace in the OVA disk names (we normalize those
> as
> > standard
> > PVE volume name through import anyway).
> >
> > I then also imported A HAOS OVA image, that works in general, but the
> disk
> > was not
> > added as boot device, and the disk bus was LSI, I needed to change both
> to
> > make the
> > disk available for OVMF. IIRC the OS type was detected "Others" on
> import,
> > I changed
> > that, but seems not all defaults changed with it – maybe one could look
> > into that?
> > HAOS is a common usecase, I alone installed it twice this year, back then
> > I had to
> > use the qcow2 image and manually import that like a cave man though hehe
* Re: [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages
2024-11-20 16:57 ` Gilberto Ferreira via pve-devel
@ 2024-11-21 8:24 ` Dominik Csapak
2024-11-21 12:05 ` Gilberto Ferreira via pve-devel
0 siblings, 1 reply; 68+ messages in thread
From: Dominik Csapak @ 2024-11-21 8:24 UTC (permalink / raw)
To: Proxmox VE development discussion
On 11/20/24 17:57, Gilberto Ferreira via pve-devel wrote:
>
> Ok. Here is the error message:
> referenced path 'Ubuntu 24.04 (64bit)-disk001.vmdk' is invalid (500)
>
Hi,
it's probably the parentheses in the filename.
Would you mind opening a bug in our Bugzilla so we can keep track of that?
If you find other OVAs that don't work, please add them there too :)
Thanks
Dominik
* Re: [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages
2024-11-21 8:24 ` Dominik Csapak
@ 2024-11-21 12:05 ` Gilberto Ferreira via pve-devel
2024-11-21 12:23 ` Gilberto Ferreira via pve-devel
0 siblings, 1 reply; 68+ messages in thread
From: Gilberto Ferreira via pve-devel @ 2024-11-21 12:05 UTC (permalink / raw)
To: Dominik Csapak; +Cc: Gilberto Ferreira, Proxmox VE development discussion
From: Gilberto Ferreira <gilberto.nunes32@gmail.com>
To: Dominik Csapak <d.csapak@proxmox.com>
Cc: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages
Date: Thu, 21 Nov 2024 09:05:34 -0300
Message-ID: <CAOKSTBvBtk9CmgHSgLX8E45D2nk+WaSAL5xLiU_q0sONEUAmrA@mail.gmail.com>
Never mind.
I tried to import via VirtualBox and got the same result.
I guess the OVA from that website has some malformed files.
But I will eventually open a bug report if I encounter further issues.
Thanks
Cheers
---
On Thu, 21 Nov 2024 at 05:24, Dominik Csapak <d.csapak@proxmox.com> wrote:
> On 11/20/24 17:57, Gilberto Ferreira via pve-devel wrote:
> >
> > Ok. Here is the error message:
> > referenced path 'Ubuntu 24.04 (64bit)-disk001.vmdk' is invalid (500)
> >
>
> hi,
>
> it's probably the parenthesis in the filename.
> would you mind opening a bug in our bugzilla so we can keep track of that?
>
> if you find other ovas that don't work please add them too there :)
>
> thanks
> Dominik
>
>
* Re: [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages
2024-11-21 12:05 ` Gilberto Ferreira via pve-devel
@ 2024-11-21 12:23 ` Gilberto Ferreira via pve-devel
2024-11-21 12:34 ` Fabian Grünbichler
0 siblings, 1 reply; 68+ messages in thread
From: Gilberto Ferreira via pve-devel @ 2024-11-21 12:23 UTC (permalink / raw)
To: Proxmox VE development discussion; +Cc: Gilberto Ferreira
From: Gilberto Ferreira <gilberto.nunes32@gmail.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages
Date: Thu, 21 Nov 2024 09:23:01 -0300
Message-ID: <CAOKSTBvWGTFUOKSqV=GgFNX+-wigVjVZv=PyHV1Rj0eSXm5Agg@mail.gmail.com>
By the way, let me take the opportunity to ask a question: Are there any
plans to make this import tool accept other image formats like raw, qcow2,
vmdk or vhd(x) without OVA format?
Sometimes, all we have is a VHD file, generated by disk2vhd, for example.
That would be great.
Cheers
On Thu, 21 Nov 2024 at 09:06, Gilberto Ferreira via pve-devel <pve-devel@lists.proxmox.com> wrote:
> Never mind.
> I tried to import via VirtualBox and got the same result.
> I guess the ova from that web site has some malformed files.
> But eventually I will open a bug report, if I encounter further issues.
> Thanks
> Cheers
>
> ---
>
>
>
>
>
>
> On Thu, 21 Nov 2024 at 05:24, Dominik Csapak <d.csapak@proxmox.com> wrote:
>
> > On 11/20/24 17:57, Gilberto Ferreira via pve-devel wrote:
> > >
> > > Ok. Here is the error message:
> > > referenced path 'Ubuntu 24.04 (64bit)-disk001.vmdk' is invalid (500)
> > >
> >
> > hi,
> >
> > it's probably the parenthesis in the filename.
> > would you mind opening a bug in our bugzilla so we can keep track of
> that?
> >
> > if you find other ovas that don't work please add them too there :)
> >
> > thanks
> > Dominik
> >
> >
* Re: [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages
2024-11-21 12:23 ` Gilberto Ferreira via pve-devel
@ 2024-11-21 12:34 ` Fabian Grünbichler
2024-11-22 18:10 ` Gilberto Ferreira via pve-devel
0 siblings, 1 reply; 68+ messages in thread
From: Fabian Grünbichler @ 2024-11-21 12:34 UTC (permalink / raw)
To: Proxmox VE development discussion
> Gilberto Ferreira via pve-devel <pve-devel@lists.proxmox.com> hat am 21.11.2024 13:23 CET geschrieben:
> By the way, let me take the opportunity to ask a question: Are there any
> plans to make this import tool accept other image formats like raw, qcow2,
> vmdk or vhd(x) without OVA format?
> Sometimes, all we have is a VHD file, generated by disk2vhd, for example.
> That would be great.
You can already do that (for raw, vmdk, qcow2) via `import-from=` when creating a VM or updating its config, i.e.
"qm create/set XXX [..] --scsi0 target_storage:-1,import-from=volume-to-import-from,other_options"
or the corresponding API call - not on the UI yet, but planned at some point ;).
Just beware - there are no checks on the input image, so only do this with files from a trusted source
(arbitrary paths require root as usual, regular volumes on a PVE storage just require access to that volume).
I think for vhd files you should be able to convert those first into one of the "agreeable" formats ;)
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages
2024-11-21 12:34 ` Fabian Grünbichler
@ 2024-11-22 18:10 ` Gilberto Ferreira via pve-devel
0 siblings, 0 replies; 68+ messages in thread
From: Gilberto Ferreira via pve-devel @ 2024-11-22 18:10 UTC (permalink / raw)
To: Fabian Grünbichler
Cc: Gilberto Ferreira, Proxmox VE development discussion
From: Gilberto Ferreira <gilberto.nunes32@gmail.com>
To: "Fabian Grünbichler" <f.gruenbichler@proxmox.com>
Cc: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages
Date: Fri, 22 Nov 2024 15:10:53 -0300
Message-ID: <CAOKSTBsPM=4n5v5nngGmw=3z1cDrxr-ZzPSPw2nUb=yDHXjmUQ@mail.gmail.com>
Oh! I see. Thanks.
Cheers
On Thu, 21 Nov 2024 at 09:34, Fabian Grünbichler <f.gruenbichler@proxmox.com> wrote:
>
> > Gilberto Ferreira via pve-devel <pve-devel@lists.proxmox.com> hat am
> 21.11.2024 13:23 CET geschrieben:
> > By the way, let me take the opportunity to ask a question: Are there any
> > plans to make this import tool accept other image formats like raw,
> qcow2,
> > vmdk or vhd(x) without OVA format?
> > Sometimes, all we have is a VHD file, generated by disk2vhd, for example.
> > That would be great.
>
> you can already do that (for raw, vmdk, qcow2) via `import-from=` when
> creating a VM or updating its config (i.e., "qm create/set XXX [..] --scsi0
> target_storage:-1,import-from=volume-to-import-from,other_options" or the
> corresponding API call - not on the UI yet, but planned at some point ;)).
> just beware - there are no checks on the input image, so only do this with
> files from a trusted source (arbitrary paths require root as usual, regular
> volumes on a PVE storage just require access to that volume). I think for
> vhd files you should be able to convert those first into one of the
> "agreeable" formats ;)
>
>
* Re: [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages
2024-11-15 15:17 [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages Dominik Csapak
` (28 preceding siblings ...)
2024-11-18 13:06 ` Lukas Wagner
@ 2024-11-18 14:35 ` Daniel Herzig
2024-11-18 15:01 ` Daniel Herzig
2024-11-18 15:33 ` Dominik Csapak
30 siblings, 1 reply; 68+ messages in thread
From: Daniel Herzig @ 2024-11-18 14:35 UTC (permalink / raw)
To: Dominik Csapak; +Cc: pve-devel
I've just tested this series with the following images:
+ GNS3 with VMware ESXi image from https://www.gns3.com/software/download-vm,
unzipped and uploaded to local dir storage.
+ Ubuntu Noble from https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.ova,
downloaded straight through the 'Download from URL' button.
GNS3 imports nicely and runs.
Cannot really tell about noble -- it imports nicely
but will get stuck during startup with default settings
(~btrfs loaded, zoned=yes, fsverity=yes~). Fiddling with
hardware settings tends to let it boot, but I haven't yet managed to
provide it with a username and password that I'd know.
Best,
Daniel
Things that I've encountered:
+ Download from URL does not support zipped images.
+ If I use a root-mounted NFS share as extraction storage, I get the following
(not having a clue where the chown comes from):
------------------------------------------------------------
extracting local:import/GNS3_VM.ova/GNS3_VM-disk1.vmdk
tar: GNS3_VM-disk1.vmdk: Cannot change ownership to uid 64, gid 64: Operation not permitted
tar: Exiting with failure status due to previous errors
TASK ERROR: unable to create VM 112 - error during extraction: command
'tar -x --force-local \
-C /mnt/pve/nfs/images/112/tmp_3017_112 -f /var/lib/vz/import/GNS3_VM.ova GNS3_VM-disk1.vmdk' failed: exit code 2
------------------------------------------------------------
+ Although I understand the purpose, it did not feel very 'natural'
having to add the 'disk image' storage type to the local directory with
the .ova-file for a successful import.
* Re: [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages
2024-11-18 14:35 ` Daniel Herzig
@ 2024-11-18 15:01 ` Daniel Herzig
0 siblings, 0 replies; 68+ messages in thread
From: Daniel Herzig @ 2024-11-18 15:01 UTC (permalink / raw)
To: Dominik Csapak; +Cc: Proxmox VE development discussion
Daniel Herzig <d.herzig@proxmox.com> writes:
> I've just tested this series with the following images:
>
> + GNS3 with VMware ESXi image from https://www.gns3.com/software/download-vm,
> unzipped and uploaded to local dir storage.
> + Ubuntu Noble from https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.ova,
> downloaded straight through the 'Download from URL' button.
>
> GNS3 imports nicely and runs.
>
> Cannot really tell about noble -- it imports nicely
> but will get stuck during startup with default settings
> (~btrfs loaded, zoned=yes, fsverity=yes~). Fiddling with
> hardware settings tends to let it boot, but I haven't yet managed to
> provide it with a username and password that I'd know.
>
Finally got it:
+ VirtIO SCSI single as SCSI Controller (from LSI53C895A).
+ VirtIO paravirtualized (from VMware vmxnet3).
+ Cloudinit drive with username, password and ip=dhcp.
Works nicely for Noble in my SDN simple zone setup!
> Best,
> Daniel
>
> Things that I've encountered:
> + Download from URL does not support zipped images.
> + If I use a root-mounted nfs share as extraction storage, I yield a
> (not having a clue where the chown comes from):
> ------------------------------------------------------------
> extracting local:import/GNS3_VM.ova/GNS3_VM-disk1.vmdk
> tar: GNS3_VM-disk1.vmdk: Cannot change ownership to uid 64, gid 64: Operation not permitted
> tar: Exiting with failure status due to previous errors
> TASK ERROR: unable to create VM 112 - error during extraction: command
> 'tar -x --force-local \
> -C /mnt/pve/nfs/images/112/tmp_3017_112 -f /var/lib/vz/import/GNS3_VM.ova GNS3_VM-disk1.vmdk' failed: exit code 2
> ------------------------------------------------------------
> + Although I understand the purpose, it did not feel very 'natural'
> having to add the 'disk image' storage type to the local directory with
> the .ova-file for a successful import.
* Re: [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages
2024-11-15 15:17 [pve-devel] [PATCH storage/qemu-server/manager v6] implement ova/ovf import for file based storages Dominik Csapak
` (29 preceding siblings ...)
2024-11-18 14:35 ` Daniel Herzig
@ 2024-11-18 15:33 ` Dominik Csapak
30 siblings, 0 replies; 68+ messages in thread
From: Dominik Csapak @ 2024-11-18 15:33 UTC (permalink / raw)
To: pve-devel
sent a v7:
https://lore.proxmox.com/pve-devel/20241118152928.858590-1-d.csapak@proxmox.com/