* [pve-devel] [PATCH v13 qemu-server 1/8] clone disk: assert that drive name is the same for drive-mirror on single VM
From: Fabian Ebner @ 2022-03-17 11:30 UTC (permalink / raw)
To: pve-devel
because when the VM IDs of source and target are the same,
qemu_drive_mirror_monitor() switches the QEMU device node over to the
new backing image. The planned import-from functionality makes it
possible to run into this, although only for a somewhat unusual use case.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---
New in v13.
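To illustrate the scenario from the commit message with the planned import-from syntax (introduced later in this series); the VM ID, storage and volume names below are made up:
    # vm-100-disk-0 is attached as scsi0 of the running VM 100; importing it
    # into the same VM under a different drive name would mirror the drive
    # while changing the drive name, which is now rejected
    qm set 100 --scsi1 local-lvm:0,import-from=local-lvm:vm-100-disk-0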
PVE/QemuServer.pm | 18 ++++++++++++------
1 file changed, 12 insertions(+), 6 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 6a9f6b31..dd6f48f3 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -7585,11 +7585,17 @@ sub clone_disk {
my ($newvmid, $dst_drivename, $efisize) = $dest->@{qw(vmid drivename efisize)};
my ($storage, $format) = $dest->@{qw(storage format)};
+ my $use_drive_mirror = $full && $running && $src_drivename && !$snapname;
+
if ($src_drivename && $dst_drivename && $src_drivename ne $dst_drivename) {
die "cloning from/to EFI disk requires EFI disk\n"
if $src_drivename eq 'efidisk0' || $dst_drivename eq 'efidisk0';
die "cloning from/to TPM state requires TPM state\n"
if $src_drivename eq 'tpmstate0' || $dst_drivename eq 'tpmstate0';
+
+ # This would lead to two device nodes in QEMU pointing to the same backing image!
+ die "cannot change drive name when cloning disk from/to the same VM\n"
+ if $use_drive_mirror && $vmid == $newvmid;
}
my $newvolid;
@@ -7644,7 +7650,12 @@ sub clone_disk {
}
my $sparseinit = PVE::Storage::volume_has_feature($storecfg, 'sparseinit', $newvolid);
- if (!$running || !$src_drivename || $snapname) {
+ if ($use_drive_mirror) {
+ die "cannot move TPM state while VM is running\n" if $src_drivename eq 'tpmstate0';
+
+ qemu_drive_mirror($vmid, $src_drivename, $newvolid, $newvmid, $sparseinit, $jobs,
+ $completion, $qga, $bwlimit);
+ } else {
# TODO: handle bwlimits
if ($dst_drivename eq 'efidisk0') {
# the relevant data on the efidisk may be smaller than the source
@@ -7661,11 +7672,6 @@ sub clone_disk {
} else {
qemu_img_convert($drive->{file}, $newvolid, $size, $snapname, $sparseinit);
}
- } else {
- die "cannot move TPM state while VM is running\n" if $src_drivename eq 'tpmstate0';
-
- qemu_drive_mirror($vmid, $src_drivename, $newvolid, $newvmid, $sparseinit, $jobs,
- $completion, $qga, $bwlimit);
}
}
--
2.30.2
* [pve-devel] [PATCH v13 qemu-server 2/8] clone disk: move check against cloning TPM state of running VM to beginning
From: Fabian Ebner @ 2022-03-17 11:31 UTC (permalink / raw)
To: pve-devel
where other similar checks are.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---
New in v13.
PVE/QemuServer.pm | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index dd6f48f3..907cfc09 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -7598,6 +7598,9 @@ sub clone_disk {
if $use_drive_mirror && $vmid == $newvmid;
}
+ die "cannot move TPM state while VM is running\n"
+ if $use_drive_mirror && $src_drivename eq 'tpmstate0';
+
my $newvolid;
print "create " . ($full ? 'full' : 'linked') . " clone of drive ";
@@ -7651,8 +7654,6 @@ sub clone_disk {
my $sparseinit = PVE::Storage::volume_has_feature($storecfg, 'sparseinit', $newvolid);
if ($use_drive_mirror) {
- die "cannot move TPM state while VM is running\n" if $src_drivename eq 'tpmstate0';
-
qemu_drive_mirror($vmid, $src_drivename, $newvolid, $newvmid, $sparseinit, $jobs,
$completion, $qga, $bwlimit);
} else {
--
2.30.2
* [pve-devel] [PATCH v13 qemu-server 3/8] api: clone vm: check against cloning running TPM state early
From: Fabian Ebner @ 2022-03-17 11:31 UTC (permalink / raw)
To: pve-devel
Drive keys are iterated in sorted order when cloning, and 'tpmstate0' comes
late, so it was likely that potentially large disks had already been copied,
only to be removed again because of the TPM state restriction at the end.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---
New in v13.
PVE/API2/Qemu.pm | 3 +++
1 file changed, 3 insertions(+)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index cb6973f1..15592d7a 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -3130,6 +3130,9 @@ __PACKAGE__->register_method({
# no need to copy unused images, because VMID(owner) changes anyways
next if $opt =~ m/^unused\d+$/;
+ die "cannot clone TPM state while VM is running\n"
+ if $full && $running && !$snapname && $opt eq 'tpmstate0';
+
# always change MAC! address
if ($opt =~ m/^net(\d+)$/) {
my $net = PVE::QemuServer::parse_net($value);
--
2.30.2
* [pve-devel] [PATCH v13 qemu-server 4/8] clone disk: also clone EFI disk from snapshot
From: Fabian Ebner @ 2022-03-17 11:31 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---
New in v13.
Dependency bump for QEMU 6.2 needed for qemu-img dd's -l option.
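With the -l option, the resulting command looks roughly like the following; paths, snapshot name and sizes are only illustrative:
    qemu-img dd -n -O raw -l mysnap bs=1048576 osize=540672 \
        if=/mnt/pve/store/images/100/vm-100-disk-1.qcow2 \
        of=/dev/zvol/rpool/data/vm-101-disk-0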
PVE/QemuServer.pm | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 907cfc09..a24309d2 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -7662,14 +7662,18 @@ sub clone_disk {
# the relevant data on the efidisk may be smaller than the source
# e.g. on RBD/ZFS, so we use dd to copy only the amount
# that is given by the OVMF_VARS.fd
- my $src_path = PVE::Storage::path($storecfg, $drive->{file});
+ my $src_path = PVE::Storage::path($storecfg, $drive->{file}, $snapname);
my $dst_path = PVE::Storage::path($storecfg, $newvolid);
+ my $src_format = (PVE::Storage::parse_volname($storecfg, $drive->{file}))[6];
+
# better for Ceph if block size is not too small, see bug #3324
my $bs = 1024*1024;
- run_command(['qemu-img', 'dd', '-n', '-O', $dst_format, "bs=$bs", "osize=$size",
- "if=$src_path", "of=$dst_path"]);
+ my $cmd = ['qemu-img', 'dd', '-n', '-O', $dst_format];
+ push $cmd->@*, '-l', $snapname if $src_format eq 'qcow2' && $snapname;
+ push $cmd->@*, "bs=$bs", "osize=$size", "if=$src_path", "of=$dst_path";
+ run_command($cmd);
} else {
qemu_img_convert($drive->{file}, $newvolid, $size, $snapname, $sparseinit);
}
--
2.30.2
* [pve-devel] [PATCH v13 qemu-server 5/8] api: add endpoint for parsing .ovf files
From: Fabian Ebner @ 2022-03-17 11:31 UTC (permalink / raw)
To: pve-devel
Co-developed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Dominic Jäger <d.jaeger@proxmox.com>
[split into its own patch + minor improvements/style fixes]
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---
Changes from v12:
* Add protected => 1, so it's not limited to files accessible by
www-data.
* Get rid of duplicate 'return' key in schema.
* Drop unused parameter from json_ovf_properties().
PVE/API2/Qemu/Makefile | 2 +-
PVE/API2/Qemu/OVF.pm | 53 ++++++++++++++++++++++++++++++++++++++++++
PVE/QemuServer.pm | 32 +++++++++++++++++++++++++
3 files changed, 86 insertions(+), 1 deletion(-)
create mode 100644 PVE/API2/Qemu/OVF.pm
diff --git a/PVE/API2/Qemu/Makefile b/PVE/API2/Qemu/Makefile
index 5d4abda6..bdd4762b 100644
--- a/PVE/API2/Qemu/Makefile
+++ b/PVE/API2/Qemu/Makefile
@@ -1,4 +1,4 @@
-SOURCES=Agent.pm CPU.pm Machine.pm
+SOURCES=Agent.pm CPU.pm Machine.pm OVF.pm
.PHONY: install
install:
diff --git a/PVE/API2/Qemu/OVF.pm b/PVE/API2/Qemu/OVF.pm
new file mode 100644
index 00000000..bb55f0b8
--- /dev/null
+++ b/PVE/API2/Qemu/OVF.pm
@@ -0,0 +1,53 @@
+package PVE::API2::Qemu::OVF;
+
+use strict;
+use warnings;
+
+use PVE::JSONSchema qw(get_standard_option);
+use PVE::QemuServer::OVF;
+use PVE::RESTHandler;
+
+use base qw(PVE::RESTHandler);
+
+__PACKAGE__->register_method ({
+ name => 'index',
+ path => '',
+ method => 'GET',
+ proxyto => 'node',
+ description => "Read an .ovf manifest.",
+ protected => 1,
+ parameters => {
+ additionalProperties => 0,
+ properties => {
+ node => get_standard_option('pve-node'),
+ manifest => {
+ description => "Path to .ovf manifest.",
+ type => 'string',
+ },
+ },
+ },
+ returns => {
+ type => 'object',
+ additionalProperties => 1,
+ properties => PVE::QemuServer::json_ovf_properties(),
+ description => "VM config according to .ovf manifest.",
+ },
+ code => sub {
+ my ($param) = @_;
+
+ my $manifest = $param->{manifest};
+ die "check for file $manifest failed - $!\n" if !-f $manifest;
+
+ my $parsed = PVE::QemuServer::OVF::parse_ovf($manifest);
+ my $result;
+ $result->{cores} = $parsed->{qm}->{cores};
+ $result->{name} = $parsed->{qm}->{name};
+ $result->{memory} = $parsed->{qm}->{memory};
+ my $disks = $parsed->{disks};
+ for my $disk (@$disks) {
+ $result->{$disk->{disk_address}} = $disk->{backing_file};
+ }
+ return $result;
+ }});
+
+1;
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index a24309d2..3bdc55a6 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2203,6 +2203,38 @@ sub json_config_properties {
return $prop;
}
+# Properties that we can read from an OVF file
+sub json_ovf_properties {
+ my $prop = {};
+
+ for my $device (PVE::QemuServer::Drive::valid_drive_names()) {
+ $prop->{$device} = {
+ type => 'string',
+ format => 'pve-volume-id-or-absolute-path',
+ description => "Disk image that gets imported to $device",
+ optional => 1,
+ };
+ }
+
+ $prop->{cores} = {
+ type => 'integer',
+ description => "The number of CPU cores.",
+ optional => 1,
+ };
+ $prop->{memory} = {
+ type => 'integer',
+ description => "Amount of RAM for the VM in MB.",
+ optional => 1,
+ };
+ $prop->{name} = {
+ type => 'string',
+ description => "Name of the VM.",
+ optional => 1,
+ };
+
+ return $prop;
+}
+
# return copy of $confdesc_cloudinit to generate documentation
sub cloudinit_config_properties {
--
2.30.2
* [pve-devel] [PATCH v13 qemu-server 6/8] schema: drive: use separate schema when disk allocation is possible
From: Fabian Ebner @ 2022-03-17 11:31 UTC (permalink / raw)
To: pve-devel
via the special syntax <storeid>:<size>.
Not worth it by itself, but this anticipates a new 'import-from'
parameter which is only used upon import/allocation and shouldn't be
part of the schema for the config or other API endpoints.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---
Changes from v12:
* Include adaptation of {parse,print}_drive and callers already
in this patch rather than the next.
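As a reminder, the allocation syntax that the new '_with_alloc' schema keeps describing looks like this (VM ID, storage and size are placeholders):
    # allocate a new 32 GiB volume on storage 'local-lvm' and attach it as scsi1
    qm set 100 --scsi1 local-lvm:32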
PVE/API2/Qemu.pm | 41 +++++++++++++++------
PVE/QemuServer.pm | 9 +++--
PVE/QemuServer/Drive.pm | 79 +++++++++++++++++++++++++++++------------
3 files changed, 94 insertions(+), 35 deletions(-)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 15592d7a..1082d5e3 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -63,28 +63,43 @@ my $resolve_cdrom_alias = sub {
}
};
+# Used in import-enabled API endpoints. Parses drives using the extended '_with_alloc' schema.
+my $foreach_volume_with_alloc = sub {
+ my ($param, $func) = @_;
+
+ for my $opt (sort keys $param->%*) {
+ next if !PVE::QemuServer::is_valid_drivename($opt);
+
+ my $drive = PVE::QemuServer::Drive::parse_drive($opt, $param->{$opt}, 1);
+ next if !$drive;
+
+ $func->($opt, $drive);
+ }
+};
+
+my $NEW_DISK_RE = qr!^(([^/:\s]+):)?(\d+(\.\d+)?)$!;
+
my $check_drive_param = sub {
my ($param, $storecfg, $extra_checks) = @_;
for my $opt (sort keys $param->%*) {
next if !PVE::QemuServer::is_valid_drivename($opt);
- my $drive = PVE::QemuServer::parse_drive($opt, $param->{$opt});
+ my $drive = PVE::QemuServer::parse_drive($opt, $param->{$opt}, 1);
raise_param_exc({ $opt => "unable to parse drive options" }) if !$drive;
PVE::QemuServer::cleanup_drive_path($opt, $storecfg, $drive);
$extra_checks->($drive) if $extra_checks;
- $param->{$opt} = PVE::QemuServer::print_drive($drive);
+ $param->{$opt} = PVE::QemuServer::print_drive($drive, 1);
}
};
-my $NEW_DISK_RE = qr!^(([^/:\s]+):)?(\d+(\.\d+)?)$!;
my $check_storage_access = sub {
my ($rpcenv, $authuser, $storecfg, $vmid, $settings, $default_storage) = @_;
- PVE::QemuConfig->foreach_volume($settings, sub {
+ $foreach_volume_with_alloc->($settings, sub {
my ($ds, $drive) = @_;
my $isCDROM = PVE::QemuServer::drive_is_cdrom($drive);
@@ -242,7 +257,7 @@ my $create_disks = sub {
}
};
- eval { PVE::QemuConfig->foreach_volume($settings, $code); };
+ eval { $foreach_volume_with_alloc->($settings, $code); };
# free allocated images on error
if (my $err = $@) {
@@ -569,7 +584,9 @@ __PACKAGE__->register_method({
default => 0,
description => "Start VM after it was created successfully.",
},
- }),
+ },
+ 1, # with_disk_alloc
+ ),
},
returns => {
type => 'string',
@@ -1283,7 +1300,7 @@ my $update_vm_api = sub {
my $check_drive_perms = sub {
my ($opt, $val) = @_;
- my $drive = PVE::QemuServer::parse_drive($opt, $val);
+ my $drive = PVE::QemuServer::parse_drive($opt, $val, 1);
# FIXME: cloudinit: CDROM or Disk?
if (PVE::QemuServer::drive_is_cdrom($drive)) { # CDROM
$rpcenv->check_vm_perm($authuser, $vmid, undef, ['VM.Config.CDROM']);
@@ -1389,7 +1406,7 @@ my $update_vm_api = sub {
# default legacy boot order implies all cdroms anyway
if (@bootorder) {
# append new CD drives to bootorder to mark them bootable
- my $drive = PVE::QemuServer::parse_drive($opt, $param->{$opt});
+ my $drive = PVE::QemuServer::parse_drive($opt, $param->{$opt}, 1);
if (PVE::QemuServer::drive_is_cdrom($drive, 1) && !grep(/^$opt$/, @bootorder)) {
push @bootorder, $opt;
$conf->{pending}->{boot} = PVE::QemuServer::print_bootorder(\@bootorder);
@@ -1552,7 +1569,9 @@ __PACKAGE__->register_method({
maximum => 30,
optional => 1,
},
- }),
+ },
+ 1, # with_disk_alloc
+ ),
},
returns => {
type => 'string',
@@ -1600,7 +1619,9 @@ __PACKAGE__->register_method({
maxLength => 40,
optional => 1,
},
- }),
+ },
+ 1, # with_disk_alloc
+ ),
},
returns => { type => 'null' },
code => sub {
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 3bdc55a6..7b335cc1 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2184,7 +2184,7 @@ sub verify_usb_device {
# add JSON properties for create and set function
sub json_config_properties {
- my $prop = shift;
+ my ($prop, $with_disk_alloc) = @_;
my $skip_json_config_opts = {
parent => 1,
@@ -2197,7 +2197,12 @@ sub json_config_properties {
foreach my $opt (keys %$confdesc) {
next if $skip_json_config_opts->{$opt};
- $prop->{$opt} = $confdesc->{$opt};
+
+ if ($with_disk_alloc && is_valid_drivename($opt)) {
+ $prop->{$opt} = $PVE::QemuServer::Drive::drivedesc_hash_with_alloc->{$opt};
+ } else {
+ $prop->{$opt} = $confdesc->{$opt};
+ }
}
return $prop;
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index 7b82fb22..cebf1730 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -3,6 +3,8 @@ package PVE::QemuServer::Drive;
use strict;
use warnings;
+use Storable qw(dclone);
+
use PVE::Storage;
use PVE::JSONSchema qw(get_standard_option);
@@ -33,6 +35,8 @@ our $MAX_SATA_DISKS = 6;
our $MAX_UNUSED_DISKS = 256;
our $drivedesc_hash;
+# Schema when disk allocation is possible.
+our $drivedesc_hash_with_alloc = {};
my %drivedesc_base = (
volume => { alias => 'file' },
@@ -262,14 +266,10 @@ my $ide_fmt = {
};
PVE::JSONSchema::register_format("pve-qm-ide", $ide_fmt);
-my $ALLOCATION_SYNTAX_DESC =
- "Use the special syntax STORAGE_ID:SIZE_IN_GiB to allocate a new volume.";
-
my $idedesc = {
optional => 1,
type => 'string', format => $ide_fmt,
- description => "Use volume as IDE hard disk or CD-ROM (n is 0 to " .($MAX_IDE_DISKS -1) . "). " .
- $ALLOCATION_SYNTAX_DESC,
+ description => "Use volume as IDE hard disk or CD-ROM (n is 0 to " .($MAX_IDE_DISKS - 1) . ").",
};
PVE::JSONSchema::register_standard_option("pve-qm-ide", $idedesc);
@@ -285,8 +285,7 @@ my $scsi_fmt = {
my $scsidesc = {
optional => 1,
type => 'string', format => $scsi_fmt,
- description => "Use volume as SCSI hard disk or CD-ROM (n is 0 to " . ($MAX_SCSI_DISKS - 1) . "). " .
- $ALLOCATION_SYNTAX_DESC,
+ description => "Use volume as SCSI hard disk or CD-ROM (n is 0 to " . ($MAX_SCSI_DISKS - 1) . ").",
};
PVE::JSONSchema::register_standard_option("pve-qm-scsi", $scsidesc);
@@ -298,8 +297,7 @@ my $sata_fmt = {
my $satadesc = {
optional => 1,
type => 'string', format => $sata_fmt,
- description => "Use volume as SATA hard disk or CD-ROM (n is 0 to " . ($MAX_SATA_DISKS - 1). "). " .
- $ALLOCATION_SYNTAX_DESC,
+ description => "Use volume as SATA hard disk or CD-ROM (n is 0 to " . ($MAX_SATA_DISKS - 1). ").",
};
PVE::JSONSchema::register_standard_option("pve-qm-sata", $satadesc);
@@ -311,8 +309,7 @@ my $virtio_fmt = {
my $virtiodesc = {
optional => 1,
type => 'string', format => $virtio_fmt,
- description => "Use volume as VIRTIO hard disk (n is 0 to " . ($MAX_VIRTIO_DISKS - 1) . "). " .
- $ALLOCATION_SYNTAX_DESC,
+ description => "Use volume as VIRTIO hard disk (n is 0 to " . ($MAX_VIRTIO_DISKS - 1) . ").",
};
PVE::JSONSchema::register_standard_option("pve-qm-virtio", $virtiodesc);
@@ -359,9 +356,7 @@ my $efidisk_fmt = {
my $efidisk_desc = {
optional => 1,
type => 'string', format => $efidisk_fmt,
- description => "Configure a Disk for storing EFI vars. " .
- $ALLOCATION_SYNTAX_DESC . " Note that SIZE_IN_GiB is ignored here " .
- "and that the default EFI vars are copied to the volume instead.",
+ description => "Configure a Disk for storing EFI vars.",
};
PVE::JSONSchema::register_standard_option("pve-qm-efidisk", $efidisk_desc);
@@ -397,10 +392,7 @@ my $tpmstate_fmt = {
my $tpmstate_desc = {
optional => 1,
type => 'string', format => $tpmstate_fmt,
- description => "Configure a Disk for storing TPM state. " .
- $ALLOCATION_SYNTAX_DESC . " Note that SIZE_IN_GiB is ignored here " .
- "and that the default size of 4 MiB will always be used instead. The " .
- "format is also fixed to 'raw'.",
+ description => "Configure a Disk for storing TPM state. The format is fixed to 'raw'.",
};
use constant TPMSTATE_DISK_SIZE => 4 * 1024 * 1024;
@@ -417,6 +409,10 @@ my $alldrive_fmt = {
%efitype_fmt,
};
+my $alldrive_fmt_with_alloc = {
+ %$alldrive_fmt,
+};
+
my $unused_fmt = {
volume => { alias => 'file' },
file => {
@@ -434,27 +430,61 @@ my $unuseddesc = {
description => "Reference to unused volumes. This is used internally, and should not be modified manually.",
};
+my $with_alloc_desc_cache = {
+ unused => $unuseddesc, # Allocation for unused is not supported currently.
+};
+my $desc_with_alloc = sub {
+ my ($type, $desc) = @_;
+
+ return $with_alloc_desc_cache->{$type} if $with_alloc_desc_cache->{$type};
+
+ my $new_desc = dclone($desc);
+
+ my $extra_note = '';
+ if ($type eq 'efidisk') {
+ $extra_note = " Note that SIZE_IN_GiB is ignored here and that the default EFI vars are ".
+ "copied to the volume instead.";
+ } elsif ($type eq 'tpmstate') {
+ $extra_note = " Note that SIZE_IN_GiB is ignored here and 4 MiB will be used instead.";
+ }
+
+ $new_desc->{description} .= " Use the special syntax STORAGE_ID:SIZE_IN_GiB to allocate a new ".
+ "volume.${extra_note}";
+
+ $with_alloc_desc_cache->{$type} = $new_desc;
+
+ return $new_desc;
+};
+
for (my $i = 0; $i < $MAX_IDE_DISKS; $i++) {
$drivedesc_hash->{"ide$i"} = $idedesc;
+ $drivedesc_hash_with_alloc->{"ide$i"} = $desc_with_alloc->('ide', $idedesc);
}
for (my $i = 0; $i < $MAX_SATA_DISKS; $i++) {
$drivedesc_hash->{"sata$i"} = $satadesc;
+ $drivedesc_hash_with_alloc->{"sata$i"} = $desc_with_alloc->('sata', $satadesc);
}
for (my $i = 0; $i < $MAX_SCSI_DISKS; $i++) {
$drivedesc_hash->{"scsi$i"} = $scsidesc;
+ $drivedesc_hash_with_alloc->{"scsi$i"} = $desc_with_alloc->('scsi', $scsidesc);
}
for (my $i = 0; $i < $MAX_VIRTIO_DISKS; $i++) {
$drivedesc_hash->{"virtio$i"} = $virtiodesc;
+ $drivedesc_hash_with_alloc->{"virtio$i"} = $desc_with_alloc->('virtio', $virtiodesc);
}
$drivedesc_hash->{efidisk0} = $efidisk_desc;
+$drivedesc_hash_with_alloc->{efidisk0} = $desc_with_alloc->('efidisk', $efidisk_desc);
+
$drivedesc_hash->{tpmstate0} = $tpmstate_desc;
+$drivedesc_hash_with_alloc->{tpmstate0} = $desc_with_alloc->('tpmstate', $tpmstate_desc);
for (my $i = 0; $i < $MAX_UNUSED_DISKS; $i++) {
$drivedesc_hash->{"unused$i"} = $unuseddesc;
+ $drivedesc_hash_with_alloc->{"unused$i"} = $desc_with_alloc->('unused', $unuseddesc);
}
sub valid_drive_names_for_boot {
@@ -521,7 +551,7 @@ sub drive_is_read_only {
# [,iothread=on][,serial=serial][,model=model]
sub parse_drive {
- my ($key, $data) = @_;
+ my ($key, $data, $with_alloc) = @_;
my ($interface, $index);
@@ -532,12 +562,14 @@ sub parse_drive {
return;
}
- if (!defined($drivedesc_hash->{$key})) {
+ my $desc_hash = $with_alloc ? $drivedesc_hash_with_alloc : $drivedesc_hash;
+
+ if (!defined($desc_hash->{$key})) {
warn "invalid drive key: $key\n";
return;
}
- my $desc = $drivedesc_hash->{$key}->{format};
+ my $desc = $desc_hash->{$key}->{format};
my $res = eval { PVE::JSONSchema::parse_property_string($desc, $data) };
return if !$res;
$res->{interface} = $interface;
@@ -597,9 +629,10 @@ sub parse_drive {
}
sub print_drive {
- my ($drive) = @_;
+ my ($drive, $with_alloc) = @_;
my $skip = [ 'index', 'interface' ];
- return PVE::JSONSchema::print_property_string($drive, $alldrive_fmt, $skip);
+ my $fmt = $with_alloc ? $alldrive_fmt_with_alloc : $alldrive_fmt;
+ return PVE::JSONSchema::print_property_string($drive, $fmt, $skip);
}
sub get_bootdisks {
--
2.30.2
* [pve-devel] [PATCH v13 qemu-server 7/8] api: support VM disk import
From: Fabian Ebner @ 2022-03-17 11:31 UTC (permalink / raw)
To: pve-devel
From: Dominic Jäger <d.jaeger@proxmox.com>
Extend qm importdisk functionality to the API.
Co-authored-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Co-authored-by: Dominic Jäger <d.jaeger@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---
Changes from v12:
* Switch to raise_param_exc rather than die in API helper.
* Simplify check for PVE-managed volume.
* Assert that volume type for import-source is 'images'.
* Move existence/size check into $import_from_volid.
* Add comment about new skip-config-update (and existing skiplock)
options to do_import()
* Clarify that passing an absolute path to import-from means that
the user is responsible for ensuring that the source is not used by
something else.
there can be absolute paths pointing to PVE-managed volumes of
course ;). We /could/ use PVE::Storage::path_to_volume_id
to resolve such paths to volids, but that's a) automagic and b)
only works with file-based storages and not things like zvol
device paths.
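For reference, a sketch of how the new parameter is intended to be used via the CLI once this is applied; VM IDs, storage names and paths below are made up:
    # import from an absolute path (caller must ensure the source is not in use)
    qm set 100 --scsi1 local-lvm:0,import-from=/mnt/images/appliance-disk1.qcow2
    # full clone of an existing PVE-managed volume
    qm set 100 --virtio0 local-lvm:0,import-from=local-lvm:vm-200-disk-0
    # efidisk0 additionally requires 'efitype' and 'pre-enrolled-keys'
    qm set 100 --efidisk0 local-lvm:0,import-from=local-lvm:vm-200-disk-1,efitype=4m,pre-enrolled-keys=1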
PVE/API2/Qemu.pm | 208 +++++++++++++++++++++++++++++++----
PVE/QemuServer/Drive.pm | 18 ++-
PVE/QemuServer/ImportDisk.pm | 4 +-
3 files changed, 207 insertions(+), 23 deletions(-)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 1082d5e3..0e9f5c7e 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -21,8 +21,9 @@ use PVE::ReplicationConfig;
use PVE::GuestHelpers;
use PVE::QemuConfig;
use PVE::QemuServer;
-use PVE::QemuServer::Drive;
use PVE::QemuServer::CPUConfig;
+use PVE::QemuServer::Drive;
+use PVE::QemuServer::ImportDisk;
use PVE::QemuServer::Monitor qw(mon_cmd);
use PVE::QemuServer::Machine;
use PVE::QemuMigrate;
@@ -88,6 +89,28 @@ my $check_drive_param = sub {
my $drive = PVE::QemuServer::parse_drive($opt, $param->{$opt}, 1);
raise_param_exc({ $opt => "unable to parse drive options" }) if !$drive;
+ if ($drive->{'import-from'}) {
+ if ($drive->{file} !~ $NEW_DISK_RE || $3 != 0) {
+ raise_param_exc({
+ $opt => "'import-from' requires special syntax - ".
+ "use <storage ID>:0,import-from=<source>",
+ });
+ }
+
+ if ($opt eq 'efidisk0') {
+ for my $required (qw(efitype pre-enrolled-keys)) {
+ if (!defined($drive->{$required})) {
+ raise_param_exc({
+ $opt => "need to specify '$required' when using 'import-from'",
+ });
+ }
+ }
+ } elsif ($opt eq 'tpmstate0') {
+ raise_param_exc({ $opt => "need to specify 'version' when using 'import-from'" })
+ if !defined($drive->{version});
+ }
+ }
+
PVE::QemuServer::cleanup_drive_path($opt, $storecfg, $drive);
$extra_checks->($drive) if $extra_checks;
@@ -121,6 +144,21 @@ my $check_storage_access = sub {
} else {
PVE::Storage::check_volume_access($rpcenv, $authuser, $storecfg, $vmid, $volid);
}
+
+ if (my $src_image = $drive->{'import-from'}) {
+ my $src_vmid;
+ if (PVE::Storage::parse_volume_id($src_image, 1)) { # PVE-managed volume
+ (my $vtype, undef, $src_vmid) = PVE::Storage::parse_volname($storecfg, $src_image);
+ raise_param_exc({ $ds => "$src_image has wrong type '$vtype' - not an image" })
+ if $vtype ne 'images';
+ }
+
+ if ($src_vmid) { # might be actively used by VM and will be copied via clone_disk()
+ $rpcenv->check($authuser, "/vms/${src_vmid}", ['VM.Clone']);
+ } else {
+ PVE::Storage::check_volume_access($rpcenv, $authuser, $storecfg, $vmid, $src_image);
+ }
+ }
});
$rpcenv->check($authuser, "/storage/$settings->{vmstatestorage}", ['Datastore.AllocateSpace'])
@@ -179,6 +217,91 @@ my $check_storage_access_migrate = sub {
if !$scfg->{content}->{images};
};
+my $import_from_volid = sub {
+ my ($storecfg, $src_volid, $dest_info, $vollist) = @_;
+
+ die "could not get size of $src_volid\n"
+ if !PVE::Storage::volume_size_info($storecfg, $src_volid, 10);
+
+ die "cannot import from cloudinit disk\n"
+ if PVE::QemuServer::Drive::drive_is_cloudinit({ file => $src_volid });
+
+ my $src_vmid = (PVE::Storage::parse_volname($storecfg, $src_volid))[2];
+
+ my $src_vm_state = sub {
+ my $exists = $src_vmid && PVE::Cluster::get_vmlist()->{ids}->{$src_vmid} ? 1 : 0;
+
+ my $runs = 0;
+ if ($exists) {
+ eval { PVE::QemuConfig::assert_config_exists_on_node($src_vmid); };
+ die "owner VM $src_vmid not on local node\n" if $@;
+ $runs = PVE::QemuServer::Helpers::vm_running_locally($src_vmid) || 0;
+ }
+
+ return ($exists, $runs);
+ };
+
+ my ($src_vm_exists, $running) = $src_vm_state->();
+
+ die "cannot import from '$src_volid' - full clone feature is not supported\n"
+ if !PVE::Storage::volume_has_feature($storecfg, 'copy', $src_volid, undef, $running);
+
+ my $clonefn = sub {
+ my ($src_vm_exists_now, $running_now) = $src_vm_state->();
+
+ die "owner VM $src_vmid changed state unexpectedly\n"
+ if $src_vm_exists_now != $src_vm_exists || $running_now != $running;
+
+ my $src_conf = $src_vm_exists_now ? PVE::QemuConfig->load_config($src_vmid) : {};
+
+ my $src_drive = { file => $src_volid };
+ my $src_drivename;
+ PVE::QemuConfig->foreach_volume($src_conf, sub {
+ my ($ds, $drive) = @_;
+
+ return if $src_drivename;
+
+ if ($drive->{file} eq $src_volid) {
+ $src_drive = $drive;
+ $src_drivename = $ds;
+ }
+ });
+
+ my $source_info = {
+ vmid => $src_vmid,
+ running => $running_now,
+ drivename => $src_drivename,
+ drive => $src_drive,
+ snapname => undef,
+ };
+
+ my ($src_storeid) = PVE::Storage::parse_volume_id($src_volid);
+
+ return PVE::QemuServer::clone_disk(
+ $storecfg,
+ $source_info,
+ $dest_info,
+ 1,
+ $vollist,
+ undef,
+ undef,
+ $src_conf->{agent},
+ PVE::Storage::get_bandwidth_limit('clone', [$src_storeid, $dest_info->{storage}]),
+ );
+ };
+
+ my $cloned;
+ if ($running) {
+ $cloned = PVE::QemuConfig->lock_config_full($src_vmid, 30, $clonefn);
+ } elsif ($src_vmid) {
+ $cloned = PVE::QemuConfig->lock_config_shared($src_vmid, 30, $clonefn);
+ } else {
+ $cloned = $clonefn->();
+ }
+
+ return $cloned->@{qw(file size)};
+};
+
# Note: $pool is only needed when creating a VM, because pool permissions
# are automatically inherited if VM already exists inside a pool.
my $create_disks = sub {
@@ -222,28 +345,71 @@ my $create_disks = sub {
} elsif ($volid =~ $NEW_DISK_RE) {
my ($storeid, $size) = ($2 || $default_storage, $3);
die "no storage ID specified (and no default storage)\n" if !$storeid;
- my $defformat = PVE::Storage::storage_default_format($storecfg, $storeid);
- my $fmt = $disk->{format} || $defformat;
-
- $size = PVE::Tools::convert_size($size, 'gb' => 'kb'); # vdisk_alloc uses kb
-
- my $volid;
- if ($ds eq 'efidisk0') {
- my $smm = PVE::QemuServer::Machine::machine_type_is_q35($conf);
- ($volid, $size) = PVE::QemuServer::create_efidisk(
- $storecfg, $storeid, $vmid, $fmt, $arch, $disk, $smm);
- } elsif ($ds eq 'tpmstate0') {
- # swtpm can only use raw volumes, and uses a fixed size
- $size = PVE::Tools::convert_size(PVE::QemuServer::Drive::TPMSTATE_DISK_SIZE, 'b' => 'kb');
- $volid = PVE::Storage::vdisk_alloc($storecfg, $storeid, $vmid, "raw", undef, $size);
+
+ if (my $source = delete $disk->{'import-from'}) {
+ my $dst_volid;
+
+ if (PVE::Storage::parse_volume_id($source, 1)) { # PVE-managed volume
+ my $dest_info = {
+ vmid => $vmid,
+ drivename => $ds,
+ storage => $storeid,
+ format => $disk->{format},
+ };
+
+ $dest_info->{efisize} = PVE::QemuServer::get_efivars_size($conf, $disk)
+ if $ds eq 'efidisk0';
+
+ ($dst_volid, $size) = eval {
+ $import_from_volid->($storecfg, $source, $dest_info, $vollist);
+ };
+ die "cannot import from '$source' - $@" if $@;
+ } else {
+ $source = PVE::Storage::abs_filesystem_path($storecfg, $source, 1);
+ $size = PVE::Storage::file_size_info($source);
+ die "could not get file size of $source\n" if !$size;
+
+ (undef, $dst_volid) = PVE::QemuServer::ImportDisk::do_import(
+ $source,
+ $vmid,
+ $storeid,
+ {
+ drive_name => $ds,
+ format => $disk->{format},
+ 'skip-config-update' => 1,
+ },
+ );
+ push @$vollist, $dst_volid;
+ }
+
+ $disk->{file} = $dst_volid;
+ $disk->{size} = $size;
+ delete $disk->{format}; # no longer needed
+ $res->{$ds} = PVE::QemuServer::print_drive($disk);
} else {
- $volid = PVE::Storage::vdisk_alloc($storecfg, $storeid, $vmid, $fmt, undef, $size);
+ my $defformat = PVE::Storage::storage_default_format($storecfg, $storeid);
+ my $fmt = $disk->{format} || $defformat;
+
+ $size = PVE::Tools::convert_size($size, 'gb' => 'kb'); # vdisk_alloc uses kb
+
+ my $volid;
+ if ($ds eq 'efidisk0') {
+ my $smm = PVE::QemuServer::Machine::machine_type_is_q35($conf);
+ ($volid, $size) = PVE::QemuServer::create_efidisk(
+ $storecfg, $storeid, $vmid, $fmt, $arch, $disk, $smm);
+ } elsif ($ds eq 'tpmstate0') {
+ # swtpm can only use raw volumes, and uses a fixed size
+ $size = PVE::Tools::convert_size(PVE::QemuServer::Drive::TPMSTATE_DISK_SIZE, 'b' => 'kb');
+ $volid = PVE::Storage::vdisk_alloc($storecfg, $storeid, $vmid, "raw", undef, $size);
+ } else {
+ $volid = PVE::Storage::vdisk_alloc($storecfg, $storeid, $vmid, $fmt, undef, $size);
+ }
+ push @$vollist, $volid;
+ $disk->{file} = $volid;
+ $disk->{size} = PVE::Tools::convert_size($size, 'kb' => 'b');
+ delete $disk->{format}; # no longer needed
+ $res->{$ds} = PVE::QemuServer::print_drive($disk);
}
- push @$vollist, $volid;
- $disk->{file} = $volid;
- $disk->{size} = PVE::Tools::convert_size($size, 'kb' => 'b');
- delete $disk->{format}; # no longer needed
- $res->{$ds} = PVE::QemuServer::print_drive($disk);
} else {
PVE::Storage::check_volume_access($rpcenv, $authuser, $storecfg, $vmid, $volid);
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index cebf1730..1dc6171a 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -409,8 +409,21 @@ my $alldrive_fmt = {
%efitype_fmt,
};
+my %import_from_fmt = (
+ 'import-from' => {
+ type => 'string',
+ format => 'pve-volume-id-or-absolute-path',
+ format_description => 'source volume',
+ description => "Create a new disk, importing from this source (volume ID or absolute ".
+ "path). When an absolute path is specified, it's up to you to ensure that the source ".
+ "is not actively used by another process during the import!",
+ optional => 1,
+ },
+);
+
my $alldrive_fmt_with_alloc = {
%$alldrive_fmt,
+ %import_from_fmt,
};
my $unused_fmt = {
@@ -440,6 +453,8 @@ my $desc_with_alloc = sub {
my $new_desc = dclone($desc);
+ $new_desc->{format}->{'import-from'} = $import_from_fmt{'import-from'};
+
my $extra_note = '';
if ($type eq 'efidisk') {
$extra_note = " Note that SIZE_IN_GiB is ignored here and that the default EFI vars are ".
@@ -449,7 +464,8 @@ my $desc_with_alloc = sub {
}
$new_desc->{description} .= " Use the special syntax STORAGE_ID:SIZE_IN_GiB to allocate a new ".
- "volume.${extra_note}";
+ "volume.${extra_note} Use STORAGE_ID:0 and the 'import-from' parameter to import from an ".
+ "existing volume.";
$with_alloc_desc_cache->{$type} = $new_desc;
diff --git a/PVE/QemuServer/ImportDisk.pm b/PVE/QemuServer/ImportDisk.pm
index 51ad52ea..3e0474b6 100755
--- a/PVE/QemuServer/ImportDisk.pm
+++ b/PVE/QemuServer/ImportDisk.pm
@@ -11,6 +11,8 @@ use PVE::Tools qw(run_command extract_param);
# and creates by default a drive entry unused[n] pointing to the created volume
# $params->{drive_name} may be used to specify ide0, scsi1, etc ...
# $params->{format} may be used to specify qcow2, raw, etc ...
+# $params->{skiplock} may be used to skip checking for a lock in the VM config
+# $params->{'skip-config-update'} may be used to import the disk without updating the VM config
sub do_import {
my ($src_path, $vmid, $storage_id, $params) = @_;
@@ -71,7 +73,7 @@ sub do_import {
PVE::Storage::activate_volumes($storecfg, [$dst_volid]);
PVE::QemuServer::qemu_img_convert($src_path, $dst_volid, $src_size, undef, $zeroinit);
PVE::Storage::deactivate_volumes($storecfg, [$dst_volid]);
- PVE::QemuConfig->lock_config($vmid, $create_drive);
+ PVE::QemuConfig->lock_config($vmid, $create_drive) if !$params->{'skip-config-update'};
};
if (my $err = $@) {
eval { PVE::Storage::vdisk_free($storecfg, $dst_volid) };
--
2.30.2
* [pve-devel] [PATCH v13 qemu-server 8/8] api: update vm: print drive string for newly allocated/imported drives
From: Fabian Ebner @ 2022-03-17 11:31 UTC (permalink / raw)
To: pve-devel
In the spirit of c75bf16 ("qm importdisk: tell user to what VM disk we
actually imported"), and so that the information is not lost once qm
importdisk switches to re-using the API call.
Added for cloudinit too, because a new disk is allocated.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---
No changes from v12.
The name for cloudinit is rather predictable, so I'm not too sure whether
it's worth it there.
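With this patch, the task log gains a line along these lines for each allocated or imported disk (volume name and size are made up, and the exact drive string depends on print_drive):
    scsi1: successfully created disk 'local-lvm:vm-100-disk-1,size=32G'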
PVE/API2/Qemu.pm | 3 +++
1 file changed, 3 insertions(+)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 0e9f5c7e..bca45c8b 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -342,6 +342,7 @@ my $create_disks = sub {
push @$vollist, $volid;
delete $disk->{format}; # no longer needed
$res->{$ds} = PVE::QemuServer::print_drive($disk);
+ print "$ds: successfully created disk '$res->{$ds}'\n";
} elsif ($volid =~ $NEW_DISK_RE) {
my ($storeid, $size) = ($2 || $default_storage, $3);
die "no storage ID specified (and no default storage)\n" if !$storeid;
@@ -410,6 +411,8 @@ my $create_disks = sub {
delete $disk->{format}; # no longer needed
$res->{$ds} = PVE::QemuServer::print_drive($disk);
}
+
+ print "$ds: successfully created disk '$res->{$ds}'\n";
} else {
PVE::Storage::check_volume_access($rpcenv, $authuser, $storecfg, $vmid, $volid);
--
2.30.2
* [pve-devel] [PATCH v13 manager 1/1] api: nodes: add readovf endpoint
From: Fabian Ebner @ 2022-03-17 11:31 UTC (permalink / raw)
To: pve-devel
Because the paths under /nodes/{node}/qemu/ are already occupied by
a {vmid} regex, it's not possible to use /nodes/{node}/qemu/readovf
for the new call. As the call does not depend upon a particular vmid,
it's placed under /nodes/{node} instead.
Signed-off-by: Dominic Jäger <d.jaeger@proxmox.com>
[split into its own patch + add to index]
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---
No changes from v12.
Needs dependency bump for qemu-server.
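Once both the qemu-server and manager parts are applied, the endpoint can be exercised e.g. via pvesh; node name and manifest path below are placeholders. The result contains name, cores, memory and one entry per disk, mapping a drive key to its backing file:
    pvesh get /nodes/pve1/readovf --manifest /mnt/images/appliance/appliance.ovf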
PVE/API2/Nodes.pm | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/PVE/API2/Nodes.pm b/PVE/API2/Nodes.pm
index 655493a3..f595808a 100644
--- a/PVE/API2/Nodes.pm
+++ b/PVE/API2/Nodes.pm
@@ -49,6 +49,7 @@ use PVE::API2::LXC;
use PVE::API2::Network;
use PVE::API2::NodeConfig;
use PVE::API2::Qemu::CPU;
+use PVE::API2::Qemu::OVF;
use PVE::API2::Qemu;
use PVE::API2::Replication;
use PVE::API2::Services;
@@ -71,6 +72,11 @@ __PACKAGE__->register_method ({
path => 'qemu',
});
+__PACKAGE__->register_method ({
+ subclass => "PVE::API2::Qemu::OVF",
+ path => 'readovf',
+});
+
__PACKAGE__->register_method ({
subclass => "PVE::API2::LXC",
path => 'lxc',
@@ -233,6 +239,7 @@ __PACKAGE__->register_method ({
{ name => 'network' },
{ name => 'qemu' },
{ name => 'query-url-metadata' },
+ { name => 'readovf' },
{ name => 'replication' },
{ name => 'report' },
{ name => 'rrd' }, # fixme: remove?
--
2.30.2
* [pve-devel] applied: [PATCH-SERIES v13 qemu-server/manager] API for disk import and OVF
From: Fabian Grünbichler @ 2022-04-04 14:59 UTC (permalink / raw)
To: Proxmox VE development discussion
Applied the qemu-server part with one slight fixup (the readovf API method
renamed from 'index' to 'readovf'). The manager part should be applied once
we've bumped qemu-server or when the GUI patches are ready.
On March 17, 2022 12:30 pm, Fabian Ebner wrote:
> Extend qm importdisk/importovf functionality to the API.
>
> Changes from v12:
> * Drop already applied patches.
> * Add some follow-up improvements related to clone:
> * Aborting early for TPM state restriction.
> * Check against a corner case with drive-mirror with different
> drive names but with the same VM ID.
> * Clone EFI disk from snapshot when specified.
> * Make readovf endpoint 'protected => 1'.
> * Group schema-related changes into a patch.
> * Some smaller cleanups/improvements to checks (see individual
> patches).
>
> Still missing GUI integration for import from OVF, but that will be its
> own series.
>
> Previous discussion here:
> https://lists.proxmox.com/pipermail/pve-devel/2022-March/052005.html
>
>
> qemu-server:
>
> Dominic Jäger (1):
> api: support VM disk import
>
> Fabian Ebner (7):
> clone disk: assert that drive name is the same for drive-mirror on
> single VM
> clone disk: move check against cloning TPM state of running VM to
> beginning
> api: clone vm: check against cloning running TPM state early
> clone disk: also clone EFI disk from snapshot
> api: add endpoint for parsing .ovf files
> schema: drive: use separate schema when disk allocation is possible
> api: update vm: print drive string for newly allocated/imported drives
>
> PVE/API2/Qemu.pm | 255 ++++++++++++++++++++++++++++++-----
> PVE/API2/Qemu/Makefile | 2 +-
> PVE/API2/Qemu/OVF.pm | 53 ++++++++
> PVE/QemuServer.pm | 70 ++++++++--
> PVE/QemuServer/Drive.pm | 95 +++++++++----
> PVE/QemuServer/ImportDisk.pm | 4 +-
> 6 files changed, 412 insertions(+), 67 deletions(-)
> create mode 100644 PVE/API2/Qemu/OVF.pm
>
>
> manager:
>
> Fabian Ebner (1):
> api: nodes: add readovf endpoint
>
> PVE/API2/Nodes.pm | 7 +++++++
> 1 file changed, 7 insertions(+)
>
> --
> 2.30.2
>
>
>
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>