* [pve-devel] [PATCH v5 storage 1/9] add disk rename feature
2021-11-09 14:55 [pve-devel] [PATCH v5 storage qemu-server container 0/9] move disk or volume to other guests Aaron Lauterer
@ 2021-11-09 14:55 ` Aaron Lauterer
2021-11-09 14:55 ` [pve-devel] [PATCH v5 qemu-server 2/9] cli: qm: change move_disk to move-disk Aaron Lauterer
` (8 subsequent siblings)
9 siblings, 0 replies; 11+ messages in thread
From: Aaron Lauterer @ 2021-11-09 14:55 UTC (permalink / raw)
To: pve-devel
Functionality has been added for the following storage types:
* directory ones, based on the default implementation:
* directory
* NFS
* CIFS
* gluster
* ZFS
* (thin) LVM
* Ceph
A new feature `rename` has been introduced to mark which storage
plugins support the feature.
The storage API version and AGE have been bumped.
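For reference, a caller-side sketch of probing the new feature flag (hypothetical `$volid`; this mirrors the `volume_has_feature` call pattern used later in this series and needs a configured PVE node to actually run):

```perl
use PVE::Storage;

my $cfg = PVE::Storage::config();

# 'rename' is only advertised by plugins that implement rename_volume,
# so callers can bail out early on unsupported storages.
die "storage does not support renaming volumes\n"
    if !PVE::Storage::volume_has_feature($cfg, 'rename', $volid);
```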
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
v4 -> v5: Since the previous first patch which added the
'find_free_volname' on a Storage.pm and Plugin.pm level was completely
dropped, the 'rename_volume' functions have gotten a little more logic
to check if a target_volname has been given or if one needs to get
requested from 'find_free_diskname'. Since 'find_free_diskname' does not
handle VMID subdirectories, the Plugin.pm also adds them now where
needed. This was previously handled by the now gone
'find_free_volname'.
v3 -> v4:
* add notes to ApiChangeLog
* fix style nits
v2 -> v3:
* dropped exists() check
* fixed base image handling
* fixed smaller code style issues
v1 -> v2:
* many small fixes and improvements
* rename_volume now accepts $source_volname instead of $source_volid
this helps us avoid parsing the volid a second time
rfc -> v1:
* reduced number of parameters to minimum needed, plugins infer needed
information themselves
* added storage locking and checking if volume already exists
* parse target_volname prior to renaming to check if valid
old dedicated API endpoint -> rfc:
only do the rename now; the rename function handles templates and returns
the new volid, as this can be handled differently on some storages.
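The resulting entry point can then be exercised roughly like this (a sketch only: the volid and target VMID are made-up examples, and this requires a configured Proxmox node):

```perl
use PVE::Storage;

my $cfg = PVE::Storage::config();

# Let the plugin pick a free disk name for the target VMID; rename_volume
# takes the cluster-wide storage lock and returns the new volid.
my $new_volid = PVE::Storage::rename_volume($cfg, 'local-lvm:vm-100-disk-0', 101);

# Alternatively, pass an explicit target volname instead of a target VMID:
# my $new_volid = PVE::Storage::rename_volume(
#     $cfg, 'local-lvm:vm-100-disk-0', undef, 'vm-101-disk-3');
```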
ApiChangeLog | 10 ++++++++++
PVE/Storage.pm | 25 ++++++++++++++++++++++--
PVE/Storage/LVMPlugin.pm | 34 ++++++++++++++++++++++++++++++++
PVE/Storage/LvmThinPlugin.pm | 1 +
PVE/Storage/Plugin.pm | 38 ++++++++++++++++++++++++++++++++++++
PVE/Storage/RBDPlugin.pm | 34 ++++++++++++++++++++++++++++++++
PVE/Storage/ZFSPoolPlugin.pm | 31 +++++++++++++++++++++++++++++
7 files changed, 171 insertions(+), 2 deletions(-)
diff --git a/ApiChangeLog b/ApiChangeLog
index 8c119c5..5b572e3 100644
--- a/ApiChangeLog
+++ b/ApiChangeLog
@@ -6,6 +6,16 @@ without breaking anything unaware of it.)
Future changes should be documented in here.
+## Version 10: (AGE 1):
+
+* a new `find_free_volname` method has been added
+ It exposes the functionality to request a new, not yet used, volname for a storage
+ to outside callers
+
+* a new `rename_volume` method has been added
+ Storage plugins with rename support need to enable the `rename` feature flag; e.g. in the
+ `volume_has_feature` method.
+
## Version 9: (AGE resets to 0):
* volume_import_formats gets a new parameter *inserted*:
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 71d6ad7..7c2ceab 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -41,11 +41,11 @@ use PVE::Storage::PBSPlugin;
use PVE::Storage::BTRFSPlugin;
# Storage API version. Increment it on changes in storage API interface.
-use constant APIVER => 9;
+use constant APIVER => 10;
# Age is the number of versions we're backward compatible with.
# This is like having 'current=APIVER' and age='APIAGE' in libtool,
# see https://www.gnu.org/software/libtool/manual/html_node/Libtool-versioning.html
-use constant APIAGE => 0;
+use constant APIAGE => 1;
# load standard plugins
PVE::Storage::DirPlugin->register();
@@ -349,6 +349,7 @@ sub volume_snapshot_needs_fsfreeze {
# snapshot - taking a snapshot is possible
# sparseinit - volume is sparsely initialized
# template - conversion to base image is possible
+# rename - renaming volumes is possible
# $snap - check if the feature is supported for a given snapshot
# $running - if the guest owning the volume is running
# $opts - hash with further options:
@@ -1856,6 +1857,26 @@ sub complete_volume {
return $res;
}
+sub rename_volume {
+ my ($cfg, $source_volid, $target_vmid, $target_volname) = @_;
+
+ die "no source volid provided\n" if !$source_volid;
+ die "no target VMID or target volname provided\n" if !$target_vmid && !$target_volname;
+
+ my ($storeid, $source_volname) = parse_volume_id($source_volid);
+
+ activate_storage($cfg, $storeid);
+
+ my $scfg = storage_config($cfg, $storeid);
+ my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
+
+ $target_vmid = ($plugin->parse_volname($source_volname))[3] if !$target_vmid;
+
+ return $plugin->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
+ return $plugin->rename_volume($scfg, $storeid, $source_volname, $target_vmid, $target_volname);
+ });
+}
+
# Various io-heavy operations require io/bandwidth limits which can be
# configured on multiple levels: The global defaults in datacenter.cfg, and
# per-storage overrides. When we want to do a restore from storage A to storage
diff --git a/PVE/Storage/LVMPlugin.pm b/PVE/Storage/LVMPlugin.pm
index 139d391..40c1613 100644
--- a/PVE/Storage/LVMPlugin.pm
+++ b/PVE/Storage/LVMPlugin.pm
@@ -339,6 +339,15 @@ sub lvcreate {
run_command($cmd, errmsg => "lvcreate '$vg/$name' error");
}
+sub lvrename {
+ my ($vg, $oldname, $newname) = @_;
+
+ run_command(
+ ['/sbin/lvrename', $vg, $oldname, $newname],
+ errmsg => "lvrename '${vg}/${oldname}' to '${newname}' error",
+ );
+}
+
sub alloc_image {
my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size) = @_;
@@ -584,6 +593,7 @@ sub volume_has_feature {
my $features = {
copy => { base => 1, current => 1},
+ rename => {current => 1},
};
my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
@@ -692,4 +702,28 @@ sub volume_import_write {
input => '<&'.fileno($input_fh));
}
+sub rename_volume {
+ my ($class, $scfg, $storeid, $source_volname, $target_vmid, $target_volname) = @_;
+
+ my (
+ undef,
+ $source_image,
+ $source_vmid,
+ $base_name,
+ $base_vmid,
+ undef,
+ $format
+ ) = $class->parse_volname($source_volname);
+ $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid, $format)
+ if !$target_volname;
+
+ my $vg = $scfg->{vgname};
+ my $lvs = lvm_list_volumes($vg);
+ die "target volume '${target_volname}' already exists\n"
+ if ($lvs->{$vg}->{$target_volname});
+
+ lvrename($vg, $source_volname, $target_volname);
+ return "${storeid}:${target_volname}";
+}
+
1;
diff --git a/PVE/Storage/LvmThinPlugin.pm b/PVE/Storage/LvmThinPlugin.pm
index 4ba6f90..c24af22 100644
--- a/PVE/Storage/LvmThinPlugin.pm
+++ b/PVE/Storage/LvmThinPlugin.pm
@@ -355,6 +355,7 @@ sub volume_has_feature {
template => { current => 1},
copy => { base => 1, current => 1, snap => 1},
sparseinit => { base => 1, current => 1},
+ rename => {current => 1},
};
my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index aeb4fff..5350a62 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -1008,6 +1008,7 @@ sub volume_has_feature {
snap => {qcow2 => 1} },
sparseinit => { base => {qcow2 => 1, raw => 1, vmdk => 1},
current => {qcow2 => 1, raw => 1, vmdk => 1} },
+ rename => { current => {qcow2 => 1, raw => 1, vmdk => 1} },
};
# clone_image creates a qcow2 volume
@@ -1015,6 +1016,8 @@ sub volume_has_feature {
defined($opts->{valid_target_formats}) &&
!(grep { $_ eq 'qcow2' } @{$opts->{valid_target_formats}});
+ return 0 if $feature eq 'rename' && $class->can('api') && $class->api() < 10;
+
my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
$class->parse_volname($volname);
@@ -1531,4 +1534,39 @@ sub volume_import_formats {
return ();
}
+sub rename_volume {
+ my ($class, $scfg, $storeid, $source_volname, $target_vmid, $target_volname) = @_;
+ die "not implemented in storage plugin '$class'\n" if $class->can('api') && $class->api() < 10;
+ die "no path found\n" if !$scfg->{path};
+
+ my (
+ undef,
+ $source_image,
+ $source_vmid,
+ $base_name,
+ $base_vmid,
+ undef,
+ $format
+ ) = $class->parse_volname($source_volname);
+
+ $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid, $format, 1)
+ if !$target_volname;
+
+ my $basedir = $class->get_subdir($scfg, 'images');
+
+ mkpath "${basedir}/${target_vmid}";
+
+ my $old_path = "${basedir}/${source_vmid}/${source_image}";
+ my $new_path = "${basedir}/${target_vmid}/${target_volname}";
+
+ die "target volume '${target_volname}' already exists\n" if -e $new_path;
+
+ my $base = $base_name ? "${base_vmid}/${base_name}/" : '';
+
+ rename($old_path, $new_path) ||
+ die "rename '$old_path' to '$new_path' failed - $!\n";
+
+ return "${storeid}:${base}${target_vmid}/${target_volname}";
+}
+
1;
diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index 613d32b..2607d25 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlugin.pm
@@ -742,6 +742,7 @@ sub volume_has_feature {
template => { current => 1},
copy => { base => 1, current => 1, snap => 1},
sparseinit => { base => 1, current => 1},
+ rename => {current => 1},
};
my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) = $class->parse_volname($volname);
@@ -757,4 +758,37 @@ sub volume_has_feature {
return undef;
}
+sub rename_volume {
+ my ($class, $scfg, $storeid, $source_volname, $target_vmid, $target_volname) = @_;
+
+ my (
+ undef,
+ $source_image,
+ $source_vmid,
+ $base_name,
+ $base_vmid,
+ undef,
+ $format
+ ) = $class->parse_volname($source_volname);
+ $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid, $format)
+ if !$target_volname;
+
+ eval {
+ my $cmd = $rbd_cmd->($scfg, $storeid, 'info', $target_volname);
+ run_rbd_command($cmd, errmsg => "exist check", quiet => 1);
+ };
+ die "target volume '${target_volname}' already exists\n" if !$@;
+
+ my $cmd = $rbd_cmd->($scfg, $storeid, 'rename', $source_image, $target_volname);
+
+ run_rbd_command(
+ $cmd,
+ errmsg => "could not rename image '${source_image}' to '${target_volname}'",
+ );
+
+ $base_name = $base_name ? "${base_name}/" : '';
+
+ return "${storeid}:${base_name}${target_volname}";
+}
+
1;
diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index 660b3d9..83220ec 100644
--- a/PVE/Storage/ZFSPoolPlugin.pm
+++ b/PVE/Storage/ZFSPoolPlugin.pm
@@ -690,6 +690,7 @@ sub volume_has_feature {
copy => { base => 1, current => 1},
sparseinit => { base => 1, current => 1},
replicate => { base => 1, current => 1},
+ rename => {current => 1},
};
my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
@@ -792,4 +793,34 @@ sub volume_import_formats {
return $class->volume_export_formats($scfg, $storeid, $volname, $snapshot, $base_snapshot, $with_snapshots);
}
+sub rename_volume {
+ my ($class, $scfg, $storeid, $source_volname, $target_vmid, $target_volname) = @_;
+
+ my (
+ undef,
+ $source_image,
+ $source_vmid,
+ $base_name,
+ $base_vmid,
+ undef,
+ $format
+ ) = $class->parse_volname($source_volname);
+ $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid, $format)
+ if !$target_volname;
+
+ my $pool = $scfg->{pool};
+ my $source_zfspath = "${pool}/${source_image}";
+ my $target_zfspath = "${pool}/${target_volname}";
+
+ my $exists = 0 == run_command(['zfs', 'get', '-H', 'name', $target_zfspath],
+ noerr => 1, quiet => 1);
+ die "target volume '${target_volname}' already exists\n" if $exists;
+
+ $class->zfs_request($scfg, 5, 'rename', ${source_zfspath}, ${target_zfspath});
+
+ $base_name = $base_name ? "${base_name}/" : '';
+
+ return "${storeid}:${base_name}${target_volname}";
+}
+
1;
--
2.30.2
^ permalink raw reply [flat|nested] 11+ messages in thread
* [pve-devel] [PATCH v5 qemu-server 2/9] cli: qm: change move_disk to move-disk
2021-11-09 14:55 [pve-devel] [PATCH v5 storage qemu-server container 0/9] move disk or volume to other guests Aaron Lauterer
2021-11-09 14:55 ` [pve-devel] [PATCH v5 storage 1/9] add disk rename feature Aaron Lauterer
@ 2021-11-09 14:55 ` Aaron Lauterer
2021-11-09 14:55 ` [pve-devel] [PATCH v5 qemu-server 3/9] Drive: add valid_drive_names_with_unused Aaron Lauterer
` (7 subsequent siblings)
9 siblings, 0 replies; 11+ messages in thread
From: Aaron Lauterer @ 2021-11-09 14:55 UTC (permalink / raw)
To: pve-devel
also add alias to keep move_disk working.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
PVE/CLI/qm.pm | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
index 8307dc1..ef99b6d 100755
--- a/PVE/CLI/qm.pm
+++ b/PVE/CLI/qm.pm
@@ -910,7 +910,8 @@ our $cmddef = {
resize => [ "PVE::API2::Qemu", 'resize_vm', ['vmid', 'disk', 'size'], { node => $nodename } ],
- move_disk => [ "PVE::API2::Qemu", 'move_vm_disk', ['vmid', 'disk', 'storage'], { node => $nodename }, $upid_exit ],
+ 'move-disk' => [ "PVE::API2::Qemu", 'move_vm_disk', ['vmid', 'disk', 'storage'], { node => $nodename }, $upid_exit ],
+ move_disk => { alias => 'move-disk' },
unlink => [ "PVE::API2::Qemu", 'unlink', ['vmid'], { node => $nodename } ],
--
2.30.2
* [pve-devel] [PATCH v5 qemu-server 3/9] Drive: add valid_drive_names_with_unused
2021-11-09 14:55 [pve-devel] [PATCH v5 storage qemu-server container 0/9] move disk or volume to other guests Aaron Lauterer
2021-11-09 14:55 ` [pve-devel] [PATCH v5 storage 1/9] add disk rename feature Aaron Lauterer
2021-11-09 14:55 ` [pve-devel] [PATCH v5 qemu-server 2/9] cli: qm: change move_disk to move-disk Aaron Lauterer
@ 2021-11-09 14:55 ` Aaron Lauterer
2021-11-09 14:55 ` [pve-devel] [PATCH v5 qemu-server 4/9] api: move-disk: add move to other VM Aaron Lauterer
` (6 subsequent siblings)
9 siblings, 0 replies; 11+ messages in thread
From: Aaron Lauterer @ 2021-11-09 14:55 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
v1 -> v2: fixed spacing between - and 1
PVE/QemuServer/Drive.pm | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index 97b82f9..0f9ceba 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -467,6 +467,10 @@ sub valid_drive_names {
'tpmstate0');
}
+sub valid_drive_names_with_unused {
+ return (valid_drive_names(), map {"unused$_"} (0 .. ($MAX_UNUSED_DISKS - 1)));
+}
+
sub is_valid_drivename {
my $dev = shift;
--
2.30.2
* [pve-devel] [PATCH v5 qemu-server 4/9] api: move-disk: add move to other VM
2021-11-09 14:55 [pve-devel] [PATCH v5 storage qemu-server container 0/9] move disk or volume to other guests Aaron Lauterer
` (2 preceding siblings ...)
2021-11-09 14:55 ` [pve-devel] [PATCH v5 qemu-server 3/9] Drive: add valid_drive_names_with_unused Aaron Lauterer
@ 2021-11-09 14:55 ` Aaron Lauterer
2021-11-09 14:55 ` [pve-devel] [PATCH v5 qemu-server 5/9] api: move-disk: cleanup very long lines Aaron Lauterer
` (5 subsequent siblings)
9 siblings, 0 replies; 11+ messages in thread
From: Aaron Lauterer @ 2021-11-09 14:55 UTC (permalink / raw)
To: pve-devel
The goal of this is to expand the move-disk API endpoint to make it
possible to move a disk to another VM. Previously this was only possible
with manual intervention, either by renaming the VM disk or by manually
adding the disk's volid to the config of the other VM.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
v4 -> v5:
* dropped call to gone PVE::Storage::find_free_volname
* call PVE::Storage::rename_volume with changed parameters (target_vmid
instead of target_volname)
v3 -> v4:
* add check to avoid potential undefined warning in $vmlist->{$vmid}
* removed line bloat
v2 -> v3:
* mention default target disk key in description
* code style things
v1 -> v2:
* make --target-disk optional and use source disk key as fallback
* use parse_volname instead of custom regex
* adapt to find_free_volname
* smaller (style) fixes
rfc -> v1:
* add check if target guest is replicated and fail if the moved volume
does not support it
* check if source volume has a format suffix and pass it to
'find_free_disk'
* fixed some style nits
old dedicated api endpoint -> rfc:
There are some big changes here. The old [0] dedicated API endpoint is
gone and most of its code is now part of move_disk. Error messages have
been changed accordingly and sometimes enhanced by adding disk keys and
VMIDs where appropriate.
Since a move to other guests should be possible for unused disks, we
need to check before doing a move to another storage to make sure we do
not handle unused disks there.
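With this change, moving a disk between guests could look like the following (illustrative only: VMIDs and disk keys are made up, and the commands obviously need a PVE node):

```shell
# Move scsi1 from VM 100 to VM 101, attaching it there as scsi2
# (--target-disk defaults to the source disk key if omitted):
qm move-disk 100 scsi1 --target-vmid 101 --target-disk scsi2

# The existing behavior, moving a disk to another storage, is unchanged:
qm move-disk 100 scsi1 local-zfs
```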
PVE/API2/Qemu.pm | 202 +++++++++++++++++++++++++++++++++++++++++++++--
PVE/CLI/qm.pm | 2 +-
2 files changed, 198 insertions(+), 6 deletions(-)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index b479811..5ba469a 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -36,6 +36,7 @@ use PVE::API2::Qemu::Agent;
use PVE::VZDump::Plugin;
use PVE::DataCenterConfig;
use PVE::SSHInfo;
+use PVE::Replication;
BEGIN {
if (!$ENV{PVE_GENERATING_DOCS}) {
@@ -3282,9 +3283,11 @@ __PACKAGE__->register_method({
method => 'POST',
protected => 1,
proxyto => 'node',
- description => "Move volume to different storage.",
+ description => "Move volume to different storage or to a different VM.",
permissions => {
- description => "You need 'VM.Config.Disk' permissions on /vms/{vmid}, and 'Datastore.AllocateSpace' permissions on the storage.",
+ description => "You need 'VM.Config.Disk' permissions on /vms/{vmid}, " .
+ "and 'Datastore.AllocateSpace' permissions on the storage. To move ".
+ "a disk to another VM, you need the permissions on the target VM as well.",
check => [ 'and',
['perm', '/vms/{vmid}', [ 'VM.Config.Disk' ]],
['perm', '/storage/{storage}', [ 'Datastore.AllocateSpace' ]],
@@ -3295,14 +3298,19 @@ __PACKAGE__->register_method({
properties => {
node => get_standard_option('pve-node'),
vmid => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
+ 'target-vmid' => get_standard_option('pve-vmid', {
+ completion => \&PVE::QemuServer::complete_vmid,
+ optional => 1,
+ }),
disk => {
type => 'string',
description => "The disk you want to move.",
- enum => [PVE::QemuServer::Drive::valid_drive_names()],
+ enum => [PVE::QemuServer::Drive::valid_drive_names_with_unused()],
},
storage => get_standard_option('pve-storage-id', {
description => "Target storage.",
completion => \&PVE::QemuServer::complete_storage,
+ optional => 1,
}),
'format' => {
type => 'string',
@@ -3329,6 +3337,20 @@ __PACKAGE__->register_method({
minimum => '0',
default => 'move limit from datacenter or storage config',
},
+ 'target-disk' => {
+ type => 'string',
+ description => "The config key the disk will be moved to on the target VM " .
+ "(for example, ide0 or scsi1). Default is the source disk key.",
+ enum => [PVE::QemuServer::Drive::valid_drive_names_with_unused()],
+ optional => 1,
+ },
+ 'target-digest' => {
+ type => 'string',
+ description => "Prevent changes if current configuration file of the target VM has " .
+ "a different SHA1 digest. This can be used to prevent concurrent modifications.",
+ maxLength => 40,
+ optional => 1,
+ },
},
},
returns => {
@@ -3343,14 +3365,20 @@ __PACKAGE__->register_method({
my $node = extract_param($param, 'node');
my $vmid = extract_param($param, 'vmid');
+ my $target_vmid = extract_param($param, 'target-vmid');
my $digest = extract_param($param, 'digest');
+ my $target_digest = extract_param($param, 'target-digest');
my $disk = extract_param($param, 'disk');
+ my $target_disk = extract_param($param, 'target-disk') // $disk;
my $storeid = extract_param($param, 'storage');
my $format = extract_param($param, 'format');
+ die "either set storage or target-vmid, but not both\n" if $storeid && $target_vmid;
+
my $storecfg = PVE::Storage::config();
+ my $source_volid;
- my $updatefn = sub {
+ my $move_updatefn = sub {
my $conf = PVE::QemuConfig->load_config($vmid);
PVE::QemuConfig->check_lock($conf);
@@ -3460,7 +3488,171 @@ __PACKAGE__->register_method({
return $rpcenv->fork_worker('qmmove', $vmid, $authuser, $realcmd);
};
- return PVE::QemuConfig->lock_config($vmid, $updatefn);
+ my $load_and_check_reassign_configs = sub {
+ my $vmlist = PVE::Cluster::get_vmlist()->{ids};
+
+ die "could not find VM ${vmid}\n" if !exists($vmlist->{$vmid});
+
+ if ($vmlist->{$vmid}->{node} ne $vmlist->{$target_vmid}->{node}) {
+ die "Both VMs need to be on the same node ($vmlist->{$vmid}->{node}) ".
+ "but target VM is on $vmlist->{$target_vmid}->{node}.\n";
+ }
+
+ my $source_conf = PVE::QemuConfig->load_config($vmid);
+ PVE::QemuConfig->check_lock($source_conf);
+ my $target_conf = PVE::QemuConfig->load_config($target_vmid);
+ PVE::QemuConfig->check_lock($target_conf);
+
+ die "Can't move disks from or to template VMs\n"
+ if ($source_conf->{template} || $target_conf->{template});
+
+ if ($digest) {
+ eval { PVE::Tools::assert_if_modified($digest, $source_conf->{digest}) };
+ die "VM ${vmid}: $@" if $@;
+ }
+
+ if ($target_digest) {
+ eval { PVE::Tools::assert_if_modified($target_digest, $target_conf->{digest}) };
+ die "VM ${target_vmid}: $@" if $@;
+ }
+
+ die "Disk '${disk}' for VM '$vmid' does not exist\n" if !defined($source_conf->{$disk});
+
+ die "Target disk key '${target_disk}' is already in use for VM '$target_vmid'\n"
+ if exists($target_conf->{$target_disk});
+
+ my $drive = PVE::QemuServer::parse_drive(
+ $disk,
+ $source_conf->{$disk},
+ );
+ $source_volid = $drive->{file};
+
+ die "disk '${disk}' has no associated volume\n" if !$source_volid;
+ die "CD drive contents can't be moved to another VM\n"
+ if PVE::QemuServer::drive_is_cdrom($drive, 1);
+ die "Can't move physical disk to another VM\n" if $source_volid =~ m|^/dev/|;
+ die "Can't move disk used by a snapshot to another VM\n"
+ if PVE::QemuServer::Drive::is_volume_in_use($storecfg, $source_conf, $disk, $source_volid);
+ die "Storage does not support moving of this disk to another VM\n"
+ if (!PVE::Storage::volume_has_feature($storecfg, 'rename', $source_volid));
+ die "Cannot move disk to another VM while the source VM is running\n"
+ if PVE::QemuServer::check_running($vmid) && $disk !~ m/^unused\d+$/;
+
+ if ($target_disk !~ m/^unused\d+$/ && $target_disk =~ m/^([^\d]+)\d+$/) {
+ my $interface = $1;
+ my $desc = PVE::JSONSchema::get_standard_option("pve-qm-${interface}");
+ eval {
+ PVE::JSONSchema::parse_property_string(
+ $desc->{format},
+ $source_conf->{$disk},
+ )
+ };
+ die "Cannot move disk to another VM: $@" if $@;
+ }
+
+ my $repl_conf = PVE::ReplicationConfig->new();
+ my $is_replicated = $repl_conf->check_for_existing_jobs($target_vmid, 1);
+ my ($storeid, undef) = PVE::Storage::parse_volume_id($source_volid);
+ my $format = (PVE::Storage::parse_volname($storecfg, $source_volid))[6];
+ if ($is_replicated && !PVE::Storage::storage_can_replicate($storecfg, $storeid, $format)) {
+ die "Cannot move disk to a replicated VM. Storage does not support replication!\n";
+ }
+
+ return ($source_conf, $target_conf);
+ };
+
+ my $logfunc = sub {
+ my ($msg) = @_;
+ print STDERR "$msg\n";
+ };
+
+ my $disk_reassignfn = sub {
+ return PVE::QemuConfig->lock_config($vmid, sub {
+ return PVE::QemuConfig->lock_config($target_vmid, sub {
+ my ($source_conf, $target_conf) = &$load_and_check_reassign_configs();
+
+ my $drive_param = PVE::QemuServer::parse_drive(
+ $target_disk,
+ $source_conf->{$disk},
+ );
+
+ print "moving disk '$disk' from VM '$vmid' to '$target_vmid'\n";
+ my ($storeid, $source_volname) = PVE::Storage::parse_volume_id($source_volid);
+
+ my $fmt = (PVE::Storage::parse_volname($storecfg, $source_volid))[6];
+
+ my $new_volid = PVE::Storage::rename_volume(
+ $storecfg,
+ $source_volid,
+ $target_vmid,
+ );
+
+ $drive_param->{file} = $new_volid;
+
+ delete $source_conf->{$disk};
+ print "removing disk '${disk}' from VM '${vmid}' config\n";
+ PVE::QemuConfig->write_config($vmid, $source_conf);
+
+ my $drive_string = PVE::QemuServer::print_drive($drive_param);
+ &$update_vm_api(
+ {
+ node => $node,
+ vmid => $target_vmid,
+ digest => $target_digest,
+ $target_disk => $drive_string,
+ },
+ 1,
+ );
+
+ # remove possible replication snapshots
+ if (PVE::Storage::volume_has_feature(
+ $storecfg,
+ 'replicate',
+ $source_volid),
+ ) {
+ eval {
+ PVE::Replication::prepare(
+ $storecfg,
+ [$new_volid],
+ undef,
+ 1,
+ undef,
+ $logfunc,
+ )
+ };
+ if (my $err = $@) {
+ print "Failed to remove replication snapshots on moved disk " .
+ "'$target_disk'. Manual cleanup could be necessary.\n";
+ }
+ }
+ });
+ });
+ };
+
+ if ($target_vmid) {
+ $rpcenv->check_vm_perm($authuser, $target_vmid, undef, ['VM.Config.Disk'])
+ if $authuser ne 'root@pam';
+
+ die "Moving a disk to the same VM is not possible. Did you mean to ".
+ "move the disk to a different storage?\n"
+ if $vmid eq $target_vmid;
+
+ &$load_and_check_reassign_configs();
+ return $rpcenv->fork_worker(
+ 'qmmove',
+ "${vmid}-${disk}>${target_vmid}-${target_disk}",
+ $authuser,
+ $disk_reassignfn
+ );
+ } elsif ($storeid) {
+ die "cannot move disk '$disk', only configured disks can be moved to another storage\n"
+ if $disk =~ m/^unused\d+$/;
+ return PVE::QemuConfig->lock_config($vmid, $move_updatefn);
+ } else {
+ die "Provide either a 'storage' to move the disk to a different " .
+ "storage or 'target-vmid' and 'target-disk' to move the disk " .
+ "to another VM\n";
+ }
}});
my $check_vm_disks_local = sub {
diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
index ef99b6d..a92d301 100755
--- a/PVE/CLI/qm.pm
+++ b/PVE/CLI/qm.pm
@@ -910,7 +910,7 @@ our $cmddef = {
resize => [ "PVE::API2::Qemu", 'resize_vm', ['vmid', 'disk', 'size'], { node => $nodename } ],
- 'move-disk' => [ "PVE::API2::Qemu", 'move_vm_disk', ['vmid', 'disk', 'storage'], { node => $nodename }, $upid_exit ],
+ 'move-disk' => [ "PVE::API2::Qemu", 'move_vm_disk', ['vmid', 'disk', 'storage', 'target-vmid', 'target-disk'], { node => $nodename }, $upid_exit ],
move_disk => { alias => 'move-disk' },
unlink => [ "PVE::API2::Qemu", 'unlink', ['vmid'], { node => $nodename } ],
--
2.30.2
* [pve-devel] [PATCH v5 qemu-server 5/9] api: move-disk: cleanup very long lines
2021-11-09 14:55 [pve-devel] [PATCH v5 storage qemu-server container 0/9] move disk or volume to other guests Aaron Lauterer
` (3 preceding siblings ...)
2021-11-09 14:55 ` [pve-devel] [PATCH v5 qemu-server 4/9] api: move-disk: add move to other VM Aaron Lauterer
@ 2021-11-09 14:55 ` Aaron Lauterer
2021-11-09 14:55 ` [pve-devel] [PATCH v5 container 6/9] cli: pct: change move_volume to move-volume Aaron Lauterer
` (4 subsequent siblings)
9 siblings, 0 replies; 11+ messages in thread
From: Aaron Lauterer @ 2021-11-09 14:55 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
PVE/API2/Qemu.pm | 25 ++++++++++++++++++++-----
1 file changed, 20 insertions(+), 5 deletions(-)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 5ba469a..780bc95 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -3320,13 +3320,15 @@ __PACKAGE__->register_method({
},
delete => {
type => 'boolean',
- description => "Delete the original disk after successful copy. By default the original disk is kept as unused disk.",
+ description => "Delete the original disk after successful copy. By default the " .
+ "original disk is kept as unused disk.",
optional => 1,
default => 0,
},
digest => {
type => 'string',
- description => 'Prevent changes if current configuration file has different SHA1 digest. This can be used to prevent concurrent modifications.',
+ description => "Prevent changes if current configuration file has different SHA1 " .
+ "digest. This can be used to prevent concurrent modifications.",
maxLength => 40,
optional => 1,
},
@@ -3403,11 +3405,20 @@ __PACKAGE__->register_method({
(!$format || !$oldfmt || $oldfmt eq $format);
# this only checks snapshots because $disk is passed!
- my $snapshotted = PVE::QemuServer::Drive::is_volume_in_use($storecfg, $conf, $disk, $old_volid);
+ my $snapshotted = PVE::QemuServer::Drive::is_volume_in_use(
+ $storecfg,
+ $conf,
+ $disk,
+ $old_volid
+ );
die "you can't move a disk with snapshots and delete the source\n"
if $snapshotted && $param->{delete};
- PVE::Cluster::log_msg('info', $authuser, "move disk VM $vmid: move --disk $disk --storage $storeid");
+ PVE::Cluster::log_msg(
+ 'info',
+ $authuser,
+ "move disk VM $vmid: move --disk $disk --storage $storeid"
+ );
my $running = PVE::QemuServer::check_running($vmid);
@@ -3426,7 +3437,11 @@ __PACKAGE__->register_method({
if $snapshotted;
my $bwlimit = extract_param($param, 'bwlimit');
- my $movelimit = PVE::Storage::get_bandwidth_limit('move', [$oldstoreid, $storeid], $bwlimit);
+ my $movelimit = PVE::Storage::get_bandwidth_limit(
+ 'move',
+ [$oldstoreid, $storeid],
+ $bwlimit
+ );
my $newdrive = PVE::QemuServer::clone_disk(
$storecfg,
--
2.30.2
* [pve-devel] [PATCH v5 container 6/9] cli: pct: change move_volume to move-volume
2021-11-09 14:55 [pve-devel] [PATCH v5 storage qemu-server container 0/9] move disk or volume to other guests Aaron Lauterer
` (4 preceding siblings ...)
2021-11-09 14:55 ` [pve-devel] [PATCH v5 qemu-server 5/9] api: move-disk: cleanup very long lines Aaron Lauterer
@ 2021-11-09 14:55 ` Aaron Lauterer
2021-11-09 14:55 ` [pve-devel] [PATCH v5 container 7/9] Config: add valid_volume_keys_with_unused Aaron Lauterer
` (3 subsequent siblings)
9 siblings, 0 replies; 11+ messages in thread
From: Aaron Lauterer @ 2021-11-09 14:55 UTC (permalink / raw)
To: pve-devel
also add alias to keep move_volume working.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
v1 -> v2: fix alias to actually point to move-volume
src/PVE/CLI/pct.pm | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/src/PVE/CLI/pct.pm b/src/PVE/CLI/pct.pm
index 254b3b3..462917b 100755
--- a/src/PVE/CLI/pct.pm
+++ b/src/PVE/CLI/pct.pm
@@ -849,7 +849,8 @@ our $cmddef = {
clone => [ "PVE::API2::LXC", 'clone_vm', ['vmid', 'newid'], { node => $nodename }, $upid_exit ],
migrate => [ "PVE::API2::LXC", 'migrate_vm', ['vmid', 'target'], { node => $nodename }, $upid_exit],
- move_volume => [ "PVE::API2::LXC", 'move_volume', ['vmid', 'volume', 'storage'], { node => $nodename }, $upid_exit ],
+ 'move-volume' => [ "PVE::API2::LXC", 'move_volume', ['vmid', 'volume', 'storage', 'target-vmid', 'target-volume'], { node => $nodename }, $upid_exit ],
+ move_volume => { alias => 'move-volume' },
snapshot => [ "PVE::API2::LXC::Snapshot", 'snapshot', ['vmid', 'snapname'], { node => $nodename } , $upid_exit ],
delsnapshot => [ "PVE::API2::LXC::Snapshot", 'delsnapshot', ['vmid', 'snapname'], { node => $nodename } , $upid_exit ],
--
2.30.2
* [pve-devel] [PATCH v5 container 7/9] Config: add valid_volume_keys_with_unused
2021-11-09 14:55 [pve-devel] [PATCH v5 storage qemu-server container 0/9] move disk or volume to other guests Aaron Lauterer
` (5 preceding siblings ...)
2021-11-09 14:55 ` [pve-devel] [PATCH v5 container 6/9] cli: pct: change move_volume to move-volume Aaron Lauterer
@ 2021-11-09 14:55 ` Aaron Lauterer
2021-11-09 14:55 ` [pve-devel] [PATCH v5 container 8/9] api: move-volume: add move to another container Aaron Lauterer
` (2 subsequent siblings)
9 siblings, 0 replies; 11+ messages in thread
From: Aaron Lauterer @ 2021-11-09 14:55 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
v4 (follow up) -> v5: put this into its own patch
src/PVE/LXC/Config.pm | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
index 8557e4c..8a25343 100644
--- a/src/PVE/LXC/Config.pm
+++ b/src/PVE/LXC/Config.pm
@@ -1556,6 +1556,15 @@ sub valid_volume_keys {
return $reverse ? reverse @names : @names;
}
+sub valid_volume_keys_with_unused {
+ my ($class, $reverse) = @_;
+ my @names = $class->valid_volume_keys();
+ for (my $i = 0; $i < $MAX_UNUSED_DISKS; $i++) {
+ push @names, "unused$i";
+ }
+ return $reverse ? reverse @names : @names;
+}
+
sub get_vm_volumes {
my ($class, $conf, $excludes) = @_;
--
2.30.2
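The new helper simply extends the existing volume key list with unusedX entries. A minimal Python sketch of that enumeration (key names and the limit of 8 are illustrative; the real Perl code uses $MAX_UNUSED_DISKS and PVE::LXC::Config->valid_volume_keys):

```python
def valid_volume_keys(max_mps=8):
    # stand-in for the existing valid_volume_keys: rootfs plus mpX keys
    return ["rootfs"] + [f"mp{i}" for i in range(max_mps)]

def valid_volume_keys_with_unused(max_unused=8, reverse=False):
    # same keys, extended with unused0..unusedN, optionally reversed,
    # mirroring the structure of the patched helper
    names = valid_volume_keys() + [f"unused{i}" for i in range(max_unused)]
    return list(reversed(names)) if reverse else names
```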
* [pve-devel] [PATCH v5 container 8/9] api: move-volume: add move to another container
2021-11-09 14:55 [pve-devel] [PATCH v5 storage qemu-server container 0/9] move disk or volume to other guests Aaron Lauterer
` (6 preceding siblings ...)
2021-11-09 14:55 ` [pve-devel] [PATCH v5 container 7/9] Config: add valid_volume_keys_with_unused Aaron Lauterer
@ 2021-11-09 14:55 ` Aaron Lauterer
2021-11-09 14:55 ` [pve-devel] [PATCH v5 container 9/9] api: move-volume: cleanup very long lines Aaron Lauterer
2021-11-09 16:52 ` [pve-devel] applied-series: [PATCH v5 storage qemu-server container 0/9] move disk or volume to other guests Fabian Grünbichler
9 siblings, 0 replies; 11+ messages in thread
From: Aaron Lauterer @ 2021-11-09 14:55 UTC (permalink / raw)
To: pve-devel
The goal of this is to expand the move-volume API endpoint to make it
possible to move a container volume / mount point to another container.
The API parameters have been changed to also allow unused volumes. This
meant introducing additional checks to avoid migrating an unusedX
volume to another storage. Some follow-up work is needed for that to
work properly.
Moving the rootfs from or to another container is prohibited.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
This is mostly the code from qemu-server with some adaptations, mainly
error messages and some checks.
Previous checks have been moved to '$move_to_storage_checks'.
v4(follow up) -> v5:
* dropped call to PVE::Storage::find_free_volname as it is gone in this
version
* adapted call to PVE::Storage::rename_volume by now sending the
target_vmid instead of the target_volname
v4 -> v4(follow up):
In this follow up I introduce the capability to allow moving 'unusedX'
volumes from one container to another.
This includes:
* changing the API parameters to allow 'unused' config keys, with the
needed method in LXC/Config.pm to enumerate them.
* added a check to prevent unused volumes from being migrated to other
storages. Needs some more work to function properly and is on my todo
list.
* the $volume_reassignfn has seen the most changes, as unused volumes
need to be handled differently than 'mpX' ones when adding them to the
target CT (writing directly to the config, like when doing a 'rescan')
Other changes that happened:
* added checks to prevent moving the rootfs to/from a container
* removed a double check if the container is running
* rephrased an occurrence of 'disk' to 'volume'
v3 -> v4:
* add check to avoid potential undefined warning in $vmlist->{$vmid}
* add check if volume is part of a snapshot
* removed line bloat
v2 -> v3:
* mention default volume key in description
* use $rpcenv->warn should replication snapshot removal fail
* also print the original error
* code style things
v1 -> v2:
* change --target-mp to --target-volume
* make --target-volume optional and fallback to source mount point
* use parse_volname instead of custom regex
* adapt to find_free_volname
* print warnings from update_pct_config
* move running check back in both lock_config sections
* fixed a few style issues
rfc -> v1:
* add check if target guest is replicated and fail if the moved volume
does not support it
* check if the source volume has a format suffix and pass it to
'find_free_disk', and check whether the prefix is vm/subvol, as those
also have their own meaning; see the comment in the code
* fixed some style nits
src/PVE/API2/LXC.pm | 269 ++++++++++++++++++++++++++++++++++++++++----
1 file changed, 247 insertions(+), 22 deletions(-)
diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 69df366..8c508bd 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -1813,10 +1813,12 @@ __PACKAGE__->register_method({
method => 'POST',
protected => 1,
proxyto => 'node',
- description => "Move a rootfs-/mp-volume to a different storage",
+ description => "Move a rootfs-/mp-volume to a different storage or to a different container.",
permissions => {
description => "You need 'VM.Config.Disk' permissions on /vms/{vmid}, " .
- "and 'Datastore.AllocateSpace' permissions on the storage.",
+ "and 'Datastore.AllocateSpace' permissions on the storage. To move ".
+ "a volume to another container, you need the permissions on the ".
+ "target container as well.",
check =>
[ 'and',
['perm', '/vms/{vmid}', [ 'VM.Config.Disk' ]],
@@ -1828,14 +1830,20 @@ __PACKAGE__->register_method({
properties => {
node => get_standard_option('pve-node'),
vmid => get_standard_option('pve-vmid', { completion => \&PVE::LXC::complete_ctid }),
+ 'target-vmid' => get_standard_option('pve-vmid', {
+ completion => \&PVE::LXC::complete_ctid,
+ optional => 1,
+ }),
volume => {
type => 'string',
- enum => [ PVE::LXC::Config->valid_volume_keys() ],
+ #TODO: check how to handle unused mount points as the mp parameter is not configured
+ enum => [ PVE::LXC::Config->valid_volume_keys_with_unused() ],
description => "Volume which will be moved.",
},
storage => get_standard_option('pve-storage-id', {
description => "Target Storage.",
completion => \&PVE::Storage::complete_storage_enabled,
+ optional => 1,
}),
delete => {
type => 'boolean',
@@ -1856,6 +1864,21 @@ __PACKAGE__->register_method({
minimum => '0',
default => 'clone limit from datacenter or storage config',
},
+ 'target-volume' => {
+ type => 'string',
+ description => "The config key the volume will be moved to. Default is the " .
+ "source volume key.",
+ enum => [PVE::LXC::Config->valid_volume_keys_with_unused()],
+ optional => 1,
+ },
+ 'target-digest' => {
+ type => 'string',
+ description => "Prevent changes if current configuration file of the target " .
+ "container has a different SHA1 digest. This can be used to prevent " .
+ "concurrent modifications.",
+ maxLength => 40,
+ optional => 1,
+ },
},
},
returns => {
@@ -1870,32 +1893,54 @@ __PACKAGE__->register_method({
my $vmid = extract_param($param, 'vmid');
+ my $target_vmid = extract_param($param, 'target-vmid');
+
my $storage = extract_param($param, 'storage');
my $mpkey = extract_param($param, 'volume');
+ my $target_mpkey = extract_param($param, 'target-volume') // $mpkey;
+
+ my $digest = extract_param($param, 'digest');
+
+ my $target_digest = extract_param($param, 'target-digest');
+
my $lockname = 'disk';
my ($mpdata, $old_volid);
- PVE::LXC::Config->lock_config($vmid, sub {
- my $conf = PVE::LXC::Config->load_config($vmid);
- PVE::LXC::Config->check_lock($conf);
+ die "either set storage or target-vmid, but not both\n"
+ if $storage && $target_vmid;
- die "cannot move volumes of a running container\n" if PVE::LXC::check_running($vmid);
+ my $storecfg = PVE::Storage::config();
+ my $source_volid;
- $mpdata = PVE::LXC::Config->parse_volume($mpkey, $conf->{$mpkey});
- $old_volid = $mpdata->{volume};
+ my $move_to_storage_checks = sub {
+ PVE::LXC::Config->lock_config($vmid, sub {
+ my $conf = PVE::LXC::Config->load_config($vmid);
+ PVE::LXC::Config->check_lock($conf);
- die "you can't move a volume with snapshots and delete the source\n"
- if $param->{delete} && PVE::LXC::Config->is_volume_in_use_by_snapshots($conf, $old_volid);
+ die "cannot move volumes of a running container\n"
+ if PVE::LXC::check_running($vmid);
- PVE::Tools::assert_if_modified($param->{digest}, $conf->{digest});
+ if ($mpkey =~ m/^unused\d+$/) {
+ die "cannot move volume '$mpkey', only configured volumes can be moved to ".
+ "another storage\n";
+ }
- PVE::LXC::Config->set_lock($vmid, $lockname);
- });
+ $mpdata = PVE::LXC::Config->parse_volume($mpkey, $conf->{$mpkey});
+ $old_volid = $mpdata->{volume};
- my $realcmd = sub {
+ die "you can't move a volume with snapshots and delete the source\n"
+ if $param->{delete} && PVE::LXC::Config->is_volume_in_use_by_snapshots($conf, $old_volid);
+
+ PVE::Tools::assert_if_modified($digest, $conf->{digest});
+
+ PVE::LXC::Config->set_lock($vmid, $lockname);
+ });
+ };
+
+ my $storage_realcmd = sub {
eval {
PVE::Cluster::log_msg('info', $authuser, "move volume CT $vmid: move --volume $mpkey --storage $storage");
@@ -1965,15 +2010,195 @@ __PACKAGE__->register_method({
warn $@ if $@;
die $err if $err;
};
- my $task = eval {
- $rpcenv->fork_worker('move_volume', $vmid, $authuser, $realcmd);
+
+ my $load_and_check_reassign_configs = sub {
+ my $vmlist = PVE::Cluster::get_vmlist()->{ids};
+
+ die "Cannot move to/from 'rootfs'\n" if $mpkey eq "rootfs" || $target_mpkey eq "rootfs";
+
+ if ($mpkey =~ m/^unused\d+$/ && $target_mpkey !~ m/^unused\d+$/) {
+ die "Moving an unused volume to a used one is not possible\n";
+ }
+ die "could not find CT ${vmid}\n" if !exists($vmlist->{$vmid});
+
+ die "Both containers need to be on the same node ($vmlist->{$vmid}->{node}) ".
+ "but target container is on $vmlist->{$target_vmid}->{node}.\n"
+ if $vmlist->{$vmid}->{node} ne $vmlist->{$target_vmid}->{node};
+
+ my $source_conf = PVE::LXC::Config->load_config($vmid);
+ PVE::LXC::Config->check_lock($source_conf);
+ my $target_conf = PVE::LXC::Config->load_config($target_vmid);
+ PVE::LXC::Config->check_lock($target_conf);
+
+ die "Can't move volumes from or to template VMs\n"
+ if ($source_conf->{template} || $target_conf->{template});
+
+ if ($digest) {
+ eval { PVE::Tools::assert_if_modified($digest, $source_conf->{digest}) };
+ die "Container ${vmid}: $@" if $@;
+ }
+
+ if ($target_digest) {
+ eval { PVE::Tools::assert_if_modified($target_digest, $target_conf->{digest}) };
+ die "Container ${target_vmid}: $@" if $@;
+ }
+
+ die "volume '${mpkey}' for container '$vmid' does not exist\n"
+ if !defined($source_conf->{$mpkey});
+
+ die "Target volume key '${target_mpkey}' is already in use for container '$target_vmid'\n"
+ if exists $target_conf->{$target_mpkey};
+
+ my $drive = PVE::LXC::Config->parse_volume(
+ $mpkey,
+ $source_conf->{$mpkey},
+ );
+
+ $source_volid = $drive->{volume};
+
+ die "Volume '${mpkey}' has no associated image\n"
+ if !$source_volid;
+ die "Cannot move volume used by a snapshot to another container\n"
+ if PVE::LXC::Config->is_volume_in_use_by_snapshots($source_conf, $source_volid);
+ die "Storage does not support moving of this disk to another container\n"
+ if !PVE::Storage::volume_has_feature($storecfg, 'rename', $source_volid);
+ die "Cannot move a bind mount or device mount to another container\n"
+ if $drive->{type} ne "volume";
+ die "Cannot move volume to another container while the source container is running\n"
+ if PVE::LXC::check_running($vmid) && $mpkey !~ m/^unused\d+$/;
+
+ my $repl_conf = PVE::ReplicationConfig->new();
+ my $is_replicated = $repl_conf->check_for_existing_jobs($target_vmid, 1);
+ my ($storeid, undef) = PVE::Storage::parse_volume_id($source_volid);
+ my $format = (PVE::Storage::parse_volname($storecfg, $source_volid))[6];
+ if (
+ $is_replicated
+ && !PVE::Storage::storage_can_replicate($storecfg, $storeid, $format)
+ ) {
+ die "Cannot move volume to a replicated container. Storage " .
+ "does not support replication!\n";
+ }
+ return ($source_conf, $target_conf);
};
- if (my $err = $@) {
- eval { PVE::LXC::Config->remove_lock($vmid, $lockname) };
- warn $@ if $@;
- die $err;
+
+ my $logfunc = sub {
+ my ($msg) = @_;
+ print STDERR "$msg\n";
+ };
+
+ my $volume_reassignfn = sub {
+ return PVE::LXC::Config->lock_config($vmid, sub {
+ return PVE::LXC::Config->lock_config($target_vmid, sub {
+ my ($source_conf, $target_conf) = &$load_and_check_reassign_configs();
+
+ my $target_unused = $target_mpkey =~ m/^unused\d+$/;
+
+ my $drive_param = PVE::LXC::Config->parse_volume(
+ $mpkey,
+ $source_conf->{$mpkey},
+ );
+
+ print "moving volume '$mpkey' from container '$vmid' to '$target_vmid'\n";
+ my ($storage, $source_volname) = PVE::Storage::parse_volume_id($source_volid);
+
+ my $fmt = (PVE::Storage::parse_volname($storecfg, $source_volid))[6];
+
+ my $new_volid = PVE::Storage::rename_volume(
+ $storecfg,
+ $source_volid,
+ $target_vmid,
+ );
+
+ $drive_param->{volume} = $new_volid;
+
+ delete $source_conf->{$mpkey};
+ print "removing volume '${mpkey}' from container '${vmid}' config\n";
+ PVE::LXC::Config->write_config($vmid, $source_conf);
+
+ my $drive_string;
+
+ if ($target_unused) {
+ $drive_string = $new_volid;
+ } else {
+ $drive_string = PVE::LXC::Config->print_volume($target_mpkey, $drive_param);
+ }
+
+ if ($target_unused) {
+ $target_conf->{$target_mpkey} = $drive_string;
+ } else {
+ my $running = PVE::LXC::check_running($target_vmid);
+ my $param = { $target_mpkey => $drive_string };
+ my $errors = PVE::LXC::Config->update_pct_config(
+ $target_vmid,
+ $target_conf,
+ $running,
+ $param
+ );
+
+ foreach my $key (keys %$errors) {
+ $rpcenv->warn($errors->{$key});
+ }
+ }
+
+ PVE::LXC::Config->write_config($target_vmid, $target_conf);
+ $target_conf = PVE::LXC::Config->load_config($target_vmid);
+
+ PVE::LXC::update_lxc_config($target_vmid, $target_conf) if !$target_unused;
+ print "target container '$target_vmid' updated with '$target_mpkey'\n";
+
+ # remove possible replication snapshots
+ if (PVE::Storage::volume_has_feature($storecfg,'replicate', $source_volid)) {
+ eval {
+ PVE::Replication::prepare(
+ $storecfg,
+ [$new_volid],
+ undef,
+ 1,
+ undef,
+ $logfunc,
+ )
+ };
+ if (my $err = $@) {
+ $rpcenv->warn("Failed to remove replication snapshots on volume ".
+ "'${target_mpkey}'. Manual cleanup could be necessary. " .
+ "Error: ${err}\n");
+ }
+ }
+ });
+ });
+ };
+
+ if ($target_vmid) {
+ $rpcenv->check_vm_perm($authuser, $target_vmid, undef, ['VM.Config.Disk'])
+ if $authuser ne 'root@pam';
+
+ die "Moving a volume to the same container is not possible. Did you ".
+ "mean to move the volume to a different storage?\n"
+ if $vmid eq $target_vmid;
+
+ &$load_and_check_reassign_configs();
+ return $rpcenv->fork_worker(
+ 'move_volume',
+ "${vmid}-${mpkey}>${target_vmid}-${target_mpkey}",
+ $authuser,
+ $volume_reassignfn
+ );
+ } elsif ($storage) {
+ &$move_to_storage_checks();
+ my $task = eval {
+ $rpcenv->fork_worker('move_volume', $vmid, $authuser, $storage_realcmd);
+ };
+ if (my $err = $@) {
+ eval { PVE::LXC::Config->remove_lock($vmid, $lockname) };
+ warn $@ if $@;
+ die $err;
+ }
+ return $task;
+ } else {
+ die "Provide either a 'storage' to move the mount point to a ".
+ "different storage or 'target-vmid' and 'target-volume' to move ".
+ "the mount point to another container\n";
}
- return $task;
}});
__PACKAGE__->register_method({
--
2.30.2
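The endpoint's top-level branching on the new parameters boils down to: exactly one of `storage` or `target-vmid` must be given, and reassigning to the same container is rejected. A minimal Python sketch of that dispatch (function name and return values are illustrative, not the actual API):

```python
def dispatch_move_volume(vmid, storage=None, target_vmid=None):
    # mirrors the endpoint: either set storage or target-vmid, but not both
    if storage and target_vmid:
        raise ValueError("either set storage or target-vmid, but not both")
    if target_vmid:
        # reassign path: moving a volume to the same container makes no sense
        if vmid == target_vmid:
            raise ValueError("moving a volume to the same container is not possible")
        return "reassign"
    if storage:
        return "move-storage"
    raise ValueError("provide either 'storage' or 'target-vmid'")
```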
* [pve-devel] [PATCH v5 container 9/9] api: move-volume: cleanup very long lines
2021-11-09 14:55 [pve-devel] [PATCH v5 storage qemu-server container 0/9] move disk or volume to other guests Aaron Lauterer
` (7 preceding siblings ...)
2021-11-09 14:55 ` [pve-devel] [PATCH v5 container 8/9] api: move-volume: add move to another container Aaron Lauterer
@ 2021-11-09 14:55 ` Aaron Lauterer
2021-11-09 16:52 ` [pve-devel] applied-series: [PATCH v5 storage qemu-server container 0/9] move disk or volume to other guests Fabian Grünbichler
9 siblings, 0 replies; 11+ messages in thread
From: Aaron Lauterer @ 2021-11-09 14:55 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
src/PVE/API2/LXC.pm | 33 +++++++++++++++++++++++++++------
1 file changed, 27 insertions(+), 6 deletions(-)
diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 8c508bd..eab9f27 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -1847,13 +1847,15 @@ __PACKAGE__->register_method({
}),
delete => {
type => 'boolean',
- description => "Delete the original volume after successful copy. By default the original is kept as an unused volume entry.",
+ description => "Delete the original volume after successful copy. By default the " .
+ "original is kept as an unused volume entry.",
optional => 1,
default => 0,
},
digest => {
type => 'string',
- description => 'Prevent changes if current configuration file has different SHA1 digest. This can be used to prevent concurrent modifications.',
+ description => "Prevent changes if current configuration file has different SHA1 " .
+ "digest. This can be used to prevent concurrent modifications.",
maxLength => 40,
optional => 1,
},
@@ -1942,7 +1944,11 @@ __PACKAGE__->register_method({
my $storage_realcmd = sub {
eval {
- PVE::Cluster::log_msg('info', $authuser, "move volume CT $vmid: move --volume $mpkey --storage $storage");
+ PVE::Cluster::log_msg(
+ 'info',
+ $authuser,
+ "move volume CT $vmid: move --volume $mpkey --storage $storage"
+ );
my $conf = PVE::LXC::Config->load_config($vmid);
my $storage_cfg = PVE::Storage::config();
@@ -1953,8 +1959,20 @@ __PACKAGE__->register_method({
PVE::Storage::activate_volumes($storage_cfg, [ $old_volid ]);
my $bwlimit = extract_param($param, 'bwlimit');
my $source_storage = PVE::Storage::parse_volume_id($old_volid);
- my $movelimit = PVE::Storage::get_bandwidth_limit('move', [$source_storage, $storage], $bwlimit);
- $new_volid = PVE::LXC::copy_volume($mpdata, $vmid, $storage, $storage_cfg, $conf, undef, $movelimit);
+ my $movelimit = PVE::Storage::get_bandwidth_limit(
+ 'move',
+ [$source_storage, $storage],
+ $bwlimit
+ );
+ $new_volid = PVE::LXC::copy_volume(
+ $mpdata,
+ $vmid,
+ $storage,
+ $storage_cfg,
+ $conf,
+ undef,
+ $movelimit
+ );
if (PVE::LXC::Config->is_template($conf)) {
PVE::Storage::activate_volumes($storage_cfg, [ $new_volid ]);
my $template_volid = PVE::Storage::vdisk_create_base($storage_cfg, $new_volid);
@@ -1968,7 +1986,10 @@ __PACKAGE__->register_method({
$conf = PVE::LXC::Config->load_config($vmid);
PVE::Tools::assert_if_modified($digest, $conf->{digest});
- $conf->{$mpkey} = PVE::LXC::Config->print_ct_mountpoint($mpdata, $mpkey eq 'rootfs');
+ $conf->{$mpkey} = PVE::LXC::Config->print_ct_mountpoint(
+ $mpdata,
+ $mpkey eq 'rootfs'
+ );
PVE::LXC::Config->add_unused_volume($conf, $old_volid) if !$param->{delete};
--
2.30.2
* [pve-devel] applied-series: [PATCH v5 storage qemu-server container 0/9] move disk or volume to other guests
2021-11-09 14:55 [pve-devel] [PATCH v5 storage qemu-server container 0/9] move disk or volume to other guests Aaron Lauterer
` (8 preceding siblings ...)
2021-11-09 14:55 ` [pve-devel] [PATCH v5 container 9/9] api: move-volume: cleanup very long lines Aaron Lauterer
@ 2021-11-09 16:52 ` Fabian Grünbichler
9 siblings, 0 replies; 11+ messages in thread
From: Fabian Grünbichler @ 2021-11-09 16:52 UTC (permalink / raw)
To: Proxmox VE development discussion
with a small fixup for the ApiChangeLog conflicts, and a more detailed
explanation in the pve-storage commit message. I'll likely do a few more
style follow-ups tomorrow ;)
On November 9, 2021 3:55 pm, Aaron Lauterer wrote:
> This version 5 came to be after Fabian noticed that exposing the
> 'find_free_volname' could cause some trouble down the line, since it
> could be seen as an incompatible change in the storage API, especially
> if other parts start using it as well.
>
> We discussed it off list and the current version is the result of that.
> We drop all of find_free_volname for now and change the parameters given
> to the rename functions which now take the source volid, target vmid and
> target volname. The latter two are either or optional. If the target
> vmid is not given, it will be set to the source vmid. If no target
> volname is given, the rename function will request the next free one
> itself within the storage plugin (find_free_diskname).
>
> This makes it possible to keep our storage api compatible and we can
> continue working on making it possible to give disk images a more custom
> name. The rename functions as implemented should be able to deal with
> this situation already (only did light testing on this).
>
> If, in the future it is necessary to have something like
> find_free_volname as part of the storage api, we can add it again at the
> right time, where a storage api version bump is okay.
>
> We will have to discuss if we want to use v5 or are okay with v4.
>
> Additionally I split up adding valid unused volume keys in the
> LXC/Config.pm into its own patch, matching how it is done on the Qemu side.
>
> Previous cover letter follows:
> ------------------------------
>
> This is the continuation of 'disk-reassign' but instead of a separate
> API endpoint we now follow the approach to make it part of the
> 'move-disk' and 'move-volume' endpoints for VMs and containers.
>
> The main idea is to make it easy to move a disk/volume to another guest.
> Currently this is a manual and error prone process that requires
> knowledge of how PVE handles disks/volumes and the mapping which guest
> they belong to.
>
> With this, the 'qm move-disk' and 'pct move-volume' are changed in the
> way that the storage parameter is optional as well as the new
> target-vmid and target-{disk,volume}. This will keep old calls to move the
> disk/volume to another storage working. To move to another guest, the
> storage needs to be omitted.
>
> The following storage types are implemented at the moment:
> * dir based ones
> * ZFS
> * (thin) LVM
> * Ceph RBD
>
> Most parts of the disk-reassign code have been taken and moved into the
> 'move_disk' and 'move_volume' endpoints, with conditional checks that
> decide, depending on the given parameters, whether the reassign code or
> the move-to-another-storage code runs.
>
> Changes since v4:
> * remove find_free_volname completely
> * change rename functions to take either target_vmid or target_volname
> as parameter. The target_volname as parameter makes it possible to
> easily implement custom renaming in the future. If no target_volname
> is given, it will use find_free_diskname to find the next free one.
> * removed calls to find_free_volname in the container and qemu APIs
>
> Changes since v3:
> * added check if $vmid exists in $vmlist
> * added check if the volume is part of a snapshot (containers, qemu already
> had it)
> * added section in storage/ApiChangeLog
> * code style issues
>
> Changes since v2:
> * fixed base image handling
> * fixed code style issues
>
> Changes since v1 [2] (thx @ Fabian_E for the reviews!):
> * drop exposed 'find_free_diskname' method
> * drop 'wants_fmt_suffix' method (not needed anymore)
> * introduce 'find_free_volname' which decides if only the diskname is
> needed or the longer path for directory based storages
> * use $source_volname instead of $source_volid -> avoids some extra
> calls to get to $source_volname again
> * make --target-{disk,volume} optional and fall back to source key
> * smaller fixes in code quality and using existing functions like
> 'parse_volname' instead of a custom regex (possible with the new
> changes)
>
>
> Changes since the RFC [1]:
> * added check if target guest is replicated and fail if storage does not
> support replication
> * only pass minimum of needed parameters to the storage layer and infer
> other needed information from that
> * lock storage and check if the volume already exists (handling a
> possible race condition between calling find_free_disk and the actual
> renaming)
> * use a helper method to determine if the plugin needs the fmt suffix
> in the volume name
> * getting format of the source and pass it to find_free_disk
> * style fixes (long lines, multiline post-if, ...)
>
> [1] https://lists.proxmox.com/pipermail/pve-devel/2021-June/048400.html
> [2] https://lists.proxmox.com/pipermail/pve-devel/2021-July/049445.html
>
> storage: Aaron Lauterer (1):
> add disk rename feature
>
> ApiChangeLog | 10 ++++++++++
> PVE/Storage.pm | 25 ++++++++++++++++++++++--
> PVE/Storage/LVMPlugin.pm | 34 ++++++++++++++++++++++++++++++++
> PVE/Storage/LvmThinPlugin.pm | 1 +
> PVE/Storage/Plugin.pm | 38 ++++++++++++++++++++++++++++++++++++
> PVE/Storage/RBDPlugin.pm | 34 ++++++++++++++++++++++++++++++++
> PVE/Storage/ZFSPoolPlugin.pm | 31 +++++++++++++++++++++++++++++
> 7 files changed, 171 insertions(+), 2 deletions(-)
>
>
> qemu-server: Aaron Lauterer (4):
> cli: qm: change move_disk to move-disk
> Drive: add valid_drive_names_with_unused
> api: move-disk: add move to other VM
> api: move-disk: cleanup very long lines
>
> PVE/API2/Qemu.pm | 227 ++++++++++++++++++++++++++++++++++++++--
> PVE/CLI/qm.pm | 3 +-
> PVE/QemuServer/Drive.pm | 4 +
> 3 files changed, 223 insertions(+), 11 deletions(-)
>
> container: Aaron Lauterer (4):
> cli: pct: change move_volume to move-volume
> Config: add valid_volume_keys_with_unused
> api: move-volume: add move to another container
> api: move-volume: cleanup very long lines
>
> src/PVE/API2/LXC.pm | 302 ++++++++++++++++++++++++++++++++++++++----
> src/PVE/CLI/pct.pm | 3 +-
> src/PVE/LXC/Config.pm | 9 ++
> 3 files changed, 285 insertions(+), 29 deletions(-)
>
>
> --
> 2.30.2
>
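The parameter defaulting described in the cover letter above can be sketched as follows. This is a hypothetical helper with a simplified 'storeid:vmid/volname' volid format, not the real plugin code; the actual plugins call find_free_diskname for the free-name lookup:

```python
def rename_volume(source_volid, target_vmid=None, target_volname=None):
    # sketch only: parse a simplified 'storeid:vmid/name' style volid
    storeid, volname = source_volid.split(":", 1)
    source_vmid = volname.split("/", 1)[0]
    if target_vmid is None:
        target_vmid = source_vmid          # default: keep the current owner
    if target_volname is None:
        # a real plugin would ask find_free_diskname for the next free name
        target_volname = f"vm-{target_vmid}-disk-0"
    return f"{storeid}:{target_vmid}/{target_volname}"
```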