* [pve-devel] [PATCH v1 storage qemu-server container 0/9] move disk or volume to other guests
@ 2021-07-19 14:52 Aaron Lauterer
2021-07-19 14:52 ` [pve-devel] [PATCH v1 storage 1/9] storage: expose find_free_diskname Aaron Lauterer
` (9 more replies)
0 siblings, 10 replies; 22+ messages in thread
From: Aaron Lauterer @ 2021-07-19 14:52 UTC (permalink / raw)
To: pve-devel
This is the continuation of 'disk-reassign', but instead of a separate
API endpoint we now make it part of the 'move-disk' and 'move-volume'
endpoints for VMs and containers.
The main idea is to make it easy to move a disk/volume to another guest.
Currently this is a manual and error-prone process that requires
knowledge of how PVE handles disks/volumes and of which guest they
belong to.
With this, the 'qm move-disk' and 'pct move-volume' commands are changed
so that the storage parameter is optional, as are the new target-vmid
and target-{disk,mp} parameters. This keeps old calls that move a
disk/volume to another storage working. To move to another guest, the
storage parameter needs to be omitted.
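For example, the intended usage would look something like this
(hypothetical VMIDs and storage names):

    # old behavior, still working: move a disk to another storage
    qm move-disk 100 scsi1 other-storage

    # new: move a disk to another VM, the storage parameter is omitted
    qm move-disk 100 scsi1 --target-vmid 101 --target-disk scsi2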
The major change since the last iteration as a dedicated API endpoint [0]
is that the storage layer now only implements the renaming itself. The
layer above (qemu-server and pve-container) defines the name of the new
volume/disk. Therefore it was necessary to expose the
'find_free_diskname' function. The rename function on the storage layer
handles possible template references and the creation of the new volid,
as that is highly dependent on the actual storage.
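In other words, a caller above the storage layer would do something
along these lines (simplified sketch, not the exact qemu-server code):

    # ask the storage layer for a free disk name on the target storage ...
    my $target_volname = PVE::Storage::find_free_diskname(
        $storecfg, $storeid, $target_vmid, $fmt);

    # ... and let it do the actual rename, which returns the new volid
    my $new_volid = PVE::Storage::rename_volume(
        $storecfg, $source_volid, $target_volname);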
The following storage types are implemented at the moment:
* dir based ones
* ZFS
* (thin) LVM
* Ceph RBD
Most parts of the disk-reassign code have been taken and moved into the
'move_disk' and 'move_volume' endpoints, which conditionally run either
the reassign code or the move-to-another-storage code, depending on the
given parameters.
Changes since the RFC [1]:
* added check if target guest is replicated and fail if storage does not
support replication
* only pass minimum of needed parameters to the storage layer and infer
other needed information from that
* lock storage and check if the volume already exists (handling a
possible race condition between calling find_free_diskname and the
actual renaming)
* use a helper method to determine if the plugin needs the fmt suffix
in the volume name
* get the format of the source volume and pass it to find_free_diskname
* style fixes (long lines, multiline post-if, ...)
[0] https://lists.proxmox.com/pipermail/pve-devel/2021-April/047481.html
[1] https://lists.proxmox.com/pipermail/pve-devel/2021-June/048400.html
storage: Aaron Lauterer (2):
storage: expose find_free_diskname
add disk rename feature
PVE/Storage.pm | 31 +++++++++++++++--
PVE/Storage/LVMPlugin.pm | 32 ++++++++++++++++++
PVE/Storage/LvmThinPlugin.pm | 1 +
PVE/Storage/Plugin.pm | 65 ++++++++++++++++++++++++++++++++++++
PVE/Storage/RBDPlugin.pm | 34 +++++++++++++++++++
PVE/Storage/ZFSPoolPlugin.pm | 29 ++++++++++++++++
6 files changed, 190 insertions(+), 2 deletions(-)
qemu-server: Aaron Lauterer (4):
cli: qm: change move_disk to move-disk
Drive: add valid_drive_names_with_unused
api: move-disk: add move to other VM
api: move-disk: cleanup very long lines
PVE/API2/Qemu.pm | 254 ++++++++++++++++++++++++++++++++++++++--
PVE/CLI/qm.pm | 3 +-
PVE/QemuServer/Drive.pm | 4 +
3 files changed, 250 insertions(+), 11 deletions(-)
container: Aaron Lauterer (3):
cli: pct: change move_volume to move-volume
api: move-volume: add move to another container
api: move-volume: cleanup very long lines
src/PVE/API2/LXC.pm | 303 ++++++++++++++++++++++++++++++++++++++++----
src/PVE/CLI/pct.pm | 3 +-
2 files changed, 278 insertions(+), 28 deletions(-)
--
2.30.2
* [pve-devel] [PATCH v1 storage 1/9] storage: expose find_free_diskname
2021-07-19 14:52 [pve-devel] [PATCH v1 storage qemu-server container 0/9] move disk or volume to other guests Aaron Lauterer
@ 2021-07-19 14:52 ` Aaron Lauterer
2021-08-02 12:56 ` Fabian Ebner
2021-07-19 14:52 ` [pve-devel] [PATCH v1 storage 2/9] add disk rename feature Aaron Lauterer
` (8 subsequent siblings)
9 siblings, 1 reply; 22+ messages in thread
From: Aaron Lauterer @ 2021-07-19 14:52 UTC (permalink / raw)
To: pve-devel
We do not expose the parameter 'add_fmt_suffix' used by the internal
implementation of 'find_free_diskname'. This is something only the
plugins themselves know and it cannot be determined easily and reliably
by an outside caller.
This is why the new 'wants_fmt_suffix' method has been introduced. For
most plugins the return value is very clear. For the default
implementation in Plugin.pm we add another check to be on the safe side
and only return true if the '$scfg->{path}' option is present.
It indicates that the volume in question is an actual file which will
need the suffix.
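With that in place, an outside caller only needs the storage ID, VMID
and format. For example (hypothetical storage IDs and return values):

    # dir based storage: wants_fmt_suffix is true, suffix gets appended
    my $name = PVE::Storage::find_free_diskname($cfg, 'local', 100, 'qcow2');
    # e.g. 'vm-100-disk-1.qcow2'

    # ZFS/LVM/RBD: wants_fmt_suffix is false, no suffix
    my $name2 = PVE::Storage::find_free_diskname($cfg, 'local-zfs', 100, 'raw');
    # e.g. 'vm-100-disk-1'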
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
rfc -> v1:
dropped $add_fmt_suffix parameter and added the "wants_fmt_suffix"
helper method in each plugin.
PVE/Storage.pm | 11 +++++++++++
PVE/Storage/LVMPlugin.pm | 5 +++++
PVE/Storage/Plugin.pm | 7 +++++++
PVE/Storage/RBDPlugin.pm | 5 +++++
PVE/Storage/ZFSPoolPlugin.pm | 5 +++++
5 files changed, 33 insertions(+)
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index c04b5a2..afeb2e3 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -203,6 +203,17 @@ sub storage_can_replicate {
return $plugin->storage_can_replicate($scfg, $storeid, $format);
}
+sub find_free_diskname {
+ my ($cfg, $storeid, $vmid, $fmt) = @_;
+
+ my $scfg = storage_config($cfg, $storeid);
+ my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
+
+ my $add_fmt_suffix = $plugin->wants_fmt_suffix($scfg);
+
+ return $plugin->find_free_diskname($storeid, $scfg, $vmid, $fmt, $add_fmt_suffix);
+}
+
sub storage_ids {
my ($cfg) = @_;
diff --git a/PVE/Storage/LVMPlugin.pm b/PVE/Storage/LVMPlugin.pm
index 139d391..3e5b6c8 100644
--- a/PVE/Storage/LVMPlugin.pm
+++ b/PVE/Storage/LVMPlugin.pm
@@ -201,6 +201,11 @@ sub type {
return 'lvm';
}
+sub wants_fmt_suffix {
+ my ($class, $scfg) = @_;
+ return 0;
+}
+
sub plugindata {
return {
content => [ {images => 1, rootdir => 1}, { images => 1 }],
diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index b1865cb..5c6c659 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -191,6 +191,13 @@ sub default_format {
return wantarray ? ($def_format, $valid_formats) : $def_format;
}
+sub wants_fmt_suffix {
+ my ($class, $scfg) = @_;
+ return 1 if $scfg->{path};
+ return 0;
+}
+
+
PVE::JSONSchema::register_format('pve-storage-path', \&verify_path);
sub verify_path {
my ($path, $noerr) = @_;
diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index a8d1243..86ea45a 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlugin.pm
@@ -273,6 +273,11 @@ sub type {
return 'rbd';
}
+sub wants_fmt_suffix {
+ my ($class, $scfg) = @_;
+ return 0;
+}
+
sub plugindata {
return {
content => [ {images => 1, rootdir => 1}, { images => 1 }],
diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index c4be70f..85e2211 100644
--- a/PVE/Storage/ZFSPoolPlugin.pm
+++ b/PVE/Storage/ZFSPoolPlugin.pm
@@ -18,6 +18,11 @@ sub type {
return 'zfspool';
}
+sub wants_fmt_suffix {
+ my ($class, $scfg) = @_;
+ return 0;
+}
+
sub plugindata {
return {
content => [ {images => 1, rootdir => 1}, {images => 1 , rootdir => 1}],
--
2.30.2
* [pve-devel] [PATCH v1 storage 2/9] add disk rename feature
2021-07-19 14:52 [pve-devel] [PATCH v1 storage qemu-server container 0/9] move disk or volume to other guests Aaron Lauterer
2021-07-19 14:52 ` [pve-devel] [PATCH v1 storage 1/9] storage: expose find_free_diskname Aaron Lauterer
@ 2021-07-19 14:52 ` Aaron Lauterer
2021-08-02 12:57 ` Fabian Ebner
2021-07-19 14:52 ` [pve-devel] [PATCH v1 qemu-server 3/9] cli: qm: change move_disk to move-disk Aaron Lauterer
` (7 subsequent siblings)
9 siblings, 1 reply; 22+ messages in thread
From: Aaron Lauterer @ 2021-07-19 14:52 UTC (permalink / raw)
To: pve-devel
Functionality has been added for the following storage types:
* directory ones, based on the default implementation:
* directory
* NFS
* CIFS
* gluster
* ZFS
* (thin) LVM
* Ceph
A new feature `rename` has been introduced to mark which storage
plugins support it.
APIVER and APIAGE have been bumped.
The storage gets locked and each plugin checks if the target volume
already exists prior to renaming.
This is done because there could be a race condition from the time the
external caller requests a new free disk name to the time the volume is
actually renamed.
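Callers are expected to check the new feature first and only then rename
through the storage layer, roughly like this (hypothetical volid and
target name):

    die "storage does not support renaming\n"
        if !PVE::Storage::volume_has_feature($cfg, 'rename', $source_volid);

    # locks the storage, checks for collisions, returns the new volid
    my $new_volid = PVE::Storage::rename_volume($cfg, $source_volid, $target_volname);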
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
rfc -> v1:
* reduced number of parameters to minimum needed, plugins infer needed
information themselves
* added storage locking and checking if volume already exists
* parse target_volname prior to renaming to check if valid
old dedicated API endpoint -> rfc:
only do the rename now, but the rename function handles templates and
returns the new volid, as this can be handled differently on some storages.
PVE/Storage.pm | 20 +++++++++++--
PVE/Storage/LVMPlugin.pm | 27 +++++++++++++++++
PVE/Storage/LvmThinPlugin.pm | 1 +
PVE/Storage/Plugin.pm | 58 ++++++++++++++++++++++++++++++++++++
PVE/Storage/RBDPlugin.pm | 29 ++++++++++++++++++
PVE/Storage/ZFSPoolPlugin.pm | 24 +++++++++++++++
6 files changed, 157 insertions(+), 2 deletions(-)
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index afeb2e3..f6d86e1 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -41,11 +41,11 @@ use PVE::Storage::PBSPlugin;
use PVE::Storage::BTRFSPlugin;
# Storage API version. Increment it on changes in storage API interface.
-use constant APIVER => 9;
+use constant APIVER => 10;
# Age is the number of versions we're backward compatible with.
# This is like having 'current=APIVER' and age='APIAGE' in libtool,
# see https://www.gnu.org/software/libtool/manual/html_node/Libtool-versioning.html
-use constant APIAGE => 0;
+use constant APIAGE => 1;
# load standard plugins
PVE::Storage::DirPlugin->register();
@@ -360,6 +360,7 @@ sub volume_snapshot_needs_fsfreeze {
# snapshot - taking a snapshot is possible
# sparseinit - volume is sparsely initialized
# template - conversion to base image is possible
+# rename - renaming volumes is possible
# $snap - check if the feature is supported for a given snapshot
# $running - if the guest owning the volume is running
# $opts - hash with further options:
@@ -1868,6 +1869,21 @@ sub complete_volume {
return $res;
}
+sub rename_volume {
+ my ($cfg, $source_volid, $target_volname) = @_;
+
+ my ($storeid) = parse_volume_id($source_volid);
+
+ activate_storage($cfg, $storeid);
+
+ my $scfg = storage_config($cfg, $storeid);
+ my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
+
+ return $plugin->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
+ return $plugin->rename_volume($scfg, $storeid, $source_volid, $target_volname);
+ });
+}
+
# Various io-heavy operations require io/bandwidth limits which can be
# configured on multiple levels: The global defaults in datacenter.cfg, and
# per-storage overrides. When we want to do a restore from storage A to storage
diff --git a/PVE/Storage/LVMPlugin.pm b/PVE/Storage/LVMPlugin.pm
index 3e5b6c8..7a13a96 100644
--- a/PVE/Storage/LVMPlugin.pm
+++ b/PVE/Storage/LVMPlugin.pm
@@ -344,6 +344,16 @@ sub lvcreate {
run_command($cmd, errmsg => "lvcreate '$vg/$name' error");
}
+sub lvrename {
+ my ($vg, $oldname, $newname) = @_;
+
+ my $cmd = ['/sbin/lvrename', $vg, $oldname, $newname];
+ run_command(
+ $cmd,
+ errmsg => "lvrename '${vg}/${oldname}' to '${newname}' error"
+ );
+}
+
sub alloc_image {
my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size) = @_;
@@ -589,6 +599,7 @@ sub volume_has_feature {
my $features = {
copy => { base => 1, current => 1},
+ rename => {current => 1},
};
my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
@@ -697,4 +708,20 @@ sub volume_import_write {
input => '<&'.fileno($input_fh));
}
+sub rename_volume {
+ my ($class, $scfg, $storeid, $source_volid, $target_volname) = @_;
+
+ $class->parse_volname($target_volname);
+
+ my (undef, $source_volname) = PVE::Storage::Plugin::parse_volume_id($source_volid);
+
+ my $vg = $scfg->{vgname};
+ my $lvs = lvm_list_volumes($vg);
+ die "target volume '${target_volname}' already exists\n"
+ if ($lvs->{$vg}->{$target_volname});
+
+ lvrename($scfg->{vgname}, $source_volname, $target_volname);
+ return "${storeid}:${target_volname}";
+}
+
1;
diff --git a/PVE/Storage/LvmThinPlugin.pm b/PVE/Storage/LvmThinPlugin.pm
index 4ba6f90..c24af22 100644
--- a/PVE/Storage/LvmThinPlugin.pm
+++ b/PVE/Storage/LvmThinPlugin.pm
@@ -355,6 +355,7 @@ sub volume_has_feature {
template => { current => 1},
copy => { base => 1, current => 1, snap => 1},
sparseinit => { base => 1, current => 1},
+ rename => {current => 1},
};
my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index 5c6c659..3f38a94 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -967,6 +967,7 @@ sub volume_has_feature {
snap => {qcow2 => 1} },
sparseinit => { base => {qcow2 => 1, raw => 1, vmdk => 1},
current => {qcow2 => 1, raw => 1, vmdk => 1} },
+ rename => { current => {qcow2 => 1, raw => 1, vmdk => 1} },
};
# clone_image creates a qcow2 volume
@@ -974,6 +975,14 @@ sub volume_has_feature {
defined($opts->{valid_target_formats}) &&
!(grep { $_ eq 'qcow2' } @{$opts->{valid_target_formats}});
+ if (
+ $feature eq 'rename'
+ && $class->can('api')
+ && $class->api() < 10
+ ) {
+ return 0;
+ }
+
my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
$class->parse_volname($volname);
@@ -1491,4 +1500,53 @@ sub volume_import_formats {
return ();
}
+sub rename_volume {
+ my ($class, $scfg, $storeid, $source_volid, $target_volname) = @_;
+ die "not implemented in storage plugin '$class'\n" if $class->can('api') && $class->api() < 10;
+
+ my $target_vmid;
+
+ if ($target_volname =~ m/^(base|vm)?-(\d+)-\S+$/) {
+ $target_vmid = $2;
+ } else {
+ die "could not parse target vmid\n";
+ }
+
+ $class->parse_volname("${target_vmid}/${target_volname}");
+
+ my (undef, $source_volname) = PVE::Storage::Plugin::parse_volume_id($source_volid);
+ my (
+ undef,
+ $source_name,
+ $source_vmid,
+ $base_name,
+ $base_vmid,
+ undef,
+ $format
+ ) = $class->parse_volname($source_volname);
+
+ my $basedir = $class->get_subdir($scfg, 'images');
+ # $source_volname includes the '<vmid>/' prefix
+ my $sourcedir = "${basedir}/${source_vmid}";
+ my $targetdir = "${basedir}/${target_vmid}";
+
+ mkpath $targetdir;
+
+ if ($target_volname !~ m/(vm|subvol)-\d+-\S+\.(raw|qcow2|vmdk)$/) {
+ $target_volname = "${target_volname}.${format}";
+ }
+
+ my $old_path = "${sourcedir}/${source_name}";
+ my $new_path = "${targetdir}/${target_volname}";
+
+ die "target volume '${target_volname}' already exists\n" if -e $new_path;
+
+ my $base = $base_name ? "${base_vmid}/${base_name}/" : '';
+
+ rename($old_path, $new_path) ||
+ die "rename '$old_path' to '$new_path' failed - $!\n";
+
+ return "${storeid}:${base}${target_vmid}/${target_volname}";
+}
+
1;
diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index 86ea45a..6ae846a 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlugin.pm
@@ -733,6 +733,7 @@ sub volume_has_feature {
template => { current => 1},
copy => { base => 1, current => 1, snap => 1},
sparseinit => { base => 1, current => 1},
+ rename => {current => 1},
};
my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) = $class->parse_volname($volname);
@@ -748,4 +749,32 @@ sub volume_has_feature {
return undef;
}
+sub rename_volume {
+ my ($class, $scfg, $storeid, $source_volid, $target_volname) = @_;
+
+ $class->parse_volname($target_volname);
+
+ my (undef, $source_volname) = PVE::Storage::Plugin::parse_volume_id($source_volid);
+ my (undef, undef, $source_vmid, $base_name) = $class->parse_volname($source_volname);
+
+ eval {
+ my $cmd = $rbd_cmd->($scfg, $storeid, 'info', $target_volname);
+ run_rbd_command($cmd, errmsg => "exist check", quiet => 1, noerr => 1);
+ };
+ my $err = $@;
+ die "target volume '${target_volname}' already exists\n"
+ if !$err;
+
+ my $cmd = $rbd_cmd->($scfg, $storeid, 'rename', $source_volname, $target_volname);
+
+ run_rbd_command(
+ $cmd,
+ errmsg => "could not rename image '${source_volname}' to '${target_volname}'",
+ );
+
+ $base_name = $base_name ? "${base_name}/" : '';
+
+ return "${storeid}:${base_name}${target_volname}";
+}
+
1;
diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index 85e2211..ac375e8 100644
--- a/PVE/Storage/ZFSPoolPlugin.pm
+++ b/PVE/Storage/ZFSPoolPlugin.pm
@@ -692,6 +692,7 @@ sub volume_has_feature {
copy => { base => 1, current => 1},
sparseinit => { base => 1, current => 1},
replicate => { base => 1, current => 1},
+ rename => {current => 1},
};
my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
@@ -794,4 +795,27 @@ sub volume_import_formats {
return $class->volume_export_formats($scfg, $storeid, $volname, $snapshot, $base_snapshot, $with_snapshots);
}
+sub rename_volume {
+ my ($class, $scfg, $storeid, $source_volid, $target_volname) = @_;
+
+ $class->parse_volname($target_volname);
+
+ my (undef, $source_volname) = PVE::Storage::Plugin::parse_volume_id($source_volid);
+ my (undef, undef, $source_vmid, $base_name) = $class->parse_volname($source_volname);
+
+ my $pool = $scfg->{pool};
+ my $source_zfspath = "${pool}/${source_volname}";
+ my $target_zfspath = "${pool}/${target_volname}";
+
+ my $exists = 0 == run_command(['zfs', 'get', '-H', 'name', $target_zfspath],
+ noerr => 1, quiet => 1);
+ die "target volume '${target_volname}' already exists\n" if $exists;
+
+ $class->zfs_request($scfg, 5, 'rename', ${source_zfspath}, ${target_zfspath});
+
+ $base_name = $base_name ? "${base_name}/" : '';
+
+ return "${storeid}:${base_name}${target_volname}";
+}
+
1;
--
2.30.2
* [pve-devel] [PATCH v1 qemu-server 3/9] cli: qm: change move_disk to move-disk
2021-07-19 14:52 [pve-devel] [PATCH v1 storage qemu-server container 0/9] move disk or volume to other guests Aaron Lauterer
2021-07-19 14:52 ` [pve-devel] [PATCH v1 storage 1/9] storage: expose find_free_diskname Aaron Lauterer
2021-07-19 14:52 ` [pve-devel] [PATCH v1 storage 2/9] add disk rename feature Aaron Lauterer
@ 2021-07-19 14:52 ` Aaron Lauterer
2021-07-19 14:52 ` [pve-devel] [PATCH v1 qemu-server 4/9] Drive: add valid_drive_names_with_unused Aaron Lauterer
` (6 subsequent siblings)
9 siblings, 0 replies; 22+ messages in thread
From: Aaron Lauterer @ 2021-07-19 14:52 UTC (permalink / raw)
To: pve-devel
also add an alias to keep move_disk working.
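Both spellings keep working, e.g. (hypothetical IDs):

    qm move-disk 100 scsi1 other-storage
    qm move_disk 100 scsi1 other-storage   # alias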
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
PVE/CLI/qm.pm | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
index 8307dc1..ef99b6d 100755
--- a/PVE/CLI/qm.pm
+++ b/PVE/CLI/qm.pm
@@ -910,7 +910,8 @@ our $cmddef = {
resize => [ "PVE::API2::Qemu", 'resize_vm', ['vmid', 'disk', 'size'], { node => $nodename } ],
- move_disk => [ "PVE::API2::Qemu", 'move_vm_disk', ['vmid', 'disk', 'storage'], { node => $nodename }, $upid_exit ],
+ 'move-disk' => [ "PVE::API2::Qemu", 'move_vm_disk', ['vmid', 'disk', 'storage'], { node => $nodename }, $upid_exit ],
+ move_disk => { alias => 'move-disk' },
unlink => [ "PVE::API2::Qemu", 'unlink', ['vmid'], { node => $nodename } ],
--
2.30.2
* [pve-devel] [PATCH v1 qemu-server 4/9] Drive: add valid_drive_names_with_unused
2021-07-19 14:52 [pve-devel] [PATCH v1 storage qemu-server container 0/9] move disk or volume to other guests Aaron Lauterer
` (2 preceding siblings ...)
2021-07-19 14:52 ` [pve-devel] [PATCH v1 qemu-server 3/9] cli: qm: change move_disk to move-disk Aaron Lauterer
@ 2021-07-19 14:52 ` Aaron Lauterer
2021-08-03 7:35 ` Fabian Ebner
2021-07-19 14:52 ` [pve-devel] [PATCH v1 qemu-server 5/9] api: move-disk: add move to other VM Aaron Lauterer
` (5 subsequent siblings)
9 siblings, 1 reply; 22+ messages in thread
From: Aaron Lauterer @ 2021-07-19 14:52 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
PVE/QemuServer/Drive.pm | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index 5110190..09f37c1 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -393,6 +393,10 @@ sub valid_drive_names {
'efidisk0');
}
+sub valid_drive_names_with_unused {
+ return (valid_drive_names(), map {"unused$_"} (0 .. ($MAX_UNUSED_DISKS -1)));
+}
+
sub is_valid_drivename {
my $dev = shift;
--
2.30.2
* [pve-devel] [PATCH v1 qemu-server 5/9] api: move-disk: add move to other VM
2021-07-19 14:52 [pve-devel] [PATCH v1 storage qemu-server container 0/9] move disk or volume to other guests Aaron Lauterer
` (3 preceding siblings ...)
2021-07-19 14:52 ` [pve-devel] [PATCH v1 qemu-server 4/9] Drive: add valid_drive_names_with_unused Aaron Lauterer
@ 2021-07-19 14:52 ` Aaron Lauterer
2021-08-03 7:34 ` Fabian Ebner
2021-07-19 14:52 ` [pve-devel] [PATCH v1 qemu-server 6/9] api: move-disk: cleanup very long lines Aaron Lauterer
` (4 subsequent siblings)
9 siblings, 1 reply; 22+ messages in thread
From: Aaron Lauterer @ 2021-07-19 14:52 UTC (permalink / raw)
To: pve-devel
The goal of this is to expand the move-disk API endpoint to make it
possible to move a disk to another VM. Previously this was only possible
with manual intervention, either by renaming the VM disk or by manually
adding the disk's volid to the config of the other VM.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
rfc -> v1:
* add check if target guest is replicated and fail if the moved volume
does not support it
* check if source volume has a format suffix and pass it to
'find_free_diskname'
* fixed some style nits
old dedicated api endpoint -> rfc:
There are some big changes here. The old [0] dedicated API endpoint is
gone and most of its code is now part of move_disk. Error messages have
been changed accordingly and sometimes enhanced by adding disk keys and
VMIDs where appropriate.
Since a move to another guest should be possible for unused disks, we
need to check before doing a move to storage that we are not handling
an unused disk.
[0] https://lists.proxmox.com/pipermail/pve-devel/2021-April/047738.html
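For example (hypothetical IDs), this means moving an unused disk to
another VM is allowed, while moving it to another storage is rejected:

    qm move-disk 100 unused0 --target-vmid 101 --target-disk unused0   # ok
    qm move-disk 100 unused0 other-storage                             # fails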
PVE/API2/Qemu.pm | 229 +++++++++++++++++++++++++++++++++++++++++++++--
PVE/CLI/qm.pm | 2 +-
2 files changed, 225 insertions(+), 6 deletions(-)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index f2557e3..ed1179b 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -35,6 +35,7 @@ use PVE::API2::Qemu::Agent;
use PVE::VZDump::Plugin;
use PVE::DataCenterConfig;
use PVE::SSHInfo;
+use PVE::Replication;
BEGIN {
if (!$ENV{PVE_GENERATING_DOCS}) {
@@ -3263,9 +3264,11 @@ __PACKAGE__->register_method({
method => 'POST',
protected => 1,
proxyto => 'node',
- description => "Move volume to different storage.",
+ description => "Move volume to different storage or to a different VM.",
permissions => {
- description => "You need 'VM.Config.Disk' permissions on /vms/{vmid}, and 'Datastore.AllocateSpace' permissions on the storage.",
+ description => "You need 'VM.Config.Disk' permissions on /vms/{vmid}, " .
+ "and 'Datastore.AllocateSpace' permissions on the storage. To move ".
+ "a disk to another VM, you need the permissions on the target VM as well.",
check => [ 'and',
['perm', '/vms/{vmid}', [ 'VM.Config.Disk' ]],
['perm', '/storage/{storage}', [ 'Datastore.AllocateSpace' ]],
@@ -3276,14 +3279,19 @@ __PACKAGE__->register_method({
properties => {
node => get_standard_option('pve-node'),
vmid => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
+ 'target-vmid' => get_standard_option('pve-vmid', {
+ completion => \&PVE::QemuServer::complete_vmid,
+ optional => 1,
+ }),
disk => {
type => 'string',
description => "The disk you want to move.",
- enum => [PVE::QemuServer::Drive::valid_drive_names()],
+ enum => [PVE::QemuServer::Drive::valid_drive_names_with_unused()],
},
storage => get_standard_option('pve-storage-id', {
description => "Target storage.",
completion => \&PVE::QemuServer::complete_storage,
+ optional => 1,
}),
'format' => {
type => 'string',
@@ -3310,6 +3318,20 @@ __PACKAGE__->register_method({
minimum => '0',
default => 'move limit from datacenter or storage config',
},
+ 'target-disk' => {
+ type => 'string',
+ description => "The config key the disk will be moved to on the target VM " .
+ "(for example, ide0 or scsi1).",
+ enum => [PVE::QemuServer::Drive::valid_drive_names_with_unused()],
+ optional => 1,
+ },
+ 'target-digest' => {
+ type => 'string',
+ description => "Prevent changes if current configuration file of the target VM has " .
+ "a different SHA1 digest. This can be used to prevent concurrent modifications.",
+ maxLength => 40,
+ optional => 1,
+ },
},
},
returns => {
@@ -3324,14 +3346,22 @@ __PACKAGE__->register_method({
my $node = extract_param($param, 'node');
my $vmid = extract_param($param, 'vmid');
+ my $target_vmid = extract_param($param, 'target-vmid');
my $digest = extract_param($param, 'digest');
+ my $target_digest = extract_param($param, 'target-digest');
my $disk = extract_param($param, 'disk');
+ my $target_disk = extract_param($param, 'target-disk');
my $storeid = extract_param($param, 'storage');
my $format = extract_param($param, 'format');
+ die "either set storage or target-vmid, but not both\n"
+ if $storeid && $target_vmid;
+
+
my $storecfg = PVE::Storage::config();
+ my $source_volid;
- my $updatefn = sub {
+ my $move_updatefn = sub {
my $conf = PVE::QemuConfig->load_config($vmid);
PVE::QemuConfig->check_lock($conf);
@@ -3441,7 +3471,196 @@ __PACKAGE__->register_method({
return $rpcenv->fork_worker('qmmove', $vmid, $authuser, $realcmd);
};
- return PVE::QemuConfig->lock_config($vmid, $updatefn);
+ my $load_and_check_reassign_configs = sub {
+ my $vmlist = PVE::Cluster::get_vmlist()->{ids};
+ die "Both VMs need to be on the same node ($vmlist->{$vmid}->{node}) ".
+ "but target VM is on $vmlist->{$target_vmid}->{node}.\n"
+ if $vmlist->{$vmid}->{node} ne $vmlist->{$target_vmid}->{node};
+
+ my $source_conf = PVE::QemuConfig->load_config($vmid);
+ PVE::QemuConfig->check_lock($source_conf);
+ my $target_conf = PVE::QemuConfig->load_config($target_vmid);
+ PVE::QemuConfig->check_lock($target_conf);
+
+ die "Can't move disks from or to template VMs\n"
+ if ($source_conf->{template} || $target_conf->{template});
+
+ if ($digest) {
+ eval { PVE::Tools::assert_if_modified($digest, $source_conf->{digest}) };
+ if (my $err = $@) {
+ die "VM ${vmid}: ${err}";
+ }
+ }
+
+ if ($target_digest) {
+ eval { PVE::Tools::assert_if_modified($target_digest, $target_conf->{digest}) };
+ if (my $err = $@) {
+ die "VM ${target_vmid}: ${err}";
+ }
+ }
+
+ die "Disk '${disk}' for VM '$vmid' does not exist\n"
+ if !defined($source_conf->{$disk});
+
+ die "Target disk key '${target_disk}' is already in use for VM '$target_vmid'\n"
+ if exists $target_conf->{$target_disk};
+
+ my $drive = PVE::QemuServer::parse_drive(
+ $disk,
+ $source_conf->{$disk},
+ );
+ $source_volid = $drive->{file};
+
+ die "disk '${disk}' has no associated volume\n"
+ if !$source_volid;
+ die "CD drive contents can't be moved to another VM\n"
+ if PVE::QemuServer::drive_is_cdrom($drive, 1);
+ die "Can't move physical disk to another VM\n" if $drive->{file} =~ m|^/dev/|;
+ die "Can't move disk used by a snapshot to another VM\n"
+ if PVE::QemuServer::Drive::is_volume_in_use(
+ $storecfg,
+ $source_conf,
+ $disk,
+ $source_volid,
+ );
+
+ die "Storage does not support moving of this disk to another VM\n"
+ if !PVE::Storage::volume_has_feature(
+ $storecfg,
+ 'rename',
+ $source_volid,
+ );
+
+ die "Cannot move disk to another VM while the source VM is running\n"
+ if PVE::QemuServer::check_running($vmid) && $disk !~ m/^unused\d+$/;
+
+ if ($target_disk !~ m/^unused\d+$/ && $target_disk =~ m/^([^\d]+)\d+$/) {
+ my $interface = $1;
+ my $desc = PVE::JSONSchema::get_standard_option("pve-qm-${interface}");
+ eval {
+ PVE::JSONSchema::parse_property_string(
+ $desc->{format},
+ $source_conf->{$disk},
+ )
+ };
+ if (my $err = $@) {
+ die "Cannot move disk to another VM: ${err}";
+ }
+ }
+
+ my $repl_conf = PVE::ReplicationConfig->new();
+ my $is_replicated = $repl_conf->check_for_existing_jobs($target_vmid, 1);
+ my ($storeid, undef) = PVE::Storage::parse_volume_id($source_volid);
+ my $format = (PVE::Storage::parse_volname($storecfg, $source_volid))[6];
+ if ($is_replicated && !PVE::Storage::storage_can_replicate($storecfg, $storeid, $format)) {
+ die "Cannot move disk to a replicated VM. Storage does not support replication!\n";
+ }
+
+ return ($source_conf, $target_conf);
+ };
+
+ my $logfunc = sub {
+ my ($msg) = @_;
+ print STDERR "$msg\n";
+ };
+
+ my $disk_reassignfn = sub {
+ return PVE::QemuConfig->lock_config($vmid, sub {
+ return PVE::QemuConfig->lock_config($target_vmid, sub {
+ my ($source_conf, $target_conf) = &$load_and_check_reassign_configs();
+
+ my $drive_param = PVE::QemuServer::parse_drive(
+ $target_disk,
+ $source_conf->{$disk},
+ );
+
+ print "moving disk '$disk' from VM '$vmid' to '$target_vmid'\n";
+ my ($storeid, $source_volname) = PVE::Storage::parse_volume_id($source_volid);
+
+ my $fmt = undef;
+ if ($source_volname =~ m/vm-\d+-\S+\.(raw|qcow2|vmdk)$/) {
+ $fmt = $1;
+ }
+
+ my $target_volname = PVE::Storage::find_free_diskname(
+ $storecfg,
+ $storeid,
+ $target_vmid,
+ $fmt
+ );
+
+ my $new_volid = PVE::Storage::rename_volume(
+ $storecfg,
+ $source_volid,
+ $target_volname,
+ );
+
+ $drive_param->{file} = $new_volid;
+
+ delete $source_conf->{$disk};
+ print "removing disk '${disk}' from VM '${vmid}' config\n";
+ PVE::QemuConfig->write_config($vmid, $source_conf);
+
+ my $drive_string = PVE::QemuServer::print_drive($drive_param);
+ &$update_vm_api(
+ {
+ node => $node,
+ vmid => $target_vmid,
+ digest => $target_digest,
+ $target_disk => $drive_string,
+ },
+ 1,
+ );
+
+ # remove possible replication snapshots
+ if (PVE::Storage::volume_has_feature(
+ $storecfg,
+ 'replicate',
+ $source_volid),
+ ) {
+ eval {
+ PVE::Replication::prepare(
+ $storecfg,
+ [$new_volid],
+ undef,
+ 1,
+ undef,
+ $logfunc,
+ )
+ };
+ if (my $err = $@) {
+ print "Failed to remove replication snapshots on moved disk " .
+ "'$target_disk'. Manual cleanup could be necessary.\n";
+ }
+ }
+ });
+ });
+ };
+
+ if ($target_vmid) {
+ $rpcenv->check_vm_perm($authuser, $target_vmid, undef, ['VM.Config.Disk'])
+ if $authuser ne 'root@pam';
+
+ die "Moving a disk to the same VM is not possible. Did you mean to ".
+ "move the disk to a different storage?\n"
+ if $vmid eq $target_vmid;
+
+ &$load_and_check_reassign_configs();
+ return $rpcenv->fork_worker(
+ 'qmmove',
+ "${vmid}-${disk}>${target_vmid}-${target_disk}",
+ $authuser,
+ $disk_reassignfn
+ );
+ } elsif ($storeid) {
+ die "cannot move disk '$disk', only configured disks can be moved to another storage\n"
+ if $disk =~ m/^unused\d+$/;
+ return PVE::QemuConfig->lock_config($vmid, $move_updatefn);
+ } else {
+ die "Provide either a 'storage' to move the disk to a different " .
+ "storage or 'target-vmid' and 'target-disk' to move the disk " .
+ "to another VM\n";
+ }
}});
my $check_vm_disks_local = sub {
diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
index ef99b6d..a92d301 100755
--- a/PVE/CLI/qm.pm
+++ b/PVE/CLI/qm.pm
@@ -910,7 +910,7 @@ our $cmddef = {
resize => [ "PVE::API2::Qemu", 'resize_vm', ['vmid', 'disk', 'size'], { node => $nodename } ],
- 'move-disk' => [ "PVE::API2::Qemu", 'move_vm_disk', ['vmid', 'disk', 'storage'], { node => $nodename }, $upid_exit ],
+ 'move-disk' => [ "PVE::API2::Qemu", 'move_vm_disk', ['vmid', 'disk', 'storage', 'target-vmid', 'target-disk'], { node => $nodename }, $upid_exit ],
move_disk => { alias => 'move-disk' },
unlink => [ "PVE::API2::Qemu", 'unlink', ['vmid'], { node => $nodename } ],
--
2.30.2
* [pve-devel] [PATCH v1 qemu-server 6/9] api: move-disk: cleanup very long lines
2021-07-19 14:52 [pve-devel] [PATCH v1 storage qemu-server container 0/9] move disk or volume to other guests Aaron Lauterer
` (4 preceding siblings ...)
2021-07-19 14:52 ` [pve-devel] [PATCH v1 qemu-server 5/9] api: move-disk: add move to other VM Aaron Lauterer
@ 2021-07-19 14:52 ` Aaron Lauterer
2021-08-03 7:37 ` Fabian Ebner
2021-07-19 14:52 ` [pve-devel] [PATCH v1 container 7/9] cli: pct: change move_volume to move-volume Aaron Lauterer
` (3 subsequent siblings)
9 siblings, 1 reply; 22+ messages in thread
From: Aaron Lauterer @ 2021-07-19 14:52 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
PVE/API2/Qemu.pm | 25 ++++++++++++++++++++-----
1 file changed, 20 insertions(+), 5 deletions(-)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index ed1179b..0529c1b 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -3301,13 +3301,15 @@ __PACKAGE__->register_method({
},
delete => {
type => 'boolean',
- description => "Delete the original disk after successful copy. By default the original disk is kept as unused disk.",
+ description => "Delete the original disk after successful copy. By default the " .
+ "original disk is kept as unused disk.",
optional => 1,
default => 0,
},
digest => {
type => 'string',
- description => 'Prevent changes if current configuration file has different SHA1 digest. This can be used to prevent concurrent modifications.',
+ description => "Prevent changes if current configuration file has different SHA1 " .
+ "digest. This can be used to prevent concurrent modifications.",
maxLength => 40,
optional => 1,
},
@@ -3386,11 +3388,20 @@ __PACKAGE__->register_method({
(!$format || !$oldfmt || $oldfmt eq $format);
# this only checks snapshots because $disk is passed!
- my $snapshotted = PVE::QemuServer::Drive::is_volume_in_use($storecfg, $conf, $disk, $old_volid);
+ my $snapshotted = PVE::QemuServer::Drive::is_volume_in_use(
+ $storecfg,
+ $conf,
+ $disk,
+ $old_volid
+ );
die "you can't move a disk with snapshots and delete the source\n"
if $snapshotted && $param->{delete};
- PVE::Cluster::log_msg('info', $authuser, "move disk VM $vmid: move --disk $disk --storage $storeid");
+ PVE::Cluster::log_msg(
+ 'info',
+ $authuser,
+ "move disk VM $vmid: move --disk $disk --storage $storeid"
+ );
my $running = PVE::QemuServer::check_running($vmid);
@@ -3409,7 +3420,11 @@ __PACKAGE__->register_method({
if $snapshotted;
my $bwlimit = extract_param($param, 'bwlimit');
- my $movelimit = PVE::Storage::get_bandwidth_limit('move', [$oldstoreid, $storeid], $bwlimit);
+ my $movelimit = PVE::Storage::get_bandwidth_limit(
+ 'move',
+ [$oldstoreid, $storeid],
+ $bwlimit
+ );
my $newdrive = PVE::QemuServer::clone_disk(
$storecfg,
--
2.30.2
* [pve-devel] [PATCH v1 container 7/9] cli: pct: change move_volume to move-volume
2021-07-19 14:52 [pve-devel] [PATCH v1 storage qemu-server container 0/9] move disk or volume to other guests Aaron Lauterer
` (5 preceding siblings ...)
2021-07-19 14:52 ` [pve-devel] [PATCH v1 qemu-server 6/9] api: move-disk: cleanup very long lines Aaron Lauterer
@ 2021-07-19 14:52 ` Aaron Lauterer
2021-07-19 14:52 ` [pve-devel] [PATCH v1 container 8/9] api: move-volume: add move to another container Aaron Lauterer
` (2 subsequent siblings)
9 siblings, 0 replies; 22+ messages in thread
From: Aaron Lauterer @ 2021-07-19 14:52 UTC (permalink / raw)
To: pve-devel
also add an alias to keep move_volume working.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
src/PVE/CLI/pct.pm | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/src/PVE/CLI/pct.pm b/src/PVE/CLI/pct.pm
index 8c40bbe..7ac5a55 100755
--- a/src/PVE/CLI/pct.pm
+++ b/src/PVE/CLI/pct.pm
@@ -849,7 +849,8 @@ our $cmddef = {
clone => [ "PVE::API2::LXC", 'clone_vm', ['vmid', 'newid'], { node => $nodename }, $upid_exit ],
migrate => [ "PVE::API2::LXC", 'migrate_vm', ['vmid', 'target'], { node => $nodename }, $upid_exit],
- move_volume => [ "PVE::API2::LXC", 'move_volume', ['vmid', 'volume', 'storage'], { node => $nodename }, $upid_exit ],
+ 'move-volume' => [ "PVE::API2::LXC", 'move_volume', ['vmid', 'volume', 'storage'], { node => $nodename }, $upid_exit ],
+ move_volume => { alias => 'move-volume' },
snapshot => [ "PVE::API2::LXC::Snapshot", 'snapshot', ['vmid', 'snapname'], { node => $nodename } , $upid_exit ],
delsnapshot => [ "PVE::API2::LXC::Snapshot", 'delsnapshot', ['vmid', 'snapname'], { node => $nodename } , $upid_exit ],
--
2.30.2
* [pve-devel] [PATCH v1 container 8/9] api: move-volume: add move to another container
2021-07-19 14:52 [pve-devel] [PATCH v1 storage qemu-server container 0/9] move disk or volume to other guests Aaron Lauterer
` (6 preceding siblings ...)
2021-07-19 14:52 ` [pve-devel] [PATCH v1 container 7/9] cli: pct: change move_volume to move-volume Aaron Lauterer
@ 2021-07-19 14:52 ` Aaron Lauterer
2021-08-03 8:14 ` Fabian Ebner
2021-07-19 14:52 ` [pve-devel] [PATCH v1 container 9/9] api: move-volume: cleanup very long lines Aaron Lauterer
2021-08-03 8:27 ` [pve-devel] [PATCH v1 storage qemu-server container 0/9] move disk or volume to other guests Fabian Ebner
9 siblings, 1 reply; 22+ messages in thread
From: Aaron Lauterer @ 2021-07-19 14:52 UTC (permalink / raw)
To: pve-devel
The goal of this is to expand the move-volume API endpoint to make it
possible to move a container volume / mount point to another container.
Currently it works for regular mount points, though it would be nice to
be able to do it for unused mount points as well.
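For example (hypothetical IDs):

    pct move-volume 200 mp0 --target-vmid 201 --target-mp mp1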
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
This is mostly the code from qemu-server with some adaptations, mainly
error messages and some checks.
Previous checks have been moved to '$move_to_storage_checks'.
rfc -> v1:
* add check if target guest is replicated and fail if the moved volume
does not support it
* check if the source volume has a format suffix and pass it to
'find_free_diskname', or if the prefix is vm/subvol, as those also have
their own meaning; see the comment in the code
* fixed some style nits
src/PVE/API2/LXC.pm | 270 ++++++++++++++++++++++++++++++++++++++++----
src/PVE/CLI/pct.pm | 2 +-
2 files changed, 250 insertions(+), 22 deletions(-)
diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index b929481..0af22c1 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -27,6 +27,8 @@ use PVE::API2::LXC::Snapshot;
use PVE::JSONSchema qw(get_standard_option);
use base qw(PVE::RESTHandler);
+use Data::Dumper;
+
BEGIN {
if (!$ENV{PVE_GENERATING_DOCS}) {
require PVE::HA::Env::PVE2;
@@ -1784,10 +1786,12 @@ __PACKAGE__->register_method({
method => 'POST',
protected => 1,
proxyto => 'node',
- description => "Move a rootfs-/mp-volume to a different storage",
+ description => "Move a rootfs-/mp-volume to a different storage or to a different container.",
permissions => {
description => "You need 'VM.Config.Disk' permissions on /vms/{vmid}, " .
- "and 'Datastore.AllocateSpace' permissions on the storage.",
+ "and 'Datastore.AllocateSpace' permissions on the storage. To move ".
+ "a volume to another container, you need the permissions on the ".
+ "target container as well.",
check =>
[ 'and',
['perm', '/vms/{vmid}', [ 'VM.Config.Disk' ]],
@@ -1799,14 +1803,20 @@ __PACKAGE__->register_method({
properties => {
node => get_standard_option('pve-node'),
vmid => get_standard_option('pve-vmid', { completion => \&PVE::LXC::complete_ctid }),
+ 'target-vmid' => get_standard_option('pve-vmid', {
+ completion => \&PVE::LXC::complete_ctid,
+ optional => 1,
+ }),
volume => {
type => 'string',
+ #TODO: check how to handle unused mount points as the mp parameter is not configured
enum => [ PVE::LXC::Config->valid_volume_keys() ],
description => "Volume which will be moved.",
},
storage => get_standard_option('pve-storage-id', {
description => "Target Storage.",
completion => \&PVE::Storage::complete_storage_enabled,
+ optional => 1,
}),
delete => {
type => 'boolean',
@@ -1827,6 +1837,20 @@ __PACKAGE__->register_method({
minimum => '0',
default => 'clone limit from datacenter or storage config',
},
+ 'target-mp' => {
+ type => 'string',
+ description => "The config key the mp will be moved to.",
+ enum => [PVE::LXC::Config->valid_volume_keys()],
+ optional => 1,
+ },
+ 'target-digest' => {
+ type => 'string',
+ description => "Prevent changes if current configuration file of the target " .
+ "container has a different SHA1 digest. This can be used to prevent " .
+ "concurrent modifications.",
+ maxLength => 40,
+ optional => 1,
+ },
},
},
returns => {
@@ -1841,32 +1865,49 @@ __PACKAGE__->register_method({
my $vmid = extract_param($param, 'vmid');
+ my $target_vmid = extract_param($param, 'target-vmid');
+
my $storage = extract_param($param, 'storage');
my $mpkey = extract_param($param, 'volume');
+ my $target_mp = extract_param($param, 'target-mp');
+
+ my $digest = extract_param($param, 'digest');
+
+ my $target_digest = extract_param($param, 'target-digest');
+
my $lockname = 'disk';
my ($mpdata, $old_volid);
- PVE::LXC::Config->lock_config($vmid, sub {
- my $conf = PVE::LXC::Config->load_config($vmid);
- PVE::LXC::Config->check_lock($conf);
+ die "either set storage or target-vmid, but not both\n"
+ if $storage && $target_vmid;
- die "cannot move volumes of a running container\n" if PVE::LXC::check_running($vmid);
+ die "cannot move volumes of a running container\n" if PVE::LXC::check_running($vmid);
- $mpdata = PVE::LXC::Config->parse_volume($mpkey, $conf->{$mpkey});
- $old_volid = $mpdata->{volume};
+ my $storecfg = PVE::Storage::config();
+ my $source_volid;
- die "you can't move a volume with snapshots and delete the source\n"
- if $param->{delete} && PVE::LXC::Config->is_volume_in_use_by_snapshots($conf, $old_volid);
+ my $move_to_storage_checks = sub {
+ PVE::LXC::Config->lock_config($vmid, sub {
+ my $conf = PVE::LXC::Config->load_config($vmid);
+ PVE::LXC::Config->check_lock($conf);
- PVE::Tools::assert_if_modified($param->{digest}, $conf->{digest});
- PVE::LXC::Config->set_lock($vmid, $lockname);
- });
+ $mpdata = PVE::LXC::Config->parse_volume($mpkey, $conf->{$mpkey});
+ $old_volid = $mpdata->{volume};
- my $realcmd = sub {
+ die "you can't move a volume with snapshots and delete the source\n"
+ if $param->{delete} && PVE::LXC::Config->is_volume_in_use_by_snapshots($conf, $old_volid);
+
+ PVE::Tools::assert_if_modified($digest, $conf->{digest});
+
+ PVE::LXC::Config->set_lock($vmid, $lockname);
+ });
+ };
+
+ my $storage_realcmd = sub {
eval {
PVE::Cluster::log_msg('info', $authuser, "move volume CT $vmid: move --volume $mpkey --storage $storage");
@@ -1936,15 +1977,202 @@ __PACKAGE__->register_method({
warn $@ if $@;
die $err if $err;
};
- my $task = eval {
- $rpcenv->fork_worker('move_volume', $vmid, $authuser, $realcmd);
+
+ my $load_and_check_reassign_configs = sub {
+ my $vmlist = PVE::Cluster::get_vmlist()->{ids};
+ die "Both containers need to be on the same node ($vmlist->{$vmid}->{node}) ".
+ "but target continer is on $vmlist->{$target_vmid}->{node}.\n"
+ if $vmlist->{$vmid}->{node} ne $vmlist->{$target_vmid}->{node};
+
+ my $source_conf = PVE::LXC::Config->load_config($vmid);
+ PVE::LXC::Config->check_lock($source_conf);
+ my $target_conf = PVE::LXC::Config->load_config($target_vmid);
+ PVE::LXC::Config->check_lock($target_conf);
+
+ die "Can't move volumes from or to template VMs\n"
+ if ($source_conf->{template} || $target_conf->{template});
+
+ if ($digest) {
+ eval { PVE::Tools::assert_if_modified($digest, $source_conf->{digest}) };
+ if (my $err = $@) {
+ die "Container ${vmid}: ${err}";
+ }
+ }
+
+ if ($target_digest) {
+ eval { PVE::Tools::assert_if_modified($target_digest, $target_conf->{digest}) };
+ if (my $err = $@) {
+ die "Container ${target_vmid}: ${err}";
+ }
+ }
+
+ die "volume '${mpkey}' for container '$vmid' does not exist\n"
+ if !defined($source_conf->{$mpkey});
+
+ die "Target volume key '${target_mp}' is already in use for container '$target_vmid'\n"
+ if exists $target_conf->{$target_mp};
+
+ my $drive = PVE::LXC::Config->parse_volume(
+ $mpkey,
+ $source_conf->{$mpkey},
+ );
+
+ $source_volid = $drive->{volume};
+
+ die "disk '${mpkey}' has no associated volume\n"
+ if !$source_volid;
+
+ die "Storage does not support moving of this disk to another container\n"
+ if !PVE::Storage::volume_has_feature(
+ $storecfg,
+ 'rename',
+ $source_volid,
+ );
+
+ die "Cannot move a bindmound or device mount to another container\n"
+ if PVE::LXC::Config->classify_mountpoint($source_volid) ne "volume";
+ die "Cannot move volume to another container while the source container is running\n"
+ if PVE::LXC::check_running($vmid) && $mpkey !~ m/^unused\d+$/;
+
+ my $repl_conf = PVE::ReplicationConfig->new();
+ my $is_replicated = $repl_conf->check_for_existing_jobs($target_vmid, 1);
+ my ($storeid, undef) = PVE::Storage::parse_volume_id($source_volid);
+ my $format = (PVE::Storage::parse_volname($storecfg, $source_volid))[6];
+ if (
+ $is_replicated
+ && !PVE::Storage::storage_can_replicate($storecfg, $storeid, $format)
+ ) {
+ die "Cannot move volume to a replicated container. Storage " .
+ "does not support replication!\n";
+ }
+ return ($source_conf, $target_conf);
+ };
+
+ my $logfunc = sub {
+ my ($msg) = @_;
+ print STDERR "$msg\n";
};
- if (my $err = $@) {
- eval { PVE::LXC::Config->remove_lock($vmid, $lockname) };
- warn $@ if $@;
- die $err;
+
+ my $volume_reassignfn = sub {
+ return PVE::LXC::Config->lock_config($vmid, sub {
+ return PVE::LXC::Config->lock_config($target_vmid, sub {
+ my ($source_conf, $target_conf) = &$load_and_check_reassign_configs();
+
+ my $drive_param = PVE::LXC::Config->parse_volume(
+ $target_mp,
+ $source_conf->{$mpkey},
+ );
+
+ print "moving volume '$mpkey' from container '$vmid' to '$target_vmid'\n";
+ my ($storage, $source_volname) = PVE::Storage::parse_volume_id($source_volid);
+
+ # The format in find_free_diskname handles two cases for containers.
+ # If it is set to 'subvol', the prefix will be set to it instead of 'vm',
+ # this is needed for example for ZFS based storages.
+ # The other thing to check for, in case of directory/file based storages,
+ # is .raw files as we need to pass this format in that case.
+ my $fmt = undef;
+ if ($source_volname =~ m/(subvol|vm)-\d+-disk-\S+$/) {
+ $fmt = $1 if $1 eq "subvol";
+ } else {
+ die "could not detect source volname prefix\n";
+ }
+ if ($source_volname =~ m/vm-\d+-disk-\S+\.(raw)/) {
+ $fmt = $1;
+ }
+
+ my $target_volname = PVE::Storage::find_free_diskname(
+ $storecfg,
+ $storage,
+ $target_vmid,
+ $fmt
+ );
+
+ my $new_volid = PVE::Storage::rename_volume(
+ $storecfg,
+ $source_volid,
+ $target_volname,
+ );
+
+ $drive_param->{volume} = $new_volid;
+
+ delete $source_conf->{$mpkey};
+ print "removing volume '${mpkey}' from container '${vmid}' config\n";
+ PVE::LXC::Config->write_config($vmid, $source_conf);
+
+ my $drive_string = PVE::LXC::Config->print_volume($target_mp, $drive_param);
+ my $running = PVE::LXC::check_running($target_vmid);
+ my $param = { $target_mp => $drive_string };
+
+ PVE::LXC::Config->update_pct_config(
+ $target_vmid,
+ $target_conf,
+ $running,
+ $param
+ );
+
+ PVE::LXC::Config->write_config($target_vmid, $target_conf);
+ $target_conf = PVE::LXC::Config->load_config($target_vmid);
+
+ PVE::LXC::update_lxc_config($target_vmid, $target_conf);
+ print "target container '$target_vmid' updated with '$target_mp'\n";
+
+ # remove possible replication snapshots
+ if (PVE::Storage::volume_has_feature(
+ $storecfg,
+ 'replicate',
+ $source_volid),
+ ) {
+ eval {
+ PVE::Replication::prepare(
+ $storecfg,
+ [$new_volid],
+ undef,
+ 1,
+ undef,
+ $logfunc,
+ )
+ };
+ if (my $err = $@) {
+ print "Failed to remove replication snapshots on volume ".
+ "'$target_mp'. Manual cleanup could be necessary.\n";
+ }
+ }
+ });
+ });
+ };
+
+ if ($target_vmid) {
+ $rpcenv->check_vm_perm($authuser, $target_vmid, undef, ['VM.Config.Disk'])
+ if $authuser ne 'root@pam';
+
+ die "Moving a volume to the same container is not possible. Did you ".
+ "mean to move the volume to a different storage?\n"
+ if $vmid eq $target_vmid;
+
+ &$load_and_check_reassign_configs();
+ return $rpcenv->fork_worker(
+ 'move_volume',
+ "${vmid}-${mpkey}>${target_vmid}-${target_mp}",
+ $authuser,
+ $volume_reassignfn
+ );
+ } elsif ($storage) {
+ &$move_to_storage_checks();
+ my $task = eval {
+ $rpcenv->fork_worker('move_volume', $vmid, $authuser, $storage_realcmd);
+ };
+ if (my $err = $@) {
+ eval { PVE::LXC::Config->remove_lock($vmid, $lockname) };
+ warn $@ if $@;
+ die $err;
+ }
+ return $task;
+ } else {
+ die "Provide either a 'storage' to move the mount point to a ".
+ "different storage or 'target-vmid' and 'target-mp' to move ".
+ "the mount point to another container\n";
}
- return $task;
}});
__PACKAGE__->register_method({
diff --git a/src/PVE/CLI/pct.pm b/src/PVE/CLI/pct.pm
index 7ac5a55..5f2bf04 100755
--- a/src/PVE/CLI/pct.pm
+++ b/src/PVE/CLI/pct.pm
@@ -849,7 +849,7 @@ our $cmddef = {
clone => [ "PVE::API2::LXC", 'clone_vm', ['vmid', 'newid'], { node => $nodename }, $upid_exit ],
migrate => [ "PVE::API2::LXC", 'migrate_vm', ['vmid', 'target'], { node => $nodename }, $upid_exit],
- 'move-volume' => [ "PVE::API2::LXC", 'move_volume', ['vmid', 'volume', 'storage'], { node => $nodename }, $upid_exit ],
+ 'move-volume' => [ "PVE::API2::LXC", 'move_volume', ['vmid', 'volume', 'storage', 'target-vmid', 'target-mp'], { node => $nodename }, $upid_exit ],
move_volume => { alias => 'move-volume' },
snapshot => [ "PVE::API2::LXC::Snapshot", 'snapshot', ['vmid', 'snapname'], { node => $nodename } , $upid_exit ],
--
2.30.2
* [pve-devel] [PATCH v1 container 9/9] api: move-volume: cleanup very long lines
2021-07-19 14:52 [pve-devel] [PATCH v1 storage qemu-server container 0/9] move disk or volume to other guests Aaron Lauterer
` (7 preceding siblings ...)
2021-07-19 14:52 ` [pve-devel] [PATCH v1 container 8/9] api: move-volume: add move to another container Aaron Lauterer
@ 2021-07-19 14:52 ` Aaron Lauterer
2021-08-03 8:27 ` [pve-devel] [PATCH v1 storage qemu-server container 0/9] move disk or volume to other guests Fabian Ebner
9 siblings, 0 replies; 22+ messages in thread
From: Aaron Lauterer @ 2021-07-19 14:52 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
src/PVE/API2/LXC.pm | 33 +++++++++++++++++++++++++++------
1 file changed, 27 insertions(+), 6 deletions(-)
diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 0af22c1..fecd4ca 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -1820,13 +1820,15 @@ __PACKAGE__->register_method({
}),
delete => {
type => 'boolean',
- description => "Delete the original volume after successful copy. By default the original is kept as an unused volume entry.",
+ description => "Delete the original volume after successful copy. By default the " .
+ "original is kept as an unused volume entry.",
optional => 1,
default => 0,
},
digest => {
type => 'string',
- description => 'Prevent changes if current configuration file has different SHA1 digest. This can be used to prevent concurrent modifications.',
+ description => 'Prevent changes if current configuration file has different SHA1 " .
+ "digest. This can be used to prevent concurrent modifications.',
maxLength => 40,
optional => 1,
},
@@ -1909,7 +1911,11 @@ __PACKAGE__->register_method({
my $storage_realcmd = sub {
eval {
- PVE::Cluster::log_msg('info', $authuser, "move volume CT $vmid: move --volume $mpkey --storage $storage");
+ PVE::Cluster::log_msg(
+ 'info',
+ $authuser,
+ "move volume CT $vmid: move --volume $mpkey --storage $storage"
+ );
my $conf = PVE::LXC::Config->load_config($vmid);
my $storage_cfg = PVE::Storage::config();
@@ -1920,8 +1926,20 @@ __PACKAGE__->register_method({
PVE::Storage::activate_volumes($storage_cfg, [ $old_volid ]);
my $bwlimit = extract_param($param, 'bwlimit');
my $source_storage = PVE::Storage::parse_volume_id($old_volid);
- my $movelimit = PVE::Storage::get_bandwidth_limit('move', [$source_storage, $storage], $bwlimit);
- $new_volid = PVE::LXC::copy_volume($mpdata, $vmid, $storage, $storage_cfg, $conf, undef, $movelimit);
+ my $movelimit = PVE::Storage::get_bandwidth_limit(
+ 'move',
+ [$source_storage, $storage],
+ $bwlimit
+ );
+ $new_volid = PVE::LXC::copy_volume(
+ $mpdata,
+ $vmid,
+ $storage,
+ $storage_cfg,
+ $conf,
+ undef,
+ $movelimit
+ );
if (PVE::LXC::Config->is_template($conf)) {
PVE::Storage::activate_volumes($storage_cfg, [ $new_volid ]);
my $template_volid = PVE::Storage::vdisk_create_base($storage_cfg, $new_volid);
@@ -1935,7 +1953,10 @@ __PACKAGE__->register_method({
$conf = PVE::LXC::Config->load_config($vmid);
PVE::Tools::assert_if_modified($digest, $conf->{digest});
- $conf->{$mpkey} = PVE::LXC::Config->print_ct_mountpoint($mpdata, $mpkey eq 'rootfs');
+ $conf->{$mpkey} = PVE::LXC::Config->print_ct_mountpoint(
+ $mpdata,
+ $mpkey eq 'rootfs'
+ );
PVE::LXC::Config->add_unused_volume($conf, $old_volid) if !$param->{delete};
--
2.30.2
* Re: [pve-devel] [PATCH v1 storage 1/9] storage: expose find_free_diskname
2021-07-19 14:52 ` [pve-devel] [PATCH v1 storage 1/9] storage: expose find_free_diskname Aaron Lauterer
@ 2021-08-02 12:56 ` Fabian Ebner
2021-08-03 14:20 ` Aaron Lauterer
0 siblings, 1 reply; 22+ messages in thread
From: Fabian Ebner @ 2021-08-02 12:56 UTC (permalink / raw)
To: pve-devel, Aaron Lauterer
Am 19.07.21 um 16:52 schrieb Aaron Lauterer:
> We do not expose the parameter 'add_fmt_suffix' used by the internal
> implementation of 'find_free_diskname'. This is something only the
> plugins themselves know and it cannot be determined easily and reliably
> by an outside caller.
>
> This is why the new 'wants_fmt_suffix' method has been introduced. For
> most plugins the return value is very clear. For the default
> implementation in Plugin.pm we add another check to be on the safe side
> and only return true if the '$scfg->{path}' option is present.
> It indicates that the volume in question is an actual file which will
> need the suffix.
>
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> rfc -> v1:
> dropped $add_fmt_suffix parameter and added the "wants_fmt_suffix"
> helper method in each plugin.
>
> PVE/Storage.pm | 11 +++++++++++
> PVE/Storage/LVMPlugin.pm | 5 +++++
> PVE/Storage/Plugin.pm | 7 +++++++
> PVE/Storage/RBDPlugin.pm | 5 +++++
> PVE/Storage/ZFSPoolPlugin.pm | 5 +++++
> 5 files changed, 33 insertions(+)
>
> diff --git a/PVE/Storage.pm b/PVE/Storage.pm
> index c04b5a2..afeb2e3 100755
> --- a/PVE/Storage.pm
> +++ b/PVE/Storage.pm
> @@ -203,6 +203,17 @@ sub storage_can_replicate {
> return $plugin->storage_can_replicate($scfg, $storeid, $format);
> }
>
> +sub find_free_diskname {
> + my ($cfg, $storeid, $vmid, $fmt) = @_;
> +
> + my $scfg = storage_config($cfg, $storeid);
> + my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
> +
> + my $add_fmt_suffix = $plugin->wants_fmt_suffix($scfg);
> +
> + return $plugin->find_free_diskname($storeid, $scfg, $vmid, $fmt, $add_fmt_suffix);
Maybe this should rather be find_free_volname and always return the full
volname (i.e. "<VMID>/vm-<VMID>-disk-<N>.<FMT>" for dir-based storages)
so that it can be parsed with parse_volname later?
That would mean something like wants_vmid_prefix() is needed, but I'd
rather extend (and rename) the wants_fmt_suffix method instead.
The rationale is: the value returned here is used for the
$target_volname parameter in the next patch, and in case it's a
dir-based storage, passing in an actual volname like
'123/vm-123-disk-0.raw' currently fails (because it actually expects
only the disk name).
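A rough sketch of the idea (just to illustrate, with the hypothetical
wants_vmid_prefix() helper mentioned above):

    sub find_free_volname {
        my ($cfg, $storeid, $vmid, $fmt) = @_;

        my $scfg = storage_config($cfg, $storeid);
        my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});

        my $name = $plugin->find_free_diskname(
            $storeid, $scfg, $vmid, $fmt, $plugin->wants_fmt_suffix($scfg));

        # dir-based storages prefix the disk name with the owning VMID
        return $plugin->wants_vmid_prefix($scfg) ? "${vmid}/${name}" : $name;
    }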
> +}
> +
> sub storage_ids {
> my ($cfg) = @_;
>
> diff --git a/PVE/Storage/LVMPlugin.pm b/PVE/Storage/LVMPlugin.pm
> index 139d391..3e5b6c8 100644
> --- a/PVE/Storage/LVMPlugin.pm
> +++ b/PVE/Storage/LVMPlugin.pm
> @@ -201,6 +201,11 @@ sub type {
> return 'lvm';
> }
>
> +sub wants_fmt_suffix {
> + my ($class, $scfg) = @_;
> + return 0;
> +}
> +
Nit: since there is no $scfg->{path}, the default implementation would
already return 0. Same for {RBD,ZFSPool}Plugin below, but overwriting
the method doesn't really hurt either of course.
> sub plugindata {
> return {
> content => [ {images => 1, rootdir => 1}, { images => 1 }],
> diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
> index b1865cb..5c6c659 100644
> --- a/PVE/Storage/Plugin.pm
> +++ b/PVE/Storage/Plugin.pm
> @@ -191,6 +191,13 @@ sub default_format {
> return wantarray ? ($def_format, $valid_formats) : $def_format;
> }
>
> +sub wants_fmt_suffix {
> + my ($class, $scfg) = @_;
> + return 1 if $scfg->{path};
> + return 0;
> +}
> +
> +
> PVE::JSONSchema::register_format('pve-storage-path', \&verify_path);
> sub verify_path {
> my ($path, $noerr) = @_;
> diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
> index a8d1243..86ea45a 100644
> --- a/PVE/Storage/RBDPlugin.pm
> +++ b/PVE/Storage/RBDPlugin.pm
> @@ -273,6 +273,11 @@ sub type {
> return 'rbd';
> }
>
> +sub wants_fmt_suffix {
> + my ($class, $scfg) = @_;
> + return 0;
> +}
> +
> sub plugindata {
> return {
> content => [ {images => 1, rootdir => 1}, { images => 1 }],
> diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
> index c4be70f..85e2211 100644
> --- a/PVE/Storage/ZFSPoolPlugin.pm
> +++ b/PVE/Storage/ZFSPoolPlugin.pm
> @@ -18,6 +18,11 @@ sub type {
> return 'zfspool';
> }
>
> +sub wants_fmt_suffix {
> + my ($class, $scfg) = @_;
> + return 0;
> +}
> +
> sub plugindata {
> return {
> content => [ {images => 1, rootdir => 1}, {images => 1 , rootdir => 1}],
>
* Re: [pve-devel] [PATCH v1 storage 2/9] add disk rename feature
2021-07-19 14:52 ` [pve-devel] [PATCH v1 storage 2/9] add disk rename feature Aaron Lauterer
@ 2021-08-02 12:57 ` Fabian Ebner
2021-08-03 14:22 ` Aaron Lauterer
0 siblings, 1 reply; 22+ messages in thread
From: Fabian Ebner @ 2021-08-02 12:57 UTC (permalink / raw)
To: pve-devel, Aaron Lauterer
On 19.07.21 at 16:52, Aaron Lauterer wrote:
> Functionality has been added for the following storage types:
>
> * directory ones, based on the default implementation:
> * directory
> * NFS
> * CIFS
> * gluster
> * ZFS
> * (thin) LVM
> * Ceph
>
> A new feature `rename` has been introduced to mark which storage
> plugin supports the feature.
>
> Version API and AGE have been bumped.
>
> The storage gets locked and each plugin checks if the target volume
> already exists prior renaming.
> This is done because there could be a race condition from the time the
> external caller requests a new free disk name to the time the volume is
> actually renamed.
>
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> rfc -> v1:
> * reduced number of parameters to minimum needed, plugins infer needed
> information themselves
> * added storage locking and checking if volume already exists
> * parse target_volname prior to renaming to check if valid
>
> old dedicated API endpoint -> rfc:
> only do rename now but the rename function handles templates and returns
> the new volid as this can be differently handled on some storages.
>
> PVE/Storage.pm | 20 +++++++++++--
> PVE/Storage/LVMPlugin.pm | 27 +++++++++++++++++
> PVE/Storage/LvmThinPlugin.pm | 1 +
> PVE/Storage/Plugin.pm | 58 ++++++++++++++++++++++++++++++++++++
> PVE/Storage/RBDPlugin.pm | 29 ++++++++++++++++++
> PVE/Storage/ZFSPoolPlugin.pm | 24 +++++++++++++++
> 6 files changed, 157 insertions(+), 2 deletions(-)
>
> diff --git a/PVE/Storage.pm b/PVE/Storage.pm
> index afeb2e3..f6d86e1 100755
> --- a/PVE/Storage.pm
> +++ b/PVE/Storage.pm
> @@ -41,11 +41,11 @@ use PVE::Storage::PBSPlugin;
> use PVE::Storage::BTRFSPlugin;
>
> # Storage API version. Increment it on changes in storage API interface.
> -use constant APIVER => 9;
> +use constant APIVER => 10;
> # Age is the number of versions we're backward compatible with.
> # This is like having 'current=APIVER' and age='APIAGE' in libtool,
> # see https://www.gnu.org/software/libtool/manual/html_node/Libtool-versioning.html
> -use constant APIAGE => 0;
> +use constant APIAGE => 1;
>
> # load standard plugins
> PVE::Storage::DirPlugin->register();
> @@ -360,6 +360,7 @@ sub volume_snapshot_needs_fsfreeze {
> # snapshot - taking a snapshot is possible
> # sparseinit - volume is sparsely initialized
> # template - conversion to base image is possible
> +# rename - renaming volumes is possible
> # $snap - check if the feature is supported for a given snapshot
> # $running - if the guest owning the volume is running
> # $opts - hash with further options:
> @@ -1868,6 +1869,21 @@ sub complete_volume {
> return $res;
> }
>
> +sub rename_volume {
> + my ($cfg, $source_volid, $target_volname) = @_;
> +
> + my ($storeid) = parse_volume_id($source_volid);
> +
> + activate_storage($cfg, $storeid);
> +
> + my $scfg = storage_config($cfg, $storeid);
> + my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
> +
> + return $plugin->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
> + return $plugin->rename_volume($scfg, $storeid, $source_volid, $target_volname);
I know I suggested these parameters last time, but I hadn't worked it
out in detail, and using $source_volname instead of $source_volid seems
to be possible and nicer (would avoid some calls in the plugins AFAICS).
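I.e. parse the volid once here and hand only the volname to the plugin,
roughly (untested):
    my ($storeid, $source_volname) = parse_volume_id($source_volid);
    activate_storage($cfg, $storeid);
    my $scfg = storage_config($cfg, $storeid);
    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
    return $plugin->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
        return $plugin->rename_volume($scfg, $storeid, $source_volname, $target_volname);
    });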
> + });
> +}
> +
> # Various io-heavy operations require io/bandwidth limits which can be
> # configured on multiple levels: The global defaults in datacenter.cfg, and
> # per-storage overrides. When we want to do a restore from storage A to storage
> diff --git a/PVE/Storage/LVMPlugin.pm b/PVE/Storage/LVMPlugin.pm
> index 3e5b6c8..7a13a96 100644
> --- a/PVE/Storage/LVMPlugin.pm
> +++ b/PVE/Storage/LVMPlugin.pm
> @@ -344,6 +344,16 @@ sub lvcreate {
> run_command($cmd, errmsg => "lvcreate '$vg/$name' error");
> }
>
> +sub lvrename {
> + my ($vg, $oldname, $newname) = @_;
> +
> + my $cmd = ['/sbin/lvrename', $vg, $oldname, $newname];
$cmd is not used
> + run_command(
> + ['/sbin/lvrename', $vg, $oldname, $newname],
> + errmsg => "lvrename '${vg}/${oldname}' to '${newname}' error"
> + );
> +}
> +
> sub alloc_image {
> my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size) = @_;
>
> @@ -589,6 +599,7 @@ sub volume_has_feature {
>
> my $features = {
> copy => { base => 1, current => 1},
> + rename => {current => 1},
> };
>
> my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
> @@ -697,4 +708,20 @@ sub volume_import_write {
> input => '<&'.fileno($input_fh));
> }
>
> +sub rename_volume {
> + my ($class, $scfg, $storeid, $source_volid, $target_volname) = @_;
> +
> + $class->parse_volname($target_volname);
> +
> + my (undef, $source_volname) = PVE::Storage::Plugin::parse_volume_id($source_volid);
> +
> + my $vg = $scfg->{vgname};
> + my $lvs = lvm_list_volumes($vg);
> + die "target volume '${target_volname}' already exists\n"
> + if ($lvs->{$vg}->{$target_volname});
> +
> + lvrename($scfg->{vgname}, $source_volname, $target_volname);
Nit: $vg can be re-used
> + return "${storeid}:${target_volname}";
> +}
> +
> 1;
> diff --git a/PVE/Storage/LvmThinPlugin.pm b/PVE/Storage/LvmThinPlugin.pm
> index 4ba6f90..c24af22 100644
> --- a/PVE/Storage/LvmThinPlugin.pm
> +++ b/PVE/Storage/LvmThinPlugin.pm
> @@ -355,6 +355,7 @@ sub volume_has_feature {
> template => { current => 1},
> copy => { base => 1, current => 1, snap => 1},
> sparseinit => { base => 1, current => 1},
> + rename => {current => 1},
> };
>
> my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
> diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
> index 5c6c659..3f38a94 100644
> --- a/PVE/Storage/Plugin.pm
> +++ b/PVE/Storage/Plugin.pm
> @@ -967,6 +967,7 @@ sub volume_has_feature {
> snap => {qcow2 => 1} },
> sparseinit => { base => {qcow2 => 1, raw => 1, vmdk => 1},
> current => {qcow2 => 1, raw => 1, vmdk => 1} },
> + rename => { current =>{qcow2 => 1, raw => 1, vmdk => 1} },
Style nit: missing space
> };
>
> # clone_image creates a qcow2 volume
> @@ -974,6 +975,14 @@ sub volume_has_feature {
> defined($opts->{valid_target_formats}) &&
> !(grep { $_ eq 'qcow2' } @{$opts->{valid_target_formats}});
>
> + if (
> + $feature eq 'rename'
> + && $class->can('api')
> + && $class->api() < 9
Should be < 10 now like below.
> + ) {
> + return 0;
> + }
> +
> my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
> $class->parse_volname($volname);
>
> @@ -1491,4 +1500,53 @@ sub volume_import_formats {
> return ();
> }
>
> +sub rename_volume {
> + my ($class, $scfg, $storeid, $source_volid, $target_volname) = @_;
> + die "not implemented in storage plugin '$class'\n" if $class->can('api') && $class->api() < 10;
Could also die if there is no $scfg->{path} (so that PBS, ISCSI, etc.
die if the method is ever called for them).
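E.g. (untested):
    die "renaming is not supported for storage type '$scfg->{type}'\n"
        if !$scfg->{path};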
> +
> + my $target_vmid;
> +
> + if ($target_volname =~ m/^(base|vm)?-(\d+)-\S+$/) {
> + $target_vmid = $2;
> + } else {
> + die "could not parse target vmid\n";
> + }
> +
> + $class->parse_volname("${target_vmid}/${target_volname}");
So the $target_volname that's passed in here is not an actual volname by
our "definition", i.e. something that can be parsed by parse_volname.
And passing in an actual volname like '123/vm-123-disk-0.raw' doesn't work.
> +
> + my (undef, $source_volname) = PVE::Storage::Plugin::parse_volume_id($source_volid);
> + my (
> + undef,
> + $source_name,
> + $source_vmid,
> + $base_name,
> + $base_vmid,
> + undef,
> + $format
> + ) = $class->parse_volname($source_volname);
> +
> + my $basedir = $class->get_subdir($scfg, 'images');
> + # $source_volname includes the '<vmid>/' prefix
> + my $sourcedir = "${basedir}/${source_vmid}";
> + my $targetdir = "${basedir}/${target_vmid}";
> +
> + mkpath $targetdir;
> +
> + if ($target_volname !~ m/(vm|subvol)-\d+-\S+\.(raw|qcow2|vmdk)$/) {
> + $target_volname = "${target_volname}.${format}";
> + }
parse_volname for the target didn't fail, so no need for this check.
> +
> + my $old_path = "${sourcedir}/${source_name}";
> + my $new_path = "${targetdir}/${target_volname}";
> +
> + die "target volume '${target_volname}' already exists\n" if -e $new_path;
> +
> + my $base = $base_name ? "${base_vmid}/${base_name}/" : '';
> +
> + rename($old_path, $new_path) ||
> + die "rename '$old_path' to '$new_path' failed - $!\n";
> +
> + return "${storeid}:${base}${target_vmid}/${target_volname}";
> +}
> +
> 1;
> diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
> index 86ea45a..6ae846a 100644
> --- a/PVE/Storage/RBDPlugin.pm
> +++ b/PVE/Storage/RBDPlugin.pm
> @@ -733,6 +733,7 @@ sub volume_has_feature {
> template => { current => 1},
> copy => { base => 1, current => 1, snap => 1},
> sparseinit => { base => 1, current => 1},
> + rename => {current => 1},
> };
>
> my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) = $class->parse_volname($volname);
> @@ -748,4 +749,32 @@ sub volume_has_feature {
> return undef;
> }
>
> +sub rename_volume {
> + my ($class, $scfg, $storeid, $source_volid, $target_volname) = @_;
> +
> + $class->parse_volname($target_volname);
> +
> + my (undef, $source_volname) = PVE::Storage::Plugin::parse_volume_id($source_volid);
> + my (undef, undef, $source_vmid, $base_name) = $class->parse_volname($source_volname);
> +
> + eval {
> + my $cmd = $rbd_cmd->($scfg, $storeid, 'info', $target_volname);
> + run_rbd_command($cmd, errmsg => "exist check", quiet => 1, noerr => 1);
> + };
> + my $err = $@;
> + die "target volume '${target_volname}' already exists\n"
> + if !$err;
> +
> + my $cmd = $rbd_cmd->($scfg, $storeid, 'rename', $source_volname, $target_volname);
> +
> + run_rbd_command(
> + $cmd,
> + errmsg => "could not rename image '${source_volname}' to '${target_volname}'",
> + );
> +
> + $base_name = $base_name ? "${base_name}/" : '';
> +
> + return "${storeid}:${base_name}${target_volname}";
> +}
> +
> 1;
> diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
> index 85e2211..ac375e8 100644
> --- a/PVE/Storage/ZFSPoolPlugin.pm
> +++ b/PVE/Storage/ZFSPoolPlugin.pm
> @@ -692,6 +692,7 @@ sub volume_has_feature {
> copy => { base => 1, current => 1},
> sparseinit => { base => 1, current => 1},
> replicate => { base => 1, current => 1},
> + rename => {current => 1},
> };
>
> my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
> @@ -794,4 +795,27 @@ sub volume_import_formats {
> return $class->volume_export_formats($scfg, $storeid, $volname, $snapshot, $base_snapshot, $with_snapshots);
> }
>
> +sub rename_volume {
> + my ($class, $scfg, $storeid, $source_volid, $target_volname) = @_;
> +
> + $class->parse_volname($target_volname);
> +
> + my (undef, $source_volname) = PVE::Storage::Plugin::parse_volume_id($source_volid);
> + my (undef, undef, $source_vmid, $base_name) = $class->parse_volname($source_volname);
> +
> + my $pool = $scfg->{pool};
> + my $source_zfspath = "${pool}/${source_volname}";
> + my $target_zfspath = "${pool}/${target_volname}";
> +
> + my $exists = 0 == run_command(['zfs', 'get', '-H', 'name', $target_zfspath],
> + noerr => 1, quiet => 1);
> + die "target volume '${target_volname}' already exists\n" if $exists;
> +
> + $class->zfs_request($scfg, 5, 'rename', ${source_zfspath}, ${target_zfspath});
> +
> + $base_name = $base_name ? "${base_name}/" : '';
> +
> + return "${storeid}:${base_name}${target_volname}";
> +}
> +
> 1;
>
* Re: [pve-devel] [PATCH v1 qemu-server 5/9] api: move-disk: add move to other VM
2021-07-19 14:52 ` [pve-devel] [PATCH v1 qemu-server 5/9] api: move-disk: add move to other VM Aaron Lauterer
@ 2021-08-03 7:34 ` Fabian Ebner
2021-08-05 9:36 ` Aaron Lauterer
0 siblings, 1 reply; 22+ messages in thread
From: Fabian Ebner @ 2021-08-03 7:34 UTC (permalink / raw)
To: pve-devel, Aaron Lauterer
On 19.07.21 at 16:52, Aaron Lauterer wrote:
> The goal of this is to expand the move-disk API endpoint to make it
> possible to move a disk to another VM. Previously this was only possible
> with manual intervention, either by renaming the VM disk or by manually
> adding the disk's volid to the config of the other VM.
>
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> rfc -> v1:
> * add check if target guest is replicated and fail if the moved volume
> does not support it
> * check if source volume has a format suffix and pass it to
> 'find_free_disk'
> * fixed some style nits
>
> old dedicated api endpoint -> rfc:
> There are some big changes here. The old [0] dedicated API endpoint is
> gone and most of its code is now part of move_disk. Error messages have
> been changed accordingly and sometimes enhanced by adding disk keys and
> VMIDs where appropriate.
>
> Since a move to other guests should be possible for unused disks, we
> need to check before doing a move to storage to make sure to not
> handle unused disks.
>
> [0] https://lists.proxmox.com/pipermail/pve-devel/2021-April/047738.html
>
> PVE/API2/Qemu.pm | 229 +++++++++++++++++++++++++++++++++++++++++++++--
> PVE/CLI/qm.pm | 2 +-
> 2 files changed, 225 insertions(+), 6 deletions(-)
>
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index f2557e3..ed1179b 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -35,6 +35,7 @@ use PVE::API2::Qemu::Agent;
> use PVE::VZDump::Plugin;
> use PVE::DataCenterConfig;
> use PVE::SSHInfo;
> +use PVE::Replication;
>
> BEGIN {
> if (!$ENV{PVE_GENERATING_DOCS}) {
> @@ -3263,9 +3264,11 @@ __PACKAGE__->register_method({
> method => 'POST',
> protected => 1,
> proxyto => 'node',
> - description => "Move volume to different storage.",
> + description => "Move volume to different storage or to a different VM.",
> permissions => {
> - description => "You need 'VM.Config.Disk' permissions on /vms/{vmid}, and 'Datastore.AllocateSpace' permissions on the storage.",
> + description => "You need 'VM.Config.Disk' permissions on /vms/{vmid}, " .
> + "and 'Datastore.AllocateSpace' permissions on the storage. To move ".
> + "a disk to another VM, you need the permissions on the target VM as well.",
> check => [ 'and',
> ['perm', '/vms/{vmid}', [ 'VM.Config.Disk' ]],
> ['perm', '/storage/{storage}', [ 'Datastore.AllocateSpace' ]],
> @@ -3276,14 +3279,19 @@ __PACKAGE__->register_method({
> properties => {
> node => get_standard_option('pve-node'),
> vmid => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
> + 'target-vmid' => get_standard_option('pve-vmid', {
> + completion => \&PVE::QemuServer::complete_vmid,
> + optional => 1,
> + }),
> disk => {
> type => 'string',
> description => "The disk you want to move.",
> - enum => [PVE::QemuServer::Drive::valid_drive_names()],
> + enum => [PVE::QemuServer::Drive::valid_drive_names_with_unused()],
> },
> storage => get_standard_option('pve-storage-id', {
> description => "Target storage.",
> completion => \&PVE::QemuServer::complete_storage,
> + optional => 1,
> }),
> 'format' => {
> type => 'string',
> @@ -3310,6 +3318,20 @@ __PACKAGE__->register_method({
> minimum => '0',
> default => 'move limit from datacenter or storage config',
> },
> + 'target-disk' => {
> + type => 'string',
> + description => "The config key the disk will be moved to on the target VM " .
> + "(for example, ide0 or scsi1).",
> + enum => [PVE::QemuServer::Drive::valid_drive_names_with_unused()],
> + optional => 1,
> + },
> + 'target-digest' => {
> + type => 'string',
> + description => "Prevent changes if current configuration file of the target VM has " .
> + "a different SHA1 digest. This can be used to prevent concurrent modifications.",
> + maxLength => 40,
> + optional => 1,
> + },
> },
> },
> returns => {
> @@ -3324,14 +3346,22 @@ __PACKAGE__->register_method({
>
> my $node = extract_param($param, 'node');
> my $vmid = extract_param($param, 'vmid');
> + my $target_vmid = extract_param($param, 'target-vmid');
> my $digest = extract_param($param, 'digest');
> + my $target_digest = extract_param($param, 'target-digest');
> my $disk = extract_param($param, 'disk');
> + my $target_disk = extract_param($param, 'target-disk');
> my $storeid = extract_param($param, 'storage');
> my $format = extract_param($param, 'format');
>
> + die "either set storage or target-vmid, but not both\n"
> + if $storeid && $target_vmid;
If the VM -> VM variant is used, shouldn't we also test for $target_disk
being set? When not passing --target-disk, it fails further below, but
here we could die clean and early. Or make $target_disk default to $disk
in that case?
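E.g. (untested), defaulting:
    $target_disk //= $disk if $target_vmid;
or failing early instead:
    die "target-disk is required when moving a disk to another VM\n"
        if $target_vmid && !defined($target_disk);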
> +
> +
> my $storecfg = PVE::Storage::config();
> + my $source_volid;
>
> - my $updatefn = sub {
> + my $move_updatefn = sub {
> my $conf = PVE::QemuConfig->load_config($vmid);
> PVE::QemuConfig->check_lock($conf);
>
> @@ -3441,7 +3471,196 @@ __PACKAGE__->register_method({
> return $rpcenv->fork_worker('qmmove', $vmid, $authuser, $realcmd);
> };
>
> - return PVE::QemuConfig->lock_config($vmid, $updatefn);
> + my $load_and_check_reassign_configs = sub {
> + my $vmlist = PVE::Cluster::get_vmlist()->{ids};
> + die "Both VMs need to be on the same node ($vmlist->{$vmid}->{node}) ".
> + "but target VM is on $vmlist->{$target_vmid}->{node}.\n"
> + if $vmlist->{$vmid}->{node} ne $vmlist->{$target_vmid}->{node};
Style nit: multiline expression with post-if
> +
> + my $source_conf = PVE::QemuConfig->load_config($vmid);
> + PVE::QemuConfig->check_lock($source_conf);
> + my $target_conf = PVE::QemuConfig->load_config($target_vmid);
> + PVE::QemuConfig->check_lock($target_conf);
> +
> + die "Can't move disks from or to template VMs\n"
> + if ($source_conf->{template} || $target_conf->{template});
> +
> + if ($digest) {
> + eval { PVE::Tools::assert_if_modified($digest, $source_conf->{digest}) };
> + if (my $err = $@) {
> + die "VM ${vmid}: ${err}";
> + }
> + }
> +
> + if ($target_digest) {
> + eval { PVE::Tools::assert_if_modified($target_digest, $target_conf->{digest}) };
> + if (my $err = $@) {
> + die "VM ${target_vmid}: ${err}";
> + }
> + }
> +
> + die "Disk '${disk}' for VM '$vmid' does not exist\n"
> + if !defined($source_conf->{$disk});
> +
> + die "Target disk key '${target_disk}' is already in use for VM '$target_vmid'\n"
> + if exists $target_conf->{$target_disk};
Any special reason to test for defined above and exists here? Is there a
problem if the target config has an explicit undef at that key?
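For reference, the two only differ for a key that is present with an undef
value (minimal example):
    my $conf = { scsi1 => undef };
    print exists $conf->{scsi1} ? "yes" : "no";  # yes - key is present
    print defined $conf->{scsi1} ? "yes" : "no"; # no - value is undef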
> +
> + my $drive = PVE::QemuServer::parse_drive(
> + $disk,
> + $source_conf->{$disk},
> + );
> + $source_volid = $drive->{file};
> +
> + die "disk '${disk}' has no associated volume\n"
> + if !$source_volid;
> + die "CD drive contents can't be moved to another VM\n"
> + if PVE::QemuServer::drive_is_cdrom($drive, 1);
> + die "Can't move physical disk to another VM\n" if $drive->{file} =~ m|^/dev/|;
Could re-use $source_volid instead of $drive->{file}
> + die "Can't move disk used by a snapshot to another VM\n"
> + if PVE::QemuServer::Drive::is_volume_in_use(
> + $storecfg,
> + $source_conf,
> + $disk,
> + $source_volid,
> + );
> +
> + die "Storage does not support moving of this disk to another VM\n"
> + if !PVE::Storage::volume_has_feature(
> + $storecfg,
> + 'rename',
> + $source_volid,
> + );
Style nit: two multiline post-if expressions
> +
> + die "Cannot move disk to another VM while the source VM is running\n"
> + if PVE::QemuServer::check_running($vmid) && $disk !~ m/^unused\d+$/;
> +
> + if ($target_disk !~ m/^unused\d+$/ && $target_disk =~ m/^([^\d]+)\d+$/) {
> + my $interface = $1;
> + my $desc = PVE::JSONSchema::get_standard_option("pve-qm-${interface}");
> + eval {
> + PVE::JSONSchema::parse_property_string(
> + $desc->{format},
> + $source_conf->{$disk},
> + )
> + };
> + if (my $err = $@) {
> + die "Cannot move disk to another VM: ${err}";
> + }
> + }
> +
> + my $repl_conf = PVE::ReplicationConfig->new();
> + my $is_replicated = $repl_conf->check_for_existing_jobs($target_vmid, 1);
> + my ($storeid, undef) = PVE::Storage::parse_volume_id($source_volid);
> + my $format = (PVE::Storage::parse_volname($storecfg, $source_volid))[6];
> + if ($is_replicated && !PVE::Storage::storage_can_replicate($storecfg, $storeid, $format)) {
> + die "Cannot move disk to a replicated VM. Storage does not support replication!\n";
> + }
> +
> + return ($source_conf, $target_conf);
> + };
> +
> + my $logfunc = sub {
> + my ($msg) = @_;
> + print STDERR "$msg\n";
> + };
> +
> + my $disk_reassignfn = sub {
> + return PVE::QemuConfig->lock_config($vmid, sub {
> + return PVE::QemuConfig->lock_config($target_vmid, sub {
> + my ($source_conf, $target_conf) = &$load_and_check_reassign_configs();
> +
> + my $drive_param = PVE::QemuServer::parse_drive(
> + $target_disk,
> + $source_conf->{$disk},
> + );
> +
> + print "moving disk '$disk' from VM '$vmid' to '$target_vmid'\n";
> + my ($storeid, $source_volname) = PVE::Storage::parse_volume_id($source_volid);
> +
> + my $fmt = undef;
> + if ($source_volname =~ m/vm-\d+-\S+\.(raw|qcow2|vmdk)$/) {
> + $fmt = $1;
> + }
Using parse_volname to get the format should be more complete/robust.
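E.g. (untested), the same way the replication check above already gets it:
    my $fmt = (PVE::Storage::parse_volname($storecfg, $source_volid))[6];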
> +
> + my $target_volname = PVE::Storage::find_free_diskname(
> + $storecfg,
> + $storeid,
> + $target_vmid,
> + $fmt
> + );
> +
> + my $new_volid = PVE::Storage::rename_volume(
> + $storecfg,
> + $source_volid,
> + $target_volname,
> + );
> +
> + $drive_param->{file} = $new_volid;
> +
> + delete $source_conf->{$disk};
> + print "removing disk '${disk}' from VM '${vmid}' config\n";
> + PVE::QemuConfig->write_config($vmid, $source_conf);
> +
> + my $drive_string = PVE::QemuServer::print_drive($drive_param);
> + &$update_vm_api(
> + {
> + node => $node,
> + vmid => $target_vmid,
> + digest => $target_digest,
> + $target_disk => $drive_string,
> + },
> + 1,
> + );
> +
> + # remove possible replication snapshots
> + if (PVE::Storage::volume_has_feature(
> + $storecfg,
> + 'replicate',
> + $source_volid),
> + ) {
> + eval {
> + PVE::Replication::prepare(
> + $storecfg,
> + [$new_volid],
> + undef,
> + 1,
> + undef,
> + $logfunc,
> + )
> + };
> + if (my $err = $@) {
> + print "Failed to remove replication snapshots on moved disk " .
> + "'$target_disk'. Manual cleanup could be necessary.\n";
> + }
> + }
> + });
> + });
> + };
> +
> + if ($target_vmid) {
> + $rpcenv->check_vm_perm($authuser, $target_vmid, undef, ['VM.Config.Disk'])
> + if $authuser ne 'root@pam';
> +
> + die "Moving a disk to the same VM is not possible. Did you mean to ".
> + "move the disk to a different storage?\n"
> + if $vmid eq $target_vmid;
> +
> + &$load_and_check_reassign_configs();
> + return $rpcenv->fork_worker(
> + 'qmmove',
> + "${vmid}-${disk}>${target_vmid}-${target_disk}",
> + $authuser,
> + $disk_reassignfn
> + );
> + } elsif ($storeid) {
> + die "cannot move disk '$disk', only configured disks can be moved to another storage\n"
> + if $disk =~ m/^unused\d+$/;
> + return PVE::QemuConfig->lock_config($vmid, $move_updatefn);
> + } else {
> + die "Provide either a 'storage' to move the disk to a different " .
> + "storage or 'target-vmid' and 'target-disk' to move the disk " .
> + "to another VM\n";
> + }
> }});
>
> my $check_vm_disks_local = sub {
> diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
> index ef99b6d..a92d301 100755
> --- a/PVE/CLI/qm.pm
> +++ b/PVE/CLI/qm.pm
> @@ -910,7 +910,7 @@ our $cmddef = {
>
> resize => [ "PVE::API2::Qemu", 'resize_vm', ['vmid', 'disk', 'size'], { node => $nodename } ],
>
> - 'move-disk' => [ "PVE::API2::Qemu", 'move_vm_disk', ['vmid', 'disk', 'storage'], { node => $nodename }, $upid_exit ],
> + 'move-disk' => [ "PVE::API2::Qemu", 'move_vm_disk', ['vmid', 'disk', 'storage', 'target-vmid', 'target-disk'], { node => $nodename }, $upid_exit ],
> move_disk => { alias => 'move-disk' },
>
> unlink => [ "PVE::API2::Qemu", 'unlink', ['vmid'], { node => $nodename } ],
>
* Re: [pve-devel] [PATCH v1 qemu-server 4/9] Drive: add valid_drive_names_with_unused
2021-07-19 14:52 ` [pve-devel] [PATCH v1 qemu-server 4/9] Drive: add valid_drive_names_with_unused Aaron Lauterer
@ 2021-08-03 7:35 ` Fabian Ebner
0 siblings, 0 replies; 22+ messages in thread
From: Fabian Ebner @ 2021-08-03 7:35 UTC (permalink / raw)
To: pve-devel, Aaron Lauterer
On 19.07.21 at 16:52, Aaron Lauterer wrote:
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> PVE/QemuServer/Drive.pm | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
> index 5110190..09f37c1 100644
> --- a/PVE/QemuServer/Drive.pm
> +++ b/PVE/QemuServer/Drive.pm
> @@ -393,6 +393,10 @@ sub valid_drive_names {
> 'efidisk0');
> }
>
> +sub valid_drive_names_with_unused {
> + return (valid_drive_names(), map {"unused$_"} (0 .. ($MAX_UNUSED_DISKS -1)));
style nit: missing space between - and 1
> +}
> +
> sub is_valid_drivename {
> my $dev = shift;
>
>
* Re: [pve-devel] [PATCH v1 qemu-server 6/9] api: move-disk: cleanup very long lines
2021-07-19 14:52 ` [pve-devel] [PATCH v1 qemu-server 6/9] api: move-disk: cleanup very long lines Aaron Lauterer
@ 2021-08-03 7:37 ` Fabian Ebner
0 siblings, 0 replies; 22+ messages in thread
From: Fabian Ebner @ 2021-08-03 7:37 UTC (permalink / raw)
To: pve-devel, Aaron Lauterer
On 19.07.21 at 16:52, Aaron Lauterer wrote:
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> PVE/API2/Qemu.pm | 25 ++++++++++++++++++++-----
> 1 file changed, 20 insertions(+), 5 deletions(-)
>
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index ed1179b..0529c1b 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -3301,13 +3301,15 @@ __PACKAGE__->register_method({
> },
> delete => {
> type => 'boolean',
> - description => "Delete the original disk after successful copy. By default the original disk is kept as unused disk.",
> + description => "Delete the original disk after successful copy. By default the " .
> + "original disk is kept as unused disk.",
> optional => 1,
> default => 0,
> },
> digest => {
> type => 'string',
> - description => 'Prevent changes if current configuration file has different SHA1 digest. This can be used to prevent concurrent modifications.',
> + description => "Prevent changes if current configuration file has different SHA1 " .
> + "digest. This can be used to prevent concurrent modifications.",
> maxLength => 40,
> optional => 1,
> },
> @@ -3386,11 +3388,20 @@ __PACKAGE__->register_method({
> (!$format || !$oldfmt || $oldfmt eq $format);
>
> # this only checks snapshots because $disk is passed!
> - my $snapshotted = PVE::QemuServer::Drive::is_volume_in_use($storecfg, $conf, $disk, $old_volid);
> + my $snapshotted = PVE::QemuServer::Drive::is_volume_in_use(
> + $storecfg,
> + $conf,
> + $disk,
> + $old_volid
style nit: missing trailing comma (also below and for the last container
patch)
> + );
> die "you can't move a disk with snapshots and delete the source\n"
> if $snapshotted && $param->{delete};
>
> - PVE::Cluster::log_msg('info', $authuser, "move disk VM $vmid: move --disk $disk --storage $storeid");
> + PVE::Cluster::log_msg(
> + 'info',
> + $authuser,
> + "move disk VM $vmid: move --disk $disk --storage $storeid"
> + );
>
> my $running = PVE::QemuServer::check_running($vmid);
>
> @@ -3409,7 +3420,11 @@ __PACKAGE__->register_method({
> if $snapshotted;
>
> my $bwlimit = extract_param($param, 'bwlimit');
> - my $movelimit = PVE::Storage::get_bandwidth_limit('move', [$oldstoreid, $storeid], $bwlimit);
> + my $movelimit = PVE::Storage::get_bandwidth_limit(
> + 'move',
> + [$oldstoreid, $storeid],
> + $bwlimit
> + );
>
> my $newdrive = PVE::QemuServer::clone_disk(
> $storecfg,
>
* Re: [pve-devel] [PATCH v1 container 8/9] api: move-volume: add move to another container
2021-07-19 14:52 ` [pve-devel] [PATCH v1 container 8/9] api: move-volume: add move to another container Aaron Lauterer
@ 2021-08-03 8:14 ` Fabian Ebner
2021-08-05 12:45 ` Aaron Lauterer
0 siblings, 1 reply; 22+ messages in thread
From: Fabian Ebner @ 2021-08-03 8:14 UTC (permalink / raw)
To: pve-devel, Aaron Lauterer
On 19.07.21 at 16:52, Aaron Lauterer wrote:
> The goal of this is to expand the move-volume API endpoint to make it
> possible to move a container volume / mountpoint to another container.
>
> Currently it works for regular mountpoints though it would be nice to be
> able to do it for unused mountpoints as well.
>
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> This is mostly the code from qemu-server with some adaptations, mainly
> error messages and some checks.
>
> Previous checks have been moved to '$move_to_storage_checks'.
>
> rfc -> v1:
> * add check if target guest is replicated and fail if the moved volume
> does not support it
> * check if source volume has a format suffix and pass it to
> 'find_free_disk' or if the prefix is vm/subvol as those also have
> their own meaning, see the comment in the code
> * fixed some style nits
>
> src/PVE/API2/LXC.pm | 270 ++++++++++++++++++++++++++++++++++++++++----
> src/PVE/CLI/pct.pm | 2 +-
> 2 files changed, 250 insertions(+), 22 deletions(-)
>
> diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
> index b929481..0af22c1 100644
> --- a/src/PVE/API2/LXC.pm
> +++ b/src/PVE/API2/LXC.pm
> @@ -27,6 +27,8 @@ use PVE::API2::LXC::Snapshot;
> use PVE::JSONSchema qw(get_standard_option);
> use base qw(PVE::RESTHandler);
>
> +use Data::Dumper;
> +
Left-over from debugging?
> BEGIN {
> if (!$ENV{PVE_GENERATING_DOCS}) {
> require PVE::HA::Env::PVE2;
> @@ -1784,10 +1786,12 @@ __PACKAGE__->register_method({
> method => 'POST',
> protected => 1,
> proxyto => 'node',
> - description => "Move a rootfs-/mp-volume to a different storage",
> + description => "Move a rootfs-/mp-volume to a different storage or to a different container.",
> permissions => {
> description => "You need 'VM.Config.Disk' permissions on /vms/{vmid}, " .
> - "and 'Datastore.AllocateSpace' permissions on the storage.",
> + "and 'Datastore.AllocateSpace' permissions on the storage. To move ".
> + "a volume to another container, you need the permissions on the ".
> + "target container as well.",
> check =>
> [ 'and',
> ['perm', '/vms/{vmid}', [ 'VM.Config.Disk' ]],
> @@ -1799,14 +1803,20 @@ __PACKAGE__->register_method({
> properties => {
> node => get_standard_option('pve-node'),
> vmid => get_standard_option('pve-vmid', { completion => \&PVE::LXC::complete_ctid }),
> + 'target-vmid' => get_standard_option('pve-vmid', {
> + completion => \&PVE::LXC::complete_ctid,
> + optional => 1,
> + }),
> volume => {
> type => 'string',
> + #TODO: check how to handle unused mount points as the mp parameter is not configured
> enum => [ PVE::LXC::Config->valid_volume_keys() ],
> description => "Volume which will be moved.",
> },
> storage => get_standard_option('pve-storage-id', {
> description => "Target Storage.",
> completion => \&PVE::Storage::complete_storage_enabled,
> + optional => 1,
> }),
> delete => {
> type => 'boolean',
> @@ -1827,6 +1837,20 @@ __PACKAGE__->register_method({
> minimum => '0',
> default => 'clone limit from datacenter or storage config',
> },
> + 'target-mp' => {
Using 'target-volume' would be more consistent.
> + type => 'string',
> + description => "The config key the mp will be moved to.",
> + enum => [PVE::LXC::Config->valid_volume_keys()],
> + optional => 1,
> + },
> + 'target-digest' => {
> + type => 'string',
> + description => "Prevent changes if current configuration file of the target " .
> + "container has a different SHA1 digest. This can be used to prevent " .
> + "concurrent modifications.",
> + maxLength => 40,
> + optional => 1,
> + },
> },
> },
> returns => {
> @@ -1841,32 +1865,49 @@ __PACKAGE__->register_method({
>
> my $vmid = extract_param($param, 'vmid');
>
> + my $target_vmid = extract_param($param, 'target-vmid');
> +
> my $storage = extract_param($param, 'storage');
>
> my $mpkey = extract_param($param, 'volume');
>
> + my $target_mp = extract_param($param, 'target-mp');
> +
> + my $digest = extract_param($param, 'digest');
> +
> + my $target_digest = extract_param($param, 'target-digest');
> +
> my $lockname = 'disk';
>
> my ($mpdata, $old_volid);
>
> - PVE::LXC::Config->lock_config($vmid, sub {
> - my $conf = PVE::LXC::Config->load_config($vmid);
> - PVE::LXC::Config->check_lock($conf);
> + die "either set storage or target-vmid, but not both\n"
> + if $storage && $target_vmid;
Same as for qemu: either require target_vmid *and* target_mp or have it
default to the one from the source.
>
> - die "cannot move volumes of a running container\n" if PVE::LXC::check_running($vmid);
> + die "cannot move volumes of a running container\n" if PVE::LXC::check_running($vmid);
This check was previously inside lock_config, but now it isn't anymore.
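I.e. it should probably move back under the lock, roughly (untested):
    PVE::LXC::Config->lock_config($vmid, sub {
        my $conf = PVE::LXC::Config->load_config($vmid);
        PVE::LXC::Config->check_lock($conf);
        die "cannot move volumes of a running container\n"
            if PVE::LXC::check_running($vmid);
        # remaining checks as before
    });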
>
> - $mpdata = PVE::LXC::Config->parse_volume($mpkey, $conf->{$mpkey});
> - $old_volid = $mpdata->{volume};
> + my $storecfg = PVE::Storage::config();
> + my $source_volid;
>
> - die "you can't move a volume with snapshots and delete the source\n"
> - if $param->{delete} && PVE::LXC::Config->is_volume_in_use_by_snapshots($conf, $old_volid);
> + my $move_to_storage_checks = sub {
> + PVE::LXC::Config->lock_config($vmid, sub {
> + my $conf = PVE::LXC::Config->load_config($vmid);
> + PVE::LXC::Config->check_lock($conf);
>
> - PVE::Tools::assert_if_modified($param->{digest}, $conf->{digest});
>
> - PVE::LXC::Config->set_lock($vmid, $lockname);
> - });
> + $mpdata = PVE::LXC::Config->parse_volume($mpkey, $conf->{$mpkey});
> + $old_volid = $mpdata->{volume};
>
> - my $realcmd = sub {
> + die "you can't move a volume with snapshots and delete the source\n"
> + if $param->{delete} && PVE::LXC::Config->is_volume_in_use_by_snapshots($conf, $old_volid);
> +
> + PVE::Tools::assert_if_modified($digest, $conf->{digest});
> +
> + PVE::LXC::Config->set_lock($vmid, $lockname);
> + });
> + };
> +
> + my $storage_realcmd = sub {
> eval {
> PVE::Cluster::log_msg('info', $authuser, "move volume CT $vmid: move --volume $mpkey --storage $storage");
>
> @@ -1936,15 +1977,202 @@ __PACKAGE__->register_method({
> warn $@ if $@;
> die $err if $err;
> };
> - my $task = eval {
> - $rpcenv->fork_worker('move_volume', $vmid, $authuser, $realcmd);
> +
> + my $load_and_check_reassign_configs = sub {
> + my $vmlist = PVE::Cluster::get_vmlist()->{ids};
> + die "Both containers need to be on the same node ($vmlist->{$vmid}->{node}) ".
> + "but target continer is on $vmlist->{$target_vmid}->{node}.\n"
> + if $vmlist->{$vmid}->{node} ne $vmlist->{$target_vmid}->{node};
> +
> + my $source_conf = PVE::LXC::Config->load_config($vmid);
> + PVE::LXC::Config->check_lock($source_conf);
> + my $target_conf = PVE::LXC::Config->load_config($target_vmid);
> + PVE::LXC::Config->check_lock($target_conf);
> +
> + die "Can't move volumes from or to template VMs\n"
> + if ($source_conf->{template} || $target_conf->{template});
> +
> + if ($digest) {
> + eval { PVE::Tools::assert_if_modified($digest, $source_conf->{digest}) };
> + if (my $err = $@) {
> + die "Container ${vmid}: ${err}";
> + }
> + }
> +
> + if ($target_digest) {
> + eval { PVE::Tools::assert_if_modified($target_digest, $target_conf->{digest}) };
> + if (my $err = $@) {
> + die "Container ${target_vmid}: ${err}";
> + }
> + }
> +
> + die "volume '${mpkey}' for container '$vmid' does not exist\n"
> + if !defined($source_conf->{$mpkey});
> +
> + die "Target volume key '${target_mp}' is already in use for container '$target_vmid'\n"
> + if exists $target_conf->{$target_mp};
Same as for qemu: any reason for using exists?
> +
> + my $drive = PVE::LXC::Config->parse_volume(
> + $mpkey,
> + $source_conf->{$mpkey},
> + );
> +
> + $source_volid = $drive->{volume};
> +
> + die "disk '${mpkey}' has no associated volume\n"
> + if !$source_volid;
> +
> + die "Storage does not support moving of this disk to another container\n"
> + if !PVE::Storage::volume_has_feature(
> + $storecfg,
> + 'rename',
> + $source_volid,
> + );
> +
> + die "Cannot move a bindmound or device mount to another container\n"
> + if PVE::LXC::Config->classify_mountpoint($source_volid) ne "volume";
The result from classify_mountpoint should be in $drive->{type} already.
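I.e. (untested):
    die "Cannot move a bind mount or device mount to another container\n"
        if $drive->{type} ne 'volume';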
> + die "Cannot move volume to another container while the source container is running\n"
> + if PVE::LXC::check_running($vmid) && $mpkey !~ m/^unused\d+$/;
> +
> + my $repl_conf = PVE::ReplicationConfig->new();
> + my $is_replicated = $repl_conf->check_for_existing_jobs($target_vmid, 1);
> + my ($storeid, undef) = PVE::Storage::parse_volume_id($source_volid);
> + my $format = (PVE::Storage::parse_volname($storecfg, $source_volid))[6];
> + if (
> + $is_replicated
> + && !PVE::Storage::storage_can_replicate($storecfg, $storeid, $format)
> + ) {
> + die "Cannot move volume to a replicated container. Storage " .
> + "does not support replication!\n";
> + }
> + return ($source_conf, $target_conf);
> + };
> +
> + my $logfunc = sub {
> + my ($msg) = @_;
> + print STDERR "$msg\n";
> };
> - if (my $err = $@) {
> - eval { PVE::LXC::Config->remove_lock($vmid, $lockname) };
> - warn $@ if $@;
> - die $err;
> +
> + my $volume_reassignfn = sub {
> + return PVE::LXC::Config->lock_config($vmid, sub {
> + return PVE::LXC::Config->lock_config($target_vmid, sub {
> + my ($source_conf, $target_conf) = &$load_and_check_reassign_configs();
> +
> + my $drive_param = PVE::LXC::Config->parse_volume(
> + $target_mp,
> + $source_conf->{$mpkey},
> + );
> +
> + print "moving volume '$mpkey' from container '$vmid' to '$target_vmid'\n";
> + my ($storage, $source_volname) = PVE::Storage::parse_volume_id($source_volid);
> +
> + # The format in find_free_diskname handles two cases for containers.
> + # If it is set to 'subvol', the prefix will be set to it instead of 'vm',
> + # this is needed for example for ZFS based storages.
> + # The other thing to check for, in case of directory/file based storages,
> + # is .raw files as we need to pass this format in that case.
> + my $fmt = undef;
> + if ($source_volname =~ m/(subvol|vm)-\d+-disk-\S+$/) {
> + $fmt = $1 if $1 eq "subvol";
> + } else {
> + die "could not detect source volname prefix\n";
> + }
> + if ($source_volname =~ m/vm-\d+-disk-\S+\.(raw)/) {
> + $fmt = $1;
> + }
Using parse_volname should be easier and future-proof.
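E.g. (untested); IIRC parse_volname already returns 'subvol' as the format
for ZFS subvolumes and 'raw' for .raw files, which is exactly what
find_free_diskname needs here:
    my $fmt = (PVE::Storage::parse_volname($storecfg, $source_volid))[6];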
> +
> + my $target_volname = PVE::Storage::find_free_diskname(
> + $storecfg,
> + $storage,
> + $target_vmid,
> + $fmt
> + );
> +
> + my $new_volid = PVE::Storage::rename_volume(
> + $storecfg,
> + $source_volid,
> + $target_volname,
> + );
> +
> + $drive_param->{volume} = $new_volid;
> +
> + delete $source_conf->{$mpkey};
> + print "removing volume '${mpkey}' from container '${vmid}' config\n";
> + PVE::LXC::Config->write_config($vmid, $source_conf);
> +
> + my $drive_string = PVE::LXC::Config->print_volume($target_mp, $drive_param);
> + my $running = PVE::LXC::check_running($target_vmid);
> + my $param = { $target_mp => $drive_string };
> +
> + my $err = Dumper(PVE::LXC::Config->update_pct_config(
> + $target_vmid,
> + $target_conf,
> + $running,
> + $param
> + ));
$err and Dumper left-over from debugging?
> +
> + PVE::LXC::Config->write_config($target_vmid, $target_conf);
> + $target_conf = PVE::LXC::Config->load_config($target_vmid);
> +
> + PVE::LXC::update_lxc_config($target_vmid, $target_conf);
> + print "target container '$target_vmid' updated with '$target_mp'\n";
> +
> + # remove possible replication snapshots
> + if (PVE::Storage::volume_has_feature(
> + $storecfg,
> + 'replicate',
> + $source_volid),
> + ) {
> + eval {
> + PVE::Replication::prepare(
> + $storecfg,
> + [$new_volid],
> + undef,
> + 1,
> + undef,
> + $logfunc,
> + )
> + };
> + if (my $err = $@) {
> + print "Failed to remove replication snapshots on volume ".
> + "'$target_mp'. Manual cleanup could be necessary.\n";
> + }
> + }
> + });
> + });
> + };
> +
> + if ($target_vmid) {
> + $rpcenv->check_vm_perm($authuser, $target_vmid, undef, ['VM.Config.Disk'])
> + if $authuser ne 'root@pam';
> +
> + die "Moving a volume to the same container is not possible. Did you ".
> + "mean to move the volume to a different storage?\n"
> + if $vmid eq $target_vmid;
> +
> + &$load_and_check_reassign_configs();
> + return $rpcenv->fork_worker(
> + 'move_volume',
> + "${vmid}-${mpkey}>${target_vmid}-${target_mp}",
> + $authuser,
> + $volume_reassignfn
> + );
> + } elsif ($storage) {
> + &$move_to_storage_checks();
> + my $task = eval {
> + $rpcenv->fork_worker('move_volume', $vmid, $authuser, $storage_realcmd);
> + };
> + if (my $err = $@) {
> + eval { PVE::LXC::Config->remove_lock($vmid, $lockname) };
> + warn $@ if $@;
> + die $err;
> + }
> + return $task;
> + } else {
> + die "Provide either a 'storage' to move the mount point to a ".
> + "different storage or 'target-vmid' and 'target-mp' to move ".
> + "the mount point to another container\n";
> }
> - return $task;
> }});
>
> __PACKAGE__->register_method({
> diff --git a/src/PVE/CLI/pct.pm b/src/PVE/CLI/pct.pm
> index 7ac5a55..5f2bf04 100755
> --- a/src/PVE/CLI/pct.pm
> +++ b/src/PVE/CLI/pct.pm
> @@ -849,7 +849,7 @@ our $cmddef = {
>
> clone => [ "PVE::API2::LXC", 'clone_vm', ['vmid', 'newid'], { node => $nodename }, $upid_exit ],
> migrate => [ "PVE::API2::LXC", 'migrate_vm', ['vmid', 'target'], { node => $nodename }, $upid_exit],
> - 'move-volume' => [ "PVE::API2::LXC", 'move_volume', ['vmid', 'volume', 'storage'], { node => $nodename }, $upid_exit ],
> + 'move-volume' => [ "PVE::API2::LXC", 'move_volume', ['vmid', 'volume', 'storage', 'target-vmid', 'target-mp'], { node => $nodename }, $upid_exit ],
> move_volume => { alias => 'move-volume' },
>
> snapshot => [ "PVE::API2::LXC::Snapshot", 'snapshot', ['vmid', 'snapname'], { node => $nodename } , $upid_exit ],
>
* Re: [pve-devel] [PATCH v1 storage qemu-server container 0/9] move disk or volume to other guests
2021-07-19 14:52 [pve-devel] [PATCH v1 storage qemu-server container 0/9] move disk or volume to other guests Aaron Lauterer
` (8 preceding siblings ...)
2021-07-19 14:52 ` [pve-devel] [PATCH v1 container 9/9] api: move-volume: cleanup very long lines Aaron Lauterer
@ 2021-08-03 8:27 ` Fabian Ebner
9 siblings, 0 replies; 22+ messages in thread
From: Fabian Ebner @ 2021-08-03 8:27 UTC (permalink / raw)
To: pve-devel, Aaron Lauterer
Looks mostly good to me; mainly nits and suggestions for some
improvements. The only real issues I found are the < 9 instead of < 10
for checking the API version, and the check for running containers not
being inside lock_config anymore (maybe that is not even a big deal, I
didn't check in detail).
The suggested changes for the find_free_diskname interface would be nice
to have IMHO, while it's not set in stone yet, but they are not a must
either.
On 19.07.21 at 16:52, Aaron Lauterer wrote:
> This is the continuation of 'disk-reassign' but instead of a separate
> API endpoint we now follow the approach to make it part of the
> 'move-disk' and 'move-volume' endpoints for VMs and containers.
>
> The main idea is to make it easy to move a disk/volume to another guest.
> Currently this is a manual and error prone process that requires
> knowledge of how PVE handles disks/volumes and the mapping which guest
> they belong to.
>
> With this, the 'qm move-disk' and 'pct move-volume' are changed in the
> way that the storage parameter is optional as well as the new
> target-vmid and target-{disk,mp}. This will keep old calls to move the
> disk/volume to another storage working. To move to another guest, the
> storage needs to be omitted.
>
> Major changes since the last iteration as dedicated API endpoint [0] are
> that the storage layer only implements the renaming itself. The layer
> above (qemu-server and pve-container) define the name of the new
> volume/disk. Therefore it was necessary to expose the
> 'find_free_diskname' function. The rename function on the storage layer
> handles possible template references and the creation of the new volid
> as that is highly dependent on the actual storage.
>
> The following storage types are implemented at the moment:
> * dir based ones
> * ZFS
> * (thin) LVM
> * Ceph RBD
>
>
> Most parts of the disk-reassign code have been taken and moved into the
> 'move_disk' and 'move_volume' endpoints with conditional checking if the
> reassign code or the move to other storage code is meant to run
> depending on the given parameters.
>
> Changes since the RFC [1]:
> * added check if target guest is replicated and fail if storage does not
> support replication
> * only pass minimum of needed parameters to the storage layer and infer
> other needed information from that
> * lock storage and check if the volume already exists (handling a
> possible race condition between calling find_free_disk and the actual
> renaming)
> * use a helper method to determine if the plugin needs the fmt suffix
> in the volume name
> * getting format of the source and pass it to find_free_disk
> * style fixes (long lines, multiline post-if, ...)
>
> [0] https://lists.proxmox.com/pipermail/pve-devel/2021-April/047481.html
> [1] https://lists.proxmox.com/pipermail/pve-devel/2021-June/048400.html
>
> storage: Aaron Lauterer (2):
> storage: expose find_free_diskname
> add disk rename feature
>
> PVE/Storage.pm | 31 +++++++++++++++--
> PVE/Storage/LVMPlugin.pm | 32 ++++++++++++++++++
> PVE/Storage/LvmThinPlugin.pm | 1 +
> PVE/Storage/Plugin.pm | 65 ++++++++++++++++++++++++++++++++++++
> PVE/Storage/RBDPlugin.pm | 34 +++++++++++++++++++
> PVE/Storage/ZFSPoolPlugin.pm | 29 ++++++++++++++++
> 6 files changed, 190 insertions(+), 2 deletions(-)
>
>
> qemu-server: Aaron Lauterer (4):
> cli: qm: change move_disk to move-disk
> Drive: add valid_drive_names_with_unused
> api: move-disk: add move to other VM
> api: move-disk: cleanup very long lines
>
> PVE/API2/Qemu.pm | 254 ++++++++++++++++++++++++++++++++++++++--
> PVE/CLI/qm.pm | 3 +-
> PVE/QemuServer/Drive.pm | 4 +
> 3 files changed, 250 insertions(+), 11 deletions(-)
>
>
> container: Aaron Lauterer (3):
> cli: pct: change move_volume to move-volume
> api: move-volume: add move to another container
> api: move-volume: cleanup very long lines
>
> src/PVE/API2/LXC.pm | 303 ++++++++++++++++++++++++++++++++++++++++----
> src/PVE/CLI/pct.pm | 3 +-
> 2 files changed, 278 insertions(+), 28 deletions(-)
>
>
* Re: [pve-devel] [PATCH v1 storage 1/9] storage: expose find_free_diskname
2021-08-02 12:56 ` Fabian Ebner
@ 2021-08-03 14:20 ` Aaron Lauterer
2021-08-04 7:27 ` Fabian Ebner
0 siblings, 1 reply; 22+ messages in thread
From: Aaron Lauterer @ 2021-08-03 14:20 UTC (permalink / raw)
To: Fabian Ebner, pve-devel
On 8/2/21 2:56 PM, Fabian Ebner wrote:
> On 19.07.21 at 16:52, Aaron Lauterer wrote:
>> We do not expose the parameter 'add_fmt_suffix' used by the internal
>> implementation of 'find_free_diskname'. This is something only the
>> plugins themselves know; it cannot be determined easily and reliably
>> by an outside caller.
>>
>> This is why the new 'wants_fmt_suffix' method has been introduced. For
>> most plugins the return value is very clear. For the default
>> implementation in Plugin.pm we add another check to be on the safe side
>> and only return true if the '$scfg->{path}' option is present.
>> It indicates that the volume in question is an actual file which will
>> need the suffix.
>>
>> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
>> ---
>> rfc -> v1:
>> dropped $add_fmt_suffix parameter and added the "wants_fmt_suffix"
>> helper method in each plugin.
>>
>> PVE/Storage.pm | 11 +++++++++++
>> PVE/Storage/LVMPlugin.pm | 5 +++++
>> PVE/Storage/Plugin.pm | 7 +++++++
>> PVE/Storage/RBDPlugin.pm | 5 +++++
>> PVE/Storage/ZFSPoolPlugin.pm | 5 +++++
>> 5 files changed, 33 insertions(+)
>>
>> diff --git a/PVE/Storage.pm b/PVE/Storage.pm
>> index c04b5a2..afeb2e3 100755
>> --- a/PVE/Storage.pm
>> +++ b/PVE/Storage.pm
>> @@ -203,6 +203,17 @@ sub storage_can_replicate {
>> return $plugin->storage_can_replicate($scfg, $storeid, $format);
>> }
>> +sub find_free_diskname {
>> + my ($cfg, $storeid, $vmid, $fmt) = @_;
>> +
>> + my $scfg = storage_config($cfg, $storeid);
>> + my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
>> +
>> + my $add_fmt_suffix = $plugin->wants_fmt_suffix($scfg);
>> +
>> + return $plugin->find_free_diskname($storeid, $scfg, $vmid, $fmt, $add_fmt_suffix);
>
> Maybe this should rather be find_free_volname and always return the full volname (i.e. "<VMID>/vm-<VMID>-disk-<N>.<FMT>" for dir-based storages) so that it can be parsed with parse_volname later?
>
> That would mean something like wants_vmid_prefix() is needed, but I'd rather extend (and rename) the wants_fmt_suffix method instead.
>
> The rationale is: the value returned here is used for the $target_volname parameter in the next patch, and in case it's a dir-based storage, passing in an actual volname like '123/vm-123-disk-0.raw' currently fails (because it actually expects only the disk name).
Adding a new "find_free_volname" is probably the cleanest approach. For the default directory implementation we can return it in the full "<VMID>/vm-<VMID>-disk-<N>.<FMT>" format. All other storage plugins would need their own implementation though, and for most it would basically be a thin wrapper around "find_free_diskname". Alternatively, it might work to check against $scfg->{path} to decide whether to return just what we get from "find_free_diskname" (LVM, RBD, ZFS, ...) or to add the VMID prefix and FMT suffix (files/directory).
What do you have in mind regarding "wants_fmt_suffix" exactly? If I am not mistaken, directory based storages need both (FMT suffix and VMID prefix), while all other storages (AFAICT) need neither.
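One possibility (hypothetical name, untested) would be a single helper that
answers both questions at once, since they coincide for all current plugins:
    # hypothetical replacement for wants_fmt_suffix
    sub needs_full_volname {
        my ($class, $scfg) = @_;
        # dir-based plugins need both the '<VMID>/' prefix and the fmt suffix
        return $scfg->{path} ? 1 : 0;
    }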
>
>> +}
>> +
>> sub storage_ids {
>> my ($cfg) = @_;
>> diff --git a/PVE/Storage/LVMPlugin.pm b/PVE/Storage/LVMPlugin.pm
>> index 139d391..3e5b6c8 100644
>> --- a/PVE/Storage/LVMPlugin.pm
>> +++ b/PVE/Storage/LVMPlugin.pm
>> @@ -201,6 +201,11 @@ sub type {
>> return 'lvm';
>> }
>> +sub wants_fmt_suffix {
>> + my ($class, $scfg) = @_;
>> + return 0;
>> +}
>> +
>
> Nit: since there is no $scfg->{path}, the default implementation would already return 0. Same for {RBD,ZFSPool}Plugin below, but overwriting the method doesn't really hurt either of course.
You're right. I guess it comes down to whether we want this to be explicit or implicit. Other opinions?
>
>> sub plugindata {
>> return {
>> content => [ {images => 1, rootdir => 1}, { images => 1 }],
>> diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
>> index b1865cb..5c6c659 100644
>> --- a/PVE/Storage/Plugin.pm
>> +++ b/PVE/Storage/Plugin.pm
>> @@ -191,6 +191,13 @@ sub default_format {
>> return wantarray ? ($def_format, $valid_formats) : $def_format;
>> }
>> +sub wants_fmt_suffix {
>> + my ($class, $scfg) = @_;
>> + return 1 if $scfg->{path};
>> + return 0;
>> +}
>> +
>> +
>> PVE::JSONSchema::register_format('pve-storage-path', \&verify_path);
>> sub verify_path {
>> my ($path, $noerr) = @_;
>> diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
>> index a8d1243..86ea45a 100644
>> --- a/PVE/Storage/RBDPlugin.pm
>> +++ b/PVE/Storage/RBDPlugin.pm
>> @@ -273,6 +273,11 @@ sub type {
>> return 'rbd';
>> }
>> +sub wants_fmt_suffix {
>> + my ($class, $scfg) = @_;
>> + return 0;
>> +}
>> +
>> sub plugindata {
>> return {
>> content => [ {images => 1, rootdir => 1}, { images => 1 }],
>> diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
>> index c4be70f..85e2211 100644
>> --- a/PVE/Storage/ZFSPoolPlugin.pm
>> +++ b/PVE/Storage/ZFSPoolPlugin.pm
>> @@ -18,6 +18,11 @@ sub type {
>> return 'zfspool';
>> }
>> +sub wants_fmt_suffix {
>> + my ($class, $scfg) = @_;
>> + return 0;
>> +}
>> +
>> sub plugindata {
>> return {
>> content => [ {images => 1, rootdir => 1}, {images => 1 , rootdir => 1}],
>>
* Re: [pve-devel] [PATCH v1 storage 2/9] add disk rename feature
2021-08-02 12:57 ` Fabian Ebner
@ 2021-08-03 14:22 ` Aaron Lauterer
0 siblings, 0 replies; 22+ messages in thread
From: Aaron Lauterer @ 2021-08-03 14:22 UTC (permalink / raw)
To: Fabian Ebner, pve-devel
On 8/2/21 2:57 PM, Fabian Ebner wrote:
> On 19.07.21 at 16:52, Aaron Lauterer wrote:
>> Functionality has been added for the following storage types:
>>
>> * directory ones, based on the default implementation:
>> * directory
>> * NFS
>> * CIFS
>> * gluster
>> * ZFS
>> * (thin) LVM
>> * Ceph
>>
>> A new feature `rename` has been introduced to mark which storage
>> plugin supports the feature.
>>
>> Version API and AGE have been bumped.
>>
>> The storage gets locked and each plugin checks if the target volume
>> already exists prior renaming.
>> This is done because there could be a race condition from the time the
>> external caller requests a new free disk name to the time the volume is
>> actually renamed.
>>
>> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
>> ---
>> rfc -> v1:
>> * reduced number of parameters to minimum needed, plugins infer needed
>> information themselves
>> * added storage locking and checking if volume already exists
>> * parse target_volname prior to renaming to check if valid
>>
>> old dedicated API endpoint -> rfc:
>> only do rename now but the rename function handles templates and returns
>> the new volid as this can be differently handled on some storages.
>>
>> PVE/Storage.pm | 20 +++++++++++--
>> PVE/Storage/LVMPlugin.pm | 27 +++++++++++++++++
>> PVE/Storage/LvmThinPlugin.pm | 1 +
>> PVE/Storage/Plugin.pm | 58 ++++++++++++++++++++++++++++++++++++
>> PVE/Storage/RBDPlugin.pm | 29 ++++++++++++++++++
>> PVE/Storage/ZFSPoolPlugin.pm | 24 +++++++++++++++
>> 6 files changed, 157 insertions(+), 2 deletions(-)
>>
>> diff --git a/PVE/Storage.pm b/PVE/Storage.pm
>> index afeb2e3..f6d86e1 100755
>> --- a/PVE/Storage.pm
>> +++ b/PVE/Storage.pm
>> @@ -41,11 +41,11 @@ use PVE::Storage::PBSPlugin;
>> use PVE::Storage::BTRFSPlugin;
>> # Storage API version. Increment it on changes in storage API interface.
>> -use constant APIVER => 9;
>> +use constant APIVER => 10;
>> # Age is the number of versions we're backward compatible with.
>> # This is like having 'current=APIVER' and age='APIAGE' in libtool,
>> # see https://www.gnu.org/software/libtool/manual/html_node/Libtool-versioning.html
>> -use constant APIAGE => 0;
>> +use constant APIAGE => 1;
>> # load standard plugins
>> PVE::Storage::DirPlugin->register();
>> @@ -360,6 +360,7 @@ sub volume_snapshot_needs_fsfreeze {
>> # snapshot - taking a snapshot is possible
>> # sparseinit - volume is sparsely initialized
>> # template - conversion to base image is possible
>> +# rename - renaming volumes is possible
>> # $snap - check if the feature is supported for a given snapshot
>> # $running - if the guest owning the volume is running
>> # $opts - hash with further options:
>> @@ -1868,6 +1869,21 @@ sub complete_volume {
>> return $res;
>> }
>> +sub rename_volume {
>> + my ($cfg, $source_volid, $target_volname) = @_;
>> +
>> + my ($storeid) = parse_volume_id($source_volid);
>> +
>> + activate_storage($cfg, $storeid);
>> +
>> + my $scfg = storage_config($cfg, $storeid);
>> + my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
>> +
>> + return $plugin->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
>> + return $plugin->rename_volume($scfg, $storeid, $source_volid, $target_volname);
>
> I know I suggested these parameters last time, but I hadn't worked it out in detail, and using $source_volname instead of $source_volid seems to be possible and nicer (would avoid some calls in the plugins AFAICS).
I'll give it a try and will also incorporate the other smaller issues you found.
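For reference, a rough sketch of the wrapper with that change applied, parsing the volid once at this layer and handing only the volname down to the plugin (untested):

    sub rename_volume {
        my ($cfg, $source_volid, $target_volname) = @_;

        # parse once here so the plugins no longer have to
        my ($storeid, $source_volname) = parse_volume_id($source_volid);

        activate_storage($cfg, $storeid);

        my $scfg = storage_config($cfg, $storeid);
        my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});

        return $plugin->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
            return $plugin->rename_volume($scfg, $storeid, $source_volname, $target_volname);
        });
    }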
>
>> + });
>> +}
>> +
>> # Various io-heavy operations require io/bandwidth limits which can be
>> # configured on multiple levels: The global defaults in datacenter.cfg, and
>> # per-storage overrides. When we want to do a restore from storage A to storage
>> diff --git a/PVE/Storage/LVMPlugin.pm b/PVE/Storage/LVMPlugin.pm
>> index 3e5b6c8..7a13a96 100644
>> --- a/PVE/Storage/LVMPlugin.pm
>> +++ b/PVE/Storage/LVMPlugin.pm
>> @@ -344,6 +344,16 @@ sub lvcreate {
>> run_command($cmd, errmsg => "lvcreate '$vg/$name' error");
>> }
>> +sub lvrename {
>> + my ($vg, $oldname, $newname) = @_;
>> +
>> + my $cmd = ['/sbin/lvrename', $vg, $oldname, $newname];
>
> $cmd is not used
>
>> + run_command(
>> + ['/sbin/lvrename', $vg, $oldname, $newname],
>> + errmsg => "lvrename '${vg}/${oldname}' to '${newname}' error"
>> + );
>> +}
>> +
>> sub alloc_image {
>> my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size) = @_;
>> @@ -589,6 +599,7 @@ sub volume_has_feature {
>> my $features = {
>> copy => { base => 1, current => 1},
>> + rename => {current => 1},
>> };
>> my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
>> @@ -697,4 +708,20 @@ sub volume_import_write {
>> input => '<&'.fileno($input_fh));
>> }
>> +sub rename_volume {
>> + my ($class, $scfg, $storeid, $source_volid, $target_volname) = @_;
>> +
>> + $class->parse_volname($target_volname);
>> +
>> + my (undef, $source_volname) = PVE::Storage::Plugin::parse_volume_id($source_volid);
>> +
>> + my $vg = $scfg->{vgname};
>> + my $lvs = lvm_list_volumes($vg);
>> + die "target volume '${target_volname}' already exists\n"
>> + if ($lvs->{$vg}->{$target_volname});
>> +
>> + lvrename($scfg->{vgname}, $source_volname, $target_volname);
>
> Nit: $vg can be re-used
>
>> + return "${storeid}:${target_volname}";
>> +}
>> +
>> 1;
>> diff --git a/PVE/Storage/LvmThinPlugin.pm b/PVE/Storage/LvmThinPlugin.pm
>> index 4ba6f90..c24af22 100644
>> --- a/PVE/Storage/LvmThinPlugin.pm
>> +++ b/PVE/Storage/LvmThinPlugin.pm
>> @@ -355,6 +355,7 @@ sub volume_has_feature {
>> template => { current => 1},
>> copy => { base => 1, current => 1, snap => 1},
>> sparseinit => { base => 1, current => 1},
>> + rename => {current => 1},
>> };
>> my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
>> diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
>> index 5c6c659..3f38a94 100644
>> --- a/PVE/Storage/Plugin.pm
>> +++ b/PVE/Storage/Plugin.pm
>> @@ -967,6 +967,7 @@ sub volume_has_feature {
>> snap => {qcow2 => 1} },
>> sparseinit => { base => {qcow2 => 1, raw => 1, vmdk => 1},
>> current => {qcow2 => 1, raw => 1, vmdk => 1} },
>> + rename => { current =>{qcow2 => 1, raw => 1, vmdk => 1} },
>
> Style nit: missing space
>
>> };
>> # clone_image creates a qcow2 volume
>> @@ -974,6 +975,14 @@ sub volume_has_feature {
>> defined($opts->{valid_target_formats}) &&
>> !(grep { $_ eq 'qcow2' } @{$opts->{valid_target_formats}});
>> + if (
>> + $feature eq 'rename'
>> + && $class->can('api')
>> + && $class->api() < 9
>
> Should be < 10 now like below.
>
>> + ) {
>> + return 0;
>> + }
>> +
>> my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
>> $class->parse_volname($volname);
>> @@ -1491,4 +1500,53 @@ sub volume_import_formats {
>> return ();
>> }
>> +sub rename_volume {
>> + my ($class, $scfg, $storeid, $source_volid, $target_volname) = @_;
>> + die "not implemented in storage plugin '$class'\n" if $class->can('api') && $class->api() < 10;
>
> Could also die if there is no $scfg->{path} (so that PBS, ISCSI, etc. die if the method is ever called for them).
>
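In code, that suggestion amounts to a one-line guard next to the API version check; a sketch, reusing the existing error message (PBS, iSCSI, etc. have no $scfg->{path} and would die here instead of falling through to the file-based rename):

    die "not implemented in storage plugin '$class'\n" if !$scfg->{path};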
>> +
>> + my $target_vmid;
>> +
>> + if ($target_volname =~ m/^(base|vm)?-(\d+)-\S+$/) {
>> + $target_vmid = $2;
>> + } else {
>> + die "could not parse target vmid\n";
>> + }
>> +
>> + $class->parse_volname("${target_vmid}/${target_volname}");
>
> So the $target_volname that's passed in here is not an actual volname by our "definition", i.e. something that can be parsed by parse_volname. And passing in an actual volname like '123/vm-123-disk-0.raw' doesn't work.
>
>> +
>> + my (undef, $source_volname) = PVE::Storage::Plugin::parse_volume_id($source_volid);
>> + my (
>> + undef,
>> + $source_name,
>> + $source_vmid,
>> + $base_name,
>> + $base_vmid,
>> + undef,
>> + $format
>> + ) = $class->parse_volname($source_volname);
>> +
>> + my $basedir = $class->get_subdir($scfg, 'images');
>> + # $source_volname includes the '<vmid>/' prefix
>> + my $sourcedir = "${basedir}/${source_vmid}";
>> + my $targetdir = "${basedir}/${target_vmid}";
>> +
>> + mkpath $targetdir;
>> +
>> + if ($target_volname !~ m/(vm|subsvol)-\d+-\S+\.(raw|qcow2|vmdk)$/) {
>> + $target_volname = "${target_volname}.${format}";
>> + }
>
> parse_volname for the target didn't fail, so no need for this check.
>
>> +
>> + my $old_path = "${sourcedir}/${source_name}";
>> + my $new_path = "${targetdir}/${target_volname}";
>> +
>> + die "target volume '${target_volname}' already exists\n" if -e $new_path;
>> +
>> + my $base = $base_name ? "${base_vmid}/${base_name}/" : '';
>> +
>> + rename($old_path, $new_path) ||
>> + die "rename '$old_path' to '$new_path' failed - $!\n";
>> +
>> + return "${storeid}:${base}${target_vmid}/${target_volname}";
>> +}
>> +
>> 1;
>> diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
>> index 86ea45a..6ae846a 100644
>> --- a/PVE/Storage/RBDPlugin.pm
>> +++ b/PVE/Storage/RBDPlugin.pm
>> @@ -733,6 +733,7 @@ sub volume_has_feature {
>> template => { current => 1},
>> copy => { base => 1, current => 1, snap => 1},
>> sparseinit => { base => 1, current => 1},
>> + rename => {current => 1},
>> };
>> my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) = $class->parse_volname($volname);
>> @@ -748,4 +749,32 @@ sub volume_has_feature {
>> return undef;
>> }
>> +sub rename_volume {
>> + my ($class, $scfg, $storeid, $source_volid, $target_volname) = @_;
>> +
>> + $class->parse_volname($target_volname);
>> +
>> + my (undef, $source_volname) = PVE::Storage::Plugin::parse_volume_id($source_volid);
>> + my (undef, undef, $source_vmid, $base_name) = $class->parse_volname($source_volname);
>> +
>> + eval {
>> + my $cmd = $rbd_cmd->($scfg, $storeid, 'info', $target_volname);
>> + run_rbd_command($cmd, errmsg => "exist check", quiet => 1, noerr => 1);
>> + };
>> + my $err = $@;
>> + die "target volume '${target_volname}' already exists\n"
>> + if !$err;
>> +
>> + my $cmd = $rbd_cmd->($scfg, $storeid, 'rename', $source_volname, $target_volname);
>> +
>> + run_rbd_command(
>> + $cmd,
>> + errmsg => "could not rename image '${source_volname}' to '${target_volname}'",
>> + );
>> +
>> + $base_name = $base_name ? "${base_name}/" : '';
>> +
>> + return "${storeid}:${base_name}${target_volname}";
>> +}
>> +
>> 1;
>> diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
>> index 85e2211..ac375e8 100644
>> --- a/PVE/Storage/ZFSPoolPlugin.pm
>> +++ b/PVE/Storage/ZFSPoolPlugin.pm
>> @@ -692,6 +692,7 @@ sub volume_has_feature {
>> copy => { base => 1, current => 1},
>> sparseinit => { base => 1, current => 1},
>> replicate => { base => 1, current => 1},
>> + rename => {current => 1},
>> };
>> my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
>> @@ -794,4 +795,27 @@ sub volume_import_formats {
>> return $class->volume_export_formats($scfg, $storeid, $volname, $snapshot, $base_snapshot, $with_snapshots);
>> }
>> +sub rename_volume {
>> + my ($class, $scfg, $storeid, $source_volid, $target_volname) = @_;
>> +
>> + $class->parse_volname($target_volname);
>> +
>> + my (undef, $source_volname) = PVE::Storage::Plugin::parse_volume_id($source_volid);
>> + my (undef, undef, $source_vmid, $base_name) = $class->parse_volname($source_volname);
>> +
>> + my $pool = $scfg->{pool};
>> + my $source_zfspath = "${pool}/${source_volname}";
>> + my $target_zfspath = "${pool}/${target_volname}";
>> +
>> + my $exists = 0 == run_command(['zfs', 'get', '-H', 'name', $target_zfspath],
>> + noerr => 1, quiet => 1);
>> + die "target volume '${target_volname}' already exists\n" if $exists;
>> +
>> + $class->zfs_request($scfg, 5, 'rename', ${source_zfspath}, ${target_zfspath});
>> +
>> + $base_name = $base_name ? "${base_name}/" : '';
>> +
>> + return "${storeid}:${base_name}${target_volname}";
>> +}
>> +
>> 1;
>>
* Re: [pve-devel] [PATCH v1 storage 1/9] storage: expose find_free_diskname
2021-08-03 14:20 ` Aaron Lauterer
@ 2021-08-04 7:27 ` Fabian Ebner
0 siblings, 0 replies; 22+ messages in thread
From: Fabian Ebner @ 2021-08-04 7:27 UTC (permalink / raw)
To: Aaron Lauterer, pve-devel
Am 03.08.21 um 16:20 schrieb Aaron Lauterer:
>
>
> On 8/2/21 2:56 PM, Fabian Ebner wrote:
>> Am 19.07.21 um 16:52 schrieb Aaron Lauterer:
>>> We do not expose the parameter 'add_fmt_suffix' used by the internal
>>> implementation of 'find_free_diskname'. This is something only the plugins
>>> themselves know but cannot be determined easily and reliably from an
>>> outside caller.
>>>
>>> This is why the new 'wants_fmt_suffix' method has been introduced. For
>>> most plugins the return value is very clear. For the default
>>> implementation in Plugin.pm we add another check to be on the safe side
>>> and only return true if the '$scfg->{path}' option is present.
>>> It indicates that the volume in question is an actual file which will
>>> need the suffix.
>>>
>>> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
>>> ---
>>> rfc -> v1:
>>> dropped $add_fmt_suffix parameter and added the "wants_fmt_suffix"
>>> helper method in each plugin.
>>>
>>> PVE/Storage.pm | 11 +++++++++++
>>> PVE/Storage/LVMPlugin.pm | 5 +++++
>>> PVE/Storage/Plugin.pm | 7 +++++++
>>> PVE/Storage/RBDPlugin.pm | 5 +++++
>>> PVE/Storage/ZFSPoolPlugin.pm | 5 +++++
>>> 5 files changed, 33 insertions(+)
>>>
>>> diff --git a/PVE/Storage.pm b/PVE/Storage.pm
>>> index c04b5a2..afeb2e3 100755
>>> --- a/PVE/Storage.pm
>>> +++ b/PVE/Storage.pm
>>> @@ -203,6 +203,17 @@ sub storage_can_replicate {
>>> return $plugin->storage_can_replicate($scfg, $storeid, $format);
>>> }
>>> +sub find_free_diskname {
>>> + my ($cfg, $storeid, $vmid, $fmt) = @_;
>>> +
>>> + my $scfg = storage_config($cfg, $storeid);
>>> + my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
>>> +
>>> + my $add_fmt_suffix = $plugin->wants_fmt_suffix($scfg);
>>> +
>>> + return $plugin->find_free_diskname($storeid, $scfg, $vmid, $fmt,
>>> $add_fmt_suffix);
>>
>> Maybe this should rather be find_free_volname and always return the
>> full volname (i.e. "<VMID>/vm-<VMID>-disk-<N>.<FMT>" for dir-based
>> storages) so that it can be parsed with parse_volname later?
>>
>> That would mean something like wants_vmid_prefix() is needed, but I'd
>> rather extend (and rename) the wants_fmt_suffix method instead.
>>
>> The rationale is: the value returned here is used for the
>> $target_volname parameter in the next patch, and in case it's a
>> dir-based storage, passing in an actual volname like
>> '123/vm-123-disk-0.raw' currently fails (because it actually expects
>> only the disk name).
>
> Adding a new "find_free_volname" is probably the cleanest approach. For
> the default directory implementation we can return it in the full
> "<VMID>/vm-<VMID>-disk-<N>.<FMT>" format. All other storage plugins
> would need their own implementation though and for most it will be
> basically a thin wrapper around "find_free_diskname". Alternatively it
> might work to check against $scfg->{path} to decide if we should return
> just what we get from "find_free_diskname" (LVM, RBD, ZFS,...) or add
> the VMID prefix and FMT suffix (files/directory)?
Yes, that would work too. I'd prefer the latter approach where the
default implementation simply decides based on the presence of
$scfg->{path}.
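A minimal sketch of such a default implementation in Plugin.pm, assuming the name find_free_volname and deciding purely on the presence of $scfg->{path} (path-based plugins get the '<VMID>/' prefix and format suffix, all others the plain disk name):

    sub find_free_volname {
        my ($class, $storeid, $scfg, $vmid, $fmt) = @_;

        my $path_based = defined($scfg->{path});

        # reuse the existing helper; only path-based storages want the suffix
        my $name = $class->find_free_diskname($storeid, $scfg, $vmid, $fmt, $path_based);

        return $path_based ? "${vmid}/${name}" : $name;
    }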
>
>
> What do you have in mind regarding "wants_fmt_suffix" exactly? If I am
> not mistaken, we usually need both (FMT suffix and VMID prefix) as it is
> usually needed for directory based storages. All other storages (AFAICT)
> will not have either.
>
In principle, there might be external plugins using a format suffix, but
no vmid prefix. Using only one bit of information to decide how the
volname should be is not enough then. But I suppose one can argue that
an external plugin could even have a completely different volname
scheme. Introducing find_free_volname instead would avoid that problem
altogether.
>>
>>> +}
>>> +
>>> sub storage_ids {
>>> my ($cfg) = @_;
>>> diff --git a/PVE/Storage/LVMPlugin.pm b/PVE/Storage/LVMPlugin.pm
>>> index 139d391..3e5b6c8 100644
>>> --- a/PVE/Storage/LVMPlugin.pm
>>> +++ b/PVE/Storage/LVMPlugin.pm
>>> @@ -201,6 +201,11 @@ sub type {
>>> return 'lvm';
>>> }
>>> +sub wants_fmt_suffix {
>>> + my ($class, $scfg) = @_;
>>> + return 0;
>>> +}
>>> +
>>
>> Nit: since there is no $scfg->{path}, the default implementation would
>> already return 0. Same for {RBD,ZFSPool}Plugin below, but overwriting
>> the method doesn't really hurt either of course.
>
> You're right. I guess it comes down to if we want this to be explicit or
> implicit. Other opinions?
>
>>
>>> sub plugindata {
>>> return {
>>> content => [ {images => 1, rootdir => 1}, { images => 1 }],
>>> diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
>>> index b1865cb..5c6c659 100644
>>> --- a/PVE/Storage/Plugin.pm
>>> +++ b/PVE/Storage/Plugin.pm
>>> @@ -191,6 +191,13 @@ sub default_format {
>>> return wantarray ? ($def_format, $valid_formats) : $def_format;
>>> }
>>> +sub wants_fmt_suffix {
>>> + my ($class, $scfg) = @_;
>>> + return 1 if $scfg->{path};
>>> + return 0;
>>> +}
>>> +
>>> +
>>> PVE::JSONSchema::register_format('pve-storage-path', \&verify_path);
>>> sub verify_path {
>>> my ($path, $noerr) = @_;
>>> diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
>>> index a8d1243..86ea45a 100644
>>> --- a/PVE/Storage/RBDPlugin.pm
>>> +++ b/PVE/Storage/RBDPlugin.pm
>>> @@ -273,6 +273,11 @@ sub type {
>>> return 'rbd';
>>> }
>>> +sub wants_fmt_suffix {
>>> + my ($class, $scfg) = @_;
>>> + return 0;
>>> +}
>>> +
>>> sub plugindata {
>>> return {
>>> content => [ {images => 1, rootdir => 1}, { images => 1 }],
>>> diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
>>> index c4be70f..85e2211 100644
>>> --- a/PVE/Storage/ZFSPoolPlugin.pm
>>> +++ b/PVE/Storage/ZFSPoolPlugin.pm
>>> @@ -18,6 +18,11 @@ sub type {
>>> return 'zfspool';
>>> }
>>> +sub wants_fmt_suffix {
>>> + my ($class, $scfg) = @_;
>>> + return 0;
>>> +}
>>> +
>>> sub plugindata {
>>> return {
>>> content => [ {images => 1, rootdir => 1}, {images => 1 ,
>>> rootdir => 1}],
>>>
* Re: [pve-devel] [PATCH v1 qemu-server 5/9] api: move-disk: add move to other VM
2021-08-03 7:34 ` Fabian Ebner
@ 2021-08-05 9:36 ` Aaron Lauterer
0 siblings, 0 replies; 22+ messages in thread
From: Aaron Lauterer @ 2021-08-05 9:36 UTC (permalink / raw)
To: Fabian Ebner, pve-devel
On 8/3/21 9:34 AM, Fabian Ebner wrote:
> Am 19.07.21 um 16:52 schrieb Aaron Lauterer:
>> The goal of this is to expand the move-disk API endpoint to make it
>> possible to move a disk to another VM. Previously this was only possible
>> with manual intervention either by renaming the VM disk or by manually
>> adding the disks volid to the config of the other VM.
>>
>> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
>> ---
>> rfc -> v1:
>> * add check if target guest is replicated and fail if the moved volume
>> does not support it
>> * check if source volume has a format suffix and pass it to
>> 'find_free_disk'
>> * fixed some style nits
>>
>> old dedicated api endpoint -> rfc:
>> There are some big changes here. The old [0] dedicated API endpoint is
>> gone and most of its code is now part of move_disk. Error messages have
>> been changed accordingly and sometimes enhanced by adding disk keys and
>> VMIDs where appropriate.
>>
>> Since a move to other guests should be possible for unused disks, we
>> need to check before doing a move to storage to make sure to not
>> handle unused disks.
>>
>> [0] https://lists.proxmox.com/pipermail/pve-devel/2021-April/047738.html
>>
>> PVE/API2/Qemu.pm | 229 +++++++++++++++++++++++++++++++++++++++++++++--
>> PVE/CLI/qm.pm | 2 +-
>> 2 files changed, 225 insertions(+), 6 deletions(-)
>>
>> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
>> index f2557e3..ed1179b 100644
>> --- a/PVE/API2/Qemu.pm
>> +++ b/PVE/API2/Qemu.pm
>> @@ -35,6 +35,7 @@ use PVE::API2::Qemu::Agent;
>> use PVE::VZDump::Plugin;
>> use PVE::DataCenterConfig;
>> use PVE::SSHInfo;
>> +use PVE::Replication;
>> BEGIN {
>> if (!$ENV{PVE_GENERATING_DOCS}) {
>> @@ -3263,9 +3264,11 @@ __PACKAGE__->register_method({
>> method => 'POST',
>> protected => 1,
>> proxyto => 'node',
>> - description => "Move volume to different storage.",
>> + description => "Move volume to different storage or to a different VM.",
>> permissions => {
>> - description => "You need 'VM.Config.Disk' permissions on /vms/{vmid}, and 'Datastore.AllocateSpace' permissions on the storage.",
>> + description => "You need 'VM.Config.Disk' permissions on /vms/{vmid}, " .
>> + "and 'Datastore.AllocateSpace' permissions on the storage. To move ".
>> + "a disk to another VM, you need the permissions on the target VM as well.",
>> check => [ 'and',
>> ['perm', '/vms/{vmid}', [ 'VM.Config.Disk' ]],
>> ['perm', '/storage/{storage}', [ 'Datastore.AllocateSpace' ]],
>> @@ -3276,14 +3279,19 @@ __PACKAGE__->register_method({
>> properties => {
>> node => get_standard_option('pve-node'),
>> vmid => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
>> + 'target-vmid' => get_standard_option('pve-vmid', {
>> + completion => \&PVE::QemuServer::complete_vmid,
>> + optional => 1,
>> + }),
>> disk => {
>> type => 'string',
>> description => "The disk you want to move.",
>> - enum => [PVE::QemuServer::Drive::valid_drive_names()],
>> + enum => [PVE::QemuServer::Drive::valid_drive_names_with_unused()],
>> },
>> storage => get_standard_option('pve-storage-id', {
>> description => "Target storage.",
>> completion => \&PVE::QemuServer::complete_storage,
>> + optional => 1,
>> }),
>> 'format' => {
>> type => 'string',
>> @@ -3310,6 +3318,20 @@ __PACKAGE__->register_method({
>> minimum => '0',
>> default => 'move limit from datacenter or storage config',
>> },
>> + 'target-disk' => {
>> + type => 'string',
>> + description => "The config key the disk will be moved to on the target VM " .
>> + "(for example, ide0 or scsi1).",
>> + enum => [PVE::QemuServer::Drive::valid_drive_names_with_unused()],
>> + optional => 1,
>> + },
>> + 'target-digest' => {
>> + type => 'string',
>> + description => 'Prevent changes if current configuration file of the target VM has " .
>> + "a different SHA1 digest. This can be used to prevent concurrent modifications.',
>> + maxLength => 40,
>> + optional => 1,
>> + },
>> },
>> },
>> returns => {
>> @@ -3324,14 +3346,22 @@ __PACKAGE__->register_method({
>> my $node = extract_param($param, 'node');
>> my $vmid = extract_param($param, 'vmid');
>> + my $target_vmid = extract_param($param, 'target-vmid');
>> my $digest = extract_param($param, 'digest');
>> + my $target_digest = extract_param($param, 'target-digest');
>> my $disk = extract_param($param, 'disk');
>> + my $target_disk = extract_param($param, 'target-disk');
>> my $storeid = extract_param($param, 'storage');
>> my $format = extract_param($param, 'format');
>> + die "either set storage or target-vmid, but not both\n"
>> + if $storeid && $target_vmid;
>
> If the VM -> VM variant is used, shouldn't we also test for $target_disk being set? When not passing --target-disk, it fails further below, but here we could die clean and early. Or make $target_disk default to $disk in that case?
I do like the latter idea, using the same drive key if no other is provided.
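In code that could be a single fallback right after the existing parameter checks (a sketch, only applying when the VM -> VM variant is used):

    # hypothetical: default to the source disk key if no target-disk was given
    $target_disk //= $disk if $target_vmid;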
>
>> +
>> +
>> my $storecfg = PVE::Storage::config();
>> + my $source_volid;
>> - my $updatefn = sub {
>> + my $move_updatefn = sub {
>> my $conf = PVE::QemuConfig->load_config($vmid);
>> PVE::QemuConfig->check_lock($conf);
>> @@ -3441,7 +3471,196 @@ __PACKAGE__->register_method({
>> return $rpcenv->fork_worker('qmmove', $vmid, $authuser, $realcmd);
>> };
>> - return PVE::QemuConfig->lock_config($vmid, $updatefn);
>> + my $load_and_check_reassign_configs = sub {
>> + my $vmlist = PVE::Cluster::get_vmlist()->{ids};
>> + die "Both VMs need to be on the same node ($vmlist->{$vmid}->{node}) ".
>> + "but target VM is on $vmlist->{$target_vmid}->{node}.\n"
>> + if $vmlist->{$vmid}->{node} ne $vmlist->{$target_vmid}->{node};
>
> Style nit: multiline expression with post-if
>
>> +
>> + my $source_conf = PVE::QemuConfig->load_config($vmid);
>> + PVE::QemuConfig->check_lock($source_conf);
>> + my $target_conf = PVE::QemuConfig->load_config($target_vmid);
>> + PVE::QemuConfig->check_lock($target_conf);
>> +
>> + die "Can't move disks from or to template VMs\n"
>> + if ($source_conf->{template} || $target_conf->{template});
>> +
>> + if ($digest) {
>> + eval { PVE::Tools::assert_if_modified($digest, $source_conf->{digest}) };
>> + if (my $err = $@) {
>> + die "VM ${vmid}: ${err}";
>> + }
>> + }
>> +
>> + if ($target_digest) {
>> + eval { PVE::Tools::assert_if_modified($target_digest, $target_conf->{digest}) };
>> + if (my $err = $@) {
>> + die "VM ${target_vmid}: ${err}";
>> + }
>> + }
>> +
>> + die "Disk '${disk}' for VM '$vmid' does not exist\n"
>> + if !defined($source_conf->{$disk});
>> +
>> + die "Target disk key '${target_disk}' is already in use for VM '$target_vmid'\n"
>> + if exists $target_conf->{$target_disk};
>
> Any special reason to test for defined above and exists here? Is there a problem if the target config has an explicit undef at that key?
If the target config already has the disk key present (undef or not) we bail in order to not overwrite anything that is already there (even if it's just the config key).
Of course we could argue that if only the config key is present, we could just overwrite it, but I wanted to err on the safe side.
>
>> +
>> + my $drive = PVE::QemuServer::parse_drive(
>> + $disk,
>> + $source_conf->{$disk},
>> + );
>> + $source_volid = $drive->{file};
>> +
>> + die "disk '${disk}' has no associated volume\n"
>> + if !$source_volid;
>> + die "CD drive contents can't be moved to another VM\n"
>> + if PVE::QemuServer::drive_is_cdrom($drive, 1);
>> + die "Can't move physical disk to another VM\n" if $drive->{file} =~ m|^/dev/|;
>
> Could re-use $source_volid instead of $drive->{file}
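Reusing it, the check shrinks to (a sketch):

    die "Can't move physical disk to another VM\n" if $source_volid =~ m|^/dev/|;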
>
>> + die "Can't move disk used by a snapshot to another VM\n"
>> + if PVE::QemuServer::Drive::is_volume_in_use(
>> + $storecfg,
>> + $source_conf,
>> + $disk,
>> + $source_volid,
>> + );
>> +
>> + die "Storage does not support moving of this disk to another VM\n"
>> + if !PVE::Storage::volume_has_feature(
>> + $storecfg,
>> + 'rename',
>> + $source_volid,
>> + );
>
> Style nit: two multiline post-if expressions
>
>> +
>> + die "Cannot move disk to another VM while the source VM is running\n"
>> + if PVE::QemuServer::check_running($vmid) && $disk !~ m/^unused\d+$/;
>> +
>> + if ($target_disk !~ m/^unused\d+$/ && $target_disk =~ m/^([^\d]+)\d+$/) {
>> + my $interface = $1;
>> + my $desc = PVE::JSONSchema::get_standard_option("pve-qm-${interface}");
>> + eval {
>> + PVE::JSONSchema::parse_property_string(
>> + $desc->{format},
>> + $source_conf->{$disk},
>> + )
>> + };
>> + if (my $err = $@) {
>> + die "Cannot move disk to another VM: ${err}";
>> + }
>> + }
>> +
>> + my $repl_conf = PVE::ReplicationConfig->new();
>> + my $is_replicated = $repl_conf->check_for_existing_jobs($target_vmid, 1);
>> + my ($storeid, undef) = PVE::Storage::parse_volume_id($source_volid);
>> + my $format = (PVE::Storage::parse_volname($storecfg, $source_volid))[6];
>> + if ($is_replicated && !PVE::Storage::storage_can_replicate($storecfg, $storeid, $format)) {
>> + die "Cannot move disk to a replicated VM. Storage does not support replication!\n";
>> + }
>> +
>> + return ($source_conf, $target_conf);
>> + };
>> +
>> + my $logfunc = sub {
>> + my ($msg) = @_;
>> + print STDERR "$msg\n";
>> + };
>> +
>> + my $disk_reassignfn = sub {
>> + return PVE::QemuConfig->lock_config($vmid, sub {
>> + return PVE::QemuConfig->lock_config($target_vmid, sub {
>> + my ($source_conf, $target_conf) = &$load_and_check_reassign_configs();
>> +
>> + my $drive_param = PVE::QemuServer::parse_drive(
>> + $target_disk,
>> + $source_conf->{$disk},
>> + );
>> +
>> + print "moving disk '$disk' from VM '$vmid' to '$target_vmid'\n";
>> + my ($storeid, $source_volname) = PVE::Storage::parse_volume_id($source_volid);
>> +
>> + my $fmt = undef;
>> + if ($source_volname =~ m/vm-\d+-\S+\.(raw|qcow2|vmdk)$/) {
>> + $fmt = $1;
>> + }
>
> Using parse_volname to get the format should be more complete/robust.
With the other changes that let "find_free_volname" decide itself whether to attach the fmt suffix, this is now possible :)
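So the regex match can be replaced by taking the format directly from parse_volname, same as in the replication check above (a sketch):

    my $fmt = (PVE::Storage::parse_volname($storecfg, $source_volid))[6];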
>
>> +
>> + my $target_volname = PVE::Storage::find_free_diskname(
>> + $storecfg,
>> + $storeid,
>> + $target_vmid,
>> + $fmt
>> + );
>> +
>> + my $new_volid = PVE::Storage::rename_volume(
>> + $storecfg,
>> + $source_volid,
>> + $target_volname,
>> + );
>> +
>> + $drive_param->{file} = $new_volid;
>> +
>> + delete $source_conf->{$disk};
>> + print "removing disk '${disk}' from VM '${vmid}' config\n";
>> + PVE::QemuConfig->write_config($vmid, $source_conf);
>> +
>> + my $drive_string = PVE::QemuServer::print_drive($drive_param);
>> + &$update_vm_api(
>> + {
>> + node => $node,
>> + vmid => $target_vmid,
>> + digest => $target_digest,
>> + $target_disk => $drive_string,
>> + },
>> + 1,
>> + );
>> +
>> + # remove possible replication snapshots
>> + if (PVE::Storage::volume_has_feature(
>> + $storecfg,
>> + 'replicate',
>> + $source_volid),
>> + ) {
>> + eval {
>> + PVE::Replication::prepare(
>> + $storecfg,
>> + [$new_volid],
>> + undef,
>> + 1,
>> + undef,
>> + $logfunc,
>> + )
>> + };
>> + if (my $err = $@) {
>> + print "Failed to remove replication snapshots on moved disk " .
>> + "'$target_disk'. Manual cleanup could be necessary.\n";
>> + }
>> + }
>> + });
>> + });
>> + };
>> +
>> + if ($target_vmid) {
>> + $rpcenv->check_vm_perm($authuser, $target_vmid, undef, ['VM.Config.Disk'])
>> + if $authuser ne 'root@pam';
>> +
>> + die "Moving a disk to the same VM is not possible. Did you mean to ".
>> + "move the disk to a different storage?\n"
>> + if $vmid eq $target_vmid;
>> +
>> + &$load_and_check_reassign_configs();
>> + return $rpcenv->fork_worker(
>> + 'qmmove',
>> + "${vmid}-${disk}>${target_vmid}-${target_disk}",
>> + $authuser,
>> + $disk_reassignfn
>> + );
>> + } elsif ($storeid) {
>> + die "cannot move disk '$disk', only configured disks can be moved to another storage\n"
>> + if $disk =~ m/^unused\d+$/;
>> + return PVE::QemuConfig->lock_config($vmid, $move_updatefn);
>> + } else {
>> + die "Provide either a 'storage' to move the disk to a different " .
>> + "storage or 'target-vmid' and 'target-disk' to move the disk " .
>> + "to another VM\n";
>> + }
>> }});
>> my $check_vm_disks_local = sub {
>> diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
>> index ef99b6d..a92d301 100755
>> --- a/PVE/CLI/qm.pm
>> +++ b/PVE/CLI/qm.pm
>> @@ -910,7 +910,7 @@ our $cmddef = {
>> resize => [ "PVE::API2::Qemu", 'resize_vm', ['vmid', 'disk', 'size'], { node => $nodename } ],
>> - 'move-disk' => [ "PVE::API2::Qemu", 'move_vm_disk', ['vmid', 'disk', 'storage'], { node => $nodename }, $upid_exit ],
>> + 'move-disk' => [ "PVE::API2::Qemu", 'move_vm_disk', ['vmid', 'disk', 'storage', 'target-vmid', 'target-disk'], { node => $nodename }, $upid_exit ],
>> move_disk => { alias => 'move-disk' },
>> unlink => [ "PVE::API2::Qemu", 'unlink', ['vmid'], { node => $nodename } ],
>>
* Re: [pve-devel] [PATCH v1 container 8/9] api: move-volume: add move to another container
2021-08-03 8:14 ` Fabian Ebner
@ 2021-08-05 12:45 ` Aaron Lauterer
0 siblings, 0 replies; 22+ messages in thread
From: Aaron Lauterer @ 2021-08-05 12:45 UTC (permalink / raw)
To: Fabian Ebner, pve-devel
On 8/3/21 10:14 AM, Fabian Ebner wrote:
> Am 19.07.21 um 16:52 schrieb Aaron Lauterer:
>> The goal of this is to expand the move-volume API endpoint to make it
>> possible to move a container volume / mountpoint to another container.
>>
>> Currently it works for regular mountpoints though it would be nice to be
>> able to do it for unused mountpoints as well.
>>
>> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
>> ---
>> This is mostly the code from qemu-server with some adaptions. Mainly
>> error messages and some checks.
>>
>> Previous checks have been moved to '$move_to_storage_checks'.
>>
>> rfc -> v1:
>> * add check if target guest is replicated and fail if the moved volume
>> does not support it
>> * check if source volume has a format suffix and pass it to
>> 'find_free_disk' or if the prefix is vm/subvol as those also have
>> their own meaning, see the comment in the code
>> * fixed some style nits
>>
>> src/PVE/API2/LXC.pm | 270 ++++++++++++++++++++++++++++++++++++++++----
>> src/PVE/CLI/pct.pm | 2 +-
>> 2 files changed, 250 insertions(+), 22 deletions(-)
>>
>> diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
>> index b929481..0af22c1 100644
>> --- a/src/PVE/API2/LXC.pm
>> +++ b/src/PVE/API2/LXC.pm
>> @@ -27,6 +27,8 @@ use PVE::API2::LXC::Snapshot;
>> use PVE::JSONSchema qw(get_standard_option);
>> use base qw(PVE::RESTHandler);
>> +use Data::Dumper;
>> +
>
> Left-over from debugging?
Yep...
>
>> BEGIN {
>> if (!$ENV{PVE_GENERATING_DOCS}) {
>> require PVE::HA::Env::PVE2;
>> @@ -1784,10 +1786,12 @@ __PACKAGE__->register_method({
>> method => 'POST',
>> protected => 1,
>> proxyto => 'node',
>> - description => "Move a rootfs-/mp-volume to a different storage",
>> + description => "Move a rootfs-/mp-volume to a different storage or to a different container.",
>> permissions => {
>> description => "You need 'VM.Config.Disk' permissions on /vms/{vmid}, " .
>> - "and 'Datastore.AllocateSpace' permissions on the storage.",
>> + "and 'Datastore.AllocateSpace' permissions on the storage. To move ".
>> + "a volume to another container, you need the permissions on the ".
>> + "target container as well.",
>> check =>
>> [ 'and',
>> ['perm', '/vms/{vmid}', [ 'VM.Config.Disk' ]],
>> @@ -1799,14 +1803,20 @@ __PACKAGE__->register_method({
>> properties => {
>> node => get_standard_option('pve-node'),
>> vmid => get_standard_option('pve-vmid', { completion => \&PVE::LXC::complete_ctid }),
>> + 'target-vmid' => get_standard_option('pve-vmid', {
>> + completion => \&PVE::LXC::complete_ctid,
>> + optional => 1,
>> + }),
>> volume => {
>> type => 'string',
>> + #TODO: check how to handle unused mount points as the mp parameter is not configured
>> enum => [ PVE::LXC::Config->valid_volume_keys() ],
>> description => "Volume which will be moved.",
>> },
>> storage => get_standard_option('pve-storage-id', {
>> description => "Target Storage.",
>> completion => \&PVE::Storage::complete_storage_enabled,
>> + optional => 1,
>> }),
>> delete => {
>> type => 'boolean',
>> @@ -1827,6 +1837,20 @@ __PACKAGE__->register_method({
>> minimum => '0',
>> default => 'clone limit from datacenter or storage config',
>> },
>> + 'target-mp' => {
>
> Using 'target-volume' would be more consistent.
Good point, I'll change the parameter
>
>> + type => 'string',
>> + description => "The config key the mp will be moved to.",
>> + enum => [PVE::LXC::Config->valid_volume_keys()],
>> + optional => 1,
>> + },
>> + 'target-digest' => {
>> + type => 'string',
>> + description => 'Prevent changes if current configuration file of the target " .
>> + "container has a different SHA1 digest. This can be used to prevent " .
>> + "concurrent modifications.',
>> + maxLength => 40,
>> + optional => 1,
>> + },
>> },
>> },
>> returns => {
>> @@ -1841,32 +1865,49 @@ __PACKAGE__->register_method({
>> my $vmid = extract_param($param, 'vmid');
>> + my $target_vmid = extract_param($param, 'target-vmid');
>> +
>> my $storage = extract_param($param, 'storage');
>> my $mpkey = extract_param($param, 'volume');
>> + my $target_mp = extract_param($param, 'target-mp');
>> +
>> + my $digest = extract_param($param, 'digest');
>> +
>> + my $target_digest = extract_param($param, 'target-digest');
>> +
>> my $lockname = 'disk';
>> my ($mpdata, $old_volid);
>> - PVE::LXC::Config->lock_config($vmid, sub {
>> - my $conf = PVE::LXC::Config->load_config($vmid);
>> - PVE::LXC::Config->check_lock($conf);
>> + die "either set storage or target-vmid, but not both\n"
>> + if $storage && $target_vmid;
>
> Same as for qemu: either require target_vmid *and* target_mp or have it default to the one from the source.
As in qemu, let's fall back to the source key if not explicitly specified.
>
>> - die "cannot move volumes of a running container\n" if PVE::LXC::check_running($vmid);
>> + die "cannot move volumes of a running container\n" if PVE::LXC::check_running($vmid);
>
> This check was previously inside lock_config, but now it isn't anymore.
Yeah, wanted to have it in one spot for both situations (move to other storage & move to other container) but that will be racy... will need to place it in both lock_config sections.
>
>> - $mpdata = PVE::LXC::Config->parse_volume($mpkey, $conf->{$mpkey});
>> - $old_volid = $mpdata->{volume};
>> + my $storecfg = PVE::Storage::config();
>> + my $source_volid;
>> - die "you can't move a volume with snapshots and delete the source\n"
>> - if $param->{delete} && PVE::LXC::Config->is_volume_in_use_by_snapshots($conf, $old_volid);
>> + my $move_to_storage_checks = sub {
>> + PVE::LXC::Config->lock_config($vmid, sub {
>> + my $conf = PVE::LXC::Config->load_config($vmid);
>> + PVE::LXC::Config->check_lock($conf);
>> - PVE::Tools::assert_if_modified($param->{digest}, $conf->{digest});
>> - PVE::LXC::Config->set_lock($vmid, $lockname);
>> - });
>> + $mpdata = PVE::LXC::Config->parse_volume($mpkey, $conf->{$mpkey});
>> + $old_volid = $mpdata->{volume};
>> - my $realcmd = sub {
>> + die "you can't move a volume with snapshots and delete the source\n"
>> + if $param->{delete} && PVE::LXC::Config->is_volume_in_use_by_snapshots($conf, $old_volid);
>> +
>> + PVE::Tools::assert_if_modified($digest, $conf->{digest});
>> +
>> + PVE::LXC::Config->set_lock($vmid, $lockname);
>> + });
>> + };
>> +
>> + my $storage_realcmd = sub {
>> eval {
>> PVE::Cluster::log_msg('info', $authuser, "move volume CT $vmid: move --volume $mpkey --storage $storage");
>> @@ -1936,15 +1977,202 @@ __PACKAGE__->register_method({
>> warn $@ if $@;
>> die $err if $err;
>> };
>> - my $task = eval {
>> - $rpcenv->fork_worker('move_volume', $vmid, $authuser, $realcmd);
>> +
>> + my $load_and_check_reassign_configs = sub {
>> + my $vmlist = PVE::Cluster::get_vmlist()->{ids};
>> + die "Both containers need to be on the same node ($vmlist->{$vmid}->{node}) ".
>> + "but target continer is on $vmlist->{$target_vmid}->{node}.\n"
>> + if $vmlist->{$vmid}->{node} ne $vmlist->{$target_vmid}->{node};
>> +
>> + my $source_conf = PVE::LXC::Config->load_config($vmid);
>> + PVE::LXC::Config->check_lock($source_conf);
>> + my $target_conf = PVE::LXC::Config->load_config($target_vmid);
>> + PVE::LXC::Config->check_lock($target_conf);
>> +
>> + die "Can't move volumes from or to template VMs\n"
>> + if ($source_conf->{template} || $target_conf->{template});
>> +
>> + if ($digest) {
>> + eval { PVE::Tools::assert_if_modified($digest, $source_conf->{digest}) };
>> + if (my $err = $@) {
>> + die "Container ${vmid}: ${err}";
>> + }
>> + }
>> +
>> + if ($target_digest) {
>> + eval { PVE::Tools::assert_if_modified($target_digest, $target_conf->{digest}) };
>> + if (my $err = $@) {
>> + die "Container ${target_vmid}: ${err}";
>> + }
>> + }
>> +
>> + die "volume '${mpkey}' for container '$vmid' does not exist\n"
>> + if !defined($source_conf->{$mpkey});
>> +
>> + die "Target volume key '${target_mp}' is already in use for container '$target_vmid'\n"
>> + if exists $target_conf->{$target_mp};
>
> Same as for qemu: any reason for using exists?
As in qemu: if the target key exists, I like to err on the safe side and abort, even if it has nothing configured.
>
>> +
>> + my $drive = PVE::LXC::Config->parse_volume(
>> + $mpkey,
>> + $source_conf->{$mpkey},
>> + );
>> +
>> + $source_volid = $drive->{volume};
>> +
>> + die "disk '${mpkey}' has no associated volume\n"
>> + if !$source_volid;
>> +
>> + die "Storage does not support moving of this disk to another container\n"
>> + if !PVE::Storage::volume_has_feature(
>> + $storecfg,
>> + 'rename',
>> + $source_volid,
>> + );
>> +
>> + die "Cannot move a bindmound or device mount to another container\n"
>> + if PVE::LXC::Config->classify_mountpoint($source_volid) ne "volume";
>
> The result from classify_mountpoint should be in $drive->{type} already.
>
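With that, the check becomes (a sketch):

    die "Cannot move a bindmount or device mount to another container\n"
        if $drive->{type} ne 'volume';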
>> + die "Cannot move volume to another container while the source container is running\n"
>> + if PVE::LXC::check_running($vmid) && $mpkey !~ m/^unused\d+$/;
>> +
>> + my $repl_conf = PVE::ReplicationConfig->new();
>> + my $is_replicated = $repl_conf->check_for_existing_jobs($target_vmid, 1);
>> + my ($storeid, undef) = PVE::Storage::parse_volume_id($source_volid);
>> + my $format = (PVE::Storage::parse_volname($storecfg, $source_volid))[6];
>> + if (
>> + $is_replicated
>> + && !PVE::Storage::storage_can_replicate($storecfg, $storeid, $format)
>> + ) {
>> + die "Cannot move volume to a replicated container. Storage " .
>> + "does not support replication!\n";
>> + }
>> + return ($source_conf, $target_conf);
>> + };
>> +
>> + my $logfunc = sub {
>> + my ($msg) = @_;
>> + print STDERR "$msg\n";
>> };
>> - if (my $err = $@) {
>> - eval { PVE::LXC::Config->remove_lock($vmid, $lockname) };
>> - warn $@ if $@;
>> - die $err;
>> +
>> + my $volume_reassignfn = sub {
>> + return PVE::LXC::Config->lock_config($vmid, sub {
>> + return PVE::LXC::Config->lock_config($target_vmid, sub {
>> + my ($source_conf, $target_conf) = &$load_and_check_reassign_configs();
>> +
>> + my $drive_param = PVE::LXC::Config->parse_volume(
>> + $target_mp,
>> + $source_conf->{$mpkey},
>> + );
>> +
>> + print "moving volume '$mpkey' from container '$vmid' to '$target_vmid'\n";
>> + my ($storage, $source_volname) = PVE::Storage::parse_volume_id($source_volid);
>> +
>> + # The format in find_free_diskname handles two cases for containers.
>> + # If it is set to 'subvol', the prefix will be set to it instead of 'vm',
>> + # this is needed for example for ZFS based storages.
>> + # The other thing to check for, in case of directory/file based storages,
>> + # is .raw files as we need to pass this format in that case.
>> + my $fmt = undef;
>> + if ($source_volname =~ m/(subvol|vm)-\d+-disk-\S+$/) {
>> + $fmt = $1 if $1 eq "subvol";
>> + } else {
>> + die "could not detect source volname prefix\n";
>> + }
>> + if ($source_volname =~ m/vm-\d+-disk-\S+\.(raw)/) {
>> + $fmt = $1;
>> + }
>
> Using parse_volname should be easier and future-proof.
Yep, same as in qemu: now that find_free_volname decides itself whether the fmt suffix is needed, this can be done with parse_volname. Previously it would have been problematic as it also returns "raw" for other storages like ZFS or LVM.
>
>> +
>> + my $target_volname = PVE::Storage::find_free_diskname(
>> + $storecfg,
>> + $storage,
>> + $target_vmid,
>> + $fmt
>> + );
>> +
>> + my $new_volid = PVE::Storage::rename_volume(
>> + $storecfg,
>> + $source_volid,
>> + $target_volname,
>> + );
>> +
>> + $drive_param->{volume} = $new_volid;
>> +
>> + delete $source_conf->{$mpkey};
>> + print "removing volume '${mpkey}' from container '${vmid}' config\n";
>> + PVE::LXC::Config->write_config($vmid, $source_conf);
>> +
>> + my $drive_string = PVE::LXC::Config->print_volume($target_mp, $drive_param);
>> + my $running = PVE::LXC::check_running($target_vmid);
>> + my $param = { $target_mp => $drive_string };
>> +
>> + my $err = Dumper(PVE::LXC::Config->update_pct_config(
>> + $target_vmid,
>> + $target_conf,
>> + $running,
>> + $param
>> + ));
>
> $err and Dumper left-over from debugging?
This particular implementation yes. But since "update_pct_config" would return an error hash if some hotplugging fails, I am thinking of iterating over the hash and printing the errors as warnings, should there be any.
I don't want to die / raise some exception since at this point we have already removed the config from the source vmid and need to continue in order to write it to the new vmid.
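A sketch of what that could look like, assuming update_pct_config keeps returning a per-key error hash as described:

    my $errors = PVE::LXC::Config->update_pct_config(
        $target_vmid,
        $target_conf,
        $running,
        $param,
    );

    # warn instead of dying; the volume is already gone from the source config
    for my $key (sort keys %{$errors // {}}) {
        warn "unable to apply '$key' on target container: $errors->{$key}\n";
    }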
>
>> +
>> + PVE::LXC::Config->write_config($target_vmid, $target_conf);
>> + $target_conf = PVE::LXC::Config->load_config($target_vmid);
>> +
>> + PVE::LXC::update_lxc_config($target_vmid, $target_conf);
>> + print "target container '$target_vmid' updated with '$target_mp'\n";
>> +
>> + # remove possible replication snapshots
>> + if (PVE::Storage::volume_has_feature(
>> + $storecfg,
>> + 'replicate',
>> + $source_volid),
>> + ) {
>> + eval {
>> + PVE::Replication::prepare(
>> + $storecfg,
>> + [$new_volid],
>> + undef,
>> + 1,
>> + undef,
>> + $logfunc,
>> + )
>> + };
>> + if (my $err = $@) {
>> + print "Failed to remove replication snapshots on volume ".
>> + "'$target_mp'. Manual cleanup could be necessary.\n";
>> + }
>> + }
>> + });
>> + });
>> + };
>> +
>> + if ($target_vmid) {
>> + $rpcenv->check_vm_perm($authuser, $target_vmid, undef, ['VM.Config.Disk'])
>> + if $authuser ne 'root@pam';
>> +
>> + die "Moving a volume to the same container is not possible. Did you ".
>> + "mean to move the volume to a different storage?\n"
>> + if $vmid eq $target_vmid;
>> +
>> + &$load_and_check_reassign_configs();
>> + return $rpcenv->fork_worker(
>> + 'move_volume',
>> + "${vmid}-${mpkey}>${target_vmid}-${target_mp}",
>> + $authuser,
>> + $volume_reassignfn
>> + );
>> + } elsif ($storage) {
>> + &$move_to_storage_checks();
>> + my $task = eval {
>> + $rpcenv->fork_worker('move_volume', $vmid, $authuser, $storage_realcmd);
>> + };
>> + if (my $err = $@) {
>> + eval { PVE::LXC::Config->remove_lock($vmid, $lockname) };
>> + warn $@ if $@;
>> + die $err;
>> + }
>> + return $task;
>> + } else {
>> + die "Provide either a 'storage' to move the mount point to a ".
>> + "different storage or 'target-vmid' and 'target-mp' to move ".
>> + "the mount point to another container\n";
>> }
>> - return $task;
>> }});
>> __PACKAGE__->register_method({
>> diff --git a/src/PVE/CLI/pct.pm b/src/PVE/CLI/pct.pm
>> index 7ac5a55..5f2bf04 100755
>> --- a/src/PVE/CLI/pct.pm
>> +++ b/src/PVE/CLI/pct.pm
>> @@ -849,7 +849,7 @@ our $cmddef = {
>> clone => [ "PVE::API2::LXC", 'clone_vm', ['vmid', 'newid'], { node => $nodename }, $upid_exit ],
>> migrate => [ "PVE::API2::LXC", 'migrate_vm', ['vmid', 'target'], { node => $nodename }, $upid_exit],
>> - 'move-volume' => [ "PVE::API2::LXC", 'move_volume', ['vmid', 'volume', 'storage'], { node => $nodename }, $upid_exit ],
>> + 'move-volume' => [ "PVE::API2::LXC", 'move_volume', ['vmid', 'volume', 'storage', 'target-vmid', 'target-mp'], { node => $nodename }, $upid_exit ],
>> move_volume => { alias => 'move-disk' },
>> snapshot => [ "PVE::API2::LXC::Snapshot", 'snapshot', ['vmid', 'snapname'], { node => $nodename } , $upid_exit ],
>>
end of thread
Thread overview: 22+ messages
2021-07-19 14:52 [pve-devel] [PATCH v1 storage qemu-server container 0/9] move disk or volume to other guests Aaron Lauterer
2021-07-19 14:52 ` [pve-devel] [PATCH v1 storage 1/9] storage: expose find_free_diskname Aaron Lauterer
2021-08-02 12:56 ` Fabian Ebner
2021-08-03 14:20 ` Aaron Lauterer
2021-08-04 7:27 ` Fabian Ebner
2021-07-19 14:52 ` [pve-devel] [PATCH v1 storage 2/9] add disk rename feature Aaron Lauterer
2021-08-02 12:57 ` Fabian Ebner
2021-08-03 14:22 ` Aaron Lauterer
2021-07-19 14:52 ` [pve-devel] [PATCH v1 qemu-server 3/9] cli: qm: change move_disk to move-disk Aaron Lauterer
2021-07-19 14:52 ` [pve-devel] [PATCH v1 qemu-server 4/9] Drive: add valid_drive_names_with_unused Aaron Lauterer
2021-08-03 7:35 ` Fabian Ebner
2021-07-19 14:52 ` [pve-devel] [PATCH v1 qemu-server 5/9] api: move-disk: add move to other VM Aaron Lauterer
2021-08-03 7:34 ` Fabian Ebner
2021-08-05 9:36 ` Aaron Lauterer
2021-07-19 14:52 ` [pve-devel] [PATCH v1 qemu-server 6/9] api: move-disk: cleanup very long lines Aaron Lauterer
2021-08-03 7:37 ` Fabian Ebner
2021-07-19 14:52 ` [pve-devel] [PATCH v1 container 7/9] cli: pct: change move_volume to move-volume Aaron Lauterer
2021-07-19 14:52 ` [pve-devel] [PATCH v1 container 8/9] api: move-volume: add move to another container Aaron Lauterer
2021-08-03 8:14 ` Fabian Ebner
2021-08-05 12:45 ` Aaron Lauterer
2021-07-19 14:52 ` [pve-devel] [PATCH v1 container 9/9] api: move-volume: cleanup very long lines Aaron Lauterer
2021-08-03 8:27 ` [pve-devel] [PATCH v1 storage qemu-server container 0/9] move disk or volume to other guests Fabian Ebner