public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [PATCH qemu-server, container 0/2] avoid migrating disk images multiple times
@ 2023-05-02 13:17 Aaron Lauterer
  2023-05-02 13:17 ` [pve-devel] [PATCH qemu-server 1/2] migration: " Aaron Lauterer
  2023-05-02 13:17 ` [pve-devel] [PATCH container 2/2] migration: avoid migrating volume " Aaron Lauterer
  0 siblings, 2 replies; 10+ messages in thread
From: Aaron Lauterer @ 2023-05-02 13:17 UTC (permalink / raw)
  To: pve-devel

This small patch series adds additional checks for VMs and CTs to avoid
migrating local disk images multiple times.

The approach for referencing them is similar to the check that decides
whether a disk should be added as unusedX [0].

For example, if there are multiple storage definitions pointing to the
same actual storage, we can see the same image multiple times when
scanning the storages. These duplicates are ignored.

If we notice that a disk image is referenced multiple times in the
guest config, we fail the migration with a warning.
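
To illustrate the core idea in isolation, here is a self-contained toy
sketch (not the actual patch code; the storage names and path layout
are made up):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # two storage definitions that point to the same directory
    my $storage_base = { local => '/var/lib/vz', alias => '/var/lib/vz' };

    # resolve a "storage:volname" volid to its absolute path
    sub volid_to_path {
        my ($volid) = @_;
        my ($sid, $volname) = split(/:/, $volid, 2);
        return "$storage_base->{$sid}/images/$volname";
    }

    my $seen_path = {}; # path -> volid it was first seen under
    for my $volid (qw(local:100/vm-100-disk-0.raw alias:100/vm-100-disk-0.raw)) {
        my $path = volid_to_path($volid);
        if (defined($seen_path->{$path})) {
            warn "ignoring '$volid' - same path as '$seen_path->{$path}'\n";
            next;
        }
        $seen_path->{$path} = $volid;
    }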


One thing I am not really sure about is the case where a disk image is
configured both as a regular disk and a second time as unusedX.

At least the QEMU part won't let us remove it, as it is still in use.

But this is still a somewhat hypothetical example: in my tests, once we
detach a doubly configured disk image (e.g. scsi0 and scsi1), it is not
added as unused0 but simply disappears from the config.
So there should be no reasonable way to end up with an unused disk
entry pointing to the same image as a configured one.


[0] https://git.proxmox.com/?p=qemu-server.git;a=blob;f=PVE/QemuServer.pm;h=c1d0fd2d06a35bd8d996a59b44eb60345165f1b6;hb=HEAD#l6910

qemu-server: Aaron Lauterer (1):
  migration: avoid migrating disk images multiple times

 PVE/QemuMigrate.pm | 33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)


pve-container: Aaron Lauterer (1):
  migration: avoid migrating volume images multiple times

 src/PVE/LXC/Migrate.pm | 31 +++++++++++++++++++++++++++++++
 1 file changed, 31 insertions(+)

-- 
2.30.2

* [pve-devel] [PATCH qemu-server 1/2] migration: avoid migrating disk images multiple times
  2023-05-02 13:17 [pve-devel] [PATCH qemu-server, container 0/2] avoid migrating disk images multiple times Aaron Lauterer
@ 2023-05-02 13:17 ` Aaron Lauterer
  2023-05-03  9:17   ` Fabian Grünbichler
  2023-05-09  7:34   ` Fiona Ebner
  2023-05-02 13:17 ` [pve-devel] [PATCH container 2/2] migration: avoid migrating volume " Aaron Lauterer
  1 sibling, 2 replies; 10+ messages in thread
From: Aaron Lauterer @ 2023-05-02 13:17 UTC (permalink / raw)
  To: pve-devel

Scan the VM config and store the volid and full path of each referenced
disk image. Do the same when we scan each storage. This lets us handle
two scenarios:

* Multiple storage configurations point to the same storage. As a
  result, when scanning the storages, we find the same disk image
  multiple times.
  -> We ignore the duplicates.

* A VM has multiple disks configured that point to the same disk image.
  -> We fail with a warning that two disk configs reference the same
  disk image.

Without these checks, it was possible to multiply the number of disk
images with each migration (with local disks) if at least one other
storage configuration pointed to the same place.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
 PVE/QemuMigrate.pm | 33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 09cc1d8..bd3ea00 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -301,6 +301,10 @@ sub scan_local_volumes {
 	my $other_errors = [];
 	my $abort = 0;
 
+	# store and map already referenced absolute paths and volids
+	my $referencedpath = {}; # path -> volid
+	my $referenced = {}; # volid -> config key (e.g. scsi0)
+
 	my $log_error = sub {
 	    my ($msg, $volid) = @_;
 
@@ -312,6 +316,26 @@ sub scan_local_volumes {
 	    $abort = 1;
 	};
 
+	# reference disks in config first
+	PVE::QemuConfig->foreach_volume_full($conf, { include_unused => 1 }, sub {
+	    my ($key, $drive) = @_;
+	    my $volid = $drive->{file};
+	    return if PVE::QemuServer::Drive::drive_is_cdrom($drive);
+	    return if !$volid || $volid =~ m|^/|;
+
+	    my $path = PVE::Storage::path($storecfg, $volid);
+	    if (defined $referencedpath->{$path}) {
+		my $rkey = $referenced->{$referencedpath->{$path}};
+		&$log_error(
+		    "cannot migrate local image '$volid': '$key' and '$rkey' ".
+		    "reference the same volume. (check guest and storage configuration?)\n"
+		);
+		return;
+	    }
+	    $referencedpath->{$path} = $volid;
+	    $referenced->{$volid} = $key;
+	});
+
 	my @sids = PVE::Storage::storage_ids($storecfg);
 	foreach my $storeid (@sids) {
 	    my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
@@ -342,6 +366,15 @@ sub scan_local_volumes {
 	    PVE::Storage::foreach_volid($dl, sub {
 		my ($volid, $sid, $volinfo) = @_;
 
+		# check if image is already referenced
+		my $path = PVE::Storage::path($storecfg, $volid);
+		if (defined $referencedpath->{$path} && !$referenced->{$volid}) {
+		    $self->log('info', "ignoring '$volid' - already referenced by other storage '$referencedpath->{$path}'\n");
+		    return;
+		}
+		$referencedpath->{$path} = $volid;
+		$referenced->{$volid} = 1;
+
 		$local_volumes->{$volid}->{ref} = 'storage';
 		$local_volumes->{$volid}->{size} = $volinfo->{size};
 		$local_volumes->{$volid}->{targetsid} = $targetsid;
-- 
2.30.2

* [pve-devel] [PATCH container 2/2] migration: avoid migrating volume images multiple times
  2023-05-02 13:17 [pve-devel] [PATCH qemu-server, container 0/2] avoid migrating disk images multiple times Aaron Lauterer
  2023-05-02 13:17 ` [pve-devel] [PATCH qemu-server 1/2] migration: " Aaron Lauterer
@ 2023-05-02 13:17 ` Aaron Lauterer
  2023-05-03  9:07   ` Fabian Grünbichler
  1 sibling, 1 reply; 10+ messages in thread
From: Aaron Lauterer @ 2023-05-02 13:17 UTC (permalink / raw)
  To: pve-devel

Scan the container config and store the volid and full path of each
referenced volume. Do the same when we scan each storage. This lets us
handle two scenarios:

* Multiple storage configurations point to the same storage. As a
  result, when scanning the storages, we find the same volume multiple
  times.
  -> We ignore the duplicates.

* A container has multiple mount points configured that point to the
  same volume.
  -> We fail with a warning that two mpX configs reference the same
  volume.

Without these checks, it was possible to multiply the number of volumes
with each migration (with local volumes) if at least one other storage
configuration pointed to the same place.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
 src/PVE/LXC/Migrate.pm | 31 +++++++++++++++++++++++++++++++
 1 file changed, 31 insertions(+)

diff --git a/src/PVE/LXC/Migrate.pm b/src/PVE/LXC/Migrate.pm
index ccf5157..47cb94b 100644
--- a/src/PVE/LXC/Migrate.pm
+++ b/src/PVE/LXC/Migrate.pm
@@ -256,6 +256,28 @@ sub phase1 {
 	&$log_error($@, $volid) if $@;
     };
 
+    # store and map already referenced absolute paths and volids
+    my $referencedpath = {}; # path -> volid
+    my $referenced = {}; # volid -> config key (e.g. mp0)
+
+    # reference volumes in config first
+    PVE::LXC::Config->foreach_volume_full($conf, { include_unused => 1 }, sub {
+	my ($key, $volume) = @_;
+	my $volid = $volume->{volume};
+	my $path = PVE::Storage::path($self->{storecfg}, $volid);
+	if (defined $referencedpath->{$path}) {
+	    my $rkey = $referenced->{$referencedpath->{$path}};
+	    &$log_error(
+		"'$key' and '$rkey' reference the same volume. ".
+		"(check guest and storage configuration?)\n",
+		$volid
+	    );
+	    return;
+	}
+	$referencedpath->{$path} = $volid;
+	$referenced->{$volid} = $key;
+    });
+
     # first unused / lost volumes owned by this container
     my @sids = PVE::Storage::storage_ids($self->{storecfg});
     foreach my $storeid (@sids) {
@@ -280,6 +302,15 @@ sub phase1 {
 	PVE::Storage::foreach_volid($dl, sub {
 	    my ($volid, $sid, $volname) = @_;
 
+	    # check if volume is already referenced
+	    my $path = PVE::Storage::path($self->{storecfg}, $volid);
+	    if (defined $referencedpath->{$path} && !$referenced->{$volid}) {
+		$self->log('info', "ignoring '$volid' - already referenced by other storage '$referencedpath->{$path}'\n");
+		next;
+	    }
+	    $referencedpath->{$path} = $volid;
+	    $referenced->{$volid} = 1;
+
 	    $volhash->{$volid}->{ref} = 'storage';
 	    $volhash->{$volid}->{targetsid} = $targetsid;
 	});
-- 
2.30.2

* Re: [pve-devel] [PATCH container 2/2] migration: avoid migrating volume images multiple times
  2023-05-02 13:17 ` [pve-devel] [PATCH container 2/2] migration: avoid migrating volume " Aaron Lauterer
@ 2023-05-03  9:07   ` Fabian Grünbichler
  0 siblings, 0 replies; 10+ messages in thread
From: Fabian Grünbichler @ 2023-05-03  9:07 UTC (permalink / raw)
  To: Proxmox VE development discussion

On May 2, 2023 3:17 pm, Aaron Lauterer wrote:
> Scan the container config and store the volid and full path of each
> referenced volume. Do the same when we scan each storage. This lets us
> handle two scenarios:
> 
> * Multiple storage configurations point to the same storage. As a
>   result, when scanning the storages, we find the same volume multiple
>   times.
>   -> We ignore the duplicates.
> 
> * A container has multiple mount points configured that point to the
>   same volume.
>   -> We fail with a warning that two mpX configs reference the same
>   volume.

this one really doesn't make much sense for containers (if the volume
contains the default ext4, MMP means the container can't be started?) -
it could be handled as well, though (just migrate the volume once, and
replace the references with the target volid?)
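
a rough sketch of that replacement step (untested; it assumes the volid
is the leading element of the raw mount point value, and $old_volid /
$new_volid are placeholders for the migrated volume):

    # after migrating the aliased volume once, rewrite every reference
    PVE::LXC::Config->foreach_volume_full($conf, { include_unused => 1 }, sub {
        my ($key, $mountpoint) = @_;
        return if $mountpoint->{volume} ne $old_volid;
        # the volid is the first element of the option string, so a
        # prefix substitution on the raw config value should be enough
        $conf->{$key} =~ s/^\Q$old_volid\E/$new_volid/;
    });
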
> 
> Without these checks, it was possible to multiply the number of volumes
> with each migration (with local volumes) if at least one other storage
> configuration pointed to the same place.
> 
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
>  src/PVE/LXC/Migrate.pm | 31 +++++++++++++++++++++++++++++++
>  1 file changed, 31 insertions(+)
> 
> diff --git a/src/PVE/LXC/Migrate.pm b/src/PVE/LXC/Migrate.pm
> index ccf5157..47cb94b 100644
> --- a/src/PVE/LXC/Migrate.pm
> +++ b/src/PVE/LXC/Migrate.pm
> @@ -256,6 +256,28 @@ sub phase1 {
>  	&$log_error($@, $volid) if $@;
>      };
>  
> +    # store and map already referenced absolute paths and volids
> +    my $referencedpath = {}; # path -> volid

wrong name - this contains both referenced and unreferenced paths (and
also, long variable names would benefit from _ for separation of words)

> +    my $referenced = {}; # volid -> config key (e.g. mp0)

also wrong - contains both referenced and unreferenced volids, and also
doesn't always map to the config key ;)

but before diving further into nit territory - we already have $volhash
that is filled for all volumes, wouldn't it make more sense to re-use
it? the order is inverse compared to your patch here, but that doesn't
really matter (and it would avoid iterating over the config twice).
compared to your patch, it also covers snapshots (everything in
snapshots should also be in the current config either still used or as
unusedX, but who knows).

there are three things we'd need to do:
- in $test_volid, if 'ref' is already config, error out (same volid
  referenced in current config multiple times)
- add the path of each volid to the volhash while filling it
- after $volhash is filled, iterate over it and check for aliased paths
  (and depending on where the volid comes from, warn and ignore or error
  out)

or am I missing something?

alternatively, instead of putting the path into volhash, it could be
tracked separately (e.g. in a $path_to_volid hash) and checked directly
while iterating instead of at the end.
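
a rough sketch of that last variant (untested; $volid and $path would
come from the existing iteration via PVE::Storage::path, the names are
just for illustration):

    my $path_to_volid = {}; # absolute path -> volid it was first seen for

    my $check_aliased = sub {
        my ($volid, $path, $from_config) = @_;
        my $other = $path_to_volid->{$path};
        if (defined($other) && $other ne $volid) {
            # referenced twice in the config -> hard error,
            # found again via the storage scan -> caller warns and skips
            die "'$volid' and '$other' alias the same path '$path'\n"
                if $from_config;
            return 0;
        }
        $path_to_volid->{$path} = $volid;
        return 1;
    };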

> +
> +    # reference volumes in config first
> +    PVE::LXC::Config->foreach_volume_full($conf, { include_unused => 1 }, sub {
> +	my ($key, $volume) = @_;
> +	my $volid = $volume->{volume};
> +	my $path = PVE::Storage::path($self->{storecfg}, $volid);
> +	if (defined $referencedpath->{$path}) {
> +	    my $rkey = $referenced->{$referencedpath->{$path}};
> +	    &$log_error(
> +		"'$key' and '$rkey' reference the same volume. ".
> +		"(check guest and storage configuration?)\n",
> +		$volid
> +	    );
> +	    return;
> +	}
> +	$referencedpath->{$path} = $volid;
> +	$referenced->{$volid} = $key;
> +    });
> +
>      # first unused / lost volumes owned by this container
>      my @sids = PVE::Storage::storage_ids($self->{storecfg});
>      foreach my $storeid (@sids) {
> @@ -280,6 +302,15 @@ sub phase1 {
>  	PVE::Storage::foreach_volid($dl, sub {
>  	    my ($volid, $sid, $volname) = @_;
>  
> +	    # check if volume is already referenced
> +	    my $path = PVE::Storage::path($self->{storecfg}, $volid);
> +	    if (defined $referencedpath->{$path} && !$referenced->{$volid}) {
> +		$self->log('info', "ignoring '$volid' - already referenced by other storage '$referencedpath->{$path}'\n");
> +		next;
> +	    }
> +	    $referencedpath->{$path} = $volid;
> +	    $referenced->{$volid} = 1;
> +
>  	    $volhash->{$volid}->{ref} = 'storage';
>  	    $volhash->{$volid}->{targetsid} = $targetsid;
>  	});
> -- 
> 2.30.2

* Re: [pve-devel] [PATCH qemu-server 1/2] migration: avoid migrating disk images multiple times
  2023-05-02 13:17 ` [pve-devel] [PATCH qemu-server 1/2] migration: " Aaron Lauterer
@ 2023-05-03  9:17   ` Fabian Grünbichler
  2023-05-09  7:34   ` Fiona Ebner
  1 sibling, 0 replies; 10+ messages in thread
From: Fabian Grünbichler @ 2023-05-03  9:17 UTC (permalink / raw)
  To: Proxmox VE development discussion

On May 2, 2023 3:17 pm, Aaron Lauterer wrote:
> Scan the VM config and store the volid and full path of each referenced
> disk image. Do the same when we scan each storage. This lets us handle
> two scenarios:
> 
> * Multiple storage configurations point to the same storage. As a
>   result, when scanning the storages, we find the same disk image
>   multiple times.
>   -> We ignore the duplicates.
> 
> * A VM has multiple disks configured that point to the same disk image.
>   -> We fail with a warning that two disk configs reference the same
>   disk image.

this is not a problem for VMs, and can actually be a valid case in a
test lab (e.g., testing multipath). I am not sure whether that means we
want to handle it properly in live migration though (or whether there
is a way to do so? I guess since starting the VM with both disks
pointing at the same volume works, the same would be true for having two
such disks on the target side, with an NBD export + drive mirror on
each?). for offline migration the same solution as for containers would
apply - migrate volume once, update volid for all references.
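
a rough sketch of that offline volid update (untested; it assumes a
print_drive-style helper to serialize the drive hash back into the
config value):

    # after migrating the aliased volume once, rewrite every reference
    PVE::QemuConfig->foreach_volume_full($conf, { include_unused => 1 }, sub {
        my ($key, $drive) = @_;
        return if $drive->{file} ne $old_volid;
        $drive->{file} = $new_volid;
        # print_drive-style serialization assumed here
        $conf->{$key} = PVE::QemuServer::print_drive($drive);
    });
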

> Without these checks, it was possible to multiply the number of disk
> images with each migration (with local disks) if at least one other
> storage configuration pointed to the same place.
> 
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
>  PVE/QemuMigrate.pm | 33 +++++++++++++++++++++++++++++++++
>  1 file changed, 33 insertions(+)
> 
> diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
> index 09cc1d8..bd3ea00 100644
> --- a/PVE/QemuMigrate.pm
> +++ b/PVE/QemuMigrate.pm
> @@ -301,6 +301,10 @@ sub scan_local_volumes {
>  	my $other_errors = [];
>  	my $abort = 0;
>  
> +	# store and map already referenced absolute paths and volids
> +	my $referencedpath = {}; # path -> volid
> +	my $referenced = {}; # volid -> config key (e.g. scsi0)
> +

the same comments as for pve-container apply here as well AFAICT.

>  	my $log_error = sub {
>  	    my ($msg, $volid) = @_;
>  
> @@ -312,6 +316,26 @@ sub scan_local_volumes {
>  	    $abort = 1;
>  	};
>  
> +	# reference disks in config first
> +	PVE::QemuConfig->foreach_volume_full($conf, { include_unused => 1 }, sub {
> +	    my ($key, $drive) = @_;
> +	    my $volid = $drive->{file};
> +	    return if PVE::QemuServer::Drive::drive_is_cdrom($drive);
> +	    return if !$volid || $volid =~ m|^/|;
> +
> +	    my $path = PVE::Storage::path($storecfg, $volid);
> +	    if (defined $referencedpath->{$path}) {
> +		my $rkey = $referenced->{$referencedpath->{$path}};
> +		&$log_error(
> +		    "cannot migrate local image '$volid': '$key' and '$rkey' ".
> +		    "reference the same volume. (check guest and storage configuration?)\n"
> +		);
> +		return;
> +	    }
> +	    $referencedpath->{$path} = $volid;
> +	    $referenced->{$volid} = $key;
> +	});
> +
>  	my @sids = PVE::Storage::storage_ids($storecfg);
>  	foreach my $storeid (@sids) {
>  	    my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
> @@ -342,6 +366,15 @@ sub scan_local_volumes {
>  	    PVE::Storage::foreach_volid($dl, sub {
>  		my ($volid, $sid, $volinfo) = @_;
>  
> +		# check if image is already referenced
> +		my $path = PVE::Storage::path($storecfg, $volid);
> +		if (defined $referencedpath->{$path} && !$referenced->{$volid}) {
> +		    $self->log('info', "ignoring '$volid' - already referenced by other storage '$referencedpath->{$path}'\n");
> +		    return;
> +		}
> +		$referencedpath->{$path} = $volid;
> +		$referenced->{$volid} = 1;
> +
>  		$local_volumes->{$volid}->{ref} = 'storage';
>  		$local_volumes->{$volid}->{size} = $volinfo->{size};
>  		$local_volumes->{$volid}->{targetsid} = $targetsid;
> -- 
> 2.30.2

* Re: [pve-devel] [PATCH qemu-server 1/2] migration: avoid migrating disk images multiple times
  2023-05-02 13:17 ` [pve-devel] [PATCH qemu-server 1/2] migration: " Aaron Lauterer
  2023-05-03  9:17   ` Fabian Grünbichler
@ 2023-05-09  7:34   ` Fiona Ebner
  2023-05-09 12:55     ` Aaron Lauterer
  1 sibling, 1 reply; 10+ messages in thread
From: Fiona Ebner @ 2023-05-09  7:34 UTC (permalink / raw)
  To: pve-devel, Aaron Lauterer

Am 02.05.23 um 15:17 schrieb Aaron Lauterer:
> Scan the VM config and store the volid and full path of each referenced
> disk image. Do the same when we scan each storage. This lets us handle
> two scenarios:
> 
> * Multiple storage configurations point to the same storage. As a
>   result, when scanning the storages, we find the same disk image
>   multiple times.
>   -> We ignore the duplicates.
> 

Having the same underlying storage configured twice with overlapping
content types is a configuration error (except if you use different
'content-dirs', I guess). We can't make guarantees for locking, which
e.g. leads to races for allocation, and it can lead to left-over
references, etc. Rather than trying to handle this here, I'd prefer a
warning about the misconfiguration somewhere (maybe during storage.cfg
parsing?) and/or erroring out when attempting to create such a
configuration. Adding something to the docs also makes sense if there
isn't anything yet.
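
For directory-based storages, such a parse-time warning could look
roughly like this (a hypothetical sketch, not an existing pve-storage
hook; the content-type overlap check is left out for brevity):

    use Cwd qw(realpath);

    # warn once per pair of storage definitions whose paths resolve to
    # the same directory
    sub warn_aliased_storages {
        my ($storage_cfg) = @_;
        my %by_realpath;
        for my $sid (sort keys %{$storage_cfg->{ids}}) {
            my $scfg = $storage_cfg->{ids}->{$sid};
            next if !$scfg->{path}; # only dir-like storages have a path
            my $real = realpath($scfg->{path}) // $scfg->{path};
            if (my $other = $by_realpath{$real}) {
                warn "storages '$sid' and '$other' point to '$real'\n";
                next;
            }
            $by_realpath{$real} = $sid;
        }
    }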

* Re: [pve-devel] [PATCH qemu-server 1/2] migration: avoid migrating disk images multiple times
  2023-05-09  7:34   ` Fiona Ebner
@ 2023-05-09 12:55     ` Aaron Lauterer
  2023-05-09 14:43       ` Fiona Ebner
  0 siblings, 1 reply; 10+ messages in thread
From: Aaron Lauterer @ 2023-05-09 12:55 UTC (permalink / raw)
  To: Fiona Ebner, pve-devel, Fabian Grünbichler



On 5/9/23 09:34, Fiona Ebner wrote:
> Am 02.05.23 um 15:17 schrieb Aaron Lauterer:
>> Scan the VM config and store the volid and full path of each referenced
>> disk image. Do the same when we scan each storage. This lets us handle
>> two scenarios:
>>
>> * Multiple storage configurations point to the same storage. As a
>>   result, when scanning the storages, we find the same disk image
>>   multiple times.
>>   -> We ignore the duplicates.
>>
> 
> Having the same underlying storage configured twice with overlapping
> content types is a configuration error (except if you use different
> 'content-dirs', I guess). We can't make guarantees for locking, which
> e.g. leads to races for allocation, and it can lead to left-over
> references, etc. Rather than trying to handle this here, I'd prefer a
> warning about the misconfiguration somewhere (maybe during storage.cfg
> parsing?) and/or erroring out when attempting to create such a
> configuration. Adding something to the docs also makes sense if there
> isn't anything yet.

After having a discussion with @Fabian offline (I hope I don't forget to
mention anything):

Yes, having two storage configurations pointing to the same location should
not happen as far as we know. For most situations where one might want to do
that, there are other, better options to separate things on the storage
level. For example:
* ZFS with different volblock sizes -> use a different base dataset for each
  storage
* RBD with and without KRBD -> use RBD namespaces to separate them

But it is hard to detect this reliably on the storage layer. For example,
with an RBD storage I might add different monitors; do they point to the
same cluster? There is no way to tell unless we open a connection and gather
the Ceph FSID of that cluster.
For other storage types, it would also be possible to run into similar
problems where we cannot really tell, from the storage definition alone,
whether they point to the same location or not.
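
For the RBD case, such a check would have to actually talk to the cluster.
A hypothetical sketch (assuming a reachable monitor and a working client
keyring; 'monhost' is the key used in RBD storage definitions):

    use PVE::Tools;

    # resolve the FSID of the cluster a monitor list points at
    sub rbd_cluster_fsid {
        my ($mon_host) = @_;
        my $fsid = '';
        PVE::Tools::run_command(
            ['ceph', '-m', $mon_host, 'fsid'],
            outfunc => sub { $fsid .= shift },
        );
        return $fsid;
    }

    # two RBD storages would alias each other if the FSIDs and the
    # pool/namespace settings match, e.g.:
    # rbd_cluster_fsid($scfg_a->{monhost}) eq rbd_cluster_fsid($scfg_b->{monhost})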


Another approach that could make migration handle such situations better,
but which should only target PVE 8:

* Don't scan all storages; only look at disk images that are referenced in
  the config. With this, we remove most situations where aliases would
  happen, and a migration is less likely to fail just because some storage
  is not online.
* If we detect an aliased and referenced image, fail the migration with the
  hint that this setup should get fixed.

But since we would then fail the migration instead of potentially creating
duplicate images on the target node, this is a rather breaking change -> PVE 8.

I hope I summed it up correctly.

* Re: [pve-devel] [PATCH qemu-server 1/2] migration: avoid migrating disk images multiple times
  2023-05-09 12:55     ` Aaron Lauterer
@ 2023-05-09 14:43       ` Fiona Ebner
  2023-05-10  9:57         ` Aaron Lauterer
  0 siblings, 1 reply; 10+ messages in thread
From: Fiona Ebner @ 2023-05-09 14:43 UTC (permalink / raw)
  To: Aaron Lauterer, pve-devel, Fabian Grünbichler

Am 09.05.23 um 14:55 schrieb Aaron Lauterer:
> 
> * Don't scan all storages; only look at disk images that are referenced
>   in the config. With this, we remove most situations where aliases
>   would happen, and a migration is less likely to fail just because
>   some storage is not online.

I do prefer this approach as it also fixes issues like "unavailable, but
enabled storage that's not even involved fails migration". And it's also
more intuitive.

But if we really do that, we need to be careful: In particular, we need
to explicitly pick up volumes in the pending section (currently, that
only happens via the implicit scanning). There might be similar
issues in other situations, but none that I'm aware of.
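
Explicitly picking up pending volumes could look something like this
(untested sketch; it assumes the pending section can be walked with the
same foreach_volume helper as the main config, and the 'pending' ref
marker is only illustrative):

    # also walk the pending section for referenced local volumes
    if ($conf->{pending} && %{$conf->{pending}}) {
        PVE::QemuConfig->foreach_volume($conf->{pending}, sub {
            my ($key, $drive) = @_;
            my $volid = $drive->{file};
            return if !$volid || $volid =~ m|^/|;
            $local_volumes->{$volid}->{ref} = 'pending';
        });
    }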

* Re: [pve-devel] [PATCH qemu-server 1/2] migration: avoid migrating disk images multiple times
  2023-05-09 14:43       ` Fiona Ebner
@ 2023-05-10  9:57         ` Aaron Lauterer
  2023-05-10 11:23           ` Fiona Ebner
  0 siblings, 1 reply; 10+ messages in thread
From: Aaron Lauterer @ 2023-05-10  9:57 UTC (permalink / raw)
  To: Fiona Ebner, pve-devel, Fabian Grünbichler



On 5/9/23 16:43, Fiona Ebner wrote:
> Am 09.05.23 um 14:55 schrieb Aaron Lauterer:
>>
>> * Don't scan all storages; only look at disk images that are referenced
>>   in the config. With this, we remove most situations where aliases
>>   would happen, and a migration is less likely to fail just because
>>   some storage is not online.
> 
> I do prefer this approach as it also fixes issues like "unavailable, but
> enabled storage that's not even involved fails migration". And it's also
> more intuitive.
> 
> But if we really do that, we need to be careful: In particular, we need
> to explicitly pick up volumes in the pending section (currently, that
> only happens via the implicit scanning). There might be similar
> issues in other situations, but none that I'm aware of.

Thanks for the hint. I'll look into it. What would such a scenario look
like? Adding a new disk, or removing one, while hot-plugging is not working?

* Re: [pve-devel] [PATCH qemu-server 1/2] migration: avoid migrating disk images multiple times
  2023-05-10  9:57         ` Aaron Lauterer
@ 2023-05-10 11:23           ` Fiona Ebner
  0 siblings, 0 replies; 10+ messages in thread
From: Fiona Ebner @ 2023-05-10 11:23 UTC (permalink / raw)
  To: Aaron Lauterer, pve-devel, Fabian Grünbichler

Am 10.05.23 um 11:57 schrieb Aaron Lauterer:
> 
> 
> On 5/9/23 16:43, Fiona Ebner wrote:
>> Am 09.05.23 um 14:55 schrieb Aaron Lauterer:
>>>
>>> * Don't scan all storages; only look at disk images that are referenced
>>>   in the config. With this, we remove most situations where aliases
>>>   would happen, and a migration is less likely to fail just because
>>>   some storage is not online.
>>
>> I do prefer this approach as it also fixes issues like "unavailable, but
>> enabled storage that's not even involved fails migration". And it's also
>> more intuitive.
>>
>> But if we really do that, we need to be careful: In particular, we need
>> to explicitly pick up volumes in the pending section (currently, that
>> only happens via the implicit scanning). There might be similar
>> issues in other situations, but none that I'm aware of.
> 
> Thanks for the hint. I'll look into it. What would such a scenario look
> like? Adding a new disk, or removing one, while hot-plugging is not working?

Yes, disk hotplug can also be disabled, and it doesn't work for all
controller types.

Thread overview: 10 messages
2023-05-02 13:17 [pve-devel] [PATCH qemu-server, container 0/2] avoid migrating disk images multiple times Aaron Lauterer
2023-05-02 13:17 ` [pve-devel] [PATCH qemu-server 1/2] migration: " Aaron Lauterer
2023-05-03  9:17   ` Fabian Grünbichler
2023-05-09  7:34   ` Fiona Ebner
2023-05-09 12:55     ` Aaron Lauterer
2023-05-09 14:43       ` Fiona Ebner
2023-05-10  9:57         ` Aaron Lauterer
2023-05-10 11:23           ` Fiona Ebner
2023-05-02 13:17 ` [pve-devel] [PATCH container 2/2] migration: avoid migrating volume " Aaron Lauterer
2023-05-03  9:07   ` Fabian Grünbichler
