public inbox for pve-devel@lists.proxmox.com
From: "Fabian Grünbichler" <f.gruenbichler@proxmox.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH qemu-server 1/2] migration: avoid migrating disk images multiple times
Date: Wed, 03 May 2023 11:17:20 +0200	[thread overview]
Message-ID: <1683104902.ds4ntmeubl.astroid@yuna.none> (raw)
In-Reply-To: <20230502131732.1875692-2-a.lauterer@proxmox.com>

On May 2, 2023 3:17 pm, Aaron Lauterer wrote:
> Scan the VM config and store the volid and full path of each disk.
> Do the same when we scan each storage. Then we can handle these
> scenarios:
> * Multiple storage configurations point to the same storage. When
>   scanning the storages, we then find the disk image multiple times.
>   -> We ignore the duplicates.
> 
> * A VM has multiple disks configured that point to the same disk
>   image.
>   -> We fail with a warning that two disk configs reference the same
>   disk image.

This is not a problem for VMs, and can actually be a valid setup in a
test lab (e.g., testing multipath). I am not sure whether that means we
want to handle it properly in live migration, though, or whether there
even is a way to do so. Since starting the VM with both disks pointing
at the same volume works, the same should be true for having two such
disks on the target side, with an NBD export plus drive mirror on each.
For offline migration the same solution as for containers would apply:
migrate the volume once, then update the volid for all references.
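
The container-style offline approach above can be sketched roughly as
follows. This is a hedged Python stand-in for the Perl code, and all
helper names (`resolve_path`, `migrate_volume`) are made up for
illustration, not actual PVE APIs:

```python
def migrate_offline(config, resolve_path, migrate_volume):
    """config: dict of config key -> volid (e.g. 'scsi0' -> 'local:vm-100-disk-0').

    Migrate each underlying volume exactly once, then point every
    config key that referenced the old volid at the new one."""
    migrated = {}  # resolved path on source -> new volid on target
    for key, volid in config.items():
        path = resolve_path(volid)
        if path not in migrated:
            # copy the data only the first time we see this path
            migrated[path] = migrate_volume(volid)
        # every reference gets rewritten to the migrated volid
        config[key] = migrated[path]
    return config
```

The point is that two config keys resolving to the same path trigger
only one data copy, and both end up referencing the single target volume.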

> Without these checks, it was possible to multiply the number of disk
> images with each migration (with local disk) if at least another storage
> was configured, pointing to the same place.
> 
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
>  PVE/QemuMigrate.pm | 33 +++++++++++++++++++++++++++++++++
>  1 file changed, 33 insertions(+)
> 
> diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
> index 09cc1d8..bd3ea00 100644
> --- a/PVE/QemuMigrate.pm
> +++ b/PVE/QemuMigrate.pm
> @@ -301,6 +301,10 @@ sub scan_local_volumes {
>  	my $other_errors = [];
>  	my $abort = 0;
>  
> +	# store and map already referenced absolute paths and volids
> +	my $referencedpath = {}; # path -> volid
> +	my $referenced = {}; # volid -> config key (e.g. scsi0)
> +

The same comments as for pve-container apply here as well, AFAICT.
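
For context, the first hunk's detection idea boils down to the
following. This is an illustrative Python sketch, not the actual Perl;
`resolve_path` is a hypothetical stand-in for `PVE::Storage::path`:

```python
def find_duplicate_refs(config, resolve_path):
    """config: dict of config key -> volid. Returns error strings for
    config keys whose volids resolve to an already-referenced path."""
    referenced_path = {}  # resolved path -> volid
    referenced = {}       # volid -> config key (e.g. 'scsi0')
    errors = []
    for key, volid in config.items():
        path = resolve_path(volid)
        if path in referenced_path:
            # look up which config key claimed this path first
            other = referenced[referenced_path[path]]
            errors.append(
                f"cannot migrate local image '{volid}': '{key}' and "
                f"'{other}' reference the same volume")
            continue
        referenced_path[path] = volid
        referenced[volid] = key
    return errors
```

The two maps mirror `$referencedpath` and `$referenced` in the patch:
one deduplicates by resolved path, the other remembers which config key
claimed a volid so the error message can name both sides.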

>  	my $log_error = sub {
>  	    my ($msg, $volid) = @_;
>  
> @@ -312,6 +316,26 @@ sub scan_local_volumes {
>  	    $abort = 1;
>  	};
>  
> +	# reference disks in config first
> +	PVE::QemuConfig->foreach_volume_full($conf, { include_unused => 1 }, sub {
> +	    my ($key, $drive) = @_;
> +	    my $volid = $drive->{file};
> +	    return if PVE::QemuServer::Drive::drive_is_cdrom($drive);
> +	    return if !$volid || $volid =~ m|^/|;
> +
> +	    my $path = PVE::Storage::path($storecfg, $volid);
> +	    if (defined $referencedpath->{$path}) {
> +		my $rkey = $referenced->{$referencedpath->{$path}};
> +		&$log_error(
> +		    "cannot migrate local image '$volid': '$key' and '$rkey' ".
> +		    "reference the same volume. (check guest and storage configuration?)\n"
> +		);
> +		return;
> +	    }
> +	    $referencedpath->{$path} = $volid;
> +	    $referenced->{$volid} = $key;
> +	});
> +
>  	my @sids = PVE::Storage::storage_ids($storecfg);
>  	foreach my $storeid (@sids) {
>  	    my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
> @@ -342,6 +366,15 @@ sub scan_local_volumes {
>  	    PVE::Storage::foreach_volid($dl, sub {
>  		my ($volid, $sid, $volinfo) = @_;
>  
> +		# check if image is already referenced
> +		my $path = PVE::Storage::path($storecfg, $volid);
> +		if (defined $referencedpath->{$path} && !$referenced->{$volid}) {
> +		    $self->log('info', "ignoring '$volid' - already referenced by other storage '$referencedpath->{$path}'\n");
> +		    return;
> +		}
> +		$referencedpath->{$path} = $volid;
> +		$referenced->{$volid} = 1;
> +
>  		$local_volumes->{$volid}->{ref} = 'storage';
>  		$local_volumes->{$volid}->{size} = $volinfo->{size};
>  		$local_volumes->{$volid}->{targetsid} = $targetsid;
> -- 
> 2.30.2

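For readers skimming the diff, the second hunk's skip logic during the
storage scan amounts to this (again an illustrative Python sketch of
the Perl closure, with hypothetical names):

```python
def scan_storage(volids, resolve_path, referenced_path, log):
    """Skip any volid whose resolved path was already claimed by a
    different volid, i.e. two storage definitions pointing at the
    same location. referenced_path: path -> volid, pre-seeded by the
    earlier config pass."""
    kept = []
    for volid in volids:
        path = resolve_path(volid)
        prev = referenced_path.get(path)
        if prev is not None and prev != volid:
            # same image found via another storage definition: ignore it
            log(f"ignoring '{volid}' - already referenced by '{prev}'")
            continue
        referenced_path[path] = volid
        kept.append(volid)
    return kept
```

Because `referenced_path` is shared with the config pass, images already
claimed there are silently skipped here instead of being migrated twice.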
Thread overview: 10+ messages
2023-05-02 13:17 [pve-devel] [PATCH qemu-server, container 0/2] " Aaron Lauterer
2023-05-02 13:17 ` [pve-devel] [PATCH qemu-server 1/2] migration: " Aaron Lauterer
2023-05-03  9:17   ` Fabian Grünbichler [this message]
2023-05-09  7:34   ` Fiona Ebner
2023-05-09 12:55     ` Aaron Lauterer
2023-05-09 14:43       ` Fiona Ebner
2023-05-10  9:57         ` Aaron Lauterer
2023-05-10 11:23           ` Fiona Ebner
2023-05-02 13:17 ` [pve-devel] [PATCH container 2/2] migration: avoid migrating volume " Aaron Lauterer
2023-05-03  9:07   ` Fabian Grünbichler
