From: Fiona Ebner <f.ebner@proxmox.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>,
	Aaron Lauterer <a.lauterer@proxmox.com>
Subject: Re: [pve-devel] [PATCH qemu-server v2 1/6] migration: only migrate disks used by the guest
Date: Mon, 22 May 2023 13:59:20 +0200
Message-ID: <731e6d36-2943-9e69-1bcb-7ce23b712f11@proxmox.com>
In-Reply-To: <20230512124043.888785-2-a.lauterer@proxmox.com>

Am 12.05.23 um 14:40 schrieb Aaron Lauterer:
> When scanning all configured storages for disk images belonging to the
> VM, the migration could easily fail if a storage is enabled but not
> available. That storage might not even be used by the VM at all.
> 
> By skipping the scan and only looking at the disk images referenced in
> the VM config, we can avoid that failure mode.
> Extra handling is needed for disk images currently in the 'pending'
> section of the VM config. Previously, these disk images were detected
> by the scan of all storages.
> It is also necessary to explicitly fetch some information (size,
> format) about the disk images that used to be provided by the initial
> scan of all storages.
> 
> The big behavioral change is that disk images not referenced in the VM
> config file will be ignored. Such images are effectively orphans; they
> used to be migrated as well, but are now left where they are. The tests
> have been adapted to the changed behavior.
> 
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
>  PVE/QemuMigrate.pm                    | 71 +++++++++++----------------
>  test/MigrationTest/QemuMigrateMock.pm | 10 ++++
>  test/run_qemu_migrate_tests.pl        | 12 ++---
>  3 files changed, 44 insertions(+), 49 deletions(-)
> 
> diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
> index 09cc1d8..1d21250 100644
> --- a/PVE/QemuMigrate.pm
> +++ b/PVE/QemuMigrate.pm
> @@ -312,49 +312,6 @@ sub scan_local_volumes {
>  	    $abort = 1;
>  	};
>  
> -	my @sids = PVE::Storage::storage_ids($storecfg);
> -	foreach my $storeid (@sids) {
> -	    my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
> -	    next if $scfg->{shared} && !$self->{opts}->{remote};
> -	    next if !PVE::Storage::storage_check_enabled($storecfg, $storeid, undef, 1);
> -
> -	    # get list from PVE::Storage (for unused volumes)
> -	    my $dl = PVE::Storage::vdisk_list($storecfg, $storeid, $vmid, undef, 'images');
> -
> -	    next if @{$dl->{$storeid}} == 0;
> -
> -	    my $targetsid = PVE::JSONSchema::map_id($self->{opts}->{storagemap}, $storeid);
> -	    if (!$self->{opts}->{remote}) {
> -		# check if storage is available on target node
> -		my $target_scfg = PVE::Storage::storage_check_enabled(
> -		    $storecfg,
> -		    $targetsid,
> -		    $self->{node},
> -		);
> -
> -		die "content type 'images' is not available on storage '$targetsid'\n"
> -		    if !$target_scfg->{content}->{images};

This one might be worth re-adding after the storage-enabled check
further below. The same check is already done in prepare() for the
result of get_vm_volumes(), but that does not (currently ;)) include
pending volumes (a bit of foreshadowing here :P).

Or actually, let's use the improved vtype-aware version from prepare():

>             my ($vtype) = PVE::Storage::parse_volname($storecfg, $volid);
> 
>             die "$volid: content type '$vtype' is not available on storage '$targetsid'\n"
>                 if !$target_scfg->{content}->{$vtype};
Might even be worth factoring out the whole block including the
    if (!$self->{opts}->{remote}) {
    ...
    }
into a mini-helper? But it's only used twice, so do as you like :)
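
Rough sketch of what such a helper could look like (untested, the name
is just illustrative):

    my $check_storage_on_target = sub {
	my ($targetsid, $volid) = @_;
	return if $self->{opts}->{remote};

	# check if storage is available on target node
	my $target_scfg = PVE::Storage::storage_check_enabled(
	    $storecfg,
	    $targetsid,
	    $self->{node},
	);

	my ($vtype) = PVE::Storage::parse_volname($storecfg, $volid);
	die "$volid: content type '$vtype' is not available on storage '$targetsid'\n"
	    if !$target_scfg->{content}->{$vtype};
    };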

(...)

> @@ -405,8 +362,23 @@ sub scan_local_volumes {
>  
>  	    $local_volumes->{$volid}->{ref} = $attr->{referenced_in_config} ? 'config' : 'snapshot';
>  	    $local_volumes->{$volid}->{ref} = 'storage' if $attr->{is_unused};
> +	    $local_volumes->{$volid}->{ref} = 'storage' if $attr->{is_pending};
>  	    $local_volumes->{$volid}->{ref} = 'generated' if $attr->{is_tpmstate};
>  
> +	    my $bwlimit = $self->get_bwlimit($sid, $targetsid);
> +	    $local_volumes->{$volid}->{targetsid} = $targetsid;
> +	    $local_volumes->{$volid}->{bwlimit} = $bwlimit;
> +
> +	    my $volume_list = PVE::Storage::volume_list($storecfg, $sid, $vmid, 'images');
> +	    # TODO could probably be done better than just iterating

volume_size_info() to the rescue :) Would avoid the loop and quite a bit
of overhead from calling volume_list() for each individual volume.
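
Untested, but assuming volume_size_info() still returns (size, format,
used, parent) in list context, the whole hunk could shrink to:

    my ($size, $format) = PVE::Storage::volume_size_info($storecfg, $volid);
    $local_volumes->{$volid}->{size} = $size;
    $local_volumes->{$volid}->{format} = $format;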

> +	    for my $volume (@$volume_list) {
> +		if ($volume->{volid} eq $volid) {
> +		    $local_volumes->{$volid}->{size} = $volume->{size};
> +		    $local_volumes->{$volid}->{format} = $volume->{format};
> +		    last;
> +		}
> +	    }
> +
>  	    $local_volumes->{$volid}->{is_vmstate} = $attr->{is_vmstate} ? 1 : 0;
>  
>  	    $local_volumes->{$volid}->{drivename} = $attr->{drivename}
> @@ -450,6 +422,19 @@ sub scan_local_volumes {
>  		if PVE::Storage::volume_is_base_and_used($storecfg, $volid);
>  	};
>  
> +	# add pending disks first
> +	if (defined $conf->{pending} && %{$conf->{pending}}) {

Style nit: please use parentheses for defined. And $conf->{pending}->%*
is slightly nicer, because it can be read from left to right, rather
than needing to look at the inner bit first and then remember the % on
the outside ;)
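
I.e.:

    if (defined($conf->{pending}) && $conf->{pending}->%*) {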

> +	    PVE::QemuServer::foreach_volid($conf->{pending}, sub {

Should we expand foreach_volid() instead? It handles snapshots
already, so handling pending too doesn't seem fundamentally wrong.

Let's check the existing callers and whether they are fine with the change:
1. this one right here: wants pending
2. API2/Qemu.pm: check_vm_disks_local() for migration precondition:
related to migration, so more consistent with pending
3. QemuConfig.pm: get_replicatable_volumes(): could be fine with
pending, but it would be a change in behavior, of course!
4. QemuServer.pm: get_vm_volumes(): is used multiple times by:
4a. vm_stop_cleanup() to deactivate/unmap: should also be fine with
including pending
4b. QemuMigrate.pm: in prepare(): part of migration, so more consistent
with pending
4c. QemuMigrate.pm: in phase3_cleanup() for deactivation: part of
migration, so more consistent with pending

So the only issue is with replication, where we can decide between:
1. Also include pending volumes for replication (would be fine by me).
2. Keep behavior as-is. But then we need to adapt. Some possibilities:
2a. Add a new attribute like 'pending-only' in the result of
foreach_volid(), so that get_replicatable_volumes() can filter them out
(rough sketch after this list).
2b. Switch to a different function like foreach_volume() and get the two
attributes that are used there (cdrom and replicate) manually.
2c. Add a switch to foreach_volid() whether it should include volumes
that are only in pending.
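
For 2a, a rough sketch of the bookkeeping (untested, the attribute name
is just illustrative): while iterating config, snapshots and pending,
record whether a volume was ever seen outside of pending and derive the
attribute from that:

    my $mark_seen = sub {
	my ($volid, $in_pending) = @_;
	$volhash->{$volid}->{is_pending} = 1 if $in_pending;
	# starts out as pending-only, cleared once seen anywhere else
	$volhash->{$volid}->{is_pending_only} //= 1;
	$volhash->{$volid}->{is_pending_only} = 0 if !$in_pending;
    };

get_replicatable_volumes() could then just do
'return if $attr->{is_pending_only};' in its callback to keep the
current behavior.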

> +		    my ($volid, $attr) = @_;
> +		    $attr->{is_pending} = 1;
> +		    eval { $test_volid->($volid, $attr); };
> +		    if (my $err = $@) {
> +			&$log_error($err, $volid);
> +		    }
> +		});
> +	}
> +
> +	# add non-pending referenced disks
>  	PVE::QemuServer::foreach_volid($conf, sub {
>  	    my ($volid, $attr) = @_;
>  	    eval { $test_volid->($volid, $attr); };




Thread overview:
2023-05-12 12:40 [pve-devel] [PATCH qemu-server, container v2 0/6] migration: don't scan all storages, fail on aliases Aaron Lauterer
2023-05-12 12:40 ` [pve-devel] [PATCH qemu-server v2 1/6] migration: only migrate disks used by the guest Aaron Lauterer
2023-05-22 11:59   ` Fiona Ebner [this message]
2023-05-24 15:00     ` Aaron Lauterer
2023-05-12 12:40 ` [pve-devel] [PATCH qemu-server v2 2/6] tests: add migration test for pending disk Aaron Lauterer
2023-05-22 14:02   ` Fiona Ebner
2023-05-12 12:40 ` [pve-devel] [PATCH qemu-server v2 3/6] migration: fail when aliased volume is detected Aaron Lauterer
2023-05-22 14:17   ` Fiona Ebner
2023-05-24 14:40     ` Aaron Lauterer
2023-05-25  8:14       ` Fiona Ebner
2023-05-25  8:15   ` Fiona Ebner
2023-05-12 12:40 ` [pve-devel] [PATCH qemu-server v2 4/6] tests: add migration alias check Aaron Lauterer
2023-05-22 14:25   ` Fiona Ebner
2023-05-24 14:41     ` Aaron Lauterer
2023-05-12 12:40 ` [pve-devel] [PATCH container v2 5/6] migration: only migrate volumes used by the guest Aaron Lauterer
2023-05-22 15:00   ` Fiona Ebner
2023-05-12 12:40 ` [pve-devel] [PATCH container v2 6/6] migration: fail when aliased volume is detected Aaron Lauterer
