From: Fiona Ebner <f.ebner@proxmox.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>,
Aaron Lauterer <a.lauterer@proxmox.com>
Subject: Re: [pve-devel] [PATCH v3 qemu-server 1/7] migration: only migrate disks used by the guest
Date: Fri, 2 Jun 2023 11:50:03 +0200
Message-ID: <7e7cce73-d91e-74c9-f07e-dc2bc8f73413@proxmox.com>
In-Reply-To: <b6f44610-96b7-af0f-36d4-6ba350d21e01@proxmox.com>
On 02.06.23 at 11:45, Fiona Ebner wrote:
> On 01.06.23 at 15:53, Aaron Lauterer wrote:
>> When scanning all configured storages for disk images belonging to the
>> VM, the migration could easily fail if a storage is enabled but not
>> available, even though that storage might not be used by the VM at all.
>>
>> By only looking at the disk images referenced in the VM config instead,
>> we can avoid that.
>> Extra handling is needed for disk images currently in the 'pending'
>> section of the VM config. These disk images used to be detected by the
>> scan of all storages.
>> It is also necessary to explicitly fetch some information (size, format)
>> about the disk images, which used to be provided by the initial scan of
>> all storages.
>>
>> The big change in behavior is that disk images not referenced in the
>> VM config file will be ignored. They are already orphans; they used to
>> be migrated as well, but are now left where they are. The tests have
>> been adapted to the changed behavior.
>>
>> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
>> ---
>> changes since v2:
>> - move handling of pending changes into QemuServer::foreach_volid
>>   Seems to not have any bad side-effects
>
> This should really be its own patch, and the commit message should explain
> what the side effects actually are and why they are okay.
>
> Maybe even keep the old behavior for replication at first and then add a
> second patch which explicitly enables replication for pending volumes by
> removing the filtering added initially. The easiest is probably an option
> $exclude_pending for foreach_volid(), used only by replication, which is
> then removed in the following patch.
>
> Like that, each patch has its own clear purpose and effect.
>
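Roughly what I have in mind (just a rough sketch; how exactly the option
gets passed to foreach_volid() is up for debate, and the existing internals
are only hinted at by the comments):

    sub foreach_volid {
        my ($conf, $func, $opts, @param) = @_;

        my $volhash = {};

        # ...existing collection of volumes from the config and its snapshots...

        if (!$opts->{exclude_pending} && defined($conf->{pending})) {
            # ...existing collection of pending volumes, setting is_pending...
        }

        $func->($_, $volhash->{$_}, @param) for sort keys %$volhash;
    }

Replication would then call it with { exclude_pending => 1 } to keep its
current behavior, and the follow-up patch simply drops the option again.
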
>> - style fixes
>> - use 'volume_size_info()' to get format and size of the image
>>
>>  PVE/QemuMigrate.pm                    | 88 ++++++++-------------------
>>  PVE/QemuServer.pm                     |  9 ++-
>>  test/MigrationTest/QemuMigrateMock.pm |  9 +++
>>  test/run_qemu_migrate_tests.pl        | 12 ++--
>>  4 files changed, 50 insertions(+), 68 deletions(-)
>>
>> diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
>> index 09cc1d8..163a721 100644
>> --- a/PVE/QemuMigrate.pm
>> +++ b/PVE/QemuMigrate.pm
>> @@ -149,6 +149,22 @@ sub lock_vm {
>>      return PVE::QemuConfig->lock_config($vmid, $code, @param);
>>  }
>>
>> +sub target_storage_check_available {
>> +    my ($self, $storecfg, $targetsid, $volid) = @_;
>> +
>> +    if (!$self->{opts}->{remote}) {
>> +        # check if storage is available on target node
>> +        my $target_scfg = PVE::Storage::storage_check_enabled(
>> +            $storecfg,
>> +            $targetsid,
>> +            $self->{node},
>> +        );
>> +        my ($vtype) = PVE::Storage::parse_volname($storecfg, $volid);
>> +        die "$volid: content type '$vtype' is not available on storage '$targetsid'\n"
>> +            if !$target_scfg->{content}->{$vtype};
>> +    }
>> +}
>
> This should also be its own patch, please.
>
>> +
>>  sub prepare {
>>      my ($self, $vmid) = @_;
>>
>> @@ -396,17 +358,21 @@ sub scan_local_volumes {
>>              $targetsid = PVE::JSONSchema::map_id($self->{opts}->{storagemap}, $sid);
>>          }
>>
>> -        # check target storage on target node if intra-cluster migration
>> -        if (!$self->{opts}->{remote}) {
>> -            PVE::Storage::storage_check_enabled($storecfg, $targetsid, $self->{node});
>> -
>> -            return if $scfg->{shared};
>> -        }
>> +        return if $scfg->{shared} && !$self->{opts}->{remote};
>> +        $self->target_storage_check_available($storecfg, $targetsid, $volid);
>
> Shouldn't these two lines be switched?
>
> Before, the enabled check would be done if !$self->{opts}->{remote}, and
> only then came the early return.
>
> But now, if $scfg->{shared} && !$self->{opts}->{remote}, you return early
> without doing the check at all.
>
> With switched lines, you call the helper, which does the check if
> !$self->{opts}->{remote} and then you do the early return like before.
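
I.e. with the order swapped it would look like this:

    $self->target_storage_check_available($storecfg, $targetsid, $volid);
    return if $scfg->{shared} && !$self->{opts}->{remote};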
>
>>
>>          $local_volumes->{$volid}->{ref} = $attr->{referenced_in_config} ? 'config' : 'snapshot';
>>          $local_volumes->{$volid}->{ref} = 'storage' if $attr->{is_unused};
>
> Not yours, but this is actually wrong, because if the same volume is
> already referenced in a snapshot, the reference will be overwritten to
> be 'storage' here :P
>
>> +        $local_volumes->{$volid}->{ref} = 'storage' if $attr->{is_pending};
>
> But yours is also not correct ;) It should only be done for volumes that
> are only referenced in pending, but not in the current config. You don't
> touch the...
>
>> @@ -4888,6 +4888,8 @@ sub foreach_volid {
>
> $volhash->{$volid}->{referenced_in_config} = 1 if !defined($snapname);
>
> ...line in PVE/QemuServer.pm's foreach_volid(), so is_pending being 1
> means referenced_in_config is also 1 and there's no way to distinguish
> via attributes if a volume is only in pending or both in pending and the
> current config.
>
> Not sure if we should even bother to distinguish between more than
> 'config' and 'snapshot' anymore. Everything that's just in a snapshot
> gets 'snapshot' and everything that's in the current config (including
> pending) gets 'config'. No need for 'storage' at all.
>
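That would boil it down to keeping just the first assignment and dropping
the two 'storage' overrides, i.e. something like:

    $local_volumes->{$volid}->{ref} = $attr->{referenced_in_config} ? 'config' : 'snapshot';
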
Argh, it's actually not just used informationally, but affects real
logic too :(

    foreach my $volid (sort keys %$local_volumes) {
        my $ref = $local_volumes->{$volid}->{ref};
        if ($self->{running} && $ref eq 'config') {
            $local_volumes->{$volid}->{migration_mode} = 'online';

So we do need a real way to distinguish between 'only in pending' and 'both
in pending and current config' here after all.
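
One way I could imagine (just a sketch, nothing set in stone): have
foreach_volid() set referenced_in_config only for volumes in the live
config and keep is_pending as a separate attribute. Then scan_local_volumes()
could classify such volumes explicitly, e.g.

    if ($attr->{is_pending} && !$attr->{referenced_in_config}) {
        # volume only shows up in the pending section of the config
        $local_volumes->{$volid}->{ref} = 'pending';
    }

and the $ref eq 'config' check for the online migration_mode above would
then have to decide whether such volumes should be treated the same way.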