From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <f.ebner@proxmox.com>
Received: from firstgate.proxmox.com (firstgate.proxmox.com [212.224.123.68])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested)
 by lists.proxmox.com (Postfix) with ESMTPS id 595889CFBD
 for <pve-devel@lists.proxmox.com>; Fri,  2 Jun 2023 11:50:07 +0200 (CEST)
Received: from firstgate.proxmox.com (localhost [127.0.0.1])
 by firstgate.proxmox.com (Proxmox) with ESMTP id 7CDBE28120
 for <pve-devel@lists.proxmox.com>; Fri,  2 Jun 2023 11:50:05 +0200 (CEST)
Received: from proxmox-new.maurer-it.com (proxmox-new.maurer-it.com
 [94.136.29.106])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested)
 by firstgate.proxmox.com (Proxmox) with ESMTPS
 for <pve-devel@lists.proxmox.com>; Fri,  2 Jun 2023 11:50:04 +0200 (CEST)
Received: from proxmox-new.maurer-it.com (localhost.localdomain [127.0.0.1])
 by proxmox-new.maurer-it.com (Proxmox) with ESMTP id 1E2674834E
 for <pve-devel@lists.proxmox.com>; Fri,  2 Jun 2023 11:50:04 +0200 (CEST)
Message-ID: <7e7cce73-d91e-74c9-f07e-dc2bc8f73413@proxmox.com>
Date: Fri, 2 Jun 2023 11:50:03 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Content-Language: en-US
From: Fiona Ebner <f.ebner@proxmox.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>,
 Aaron Lauterer <a.lauterer@proxmox.com>
Reply-To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
References: <20230601135342.2903359-1-a.lauterer@proxmox.com>
 <20230601135342.2903359-2-a.lauterer@proxmox.com>
 <b6f44610-96b7-af0f-36d4-6ba350d21e01@proxmox.com>
In-Reply-To: <b6f44610-96b7-af0f-36d4-6ba350d21e01@proxmox.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-SPAM-LEVEL: Spam detection results:  0
 AWL 0.002 Adjusted score from AWL reputation of From: address
 BAYES_00                 -1.9 Bayes spam probability is 0 to 1%
 DMARC_MISSING             0.1 Missing DMARC policy
 KAM_DMARC_STATUS 0.01 Test Rule for DKIM or SPF Failure with Strict Alignment
 NICE_REPLY_A             -0.1 Looks like a legit reply (A)
 SPF_HELO_NONE           0.001 SPF: HELO does not publish an SPF Record
 SPF_PASS               -0.001 SPF: sender matches SPF record
 T_SCC_BODY_TEXT_LINE    -0.01 -
Subject: Re: [pve-devel] [PATCH v3 qemu-server 1/7] migration: only migrate
 disks used by the guest
X-BeenThere: pve-devel@lists.proxmox.com
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Proxmox VE development discussion <pve-devel.lists.proxmox.com>
List-Unsubscribe: <https://lists.proxmox.com/cgi-bin/mailman/options/pve-devel>, 
 <mailto:pve-devel-request@lists.proxmox.com?subject=unsubscribe>
List-Archive: <http://lists.proxmox.com/pipermail/pve-devel/>
List-Post: <mailto:pve-devel@lists.proxmox.com>
List-Help: <mailto:pve-devel-request@lists.proxmox.com?subject=help>
List-Subscribe: <https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel>, 
 <mailto:pve-devel-request@lists.proxmox.com?subject=subscribe>
X-List-Received-Date: Fri, 02 Jun 2023 09:50:07 -0000

On 02.06.23 at 11:45, Fiona Ebner wrote:
> On 01.06.23 at 15:53, Aaron Lauterer wrote:
>> When scanning all configured storages for disk images belonging to the
>> VM, the migration could easily fail if a storage is enabled but not
>> available. That storage might not even be used by the VM at all.
>>
>> By skipping that scan and only looking at the disk images referenced in
>> the VM config, we can avoid those failures.
>> Extra handling is needed for disk images currently in the 'pending'
>> section of the VM config. These disk images used to be detected by the
>> scan of all storages.
>> It is also necessary to explicitly fetch some information (size, format)
>> about the disk images that used to be provided by the initial scan of
>> all storages.
>>
>> The big behavioral change is that disk images not referenced in the VM
>> config file will be ignored. They are orphans that used to be migrated
>> as well, but are now left where they are. The tests have been adapted
>> to the changed behavior.
>>
>> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
>> ---
>> changes since v2:
>> - move handling of pending changes into QemuServer::foreach_volid
>>     Seems to not have any bad side-effects
> 
> This should really be its own patch, and the commit message should
> describe what the side effects actually are and why they are okay.
> 
> Maybe even keep the old behavior for replication at first and then add a
> second patch which explicitly enables replication for pending volumes by
> removing the initially added filtering. The easiest is probably an option
> $exclude_pending for foreach_volid(), used only by replication, which is
> then removed in the following patch.
> 
> Like that, each patch has its own clear purpose and effect.
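To make the suggestion concrete, here is a minimal sketch of what such an
$exclude_pending flag could look like. Everything except the flag's name is
an assumption — in particular the config walk is heavily simplified compared
to the real QemuServer::foreach_volid():

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical sketch: replication passes a true $exclude_pending to keep
# the old behavior (skip pending volumes); other callers pass false and
# pick pending volumes up. Not the actual patch.
sub foreach_volid_sketch {
    my ($conf, $exclude_pending, $func) = @_;

    my $volhash = {};

    my $test_volid = sub {
        my ($volid, $snapname, $pending) = @_;
        return if $pending && $exclude_pending;
        $volhash->{$volid}->{referenced_in_config} = 1 if !defined($snapname);
        $volhash->{$volid}->{is_pending} = 1 if $pending;
    };

    # simplified walk over current config, pending section and snapshots
    $test_volid->($_, undef, 0) for @{$conf->{current} // []};
    $test_volid->($_, undef, 1) for @{$conf->{pending} // []};
    for my $snap (sort keys %{$conf->{snapshots} // {}}) {
        $test_volid->($_, $snap, 0) for @{$conf->{snapshots}{$snap}};
    }

    $func->($_, $volhash->{$_}) for sort keys %$volhash;
}
```

The follow-up patch would then just drop the flag again, making pending
volumes visible to replication as well.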
> 
>> - style fixes
>> - use 'volume_size_info()' to get format and size of the image
>>
>>  PVE/QemuMigrate.pm                    | 88 ++++++++-------------------
>>  PVE/QemuServer.pm                     |  9 ++-
>>  test/MigrationTest/QemuMigrateMock.pm |  9 +++
>>  test/run_qemu_migrate_tests.pl        | 12 ++--
>>  4 files changed, 50 insertions(+), 68 deletions(-)
>>
>> diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
>> index 09cc1d8..163a721 100644
>> --- a/PVE/QemuMigrate.pm
>> +++ b/PVE/QemuMigrate.pm
>> @@ -149,6 +149,22 @@ sub lock_vm {
>>      return PVE::QemuConfig->lock_config($vmid, $code, @param);
>>  }
>>  
>> +sub target_storage_check_available {
>> +    my ($self, $storecfg, $targetsid, $volid) = @_;
>> +
>> +    if (!$self->{opts}->{remote}) {
>> +	# check if storage is available on target node
>> +	my $target_scfg = PVE::Storage::storage_check_enabled(
>> +	    $storecfg,
>> +	    $targetsid,
>> +	    $self->{node},
>> +	);
>> +	my ($vtype) = PVE::Storage::parse_volname($storecfg, $volid);
>> +	die "$volid: content type '$vtype' is not available on storage '$targetsid'\n"
>> +	    if !$target_scfg->{content}->{$vtype};
>> +    }
>> +}
> 
> This should also be its own patch, please.
> 
>> +
>>  sub prepare {
>>      my ($self, $vmid) = @_;
>>  
>> @@ -396,17 +358,21 @@ sub scan_local_volumes {
>>  		$targetsid = PVE::JSONSchema::map_id($self->{opts}->{storagemap}, $sid);
>>  	    }
>>  
>> -	    # check target storage on target node if intra-cluster migration
>> -	    if (!$self->{opts}->{remote}) {
>> -		PVE::Storage::storage_check_enabled($storecfg, $targetsid, $self->{node});
>> -
>> -		return if $scfg->{shared};
>> -	    }
>> +	    return if $scfg->{shared} && !$self->{opts}->{remote};
>> +	    $self->target_storage_check_available($storecfg, $targetsid, $volid);
> 
> Shouldn't these two lines be switched?
> 
> Before, the enabled check would be done if !$self->{opts}->{remote} and
> then early return.
> 
> But now, if $scfg->{shared} && !$self->{opts}->{remote}, you early
> return without doing the check at all.
> 
> With switched lines, you call the helper, which does the check if
> !$self->{opts}->{remote} and then you do the early return like before.
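Spelled out, the suggested control flow would look like the stub below. The
availability check is mocked via a flag; this only demonstrates the ordering,
it is not the real QemuMigrate code:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Stub: the availability check runs first (so it is no longer skipped for
# shared storages), then the early return for shared storage on
# intra-cluster migration follows.
sub scan_volume_sketch {
    my ($self, $scfg, $checked) = @_;
    $$checked = 1;    # stands in for $self->target_storage_check_available(...)
    return if $scfg->{shared} && !$self->{opts}->{remote};
    return 'local';   # volume is collected as a local volume
}
```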
> 
>>  
>>  	    $local_volumes->{$volid}->{ref} = $attr->{referenced_in_config} ? 'config' : 'snapshot';
>>  	    $local_volumes->{$volid}->{ref} = 'storage' if $attr->{is_unused};
> 
> Not yours, but this is actually wrong, because if the same volume is
> already referenced in a snapshot, the reference will be overwritten to
> be 'storage' here :P
> 
>> +	    $local_volumes->{$volid}->{ref} = 'storage' if $attr->{is_pending};
> 
> But yours is also not correct ;) It should only be done for volumes that
> are only referenced in pending, but not in the current config. You don't
> touch the...
> 
>> @@ -4888,6 +4888,8 @@ sub foreach_volid {
> 
> $volhash->{$volid}->{referenced_in_config} = 1 if !defined($snapname);
> 
> ...line in PVE/QemuServer.pm's foreach_volid(), so is_pending being 1
> means referenced_in_config is also 1, and there's no way to distinguish
> via attributes whether a volume is only in pending or in both pending
> and the current config.
> 
> Not sure if we should even bother to distinguish between more than
> 'config' and 'snapshot' anymore. Everything that's just in a snapshot
> gets 'snapshot' and everything that's in the current config (including
> pending) gets 'config'. No need for 'storage' at all.
> 

Argh, it's actually not just used informationally, but affects real
logic too :(

        foreach my $volid (sort keys %$local_volumes) {
            my $ref = $local_volumes->{$volid}->{ref};
            if ($self->{running} && $ref eq 'config') {
                $local_volumes->{$volid}->{migration_mode} = 'online';

So we do need a real way to distinguish 'only in pending' and 'both in
pending and current config' after all here.
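One possible shape for that distinction — under the assumption (not existing
code) that foreach_volid() were changed to set referenced_in_config only for
volumes in the *current* config, with is_pending marking the pending section:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch, assuming referenced_in_config is set only for the current config
# and is_pending only for the pending section. Callers could then tell
# "pending only" apart from "pending and current config".
sub is_pending_only {
    my ($attr) = @_;
    return $attr->{is_pending} && !$attr->{referenced_in_config};
}

# The migration_mode decision quoted above would then keep being based on
# the current config alone:
sub migration_mode_sketch {
    my ($running, $attr) = @_;
    return 'online' if $running && $attr->{referenced_in_config};
    return 'offline';
}
```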