From: Dominik Csapak
To: Fiona Ebner, Proxmox VE development discussion
Date: Wed, 5 Jun 2024 10:49:03 +0200
Subject: Re: [pve-devel] [PATCH qemu-server v3 06/10] migrate: call vm_stop_cleanup after stopping in phase3_cleanup

On 5/31/24 14:56, Fiona Ebner wrote:
> On 19.04.24 at 14:45, Dominik Csapak wrote:
>> we currently only call deactivate_volumes, but we actually want to call
>> the whole vm_stop_cleanup, since that is not invoked by the vm_stop
>> above (we cannot parse the config anymore) and might do other cleanups
>> we also want to do (like mdev cleanup).
>>
>> For this to work properly we have to clone the original config at the
>> beginning, since we might modify the volids.
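
a side note on the clone: a shallow copy like { %$conf } would only
duplicate the top-level keys, so any nested structure (snapshots,
pending, ...) would still be shared with $conf, and a later modification
could leak into the copy. a minimal standalone sketch of the difference,
with made-up toy data rather than the real config layout:

    use strict;
    use warnings;
    use Storable qw(dclone);

    # toy "config" with one nested section - not the real qemu-server layout
    my $conf = {
        scsi0     => 'local-lvm:vm-100-disk-0,size=32G',
        snapshots => { snap1 => { scsi0 => 'local-lvm:vm-100-disk-0' } },
    };

    my $shallow = { %$conf };    # top-level copy, nested hash still shared
    my $deep    = dclone($conf); # recursive copy, nothing shared

    # simulate a volid rewrite on the original
    $conf->{snapshots}{snap1}{scsi0} = 'target-store:vm-100-disk-0';

    print $shallow->{snapshots}{snap1}{scsi0}, "\n"; # target-store:... (leaked in)
    print $deep->{snapshots}{snap1}{scsi0}, "\n";    # local-lvm:... (preserved)
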
>>
>> Signed-off-by: Dominik Csapak
>> ---
>>  PVE/QemuMigrate.pm           | 12 ++++++------
>>  test/MigrationTest/Shared.pm |  3 +++
>>  2 files changed, 9 insertions(+), 6 deletions(-)
>>
>> diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
>> index 8d9b35ae..381022f5 100644
>> --- a/PVE/QemuMigrate.pm
>> +++ b/PVE/QemuMigrate.pm
>> @@ -5,6 +5,7 @@ use warnings;
>>
>>  use IO::File;
>>  use IPC::Open2;
>> +use Storable qw(dclone);
>>  use Time::HiRes qw( usleep );
>>
>>  use PVE::Cluster;
>
> Needs a rebase (because of the added include for PVE::AccessControl)
>
>> @@ -1455,7 +1456,8 @@ sub phase3_cleanup {
>>
>>      my $tunnel = $self->{tunnel};
>>
>> -    my $sourcevollist = PVE::QemuServer::get_vm_volumes($conf);
>> +    # we'll need an unmodified copy of the config later for the cleanup
>> +    my $oldconf = dclone($conf);
>>
>>      if ($self->{volume_map} && !$self->{opts}->{remote}) {
>>          my $target_drives = $self->{target_drive};
>> @@ -1586,12 +1588,10 @@
>>          $self->{errors} = 1;
>>      }
>>
>> -    # always deactivate volumes - avoid lvm LVs to be active on several nodes
>> -    eval {
>> -        PVE::Storage::deactivate_volumes($self->{storecfg}, $sourcevollist);
>> -    };
>> +    # stop with nocheck does not do a cleanup, so do it here with the original config
>> +    eval { PVE::QemuServer::vm_stop_cleanup($self->{storecfg}, $vmid, $oldconf) };
>>      if (my $err = $@) {
>> -        $self->log('err', $err);
>> +        $self->log('err', "cleanup for vm failed - $err");
>
> Nit: "Cleanup after stopping VM failed"
>
> Is it better to only execute this in case vm_stop() did not return an
> error? Although I guess attempting cleanup in that case also doesn't
> hurt.

not sure, honestly. if the vm stop failed, we cannot really know at this
point whether we should do a cleanup. my guess is that when the vm is
still running, the cleanup will fail at some step anyway. but weighing
doing it and potentially generating more warning/error output against
not doing it and missing some cleanup, IMHO i'd prefer the former
(a rough sketch of that guarded variant is at the end of this mail).

>
>>          $self->{errors} = 1;
>>      }
>>
>> diff --git a/test/MigrationTest/Shared.pm b/test/MigrationTest/Shared.pm
>> index aa7203d1..2347e60a 100644
>> --- a/test/MigrationTest/Shared.pm
>> +++ b/test/MigrationTest/Shared.pm
>> @@ -130,6 +130,9 @@ $qemu_server_module->mock(
>>      clear_reboot_request => sub {
>>          return 1;
>>      },
>> +    vm_stop_cleanup => sub {
>> +        return 1;
>
> Nit: I'd just have it be return; without a value.
>
>> +    },
>>      get_efivars_size => sub {
>>          return 128 * 1024;
>>      },
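
regarding the return; nit - for reference, the difference only shows up
when the mocked sub is called in list context, where a bare return
yields an empty list instead of (1). a tiny standalone example with toy
subs, not the actual mocks:

    use strict;
    use warnings;

    sub with_value { return 1 }   # 1 in scalar context, (1) in list context
    sub bare       { return }     # undef in scalar context, () in list context

    my %a = ( cleanup => with_value(), other => 2 ); # cleanup => 1, other => 2
    my %b = ( cleanup => bare(),       other => 2 ); # "odd number of elements"
                                                     # warning, keys shift around

the call sites here ignore the return value of vm_stop_cleanup anyway,
so a bare return; signals that nicely.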
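
and for the "only execute if vm_stop() did not return an error" variant
discussed above, a rough, untested sketch of how that could look -
$stop_err is a made-up name here, and the vm_stop invocation is
abbreviated from the existing code:

    # remember whether stopping the vm failed, and skip the cleanup then;
    # $stop_err is a hypothetical name, not from the current code
    my $stop_err;
    eval { PVE::QemuServer::vm_stop($self->{storecfg}, $vmid, 1, 1) };
    $stop_err = $@;
    if ($stop_err) {
        $self->log('err', $stop_err);
        $self->{errors} = 1;
    }

    if (!$stop_err) {
        # stop with nocheck does not do a cleanup, so do it here with the original config
        eval { PVE::QemuServer::vm_stop_cleanup($self->{storecfg}, $vmid, $oldconf) };
        if (my $err = $@) {
            $self->log('err', "cleanup after stopping vm failed - $err");
            $self->{errors} = 1;
        }
    }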