From mboxrd@z Thu Jan  1 00:00:00 1970
From: Fiona Ebner <f.ebner@proxmox.com>
To: pve-devel@lists.proxmox.com
Date: Wed, 25 Jun 2025 17:56:47 +0200
Message-ID: <20250625155751.268047-25-f.ebner@proxmox.com>
In-Reply-To: <20250625155751.268047-1-f.ebner@proxmox.com>
References: <20250625155751.268047-1-f.ebner@proxmox.com>
MIME-Version: 1.0
Subject: [pve-devel] [RFC qemu-server 24/31] block job: add blockdev mirror

With blockdev-mirror, it is possible to change the aio setting on the fly. This is useful for
migrations between storages where one wants to use io_uring by default and the other doesn't.

Already mock blockdev_mirror in the tests.
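For context, the QMP sequence the new helper issues is a `blockdev-add` for the target node
followed by `blockdev-mirror` with `replaces` pointing at the source format node. The following
Python sketch builds these two payloads; the `build_mirror_commands` helper, the node names, the
driver, and the filename are illustrative and not part of the patch:

```python
import json

def build_mirror_commands(drive_id, target_node, source_fmt_node, bwlimit_kib=None):
    """Return the blockdev-add and blockdev-mirror QMP payloads as dicts."""
    # Attach the target image as its own blockdev node first.
    blockdev_add = {
        "execute": "blockdev-add",
        "arguments": {
            "node-name": target_node,
            "driver": "raw",  # illustrative; in the patch this is generated from the drive config
            "file": {"driver": "file", "filename": "/tmp/target.raw"},
        },
    }
    # Then start the mirror job; 'replaces' makes QEMU swap the source
    # format node for the target node when the job completes.
    blockdev_mirror = {
        "execute": "blockdev-mirror",
        "arguments": {
            "job-id": f"mirror-{drive_id}",
            "device": f"drive-{drive_id}",
            "target": target_node,
            "replaces": source_fmt_node,
            "sync": "full",
        },
    }
    if bwlimit_kib is not None:
        # QMP expects the rate limit in bytes per second; the API takes KiB/s.
        blockdev_mirror["arguments"]["speed"] = bwlimit_kib * 1024
    return [blockdev_add, blockdev_mirror]

for cmd in build_mirror_commands("scsi0", "e-target0", "f-source0", bwlimit_kib=10240):
    print(json.dumps(cmd, indent=2))
```

Since the throttle group stays in place as the top node, only the format node below it is
replaced, which is why the patch passes the format node's name as `replaces`.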
Co-developed-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/PVE/QemuServer/BlockJob.pm            | 162 ++++++++++++++++++++++
 src/PVE/QemuServer/Blockdev.pm            |   2 +-
 src/test/MigrationTest/QemuMigrateMock.pm |   8 ++
 3 files changed, 171 insertions(+), 1 deletion(-)

diff --git a/src/PVE/QemuServer/BlockJob.pm b/src/PVE/QemuServer/BlockJob.pm
index 0013bde6..89cd1312 100644
--- a/src/PVE/QemuServer/BlockJob.pm
+++ b/src/PVE/QemuServer/BlockJob.pm
@@ -4,12 +4,14 @@ use strict;
 use warnings;
 
 use JSON;
+use Storable qw(dclone);
 
 use PVE::Format qw(render_duration render_bytes);
 use PVE::RESTEnvironment qw(log_warn);
 use PVE::Storage;
 
 use PVE::QemuServer::Agent qw(qga_check_running);
+use PVE::QemuServer::Blockdev;
 use PVE::QemuServer::Drive qw(checked_volume_format);
 use PVE::QemuServer::Monitor qw(mon_cmd);
 use PVE::QemuServer::RunState;
@@ -340,6 +342,166 @@ sub qemu_drive_mirror_switch_to_active_mode {
     }
 }
 
+=pod
+
+=head3 blockdev_mirror
+
+    blockdev_mirror($source, $dest, $jobs, $completion, $options)
+
+Mirrors the volume of a running VM specified by C<$source> to destination C<$dest>.
+
+=over
+
+=item C<$source>
+
+The source information consists of:
+
+=over
+
+=item C<< $source->{vmid} >>
+
+The ID of the running VM the source volume belongs to.
+
+=item C<< $source->{drive} >>
+
+The drive configuration of the source volume as currently attached to the VM.
+
+=item C<< $source->{bitmap} >>
+
+(optional) Use incremental mirroring based on the specified bitmap.
+
+=back
+
+=item C<$dest>
+
+The destination information consists of:
+
+=over
+
+=item C<< $dest->{volid} >>
+
+The volume ID of the target volume.
+
+=item C<< $dest->{vmid} >>
+
+(optional) The ID of the VM the target volume belongs to. Defaults to C<< $source->{vmid} >>.
+
+=item C<< $dest->{'zero-initialized'} >>
+
+(optional) True if the target volume is zero-initialized.
+
+=back
+
+=item C<$jobs>
+
+(optional) Other jobs in the transaction when multiple volumes should be mirrored. All jobs must be
+ready before completion can happen.
+
+=item C<$completion>
+
+Completion mode, default is C<complete>:
+
+=over
+
+=item C<complete>
+
+Wait until all jobs are ready, block-job-complete them (default). This means switching the original
+drive to use the new target.
+
+=item C<cancel>
+
+Wait until all jobs are ready, block-job-cancel them. This means not switching the original drive
+to use the new target.
+
+=item C<skip>
+
+Wait until all jobs are ready, return with block jobs in ready state.
+
+=item C<auto>
+
+Wait until all jobs disappear; only use for jobs which complete automatically.
+
+=back
+
+=item C<$options>
+
+Further options:
+
+=over
+
+=item C<< $options->{'guest-agent'} >>
+
+Whether the guest agent is configured for the VM. It will be used to freeze and thaw the
+filesystems for consistency when the target belongs to a different VM.
+
+=item C<< $options->{'bwlimit'} >>
+
+The bandwidth limit to use for the mirroring operation, in KiB/s.
+
+=back
+
+=back
+
+=cut
+
+sub blockdev_mirror {
+    my ($source, $dest, $jobs, $completion, $options) = @_;
+
+    my $vmid = $source->{vmid};
+
+    my $drive_id = PVE::QemuServer::Drive::get_drive_id($source->{drive});
+    my $device_id = "drive-$drive_id";
+
+    my $storecfg = PVE::Storage::config();
+
+    # Need to replace the format node below the top node.
+    my $source_node_name = PVE::QemuServer::Blockdev::get_node_name(
+        'fmt', $drive_id, $source->{drive}->{file},
+    );
+
+    # Copy original drive config (aio, cache, discard, ...):
+    my $dest_drive = dclone($source->{drive});
+    $dest_drive->{file} = $dest->{volid};
+
+    my $generate_blockdev_opts = {};
+    $generate_blockdev_opts->{'zero-initialized'} = 1 if $dest->{'zero-initialized'};
+
+    # Note that if 'aio' is not explicitly set, i.e. default, it can change if source and target
+    # don't both allow or both not allow 'io_uring' as the default.
+    my $target_drive_blockdev = PVE::QemuServer::Blockdev::generate_drive_blockdev(
+        $storecfg, $dest_drive, $generate_blockdev_opts,
+    );
+    # Top node is the throttle group, must use the file child.
+    my $target_blockdev = $target_drive_blockdev->{file};
+
+    PVE::QemuServer::Monitor::mon_cmd($vmid, 'blockdev-add', $target_blockdev->%*);
+    my $target_nodename = $target_blockdev->{'node-name'};
+
+    $jobs = {} if !$jobs;
+    my $jobid = "mirror-$drive_id";
+    $jobs->{$jobid} = {};
+
+    my $qmp_opts = common_mirror_qmp_options(
+        $device_id, $target_nodename, $source->{bitmap}, $options->{bwlimit},
+    );
+
+    $qmp_opts->{'job-id'} = "$jobid";
+    $qmp_opts->{replaces} = "$source_node_name";
+
+    # if a job already runs for this device we get an error, catch it for cleanup
+    eval { mon_cmd($vmid, "blockdev-mirror", $qmp_opts->%*); };
+    if (my $err = $@) {
+        eval { qemu_blockjobs_cancel($vmid, $jobs) };
+        log_warn("unable to cancel block jobs - $@") if $@;
+        eval { mon_cmd($vmid, 'blockdev-del', 'node-name' => $target_nodename) };
+        log_warn("unable to delete blockdev '$target_nodename' - $@") if $@;
+        die "error starting blockdev mirror - $err";
+    }
+
+    qemu_drive_mirror_monitor(
+        $vmid, $dest->{vmid}, $jobs, $completion, $options->{'guest-agent'}, 'mirror',
+    );
+}
+
 sub mirror {
     my ($source, $dest, $jobs, $completion, $options) = @_;
 
diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
index 6e6b9245..716a0ac9 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -12,7 +12,7 @@ use PVE::Storage;
 
 use PVE::QemuServer::Drive qw(drive_is_cdrom);
 
-my sub get_node_name {
+sub get_node_name {
     my ($type, $drive_id, $volid, $snap) = @_;
 
     my $info = "drive=$drive_id,";
diff --git a/src/test/MigrationTest/QemuMigrateMock.pm b/src/test/MigrationTest/QemuMigrateMock.pm
index 25a4f9b2..c52df84b 100644
--- a/src/test/MigrationTest/QemuMigrateMock.pm
+++ b/src/test/MigrationTest/QemuMigrateMock.pm
@@ -9,6 +9,7 @@ use Test::MockModule;
 
 use MigrationTest::Shared;
 
 use PVE::API2::Qemu;
+use PVE::QemuServer::Drive;
 use PVE::Storage;
 use PVE::Tools qw(file_set_contents file_get_contents);
 
@@ -167,6 +168,13 @@ $qemu_server_blockjob_module->mock(
         common_mirror_mock($vmid, $drive_id);
     },
+    blockdev_mirror => sub {
+        my ($source, $dest, $jobs, $completion, $options) = @_;
+
+        my $drive_id = PVE::QemuServer::Drive::get_drive_id($source->{drive});
+
+        common_mirror_mock($source->{vmid}, $drive_id);
+    },
     qemu_drive_mirror_monitor => sub {
         my ($vmid, $vmiddst, $jobs, $completion, $qga) = @_;
-- 
2.47.2

_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel