Subject: [pve-devel] [PATCH v3 qemu-server 08/11] blockdev: convert drive_mirror to blockdev_mirror
From: Alexandre Derumier
To: pve-devel@lists.proxmox.com
Date: Mon, 16 Dec 2024 10:12:26 +0100
Message-Id: <20241216091229.3142660-13-alexandre.derumier@groupe-cyllene.com>
In-Reply-To: <20241216091229.3142660-1-alexandre.derumier@groupe-cyllene.com>
References: <20241216091229.3142660-1-alexandre.derumier@groupe-cyllene.com>
List-Id: Proxmox VE development discussion
Signed-off-by: Alexandre Derumier
---
 PVE/QemuMigrate.pm |   2 +-
 PVE/QemuServer.pm  | 106 +++++++++++++++++++++++++++++++++++----------
 2 files changed, 83 insertions(+), 25 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index ed5ede30..88627ce4 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -1134,7 +1134,7 @@ sub phase2 {
             my $bitmap = $target->{bitmap};
 
             $self->log('info', "$drive: start migration to $nbd_uri");
-            PVE::QemuServer::qemu_drive_mirror($vmid, $drive, $nbd_uri, $vmid, undef, $self->{storage_migration_jobs}, 'skip', undef, $bwlimit, $bitmap);
+            PVE::QemuServer::qemu_drive_mirror($vmid, $drive, $source_drive, $nbd_uri, $vmid, undef, $self->{storage_migration_jobs}, 'skip', undef, $bwlimit, $bitmap);
         }
 
         if (PVE::QemuServer::QMPHelpers::runs_at_least_qemu_version($vmid, 8, 2)) {
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 6bebb906..3d7c41ee 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -8184,59 +8184,85 @@ sub qemu_img_convert {
 }
 
 sub qemu_drive_mirror {
-    my ($vmid, $drive, $dst_volid, $vmiddst, $is_zero_initialized, $jobs, $completion, $qga, $bwlimit, $src_bitmap) = @_;
+    my ($vmid, $driveid, $drive, $dst_volid, $vmiddst, $is_zero_initialized, $jobs, $completion, $qga, $bwlimit, $src_bitmap) = @_;
 
     $jobs = {} if !$jobs;
+    my $deviceid = "drive-$driveid";
+    my $dst_format;
+    my $dst_path = $dst_volid;
+    my $jobid = "mirror-$deviceid";
+    $jobs->{$jobid} = {};
 
-    my $qemu_target;
-    my $format;
-    $jobs->{"drive-$drive"} = {};
+    my $storecfg = PVE::Storage::config();
 
     if ($dst_volid =~ /^nbd:/) {
-        $qemu_target = $dst_volid;
-        $format = "nbd";
+        $dst_format = "nbd";
     } else {
-        my $storecfg = PVE::Storage::config();
-
-        $format = checked_volume_format($storecfg, $dst_volid);
-
-        my $dst_path = PVE::Storage::path($storecfg, $dst_volid);
-
-        $qemu_target = $is_zero_initialized ? "zeroinit:$dst_path" : $dst_path;
+        $dst_format = checked_volume_format($storecfg, $dst_volid);
+        $dst_path = PVE::Storage::path($storecfg, $dst_volid);
+    }
+
+    # copy the original drive config (aio, cache, discard, ...)
+    my $dst_drive = dclone($drive);
+    $dst_drive->{format} = $dst_format;
+    $dst_drive->{file} = $dst_path;
+    $dst_drive->{zeroinit} = 1 if $is_zero_initialized;
+    # improve: if the target storage doesn't support aio=io_uring, change it to the
+    # default (native) and remove clone_disk_check_io_uring()
+
+    # add the new target block device
+    my $nodes = get_blockdev_nodes($vmid);
+
+    my $target_fmt_nodename = get_blockdev_nextid("fmt-$deviceid", $nodes);
+    my $target_file_nodename = get_blockdev_nextid("file-$deviceid", $nodes);
+    my $target_file_blockdev = generate_file_blockdev($storecfg, $dst_drive, $target_file_nodename);
+    my $target_nodename = undef;
+
+    if ($dst_format eq 'nbd') {
+        # NBD targets do not have a format node
+        $target_nodename = $target_file_nodename;
+        PVE::QemuServer::Monitor::mon_cmd($vmid, 'blockdev-add', %$target_file_blockdev);
+    } else {
+        $target_nodename = $target_fmt_nodename;
+        my $target_fmt_blockdev = generate_format_blockdev($storecfg, $dst_drive, $target_fmt_nodename, $target_file_blockdev);
+        PVE::QemuServer::Monitor::mon_cmd($vmid, 'blockdev-add', %$target_fmt_blockdev);
     }
 
+    # we replace the original src format node in the blockdev graph
+    my $src_fmt_nodename = find_fmt_nodename_drive($storecfg, $vmid, $drive, $nodes);
+
     my $opts = {
+        'job-id' => $jobid,
         timeout => 10,
-        device => "drive-$drive",
-        mode => "existing",
+        device => $deviceid,
+        replaces => $src_fmt_nodename,
         sync => "full",
-        target => $qemu_target,
+        target => $target_nodename,
         'auto-dismiss' => JSON::false,
     };
-    $opts->{format} = $format if $format;
 
     if (defined($src_bitmap)) {
         $opts->{sync} = 'incremental';
-        $opts->{bitmap} = $src_bitmap;
+        $opts->{bitmap} = $src_bitmap; ## FIXME: how to handle the bitmap? special Proxmox patch?
         print "drive mirror re-using dirty bitmap '$src_bitmap'\n";
     }
 
     if (defined($bwlimit)) {
         $opts->{speed} = $bwlimit * 1024;
-        print "drive mirror is starting for drive-$drive with bandwidth limit: ${bwlimit} KB/s\n";
+        print "drive mirror is starting for $deviceid with bandwidth limit: ${bwlimit} KB/s\n";
     } else {
-        print "drive mirror is starting for drive-$drive\n";
+        print "drive mirror is starting for $deviceid\n";
     }
 
     # if a job already runs for this device we get an error, catch it for cleanup
-    eval { mon_cmd($vmid, "drive-mirror", %$opts); };
+    eval { mon_cmd($vmid, "blockdev-mirror", %$opts); };
+
     if (my $err = $@) {
         eval { PVE::QemuServer::qemu_blockjobs_cancel($vmid, $jobs) };
+        # FIXME: delete the target blockdev after job cancel
         warn "$@\n" if $@;
         die "mirroring error: $err\n";
     }
-
-    qemu_drive_mirror_monitor($vmid, $vmiddst, $jobs, $completion, $qga);
+    qemu_drive_mirror_monitor($vmid, $vmiddst, $jobs, $completion, $qga, 'mirror');
 }
 
 # $completion can be either
@@ -8595,7 +8621,7 @@ sub clone_disk {
             my $sparseinit = PVE::Storage::volume_has_feature($storecfg, 'sparseinit', $newvolid);
 
             if ($use_drive_mirror) {
-                qemu_drive_mirror($vmid, $src_drivename, $newvolid, $newvmid, $sparseinit, $jobs,
+                qemu_drive_mirror($vmid, $src_drivename, $drive, $newvolid, $newvmid, $sparseinit, $jobs,
                     $completion, $qga, $bwlimit);
             } else {
                 if ($dst_drivename eq 'efidisk0') {
@@ -9130,6 +9156,38 @@ sub delete_ifaces_ipams_ips {
     }
 }
 
+sub find_fmt_nodename_drive {
+    my ($storecfg, $vmid, $drive, $nodes) = @_;
+
+    my $volid = $drive->{file};
+    my $format = checked_volume_format($storecfg, $volid);
+    my $path = PVE::Storage::path($storecfg, $volid);
+
+    my $node = find_blockdev_node($nodes, $path, 'fmt');
+    return $node->{'node-name'};
+}
+
+sub get_blockdev_nextid {
+    my ($nodename, $nodes) = @_;
+
+    my $version = 0;
+    for my $nodeid (keys %$nodes) {
+        if ($nodeid =~ m/^$nodename-(\d+)$/) {
+            my $current_version = $1;
+            $version = $current_version if $current_version >= $version;
+        }
+    }
+
+    $version++;
+    return "$nodename-$version";
+}
+
+sub get_blockdev_nodes {
+    my ($vmid) = @_;
+
+    my $nodes = PVE::QemuServer::Monitor::mon_cmd($vmid, "query-named-block-nodes");
+    $nodes = { map { $_->{'node-name'} => $_ } $nodes->@* };
+    return $nodes;
+}
+
 sub encode_json_ordered {
     return JSON->new->canonical->allow_nonref->encode( $_[0] );
 }
-- 
2.39.5

_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
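A note for reviewers on the node-name allocation in get_blockdev_nextid(): it scans the existing named block nodes and returns "<prefix>-<max+1>". A minimal sketch of the same logic, restated in Python for illustration only (the prefixes and node names below are hypothetical examples, not values produced by the patch):

```python
import re

def next_node_id(prefix: str, nodes: dict) -> str:
    """Return the next free block-node name for `prefix`.

    Mirrors the patch's get_blockdev_nextid(): take the highest
    numeric suffix among nodes matching "<prefix>-<N>" and add 1.
    """
    version = 0
    pattern = re.compile(rf"^{re.escape(prefix)}-(\d+)$")
    for node_id in nodes:
        m = pattern.match(node_id)
        if m:
            version = max(version, int(m.group(1)))
    return f"{prefix}-{version + 1}"
```

For example, with fmt-drive-scsi0-1 and fmt-drive-scsi0-3 already registered, the next format node becomes fmt-drive-scsi0-4: numbering gaps left by removed nodes are skipped rather than reused, which keeps names unique across repeated mirror attempts.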
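The switch from drive-mirror to blockdev-mirror also changes the argument shape: the job now carries an explicit job-id, mirrors to a pre-created target node instead of a path, and uses replaces so QEMU splices the target over the source format node on completion. A rough Python sketch of the resulting QMP argument structure (field values are hypothetical; the real hash is built in Perl by qemu_drive_mirror() above, and the bitmap key relies on Proxmox-specific patching, as the FIXME in the patch notes):

```python
def mirror_opts(driveid, src_fmt_node, target_node, bwlimit_kb=None, bitmap=None):
    """Build blockdev-mirror arguments analogous to the patched Perl code."""
    opts = {
        "job-id": f"mirror-drive-{driveid}",
        "device": f"drive-{driveid}",
        "replaces": src_fmt_node,  # source format node replaced on completion
        "sync": "incremental" if bitmap else "full",
        "target": target_node,     # node-name added beforehand via blockdev-add
        "auto-dismiss": False,
    }
    if bitmap:
        opts["bitmap"] = bitmap
    if bwlimit_kb is not None:
        opts["speed"] = bwlimit_kb * 1024  # QMP speed is bytes per second
    return opts
```

Note the KB-to-bytes conversion: the PVE bwlimit is in KB/s while QMP's speed is in bytes/s, hence the `* 1024` in both the patch and this sketch.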