From: Daniel Kral <d.kral@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [RFC qemu-server 9/9] restore_vm: improve checks if storage supports vm images
Date: Mon, 16 Sep 2024 18:38:39 +0200
Message-ID: <20240916163839.236908-10-d.kral@proxmox.com>
In-Reply-To: <20240916163839.236908-1-d.kral@proxmox.com>

Improves the checks whether the underlying storage that VMs are
restored to supports the content type 'images'. This has already been
the case for backup restores, but the code is now refactored to use
`check_storage_alloc` and `check_volume_alloc`.

Adds a check right before allocating a snapshot statefile and a backup
fleecing disk image, for consistency with the storage content type
system.

Signed-off-by: Daniel Kral <d.kral@proxmox.com>
---
I am not sure about the changes to the statefile and fleecing image
allocation. I made them for consistency as well, but they could cause
sudden failures, especially since the logic in
`PVE::QemuServer::find_vmstate_storage` can default to the hardcoded
`local` storage, which fails when that storage does not support vm
images (which is the default configuration when installing PVE). Also,
the fleecing disk image storage is specified when starting the backup
job with the `storeid` parameter in PVE::VZDump::Plugin, and I'm not
totally sure yet how it is used across the repositories.
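
For reference, here is a rough sketch (not part of this patch) of how
the old inline `$check_storage` closure maps onto the two helpers used
below. This is only an illustration based on the removed code; the
actual helpers are introduced earlier in this series, and which helper
carries which half of the check is my reading of their signatures:

    use PVE::Storage;

    # permission check: everyone except root@pam needs
    # Datastore.AllocateSpace on the target storage
    sub check_storage_alloc {
        my ($rpcenv, $user, $storeid) = @_;
        $rpcenv->check($user, "/storage/$storeid", ['Datastore.AllocateSpace'])
            if $user ne 'root@pam';
    }

    # content type check: the storage must allow allocating VM images
    sub check_volume_alloc {
        my ($storecfg, $storeid) = @_;
        my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
        die "Content type 'images' is not available on storage '$storeid'\n"
            if !$scfg->{content}->{images};
    }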

 PVE/QemuConfig.pm        |  4 ++--
 PVE/QemuServer.pm        | 22 ++++++++--------------
 PVE/VZDump/QemuServer.pm |  4 ++--
 3 files changed, 12 insertions(+), 18 deletions(-)

diff --git a/PVE/QemuConfig.pm b/PVE/QemuConfig.pm
index 8e8a7828..d502b41f 100644
--- a/PVE/QemuConfig.pm
+++ b/PVE/QemuConfig.pm
@@ -8,7 +8,7 @@ use PVE::INotify;
 use PVE::JSONSchema;
 use PVE::QemuServer::CPUConfig;
 use PVE::QemuServer::Drive;
-use PVE::QemuServer::Helpers;
+use PVE::QemuServer::Helpers qw(alloc_volume_disk);
 use PVE::QemuServer::Monitor qw(mon_cmd);
 use PVE::QemuServer;
 use PVE::QemuServer::Machine;
@@ -221,7 +221,7 @@ sub __snapshot_save_vmstate {
     my $name = "vm-$vmid-state-$snapname";
     $name .= ".raw" if $scfg->{path}; # add filename extension for file base storage
 
-    my $statefile = PVE::Storage::vdisk_alloc($storecfg, $target, $vmid, 'raw', $name, $size*1024);
+    my $statefile = alloc_volume_disk($storecfg, $target, $vmid, 'raw', $name, $size*1024);
     my $runningmachine = PVE::QemuServer::Machine::get_current_qemu_machine($vmid);
 
     # get current QEMU -cpu argument to ensure consistency of custom CPU models
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index c07dd7aa..fb70caee 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -50,7 +50,7 @@ use PVE::Tools qw(run_command file_read_firstline file_get_contents dir_glob_for
 
 use PVE::QMPClient;
 use PVE::QemuConfig;
-use PVE::QemuServer::Helpers qw(config_aware_timeout min_version windows_version alloc_volume_disk);
+use PVE::QemuServer::Helpers qw(config_aware_timeout min_version windows_version check_storage_alloc check_volume_alloc alloc_volume_disk);
 use PVE::QemuServer::Cloudinit;
 use PVE::QemuServer::CGroup;
 use PVE::QemuServer::CPUConfig qw(print_cpu_device get_cpu_options get_cpu_bitness is_native_arch);
@@ -6688,14 +6688,6 @@ my $restore_cleanup_oldconf = sub {
 my $parse_backup_hints = sub {
     my ($rpcenv, $user, $storecfg, $fh, $devinfo, $options) = @_;
 
-    my $check_storage = sub { # assert if an image can be allocate
-	my ($storeid, $scfg) = @_;
-	die "Content type 'images' is not available on storage '$storeid'\n"
-	    if !$scfg->{content}->{images};
-	$rpcenv->check($user, "/storage/$storeid", ['Datastore.AllocateSpace'])
-	    if $user ne 'root@pam';
-    };
-
     my $virtdev_hash = {};
     while (defined(my $line = <$fh>)) {
 	if ($line =~ m/^\#qmdump\#map:(\S+):(\S+):(\S*):(\S*):$/) {
@@ -6714,8 +6706,9 @@ my $parse_backup_hints = sub {
 	    $devinfo->{$devname}->{format} = $format;
 	    $devinfo->{$devname}->{storeid} = $storeid;
 
-	    my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
-	    $check_storage->($storeid, $scfg); # permission and content type check
+	    # permission and content type check
+	    check_storage_alloc($rpcenv, $user, $storeid);
+	    check_volume_alloc($storecfg, $storeid);
 
 	    $virtdev_hash->{$virtdev} = $devinfo->{$devname};
 	} elsif ($line =~ m/^((?:ide|sata|scsi)\d+):\s*(.*)\s*$/) {
@@ -6728,7 +6721,9 @@ my $parse_backup_hints = sub {
 		my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
 		my $format = qemu_img_format($scfg, $volname); # has 'raw' fallback
 
-		$check_storage->($storeid, $scfg); # permission and content type check
+		# permission and content type check
+		check_storage_alloc($rpcenv, $user, $storeid);
+		check_volume_alloc($storecfg, $storeid);
 
 		$virtdev_hash->{$virtdev} = {
 		    format => $format,
@@ -6773,8 +6768,7 @@ my $restore_allocate_devices = sub {
 	    }
 	}
 
-	my $volid = PVE::Storage::vdisk_alloc(
-	    $storecfg, $storeid, $vmid, $d->{format}, $name, $alloc_size);
+	my $volid = alloc_volume_disk($storecfg, $storeid, $vmid, $d->{format}, $name, $alloc_size);
 
 	print STDERR "new volume ID is '$volid'\n";
 	$d->{volid} = $volid;
diff --git a/PVE/VZDump/QemuServer.pm b/PVE/VZDump/QemuServer.pm
index 012c9210..0037f648 100644
--- a/PVE/VZDump/QemuServer.pm
+++ b/PVE/VZDump/QemuServer.pm
@@ -26,7 +26,7 @@ use PVE::Format qw(render_duration render_bytes);
 
 use PVE::QemuConfig;
 use PVE::QemuServer;
-use PVE::QemuServer::Helpers;
+use PVE::QemuServer::Helpers qw(alloc_volume_disk);
 use PVE::QemuServer::Machine;
 use PVE::QemuServer::Monitor qw(mon_cmd);
 
@@ -553,7 +553,7 @@ my sub allocate_fleecing_images {
 
 		my $size = PVE::Tools::convert_size($di->{size}, 'b' => 'kb');
 
-		$di->{'fleece-volid'} = PVE::Storage::vdisk_alloc(
+		$di->{'fleece-volid'} = alloc_volume_disk(
 		    $self->{storecfg}, $fleecing_storeid, $vmid, $format, $name, $size);
 
 		$n++;
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Thread overview: 10+ messages
2024-09-16 16:38 [pve-devel] [RFC qemu-server 0/9] consistent checks for storage content types on volume disk allocation Daniel Kral
2024-09-16 16:38 ` [pve-devel] [RFC qemu-server 1/9] test: cfg2cmd: expect error for invalid volume's storage content type Daniel Kral
2024-09-16 16:38 ` [pve-devel] [RFC qemu-server 2/9] cfg2cmd: improve error message for invalid volume " Daniel Kral
2024-09-16 16:38 ` [pve-devel] [RFC qemu-server 3/9] fix #5284: move_vm: add check if target storage supports vm images Daniel Kral
2024-09-16 16:38 ` [pve-devel] [RFC qemu-server 4/9] api: clone_vm: add check if " Daniel Kral
2024-09-16 16:38 ` [pve-devel] [RFC qemu-server 5/9] api: create_vm: improve checks if storages for disks support " Daniel Kral
2024-09-16 16:38 ` [pve-devel] [RFC qemu-server 6/9] cloudinit: add check if storage for cloudinit disk supports " Daniel Kral
2024-09-16 16:38 ` [pve-devel] [RFC qemu-server 7/9] api: migrate_vm: improve check if target storages support " Daniel Kral
2024-09-16 16:38 ` [pve-devel] [RFC qemu-server 8/9] api: importdisk: improve check if storage supports " Daniel Kral
2024-09-16 16:38 ` Daniel Kral [this message]
