public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [PATCH guest-common/qemu-server/manager/docs v4] implement experimental vgpu live migration
@ 2024-06-06  9:21 Dominik Csapak
  2024-06-06  9:21 ` [pve-devel] [PATCH guest-common v4 1/6] mapping: pci: assert_valid: rename cfg to mapping Dominik Csapak
                   ` (25 more replies)
  0 siblings, 26 replies; 35+ messages in thread
From: Dominik Csapak @ 2024-06-06  9:21 UTC (permalink / raw)
  To: pve-devel

and some useful cleanups

This is implemented for mapped resources. It requires driver and
hardware support, but aside from NVIDIA vGPUs there don't seem to be
many drivers (if any) that support that.

QEMU already supports this for vfio-pci devices, so nothing needs to be
done there besides actively enabling it.

Since we currently can't properly test it here and it very much depends on
hardware/driver support, mark it as experimental everywhere (docs/api/gui).
(Though I tested the live-migration part manually here by using
"exec:cat > /tmp/test" for the migration target, and "exec: cat
/tmp/test" as the 'incoming' parameter for a new vm start, which worked ;) )

I opted for marking them migratable at the mapping level, but we could
theoretically also put it in the hostpciX config instead.
(though imho it fits better in the cluster-wide resource mapping config)

Also, the naming/texts could probably be improved, but I think
'live-migration-capable' is very descriptive and I didn't want to
use an overly short name for it (which can be confusing, see the
'shared' flag for storages)

guest-common 6/6 is optional and breaks qemu-server versions without
qemu-server patches 1&2

guest-common 1-4; qemu-server 1-6; pve-manager 1,2
are preparations/cleanups mostly and could be applied independently


changes from v3:
* rebased on master
* split first guest-common patch into 3
* instead of merging keys, just write all expected keys into expected_props
* made $cfg optional so it does not break callers that don't call it
* added patch to fix the cfg2cmd tests for mdev check
* added patch to show vfio state transferred for migration
* incorporated Fiona's feedback (mostly minor stuff)

for more details see the individual patches

changes from v2:
* rebased on master
* rework the rework of the properties check (pve-guest-common 1/4)
* properly check mdev in the gui (pve-manager 1/5)


pve-guest-common:

Dominik Csapak (6):
  mapping: pci: assert_valid: rename cfg to mapping
  mapping: pci: assert_valid: reword error messages
  mapping: pci: make sure all desired properties are checked
  mapping: pci: check the mdev configuration on the device too
  mapping: pci: add 'live-migration-capable' flag to mappings
  mapping: remove find_on_current_node

 src/PVE/Mapping/PCI.pm | 60 ++++++++++++++++++++++++------------------
 src/PVE/Mapping/USB.pm | 10 -------
 2 files changed, 34 insertions(+), 36 deletions(-)

qemu-server:

Dominik Csapak (12):
  usb: mapping: move implementation of find_on_current_node here
  pci: mapping: move implementation of find_on_current_node here
  pci: mapping: check mdev config against hardware
  stop cleanup: remove unnecessary tpmstate cleanup
  vm_stop_cleanup: add noerr parameter
  migrate: call vm_stop_cleanup after stopping in phase3_cleanup
  pci: set 'enable-migration' to on for live-migration marked mapped
    devices
  check_local_resources: add more info per mapped device and return as
    hash
  api: enable live migration for marked mapped pci devices
  api: include not mapped resources for running vms in migrate
    preconditions
  tests: cfg2cmd: fix mdev tests
  migration: show vfio state transferred too

 PVE/API2/Qemu.pm                 | 55 ++++++++++++++++++++------------
 PVE/CLI/qm.pm                    |  2 +-
 PVE/QemuMigrate.pm               | 44 +++++++++++++++++--------
 PVE/QemuServer.pm                | 38 +++++++++++-----------
 PVE/QemuServer/PCI.pm            | 14 ++++++--
 PVE/QemuServer/USB.pm            |  5 ++-
 test/MigrationTest/Shared.pm     |  3 ++
 test/run_config2command_tests.pl |  2 +-
 8 files changed, 104 insertions(+), 59 deletions(-)

pve-manager:

Dominik Csapak (5):
  mapping: pci: include mdev in config checks
  bulk migrate: improve precondition checks
  bulk migrate: include checks for live-migratable local resources
  ui: adapt migration window to precondition api change
  fix #5175: ui: allow configuring and live migration of mapped pci
    resources

 PVE/API2/Cluster/Mapping/PCI.pm   |  2 +-
 PVE/API2/Nodes.pm                 | 27 ++++++++++++++--
 www/manager6/dc/PCIMapView.js     |  6 ++++
 www/manager6/window/Migrate.js    | 51 ++++++++++++++++++++-----------
 www/manager6/window/PCIMapEdit.js | 12 ++++++++
 5 files changed, 76 insertions(+), 22 deletions(-)

pve-docs:

Dominik Csapak (2):
  qm: resource mapping: add description for `mdev` option
  qm: resource mapping: document `live-migration-capable` setting

 qm.adoc | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

-- 
2.39.2



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 35+ messages in thread

* [pve-devel] [PATCH guest-common v4 1/6] mapping: pci: assert_valid: rename cfg to mapping
  2024-06-06  9:21 [pve-devel] [PATCH guest-common/qemu-server/manager/docs v4] implement experimental vgpu live migration Dominik Csapak
@ 2024-06-06  9:21 ` Dominik Csapak
  2024-07-05  8:22   ` [pve-devel] applied: " Thomas Lamprecht
  2024-06-06  9:21 ` [pve-devel] [PATCH guest-common v4 2/6] mapping: pci: assert_valid: reword error messages Dominik Csapak
                   ` (24 subsequent siblings)
  25 siblings, 1 reply; 35+ messages in thread
From: Dominik Csapak @ 2024-06-06  9:21 UTC (permalink / raw)
  To: pve-devel

to make it clearer what it actually is. We also want to add the
'real' config as a parameter, so this makes it less confusing.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
split out in v4
 src/PVE/Mapping/PCI.pm | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/src/PVE/Mapping/PCI.pm b/src/PVE/Mapping/PCI.pm
index 725e106..3fba0c4 100644
--- a/src/PVE/Mapping/PCI.pm
+++ b/src/PVE/Mapping/PCI.pm
@@ -131,9 +131,9 @@ sub options {
 
 # checks if the given config is valid for the current node
 sub assert_valid {
-    my ($name, $cfg) = @_;
+    my ($name, $mapping) = @_;
 
-    my @paths = split(';', $cfg->{path} // '');
+    my @paths = split(';', $mapping->{path} // '');
 
     my $idx = 0;
     for my $path (@paths) {
@@ -160,15 +160,15 @@ sub assert_valid {
 	for my $prop (sort keys %$correct_props) {
 	    next if $prop eq 'iommugroup' && $idx > 0; # check iommu only on the first device
 
-	    next if !defined($correct_props->{$prop}) && !defined($cfg->{$prop});
+	    next if !defined($correct_props->{$prop}) && !defined($mapping->{$prop});
 	    die "no '$prop' for device '$path'\n"
-		if defined($correct_props->{$prop}) && !defined($cfg->{$prop});
+		if defined($correct_props->{$prop}) && !defined($mapping->{$prop});
 	    die "'$prop' configured but should not be\n"
-		if !defined($correct_props->{$prop}) && defined($cfg->{$prop});
+		if !defined($correct_props->{$prop}) && defined($mapping->{$prop});
 
 	    my $correct_prop = $correct_props->{$prop};
 	    $correct_prop =~ s/0x//g;
-	    my $configured_prop = $cfg->{$prop};
+	    my $configured_prop = $mapping->{$prop};
 	    $configured_prop =~ s/0x//g;
 
 	    die "'$prop' does not match for '$name' ($correct_prop != $configured_prop)\n"
-- 
2.39.2




* [pve-devel] [PATCH guest-common v4 2/6] mapping: pci: assert_valid: reword error messages
  2024-06-06  9:21 [pve-devel] [PATCH guest-common/qemu-server/manager/docs v4] implement experimental vgpu live migration Dominik Csapak
  2024-06-06  9:21 ` [pve-devel] [PATCH guest-common v4 1/6] mapping: pci: assert_valid: rename cfg to mapping Dominik Csapak
@ 2024-06-06  9:21 ` Dominik Csapak
  2024-07-05  8:22   ` [pve-devel] applied: " Thomas Lamprecht
  2024-06-06  9:21 ` [pve-devel] [PATCH guest-common v4 3/6] mapping: pci: make sure all desired properties are checked Dominik Csapak
                   ` (23 subsequent siblings)
  25 siblings, 1 reply; 35+ messages in thread
From: Dominik Csapak @ 2024-06-06  9:21 UTC (permalink / raw)
  To: pve-devel

makes them a bit clearer

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
split out in v4
 src/PVE/Mapping/PCI.pm | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/PVE/Mapping/PCI.pm b/src/PVE/Mapping/PCI.pm
index 3fba0c4..eb99819 100644
--- a/src/PVE/Mapping/PCI.pm
+++ b/src/PVE/Mapping/PCI.pm
@@ -161,9 +161,9 @@ sub assert_valid {
 	    next if $prop eq 'iommugroup' && $idx > 0; # check iommu only on the first device
 
 	    next if !defined($correct_props->{$prop}) && !defined($mapping->{$prop});
-	    die "no '$prop' for device '$path'\n"
+	    die "missing expected property '$prop' for device '$path'\n"
 		if defined($correct_props->{$prop}) && !defined($mapping->{$prop});
-	    die "'$prop' configured but should not be\n"
+	    die "unexpected property '$prop' configured for device '$path'\n"
 		if !defined($correct_props->{$prop}) && defined($mapping->{$prop});
 
 	    my $correct_prop = $correct_props->{$prop};
-- 
2.39.2




* [pve-devel] [PATCH guest-common v4 3/6] mapping: pci: make sure all desired properties are checked
  2024-06-06  9:21 [pve-devel] [PATCH guest-common/qemu-server/manager/docs v4] implement experimental vgpu live migration Dominik Csapak
  2024-06-06  9:21 ` [pve-devel] [PATCH guest-common v4 1/6] mapping: pci: assert_valid: rename cfg to mapping Dominik Csapak
  2024-06-06  9:21 ` [pve-devel] [PATCH guest-common v4 2/6] mapping: pci: assert_valid: reword error messages Dominik Csapak
@ 2024-06-06  9:21 ` Dominik Csapak
  2024-07-05  8:22   ` [pve-devel] applied: " Thomas Lamprecht
  2024-06-06  9:21 ` [pve-devel] [PATCH guest-common v4 4/6] mapping: pci: check the mdev configuration on the device too Dominik Csapak
                   ` (22 subsequent siblings)
  25 siblings, 1 reply; 35+ messages in thread
From: Dominik Csapak @ 2024-06-06  9:21 UTC (permalink / raw)
  To: pve-devel

by placing all expected properties from the hardware into an 'expected_props'
hash and those from the config into 'configured_props'

the names make it clearer what's what, and we can easily extend it, even
if the data does not come from the mapping (like we'll do with 'mdev')

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
changes from v3:
* rebased on split out patches before
* don't merge keys but add all to expected_props instead
 src/PVE/Mapping/PCI.pm | 27 ++++++++++++++++-----------
 1 file changed, 16 insertions(+), 11 deletions(-)

diff --git a/src/PVE/Mapping/PCI.pm b/src/PVE/Mapping/PCI.pm
index eb99819..aa56496 100644
--- a/src/PVE/Mapping/PCI.pm
+++ b/src/PVE/Mapping/PCI.pm
@@ -148,32 +148,37 @@ sub assert_valid {
 	my $info = PVE::SysFSTools::pci_device_info($path, 1);
 	die "pci device '$path' not found\n" if !defined($info);
 
-	my $correct_props = {
+	# make sure to initialize all keys that should be checked below
+	my $expected_props = {
 	    id => "$info->{vendor}:$info->{device}",
 	    iommugroup => $info->{iommugroup},
+	    'subsystem-id' => undef,
 	};
 
 	if (defined($info->{'subsystem_vendor'}) && defined($info->{'subsystem_device'})) {
-	    $correct_props->{'subsystem-id'} = "$info->{'subsystem_vendor'}:$info->{'subsystem_device'}";
+	    $expected_props->{'subsystem-id'} = "$info->{'subsystem_vendor'}:$info->{'subsystem_device'}";
 	}
 
-	for my $prop (sort keys %$correct_props) {
+	my $configured_props = { $mapping->%{qw(id iommugroup subsystem-id)} };
+
+	for my $prop (sort keys $expected_props->%*) {
 	    next if $prop eq 'iommugroup' && $idx > 0; # check iommu only on the first device
 
-	    next if !defined($correct_props->{$prop}) && !defined($mapping->{$prop});
+	    next if !defined($expected_props->{$prop}) && !defined($configured_props->{$prop});
 	    die "missing expected property '$prop' for device '$path'\n"
-		if defined($correct_props->{$prop}) && !defined($mapping->{$prop});
+		if defined($expected_props->{$prop}) && !defined($configured_props->{$prop});
 	    die "unexpected property '$prop' configured for device '$path'\n"
-		if !defined($correct_props->{$prop}) && defined($mapping->{$prop});
+		if !defined($expected_props->{$prop}) && defined($configured_props->{$prop});
 
-	    my $correct_prop = $correct_props->{$prop};
-	    $correct_prop =~ s/0x//g;
-	    my $configured_prop = $mapping->{$prop};
+	    my $expected_prop = $expected_props->{$prop};
+	    $expected_prop =~ s/0x//g;
+	    my $configured_prop = $configured_props->{$prop};
 	    $configured_prop =~ s/0x//g;
 
-	    die "'$prop' does not match for '$name' ($correct_prop != $configured_prop)\n"
-		if $correct_prop ne $configured_prop;
+	    die "'$prop' does not match for '$name' ($expected_prop != $configured_prop)\n"
+		if $expected_prop ne $configured_prop;
 	}
+
 	$idx++;
     }
 
-- 
2.39.2
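The comparison loop this patch introduces can be distilled into a small standalone sketch. The function name and test values below are made up for illustration, and the real code additionally special-cases 'iommugroup' for secondary devices:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Simplified sketch of the expected/configured property check from
# assert_valid. Both sides may contain undef for "not applicable";
# '0x' prefixes are stripped so '0x10de:0x1db6' matches '10de:1db6'.
sub check_props {
    my ($path, $expected_props, $configured_props) = @_;

    for my $prop (sort keys %$expected_props) {
	next if !defined($expected_props->{$prop}) && !defined($configured_props->{$prop});
	die "missing expected property '$prop' for device '$path'\n"
	    if defined($expected_props->{$prop}) && !defined($configured_props->{$prop});
	die "unexpected property '$prop' configured for device '$path'\n"
	    if !defined($expected_props->{$prop}) && defined($configured_props->{$prop});

	my ($expected, $configured) = ($expected_props->{$prop}, $configured_props->{$prop});
	s/0x//g for ($expected, $configured);
	die "'$prop' does not match ($expected != $configured)\n"
	    if $expected ne $configured;
    }
    return 1;
}
```

Initializing all keys (including 'subsystem-id' as undef) on the expected side is what makes the "unexpected property" branch reachable for every checked key.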




* [pve-devel] [PATCH guest-common v4 4/6] mapping: pci: check the mdev configuration on the device too
  2024-06-06  9:21 [pve-devel] [PATCH guest-common/qemu-server/manager/docs v4] implement experimental vgpu live migration Dominik Csapak
                   ` (2 preceding siblings ...)
  2024-06-06  9:21 ` [pve-devel] [PATCH guest-common v4 3/6] mapping: pci: make sure all desired properties are checked Dominik Csapak
@ 2024-06-06  9:21 ` Dominik Csapak
  2024-07-05  8:33   ` Thomas Lamprecht
  2024-06-06  9:22 ` [pve-devel] [PATCH guest-common v4 5/6] mapping: pci: add 'live-migration-capable' flag to mappings Dominik Csapak
                   ` (21 subsequent siblings)
  25 siblings, 1 reply; 35+ messages in thread
From: Dominik Csapak @ 2024-06-06  9:21 UTC (permalink / raw)
  To: pve-devel

but that lives in the 'global' part of the mapping config, not in a
specific mapping. To check that, add it to the relevant hashes here.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
changes from v3:
* leave $cfg optional

 src/PVE/Mapping/PCI.pm | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/src/PVE/Mapping/PCI.pm b/src/PVE/Mapping/PCI.pm
index aa56496..1b2ac52 100644
--- a/src/PVE/Mapping/PCI.pm
+++ b/src/PVE/Mapping/PCI.pm
@@ -131,7 +131,7 @@ sub options {
 
 # checks if the given config is valid for the current node
 sub assert_valid {
-    my ($name, $mapping) = @_;
+    my ($name, $mapping, $cfg) = @_;
 
     my @paths = split(';', $mapping->{path} // '');
 
@@ -161,6 +161,12 @@ sub assert_valid {
 
 	my $configured_props = { $mapping->%{qw(id iommugroup subsystem-id)} };
 
+	# check mdev from global mapping config, if that is given
+	if (defined($cfg)) {
+	    $expected_props->{mdev} = $info->{mdev} ? 1 : 0;
+	    $configured_props->{mdev} = $cfg->{mdev} ? 1 : 0;
+	}
+
 	for my $prop (sort keys $expected_props->%*) {
 	    next if $prop eq 'iommugroup' && $idx > 0; # check iommu only on the first device
 
-- 
2.39.2
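Because the hardware info and the global config can represent "mdev off" differently (undef, empty string, '0'), both sides are normalized to plain 0/1 before the generic comparison runs. A minimal illustration (the function name is made up):

```perl
use strict;
use warnings;

# Normalize the mdev flag from both sources to plain 0/1 so the
# generic string comparison in assert_valid treats them uniformly.
sub normalized_mdev {
    my ($hw_info, $global_cfg) = @_;
    return {
	expected   => $hw_info->{mdev} ? 1 : 0,    # what the hardware reports
	configured => $global_cfg->{mdev} ? 1 : 0, # what the mapping config requests
    };
}
```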




* [pve-devel] [PATCH guest-common v4 5/6] mapping: pci: add 'live-migration-capable' flag to mappings
  2024-06-06  9:21 [pve-devel] [PATCH guest-common/qemu-server/manager/docs v4] implement experimental vgpu live migration Dominik Csapak
                   ` (3 preceding siblings ...)
  2024-06-06  9:21 ` [pve-devel] [PATCH guest-common v4 4/6] mapping: pci: check the mdev configuration on the device too Dominik Csapak
@ 2024-06-06  9:22 ` Dominik Csapak
  2024-06-06  9:22 ` [pve-devel] [PATCH guest-common v4 6/6] mapping: remove find_on_current_node Dominik Csapak
                   ` (20 subsequent siblings)
  25 siblings, 0 replies; 35+ messages in thread
From: Dominik Csapak @ 2024-06-06  9:22 UTC (permalink / raw)
  To: pve-devel

so that we can decide in qemu-server whether to allow live migration.
The driver and QEMU must be capable of that, and it's the
admin's responsibility to know and configure that.

Mark the option as experimental in the description.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 src/PVE/Mapping/PCI.pm | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/src/PVE/Mapping/PCI.pm b/src/PVE/Mapping/PCI.pm
index 1b2ac52..a616f91 100644
--- a/src/PVE/Mapping/PCI.pm
+++ b/src/PVE/Mapping/PCI.pm
@@ -105,6 +105,13 @@ my $defaultData = {
 	    optional => 1,
 	    default => 0,
 	},
+	'live-migration-capable' => {
+	    description => "Marks the device(s) as being able to be live-migrated (Experimental)."
+		." This needs hardware and driver support to work.",
+	    type => 'boolean',
+	    optional => 1,
+	    default => 0,
+	},
 	map => {
 	    type => 'array',
 	    description => 'A list of maps for the cluster nodes.',
@@ -125,6 +132,7 @@ sub options {
     return {
 	description => { optional => 1 },
 	mdev => { optional => 1 },
+	'live-migration-capable' => { optional => 1 },
 	map => {},
     };
 }
-- 
2.39.2
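With this option, a cluster-wide mapping entry could look roughly like the sketch below. The IDs, paths and the section name are illustrative only, and the exact on-disk layout of the mapping config is not part of this patch:

```
somegpu
	map node=node1,path=0000:01:00.0,id=10de:1db6,iommugroup=2
	mdev 1
	live-migration-capable 1
```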




* [pve-devel] [PATCH guest-common v4 6/6] mapping: remove find_on_current_node
  2024-06-06  9:21 [pve-devel] [PATCH guest-common/qemu-server/manager/docs v4] implement experimental vgpu live migration Dominik Csapak
                   ` (4 preceding siblings ...)
  2024-06-06  9:22 ` [pve-devel] [PATCH guest-common v4 5/6] mapping: pci: add 'live-migration-capable' flag to mappings Dominik Csapak
@ 2024-06-06  9:22 ` Dominik Csapak
  2024-06-06  9:22 ` [pve-devel] [PATCH qemu-server v4 01/12] usb: mapping: move implementation of find_on_current_node here Dominik Csapak
                   ` (19 subsequent siblings)
  25 siblings, 0 replies; 35+ messages in thread
From: Dominik Csapak @ 2024-06-06  9:22 UTC (permalink / raw)
  To: pve-devel

they only have one user each (where we can inline the implementation).
It's easy enough to recreate should we need to.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
changes from v3:
* remove also now unneeded PVE::INotify use statement

 src/PVE/Mapping/PCI.pm | 11 -----------
 src/PVE/Mapping/USB.pm | 10 ----------
 2 files changed, 21 deletions(-)

diff --git a/src/PVE/Mapping/PCI.pm b/src/PVE/Mapping/PCI.pm
index a616f91..9875df9 100644
--- a/src/PVE/Mapping/PCI.pm
+++ b/src/PVE/Mapping/PCI.pm
@@ -9,7 +9,6 @@ use PVE::Cluster qw(
     cfs_register_file
     cfs_write_file
 );
-use PVE::INotify ();
 use PVE::JSONSchema qw(get_standard_option parse_property_string);
 use PVE::SysFSTools ();
 
@@ -218,16 +217,6 @@ sub write_pci_config {
     cfs_write_file($FILENAME, $cfg);
 }
 
-sub find_on_current_node {
-    my ($id) = @_;
-
-    my $cfg = PVE::Mapping::PCI::config();
-    my $node = PVE::INotify::nodename();
-
-    # ignore errors
-    return get_node_mapping($cfg, $id, $node);
-}
-
 sub get_node_mapping {
     my ($cfg, $id, $nodename) = @_;
 
diff --git a/src/PVE/Mapping/USB.pm b/src/PVE/Mapping/USB.pm
index 34bc3cb..19468d8 100644
--- a/src/PVE/Mapping/USB.pm
+++ b/src/PVE/Mapping/USB.pm
@@ -9,7 +9,6 @@ use PVE::Cluster qw(
     cfs_register_file
     cfs_write_file
 );
-use PVE::INotify ();
 use PVE::JSONSchema qw(get_standard_option parse_property_string);
 use PVE::SysFSTools ();
 
@@ -155,15 +154,6 @@ sub write_usb_config {
     cfs_write_file($FILENAME, $cfg);
 }
 
-sub find_on_current_node {
-    my ($id) = @_;
-
-    my $cfg = config();
-    my $node = PVE::INotify::nodename();
-
-    return get_node_mapping($cfg, $id, $node);
-}
-
 sub get_node_mapping {
     my ($cfg, $id, $nodename) = @_;
 
-- 
2.39.2




* [pve-devel] [PATCH qemu-server v4 01/12] usb: mapping: move implementation of find_on_current_node here
  2024-06-06  9:21 [pve-devel] [PATCH guest-common/qemu-server/manager/docs v4] implement experimental vgpu live migration Dominik Csapak
                   ` (5 preceding siblings ...)
  2024-06-06  9:22 ` [pve-devel] [PATCH guest-common v4 6/6] mapping: remove find_on_current_node Dominik Csapak
@ 2024-06-06  9:22 ` Dominik Csapak
  2024-07-04 11:29   ` Thomas Lamprecht
  2024-06-06  9:22 ` [pve-devel] [PATCH qemu-server v4 02/12] pci: " Dominik Csapak
                   ` (18 subsequent siblings)
  25 siblings, 1 reply; 35+ messages in thread
From: Dominik Csapak @ 2024-06-06  9:22 UTC (permalink / raw)
  To: pve-devel

this was the only user, and it's easy enough to inline

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
no changes
 PVE/QemuServer/USB.pm | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/PVE/QemuServer/USB.pm b/PVE/QemuServer/USB.pm
index 49957444..ecd0361d 100644
--- a/PVE/QemuServer/USB.pm
+++ b/PVE/QemuServer/USB.pm
@@ -5,6 +5,7 @@ use warnings;
 use PVE::QemuServer::PCI qw(print_pci_addr);
 use PVE::QemuServer::Machine;
 use PVE::QemuServer::Helpers qw(min_version windows_version);
+use PVE::INotify;
 use PVE::JSONSchema;
 use PVE::Mapping::USB;
 use base 'Exporter';
@@ -91,7 +92,9 @@ sub parse_usb_device {
 	    $res->{spice} = 1;
 	}
     } elsif (defined($mapping)) {
-	my $devices = PVE::Mapping::USB::find_on_current_node($mapping);
+	my $config = PVE::Mapping::USB::config();
+	my $node = PVE::INotify::nodename();
+	my $devices = PVE::Mapping::USB::get_node_mapping($config, $mapping, $node);
 	die "USB device mapping not found for '$mapping'\n" if !$devices || !scalar($devices->@*);
 	die "More than one USB mapping per host not supported\n" if scalar($devices->@*) > 1;
 	eval {
-- 
2.39.2




* [pve-devel] [PATCH qemu-server v4 02/12] pci: mapping: move implementation of find_on_current_node here
  2024-06-06  9:21 [pve-devel] [PATCH guest-common/qemu-server/manager/docs v4] implement experimental vgpu live migration Dominik Csapak
                   ` (6 preceding siblings ...)
  2024-06-06  9:22 ` [pve-devel] [PATCH qemu-server v4 01/12] usb: mapping: move implementation of find_on_current_node here Dominik Csapak
@ 2024-06-06  9:22 ` Dominik Csapak
  2024-06-06  9:22 ` [pve-devel] [PATCH qemu-server v4 03/12] pci: mapping: check mdev config against hardware Dominik Csapak
                   ` (17 subsequent siblings)
  25 siblings, 0 replies; 35+ messages in thread
From: Dominik Csapak @ 2024-06-06  9:22 UTC (permalink / raw)
  To: pve-devel

this was the only user, and it's easy enough to inline

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
no changes
 PVE/QemuServer/PCI.pm | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/PVE/QemuServer/PCI.pm b/PVE/QemuServer/PCI.pm
index 1673041b..7ff9cad7 100644
--- a/PVE/QemuServer/PCI.pm
+++ b/PVE/QemuServer/PCI.pm
@@ -3,6 +3,7 @@ package PVE::QemuServer::PCI;
 use warnings;
 use strict;
 
+use PVE::INotify;
 use PVE::JSONSchema;
 use PVE::Mapping::PCI;
 use PVE::SysFSTools;
@@ -429,7 +430,9 @@ sub parse_hostpci {
 
     if ($mapping) {
 	# we have no ordinary pci id, must be a mapping
-	my $devices = PVE::Mapping::PCI::find_on_current_node($mapping);
+	my $config = PVE::Mapping::PCI::config();
+	my $node = PVE::INotify::nodename();
+	my $devices = PVE::Mapping::PCI::get_node_mapping($config, $mapping, $node);
 	die "PCI device mapping not found for '$mapping'\n" if !$devices || !scalar($devices->@*);
 
 	for my $device ($devices->@*) {
-- 
2.39.2




* [pve-devel] [PATCH qemu-server v4 03/12] pci: mapping: check mdev config against hardware
  2024-06-06  9:21 [pve-devel] [PATCH guest-common/qemu-server/manager/docs v4] implement experimental vgpu live migration Dominik Csapak
                   ` (7 preceding siblings ...)
  2024-06-06  9:22 ` [pve-devel] [PATCH qemu-server v4 02/12] pci: " Dominik Csapak
@ 2024-06-06  9:22 ` Dominik Csapak
  2024-06-06  9:22 ` [pve-devel] [PATCH qemu-server v4 04/12] stop cleanup: remove unnecessary tpmstate cleanup Dominik Csapak
                   ` (16 subsequent siblings)
  25 siblings, 0 replies; 35+ messages in thread
From: Dominik Csapak @ 2024-06-06  9:22 UTC (permalink / raw)
  To: pve-devel

by giving the mapping config to assert_valid, not only the specific mapping

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
no changes
 PVE/QemuServer/PCI.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/QemuServer/PCI.pm b/PVE/QemuServer/PCI.pm
index 7ff9cad7..6ba43ee8 100644
--- a/PVE/QemuServer/PCI.pm
+++ b/PVE/QemuServer/PCI.pm
@@ -436,7 +436,7 @@ sub parse_hostpci {
 	die "PCI device mapping not found for '$mapping'\n" if !$devices || !scalar($devices->@*);
 
 	for my $device ($devices->@*) {
-	    eval { PVE::Mapping::PCI::assert_valid($mapping, $device) };
+	    eval { PVE::Mapping::PCI::assert_valid($mapping, $device, $config->{ids}->{$mapping}) };
 	    die "PCI device mapping invalid (hardware probably changed): $@\n" if $@;
 	    push $alternatives->@*, [split(/;/, $device->{path})];
 	}
-- 
2.39.2




* [pve-devel] [PATCH qemu-server v4 04/12] stop cleanup: remove unnecessary tpmstate cleanup
  2024-06-06  9:21 [pve-devel] [PATCH guest-common/qemu-server/manager/docs v4] implement experimental vgpu live migration Dominik Csapak
                   ` (8 preceding siblings ...)
  2024-06-06  9:22 ` [pve-devel] [PATCH qemu-server v4 03/12] pci: mapping: check mdev config against hardware Dominik Csapak
@ 2024-06-06  9:22 ` Dominik Csapak
  2024-07-05  8:36   ` [pve-devel] applied: " Thomas Lamprecht
  2024-06-06  9:22 ` [pve-devel] [PATCH qemu-server v4 05/12] vm_stop_cleanup: add noerr parameter Dominik Csapak
                   ` (15 subsequent siblings)
  25 siblings, 1 reply; 35+ messages in thread
From: Dominik Csapak @ 2024-06-06  9:22 UTC (permalink / raw)
  To: pve-devel

tpmstate0 is already included in `get_vm_volumes`, and our only storage
plugin that has unmap_volume implemented is the RBDPlugin, where we call
unmap in `deactivate_volume`. So it's already unmapped by the
`deactivate_volumes` calls above.

For third-party storage plugins, it's natural to expect that
deactivate_volume() would also remove a mapping for the volume just
like RBDPlugin does.

While there is an explicit map_volume() call in start_swtpm(), a
third-party plugin might expect an explicit unmap_volume() call too.
However, the order of calls right now is
1. activate_volume()
2. map_volume()
3. deactivate_volume()
4. unmap_volume()

Which seems like it could cause problems already for third-party
plugins relying on an explicit unmap call.

All that said, it is unlikely that a third-party plugin breaks. If it
really happens, it can be discussed/adapted to the actual needs still.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Acked-by: Fiona Ebner <f.ebner@proxmox.com>
---
changes from v3:
* include rationale for 3rd party plugins (thanks @fiona)

 PVE/QemuServer.pm | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 5df0c96d..bbad02f4 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -6175,14 +6175,6 @@ sub vm_stop_cleanup {
 	if (!$keepActive) {
 	    my $vollist = get_vm_volumes($conf);
 	    PVE::Storage::deactivate_volumes($storecfg, $vollist);
-
-	    if (my $tpmdrive = $conf->{tpmstate0}) {
-		my $tpm = parse_drive("tpmstate0", $tpmdrive);
-		my ($storeid, $volname) = PVE::Storage::parse_volume_id($tpm->{file}, 1);
-		if ($storeid) {
-		    PVE::Storage::unmap_volume($storecfg, $tpm->{file});
-		}
-	    }
 	}
 
 	foreach my $ext (qw(mon qmp pid vnc qga)) {
-- 
2.39.2




* [pve-devel] [PATCH qemu-server v4 05/12] vm_stop_cleanup: add noerr parameter
  2024-06-06  9:21 [pve-devel] [PATCH guest-common/qemu-server/manager/docs v4] implement experimental vgpu live migration Dominik Csapak
                   ` (9 preceding siblings ...)
  2024-06-06  9:22 ` [pve-devel] [PATCH qemu-server v4 04/12] stop cleanup: remove unnecessary tpmstate cleanup Dominik Csapak
@ 2024-06-06  9:22 ` Dominik Csapak
  2024-07-04 12:24   ` Thomas Lamprecht
  2024-06-06  9:22 ` [pve-devel] [PATCH qemu-server v4 06/12] migrate: call vm_stop_cleanup after stopping in phase3_cleanup Dominik Csapak
                   ` (14 subsequent siblings)
  25 siblings, 1 reply; 35+ messages in thread
From: Dominik Csapak @ 2024-06-06  9:22 UTC (permalink / raw)
  To: pve-devel

and set it on all current users

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
no changes
 PVE/CLI/qm.pm     |  2 +-
 PVE/QemuServer.pm | 13 ++++++++-----
 2 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
index b105830f..fbc590f5 100755
--- a/PVE/CLI/qm.pm
+++ b/PVE/CLI/qm.pm
@@ -933,7 +933,7 @@ __PACKAGE__->register_method({
 
 	    if (!$clean || $guest) {
 		# vm was shutdown from inside the guest or crashed, doing api cleanup
-		PVE::QemuServer::vm_stop_cleanup($storecfg, $vmid, $conf, 0, 0);
+		PVE::QemuServer::vm_stop_cleanup($storecfg, $vmid, $conf, 0, 0, 1);
 	    }
 	    PVE::GuestHelpers::exec_hookscript($conf, $vmid, 'post-stop');
 
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index bbad02f4..2d583ddf 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -6168,7 +6168,7 @@ sub cleanup_pci_devices {
 }
 
 sub vm_stop_cleanup {
-    my ($storecfg, $vmid, $conf, $keepActive, $apply_pending_changes) = @_;
+    my ($storecfg, $vmid, $conf, $keepActive, $apply_pending_changes, $noerr) = @_;
 
     eval {
 
@@ -6194,7 +6194,10 @@ sub vm_stop_cleanup {
 
 	vmconfig_apply_pending($vmid, $conf, $storecfg) if $apply_pending_changes;
     };
-    warn $@ if $@; # avoid errors - just warn
+    if (my $err = $@) {
+	die $err if !$noerr;
+	warn $err;
+    }
 }
 
 # call only in locked context
@@ -6245,7 +6248,7 @@ sub _do_vm_stop {
 		die "VM quit/powerdown failed - got timeout\n";
 	    }
 	} else {
-	    vm_stop_cleanup($storecfg, $vmid, $conf, $keepActive, 1) if $conf;
+	    vm_stop_cleanup($storecfg, $vmid, $conf, $keepActive, 1, 1) if $conf;
 	    return;
 	}
     } else {
@@ -6276,7 +6279,7 @@ sub _do_vm_stop {
 	sleep 1;
     }
 
-    vm_stop_cleanup($storecfg, $vmid, $conf, $keepActive, 1) if $conf;
+    vm_stop_cleanup($storecfg, $vmid, $conf, $keepActive, 1, 1) if $conf;
 }
 
 # Note: use $nocheck to skip tests if VM configuration file exists.
@@ -6291,7 +6294,7 @@ sub vm_stop {
 	my $pid = check_running($vmid, $nocheck, $migratedfrom);
 	kill 15, $pid if $pid;
 	my $conf = PVE::QemuConfig->load_config($vmid, $migratedfrom);
-	vm_stop_cleanup($storecfg, $vmid, $conf, $keepActive, 0);
+	vm_stop_cleanup($storecfg, $vmid, $conf, $keepActive, 0, 1);
 	return;
     }
 
-- 
2.39.2



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



* [pve-devel] [PATCH qemu-server v4 06/12] migrate: call vm_stop_cleanup after stopping in phase3_cleanup
  2024-06-06  9:21 [pve-devel] [PATCH guest-common/qemu-server/manager/docs v4] implement experimental vgpu live migration Dominik Csapak
                   ` (10 preceding siblings ...)
  2024-06-06  9:22 ` [pve-devel] [PATCH qemu-server v4 05/12] vm_stop_cleanup: add noerr parameter Dominik Csapak
@ 2024-06-06  9:22 ` Dominik Csapak
  2024-06-06  9:22 ` [pve-devel] [PATCH qemu-server v4 07/12] pci: set 'enable-migration' to on for live-migration marked mapped devices Dominik Csapak
                   ` (13 subsequent siblings)
  25 siblings, 0 replies; 35+ messages in thread
From: Dominik Csapak @ 2024-06-06  9:22 UTC (permalink / raw)
  To: pve-devel

we currently only call deactivate_volumes, but we actually want to call
the whole vm_stop_cleanup, since that is not invoked by the vm_stop
above (the config cannot be parsed anymore at that point) and it
performs other cleanups we also want (like the mdev cleanup).

For this to work properly we have to clone the original config at the
beginning, since we might modify the volids.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
changes from v3:
* reword error message
* change 'return 1' to 'return' in test
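
as a side note, the reason for the dclone can be sketched in plain perl
(the config key and volids here are made up):

```perl
use Storable qw(dclone);

# hypothetical config with a single disk
my $conf = { scsi0 => 'local-lvm:vm-100-disk-0' };

# deep copy taken before the volids get rewritten
my $oldconf = dclone($conf);

# later, applying the volume map rewrites the volid in $conf
$conf->{scsi0} = 'target-store:vm-100-disk-0';

# $oldconf->{scsi0} still points to the source volume,
# which is what the cleanup after stopping needs
```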
 PVE/QemuMigrate.pm           | 12 ++++++------
 test/MigrationTest/Shared.pm |  3 +++
 2 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 33d5b2d1..4cad14ef 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -5,6 +5,7 @@ use warnings;
 
 use IO::File;
 use IPC::Open2;
+use Storable qw(dclone);
 use Time::HiRes qw( usleep );
 
 use PVE::AccessControl;
@@ -1458,7 +1459,8 @@ sub phase3_cleanup {
 
     my $tunnel = $self->{tunnel};
 
-    my $sourcevollist = PVE::QemuServer::get_vm_volumes($conf);
+    # we'll need an unmodified copy of the config later for the cleanup
+    my $oldconf = dclone($conf);
 
     if ($self->{volume_map} && !$self->{opts}->{remote}) {
 	my $target_drives = $self->{target_drive};
@@ -1589,12 +1591,10 @@ sub phase3_cleanup {
 	$self->{errors} = 1;
     }
 
-    # always deactivate volumes - avoid lvm LVs to be active on several nodes
-    eval {
-	PVE::Storage::deactivate_volumes($self->{storecfg}, $sourcevollist);
-    };
+    # stop with nocheck does not do a cleanup, so do it here with the original config
+    eval { PVE::QemuServer::vm_stop_cleanup($self->{storecfg}, $vmid, $oldconf) };
     if (my $err = $@) {
-	$self->log('err', $err);
+	$self->log('err', "Cleanup after stopping VM failed - $err");
 	$self->{errors} = 1;
     }
 
diff --git a/test/MigrationTest/Shared.pm b/test/MigrationTest/Shared.pm
index aa7203d1..e69bf84f 100644
--- a/test/MigrationTest/Shared.pm
+++ b/test/MigrationTest/Shared.pm
@@ -130,6 +130,9 @@ $qemu_server_module->mock(
     clear_reboot_request => sub {
 	return 1;
     },
+    vm_stop_cleanup => sub {
+	return;
+    },
     get_efivars_size => sub {
 	 return 128 * 1024;
     },
-- 
2.39.2






* [pve-devel] [PATCH qemu-server v4 07/12] pci: set 'enable-migration' to on for live-migration marked mapped devices
  2024-06-06  9:21 [pve-devel] [PATCH guest-common/qemu-server/manager/docs v4] implement experimental vgpu live migration Dominik Csapak
                   ` (11 preceding siblings ...)
  2024-06-06  9:22 ` [pve-devel] [PATCH qemu-server v4 06/12] migrate: call vm_stop_cleanup after stopping in phase3_cleanup Dominik Csapak
@ 2024-06-06  9:22 ` Dominik Csapak
  2024-06-06  9:22 ` [pve-devel] [PATCH qemu-server v4 08/12] check_local_resources: add more info per mapped device and return as hash Dominik Csapak
                   ` (12 subsequent siblings)
  25 siblings, 0 replies; 35+ messages in thread
From: Dominik Csapak @ 2024-06-06  9:22 UTC (permalink / raw)
  To: pve-devel

the default is 'auto', but for devices that are marked as capable of
live migration, we want to enable it explicitly, so we get an early
error on start if the driver does not support it.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
no changes
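
for a mapping marked as live-migration-capable, the generated -device
argument would then roughly look like this (host address and ids are
made up):

```
-device vfio-pci,host=0000:01:00.4,enable-migration=on,id=hostpci0,bus=pci.0,addr=0x10
```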
 PVE/QemuServer/PCI.pm | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/PVE/QemuServer/PCI.pm b/PVE/QemuServer/PCI.pm
index 6ba43ee8..df2ea3eb 100644
--- a/PVE/QemuServer/PCI.pm
+++ b/PVE/QemuServer/PCI.pm
@@ -435,8 +435,11 @@ sub parse_hostpci {
 	my $devices = PVE::Mapping::PCI::get_node_mapping($config, $mapping, $node);
 	die "PCI device mapping not found for '$mapping'\n" if !$devices || !scalar($devices->@*);
 
+	my $cfg = $config->{ids}->{$mapping};
+	$res->{'live-migration-capable'} = 1 if $cfg->{'live-migration-capable'};
+
 	for my $device ($devices->@*) {
-	    eval { PVE::Mapping::PCI::assert_valid($mapping, $device, $config->{ids}->{$mapping}) };
+	    eval { PVE::Mapping::PCI::assert_valid($mapping, $device, $cfg) };
 	    die "PCI device mapping invalid (hardware probably changed): $@\n" if $@;
 	    push $alternatives->@*, [split(/;/, $device->{path})];
 	}
@@ -635,6 +638,10 @@ sub print_hostpci_devices {
 		$devicestr .= ",host=$pcidevice->{id}";
 	    }
 
+	    if ($d->{'live-migration-capable'}) {
+		$devicestr .= ",enable-migration=on"
+	    }
+
 	    my $mf_addr = $multifunction ? ".$j" : '';
 	    $devicestr .= ",id=${id}${mf_addr}${pciaddr}${mf_addr}";
 
-- 
2.39.2






* [pve-devel] [PATCH qemu-server v4 08/12] check_local_resources: add more info per mapped device and return as hash
  2024-06-06  9:21 [pve-devel] [PATCH guest-common/qemu-server/manager/docs v4] implement experimental vgpu live migration Dominik Csapak
                   ` (12 preceding siblings ...)
  2024-06-06  9:22 ` [pve-devel] [PATCH qemu-server v4 07/12] pci: set 'enable-migration' to on for live-migration marked mapped devices Dominik Csapak
@ 2024-06-06  9:22 ` Dominik Csapak
  2024-06-06  9:22 ` [pve-devel] [PATCH qemu-server v4 09/12] api: enable live migration for marked mapped pci devices Dominik Csapak
                   ` (11 subsequent siblings)
  25 siblings, 0 replies; 35+ messages in thread
From: Dominik Csapak @ 2024-06-06  9:22 UTC (permalink / raw)
  To: pve-devel

such as the mapping name and whether it is marked for live migration (pci only)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
changes from v3:
* sort keys
* move text variable into if branch
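
a sketch of the new return shape (the mapping names are made up):

```perl
# before: $mapped_res was a plain list of config keys
# my $mapped_res = [ 'hostpci0', 'usb0' ];

# now: a hash with extra info per mapped device
my $mapped_res = {
    hostpci0 => { name => 'someGpu', 'live-migration' => 1 },
    usb0     => { name => 'someUsb' },
};
```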

 PVE/API2/Qemu.pm   |  2 +-
 PVE/QemuMigrate.pm |  7 ++++---
 PVE/QemuServer.pm  | 17 ++++++++++-------
 3 files changed, 15 insertions(+), 11 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 2a1d4d79..65195515 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -4538,7 +4538,7 @@ __PACKAGE__->register_method({
 	$res->{local_disks} = [ values %$local_disks ];;
 
 	$res->{local_resources} = $local_resources;
-	$res->{'mapped-resources'} = $mapped_resources;
+	$res->{'mapped-resources'} = [ sort keys $mapped_resources->%* ];
 
 	return $res;
 
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 4cad14ef..a87e11c0 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -236,7 +236,7 @@ sub prepare {
     my ($loc_res, $mapped_res, $missing_mappings_by_node) = PVE::QemuServer::check_local_resources($conf, 1);
     my $blocking_resources = [];
     for my $res ($loc_res->@*) {
-	if (!grep($res, $mapped_res->@*)) {
+	if (!defined($mapped_res->{$res})) {
 	    push $blocking_resources->@*, $res;
 	}
     }
@@ -248,10 +248,11 @@ sub prepare {
 	}
     }
 
-    if (scalar($mapped_res->@*)) {
+    if (scalar(keys $mapped_res->%*)) {
 	my $missing_mappings = $missing_mappings_by_node->{$self->{node}};
 	if ($running) {
-	    die "can't migrate running VM which uses mapped devices: " . join(", ", $mapped_res->@*) . "\n";
+	    my $mapped_text = join(", ", sort keys $mapped_res->%*);
+	    die "can't migrate running VM which uses mapped devices: $mapped_text\n";
 	} elsif (scalar($missing_mappings->@*)) {
 	    die "can't migrate to '$self->{node}': missing mapped devices " . join(", ", $missing_mappings->@*) . "\n";
 	} else {
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 2d583ddf..5275d7a4 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2562,7 +2562,7 @@ sub check_local_resources {
     my ($conf, $noerr) = @_;
 
     my @loc_res = ();
-    my $mapped_res = [];
+    my $mapped_res = {};
 
     my $nodelist = PVE::Cluster::get_nodelist();
     my $pci_map = PVE::Mapping::PCI::config();
@@ -2594,16 +2594,19 @@ sub check_local_resources {
 	if ($k =~ m/^usb/) {
 	    my $entry = parse_property_string('pve-qm-usb', $conf->{$k});
 	    next if $entry->{host} && $entry->{host} =~ m/^spice$/i;
-	    if ($entry->{mapping}) {
-		$add_missing_mapping->('usb', $k, $entry->{mapping});
-		push @$mapped_res, $k;
+	    if (my $name = $entry->{mapping}) {
+		$add_missing_mapping->('usb', $k, $name);
+		$mapped_res->{$k} = { name => $name };
 	    }
 	}
 	if ($k =~ m/^hostpci/) {
 	    my $entry = parse_property_string('pve-qm-hostpci', $conf->{$k});
-	    if ($entry->{mapping}) {
-		$add_missing_mapping->('pci', $k, $entry->{mapping});
-		push @$mapped_res, $k;
+	    if (my $name = $entry->{mapping}) {
+		$add_missing_mapping->('pci', $k, $name);
+		my $mapped_device = { name => $name };
+		$mapped_device->{'live-migration'} = 1
+		    if $pci_map->{ids}->{$name}->{'live-migration-capable'};
+		$mapped_res->{$k} = $mapped_device;
 	    }
 	}
 	# sockets are safe: they will recreated be on the target side post-migrate
-- 
2.39.2






* [pve-devel] [PATCH qemu-server v4 09/12] api: enable live migration for marked mapped pci devices
  2024-06-06  9:21 [pve-devel] [PATCH guest-common/qemu-server/manager/docs v4] implement experimental vgpu live migration Dominik Csapak
                   ` (13 preceding siblings ...)
  2024-06-06  9:22 ` [pve-devel] [PATCH qemu-server v4 08/12] check_local_resources: add more info per mapped device and return as hash Dominik Csapak
@ 2024-06-06  9:22 ` Dominik Csapak
  2024-06-06  9:22 ` [pve-devel] [PATCH qemu-server v4 10/12] api: include not mapped resources for running vms in migrate preconditions Dominik Csapak
                   ` (10 subsequent siblings)
  25 siblings, 0 replies; 35+ messages in thread
From: Dominik Csapak @ 2024-06-06  9:22 UTC (permalink / raw)
  To: pve-devel

They have to be marked as 'live-migration-capable' in the mapping
config, and the driver and qemu must support it.

For the gui checks, we now return the whole object of the mapped
resources, which includes info like the name and whether it is marked as
live-migration capable. (while deprecating the old 'mapped-resources'
return value, since that returns strictly less information)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
changes from v3:
* added reminder to remove deprecated value
* shorten too long lines
* sort keys for mapped_res
 PVE/API2/Qemu.pm   |  8 +++++++-
 PVE/QemuMigrate.pm | 17 ++++++++++++-----
 2 files changed, 19 insertions(+), 6 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 65195515..24b82c6e 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -4471,9 +4471,14 @@ __PACKAGE__->register_method({
 		type => 'array',
 		description => "List local resources e.g. pci, usb"
 	    },
+	    # FIXME: remove with 9.0
 	    'mapped-resources' => {
 		type => 'array',
-		description => "List of mapped resources e.g. pci, usb"
+		description => "List of mapped resources e.g. pci, usb. Deprecated, use 'mapped-resource-info' instead."
+	    },
+	    'mapped-resource-info' => {
+		type => 'object',
+		description => "Object of mapped resources with additional information such if they're live migratable.",
 	    },
 	},
     },
@@ -4539,6 +4544,7 @@ __PACKAGE__->register_method({
 
 	$res->{local_resources} = $local_resources;
 	$res->{'mapped-resources'} = [ sort keys $mapped_resources->%* ];
+	$res->{'mapped-resource-info'} = $mapped_resources;
 
 	return $res;
 
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index a87e11c0..41ff12c2 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -250,11 +250,18 @@ sub prepare {
 
     if (scalar(keys $mapped_res->%*)) {
 	my $missing_mappings = $missing_mappings_by_node->{$self->{node}};
-	if ($running) {
-	    my $mapped_text = join(", ", sort keys $mapped_res->%*);
-	    die "can't migrate running VM which uses mapped devices: $mapped_text\n";
-	} elsif (scalar($missing_mappings->@*)) {
-	    die "can't migrate to '$self->{node}': missing mapped devices " . join(", ", $missing_mappings->@*) . "\n";
+	my $missing_live_mappings = [];
+	for my $key (sort keys $mapped_res->%*) {
+	    my $res = $mapped_res->{$key};
+	    my $name = "$key:$res->{name}";
+	    push $missing_live_mappings->@*, $name if !$res->{'live-migration'};
+	}
+	if (scalar($missing_mappings->@*)) {
+	    my $missing = join(", ", $missing_mappings->@*);
+	    die "can't migrate to '$self->{node}': missing mapped devices $missing\n";
+	} elsif ($running && scalar($missing_live_mappings->@*)) {
+	    my $missing = join(", ", $missing_live_mappings->@*);
+	    die "can't live migrate running VM which uses following mapped devices: $missing\n";
 	} else {
 	    $self->log('info', "migrating VM which uses mapped local devices");
 	}
-- 
2.39.2






* [pve-devel] [PATCH qemu-server v4 10/12] api: include not mapped resources for running vms in migrate preconditions
  2024-06-06  9:21 [pve-devel] [PATCH guest-common/qemu-server/manager/docs v4] implement experimental vgpu live migration Dominik Csapak
                   ` (14 preceding siblings ...)
  2024-06-06  9:22 ` [pve-devel] [PATCH qemu-server v4 09/12] api: enable live migration for marked mapped pci devices Dominik Csapak
@ 2024-06-06  9:22 ` Dominik Csapak
  2024-06-06  9:22 ` [pve-devel] [PATCH qemu-server v4 11/12] tests: cfg2cmd: fix mdev tests Dominik Csapak
                   ` (9 subsequent siblings)
  25 siblings, 0 replies; 35+ messages in thread
From: Dominik Csapak @ 2024-06-06  9:22 UTC (permalink / raw)
  To: pve-devel

so that we can show a proper warning in the migrate dialog and check it
in the bulk migrate precondition check

the unavailable_storages property should be the same as before, but
we now always return (not_)allowed_nodes too.

to make the code a bit easier to read, reorganize how we construct
the (not_)allowed_nodes properties.

also add a note that we want to redesign the return values here, to make
* the api call simpler
* return better structured values

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
changes from v3:
* rework the code a bit so it's easier to read and follow
* properly fill allowed nodes now in the running case too
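
a sketch of the relevant part of the reworked return value (node and
storage names are made up):

```perl
my $res = {
    running => 1,
    allowed_nodes => [ 'node2' ],
    not_allowed_nodes => {
        node3 => {
            unavailable_storages => [ 'local-lvm' ],
            'unavailable-resources' => [ 'hostpci0' ],
        },
    },
};
```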

 PVE/API2/Qemu.pm | 45 ++++++++++++++++++++++++++-------------------
 1 file changed, 26 insertions(+), 19 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 24b82c6e..0a999006 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -4450,18 +4450,20 @@ __PACKAGE__->register_method({
 	},
     },
     returns => {
+	# TODO 9.x: rework the api call to return more sensible structures
+	# e.g. a simple list of nodes with their blockers and/or notices to show
 	type => "object",
 	properties => {
 	    running => { type => 'boolean' },
 	    allowed_nodes => {
 		type => 'array',
 		optional => 1,
-		description => "List nodes allowed for offline migration, only passed if VM is offline"
+		description => "List nodes allowed for migration.",
 	    },
 	    not_allowed_nodes => {
 		type => 'object',
 		optional => 1,
-		description => "List not allowed nodes with additional informations, only passed if VM is offline"
+		description => "List not allowed nodes with additional information.",
 	    },
 	    local_disks => {
 		type => 'array',
@@ -4510,7 +4512,6 @@ __PACKAGE__->register_method({
 
 	my ($local_resources, $mapped_resources, $missing_mappings_by_node) =
 	    PVE::QemuServer::check_local_resources($vmconf, 1);
-	delete $missing_mappings_by_node->{$localnode};
 
 	my $vga = PVE::QemuServer::parse_vga($vmconf->{vga});
 	if ($res->{running} && $vga->{'clipboard'} && $vga->{'clipboard'} eq 'vnc') {
@@ -4519,28 +4520,34 @@ __PACKAGE__->register_method({
 
 	# if vm is not running, return target nodes where local storage/mapped devices are available
 	# for offline migration
-	if (!$res->{running}) {
-	    $res->{allowed_nodes} = [];
-	    my $checked_nodes = PVE::QemuServer::check_local_storage_availability($vmconf, $storecfg);
-	    delete $checked_nodes->{$localnode};
-
-	    foreach my $node (keys %$checked_nodes) {
-		my $missing_mappings = $missing_mappings_by_node->{$node};
-		if (scalar($missing_mappings->@*)) {
-		    $checked_nodes->{$node}->{'unavailable-resources'} = $missing_mappings;
-		    next;
-		}
+	$res->{allowed_nodes} = [];
+	$res->{not_allowed_nodes} = {};
 
-		if (!defined($checked_nodes->{$node}->{unavailable_storages})) {
-		    push @{$res->{allowed_nodes}}, $node;
-		}
+	my $storage_nodehash = PVE::QemuServer::check_local_storage_availability($vmconf, $storecfg);
+
+	my $nodelist = PVE::Cluster::get_nodelist();
+	for my $node ($nodelist->@*) {
+	    next if $node eq $localnode;
+
+	    # extract missing storage info
+	    if (my $storage_info = $storage_nodehash->{$node}) {
+		$res->{not_allowed_nodes}->{$node} = $storage_info;
+	    }
+
+	    # extract missing mappings info
+	    my $missing_mappings = $missing_mappings_by_node->{$node};
+	    if (scalar($missing_mappings->@*)) {
+		$res->{not_allowed_nodes}->{$node}->{'unavailable-resources'} = $missing_mappings;
+	    }
 
+	    # if nothing came up, add it to the allowed nodes
+	    if (!$res->{not_allowed_nodes}->{$node}) {
+		push $res->{allowed_nodes}->@*, $node;
 	    }
-	    $res->{not_allowed_nodes} = $checked_nodes;
 	}
 
 	my $local_disks = &$check_vm_disks_local($storecfg, $vmconf, $vmid);
-	$res->{local_disks} = [ values %$local_disks ];;
+	$res->{local_disks} = [ values %$local_disks ];
 
 	$res->{local_resources} = $local_resources;
 	$res->{'mapped-resources'} = [ sort keys $mapped_resources->%* ];
-- 
2.39.2






* [pve-devel] [PATCH qemu-server v4 11/12] tests: cfg2cmd: fix mdev tests
  2024-06-06  9:21 [pve-devel] [PATCH guest-common/qemu-server/manager/docs v4] implement experimental vgpu live migration Dominik Csapak
                   ` (15 preceding siblings ...)
  2024-06-06  9:22 ` [pve-devel] [PATCH qemu-server v4 10/12] api: include not mapped resources for running vms in migrate preconditions Dominik Csapak
@ 2024-06-06  9:22 ` Dominik Csapak
  2024-06-06  9:22 ` [pve-devel] [PATCH qemu-server v4 12/12] migration: show vfio state transferred too Dominik Csapak
                   ` (8 subsequent siblings)
  25 siblings, 0 replies; 35+ messages in thread
From: Dominik Csapak @ 2024-06-06  9:22 UTC (permalink / raw)
  To: pve-devel

these tests would fail with the new checks for mdev if the config is
not adapted.

namely, a device that provides mediated devices should have 'mdev' set
in the mapping config

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
new in v4
 test/run_config2command_tests.pl | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/test/run_config2command_tests.pl b/test/run_config2command_tests.pl
index 7212acc4..234b9504 100755
--- a/test/run_config2command_tests.pl
+++ b/test/run_config2command_tests.pl
@@ -100,6 +100,7 @@ my $pci_map_config = {
     ids => {
 	someGpu => {
 	    type => 'pci',
+	    mdev => 1,
 	    map => [
 		'node=localhost,path=0000:01:00.4,id=10de:2231,iommugroup=1',
 		'node=localhost,path=0000:01:00.5,id=10de:2231,iommugroup=1',
@@ -319,7 +320,6 @@ $pve_common_sysfstools->mock(
 	} elsif ($path =~ m/^0000:07:10/) {
 	    return {
 		iommugroup => 2,
-		mdev => 0,
 		vendor => "0x8086",
 		device => "0x1520",
 	    };
-- 
2.39.2






* [pve-devel] [PATCH qemu-server v4 12/12] migration: show vfio state transferred too
  2024-06-06  9:21 [pve-devel] [PATCH guest-common/qemu-server/manager/docs v4] implement experimental vgpu live migration Dominik Csapak
                   ` (16 preceding siblings ...)
  2024-06-06  9:22 ` [pve-devel] [PATCH qemu-server v4 11/12] tests: cfg2cmd: fix mdev tests Dominik Csapak
@ 2024-06-06  9:22 ` Dominik Csapak
  2024-06-06  9:22 ` [pve-devel] [PATCH manager v4 1/5] mapping: pci: include mdev in config checks Dominik Csapak
                   ` (7 subsequent siblings)
  25 siblings, 0 replies; 35+ messages in thread
From: Dominik Csapak @ 2024-06-06  9:22 UTC (permalink / raw)
  To: pve-devel

there is no total available here, so we can't show one - just what was
transferred up until now (if anything).

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
new in v4
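
with that, a progress line during migration would look roughly like this
(the numbers are made up):

```
transferred 1.1 GiB of 4.0 GiB VM-state, 112.3 MiB/s (+ 256.0 MiB VFIO-state)
```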
 PVE/API2/Qemu.pm   |  2 +-
 PVE/QemuMigrate.pm | 12 +++++++++++-
 2 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 0a999006..34bfcdf4 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -4541,7 +4541,7 @@ __PACKAGE__->register_method({
 	    }
 
 	    # if nothing came up, add it to the allowed nodes
-	    if (!$res->{not_allowed_nodes}->{$node}) {
+	    if (scalar($res->{not_allowed_nodes}->{$node}->%*) == 0) {
 		push $res->{allowed_nodes}->@*, $node;
 	    }
 	}
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 41ff12c2..bd65558c 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -1242,6 +1242,7 @@ sub phase2 {
     $self->log('info', "migrate uri => $migrate_uri failed: $merr") if $merr;
 
     my $last_mem_transferred = 0;
+    my $last_vfio_transferred = 0;
     my $usleep = 1000000;
     my $i = 0;
     my $err_count = 0;
@@ -1301,8 +1302,11 @@ sub phase2 {
 	    last;
 	}
 
-	if ($memstat->{transferred} ne $last_mem_transferred) {
+	if ($memstat->{transferred} ne $last_mem_transferred ||
+	    $stat->{vfio}->{transferred} ne $last_vfio_transferred
+	) {
 	    my $trans = $memstat->{transferred} || 0;
+	    my $vfio_transferred = $stat->{vfio}->{transferred} || 0;
 	    my $rem = $memstat->{remaining} || 0;
 	    my $total = $memstat->{total} || 0;
 	    my $speed = ($memstat->{'pages-per-second'} // 0) * ($memstat->{'page-size'} // 0);
@@ -1320,6 +1324,11 @@ sub phase2 {
 
 	    my $progress = "transferred $transferred_h of $total_h VM-state, ${speed_h}/s";
 
+	    if ($vfio_transferred > 0) {
+		my $vfio_h = render_bytes($vfio_transferred, 1);
+		$progress .= " (+ $vfio_h VFIO-state)";
+	    }
+
 	    if ($dirty_rate > $speed) {
 		my $dirty_rate_h = render_bytes($dirty_rate, 1);
 		$progress .= ", VM dirties lots of memory: $dirty_rate_h/s";
@@ -1361,6 +1370,7 @@ sub phase2 {
 	}
 
 	$last_mem_transferred = $memstat->{transferred};
+	$last_vfio_transferred = $stat->{vfio}->{transferred};
     }
 
     if ($self->{storage_migration}) {
-- 
2.39.2






* [pve-devel] [PATCH manager v4 1/5] mapping: pci: include mdev in config checks
  2024-06-06  9:21 [pve-devel] [PATCH guest-common/qemu-server/manager/docs v4] implement experimental vgpu live migration Dominik Csapak
                   ` (17 preceding siblings ...)
  2024-06-06  9:22 ` [pve-devel] [PATCH qemu-server v4 12/12] migration: show vfio state transferred too Dominik Csapak
@ 2024-06-06  9:22 ` Dominik Csapak
  2024-06-06  9:22 ` [pve-devel] [PATCH manager v4 2/5] bulk migrate: improve precondition checks Dominik Csapak
                   ` (6 subsequent siblings)
  25 siblings, 0 replies; 35+ messages in thread
From: Dominik Csapak @ 2024-06-06  9:22 UTC (permalink / raw)
  To: pve-devel

by also providing the global config in assert_valid, and by adding the
mdev property to the 'toCheck' object in the gui

For the gui, we extract the mdev property from the global entry and add
it to the individual mapping entries; that way we can reuse the checking
logic of the other properties.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
changes from v3:
* give mdev a default value, to make the error nicer

 PVE/API2/Cluster/Mapping/PCI.pm | 2 +-
 www/manager6/dc/PCIMapView.js   | 6 ++++++
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/PVE/API2/Cluster/Mapping/PCI.pm b/PVE/API2/Cluster/Mapping/PCI.pm
index 64462d25..f85623bb 100644
--- a/PVE/API2/Cluster/Mapping/PCI.pm
+++ b/PVE/API2/Cluster/Mapping/PCI.pm
@@ -115,7 +115,7 @@ __PACKAGE__->register_method ({
 			};
 		    }
 		    for my $mapping ($mappings->@*) {
-			eval { PVE::Mapping::PCI::assert_valid($id, $mapping) };
+			eval { PVE::Mapping::PCI::assert_valid($id, $mapping, $entry) };
 			if (my $err = $@) {
 			    push $entry->{checks}->@*, {
 				severity => 'error',
diff --git a/www/manager6/dc/PCIMapView.js b/www/manager6/dc/PCIMapView.js
index 80fe3c0f..f9d43aa2 100644
--- a/www/manager6/dc/PCIMapView.js
+++ b/www/manager6/dc/PCIMapView.js
@@ -20,10 +20,15 @@ Ext.define('PVE.dc.PCIMapView', {
 	data.forEach((entry) => {
 	    ids[entry.id] = entry;
 	});
+	let mdev;
 	me.getRootNode()?.cascade(function(rec) {
+	    if (rec.data.type === 'entry') {
+		mdev = rec.data.mdev ?? 0;
+	    }
 	    if (rec.data.node !== node || rec.data.type !== 'map') {
 		return;
 	    }
+	    rec.data.mdev = mdev;
 
 	    let id = rec.data.path;
 	    if (!id.match(/\.\d$/)) {
@@ -44,6 +49,7 @@ Ext.define('PVE.dc.PCIMapView', {
 	    let toCheck = {
 		id: deviceId,
 		'subsystem-id': subId,
+		mdev: device.mdev ?? 0,
 		iommugroup: device.iommugroup !== -1 ? device.iommugroup : undefined,
 	    };
 
-- 
2.39.2






* [pve-devel] [PATCH manager v4 2/5] bulk migrate: improve precondition checks
  2024-06-06  9:21 [pve-devel] [PATCH guest-common/qemu-server/manager/docs v4] implement experimental vgpu live migration Dominik Csapak
                   ` (18 preceding siblings ...)
  2024-06-06  9:22 ` [pve-devel] [PATCH manager v4 1/5] mapping: pci: include mdev in config checks Dominik Csapak
@ 2024-06-06  9:22 ` Dominik Csapak
  2024-06-06  9:22 ` [pve-devel] [PATCH manager v4 3/5] bulk migrate: include checks for live-migratable local resources Dominik Csapak
                   ` (5 subsequent siblings)
  25 siblings, 0 replies; 35+ messages in thread
From: Dominik Csapak @ 2024-06-06  9:22 UTC (permalink / raw)
  To: pve-devel

this now takes the 'not_allowed_nodes' hash we get from the api call
into account. With that, we can limit the 'local_resources' check to
online vms only, since for offline guests the 'unavailable-resources'
hash already includes mapped devices that don't exist on the target
node.

This now also includes unavailable storages on target nodes.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
no changes
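
the resulting precondition failure for a bulk migration would then read
roughly like this (vmid and names are made up):

```
skip VM 100 - precondition check failed:
  Has unavailable storages: local-lvm
  Has unavailable resources hostpci0
```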
 PVE/API2/Nodes.pm | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/PVE/API2/Nodes.pm b/PVE/API2/Nodes.pm
index fb4fd4d6..882d7301 100644
--- a/PVE/API2/Nodes.pm
+++ b/PVE/API2/Nodes.pm
@@ -2255,11 +2255,23 @@ my $create_migrate_worker = sub {
 	    $invalidConditions .= join(', ', map { $_->{volid} } @{$preconditions->{local_disks}});
 	}
 
-	if (@{$preconditions->{local_resources}}) {
+	if ($online && scalar($preconditions->{local_resources}->@*)) {
 	    $invalidConditions .= "\n  Has local resources: ";
 	    $invalidConditions .= join(', ', @{$preconditions->{local_resources}});
 	}
 
+	if (my $not_allowed_nodes = $preconditions->{not_allowed_nodes}) {
+	    if (my $unavail_storages = $not_allowed_nodes->{$target}->{unavailable_storages}) {
+		$invalidConditions .= "\n  Has unavailable storages: ";
+		$invalidConditions .= join(', ', $unavail_storages->@*);
+	    }
+
+	    if (my $unavail_resources = $not_allowed_nodes->{$target}->{'unavailable-resources'}) {
+		$invalidConditions .= "\n  Has unavailable resources ";
+		$invalidConditions .= join(', ', $unavail_resources->@*);
+	    }
+	}
+
 	if ($invalidConditions && $invalidConditions ne '') {
 	    print STDERR "skip VM $vmid - precondition check failed:";
 	    die "$invalidConditions\n";
-- 
2.39.2






* [pve-devel] [PATCH manager v4 3/5] bulk migrate: include checks for live-migratable local resources
  2024-06-06  9:21 [pve-devel] [PATCH guest-common/qemu-server/manager/docs v4] implement experimental vgpu live migration Dominik Csapak
                   ` (19 preceding siblings ...)
  2024-06-06  9:22 ` [pve-devel] [PATCH manager v4 2/5] bulk migrate: improve precondition checks Dominik Csapak
@ 2024-06-06  9:22 ` Dominik Csapak
  2024-06-06  9:22 ` [pve-devel] [PATCH manager v4 4/5] ui: adapt migration window to precondition api change Dominik Csapak
                   ` (4 subsequent siblings)
  25 siblings, 0 replies; 35+ messages in thread
From: Dominik Csapak @ 2024-06-06  9:22 UTC (permalink / raw)
  To: pve-devel

those should be able to migrate even for online vms. If the mapping does
not exist on the target node, that will be caught further down anyway.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
no changes
 PVE/API2/Nodes.pm | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/PVE/API2/Nodes.pm b/PVE/API2/Nodes.pm
index 882d7301..35e7047c 100644
--- a/PVE/API2/Nodes.pm
+++ b/PVE/API2/Nodes.pm
@@ -2255,9 +2255,18 @@ my $create_migrate_worker = sub {
 	    $invalidConditions .= join(', ', map { $_->{volid} } @{$preconditions->{local_disks}});
 	}
 
+	# for a live migration all local_resources must be marked as live-migratable
 	if ($online && scalar($preconditions->{local_resources}->@*)) {
-	    $invalidConditions .= "\n  Has local resources: ";
-	    $invalidConditions .= join(', ', @{$preconditions->{local_resources}});
+	    my $resource_not_live = [];
+	    for my $resource ($preconditions->{local_resources}->@*) {
+		next if $preconditions->{'mapped-resource-info'}->{$resource}->{'live-migration'};
+		push $resource_not_live->@*, $resource;
+	    }
+
+	    if (scalar($resource_not_live->@*)) {
+		$invalidConditions .= "\n  Has local resources not marked as live migratable: ";
+		$invalidConditions .= join(', ', $resource_not_live->@*);
+	    }
 	}
 
 	if (my $not_allowed_nodes = $preconditions->{not_allowed_nodes}) {
-- 
2.39.2




* [pve-devel] [PATCH manager v4 4/5] ui: adapt migration window to precondition api change
  2024-06-06  9:21 [pve-devel] [PATCH guest-common/qemu-server/manager/docs v4] implement experimental vgpu live migration Dominik Csapak
                   ` (20 preceding siblings ...)
  2024-06-06  9:22 ` [pve-devel] [PATCH manager v4 3/5] bulk migrate: include checks for live-migratable local resources Dominik Csapak
@ 2024-06-06  9:22 ` Dominik Csapak
  2024-06-06  9:22 ` [pve-devel] [PATCH manager v4 5/5] fix #5175: ui: allow configuring and live migration of mapped pci resources Dominik Csapak
                   ` (3 subsequent siblings)
  25 siblings, 0 replies; 35+ messages in thread
From: Dominik Csapak @ 2024-06-06  9:22 UTC (permalink / raw)
  To: pve-devel

We now return 'allowed_nodes'/'not_allowed_nodes' also if the VM is
running, when it has mapped resources. So do those checks independently,
so that the user gets instant feedback on where those resources exist.
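To illustrate (the field names match the precondition API used here; the
sample response is invented): with this change, 'not_allowed_nodes' may be
present for running VMs too, so the unavailable-resources lookup can happen
outside the allowed_nodes branch:

```javascript
// Read the resources a target node is missing from a (hypothetical)
// migrate precondition result; works whether or not the VM is running.
function unavailableResourcesOn(migrateStats, target) {
    const disallowed = migrateStats.not_allowed_nodes?.[target] ?? {};
    return disallowed['unavailable-resources'] ?? [];
}

// Invented sample: node2 lacks the mapping for hostpci0, node3 is fine.
const migrateStats = {
    not_allowed_nodes: {
        node2: { 'unavailable-resources': ['hostpci0'] },
    },
};

const missingOnNode2 = unavailableResourcesOn(migrateStats, 'node2');
const missingOnNode3 = unavailableResourcesOn(migrateStats, 'node3');
// missingOnNode2 → ['hostpci0'], missingOnNode3 → []
```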

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
no changes
 www/manager6/window/Migrate.js | 26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/www/manager6/window/Migrate.js b/www/manager6/window/Migrate.js
index 5473821b..c553a1ee 100644
--- a/www/manager6/window/Migrate.js
+++ b/www/manager6/window/Migrate.js
@@ -104,7 +104,7 @@ Ext.define('PVE.window.Migrate', {
 	onTargetChange: function(nodeSelector) {
 	    // Always display the storages of the currently seleceted migration target
 	    this.lookup('pveDiskStorageSelector').setNodename(nodeSelector.value);
-	    this.checkMigratePreconditions();
+	    this.checkMigratePreconditions(true);
 	},
 
 	startMigration: function() {
@@ -214,12 +214,12 @@ Ext.define('PVE.window.Migrate', {
 		migration.possible = true;
 	    }
 	    migration.preconditions = [];
+	    let target = me.lookup('pveNodeSelector').value;
+	    let disallowed = migrateStats.not_allowed_nodes?.[target] ?? {};
 
-	    if (migrateStats.allowed_nodes) {
+	    if (migrateStats.allowed_nodes && !vm.get('running')) {
 		migration.allowedNodes = migrateStats.allowed_nodes;
-		let target = me.lookup('pveNodeSelector').value;
 		if (target.length && !migrateStats.allowed_nodes.includes(target)) {
-		    let disallowed = migrateStats.not_allowed_nodes[target] ?? {};
 		    if (disallowed.unavailable_storages !== undefined) {
 			let missingStorages = disallowed.unavailable_storages.join(', ');
 
@@ -230,17 +230,17 @@ Ext.define('PVE.window.Migrate', {
 			    severity: 'error',
 			});
 		    }
+		}
+	    }
 
-		    if (disallowed['unavailable-resources'] !== undefined) {
-			let unavailableResources = disallowed['unavailable-resources'].join(', ');
+	    if (disallowed['unavailable-resources'] !== undefined) {
+		let unavailableResources = disallowed['unavailable-resources'].join(', ');
 
-			migration.possible = false;
-			migration.preconditions.push({
-			    text: 'Mapped Resources (' + unavailableResources + ') not available on selected target. ',
-			    severity: 'error',
-			});
-		    }
-		}
+		migration.possible = false;
+		migration.preconditions.push({
+		    text: 'Mapped Resources (' + unavailableResources + ') not available on selected target. ',
+		    severity: 'error',
+		});
 	    }
 
 	    let blockingResources = [];
-- 
2.39.2




* [pve-devel] [PATCH manager v4 5/5] fix #5175: ui: allow configuring and live migration of mapped pci resources
  2024-06-06  9:21 [pve-devel] [PATCH guest-common/qemu-server/manager/docs v4] implement experimental vgpu live migration Dominik Csapak
                   ` (21 preceding siblings ...)
  2024-06-06  9:22 ` [pve-devel] [PATCH manager v4 4/5] ui: adapt migration window to precondition api change Dominik Csapak
@ 2024-06-06  9:22 ` Dominik Csapak
  2024-06-06  9:22 ` [pve-devel] [PATCH docs v4 1/2] qm: resource mapping: add description for `mdev` option Dominik Csapak
                   ` (2 subsequent siblings)
  25 siblings, 0 replies; 35+ messages in thread
From: Dominik Csapak @ 2024-06-06  9:22 UTC (permalink / raw)
  To: pve-devel

If the hardware/driver is capable, the admin can now mark a PCI device
as 'live-migration-capable', which then tries to enable live migration
for such devices.

Mark it as experimental when configuring it and in the migrate window.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
no changes
 www/manager6/window/Migrate.js    | 25 ++++++++++++++++++++-----
 www/manager6/window/PCIMapEdit.js | 12 ++++++++++++
 2 files changed, 32 insertions(+), 5 deletions(-)

diff --git a/www/manager6/window/Migrate.js b/www/manager6/window/Migrate.js
index c553a1ee..272ab022 100644
--- a/www/manager6/window/Migrate.js
+++ b/www/manager6/window/Migrate.js
@@ -244,10 +244,10 @@ Ext.define('PVE.window.Migrate', {
 	    }
 
 	    let blockingResources = [];
-	    let mappedResources = migrateStats['mapped-resources'] ?? [];
+	    let mappedResources = migrateStats['mapped-resource-info'] ?? {};
 
 	    for (const res of migrateStats.local_resources) {
-		if (mappedResources.indexOf(res) === -1) {
+		if (!mappedResources[res]) {
 		    blockingResources.push(res);
 		}
 	    }
@@ -271,14 +271,29 @@ Ext.define('PVE.window.Migrate', {
 		}
 	    }
 
-	    if (mappedResources && mappedResources.length) {
-		if (vm.get('running')) {
+	    if (mappedResources && vm.get('running')) {
+		let allowed = [];
+		let notAllowed = [];
+		for (const [key, resource] of Object.entries(mappedResources)) {
+		    if (resource['live-migration']) {
+			allowed.push(key);
+		    } else {
+			notAllowed.push(key);
+		    }
+		}
+		if (notAllowed.length > 0) {
 		    migration.possible = false;
 		    migration.preconditions.push({
 			text: Ext.String.format('Can\'t migrate running VM with mapped resources: {0}',
-			mappedResources.join(', ')),
+			notAllowed.join(', ')),
 			severity: 'error',
 		    });
+		} else if (allowed.length > 0) {
+		    migration.preconditions.push({
+			text: Ext.String.format('Live-migrating running VM with mapped resources (Experimental): {0}',
+			allowed.join(', ')),
+			severity: 'warning',
+		    });
 		}
 	    }
 
diff --git a/www/manager6/window/PCIMapEdit.js b/www/manager6/window/PCIMapEdit.js
index d43f04eb..731269a0 100644
--- a/www/manager6/window/PCIMapEdit.js
+++ b/www/manager6/window/PCIMapEdit.js
@@ -242,6 +242,18 @@ Ext.define('PVE.window.PCIMapEditWindow', {
 			disabled: '{hideComment}',
 		    },
 		},
+		{
+		    xtype: 'proxmoxcheckbox',
+		    fieldLabel: gettext('Live Migration Capable'),
+		    labelWidth: 200,
+		    boxLabel: `<i class="fa fa-exclamation-triangle warning"></i> ${gettext('Experimental')}`,
+		    reference: 'live-migration-capable',
+		    name: 'live-migration-capable',
+		    cbind: {
+			deleteEmpty: '{!isCreate}',
+			disabled: '{hideComment}',
+		    },
+		},
 	    ],
 
 	    columnB: [
-- 
2.39.2




* [pve-devel] [PATCH docs v4 1/2] qm: resource mapping: add description for `mdev` option
  2024-06-06  9:21 [pve-devel] [PATCH guest-common/qemu-server/manager/docs v4] implement experimental vgpu live migration Dominik Csapak
                   ` (22 preceding siblings ...)
  2024-06-06  9:22 ` [pve-devel] [PATCH manager v4 5/5] fix #5175: ui: allow configuring and live migration of mapped pci resources Dominik Csapak
@ 2024-06-06  9:22 ` Dominik Csapak
  2024-08-14 14:14   ` Alexander Zeidler
  2024-06-06  9:22 ` [pve-devel] [PATCH docs v4 2/2] qm: resource mapping: document `live-migration-capable` setting Dominik Csapak
  2024-07-05  9:00 ` [pve-devel] [PATCH guest-common/qemu-server/manager/docs v4] implement experimental vgpu live migration Thomas Lamprecht
  25 siblings, 1 reply; 35+ messages in thread
From: Dominik Csapak @ 2024-06-06  9:22 UTC (permalink / raw)
  To: pve-devel

in a new section about additional options

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
no changes
 qm.adoc | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/qm.adoc b/qm.adoc
index 42c26db..3f4e59a 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -1802,6 +1802,19 @@ To create mappings `Mapping.Modify` on `/mapping/<type>/<name>` is necessary
 To use these mappings, `Mapping.Use` on `/mapping/<type>/<name>` is necessary
 (in addition to the normal guest privileges to edit the configuration).
 
+Additional Options
+^^^^^^^^^^^^^^^^^^
+
+There are additional options when defining a cluster wide resource mapping.
+Currently there are the following options:
+
+* `mdev` (PCI): This marks the PCI device as being capable of providing
+  `mediated devices`. When this is enabled, you can select a type when
+  configuring it on the guest. If multiple PCI devices are selected for
+  the mapping, the mediated device will be create on the first one where
+  there are any available instances of the selected type.
+
+
 Managing Virtual Machines with `qm`
 ------------------------------------
 
-- 
2.39.2




* [pve-devel] [PATCH docs v4 2/2] qm: resource mapping: document `live-migration-capable` setting
  2024-06-06  9:21 [pve-devel] [PATCH guest-common/qemu-server/manager/docs v4] implement experimental vgpu live migration Dominik Csapak
                   ` (23 preceding siblings ...)
  2024-06-06  9:22 ` [pve-devel] [PATCH docs v4 1/2] qm: resource mapping: add description for `mdev` option Dominik Csapak
@ 2024-06-06  9:22 ` Dominik Csapak
  2024-07-05  9:00 ` [pve-devel] [PATCH guest-common/qemu-server/manager/docs v4] implement experimental vgpu live migration Thomas Lamprecht
  25 siblings, 0 replies; 35+ messages in thread
From: Dominik Csapak @ 2024-06-06  9:22 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
no changes
 qm.adoc | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/qm.adoc b/qm.adoc
index 3f4e59a..3ab5270 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -1814,6 +1814,12 @@ Currently there are the following options:
   the mapping, the mediated device will be create on the first one where
   there are any available instances of the selected type.
 
+* `live-migration-capable` (PCI): This marks the device as being capable of
+  being live migrated between nodes. This requires driver and hardware support.
+  Only NVIDIA GPUs with recent kernel are known to support this. Note that
+  live migrating passed through devices is an experimental feature and may
+  not work or cause issues.
+
 
 Managing Virtual Machines with `qm`
 ------------------------------------
-- 
2.39.2




* Re: [pve-devel] [PATCH qemu-server v4 01/12] usb: mapping: move implementation of find_on_current_node here
  2024-06-06  9:22 ` [pve-devel] [PATCH qemu-server v4 01/12] usb: mapping: move implementation of find_on_current_node here Dominik Csapak
@ 2024-07-04 11:29   ` Thomas Lamprecht
  0 siblings, 0 replies; 35+ messages in thread
From: Thomas Lamprecht @ 2024-07-04 11:29 UTC (permalink / raw)
  To: Proxmox VE development discussion, Dominik Csapak

Am 06/06/2024 um 11:22 schrieb Dominik Csapak:
> this was the only user, and it's easy enough
> 
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
> no changes
>  PVE/QemuServer/USB.pm | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/PVE/QemuServer/USB.pm b/PVE/QemuServer/USB.pm
> index 49957444..ecd0361d 100644
> --- a/PVE/QemuServer/USB.pm
> +++ b/PVE/QemuServer/USB.pm
> @@ -5,6 +5,7 @@ use warnings;
>  use PVE::QemuServer::PCI qw(print_pci_addr);
>  use PVE::QemuServer::Machine;
>  use PVE::QemuServer::Helpers qw(min_version windows_version);
> +use PVE::INotify;
>  use PVE::JSONSchema;
>  use PVE::Mapping::USB;
>  use base 'Exporter';
> @@ -91,7 +92,9 @@ sub parse_usb_device {
>  	    $res->{spice} = 1;
>  	}
>      } elsif (defined($mapping)) {
> -	my $devices = PVE::Mapping::USB::find_on_current_node($mapping);
> +	my $config = PVE::Mapping::USB::config();
> +	my $node = PVE::INotify::nodename();
> +	my $devices = PVE::Mapping::USB::get_node_mapping($config, $mapping, $node);
>  	die "USB device mapping not found for '$mapping'\n" if !$devices || !scalar($devices->@*);
>  	die "More than one USB mapping per host not supported\n" if scalar($devices->@*) > 1;

Fine by me, but the above could still go into a separate (private) method
here in this module, reducing the noise a bit.

Same for the PCI one.

>  	eval {




* Re: [pve-devel] [PATCH qemu-server v4 05/12] vm_stop_cleanup: add noerr parameter
  2024-06-06  9:22 ` [pve-devel] [PATCH qemu-server v4 05/12] vm_stop_cleanup: add noerr parameter Dominik Csapak
@ 2024-07-04 12:24   ` Thomas Lamprecht
  0 siblings, 0 replies; 35+ messages in thread
From: Thomas Lamprecht @ 2024-07-04 12:24 UTC (permalink / raw)
  To: Proxmox VE development discussion, Dominik Csapak

Am 06/06/2024 um 11:22 schrieb Dominik Csapak:
> and set it on all current users

Hmm, it would have helped me if you had stated that this patch keeps the
semantics, but allows callers to request that the method dies instead
of just warning and silencing such an error.

When just reading the commit message, I was at first a bit confused why
there was no explanation whatsoever even though the behavior changed
drastically. Sure, for review one can reasonably expect devs to also
read the code, but it's IMO still much nicer if one already gets the basic
gist from just reading the message, ideally just the subject; especially
when writing d/changelog entries.

The subject could maybe be changed to something like:

"vm stop-cleanup: allow callers to decide error behavior"

(in this specific case it might be even fine to use the literal method
name, but no hard feelings either way)

Looks OK to me otherwise.

> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>

With commit meta stuff addressed consider this:

Reviewed-by: Thomas Lamprecht <t.lamprecht@proxmox.com>




* [pve-devel] applied: [PATCH guest-common v4 1/6] mapping: pci: assert_valid: rename cfg to mapping
  2024-06-06  9:21 ` [pve-devel] [PATCH guest-common v4 1/6] mapping: pci: assert_valid: rename cfg to mapping Dominik Csapak
@ 2024-07-05  8:22   ` Thomas Lamprecht
  0 siblings, 0 replies; 35+ messages in thread
From: Thomas Lamprecht @ 2024-07-05  8:22 UTC (permalink / raw)
  To: Proxmox VE development discussion, Dominik Csapak

Am 06/06/2024 um 11:21 schrieb Dominik Csapak:
> to make it clearer what it actually is. Also we want to add the
> 'real' config as parameter too, and so it's less confusing.
> 
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
> split out in v4
>  src/PVE/Mapping/PCI.pm | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)
> 
>

applied, thanks!



* [pve-devel] applied: [PATCH guest-common v4 2/6] mapping: pci: assert_valid: reword error messages
  2024-06-06  9:21 ` [pve-devel] [PATCH guest-common v4 2/6] mapping: pci: assert_valid: reword error messages Dominik Csapak
@ 2024-07-05  8:22   ` Thomas Lamprecht
  0 siblings, 0 replies; 35+ messages in thread
From: Thomas Lamprecht @ 2024-07-05  8:22 UTC (permalink / raw)
  To: Proxmox VE development discussion, Dominik Csapak

Am 06/06/2024 um 11:21 schrieb Dominik Csapak:
> makes them a bit clearer
> 
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
> split out in v4
>  src/PVE/Mapping/PCI.pm | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
>

applied, thanks!



* [pve-devel] applied: [PATCH guest-common v4 3/6] mapping: pci: make sure all desired properties are checked
  2024-06-06  9:21 ` [pve-devel] [PATCH guest-common v4 3/6] mapping: pci: make sure all desired properties are checked Dominik Csapak
@ 2024-07-05  8:22   ` Thomas Lamprecht
  0 siblings, 0 replies; 35+ messages in thread
From: Thomas Lamprecht @ 2024-07-05  8:22 UTC (permalink / raw)
  To: Proxmox VE development discussion, Dominik Csapak

Am 06/06/2024 um 11:21 schrieb Dominik Csapak:
> by placing all expected properties from the hardware into an 'expected_props'
> and those from the config into 'configured_props'
> 
> the names make clearer what's what, and we can easily extend it, even
> if the data does not come from the mapping (like we'll do with 'mdev')
> 
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
> changes from v3:
> * rebased on split out patches before
> * don't merge keys but add all to expected_props instead
>  src/PVE/Mapping/PCI.pm | 27 ++++++++++++++++-----------
>  1 file changed, 16 insertions(+), 11 deletions(-)
> 
>

applied, thanks!



* Re: [pve-devel] [PATCH guest-common v4 4/6] mapping: pci: check the mdev configuration on the device too
  2024-06-06  9:21 ` [pve-devel] [PATCH guest-common v4 4/6] mapping: pci: check the mdev configuration on the device too Dominik Csapak
@ 2024-07-05  8:33   ` Thomas Lamprecht
  0 siblings, 0 replies; 35+ messages in thread
From: Thomas Lamprecht @ 2024-07-05  8:33 UTC (permalink / raw)
  To: Proxmox VE development discussion, Dominik Csapak

Am 06/06/2024 um 11:21 schrieb Dominik Csapak:
> but that lives in the 'global' part of the mapping config, not in a
> specific mapping. To check that, add it to the relevant hashes here.
> 
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
> changes from v3:
> * leave $cfg optional
> 
>  src/PVE/Mapping/PCI.pm | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/src/PVE/Mapping/PCI.pm b/src/PVE/Mapping/PCI.pm
> index aa56496..1b2ac52 100644
> --- a/src/PVE/Mapping/PCI.pm
> +++ b/src/PVE/Mapping/PCI.pm
> @@ -131,7 +131,7 @@ sub options {
>  
>  # checks if the given config is valid for the current node
>  sub assert_valid {
> -    my ($name, $mapping) = @_;
> +    my ($name, $mapping, $cfg) = @_;

Which config is this? As is, it's IMO a bit too opaque. I even thought first
that this is the VM config, or rather the config of a specific VM PCI
passthrough property, which I would not have liked much, as that would add
a coupling, or maybe better said, a hidden cyclic dependency between
guest-common and qemu-server.

Naming this such that it's clearer which config we're talking about here
would help to avoid such thought digressions, at least for me.

Maybe `$global_mapping_cfg` or `$cluster_mapping_cfg` (as we do not really
use the term "global" much IIRC).

>  
>      my @paths = split(';', $mapping->{path} // '');
>  
> @@ -161,6 +161,12 @@ sub assert_valid {
>  
>  	my $configured_props = { $mapping->%{qw(id iommugroup subsystem-id)} };
>  
> +	# check mdev from global mapping config, if that is given
> +	if (defined($cfg)) {
> +	    $expected_props->{mdev} = $info->{mdev} ? 1 : 0;
> +	    $configured_props->{mdev} = $cfg->{mdev} ? 1 : 0;
> +	}
> +
>  	for my $prop (sort keys $expected_props->%*) {
>  	    next if $prop eq 'iommugroup' && $idx > 0; # check iommu only on the first device
>  




* [pve-devel] applied: [PATCH qemu-server v4 04/12] stop cleanup: remove unnecessary tpmstate cleanup
  2024-06-06  9:22 ` [pve-devel] [PATCH qemu-server v4 04/12] stop cleanup: remove unnecessary tpmstate cleanup Dominik Csapak
@ 2024-07-05  8:36   ` Thomas Lamprecht
  0 siblings, 0 replies; 35+ messages in thread
From: Thomas Lamprecht @ 2024-07-05  8:36 UTC (permalink / raw)
  To: Proxmox VE development discussion, Dominik Csapak

Am 06/06/2024 um 11:22 schrieb Dominik Csapak:
> tpmstate0 is already included in `get_vm_volumes`, and our only storage
> plugin that has unmap_volume implemented is the RBDPlugin, where we call
> unmap in `deactivate_volume`. So it's already unmapped by the
> `deactivate_volumes` calls above.
> 
> For third-party storage plugins, it's natural to expect that
> deactivate_volume() would also remove a mapping for the volume just
> like RBDPlugin does.
> 
> While there is an explicit map_volume() call in start_swtpm(), a
> third-party plugin might expect an explicit unmap_volume() call too.
> However, the order of calls right now is
> 1. activate_volume()
> 2. map_volume()
> 3. deactivate_volume()
> 4. unmap_volume()
> 
> Which seems like it could cause problems already for third-party
> plugins relying on an explicit unmap call.
> 
> All that said, it is unlikely that a third-party plugin breaks. If it
> really happens, it can be discussed/adapted to the actual needs still.
> 
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> Acked-by: Fiona Ebner <f.ebner@proxmox.com>
> ---
> changes from v3:
> * include rationale for 3rd party plugins (thanks @fiona)
> 
>  PVE/QemuServer.pm | 8 --------
>  1 file changed, 8 deletions(-)
> 
>

applied, thanks!



* Re: [pve-devel] [PATCH guest-common/qemu-server/manager/docs v4] implement experimental vgpu live migration
  2024-06-06  9:21 [pve-devel] [PATCH guest-common/qemu-server/manager/docs v4] implement experimental vgpu live migration Dominik Csapak
                   ` (24 preceding siblings ...)
  2024-06-06  9:22 ` [pve-devel] [PATCH docs v4 2/2] qm: resource mapping: document `live-migration-capable` setting Dominik Csapak
@ 2024-07-05  9:00 ` Thomas Lamprecht
  25 siblings, 0 replies; 35+ messages in thread
From: Thomas Lamprecht @ 2024-07-05  9:00 UTC (permalink / raw)
  To: Proxmox VE development discussion, Dominik Csapak

Am 06/06/2024 um 11:21 schrieb Dominik Csapak:
> guest-common 1-4; qemu-server 1-6; pve-manager 1,2
> are preparations/cleanups mostly and could be applied independently

Well, yes and no, they have some interdependencies between themselves, so
they are not fully independent.

It would be great if you would note such inter-patch dependencies also
in each patch's dev/review note section (where the patch revision changelog
lives too).

E.g., I cannot apply manager's 1/5: "mapping: pci: include mdev in config
checks" without guest-common's 4/6: "mapping: pci: check the mdev
configuration on the device too", but I probably can apply manager's 2/5:
"bulk migrate: improve precondition checks". As I noticed the missing
dependency documentation for the first pair, I was slightly hesitant to
do so without a much bigger amount of work checking this very closely
myself; that is IMO extra reviewer cost that can be avoided with not
too much work from the patch series author.

Also, having that info makes it a bit easier to not miss any d/control
`Depends` or `Breaks` version bump/update.



* Re: [pve-devel] [PATCH docs v4 1/2] qm: resource mapping: add description for `mdev` option
  2024-06-06  9:22 ` [pve-devel] [PATCH docs v4 1/2] qm: resource mapping: add description for `mdev` option Dominik Csapak
@ 2024-08-14 14:14   ` Alexander Zeidler
  0 siblings, 0 replies; 35+ messages in thread
From: Alexander Zeidler @ 2024-08-14 14:14 UTC (permalink / raw)
  To: Proxmox VE development discussion

On Thu Jun 6, 2024 at 11:22 AM CEST, Dominik Csapak wrote:
> in a new section about additional options
>
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
> no changes
>  qm.adoc | 13 +++++++++++++
>  1 file changed, 13 insertions(+)
>
> diff --git a/qm.adoc b/qm.adoc
> index 42c26db..3f4e59a 100644
> --- a/qm.adoc
> +++ b/qm.adoc
> @@ -1802,6 +1802,19 @@ To create mappings `Mapping.Modify` on `/mapping/<type>/<name>` is necessary
>  To use these mappings, `Mapping.Use` on `/mapping/<type>/<name>` is necessary
>  (in addition to the normal guest privileges to edit the configuration).
>  
> +Additional Options
> +^^^^^^^^^^^^^^^^^^
Omit the '^' line and use  .Additional Options  instead, to avoid:
asciidoc: WARNING: qm.adoc: line 1806: section title out of sequence: expected level 3, got level 4

> +
> +There are additional options when defining a cluster wide resource mapping.
> +Currently there are the following options:
> +
> +* `mdev` (PCI): This marks the PCI device as being capable of providing
"This marks the PCI device", but your 2/2 patch begins with "This marks the
device", maybe add or remove the word "PCI" to stay consistent.

> +  `mediated devices`. When this is enabled, you can select a type when

> +  configuring it on the guest. If multiple PCI devices are selected for
> +  the mapping, the mediated device will be create on the first one where
> +  there are any available instances of the selected type.
s/create/created/

"where there" could be avoided, e.g.:
If multiple PCI devices are selected for the mapping, the mediated
device will be created on the first device on which instances of the
selected type are available.

> +
> +
>  Managing Virtual Machines with `qm`
>  ------------------------------------
>  




end of thread, other threads:[~2024-08-14 14:15 UTC | newest]

Thread overview: 35+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-06-06  9:21 [pve-devel] [PATCH guest-common/qemu-server/manager/docs v4] implement experimental vgpu live migration Dominik Csapak
2024-06-06  9:21 ` [pve-devel] [PATCH guest-common v4 1/6] mapping: pci: assert_valid: rename cfg to mapping Dominik Csapak
2024-07-05  8:22   ` [pve-devel] applied: " Thomas Lamprecht
2024-06-06  9:21 ` [pve-devel] [PATCH guest-common v4 2/6] mapping: pci: assert_valid: reword error messages Dominik Csapak
2024-07-05  8:22   ` [pve-devel] applied: " Thomas Lamprecht
2024-06-06  9:21 ` [pve-devel] [PATCH guest-common v4 3/6] mapping: pci: make sure all desired properties are checked Dominik Csapak
2024-07-05  8:22   ` [pve-devel] applied: " Thomas Lamprecht
2024-06-06  9:21 ` [pve-devel] [PATCH guest-common v4 4/6] mapping: pci: check the mdev configuration on the device too Dominik Csapak
2024-07-05  8:33   ` Thomas Lamprecht
2024-06-06  9:22 ` [pve-devel] [PATCH guest-common v4 5/6] mapping: pci: add 'live-migration-capable' flag to mappings Dominik Csapak
2024-06-06  9:22 ` [pve-devel] [PATCH guest-common v4 6/6] mapping: remove find_on_current_node Dominik Csapak
2024-06-06  9:22 ` [pve-devel] [PATCH qemu-server v4 01/12] usb: mapping: move implementation of find_on_current_node here Dominik Csapak
2024-07-04 11:29   ` Thomas Lamprecht
2024-06-06  9:22 ` [pve-devel] [PATCH qemu-server v4 02/12] pci: " Dominik Csapak
2024-06-06  9:22 ` [pve-devel] [PATCH qemu-server v4 03/12] pci: mapping: check mdev config against hardware Dominik Csapak
2024-06-06  9:22 ` [pve-devel] [PATCH qemu-server v4 04/12] stop cleanup: remove unnecessary tpmstate cleanup Dominik Csapak
2024-07-05  8:36   ` [pve-devel] applied: " Thomas Lamprecht
2024-06-06  9:22 ` [pve-devel] [PATCH qemu-server v4 05/12] vm_stop_cleanup: add noerr parameter Dominik Csapak
2024-07-04 12:24   ` Thomas Lamprecht
2024-06-06  9:22 ` [pve-devel] [PATCH qemu-server v4 06/12] migrate: call vm_stop_cleanup after stopping in phase3_cleanup Dominik Csapak
2024-06-06  9:22 ` [pve-devel] [PATCH qemu-server v4 07/12] pci: set 'enable-migration' to on for live-migration marked mapped devices Dominik Csapak
2024-06-06  9:22 ` [pve-devel] [PATCH qemu-server v4 08/12] check_local_resources: add more info per mapped device and return as hash Dominik Csapak
2024-06-06  9:22 ` [pve-devel] [PATCH qemu-server v4 09/12] api: enable live migration for marked mapped pci devices Dominik Csapak
2024-06-06  9:22 ` [pve-devel] [PATCH qemu-server v4 10/12] api: include not mapped resources for running vms in migrate preconditions Dominik Csapak
2024-06-06  9:22 ` [pve-devel] [PATCH qemu-server v4 11/12] tests: cfg2cmd: fix mdev tests Dominik Csapak
2024-06-06  9:22 ` [pve-devel] [PATCH qemu-server v4 12/12] migration: show vfio state transferred too Dominik Csapak
2024-06-06  9:22 ` [pve-devel] [PATCH manager v4 1/5] mapping: pci: include mdev in config checks Dominik Csapak
2024-06-06  9:22 ` [pve-devel] [PATCH manager v4 2/5] bulk migrate: improve precondition checks Dominik Csapak
2024-06-06  9:22 ` [pve-devel] [PATCH manager v4 3/5] bulk migrate: include checks for live-migratable local resources Dominik Csapak
2024-06-06  9:22 ` [pve-devel] [PATCH manager v4 4/5] ui: adapt migration window to precondition api change Dominik Csapak
2024-06-06  9:22 ` [pve-devel] [PATCH manager v4 5/5] fix #5175: ui: allow configuring and live migration of mapped pci resources Dominik Csapak
2024-06-06  9:22 ` [pve-devel] [PATCH docs v4 1/2] qm: resource mapping: add description for `mdev` option Dominik Csapak
2024-08-14 14:14   ` Alexander Zeidler
2024-06-06  9:22 ` [pve-devel] [PATCH docs v4 2/2] qm: resource mapping: document `live-migration-capable` setting Dominik Csapak
2024-07-05  9:00 ` [pve-devel] [PATCH guest-common/qemu-server/manager/docs v4] implement experimental vgpu live migration Thomas Lamprecht
