public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [PATCH docs v4 0/6] feature #1027 virtio-9p/virtio-fs
@ 2023-04-25 10:21 Markus Frank
  2023-04-25 10:21 ` [pve-devel] [PATCH docs v4 1/6] added shared filesystem doc for virtio-fs & virtio-9p Markus Frank
                   ` (6 more replies)
  0 siblings, 7 replies; 19+ messages in thread
From: Markus Frank @ 2023-04-25 10:21 UTC (permalink / raw)
  To: pve-devel

pve-docs:

Markus Frank (1):
  added shared filesystem doc for virtio-fs & virtio-9p

 qm.adoc | 61 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 61 insertions(+)


pve-access-control:

v3:
 * replaced /dirs with /map/dirs/<dirid>

v2:
 * admin gives access via an ACL (/dirs/<dirid>)
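
 An ACL on /map/dirs/<dirid> (the v3 path) could then be granted roughly
 like this, with any role that contains the new Map.Use privilege:

  pveum acl modify /map/dirs/<dirid> --users <userid> --roles <role>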

Markus Frank (1):
  added acls for Shared Files Directories

 src/PVE/API2/Directory.pm | 68 +++++++++++++++++++++++++++++++++++++++
 src/PVE/AccessControl.pm  | 16 +++++++++
 src/PVE/RPCEnvironment.pm | 12 ++++++-
 3 files changed, 95 insertions(+), 1 deletion(-)
 create mode 100644 src/PVE/API2/Directory.pm


pve-manager:

v4:
 * moved extract_dir_path from qemu-server to DirConfig in pve-manager

v2:
 * admins define dirs on the host that are eligible for mounting into
  guests (<dirid>: /path/to/share)
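
 For example, /etc/pve/nodes/<nodename>/dirs could then contain a line
 like this (id and path are placeholders):

  share1: /mnt/pve/share1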

Markus Frank (3):
  added Config for Shared Filesystem Directories
  added Shared Files tab in Node Settings
  added options to add virtio-9p & virtio-fs Shared Filesystems to qemu
    config

 PVE/API2/DirConfig.pm                | 129 +++++++++++++++++++
 PVE/API2/Makefile                    |   1 +
 PVE/API2/Nodes.pm                    |   6 +
 PVE/DirConfig.pm                     | 155 +++++++++++++++++++++++
 PVE/Makefile                         |   1 +
 www/manager6/Makefile                |   2 +
 www/manager6/Utils.js                |   1 +
 www/manager6/data/PermPathStore.js   |   3 +
 www/manager6/node/Config.js          |  12 ++
 www/manager6/node/SharedFiles.js     | 177 +++++++++++++++++++++++++++
 www/manager6/qemu/HardwareView.js    |  19 +++
 www/manager6/qemu/SharedFilesEdit.js | 101 +++++++++++++++
 12 files changed, 607 insertions(+)
 create mode 100644 PVE/API2/DirConfig.pm
 create mode 100644 PVE/DirConfig.pm
 create mode 100644 www/manager6/node/SharedFiles.js
 create mode 100644 www/manager6/qemu/SharedFilesEdit.js


qemu-server:

v4:
 * moved extract_dir_path from qemu-server to DirConfig

v3:
 * created own socket and get file descriptor for virtiofsd
 so there is no race between starting virtiofsd & qemu
 * added TODO to replace virtiofsd with the rust implementation in bookworm
 (I packaged the rust implementation for bookworm, and the C implementation
 in qemu will be removed with qemu 8.0)

v2:
 * replaced sharedfiles_fmt path in qemu-server with dirid:
 * user can use the dirid to specify the directory without requiring root access
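
 For example, assuming a directory with id 'share1' is configured on the
 node:

  qm set <vmid> -sharedfiles0 virtio-fs,dirid=share1,tag=share1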

Markus Frank (1):
  feature #1027: virtio-9p & virtio-fs support

 PVE/API2/Qemu.pm  |  19 +++++++
 PVE/QemuServer.pm | 141 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 160 insertions(+)

-- 
2.30.2





^ permalink raw reply	[flat|nested] 19+ messages in thread

* [pve-devel] [PATCH docs v4 1/6] added shared filesystem doc for virtio-fs & virtio-9p
  2023-04-25 10:21 [pve-devel] [PATCH docs v4 0/6] feature #1027 virtio-9p/virtio-fs Markus Frank
@ 2023-04-25 10:21 ` Markus Frank
  2023-04-25 10:21 ` [pve-devel] [PATCH access-control v4 2/6] added acls for Shared Files Directories Markus Frank
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 19+ messages in thread
From: Markus Frank @ 2023-04-25 10:21 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
 qm.adoc | 61 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 61 insertions(+)

diff --git a/qm.adoc b/qm.adoc
index 0726b29..8cc1678 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -934,6 +934,67 @@ recommended to always use a limiter to avoid guests using too many host
 resources. If desired, a value of '0' for `max_bytes` can be used to disable
 all limits.
 
+[[qm_sharedfiles]]
+Shared Filesystems
+~~~~~~~~~~~~~~~~~~
+
+Add directories for Shared Filesystems
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To add a directory, either add it to the "Shared Files" list in the node
+config in the WebUI, or add it to the /etc/pve/nodes/<nodename>/dirs config
+file like this:
+
+----
+<dirid>: /path/to/share
+----
+
+9pfs (virtio-9p)
+^^^^^^^^^^^^^^^^
+
+QEMU's 9pfs uses the Plan 9 Filesystem Protocol to share a directory on the host
+with a guest VM.
+
+To share a directory with 9p, run the following command:
+
+----
+qm set <vmid> -sharedfiles0 virtio-9p,dirid=<dirid>,tag=<mount tag>
+----
+
+To mount QEMU's 9pfs in a guest VM with the Linux kernel 9p driver, run the
+following command:
+
+----
+mount -t 9p -o trans=virtio,version=9p2000.L <mount tag> <mount point>
+----
+
+https://www.linux-kvm.org/page/9p_virtio
+
+https://wiki.qemu.org/Documentation/9psetup
+
+virtio-fs
+^^^^^^^^^
+
+Virtio-fs is a shared file system that enables sharing a directory between
+host and guest VM. It takes advantage of the locality of virtual machines
+and the hypervisor to achieve higher throughput than 9p.
+NUMA must be disabled to use virtio-fs.
+
+To share a directory with virtio-fs, run the following command:
+
+----
+qm set <vmid> -sharedfiles0 virtio-fs,dirid=<dirid>,tag=<mount tag>
+----
+
+To mount virtio-fs in a guest VM with the Linux kernel virtiofs driver, run the
+following command:
+
+----
+mount -t virtiofs <mount tag> <mount point>
+----
+
+https://virtio-fs.gitlab.io/howto-qemu.html
+
 [[qm_bootorder]]
 Device Boot Order
 ~~~~~~~~~~~~~~~~~
-- 
2.30.2





^ permalink raw reply	[flat|nested] 19+ messages in thread

* [pve-devel] [PATCH access-control v4 2/6] added acls for Shared Files Directories
  2023-04-25 10:21 [pve-devel] [PATCH docs v4 0/6] feature #1027 virtio-9p/virtio-fs Markus Frank
  2023-04-25 10:21 ` [pve-devel] [PATCH docs v4 1/6] added shared filesystem doc for virtio-fs & virtio-9p Markus Frank
@ 2023-04-25 10:21 ` Markus Frank
  2023-05-04  8:24   ` Fabian Grünbichler
  2023-04-25 10:21 ` [pve-devel] [PATCH manager v4 3/6] added Config for Shared Filesystem Directories Markus Frank
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 19+ messages in thread
From: Markus Frank @ 2023-04-25 10:21 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
 src/PVE/API2/Directory.pm | 68 +++++++++++++++++++++++++++++++++++++++
 src/PVE/AccessControl.pm  | 16 +++++++++
 src/PVE/RPCEnvironment.pm | 12 ++++++-
 3 files changed, 95 insertions(+), 1 deletion(-)
 create mode 100644 src/PVE/API2/Directory.pm

diff --git a/src/PVE/API2/Directory.pm b/src/PVE/API2/Directory.pm
new file mode 100644
index 0000000..b44ba9d
--- /dev/null
+++ b/src/PVE/API2/Directory.pm
@@ -0,0 +1,68 @@
+package PVE::API2::Directory;
+
+use strict;
+use warnings;
+
+use PVE::Exception qw(raise raise_perm_exc raise_param_exc);
+use PVE::Cluster qw (cfs_read_file cfs_write_file);
+use PVE::Tools qw(split_list extract_param);
+use PVE::JSONSchema qw(get_standard_option register_standard_option);
+use PVE::SafeSyslog;
+
+use PVE::AccessControl;
+use PVE::Auth::Plugin;
+use PVE::TokenConfig;
+
+use PVE::RESTHandler;
+use PVE::DirConfig;
+
+use base qw(PVE::RESTHandler);
+
+__PACKAGE__->register_method ({
+    name => 'index',
+    path => '',
+    method => 'GET',
+    description => "simple return value of parameter 'text'",
+    parameters => {
+	additionalProperties => 0,
+	properties => {
+	    node => {
+		type => 'string',
+	    }
+	},
+    },
+    returns => {
+	type => 'string',
+    },
+    code => sub {
+	my ($param) = @_;
+	return $param->{node};
+    }
+});
+
+
+__PACKAGE__->register_method ({
+    name => 'add_dir',
+    path => 'add',
+    method => 'POST',
+    description => "simple return value of parameter 'text'",
+    parameters => {
+	additionalProperties => 0,
+	properties => {
+	    directory => {
+		type => 'string',
+	    },
+	    node => {
+		type => 'string',
+	    }
+	},
+    },
+    returns => {
+	type => 'string',
+    },
+    code => sub {
+	my ($param) = @_;
+
+	return $param->{node};
+    }
+});
diff --git a/src/PVE/AccessControl.pm b/src/PVE/AccessControl.pm
index 5690a1f..6530753 100644
--- a/src/PVE/AccessControl.pm
+++ b/src/PVE/AccessControl.pm
@@ -1133,6 +1133,18 @@ my $privgroups = {
 	    'Pool.Audit',
 	],
     },
+    Map => {
+	root => [],
+	admin => [
+	    'Map.Modify',
+	],
+	user => [
+	    'Map.Use',
+	],
+	audit => [
+	    'Map.Audit',
+	],
+    },
 };
 
 my $valid_privs = {};
@@ -1166,6 +1178,8 @@ sub create_roles {
     }
 
     $special_roles->{"PVETemplateUser"} = { 'VM.Clone' => 1, 'VM.Audit' => 1 };
+
+    delete($special_roles->{"PVEAdmin"}->{'Map.Modify'});
 };
 
 create_roles();
@@ -1262,6 +1276,8 @@ sub check_path {
 	|/storage/[[:alnum:]\.\-\_]+
 	|/vms
 	|/vms/[1-9][0-9]{2,}
+	|/map/dirs
+	|/map/dirs/[[:alnum:]\.\-\_]+
     )$!xs;
 }
 
diff --git a/src/PVE/RPCEnvironment.pm b/src/PVE/RPCEnvironment.pm
index 8586938..42ff287 100644
--- a/src/PVE/RPCEnvironment.pm
+++ b/src/PVE/RPCEnvironment.pm
@@ -187,10 +187,11 @@ sub compute_api_permission {
 	nodes => qr/Sys\.|Permissions\.Modify/,
 	sdn => qr/SDN\.|Permissions\.Modify/,
 	dc => qr/Sys\.Audit|SDN\./,
+	map => qr/Map\.Modify/
     };
     map { $res->{$_} = {} } keys %$priv_re_map;
 
-    my $required_paths = ['/', '/nodes', '/access/groups', '/vms', '/storage', '/sdn'];
+    my $required_paths = ['/', '/nodes', '/access/groups', '/vms', '/storage', '/sdn', '/map'];
     my $defined_paths = [];
     PVE::AccessControl::iterate_acl_tree("/", $usercfg->{acl_root}, sub {
 	my ($path, $node) = @_;
@@ -245,6 +246,7 @@ sub get_effective_permissions {
 	'/sdn' => 1,
 	'/storage' => 1,
 	'/vms' => 1,
+	'/map' => 1,
     };
 
     my $cfg = $self->{user_cfg};
@@ -361,6 +363,14 @@ sub check_vm_perm {
     return $self->check_full($user, "/vms/$vmid", $privs, $any, $noerr);
 };
 
+sub check_dir_perm {
+    my ($self, $user, $dirid, $privs, $any, $noerr) = @_;
+
+    my $cfg = $self->{user_cfg};
+
+    return $self->check_full($user, "/map/dirs/$dirid", $privs, $any, $noerr);
+};
+
 sub is_group_member {
     my ($self, $group, $user) = @_;
 
-- 
2.30.2





^ permalink raw reply	[flat|nested] 19+ messages in thread

* [pve-devel] [PATCH manager v4 3/6] added Config for Shared Filesystem Directories
  2023-04-25 10:21 [pve-devel] [PATCH docs v4 0/6] feature #1027 virtio-9p/virtio-fs Markus Frank
  2023-04-25 10:21 ` [pve-devel] [PATCH docs v4 1/6] added shared filesystem doc for virtio-fs & virtio-9p Markus Frank
  2023-04-25 10:21 ` [pve-devel] [PATCH access-control v4 2/6] added acls for Shared Files Directories Markus Frank
@ 2023-04-25 10:21 ` Markus Frank
  2023-05-03 11:26   ` Dominik Csapak
  2023-05-04  8:24   ` Fabian Grünbichler
  2023-04-25 10:21 ` [pve-devel] [PATCH manager v4 4/6] added Shared Files tab in Node Settings Markus Frank
                   ` (3 subsequent siblings)
  6 siblings, 2 replies; 19+ messages in thread
From: Markus Frank @ 2023-04-25 10:21 UTC (permalink / raw)
  To: pve-devel

and added an API endpoint for getting, adding, and removing
directories in the config.

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
 PVE/API2/DirConfig.pm | 129 +++++++++++++++++++++++++++++++++++
 PVE/API2/Makefile     |   1 +
 PVE/API2/Nodes.pm     |   6 ++
 PVE/DirConfig.pm      | 155 ++++++++++++++++++++++++++++++++++++++++++
 PVE/Makefile          |   1 +
 5 files changed, 292 insertions(+)
 create mode 100644 PVE/API2/DirConfig.pm
 create mode 100644 PVE/DirConfig.pm

diff --git a/PVE/API2/DirConfig.pm b/PVE/API2/DirConfig.pm
new file mode 100644
index 00000000..0cbc6f96
--- /dev/null
+++ b/PVE/API2/DirConfig.pm
@@ -0,0 +1,129 @@
+package PVE::API2::DirConfig;
+
+use strict;
+use warnings;
+
+use PVE::JSONSchema qw(get_standard_option);
+use PVE::DirConfig;
+use PVE::Tools qw(extract_param);
+
+use base qw(PVE::RESTHandler);
+
+__PACKAGE__->register_method({
+    name => 'get_config',
+    path => '',
+    method => 'GET',
+    description => "Get Directories for Host Directory Sharing.",
+    permissions => {
+	check => ['perm', '/map/dirs', [ 'Map.Audit' ]],
+    },
+    proxyto => 'node',
+    parameters => {
+	additionalProperties => 0,
+	properties => {
+	    node => get_standard_option('pve-node'),
+	},
+    },
+    returns => {
+	type => 'array',
+	items => {
+	    type => 'object',
+	    properties => {
+		dirid => {
+		    type => 'string',
+		    description => 'Directory ID',
+		},
+		path => {
+		    type => 'string',
+		    description => 'Host Directory Path',
+		},
+	    },
+	},
+    },
+    code => sub {
+	my ($param) = @_;
+
+	my $config = PVE::DirConfig::load_config($param->{node});
+	delete $config->{description};
+	my $result = [];
+	foreach my $key (keys %{$config}) {
+	    push @$result, {
+		dirid => $key,
+		path => $config->{$key},
+	    };
+	}
+
+	return $result;
+    }
+});
+
+__PACKAGE__->register_method({
+    name => 'add_dir',
+    path => '',
+    method => 'POST',
+    description => "Add Directories for Host Directory Sharing.",
+    permissions => {
+	check => ['perm', '/map/dirs', [ 'Map.Modify' ]],
+    },
+    protected => 1,
+    proxyto => 'node',
+    parameters => {
+	additionalProperties => 0,
+	properties => {
+	    node => get_standard_option('pve-node'),
+	    dirid => {
+		type => 'string',
+		pattern => '[a-zA-Z0-9\-]+',
+	    },
+	    path => {
+		type => 'string',
+		maxLength => 4096,
+		format => 'pve-storage-path',
+	    },
+	},
+    },
+    returns => { type => "null" },
+    code => sub {
+	my ($param) = @_;
+	my $node = extract_param($param, 'node');
+	my $dirid = extract_param($param, 'dirid');
+	my $path = extract_param($param, 'path');
+	PVE::DirConfig::add_dir_config($node, $dirid, $path);
+	return undef;
+    },
+});
+
+
+__PACKAGE__->register_method({
+    name => 'del_dir',
+    path => '',
+    method => 'DELETE',
+    description => "Remove Directory from Host Directory Sharing.",
+    permissions => {
+	check => ['perm', '/map/dirs', [ 'Map.Modify' ]],
+    },
+    protected => 1,
+    proxyto => 'node',
+    parameters => {
+	additionalProperties => 0,
+	properties => {
+	    node => get_standard_option('pve-node'),
+	    dirid => {
+		type => 'string',
+		pattern => '[a-zA-Z0-9\-]+',
+	    },
+	},
+    },
+    returns => { type => "null" },
+    code => sub {
+	my ($param) = @_;
+
+	my $node = extract_param($param, 'node');
+	my $dirid = extract_param($param, 'dirid');
+	PVE::DirConfig::del_dir_config($node, $dirid);
+	return undef;
+    },
+});
+
+
+1;
diff --git a/PVE/API2/Makefile b/PVE/API2/Makefile
index 5c08ebe0..1b96223c 100644
--- a/PVE/API2/Makefile
+++ b/PVE/API2/Makefile
@@ -12,6 +12,7 @@ PERLSOURCE = 			\
 	Ceph.pm			\
 	Certificates.pm		\
 	Cluster.pm		\
+	DirConfig.pm		\
 	HAConfig.pm		\
 	Hardware.pm		\
 	Network.pm		\
diff --git a/PVE/API2/Nodes.pm b/PVE/API2/Nodes.pm
index bfe5c40a..b4e2992f 100644
--- a/PVE/API2/Nodes.pm
+++ b/PVE/API2/Nodes.pm
@@ -48,6 +48,7 @@ use PVE::API2::LXC::Status;
 use PVE::API2::LXC;
 use PVE::API2::Network;
 use PVE::API2::NodeConfig;
+use PVE::API2::DirConfig;
 use PVE::API2::Qemu::CPU;
 use PVE::API2::Qemu;
 use PVE::API2::Replication;
@@ -199,6 +200,11 @@ __PACKAGE__->register_method ({
     path => 'config',
 });
 
+__PACKAGE__->register_method ({
+    subclass => "PVE::API2::DirConfig",
+    path => 'dirs',
+});
+
 if ($have_sdn) {
     __PACKAGE__->register_method ({
 	subclass => "PVE::API2::Network::SDN::Zones::Status",
diff --git a/PVE/DirConfig.pm b/PVE/DirConfig.pm
new file mode 100644
index 00000000..56796029
--- /dev/null
+++ b/PVE/DirConfig.pm
@@ -0,0 +1,155 @@
+package PVE::DirConfig;
+
+use strict;
+use warnings;
+
+use PVE::JSONSchema qw(get_standard_option);
+use PVE::Tools qw(file_get_contents file_set_contents lock_file);
+
+my $dir_config_lock = '/var/lock/dirs.lock';
+
+sub config_file {
+    my ($node) = @_;
+
+    return "/etc/pve/nodes/${node}/dirs";
+}
+
+sub load_config {
+    my ($node) = @_;
+
+    my $filename = config_file($node);
+    my $raw = eval { PVE::Tools::file_get_contents($filename); };
+    return {} if !$raw;
+
+    return parse_file_config($raw, $filename);
+}
+
+sub write_config {
+    my ($node, $conf) = @_;
+
+    my $filename = config_file($node);
+
+    my $raw = write_file_config($conf);
+
+    PVE::Tools::file_set_contents($filename, $raw);
+}
+
+sub lock_config {
+    my ($node, $realcode, @param) = @_;
+
+    # make sure configuration file is up-to-date
+    my $code = sub {
+	PVE::Cluster::cfs_update();
+	$realcode->(@_);
+    };
+
+    my $res = lock_file($dir_config_lock, 10, $code, @param);
+
+    die $@ if $@;
+
+    return $res;
+}
+
+my $descr = " Description for Shared Files Directory Config.\n"
+    ." Add Directories with:\n dirid: /path/to/share";
+
+my $dir_desc = {
+    path => {
+	type => 'string',
+	format_description => 'path',
+	description => 'path of Directory ID',
+	default_key => 1,
+    },
+};
+
+my $conf_schema = {
+    type => 'object',
+    properties => {},
+};
+
+sub parse_file_config : prototype($$) {
+    my ($content, $filename) = @_;
+    return undef if !defined($content);
+
+    my $conf = PVE::JSONSchema::parse_config($conf_schema, $filename, $content, 'description');
+
+    return $conf;
+}
+
+sub parse_dir_config {
+    my ($data) = @_;
+
+    return PVE::JSONSchema::parse_property_string($dir_desc, $data);
+}
+
+sub print_dir_config {
+    my ($data) = @_;
+
+    return PVE::JSONSchema::print_property_string($data, $dir_desc);
+}
+
+sub add_dir_config {
+    my ($node, $dirid, $path) = @_;
+    my $code = sub {
+	my $conf = load_config($node);
+	if (! -e $path) {
+	    mkdir $path;
+	}
+	if ((-e $path) && (! -d $path)) {
+	    die "Path $path exists but is not a directory\n"
+	}
+	$conf->{$dirid} = $path;
+	write_config($node, $conf);
+    };
+    lock_config($node, $code);
+    die $@ if $@;
+}
+
+sub del_dir_config {
+    my ($node, $dirid) = @_;
+    my $code = sub {
+	my $conf = load_config($node);
+	delete $conf->{$dirid};
+	write_config($node, $conf);
+    };
+    lock_config($node, $code);
+    die $@ if $@;
+}
+
+sub write_file_config {
+    my ($conf) = @_;
+
+    my $raw = '';
+    # add description as comment to top of file
+    foreach my $cl (split(/\n/, $descr)) {
+	$raw .= '#' . $cl . "\n";
+    }
+
+    for my $key (sort keys %$conf) {
+	next if ($key eq 'description');
+	my $value = $conf->{$key};
+	die "detected invalid newline inside property '$key'\n"
+	    if $value =~ m/\n/;
+	$raw .= "$key: $value\n";
+    }
+
+    return $raw;
+}
+
+sub extract_dir_path {
+    my ($nodename, $dirid) = @_;
+    my $dir_config = load_config($nodename);
+    my $path = $dir_config->{$dirid};
+    if (!$path) {
+	die "Directory ID $dirid does not exist\n";
+    }
+    if (! -e $path) {
+	die "Path $path does not exist\n";
+    }
+    if ((-e $path) && (! -d $path)) {
+	die "Path $path exists but is not a directory\n"
+    }
+    return $path;
+}
+
+1;
diff --git a/PVE/Makefile b/PVE/Makefile
index 48b85d33..c11066bd 100644
--- a/PVE/Makefile
+++ b/PVE/Makefile
@@ -9,6 +9,7 @@ PERLSOURCE = 			\
 	AutoBalloon.pm		\
 	CertCache.pm		\
 	CertHelpers.pm		\
+	DirConfig.pm		\
 	ExtMetric.pm		\
 	HTTPServer.pm		\
 	Jobs.pm			\
-- 
2.30.2





^ permalink raw reply	[flat|nested] 19+ messages in thread

* [pve-devel] [PATCH manager v4 4/6] added Shared Files tab in Node Settings
  2023-04-25 10:21 [pve-devel] [PATCH docs v4 0/6] feature #1027 virtio-9p/virtio-fs Markus Frank
                   ` (2 preceding siblings ...)
  2023-04-25 10:21 ` [pve-devel] [PATCH manager v4 3/6] added Config for Shared Filesystem Directories Markus Frank
@ 2023-04-25 10:21 ` Markus Frank
  2023-04-25 10:21 ` [pve-devel] [PATCH manager v4 5/6] added options to add virtio-9p & virtio-fs Shared Filesystems to qemu config Markus Frank
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 19+ messages in thread
From: Markus Frank @ 2023-04-25 10:21 UTC (permalink / raw)
  To: pve-devel

to add/remove/show directories that are available for shared
filesystems.

Also added the /map/dirs path to PermPathStore.

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
 www/manager6/Makefile              |   1 +
 www/manager6/data/PermPathStore.js |   3 +
 www/manager6/node/Config.js        |  12 ++
 www/manager6/node/SharedFiles.js   | 177 +++++++++++++++++++++++++++++
 4 files changed, 193 insertions(+)
 create mode 100644 www/manager6/node/SharedFiles.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index 2b577c8e..bce14fdb 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -199,6 +199,7 @@ JSSRC= 							\
 	node/CmdMenu.js					\
 	node/Config.js					\
 	node/Directory.js				\
+	node/SharedFiles.js				\
 	node/LVM.js					\
 	node/LVMThin.js					\
 	node/StatusView.js				\
diff --git a/www/manager6/data/PermPathStore.js b/www/manager6/data/PermPathStore.js
index cf702c03..d83f9e12 100644
--- a/www/manager6/data/PermPathStore.js
+++ b/www/manager6/data/PermPathStore.js
@@ -13,6 +13,7 @@ Ext.define('PVE.data.PermPathStore', {
 	{ 'value': '/sdn/zones' },
 	{ 'value': '/storage' },
 	{ 'value': '/vms' },
+	{ 'value': '/map/dirs' },
     ],
 
     constructor: function(config) {
@@ -39,6 +40,8 @@ Ext.define('PVE.data.PermPathStore', {
 		    break;
 		case 'pool': path = '/pool/' + record.get('pool');
 		    break;
+		case 'dirs': path = '/map/dirs/' + record.get('dirs');
+		    break;
 	    }
 	    if (path !== undefined && !donePaths[path]) {
 		me.add({ value: path });
diff --git a/www/manager6/node/Config.js b/www/manager6/node/Config.js
index 0cc23fb4..b35d5abc 100644
--- a/www/manager6/node/Config.js
+++ b/www/manager6/node/Config.js
@@ -410,6 +410,18 @@ Ext.define('PVE.node.Config', {
 		});
 	}
 
+	if (caps.map['Map.Modify']) {
+	    me.items.push(
+		{
+		    xtype: 'pveSharedFilesList',
+		    title: gettext('Shared Files'),
+		    iconCls: 'fa fa-folder',
+		    onlineHelp: 'qm_sharedfiles',
+		    itemId: 'sharedFiles',
+		},
+	    );
+	}
+
 	me.items.push(
 	    {
 		title: gettext('Task History'),
diff --git a/www/manager6/node/SharedFiles.js b/www/manager6/node/SharedFiles.js
new file mode 100644
index 00000000..4152de1b
--- /dev/null
+++ b/www/manager6/node/SharedFiles.js
@@ -0,0 +1,177 @@
+Ext.define('PVE.node.CreateSharedFiles', {
+    extend: 'Proxmox.window.Edit',
+    xtype: 'pveCreateSharedFiles',
+
+    subject: "Shared Directory",
+
+    onlineHelp: 'qm_sharedfiles',
+
+    initComponent: function() {
+        var me = this;
+
+	if (!me.nodename) {
+	    throw "no node name specified";
+	}
+
+	me.isCreate = true;
+
+        Ext.applyIf(me, {
+	    url: "/nodes/" + me.nodename + "/dirs",
+	    method: 'POST',
+	    items: [
+		{
+		    xtype: 'proxmoxtextfield',
+		    name: 'dirid',
+		    fieldLabel: gettext('Directory ID'),
+		    allowBlank: false,
+		},
+		{
+		    xtype: 'proxmoxtextfield',
+		    name: 'path',
+		    fieldLabel: gettext('Directory Path'),
+		    allowBlank: false,
+		},
+            ],
+        });
+
+        me.callParent();
+    },
+});
+
+Ext.define('PVE.node.SharedFilesList', {
+    extend: 'Ext.grid.Panel',
+    xtype: 'pveSharedFilesList',
+
+    viewModel: {
+	data: {
+	    path: '',
+	},
+	formulas: {
+	    dirid: (get) => get('dirid'),
+	},
+    },
+
+    controller: {
+	xclass: 'Ext.app.ViewController',
+
+	removeDirectory: function() {
+	    let me = this;
+	    let vm = me.getViewModel();
+	    let view = me.getView();
+
+	    const dirid = vm.get('dirid');
+
+	    if (!view.nodename) {
+		throw "no node name specified";
+	    }
+
+	    if (!dirid) {
+		throw "no directory name specified";
+	    }
+	    Ext.create('Proxmox.window.SafeDestroy', {
+		url: `/nodes/${view.nodename}/dirs`,
+		item: { id: dirid },
+		params: { dirid: dirid },
+		taskName: 'Remove Directory from Shared Files',
+		autoShow: true,
+		listeners: {
+		    destroy: () => view.reload(),
+		},
+	    }).show();
+	},
+    },
+
+    stateful: true,
+    stateId: 'grid-node-directory',
+    columns: [
+	{
+	    text: gettext('Directory ID'),
+	    dataIndex: 'dirid',
+	    flex: 1,
+	},
+	{
+	    text: gettext('Directory Path'),
+	    dataIndex: 'path',
+	    flex: 1,
+	},
+    ],
+
+    rootVisible: false,
+    useArrows: true,
+
+    tbar: [
+	{
+	    text: gettext('Reload'),
+	    iconCls: 'fa fa-refresh',
+	    handler: function() {
+		this.up('panel').reload();
+	    },
+	},
+	{
+	    text: `${gettext('Create')}: ${gettext('Directory')}`,
+	    handler: function() {
+		let view = this.up('panel');
+		Ext.create('PVE.node.CreateSharedFiles', {
+		    nodename: view.nodename,
+		    listeners: {
+			destroy: () => view.reload(),
+		    },
+		    autoShow: true,
+		});
+	    },
+	},
+	{
+	    text: gettext('Remove'),
+	    itemId: 'removeDir',
+	    handler: 'removeDirectory',
+	    disabled: true,
+	    bind: {
+		disabled: '{!dirid}',
+	    },
+	},
+    ],
+
+    reload: function() {
+	let me = this;
+	me.store.load();
+	me.store.sort();
+    },
+
+    listeners: {
+	activate: function() {
+	    this.reload();
+	},
+	selectionchange: function(model, selected) {
+	    let me = this;
+	    let vm = me.getViewModel();
+
+	    vm.set('dirid', selected[0]?.data.dirid || '');
+	},
+    },
+
+    initComponent: function() {
+        let me = this;
+
+	me.nodename = me.pveSelNode.data.node;
+	if (!me.nodename) {
+	    throw "no node name specified";
+	}
+
+	Ext.apply(me, {
+	    store: {
+		fields: ['dirid', 'path'],
+		proxy: {
+		    type: 'proxmox',
+		    url: `/api2/json/nodes/${me.nodename}/dirs`,
+		},
+		sorters: 'dirid',
+	    },
+	});
+
+	me.callParent();
+
+	Proxmox.Utils.monStoreErrors(me, me.getStore(), true);
+	me.reload();
+    },
+});
+
-- 
2.30.2





^ permalink raw reply	[flat|nested] 19+ messages in thread

* [pve-devel] [PATCH manager v4 5/6] added options to add virtio-9p & virtio-fs Shared Filesystems to qemu config
  2023-04-25 10:21 [pve-devel] [PATCH docs v4 0/6] feature #1027 virtio-9p/virtio-fs Markus Frank
                   ` (3 preceding siblings ...)
  2023-04-25 10:21 ` [pve-devel] [PATCH manager v4 4/6] added Shared Files tab in Node Settings Markus Frank
@ 2023-04-25 10:21 ` Markus Frank
  2023-04-25 10:21 ` [pve-devel] [PATCH qemu-server v4 6/6] feature #1027: virtio-9p & virtio-fs support Markus Frank
  2023-05-04  8:24 ` [pve-devel] [PATCH docs v4 0/6] feature #1027 virtio-9p/virtio-fs Fabian Grünbichler
  6 siblings, 0 replies; 19+ messages in thread
From: Markus Frank @ 2023-04-25 10:21 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
 www/manager6/Makefile                |   1 +
 www/manager6/Utils.js                |   1 +
 www/manager6/qemu/HardwareView.js    |  19 +++++
 www/manager6/qemu/SharedFilesEdit.js | 101 +++++++++++++++++++++++++++
 4 files changed, 122 insertions(+)
 create mode 100644 www/manager6/qemu/SharedFilesEdit.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index bce14fdb..5900c9a4 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -241,6 +241,7 @@ JSSRC= 							\
 	qemu/QemuBiosEdit.js				\
 	qemu/RNGEdit.js					\
 	qemu/SSHKey.js					\
+	qemu/SharedFilesEdit.js				\
 	qemu/ScsiHwEdit.js				\
 	qemu/SerialEdit.js				\
 	qemu/Smbios1Edit.js				\
diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index 94e75d5c..a8e749b5 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -1596,6 +1596,7 @@ Ext.define('PVE.Utils', {
 	serial: 4,
 	rng: 1,
 	tpmstate: 1,
+	sharedfiles: 10,
     },
 
     // we can have usb6 and up only for specific machine/ostypes
diff --git a/www/manager6/qemu/HardwareView.js b/www/manager6/qemu/HardwareView.js
index af35a980..e1ceb32a 100644
--- a/www/manager6/qemu/HardwareView.js
+++ b/www/manager6/qemu/HardwareView.js
@@ -309,6 +309,16 @@ Ext.define('PVE.qemu.HardwareView', {
 	    never_delete: !caps.nodes['Sys.Console'],
 	    header: gettext("VirtIO RNG"),
 	};
+	for (let i = 0; i < PVE.Utils.hardware_counts.sharedfiles; i++) {
+	    let confid = "sharedfiles" + i.toString();
+	    rows[confid] = {
+		group: 50,
+		order: i,
+		iconCls: 'folder',
+		editor: 'PVE.qemu.SharedFilesEdit',
+		header: gettext('Shared FS') + ' (' + confid +')',
+	    };
+	}
 
 	var sorterFn = function(rec1, rec2) {
 	    var v1 = rec1.data.key;
@@ -582,6 +592,7 @@ Ext.define('PVE.qemu.HardwareView', {
 	    const noVMConfigDiskPerm = !caps.vms['VM.Config.Disk'];
 	    const noVMConfigCDROMPerm = !caps.vms['VM.Config.CDROM'];
 	    const noVMConfigCloudinitPerm = !caps.vms['VM.Config.Cloudinit'];
+	    const noVMConfigOptionsPerm = !caps.vms['VM.Config.Options'];
 
 	    me.down('#addUsb').setDisabled(noSysConsolePerm || isAtUsbLimit());
 	    me.down('#addPci').setDisabled(noSysConsolePerm || isAtLimit('hostpci'));
@@ -591,6 +602,7 @@ Ext.define('PVE.qemu.HardwareView', {
 	    me.down('#addRng').setDisabled(noSysConsolePerm || isAtLimit('rng'));
 	    efidisk_menuitem.setDisabled(noVMConfigDiskPerm || isAtLimit('efidisk'));
 	    me.down('#addTpmState').setDisabled(noSysConsolePerm || isAtLimit('tpmstate'));
+	    me.down('#addFileshare').setDisabled(noVMConfigOptionsPerm || isAtLimit('sharedfiles'));
 	    me.down('#addCloudinitDrive').setDisabled(noVMConfigCDROMPerm || noVMConfigCloudinitPerm || hasCloudInit);
 
 	    if (!rec) {
@@ -735,6 +747,13 @@ Ext.define('PVE.qemu.HardwareView', {
 				disabled: !caps.nodes['Sys.Console'],
 				handler: editorFactory('RNGEdit'),
 			    },
+			    {
+				text: gettext("Shared Filesystem"),
+				itemId: 'addFileshare',
+				iconCls: 'fa fa-folder',
+				disabled: !caps.nodes['Sys.Console'],
+				handler: editorFactory('SharedFilesEdit'),
+			    },
 			],
 		    }),
 		},
diff --git a/www/manager6/qemu/SharedFilesEdit.js b/www/manager6/qemu/SharedFilesEdit.js
new file mode 100644
index 00000000..b4d0a253
--- /dev/null
+++ b/www/manager6/qemu/SharedFilesEdit.js
@@ -0,0 +1,101 @@
+Ext.define('PVE.qemu.SharedFilesInputPanel', {
+    extend: 'Proxmox.panel.InputPanel',
+    xtype: 'pveSharedFilesInputPanel',
+    onlineHelp: 'qm_sharedfiles',
+
+    insideWizard: false,
+
+    onGetValues: function(values) {
+	var me = this;
+	var confid = me.confid;
+	var params = {};
+	params[confid] = PVE.Parser.printPropertyString(values, 'type');
+	return params;
+    },
+
+    setSharedfiles: function(confid, data) {
+	var me = this;
+	me.confid = confid;
+	me.sharedfiles = data;
+	me.setValues(me.sharedfiles);
+    },
+    items: [
+	{
+	    name: 'type',
+	    xtype: 'proxmoxKVComboBox',
+	    fieldLabel: gettext('Shared FS Type'),
+	    comboItems: [['virtio-9p', 'virtio-9p'], ['virtio-fs', 'virtio-fs']],
+	    allowBlank: false,
+	},
+	{
+	    xtype: 'proxmoxtextfield',
+	    emptyText: 'dirid',
+	    fieldLabel: gettext('Directory ID'),
+	    name: 'dirid',
+	    allowBlank: false,
+	},
+	{
+	    xtype: 'proxmoxtextfield',
+	    emptyText: 'tag name',
+	    fieldLabel: gettext('tag'),
+	    name: 'tag',
+	    allowBlank: false,
+	},
+    ],
+    initComponent: function() {
+	var me = this;
+
+	me.sharedfiles = {};
+	me.confid = 'sharedfiles0';
+	me.callParent();
+    },
+});
+
+Ext.define('PVE.qemu.SharedFilesEdit', {
+    extend: 'Proxmox.window.Edit',
+
+    subject: gettext('Filesystem Passthrough'),
+
+    initComponent: function() {
+	var me = this;
+
+	me.isCreate = !me.confid;
+
+	var ipanel = Ext.create('PVE.qemu.SharedFilesInputPanel', {
+	    confid: me.confid,
+	    isCreate: me.isCreate,
+	});
+
+	Ext.applyIf(me, {
+	    items: ipanel,
+	});
+
+	me.callParent();
+
+	me.load({
+	    success: function(response) {
+	        me.conf = response.result.data;
+	        var i, confid;
+	        if (!me.isCreate) {
+		    var value = me.conf[me.confid];
+		    var sharedfiles = PVE.Parser.parsePropertyString(value, "type");
+		    if (!sharedfiles) {
+			Ext.Msg.alert(gettext('Error'), 'Unable to parse sharedfiles options');
+			me.close();
+			return;
+		    }
+		    ipanel.setSharedfiles(me.confid, sharedfiles);
+		} else {
+		    for (i = 0; i < PVE.Utils.hardware_counts.sharedfiles; i++) {
+		        confid = 'sharedfiles' + i.toString();
+		        if (!Ext.isDefined(me.conf[confid])) {
+			    me.confid = confid;
+			    break;
+			}
+		    }
+		    ipanel.setSharedfiles(me.confid, {});
+		}
+	    },
+	});
+    },
+});
-- 
2.30.2





^ permalink raw reply	[flat|nested] 19+ messages in thread

* [pve-devel] [PATCH qemu-server v4 6/6] feature #1027: virtio-9p & virtio-fs support
  2023-04-25 10:21 [pve-devel] [PATCH docs v4 0/6] feature #1027 virtio-9p/virtio-fs Markus Frank
                   ` (4 preceding siblings ...)
  2023-04-25 10:21 ` [pve-devel] [PATCH manager v4 5/6] added options to add virtio-9p & virtio-fs Shared Filesystems to qemu config Markus Frank
@ 2023-04-25 10:21 ` Markus Frank
  2023-05-04  8:39   ` Fabian Grünbichler
  2023-05-04  8:24 ` [pve-devel] [PATCH docs v4 0/6] feature #1027 virtio-9p/virtio-fs Fabian Grünbichler
  6 siblings, 1 reply; 19+ messages in thread
From: Markus Frank @ 2023-04-25 10:21 UTC (permalink / raw)
  To: pve-devel

adds support for sharing directories with a guest VM

virtio-9p can simply be started by QEMU
virtio-fs needs virtiofsd to be started separately
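
(this series uses the C implementation shipped with QEMU; roughly

    /usr/lib/kvm/virtiofsd --fd=<fd> -o source=<path> -o cache=always

with the listening socket created by qemu-server beforehand and its fd
handed over, see start_virtiofs in the patch below)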

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
 PVE/API2/Qemu.pm  |  19 +++++++
 PVE/QemuServer.pm | 141 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 160 insertions(+)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 587bb22..810e124 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -639,6 +639,8 @@ my $check_vm_modify_config_perm = sub {
 	    # the user needs Disk and PowerMgmt privileges to change the vmstate
 	    # also needs privileges on the storage, that will be checked later
 	    $rpcenv->check_vm_perm($authuser, $vmid, $pool, ['VM.Config.Disk', 'VM.PowerMgmt' ]);
+	} elsif ($opt =~ m/^sharedfiles\d$/) {
+	    # needs $param->{$opt}, so the check happens in $check_vm_dir_perm
 	} else {
 	    # catches hostpci\d+, args, lock, etc.
 	    # new options will be checked here
@@ -649,6 +651,20 @@ my $check_vm_modify_config_perm = sub {
     return 1;
 };
 
+my $check_vm_dir_perm = sub {
+    my ($rpcenv, $authuser, $param) = @_;
+
+    return 1 if $authuser eq 'root@pam';
+
+    foreach my $opt (keys %{$param}) {
+	if ($opt =~ m/^sharedfiles\d$/) {
+	    my $sharedfiles = PVE::QemuServer::parse_sharedfiles($param->{$opt});
+	    $rpcenv->check_dir_perm($authuser, $sharedfiles->{dirid}, ['Map.Use']);
+	}
+    }
+    return 1;
+};
+
 __PACKAGE__->register_method({
     name => 'vmlist',
     path => '',
@@ -876,6 +892,7 @@ __PACKAGE__->register_method({
 
 	    &$check_vm_modify_config_perm($rpcenv, $authuser, $vmid, $pool, [ keys %$param]);
 
+	    &$check_vm_dir_perm($rpcenv, $authuser, $param);
 	    &$check_vm_create_serial_perm($rpcenv, $authuser, $vmid, $pool, $param);
 	    &$check_vm_create_usb_perm($rpcenv, $authuser, $vmid, $pool, $param);
 
@@ -1576,6 +1593,8 @@ my $update_vm_api  = sub {
 
     &$check_vm_modify_config_perm($rpcenv, $authuser, $vmid, undef, [keys %$param]);
 
+    &$check_vm_dir_perm($rpcenv, $authuser, $param);
+
     &$check_storage_access($rpcenv, $authuser, $storecfg, $vmid, $param);
 
     my $updatefn =  sub {
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index c1d0fd2..fc07311 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -274,6 +274,31 @@ my $rng_fmt = {
     },
 };
 
+my $sharedfiles_fmt = {
+    type => {
+	type => 'string',
+	default_key => 1,
+	enum => ['virtio-9p', 'virtio-fs'],
+	description => "sharedfiles via"
+	    ." virtio-9p (https://www.linux-kvm.org/page/9p_virtio)"
+	    ." or virtio-fs (https://virtio-fs.gitlab.io/howto-qemu.html)",
+	format_description => "virtio-sharedfiles-type",
+	optional => 1,
+    },
+    dirid => {
+	type => 'string',
+	description => "dirid of directory you want to share with the guest VM",
+	format_description => "virtio-sharedfiles-dirid",
+	optional => 1,
+    },
+    tag => {
+	type => 'string',
+	description => "tag name for mounting in the guest VM",
+	format_description => "virtio-sharedfiles-tag",
+	optional => 1,
+    },
+};
+
 my $meta_info_fmt = {
     'ctime' => {
 	type => 'integer',
@@ -832,6 +857,7 @@ while (my ($k, $v) = each %$confdesc) {
 
 my $MAX_USB_DEVICES = 14;
 my $MAX_NETS = 32;
+my $MAX_SHAREDFILES = 10;
 my $MAX_SERIAL_PORTS = 4;
 my $MAX_PARALLEL_PORTS = 3;
 my $MAX_NUMA = 8;
@@ -974,6 +1000,12 @@ my $netdesc = {
     description => "Specify network devices.",
 };
 
+my $sharedfilesdesc = {
+    optional => 1,
+    type => 'string', format => $sharedfiles_fmt,
+    description => "share files between host and guest",
+};
+
 PVE::JSONSchema::register_standard_option("pve-qm-net", $netdesc);
 
 my $ipconfig_fmt = {
@@ -1035,6 +1067,10 @@ for (my $i = 0; $i < $MAX_NETS; $i++)  {
     $confdesc_cloudinit->{"ipconfig$i"} = $ipconfigdesc;
 }
 
+for (my $i = 0; $i < $MAX_SHAREDFILES; $i++)  {
+    $confdesc->{"sharedfiles$i"} = $sharedfilesdesc;
+}
+
 foreach my $key (keys %$confdesc_cloudinit) {
     $confdesc->{$key} = $confdesc_cloudinit->{$key};
 }
@@ -1988,6 +2024,16 @@ sub parse_net {
     return $res;
 }
 
+sub parse_sharedfiles {
+    my ($value) = @_;
+
+    return if !$value;
+    my $res = eval { parse_property_string($sharedfiles_fmt, $value) };
+
+    warn $@ if $@;
+    return $res;
+}
+
 # ipconfigX ip=cidr,gw=ip,ip6=cidr,gw6=ip
 sub parse_ipconfig {
     my ($data) = @_;
@@ -4105,6 +4151,44 @@ sub config_to_command {
 	push @$devices, '-device', $netdevicefull;
     }
 
+    my $onevirtiofs = 0;
+    for (my $i = 0; $i < $MAX_SHAREDFILES; $i++) {
+	my $sharedfilesstr = "sharedfiles$i";
+
+	next if !$conf->{$sharedfilesstr};
+	my $sharedfiles = parse_sharedfiles($conf->{$sharedfilesstr});
+	next if !$sharedfiles;
+
+	die $sharedfilesstr.' needs a type (virtio-9p or virtio-fs)' if !$sharedfiles->{type};
+	die $sharedfilesstr.' needs a dirid of a directory to share' if !$sharedfiles->{dirid};
+	die $sharedfilesstr.' needs a mount tag' if !$sharedfiles->{tag};
+
+	if ($sharedfiles->{type} eq 'virtio-fs' && $conf->{numa}) {
+	    die "disable numa to use virtio-fs or use virtio-9p instead";
+	}
+
+	my $path = PVE::DirConfig::extract_dir_path($nodename, $sharedfiles->{dirid});
+
+	if ($sharedfiles->{type} eq 'virtio-9p') {
+	    push @$devices, '-fsdev', "local,security_model=passthrough,id=fsdev$i"
+		.",path=$path";
+	    push @$devices, '-device', "virtio-9p-pci,id=fs9p$i,fsdev=fsdev$i"
+		.",mount_tag=$sharedfiles->{tag}";
+	}
+	if ($sharedfiles->{type} eq 'virtio-fs') {
+	    push @$devices, '-chardev', "socket,id=virtfs$i,path=/var/run/virtiofsd/vm$vmid-fs$i";
+	    push @$devices, '-device', 'vhost-user-fs-pci,queue-size=1024'
+		.",chardev=virtfs$i,tag=$sharedfiles->{tag}";
+	    $onevirtiofs = 1;
+	}
+    }
+
+    if ($onevirtiofs) {
+	push @$devices, '-object', 'memory-backend-file,id=mem'
+	    .",size=$conf->{memory}M,mem-path=/dev/shm,share=on";
+	push @$devices, '-numa', 'node,memdev=mem';
+    }
+
     if ($conf->{ivshmem}) {
 	my $ivshmem = parse_property_string($ivshmem_fmt, $conf->{ivshmem});
 
@@ -4190,6 +4274,41 @@ sub config_to_command {
     return wantarray ? ($cmd, $vollist, $spice_port) : $cmd;
 }
 
+sub start_virtiofs {
+    my ($vmid, $path, $fsid, $dirid) = @_;
+
+    my $socket_path_root = "/var/run/virtiofsd";
+    mkdir $socket_path_root;
+    my $socket_path = "$socket_path_root/vm$vmid-fs$fsid";
+    unlink($socket_path);
+    my $socket = IO::Socket::UNIX->new(
+	Type => SOCK_STREAM,
+	Local => $socket_path,
+	Listen => 1,
+    ) or die "cannot create socket - $!\n";
+
+    my $flags = fcntl($socket, F_GETFD, 0)
+	or die "failed to get file descriptor flags: $!\n";
+    fcntl($socket, F_SETFD, $flags & ~FD_CLOEXEC)
+	or die "failed to remove FD_CLOEXEC from file descriptor\n";
+
+    my $fd = $socket->fileno();
+
+    my $pid = fork();
+    if ($pid == 0) {
+	# TODO replace with rust implementation in bookworm
+	run_command(['/usr/lib/kvm/virtiofsd', "--fd=$fd",
+	    '-o', "source=$path", '-o', 'cache=always']);
+	POSIX::_exit(0);
+    } elsif (!defined($pid)) {
+	die "could not fork to start virtiofsd";
+    }
+
+    # return socket to keep it alive,
+    # so that qemu will wait for virtiofsd to start
+    return $socket;
+}
+
 sub check_rng_source {
     my ($source) = @_;
 
@@ -5747,6 +5866,23 @@ sub vm_start_nolock {
     my ($cmd, $vollist, $spice_port) = config_to_command($storecfg, $vmid,
 	$conf, $defaults, $forcemachine, $forcecpu, $params->{'pbs-backing'});
 
+    my $nodename = nodename();
+    my @sockets;
+    for (my $i = 0; $i < $MAX_SHAREDFILES; $i++) {
+	my $sharedfilesstr = "sharedfiles$i";
+
+	next if !$conf->{$sharedfilesstr};
+	my $sharedfiles = parse_sharedfiles($conf->{$sharedfilesstr});
+	next if !$sharedfiles;
+
+	my $path = PVE::DirConfig::extract_dir_path($nodename, $sharedfiles->{dirid});
+
+	if ($sharedfiles && $sharedfiles->{type} eq 'virtio-fs' && !$conf->{numa}) {
+	    my $socket = start_virtiofs($vmid, $path, $i, $sharedfiles->{dirid});
+	    push @sockets, $socket;
+	}
+    }
+
     my $migration_ip;
     my $get_migration_ip = sub {
 	my ($nodename) = @_;
@@ -6094,6 +6230,11 @@ sub vm_start_nolock {
 
     PVE::GuestHelpers::exec_hookscript($conf, $vmid, 'post-start');
 
+    foreach my $socket (@sockets) {
+	shutdown($socket, 2);
+	close($socket);
+    }
+
     return $res;
 }
 
-- 
2.30.2





^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [pve-devel] [PATCH manager v4 3/6] added Config for Shared Filesystem Directories
  2023-04-25 10:21 ` [pve-devel] [PATCH manager v4 3/6] added Config for Shared Filesystem Directories Markus Frank
@ 2023-05-03 11:26   ` Dominik Csapak
  2023-05-04  8:13     ` Thomas Lamprecht
  2023-05-04  8:24   ` Fabian Grünbichler
  1 sibling, 1 reply; 19+ messages in thread
From: Dominik Csapak @ 2023-05-03 11:26 UTC (permalink / raw)
  To: Proxmox VE development discussion, Markus Frank
  Cc: Thomas Lamprecht, Fabian Grünbichler

just a short comment since this series overlaps a bit with my
cluster resource mapping series (I plan on sending a v4 soon).

I'd prefer to have the configuration endpoints for mapping bundled in a subdirectory,
so instead of /nodes/<node>/dirs/ I'd put it in /nodes/<node>/mapping/dirs/
(or /nodes/<node>/map/dirs )

@thomas, @fabian, any other input on that?




^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [pve-devel] [PATCH manager v4 3/6] added Config for Shared Filesystem Directories
  2023-05-03 11:26   ` Dominik Csapak
@ 2023-05-04  8:13     ` Thomas Lamprecht
  2023-05-04  8:31       ` Dominik Csapak
  0 siblings, 1 reply; 19+ messages in thread
From: Thomas Lamprecht @ 2023-05-04  8:13 UTC (permalink / raw)
  To: Dominik Csapak, Proxmox VE development discussion, Markus Frank

Am 03/05/2023 um 13:26 schrieb Dominik Csapak:
> just a short comment since this series overlaps a bit with my
> cluster resource mapping series (I plan on sending a v4 soon).
> 
> I'd prefer to have the configuration endpoints for mapping bundled in a subdirectory,
> so instead of /nodes/<node>/dirs/ I'd put it in /nodes/<node>/mapping/dirs/
> (or /nodes/<node>/map/dirs )
> 
> @thomas, @fabian, any other input on that?
> 

huh? aren't mappings by definition cluster wide, i.e. /cluster/resource-map/<mapping-id>,
which then allows adding/updating the mapping of a resource on a specific node?
A node specific path makes no sense to me; at most it would if adding/removing a mapping
were completely decoupled from adding/removing/updating entries to it, but that seems
convoluted from a usage POV and easy to get out of sync with the actual mapping list.
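
something roughly like this, purely illustrative (neither the section nor
the property names are settled):

  dir: share1
      node node1,path=/srv/share1
      node node2,path=/mnt/share1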




^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [pve-devel] [PATCH manager v4 3/6] added Config for Shared Filesystem Directories
  2023-04-25 10:21 ` [pve-devel] [PATCH manager v4 3/6] added Config for Shared Filesystem Directories Markus Frank
  2023-05-03 11:26   ` Dominik Csapak
@ 2023-05-04  8:24   ` Fabian Grünbichler
  1 sibling, 0 replies; 19+ messages in thread
From: Fabian Grünbichler @ 2023-05-04  8:24 UTC (permalink / raw)
  To: Proxmox VE development discussion

see cover letter for high level aspects!

On April 25, 2023 12:21 pm, Markus Frank wrote:
> and added an API endpoint for getting, adding, and removing
> directories in the config.
> 
> Signed-off-by: Markus Frank <m.frank@proxmox.com>
> ---
>  PVE/API2/DirConfig.pm | 129 +++++++++++++++++++++++++++++++++++
>  PVE/API2/Makefile     |   1 +
>  PVE/API2/Nodes.pm     |   6 ++
>  PVE/DirConfig.pm      | 155 ++++++++++++++++++++++++++++++++++++++++++
>  PVE/Makefile          |   1 +
>  5 files changed, 292 insertions(+)
>  create mode 100644 PVE/API2/DirConfig.pm
>  create mode 100644 PVE/DirConfig.pm
> 
> diff --git a/PVE/API2/DirConfig.pm b/PVE/API2/DirConfig.pm
> new file mode 100644
> index 00000000..0cbc6f96
> --- /dev/null
> +++ b/PVE/API2/DirConfig.pm
> @@ -0,0 +1,129 @@
> +package PVE::API2::DirConfig;
> +
> +use strict;
> +use warnings;
> +
> +use PVE::JSONSchema qw(get_standard_option);
> +use PVE::DirConfig;
> +use PVE::Tools qw(extract_param);
> +
> +use base qw(PVE::RESTHandler);
> +
> +__PACKAGE__->register_method({
> +    name => 'get_config',
> +    path => '',
> +    method => 'GET',
> +    description => "Get Directories for Host Directory Sharing.",
> +    permissions => {
> +	check => ['perm', '/map/dirs', [ 'Map.Audit' ]],
> +    },
> +    proxyto => 'node',
> +    parameters => {
> +	additionalProperties => 0,
> +	properties => {
> +	    node => get_standard_option('pve-node'),
> +	},
> +    },
> +    returns => {
> +	type => 'array',
> +	items => {
> +	    type => 'object',
> +	    properties => {
> +		dirid => {
> +		    type => 'string',
> +		    description => 'Directory ID',
> +		},
> +		path => {
> +		    type => 'string',
> +		    description => 'Host Directory Path',
> +		},
> +	    },
> +	},
> +    },
> +    code => sub {
> +	my ($param) = @_;
> +
> +	my $config = PVE::DirConfig::load_config($param->{node});
> +	delete $config->{description};
> +	my $result = [];
> +	foreach my $key (keys %{$config}) {
> +	    push @$result, {
> +		dirid => $key,
> +		path => $config->{$key},
> +	    };
> +	}
> +
> +	return $result;
> +    }
> +});
> +
> +__PACKAGE__->register_method({
> +    name => 'add_dir',
> +    path => '',
> +    method => 'POST',
> +    description => "Add Directories for Host Directory Sharing.",
> +    permissions => {
> +	check => ['perm', '/map/dirs', [ 'Map.Modify' ]],
> +    },
> +    protected => 1,
> +    proxyto => 'node',
> +    parameters => {
> +	additionalProperties => 0,
> +	properties => {
> +	    node => get_standard_option('pve-node'),
> +	    dirid => {
> +		type => 'string',
> +		pattern => '[a-zA-Z0-9\-]+',
> +	    },
> +	    path => {
> +		type => 'string',
> +		maxLength => 4096,
> +		format => 'pve-storage-path',

there will probably be a few more options in the future ;) e.g. at least
read-only would be expected. I am not sure which of the other things should
go here and which should go to the usage side in qemu-server:
- xattr (support)
- acls (support)
- recursion (i.e., cross mountpoint boundaries)
- caching
- direct IO

the first three might be configurable here (to signify that the underlying
storage supports this) and at the usage side (to control whether we want
to pass an enabled feature on to the guest).

caching and direct IO might be candidates for only being configurable at
usage time. I also haven't checked whether 9P and (either C or rust)
virtiofsd support the same things.
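
a very rough sketch of how $dir_desc could grow (names purely
illustrative, nothing settled):

    my $dir_desc = {
        path => { type => 'string', default_key => 1 },
        'read-only' => { type => 'boolean', optional => 1 },
        xattr => { type => 'boolean', optional => 1 },
        acl => { type => 'boolean', optional => 1 },
        submounts => { type => 'boolean', optional => 1 },
    };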

> +	    },
> +	},
> +    },
> +    returns => { type => "null" },
> +    code => sub {
> +	my ($param) = @_;
> +	my $node = extract_param($param, 'node');
> +	my $dirid = extract_param($param, 'dirid');
> +	my $path = extract_param($param, 'path');
> +	PVE::DirConfig::add_dir_config($node, $dirid, $path);
> +	return undef;
> +    },
> +});
> +
> +
> +__PACKAGE__->register_method({
> +    name => 'del_dir',
> +    path => '',
> +    method => 'DELETE',
> +    description => "Remove Directory from Host Directory Sharing.",
> +    permissions => {
> +	check => ['perm', '/map/dirs', [ 'Map.Modify' ]],
> +    },
> +    protected => 1,
> +    proxyto => 'node',
> +    parameters => {
> +	additionalProperties => 0,
> +	properties => {
> +	    node => get_standard_option('pve-node'),
> +	    dirid => {
> +		type => 'string',
> +		pattern => '[a-zA-Z0-9\-]+',
> +	    },
> +	},
> +    },
> +    returns => { type => "null" },
> +    code => sub {
> +	my ($param) = @_;
> +
> +	my $node = extract_param($param, 'node');
> +	my $dirid = extract_param($param, 'dirid');
> +	PVE::DirConfig::del_dir_config($node, $dirid);
> +	return undef;
> +    },
> +});
> +
> +
> +1;
> diff --git a/PVE/API2/Makefile b/PVE/API2/Makefile
> index 5c08ebe0..1b96223c 100644
> --- a/PVE/API2/Makefile
> +++ b/PVE/API2/Makefile
> @@ -12,6 +12,7 @@ PERLSOURCE = 			\
>  	Ceph.pm			\
>  	Certificates.pm		\
>  	Cluster.pm		\
> +	DirConfig.pm		\
>  	HAConfig.pm		\
>  	Hardware.pm		\
>  	Network.pm		\
> diff --git a/PVE/API2/Nodes.pm b/PVE/API2/Nodes.pm
> index bfe5c40a..b4e2992f 100644
> --- a/PVE/API2/Nodes.pm
> +++ b/PVE/API2/Nodes.pm
> @@ -48,6 +48,7 @@ use PVE::API2::LXC::Status;
>  use PVE::API2::LXC;
>  use PVE::API2::Network;
>  use PVE::API2::NodeConfig;
> +use PVE::API2::DirConfig;
>  use PVE::API2::Qemu::CPU;
>  use PVE::API2::Qemu;
>  use PVE::API2::Replication;
> @@ -199,6 +200,11 @@ __PACKAGE__->register_method ({
>      path => 'config',
>  });
>  
> +__PACKAGE__->register_method ({
> +    subclass => "PVE::API2::DirConfig",
> +    path => 'dirs',
> +});
> +
>  if ($have_sdn) {
>      __PACKAGE__->register_method ({
>  	subclass => "PVE::API2::Network::SDN::Zones::Status",
> diff --git a/PVE/DirConfig.pm b/PVE/DirConfig.pm
> new file mode 100644
> index 00000000..56796029
> --- /dev/null
> +++ b/PVE/DirConfig.pm
> @@ -0,0 +1,155 @@
> +package PVE::DirConfig;
> +
> +use strict;
> +use warnings;
> +
> +use PVE::JSONSchema qw(get_standard_option);
> +use PVE::Tools qw(file_get_contents file_set_contents lock_file);
> +
> +my $dir_config_lock = '/var/lock/dirs.lock';

if this becomes cluster wide it needs to change. it should also probably
be a cfs_registered file.

> +
> +sub config_file {
> +    my ($node) = @_;
> +
> +    return "/etc/pve/nodes/${node}/dirs";
> +}
> +
> +sub load_config {
> +    my ($node) = @_;
> +
> +    my $filename = config_file($node);
> +    my $raw = eval { PVE::Tools::file_get_contents($filename); };
> +    return {} if !$raw;
> +
> +    return parse_file_config($raw, $filename);
> +}
> +
> +sub write_config {
> +    my ($node, $conf) = @_;
> +
> +    my $filename = config_file($node);
> +
> +    my $raw = write_file_config($conf);
> +
> +    PVE::Tools::file_set_contents($filename, $raw);
> +}
> +
> +sub lock_config {
> +    my ($node, $realcode, @param) = @_;
> +
> +    # make sure configuration file is up-to-date
> +    my $code = sub {
> +	PVE::Cluster::cfs_update();
> +	$realcode->(@_);
> +    };
> +
> +    my $res = lock_file($dir_config_lock, 10, $code, @param);
> +
> +    die $@ if $@;
> +
> +    return $res;
> +}
> +
> +my $descr = " Description for Shared Files Directory Config.\n"
> +    ." Add Directories with:\n dirid: /path/to/share";
> +
> +my $dir_desc = {
> +    path => {
> +	type => 'string',
> +	format_description => 'path',
> +	description => 'path of Directory ID',
> +	default_key => 1,
> +    },
> +};

this is too restrictive with regards to future extensions IMHO. see
above for some ideas. also see the discussions surrounding the hardware
map format.

> [..]

left out the rest since I expect the format to change ;)




^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [pve-devel] [PATCH access-control v4 2/6] added acls for Shared Files Directories
  2023-04-25 10:21 ` [pve-devel] [PATCH access-control v4 2/6] added acls for Shared Files Directories Markus Frank
@ 2023-05-04  8:24   ` Fabian Grünbichler
  0 siblings, 0 replies; 19+ messages in thread
From: Fabian Grünbichler @ 2023-05-04  8:24 UTC (permalink / raw)
  To: Proxmox VE development discussion

On April 25, 2023 12:21 pm, Markus Frank wrote:
> Signed-off-by: Markus Frank <m.frank@proxmox.com>
> ---
>  src/PVE/API2/Directory.pm | 68 +++++++++++++++++++++++++++++++++++++++

this parts seems to be included by accident? ;)

>  src/PVE/AccessControl.pm  | 16 +++++++++
>  src/PVE/RPCEnvironment.pm | 12 ++++++-
>  3 files changed, 95 insertions(+), 1 deletion(-)
>  create mode 100644 src/PVE/API2/Directory.pm
> 
> diff --git a/src/PVE/API2/Directory.pm b/src/PVE/API2/Directory.pm
> new file mode 100644
> index 0000000..b44ba9d
> --- /dev/null
> +++ b/src/PVE/API2/Directory.pm
> @@ -0,0 +1,68 @@
> +package PVE::API2::Directory;
> +
> +use strict;
> +use warnings;
> +
> +use PVE::Exception qw(raise raise_perm_exc raise_param_exc);
> +use PVE::Cluster qw (cfs_read_file cfs_write_file);
> +use PVE::Tools qw(split_list extract_param);
> +use PVE::JSONSchema qw(get_standard_option register_standard_option);
> +use PVE::SafeSyslog;
> +
> +use PVE::AccessControl;
> +use PVE::Auth::Plugin;
> +use PVE::TokenConfig;
> +
> +use PVE::RESTHandler;
> +use PVE::DirConfig;
> +
> +use base qw(PVE::RESTHandler);
> +
> +__PACKAGE__->register_method ({
> +    name => 'index',
> +    path => '',
> +    method => 'GET',
> +    description => "simple return value of parameter 'text'",
> +    parameters => {
> +	additionalProperties => 0,
> +	properties => {
> +	    node => {
> +		type => 'string',
> +	    }
> +	},
> +    },
> +    returns => {
> +	type => 'string',
> +    },
> +    code => sub {
> +	my ($param) = @_;
> +	return $param->{node};
> +    }
> +});
> +
> +
> +__PACKAGE__->register_method ({
> +    name => 'add_dir',
> +    path => 'add',
> +    method => 'POST',
> +    description => "simple return value of parameter 'text'",
> +    parameters => {
> +	additionalProperties => 0,
> +	properties => {
> +	    directory => {
> +		type => 'string',
> +	    },
> +	    node => {
> +		type => 'string',
> +	    }
> +	},
> +    },
> +    returns => {
> +	type => 'string',
> +    },
> +    code => sub {
> +	my ($param) = @_;
> +
> +	return $param->{node};
> +    }
> +});
> diff --git a/src/PVE/AccessControl.pm b/src/PVE/AccessControl.pm
> index 5690a1f..6530753 100644
> --- a/src/PVE/AccessControl.pm
> +++ b/src/PVE/AccessControl.pm
> @@ -1133,6 +1133,18 @@ my $privgroups = {
>  	    'Pool.Audit',
>  	],
>      },
> +    Map => {
> +	root => [],
> +	admin => [
> +	    'Map.Modify',
> +	],
> +	user => [
> +	    'Map.Use',
> +	],
> +	audit => [
> +	    'Map.Audit',
> +	],
> +    },

the priv names need coordination with Dominik; we definitely don't want
two sets!

>  };
>  
>  my $valid_privs = {};
> @@ -1166,6 +1178,8 @@ sub create_roles {
>      }
>  
>      $special_roles->{"PVETemplateUser"} = { 'VM.Clone' => 1, 'VM.Audit' => 1 };
> +
> +    delete($special_roles->{"PVEAdmin"}->{'Map.Modify'});

this is something where Dominik had some ideas as well IIRC ;)

>  };
>  
>  create_roles();
> @@ -1262,6 +1276,8 @@ sub check_path {
>  	|/storage/[[:alnum:]\.\-\_]+
>  	|/vms
>  	|/vms/[1-9][0-9]{2,}
> +	|/map/dirs
> +	|/map/dirs/[[:alnum:]\.\-\_]+
>      )$!xs;
>  }
>  
> diff --git a/src/PVE/RPCEnvironment.pm b/src/PVE/RPCEnvironment.pm
> index 8586938..42ff287 100644
> --- a/src/PVE/RPCEnvironment.pm
> +++ b/src/PVE/RPCEnvironment.pm
> @@ -187,10 +187,11 @@ sub compute_api_permission {
>  	nodes => qr/Sys\.|Permissions\.Modify/,
>  	sdn => qr/SDN\.|Permissions\.Modify/,
>  	dc => qr/Sys\.Audit|SDN\./,
> +	map => qr/Map\.Modify/

why Modify? I think Map. and Permissions.Modify would make more sense?

>      };
>      map { $res->{$_} = {} } keys %$priv_re_map;
>  
> -    my $required_paths = ['/', '/nodes', '/access/groups', '/vms', '/storage', '/sdn'];
> +    my $required_paths = ['/', '/nodes', '/access/groups', '/vms', '/storage', '/sdn', '/map'];
>      my $defined_paths = [];
>      PVE::AccessControl::iterate_acl_tree("/", $usercfg->{acl_root}, sub {
>  	my ($path, $node) = @_;
> @@ -245,6 +246,7 @@ sub get_effective_permissions {
>  	'/sdn' => 1,
>  	'/storage' => 1,
>  	'/vms' => 1,
> +	'/map' => 1,
>      };
>  
>      my $cfg = $self->{user_cfg};
> @@ -361,6 +363,14 @@ sub check_vm_perm {
>      return $self->check_full($user, "/vms/$vmid", $privs, $any, $noerr);
>  };
>  
> +sub check_dir_perm {
> +    my ($self, $user, $dirid, $privs, $any, $noerr) = @_;
> +
> +    my $cfg = $self->{user_cfg};
> +
> +    return $self->check_full($user, "/map/dirs/$dirid", $privs, $any, $noerr);
> +};

I don't think this helper brings anything to the table? check_vm_perm is
special in that it handles pools, and we want that in a single place,
but this is just hardcoding the ACL prefix?
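
i.e. callers could simply do

    $rpcenv->check_full($authuser, "/map/dirs/$dirid", ['Map.Use']);

directly at the call site.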

> +
>  sub is_group_member {
>      my ($self, $group, $user) = @_;
>  
> -- 
> 2.30.2
> 

* Re: [pve-devel] [PATCH docs v4 0/6] feature #1027 virtio-9p/virtio-fs
  2023-04-25 10:21 [pve-devel] [PATCH docs v4 0/6] feature #1027 virtio-9p/virtio-fs Markus Frank
                   ` (5 preceding siblings ...)
  2023-04-25 10:21 ` [pve-devel] [PATCH qemu-server v4 6/6] feature #1027: virtio-9p & virtio-fs support Markus Frank
@ 2023-05-04  8:24 ` Fabian Grünbichler
  6 siblings, 0 replies; 19+ messages in thread
From: Fabian Grünbichler @ 2023-05-04  8:24 UTC (permalink / raw)
  To: Proxmox VE development discussion

thanks for working on this! it's a long-standing feature request and
implementing it will make quite a few people happy. also sorry for not
getting back to you on v2/3 already. there's some high level stuff that
I'll reply with here, and then some more concrete feedback on individual
patches.

there is some overlap (well, not so much atm, but I think there should
be more ;)) with Dominik's hardware map, so it might make sense to
coordinate. 

On April 25, 2023 12:21 pm, Markus Frank wrote:
> pve-docs:
> 
> Markus Frank (1):
>   added shared filesystem doc for virtio-fs & virtio-9p
> 
>  qm.adoc | 61 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 61 insertions(+)
> 
> 
> pve-access-control:
> 
> v3:
>  * replaced /dirs with /map/dirs/<dirid>
> 
> v2:
>  * admin gives access via an ACL (/dirs/<dirid>)
> 
> Markus Frank (1):
>   added acls for Shared Files Directories
> 
>  src/PVE/API2/Directory.pm | 68 +++++++++++++++++++++++++++++++++++++++
>  src/PVE/AccessControl.pm  | 16 +++++++++
>  src/PVE/RPCEnvironment.pm | 12 ++++++-
>  3 files changed, 95 insertions(+), 1 deletion(-)
>  create mode 100644 src/PVE/API2/Directory.pm
> 
> 
> pve-manager:
> 
> v4:
>  * moved extract_dir_path from qemu-server to DirConfig in pve-manager
> 
> v2:
>  * admins define dirs on the host that are eligible for mounting into
>   guests (<dirid>: /path/to/share)

I think pve-manager is the wrong place for this.

the module hierarchy is

pve-manager - qemu-server/pve-container - pve-guest-common - pve-storage - pve-access-control - pve-common

(things to the left can use things to the right, but not the other way round)

there are only two places where we have intentional cycles:
pve-firewall <-> qemu-server/pve-container
ha-manager <-> qemu-server/pve-container

(and the doc generator, but that can be worked around and is not at
runtime ;))

so something like this, which needs to be used by both qemu-server and,
in the future, pve-container needs to go into one of the following:
- pve-guest-common
- pve-storage
- pve-common

common is out, as that sits below pve-cluster and pve-access-control.
that leaves either pve-guest-common or pve-storage.

you must not introduce code in pve-manager (like PVE::DirConfig) that is
needed in qemu-server.

besides this question of moving, in my mind, the following would be much
nicer:
- define dir entities cluster-wide (like storages)
-- id
-- optional default host path
-- ?
- allow limiting and overriding per node (so that dir "foo" can be
  backed by a path "/X" on one node, path "/Y" on another, and not be
  available at all on a third node)
- the only downside is editing requires a cluster-wide lock, but that is
  something that is not done frequently so it doesn't really hurt.

ideally this would follow the same scheme as the hardware map, since it
has quite similar semantics. I am not sure if we want to put it into the
hardware map, but it might be an option as well (it is kind of passing
through a host dir, like passing through a USB or PCI device after
all). having it in the hardware map would possibly allow a common set of
helpers (like for finding out whether a given dir is available on a
node, and getting the correct config, or for ACL checks).
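
purely as an illustrative sketch (syntax and option names hypothetical,
loosely following the storage.cfg style), such a cluster-wide config
could look like:

    dir: foo
        path /srv/foo
        nodes nodeA,nodeB

    dir: bar
        map nodeA:/X,nodeB:/Y

with 'map' carrying the per-node path overrides.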

> Markus Frank (3):
>   added Config for Shared Filesystem Directories
>   added Shared Files tab in Node Settings
>   added options to add virtio-9p & virtio-fs Shared Filesystems to qemu
>     config
> 
>  PVE/API2/DirConfig.pm                | 129 +++++++++++++++++++
>  PVE/API2/Makefile                    |   1 +
>  PVE/API2/Nodes.pm                    |   6 +
>  PVE/DirConfig.pm                     | 155 +++++++++++++++++++++++
>  PVE/Makefile                         |   1 +
>  www/manager6/Makefile                |   2 +
>  www/manager6/Utils.js                |   1 +
>  www/manager6/data/PermPathStore.js   |   3 +
>  www/manager6/node/Config.js          |  12 ++
>  www/manager6/node/SharedFiles.js     | 177 +++++++++++++++++++++++++++
>  www/manager6/qemu/HardwareView.js    |  19 +++
>  www/manager6/qemu/SharedFilesEdit.js | 101 +++++++++++++++
>  12 files changed, 607 insertions(+)
>  create mode 100644 PVE/API2/DirConfig.pm
>  create mode 100644 PVE/DirConfig.pm
>  create mode 100644 www/manager6/node/SharedFiles.js
>  create mode 100644 www/manager6/qemu/SharedFilesEdit.js
> 
> 
> qemu-server:
> 
> v4:
>  * moved extract_dir_path from qemu-server to DirConfig
> 
> v3:
>  * created own socket and get file descriptor for virtiofsd
>  so there is no race between starting virtiofsd & qemu
>  * added TODO to replace virtiofsd with rust implementation in bookworm
>  (I packaged the rust implementation for bookworm & the C implementation
>  in qemu will be removed in qemu 8.0)
> 
> v2:
>  * replaced sharedfiles_fmt path in qemu-server with dirid:
>  * user can use the dirid to specify the directory without requiring root access
> 
> Markus Frank (1):
>   feature #1027: virtio-9p & virtio-fs support
> 
>  PVE/API2/Qemu.pm  |  19 +++++++
>  PVE/QemuServer.pm | 141 ++++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 160 insertions(+)
> 
> -- 
> 2.30.2
> 

* Re: [pve-devel] [PATCH manager v4 3/6] added Config for Shared Filesystem Directories
  2023-05-04  8:13     ` Thomas Lamprecht
@ 2023-05-04  8:31       ` Dominik Csapak
  2023-05-04  8:42         ` Thomas Lamprecht
  0 siblings, 1 reply; 19+ messages in thread
From: Dominik Csapak @ 2023-05-04  8:31 UTC (permalink / raw)
  To: Thomas Lamprecht, Proxmox VE development discussion, Markus Frank

On 5/4/23 10:13, Thomas Lamprecht wrote:
> Am 03/05/2023 um 13:26 schrieb Dominik Csapak:
>> just a short comment since this series overlaps a bit with my
>> cluster resource mapping series (i plan on sending a v4 soon).
>>
>> i'd prefer to have the configuration endpoints for mapping bundled in a subdirectory
>> so instead of /nodes/<node>/dirs/ i'd put it in /nodes/<node>/mapping/dirs/
>> (or /nodes/<node>/map/dirs )
>>
>> @thomas, @fabian, any other input on that?
>>
> 
> huh? aren't mappings per definition cluster wide, i.e. /cluster/resource-map/<mapping-id>
> that then allows to add/update the mapping of a resource on a specific node?
> A node specific path makes no sense to me, at max it would if adding/removing a mapping
> is completely decoupled from adding/removing/updating entries to it – but that seems
> convoluted from a usage POV and easy to get out of sync with the actual mapping list.

in markus' series the mappings are only ever per node, so each node has its
own dir mapping

in my series, the actual config was cluster-wide, but the api endpoints to configure
them were sitting in the node path (e.g. /node/<nodes>/hardware-map/pci/*)

the reason is that checking the validity of the mapping (at least for creating/updating)
needs to happen at the node itself anyway, since only that node can check it
(e.g. for pci devices, if it exists, the ids are correct etc.)

we *could* put them into the cluster path api, but we'd need to send a node parameter
along and forward it there anyway, so that wouldn't really make a difference

for reading the mappings, that could be done there, but in my series in the gui at least,
i have to make a call to each node to get the current state of the mapping
(e.g. if the pci device is still there)

whether a mapping exists (globally) is not interesting most of the time; we only need to know
if it exists at a specific node

also, after seeing markus' patches, i also leaned more in the direction of splitting
my global mapping config into a config per type+node (so node1/usb.conf, node1/pci.conf,
etc.) since then i can omit my custom json/section config parser and use plain
section configs. for the api calls it does not make a difference, since we only
ever need to write from a specific node to that, and for the global state we
can simply read all configs. (we then also can omit the global cluster lock for
editing...)
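
(e.g., hypothetically, that split could be a plain section config per
node and type, say /etc/pve/nodes/node1/dir.cfg, containing just

    dir: foo
        path /srv/foo

instead of one global, custom-parsed mapping file)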

does that make sense?






* Re: [pve-devel] [PATCH qemu-server v4 6/6] feature #1027: virtio-9p & virtio-fs support
  2023-04-25 10:21 ` [pve-devel] [PATCH qemu-server v4 6/6] feature #1027: virtio-9p & virtio-fs support Markus Frank
@ 2023-05-04  8:39   ` Fabian Grünbichler
  2023-05-05  8:27     ` Markus Frank
  0 siblings, 1 reply; 19+ messages in thread
From: Fabian Grünbichler @ 2023-05-04  8:39 UTC (permalink / raw)
  To: Proxmox VE development discussion

On April 25, 2023 12:21 pm, Markus Frank wrote:
> adds support for sharing directories with a guest vm
> 
> virtio-9p can be simply started with qemu

9p is not really maintained anymore upstream AFAIK (only "Odd Fixes"),
and had security issues in the past. Is there a good reason for
supporting it (we didn't want to in the past ;)).

> virtio-fs needs virtiofsd to be started

it also is not compatible with migration, including savevm (so,
snapshots with RAM, suspend to disk). until that changes, checks need to
be added to error out early when attempting to do such an action.
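
a minimal sketch (config-key and helper names taken from this series;
the exact call sites in the migrate/savevm paths are still to be
decided) of such an early check:

    sub assert_no_virtiofs {
        my ($conf) = @_;
        for my $opt (keys %$conf) {
            next if $opt !~ m/^sharedfiles\d+$/;
            my $sf = PVE::QemuServer::parse_sharedfiles($conf->{$opt});
            die "$opt: virtio-fs does not support migration/savevm yet\n"
                if $sf && $sf->{type} && $sf->{type} eq 'virtio-fs';
        }
    }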

> 
> Signed-off-by: Markus Frank <m.frank@proxmox.com>
> ---
>  PVE/API2/Qemu.pm  |  19 +++++++
>  PVE/QemuServer.pm | 141 ++++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 160 insertions(+)
> 
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index 587bb22..810e124 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -639,6 +639,8 @@ my $check_vm_modify_config_perm = sub {
>  	    # the user needs Disk and PowerMgmt privileges to change the vmstate
>  	    # also needs privileges on the storage, that will be checked later
>  	    $rpcenv->check_vm_perm($authuser, $vmid, $pool, ['VM.Config.Disk', 'VM.PowerMgmt' ]);
> +	} elsif ($opt =~ m/^sharedfiles\d$/) {
> +	    # needs $param->{$opt} so checkout $check_vm_dir_perm

this is not quite correct - you should still check the permission for
modifying the VM config here (VM.Config.Options or VM.Config.Disk?). of
course, like for other things, there are extra checks later on to see
whether you are allowed to add/remove/modify a particular shareddir
usage. disks also require storage permissions which are not handled
here, for example, but the general 'allowed to change disk aspects of
VM' is.
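
e.g. roughly (a sketch; whether VM.Config.Options or VM.Config.Disk is
the right privilege here is up for discussion):

    } elsif ($opt =~ m/^sharedfiles\d$/) {
        $rpcenv->check_vm_perm($authuser, $vmid, $pool, ['VM.Config.Options']);
        # the per-directory Map.Use check still happens separately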

>  	} else {
>  	    # catches hostpci\d+, args, lock, etc.
>  	    # new options will be checked here
> @@ -649,6 +651,20 @@ my $check_vm_modify_config_perm = sub {
>      return 1;
>  };
>  
> +my $check_vm_dir_perm = sub {
> +    my ($rpcenv, $authuser, $param) = @_;
> +
> +    return 1 if $authuser eq 'root@pam';
> +
> +    foreach my $opt (keys %{$param}) {
> +	if ($opt =~ m/^sharedfiles\d$/) {
> +	    my $sharedfiles = PVE::QemuServer::parse_sharedfiles($param->{$opt});
> +	    $rpcenv->check_dir_perm($authuser, $sharedfiles->{dirid}, ['Map.Use']);
> +	}
> +    }
> +    return 1;
> +};
> +
>  __PACKAGE__->register_method({
>      name => 'vmlist',
>      path => '',
> @@ -876,6 +892,7 @@ __PACKAGE__->register_method({
>  
>  	    &$check_vm_modify_config_perm($rpcenv, $authuser, $vmid, $pool, [ keys %$param]);
>  
> +	    &$check_vm_dir_perm($rpcenv, $authuser, $param);
>  	    &$check_vm_create_serial_perm($rpcenv, $authuser, $vmid, $pool, $param);
>  	    &$check_vm_create_usb_perm($rpcenv, $authuser, $vmid, $pool, $param);
>  
> @@ -1576,6 +1593,8 @@ my $update_vm_api  = sub {
>  
>      &$check_vm_modify_config_perm($rpcenv, $authuser, $vmid, undef, [keys %$param]);
>  
> +    &$check_vm_dir_perm($rpcenv, $authuser, $param);
> +
>      &$check_storage_access($rpcenv, $authuser, $storecfg, $vmid, $param);
>  
>      my $updatefn =  sub {
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index c1d0fd2..fc07311 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -274,6 +274,31 @@ my $rng_fmt = {
>      },
>  };
>  
> +my $sharedfiles_fmt = {
> +    type => {
> +	type => 'string',
> +	default_key => 1,
> +	enum => ['virtio-9p', 'virtio-fs'],
> +	description => "sharedfiles via"
> +	    ." virtio-9p (https://www.linux-kvm.org/page/9p_virtio)"
> +	    ." or virtio-fs (https://virtio-fs.gitlab.io/howto-qemu.html)",
> +	format_description => "virtio-sharedfiles-type",
> +	optional => 1,

why is this optional?

> +    },
> +    dirid => {
> +	type => 'string',
> +	description => "dirid of directory you want to share with the guest VM",
> +	format_description => "virtio-sharedfiles-dirid",
> +	optional => 1,

why is this optional?

> +    },
> +    tag => {
> +	type => 'string',
> +	description => "tag name for mounting in the guest VM",
> +	format_description => "virtio-sharedfiles-tag",
> +	optional => 1,

is the tag actually optional?

> +    },

see my comments in the manager patch, I think there are quite a few more
knobs that we might want to expose here.

> +};
> +
>  my $meta_info_fmt = {
>      'ctime' => {
>  	type => 'integer',
> @@ -832,6 +857,7 @@ while (my ($k, $v) = each %$confdesc) {
>  
>  my $MAX_USB_DEVICES = 14;
>  my $MAX_NETS = 32;
> +my $MAX_SHAREDFILES = 10;
>  my $MAX_SERIAL_PORTS = 4;
>  my $MAX_PARALLEL_PORTS = 3;
>  my $MAX_NUMA = 8;
> @@ -974,6 +1000,12 @@ my $netdesc = {
>      description => "Specify network devices.",
>  };
>  
> +my $sharedfilesdesc = {
> +    optional => 1,
> +    type => 'string', format => $sharedfiles_fmt,
> +    description => "share files between host and guest",
> +};
> +
>  PVE::JSONSchema::register_standard_option("pve-qm-net", $netdesc);
>  
>  my $ipconfig_fmt = {
> @@ -1035,6 +1067,10 @@ for (my $i = 0; $i < $MAX_NETS; $i++)  {
>      $confdesc_cloudinit->{"ipconfig$i"} = $ipconfigdesc;
>  }
>  
> +for (my $i = 0; $i < $MAX_SHAREDFILES; $i++)  {
> +    $confdesc->{"sharedfiles$i"} = $sharedfilesdesc;
> +}
> +
>  foreach my $key (keys %$confdesc_cloudinit) {
>      $confdesc->{$key} = $confdesc_cloudinit->{$key};
>  }
> @@ -1988,6 +2024,16 @@ sub parse_net {
>      return $res;
>  }
>  
> +sub parse_sharedfiles {
> +    my ($value) = @_;
> +
> +    return if !$value;
> +    my $res = eval { parse_property_string($sharedfiles_fmt, $value) };
> +
> +    warn $@ if $@;
> +    return $res;
> +}
> +
>  # ipconfigX ip=cidr,gw=ip,ip6=cidr,gw6=ip
>  sub parse_ipconfig {
>      my ($data) = @_;
> @@ -4105,6 +4151,44 @@ sub config_to_command {
>  	push @$devices, '-device', $netdevicefull;
>      }
>  
> +    my $onevirtiofs = 0;
> +    for (my $i = 0; $i < $MAX_SHAREDFILES; $i++) {

what about hotplug support?

> +	my $sharedfilesstr = "sharedfiles$i";
> +
> +	next if !$conf->{$sharedfilesstr};
> +	my $sharedfiles = parse_sharedfiles($conf->{$sharedfilesstr});
> +	next if !$sharedfiles;
> +
> +	die $sharedfilesstr.' needs a type (virtio-9p or virtio-fs)' if !$sharedfiles->{type};
> +	die $sharedfilesstr.' needs a dirid of a directory to share' if !$sharedfiles->{dirid};
> +	die $sharedfilesstr.' needs a mount tag' if !$sharedfiles->{tag};

so I guess all three are not optional ;)

> +
> +	if ($sharedfiles->{type} eq 'virtio-fs' && $conf->{numa}) {
> +	    die "disable numa to use virtio-fs or use virtio-9p instead";
> +	}

okay, that is an interesting restriction. is that just because it
requires some other, complex changes, or is it not working at all?

> +
> +	my $path = PVE::DirConfig::extract_dir_path($nodename, $sharedfiles->{dirid});
> +
> +	if ($sharedfiles->{type} eq 'virtio-9p') {
> +	    push @$devices, '-fsdev', "local,security_model=passthrough,id=fsdev$i"
> +		.",path=$path";
> +	    push @$devices, '-device', "virtio-9p-pci,id=fs9p$i,fsdev=fsdev$i"
> +		.",mount_tag=$sharedfiles->{tag}";
> +	}
> +	if ($sharedfiles->{type} eq 'virtio-fs') {
> +	    push @$devices, '-chardev', "socket,id=virtfs$i,path=/var/run/virtiofsd/vm$vmid-fs$i";
> +	    push @$devices, '-device', 'vhost-user-fs-pci,queue-size=1024'
> +		.",chardev=virtfs$i,tag=$sharedfiles->{tag}";
> +	    $onevirtiofs = 1;
> +	}
> +    }
> +
> +    if ($onevirtiofs) {
> +	push @$devices, '-object', 'memory-backend-file,id=mem'
> +	    .",size=$conf->{memory}M,mem-path=/dev/shm,share=on";
> +	push @$devices, '-numa', 'node,memdev=mem';
> +    }
> +
>      if ($conf->{ivshmem}) {
>  	my $ivshmem = parse_property_string($ivshmem_fmt, $conf->{ivshmem});
>  
> @@ -4190,6 +4274,41 @@ sub config_to_command {
>      return wantarray ? ($cmd, $vollist, $spice_port) : $cmd;
>  }
>  
> +sub start_virtiofs {

how/when are they stopped again? :)

> +    my ($vmid, $path, $fsid, $dirid) = @_;
> +
> +    my $socket_path_root = "/var/run/virtiofsd";
> +    mkdir $socket_path_root;
> +    my $socket_path = "$socket_path_root/vm$vmid-fs$fsid";
> +    unlink($socket_path);
> +    my $socket = IO::Socket::UNIX->new(
> +	Type => SOCK_STREAM,
> +	Local => $socket_path,
> +	Listen => 1,
> +    ) or die "cannot create socket - $!\n";
> +
> +    my $flags = fcntl($socket, F_GETFD, 0)
> +	or die "failed to get file descriptor flags: $!\n";
> +    fcntl($socket, F_SETFD, $flags & ~FD_CLOEXEC)
> +	or die "failed to remove FD_CLOEXEC from file descriptor\n";
> +
> +    my $fd = $socket->fileno();
> +
> +    my $pid = fork();
> +    if ($pid == 0) {
> +	# TODO replace with rust implementation in bookworm
> +	run_command(['/usr/lib/kvm/virtiofsd', "--fd=$fd",
> +	    '-o', "source=$path", '-o', 'cache=always']);
> +	POSIX::_exit(0);
> +    } elsif (!defined($pid)) {
> +	die "could not fork to start virtiofsd";
> +    }
> +
> +    # return socket to keep it alive,
> +    # so that qemu will wait for virtiofsd to start
> +    return $socket;
> +}
> +
>  sub check_rng_source {
>      my ($source) = @_;
>  
> @@ -5747,6 +5866,23 @@ sub vm_start_nolock {
>      my ($cmd, $vollist, $spice_port) = config_to_command($storecfg, $vmid,
>  	$conf, $defaults, $forcemachine, $forcecpu, $params->{'pbs-backing'});
>  
> +    my $nodename = nodename();
> +    my @sockets;
> +    for (my $i = 0; $i < $MAX_SHAREDFILES; $i++) {
> +	my $sharedfilesstr = "sharedfiles$i";
> +
> +	next if !$conf->{$sharedfilesstr};
> +	my $sharedfiles = parse_sharedfiles($conf->{$sharedfilesstr});
> +	next if !$sharedfiles;
> +
> +	my $path = PVE::DirConfig::extract_dir_path($nodename, $sharedfiles->{dirid});
> +
> +	if ($sharedfiles && $sharedfiles->{type} eq 'virtio-fs' && !$conf->{numa}) {
> +	    my $socket = start_virtiofs($vmid, $path, $i, $sharedfiles->{dirid});
> +	    push @sockets, $socket;
> +	}
> +    }
> +
>      my $migration_ip;
>      my $get_migration_ip = sub {
>  	my ($nodename) = @_;
> @@ -6094,6 +6230,11 @@ sub vm_start_nolock {
>  
>      PVE::GuestHelpers::exec_hookscript($conf, $vmid, 'post-start');
>  
> +    foreach my $socket (@sockets) {
> +	shutdown($socket, 2);
> +	close($socket);
> +    }
> +
>      return $res;
>  }
>  
> -- 
> 2.30.2
> 

* Re: [pve-devel] [PATCH manager v4 3/6] added Config for Shared Filesystem Directories
  2023-05-04  8:31       ` Dominik Csapak
@ 2023-05-04  8:42         ` Thomas Lamprecht
  2023-05-04  8:57           ` Dominik Csapak
  0 siblings, 1 reply; 19+ messages in thread
From: Thomas Lamprecht @ 2023-05-04  8:42 UTC (permalink / raw)
  To: Dominik Csapak, Proxmox VE development discussion, Markus Frank

Am 04/05/2023 um 10:31 schrieb Dominik Csapak:
> On 5/4/23 10:13, Thomas Lamprecht wrote:
>> Am 03/05/2023 um 13:26 schrieb Dominik Csapak:
>>> just a short comment since this series overlaps a bit with my
>>> cluster resource mapping series (i plan on sending a v4 soon).
>>>
>>> i'd prefer to have the configuration endpoints for mapping bundled in a subdirectory
>>> so instead of /nodes/<node>/dirs/ i'd put it in /nodes/<node>/mapping/dirs/
>>> (or /nodes/<node>/map/dirs )
>>>
>>> @thomas, @fabian, any other input on that?
>>>
>>
>> huh? aren't mappings per definition cluster wide, i.e. /cluster/resource-map/<mapping-id>
>> that then allows to add/update the mapping of a resource on a specific node?
>> A node specific path makes no sense to me, at max it would if adding/removing a mapping
>> is completely decoupled from adding/removing/updating entries to it – but that seems
>> convoluted from a usage POV and easy to get out of sync with the actual mapping list.
> 
> in markus' series the mappings are only ever per node, so each node has its
> own dir mapping

Every resource mapping is always per node, so that's not really changing anything.

Rather, what about migrations? Would be simpler from migration and ACL POV to have
it cluster-wide.

> 
> in my series, the actual config was cluster-wide, but the api endpoints to configure
> them were sitting in the node path (e.g. /node/<nodes>/hardware-map/pci/*)

Please no.
 
> the reason is that checking the validity of the mapping (at least for creating/updating)
> needs to happen at the node itself anyway, since only that node can check it
> (e.g. for pci devices, if it exists, the ids are correct etc.)

That check is most relevant on using the map, not on updating/configuring it, as there
the UX of getting the right one can be solved by providing a node selector per entry
that then loads the actual available devices/resources on that node.

> 
> we *could* put them into the cluster path api, but we'd need to send a node parameter
> along and forward it there anyway, so that wouldn't really make a difference

no need for that, see above.

> 
> for reading the mappings, that could be done there, but in my series in the gui at least,
> i have to make a call to each node to get the current state of the mapping
> (e.g. if the pci device is still there)

For now not ideal but ok, in the future I'd rather go in the direction of broadcasting
some types of HW resources via pmxcfs KV and then this isn't an issue anymore.
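
rough sketch of that direction (assuming the existing node KV helpers in
PVE::Cluster and JSON-encoded values):

    # on each node, e.g. from pvestatd:
    PVE::Cluster::broadcast_node_kv('hardware-map', encode_json($local_devices));
    # and wherever the info is needed, read the per-node values:
    my $per_node = PVE::Cluster::get_node_kv('hardware-map');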

> 
> whether a mapping exists (globally) is not interesting most of the time; we only need to know
> if it exists at a specific node

that's looking at it backwards; the user and ACL only care about global mappings, how
the code implements that is then, well, an implementation detail.

> 
> also, after seeing markus' patches, i also leaned more in the direction of splitting
> my global mapping config into a config per type+node (so node1/usb.conf, node1/pci.conf,

no, please no forest^W jungle of config trees :/

A /etc/pve/resource-map/<type>.conf must be enough, even a /etc/pve/resource-map.conf
should be tbh., but I could imagine that splitting per resource type makes some (schema)
things a bit easier and reduces some bug potential, so no hard feelings on having one
cluster-wide per type; but really not more.






* Re: [pve-devel] [PATCH manager v4 3/6] added Config for Shared Filesystem Directories
  2023-05-04  8:42         ` Thomas Lamprecht
@ 2023-05-04  8:57           ` Dominik Csapak
  2023-05-04 10:21             ` Thomas Lamprecht
  0 siblings, 1 reply; 19+ messages in thread
From: Dominik Csapak @ 2023-05-04  8:57 UTC (permalink / raw)
  To: Thomas Lamprecht, Proxmox VE development discussion, Markus Frank

On 5/4/23 10:42, Thomas Lamprecht wrote:
> Am 04/05/2023 um 10:31 schrieb Dominik Csapak:
>> On 5/4/23 10:13, Thomas Lamprecht wrote:
>>> Am 03/05/2023 um 13:26 schrieb Dominik Csapak:
>>>> just a short comment since this series overlaps a bit with my
>>>> cluster resource mapping series (i plan on sending a v4 soon).
>>>>
>>>> i'd prefer to have the configuration endpoints for mapping bundled in a subdirectory
>>>> so instead of /nodes/<node>/dirs/ i'd put it in /nodes/<node>/mapping/dirs/
>>>> (or /nodes/<node>/map/dirs )
>>>>
>>>> @thomas, @fabian, any other input on that?
>>>>
>>>
>>> huh? aren't mappings per definition cluster wide, i.e. /cluster/resource-map/<mapping-id>
>>> that then allows to add/update the mapping of a resource on a specific node?
>>> A node specific path makes no sense to me, at max it would if adding/removing a mapping
>>> is completely decoupled from adding/removing/updating entries to it – but that seems
>>> convoluted from a usage POV and easy to get out of sync with the actual mapping list.
>>
>> in markus' series the mappings are only ever per node, so each node has its
>> own dir mapping
> 
> Every resource mapping is always per node, so that's not really changing anything.
> 

i meant there is no cluster aspect in the current version of markus series at all

> Rather, what about migrations? Would be simpler from migration and ACL POV to have
> it cluster-wide.

sure, but how we get the info during migration is just an implementation detail and
for that it shouldn't matter where the configs/api endpoints are

> 
>>
>> in my series, the actual config was cluster-wide, but the api endpoints to configure
>> them were sitting in the node path (e.g. /node/<nodes>/hardware-map/pci/*)
>  > Please no.

was that way since the very first version, would have been nice to get that feedback
earlier

>   
>> the reason is that checking the validity of the mapping (at least for creating/updating)
>> needs to happen at the node itself anyway, since only that node can check it
>> (e.g. for pci devices, if it exists, the ids are correct etc.)
> 
> That check is most relevant on using the map, not on updating/configuring it, as there
> the UX of getting the right one can be solved by providing a node selector per entry
> that then loads the actual available devices/resources on that node.
i see what you mean, personally i'd still prefer doing these checks on creation to
prevent the user from accidentally (or intentionally) creating wrong entries
(e.g. when using it via the api/cli)

yes in the gui it shouldn't be a problem since we can fill in the info from the correct node

> 
>>
>> we *could* put them into the cluster path api, but we'd need to send a node parameter
>> along and forward it there anyway, so that wouldn't really make a difference
> 
> no need for that, see above.

there is a need for the node parameter, because you always need to know for which node the
mapping is anyway ;) or did you mean the 'forward' part?

> 
>>
>> for reading the mappings, that could be done there, but in my series in the gui at least,
>> i have to make a call to each node to get the current state of the mapping
>> (e.g. if the pci device is still there)
> 
> For now not ideal but ok, in the future I'd rather go in the direction of broadcasting
> some types of HW resources via pmxcfs KV and then this isn't an issue anymore.

yeah can be done, but the question is if we want to broadcast the whole pci/usb list
from all nodes? (i don't believe that scales well?)

> 
>>
>> whether a mapping exists (globally) is not interesting most of the time; we only need to know
>> if it exists at a specific node
> 
> that's looking at it backwards; the user and ACL only care about global mappings, how
> the code implements that is then, well, an implementation detail.

ACL yes, the user must configure the mapping on a vm (which sits on a specific node)
and there the mapping must exist on that node

> 
>>
>> also, after seeing markus' patches, i also leaned more in the direction of splitting
>> my global mapping config into a config per type+node (so node1/usb.conf, node1/pci.conf,
> 
> no, please no forest^W jungle of config trees :/
> 
> A /etc/pve/resource-map/<type>.conf must be enough, even a /etc/pve/resource-map.conf
> should be tbh., but I could imagine that splitting per resource type makes some (schema)
> things a bit easier and reduces some bug potential, so no hard feelings on having one
> cluster-wide per type; but really not more.
> 

sure, a single config is ok for me (only per type would be weird, since reusing the
section config would only ever have a single type and we'd have to encode the
nodename somehow)






* Re: [pve-devel] [PATCH manager v4 3/6] added Config for Shared Filesystem Directories
  2023-05-04  8:57           ` Dominik Csapak
@ 2023-05-04 10:21             ` Thomas Lamprecht
  2023-05-09  9:31               ` Dominik Csapak
  0 siblings, 1 reply; 19+ messages in thread
From: Thomas Lamprecht @ 2023-05-04 10:21 UTC (permalink / raw)
  To: Dominik Csapak, Proxmox VE development discussion
  Cc: Fabian Grünbichler, Markus Frank, Wolfgang Bumiller

Am 04/05/2023 um 10:57 schrieb Dominik Csapak:
> On 5/4/23 10:42, Thomas Lamprecht wrote:
>> Am 04/05/2023 um 10:31 schrieb Dominik Csapak:
>>> On 5/4/23 10:13, Thomas Lamprecht wrote:
>>>> Am 03/05/2023 um 13:26 schrieb Dominik Csapak:
>>>>> just a short comment since this series overlaps a bit with my
>>>>> cluster resource mapping series (i plan on sending a v4 soon).
>>>>>
>>>>> i'd prefer to have the configuration endpoints for mapping bundled in a subdirectory
>>>>> so instead of /nodes/<node>/dirs/ i'd put it in /nodes/<node>/mapping/dirs/
>>>>> (or /nodes/<node>/map/dirs )
>>>>>
>>>>> @thomas, @fabian, any other input on that?
>>>>>
>>>>
>>>> huh? aren't mappings per definition cluster wide, i.e. /cluster/resource-map/<mapping-id>
>>>> that then allows to add/update the mapping of a resource on a specific node?
>>>> A node specific path makes no sense to me, at max it would if adding/removing a mapping
>>>> is completely decoupled from adding/removing/updating entries to it – but that seems
>>>> convoluted from a usage POV and easy to get out of sync with the actual mapping list.
>>>
>>> in markus' series the mappings are only ever per node, so each node has its
>>> own dir mapping
>>
>> Every resource mapping is always per node, so that's not really changing anything.
>>
> 
> i meant there is no cluster aspect in the current version of markus series at all

only if you lock down the vm hard to be kept at the node its mapping was configured on.

> 
>> Rather, what about migrations? Would be simpler from migration and ACL POV to have
>> it cluster-wide.
> 
> sure, but how we get the info during migration is just an implementation detail and
> for that it shouldn't matter where the configs/api endpoints are


no, it really isn't an implementation detail how ACLs govern access to mappings, which
in turn dictates where the ACL object paths should be, which then again has implications
for where the API endpoints and configs should be – node level isn't the one.

>>>
>>> in my series, the actual config was cluster-wide, but the api endpoints to configure
>>> them were sitting in the node path (e.g. /node/<nodes>/hardware-map/pci/*)
>>  > Please no.
> 
> was that way since the very first version, would have been nice to get that feedback
> earlier
> 

IIRC I always talked about this being on cluster level, especially w.r.t. making it a
more general resource mapping; maybe I didn't look too closely and was thrown off
by the series name, i.e., "add *cluster-wide* hardware device mapping", and by the ACL
path discussion (which IIRC never had any node-specific param in there, and thus only
works if it's either a cluster-wide config or if ID uniqueness is guaranteed by the pmxcfs
like we do for VMIDs)

Anyhow, sorry, I'll try to check for such things more closely earlier in the future.

>>>
>>> we *could* put them into the cluster path api, but we'd need to send a node parameter
>>> along and forward it there anyway, so that wouldn't really make a difference
>>
>> no need for that, see above.
> 
> there is a need for the node parameter, because you always need to know for which node the
> mapping is anyway ;) or did you mean the 'forward' part?

yes, I certainly meant the forward part – otherwise my "Every resource mapping is always
per node" sentence and the node selector mentioned above would be really odd. A
node:local-id map always needs to exist, I figured that's the core thing this series wants
to solve, but it's not a hard requirement to forward every request (and a huge PITA
to get in sync if done that way, or it would at least require using a cfs_domain_lock),
so no (hard) need for proxying.


>>>
>>> for reading the mappings, that could be done there, but in my series in the gui at least,
>>> i have to make a call to each node to get the current state of the mapping
>>> (e.g. if the pci device is still there)
>>
>> For now not ideal but ok, in the future I'd rather go in the direction of broadcasting
>> some types of HW resources via pmxcfs KV and then this isn't an issue anymore.
> 
> yeah can be done, but the question is if we want to broadcast the whole pci/usb list
> from all nodes? (i don't believe that scales well?)

how is one or two dozen PCI IDs + some meta info, broadcast normally once per boot
(or statd start) per node, a scaling problem? Especially compared to the amount of data
we broadcast otherwise, it's rather negligible.

> 
>>
>>>
>>> whether a mapping exists (globally) is not interesting most of the time; we only need to know
>>> if it exists at a specific node
>>
>> that's looking at it backwards, the user and ACL only care for global mapings, how
>> the code implements that is then, well an implementation detail.
> 
> ACL yes, the user must configure the mapping on a vm (which sits on a specific node)
> and there the mapping must exist on that node
> 

Where they also don't care for local-nodeness, they just select a mapping (no need to hard
error on the mapping not being usable on the current node, VM can be migrated before the
start – albeit it would certainly improve UX if it was visually hinted); otherwise they
could've just used a local ID directly.

>>
>>>
>>> also, after seeing markus' patches, i also leaned more in the direction of splitting
>>> my global mapping config into a config per type+node (so node1/usb.conf, node1/pci.conf,
>>
>> no, please no forest^W jungle of config trees :/
>>
>> A /etc/pve/resource-map/<type>.conf must be enough, even a /etc/pve/resource-map.conf
>> should be tbh., but I could imagine that splitting per resource type makes some (schema)
>> things a bit easier and reduces some bug potential, so no hard feelings on having one
>> cluster-wide per type; but really not more.
>>
> 
> sure, a single config is ok for me (only per type would be weird, since reusing the

Not sure if I'd find that flat out weird; it depends on the whole design and planned future
direction, and on whether it's OK that the ID used for a PCI mapping can never be reused for some
directory or other relatively unrelated mapping. FWIW, <type> here could also be the higher
level resource type (group), i.e., hardware.cfg for PCI/USB, one for local file system stuff
like directories.

FWIW, section configs have a disadvantage if very different config types are mixed, as the
schema is then also shown as the sum of all possible types in the API/man page (even storage.cfg,
while handling similar things, is already very confusing); possibly solvable in api/manpage
doc generation though. just mentioning it as I had that in my mind already a few times
and as we should improve that sometime; with expanding use for wildly different types it
might be better to do it sooner rather than later (but not directly related to this).

> section config would only ever have a single type and we'd have to encode the
> nodename somehow)

Ideally we wouldn't have to encode the nodename, as doing so would make especially using
a single config weird (i.e., it would make <id>+<node> the sectionID from a config POV
but not from a usage POV, which would then be weird for, e.g., a "pci: foo:nodeA" and a
"usb: foo:nodeB").

Ideally we'd have a single config entry per ID, i.e. very handwavy:

pci: id
    comment foo
    allow-multi-function 1
    other-map-wide-e.g-enforcement-setting-a 1
    ...
    map nodeA:3e:00.0,nodeB:0a:00.0

Or better, with the long-contemplated array support (which you'd not necessarily need to
spearhead):


pci: id
    ...
    map node=A,id=3e:00.0
    map node=B,id=3e:01.0
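
with lookup semantics along the lines of (hypothetical helper, just to
illustrate the intent):

    sub get_node_mapping {
        my ($mapping, $nodename) = @_;
        for my $entry (@{ $mapping->{map} // [] }) {
            return $entry->{id} if $entry->{node} eq $nodename;
        }
        return undef; # not mapped on this node
    }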

I read (but not in extended detail) "[PATCH common v3 2/3] add PVE/HardwareMap" and the above
is probably not expressive enough on its own, but otoh, the formats used in the patch
seem a bit too flexible (i.e., it makes little sense to allow mdev on one map entry of one
node but not on the one from another node – in practice that can hardly work for VMs I
guess and would limit any future live-migration caps?). Also, why is the ID split up like
that there?

I'd rather try to keep the map to the minimal ID info and some general restrictions (e.g.,
if some special use is allowed), but other details that only apply on using the mapped
device in a guest would still be handled there (as long as the mapping allows it).

Sorry for the somewhat high-level and slightly hand-wavy description; if you list specific
problems and required things I can try to formalize a bit more the config format I'd envision,
which should then also be useful for other resource mappings.

And just to bring it to the "table": in theory we _maybe_ could split naming a node device
and the cluster-wide mapping, i.e., the mapping would be, well, cluster-wide, and naming
devices plus some extra "how can/should it be used" would then be node-specific (possibly
not even in /etc/pve) – not saying I'd prefer that, but that's something that could be
explored before finalizing a design; I don't want to rush this as we have to support
something as big and certainly as popular as this basically forever, and while config
migrations can be done somehow it's basically always a pita.





* Re: [pve-devel] [PATCH qemu-server v4 6/6] feature #1027: virtio-9p & virtio-fs support
  2023-05-04  8:39   ` Fabian Grünbichler
@ 2023-05-05  8:27     ` Markus Frank
  0 siblings, 0 replies; 19+ messages in thread
From: Markus Frank @ 2023-05-05  8:27 UTC (permalink / raw)
  To: Proxmox VE development discussion, Fabian Grünbichler

Thanks for your review.

On 5/4/23 10:39, Fabian Grünbichler wrote:
> On April 25, 2023 12:21 pm, Markus Frank wrote:
>> adds support for sharing directories with a guest vm
>>
>> virtio-9p can be simply started with qemu
> 
> 9p is not really maintained anymore upstream AFAIK (only "Odd Fixes"),
> and had security issues in the past. Is there a good reason for
> supporting it (we didn't want to in the past ;)).

Except that it does not need a daemon to be started, no.
I will remove it.

>> virtio-fs needs virtiofsd to be started
> 
> it also is not compatible with migration, including savevm (so,
> snapshots with RAM, suspend to disk). until that changes, checks need to
> be added to error out early when attempting to do such an action.
> 
>>
>> Signed-off-by: Markus Frank <m.frank@proxmox.com>
>> ---
>>   PVE/API2/Qemu.pm  |  19 +++++++
>>   PVE/QemuServer.pm | 141 ++++++++++++++++++++++++++++++++++++++++++++++
>>   2 files changed, 160 insertions(+)
>>
>> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
>> index 587bb22..810e124 100644
>> --- a/PVE/API2/Qemu.pm
>> +++ b/PVE/API2/Qemu.pm
>> @@ -639,6 +639,8 @@ my $check_vm_modify_config_perm = sub {
>>   	    # the user needs Disk and PowerMgmt privileges to change the vmstate
>>   	    # also needs privileges on the storage, that will be checked later
>>   	    $rpcenv->check_vm_perm($authuser, $vmid, $pool, ['VM.Config.Disk', 'VM.PowerMgmt' ]);
>> +	} elsif ($opt =~ m/^sharedfiles\d$/) {
>> +	    # needs $param->{$opt} so checkout $check_vm_dir_perm
> 
> this is not quite correct - you should still check the permission for
> modifying the VM config here (VM.Config.Options or VM.Config.Disk?). of
> course, like for other things, there are extra checks later on to see
> whether you are allowed to add/remove/modify a particular shareddir
> usage. disks also require storage permissions which are not handled
> here, for example, but the general 'allowed to change disk aspects of
> VM' is.
> 
>>   	} else {
>>   	    # catches hostpci\d+, args, lock, etc.
>>   	    # new options will be checked here
>> @@ -649,6 +651,20 @@ my $check_vm_modify_config_perm = sub {
>>       return 1;
>>   };
>>   
>> +my $check_vm_dir_perm = sub {
>> +    my ($rpcenv, $authuser, $param) = @_;
>> +
>> +    return 1 if $authuser eq 'root@pam';
>> +
>> +    foreach my $opt (keys %{$param}) {
>> +	if ($opt =~ m/^sharedfiles\d$/) {
>> +	    my $sharedfiles = PVE::QemuServer::parse_sharedfiles($param->{$opt});
>> +	    $rpcenv->check_dir_perm($authuser, $sharedfiles->{dirid}, ['Map.Use']);
>> +	}
>> +    }
>> +    return 1;
>> +};
>> +
>>   __PACKAGE__->register_method({
>>       name => 'vmlist',
>>       path => '',
>> @@ -876,6 +892,7 @@ __PACKAGE__->register_method({
>>   
>>   	    &$check_vm_modify_config_perm($rpcenv, $authuser, $vmid, $pool, [ keys %$param]);
>>   
>> +	    &$check_vm_dir_perm($rpcenv, $authuser, $param);
>>   	    &$check_vm_create_serial_perm($rpcenv, $authuser, $vmid, $pool, $param);
>>   	    &$check_vm_create_usb_perm($rpcenv, $authuser, $vmid, $pool, $param);
>>   
>> @@ -1576,6 +1593,8 @@ my $update_vm_api  = sub {
>>   
>>       &$check_vm_modify_config_perm($rpcenv, $authuser, $vmid, undef, [keys %$param]);
>>   
>> +    &$check_vm_dir_perm($rpcenv, $authuser, $param);
>> +
>>       &$check_storage_access($rpcenv, $authuser, $storecfg, $vmid, $param);
>>   
>>       my $updatefn =  sub {
>> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
>> index c1d0fd2..fc07311 100644
>> --- a/PVE/QemuServer.pm
>> +++ b/PVE/QemuServer.pm
>> @@ -274,6 +274,31 @@ my $rng_fmt = {
>>       },
>>   };
>>   
>> +my $sharedfiles_fmt = {
>> +    type => {
>> +	type => 'string',
>> +	default_key => 1,
>> +	enum => ['virtio-9p', 'virtio-fs'],
>> +	description => "sharedfiles via"
>> +	    ." virtio-9p (https://www.linux-kvm.org/page/9p_virtio)"
>> +	    ." or virtio-fs (https://virtio-fs.gitlab.io/howto-qemu.html)",
>> +	format_description => "virtio-sharedfiles-type",
>> +	optional => 1,
> 
> why is this optional?

I forgot to delete

> 
>> +    },
>> +    dirid => {
>> +	type => 'string',
>> +	description => "dirid of directory you want to share with the guest VM",
>> +	format_description => "virtio-sharedfiles-dirid",
>> +	optional => 1,
> 
> why is this optional?
> 
>> +    },
>> +    tag => {
>> +	type => 'string',
>> +	description => "tag name for mounting in the guest VM",
>> +	format_description => "virtio-sharedfiles-tag",
>> +	optional => 1,
> 
> is the tag actually optional?
> 
>> +    },
> 
> see my comments in the manager patch, I think there are quite a few more
> knobs that we might want to expose here.
> 
>> +};
>> +
>>   my $meta_info_fmt = {
>>       'ctime' => {
>>   	type => 'integer',
>> @@ -832,6 +857,7 @@ while (my ($k, $v) = each %$confdesc) {
>>   
>>   my $MAX_USB_DEVICES = 14;
>>   my $MAX_NETS = 32;
>> +my $MAX_SHAREDFILES = 10;
>>   my $MAX_SERIAL_PORTS = 4;
>>   my $MAX_PARALLEL_PORTS = 3;
>>   my $MAX_NUMA = 8;
>> @@ -974,6 +1000,12 @@ my $netdesc = {
>>       description => "Specify network devices.",
>>   };
>>   
>> +my $sharedfilesdesc = {
>> +    optional => 1,
>> +    type => 'string', format => $sharedfiles_fmt,
>> +    description => "share files between host and guest",
>> +};
>> +
>>   PVE::JSONSchema::register_standard_option("pve-qm-net", $netdesc);
>>   
>>   my $ipconfig_fmt = {
>> @@ -1035,6 +1067,10 @@ for (my $i = 0; $i < $MAX_NETS; $i++)  {
>>       $confdesc_cloudinit->{"ipconfig$i"} = $ipconfigdesc;
>>   }
>>   
>> +for (my $i = 0; $i < $MAX_SHAREDFILES; $i++)  {
>> +    $confdesc->{"sharedfiles$i"} = $sharedfilesdesc;
>> +}
>> +
>>   foreach my $key (keys %$confdesc_cloudinit) {
>>       $confdesc->{$key} = $confdesc_cloudinit->{$key};
>>   }
>> @@ -1988,6 +2024,16 @@ sub parse_net {
>>       return $res;
>>   }
>>   
>> +sub parse_sharedfiles {
>> +    my ($value) = @_;
>> +
>> +    return if !$value;
>> +    my $res = eval { parse_property_string($sharedfiles_fmt, $value) };
>> +
>> +    warn $@ if $@;
>> +    return $res;
>> +}
>> +
>>   # ipconfigX ip=cidr,gw=ip,ip6=cidr,gw6=ip
>>   sub parse_ipconfig {
>>       my ($data) = @_;
>> @@ -4105,6 +4151,44 @@ sub config_to_command {
>>   	push @$devices, '-device', $netdevicefull;
>>       }
>>   
>> +    my $onevirtiofs = 0;
>> +    for (my $i = 0; $i < $MAX_SHAREDFILES; $i++) {
> 
> what about hotplug support?
> 
>> +	my $sharedfilesstr = "sharedfiles$i";
>> +
>> +	next if !$conf->{$sharedfilesstr};
>> +	my $sharedfiles = parse_sharedfiles($conf->{$sharedfilesstr});
>> +	next if !$sharedfiles;
>> +
>> +	die $sharedfilesstr.' needs a type (virtio-9p or virtio-fs)' if !$sharedfiles->{type};
>> +	die $sharedfilesstr.' needs a dirid of a directory to share' if !$sharedfiles->{dirid};
>> +	die $sharedfilesstr.' needs a mount tag' if !$sharedfiles->{tag};
> 
> so I guess all three are not optional ;)
> 
>> +
>> +	if ($sharedfiles->{type} eq 'virtio-fs' && $conf->{numa}) {
>> +	    die "disable numa to use virtio-fs or use virtio-9p instead";
>> +	}
> 
> okay, that is an interesting restriction. is that just because it
> requires some other, complex changes, or is it not working at all?

I thought so because I read somewhere that it will not work.
I also do not know why qemu returns this error when numa and virtiofsd are configured:
kvm: -chardev socket,id=virtfs0,path=/var/run/virtiofsd/vm100-fs0: Failed to connect to '/var/run/virtiofsd/vm100-fs0': Connection refused

I will try to find a solution.
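
(a rough sketch of what the upstream virtio-fs docs suggest for NUMA
guests - variable names here are assumptions - would be one shared,
file-backed memdev per guest NUMA node instead of the single 'mem'
object:

    for my $i (0 .. $numa_node_count - 1) {
        push @$devices, '-object', "memory-backend-file,id=mem$i"
            .",size=${mem_per_node}M,mem-path=/dev/shm,share=on";
        push @$devices, '-numa', "node,nodeid=$i,memdev=mem$i";
    }

but I still have to verify that)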

> 
>> +
>> +	my $path = PVE::DirConfig::extract_dir_path($nodename, $sharedfiles->{dirid});
>> +
>> +	if ($sharedfiles->{type} eq 'virtio-9p') {
>> +	    push @$devices, '-fsdev', "local,security_model=passthrough,id=fsdev$i"
>> +		.",path=$path";
>> +	    push @$devices, '-device', "virtio-9p-pci,id=fs9p$i,fsdev=fsdev$i"
>> +		.",mount_tag=$sharedfiles->{tag}";
>> +	}
>> +	if ($sharedfiles->{type} eq 'virtio-fs') {
>> +	    push @$devices, '-chardev', "socket,id=virtfs$i,path=/var/run/virtiofsd/vm$vmid-fs$i";
>> +	    push @$devices, '-device', 'vhost-user-fs-pci,queue-size=1024'
>> +		.",chardev=virtfs$i,tag=$sharedfiles->{tag}";
>> +	    $onevirtiofs = 1;
>> +	}
>> +    }
>> +
>> +    if ($onevirtiofs) {
>> +	push @$devices, '-object', 'memory-backend-file,id=mem'
>> +	    .",size=$conf->{memory}M,mem-path=/dev/shm,share=on";
>> +	push @$devices, '-numa', 'node,memdev=mem';
>> +    }
>> +
>>       if ($conf->{ivshmem}) {
>>   	my $ivshmem = parse_property_string($ivshmem_fmt, $conf->{ivshmem});
>>   
>> @@ -4190,6 +4274,41 @@ sub config_to_command {
>>       return wantarray ? ($cmd, $vollist, $spice_port) : $cmd;
>>   }
>>   
>> +sub start_virtiofs {
> 
> how/when are they stopped again? :)

virtiofsd stops itself automatically together with qemu.

> 
>> +    my ($vmid, $path, $fsid, $dirid) = @_;
>> +
>> +    my $socket_path_root = "/var/run/virtiofsd";
>> +    mkdir $socket_path_root;
>> +    my $socket_path = "$socket_path_root/vm$vmid-fs$fsid";
>> +    unlink($socket_path);
>> +    my $socket = IO::Socket::UNIX->new(
>> +	Type => SOCK_STREAM,
>> +	Local => $socket_path,
>> +	Listen => 1,
>> +    ) or die "cannot create socket - $!\n";
>> +
>> +    my $flags = fcntl($socket, F_GETFD, 0)
>> +	or die "failed to get file descriptor flags: $!\n";
>> +    fcntl($socket, F_SETFD, $flags & ~FD_CLOEXEC)
>> +	or die "failed to remove FD_CLOEXEC from file descriptor\n";
>> +
>> +    my $fd = $socket->fileno();
>> +
>> +    my $pid = fork();
>> +    if ($pid == 0) {
>> +	# TODO replace with rust implementation in bookworm
>> +	run_command(['/usr/lib/kvm/virtiofsd', "--fd=$fd",
>> +	    '-o', "source=$path", '-o', 'cache=always']);
>> +	POSIX::_exit(0);
>> +    } elsif (!defined($pid)) {
>> +	die "could not fork to start virtiofsd";
>> +    }
>> +
>> +    # return socket to keep it alive,
>> +    # so that qemu will wait for virtiofsd to start
>> +    return $socket;
>> +}
>> +
>>   sub check_rng_source {
>>       my ($source) = @_;
>>   
>> @@ -5747,6 +5866,23 @@ sub vm_start_nolock {
>>       my ($cmd, $vollist, $spice_port) = config_to_command($storecfg, $vmid,
>>   	$conf, $defaults, $forcemachine, $forcecpu, $params->{'pbs-backing'});
>>   
>> +    my $nodename = nodename();
>> +    my @sockets;
>> +    for (my $i = 0; $i < $MAX_SHAREDFILES; $i++) {
>> +	my $sharedfilesstr = "sharedfiles$i";
>> +
>> +	next if !$conf->{$sharedfilesstr};
>> +	my $sharedfiles = parse_sharedfiles($conf->{$sharedfilesstr});
>> +	next if !$sharedfiles;
>> +
>> +	my $path = PVE::DirConfig::extract_dir_path($nodename, $sharedfiles->{dirid});
>> +
>> +	if ($sharedfiles && $sharedfiles->{type} eq 'virtio-fs' && !$conf->{numa}) {
>> +	    my $socket = start_virtiofs($vmid, $path, $i, $sharedfiles->{dirid});
>> +	    push @sockets, $socket;
>> +	}
>> +    }
>> +
>>       my $migration_ip;
>>       my $get_migration_ip = sub {
>>   	my ($nodename) = @_;
>> @@ -6094,6 +6230,11 @@ sub vm_start_nolock {
>>   
>>       PVE::GuestHelpers::exec_hookscript($conf, $vmid, 'post-start');
>>   
>> +    foreach my $socket (@sockets) {
>> +	shutdown($socket, 2);
>> +	close($socket);
>> +    }
>> +
>>       return $res;
>>   }
>>   
>> -- 
>> 2.30.2
>>

* Re: [pve-devel] [PATCH manager v4 3/6] added Config for Shared Filesystem Directories
  2023-05-04 10:21             ` Thomas Lamprecht
@ 2023-05-09  9:31               ` Dominik Csapak
  0 siblings, 0 replies; 19+ messages in thread
From: Dominik Csapak @ 2023-05-09  9:31 UTC (permalink / raw)
  To: Thomas Lamprecht, Proxmox VE development discussion
  Cc: Fabian Grünbichler, Markus Frank, Wolfgang Bumiller

to summarize what thomas and i discussed off-list:

1. i'll try to integrate arrays into the section config in pve (pbs has rudimentary support,
    but no "automatic" api integration)
    including a sensible api create/update/delete schema:
    - plan is to have the inner properties exposed but require an id parameter
      that identifies which entry of which array property is to be updated
    i'll send this as a separate series, since for the remaining code it's
    not really relevant where the schema/config comes from, the internal
    api of that will probably be similar

2. Config should be per type (as mentioned in the last responses)
    and on create/update we'll not check for existence/correctness etc. for now
    (we can do that still when we broadcast the relevant info in pmxcfs in the future)

    we'll do the checks for that and insert the info in the gui and on vm start/migration/etc.

    also i'll combine some id properties when we don't require them separately, e.g. vendor/device.
    and make it clear in the description that these are for consistency checks on start

3. API will probably go to /cluster/resource/$type/$id
    where the get/create/put/delete depends on the above mentioned map id parameter for editing
    single entries
    (pending a better name than 'resource' since it's very similar to /cluster/resources
    which is an entirely different thing)

4. Most of Markus' permission structure is ok, besides replacing 'Map.' with 'Resource.'
    depending on which name we choose in the api

5. we'll probably remove the usermgmt permissions for PVEAdmin (to set it apart from the
    Administrator role)
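
as a purely illustrative example for 3., the call shapes could look like
this (paths and parameter names are hypothetical until the series lands):

    # create a mapping with one per-node entry
    pvesh create /cluster/resource/pci --id gpu0 --map node=node1,path=0000:01:00.0
    # add an entry for another node to the same mapping
    pvesh set /cluster/resource/pci/gpu0 --map node=node2,path=0000:02:00.0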





end of thread

Thread overview: 19+ messages
2023-04-25 10:21 [pve-devel] [PATCH docs v4 0/6] feature #1027 virtio-9p/virtio-fs Markus Frank
2023-04-25 10:21 ` [pve-devel] [PATCH docs v4 1/6] added shared filesystem doc for virtio-fs & virtio-9p Markus Frank
2023-04-25 10:21 ` [pve-devel] [PATCH access-control v4 2/6] added acls for Shared Files Directories Markus Frank
2023-05-04  8:24   ` Fabian Grünbichler
2023-04-25 10:21 ` [pve-devel] [PATCH manager v4 3/6] added Config for Shared Filesystem Directories Markus Frank
2023-05-03 11:26   ` Dominik Csapak
2023-05-04  8:13     ` Thomas Lamprecht
2023-05-04  8:31       ` Dominik Csapak
2023-05-04  8:42         ` Thomas Lamprecht
2023-05-04  8:57           ` Dominik Csapak
2023-05-04 10:21             ` Thomas Lamprecht
2023-05-09  9:31               ` Dominik Csapak
2023-05-04  8:24   ` Fabian Grünbichler
2023-04-25 10:21 ` [pve-devel] [PATCH manager v4 4/6] added Shared Files tab in Node Settings Markus Frank
2023-04-25 10:21 ` [pve-devel] [PATCH manager v4 5/6] added options to add virtio-9p & virtio-fs Shared Filesystems to qemu config Markus Frank
2023-04-25 10:21 ` [pve-devel] [PATCH qemu-server v4 6/6] feature #1027: virtio-9p & virtio-fs support Markus Frank
2023-05-04  8:39   ` Fabian Grünbichler
2023-05-05  8:27     ` Markus Frank
2023-05-04  8:24 ` [pve-devel] [PATCH docs v4 0/6] feature #1027 virtio-9p/virtio-fs Fabian Grünbichler
