* [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v6 0/11] virtiofs
@ 2023-08-09 8:37 Markus Frank
2023-08-09 8:37 ` [pve-devel] [PATCH cluster v7 1/11] add mapping/dir.cfg for resource mapping Markus Frank
` (11 more replies)
0 siblings, 12 replies; 16+ messages in thread
From: Markus Frank @ 2023-08-09 8:37 UTC (permalink / raw)
To: pve-devel
qemu-server patches require pve-guest-common and pve-cluster patches
pve-manager patches require the pve-doc patch
I could not get virtiofsd to run via run_command without it leaving zombie
processes after shutdown.
So I replaced run_command with exec for now.
Maybe someone can find out why this happens.
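The double-fork used to detach virtiofsd boils down to the following pattern; this is a minimal Python sketch of the idea, not the actual Perl from patch 4/11:

```python
import os

def spawn_detached(argv):
    # Sketch: double-fork so the exec'd process is reparented to init,
    # and the original parent only ever reaps the short-lived middle
    # process -- so no zombies accumulate in the parent.
    pid = os.fork()
    if pid == 0:
        os.setsid()                    # new session, detach from the parent
        pid2 = os.fork()
        if pid2 == 0:
            os.execvp(argv[0], argv)   # grandchild would become virtiofsd
        os._exit(0)                    # middle process exits right away
    os.waitpid(pid, 0)                 # parent reaps the middle process
```

The middle process exits immediately, so the grandchild is reparented to init (PID 1), which reaps it when it exits; the parent only ever waits for the middle process.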
cluster:
Markus Frank (1):
add mapping/dir.cfg for resource mapping
src/PVE/Cluster.pm | 1 +
src/pmxcfs/status.c | 1 +
2 files changed, 2 insertions(+)
guest-common:
v7:
* renamed DIR to Dir
* made xattr & acl settings per directory-id and not per node
Markus Frank (1):
add Dir mapping config
src/Makefile | 1 +
src/PVE/Mapping/Dir.pm | 177 +++++++++++++++++++++++++++++++++++++++++
2 files changed, 178 insertions(+)
create mode 100644 src/PVE/Mapping/Dir.pm
docs:
* added windows setup guide
Markus Frank (1):
added shared filesystem doc for virtio-fs
qm.adoc | 70 +++++++++++++++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 68 insertions(+), 2 deletions(-)
qemu-server:
v7:
* enabled use of hugepages
* renamed variables
* added acl & xattr parameters that overwrite the default directory
mapping settings
v6:
* added virtiofsd dependency
* 2 new patches:
* Permission check for virtiofs directory access
* check_local_resources: virtiofs
v5:
* allow numa settings with virtio-fs
* added direct-io & cache settings
* changed to rust implementation of virtiofsd
* made double fork and closed all file descriptor so that the lockfile
gets released.
v3:
* created own socket and get file descriptor for virtiofsd
so there is no race between starting virtiofsd & qemu
* added TODO to replace virtiofsd with rust implementation in bookworm
(I packaged the rust implementation for bookworm & the C implementation
in qemu will be removed in qemu 8.0)
v2:
* replaced sharedfiles_fmt path in qemu-server with dirid:
* user can use the dirid to specify the directory without requiring root access
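The v3 socket hand-off mentioned in the changelog above can be sketched like this (a minimal Python sketch of the idea; the actual code in patch 4/11 is Perl using IO::Socket::UNIX and fcntl):

```python
import fcntl
import os
import socket

def listening_fd(path):
    # Sketch: pre-create the listening unix socket in the management
    # process and pass its fd to virtiofsd (--fd=N), so QEMU cannot
    # connect before anything is listening. FD_CLOEXEC must be cleared
    # so the fd survives the exec() of virtiofsd.
    try:
        os.unlink(path)
    except FileNotFoundError:
        pass
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.bind(path)
    sock.listen(1)
    flags = fcntl.fcntl(sock.fileno(), fcntl.F_GETFD)
    fcntl.fcntl(sock.fileno(), fcntl.F_SETFD, flags & ~fcntl.FD_CLOEXEC)
    return sock
```

QEMU is then pointed at the same path via `-chardev socket,...`; since the management process already listens, QEMU's connect cannot race virtiofsd's startup.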
Markus Frank (3):
feature #1027: virtio-fs support
Permission check for virtiofs directory access
check_local_resources: virtiofs
PVE/API2/Qemu.pm | 18 ++++
PVE/QemuServer.pm | 184 ++++++++++++++++++++++++++++++++++-
PVE/QemuServer/Memory.pm | 25 +++--
debian/control | 1 +
test/MigrationTest/Shared.pm | 7 ++
5 files changed, 227 insertions(+), 8 deletions(-)
manager:
v7:
* changed checkbox to dropdown menu for xattr & acl
* made xattr & acl settings per directory-id and not per node
Markus Frank (5):
api: add resource map api endpoints for directories
ui: add edit window for dir mappings
ui: ResourceMapTree for DIR
ui: form: add DIRMapSelector
ui: add options to add virtio-fs to qemu config
PVE/API2/Cluster/Mapping.pm | 7 +
PVE/API2/Cluster/Mapping/Dir.pm | 309 ++++++++++++++++++++++++++++
PVE/API2/Cluster/Mapping/Makefile | 3 +-
www/manager6/Makefile | 4 +
www/manager6/Utils.js | 1 +
www/manager6/dc/Config.js | 10 +
www/manager6/dc/DIRMapView.js | 50 +++++
www/manager6/form/DIRMapSelector.js | 63 ++++++
www/manager6/qemu/HardwareView.js | 19 ++
www/manager6/qemu/VirtiofsEdit.js | 146 +++++++++++++
www/manager6/window/DirMapEdit.js | 222 ++++++++++++++++++++
11 files changed, 833 insertions(+), 1 deletion(-)
create mode 100644 PVE/API2/Cluster/Mapping/Dir.pm
create mode 100644 www/manager6/dc/DIRMapView.js
create mode 100644 www/manager6/form/DIRMapSelector.js
create mode 100644 www/manager6/qemu/VirtiofsEdit.js
create mode 100644 www/manager6/window/DirMapEdit.js
--
2.39.2
^ permalink raw reply [flat|nested] 16+ messages in thread
* [pve-devel] [PATCH cluster v7 1/11] add mapping/dir.cfg for resource mapping
2023-08-09 8:37 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v6 0/11] virtiofs Markus Frank
@ 2023-08-09 8:37 ` Markus Frank
2023-08-09 8:37 ` [pve-devel] [PATCH guest-common v7 2/11] add Dir mapping config Markus Frank
` (10 subsequent siblings)
11 siblings, 0 replies; 16+ messages in thread
From: Markus Frank @ 2023-08-09 8:37 UTC (permalink / raw)
To: pve-devel
Add it to both the Perl side (PVE/Cluster.pm) and the pmxcfs side
(status.c).
Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
src/PVE/Cluster.pm | 1 +
src/pmxcfs/status.c | 1 +
2 files changed, 2 insertions(+)
diff --git a/src/PVE/Cluster.pm b/src/PVE/Cluster.pm
index ff777ba..39f2d99 100644
--- a/src/PVE/Cluster.pm
+++ b/src/PVE/Cluster.pm
@@ -80,6 +80,7 @@ my $observed = {
'virtual-guest/cpu-models.conf' => 1,
'mapping/pci.cfg' => 1,
'mapping/usb.cfg' => 1,
+ 'mapping/dir.cfg' => 1,
};
sub prepare_observed_file_basedirs {
diff --git a/src/pmxcfs/status.c b/src/pmxcfs/status.c
index 1f29b07..e6f0bac 100644
--- a/src/pmxcfs/status.c
+++ b/src/pmxcfs/status.c
@@ -110,6 +110,7 @@ static memdb_change_t memdb_change_array[] = {
{ .path = "firewall/cluster.fw" },
{ .path = "mapping/pci.cfg" },
{ .path = "mapping/usb.cfg" },
+ { .path = "mapping/dir.cfg" },
};
static GMutex mutex;
--
2.39.2
* [pve-devel] [PATCH guest-common v7 2/11] add Dir mapping config
2023-08-09 8:37 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v6 0/11] virtiofs Markus Frank
2023-08-09 8:37 ` [pve-devel] [PATCH cluster v7 1/11] add mapping/dir.cfg for resource mapping Markus Frank
@ 2023-08-09 8:37 ` Markus Frank
2023-08-09 8:37 ` [pve-devel] [PATCH docs v7 3/11] added shared filesystem doc for virtio-fs Markus Frank
` (9 subsequent siblings)
11 siblings, 0 replies; 16+ messages in thread
From: Markus Frank @ 2023-08-09 8:37 UTC (permalink / raw)
To: pve-devel
Adds a config file for directories, using a 'map'
array property string for each node mapping.
Besides node & path, the map array has an optional
submounts parameter.
Additionally, there are default settings for xattr & acl.
example config:
```
some-dir-id
map node=node1,path=/mnt/share/,submounts=1
map node=node2,path=/mnt/share/
xattr 1
acl 1
```
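The `map` lines above are property strings; a minimal sketch of how such a line splits into key/value pairs (the actual code uses PVE::JSONSchema::parse_property_string, which also validates against $map_fmt):

```python
def parse_map_line(s):
    # Sketch only: split a property string like
    # 'node=node1,path=/mnt/share/,submounts=1' into a dict.
    entry = {}
    for part in s.split(','):
        if not part:
            continue
        key, _, value = part.partition('=')
        entry[key] = value
    return entry
```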
Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
src/Makefile | 1 +
src/PVE/Mapping/Dir.pm | 177 +++++++++++++++++++++++++++++++++++++++++
2 files changed, 178 insertions(+)
create mode 100644 src/PVE/Mapping/Dir.pm
diff --git a/src/Makefile b/src/Makefile
index cbc40c1..2ebe08b 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -17,6 +17,7 @@ install: PVE
install -d ${PERL5DIR}/PVE/Mapping
install -m 0644 PVE/Mapping/PCI.pm ${PERL5DIR}/PVE/Mapping/
install -m 0644 PVE/Mapping/USB.pm ${PERL5DIR}/PVE/Mapping/
+ install -m 0644 PVE/Mapping/Dir.pm ${PERL5DIR}/PVE/Mapping/
install -d ${PERL5DIR}/PVE/VZDump
install -m 0644 PVE/VZDump/Plugin.pm ${PERL5DIR}/PVE/VZDump/
install -m 0644 PVE/VZDump/Common.pm ${PERL5DIR}/PVE/VZDump/
diff --git a/src/PVE/Mapping/Dir.pm b/src/PVE/Mapping/Dir.pm
new file mode 100644
index 0000000..6c87073
--- /dev/null
+++ b/src/PVE/Mapping/Dir.pm
@@ -0,0 +1,177 @@
+package PVE::Mapping::Dir;
+
+use strict;
+use warnings;
+
+use PVE::Cluster qw(cfs_register_file cfs_read_file cfs_lock_file cfs_write_file);
+use PVE::JSONSchema qw(get_standard_option parse_property_string);
+use PVE::SectionConfig;
+use PVE::INotify;
+
+use base qw(PVE::SectionConfig);
+
+my $FILENAME = 'mapping/dir.cfg';
+
+cfs_register_file($FILENAME,
+ sub { __PACKAGE__->parse_config(@_); },
+ sub { __PACKAGE__->write_config(@_); });
+
+
+# so we don't have to repeat the type every time
+sub parse_section_header {
+ my ($class, $line) = @_;
+
+ if ($line =~ m/^(\S+)\s*$/) {
+ my $id = $1;
+ my $errmsg = undef; # set if you want to skip whole section
+ eval { PVE::JSONSchema::pve_verify_configid($id) };
+ $errmsg = $@ if $@;
+ my $config = {}; # to return additional attributes
+ return ('dir', $id, $errmsg, $config);
+ }
+ return undef;
+}
+
+sub format_section_header {
+ my ($class, $type, $sectionId, $scfg, $done_hash) = @_;
+
+ return "$sectionId\n";
+}
+
+sub type {
+ return 'dir';
+}
+
+my $map_fmt = {
+ node => get_standard_option('pve-node'),
+ path => {
+ description => "Directory-path that should be shared with the guest.",
+ type => 'string',
+ format => 'pve-storage-path',
+ },
+ submounts => {
+ type => 'boolean',
+ description => "Option to tell the guest which directories are mount points.",
+ optional => 1,
+ },
+ description => {
+ description => "Description of the node specific directory.",
+ type => 'string',
+ optional => 1,
+ maxLength => 4096,
+ },
+};
+
+my $defaultData = {
+ propertyList => {
+ id => {
+ type => 'string',
+ description => "The ID of the directory",
+ format => 'pve-configid',
+ },
+ description => {
+ description => "Description of the directory",
+ type => 'string',
+ optional => 1,
+ maxLength => 4096,
+ },
+ map => {
+ type => 'array',
+ description => 'A list of maps for the cluster nodes.',
+ optional => 1,
+ items => {
+ type => 'string',
+ format => $map_fmt,
+ },
+ },
+ xattr => {
+ type => 'boolean',
+ description => "Enable support for extended attributes.",
+ optional => 1,
+ },
+ acl => {
+ type => 'boolean',
+ description => "Enable support for posix ACLs (implies --xattr).",
+ optional => 1,
+ },
+ },
+};
+
+sub private {
+ return $defaultData;
+}
+
+sub options {
+ return {
+ description => { optional => 1 },
+ map => {},
+ xattr => { optional => 1 },
+ acl => { optional => 1 },
+ };
+}
+
+sub assert_valid {
+ my ($dir_cfg) = @_;
+
+ my $path = $dir_cfg->{path};
+
+ if (! -e $path) {
+ die "Path $path does not exist\n";
+ }
+ if ((-e $path) && (! -d $path)) {
+ die "Path $path exists but is not a directory\n"
+ }
+
+ return 1;
+};
+
+sub config {
+ return cfs_read_file($FILENAME);
+}
+
+sub lock_dir_config {
+ my ($code, $errmsg) = @_;
+
+ cfs_lock_file($FILENAME, undef, $code);
+ my $err = $@;
+ if ($err) {
+ $errmsg ? die "$errmsg: $err" : die $err;
+ }
+}
+
+sub write_dir_config {
+ my ($cfg) = @_;
+
+ cfs_write_file($FILENAME, $cfg);
+}
+
+sub find_on_current_node {
+ my ($id) = @_;
+
+ my $cfg = config();
+ my $node = PVE::INotify::nodename();
+
+ return get_node_mapping($cfg, $id, $node);
+}
+
+sub get_node_mapping {
+ my ($cfg, $id, $nodename) = @_;
+
+ return undef if !defined($cfg->{ids}->{$id});
+
+ my $res = [];
+ my $mapping_list = $cfg->{ids}->{$id}->{map};
+ foreach my $map (@{$mapping_list}) {
+ my $entry = eval { parse_property_string($map_fmt, $map) };
+ warn $@ if $@;
+ if ($entry && $entry->{node} eq $nodename) {
+ push $res->@*, $entry;
+ }
+ }
+ return $res;
+}
+
+PVE::Mapping::Dir->register();
+PVE::Mapping::Dir->init();
+
+1;
--
2.39.2
* [pve-devel] [PATCH docs v7 3/11] added shared filesystem doc for virtio-fs
2023-08-09 8:37 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v6 0/11] virtiofs Markus Frank
2023-08-09 8:37 ` [pve-devel] [PATCH cluster v7 1/11] add mapping/dir.cfg for resource mapping Markus Frank
2023-08-09 8:37 ` [pve-devel] [PATCH guest-common v7 2/11] add Dir mapping config Markus Frank
@ 2023-08-09 8:37 ` Markus Frank
2023-10-05 8:56 ` Fabian Grünbichler
2023-08-09 8:37 ` [pve-devel] [PATCH qemu-server v7 4/11] feature #1027: virtio-fs support Markus Frank
` (8 subsequent siblings)
11 siblings, 1 reply; 16+ messages in thread
From: Markus Frank @ 2023-08-09 8:37 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
qm.adoc | 70 +++++++++++++++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 68 insertions(+), 2 deletions(-)
diff --git a/qm.adoc b/qm.adoc
index e35dbf0..8f4020d 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -997,6 +997,71 @@ recommended to always use a limiter to avoid guests using too many host
resources. If desired, a value of '0' for `max_bytes` can be used to disable
all limits.
+[[qm_virtiofs]]
+Virtio-fs
+~~~~~~~~~
+
+Virtio-fs is a shared file system that enables sharing a directory between
+host and guest VM, while taking advantage of the locality of virtual machines
+and the hypervisor to achieve higher throughput than 9p.
+
+Linux VMs with kernel >=5.4 support this feature by default.
+
+There is a guide available on how to utilize virtiofs in Windows VMs.
+https://github.com/virtio-win/kvm-guest-drivers-windows/wiki/Virtiofs:-Shared-file-system
+
+Add mapping for Shared Directories
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To add a mapping, go to the Resource Mapping tab in Datacenter in the WebUI,
+use the API directly with pvesh as described in the
+xref:resource_mapping[Resource Mapping] section,
+or add the mapping to the configuration file `/etc/pve/mapping/dir.cfg`:
+
+----
+some-dir-id
+ map node=node1,path=/mnt/share/,submounts=1
+ map node=node2,path=/mnt/share/,
+ xattr 1
+ acl 1
+----
+
+Set `submounts` to `1` when multiple file systems are mounted in a
+shared directory.
+
+Add virtiofs to VM
+^^^^^^^^^^^^^^^^^^
+
+To share a directory using virtio-fs, you need to specify the directory ID
+(dirid) that has been configured in the Resource Mapping. Additionally, you
+can set the `cache` option to either `always`, `never`, or `auto`, depending on
+your requirements. If you want virtio-fs to honor the `O_DIRECT` flag, you can
+set the `direct-io` parameter to `1`.
+Additionally, it is possible to overwrite the default mapping settings
+for xattr & acl by setting them to either `1` or `0`.
+
+The `acl` parameter automatically implies `xattr`, that is, it makes no
+difference whether you set xattr to `0` if acl is set to `1`.
+
+----
+qm set <vmid> -virtiofs0 dirid=<dirid>,tag=<mount tag>,cache=always,direct-io=1
+qm set <vmid> -virtiofs1 <dirid>,tag=<mount tag>,cache=never,xattr=1
+qm set <vmid> -virtiofs2 <dirid>,tag=<mount tag>,acl=1
+----
+
+The dirid associated with the path on the current node is also used as the
+mount tag (name used to mount the device on the guest).
+
+To mount virtio-fs in a guest VM with the Linux kernel virtiofs driver, run
+the following command:
+
+----
+mount -t virtiofs <mount tag> <mount point>
+----
+
+For more information on available virtiofsd parameters, see the
+https://gitlab.com/virtio-fs/virtiofsd[GitLab virtiofsd project page].
+
[[qm_bootorder]]
Device Boot Order
~~~~~~~~~~~~~~~~~
@@ -1600,8 +1665,9 @@ in the relevant tab in the `Resource Mappings` category, or on the cli with
# pvesh create /cluster/mapping/<type> <options>
----
-Where `<type>` is the hardware type (currently either `pci` or `usb`) and
-`<options>` are the device mappings and other configuration parameters.
+Where `<type>` is the hardware type (currently either `pci`, `usb` or
+xref:qm_virtiofs[dir]) and `<options>` are the device mappings and other
+configuration parameters.
Note that the options must include a map property with all identifying
properties of that hardware, so that it's possible to verify the hardware did
--
2.39.2
* [pve-devel] [PATCH qemu-server v7 4/11] feature #1027: virtio-fs support
2023-08-09 8:37 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v6 0/11] virtiofs Markus Frank
` (2 preceding siblings ...)
2023-08-09 8:37 ` [pve-devel] [PATCH docs v7 3/11] added shared filesystem doc for virtio-fs Markus Frank
@ 2023-08-09 8:37 ` Markus Frank
2023-10-05 8:56 ` Fabian Grünbichler
2023-08-09 8:37 ` [pve-devel] [PATCH qemu-server v7 5/11] Permission check for virtiofs directory access Markus Frank
` (7 subsequent siblings)
11 siblings, 1 reply; 16+ messages in thread
From: Markus Frank @ 2023-08-09 8:37 UTC (permalink / raw)
To: pve-devel
Add support for sharing directories with a guest VM.
virtio-fs needs virtiofsd to be started.
Since virtiofsd, despite being a daemon, does not run in the
background, a double-fork is used to start it as a detached process.
virtiofsd should close itself together with QEMU.
There is the dirid parameter
and the optional parameters direct-io & cache.
Additionally, the xattr & acl parameters overwrite the
directory mapping settings for xattr & acl.
The dirid gets mapped to the path on the current node
and is also used as a mount-tag (name used to mount the
device on the guest).
example config:
```
virtiofs0: foo,direct-io=1,cache=always,acl=1
virtiofs1: dirid=bar,cache=never,xattr=1
```
For information on the optional parameters, see:
https://gitlab.com/virtio-fs/virtiofsd/-/blob/main/README.md
Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
I could not get virtiofsd to run via run_command without it leaving zombie
processes after shutdown.
So I replaced run_command with exec for now.
Maybe someone can find out why this happens.
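When the virtiofsd command line is built, per-VM options override the directory-mapping defaults; a minimal Python sketch of that precedence (flag names taken from the patch: --xattr, --posix-acl, --allow-direct-io, --cache), not the actual Perl:

```python
def virtiofsd_args(fd, path, opts, dir_defaults):
    # Sketch: per-VM options (opts) win over the directory-mapping
    # defaults (dir_defaults); an explicit 0/False disables the flag.
    args = [f'--fd={fd}', f'--shared-dir={path}']
    xattr = opts.get('xattr', dir_defaults.get('xattr'))
    acl = opts.get('acl', dir_defaults.get('acl'))
    if xattr:
        args.append('--xattr')
    if acl:
        args.append('--posix-acl')
    if opts.get('direct-io'):
        args.append('--allow-direct-io')
    if opts.get('cache'):
        args.append(f"--cache={opts['cache']}")
    return args
```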
PVE/QemuServer.pm | 174 ++++++++++++++++++++++++++++++++++++++-
PVE/QemuServer/Memory.pm | 25 ++++--
debian/control | 1 +
3 files changed, 193 insertions(+), 7 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 484bc7f..d547dd6 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -43,6 +43,7 @@ use PVE::PBSClient;
use PVE::RESTEnvironment qw(log_warn);
use PVE::RPCEnvironment;
use PVE::Storage;
+use PVE::Mapping::Dir;
use PVE::SysFSTools;
use PVE::Systemd;
use PVE::Tools qw(run_command file_read_firstline file_get_contents dir_glob_foreach get_host_arch $IPV6RE);
@@ -276,6 +277,42 @@ my $rng_fmt = {
},
};
+my $virtiofs_fmt = {
+ 'dirid' => {
+ type => 'string',
+ default_key => 1,
+ description => "Mapping identifier of the directory mapping to be"
+ ." shared with the guest. Also used as a mount tag inside the VM.",
+ format_description => 'mapping-id',
+ format => 'pve-configid',
+ },
+ 'cache' => {
+ type => 'string',
+ description => "The caching policy the file system should use"
+ ." (auto, always, never).",
+ format_description => "virtiofs-cache",
+ enum => [qw(auto always never)],
+ optional => 1,
+ },
+ 'direct-io' => {
+ type => 'boolean',
+ description => "Honor the O_DIRECT flag passed down by guest applications",
+ format_description => "virtiofs-directio",
+ optional => 1,
+ },
+ xattr => {
+ type => 'boolean',
+ description => "Enable support for extended attributes.",
+ optional => 1,
+ },
+ acl => {
+ type => 'boolean',
+ description => "Enable support for posix ACLs (implies --xattr).",
+ optional => 1,
+ },
+};
+PVE::JSONSchema::register_format('pve-qm-virtiofs', $virtiofs_fmt);
+
my $meta_info_fmt = {
'ctime' => {
type => 'integer',
@@ -840,6 +877,7 @@ while (my ($k, $v) = each %$confdesc) {
}
my $MAX_NETS = 32;
+my $MAX_VIRTIOFS = 10;
my $MAX_SERIAL_PORTS = 4;
my $MAX_PARALLEL_PORTS = 3;
my $MAX_NUMA = 8;
@@ -984,6 +1022,21 @@ my $netdesc = {
PVE::JSONSchema::register_standard_option("pve-qm-net", $netdesc);
+my $virtiofsdesc = {
+ optional => 1,
+ type => 'string', format => $virtiofs_fmt,
+ description => "share files between host and guest",
+};
+PVE::JSONSchema::register_standard_option("pve-qm-virtiofs", $virtiofsdesc);
+
+sub max_virtiofs {
+ return $MAX_VIRTIOFS;
+}
+
+for (my $i = 0; $i < $MAX_VIRTIOFS; $i++) {
+ $confdesc->{"virtiofs$i"} = $virtiofsdesc;
+}
+
my $ipconfig_fmt = {
ip => {
type => 'string',
@@ -4113,6 +4166,21 @@ sub config_to_command {
push @$devices, '-device', $netdevicefull;
}
+ my $virtiofs_enabled = 0;
+ for (my $i = 0; $i < $MAX_VIRTIOFS; $i++) {
+ my $opt = "virtiofs$i";
+
+ next if !$conf->{$opt};
+ my $virtiofs = parse_property_string('pve-qm-virtiofs', $conf->{$opt});
+ next if !$virtiofs;
+
+ push @$devices, '-chardev', "socket,id=virtfs$i,path=/var/run/virtiofsd/vm$vmid-fs$i";
+ push @$devices, '-device', 'vhost-user-fs-pci,queue-size=1024'
+ .",chardev=virtfs$i,tag=$virtiofs->{dirid}";
+
+ $virtiofs_enabled = 1;
+ }
+
if ($conf->{ivshmem}) {
my $ivshmem = parse_property_string($ivshmem_fmt, $conf->{ivshmem});
@@ -4172,6 +4240,14 @@ sub config_to_command {
}
push @$machineFlags, "type=${machine_type_min}";
+ if ($virtiofs_enabled && !$conf->{numa}) {
+ # kvm: '-machine memory-backend' and '-numa memdev' properties are
+ # mutually exclusive
+ push @$devices, '-object', 'memory-backend-file,id=virtiofs-mem'
+ .",size=$conf->{memory}M,mem-path=/dev/shm,share=on";
+ push @$machineFlags, 'memory-backend=virtiofs-mem';
+ }
+
push @$cmd, @$devices;
push @$cmd, '-rtc', join(',', @$rtcFlags) if scalar(@$rtcFlags);
push @$cmd, '-machine', join(',', @$machineFlags) if scalar(@$machineFlags);
@@ -4198,6 +4274,85 @@ sub config_to_command {
return wantarray ? ($cmd, $vollist, $spice_port, $pci_devices) : $cmd;
}
+sub start_virtiofs {
+ my ($vmid, $fsid, $virtiofs) = @_;
+
+ my $dir_cfg = PVE::Mapping::Dir::config()->{ids}->{$virtiofs->{dirid}};
+ my $node_list = PVE::Mapping::Dir::find_on_current_node($virtiofs->{dirid});
+
+ if (!$node_list || scalar($node_list->@*) != 1) {
+ die "virtiofs needs exactly one mapping for this node\n";
+ }
+
+ eval {
+ PVE::Mapping::Dir::assert_valid($node_list->[0]);
+ };
+ if (my $err = $@) {
+ die "Directory Mapping invalid: $err\n";
+ }
+
+ my $node_cfg = $node_list->[0];
+ my $path = $node_cfg->{path};
+ my $socket_path_root = "/var/run/virtiofsd";
+ mkdir $socket_path_root;
+ my $socket_path = "$socket_path_root/vm$vmid-fs$fsid";
+ unlink($socket_path);
+ my $socket = IO::Socket::UNIX->new(
+ Type => SOCK_STREAM,
+ Local => $socket_path,
+ Listen => 1,
+ ) or die "cannot create socket - $!\n";
+
+ my $flags = fcntl($socket, F_GETFD, 0)
+ or die "failed to get file descriptor flags: $!\n";
+ fcntl($socket, F_SETFD, $flags & ~FD_CLOEXEC)
+ or die "failed to remove FD_CLOEXEC from file descriptor\n";
+
+ my $fd = $socket->fileno();
+
+ my $virtiofsd_bin = '/usr/libexec/virtiofsd';
+
+ my $pid = fork();
+ if ($pid == 0) {
+ setsid();
+ $0 = "task pve-vm$vmid-virtiofs$fsid";
+ for my $fd_loop (3 .. POSIX::sysconf( &POSIX::_SC_OPEN_MAX )) {
+ POSIX::close($fd_loop) if ($fd_loop != $fd);
+ }
+
+ my $pid2 = fork();
+ if ($pid2 == 0) {
+ my $cmd = [$virtiofsd_bin, "--fd=$fd", "--shared-dir=$path"];
+ push @$cmd, '--xattr' if ($virtiofs->{xattr});
+ push @$cmd, '--posix-acl' if ($virtiofs->{acl});
+
+ # Default to dir config xattr & acl settings
+ push @$cmd, '--xattr'
+ if !defined $virtiofs->{'xattr'} && $dir_cfg->{'xattr'};
+ push @$cmd, '--posix-acl'
+ if !defined $virtiofs->{'acl'} && $dir_cfg->{'acl'};
+
+ push @$cmd, '--announce-submounts' if ($node_cfg->{submounts});
+ push @$cmd, '--allow-direct-io' if ($virtiofs->{'direct-io'});
+ push @$cmd, "--cache=$virtiofs->{'cache'}" if ($virtiofs->{'cache'});
+
+ exec(@$cmd);
+ } elsif (!defined($pid2)) {
+ die "could not fork to start virtiofsd\n";
+ } else {
+ POSIX::_exit(0);
+ }
+ } elsif (!defined($pid)) {
+ die "could not fork to start virtiofsd\n";
+ } else {
+ waitpid($pid, 0);
+ }
+
+ # return socket to keep it alive,
+ # so that qemu will wait for virtiofsd to start
+ return $socket;
+}
+
sub check_rng_source {
my ($source) = @_;
@@ -5655,7 +5810,6 @@ sub vm_start {
});
}
-
# params:
# statefile => 'tcp', 'unix' for migration or path/volid for RAM state
# skiplock => 0/1, skip checking for config lock
@@ -5918,10 +6072,23 @@ sub vm_start_nolock {
}
$systemd_properties{timeout} = 10 if $statefile; # setting up the scope shoul be quick
+
my $run_qemu = sub {
PVE::Tools::run_fork sub {
PVE::Systemd::enter_systemd_scope($vmid, "Proxmox VE VM $vmid", %systemd_properties);
+ my @virtiofs_sockets;
+ for (my $i = 0; $i < $MAX_VIRTIOFS; $i++) {
+ my $opt = "virtiofs$i";
+
+ next if !$conf->{$opt};
+ my $virtiofs = parse_property_string('pve-qm-virtiofs', $conf->{$opt});
+ next if !$virtiofs;
+
+ my $virtiofs_socket = start_virtiofs($vmid, $i, $virtiofs);
+ push @virtiofs_sockets, $virtiofs_socket;
+ }
+
my $tpmpid;
if (my $tpm = $conf->{tpmstate0}) {
# start the TPM emulator so QEMU can connect on start
@@ -5936,6 +6103,11 @@ sub vm_start_nolock {
}
die "QEMU exited with code $exitcode\n";
}
+
+ foreach my $virtiofs_socket (@virtiofs_sockets) {
+ shutdown($virtiofs_socket, 2);
+ close($virtiofs_socket);
+ }
};
};
diff --git a/PVE/QemuServer/Memory.pm b/PVE/QemuServer/Memory.pm
index 0601dd6..648bc08 100644
--- a/PVE/QemuServer/Memory.pm
+++ b/PVE/QemuServer/Memory.pm
@@ -278,6 +278,16 @@ sub config {
die "numa needs to be enabled to use hugepages" if $conf->{hugepages} && !$conf->{numa};
+ my $virtiofs_enabled = 0;
+ for (my $i = 0; $i < PVE::QemuServer::max_virtiofs(); $i++) {
+ my $opt = "virtiofs$i";
+ next if !$conf->{$opt};
+ my $virtiofs = PVE::JSONSchema::parse_property_string('pve-qm-virtiofs', $conf->{$opt});
+ if ($virtiofs) {
+ $virtiofs_enabled = 1;
+ }
+ }
+
if ($conf->{numa}) {
my $numa_totalmemory = undef;
@@ -290,7 +300,8 @@ sub config {
my $numa_memory = $numa->{memory};
$numa_totalmemory += $numa_memory;
- my $mem_object = print_mem_object($conf, "ram-node$i", $numa_memory);
+ my $memdev = $virtiofs_enabled ? "virtiofs-mem$i" : "ram-node$i";
+ my $mem_object = print_mem_object($conf, $memdev, $numa_memory);
# cpus
my $cpulists = $numa->{cpus};
@@ -315,7 +326,7 @@ sub config {
}
push @$cmd, '-object', $mem_object;
- push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=ram-node$i";
+ push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=$memdev";
}
die "total memory for NUMA nodes must be equal to vm static memory\n"
@@ -329,13 +340,13 @@ sub config {
die "host NUMA node$i doesn't exist\n"
if !host_numanode_exists($i) && $conf->{hugepages};
- my $mem_object = print_mem_object($conf, "ram-node$i", $numa_memory);
- push @$cmd, '-object', $mem_object;
-
my $cpus = ($cores * $i);
$cpus .= "-" . ($cpus + $cores - 1) if $cores > 1;
- push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=ram-node$i";
+ my $memdev = $virtiofs_enabled ? "virtiofs-mem$i" : "ram-node$i";
+ my $mem_object = print_mem_object($conf, $memdev, $numa_memory);
+ push @$cmd, '-object', $mem_object;
+ push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=$memdev";
}
}
}
@@ -364,6 +375,8 @@ sub print_mem_object {
my $path = hugepages_mount_path($hugepages_size);
return "memory-backend-file,id=$id,size=${size}M,mem-path=$path,share=on,prealloc=yes";
+ } elsif ($id =~ m/^virtiofs-mem/) {
+ return "memory-backend-file,id=$id,size=${size}M,mem-path=/dev/shm,share=on";
} else {
return "memory-backend-ram,id=$id,size=${size}M";
}
diff --git a/debian/control b/debian/control
index 49f67b2..f008a9b 100644
--- a/debian/control
+++ b/debian/control
@@ -53,6 +53,7 @@ Depends: dbus,
socat,
swtpm,
swtpm-tools,
+ virtiofsd,
${misc:Depends},
${perl:Depends},
${shlibs:Depends},
--
2.39.2
* [pve-devel] [PATCH qemu-server v7 5/11] Permission check for virtiofs directory access
2023-08-09 8:37 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v6 0/11] virtiofs Markus Frank
` (3 preceding siblings ...)
2023-08-09 8:37 ` [pve-devel] [PATCH qemu-server v7 4/11] feature #1027: virtio-fs support Markus Frank
@ 2023-08-09 8:37 ` Markus Frank
2023-10-05 8:56 ` Fabian Grünbichler
2023-08-09 8:37 ` [pve-devel] [PATCH qemu-server v7 6/11] check_local_resources: virtiofs Markus Frank
` (6 subsequent siblings)
11 siblings, 1 reply; 16+ messages in thread
From: Markus Frank @ 2023-08-09 8:37 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
PVE/API2/Qemu.pm | 18 ++++++++++++++++++
1 file changed, 18 insertions(+)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 9606e72..65830f9 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -586,6 +586,19 @@ my $check_vm_create_serial_perm = sub {
return 1;
};
+my sub check_vm_dir_perm {
+ my ($rpcenv, $authuser, $param) = @_;
+
+ return 1 if $authuser eq 'root@pam';
+
+ foreach my $opt (keys %{$param}) {
+ next if $opt !~ m/^virtiofs\d+$/;
+ my $virtiofs = PVE::JSONSchema::parse_property_string('pve-qm-virtiofs', $param->{$opt});
+ $rpcenv->check_full($authuser, "/mapping/dir/$virtiofs->{dirid}", ['Mapping.Use']);
+ }
+ return 1;
+};
+
my sub check_usb_perm {
my ($rpcenv, $authuser, $vmid, $pool, $opt, $value) = @_;
@@ -687,6 +700,8 @@ my $check_vm_modify_config_perm = sub {
# the user needs Disk and PowerMgmt privileges to change the vmstate
# also needs privileges on the storage, that will be checked later
$rpcenv->check_vm_perm($authuser, $vmid, $pool, ['VM.Config.Disk', 'VM.PowerMgmt' ]);
+ } elsif ($opt =~ m/^virtiofs\d$/) {
+ $rpcenv->check_vm_perm($authuser, $vmid, $pool, ['VM.Config.Disk']);
} else {
# catches args, lock, etc.
# new options will be checked here
@@ -925,6 +940,7 @@ __PACKAGE__->register_method({
&$check_vm_modify_config_perm($rpcenv, $authuser, $vmid, $pool, [ keys %$param]);
+ check_vm_dir_perm($rpcenv, $authuser, $param);
&$check_vm_create_serial_perm($rpcenv, $authuser, $vmid, $pool, $param);
check_vm_create_usb_perm($rpcenv, $authuser, $vmid, $pool, $param);
check_vm_create_hostpci_perm($rpcenv, $authuser, $vmid, $pool, $param);
@@ -1660,6 +1676,8 @@ my $update_vm_api = sub {
&$check_vm_modify_config_perm($rpcenv, $authuser, $vmid, undef, [keys %$param]);
+ check_vm_dir_perm($rpcenv, $authuser, $param);
+
&$check_storage_access($rpcenv, $authuser, $storecfg, $vmid, $param);
PVE::QemuServer::check_bridge_access($rpcenv, $authuser, $param);
--
2.39.2
* [pve-devel] [PATCH qemu-server v7 6/11] check_local_resources: virtiofs
2023-08-09 8:37 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v6 0/11] virtiofs Markus Frank
` (4 preceding siblings ...)
2023-08-09 8:37 ` [pve-devel] [PATCH qemu-server v7 5/11] Permission check for virtiofs directory access Markus Frank
@ 2023-08-09 8:37 ` Markus Frank
2023-08-09 8:37 ` [pve-devel] [PATCH manager v7 07/11] api: add resource map api endpoints for directories Markus Frank
` (5 subsequent siblings)
11 siblings, 0 replies; 16+ messages in thread
From: Markus Frank @ 2023-08-09 8:37 UTC (permalink / raw)
To: pve-devel
Add dir mapping checks to check_local_resources.
Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
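The idea of the migration check can be sketched as: a virtiofs entry is only migratable to nodes that have a mapping for its dirid (minimal Python sketch, not the actual Perl):

```python
def missing_mappings(nodes, mappings_by_node, dirid):
    # Sketch: return the nodes that lack a mapping for this dirid;
    # check_local_resources collects such keys per node.
    return [n for n in nodes if dirid not in mappings_by_node.get(n, ())]
```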
PVE/QemuServer.pm | 10 +++++++++-
test/MigrationTest/Shared.pm | 7 +++++++
2 files changed, 16 insertions(+), 1 deletion(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index d547dd6..e74d867 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2723,6 +2723,7 @@ sub check_local_resources {
my $nodelist = PVE::Cluster::get_nodelist();
my $pci_map = PVE::Mapping::PCI::config();
my $usb_map = PVE::Mapping::USB::config();
+ my $dir_map = PVE::Mapping::Dir::config();
my $missing_mappings_by_node = { map { $_ => [] } @$nodelist };
@@ -2734,6 +2735,8 @@ sub check_local_resources {
$entry = PVE::Mapping::PCI::get_node_mapping($pci_map, $id, $node);
} elsif ($type eq 'usb') {
$entry = PVE::Mapping::USB::get_node_mapping($usb_map, $id, $node);
+ } elsif ($type eq 'dir') {
+ $entry = PVE::Mapping::Dir::get_node_mapping($dir_map, $id, $node);
}
if (!scalar($entry->@*)) {
push @{$missing_mappings_by_node->{$node}}, $key;
@@ -2762,9 +2765,14 @@ sub check_local_resources {
push @$mapped_res, $k;
}
}
+ if ($k =~ m/^virtiofs/) {
+ my $entry = parse_property_string('pve-qm-virtiofs', $conf->{$k});
+ $add_missing_mapping->('dir', $k, $entry->{dirid});
+ push @$mapped_res, $k;
+ }
# sockets are safe: they will recreated be on the target side post-migrate
next if $k =~ m/^serial/ && ($conf->{$k} eq 'socket');
- push @loc_res, $k if $k =~ m/^(usb|hostpci|serial|parallel)\d+$/;
+ push @loc_res, $k if $k =~ m/^(usb|hostpci|serial|parallel|virtiofs)\d+$/;
}
die "VM uses local resources\n" if scalar @loc_res && !$noerr;
diff --git a/test/MigrationTest/Shared.pm b/test/MigrationTest/Shared.pm
index aa7203d..c5d0722 100644
--- a/test/MigrationTest/Shared.pm
+++ b/test/MigrationTest/Shared.pm
@@ -90,6 +90,13 @@ $mapping_pci_module->mock(
},
);
+our $mapping_dir_module = Test::MockModule->new("PVE::Mapping::Dir");
+$mapping_dir_module->mock(
+ config => sub {
+ return {};
+ },
+);
+
our $ha_config_module = Test::MockModule->new("PVE::HA::Config");
$ha_config_module->mock(
vm_is_ha_managed => sub {
--
2.39.2
* [pve-devel] [PATCH manager v7 07/11] api: add resource map api endpoints for directories
2023-08-09 8:37 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v6 0/11] virtiofs Markus Frank
` (5 preceding siblings ...)
2023-08-09 8:37 ` [pve-devel] [PATCH qemu-server v7 6/11] check_local_resources: virtiofs Markus Frank
@ 2023-08-09 8:37 ` Markus Frank
2023-08-09 8:37 ` [pve-devel] [PATCH manager v7 08/11] ui: add edit window for dir mappings Markus Frank
` (4 subsequent siblings)
11 siblings, 0 replies; 16+ messages in thread
From: Markus Frank @ 2023-08-09 8:37 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
PVE/API2/Cluster/Mapping.pm | 7 +
PVE/API2/Cluster/Mapping/Dir.pm | 309 ++++++++++++++++++++++++++++++
PVE/API2/Cluster/Mapping/Makefile | 3 +-
3 files changed, 318 insertions(+), 1 deletion(-)
create mode 100644 PVE/API2/Cluster/Mapping/Dir.pm
diff --git a/PVE/API2/Cluster/Mapping.pm b/PVE/API2/Cluster/Mapping.pm
index 40386579..c5993208 100644
--- a/PVE/API2/Cluster/Mapping.pm
+++ b/PVE/API2/Cluster/Mapping.pm
@@ -5,6 +5,7 @@ use warnings;
use PVE::API2::Cluster::Mapping::PCI;
use PVE::API2::Cluster::Mapping::USB;
+use PVE::API2::Cluster::Mapping::Dir;
use base qw(PVE::RESTHandler);
@@ -18,6 +19,11 @@ __PACKAGE__->register_method ({
path => 'usb',
});
+__PACKAGE__->register_method ({
+ subclass => "PVE::API2::Cluster::Mapping::Dir",
+ path => 'dir',
+});
+
__PACKAGE__->register_method ({
name => 'index',
path => '',
@@ -43,6 +49,7 @@ __PACKAGE__->register_method ({
my $result = [
{ name => 'pci' },
{ name => 'usb' },
+ { name => 'dir' },
];
return $result;
diff --git a/PVE/API2/Cluster/Mapping/Dir.pm b/PVE/API2/Cluster/Mapping/Dir.pm
new file mode 100644
index 00000000..f6e8f26f
--- /dev/null
+++ b/PVE/API2/Cluster/Mapping/Dir.pm
@@ -0,0 +1,309 @@
+package PVE::API2::Cluster::Mapping::Dir;
+
+use strict;
+use warnings;
+
+use Storable qw(dclone);
+
+use PVE::Mapping::Dir ();
+use PVE::JSONSchema qw(get_standard_option);
+use PVE::Tools qw(extract_param);
+
+use base qw(PVE::RESTHandler);
+
+__PACKAGE__->register_method ({
+ name => 'index',
+ path => '',
+ method => 'GET',
+ # only proxy if we give the 'check-node' parameter
+ proxyto_callback => sub {
+ my ($rpcenv, $proxyto, $param) = @_;
+ return $param->{'check-node'} // 'localhost';
+ },
+ description => "List Dir Hardware Mapping",
+ permissions => {
+ description => "Only lists entries where you have 'Mapping.Modify', 'Mapping.Use' or".
+ " 'Mapping.Audit' permissions on '/mapping/dir/<id>'.",
+ user => 'all',
+ },
+ parameters => {
+ additionalProperties => 0,
+ properties => {
+ 'check-node' => get_standard_option('pve-node', {
+ description => "If given, checks the configurations on the given node for ".
+ "correctness, and adds relevant diagnostics for the devices to the response.",
+ optional => 1,
+ }),
+ },
+ },
+ returns => {
+ type => 'array',
+ items => {
+ type => "object",
+ properties => {
+ id => {
+ type => 'string',
+ description => "The logical ID of the mapping."
+ },
+ map => {
+ type => 'array',
+ description => "The entries of the mapping.",
+ items => {
+ type => 'string',
+ description => "A mapping for a node.",
+ },
+ },
+ description => {
+ type => 'string',
+ description => "A description of the logical mapping.",
+ },
+ xattr => {
+ type => 'boolean',
+ description => "Enable support for extended attributes.",
+ optional => 1,
+ },
+ acl => {
+ type => 'boolean',
+ description => "Enable support for posix ACLs (implies --xattr).",
+ optional => 1,
+ },
+ checks => {
+ type => "array",
+ optional => 1,
+ description => "A list of checks, only present if 'check_node' is set.",
+ items => {
+ type => 'object',
+ properties => {
+ severity => {
+ type => "string",
+ enum => ['warning', 'error'],
+ description => "The severity of the error",
+ },
+ message => {
+ type => "string",
+ description => "The message of the error",
+ },
+ },
+ }
+ },
+ },
+ },
+ links => [ { rel => 'child', href => "{id}" } ],
+ },
+ code => sub {
+ my ($param) = @_;
+
+ my $rpcenv = PVE::RPCEnvironment::get();
+ my $authuser = $rpcenv->get_user();
+
+ my $check_node = $param->{'check-node'};
+ my $local_node = PVE::INotify::nodename();
+
+ die "wrong node to check - $check_node != $local_node\n"
+ if defined($check_node) && $check_node ne 'localhost' && $check_node ne $local_node;
+
+ my $cfg = PVE::Mapping::Dir::config();
+
+ my $can_see_mapping_privs = ['Mapping.Modify', 'Mapping.Use', 'Mapping.Audit'];
+
+ my $res = [];
+ for my $id (keys $cfg->{ids}->%*) {
+ next if !$rpcenv->check_any($authuser, "/mapping/dir/$id", $can_see_mapping_privs, 1);
+ next if !$cfg->{ids}->{$id};
+
+ my $entry = dclone($cfg->{ids}->{$id});
+ $entry->{id} = $id;
+ $entry->{digest} = $cfg->{digest};
+
+ if (defined($check_node)) {
+ $entry->{checks} = [];
+ if (my $mappings = PVE::Mapping::Dir::get_node_mapping($cfg, $id, $check_node)) {
+ if (!scalar($mappings->@*)) {
+ push $entry->{checks}->@*, {
+ severity => 'warning',
+ message => "No mapping for node $check_node.",
+ };
+ }
+ for my $mapping ($mappings->@*) {
+ eval { PVE::Mapping::Dir::assert_valid($id, $mapping) };
+ if (my $err = $@) {
+ push $entry->{checks}->@*, {
+ severity => 'error',
+ message => "Invalid configuration: $err",
+ };
+ }
+ }
+ }
+ }
+
+ push @$res, $entry;
+ }
+
+ return $res;
+ },
+});
+
+__PACKAGE__->register_method ({
+ name => 'get',
+ protected => 1,
+ path => '{id}',
+ method => 'GET',
+ description => "Get Dir Mapping.",
+ permissions => {
+ check =>['or',
+ ['perm', '/mapping/dir/{id}', ['Mapping.Use']],
+ ['perm', '/mapping/dir/{id}', ['Mapping.Modify']],
+ ['perm', '/mapping/dir/{id}', ['Mapping.Audit']],
+ ],
+ },
+ parameters => {
+ additionalProperties => 0,
+ properties => {
+ id => {
+ type => 'string',
+ format => 'pve-configid',
+ },
+ }
+ },
+ returns => { type => 'object' },
+ code => sub {
+ my ($param) = @_;
+
+ my $cfg = PVE::Mapping::Dir::config();
+ my $id = $param->{id};
+
+ my $entry = $cfg->{ids}->{$id};
+ die "mapping '$param->{id}' not found\n" if !defined($entry);
+
+ my $data = dclone($entry);
+
+ $data->{digest} = $cfg->{digest};
+
+ return $data;
+ }});
+
+__PACKAGE__->register_method ({
+ name => 'create',
+ protected => 1,
+ path => '',
+ method => 'POST',
+ description => "Create a new hardware mapping.",
+ permissions => {
+ check => ['perm', '/mapping/dir', ['Mapping.Modify']],
+ },
+ parameters => PVE::Mapping::Dir->createSchema(1),
+ returns => {
+ type => 'null',
+ },
+ code => sub {
+ my ($param) = @_;
+
+ my $id = extract_param($param, 'id');
+
+ my $plugin = PVE::Mapping::Dir->lookup('dir');
+ my $opts = $plugin->check_config($id, $param, 1, 1);
+
+ PVE::Mapping::Dir::lock_dir_config(sub {
+ my $cfg = PVE::Mapping::Dir::config();
+
+ die "dir ID '$id' already defined\n" if defined($cfg->{ids}->{$id});
+
+ $cfg->{ids}->{$id} = $opts;
+
+ PVE::Mapping::Dir::write_dir_config($cfg);
+
+ }, "create hardware mapping failed");
+
+ return;
+ },
+});
+
+__PACKAGE__->register_method ({
+ name => 'update',
+ protected => 1,
+ path => '{id}',
+ method => 'PUT',
+ description => "Update a hardware mapping.",
+ permissions => {
+ check => ['perm', '/mapping/dir/{id}', ['Mapping.Modify']],
+ },
+ parameters => PVE::Mapping::Dir->updateSchema(),
+ returns => {
+ type => 'null',
+ },
+ code => sub {
+ my ($param) = @_;
+
+ my $digest = extract_param($param, 'digest');
+ my $delete = extract_param($param, 'delete');
+ my $id = extract_param($param, 'id');
+
+ if ($delete) {
+ $delete = [ PVE::Tools::split_list($delete) ];
+ }
+
+ PVE::Mapping::Dir::lock_dir_config(sub {
+ my $cfg = PVE::Mapping::Dir::config();
+
+ PVE::Tools::assert_if_modified($cfg->{digest}, $digest) if defined($digest);
+
+ die "dir ID '$id' does not exist\n" if !defined($cfg->{ids}->{$id});
+
+ my $plugin = PVE::Mapping::Dir->lookup('dir');
+ my $opts = $plugin->check_config($id, $param, 1, 1);
+
+ my $data = $cfg->{ids}->{$id};
+
+ my $options = $plugin->private()->{options}->{dir};
+ PVE::SectionConfig::delete_from_config($data, $options, $opts, $delete);
+
+ $data->{$_} = $opts->{$_} for keys $opts->%*;
+
+ PVE::Mapping::Dir::write_dir_config($cfg);
+
+ }, "update hardware mapping failed");
+
+ return;
+ },
+});
+
+__PACKAGE__->register_method ({
+ name => 'delete',
+ protected => 1,
+ path => '{id}',
+ method => 'DELETE',
+ description => "Remove Hardware Mapping.",
+ permissions => {
+ check => [ 'perm', '/mapping/dir', ['Mapping.Modify']],
+ },
+ parameters => {
+ additionalProperties => 0,
+ properties => {
+ id => {
+ type => 'string',
+ format => 'pve-configid',
+ },
+ }
+ },
+ returns => { type => 'null' },
+ code => sub {
+ my ($param) = @_;
+
+ my $id = $param->{id};
+
+ PVE::Mapping::Dir::lock_dir_config(sub {
+ my $cfg = PVE::Mapping::Dir::config();
+
+ if ($cfg->{ids}->{$id}) {
+ delete $cfg->{ids}->{$id};
+ }
+
+ PVE::Mapping::Dir::write_dir_config($cfg);
+
+ }, "delete dir mapping failed");
+
+ return;
+ }
+});
+
+1;
diff --git a/PVE/API2/Cluster/Mapping/Makefile b/PVE/API2/Cluster/Mapping/Makefile
index e7345ab4..a80c8ab3 100644
--- a/PVE/API2/Cluster/Mapping/Makefile
+++ b/PVE/API2/Cluster/Mapping/Makefile
@@ -4,7 +4,8 @@ include ../../../../defines.mk
# ensure we do not conflict with files shipped by pve-cluster!!
PERLSOURCE= \
PCI.pm \
- USB.pm
+ USB.pm \
+ Dir.pm
all:
--
2.39.2
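To make the `returns` schema of the index call above more concrete, here is a rough Python sketch that checks a hypothetical response entry against the documented shape (the entry values are made up for illustration; `severity` is restricted by the schema's enum):

```python
def validate_dir_mapping_entry(entry):
    """Minimal structural check mirroring the index 'returns' schema."""
    assert isinstance(entry['id'], str)        # logical mapping ID
    assert isinstance(entry['map'], list)      # per-node mapping entries
    # 'checks' is only present when 'check-node' was given
    for check in entry.get('checks', []):
        assert check['severity'] in ('warning', 'error')
        assert isinstance(check['message'], str)
    return True

# Hypothetical entry as it might look with check-node set:
example = {
    'id': 'share1',
    'map': ['path=/mnt/share,node=node1,submounts=1'],
    'checks': [{'severity': 'warning', 'message': 'No mapping for node node2.'}],
}
print(validate_dir_mapping_entry(example))  # → True
```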
* [pve-devel] [PATCH manager v7 08/11] ui: add edit window for dir mappings
2023-08-09 8:37 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v6 0/11] virtiofs Markus Frank
` (6 preceding siblings ...)
2023-08-09 8:37 ` [pve-devel] [PATCH manager v7 07/11] api: add resource map api endpoints for directories Markus Frank
@ 2023-08-09 8:37 ` Markus Frank
2023-08-09 8:37 ` [pve-devel] [PATCH manager v7 09/11] ui: ResourceMapTree for DIR Markus Frank
` (3 subsequent siblings)
11 siblings, 0 replies; 16+ messages in thread
From: Markus Frank @ 2023-08-09 8:37 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
www/manager6/Makefile | 1 +
www/manager6/window/DirMapEdit.js | 222 ++++++++++++++++++++++++++++++
2 files changed, 223 insertions(+)
create mode 100644 www/manager6/window/DirMapEdit.js
diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index 7ec9d7a5..ba559751 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -129,6 +129,7 @@ JSSRC= \
window/TreeSettingsEdit.js \
window/PCIMapEdit.js \
window/USBMapEdit.js \
+ window/DirMapEdit.js \
ha/Fencing.js \
ha/GroupEdit.js \
ha/GroupSelector.js \
diff --git a/www/manager6/window/DirMapEdit.js b/www/manager6/window/DirMapEdit.js
new file mode 100644
index 00000000..20af8ce6
--- /dev/null
+++ b/www/manager6/window/DirMapEdit.js
@@ -0,0 +1,222 @@
+Ext.define('PVE.window.DIRMapEditWindow', {
+ extend: 'Proxmox.window.Edit',
+
+ mixins: ['Proxmox.Mixin.CBind'],
+
+ cbindData: function(initialConfig) {
+ let me = this;
+ me.isCreate = !me.name;
+ me.method = me.isCreate ? 'POST' : 'PUT';
+ me.hideMapping = !!me.entryOnly;
+ me.hideComment = me.name && !me.entryOnly;
+ me.hideNodeSelector = me.nodename || me.entryOnly;
+ me.hideNode = !me.nodename || !me.hideNodeSelector;
+ return {
+ name: me.name,
+ nodename: me.nodename,
+ };
+ },
+
+ submitUrl: function(_url, data) {
+ let me = this;
+ let name = me.isCreate ? '' : me.name;
+ return `/cluster/mapping/dir/${name}`;
+ },
+
+ title: gettext('Add Dir mapping'),
+
+ onlineHelp: 'resource_mapping',
+
+ method: 'POST',
+
+ controller: {
+ xclass: 'Ext.app.ViewController',
+
+ onGetValues: function(values) {
+ let me = this;
+ let view = me.getView();
+ values.node ??= view.nodename;
+
+ let name = values.name;
+ let description = values.description;
+ let xattr = values.xattr;
+ let acl = values.acl;
+ let deletes = values.delete;
+
+ delete values.description;
+ delete values.name;
+ delete values.xattr;
+ delete values.acl;
+
+ let map = [];
+ if (me.originalMap) {
+ map = PVE.Parser.filterPropertyStringList(me.originalMap, (e) => e.node !== values.node);
+ }
+ if (values.path) {
+ map.push(PVE.Parser.printPropertyString(values));
+ }
+
+ values = { map };
+ if (description) {
+ values.description = description;
+ }
+ if (xattr) {
+ values.xattr = xattr;
+ }
+ if (acl) {
+ values.acl = acl;
+ }
+ if (deletes) {
+ values.delete = deletes;
+ }
+
+ if (view.isCreate) {
+ values.id = name;
+ }
+ return values;
+ },
+
+ onSetValues: function(values) {
+ let me = this;
+ let view = me.getView();
+ me.originalMap = [...values.map];
+ let configuredNodes = [];
+ PVE.Parser.filterPropertyStringList(values.map, (e) => {
+ configuredNodes.push(e.node);
+ if (e.node === view.nodename) {
+ values = e;
+ }
+ return false;
+ });
+
+ me.lookup('nodeselector').disallowedNodes = configuredNodes;
+
+ return values;
+ },
+
+ init: function(view) {
+ let me = this;
+
+ if (!view.nodename) {
+ //throw "no nodename given";
+ }
+ },
+ },
+
+ items: [
+ {
+ xtype: 'inputpanel',
+ onGetValues: function(values) {
+ return this.up('window').getController().onGetValues(values);
+ },
+
+ onSetValues: function(values) {
+ return this.up('window').getController().onSetValues(values);
+ },
+
+ columnT: [
+ {
+ xtype: 'displayfield',
+ reference: 'directory-hint',
+ columnWidth: 1,
+ value: 'Make sure the directory exists.',
+ cbind: {
+ disabled: '{hideMapping}',
+ hidden: '{hideMapping}',
+ },
+ userCls: 'pmx-hint',
+ },
+ ],
+
+ column1: [
+ {
+ xtype: 'pmxDisplayEditField',
+ fieldLabel: gettext('Name'),
+ cbind: {
+ editable: '{!name}',
+ value: '{name}',
+ submitValue: '{isCreate}',
+ },
+ name: 'name',
+ allowBlank: false,
+ },
+ {
+ xtype: 'pveNodeSelector',
+ reference: 'nodeselector',
+ fieldLabel: gettext('Node'),
+ name: 'node',
+ cbind: {
+ disabled: '{hideNodeSelector}',
+ hidden: '{hideNodeSelector}',
+ },
+ allowBlank: false,
+ },
+ ],
+
+ column2: [
+ {
+ xtype: 'fieldcontainer',
+ defaultType: 'radiofield',
+ layout: 'fit',
+ cbind: {
+ disabled: '{hideMapping}',
+ hidden: '{hideMapping}',
+ },
+ items: [
+ {
+ xtype: 'textfield',
+ name: 'path',
+ reference: 'path',
+ value: '',
+ emptyText: gettext('/some/path'),
+ cbind: {
+ nodename: '{nodename}',
+ disabled: '{hideMapping}',
+ },
+ allowBlank: false,
+ fieldLabel: gettext('Path'),
+ },
+ {
+ xtype: 'proxmoxcheckbox',
+ name: 'submounts',
+ fieldLabel: gettext('submounts'),
+ },
+ ],
+ },
+ ],
+
+ columnB: [
+ {
+ xtype: 'fieldcontainer',
+ defaultType: 'radiofield',
+ layout: 'fit',
+ cbind: {
+ disabled: '{hideComment}',
+ hidden: '{hideComment}',
+ },
+ items: [
+ {
+ xtype: 'proxmoxtextfield',
+ fieldLabel: gettext('Comment'),
+ submitValue: true,
+ name: 'description',
+ deleteEmpty: true,
+ },
+ {
+ xtype: 'proxmoxcheckbox',
+ fieldLabel: gettext('Use dir with xattr by default'),
+ name: 'xattr',
+ deleteEmpty: true,
+ },
+ {
+ xtype: 'proxmoxcheckbox',
+ fieldLabel: gettext('Use dir with acl by default'),
+ name: 'acl',
+ deleteEmpty: true,
+ },
+ ],
+ },
+ ],
+ },
+ ],
+});
--
2.39.2
* [pve-devel] [PATCH manager v7 09/11] ui: ResourceMapTree for DIR
2023-08-09 8:37 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v6 0/11] virtiofs Markus Frank
` (7 preceding siblings ...)
2023-08-09 8:37 ` [pve-devel] [PATCH manager v7 08/11] ui: add edit window for dir mappings Markus Frank
@ 2023-08-09 8:37 ` Markus Frank
2023-08-09 8:37 ` [pve-devel] [PATCH manager v7 10/11] ui: form: add DIRMapSelector Markus Frank
` (2 subsequent siblings)
11 siblings, 0 replies; 16+ messages in thread
From: Markus Frank @ 2023-08-09 8:37 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
www/manager6/Makefile | 1 +
www/manager6/dc/Config.js | 10 +++++++
www/manager6/dc/DIRMapView.js | 50 +++++++++++++++++++++++++++++++++++
3 files changed, 61 insertions(+)
create mode 100644 www/manager6/dc/DIRMapView.js
diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index ba559751..4c1fcdbb 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -180,6 +180,7 @@ JSSRC= \
dc/RealmSyncJob.js \
dc/PCIMapView.js \
dc/USBMapView.js \
+ dc/DIRMapView.js \
lxc/CmdMenu.js \
lxc/Config.js \
lxc/CreateWizard.js \
diff --git a/www/manager6/dc/Config.js b/www/manager6/dc/Config.js
index 04ed04f0..f7ba1799 100644
--- a/www/manager6/dc/Config.js
+++ b/www/manager6/dc/Config.js
@@ -312,6 +312,16 @@ Ext.define('PVE.dc.Config', {
title: gettext('USB Devices'),
flex: 1,
},
+ {
+ xtype: 'splitter',
+ collapsible: false,
+ performCollapse: false,
+ },
+ {
+ xtype: 'pveDcDIRMapView',
+ title: gettext('Directories'),
+ flex: 1,
+ },
],
},
);
diff --git a/www/manager6/dc/DIRMapView.js b/www/manager6/dc/DIRMapView.js
new file mode 100644
index 00000000..a4304c1b
--- /dev/null
+++ b/www/manager6/dc/DIRMapView.js
@@ -0,0 +1,50 @@
+Ext.define('pve-resource-dir-tree', {
+ extend: 'Ext.data.Model',
+ idProperty: 'internalId',
+ fields: ['type', 'text', 'path', 'id', 'description', 'digest'],
+});
+
+Ext.define('PVE.dc.DIRMapView', {
+ extend: 'PVE.tree.ResourceMapTree',
+ alias: 'widget.pveDcDIRMapView',
+
+ editWindowClass: 'PVE.window.DIRMapEditWindow',
+ baseUrl: '/cluster/mapping/dir',
+ mapIconCls: 'fa fa-folder',
+ entryIdProperty: 'path',
+
+ store: {
+ sorters: 'text',
+ model: 'pve-resource-dir-tree',
+ data: {},
+ },
+
+ columns: [
+ {
+ xtype: 'treecolumn',
+ text: gettext('ID/Node'),
+ dataIndex: 'text',
+ width: 200,
+ },
+ {
+ text: gettext('xattr'),
+ dataIndex: 'xattr',
+ },
+ {
+ text: gettext('acl'),
+ dataIndex: 'acl',
+ },
+ {
+ text: gettext('submounts'),
+ dataIndex: 'submounts',
+ },
+ {
+ header: gettext('Comment'),
+ dataIndex: 'description',
+ renderer: function(value, _meta, record) {
+ return value ?? record.data.comment;
+ },
+ flex: 1,
+ },
+ ],
+});
--
2.39.2
* [pve-devel] [PATCH manager v7 10/11] ui: form: add DIRMapSelector
2023-08-09 8:37 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v6 0/11] virtiofs Markus Frank
` (8 preceding siblings ...)
2023-08-09 8:37 ` [pve-devel] [PATCH manager v7 09/11] ui: ResourceMapTree for DIR Markus Frank
@ 2023-08-09 8:37 ` Markus Frank
2023-08-09 8:37 ` [pve-devel] [PATCH manager v7 11/11] ui: add options to add virtio-fs to qemu config Markus Frank
2023-10-05 8:57 ` [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v6 0/11] virtiofs Fabian Grünbichler
11 siblings, 0 replies; 16+ messages in thread
From: Markus Frank @ 2023-08-09 8:37 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
www/manager6/Makefile | 1 +
www/manager6/form/DIRMapSelector.js | 63 +++++++++++++++++++++++++++++
2 files changed, 64 insertions(+)
create mode 100644 www/manager6/form/DIRMapSelector.js
diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index 4c1fcdbb..69627ae0 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -34,6 +34,7 @@ JSSRC= \
form/ContentTypeSelector.js \
form/ControllerSelector.js \
form/DayOfWeekSelector.js \
+ form/DIRMapSelector.js \
form/DiskFormatSelector.js \
form/DiskStorageSelector.js \
form/EmailNotificationSelector.js \
diff --git a/www/manager6/form/DIRMapSelector.js b/www/manager6/form/DIRMapSelector.js
new file mode 100644
index 00000000..6cfb89cf
--- /dev/null
+++ b/www/manager6/form/DIRMapSelector.js
@@ -0,0 +1,63 @@
+Ext.define('PVE.form.DIRMapSelector', {
+ extend: 'Proxmox.form.ComboGrid',
+ alias: 'widget.pveDIRMapSelector',
+
+ store: {
+ fields: ['name', 'path'],
+ filterOnLoad: true,
+ sorters: [
+ {
+ property: 'name',
+ direction: 'ASC',
+ },
+ ],
+ },
+
+ allowBlank: false,
+ autoSelect: false,
+ displayField: 'id',
+ valueField: 'id',
+
+ listConfig: {
+ columns: [
+ {
+ header: gettext('Directory ID'),
+ dataIndex: 'id',
+ flex: 1,
+ },
+ {
+ header: gettext('Comment'),
+ dataIndex: 'description',
+ flex: 1,
+ },
+ ],
+ },
+
+ setNodename: function(nodename) {
+ var me = this;
+
+ if (!nodename || me.nodename === nodename) {
+ return;
+ }
+
+ me.nodename = nodename;
+
+ me.store.setProxy({
+ type: 'proxmox',
+ url: `/api2/json/cluster/mapping/dir?check-node=${nodename}`,
+ });
+
+ me.store.load();
+ },
+
+ initComponent: function() {
+ var me = this;
+
+ var nodename = me.nodename;
+ me.nodename = undefined;
+
+ me.callParent();
+
+ me.setNodename(nodename);
+ },
+});
--
2.39.2
* [pve-devel] [PATCH manager v7 11/11] ui: add options to add virtio-fs to qemu config
2023-08-09 8:37 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v6 0/11] virtiofs Markus Frank
` (9 preceding siblings ...)
2023-08-09 8:37 ` [pve-devel] [PATCH manager v7 10/11] ui: form: add DIRMapSelector Markus Frank
@ 2023-08-09 8:37 ` Markus Frank
2023-10-05 8:57 ` [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v6 0/11] virtiofs Fabian Grünbichler
11 siblings, 0 replies; 16+ messages in thread
From: Markus Frank @ 2023-08-09 8:37 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
www/manager6/Makefile | 1 +
www/manager6/Utils.js | 1 +
www/manager6/qemu/HardwareView.js | 19 ++++
www/manager6/qemu/VirtiofsEdit.js | 146 ++++++++++++++++++++++++++++++
4 files changed, 167 insertions(+)
create mode 100644 www/manager6/qemu/VirtiofsEdit.js
diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index 69627ae0..d8e23fd4 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -260,6 +260,7 @@ JSSRC= \
qemu/Smbios1Edit.js \
qemu/SystemEdit.js \
qemu/USBEdit.js \
+ qemu/VirtiofsEdit.js \
sdn/Browser.js \
sdn/ControllerView.js \
sdn/Status.js \
diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index 4e094213..eb0959e4 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -1609,6 +1609,7 @@ Ext.define('PVE.Utils', {
serial: 4,
rng: 1,
tpmstate: 1,
+ virtiofs: 10,
},
// we can have usb6 and up only for specific machine/ostypes
diff --git a/www/manager6/qemu/HardwareView.js b/www/manager6/qemu/HardwareView.js
index 5b33b1e2..b44485dc 100644
--- a/www/manager6/qemu/HardwareView.js
+++ b/www/manager6/qemu/HardwareView.js
@@ -309,6 +309,16 @@ Ext.define('PVE.qemu.HardwareView', {
never_delete: !caps.nodes['Sys.Console'],
header: gettext("VirtIO RNG"),
};
+ for (let i = 0; i < PVE.Utils.hardware_counts.virtiofs; i++) {
+ let confid = "virtiofs" + i.toString();
+ rows[confid] = {
+ group: 50,
+ order: i,
+ iconCls: 'folder',
+ editor: 'PVE.qemu.VirtiofsEdit',
+ header: gettext('Virtiofs') + ' (' + confid +')',
+ };
+ }
var sorterFn = function(rec1, rec2) {
var v1 = rec1.data.key;
@@ -583,6 +593,7 @@ Ext.define('PVE.qemu.HardwareView', {
const noVMConfigDiskPerm = !caps.vms['VM.Config.Disk'];
const noVMConfigCDROMPerm = !caps.vms['VM.Config.CDROM'];
const noVMConfigCloudinitPerm = !caps.vms['VM.Config.Cloudinit'];
+ const noVMConfigOptionsPerm = !caps.vms['VM.Config.Options'];
me.down('#addUsb').setDisabled(noHWPerm || isAtUsbLimit());
me.down('#addPci').setDisabled(noHWPerm || isAtLimit('hostpci'));
@@ -592,6 +603,7 @@ Ext.define('PVE.qemu.HardwareView', {
me.down('#addRng').setDisabled(noSysConsolePerm || isAtLimit('rng'));
efidisk_menuitem.setDisabled(noVMConfigDiskPerm || isAtLimit('efidisk'));
me.down('#addTpmState').setDisabled(noSysConsolePerm || isAtLimit('tpmstate'));
+ me.down('#addVirtiofs').setDisabled(noVMConfigOptionsPerm || isAtLimit('virtiofs'));
me.down('#addCloudinitDrive').setDisabled(noVMConfigCDROMPerm || noVMConfigCloudinitPerm || hasCloudInit);
if (!rec) {
@@ -736,6 +748,13 @@ Ext.define('PVE.qemu.HardwareView', {
disabled: !caps.nodes['Sys.Console'],
handler: editorFactory('RNGEdit'),
},
+ {
+ text: gettext("Virtiofs"),
+ itemId: 'addVirtiofs',
+ iconCls: 'fa fa-folder',
+ disabled: !caps.nodes['Sys.Console'],
+ handler: editorFactory('VirtiofsEdit'),
+ },
],
}),
},
diff --git a/www/manager6/qemu/VirtiofsEdit.js b/www/manager6/qemu/VirtiofsEdit.js
new file mode 100644
index 00000000..46a52854
--- /dev/null
+++ b/www/manager6/qemu/VirtiofsEdit.js
@@ -0,0 +1,146 @@
+Ext.define('PVE.qemu.VirtiofsInputPanel', {
+ extend: 'Proxmox.panel.InputPanel',
+ xtype: 'pveVirtiofsInputPanel',
+ onlineHelp: 'qm_virtiofs',
+
+ insideWizard: false,
+
+ onGetValues: function(values) {
+ var me = this;
+ var confid = me.confid;
+ var params = {};
+ if (values.delete === "xattr") {
+ delete values.xattr;
+ }
+ if (values.delete === "acl") {
+ delete values.acl;
+ }
+ delete values.delete;
+ params[confid] = PVE.Parser.printPropertyString(values, 'dirid');
+ return params;
+ },
+
+ setSharedfiles: function(confid, data) {
+ var me = this;
+ me.confid = confid;
+ me.virtiofs = data;
+ me.setValues(me.virtiofs);
+ },
+ initComponent: function() {
+ let me = this;
+
+ me.nodename = me.pveSelNode.data.node;
+ if (!me.nodename) {
+ throw "no node name specified";
+ }
+ me.items = [
+ {
+ xtype: 'pveDIRMapSelector',
+ emptyText: 'dirid',
+ nodename: me.nodename,
+ fieldLabel: gettext('Directory ID'),
+ name: 'dirid',
+ allowBlank: false,
+ },
+ {
+ xtype: 'proxmoxKVComboBox',
+ value: '__default__',
+ fieldLabel: gettext('Cache'),
+ name: 'cache',
+ deleteEmpty: false,
+ comboItems: [
+ ['__default__', 'Default (auto)'],
+ ['auto', 'auto'],
+ ['always', 'always'],
+ ['never', 'never'],
+ ],
+ allowBlank: false,
+ },
+ {
+ xtype: 'proxmoxKVComboBox',
+ value: '__default__',
+ fieldLabel: gettext('xattr'),
+ name: 'xattr',
+ deleteDefaultValue: true,
+ comboItems: [
+ ['__default__', 'Default (Mapping Settings)'],
+ ['0', 'off'],
+ ['1', 'on'],
+ ],
+ allowBlank: false,
+ },
+ {
+ xtype: 'proxmoxKVComboBox',
+ value: '__default__',
+ fieldLabel: gettext('acl (implies xattr)'),
+ name: 'acl',
+ deleteDefaultValue: true,
+ comboItems: [
+ ['__default__', 'Default (Mapping Settings)'],
+ ['0', 'off'],
+ ['1', 'on'],
+ ],
+ allowBlank: false,
+ },
+ {
+ xtype: 'proxmoxcheckbox',
+ fieldLabel: gettext('Direct-io'),
+ name: 'direct-io',
+ },
+ ];
+
+ me.virtiofs = {};
+ me.confid = 'virtiofs0';
+ me.callParent();
+ },
+});
+
+Ext.define('PVE.qemu.VirtiofsEdit', {
+ extend: 'Proxmox.window.Edit',
+
+ subject: gettext('Filesystem Passthrough'),
+
+ initComponent: function() {
+ var me = this;
+
+ me.isCreate = !me.confid;
+
+ var ipanel = Ext.create('PVE.qemu.VirtiofsInputPanel', {
+ confid: me.confid,
+ pveSelNode: me.pveSelNode,
+ isCreate: me.isCreate,
+ });
+
+ Ext.applyIf(me, {
+ items: ipanel,
+ });
+
+ me.callParent();
+
+ me.load({
+ success: function(response) {
+ me.conf = response.result.data;
+ var i, confid;
+ if (!me.isCreate) {
+ var value = me.conf[me.confid];
+ var virtiofs = PVE.Parser.parsePropertyString(value, "dirid");
+ if (!virtiofs) {
+ Ext.Msg.alert(gettext('Error'), 'Unable to parse virtiofs options');
+ me.close();
+ return;
+ }
+ ipanel.setSharedfiles(me.confid, virtiofs);
+ } else {
+ for (i = 0; i < PVE.Utils.hardware_counts.virtiofs; i++) {
+ confid = 'virtiofs' + i.toString();
+ if (!Ext.isDefined(me.conf[confid])) {
+ me.confid = confid;
+ break;
+ }
+ }
+ ipanel.setSharedfiles(me.confid, {});
+ }
+ },
+ });
+ },
+});
--
2.39.2
* Re: [pve-devel] [PATCH qemu-server v7 5/11] Permission check for virtiofs directory access
2023-08-09 8:37 ` [pve-devel] [PATCH qemu-server v7 5/11] Permission check for virtiofs directory access Markus Frank
@ 2023-10-05 8:56 ` Fabian Grünbichler
0 siblings, 0 replies; 16+ messages in thread
From: Fabian Grünbichler @ 2023-10-05 8:56 UTC (permalink / raw)
To: Proxmox VE development discussion
this should likely also be added to PVE::QemuServer::check_mappings so
that cloning and restoring are covered (properly). Checking permission
for deletion is also not handled AFAICT, or for reverting pending
changes - basically, everywhere bridge or PCI/USB mapping access is
checked, Dir mapping access should be as well ;)
On August 9, 2023 10:37 am, Markus Frank wrote:
> Signed-off-by: Markus Frank <m.frank@proxmox.com>
> ---
> PVE/API2/Qemu.pm | 18 ++++++++++++++++++
> 1 file changed, 18 insertions(+)
>
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index 9606e72..65830f9 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -586,6 +586,19 @@ my $check_vm_create_serial_perm = sub {
> return 1;
> };
>
> +my sub check_vm_dir_perm {
> + my ($rpcenv, $authuser, $param) = @_;
> +
> + return 1 if $authuser eq 'root@pam';
> +
> + foreach my $opt (keys %{$param}) {
> + next if $opt !~ m/^virtiofs\d+$/;
> + my $virtiofs = PVE::JSONSchema::parse_property_string('pve-qm-virtiofs', $param->{$opt});
> + $rpcenv->check_full($authuser, "/mapping/dir/$virtiofs->{dirid}", ['Mapping.Use']);
> + }
> + return 1;
> +};
> +
> my sub check_usb_perm {
> my ($rpcenv, $authuser, $vmid, $pool, $opt, $value) = @_;
>
> @@ -687,6 +700,8 @@ my $check_vm_modify_config_perm = sub {
> # the user needs Disk and PowerMgmt privileges to change the vmstate
> # also needs privileges on the storage, that will be checked later
> $rpcenv->check_vm_perm($authuser, $vmid, $pool, ['VM.Config.Disk', 'VM.PowerMgmt' ]);
> + } elsif ($opt =~ m/^virtiofs\d$/) {
> + $rpcenv->check_vm_perm($authuser, $vmid, $pool, ['VM.Config.Disk']);
> } else {
> # catches args, lock, etc.
> # new options will be checked here
> @@ -925,6 +940,7 @@ __PACKAGE__->register_method({
>
> &$check_vm_modify_config_perm($rpcenv, $authuser, $vmid, $pool, [ keys %$param]);
>
> + check_vm_dir_perm($rpcenv, $authuser, $param);
> &$check_vm_create_serial_perm($rpcenv, $authuser, $vmid, $pool, $param);
> check_vm_create_usb_perm($rpcenv, $authuser, $vmid, $pool, $param);
> check_vm_create_hostpci_perm($rpcenv, $authuser, $vmid, $pool, $param);
> @@ -1660,6 +1676,8 @@ my $update_vm_api = sub {
>
> &$check_vm_modify_config_perm($rpcenv, $authuser, $vmid, undef, [keys %$param]);
>
> + check_vm_dir_perm($rpcenv, $authuser, $param);
> +
> &$check_storage_access($rpcenv, $authuser, $storecfg, $vmid, $param);
>
> PVE::QemuServer::check_bridge_access($rpcenv, $authuser, $param);
> --
> 2.39.2
>
>
>
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
>
>
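The check under review boils down to: for every pending `virtiofs<N>` option, derive the ACL path `/mapping/dir/<dirid>` and require `Mapping.Use` on it. A rough Python sketch of that key filtering (function names are illustrative, not the actual RPCEnvironment API):

```python
import re

def required_dir_mapping_paths(param):
    """Collect the ACL paths that would need a 'Mapping.Use' check for a
    given set of pending config options (sketch, not the Perl code)."""
    paths = []
    for opt, value in param.items():
        if not re.match(r'^virtiofs\d+$', opt):
            continue
        # the first comma-separated segment without '=' is the
        # default key 'dirid'; 'dirid=...' works explicitly as well
        dirid = None
        for part in value.split(','):
            if part.startswith('dirid='):
                dirid = part.split('=', 1)[1]
            elif '=' not in part and dirid is None:
                dirid = part
        paths.append(f"/mapping/dir/{dirid}")
    return paths

print(required_dir_mapping_paths({'virtiofs0': 'foo,cache=always', 'memory': '2048'}))
# → ['/mapping/dir/foo']
```

As the review notes, the same derivation would have to run wherever bridge or PCI/USB mapping access is already checked (clone, restore, delete, revert), not only on create/update.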
* Re: [pve-devel] [PATCH qemu-server v7 4/11] feature #1027: virtio-fs support
2023-08-09 8:37 ` [pve-devel] [PATCH qemu-server v7 4/11] feature #1027: virtio-fs support Markus Frank
@ 2023-10-05 8:56 ` Fabian Grünbichler
0 siblings, 0 replies; 16+ messages in thread
From: Fabian Grünbichler @ 2023-10-05 8:56 UTC (permalink / raw)
To: Proxmox VE development discussion
On August 9, 2023 10:37 am, Markus Frank wrote:
> add support for sharing directories with a guest VM
>
> virtio-fs needs virtiofsd to be started.
>
> In order to start virtiofsd as a process (despite being a daemon, it does not run
> in the background), a double-fork is used.
>
> virtiofsd should close itself together with qemu.
>
> There is the required parameter dirid,
> plus the optional parameters direct-io & cache.
> Additionally, the xattr & acl parameters override the
> directory mapping settings for xattr & acl.
>
> The dirid gets mapped to the path on the current node
> and is also used as a mount-tag (name used to mount the
> device on the guest).
>
> example config:
> ```
> virtiofs0: foo,direct-io=1,cache=always,acl=1
> virtiofs1: dirid=bar,cache=never,xattr=1
> ```
>
> For information on the optional parameters see there:
> https://gitlab.com/virtio-fs/virtiofsd/-/blob/main/README.md
>
> Signed-off-by: Markus Frank <m.frank@proxmox.com>
> ---
> I did not get virtiofsd to run with run_command without creating zombie
> processes after shutdown.
> So I replaced run_command with exec for now.
> Maybe someone can find out why this happens.
>
> PVE/QemuServer.pm | 174 ++++++++++++++++++++++++++++++++++++++-
> PVE/QemuServer/Memory.pm | 25 ++++--
> debian/control | 1 +
> 3 files changed, 193 insertions(+), 7 deletions(-)
>
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index 484bc7f..d547dd6 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -43,6 +43,7 @@ use PVE::PBSClient;
> use PVE::RESTEnvironment qw(log_warn);
> use PVE::RPCEnvironment;
> use PVE::Storage;
> +use PVE::Mapping::Dir;
> use PVE::SysFSTools;
> use PVE::Systemd;
> use PVE::Tools qw(run_command file_read_firstline file_get_contents dir_glob_foreach get_host_arch $IPV6RE);
> @@ -276,6 +277,42 @@ my $rng_fmt = {
> },
> };
>
> +my $virtiofs_fmt = {
> + 'dirid' => {
> + type => 'string',
> + default_key => 1,
> + description => "Mapping identifier of the directory mapping to be"
> + ." shared with the guest. Also used as a mount tag inside the VM.",
> + format_description => 'mapping-id',
> + format => 'pve-configid',
> + },
> + 'cache' => {
> + type => 'string',
> + description => "The caching policy the file system should use"
> + ." (auto, always, never).",
> + format_description => "virtiofs-cache",
> + enum => [qw(auto always never)],
> + optional => 1,
> + },
> + 'direct-io' => {
> + type => 'boolean',
> + description => "Honor the O_DIRECT flag passed down by guest applications",
> + format_description => "virtiofs-directio",
> + optional => 1,
> + },
> + xattr => {
> + type => 'boolean',
> + description => "Enable support for extended attributes.",
> + optional => 1,
> + },
> + acl => {
> + type => 'boolean',
> + description => "Enable support for posix ACLs (implies --xattr).",
> + optional => 1,
> + },
> +};
> +PVE::JSONSchema::register_format('pve-qm-virtiofs', $virtiofs_fmt);
> +
> my $meta_info_fmt = {
> 'ctime' => {
> type => 'integer',
> @@ -840,6 +877,7 @@ while (my ($k, $v) = each %$confdesc) {
> }
>
> my $MAX_NETS = 32;
> +my $MAX_VIRTIOFS = 10;
> my $MAX_SERIAL_PORTS = 4;
> my $MAX_PARALLEL_PORTS = 3;
> my $MAX_NUMA = 8;
> @@ -984,6 +1022,21 @@ my $netdesc = {
>
> PVE::JSONSchema::register_standard_option("pve-qm-net", $netdesc);
>
> +my $virtiofsdesc = {
> + optional => 1,
> + type => 'string', format => $virtiofs_fmt,
> + description => "share files between host and guest",
> +};
> +PVE::JSONSchema::register_standard_option("pve-qm-virtiofs", $virtiofsdesc);
> +
> +sub max_virtiofs {
> + return $MAX_VIRTIOFS;
> +}
> +
> +for (my $i = 0; $i < $MAX_VIRTIOFS; $i++) {
> + $confdesc->{"virtiofs$i"} = $virtiofsdesc;
> +}
> +
> my $ipconfig_fmt = {
> ip => {
> type => 'string',
> @@ -4113,6 +4166,21 @@ sub config_to_command {
> push @$devices, '-device', $netdevicefull;
> }
>
> + my $virtiofs_enabled = 0;
> + for (my $i = 0; $i < $MAX_VIRTIOFS; $i++) {
> + my $opt = "virtiofs$i";
> +
> + next if !$conf->{$opt};
> + my $virtiofs = parse_property_string('pve-qm-virtiofs', $conf->{$opt});
> + next if !$virtiofs;
> +
> + push @$devices, '-chardev', "socket,id=virtfs$i,path=/var/run/virtiofsd/vm$vmid-fs$i";
> + push @$devices, '-device', 'vhost-user-fs-pci,queue-size=1024'
> + .",chardev=virtfs$i,tag=$virtiofs->{dirid}";
> +
> + $virtiofs_enabled = 1;
> + }
> +
> if ($conf->{ivshmem}) {
> my $ivshmem = parse_property_string($ivshmem_fmt, $conf->{ivshmem});
>
> @@ -4172,6 +4240,14 @@ sub config_to_command {
> }
> push @$machineFlags, "type=${machine_type_min}";
>
> + if ($virtiofs_enabled && !$conf->{numa}) {
> + # kvm: '-machine memory-backend' and '-numa memdev' properties are
> + # mutually exclusive
> + push @$devices, '-object', 'memory-backend-file,id=virtiofs-mem'
> + .",size=$conf->{memory}M,mem-path=/dev/shm,share=on";
as discussed off-list, this might be switched to memfd to avoid /dev/shm
(same further below)
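for comparison, a memfd-backed variant would presumably look along these lines
(sketch only, not taken from the patch — `memory-backend-memfd` is the upstream
QEMU object name, and the size placeholder is illustrative):

```
-object memory-backend-memfd,id=virtiofs-mem,size=${memory}M,share=on
-machine memory-backend=virtiofs-mem
```

memory-backend-memfd takes no mem-path; the kernel backs it with an anonymous
memfd instead, so nothing shows up under /dev/shm.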
> + push @$machineFlags, 'memory-backend=virtiofs-mem';
> + }
> +
> push @$cmd, @$devices;
> push @$cmd, '-rtc', join(',', @$rtcFlags) if scalar(@$rtcFlags);
> push @$cmd, '-machine', join(',', @$machineFlags) if scalar(@$machineFlags);
> @@ -4198,6 +4274,85 @@ sub config_to_command {
> return wantarray ? ($cmd, $vollist, $spice_port, $pci_devices) : $cmd;
> }
>
> +sub start_virtiofs {
> + my ($vmid, $fsid, $virtiofs) = @_;
> +
> + my $dir_cfg = PVE::Mapping::Dir::config()->{ids}->{$virtiofs->{dirid}};
> + my $node_list = PVE::Mapping::Dir::find_on_current_node($virtiofs->{dirid});
> +
> + if (!$node_list || scalar($node_list->@*) != 1) {
> + die "virtiofs needs exactly one mapping for this node\n";
> + }
> +
> + eval {
> + PVE::Mapping::Dir::assert_valid($node_list->[0]);
> + };
> + if (my $err = $@) {
> + die "Directory Mapping invalid: $err\n";
> + }
> +
> + my $node_cfg = $node_list->[0];
> + my $path = $node_cfg->{path};
> + my $socket_path_root = "/var/run/virtiofsd";
> + mkdir $socket_path_root;
> + my $socket_path = "$socket_path_root/vm$vmid-fs$fsid";
> + unlink($socket_path);
> + my $socket = IO::Socket::UNIX->new(
> + Type => SOCK_STREAM,
> + Local => $socket_path,
> + Listen => 1,
> + ) or die "cannot create socket - $!\n";
> +
> + my $flags = fcntl($socket, F_GETFD, 0)
> + or die "failed to get file descriptor flags: $!\n";
> + fcntl($socket, F_SETFD, $flags & ~FD_CLOEXEC)
> + or die "failed to remove FD_CLOEXEC from file descriptor\n";
> +
> + my $fd = $socket->fileno();
> +
> + my $virtiofsd_bin = '/usr/libexec/virtiofsd';
> +
> + my $pid = fork();
> + if ($pid == 0) {
> + setsid();
> + $0 = "task pve-vm$vmid-virtiofs$fsid";
> + for my $fd_loop (3 .. POSIX::sysconf( &POSIX::_SC_OPEN_MAX )) {
> + POSIX::close($fd_loop) if ($fd_loop != $fd);
> + }
> +
> + my $pid2 = fork();
> + if ($pid2 == 0) {
> + my $cmd = [$virtiofsd_bin, "--fd=$fd", "--shared-dir=$path"];
> + push @$cmd, '--xattr' if ($virtiofs->{xattr});
> + push @$cmd, '--posix-acl' if ($virtiofs->{acl});
> +
> + # Default to dir config xattr & acl settings
> + push @$cmd, '--xattr'
> + if !defined $virtiofs->{'xattr'} && $dir_cfg->{'xattr'};
> + push @$cmd, '--posix-acl'
> + if !defined $virtiofs->{'acl'} && $dir_cfg->{'acl'};
nit: this could be a lot simpler:
my $xattr = $virtiofs->{xattr} // $dir_cfg->{xattr};
push @$cmd, '--xattr' if $xattr;
or even as a one-liner ;)
same for ACL
> +
> + push @$cmd, '--announce-submounts' if ($node_cfg->{submounts});
> + push @$cmd, '--allow-direct-io' if ($virtiofs->{'direct-io'});
> + push @$cmd, "--cache=$virtiofs->{'cache'}" if ($virtiofs->{'cache'});
> +
> + exec(@$cmd);
> + } elsif (!defined($pid2)) {
> + die "could not fork to start virtiofsd\n";
> + } else {
> + POSIX::_exit(0);
> + }
> + } elsif (!defined($pid)) {
> + die "could not fork to start virtiofsd\n";
> + } else {
> + waitpid($pid, 0);
> + }
> +
> + # return socket to keep it alive,
> + # so that qemu will wait for virtiofsd to start
> + return $socket;
> +}
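the fork/fcntl dance above is a bit hard to follow; for reference, here is the
same mechanism as a minimal, self-contained Python sketch (illustration only —
the actual patch is Perl, and `spawn_detached` is a made-up name):

```python
import fcntl
import os


def spawn_detached(cmd, fd):
    """Double-fork so `cmd` is reparented to init, inheriting `fd` across exec."""
    # Clear FD_CLOEXEC so the already-bound socket survives exec().
    flags = fcntl.fcntl(fd, fcntl.F_GETFD)
    fcntl.fcntl(fd, fcntl.F_SETFD, flags & ~fcntl.FD_CLOEXEC)

    pid = os.fork()
    if pid == 0:
        os.setsid()  # detach from the controlling session
        if os.fork() == 0:
            os.execvp(cmd[0], cmd)  # grandchild becomes the daemon
        os._exit(0)  # intermediate child exits immediately
    # Reaping the intermediate child here means the daemon is reparented
    # to init (or the nearest subreaper), so it can never become a zombie
    # of the caller -- which is exactly the problem run_command had.
    os.waitpid(pid, 0)
```

the key points are identical to the Perl code: the fd is made inheritable, the
intermediate child is waited on immediately, and only the grandchild execs.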
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [pve-devel] [PATCH docs v7 3/11] added shared filesystem doc for virtio-fs
2023-08-09 8:37 ` [pve-devel] [PATCH docs v7 3/11] added shared filesystem doc for virtio-fs Markus Frank
@ 2023-10-05 8:56 ` Fabian Grünbichler
0 siblings, 0 replies; 16+ messages in thread
From: Fabian Grünbichler @ 2023-10-05 8:56 UTC (permalink / raw)
To: Proxmox VE development discussion
On August 9, 2023 10:37 am, Markus Frank wrote:
> Signed-off-by: Markus Frank <m.frank@proxmox.com>
> ---
> qm.adoc | 70 +++++++++++++++++++++++++++++++++++++++++++++++++++++++--
> 1 file changed, 68 insertions(+), 2 deletions(-)
>
> diff --git a/qm.adoc b/qm.adoc
> index e35dbf0..8f4020d 100644
> --- a/qm.adoc
> +++ b/qm.adoc
> @@ -997,6 +997,71 @@ recommended to always use a limiter to avoid guests using too many host
> resources. If desired, a value of '0' for `max_bytes` can be used to disable
> all limits.
>
> +[[qm_virtiofs]]
> +Virtio-fs
> +~~~~~~~~~
> +
> +Virtio-fs is a shared file system that enables sharing a directory between
> +host and guest VM while taking advantage of the locality of virtual machines
> +and the hypervisor to get a higher throughput than 9p.
> +
> +Linux VMs with kernel >=5.4 support this feature by default.
> +
> +There is a guide available on how to utilize virtiofs in Windows VMs.
> +https://github.com/virtio-win/kvm-guest-drivers-windows/wiki/Virtiofs:-Shared-file-system
> +
> +Add mapping for Shared Directories
> +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> +
> +To add a mapping, go to the Resource Mapping tab in Datacenter in the WebUI,
> +use the API directly with pvesh as described in the
> +xref:resource_mapping[Resource Mapping] section,
> +or add the mapping to the configuration file `/etc/pve/mapping/dir.cfg`:
> +
> +----
> +some-dir-id
> + map node=node1,path=/mnt/share/,submounts=1
> + map node=node2,path=/mnt/share/,
> + xattr 1
> + acl 1
> +----
> +
> +Set `submounts` to `1` when multiple file systems are mounted in a
> +shared directory.
> +
> +Add virtiofs to VM
> +^^^^^^^^^^^^^^^^^^
> +
> +To share a directory using virtio-fs, you need to specify the directory ID
> +(dirid) that has been configured in the Resource Mapping. Additionally, you
> +can set the `cache` option to either `always`, `never`, or `auto`, depending on
> +your requirements. If you want virtio-fs to honor the `O_DIRECT` flag, you can
> +set the `direct-io` parameter to `1`.
> +Additionally, it is possible to overwrite the default mapping settings
> +for xattr & acl by setting them to either `1` or `0`.
> +
> +The `acl` parameter automatically implies `xattr`; that is, setting `xattr`
> +to `0` has no effect if `acl` is set to `1`.
> +
> +----
> +qm set <vmid> -virtiofs0 dirid=<dirid>,tag=<mount tag>,cache=always,direct-io=1
> +qm set <vmid> -virtiofs1 <dirid>,tag=<mount tag>,cache=never,xattr=1
> +qm set <vmid> -virtiofs2 <dirid>,tag=<mount tag>,acl=1
nit: the mount tag is not settable anymore in this version
some more caveats and limitations would be nice in the docs IMHO!
> +----
> +
> +The dirid associated with the path on the current node is also used as the
> +mount tag (name used to mount the device on the guest).
> +
> +To mount virtio-fs in a guest VM with the Linux kernel virtiofs driver, run the
> +following command:
> +
> +----
> +mount -t virtiofs <mount tag> <mount point>
> +----
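nit-adjacent: the docs could also mention a persistent mount; a guest-side
`/etc/fstab` entry would look roughly like this (sketch — `some-dir-id` and
`/mnt/shared` are placeholders, and `nofail` keeps a missing share from
hanging the boot):

```
some-dir-id  /mnt/shared  virtiofs  defaults,nofail  0  0
```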
> +
> +For more information on available virtiofsd parameters, see the
> +https://gitlab.com/virtio-fs/virtiofsd[GitLab virtiofsd project page].
> +
> [[qm_bootorder]]
> Device Boot Order
> ~~~~~~~~~~~~~~~~~
> @@ -1600,8 +1665,9 @@ in the relevant tab in the `Resource Mappings` category, or on the cli with
> # pvesh create /cluster/mapping/<type> <options>
> ----
>
> -Where `<type>` is the hardware type (currently either `pci` or `usb`) and
> -`<options>` are the device mappings and other configuration parameters.
> +Where `<type>` is the hardware type (currently either `pci`, `usb` or
> +xref:qm_virtiofs[dir]) and `<options>` are the device mappings and other
> +configuration parameters.
>
> Note that the options must include a map property with all identifying
> properties of that hardware, so that it's possible to verify the hardware did
> --
> 2.39.2
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v6 0/11] virtiofs
2023-08-09 8:37 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v6 0/11] virtiofs Markus Frank
` (10 preceding siblings ...)
2023-08-09 8:37 ` [pve-devel] [PATCH manager v7 11/11] ui: add options to add virtio-fs to qemu config Markus Frank
@ 2023-10-05 8:57 ` Fabian Grünbichler
11 siblings, 0 replies; 16+ messages in thread
From: Fabian Grünbichler @ 2023-10-05 8:57 UTC (permalink / raw)
To: Proxmox VE development discussion
On August 9, 2023 10:37 am, Markus Frank wrote:
> qemu-server patches require pve-guest-common and pve-cluster patches
> pve-manager patches require the pve-doc patch
>
> I did not get virtiofsd to run with run_command without creating zombie
> processes after shutdown.
> So I replaced run_command with exec for now.
> Maybe someone can find out why this happens.
some high-level remarks:
- in general, seems to work as expected within the limitations of the
current virtiofsd
- log messages by virtiofsd after the initial startup are lost, adding
`--syslog` or otherwise improving the process startup to capture them
would be good
- I am not sure whether we want to expose this on the GUI just yet
- checking earlier when doing a snapshot with RAM might be sensible
(since virtiofsd state is not migrateable, it's also not
snapshot-saveable and aborts pretty early on, but a nicer error
message up front would be even better)[1]
- maybe default to ACLs off, or improve detection of support, since
having them on but no support means no mounting possible (haven't
tested whether the same applies to XATTRs as well)
- currently virtiofsd crashing means no recovery until VM is fully
stopped and restarted [2]
- virtiofsd not responding for whatever reason means NFS-like hanging
access in the VM (this should be noted somewhere)
- virtiofs shares don't seem to work on older Linux VMs with memory
hotplug enabled (it might be good to have some sort of
supported/tested-with matrix somewhere so that users don't have to try
known-to-not-work combinations..)
- bwlimit support once upstream has it would be nice[3]
- reboots seem broken (accessing the mount after the reboot hangs), but
that might be fixed with a newer upstream version[4] that I'll prepare
in the meantime :)
noting the build-order/interdependencies would be nice ;)
some more smaller nits noted in individual patches
1: https://gitlab.com/virtio-fs/virtiofsd/-/issues/81
2: https://gitlab.com/virtio-fs/virtiofsd/-/issues/62
3: https://gitlab.com/virtio-fs/virtiofsd/-/merge_requests/147
4: https://gitlab.com/virtio-fs/virtiofsd/-/commit/ee50078626536b8e25389f01e7e4be43897418c9
^ permalink raw reply [flat|nested] 16+ messages in thread
end of thread, other threads:[~2023-10-05 8:57 UTC | newest]
Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
2023-08-09 8:37 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v6 0/11] virtiofs Markus Frank
2023-08-09 8:37 ` [pve-devel] [PATCH cluster v7 1/11] add mapping/dir.cfg for resource mapping Markus Frank
2023-08-09 8:37 ` [pve-devel] [PATCH guest-common v7 2/11] add Dir mapping config Markus Frank
2023-08-09 8:37 ` [pve-devel] [PATCH docs v7 3/11] added shared filesystem doc for virtio-fs Markus Frank
2023-10-05 8:56 ` Fabian Grünbichler
2023-08-09 8:37 ` [pve-devel] [PATCH qemu-server v7 4/11] feature #1027: virtio-fs support Markus Frank
2023-10-05 8:56 ` Fabian Grünbichler
2023-08-09 8:37 ` [pve-devel] [PATCH qemu-server v7 5/11] Permission check for virtiofs directory access Markus Frank
2023-10-05 8:56 ` Fabian Grünbichler
2023-08-09 8:37 ` [pve-devel] [PATCH qemu-server v7 6/11] check_local_resources: virtiofs Markus Frank
2023-08-09 8:37 ` [pve-devel] [PATCH manager v7 07/11] api: add resource map api endpoints for directories Markus Frank
2023-08-09 8:37 ` [pve-devel] [PATCH manager v7 08/11] ui: add edit window for dir mappings Markus Frank
2023-08-09 8:37 ` [pve-devel] [PATCH manager v7 09/11] ui: ResourceMapTree for DIR Markus Frank
2023-08-09 8:37 ` [pve-devel] [PATCH manager v7 10/11] ui: form: add DIRMapSelector Markus Frank
2023-08-09 8:37 ` [pve-devel] [PATCH manager v7 11/11] ui: add options to add virtio-fs to qemu config Markus Frank
2023-10-05 8:57 ` [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v6 0/11] virtiofs Fabian Grünbichler