From: Filip Schauer <f.schauer@proxmox.com>
To: "Michael Köppl" <m.koeppl@proxmox.com>,
"Proxmox VE development discussion" <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH container v2 4/4] implement device hotplug
Date: Mon, 2 Jun 2025 13:40:26 +0200
Message-ID: <099846df-78ca-4b7d-a305-2dc26f1100c6@proxmox.com>
In-Reply-To: <07541fab-1372-4129-a790-db9b8f96794f@proxmox.com>
On 30/05/2025 16:18, Michael Köppl wrote:
>> +    my $id_map = (PVE::LXC::parse_id_maps($conf))[0];
>> +    my $passthrough_device_path = create_passthrough_device_node(
>> +        "/var/lib/lxc/$vmid/passthrough", $dev, $mode, $rdev, $id_map);
> I understand that this path is guaranteed to exist because it is created
> as part of the prestart hook, but maybe the directory creation could also
> be moved to a helper function and called before
> create_passthrough_device_node, just to be sure. Just a suggestion.
I don't think we should do that. The prestart hook doesn't just create
the directory; it also mounts a tmpfs onto it. We wouldn't want to mount
a new tmpfs on top every time a device is hotplugged.
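
For illustration, the prestart hook does roughly the following once per
container start (a simplified sketch, not the actual hook code; the
mount options and the call to mount(8) are only illustrative):

    use File::Path qw(make_path);

    # set up the staging directory once, when the container starts
    my $dir = "/var/lib/lxc/$vmid/passthrough";
    make_path($dir);

    # mount a fresh tmpfs on it; shelling out to mount(8) here only for
    # illustration, the actual hook code differs
    system('mount', '-t', 'tmpfs', '-o', 'size=1M,mode=0755', 'none', $dir) == 0
        or die "mounting tmpfs on '$dir' failed\n";

Running this from a shared helper on every hotplug would stack a new
tmpfs on top of the previous one each time.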
>> +
>> +    my $srcfh = PVE::Tools::open_tree(&AT_FDCWD, $passthrough_device_path, &OPEN_TREE_CLOEXEC | &OPEN_TREE_CLONE)
>> +        or die "open_tree() on passthrough device node failed: $!\n";
>> +
>> +    if ($conf->{unprivileged}) {
>> +        PVE::Tools::setns(fileno($ct_user_ns), PVE::Tools::CLONE_NEWUSER)
>> +            or die "failed to enter user namespace of container $vmid: $!\n";
>> +
>> +        POSIX::setuid(0);
>> +        POSIX::setgid(0);
>> +    }
>> +
>> +    # Create a regular file in the container to bind mount the device node onto.
>> +    sysopen(my $dstfh, "/proc/$ct_pid/root$dev->{path}", O_CREAT)
>> +        or die "failed to open '/proc/$ct_pid/root$dev->{path}': $!\n";
> While this does create the file, it does not create the directory.
> Calling File::Path::make_path(dirname("/proc/$ct_pid/root$dev->{path}"))
> directly before the sysopen call makes this behave as I would expect,
> and the hotplug then works.
You're right, I forgot to create the path if it doesn't exist. I only
tested this with device nodes directly under /dev.
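
Something along the lines of your suggestion should do it (untested
sketch for the next revision):

    use Fcntl qw(O_CREAT);
    use File::Basename qw(dirname);
    use File::Path qw(make_path);

    # create any missing parent directories inside the container first,
    # then create the regular file to bind mount the device node onto
    my $dst = "/proc/$ct_pid/root$dev->{path}";
    make_path(dirname($dst));
    sysopen(my $dstfh, $dst, O_CREAT)
        or die "failed to open '$dst': $!\n";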