From: Jing Luo <jing@jing.rocks>
To: pve-devel@lists.proxmox.com
Cc: Jing Luo <jing@jing.rocks>
Subject: [pve-devel] [PATCH qemu-server] tree-wide: change /var/run to /run and /var/lock to /run/lock
Date: Sun, 23 Mar 2025 00:17:19 +0900
Message-ID: <20250322152004.1646886-12-jing@jing.rocks>
In-Reply-To: <20250322152004.1646886-1-jing@jing.rocks>
"/var/run" and "/var/lock" are deprecated.
This is to comply with Debian Policy 9.1.4 "/run and /run/lock".
(https://www.debian.org/doc/debian-policy/ch-opersys.html#run-and-run-lock)
Signed-off-by: Jing Luo <jing@jing.rocks>
---
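Note, not part of the commit message: on a systemd-based host, which every
supported PVE installation is, /var/run and /var/lock are already symlinks
into /run, so both spellings resolve to the same files and nothing changes
at runtime. A minimal sketch to confirm this on a host before applying,
using only core Perl (illustrative, not part of the patch):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # On a systemd host both legacy paths should be symlinks into /run.
    for my $legacy (qw(/var/run /var/lock)) {
        if (-l $legacy) {
            printf "%s -> %s\n", $legacy, readlink($legacy);
        } else {
            warn "$legacy is not a symlink; verify before applying\n";
        }
    }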
PVE/CLI/qm.pm | 2 +-
PVE/QemuConfig.pm | 2 +-
PVE/QemuServer.pm | 14 +++++++-------
PVE/QemuServer/Helpers.pm | 2 +-
PVE/QemuServer/Memory.pm | 2 +-
qmeventd/qmeventd.service | 4 ++--
6 files changed, 13 insertions(+), 13 deletions(-)
diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
index 4214a7ca..5e1321c6 100755
--- a/PVE/CLI/qm.pm
+++ b/PVE/CLI/qm.pm
@@ -703,7 +703,7 @@ __PACKAGE__->register_method ({
die "VM $vmid not running\n" if !PVE::QemuServer::check_running($vmid);
- my $socket = "/var/run/qemu-server/${vmid}.$iface";
+ my $socket = "/run/qemu-server/${vmid}.$iface";
my $cmd = "socat UNIX-CONNECT:$socket STDIO,raw,echo=0$escape";
diff --git a/PVE/QemuConfig.pm b/PVE/QemuConfig.pm
index ffdf9f03..b414db99 100644
--- a/PVE/QemuConfig.pm
+++ b/PVE/QemuConfig.pm
@@ -24,7 +24,7 @@ my $nodename = PVE::INotify::nodename();
mkdir "/etc/pve/nodes/$nodename";
mkdir "/etc/pve/nodes/$nodename/qemu-server";
-my $lock_dir = "/var/lock/qemu-server";
+my $lock_dir = "/run/lock/qemu-server";
mkdir $lock_dir;
sub assert_config_exists_on_node {
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 5bb86f7a..268140ce 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -3204,8 +3204,8 @@ sub audio_devs {
sub get_tpm_paths {
my ($vmid) = @_;
return {
- socket => "/var/run/qemu-server/$vmid.swtpm",
- pid => "/var/run/qemu-server/$vmid.swtpm.pid",
+ socket => "/run/qemu-server/$vmid.swtpm",
+ pid => "/run/qemu-server/$vmid.swtpm.pid",
};
}
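For callers the result is then, e.g. for a hypothetical VMID 100:

    my $paths = PVE::QemuServer::get_tpm_paths(100);
    # $paths->{socket} eq "/run/qemu-server/100.swtpm"
    # $paths->{pid}    eq "/run/qemu-server/100.swtpm.pid"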
@@ -3461,7 +3461,7 @@ sub query_supported_cpu_flags {
$qemu_cmd,
'-machine', $default_machine,
'-display', 'none',
- '-chardev', "socket,id=qmp,path=/var/run/qemu-server/$fakevmid.qmp,server=on,wait=off",
+ '-chardev', "socket,id=qmp,path=/run/qemu-server/$fakevmid.qmp,server=on,wait=off",
'-mon', 'chardev=qmp,mode=control',
'-pidfile', $pidfile,
'-S', '-daemonize'
@@ -3710,7 +3710,7 @@ sub config_to_command {
push @$cmd, '-mon', "chardev=qmp,mode=control";
if (min_version($machine_version, 2, 12)) {
- push @$cmd, '-chardev', "socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5";
+ push @$cmd, '-chardev', "socket,id=qmp-event,path=/run/qmeventd.sock,reconnect=5";
push @$cmd, '-mon', "chardev=qmp-event,mode=control";
}
@@ -3812,7 +3812,7 @@ sub config_to_command {
for (my $i = 0; $i < $MAX_SERIAL_PORTS; $i++) {
my $path = $conf->{"serial$i"} or next;
if ($path eq 'socket') {
- my $socket = "/var/run/qemu-server/${vmid}.serial$i";
+ my $socket = "/run/qemu-server/${vmid}.serial$i";
push @$devices, '-chardev', "socket,id=serial$i,path=$socket,server=on,wait=off";
# On aarch64, serial0 is the UART device. QEMU only allows
# connecting UART devices via the '-serial' command line, as
@@ -6330,7 +6330,7 @@ sub vm_stop_cleanup {
}
foreach my $ext (qw(mon qmp pid vnc qga)) {
- unlink "/var/run/qemu-server/${vmid}.$ext";
+ unlink "/run/qemu-server/${vmid}.$ext";
}
if ($conf->{ivshmem}) {
@@ -8789,7 +8789,7 @@ sub register_qmeventd_handle {
my ($vmid) = @_;
my $fh;
- my $peer = "/var/run/qmeventd.sock";
+ my $peer = "/run/qmeventd.sock";
my $count = 0;
for (;;) {
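The loop that follows retries the connect, since qmeventd may still be
starting up at this point. A minimal sketch of the idea, assuming
IO::Socket::UNIX (the actual loop in QemuServer.pm may differ in its
details):

    use IO::Socket::UNIX;
    use Socket qw(SOCK_STREAM);

    my $fh;
    for my $attempt (1 .. 5) {
        $fh = IO::Socket::UNIX->new(
            Peer => "/run/qmeventd.sock",
            Type => SOCK_STREAM,
        );
        last if $fh;
        sleep 1;    # qmeventd not ready yet, try again
    }
    die "unable to connect to qmeventd socket\n" if !$fh;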
diff --git a/PVE/QemuServer/Helpers.pm b/PVE/QemuServer/Helpers.pm
index 72a46a0a..a27fcfdd 100644
--- a/PVE/QemuServer/Helpers.pm
+++ b/PVE/QemuServer/Helpers.pm
@@ -21,7 +21,7 @@ my $nodename = PVE::INotify::nodename();
# Paths and directories
-our $var_run_tmpdir = "/var/run/qemu-server";
+our $var_run_tmpdir = "/run/qemu-server";
mkdir $var_run_tmpdir;
sub qmp_socket {
diff --git a/PVE/QemuServer/Memory.pm b/PVE/QemuServer/Memory.pm
index e5024cd2..d87b3a06 100644
--- a/PVE/QemuServer/Memory.pm
+++ b/PVE/QemuServer/Memory.pm
@@ -726,7 +726,7 @@ sub hugepages_update_locked {
my $timeout = 60; #could be long if a lot of hugepages need to be allocated
- my $lock_filename = "/var/lock/hugepages.lck";
+ my $lock_filename = "/run/lock/hugepages.lck";
my $res = lock_file($lock_filename, $timeout, $code, @param);
die $@ if $@;
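Because /var/lock is a symlink to /run/lock, the old and the new lock file
name refer to the same inode, so a process still using the old path and one
using the new path keep excluding each other during a mixed-version
upgrade. A quick, hypothetical check of that assumption (not part of the
patch):

    # dev and ino of both spellings of the directory must match
    my @old = stat('/var/lock');
    my @new = stat('/run/lock');
    die "lock paths diverge\n"
        if !@old || !@new || $old[0] != $new[0] || $old[1] != $new[1];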
diff --git a/qmeventd/qmeventd.service b/qmeventd/qmeventd.service
index 1e2465be..d692d1a0 100644
--- a/qmeventd/qmeventd.service
+++ b/qmeventd/qmeventd.service
@@ -1,11 +1,11 @@
 [Unit]
 Description=PVE Qemu Event Daemon
-RequiresMountsFor=/var/run
+RequiresMountsFor=/run
 Before=pve-ha-lrm.service
 Before=pve-guests.service
 
 [Service]
-ExecStart=/usr/sbin/qmeventd /var/run/qmeventd.sock
+ExecStart=/usr/sbin/qmeventd /run/qmeventd.sock
 Type=forking
 
 [Install]
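Side note on RequiresMountsFor: /run is an API filesystem that systemd
mounts itself during early boot, so this dependency is always satisfied in
practice; the directive mainly documents the requirement. One way to see
the mount on a host (output shortened and illustrative):

    $ findmnt -n /run
    /run tmpfs tmpfs rw,nosuid,nodev,...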
--
2.49.0