From: Stelios Vailakakis <stelios@libvirt.dev>
To: Friedrich Weber <f.weber@proxmox.com>,
	Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH] iscsi: fix excessive connection test spam on storage monitoring
Date: Mon, 10 Nov 2025 23:25:40 +0000
Message-ID: <DS7PR06MB6903850B07D5E163D38EA903A6CEA@DS7PR06MB6903.namprd06.prod.outlook.com>
In-Reply-To: <f67b4c55-65ec-4c64-a632-f48704966616@proxmox.com>


Hi Friedrich,

> Thanks! Just FYI, there is also a more convenient public-inbox instance
> for browsing our mailing lists [1].

Great, thanks for pointing me towards this page!

> So if I understand correctly, after applying only your hostname patch on
> top of an up-to-date libpve-storage-perl, you are still seeing the
> "connection lost" entries on the iSCSI target? Can you double-check the
> version of libpve-storage-perl (e.g. using `pveversion -v | grep
> libpve-storage`) on top of which you applied your hostname patch? Could
> you post the (anonymized) output of `iscsiadm -m node` and `iscsiadm -m
> session` on nodes 1-4 and 5?

For clarity: the "connection lost" entries no longer occur after applying my patch on all previously mentioned PVE stacks, as well as on 9.0.11, which I recently upgraded to.

The current environment is proxmox1 (patched) and proxmox2 (unpatched default). Below is the iscsiadm output from the "good" proxmox1 vs. the "bad" proxmox2.

12345 is a placeholder for the domain, and the IP addresses are irrelevant and made up. I will leave my proxmox2 node unpatched in case we need any more information.

#Version sanity check
root@proxmox1:~# pveversion -v | grep 'libpve-storage\|pve-manager'
pve-manager: 9.0.11 (running version: 9.0.11/3bf5476b8a4699e2)
libpve-storage-perl: 9.0.13

root@proxmox2:~# pveversion -v | grep 'libpve-storage\|pve-manager'
pve-manager: 9.0.11 (running version: 9.0.11/3bf5476b8a4699e2)
libpve-storage-perl: 9.0.13


#proxmox1 iscsiadm
root@proxmox1:~# iscsiadm -m node
nas.12345.com:3260,4294967295 iqn.2024-01.com.12345.vm-stor
nas.12345.com:3260,4294967295 iqn.2025-06.com.12345.ssd-vm-stor
root@proxmox1:~# iscsiadm -m session
tcp: [1] 192.168.1.10:3260,1 iqn.2024-01.com.12345.vm-stor (non-flash)
tcp: [2] 192.168.1.10:3260,1 iqn.2025-06.com.12345.ssd-vm-stor (non-flash)

#proxmox2 iscsiadm
root@proxmox2:~# iscsiadm -m node
nas.12345.com:3260,4294967295 iqn.2024-01.com.12345.vm-stor
nas.12345.com:3260,4294967295 iqn.2025-06.com.12345.ssd-vm-stor
root@proxmox2:~# iscsiadm -m session
tcp: [1] 192.168.1.10:3260,1 iqn.2025-06.com.12345.ssd-vm-stor (non-flash)
tcp: [2] 192.168.1.10:3260,1 iqn.2024-01.com.12345.vm-stor (non-flash)
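
For reference, the node records above list the portal by hostname (nas.12345.com:3260) while the sessions report the resolved IP (192.168.1.10:3260). A rough way to check by hand whether an active session already covers the resolved portal address (just a sketch with the placeholder hostname/port from above, not the actual patch code):

# Resolve the configured portal hostname and see whether any active
# session already uses one of the resolved addresses (placeholder values):
PORTAL=nas.12345.com
PORT=3260
for ip in $(getent ahosts "$PORTAL" | awk '{print $1}' | sort -u); do
    if iscsiadm -m session 2>/dev/null | grep -q "$ip:$PORT,"; then
        echo "active session found for $ip:$PORT"
    else
        echo "no active session for $ip:$PORT"
    fi
done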



#BEFORE PATCH journalctl -xeu pvestatd on proxmox1 (9.0.11) - constantly repeating:
Nov 10 16:56:32 proxmox1 pvestatd[1245435]: command '/usr/bin/iscsiadm --mode node --targetname iqn.2024-01.com.12345.vm-stor --login' failed: exit code 15
Nov 10 16:56:39 proxmox1 pvestatd[1245435]: command '/usr/bin/iscsiadm --mode node --targetname iqn.2025-06.com.12345.ssd-vm-stor --login' failed: exit code 15
Nov 10 16:56:39 proxmox1 pvestatd[1245435]: command '/usr/bin/iscsiadm --mode node --targetname iqn.2024-01.com.12345.vm-stor --login' failed: exit code 15
Nov 10 16:56:50 proxmox1 pvestatd[1245435]: command '/usr/bin/iscsiadm --mode node --targetname iqn.2025-06.com.12345.ssd-vm-stor --login' failed: exit code 15
Nov 10 16:56:51 proxmox1 pvestatd[1245435]: command '/usr/bin/iscsiadm --mode node --targetname iqn.2024-01.com.12345.vm-stor --login' failed: exit code 15

#AFTER PATCH journalctl -xeu pvestatd on proxmox1 (9.0.11):
[Service started successfully several minutes ago as expected, no errors like above]


#journalctl -xeu pvestatd with no patch on proxmox2 - constantly repeating:
Nov 10 17:12:42 proxmox2 pvestatd[1153]: command '/usr/bin/iscsiadm --mode node --targetname iqn.2025-06.com.12345.ssd-vm-stor --login' failed: exit code 15
Nov 10 17:12:45 proxmox2 pvestatd[1153]: command '/usr/bin/iscsiadm --mode node --targetname iqn.2024-01.com.12345.vm-stor --login' failed: exit code 15
Nov 10 17:12:52 proxmox2 pvestatd[1153]: command '/usr/bin/iscsiadm --mode node --targetname iqn.2025-06.com.12345.ssd-vm-stor --login' failed: exit code 15
Nov 10 17:12:54 proxmox2 pvestatd[1153]: command '/usr/bin/iscsiadm --mode node --targetname iqn.2024-01.com.12345.vm-stor --login' failed: exit code 15
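
For completeness: assuming stock open-iscsi, exit code 15 from iscsiadm should be ISCSI_ERR_SESS_EXISTS, i.e. the login is rejected because a session to that target already exists, which fits the active sessions shown above. It can be reproduced by hand on the unpatched node with the same command pvestatd runs:

# Re-run the failing login manually and inspect the exit code
iscsiadm --mode node --targetname iqn.2024-01.com.12345.vm-stor --login
echo "exit code: $?"   # expect 15 (session exists) while the session above is up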

Regards,
Stelios Vailakakis
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Thread overview: 6+ messages
2025-06-20  0:44 Stelios Vailakakis
2025-06-23  8:24 ` Friedrich Weber
2025-07-06 14:14   ` Stelios Vailakakis
2025-07-15 16:11     ` Friedrich Weber
2025-11-10 23:25       ` Stelios Vailakakis [this message]
2025-11-11 16:45         ` Friedrich Weber
