From: Fabian Abplanalp
To: Proxmox VE user list <pve-user@lists.proxmox.com>
Date: Tue, 25 Jul 2023 10:12:21 +0200
Subject: [PVE-User] Problem with ssh sessions

Hi there

The sessions are opened by a Nagios server for various tests, which means there is always a clean exit status, otherwise the tests would not work. However, the same happens with sessions opened manually.

The sessions run over a ProxyCommand/jump host with Proxmox 8.0.3/Debian 12.1 to the VMs (all Debian 11) over the internal bridge (a rough sketch of the ssh call is further down):

Nagios -> Proxmox -> VM hosts

Since the sshd processes remain on the Proxmox host and on the VM hosts, they also eat up all the memory over time.

On the VM host:

user@vm:~$ ps -ALf | grep nagios
[...]
root    196819     732  196819  0    1 09:17 ?  00:00:00 sshd: nagios [priv]
nagios  196825  196819  196825  0    1 09:17 ?  00:00:00 sshd: nagios@notty
[...]

On the Proxmox host:

user@proxmox:~# ps -ALf | grep nagios
[...]
nagios  617299       1  617299  0    1 09:17 ?  00:00:00 nc 10.0.0.80 22
nagios  617300       1  617300  0    1 09:17 ?  00:00:00 nc 10.0.0.25 22
[...]

With loginctl the sessions are still listed:

root@vm:~# loginctl
[...]
  18112 6000 nagios
  18113 6000 nagios
[...]

root@proxmox:~# loginctl
[...]
 129729 6000 nagios
 129730 6000 nagios
[...]

The Proxmox host even records that the session has been closed:

root@proxmox:~# loginctl session-status 129538
129538 - nagios (6000)
           Since: Tue 2023-07-25 09:17:03 CEST; 24min ago
          Leader: 617115
          Remote: 84.xx.xx.xx
         Service: sshd; type tty; class user
           State: closing
            Unit: session-129538.scope
                  └─617299 nc 10.0.0.80 22

Jul 25 09:17:03 proxmox systemd[1]: Started session-129538.scope - Session 129538 of User nagios.
Jul 25 09:17:04 proxmox sshd[617273]: Received disconnect from 84.xx.xx.xx port 8152:11: disconnected by user
Jul 25 09:17:04 proxmox sshd[617273]: Disconnected from user nagios 84.xx.xx.xx port 8152
Jul 25 09:17:04 proxmox sshd[617115]: pam_unix(sshd:session): session closed for user nagios

...in contrast, on the VMs:

root@vm:~# loginctl session-status 18084
18084 - nagios (6000)
           Since: Tue 2023-07-25 09:17:04 CEST; 25min ago
          Leader: 196819 (sshd)
          Remote: 10.0.0.11
         Service: sshd; type tty; class user
           State: active
            Unit: session-18084.scope
                  ├─196819 sshd: nagios [priv]
                  └─196825 sshd: nagios@notty

Jul 25 09:17:04 webserver systemd[1]: Started session 18084 of user nagios.

If I kill the sessions on the Proxmox jump host, they also disappear on the VMs. The irritating thing is that this problem did not exist before, when the host was still on Debian 11.7 with KVM/qemu; the VMs themselves did not change.
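
For completeness, the checks reach the VMs roughly like this (a sketch only; the exact ProxyCommand and check commands are not reproduced here, but it matches the nc processes seen above):

  # run from the Nagios server; 10.0.0.80 is one of the VMs behind the bridge,
  # 'proxmox' and the remote command 'uptime' are placeholders
  ssh -o ProxyCommand='ssh -q nagios@proxmox nc %h %p' nagios@10.0.0.80 uptime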
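For now the only workaround is to kill the leftover sessions on the jump host by hand, roughly like this (a sketch; it assumes loginctl's default SESSION/UID/USER column order on Debian 12):

  # terminate all remaining nagios sessions on the Proxmox jump host (run as root)
  loginctl list-sessions --no-legend | awk '$3 == "nagios" { print $1 }' \
      | xargs -r loginctl terminate-session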
Any ideas?

BR, Fabian


From: Alwin Antreich <alwin@antreich.com>
To: uwe.sauter.de@gmail.com, Proxmox VE user list <pve-user@lists.proxmox.com>
Date: Tue, 25 Jul 2023 10:40:57 +0000
Subject: Re: [PVE-User] DeviceMapper devices get filtered by Proxmox

Hi Uwe,

July 25, 2023 9:24 AM, "Uwe Sauter" wrote:
> So, I've been looking further into this and indeed, there seem to be very strict filters regarding
> the block device names that Proxmox allows to be used.
>
> /usr/share/perl5/PVE/Diskmanage.pm
>
> 512     # whitelisting following devices
> 513     # - hdX          ide block device
> 514     # - sdX          scsi/sata block device
> 515     # - vdX          virtIO block device
> 516     # - xvdX:        xen virtual block device
> 517     # - nvmeXnY:     nvme devices
> 518     # - cciss!cXnY   cciss devices
> 519     print Dumper($dev);
> 520     return if $dev !~ m/^(h|s|x?v)d[a-z]+$/ &&
> 521               $dev !~ m/^nvme\d+n\d+$/ &&
> 522               $dev !~ m/^cciss\!c\d+d\d+$/;
>
> I don't understand all the consequences of allowing ALL ^dm-\d+$ devices, but with proper filtering
> it should be possible to allow multipath devices (and given that there might be udev rules that
> create additional symlinks below /dev, each device's name should be resolved to its canonical name
> before checking).

It is also a matter of Ceph support [0]. Aside from the extra complexity, that amount of HDDs is not a good use case for virtualization. And HDDs definitely need the DB/WAL on a separate device (60x disks -> 5x NVMe).

Best to set it up with ceph-volume directly; see the forum post [1] for the experience of other users. (A rough example invocation is sketched after the links.)

Cheers,
Alwin

[0] https://docs.ceph.com/en/latest/ceph-volume/lvm/prepare/#multipath-support
[1] https://forum.proxmox.com/threads/ceph-with-multipath.70813/
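
As an illustration only (the device paths below are placeholders, not taken from this thread), preparing an OSD on a multipath data device with the DB/WAL on a separate NVMe could look roughly like:

  # prepare one OSD on a multipath data device, RocksDB/WAL on an NVMe partition
  ceph-volume lvm prepare --data /dev/mapper/mpatha --block.db /dev/nvme0n1p1
  # afterwards activate the prepared OSD(s)
  ceph-volume lvm activate --all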