Date: Wed, 1 Jun 2022 12:30:29 +0200
From: Iztok Gregori <iztok.gregori@elettra.eu>
To: pve-user@lists.proxmox.com
Subject: Re: [PVE-User] Cannot list disks from an external CEPH pool
Message-ID: <28938ffa-9e73-1076-d1c3-10157d1d6618@elettra.eu>
In-Reply-To: <2757d9cfb5e8f35341b599493fb12a81@verdnatura.es>

On 01/06/22 12:11, nada wrote:
> hello, just correct IP address at pool1
> 1172.16.1.2 probably 172.16.1.2

The "1172.16.1.2" was just a typo when I edited out the real addresses;
the monitor IPs are the same on both pools.

> and you may enforce access by krbd 1

What improvement should I see from enabling krbd, with regard to my
original question?

> and simplify your list command by symlinks

CEPH CLI administration is done on a different, non-Proxmox, node. It
was my understanding that Proxmox doesn't need a ceph.conf file to
access an external CEPH cluster, so I never created any
configuration/symlinks. Am I missing something?
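
Just so I understand the symlink suggestion: on a PVE node I guess it
would boil down to something like this, reusing the keyring Proxmox
already keeps for the storage (only a sketch with the edited "pool1"
names, nothing I actually have in place):

# ln -s /etc/pve/priv/ceph/pool1.keyring /etc/ceph/pool1.keyring
# rbd -m 172.16.1.1 --keyring /etc/ceph/pool1.keyring ls pool1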

Cheers
Iztok

> example
>
> rbd: pool1
>      content images
>      krbd 1
>      monhost 172.16.1.1,172.16.1.2,172.16.1.3
>      pool pool1
>      username admin
>
> # la /etc/ceph/
> total 20
> drwxr-xr-x  2 root root   7 Mar  6 13:03 .
> drwxr-xr-x 97 root root 193 May 19 03:10 ..
> lrwxrwxrwx  1 root root  27 Aug  4  2021 rbd.conf -> /etc/pve/priv/ceph/rbd.conf
> lrwxrwxrwx  1 root root  30 Aug  4  2021 rbd.keyring -> /etc/pve/priv/ceph/rbd.keyring
> -rw-r--r--  1 root root  92 Aug 28  2019 rbdmap
> lrwxrwxrwx  1 root root  31 Feb  2 12:37 rbd_ssd.conf -> /etc/pve/priv/ceph/rbd_ssd.conf
> lrwxrwxrwx  1 root root  34 Feb  2 12:37 rbd_ssd.keyring -> /etc/pve/priv/ceph/rbd_ssd.keyring
>
> # pvesm list rbd
> Volid              Format  Type           Size  VMID
> rbd:vm-105-disk-0  raw     images  42949672960  105
> rbd:vm-111-disk-0  raw     images  42949672960  111
>
> # pvesm list rbd_ssd
> Volid                  Format  Type          Size  VMID
> rbd_ssd:vm-102-disk-0  raw     images  6442450944  102
> rbd_ssd:vm-103-disk-0  raw     images  4294967296  103
>
> good luck
> Nada
>
>
> On 2022-06-01 11:51, Iztok Gregori wrote:
>> On 01/06/22 11:29, Aaron Lauterer wrote:
>>> Do you get additional errors if you run the following command?
>>> Assuming that the storage is also called pool1.
>>>
>>> pvesm list pool1
>>
>> No additional errors:
>>
>> root@pmx-14:~# pvesm list pool1
>> rbd error: rbd: listing images failed: (2) No such file or directory
>>
>>> Do you have VMs with disk images on that storage? If so, do they
>>> start normally?
>>
>> Yes, we have a lot of VMs with disks on that storage and yes, they
>> seem to start normally (the last start was yesterday, when we first
>> noticed the GUI behaviour).
>>
>>> Can you show the configuration of that storage and the one of the
>>> working pool? (/etc/pve/storage.cfg)
>>
>> Sure (edited the IP addresses and pool names):
>>
>> [cit /etc/pve/storage.cfg]
>> ...
>> rbd: pool1
>>     content images
>>     monhost 172.16.1.1;1172.16.1.2;172.16.1.3
>>     pool pool1
>>     username admin
>>
>> rbd: pool2
>>     content images
>>     monhost 172.16.1.1;172.16.1.2;172.16.1.3
>>     pool pool2
>>     username admin
>> ...
>> [/cit]
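
(Side note: the "1172.16.1.2" in pool1's monhost above is the same
redaction typo I mentioned at the top; the real entry carries the same
three monitors as pool2, i.e.:

    monhost 172.16.1.1;172.16.1.2;172.16.1.3

so apart from the names the two storage definitions are identical.)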
>>
>> Thanks!
>>
>> Iztok
>>
>>>
>>> On 6/1/22 11:13, Iztok Gregori wrote:
>>>> Hi to all!
>>>>
>>>> I have a Proxmox cluster (7.1) connected to an external CEPH
>>>> cluster (Octopus). From the GUI I cannot list the content (disks)
>>>> of one pool (but I'm able to list all the other pools):
>>>>
>>>> rbd error: rbd: listing images failed: (2) No such file or
>>>> directory (500)
>>>>
>>>> The pveproxy/access.log shows the error for "pool1":
>>>>
>>>> "GET /api2/json/nodes/pmx-14/storage/pool1/content?content=images
>>>> HTTP/1.1" 500 13
>>>>
>>>> but when I try another pool ("pool2") it works:
>>>>
>>>> "GET /api2/json/nodes/pmx-14/storage/pool2/content?content=images
>>>> HTTP/1.1" 200 841
>>>>
>>>> From the command line "rbd ls pool1" works fine (because I don't
>>>> have a ceph.conf I ran it as "rbd -m 172.16.1.1 --keyring
>>>> /etc/pve/priv/ceph/pool1.keyring ls pool1") and I can see the pool
>>>> contents.
>>>>
>>>> The cluster is running fine and the VMs access the disks on that
>>>> pool without a problem.
>>>>
>>>> What can it be?
>>>>
>>>> The cluster is a mix of freshly installed nodes and upgraded ones;
>>>> all 17 nodes (except one, which is still on 6.4 but has no running
>>>> VMs) are running:
>>>>
>>>> root@pmx-14:~# pveversion -v
>>>> proxmox-ve: 7.1-1 (running kernel: 5.13.19-6-pve)
>>>> pve-manager: 7.1-12 (running version: 7.1-12/b3c09de3)
>>>> pve-kernel-helper: 7.1-14
>>>> pve-kernel-5.13: 7.1-9
>>>> pve-kernel-5.13.19-6-pve: 5.13.19-15
>>>> pve-kernel-5.13.19-2-pve: 5.13.19-4
>>>> ceph-fuse: 15.2.15-pve1
>>>> corosync: 3.1.5-pve2
>>>> criu: 3.15-1+pve-1
>>>> glusterfs-client: 9.2-1
>>>> ifupdown2: 3.1.0-1+pmx3
>>>> ksm-control-daemon: 1.4-1
>>>> libjs-extjs: 7.0.0-1
>>>> libknet1: 1.22-pve2
>>>> libproxmox-acme-perl: 1.4.1
>>>> libproxmox-backup-qemu0: 1.2.0-1
>>>> libpve-access-control: 7.1-7
>>>> libpve-apiclient-perl: 3.2-1
>>>> libpve-common-perl: 7.1-5
>>>> libpve-guest-common-perl: 4.1-1
>>>> libpve-http-server-perl: 4.1-1
>>>> libpve-storage-perl: 7.1-1
>>>> libspice-server1: 0.14.3-2.1
>>>> lvm2: 2.03.11-2.1
>>>> lxc-pve: 4.0.11-1
>>>> lxcfs: 4.0.11-pve1
>>>> novnc-pve: 1.3.0-2
>>>> proxmox-backup-client: 2.1.5-1
>>>> proxmox-backup-file-restore: 2.1.5-1
>>>> proxmox-mini-journalreader: 1.3-1
>>>> proxmox-widget-toolkit: 3.4-7
>>>> pve-cluster: 7.1-3
>>>> pve-container: 4.1-4
>>>> pve-docs: 7.1-2
>>>> pve-edk2-firmware: 3.20210831-2
>>>> pve-firewall: 4.2-5
>>>> pve-firmware: 3.3-6
>>>> pve-ha-manager: 3.3-3
>>>> pve-i18n: 2.6-2
>>>> pve-qemu-kvm: 6.1.1-2
>>>> pve-xtermjs: 4.16.0-1
>>>> qemu-server: 7.1-4
>>>> smartmontools: 7.2-1
>>>> spiceterm: 3.2-2
>>>> swtpm: 0.7.1~bpo11+1
>>>> vncterm: 1.7-1
>>>> zfsutils-linux: 2.1.4-pve1
>>>>
>>>> I can provide other information if it's needed.
>>>>
>>>> Cheers
>>>> Iztok Gregori

> _______________________________________________
> pve-user mailing list
> pve-user@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user