public inbox for pve-user@lists.proxmox.com
* [PVE-User] Cannot list disks from an external CEPH pool
From: Iztok Gregori @ 2022-06-01  9:13 UTC (permalink / raw)
  To: Proxmox VE user list

Hi to all!

I have a Proxmox cluster (7.1) connected to an external CEPH cluster 
(octopus). From the GUI I cannot list the content (disks) of one pool 
(but I'm able to list all the other pools):

rbd error: rbd: listing images failed: (2) No such file or directory (500)

The pveproxy/access.log shows the error for "pool1":

"GET /api2/json/nodes/pmx-14/storage/pool1/content?content=images 
HTTP/1.1" 500 13

but when I try another pool ("pool2") it works:

"GET /api2/json/nodes/pmx-14/storage/pool2/content?content=images 
HTTP/1.1" 200 841

From the command line, "rbd ls pool1" works fine (because I don't have a 
ceph.conf, I ran it as "rbd -m 172.16.1.1 --keyring 
/etc/pve/priv/ceph/pool1.keyring ls pool1") and I can see the pool contents.
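
(An editorial aside, hedged and not part of the original mail: as far as I 
know, the PVE storage layer lists RBD images with a long listing, along the 
lines of "rbd ls -l", which opens every image header, whereas plain "rbd ls" 
only reads the pool's directory object — so a single damaged or half-deleted 
image can break the GUI listing while "rbd ls" still works. A sketch to 
reproduce and narrow this down, reusing the monitor and keyring from above:)

# long listing, roughly what the GUI/API does; may fail on one broken image
rbd -m 172.16.1.1 --keyring /etc/pve/priv/ceph/pool1.keyring ls -l pool1

# if it fails, probe each image individually to find the culprit
for img in $(rbd -m 172.16.1.1 --keyring /etc/pve/priv/ceph/pool1.keyring ls pool1); do
    rbd -m 172.16.1.1 --keyring /etc/pve/priv/ceph/pool1.keyring info "pool1/$img" \
        >/dev/null 2>&1 || echo "cannot open: $img"
done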

The cluster is running fine, and the VMs access their disks on that pool 
without a problem.

What could it be?

The cluster is a mix of freshly installed nodes and upgraded ones; all 
17 nodes (except one, which is still on 6.4 but has no running VMs) are running:

root@pmx-14:~# pveversion -v
proxmox-ve: 7.1-1 (running kernel: 5.13.19-6-pve)
pve-manager: 7.1-12 (running version: 7.1-12/b3c09de3)
pve-kernel-helper: 7.1-14
pve-kernel-5.13: 7.1-9
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-2-pve: 5.13.19-4
ceph-fuse: 15.2.15-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.1
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-7
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.1-5
libpve-guest-common-perl: 4.1-1
libpve-http-server-perl: 4.1-1
libpve-storage-perl: 7.1-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.11-1
lxcfs: 4.0.11-pve1
novnc-pve: 1.3.0-2
proxmox-backup-client: 2.1.5-1
proxmox-backup-file-restore: 2.1.5-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-7
pve-cluster: 7.1-3
pve-container: 4.1-4
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-6
pve-ha-manager: 3.3-3
pve-i18n: 2.6-2
pve-qemu-kvm: 6.1.1-2
pve-xtermjs: 4.16.0-1
qemu-server: 7.1-4
smartmontools: 7.2-1
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.4-pve1

I can provide other information if it's needed.

Cheers
Iztok Gregori


-- 
Iztok Gregori
ICT Systems and Services
Elettra - Sincrotrone Trieste S.C.p.A.
Telephone: +39 040 3758948
http://www.elettra.eu




* Re: [PVE-User] Cannot list disks from an external CEPH pool
From: Aaron Lauterer @ 2022-06-01  9:29 UTC (permalink / raw)
  To: Proxmox VE user list, Iztok Gregori

Do you get additional errors if you run the following command? Assuming that the 
storage is also called pool1.

pvesm list pool1


Do you have VMs with disk images on that storage? If so, do they start normally?

Can you show the configuration of that storage and of the working pool? 
(/etc/pve/storage.cfg)
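
(A hedged editorial addition, not in Aaron's original mail: another quick 
check could be the storage status call, which also goes through the storage 
plugin — assuming the storage ID is pool1:)

pvesm status --storage pool1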

On 6/1/22 11:13, Iztok Gregori wrote:
> Hi to all!
> 
> I have a Proxmox cluster (7.1) connected to an external CEPH cluster (octopus). 
> From the GUI I cannot list the content (disks) of one pool (but I'm able to 
> list all the other pools):
> 
> rbd error: rbd: listing images failed: (2) No such file or directory (500)
> 
> The pveproxy/access.log shows the error for "pool1":
> 
> "GET /api2/json/nodes/pmx-14/storage/pool1/content?content=images HTTP/1.1" 500 13
> 
> but when I try another pool ("pool2") it works:
> 
> "GET /api2/json/nodes/pmx-14/storage/pool2/content?content=images HTTP/1.1" 200 841
> 
> From the command line, "rbd ls pool1" works fine (because I don't have a 
> ceph.conf, I ran it as "rbd -m 172.16.1.1 --keyring 
> /etc/pve/priv/ceph/pool1.keyring ls pool1") and I can see the pool contents.
> 
> The cluster is running fine, and the VMs access their disks on that pool 
> without a problem.
> 
> What could it be?
> 
> The cluster is a mix of freshly installed nodes and upgraded ones; all 17 
> nodes (except one, which is still on 6.4 but has no running VMs) are running:
> 
> [pveversion output snipped; see the original message above]
> 
> I can provide other information if it's needed.
> 
> Cheers
> Iztok Gregori
> 
> 





* Re: [PVE-User] Cannot list disks from an external CEPH pool
From: Iztok Gregori @ 2022-06-01  9:51 UTC (permalink / raw)
  To: Aaron Lauterer, Proxmox VE user list

On 01/06/22 11:29, Aaron Lauterer wrote:
> Do you get additional errors if you run the following command? Assuming 
> that the storage is also called pool1.
> 
> pvesm list pool1

No additional errors:

root@pmx-14:~# pvesm list pool1
rbd error: rbd: listing images failed: (2) No such file or directory



> Do you have VMs with disk images on that storage? If so, do they start 
> normally?

Yes, we have a lot of VMs with disks on that storage and yes, they seem 
to start normally (the last start was yesterday, when we first noticed 
the GUI behaviour).

> 
> Can you show the configuration of that storage and the one of the 
> working pool? (/etc/pve/storage.cfg)

Sure (edited the IP addresses and pool names):

[cit /etc/pve/storage.cfg]
...
rbd: pool1
	content images
	monhost 172.16.1.1;1172.16.1.2;172.16.1.3
	pool pool1
	username admin

rbd: pool2
	content images
	monhost 172.16.1.1;172.16.1.2;172.16.1.3
	pool pool2
	username admin
...
[/cit]
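
(A hedged editorial note: for an external cluster, PVE looks up the keyring 
by storage ID under /etc/pve/priv/ceph/, so it is worth confirming that both 
files exist and hold a valid key — the paths below just follow that naming 
convention:)

ls -l /etc/pve/priv/ceph/pool1.keyring /etc/pve/priv/ceph/pool2.keyring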

Thanks!

Iztok

> 
> On 6/1/22 11:13, Iztok Gregori wrote:
>> [original message snipped]
> 





* Re: [PVE-User] Cannot list disks from an external CEPH pool
From: nada @ 2022-06-01 10:11 UTC (permalink / raw)
  To: Proxmox VE user list

Hello, just correct the IP address in pool1:
1172.16.1.2 should probably be 172.16.1.2.
You may also enforce access via "krbd 1",
and simplify your list commands with symlinks.
For example:

rbd: pool1
      content images
      krbd 1
      monhost 172.16.1.1,172.16.1.2,172.16.1.3
      pool pool1
      username admin

# la /etc/ceph/
total 20
drwxr-xr-x  2 root root   7 Mar  6 13:03 .
drwxr-xr-x 97 root root 193 May 19 03:10 ..
lrwxrwxrwx  1 root root  27 Aug  4  2021 rbd.conf -> 
/etc/pve/priv/ceph/rbd.conf
lrwxrwxrwx  1 root root  30 Aug  4  2021 rbd.keyring -> 
/etc/pve/priv/ceph/rbd.keyring
-rw-r--r--  1 root root  92 Aug 28  2019 rbdmap
lrwxrwxrwx  1 root root  31 Feb  2 12:37 rbd_ssd.conf -> 
/etc/pve/priv/ceph/rbd_ssd.conf
lrwxrwxrwx  1 root root  34 Feb  2 12:37 rbd_ssd.keyring -> 
/etc/pve/priv/ceph/rbd_ssd.keyring

# pvesm list rbd
Volid             Format  Type              Size VMID
rbd:vm-105-disk-0 raw     images     42949672960 105
rbd:vm-111-disk-0 raw     images     42949672960 111

# pvesm list rbd_ssd
Volid                 Format  Type             Size VMID
rbd_ssd:vm-102-disk-0 raw     images     6442450944 102
rbd_ssd:vm-103-disk-0 raw     images     4294967296 103
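
(A hedged editorial note on the "plain rbd ls" point: the bare CLI only works 
without -m/--keyring when it finds a configuration, e.g. via symlinks like the 
ones above; a hypothetical minimal /etc/ceph/ceph.conf using the thread's 
monitor addresses could look like this:)

[global]
      mon_host = 172.16.1.1,172.16.1.2,172.16.1.3
      keyring = /etc/pve/priv/ceph/pool1.keyring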

good luck
Nada


On 2022-06-01 11:51, Iztok Gregori wrote:
> On 01/06/22 11:29, Aaron Lauterer wrote:
>> Do you get additional errors if you run the following command? 
>> Assuming that the storage is also called pool1.
>> 
>> pvesm list pool1
> 
> No additional errors:
> 
> root@pmx-14:~# pvesm list pool1
> rbd error: rbd: listing images failed: (2) No such file or directory
> 
> 
> 
>> Do you have VMs with disk images on that storage? If so, do they start 
>> normally?
> 
> Yes, we have a lot of VMs with disks on that storage and yes, they seem
> to start normally (the last start was yesterday, when we first noticed
> the GUI behaviour).
> 
>> 
>> Can you show the configuration of that storage and the one of the 
>> working pool? (/etc/pve/storage.cfg)
> 
> Sure (edited the IP addresses and pool names):
> 
> [cit /etc/pve/storage.cfg]
> ...
> rbd: pool1
> 	content images
> 	monhost 172.16.1.1;1172.16.1.2;172.16.1.3
> 	pool pool1
> 	username admin
> 
> rbd: pool2
> 	content images
> 	monhost 172.16.1.1;172.16.1.2;172.16.1.3
> 	pool pool2
> 	username admin
> ...
> [/cit]
> 
> Thanks!
> 
> Iztok
> 
>> 
>> On 6/1/22 11:13, Iztok Gregori wrote:
>>> [original message and mailing list footer snipped]




* Re: [PVE-User] Cannot list disks from an external CEPH pool
From: Iztok Gregori @ 2022-06-01 10:30 UTC (permalink / raw)
  To: pve-user

On 01/06/22 12:11, nada wrote:
> Hello, just correct the IP address in pool1:
> 1172.16.1.2 should probably be 172.16.1.2.

The 1172.16.1.2 was just a typo I made when editing out the real 
addresses; the monitor IPs are the same for both pools.


> You may also enforce access via "krbd 1",

What improvement should I see from enabling krbd, with regard to my 
original question?

> and simplify your list commands with symlinks.

CEPH CLI administration is done on a different, non-Proxmox node. It 
was my understanding that Proxmox doesn't need a ceph.conf file to access 
an external CEPH cluster, so I never created any configuration files or 
symlinks. Am I missing something?

Cheers
Iztok



> [rest of quoted message and mailing list footers snipped]



