From: Iztok Gregori <iztok.gregori@elettra.eu>
To: Proxmox VE user list <pve-user@lists.proxmox.com>
Subject: [PVE-User] Cannot list disks from an external CEPH pool
Date: Wed, 1 Jun 2022 11:13:11 +0200
Message-ID: <00ab49ec-d822-2522-c861-ed2409681f27@elettra.eu>
Hi to all!
I have a Proxmox cluster (7.1) connected to an external CEPH cluster
(octopus). From the GUI I cannot list the content (disks) of one pool
(but I'm able to list all the other pools):
rbd error: rbd: listing images failed: (2) No such file or directory (500)
The pveproxy/access.log shows the error for "pool1":
"GET /api2/json/nodes/pmx-14/storage/pool1/content?content=images
HTTP/1.1" 500 13
but when I try another pool ("pool2") it works:
"GET /api2/json/nodes/pmx-14/storage/pool2/content?content=images
HTTP/1.1" 200 841
From the command line "rbd ls pool1" works fine and I can see the pool
contents (since I don't have a ceph.conf on the node, I actually ran it as
"rbd -m 172.16.1.1 --keyring /etc/pve/priv/ceph/pool1.keyring ls pool1").
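If I understand correctly, the GUI listing does a long listing of the images rather than a plain "ls", so a closer reproduction from the CLI might be something like this (a sketch, assuming the storage plugin asks for the long/JSON output):

rbd -m 172.16.1.1 --keyring /etc/pve/priv/ceph/pool1.keyring ls -l --format json pool1

Plain "rbd ls" only reads the pool's directory object, while "-l" also opens every image header, so the two can behave differently if a single image is broken.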
The cluster itself is running fine and the VMs access the disks on that
pool without any problem.
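For completeness, the pool is defined in /etc/pve/storage.cfg as an external RBD storage, roughly like this (the entry below is illustrative, not a verbatim copy of my config):

rbd: pool1
        content images
        krbd 0
        monhost 172.16.1.1
        pool pool1
        username admin

with the keyring in /etc/pve/priv/ceph/pool1.keyring, as used in the rbd command above.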
What could it be?
The cluster is a mix of freshly installed nodes and upgraded ones. All 17
nodes (except one, which is still on 6.4 but has no running VMs) are running:
root@pmx-14:~# pveversion -v
proxmox-ve: 7.1-1 (running kernel: 5.13.19-6-pve)
pve-manager: 7.1-12 (running version: 7.1-12/b3c09de3)
pve-kernel-helper: 7.1-14
pve-kernel-5.13: 7.1-9
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-2-pve: 5.13.19-4
ceph-fuse: 15.2.15-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.1
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-7
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.1-5
libpve-guest-common-perl: 4.1-1
libpve-http-server-perl: 4.1-1
libpve-storage-perl: 7.1-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.11-1
lxcfs: 4.0.11-pve1
novnc-pve: 1.3.0-2
proxmox-backup-client: 2.1.5-1
proxmox-backup-file-restore: 2.1.5-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-7
pve-cluster: 7.1-3
pve-container: 4.1-4
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-6
pve-ha-manager: 3.3-3
pve-i18n: 2.6-2
pve-qemu-kvm: 6.1.1-2
pve-xtermjs: 4.16.0-1
qemu-server: 7.1-4
smartmontools: 7.2-1
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.4-pve1
I can provide other information if it's needed.
Cheers
Iztok Gregori
--
Iztok Gregori
ICT Systems and Services
Elettra - Sincrotrone Trieste S.C.p.A.
Telephone: +39 040 3758948
http://www.elettra.eu