From: Iztok Gregori <iztok.gregori@elettra.eu>
To: Aaron Lauterer <a.lauterer@proxmox.com>,
	Proxmox VE user list <pve-user@lists.proxmox.com>
Subject: Re: [PVE-User] Cannot list disks from an external CEPH pool
Date: Wed, 1 Jun 2022 11:51:37 +0200	[thread overview]
Message-ID: <b1f115cd-daa3-e983-34c7-4348a2d93d98@elettra.eu> (raw)
In-Reply-To: <d41822c4-75dd-c6ae-5adf-eb9df2de7f34@proxmox.com>

On 01/06/22 11:29, Aaron Lauterer wrote:
> Do you get additional errors if you run the following command? Assuming 
> that the storage is also called pool1.
> 
> pvesm list pool1

No additional errors:

root@pmx-14:~# pvesm list pool1
rbd error: rbd: listing images failed: (2) No such file or directory
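
(If more verbosity helps: the manual listing quoted below works with the
same keyring, so, just as a sketch and assuming I am reading the storage
plugin right, the closest manual equivalent of what pvesm does for
pool1, with some extra librbd logging turned on, should be roughly:

rbd -m 172.16.1.1 \
    --id admin \
    --keyring /etc/pve/priv/ceph/pool1.keyring \
    --debug-rbd 20 \
    ls pool1

I can post that output too if it is useful.)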



> Do you have VMs with disk images on that storage? If so, do they start 
> normally?

Yes, we have a lot of VMs with disks on that storage and yes, they seem
to start normally (the last start was yesterday, when we first noticed
the GUI behaviour).

> 
> Can you show the configuration of that storage and the one of the 
> working pool? (/etc/pve/storage.cfg)

Sure (edited the IP addresses and pool names):

[cit /etc/pve/storage.cfg]
...
rbd: pool1
	content images
	monhost 172.16.1.1;1172.16.1.2;172.16.1.3
	pool pool1
	username admin

rbd: pool2
	content images
	monhost 172.16.1.1;172.16.1.2;172.16.1.3
	pool pool2
	username admin
...
[/cit]
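
(Side note, in case it matters: PVE looks for the keyring of an external
RBD storage under /etc/pve/priv/ceph/<storage id>.keyring, which is the
pool1 file I used for the manual "rbd ls" quoted below, so a quick
sanity check on both entries would be something like:

ls -l /etc/pve/priv/ceph/pool1.keyring /etc/pve/priv/ceph/pool2.keyring
diff /etc/pve/priv/ceph/pool1.keyring /etc/pve/priv/ceph/pool2.keyring

I can run and paste that if useful.)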

Thanks!

Iztok

> 
> On 6/1/22 11:13, Iztok Gregori wrote:
>> Hi to all!
>>
>> I have a Proxmox cluster (7.1) connected to an external CEPH cluster 
>> (octopus).  From the GUI I cannot list the content (disks) of one pool 
>> (but I'm able to list all the other pools):
>>
>> rbd error: rbd: listing images failed: (2) No such file or directory 
>> (500)
>>
>> The pveproxy/access.log shows the error for "pool1":
>>
>> "GET /api2/json/nodes/pmx-14/storage/pool1/content?content=images 
>> HTTP/1.1" 500 13
>>
>> but when I try another pool ("pool2") it works:
>>
>> "GET /api2/json/nodes/pmx-14/storage/pool2/content?content=images 
>> HTTP/1.1" 200 841
>>
>> From the command line "rbd ls pool1" works fine (because I don't have 
>> a ceph.conf, I ran it as "rbd -m 172.16.1.1 --keyring 
>> /etc/pve/priv/ceph/pool1.keyring ls pool1") and I can see the pool contents.
>>
>> The cluster is running fine and the VMs access the disks on that pool 
>> without any problem.
>>
>> What can it be?
>>
>> The cluster is a mix of freshly installed nodes and upgraded ones; all 
>> 17 nodes (except one, which is still on 6.4 but has no running VMs) are 
>> running:
>>
>> root@pmx-14:~# pveversion -v
>> proxmox-ve: 7.1-1 (running kernel: 5.13.19-6-pve)
>> pve-manager: 7.1-12 (running version: 7.1-12/b3c09de3)
>> pve-kernel-helper: 7.1-14
>> pve-kernel-5.13: 7.1-9
>> pve-kernel-5.13.19-6-pve: 5.13.19-15
>> pve-kernel-5.13.19-2-pve: 5.13.19-4
>> ceph-fuse: 15.2.15-pve1
>> corosync: 3.1.5-pve2
>> criu: 3.15-1+pve-1
>> glusterfs-client: 9.2-1
>> ifupdown2: 3.1.0-1+pmx3
>> ksm-control-daemon: 1.4-1
>> libjs-extjs: 7.0.0-1
>> libknet1: 1.22-pve2
>> libproxmox-acme-perl: 1.4.1
>> libproxmox-backup-qemu0: 1.2.0-1
>> libpve-access-control: 7.1-7
>> libpve-apiclient-perl: 3.2-1
>> libpve-common-perl: 7.1-5
>> libpve-guest-common-perl: 4.1-1
>> libpve-http-server-perl: 4.1-1
>> libpve-storage-perl: 7.1-1
>> libspice-server1: 0.14.3-2.1
>> lvm2: 2.03.11-2.1
>> lxc-pve: 4.0.11-1
>> lxcfs: 4.0.11-pve1
>> novnc-pve: 1.3.0-2
>> proxmox-backup-client: 2.1.5-1
>> proxmox-backup-file-restore: 2.1.5-1
>> proxmox-mini-journalreader: 1.3-1
>> proxmox-widget-toolkit: 3.4-7
>> pve-cluster: 7.1-3
>> pve-container: 4.1-4
>> pve-docs: 7.1-2
>> pve-edk2-firmware: 3.20210831-2
>> pve-firewall: 4.2-5
>> pve-firmware: 3.3-6
>> pve-ha-manager: 3.3-3
>> pve-i18n: 2.6-2
>> pve-qemu-kvm: 6.1.1-2
>> pve-xtermjs: 4.16.0-1
>> qemu-server: 7.1-4
>> smartmontools: 7.2-1
>> spiceterm: 3.2-2
>> swtpm: 0.7.1~bpo11+1
>> vncterm: 1.7-1
>> zfsutils-linux: 2.1.4-pve1
>>
>> I can provide other information if it's needed.
>>
>> Cheers
>> Iztok Gregori
>>
>>
> 





Thread overview: 5+ messages
2022-06-01  9:13 Iztok Gregori
2022-06-01  9:29 ` Aaron Lauterer
2022-06-01  9:51   ` Iztok Gregori [this message]
2022-06-01 10:11     ` nada
2022-06-01 10:30       ` Iztok Gregori
