public inbox for pve-user@lists.proxmox.com
From: David der Nederlanden | ITTY via pve-user <pve-user@lists.proxmox.com>
To: Proxmox VE user list <pve-user@lists.proxmox.com>
Cc: David der Nederlanden | ITTY <david@itty.nl>
Subject: Re: [PVE-User] "nearfull" status in PVE Dashboard not consistent
Date: Sat, 7 Sep 2024 19:27:38 +0000	[thread overview]
Message-ID: <mailman.113.1725738192.414.pve-user@lists.proxmox.com> (raw)
In-Reply-To: <1bf05195-83a5-490c-a362-6af62b6bbdf3@dkfz-heidelberg.de>

[-- Attachment #1: Type: message/rfc822, Size: 15553 bytes --]

From: David der Nederlanden | ITTY <david@itty.nl>
To: Proxmox VE user list <pve-user@lists.proxmox.com>
Subject: RE: [PVE-User] "nearfull" status in PVE Dashboard not consistent
Date: Sat, 7 Sep 2024 19:27:38 +0000
Message-ID: <AM8P193MB11393CE9AC8873699D5E5950B89F2@AM8P193MB1139.EURP193.PROD.OUTLOOK.COM>

Hi Frank,

Can you share your OSD layout too?

My first thought is that you added the SSDs as OSDs, which caused those OSDs to fill up, with nearfull pools as a result.

You can get some insights with:
`ceph osd tree`

And if needed you can reweight the OSDs, but that would require a good OSD layout:
`ceph osd reweight-by-utilization`
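For intuition: reweight-by-utilization lowers the CRUSH reweight of OSDs whose utilization exceeds a threshold (120% of the cluster mean by default), so they receive less new data. A simplified Python sketch of that idea (not the actual Ceph code; the real command also caps the per-OSD change and the number of OSDs touched per run):

```python
# Simplified sketch of the idea behind `ceph osd reweight-by-utilization`.
def reweight_by_utilization(utilization, weights, threshold=1.20):
    """Lower the reweight of OSDs whose utilization exceeds
    threshold * mean utilization, so they receive less new data."""
    mean = sum(utilization.values()) / len(utilization)
    new_weights = dict(weights)
    for osd, util in utilization.items():
        if util > threshold * mean:
            # scale the weight down toward the cluster average
            new_weights[osd] = round(weights[osd] * mean / util, 4)
    return new_weights

# hypothetical cluster: one OSD much fuller than the rest
util = {"osd.0": 0.60, "osd.1": 0.62, "osd.2": 0.58, "osd.3": 0.95}
weights = {osd: 1.0 for osd in util}
print(reweight_by_utilization(util, weights))
```

With numbers like these, only osd.3 gets its weight reduced; the others stay at 1.0.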

Sources:
https://forum.proxmox.com/threads/ceph-pool-full.47810/ 
https://docs.ceph.com/en/reef/rados/operations/health-checks/#pool-near-full
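For what it's worth, the mismatch you describe isn't necessarily a contradiction: as the health-check docs above describe, the nearfull flag is driven by OSD fullness, so a single nearly full OSD can flag every pool stored on it, including nearly empty ones. A toy illustration (not Ceph code; 0.85 is the default mon_osd_nearfull_ratio, and the model assumes all pools share all OSDs, as in a small converged cluster):

```python
# Toy illustration (not Ceph code): a pool is flagged nearfull when an
# OSD backing it crosses the nearfull ratio, regardless of how much
# data the pool itself holds.
NEARFULL_RATIO = 0.85  # Ceph default for mon_osd_nearfull_ratio

def flag_nearfull_pools(osd_utilization, pools):
    """Flag every pool as nearfull if any backing OSD is nearfull."""
    nearfull = any(u >= NEARFULL_RATIO for u in osd_utilization.values())
    return {pool: nearfull for pool in pools}

# hypothetical: one OSD filled up after the DB/WAL change
osds = {"osd.0": 0.72, "osd.1": 0.74, "osd.2": 0.91}
print(flag_nearfull_pools(osds, ["pve-pool1", "cephfs_data", ".mgr"]))
```

That would also fit the 73% overall usage: the average can be well under 85% while one OSD is over it.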

Kind regards,
David der Nederlanden

-----Original Message-----
From: pve-user <pve-user-bounces@lists.proxmox.com> On Behalf Of Frank Thommen
Sent: Saturday, September 7, 2024 21:15
To: pve-user@lists.proxmox.com
Subject: Re: [PVE-User] "nearfull" status in PVE Dashboard not consistent

Mailman is making fun of me: first it rejected the mail because the attachments were too big; now that I have reduced their size, it strips them completely :-(

Sorry, I digress... Please find the two images here:

   * https://pasteboard.co/CUPNjkTmyYV8.jpg
(Ceph_dashboard_nearfull_warning.jpg)
   * https://pasteboard.co/34GBggOiUNII.jpg (Ceph_pool_overview.jpg)

HTH, Frank


On 07.09.24 21:07, Frank Thommen wrote:
> It seems the attachments got lost on their way. Here they are (again).
> Frank
> 
> On 07.09.24 21:04, Frank Thommen wrote:
>> Dear all,
>>
>> I am currently in the process of adding SSDs for DB/WAL to our 
>> "converged" 3-node Ceph cluster. After having done so on two of the 
>> three nodes, the PVE Ceph dashboard now reports "5 pool(s) nearfull":
>>
>>       HEALTH_WARN: 5 pool(s) nearfull
>>       pool 'pve-pool1' is nearfull
>>       pool 'cephfs_data' is nearfull
>>       pool 'cephfs_metadata' is nearfull
>>       pool '.mgr' is nearfull
>>       pool '.rgw.root' is nearfull
>>
>> (see also attached Ceph_dashboard_nearfull_warning.jpg). The storage 
>> in general is 73% full ("40.87 TiB of 55.67 TiB").
>>
>> However, when looking at the pool overview in PVE, the pools don't 
>> seem to be very full at all. Some of them are even reported as being 
>> completely empty (see the attached Ceph_pool_overview.jpg).
>>
>> Please note: All Ceph manipulations have been done from the PVE UI, 
>> as we are not very experienced with the Ceph CLI.
>>
>> We are running PVE 8.2.3 and Ceph runs on version 17.2.7.
>>
>> Is this inconsistency normal or a problem? And if the latter, then
>> (how) can it be fixed?
>>
>> Cheers, Frank
>> _______________________________________________
>> pve-user mailing list
>> pve-user@lists.proxmox.com
>> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>


Thread overview: 17+ messages
2024-09-07 19:04 Frank Thommen
2024-09-07 19:07 ` Frank Thommen
2024-09-07 19:14   ` Frank Thommen
2024-09-07 19:27     ` David der Nederlanden | ITTY via pve-user [this message]
2024-09-07 19:49       ` Peter Eisch via pve-user
2024-09-08 12:17         ` [PVE-User] [Extern] - " Frank Thommen
2024-09-09 10:36           ` Eneko Lacunza via pve-user
2024-09-10 12:02             ` Frank Thommen
2024-09-10 18:31               ` David der Nederlanden | ITTY via pve-user
2024-09-11 11:00                 ` Frank Thommen
2024-09-11 11:52                   ` Daniel Oliver
2024-09-11 14:24                     ` Frank Thommen
2024-09-11 14:22                   ` Frank Thommen
2024-09-11 10:51               ` Frank Thommen
2024-09-08 12:17       ` [PVE-User] " Frank Thommen
2024-09-09  0:46     ` Bryan Fields
2024-09-10 11:48       ` [PVE-User] [Extern] - " Frank Thommen

Service provided by Proxmox Server Solutions GmbH | Privacy | Legal