From: Aaron Lauterer <a.lauterer@proxmox.com>
To: Proxmox VE user list <pve-user@lists.proxmox.com>,
Joseph John <jjk.saji@gmail.com>
Subject: Re: [PVE-User] HEALTH_ERR , showing x 1 Full osd(s) , guidance requested
Date: Thu, 4 May 2023 09:29:51 +0200 [thread overview]
Message-ID: <947b91ff-efa5-1ce7-7f63-c2f211eced1e@proxmox.com> (raw)
In-Reply-To: <CAKeuxjA2MbdxsdDEN+=zFrvkVxAcC2c3fo1z2Grey0VUemkTzw@mail.gmail.com>
As already mentioned in another reply, you will have to make space by either
reweighting the OSD or adding more OSDs.
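For example (a rough sketch; the OSD id and target weight are placeholders, pick them based on what `ceph osd df` shows for your cluster):
ceph osd df
ceph osd reweight {osd-id} {weight}
where {weight} is a value between 0 and 1 (e.g. 0.85) to temporarily move data off the full OSD.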
Another option, though quite a bit more radical, would be to reduce the size of
the pool.
Right now, you hopefully have a size/min_size of 3/2.
ceph osd pool get {pool} size
ceph osd pool get {pool} min_size
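Typical output (the values shown here are what you would hope to see):
size: 3
min_size: 2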
By reducing the size to 2 you free roughly a third of the used space, which can help you
get out of the situation. But with a size/min_size of 2/2 the pool will block IO as soon
as a single OSD is down.
So that should only be done as an emergency measure to get operational again. You then
need to address the actual issue (add more capacity) ASAP so that you can increase the
size back to 3.
ceph osd pool set {pool} size 2
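And once there is enough free space again, restore the original redundancy:
ceph osd pool set {pool} size 3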
And please plan an upgrade of the cluster soon! Proxmox VE 6 has been EOL since last
summer, and familiarity with the intricacies of that version and the Ceph releases that
ship with it is fading ;)
https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0
Cheers,
Aaron
On 5/4/23 08:39, Joseph John wrote:
> Dear All,
> Good morning
> We have a Proxmox setup with 4 nodes: Node 1 and Node 2 are running 6.3-3,
> and Node 3 and Node 4 are running 6.4-15.
>
> Today we noticed that we were not able to ssh to the virtual instance,
> nor to log in using the console-based option.
> When I checked the summary, we could see that "HEALTH_ERR" reports
> "1 full osd(s)".
>
> Thanks
> Joseph John
> 00971-50-7451809
Thread overview: 3+ messages
2023-05-04  6:39 Joseph John
2023-05-04  7:29 ` Aaron Lauterer [this message]
2023-05-04  7:54   ` Joseph John