* [PVE-User] HEALTH_ERR , showing x 1 Full osd(s) , guidance requested
@ 2023-05-04 6:39 Joseph John
From: Joseph John @ 2023-05-04 6:39 UTC (permalink / raw)
To: Proxmox VE user list
Dear All,
Good morning
We have a Proxmox setup with 4 nodes. Node 1 and Node 2 are running 6.3-3,
and Node 3 and Node 4 are running 6.4-15.
Today we noticed that we were not able to SSH into the virtual instances,
nor to log in using the console-based option.
When I checked the summary, I could see a "HEALTH_ERR" status with the
message "1 full osd(s)".
Thanks
Joseph John
00971-50-7451809
* Re: [PVE-User] HEALTH_ERR , showing x 1 Full osd(s) , guidance requested
From: Aaron Lauterer @ 2023-05-04 7:29 UTC (permalink / raw)
To: Proxmox VE user list, Joseph John
As already mentioned in another reply, you will have to make space by either
reweighting the OSD or adding more OSDs.
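A minimal sketch of how that could look (the OSD id "12" and the target
weight are purely illustrative):

    # See which OSD is full and how utilization is spread across the cluster:
    ceph health detail
    ceph osd df tree
    # Temporarily lower the full OSD's reweight so data backfills to its peers:
    ceph osd reweight 12 0.90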
Another option, though quite a bit more radical, would be to reduce the size of
the pool.
Right now, you hopefully have a size/min_size of 3/2.
ceph osd pool get {pool} size
ceph osd pool get {pool} min_size
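On a healthy default setup those should return something like:

    size: 3
    min_size: 2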
By reducing the size to 2, you will gain about 1/3 of space, which can help
you get out of the situation. But the pool will block IO as soon as a single
OSD goes down.
So this should only be done as an emergency measure to get operational again,
and you then need to address the actual issue ASAP (get more space) so that
you can increase the size back to 3.
ceph osd pool set {pool} size 2
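And once there is enough free space again, the reverse of the above:

    ceph osd pool set {pool} size 3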
And please plan an upgrade of the cluster soon! Proxmox VE 6 has been EOL
since last summer, and memories of that version's intricacies, and of the
Ceph versions that shipped with it, are fading ;)
https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0
Cheers,
Aaron
On 5/4/23 08:39, Joseph John wrote:
> Dear All,
> Good morning
> We have a Proxmox setup with 4 nodes. Node 1 and Node 2 are running 6.3-3,
> and Node 3 and Node 4 are running 6.4-15.
>
> Today we noticed that we were not able to SSH into the virtual instances,
> nor to log in using the console-based option.
> When I checked the summary, I could see a "HEALTH_ERR" status with the
> message "1 full osd(s)".
>
> Thanks
> Joseph John
> 00971-50-7451809
* Re: [PVE-User] HEALTH_ERR , showing x 1 Full osd(s) , guidance requested
From: Joseph John @ 2023-05-04 7:54 UTC (permalink / raw)
To: Proxmox VE user list
Dear All,
Thanks a lot. An update: I managed to free some space by removing some
virtual instances which I was not using at all. We then restarted the 4th
node, which was showing HEALTH_ERR, and now it is working.
Thanks a lot to everyone who gave advice and support.
Thanks
On Thu, May 4, 2023 at 11:21 AM Eneko Lacunza via pve-user <
pve-user@lists.proxmox.com> wrote:
> Hi Joseph,
>
> You must resolve that 1 full osd(s) issue. If the other OSDs aren't as full,
> you can try reweighting the full OSD so that some data is moved off it.
>
> Another option would be to add additional OSDs.
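> If several OSDs are getting full, Ceph can also reweight automatically based
> on utilization; a rough sketch (the 110% threshold is just an example):
>
>     # Dry run first, then apply: adjusts OSDs above 110% of average utilization
>     ceph osd test-reweight-by-utilization 110
>     ceph osd reweight-by-utilization 110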
>
> Cheers
>
> El 4/5/23 a las 8:39, Joseph John escribió:
> > Dear All,
> > Good morning
> > We have a proxmox setup, with 4 nodes, with 6.3-3
> > We have Node 1 and Node 2 running with 6.3-3
> > and Node 3 and Node 4 running with 6.4-15
> >
> > today we noticed that , we were not able to ssh to the virtual instance ,
> > or not able to login using the console based option
> > when I checked the summary, we could see that in "HEALTH_ERR" we are
> > getting message that 1 full osd(s)
> >
> >
> >
> > Thanks
> > Joseph John
> > 00971-50-7451809
>
> Eneko Lacunza
> Zuzendari teknikoa | Director técnico
> Binovo IT Human Project
>
> Tel. +34 943 569 206 | https://www.binovo.es
> Astigarragako Bidea, 2 - 2º izda. Oficina 10-11, 20180 Oiartzun
>
> https://www.youtube.com/user/CANALBINOVO
> https://www.linkedin.com/company/37269706/