public inbox for pve-user@lists.proxmox.com
* [PVE-User] confirmation on osd replacement
From: mj @ 2020-11-25  8:18 UTC (permalink / raw)
  To: pve-user

Hi,

I would just like to verify/confirm something here, as we are going to 
replace our spinning OSDs with SSDs.

We have 8 OSDs per server, and two empty front drive slots available.

The Proxmox boot disk is internal, and currently known as /dev/sdk.

If I insert two new SSDs in the (two empty) front drive bays, I 
expect the internal boot disk to shift from /dev/sdk to /dev/sdm.

The questions:
- Should we expect boot problems or other side effects of doing that?
(Of course I will test on the first server; I'd just like to know what 
to expect. The checks I plan to run are sketched below.)
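
This is roughly what I plan to check before inserting the new disks 
(untested; the disk name is just from our current layout, and I'm 
assuming our default LVM-based install):

   # any mounts that reference unstable /dev/sdX names?
   grep /dev/sd /etc/fstab
   # stable /dev/disk/by-id/ names we could switch to, if needed:
   ls -l /dev/disk/by-id/ | grep sdk
   # root on LVM is found by PV signature, not device name:
   pvs && lvs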

Then I am going to first add two new bluestore SSDs per server, 
making a temporary total of 10 OSDs per server.

Then I want to replace the 8 remaining filestore spinning OSDs with 
6 bluestore SSDs, making again a total of 8 OSDs per server.

The idea is: first add two SSDs to increase IO capacity for the rest of 
the procedure, while at the same time reducing stress on our filestore 
journal SSD (wear level = 75%).
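
Per new SSD, something like this is what I have in mind (commands from 
memory, untested; the OSD id, device name and weights are placeholders):

   # to have new OSDs start empty, set this in ceph.conf beforehand:
   #   osd crush initial weight = 0
   # create the bluestore OSD on the PVE node:
   pveceph osd create /dev/sdX
   # then raise its weight in steps, waiting for HEALTH_OK in between:
   ceph osd crush reweight osd.24 1.0
   ceph osd crush reweight osd.24 2.0
   ceph osd crush reweight osd.24 3.49   # ~ size in TiB for a 3.84TB SSD
   ceph -s                               # watch recovery between steps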

Any comments?

MJ




* Re: [PVE-User] confirmation on osd replacement
From: mj @ 2020-11-26  7:54 UTC (permalink / raw)
  To: Alejandro Bonilla, Proxmox VE user list

Hi,

Yes, perhaps I should have given more details :-)

On 11/25/20 3:03 PM, Alejandro Bonilla wrote:

> Have a look at /etc/fstab for any disk path mounts - since I think Proxmox uses LVM mostly, you shouldn’t see a problem.
I will, thanks!

> What is the pool replication configuration or ec-profile? How many nodes in the cluster?
We're running 3/2 replication, no EC. It's a (small) three-node cluster, 8 
filestore OSDs per node, with an SSD journal (wear level 75%).
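(That is, size=3 / min_size=2 on the pools, which "ceph osd pool ls 
detail" will confirm.)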

We will be using Samsung PM883 SSDs (as bluestore OSDs) of roughly the 
same size (4TB spinners to 3.84TB SSDs).

> Are you planning to remove all disks per server at once or disk by disk?
I was planning to:
- first add two SSDs to each server, and gradually increase their weight
- then, disk-by-disk, replace the 8 (old) spinners with the 6 remaining 
SSDs (rough sketch below)
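
The per-disk step, roughly (untested; osd.0 and /dev/sdX are placeholders):

   ceph osd out osd.0              # drain it; wait for rebalance/HEALTH_OK
   ceph osd safe-to-destroy osd.0  # check nothing still depends on it
   systemctl stop ceph-osd@0
   ceph osd purge 0 --yes-i-really-mean-it
   # physically swap the spinner for the SSD, then:
   pveceph osd create /dev/sdX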

> Will all new drives equal or increase the disk capacity of the cluster?
Approximately equal, yes.
The aim is not to increase space.

MJ




* Re: [PVE-User] confirmation on osd replacement
From: mj @ 2020-11-26 20:31 UTC (permalink / raw)
  To: Alejandro Bonilla; +Cc: Proxmox VE user list

Hi Alejandro,

Thanks for your feedback, much appreciated!

Enjoy your weekend!

MJ

On 11/26/20 4:39 PM, Alejandro Bonilla wrote:
> 
> 
>> On Nov 26, 2020, at 2:54 AM, mj <lists@merit.unu.edu> wrote:
>>
>> Hi,
>>
>> Yes, perhaps I should have given more details :-)
>>
>> On 11/25/20 3:03 PM, Alejandro Bonilla wrote:
>>
>>> Have a look at /etc/fstab for any disk path mounts - since I think Proxmox uses LVM mostly, you shouldn’t see a problem.
>> I will, thanks!
>>
>>> What is the pool replication configuration or ec-profile? How many nodes in the cluster?
>> We're running 3/2 replication, no EC. It's a (small) three-node cluster, 8 filestore OSDs per node, with an SSD journal (wear level 75%).
> 
> If it’s 3 replicas, min 2, then you should be able to clear all drives from a system at once and replace them all, to minimize the number of times the cluster will end up rebalancing.
> 
>>
>> We will be using Samsung PM883 SSDs (as bluestore OSDs) of roughly the same size
>> (4TB spinners to 3.84TB SSDs).
>>
>>> Are you planning to remove all disks per server at once or disk by disk?
>> I was planning to:
>> - first add two SSDs to each server, and gradually increase their weight
> 
> Two per server to ensure the disk replacement will work as expected is a good idea - I don’t think you’ll gain anything with a gradual re-weight.
> 
>> - then, disk-by-disk, replace the 8 (old) spinners with the 6 remaining SSDs
> 
> If you have two other replicas, then a full system disk replacement should be no trouble - especially after two other SSDs were added and most data was shuffled around.
> 
>>
>>> Will all new drives equal or increase the disk capacity of the cluster?
>> Approximately equal, yes.
>> The aim is not to increase space.
> 
> There are other reasons why I ask, specifically based on PG count and balancing of the cluster.
> 
>>
>> MJ
>>
> 
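
PS, for anyone finding this thread later: the whole-node swap Alejandro 
describes would, as far as I understand it, look roughly like this 
(untested outline; OSD ids and device names are placeholders):

   ceph osd set noout         # don't mark stopped OSDs out
   ceph osd set norebalance   # hold off data movement during the swap
   for i in 0 1 2 3 4 5 6 7; do
       systemctl stop ceph-osd@$i
       ceph osd purge $i --yes-i-really-mean-it
   done
   # physically swap the 8 spinners for the 6 SSDs, then, per new SSD:
   pveceph osd create /dev/sdX
   # finally let the cluster rebalance once:
   ceph osd unset norebalance
   ceph osd unset noout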



