From: mj <lists@merit.unu.edu>
To: Alejandro Bonilla <abonilla@suse.com>
Cc: Proxmox VE user list <pve-user@lists.proxmox.com>
Subject: Re: [PVE-User] confirmation on osd replacement
Date: Thu, 26 Nov 2020 21:31:40 +0100
Message-ID: <e408aa8b-f7d5-57d9-f29c-de7684c24587@merit.unu.edu>
In-Reply-To: <7B860047-76E2-44B5-8F66-D04FFA216C07@suse.com>
Hi Alejandro,
Thanks for your feedback, much appreciated!
Enjoy your weekend!
MJ
On 11/26/20 4:39 PM, Alejandro Bonilla wrote:
>
>
>> On Nov 26, 2020, at 2:54 AM, mj <lists@merit.unu.edu> wrote:
>>
>> Hi,
>>
>> Yes, perhaps I should have given more details :-)
>>
>> On 11/25/20 3:03 PM, Alejandro Bonilla wrote:
>>
>>> Have a look at /etc/fstab for any disk-path mounts - since I think Proxmox mostly uses LVM, you shouldn’t see a problem.
>> I will, thanks!
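(For the archive, a minimal sketch of that check, assuming the usual /dev/sd* device naming; LVM- and UUID-based fstab entries are not affected by swapping a disk:)

  # fstab entries that reference a raw /dev/sdX path would break on replacement
  grep -E '^[[:space:]]*/dev/sd' /etc/fstab

  # cross-check anything currently mounted straight from a device path
  mount | grep '^/dev/sd'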
>>
>>> What is the pool replication configuration or ec-profile? How many nodes in the cluster?
>> We're on 3/2 replication, no EC. It's a small three-node cluster with 8 filestore OSDs per node and an SSD journal (wear level 75%).
>
> If it’s 3 replicas, min 2, then you should be able to clear all drives from a system at once and replace them all, to minimize the number of times the cluster ends up rebalancing.
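(A hedged sketch of the checks one might run before clearing a node; "rbd" is only a placeholder pool name here:)

  # confirm the pool really is size 3 / min_size 2
  ceph osd pool get rbd size
  ceph osd pool get rbd min_size

  # and make sure the cluster is HEALTH_OK before taking OSDs out
  ceph -s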
>
>>
>> We will be using Samsung PM833 bluestore OSDs of roughly the same size
>> (4TB spinners to 3.83TB PM833 SSDs)
>>
>>> Are you planning to remove all disks per server at once or disk by disk?
>> I was planning to:
>> - first add two SSDs to each server, and gradually increase their weight
>
> Adding two per server to make sure the disk replacement works as expected is a good idea - I don’t think you’ll gain anything from a gradual re-weight.
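(Roughly, and only as a sketch - pveceph is the Proxmox wrapper, /dev/sdX and osd.24 are placeholders - adding an SSD at full weight versus ramping it up would look like:)

  # create the new bluestore OSD on a fresh SSD, at its full CRUSH weight
  pveceph osd create /dev/sdX

  # the gradual alternative: start low and raise the weight in steps
  ceph osd crush reweight osd.24 0.5
  # ...once rebalancing settles, go to the final weight (~3.49 for a 3.84TB disk)
  ceph osd crush reweight osd.24 3.49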
>
>> - then, disk-by-disk, replace the 8 (old) spinners with the 6 remaining SSDs
>
> If you have two other replicas, then a full system disk replacement should be no trouble - especially after the two other SSDs were added and most data has been shuffled around.
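(The usual out/destroy/recreate cycle per OSD, again only as a sketch - OSD id 12 and the device name are placeholders, and pveceph syntax may differ slightly between PVE versions:)

  # take the old filestore OSD out and wait for recovery to finish
  ceph osd out osd.12
  ceph -s            # repeat until HEALTH_OK / no degraded PGs

  # then stop and remove it
  systemctl stop ceph-osd@12
  pveceph osd destroy 12

  # and create the replacement on the new SSD
  pveceph osd create /dev/sdX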
>
>>
>>> Will all new drives equal or increase the disk capacity of the cluster?
>> Approximately equal, yes.
>> The aim is not to increase space.
>
> There are other reasons why I ask, specifically based on PG count and balancing of the cluster.
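(A quick way to sanity-check PG counts and per-OSD balance, purely as a sketch:)

  # per-OSD utilisation and PG counts, grouped by host
  ceph osd df tree

  # PG and replication settings per pool
  ceph osd pool ls detail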
>
>>
>> MJ
>>
>
Thread overview: 3+ messages
2020-11-25 8:18 mj
[not found] ` <E9407153-C09E-4098-BC0B-F0605CC43E26@suse.com>
2020-11-26 7:54 ` mj
[not found] ` <7B860047-76E2-44B5-8F66-D04FFA216C07@suse.com>
2020-11-26 20:31 ` mj [this message]