From: Aaron Lauterer <a.lauterer@proxmox.com>
To: Fiona Ebner <f.ebner@proxmox.com>,
Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH manager] fix #4631: ceph: osd: create: add osds-per-device
Date: Mon, 21 Aug 2023 12:51:47 +0200 [thread overview]
Message-ID: <cad5f917-49df-4855-bc55-420f694f9186@proxmox.com> (raw)
In-Reply-To: <5d60e2f0-7d45-75a5-8fd9-506f950c5d2f@proxmox.com>
Responses inline.
On 8/21/23 10:20, Fiona Ebner wrote:
> Am 18.04.23 um 14:26 schrieb Aaron Lauterer:
>> Allows automatically creating multiple OSDs per physical device. The
>> main use case is fast NVMe drives that would be bottlenecked by a
>> single OSD service.
>>
>> By using the 'ceph-volume lvm batch' command instead of 'ceph-volume
>> lvm create' for multiple OSDs per device, we don't have to handle
>> splitting the drive ourselves.
>>
>> But this means that the parameters to specify a DB or WAL device won't
>> work, as the 'batch' command doesn't use them. Dedicated DB and WAL
>> devices don't make much sense anyway if we place the OSDs on fast NVMe
>> drives.
>>
>> Some other changes to how the command is built were needed as well, as
>> the 'batch' command needs the path to the disk as a positional argument,
>> not as '--data /dev/sdX'.
>> We drop the '--cluster-fsid' parameter because the 'batch' command
>> doesn't accept it. The 'create' command will fall back to reading it
>> from the ceph.conf file.
>>
>> Removal of OSDs works as expected without any code changes. As long as
>> there are other OSDs on a disk, the VG & PV won't be removed, even if
>> 'cleanup' is enabled.
>>
>> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
>> ---
>
> I noticed a warning while testing
>
> --> DEPRECATION NOTICE
> --> You are using the legacy automatic disk sorting behavior
> --> The Pacific release will change the default to --no-auto
> --> passed data devices: 1 physical, 0 LVM
> --> relative data size: 0.3333333333333333
>
> Note that I'm on Quincy, so maybe they still didn't change it :P
It also shows up when using `ceph-volume lvm batch …` directly. After
consulting the man page, I guess there's not much we can do about it.
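For reference, a rough sketch of the kind of invocation this boils down to. The device path and OSD count are illustrative, not the exact command the manager builds; `--osds-per-device` and the positional device path are documented options of `ceph-volume lvm batch`:

```shell
# Illustrative invocation: 'batch' takes the device as a positional
# argument (unlike 'create', which uses --data) and splits it into the
# requested number of OSDs.
ceph-volume lvm batch --osds-per-device 3 /dev/nvme0n1
```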
>
>> @@ -275,6 +275,12 @@ __PACKAGE__->register_method ({
>> type => 'string',
>> description => "Set the device class of the OSD in crush."
>> },
>> + 'osds-per-device' => {
>> + optional => 1,
>> + type => 'number',
>
> should be integer
Will change it.
>
>> + minimum => '1',
>> + description => 'OSD services per physical device. Can improve fast NVME utilization.',
>
> Can we add an explicit recommendation against doing it for other disk
> types? I imagine it's not beneficial for those, or?
What about something like:
"Only useful for fast NVME devices to utilize their performance better."?
>
>> + },
>> },
>> },
>> returns => { type => 'string' },
>> @@ -294,6 +300,15 @@ __PACKAGE__->register_method ({
>> # extract parameter info and fail if a device is set more than once
>> my $devs = {};
>>
>> + # allow 'osds-per-device' only without dedicated db and/or wal devs. We cannot specify them with
>> + # 'ceph-volume lvm batch' and they don't make a lot of sense on fast NVMEs anyway.
>> + if ($param->{'osds-per-device'}) {
>> + for my $type ( qw(db_dev wal_dev) ) {
>> + die "Cannot use 'osds-per-device' parameter with '${type}'"
>
> Missing newline after error message.
> Could also use raise_param_exc().
Ah, thanks. Will switch it to a `raise_param_exc()`, which doesn't need the
trailing newline AFAICT.
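To illustrate the newline point with a self-contained sketch: without a trailing `"\n"`, Perl's `die` appends " at FILE line N." to the message, which is why the review asked for either the newline or `raise_param_exc()`. The helper name and parameter keys below mirror the patch, but the function itself is hypothetical, just for demonstration:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical stand-in for the validation in the patch: reject dedicated
# DB/WAL devices when 'osds-per-device' is requested. The trailing "\n"
# on die() suppresses Perl's automatic " at FILE line N." suffix.
sub check_osds_per_device {
    my ($param) = @_;
    if ($param->{'osds-per-device'}) {
        for my $type (qw(db_dev wal_dev)) {
            die "cannot use 'osds-per-device' parameter with '${type}'\n"
                if $param->{$type};
        }
    }
    return 1;
}
```

In the actual API code, `raise_param_exc({ $type => "..." })` from `PVE::Exception` would report the offending parameter in a structured way instead of a plain string.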
>
>> + if $param->{$type};
>> + }
>> + }
>> +
>> my $ceph_conf = cfs_read_file('ceph.conf');
>>
>> my $osd_network = $ceph_conf->{global}->{cluster_network};
Thread overview: 4+ messages
2023-04-18 12:26 Aaron Lauterer
2023-08-21 8:20 ` Fiona Ebner
2023-08-21 10:51 ` Aaron Lauterer [this message]
2023-08-21 11:27 ` Fiona Ebner