From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <m.sandoval@proxmox.com>
Received: from firstgate.proxmox.com (firstgate.proxmox.com [212.224.123.68])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits))
 (No client certificate requested)
 by lists.proxmox.com (Postfix) with ESMTPS id BC2F299CDA
 for <pve-devel@lists.proxmox.com>; Thu, 16 Nov 2023 16:37:29 +0100 (CET)
Received: from firstgate.proxmox.com (localhost [127.0.0.1])
 by firstgate.proxmox.com (Proxmox) with ESMTP id 9E62317128
 for <pve-devel@lists.proxmox.com>; Thu, 16 Nov 2023 16:37:29 +0100 (CET)
Received: from proxmox-new.maurer-it.com (proxmox-new.maurer-it.com
 [94.136.29.106])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits))
 (No client certificate requested)
 by firstgate.proxmox.com (Proxmox) with ESMTPS
 for <pve-devel@lists.proxmox.com>; Thu, 16 Nov 2023 16:37:29 +0100 (CET)
Received: from proxmox-new.maurer-it.com (localhost.localdomain [127.0.0.1])
 by proxmox-new.maurer-it.com (Proxmox) with ESMTP id DC39743906
 for <pve-devel@lists.proxmox.com>; Thu, 16 Nov 2023 16:37:28 +0100 (CET)
References: <20231108121034.3332613-1-a.lauterer@proxmox.com>
User-agent: mu4e 1.10.7; emacs 29.1
From: Maximiliano Sandoval <m.sandoval@proxmox.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Date: Thu, 16 Nov 2023 16:34:47 +0100
In-reply-to: <20231108121034.3332613-1-a.lauterer@proxmox.com>
Message-ID: <s8ov8a1lnew.fsf@proxmox.com>
MIME-Version: 1.0
Content-Type: text/plain
X-SPAM-LEVEL: Spam detection results:  0
 AWL 0.002 Adjusted score from AWL reputation of From: address
 BAYES_00                 -1.9 Bayes spam probability is 0 to 1%
 DMARC_MISSING             0.1 Missing DMARC policy
 KAM_DMARC_STATUS 0.01 Test Rule for DKIM or SPF Failure with Strict Alignment
 SPF_HELO_NONE           0.001 SPF: HELO does not publish an SPF Record
 SPF_PASS               -0.001 SPF: sender matches SPF record
 T_SCC_BODY_TEXT_LINE    -0.01 -
Subject: Re: [pve-devel] [PATCH manager] api: osd: destroy: remove mclock
 max iops settings
X-BeenThere: pve-devel@lists.proxmox.com
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Proxmox VE development discussion <pve-devel.lists.proxmox.com>
List-Unsubscribe: <https://lists.proxmox.com/cgi-bin/mailman/options/pve-devel>, 
 <mailto:pve-devel-request@lists.proxmox.com?subject=unsubscribe>
List-Archive: <http://lists.proxmox.com/pipermail/pve-devel/>
List-Post: <mailto:pve-devel@lists.proxmox.com>
List-Help: <mailto:pve-devel-request@lists.proxmox.com?subject=help>
List-Subscribe: <https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel>, 
 <mailto:pve-devel-request@lists.proxmox.com?subject=subscribe>
X-List-Received-Date: Thu, 16 Nov 2023 15:37:29 -0000


Tested on a new Proxmox VE 8 cluster. The mclock scheduler settings no
longer appear in `ceph config dump` after removing the OSD via the web
UI. Removing an OSD that does not have this setting set does not cause
any issue either.

Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>

Aaron Lauterer <a.lauterer@proxmox.com> writes:

> Ceph does a quick benchmark when creating a new OSD and stores the
> osd_mclock_max_capacity_iops_{ssd,hdd} settings in the config DB.
>
> When destroying the OSD, Ceph does not automatically remove these
> settings. Keeping them can be problematic if a new OSD with potentially
> more performance is added and ends up getting the same OSD ID.
>
> Therefore, we remove these settings ourselves when destroying an OSD.
> Removing both variants, hdd and ssd, should be fine, as the MON does
> not complain if the setting does not exist.
>
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
>  PVE/API2/Ceph/OSD.pm | 4 ++++
>  1 file changed, 4 insertions(+)
>
> diff --git a/PVE/API2/Ceph/OSD.pm b/PVE/API2/Ceph/OSD.pm
> index 0c07e7ce..2893456a 100644
> --- a/PVE/API2/Ceph/OSD.pm
> +++ b/PVE/API2/Ceph/OSD.pm
> @@ -985,6 +985,10 @@ __PACKAGE__->register_method ({
>  	    print "Remove OSD $osdsection\n";
>  	    $rados->mon_command({ prefix => "osd rm", ids => [ $osdsection ], format => 'plain' });
>
> +	    print "Remove $osdsection mclock max capacity iops settings from config\n";
> +	    $rados->mon_command({ prefix => "config rm", who => $osdsection, name => 'osd_mclock_max_capacity_iops_ssd' });
> +	    $rados->mon_command({ prefix => "config rm", who => $osdsection, name => 'osd_mclock_max_capacity_iops_hdd' });
> +
>  	    # try to unmount from standard mount point
>  	    my $mountpoint = "/var/lib/ceph/osd/ceph-$osdid";


--
Maximiliano