From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <a.lauterer@proxmox.com>
Received: from firstgate.proxmox.com (firstgate.proxmox.com [212.224.123.68])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits))
 (No client certificate requested)
 by lists.proxmox.com (Postfix) with ESMTPS id 2E1D5A03B2
 for <pve-devel@lists.proxmox.com>; Wed,  8 Nov 2023 13:10:37 +0100 (CET)
Received: from firstgate.proxmox.com (localhost [127.0.0.1])
 by firstgate.proxmox.com (Proxmox) with ESMTP id 10F6E72BF
 for <pve-devel@lists.proxmox.com>; Wed,  8 Nov 2023 13:10:37 +0100 (CET)
Received: from proxmox-new.maurer-it.com (proxmox-new.maurer-it.com
 [94.136.29.106])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits))
 (No client certificate requested)
 by firstgate.proxmox.com (Proxmox) with ESMTPS
 for <pve-devel@lists.proxmox.com>; Wed,  8 Nov 2023 13:10:36 +0100 (CET)
Received: from proxmox-new.maurer-it.com (localhost.localdomain [127.0.0.1])
 by proxmox-new.maurer-it.com (Proxmox) with ESMTP id C4AC74739F
 for <pve-devel@lists.proxmox.com>; Wed,  8 Nov 2023 13:10:35 +0100 (CET)
From: Aaron Lauterer <a.lauterer@proxmox.com>
To: pve-devel@lists.proxmox.com
Date: Wed,  8 Nov 2023 13:10:34 +0100
Message-Id: <20231108121034.3332613-1-a.lauterer@proxmox.com>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SPAM-LEVEL: Spam detection results:  0
 AWL -0.070 Adjusted score from AWL reputation of From: address
 BAYES_00                 -1.9 Bayes spam probability is 0 to 1%
 DMARC_MISSING             0.1 Missing DMARC policy
 KAM_DMARC_STATUS 0.01 Test Rule for DKIM or SPF Failure with Strict Alignment
 SPF_HELO_NONE           0.001 SPF: HELO does not publish an SPF Record
 SPF_PASS               -0.001 SPF: sender matches SPF record
 T_SCC_BODY_TEXT_LINE    -0.01 -
Subject: [pve-devel] [PATCH manager] api: osd: destroy: remove mclock max
 iops settings
X-BeenThere: pve-devel@lists.proxmox.com
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Proxmox VE development discussion <pve-devel.lists.proxmox.com>
List-Unsubscribe: <https://lists.proxmox.com/cgi-bin/mailman/options/pve-devel>, 
 <mailto:pve-devel-request@lists.proxmox.com?subject=unsubscribe>
List-Archive: <http://lists.proxmox.com/pipermail/pve-devel/>
List-Post: <mailto:pve-devel@lists.proxmox.com>
List-Help: <mailto:pve-devel-request@lists.proxmox.com?subject=help>
List-Subscribe: <https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel>, 
 <mailto:pve-devel-request@lists.proxmox.com?subject=subscribe>
X-List-Received-Date: Wed, 08 Nov 2023 12:10:37 -0000

Ceph runs a quick benchmark when creating a new OSD and stores the
resulting osd_mclock_max_capacity_iops_{ssd,hdd} setting for that OSD
in the config DB.
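
For example, the stored value for a given OSD can be checked like this
(the OSD ID is illustrative):

    ceph config get osd.0 osd_mclock_max_capacity_iops_hdd
    ceph config dump | grep osd_mclock_max_capacity_iops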

When destroying the OSD, Ceph does not automatically remove these
settings. Keeping them around can be problematic if a new, potentially
faster OSD is added later and ends up reusing the same OSD ID: the
mClock scheduler would then budget IOPS for the new OSD based on the
stale, too low capacity value.

Therefore, we remove these settings ourselves when destroying an OSD.
Removing both variants, hdd and ssd, should be fine, as the MON does
not complain if the setting does not exist.
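
For reference, the cleanup below is equivalent to manually running the
following against the cluster (OSD ID illustrative):

    ceph config rm osd.0 osd_mclock_max_capacity_iops_ssd
    ceph config rm osd.0 osd_mclock_max_capacity_iops_hdd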

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
 PVE/API2/Ceph/OSD.pm | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/PVE/API2/Ceph/OSD.pm b/PVE/API2/Ceph/OSD.pm
index 0c07e7ce..2893456a 100644
--- a/PVE/API2/Ceph/OSD.pm
+++ b/PVE/API2/Ceph/OSD.pm
@@ -985,6 +985,10 @@ __PACKAGE__->register_method ({
 	    print "Remove OSD $osdsection\n";
 	    $rados->mon_command({ prefix => "osd rm", ids => [ $osdsection ], format => 'plain' });
 
+	    print "Remove $osdsection mclock max capacity iops settings from config\n";
+	    $rados->mon_command({ prefix => "config rm", who => $osdsection, name => 'osd_mclock_max_capacity_iops_ssd' });
+	    $rados->mon_command({ prefix => "config rm", who => $osdsection, name => 'osd_mclock_max_capacity_iops_hdd' });
+
 	    # try to unmount from standard mount point
 	    my $mountpoint = "/var/lib/ceph/osd/ceph-$osdid";
 
-- 
2.39.2