Date: Wed, 18 Mar 2026 13:19:30 +0100
Subject: Re: [PATCH manager 2/2] ceph: osd: stop running pvremove during cleanup
From: Daniel Herzig <d.herzig@proxmox.com>
To: Maximiliano Sandoval, pve-devel@lists.proxmox.com
References: <20260317123343.299525-1-m.sandoval@proxmox.com> <20260317123343.299525-3-m.sandoval@proxmox.com>
In-Reply-To: <20260317123343.299525-3-m.sandoval@proxmox.com>
List-Id: Proxmox VE development discussion

Thanks for this! I just tested the patched package on a 9.1.6 cluster
with ceph-19.2.3 [0].

I tested the sequence:

+ `ceph osd out <id>`
+ `ceph osd ok-to-stop <id>` (until OK)
+ `pveceph stop --service osd.<id>`
+ `pveceph osd destroy <id> --cleanup`

Formerly, this yielded a confusing last message from `pvremove`, caused
by the PV already having been deleted by
`ceph-volume lvm zap --osd-id <id> --destroy` [1]. This is gone now and
should make for a much nicer UX.

[0] On no-subscription I needed to install libpve-common-perl 9.1.8
prior to the patched pve-manager package.

[1] "command '/sbin/pvremove <device>' failed: exit code 5"

Tested-by: Daniel Herzig <d.herzig@proxmox.com>

On 3/17/26 1:33 PM, Maximiliano Sandoval wrote:
> Since [7f007e7fc] ceph will call pvremove while zapping the OSD. Thus we
> remove our pvremove call in order to avoid failing to remove an already
> removed PV.
>
> The aforementioned commit is included in all versions since Reef, which
> is the oldest supported version in Proxmox VE 9.
>
> [7f007e7fc] https://github.com/ceph/ceph/commit/7f007e7fc75b4d6e7465c684f7e5b2458883dcc5
>
> Suggested-by: Fiona Ebner
> Signed-off-by: Maximiliano Sandoval
> ---
>  PVE/API2/Ceph/OSD.pm | 14 --------------
>  1 file changed, 14 deletions(-)
>
> diff --git a/PVE/API2/Ceph/OSD.pm b/PVE/API2/Ceph/OSD.pm
> index a0768dc0..28f631ca 100644
> --- a/PVE/API2/Ceph/OSD.pm
> +++ b/PVE/API2/Ceph/OSD.pm
> @@ -1048,20 +1048,6 @@ __PACKAGE__->register_method({
>
>              eval { PVE::Ceph::Tools::ceph_volume_zap($osdid, $cleanup) };
>              warn $@ if $@;
> -
> -            if ($cleanup) {
> -                # try to remove pvs, but do not fail if it does not work
> -                for my $osd_part (@{ $osd_list->{$osdid} }) {
> -                    for my $dev (@{ $osd_part->{devices} }) {
> -                        ($dev) = ($dev =~ m|^(/dev/[-_.a-zA-Z0-9\/]+)$|); #untaint
> -
> -                        eval {
> -                            run_command(['/sbin/pvremove', $dev], errfunc => sub { });
> -                        };
> -                        warn $@ if $@;
> -                    }
> -                }
> -            }
>          } else {
>              my $partitions_to_remove = [];
>              if ($cleanup) {
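P.S. for anyone reproducing the test: the "until OK" step on
`ceph osd ok-to-stop` can be scripted as a small polling loop. A minimal
sketch; the `wait_ok_to_stop` helper name, the retry cap, and the sleep
interval are my own choices, not part of the patch or the test report:

```shell
# Poll `ceph osd ok-to-stop <id>` until it reports OK, so the OSD can
# then be stopped and destroyed as in the tested sequence above.
# Retry count and sleep interval are arbitrary example values.
wait_ok_to_stop() {
    osd_id=$1
    tries=${2:-60}
    i=0
    while [ "$i" -lt "$tries" ]; do
        # ok-to-stop exits 0 once stopping this OSD is safe
        if ceph osd ok-to-stop "$osd_id" >/dev/null 2>&1; then
            return 0
        fi
        i=$((i + 1))
        sleep 5
    done
    return 1
}
```

With that helper, the sequence becomes: `ceph osd out <id>`, then
`wait_ok_to_stop <id>`, then `pveceph stop --service osd.<id>` and
`pveceph osd destroy <id> --cleanup`.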