Message-ID: <c99f7913-04f8-457d-9611-b65a86207cef@proxmox.com>
Date: Mon, 26 Feb 2024 10:51:20 +0100
To: Friedrich Weber <f.weber@proxmox.com>,
 Proxmox VE development discussion <pve-devel@lists.proxmox.com>
From: Max Carrara <m.carrara@proxmox.com>
In-Reply-To: <cb9c8452-8331-42cb-a833-31d30257a668@proxmox.com>
Subject: Re: [pve-devel] [PATCH v3 ceph master, ceph quincy-stable 8, pve-storage, pve-manager 00/13] Fix #4759: Configure Permissions for ceph-crash.service

On 2/23/24 17:19, Friedrich Weber wrote:
> On 21/02/2024 14:15, Max Carrara wrote:
>> On 2/21/24 12:55, Friedrich Weber wrote:
>>> [...]
>>>
>>> - the `ceph-crash` service does not restart after installing the patched
>>> ceph-base package, so the reordering done by patches 02+04 does not take
>>> effect immediately: ceph-crash posts crash logs just fine, but logs to
>>> the journal that it can't find a keyring. After a restart of ceph-crash,
>>> the patch takes effect, so it's only a tiny inconvenience. But still: I'm
>>> not sure if restarting the service is something we'd want to do in a
>>> postinst -- is that acceptable?
>>
>> Initially the service was being restarted, but that's omitted in the new
>> hook, as Fabian and I had noticed that `ceph-crash` checks for its expected
>> keys again after its waiting period anyway. I had unfortunately forgotten
>> to put that into the changelog of the postinst hook changes - mea culpa.
>>
>> I think restarting the service would then be necessary in order to apply
>> the new sequence in which the keys are checked, as that sequence is
>> hard-coded in `ceph-crash`.
>>
>> It should certainly be acceptable (we already do it in some instances),
>> as long as we only restart the service if it's enabled. That check was part
>> of the old Bash function anyway - I don't think there's any harm in adding
>> it back (either in Bash or Perl).
> 
> If it's acceptable, I think it would be nice to restart ceph-crash (it
> doesn't seem to be restarted that often).

I agree!
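
Something along these lines in the Perl hook should do the trick - just a
rough sketch, mind you (the function name is made up):

    use strict;
    use warnings;

    # Restart ceph-crash only if the unit is enabled - a sketch, not the
    # actual hook code.
    sub restart_ceph_crash_if_enabled {
        my $unit = 'ceph-crash.service';

        # `systemctl is-enabled --quiet` exits with 0 iff the unit is enabled
        system('systemctl', 'is-enabled', '--quiet', $unit);
        return if $? != 0;

        # try-restart only acts on units that are already running
        system('systemctl', 'try-restart', $unit) == 0
            or warn "failed to restart $unit\n";
    }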

> 
>>> - Might there be issues in a mixed-version cluster scenario, so if some
>>> node A already has an updated pve-storage package (patches 05-10), but
>>> node B doesn't yet? One thing I noticed is that node A will add the
>>> [client.crash] section, but node B may remove it again when it needs to
>>> rewrite the Ceph config (e.g. when creating a monitor). I don't find
>>> this particular issue too concerning, as hopefully node B will be
>>> updated eventually as well and reinstate the [client.crash] section. But
>>> I wonder if there could be other more serious issues?
>>
>> The scenario you mentioned might indeed happen, but once all nodes are
>> updated - even if the config has been changed between updates - the
>> '[client.crash]' section should definitely exist.
>>
>> One issue that has been fixed by moving things to the Perl helper is that
>> simultaneous updates could previously modify 'ceph.conf' at the same time -
>> the helper now locks the file on pmxcfs, so that can no longer happen.
> 
> Nice!
> 
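For reference, the locking boils down to something like the following - a
rough sketch only, assuming the cfs_lock_file/cfs_read_file/cfs_write_file
helpers from PVE::Cluster and a parsed config that is a plain hash of
sections:

    use PVE::Cluster qw(cfs_lock_file cfs_read_file cfs_write_file);

    # Hold the cluster-wide pmxcfs lock while editing ceph.conf, so that
    # concurrent updates on different nodes cannot race each other.
    cfs_lock_file('ceph.conf', undef, sub {
        my $cfg = cfs_read_file('ceph.conf');
        # single quotes on purpose - $cluster and $name are expanded by Ceph
        $cfg->{'client.crash'}->{keyring} = '/etc/pve/ceph/$cluster.$name.keyring';
        cfs_write_file('ceph.conf', $cfg);
    });
    die $@ if $@;
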
>> I cannot think of any other scenario at the moment.
> 
> Yeah, me neither.
> 
>> In any case, even if *somehow* 'ceph.conf' ends up not containing the section
>> or the keyring file ends up missing, the helper script will be available
>> after the update has been performed, so it's possible to just run it again
>> manually to adapt the config.
>>
>> That being said, this reminds me that the '[client.crash]' section, the
>> location of the keyring file, etc. should probably be in our docs as well,
>> so I will send a follow-up series for that (unless this series ends up
>> needing a v4, in which case I'll include it there).
>>
>> Thanks again for the feedback and the tests you ran!
> 
> Sure! I ran some more tests installing a fresh Reef cluster with the
> patched packages, and did not notice any major issues.
> 
> One minor thing I noticed: If a user has manually worked around the issue
> by generating a client.crash keyring and adding a [client.crash] section,
> as described in [1]:
> 
> [client.crash]
>     key = <yourbase64key>
> 
> ... after the upgrade, this user will end up with the following
> [client.crash] section:
> 
> [client.crash]
> key = <yourbase64key>
> keyring = /etc/pve/ceph/$cluster.$name.keyring
> 
> and the same key <yourbase64key> in
> /etc/pve/ceph/ceph.client.crash.keyring.
> 
> In my test this is not a problem, though (probably since both keys are
> the same).
> 
> [1] https://bugzilla.proxmox.com/show_bug.cgi?id=4759#c7

Oh, good catch! I'll correct this in a v4, I think. In that case, we want to
make sure we only set the 'keyring' option.
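The section would then end up containing only:

[client.crash]
keyring = /etc/pve/ceph/$cluster.$name.keyring

with the key itself living solely in /etc/pve/ceph/ceph.client.crash.keyring.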

Thanks again for all the tests, much appreciated!