From: Thomas Lamprecht <t.lamprecht@proxmox.com>
To: Proxmox VE user list <pve-user@lists.proxmox.com>
Subject: Re: [PVE-User] Caution: ceph-mon service does not start after today's updates
Date: Thu, 26 Nov 2020 19:56:44 +0100	[thread overview]
Message-ID: <00c86fc4-6d3a-8a64-db84-27d5b5558617@proxmox.com>
In-Reply-To: <269aba60-412c-578b-9757-6a0567d270e5@gmail.com>

Some news.

There are a few things at play; it boils down to two:
* an update of various service orderings in Ceph 14.2.12 (released a while
  ago), which introduced a `Before=remote-fs-pre.target` ordering pretty much
  everywhere.

* rrdcached, a service used by pve-cluster.service (pmxcfs), has no native
  systemd service file, so systemd auto-generates one with a
  `Before=remote-pre.target` ordering, which in turn is ordered against the
  aforementioned `remote-fs-pre.target` (you can inspect the generated unit
  as shown below).
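
To see what the generator actually produced on your node you can dump the
ephemeral unit and the orderings systemd derived from it. Just a quick check,
assuming the unit is named rrdcached.service as it is on PVE:

# systemctl cat rrdcached.service
# systemctl show -p After -p Before rrdcached.service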


Thus you get the following cycle (-> means an After ordering; all Befores were
transformed into Afters by reversing them, which systemd does internally too):


.> pve-cluster -> rrdcached -> remote-pre -> remote-fs-pre -> ceph-mgr@ -.
|                                                                        |
`------------------------------------------------------------------------'
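
If you want to confirm that a node actually ran into this, systemd logs the
cycle (and which job it dropped to break it) during boot; a quick way to look
for it, assuming the current boot is the affected one:

# journalctl -b | grep -i 'ordering cycle'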

We're building a new Ceph version with the `Before=remote-fs-pre.target`
ordering removed; as it stands it is bogus for the ceph mgr, mds, mon, ...
services anyway.

As you probably guessed, one can also fix this by adapting rrdcached, and as
a workaround you can do so as follows:

1. copy over the generated ephemeral service file from /run to /etc, which
   has higher priority.

# cp /run/systemd/generator.late/rrdcached.service /etc/systemd/system/

2. Drop the After ordering on remote-fs.target:
# sed -i '/^After=remote-fs.target/d' /etc/systemd/system/rrdcached.service

3. Reboot.
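
After the reboot you can sanity-check that the ordering is gone
(remote-fs.target should no longer show up in the After= list) and that the
ceph daemons came up again. Rough sketch only, it assumes a working ceph
admin keyring on the node:

# systemctl show -p After rrdcached.service
# systemctl --failed
# ceph -s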

A ceph 14.2.15-pve2 package will soon be available. We'll also see if we can
improve the rrdcached situation in the future; it is not at fault on its own,
naturally, the heuristic of the systemd unit auto-generator is to blame. Maybe
upstream or Debian has interest in adding a hand-crafted systemd unit file,
avoiding auto-generation. Optionally we could maintain one for PVE, or do as
in Proxmox Backup Server and use our own Rust-based RRD implementation.

regards,
Thomas




