From: Fiona Ebner <f.ebner@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [RFC ha-manager] manager: clear stale maintenance node in edge case
Date: Wed, 22 Mar 2023 17:21:30 +0100
Message-ID: <20230322162130.65863-1-f.ebner@proxmox.com>

The maintenance node setting for a service can become stale in the
edge case where the whole cluster was shut down at the same time and
the service was never started on another node since the maintenance
node was set.
If a user ends up in this edge case, it would be rather surprising
that the service ignores the rebalance-on-start setting and is
automatically migrated back to the "maintenance node", which is not
actually in maintenance mode anymore, after a migration away from it.
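
For illustration, the stale entry in the manager status would then
look roughly like this (a simplified sketch, not verbatim output; only
the relevant fields of the service state are shown, with the service
and node names taken from the new regression test):

    "vm:103": {
        "node": "node1",
        "maintenance_node": "node1",
        "state": "started"
    }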
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
We could also think about doing the check more broadly in manage(), in
preparation for a future feature where stopped services are also
migrated during maintenance. But that needs more consideration.
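For reference, a rough and untested sketch of how such a check in
manage() could look — variable names like $ss, $ns and $haenv follow
the existing code in Manager.pm, and whether clearing is safe for all
service states is exactly what needs the consideration mentioned
above:

    # sketch only, not part of this patch: clear the stale setting for
    # all services, independent of their current state
    for my $sid (sort keys %$ss) {
        my $sd = $ss->{$sid};
        next if !$sd->{maintenance_node} || $sd->{node} ne $sd->{maintenance_node};
        next if $ns->get_node_state($sd->{node}) ne 'online';
        $haenv->log('info', "service '$sid': clearing stale maintenance node setting");
        delete $sd->{maintenance_node};
    }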
 src/PVE/HA/Manager.pm                        | 18 +++++
 src/test/test-stale-maintenance-node/cmdlist |  5 ++
 .../datacenter.cfg                           |  5 ++
 .../hardware_status                          |  5 ++
 .../test-stale-maintenance-node/log.expect   | 76 +++++++++++++++++++
 .../manager_status                           |  1 +
 .../service_config                           |  3 +
 7 files changed, 113 insertions(+)
create mode 100644 src/test/test-stale-maintenance-node/cmdlist
create mode 100644 src/test/test-stale-maintenance-node/datacenter.cfg
create mode 100644 src/test/test-stale-maintenance-node/hardware_status
create mode 100644 src/test/test-stale-maintenance-node/log.expect
create mode 100644 src/test/test-stale-maintenance-node/manager_status
create mode 100644 src/test/test-stale-maintenance-node/service_config
diff --git a/src/PVE/HA/Manager.pm b/src/PVE/HA/Manager.pm
index 0d0cad2..59e5cfe 100644
--- a/src/PVE/HA/Manager.pm
+++ b/src/PVE/HA/Manager.pm
@@ -907,6 +907,24 @@ sub next_state_started {
             )
         );
     }
+
+    if ($sd->{maintenance_node} && $sd->{node} eq $sd->{maintenance_node}) {
+        my $node_state = $ns->get_node_state($sd->{node});
+        if ($node_state eq 'online') {
+            # Having the maintenance node set here means that the service was never
+            # started on a different node since it was set. This can happen in the edge
+            # case that the whole cluster is shut down at the same time while the
+            # 'migrate' policy was configured. Node is not in maintenance mode anymore
+            # and service is started on this node, so it's fine to clear the setting.
+            $haenv->log(
+                'info',
+                "service '$sid': clearing stale maintenance node "
+                    ."'$sd->{maintenance_node}' setting (is current node)",
+            );
+            delete $sd->{maintenance_node};
+        }
+    }
+
     # ensure service get started again if it went unexpected down
     # but ensure also no LRM result gets lost
     $sd->{uid} = compute_new_uuid($sd->{state}) if defined($lrm_res);
diff --git a/src/test/test-stale-maintenance-node/cmdlist b/src/test/test-stale-maintenance-node/cmdlist
new file mode 100644
index 0000000..34bf737
--- /dev/null
+++ b/src/test/test-stale-maintenance-node/cmdlist
@@ -0,0 +1,5 @@
+[
+ [ "power node1 on", "power node2 on", "power node3 on"],
+ [ "shutdown node1", "shutdown node2", "shutdown node3"],
+ [ "power node1 on", "power node2 on", "power node3 on"]
+]
diff --git a/src/test/test-stale-maintenance-node/datacenter.cfg b/src/test/test-stale-maintenance-node/datacenter.cfg
new file mode 100644
index 0000000..de0bf81
--- /dev/null
+++ b/src/test/test-stale-maintenance-node/datacenter.cfg
@@ -0,0 +1,5 @@
+{
+ "ha": {
+ "shutdown_policy": "migrate"
+ }
+}
diff --git a/src/test/test-stale-maintenance-node/hardware_status b/src/test/test-stale-maintenance-node/hardware_status
new file mode 100644
index 0000000..451beb1
--- /dev/null
+++ b/src/test/test-stale-maintenance-node/hardware_status
@@ -0,0 +1,5 @@
+{
+ "node1": { "power": "off", "network": "off" },
+ "node2": { "power": "off", "network": "off" },
+ "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-stale-maintenance-node/log.expect b/src/test/test-stale-maintenance-node/log.expect
new file mode 100644
index 0000000..cd1fb81
--- /dev/null
+++ b/src/test/test-stale-maintenance-node/log.expect
@@ -0,0 +1,76 @@
+info 0 hardware: starting simulation
+info 20 cmdlist: execute power node1 on
+info 20 node1/crm: status change startup => wait_for_quorum
+info 20 node1/lrm: status change startup => wait_for_agent_lock
+info 20 cmdlist: execute power node2 on
+info 20 node2/crm: status change startup => wait_for_quorum
+info 20 node2/lrm: status change startup => wait_for_agent_lock
+info 20 cmdlist: execute power node3 on
+info 20 node3/crm: status change startup => wait_for_quorum
+info 20 node3/lrm: status change startup => wait_for_agent_lock
+info 20 node1/crm: got lock 'ha_manager_lock'
+info 20 node1/crm: status change wait_for_quorum => master
+info 20 node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info 20 node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info 20 node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info 20 node1/crm: adding new service 'vm:103' on node 'node1'
+info 20 node1/crm: service 'vm:103': state changed from 'request_start' to 'started' (node = node1)
+info 21 node1/lrm: got lock 'ha_agent_node1_lock'
+info 21 node1/lrm: status change wait_for_agent_lock => active
+info 21 node1/lrm: starting service vm:103
+info 21 node1/lrm: service status vm:103 started
+info 22 node2/crm: status change wait_for_quorum => slave
+info 24 node3/crm: status change wait_for_quorum => slave
+info 120 cmdlist: execute shutdown node1
+info 120 node1/lrm: got shutdown request with shutdown policy 'migrate'
+info 120 node1/lrm: shutdown LRM, doing maintenance, removing this node from active list
+info 120 cmdlist: execute shutdown node2
+info 120 node2/lrm: got shutdown request with shutdown policy 'migrate'
+info 120 node2/lrm: shutdown LRM, doing maintenance, removing this node from active list
+info 120 cmdlist: execute shutdown node3
+info 120 node3/lrm: got shutdown request with shutdown policy 'migrate'
+info 120 node3/lrm: shutdown LRM, doing maintenance, removing this node from active list
+info 120 node1/crm: node 'node1': state changed from 'online' => 'maintenance'
+info 120 node1/crm: node 'node2': state changed from 'online' => 'maintenance'
+info 120 node1/crm: node 'node3': state changed from 'online' => 'maintenance'
+info 121 node1/lrm: status change active => maintenance
+info 124 node2/lrm: exit (loop end)
+info 124 shutdown: execute crm node2 stop
+info 123 node2/crm: server received shutdown request
+info 126 node3/lrm: exit (loop end)
+info 126 shutdown: execute crm node3 stop
+info 125 node3/crm: server received shutdown request
+info 143 node2/crm: exit (loop end)
+info 143 shutdown: execute power node2 off
+info 144 node3/crm: exit (loop end)
+info 144 shutdown: execute power node3 off
+info 160 node1/crm: status change master => lost_manager_lock
+info 160 node1/crm: status change lost_manager_lock => wait_for_quorum
+info 161 node1/lrm: status change maintenance => lost_agent_lock
+err 161 node1/lrm: get shutdown request in state 'lost_agent_lock' - detected 1 running services
+err 181 node1/lrm: get shutdown request in state 'lost_agent_lock' - detected 1 running services
+err 201 node1/lrm: get shutdown request in state 'lost_agent_lock' - detected 1 running services
+info 202 watchdog: execute power node1 off
+info 201 node1/crm: killed by poweroff
+info 202 node1/lrm: killed by poweroff
+info 202 hardware: server 'node1' stopped by poweroff (watchdog)
+info 220 cmdlist: execute power node1 on
+info 220 node1/crm: status change startup => wait_for_quorum
+info 220 node1/lrm: status change startup => wait_for_agent_lock
+info 220 cmdlist: execute power node2 on
+info 220 node2/crm: status change startup => wait_for_quorum
+info 220 node2/lrm: status change startup => wait_for_agent_lock
+info 220 cmdlist: execute power node3 on
+info 220 node3/crm: status change startup => wait_for_quorum
+info 220 node3/lrm: status change startup => wait_for_agent_lock
+info 220 node1/crm: status change wait_for_quorum => master
+info 221 node1/lrm: status change wait_for_agent_lock => active
+info 221 node1/lrm: starting service vm:103
+info 221 node1/lrm: service status vm:103 started
+info 222 node2/crm: status change wait_for_quorum => slave
+info 224 node3/crm: status change wait_for_quorum => slave
+info 240 node1/crm: node 'node1': state changed from 'maintenance' => 'online'
+info 240 node1/crm: node 'node2': state changed from 'maintenance' => 'online'
+info 240 node1/crm: node 'node3': state changed from 'maintenance' => 'online'
+info 240 node1/crm: service 'vm:103': clearing stale maintenance node 'node1' setting (is current node)
+info 820 hardware: exit simulation - done
diff --git a/src/test/test-stale-maintenance-node/manager_status b/src/test/test-stale-maintenance-node/manager_status
new file mode 100644
index 0000000..0967ef4
--- /dev/null
+++ b/src/test/test-stale-maintenance-node/manager_status
@@ -0,0 +1 @@
+{}
diff --git a/src/test/test-stale-maintenance-node/service_config b/src/test/test-stale-maintenance-node/service_config
new file mode 100644
index 0000000..cfed86f
--- /dev/null
+++ b/src/test/test-stale-maintenance-node/service_config
@@ -0,0 +1,3 @@
+{
+ "vm:103": { "node": "node1", "state": "enabled" }
+}
--
2.30.2