From: Daniel Kral <d.kral@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH ha-manager 7/9] tests: add test cases for migrating resources with node affinity rules
Date: Mon, 15 Dec 2025 16:52:17 +0100
Message-ID: <20251215155334.476984-8-d.kral@proxmox.com>
In-Reply-To: <20251215155334.476984-1-d.kral@proxmox.com>
These test cases show the current behavior of manual HA resource
migrations where the HA resource is part of a strict or non-strict node
affinity rule.
These are added in preparation for preventing manual HA resource
migrations/relocations to nodes that are not in the allowed set
according to the HA resource's node affinity rules.
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
---
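Note (not part of the commit message): these tests only document the
status quo; the actual restriction lands in the follow-up patches. As a
rough, illustrative Perl sketch of the kind of check this prepares for
(the helper name and calling convention below are made up for
illustration and are not the real implementation):

    # illustrative only: reject a manual migration whose target node is
    # not in the set allowed by the resource's node affinity rules
    sub check_migration_target {
        my ($allowed_nodes, $sid, $target) = @_;

        # $allowed_nodes is a hash set of permitted node names derived
        # from the resource's rule, e.g. { node1 => 1, node3 => 1 }
        return 1 if $allowed_nodes->{$target};

        warn "ignoring migration of '$sid': target '$target' is not"
            . " in its node affinity rule\n";
        return 0;
    }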
src/test/test-node-affinity-nonstrict7/README | 9 ++
.../test-node-affinity-nonstrict7/cmdlist | 9 ++
.../hardware_status | 5 ++
.../test-node-affinity-nonstrict7/log.expect | 89 +++++++++++++++++++
.../manager_status | 1 +
.../rules_config | 7 ++
.../service_config | 4 +
src/test/test-node-affinity-strict7/README | 9 ++
src/test/test-node-affinity-strict7/cmdlist | 9 ++
.../hardware_status | 5 ++
.../test-node-affinity-strict7/log.expect | 87 ++++++++++++++++++
.../test-node-affinity-strict7/manager_status | 1 +
.../test-node-affinity-strict7/rules_config | 9 ++
.../test-node-affinity-strict7/service_config | 4 +
14 files changed, 248 insertions(+)
create mode 100644 src/test/test-node-affinity-nonstrict7/README
create mode 100644 src/test/test-node-affinity-nonstrict7/cmdlist
create mode 100644 src/test/test-node-affinity-nonstrict7/hardware_status
create mode 100644 src/test/test-node-affinity-nonstrict7/log.expect
create mode 100644 src/test/test-node-affinity-nonstrict7/manager_status
create mode 100644 src/test/test-node-affinity-nonstrict7/rules_config
create mode 100644 src/test/test-node-affinity-nonstrict7/service_config
create mode 100644 src/test/test-node-affinity-strict7/README
create mode 100644 src/test/test-node-affinity-strict7/cmdlist
create mode 100644 src/test/test-node-affinity-strict7/hardware_status
create mode 100644 src/test/test-node-affinity-strict7/log.expect
create mode 100644 src/test/test-node-affinity-strict7/manager_status
create mode 100644 src/test/test-node-affinity-strict7/rules_config
create mode 100644 src/test/test-node-affinity-strict7/service_config
diff --git a/src/test/test-node-affinity-nonstrict7/README b/src/test/test-node-affinity-nonstrict7/README
new file mode 100644
index 00000000..35e532cc
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict7/README
@@ -0,0 +1,9 @@
+Test whether services in a non-strict node affinity rule handle manual
+migrations to nodes as expected with respect to whether these are part of the
+node affinity rule or not.
+
+The test scenario is:
+- vm:101 should be kept on node1 or node3 (preferred)
+- vm:102 should be kept on node1 or node2 (preferred)
+- vm:101 is running on node3 with failback enabled
+- vm:102 is running on node2 with failback disabled
diff --git a/src/test/test-node-affinity-nonstrict7/cmdlist b/src/test/test-node-affinity-nonstrict7/cmdlist
new file mode 100644
index 00000000..d992c805
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict7/cmdlist
@@ -0,0 +1,9 @@
+[
+ [ "power node1 on", "power node2 on", "power node3 on"],
+ [ "service vm:101 migrate node1" ],
+ [ "service vm:101 migrate node2" ],
+ [ "service vm:101 migrate node3" ],
+ [ "service vm:102 migrate node3" ],
+ [ "service vm:102 migrate node2" ],
+ [ "service vm:102 migrate node1" ]
+]
diff --git a/src/test/test-node-affinity-nonstrict7/hardware_status b/src/test/test-node-affinity-nonstrict7/hardware_status
new file mode 100644
index 00000000..451beb13
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict7/hardware_status
@@ -0,0 +1,5 @@
+{
+ "node1": { "power": "off", "network": "off" },
+ "node2": { "power": "off", "network": "off" },
+ "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-node-affinity-nonstrict7/log.expect b/src/test/test-node-affinity-nonstrict7/log.expect
new file mode 100644
index 00000000..31daa618
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict7/log.expect
@@ -0,0 +1,89 @@
+info 0 hardware: starting simulation
+info 20 cmdlist: execute power node1 on
+info 20 node1/crm: status change startup => wait_for_quorum
+info 20 node1/lrm: status change startup => wait_for_agent_lock
+info 20 cmdlist: execute power node2 on
+info 20 node2/crm: status change startup => wait_for_quorum
+info 20 node2/lrm: status change startup => wait_for_agent_lock
+info 20 cmdlist: execute power node3 on
+info 20 node3/crm: status change startup => wait_for_quorum
+info 20 node3/lrm: status change startup => wait_for_agent_lock
+info 20 node1/crm: got lock 'ha_manager_lock'
+info 20 node1/crm: status change wait_for_quorum => master
+info 20 node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info 20 node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info 20 node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info 20 node1/crm: adding new service 'vm:101' on node 'node3'
+info 20 node1/crm: adding new service 'vm:102' on node 'node2'
+info 20 node1/crm: service 'vm:101': state changed from 'request_start' to 'started' (node = node3)
+info 20 node1/crm: service 'vm:102': state changed from 'request_start' to 'started' (node = node2)
+info 22 node2/crm: status change wait_for_quorum => slave
+info 23 node2/lrm: got lock 'ha_agent_node2_lock'
+info 23 node2/lrm: status change wait_for_agent_lock => active
+info 23 node2/lrm: starting service vm:102
+info 23 node2/lrm: service status vm:102 started
+info 24 node3/crm: status change wait_for_quorum => slave
+info 25 node3/lrm: got lock 'ha_agent_node3_lock'
+info 25 node3/lrm: status change wait_for_agent_lock => active
+info 25 node3/lrm: starting service vm:101
+info 25 node3/lrm: service status vm:101 started
+info 120 cmdlist: execute service vm:101 migrate node1
+info 120 node1/crm: got crm command: migrate vm:101 node1
+info 120 node1/crm: migrate service 'vm:101' to node 'node1'
+info 120 node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node3, target = node1)
+info 121 node1/lrm: got lock 'ha_agent_node1_lock'
+info 121 node1/lrm: status change wait_for_agent_lock => active
+info 125 node3/lrm: service vm:101 - start migrate to node 'node1'
+info 125 node3/lrm: service vm:101 - end migrate to node 'node1'
+info 140 node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node1)
+info 140 node1/crm: migrate service 'vm:101' to node 'node3' (running)
+info 140 node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node1, target = node3)
+info 141 node1/lrm: service vm:101 - start migrate to node 'node3'
+info 141 node1/lrm: service vm:101 - end migrate to node 'node3'
+info 160 node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node3)
+info 165 node3/lrm: starting service vm:101
+info 165 node3/lrm: service status vm:101 started
+info 220 cmdlist: execute service vm:101 migrate node2
+info 220 node1/crm: got crm command: migrate vm:101 node2
+info 220 node1/crm: migrate service 'vm:101' to node 'node2'
+info 220 node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node3, target = node2)
+info 225 node3/lrm: service vm:101 - start migrate to node 'node2'
+info 225 node3/lrm: service vm:101 - end migrate to node 'node2'
+info 240 node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node2)
+info 240 node1/crm: migrate service 'vm:101' to node 'node3' (running)
+info 240 node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node2, target = node3)
+info 243 node2/lrm: service vm:101 - start migrate to node 'node3'
+info 243 node2/lrm: service vm:101 - end migrate to node 'node3'
+info 260 node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node3)
+info 265 node3/lrm: starting service vm:101
+info 265 node3/lrm: service status vm:101 started
+info 320 cmdlist: execute service vm:101 migrate node3
+info 320 node1/crm: ignore crm command - service already on target node: migrate vm:101 node3
+info 420 cmdlist: execute service vm:102 migrate node3
+info 420 node1/crm: got crm command: migrate vm:102 node3
+info 420 node1/crm: migrate service 'vm:102' to node 'node3'
+info 420 node1/crm: service 'vm:102': state changed from 'started' to 'migrate' (node = node2, target = node3)
+info 423 node2/lrm: service vm:102 - start migrate to node 'node3'
+info 423 node2/lrm: service vm:102 - end migrate to node 'node3'
+info 440 node1/crm: service 'vm:102': state changed from 'migrate' to 'started' (node = node3)
+info 445 node3/lrm: starting service vm:102
+info 445 node3/lrm: service status vm:102 started
+info 520 cmdlist: execute service vm:102 migrate node2
+info 520 node1/crm: got crm command: migrate vm:102 node2
+info 520 node1/crm: migrate service 'vm:102' to node 'node2'
+info 520 node1/crm: service 'vm:102': state changed from 'started' to 'migrate' (node = node3, target = node2)
+info 525 node3/lrm: service vm:102 - start migrate to node 'node2'
+info 525 node3/lrm: service vm:102 - end migrate to node 'node2'
+info 540 node1/crm: service 'vm:102': state changed from 'migrate' to 'started' (node = node2)
+info 543 node2/lrm: starting service vm:102
+info 543 node2/lrm: service status vm:102 started
+info 620 cmdlist: execute service vm:102 migrate node1
+info 620 node1/crm: got crm command: migrate vm:102 node1
+info 620 node1/crm: migrate service 'vm:102' to node 'node1'
+info 620 node1/crm: service 'vm:102': state changed from 'started' to 'migrate' (node = node2, target = node1)
+info 623 node2/lrm: service vm:102 - start migrate to node 'node1'
+info 623 node2/lrm: service vm:102 - end migrate to node 'node1'
+info 640 node1/crm: service 'vm:102': state changed from 'migrate' to 'started' (node = node1)
+info 641 node1/lrm: starting service vm:102
+info 641 node1/lrm: service status vm:102 started
+info 1220 hardware: exit simulation - done
diff --git a/src/test/test-node-affinity-nonstrict7/manager_status b/src/test/test-node-affinity-nonstrict7/manager_status
new file mode 100644
index 00000000..9e26dfee
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict7/manager_status
@@ -0,0 +1 @@
+{}
\ No newline at end of file
diff --git a/src/test/test-node-affinity-nonstrict7/rules_config b/src/test/test-node-affinity-nonstrict7/rules_config
new file mode 100644
index 00000000..8aa2c589
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict7/rules_config
@@ -0,0 +1,7 @@
+node-affinity: vm101-should-be-on-node1-node3
+ nodes node1:1,node3:2
+ resources vm:101
+
+node-affinity: vm102-should-be-on-node1-node2
+ nodes node1:1,node2:2
+ resources vm:102
diff --git a/src/test/test-node-affinity-nonstrict7/service_config b/src/test/test-node-affinity-nonstrict7/service_config
new file mode 100644
index 00000000..3a916390
--- /dev/null
+++ b/src/test/test-node-affinity-nonstrict7/service_config
@@ -0,0 +1,4 @@
+{
+ "vm:101": { "node": "node3", "state": "started", "failback": 1 },
+ "vm:102": { "node": "node2", "state": "started", "failback": 0 }
+}
diff --git a/src/test/test-node-affinity-strict7/README b/src/test/test-node-affinity-strict7/README
new file mode 100644
index 00000000..bc0096f5
--- /dev/null
+++ b/src/test/test-node-affinity-strict7/README
@@ -0,0 +1,9 @@
+Test whether services in a strict node affinity rule handle manual migrations
+to nodes as expected with respect to whether these are part of the node
+affinity rule or not.
+
+The test scenario is:
+- vm:101 must be kept on node1 or node3 (preferred)
+- vm:102 must be kept on node1 or node2 (preferred)
+- vm:101 is running on node3 with failback enabled
+- vm:102 is running on node2 with failback disabled
diff --git a/src/test/test-node-affinity-strict7/cmdlist b/src/test/test-node-affinity-strict7/cmdlist
new file mode 100644
index 00000000..d992c805
--- /dev/null
+++ b/src/test/test-node-affinity-strict7/cmdlist
@@ -0,0 +1,9 @@
+[
+ [ "power node1 on", "power node2 on", "power node3 on"],
+ [ "service vm:101 migrate node1" ],
+ [ "service vm:101 migrate node2" ],
+ [ "service vm:101 migrate node3" ],
+ [ "service vm:102 migrate node3" ],
+ [ "service vm:102 migrate node2" ],
+ [ "service vm:102 migrate node1" ]
+]
diff --git a/src/test/test-node-affinity-strict7/hardware_status b/src/test/test-node-affinity-strict7/hardware_status
new file mode 100644
index 00000000..451beb13
--- /dev/null
+++ b/src/test/test-node-affinity-strict7/hardware_status
@@ -0,0 +1,5 @@
+{
+ "node1": { "power": "off", "network": "off" },
+ "node2": { "power": "off", "network": "off" },
+ "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-node-affinity-strict7/log.expect b/src/test/test-node-affinity-strict7/log.expect
new file mode 100644
index 00000000..cbe9f323
--- /dev/null
+++ b/src/test/test-node-affinity-strict7/log.expect
@@ -0,0 +1,87 @@
+info 0 hardware: starting simulation
+info 20 cmdlist: execute power node1 on
+info 20 node1/crm: status change startup => wait_for_quorum
+info 20 node1/lrm: status change startup => wait_for_agent_lock
+info 20 cmdlist: execute power node2 on
+info 20 node2/crm: status change startup => wait_for_quorum
+info 20 node2/lrm: status change startup => wait_for_agent_lock
+info 20 cmdlist: execute power node3 on
+info 20 node3/crm: status change startup => wait_for_quorum
+info 20 node3/lrm: status change startup => wait_for_agent_lock
+info 20 node1/crm: got lock 'ha_manager_lock'
+info 20 node1/crm: status change wait_for_quorum => master
+info 20 node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info 20 node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info 20 node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info 20 node1/crm: adding new service 'vm:101' on node 'node3'
+info 20 node1/crm: adding new service 'vm:102' on node 'node2'
+info 20 node1/crm: service 'vm:101': state changed from 'request_start' to 'started' (node = node3)
+info 20 node1/crm: service 'vm:102': state changed from 'request_start' to 'started' (node = node2)
+info 22 node2/crm: status change wait_for_quorum => slave
+info 23 node2/lrm: got lock 'ha_agent_node2_lock'
+info 23 node2/lrm: status change wait_for_agent_lock => active
+info 23 node2/lrm: starting service vm:102
+info 23 node2/lrm: service status vm:102 started
+info 24 node3/crm: status change wait_for_quorum => slave
+info 25 node3/lrm: got lock 'ha_agent_node3_lock'
+info 25 node3/lrm: status change wait_for_agent_lock => active
+info 25 node3/lrm: starting service vm:101
+info 25 node3/lrm: service status vm:101 started
+info 120 cmdlist: execute service vm:101 migrate node1
+info 120 node1/crm: got crm command: migrate vm:101 node1
+info 120 node1/crm: migrate service 'vm:101' to node 'node1'
+info 120 node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node3, target = node1)
+info 121 node1/lrm: got lock 'ha_agent_node1_lock'
+info 121 node1/lrm: status change wait_for_agent_lock => active
+info 125 node3/lrm: service vm:101 - start migrate to node 'node1'
+info 125 node3/lrm: service vm:101 - end migrate to node 'node1'
+info 140 node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node1)
+info 140 node1/crm: migrate service 'vm:101' to node 'node3' (running)
+info 140 node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node1, target = node3)
+info 141 node1/lrm: service vm:101 - start migrate to node 'node3'
+info 141 node1/lrm: service vm:101 - end migrate to node 'node3'
+info 160 node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node3)
+info 165 node3/lrm: starting service vm:101
+info 165 node3/lrm: service status vm:101 started
+info 220 cmdlist: execute service vm:101 migrate node2
+info 220 node1/crm: got crm command: migrate vm:101 node2
+info 220 node1/crm: migrate service 'vm:101' to node 'node2'
+info 220 node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node3, target = node2)
+info 225 node3/lrm: service vm:101 - start migrate to node 'node2'
+info 225 node3/lrm: service vm:101 - end migrate to node 'node2'
+info 240 node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node2)
+info 240 node1/crm: migrate service 'vm:101' to node 'node3' (running)
+info 240 node1/crm: service 'vm:101': state changed from 'started' to 'migrate' (node = node2, target = node3)
+info 243 node2/lrm: service vm:101 - start migrate to node 'node3'
+info 243 node2/lrm: service vm:101 - end migrate to node 'node3'
+info 260 node1/crm: service 'vm:101': state changed from 'migrate' to 'started' (node = node3)
+info 265 node3/lrm: starting service vm:101
+info 265 node3/lrm: service status vm:101 started
+info 320 cmdlist: execute service vm:101 migrate node3
+info 320 node1/crm: ignore crm command - service already on target node: migrate vm:101 node3
+info 420 cmdlist: execute service vm:102 migrate node3
+info 420 node1/crm: got crm command: migrate vm:102 node3
+info 420 node1/crm: migrate service 'vm:102' to node 'node3'
+info 420 node1/crm: service 'vm:102': state changed from 'started' to 'migrate' (node = node2, target = node3)
+info 423 node2/lrm: service vm:102 - start migrate to node 'node3'
+info 423 node2/lrm: service vm:102 - end migrate to node 'node3'
+info 440 node1/crm: service 'vm:102': state changed from 'migrate' to 'started' (node = node3)
+info 440 node1/crm: migrate service 'vm:102' to node 'node2' (running)
+info 440 node1/crm: service 'vm:102': state changed from 'started' to 'migrate' (node = node3, target = node2)
+info 445 node3/lrm: service vm:102 - start migrate to node 'node2'
+info 445 node3/lrm: service vm:102 - end migrate to node 'node2'
+info 460 node1/crm: service 'vm:102': state changed from 'migrate' to 'started' (node = node2)
+info 463 node2/lrm: starting service vm:102
+info 463 node2/lrm: service status vm:102 started
+info 520 cmdlist: execute service vm:102 migrate node2
+info 520 node1/crm: ignore crm command - service already on target node: migrate vm:102 node2
+info 620 cmdlist: execute service vm:102 migrate node1
+info 620 node1/crm: got crm command: migrate vm:102 node1
+info 620 node1/crm: migrate service 'vm:102' to node 'node1'
+info 620 node1/crm: service 'vm:102': state changed from 'started' to 'migrate' (node = node2, target = node1)
+info 623 node2/lrm: service vm:102 - start migrate to node 'node1'
+info 623 node2/lrm: service vm:102 - end migrate to node 'node1'
+info 640 node1/crm: service 'vm:102': state changed from 'migrate' to 'started' (node = node1)
+info 641 node1/lrm: starting service vm:102
+info 641 node1/lrm: service status vm:102 started
+info 1220 hardware: exit simulation - done
diff --git a/src/test/test-node-affinity-strict7/manager_status b/src/test/test-node-affinity-strict7/manager_status
new file mode 100644
index 00000000..9e26dfee
--- /dev/null
+++ b/src/test/test-node-affinity-strict7/manager_status
@@ -0,0 +1 @@
+{}
\ No newline at end of file
diff --git a/src/test/test-node-affinity-strict7/rules_config b/src/test/test-node-affinity-strict7/rules_config
new file mode 100644
index 00000000..622ba80b
--- /dev/null
+++ b/src/test/test-node-affinity-strict7/rules_config
@@ -0,0 +1,9 @@
+node-affinity: vm101-must-be-on-node1-node3
+ nodes node1:1,node3:2
+ resources vm:101
+ strict 1
+
+node-affinity: vm102-must-be-on-node1-node2
+ nodes node1:1,node2:2
+ resources vm:102
+ strict 1
diff --git a/src/test/test-node-affinity-strict7/service_config b/src/test/test-node-affinity-strict7/service_config
new file mode 100644
index 00000000..3a916390
--- /dev/null
+++ b/src/test/test-node-affinity-strict7/service_config
@@ -0,0 +1,4 @@
+{
+ "vm:101": { "node": "node3", "state": "started", "failback": 1 },
+ "vm:102": { "node": "node2", "state": "started", "failback": 0 }
+}
--
2.47.3