From: Kefu Chai <k.chai@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH pve-cluster 02/15] pmxcfs-rs: add pmxcfs-config crate
Date: Tue, 6 Jan 2026 22:24:26 +0800
Message-ID: <20260106142440.2368585-3-k.chai@proxmox.com>
In-Reply-To: <20260106142440.2368585-1-k.chai@proxmox.com>
Add configuration management crate that provides:
- Config struct for runtime configuration
- Node hostname, IP, and group ID tracking
- Debug and local mode flags
- Thread-safe runtime debug-level access via a parking_lot RwLock
This is a foundational crate with no internal dependencies, only
requiring parking_lot for synchronization. Other crates will use
this for accessing runtime configuration.
Signed-off-by: Kefu Chai <k.chai@proxmox.com>
---
src/pmxcfs-rs/Cargo.toml | 3 +-
src/pmxcfs-rs/pmxcfs-config/Cargo.toml | 16 +
src/pmxcfs-rs/pmxcfs-config/README.md | 127 +++++++
src/pmxcfs-rs/pmxcfs-config/src/lib.rs | 471 +++++++++++++++++++++++++
4 files changed, 616 insertions(+), 1 deletion(-)
create mode 100644 src/pmxcfs-rs/pmxcfs-config/Cargo.toml
create mode 100644 src/pmxcfs-rs/pmxcfs-config/README.md
create mode 100644 src/pmxcfs-rs/pmxcfs-config/src/lib.rs
diff --git a/src/pmxcfs-rs/Cargo.toml b/src/pmxcfs-rs/Cargo.toml
index 15d88f52..28e20bb7 100644
--- a/src/pmxcfs-rs/Cargo.toml
+++ b/src/pmxcfs-rs/Cargo.toml
@@ -1,7 +1,8 @@
# Workspace root for pmxcfs Rust implementation
[workspace]
members = [
- "pmxcfs-api-types", # Shared types and error definitions
+ "pmxcfs-api-types", # Shared types and error definitions
+ "pmxcfs-config", # Configuration management
]
resolver = "2"
diff --git a/src/pmxcfs-rs/pmxcfs-config/Cargo.toml b/src/pmxcfs-rs/pmxcfs-config/Cargo.toml
new file mode 100644
index 00000000..f5a60995
--- /dev/null
+++ b/src/pmxcfs-rs/pmxcfs-config/Cargo.toml
@@ -0,0 +1,16 @@
+[package]
+name = "pmxcfs-config"
+description = "Configuration management for pmxcfs"
+
+version.workspace = true
+edition.workspace = true
+authors.workspace = true
+license.workspace = true
+repository.workspace = true
+
+[lints]
+workspace = true
+
+[dependencies]
+# Concurrency primitives
+parking_lot.workspace = true
diff --git a/src/pmxcfs-rs/pmxcfs-config/README.md b/src/pmxcfs-rs/pmxcfs-config/README.md
new file mode 100644
index 00000000..c06b2170
--- /dev/null
+++ b/src/pmxcfs-rs/pmxcfs-config/README.md
@@ -0,0 +1,127 @@
+# pmxcfs-config
+
+**Configuration management** for pmxcfs.
+
+This crate provides the runtime configuration structure used across pmxcfs. For completeness, it also documents the cluster integration services (quorum tracking and cluster configuration monitoring via the Corosync APIs) that are implemented in the main pmxcfs crate and consume this configuration.
+
+## Overview
+
+This crate contains:
+1. **Config struct**: Runtime configuration (node name, IPs, flags)
+2. Documentation of the Corosync integration services (implemented in the main pmxcfs crate):
+ - **QuorumService** (`pmxcfs/src/quorum_service.rs`) - Quorum monitoring
+ - **ClusterConfigService** (`pmxcfs/src/cluster_config_service.rs`) - Config tracking
+
+## Config Struct
+
+The `Config` struct holds daemon-wide configuration including node hostname, IP address, www-data group ID, debug flag, local mode flag, and cluster name.
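+
+A minimal usage sketch based on the API in this crate's `lib.rs`; the node
+values shown are illustrative:
+
+```rust
+use std::sync::Arc;
+
+use pmxcfs_config::Config;
+
+// Construct the shared configuration; Config::new() returns an Arc<Config>.
+let config: Arc<Config> = Config::new(
+    "node1".to_string(),        // node hostname (without domain)
+    "192.168.1.10".to_string(), // node IP address
+    33,                         // www-data group ID
+    false,                      // debug mode
+    false,                      // local (non-clustered) mode
+    "pmxcfs".to_string(),       // cluster name
+);
+
+// Read-only accessors for the values fixed at startup.
+assert_eq!(config.nodename(), "node1");
+assert_eq!(config.cluster_name(), "pmxcfs");
+assert!(!config.is_local_mode());
+
+// The debug level is the only runtime-mutable setting; it is guarded by a
+// parking_lot RwLock, so it can be changed from any thread holding the Arc.
+config.set_debug_level(1);
+assert_eq!(config.debug_level(), 1);
+```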
+
+## Cluster Services
+
+The following services are implemented in the main pmxcfs crate but documented here for completeness.
+
+### QuorumService
+
+- **C Equivalent:** `src/pmxcfs/quorum.c` - `service_quorum_new()`
+- **Rust Location:** `src/pmxcfs-rs/pmxcfs/src/quorum_service.rs`
+
+Monitors cluster quorum status via Corosync quorum API.
+
+#### Features
+- Tracks quorum state (quorate/inquorate)
+- Monitors member list changes
+- Automatic reconnection on Corosync restart
+- Updates `Status` quorum flag
+
+#### C to Rust Mapping
+
+| C Function | Rust Equivalent | Location |
+|-----------|-----------------|----------|
+| `service_quorum_new()` | `QuorumService::new()` | quorum_service.rs |
+| `service_quorum_destroy()` | (Drop trait / finalize) | Automatic |
+| `quorum_notification_fn` | quorum_notification closure | quorum_service.rs |
+| `nodelist_notification_fn` | nodelist_notification closure | quorum_service.rs |
+
+#### Quorum Notifications
+
+The service monitors quorum state changes and member list changes, updating the Status accordingly.
+
+### ClusterConfigService
+
+- **C Equivalent:** `src/pmxcfs/confdb.c` - `service_confdb_new()`
+- **Rust Location:** `src/pmxcfs-rs/pmxcfs/src/cluster_config_service.rs`
+
+Monitors Corosync cluster configuration (cmap) and tracks node membership.
+
+#### Features
+- Monitors cluster membership via Corosync cmap API
+- Tracks node additions/removals
+- Registers nodes in Status
+- Automatic reconnection on Corosync restart
+
+#### C to Rust Mapping
+
+| C Function | Rust Equivalent | Location |
+|-----------|-----------------|----------|
+| `service_confdb_new()` | `ClusterConfigService::new()` | cluster_config_service.rs |
+| `service_confdb_destroy()` | (Drop trait / finalize) | Automatic |
+| `confdb_track_fn` | (direct cmap queries) | Different approach |
+
+#### Configuration Tracking
+
+The service monitors:
+- `nodelist.node.*.nodeid` - Node IDs
+- `nodelist.node.*.name` - Node names
+- `nodelist.node.*.ring*_addr` - Node IP addresses
+
+Updates `Status` with current cluster membership.
+
+## Key Differences from C Implementation
+
+### Cluster Config Service API
+
+**C Version (confdb.c):**
+- Uses deprecated confdb API
+- Tracks changes via confdb notifications
+
+**Rust Version:**
+- Uses modern cmap API
+- Queries cmap directly
+
+Both read the same data, but Rust uses the modern Corosync API.
+
+### Service Integration
+
+**C Version:**
+- qb_loop manages lifecycle
+
+**Rust Version:**
+- Service trait abstracts lifecycle
+- ServiceManager handles retry
+- Tokio async dispatch
+
+## Known Issues / TODOs
+
+### Compatibility
+- **Quorum tracking**: Compatible with C implementation
+- **Node registration**: Equivalent behavior
+- **cmap vs confdb**: Rust uses modern cmap API (C uses deprecated confdb)
+
+### Missing Features
+- None identified
+
+### Behavioral Differences (Benign)
+- **API choice**: Rust uses cmap, C uses confdb (both read same data)
+- **Lifecycle**: Rust uses Service trait, C uses manual lifecycle
+
+## References
+
+### C Implementation
+- `src/pmxcfs/quorum.c` / `quorum.h` - Quorum service
+- `src/pmxcfs/confdb.c` / `confdb.h` - Cluster config service
+
+### Related Crates
+- **pmxcfs**: Main daemon with QuorumService and ClusterConfigService
+- **pmxcfs-status**: Status tracking updated by these services
+- **pmxcfs-services**: Service framework used by both services
+- **rust-corosync**: Corosync FFI bindings
diff --git a/src/pmxcfs-rs/pmxcfs-config/src/lib.rs b/src/pmxcfs-rs/pmxcfs-config/src/lib.rs
new file mode 100644
index 00000000..5e1ee1b2
--- /dev/null
+++ b/src/pmxcfs-rs/pmxcfs-config/src/lib.rs
@@ -0,0 +1,471 @@
+use parking_lot::RwLock;
+use std::sync::Arc;
+
+/// Global configuration for pmxcfs
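+///
+/// Fields other than the debug level are set once at startup; the debug level
+/// can be changed at runtime and is guarded by a `parking_lot::RwLock`.
+///
+/// # Example
+///
+/// A minimal sketch of sharing the configuration across threads; the values
+/// are illustrative:
+///
+/// ```
+/// use pmxcfs_config::Config;
+///
+/// let config = Config::new(
+///     "node1".to_string(),        // node hostname
+///     "192.168.1.10".to_string(), // node IP address
+///     33,                         // www-data group ID
+///     false,                      // debug mode
+///     false,                      // local mode
+///     "pmxcfs".to_string(),       // cluster name
+/// );
+///
+/// // Config::new() returns an Arc, so clones are cheap handle copies.
+/// let shared = std::sync::Arc::clone(&config);
+/// let handle = std::thread::spawn(move || {
+///     // Raise the debug level from another thread.
+///     shared.set_debug_level(1);
+/// });
+/// handle.join().unwrap();
+///
+/// assert_eq!(config.debug_level(), 1);
+/// ```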
+pub struct Config {
+ /// Node name (hostname without domain)
+ pub nodename: String,
+
+ /// Node IP address
+ pub node_ip: String,
+
+ /// www-data group ID for file permissions
+ pub www_data_gid: u32,
+
+ /// Debug mode enabled
+ pub debug: bool,
+
+ /// Force local mode (no clustering)
+ pub local_mode: bool,
+
+ /// Cluster name (CPG group name)
+ pub cluster_name: String,
+
+ /// Debug level (0 = normal, 1+ = debug) - mutable at runtime
+ debug_level: RwLock<u8>,
+}
+
+impl Clone for Config {
+ fn clone(&self) -> Self {
+ Self {
+ nodename: self.nodename.clone(),
+ node_ip: self.node_ip.clone(),
+ www_data_gid: self.www_data_gid,
+ debug: self.debug,
+ local_mode: self.local_mode,
+ cluster_name: self.cluster_name.clone(),
+ debug_level: RwLock::new(*self.debug_level.read()),
+ }
+ }
+}
+
+impl std::fmt::Debug for Config {
+ fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+ f.debug_struct("Config")
+ .field("nodename", &self.nodename)
+ .field("node_ip", &self.node_ip)
+ .field("www_data_gid", &self.www_data_gid)
+ .field("debug", &self.debug)
+ .field("local_mode", &self.local_mode)
+ .field("cluster_name", &self.cluster_name)
+ .field("debug_level", &*self.debug_level.read())
+ .finish()
+ }
+}
+
+impl Config {
+ pub fn new(
+ nodename: String,
+ node_ip: String,
+ www_data_gid: u32,
+ debug: bool,
+ local_mode: bool,
+ cluster_name: String,
+ ) -> Arc<Self> {
+ let debug_level = if debug { 1 } else { 0 };
+ Arc::new(Self {
+ nodename,
+ node_ip,
+ www_data_gid,
+ debug,
+ local_mode,
+ cluster_name,
+ debug_level: RwLock::new(debug_level),
+ })
+ }
+
+ pub fn cluster_name(&self) -> &str {
+ &self.cluster_name
+ }
+
+ pub fn nodename(&self) -> &str {
+ &self.nodename
+ }
+
+ pub fn node_ip(&self) -> &str {
+ &self.node_ip
+ }
+
+ pub fn www_data_gid(&self) -> u32 {
+ self.www_data_gid
+ }
+
+ pub fn is_debug(&self) -> bool {
+ self.debug
+ }
+
+ pub fn is_local_mode(&self) -> bool {
+ self.local_mode
+ }
+
+ /// Get current debug level (0 = normal, 1+ = debug)
+ pub fn debug_level(&self) -> u8 {
+ *self.debug_level.read()
+ }
+
+ /// Set debug level (0 = normal, 1+ = debug)
+ pub fn set_debug_level(&self, level: u8) {
+ *self.debug_level.write() = level;
+ }
+}
+
+#[cfg(test)]
+mod tests {
+ //! Unit tests for Config struct
+ //!
+ //! This test module provides comprehensive coverage for:
+ //! - Configuration creation and initialization
+ //! - Getter methods for all configuration fields
+ //! - Debug level mutation and thread safety
+ //! - Concurrent access patterns (reads and writes)
+ //! - Clone independence
+ //! - Debug formatting
+ //! - Edge cases (empty strings, long strings, special characters, unicode)
+ //!
+ //! ## Thread Safety
+ //!
+    //! The Config struct guards debug_level with a `parking_lot::RwLock<u8>`
+    //! to allow safe concurrent reads and writes. Tests verify:
+ //! - 10 threads × 100 operations (concurrent modifications)
+ //! - 20 threads × 1000 operations (concurrent reads)
+ //!
+ //! ## Edge Cases
+ //!
+ //! Tests cover various edge cases including:
+ //! - Empty strings for node/cluster names
+ //! - Long strings (1000+ characters)
+ //! - Special characters in strings
+ //! - Unicode support (emoji, non-ASCII characters)
+
+ use super::*;
+ use std::thread;
+
+ // ===== Basic Construction Tests =====
+
+ #[test]
+ fn test_config_creation() {
+ let config = Config::new(
+ "node1".to_string(),
+ "192.168.1.10".to_string(),
+ 33,
+ false,
+ false,
+ "pmxcfs".to_string(),
+ );
+
+ assert_eq!(config.nodename(), "node1");
+ assert_eq!(config.node_ip(), "192.168.1.10");
+ assert_eq!(config.www_data_gid(), 33);
+ assert!(!config.is_debug());
+ assert!(!config.is_local_mode());
+ assert_eq!(config.cluster_name(), "pmxcfs");
+ assert_eq!(
+ config.debug_level(),
+ 0,
+ "Debug level should be 0 when debug is false"
+ );
+ }
+
+ #[test]
+ fn test_config_creation_with_debug() {
+ let config = Config::new(
+ "node2".to_string(),
+ "10.0.0.5".to_string(),
+ 1000,
+ true,
+ false,
+ "test-cluster".to_string(),
+ );
+
+ assert!(config.is_debug());
+ assert_eq!(
+ config.debug_level(),
+ 1,
+ "Debug level should be 1 when debug is true"
+ );
+ }
+
+ #[test]
+ fn test_config_creation_local_mode() {
+ let config = Config::new(
+ "localhost".to_string(),
+ "127.0.0.1".to_string(),
+ 33,
+ false,
+ true,
+ "local".to_string(),
+ );
+
+ assert!(config.is_local_mode());
+ assert!(!config.is_debug());
+ }
+
+ // ===== Getter Tests =====
+
+ #[test]
+ fn test_all_getters() {
+ let config = Config::new(
+ "testnode".to_string(),
+ "172.16.0.1".to_string(),
+ 999,
+ true,
+ true,
+ "my-cluster".to_string(),
+ );
+
+ // Test all getter methods
+ assert_eq!(config.nodename(), "testnode");
+ assert_eq!(config.node_ip(), "172.16.0.1");
+ assert_eq!(config.www_data_gid(), 999);
+ assert!(config.is_debug());
+ assert!(config.is_local_mode());
+ assert_eq!(config.cluster_name(), "my-cluster");
+ assert_eq!(config.debug_level(), 1);
+ }
+
+ // ===== Debug Level Mutation Tests =====
+
+ #[test]
+ fn test_debug_level_mutation() {
+ let config = Config::new(
+ "node1".to_string(),
+ "192.168.1.1".to_string(),
+ 33,
+ false,
+ false,
+ "pmxcfs".to_string(),
+ );
+
+ assert_eq!(config.debug_level(), 0);
+
+ config.set_debug_level(1);
+ assert_eq!(config.debug_level(), 1);
+
+ config.set_debug_level(5);
+ assert_eq!(config.debug_level(), 5);
+
+ config.set_debug_level(0);
+ assert_eq!(config.debug_level(), 0);
+ }
+
+ #[test]
+ fn test_debug_level_max_value() {
+ let config = Config::new(
+ "node1".to_string(),
+ "192.168.1.1".to_string(),
+ 33,
+ false,
+ false,
+ "pmxcfs".to_string(),
+ );
+
+ config.set_debug_level(255);
+ assert_eq!(config.debug_level(), 255);
+
+ config.set_debug_level(0);
+ assert_eq!(config.debug_level(), 0);
+ }
+
+ // ===== Thread Safety Tests =====
+
+ #[test]
+ fn test_debug_level_thread_safety() {
+ let config = Config::new(
+ "node1".to_string(),
+ "192.168.1.1".to_string(),
+ 33,
+ false,
+ false,
+ "pmxcfs".to_string(),
+ );
+
+ let config_clone = Arc::clone(&config);
+
+ // Spawn multiple threads that concurrently modify debug level
+ let handles: Vec<_> = (0..10)
+ .map(|i| {
+ let cfg = Arc::clone(&config);
+ thread::spawn(move || {
+ for _ in 0..100 {
+ cfg.set_debug_level(i);
+ let _ = cfg.debug_level();
+ }
+ })
+ })
+ .collect();
+
+ // All threads should complete without panicking
+ for handle in handles {
+ handle.join().unwrap();
+ }
+
+ // Final value should be one of the values set by threads
+ let final_level = config_clone.debug_level();
+ assert!(
+ final_level < 10,
+ "Debug level should be < 10, got {final_level}"
+ );
+ }
+
+ #[test]
+ fn test_concurrent_reads() {
+ let config = Config::new(
+ "node1".to_string(),
+ "192.168.1.1".to_string(),
+ 33,
+ true,
+ false,
+ "pmxcfs".to_string(),
+ );
+
+ // Spawn multiple threads that concurrently read config
+ let handles: Vec<_> = (0..20)
+ .map(|_| {
+ let cfg = Arc::clone(&config);
+ thread::spawn(move || {
+ for _ in 0..1000 {
+ assert_eq!(cfg.nodename(), "node1");
+ assert_eq!(cfg.node_ip(), "192.168.1.1");
+ assert_eq!(cfg.www_data_gid(), 33);
+ assert!(cfg.is_debug());
+ assert!(!cfg.is_local_mode());
+ assert_eq!(cfg.cluster_name(), "pmxcfs");
+ }
+ })
+ })
+ .collect();
+
+ for handle in handles {
+ handle.join().unwrap();
+ }
+ }
+
+ // ===== Clone Tests =====
+
+ #[test]
+ fn test_config_clone() {
+ let config1 = Config::new(
+ "node1".to_string(),
+ "192.168.1.1".to_string(),
+ 33,
+ true,
+ false,
+ "pmxcfs".to_string(),
+ );
+
+ config1.set_debug_level(5);
+
+ let config2 = (*config1).clone();
+
+ // Cloned config should have same values
+ assert_eq!(config2.nodename(), config1.nodename());
+ assert_eq!(config2.node_ip(), config1.node_ip());
+ assert_eq!(config2.www_data_gid(), config1.www_data_gid());
+ assert_eq!(config2.is_debug(), config1.is_debug());
+ assert_eq!(config2.is_local_mode(), config1.is_local_mode());
+ assert_eq!(config2.cluster_name(), config1.cluster_name());
+ assert_eq!(config2.debug_level(), 5);
+
+ // Modifying one should not affect the other
+ config2.set_debug_level(10);
+ assert_eq!(config1.debug_level(), 5);
+ assert_eq!(config2.debug_level(), 10);
+ }
+
+ // ===== Debug Formatting Tests =====
+
+ #[test]
+ fn test_debug_format() {
+ let config = Config::new(
+ "node1".to_string(),
+ "192.168.1.1".to_string(),
+ 33,
+ true,
+ false,
+ "pmxcfs".to_string(),
+ );
+
+ let debug_str = format!("{config:?}");
+
+ // Check that debug output contains all fields
+ assert!(debug_str.contains("Config"));
+ assert!(debug_str.contains("nodename"));
+ assert!(debug_str.contains("node1"));
+ assert!(debug_str.contains("node_ip"));
+ assert!(debug_str.contains("192.168.1.1"));
+ assert!(debug_str.contains("www_data_gid"));
+ assert!(debug_str.contains("33"));
+ assert!(debug_str.contains("debug"));
+ assert!(debug_str.contains("true"));
+ assert!(debug_str.contains("local_mode"));
+ assert!(debug_str.contains("false"));
+ assert!(debug_str.contains("cluster_name"));
+ assert!(debug_str.contains("pmxcfs"));
+ assert!(debug_str.contains("debug_level"));
+ }
+
+ // ===== Edge Cases and Boundary Tests =====
+
+ #[test]
+ fn test_empty_strings() {
+ let config = Config::new(String::new(), String::new(), 0, false, false, String::new());
+
+ assert_eq!(config.nodename(), "");
+ assert_eq!(config.node_ip(), "");
+ assert_eq!(config.cluster_name(), "");
+ assert_eq!(config.www_data_gid(), 0);
+ }
+
+ #[test]
+ fn test_long_strings() {
+ let long_name = "a".repeat(1000);
+ let long_ip = "192.168.1.".to_string() + &"1".repeat(100);
+ let long_cluster = "cluster-".to_string() + &"x".repeat(500);
+
+ let config = Config::new(
+ long_name.clone(),
+ long_ip.clone(),
+ u32::MAX,
+ true,
+ true,
+ long_cluster.clone(),
+ );
+
+ assert_eq!(config.nodename(), long_name);
+ assert_eq!(config.node_ip(), long_ip);
+ assert_eq!(config.cluster_name(), long_cluster);
+ assert_eq!(config.www_data_gid(), u32::MAX);
+ }
+
+ #[test]
+ fn test_special_characters_in_strings() {
+ let config = Config::new(
+ "node-1_test.local".to_string(),
+ "192.168.1.10:8006".to_string(),
+ 33,
+ false,
+ false,
+ "my-cluster_v2.0".to_string(),
+ );
+
+ assert_eq!(config.nodename(), "node-1_test.local");
+ assert_eq!(config.node_ip(), "192.168.1.10:8006");
+ assert_eq!(config.cluster_name(), "my-cluster_v2.0");
+ }
+
+ #[test]
+ fn test_unicode_in_strings() {
+ let config = Config::new(
+ "ノード1".to_string(),
+ "::1".to_string(),
+ 33,
+ false,
+ false,
+ "集群".to_string(),
+ );
+
+ assert_eq!(config.nodename(), "ノード1");
+ assert_eq!(config.node_ip(), "::1");
+ assert_eq!(config.cluster_name(), "集群");
+ }
+}
--
2.47.3