* [PATCH datacenter-manager v2 0/8] subscription: add central key pool registry with reissue support
@ 2026-05-07 8:26 Thomas Lamprecht
2026-05-07 8:26 ` [PATCH datacenter-manager v2 1/8] api: subscription cache: ensure max_age=0 forces a fresh fetch Thomas Lamprecht
` (9 more replies)
0 siblings, 10 replies; 15+ messages in thread
From: Thomas Lamprecht @ 2026-05-07 8:26 UTC
To: pdm-devel
Add a Subscription Registry to PDM: a central pool of PVE and PBS
subscription keys that an operator can assign to remote nodes from one
place, with an explicit Apply/Clear lifecycle for staged changes plus a
Reissue Key action for freeing a key bound to a node so it can be
reassigned elsewhere.
Motivation: managing subscriptions across many remotes today means
handling each node individually. PDM already has the remote
inventory; with a key pool plus per-remote query data we can show "which
nodes need a subscription" and "which keys are unused" together, and let
an admin batch-assign and tear down from one place.
In the near/mid-term we can also make polling keys from customers more
integrated, but that needs a bit of adaptation in our shop infra and
does not block the base work here in any way. The implementation here
was actually split out from a more complete body of work, so most parts
of it are already prepared to adopt this relatively easily.
Design points worth flagging for review:
* Storage layout. subscriptions.cfg holds key entries via the typed
section-config layer, with `product-type` as the section type so PVE
and PBS sections live side-by-side (a rough sketch of the resulting
on-disk layout follows after this list).
The subscriptions.shadow file is reserved for a future shop-bundle
import flow (signed info blobs) and stays empty for manually-added
keys. I could still drop that part for now, but figured it might be
nice, and potentially relevant for review, to already see the direction
this will probably take.
* Endpoints take PRIV_SYS_AUDIT/MODIFY at the macro
level for the pool itself, with per-remote PRIV_RESOURCE_* enforced
inside the handlers when a specific remote is touched.
A dedicated subscription privilege did not seem necessary and would
also not fit that well into our general priv approach in PDM.
* The pending lifecycle goes like this: pool entries with a
(remote, node) binding whose live state does not match are "pending
push"; entries with the new pending-reissue flag are "pending removal".
Apply Pending walks both queues; Clear Pending drops the queue without
touching any remote (binding-clear for push, flag-only for reissue, so
the operator can retry without re-importing the key).
The per-remote subscription cache is invalidated after each successful
apply step so the next panel load reflects the change rather than a
5-minute-stale snapshot, which would be highly confusing UI/UX-wise.
This might warrant a closer look though; it is currently done in a
rather more heavy-handed fashion than potentially needed (had no time
to recheck yet).
* Locking is best-effort, but here that should be fine in practice given
that another entity can always alter the state on a remote node in
parallel anyway.
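For illustration, a rough sketch of how a manually-added, node-pinned
entry would end up in subscriptions.cfg (hypothetical key and names;
the exact property set follows the SubscriptionKeyEntry schema from
patch 3/8):

    pve: pve4b-1234567890
        remote my-cluster
        node node1
        source manual
        status active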
The lib/pdm-api-types/tests/test_import.rs test should provide basic
coverage for the section-config round-trip of both subscriptions.cfg
and the shadow file (which is why I'd be fine with keeping it, though
I don't feel _that_ strongly about it), schema acceptance and
rejection (for now only PVE/PBS are accepted, everything else is
rejected), ProductType classification, the SubscriptionLevel
display/from-str backward compat (both the single-letter and full-name
forms parse), and pick_best_pve_socket_key edge cases.
Open follow-ups deliberately out of scope here:
* Auto-import existing remote-side keys into the pool on first
observation (the reissue path already adopts such keys; an explicit
import for legacy onboarding would be cleaner).
* Make reissue a full reissue; if it goes in as it is now, it should
rather be called "Clear Key", but that can also be handled when
applying the series, if really nothing else comes up (which I doubt).
* A shop-bundle import path (the shadow file plumbing is already in),
either manual copy+paste or through an api token.
* Some polishing, code- and UI/UX-wise (e.g., a reload button), but I
wanted to finally get this out now.
* ...
changes v1 -> v2:
* cleanly adopt the (naughty) in-patch edit that broke the docs
changes of a subsequent patch.
* add correct subject prefix
Thomas Lamprecht (8):
api: subscription cache: ensure max_age=0 forces a fresh fetch
api types: subscription level: render full names
subscription: add key pool data model and config layer
subscription: add key pool and node status API endpoints
ui: add subscription registry with key pool and node status
cli: add subscription key pool management subcommands
docs: add subscription registry chapter
subscription: add Reissue Key action with pending-reissue queue
cli/client/src/subscriptions.rs | 226 +++-
docs/index.rst | 1 +
docs/subscription-registry.rst | 64 +
lib/pdm-api-types/Cargo.toml | 4 +
lib/pdm-api-types/src/subscription.rs | 422 +++++-
lib/pdm-api-types/tests/test_import.rs | 310 +++++
lib/pdm-client/src/lib.rs | 157 ++-
lib/pdm-config/src/lib.rs | 1 +
lib/pdm-config/src/subscriptions.rs | 102 ++
server/src/api/mod.rs | 2 +
server/src/api/resources.rs | 13 +-
server/src/api/subscriptions/mod.rs | 1199 +++++++++++++++++
server/src/context.rs | 7 +
ui/src/configuration/mod.rs | 2 +
ui/src/configuration/subscription_keys.rs | 458 +++++++
ui/src/configuration/subscription_registry.rs | 791 +++++++++++
ui/src/dashboard/subscriptions_list.rs | 18 +-
ui/src/main_menu.rs | 10 +
18 files changed, 3751 insertions(+), 36 deletions(-)
create mode 100644 docs/subscription-registry.rst
create mode 100644 lib/pdm-api-types/tests/test_import.rs
create mode 100644 lib/pdm-config/src/subscriptions.rs
create mode 100644 server/src/api/subscriptions/mod.rs
create mode 100644 ui/src/configuration/subscription_keys.rs
create mode 100644 ui/src/configuration/subscription_registry.rs
--
2.47.3
* [PATCH datacenter-manager v2 1/8] api: subscription cache: ensure max_age=0 forces a fresh fetch
2026-05-07 8:26 [PATCH datacenter-manager v2 0/8] subscription: add central key pool registry with reissue support Thomas Lamprecht
@ 2026-05-07 8:26 ` Thomas Lamprecht
2026-05-07 13:23 ` Lukas Wagner
2026-05-08 12:43 ` applied: " Lukas Wagner
2026-05-07 8:26 ` [PATCH datacenter-manager v2 2/8] api types: subscription level: render full names Thomas Lamprecht
` (8 subsequent siblings)
9 siblings, 2 replies; 15+ messages in thread
From: Thomas Lamprecht @ 2026-05-07 8:26 UTC
To: pdm-devel
The cache lookup used 'diff > max_age', so a same-second hit with
max_age=0 still returned cached data; collect_status_uncached and the
direct user-supplied ?max-age=0 bypass both silently lost their
freshness guarantee. Short-circuit max_age=0 explicitly and switch the
TTL comparison to '>=' so the boundary is an exact miss.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
---
This threw me off quite a bit, as I observed seemingly stale cache
issues, which were ultimately due to something completely different.
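To make the boundary explicit, a minimal self-contained sketch with
made-up epoch values:

    fn main() {
        let cached_timestamp: i64 = 1_700_000_000; // cache write
        let now: i64 = 1_700_000_000; // lookup within the same second
        let max_age: u64 = 0; // caller asked for a guaranteed-fresh read
        let diff = now - cached_timestamp; // 0
        // old check: `0 > 0` is false, so the entry did not count as
        // too old and the supposedly-bypassed cache still answered
        assert!(!(diff > max_age as i64));
        // new check: `0 >= 0` is true, the boundary is an exact miss
        assert!(diff >= max_age as i64);
    }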
server/src/api/resources.rs | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/server/src/api/resources.rs b/server/src/api/resources.rs
index 04628a8..50315b1 100644
--- a/server/src/api/resources.rs
+++ b/server/src/api/resources.rs
@@ -830,11 +830,14 @@ fn get_cached_subscription_info(remote: &str, max_age: u64) -> Option<CachedSubs
.read()
.expect("subscription mutex poisoned");
+ if max_age == 0 {
+ return None;
+ }
if let Some(cached_subscription) = cache.get(remote) {
let now = proxmox_time::epoch_i64();
let diff = now - cached_subscription.timestamp;
- if diff > max_age as i64 || diff < 0 {
+ if diff >= max_age as i64 || diff < 0 {
// value is too old or from the future
None
} else {
--
2.47.3
* [PATCH datacenter-manager v2 2/8] api types: subscription level: render full names
2026-05-07 8:26 [PATCH datacenter-manager v2 0/8] subscription: add central key pool registry with reissue support Thomas Lamprecht
2026-05-07 8:26 ` [PATCH datacenter-manager v2 1/8] api: subscription cache: ensure max_age=0 forces a fresh fetch Thomas Lamprecht
@ 2026-05-07 8:26 ` Thomas Lamprecht
2026-05-07 13:23 ` Lukas Wagner
2026-05-07 8:26 ` [PATCH datacenter-manager v2 3/8] subscription: add key pool data model and config layer Thomas Lamprecht
` (7 subsequent siblings)
9 siblings, 1 reply; 15+ messages in thread
From: Thomas Lamprecht @ 2026-05-07 8:26 UTC
To: pdm-devel
The Display impl produced single-letter codes ("c", "b", "s", "p"),
forcing the dashboard to keep a private letter-to-name helper just
to render labels.
Switching Display to the full names is safe: FromStr is extended to
accept the names alongside the legacy single-letter codes, so any
previously serialised value still parses, and the only in-tree
caller of Display on this enum is the dashboard helper that this
commit drops. The level strings reported by the PVE/PBS API land in
unrelated String fields and are not touched.
Add Debug to the derives, required for assert_eq! over the level in
the upcoming key-pool tests.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
---
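A minimal sketch of the compat guarantee described above (the legacy
letter and the new full name parse to the same variant, and Display
now yields the full name):

    use pdm_api_types::subscription::SubscriptionLevel;

    fn main() {
        let legacy: SubscriptionLevel = "p".parse().unwrap();
        let full: SubscriptionLevel = "Premium".parse().unwrap();
        assert_eq!(legacy, full); // relies on the new Debug derive
        assert_eq!(full.to_string(), "Premium"); // was just "p" before
    }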
lib/pdm-api-types/src/subscription.rs | 24 ++++++++++++------------
ui/src/dashboard/subscriptions_list.rs | 18 ++----------------
2 files changed, 14 insertions(+), 28 deletions(-)
diff --git a/lib/pdm-api-types/src/subscription.rs b/lib/pdm-api-types/src/subscription.rs
index ca23b8e..f0eb525 100644
--- a/lib/pdm-api-types/src/subscription.rs
+++ b/lib/pdm-api-types/src/subscription.rs
@@ -8,7 +8,7 @@ use proxmox_subscription::{SubscriptionInfo, SubscriptionStatus};
#[api]
// order is important here, since we use that for determining if a node has a valid subscription
-#[derive(Default, Copy, Clone, PartialEq, Eq, PartialOrd, Ord)]
+#[derive(Default, Copy, Clone, Debug, PartialEq, Eq, PartialOrd, Ord)]
/// Describes the level of subscription
pub enum SubscriptionLevel {
#[default]
@@ -50,11 +50,11 @@ impl FromStr for SubscriptionLevel {
fn from_str(s: &str) -> Result<Self, Self::Err> {
Ok(match s {
- "p" => SubscriptionLevel::Premium,
- "s" => SubscriptionLevel::Standard,
- "b" => SubscriptionLevel::Basic,
- "c" => SubscriptionLevel::Community,
- "" => SubscriptionLevel::None,
+ "p" | "premium" | "Premium" => SubscriptionLevel::Premium,
+ "s" | "standard" | "Standard" => SubscriptionLevel::Standard,
+ "b" | "basic" | "Basic" => SubscriptionLevel::Basic,
+ "c" | "community" | "Community" => SubscriptionLevel::Community,
+ "" | "none" | "None" => SubscriptionLevel::None,
_ => SubscriptionLevel::Unknown,
})
}
@@ -63,12 +63,12 @@ impl FromStr for SubscriptionLevel {
impl std::fmt::Display for SubscriptionLevel {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.write_str(match self {
- SubscriptionLevel::None => "",
- SubscriptionLevel::Unknown => "unknown",
- SubscriptionLevel::Community => "c",
- SubscriptionLevel::Basic => "b",
- SubscriptionLevel::Standard => "s",
- SubscriptionLevel::Premium => "p",
+ SubscriptionLevel::None => "None",
+ SubscriptionLevel::Unknown => "Unknown",
+ SubscriptionLevel::Community => "Community",
+ SubscriptionLevel::Basic => "Basic",
+ SubscriptionLevel::Standard => "Standard",
+ SubscriptionLevel::Premium => "Premium",
})
}
}
diff --git a/ui/src/dashboard/subscriptions_list.rs b/ui/src/dashboard/subscriptions_list.rs
index b0a96eb..fdb9e9e 100644
--- a/ui/src/dashboard/subscriptions_list.rs
+++ b/ui/src/dashboard/subscriptions_list.rs
@@ -204,17 +204,6 @@ fn columns(
.with_child(Container::from_tag("span").with_child(text))
}
- fn render_subscription_level(level: SubscriptionLevel) -> &'static str {
- match level {
- SubscriptionLevel::None => "None",
- SubscriptionLevel::Basic => "Basic",
- SubscriptionLevel::Community => "Community",
- SubscriptionLevel::Premium => "Premium",
- SubscriptionLevel::Standard => "Standard",
- SubscriptionLevel::Unknown => "Unknown",
- }
- }
-
let subscription_column = DataTableColumn::new(tr!("Subscription"))
.render(|entry: &SubscriptionTreeEntry| match entry {
SubscriptionTreeEntry::Node(node) => {
@@ -222,16 +211,13 @@ fn columns(
let (sub_state, text) = match node.level {
SubscriptionLevel::None => (RemoteSubscriptionState::None, None),
SubscriptionLevel::Unknown => (RemoteSubscriptionState::Unknown, None),
- other => (
- RemoteSubscriptionState::Active,
- Some(render_subscription_level(other)),
- ),
+ other => (RemoteSubscriptionState::Active, Some(other.to_string())),
};
render_subscription_state(&sub_state)
.with_optional_child(text)
.into()
} else {
- render_subscription_level(node.level).into()
+ node.level.to_string().into()
}
}
SubscriptionTreeEntry::Remote(remote) => {
--
2.47.3
* [PATCH datacenter-manager v2 3/8] subscription: add key pool data model and config layer
2026-05-07 8:26 [PATCH datacenter-manager v2 0/8] subscription: add central key pool registry with reissue support Thomas Lamprecht
2026-05-07 8:26 ` [PATCH datacenter-manager v2 1/8] api: subscription cache: ensure max_age=0 forces a fresh fetch Thomas Lamprecht
2026-05-07 8:26 ` [PATCH datacenter-manager v2 2/8] api types: subscription level: render full names Thomas Lamprecht
@ 2026-05-07 8:26 ` Thomas Lamprecht
2026-05-07 8:26 ` [PATCH datacenter-manager v2 4/8] subscription: add key pool and node status API endpoints Thomas Lamprecht
` (6 subsequent siblings)
9 siblings, 0 replies; 15+ messages in thread
From: Thomas Lamprecht @ 2026-05-07 8:26 UTC
To: pdm-devel
Add a section-config-backed pool of subscription keys, each
optionally pinned to a remote node.
The schema accepts only PVE and PBS keys; other prefixes get
rejected with a warning so a new SKU is noticed instead of silently
falling through. Entries carry an origin marker, currently only a
manual-entry variant, and the on-disk layout reserves a shadow file
for the signed info blobs that a future shop-bundle import will
populate.
Init the subscription config on both the production and fake-remote
build paths so test builds don't panic on first access.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
---
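As a usage sketch of the resulting config layer (assuming the init
from server/src/context.rs already injected the default
implementation), a locked read-modify-write cycle looks roughly like:

    use pdm_api_types::subscription::SubscriptionKeyEntry;

    fn add_entry(entry: SubscriptionKeyEntry) -> Result<(), anyhow::Error> {
        // serialize writers on the shared API lockfile first
        let _lock = pdm_config::subscriptions::lock_config()?;
        let (mut config, _digest) = pdm_config::subscriptions::config()?;
        config.insert(entry.key.clone(), entry);
        pdm_config::subscriptions::save_config(&config)
    }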
lib/pdm-api-types/Cargo.toml | 4 +
lib/pdm-api-types/src/subscription.rs | 375 ++++++++++++++++++++++++-
lib/pdm-api-types/tests/test_import.rs | 309 ++++++++++++++++++++
lib/pdm-config/src/lib.rs | 1 +
lib/pdm-config/src/subscriptions.rs | 102 +++++++
server/src/context.rs | 7 +
6 files changed, 797 insertions(+), 1 deletion(-)
create mode 100644 lib/pdm-api-types/tests/test_import.rs
create mode 100644 lib/pdm-config/src/subscriptions.rs
diff --git a/lib/pdm-api-types/Cargo.toml b/lib/pdm-api-types/Cargo.toml
index cb8b505..8282184 100644
--- a/lib/pdm-api-types/Cargo.toml
+++ b/lib/pdm-api-types/Cargo.toml
@@ -15,6 +15,7 @@ serde_plain.workspace = true
serde_json.workspace = true
proxmox-acme-api.workspace = true
+proxmox-base64.workspace = true
proxmox-access-control = { workspace = true, features = ["acl"] }
proxmox-auth-api = { workspace = true, features = ["api-types"] }
proxmox-apt-api-types.workspace = true
@@ -32,3 +33,6 @@ proxmox-uuid = { workspace = true, features = ["serde"] }
pbs-api-types = { workspace = true }
pve-api-types = { workspace = true }
+
+[dev-dependencies]
+serde_json.workspace = true
diff --git a/lib/pdm-api-types/src/subscription.rs b/lib/pdm-api-types/src/subscription.rs
index f0eb525..26ecfba 100644
--- a/lib/pdm-api-types/src/subscription.rs
+++ b/lib/pdm-api-types/src/subscription.rs
@@ -1,11 +1,16 @@
+use std::sync::OnceLock;
use std::{collections::HashMap, str::FromStr};
use anyhow::Error;
use serde::{Deserialize, Serialize};
-use proxmox_schema::api;
+use proxmox_schema::{api, const_regex, ApiStringFormat, ApiType, Schema, StringSchema};
+use proxmox_section_config::typed::ApiSectionDataEntry;
+use proxmox_section_config::{SectionConfig, SectionConfigPlugin};
use proxmox_subscription::{SubscriptionInfo, SubscriptionStatus};
+use crate::remotes::RemoteType;
+
#[api]
// order is important here, since we use that for determining if a node has a valid subscription
#[derive(Default, Copy, Clone, Debug, PartialEq, Eq, PartialOrd, Ord)]
@@ -174,3 +179,371 @@ pub struct PdmSubscriptionInfo {
/// PDM subscription statistics
pub statistics: SubscriptionStatistics,
}
+
+const_regex! {
+ /// Subscription key pattern, restricted to the products PDM can drive.
+ ///
+ /// All keys follow `<prefix>-<10 hex>`. PVE encodes the maximum CPU socket count between
+ /// the product letters and the level letter, for example `pve4b-1234567890`. PBS has no
+ /// socket count, so its keys look like `pbsc-1234567890`. Level letters are c/b/s/p
+ /// (Community/Basic/Standard/Premium).
+ ///
+ /// PMG and POM keys are not accepted yet: PDM has no remote-side handler for them. Widen
+ /// this regex and `ProductType::from_key` in lockstep when PDM grows support for them.
+ pub PRODUCT_KEY_REGEX = r"^(?:pve[0-9]+|pbs)[cbsp]-[0-9a-f]{10}$";
+}
+
+pub const PRODUCT_KEY_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&PRODUCT_KEY_REGEX);
+
+pub const SUBSCRIPTION_KEY_SCHEMA: Schema = StringSchema::new("Subscription key.")
+ .format(&PRODUCT_KEY_FORMAT)
+ .min_length(15)
+ .max_length(18)
+ .schema();
+
+#[api]
+#[derive(Clone, Copy, Debug, Default, Eq, PartialEq, Hash, Deserialize, Serialize)]
+#[serde(rename_all = "lowercase")]
+/// Proxmox product line a subscription key belongs to.
+pub enum ProductType {
+ /// Proxmox Virtual Environment (PVE).
+ #[default]
+ Pve,
+ /// Proxmox Backup Server (PBS).
+ Pbs,
+ /// Proxmox Mail Gateway (PMG).
+ Pmg,
+ /// Proxmox Offline Mirror (POM).
+ Pom,
+}
+
+impl ProductType {
+ /// Static string used as the section-config type marker on disk.
+ pub const fn as_section_type(self) -> &'static str {
+ match self {
+ ProductType::Pve => "pve",
+ ProductType::Pbs => "pbs",
+ ProductType::Pmg => "pmg",
+ ProductType::Pom => "pom",
+ }
+ }
+
+ /// Classify a key by its prefix.
+ ///
+ /// Returns None when the prefix does not match any product PDM currently knows about;
+ /// callers should log that case so a new product line gets noticed instead of silently
+ /// sorted into a default bucket.
+ pub fn from_key(key: &str) -> Option<Self> {
+ let (prefix, _) = key.split_once('-')?;
+ if prefix.starts_with("pve") {
+ Some(ProductType::Pve)
+ } else if prefix.starts_with("pbs") {
+ Some(ProductType::Pbs)
+ } else if prefix.starts_with("pmg") {
+ Some(ProductType::Pmg)
+ } else if prefix.starts_with("pom") {
+ Some(ProductType::Pom)
+ } else {
+ None
+ }
+ }
+
+ /// Whether PDM currently knows how to drive a remote of this product type.
+ ///
+ /// PDM only manages PVE and PBS remotes today, and the schema regex rejects everything else
+ /// at insert time. This method covers in-memory paths for forward-compat, for example
+ /// existing pool entries loaded after the regex is widened in a future release.
+ pub fn matches_remote_type(self, remote_type: RemoteType) -> bool {
+ matches!(
+ (self, remote_type),
+ (ProductType::Pve, RemoteType::Pve) | (ProductType::Pbs, RemoteType::Pbs)
+ )
+ }
+}
+
+impl std::fmt::Display for ProductType {
+ fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+ f.write_str(self.as_section_type())
+ }
+}
+
+/// Extract the socket count a PVE key covers (for example, 4 from "pve4b-...").
+///
+/// Returns None for non-PVE keys or unparseable prefixes.
+pub fn socket_count_from_key(key: &str) -> Option<u32> {
+ let (prefix, _) = key.split_once('-')?;
+ if !prefix.starts_with("pve") {
+ return None;
+ }
+ let after_pve = &prefix[3..];
+ let digits: String = after_pve
+ .chars()
+ .take_while(|c| c.is_ascii_digit())
+ .collect();
+ digits.parse().ok()
+}
+
+/// Pick the candidate PVE key with the smallest socket count that still covers `node_sockets`.
+///
+/// `candidates` yields `(id, key_string)` pairs. Keys without a parseable PVE socket count are
+/// skipped, and keys covering fewer sockets than the node needs are filtered out. Returns the
+/// id of the best fit, or None when no candidate covers the node.
+pub fn pick_best_pve_socket_key<'a, I, K>(node_sockets: u32, candidates: I) -> Option<K>
+where
+ I: IntoIterator<Item = (K, &'a str)>,
+{
+ candidates
+ .into_iter()
+ .filter_map(|(id, key)| socket_count_from_key(key).map(|s| (id, s)))
+ .filter(|(_, s)| *s >= node_sockets)
+ .min_by_key(|(_, s)| *s)
+ .map(|(id, _)| id)
+}
+
+#[api]
+#[derive(Clone, Copy, Debug, Default, Eq, PartialEq, Deserialize, Serialize)]
+#[serde(rename_all = "kebab-case")]
+/// Origin of a subscription key entry.
+pub enum SubscriptionKeySource {
+ /// Hand-entered into the pool by an admin. Used for any key added through the manual-entry
+ /// UI or CLI, and as the `serde(default)` for entries that predate this field.
+ #[default]
+ Manual,
+}
+
+#[api(
+ properties: {
+ "key": { schema: SUBSCRIPTION_KEY_SCHEMA },
+ "level": { optional: true },
+ "status": { optional: true },
+ "source": { optional: true },
+ },
+)]
+#[derive(Clone, Debug, Default, Deserialize, Serialize, PartialEq)]
+#[serde(rename_all = "kebab-case")]
+/// A subscription key entry in the PDM key pool.
+pub struct SubscriptionKeyEntry {
+ /// The subscription key (for example, pve4b-1234567890).
+ pub key: String,
+
+ /// Product type derived from the key prefix.
+ #[serde(rename = "product-type")]
+ pub product_type: ProductType,
+
+ /// Subscription level, derived from the key suffix.
+ #[serde(default)]
+ pub level: SubscriptionLevel,
+
+ /// Where the key entry came from. Defaults to manual entry.
+ #[serde(default)]
+ pub source: SubscriptionKeySource,
+
+ /// Remote this key is assigned to (if any).
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub remote: Option<String>,
+
+ /// Node within the remote this key is assigned to (if any).
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub node: Option<String>,
+
+ /// Server ID this key is bound to (from signed info, if available).
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub serverid: Option<String>,
+
+ /// Subscription status from last check.
+ #[serde(default)]
+ pub status: SubscriptionStatus,
+
+ /// Next due date.
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub nextduedate: Option<String>,
+
+ /// Product name.
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub productname: Option<String>,
+
+ /// Epoch of last import or refresh of this key's data.
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub checktime: Option<i64>,
+}
+
+impl ApiSectionDataEntry for SubscriptionKeyEntry {
+ const INTERNALLY_TAGGED: Option<&'static str> = Some("product-type");
+ const SECION_CONFIG_USES_TYPE_KEY: bool = true;
+
+ fn section_config() -> &'static SectionConfig {
+ static CONFIG: OnceLock<SectionConfig> = OnceLock::new();
+
+ CONFIG.get_or_init(|| {
+ let mut this =
+ SectionConfig::new(&SUBSCRIPTION_KEY_SCHEMA).with_type_key("product-type");
+ for ty in [
+ ProductType::Pve,
+ ProductType::Pbs,
+ ProductType::Pmg,
+ ProductType::Pom,
+ ] {
+ this.register_plugin(SectionConfigPlugin::new(
+ ty.as_section_type().to_string(),
+ Some("key".to_string()),
+ SubscriptionKeyEntry::API_SCHEMA.unwrap_object_schema(),
+ ));
+ }
+ this
+ })
+ }
+
+ fn section_type(&self) -> &'static str {
+ self.product_type.as_section_type()
+ }
+}
+
+#[api(
+ properties: {
+ "key": { schema: SUBSCRIPTION_KEY_SCHEMA },
+ },
+)]
+#[derive(Clone, Debug, Default, Deserialize, Serialize)]
+#[serde(rename_all = "kebab-case")]
+/// Shadow entry storing the signed subscription info blob for a key.
+///
+/// Currently only populated by the future shop-bundle import flow; manually-added keys leave
+/// this table empty. The data layer is in place so that adding the import path later does not
+/// require reshaping the on-disk config.
+pub struct SubscriptionKeyShadow {
+ /// The subscription key.
+ pub key: String,
+
+ /// Product type (section type marker).
+ #[serde(rename = "product-type")]
+ pub product_type: ProductType,
+
+ /// Base64-encoded signed SubscriptionInfo JSON.
+ #[serde(default)]
+ pub info: String,
+}
+
+impl ApiSectionDataEntry for SubscriptionKeyShadow {
+ const INTERNALLY_TAGGED: Option<&'static str> = Some("product-type");
+ const SECION_CONFIG_USES_TYPE_KEY: bool = true;
+
+ fn section_config() -> &'static SectionConfig {
+ static CONFIG: OnceLock<SectionConfig> = OnceLock::new();
+
+ CONFIG.get_or_init(|| {
+ let mut this =
+ SectionConfig::new(&SUBSCRIPTION_KEY_SCHEMA).with_type_key("product-type");
+ for ty in [
+ ProductType::Pve,
+ ProductType::Pbs,
+ ProductType::Pmg,
+ ProductType::Pom,
+ ] {
+ this.register_plugin(SectionConfigPlugin::new(
+ ty.as_section_type().to_string(),
+ Some("key".to_string()),
+ SubscriptionKeyShadow::API_SCHEMA.unwrap_object_schema(),
+ ));
+ }
+ this
+ })
+ }
+
+ fn section_type(&self) -> &'static str {
+ self.product_type.as_section_type()
+ }
+}
+
+/// Decode a base64-encoded `SubscriptionInfo` JSON blob from the shadow file.
+///
+/// Forward-compat helper for the future shop-bundle import path. Returns the parsed
+/// `SubscriptionInfo`; the caller is responsible for verifying the signature against the shop's
+/// signing key.
+pub fn parse_signed_info_blob(b64: &str) -> Result<SubscriptionInfo, Error> {
+ let bytes = proxmox_base64::decode(b64)?;
+ let info = serde_json::from_slice(&bytes)?;
+ Ok(info)
+}
+
+/// Cross-check the `serverid` of a shadowed entry against what the remote reports.
+///
+/// Forward-compat helper for the future bundle-import and push flow: when the shadow has a
+/// signed serverid binding, the operator should be warned if the remote it is being pushed to
+/// has a different hardware id. Returns Ok(None) when there is nothing to compare.
+pub fn verify_serverid(
+ entry: &SubscriptionKeyEntry,
+ remote_info: &SubscriptionInfo,
+) -> Result<Option<ServeridMismatch>, Error> {
+ let Some(expected) = entry.serverid.as_deref() else {
+ return Ok(None);
+ };
+ let Some(actual) = remote_info.serverid.as_deref() else {
+ return Ok(None);
+ };
+ if expected == actual {
+ Ok(None)
+ } else {
+ Ok(Some(ServeridMismatch {
+ key: entry.key.clone(),
+ expected: expected.to_string(),
+ actual: actual.to_string(),
+ }))
+ }
+}
+
+/// Result of [`verify_serverid`] when the bound and observed server-ids disagree.
+#[derive(Clone, Debug, PartialEq, Eq)]
+pub struct ServeridMismatch {
+ pub key: String,
+ pub expected: String,
+ pub actual: String,
+}
+
+#[api]
+#[derive(Clone, Debug, Default, Serialize, Deserialize, PartialEq)]
+#[serde(rename_all = "kebab-case")]
+/// Subscription status of a single remote node, combining remote query data with key pool
+/// assignment information.
+pub struct RemoteNodeStatus {
+ /// Remote name.
+ pub remote: String,
+ /// Remote type (pve or pbs).
+ #[serde(rename = "type")]
+ pub ty: RemoteType,
+ /// Node name.
+ pub node: String,
+ /// Number of CPU sockets (PVE only).
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub sockets: Option<i64>,
+ /// Current subscription status.
+ #[serde(default)]
+ pub status: SubscriptionStatus,
+ /// Subscription level.
+ #[serde(default)]
+ pub level: SubscriptionLevel,
+ /// Currently assigned key from the pool (if any).
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub assigned_key: Option<String>,
+ /// Current key on the node (from remote query).
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub current_key: Option<String>,
+}
+
+#[api]
+#[derive(Clone, Debug, Default, Serialize, Deserialize, PartialEq, Eq)]
+#[serde(rename_all = "kebab-case")]
+/// A proposed key-to-node assignment from the auto-assign algorithm.
+pub struct ProposedAssignment {
+ /// The subscription key to assign.
+ pub key: String,
+ /// Target remote.
+ pub remote: String,
+ /// Target node.
+ pub node: String,
+ /// Socket count of the key (PVE only).
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub key_sockets: Option<u32>,
+ /// Socket count of the node (PVE only).
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub node_sockets: Option<i64>,
+}
diff --git a/lib/pdm-api-types/tests/test_import.rs b/lib/pdm-api-types/tests/test_import.rs
new file mode 100644
index 0000000..2bb1cd6
--- /dev/null
+++ b/lib/pdm-api-types/tests/test_import.rs
@@ -0,0 +1,309 @@
+//! SectionConfig round-trip and helper tests for the subscription key pool.
+//!
+//! Run with: cargo test -p pdm-api-types --test test_import
+
+use pdm_api_types::subscription::*;
+use proxmox_section_config::typed::{ApiSectionDataEntry, SectionConfigData};
+use proxmox_subscription::SubscriptionStatus;
+
+#[test]
+fn entry_roundtrip() {
+ let mut config = SectionConfigData::<SubscriptionKeyEntry>::default();
+
+ let entry = SubscriptionKeyEntry {
+ key: "pve4b-aa11bb2233".to_string(),
+ product_type: ProductType::Pve,
+ level: SubscriptionLevel::Basic,
+ source: SubscriptionKeySource::Manual,
+ remote: Some("my-cluster".to_string()),
+ node: Some("node1".to_string()),
+ serverid: Some("AABBCCDD".to_string()),
+ status: SubscriptionStatus::Active,
+ nextduedate: Some("2027-06-01".to_string()),
+ productname: Some("Proxmox VE Basic".to_string()),
+ checktime: Some(1700000000),
+ };
+
+ config.insert("pve4b-aa11bb2233".to_string(), entry);
+
+ let raw = SubscriptionKeyEntry::write_section_config("test", &config).expect("write failed");
+ let parsed = SubscriptionKeyEntry::parse_section_config("test", &raw).expect("parse failed");
+
+ let back = parsed.get("pve4b-aa11bb2233").expect("key not found");
+ assert_eq!(back.key, "pve4b-aa11bb2233");
+ assert_eq!(back.product_type, ProductType::Pve);
+ assert_eq!(back.source, SubscriptionKeySource::Manual);
+ assert_eq!(back.remote.as_deref(), Some("my-cluster"));
+ assert_eq!(back.node.as_deref(), Some("node1"));
+ assert_eq!(back.status, SubscriptionStatus::Active);
+ assert_eq!(back.nextduedate.as_deref(), Some("2027-06-01"));
+}
+
+#[test]
+fn shadow_roundtrip() {
+ let mut shadow = SectionConfigData::<SubscriptionKeyShadow>::default();
+
+ shadow.insert(
+ "pve4b-aa11bb2233".to_string(),
+ SubscriptionKeyShadow {
+ key: "pve4b-aa11bb2233".to_string(),
+ product_type: ProductType::Pve,
+ info: "dGVzdA==".to_string(),
+ },
+ );
+
+ let raw = SubscriptionKeyShadow::write_section_config("test", &shadow).expect("write failed");
+ let parsed = SubscriptionKeyShadow::parse_section_config("test", &raw).expect("parse failed");
+
+ let back = parsed.get("pve4b-aa11bb2233").expect("key not found");
+ assert_eq!(back.info, "dGVzdA==");
+}
+
+#[test]
+fn deserialize_api_response_json() {
+ let json = serde_json::json!({
+ "key": "pve4b-aa11bb2233",
+ "nextduedate": "2027-06-01",
+ "product-type": "pve",
+ "productname": "Proxmox VE Basic",
+ "serverid": "AABBCCDD",
+ "status": "active"
+ });
+
+ let entry: SubscriptionKeyEntry = serde_json::from_value(json).unwrap();
+ assert_eq!(entry.key, "pve4b-aa11bb2233");
+ assert_eq!(entry.product_type, ProductType::Pve);
+ assert_eq!(entry.status, SubscriptionStatus::Active);
+ assert_eq!(entry.source, SubscriptionKeySource::Manual);
+}
+
+#[test]
+fn deserialize_without_optional_fields() {
+ let json = serde_json::json!({
+ "key": "pbsb-ee77ff8899",
+ "product-type": "pbs",
+ });
+
+ let entry: SubscriptionKeyEntry = serde_json::from_value(json).unwrap();
+ assert_eq!(entry.key, "pbsb-ee77ff8899");
+ assert_eq!(entry.product_type, ProductType::Pbs);
+ assert!(entry.remote.is_none());
+ assert!(entry.nextduedate.is_none());
+}
+
+#[test]
+fn product_type_classification() {
+ let cases = [
+ ("pve4b-1234567890", Some(ProductType::Pve), "pve"),
+ ("pbss-abcdef0123", Some(ProductType::Pbs), "pbs"),
+ ("pmgb-1234567890", Some(ProductType::Pmg), "pmg"),
+ ("pomb-1234567890", Some(ProductType::Pom), "pom"),
+ ("xxx-1234567890", None, ""),
+ ("no-dash", None, ""),
+ ];
+ for (key, expected, marker) in cases {
+ assert_eq!(ProductType::from_key(key), expected, "from_key({key})");
+ if let Some(pt) = expected {
+ assert_eq!(pt.as_section_type(), marker, "section_type for {key}");
+ }
+ }
+}
+
+#[test]
+fn socket_count_extraction() {
+ assert_eq!(socket_count_from_key("pve1c-1234567890"), Some(1));
+ assert_eq!(socket_count_from_key("pve2b-1234567890"), Some(2));
+ assert_eq!(socket_count_from_key("pve4s-1234567890"), Some(4));
+ assert_eq!(socket_count_from_key("pve8p-1234567890"), Some(8));
+ assert_eq!(socket_count_from_key("pbss-1234567890"), None);
+ assert_eq!(socket_count_from_key("pvexb-1234567890"), None);
+}
+
+#[test]
+fn remote_type_matching() {
+ use pdm_api_types::remotes::RemoteType;
+
+ assert!(ProductType::Pve.matches_remote_type(RemoteType::Pve));
+ assert!(!ProductType::Pve.matches_remote_type(RemoteType::Pbs));
+ assert!(ProductType::Pbs.matches_remote_type(RemoteType::Pbs));
+ assert!(!ProductType::Pbs.matches_remote_type(RemoteType::Pve));
+ // PMG and POM are reserved product types but PDM cannot manage those remotes yet.
+ assert!(!ProductType::Pmg.matches_remote_type(RemoteType::Pve));
+ assert!(!ProductType::Pmg.matches_remote_type(RemoteType::Pbs));
+ assert!(!ProductType::Pom.matches_remote_type(RemoteType::Pbs));
+}
+
+#[test]
+fn subscription_level_from_key_suffix() {
+ assert_eq!(
+ SubscriptionLevel::from_key(Some("pve4c-123")),
+ SubscriptionLevel::Community
+ );
+ assert_eq!(
+ SubscriptionLevel::from_key(Some("pve4b-123")),
+ SubscriptionLevel::Basic
+ );
+ assert_eq!(
+ SubscriptionLevel::from_key(Some("pve4s-123")),
+ SubscriptionLevel::Standard
+ );
+ assert_eq!(
+ SubscriptionLevel::from_key(Some("pve2p-123")),
+ SubscriptionLevel::Premium
+ );
+ assert_eq!(
+ SubscriptionLevel::from_key(Some("pbsb-123")),
+ SubscriptionLevel::Basic
+ );
+ assert_eq!(SubscriptionLevel::from_key(None), SubscriptionLevel::None);
+ assert_eq!(
+ SubscriptionLevel::from_key(Some("")),
+ SubscriptionLevel::None
+ );
+}
+
+#[test]
+fn subscription_level_display_fromstr_roundtrip() {
+ for level in [
+ SubscriptionLevel::None,
+ SubscriptionLevel::Community,
+ SubscriptionLevel::Basic,
+ SubscriptionLevel::Standard,
+ SubscriptionLevel::Premium,
+ SubscriptionLevel::Unknown,
+ ] {
+ let s = format!("{level}");
+ let parsed: SubscriptionLevel = s.parse().unwrap();
+ assert_eq!(parsed, level, "roundtrip failed for {s}");
+ }
+
+ // Backward compatibility: legacy single-letter wire format still parses.
+ for (letter, level) in [
+ ("c", SubscriptionLevel::Community),
+ ("b", SubscriptionLevel::Basic),
+ ("s", SubscriptionLevel::Standard),
+ ("p", SubscriptionLevel::Premium),
+ ] {
+ assert_eq!(letter.parse::<SubscriptionLevel>().unwrap(), level);
+ }
+}
+
+#[test]
+fn multiple_keys_different_types() {
+ let mut config = SectionConfigData::<SubscriptionKeyEntry>::default();
+
+ config.insert(
+ "pve4b-aaaa111111".to_string(),
+ SubscriptionKeyEntry {
+ key: "pve4b-aaaa111111".to_string(),
+ product_type: ProductType::Pve,
+ status: SubscriptionStatus::Active,
+ ..Default::default()
+ },
+ );
+ config.insert(
+ "pbss-bbbb222222".to_string(),
+ SubscriptionKeyEntry {
+ key: "pbss-bbbb222222".to_string(),
+ product_type: ProductType::Pbs,
+ status: SubscriptionStatus::Active,
+ ..Default::default()
+ },
+ );
+
+ let raw = SubscriptionKeyEntry::write_section_config("test", &config).unwrap();
+ let parsed = SubscriptionKeyEntry::parse_section_config("test", &raw).unwrap();
+
+ assert_eq!(
+ parsed.get("pve4b-aaaa111111").unwrap().product_type,
+ ProductType::Pve
+ );
+ assert_eq!(
+ parsed.get("pbss-bbbb222222").unwrap().product_type,
+ ProductType::Pbs
+ );
+}
+
+#[test]
+fn pick_best_pve_socket_key_edge_cases() {
+ let pool = [
+ ("pve1c-aaa", "pve1c-aaa"),
+ ("pve2b-bbb", "pve2b-bbb"),
+ ("pve4s-ccc", "pve4s-ccc"),
+ ("pve8p-ddd", "pve8p-ddd"),
+ ];
+ let pick =
+ |sockets: u32| pick_best_pve_socket_key(sockets, pool.iter().map(|(id, k)| (*id, *k)));
+
+ // Exact match prefers the equally-sized key over a larger one.
+ assert_eq!(pick(2), Some("pve2b-bbb"));
+
+ // No exact match: fall through to the smallest key that still covers the node.
+ assert_eq!(pick(3), Some("pve4s-ccc"));
+ assert_eq!(pick(5), Some("pve8p-ddd"));
+
+ // Single-socket node still picks the single-socket key (does not overprovision).
+ assert_eq!(pick(1), Some("pve1c-aaa"));
+
+ // Node larger than every key has no fit.
+ assert_eq!(pick(16), None);
+
+ // Empty candidate list is None.
+ let empty: [(&str, &str); 0] = [];
+ assert_eq!(
+ pick_best_pve_socket_key(2, empty.iter().map(|(id, k)| (*id, *k))),
+ None,
+ );
+
+ // Non-PVE keys are skipped silently.
+ let mixed = [("a", "pbsc-aaaa111111"), ("b", "pve2b-bbbb222222")];
+ assert_eq!(
+ pick_best_pve_socket_key(1, mixed.iter().map(|(id, k)| (*id, *k))),
+ Some("b"),
+ );
+}
+
+#[test]
+fn schema_accepts_pve_pbs_only() {
+ use proxmox_schema::ApiType;
+ let schema = SubscriptionKeyEntry::API_SCHEMA.unwrap_object_schema();
+ let key_schema = schema
+ .lookup("key")
+ .expect("key property in object schema")
+ .1;
+ assert!(key_schema.parse_simple_value("garbage").is_err());
+ assert!(key_schema.parse_simple_value("xxx-yyyyyyyyyy").is_err());
+ assert!(key_schema.parse_simple_value("pve4b-1234567890").is_ok());
+ assert!(key_schema.parse_simple_value("pbss-abcdef0123").is_ok());
+ // PMG and POM are not driven by PDM today, so the schema rejects them; widen the regex
+ // when remote-side support lands.
+ assert!(key_schema.parse_simple_value("pmgb-deadbeef00").is_err());
+ assert!(key_schema.parse_simple_value("pomb-deadbeef00").is_err());
+}
+
+#[test]
+fn verify_serverid_helper() {
+ let entry = SubscriptionKeyEntry {
+ key: "pve4b-aa11bb2233".to_string(),
+ product_type: ProductType::Pve,
+ serverid: Some("AABBCCDD".to_string()),
+ ..Default::default()
+ };
+
+ let mut info = proxmox_subscription::SubscriptionInfo::default();
+ info.serverid = Some("AABBCCDD".to_string());
+ assert_eq!(verify_serverid(&entry, &info).unwrap(), None);
+
+ info.serverid = Some("DEADBEEF".to_string());
+ let mismatch = verify_serverid(&entry, &info).unwrap().unwrap();
+ assert_eq!(mismatch.expected, "AABBCCDD");
+ assert_eq!(mismatch.actual, "DEADBEEF");
+
+ // entry without serverid -> nothing to verify
+ let entry = SubscriptionKeyEntry {
+ key: "pve4b-aa11bb2233".to_string(),
+ product_type: ProductType::Pve,
+ ..Default::default()
+ };
+ assert_eq!(verify_serverid(&entry, &info).unwrap(), None);
+}
diff --git a/lib/pdm-config/src/lib.rs b/lib/pdm-config/src/lib.rs
index 5b9bcca..46ad1a2 100644
--- a/lib/pdm-config/src/lib.rs
+++ b/lib/pdm-config/src/lib.rs
@@ -8,6 +8,7 @@ pub mod domains;
pub mod node;
pub mod remotes;
pub mod setup;
+pub mod subscriptions;
pub mod views;
mod config_version_cache;
diff --git a/lib/pdm-config/src/subscriptions.rs b/lib/pdm-config/src/subscriptions.rs
new file mode 100644
index 0000000..7e930ba
--- /dev/null
+++ b/lib/pdm-config/src/subscriptions.rs
@@ -0,0 +1,102 @@
+//! Read/write subscription key pool configuration.
+//!
+//! Call [`init`] to inject a concrete `SubscriptionKeyConfig` instance before using the
+//! module-level functions.
+//!
+//! The shadow-config functions stash signed `SubscriptionInfo` blobs alongside the plain key
+//! entries, which is intended as future proofing for a more automated (shop) import without having
+//! to adapt the data layer.
+
+use std::sync::OnceLock;
+
+use anyhow::Error;
+
+use proxmox_config_digest::ConfigDigest;
+use proxmox_product_config::{open_api_lockfile, replace_config, ApiLockGuard};
+use proxmox_section_config::typed::{ApiSectionDataEntry, SectionConfigData};
+
+use pdm_api_types::subscription::{SubscriptionKeyEntry, SubscriptionKeyShadow};
+use pdm_buildcfg::configdir;
+
+pub const SUBSCRIPTIONS_CFG_FILENAME: &str = configdir!("/subscriptions.cfg");
+const SUBSCRIPTIONS_SHADOW_FILENAME: &str = configdir!("/subscriptions.shadow");
+pub const SUBSCRIPTIONS_CFG_LOCKFILE: &str = configdir!("/.subscriptions.lock");
+
+static INSTANCE: OnceLock<Box<dyn SubscriptionKeyConfig + Send + Sync>> = OnceLock::new();
+
+fn instance() -> &'static (dyn SubscriptionKeyConfig + Send + Sync) {
+ INSTANCE
+ .get()
+ .expect("subscription key config not initialized")
+ .as_ref()
+}
+
+pub fn lock_config() -> Result<ApiLockGuard, Error> {
+ instance().lock_config()
+}
+
+pub fn config() -> Result<(SectionConfigData<SubscriptionKeyEntry>, ConfigDigest), Error> {
+ instance().config()
+}
+
+pub fn shadow_config() -> Result<SectionConfigData<SubscriptionKeyShadow>, Error> {
+ instance().shadow_config()
+}
+
+pub fn save_config(config: &SectionConfigData<SubscriptionKeyEntry>) -> Result<(), Error> {
+ instance().save_config(config)
+}
+
+pub fn save_shadow(shadow: &SectionConfigData<SubscriptionKeyShadow>) -> Result<(), Error> {
+ instance().save_shadow(shadow)
+}
+
+pub trait SubscriptionKeyConfig {
+ fn config(&self) -> Result<(SectionConfigData<SubscriptionKeyEntry>, ConfigDigest), Error>;
+ fn shadow_config(&self) -> Result<SectionConfigData<SubscriptionKeyShadow>, Error>;
+ fn lock_config(&self) -> Result<ApiLockGuard, Error>;
+ fn save_config(&self, config: &SectionConfigData<SubscriptionKeyEntry>) -> Result<(), Error>;
+ fn save_shadow(&self, shadow: &SectionConfigData<SubscriptionKeyShadow>) -> Result<(), Error>;
+}
+
+pub struct DefaultSubscriptionKeyConfig;
+
+impl SubscriptionKeyConfig for DefaultSubscriptionKeyConfig {
+ fn lock_config(&self) -> Result<ApiLockGuard, Error> {
+ open_api_lockfile(SUBSCRIPTIONS_CFG_LOCKFILE, None, true)
+ }
+
+ fn config(&self) -> Result<(SectionConfigData<SubscriptionKeyEntry>, ConfigDigest), Error> {
+ let content = proxmox_sys::fs::file_read_optional_string(SUBSCRIPTIONS_CFG_FILENAME)?
+ .unwrap_or_default();
+
+ let digest = openssl::sha::sha256(content.as_bytes());
+ let data =
+ SubscriptionKeyEntry::parse_section_config(SUBSCRIPTIONS_CFG_FILENAME, &content)?;
+
+ Ok((data, digest.into()))
+ }
+
+ fn shadow_config(&self) -> Result<SectionConfigData<SubscriptionKeyShadow>, Error> {
+ let content = proxmox_sys::fs::file_read_optional_string(SUBSCRIPTIONS_SHADOW_FILENAME)?
+ .unwrap_or_default();
+ SubscriptionKeyShadow::parse_section_config(SUBSCRIPTIONS_SHADOW_FILENAME, &content)
+ }
+
+ fn save_config(&self, config: &SectionConfigData<SubscriptionKeyEntry>) -> Result<(), Error> {
+ let raw = SubscriptionKeyEntry::write_section_config(SUBSCRIPTIONS_CFG_FILENAME, config)?;
+ replace_config(SUBSCRIPTIONS_CFG_FILENAME, raw.as_bytes())
+ }
+
+ fn save_shadow(&self, shadow: &SectionConfigData<SubscriptionKeyShadow>) -> Result<(), Error> {
+ let raw =
+ SubscriptionKeyShadow::write_section_config(SUBSCRIPTIONS_SHADOW_FILENAME, shadow)?;
+ replace_config(SUBSCRIPTIONS_SHADOW_FILENAME, raw.as_bytes())
+ }
+}
+
+pub fn init(instance: Box<dyn SubscriptionKeyConfig + Send + Sync>) {
+ if INSTANCE.set(instance).is_err() {
+ panic!("subscription key config instance already set");
+ }
+}
diff --git a/server/src/context.rs b/server/src/context.rs
index c5da0af..a4afcdd 100644
--- a/server/src/context.rs
+++ b/server/src/context.rs
@@ -15,6 +15,13 @@ fn default_remote_setup() {
/// Dependency-inject concrete implementations needed at runtime.
pub fn init() -> Result<(), Error> {
+ // The subscription key pool is product-only (PDM stores its own pool of
+ // keys regardless of how remotes are mocked or not), so initialise it on
+ // both paths.
+ pdm_config::subscriptions::init(Box::new(
+ pdm_config::subscriptions::DefaultSubscriptionKeyConfig,
+ ));
+
#[cfg(remote_config = "faked")]
{
use anyhow::bail;
--
2.47.3
* [PATCH datacenter-manager v2 4/8] subscription: add key pool and node status API endpoints
2026-05-07 8:26 [PATCH datacenter-manager v2 0/8] subscription: add central key pool registry with reissue support Thomas Lamprecht
` (2 preceding siblings ...)
2026-05-07 8:26 ` [PATCH datacenter-manager v2 3/8] subscription: add key pool data model and config layer Thomas Lamprecht
@ 2026-05-07 8:26 ` Thomas Lamprecht
2026-05-07 13:23 ` Lukas Wagner
2026-05-07 8:26 ` [PATCH datacenter-manager v2 5/8] ui: add subscription registry with key pool and node status Thomas Lamprecht
` (5 subsequent siblings)
9 siblings, 1 reply; 15+ messages in thread
From: Thomas Lamprecht @ 2026-05-07 8:26 UTC
To: pdm-devel
Add REST endpoints under /subscriptions for the pool, the combined
remote-vs-pool node-status view, and the bulk paths (auto-assign,
apply-pending, clear-pending).
Endpoints touching a specific remote require the matching resource
privilege on that remote in addition to the system-scope bit, so an
operator with global system access alone cannot push keys to remotes
they have no other authority on. Audit-only callers see only the
remotes they may audit on read paths; pending operations skip remotes
they may not modify.
Add takes an all-or-nothing list. Apply-pending runs in a worker that
re-reads its plan when it fires (honouring a re-assign between API
call and worker firing) and bails on the first push failure so the
rest stays pending for retry.
Mutating endpoints accept an optional ConfigDigest; reads expose it
in the response.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
---
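For orientation, the resulting endpoint surface and the intended
staged flow (paths relative to the PDM API root, request bodies
abbreviated):

    GET    /subscriptions/keys                          # list pool entries
    POST   /subscriptions/keys        {"keys": [..]}    # all-or-nothing add
    GET    /subscriptions/keys/{key}                    # single entry
    PUT    /subscriptions/keys/{key}  {"remote","node"} # stage or clear a binding
    DELETE /subscriptions/keys/{key}                    # drop from the pool
    GET    /subscriptions/node-status                   # remote-vs-pool view
    POST   /subscriptions/auto-assign                   # propose assignments
    POST   /subscriptions/apply-pending                 # worker pushes staged keys
    POST   /subscriptions/clear-pending                 # drop staged state only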
lib/pdm-api-types/src/subscription.rs | 9 +
server/src/api/mod.rs | 2 +
server/src/api/resources.rs | 8 +
server/src/api/subscriptions/mod.rs | 967 ++++++++++++++++++++++++++
4 files changed, 986 insertions(+)
create mode 100644 server/src/api/subscriptions/mod.rs
diff --git a/lib/pdm-api-types/src/subscription.rs b/lib/pdm-api-types/src/subscription.rs
index 26ecfba..ead3c1b 100644
--- a/lib/pdm-api-types/src/subscription.rs
+++ b/lib/pdm-api-types/src/subscription.rs
@@ -529,6 +529,15 @@ pub struct RemoteNodeStatus {
pub current_key: Option<String>,
}
+#[api]
+#[derive(Clone, Debug, Default, Serialize, Deserialize, PartialEq, Eq)]
+#[serde(rename_all = "kebab-case")]
+/// Result of the bulk clear-pending API endpoint.
+pub struct ClearPendingResult {
+ /// Number of pool entries whose pending push or reissue was cleared.
+ pub cleared: u32,
+}
+
#[api]
#[derive(Clone, Debug, Default, Serialize, Deserialize, PartialEq, Eq)]
#[serde(rename_all = "kebab-case")]
diff --git a/server/src/api/mod.rs b/server/src/api/mod.rs
index 110191b..9680edc 100644
--- a/server/src/api/mod.rs
+++ b/server/src/api/mod.rs
@@ -18,6 +18,7 @@ pub mod remotes;
pub mod resources;
mod rrd_common;
pub mod sdn;
+pub mod subscriptions;
#[sortable]
const SUBDIRS: SubdirMap = &sorted!([
@@ -31,6 +32,7 @@ const SUBDIRS: SubdirMap = &sorted!([
("resources", &resources::ROUTER),
("nodes", &nodes::ROUTER),
("sdn", &sdn::ROUTER),
+ ("subscriptions", &subscriptions::ROUTER),
("version", &Router::new().get(&API_METHOD_VERSION)),
]);
diff --git a/server/src/api/resources.rs b/server/src/api/resources.rs
index 50315b1..49c3974 100644
--- a/server/src/api/resources.rs
+++ b/server/src/api/resources.rs
@@ -848,6 +848,14 @@ fn get_cached_subscription_info(remote: &str, max_age: u64) -> Option<CachedSubs
}
}
+/// Drop the cached subscription state for a remote, forcing the next read to refetch.
+pub fn invalidate_subscription_info_for_remote(remote_id: &str) {
+ let mut cache = SUBSCRIPTION_CACHE
+ .write()
+ .expect("subscription mutex poisoned");
+ cache.remove(remote_id);
+}
+
/// Update cached subscription data.
///
/// If the cache already contains more recent data we don't insert the passed resources.
diff --git a/server/src/api/subscriptions/mod.rs b/server/src/api/subscriptions/mod.rs
new file mode 100644
index 0000000..26d9ecf
--- /dev/null
+++ b/server/src/api/subscriptions/mod.rs
@@ -0,0 +1,967 @@
+//! Subscription key pool management API.
+//!
+//! Manages a PDM-side pool of subscription keys, proposes key-to-node assignments, and pushes
+//! assigned keys to remote nodes. All entries are added manually for now; each entry is a bare
+//! `key` string with the product type derived from its prefix.
+
+use anyhow::{bail, format_err, Context, Error};
+use futures::future::join_all;
+
+use proxmox_access_control::CachedUserInfo;
+use proxmox_client::HttpApiClient;
+use proxmox_config_digest::ConfigDigest;
+use proxmox_log::{info, warn};
+use proxmox_router::{
+ http_bail, http_err, list_subdirs_api_method, Permission, Router, RpcEnvironment, SubdirMap,
+};
+use proxmox_schema::api;
+use proxmox_section_config::typed::SectionConfigData;
+use proxmox_sortable_macro::sortable;
+
+use pdm_api_types::remotes::{Remote, REMOTE_ID_SCHEMA};
+use pdm_api_types::subscription::{
+ pick_best_pve_socket_key, socket_count_from_key, ClearPendingResult, ProductType,
+ ProposedAssignment, RemoteNodeStatus, SubscriptionKeyEntry, SubscriptionKeySource,
+ SubscriptionLevel, SUBSCRIPTION_KEY_SCHEMA,
+};
+use pdm_api_types::{
+ Authid, NODE_SCHEMA, PRIV_RESOURCE_AUDIT, PRIV_RESOURCE_MODIFY, PRIV_SYS_AUDIT, PRIV_SYS_MODIFY,
+};
+
+use crate::api::resources::{
+ get_subscription_info_for_remote, invalidate_subscription_info_for_remote,
+};
+
+pub const ROUTER: Router = Router::new()
+ .get(&list_subdirs_api_method!(SUBDIRS))
+ .subdirs(SUBDIRS);
+
+#[sortable]
+const SUBDIRS: SubdirMap = &sorted!([
+ (
+ "apply-pending",
+ &Router::new().post(&API_METHOD_APPLY_PENDING)
+ ),
+ ("auto-assign", &Router::new().post(&API_METHOD_AUTO_ASSIGN)),
+ (
+ "clear-pending",
+ &Router::new().post(&API_METHOD_CLEAR_PENDING)
+ ),
+ ("keys", &KEYS_ROUTER),
+ ("node-status", &Router::new().get(&API_METHOD_NODE_STATUS)),
+]);
+
+const KEYS_ROUTER: Router = Router::new()
+ .get(&API_METHOD_LIST_KEYS)
+ .post(&API_METHOD_ADD_KEYS)
+ .match_all("key", &KEY_ITEM_ROUTER);
+
+const KEY_ITEM_ROUTER: Router = Router::new()
+ .get(&API_METHOD_GET_KEY)
+ .put(&API_METHOD_ASSIGN_KEY)
+ .delete(&API_METHOD_DELETE_KEY);
+
+/// Force-fresh node-status query so the next view reflects the new state instead of returning a
+/// cached entry up to 5 minutes later. Used by auto-assign / apply-pending / clear-pending to
+/// avoid double-driving a node that has already moved to Active in the cache window.
+const FRESH_NODE_STATUS_MAX_AGE: u64 = 0;
+
+/// Cached node-status freshness used by read-only views. Five minutes matches the resource-cache
+/// convention and is short enough that admins rarely see stale data on the panel.
+const PANEL_NODE_STATUS_MAX_AGE: u64 = 5 * 60;
+
+#[api(
+ returns: {
+ type: Array,
+ description: "List of subscription keys in the pool.",
+ items: { type: SubscriptionKeyEntry },
+ },
+ access: {
+ permission: &Permission::Privilege(&["system"], PRIV_SYS_AUDIT, false),
+ },
+)]
+/// List all subscription keys in the key pool.
+fn list_keys(rpcenv: &mut dyn RpcEnvironment) -> Result<Vec<SubscriptionKeyEntry>, Error> {
+ let (config, digest) = pdm_config::subscriptions::config()?;
+ rpcenv["digest"] = digest.to_hex().into();
+ Ok(config
+ .into_iter()
+ .map(|(_id, mut entry)| {
+ entry.level = SubscriptionLevel::from_key(Some(&entry.key));
+ entry
+ })
+ .collect())
+}
+
+#[api(
+ input: {
+ properties: {
+ keys: {
+ type: Array,
+ description: "Subscription keys to add to the pool.",
+ items: { schema: SUBSCRIPTION_KEY_SCHEMA },
+ },
+ digest: {
+ type: ConfigDigest,
+ optional: true,
+ },
+ },
+ },
+ access: {
+ permission: &Permission::Privilege(&["system"], PRIV_SYS_MODIFY, false),
+ },
+)]
+/// Add one or more subscription keys to the pool.
+///
+/// The key prefix determines the product type via [`ProductType::from_key`]. The schema regex
+/// rejects anything that isn't a PVE or PBS key today; widen [`PRODUCT_KEY_REGEX`] in lockstep
+/// with `from_key` and `push_key_to_remote` when PMG/POM remote support lands.
+///
+/// All-or-nothing: every key is validated for prefix and uniqueness (against the existing pool
+/// and within the input list) before any change is persisted. A single bad key fails the
+/// request and leaves the pool untouched.
+fn add_keys(keys: Vec<String>, digest: Option<ConfigDigest>) -> Result<(), Error> {
+ if keys.is_empty() {
+ http_bail!(BAD_REQUEST, "no keys provided");
+ }
+
+ let mut entries: Vec<SubscriptionKeyEntry> = Vec::with_capacity(keys.len());
+ let mut seen: std::collections::HashSet<&str> = std::collections::HashSet::new();
+ for key in &keys {
+ if !seen.insert(key.as_str()) {
+ http_bail!(BAD_REQUEST, "duplicate key in input: '{key}'");
+ }
+ let product_type = ProductType::from_key(key).ok_or_else(|| {
+ warn!("rejecting unrecognised key prefix '{key}', possibly a new product line");
+ format_err!("unrecognised key format: {key}")
+ })?;
+ entries.push(SubscriptionKeyEntry {
+ key: key.clone(),
+ product_type,
+ level: SubscriptionLevel::from_key(Some(key)),
+ source: SubscriptionKeySource::Manual,
+ ..Default::default()
+ });
+ }
+
+ let _lock = pdm_config::subscriptions::lock_config()?;
+ let (mut config, config_digest) = pdm_config::subscriptions::config()?;
+ config_digest.detect_modification(digest.as_ref())?;
+
+ for entry in &entries {
+ if config.contains_key(&entry.key) {
+ http_bail!(CONFLICT, "key '{}' already exists in pool", entry.key);
+ }
+ }
+
+ for entry in entries {
+ config.insert(entry.key.clone(), entry);
+ }
+
+ pdm_config::subscriptions::save_config(&config)
+}
+
+#[api(
+ input: {
+ properties: {
+ key: { schema: SUBSCRIPTION_KEY_SCHEMA },
+ },
+ },
+ returns: { type: SubscriptionKeyEntry },
+ access: {
+ permission: &Permission::Privilege(&["system"], PRIV_SYS_AUDIT, false),
+ },
+)]
+/// Get details for a single key.
+fn get_key(key: String, rpcenv: &mut dyn RpcEnvironment) -> Result<SubscriptionKeyEntry, Error> {
+ let (config, digest) = pdm_config::subscriptions::config()?;
+ rpcenv["digest"] = digest.to_hex().into();
+ let mut entry = config
+ .get(&key)
+ .cloned()
+ .ok_or_else(|| http_err!(NOT_FOUND, "key '{key}' not found in pool"))?;
+ entry.level = SubscriptionLevel::from_key(Some(&entry.key));
+ Ok(entry)
+}
+
+#[api(
+ input: {
+ properties: {
+ key: { schema: SUBSCRIPTION_KEY_SCHEMA },
+ digest: {
+ type: ConfigDigest,
+ optional: true,
+ },
+ },
+ },
+ access: {
+ permission: &Permission::Privilege(&["system"], PRIV_SYS_MODIFY, false),
+ },
+)]
+/// Remove a key from the pool.
+///
+/// If the key is currently assigned to a remote node, the caller must also have
+/// `PRIV_RESOURCE_MODIFY` on that remote, so an audit-only operator cannot release a key
+/// another admin had pinned. Refuses if the key is currently the live active key on its bound
+/// node, since dropping the pool entry would orphan that subscription on the remote: the
+/// operator must Reissue Key first.
+async fn delete_key(
+ key: String,
+ digest: Option<ConfigDigest>,
+ rpcenv: &mut dyn RpcEnvironment,
+) -> Result<(), Error> {
+ let auth_id: Authid = rpcenv
+ .get_auth_id()
+ .context("no authid available")?
+ .parse()?;
+ let user_info = CachedUserInfo::new()?;
+
+ // Live fetch must happen before the lock since the lock cannot span an .await; the
+ // post-lock check below only refuses if the binding still matches what we observed.
+ let synced_block = check_synced_assignment_for_unassign(&key).await?;
+
+ let _lock = pdm_config::subscriptions::lock_config()?;
+ let (mut config, config_digest) = pdm_config::subscriptions::config()?;
+ config_digest.detect_modification(digest.as_ref())?;
+ let mut shadow = pdm_config::subscriptions::shadow_config()?;
+
+ let Some(entry) = config.get(&key) else {
+ http_bail!(NOT_FOUND, "key '{key}' not found in pool");
+ };
+
+ if let Some(assigned_remote) = entry.remote.as_deref() {
+ user_info.check_privs(
+ &auth_id,
+ &["resource", assigned_remote],
+ PRIV_RESOURCE_MODIFY,
+ false,
+ )?;
+ }
+
+ if let Some((blocking_remote, blocking_node)) = synced_block {
+ if entry.remote.as_deref() == Some(blocking_remote.as_str())
+ && entry.node.as_deref() == Some(blocking_node.as_str())
+ {
+ http_bail!(
+ BAD_REQUEST,
+ "key '{key}' is currently active on {blocking_remote}/{blocking_node}; \
+ use Reissue Key to remove it from the remote first"
+ );
+ }
+ }
+
+ config.remove(&key);
+ // Cascade the shadow drop. Order mirrors `pdm-config/src/remotes.rs`: shadow first, then
+ // main, so a partial failure cannot leave a key entry that lost its signed blob.
+ shadow.remove(&key);
+ pdm_config::subscriptions::save_shadow(&shadow)?;
+ pdm_config::subscriptions::save_config(&config)?;
+
+ Ok(())
+}
+
+#[api(
+ input: {
+ properties: {
+ key: { schema: SUBSCRIPTION_KEY_SCHEMA },
+ remote: {
+ schema: REMOTE_ID_SCHEMA,
+ optional: true,
+ },
+ node: {
+ // NODE_SCHEMA rejects path-traversal input before it ends up interpolated into
+ // the remote URL `/api2/extjs/nodes/{node}/subscription`.
+ schema: NODE_SCHEMA,
+ optional: true,
+ },
+ digest: {
+ type: ConfigDigest,
+ optional: true,
+ },
+ },
+ },
+ access: {
+ permission: &Permission::Privilege(&["system"], PRIV_SYS_MODIFY, false),
+ },
+)]
+/// Assign a key to a remote node, or unassign it (omit remote and node).
+///
+/// `PRIV_SYS_MODIFY` lets the caller touch the pool config; per-remote `PRIV_RESOURCE_MODIFY`
+/// is enforced inside this handler so an operator cannot push a key to a remote they have no
+/// other authority on.
+async fn assign_key(
+ key: String,
+ remote: Option<String>,
+ node: Option<String>,
+ digest: Option<ConfigDigest>,
+ rpcenv: &mut dyn RpcEnvironment,
+) -> Result<(), Error> {
+ let auth_id: Authid = rpcenv
+ .get_auth_id()
+ .context("no authid available")?
+ .parse()?;
+ let user_info = CachedUserInfo::new()?;
+
+ // Best-effort orphan-prevention for the unassign path: if the entry is currently bound and
+ // synced (the assigned key is the live current_key on its remote), refuse and direct the
+ // operator to Reissue Key. The live fetch must happen before the lock since that lock cannot
+ // span an .await; we re-check the binding under the lock and only refuse if it still
+ // matches what we observed live.
+ let synced_block = if remote.is_none() && node.is_none() {
+ check_synced_assignment_for_unassign(&key).await?
+ } else {
+ None
+ };
+
+ let _lock = pdm_config::subscriptions::lock_config()?;
+ let (mut config, config_digest) = pdm_config::subscriptions::config()?;
+ config_digest.detect_modification(digest.as_ref())?;
+
+ let Some(stored_entry) = config.get(&key).cloned() else {
+ http_bail!(NOT_FOUND, "key '{key}' not found in pool");
+ };
+ let product_type = stored_entry.product_type;
+
+ match (&remote, &node) {
+ (Some(remote_name), Some(node_name)) => {
+ user_info.check_privs(
+ &auth_id,
+ &["resource", remote_name],
+ PRIV_RESOURCE_MODIFY,
+ false,
+ )?;
+
+ // Reassigning away from a previous remote requires modify on that remote too,
+ // otherwise an audit-only-on-A operator could effectively pull a key off A by
+ // re-binding it to a remote B they can modify and applying the push (which makes
+ // the shop reissue the serverid to B and invalidates A).
+ if let Some(prev_remote) = stored_entry.remote.as_deref() {
+ if prev_remote != remote_name {
+ user_info.check_privs(
+ &auth_id,
+ &["resource", prev_remote],
+ PRIV_RESOURCE_MODIFY,
+ false,
+ )?;
+ }
+ }
+
+ let (remotes_config, _) = pdm_config::remotes::config()?;
+ let remote_entry = remotes_config
+ .get(remote_name)
+ .ok_or_else(|| http_err!(NOT_FOUND, "remote '{remote_name}' not found"))?;
+
+ if !product_type.matches_remote_type(remote_entry.ty) {
+ http_bail!(
+ BAD_REQUEST,
+ "key type '{product_type}' does not match remote type '{}'",
+ remote_entry.ty
+ );
+ }
+
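+ // Enforce at most one pool key per (remote, node) target; a second binding would just
+ // fight over the same node's subscription slot.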
+ for (_id, other) in config.iter() {
+ if other.key != key
+ && other.remote.as_deref() == Some(remote_name.as_str())
+ && other.node.as_deref() == Some(node_name.as_str())
+ {
+ http_bail!(
+ CONFLICT,
+ "key '{}' is already assigned to {remote_name}/{node_name}",
+ other.key
+ );
+ }
+ }
+
+ let entry = config.get_mut(&key).unwrap();
+ entry.remote = Some(remote_name.clone());
+ entry.node = Some(node_name.clone());
+ }
+ (None, None) => {
+ // Unassign also requires modify on the previously-pinned remote, so an audit-only
+ // operator cannot rip a key off a node they cannot otherwise touch.
+ if let Some(prev_remote) = stored_entry.remote.as_deref() {
+ user_info.check_privs(
+ &auth_id,
+ &["resource", prev_remote],
+ PRIV_RESOURCE_MODIFY,
+ false,
+ )?;
+ }
+ // Honour the pre-lock synced check only if the binding still matches what we
+ // observed; if the binding moved between the live fetch and lock, the orphan check
+ // is moot and we let the unassign through.
+ if let Some((blocking_remote, blocking_node)) = synced_block {
+ if stored_entry.remote.as_deref() == Some(blocking_remote.as_str())
+ && stored_entry.node.as_deref() == Some(blocking_node.as_str())
+ {
+ http_bail!(
+ BAD_REQUEST,
+ "key '{key}' is currently active on {blocking_remote}/{blocking_node}; \
+ use Reissue Key to remove it from the remote first"
+ );
+ }
+ }
+ let entry = config.get_mut(&key).unwrap();
+ entry.remote = None;
+ entry.node = None;
+ }
+ _ => {
+ http_bail!(
+ BAD_REQUEST,
+ "both 'remote' and 'node' must be provided, or neither"
+ );
+ }
+ }
+
+ pdm_config::subscriptions::save_config(&config)?;
+
+ Ok(())
+}
+
+/// Pre-lock check for [`assign_key`]'s unassign path: returns the (remote, node) the entry is
+/// currently active on, if any, so the lock-protected branch can refuse the unassign and prompt
+/// the operator to Reissue Key instead. Returns `None` for entries with no binding, no live
+/// subscription, or a live subscription whose key does not match the entry.
+async fn check_synced_assignment_for_unassign(
+ key: &str,
+) -> Result<Option<(String, String)>, Error> {
+ let (config, _) = pdm_config::subscriptions::config()?;
+ let Some(entry) = config.get(key) else {
+ return Ok(None);
+ };
+ let (Some(prev_remote), Some(prev_node)) = (entry.remote.clone(), entry.node.clone()) else {
+ return Ok(None);
+ };
+ let (remotes_config, _) = pdm_config::remotes::config()?;
+ let Some(remote_entry) = remotes_config.get(&prev_remote) else {
+ return Ok(None);
+ };
+ let live = match get_subscription_info_for_remote(remote_entry, FRESH_NODE_STATUS_MAX_AGE).await
+ {
+ Ok(v) => v,
+ Err(_) => return Ok(None),
+ };
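+ // Only an Active live subscription whose key matches this pool entry blocks the unassign.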
+ let synced = live
+ .get(&prev_node)
+ .and_then(|info| info.as_ref())
+ .map(|info| {
+ info.status == proxmox_subscription::SubscriptionStatus::Active
+ && info.key.as_deref() == Some(key)
+ })
+ .unwrap_or(false);
+ Ok(synced.then_some((prev_remote, prev_node)))
+}
+
+/// Push a single key to its assigned remote node. Operates on a borrowed `Remote` so the
+/// caller can fetch the remotes-config once and reuse it.
+async fn push_key_to_remote(remote: &Remote, key: &str, node_name: &str) -> Result<(), Error> {
+ let product_type =
+ ProductType::from_key(key).ok_or_else(|| format_err!("unrecognised key format: {key}"))?;
+
+ // PVE and PBS share `proxmox_client::Client`, so `make_pbs_client_and_login` works for both;
+ // only the PUT path differs.
+ let path = match product_type {
+ ProductType::Pve => format!("/api2/extjs/nodes/{node_name}/subscription"),
+ ProductType::Pbs => "/api2/extjs/nodes/localhost/subscription".to_string(),
+ ProductType::Pmg | ProductType::Pom => {
+ bail!("PDM cannot push '{product_type}' keys: no remote support yet");
+ }
+ };
+
+ let client = crate::connection::make_pbs_client_and_login(remote).await?;
+
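+ // PUT stores the key on the node; the follow-up POST asks the node to re-check the
+ // subscription so the reported status reflects the new key right away.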
+ client
+ .0
+ .put(&path, &serde_json::json!({ "key": key }))
+ .await?;
+ client.0.post(&path, &serde_json::json!({})).await?;
+
+ info!("pushed key '{key}' to {}/{node_name}", remote.id);
+ Ok(())
+}
+
+#[api(
+ input: {
+ properties: {
+ "max-age": {
+ type: u64,
+ optional: true,
+ description: "Override the cache freshness window in seconds. \
+ Default 300 for panel views; pass 0 to force a fresh query.",
+ },
+ },
+ },
+ returns: {
+ type: Array,
+ description: "Subscription status of all remote nodes the user can audit.",
+ items: { type: RemoteNodeStatus },
+ },
+ access: {
+ permission: &Permission::Privilege(&["system"], PRIV_SYS_AUDIT, false),
+ },
+)]
+/// Get the subscription status of every remote node the caller can audit, combined with key pool
+/// assignment information.
+///
+/// Per-remote `PRIV_RESOURCE_AUDIT` is enforced inside the handler so users only see remotes
+/// they may audit.
+async fn node_status(
+ max_age: Option<u64>,
+ rpcenv: &mut dyn RpcEnvironment,
+) -> Result<Vec<RemoteNodeStatus>, Error> {
+ collect_node_status(max_age.unwrap_or(PANEL_NODE_STATUS_MAX_AGE), rpcenv).await
+}
+
+/// Shared helper: fan out subscription queries to all remotes the caller has audit privilege on,
+/// in parallel, reusing the per-remote `SUBSCRIPTION_CACHE` via `get_subscription_info_for_remote`.
+/// Joins the results with the key-pool assignment table.
+async fn collect_node_status(
+ max_age: u64,
+ rpcenv: &mut dyn RpcEnvironment,
+) -> Result<Vec<RemoteNodeStatus>, Error> {
+ let auth_id: Authid = rpcenv
+ .get_auth_id()
+ .context("no authid available")?
+ .parse()?;
+ let user_info = CachedUserInfo::new()?;
+
+ let visible_remotes: Vec<(String, Remote)> = crate::api::remotes::RemoteIterator::new()?
+ .any_privs(&user_info, &auth_id, PRIV_RESOURCE_AUDIT)
+ .into_iter()
+ .collect();
+
+ let (keys_config, _) = pdm_config::subscriptions::config()?;
+
+ // `get_subscription_info_for_remote` re-uses the per-remote `SUBSCRIPTION_CACHE` so this
+ // fan-out is safe to run concurrently.
+ let fetch = visible_remotes.iter().map(|(name, remote)| async move {
+ let res = get_subscription_info_for_remote(remote, max_age).await;
+ (name.clone(), remote.ty, res)
+ });
+ let results = join_all(fetch).await;
+
+ let mut out = Vec::new();
+ for (remote_name, remote_ty, result) in results {
+ let node_infos = match result {
+ Ok(info) => info,
+ Err(err) => {
+ warn!("failed to query subscription for remote {remote_name}: {err}");
+ continue;
+ }
+ };
+
+ for (node_name, node_info) in &node_infos {
+ let (status, level, sockets, current_key) = match node_info {
+ Some(info) => (info.status, info.level, info.sockets, info.key.clone()),
+ None => (
+ proxmox_subscription::SubscriptionStatus::NotFound,
+ SubscriptionLevel::None,
+ None,
+ None,
+ ),
+ };
+
+ let assigned_key = keys_config
+ .iter()
+ .find(|(_id, entry)| {
+ entry.remote.as_deref() == Some(remote_name.as_str())
+ && entry.node.as_deref() == Some(node_name.as_str())
+ })
+ .map(|(_id, entry)| entry.key.clone());
+
+ out.push(RemoteNodeStatus {
+ remote: remote_name.clone(),
+ ty: remote_ty,
+ node: node_name.to_string(),
+ sockets,
+ status,
+ level,
+ assigned_key,
+ current_key,
+ });
+ }
+ }
+
+ out.sort_by(|a, b| (&a.remote, &a.node).cmp(&(&b.remote, &b.node)));
+ Ok(out)
+}
+
+#[api(
+ input: {
+ properties: {
+ apply: {
+ type: bool,
+ optional: true,
+ default: false,
+ description: "Actually apply the proposed assignments. Without this, only a preview is returned.",
+ },
+ },
+ },
+ returns: {
+ type: Array,
+ description: "List of proposed or applied assignments.",
+ items: { type: ProposedAssignment },
+ },
+ access: {
+ permission: &Permission::Privilege(&["system"], PRIV_SYS_MODIFY, false),
+ },
+)]
+/// Propose or apply automatic key-to-node assignments.
+///
+/// Matches unused pool keys to remote nodes that do not yet have a pool-assigned key, picking
+/// the smallest PVE key that covers each node's socket count. When `apply=true`, the live node
+/// statuses are fetched first (without holding the config lock - sync locks must not span
+/// awaits), then proposals are computed and persisted under the lock with a per-key re-check
+/// against the now-current pool state, so a parallel admin edit between fetch and apply does
+/// not get silently overwritten.
+async fn auto_assign(
+ apply: Option<bool>,
+ rpcenv: &mut dyn RpcEnvironment,
+) -> Result<Vec<ProposedAssignment>, Error> {
+ let apply = apply.unwrap_or(false);
+
+ let auth_id: Authid = rpcenv
+ .get_auth_id()
+ .context("no authid available")?
+ .parse()?;
+ let user_info = CachedUserInfo::new()?;
+
+ let node_statuses = collect_node_status(FRESH_NODE_STATUS_MAX_AGE, rpcenv).await?;
+
+ if !apply {
+ let (config, _digest) = pdm_config::subscriptions::config()?;
+ return Ok(compute_proposals(&config, &node_statuses));
+ }
+
+ let _lock = pdm_config::subscriptions::lock_config()?;
+ let (mut config, _digest) = pdm_config::subscriptions::config()?;
+ let mut proposals = compute_proposals(&config, &node_statuses);
+
+ // Audit-only callers may see a remote in the preview but must not be able to stage a write
+ // for it that another admin would later push on their behalf.
+ proposals.retain(|p| {
+ user_info.lookup_privs(&auth_id, &["resource", &p.remote]) & PRIV_RESOURCE_MODIFY != 0
+ });
+
+ for p in &proposals {
+ if let Some(entry) = config.get_mut(&p.key) {
+ // Skip keys that another writer assigned between the preview and the lock.
+ if entry.remote.is_none() {
+ entry.remote = Some(p.remote.clone());
+ entry.node = Some(p.node.clone());
+ }
+ }
+ }
+ pdm_config::subscriptions::save_config(&config)?;
+
+ Ok(proposals)
+}
+
+fn compute_proposals(
+ config: &SectionConfigData<SubscriptionKeyEntry>,
+ node_statuses: &[RemoteNodeStatus],
+) -> Vec<ProposedAssignment> {
+ let mut target_nodes: Vec<&RemoteNodeStatus> = node_statuses
+ .iter()
+ .filter(|n| {
+ n.assigned_key.is_none() && n.status != proxmox_subscription::SubscriptionStatus::Active
+ })
+ .collect();
+
+ // Sort nodes by socket count descending so large PVE nodes get keys first; PBS nodes
+ // report no socket count and therefore sort last.
+ target_nodes.sort_by(|a, b| b.sockets.unwrap_or(0).cmp(&a.sockets.unwrap_or(0)));
+
+ let mut proposals: Vec<ProposedAssignment> = Vec::new();
+ let mut taken: std::collections::HashSet<String> = std::collections::HashSet::new();
+
+ for node in &target_nodes {
+ let remote_type = node.ty;
+
+ let candidates = config.iter().filter(|(id, entry)| {
+ entry.remote.is_none()
+ && !taken.contains(*id)
+ && entry.product_type.matches_remote_type(remote_type)
+ });
+
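+ // PVE keys are socket-scaled, so pick the smallest free key that still covers the
+ // node; PBS subscriptions are per-server, so any compatible free key fits.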
+ let best_key = if remote_type == pdm_api_types::remotes::RemoteType::Pve {
+ let node_sockets = node.sockets.unwrap_or(1) as u32;
+ pick_best_pve_socket_key(
+ node_sockets,
+ candidates.map(|(id, entry)| (id.to_string(), entry.key.as_str())),
+ )
+ } else {
+ candidates.map(|(id, _)| id.to_string()).next()
+ };
+
+ if let Some(key_id) = best_key {
+ let ks = config
+ .get(&key_id)
+ .and_then(|e| socket_count_from_key(&e.key));
+ taken.insert(key_id.clone());
+ proposals.push(ProposedAssignment {
+ key: key_id,
+ remote: node.remote.clone(),
+ node: node.node.clone(),
+ key_sockets: ks,
+ node_sockets: node.sockets,
+ });
+ }
+ }
+
+ proposals
+}
+
+#[api(
+ returns: {
+ type: String,
+ optional: true,
+ description: "Task UPID; absent when there is nothing pending to push.",
+ },
+ access: {
+ permission: &Permission::Privilege(&["system"], PRIV_SYS_MODIFY, false),
+ },
+)]
+/// Push every pending key assignment to its remote node.
+///
+/// Pending = the live node does not confirm the assigned key as its current active subscription
+/// (status not Active, a different `current_key`, or the remote did not respond / is gone). Each
+/// push is logged from a worker task so the admin can follow progress.
+///
+/// The worker bails on the first failure; the remaining entries stay pending so the operator
+/// can fix the underlying issue (or clear that one assignment) and trigger another apply.
+///
+/// Returns `None` when nothing is pending so the caller can show a short info message instead of
+/// opening a task progress dialog for a no-op worker.
+async fn apply_pending(rpcenv: &mut dyn RpcEnvironment) -> Result<Option<String>, Error> {
+ let auth_id: Authid = rpcenv
+ .get_auth_id()
+ .context("no authid available")?
+ .parse()?;
+ let user_info = CachedUserInfo::new()?;
+
+ let node_statuses = collect_node_status(FRESH_NODE_STATUS_MAX_AGE, rpcenv).await?;
+ let pending = compute_pending(&user_info, &auth_id, &node_statuses)?;
+
+ if pending.is_empty() {
+ return Ok(None);
+ }
+
+ let worker_auth = auth_id.clone();
+ let upid = proxmox_rest_server::WorkerTask::spawn(
+ "subscription-push",
+ None,
+ auth_id.to_string(),
+ true,
+ move |_worker| async move { run_apply_pending(worker_auth).await },
+ )?;
+
+ Ok(Some(upid))
+}
+
+/// Re-validate and run the apply-pending plan from inside a worker.
+///
+/// The worker re-reads remotes and the pool config so a reassign or removal between the API call
+/// returning a UPID and the worker firing is honoured (pushing the old key to a node after the
+/// operator retracted the assignment was a real footgun).
+async fn run_apply_pending(auth_id: Authid) -> Result<(), Error> {
+ let user_info = CachedUserInfo::new()?;
+ let (remotes_config, _) = pdm_config::remotes::config()?;
+ let (config, _) = pdm_config::subscriptions::config()?;
+
+ let node_statuses = collect_status_uncached(&remotes_config).await;
+ let pending = compute_pending(&user_info, &auth_id, &node_statuses)?;
+
+ if pending.is_empty() {
+ info!("apply-pending: nothing to do (state changed since the API call)");
+ return Ok(());
+ }
+
+ let total = pending.len();
+ let mut ok = 0usize;
+
+ for entry in pending {
+ let Some(remote) = remotes_config.get(&entry.remote) else {
+ bail!(
+ "remote '{}' vanished, aborting after {ok}/{total} successful pushes",
+ entry.remote,
+ );
+ };
+ // Honour the case where the operator unassigned the key while the worker was queued.
+ if !pool_assignment_still_valid(&config, &entry) {
+ info!(
+ "skipping {}/{}: pool assignment changed before worker ran",
+ entry.remote, entry.node
+ );
+ continue;
+ }
+
+ info!(
+ "pushing {} to {}/{}...",
+ entry.key, entry.remote, entry.node
+ );
+ if let Err(err) = push_key_to_remote(remote, &entry.key, &entry.node).await {
+ bail!(
+ "push of {} to {}/{} failed after {ok}/{total} successful pushes: {err}",
+ entry.key,
+ entry.remote,
+ entry.node,
+ );
+ }
+ info!(" success");
+ invalidate_subscription_info_for_remote(&entry.remote);
+ ok += 1;
+ }
+
+ info!("finished: {ok}/{total} pushes succeeded");
+ Ok(())
+}
+
+#[api(
+ returns: { type: ClearPendingResult },
+ access: {
+ permission: &Permission::Privilege(&["system"], PRIV_SYS_MODIFY, false),
+ },
+)]
+/// Clear every pending assignment in one bulk transaction.
+///
+/// Pending = pool key bound to a remote node whose live state does not confirm the assignment
+/// (status not Active, a different `current_key`, or no row returned at all because the remote is
+/// unreachable / the node is gone). Clears only those entries the caller has
+/// `PRIV_RESOURCE_MODIFY` on; remotes the caller may only audit are skipped. Mirrors
+/// `apply-pending` but drops the assignments instead of pushing them, so an operator can disown
+/// stuck assignments without first having to bring the target back online.
+async fn clear_pending(rpcenv: &mut dyn RpcEnvironment) -> Result<ClearPendingResult, Error> {
+ let auth_id: Authid = rpcenv
+ .get_auth_id()
+ .context("no authid available")?
+ .parse()?;
+ let user_info = CachedUserInfo::new()?;
+
+ let node_statuses = collect_node_status(FRESH_NODE_STATUS_MAX_AGE, rpcenv).await?;
+ let pending = compute_pending(&user_info, &auth_id, &node_statuses)?;
+
+ if pending.is_empty() {
+ return Ok(ClearPendingResult { cleared: 0 });
+ }
+
+ let _lock = pdm_config::subscriptions::lock_config()?;
+ let (mut config, _digest) = pdm_config::subscriptions::config()?;
+
+ let mut cleared: u32 = 0;
+ for entry in &pending {
+ // Re-check inside the lock so a concurrent reassign is not silently overwritten.
+ if let Some(stored) = config.get_mut(&entry.key) {
+ if stored.remote.as_deref() == Some(entry.remote.as_str())
+ && stored.node.as_deref() == Some(entry.node.as_str())
+ {
+ stored.remote = None;
+ stored.node = None;
+ cleared += 1;
+ }
+ }
+ }
+
+ if cleared > 0 {
+ pdm_config::subscriptions::save_config(&config)?;
+ }
+
+ Ok(ClearPendingResult { cleared })
+}
+
+/// Plan entry for one pending push.
+#[derive(Clone, Debug)]
+struct PendingEntry {
+ key: String,
+ remote: String,
+ node: String,
+}
+
+fn compute_pending(
+ user_info: &CachedUserInfo,
+ auth_id: &Authid,
+ node_statuses: &[RemoteNodeStatus],
+) -> Result<Vec<PendingEntry>, Error> {
+ let (config, _) = pdm_config::subscriptions::config()?;
+
+ Ok(config
+ .iter()
+ .filter_map(|(_id, entry)| {
+ let remote = entry.remote.as_deref()?;
+ let node = entry.node.as_deref()?;
+
+ if user_info.lookup_privs(auth_id, &["resource", remote]) & PRIV_RESOURCE_MODIFY == 0 {
+ return None;
+ }
+
+ // Treat anything other than "Active with the assigned key as the live current_key"
+ // as pending, including unreachable remotes, so an operator can clear stuck
+ // assignments without first having to bring the target back online.
+ let is_pending = match node_statuses
+ .iter()
+ .find(|n| n.remote == remote && n.node == node)
+ {
+ Some(n) => {
+ n.status != proxmox_subscription::SubscriptionStatus::Active
+ || n.current_key.as_deref() != Some(entry.key.as_str())
+ }
+ None => true,
+ };
+
+ is_pending.then(|| PendingEntry {
+ key: entry.key.clone(),
+ remote: remote.to_string(),
+ node: node.to_string(),
+ })
+ })
+ .collect())
+}
+
+fn pool_assignment_still_valid(
+ config: &SectionConfigData<SubscriptionKeyEntry>,
+ entry: &PendingEntry,
+) -> bool {
+ let Some(stored) = config.get(&entry.key) else {
+ return false;
+ };
+ stored.remote.as_deref() == Some(entry.remote.as_str())
+ && stored.node.as_deref() == Some(entry.node.as_str())
+}
+
+/// Like [`collect_node_status`] but bypasses the auth filter and skips the key-pool join
+/// (`assigned_key` stays `None`), for the apply-pending worker which gates each entry through
+/// its own per-remote priv check based on the persisted pool plan.
+async fn collect_status_uncached(
+ remotes_config: &SectionConfigData<Remote>,
+) -> Vec<RemoteNodeStatus> {
+ let fetch = remotes_config.iter().map(|(name, remote)| async move {
+ let res = get_subscription_info_for_remote(remote, FRESH_NODE_STATUS_MAX_AGE).await;
+ (name.to_string(), remote.ty, res)
+ });
+ let results = join_all(fetch).await;
+
+ let mut out = Vec::new();
+ for (remote_name, remote_ty, result) in results {
+ let Ok(node_infos) = result else { continue };
+ for (node_name, node_info) in &node_infos {
+ let (status, level, sockets, current_key) = match node_info {
+ Some(info) => (info.status, info.level, info.sockets, info.key.clone()),
+ None => (
+ proxmox_subscription::SubscriptionStatus::NotFound,
+ SubscriptionLevel::None,
+ None,
+ None,
+ ),
+ };
+ out.push(RemoteNodeStatus {
+ remote: remote_name.clone(),
+ ty: remote_ty,
+ node: node_name.to_string(),
+ sockets,
+ status,
+ level,
+ assigned_key: None,
+ current_key,
+ });
+ }
+ }
+ out
+}
--
2.47.3
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH datacenter-manager v2 5/8] ui: add subscription registry with key pool and node status
2026-05-07 8:26 [PATCH datacenter-manager v2 0/8] subscription: add central key pool registry with reissue support Thomas Lamprecht
` (3 preceding siblings ...)
2026-05-07 8:26 ` [PATCH datacenter-manager v2 4/8] subscription: add key pool and node status API endpoints Thomas Lamprecht
@ 2026-05-07 8:26 ` Thomas Lamprecht
2026-05-07 8:26 ` [PATCH datacenter-manager v2 6/8] cli: add subscription key pool management subcommands Thomas Lamprecht
` (4 subsequent siblings)
9 siblings, 0 replies; 15+ messages in thread
From: Thomas Lamprecht @ 2026-05-07 8:26 UTC (permalink / raw)
To: pdm-devel
Add a top-level Subscription Registry view with a Key Pool panel
next to a Node Status tree.
The Add dialog takes a textarea so an operator can paste several
keys at once, newline or comma separated; the backend validates the
whole batch atomically. The Assign dialog filters the remote
selector by the key's compatible remote type and pulls a node
selector for the chosen remote. PMG and POM keys leave Assign
disabled since PDM cannot push them to a remote yet.
Pending assignments show up in the Node Status panel with a clock
icon. Selecting a node there exposes Clear Assignment and Remove
actions: an operator often thinks in terms of "this node is wrong"
rather than tracking down the key entry on the left side.
The Key Pool panel notifies the parent on every successful pool
mutation so the Node Status tree reloads in lockstep; otherwise the
right side would keep showing the pre-mutation view until the
operator navigates away and back.
Apply Pending shows an info toast on the no-op reply instead of
opening a task dialog. Clear Pending hits the bulk backend endpoint
rather than issuing per-key PUTs from the UI.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
---
ui/src/configuration/mod.rs | 2 +
ui/src/configuration/subscription_keys.rs | 449 +++++++++++
ui/src/configuration/subscription_registry.rs | 713 ++++++++++++++++++
ui/src/main_menu.rs | 10 +
4 files changed, 1174 insertions(+)
create mode 100644 ui/src/configuration/subscription_keys.rs
create mode 100644 ui/src/configuration/subscription_registry.rs
diff --git a/ui/src/configuration/mod.rs b/ui/src/configuration/mod.rs
index 6ffb64b..4180111 100644
--- a/ui/src/configuration/mod.rs
+++ b/ui/src/configuration/mod.rs
@@ -13,7 +13,9 @@ mod permission_path_selector;
mod webauthn;
pub use webauthn::WebauthnPanel;
+pub mod subscription_keys;
pub mod subscription_panel;
+pub mod subscription_registry;
pub mod views;
diff --git a/ui/src/configuration/subscription_keys.rs b/ui/src/configuration/subscription_keys.rs
new file mode 100644
index 0000000..c535e94
--- /dev/null
+++ b/ui/src/configuration/subscription_keys.rs
@@ -0,0 +1,449 @@
+use std::future::Future;
+use std::pin::Pin;
+use std::rc::Rc;
+
+use anyhow::Error;
+
+use pdm_api_types::remotes::RemoteType;
+use pdm_api_types::subscription::{ProductType, RemoteNodeStatus, SubscriptionKeyEntry};
+use yew::virtual_dom::{Key, VComp, VNode};
+
+use proxmox_yew_comp::percent_encoding::percent_encode_component;
+use proxmox_yew_comp::{http_delete, http_get, http_post, http_put, EditWindow};
+use proxmox_yew_comp::{
+ LoadableComponent, LoadableComponentContext, LoadableComponentMaster,
+ LoadableComponentScopeExt, LoadableComponentState,
+};
+
+use pwt::css::FontStyle;
+use pwt::prelude::*;
+use pwt::state::{Selection, Store};
+use pwt::widget::data_table::{DataTable, DataTableColumn, DataTableHeader};
+use pwt::widget::form::{DisplayField, FormContext, TextArea};
+use pwt::widget::{Button, ConfirmDialog, Container, InputPanel, Toolbar};
+
+use crate::widget::{PveNodeSelector, RemoteSelector};
+
+const BASE_URL: &str = "/subscriptions/keys";
+
+#[derive(Properties, PartialEq, Clone)]
+pub struct SubscriptionKeyGrid {
+ /// Called after every successful pool mutation (add, assign, clear, remove). Lets the parent
+ /// view (the Subscription Registry) reload its own data so the Node Status side stays in
+ /// sync with the Key Pool side.
+ #[prop_or_default]
+ pub on_change: Option<Callback<()>>,
+
+ /// Latest live node-status snapshot from the parent view. Used to disable the Clear button
+ /// when the selected entry's binding is currently synced (the assigned key is the live
+ /// active key on its remote), since unassigning then would orphan the live subscription.
+ /// The server enforces the same gate; this prop just turns it into a UI affordance.
+ #[prop_or_default]
+ pub node_status: Rc<Vec<RemoteNodeStatus>>,
+}
+
+impl SubscriptionKeyGrid {
+ pub fn new() -> Self {
+ yew::props!(Self {})
+ }
+
+ pub fn on_change(mut self, cb: impl Into<Option<Callback<()>>>) -> Self {
+ self.on_change = cb.into();
+ self
+ }
+
+ pub fn node_status(mut self, statuses: Rc<Vec<RemoteNodeStatus>>) -> Self {
+ self.node_status = statuses;
+ self
+ }
+}
+
+impl Default for SubscriptionKeyGrid {
+ fn default() -> Self {
+ Self::new()
+ }
+}
+
+impl From<SubscriptionKeyGrid> for VNode {
+ fn from(val: SubscriptionKeyGrid) -> Self {
+ VComp::new::<LoadableComponentMaster<SubscriptionKeyGridComp>>(Rc::new(val), None).into()
+ }
+}
+
+pub enum Msg {
+ LoadFinished(Vec<SubscriptionKeyEntry>),
+ Remove(Key),
+ Reload,
+}
+
+#[derive(PartialEq)]
+pub enum ViewState {
+ Add,
+ Assign,
+ Remove,
+}
+
+#[doc(hidden)]
+pub struct SubscriptionKeyGridComp {
+ state: LoadableComponentState<ViewState>,
+ store: Store<SubscriptionKeyEntry>,
+ columns: Rc<Vec<DataTableHeader<SubscriptionKeyEntry>>>,
+ selection: Selection,
+}
+
+pwt::impl_deref_mut_property!(
+ SubscriptionKeyGridComp,
+ state,
+ LoadableComponentState<ViewState>
+);
+
+impl SubscriptionKeyGridComp {
+ fn columns() -> Rc<Vec<DataTableHeader<SubscriptionKeyEntry>>> {
+ Rc::new(vec![
+ DataTableColumn::new(tr!("Key"))
+ .flex(2)
+ .get_property(|entry: &SubscriptionKeyEntry| entry.key.as_str())
+ .sort_order(true)
+ .into(),
+ DataTableColumn::new(tr!("Product"))
+ .width("80px")
+ .render(|entry: &SubscriptionKeyEntry| entry.product_type.to_string().into())
+ .into(),
+ DataTableColumn::new(tr!("Level"))
+ .width("90px")
+ .render(|entry: &SubscriptionKeyEntry| entry.level.to_string().into())
+ .into(),
+ DataTableColumn::new(tr!("Assignment"))
+ .flex(2)
+ .render(
+ |entry: &SubscriptionKeyEntry| match (&entry.remote, &entry.node) {
+ (Some(remote), Some(node)) => format!("{remote} / {node}").into(),
+ _ => Html::default(),
+ },
+ )
+ .into(),
+ ])
+ }
+
+ fn selected_entry(&self) -> Option<SubscriptionKeyEntry> {
+ let key = self.selection.selected_key()?;
+ self.store.read().lookup_record(&key).cloned()
+ }
+
+ fn create_add_dialog(&self, ctx: &LoadableComponentContext<Self>) -> Html {
+ EditWindow::new(tr!("Add Subscription Keys"))
+ .renderer(|_form_ctx| add_input_panel())
+ .on_submit(submit_add_keys)
+ .on_done(ctx.link().clone().callback(|_| Msg::Reload))
+ .into()
+ }
+
+ fn create_assign_dialog(
+ &self,
+ entry: &SubscriptionKeyEntry,
+ ctx: &LoadableComponentContext<Self>,
+ ) -> Html {
+ let key = entry.key.clone();
+ let product_type = entry.product_type;
+ EditWindow::new(tr!("Assign Key to Remote"))
+ .renderer({
+ let key = key.clone();
+ move |form_ctx| assign_input_panel(&key, product_type, form_ctx)
+ })
+ .on_submit({
+ let key = key.clone();
+ move |form| submit_assign(key.clone(), form)
+ })
+ .on_done(ctx.link().clone().callback(|_| Msg::Reload))
+ .into()
+ }
+}
+
+impl LoadableComponent for SubscriptionKeyGridComp {
+ type Properties = SubscriptionKeyGrid;
+ type Message = Msg;
+ type ViewState = ViewState;
+
+ fn create(ctx: &LoadableComponentContext<Self>) -> Self {
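+ // Redraw on selection change so the toolbar button states track the selected row.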
+ let selection = Selection::new().on_select({
+ let link = ctx.link().clone();
+ move |_| link.send_redraw()
+ });
+ Self {
+ state: LoadableComponentState::new(),
+ store: Store::with_extract_key(|entry: &SubscriptionKeyEntry| {
+ entry.key.as_str().into()
+ }),
+ columns: Self::columns(),
+ selection,
+ }
+ }
+
+ fn update(&mut self, ctx: &LoadableComponentContext<Self>, msg: Self::Message) -> bool {
+ match msg {
+ Msg::LoadFinished(data) => self.store.set_data(data),
+ Msg::Remove(key) => {
+ let id = key.to_string();
+ let link = ctx.link().clone();
+ ctx.link().spawn(async move {
+ let url = format!("{BASE_URL}/{}", percent_encode_component(&id));
+ if let Err(err) = http_delete(&url, None).await {
+ link.show_error(
+ tr!("Error"),
+ tr!("Could not remove {id}: {err}", id = id, err = err),
+ true,
+ );
+ }
+ link.send_message(Msg::Reload);
+ });
+ }
+ Msg::Reload => {
+ ctx.link().change_view(None);
+ ctx.link().send_reload();
+ if let Some(cb) = &ctx.props().on_change {
+ cb.emit(());
+ }
+ }
+ }
+ true
+ }
+
+ fn toolbar(&self, ctx: &LoadableComponentContext<Self>) -> Option<Html> {
+ let entry = self.selected_entry();
+ let has_selection = entry.is_some();
+ let is_assigned = entry.as_ref().map(|e| e.remote.is_some()).unwrap_or(false);
+ let synced_assignment = entry
+ .as_ref()
+ .map(|e| is_synced_assignment(e, &ctx.props().node_status))
+ .unwrap_or(false);
+ let assignable = entry
+ .as_ref()
+ .map(|e| {
+ e.product_type.matches_remote_type(RemoteType::Pve)
+ || e.product_type.matches_remote_type(RemoteType::Pbs)
+ })
+ .unwrap_or(false);
+ let link = ctx.link();
+
+ Some(
+ Toolbar::new()
+ .border_bottom(true)
+ .with_child(
+ Button::new(tr!("Add"))
+ .icon_class("fa fa-plus")
+ .on_activate(link.change_view_callback(|_| Some(ViewState::Add))),
+ )
+ .with_spacer()
+ .with_child(
+ Button::new(tr!("Remove Key"))
+ .icon_class("fa fa-trash-o")
+ .disabled(!has_selection || synced_assignment)
+ .on_activate(link.change_view_callback(|_| Some(ViewState::Remove))),
+ )
+ .with_spacer()
+ .with_child(
+ Button::new(tr!("Assign"))
+ .icon_class("fa fa-link")
+ .disabled(!has_selection || is_assigned || !assignable)
+ .on_activate(link.change_view_callback(|_| Some(ViewState::Assign))),
+ )
+ .into(),
+ )
+ }
+
+ fn load(
+ &self,
+ ctx: &LoadableComponentContext<Self>,
+ ) -> Pin<Box<dyn Future<Output = Result<(), Error>>>> {
+ let link = ctx.link().clone();
+ Box::pin(async move {
+ let data: Vec<SubscriptionKeyEntry> = http_get(BASE_URL, None).await?;
+ link.send_message(Msg::LoadFinished(data));
+ Ok(())
+ })
+ }
+
+ fn main_view(&self, _ctx: &LoadableComponentContext<Self>) -> Html {
+ DataTable::new(self.columns.clone(), self.store.clone())
+ .selection(self.selection.clone())
+ .into()
+ }
+
+ fn dialog_view(
+ &self,
+ ctx: &LoadableComponentContext<Self>,
+ view_state: &Self::ViewState,
+ ) -> Option<Html> {
+ match view_state {
+ ViewState::Add => Some(self.create_add_dialog(ctx)),
+ ViewState::Assign => self
+ .selected_entry()
+ .map(|entry| self.create_assign_dialog(&entry, ctx)),
+ ViewState::Remove => self.selection.selected_key().map(|key| {
+ ConfirmDialog::new(
+ tr!("Remove Key"),
+ tr!(
+ "Remove {key} from the key pool? This does not revoke the subscription.",
+ key = key.to_string(),
+ ),
+ )
+ .on_confirm({
+ let link = ctx.link().clone();
+ let key = key.clone();
+ move |_| link.send_message(Msg::Remove(key.clone()))
+ })
+ .into()
+ }),
+ }
+ }
+}
+
+/// Returns true when the pool entry's binding currently runs the same key on the remote and is
+/// Active - meaning a clear-assignment would orphan the live subscription. Mirrors the
+/// server-side gate; the operator should use Reissue Key in that state.
+fn is_synced_assignment(entry: &SubscriptionKeyEntry, statuses: &[RemoteNodeStatus]) -> bool {
+ let (Some(remote), Some(node)) = (entry.remote.as_deref(), entry.node.as_deref()) else {
+ return false;
+ };
+ statuses
+ .iter()
+ .find(|n| n.remote == remote && n.node == node)
+ .map(|n| {
+ n.status == proxmox_subscription::SubscriptionStatus::Active
+ && n.current_key.as_deref() == Some(entry.key.as_str())
+ })
+ .unwrap_or(false)
+}
+
+fn add_input_panel() -> Html {
+ let hint = Container::new()
+ .class(FontStyle::TitleSmall)
+ .class(pwt::css::Opacity::Quarter)
+ .padding_top(2)
+ .with_child(tr!(
+ "One key per line, or comma-separated. Only Proxmox VE and Proxmox Backup Server keys are accepted."
+ ));
+
+ // The textarea opts into `width: 100%` so it fills the InputPanel's grid cell instead of
+ // shrinking to browser-default cols.
+ InputPanel::new()
+ .padding(4)
+ .min_width(500)
+ .with_large_custom_child(
+ TextArea::new()
+ .name("keys")
+ .submit_empty(false)
+ .required(true)
+ .attribute("rows", "8")
+ .attribute("placeholder", tr!("Subscription key(s)"))
+ .style("width", "100%")
+ .style("box-sizing", "border-box"),
+ )
+ .with_large_custom_child(hint)
+ .into()
+}
+
+async fn submit_add_keys(form_ctx: FormContext) -> Result<(), Error> {
+ let raw = form_ctx.read().get_field_text("keys");
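+ // Subscription keys never contain whitespace, so splitting on any whitespace (not just
+ // newlines) plus commas is safe and forgiving of copy-paste artifacts.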
+ let keys: Vec<String> = raw
+ .split(|c: char| c.is_whitespace() || c == ',')
+ .map(str::trim)
+ .filter(|s| !s.is_empty())
+ .map(str::to_string)
+ .collect();
+
+ if keys.is_empty() {
+ anyhow::bail!(tr!("no keys provided"));
+ }
+
+ http_post(BASE_URL, Some(serde_json::json!({ "keys": keys }))).await
+}
+
+/// Map a subscription product type to the remote type its keys can drive.
+fn remote_type_for(product_type: ProductType) -> Option<RemoteType> {
+ if product_type.matches_remote_type(RemoteType::Pve) {
+ Some(RemoteType::Pve)
+ } else if product_type.matches_remote_type(RemoteType::Pbs) {
+ Some(RemoteType::Pbs)
+ } else {
+ None
+ }
+}
+
+fn assign_input_panel(key: &str, product_type: ProductType, form_ctx: &FormContext) -> Html {
+ let mut panel = InputPanel::new().padding(4).min_width(500).with_field(
+ tr!("Key"),
+ DisplayField::new()
+ .name("key")
+ .value(key.to_string())
+ .key("key-display"),
+ );
+
+ let Some(remote_type) = remote_type_for(product_type) else {
+ // Defensive: the toolbar disables Assign for these product types.
+ return panel
+ .with_large_custom_child(
+ Container::new()
+ .class(FontStyle::TitleSmall)
+ .class(pwt::css::Opacity::Quarter)
+ .with_child(tr!(
+ "PDM cannot manage {product} remotes yet; this key is parked in the pool.",
+ product = product_type.to_string(),
+ )),
+ )
+ .into();
+ };
+
+ panel = panel.with_field(
+ tr!("Remote"),
+ RemoteSelector::new()
+ .name("remote")
+ .remote_type(remote_type)
+ .required(true),
+ );
+
+ match remote_type {
+ RemoteType::Pve => {
+ let selected_remote = form_ctx.read().get_field_text("remote");
+ if selected_remote.is_empty() {
+ panel
+ .with_field(
+ tr!("Node"),
+ DisplayField::new()
+ .name("node")
+ .key("node-no-remote")
+ .value(AttrValue::from(tr!("Select a remote first."))),
+ )
+ .into()
+ } else {
+ // `PveNodeSelector` fetches its node list in `create` and does not re-fetch on
+ // prop change, so a per-remote `key` forces a fresh component when the operator
+ // picks a target.
+ panel
+ .with_field(
+ tr!("Node"),
+ PveNodeSelector::new(selected_remote.clone())
+ .name("node")
+ .key(format!("node-selector-{selected_remote}"))
+ .required(true),
+ )
+ .into()
+ }
+ }
+ RemoteType::Pbs => panel
+ .with_field(
+ tr!("Node"),
+ DisplayField::new()
+ .name("node")
+ .value(AttrValue::from("localhost"))
+ .key("node-localhost"),
+ )
+ .into(),
+ }
+}
+
+async fn submit_assign(key: String, form_ctx: FormContext) -> Result<(), Error> {
+ let data = form_ctx.get_submit_data();
+ let url = format!("{BASE_URL}/{}", percent_encode_component(&key));
+ http_put(&url, Some(data)).await
+}
diff --git a/ui/src/configuration/subscription_registry.rs b/ui/src/configuration/subscription_registry.rs
new file mode 100644
index 0000000..7ed96e6
--- /dev/null
+++ b/ui/src/configuration/subscription_registry.rs
@@ -0,0 +1,713 @@
+use std::future::Future;
+use std::pin::Pin;
+use std::rc::Rc;
+
+use anyhow::Error;
+
+use yew::virtual_dom::{Key, VComp, VNode};
+
+use proxmox_yew_comp::percent_encoding::percent_encode_component;
+use proxmox_yew_comp::{http_delete, http_get, http_post, http_put};
+use proxmox_yew_comp::{
+ LoadableComponent, LoadableComponentContext, LoadableComponentMaster,
+ LoadableComponentScopeExt, LoadableComponentState,
+};
+
+use pwt::css::{AlignItems, Flex, FlexDirection, FlexFit, FontColor, JustifyContent, Overflow};
+use pwt::prelude::*;
+use pwt::props::{ContainerBuilder, ExtractPrimaryKey, WidgetBuilder};
+use pwt::state::{Selection, SlabTree, Store, TreeStore};
+use pwt::widget::data_table::{DataTable, DataTableColumn, DataTableHeader};
+use pwt::widget::{Button, Column, Container, Fa, Panel, Row, Toolbar, Tooltip};
+
+use pdm_api_types::subscription::{ProposedAssignment, RemoteNodeStatus, SubscriptionLevel};
+
+use super::subscription_keys::SubscriptionKeyGrid;
+
+const NODE_STATUS_URL: &str = "/subscriptions/node-status";
+const AUTO_ASSIGN_URL: &str = "/subscriptions/auto-assign";
+const APPLY_PENDING_URL: &str = "/subscriptions/apply-pending";
+const CLEAR_PENDING_URL: &str = "/subscriptions/clear-pending";
+
+/// Map a [`SubscriptionStatus`] to the icon shown in subscription panels.
+///
+/// Public so the dashboard subscriptions panel can render the same icon for the same state
+/// without redefining the mapping. The 4-variant `proxmox_yew_comp::Status` does not cover
+/// every subscription state (New, Expired, Suspended need their own icons), hence the dedicated
+/// helper.
+pub fn subscription_status_icon(status: proxmox_subscription::SubscriptionStatus) -> Fa {
+ use proxmox_subscription::SubscriptionStatus as S;
+ match status {
+ S::Active => Fa::new("check-circle").class(FontColor::Success),
+ S::New => Fa::new("clock-o").class(FontColor::Primary),
+ S::NotFound => Fa::new("exclamation-circle").class(FontColor::Error),
+ S::Invalid => Fa::new("times-circle").class(FontColor::Warning),
+ S::Expired => Fa::new("clock-o").class(FontColor::Warning),
+ S::Suspended => Fa::new("ban").class(FontColor::Error),
+ }
+}
+
+#[derive(Clone, Debug, PartialEq)]
+enum NodeTreeEntry {
+ Root,
+ Remote {
+ name: String,
+ ty: pdm_api_types::remotes::RemoteType,
+ active: u32,
+ total: u32,
+ },
+ Node {
+ data: RemoteNodeStatus,
+ /// If true, this is the only node in its remote and is shown at the top level under the
+ /// remote name instead of nested.
+ standalone: bool,
+ },
+}
+
+impl NodeTreeEntry {
+ fn name(&self) -> &str {
+ match self {
+ Self::Root => "",
+ Self::Remote { name, .. } => name,
+ Self::Node { data, standalone } => {
+ if *standalone {
+ &data.remote
+ } else {
+ &data.node
+ }
+ }
+ }
+ }
+}
+
+impl ExtractPrimaryKey for NodeTreeEntry {
+ fn extract_key(&self) -> Key {
+ Key::from(match self {
+ NodeTreeEntry::Root => "/".to_string(),
+ NodeTreeEntry::Remote { name, .. } => format!("/{name}"),
+ NodeTreeEntry::Node { data, .. } => format!("/{}/{}", data.remote, data.node),
+ })
+ }
+}
+
+fn build_tree(nodes: Vec<RemoteNodeStatus>) -> SlabTree<NodeTreeEntry> {
+ use std::collections::BTreeMap;
+
+ let mut by_remote: BTreeMap<String, Vec<RemoteNodeStatus>> = BTreeMap::new();
+ for n in nodes {
+ by_remote.entry(n.remote.clone()).or_default().push(n);
+ }
+
+ let mut tree = SlabTree::new();
+ let mut root = tree.set_root(NodeTreeEntry::Root);
+ root.set_expanded(true);
+
+ for (remote_name, remote_nodes) in &by_remote {
+ let total = remote_nodes.len() as u32;
+ let active = remote_nodes
+ .iter()
+ .filter(|n| n.status == proxmox_subscription::SubscriptionStatus::Active)
+ .count() as u32;
+
+ let ty = remote_nodes.first().map(|n| n.ty).unwrap_or_default();
+
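+ // Lift single-node remotes to the top level as standalone rows instead of a
+ // one-child subtree.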
+ if remote_nodes.len() == 1 {
+ root.append(NodeTreeEntry::Node {
+ data: remote_nodes[0].clone(),
+ standalone: true,
+ });
+ } else {
+ let mut remote_entry = root.append(NodeTreeEntry::Remote {
+ name: remote_name.clone(),
+ ty,
+ active,
+ total,
+ });
+ remote_entry.set_expanded(true);
+ for n in remote_nodes {
+ remote_entry.append(NodeTreeEntry::Node {
+ data: n.clone(),
+ standalone: false,
+ });
+ }
+ }
+ }
+
+ tree
+}
+
+#[derive(Properties, PartialEq, Clone)]
+pub struct SubscriptionRegistryProps {}
+
+impl SubscriptionRegistryProps {
+ pub fn new() -> Self {
+ yew::props!(Self {})
+ }
+}
+
+impl Default for SubscriptionRegistryProps {
+ fn default() -> Self {
+ Self::new()
+ }
+}
+
+impl From<SubscriptionRegistryProps> for VNode {
+ fn from(val: SubscriptionRegistryProps) -> Self {
+ VComp::new::<LoadableComponentMaster<SubscriptionRegistryComp>>(Rc::new(val), None).into()
+ }
+}
+
+pub enum Msg {
+ LoadFinished(Vec<RemoteNodeStatus>),
+ AutoAssignPreview,
+ AutoAssignApply,
+ ApplyPending,
+ ClearPending,
+ /// Clear the pool's pin on the currently-selected node by unassigning its key.
+ ClearSelectedNode,
+ /// Remove the pool entry currently pinned to the selected node.
+ RemoveSelectedNodeKey,
+}
+
+#[derive(PartialEq)]
+pub enum ViewState {
+ ConfirmAutoAssign(Vec<ProposedAssignment>),
+ ConfirmClearPending,
+ ConfirmRemoveSelectedNodeKey(String),
+}
+
+#[doc(hidden)]
+pub struct SubscriptionRegistryComp {
+ state: LoadableComponentState<ViewState>,
+ tree_store: TreeStore<NodeTreeEntry>,
+ tree_columns: Rc<Vec<DataTableHeader<NodeTreeEntry>>>,
+ proposal_columns: Rc<Vec<DataTableHeader<ProposedAssignment>>>,
+ node_selection: Selection,
+ last_node_data: Vec<RemoteNodeStatus>,
+}
+
+pwt::impl_deref_mut_property!(
+ SubscriptionRegistryComp,
+ state,
+ LoadableComponentState<ViewState>
+);
+
+fn tree_sorter(a: &NodeTreeEntry, b: &NodeTreeEntry) -> std::cmp::Ordering {
+ a.name().cmp(b.name())
+}
+
+impl SubscriptionRegistryComp {
+ fn tree_columns(store: TreeStore<NodeTreeEntry>) -> Rc<Vec<DataTableHeader<NodeTreeEntry>>> {
+ Rc::new(vec![
+ DataTableColumn::new(tr!("Name"))
+ .tree_column(store)
+ .flex(3)
+ .render(|entry: &NodeTreeEntry| {
+ let (icon, name) = match entry {
+ NodeTreeEntry::Root => return Html::default(),
+ NodeTreeEntry::Remote { name, ty, .. } => {
+ let icon = if *ty == pdm_api_types::remotes::RemoteType::Pbs {
+ "building-o"
+ } else {
+ "server"
+ };
+ (icon, name.as_str())
+ }
+ NodeTreeEntry::Node {
+ data: n,
+ standalone,
+ } => {
+ let icon = if n.ty == pdm_api_types::remotes::RemoteType::Pbs {
+ "building-o"
+ } else {
+ "building"
+ };
+ let label = if *standalone { &n.remote } else { &n.node };
+ (icon, label.as_str())
+ }
+ };
+ Row::new()
+ .class(AlignItems::Baseline)
+ .gap(2)
+ .with_child(Fa::new(icon))
+ .with_child(name)
+ .into()
+ })
+ .sorter(tree_sorter)
+ .into(),
+ DataTableColumn::new(tr!("Sockets"))
+ .width("70px")
+ .render(|entry: &NodeTreeEntry| match entry {
+ NodeTreeEntry::Node { data: n, .. } => {
+ n.sockets.map(|s| s.to_string()).unwrap_or_default().into()
+ }
+ _ => Html::default(),
+ })
+ .into(),
+ DataTableColumn::new(tr!("Status"))
+ .width("120px")
+ .render(|entry: &NodeTreeEntry| match entry {
+ NodeTreeEntry::Node { data: n, .. } => Row::new()
+ .class(AlignItems::Baseline)
+ .gap(2)
+ .with_child(subscription_status_icon(n.status))
+ .with_child(n.status.to_string())
+ .into(),
+ NodeTreeEntry::Remote { active, total, .. } => {
+ let icon = if active == total {
+ Fa::new("check-circle").class(FontColor::Success)
+ } else if *active == 0 {
+ Fa::new("exclamation-circle").class(FontColor::Error)
+ } else {
+ Fa::new("exclamation-triangle").class(FontColor::Warning)
+ };
+ Tooltip::new(
+ Row::new()
+ .class(AlignItems::Baseline)
+ .gap(2)
+ .with_child(icon)
+ .with_child(format!("{active}/{total}")),
+ )
+ .tip(tr!(
+ "{active} of {total} nodes subscribed",
+ active = active,
+ total = total,
+ ))
+ .into()
+ }
+ _ => Html::default(),
+ })
+ .into(),
+ DataTableColumn::new(tr!("Level"))
+ .width("90px")
+ .render(|entry: &NodeTreeEntry| match entry {
+ NodeTreeEntry::Node { data: n, .. } if n.level != SubscriptionLevel::None => {
+ n.level.to_string().into()
+ }
+ _ => Html::default(),
+ })
+ .into(),
+ DataTableColumn::new(tr!("Key"))
+ .flex(2)
+ .render(|entry: &NodeTreeEntry| match entry {
+ NodeTreeEntry::Node { data: n, .. } => key_cell(n),
+ _ => Html::default(),
+ })
+ .into(),
+ ])
+ }
+
+ fn proposal_columns() -> Rc<Vec<DataTableHeader<ProposedAssignment>>> {
+ Rc::new(vec![
+ DataTableColumn::new(tr!("Remote / Node"))
+ .flex(2)
+ .render(|p: &ProposedAssignment| format!("{} / {}", p.remote, p.node).into())
+ .into(),
+ DataTableColumn::new(tr!("Key"))
+ .flex(2)
+ .render(|p: &ProposedAssignment| {
+ Container::from_tag("span")
+ .class(pwt::css::FontStyle::LabelMedium)
+ .with_child(p.key.clone())
+ .into()
+ })
+ .into(),
+ DataTableColumn::new(tr!("Sockets (node / key)"))
+ .width("160px")
+ .render(|p: &ProposedAssignment| {
+ let label = match (p.node_sockets, p.key_sockets) {
+ (Some(ns), Some(ks)) => format!("{ns} / {ks}"),
+ (Some(ns), None) => format!("{ns} / -"),
+ (None, Some(ks)) => format!("- / {ks}"),
+ _ => String::new(),
+ };
+ label.into()
+ })
+ .into(),
+ ])
+ }
+}
+
+fn key_cell(n: &RemoteNodeStatus) -> Html {
+ let assigned = n.assigned_key.as_deref();
+ let current = n.current_key.as_deref();
+
+ // Pending = pool key assigned but the node doesn't have an active subscription yet (the key
+ // still needs to be pushed).
+ let pending =
+ assigned.is_some() && n.status != proxmox_subscription::SubscriptionStatus::Active;
+
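+ // When assigned and live keys differ, show both so the operator sees what an apply
+ // would replace.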
+ match (assigned, current) {
+ (Some(a), Some(c)) if a != c => Row::new()
+ .class(AlignItems::Baseline)
+ .gap(2)
+ .with_child(Fa::new("clock-o").class(FontColor::Warning))
+ .with_child(format!("{a} \u{2192} {c}"))
+ .into(),
+ _ => {
+ let text = current.or(assigned).unwrap_or("");
+ if pending {
+ Row::new()
+ .class(AlignItems::Baseline)
+ .gap(2)
+ .with_child(Fa::new("clock-o").class(FontColor::Warning))
+ .with_child(text)
+ .into()
+ } else {
+ text.into()
+ }
+ }
+ }
+}
+
+impl LoadableComponent for SubscriptionRegistryComp {
+ type Properties = SubscriptionRegistryProps;
+ type Message = Msg;
+ type ViewState = ViewState;
+
+ fn create(ctx: &LoadableComponentContext<Self>) -> Self {
+ let store = TreeStore::new().view_root(false);
+ store.set_sorter(tree_sorter);
+
+ let node_selection = Selection::new().on_select({
+ let link = ctx.link().clone();
+ move |_| link.send_redraw()
+ });
+
+ Self {
+ state: LoadableComponentState::new(),
+ tree_store: store.clone(),
+ tree_columns: Self::tree_columns(store),
+ proposal_columns: Self::proposal_columns(),
+ node_selection,
+ last_node_data: Vec::new(),
+ }
+ }
+
+ fn update(&mut self, ctx: &LoadableComponentContext<Self>, msg: Self::Message) -> bool {
+ match msg {
+ Msg::LoadFinished(data) => {
+ self.last_node_data = data.clone();
+ let tree = build_tree(data);
+ self.tree_store.write().update_root_tree(tree);
+ }
+ Msg::AutoAssignPreview => {
+ let link = ctx.link().clone();
+ ctx.link().spawn(async move {
+ match http_post::<Vec<ProposedAssignment>>(AUTO_ASSIGN_URL, None).await {
+ Ok(proposals) if proposals.is_empty() => {
+ link.show_error(
+ tr!("Auto-Assign"),
+ tr!("No suitable unassigned keys for the remaining nodes."),
+ false,
+ );
+ }
+ Ok(proposals) => {
+ link.change_view(Some(ViewState::ConfirmAutoAssign(proposals)));
+ }
+ Err(err) => link.show_error(tr!("Auto-Assign"), err.to_string(), true),
+ }
+ });
+ }
+ Msg::AutoAssignApply => {
+ let link = ctx.link().clone();
+ ctx.link().spawn(async move {
+ let url = format!("{AUTO_ASSIGN_URL}?apply=1");
+ match http_post::<Vec<ProposedAssignment>>(&url, None).await {
+ Ok(_) => {
+ link.change_view(None);
+ link.send_reload();
+ }
+ Err(err) => link.show_error(tr!("Auto-Assign"), err.to_string(), true),
+ }
+ });
+ }
+ Msg::ApplyPending => {
+ let link = ctx.link().clone();
+ ctx.link().spawn(async move {
+ match http_post::<Option<String>>(APPLY_PENDING_URL, None).await {
+ Ok(None) => link.show_error(
+ tr!("Apply Pending"),
+ tr!("No pending assignments. Every assigned key is already active on its remote node."),
+ false,
+ ),
+ Ok(Some(upid)) => link.show_task_progress(upid),
+ Err(err) => link.show_error(tr!("Apply"), err.to_string(), true),
+ }
+ });
+ }
+ Msg::ClearPending => {
+ let link = ctx.link().clone();
+ ctx.link().spawn(async move {
+ match http_post::<serde_json::Value>(CLEAR_PENDING_URL, None).await {
+ Ok(_) => {
+ link.change_view(None);
+ link.send_reload();
+ }
+ Err(err) => link.show_error(tr!("Clear Pending"), err.to_string(), true),
+ }
+ });
+ }
+ Msg::ClearSelectedNode => {
+ let Some(key) = self.selected_assigned_key() else {
+ return false;
+ };
+ let link = ctx.link().clone();
+ ctx.link().spawn(async move {
+ let url = format!("/subscriptions/keys/{}", percent_encode_component(&key));
+ if let Err(err) = http_put::<()>(&url, Some(serde_json::json!({}))).await {
+ link.show_error(tr!("Clear Assignment"), err.to_string(), true);
+ }
+ link.send_reload();
+ });
+ }
+ Msg::RemoveSelectedNodeKey => {
+ let Some(key) = self.selected_assigned_key() else {
+ return false;
+ };
+ ctx.link()
+ .change_view(Some(ViewState::ConfirmRemoveSelectedNodeKey(key)));
+ }
+ }
+ true
+ }
+
+ fn toolbar(&self, ctx: &LoadableComponentContext<Self>) -> Option<Html> {
+ let link = ctx.link();
+ Some(
+ Toolbar::new()
+ .border_bottom(true)
+ .with_child(
+ Button::new(tr!("Auto-Assign"))
+ .icon_class("fa fa-magic")
+ .on_activate(link.callback(|_| Msg::AutoAssignPreview)),
+ )
+ .with_spacer()
+ .with_child(
+ Button::new(tr!("Apply Pending"))
+ .icon_class("fa fa-play")
+ .on_activate(link.callback(|_| Msg::ApplyPending)),
+ )
+ .with_child(
+ Button::new(tr!("Clear Pending"))
+ .icon_class("fa fa-eraser")
+ .on_activate(
+ link.change_view_callback(|_| Some(ViewState::ConfirmClearPending)),
+ ),
+ )
+ .with_flex_spacer()
+ .with_child(
+ Button::refresh(ctx.loading()).on_activate({
+ let link = link.clone();
+ move |_| link.send_reload()
+ }),
+ )
+ .into(),
+ )
+ }
+
+ fn load(
+ &self,
+ ctx: &LoadableComponentContext<Self>,
+ ) -> Pin<Box<dyn Future<Output = Result<(), Error>>>> {
+ let link = ctx.link().clone();
+ Box::pin(async move {
+ let data: Vec<RemoteNodeStatus> = http_get(NODE_STATUS_URL, None).await?;
+ link.send_message(Msg::LoadFinished(data));
+ Ok(())
+ })
+ }
+
+ fn main_view(&self, ctx: &LoadableComponentContext<Self>) -> Html {
+ Container::new()
+ .class("pwt-content-spacer")
+ .class(FlexFit)
+ .class(FlexDirection::Row)
+ .with_child(self.render_key_pool_panel(ctx))
+ .with_child(self.render_node_tree_panel(ctx))
+ .into()
+ }
+
+ fn dialog_view(
+ &self,
+ ctx: &LoadableComponentContext<Self>,
+ view_state: &Self::ViewState,
+ ) -> Option<Html> {
+ match view_state {
+ ViewState::ConfirmClearPending => {
+ use pwt::widget::ConfirmDialog;
+ Some(
+ ConfirmDialog::new(
+ tr!("Clear Pending Assignments"),
+ tr!("Remove all assignments that have not yet been applied to the remote nodes?"),
+ )
+ .on_confirm({
+ let link = ctx.link().clone();
+ move |_| link.send_message(Msg::ClearPending)
+ })
+ .into(),
+ )
+ }
+ ViewState::ConfirmAutoAssign(proposals) => {
+ Some(self.render_auto_assign_dialog(ctx, proposals))
+ }
+ ViewState::ConfirmRemoveSelectedNodeKey(key) => {
+ use pwt::widget::ConfirmDialog;
+ let link = ctx.link().clone();
+ let key_for_callback = key.clone();
+ Some(
+ ConfirmDialog::new(
+ tr!("Remove Key"),
+ tr!(
+ "Remove {key} from the key pool? This does not revoke the subscription on the remote node.",
+ key = key.clone(),
+ ),
+ )
+ .on_confirm(move |_| {
+ let link = link.clone();
+ let key = key_for_callback.clone();
+ link.clone().spawn(async move {
+ let url = format!(
+ "/subscriptions/keys/{}",
+ percent_encode_component(&key),
+ );
+ if let Err(err) = http_delete(&url, None).await {
+ link.show_error(tr!("Remove Key"), err.to_string(), true);
+ }
+ link.change_view(None);
+ link.send_reload();
+ });
+ })
+ .into(),
+ )
+ }
+ }
+ }
+}
+
+impl SubscriptionRegistryComp {
+ fn render_key_pool_panel(&self, ctx: &LoadableComponentContext<Self>) -> Panel {
+ // Reload the right-side node tree whenever the left-side key pool mutates, so a fresh
+ // assignment shows up as pending without forcing the operator to re-navigate.
+ let link = ctx.link().clone();
+ // Pass the current node-status snapshot into the grid so its Clear button can be
+ // disabled for synced bindings (orphan-prevention - mirrors the server-side refusal).
+ let statuses = Rc::new(self.last_node_data.clone());
+ Panel::new()
+ .class(FlexFit)
+ .border(true)
+ .style("flex", "3 1 0")
+ .min_width(300)
+ .title(tr!("Key Pool"))
+ .with_child(
+ SubscriptionKeyGrid::new()
+ .on_change(Callback::from(move |_| link.send_reload()))
+ .node_status(statuses),
+ )
+ }
+
+ fn render_node_tree_panel(&self, ctx: &LoadableComponentContext<Self>) -> Panel {
+ let table = DataTable::new(self.tree_columns.clone(), self.tree_store.clone())
+ .selection(self.node_selection.clone())
+ .striped(false)
+ .borderless(true)
+ .show_header(true)
+ .class(FlexFit);
+
+ let has_assigned = self.selected_assigned_key().is_some();
+ let clear_button = Button::new(tr!("Clear Assignment"))
+ .icon_class("fa fa-unlink")
+ .disabled(!has_assigned)
+ .on_activate(ctx.link().callback(|_| Msg::ClearSelectedNode));
+ let remove_button = Button::new(tr!("Remove"))
+ .icon_class("fa fa-trash-o")
+ .disabled(!has_assigned)
+ .on_activate(ctx.link().callback(|_| Msg::RemoveSelectedNodeKey));
+
+ Panel::new()
+ .class(FlexFit)
+ .border(true)
+ .style("flex", "4 1 0")
+ .min_width(400)
+ .title(tr!("Node Status"))
+ .with_tool(clear_button)
+ .with_tool(remove_button)
+ .with_child(table)
+ }
+
+ /// Pool key currently assigned to whatever node the operator selected in the tree.
+ ///
+ /// Returns None when no node row is selected, the selected entry is a remote-level
+ /// aggregate, or the node has no pool assignment.
+ fn selected_assigned_key(&self) -> Option<String> {
+ let key = self.node_selection.selected_key()?;
+ let raw = key.to_string();
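+ // Tree keys are built as "/{remote}/{node}" by `extract_key`; split that back apart.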
+ let mut parts = raw.trim_start_matches('/').splitn(2, '/');
+ let remote = parts.next()?;
+ let node = parts.next()?;
+ self.last_node_data
+ .iter()
+ .find(|n| n.remote == remote && n.node == node)
+ .and_then(|n| n.assigned_key.clone())
+ }
+
+ fn render_auto_assign_dialog(
+ &self,
+ ctx: &LoadableComponentContext<Self>,
+ proposals: &[ProposedAssignment],
+ ) -> Html {
+ use pwt::widget::Dialog;
+
+ let store: Store<ProposedAssignment> = Store::with_extract_key(|p: &ProposedAssignment| {
+ format!("{}/{}", p.remote, p.node).into()
+ });
+ store.set_data(proposals.to_vec());
+
+ let link_close = ctx.link().clone();
+ let link_apply = ctx.link().clone();
+ let body = Column::new()
+ .class(Flex::Fill)
+ .class(Overflow::Hidden)
+ .min_height(0)
+ .padding(2)
+ .gap(2)
+ .min_width(600)
+ .with_child(Container::from_tag("p").with_child(tr!(
+ "The following {n} assignments are proposed. Click Apply to confirm.",
+ n = proposals.len(),
+ )))
+ .with_child(
+ DataTable::new(self.proposal_columns.clone(), store)
+ .striped(true)
+ .class(FlexFit)
+ .min_height(140),
+ )
+ .with_child(
+ Row::new()
+ .class(JustifyContent::FlexEnd)
+ .gap(2)
+ .padding_top(2)
+ .with_child(
+ Button::new(tr!("Cancel"))
+ .on_activate(move |_| link_close.change_view(None)),
+ )
+ .with_child(
+ Button::new(tr!("Apply"))
+ .on_activate(move |_| link_apply.send_message(Msg::AutoAssignApply)),
+ ),
+ );
+
+ Dialog::new(tr!("Auto-Assign Proposal"))
+ .resizable(true)
+ .width(700)
+ .min_width(500)
+ .min_height(300)
+ .max_height("80vh")
+ .on_close({
+ let link = ctx.link().clone();
+ move |_| link.change_view(None)
+ })
+ .with_child(body)
+ .into()
+ }
+}
diff --git a/ui/src/main_menu.rs b/ui/src/main_menu.rs
index 18988ea..eba02d5 100644
--- a/ui/src/main_menu.rs
+++ b/ui/src/main_menu.rs
@@ -15,6 +15,7 @@ use pdm_api_types::remotes::RemoteType;
use pdm_api_types::{PRIV_SYS_AUDIT, PRIV_SYS_MODIFY};
use crate::configuration::subscription_panel::SubscriptionPanel;
+use crate::configuration::subscription_registry::SubscriptionRegistryProps;
use crate::configuration::views::ViewGrid;
use crate::dashboard::view::View;
use crate::remotes::RemotesPanel;
@@ -292,6 +293,15 @@ impl Component for PdmMainMenu {
config_submenu,
);
+ register_view(
+ &mut menu,
+ &mut content,
+ tr!("Subscription Registry"),
+ "subscription-registry",
+ Some("fa fa-id-card"),
+ |_| SubscriptionRegistryProps::new().into(),
+ );
+
let mut admin_submenu = Menu::new();
register_view(
--
2.47.3
* [PATCH datacenter-manager v2 6/8] cli: add subscription key pool management subcommands
2026-05-07 8:26 [PATCH datacenter-manager v2 0/8] subscription: add central key pool registry with reissue support Thomas Lamprecht
` (4 preceding siblings ...)
2026-05-07 8:26 ` [PATCH datacenter-manager v2 5/8] ui: add subscription registry with key pool and node status Thomas Lamprecht
@ 2026-05-07 8:26 ` Thomas Lamprecht
2026-05-07 8:26 ` [PATCH datacenter-manager v2 7/8] docs: add subscription registry chapter Thomas Lamprecht
` (3 subsequent siblings)
9 siblings, 0 replies; 15+ messages in thread
From: Thomas Lamprecht @ 2026-05-07 8:26 UTC (permalink / raw)
To: pdm-devel
Expose the new key-pool API to the CLI so that the `subscriptions`
command group gains the following subcommands:
list-keys
add-keys (variadic)
assign-key
clear-key
remove-key
auto-assign
apply-pending
clear-pending
The pre-existing `status` subcommand becomes a sibling of the new
commands under this `subscriptions` group.
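For illustration, a typical flow could look like this (key, remote and
node values are placeholders; only `key`/`keys` are positional, the
remaining parameters follow the usual CLI option mapping):

    proxmox-datacenter-client subscriptions add-keys <key1> <key2>
    proxmox-datacenter-client subscriptions assign-key <key1> --remote <remote> --node <node>
    proxmox-datacenter-client subscriptions auto-assign --apply
    proxmox-datacenter-client subscriptions apply-pending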
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
---
cli/client/src/subscriptions.rs | 195 +++++++++++++++++++++++++++++++-
lib/pdm-client/src/lib.rs | 126 ++++++++++++++++++++-
2 files changed, 316 insertions(+), 5 deletions(-)
diff --git a/cli/client/src/subscriptions.rs b/cli/client/src/subscriptions.rs
index d8bf1e0..5d5532b 100644
--- a/cli/client/src/subscriptions.rs
+++ b/cli/client/src/subscriptions.rs
@@ -1,18 +1,44 @@
use anyhow::Error;
use proxmox_router::cli::{
- format_and_print_result, CliCommand, CommandLineInterface, OutputFormat,
+ format_and_print_result, CliCommand, CliCommandMap, CommandLineInterface, OutputFormat,
};
use proxmox_schema::api;
-use pdm_api_types::subscription::RemoteSubscriptionState;
+use pdm_api_types::remotes::REMOTE_ID_SCHEMA;
+use pdm_api_types::subscription::{RemoteSubscriptionState, SUBSCRIPTION_KEY_SCHEMA};
use pdm_api_types::VIEW_ID_SCHEMA;
use crate::env::emoji;
use crate::{client, env};
pub fn cli() -> CommandLineInterface {
- CliCommand::new(&API_METHOD_GET_SUBSCRIPTION_STATUS).into()
+ CliCommandMap::new()
+ .insert(
+ "status",
+ CliCommand::new(&API_METHOD_GET_SUBSCRIPTION_STATUS),
+ )
+ .insert("list-keys", CliCommand::new(&API_METHOD_LIST_KEYS))
+ .insert(
+ "add-keys",
+ CliCommand::new(&API_METHOD_ADD_KEYS).arg_param(&["keys"]),
+ )
+ .insert(
+ "assign-key",
+ CliCommand::new(&API_METHOD_ASSIGN_KEY).arg_param(&["key"]),
+ )
+ .insert(
+ "clear-key",
+ CliCommand::new(&API_METHOD_CLEAR_KEY).arg_param(&["key"]),
+ )
+ .insert(
+ "remove-key",
+ CliCommand::new(&API_METHOD_REMOVE_KEY).arg_param(&["key"]),
+ )
+ .insert("auto-assign", CliCommand::new(&API_METHOD_AUTO_ASSIGN))
+ .insert("apply-pending", CliCommand::new(&API_METHOD_APPLY_PENDING))
+ .insert("clear-pending", CliCommand::new(&API_METHOD_CLEAR_PENDING))
+ .into()
}
#[api(
@@ -37,7 +63,7 @@ pub fn cli() -> CommandLineInterface {
},
}
)]
-/// List all the remotes this instance is managing.
+/// Show the subscription status of all remotes.
async fn get_subscription_status(
max_age: Option<u64>,
verbose: Option<bool>,
@@ -106,3 +132,164 @@ async fn get_subscription_status(
}
Ok(())
}
+
+#[api]
+/// List all subscription keys in the pool.
+async fn list_keys() -> Result<(), Error> {
+ let (keys, _digest) = client()?.list_subscription_keys().await?;
+
+ let output_format = env().format_args.output_format;
+ if output_format == OutputFormat::Text {
+ if keys.is_empty() {
+ println!("No keys in pool.");
+ return Ok(());
+ }
+ let key_width = keys.iter().map(|k| k.key.len()).max().unwrap_or(20);
+ for key in &keys {
+ let assignment = match (&key.remote, &key.node) {
+ (Some(r), Some(n)) => format!("{r}/{n}"),
+ _ => "(unassigned)".to_string(),
+ };
+ println!(
+ " {key:<kw$} {product:<5} {level:<10} {status:<10} {assignment}",
+ key = key.key,
+ kw = key_width,
+ product = key.product_type.to_string(),
+ level = key.level.to_string(),
+ status = key.status.to_string(),
+ );
+ }
+ } else {
+ format_and_print_result(&keys, &output_format.to_string());
+ }
+ Ok(())
+}
+
+#[api(
+ input: {
+ properties: {
+ keys: {
+ type: Array,
+ description: "Subscription keys to add to the pool.",
+ items: { schema: SUBSCRIPTION_KEY_SCHEMA },
+ },
+ },
+ },
+)]
+/// Add one or more subscription keys to the pool.
+async fn add_keys(keys: Vec<String>) -> Result<(), Error> {
+ client()?.add_subscription_keys(&keys, None).await?;
+ let n = keys.len();
+ if n == 1 {
+ println!("Added {} to pool.", keys[0]);
+ } else {
+ println!("Added {n} keys to pool.");
+ }
+ Ok(())
+}
+
+#[api(
+ input: {
+ properties: {
+ key: { schema: SUBSCRIPTION_KEY_SCHEMA },
+ remote: { schema: REMOTE_ID_SCHEMA },
+ node: {
+ type: String,
+ description: "Node name within the remote.",
+ },
+ },
+ },
+)]
+/// Assign a key from the pool to a remote node.
+async fn assign_key(key: String, remote: String, node: String) -> Result<(), Error> {
+ client()?
+ .assign_subscription_key(&key, Some(&remote), Some(&node), None)
+ .await?;
+ println!("Assigned {key} to {remote}/{node}.");
+ Ok(())
+}
+
+#[api(
+ input: {
+ properties: {
+ key: { schema: SUBSCRIPTION_KEY_SCHEMA },
+ },
+ },
+)]
+/// Clear the assignment of a key (unassign from its remote node).
+async fn clear_key(key: String) -> Result<(), Error> {
+ client()?
+ .assign_subscription_key(&key, None, None, None)
+ .await?;
+ println!("Cleared assignment for {key}.");
+ Ok(())
+}
+
+#[api(
+ input: {
+ properties: {
+ key: { schema: SUBSCRIPTION_KEY_SCHEMA },
+ },
+ },
+)]
+/// Remove a key from the pool entirely.
+async fn remove_key(key: String) -> Result<(), Error> {
+ client()?.delete_subscription_key(&key).await?;
+ println!("Removed {key} from pool.");
+ Ok(())
+}
+
+#[api(
+ input: {
+ properties: {
+ apply: {
+ type: bool,
+ optional: true,
+ default: false,
+ description: "Apply the proposed assignments immediately.",
+ },
+ },
+ },
+)]
+/// Propose or apply automatic key-to-node assignments.
+async fn auto_assign(apply: Option<bool>) -> Result<(), Error> {
+ let apply = apply.unwrap_or(false);
+ let proposals = client()?.subscription_auto_assign(apply).await?;
+
+ if proposals.is_empty() {
+ println!("No suitable keys available for unsubscribed nodes.");
+ return Ok(());
+ }
+
+ for p in &proposals {
+ let verb = if apply { "assigned" } else { "proposed" };
+ println!(" {verb}: {} -> {}/{}", p.key, p.remote, p.node);
+ }
+
+ if !apply {
+ println!("\nRe-run with --apply to apply these assignments.");
+ }
+ Ok(())
+}
+
+#[api]
+/// Push all pending key assignments to remotes as a worker task.
+async fn apply_pending() -> Result<(), Error> {
+ match client()?.subscription_apply_pending().await? {
+ None => println!("No pending assignments to apply."),
+ Some(upid) => println!("Task started: {upid}"),
+ }
+ Ok(())
+}
+
+#[api]
+/// Clear every pending assignment in one bulk transaction.
+async fn clear_pending() -> Result<(), Error> {
+ let cleared = client()?.subscription_clear_pending().await?;
+ if cleared == 0 {
+ println!("No pending assignments to clear.");
+ } else {
+ println!("Cleared {cleared} pending assignment(s).");
+ }
+ Ok(())
+}
diff --git a/lib/pdm-client/src/lib.rs b/lib/pdm-client/src/lib.rs
index 76b33ef..b0527b1 100644
--- a/lib/pdm-client/src/lib.rs
+++ b/lib/pdm-client/src/lib.rs
@@ -76,7 +76,10 @@ pub mod types {
pub use pve_api_types::StorageStatus as PveStorageStatus;
- pub use pdm_api_types::subscription::{RemoteSubscriptionState, RemoteSubscriptions};
+ pub use pdm_api_types::subscription::{
+ ClearPendingResult, ProductType, ProposedAssignment, RemoteNodeStatus,
+ RemoteSubscriptionState, RemoteSubscriptions, SubscriptionKeyEntry, SubscriptionKeySource,
+ };
pub use pve_api_types::{SdnVnetMacVrf, SdnZoneIpVrf};
}
@@ -1089,6 +1092,127 @@ impl<T: HttpApiClient> PdmClient<T> {
Ok(self.0.get(&path).await?.expect_json()?.data)
}
+ /// List all keys in the subscription pool. Returns the entries plus the matching
+ /// `ConfigDigest` so the caller can chain a digest-aware add / assign / delete back.
+ pub async fn list_subscription_keys(
+ &self,
+ ) -> Result<(Vec<SubscriptionKeyEntry>, Option<ConfigDigest>), Error> {
+ let mut res = self
+ .0
+ .get("/api2/extjs/subscriptions/keys")
+ .await?
+ .expect_json()?;
+ Ok((res.data, res.attribs.remove("digest").map(ConfigDigest)))
+ }
+
+ /// Add one or more keys to the pool. See the daemon-side endpoint for the all-or-nothing
+ /// validation semantics.
+ pub async fn add_subscription_keys(
+ &self,
+ keys: &[String],
+ digest: Option<ConfigDigest>,
+ ) -> Result<(), Error> {
+ #[derive(Serialize)]
+ struct AddArgs<'a> {
+ keys: &'a [String],
+ #[serde(skip_serializing_if = "Option::is_none")]
+ digest: Option<ConfigDigest>,
+ }
+ self.0
+ .post("/api2/extjs/subscriptions/keys", &AddArgs { keys, digest })
+ .await?
+ .nodata()
+ }
+
+ /// Assign a key to a remote node. Pass `None` for both `remote` and `node` to clear the
+ /// assignment instead.
+ pub async fn assign_subscription_key(
+ &self,
+ key: &str,
+ remote: Option<&str>,
+ node: Option<&str>,
+ digest: Option<ConfigDigest>,
+ ) -> Result<(), Error> {
+ #[derive(Serialize)]
+ struct AssignArgs<'a> {
+ #[serde(skip_serializing_if = "Option::is_none")]
+ remote: Option<&'a str>,
+ #[serde(skip_serializing_if = "Option::is_none")]
+ node: Option<&'a str>,
+ #[serde(skip_serializing_if = "Option::is_none")]
+ digest: Option<ConfigDigest>,
+ }
+ let path = format!("/api2/extjs/subscriptions/keys/{key}");
+ self.0
+ .put(
+ &path,
+ &AssignArgs {
+ remote,
+ node,
+ digest,
+ },
+ )
+ .await?
+ .nodata()
+ }
+
+ /// Remove a key from the pool entirely.
+ ///
+ /// No digest parameter: deletion is a point-of-no-return operation and the typed-client
+ /// surface elsewhere (delete_remote, delete_user, ...) does not round-trip a digest on
+ /// DELETE either. External REST callers can still pass `digest` via the URL query if they
+ /// want optimistic concurrency on deletion; the server-side endpoint accepts it.
+ pub async fn delete_subscription_key(&self, key: &str) -> Result<(), Error> {
+ let path = format!("/api2/extjs/subscriptions/keys/{key}");
+ self.0.delete(&path).await?.nodata()
+ }
+
+ /// Combined remote/node subscription status, filtered to remotes the caller has audit
+ /// privilege on.
+ pub async fn subscription_node_status(
+ &self,
+ max_age: Option<u64>,
+ ) -> Result<Vec<RemoteNodeStatus>, Error> {
+ let path = ApiPathBuilder::new("/api2/extjs/subscriptions/node-status")
+ .maybe_arg("max-age", &max_age)
+ .build();
+ Ok(self.0.get(&path).await?.expect_json()?.data)
+ }
+
+ /// Propose (or apply) automatic key-to-node assignments.
+ pub async fn subscription_auto_assign(
+ &self,
+ apply: bool,
+ ) -> Result<Vec<ProposedAssignment>, Error> {
+ let path = ApiPathBuilder::new("/api2/extjs/subscriptions/auto-assign")
+ .arg("apply", &apply)
+ .build();
+ Ok(self.0.post(&path, &json!({})).await?.expect_json()?.data)
+ }
+
+ /// Push every pending assignment. Returns the worker UPID, or `None` when there is nothing
+ /// to do.
+ pub async fn subscription_apply_pending(&self) -> Result<Option<String>, Error> {
+ Ok(self
+ .0
+ .post("/api2/extjs/subscriptions/apply-pending", &json!({}))
+ .await?
+ .expect_json()?
+ .data)
+ }
+
+ /// Clear every pending assignment in one bulk transaction; returns the count of cleared
+ /// entries.
+ pub async fn subscription_clear_pending(&self) -> Result<u32, Error> {
+ let result: types::ClearPendingResult = self
+ .0
+ .post("/api2/extjs/subscriptions/clear-pending", &json!({}))
+ .await?
+ .expect_json()?
+ .data;
+ Ok(result.cleared)
+ }
+
pub async fn pve_list_networks(
&self,
remote: &str,
--
2.47.3
* [PATCH datacenter-manager v2 7/8] docs: add subscription registry chapter
2026-05-07 8:26 [PATCH datacenter-manager v2 0/8] subscription: add central key pool registry with reissue support Thomas Lamprecht
` (5 preceding siblings ...)
2026-05-07 8:26 ` [PATCH datacenter-manager v2 6/8] cli: add subscription key pool management subcommands Thomas Lamprecht
@ 2026-05-07 8:26 ` Thomas Lamprecht
2026-05-07 8:26 ` [PATCH datacenter-manager v2 8/8] subscription: add Reissue Key action with pending-reissue queue Thomas Lamprecht
` (2 subsequent siblings)
9 siblings, 0 replies; 15+ messages in thread
From: Thomas Lamprecht @ 2026-05-07 8:26 UTC (permalink / raw)
To: pdm-devel
Cover the new top-level feature: key pool, node status view, manual
assignment versus auto-assign, the pending/apply/clear lifecycle, and
the privilege model that gates mutation on per-remote resource
privileges in addition to system-scope MODIFY.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
---
docs/index.rst | 1 +
docs/subscription-registry.rst | 50 ++++++++++++++++++++++++++++++++++
2 files changed, 51 insertions(+)
create mode 100644 docs/subscription-registry.rst
diff --git a/docs/index.rst b/docs/index.rst
index 2fc8a5d..2aaf86e 100644
--- a/docs/index.rst
+++ b/docs/index.rst
@@ -27,6 +27,7 @@ in the section entitled "GNU Free Documentation License".
remotes.rst
automated-installations.rst
views.rst
+ subscription-registry.rst
access-control.rst
sysadmin.rst
faq.rst
diff --git a/docs/subscription-registry.rst b/docs/subscription-registry.rst
new file mode 100644
index 0000000..95c2cd4
--- /dev/null
+++ b/docs/subscription-registry.rst
@@ -0,0 +1,50 @@
+Subscription Registry
+=====================
+
+The subscription registry maintains a central pool of Proxmox VE and Proxmox Backup Server
+subscription keys and lets an administrator assign them to remote nodes from a single place, without
+having to select and configure a key on each remote node individually.
+
+Key Pool
+--------
+
+The pool accepts Proxmox VE and Proxmox Backup Server keys; other key prefixes are rejected so that
+a new product type is noticed instead of silently parking unusable entries. Each entry records its
+origin and the optional remote node it has been assigned to.
+
+Keys can be added in bulk from the web interface or with the ``proxmox-datacenter-client
+subscriptions add-keys`` command. The Add dialog takes multiple keys, separated by newlines or
+commas, and validates the whole batch atomically.
+
+Node Status
+-----------
+
+The Node Status panel shows the live subscription state of every node behind a configured remote
+alongside any pending plan from the pool. Nodes that already hold a key the registry assigned appear
+with the live level; nodes with a pending pool assignment show a clock icon until the change is
+pushed to the remote.
+
+From this view an operator can clear a pending assignment or remove the key from the pool entirely,
+which is convenient when a node is known to be wrong without first having to find the matching entry
+on the key list.
+
+Assignment
+----------
+
+A key can be pinned to a single node manually.
+
+The Auto-Assign action proposes a plan that fills unsubscribed nodes from free pool keys. For
+Proxmox VE, the smallest covering key by socket count is chosen, so a 4-socket key is not used on a
+2-socket host while a larger host stays unsubscribed.
+
+The proposed plan can be inspected before it is applied. Apply Pending pushes the queued keys to
+their target nodes; if a push fails the remaining queue is kept intact for retry. Clear Pending
+drops the plan without touching any remote.
+
+Permissions
+-----------
+
+Listing the pool and viewing the node status follow the regular audit privileges on each affected
+remote. Mutating an assignment requires the matching resource privilege on the target remote in
+addition to the system-scope MODIFY privilege, so an operator with global system access alone
+cannot push keys to remotes they have no other authority on.
--
2.47.3
* [PATCH datacenter-manager v2 8/8] subscription: add Reissue Key action with pending-reissue queue
2026-05-07 8:26 [PATCH datacenter-manager v2 0/8] subscription: add central key pool registry with reissue support Thomas Lamprecht
` (6 preceding siblings ...)
2026-05-07 8:26 ` [PATCH datacenter-manager v2 7/8] docs: add subscription registry chapter Thomas Lamprecht
@ 2026-05-07 8:26 ` Thomas Lamprecht
2026-05-07 8:34 ` [PATCH datacenter-manager v2 9/9] fixup! ui: add subscription registry with key pool and node status Thomas Lamprecht
2026-05-07 13:23 ` [PATCH datacenter-manager v2 0/8] subscription: add central key pool registry with reissue support Lukas Wagner
9 siblings, 0 replies; 15+ messages in thread
From: Thomas Lamprecht @ 2026-05-07 8:26 UTC (permalink / raw)
To: pdm-devel
Wire a new "Reissue Key" action on the Node Subscription Status panel
(previously titled Node Status) that queues the live subscription on a
remote node for removal at the next Apply Pending. This mirrors how the
shop describes keys that may be re-bound to a different server ID, so
the panel uses the same vocabulary end to end.
The pool entry bound to (remote, node) gets a pending-reissue flag.
Apply Pending issues a DELETE call on the remote and clears the
binding on success; the entry stays in the pool as a free key. Clear
Pending only resets the flag and leaves the binding intact, so an
operator can retry the queueing without losing context.
Foreign-key adoption: when the live node runs a subscription the pool
has never seen, queueing a reissue imports that key into the pool with
the flag set. If the operator then runs Clear Pending, the imported
entry stays in the pool as a free key. This shortcut lets a legacy
node be brought under PDM management by queueing a reissue and then
immediately cancelling it, without re-typing the key.
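Sketched as transitions on the bound pool entry (field names as
introduced by this patch; the snippets are illustrative, not verbatim
code from the series):

    // queue-reissue: flag the entry bound to (remote, node)
    entry.pending_reissue = true;

    // Apply Pending, after a successful DELETE on the remote:
    entry.remote = None;
    entry.node = None;
    entry.pending_reissue = false; // the key is a free pool entry again

    // Clear Pending: cancel the queued reissue, keep the binding for retry
    entry.pending_reissue = false;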
The left-side Key Pool grid renames its action button to "Remove Key"
for clarity against the new Reissue Key action, and the confirmation
dialog now warns when the entry is still bound to a remote node so an
operator does not lose track of the live subscription by mistake.
Note that, as of now, this might be better called "Clear Key"; it was
split out from larger work in which we can actually trigger a reissue
actively.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
---
changes v1 -> v2:
- drop docs adaption for now until this is fleshed out
cli/client/src/subscriptions.rs | 31 +-
docs/subscription-registry.rst | 44 ++-
lib/pdm-api-types/src/subscription.rs | 14 +
lib/pdm-api-types/tests/test_import.rs | 1 +
lib/pdm-client/src/lib.rs | 31 ++
server/src/api/subscriptions/mod.rs | 282 ++++++++++++++++--
ui/src/configuration/subscription_keys.rs | 29 +-
ui/src/configuration/subscription_registry.rs | 176 ++++++++---
8 files changed, 508 insertions(+), 100 deletions(-)
diff --git a/cli/client/src/subscriptions.rs b/cli/client/src/subscriptions.rs
index 5d5532b..5472a71 100644
--- a/cli/client/src/subscriptions.rs
+++ b/cli/client/src/subscriptions.rs
@@ -7,7 +7,7 @@ use proxmox_schema::api;
use pdm_api_types::remotes::REMOTE_ID_SCHEMA;
use pdm_api_types::subscription::{RemoteSubscriptionState, SUBSCRIPTION_KEY_SCHEMA};
-use pdm_api_types::VIEW_ID_SCHEMA;
+use pdm_api_types::{NODE_SCHEMA, VIEW_ID_SCHEMA};
use crate::env::emoji;
use crate::{client, env};
@@ -38,6 +38,10 @@ pub fn cli() -> CommandLineInterface {
.insert("auto-assign", CliCommand::new(&API_METHOD_AUTO_ASSIGN))
.insert("apply-pending", CliCommand::new(&API_METHOD_APPLY_PENDING))
.insert("clear-pending", CliCommand::new(&API_METHOD_CLEAR_PENDING))
+ .insert(
+ "queue-reissue",
+ CliCommand::new(&API_METHOD_QUEUE_REISSUE).arg_param(&["remote", "node"]),
+ )
.into()
}
@@ -147,6 +151,9 @@ async fn list_keys() -> Result<(), Error> {
let key_width = keys.iter().map(|k| k.key.len()).max().unwrap_or(20);
for key in &keys {
let assignment = match (&key.remote, &key.node) {
+ (Some(r), Some(n)) if key.pending_reissue => {
+ format!("{r}/{n} [reissue queued]")
+ }
(Some(r), Some(n)) => format!("{r}/{n}"),
_ => "(unassigned)".to_string(),
};
@@ -272,6 +279,28 @@ async fn auto_assign(apply: Option<bool>) -> Result<(), Error> {
Ok(())
}
+#[api(
+ input: {
+ properties: {
+ remote: { schema: REMOTE_ID_SCHEMA },
+ node: { schema: NODE_SCHEMA },
+ },
+ },
+)]
+/// Queue a reissue on a remote node so its subscription can be removed at the next Apply Pending.
+///
+/// If the live node runs a key that is not yet in the pool, the key is imported into the pool
+/// with the queueing flag set. Cancelling via Clear Pending later leaves that imported entry in
+/// the pool, effectively adopting the previously foreign subscription for future PDM-side
+/// management.
+async fn queue_reissue(remote: String, node: String) -> Result<(), Error> {
+ client()?
+ .subscription_queue_reissue(&remote, &node, None)
+ .await?;
+ println!("Queued reissue on {remote}/{node}; run apply-pending to apply.");
+ Ok(())
+}
+
#[api]
/// Push all pending key assignments to remotes as a worker task.
async fn apply_pending() -> Result<(), Error> {
diff --git a/docs/subscription-registry.rst b/docs/subscription-registry.rst
index 95c2cd4..9f5522d 100644
--- a/docs/subscription-registry.rst
+++ b/docs/subscription-registry.rst
@@ -16,29 +16,43 @@ Keys can be added in bulk from the web interface or with the ``proxmox-datacente
subscriptions add-keys`` command. The Add dialog takes multiple keys, separated by newlines or
commas, and validates the whole batch atomically.
-Node Status
------------
+Node Subscription Status
+------------------------
-The Node Status panel shows the live subscription state of every node behind a configured remote
-alongside any pending plan from the pool. Nodes that already hold a key the registry assigned appear
-with the live level; nodes with a pending pool assignment show a clock icon until the change is
-pushed to the remote.
+The Node Subscription Status panel shows the live subscription state of every node behind a
+configured remote alongside any pending plan from the pool. Nodes that already hold a key the
+registry assigned appear with the live level; nodes with a pending pool assignment show a clock
+icon until the change is pushed to the remote.
-From this view an operator can clear a pending assignment or remove the key from the pool entirely,
-which is convenient when a node is known to be wrong without first having to find the matching entry
-on the key list.
+From this view an operator can clear a pending assignment or queue a reissue on the selected
+node. Reissuing means freeing the live subscription key from this node so it can be reassigned
+elsewhere; this is the same notion the shop uses for keys that may be re-bound to a new server
+ID. The action is queued, not immediate, so it can be confirmed or cancelled together with
+other pending changes via Apply Pending and Clear Pending.
-Assignment
-----------
+Assignment and Reissue
+----------------------
A key can be pinned to a single node manually.
The Auto-Assign action proposes a plan that fills unsubscribed nodes from free pool keys. For
-Proxmox VE, the smallest covering key by socket count is chosen, so a 4-socket key is not used on a
-2-socket host while a larger host stays unsubscribed.
+Proxmox VE, the smallest covering key by socket count is chosen, so a 4-socket key is not used
+on a 2-socket host while a larger host stays unsubscribed.
-The proposed plan can be inspected before it is applied. Apply Pending pushes the queued keys to
-their target nodes; if a push fails the remaining queue is kept intact for retry. Clear Pending
+The Reissue Key action queues the live subscription on the selected node for removal. Apply
+Pending later issues the removal on the remote and releases the pool binding so the key becomes
+available for reassignment. Clear Pending drops the queued reissue without touching the remote;
+the binding stays intact and the operator can retry.
+
+When the live node runs a subscription that is not yet tracked by the pool, queueing a reissue
+also imports that key into the pool. If the operator then runs Clear Pending, the imported
+entry stays as a free pool key, effectively adopting the previously foreign subscription for
+future PDM-side management. This shortcut lets a legacy node be brought under the registry by
+queueing a reissue and then immediately cancelling it, without having to re-enter the key by
+hand.
+
+The proposed plan can be inspected before it is applied. Apply Pending walks the queue in
+order; if any push or reissue fails the remaining queue is kept intact for retry. Clear Pending
drops the plan without touching any remote.
Permissions
diff --git a/lib/pdm-api-types/src/subscription.rs b/lib/pdm-api-types/src/subscription.rs
index ead3c1b..9927d38 100644
--- a/lib/pdm-api-types/src/subscription.rs
+++ b/lib/pdm-api-types/src/subscription.rs
@@ -317,6 +317,7 @@ pub enum SubscriptionKeySource {
"level": { optional: true },
"status": { optional: true },
"source": { optional: true },
+ "pending-reissue": { optional: true },
},
)]
#[derive(Clone, Debug, Default, Deserialize, Serialize, PartialEq)]
@@ -346,6 +347,16 @@ pub struct SubscriptionKeyEntry {
#[serde(skip_serializing_if = "Option::is_none")]
pub node: Option<String>,
+ /// True when the operator queued a reissue for this entry's bound node, that is, a request
+ /// to free the key from `remote`/`node` so it can be reassigned to a different node.
+ ///
+ /// Apply Pending will issue a DELETE on the remote and then clear `remote`/`node` on
+ /// success. Clear Pending only resets this flag and leaves the binding untouched, so a
+ /// foreign current_key that was newly imported to queue a reissue stays adopted in the
+ /// pool. A bare flag is enough since the (remote, node) binding lives next to it.
+ #[serde(default, rename = "pending-reissue")]
+ pub pending_reissue: bool,
+
/// Server ID this key is bound to (from signed info, if available).
#[serde(skip_serializing_if = "Option::is_none")]
pub serverid: Option<String>,
@@ -527,6 +538,9 @@ pub struct RemoteNodeStatus {
/// Current key on the node (from remote query).
#[serde(skip_serializing_if = "Option::is_none")]
pub current_key: Option<String>,
+ /// True when the pool has a reissue queued for this node.
+ #[serde(default, rename = "pending-reissue")]
+ pub pending_reissue: bool,
}
#[api]
diff --git a/lib/pdm-api-types/tests/test_import.rs b/lib/pdm-api-types/tests/test_import.rs
index 2bb1cd6..e8502a5 100644
--- a/lib/pdm-api-types/tests/test_import.rs
+++ b/lib/pdm-api-types/tests/test_import.rs
@@ -17,6 +17,7 @@ fn entry_roundtrip() {
source: SubscriptionKeySource::Manual,
remote: Some("my-cluster".to_string()),
node: Some("node1".to_string()),
+ pending_reissue: false,
serverid: Some("AABBCCDD".to_string()),
status: SubscriptionStatus::Active,
nextduedate: Some("2027-06-01".to_string()),
diff --git a/lib/pdm-client/src/lib.rs b/lib/pdm-client/src/lib.rs
index b0527b1..7e6411c 100644
--- a/lib/pdm-client/src/lib.rs
+++ b/lib/pdm-client/src/lib.rs
@@ -1201,6 +1201,37 @@ impl<T: HttpApiClient> PdmClient<T> {
.data)
}
+ /// Queue a reissue for the subscription on `remote`/`node`. Apply Pending later removes the
+ /// subscription from the node so the key can be reassigned elsewhere; Clear Pending undoes
+ /// the queueing without touching the remote. If the live node runs a key that is not yet in
+ /// the pool, this also imports it into the pool with the queueing flag set, see the server
+ /// endpoint docs for the adoption semantics.
+ pub async fn subscription_queue_reissue(
+ &self,
+ remote: &str,
+ node: &str,
+ digest: Option<ConfigDigest>,
+ ) -> Result<(), Error> {
+ #[derive(Serialize)]
+ struct ReissueArgs<'a> {
+ remote: &'a str,
+ node: &'a str,
+ #[serde(skip_serializing_if = "Option::is_none")]
+ digest: Option<ConfigDigest>,
+ }
+ self.0
+ .post(
+ "/api2/extjs/subscriptions/queue-reissue",
+ &ReissueArgs {
+ remote,
+ node,
+ digest,
+ },
+ )
+ .await?
+ .nodata()
+ }
+
/// Clear every pending assignment in one bulk transaction; returns the count of cleared
/// entries.
pub async fn subscription_clear_pending(&self) -> Result<u32, Error> {
diff --git a/server/src/api/subscriptions/mod.rs b/server/src/api/subscriptions/mod.rs
index 26d9ecf..ce6be0b 100644
--- a/server/src/api/subscriptions/mod.rs
+++ b/server/src/api/subscriptions/mod.rs
@@ -49,6 +49,10 @@ const SUBDIRS: SubdirMap = &sorted!([
),
("keys", &KEYS_ROUTER),
("node-status", &Router::new().get(&API_METHOD_NODE_STATUS)),
+ (
+ "queue-reissue",
+ &Router::new().post(&API_METHOD_QUEUE_REISSUE)
+ ),
]);
const KEYS_ROUTER: Router = Router::new()
@@ -404,6 +408,9 @@ async fn assign_key(
let entry = config.get_mut(&key).unwrap();
entry.remote = None;
entry.node = None;
+ // pending_reissue without a binding is meaningless - reset so a later reassignment
+ // does not re-trigger a stale teardown.
+ entry.pending_reissue = false;
}
_ => {
http_bail!(
@@ -480,6 +487,156 @@ async fn push_key_to_remote(remote: &Remote, key: &str, node_name: &str) -> Resu
Ok(())
}
+/// Tear down a node's subscription via the remote's `/nodes/{node}/subscription` endpoint.
+async fn delete_subscription_on_remote(
+ remote: &Remote,
+ product_type: ProductType,
+ node_name: &str,
+) -> Result<(), Error> {
+ let path = match product_type {
+ ProductType::Pve => format!("/api2/extjs/nodes/{node_name}/subscription"),
+ ProductType::Pbs => "/api2/extjs/nodes/localhost/subscription".to_string(),
+ ProductType::Pmg | ProductType::Pom => {
+ bail!("PDM cannot reissue '{product_type}' keys: no remote support yet");
+ }
+ };
+
+ let client = crate::connection::make_pbs_client_and_login(remote).await?;
+ client.0.delete(&path).await?;
+
+ info!("removed subscription from {}/{node_name}", remote.id);
+ Ok(())
+}
+
+#[api(
+ input: {
+ properties: {
+ remote: { schema: REMOTE_ID_SCHEMA },
+ // NODE_SCHEMA rejects path-traversal input before it ends up interpolated into
+ // the remote URL `/api2/extjs/nodes/{node}/subscription`.
+ node: { schema: NODE_SCHEMA },
+ digest: {
+ type: ConfigDigest,
+ optional: true,
+ },
+ },
+ },
+ access: {
+ permission: &Permission::Privilege(&["system"], PRIV_SYS_MODIFY, false),
+ },
+)]
+/// Queue a reissue on a remote node, that is, mark its subscription for removal so the key can
+/// be reassigned elsewhere.
+///
+/// Sets `pending_reissue` on the pool entry currently bound to (remote, node). Apply Pending
+/// later issues the DELETE on the remote and clears the binding on success; Clear Pending only
+/// resets the flag and leaves the binding intact so the operator can retry.
+///
+/// Adoption path: when the live node runs a key that is not yet in the pool, queueing a reissue
+/// adds that key to the pool with the flag set. If the operator later runs Clear Pending, the
+/// newly-imported entry stays in the pool as a free key, effectively adopting a previously
+/// foreign subscription for future PDM-side management.
+///
+/// Per-remote `PRIV_RESOURCE_MODIFY` is enforced inside the handler so an operator with global
+/// system access alone cannot tear down subscriptions on remotes they have no other authority on.
+async fn queue_reissue(
+ remote: String,
+ node: String,
+ digest: Option<ConfigDigest>,
+ rpcenv: &mut dyn RpcEnvironment,
+) -> Result<(), Error> {
+ let auth_id: Authid = rpcenv
+ .get_auth_id()
+ .context("no authid available")?
+ .parse()?;
+ let user_info = CachedUserInfo::new()?;
+ user_info.check_privs(
+ &auth_id,
+ &["resource", &remote],
+ PRIV_RESOURCE_MODIFY,
+ false,
+ )?;
+
+ // Fetch live state before grabbing the config lock so the network call does not pin the lock
+ // for the duration of a remote query.
+ let (remotes_config, _) = pdm_config::remotes::config()?;
+ let remote_entry = remotes_config
+ .get(&remote)
+ .ok_or_else(|| http_err!(NOT_FOUND, "remote '{remote}' not found"))?;
+ let live = get_subscription_info_for_remote(remote_entry, FRESH_NODE_STATUS_MAX_AGE)
+ .await
+ .map_err(|err| {
+ http_err!(
+ BAD_REQUEST,
+ "could not read subscription on {remote}/{node}: {err}"
+ )
+ })?;
+ let live_current_key: Option<String> = live
+ .get(&node)
+ .and_then(|info| info.as_ref())
+ .and_then(|info| info.key.clone());
+
+ let _lock = pdm_config::subscriptions::lock_config()?;
+ let (mut config, config_digest) = pdm_config::subscriptions::config()?;
+ config_digest.detect_modification(digest.as_ref())?;
+
+ let bound_id = config
+ .iter()
+ .find(|(_, e)| {
+ e.remote.as_deref() == Some(remote.as_str()) && e.node.as_deref() == Some(node.as_str())
+ })
+ .map(|(id, _)| id.to_string());
+
+ if let Some(id) = bound_id {
+ let entry = config.get_mut(&id).unwrap();
+ if entry.pending_reissue {
+ http_bail!(BAD_REQUEST, "reissue already queued for {remote}/{node}");
+ }
+ entry.pending_reissue = true;
+ } else {
+ // No pool entry is bound to this (remote, node). Three sub-cases for the live key:
+ // - Already in the pool but unbound: rebind to target and flag. This covers the user
+ // path where Clear Assignment dropped the binding while the live subscription stayed.
+ // - Already in the pool but bound elsewhere: refuse, the operator has to reconcile.
+ // - Not in the pool: adopt by inserting a fresh entry. If Clear Pending fires later,
+ // the entry stays as a now-managed free pool key.
+ let current_key = live_current_key
+ .ok_or_else(|| http_err!(NOT_FOUND, "no subscription on {remote}/{node}"))?;
+
+ if let Some(existing) = config.get_mut(&current_key) {
+ if existing.remote.is_some() || existing.node.is_some() {
+ http_bail!(
+ CONFLICT,
+ "key '{current_key}' is in the pool but bound elsewhere; resolve manually first"
+ );
+ }
+ existing.remote = Some(remote.clone());
+ existing.node = Some(node.clone());
+ existing.pending_reissue = true;
+ } else {
+ SUBSCRIPTION_KEY_SCHEMA
+ .parse_simple_value(&current_key)
+ .map_err(|err| http_err!(BAD_REQUEST, "key '{current_key}' rejected: {err}"))?;
+ let product_type = ProductType::from_key(&current_key)
+ .ok_or_else(|| format_err!("unrecognised key prefix: {current_key}"))?;
+ let entry = SubscriptionKeyEntry {
+ key: current_key.clone(),
+ product_type,
+ level: SubscriptionLevel::from_key(Some(&current_key)),
+ source: SubscriptionKeySource::Manual,
+ remote: Some(remote.clone()),
+ node: Some(node.clone()),
+ pending_reissue: true,
+ ..Default::default()
+ };
+ config.insert(current_key, entry);
+ }
+ }
+
+ pdm_config::subscriptions::save_config(&config)?;
+ Ok(())
+}
+
#[api(
input: {
properties: {
@@ -561,13 +718,14 @@ async fn collect_node_status(
),
};
- let assigned_key = keys_config
- .iter()
- .find(|(_id, entry)| {
- entry.remote.as_deref() == Some(remote_name.as_str())
- && entry.node.as_deref() == Some(node_name.as_str())
- })
- .map(|(_id, entry)| entry.key.clone());
+ let pool_entry = keys_config.iter().find(|(_id, entry)| {
+ entry.remote.as_deref() == Some(remote_name.as_str())
+ && entry.node.as_deref() == Some(node_name.as_str())
+ });
+ let (assigned_key, pending_reissue) = match pool_entry {
+ Some((_id, entry)) => (Some(entry.key.clone()), entry.pending_reissue),
+ None => (None, false),
+ };
out.push(RemoteNodeStatus {
remote: remote_name.clone(),
@@ -578,6 +736,7 @@ async fn collect_node_status(
level,
assigned_key,
current_key,
+ pending_reissue,
});
}
}
@@ -782,37 +941,77 @@ async fn run_apply_pending(auth_id: Authid) -> Result<(), Error> {
for entry in pending {
let Some(remote) = remotes_config.get(&entry.remote) else {
bail!(
- "remote '{}' vanished, aborting after {ok}/{total} successful pushes",
+ "remote '{}' vanished, aborting after {ok}/{total} successful operations",
entry.remote,
);
};
- // Honour the case where the operator unassigned the key while the worker was queued.
+ // Honour the case where the operator unassigned or reset the entry while the worker was
+ // queued. `pool_assignment_still_valid` re-reads the binding under no lock, which is
+ // good enough since `save_config` writes are atomic and a stale skip is benign.
if !pool_assignment_still_valid(&config, &entry) {
info!(
- "skipping {}/{}: pool assignment changed before worker ran",
+ "skipping {}/{}: pool entry changed before worker ran",
entry.remote, entry.node
);
continue;
}
- info!(
- "pushing {} to {}/{}...",
- entry.key, entry.remote, entry.node
- );
- if let Err(err) = push_key_to_remote(remote, &entry.key, &entry.node).await {
- bail!(
- "push of {} to {}/{} failed after {ok}/{total} successful pushes: {err}",
- entry.key,
- entry.remote,
- entry.node,
- );
+ match entry.op {
+ PendingOp::Push => {
+ info!(
+ "pushing {} to {}/{}...",
+ entry.key, entry.remote, entry.node
+ );
+ if let Err(err) = push_key_to_remote(remote, &entry.key, &entry.node).await {
+ bail!(
+ "push of {} to {}/{} failed after {ok}/{total} successful operations: {err}",
+ entry.key,
+ entry.remote,
+ entry.node,
+ );
+ }
+ }
+ PendingOp::Reissue => {
+ let product_type = match ProductType::from_key(&entry.key) {
+ Some(ty) => ty,
+ None => bail!("unrecognised key format: {}", entry.key),
+ };
+ info!(
+ "reissuing: removing subscription from {}/{}...",
+ entry.remote, entry.node
+ );
+ if let Err(err) =
+ delete_subscription_on_remote(remote, product_type, &entry.node).await
+ {
+ bail!(
+ "reissue of {} on {}/{} failed after {ok}/{total} successful operations: {err}",
+ entry.key,
+ entry.remote,
+ entry.node,
+ );
+ }
+ // Clear the binding under the config lock. A subsequent compute_pending call must
+ // not propose another push or reissue for the same entry.
+ let _lock = pdm_config::subscriptions::lock_config()?;
+ let (mut updated, _) = pdm_config::subscriptions::config()?;
+ if let Some(stored) = updated.get_mut(&entry.key) {
+ if stored.remote.as_deref() == Some(entry.remote.as_str())
+ && stored.node.as_deref() == Some(entry.node.as_str())
+ {
+ stored.remote = None;
+ stored.node = None;
+ stored.pending_reissue = false;
+ }
+ }
+ pdm_config::subscriptions::save_config(&updated)?;
+ }
}
info!(" success");
invalidate_subscription_info_for_remote(&entry.remote);
ok += 1;
}
- info!("finished: {ok}/{total} pushes succeeded");
+ info!("finished: {ok}/{total} operations succeeded");
Ok(())
}
@@ -854,8 +1053,21 @@ async fn clear_pending(rpcenv: &mut dyn RpcEnvironment) -> Result<ClearPendingRe
if stored.remote.as_deref() == Some(entry.remote.as_str())
&& stored.node.as_deref() == Some(entry.node.as_str())
{
- stored.remote = None;
- stored.node = None;
+ match entry.op {
+ PendingOp::Reissue => {
+ // Only reset the flag - leave the binding so the operator can retry the
+ // reissue without having to re-import a foreign key from scratch.
+ stored.pending_reissue = false;
+ }
+ PendingOp::Push => {
+ stored.remote = None;
+ stored.node = None;
+ // Defensive: an entry that flipped to pending_reissue between the
+ // pre-lock snapshot and now would otherwise leave a meaningless flag
+ // on a now-unbound entry. Reset alongside the binding clear.
+ stored.pending_reissue = false;
+ }
+ }
cleared += 1;
}
}
@@ -868,12 +1080,21 @@ async fn clear_pending(rpcenv: &mut dyn RpcEnvironment) -> Result<ClearPendingRe
Ok(ClearPendingResult { cleared })
}
-/// Plan entry for one pending push.
+/// Plan entry for one pending push or reissue.
#[derive(Clone, Debug)]
struct PendingEntry {
key: String,
remote: String,
node: String,
+ op: PendingOp,
+}
+
+#[derive(Clone, Copy, Debug, PartialEq, Eq)]
+enum PendingOp {
+ /// PUT the assigned key to the remote because the live state does not match.
+ Push,
+ /// DELETE the subscription on the remote and clear the binding on success.
+ Reissue,
}
fn compute_pending(
@@ -893,6 +1114,15 @@ fn compute_pending(
return None;
}
+ if entry.pending_reissue {
+ return Some(PendingEntry {
+ key: entry.key.clone(),
+ remote: remote.to_string(),
+ node: node.to_string(),
+ op: PendingOp::Reissue,
+ });
+ }
+
// Treat anything other than "Active with the assigned key as the live current_key"
// as pending, including unreachable remotes, so an operator can clear stuck
// assignments without first having to bring the target back online.
@@ -911,6 +1141,7 @@ fn compute_pending(
key: entry.key.clone(),
remote: remote.to_string(),
node: node.to_string(),
+ op: PendingOp::Push,
})
})
.collect())
@@ -960,6 +1191,7 @@ async fn collect_status_uncached(
level,
assigned_key: None,
current_key,
+ pending_reissue: false,
});
}
}
diff --git a/ui/src/configuration/subscription_keys.rs b/ui/src/configuration/subscription_keys.rs
index c535e94..e2cb8ed 100644
--- a/ui/src/configuration/subscription_keys.rs
+++ b/ui/src/configuration/subscription_keys.rs
@@ -280,19 +280,28 @@ impl LoadableComponent for SubscriptionKeyGridComp {
.selected_entry()
.map(|entry| self.create_assign_dialog(&entry, ctx)),
ViewState::Remove => self.selection.selected_key().map(|key| {
- ConfirmDialog::new(
- tr!("Remove Key"),
- tr!(
+ let assignment = self.selected_entry().and_then(|e| {
+ Some((e.remote.clone()?, e.node.clone()?))
+ });
+ let body = match assignment {
+ Some((remote, node)) => tr!(
+ "Remove {key} from the key pool? It is still assigned to {remote}/{node}; the assignment is released without removing the subscription on the remote. Use Reissue Key on the Node Subscription Status panel first if you want to free the live subscription too.",
+ key = key.to_string(),
+ remote = remote,
+ node = node,
+ ),
+ None => tr!(
"Remove {key} from the key pool? This does not revoke the subscription.",
key = key.to_string(),
),
- )
- .on_confirm({
- let link = ctx.link().clone();
- let key = key.clone();
- move |_| link.send_message(Msg::Remove(key.clone()))
- })
- .into()
+ };
+ ConfirmDialog::new(tr!("Remove Key"), body)
+ .on_confirm({
+ let link = ctx.link().clone();
+ let key = key.clone();
+ move |_| link.send_message(Msg::Remove(key.clone()))
+ })
+ .into()
}),
}
}
diff --git a/ui/src/configuration/subscription_registry.rs b/ui/src/configuration/subscription_registry.rs
index 7ed96e6..82d6058 100644
--- a/ui/src/configuration/subscription_registry.rs
+++ b/ui/src/configuration/subscription_registry.rs
@@ -7,7 +7,7 @@ use anyhow::Error;
use yew::virtual_dom::{Key, VComp, VNode};
use proxmox_yew_comp::percent_encoding::percent_encode_component;
-use proxmox_yew_comp::{http_delete, http_get, http_post, http_put};
+use proxmox_yew_comp::{http_get, http_post, http_put};
use proxmox_yew_comp::{
LoadableComponent, LoadableComponentContext, LoadableComponentMaster,
LoadableComponentScopeExt, LoadableComponentState,
@@ -28,6 +28,7 @@ const NODE_STATUS_URL: &str = "/subscriptions/node-status";
const AUTO_ASSIGN_URL: &str = "/subscriptions/auto-assign";
const APPLY_PENDING_URL: &str = "/subscriptions/apply-pending";
const CLEAR_PENDING_URL: &str = "/subscriptions/clear-pending";
+const QUEUE_REISSUE_URL: &str = "/subscriptions/queue-reissue";
/// Map a [`SubscriptionStatus`] to the icon shown in subscription panels.
///
@@ -165,15 +166,21 @@ pub enum Msg {
ClearPending,
/// Clear the pool's pin on the currently-selected node by un-assigning its key.
ClearSelectedNode,
- /// Remove the pool entry currently pinned to the selected node.
- RemoveSelectedNodeKey,
+ /// Open the confirmation dialog for queueing a reissue on the selected node.
+ QueueReissueForSelectedNode,
}
#[derive(PartialEq)]
pub enum ViewState {
ConfirmAutoAssign(Vec<ProposedAssignment>),
ConfirmClearPending,
- ConfirmRemoveSelectedNodeKey(String),
+ /// Pending confirmation to queue a reissue for `(remote, node)`. The current key on the
+ /// node is shown in the dialog body when available.
+ ConfirmQueueReissue {
+ remote: String,
+ node: String,
+ current_key: Option<String>,
+ },
}
#[doc(hidden)]
@@ -332,8 +339,24 @@ fn key_cell(n: &RemoteNodeStatus) -> Html {
let assigned = n.assigned_key.as_deref();
let current = n.current_key.as_deref();
- // Pending = pool key assigned but the node doesn't have an active subscription yet (the key
- // still needs to be pushed).
+ if n.pending_reissue {
+ // Reissue queued: surface the live key the operator is about to free, with a recycle
+ // icon in the warning colour so the row stands out next to ordinary pending pushes.
+ let text = current.or(assigned).unwrap_or("");
+ return Tooltip::new(
+ Row::new()
+ .class(AlignItems::Baseline)
+ .gap(2)
+ .with_child(Fa::new("recycle").class(FontColor::Warning))
+ .with_child(text),
+ )
+ .tip(tr!(
+ "Pending Reissue - Apply Pending will remove this subscription from the node."
+ ))
+ .into();
+ }
+
+ // Pending push = pool key assigned but the node doesn't have an active subscription yet.
let pending =
assigned.is_some() && n.status != proxmox_subscription::SubscriptionStatus::Active;
@@ -449,7 +472,7 @@ impl LoadableComponent for SubscriptionRegistryComp {
});
}
Msg::ClearSelectedNode => {
- let Some(key) = self.selected_assigned_key() else {
+ let Some(key) = self.clear_assignment_target_key() else {
return false;
};
let link = ctx.link().clone();
@@ -461,12 +484,16 @@ impl LoadableComponent for SubscriptionRegistryComp {
link.send_reload();
});
}
- Msg::RemoveSelectedNodeKey => {
- let Some(key) = self.selected_assigned_key() else {
+ Msg::QueueReissueForSelectedNode => {
+ let Some((remote, node, current_key)) = self.selected_node_for_reissue() else {
return false;
};
ctx.link()
- .change_view(Some(ViewState::ConfirmRemoveSelectedNodeKey(key)));
+ .change_view(Some(ViewState::ConfirmQueueReissue {
+ remote,
+ node,
+ current_key,
+ }));
}
}
true
@@ -551,34 +578,56 @@ impl LoadableComponent for SubscriptionRegistryComp {
ViewState::ConfirmAutoAssign(proposals) => {
Some(self.render_auto_assign_dialog(ctx, proposals))
}
- ViewState::ConfirmRemoveSelectedNodeKey(key) => {
+ ViewState::ConfirmQueueReissue {
+ remote,
+ node,
+ current_key,
+ } => {
use pwt::widget::ConfirmDialog;
+ let question = match current_key {
+ Some(k) => tr!(
+ "Queue a reissue of {key} on {remote}/{node}?",
+ key = k.clone(),
+ remote = remote.clone(),
+ node = node.clone(),
+ ),
+ None => tr!(
+ "Queue a reissue on {remote}/{node}?",
+ remote = remote.clone(),
+ node = node.clone(),
+ ),
+ };
+ let body = Column::new()
+ .gap(2)
+ .with_child(Container::from_tag("p").with_child(question))
+ .with_child(Container::from_tag("p").with_child(tr!(
+ "Apply Pending will remove the subscription from the node so the key can be reassigned elsewhere; Clear Pending undoes the queueing without touching the remote."
+ )));
+ let remote_for_cb = remote.clone();
+ let node_for_cb = node.clone();
let link = ctx.link().clone();
- let key_for_callback = key.clone();
Some(
- ConfirmDialog::new(
- tr!("Remove Key"),
- tr!(
- "Remove {key} from the key pool? This does not revoke the subscription on the remote node.",
- key = key.clone(),
- ),
- )
- .on_confirm(move |_| {
- let link = link.clone();
- let key = key_for_callback.clone();
- link.clone().spawn(async move {
- let url = format!(
- "/subscriptions/keys/{}",
- percent_encode_component(&key),
- );
- if let Err(err) = http_delete(&url, None).await {
- link.show_error(tr!("Remove Key"), err.to_string(), true);
- }
- link.change_view(None);
- link.send_reload();
- });
- })
- .into(),
+ ConfirmDialog::default()
+ .title(tr!("Reissue Key"))
+ .confirm_message(body)
+ .on_confirm(move |_| {
+ let link = link.clone();
+ let remote = remote_for_cb.clone();
+ let node = node_for_cb.clone();
+ link.clone().spawn(async move {
+ let body = serde_json::json!({
+ "remote": remote,
+ "node": node,
+ });
+ if let Err(err) = http_post::<()>(QUEUE_REISSUE_URL, Some(body)).await
+ {
+ link.show_error(tr!("Reissue Key"), err.to_string(), true);
+ }
+ link.change_view(None);
+ link.send_reload();
+ });
+ })
+ .into(),
)
}
}
@@ -614,32 +663,30 @@ impl SubscriptionRegistryComp {
.show_header(true)
.class(FlexFit);
- let has_assigned = self.selected_assigned_key().is_some();
+ let can_clear = self.clear_assignment_target_key().is_some();
+ let can_reissue = self.selected_node_for_reissue().is_some();
let clear_button = Button::new(tr!("Clear Assignment"))
.icon_class("fa fa-unlink")
- .disabled(!has_assigned)
+ .disabled(!can_clear)
.on_activate(ctx.link().callback(|_| Msg::ClearSelectedNode));
- let remove_button = Button::new(tr!("Remove"))
- .icon_class("fa fa-trash-o")
- .disabled(!has_assigned)
- .on_activate(ctx.link().callback(|_| Msg::RemoveSelectedNodeKey));
+ let reissue_button = Button::new(tr!("Reissue Key"))
+ .icon_class("fa fa-recycle")
+ .disabled(!can_reissue)
+ .on_activate(ctx.link().callback(|_| Msg::QueueReissueForSelectedNode));
Panel::new()
.class(FlexFit)
.border(true)
.style("flex", "4 1 0")
.min_width(400)
- .title(tr!("Node Status"))
+ .title(tr!("Node Subscription Status"))
.with_tool(clear_button)
- .with_tool(remove_button)
+ .with_tool(reissue_button)
.with_child(table)
}
- /// Pool key currently assigned to whatever node the operator selected in the tree.
- ///
- /// Returns None when no node row is selected, the selected entry is a remote-level
- /// aggregate, or the node has no pool assignment.
- fn selected_assigned_key(&self) -> Option<String> {
+ /// Resolve the selected tree row to its `RemoteNodeStatus`, if any.
+ fn selected_node_status(&self) -> Option<&RemoteNodeStatus> {
let key = self.node_selection.selected_key()?;
let raw = key.to_string();
let mut parts = raw.trim_start_matches('/').splitn(2, '/');
@@ -648,7 +695,38 @@ impl SubscriptionRegistryComp {
self.last_node_data
.iter()
.find(|n| n.remote == remote && n.node == node)
- .and_then(|n| n.assigned_key.clone())
+ }
+
+ /// Returns the assigned key when Clear Assignment is appropriate: there is a binding AND it
+ /// has not yet been pushed (different from current_key, or the node is not Active). For an
+ /// already-synced assignment, clearing would orphan the live subscription on the remote, so
+ /// the operator must use Reissue Key instead. A queued reissue is also off-limits here so
+ /// "Clear Assignment" stays a binding-clear verb; cancelling a reissue goes through Clear
+ /// Pending, which only resets the flag and keeps the binding for retry.
+ fn clear_assignment_target_key(&self) -> Option<String> {
+ let n = self.selected_node_status()?;
+ if n.pending_reissue {
+ return None;
+ }
+ let assigned = n.assigned_key.as_ref()?;
+ let synced = n.status == proxmox_subscription::SubscriptionStatus::Active
+ && n.current_key.as_deref() == Some(assigned.as_str());
+ if synced {
+ return None;
+ }
+ Some(assigned.clone())
+ }
+
+ /// Returns `(remote, node, current_key)` when the selected node has a live subscription that
+ /// can be queued for reissue: there is a current key on the node and no reissue is already
+ /// queued for it. Disabling the button below this gate keeps the action idempotent without a
+ /// server round-trip.
+ fn selected_node_for_reissue(&self) -> Option<(String, String, Option<String>)> {
+ let n = self.selected_node_status()?;
+ if n.pending_reissue || n.current_key.is_none() {
+ return None;
+ }
+ Some((n.remote.clone(), n.node.clone(), n.current_key.clone()))
}
fn render_auto_assign_dialog(
--
2.47.3
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH datacenter-manager v2 9/9] fixup! ui: add subscription registry with key pool and node status
2026-05-07 8:26 [PATCH datacenter-manager v2 0/8] subscription: add central key pool registry with reissue support Thomas Lamprecht
` (7 preceding siblings ...)
2026-05-07 8:26 ` [PATCH datacenter-manager v2 8/8] subscription: add Reissue Key action with pending-reissue queue Thomas Lamprecht
@ 2026-05-07 8:34 ` Thomas Lamprecht
2026-05-07 13:23 ` [PATCH datacenter-manager v2 0/8] subscription: add central key pool registry with reissue support Lukas Wagner
9 siblings, 0 replies; 15+ messages in thread
From: Thomas Lamprecht @ 2026-05-07 8:34 UTC (permalink / raw)
To: pdm-devel
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
---
Missed Lukas' second mail before sending and only ran cargo check in the
top-level workspace, sorry. Sending as a follow-up to avoid a v3 until
there is more actual feedback.
ui/src/configuration/subscription_registry.rs | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/ui/src/configuration/subscription_registry.rs b/ui/src/configuration/subscription_registry.rs
index 82d6058..6390b97 100644
--- a/ui/src/configuration/subscription_registry.rs
+++ b/ui/src/configuration/subscription_registry.rs
@@ -524,7 +524,7 @@ impl LoadableComponent for SubscriptionRegistryComp {
)
.with_flex_spacer()
.with_child(
- Button::refresh(ctx.loading()).on_activate({
+ Button::refresh(self.loading()).on_activate({
let link = link.clone();
move |_| link.send_reload()
}),
--
2.47.3
* Re: [PATCH datacenter-manager v2 1/8] api: subscription cache: ensure max_age=0 forces a fresh fetch
2026-05-07 8:26 ` [PATCH datacenter-manager v2 1/8] api: subscription cache: ensure max_age=0 forces a fresh fetch Thomas Lamprecht
@ 2026-05-07 13:23 ` Lukas Wagner
2026-05-08 12:43 ` applied: " Lukas Wagner
1 sibling, 0 replies; 15+ messages in thread
From: Lukas Wagner @ 2026-05-07 13:23 UTC (permalink / raw)
To: Thomas Lamprecht, pdm-devel
On Thu May 7, 2026 at 10:26 AM CEST, Thomas Lamprecht wrote:
> The cache lookup used 'diff > max_age', so a same-second hit with
> max_age=0 still returned cached data; collect_status_uncached and the
> direct user-supplied ?max-age=0 bypass both silently lost their
> freshness guarantee. Short-circuit max_age=0 explicitly and switch the
> TTL comparison to '>=' so the boundary is an exact miss.
>
> Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
> ---
>
> This threw me off quite a bit, as I observed seemingly stale cache
> issues, which were ultimately due to something completely different.
>
> server/src/api/resources.rs | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/server/src/api/resources.rs b/server/src/api/resources.rs
> index 04628a8..50315b1 100644
> --- a/server/src/api/resources.rs
> +++ b/server/src/api/resources.rs
> @@ -830,11 +830,14 @@ fn get_cached_subscription_info(remote: &str, max_age: u64) -> Option<CachedSubs
> .read()
> .expect("subscription mutex poisoned");
>
> + if max_age == 0 {
> + return None;
> + }
> if let Some(cached_subscription) = cache.get(remote) {
> let now = proxmox_time::epoch_i64();
> let diff = now - cached_subscription.timestamp;
>
> - if diff > max_age as i64 || diff < 0 {
> + if diff >= max_age as i64 || diff < 0 {
> // value is too old or from the future
> None
> } else {
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
* Re: [PATCH datacenter-manager v2 2/8] api types: subscription level: render full names
2026-05-07 8:26 ` [PATCH datacenter-manager v2 2/8] api types: subscription level: render full names Thomas Lamprecht
@ 2026-05-07 13:23 ` Lukas Wagner
0 siblings, 0 replies; 15+ messages in thread
From: Lukas Wagner @ 2026-05-07 13:23 UTC (permalink / raw)
To: Thomas Lamprecht, pdm-devel
On Thu May 7, 2026 at 10:26 AM CEST, Thomas Lamprecht wrote:
> The Display impl produced single-letter codes ("c", "b", "s", "p"),
> forcing the dashboard to keep a private letter-to-name helper just
> to render labels.
>
> Switching Display to the full names is safe: FromStr is extended to
> accept the names alongside the legacy single-letter codes, so any
> previously serialised value still parses, and the only in-tree
> caller of Display on this enum is the dashboard helper that this
> commit drops. The level strings reported by the PVE/PBS API land in
> unrelated String fields and are not touched.
>
> Add Debug to the derives, required for assert_eq! over the level in
> the upcoming key-pool tests.
>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
* Re: [PATCH datacenter-manager v2 4/8] subscription: add key pool and node status API endpoints
2026-05-07 8:26 ` [PATCH datacenter-manager v2 4/8] subscription: add key pool and node status API endpoints Thomas Lamprecht
@ 2026-05-07 13:23 ` Lukas Wagner
0 siblings, 0 replies; 15+ messages in thread
From: Lukas Wagner @ 2026-05-07 13:23 UTC (permalink / raw)
To: Thomas Lamprecht, pdm-devel
Hi!
Not a deep review of the code yet; I mostly focused on checking out the
UX and API design so far. Some notes inline.
On Thu May 7, 2026 at 10:26 AM CEST, Thomas Lamprecht wrote:
[...]
> +
> +const KEYS_ROUTER: Router = Router::new()
> + .get(&API_METHOD_LIST_KEYS)
> + .post(&API_METHOD_ADD_KEYS)
> + .match_all("key", &KEY_ITEM_ROUTER);
> +
> +const KEY_ITEM_ROUTER: Router = Router::new()
> + .get(&API_METHOD_GET_KEY)
> + .put(&API_METHOD_ASSIGN_KEY)
Maybe instead of PUTing a key to assign/unassign it, we could rather
have a
POST .../keys/<id>/assignment
and
DELETE .../keys/<id>/assignment
Then the POST could have remote and node as non-optional params, I
think?
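Roughly what I have in mind, as an untested sketch (the handler names
are made up, and I'm assuming the usual SubdirMap plumbing we use
elsewhere):

    const KEY_ITEM_ROUTER: Router = Router::new()
        .get(&API_METHOD_GET_KEY)
        .delete(&API_METHOD_DELETE_KEY)
        // assignment lifecycle as its own sub-resource
        .subdirs(&[("assignment", &ASSIGNMENT_ROUTER)]);

    const ASSIGNMENT_ROUTER: Router = Router::new()
        // POST takes `remote` and `node` as required parameters
        .post(&API_METHOD_CREATE_ASSIGNMENT)
        .delete(&API_METHOD_DELETE_ASSIGNMENT);

That would also free up PUT on the key item itself for plain metadata
updates later on.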
> + .delete(&API_METHOD_DELETE_KEY);
> +
> +/// Force-fresh node-status query so the next view reflects the new state instead of returning a
> +/// cached entry up to 5 minutes later. Used by auto-assign / apply-pending / clear-pending to
> +/// avoid double-driving a node that has already moved to Active in the cache window.
> +const FRESH_NODE_STATUS_MAX_AGE: u64 = 0;
> +
> +/// Cached node-status freshness used by read-only views. Five minutes matches the resource-cache
> +/// convention and is short enough that admins rarely see stale data on the panel.
> +const PANEL_NODE_STATUS_MAX_AGE: u64 = 5 * 60;
> +
[...]
> +
> +/// Pre-lock check for [`assign_key`]'s unassign path: returns the (remote, node) the entry is
> +/// currently active on, if any, so the lock-protected branch can refuse the unassign and prompt
> +/// the operator to Reissue Key instead. Returns `None` for entries with no binding, no live
> +/// subscription, or a live subscription whose key does not match the entry.
> +async fn check_synced_assignment_for_unassign(
> + key: &str,
> +) -> Result<Option<(String, String)>, Error> {
> + let (config, _) = pdm_config::subscriptions::config()?;
> + let Some(entry) = config.get(key) else {
> + return Ok(None);
> + };
> + let (Some(prev_remote), Some(prev_node)) = (entry.remote.clone(), entry.node.clone()) else {
> + return Ok(None);
> + };
> + let (remotes_config, _) = pdm_config::remotes::config()?;
> + let Some(remote_entry) = remotes_config.get(&prev_remote) else {
> + return Ok(None);
> + };
> + let live = match get_subscription_info_for_remote(remote_entry, FRESH_NODE_STATUS_MAX_AGE).await
> + {
> + Ok(v) => v,
> + Err(_) => return Ok(None),
> + };
> + let synced = live
> + .get(&prev_node)
> + .and_then(|info| info.as_ref())
> + .map(|info| {
> + info.status == proxmox_subscription::SubscriptionStatus::Active
> + && info.key.as_deref() == Some(key)
> + })
> + .unwrap_or(false);
> + Ok(synced.then_some((prev_remote, prev_node)))
> +}
> +
> +/// Push a single key to its assigned remote node. Operates on a borrowed `Remote` so the
> +/// caller can fetch the remotes-config once and reuse it.
> +async fn push_key_to_remote(remote: &Remote, key: &str, node_name: &str) -> Result<(), Error> {
> + let product_type =
> + ProductType::from_key(key).ok_or_else(|| format_err!("unrecognised key format: {key}"))?;
> +
> + // PVE and PBS share `proxmox_client::Client`, so `make_pbs_client_and_login` works for both;
> + // only the PUT path differs.
> + let path = match product_type {
> + ProductType::Pve => format!("/api2/extjs/nodes/{node_name}/subscription"),
> + ProductType::Pbs => "/api2/extjs/nodes/localhost/subscription".to_string(),
> + ProductType::Pmg | ProductType::Pom => {
> + bail!("PDM cannot push '{product_type}' keys: no remote support yet");
> + }
> + };
> +
> + let client = crate::connection::make_pbs_client_and_login(remote).await?;
> +
> + client
> + .0
> + .put(&path, &serde_json::json!({ "key": key }))
> + .await?;
> + client.0.post(&path, &serde_json::json!({})).await?;
> +
> + info!("pushed key '{key}' to {}/{node_name}", remote.id);
> + Ok(())
Naturally this should use proper API bindings from pbs_client and
pve-api-types at some point. FWIW, I have patches ready for both and can
supply them if desired (that can also happen as a follow-up, since your
approach obviously works for now).
> +}
> +
> +#[api(
> + input: {
> + properties: {
> + "max-age": {
> + type: u64,
> + optional: true,
> + description: "Override the cache freshness window in seconds. \
> + Default 300 for panel views; pass 0 to force a fresh query.",
> + },
> + },
> + },
> + returns: {
> + type: Array,
> + description: "Subscription status of all remote nodes the user can audit.",
> + items: { type: RemoteNodeStatus },
> + },
> + access: {
> + permission: &Permission::Privilege(&["system"], PRIV_SYS_AUDIT, false),
> + },
> +)]
> +/// Get the subscription status of every remote node the caller can audit, combined with key pool
> +/// assignment information.
> +///
> +/// Per-remote `PRIV_RESOURCE_AUDIT` is enforced inside the handler so users only see remotes
> +/// they may audit.
> +async fn node_status(
> + max_age: Option<u64>,
> + rpcenv: &mut dyn RpcEnvironment,
> +) -> Result<Vec<RemoteNodeStatus>, Error> {
> + collect_node_status(max_age.unwrap_or(PANEL_NODE_STATUS_MAX_AGE), rpcenv).await
> +}
> +
> +/// Shared helper: fan out subscription queries to all remotes the caller has audit privilege on,
> +/// in parallel, reusing the per-remote `SUBSCRIPTION_CACHE` via `get_subscription_info_for_remote`.
> +/// Joins the results with the key-pool assignment table.
> +async fn collect_node_status(
> + max_age: u64,
> + rpcenv: &mut dyn RpcEnvironment,
> +) -> Result<Vec<RemoteNodeStatus>, Error> {
> + let auth_id: Authid = rpcenv
> + .get_auth_id()
> + .context("no authid available")?
> + .parse()?;
> + let user_info = CachedUserInfo::new()?;
> +
> + let visible_remotes: Vec<(String, Remote)> = crate::api::remotes::RemoteIterator::new()?
> + .any_privs(&user_info, &auth_id, PRIV_RESOURCE_AUDIT)
> + .into_iter()
> + .collect();
> +
> + let (keys_config, _) = pdm_config::subscriptions::config()?;
> +
> + // `get_subscription_info_for_remote` re-uses the per-remote `SUBSCRIPTION_CACHE` so this
> + // fan-out is safe to run concurrently.
> + let fetch = visible_remotes.iter().map(|(name, remote)| async move {
> + let res = get_subscription_info_for_remote(remote, max_age).await;
> + (name.clone(), remote.ty, res)
> + });
> + let results = join_all(fetch).await;
> +
> + let mut out = Vec::new();
> + for (remote_name, remote_ty, result) in results {
> + let node_infos = match result {
> + Ok(info) => info,
> + Err(err) => {
> + warn!("failed to query subscription for remote {remote_name}: {err}");
> + continue;
> + }
> + };
> +
> + for (node_name, node_info) in &node_infos {
> + let (status, level, sockets, current_key) = match node_info {
> + Some(info) => (info.status, info.level, info.sockets, info.key.clone()),
> + None => (
> + proxmox_subscription::SubscriptionStatus::NotFound,
> + SubscriptionLevel::None,
> + None,
> + None,
> + ),
> + };
> +
> + let assigned_key = keys_config
> + .iter()
> + .find(|(_id, entry)| {
> + entry.remote.as_deref() == Some(remote_name.as_str())
> + && entry.node.as_deref() == Some(node_name.as_str())
> + })
> + .map(|(_id, entry)| entry.key.clone());
> +
> + out.push(RemoteNodeStatus {
> + remote: remote_name.clone(),
> + ty: remote_ty,
> + node: node_name.to_string(),
> + sockets,
> + status,
> + level,
> + assigned_key,
> + current_key,
> + });
> + }
> + }
> +
> + out.sort_by(|a, b| (&a.remote, &a.node).cmp(&(&b.remote, &b.node)));
> + Ok(out)
> +}
> +
> +#[api(
> + input: {
> + properties: {
> + apply: {
I'd rather call this 'assign', so as to avoid confusion with the
'apply-pending' action?
(but with my comment down below this would become obsolete)
> + type: bool,
> + optional: true,
> + default: false,
> + description: "Actually apply the proposed assignments. Without this, only a preview is returned.",
> + },
> + },
> + },
> + returns: {
> + type: Array,
> + description: "List of proposed or applied assignments.",
> + items: { type: ProposedAssignment },
> + },
> + access: {
> + permission: &Permission::Privilege(&["system"], PRIV_SYS_MODIFY, false),
> + },
> +)]
> +/// Propose or apply automatic key-to-node assignments.
> +///
> +/// Matches unused pool keys to remote nodes that do not yet have a pool-assigned key, picking
> +/// the smallest PVE key that covers each node's socket count. When `apply=true`, the live node
> +/// statuses are fetched first (without holding the config lock - sync locks must not span
> +/// awaits), then proposals are computed and persisted under the lock with a per-key re-check
> +/// against the now-current pool state, so a parallel admin edit between fetch and apply does
> +/// not get silently overwritten.
> +async fn auto_assign(
> + apply: Option<bool>,
AFAIK, this can just be 'apply: bool' (or rather 'assign: bool'); the
default should be automatically injected.
> + rpcenv: &mut dyn RpcEnvironment,
> +) -> Result<Vec<ProposedAssignment>, Error> {
> + let apply = apply.unwrap_or(false);
... then you can drop this assignment here.
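I.e., sketched out (relying on the api macro injecting the schema
default, as it does elsewhere):

    #[api(
        input: {
            properties: {
                assign: {
                    type: bool,
                    optional: true,
                    default: false,
                    description: "Actually apply the proposed assignments.",
                },
            },
        },
        // returns/access unchanged
    )]
    /// Propose or apply automatic key-to-node assignments.
    async fn auto_assign(
        assign: bool, // arrives as `false` when the caller omits it
        rpcenv: &mut dyn RpcEnvironment,
    ) -> Result<Vec<ProposedAssignment>, Error> {
        // same body as before, just without the unwrap_or(false)
        todo!()
    }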
> +
> + let auth_id: Authid = rpcenv
> + .get_auth_id()
> + .context("no authid available")?
> + .parse()?;
> + let user_info = CachedUserInfo::new()?;
> +
> + let node_statuses = collect_node_status(FRESH_NODE_STATUS_MAX_AGE, rpcenv).await?;
> +
> + if !apply {
> + let (config, _digest) = pdm_config::subscriptions::config()?;
> + return Ok(compute_proposals(&config, &node_statuses));
> + }
Maybe I'm missing something, but how do we actually ensure that the
preview is then the same thing as what is assigned later when 'apply' is
set to true?
The proposal is computed both times from scratch, so if anything changes
(available keys, node subscription status) in the meantime, the
outcome from POST .../auto-assign?apply=1 is not the same as what was
shown in the UI earlier?
It's probably not *that* big of an issue in real life (after all, the
time between seeing the preview and pressing 'Apply' is usually rather
short), but I guess with a slightly different API design this could be
eliminated?
Maybe, the auto-assign endpoint could always just return a proposal, and
then there is a 'bulk assign' endpoint, which the UI then uses to submit
the (confirmed) proposal. The 'bulk assign' endpoint could then fail if
any of the assumptions from the proposal do not hold any more (e.g.
digests changed).
That design would also allow editing the proposal in the preview
dialog before it is applied/assigned.
What do you think?
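As a rough sketch of what I mean (endpoint shape, digest handling and
error texts are all hypothetical and untested):

    /// Hypothetical 'bulk-assign' handler: persist a previously returned
    /// proposal, refusing if the pool changed since it was computed.
    async fn bulk_assign(
        digest: String, // the client echoes back the proposal's digest
        assignments: Vec<ProposedAssignment>,
    ) -> Result<(), Error> {
        let _lock = pdm_config::subscriptions::lock_config()?;
        let (mut config, current_digest) = pdm_config::subscriptions::config()?;

        // reject if anything changed between preview and confirmation
        if digest != hex::encode(current_digest) {
            bail!("key pool changed since the proposal was computed, please retry");
        }
        for p in &assignments {
            let entry = config
                .get_mut(&p.key)
                .ok_or_else(|| format_err!("key '{}' vanished from the pool", p.key))?;
            if entry.remote.is_some() {
                bail!("key '{}' was assigned in the meantime", p.key);
            }
            entry.remote = Some(p.remote.clone());
            entry.node = Some(p.node.clone());
        }
        pdm_config::subscriptions::save_config(&config)
    }

The per-remote privilege filtering and node-status checks from
auto_assign would of course still need to happen on top of that.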
> +
> + let _lock = pdm_config::subscriptions::lock_config()?;
> + let (mut config, _digest) = pdm_config::subscriptions::config()?;
> + let mut proposals = compute_proposals(&config, &node_statuses);
> +
> + // Audit-only callers may see a remote in the preview but must not be able to stage a write
> + // for it that another admin would later push on their behalf.
> + proposals.retain(|p| {
> + user_info.lookup_privs(&auth_id, &["resource", &p.remote]) & PRIV_RESOURCE_MODIFY != 0
> + });
> +
> + for p in &proposals {
> + if let Some(entry) = config.get_mut(&p.key) {
> + // Skip keys that another writer assigned between the preview and the lock.
> + if entry.remote.is_none() {
> + entry.remote = Some(p.remote.clone());
> + entry.node = Some(p.node.clone());
> + }
> + }
> + }
> + pdm_config::subscriptions::save_config(&config)?;
> +
> + Ok(proposals)
> +}
> +
* Re: [PATCH datacenter-manager v2 0/8] subscription: add central key pool registry with reissue support
2026-05-07 8:26 [PATCH datacenter-manager v2 0/8] subscription: add central key pool registry with reissue support Thomas Lamprecht
` (8 preceding siblings ...)
2026-05-07 8:34 ` [PATCH datacenter-manager v2 9/9] fixup! ui: add subscription registry with key pool and node status Thomas Lamprecht
@ 2026-05-07 13:23 ` Lukas Wagner
9 siblings, 0 replies; 15+ messages in thread
From: Lukas Wagner @ 2026-05-07 13:23 UTC (permalink / raw)
To: Thomas Lamprecht, pdm-devel
On Thu May 7, 2026 at 10:26 AM CEST, Thomas Lamprecht wrote:
> Add a Subscription Registry to PDM: a central pool of PVE and PBS
> subscription keys that an operator can assign to remote nodes from one
> place, with an explicit Apply/Clear lifecycle for staged changes plus a
> Reissue Key action for freeing a key bound to a node so it can be
> reassigned elsewhere.
>
> Motivation: managing subscriptions across many remotes today means
> doing this for each node individually. PDM already has the remote
> inventory; with a key pool plus per-remote query data we can show "which
> nodes need a subscription" and "which keys are unused" together, and let
> an admin batch-assign and tear down from one place.
> In the near/mid-term we can also make polling keys from customers more
> integrated, but that needs a bit of adaptation in our shop infra and
> does not block the base work here in any way. The implementation here
> was split out from a more complete body of work, so most parts of it
> are already prepared to adopt this relatively easily.
>
> Design points worth flagging for review:
>
> * Storage layout. subscriptions.cfg holds key entries via the typed
> section-config layer, with `product-type` as the section type so PVE
> and PBS sections live side-by-side.
We've been using subdirectories quite a bit more liberally in PDM, so
maybe this could live in one as well?
E.g. `subscriptions/keys.cfg`, `subscriptions/keys.shadow`, etc.
Not sure if we will ever need more than these files though, so it might
be overkill.
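As a sketch, with hypothetical names and assuming the config directory
we already use:

    // hypothetical path constants for the subdirectory layout
    pub const SUBSCRIPTION_KEYS_CFG: &str =
        "/etc/proxmox-datacenter-manager/subscriptions/keys.cfg";
    pub const SUBSCRIPTION_KEYS_SHADOW: &str =
        "/etc/proxmox-datacenter-manager/subscriptions/keys.shadow";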
> The subscriptions.shadow file is reserved for a future shop-bundle
> import flow (signed info blobs) and stays empty for manually-added
> keys. I can drop that part for now too, but figured it might be nice
> and potentially relevant for review to already see the direction this
> will probably go.
>
> * Endpoints take PRIV_SYS_AUDIT/MODIFY at the macro
> level for the pool itself, with per-remote PRIV_RESOURCE_* enforced
> inside the handlers when a specific remote is touched.
> A dedicated subscription privilege did not seem like a necessity and
> would also not fit that well into our general priv approach in PDM.
>
> * The pending lifecycle: pool entries with a (remote, node) binding whose
> live state does not match are "pending push"; entries with the new
> pending-reissue flag are "pending removal". Apply Pending walks both
> queues; Clear Pending drops the queue without touching any remote
> (binding-clear for push, flag-only for reissue so the operator can
> retry without re-importing the key).
> The per-remote subscription cache is invalidated after each successful
> apply step so the next panel load reflects the change rather than a
> 5-minute-stale snapshot, which is highly confusing UI/UX-wise. This
> might warrant a closer look though; it might currently be done in a
> heavier-handed fashion than needed (had no time to recheck).
>
> * Locking is best-effort, but here that should be fine in practice given
> that another entity can always alter the state on a remote node in
> parallel anyway.
>
> The lib/pdm-api-types/tests/test_import.rs test should provide basic
> coverage for the section-config roundtrip of both subscriptions.cfg and
> the shadow file (which is why I'd be fine with keeping it, though I
> don't feel _that_ strongly about it), schema acceptance and rejection
> (for now only PVE/PBS are accepted, everything else is rejected),
> ProductType classification, the SubscriptionLevel display/from-str
> backward-compat (single-letter and full-name forms both parse), and
> pick_best_pve_socket_key edge cases.
>
> Open follow-ups deliberately out of scope here:
> * Auto-import existing remote-side keys into the pool on first
> observation (the reissue path already adopts; an explicit import for
> legacy onboarding would be cleaner).
> * Make reissue a full reissue; if it goes in like this, it should
> rather be called "Clear Key", but that can also be handled when
> applying, if really nothing else comes up (which I doubt).
> * A shop-bundle import path (the shadow file plumbing is already in),
> either manual copy+paste or through an api token.
> * Some polishing, code- and UI/UX-wise (e.g., a reload button), but I
> wanted to finally get this out now.
> * ...
>
Haven't really done a (deep) review of the code yet; I focused mostly on
the API and UX for now.
Gave this a try on the latest master. Worked nicely, except for a minor
glitch with key-pool refreshes in the UI:
- The key pool list misses a couple of 'refresh' events, mainly
noticeable when the cells in the 'Assignment' column change, e.g. when
using
  - Auto-Assign
  - Clear Assignment
I always had to press F5 to get the refreshed assignments displayed
here; the "Reload" button in the toolbar did not refresh it properly.
Some more thoughts, mostly UX related:
- In the 'Auto-Assign Proposal' window, the 'Sockets (node/key)' column
showed '-/4' for all my PVE nodes (with a key that allows four
sockets). I assume it is supposed to show the node's actual socket
count there?
- In the 'Auto-Assign Proposal' window, I'd rather use the term 'Assign'
on the button that confirms the action, in order to avoid confusion
with the "Apply Pending" action.
- When adding keys to the key pool and the "Add Subscription Keys"
dialog is shown, I noticed a couple of warnings in the browser
console:
"WARN /usr/share/cargo/registry/pwt-0.8.3/src/widget/input_panel.rs:234 could not extract key from custom child, generating one"
I've seen this before somewhere else, so it might not be related to
these changes.
- In general, the UI could more visibly show that there are actions
pending, I think. When trying out this feature for the first time, I
really wasn't sure whether the keys had been fully applied yet or not;
I actually had to double-check in the PBS/PVE web interface to be
sure. The icons in the node lists were rather subtle to me. Maybe
there could be some status banner in the toolbar iff there are any
pending actions?
- The "Auto-Assign", "Apply Pending" and "Clear Pending" buttons could
maybe use a tool tip with some (brief) explanation.
- Should we, maybe later, allow editing the 'Auto-Assign' proposal
(e.g. excluding nodes, or overriding the key assignment)?
I guess this would require a different approach for the API, as
explained in one of my other messages.
- The 'Status' column in the "Node Subscription Panel" could maybe use
(translated) pretty printing, e.g. "No subscription" instead of
'notfound' - see the sketch after this list.
- The "Assign Key to Remote" dialog allowed me select nodes which
already had a subscription, I guess it would be good to filter them
out (or at least grey them out).
- Maybe the "Assign" action should rather be something that is
done in the "Node Subscription Status" panel, where the user can first
select a node in the tree and then press the button?
Then the "Assign" button would simply be disabled when selecting a
node that already has a subscription.
When pressing "Assign", we could pre-select a matching key (product,
socket count) in the dialog that is then shown.
Having the "Assign" action also makes sense to me since the "Clear
Assignment" is also right there.
A "Assign" dialog could also then allow to add a (single) key right
there, which is then automatically added to the pool. Could be nice
for UX streamlininig.
- It seems to me that the "Clear Assignment" button only works for
pending assignments - so maybe the button text itself could indicate
that, e.g "Clear Pending Assignment"? Could be a bit long though, so
might be fine as is.
- Most panels in the PDM UI have the remote/node tree on the left side;
maybe we should do the same here? On the other hand, having the key
pool on the left dictates the correct order of operations (first, add
keys), so maybe keeping it there is also good.
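For the status pretty-printing mentioned above, I'm thinking of
something along these lines (sketch; the tr! usage and the exact
variant list are written from memory):

    // map the raw subscription status to a translated, readable label
    fn render_subscription_status(
        status: proxmox_subscription::SubscriptionStatus,
    ) -> String {
        use proxmox_subscription::SubscriptionStatus as S;
        match status {
            S::New => tr!("New"),
            S::NotFound => tr!("No subscription"),
            S::Active => tr!("Active"),
            S::Invalid => tr!("Invalid"),
            S::Expired => tr!("Expired"),
            S::Suspended => tr!("Suspended"),
        }
    }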
* applied: [PATCH datacenter-manager v2 1/8] api: subscription cache: ensure max_age=0 forces a fresh fetch
2026-05-07 8:26 ` [PATCH datacenter-manager v2 1/8] api: subscription cache: ensure max_age=0 forces a fresh fetch Thomas Lamprecht
2026-05-07 13:23 ` Lukas Wagner
@ 2026-05-08 12:43 ` Lukas Wagner
1 sibling, 0 replies; 15+ messages in thread
From: Lukas Wagner @ 2026-05-08 12:43 UTC (permalink / raw)
To: Thomas Lamprecht, pdm-devel
On Thu May 7, 2026 at 10:26 AM CEST, Thomas Lamprecht wrote:
> The cache lookup used 'diff > max_age', so a same-second hit with
> max_age=0 still returned cached data; collect_status_uncached and the
> direct user-supplied ?max-age=0 bypass both silently lost their
> freshness guarantee. Short-circuit max_age=0 explicitly and switch the
> TTL comparison to '>=' so the boundary is an exact miss.
>
> Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
[...]
applied this one already - I'm modifying the same code for my generic
cache implementation RFC, so applying this early avoids stepping on each
other's toes
Thanks!