* [PATCH datacenter-manager v3 00/12] subscription key pool registry
@ 2026-05-15 7:43 Thomas Lamprecht
2026-05-15 7:43 ` [PATCH datacenter-manager v3 01/12] api types: subscription level: render full names Thomas Lamprecht
` (11 more replies)
0 siblings, 12 replies; 16+ messages in thread
From: Thomas Lamprecht @ 2026-05-15 7:43 UTC (permalink / raw)
To: pdm-devel
v3 of the Subscription Registry - many thanks to @Lukas for the review!
The notable shape change vs v2: the single Reissue Key patch is split
into the four discrete actions PDM can drive today (Clear Key, Adopt
Key, Adopt All, Check Subscription); the on-disk layout split moves to
its own commit; each user-visible action ships with its api / cli / ui /
docs change so a reviewer reads one patch end-to-end.
Check Subscription uses the canonical UpdateSubscription from the
recently uploaded proxmox-subscription 1.0.2; the matching PBS-side
adoption is posted as a separate patch on proxmox-backup.
Notable v2 -> v3:
* Reissue Key renamed to Clear Key (v2 reissue did not actually round-
trip through the shop); "Reissue" stays reserved for the future
shop-side action.
* Clear Pending renamed to Discard Pending; now also cancels queued
clears, not just pending pushes.
* New Adopt Key / Adopt All paths import a foreign live key into the
pool without touching the remote, for fleet onboarding.
* New Check Subscription action drives update_subscription(force=true)
and invalidates the PDM cache, so a stale Invalid / Expired verdict
can be promoted without waiting for the periodic check.
* Pool grid columns are sortable; new Source column (hidden by
default) distinguishes manually-added from adopted entries.
* ESC dismisses every confirmation dialog on the registry view.
* Invalid keys land with a clear error instead of staying queued with
a misleading pending badge.
* Per-node Revert can drop a single queued clear without the global
Discard Pending.
Open follow-ups, not in this series:
* Shop-side full reissue, so PDM can drive the actual key rotation
rather than just Clear Key on its side.
* Atomic clear-and-assign, so swapping a key on a node drops from four
steps (Clear / Apply / Assign / Apply) to one queued change (+ apply).
* Shop-bundle import path; the on-disk shadow file plumbing already
accommodates the signed SubscriptionInfo blob.
* Per-row Auto-Assign overrides for pinning a specific key to a node.
* Status column filter on the node-status tree.
The trailing wizard commit (v3-0012) is sent as RFC and should
probably be skipped; see its diffstat note for details.
Thomas Lamprecht (12):
api types: subscription level: render full names
pdm-client: add wait_for_local_task helper
subscription: pool: add data model and config layer
subscription: api: add key pool and node status endpoints
ui: registry: add view with key pool and node status
cli: client: add subscription key pool management subcommands
docs: add subscription registry chapter
subscription: add Clear Key action and per-node revert
subscription: add Adopt Key action for foreign live subscriptions
subscription: add Adopt All bulk action
subscription: add Check Subscription action
ui: registry: add Add-and-Assign wizard from Assign Key dialog
Cargo.toml | 4 +-
cli/client/src/subscriptions.rs | 413 ++-
docs/index.rst | 1 +
docs/subscription-registry.rst | 84 +
lib/pdm-api-types/Cargo.toml | 1 +
lib/pdm-api-types/src/subscription.rs | 496 +++-
lib/pdm-api-types/tests/test_import.rs | 367 +++
lib/pdm-client/Cargo.toml | 3 +
lib/pdm-client/src/lib.rs | 337 ++-
lib/pdm-config/src/lib.rs | 1 +
lib/pdm-config/src/setup.rs | 7 +
lib/pdm-config/src/subscriptions.rs | 116 +
server/src/api/mod.rs | 2 +
server/src/api/resources.rs | 28 +-
server/src/api/subscriptions/mod.rs | 2297 +++++++++++++++++
server/src/context.rs | 7 +
server/src/pbs_client.rs | 31 +
ui/Cargo.toml | 2 +-
ui/src/configuration/mod.rs | 3 +
ui/src/configuration/subscription_assign.rs | 755 ++++++
ui/src/configuration/subscription_keys.rs | 561 ++++
ui/src/configuration/subscription_registry.rs | 1520 +++++++++++
ui/src/dashboard/subscriptions_list.rs | 18 +-
ui/src/main_menu.rs | 10 +
ui/src/widget/pve_node_selector.rs | 41 +-
25 files changed, 7061 insertions(+), 44 deletions(-)
create mode 100644 docs/subscription-registry.rst
create mode 100644 lib/pdm-api-types/tests/test_import.rs
create mode 100644 lib/pdm-config/src/subscriptions.rs
create mode 100644 server/src/api/subscriptions/mod.rs
create mode 100644 ui/src/configuration/subscription_assign.rs
create mode 100644 ui/src/configuration/subscription_keys.rs
create mode 100644 ui/src/configuration/subscription_registry.rs
--
2.47.3
^ permalink raw reply [flat|nested] 16+ messages in thread
* [PATCH datacenter-manager v3 01/12] api types: subscription level: render full names
2026-05-15 7:43 [PATCH datacenter-manager v3 00/12] subscription key pool registry Thomas Lamprecht
@ 2026-05-15 7:43 ` Thomas Lamprecht
2026-05-15 7:43 ` [PATCH datacenter-manager v3 02/12] pdm-client: add wait_for_local_task helper Thomas Lamprecht
` (10 subsequent siblings)
11 siblings, 0 replies; 16+ messages in thread
From: Thomas Lamprecht @ 2026-05-15 7:43 UTC (permalink / raw)
To: pdm-devel
The Display impl produced single-letter codes ("c", "b", "s", "p"),
forcing the dashboard to keep a private letter-to-name helper just
to render labels.
Switching Display to the full names is safe: FromStr is extended to
accept the names alongside the legacy single-letter codes, so any
previously serialised value still parses, and the only in-tree
Display caller, the dashboard helper, is dropped alongside the
change. The level strings reported by the PVE/PBS API land in
unrelated String fields and are not touched.
Add Debug to the derives, required for assert_eq! over the level in
the upcoming key-pool tests.
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
---
No changes since v2 (besides picking up Lukas R-b)
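
To make the compatibility claim above concrete, a small illustrative
snippet (not part of the diff; it just exercises the SubscriptionLevel
type from pdm_api_types::subscription as changed here):

    // legacy single-letter wire values keep parsing after the change ...
    assert_eq!("p".parse::<SubscriptionLevel>().unwrap(), SubscriptionLevel::Premium);
    // ... while Display now emits the full name, which FromStr also
    // accepts, so the new output round-trips cleanly
    let rendered = SubscriptionLevel::Premium.to_string();
    assert_eq!(rendered, "Premium");
    assert_eq!(rendered.parse::<SubscriptionLevel>().unwrap(), SubscriptionLevel::Premium);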
lib/pdm-api-types/src/subscription.rs | 24 ++++++++++++------------
ui/src/dashboard/subscriptions_list.rs | 18 ++----------------
2 files changed, 14 insertions(+), 28 deletions(-)
diff --git a/lib/pdm-api-types/src/subscription.rs b/lib/pdm-api-types/src/subscription.rs
index ca23b8e5..f0eb525b 100644
--- a/lib/pdm-api-types/src/subscription.rs
+++ b/lib/pdm-api-types/src/subscription.rs
@@ -8,7 +8,7 @@ use proxmox_subscription::{SubscriptionInfo, SubscriptionStatus};
#[api]
// order is important here, since we use that for determining if a node has a valid subscription
-#[derive(Default, Copy, Clone, PartialEq, Eq, PartialOrd, Ord)]
+#[derive(Default, Copy, Clone, Debug, PartialEq, Eq, PartialOrd, Ord)]
/// Describes the level of subscription
pub enum SubscriptionLevel {
#[default]
@@ -50,11 +50,11 @@ impl FromStr for SubscriptionLevel {
fn from_str(s: &str) -> Result<Self, Self::Err> {
Ok(match s {
- "p" => SubscriptionLevel::Premium,
- "s" => SubscriptionLevel::Standard,
- "b" => SubscriptionLevel::Basic,
- "c" => SubscriptionLevel::Community,
- "" => SubscriptionLevel::None,
+ "p" | "premium" | "Premium" => SubscriptionLevel::Premium,
+ "s" | "standard" | "Standard" => SubscriptionLevel::Standard,
+ "b" | "basic" | "Basic" => SubscriptionLevel::Basic,
+ "c" | "community" | "Community" => SubscriptionLevel::Community,
+ "" | "none" | "None" => SubscriptionLevel::None,
_ => SubscriptionLevel::Unknown,
})
}
@@ -63,12 +63,12 @@ impl FromStr for SubscriptionLevel {
impl std::fmt::Display for SubscriptionLevel {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.write_str(match self {
- SubscriptionLevel::None => "",
- SubscriptionLevel::Unknown => "unknown",
- SubscriptionLevel::Community => "c",
- SubscriptionLevel::Basic => "b",
- SubscriptionLevel::Standard => "s",
- SubscriptionLevel::Premium => "p",
+ SubscriptionLevel::None => "None",
+ SubscriptionLevel::Unknown => "Unknown",
+ SubscriptionLevel::Community => "Community",
+ SubscriptionLevel::Basic => "Basic",
+ SubscriptionLevel::Standard => "Standard",
+ SubscriptionLevel::Premium => "Premium",
})
}
}
diff --git a/ui/src/dashboard/subscriptions_list.rs b/ui/src/dashboard/subscriptions_list.rs
index b0a96eb6..fdb9e9e1 100644
--- a/ui/src/dashboard/subscriptions_list.rs
+++ b/ui/src/dashboard/subscriptions_list.rs
@@ -204,17 +204,6 @@ fn columns(
.with_child(Container::from_tag("span").with_child(text))
}
- fn render_subscription_level(level: SubscriptionLevel) -> &'static str {
- match level {
- SubscriptionLevel::None => "None",
- SubscriptionLevel::Basic => "Basic",
- SubscriptionLevel::Community => "Community",
- SubscriptionLevel::Premium => "Premium",
- SubscriptionLevel::Standard => "Standard",
- SubscriptionLevel::Unknown => "Unknown",
- }
- }
-
let subscription_column = DataTableColumn::new(tr!("Subscription"))
.render(|entry: &SubscriptionTreeEntry| match entry {
SubscriptionTreeEntry::Node(node) => {
@@ -222,16 +211,13 @@ fn columns(
let (sub_state, text) = match node.level {
SubscriptionLevel::None => (RemoteSubscriptionState::None, None),
SubscriptionLevel::Unknown => (RemoteSubscriptionState::Unknown, None),
- other => (
- RemoteSubscriptionState::Active,
- Some(render_subscription_level(other)),
- ),
+ other => (RemoteSubscriptionState::Active, Some(other.to_string())),
};
render_subscription_state(&sub_state)
.with_optional_child(text)
.into()
} else {
- render_subscription_level(node.level).into()
+ node.level.to_string().into()
}
}
SubscriptionTreeEntry::Remote(remote) => {
--
2.47.3
^ permalink raw reply related [flat|nested] 16+ messages in thread
* [PATCH datacenter-manager v3 02/12] pdm-client: add wait_for_local_task helper
2026-05-15 7:43 [PATCH datacenter-manager v3 00/12] subscription key pool registry Thomas Lamprecht
2026-05-15 7:43 ` [PATCH datacenter-manager v3 01/12] api types: subscription level: render full names Thomas Lamprecht
@ 2026-05-15 7:43 ` Thomas Lamprecht
2026-05-15 15:21 ` Wolfgang Bumiller
2026-05-15 7:43 ` [PATCH datacenter-manager v3 03/12] subscription: pool: add data model and config layer Thomas Lamprecht
` (9 subsequent siblings)
11 siblings, 1 reply; 16+ messages in thread
From: Thomas Lamprecht @ 2026-05-15 7:43 UTC (permalink / raw)
To: pdm-devel
PDM-local worker tasks (those spawned via WorkerTask::spawn in the
manager) return a UPID to the API caller, but the local task-status
endpoint has no server-side wait=1 query like the per-remote PVE/PBS
surface. A CLI that wants to surface the actual outcome rather than
just print the UPID has to hand-roll a polling loop.
Add a helper that polls at one-second intervals to consolidate that.
It is native-only (target-gated) since the loop uses tokio::time::sleep,
so the WASM UI does not pull tokio into its dep tree.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
---
New in v3.
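
Illustrative caller-side usage, not included in this patch: since the
helper has no built-in time bound, a CLI would typically wrap it as the
doc comment suggests. The 5-minute budget is an arbitrary example and
`upid` is assumed to be whatever a task-spawning endpoint returned.

    // inside an async fn returning anyhow::Result<()>
    let status = tokio::time::timeout(
        std::time::Duration::from_secs(5 * 60),
        client.wait_for_local_task(&upid),
    )
    .await??; // outer error: timeout elapsed, inner error: transport/API failure
    println!("final task status: {status}");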
lib/pdm-client/Cargo.toml | 3 +++
lib/pdm-client/src/lib.rs | 30 ++++++++++++++++++++++++++++++
2 files changed, 33 insertions(+)
diff --git a/lib/pdm-client/Cargo.toml b/lib/pdm-client/Cargo.toml
index bb41b87b..a3f11059 100644
--- a/lib/pdm-client/Cargo.toml
+++ b/lib/pdm-client/Cargo.toml
@@ -22,6 +22,9 @@ proxmox-tfa = { workspace = true, features = [ "types" ] }
pve-api-types = { workspace = true, features = [ "client" ] }
pbs-api-types.workspace = true
+[target.'cfg(not(target_arch = "wasm32"))'.dependencies]
+tokio = { workspace = true, features = [ "time" ] }
+
[features]
default = []
hyper-client = [ "proxmox-client/hyper-client" ]
diff --git a/lib/pdm-client/src/lib.rs b/lib/pdm-client/src/lib.rs
index 76b33ef8..cb5bb043 100644
--- a/lib/pdm-client/src/lib.rs
+++ b/lib/pdm-client/src/lib.rs
@@ -890,6 +890,36 @@ impl<T: HttpApiClient> PdmClient<T> {
Ok(self.0.get(&path).await?.expect_json()?.data)
}
+ /// Block until a PDM-local worker task finishes; returns the final status payload.
+ ///
+ /// The local task-status endpoint (`/nodes/localhost/tasks/{upid}/status`) has no
+ /// server-side `wait=1` today, so the helper polls at one-second intervals; sub-second
+ /// tasks (e.g. an Apply Pending with an empty queue) settle on the first request. Once a
+ /// server-side wait surface lands this method becomes a single GET with no behaviour change
+ /// for callers.
+ ///
+ /// No built-in time bound; wrap in `tokio::time::timeout` if needed. Dropping the future
+ /// stops the client-side polling only - the server-side worker keeps running.
+ ///
+ /// Native-only: the polling loop relies on `tokio::time::sleep`, which is not available on
+ /// the wasm32 target the UI builds for.
+ #[cfg(not(target_arch = "wasm32"))]
+ pub async fn wait_for_local_task(&self, upid: &str) -> Result<Value, Error> {
+ let path = format!("/api2/extjs/nodes/localhost/tasks/{upid}/status");
+ loop {
+ let body: Value = self.0.get(&path).await?.expect_json()?.data;
+ let running = body
+ .get("status")
+ .and_then(Value::as_str)
+ .map(|s| s == "running")
+ .unwrap_or(false);
+ if !running {
+ return Ok(body);
+ }
+ tokio::time::sleep(std::time::Duration::from_secs(1)).await;
+ }
+ }
+
pub async fn read_acl(
&self,
path: Option<&str>,
--
2.47.3
^ permalink raw reply related [flat|nested] 16+ messages in thread
* [PATCH datacenter-manager v3 03/12] subscription: pool: add data model and config layer
2026-05-15 7:43 [PATCH datacenter-manager v3 00/12] subscription key pool registry Thomas Lamprecht
2026-05-15 7:43 ` [PATCH datacenter-manager v3 01/12] api types: subscription level: render full names Thomas Lamprecht
2026-05-15 7:43 ` [PATCH datacenter-manager v3 02/12] pdm-client: add wait_for_local_task helper Thomas Lamprecht
@ 2026-05-15 7:43 ` Thomas Lamprecht
2026-05-15 15:21 ` Wolfgang Bumiller
2026-05-15 7:43 ` [PATCH datacenter-manager v3 04/12] subscription: api: add key pool and node status endpoints Thomas Lamprecht
` (8 subsequent siblings)
11 siblings, 1 reply; 16+ messages in thread
From: Thomas Lamprecht @ 2026-05-15 7:43 UTC (permalink / raw)
To: pdm-devel
Introduce the on-disk data model and locked config helpers that the
following commits build on, mirroring the pdm-config::remotes
pattern. The shadow file holds the signed SubscriptionInfo blob a
future shop-bundle import will provide, kept apart from the main
config so the bare keys list stays human-readable.
The source field is an enum so other origins (shop-bundle import,
remote adoption) can be added later without a wire-format break.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
---
Changes v2 -> v3:
* Move the on-disk files into a dedicated subscriptions/ subdir
(keys.cfg + keys.shadow).
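
A rough sketch for reviewers (not in the patch itself) of how the
following commits are expected to drive this module; it assumes the
daemon already ran init() with the default backend during setup:

    // once, at daemon startup
    pdm_config::subscriptions::init(Box::new(
        pdm_config::subscriptions::DefaultSubscriptionKeyConfig,
    ));

    // typical mutation path: lock, read keys.cfg, modify, write back
    let _lock = pdm_config::subscriptions::lock_config()?;
    let (mut keys, _digest) = pdm_config::subscriptions::config()?;
    // ... insert or update SubscriptionKeyEntry values in `keys` ...
    let _new_digest = pdm_config::subscriptions::save_config(&keys)?;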
lib/pdm-api-types/Cargo.toml | 1 +
lib/pdm-api-types/src/subscription.rs | 399 ++++++++++++++++++++++++-
lib/pdm-api-types/tests/test_import.rs | 338 +++++++++++++++++++++
lib/pdm-config/src/lib.rs | 1 +
lib/pdm-config/src/setup.rs | 7 +
lib/pdm-config/src/subscriptions.rs | 116 +++++++
6 files changed, 861 insertions(+), 1 deletion(-)
create mode 100644 lib/pdm-api-types/tests/test_import.rs
create mode 100644 lib/pdm-config/src/subscriptions.rs
diff --git a/lib/pdm-api-types/Cargo.toml b/lib/pdm-api-types/Cargo.toml
index cb8b5054..f9e3d07e 100644
--- a/lib/pdm-api-types/Cargo.toml
+++ b/lib/pdm-api-types/Cargo.toml
@@ -15,6 +15,7 @@ serde_plain.workspace = true
serde_json.workspace = true
proxmox-acme-api.workspace = true
+proxmox-base64.workspace = true
proxmox-access-control = { workspace = true, features = ["acl"] }
proxmox-auth-api = { workspace = true, features = ["api-types"] }
proxmox-apt-api-types.workspace = true
diff --git a/lib/pdm-api-types/src/subscription.rs b/lib/pdm-api-types/src/subscription.rs
index f0eb525b..811bce4c 100644
--- a/lib/pdm-api-types/src/subscription.rs
+++ b/lib/pdm-api-types/src/subscription.rs
@@ -1,11 +1,16 @@
+use std::sync::OnceLock;
use std::{collections::HashMap, str::FromStr};
use anyhow::Error;
use serde::{Deserialize, Serialize};
-use proxmox_schema::api;
+use proxmox_schema::{api, const_regex, ApiStringFormat, ApiType, Schema, StringSchema};
+use proxmox_section_config::typed::ApiSectionDataEntry;
+use proxmox_section_config::{SectionConfig, SectionConfigPlugin};
use proxmox_subscription::{SubscriptionInfo, SubscriptionStatus};
+use crate::remotes::RemoteType;
+
#[api]
// order is important here, since we use that for determining if a node has a valid subscription
#[derive(Default, Copy, Clone, Debug, PartialEq, Eq, PartialOrd, Ord)]
@@ -174,3 +179,395 @@ pub struct PdmSubscriptionInfo {
/// PDM subscription statistics
pub statistics: SubscriptionStatistics,
}
+
+const_regex! {
+ /// Subscription key pattern, restricted to the products PDM can drive.
+ ///
+ /// All keys follow `<prefix>-<10 hex>`. PVE encodes the maximum CPU socket count between
+ /// the product letters and the level letter, for example `pve4b-1234567890`. PBS has no
+ /// socket count, so its keys look like `pbsc-1234567890`. Level letters are c/b/s/p
+ /// (Community/Basic/Standard/Premium).
+ ///
+ /// PMG and POM keys are not accepted yet: PDM has no remote-side handler for them. Widen
+ /// this regex and `ProductType::from_key` in lockstep when PDM grows support for them.
+ pub PRODUCT_KEY_REGEX = r"^(?:pve[0-9]+|pbs)[cbsp]-[0-9a-f]{10}$";
+}
+
+pub const PRODUCT_KEY_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&PRODUCT_KEY_REGEX);
+
+pub const SUBSCRIPTION_KEY_SCHEMA: Schema = StringSchema::new("Subscription key.")
+ .format(&PRODUCT_KEY_FORMAT)
+ .min_length(15)
+ .max_length(18)
+ .schema();
+
+#[api]
+#[derive(Clone, Copy, Debug, Default, Eq, PartialEq, Hash, Deserialize, Serialize)]
+#[serde(rename_all = "lowercase")]
+/// Proxmox product line a subscription key belongs to.
+pub enum ProductType {
+ /// Proxmox Virtual Environment (PVE).
+ #[default]
+ Pve,
+ /// Proxmox Backup Server (PBS).
+ Pbs,
+ /// Proxmox Mail Gateway (PMG).
+ Pmg,
+ /// Proxmox Offline Mirror (POM).
+ Pom,
+}
+
+impl ProductType {
+ /// Static string used as the section-config type marker on disk.
+ pub const fn as_section_type(self) -> &'static str {
+ match self {
+ ProductType::Pve => "pve",
+ ProductType::Pbs => "pbs",
+ ProductType::Pmg => "pmg",
+ ProductType::Pom => "pom",
+ }
+ }
+
+ /// Classify a key by its prefix.
+ ///
+ /// Returns None when the prefix does not match any product PDM currently knows about;
+ /// callers should log that case so a new product line gets noticed instead of silently
+ /// sorted into a default bucket.
+ pub fn from_key(key: &str) -> Option<Self> {
+ let (prefix, _) = key.split_once('-')?;
+ if prefix.starts_with("pve") {
+ Some(ProductType::Pve)
+ } else if prefix.starts_with("pbs") {
+ Some(ProductType::Pbs)
+ } else if prefix.starts_with("pmg") {
+ Some(ProductType::Pmg)
+ } else if prefix.starts_with("pom") {
+ Some(ProductType::Pom)
+ } else {
+ None
+ }
+ }
+
+ /// Whether PDM currently knows how to drive a remote of this product type.
+ ///
+ /// PDM only manages PVE and PBS remotes today, and the schema regex rejects everything else
+ /// at insert time. This method covers in-memory paths for forward-compat, for example
+ /// existing pool entries loaded after the regex is widened in a future release.
+ pub fn matches_remote_type(self, remote_type: RemoteType) -> bool {
+ matches!(
+ (self, remote_type),
+ (ProductType::Pve, RemoteType::Pve) | (ProductType::Pbs, RemoteType::Pbs)
+ )
+ }
+}
+
+impl std::fmt::Display for ProductType {
+ fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+ f.write_str(self.as_section_type())
+ }
+}
+
+/// Extract the socket count a PVE key covers (for example, 4 from "pve4b-...").
+///
+/// Returns None for non-PVE keys or unparseable prefixes.
+pub fn socket_count_from_key(key: &str) -> Option<u32> {
+ let (prefix, _) = key.split_once('-')?;
+ if !prefix.starts_with("pve") {
+ return None;
+ }
+ let after_pve = &prefix[3..];
+ let digits: String = after_pve
+ .chars()
+ .take_while(|c| c.is_ascii_digit())
+ .collect();
+ digits.parse().ok()
+}
+
+/// Pick the candidate PVE key with the smallest socket count that still covers `node_sockets`.
+///
+/// `candidates` yields `(id, key_string)` pairs. Keys without a parseable PVE socket count are
+/// skipped, and keys covering fewer sockets than the node needs are filtered out. Returns the
+/// id of the best fit, or None when no candidate covers the node.
+pub fn pick_best_pve_socket_key<'a, I, K>(node_sockets: u32, candidates: I) -> Option<K>
+where
+ I: IntoIterator<Item = (K, &'a str)>,
+{
+ candidates
+ .into_iter()
+ .filter_map(|(id, key)| socket_count_from_key(key).map(|s| (id, s)))
+ .filter(|(_, s)| *s >= node_sockets)
+ .min_by_key(|(_, s)| *s)
+ .map(|(id, _)| id)
+}
+
+#[api]
+#[derive(Clone, Copy, Debug, Default, Eq, PartialEq, Deserialize, Serialize)]
+#[serde(rename_all = "kebab-case")]
+/// Origin of a subscription key entry.
+pub enum SubscriptionKeySource {
+ /// Hand-entered into the pool by an admin. Used for any key added through the manual-entry
+ /// UI or CLI, and as the `serde(default)` for entries that predate this field.
+ #[default]
+ Manual,
+}
+
+#[api(
+ properties: {
+ "key": { schema: SUBSCRIPTION_KEY_SCHEMA },
+ "level": { optional: true },
+ "status": { optional: true },
+ "source": { optional: true },
+ "pending-clear": { optional: true },
+ },
+)]
+#[derive(Clone, Debug, Default, Deserialize, Serialize, PartialEq)]
+#[serde(rename_all = "kebab-case")]
+/// An entry in the subscription key pool.
+pub struct SubscriptionKeyEntry {
+ /// The subscription key (for example, pve4b-1234567890).
+ pub key: String,
+
+ /// Product type derived from the key prefix.
+ #[serde(rename = "product-type")]
+ pub product_type: ProductType,
+
+ /// Subscription level, derived from the key suffix.
+ #[serde(default)]
+ pub level: SubscriptionLevel,
+
+ /// Where the key entry came from. Defaults to manual entry.
+ #[serde(default)]
+ pub source: SubscriptionKeySource,
+
+ /// Remote this key is assigned to (if any).
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub remote: Option<String>,
+
+ /// Node within the remote this key is assigned to (if any).
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub node: Option<String>,
+
+ /// True when the operator queued a clear for this entry's bound node, that is, a request
+ /// to free the key from `remote`/`node` so it can be reassigned to a different node.
+ ///
+ /// Apply Pending issues a DELETE on the remote and then clears `remote`/`node` on success.
+ /// Clear Pending only resets this flag and leaves the binding untouched so the operator can
+ /// retry. A bare flag is enough since the (remote, node) binding lives next to it.
+ ///
+ /// Omitted from the serialised representation when false so the on-disk section and the
+ /// API response do not carry `pending-clear false` lines for every entry.
+ #[serde(default, skip_serializing_if = "std::ops::Not::not")]
+ pub pending_clear: bool,
+
+ /// Server ID this key is bound to (from signed info, if available).
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub serverid: Option<String>,
+
+ /// Subscription status from last check.
+ #[serde(default)]
+ pub status: SubscriptionStatus,
+
+ /// Next due date.
+ ///
+ /// Accepts the upstream `nextduedate` spelling on deserialisation so a future shop-bundle
+ /// import path can hand a raw `SubscriptionInfo` blob through without a field-name
+ /// translation step; canonical (and on-disk) form is `next-due-date` per the struct's
+ /// kebab-case rename.
+ #[serde(alias = "nextduedate", skip_serializing_if = "Option::is_none")]
+ pub next_due_date: Option<String>,
+
+ /// Product name.
+ ///
+ /// Accepts the upstream `productname` spelling on deserialisation; canonical form is
+ /// `product-name` to stay self-consistent with the sibling `product-type` field.
+ #[serde(alias = "productname", skip_serializing_if = "Option::is_none")]
+ pub product_name: Option<String>,
+
+ /// Epoch of last import or refresh of this key's data.
+ ///
+ /// Accepts the upstream `checktime` spelling on deserialisation; canonical form is
+ /// `check-time`.
+ #[serde(alias = "checktime", skip_serializing_if = "Option::is_none")]
+ pub check_time: Option<i64>,
+}
+
+impl ApiSectionDataEntry for SubscriptionKeyEntry {
+ const INTERNALLY_TAGGED: Option<&'static str> = Some("product-type");
+ const SECION_CONFIG_USES_TYPE_KEY: bool = true;
+
+ fn section_config() -> &'static SectionConfig {
+ static CONFIG: OnceLock<SectionConfig> = OnceLock::new();
+
+ CONFIG.get_or_init(|| {
+ let mut this =
+ SectionConfig::new(&SUBSCRIPTION_KEY_SCHEMA).with_type_key("product-type");
+ for ty in [
+ ProductType::Pve,
+ ProductType::Pbs,
+ ProductType::Pmg,
+ ProductType::Pom,
+ ] {
+ this.register_plugin(SectionConfigPlugin::new(
+ ty.as_section_type().to_string(),
+ Some("key".to_string()),
+ SubscriptionKeyEntry::API_SCHEMA.unwrap_object_schema(),
+ ));
+ }
+ this
+ })
+ }
+
+ fn section_type(&self) -> &'static str {
+ self.product_type.as_section_type()
+ }
+}
+
+#[api(
+ properties: {
+ "key": { schema: SUBSCRIPTION_KEY_SCHEMA },
+ },
+)]
+#[derive(Clone, Debug, Default, Deserialize, Serialize)]
+#[serde(rename_all = "kebab-case")]
+/// Shadow entry storing the signed subscription info blob for a key.
+///
+/// Currently only populated by the future shop-bundle import flow; manually-added keys leave
+/// this table empty. The data layer is in place so that adding the import path later does not
+/// require reshaping the on-disk config.
+pub struct SubscriptionKeyShadow {
+ /// The subscription key.
+ pub key: String,
+
+ /// Product type (section type marker).
+ #[serde(rename = "product-type")]
+ pub product_type: ProductType,
+
+ /// Base64-encoded signed SubscriptionInfo JSON.
+ #[serde(default)]
+ pub info: String,
+}
+
+impl ApiSectionDataEntry for SubscriptionKeyShadow {
+ const INTERNALLY_TAGGED: Option<&'static str> = Some("product-type");
+ const SECION_CONFIG_USES_TYPE_KEY: bool = true;
+
+ fn section_config() -> &'static SectionConfig {
+ static CONFIG: OnceLock<SectionConfig> = OnceLock::new();
+
+ CONFIG.get_or_init(|| {
+ let mut this =
+ SectionConfig::new(&SUBSCRIPTION_KEY_SCHEMA).with_type_key("product-type");
+ for ty in [
+ ProductType::Pve,
+ ProductType::Pbs,
+ ProductType::Pmg,
+ ProductType::Pom,
+ ] {
+ this.register_plugin(SectionConfigPlugin::new(
+ ty.as_section_type().to_string(),
+ Some("key".to_string()),
+ SubscriptionKeyShadow::API_SCHEMA.unwrap_object_schema(),
+ ));
+ }
+ this
+ })
+ }
+
+ fn section_type(&self) -> &'static str {
+ self.product_type.as_section_type()
+ }
+}
+
+/// Decode a base64-encoded `SubscriptionInfo` JSON blob from the shadow file.
+///
+/// Forward-compat helper for the future shop-bundle import path. Returns the parsed
+/// `SubscriptionInfo`; the caller is responsible for verifying the signature against the shop's
+/// signing key.
+pub fn parse_signed_info_blob(b64: &str) -> Result<SubscriptionInfo, Error> {
+ let bytes = proxmox_base64::decode(b64)?;
+ let info = serde_json::from_slice(&bytes)?;
+ Ok(info)
+}
+
+/// Cross-check the `serverid` of a shadowed entry against what the remote reports.
+///
+/// Forward-compat helper for the future bundle-import and push flow: when the shadow has a
+/// signed serverid binding, the operator should be warned if the remote it is being pushed to
+/// has a different hardware id. Returns Ok(None) when there is nothing to compare.
+pub fn verify_serverid(
+ entry: &SubscriptionKeyEntry,
+ remote_info: &SubscriptionInfo,
+) -> Result<Option<ServeridMismatch>, Error> {
+ let Some(expected) = entry.serverid.as_deref() else {
+ return Ok(None);
+ };
+ let Some(actual) = remote_info.serverid.as_deref() else {
+ return Ok(None);
+ };
+ if expected == actual {
+ Ok(None)
+ } else {
+ Ok(Some(ServeridMismatch {
+ key: entry.key.clone(),
+ expected: expected.to_string(),
+ actual: actual.to_string(),
+ }))
+ }
+}
+
+/// Result of [`verify_serverid`] when the bound and observed server-ids disagree.
+#[derive(Clone, Debug, PartialEq, Eq)]
+pub struct ServeridMismatch {
+ pub key: String,
+ pub expected: String,
+ pub actual: String,
+}
+
+#[api]
+#[derive(Clone, Debug, Default, Serialize, Deserialize, PartialEq)]
+#[serde(rename_all = "kebab-case")]
+/// Subscription status of a single remote node, combining remote query data with key pool
+/// assignment information.
+pub struct RemoteNodeStatus {
+ /// Remote name.
+ pub remote: String,
+ /// Remote type (pve or pbs).
+ #[serde(rename = "type")]
+ pub ty: RemoteType,
+ /// Node name.
+ pub node: String,
+ /// Number of CPU sockets (PVE only).
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub sockets: Option<i64>,
+ /// Current subscription status.
+ #[serde(default)]
+ pub status: SubscriptionStatus,
+ /// Subscription level.
+ #[serde(default)]
+ pub level: SubscriptionLevel,
+ /// Currently assigned key from the pool (if any).
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub assigned_key: Option<String>,
+ /// Current key on the node (from remote query).
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub current_key: Option<String>,
+}
+
+#[api]
+#[derive(Clone, Debug, Default, Serialize, Deserialize, PartialEq, Eq)]
+#[serde(rename_all = "kebab-case")]
+/// A proposed key-to-node assignment from the auto-assign algorithm.
+pub struct ProposedAssignment {
+ /// The subscription key to assign.
+ pub key: String,
+ /// Target remote.
+ pub remote: String,
+ /// Target node.
+ pub node: String,
+ /// Socket count of the key (PVE only).
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub key_sockets: Option<u32>,
+ /// Socket count of the node (PVE only).
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub node_sockets: Option<i64>,
+}
diff --git a/lib/pdm-api-types/tests/test_import.rs b/lib/pdm-api-types/tests/test_import.rs
new file mode 100644
index 00000000..33601620
--- /dev/null
+++ b/lib/pdm-api-types/tests/test_import.rs
@@ -0,0 +1,338 @@
+//! SectionConfig round-trip and helper tests for the subscription key pool.
+//!
+//! Run with: cargo test -p pdm-api-types --test test_import
+
+use pdm_api_types::subscription::*;
+use proxmox_section_config::typed::{ApiSectionDataEntry, SectionConfigData};
+use proxmox_subscription::SubscriptionStatus;
+
+#[test]
+fn entry_roundtrip() {
+ let mut config = SectionConfigData::<SubscriptionKeyEntry>::default();
+
+ let entry = SubscriptionKeyEntry {
+ key: "pve4b-aa11bb2233".to_string(),
+ product_type: ProductType::Pve,
+ level: SubscriptionLevel::Basic,
+ source: SubscriptionKeySource::Manual,
+ remote: Some("my-cluster".to_string()),
+ node: Some("node1".to_string()),
+ pending_clear: false,
+ serverid: Some("AABBCCDD".to_string()),
+ status: SubscriptionStatus::Active,
+ next_due_date: Some("2027-06-01".to_string()),
+ product_name: Some("Proxmox VE Basic".to_string()),
+ check_time: Some(1700000000),
+ };
+
+ config.insert("pve4b-aa11bb2233".to_string(), entry);
+
+ let raw = SubscriptionKeyEntry::write_section_config("test", &config).expect("write failed");
+ let parsed = SubscriptionKeyEntry::parse_section_config("test", &raw).expect("parse failed");
+
+ let back = parsed.get("pve4b-aa11bb2233").expect("key not found");
+ assert_eq!(back.key, "pve4b-aa11bb2233");
+ assert_eq!(back.product_type, ProductType::Pve);
+ assert_eq!(back.source, SubscriptionKeySource::Manual);
+ assert_eq!(back.remote.as_deref(), Some("my-cluster"));
+ assert_eq!(back.node.as_deref(), Some("node1"));
+ assert_eq!(back.status, SubscriptionStatus::Active);
+ assert_eq!(back.next_due_date.as_deref(), Some("2027-06-01"));
+}
+
+#[test]
+fn shadow_roundtrip() {
+ let mut shadow = SectionConfigData::<SubscriptionKeyShadow>::default();
+
+ shadow.insert(
+ "pve4b-aa11bb2233".to_string(),
+ SubscriptionKeyShadow {
+ key: "pve4b-aa11bb2233".to_string(),
+ product_type: ProductType::Pve,
+ info: "dGVzdA==".to_string(),
+ },
+ );
+
+ let raw = SubscriptionKeyShadow::write_section_config("test", &shadow).expect("write failed");
+ let parsed = SubscriptionKeyShadow::parse_section_config("test", &raw).expect("parse failed");
+
+ let back = parsed.get("pve4b-aa11bb2233").expect("key not found");
+ assert_eq!(back.info, "dGVzdA==");
+}
+
+#[test]
+fn deserialize_api_response_json() {
+ // The legacy `nextduedate` / `productname` / `checktime` spellings are the shop's wire
+ // format (mirrored from `proxmox_subscription::SubscriptionInfo`); a future shop-bundle
+ // import path will feed exactly these into the pool. Keep the alias coverage explicit so a
+ // serde rename without an accompanying alias gets caught at test time.
+ let json = serde_json::json!({
+ "key": "pve4b-aa11bb2233",
+ "nextduedate": "2027-06-01",
+ "product-type": "pve",
+ "productname": "Proxmox VE Basic",
+ "checktime": 1700000000,
+ "serverid": "AABBCCDD",
+ "status": "active"
+ });
+
+ let entry: SubscriptionKeyEntry = serde_json::from_value(json).unwrap();
+ assert_eq!(entry.key, "pve4b-aa11bb2233");
+ assert_eq!(entry.product_type, ProductType::Pve);
+ assert_eq!(entry.status, SubscriptionStatus::Active);
+ assert_eq!(entry.source, SubscriptionKeySource::Manual);
+ assert_eq!(entry.next_due_date.as_deref(), Some("2027-06-01"));
+ assert_eq!(entry.product_name.as_deref(), Some("Proxmox VE Basic"));
+ assert_eq!(entry.check_time, Some(1700000000));
+}
+
+#[test]
+fn deserialize_canonical_kebab_case_json() {
+ // The canonical wire form for these fields uses the struct's `kebab-case` rename; verify
+ // the renamed spelling round-trips through serde even though the field shapes share the
+ // alias with the legacy form above.
+ let json = serde_json::json!({
+ "key": "pve4b-aa11bb2233",
+ "next-due-date": "2027-06-01",
+ "product-type": "pve",
+ "product-name": "Proxmox VE Basic",
+ "check-time": 1700000000,
+ "status": "active"
+ });
+
+ let entry: SubscriptionKeyEntry = serde_json::from_value(json).unwrap();
+ assert_eq!(entry.next_due_date.as_deref(), Some("2027-06-01"));
+ assert_eq!(entry.product_name.as_deref(), Some("Proxmox VE Basic"));
+ assert_eq!(entry.check_time, Some(1700000000));
+}
+
+#[test]
+fn deserialize_without_optional_fields() {
+ let json = serde_json::json!({
+ "key": "pbsb-ee77ff8899",
+ "product-type": "pbs",
+ });
+
+ let entry: SubscriptionKeyEntry = serde_json::from_value(json).unwrap();
+ assert_eq!(entry.key, "pbsb-ee77ff8899");
+ assert_eq!(entry.product_type, ProductType::Pbs);
+ assert!(entry.remote.is_none());
+ assert!(entry.next_due_date.is_none());
+}
+
+#[test]
+fn product_type_classification() {
+ let cases = [
+ ("pve4b-1234567890", Some(ProductType::Pve), "pve"),
+ ("pbss-abcdef0123", Some(ProductType::Pbs), "pbs"),
+ ("pmgb-1234567890", Some(ProductType::Pmg), "pmg"),
+ ("pomb-1234567890", Some(ProductType::Pom), "pom"),
+ ("xxx-1234567890", None, ""),
+ ("no-dash", None, ""),
+ ];
+ for (key, expected, marker) in cases {
+ assert_eq!(ProductType::from_key(key), expected, "from_key({key})");
+ if let Some(pt) = expected {
+ assert_eq!(pt.as_section_type(), marker, "section_type for {key}");
+ }
+ }
+}
+
+#[test]
+fn socket_count_extraction() {
+ assert_eq!(socket_count_from_key("pve1c-1234567890"), Some(1));
+ assert_eq!(socket_count_from_key("pve2b-1234567890"), Some(2));
+ assert_eq!(socket_count_from_key("pve4s-1234567890"), Some(4));
+ assert_eq!(socket_count_from_key("pve8p-1234567890"), Some(8));
+ assert_eq!(socket_count_from_key("pbss-1234567890"), None);
+ assert_eq!(socket_count_from_key("pvexb-1234567890"), None);
+}
+
+#[test]
+fn remote_type_matching() {
+ use pdm_api_types::remotes::RemoteType;
+
+ assert!(ProductType::Pve.matches_remote_type(RemoteType::Pve));
+ assert!(!ProductType::Pve.matches_remote_type(RemoteType::Pbs));
+ assert!(ProductType::Pbs.matches_remote_type(RemoteType::Pbs));
+ assert!(!ProductType::Pbs.matches_remote_type(RemoteType::Pve));
+ // PMG and POM are reserved product types but PDM cannot manage those remotes yet.
+ assert!(!ProductType::Pmg.matches_remote_type(RemoteType::Pve));
+ assert!(!ProductType::Pmg.matches_remote_type(RemoteType::Pbs));
+ assert!(!ProductType::Pom.matches_remote_type(RemoteType::Pbs));
+}
+
+#[test]
+fn subscription_level_from_key_suffix() {
+ assert_eq!(
+ SubscriptionLevel::from_key(Some("pve4c-123")),
+ SubscriptionLevel::Community
+ );
+ assert_eq!(
+ SubscriptionLevel::from_key(Some("pve4b-123")),
+ SubscriptionLevel::Basic
+ );
+ assert_eq!(
+ SubscriptionLevel::from_key(Some("pve4s-123")),
+ SubscriptionLevel::Standard
+ );
+ assert_eq!(
+ SubscriptionLevel::from_key(Some("pve2p-123")),
+ SubscriptionLevel::Premium
+ );
+ assert_eq!(
+ SubscriptionLevel::from_key(Some("pbsb-123")),
+ SubscriptionLevel::Basic
+ );
+ assert_eq!(SubscriptionLevel::from_key(None), SubscriptionLevel::None);
+ assert_eq!(
+ SubscriptionLevel::from_key(Some("")),
+ SubscriptionLevel::None
+ );
+}
+
+#[test]
+fn subscription_level_display_fromstr_roundtrip() {
+ for level in [
+ SubscriptionLevel::None,
+ SubscriptionLevel::Community,
+ SubscriptionLevel::Basic,
+ SubscriptionLevel::Standard,
+ SubscriptionLevel::Premium,
+ SubscriptionLevel::Unknown,
+ ] {
+ let s = format!("{level}");
+ let parsed: SubscriptionLevel = s.parse().unwrap();
+ assert_eq!(parsed, level, "roundtrip failed for {s}");
+ }
+
+ // Backward compatibility: legacy single-letter wire format still parses.
+ for (letter, level) in [
+ ("c", SubscriptionLevel::Community),
+ ("b", SubscriptionLevel::Basic),
+ ("s", SubscriptionLevel::Standard),
+ ("p", SubscriptionLevel::Premium),
+ ] {
+ assert_eq!(letter.parse::<SubscriptionLevel>().unwrap(), level);
+ }
+}
+
+#[test]
+fn multiple_keys_different_types() {
+ let mut config = SectionConfigData::<SubscriptionKeyEntry>::default();
+
+ config.insert(
+ "pve4b-aaaa111111".to_string(),
+ SubscriptionKeyEntry {
+ key: "pve4b-aaaa111111".to_string(),
+ product_type: ProductType::Pve,
+ status: SubscriptionStatus::Active,
+ ..Default::default()
+ },
+ );
+ config.insert(
+ "pbss-bbbb222222".to_string(),
+ SubscriptionKeyEntry {
+ key: "pbss-bbbb222222".to_string(),
+ product_type: ProductType::Pbs,
+ status: SubscriptionStatus::Active,
+ ..Default::default()
+ },
+ );
+
+ let raw = SubscriptionKeyEntry::write_section_config("test", &config).unwrap();
+ let parsed = SubscriptionKeyEntry::parse_section_config("test", &raw).unwrap();
+
+ assert_eq!(
+ parsed.get("pve4b-aaaa111111").unwrap().product_type,
+ ProductType::Pve
+ );
+ assert_eq!(
+ parsed.get("pbss-bbbb222222").unwrap().product_type,
+ ProductType::Pbs
+ );
+}
+
+#[test]
+fn pick_best_pve_socket_key_edge_cases() {
+ let pool = [
+ ("pve1c-aaa", "pve1c-aaa"),
+ ("pve2b-bbb", "pve2b-bbb"),
+ ("pve4s-ccc", "pve4s-ccc"),
+ ("pve8p-ddd", "pve8p-ddd"),
+ ];
+ let pick =
+ |sockets: u32| pick_best_pve_socket_key(sockets, pool.iter().map(|(id, k)| (*id, *k)));
+
+ // Exact match prefers the equally-sized key over a larger one.
+ assert_eq!(pick(2), Some("pve2b-bbb"));
+
+ // No exact match: fall through to the smallest key that still covers the node.
+ assert_eq!(pick(3), Some("pve4s-ccc"));
+ assert_eq!(pick(5), Some("pve8p-ddd"));
+
+ // Single-socket node still picks the single-socket key (does not overprovision).
+ assert_eq!(pick(1), Some("pve1c-aaa"));
+
+ // Node larger than every key has no fit.
+ assert_eq!(pick(16), None);
+
+ // Empty candidate list is None.
+ let empty: [(&str, &str); 0] = [];
+ assert_eq!(
+ pick_best_pve_socket_key(2, empty.iter().map(|(id, k)| (*id, *k))),
+ None,
+ );
+
+ // Non-PVE keys are skipped silently.
+ let mixed = [("a", "pbsc-aaaa111111"), ("b", "pve2b-bbbb222222")];
+ assert_eq!(
+ pick_best_pve_socket_key(1, mixed.iter().map(|(id, k)| (*id, *k))),
+ Some("b"),
+ );
+}
+
+#[test]
+fn schema_accepts_pve_pbs_only() {
+ use proxmox_schema::ApiType;
+ let schema = SubscriptionKeyEntry::API_SCHEMA.unwrap_object_schema();
+ let key_schema = schema
+ .lookup("key")
+ .expect("key property in object schema")
+ .1;
+ assert!(key_schema.parse_simple_value("garbage").is_err());
+ assert!(key_schema.parse_simple_value("xxx-yyyyyyyyyy").is_err());
+ assert!(key_schema.parse_simple_value("pve4b-1234567890").is_ok());
+ assert!(key_schema.parse_simple_value("pbss-abcdef0123").is_ok());
+ // PMG and POM are not driven by PDM today, so the schema rejects them; widen the regex
+ // when remote-side support lands.
+ assert!(key_schema.parse_simple_value("pmgb-deadbeef00").is_err());
+ assert!(key_schema.parse_simple_value("pomb-deadbeef00").is_err());
+}
+
+#[test]
+fn verify_serverid_helper() {
+ let entry = SubscriptionKeyEntry {
+ key: "pve4b-aa11bb2233".to_string(),
+ product_type: ProductType::Pve,
+ serverid: Some("AABBCCDD".to_string()),
+ ..Default::default()
+ };
+
+ let mut info = proxmox_subscription::SubscriptionInfo::default();
+ info.serverid = Some("AABBCCDD".to_string());
+ assert_eq!(verify_serverid(&entry, &info).unwrap(), None);
+
+ info.serverid = Some("DEADBEEF".to_string());
+ let mismatch = verify_serverid(&entry, &info).unwrap().unwrap();
+ assert_eq!(mismatch.expected, "AABBCCDD");
+ assert_eq!(mismatch.actual, "DEADBEEF");
+
+ // entry without serverid -> nothing to verify
+ let entry = SubscriptionKeyEntry {
+ key: "pve4b-aa11bb2233".to_string(),
+ product_type: ProductType::Pve,
+ ..Default::default()
+ };
+ assert_eq!(verify_serverid(&entry, &info).unwrap(), None);
+}
diff --git a/lib/pdm-config/src/lib.rs b/lib/pdm-config/src/lib.rs
index 5b9bcca3..46ad1a2b 100644
--- a/lib/pdm-config/src/lib.rs
+++ b/lib/pdm-config/src/lib.rs
@@ -8,6 +8,7 @@ pub mod domains;
pub mod node;
pub mod remotes;
pub mod setup;
+pub mod subscriptions;
pub mod views;
mod config_version_cache;
diff --git a/lib/pdm-config/src/setup.rs b/lib/pdm-config/src/setup.rs
index 5adb05f8..77941fc4 100644
--- a/lib/pdm-config/src/setup.rs
+++ b/lib/pdm-config/src/setup.rs
@@ -31,6 +31,13 @@ pub fn create_configdir() -> Result<(), Error> {
0o750,
)?;
+ mkdir_perms(
+ crate::subscriptions::CONFIG_PATH,
+ api_user.uid,
+ api_user.gid,
+ 0o750,
+ )?;
+
Ok(())
}
diff --git a/lib/pdm-config/src/subscriptions.rs b/lib/pdm-config/src/subscriptions.rs
new file mode 100644
index 00000000..9e6eeeca
--- /dev/null
+++ b/lib/pdm-config/src/subscriptions.rs
@@ -0,0 +1,116 @@
+//! Read/write subscription key pool configuration.
+//!
+//! Call [`init`] to inject a concrete `SubscriptionKeyConfig` instance before using the
+//! module-level functions.
+//!
+//! The shadow-config functions stash signed `SubscriptionInfo` blobs alongside the plain key
+//! entries, which is intended as future proofing for a more automated (shop) import without having
+//! to adapt the data layer.
+
+use std::sync::OnceLock;
+
+use anyhow::Error;
+
+use proxmox_config_digest::ConfigDigest;
+use proxmox_product_config::{open_api_lockfile, replace_config, replace_secret_config, ApiLockGuard};
+use proxmox_section_config::typed::{ApiSectionDataEntry, SectionConfigData};
+
+use pdm_api_types::subscription::{SubscriptionKeyEntry, SubscriptionKeyShadow};
+use pdm_buildcfg::configdir;
+
+pub const CONFIG_PATH: &str = configdir!("/subscriptions");
+pub const SUBSCRIPTIONS_CFG_FILENAME: &str = configdir!("/subscriptions/keys.cfg");
+const SUBSCRIPTIONS_SHADOW_FILENAME: &str = configdir!("/subscriptions/keys.shadow");
+pub const SUBSCRIPTIONS_CFG_LOCKFILE: &str = configdir!("/subscriptions/.keys.lock");
+
+static INSTANCE: OnceLock<Box<dyn SubscriptionKeyConfig + Send + Sync>> = OnceLock::new();
+
+fn instance() -> &'static (dyn SubscriptionKeyConfig + Send + Sync) {
+ INSTANCE
+ .get()
+ .expect("subscription key config not initialized")
+ .as_ref()
+}
+
+pub fn lock_config() -> Result<ApiLockGuard, Error> {
+ instance().lock_config()
+}
+
+pub fn config() -> Result<(SectionConfigData<SubscriptionKeyEntry>, ConfigDigest), Error> {
+ instance().config()
+}
+
+pub fn shadow_config() -> Result<SectionConfigData<SubscriptionKeyShadow>, Error> {
+ instance().shadow_config()
+}
+
+pub fn save_config(
+ config: &SectionConfigData<SubscriptionKeyEntry>,
+) -> Result<ConfigDigest, Error> {
+ instance().save_config(config)
+}
+
+pub fn save_shadow(shadow: &SectionConfigData<SubscriptionKeyShadow>) -> Result<(), Error> {
+ instance().save_shadow(shadow)
+}
+
+pub trait SubscriptionKeyConfig {
+ fn config(&self) -> Result<(SectionConfigData<SubscriptionKeyEntry>, ConfigDigest), Error>;
+ fn shadow_config(&self) -> Result<SectionConfigData<SubscriptionKeyShadow>, Error>;
+ fn lock_config(&self) -> Result<ApiLockGuard, Error>;
+ fn save_config(
+ &self,
+ config: &SectionConfigData<SubscriptionKeyEntry>,
+ ) -> Result<ConfigDigest, Error>;
+ fn save_shadow(&self, shadow: &SectionConfigData<SubscriptionKeyShadow>) -> Result<(), Error>;
+}
+
+pub struct DefaultSubscriptionKeyConfig;
+
+impl SubscriptionKeyConfig for DefaultSubscriptionKeyConfig {
+ fn lock_config(&self) -> Result<ApiLockGuard, Error> {
+ open_api_lockfile(SUBSCRIPTIONS_CFG_LOCKFILE, None, true)
+ }
+
+ fn config(&self) -> Result<(SectionConfigData<SubscriptionKeyEntry>, ConfigDigest), Error> {
+ let content = proxmox_sys::fs::file_read_optional_string(SUBSCRIPTIONS_CFG_FILENAME)?
+ .unwrap_or_default();
+
+ let digest = openssl::sha::sha256(content.as_bytes());
+ let data =
+ SubscriptionKeyEntry::parse_section_config(SUBSCRIPTIONS_CFG_FILENAME, &content)?;
+
+ Ok((data, digest.into()))
+ }
+
+ fn shadow_config(&self) -> Result<SectionConfigData<SubscriptionKeyShadow>, Error> {
+ let content = proxmox_sys::fs::file_read_optional_string(SUBSCRIPTIONS_SHADOW_FILENAME)?
+ .unwrap_or_default();
+ SubscriptionKeyShadow::parse_section_config(SUBSCRIPTIONS_SHADOW_FILENAME, &content)
+ }
+
+ fn save_config(
+ &self,
+ config: &SectionConfigData<SubscriptionKeyEntry>,
+ ) -> Result<ConfigDigest, Error> {
+ let raw = SubscriptionKeyEntry::write_section_config(SUBSCRIPTIONS_CFG_FILENAME, config)?;
+ let digest: ConfigDigest = openssl::sha::sha256(raw.as_bytes()).into();
+ replace_config(SUBSCRIPTIONS_CFG_FILENAME, raw.as_bytes())?;
+ Ok(digest)
+ }
+
+ fn save_shadow(&self, shadow: &SectionConfigData<SubscriptionKeyShadow>) -> Result<(), Error> {
+ let raw =
+ SubscriptionKeyShadow::write_section_config(SUBSCRIPTIONS_SHADOW_FILENAME, shadow)?;
+ // Signed `SubscriptionInfo` blobs are secrets - mode 0600, priv:priv, so the
+ // unprivileged API user cannot read them. The main keys.cfg keeps 0640 since the API
+ // process still needs to read the key strings.
+ replace_secret_config(SUBSCRIPTIONS_SHADOW_FILENAME, raw.as_bytes())
+ }
+}
+
+pub fn init(instance: Box<dyn SubscriptionKeyConfig + Send + Sync>) {
+ if INSTANCE.set(instance).is_err() {
+ panic!("subscription key config instance already set");
+ }
+}
--
2.47.3
^ permalink raw reply related [flat|nested] 16+ messages in thread
* [PATCH datacenter-manager v3 04/12] subscription: api: add key pool and node status endpoints
2026-05-15 7:43 [PATCH datacenter-manager v3 00/12] subscription key pool registry Thomas Lamprecht
` (2 preceding siblings ...)
2026-05-15 7:43 ` [PATCH datacenter-manager v3 03/12] subscription: pool: add data model and config layer Thomas Lamprecht
@ 2026-05-15 7:43 ` Thomas Lamprecht
2026-05-15 15:21 ` Wolfgang Bumiller
2026-05-15 7:43 ` [PATCH datacenter-manager v3 05/12] ui: registry: add view with key pool and node status Thomas Lamprecht
` (7 subsequent siblings)
11 siblings, 1 reply; 16+ messages in thread
From: Thomas Lamprecht @ 2026-05-15 7:43 UTC (permalink / raw)
To: pdm-devel
Add the REST surface under /subscriptions: the pool itself, the
combined remote-vs-pool node-status view, and the bulk paths
(auto-assign, apply-pending, clear-pending).
Endpoints touching a specific remote require the matching resource
privilege on that remote in addition to the system-scope MODIFY bit,
so an operator with global system access alone cannot push keys to
remotes they have no other authority on. Read paths filter remotes
the caller may not audit.
Mutating endpoints accept an optional ConfigDigest. Delete and
unassign refuse on any post-lock divergence, so a parallel admin's
Assign-and-push during a delete cannot orphan the live subscription
on the remote.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
---
Changes v2 -> v3:
* Internal restructuring to make room for the Clear Key, Adopt Key /
Adopt All, and Check Subscription endpoints added later in the series.
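
Not from this patch, but to make the digest guard above concrete, a
sketch of the optimistic-locking shape the mutating handlers follow
(assuming the detect_modification helper from proxmox-config-digest;
the real handlers may differ in detail):

    // `digest` is the optional ConfigDigest the client sent along
    let _lock = pdm_config::subscriptions::lock_config()?;
    let (mut keys, current_digest) = pdm_config::subscriptions::config()?;
    // refuse if the caller's snapshot no longer matches what is on disk
    current_digest.detect_modification(digest.as_ref())?;
    // ... apply the change to `keys`, then save_config(&keys) ...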
Cargo.toml | 4 +-
lib/pdm-api-types/src/subscription.rs | 39 +
server/src/api/mod.rs | 2 +
server/src/api/resources.rs | 24 +-
server/src/api/subscriptions/mod.rs | 1542 +++++++++++++++++++++++++
server/src/context.rs | 7 +
server/src/pbs_client.rs | 31 +
7 files changed, 1643 insertions(+), 6 deletions(-)
create mode 100644 server/src/api/subscriptions/mod.rs
diff --git a/Cargo.toml b/Cargo.toml
index 9806a4f0..3d58e380 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -61,7 +61,7 @@ proxmox-serde = "1"
proxmox-shared-memory = "1"
proxmox-simple-config = "1"
proxmox-sortable-macro = "1"
-proxmox-subscription = { version = "1", features = [ "api-types"], default-features = false }
+proxmox-subscription = { version = "1.0.2", features = [ "api-types"], default-features = false }
proxmox-sys = "1"
proxmox-systemd = "1"
proxmox-tfa = { version = "6", features = [ "api-types" ], default-features = false }
@@ -86,7 +86,7 @@ proxmox-acme-api = "1"
proxmox-node-status = "1"
# API types for PVE (and later PMG?)
-pve-api-types = "8.1.5"
+pve-api-types = "8.1.6"
# API types for PBS
pbs-api-types = { version = "1.0.9", features = [ "enum-fallback" ] }
diff --git a/lib/pdm-api-types/src/subscription.rs b/lib/pdm-api-types/src/subscription.rs
index 811bce4c..559f725d 100644
--- a/lib/pdm-api-types/src/subscription.rs
+++ b/lib/pdm-api-types/src/subscription.rs
@@ -4,6 +4,7 @@ use std::{collections::HashMap, str::FromStr};
use anyhow::Error;
use serde::{Deserialize, Serialize};
+use proxmox_config_digest::ConfigDigest;
use proxmox_schema::{api, const_regex, ApiStringFormat, ApiType, Schema, StringSchema};
use proxmox_section_config::typed::ApiSectionDataEntry;
use proxmox_section_config::{SectionConfig, SectionConfigPlugin};
@@ -551,6 +552,18 @@ pub struct RemoteNodeStatus {
/// Current key on the node (from remote query).
#[serde(skip_serializing_if = "Option::is_none")]
pub current_key: Option<String>,
+ /// True when the pool entry bound to this node has a pending clear queued.
+ #[serde(default, skip_serializing_if = "std::ops::Not::not")]
+ pub pending_clear: bool,
+}
+
+#[api]
+#[derive(Clone, Debug, Default, Serialize, Deserialize, PartialEq, Eq)]
+#[serde(rename_all = "kebab-case")]
+/// Result of the bulk clear-pending API endpoint.
+pub struct ClearPendingResult {
+ /// Number of pool entries whose pending push or reissue was cleared.
+ pub cleared: u32,
}
#[api]
@@ -571,3 +584,29 @@ pub struct ProposedAssignment {
#[serde(skip_serializing_if = "Option::is_none")]
pub node_sockets: Option<i64>,
}
+
+#[api(
+ properties: {
+ assignments: {
+ type: Array,
+ description: "Proposed assignments. Empty when nothing matches.",
+ items: { type: ProposedAssignment },
+ },
+ "keys-digest": { type: ConfigDigest },
+ },
+)]
+#[derive(Clone, Debug, Serialize, Deserialize, PartialEq, Eq)]
+#[serde(rename_all = "kebab-case")]
+/// The full plan returned by auto-assign and accepted by bulk-assign.
+///
+/// `keys_digest` and `node_status_digest` are snapshots taken when the plan was computed.
+/// `bulk_assign` rejects the plan with 409 if either has changed in the meantime, so the
+/// operator never silently commits a plan that no longer matches the live state.
+pub struct AutoAssignProposal {
+ /// Proposed assignments. Empty when nothing matches.
+ pub assignments: Vec<ProposedAssignment>,
+ /// Digest of the key pool config the proposal was computed against.
+ pub keys_digest: ConfigDigest,
+ /// SHA-256 over the relevant slice of node status (sorted JSON) at proposal time.
+ pub node_status_digest: String,
+}
diff --git a/server/src/api/mod.rs b/server/src/api/mod.rs
index 110191b8..9680edc7 100644
--- a/server/src/api/mod.rs
+++ b/server/src/api/mod.rs
@@ -18,6 +18,7 @@ pub mod remotes;
pub mod resources;
mod rrd_common;
pub mod sdn;
+pub mod subscriptions;
#[sortable]
const SUBDIRS: SubdirMap = &sorted!([
@@ -31,6 +32,7 @@ const SUBDIRS: SubdirMap = &sorted!([
("resources", &resources::ROUTER),
("nodes", &nodes::ROUTER),
("sdn", &sdn::ROUTER),
+ ("subscriptions", &subscriptions::ROUTER),
("version", &Router::new().get(&API_METHOD_VERSION)),
]);
diff --git a/server/src/api/resources.rs b/server/src/api/resources.rs
index 50315b11..d4ed5ab0 100644
--- a/server/src/api/resources.rs
+++ b/server/src/api/resources.rs
@@ -848,6 +848,14 @@ fn get_cached_subscription_info(remote: &str, max_age: u64) -> Option<CachedSubs
}
}
+/// Drop the cached subscription state for a remote, forcing the next read to refetch.
+pub fn invalidate_subscription_info_for_remote(remote_id: &str) {
+ let mut cache = SUBSCRIPTION_CACHE
+ .write()
+ .expect("subscription mutex poisoned");
+ cache.remove(remote_id);
+}
+
/// Update cached subscription data.
///
/// If the cache already contains more recent data we don't insert the passed resources.
@@ -923,11 +931,19 @@ async fn fetch_remote_subscription_info(
let nodes = client.list_nodes().await?;
let mut futures = Vec::with_capacity(nodes.len());
for node in nodes.iter() {
- let future = client.get_subscription(&node.node).map(|res| res.ok());
- futures.push(async move { (node.node.clone(), future.await) });
+ let sub_fut = client.get_subscription(&node.node).map(|res| res.ok());
+ // PVE's subscription endpoint only returns `sockets` once a key is registered, so
+ // auto-assign needs a separate hardware-socket source for un-subscribed nodes.
+ let status_fut = client.node_status(&node.node).map(|res| res.ok());
+ let node_name = node.node.clone();
+ futures.push(async move {
+ let (sub, status) = futures::future::join(sub_fut, status_fut).await;
+ (node_name, sub, status)
+ });
}
- for (node_name, remote_info) in join_all(futures).await {
+ for (node_name, remote_info, node_status) in join_all(futures).await {
+ let hw_sockets = node_status.map(|s| s.cpuinfo.sockets);
list.insert(
node_name,
remote_info.map(|info| {
@@ -936,7 +952,7 @@ async fn fetch_remote_subscription_info(
.unwrap_or_default();
NodeSubscriptionInfo {
status,
- sockets: info.sockets,
+ sockets: info.sockets.or(hw_sockets),
key: info.key,
serverid: info.serverid,
level: info
diff --git a/server/src/api/subscriptions/mod.rs b/server/src/api/subscriptions/mod.rs
new file mode 100644
index 00000000..aa3146ec
--- /dev/null
+++ b/server/src/api/subscriptions/mod.rs
@@ -0,0 +1,1542 @@
+//! Subscription key pool management API.
+//!
+//! Manages a PDM-side pool of subscription keys, proposes key-to-node assignments, and pushes
+//! assigned keys to remote nodes. All entries are added manually for now; each entry is a bare
+//! `key` string with the product type derived from its prefix.
+
+use std::collections::HashSet;
+
+use anyhow::{bail, format_err, Context, Error};
+use futures::future::join_all;
+
+use proxmox_access_control::CachedUserInfo;
+use proxmox_config_digest::ConfigDigest;
+use proxmox_log::{info, warn};
+use proxmox_router::{
+ http_bail, http_err, list_subdirs_api_method, Permission, Router, RpcEnvironment, SubdirMap,
+};
+use proxmox_schema::api;
+use proxmox_section_config::typed::SectionConfigData;
+use proxmox_sortable_macro::sortable;
+
+use pdm_api_types::remotes::{Remote, REMOTE_ID_SCHEMA};
+use pdm_api_types::subscription::{
+ pick_best_pve_socket_key, socket_count_from_key, AutoAssignProposal, ClearPendingResult,
+ ProductType, ProposedAssignment, RemoteNodeStatus, SubscriptionKeyEntry,
+ SubscriptionKeySource, SubscriptionLevel, SUBSCRIPTION_KEY_SCHEMA,
+};
+use pdm_api_types::{
+ Authid, NODE_SCHEMA, PRIV_RESOURCE_AUDIT, PRIV_RESOURCE_MODIFY, PRIV_SYS_AUDIT, PRIV_SYS_MODIFY,
+};
+
+use crate::api::resources::{
+ get_subscription_info_for_remote, invalidate_subscription_info_for_remote,
+};
+
+pub const ROUTER: Router = Router::new()
+ .get(&list_subdirs_api_method!(SUBDIRS))
+ .subdirs(SUBDIRS);
+
+#[sortable]
+const SUBDIRS: SubdirMap = &sorted!([
+ (
+ "apply-pending",
+ &Router::new().post(&API_METHOD_APPLY_PENDING)
+ ),
+ ("auto-assign", &Router::new().post(&API_METHOD_AUTO_ASSIGN)),
+ ("bulk-assign", &Router::new().post(&API_METHOD_BULK_ASSIGN)),
+ (
+ "clear-pending",
+ &Router::new().post(&API_METHOD_CLEAR_PENDING)
+ ),
+ ("keys", &KEYS_ROUTER),
+ ("node-status", &Router::new().get(&API_METHOD_NODE_STATUS)),
+]);
+
+const KEYS_ROUTER: Router = Router::new()
+ .get(&API_METHOD_LIST_KEYS)
+ .post(&API_METHOD_ADD_KEYS)
+ .match_all("key", &KEY_ITEM_ROUTER);
+
+const KEY_ITEM_ROUTER: Router = Router::new()
+ .get(&API_METHOD_GET_KEY)
+ .delete(&API_METHOD_DELETE_KEY)
+ .subdirs(KEY_ITEM_SUBDIRS);
+
+const KEY_ITEM_SUBDIRS: SubdirMap = &[("assignment", &ASSIGNMENT_ROUTER)];
+
+const ASSIGNMENT_ROUTER: Router = Router::new()
+ .post(&API_METHOD_SET_ASSIGNMENT)
+ .delete(&API_METHOD_CLEAR_ASSIGNMENT);
+
+/// Force-fresh node-status query so the next view reflects the new state instead of returning a
+/// cached entry up to 5 minutes later. Used by auto-assign / apply-pending / clear-pending to
+/// avoid double-driving a node that has already moved to Active in the cache window.
+const FRESH_NODE_STATUS_MAX_AGE: u64 = 0;
+
+/// Cached node-status freshness used by read-only views. Five minutes matches the resource-cache
+/// convention and is short enough that admins rarely see stale data on the panel.
+const PANEL_NODE_STATUS_MAX_AGE: u64 = 5 * 60;
+
+/// Render a subscription key for worker logs and bail messages without exposing the full secret.
+/// Keeps the product prefix and the first/last hex characters of the secret so an operator can
+/// still tell two keys apart in a tail of `journalctl`, but the full key never lands in a log
+/// file readable by anyone other than the priv user.
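+///
+/// For example (mirroring the unit tests at the bottom of this module): a key like
+/// `pve4b-1234567890` renders as `pve4b-1...0`, and input without a `-` separator renders
+/// as `<malformed-key>`.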
+fn redact_key(key: &str) -> String {
+ let Some((prefix, secret)) = key.split_once('-') else {
+ return "<malformed-key>".to_string();
+ };
+ let mut chars = secret.chars();
+ let Some(first) = chars.next() else {
+ return format!("{prefix}-...");
+ };
+ match chars.next_back() {
+ Some(last) => format!("{prefix}-{first}...{last}"),
+ None => format!("{prefix}-{first}..."),
+ }
+}
+
+#[api(
+ returns: {
+ type: Array,
+ description: "List of subscription keys in the pool.",
+ items: { type: SubscriptionKeyEntry },
+ },
+ access: {
+ permission: &Permission::Privilege(&["system"], PRIV_SYS_AUDIT, false),
+ },
+)]
+/// List all subscription keys in the key pool the caller has audit access to.
+///
+/// Unbound pool entries are visible to anyone holding the system-AUDIT bit. Bound entries are
+/// additionally gated on per-remote `PRIV_RESOURCE_AUDIT` so that an operator who can audit the
+/// pool but not a specific remote does not learn which keys are pinned to it (and through that,
+/// the existence and rough size of that remote's deployment).
+fn list_keys(rpcenv: &mut dyn RpcEnvironment) -> Result<Vec<SubscriptionKeyEntry>, Error> {
+ let auth_id: Authid = rpcenv
+ .get_auth_id()
+ .context("no authid available")?
+ .parse()?;
+ let user_info = CachedUserInfo::new()?;
+
+ let (config, digest) = pdm_config::subscriptions::config()?;
+ rpcenv["digest"] = digest.to_hex().into();
+ Ok(config
+ .into_iter()
+ .filter_map(|(_id, mut entry)| {
+ if let Some(remote) = entry.remote.as_deref() {
+ if user_info.lookup_privs(&auth_id, &["resource", remote]) & PRIV_RESOURCE_AUDIT
+ == 0
+ {
+ return None;
+ }
+ }
+ entry.level = SubscriptionLevel::from_key(Some(&entry.key));
+ Some(entry)
+ })
+ .collect())
+}
+
+#[api(
+ input: {
+ properties: {
+ keys: {
+ type: Array,
+ description: "Subscription keys to add to the pool.",
+ items: { schema: SUBSCRIPTION_KEY_SCHEMA },
+ },
+ digest: {
+ type: ConfigDigest,
+ optional: true,
+ },
+ },
+ },
+ access: {
+ permission: &Permission::Privilege(&["system"], PRIV_SYS_MODIFY, false),
+ },
+)]
+/// Add one or more subscription keys to the pool.
+///
+/// The key prefix determines the product type via [`ProductType::from_key`]. The schema regex
+/// rejects anything that isn't a PVE or PBS key today; widen [`PRODUCT_KEY_REGEX`] in lockstep
+/// with `from_key` and `push_key_to_remote` when PMG/POM remote support lands.
+///
+/// All-or-nothing: every key is validated for prefix and uniqueness (against the existing pool
+/// and within the input list) before any change is persisted. A single bad key fails the
+/// request and leaves the pool untouched.
+///
+/// The post-save digest is set on the response so clients can chain a follow-up mutation without
+/// a refetch round-trip.
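+///
+/// Illustrative request flow (endpoint paths as wired up in `SUBDIRS` above, body fields as
+/// defined by the handler schemas; key values elided):
+///
+///   POST /subscriptions/keys                    {"keys": ["pve2b-..."]}        -> new digest D
+///   POST /subscriptions/keys/{key}/assignment   {"remote": "...", "node": "...", "digest": D}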
+async fn add_keys(
+ keys: Vec<String>,
+ digest: Option<ConfigDigest>,
+ rpcenv: &mut dyn RpcEnvironment,
+) -> Result<(), Error> {
+ if keys.is_empty() {
+ http_bail!(BAD_REQUEST, "no keys provided");
+ }
+
+ let mut entries: Vec<SubscriptionKeyEntry> = Vec::with_capacity(keys.len());
+ let mut seen: HashSet<&str> = HashSet::new();
+ for key in &keys {
+ if !seen.insert(key.as_str()) {
+ http_bail!(BAD_REQUEST, "duplicate key in input: '{key}'");
+ }
+ let product_type = ProductType::from_key(key).ok_or_else(|| {
+ // Currently unreachable because the schema regex caps inputs to known prefixes, but
+ // a future regex widening (PMG/POM) where `from_key` lags behind would fire this -
+ // redact defensively so a real key doesn't end up in the journal.
+ warn!(
+ "rejecting unrecognised key prefix '{}', possibly a new product line",
+ redact_key(key),
+ );
+ http_err!(BAD_REQUEST, "unrecognised key format: {}", redact_key(key))
+ })?;
+ entries.push(SubscriptionKeyEntry {
+ key: key.clone(),
+ product_type,
+ level: SubscriptionLevel::from_key(Some(key)),
+ source: SubscriptionKeySource::Manual,
+ ..Default::default()
+ });
+ }
+
+ let new_digest = tokio::task::spawn_blocking(move || -> Result<ConfigDigest, Error> {
+ let _lock = pdm_config::subscriptions::lock_config()?;
+ let (mut config, config_digest) = pdm_config::subscriptions::config()?;
+ config_digest.detect_modification(digest.as_ref())?;
+
+ // `insert` returns the previous entry when one existed; treat that as the duplicate
+ // signal. Doing this inline avoids a second pass over `entries` and falls out of the
+ // loop on the first collision. The all-or-nothing contract holds because save_config
+ // only runs after the loop completes, so a bail on entry N leaves the on-disk pool
+ // untouched even if entries 1..N already landed in the in-memory `config`.
+ for entry in entries {
+ let key = entry.key.clone();
+ if let Some(existing) = config.insert(key.clone(), entry) {
+ http_bail!(CONFLICT, "key '{}' already exists in pool", existing.key);
+ }
+ }
+
+ pdm_config::subscriptions::save_config(&config)
+ })
+ .await??;
+ rpcenv["digest"] = new_digest.to_hex().into();
+ Ok(())
+}
+
+#[api(
+ input: {
+ properties: {
+ key: { schema: SUBSCRIPTION_KEY_SCHEMA },
+ },
+ },
+ returns: { type: SubscriptionKeyEntry },
+ access: {
+ permission: &Permission::Privilege(&["system"], PRIV_SYS_AUDIT, false),
+ },
+)]
+/// Get details for a single key.
+///
+/// Bound entries are hidden from operators who cannot audit the bound remote (mirrors the
+/// `list_keys` filter); the response is the same 404 either way so a probe cannot distinguish
+/// "key exists but you cannot see it" from "key not in pool".
+fn get_key(key: String, rpcenv: &mut dyn RpcEnvironment) -> Result<SubscriptionKeyEntry, Error> {
+ let auth_id: Authid = rpcenv
+ .get_auth_id()
+ .context("no authid available")?
+ .parse()?;
+ let user_info = CachedUserInfo::new()?;
+
+ let (config, digest) = pdm_config::subscriptions::config()?;
+ rpcenv["digest"] = digest.to_hex().into();
+ let mut entry = config
+ .get(&key)
+ .cloned()
+ .ok_or_else(|| http_err!(NOT_FOUND, "key '{key}' not found in pool"))?;
+
+ if let Some(remote) = entry.remote.as_deref() {
+ if user_info.lookup_privs(&auth_id, &["resource", remote]) & PRIV_RESOURCE_AUDIT == 0 {
+ http_bail!(NOT_FOUND, "key '{key}' not found in pool");
+ }
+ }
+
+ entry.level = SubscriptionLevel::from_key(Some(&entry.key));
+ Ok(entry)
+}
+
+#[api(
+ // Required because save_shadow writes a priv:priv 0600 file (signed-blob storage); only the
+ // privileged daemon can chown to that uid.
+ protected: true,
+ input: {
+ properties: {
+ key: { schema: SUBSCRIPTION_KEY_SCHEMA },
+ digest: {
+ type: ConfigDigest,
+ optional: true,
+ },
+ },
+ },
+ access: {
+ permission: &Permission::Privilege(&["system"], PRIV_SYS_MODIFY, false),
+ },
+)]
+/// Remove a key from the pool.
+///
+/// If the key is currently assigned to a remote node, the caller must also have
+/// `PRIV_RESOURCE_MODIFY` on that remote, so an audit-only operator cannot release a key
+/// another admin had pinned. Refuses if the key is currently the live active key on its bound
+/// node, since dropping the pool entry would orphan that subscription on the remote: the
+/// operator must release the live subscription on the remote first.
+async fn delete_key(
+ key: String,
+ digest: Option<ConfigDigest>,
+ rpcenv: &mut dyn RpcEnvironment,
+) -> Result<(), Error> {
+ let auth_id: Authid = rpcenv
+ .get_auth_id()
+ .context("no authid available")?
+ .parse()?;
+ let user_info = CachedUserInfo::new()?;
+
+ // Authorise the caller against the entry's bound remote BEFORE hitting the network: an
+ // operator with only PRIV_SYS_MODIFY should not be able to probe live subscription state on
+ // a remote they cannot audit. Read the entry once without the lock for this gate; the
+ // authoritative read happens under the spawn_blocking section below.
+ let (pre_config, pre_digest) = pdm_config::subscriptions::config()?;
+ let Some(pre_entry) = pre_config.get(&key) else {
+ http_bail!(NOT_FOUND, "key '{key}' not found in pool");
+ };
+ if let Some(assigned_remote) = pre_entry.remote.as_deref() {
+ user_info.check_privs(
+ &auth_id,
+ &["resource", assigned_remote],
+ PRIV_RESOURCE_MODIFY,
+ false,
+ )?;
+ }
+
+ // Live fetch must happen before the lock since the lock cannot span an .await. Pass the
+ // pre-read binding so the helper hits only a remote we already priv-checked above: a
+ // parallel rebind to a remote we cannot AUDIT would otherwise probe that remote here.
+ let pre_binding = pre_entry
+ .remote
+ .as_deref()
+ .zip(pre_entry.node.as_deref());
+ // Owned bool so the orphan guard inside spawn_blocking does not borrow `pre_config`.
+ let pre_had_binding = pre_binding.is_some();
+ let synced_block = check_synced_assignment_for_unassign(&key, pre_binding).await?;
+ drop(pre_config);
+
+ // The lock + sync IO runs on a blocking thread so the async runtime is free for other work
+ // even when /etc/proxmox-datacenter-manager/subscriptions is on slow storage. The
+ // post-lock priv re-check is duplicated inside the closure since `user_info` cannot easily
+ // cross the boundary; reconstructing it is cheap (it just reads the shared ACL cache).
+ let new_digest = tokio::task::spawn_blocking(move || -> Result<ConfigDigest, Error> {
+ let user_info = CachedUserInfo::new()?;
+ let _lock = pdm_config::subscriptions::lock_config()?;
+ let (mut config, config_digest) = pdm_config::subscriptions::config()?;
+ config_digest.detect_modification(digest.as_ref())?;
+ let mut shadow = pdm_config::subscriptions::shadow_config()?;
+
+ let Some(entry) = config.get(&key) else {
+ http_bail!(NOT_FOUND, "key '{key}' not found in pool");
+ };
+
+ // Re-check the priv on the locked entry: a parallel rebind during the live fetch could
+ // have moved the binding to a remote the caller cannot modify.
+ if let Some(assigned_remote) = entry.remote.as_deref() {
+ user_info.check_privs(
+ &auth_id,
+ &["resource", assigned_remote],
+ PRIV_RESOURCE_MODIFY,
+ false,
+ )?;
+ }
+
+ // Orphan guard: refuse on any post-lock divergence that makes the pre-lock live check
+ // stale (still synced; digest moved while bound; binding appeared during the .await).
+ let bound_post = entry.remote.is_some();
+ let raced = config_digest != pre_digest;
+ let still_synced = synced_block
+ .as_ref()
+ .map(|(r, n)| {
+ entry.remote.as_deref() == Some(r.as_str())
+ && entry.node.as_deref() == Some(n.as_str())
+ })
+ .unwrap_or(false);
+ let appeared_unchecked = !pre_had_binding && bound_post;
+ if still_synced || (raced && bound_post) || appeared_unchecked {
+ http_bail!(
+ BAD_REQUEST,
+ "key '{key}' is currently bound to a remote node with a live active \
+ subscription; release it on the remote first"
+ );
+ }
+
+ config.remove(&key);
+ shadow.remove(&key);
+ // Save main config first: an interrupted remove must not leave a `key` entry whose
+ // signed blob is gone (other readers would see the entry and try to consult the
+ // missing shadow). A stale shadow blob with no main entry is benign - readers do not
+ // consult it.
+ let new_digest = pdm_config::subscriptions::save_config(&config)?;
+ pdm_config::subscriptions::save_shadow(&shadow)?;
+ Ok(new_digest)
+ })
+ .await??;
+ rpcenv["digest"] = new_digest.to_hex().into();
+
+ Ok(())
+}
+
+#[api(
+ input: {
+ properties: {
+ key: { schema: SUBSCRIPTION_KEY_SCHEMA },
+ remote: { schema: REMOTE_ID_SCHEMA },
+ // NODE_SCHEMA rejects path-traversal input before it ends up interpolated into the
+ // remote URL `/api2/extjs/nodes/{node}/subscription`.
+ node: { schema: NODE_SCHEMA },
+ digest: {
+ type: ConfigDigest,
+ optional: true,
+ },
+ },
+ },
+ access: {
+ permission: &Permission::Privilege(&["system"], PRIV_SYS_MODIFY, false),
+ },
+)]
+/// Bind a pool key to a remote node.
+///
+/// `PRIV_SYS_MODIFY` lets the caller touch the pool config; per-remote `PRIV_RESOURCE_MODIFY`
+/// is enforced inside this handler so an operator cannot push a key to a remote they have no
+/// other authority on.
+async fn set_assignment(
+ key: String,
+ remote: String,
+ node: String,
+ digest: Option<ConfigDigest>,
+ rpcenv: &mut dyn RpcEnvironment,
+) -> Result<(), Error> {
+ let auth_id: Authid = rpcenv
+ .get_auth_id()
+ .context("no authid available")?
+ .parse()?;
+ let user_info = CachedUserInfo::new()?;
+ user_info.check_privs(&auth_id, &["resource", &remote], PRIV_RESOURCE_MODIFY, false)?;
+
+ // Pre-lock orphan-prevention for the rebind path: pushing the same key to a NEW remote at
+ // the next Apply Pending makes the shop reissue the serverid against the new remote and
+ // orphans whatever live subscription the old remote still ran. Same shape and same guard
+ // as delete_key / clear_assignment; only fires when the binding actually moves (re-set to
+ // the same target leaves the OLD binding intact and carries no orphan risk).
+ let (pre_config, pre_digest) = pdm_config::subscriptions::config()?;
+ let pre_entry = pre_config.get(&key);
+ let pre_binding = pre_entry.and_then(|e| e.remote.as_deref().zip(e.node.as_deref()));
+ let rebind_moves_binding = match pre_binding {
+ Some((r, n)) => r != remote.as_str() || n != node.as_str(),
+ None => false,
+ };
+ if rebind_moves_binding {
+ if let Some((prev_remote, _)) = pre_binding {
+ // Reassigning away from a previous remote requires modify on that remote too,
+ // otherwise an audit-only-on-A operator could effectively pull a key off A by
+ // re-binding it to a remote B they can modify and applying the push (which makes
+ // the shop reissue the serverid to B and invalidates A).
+ user_info.check_privs(
+ &auth_id,
+ &["resource", prev_remote],
+ PRIV_RESOURCE_MODIFY,
+ false,
+ )?;
+ }
+ }
+ let pre_had_binding = pre_binding.is_some();
+ let synced_block = if rebind_moves_binding {
+ check_synced_assignment_for_unassign(&key, pre_binding).await?
+ } else {
+ None
+ };
+ drop(pre_config);
+
+ // Lock + sync IO under spawn_blocking so the async runtime stays free during the file
+ // operations. `user_info` is reconstructed inside the closure since the priv check happens
+ // under the lock.
+ let new_digest = tokio::task::spawn_blocking(move || -> Result<ConfigDigest, Error> {
+ let user_info = CachedUserInfo::new()?;
+ let _lock = pdm_config::subscriptions::lock_config()?;
+ let (mut config, config_digest) = pdm_config::subscriptions::config()?;
+ config_digest.detect_modification(digest.as_ref())?;
+
+ let Some(stored_entry) = config.get(&key).cloned() else {
+ http_bail!(NOT_FOUND, "key '{key}' not found in pool");
+ };
+ let product_type = stored_entry.product_type;
+
+ // Re-check the priv on the locked entry: a parallel rebind during the live fetch could
+ // have moved the binding to a remote the caller cannot modify.
+ if let Some(prev_remote) = stored_entry.remote.as_deref() {
+ if prev_remote != remote {
+ user_info.check_privs(
+ &auth_id,
+ &["resource", prev_remote],
+ PRIV_RESOURCE_MODIFY,
+ false,
+ )?;
+ }
+ }
+
+ // Orphan guard for the rebind path: refuse on any post-lock divergence that makes the
+ // pre-lock live check stale (still synced; digest moved while bound; binding appeared
+ // during the .await). Only fires when the binding moves: re-set to the same target
+ // leaves the old binding intact and is not a rebind.
+ let bound_post = stored_entry.remote.is_some();
+ let post_moves = match (stored_entry.remote.as_deref(), stored_entry.node.as_deref()) {
+ (Some(r), Some(n)) => r != remote.as_str() || n != node.as_str(),
+ _ => false,
+ };
+ let raced = config_digest != pre_digest;
+ let still_synced = synced_block
+ .as_ref()
+ .map(|(r, n)| {
+ stored_entry.remote.as_deref() == Some(r.as_str())
+ && stored_entry.node.as_deref() == Some(n.as_str())
+ })
+ .unwrap_or(false);
+ let appeared_unchecked = !pre_had_binding && bound_post && post_moves;
+ if (still_synced && post_moves)
+ || (raced && bound_post && post_moves)
+ || appeared_unchecked
+ {
+ http_bail!(
+ BAD_REQUEST,
+ "key '{key}' is currently bound to a remote node with a live active \
+ subscription; release it on the remote before rebinding"
+ );
+ }
+
+ let (remotes_config, _) = pdm_config::remotes::config()?;
+ let remote_entry = remotes_config
+ .get(&remote)
+ .ok_or_else(|| http_err!(NOT_FOUND, "remote '{remote}' not found"))?;
+
+ if !product_type.matches_remote_type(remote_entry.ty) {
+ http_bail!(
+ BAD_REQUEST,
+ "key type '{product_type}' does not match remote type '{}'",
+ remote_entry.ty
+ );
+ }
+
+ for (_id, other) in config.iter() {
+ if other.key != key
+ && other.remote.as_deref() == Some(remote.as_str())
+ && other.node.as_deref() == Some(node.as_str())
+ {
+ http_bail!(
+ CONFLICT,
+ "key '{}' is already assigned to {remote}/{node}",
+ other.key
+ );
+ }
+ }
+
+ // Safe: the earlier `config.get(&key).cloned()` above proved the key exists, and the
+ // `_lock` guard keeps the config stable across this section.
+ let entry = config
+ .get_mut(&key)
+ .expect("entry verified to exist under lock above");
+ entry.remote = Some(remote);
+ entry.node = Some(node);
+
+ pdm_config::subscriptions::save_config(&config)
+ })
+ .await??;
+ rpcenv["digest"] = new_digest.to_hex().into();
+
+ Ok(())
+}
+
+#[api(
+ input: {
+ properties: {
+ key: { schema: SUBSCRIPTION_KEY_SCHEMA },
+ digest: {
+ type: ConfigDigest,
+ optional: true,
+ },
+ },
+ },
+ access: {
+ permission: &Permission::Privilege(&["system"], PRIV_SYS_MODIFY, false),
+ },
+)]
+/// Drop the remote-node binding for a pool key.
+///
+/// Refuses when the binding is currently synced (the assigned key is the live active key on
+/// its remote): unassigning then would orphan that subscription, so the operator must release
+/// the live subscription on the remote first.
+async fn clear_assignment(
+ key: String,
+ digest: Option<ConfigDigest>,
+ rpcenv: &mut dyn RpcEnvironment,
+) -> Result<(), Error> {
+ let auth_id: Authid = rpcenv
+ .get_auth_id()
+ .context("no authid available")?
+ .parse()?;
+ let user_info = CachedUserInfo::new()?;
+
+ // Authorise against the entry's bound remote BEFORE hitting the network. An operator with
+ // only PRIV_SYS_MODIFY should not be able to probe live subscription state on a remote
+ // they cannot audit. The authoritative re-check happens after the lock below.
+ let (pre_config, pre_digest) = pdm_config::subscriptions::config()?;
+ let pre_entry = pre_config.get(&key);
+ if let Some(pre_entry) = pre_entry {
+ if let Some(assigned_remote) = pre_entry.remote.as_deref() {
+ user_info.check_privs(
+ &auth_id,
+ &["resource", assigned_remote],
+ PRIV_RESOURCE_MODIFY,
+ false,
+ )?;
+ }
+ }
+
+ // The live fetch must happen before the lock since the lock cannot span an .await. Snapshot
+ // the digest above so the post-lock check below can refuse if a parallel admin rebound the
+ // entry during the .await - in that race the original remote still has our live
+ // subscription and unbinding here would orphan it. Pass the pre-read binding so the helper
+ // hits only the remote the pre-priv check above already covered.
+ let pre_binding = pre_entry.and_then(|e| e.remote.as_deref().zip(e.node.as_deref()));
+ // Owned bool so the orphan guard inside spawn_blocking does not borrow `pre_config`.
+ let pre_had_binding = pre_binding.is_some();
+ let synced_block = check_synced_assignment_for_unassign(&key, pre_binding).await?;
+ drop(pre_config);
+
+ // The lock + sync IO runs on a blocking thread so the async runtime is free for other work
+ // even when /etc/proxmox-datacenter-manager/subscriptions is on slow storage. The post-lock
+ // priv re-check is duplicated inside the closure since `user_info` cannot easily cross the
+ // boundary; reconstructing it is cheap (it just reads the shared ACL cache).
+ let new_digest = tokio::task::spawn_blocking(move || -> Result<ConfigDigest, Error> {
+ let user_info = CachedUserInfo::new()?;
+ let _lock = pdm_config::subscriptions::lock_config()?;
+ let (mut config, config_digest) = pdm_config::subscriptions::config()?;
+ config_digest.detect_modification(digest.as_ref())?;
+
+ let Some(stored_entry) = config.get(&key).cloned() else {
+ http_bail!(NOT_FOUND, "key '{key}' not found in pool");
+ };
+
+ // Re-check the priv on the locked entry: a parallel rebind during the live fetch could
+ // have moved the binding to a remote the caller cannot modify.
+ if let Some(prev_remote) = stored_entry.remote.as_deref() {
+ user_info.check_privs(
+ &auth_id,
+ &["resource", prev_remote],
+ PRIV_RESOURCE_MODIFY,
+ false,
+ )?;
+ }
+
+ // Orphan guard: refuse on any post-lock divergence that makes the pre-lock live check
+ // stale (still synced; digest moved while bound; binding appeared during the .await).
+ let bound_post = stored_entry.remote.is_some();
+ let raced = config_digest != pre_digest;
+ let still_synced = synced_block
+ .as_ref()
+ .map(|(r, n)| {
+ stored_entry.remote.as_deref() == Some(r.as_str())
+ && stored_entry.node.as_deref() == Some(n.as_str())
+ })
+ .unwrap_or(false);
+ let appeared_unchecked = !pre_had_binding && bound_post;
+ if still_synced || (raced && bound_post) || appeared_unchecked {
+ http_bail!(
+ BAD_REQUEST,
+ "key '{key}' is currently bound to a remote node with a live active \
+ subscription; release it on the remote first"
+ );
+ }
+ // Safe: the earlier `config.get(&key).cloned()` above proved the key exists, and the
+ // `_lock` guard keeps the config stable across this section.
+ let entry = config
+ .get_mut(&key)
+ .expect("entry verified to exist under lock above");
+ entry.remote = None;
+ entry.node = None;
+
+ pdm_config::subscriptions::save_config(&config)
+ })
+ .await??;
+ rpcenv["digest"] = new_digest.to_hex().into();
+
+ Ok(())
+}
+
+/// Pre-lock check for the unassign / delete-key paths ([`clear_assignment`] and [`delete_key`]):
+/// returns the (remote, node) the entry is currently active on, if any, so the lock-protected
+/// branch can refuse the operation and prompt the operator to release the live subscription
+/// on the remote first. Returns `None` for entries with no binding, no live subscription, or
+/// a live subscription whose key does not match the entry.
+///
+/// Takes the binding from the caller's pre-read entry rather than re-reading config so the
+/// remote we hit on the network is the one the caller's pre-priv check already covered: a
+/// parallel rebind between pre-read and here cannot redirect us at a remote the caller has no
+/// AUDIT on.
+async fn check_synced_assignment_for_unassign(
+ key: &str,
+ binding: Option<(&str, &str)>,
+) -> Result<Option<(String, String)>, Error> {
+ let Some((prev_remote, prev_node)) = binding else {
+ return Ok(None);
+ };
+ let (remotes_config, _) = pdm_config::remotes::config()?;
+ let Some(remote_entry) = remotes_config.get(prev_remote) else {
+ return Ok(None);
+ };
+ let live = match get_subscription_info_for_remote(remote_entry, FRESH_NODE_STATUS_MAX_AGE).await
+ {
+ Ok(v) => v,
+ Err(_) => return Ok(None),
+ };
+ let synced = live
+ .get(prev_node)
+ .and_then(|info| info.as_ref())
+ .map(|info| {
+ info.status == proxmox_subscription::SubscriptionStatus::Active
+ && info.key.as_deref() == Some(key)
+ })
+ .unwrap_or(false);
+ Ok(synced.then_some((prev_remote.to_string(), prev_node.to_string())))
+}
+
+/// Push a single key to its assigned remote node. Operates on a borrowed `Remote` so the
+/// caller can fetch the remotes-config once and reuse it.
+async fn push_key_to_remote(remote: &Remote, key: &str, node_name: &str) -> Result<(), Error> {
+ let product_type =
+ ProductType::from_key(key).ok_or_else(|| format_err!("unrecognised key format: {key}"))?;
+
+ match product_type {
+ ProductType::Pve => {
+ let client = crate::connection::make_pve_client(remote)?;
+ client
+ .set_subscription(
+ node_name,
+ pve_api_types::SetSubscription { key: key.to_string() },
+ )
+ .await?;
+ }
+ ProductType::Pbs => {
+ let client = crate::connection::make_pbs_client(remote)?;
+ client
+ .set_subscription(proxmox_subscription::SetSubscription { key: key.to_string() })
+ .await?;
+ }
+ ProductType::Pmg | ProductType::Pom => {
+ bail!("PDM cannot push '{product_type}' keys: no remote support yet");
+ }
+ }
+
+ info!(
+ "pushed key '{}' to {}/{node_name}",
+ redact_key(key),
+ remote.id,
+ );
+ Ok(())
+}
+
+#[api(
+ input: {
+ properties: {
+ "max-age": {
+ type: u64,
+ optional: true,
+ description: "Override the cache freshness window in seconds. \
+ Default 300 for panel views; pass 0 to force a fresh query.",
+ },
+ },
+ },
+ returns: {
+ type: Array,
+ description: "Subscription status of all remote nodes the user can audit.",
+ items: { type: RemoteNodeStatus },
+ },
+ access: {
+ permission: &Permission::Privilege(&["system"], PRIV_SYS_AUDIT, false),
+ },
+)]
+/// Get the subscription status of every remote node the caller can audit, combined with key pool
+/// assignment information.
+///
+/// Per-remote `PRIV_RESOURCE_AUDIT` is enforced inside the handler so users only see remotes
+/// they may audit.
+async fn node_status(
+ max_age: Option<u64>,
+ rpcenv: &mut dyn RpcEnvironment,
+) -> Result<Vec<RemoteNodeStatus>, Error> {
+ collect_node_status(max_age.unwrap_or(PANEL_NODE_STATUS_MAX_AGE), rpcenv).await
+}
+
+/// Shared helper: fan out subscription queries to all remotes the caller has audit privilege on,
+/// in parallel, reusing the per-remote `SUBSCRIPTION_CACHE` via `get_subscription_info_for_remote`.
+/// Joins the results with the key-pool assignment table.
+async fn collect_node_status(
+ max_age: u64,
+ rpcenv: &mut dyn RpcEnvironment,
+) -> Result<Vec<RemoteNodeStatus>, Error> {
+ let auth_id: Authid = rpcenv
+ .get_auth_id()
+ .context("no authid available")?
+ .parse()?;
+ let user_info = CachedUserInfo::new()?;
+
+ let visible_remotes: Vec<(String, Remote)> = crate::api::remotes::RemoteIterator::new()?
+ .any_privs(&user_info, &auth_id, PRIV_RESOURCE_AUDIT)
+ .into_iter()
+ .collect();
+
+ let (keys_config, _) = pdm_config::subscriptions::config()?;
+
+ // `get_subscription_info_for_remote` re-uses the per-remote `SUBSCRIPTION_CACHE` so this
+ // fan-out is safe to run concurrently.
+ let fetch = visible_remotes.iter().map(|(name, remote)| async move {
+ let res = get_subscription_info_for_remote(remote, max_age).await;
+ (name.clone(), remote.ty, res)
+ });
+ let results = join_all(fetch).await;
+
+ let mut out = Vec::new();
+ for (remote_name, remote_ty, result) in results {
+ let node_infos = match result {
+ Ok(info) => info,
+ Err(err) => {
+ warn!("failed to query subscription for remote {remote_name}: {err}");
+ continue;
+ }
+ };
+
+ for (node_name, node_info) in &node_infos {
+ let (status, level, sockets, current_key) = match node_info {
+ Some(info) => (info.status, info.level, info.sockets, info.key.clone()),
+ None => (
+ proxmox_subscription::SubscriptionStatus::NotFound,
+ SubscriptionLevel::None,
+ None,
+ None,
+ ),
+ };
+
+ let pool_entry = keys_config.iter().find(|(_id, entry)| {
+ entry.remote.as_deref() == Some(remote_name.as_str())
+ && entry.node.as_deref() == Some(node_name.as_str())
+ });
+ let (assigned_key, pending_clear) = match pool_entry {
+ Some((_id, entry)) => (Some(entry.key.clone()), entry.pending_clear),
+ None => (None, false),
+ };
+
+ out.push(RemoteNodeStatus {
+ remote: remote_name.clone(),
+ ty: remote_ty,
+ node: node_name.to_string(),
+ sockets,
+ status,
+ level,
+ assigned_key,
+ current_key,
+ pending_clear,
+ });
+ }
+ }
+
+ out.sort_by(|a, b| (&a.remote, &a.node).cmp(&(&b.remote, &b.node)));
+ Ok(out)
+}
+
+#[api(
+ returns: { type: AutoAssignProposal },
+ access: {
+ permission: &Permission::Privilege(&["system"], PRIV_SYS_MODIFY, false),
+ },
+)]
+/// Compute a proposed mapping of unused pool keys to nodes without an active subscription.
+///
+/// Returns the plan plus snapshots of the inputs (pool digest and a hash of the consulted
+/// node-status). The plan is committed by `bulk_assign` and rejected there if either snapshot no
+/// longer matches the live state, so an operator never silently applies a plan that drifted
+/// between preview and commit.
+///
+/// `PRIV_SYS_MODIFY` is required to *preview* the plan; the actual commit performed by
+/// `bulk_assign` additionally drops proposals on any remote the caller cannot
+/// `PRIV_RESOURCE_MODIFY`, so an audit-only-on-a-remote operator can see the suggestion but the
+/// write never lands there.
+async fn auto_assign(rpcenv: &mut dyn RpcEnvironment) -> Result<AutoAssignProposal, Error> {
+ let node_statuses = collect_node_status(FRESH_NODE_STATUS_MAX_AGE, rpcenv).await?;
+ let (config, keys_digest) = pdm_config::subscriptions::config()?;
+ let assignments = compute_proposals(&config, &node_statuses);
+ Ok(AutoAssignProposal {
+ assignments,
+ keys_digest,
+ node_status_digest: hash_node_status(&node_statuses),
+ })
+}
+
+#[api(
+ input: {
+ properties: {
+ proposal: { type: AutoAssignProposal },
+ },
+ },
+ returns: {
+ type: Array,
+ description: "Assignments that were actually persisted.",
+ items: { type: ProposedAssignment },
+ },
+ access: {
+ permission: &Permission::Privilege(&["system"], PRIV_SYS_MODIFY, false),
+ },
+)]
+/// Apply a proposal previously returned by `auto_assign`.
+///
+/// Rejects with 409 if the pool config digest has moved or the live node-status hash differs
+/// from what the proposal was computed against; the caller is expected to refresh the proposal
+/// and retry. Per-remote `PRIV_RESOURCE_MODIFY` is checked inside the handler so an audit-only
+/// caller's previously-rendered preview cannot be applied on their behalf.
+async fn bulk_assign(
+ proposal: AutoAssignProposal,
+ rpcenv: &mut dyn RpcEnvironment,
+) -> Result<Vec<ProposedAssignment>, Error> {
+ let auth_id: Authid = rpcenv
+ .get_auth_id()
+ .context("no authid available")?
+ .parse()?;
+
+ let node_statuses = collect_node_status(FRESH_NODE_STATUS_MAX_AGE, rpcenv).await?;
+ let live_digest = hash_node_status(&node_statuses);
+ if live_digest != proposal.node_status_digest {
+ http_bail!(
+ CONFLICT,
+ "node status changed since proposal; refresh and try again"
+ );
+ }
+
+ // Lock + sync IO under spawn_blocking so the async runtime stays free during the file
+ // operations. `user_info` and `auth_id` are reconstructed/cloned into the closure since the
+ // priv lookups for every proposal entry happen under the lock.
+ let (applied, new_digest_opt) = tokio::task::spawn_blocking(
+ move || -> Result<(Vec<ProposedAssignment>, Option<ConfigDigest>), Error> {
+ let user_info = CachedUserInfo::new()?;
+ let _lock = pdm_config::subscriptions::lock_config()?;
+ let (mut config, config_digest) = pdm_config::subscriptions::config()?;
+ config_digest.detect_modification(Some(&proposal.keys_digest))?;
+ let (remotes_config, _) = pdm_config::remotes::config()?;
+
+ let mut applied = Vec::with_capacity(proposal.assignments.len());
+ for p in &proposal.assignments {
+ // Audit-only callers may see a remote in the preview but must not be able to
+ // stage a write for it that another admin would later push on their behalf.
+ if user_info.lookup_privs(&auth_id, &["resource", &p.remote])
+ & PRIV_RESOURCE_MODIFY
+ == 0
+ {
+ continue;
+ }
+ // The proposal is client-controlled (a malicious client could submit a
+ // fabricated `p.node`) and was originally sourced from each remote's
+ // node-status reply (a compromised or buggy remote could inject a
+ // path-traversal token). Re-validate against NODE_SCHEMA before persisting; the
+ // node string later interpolates into the remote URL in `push_key_to_remote`,
+ // so this is the only line of defence at that boundary.
+ if NODE_SCHEMA.parse_simple_value(&p.node).is_err() {
+ warn!(
+ "skipping bulk-assign entry with invalid node name from proposal: \
+ remote={} (raw node rejected)",
+ p.remote,
+ );
+ continue;
+ }
+ // Mirror set_assignment's invariants: a client-fabricated proposal must not be
+ // able to bind a PVE key to a PBS remote (apply-pending would fail leaving the
+ // pool inconsistent), nor double-bind a single (remote, node) target.
+ let Some(remote_entry) = remotes_config.get(&p.remote) else {
+ continue;
+ };
+ let Some(pool_entry) = config.get(&p.key) else {
+ continue;
+ };
+ if !pool_entry.product_type.matches_remote_type(remote_entry.ty) {
+ continue;
+ }
+ if config.iter().any(|(_, e)| {
+ e.key != p.key
+ && e.remote.as_deref() == Some(p.remote.as_str())
+ && e.node.as_deref() == Some(p.node.as_str())
+ }) {
+ continue;
+ }
+ if let Some(entry) = config.get_mut(&p.key) {
+ // Defensive: with the digest check above the entry should still be unbound,
+ // but a bug in the proposal computation could otherwise overwrite a foreign
+ // binding.
+ if entry.remote.is_none() {
+ entry.remote = Some(p.remote.clone());
+ entry.node = Some(p.node.clone());
+ applied.push(p.clone());
+ }
+ }
+ }
+
+ let new_digest = if applied.is_empty() {
+ None
+ } else {
+ Some(pdm_config::subscriptions::save_config(&config)?)
+ };
+ Ok((applied, new_digest))
+ },
+ )
+ .await??;
+
+ if let Some(new_digest) = new_digest_opt {
+ rpcenv["digest"] = new_digest.to_hex().into();
+ }
+
+ Ok(applied)
+}
+
+/// Stable hash of the slice of node-status fields consulted by `compute_proposals`. Changing
+/// what `compute_proposals` reads requires updating this digest to match, otherwise the
+/// preview/commit guarantee breaks silently.
+fn hash_node_status(statuses: &[RemoteNodeStatus]) -> String {
+ let mut keyed: Vec<(&str, &str, proxmox_subscription::SubscriptionStatus, Option<i64>, bool)> =
+ statuses
+ .iter()
+ .map(|n| {
+ (
+ n.remote.as_str(),
+ n.node.as_str(),
+ n.status,
+ n.sockets,
+ n.assigned_key.is_some(),
+ )
+ })
+ .collect();
+ keyed.sort_by(|a, b| (a.0, a.1).cmp(&(b.0, b.1)));
+ let raw = serde_json::to_vec(&keyed).unwrap_or_default();
+ hex::encode(openssl::sha::sha256(&raw))
+}
+
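+/// Pure planning step behind `auto_assign`: pair each node without an assigned key and without
+/// an Active subscription with an unbound pool key of the matching product type, visiting
+/// larger nodes first and, for PVE, preferring the smallest key that still covers the node's
+/// socket count. Nothing is persisted here; `bulk_assign` commits the result.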
+fn compute_proposals(
+ config: &SectionConfigData<SubscriptionKeyEntry>,
+ node_statuses: &[RemoteNodeStatus],
+) -> Vec<ProposedAssignment> {
+ let mut target_nodes: Vec<&RemoteNodeStatus> = node_statuses
+ .iter()
+ .filter(|n| {
+ n.assigned_key.is_none() && n.status != proxmox_subscription::SubscriptionStatus::Active
+ })
+ .collect();
+
+ // Sort PVE nodes by socket count descending so large nodes get keys first.
+ target_nodes.sort_by_key(|n| std::cmp::Reverse(n.sockets.unwrap_or(0)));
+
+ let mut proposals: Vec<ProposedAssignment> = Vec::new();
+ let mut taken: HashSet<String> = HashSet::new();
+
+ for node in &target_nodes {
+ let remote_type = node.ty;
+
+ let candidates = config.iter().filter(|(id, entry)| {
+ entry.remote.is_none()
+ && !taken.contains(*id)
+ && entry.product_type.matches_remote_type(remote_type)
+ });
+
+ let best_key = if remote_type == pdm_api_types::remotes::RemoteType::Pve {
+ let node_sockets = node.sockets.unwrap_or(1) as u32;
+ pick_best_pve_socket_key(
+ node_sockets,
+ candidates.map(|(id, entry)| (id.to_string(), entry.key.as_str())),
+ )
+ } else {
+ candidates.map(|(id, _)| id.to_string()).next()
+ };
+
+ if let Some(key_id) = best_key {
+ let ks = config
+ .get(&key_id)
+ .and_then(|e| socket_count_from_key(&e.key));
+ taken.insert(key_id.clone());
+ proposals.push(ProposedAssignment {
+ key: key_id,
+ remote: node.remote.clone(),
+ node: node.node.clone(),
+ key_sockets: ks,
+ node_sockets: node.sockets,
+ });
+ }
+ }
+
+ proposals
+}
+
+#[api(
+ input: {
+ properties: {
+ digest: {
+ type: ConfigDigest,
+ optional: true,
+ },
+ },
+ },
+ returns: {
+ schema: pdm_api_types::UPID_SCHEMA,
+ optional: true,
+ },
+ access: {
+ permission: &Permission::Privilege(&["system"], PRIV_SYS_MODIFY, false),
+ },
+)]
+/// Apply every pending pool change to its remote node.
+///
+/// Pending entries are pool keys whose live `current_key` on the bound node does not match the
+/// assigned pool key (either a different live key, no key, or the remote did not respond / the
+/// node is gone). Each step is logged from a worker task so the admin can follow progress.
+///
+/// Subscription health (Invalid, Expired, ...) is intentionally not considered pending: the
+/// assigned key already reached the node; re-pushing it would not change the shop's verdict.
+///
+/// The worker bails on the first failure; the remaining entries stay pending so the operator
+/// can fix the underlying issue (or clear that one assignment) and trigger another apply.
+///
+/// Returns `None` when nothing is pending so the caller can show a short info message instead of
+/// opening a task progress dialog for a no-op worker.
+///
+/// The optional `digest` rejects the call at the API boundary if the pool changed since the
+/// caller last loaded it, so a stale browser tab cannot start a worker on a plan the operator
+/// no longer sees. The worker itself deliberately re-reads the pool when it fires (a worker can
+/// be scheduled with delay), so a parallel admin edit between API return and worker firing is
+/// still honoured - the digest only pins the at-API-call-time plan, not the executed plan.
+async fn apply_pending(
+ digest: Option<ConfigDigest>,
+ rpcenv: &mut dyn RpcEnvironment,
+) -> Result<Option<String>, Error> {
+ let auth_id: Authid = rpcenv
+ .get_auth_id()
+ .context("no authid available")?
+ .parse()?;
+ let user_info = CachedUserInfo::new()?;
+
+ let (_, config_digest) = pdm_config::subscriptions::config()?;
+ config_digest.detect_modification(digest.as_ref())?;
+
+ let node_statuses = collect_node_status(FRESH_NODE_STATUS_MAX_AGE, rpcenv).await?;
+ let pending = compute_pending(&user_info, &auth_id, &node_statuses)?;
+
+ if pending.is_empty() {
+ return Ok(None);
+ }
+
+ let worker_auth = auth_id.clone();
+ let upid = proxmox_rest_server::WorkerTask::spawn(
+ "subscription-apply-pending",
+ None,
+ auth_id.to_string(),
+ true,
+ move |_worker| async move { run_apply_pending(worker_auth).await },
+ )?;
+
+ Ok(Some(upid))
+}
+
+/// Re-validate and run the apply-pending plan from inside a worker.
+///
+/// The worker re-reads remotes and the pool config so a reassign or removal between the API call
+/// returning a UPID and the worker firing is honoured (pushing the old key to a node after the
+/// operator retracted the assignment was a real footgun).
+async fn run_apply_pending(auth_id: Authid) -> Result<(), Error> {
+ let user_info = CachedUserInfo::new()?;
+ let (remotes_config, _) = pdm_config::remotes::config()?;
+ let (config, _) = pdm_config::subscriptions::config()?;
+
+ let node_statuses = collect_status_uncached(&remotes_config).await;
+ let pending = compute_pending(&user_info, &auth_id, &node_statuses)?;
+
+ if pending.is_empty() {
+ info!("apply-pending: nothing to do (state changed since the API call)");
+ return Ok(());
+ }
+
+ let total = pending.len();
+ let mut ok = 0usize;
+
+ for entry in pending {
+ let Some(remote) = remotes_config.get(&entry.remote) else {
+ bail!(
+ "remote '{}' vanished, aborting after {ok}/{total} successful pushes",
+ entry.remote,
+ );
+ };
+ // Honour the case where the operator unassigned the key while the worker was queued.
+ if !pool_assignment_still_valid(&config, &entry) {
+ info!(
+ "skipping {}/{}: pool assignment changed before worker ran",
+ entry.remote, entry.node
+ );
+ continue;
+ }
+
+ let redacted = redact_key(&entry.key);
+ info!("pushing {redacted} to {}/{}...", entry.remote, entry.node);
+ if let Err(err) = push_key_to_remote(remote, &entry.key, &entry.node).await {
+ bail!(
+ "push of {redacted} to {}/{} failed after {ok}/{total} successful pushes: {err}",
+ entry.remote,
+ entry.node,
+ );
+ }
+ info!(" success");
+ invalidate_subscription_info_for_remote(&entry.remote);
+ ok += 1;
+ }
+
+ info!("finished: {ok}/{total} pushes succeeded");
+ Ok(())
+}
+
+#[api(
+ input: {
+ properties: {
+ digest: {
+ type: ConfigDigest,
+ optional: true,
+ },
+ },
+ },
+ returns: { type: ClearPendingResult },
+ access: {
+ permission: &Permission::Privilege(&["system"], PRIV_SYS_MODIFY, false),
+ },
+)]
+/// Clear every pending assignment in one bulk transaction.
+///
+/// Pending = pool key bound to a remote node whose live `current_key` does not match the
+/// assigned pool key (a different live key, no key, or no row returned at all because the remote
+/// is unreachable / the node is gone). Clears only those entries the caller has
+/// `PRIV_RESOURCE_MODIFY` on; remotes the caller may only audit are skipped. Mirrors
+/// `apply-pending` but drops the assignments instead of pushing them, so an operator can disown
+/// stuck assignments without first having to bring the target back online.
+///
+/// The optional `digest` is checked twice: once before the live-state fetch so a stale browser
+/// tab is rejected up-front, and again under the config lock so a parallel admin edit between
+/// fetch and write does not get silently overwritten.
+async fn clear_pending(
+ digest: Option<ConfigDigest>,
+ rpcenv: &mut dyn RpcEnvironment,
+) -> Result<ClearPendingResult, Error> {
+ let auth_id: Authid = rpcenv
+ .get_auth_id()
+ .context("no authid available")?
+ .parse()?;
+ let user_info = CachedUserInfo::new()?;
+
+ let (_, pre_digest) = pdm_config::subscriptions::config()?;
+ pre_digest.detect_modification(digest.as_ref())?;
+
+ let node_statuses = collect_node_status(FRESH_NODE_STATUS_MAX_AGE, rpcenv).await?;
+ let pending = compute_pending(&user_info, &auth_id, &node_statuses)?;
+
+ if pending.is_empty() {
+ return Ok(ClearPendingResult { cleared: 0 });
+ }
+
+ // Lock + sync IO under spawn_blocking so the async runtime stays free during the file
+ // operations.
+ let (cleared, new_digest_opt) = tokio::task::spawn_blocking(
+ move || -> Result<(u32, Option<ConfigDigest>), Error> {
+ let _lock = pdm_config::subscriptions::lock_config()?;
+ let (mut config, locked_digest) = pdm_config::subscriptions::config()?;
+ locked_digest.detect_modification(digest.as_ref())?;
+
+ let mut cleared: u32 = 0;
+ for entry in &pending {
+ // Re-check inside the lock so a concurrent reassign is not silently
+ // overwritten.
+ if let Some(stored) = config.get_mut(&entry.key) {
+ if stored.remote.as_deref() == Some(entry.remote.as_str())
+ && stored.node.as_deref() == Some(entry.node.as_str())
+ {
+ stored.remote = None;
+ stored.node = None;
+ cleared += 1;
+ }
+ }
+ }
+
+ let new_digest = if cleared > 0 {
+ Some(pdm_config::subscriptions::save_config(&config)?)
+ } else {
+ None
+ };
+ Ok((cleared, new_digest))
+ },
+ )
+ .await??;
+
+ if let Some(new_digest) = new_digest_opt {
+ rpcenv["digest"] = new_digest.to_hex().into();
+ }
+
+ Ok(ClearPendingResult { cleared })
+}
+
+/// Plan entry for one pending push.
+#[derive(Clone, Debug)]
+struct PendingEntry {
+ key: String,
+ remote: String,
+ node: String,
+}
+
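+/// Derive the pending-push plan from the pool config and a node-status snapshot: every pool key
+/// bound to a (remote, node) the caller may modify whose live key differs from the assignment;
+/// targets without a status row (unreachable remote, vanished node) count as pending too.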
+fn compute_pending(
+ user_info: &CachedUserInfo,
+ auth_id: &Authid,
+ node_statuses: &[RemoteNodeStatus],
+) -> Result<Vec<PendingEntry>, Error> {
+ let (config, _) = pdm_config::subscriptions::config()?;
+
+ Ok(config
+ .iter()
+ .filter_map(|(_id, entry)| {
+ let remote = entry.remote.as_deref()?;
+ let node = entry.node.as_deref()?;
+
+ if user_info.lookup_privs(auth_id, &["resource", remote]) & PRIV_RESOURCE_MODIFY == 0 {
+ return None;
+ }
+
+ // Pending push = the live current key on the node does not match the assigned pool
+ // key. Subscription health (Invalid, Expired, ...) is a separate axis surfaced via
+ // the Status column; re-pushing the same key would not change the shop's verdict.
+ // Unreachable remotes count as pending so a stuck assignment can still be cleared
+ // without first having to bring the target back online.
+ let is_pending = match node_statuses
+ .iter()
+ .find(|n| n.remote == remote && n.node == node)
+ {
+ Some(n) => n.current_key.as_deref() != Some(entry.key.as_str()),
+ None => true,
+ };
+
+ is_pending.then(|| PendingEntry {
+ key: entry.key.clone(),
+ remote: remote.to_string(),
+ node: node.to_string(),
+ })
+ })
+ .collect())
+}
+
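+/// True when the persisted pool still binds `entry.key` to exactly the (remote, node) the
+/// pending plan was computed for; lets the worker skip entries the operator retracted while
+/// the task was queued.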
+fn pool_assignment_still_valid(
+ config: &SectionConfigData<SubscriptionKeyEntry>,
+ entry: &PendingEntry,
+) -> bool {
+ let Some(stored) = config.get(&entry.key) else {
+ return false;
+ };
+ stored.remote.as_deref() == Some(entry.remote.as_str())
+ && stored.node.as_deref() == Some(entry.node.as_str())
+}
+
+/// Like [`collect_node_status`] but bypasses the auth filter, for the apply-pending worker
+/// which gates each entry through its own per-remote priv check based on the persisted pool plan.
+async fn collect_status_uncached(
+ remotes_config: &SectionConfigData<Remote>,
+) -> Vec<RemoteNodeStatus> {
+ let fetch = remotes_config.iter().map(|(name, remote)| async move {
+ let res = get_subscription_info_for_remote(remote, FRESH_NODE_STATUS_MAX_AGE).await;
+ (name.to_string(), remote.ty, res)
+ });
+ let results = join_all(fetch).await;
+
+ let mut out = Vec::new();
+ for (remote_name, remote_ty, result) in results {
+ let Ok(node_infos) = result else { continue };
+ for (node_name, node_info) in &node_infos {
+ let (status, level, sockets, current_key) = match node_info {
+ Some(info) => (info.status, info.level, info.sockets, info.key.clone()),
+ None => (
+ proxmox_subscription::SubscriptionStatus::NotFound,
+ SubscriptionLevel::None,
+ None,
+ None,
+ ),
+ };
+ out.push(RemoteNodeStatus {
+ remote: remote_name.clone(),
+ ty: remote_ty,
+ node: node_name.to_string(),
+ sockets,
+ status,
+ level,
+ assigned_key: None,
+ current_key,
+ pending_clear: false,
+ });
+ }
+ }
+ out
+}
+
+#[cfg(test)]
+mod tests {
+ use super::*;
+ use pdm_api_types::remotes::RemoteType;
+ use pdm_api_types::subscription::SubscriptionKeyEntry;
+ use proxmox_subscription::SubscriptionStatus;
+
+ #[test]
+ fn redact_key_handles_standard_pve_key() {
+ assert_eq!(redact_key("pve4b-1234567890"), "pve4b-1...0");
+ }
+
+ #[test]
+ fn redact_key_handles_standard_pbs_key() {
+ assert_eq!(redact_key("pbsc-abcdef0123"), "pbsc-a...3");
+ }
+
+ #[test]
+ fn redact_key_safe_on_single_char_secret() {
+ assert_eq!(redact_key("pve4b-x"), "pve4b-x...");
+ }
+
+ #[test]
+ fn redact_key_safe_on_empty_secret() {
+ assert_eq!(redact_key("pve4b-"), "pve4b-...");
+ }
+
+ #[test]
+ fn redact_key_malformed_no_dash() {
+ assert_eq!(redact_key("nodash"), "<malformed-key>");
+ }
+
+ fn pool_entry(key: &str, remote: Option<&str>, node: Option<&str>) -> SubscriptionKeyEntry {
+ SubscriptionKeyEntry {
+ key: key.to_string(),
+ product_type: ProductType::from_key(key).unwrap_or_default(),
+ level: SubscriptionLevel::from_key(Some(key)),
+ source: SubscriptionKeySource::Manual,
+ remote: remote.map(str::to_string),
+ node: node.map(str::to_string),
+ ..Default::default()
+ }
+ }
+
+ fn pool_config(entries: Vec<SubscriptionKeyEntry>) -> SectionConfigData<SubscriptionKeyEntry> {
+ let mut data = SectionConfigData::default();
+ for e in entries {
+ data.insert(e.key.clone(), e);
+ }
+ data
+ }
+
+ fn node_status(
+ remote: &str,
+ node: &str,
+ ty: RemoteType,
+ sockets: Option<i64>,
+ ) -> RemoteNodeStatus {
+ RemoteNodeStatus {
+ remote: remote.to_string(),
+ ty,
+ node: node.to_string(),
+ sockets,
+ status: SubscriptionStatus::NotFound,
+ level: SubscriptionLevel::None,
+ assigned_key: None,
+ current_key: None,
+ pending_clear: false,
+ }
+ }
+
+ #[test]
+ fn compute_proposals_picks_smallest_covering_pve_key() {
+ // Pool has a 1-socket, 2-socket, and 4-socket PVE key; the 2-socket target node should
+ // take the 2-socket key (smallest covering), not the 4-socket one.
+ let config = pool_config(vec![
+ pool_entry("pve1b-1111111111", None, None),
+ pool_entry("pve2b-2222222222", None, None),
+ pool_entry("pve4b-4444444444", None, None),
+ ]);
+ let statuses = vec![node_status("remote1", "node1", RemoteType::Pve, Some(2))];
+ let proposals = compute_proposals(&config, &statuses);
+ assert_eq!(proposals.len(), 1);
+ assert_eq!(proposals[0].key, "pve2b-2222222222");
+ assert_eq!(proposals[0].remote, "remote1");
+ assert_eq!(proposals[0].node, "node1");
+ }
+
+ #[test]
+ fn compute_proposals_skips_active_nodes() {
+ let config = pool_config(vec![pool_entry("pve2b-2222222222", None, None)]);
+ let mut active = node_status("remote1", "node1", RemoteType::Pve, Some(2));
+ active.status = SubscriptionStatus::Active;
+ let proposals = compute_proposals(&config, &[active]);
+ assert!(proposals.is_empty());
+ }
+
+ #[test]
+ fn compute_proposals_no_double_assignment() {
+ // Two nodes compete for one pool key; only one should be assigned.
+ let config = pool_config(vec![pool_entry("pve2b-2222222222", None, None)]);
+ let statuses = vec![
+ node_status("remote1", "node1", RemoteType::Pve, Some(2)),
+ node_status("remote1", "node2", RemoteType::Pve, Some(2)),
+ ];
+ let proposals = compute_proposals(&config, &statuses);
+ assert_eq!(proposals.len(), 1);
+ }
+
+ #[test]
+ fn compute_proposals_pbs_picks_first_candidate() {
+ // PBS keys have no socket count; the first matching candidate wins.
+ let config = pool_config(vec![pool_entry("pbsc-1111111111", None, None)]);
+ let statuses = vec![node_status("remote1", "node1", RemoteType::Pbs, None)];
+ let proposals = compute_proposals(&config, &statuses);
+ assert_eq!(proposals.len(), 1);
+ assert_eq!(proposals[0].key, "pbsc-1111111111");
+ }
+
+ #[test]
+ fn hash_node_status_stable_across_input_order() {
+ let a = node_status("r1", "n1", RemoteType::Pve, Some(2));
+ let b = node_status("r2", "n2", RemoteType::Pve, Some(4));
+ let h1 = hash_node_status(&[a.clone(), b.clone()]);
+ let h2 = hash_node_status(&[b, a]);
+ assert_eq!(h1, h2, "hash must be order-independent");
+ }
+
+ #[test]
+ fn hash_node_status_changes_with_status() {
+ let mut a = node_status("r1", "n1", RemoteType::Pve, Some(2));
+ let before = hash_node_status(&[a.clone()]);
+ a.status = SubscriptionStatus::Active;
+ let after = hash_node_status(&[a]);
+ assert_ne!(before, after, "hash must reflect status changes");
+ }
+
+ #[test]
+ fn hash_node_status_changes_with_assigned_key_presence() {
+ let mut a = node_status("r1", "n1", RemoteType::Pve, Some(2));
+ let before = hash_node_status(&[a.clone()]);
+ a.assigned_key = Some("pve2b-1234567890".to_string());
+ let after = hash_node_status(&[a]);
+ assert_ne!(
+ before, after,
+ "hash must reflect assigned_key presence (gates the auto-assign apply window)"
+ );
+ }
+}
diff --git a/server/src/context.rs b/server/src/context.rs
index c5da0afd..a4afcddd 100644
--- a/server/src/context.rs
+++ b/server/src/context.rs
@@ -15,6 +15,13 @@ fn default_remote_setup() {
/// Dependency-inject concrete implementations needed at runtime.
pub fn init() -> Result<(), Error> {
+ // The subscription key pool is product-only state (PDM stores its own pool of
+ // keys regardless of whether the remote config is real or faked), so initialise
+ // it on both paths.
+ pdm_config::subscriptions::init(Box::new(
+ pdm_config::subscriptions::DefaultSubscriptionKeyConfig,
+ ));
+
#[cfg(remote_config = "faked")]
{
use anyhow::bail;
diff --git a/server/src/pbs_client.rs b/server/src/pbs_client.rs
index c3025091..d494b04d 100644
--- a/server/src/pbs_client.rs
+++ b/server/src/pbs_client.rs
@@ -338,6 +338,37 @@ impl PbsClient {
.data)
}
+ /// Write a new subscription key on the PBS node and trigger a fresh shop-side check.
+ pub async fn set_subscription(
+ &self,
+ params: proxmox_subscription::SetSubscription,
+ ) -> Result<(), Error> {
+ self.0
+ .put("/api2/extjs/nodes/localhost/subscription", &params)
+ .await?;
+ Ok(())
+ }
+
+ /// Tear down the subscription on the PBS node.
+ pub async fn delete_subscription(&self) -> Result<(), Error> {
+ self.0
+ .delete("/api2/extjs/nodes/localhost/subscription")
+ .await?;
+ Ok(())
+ }
+
+ /// Trigger a fresh shop-side check of the stored subscription on the PBS node. With
+ /// `force=true` the request bypasses PBS's on-disk cache and always hits the shop.
+ pub async fn check_subscription(
+ &self,
+ params: proxmox_subscription::UpdateSubscription,
+ ) -> Result<(), Error> {
+ self.0
+ .post("/api2/extjs/nodes/localhost/subscription", &params)
+ .await?;
+ Ok(())
+ }
+
/// Return a list of available system updates.
pub async fn list_available_updates(&self) -> Result<Vec<pbs_api_types::APTUpdateInfo>, Error> {
Ok(self
--
2.47.3
* [PATCH datacenter-manager v3 05/12] ui: registry: add view with key pool and node status
2026-05-15 7:43 [PATCH datacenter-manager v3 00/12] subscription key pool registry Thomas Lamprecht
` (3 preceding siblings ...)
2026-05-15 7:43 ` [PATCH datacenter-manager v3 04/12] subscription: api: add key pool and node status endpoints Thomas Lamprecht
@ 2026-05-15 7:43 ` Thomas Lamprecht
2026-05-15 7:43 ` [PATCH datacenter-manager v3 06/12] cli: client: add subscription key pool management subcommands Thomas Lamprecht
` (6 subsequent siblings)
11 siblings, 0 replies; 16+ messages in thread
From: Thomas Lamprecht @ 2026-05-15 7:43 UTC (permalink / raw)
To: pdm-devel
Add a top-level Subscription Registry view with a Key Pool panel
next to a Node Status tree.
The Add dialog takes a textarea so an operator can paste several
keys at once. The Assign dialog filters the remote selector by the
key's compatible product type; PMG and POM keys leave Assign
disabled since PDM cannot push them to a remote yet.
Pending assignments show in the Node Status panel with a clock
icon; the toolbar carries a pending-count badge driven by the same
predicate the server uses for compute_pending. Selecting a node
exposes a Revert action that drops the entry's pending change.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
---
Changes v2 -> v3:
* ESC dismisses every ConfirmDialog on the registry view; the v2 view
got stuck in the dialog state on ESC.
* Pool grid columns are sortable.
* New hidden-by-default Source column for the Adopted entries
introduced in v3-0009.
* Pending counts use the same predicate the server uses for
compute_pending, so client and server stay in sync (a sketch of the
shared predicate follows the change notes below).
* Invalid keys land with a clear error instead of staying queued
with a misleading pending badge.
* "Clear Pending" button renamed to "Discard Pending"; the action
also cancels queued clears once the Clear Key flow lands in v3-0008.
* "Clear Assignment" action on the Node Status panel renamed to
"Revert"; its sibling "Remove" button there is dropped (Remove
Key on the pool grid covers it).
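For illustration, the shared pending predicate over the rows returned by the
node-status endpoint boils down to the following (simplified sketch; field
names as in the RemoteNodeStatus API type, the view's store plumbing omitted):

    // a row is pending when a pool key is assigned to its node but the live key differs
    fn row_is_pending(row: &RemoteNodeStatus) -> bool {
        match row.assigned_key.as_deref() {
            Some(assigned) => row.current_key.as_deref() != Some(assigned),
            None => false,
        }
    }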
ui/Cargo.toml | 2 +-
ui/src/configuration/mod.rs | 3 +
ui/src/configuration/subscription_assign.rs | 332 ++++++
ui/src/configuration/subscription_keys.rs | 546 +++++++++
ui/src/configuration/subscription_registry.rs | 1019 +++++++++++++++++
ui/src/main_menu.rs | 10 +
ui/src/widget/pve_node_selector.rs | 41 +-
7 files changed, 1950 insertions(+), 3 deletions(-)
create mode 100644 ui/src/configuration/subscription_assign.rs
create mode 100644 ui/src/configuration/subscription_keys.rs
create mode 100644 ui/src/configuration/subscription_registry.rs
diff --git a/ui/Cargo.toml b/ui/Cargo.toml
index 4e1f772f..3d578022 100644
--- a/ui/Cargo.toml
+++ b/ui/Cargo.toml
@@ -30,7 +30,7 @@ yew-router = { version = "0.18" }
pwt = "0.8.0"
pwt-macros = "0.5"
-proxmox-yew-comp = { version = "0.8.7", features = ["apt", "dns", "network", "rrd"] }
+proxmox-yew-comp = { version = "0.8.8", features = ["apt", "dns", "network", "rrd"] }
proxmox-access-control = { version = "1.1", features = []}
proxmox-acme-api = "1"
diff --git a/ui/src/configuration/mod.rs b/ui/src/configuration/mod.rs
index 6ffb64be..b3eff105 100644
--- a/ui/src/configuration/mod.rs
+++ b/ui/src/configuration/mod.rs
@@ -13,7 +13,10 @@ mod permission_path_selector;
mod webauthn;
pub use webauthn::WebauthnPanel;
+pub mod subscription_assign;
+pub mod subscription_keys;
pub mod subscription_panel;
+pub mod subscription_registry;
pub mod views;
diff --git a/ui/src/configuration/subscription_assign.rs b/ui/src/configuration/subscription_assign.rs
new file mode 100644
index 00000000..16936b7f
--- /dev/null
+++ b/ui/src/configuration/subscription_assign.rs
@@ -0,0 +1,332 @@
+//! Node-first Assign Key dialog opened from the Subscription Registry's node tree panel.
+
+use std::rc::Rc;
+
+use anyhow::Error;
+use serde_json::json;
+
+use yew::html::IntoEventCallback;
+use yew::virtual_dom::{Key, VComp, VNode};
+
+use pwt::css::FlexFit;
+use pwt::prelude::*;
+use pwt::props::{ContainerBuilder, WidgetBuilder};
+use pwt::state::{Selection, Store};
+use pwt::widget::data_table::{DataTable, DataTableColumn, DataTableHeader};
+use pwt::widget::{Button, Column, Container, Dialog, Row};
+
+use proxmox_yew_comp::http_post;
+use proxmox_yew_comp::percent_encoding::percent_encode_component;
+
+use pdm_api_types::remotes::RemoteType;
+use pdm_api_types::subscription::{
+ pick_best_pve_socket_key, socket_count_from_key, SubscriptionKeyEntry,
+};
+
+const KEYS_URL: &str = "/subscriptions/keys";
+
+/// Filter the pool to keys that can land on a `remote_type` node and are not yet bound.
+fn candidates_for(
+ pool_keys: &[SubscriptionKeyEntry],
+ remote_type: RemoteType,
+) -> Vec<SubscriptionKeyEntry> {
+ let mut out: Vec<SubscriptionKeyEntry> = pool_keys
+ .iter()
+ .filter(|e| e.remote.is_none() && e.product_type.matches_remote_type(remote_type))
+ .cloned()
+ .collect();
+    // PVE: sort by socket count ascending so the default selection is the cheapest fit that
+    // still works. PBS keys have no socket count, so fall back to the key string.
+ out.sort_by(|a, b| {
+ let sa = socket_count_from_key(&a.key);
+ let sb = socket_count_from_key(&b.key);
+ sa.cmp(&sb).then_with(|| a.key.cmp(&b.key))
+ });
+ out
+}
+
+/// Pick a sensible default key for the dialog. For PVE, the smallest covering socket-count;
+/// for PBS, the first candidate.
+fn default_candidate(
+ candidates: &[SubscriptionKeyEntry],
+ remote_type: RemoteType,
+ node_sockets: Option<i64>,
+) -> Option<String> {
+ if candidates.is_empty() {
+ return None;
+ }
+ if remote_type == RemoteType::Pve {
+ let needed = node_sockets.unwrap_or(1).max(1) as u32;
+ if let Some(picked) = pick_best_pve_socket_key(
+ needed,
+ candidates.iter().map(|e| (e.key.clone(), e.key.as_str())),
+ ) {
+ return Some(picked);
+ }
+ }
+ candidates.first().map(|e| e.key.clone())
+}
+
+fn key_columns() -> Rc<Vec<DataTableHeader<SubscriptionKeyEntry>>> {
+ Rc::new(vec![
+ DataTableColumn::new(tr!("Key"))
+ .flex(2)
+ .get_property(|e: &SubscriptionKeyEntry| e.key.as_str())
+ .into(),
+ DataTableColumn::new(tr!("Product"))
+ .width("80px")
+ .render(|e: &SubscriptionKeyEntry| e.product_type.to_string().into())
+ .into(),
+ DataTableColumn::new(tr!("Level"))
+ .width("90px")
+ .render(|e: &SubscriptionKeyEntry| e.level.to_string().into())
+ .into(),
+ DataTableColumn::new(tr!("Sockets"))
+ .width("70px")
+ .render(|e: &SubscriptionKeyEntry| {
+ socket_count_from_key(&e.key)
+ .map(|s| s.to_string())
+ .unwrap_or_default()
+ .into()
+ })
+ .into(),
+ ])
+}
+
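+/// POST the key's remote/node binding; the server records it as a pending assignment that
+/// Apply Pending later pushes to the remote.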
+async fn submit_assignment(
+ key: &str,
+ remote: &str,
+ node: &str,
+ digest: Option<&str>,
+) -> Result<(), Error> {
+    let url = format!(
+        "{KEYS_URL}/{}/assignment",
+        percent_encode_component(key),
+    );
+ let mut body = json!({ "remote": remote, "node": node });
+ if let Some(d) = digest {
+ body["digest"] = d.into();
+ }
+ http_post::<()>(&url, Some(body)).await
+}
+
+/// Simple "Assign Key to <remote>/<node>" dialog.
+#[derive(Properties, Clone, PartialEq)]
+pub struct AssignKeyToNodeDialog {
+ pub remote: String,
+ pub node: String,
+ pub ty: RemoteType,
+ pub node_sockets: Option<i64>,
+ pub pool_keys: Rc<Vec<SubscriptionKeyEntry>>,
+
+ #[prop_or_default]
+ pub pool_digest: Option<String>,
+
+ #[prop_or_default]
+ pub on_done: Option<Callback<()>>,
+}
+
+impl AssignKeyToNodeDialog {
+ pub fn new(
+ remote: impl Into<String>,
+ node: impl Into<String>,
+ ty: RemoteType,
+ node_sockets: Option<i64>,
+ pool_keys: Rc<Vec<SubscriptionKeyEntry>>,
+ ) -> Self {
+ Self {
+ remote: remote.into(),
+ node: node.into(),
+ ty,
+ node_sockets,
+ pool_keys,
+ pool_digest: None,
+ on_done: None,
+ }
+ }
+
+ pub fn pool_digest(mut self, digest: Option<String>) -> Self {
+ self.pool_digest = digest;
+ self
+ }
+
+ pub fn on_done(mut self, cb: impl IntoEventCallback<()>) -> Self {
+ self.on_done = cb.into_event_callback();
+ self
+ }
+}
+
+impl From<AssignKeyToNodeDialog> for VNode {
+ fn from(val: AssignKeyToNodeDialog) -> Self {
+ VComp::new::<AssignKeyToNodeComp>(Rc::new(val), None).into()
+ }
+}
+
+pub enum AssignMsg {
+ SelectionChanged,
+ Submit,
+ SubmitDone(Result<(), Error>),
+}
+
+pub struct AssignKeyToNodeComp {
+ store: Store<SubscriptionKeyEntry>,
+ columns: Rc<Vec<DataTableHeader<SubscriptionKeyEntry>>>,
+ selection: Selection,
+ last_error: Option<String>,
+ submitting: bool,
+}
+
+impl yew::Component for AssignKeyToNodeComp {
+ type Message = AssignMsg;
+ type Properties = AssignKeyToNodeDialog;
+
+ fn create(ctx: &yew::Context<Self>) -> Self {
+ let props = ctx.props();
+ let candidates = candidates_for(&props.pool_keys, props.ty);
+ let default = default_candidate(&candidates, props.ty, props.node_sockets);
+
+ let store = Store::with_extract_key(|e: &SubscriptionKeyEntry| Key::from(e.key.as_str()));
+ store.set_data(candidates);
+
+ let selection = Selection::new().on_select({
+ let link = ctx.link().clone();
+ move |_| link.send_message(AssignMsg::SelectionChanged)
+ });
+ if let Some(key) = default {
+ selection.select(Key::from(key));
+ }
+
+ Self {
+ store,
+ columns: key_columns(),
+ selection,
+ last_error: None,
+ submitting: false,
+ }
+ }
+
+ fn update(&mut self, ctx: &yew::Context<Self>, msg: Self::Message) -> bool {
+ match msg {
+ AssignMsg::SelectionChanged => true,
+ AssignMsg::Submit => {
+ let Some(picked) = self.selection.selected_key() else {
+ self.last_error = Some(tr!("Select a key first."));
+ return true;
+ };
+ let key = picked.to_string();
+ let remote = ctx.props().remote.clone();
+ let node = ctx.props().node.clone();
+ let digest = ctx.props().pool_digest.clone();
+ self.submitting = true;
+ self.last_error = None;
+ ctx.link().send_future(async move {
+ let res = submit_assignment(&key, &remote, &node, digest.as_deref()).await;
+ AssignMsg::SubmitDone(res)
+ });
+ true
+ }
+ AssignMsg::SubmitDone(Ok(())) => {
+ self.submitting = false;
+ if let Some(cb) = &ctx.props().on_done {
+ cb.emit(());
+ }
+ false
+ }
+ AssignMsg::SubmitDone(Err(err)) => {
+ self.submitting = false;
+ self.last_error = Some(err.to_string());
+ true
+ }
+ }
+ }
+
+ fn view(&self, ctx: &yew::Context<Self>) -> Html {
+ let props = ctx.props();
+ let no_candidates = self.store.read().len() == 0;
+
+ // The dialog title already carries `{remote}/{node}`; render only the sockets line here
+ // so the body adds context the title cannot fit. Without sockets there is nothing to add.
+ let header: Option<Html> = props.node_sockets.map(|s| {
+ Row::new()
+ .gap(2)
+ .with_child(Container::new().with_child(tr!("Node sockets:")))
+ .with_child(Container::new().with_child(s.to_string()))
+ .into()
+ });
+
+ let body_keys: Html = if no_candidates {
+ Container::new()
+ .padding(2)
+ .with_child(tr!(
+ "No matching free keys in the pool. Add one via the Key Pool panel first."
+ ))
+ .into()
+ } else {
+ DataTable::new(self.columns.clone(), self.store.clone())
+ .selection(self.selection.clone())
+ .striped(true)
+ .min_height(140)
+ .class(FlexFit)
+ .into()
+ };
+
+ let mut footer = Row::new()
+ .padding_top(2)
+ .gap(2)
+ .class(pwt::css::JustifyContent::FlexEnd)
+ .with_flex_spacer()
+ .with_child(Button::new(tr!("Cancel")).on_activate({
+ let cb = props.on_done.clone();
+ move |_| {
+ if let Some(cb) = &cb {
+ cb.emit(());
+ }
+ }
+ }))
+ .with_child(
+ Button::new(tr!("Assign"))
+ .disabled(no_candidates || self.submitting)
+ .on_activate(ctx.link().callback(|_| AssignMsg::Submit)),
+ );
+
+ if let Some(err) = &self.last_error {
+ footer = footer.with_child(
+ Container::new()
+ .padding_x(2)
+ .class(pwt::css::FontColor::Error)
+ .with_child(err.clone()),
+ );
+ }
+
+ let mut body = Column::new()
+ .padding(2)
+ .gap(2)
+ .min_width(640)
+ .min_height(0);
+ if let Some(h) = header {
+ body = body.with_child(h);
+ }
+ let body = body.with_child(body_keys).with_child(footer);
+
+ Dialog::new(tr!(
+ "Assign Key to {remote}/{node}",
+ remote = props.remote.clone(),
+ node = props.node.clone()
+ ))
+ .resizable(true)
+ .min_width(500)
+ .min_height(300)
+ .max_height("80vh")
+ .on_close({
+ let cb = props.on_done.clone();
+ move |_| {
+ if let Some(cb) = &cb {
+ cb.emit(());
+ }
+ }
+ })
+ .with_child(body)
+ .into()
+ }
+}
+
diff --git a/ui/src/configuration/subscription_keys.rs b/ui/src/configuration/subscription_keys.rs
new file mode 100644
index 00000000..e43543ae
--- /dev/null
+++ b/ui/src/configuration/subscription_keys.rs
@@ -0,0 +1,546 @@
+use std::future::Future;
+use std::pin::Pin;
+use std::rc::Rc;
+
+use anyhow::Error;
+
+use pdm_api_types::remotes::RemoteType;
+use pdm_api_types::subscription::{ProductType, RemoteNodeStatus, SubscriptionKeyEntry};
+use yew::virtual_dom::{Key, VComp, VNode};
+
+use proxmox_yew_comp::percent_encoding::percent_encode_component;
+use proxmox_yew_comp::{http_delete, http_post, EditWindow};
+use proxmox_yew_comp::{
+ LoadableComponent, LoadableComponentContext, LoadableComponentMaster,
+ LoadableComponentScopeExt, LoadableComponentState,
+};
+
+use pwt::css::FontStyle;
+use pwt::prelude::*;
+use pwt::state::{Selection, Store};
+use pwt::widget::data_table::{DataTable, DataTableColumn, DataTableHeader};
+use pwt::widget::form::{DisplayField, FormContext, TextArea};
+use pwt::widget::{Button, ConfirmDialog, Container, InputPanel, Toolbar, Tooltip};
+
+use crate::widget::{PveNodeSelector, RemoteSelector};
+
+const BASE_URL: &str = "/subscriptions/keys";
+
+#[derive(Properties, PartialEq, Clone)]
+pub struct SubscriptionKeyGrid {
+ /// Pool keys, owned by the parent registry so both panels see the same snapshot.
+ #[prop_or_default]
+ pub pool_keys: Rc<Vec<SubscriptionKeyEntry>>,
+
+ /// Pool-config digest captured by the parent registry on its last `/subscriptions/keys`
+ /// fetch. Passed through to every mutation so the server can reject (409) a call made
+ /// against a stale view rather than silently overwriting a parallel admin's edits.
+ #[prop_or_default]
+ pub pool_digest: Option<String>,
+
+ /// Called after every successful pool mutation (add, assign, clear, remove). Lets the parent
+ /// view (the Subscription Registry) reload its own data so the Node Status side stays in
+ /// sync with the Key Pool side.
+ #[prop_or_default]
+ pub on_change: Option<Callback<()>>,
+
+    /// Latest live node-status snapshot from the parent view. Used to disable the Remove Key
+    /// button when the selected entry's binding is currently synced (the assigned key is the
+    /// live active key on its remote), since removing it then would orphan the live subscription.
+    /// The server enforces the same gate; this prop just turns it into a UI affordance.
+ #[prop_or_default]
+ pub node_status: Rc<Vec<RemoteNodeStatus>>,
+}
+
+impl SubscriptionKeyGrid {
+ pub fn new() -> Self {
+ yew::props!(Self {})
+ }
+
+ pub fn on_change(mut self, cb: impl Into<Option<Callback<()>>>) -> Self {
+ self.on_change = cb.into();
+ self
+ }
+
+ pub fn node_status(mut self, statuses: Rc<Vec<RemoteNodeStatus>>) -> Self {
+ self.node_status = statuses;
+ self
+ }
+
+ pub fn pool_keys(mut self, keys: Rc<Vec<SubscriptionKeyEntry>>) -> Self {
+ self.pool_keys = keys;
+ self
+ }
+
+ pub fn pool_digest(mut self, digest: Option<String>) -> Self {
+ self.pool_digest = digest;
+ self
+ }
+}
+
+impl Default for SubscriptionKeyGrid {
+ fn default() -> Self {
+ Self::new()
+ }
+}
+
+impl From<SubscriptionKeyGrid> for VNode {
+ fn from(val: SubscriptionKeyGrid) -> Self {
+ VComp::new::<LoadableComponentMaster<SubscriptionKeyGridComp>>(Rc::new(val), None).into()
+ }
+}
+
+pub enum Msg {
+ Remove(Key),
+ Reload,
+}
+
+#[derive(PartialEq)]
+pub enum ViewState {
+ Add,
+ Assign,
+ Remove,
+}
+
+#[doc(hidden)]
+pub struct SubscriptionKeyGridComp {
+ state: LoadableComponentState<ViewState>,
+ store: Store<SubscriptionKeyEntry>,
+ columns: Rc<Vec<DataTableHeader<SubscriptionKeyEntry>>>,
+ selection: Selection,
+}
+
+pwt::impl_deref_mut_property!(
+ SubscriptionKeyGridComp,
+ state,
+ LoadableComponentState<ViewState>
+);
+
+impl SubscriptionKeyGridComp {
+ fn columns() -> Rc<Vec<DataTableHeader<SubscriptionKeyEntry>>> {
+ Rc::new(vec![
+ DataTableColumn::new(tr!("Key"))
+ .flex(2)
+ .get_property(|entry: &SubscriptionKeyEntry| entry.key.as_str())
+ .sort_order(true)
+ .into(),
+ DataTableColumn::new(tr!("Product"))
+ .width("80px")
+ .sorter(|a: &SubscriptionKeyEntry, b: &SubscriptionKeyEntry| {
+ a.product_type
+ .to_string()
+ .cmp(&b.product_type.to_string())
+ })
+ .render(|entry: &SubscriptionKeyEntry| entry.product_type.to_string().into())
+ .into(),
+ DataTableColumn::new(tr!("Level"))
+ .width("90px")
+ .sorter(|a: &SubscriptionKeyEntry, b: &SubscriptionKeyEntry| a.level.cmp(&b.level))
+ .render(|entry: &SubscriptionKeyEntry| entry.level.to_string().into())
+ .into(),
+ DataTableColumn::new(tr!("Assignment"))
+ .flex(2)
+ .sorter(|a: &SubscriptionKeyEntry, b: &SubscriptionKeyEntry| {
+ (&a.remote, &a.node).cmp(&(&b.remote, &b.node))
+ })
+ .render(
+ |entry: &SubscriptionKeyEntry| match (&entry.remote, &entry.node) {
+ (Some(remote), Some(node)) => format!("{remote} / {node}").into(),
+ _ => Html::default(),
+ },
+ )
+ .into(),
+ ])
+ }
+
+ fn selected_entry(&self) -> Option<SubscriptionKeyEntry> {
+ let key = self.selection.selected_key()?;
+ self.store.read().lookup_record(&key).cloned()
+ }
+
+ fn create_add_dialog(&self, ctx: &LoadableComponentContext<Self>) -> Html {
+ let digest = ctx.props().pool_digest.clone();
+ EditWindow::new(tr!("Add Subscription Keys"))
+ .renderer(|_form_ctx| add_input_panel())
+ .on_submit(move |form| submit_add_keys(form, digest.clone()))
+ .on_done(ctx.link().clone().callback(|_| Msg::Reload))
+ .into()
+ }
+
+ fn create_assign_dialog(
+ &self,
+ entry: &SubscriptionKeyEntry,
+ ctx: &LoadableComponentContext<Self>,
+ ) -> Html {
+ let key = entry.key.clone();
+ let product_type = entry.product_type;
+ let node_status = ctx.props().node_status.clone();
+ let digest = ctx.props().pool_digest.clone();
+ EditWindow::new(tr!("Assign Key to Remote"))
+ .renderer({
+ let key = key.clone();
+ move |form_ctx| assign_input_panel(&key, product_type, form_ctx, &node_status)
+ })
+ .on_submit({
+ let key = key.clone();
+ move |form| submit_assign(key.clone(), form, digest.clone())
+ })
+ .on_done(ctx.link().clone().callback(|_| Msg::Reload))
+ .into()
+ }
+}
+
+impl LoadableComponent for SubscriptionKeyGridComp {
+ type Properties = SubscriptionKeyGrid;
+ type Message = Msg;
+ type ViewState = ViewState;
+
+ fn create(ctx: &LoadableComponentContext<Self>) -> Self {
+ let selection = Selection::new().on_select({
+ let link = ctx.link().clone();
+ move |_| link.send_redraw()
+ });
+ let store = Store::with_extract_key(|entry: &SubscriptionKeyEntry| {
+ entry.key.as_str().into()
+ });
+ store.set_data((*ctx.props().pool_keys).clone());
+ Self {
+ state: LoadableComponentState::new(),
+ store,
+ columns: Self::columns(),
+ selection,
+ }
+ }
+
+ fn update(&mut self, ctx: &LoadableComponentContext<Self>, msg: Self::Message) -> bool {
+ match msg {
+ Msg::Remove(key) => {
+ let id = key.to_string();
+ let link = ctx.link().clone();
+ let digest = ctx.props().pool_digest.clone();
+ ctx.link().spawn(async move {
+ let url = format!("{BASE_URL}/{}", percent_encode_component(&id));
+ let query = digest.map(|d| serde_json::json!({ "digest": d }));
+ if let Err(err) = http_delete(&url, query).await {
+ link.show_error(
+ tr!("Error"),
+ tr!("Could not remove {id}: {err}", id = id, err = err),
+ true,
+ );
+ }
+ link.send_message(Msg::Reload);
+ });
+ }
+ Msg::Reload => {
+ ctx.link().change_view(None);
+ if let Some(cb) = &ctx.props().on_change {
+ cb.emit(());
+ }
+ }
+ }
+ true
+ }
+
+ fn toolbar(&self, ctx: &LoadableComponentContext<Self>) -> Option<Html> {
+ let entry = self.selected_entry();
+ let has_selection = entry.is_some();
+ let is_assigned = entry.as_ref().map(|e| e.remote.is_some()).unwrap_or(false);
+ let synced_assignment = entry
+ .as_ref()
+ .map(|e| is_synced_assignment(e, &ctx.props().node_status))
+ .unwrap_or(false);
+ let assignable = entry
+ .as_ref()
+ .map(|e| {
+ e.product_type.matches_remote_type(RemoteType::Pve)
+ || e.product_type.matches_remote_type(RemoteType::Pbs)
+ })
+ .unwrap_or(false);
+ let link = ctx.link();
+
+ Some(
+ Toolbar::new()
+ .border_bottom(true)
+ .with_child(
+ Tooltip::new(
+ Button::new(tr!("Add"))
+ .icon_class("fa fa-plus")
+ .on_activate(link.change_view_callback(|_| Some(ViewState::Add))),
+ )
+ .tip(tr!(
+ "Add one or more subscription keys to the pool; the Assign step \
+ happens later."
+ )),
+ )
+ .with_spacer()
+ .with_child(
+ Tooltip::new(
+ Button::new(tr!("Remove Key"))
+ .icon_class("fa fa-trash-o")
+ .disabled(!has_selection || synced_assignment)
+ .on_activate(link.change_view_callback(|_| Some(ViewState::Remove))),
+ )
+ .tip(tr!(
+ "Remove the selected key from the pool. Disabled while the key is \
+ live on a remote node."
+ )),
+ )
+ .with_spacer()
+ .with_child(
+ Tooltip::new(
+ Button::new(tr!("Assign"))
+ .icon_class("fa fa-link")
+ .disabled(!has_selection || is_assigned || !assignable)
+ .on_activate(link.change_view_callback(|_| Some(ViewState::Assign))),
+ )
+ .tip(tr!(
+ "Pin the selected key to a remote node; Apply Pending pushes the \
+ assignment to the remote."
+ )),
+ )
+ .into(),
+ )
+ }
+
+ fn changed(
+ &mut self,
+ ctx: &LoadableComponentContext<Self>,
+ old_props: &Self::Properties,
+ ) -> bool {
+ if !Rc::ptr_eq(&old_props.pool_keys, &ctx.props().pool_keys) {
+ self.store.set_data((*ctx.props().pool_keys).clone());
+ }
+ true
+ }
+
+ fn load(
+ &self,
+ _ctx: &LoadableComponentContext<Self>,
+ ) -> Pin<Box<dyn Future<Output = Result<(), Error>>>> {
+ // Pool data flows in via the `pool_keys` prop owned by the parent registry; the grid
+ // does not fetch on its own. Resolve immediately so the LoadableComponent harness does
+ // not show its mask.
+ Box::pin(async { Ok(()) })
+ }
+
+ fn main_view(&self, _ctx: &LoadableComponentContext<Self>) -> Html {
+ DataTable::new(self.columns.clone(), self.store.clone())
+ .selection(self.selection.clone())
+ .into()
+ }
+
+ fn dialog_view(
+ &self,
+ ctx: &LoadableComponentContext<Self>,
+ view_state: &Self::ViewState,
+ ) -> Option<Html> {
+ match view_state {
+ ViewState::Add => Some(self.create_add_dialog(ctx)),
+ ViewState::Assign => self
+ .selected_entry()
+ .map(|entry| self.create_assign_dialog(&entry, ctx)),
+ ViewState::Remove => self.selection.selected_key().map(|key| {
+ let assignment = self.selected_entry().and_then(|e| {
+ Some((e.remote.clone()?, e.node.clone()?))
+ });
+ let body = match assignment {
+ Some((remote, node)) => tr!(
+ "Remove {key} from the key pool? It is still assigned to {remote}/{node}; the assignment is released without removing the subscription on the remote.",
+ key = key.to_string(),
+ remote = remote,
+ node = node,
+ ),
+ None => tr!(
+ "Remove {key} from the key pool? This does not revoke the subscription.",
+ key = key.to_string(),
+ ),
+ };
+ ConfirmDialog::new(tr!("Remove Key"), body)
+ .on_confirm({
+ let link = ctx.link().clone();
+ let key = key.clone();
+ move |_| link.send_message(Msg::Remove(key.clone()))
+ })
+ .on_close({
+ let link = ctx.link().clone();
+ move |_| link.change_view(None)
+ })
+ .into()
+ }),
+ }
+ }
+}
+
+/// Returns true when the pool entry's binding currently runs the same key on the remote and is
+/// Active - meaning a clear-assignment would orphan the live subscription. Mirrors the
+/// server-side gate; the operator must release the live subscription on the remote first.
+fn is_synced_assignment(entry: &SubscriptionKeyEntry, statuses: &[RemoteNodeStatus]) -> bool {
+ let (Some(remote), Some(node)) = (entry.remote.as_deref(), entry.node.as_deref()) else {
+ return false;
+ };
+ statuses
+ .iter()
+ .find(|n| n.remote == remote && n.node == node)
+ .map(|n| {
+ n.status == proxmox_subscription::SubscriptionStatus::Active
+ && n.current_key.as_deref() == Some(entry.key.as_str())
+ })
+ .unwrap_or(false)
+}
+
+fn add_input_panel() -> Html {
+ let hint = Container::new()
+ .class(FontStyle::TitleSmall)
+ .class(pwt::css::Opacity::Quarter)
+ .padding_top(2)
+ .with_child(tr!(
+ "One key per line, or comma-separated. Only Proxmox VE and Proxmox Backup Server keys are accepted."
+ ));
+
+ // The textarea opts into `width: 100%` so it fills the InputPanel's grid cell instead of
+ // shrinking to browser-default cols.
+ InputPanel::new()
+ .padding(4)
+ .min_width(500)
+ .with_large_custom_child(
+ TextArea::new()
+ .name("keys")
+ .submit_empty(false)
+ .required(true)
+ .attribute("rows", "8")
+ .attribute("placeholder", tr!("Subscription key(s)"))
+ .style("width", "100%")
+ .style("box-sizing", "border-box"),
+ )
+ .with_large_custom_child(hint)
+ .into()
+}
+
+async fn submit_add_keys(form_ctx: FormContext, digest: Option<String>) -> Result<(), Error> {
+ let raw = form_ctx.read().get_field_text("keys");
+ let keys: Vec<String> = raw
+ .split(|c: char| c.is_whitespace() || c == ',')
+ .map(str::trim)
+ .filter(|s| !s.is_empty())
+ .map(str::to_string)
+ .collect();
+
+ if keys.is_empty() {
+ anyhow::bail!(tr!("no keys provided"));
+ }
+
+ let mut body = serde_json::json!({ "keys": keys });
+ if let Some(d) = digest {
+ body["digest"] = d.into();
+ }
+ http_post(BASE_URL, Some(body)).await
+}
+
+/// Map a subscription product type to the remote type its keys can drive.
+fn remote_type_for(product_type: ProductType) -> Option<RemoteType> {
+ if product_type.matches_remote_type(RemoteType::Pve) {
+ Some(RemoteType::Pve)
+ } else if product_type.matches_remote_type(RemoteType::Pbs) {
+ Some(RemoteType::Pbs)
+ } else {
+ None
+ }
+}
+
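+/// Build the Assign dialog form: a read-only key field, a remote selector filtered to the
+/// key's product type, and a node selector (PVE) or a fixed "localhost" entry (PBS).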
+fn assign_input_panel(
+ key: &str,
+ product_type: ProductType,
+ form_ctx: &FormContext,
+ node_status: &[RemoteNodeStatus],
+) -> Html {
+ let mut panel = InputPanel::new().padding(4).min_width(500).with_field(
+ tr!("Key"),
+ DisplayField::new()
+ .name("key")
+ .value(key.to_string())
+ .key("key-display"),
+ );
+
+ let Some(remote_type) = remote_type_for(product_type) else {
+ // Defensive: the toolbar disables Assign for these product types.
+ return panel
+ .with_large_custom_child(
+ Container::new()
+ .class(FontStyle::TitleSmall)
+ .class(pwt::css::Opacity::Quarter)
+ .with_child(tr!(
+ "PDM cannot manage {product} remotes yet; this key is parked in the pool.",
+ product = product_type.to_string(),
+ )),
+ )
+ .into();
+ };
+
+ panel = panel.with_field(
+ tr!("Remote"),
+ RemoteSelector::new()
+ .name("remote")
+ .remote_type(remote_type)
+ .required(true),
+ );
+
+ match remote_type {
+ RemoteType::Pve => {
+ let selected_remote = form_ctx.read().get_field_text("remote");
+ if selected_remote.is_empty() {
+ panel
+ .with_field(
+ tr!("Node"),
+ DisplayField::new()
+ .name("node")
+ .key("node-no-remote")
+ .value(AttrValue::from(tr!("Select a remote first."))),
+ )
+ .into()
+ } else {
+ let excluded: Vec<String> = node_status
+ .iter()
+ .filter(|n| n.remote == selected_remote && n.assigned_key.is_some())
+ .map(|n| n.node.clone())
+ .collect();
+ // `PveNodeSelector` fetches its node list in `create` and does not re-fetch on
+ // prop change, so a per-remote `key` forces a fresh component when the operator
+ // picks a target.
+ panel
+ .with_field(
+ tr!("Node"),
+ PveNodeSelector::new(selected_remote.clone())
+ .name("node")
+ .key(format!("node-selector-{selected_remote}"))
+ .excluded_nodes(Rc::new(excluded))
+ .required(true),
+ )
+ .into()
+ }
+ }
+ RemoteType::Pbs => panel
+ .with_field(
+ tr!("Node"),
+ DisplayField::new()
+ .name("node")
+ .value(AttrValue::from("localhost"))
+ .key("node-localhost"),
+ )
+ .into(),
+ }
+}
+
+async fn submit_assign(
+ key: String,
+ form_ctx: FormContext,
+ digest: Option<String>,
+) -> Result<(), Error> {
+ let mut data = form_ctx.get_submit_data();
+ if let Some(d) = digest {
+ if let Some(obj) = data.as_object_mut() {
+ obj.insert("digest".to_string(), d.into());
+ }
+ }
+ let url = format!("{BASE_URL}/{}/assignment", percent_encode_component(&key));
+ http_post(&url, Some(data)).await
+}
diff --git a/ui/src/configuration/subscription_registry.rs b/ui/src/configuration/subscription_registry.rs
new file mode 100644
index 00000000..513fa3a0
--- /dev/null
+++ b/ui/src/configuration/subscription_registry.rs
@@ -0,0 +1,1019 @@
+use std::future::Future;
+use std::pin::Pin;
+use std::rc::Rc;
+
+use anyhow::Error;
+
+use yew::virtual_dom::{Key, VComp, VNode};
+
+use proxmox_yew_comp::percent_encoding::percent_encode_component;
+use proxmox_yew_comp::{http_delete, http_get, http_get_full, http_post};
+use proxmox_yew_comp::{
+ LoadableComponent, LoadableComponentContext, LoadableComponentMaster,
+ LoadableComponentScopeExt, LoadableComponentState,
+};
+
+use pwt::css::{AlignItems, Flex, FlexDirection, FlexFit, FontColor, JustifyContent, Overflow};
+use pwt::prelude::*;
+use pwt::props::{ContainerBuilder, ExtractPrimaryKey, WidgetBuilder};
+use pwt::state::{Selection, SlabTree, Store, TreeStore};
+use pwt::widget::data_table::{DataTable, DataTableColumn, DataTableHeader};
+use pwt::widget::{Button, Column, Container, Fa, Panel, Row, Toolbar, Tooltip};
+
+use pdm_api_types::subscription::{
+ AutoAssignProposal, ProposedAssignment, RemoteNodeStatus, SubscriptionKeyEntry,
+ SubscriptionLevel,
+};
+
+use super::subscription_keys::SubscriptionKeyGrid;
+
+const NODE_STATUS_URL: &str = "/subscriptions/node-status";
+const KEYS_URL: &str = "/subscriptions/keys";
+const AUTO_ASSIGN_URL: &str = "/subscriptions/auto-assign";
+const BULK_ASSIGN_URL: &str = "/subscriptions/bulk-assign";
+const APPLY_PENDING_URL: &str = "/subscriptions/apply-pending";
+const CLEAR_PENDING_URL: &str = "/subscriptions/clear-pending";
+
+/// Map a [`SubscriptionStatus`] to the icon shown in subscription panels.
+///
+/// Public so the dashboard subscriptions panel can render the same icon for the same state
+/// without redefining the mapping. The 4-variant `proxmox_yew_comp::Status` does not cover
+/// every subscription state (New, Expired, Suspended need their own icons), hence the dedicated
+/// helper.
+pub fn subscription_status_icon(status: proxmox_subscription::SubscriptionStatus) -> Fa {
+ use proxmox_subscription::SubscriptionStatus as S;
+ match status {
+ S::Active => Fa::new("check-circle").class(FontColor::Success),
+ S::New => Fa::new("clock-o").class(FontColor::Primary),
+ S::NotFound => Fa::new("exclamation-circle").class(FontColor::Error),
+ S::Invalid => Fa::new("times-circle").class(FontColor::Warning),
+ S::Expired => Fa::new("clock-o").class(FontColor::Warning),
+ S::Suspended => Fa::new("ban").class(FontColor::Error),
+ }
+}
+
+fn subscription_status_label(status: proxmox_subscription::SubscriptionStatus) -> String {
+ use proxmox_subscription::SubscriptionStatus as S;
+ match status {
+ S::Active => tr!("Active"),
+ S::New => tr!("New"),
+ S::NotFound => tr!("No subscription"),
+ S::Invalid => tr!("Invalid"),
+ S::Expired => tr!("Expired"),
+ S::Suspended => tr!("Suspended"),
+ }
+}
+
+fn pending_badge(push_count: u32, clear_count: u32) -> Row {
+ let mut row = Row::new().class(AlignItems::Center).gap(3);
+ if push_count > 0 {
+ row = row.with_child(
+ Tooltip::new(
+ Row::new()
+ .class(AlignItems::Baseline)
+ .gap(1)
+ .with_child(Fa::new("clock-o").class(FontColor::Warning))
+ .with_child(tr!("{n} pending push(es)", n = push_count)),
+ )
+ .tip(tr!(
+ "{n} pool key(s) queued for push; Apply Pending will install them on the remote.",
+ n = push_count,
+ )),
+ );
+ }
+ if clear_count > 0 {
+ row = row.with_child(
+ Tooltip::new(
+ Row::new()
+ .class(AlignItems::Baseline)
+ .gap(1)
+ .with_child(Fa::new("recycle").class(FontColor::Warning))
+ .with_child(tr!("{n} pending clear(s)", n = clear_count)),
+ )
+ .tip(tr!(
+ "{n} live subscription(s) queued for removal; Apply Pending will free them.",
+ n = clear_count,
+ )),
+ );
+ }
+ row
+}
+
+#[derive(Clone, Debug, PartialEq)]
+enum NodeTreeEntry {
+ Root,
+ Remote {
+ name: String,
+ ty: pdm_api_types::remotes::RemoteType,
+ active: u32,
+ total: u32,
+ },
+ Node {
+ data: RemoteNodeStatus,
+ /// If true, this is the only node in its remote and is shown at the top level under the
+ /// remote name instead of nested.
+ standalone: bool,
+ },
+}
+
+impl NodeTreeEntry {
+ fn name(&self) -> &str {
+ match self {
+ Self::Root => "",
+ Self::Remote { name, .. } => name,
+ Self::Node { data, standalone } => {
+ if *standalone {
+ &data.remote
+ } else {
+ &data.node
+ }
+ }
+ }
+ }
+}
+
+impl ExtractPrimaryKey for NodeTreeEntry {
+ fn extract_key(&self) -> Key {
+ Key::from(match self {
+ NodeTreeEntry::Root => "/".to_string(),
+ NodeTreeEntry::Remote { name, .. } => format!("/{name}"),
+ NodeTreeEntry::Node { data, .. } => format!("/{}/{}", data.remote, data.node),
+ })
+ }
+}
+
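+/// Group node statuses by remote. A remote with a single node becomes one standalone
+/// top-level row; multi-node remotes get an expanded Remote row with one child per node.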
+fn build_tree(nodes: Vec<RemoteNodeStatus>) -> SlabTree<NodeTreeEntry> {
+ use std::collections::BTreeMap;
+
+ let mut by_remote: BTreeMap<String, Vec<RemoteNodeStatus>> = BTreeMap::new();
+ for n in nodes {
+ by_remote.entry(n.remote.clone()).or_default().push(n);
+ }
+
+ let mut tree = SlabTree::new();
+ let mut root = tree.set_root(NodeTreeEntry::Root);
+ root.set_expanded(true);
+
+ for (remote_name, remote_nodes) in &by_remote {
+ let total = remote_nodes.len() as u32;
+ let active = remote_nodes
+ .iter()
+ .filter(|n| n.status == proxmox_subscription::SubscriptionStatus::Active)
+ .count() as u32;
+
+ let ty = remote_nodes.first().map(|n| n.ty).unwrap_or_default();
+
+ if remote_nodes.len() == 1 {
+ root.append(NodeTreeEntry::Node {
+ data: remote_nodes[0].clone(),
+ standalone: true,
+ });
+ } else {
+ let mut remote_entry = root.append(NodeTreeEntry::Remote {
+ name: remote_name.clone(),
+ ty,
+ active,
+ total,
+ });
+ remote_entry.set_expanded(true);
+ for n in remote_nodes {
+ remote_entry.append(NodeTreeEntry::Node {
+ data: n.clone(),
+ standalone: false,
+ });
+ }
+ }
+ }
+
+ tree
+}
+
+#[derive(Properties, PartialEq, Clone)]
+pub struct SubscriptionRegistryProps {}
+
+impl SubscriptionRegistryProps {
+ pub fn new() -> Self {
+ yew::props!(Self {})
+ }
+}
+
+impl Default for SubscriptionRegistryProps {
+ fn default() -> Self {
+ Self::new()
+ }
+}
+
+impl From<SubscriptionRegistryProps> for VNode {
+ fn from(val: SubscriptionRegistryProps) -> Self {
+ VComp::new::<LoadableComponentMaster<SubscriptionRegistryComp>>(Rc::new(val), None).into()
+ }
+}
+
+pub enum Msg {
+ LoadFinished {
+ nodes: Vec<RemoteNodeStatus>,
+ keys: Vec<SubscriptionKeyEntry>,
+ digest: Option<String>,
+ },
+ AutoAssignPreview,
+ /// Commit a previously-fetched proposal via the bulk-assign endpoint.
+ BulkAssignApply(AutoAssignProposal),
+ ApplyPending,
+ ClearPending,
+ /// Revert the pending change on the currently-selected node: drop the unpushed pool
+ /// assignment without touching the remote.
+ RevertSelectedNode,
+ /// Open the Assign Key dialog for the currently-selected node.
+ AssignKeyToSelectedNode,
+}
+
+#[derive(PartialEq)]
+pub enum ViewState {
+ ConfirmAutoAssign(AutoAssignProposal),
+ ConfirmApplyPending,
+ ConfirmClearPending,
+ /// Assign a pool key to the given node. Opens from the right panel's Assign Key button.
+ AssignKeyToNode {
+ remote: String,
+ node: String,
+ ty: pdm_api_types::remotes::RemoteType,
+ node_sockets: Option<i64>,
+ },
+}
+
+#[doc(hidden)]
+pub struct SubscriptionRegistryComp {
+ state: LoadableComponentState<ViewState>,
+ tree_store: TreeStore<NodeTreeEntry>,
+ tree_columns: Rc<Vec<DataTableHeader<NodeTreeEntry>>>,
+ proposal_columns: Rc<Vec<DataTableHeader<ProposedAssignment>>>,
+ node_selection: Selection,
+ last_node_data: Vec<RemoteNodeStatus>,
+ /// Canonical pool snapshot. Passed down to the key grid (display) and shared with the
+ /// node-first Assign dialog and the Add+Assign wizard (selector source-of-truth).
+ pool_keys: Rc<Vec<SubscriptionKeyEntry>>,
+ /// Pool-config digest captured alongside `pool_keys`. Forwarded to every pool mutation so
+ /// the server rejects stale-view writes with 409 instead of silently overwriting a parallel
+ /// admin's edits.
+ pool_digest: Option<String>,
+}
+
+pwt::impl_deref_mut_property!(
+ SubscriptionRegistryComp,
+ state,
+ LoadableComponentState<ViewState>
+);
+
+fn tree_sorter(a: &NodeTreeEntry, b: &NodeTreeEntry) -> std::cmp::Ordering {
+ a.name().cmp(b.name())
+}
+
+/// Sort helper that compares two Node entries on a derived key and falls back to name comparison
+/// for any Root/Remote variant; the tree columns use it so parent rows do not reshuffle when
+/// sorting by a Node-only attribute.
+fn node_field_sorter<K: Ord>(
+ a: &NodeTreeEntry,
+ b: &NodeTreeEntry,
+ f: impl Fn(&RemoteNodeStatus) -> K,
+) -> std::cmp::Ordering {
+ match (a, b) {
+ (NodeTreeEntry::Node { data: na, .. }, NodeTreeEntry::Node { data: nb, .. }) => {
+ f(na).cmp(&f(nb))
+ }
+ _ => a.name().cmp(b.name()),
+ }
+}
+
+impl SubscriptionRegistryComp {
+ fn tree_columns(store: TreeStore<NodeTreeEntry>) -> Rc<Vec<DataTableHeader<NodeTreeEntry>>> {
+ Rc::new(vec![
+ DataTableColumn::new(tr!("Name"))
+ .tree_column(store)
+ .flex(3)
+ .render(|entry: &NodeTreeEntry| {
+ let (icon, name) = match entry {
+ NodeTreeEntry::Root => return Html::default(),
+ NodeTreeEntry::Remote { name, ty, .. } => {
+ let icon = if *ty == pdm_api_types::remotes::RemoteType::Pbs {
+ "building-o"
+ } else {
+ "server"
+ };
+ (icon, name.as_str())
+ }
+ NodeTreeEntry::Node {
+ data: n,
+ standalone,
+ } => {
+ let icon = if n.ty == pdm_api_types::remotes::RemoteType::Pbs {
+ "building-o"
+ } else {
+ "building"
+ };
+ let label = if *standalone { &n.remote } else { &n.node };
+ (icon, label.as_str())
+ }
+ };
+ Row::new()
+ .class(AlignItems::Baseline)
+ .gap(2)
+ .with_child(Fa::new(icon))
+ .with_child(name)
+ .into()
+ })
+ .sorter(tree_sorter)
+ .into(),
+ DataTableColumn::new(tr!("Sockets"))
+ .width("70px")
+ .sorter(|a: &NodeTreeEntry, b: &NodeTreeEntry| {
+ node_field_sorter(a, b, |n| n.sockets)
+ })
+ .render(|entry: &NodeTreeEntry| match entry {
+ NodeTreeEntry::Node { data: n, .. } => {
+ n.sockets.map(|s| s.to_string()).unwrap_or_default().into()
+ }
+ _ => Html::default(),
+ })
+ .into(),
+ DataTableColumn::new(tr!("Status"))
+ .width("150px")
+ .sorter(|a: &NodeTreeEntry, b: &NodeTreeEntry| {
+ node_field_sorter(a, b, |n| subscription_status_label(n.status))
+ })
+ .render(|entry: &NodeTreeEntry| match entry {
+ NodeTreeEntry::Node { data: n, .. } => Row::new()
+ .class(AlignItems::Baseline)
+ .gap(2)
+ .with_child(subscription_status_icon(n.status))
+ .with_child(subscription_status_label(n.status))
+ .into(),
+ NodeTreeEntry::Remote { active, total, .. } => {
+ let icon = if active == total {
+ Fa::new("check-circle").class(FontColor::Success)
+ } else if *active == 0 {
+ Fa::new("exclamation-circle").class(FontColor::Error)
+ } else {
+ Fa::new("exclamation-triangle").class(FontColor::Warning)
+ };
+ Tooltip::new(
+ Row::new()
+ .class(AlignItems::Baseline)
+ .gap(2)
+ .with_child(icon)
+ .with_child(format!("{active}/{total}")),
+ )
+ .tip(tr!(
+ "{active} of {total} nodes subscribed",
+ active = active,
+ total = total,
+ ))
+ .into()
+ }
+ _ => Html::default(),
+ })
+ .into(),
+ DataTableColumn::new(tr!("Level"))
+ .width("90px")
+ .sorter(|a: &NodeTreeEntry, b: &NodeTreeEntry| {
+ node_field_sorter(a, b, |n| n.level)
+ })
+ .render(|entry: &NodeTreeEntry| match entry {
+ NodeTreeEntry::Node { data: n, .. } if n.level != SubscriptionLevel::None => {
+ n.level.to_string().into()
+ }
+ _ => Html::default(),
+ })
+ .into(),
+ DataTableColumn::new(tr!("Key"))
+ .flex(2)
+ .sorter(|a: &NodeTreeEntry, b: &NodeTreeEntry| {
+ node_field_sorter(a, b, |n| {
+ n.assigned_key
+ .clone()
+ .or_else(|| n.current_key.clone())
+ .unwrap_or_default()
+ })
+ })
+ .render(|entry: &NodeTreeEntry| match entry {
+ NodeTreeEntry::Node { data: n, .. } => key_cell(n),
+ _ => Html::default(),
+ })
+ .into(),
+ ])
+ }
+
+ fn proposal_columns() -> Rc<Vec<DataTableHeader<ProposedAssignment>>> {
+ Rc::new(vec![
+ DataTableColumn::new(tr!("Remote / Node"))
+ .flex(2)
+ .render(|p: &ProposedAssignment| format!("{} / {}", p.remote, p.node).into())
+ .into(),
+ DataTableColumn::new(tr!("Key"))
+ .flex(2)
+ .render(|p: &ProposedAssignment| {
+ Container::from_tag("span")
+ .class(pwt::css::FontStyle::LabelMedium)
+ .with_child(p.key.clone())
+ .into()
+ })
+ .into(),
+ DataTableColumn::new(tr!("Sockets (node / key)"))
+ .width("160px")
+ .render(|p: &ProposedAssignment| {
+ let label = match (p.node_sockets, p.key_sockets) {
+ (Some(ns), Some(ks)) => format!("{ns} / {ks}"),
+ (Some(ns), None) => format!("{ns} / -"),
+ (None, Some(ks)) => format!("- / {ks}"),
+ _ => String::new(),
+ };
+ label.into()
+ })
+ .into(),
+ ])
+ }
+}
+
+fn key_cell(n: &RemoteNodeStatus) -> Html {
+ let assigned = n.assigned_key.as_deref();
+ let current = n.current_key.as_deref();
+
+ if n.pending_clear {
+ // Clear queued: surface the live key the operator is about to free, with a recycle
+ // icon in the warning colour so the row stands out next to ordinary pending pushes.
+ let text = current.or(assigned).unwrap_or("");
+ return Tooltip::new(
+ Row::new()
+ .class(AlignItems::Baseline)
+ .gap(2)
+ .with_child(Fa::new("recycle").class(FontColor::Warning))
+ .with_child(text),
+ )
+ .tip(tr!(
+ "Pending Clear - 'Apply Pending' will remove this subscription from the node."
+ ))
+ .into();
+ }
+
+ // Pending push = pool has a key assigned that the live state has not yet picked up. Drive
+ // this off the keys themselves, not off the subscription status: a key that is on the node
+ // but reports `Invalid`/`Expired`/etc. is *applied* (the push went through), just unhealthy.
+ // The Status column surfaces the health axis; the clock icon here is reserved for the
+ // "queued operation has not landed yet" axis.
+ let pending = assigned.is_some() && current != assigned;
+
+ match (assigned, current) {
+        (Some(a), Some(c)) if a != c => Row::new()
+            .class(AlignItems::Baseline)
+            .gap(2)
+            .with_child(Fa::new("clock-o").class(FontColor::Warning))
+            // live key first, queued pool key second ("old -> new")
+            .with_child(format!("{c} \u{2192} {a}"))
+            .into(),
+ _ => {
+ let text = current.or(assigned).unwrap_or("");
+ if pending {
+ Row::new()
+ .class(AlignItems::Baseline)
+ .gap(2)
+ .with_child(Fa::new("clock-o").class(FontColor::Warning))
+ .with_child(text)
+ .into()
+ } else {
+ text.into()
+ }
+ }
+ }
+}
+
+impl LoadableComponent for SubscriptionRegistryComp {
+ type Properties = SubscriptionRegistryProps;
+ type Message = Msg;
+ type ViewState = ViewState;
+
+ fn create(ctx: &LoadableComponentContext<Self>) -> Self {
+ let store = TreeStore::new().view_root(false);
+ store.set_sorter(tree_sorter);
+
+ let node_selection = Selection::new().on_select({
+ let link = ctx.link().clone();
+ move |_| link.send_redraw()
+ });
+
+ Self {
+ state: LoadableComponentState::new(),
+ tree_store: store.clone(),
+ tree_columns: Self::tree_columns(store),
+ proposal_columns: Self::proposal_columns(),
+ node_selection,
+ last_node_data: Vec::new(),
+ pool_keys: Rc::new(Vec::new()),
+ pool_digest: None,
+ }
+ }
+
+ fn update(&mut self, ctx: &LoadableComponentContext<Self>, msg: Self::Message) -> bool {
+ match msg {
+ Msg::LoadFinished {
+ nodes,
+ keys,
+ digest,
+ } => {
+ self.last_node_data = nodes.clone();
+ let tree = build_tree(nodes);
+ self.tree_store.write().update_root_tree(tree);
+ self.pool_keys = Rc::new(keys);
+ self.pool_digest = digest;
+ }
+ Msg::AutoAssignPreview => {
+ let link = ctx.link().clone();
+ ctx.link().spawn(async move {
+ match http_post::<AutoAssignProposal>(AUTO_ASSIGN_URL, None).await {
+ Ok(proposal) if proposal.assignments.is_empty() => {
+ link.show_error(
+ tr!("Auto-Assign"),
+ tr!("No suitable unassigned keys for the remaining nodes."),
+ false,
+ );
+ }
+ Ok(proposal) => {
+ link.change_view(Some(ViewState::ConfirmAutoAssign(proposal)));
+ }
+ Err(err) => link.show_error(tr!("Auto-Assign"), err.to_string(), true),
+ }
+ });
+ }
+ Msg::BulkAssignApply(proposal) => {
+ let link = ctx.link().clone();
+ ctx.link().spawn(async move {
+ let body = serde_json::json!({ "proposal": proposal });
+ match http_post::<Vec<ProposedAssignment>>(BULK_ASSIGN_URL, Some(body)).await {
+ Ok(_) => {
+ link.change_view(None);
+ link.send_reload();
+ }
+ Err(err) => link.show_error(tr!("Auto-Assign"), err.to_string(), true),
+ }
+ });
+ }
+ Msg::ApplyPending => {
+ let link = ctx.link().clone();
+ let body = self
+ .pool_digest
+ .clone()
+ .map(|d| serde_json::json!({ "digest": d }));
+ ctx.link().spawn(async move {
+ match http_post::<Option<String>>(APPLY_PENDING_URL, body).await {
+ // Button gated on pending != 0; None only fires on a clearing race.
+ Ok(None) => link.change_view(None),
+ Ok(Some(upid)) => {
+ link.change_view(None);
+ link.show_task_progres(upid);
+ }
+ Err(err) => link.show_error(tr!("Apply Pending"), err.to_string(), true),
+ }
+ link.send_reload();
+ });
+ }
+ Msg::ClearPending => {
+ let link = ctx.link().clone();
+ let body = self
+ .pool_digest
+ .clone()
+ .map(|d| serde_json::json!({ "digest": d }));
+ ctx.link().spawn(async move {
+ match http_post::<serde_json::Value>(CLEAR_PENDING_URL, body).await {
+ Ok(_) => {
+ link.change_view(None);
+ link.send_reload();
+ }
+ Err(err) => link.show_error(tr!("Discard Pending"), err.to_string(), true),
+ }
+ });
+ }
+ Msg::RevertSelectedNode => {
+ let Some(key) = self.clear_assignment_target_key() else {
+ return false;
+ };
+ let link = ctx.link().clone();
+ let digest = self.pool_digest.clone();
+ ctx.link().spawn(async move {
+ let url = format!(
+ "/subscriptions/keys/{}/assignment",
+ percent_encode_component(&key),
+ );
+ let query = digest.map(|d| serde_json::json!({ "digest": d }));
+ if let Err(err) = http_delete(&url, query).await {
+ link.show_error(tr!("Revert"), err.to_string(), true);
+ }
+ link.send_reload();
+ });
+ }
+ Msg::AssignKeyToSelectedNode => {
+ let Some((remote, node, ty, node_sockets)) =
+ self.assign_target_for_selected_node()
+ else {
+ return false;
+ };
+ ctx.link().change_view(Some(ViewState::AssignKeyToNode {
+ remote,
+ node,
+ ty,
+ node_sockets,
+ }));
+ }
+ }
+ true
+ }
+
+ fn toolbar(&self, ctx: &LoadableComponentContext<Self>) -> Option<Html> {
+ let link = ctx.link();
+ let (push_count, clear_count) = self.pending_counts();
+ let mut toolbar = Toolbar::new()
+ .border_bottom(true)
+ .with_child(
+ Tooltip::new(
+ Button::new(tr!("Auto-Assign"))
+ .icon_class("fa fa-magic")
+ .on_activate(link.callback(|_| Msg::AutoAssignPreview)),
+ )
+ .tip(tr!(
+ "Propose a one-key-per-node assignment for nodes that have no active \
+ subscription, then queue it pending Apply."
+ )),
+ )
+ .with_spacer()
+ .with_child(
+ Tooltip::new(
+ Button::new(tr!("Apply Pending"))
+ .icon_class("fa fa-play")
+ .disabled(push_count + clear_count == 0)
+ .on_activate(
+ link.change_view_callback(|_| Some(ViewState::ConfirmApplyPending)),
+ ),
+ )
+ .tip(tr!(
+ "Push every queued assignment to its remote node and remove the \
+ subscription from nodes pending clear."
+ )),
+ )
+ .with_child(
+ Tooltip::new(
+ Button::new(tr!("Discard Pending"))
+ .icon_class("fa fa-eraser")
+ .disabled(push_count + clear_count == 0)
+ .on_activate(
+ link.change_view_callback(|_| Some(ViewState::ConfirmClearPending)),
+ ),
+ )
+ .tip(tr!(
+ "Discard queued assignments without touching the remote nodes."
+ )),
+ )
+ .with_flex_spacer();
+
+ if push_count + clear_count > 0 {
+ toolbar = toolbar.with_child(pending_badge(push_count, clear_count));
+ }
+
+ Some(
+ toolbar
+ .with_flex_spacer()
+ .with_child(Button::refresh(self.loading()).on_activate({
+ let link = link.clone();
+ move |_| link.send_reload()
+ }))
+ .into(),
+ )
+ }
+
+ fn load(
+ &self,
+ ctx: &LoadableComponentContext<Self>,
+ ) -> Pin<Box<dyn Future<Output = Result<(), Error>>>> {
+ let link = ctx.link().clone();
+ Box::pin(async move {
+ // Both panels share one snapshot. Fetching in parallel keeps the latency one
+ // round-trip; serial would compound on slow remotes. Use `http_get_full` for the
+ // pool fetch so the digest comes back alongside the entries - every mutation later
+ // round-trips that digest so a stale view fails with 409 instead of overwriting a
+ // parallel admin's edit.
+ let nodes_fut = http_get::<Vec<RemoteNodeStatus>>(NODE_STATUS_URL, None);
+ let keys_fut = http_get_full::<Vec<SubscriptionKeyEntry>>(KEYS_URL, None);
+ let (nodes, keys) = futures::future::join(nodes_fut, keys_fut).await;
+ let keys = keys?;
+ let digest = keys
+ .attribs
+ .get("digest")
+ .and_then(|v| v.as_str())
+ .map(str::to_string);
+ link.send_message(Msg::LoadFinished {
+ nodes: nodes?,
+ keys: keys.data,
+ digest,
+ });
+ Ok(())
+ })
+ }
+
+ fn main_view(&self, ctx: &LoadableComponentContext<Self>) -> Html {
+ Container::new()
+ .class("pwt-content-spacer")
+ .class(FlexFit)
+ .class(FlexDirection::Row)
+ .with_child(self.render_key_pool_panel(ctx))
+ .with_child(self.render_node_tree_panel(ctx))
+ .into()
+ }
+
+ fn dialog_view(
+ &self,
+ ctx: &LoadableComponentContext<Self>,
+ view_state: &Self::ViewState,
+ ) -> Option<Html> {
+ match view_state {
+ ViewState::ConfirmApplyPending => {
+ use pwt::widget::ConfirmDialog;
+ let (push_count, clear_count) = self.pending_counts();
+ let body = match (push_count, clear_count) {
+ (p, 0) => tr!(
+ "Push {n} queued assignment(s) to the remote nodes?",
+ n = p,
+ ),
+ (0, c) => tr!(
+ "Remove {n} live subscription(s) from the remote nodes?",
+ n = c,
+ ),
+ (p, c) => tr!(
+ "Push {p} queued assignment(s) and remove {c} live subscription(s) on the remote nodes?",
+ p = p,
+ c = c,
+ ),
+ };
+ Some(
+ ConfirmDialog::new(tr!("Apply Pending Changes"), body)
+ .icon_class("fa fa-question-circle")
+ .on_confirm({
+ let link = ctx.link().clone();
+ move |_| link.send_message(Msg::ApplyPending)
+ })
+ // ESC / X / No must reset the LoadableComponent's view_state too, or
+ // the dialog closes visually while the parent keeps thinking we are
+ // still on the confirm view - subsequent clicks land on a stale state.
+ .on_close({
+ let link = ctx.link().clone();
+ move |_| link.change_view(None)
+ })
+ .into(),
+ )
+ }
+ ViewState::ConfirmClearPending => {
+ use pwt::widget::ConfirmDialog;
+ Some(
+ ConfirmDialog::new(
+ tr!("Discard Pending Changes"),
+ tr!("Discard all assignments that have not yet been applied to the remote nodes?"),
+ )
+ .icon_class("fa fa-question-circle")
+ .on_confirm({
+ let link = ctx.link().clone();
+ move |_| link.send_message(Msg::ClearPending)
+ })
+ .on_close({
+ let link = ctx.link().clone();
+ move |_| link.change_view(None)
+ })
+ .into(),
+ )
+ }
+ ViewState::ConfirmAutoAssign(proposal) => {
+ Some(self.render_auto_assign_dialog(ctx, proposal))
+ }
+ ViewState::AssignKeyToNode {
+ remote,
+ node,
+ ty,
+ node_sockets,
+ } => {
+ use super::subscription_assign::AssignKeyToNodeDialog;
+ let close_link = ctx.link().clone();
+ Some(
+ AssignKeyToNodeDialog::new(
+ remote.clone(),
+ node.clone(),
+ *ty,
+ *node_sockets,
+ self.pool_keys.clone(),
+ )
+ .pool_digest(self.pool_digest.clone())
+ .on_done(Callback::from(move |_| {
+ close_link.change_view(None);
+ close_link.send_reload();
+ }))
+ .into(),
+ )
+ }
+ }
+ }
+}
+
+impl SubscriptionRegistryComp {
+ fn render_key_pool_panel(&self, ctx: &LoadableComponentContext<Self>) -> Panel {
+ // Reload the right-side node tree whenever the left-side key pool mutates, so a fresh
+ // assignment shows up as pending without forcing the operator to re-navigate.
+ let link = ctx.link().clone();
+ // Pass the current node-status snapshot into the grid so its Clear button can be
+ // disabled for synced bindings (orphan-prevention - mirrors the server-side refusal).
+ let statuses = Rc::new(self.last_node_data.clone());
+ Panel::new()
+ .class(FlexFit)
+ .border(true)
+ .style("flex", "3 1 0")
+ .min_width(300)
+ .title(tr!("Key Pool"))
+ .with_child(
+ SubscriptionKeyGrid::new()
+ .on_change(Callback::from(move |_| link.send_reload()))
+ .node_status(statuses)
+ .pool_keys(self.pool_keys.clone())
+ .pool_digest(self.pool_digest.clone()),
+ )
+ }
+
+ fn render_node_tree_panel(&self, ctx: &LoadableComponentContext<Self>) -> Panel {
+ let table = DataTable::new(self.tree_columns.clone(), self.tree_store.clone())
+ .selection(self.node_selection.clone())
+ .striped(false)
+ .borderless(true)
+ .show_header(true)
+ .class(FlexFit);
+
+ let can_assign_key = self.assign_target_for_selected_node().is_some();
+ let can_revert = self.clear_assignment_target_key().is_some();
+ let assign_button = Tooltip::new(
+ Button::new(tr!("Assign Key"))
+ .icon_class("fa fa-link")
+ .disabled(!can_assign_key)
+ .on_activate(ctx.link().callback(|_| Msg::AssignKeyToSelectedNode)),
+ )
+ .tip(tr!(
+ "Bind a pool key to the selected node. Available for nodes without an active \
+ subscription that have no pool assignment yet."
+ ));
+ let revert_button = Tooltip::new(
+ Button::new(tr!("Revert"))
+ .icon_class("fa fa-undo")
+ .disabled(!can_revert)
+ .on_activate(ctx.link().callback(|_| Msg::RevertSelectedNode)),
+ )
+ .tip(tr!(
+ "Revert the pending change on the selected node: drop an unpushed pool \
+ assignment without touching the remote."
+ ));
+
+ Panel::new()
+ .class(FlexFit)
+ .border(true)
+ .style("flex", "4 1 0")
+ .min_width(400)
+ .title(tr!("Node Subscription Status"))
+ .with_tool(assign_button)
+ .with_tool(revert_button)
+ .with_child(table)
+ }
+
+ /// Return `(pending pushes, pending clears)` mirroring the server's `compute_pending`
+ /// predicate. Iterates the pool (not the node-status list) so a pool entry bound to a
+ /// vanished node still counts as pending - matching what Apply Pending would actually try.
+ fn pending_counts(&self) -> (u32, u32) {
+ let mut push = 0;
+ let mut clear = 0;
+ for entry in self.pool_keys.iter() {
+ let (Some(remote), Some(node)) = (entry.remote.as_deref(), entry.node.as_deref())
+ else {
+ continue;
+ };
+ if entry.pending_clear {
+ clear += 1;
+ continue;
+ }
+ // Pending push = the live current key on the node does not match the assigned pool
+ // key. Subscription health (Invalid, Expired, ...) is a separate axis surfaced via
+ // the Status column; re-pushing the same key would not change the shop's verdict
+ // and the badge must not double-count health issues as queued operations.
+ let is_pending = match self
+ .last_node_data
+ .iter()
+ .find(|n| n.remote == remote && n.node == node)
+ {
+ Some(n) => n.current_key.as_deref() != Some(entry.key.as_str()),
+ None => true,
+ };
+ if is_pending {
+ push += 1;
+ }
+ }
+ (push, clear)
+ }
+
+ /// Resolve the selected tree row to its `RemoteNodeStatus`, if any.
+ fn selected_node_status(&self) -> Option<&RemoteNodeStatus> {
+ let key = self.node_selection.selected_key()?;
+ let raw = key.to_string();
+ let mut parts = raw.trim_start_matches('/').splitn(2, '/');
+ let remote = parts.next()?;
+ let node = parts.next()?;
+ self.last_node_data
+ .iter()
+ .find(|n| n.remote == remote && n.node == node)
+ }
+
+ /// Returns the assigned key when Revert is appropriate: there is a binding AND it has not
+ /// yet been pushed (different from current_key, or the node is not Active). For an
+ /// already-synced assignment, clearing would orphan the live subscription on the remote,
+ /// so the operator must take a different path (introduced later in the series).
+ fn clear_assignment_target_key(&self) -> Option<String> {
+ let n = self.selected_node_status()?;
+ let assigned = n.assigned_key.as_ref()?;
+ let synced = n.status == proxmox_subscription::SubscriptionStatus::Active
+ && n.current_key.as_deref() == Some(assigned.as_str());
+ if synced {
+ return None;
+ }
+ Some(assigned.clone())
+ }
+
+ /// Returns `(remote, node, type, node_sockets)` for the right-panel Assign button:
+ /// selected row is a node, no assigned key in the pool yet, and no live active subscription.
+ /// Refusing earlier than the server keeps the button-disable affordance honest.
+ fn assign_target_for_selected_node(
+ &self,
+ ) -> Option<(String, String, pdm_api_types::remotes::RemoteType, Option<i64>)> {
+ let n = self.selected_node_status()?;
+ if n.assigned_key.is_some() {
+ return None;
+ }
+ if n.status == proxmox_subscription::SubscriptionStatus::Active {
+ return None;
+ }
+ Some((n.remote.clone(), n.node.clone(), n.ty, n.sockets))
+ }
+
+ fn render_auto_assign_dialog(
+ &self,
+ ctx: &LoadableComponentContext<Self>,
+ proposal: &AutoAssignProposal,
+ ) -> Html {
+ use pwt::widget::Dialog;
+
+ let store: Store<ProposedAssignment> = Store::with_extract_key(|p: &ProposedAssignment| {
+ format!("{}/{}", p.remote, p.node).into()
+ });
+ store.set_data(proposal.assignments.clone());
+
+ let link_close = ctx.link().clone();
+ let link_apply = ctx.link().clone();
+ let proposal_for_apply = proposal.clone();
+ let body = Column::new()
+ .class(Flex::Fill)
+ .class(Overflow::Hidden)
+ .min_height(0)
+ .padding(2)
+ .gap(2)
+ .min_width(600)
+ .with_child(Container::from_tag("p").with_child(tr!(
+ "The following {n} assignments are proposed. Click Assign to confirm.",
+ n = proposal.assignments.len(),
+ )))
+ .with_child(
+ DataTable::new(self.proposal_columns.clone(), store)
+ .striped(true)
+ .class(FlexFit)
+ .min_height(140),
+ )
+ .with_child(
+ Row::new()
+ .class(JustifyContent::FlexEnd)
+ .gap(2)
+ .padding_top(2)
+ .with_child(
+ Button::new(tr!("Cancel"))
+ .on_activate(move |_| link_close.change_view(None)),
+ )
+ .with_child(Button::new(tr!("Assign")).on_activate(move |_| {
+ link_apply.send_message(Msg::BulkAssignApply(proposal_for_apply.clone()))
+ })),
+ );
+
+ Dialog::new(tr!("Auto-Assign Proposal"))
+ .resizable(true)
+ .width(700)
+ .min_width(500)
+ .min_height(300)
+ .max_height("80vh")
+ .on_close({
+ let link = ctx.link().clone();
+ move |_| link.change_view(None)
+ })
+ .with_child(body)
+ .into()
+ }
+}
diff --git a/ui/src/main_menu.rs b/ui/src/main_menu.rs
index 18988eaf..eba02d5f 100644
--- a/ui/src/main_menu.rs
+++ b/ui/src/main_menu.rs
@@ -15,6 +15,7 @@ use pdm_api_types::remotes::RemoteType;
use pdm_api_types::{PRIV_SYS_AUDIT, PRIV_SYS_MODIFY};
use crate::configuration::subscription_panel::SubscriptionPanel;
+use crate::configuration::subscription_registry::SubscriptionRegistryProps;
use crate::configuration::views::ViewGrid;
use crate::dashboard::view::View;
use crate::remotes::RemotesPanel;
@@ -292,6 +293,15 @@ impl Component for PdmMainMenu {
config_submenu,
);
+ register_view(
+ &mut menu,
+ &mut content,
+ tr!("Subscription Registry"),
+ "subscription-registry",
+ Some("fa fa-id-card"),
+ |_| SubscriptionRegistryProps::new().into(),
+ );
+
let mut admin_submenu = Menu::new();
register_view(
diff --git a/ui/src/widget/pve_node_selector.rs b/ui/src/widget/pve_node_selector.rs
index ca78514b..94d37bbd 100644
--- a/ui/src/widget/pve_node_selector.rs
+++ b/ui/src/widget/pve_node_selector.rs
@@ -43,6 +43,11 @@ pub struct PveNodeSelector {
#[builder(IntoPropValue, into_prop_value)]
#[prop_or_default]
pub remote: AttrValue,
+
+ /// Node names that should not appear in the selector (e.g. nodes that already have a
+ /// subscription key assigned in the pool).
+ #[prop_or_default]
+ pub excluded_nodes: Rc<Vec<String>>,
}
impl PveNodeSelector {
@@ -51,6 +56,11 @@ impl PveNodeSelector {
remote: remote.into_prop_value()
})
}
+
+ pub fn excluded_nodes(mut self, nodes: Rc<Vec<String>>) -> Self {
+ self.excluded_nodes = nodes;
+ self
+ }
}
pub enum Msg {
@@ -60,6 +70,9 @@ pub enum Msg {
pub struct PveNodeSelectorComp {
_async_pool: AsyncPool,
store: Store<ClusterNodeIndexResponse>,
+ /// Unfiltered node list as fetched from the remote, kept so a prop change to `excluded_nodes`
+ /// can re-filter without round-tripping the remote again.
+ raw_nodes: Vec<ClusterNodeIndexResponse>,
last_err: Option<AttrValue>,
}
@@ -69,6 +82,19 @@ impl PveNodeSelectorComp {
nodes.sort_by(|a, b| a.node.cmp(&b.node));
Ok(nodes)
}
+
+ fn apply_filter(&mut self, excluded: &[String]) {
+ let filtered: Vec<ClusterNodeIndexResponse> = if excluded.is_empty() {
+ self.raw_nodes.clone()
+ } else {
+ self.raw_nodes
+ .iter()
+ .filter(|n| !excluded.iter().any(|e| e == &n.node))
+ .cloned()
+ .collect()
+ };
+ self.store.set_data(filtered);
+ }
}
impl Component for PveNodeSelectorComp {
@@ -84,16 +110,20 @@ impl Component for PveNodeSelectorComp {
Self {
_async_pool,
last_err: None,
+ raw_nodes: Vec::new(),
store: Store::with_extract_key(|node: &ClusterNodeIndexResponse| {
Key::from(node.node.as_str())
}),
}
}
- fn update(&mut self, _ctx: &yew::Context<Self>, msg: Self::Message) -> bool {
+ fn update(&mut self, ctx: &yew::Context<Self>, msg: Self::Message) -> bool {
match msg {
Msg::UpdateNodeList(res) => match res {
- Ok(result) => self.store.set_data(result),
+ Ok(result) => {
+ self.raw_nodes = result;
+ self.apply_filter(&ctx.props().excluded_nodes);
+ }
Err(err) => self.last_err = Some(err.to_string().into()),
},
}
@@ -101,6 +131,13 @@ impl Component for PveNodeSelectorComp {
true
}
+ fn changed(&mut self, ctx: &yew::Context<Self>, old_props: &Self::Properties) -> bool {
+ if old_props.excluded_nodes != ctx.props().excluded_nodes {
+ self.apply_filter(&ctx.props().excluded_nodes);
+ }
+ true
+ }
+
fn view(&self, ctx: &yew::Context<Self>) -> yew::Html {
let props = ctx.props();
let err = self.last_err.clone();
--
2.47.3
^ permalink raw reply related [flat|nested] 16+ messages in thread
* [PATCH datacenter-manager v3 06/12] cli: client: add subscription key pool management subcommands
2026-05-15 7:43 [PATCH datacenter-manager v3 00/12] subscription key pool registry Thomas Lamprecht
` (4 preceding siblings ...)
2026-05-15 7:43 ` [PATCH datacenter-manager v3 05/12] ui: registry: add view with key pool and node status Thomas Lamprecht
@ 2026-05-15 7:43 ` Thomas Lamprecht
2026-05-15 7:43 ` [PATCH datacenter-manager v3 07/12] docs: add subscription registry chapter Thomas Lamprecht
` (5 subsequent siblings)
11 siblings, 0 replies; 16+ messages in thread
From: Thomas Lamprecht @ 2026-05-15 7:43 UTC (permalink / raw)
To: pdm-devel
Plumb the new key-pool API endpoints through the CLI under the
existing `subscriptions` command group. The pre-existing `status`
subcommand becomes a sibling rather than the sole entry.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
---
Changes v2 -> v3:
* Worker outcomes surface via the new wait_for_local_task helper from
v3-0002 instead of a hand-rolled poll loop.
* CLI for the new v3 endpoints (Clear Key, Adopt Key / Adopt All,
Check Subscription) is wired in their respective per-feature commits,
not here.
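For reviewers, a minimal sketch (not part of the patch) of how a consumer could chain the typed-client calls added below; the function name and the `proxmox_client::HttpApiClient` import path are placeholders of mine, the client methods are the ones this commit introduces:
    use anyhow::Error;
    use pdm_client::PdmClient;
    use proxmox_client::HttpApiClient; // import path assumed for this sketch
    async fn preview_and_apply<T: HttpApiClient>(client: &PdmClient<T>) -> Result<(), Error> {
        // Preview which free pool keys would cover which unsubscribed nodes.
        let proposal = client.subscription_auto_assign().await?;
        for p in &proposal.assignments {
            println!("proposed: {} -> {}/{}", p.key, p.remote, p.node);
        }
        // Commit the proposal; the server rejects it if pool or node status drifted.
        let applied = client.subscription_bulk_assign(proposal).await?;
        println!("{} assignment(s) applied", applied.len());
        // Push everything pending and wait for the worker, as the CLI below does.
        if let Some(upid) = client.subscription_apply_pending(None).await? {
            client.wait_for_local_task(&upid).await?;
        }
        Ok(())
    }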
cli/client/src/subscriptions.rs | 260 +++++++++++++++++++++++++++++++-
lib/pdm-client/src/lib.rs | 179 +++++++++++++++++++++-
2 files changed, 430 insertions(+), 9 deletions(-)
diff --git a/cli/client/src/subscriptions.rs b/cli/client/src/subscriptions.rs
index d8bf1e09..00c06ada 100644
--- a/cli/client/src/subscriptions.rs
+++ b/cli/client/src/subscriptions.rs
@@ -1,18 +1,46 @@
use anyhow::Error;
+use proxmox_config_digest::PROXMOX_CONFIG_DIGEST_SCHEMA;
use proxmox_router::cli::{
- format_and_print_result, CliCommand, CommandLineInterface, OutputFormat,
+ format_and_print_result, CliCommand, CliCommandMap, CommandLineInterface, OutputFormat,
};
use proxmox_schema::api;
-use pdm_api_types::subscription::RemoteSubscriptionState;
-use pdm_api_types::VIEW_ID_SCHEMA;
+use pdm_api_types::remotes::REMOTE_ID_SCHEMA;
+use pdm_api_types::subscription::{RemoteSubscriptionState, SUBSCRIPTION_KEY_SCHEMA};
+use pdm_api_types::{NODE_SCHEMA, VIEW_ID_SCHEMA};
+use pdm_client::ConfigDigest;
use crate::env::emoji;
use crate::{client, env};
pub fn cli() -> CommandLineInterface {
- CliCommand::new(&API_METHOD_GET_SUBSCRIPTION_STATUS).into()
+ CliCommandMap::new()
+ .insert(
+ "status",
+ CliCommand::new(&API_METHOD_GET_SUBSCRIPTION_STATUS),
+ )
+ .insert("list-keys", CliCommand::new(&API_METHOD_LIST_KEYS))
+ .insert(
+ "add-keys",
+ CliCommand::new(&API_METHOD_ADD_KEYS).arg_param(&["keys"]),
+ )
+ .insert(
+ "assign-key",
+ CliCommand::new(&API_METHOD_ASSIGN_KEY).arg_param(&["key"]),
+ )
+ .insert(
+ "clear-assignment",
+ CliCommand::new(&API_METHOD_CLEAR_ASSIGNMENT).arg_param(&["key"]),
+ )
+ .insert(
+ "remove-key",
+ CliCommand::new(&API_METHOD_REMOVE_KEY).arg_param(&["key"]),
+ )
+ .insert("auto-assign", CliCommand::new(&API_METHOD_AUTO_ASSIGN))
+ .insert("apply-pending", CliCommand::new(&API_METHOD_APPLY_PENDING))
+ .insert("clear-pending", CliCommand::new(&API_METHOD_CLEAR_PENDING))
+ .into()
}
#[api(
@@ -37,7 +65,7 @@ pub fn cli() -> CommandLineInterface {
},
}
)]
-/// List all the remotes this instance is managing.
+/// Show the subscription status of all remotes.
async fn get_subscription_status(
max_age: Option<u64>,
verbose: Option<bool>,
@@ -106,3 +134,225 @@ async fn get_subscription_status(
}
Ok(())
}
+
+#[api]
+/// List all subscription keys in the pool.
+async fn list_keys() -> Result<(), Error> {
+ let (keys, _digest) = client()?.list_subscription_keys().await?;
+
+ let output_format = env().format_args.output_format;
+ if output_format == OutputFormat::Text {
+ if keys.is_empty() {
+ println!("No keys in pool.");
+ return Ok(());
+ }
+ let key_width = keys.iter().map(|k| k.key.len()).max().unwrap_or(20);
+ for key in &keys {
+ let assignment = match (&key.remote, &key.node) {
+ (Some(r), Some(n)) => format!("{r}/{n}"),
+ _ => "(unassigned)".to_string(),
+ };
+ println!(
+ " {key:<kw$} {product:<5} {level:<10} {status:<10} {assignment}",
+ key = key.key,
+ kw = key_width,
+ product = key.product_type.to_string(),
+ level = key.level.to_string(),
+ status = key.status.to_string(),
+ );
+ }
+ } else {
+ format_and_print_result(&keys, &output_format.to_string());
+ }
+ Ok(())
+}
+
+#[api(
+ input: {
+ properties: {
+ keys: {
+ type: Array,
+ description: "Subscription keys to add to the pool.",
+ items: { schema: SUBSCRIPTION_KEY_SCHEMA },
+ },
+ digest: {
+ schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
+ optional: true,
+ },
+ },
+ },
+)]
+/// Add one or more subscription keys to the pool.
+async fn add_keys(keys: Vec<String>, digest: Option<String>) -> Result<(), Error> {
+ let digest = digest.map(ConfigDigest::from);
+ client()?.add_subscription_keys(&keys, digest).await?;
+ let n = keys.len();
+ if n == 1 {
+ println!("Added {} to pool.", keys[0]);
+ } else {
+ println!("Added {n} keys to pool.");
+ }
+ Ok(())
+}
+
+#[api(
+ input: {
+ properties: {
+ key: { schema: SUBSCRIPTION_KEY_SCHEMA },
+ remote: { schema: REMOTE_ID_SCHEMA },
+ node: { schema: NODE_SCHEMA },
+ digest: {
+ schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
+ optional: true,
+ },
+ },
+ },
+)]
+/// Assign a key from the pool to a remote node.
+async fn assign_key(
+ key: String,
+ remote: String,
+ node: String,
+ digest: Option<String>,
+) -> Result<(), Error> {
+ let digest = digest.map(ConfigDigest::from);
+ client()?
+ .set_subscription_assignment(&key, &remote, &node, digest)
+ .await?;
+ println!("Assigned {key} to {remote}/{node}.");
+ Ok(())
+}
+
+#[api(
+ input: {
+ properties: {
+ key: { schema: SUBSCRIPTION_KEY_SCHEMA },
+ digest: {
+ schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
+ optional: true,
+ },
+ },
+ },
+)]
+/// Clear the assignment of a key (unassign from its remote node).
+async fn clear_assignment(key: String, digest: Option<String>) -> Result<(), Error> {
+ let digest = digest.map(ConfigDigest::from);
+ client()?
+ .clear_subscription_assignment(&key, digest)
+ .await?;
+ println!("Cleared assignment for {key}.");
+ Ok(())
+}
+
+#[api(
+ input: {
+ properties: {
+ key: { schema: SUBSCRIPTION_KEY_SCHEMA },
+ },
+ },
+)]
+/// Remove a key from the pool entirely.
+async fn remove_key(key: String) -> Result<(), Error> {
+ client()?.delete_subscription_key(&key).await?;
+ println!("Removed {key} from pool.");
+ Ok(())
+}
+
+#[api(
+ input: {
+ properties: {
+ apply: {
+ type: bool,
+ optional: true,
+ default: false,
+ description: "Commit the proposal immediately via bulk-assign. \
+ Without this, only a preview is printed.",
+ },
+ },
+ },
+)]
+/// Propose (and optionally apply) automatic key-to-node assignments.
+async fn auto_assign(apply: bool) -> Result<(), Error> {
+ let client = client()?;
+ let proposal = client.subscription_auto_assign().await?;
+
+ if proposal.assignments.is_empty() {
+ println!("No suitable free keys for nodes without an active subscription.");
+ return Ok(());
+ }
+
+ let verb = if apply { "assigned" } else { "proposed" };
+ for p in &proposal.assignments {
+ println!(" {verb}: {} -> {}/{}", p.key, p.remote, p.node);
+ }
+
+ if !apply {
+ println!("\nRe-run with --apply to apply these assignments.");
+ return Ok(());
+ }
+ let applied = client.subscription_bulk_assign(proposal).await?;
+ if applied.is_empty() {
+ println!("\nServer rejected the proposal (no entries applied).");
+ }
+ Ok(())
+}
+
+#[api(
+ input: {
+ properties: {
+ digest: {
+ schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
+ optional: true,
+ },
+ },
+ },
+)]
+/// Push all pending key assignments to remotes as a worker task.
+///
+/// Blocks until the worker finishes so the operator sees the exit status of the actual push
+/// run, not just a UPID they would have to chase down by hand.
+async fn apply_pending(digest: Option<String>) -> Result<(), Error> {
+ let digest = digest.map(ConfigDigest::from);
+ let client = client()?;
+ let upid = match client.subscription_apply_pending(digest).await? {
+ None => {
+ println!("No pending assignments to apply.");
+ return Ok(());
+ }
+ Some(upid) => upid,
+ };
+ println!("Started worker task: {upid}");
+ let status = client.wait_for_local_task(&upid).await?;
+ let exit = status
+ .get("exitstatus")
+ .and_then(|v| v.as_str())
+ .unwrap_or("unknown");
+ if exit == "OK" {
+ println!("Task finished: OK");
+ Ok(())
+ } else {
+ anyhow::bail!("worker task ended with: {exit}");
+ }
+}
+
+#[api(
+ input: {
+ properties: {
+ digest: {
+ schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
+ optional: true,
+ },
+ },
+ },
+)]
+/// Clear every pending assignment in one bulk transaction.
+async fn clear_pending(digest: Option<String>) -> Result<(), Error> {
+ let digest = digest.map(ConfigDigest::from);
+ let cleared = client()?.subscription_clear_pending(digest).await?;
+ if cleared == 0 {
+ println!("No pending assignments to clear.");
+ } else {
+ println!("Cleared {cleared} pending assignment(s).");
+ }
+ Ok(())
+}
diff --git a/lib/pdm-client/src/lib.rs b/lib/pdm-client/src/lib.rs
index cb5bb043..1fed0e85 100644
--- a/lib/pdm-client/src/lib.rs
+++ b/lib/pdm-client/src/lib.rs
@@ -76,7 +76,10 @@ pub mod types {
pub use pve_api_types::StorageStatus as PveStorageStatus;
- pub use pdm_api_types::subscription::{RemoteSubscriptionState, RemoteSubscriptions};
+ pub use pdm_api_types::subscription::{
+ AutoAssignProposal, ClearPendingResult, ProductType, ProposedAssignment, RemoteNodeStatus,
+ RemoteSubscriptionState, RemoteSubscriptions, SubscriptionKeyEntry, SubscriptionKeySource,
+ };
pub use pve_api_types::{SdnVnetMacVrf, SdnZoneIpVrf};
}
@@ -898,9 +901,6 @@ impl<T: HttpApiClient> PdmClient<T> {
/// server-side wait surface lands this method becomes a single GET with no behaviour change
/// for callers.
///
- /// No built-in time bound; wrap in `tokio::time::timeout` if needed. Dropping the future
- /// stops the client-side polling only - the server-side worker keeps running.
- ///
/// Native-only: the polling loop relies on `tokio::time::sleep`, which is not available on
/// the wasm32 target the UI builds for.
#[cfg(not(target_arch = "wasm32"))]
@@ -1119,6 +1119,177 @@ impl<T: HttpApiClient> PdmClient<T> {
Ok(self.0.get(&path).await?.expect_json()?.data)
}
+ /// List all keys in the subscription pool. Returns the entries plus the matching
+ /// `ConfigDigest` so the caller can chain a digest-aware add / assign / delete back.
+ pub async fn list_subscription_keys(
+ &self,
+ ) -> Result<(Vec<SubscriptionKeyEntry>, Option<ConfigDigest>), Error> {
+ let mut res = self
+ .0
+ .get("/api2/extjs/subscriptions/keys")
+ .await?
+ .expect_json()?;
+ Ok((res.data, res.attribs.remove("digest").map(ConfigDigest)))
+ }
+
+ /// Add one or more keys to the pool. See the daemon-side endpoint for the all-or-nothing
+ /// validation semantics.
+ pub async fn add_subscription_keys(
+ &self,
+ keys: &[String],
+ digest: Option<ConfigDigest>,
+ ) -> Result<(), Error> {
+ #[derive(Serialize)]
+ struct AddArgs<'a> {
+ keys: &'a [String],
+ #[serde(skip_serializing_if = "Option::is_none")]
+ digest: Option<ConfigDigest>,
+ }
+ self.0
+ .post("/api2/extjs/subscriptions/keys", &AddArgs { keys, digest })
+ .await?
+ .nodata()
+ }
+
+ /// Bind a key to a remote node.
+ pub async fn set_subscription_assignment(
+ &self,
+ key: &str,
+ remote: &str,
+ node: &str,
+ digest: Option<ConfigDigest>,
+ ) -> Result<(), Error> {
+ #[derive(Serialize)]
+ struct AssignArgs<'a> {
+ remote: &'a str,
+ node: &'a str,
+ #[serde(skip_serializing_if = "Option::is_none")]
+ digest: Option<ConfigDigest>,
+ }
+ let path = format!("/api2/extjs/subscriptions/keys/{key}/assignment");
+ self.0
+ .post(
+ &path,
+ &AssignArgs {
+ remote,
+ node,
+ digest,
+ },
+ )
+ .await?
+ .nodata()
+ }
+
+ /// Drop the remote-node binding for a pool key (the inverse of
+ /// [`set_subscription_assignment`]).
+ pub async fn clear_subscription_assignment(
+ &self,
+ key: &str,
+ digest: Option<ConfigDigest>,
+ ) -> Result<(), Error> {
+ let path = ApiPathBuilder::new(format!(
+ "/api2/extjs/subscriptions/keys/{key}/assignment"
+ ))
+ .maybe_arg("digest", &digest.map(Value::from))
+ .build();
+ self.0.delete(&path).await?.nodata()
+ }
+
+ /// Remove a key from the pool entirely.
+ ///
+ /// No digest parameter: deletion is a point-of-no-return operation and the typed-client
+ /// surface elsewhere (delete_remote, delete_user, ...) does not round-trip a digest on
+ /// DELETE either. External REST callers can still pass `digest` via the URL query if they
+ /// want optimistic concurrency on deletion; the server-side endpoint accepts it.
+ pub async fn delete_subscription_key(&self, key: &str) -> Result<(), Error> {
+ let path = format!("/api2/extjs/subscriptions/keys/{key}");
+ self.0.delete(&path).await?.nodata()
+ }
+
+ /// Combined remote/node subscription status, filtered to remotes the caller has audit
+ /// privilege on.
+ pub async fn subscription_node_status(
+ &self,
+ max_age: Option<u64>,
+ ) -> Result<Vec<RemoteNodeStatus>, Error> {
+ let path = ApiPathBuilder::new("/api2/extjs/subscriptions/node-status")
+ .maybe_arg("max-age", &max_age)
+ .build();
+ Ok(self.0.get(&path).await?.expect_json()?.data)
+ }
+
+ /// Compute a key-to-node assignment proposal. Apply it with
+ /// [`subscription_bulk_assign`].
+ pub async fn subscription_auto_assign(&self) -> Result<AutoAssignProposal, Error> {
+ Ok(self
+ .0
+ .post("/api2/extjs/subscriptions/auto-assign", &json!({}))
+ .await?
+ .expect_json()?
+ .data)
+ }
+
+ /// Commit a proposal previously returned by [`subscription_auto_assign`]. The server
+ /// rejects the call with 409 if either the pool or the live node-status has drifted
+ /// since the proposal was computed.
+ pub async fn subscription_bulk_assign(
+ &self,
+ proposal: AutoAssignProposal,
+ ) -> Result<Vec<ProposedAssignment>, Error> {
+ Ok(self
+ .0
+ .post(
+ "/api2/extjs/subscriptions/bulk-assign",
+ &json!({ "proposal": proposal }),
+ )
+ .await?
+ .expect_json()?
+ .data)
+ }
+
+ /// Push every pending assignment. Returns the worker UPID, or `None` when there is nothing
+ /// to do.
+ ///
+ /// The optional `digest` rejects the call at the API boundary if the pool changed since the
+ /// caller last loaded it - it only guards the call itself; the worker re-reads the pool when
+ /// it fires, so a parallel admin edit between API return and worker start is still honoured.
+ pub async fn subscription_apply_pending(
+ &self,
+ digest: Option<ConfigDigest>,
+ ) -> Result<Option<String>, Error> {
+ #[derive(Serialize)]
+ struct Args {
+ #[serde(skip_serializing_if = "Option::is_none")]
+ digest: Option<ConfigDigest>,
+ }
+ Ok(self
+ .0
+ .post("/api2/extjs/subscriptions/apply-pending", &Args { digest })
+ .await?
+ .expect_json()?
+ .data)
+ }
+
+ /// Clear every pending assignment in one bulk transaction; returns the count of cleared
+ /// entries.
+ pub async fn subscription_clear_pending(
+ &self,
+ digest: Option<ConfigDigest>,
+ ) -> Result<u32, Error> {
+ #[derive(Serialize)]
+ struct Args {
+ #[serde(skip_serializing_if = "Option::is_none")]
+ digest: Option<ConfigDigest>,
+ }
+ let result: types::ClearPendingResult = self
+ .0
+ .post("/api2/extjs/subscriptions/clear-pending", &Args { digest })
+ .await?
+ .expect_json()?
+ .data;
+ Ok(result.cleared)
+ }
+
pub async fn pve_list_networks(
&self,
remote: &str,
--
2.47.3
^ permalink raw reply related [flat|nested] 16+ messages in thread
* [PATCH datacenter-manager v3 07/12] docs: add subscription registry chapter
2026-05-15 7:43 [PATCH datacenter-manager v3 00/12] subscription key pool registry Thomas Lamprecht
` (5 preceding siblings ...)
2026-05-15 7:43 ` [PATCH datacenter-manager v3 06/12] cli: client: add subscription key pool management subcommands Thomas Lamprecht
@ 2026-05-15 7:43 ` Thomas Lamprecht
2026-05-15 7:43 ` [PATCH datacenter-manager v3 08/12] subscription: add Clear Key action and per-node revert Thomas Lamprecht
` (4 subsequent siblings)
11 siblings, 0 replies; 16+ messages in thread
From: Thomas Lamprecht @ 2026-05-15 7:43 UTC (permalink / raw)
To: pdm-devel
Cover the Subscription Registry view and the actions it exposes,
together with the permission model the registry enforces.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
---
Changes v2 -> v3:
* Picks up the renamed Discard Pending button (was Clear Pending in
v2).
* Action paragraphs for Clear Key, Adopt Key / Adopt All, and Check
Subscription are added incrementally in their respective per-feature
commits, not here.
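To make the Auto-Assign rule described in the chapter concrete (smallest covering key by socket count), a toy selection sketch; `FreeKey` and `pick_covering_key` are hypothetical stand-ins of mine, not types from this series:
    struct FreeKey {
        key: String,
        sockets: u32,
    }
    fn pick_covering_key(free: &[FreeKey], node_sockets: u32) -> Option<&FreeKey> {
        free.iter()
            .filter(|k| k.sockets >= node_sockets) // must cover the node...
            .min_by_key(|k| k.sockets) // ...while wasting as few sockets as possible
    }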
docs/index.rst | 1 +
docs/subscription-registry.rst | 55 ++++++++++++++++++++++++++++++++++
2 files changed, 56 insertions(+)
create mode 100644 docs/subscription-registry.rst
diff --git a/docs/index.rst b/docs/index.rst
index 2fc8a5dc..2aaf86ea 100644
--- a/docs/index.rst
+++ b/docs/index.rst
@@ -27,6 +27,7 @@ in the section entitled "GNU Free Documentation License".
remotes.rst
automated-installations.rst
views.rst
+ subscription-registry.rst
access-control.rst
sysadmin.rst
faq.rst
diff --git a/docs/subscription-registry.rst b/docs/subscription-registry.rst
new file mode 100644
index 00000000..f1e6fd5b
--- /dev/null
+++ b/docs/subscription-registry.rst
@@ -0,0 +1,55 @@
+Subscription Registry
+=====================
+
+The subscription registry maintains a central pool of Proxmox VE and Proxmox Backup Server
+subscription keys and lets an administrator assign them to remote nodes from a single place, without
+having to select and configure a key for all remote nodes individually.
+
+Key Pool
+--------
+
+The pool accepts Proxmox VE and Proxmox Backup Server keys; other key prefixes are rejected so that
+a new product type is noticed instead of silently parking unusable entries. Each entry records its
+origin and the optional remote node it has been assigned to.
+
+Keys can be added in bulk from the web interface or with the ``proxmox-datacenter-client
+subscriptions add-keys`` command. The Add dialog takes multiple keys, separated by newlines or
+commas, and validates the whole batch atomically.
+
+Node Subscription Status
+------------------------
+
+The Node Subscription Status panel shows the live subscription state of every node behind a
+configured remote alongside any pending plan from the pool. Nodes that already hold a key the
+registry assigned appear with the live level; nodes with a pending pool assignment show a clock
+icon until the change is pushed to the remote.
+
+From this view an operator can clear a pending assignment or remove the key from the pool entirely,
+which is convenient when a node is known to be wrong without first having to find the matching entry
+on the key list.
+
+Assignment
+----------
+
+A key can be pinned to a single node manually.
+
+The Auto-Assign action proposes a plan that fills unsubscribed nodes from free pool keys. For
+Proxmox VE, the smallest covering key by socket count is chosen, so a 4-socket key is not used on a
+2-socket host while a larger host stays unsubscribed.
+
+The proposed plan can be inspected before it is applied. Apply Pending pushes the queued keys to
+their target nodes; if a push fails the remaining queue is kept intact for retry. Discard Pending
+drops the plan without touching any remote.
+
+Permissions
+-----------
+
+Listing the pool and the node status view follows the regular audit privileges on each affected
+remote. Pool entries pinned to a remote the operator has no audit privilege on are hidden from
+the listing; unbound entries stay visible to anyone with the system-scope audit privilege.
+
+Any mutating action on a pool entry or its remote binding requires the matching resource
+privilege on the target remote in addition to the system-scope MODIFY privilege, so an
+operator with global system access alone cannot drive changes against remotes they have no
+other authority on. Auto-Assign skips remotes the caller cannot modify, so a previewed plan
+never silently commits an assignment on a remote the operator only had audit on.
--
2.47.3
^ permalink raw reply related [flat|nested] 16+ messages in thread
* [PATCH datacenter-manager v3 08/12] subscription: add Clear Key action and per-node revert
2026-05-15 7:43 [PATCH datacenter-manager v3 00/12] subscription key pool registry Thomas Lamprecht
` (6 preceding siblings ...)
2026-05-15 7:43 ` [PATCH datacenter-manager v3 07/12] docs: add subscription registry chapter Thomas Lamprecht
@ 2026-05-15 7:43 ` Thomas Lamprecht
2026-05-15 7:43 ` [PATCH datacenter-manager v3 09/12] subscription: add Adopt Key action for foreign live subscriptions Thomas Lamprecht
` (3 subsequent siblings)
11 siblings, 0 replies; 16+ messages in thread
From: Thomas Lamprecht @ 2026-05-15 7:43 UTC (permalink / raw)
To: pdm-devel
Wire a Clear Key action on the Node Subscription Status panel that
queues the live subscription on a remote node for removal at the
next Apply Pending.
Clear Key refuses with BAD_REQUEST when no pool entry is bound to
(remote, node): the resource is fine; it just is not in a state
where Clear Key applies.
The per-node Revert action handles queued Clear Keys via a new
revert-pending-clear endpoint that drops just the flag and keeps the
binding, so backing out a single queued clear no longer requires the
global Discard Pending.
The orphan-prevention error messages on the delete and unassign
paths now point at Clear Key as the remediation.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
---
Changes v2 -> v3:
* Renamed from Reissue Key to Clear Key (the v2 action did not
actually round-trip through the shop). Pool flag renamed from
pending_reissue to pending_clear; "Reissue" stays reserved for the
future shop-side action.
* v2's "queue reissue + cancel" foreign-key-adoption shortcut is
dropped in favour of the explicit Adopt Key action introduced in
v3-0009.
* Per-node Revert handles queued clears via a new
revert-pending-clear endpoint, so backing out a single queued clear no
longer needs the global Discard Pending.
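For reviewers, a minimal sketch (not part of the patch) of the per-node clear / revert / apply cycle through the typed client as extended below; the function name and the `proxmox_client::HttpApiClient` import path are placeholders, the client methods are the ones this commit adds:
    use anyhow::Error;
    use pdm_client::PdmClient;
    use proxmox_client::HttpApiClient; // import path assumed for this sketch
    async fn free_key_from_node<T: HttpApiClient>(
        client: &PdmClient<T>,
        remote: &str,
        node: &str,
    ) -> Result<(), Error> {
        // Queue the clear; this only flips `pending_clear` on the bound pool entry.
        client.subscription_queue_clear(remote, node, None).await?;
        // To back out just this node, the per-node revert drops the queued clear
        // and keeps the binding (no global Discard Pending needed):
        //   client.subscription_revert_pending_clear(remote, node, None).await?;
        // Otherwise commit: Apply Pending removes the subscription on the remote,
        // releases the binding and frees the key for reassignment.
        if let Some(upid) = client.subscription_apply_pending(None).await? {
            client.wait_for_local_task(&upid).await?;
        }
        Ok(())
    }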
cli/client/src/subscriptions.rs | 66 +++
docs/subscription-registry.rst | 25 +-
lib/pdm-api-types/src/subscription.rs | 6 +-
lib/pdm-client/src/lib.rs | 61 +++
server/src/api/subscriptions/mod.rs | 377 ++++++++++++++++--
ui/src/configuration/subscription_keys.rs | 9 +-
ui/src/configuration/subscription_registry.rs | 169 +++++++-
7 files changed, 647 insertions(+), 66 deletions(-)
diff --git a/cli/client/src/subscriptions.rs b/cli/client/src/subscriptions.rs
index 00c06ada..b9172a2e 100644
--- a/cli/client/src/subscriptions.rs
+++ b/cli/client/src/subscriptions.rs
@@ -40,6 +40,14 @@ pub fn cli() -> CommandLineInterface {
.insert("auto-assign", CliCommand::new(&API_METHOD_AUTO_ASSIGN))
.insert("apply-pending", CliCommand::new(&API_METHOD_APPLY_PENDING))
.insert("clear-pending", CliCommand::new(&API_METHOD_CLEAR_PENDING))
+ .insert(
+ "clear-key",
+ CliCommand::new(&API_METHOD_CLEAR_KEY).arg_param(&["remote", "node"]),
+ )
+ .insert(
+ "revert-clear",
+ CliCommand::new(&API_METHOD_REVERT_CLEAR).arg_param(&["remote", "node"]),
+ )
.into()
}
@@ -149,6 +157,9 @@ async fn list_keys() -> Result<(), Error> {
let key_width = keys.iter().map(|k| k.key.len()).max().unwrap_or(20);
for key in &keys {
let assignment = match (&key.remote, &key.node) {
+ (Some(r), Some(n)) if key.pending_clear => {
+ format!("{r}/{n} [clear queued]")
+ }
(Some(r), Some(n)) => format!("{r}/{n}"),
_ => "(unassigned)".to_string(),
};
@@ -297,6 +308,61 @@ async fn auto_assign(apply: bool) -> Result<(), Error> {
Ok(())
}
+#[api(
+ input: {
+ properties: {
+ remote: { schema: REMOTE_ID_SCHEMA },
+ node: { schema: NODE_SCHEMA },
+ digest: {
+ schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
+ optional: true,
+ },
+ },
+ },
+)]
+/// Drop a queued Clear Key on a remote node while keeping the pool binding.
+async fn revert_clear(
+ remote: String,
+ node: String,
+ digest: Option<String>,
+) -> Result<(), Error> {
+ let digest = digest.map(ConfigDigest::from);
+ client()?
+ .subscription_revert_pending_clear(&remote, &node, digest)
+ .await?;
+ println!("Reverted pending clear on {remote}/{node}.");
+ Ok(())
+}
+
+#[api(
+ input: {
+ properties: {
+ remote: { schema: REMOTE_ID_SCHEMA },
+ node: { schema: NODE_SCHEMA },
+ digest: {
+ schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
+ optional: true,
+ },
+ },
+ },
+)]
+/// Queue a Clear Key on a remote node so its subscription can be removed at next Apply Pending.
+///
+/// Refuses if no pool entry is bound to (remote, node): foreign live subscriptions must first
+/// be imported via the explicit Adopt Key action, never as a side effect of queueing a clear.
+async fn clear_key(
+ remote: String,
+ node: String,
+ digest: Option<String>,
+) -> Result<(), Error> {
+ let digest = digest.map(ConfigDigest::from);
+ client()?
+ .subscription_queue_clear(&remote, &node, digest)
+ .await?;
+ println!("Queued Clear Key on {remote}/{node}; run apply-pending to commit.");
+ Ok(())
+}
+
#[api(
input: {
properties: {
diff --git a/docs/subscription-registry.rst b/docs/subscription-registry.rst
index f1e6fd5b..68b879be 100644
--- a/docs/subscription-registry.rst
+++ b/docs/subscription-registry.rst
@@ -24,21 +24,28 @@ configured remote alongside any pending plan from the pool. Nodes that already h
registry assigned appear with the live level; nodes with a pending pool assignment show a clock
icon until the change is pushed to the remote.
-From this view an operator can clear a pending assignment or remove the key from the pool entirely,
-which is convenient when a node is known to be wrong without first having to find the matching entry
-on the key list.
+From this view an operator can revert a pending change on the selected node (an unpushed
+assignment or a queued Clear Key) or queue a new Clear Key. Clear Key frees the live
+subscription key from a node so it can be reassigned elsewhere. The action is queued until it
+is committed via Apply Pending or reverted on a per-node basis.
-Assignment
-----------
+Assignment and Clearing
+-----------------------
A key can be pinned to a single node manually.
The Auto-Assign action proposes a plan that fills unsubscribed nodes from free pool keys. For
-Proxmox VE, the smallest covering key by socket count is chosen, so a 4-socket key is not used on a
-2-socket host while a larger host stays unsubscribed.
+Proxmox VE, the smallest covering key by socket count is chosen, so a 4-socket key is not used
+on a 2-socket host while a larger host stays unsubscribed.
-The proposed plan can be inspected before it is applied. Apply Pending pushes the queued keys to
-their target nodes; if a push fails the remaining queue is kept intact for retry. Discard Pending
+The Clear Key action queues the live subscription on the selected node for removal. The
+action requires the (remote, node) to already be tracked by the pool. Apply Pending later
+issues the removal on the remote and releases the pool binding so the key becomes available
+for reassignment. Discard Pending drops the queued clear without touching the remote; the
+binding stays intact and the operator can retry.
+
+The proposed plan can be inspected before it is applied. Apply Pending walks the queue in
+order; if any push or clear fails the remaining queue is kept intact for retry. Discard Pending
drops the plan without touching any remote.
Permissions
diff --git a/lib/pdm-api-types/src/subscription.rs b/lib/pdm-api-types/src/subscription.rs
index 559f725d..7d3c8436 100644
--- a/lib/pdm-api-types/src/subscription.rs
+++ b/lib/pdm-api-types/src/subscription.rs
@@ -352,7 +352,7 @@ pub struct SubscriptionKeyEntry {
/// to free the key from `remote`/`node` so it can be reassigned to a different node.
///
/// Apply Pending issues a DELETE on the remote and then clears `remote`/`node` on success.
- /// Clear Pending only resets this flag and leaves the binding untouched so the operator can
+ /// Discard Pending only resets this flag and leaves the binding untouched so the operator can
/// retry. A bare flag is enough since the (remote, node) binding lives next to it.
///
/// Omitted from the serialised representation when false so the on-disk section and the
@@ -552,7 +552,7 @@ pub struct RemoteNodeStatus {
/// Current key on the node (from remote query).
#[serde(skip_serializing_if = "Option::is_none")]
pub current_key: Option<String>,
- /// True when the pool entry bound to this node has a pending clear queued.
+ /// True when the pool has a clear queued for this node. Omitted on the wire when false.
#[serde(default, skip_serializing_if = "std::ops::Not::not")]
pub pending_clear: bool,
}
@@ -562,7 +562,7 @@ pub struct RemoteNodeStatus {
#[serde(rename_all = "kebab-case")]
/// Result of the bulk clear-pending API endpoint.
pub struct ClearPendingResult {
- /// Number of pool entries whose pending push or reissue was cleared.
+ /// Number of pool entries whose pending push or clear was cleared.
pub cleared: u32,
}
diff --git a/lib/pdm-client/src/lib.rs b/lib/pdm-client/src/lib.rs
index 1fed0e85..530f2b5b 100644
--- a/lib/pdm-client/src/lib.rs
+++ b/lib/pdm-client/src/lib.rs
@@ -901,6 +901,9 @@ impl<T: HttpApiClient> PdmClient<T> {
/// server-side wait surface lands this method becomes a single GET with no behaviour change
/// for callers.
///
+ /// No built-in time bound; wrap in `tokio::time::timeout` if needed. Dropping the future
+ /// stops the client-side polling only - the server-side worker keeps running.
+ ///
/// Native-only: the polling loop relies on `tokio::time::sleep`, which is not available on
/// the wasm32 target the UI builds for.
#[cfg(not(target_arch = "wasm32"))]
@@ -1270,6 +1273,64 @@ impl<T: HttpApiClient> PdmClient<T> {
.data)
}
+ /// Queue a clear for the subscription on `remote`/`node`. Apply Pending later removes the
+ /// subscription from the node so the key can be reassigned elsewhere; Discard Pending undoes
+ /// the queueing without touching the remote.
+ pub async fn subscription_queue_clear(
+ &self,
+ remote: &str,
+ node: &str,
+ digest: Option<ConfigDigest>,
+ ) -> Result<(), Error> {
+ #[derive(Serialize)]
+ struct ClearArgs<'a> {
+ remote: &'a str,
+ node: &'a str,
+ #[serde(skip_serializing_if = "Option::is_none")]
+ digest: Option<ConfigDigest>,
+ }
+ self.0
+ .post(
+ "/api2/extjs/subscriptions/queue-clear",
+ &ClearArgs {
+ remote,
+ node,
+ digest,
+ },
+ )
+ .await?
+ .nodata()
+ }
+
+ /// Drop a queued Clear Key on `remote`/`node` while keeping the pool binding. Used by the
+ /// per-node Revert action; the global Discard Pending path scrubs every pending change at
+ /// once.
+ pub async fn subscription_revert_pending_clear(
+ &self,
+ remote: &str,
+ node: &str,
+ digest: Option<ConfigDigest>,
+ ) -> Result<(), Error> {
+ #[derive(Serialize)]
+ struct Args<'a> {
+ remote: &'a str,
+ node: &'a str,
+ #[serde(skip_serializing_if = "Option::is_none")]
+ digest: Option<ConfigDigest>,
+ }
+ self.0
+ .post(
+ "/api2/extjs/subscriptions/revert-pending-clear",
+ &Args {
+ remote,
+ node,
+ digest,
+ },
+ )
+ .await?
+ .nodata()
+ }
+
/// Clear every pending assignment in one bulk transaction; returns the count of cleared
/// entries.
pub async fn subscription_clear_pending(
diff --git a/server/src/api/subscriptions/mod.rs b/server/src/api/subscriptions/mod.rs
index aa3146ec..9c313e8c 100644
--- a/server/src/api/subscriptions/mod.rs
+++ b/server/src/api/subscriptions/mod.rs
@@ -51,6 +51,14 @@ const SUBDIRS: SubdirMap = &sorted!([
),
("keys", &KEYS_ROUTER),
("node-status", &Router::new().get(&API_METHOD_NODE_STATUS)),
+ (
+ "queue-clear",
+ &Router::new().post(&API_METHOD_QUEUE_CLEAR)
+ ),
+ (
+ "revert-pending-clear",
+ &Router::new().post(&API_METHOD_REVERT_PENDING_CLEAR),
+ ),
]);
const KEYS_ROUTER: Router = Router::new()
@@ -288,7 +296,7 @@ fn get_key(key: String, rpcenv: &mut dyn RpcEnvironment) -> Result<SubscriptionK
/// `PRIV_RESOURCE_MODIFY` on that remote, so an audit-only operator cannot release a key
/// another admin had pinned. Refuses if the key is currently the live active key on its bound
/// node, since dropping the pool entry would orphan that subscription on the remote: the
-/// operator must release the live subscription on the remote first.
+/// operator must run Clear Key on the Node Subscription Status panel first.
async fn delete_key(
key: String,
digest: Option<ConfigDigest>,
@@ -371,7 +379,7 @@ async fn delete_key(
http_bail!(
BAD_REQUEST,
"key '{key}' is currently bound to a remote node with a live active \
- subscription; release it on the remote first"
+ subscription; run Clear Key on the Node Subscription Status panel first"
);
}
@@ -514,7 +522,8 @@ async fn set_assignment(
http_bail!(
BAD_REQUEST,
"key '{key}' is currently bound to a remote node with a live active \
- subscription; release it on the remote before rebinding"
+ subscription; run Clear Key on the Node Subscription Status panel before \
+ rebinding"
);
}
@@ -551,6 +560,13 @@ async fn set_assignment(
.expect("entry verified to exist under lock above");
entry.remote = Some(remote);
entry.node = Some(node);
+ if post_moves {
+ // Drop any clear queued against the previous owner so it does not silently fire on
+ // the new node at the next Apply Pending. Only on an actual move - re-asserting the
+ // same binding must not silently cancel a queued Clear Key (use Revert / Clear
+ // Pending for that).
+ entry.pending_clear = false;
+ }
pdm_config::subscriptions::save_config(&config)
})
@@ -577,8 +593,8 @@ async fn set_assignment(
/// Drop the remote-node binding for a pool key.
///
/// Refuses when the binding is currently synced (the assigned key is the live active key on
-/// its remote): unassigning then would orphan that subscription, so the operator must release
-/// the live subscription on the remote first.
+/// its remote): unassigning then would orphan that subscription, so the operator must run
+/// Clear Key on the Node Subscription Status panel first.
async fn clear_assignment(
key: String,
digest: Option<ConfigDigest>,
@@ -658,7 +674,7 @@ async fn clear_assignment(
http_bail!(
BAD_REQUEST,
"key '{key}' is currently bound to a remote node with a live active \
- subscription; release it on the remote first"
+ subscription; run Clear Key on the Node Subscription Status panel first"
);
}
// Safe: the earlier `config.get(&key).cloned()` above proved the key exists, and the
@@ -668,6 +684,9 @@ async fn clear_assignment(
.expect("entry verified to exist under lock above");
entry.remote = None;
entry.node = None;
+ // pending_clear without a binding is meaningless - reset so a later reassignment does
+ // not re-trigger a stale teardown.
+ entry.pending_clear = false;
pdm_config::subscriptions::save_config(&config)
})
@@ -679,9 +698,9 @@ async fn clear_assignment(
/// Pre-lock check for the unassign / delete-key paths ([`clear_assignment`] and [`delete_key`]):
/// returns the (remote, node) the entry is currently active on, if any, so the lock-protected
-/// branch can refuse the operation and prompt the operator to release the live subscription
-/// on the remote first. Returns `None` for entries with no binding, no live subscription, or
-/// a live subscription whose key does not match the entry.
+/// branch can refuse the operation and prompt the operator to run Clear Key first. Returns
+/// `None` for entries with no binding, no live subscription, or a live subscription whose key
+/// does not match the entry.
///
/// Takes the binding from the caller's pre-read entry rather than re-reading config so the
/// remote we hit on the network is the one the caller's pre-priv check already covered: a
@@ -749,6 +768,192 @@ async fn push_key_to_remote(remote: &Remote, key: &str, node_name: &str) -> Resu
Ok(())
}
+/// Tear down a node's subscription via the remote's `/nodes/{node}/subscription` endpoint.
+async fn delete_subscription_on_remote(
+ remote: &Remote,
+ product_type: ProductType,
+ node_name: &str,
+) -> Result<(), Error> {
+ match product_type {
+ ProductType::Pve => {
+ let client = crate::connection::make_pve_client(remote)?;
+ client.delete_subscription(node_name).await?;
+ }
+ ProductType::Pbs => {
+ let client = crate::connection::make_pbs_client(remote)?;
+ client.delete_subscription().await?;
+ }
+ ProductType::Pmg | ProductType::Pom => {
+ bail!("PDM cannot clear '{product_type}' keys: no remote support yet");
+ }
+ }
+
+ info!("removed subscription from {}/{node_name}", remote.id);
+ Ok(())
+}
+
+#[api(
+ input: {
+ properties: {
+ remote: { schema: REMOTE_ID_SCHEMA },
+ // NODE_SCHEMA rejects path-traversal input before it ends up interpolated into
+ // the remote URL `/api2/extjs/nodes/{node}/subscription`.
+ node: { schema: NODE_SCHEMA },
+ digest: {
+ type: ConfigDigest,
+ optional: true,
+ },
+ },
+ },
+ access: {
+ permission: &Permission::Privilege(&["system"], PRIV_SYS_MODIFY, false),
+ },
+)]
+/// Queue a clear on a remote node, that is, mark its subscription for removal so the key can
+/// be reassigned elsewhere.
+///
+/// Sets `pending_clear` on the pool entry currently bound to (remote, node). Apply Pending
+/// later issues the DELETE on the remote and clears the binding on success; Discard Pending only
+/// resets the flag and leaves the binding intact so the operator can retry.
+///
+/// Refuses if no pool entry is bound to (remote, node): importing a foreign live subscription
+/// into the pool is the job of the explicit Adopt Key action, not a side effect of queueing a
+/// clear. The split keeps the audit trail honest; queueing a clear should only ever schedule
+/// a removal, never silently materialise new pool state the operator did not ask for.
+///
+/// Per-remote `PRIV_RESOURCE_MODIFY` is enforced inside the handler so an operator with global
+/// system access alone cannot tear down subscriptions on remotes they have no other authority on.
+async fn queue_clear(
+ remote: String,
+ node: String,
+ digest: Option<ConfigDigest>,
+ rpcenv: &mut dyn RpcEnvironment,
+) -> Result<(), Error> {
+ let auth_id: Authid = rpcenv
+ .get_auth_id()
+ .context("no authid available")?
+ .parse()?;
+ let user_info = CachedUserInfo::new()?;
+ user_info.check_privs(
+ &auth_id,
+ &["resource", &remote],
+ PRIV_RESOURCE_MODIFY,
+ false,
+ )?;
+
+ // No live fetch: the pool entry's binding is authoritative for queueing the operation; the
+ // worker re-validates against the live remote at apply time and aborts if the remote runs a
+ // different key.
+ //
+ // The lock + sync IO runs on a blocking thread so the async runtime stays free for other
+ // work even when /etc/proxmox-datacenter-manager/subscriptions is on slow storage.
+ let new_digest = tokio::task::spawn_blocking(move || -> Result<ConfigDigest, Error> {
+ let _lock = pdm_config::subscriptions::lock_config()?;
+ let (mut config, config_digest) = pdm_config::subscriptions::config()?;
+ config_digest.detect_modification(digest.as_ref())?;
+
+ let bound_id = config
+ .iter()
+ .find(|(_, e)| {
+ e.remote.as_deref() == Some(remote.as_str())
+ && e.node.as_deref() == Some(node.as_str())
+ })
+ .map(|(id, _)| id.to_string());
+
+ let Some(id) = bound_id else {
+ http_bail!(
+ BAD_REQUEST,
+ "no pool-managed assignment on {remote}/{node}; \
+ run Adopt Key first to import a foreign subscription into the pool"
+ );
+ };
+ let entry = config
+ .get_mut(&id)
+ .expect("entry verified to exist under lock above");
+ if entry.pending_clear {
+ http_bail!(BAD_REQUEST, "clear already queued for {remote}/{node}");
+ }
+ entry.pending_clear = true;
+
+ pdm_config::subscriptions::save_config(&config)
+ })
+ .await??;
+ rpcenv["digest"] = new_digest.to_hex().into();
+ Ok(())
+}
+
+#[api(
+ input: {
+ properties: {
+ remote: { schema: REMOTE_ID_SCHEMA },
+ node: { schema: NODE_SCHEMA },
+ digest: {
+ type: ConfigDigest,
+ optional: true,
+ },
+ },
+ },
+ access: {
+ permission: &Permission::Privilege(&["system"], PRIV_SYS_MODIFY, false),
+ },
+)]
+/// Drop a queued Clear Key on `remote`/`node` while keeping the binding intact.
+///
+/// Backs out a Queue Clear on a single node without going through the global Discard Pending
+/// path (which scrubs every pending change in the pool). The binding stays so the operator can
+/// retry the queueing or leave the live subscription untouched.
+async fn revert_pending_clear(
+ remote: String,
+ node: String,
+ digest: Option<ConfigDigest>,
+ rpcenv: &mut dyn RpcEnvironment,
+) -> Result<(), Error> {
+ let auth_id: Authid = rpcenv
+ .get_auth_id()
+ .context("no authid available")?
+ .parse()?;
+ let user_info = CachedUserInfo::new()?;
+ user_info.check_privs(
+ &auth_id,
+ &["resource", &remote],
+ PRIV_RESOURCE_MODIFY,
+ false,
+ )?;
+
+ let new_digest = tokio::task::spawn_blocking(move || -> Result<ConfigDigest, Error> {
+ let _lock = pdm_config::subscriptions::lock_config()?;
+ let (mut config, config_digest) = pdm_config::subscriptions::config()?;
+ config_digest.detect_modification(digest.as_ref())?;
+
+ let bound_id = config
+ .iter()
+ .find(|(_, e)| {
+ e.remote.as_deref() == Some(remote.as_str())
+ && e.node.as_deref() == Some(node.as_str())
+ })
+ .map(|(id, _)| id.to_string());
+
+ let Some(id) = bound_id else {
+ http_bail!(
+ BAD_REQUEST,
+ "no pool-managed assignment on {remote}/{node}"
+ );
+ };
+ let entry = config
+ .get_mut(&id)
+ .expect("entry verified to exist under lock above");
+ if !entry.pending_clear {
+ http_bail!(BAD_REQUEST, "no pending clear queued on {remote}/{node}");
+ }
+ entry.pending_clear = false;
+
+ pdm_config::subscriptions::save_config(&config)
+ })
+ .await??;
+ rpcenv["digest"] = new_digest.to_hex().into();
+ Ok(())
+}
+
#[api(
input: {
properties: {
@@ -985,6 +1190,9 @@ async fn bulk_assign(
if entry.remote.is_none() {
entry.remote = Some(p.remote.clone());
entry.node = Some(p.node.clone());
+ // Mirror set_assignment: stale flags from a prior binding cycle must
+ // not silently fire against the new target at the next Apply Pending.
+ entry.pending_clear = false;
applied.push(p.clone());
}
}
@@ -1160,7 +1368,6 @@ async fn apply_pending(
async fn run_apply_pending(auth_id: Authid) -> Result<(), Error> {
let user_info = CachedUserInfo::new()?;
let (remotes_config, _) = pdm_config::remotes::config()?;
- let (config, _) = pdm_config::subscriptions::config()?;
let node_statuses = collect_status_uncached(&remotes_config).await;
let pending = compute_pending(&user_info, &auth_id, &node_statuses)?;
@@ -1176,34 +1383,103 @@ async fn run_apply_pending(auth_id: Authid) -> Result<(), Error> {
for entry in pending {
let Some(remote) = remotes_config.get(&entry.remote) else {
bail!(
- "remote '{}' vanished, aborting after {ok}/{total} successful pushes",
+ "remote '{}' vanished, aborting after {ok}/{total} successful operations",
entry.remote,
);
};
- // Honour the case where the operator unassigned the key while the worker was queued.
+ // Re-read the pool on every iteration: the previous iteration's `save_config` (Clear
+ // branch) makes the at-start snapshot stale, and a parallel admin's Discard Pending
+ // between worker start and this iteration must cancel a planned op rather than have us
+ // execute it against a flag the operator just retracted.
+ let (config, _) = pdm_config::subscriptions::config()?;
if !pool_assignment_still_valid(&config, &entry) {
info!(
- "skipping {}/{}: pool assignment changed before worker ran",
+ "skipping {}/{}: pool entry changed before worker ran",
+ entry.remote, entry.node
+ );
+ continue;
+ }
+ // For each branch, the entry's current `pending_clear` state must still match the planned
+ // op; otherwise the operator's intent has changed under us (a Discard Pending fired for
+ // Clear, or a parallel queue_clear fired for Push).
+ let current_pending_clear = config
+ .get(&entry.key)
+ .map(|e| e.pending_clear)
+ .unwrap_or(false);
+ let op_consistent = match entry.op {
+ PendingOp::Push => !current_pending_clear,
+ PendingOp::Clear => current_pending_clear,
+ };
+ if !op_consistent {
+ info!(
+ "skipping {}/{}: pending state changed before this step ran",
entry.remote, entry.node
);
continue;
}
let redacted = redact_key(&entry.key);
- info!("pushing {redacted} to {}/{}...", entry.remote, entry.node);
- if let Err(err) = push_key_to_remote(remote, &entry.key, &entry.node).await {
- bail!(
- "push of {redacted} to {}/{} failed after {ok}/{total} successful pushes: {err}",
- entry.remote,
- entry.node,
- );
+ match entry.op {
+ PendingOp::Push => {
+ info!("pushing {redacted} to {}/{}...", entry.remote, entry.node);
+ if let Err(err) = push_key_to_remote(remote, &entry.key, &entry.node).await {
+ bail!(
+ "push of {redacted} to {}/{} failed after {ok}/{total} successful operations: {err}",
+ entry.remote,
+ entry.node,
+ );
+ }
+ }
+ PendingOp::Clear => {
+ let product_type = match ProductType::from_key(&entry.key) {
+ Some(ty) => ty,
+ None => bail!("unrecognised key format: {redacted}"),
+ };
+ info!(
+ "clearing {redacted} from {}/{}...",
+ entry.remote, entry.node,
+ );
+ if let Err(err) =
+ delete_subscription_on_remote(remote, product_type, &entry.node).await
+ {
+ bail!(
+ "clear of {redacted} on {}/{} failed after {ok}/{total} successful operations: {err}",
+ entry.remote,
+ entry.node,
+ );
+ }
+ // Clear the binding under the config lock. A subsequent compute_pending call must
+ // not propose another push or clear for the same entry. The lock + sync IO run
+ // on a blocking thread so the worker does not park one of the async runtime's
+ // worker threads on file IO.
+ let entry_key = entry.key.clone();
+ let entry_remote = entry.remote.clone();
+ let entry_node = entry.node.clone();
+ tokio::task::spawn_blocking(move || -> Result<(), Error> {
+ let _lock = pdm_config::subscriptions::lock_config()?;
+ let (mut updated, _) = pdm_config::subscriptions::config()?;
+ if let Some(stored) = updated.get_mut(&entry_key) {
+ if stored.remote.as_deref() == Some(entry_remote.as_str())
+ && stored.node.as_deref() == Some(entry_node.as_str())
+ {
+ stored.remote = None;
+ stored.node = None;
+ stored.pending_clear = false;
+ }
+ }
+ // Worker context: no `rpcenv` to set, post-save digest is unused here.
+ let _ = pdm_config::subscriptions::save_config(&updated)?;
+ Ok(())
+ })
+ .await??;
+ }
}
info!(" success");
invalidate_subscription_info_for_remote(&entry.remote);
ok += 1;
}
- info!("finished: {ok}/{total} pushes succeeded");
+ info!("finished: {ok}/{total} operations succeeded");
Ok(())
}
@@ -1221,14 +1497,19 @@ async fn run_apply_pending(auth_id: Authid) -> Result<(), Error> {
permission: &Permission::Privilege(&["system"], PRIV_SYS_MODIFY, false),
},
)]
-/// Clear every pending assignment in one bulk transaction.
+/// Drop every queued change in one bulk transaction.
///
-/// Pending = pool key bound to a remote node whose live `current_key` does not match the
-/// assigned pool key (a different live key, no key, or no row returned at all because the remote
-/// is unreachable / the node is gone). Clears only those entries the caller has
-/// `PRIV_RESOURCE_MODIFY` on; remotes the caller may only audit are skipped. Mirrors
-/// `apply-pending` but drops the assignments instead of pushing them, so an operator can disown
-/// stuck assignments without first having to bring the target back online.
+/// Two shapes of pending change are discarded:
+/// * pool key bound to a remote node whose live `current_key` does not match the assigned
+/// pool key (a different live key, no key, or no row returned at all because the remote is
+/// unreachable / the node is gone): the binding is dropped, the key returns to the free
+/// pool, and the remote is not touched.
+/// * queued Clear Keys (`pending_clear = true`): the flag is cleared; the binding and the
+/// live key on the remote stay intact.
+///
+/// Only entries the caller has `PRIV_RESOURCE_MODIFY` on are touched; remotes the caller may
+/// only audit are skipped. Mirrors `apply-pending` but drops the queue instead of pushing it,
+/// so an operator can disown stuck changes without first having to bring the target back online.
///
/// The optional `digest` is checked twice: once before the live-state fetch so a stale browser
/// tab is rejected up-front, and again under the config lock so a parallel admin edit between
@@ -1269,8 +1550,23 @@ async fn clear_pending(
if stored.remote.as_deref() == Some(entry.remote.as_str())
&& stored.node.as_deref() == Some(entry.node.as_str())
{
- stored.remote = None;
- stored.node = None;
+ match entry.op {
+ PendingOp::Clear => {
+ // Only reset the flag - leave the binding so the operator can
+ // retry the clear without having to re-import a foreign key
+ // from scratch.
+ stored.pending_clear = false;
+ }
+ PendingOp::Push => {
+ stored.remote = None;
+ stored.node = None;
+ // Defensive: an entry that flipped to pending_clear between
+ // the pre-lock snapshot and now would otherwise leave a
+ // meaningless flag on a now-unbound entry. Reset alongside the
+ // binding clear.
+ stored.pending_clear = false;
+ }
+ }
cleared += 1;
}
}
@@ -1293,12 +1589,21 @@ async fn clear_pending(
Ok(ClearPendingResult { cleared })
}
-/// Plan entry for one pending push.
+/// Plan entry for one pending push or clear.
#[derive(Clone, Debug)]
struct PendingEntry {
key: String,
remote: String,
node: String,
+ op: PendingOp,
+}
+
+#[derive(Clone, Copy, Debug, PartialEq, Eq)]
+enum PendingOp {
+ /// PUT the assigned key to the remote because the live state does not match.
+ Push,
+ /// DELETE the subscription on the remote and clear the binding on success.
+ Clear,
}
fn compute_pending(
@@ -1318,6 +1623,15 @@ fn compute_pending(
return None;
}
+ if entry.pending_clear {
+ return Some(PendingEntry {
+ key: entry.key.clone(),
+ remote: remote.to_string(),
+ node: node.to_string(),
+ op: PendingOp::Clear,
+ });
+ }
+
// Pending push = the live current key on the node does not match the assigned pool
// key. Subscription health (Invalid, Expired, ...) is a separate axis surfaced via
// the Status column; re-pushing the same key would not change the shop's verdict.
@@ -1335,6 +1649,7 @@ fn compute_pending(
key: entry.key.clone(),
remote: remote.to_string(),
node: node.to_string(),
+ op: PendingOp::Push,
})
})
.collect())
diff --git a/ui/src/configuration/subscription_keys.rs b/ui/src/configuration/subscription_keys.rs
index e43543ae..5807504d 100644
--- a/ui/src/configuration/subscription_keys.rs
+++ b/ui/src/configuration/subscription_keys.rs
@@ -5,7 +5,9 @@ use std::rc::Rc;
use anyhow::Error;
use pdm_api_types::remotes::RemoteType;
-use pdm_api_types::subscription::{ProductType, RemoteNodeStatus, SubscriptionKeyEntry};
+use pdm_api_types::subscription::{
+ ProductType, RemoteNodeStatus, SubscriptionKeyEntry,
+};
use yew::virtual_dom::{Key, VComp, VNode};
use proxmox_yew_comp::percent_encoding::percent_encode_component;
@@ -345,7 +347,7 @@ impl LoadableComponent for SubscriptionKeyGridComp {
});
let body = match assignment {
Some((remote, node)) => tr!(
- "Remove {key} from the key pool? It is still assigned to {remote}/{node}; the assignment is released without removing the subscription on the remote.",
+ "Remove {key} from the key pool? It is still assigned to {remote}/{node}; the assignment is released without removing any subscription on the remote. Use Clear Key on the Node Subscription Status panel first to release a live subscription on that node too.",
key = key.to_string(),
remote = remote,
node = node,
@@ -373,7 +375,8 @@ impl LoadableComponent for SubscriptionKeyGridComp {
/// Returns true when the pool entry's binding currently runs the same key on the remote and is
/// Active - meaning a clear-assignment would orphan the live subscription. Mirrors the
-/// server-side gate; the operator must release the live subscription on the remote first.
+/// server-side gate; the operator must run Clear Key on the Node Subscription Status panel
+/// first.
fn is_synced_assignment(entry: &SubscriptionKeyEntry, statuses: &[RemoteNodeStatus]) -> bool {
let (Some(remote), Some(node)) = (entry.remote.as_deref(), entry.node.as_deref()) else {
return false;
diff --git a/ui/src/configuration/subscription_registry.rs b/ui/src/configuration/subscription_registry.rs
index 513fa3a0..7471fae4 100644
--- a/ui/src/configuration/subscription_registry.rs
+++ b/ui/src/configuration/subscription_registry.rs
@@ -209,6 +209,14 @@ impl From<SubscriptionRegistryProps> for VNode {
}
}
+/// What the per-node Revert button should do on the selected entry.
+enum RevertTarget {
+ /// Drop the pool's binding (the key was assigned but never pushed). Carries the pool key.
+ Unassign(String),
+ /// Cancel a queued Clear Key while keeping the binding intact.
+ CancelClear { remote: String, node: String },
+}
+
pub enum Msg {
LoadFinished {
nodes: Vec<RemoteNodeStatus>,
@@ -220,9 +228,11 @@ pub enum Msg {
BulkAssignApply(AutoAssignProposal),
ApplyPending,
ClearPending,
- /// Revert the pending change on the currently-selected node: drop the unpushed pool
- /// assignment without touching the remote.
+ /// Revert the pending change on the currently-selected node: drop an unpushed binding or
+ /// cancel a queued Clear Key (dispatched on the [`RevertTarget`] variant).
RevertSelectedNode,
+ /// Open the confirmation dialog for queueing a clear on the selected node.
+ QueueClearForSelectedNode,
/// Open the Assign Key dialog for the currently-selected node.
AssignKeyToSelectedNode,
}
@@ -232,6 +242,13 @@ pub enum ViewState {
ConfirmAutoAssign(AutoAssignProposal),
ConfirmApplyPending,
ConfirmClearPending,
+ /// Pending confirmation to queue a clear for `(remote, node)`. The current key on the
+ /// node is shown in the dialog body when available.
+ ConfirmQueueClear {
+ remote: String,
+ node: String,
+ current_key: Option<String>,
+ },
/// Assign a pool key to the given node. Opens from the right panel's Assign Key button.
AssignKeyToNode {
remote: String,
@@ -590,23 +607,46 @@ impl LoadableComponent for SubscriptionRegistryComp {
});
}
Msg::RevertSelectedNode => {
- let Some(key) = self.clear_assignment_target_key() else {
+ let Some(target) = self.revert_target() else {
return false;
};
let link = ctx.link().clone();
let digest = self.pool_digest.clone();
ctx.link().spawn(async move {
- let url = format!(
- "/subscriptions/keys/{}/assignment",
- percent_encode_component(&key),
- );
- let query = digest.map(|d| serde_json::json!({ "digest": d }));
- if let Err(err) = http_delete(&url, query).await {
- link.show_error(tr!("Revert"), err.to_string(), true);
+ let err_msg: Option<String> = match target {
+ RevertTarget::Unassign(key) => {
+ let url = format!(
+ "/subscriptions/keys/{}/assignment",
+ percent_encode_component(&key),
+ );
+ let query = digest.map(|d| serde_json::json!({ "digest": d }));
+ http_delete(&url, query).await.err().map(|e| e.to_string())
+ }
+ RevertTarget::CancelClear { remote, node } => {
+ let digest = digest.map(pdm_client::ConfigDigest::from);
+ crate::pdm_client()
+ .subscription_revert_pending_clear(&remote, &node, digest)
+ .await
+ .err()
+ .map(|e| e.to_string())
+ }
+ };
+ if let Some(msg) = err_msg {
+ link.show_error(tr!("Revert"), msg, true);
}
link.send_reload();
});
}
+ Msg::QueueClearForSelectedNode => {
+ let Some((remote, node, current_key)) = self.selected_node_for_clear() else {
+ return false;
+ };
+ ctx.link().change_view(Some(ViewState::ConfirmQueueClear {
+ remote,
+ node,
+ current_key,
+ }));
+ }
Msg::AssignKeyToSelectedNode => {
let Some((remote, node, ty, node_sockets)) =
self.assign_target_for_selected_node()
@@ -770,7 +810,8 @@ impl LoadableComponent for SubscriptionRegistryComp {
Some(
ConfirmDialog::new(
tr!("Discard Pending Changes"),
- tr!("Discard all assignments that have not yet been applied to the remote nodes?"),
+ tr!("Discard all queued assignments and cancel all queued Clear Key actions? \
+ The remote nodes are not touched."),
)
.icon_class("fa fa-question-circle")
.on_confirm({
@@ -787,6 +828,61 @@ impl LoadableComponent for SubscriptionRegistryComp {
ViewState::ConfirmAutoAssign(proposal) => {
Some(self.render_auto_assign_dialog(ctx, proposal))
}
+ ViewState::ConfirmQueueClear {
+ remote,
+ node,
+ current_key,
+ } => {
+ use pwt::widget::ConfirmDialog;
+ let question = match current_key {
+ Some(k) => tr!(
+ "Queue a clear of {key} on {remote}/{node}?",
+ key = k.clone(),
+ remote = remote.clone(),
+ node = node.clone(),
+ ),
+ None => tr!(
+ "Queue a clear on {remote}/{node}?",
+ remote = remote.clone(),
+ node = node.clone(),
+ ),
+ };
+ let body = Column::new()
+ .gap(2)
+ .with_child(Container::from_tag("p").with_child(question))
+ .with_child(Container::from_tag("p").with_child(tr!(
+ "'Apply Pending' will remove the subscription from the node so the key can be reassigned elsewhere; 'Discard Pending' undoes the queueing without touching the remote."
+ )));
+ let remote_for_cb = remote.clone();
+ let node_for_cb = node.clone();
+ let link = ctx.link().clone();
+ let close_link = ctx.link().clone();
+ let digest_for_cb = self.pool_digest.clone();
+ Some(
+ ConfirmDialog::default()
+ .title(tr!("Clear Key"))
+ .confirm_message(body)
+ .on_confirm(move |_| {
+ let link = link.clone();
+ let remote = remote_for_cb.clone();
+ let node = node_for_cb.clone();
+ let digest = digest_for_cb.clone();
+ link.clone().spawn(async move {
+ let digest = digest.map(pdm_client::ConfigDigest::from);
+ if let Err(err) = crate::pdm_client()
+ .subscription_queue_clear(&remote, &node, digest)
+ .await
+ {
+ link.show_error(tr!("Clear Key"), err.to_string(), true);
+ }
+ link.change_view(None);
+ link.send_reload();
+ });
+ })
+ .on_close(move |_| close_link.change_view(None))
+ .into(),
+ )
+ }
ViewState::AssignKeyToNode {
remote,
node,
@@ -847,7 +943,8 @@ impl SubscriptionRegistryComp {
.class(FlexFit);
let can_assign_key = self.assign_target_for_selected_node().is_some();
- let can_revert = self.clear_assignment_target_key().is_some();
+ let can_revert = self.revert_target().is_some();
+ let can_clear_key = self.selected_node_for_clear().is_some();
let assign_button = Tooltip::new(
Button::new(tr!("Assign Key"))
.icon_class("fa fa-link")
@@ -865,8 +962,18 @@ impl SubscriptionRegistryComp {
.on_activate(ctx.link().callback(|_| Msg::RevertSelectedNode)),
)
.tip(tr!(
- "Revert the pending change on the selected node: drop an unpushed pool \
- assignment without touching the remote."
+ "Drop the pending pool change on the selected node."
+ ));
+ let clear_key_button = Tooltip::new(
+ Button::new(tr!("Clear Key"))
+ .icon_class("fa fa-recycle")
+ .disabled(!can_clear_key)
+ .on_activate(ctx.link().callback(|_| Msg::QueueClearForSelectedNode)),
+ )
+ .tip(tr!(
+ "Queue the live subscription on the selected node for removal at next Apply \
+ Pending, freeing the key for reassignment. Requires the node to be \
+ pool-managed."
));
Panel::new()
@@ -877,6 +984,7 @@ impl SubscriptionRegistryComp {
.title(tr!("Node Subscription Status"))
.with_tool(assign_button)
.with_tool(revert_button)
+ .with_tool(clear_key_button)
.with_child(table)
}
@@ -926,19 +1034,40 @@ impl SubscriptionRegistryComp {
.find(|n| n.remote == remote && n.node == node)
}
- /// Returns the assigned key when Revert is appropriate: there is a binding AND it has not
- /// yet been pushed (different from current_key, or the node is not Active). For an
- /// already-synced assignment, clearing would orphan the live subscription on the remote,
- /// so the operator must take a different path (introduced later in the series).
- fn clear_assignment_target_key(&self) -> Option<String> {
+ /// Resolve the selected node into a Revert action target.
+ ///
+ /// Two kinds of pending state are revertible per-node: an unpushed pool assignment (drop
+ /// the binding entirely, same as the old Clear Assignment), and a queued Clear Key (drop
+ /// the flag, keep the binding). A synced binding without a queued clear is not pending,
+ /// so the button is disabled; freeing such a binding requires Clear Key.
+ fn revert_target(&self) -> Option<RevertTarget> {
let n = self.selected_node_status()?;
+ if n.pending_clear {
+ return Some(RevertTarget::CancelClear {
+ remote: n.remote.clone(),
+ node: n.node.clone(),
+ });
+ }
let assigned = n.assigned_key.as_ref()?;
let synced = n.status == proxmox_subscription::SubscriptionStatus::Active
&& n.current_key.as_deref() == Some(assigned.as_str());
if synced {
return None;
}
- Some(assigned.clone())
+ Some(RevertTarget::Unassign(assigned.clone()))
+ }
+
+ /// Returns `(remote, node, current_key)` when the selected node has a pool-managed
+ /// subscription that can be queued for clear: there is a live key, no clear is already
+ /// queued for it, and a pool entry is bound to (remote, node). The pool-binding gate
+ /// mirrors the server-side refusal so foreign live subscriptions do not offer Clear Key
+ /// (they need Adopt Key first).
+ fn selected_node_for_clear(&self) -> Option<(String, String, Option<String>)> {
+ let n = self.selected_node_status()?;
+ if n.pending_clear || n.current_key.is_none() || n.assigned_key.is_none() {
+ return None;
+ }
+ Some((n.remote.clone(), n.node.clone(), n.current_key.clone()))
}
/// Returns `(remote, node, type, node_sockets)` for the right-panel Assign button:
--
2.47.3
* [PATCH datacenter-manager v3 09/12] subscription: add Adopt Key action for foreign live subscriptions
2026-05-15 7:43 [PATCH datacenter-manager v3 00/12] subscription key pool registry Thomas Lamprecht
` (7 preceding siblings ...)
2026-05-15 7:43 ` [PATCH datacenter-manager v3 08/12] subscription: add Clear Key action and per-node revert Thomas Lamprecht
@ 2026-05-15 7:43 ` Thomas Lamprecht
2026-05-15 7:43 ` [PATCH datacenter-manager v3 10/12] subscription: add Adopt All bulk action Thomas Lamprecht
` (2 subsequent siblings)
11 siblings, 0 replies; 16+ messages in thread
From: Thomas Lamprecht @ 2026-05-15 7:43 UTC (permalink / raw)
To: pdm-devel
Add a dedicated endpoint plus CLI / UI wiring for importing a
remote node's live subscription into the pool as a bound entry,
without touching the remote. The action covers the case where a
key was already installed on a node before PDM took over its pool
management; bringing it under the registry is required for any
subsequent pool action to operate on it.
Three sub-cases for the live key (see the sketch after this list):
- Not in the pool: insert with source=Adopted, bound to (remote, node).
- In the pool but unbound: rebind, leaving the source field as-is
so a key originally added by hand keeps its Manual label.
- In the pool but bound elsewhere: refused, the operator has to
reconcile the binding first.
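Reduced to a sketch, with made-up type and helper names purely for
illustration (the real handler operates on the locked section config
and additionally rejects targets that already carry a pool binding):

    use std::collections::{btree_map::Entry, BTreeMap};

    struct PoolEntry {
        remote: Option<String>,
        node: Option<String>,
        adopted: bool, // stands in for source = Adopted vs. Manual
    }

    fn adopt(
        pool: &mut BTreeMap<String, PoolEntry>,
        key: &str,
        remote: &str,
        node: &str,
    ) -> Result<(), String> {
        match pool.entry(key.to_string()) {
            Entry::Occupied(mut o) => {
                let e = o.get_mut();
                if e.remote.is_some() || e.node.is_some() {
                    // bound elsewhere: refuse, the operator reconciles first
                    return Err("key already bound to another node".into());
                }
                // in the pool and unbound: rebind, source label stays as-is
                e.remote = Some(remote.to_string());
                e.node = Some(node.to_string());
            }
            Entry::Vacant(v) => {
                // not in the pool yet: fresh entry with source = Adopted
                v.insert(PoolEntry {
                    remote: Some(remote.to_string()),
                    node: Some(node.to_string()),
                    adopted: true,
                });
            }
        }
        Ok(())
    }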
The endpoint pre-fetches the pool digest before the live network
read and refuses with CONFLICT on mismatch, so a parallel
set_assignment landing during the .await cannot silently rebind
the key. Per-remote PRIV_RESOURCE_MODIFY is enforced inside the
handler so operators with only global system access cannot pull
subscriptions off remotes they have no other authority on.
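The digest guard boils down to the following ordering; the helper
names below are illustrative stand-ins, the real code goes through
pdm_config::subscriptions and answers with HTTP CONFLICT:

    use std::collections::BTreeMap;

    type Digest = u64;
    type Pool = BTreeMap<String, String>;
    struct ConfigLock;

    // stand-ins for the real config / remote-client plumbing
    fn read_pool_config() -> (Pool, Digest) { (Pool::new(), 0) }
    fn lock_pool_config() -> ConfigLock { ConfigLock }
    fn save_pool_config(_cfg: &Pool) {}
    fn fetch_live_key_from_remote() -> String { "example-key".into() }

    fn adopt_guarded() -> Result<(), String> {
        let (_cfg, pre_digest) = read_pool_config(); // 1. snapshot, no lock
        let live_key = fetch_live_key_from_remote(); // 2. slow remote read
        let _lock = lock_pool_config();              // 3. take the config lock
        let (mut cfg, digest) = read_pool_config();  // 4. re-read under the lock
        if digest != pre_digest {
            // a parallel change landed during step 2 -> CONFLICT, caller retries
            return Err("pool config changed during live fetch".into());
        }
        cfg.insert(live_key, "bound".into());        // 5. now safe to persist
        save_pool_config(&cfg);
        Ok(())
    }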
The Node Subscription Status tree marks adoptable rows (live key
set, no pool binding yet) with a download hint icon so the action
is discoverable without consulting the docs. The pool grid gets a
new Source column exposing the Manual vs Adopted origin, hidden by
default and available via the column picker.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
---
New in v3.
cli/client/src/subscriptions.rs | 35 ++++
docs/subscription-registry.rst | 8 +
lib/pdm-api-types/src/subscription.rs | 3 +
lib/pdm-api-types/tests/test_import.rs | 29 +++
lib/pdm-client/src/lib.rs | 36 +++-
server/src/api/subscriptions/mod.rs | 167 ++++++++++++++++++
ui/src/configuration/subscription_keys.rs | 16 +-
ui/src/configuration/subscription_registry.rs | 107 ++++++++++-
8 files changed, 396 insertions(+), 5 deletions(-)
diff --git a/cli/client/src/subscriptions.rs b/cli/client/src/subscriptions.rs
index b9172a2e..c9ba5e4c 100644
--- a/cli/client/src/subscriptions.rs
+++ b/cli/client/src/subscriptions.rs
@@ -48,6 +48,10 @@ pub fn cli() -> CommandLineInterface {
"revert-clear",
CliCommand::new(&API_METHOD_REVERT_CLEAR).arg_param(&["remote", "node"]),
)
+ .insert(
+ "adopt-key",
+ CliCommand::new(&API_METHOD_ADOPT_KEY).arg_param(&["remote", "node"]),
+ )
.into()
}
@@ -308,6 +312,37 @@ async fn auto_assign(apply: bool) -> Result<(), Error> {
Ok(())
}
+#[api(
+ input: {
+ properties: {
+ remote: { schema: REMOTE_ID_SCHEMA },
+ node: { schema: NODE_SCHEMA },
+ digest: {
+ schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
+ optional: true,
+ },
+ },
+ },
+)]
+/// Adopt the live subscription on a remote node into the pool.
+///
+/// Brings a foreign subscription under PDM management without touching the remote: the live
+/// current key on `remote`/`node` is imported as a pool entry bound to that node. Refuses if
+/// the (remote, node) target already has a pool-managed binding.
+async fn adopt_key(
+ remote: String,
+ node: String,
+ digest: Option<String>,
+) -> Result<(), Error> {
+ let digest = digest.map(ConfigDigest::from);
+ client()?
+ .subscription_adopt_key(&remote, &node, digest)
+ .await?;
+ println!("Adopted live subscription on {remote}/{node} into the pool.");
+ Ok(())
+}
+
+
#[api(
input: {
properties: {
diff --git a/docs/subscription-registry.rst b/docs/subscription-registry.rst
index 68b879be..4c31c9a6 100644
--- a/docs/subscription-registry.rst
+++ b/docs/subscription-registry.rst
@@ -44,6 +44,14 @@ issues the removal on the remote and releases the pool binding so the key become
for reassignment. Discard Pending drops the queued clear without touching the remote; the
binding stays intact and the operator can retry.
+The Adopt Key action imports the live subscription on a remote node into the pool as a
+bound entry, without touching the remote. Use it to bring a pre-existing subscription -- one
+installed on a node before PDM took over its pool management -- under the registry so that
+pool actions such as Clear Key and Auto-Assign can act on it. Nodes that are eligible for
+adoption are highlighted with a download hint icon in the Node Subscription Status tree;
+the pool grid carries a hidden-by-default Source column distinguishing manually-added from
+adopted entries, which can be enabled via the column picker if the distinction matters.
+
The proposed plan can be inspected before it is applied. Apply Pending walks the queue in
order; if any push or clear fails the remaining queue is kept intact for retry. Discard Pending
drops the plan without touching any remote.
diff --git a/lib/pdm-api-types/src/subscription.rs b/lib/pdm-api-types/src/subscription.rs
index 7d3c8436..8a0a7977 100644
--- a/lib/pdm-api-types/src/subscription.rs
+++ b/lib/pdm-api-types/src/subscription.rs
@@ -310,6 +310,9 @@ pub enum SubscriptionKeySource {
/// UI or CLI, and as the `serde(default)` for entries that predate this field.
#[default]
Manual,
+ /// Imported from a remote node's live subscription via the Adopt Key action, that is, a key
+ /// that was already installed on a remote before PDM took over its pool management.
+ Adopted,
}
#[api(
diff --git a/lib/pdm-api-types/tests/test_import.rs b/lib/pdm-api-types/tests/test_import.rs
index 33601620..72177460 100644
--- a/lib/pdm-api-types/tests/test_import.rs
+++ b/lib/pdm-api-types/tests/test_import.rs
@@ -40,6 +40,35 @@ fn entry_roundtrip() {
assert_eq!(back.next_due_date.as_deref(), Some("2027-06-01"));
}
+#[test]
+fn adopted_entry_roundtrip() {
+ // Ensure SubscriptionKeySource::Adopted serializes to its kebab-case form `adopted` and
+ // parses back to the same variant, so an in-place upgrade does not silently rewrite
+ // adopted pool entries to Manual on the next save.
+ let mut config = SectionConfigData::<SubscriptionKeyEntry>::default();
+ config.insert(
+ "pbsc-1122334455".to_string(),
+ SubscriptionKeyEntry {
+ key: "pbsc-1122334455".to_string(),
+ product_type: ProductType::Pbs,
+ source: SubscriptionKeySource::Adopted,
+ remote: Some("backup-cluster".to_string()),
+ node: Some("pbs-1".to_string()),
+ ..Default::default()
+ },
+ );
+
+ let raw = SubscriptionKeyEntry::write_section_config("test", &config).expect("write failed");
+ assert!(
+ raw.contains("\tsource adopted"),
+ "expected kebab-case `adopted` in serialised form, got:\n{raw}",
+ );
+ let parsed = SubscriptionKeyEntry::parse_section_config("test", &raw).expect("parse failed");
+ let back = parsed.get("pbsc-1122334455").expect("key not found");
+ assert_eq!(back.source, SubscriptionKeySource::Adopted);
+ assert_eq!(back.remote.as_deref(), Some("backup-cluster"));
+}
+
#[test]
fn shadow_roundtrip() {
let mut shadow = SectionConfigData::<SubscriptionKeyShadow>::default();
diff --git a/lib/pdm-client/src/lib.rs b/lib/pdm-client/src/lib.rs
index 530f2b5b..6c764c00 100644
--- a/lib/pdm-client/src/lib.rs
+++ b/lib/pdm-client/src/lib.rs
@@ -1273,9 +1273,41 @@ impl<T: HttpApiClient> PdmClient<T> {
.data)
}
+ /// Adopt the live subscription on `remote`/`node` into the pool: imports the live key as a
+ /// new pool entry bound to (remote, node) without touching the remote. Refuses if (remote,
+ /// node) already has a pool entry bound to it. See the server endpoint docs for the full
+ /// per-sub-case semantics (existing-unbound, existing-bound-elsewhere, not-in-pool).
+ pub async fn subscription_adopt_key(
+ &self,
+ remote: &str,
+ node: &str,
+ digest: Option<ConfigDigest>,
+ ) -> Result<(), Error> {
+ #[derive(Serialize)]
+ struct AdoptArgs<'a> {
+ remote: &'a str,
+ node: &'a str,
+ #[serde(skip_serializing_if = "Option::is_none")]
+ digest: Option<ConfigDigest>,
+ }
+ self.0
+ .post(
+ "/api2/extjs/subscriptions/adopt-key",
+ &AdoptArgs {
+ remote,
+ node,
+ digest,
+ },
+ )
+ .await?
+ .nodata()
+ }
+
/// Queue a clear for the subscription on `remote`/`node`. Apply Pending later removes the
- /// subscription from the node so the key can be reassigned elsewhere; Discard Pending undoes
- /// the queueing without touching the remote.
+ /// subscription from the node so the key can be reassigned elsewhere; Discard Pending
+ /// undoes the queueing without touching the remote. Returns `BAD_REQUEST` if no pool entry
+ /// is bound to (remote, node); callers must run Adopt Key first to import a foreign
+ /// subscription.
pub async fn subscription_queue_clear(
&self,
remote: &str,
diff --git a/server/src/api/subscriptions/mod.rs b/server/src/api/subscriptions/mod.rs
index 9c313e8c..cc46806c 100644
--- a/server/src/api/subscriptions/mod.rs
+++ b/server/src/api/subscriptions/mod.rs
@@ -39,6 +39,7 @@ pub const ROUTER: Router = Router::new()
#[sortable]
const SUBDIRS: SubdirMap = &sorted!([
+ ("adopt-key", &Router::new().post(&API_METHOD_ADOPT_KEY)),
(
"apply-pending",
&Router::new().post(&API_METHOD_APPLY_PENDING)
@@ -90,6 +91,11 @@ const PANEL_NODE_STATUS_MAX_AGE: u64 = 5 * 60;
/// Keeps the product prefix and the first/last hex characters of the secret so an operator can
/// still tell two keys apart in a tail of `journalctl`, but the full key never lands in a log
/// file readable by anyone other than the priv user.
+///
+/// Uses `chars()` rather than byte slicing so a hostile remote returning a non-ASCII subscription
+/// key cannot trigger a slice-on-non-char-boundary panic; schema-validated pool keys are pure
+/// ASCII per `PRODUCT_KEY_REGEX`, but `redact_key` is also reached by the adoption path on a
+/// live key the remote owned, which can be any string.
fn redact_key(key: &str) -> String {
let Some((prefix, secret)) = key.split_once('-') else {
return "<malformed-key>".to_string();
@@ -954,6 +960,158 @@ async fn revert_pending_clear(
Ok(())
}
+#[api(
+ input: {
+ properties: {
+ remote: { schema: REMOTE_ID_SCHEMA },
+ // NODE_SCHEMA rejects path-traversal input before it ends up interpolated into the
+ // remote URL `/api2/extjs/nodes/{node}/subscription`.
+ node: { schema: NODE_SCHEMA },
+ digest: {
+ type: ConfigDigest,
+ optional: true,
+ },
+ },
+ },
+ access: {
+ permission: &Permission::Privilege(&["system"], PRIV_SYS_MODIFY, false),
+ },
+)]
+/// Adopt the live subscription on a remote node into the pool.
+///
+/// Reads the live current key from `remote`/`node` and brings it under pool management
+/// without touching the remote (no DELETE / push). Three sub-cases for the live key:
+///
+/// - Not in the pool: a fresh `Adopted` entry is inserted, bound to (remote, node).
+/// - In the pool, unbound: rebound to (remote, node); the source is left untouched so a key
+/// that was originally added manually keeps its `Manual` label even after a remote re-import.
+/// - In the pool, bound elsewhere: refused; the operator has to reconcile the binding first.
+///
+/// Refuses if a pool entry is already bound to (remote, node): adopting a node that is already
+/// pool-managed would either be a no-op or a footgun (rebinding the same node to a different
+/// key in the pool), so the caller has to pick the right Assign/Clear path explicitly.
+///
+/// Per-remote `PRIV_RESOURCE_MODIFY` is enforced inside the handler so an operator with global
+/// system access alone cannot pull subscriptions off remotes they have no other authority on
+/// (an adopted key bound to (remote, node) is itself an audit-side surface against that node).
+async fn adopt_key(
+ remote: String,
+ node: String,
+ digest: Option<ConfigDigest>,
+ rpcenv: &mut dyn RpcEnvironment,
+) -> Result<(), Error> {
+ let auth_id: Authid = rpcenv
+ .get_auth_id()
+ .context("no authid available")?
+ .parse()?;
+ let user_info = CachedUserInfo::new()?;
+ user_info.check_privs(
+ &auth_id,
+ &["resource", &remote],
+ PRIV_RESOURCE_MODIFY,
+ false,
+ )?;
+
+ // Pre-fetch digest to catch a parallel set_assignment during the live read below.
+ let (_pre_config, pre_digest) = pdm_config::subscriptions::config()?;
+
+ // Fetch live state before grabbing the config lock so the network call does not pin the
+ // lock for the duration of a remote query.
+ let (remotes_config, _) = pdm_config::remotes::config()?;
+ let remote_entry = remotes_config
+ .get(&remote)
+ .ok_or_else(|| http_err!(NOT_FOUND, "remote '{remote}' not found"))?;
+ let live = get_subscription_info_for_remote(remote_entry, FRESH_NODE_STATUS_MAX_AGE)
+ .await
+ .map_err(|err| {
+ http_err!(
+ BAD_REQUEST,
+ "could not read subscription on {remote}/{node}: {err}"
+ )
+ })?;
+ let live_current_key: String = live
+ .get(&node)
+ .and_then(|info| info.as_ref())
+ .and_then(|info| info.key.clone())
+ .ok_or_else(|| http_err!(NOT_FOUND, "no live subscription on {remote}/{node} to adopt"))?;
+
+ // The lock + sync IO runs on a blocking thread so the async runtime stays free for other
+ // work even when /etc/proxmox-datacenter-manager/subscriptions is on slow storage.
+ let new_digest = tokio::task::spawn_blocking(move || -> Result<ConfigDigest, Error> {
+ let _lock = pdm_config::subscriptions::lock_config()?;
+ let (mut config, config_digest) = pdm_config::subscriptions::config()?;
+ config_digest.detect_modification(digest.as_ref())?;
+ if config_digest != pre_digest {
+ http_bail!(
+ CONFLICT,
+ "pool config changed during live fetch; refresh and retry adopt of \
+ {remote}/{node}"
+ );
+ }
+
+ let target_bound = config.iter().any(|(_, e)| {
+ e.remote.as_deref() == Some(remote.as_str())
+ && e.node.as_deref() == Some(node.as_str())
+ });
+ if target_bound {
+ http_bail!(
+ BAD_REQUEST,
+ "{remote}/{node} is already pool-managed; adopt only applies to foreign \
+ subscriptions"
+ );
+ }
+
+ if let Some(existing) = config.get_mut(&live_current_key) {
+ if existing.remote.is_some() || existing.node.is_some() {
+ http_bail!(
+ CONFLICT,
+ "key '{}' is in the pool but bound elsewhere; resolve manually first",
+ redact_key(&live_current_key),
+ );
+ }
+ existing.remote = Some(remote.clone());
+ existing.node = Some(node.clone());
+ } else {
+ // Schema-validate the live key before letting it touch the on-disk pool. The
+ // remote claimed it via /nodes/{node}/subscription, but that surface is not a
+ // strict-schema gate (older PVE versions accept whatever the operator typed at
+ // setup time), so re-validate here against the same schema that manual entry
+ // uses.
+ SUBSCRIPTION_KEY_SCHEMA
+ .parse_simple_value(&live_current_key)
+ .map_err(|err| {
+ http_err!(
+ BAD_REQUEST,
+ "key '{}' rejected: {err}",
+ redact_key(&live_current_key),
+ )
+ })?;
+ let product_type = ProductType::from_key(&live_current_key).ok_or_else(|| {
+ http_err!(
+ BAD_REQUEST,
+ "unrecognised key prefix: {}",
+ redact_key(&live_current_key),
+ )
+ })?;
+ let entry = SubscriptionKeyEntry {
+ key: live_current_key.clone(),
+ product_type,
+ level: SubscriptionLevel::from_key(Some(&live_current_key)),
+ source: SubscriptionKeySource::Adopted,
+ remote: Some(remote.clone()),
+ node: Some(node.clone()),
+ ..Default::default()
+ };
+ config.insert(live_current_key, entry);
+ }
+
+ pdm_config::subscriptions::save_config(&config)
+ })
+ .await??;
+ rpcenv["digest"] = new_digest.to_hex().into();
+ Ok(())
+}
+
#[api(
input: {
properties: {
@@ -1723,6 +1881,15 @@ mod tests {
assert_eq!(redact_key("pbsc-abcdef0123"), "pbsc-a...3");
}
+ #[test]
+ fn redact_key_safe_on_non_ascii_secret() {
+ // Slicing by byte index on a UTF-8 boundary would panic; chars()-based redaction must
+ // tolerate hostile / buggy remote inputs in the foreign-key adoption path.
+ let key = "pve4b-1\u{1F600}";
+ let redacted = redact_key(key);
+ assert!(redacted.starts_with("pve4b-1..."));
+ }
+
#[test]
fn redact_key_safe_on_single_char_secret() {
assert_eq!(redact_key("pve4b-x"), "pve4b-x...");
diff --git a/ui/src/configuration/subscription_keys.rs b/ui/src/configuration/subscription_keys.rs
index 5807504d..cff13563 100644
--- a/ui/src/configuration/subscription_keys.rs
+++ b/ui/src/configuration/subscription_keys.rs
@@ -6,7 +6,7 @@ use anyhow::Error;
use pdm_api_types::remotes::RemoteType;
use pdm_api_types::subscription::{
- ProductType, RemoteNodeStatus, SubscriptionKeyEntry,
+ ProductType, RemoteNodeStatus, SubscriptionKeyEntry, SubscriptionKeySource,
};
use yew::virtual_dom::{Key, VComp, VNode};
@@ -123,8 +123,9 @@ impl SubscriptionKeyGridComp {
Rc::new(vec![
DataTableColumn::new(tr!("Key"))
.flex(2)
- .get_property(|entry: &SubscriptionKeyEntry| entry.key.as_str())
+ .sorter(|a: &SubscriptionKeyEntry, b: &SubscriptionKeyEntry| a.key.cmp(&b.key))
.sort_order(true)
+ .render(|entry: &SubscriptionKeyEntry| entry.key.as_str().into())
.into(),
DataTableColumn::new(tr!("Product"))
.width("80px")
@@ -140,6 +141,17 @@ impl SubscriptionKeyGridComp {
.sorter(|a: &SubscriptionKeyEntry, b: &SubscriptionKeyEntry| a.level.cmp(&b.level))
.render(|entry: &SubscriptionKeyEntry| entry.level.to_string().into())
.into(),
+ DataTableColumn::new(tr!("Source"))
+ .width("90px")
+ .hidden(true)
+ .sorter(|a: &SubscriptionKeyEntry, b: &SubscriptionKeyEntry| {
+ (a.source as u8).cmp(&(b.source as u8))
+ })
+ .render(|entry: &SubscriptionKeyEntry| match entry.source {
+ SubscriptionKeySource::Manual => tr!("Manual").into(),
+ SubscriptionKeySource::Adopted => tr!("Adopted").into(),
+ })
+ .into(),
DataTableColumn::new(tr!("Assignment"))
.flex(2)
.sorter(|a: &SubscriptionKeyEntry, b: &SubscriptionKeyEntry| {
diff --git a/ui/src/configuration/subscription_registry.rs b/ui/src/configuration/subscription_registry.rs
index 7471fae4..7d79370b 100644
--- a/ui/src/configuration/subscription_registry.rs
+++ b/ui/src/configuration/subscription_registry.rs
@@ -235,6 +235,9 @@ pub enum Msg {
QueueClearForSelectedNode,
/// Open the Assign Key dialog for the currently-selected node.
AssignKeyToSelectedNode,
+ /// Open the confirmation dialog for adopting the live subscription on the selected node
+ /// into the pool.
+ AdoptKeyForSelectedNode,
}
#[derive(PartialEq)]
@@ -249,6 +252,13 @@ pub enum ViewState {
node: String,
current_key: Option<String>,
},
+ /// Pending confirmation to adopt the live subscription on `(remote, node)` into the pool.
+ /// The live key is captured here so the dialog body can show what will be imported.
+ ConfirmAdoptKey {
+ remote: String,
+ node: String,
+ current_key: String,
+ },
/// Assign a pool key to the given node. Opens from the right panel's Assign Key button.
AssignKeyToNode {
remote: String,
@@ -494,6 +504,18 @@ fn key_cell(n: &RemoteNodeStatus) -> Html {
.with_child(Fa::new("clock-o").class(FontColor::Warning))
.with_child(text)
.into()
+ } else if assigned.is_none() && current.is_some() {
+ Tooltip::new(
+ Row::new()
+ .class(AlignItems::Baseline)
+ .gap(2)
+ .with_child(Fa::new("download").class(FontColor::Primary))
+ .with_child(text),
+ )
+ .tip(tr!(
+ "Not in pool - Adopt Key imports this live subscription."
+ ))
+ .into()
} else {
text.into()
}
@@ -660,6 +682,16 @@ impl LoadableComponent for SubscriptionRegistryComp {
node_sockets,
}));
}
+ Msg::AdoptKeyForSelectedNode => {
+ let Some((remote, node, current_key)) = self.selected_node_for_adopt() else {
+ return false;
+ };
+ ctx.link().change_view(Some(ViewState::ConfirmAdoptKey {
+ remote,
+ node,
+ current_key,
+ }));
+ }
}
true
}
@@ -828,6 +860,55 @@ impl LoadableComponent for SubscriptionRegistryComp {
ViewState::ConfirmAutoAssign(proposal) => {
Some(self.render_auto_assign_dialog(ctx, proposal))
}
+ ViewState::ConfirmAdoptKey {
+ remote,
+ node,
+ current_key,
+ } => {
+ use pwt::widget::ConfirmDialog;
+ let question = tr!(
+ "Adopt {key} from {remote}/{node} into the pool?",
+ key = current_key.clone(),
+ remote = remote.clone(),
+ node = node.clone(),
+ );
+ let body = Column::new()
+ .gap(2)
+ .with_child(Container::from_tag("p").with_child(question))
+ .with_child(Container::from_tag("p").with_child(tr!(
+ "The live subscription is imported as a pool entry bound to this node; the remote is not contacted. After adoption the key participates in pool operations such as Clear Key and Auto-Assign."
+ )));
+ let remote_for_cb = remote.clone();
+ let node_for_cb = node.clone();
+ let link = ctx.link().clone();
+ let close_link = ctx.link().clone();
+ let digest_for_cb = self.pool_digest.clone();
+ Some(
+ ConfirmDialog::default()
+ .title(tr!("Adopt Key"))
+ .icon_class("fa fa-question-circle")
+ .confirm_message(body)
+ .on_confirm(move |_| {
+ let link = link.clone();
+ let remote = remote_for_cb.clone();
+ let node = node_for_cb.clone();
+ let digest = digest_for_cb.clone();
+ link.clone().spawn(async move {
+ let digest = digest.map(pdm_client::ConfigDigest::from);
+ if let Err(err) = crate::pdm_client()
+ .subscription_adopt_key(&remote, &node, digest)
+ .await
+ {
+ link.show_error(tr!("Adopt Key"), err.to_string(), true);
+ }
+ link.change_view(None);
+ link.send_reload();
+ });
+ })
+ .on_close(move |_| close_link.change_view(None))
+ .into(),
+ )
+ }
ViewState::ConfirmQueueClear {
remote,
node,
@@ -945,6 +1026,7 @@ impl SubscriptionRegistryComp {
let can_assign_key = self.assign_target_for_selected_node().is_some();
let can_revert = self.revert_target().is_some();
let can_clear_key = self.selected_node_for_clear().is_some();
+ let can_adopt_key = self.selected_node_for_adopt().is_some();
let assign_button = Tooltip::new(
Button::new(tr!("Assign Key"))
.icon_class("fa fa-link")
@@ -973,7 +1055,16 @@ impl SubscriptionRegistryComp {
.tip(tr!(
"Queue the live subscription on the selected node for removal at next Apply \
Pending, freeing the key for reassignment. Requires the node to be \
- pool-managed."
+ pool-managed; for foreign subscriptions, run Adopt Key first."
+ ));
+ let adopt_key_button = Tooltip::new(
+ Button::new(tr!("Adopt Key"))
+ .icon_class("fa fa-download")
+ .disabled(!can_adopt_key)
+ .on_activate(ctx.link().callback(|_| Msg::AdoptKeyForSelectedNode)),
+ )
+ .tip(tr!(
+ "Import the live subscription on the selected node into the pool."
));
Panel::new()
@@ -983,6 +1074,7 @@ impl SubscriptionRegistryComp {
.min_width(400)
.title(tr!("Node Subscription Status"))
.with_tool(assign_button)
+ .with_tool(adopt_key_button)
.with_tool(revert_button)
.with_tool(clear_key_button)
.with_child(table)
@@ -1070,6 +1162,19 @@ impl SubscriptionRegistryComp {
Some((n.remote.clone(), n.node.clone(), n.current_key.clone()))
}
+ /// Returns `(remote, node, current_key)` when the selected node has a foreign live
+ /// subscription eligible for Adopt Key: a current key is set on the node and no pool entry
+ /// is bound to (remote, node) yet. Mutually exclusive with `selected_node_for_clear` so the
+ /// toolbar offers at most one of Clear Key / Adopt Key for any given selection.
+ fn selected_node_for_adopt(&self) -> Option<(String, String, String)> {
+ let n = self.selected_node_status()?;
+ if n.assigned_key.is_some() {
+ return None;
+ }
+ let current_key = n.current_key.clone()?;
+ Some((n.remote.clone(), n.node.clone(), current_key))
+ }
+
/// Returns `(remote, node, type, node_sockets)` for the right-panel Assign button:
/// selected row is a node, no assigned key in the pool yet, and no live active subscription.
/// Refusing earlier than the server keeps the button-disable affordance honest.
--
2.47.3
* [PATCH datacenter-manager v3 10/12] subscription: add Adopt All bulk action
2026-05-15 7:43 [PATCH datacenter-manager v3 00/12] subscription key pool registry Thomas Lamprecht
` (8 preceding siblings ...)
2026-05-15 7:43 ` [PATCH datacenter-manager v3 09/12] subscription: add Adopt Key action for foreign live subscriptions Thomas Lamprecht
@ 2026-05-15 7:43 ` Thomas Lamprecht
2026-05-15 7:43 ` [PATCH datacenter-manager v3 11/12] subscription: add Check Subscription action Thomas Lamprecht
2026-05-15 7:43 ` [RFC PATCH datacenter-manager v3 12/12] ui: registry: add Add-and-Assign wizard from Assign Key dialog Thomas Lamprecht
11 siblings, 0 replies; 16+ messages in thread
From: Thomas Lamprecht @ 2026-05-15 7:43 UTC (permalink / raw)
To: pdm-devel
Add a server endpoint plus CLI / UI wiring for importing every
foreign live subscription in one transaction. The typical use case
is connecting an existing fleet of PVE/PBS nodes to PDM for the
first time: rather than clicking Adopt Key per-node, the operator
runs Adopt All once and the pool catches up with the deployed
subscriptions in a single call.
The candidate set is recomputed under the config lock, so a
parallel Assign / Adopt landing between the network read and the
lock cannot race-import a key that has just been bound. Candidates
are silently skipped on missing per-remote PRIV_RESOURCE_MODIFY,
on a conflicting pool state ((remote, node) target already bound,
or the live key already bound elsewhere), or on a remote-supplied
key or node name failing schema validation; the UI preview dialog
lists the same candidate set before committing.
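Condensed, the per-candidate gate is no more than the following; the
field names are illustrative, the real check runs against the locked
section config and CachedUserInfo:

    struct Candidate {
        has_modify_priv: bool,      // PRIV_RESOURCE_MODIFY on the candidate's remote
        target_already_bound: bool, // a pool entry is already bound to (remote, node)
        key_bound_elsewhere: bool,  // live key is in the pool, bound to another node
        schema_ok: bool,            // remote-supplied key and node name pass validation
    }

    // adopted only when no skip rule fires; skipped candidates are
    // logged with a warning, they never fail the bulk call
    fn should_adopt(c: &Candidate) -> bool {
        c.has_modify_priv
            && !c.target_already_bound
            && !c.key_bound_elsewhere
            && c.schema_ok
    }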
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
---
New in v3.
cli/client/src/subscriptions.rs | 29 ++++
docs/subscription-registry.rst | 7 +
lib/pdm-api-types/src/subscription.rs | 17 ++
lib/pdm-client/src/lib.rs | 22 +++
server/src/api/subscriptions/mod.rs | 161 +++++++++++++++++-
ui/src/configuration/subscription_registry.rs | 160 +++++++++++++++++
6 files changed, 394 insertions(+), 2 deletions(-)
diff --git a/cli/client/src/subscriptions.rs b/cli/client/src/subscriptions.rs
index c9ba5e4c..469f0841 100644
--- a/cli/client/src/subscriptions.rs
+++ b/cli/client/src/subscriptions.rs
@@ -52,6 +52,7 @@ pub fn cli() -> CommandLineInterface {
"adopt-key",
CliCommand::new(&API_METHOD_ADOPT_KEY).arg_param(&["remote", "node"]),
)
+ .insert("adopt-all", CliCommand::new(&API_METHOD_ADOPT_ALL))
.into()
}
@@ -342,6 +343,34 @@ async fn adopt_key(
Ok(())
}
+#[api(
+ input: {
+ properties: {
+ digest: {
+ schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
+ optional: true,
+ },
+ },
+ },
+)]
+/// Adopt every foreign live subscription into the pool in one transaction.
+///
+/// Walks all remotes the caller can audit, imports any (remote, node) with a live current key
+/// and no pool binding. Candidates the caller has no modify privilege on, or whose key is
+/// already bound elsewhere in the pool, are silently skipped.
+async fn adopt_all(digest: Option<String>) -> Result<(), Error> {
+ let digest = digest.map(ConfigDigest::from);
+ let adopted = client()?.subscription_adopt_all(digest).await?;
+ if adopted.is_empty() {
+ println!("No foreign live subscriptions to adopt.");
+ return Ok(());
+ }
+ println!("Adopted {} live subscription(s):", adopted.len());
+ for e in &adopted {
+ println!(" {}/{} -> {}", e.remote, e.node, e.key);
+ }
+ Ok(())
+}
#[api(
input: {
diff --git a/docs/subscription-registry.rst b/docs/subscription-registry.rst
index 4c31c9a6..6d599fe2 100644
--- a/docs/subscription-registry.rst
+++ b/docs/subscription-registry.rst
@@ -52,6 +52,13 @@ adoption are highlighted with a download hint icon in the Node Subscription Stat
the pool grid carries a hidden-by-default Source column distinguishing manually-added from
adopted entries, which can be enabled via the column picker if the distinction matters.
+The Adopt All action runs the same import across every remote the operator can audit in one
+transaction. Use it after first connecting an existing fleet of nodes to PDM so the pool
+catches up with the live subscriptions already deployed, without having to click through
+Adopt Key for each node. Candidates the operator has no modify privilege on, whose key is
+already bound elsewhere in the pool, whose (remote, node) target is already bound by another
+pool entry, or whose key or node name fails schema validation are skipped silently.
+
The proposed plan can be inspected before it is applied. Apply Pending walks the queue in
order; if any push or clear fails the remaining queue is kept intact for retry. Discard Pending
drops the plan without touching any remote.
diff --git a/lib/pdm-api-types/src/subscription.rs b/lib/pdm-api-types/src/subscription.rs
index 8a0a7977..df1fec1c 100644
--- a/lib/pdm-api-types/src/subscription.rs
+++ b/lib/pdm-api-types/src/subscription.rs
@@ -569,6 +569,23 @@ pub struct ClearPendingResult {
pub cleared: u32,
}
+#[api(
+ properties: {
+ "key": { schema: SUBSCRIPTION_KEY_SCHEMA },
+ },
+)]
+#[derive(Clone, Debug, Serialize, Deserialize, PartialEq, Eq)]
+#[serde(rename_all = "kebab-case")]
+/// One entry imported by the bulk Adopt-All endpoint.
+pub struct AdoptedEntry {
+ /// Remote the live subscription was running on.
+ pub remote: String,
+ /// Node within the remote.
+ pub node: String,
+ /// The adopted subscription key.
+ pub key: String,
+}
+
#[api]
#[derive(Clone, Debug, Default, Serialize, Deserialize, PartialEq, Eq)]
#[serde(rename_all = "kebab-case")]
diff --git a/lib/pdm-client/src/lib.rs b/lib/pdm-client/src/lib.rs
index 6c764c00..f03f6c40 100644
--- a/lib/pdm-client/src/lib.rs
+++ b/lib/pdm-client/src/lib.rs
@@ -1303,6 +1303,28 @@ impl<T: HttpApiClient> PdmClient<T> {
.nodata()
}
+ /// Adopt every foreign live subscription that the caller can modify, in one transaction.
+ /// Returns the list of `(remote, node, key)` tuples that were imported into the pool;
+ /// candidates the caller has no `PRIV_RESOURCE_MODIFY` on (or that fail validation, or that
+ /// are already bound elsewhere in the pool) are silently skipped. See the server endpoint
+ /// docs for the full skip rules.
+ pub async fn subscription_adopt_all(
+ &self,
+ digest: Option<ConfigDigest>,
+ ) -> Result<Vec<pdm_api_types::subscription::AdoptedEntry>, Error> {
+ #[derive(Serialize)]
+ struct AdoptAllArgs {
+ #[serde(skip_serializing_if = "Option::is_none")]
+ digest: Option<ConfigDigest>,
+ }
+ Ok(self
+ .0
+ .post("/api2/extjs/subscriptions/adopt-all", &AdoptAllArgs { digest })
+ .await?
+ .expect_json()?
+ .data)
+ }
+
/// Queue a clear for the subscription on `remote`/`node`. Apply Pending later removes the
/// subscription from the node so the key can be reassigned elsewhere; Discard Pending
/// undoes the queueing without touching the remote. Returns `BAD_REQUEST` if no pool entry
diff --git a/server/src/api/subscriptions/mod.rs b/server/src/api/subscriptions/mod.rs
index cc46806c..a8f5cfc5 100644
--- a/server/src/api/subscriptions/mod.rs
+++ b/server/src/api/subscriptions/mod.rs
@@ -21,8 +21,8 @@ use proxmox_sortable_macro::sortable;
use pdm_api_types::remotes::{Remote, REMOTE_ID_SCHEMA};
use pdm_api_types::subscription::{
- pick_best_pve_socket_key, socket_count_from_key, AutoAssignProposal, ClearPendingResult,
- ProductType, ProposedAssignment, RemoteNodeStatus, SubscriptionKeyEntry,
+ pick_best_pve_socket_key, socket_count_from_key, AdoptedEntry, AutoAssignProposal,
+ ClearPendingResult, ProductType, ProposedAssignment, RemoteNodeStatus, SubscriptionKeyEntry,
SubscriptionKeySource, SubscriptionLevel, SUBSCRIPTION_KEY_SCHEMA,
};
use pdm_api_types::{
@@ -39,6 +39,7 @@ pub const ROUTER: Router = Router::new()
#[sortable]
const SUBDIRS: SubdirMap = &sorted!([
+ ("adopt-all", &Router::new().post(&API_METHOD_ADOPT_ALL)),
("adopt-key", &Router::new().post(&API_METHOD_ADOPT_KEY)),
(
"apply-pending",
@@ -1112,6 +1113,162 @@ async fn adopt_key(
Ok(())
}
+#[api(
+ input: {
+ properties: {
+ digest: {
+ type: ConfigDigest,
+ optional: true,
+ },
+ },
+ },
+ returns: {
+ type: Array,
+ description: "List of (remote, node, key) tuples that were adopted into the pool.",
+ items: { type: AdoptedEntry },
+ },
+ access: {
+ permission: &Permission::Privilege(&["system"], PRIV_SYS_MODIFY, false),
+ },
+)]
+/// Adopt every foreign live subscription in one transaction.
+///
+/// Walks the node-status view (so only remotes the caller can audit are considered), collects
+/// every (remote, node) that has a live current key but no pool entry bound to it, and imports
+/// each one into the pool with source = `Adopted`. Candidates are skipped (not adopted, not an
+/// error) when:
+///
+/// - The caller has no `PRIV_RESOURCE_MODIFY` on the candidate's remote: an audit-only operator
+/// should not be able to materialise pool state for a remote they cannot manage.
+/// - The live key is already in the pool but bound elsewhere: leaving the rebind as a manual
+/// step keeps the bulk action from silently competing with a deliberate prior assignment.
+/// - The live key fails schema validation or its prefix is unknown: a buggy or malicious
+/// remote should not be able to inject garbage into the pool through a bulk shortcut.
+///
+/// Successfully-adopted entries are returned so the caller (CLI / UI) can summarise the outcome
+/// without needing a separate refresh round-trip.
+async fn adopt_all(
+ digest: Option<ConfigDigest>,
+ rpcenv: &mut dyn RpcEnvironment,
+) -> Result<Vec<AdoptedEntry>, Error> {
+ let auth_id: Authid = rpcenv
+ .get_auth_id()
+ .context("no authid available")?
+ .parse()?;
+
+ // Use a fresh node-status snapshot: a cached entry from minutes ago could miss a live
+ // subscription that was just installed on a remote, or vice-versa, claim a subscription
+ // that has since been removed. Adopting bogus or already-cleared keys would be a footgun.
+ let node_statuses = collect_node_status(FRESH_NODE_STATUS_MAX_AGE, rpcenv).await?;
+
+ // Lock + sync IO under spawn_blocking. The closure re-resolves the candidate set under the
+ // lock: a parallel admin's Assign / Adopt between the network read above and the lock
+ // acquisition here would otherwise let us race-import a key that has just been bound by
+ // them.
+ let (adopted, new_digest_opt) = tokio::task::spawn_blocking(
+ move || -> Result<(Vec<AdoptedEntry>, Option<ConfigDigest>), Error> {
+ let user_info = CachedUserInfo::new()?;
+ let _lock = pdm_config::subscriptions::lock_config()?;
+ let (mut config, config_digest) = pdm_config::subscriptions::config()?;
+ config_digest.detect_modification(digest.as_ref())?;
+
+ let mut adopted: Vec<AdoptedEntry> = Vec::new();
+ for n in &node_statuses {
+ let Some(current_key) = n.current_key.as_deref() else {
+ continue;
+ };
+ if n.assigned_key.is_some() {
+ continue;
+ }
+ if user_info.lookup_privs(&auth_id, &["resource", &n.remote])
+ & PRIV_RESOURCE_MODIFY
+ == 0
+ {
+ continue;
+ }
+ // Re-validate foreign node name: later interpolated into remote URL.
+ if NODE_SCHEMA.parse_simple_value(&n.node).is_err() {
+ warn!(
+ "skipping adopt-all candidate on {}/{}: node name fails schema",
+ n.remote, n.node,
+ );
+ continue;
+ }
+ // Re-check binding state under the lock - between the network read and here a
+ // parallel Adopt / Assign on the same target could have created a pool entry
+ // bound to (remote, node) that the cached node-status snapshot did not see.
+ let target_bound = config.iter().any(|(_, e)| {
+ e.remote.as_deref() == Some(n.remote.as_str())
+ && e.node.as_deref() == Some(n.node.as_str())
+ });
+ if target_bound {
+ continue;
+ }
+
+ if let Some(existing) = config.get_mut(current_key) {
+ if existing.remote.is_some() || existing.node.is_some() {
+ // Bound elsewhere: leave the rebind as an explicit operator decision.
+ continue;
+ }
+ existing.remote = Some(n.remote.clone());
+ existing.node = Some(n.node.clone());
+ } else {
+ if SUBSCRIPTION_KEY_SCHEMA
+ .parse_simple_value(current_key)
+ .is_err()
+ {
+ warn!(
+ "skipping adopt-all candidate on {}/{}: key '{}' fails schema",
+ n.remote,
+ n.node,
+ redact_key(current_key),
+ );
+ continue;
+ }
+ let Some(product_type) = ProductType::from_key(current_key) else {
+ warn!(
+ "skipping adopt-all candidate on {}/{}: unrecognised key prefix \
+ '{}'",
+ n.remote,
+ n.node,
+ redact_key(current_key),
+ );
+ continue;
+ };
+ let entry = SubscriptionKeyEntry {
+ key: current_key.to_string(),
+ product_type,
+ level: SubscriptionLevel::from_key(Some(current_key)),
+ source: SubscriptionKeySource::Adopted,
+ remote: Some(n.remote.clone()),
+ node: Some(n.node.clone()),
+ ..Default::default()
+ };
+ config.insert(current_key.to_string(), entry);
+ }
+ adopted.push(AdoptedEntry {
+ remote: n.remote.clone(),
+ node: n.node.clone(),
+ key: current_key.to_string(),
+ });
+ }
+
+ let new_digest = if adopted.is_empty() {
+ None
+ } else {
+ Some(pdm_config::subscriptions::save_config(&config)?)
+ };
+ Ok((adopted, new_digest))
+ },
+ )
+ .await??;
+
+ if let Some(new_digest) = new_digest_opt {
+ rpcenv["digest"] = new_digest.to_hex().into();
+ }
+ Ok(adopted)
+}
+
#[api(
input: {
properties: {
diff --git a/ui/src/configuration/subscription_registry.rs b/ui/src/configuration/subscription_registry.rs
index 7d79370b..b84ddb36 100644
--- a/ui/src/configuration/subscription_registry.rs
+++ b/ui/src/configuration/subscription_registry.rs
@@ -99,6 +99,14 @@ fn pending_badge(push_count: u32, clear_count: u32) -> Row {
row
}
+/// Row shape for the Adopt All preview table.
+#[derive(Clone, PartialEq)]
+struct AdoptCandidate {
+ remote: String,
+ node: String,
+ key: String,
+}
+
#[derive(Clone, Debug, PartialEq)]
enum NodeTreeEntry {
Root,
@@ -238,6 +246,8 @@ pub enum Msg {
/// Open the confirmation dialog for adopting the live subscription on the selected node
/// into the pool.
AdoptKeyForSelectedNode,
+ /// Open the confirmation dialog for adopting every foreign live subscription into the pool.
+ AdoptAllPreview,
}
#[derive(PartialEq)]
@@ -259,6 +269,12 @@ pub enum ViewState {
node: String,
current_key: String,
},
+ /// Pending confirmation to bulk-adopt every foreign live subscription. The candidate list
+ /// is captured at view-open time so the dialog body can show the operator exactly what
+ /// will be imported; the server re-computes the set under the lock at commit time.
+ ConfirmAdoptAll {
+ candidates: Vec<(String, String, String)>,
+ },
/// Assign a pool key to the given node. Opens from the right panel's Assign Key button.
AssignKeyToNode {
remote: String,
@@ -274,6 +290,7 @@ pub struct SubscriptionRegistryComp {
tree_store: TreeStore<NodeTreeEntry>,
tree_columns: Rc<Vec<DataTableHeader<NodeTreeEntry>>>,
proposal_columns: Rc<Vec<DataTableHeader<ProposedAssignment>>>,
+ adopt_columns: Rc<Vec<DataTableHeader<AdoptCandidate>>>,
node_selection: Selection,
last_node_data: Vec<RemoteNodeStatus>,
/// Canonical pool snapshot. Passed down to the key grid (display) and shared with the
@@ -458,6 +475,24 @@ impl SubscriptionRegistryComp {
.into(),
])
}
+
+ fn adopt_columns() -> Rc<Vec<DataTableHeader<AdoptCandidate>>> {
+ Rc::new(vec![
+ DataTableColumn::new(tr!("Remote / Node"))
+ .flex(2)
+ .render(|c: &AdoptCandidate| format!("{} / {}", c.remote, c.node).into())
+ .into(),
+ DataTableColumn::new(tr!("Key"))
+ .flex(2)
+ .render(|c: &AdoptCandidate| {
+ Container::from_tag("span")
+ .class(pwt::css::FontStyle::LabelMedium)
+ .with_child(c.key.clone())
+ .into()
+ })
+ .into(),
+ ])
+ }
}
fn key_cell(n: &RemoteNodeStatus) -> Html {
@@ -542,6 +577,7 @@ impl LoadableComponent for SubscriptionRegistryComp {
tree_store: store.clone(),
tree_columns: Self::tree_columns(store),
proposal_columns: Self::proposal_columns(),
+ adopt_columns: Self::adopt_columns(),
node_selection,
last_node_data: Vec::new(),
pool_keys: Rc::new(Vec::new()),
@@ -692,6 +728,14 @@ impl LoadableComponent for SubscriptionRegistryComp {
current_key,
}));
}
+ Msg::AdoptAllPreview => {
+ let candidates = self.adopt_all_candidates();
+ if candidates.is_empty() {
+ return false;
+ }
+ ctx.link()
+ .change_view(Some(ViewState::ConfirmAdoptAll { candidates }));
+ }
}
true
}
@@ -699,6 +743,7 @@ impl LoadableComponent for SubscriptionRegistryComp {
fn toolbar(&self, ctx: &LoadableComponentContext<Self>) -> Option<Html> {
let link = ctx.link();
let (push_count, clear_count) = self.pending_counts();
+ let adopt_all_count = self.adopt_all_candidates().len();
let mut toolbar = Toolbar::new()
.border_bottom(true)
.with_child(
@@ -712,6 +757,18 @@ impl LoadableComponent for SubscriptionRegistryComp {
subscription, then queue it pending Apply."
)),
)
+ .with_child(
+ Tooltip::new(
+ Button::new(tr!("Adopt All"))
+ .icon_class("fa fa-download")
+ .disabled(adopt_all_count == 0)
+ .on_activate(link.callback(|_| Msg::AdoptAllPreview)),
+ )
+ .tip(tr!(
+ "Import every foreign live subscription that is not yet tracked by the \
+ pool. The remotes are not contacted; only the pool config is updated."
+ )),
+ )
.with_spacer()
.with_child(
Tooltip::new(
@@ -860,6 +917,9 @@ impl LoadableComponent for SubscriptionRegistryComp {
ViewState::ConfirmAutoAssign(proposal) => {
Some(self.render_auto_assign_dialog(ctx, proposal))
}
+ ViewState::ConfirmAdoptAll { candidates } => {
+ Some(self.render_adopt_all_dialog(ctx, candidates))
+ }
ViewState::ConfirmAdoptKey {
remote,
node,
@@ -1175,6 +1235,25 @@ impl SubscriptionRegistryComp {
Some((n.remote.clone(), n.node.clone(), current_key))
}
+ /// Iterate the loaded node-status snapshot and return every `(remote, node, current_key)`
+ /// eligible for bulk Adopt-All (live key set, no pool binding). Used both for the toolbar
+ /// disabled gate and for the preview list in the confirm dialog; the authoritative set is
+ /// recomputed by the server under the lock at commit time, so this view is a hint, not a
+ /// contract.
+ fn adopt_all_candidates(&self) -> Vec<(String, String, String)> {
+ self.last_node_data
+ .iter()
+ .filter_map(|n| {
+ if n.assigned_key.is_some() {
+ return None;
+ }
+ n.current_key
+ .clone()
+ .map(|k| (n.remote.clone(), n.node.clone(), k))
+ })
+ .collect()
+ }
+
/// Returns `(remote, node, type, node_sockets)` for the right-panel Assign button:
/// selected row is a node, no assigned key in the pool yet, and no live active subscription.
/// Refusing earlier than the server keeps the button-disable affordance honest.
@@ -1250,4 +1329,85 @@ impl SubscriptionRegistryComp {
.with_child(body)
.into()
}
+
+ fn render_adopt_all_dialog(
+ &self,
+ ctx: &LoadableComponentContext<Self>,
+ candidates: &[(String, String, String)],
+ ) -> Html {
+ use pwt::widget::Dialog;
+
+ let rows: Vec<AdoptCandidate> = candidates
+ .iter()
+ .map(|(r, n, k)| AdoptCandidate {
+ remote: r.clone(),
+ node: n.clone(),
+ key: k.clone(),
+ })
+ .collect();
+ let n = rows.len();
+ let store: Store<AdoptCandidate> = Store::with_extract_key(|c: &AdoptCandidate| {
+ format!("{}/{}", c.remote, c.node).into()
+ });
+ store.set_data(rows);
+
+ let link_close = ctx.link().clone();
+ let link_apply = ctx.link().clone();
+ let digest = self.pool_digest.clone();
+ let body = Column::new()
+ .class(Flex::Fill)
+ .class(Overflow::Hidden)
+ .min_height(0)
+ .padding(2)
+ .gap(2)
+ .min_width(600)
+ .with_child(Container::from_tag("p").with_child(tr!(
+ "The following {n} live subscription(s) will be imported into the pool; \
+ the remote is not contacted.",
+ n = n,
+ )))
+ .with_child(
+ DataTable::new(self.adopt_columns.clone(), store)
+ .striped(true)
+ .class(FlexFit)
+ .min_height(140),
+ )
+ .with_child(
+ Row::new()
+ .class(JustifyContent::FlexEnd)
+ .gap(2)
+ .padding_top(2)
+ .with_child(
+ Button::new(tr!("Cancel"))
+ .on_activate(move |_| link_close.change_view(None)),
+ )
+ .with_child(Button::new(tr!("Adopt")).on_activate(move |_| {
+ let link = link_apply.clone();
+ let digest = digest.clone();
+ link.clone().spawn(async move {
+ let digest = digest.map(pdm_client::ConfigDigest::from);
+ if let Err(err) =
+ crate::pdm_client().subscription_adopt_all(digest).await
+ {
+ link.show_error(tr!("Adopt All"), err.to_string(), true);
+ }
+ link.change_view(None);
+ link.send_reload();
+ });
+ })),
+ );
+
+ Dialog::new(tr!("Adopt All"))
+ .resizable(true)
+ .width(700)
+ .min_width(500)
+ .min_height(300)
+ .max_height("80vh")
+ .on_close({
+ let link = ctx.link().clone();
+ move |_| link.change_view(None)
+ })
+ .with_child(body)
+ .into()
+ }
}
--
2.47.3
* [PATCH datacenter-manager v3 11/12] subscription: add Check Subscription action
2026-05-15 7:43 [PATCH datacenter-manager v3 00/12] subscription key pool registry Thomas Lamprecht
` (9 preceding siblings ...)
2026-05-15 7:43 ` [PATCH datacenter-manager v3 10/12] subscription: add Adopt All bulk action Thomas Lamprecht
@ 2026-05-15 7:43 ` Thomas Lamprecht
2026-05-15 7:43 ` [RFC PATCH datacenter-manager v3 12/12] ui: registry: add Add-and-Assign wizard from Assign Key dialog Thomas Lamprecht
11 siblings, 0 replies; 16+ messages in thread
From: Thomas Lamprecht @ 2026-05-15 7:43 UTC (permalink / raw)
To: pdm-devel
Wire a per-node Check Subscription action that drives the remote's
`update_subscription(force=true)` endpoint (POST on PVE / PBS) and
invalidates the PDM-side subscription cache so the next status read
reflects the fresh shop verdict instead of a 5-minute-stale snapshot.
Mirrors the per-product Check button on PVE and PBS, just driven
from the central registry view.
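The handler itself is a thin two-step wrapper, roughly as below
(helper names are illustrative only; the real code dispatches per
product type and maps the error to BAD_REQUEST):

    async fn force_remote_subscription_check(
        _remote: &str,
        _node: &str,
    ) -> Result<(), String> {
        Ok(()) // stand-in for update_subscription(force=true) on PVE/PBS
    }

    fn invalidate_cached_subscription_info(_remote: &str) {}

    async fn check(remote: &str, node: &str) -> Result<(), String> {
        // re-check first; the cached verdict is only dropped once the
        // remote accepted the call, so a failed check leaves the
        // previous cache entry in place
        force_remote_subscription_check(remote, node).await?;
        invalidate_cached_subscription_info(remote);
        Ok(())
    }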
Useful when a node's live status has drifted to Invalid / Expired
because of a shop-side change and the operator wants to promote the
live verdict back to Active without waiting for the periodic check.
PVE and PBS use the canonical `UpdateSubscription` typed binding
(PVE via pve-api-types, PBS via proxmox-subscription).
NodeSubscriptionInfo and RemoteNodeStatus grow optional check_time
and next_due_date fields populated from the live SubscriptionInfo;
the Status column tooltip surfaces both, where the remote reports
them, so the operator can tell at a glance how fresh the last
check is and when the subscription will next come due.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
---
New in v3.
cli/client/src/subscriptions.rs | 23 ++++
docs/subscription-registry.rst | 7 +
lib/pdm-api-types/src/subscription.rs | 14 ++
lib/pdm-client/src/lib.rs | 19 +++
server/src/api/resources.rs | 4 +
server/src/api/subscriptions/mod.rs | 124 +++++++++++++++++-
ui/src/configuration/subscription_registry.rs | 78 ++++++++++-
7 files changed, 259 insertions(+), 10 deletions(-)
diff --git a/cli/client/src/subscriptions.rs b/cli/client/src/subscriptions.rs
index 469f0841..e98e34fb 100644
--- a/cli/client/src/subscriptions.rs
+++ b/cli/client/src/subscriptions.rs
@@ -53,6 +53,10 @@ pub fn cli() -> CommandLineInterface {
CliCommand::new(&API_METHOD_ADOPT_KEY).arg_param(&["remote", "node"]),
)
.insert("adopt-all", CliCommand::new(&API_METHOD_ADOPT_ALL))
+ .insert(
+ "check",
+ CliCommand::new(&API_METHOD_CHECK_SUBSCRIPTION).arg_param(&["remote", "node"]),
+ )
.into()
}
@@ -372,6 +376,25 @@ async fn adopt_all(digest: Option<String>) -> Result<(), Error> {
Ok(())
}
+#[api(
+ input: {
+ properties: {
+ remote: { schema: REMOTE_ID_SCHEMA },
+ node: { schema: NODE_SCHEMA },
+ },
+ },
+)]
+/// Trigger a fresh shop-side subscription check on a remote node.
+///
+/// Equivalent to the per-product "Check" button: re-verifies the live subscription status
+/// against the shop. Useful for promoting a stale Invalid/Expired verdict to Active once the
+/// underlying issue is fixed at the shop, without waiting for the next periodic check.
+async fn check_subscription(remote: String, node: String) -> Result<(), Error> {
+ client()?.subscription_check(&remote, &node).await?;
+ println!("Re-checked subscription on {remote}/{node}.");
+ Ok(())
+}
+
#[api(
input: {
properties: {
diff --git a/docs/subscription-registry.rst b/docs/subscription-registry.rst
index 6d599fe2..3d64c0bc 100644
--- a/docs/subscription-registry.rst
+++ b/docs/subscription-registry.rst
@@ -63,6 +63,13 @@ The proposed plan can be inspected before it is applied. Apply Pending walks the
order; if any push or clear fails the remaining queue is kept intact for retry. Discard Pending
drops the plan without touching any remote.
+The Check Subscription action triggers a fresh shop-side verification of the live subscription
+on the selected node, equivalent to the per-product "Check" button on PVE / PBS. Useful for
+promoting a stale ``Invalid`` or ``Expired`` verdict to ``Active`` once the underlying issue is
+fixed at the shop, without having to wait for the next periodic check. The Status column tooltip
+surfaces the last-checked timestamp and the next-due-date as reported by the remote, where
+available.
+
Permissions
-----------
diff --git a/lib/pdm-api-types/src/subscription.rs b/lib/pdm-api-types/src/subscription.rs
index df1fec1c..32706654 100644
--- a/lib/pdm-api-types/src/subscription.rs
+++ b/lib/pdm-api-types/src/subscription.rs
@@ -120,6 +120,14 @@ pub struct NodeSubscriptionInfo {
/// Serverid of the node, if accessible
#[serde(skip_serializing)]
pub serverid: Option<String>,
+
+ /// Epoch of the last successful subscription check on the node.
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub check_time: Option<i64>,
+
+ /// Next due date of the subscription, as reported by the remote.
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub next_due_date: Option<String>,
}
#[api(
@@ -558,6 +566,12 @@ pub struct RemoteNodeStatus {
/// True when the pool has a clear queued for this node. Omitted on the wire when false.
#[serde(default, skip_serializing_if = "std::ops::Not::not")]
pub pending_clear: bool,
+ /// Epoch of the last successful subscription check on the node.
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub check_time: Option<i64>,
+ /// Next due date of the subscription, as reported by the remote.
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub next_due_date: Option<String>,
}
#[api]
diff --git a/lib/pdm-client/src/lib.rs b/lib/pdm-client/src/lib.rs
index f03f6c40..eb7a7e89 100644
--- a/lib/pdm-client/src/lib.rs
+++ b/lib/pdm-client/src/lib.rs
@@ -1385,6 +1385,25 @@ impl<T: HttpApiClient> PdmClient<T> {
.nodata()
}
+ /// Trigger a fresh shop-side subscription check on `remote`/`node`. Equivalent to the
+ /// per-product "Check" button: drives `update_subscription(force=true)` and invalidates the
+ /// remote's cached subscription state so the next `subscription_node_status` reflects the
+ /// new verdict.
+ pub async fn subscription_check(&self, remote: &str, node: &str) -> Result<(), Error> {
+ #[derive(Serialize)]
+ struct Args<'a> {
+ remote: &'a str,
+ node: &'a str,
+ }
+ self.0
+ .post(
+ "/api2/extjs/subscriptions/check",
+ &Args { remote, node },
+ )
+ .await?
+ .nodata()
+ }
+
/// Clear every pending assignment in one bulk transaction; returns the count of cleared
/// entries.
pub async fn subscription_clear_pending(
diff --git a/server/src/api/resources.rs b/server/src/api/resources.rs
index d4ed5ab0..8825010c 100644
--- a/server/src/api/resources.rs
+++ b/server/src/api/resources.rs
@@ -959,6 +959,8 @@ async fn fetch_remote_subscription_info(
.level
.and_then(|level| level.parse().ok())
.unwrap_or_default(),
+ check_time: info.checktime,
+ next_due_date: info.nextduedate,
}
}),
);
@@ -975,6 +977,8 @@ async fn fetch_remote_subscription_info(
key: info.key,
level,
serverid: info.serverid,
+ check_time: info.checktime,
+ next_due_date: info.nextduedate,
}
});
diff --git a/server/src/api/subscriptions/mod.rs b/server/src/api/subscriptions/mod.rs
index a8f5cfc5..6b5b4cc0 100644
--- a/server/src/api/subscriptions/mod.rs
+++ b/server/src/api/subscriptions/mod.rs
@@ -47,6 +47,7 @@ const SUBDIRS: SubdirMap = &sorted!([
),
("auto-assign", &Router::new().post(&API_METHOD_AUTO_ASSIGN)),
("bulk-assign", &Router::new().post(&API_METHOD_BULK_ASSIGN)),
+ ("check", &Router::new().post(&API_METHOD_CHECK_SUBSCRIPTION)),
(
"clear-pending",
&Router::new().post(&API_METHOD_CLEAR_PENDING)
@@ -799,6 +800,38 @@ async fn delete_subscription_on_remote(
Ok(())
}
+/// Trigger a fresh shop-side subscription check on `remote`/`node` and return once the remote
+/// has stored the result. Equivalent to the per-product "Check" button, just driven through PDM.
+async fn check_subscription_on_remote(
+ remote: &Remote,
+ product_type: ProductType,
+ node_name: &str,
+) -> Result<(), Error> {
+ match product_type {
+ ProductType::Pve => {
+ let client = crate::connection::make_pve_client(remote)?;
+ client
+ .update_subscription(
+ node_name,
+ pve_api_types::UpdateSubscription { force: Some(true) },
+ )
+ .await?;
+ }
+ ProductType::Pbs => {
+ let client = crate::connection::make_pbs_client(remote)?;
+ client
+ .check_subscription(proxmox_subscription::UpdateSubscription { force: Some(true) })
+ .await?;
+ }
+ ProductType::Pmg | ProductType::Pom => {
+ bail!("PDM cannot check '{product_type}' keys: no remote support yet");
+ }
+ }
+
+ info!("re-checked subscription on {}/{node_name}", remote.id);
+ Ok(())
+}
+
#[api(
input: {
properties: {
@@ -961,6 +994,63 @@ async fn revert_pending_clear(
Ok(())
}
+#[api(
+ input: {
+ properties: {
+ remote: { schema: REMOTE_ID_SCHEMA },
+ // NODE_SCHEMA rejects path-traversal input before it ends up interpolated into the
+ // remote URL `/api2/extjs/nodes/{node}/subscription`.
+ node: { schema: NODE_SCHEMA },
+ },
+ },
+ access: {
+ permission: &Permission::Privilege(&["system"], PRIV_SYS_MODIFY, false),
+ },
+)]
+/// Trigger a fresh shop-side subscription check on `remote`/`node`.
+///
+/// Mirrors the per-product "Check" button on PVE / PBS: drives the remote's
+/// `update_subscription(force=true)` endpoint so a status that went stale at the shop (Invalid,
+/// Expired) gets re-verified without waiting for the next periodic check. The cached
+/// subscription state for the remote is invalidated so the next node-status read reflects the
+/// fresh verdict instead of a 5-minute-stale snapshot.
+///
+/// Per-remote `PRIV_RESOURCE_MODIFY` is enforced inside the handler since the call costs an
+/// outbound HTTPS request to the shop.
+async fn check_subscription(
+ remote: String,
+ node: String,
+ rpcenv: &mut dyn RpcEnvironment,
+) -> Result<(), Error> {
+ let auth_id: Authid = rpcenv
+ .get_auth_id()
+ .context("no authid available")?
+ .parse()?;
+ let user_info = CachedUserInfo::new()?;
+ user_info.check_privs(
+ &auth_id,
+ &["resource", &remote],
+ PRIV_RESOURCE_MODIFY,
+ false,
+ )?;
+
+ let (remotes_config, _) = pdm_config::remotes::config()?;
+ let remote_entry = remotes_config
+ .get(&remote)
+ .ok_or_else(|| http_err!(NOT_FOUND, "remote '{remote}' not found"))?;
+
+ let product_type = match remote_entry.ty {
+ pdm_api_types::remotes::RemoteType::Pve => ProductType::Pve,
+ pdm_api_types::remotes::RemoteType::Pbs => ProductType::Pbs,
+ };
+
+ check_subscription_on_remote(remote_entry, product_type, &node)
+ .await
+ .map_err(|err| http_err!(BAD_REQUEST, "check failed on {remote}/{node}: {err}"))?;
+ invalidate_subscription_info_for_remote(&remote);
+ Ok(())
+}
+
#[api(
input: {
properties: {
@@ -1340,13 +1430,23 @@ async fn collect_node_status(
};
for (node_name, node_info) in &node_infos {
- let (status, level, sockets, current_key) = match node_info {
- Some(info) => (info.status, info.level, info.sockets, info.key.clone()),
+ let (status, level, sockets, current_key, check_time, next_due_date) = match node_info
+ {
+ Some(info) => (
+ info.status,
+ info.level,
+ info.sockets,
+ info.key.clone(),
+ info.check_time,
+ info.next_due_date.clone(),
+ ),
None => (
proxmox_subscription::SubscriptionStatus::NotFound,
SubscriptionLevel::None,
None,
None,
+ None,
+ None,
),
};
@@ -1369,6 +1469,8 @@ async fn collect_node_status(
assigned_key,
current_key,
pending_clear,
+ check_time,
+ next_due_date,
});
}
}
@@ -1996,13 +2098,23 @@ async fn collect_status_uncached(
for (remote_name, remote_ty, result) in results {
let Ok(node_infos) = result else { continue };
for (node_name, node_info) in &node_infos {
- let (status, level, sockets, current_key) = match node_info {
- Some(info) => (info.status, info.level, info.sockets, info.key.clone()),
+ let (status, level, sockets, current_key, check_time, next_due_date) = match node_info
+ {
+ Some(info) => (
+ info.status,
+ info.level,
+ info.sockets,
+ info.key.clone(),
+ info.check_time,
+ info.next_due_date.clone(),
+ ),
None => (
proxmox_subscription::SubscriptionStatus::NotFound,
SubscriptionLevel::None,
None,
None,
+ None,
+ None,
),
};
out.push(RemoteNodeStatus {
@@ -2015,6 +2127,8 @@ async fn collect_status_uncached(
assigned_key: None,
current_key,
pending_clear: false,
+ check_time,
+ next_due_date,
});
}
}
@@ -2098,6 +2212,8 @@ mod tests {
assigned_key: None,
current_key: None,
pending_clear: false,
+ check_time: None,
+ next_due_date: None,
}
}
diff --git a/ui/src/configuration/subscription_registry.rs b/ui/src/configuration/subscription_registry.rs
index b84ddb36..1a70013c 100644
--- a/ui/src/configuration/subscription_registry.rs
+++ b/ui/src/configuration/subscription_registry.rs
@@ -7,6 +7,7 @@ use anyhow::Error;
use yew::virtual_dom::{Key, VComp, VNode};
use proxmox_yew_comp::percent_encoding::percent_encode_component;
+use proxmox_yew_comp::utils::render_epoch;
use proxmox_yew_comp::{http_delete, http_get, http_get_full, http_post};
use proxmox_yew_comp::{
LoadableComponent, LoadableComponentContext, LoadableComponentMaster,
@@ -64,6 +65,26 @@ fn subscription_status_label(status: proxmox_subscription::SubscriptionStatus) -
}
}
+/// Build a multi-line Status-column tooltip listing the last-check timestamp and the
+/// next-due-date when the remote provides them. Returns None if neither is set so the caller
+/// can skip wrapping the cell in a tooltip entirely.
+fn status_tooltip_lines(n: &RemoteNodeStatus) -> Option<String> {
+ let mut lines: Vec<String> = Vec::new();
+ if let Some(ts) = n.check_time {
+ lines.push(tr!("Last checked: {when}", when = render_epoch(ts)));
+ }
+ if let Some(due) = n.next_due_date.as_deref() {
+ if !due.is_empty() {
+ lines.push(tr!("Next due: {date}", date = due.to_string()));
+ }
+ }
+ if lines.is_empty() {
+ None
+ } else {
+ Some(lines.join("\n"))
+ }
+}
+
fn pending_badge(push_count: u32, clear_count: u32) -> Row {
let mut row = Row::new().class(AlignItems::Center).gap(3);
if push_count > 0 {
@@ -248,6 +269,9 @@ pub enum Msg {
AdoptKeyForSelectedNode,
/// Open the confirmation dialog for adopting every foreign live subscription into the pool.
AdoptAllPreview,
+ /// Re-check the subscription on the currently-selected node against the shop. Pure refresh
+ /// path; no confirmation dialog since the action is read-only from the pool's perspective.
+ CheckSubscriptionForSelectedNode,
}
#[derive(PartialEq)]
@@ -385,12 +409,16 @@ impl SubscriptionRegistryComp {
node_field_sorter(a, b, |n| subscription_status_label(n.status))
})
.render(|entry: &NodeTreeEntry| match entry {
- NodeTreeEntry::Node { data: n, .. } => Row::new()
- .class(AlignItems::Baseline)
- .gap(2)
- .with_child(subscription_status_icon(n.status))
- .with_child(subscription_status_label(n.status))
- .into(),
+ NodeTreeEntry::Node { data: n, .. } => {
+ let row = Row::new()
+ .class(AlignItems::Baseline)
+ .gap(2)
+ .with_child(subscription_status_icon(n.status))
+ .with_child(subscription_status_label(n.status));
+ status_tooltip_lines(n)
+ .map(|tip| Tooltip::new(row.clone()).tip(tip).into())
+ .unwrap_or_else(|| row.into())
+ }
NodeTreeEntry::Remote { active, total, .. } => {
let icon = if active == total {
Fa::new("check-circle").class(FontColor::Success)
@@ -736,6 +764,23 @@ impl LoadableComponent for SubscriptionRegistryComp {
ctx.link()
.change_view(Some(ViewState::ConfirmAdoptAll { candidates }));
}
+ Msg::CheckSubscriptionForSelectedNode => {
+ let Some(n) = self.selected_node_status() else {
+ return false;
+ };
+ let remote = n.remote.clone();
+ let node = n.node.clone();
+ let link = ctx.link().clone();
+ ctx.link().spawn(async move {
+ if let Err(err) = crate::pdm_client()
+ .subscription_check(&remote, &node)
+ .await
+ {
+ link.show_error(tr!("Check Subscription"), err.to_string(), true);
+ }
+ link.send_reload();
+ });
+ }
}
true
}
@@ -1087,6 +1132,12 @@ impl SubscriptionRegistryComp {
let can_revert = self.revert_target().is_some();
let can_clear_key = self.selected_node_for_clear().is_some();
let can_adopt_key = self.selected_node_for_adopt().is_some();
+ // Check Subscription is a no-op on the remote when no key is installed (PVE / PBS
+ // `update_subscription` returns early without contacting the shop), so disable the
+ // button to keep the UI honest about what clicking it will do.
+ let can_check = self
+ .selected_node_status()
+ .is_some_and(|n| n.status != proxmox_subscription::SubscriptionStatus::NotFound);
let assign_button = Tooltip::new(
Button::new(tr!("Assign Key"))
.icon_class("fa fa-link")
@@ -1126,6 +1177,20 @@ impl SubscriptionRegistryComp {
.tip(tr!(
"Import the live subscription on the selected node into the pool."
));
+ let check_button = Tooltip::new(
+ Button::new(tr!("Check Subscription"))
+ .icon_class("fa fa-refresh")
+ .disabled(!can_check)
+ .on_activate(
+ ctx.link()
+ .callback(|_| Msg::CheckSubscriptionForSelectedNode),
+ ),
+ )
+ .tip(if can_check {
+ tr!("Re-verify the live subscription against the shop, refreshing the status.")
+ } else {
+ tr!("No subscription installed on the selected node; assign or adopt one first.")
+ });
Panel::new()
.class(FlexFit)
@@ -1137,6 +1202,7 @@ impl SubscriptionRegistryComp {
.with_tool(adopt_key_button)
.with_tool(revert_button)
.with_tool(clear_key_button)
+ .with_tool(check_button)
.with_child(table)
}
--
2.47.3
^ permalink raw reply related [flat|nested] 16+ messages in thread
* [RFC PATCH datacenter-manager v3 12/12] ui: registry: add Add-and-Assign wizard from Assign Key dialog
2026-05-15 7:43 [PATCH datacenter-manager v3 00/12] subscription key pool registry Thomas Lamprecht
` (10 preceding siblings ...)
2026-05-15 7:43 ` [PATCH datacenter-manager v3 11/12] subscription: add Check Subscription action Thomas Lamprecht
@ 2026-05-15 7:43 ` Thomas Lamprecht
11 siblings, 0 replies; 16+ messages in thread
From: Thomas Lamprecht @ 2026-05-15 7:43 UTC (permalink / raw)
To: pdm-devel
Wire a small two-step wizard reachable from the per-node Assign Key
dialog's "Add new key..." button: paste a key, click Next, and land
back on the Assign selector with the just-added key pre-selected
and the original (remote, node) target preserved.
Optional UX shortcut for the empty-pool case; see the post-`---`
RFC note for the keep-vs-drop trade-off.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
---
New in v3, sent as RFC and probably should be skipped on apply: it
crowds the Assign Key dialog with an extra "Add new key..." button and a
separate two-step wizard that competes with the natural left-panel Add
path (i.e., just add a key on the left, the selection on the right stays
intact and thus one can trivially continue there afterwards). Two
discoverability paths for the same outcome are worse than one slightly
longer path.
ui/src/configuration/subscription_assign.rs | 437 +++++++++++++++++-
ui/src/configuration/subscription_registry.rs | 41 ++
2 files changed, 471 insertions(+), 7 deletions(-)
diff --git a/ui/src/configuration/subscription_assign.rs b/ui/src/configuration/subscription_assign.rs
index 16936b7f..9aba0111 100644
--- a/ui/src/configuration/subscription_assign.rs
+++ b/ui/src/configuration/subscription_assign.rs
@@ -1,8 +1,10 @@
-//! Node-first Assign Key dialog opened from the Subscription Registry's node tree panel.
+//! Node-first Assign Key dialog and a small two-step Add-and-Assign wizard, both opened from
+//! the Subscription Registry's node tree panel.
+use std::cell::RefCell;
use std::rc::Rc;
-use anyhow::Error;
+use anyhow::{bail, Error};
use serde_json::json;
use yew::html::IntoEventCallback;
@@ -10,17 +12,18 @@ use yew::virtual_dom::{Key, VComp, VNode};
use pwt::css::FlexFit;
use pwt::prelude::*;
-use pwt::props::{ContainerBuilder, WidgetBuilder};
+use pwt::props::{ContainerBuilder, FieldBuilder, WidgetBuilder};
use pwt::state::{Selection, Store};
use pwt::widget::data_table::{DataTable, DataTableColumn, DataTableHeader};
-use pwt::widget::{Button, Column, Container, Dialog, Row};
+use pwt::widget::form::Hidden;
+use pwt::widget::{Button, Column, Container, Dialog, GridPicker, Row, TabBarItem};
-use proxmox_yew_comp::http_post;
use proxmox_yew_comp::percent_encoding::percent_encode_component;
+use proxmox_yew_comp::{http_post, http_post_full, Wizard, WizardPageRenderInfo};
use pdm_api_types::remotes::RemoteType;
use pdm_api_types::subscription::{
- pick_best_pve_socket_key, socket_count_from_key, SubscriptionKeyEntry,
+ pick_best_pve_socket_key, socket_count_from_key, ProductType, SubscriptionKeyEntry,
};
const KEYS_URL: &str = "/subscriptions/keys";
@@ -124,6 +127,11 @@ pub struct AssignKeyToNodeDialog {
#[prop_or_default]
pub on_done: Option<Callback<()>>,
+
+ /// Invoked when the operator clicks "Add new key..." in the body. The parent is expected
+ /// to close this dialog and open the Add-and-Assign wizard with the same target.
+ #[prop_or_default]
+ pub on_request_wizard: Option<Callback<()>>,
}
impl AssignKeyToNodeDialog {
@@ -142,6 +150,7 @@ impl AssignKeyToNodeDialog {
pool_keys,
pool_digest: None,
on_done: None,
+ on_request_wizard: None,
}
}
@@ -154,6 +163,11 @@ impl AssignKeyToNodeDialog {
self.on_done = cb.into_event_callback();
self
}
+
+ pub fn on_request_wizard(mut self, cb: impl IntoEventCallback<()>) -> Self {
+ self.on_request_wizard = cb.into_event_callback();
+ self
+ }
}
impl From<AssignKeyToNodeDialog> for VNode {
@@ -258,7 +272,7 @@ impl yew::Component for AssignKeyToNodeComp {
Container::new()
.padding(2)
.with_child(tr!(
- "No matching free keys in the pool. Add one via the Key Pool panel first."
+ "No matching free keys in the pool. Use \"Add new key\" to import one."
))
.into()
} else {
@@ -274,6 +288,14 @@ impl yew::Component for AssignKeyToNodeComp {
.padding_top(2)
.gap(2)
.class(pwt::css::JustifyContent::FlexEnd)
+ .with_child(Button::new(tr!("Add new key...")).on_activate({
+ let cb = props.on_request_wizard.clone();
+ move |_| {
+ if let Some(cb) = &cb {
+ cb.emit(());
+ }
+ }
+ }))
.with_flex_spacer()
.with_child(Button::new(tr!("Cancel")).on_activate({
let cb = props.on_done.clone();
@@ -330,3 +352,404 @@ impl yew::Component for AssignKeyToNodeComp {
}
}
+/// Two-step "Add and Assign" wizard.
+#[derive(Properties, Clone, PartialEq)]
+pub struct AddAndAssignWizard {
+ pub remote: String,
+ pub node: String,
+ pub ty: RemoteType,
+ pub node_sockets: Option<i64>,
+
+ #[prop_or_default]
+ pub pool_digest: Option<String>,
+
+ #[prop_or_default]
+ pub on_done: Option<Callback<()>>,
+}
+
+impl AddAndAssignWizard {
+ pub fn new(
+ remote: impl Into<String>,
+ node: impl Into<String>,
+ ty: RemoteType,
+ node_sockets: Option<i64>,
+ ) -> Self {
+ Self {
+ remote: remote.into(),
+ node: node.into(),
+ ty,
+ node_sockets,
+ pool_digest: None,
+ on_done: None,
+ }
+ }
+
+ pub fn pool_digest(mut self, digest: Option<String>) -> Self {
+ self.pool_digest = digest;
+ self
+ }
+
+ pub fn on_done(mut self, cb: impl IntoEventCallback<()>) -> Self {
+ self.on_done = cb.into_event_callback();
+ self
+ }
+}
+
+impl From<AddAndAssignWizard> for VNode {
+ fn from(val: AddAndAssignWizard) -> Self {
+ VComp::new::<AddAndAssignWizardComp>(Rc::new(val), None).into()
+ }
+}
+
+pub struct AddAndAssignWizardComp {
+ /// Shared mutable digest cell. The Add step writes the post-POST digest the server settled
+ /// on; the on_submit closure reads it for the Assign POST. Kept on the Component (not
+ /// recreated in `view()`) so it survives re-renders triggered by parent prop changes - if
+ /// the cell were instantiated inside `view()`, a re-render would detach the Add step's
+ /// already-registered `on_next` closure (which captured the old cell) from the new cell the
+ /// `on_submit` closure would read.
+ digest_cell: Rc<RefCell<Option<String>>>,
+}
+
+impl yew::Component for AddAndAssignWizardComp {
+ type Message = ();
+ type Properties = AddAndAssignWizard;
+
+ fn create(ctx: &yew::Context<Self>) -> Self {
+ Self {
+ digest_cell: Rc::new(RefCell::new(ctx.props().pool_digest.clone())),
+ }
+ }
+
+ fn view(&self, ctx: &yew::Context<Self>) -> Html {
+ let props = ctx.props();
+ let remote = props.remote.clone();
+ let node = props.node.clone();
+ let ty = props.ty;
+ let node_sockets = props.node_sockets;
+
+ let add_cell = self.digest_cell.clone();
+ let submit_cell = self.digest_cell.clone();
+ let assign_remote = remote.clone();
+ let assign_node = node.clone();
+
+ Wizard::new(tr!(
+ "Add and Assign Key on {remote}/{node}",
+ remote = remote.clone(),
+ node = node.clone()
+ ))
+ .width(700)
+ .on_done(props.on_done.clone())
+ .with_page(
+ TabBarItem::new().key("add").label(tr!("Add Key")),
+ move |p: &WizardPageRenderInfo| add_step(p.clone(), add_cell.clone()),
+ )
+ .with_page(
+ TabBarItem::new().key("assign").label(tr!("Assign")),
+ move |p: &WizardPageRenderInfo| {
+ assign_step(p.clone(), assign_remote.clone(), assign_node.clone(), ty, node_sockets)
+ },
+ )
+ .submit_text(tr!("Assign"))
+ .on_submit(move |data: serde_json::Value| {
+ let remote = remote.clone();
+ let node = node.clone();
+ let digest = submit_cell.borrow().clone();
+ async move {
+ let key = data
+ .get("key")
+ .and_then(|v| v.as_str())
+ .unwrap_or_default()
+ .to_string();
+ if key.is_empty() {
+ bail!("no key selected");
+ }
+ submit_assignment(&key, &remote, &node, digest.as_deref()).await
+ }
+ })
+ .into()
+ }
+}
+
+/// Step 1 of the Add-and-Assign wizard. A small Yew component so the failure path of the
+/// underlying POST can surface an error into the page (`on_next` is a sync callback that
+/// dispatches the actual network work into a future).
+#[derive(Properties, Clone)]
+struct AddStepProps {
+ info: WizardPageRenderInfo,
+ /// Shared cell with the wizard's idea of the current pool digest: read here to pin the Add
+ /// POST, updated here after the POST succeeds so the on_submit closure that fires the
+ /// Assign POST picks up the post-Add value instead of the now-stale at-open digest.
+ digest_cell: Rc<RefCell<Option<String>>>,
+}
+
+impl PartialEq for AddStepProps {
+ fn eq(&self, other: &Self) -> bool {
+ // The `info` carries the wizard's render context; the cell is a stable shared pointer.
+ // PartialEq is required by `Properties` but the inner `RefCell` is interior-mutable, so
+ // compare by Rc identity to keep equality cheap and avoid panicking on a borrow.
+ self.info == other.info && Rc::ptr_eq(&self.digest_cell, &other.digest_cell)
+ }
+}
+
+impl From<AddStepProps> for VNode {
+ fn from(val: AddStepProps) -> Self {
+ VComp::new::<AddStepComp>(Rc::new(val), None).into()
+ }
+}
+
+enum AddStepMsg {
+ AddFailed(String),
+ AddSucceeded,
+}
+
+struct AddStepComp {
+ last_error: Option<String>,
+}
+
+impl yew::Component for AddStepComp {
+ type Message = AddStepMsg;
+ type Properties = AddStepProps;
+
+ fn create(ctx: &yew::Context<Self>) -> Self {
+ let page = ctx.props().info.clone();
+ let form_ctx = page.form_ctx.clone();
+ let digest_cell = ctx.props().digest_cell.clone();
+ let link = ctx.link().clone();
+ page.clone().on_next(Callback::from(move |()| -> bool {
+ // Parse, validate, POST. Advance only after the add succeeds, so a failed add keeps
+ // the operator on step 1 with the same input intact and the error visible.
+ let raw = form_ctx.read().get_field_text("keys");
+ let keys: Vec<String> = raw
+ .split(|c: char| c.is_whitespace() || c == ',')
+ .map(str::trim)
+ .filter(|s| !s.is_empty())
+ .map(str::to_string)
+ .collect();
+ if keys.is_empty() {
+ link.send_message(AddStepMsg::AddFailed(tr!(
+ "Enter at least one subscription key."
+ )));
+ return false;
+ }
+ let page = page.clone();
+ let form = form_ctx.clone();
+ let link = link.clone();
+ let digest_cell = digest_cell.clone();
+ wasm_bindgen_futures::spawn_local(async move {
+ let pinned = digest_cell.borrow().clone();
+ let mut body = json!({ "keys": keys.clone() });
+ if let Some(d) = &pinned {
+ body["digest"] = d.clone().into();
+ }
+ // `http_post_full` returns the response's `attribs`, which carries the digest
+ // the server settled on after this write. Pin that into the shared cell so the
+ // `on_submit` closure (which fires the Assign POST) uses the post-Add value
+ // rather than the now-stale at-open digest. Closes the race window a chained
+ // POST+GET would otherwise leave open: a parallel admin's mutation cannot land
+ // between the two calls because there is only one call.
+ match http_post_full::<()>(KEYS_URL, Some(body)).await {
+ Ok(resp) => {
+ let new_digest = resp
+ .attribs
+ .get("digest")
+ .and_then(|v| v.as_str())
+ .map(str::to_string);
+ *digest_cell.borrow_mut() = new_digest;
+ // Stash the just-added keys in step 1's Hidden field so step 2 can read
+ // them via `info.lookup_form_context(&Key::from("add"))`. Step 2 has its
+ // own `FormContext` and would not otherwise see step 1's data.
+ form.write().set_field_value("__added_keys", json!(keys));
+ link.send_message(AddStepMsg::AddSucceeded);
+ page.go_to_next_page();
+ }
+ Err(err) => link.send_message(AddStepMsg::AddFailed(err.to_string())),
+ }
+ });
+ false
+ }));
+ Self { last_error: None }
+ }
+
+ fn update(&mut self, _ctx: &yew::Context<Self>, msg: Self::Message) -> bool {
+ match msg {
+ AddStepMsg::AddFailed(err) => self.last_error = Some(err),
+ AddStepMsg::AddSucceeded => self.last_error = None,
+ }
+ true
+ }
+
+ fn view(&self, _ctx: &yew::Context<Self>) -> Html {
+ use pwt::widget::form::TextArea;
+
+ // Render into the wizard's per-page FormContext (provided by the outer Form set up by
+ // the Wizard widget). Wrapping in another `Form::new()` here would create a separate,
+ // unparented FormContext that `on_next` cannot read - the textarea's `keys` value would
+ // never reach the validation closure and Next would always report "no keys".
+ //
+ // A plain Column keeps the layout to a single-column flow; InputPanel's CSS grid sized
+ // its track to the textarea's intrinsic `cols` value, which overflowed the wizard's
+ // 700px dialog and forced a horizontal scrollbar on wider screens.
+ let mut column = Column::new()
+ .padding(4)
+ .gap(2)
+ .class("pwt-w-100")
+ // `__added_keys` carries the just-added keys forward to step 2's FormContext lookup
+ // via `info.lookup_form_context(&Key::from("add"))`; rendered as Hidden so it is
+ // registered on the page context but takes no visual space.
+ .with_child(
+ Hidden::new()
+ .name("__added_keys")
+ .submit_empty(false),
+ )
+ .with_child(
+ TextArea::new()
+ .name("keys")
+ .submit_empty(false)
+ .required(true)
+ .attribute("rows", "6")
+ .attribute("cols", "1")
+ .attribute("placeholder", tr!("Subscription key(s)"))
+ .class("pwt-w-100"),
+ )
+ .with_child(
+ Container::new()
+ .class(pwt::css::FontStyle::TitleSmall)
+ .class(pwt::css::Opacity::Quarter)
+ .with_child(tr!(
+ "One key per line, or comma-separated. The keys are added to the \
+ pool when you click Next. Step 2 will pick which one to assign; \
+ cancelling on step 2 leaves the just-added keys in the pool as \
+ free entries."
+ )),
+ );
+ if let Some(err) = &self.last_error {
+ column = column.with_child(
+ Container::new()
+ .class(pwt::css::FontColor::Error)
+ .with_child(err.clone()),
+ );
+ }
+ column.into()
+ }
+}
+
+fn add_step(p: WizardPageRenderInfo, digest_cell: Rc<RefCell<Option<String>>>) -> Html {
+ AddStepProps {
+ info: p,
+ digest_cell,
+ }
+ .into()
+}
+
+fn assign_step(
+ p: WizardPageRenderInfo,
+ remote: String,
+ node: String,
+ ty: RemoteType,
+ node_sockets: Option<i64>,
+) -> Html {
+ let form_ctx = p.form_ctx.clone();
+ let columns = key_columns();
+
+ // The wizard keeps one FormContext per page, so step 2 cannot read step 1's field directly:
+ // look up the "add" page's context and read `__added_keys` from there. The lookup can
+ // return None on the first render after `go_to_next_page()` if step 1's context is not yet
+ // mounted; render a transient "Loading..." instead of the friendly "no keys" message so the
+ // operator does not get a false-negative flash before the real list appears.
+ let Some(add_form) = p.lookup_form_context(&Key::from("add")) else {
+ return Container::new()
+ .padding(4)
+ .with_child(tr!("Loading..."))
+ .into();
+ };
+ let added: Vec<String> = match add_form.read().get_field_value("__added_keys") {
+ Some(v) => serde_json::from_value(v.clone()).unwrap_or_default(),
+ None => Vec::new(),
+ };
+ let mut candidates: Vec<SubscriptionKeyEntry> = added
+ .iter()
+ .filter_map(|k| {
+ let product_type = ProductType::from_key(k)?;
+ if !product_type.matches_remote_type(ty) {
+ return None;
+ }
+ Some(SubscriptionKeyEntry {
+ key: k.clone(),
+ product_type,
+ level: pdm_api_types::subscription::SubscriptionLevel::from_key(Some(k)),
+ ..Default::default()
+ })
+ })
+ .collect();
+ candidates.sort_by(|a, b| {
+ let sa = socket_count_from_key(&a.key);
+ let sb = socket_count_from_key(&b.key);
+ sa.cmp(&sb).then_with(|| a.key.cmp(&b.key))
+ });
+
+ if candidates.is_empty() {
+ return Container::new()
+ .padding(4)
+ .with_child(tr!(
+ "Step 1 did not yield any keys compatible with this node's product type."
+ ))
+ .into();
+ }
+
+ // Preserve user's pick across re-renders by reading back from step 2's form ctx.
+ let prior_pick: Option<String> = form_ctx
+ .read()
+ .get_field_value("key")
+ .and_then(|v| v.as_str().map(str::to_string));
+ let first_render = prior_pick.is_none();
+ let default = prior_pick.or(default_candidate(&candidates, ty, node_sockets));
+
+ let store = Store::with_extract_key(|e: &SubscriptionKeyEntry| Key::from(e.key.as_str()));
+ store.set_data(candidates);
+
+ let form_for_select = form_ctx.clone();
+ let selection = Selection::new().on_select(move |sel: Selection| {
+ if let Some(key) = sel.selected_key() {
+ form_for_select
+ .write()
+ .set_field_value("key", json!(key.to_string()));
+ }
+ });
+ if let Some(d) = &default {
+ selection.select(Key::from(d.clone()));
+ if first_render {
+ // Only seed the form on the first render. Re-writing on every render would mask the
+ // operator's later pick if FormContext signalled change on equal-value writes and we
+ // ended up in a render loop.
+ form_ctx
+ .write()
+ .set_field_value("key", json!(d.clone()));
+ }
+ }
+
+ Column::new()
+ .gap(2)
+ .padding(2)
+ // Register `key` as a real Hidden field so the wizard's merged submit data carries the
+ // selected pool key all the way through to `on_submit`. Without this, the selection
+ // handler's `set_field_value("key", ...)` writes to a free-form slot the wizard would
+ // not include in the submitted Value.
+ .with_child(Hidden::new().name("key").submit_empty(false))
+ .with_child(
+ Row::new()
+ .gap(2)
+ .with_child(Container::new().with_child(tr!("Target:")))
+ .with_child(Container::new().with_child(format!("{remote}/{node}"))),
+ )
+ .with_child(
+ GridPicker::new(
+ DataTable::new(columns, store)
+ .min_width(500)
+ .header_focusable(false)
+ .class(FlexFit),
+ )
+ .selection(selection),
+ )
+ .into()
+}
diff --git a/ui/src/configuration/subscription_registry.rs b/ui/src/configuration/subscription_registry.rs
index 1a70013c..9d2b19cc 100644
--- a/ui/src/configuration/subscription_registry.rs
+++ b/ui/src/configuration/subscription_registry.rs
@@ -306,6 +306,13 @@ pub enum ViewState {
ty: pdm_api_types::remotes::RemoteType,
node_sockets: Option<i64>,
},
+ /// Two-step "Add and Assign" wizard launched from the AssignKeyToNode dialog.
+ AddAndAssignWizard {
+ remote: String,
+ node: String,
+ ty: pdm_api_types::remotes::RemoteType,
+ node_sockets: Option<i64>,
+ },
}
#[doc(hidden)]
@@ -1077,6 +1084,13 @@ impl LoadableComponent for SubscriptionRegistryComp {
} => {
use super::subscription_assign::AssignKeyToNodeDialog;
let close_link = ctx.link().clone();
+ let wizard_link = ctx.link().clone();
+ let wizard_target = (
+ remote.clone(),
+ node.clone(),
+ *ty,
+ *node_sockets,
+ );
Some(
AssignKeyToNodeDialog::new(
remote.clone(),
@@ -1090,9 +1104,36 @@ impl LoadableComponent for SubscriptionRegistryComp {
close_link.change_view(None);
close_link.send_reload();
}))
+ .on_request_wizard(Callback::from(move |_| {
+ let (remote, node, ty, node_sockets) = wizard_target.clone();
+ wizard_link.change_view(Some(ViewState::AddAndAssignWizard {
+ remote,
+ node,
+ ty,
+ node_sockets,
+ }));
+ }))
.into(),
)
}
+ ViewState::AddAndAssignWizard {
+ remote,
+ node,
+ ty,
+ node_sockets,
+ } => {
+ use super::subscription_assign::AddAndAssignWizard;
+ let close_link = ctx.link().clone();
+ Some(
+ AddAndAssignWizard::new(remote.clone(), node.clone(), *ty, *node_sockets)
+ .pool_digest(self.pool_digest.clone())
+ .on_done(Callback::from(move |_| {
+ close_link.change_view(None);
+ close_link.send_reload();
+ }))
+ .into(),
+ )
+ }
}
}
}
--
2.47.3
^ permalink raw reply related [flat|nested] 16+ messages in thread
* Re: [PATCH datacenter-manager v3 03/12] subscription: pool: add data model and config layer
2026-05-15 7:43 ` [PATCH datacenter-manager v3 03/12] subscription: pool: add data model and config layer Thomas Lamprecht
@ 2026-05-15 15:21 ` Wolfgang Bumiller
0 siblings, 0 replies; 16+ messages in thread
From: Wolfgang Bumiller @ 2026-05-15 15:21 UTC (permalink / raw)
To: Thomas Lamprecht; +Cc: pdm-devel
On Fri, 15 May 2026 09:43:13 +0200, Thomas Lamprecht <t.lamprecht@proxmox.com> wrote:
> diff --git a/lib/pdm-api-types/src/subscription.rs b/lib/pdm-api-types/src/subscription.rs
> index f0eb525..811bce4 100644
> --- a/lib/pdm-api-types/src/subscription.rs
> +++ b/lib/pdm-api-types/src/subscription.rs
> @@ -174,3 +179,395 @@ pub struct PdmSubscriptionInfo {
> [ ... skip 94 lines ... ]
> +pub fn socket_count_from_key(key: &str) -> Option<u32> {
> + let (prefix, _) = key.split_once('-')?;
> + if !prefix.starts_with("pve") {
> + return None;
> + }
> + let after_pve = &prefix[3..];
Above 4 lines could be shortened to
let after_pve = prefix.strip_prefix("pve")?;
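i.e. the helper's prefix handling would then read roughly like this (just a
sketch - the tail of the function is elided in the quote above, so the digit
parsing below is only an illustration of the idea, not the actual
implementation):

    pub fn socket_count_from_key(key: &str) -> Option<u32> {
        let (prefix, _) = key.split_once('-')?;
        // strip_prefix() replaces the starts_with() check plus the manual
        // `&prefix[3..]` slice and bails out via `?` for non-pve prefixes
        let after_pve = prefix.strip_prefix("pve")?;
        // illustrative tail only: take the leading digits after "pve"
        let digits: String = after_pve
            .chars()
            .take_while(|c| c.is_ascii_digit())
            .collect();
        digits.parse().ok()
    }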
--
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH datacenter-manager v3 02/12] pdm-client: add wait_for_local_task helper
2026-05-15 7:43 ` [PATCH datacenter-manager v3 02/12] pdm-client: add wait_for_local_task helper Thomas Lamprecht
@ 2026-05-15 15:21 ` Wolfgang Bumiller
0 siblings, 0 replies; 16+ messages in thread
From: Wolfgang Bumiller @ 2026-05-15 15:21 UTC (permalink / raw)
To: Thomas Lamprecht; +Cc: pdm-devel
On Fri, 15 May 2026 09:43:12 +0200, Thomas Lamprecht <t.lamprecht@proxmox.com> wrote:
> diff --git a/lib/pdm-client/src/lib.rs b/lib/pdm-client/src/lib.rs
> index 76b33ef..cb5bb04 100644
> --- a/lib/pdm-client/src/lib.rs
> +++ b/lib/pdm-client/src/lib.rs
> @@ -890,6 +890,36 @@ impl<T: HttpApiClient> PdmClient<T> {
> [ ... skip 20 lines ... ]
> + let body: Value = self.0.get(&path).await?.expect_json()?.data;
> + let running = body
> + .get("status")
> + .and_then(Value::as_str)
> + .map(|s| s == "running")
> + .unwrap_or(false);
Minor nit:
The last 3 lines could be condensed to
== Some("running")
which is slightly easier to read, and even makes it short enough to fit
on one line between the `if` and the `{`.
Also, since `body` is already a `Value`, it's safe to just index it. If
it is not a map, it'll just give you a `Null`, and it does the same if
it is a map but the key does not exist. This means we could call
`.as_str()` without needing `.and_then()`, so the entire check could be:
if body["running"].as_str() == Some("running") {
...
}
See the indexing docs here[1]
[1] https://docs.rs/serde_json/latest/serde_json/enum.Value.html#method.get
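Taken together, the poll body could then read roughly like this (sketch only,
based on the lines quoted above; the surrounding retry loop is elided here):

    let body: Value = self.0.get(&path).await?.expect_json()?.data;
    // Value's Index impl yields Null for a missing key (or a non-map value),
    // so the whole Option chain collapses into a single comparison:
    if body["status"].as_str() == Some("running") {
        // ... sleep and poll again ...
    }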
--
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH datacenter-manager v3 04/12] subscription: api: add key pool and node status endpoints
2026-05-15 7:43 ` [PATCH datacenter-manager v3 04/12] subscription: api: add key pool and node status endpoints Thomas Lamprecht
@ 2026-05-15 15:21 ` Wolfgang Bumiller
0 siblings, 0 replies; 16+ messages in thread
From: Wolfgang Bumiller @ 2026-05-15 15:21 UTC (permalink / raw)
To: Thomas Lamprecht; +Cc: pdm-devel
On Fri, 15 May 2026 09:43:14 +0200, Thomas Lamprecht <t.lamprecht@proxmox.com> wrote:
> diff --git a/server/src/api/subscriptions/mod.rs b/server/src/api/subscriptions/mod.rs
> new file mode 100644
> index 0000000..aa3146e
> --- /dev/null
> +++ b/server/src/api/subscriptions/mod.rs
> @@ -0,0 +1,1542 @@
> [ ... skip 210 lines ... ]
> + // loop on the first collision. The all-or-nothing contract holds because save_config
> + // only runs after the loop completes, so a bail on entry N leaves the on-disk pool
> + // untouched even if entries 1..N already landed in the in-memory `config`.
> + for entry in entries {
> + let key = entry.key.clone();
> + if let Some(existing) = config.insert(key.clone(), entry) {
^ This clone can be skipped, since you use the key from `existing` for
the error message.
> [ ... skip 34 lines ... ]
> + let (config, digest) = pdm_config::subscriptions::config()?;
> + rpcenv["digest"] = digest.to_hex().into();
> + let mut entry = config
> + .get(&key)
> + .cloned()
> + .ok_or_else(|| http_err!(NOT_FOUND, "key '{key}' not found in pool"))?;
We could put the `http_err!()` call into a reusable closure so there is
only a single instance of the message text, making it harder for future
patches to accidentally change only one of them.
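Something along these lines (sketch; it assumes the same key lookup - and
thus the same message - shows up more than once in that handler):

    // single source for the error text, reused by every lookup of `key`
    let key_not_found = || http_err!(NOT_FOUND, "key '{key}' not found in pool");

    let (config, digest) = pdm_config::subscriptions::config()?;
    rpcenv["digest"] = digest.to_hex().into();
    let mut entry = config
        .get(&key)
        .cloned()
        .ok_or_else(&key_not_found)?;
    // ... and any later lookup of the same key reuses the exact same text:
    let _entry_again = config.get(&key).ok_or_else(&key_not_found)?;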
--
^ permalink raw reply [flat|nested] 16+ messages in thread
Thread overview: 16+ messages
2026-05-15 7:43 [PATCH datacenter-manager v3 00/12] subscription key pool registry Thomas Lamprecht
2026-05-15 7:43 ` [PATCH datacenter-manager v3 01/12] api types: subscription level: render full names Thomas Lamprecht
2026-05-15 7:43 ` [PATCH datacenter-manager v3 02/12] pdm-client: add wait_for_local_task helper Thomas Lamprecht
2026-05-15 15:21 ` Wolfgang Bumiller
2026-05-15 7:43 ` [PATCH datacenter-manager v3 03/12] subscription: pool: add data model and config layer Thomas Lamprecht
2026-05-15 15:21 ` Wolfgang Bumiller
2026-05-15 7:43 ` [PATCH datacenter-manager v3 04/12] subscription: api: add key pool and node status endpoints Thomas Lamprecht
2026-05-15 15:21 ` Wolfgang Bumiller
2026-05-15 7:43 ` [PATCH datacenter-manager v3 05/12] ui: registry: add view with key pool and node status Thomas Lamprecht
2026-05-15 7:43 ` [PATCH datacenter-manager v3 06/12] cli: client: add subscription key pool management subcommands Thomas Lamprecht
2026-05-15 7:43 ` [PATCH datacenter-manager v3 07/12] docs: add subscription registry chapter Thomas Lamprecht
2026-05-15 7:43 ` [PATCH datacenter-manager v3 08/12] subscription: add Clear Key action and per-node revert Thomas Lamprecht
2026-05-15 7:43 ` [PATCH datacenter-manager v3 09/12] subscription: add Adopt Key action for foreign live subscriptions Thomas Lamprecht
2026-05-15 7:43 ` [PATCH datacenter-manager v3 10/12] subscription: add Adopt All bulk action Thomas Lamprecht
2026-05-15 7:43 ` [PATCH datacenter-manager v3 11/12] subscription: add Check Subscription action Thomas Lamprecht
2026-05-15 7:43 ` [RFC PATCH datacenter-manager v3 12/12] ui: registry: add Add-and-Assign wizard from Assign Key dialog Thomas Lamprecht