public inbox for pdm-devel@lists.proxmox.com
 help / color / mirror / Atom feed
From: Thomas Lamprecht <t.lamprecht@proxmox.com>
To: pdm-devel@lists.proxmox.com
Subject: [PATCH 8/8] subscription: add Reissue Key action with pending-reissue queue
Date: Thu,  7 May 2026 09:17:31 +0200	[thread overview]
Message-ID: <20260507072436.2649563-9-t.lamprecht@proxmox.com> (raw)
In-Reply-To: <20260507072436.2649563-1-t.lamprecht@proxmox.com>

Wire a new "Reissue Key" action on the Node Subscription Status panel
(previously titled Node Status) that queues the live subscription on a
remote node for removal at next Apply Pending. This mirrors how the
shop describes keys that may be re-bound to a different server ID, so
the panel uses the same vocabulary end to end.

The pool entry bound to (remote, node) gets a pending-reissue flag.
Apply Pending issues a DELETE call on the remote and clears the
binding on success; the entry stays in the pool as a free key. Clear
Pending only resets the flag and leaves the binding intact, so an
operator can retry the queueing without losing context.

Foreign-key adoption: when the live node runs a subscription the pool
has never seen, queueing a reissue imports that key into the pool with
the flag set. If the operator then runs Clear Pending, the imported
entry stays in the pool, still bound to that node. This shortcut lets
a legacy node be brought under PDM management by queueing a reissue
and then immediately cancelling it, without re-typing the key.

The left-side Key Pool grid renames its action button to "Remove Key"
to distinguish it from the new Reissue Key action, and the confirmation
dialog now warns when the entry is still bound to a remote node so an
operator does not lose track of the live subscription by mistake.

Note that, as of now, this might be better called "Clear Key"; it was
split out from broader work that will add an actual active reissue here.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
---
 cli/client/src/subscriptions.rs               |  31 +-
 docs/subscription-registry.rst                |  44 ++-
 lib/pdm-api-types/src/subscription.rs         |  14 +
 lib/pdm-api-types/tests/test_import.rs        |   1 +
 lib/pdm-client/src/lib.rs                     |  31 ++
 server/src/api/subscriptions/mod.rs           | 282 ++++++++++++++++--
 ui/src/configuration/subscription_keys.rs     |  29 +-
 ui/src/configuration/subscription_registry.rs | 176 ++++++++---
 8 files changed, 508 insertions(+), 100 deletions(-)

diff --git a/cli/client/src/subscriptions.rs b/cli/client/src/subscriptions.rs
index 5d5532b..5472a71 100644
--- a/cli/client/src/subscriptions.rs
+++ b/cli/client/src/subscriptions.rs
@@ -7,7 +7,7 @@ use proxmox_schema::api;
 
 use pdm_api_types::remotes::REMOTE_ID_SCHEMA;
 use pdm_api_types::subscription::{RemoteSubscriptionState, SUBSCRIPTION_KEY_SCHEMA};
-use pdm_api_types::VIEW_ID_SCHEMA;
+use pdm_api_types::{NODE_SCHEMA, VIEW_ID_SCHEMA};
 
 use crate::env::emoji;
 use crate::{client, env};
@@ -38,6 +38,10 @@ pub fn cli() -> CommandLineInterface {
         .insert("auto-assign", CliCommand::new(&API_METHOD_AUTO_ASSIGN))
         .insert("apply-pending", CliCommand::new(&API_METHOD_APPLY_PENDING))
         .insert("clear-pending", CliCommand::new(&API_METHOD_CLEAR_PENDING))
+        .insert(
+            "queue-reissue",
+            CliCommand::new(&API_METHOD_QUEUE_REISSUE).arg_param(&["remote", "node"]),
+        )
         .into()
 }
 
@@ -147,6 +151,9 @@ async fn list_keys() -> Result<(), Error> {
         let key_width = keys.iter().map(|k| k.key.len()).max().unwrap_or(20);
         for key in &keys {
             let assignment = match (&key.remote, &key.node) {
+                (Some(r), Some(n)) if key.pending_reissue => {
+                    format!("{r}/{n} [reissue queued]")
+                }
                 (Some(r), Some(n)) => format!("{r}/{n}"),
                 _ => "(unassigned)".to_string(),
             };
@@ -272,6 +279,28 @@ async fn auto_assign(apply: Option<bool>) -> Result<(), Error> {
     Ok(())
 }
 
+#[api(
+    input: {
+        properties: {
+            remote: { schema: REMOTE_ID_SCHEMA },
+            node: { schema: NODE_SCHEMA },
+        },
+    },
+)]
+/// Queue a reissue on a remote node so its subscription can be removed at next Apply Pending.
+///
+/// If the live node runs a key that is not yet in the pool, the key is imported into the pool
+/// with the queueing flag set. Cancelling via Clear Pending later leaves that imported entry in
+/// the pool, effectively adopting the previously foreign subscription for future PDM-side
+/// management.
+async fn queue_reissue(remote: String, node: String) -> Result<(), Error> {
+    client()?
+        .subscription_queue_reissue(&remote, &node, None)
+        .await?;
+    println!("Queued reissue on {remote}/{node}; run apply-pending to apply.");
+    Ok(())
+}
+
 #[api]
 /// Push all pending key assignments to remotes as a worker task.
 async fn apply_pending() -> Result<(), Error> {
diff --git a/docs/subscription-registry.rst b/docs/subscription-registry.rst
index 95c2cd4..9f5522d 100644
--- a/docs/subscription-registry.rst
+++ b/docs/subscription-registry.rst
@@ -16,29 +16,43 @@ Keys can be added in bulk from the web interface or with the ``proxmox-datacente
 subscriptions add-keys`` command. The Add dialog takes multiple keys, separated by newlines or
 commas, and validates the whole batch atomically.
 
-Node Status
------------
+Node Subscription Status
+------------------------
 
-The Node Status panel shows the live subscription state of every node behind a configured remote
-alongside any pending plan from the pool. Nodes that already hold a key the registry assigned appear
-with the live level; nodes with a pending pool assignment show a clock icon until the change is
-pushed to the remote.
+The Node Subscription Status panel shows the live subscription state of every node behind a
+configured remote alongside any pending plan from the pool. Nodes that already hold a key the
+registry assigned appear with the live level; nodes with a pending pool assignment show a clock
+icon until the change is pushed to the remote.
 
-From this view an operator can clear a pending assignment or remove the key from the pool entirely,
-which is convenient when a node is known to be wrong without first having to find the matching entry
-on the key list.
+From this view an operator can clear a pending assignment or queue a reissue on the selected
+node. Reissuing means freeing the live subscription key from this node so it can be reassigned
+elsewhere; this is the same notion the shop uses for keys that may be re-bound to a new server
+ID. The action is queued, not immediate, so it can be confirmed or cancelled together with
+other pending changes via Apply Pending and Clear Pending.
 
-Assignment
-----------
+Assignment and Reissue
+----------------------
 
 A key can be pinned to a single node manually.
 
 The Auto-Assign action proposes a plan that fills unsubscribed nodes from free pool keys. For
-Proxmox VE, the smallest covering key by socket count is chosen, so a 4-socket key is not used on a
-2-socket host while a larger host stays unsubscribed.
+Proxmox VE, the smallest covering key by socket count is chosen, so a 4-socket key is not used
+on a 2-socket host while a larger host stays unsubscribed.
 
-The proposed plan can be inspected before it is applied. Apply Pending pushes the queued keys to
-their target nodes; if a push fails the remaining queue is kept intact for retry. Clear Pending
+The Reissue Key action queues the live subscription on the selected node for removal. Apply
+Pending later issues the removal on the remote and releases the pool binding so the key becomes
+available for reassignment. Clear Pending drops the queued reissue without touching the remote;
+the binding stays intact and the operator can retry.
+
+When the live node runs a subscription that is not yet tracked by the pool, queueing a reissue
+also imports that key into the pool. If the operator then runs Clear Pending, the imported
+entry stays in the pool, still bound to the node, effectively adopting the previously foreign
+subscription for future PDM-side management. This lets a legacy node be brought under the registry by
+queueing a reissue and then immediately cancelling it, without having to re-enter the key by
+hand.
+
+The proposed plan can be inspected before it is applied. Apply Pending walks the queue in
+order; if any push or reissue fails the remaining queue is kept intact for retry. Clear Pending
 drops the plan without touching any remote.
 
 Permissions
diff --git a/lib/pdm-api-types/src/subscription.rs b/lib/pdm-api-types/src/subscription.rs
index ead3c1b..9927d38 100644
--- a/lib/pdm-api-types/src/subscription.rs
+++ b/lib/pdm-api-types/src/subscription.rs
@@ -317,6 +317,7 @@ pub enum SubscriptionKeySource {
         "level": { optional: true },
         "status": { optional: true },
         "source": { optional: true },
+        "pending-reissue": { optional: true },
     },
 )]
 #[derive(Clone, Debug, Default, Deserialize, Serialize, PartialEq)]
@@ -346,6 +347,16 @@ pub struct SubscriptionKeyEntry {
     #[serde(skip_serializing_if = "Option::is_none")]
     pub node: Option<String>,
 
+    /// True when the operator queued a reissue for this entry's bound node, that is, a request
+    /// to free the key from `remote`/`node` so it can be reassigned to a different node.
+    ///
+    /// Apply Pending will issue a DELETE on the remote and then clear `remote`/`node` on
+    /// success. Clear Pending only resets this flag and leaves the binding untouched, so a
+    /// foreign current_key that was newly imported to queue a reissue stays adopted in the
+    /// pool. A bare flag is enough since the (remote, node) binding lives next to it.
+    #[serde(default, rename = "pending-reissue")]
+    pub pending_reissue: bool,
+
     /// Server ID this key is bound to (from signed info, if available).
     #[serde(skip_serializing_if = "Option::is_none")]
     pub serverid: Option<String>,
@@ -527,6 +538,9 @@ pub struct RemoteNodeStatus {
     /// Current key on the node (from remote query).
     #[serde(skip_serializing_if = "Option::is_none")]
     pub current_key: Option<String>,
+    /// True when the pool has a reissue queued for this node.
+    #[serde(default, rename = "pending-reissue")]
+    pub pending_reissue: bool,
 }
 
 #[api]
diff --git a/lib/pdm-api-types/tests/test_import.rs b/lib/pdm-api-types/tests/test_import.rs
index 2bb1cd6..e8502a5 100644
--- a/lib/pdm-api-types/tests/test_import.rs
+++ b/lib/pdm-api-types/tests/test_import.rs
@@ -17,6 +17,7 @@ fn entry_roundtrip() {
         source: SubscriptionKeySource::Manual,
         remote: Some("my-cluster".to_string()),
         node: Some("node1".to_string()),
+        pending_reissue: false,
         serverid: Some("AABBCCDD".to_string()),
         status: SubscriptionStatus::Active,
         nextduedate: Some("2027-06-01".to_string()),
diff --git a/lib/pdm-client/src/lib.rs b/lib/pdm-client/src/lib.rs
index b0527b1..7e6411c 100644
--- a/lib/pdm-client/src/lib.rs
+++ b/lib/pdm-client/src/lib.rs
@@ -1201,6 +1201,37 @@ impl<T: HttpApiClient> PdmClient<T> {
             .data)
     }
 
+    /// Queue a reissue for the subscription on `remote`/`node`. Apply Pending later removes the
+    /// subscription from the node so the key can be reassigned elsewhere; Clear Pending undoes
+    /// the queueing without touching the remote. If the live node runs a key that is not yet in
+    /// the pool, this also imports it into the pool with the queueing flag set; see the server
+    /// endpoint docs for the adoption semantics.
+    pub async fn subscription_queue_reissue(
+        &self,
+        remote: &str,
+        node: &str,
+        digest: Option<ConfigDigest>,
+    ) -> Result<(), Error> {
+        #[derive(Serialize)]
+        struct ReissueArgs<'a> {
+            remote: &'a str,
+            node: &'a str,
+            #[serde(skip_serializing_if = "Option::is_none")]
+            digest: Option<ConfigDigest>,
+        }
+        self.0
+            .post(
+                "/api2/extjs/subscriptions/queue-reissue",
+                &ReissueArgs {
+                    remote,
+                    node,
+                    digest,
+                },
+            )
+            .await?
+            .nodata()
+    }
+
     /// Clear every pending assignment in one bulk transaction; returns the count of cleared
     /// entries.
     pub async fn subscription_clear_pending(&self) -> Result<u32, Error> {
diff --git a/server/src/api/subscriptions/mod.rs b/server/src/api/subscriptions/mod.rs
index 26d9ecf..ce6be0b 100644
--- a/server/src/api/subscriptions/mod.rs
+++ b/server/src/api/subscriptions/mod.rs
@@ -49,6 +49,10 @@ const SUBDIRS: SubdirMap = &sorted!([
     ),
     ("keys", &KEYS_ROUTER),
     ("node-status", &Router::new().get(&API_METHOD_NODE_STATUS)),
+    (
+        "queue-reissue",
+        &Router::new().post(&API_METHOD_QUEUE_REISSUE)
+    ),
 ]);
 
 const KEYS_ROUTER: Router = Router::new()
@@ -404,6 +408,9 @@ async fn assign_key(
             let entry = config.get_mut(&key).unwrap();
             entry.remote = None;
             entry.node = None;
+            // pending_reissue without a binding is meaningless - reset so a later reassignment
+            // does not re-trigger a stale teardown.
+            entry.pending_reissue = false;
         }
         _ => {
             http_bail!(
@@ -480,6 +487,156 @@ async fn push_key_to_remote(remote: &Remote, key: &str, node_name: &str) -> Resu
     Ok(())
 }
 
+/// Tear down a node's subscription via the remote's `/nodes/{node}/subscription` endpoint.
+async fn delete_subscription_on_remote(
+    remote: &Remote,
+    product_type: ProductType,
+    node_name: &str,
+) -> Result<(), Error> {
+    let path = match product_type {
+        ProductType::Pve => format!("/api2/extjs/nodes/{node_name}/subscription"),
+        ProductType::Pbs => "/api2/extjs/nodes/localhost/subscription".to_string(),
+        ProductType::Pmg | ProductType::Pom => {
+            bail!("PDM cannot reissue '{product_type}' keys: no remote support yet");
+        }
+    };
+
+    let client = crate::connection::make_pbs_client_and_login(remote).await?;
+    client.0.delete(&path).await?;
+
+    info!("removed subscription from {}/{node_name}", remote.id);
+    Ok(())
+}
+
+#[api(
+    input: {
+        properties: {
+            remote: { schema: REMOTE_ID_SCHEMA },
+            // NODE_SCHEMA rejects path-traversal input before it ends up interpolated into
+            // the remote URL `/api2/extjs/nodes/{node}/subscription`.
+            node: { schema: NODE_SCHEMA },
+            digest: {
+                type: ConfigDigest,
+                optional: true,
+            },
+        },
+    },
+    access: {
+        permission: &Permission::Privilege(&["system"], PRIV_SYS_MODIFY, false),
+    },
+)]
+/// Queue a reissue on a remote node, that is, mark its subscription for removal so the key can
+/// be reassigned elsewhere.
+///
+/// Sets `pending_reissue` on the pool entry currently bound to (remote, node). Apply Pending
+/// later issues the DELETE on the remote and clears the binding on success; Clear Pending only
+/// resets the flag and leaves the binding intact so the operator can retry.
+///
+/// Adoption path: when the live node runs a key that is not yet in the pool, queueing a reissue
+/// adds that key to the pool with the flag set. If the operator later runs Clear Pending, the
+/// newly-imported entry stays in the pool, still bound to the node, effectively adopting a
+/// previously foreign subscription for future PDM-side management.
+///
+/// Per-remote `PRIV_RESOURCE_MODIFY` is enforced inside the handler so an operator with global
+/// system access alone cannot tear down subscriptions on remotes where they hold no other privileges.
+async fn queue_reissue(
+    remote: String,
+    node: String,
+    digest: Option<ConfigDigest>,
+    rpcenv: &mut dyn RpcEnvironment,
+) -> Result<(), Error> {
+    let auth_id: Authid = rpcenv
+        .get_auth_id()
+        .context("no authid available")?
+        .parse()?;
+    let user_info = CachedUserInfo::new()?;
+    user_info.check_privs(
+        &auth_id,
+        &["resource", &remote],
+        PRIV_RESOURCE_MODIFY,
+        false,
+    )?;
+
+    // Fetch live state before grabbing the config lock so the network call does not pin the lock
+    // for the duration of a remote query.
+    let (remotes_config, _) = pdm_config::remotes::config()?;
+    let remote_entry = remotes_config
+        .get(&remote)
+        .ok_or_else(|| http_err!(NOT_FOUND, "remote '{remote}' not found"))?;
+    let live = get_subscription_info_for_remote(remote_entry, FRESH_NODE_STATUS_MAX_AGE)
+        .await
+        .map_err(|err| {
+            http_err!(
+                BAD_REQUEST,
+                "could not read subscription on {remote}/{node}: {err}"
+            )
+        })?;
+    let live_current_key: Option<String> = live
+        .get(&node)
+        .and_then(|info| info.as_ref())
+        .and_then(|info| info.key.clone());
+
+    let _lock = pdm_config::subscriptions::lock_config()?;
+    let (mut config, config_digest) = pdm_config::subscriptions::config()?;
+    config_digest.detect_modification(digest.as_ref())?;
+
+    let bound_id = config
+        .iter()
+        .find(|(_, e)| {
+            e.remote.as_deref() == Some(remote.as_str()) && e.node.as_deref() == Some(node.as_str())
+        })
+        .map(|(id, _)| id.to_string());
+
+    if let Some(id) = bound_id {
+        let entry = config.get_mut(&id).unwrap();
+        if entry.pending_reissue {
+            http_bail!(BAD_REQUEST, "reissue already queued for {remote}/{node}");
+        }
+        entry.pending_reissue = true;
+    } else {
+        // No pool entry is bound to this (remote, node). Three sub-cases for the live key:
+        // - Already in the pool but unbound: rebind to target and flag. This covers the user
+        //   path where Clear Assignment dropped the binding while the live subscription stayed.
+        // - Already in the pool but bound elsewhere: refuse, the operator has to reconcile.
+        // - Not in the pool: adopt by inserting a fresh entry. If Clear Pending fires later,
+        //   the entry stays as a now-managed free pool key.
+        let current_key = live_current_key
+            .ok_or_else(|| http_err!(NOT_FOUND, "no subscription on {remote}/{node}"))?;
+
+        if let Some(existing) = config.get_mut(&current_key) {
+            if existing.remote.is_some() || existing.node.is_some() {
+                http_bail!(
+                    CONFLICT,
+                    "key '{current_key}' is in the pool but bound elsewhere; resolve manually first"
+                );
+            }
+            existing.remote = Some(remote.clone());
+            existing.node = Some(node.clone());
+            existing.pending_reissue = true;
+        } else {
+            SUBSCRIPTION_KEY_SCHEMA
+                .parse_simple_value(&current_key)
+                .map_err(|err| http_err!(BAD_REQUEST, "key '{current_key}' rejected: {err}"))?;
+            let product_type = ProductType::from_key(&current_key)
+                .ok_or_else(|| format_err!("unrecognised key prefix: {current_key}"))?;
+            let entry = SubscriptionKeyEntry {
+                key: current_key.clone(),
+                product_type,
+                level: SubscriptionLevel::from_key(Some(&current_key)),
+                source: SubscriptionKeySource::Manual,
+                remote: Some(remote.clone()),
+                node: Some(node.clone()),
+                pending_reissue: true,
+                ..Default::default()
+            };
+            config.insert(current_key, entry);
+        }
+    }
+
+    pdm_config::subscriptions::save_config(&config)?;
+    Ok(())
+}
+
 #[api(
     input: {
         properties: {
@@ -561,13 +718,14 @@ async fn collect_node_status(
                 ),
             };
 
-            let assigned_key = keys_config
-                .iter()
-                .find(|(_id, entry)| {
-                    entry.remote.as_deref() == Some(remote_name.as_str())
-                        && entry.node.as_deref() == Some(node_name.as_str())
-                })
-                .map(|(_id, entry)| entry.key.clone());
+            let pool_entry = keys_config.iter().find(|(_id, entry)| {
+                entry.remote.as_deref() == Some(remote_name.as_str())
+                    && entry.node.as_deref() == Some(node_name.as_str())
+            });
+            let (assigned_key, pending_reissue) = match pool_entry {
+                Some((_id, entry)) => (Some(entry.key.clone()), entry.pending_reissue),
+                None => (None, false),
+            };
 
             out.push(RemoteNodeStatus {
                 remote: remote_name.clone(),
@@ -578,6 +736,7 @@ async fn collect_node_status(
                 level,
                 assigned_key,
                 current_key,
+                pending_reissue,
             });
         }
     }
@@ -782,37 +941,77 @@ async fn run_apply_pending(auth_id: Authid) -> Result<(), Error> {
     for entry in pending {
         let Some(remote) = remotes_config.get(&entry.remote) else {
             bail!(
-                "remote '{}' vanished, aborting after {ok}/{total} successful pushes",
+                "remote '{}' vanished, aborting after {ok}/{total} successful operations",
                 entry.remote,
             );
         };
-        // Honour the case where the operator unassigned the key while the worker was queued.
+        // Honour the case where the operator unassigned or reset the entry while the worker was
+        // queued. `pool_assignment_still_valid` re-reads the binding without the lock, which is
+        // good enough since `save_config` writes are atomic and a stale skip is benign.
         if !pool_assignment_still_valid(&config, &entry) {
             info!(
-                "skipping {}/{}: pool assignment changed before worker ran",
+                "skipping {}/{}: pool entry changed before worker ran",
                 entry.remote, entry.node
             );
             continue;
         }
 
-        info!(
-            "pushing {} to {}/{}...",
-            entry.key, entry.remote, entry.node
-        );
-        if let Err(err) = push_key_to_remote(remote, &entry.key, &entry.node).await {
-            bail!(
-                "push of {} to {}/{} failed after {ok}/{total} successful pushes: {err}",
-                entry.key,
-                entry.remote,
-                entry.node,
-            );
+        match entry.op {
+            PendingOp::Push => {
+                info!(
+                    "pushing {} to {}/{}...",
+                    entry.key, entry.remote, entry.node
+                );
+                if let Err(err) = push_key_to_remote(remote, &entry.key, &entry.node).await {
+                    bail!(
+                        "push of {} to {}/{} failed after {ok}/{total} successful operations: {err}",
+                        entry.key,
+                        entry.remote,
+                        entry.node,
+                    );
+                }
+            }
+            PendingOp::Reissue => {
+                let product_type = match ProductType::from_key(&entry.key) {
+                    Some(ty) => ty,
+                    None => bail!("unrecognised key format: {}", entry.key),
+                };
+                info!(
+                    "reissuing: removing subscription from {}/{}...",
+                    entry.remote, entry.node
+                );
+                if let Err(err) =
+                    delete_subscription_on_remote(remote, product_type, &entry.node).await
+                {
+                    bail!(
+                        "reissue of {} on {}/{} failed after {ok}/{total} successful operations: {err}",
+                        entry.key,
+                        entry.remote,
+                        entry.node,
+                    );
+                }
+                // Clear the binding under the config lock. A subsequent compute_pending call must
+                // not propose another push or reissue for the same entry.
+                let _lock = pdm_config::subscriptions::lock_config()?;
+                let (mut updated, _) = pdm_config::subscriptions::config()?;
+                if let Some(stored) = updated.get_mut(&entry.key) {
+                    if stored.remote.as_deref() == Some(entry.remote.as_str())
+                        && stored.node.as_deref() == Some(entry.node.as_str())
+                    {
+                        stored.remote = None;
+                        stored.node = None;
+                        stored.pending_reissue = false;
+                    }
+                }
+                pdm_config::subscriptions::save_config(&updated)?;
+            }
         }
         info!("  success");
         invalidate_subscription_info_for_remote(&entry.remote);
         ok += 1;
     }
 
-    info!("finished: {ok}/{total} pushes succeeded");
+    info!("finished: {ok}/{total} operations succeeded");
     Ok(())
 }
 
@@ -854,8 +1053,21 @@ async fn clear_pending(rpcenv: &mut dyn RpcEnvironment) -> Result<ClearPendingRe
             if stored.remote.as_deref() == Some(entry.remote.as_str())
                 && stored.node.as_deref() == Some(entry.node.as_str())
             {
-                stored.remote = None;
-                stored.node = None;
+                match entry.op {
+                    PendingOp::Reissue => {
+                        // Only reset the flag - leave the binding so the operator can retry the
+                        // reissue without having to re-import a foreign key from scratch.
+                        stored.pending_reissue = false;
+                    }
+                    PendingOp::Push => {
+                        stored.remote = None;
+                        stored.node = None;
+                        // Defensive: an entry that flipped to pending_reissue between the
+                        // pre-lock snapshot and now would otherwise leave a meaningless flag
+                        // on a now-unbound entry. Reset alongside the binding clear.
+                        stored.pending_reissue = false;
+                    }
+                }
                 cleared += 1;
             }
         }
@@ -868,12 +1080,21 @@ async fn clear_pending(rpcenv: &mut dyn RpcEnvironment) -> Result<ClearPendingRe
     Ok(ClearPendingResult { cleared })
 }
 
-/// Plan entry for one pending push.
+/// Plan entry for one pending push or reissue.
 #[derive(Clone, Debug)]
 struct PendingEntry {
     key: String,
     remote: String,
     node: String,
+    op: PendingOp,
+}
+
+#[derive(Clone, Copy, Debug, PartialEq, Eq)]
+enum PendingOp {
+    /// PUT the assigned key to the remote because the live state does not match.
+    Push,
+    /// DELETE the subscription on the remote and clear the binding on success.
+    Reissue,
 }
 
 fn compute_pending(
@@ -893,6 +1114,15 @@ fn compute_pending(
                 return None;
             }
 
+            if entry.pending_reissue {
+                return Some(PendingEntry {
+                    key: entry.key.clone(),
+                    remote: remote.to_string(),
+                    node: node.to_string(),
+                    op: PendingOp::Reissue,
+                });
+            }
+
             // Treat anything other than "Active with the assigned key as the live current_key"
             // as pending, including unreachable remotes, so an operator can clear stuck
             // assignments without first having to bring the target back online.
@@ -911,6 +1141,7 @@ fn compute_pending(
                 key: entry.key.clone(),
                 remote: remote.to_string(),
                 node: node.to_string(),
+                op: PendingOp::Push,
             })
         })
         .collect())
@@ -960,6 +1191,7 @@ async fn collect_status_uncached(
                 level,
                 assigned_key: None,
                 current_key,
+                pending_reissue: false,
             });
         }
     }
diff --git a/ui/src/configuration/subscription_keys.rs b/ui/src/configuration/subscription_keys.rs
index c535e94..e2cb8ed 100644
--- a/ui/src/configuration/subscription_keys.rs
+++ b/ui/src/configuration/subscription_keys.rs
@@ -280,19 +280,28 @@ impl LoadableComponent for SubscriptionKeyGridComp {
                 .selected_entry()
                 .map(|entry| self.create_assign_dialog(&entry, ctx)),
             ViewState::Remove => self.selection.selected_key().map(|key| {
-                ConfirmDialog::new(
-                    tr!("Remove Key"),
-                    tr!(
+                let assignment = self.selected_entry().and_then(|e| {
+                    Some((e.remote.clone()?, e.node.clone()?))
+                });
+                let body = match assignment {
+                    Some((remote, node)) => tr!(
+                        "Remove {key} from the key pool? It is still assigned to {remote}/{node}; the assignment is released without removing the subscription on the remote. Use Reissue Key on the Node Subscription Status panel first if you want to free the live subscription too.",
+                        key = key.to_string(),
+                        remote = remote,
+                        node = node,
+                    ),
+                    None => tr!(
                         "Remove {key} from the key pool? This does not revoke the subscription.",
                         key = key.to_string(),
                     ),
-                )
-                .on_confirm({
-                    let link = ctx.link().clone();
-                    let key = key.clone();
-                    move |_| link.send_message(Msg::Remove(key.clone()))
-                })
-                .into()
+                };
+                ConfirmDialog::new(tr!("Remove Key"), body)
+                    .on_confirm({
+                        let link = ctx.link().clone();
+                        let key = key.clone();
+                        move |_| link.send_message(Msg::Remove(key.clone()))
+                    })
+                    .into()
             }),
         }
     }
diff --git a/ui/src/configuration/subscription_registry.rs b/ui/src/configuration/subscription_registry.rs
index 7ed96e6..82d6058 100644
--- a/ui/src/configuration/subscription_registry.rs
+++ b/ui/src/configuration/subscription_registry.rs
@@ -7,7 +7,7 @@ use anyhow::Error;
 use yew::virtual_dom::{Key, VComp, VNode};
 
 use proxmox_yew_comp::percent_encoding::percent_encode_component;
-use proxmox_yew_comp::{http_delete, http_get, http_post, http_put};
+use proxmox_yew_comp::{http_get, http_post, http_put};
 use proxmox_yew_comp::{
     LoadableComponent, LoadableComponentContext, LoadableComponentMaster,
     LoadableComponentScopeExt, LoadableComponentState,
@@ -28,6 +28,7 @@ const NODE_STATUS_URL: &str = "/subscriptions/node-status";
 const AUTO_ASSIGN_URL: &str = "/subscriptions/auto-assign";
 const APPLY_PENDING_URL: &str = "/subscriptions/apply-pending";
 const CLEAR_PENDING_URL: &str = "/subscriptions/clear-pending";
+const QUEUE_REISSUE_URL: &str = "/subscriptions/queue-reissue";
 
 /// Map a [`SubscriptionStatus`] to the icon shown in subscription panels.
 ///
@@ -165,15 +166,21 @@ pub enum Msg {
     ClearPending,
     /// Clear the pool's pin on the currently-selected node by un-assigning its key.
     ClearSelectedNode,
-    /// Remove the pool entry currently pinned to the selected node.
-    RemoveSelectedNodeKey,
+    /// Open the confirmation dialog for queueing a reissue on the selected node.
+    QueueReissueForSelectedNode,
 }
 
 #[derive(PartialEq)]
 pub enum ViewState {
     ConfirmAutoAssign(Vec<ProposedAssignment>),
     ConfirmClearPending,
-    ConfirmRemoveSelectedNodeKey(String),
+    /// Pending confirmation to queue a reissue for `(remote, node)`. The current key on the
+    /// node is shown in the dialog body when available.
+    ConfirmQueueReissue {
+        remote: String,
+        node: String,
+        current_key: Option<String>,
+    },
 }
 
 #[doc(hidden)]
@@ -332,8 +339,24 @@ fn key_cell(n: &RemoteNodeStatus) -> Html {
     let assigned = n.assigned_key.as_deref();
     let current = n.current_key.as_deref();
 
-    // Pending = pool key assigned but the node doesn't have an active subscription yet (the key
-    // still needs to be pushed).
+    if n.pending_reissue {
+        // Reissue queued: surface the live key the operator is about to free, with a recycle
+        // icon in the warning colour so the row stands out next to ordinary pending pushes.
+        let text = current.or(assigned).unwrap_or("");
+        return Tooltip::new(
+            Row::new()
+                .class(AlignItems::Baseline)
+                .gap(2)
+                .with_child(Fa::new("recycle").class(FontColor::Warning))
+                .with_child(text),
+        )
+        .tip(tr!(
+            "Pending Reissue - Apply Pending will remove this subscription from the node."
+        ))
+        .into();
+    }
+
+    // Pending push = pool key assigned but the node doesn't have an active subscription yet.
     let pending =
         assigned.is_some() && n.status != proxmox_subscription::SubscriptionStatus::Active;
 
@@ -449,7 +472,7 @@ impl LoadableComponent for SubscriptionRegistryComp {
                 });
             }
             Msg::ClearSelectedNode => {
-                let Some(key) = self.selected_assigned_key() else {
+                let Some(key) = self.clear_assignment_target_key() else {
                     return false;
                 };
                 let link = ctx.link().clone();
@@ -461,12 +484,16 @@ impl LoadableComponent for SubscriptionRegistryComp {
                     link.send_reload();
                 });
             }
-            Msg::RemoveSelectedNodeKey => {
-                let Some(key) = self.selected_assigned_key() else {
+            Msg::QueueReissueForSelectedNode => {
+                let Some((remote, node, current_key)) = self.selected_node_for_reissue() else {
                     return false;
                 };
                 ctx.link()
-                    .change_view(Some(ViewState::ConfirmRemoveSelectedNodeKey(key)));
+                    .change_view(Some(ViewState::ConfirmQueueReissue {
+                        remote,
+                        node,
+                        current_key,
+                    }));
             }
         }
         true
@@ -551,34 +578,56 @@ impl LoadableComponent for SubscriptionRegistryComp {
             ViewState::ConfirmAutoAssign(proposals) => {
                 Some(self.render_auto_assign_dialog(ctx, proposals))
             }
-            ViewState::ConfirmRemoveSelectedNodeKey(key) => {
+            ViewState::ConfirmQueueReissue {
+                remote,
+                node,
+                current_key,
+            } => {
                 use pwt::widget::ConfirmDialog;
+                let question = match current_key {
+                    Some(k) => tr!(
+                        "Queue a reissue of {key} on {remote}/{node}?",
+                        key = k.clone(),
+                        remote = remote.clone(),
+                        node = node.clone(),
+                    ),
+                    None => tr!(
+                        "Queue a reissue on {remote}/{node}?",
+                        remote = remote.clone(),
+                        node = node.clone(),
+                    ),
+                };
+                let body = Column::new()
+                    .gap(2)
+                    .with_child(Container::from_tag("p").with_child(question))
+                    .with_child(Container::from_tag("p").with_child(tr!(
+                        "Apply Pending will remove the subscription from the node so the key can be reassigned elsewhere; Clear Pending undoes the queueing without touching the remote."
+                    )));
+                let remote_for_cb = remote.clone();
+                let node_for_cb = node.clone();
                 let link = ctx.link().clone();
-                let key_for_callback = key.clone();
                 Some(
-                    ConfirmDialog::new(
-                        tr!("Remove Key"),
-                        tr!(
-                            "Remove {key} from the key pool? This does not revoke the subscription on the remote node.",
-                            key = key.clone(),
-                        ),
-                    )
-                    .on_confirm(move |_| {
-                        let link = link.clone();
-                        let key = key_for_callback.clone();
-                        link.clone().spawn(async move {
-                            let url = format!(
-                                "/subscriptions/keys/{}",
-                                percent_encode_component(&key),
-                            );
-                            if let Err(err) = http_delete(&url, None).await {
-                                link.show_error(tr!("Remove Key"), err.to_string(), true);
-                            }
-                            link.change_view(None);
-                            link.send_reload();
-                        });
-                    })
-                    .into(),
+                    ConfirmDialog::default()
+                        .title(tr!("Reissue Key"))
+                        .confirm_message(body)
+                        .on_confirm(move |_| {
+                            let link = link.clone();
+                            let remote = remote_for_cb.clone();
+                            let node = node_for_cb.clone();
+                            link.clone().spawn(async move {
+                                let body = serde_json::json!({
+                                    "remote": remote,
+                                    "node": node,
+                                });
+                                if let Err(err) = http_post::<()>(QUEUE_REISSUE_URL, Some(body)).await
+                                {
+                                    link.show_error(tr!("Reissue Key"), err.to_string(), true);
+                                }
+                                link.change_view(None);
+                                link.send_reload();
+                            });
+                        })
+                        .into(),
                 )
             }
         }
@@ -614,32 +663,30 @@ impl SubscriptionRegistryComp {
             .show_header(true)
             .class(FlexFit);
 
-        let has_assigned = self.selected_assigned_key().is_some();
+        let can_clear = self.clear_assignment_target_key().is_some();
+        let can_reissue = self.selected_node_for_reissue().is_some();
         let clear_button = Button::new(tr!("Clear Assignment"))
             .icon_class("fa fa-unlink")
-            .disabled(!has_assigned)
+            .disabled(!can_clear)
             .on_activate(ctx.link().callback(|_| Msg::ClearSelectedNode));
-        let remove_button = Button::new(tr!("Remove"))
-            .icon_class("fa fa-trash-o")
-            .disabled(!has_assigned)
-            .on_activate(ctx.link().callback(|_| Msg::RemoveSelectedNodeKey));
+        let reissue_button = Button::new(tr!("Reissue Key"))
+            .icon_class("fa fa-recycle")
+            .disabled(!can_reissue)
+            .on_activate(ctx.link().callback(|_| Msg::QueueReissueForSelectedNode));
 
         Panel::new()
             .class(FlexFit)
             .border(true)
             .style("flex", "4 1 0")
             .min_width(400)
-            .title(tr!("Node Status"))
+            .title(tr!("Node Subscription Status"))
             .with_tool(clear_button)
-            .with_tool(remove_button)
+            .with_tool(reissue_button)
             .with_child(table)
     }
 
-    /// Pool key currently assigned to whatever node the operator selected in the tree.
-    ///
-    /// Returns None when no node row is selected, the selected entry is a remote-level
-    /// aggregate, or the node has no pool assignment.
-    fn selected_assigned_key(&self) -> Option<String> {
+    /// Resolve the selected tree row to its `RemoteNodeStatus`, if any.
+    fn selected_node_status(&self) -> Option<&RemoteNodeStatus> {
         let key = self.node_selection.selected_key()?;
         let raw = key.to_string();
         let mut parts = raw.trim_start_matches('/').splitn(2, '/');
@@ -648,7 +695,38 @@ impl SubscriptionRegistryComp {
         self.last_node_data
             .iter()
             .find(|n| n.remote == remote && n.node == node)
-            .and_then(|n| n.assigned_key.clone())
+    }
+
+    /// Returns the assigned key when Clear Assignment is appropriate: there is a binding AND it
+    /// has not yet been pushed (different from current_key, or the node is not Active). For an
+    /// already-synced assignment, clearing would orphan the live subscription on the remote, so
+    /// the operator must use Reissue Key instead. A queued reissue is also off-limits here so
+    /// "Clear Assignment" stays a binding-clear verb; cancelling a reissue goes through Clear
+    /// Pending, which only resets the flag and keeps the binding for retry.
+    fn clear_assignment_target_key(&self) -> Option<String> {
+        let n = self.selected_node_status()?;
+        if n.pending_reissue {
+            return None;
+        }
+        let assigned = n.assigned_key.as_ref()?;
+        let synced = n.status == proxmox_subscription::SubscriptionStatus::Active
+            && n.current_key.as_deref() == Some(assigned.as_str());
+        if synced {
+            return None;
+        }
+        Some(assigned.clone())
+    }
+
+    /// Returns `(remote, node, current_key)` when the selected node has a live subscription that
+    /// can be queued for reissue: there is a current key on the node and no reissue is already
+    /// queued for it. Disabling the button below this gate keeps the action idempotent without a
+    /// server round-trip.
+    fn selected_node_for_reissue(&self) -> Option<(String, String, Option<String>)> {
+        let n = self.selected_node_status()?;
+        if n.pending_reissue || n.current_key.is_none() {
+            return None;
+        }
+        Some((n.remote.clone(), n.node.clone(), n.current_key.clone()))
     }
 
     fn render_auto_assign_dialog(
-- 
2.47.3





Thread overview: 13+ messages
2026-05-07  7:17 [PATCH 0/8] subscription: add central key pool registry with reissue support Thomas Lamprecht
2026-05-07  7:17 ` [PATCH 1/8] api: subscription cache: ensure max_age=0 forces a fresh fetch Thomas Lamprecht
2026-05-07  7:17 ` [PATCH 2/8] api types: subscription level: render full names Thomas Lamprecht
2026-05-07  7:17 ` [PATCH 3/8] subscription: add key pool data model and config layer Thomas Lamprecht
2026-05-07  7:17 ` [PATCH 4/8] subscription: add key pool and node status API endpoints Thomas Lamprecht
2026-05-07  7:17 ` [PATCH 5/8] ui: add subscription registry with key pool and node status Thomas Lamprecht
2026-05-07  8:15   ` Lukas Wagner
2026-05-07  8:33     ` Thomas Lamprecht
2026-05-07  7:17 ` [PATCH 6/8] cli: add subscription key pool management subcommands Thomas Lamprecht
2026-05-07  7:17 ` [PATCH 7/8] docs: add subscription registry chapter Thomas Lamprecht
2026-05-07  7:17 ` Thomas Lamprecht [this message]
2026-05-07  7:50   ` [PATCH 8/8] subscription: add Reissue Key action with pending-reissue queue Lukas Wagner
2026-05-07  8:38 ` superseded: [PATCH 0/8] subscription: add central key pool registry with reissue support Thomas Lamprecht
