From: "Lukas Wagner" <l.wagner@proxmox.com>
To: "Thomas Lamprecht" <t.lamprecht@proxmox.com>,
	<pdm-devel@lists.proxmox.com>
Subject: Re: [PATCH datacenter-manager v2 8/8] subscription: add Reissue Key action with pending-reissue queue
Date: Tue, 12 May 2026 15:57:54 +0200	[thread overview]
Message-ID: <DIGR1TTOEQ2G.LCRU97PDD5P3@proxmox.com> (raw)
In-Reply-To: <20260507082943.2749725-9-t.lamprecht@proxmox.com>

On Thu May 7, 2026 at 10:26 AM CEST, Thomas Lamprecht wrote:
> Wire a new "Reissue Key" action on the Node Subscription Status panel
> (previously titled Node Status) that queues the live subscription on a
> remote node for removal at next Apply Pending. This mirrors how the
> shop describes keys that may be re-bound to a different server ID, so
> the panel uses the same vocabulary end to end.
>
> The pool entry bound to (remote, node) gets a pending-reissue flag.
> Apply Pending issues a DELETE call on the remote and clears the
> binding on success; the entry stays in the pool as a free key. Clear
> Pending only resets the flag and leaves the binding intact, so an
> operator can retry the queueing without losing context.
>
> Foreign-key adoption: when the live node runs a subscription the pool
> has never seen, queueing a reissue imports that key into the pool with
> the flag set. If the operator then runs Clear Pending, the imported
> entry stays in the pool as a free key. This shortcut lets a legacy
> node be brought under PDM management by queueing a reissue and then
> immediately cancelling it, without re-typing the key.

This feels a bit off to me. Maybe we should have an explicit 'Adopt
Key' or 'Import Key' action instead, and actively disallow the 'Reissue
Key' action if the key is not in the key pool?
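
Something along these lines in the queue_reissue handler, perhaps
(rough sketch only, reusing the names from the handler further down;
the 'Adopt Key' action would then be its own endpoint):

```rust
// Sketch: bail out instead of silently adopting a foreign key; the
// operator would run a dedicated (hypothetical) 'Adopt Key' action first.
if config.get(&current_key).is_none() {
    http_bail!(
        BAD_REQUEST,
        "subscription key on {remote}/{node} is not in the key pool; \
         adopt/import it first before queueing a reissue"
    );
}
```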

It would also be nice to have a visual distinction between 'foreign
key' and 'key already in key pool' in the Node Subscription Status
panel, now that I think about it.

>
> The left-side Key Pool grid renames its action button to "Remove Key"
> for clarity against the new Reissue Key action, and the confirmation
> dialog now warns when the entry is still bound to a remote node so an
> operator does not lose track of the live subscription by mistake.
>
> Note as of now this might be better called "Clear Key", it was split
> out from some work where we can actually actively reissue here.
>
> Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>

[...]

> +------------------------
>  
> -The Node Status panel shows the live subscription state of every node behind a configured remote
> -alongside any pending plan from the pool. Nodes that already hold a key the registry assigned appear
> -with the live level; nodes with a pending pool assignment show a clock icon until the change is
> -pushed to the remote.
> +The Node Subscription Status panel shows the live subscription state of every node behind a
> +configured remote alongside any pending plan from the pool. Nodes that already hold a key the
> +registry assigned appear with the live level; nodes with a pending pool assignment show a clock
> +icon until the change is pushed to the remote.
>  
> -From this view an operator can clear a pending assignment or remove the key from the pool entirely,
> -which is convenient when a node is known to be wrong without first having to find the matching entry
> -on the key list.
> +From this view an operator can clear a pending assignment or queue a reissue on the selected
> +node. Reissuing means freeing the live subscription key from this node so it can be reassigned
> +elsewhere; this is the same notion the shop uses for keys that may be re-bound to a new server
> +ID. The action is queued, not immediate, so it can be confirmed or cancelled together with
> +other pending changes via Apply Pending and Clear Pending.

The 'not immediate' reads a bit weird, I'd maybe phrase it as

"The 'Reissue Key' action is queued until it is either confirmed via
'Apply Pending' or dismissed via 'Clear Pending'."

>  
> -Assignment
> -----------
> +Assignment and Reissue
> +----------------------
>  
>  A key can be pinned to a single node manually.
>  
>  The Auto-Assign action proposes a plan that fills unsubscribed nodes from free pool keys. For
> -Proxmox VE, the smallest covering key by socket count is chosen, so a 4-socket key is not used on a
> -2-socket host while a larger host stays unsubscribed.
> +Proxmox VE, the smallest covering key by socket count is chosen, so a 4-socket key is not used
> +on a 2-socket host while a larger host stays unsubscribed.
>  
> -The proposed plan can be inspected before it is applied. Apply Pending pushes the queued keys to
> -their target nodes; if a push fails the remaining queue is kept intact for retry. Clear Pending
> +The Reissue Key action queues the live subscription on the selected node for removal. Apply
> +Pending later issues the removal on the remote and releases the pool binding so the key becomes
> +available for reassignment. Clear Pending drops the queued reissue without touching the remote;
> +the binding stays intact and the operator can retry.
> +
> +When the live node runs a subscription that is not yet tracked by the pool, queueing a reissue
> +also imports that key into the pool. If the operator then runs Clear Pending, the imported
> +entry stays as a free pool key, effectively adopting the previously foreign subscription for
> +future PDM-side management. This shortcut lets a legacy node be brought under the registry by
> +queueing a reissue and then immediately cancelling it, without having to re-enter the key by
> +hand.
> +
> +The proposed plan can be inspected before it is applied. Apply Pending walks the queue in
> +order; if any push or reissue fails the remaining queue is kept intact for retry. Clear Pending
>  drops the plan without touching any remote.
>  
>  Permissions
> diff --git a/lib/pdm-api-types/src/subscription.rs b/lib/pdm-api-types/src/subscription.rs
> index ead3c1b..9927d38 100644
> --- a/lib/pdm-api-types/src/subscription.rs
> +++ b/lib/pdm-api-types/src/subscription.rs
> @@ -317,6 +317,7 @@ pub enum SubscriptionKeySource {
>          "level": { optional: true },
>          "status": { optional: true },
>          "source": { optional: true },
> +        "pending-reissue": { optional: true },
>      },
>  )]
>  #[derive(Clone, Debug, Default, Deserialize, Serialize, PartialEq)]
> @@ -346,6 +347,16 @@ pub struct SubscriptionKeyEntry {
>      #[serde(skip_serializing_if = "Option::is_none")]
>      pub node: Option<String>,
>  
> +    /// True when the operator queued a reissue for this entry's bound node, that is, a request
> +    /// to free the key from `remote`/`node` so it can be reassigned to a different node.
> +    ///
> +    /// Apply Pending will issue a DELETE on the remote and then clear `remote`/`node` on
> +    /// success. Clear Pending only resets this flag and leaves the binding untouched, so a
> +    /// foreign current_key that was newly imported to queue a reissue stays adopted in the
> +    /// pool. A bare flag is enough since the (remote, node) binding lives next to it.
> +    #[serde(default, rename = "pending-reissue")]
> +    pub pending_reissue: bool,
> +

We already have rename_all = "kebab-case" on the entire struct, so I
think the explicit rename here is not needed?

Also, thinking about the representation in the config file and API,
wouldn't an Option<bool> be nicer here? Otherwise pretty much every key
will always carry a 'pending-reissue false' in its section...
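
Roughly (untested sketch):

```rust
/// `Some(true)` while a reissue is queued for this entry's bound node;
/// omitted from the config section and API responses otherwise.
#[serde(skip_serializing_if = "Option::is_none")]
pub pending_reissue: Option<bool>,
```

Readers would then check it with `.unwrap_or(false)` or
`matches!(entry.pending_reissue, Some(true))`.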

>      /// Server ID this key is bound to (from signed info, if available).
>      #[serde(skip_serializing_if = "Option::is_none")]
>      pub serverid: Option<String>,
> @@ -527,6 +538,9 @@ pub struct RemoteNodeStatus {
>      /// Current key on the node (from remote query).
>      #[serde(skip_serializing_if = "Option::is_none")]
>      pub current_key: Option<String>,
> +    /// True when the pool has a reissue queued for this node.
> +    #[serde(default, rename = "pending-reissue")]
> +    pub pending_reissue: bool,
>  }
>  

[...]


> +#[api(
> +    input: {
> +        properties: {
> +            remote: { schema: REMOTE_ID_SCHEMA },
> +            // NODE_SCHEMA rejects path-traversal input before it ends up interpolated into
> +            // the remote URL `/api2/extjs/nodes/{node}/subscription`.
> +            node: { schema: NODE_SCHEMA },
> +            digest: {
> +                type: ConfigDigest,
> +                optional: true,
> +            },
> +        },
> +    },
> +    access: {
> +        permission: &Permission::Privilege(&["system"], PRIV_SYS_MODIFY, false),
> +    },
> +)]
> +/// Queue a reissue on a remote node, that is, mark its subscription for removal so the key can
> +/// be reassigned elsewhere.
> +///
> +/// Sets `pending_reissue` on the pool entry currently bound to (remote, node). Apply Pending
> +/// later issues the DELETE on the remote and clears the binding on success; Clear Pending only
> +/// resets the flag and leaves the binding intact so the operator can retry.
> +///
> +/// Adoption path: when the live node runs a key that is not yet in the pool, queueing a reissue
> +/// adds that key to the pool with the flag set. If the operator later runs Clear Pending, the
> +/// newly-imported entry stays in the pool as a free key, effectively adopting a previously
> +/// foreign subscription for future PDM-side management.
> +///
> +/// Per-remote `PRIV_RESOURCE_MODIFY` is enforced inside the handler so an operator with global
> +/// system access alone cannot tear down subscriptions on remotes they have no other authority on.
> +async fn queue_reissue(
> +    remote: String,
> +    node: String,
> +    digest: Option<ConfigDigest>,
> +    rpcenv: &mut dyn RpcEnvironment,
> +) -> Result<(), Error> {
> +    let auth_id: Authid = rpcenv
> +        .get_auth_id()
> +        .context("no authid available")?
> +        .parse()?;
> +    let user_info = CachedUserInfo::new()?;
> +    user_info.check_privs(
> +        &auth_id,
> +        &["resource", &remote],
> +        PRIV_RESOURCE_MODIFY,
> +        false,
> +    )?;
> +
> +    // Fetch live state before grabbing the config lock so the network call does not pin the lock
> +    // for the duration of a remote query.
> +    let (remotes_config, _) = pdm_config::remotes::config()?;
> +    let remote_entry = remotes_config
> +        .get(&remote)
> +        .ok_or_else(|| http_err!(NOT_FOUND, "remote '{remote}' not found"))?;
> +    let live = get_subscription_info_for_remote(remote_entry, FRESH_NODE_STATUS_MAX_AGE)
> +        .await
> +        .map_err(|err| {
> +            http_err!(
> +                BAD_REQUEST,
> +                "could not read subscription on {remote}/{node}: {err}"
> +            )
> +        })?;
> +    let live_current_key: Option<String> = live
> +        .get(&node)
> +        .and_then(|info| info.as_ref())
> +        .and_then(|info| info.key.clone());
> +
> +    let _lock = pdm_config::subscriptions::lock_config()?;

This potentially blocks the async runtime; the locking and the config
update under the lock must happen inside a spawn_blocking.
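
Untested sketch of what I mean; the whole lock-and-update sequence
would move into the closure:

```rust
// Sketch: run the blocking file lock and config rewrite on the blocking
// thread pool so no executor thread is held up for the duration.
tokio::task::spawn_blocking(move || {
    let _lock = pdm_config::subscriptions::lock_config()?;
    let (mut config, config_digest) = pdm_config::subscriptions::config()?;
    config_digest.detect_modification(digest.as_ref())?;
    // ... same entry lookup / mutation as below ...
    pdm_config::subscriptions::save_config(&config)
})
.await??; // first `?` for the join error, second for the inner Result
```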

> +    let (mut config, config_digest) = pdm_config::subscriptions::config()?;
> +    config_digest.detect_modification(digest.as_ref())?;
> +
> +    let bound_id = config
> +        .iter()
> +        .find(|(_, e)| {
> +            e.remote.as_deref() == Some(remote.as_str()) && e.node.as_deref() == Some(node.as_str())
> +        })
> +        .map(|(id, _)| id.to_string());
> +
> +    if let Some(id) = bound_id {
> +        let entry = config.get_mut(&id).unwrap();
> +        if entry.pending_reissue {
> +            http_bail!(BAD_REQUEST, "reissue already queued for {remote}/{node}");
> +        }
> +        entry.pending_reissue = true;
> +    } else {
> +        // No pool entry is bound to this (remote, node). Three sub-cases for the live key:
> +        // - Already in the pool but unbound: rebind to target and flag. This covers the user
> +        //   path where Clear Assignment dropped the binding while the live subscription stayed.
> +        // - Already in the pool but bound elsewhere: refuse, the operator has to reconcile.
> +        // - Not in the pool: adopt by inserting a fresh entry. If Clear Pending fires later,
> +        //   the entry stays as a now-managed free pool key.
> +        let current_key = live_current_key
> +            .ok_or_else(|| http_err!(NOT_FOUND, "no subscription on {remote}/{node}"))?;
> +
> +        if let Some(existing) = config.get_mut(&current_key) {
> +            if existing.remote.is_some() || existing.node.is_some() {
> +                http_bail!(
> +                    CONFLICT,
> +                    "key '{current_key}' is in the pool but bound elsewhere; resolve manually first"
> +                );
> +            }
> +            existing.remote = Some(remote.clone());
> +            existing.node = Some(node.clone());
> +            existing.pending_reissue = true;
> +        } else {
> +            SUBSCRIPTION_KEY_SCHEMA
> +                .parse_simple_value(&current_key)
> +                .map_err(|err| http_err!(BAD_REQUEST, "key '{current_key}' rejected: {err}"))?;
> +            let product_type = ProductType::from_key(&current_key)
> +                .ok_or_else(|| format_err!("unrecognised key prefix: {current_key}"))?;
> +            let entry = SubscriptionKeyEntry {
> +                key: current_key.clone(),
> +                product_type,
> +                level: SubscriptionLevel::from_key(Some(&current_key)),
> +                source: SubscriptionKeySource::Manual,
> +                remote: Some(remote.clone()),
> +                node: Some(node.clone()),
> +                pending_reissue: true,
> +                ..Default::default()
> +            };
> +            config.insert(current_key, entry);
> +        }
> +    }
> +
> +    pdm_config::subscriptions::save_config(&config)?;
> +    Ok(())
> +}
> +

[...]


>  
>  #[doc(hidden)]
> @@ -332,8 +339,24 @@ fn key_cell(n: &RemoteNodeStatus) -> Html {
>      let assigned = n.assigned_key.as_deref();
>      let current = n.current_key.as_deref();
>  
> -    // Pending = pool key assigned but the node doesn't have an active subscription yet (the key
> -    // still needs to be pushed).
> +    if n.pending_reissue {
> +        // Reissue queued: surface the live key the operator is about to free, with a recycle
> +        // icon in the warning colour so the row stands out next to ordinary pending pushes.
> +        let text = current.or(assigned).unwrap_or("");
> +        return Tooltip::new(
> +            Row::new()
> +                .class(AlignItems::Baseline)
> +                .gap(2)
> +                .with_child(Fa::new("recycle").class(FontColor::Warning))
> +                .with_child(text),
> +        )
> +        .tip(tr!(
> +            "Pending Reissue - Apply Pending will remove this subscription from the node."

nit: I'd put Apply Pending in single quotes ('Apply Pending' will remove ...)

> +        ))
> +        .into();
> +    }
> +
> +    // Pending push = pool key assigned but the node doesn't have an active subscription yet.
>      let pending =
>          assigned.is_some() && n.status != proxmox_subscription::SubscriptionStatus::Active;
>  
> @@ -449,7 +472,7 @@ impl LoadableComponent for SubscriptionRegistryComp {
>                  });
>              }
>              Msg::ClearSelectedNode => {
> -                let Some(key) = self.selected_assigned_key() else {
> +                let Some(key) = self.clear_assignment_target_key() else {
>                      return false;
>                  };
>                  let link = ctx.link().clone();
> @@ -461,12 +484,16 @@ impl LoadableComponent for SubscriptionRegistryComp {
>                      link.send_reload();
>                  });
>              }
> -            Msg::RemoveSelectedNodeKey => {
> -                let Some(key) = self.selected_assigned_key() else {
> +            Msg::QueueReissueForSelectedNode => {
> +                let Some((remote, node, current_key)) = self.selected_node_for_reissue() else {
>                      return false;
>                  };
>                  ctx.link()
> -                    .change_view(Some(ViewState::ConfirmRemoveSelectedNodeKey(key)));
> +                    .change_view(Some(ViewState::ConfirmQueueReissue {
> +                        remote,
> +                        node,
> +                        current_key,
> +                    }));
>              }
>          }
>          true
> @@ -551,34 +578,56 @@ impl LoadableComponent for SubscriptionRegistryComp {
>              ViewState::ConfirmAutoAssign(proposals) => {
>                  Some(self.render_auto_assign_dialog(ctx, proposals))
>              }
> -            ViewState::ConfirmRemoveSelectedNodeKey(key) => {
> +            ViewState::ConfirmQueueReissue {
> +                remote,
> +                node,
> +                current_key,
> +            } => {
>                  use pwt::widget::ConfirmDialog;
> +                let question = match current_key {
> +                    Some(k) => tr!(
> +                        "Queue a reissue of {key} on {remote}/{node}?",
> +                        key = k.clone(),
> +                        remote = remote.clone(),
> +                        node = node.clone(),
> +                    ),
> +                    None => tr!(
> +                        "Queue a reissue on {remote}/{node}?",
> +                        remote = remote.clone(),
> +                        node = node.clone(),
> +                    ),
> +                };
> +                let body = Column::new()
> +                    .gap(2)
> +                    .with_child(Container::from_tag("p").with_child(question))
> +                    .with_child(Container::from_tag("p").with_child(tr!(
> +                        "Apply Pending will remove the subscription from the node so the key can be reassigned elsewhere; Clear Pending undoes the queueing without touching the remote."

nit: I'd put Apply Pending in single quotes ('Apply Pending' will remove ...)
> +                    )));
> +                let remote_for_cb = remote.clone();
> +                let node_for_cb = node.clone();
>                  let link = ctx.link().clone();
> -                let key_for_callback = key.clone();
>                  Some(
> -                    ConfirmDialog::new(
> -                        tr!("Remove Key"),
> -                        tr!(
> -                            "Remove {key} from the key pool? This does not revoke the subscription on the remote node.",
> -                            key = key.clone(),
> -                        ),
> -                    )
> -                    .on_confirm(move |_| {
> -                        let link = link.clone();
> -                        let key = key_for_callback.clone();
> -                        link.clone().spawn(async move {
> -                            let url = format!(
> -                                "/subscriptions/keys/{}",
> -                                percent_encode_component(&key),
> -                            );
> -                            if let Err(err) = http_delete(&url, None).await {
> -                                link.show_error(tr!("Remove Key"), err.to_string(), true);
> -                            }
> -                            link.change_view(None);
> -                            link.send_reload();
> -                        });
> -                    })
> -                    .into(),
> +                    ConfirmDialog::default()
> +                        .title(tr!("Reissue Key"))
> +                        .confirm_message(body)
> +                        .on_confirm(move |_| {
> +                            let link = link.clone();
> +                            let remote = remote_for_cb.clone();
> +                            let node = node_for_cb.clone();
> +                            link.clone().spawn(async move {
> +                                let body = serde_json::json!({
> +                                    "remote": remote,
> +                                    "node": node,
> +                                });
> +                                if let Err(err) = http_post::<()>(QUEUE_REISSUE_URL, Some(body)).await
> +                                {
> +                                    link.show_error(tr!("Reissue Key"), err.to_string(), true);
> +                                }

I'd use crate::pdm_client() here, since you added API bindings to lib/pdm_client in this patch anyway?
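
E.g. something like this (assuming the new binding ended up being
called `queue_reissue` - adjust to whatever name you picked):

```rust
// Hypothetical call through the shared client instead of a raw http_post:
if let Err(err) = crate::pdm_client().queue_reissue(&remote, &node).await {
    link.show_error(tr!("Reissue Key"), err.to_string(), true);
}
```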

> +                                link.change_view(None);
> +                                link.send_reload();
> +                            });
> +                        })
> +                        .into(),
>                  )
>              }
>          }





