* [pdm-devel] [PATCH proxmox-api-types/datacenter-manager] remote migration: make target endpoint selectable
@ 2025-01-13 15:45 Dominik Csapak
2025-01-13 15:45 ` [pdm-devel] [PATCH proxmox-api-types 1/2] add more network interface methods Dominik Csapak
` (11 more replies)
0 siblings, 12 replies; 13+ messages in thread
From: Dominik Csapak @ 2025-01-13 15:45 UTC (permalink / raw)
To: pdm-devel
Since we cannot currently select a target node for remote migration
(to avoid another network transfer of disks), the target of the
remote migration is always the given endpoint.
Currently, PDM always auto-selects the first configured endpoint;
this series makes that choice user-configurable.
proxmox-api-types:
Dominik Csapak (2):
add more network interface methods
add cluster status api call
pve-api-types/generate.pl | 7 ++
pve-api-types/src/generated/code.rs | 12 ++-
pve-api-types/src/generated/types.rs | 120 +++++++++++++++++++++++++++
3 files changed, 138 insertions(+), 1 deletion(-)
proxmox-datacenter-manager:
Dominik Csapak (9):
server: factor qemu/lxc code into own modules
server: api: fix remote upid tracking for qemu remote migration
server: connection: add new function that allows for explicit endpoint
server: api: add target-endpoint parameter to remote migrate api calls
server: api: pve: add remote cluster-status api call
pdm-client: add cluster status method
pdm-client: add target-endpoint parameter to remote migration methods
ui: widget: add remote endpoint selector
ui: migrate: make target endpoint selectable for remote migration
lib/pdm-client/src/lib.rs | 20 +
server/src/api/pve/lxc.rs | 522 ++++++++++
server/src/api/pve/mod.rs | 1064 +--------------------
server/src/api/pve/qemu.rs | 567 +++++++++++
server/src/connection.rs | 61 +-
ui/src/widget/migrate_window.rs | 68 +-
ui/src/widget/mod.rs | 2 +
ui/src/widget/remote_endpoint_selector.rs | 103 ++
8 files changed, 1373 insertions(+), 1034 deletions(-)
create mode 100644 server/src/api/pve/lxc.rs
create mode 100644 server/src/api/pve/qemu.rs
create mode 100644 ui/src/widget/remote_endpoint_selector.rs
--
2.39.5
_______________________________________________
pdm-devel mailing list
pdm-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pdm-devel
^ permalink raw reply [flat|nested] 13+ messages in thread
* [pdm-devel] [PATCH proxmox-api-types 1/2] add more network interface methods
2025-01-13 15:45 [pdm-devel] [PATCH proxmox-api-types/datacenter-manager] remote migration: make target endpoint selectable Dominik Csapak
@ 2025-01-13 15:45 ` Dominik Csapak
2025-01-13 15:45 ` [pdm-devel] [PATCH proxmox-api-types 2/2] add cluster status api call Dominik Csapak
` (10 subsequent siblings)
11 siblings, 0 replies; 13+ messages in thread
From: Dominik Csapak @ 2025-01-13 15:45 UTC (permalink / raw)
To: pdm-devel
Generated by running `make refresh` after the new methods were documented in pve-manager.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
pve-api-types/src/generated/types.rs | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/pve-api-types/src/generated/types.rs b/pve-api-types/src/generated/types.rs
index 5a77bad..ebbb4a0 100644
--- a/pve-api-types/src/generated/types.rs
+++ b/pve-api-types/src/generated/types.rs
@@ -3232,6 +3232,12 @@ serde_plain::derive_fromstr_from_deserialize!(NetworkInterfaceFamilies);
/// The network configuration method for IPv4.
#[derive(Clone, Copy, Debug, Eq, PartialEq, serde::Deserialize, serde::Serialize)]
pub enum NetworkInterfaceMethod {
+ #[serde(rename = "loopback")]
+ /// loopback.
+ Loopback,
+ #[serde(rename = "dhcp")]
+ /// dhcp.
+ Dhcp,
#[serde(rename = "manual")]
/// manual.
Manual,
--
2.39.5
^ permalink raw reply [flat|nested] 13+ messages in thread
* [pdm-devel] [PATCH proxmox-api-types 2/2] add cluster status api call
2025-01-13 15:45 [pdm-devel] [PATCH proxmox-api-types/datacenter-manager] remote migration: make target endpoint selectable Dominik Csapak
2025-01-13 15:45 ` [pdm-devel] [PATCH proxmox-api-types 1/2] add more network interface methods Dominik Csapak
@ 2025-01-13 15:45 ` Dominik Csapak
2025-01-13 15:45 ` [pdm-devel] [PATCH datacenter-manager 1/9] server: factor qemu/lxc code into own modules Dominik Csapak
` (9 subsequent siblings)
11 siblings, 0 replies; 13+ messages in thread
From: Dominik Csapak @ 2025-01-13 15:45 UTC (permalink / raw)
To: pdm-devel
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
pve-api-types/generate.pl | 7 ++
pve-api-types/src/generated/code.rs | 12 ++-
pve-api-types/src/generated/types.rs | 114 +++++++++++++++++++++++++++
3 files changed, 132 insertions(+), 1 deletion(-)
diff --git a/pve-api-types/generate.pl b/pve-api-types/generate.pl
index 4c8e164..8be737b 100644
--- a/pve-api-types/generate.pl
+++ b/pve-api-types/generate.pl
@@ -279,6 +279,13 @@ Schema2Rust::register_api_extensions('ClusterJoinInfo', {
});
api(GET => '/cluster/config/join', 'cluster_config_join', 'return-name' => 'ClusterJoinInfo');
+# cluster status info
+Schema2Rust::register_api_extensions('ClusterNodeStatus', {
+ '/properties/id' => { description => sq("FIXME: Missing description in PVE.") },
+ '/properties/name' => { description => sq("FIXME: Missing description in PVE.") },
+});
+api(GET => '/cluster/status', 'cluster_status', 'return-name' => 'ClusterNodeStatus');
+
# api(GET => '/storage', 'list_storages', 'return-name' => 'StorageList');
Schema2Rust::register_api_extensions('ListRealm', {
'/properties/realm' => { description => sq("FIXME: Missing description in PVE.") },
diff --git a/pve-api-types/src/generated/code.rs b/pve-api-types/src/generated/code.rs
index dc17cd9..d05b519 100644
--- a/pve-api-types/src/generated/code.rs
+++ b/pve-api-types/src/generated/code.rs
@@ -127,7 +127,6 @@
/// - /cluster/sdn/vnets/{vnet}/subnets/{subnet}
/// - /cluster/sdn/zones
/// - /cluster/sdn/zones/{zone}
-/// - /cluster/status
/// - /cluster/tasks
/// - /nodes/{node}
/// - /nodes/{node}/aplinfo
@@ -395,6 +394,11 @@ pub trait PveClient {
Err(Error::Other("cluster_resources not implemented"))
}
+ /// Get cluster status information.
+ async fn cluster_status(&self) -> Result<Vec<ClusterNodeStatus>, Error> {
+ Err(Error::Other("cluster_status not implemented"))
+ }
+
/// Generate a new API token for a specific user. NOTE: returns API token
/// value, which needs to be stored as it cannot be retrieved afterwards!
async fn create_token(
@@ -688,6 +692,12 @@ where
Ok(self.0.get(&url).await?.expect_json()?.data)
}
+ /// Get cluster status information.
+ async fn cluster_status(&self) -> Result<Vec<ClusterNodeStatus>, Error> {
+ let url = format!("/api2/extjs/cluster/status");
+ Ok(self.0.get(&url).await?.expect_json()?.data)
+ }
+
/// Generate a new API token for a specific user. NOTE: returns API token
/// value, which needs to be stored as it cannot be retrieved afterwards!
async fn create_token(
diff --git a/pve-api-types/src/generated/types.rs b/pve-api-types/src/generated/types.rs
index ebbb4a0..b6a3d20 100644
--- a/pve-api-types/src/generated/types.rs
+++ b/pve-api-types/src/generated/types.rs
@@ -405,6 +405,120 @@ pub enum ClusterNodeIndexResponseStatus {
serde_plain::derive_display_from_serialize!(ClusterNodeIndexResponseStatus);
serde_plain::derive_fromstr_from_deserialize!(ClusterNodeIndexResponseStatus);
+#[api(
+ properties: {
+ id: {
+ type: String,
+ description: "FIXME: Missing description in PVE.",
+ },
+ ip: {
+ optional: true,
+ type: String,
+ },
+ level: {
+ optional: true,
+ type: String,
+ },
+ local: {
+ default: false,
+ optional: true,
+ },
+ name: {
+ type: String,
+ description: "FIXME: Missing description in PVE.",
+ },
+ nodeid: {
+ optional: true,
+ type: Integer,
+ },
+ nodes: {
+ optional: true,
+ type: Integer,
+ },
+ online: {
+ default: false,
+ optional: true,
+ },
+ quorate: {
+ default: false,
+ optional: true,
+ },
+ type: {
+ type: ClusterNodeStatusType,
+ },
+ version: {
+ optional: true,
+ type: Integer,
+ },
+ },
+)]
+/// Object.
+#[derive(Debug, serde::Deserialize, serde::Serialize)]
+pub struct ClusterNodeStatus {
+ pub id: String,
+
+ /// [node] IP of the resolved nodename.
+ #[serde(default, skip_serializing_if = "Option::is_none")]
+ pub ip: Option<String>,
+
+ /// [node] Proxmox VE Subscription level, indicates if eligible for
+ /// enterprise support as well as access to the stable Proxmox VE Enterprise
+ /// Repository.
+ #[serde(default, skip_serializing_if = "Option::is_none")]
+ pub level: Option<String>,
+
+ /// [node] Indicates if this is the responding node.
+ #[serde(deserialize_with = "proxmox_login::parse::deserialize_bool")]
+ #[serde(default, skip_serializing_if = "Option::is_none")]
+ pub local: Option<bool>,
+
+ pub name: String,
+
+ /// [node] ID of the node from the corosync configuration.
+ #[serde(deserialize_with = "proxmox_login::parse::deserialize_i64")]
+ #[serde(default, skip_serializing_if = "Option::is_none")]
+ pub nodeid: Option<i64>,
+
+ /// [cluster] Nodes count, including offline nodes.
+ #[serde(deserialize_with = "proxmox_login::parse::deserialize_i64")]
+ #[serde(default, skip_serializing_if = "Option::is_none")]
+ pub nodes: Option<i64>,
+
+ /// [node] Indicates if the node is online or offline.
+ #[serde(deserialize_with = "proxmox_login::parse::deserialize_bool")]
+ #[serde(default, skip_serializing_if = "Option::is_none")]
+ pub online: Option<bool>,
+
+ /// [cluster] Indicates if there is a majority of nodes online to make
+ /// decisions
+ #[serde(deserialize_with = "proxmox_login::parse::deserialize_bool")]
+ #[serde(default, skip_serializing_if = "Option::is_none")]
+ pub quorate: Option<bool>,
+
+ #[serde(rename = "type")]
+ pub ty: ClusterNodeStatusType,
+
+ /// [cluster] Current version of the corosync configuration file.
+ #[serde(deserialize_with = "proxmox_login::parse::deserialize_i64")]
+ #[serde(default, skip_serializing_if = "Option::is_none")]
+ pub version: Option<i64>,
+}
+
+#[api]
+/// Indicates the type, either cluster or node. The type defines the object
+/// properties e.g. quorate available for type cluster.
+#[derive(Clone, Copy, Debug, Eq, PartialEq, serde::Deserialize, serde::Serialize)]
+pub enum ClusterNodeStatusType {
+ #[serde(rename = "cluster")]
+ /// cluster.
+ Cluster,
+ #[serde(rename = "node")]
+ /// node.
+ Node,
+}
+serde_plain::derive_display_from_serialize!(ClusterNodeStatusType);
+serde_plain::derive_fromstr_from_deserialize!(ClusterNodeStatusType);
+
const_regex! {
CLUSTER_RESOURCE_NODE_RE = r##"^(?i:[a-z0-9](?i:[a-z0-9\-]*[a-z0-9])?)$"##;
--
2.39.5
^ permalink raw reply [flat|nested] 13+ messages in thread
* [pdm-devel] [PATCH datacenter-manager 1/9] server: factor qemu/lxc code into own modules
2025-01-13 15:45 [pdm-devel] [PATCH proxmox-api-types/datacenter-manager] remote migration: make target endpoint selectable Dominik Csapak
2025-01-13 15:45 ` [pdm-devel] [PATCH proxmox-api-types 1/2] add more network interface methods Dominik Csapak
2025-01-13 15:45 ` [pdm-devel] [PATCH proxmox-api-types 2/2] add cluster status api call Dominik Csapak
@ 2025-01-13 15:45 ` Dominik Csapak
2025-01-13 15:45 ` [pdm-devel] [PATCH datacenter-manager 2/9] server: api: fix remote upid tracking for qemu remote migration Dominik Csapak
` (8 subsequent siblings)
11 siblings, 0 replies; 13+ messages in thread
From: Dominik Csapak @ 2025-01-13 15:45 UTC (permalink / raw)
To: pdm-devel
so the modules don't get overly big
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
server/src/api/pve/lxc.rs | 507 ++++++++++++++++++
server/src/api/pve/mod.rs | 1029 +-----------------------------------
server/src/api/pve/qemu.rs | 552 +++++++++++++++++++
3 files changed, 1066 insertions(+), 1022 deletions(-)
create mode 100644 server/src/api/pve/lxc.rs
create mode 100644 server/src/api/pve/qemu.rs
diff --git a/server/src/api/pve/lxc.rs b/server/src/api/pve/lxc.rs
new file mode 100644
index 0000000..b16d268
--- /dev/null
+++ b/server/src/api/pve/lxc.rs
@@ -0,0 +1,507 @@
+use anyhow::{bail, format_err, Error};
+use http::uri::Authority;
+
+use proxmox_access_control::CachedUserInfo;
+use proxmox_router::{
+ http_bail, list_subdirs_api_method, Permission, Router, RpcEnvironment, SubdirMap,
+};
+use proxmox_schema::api;
+use proxmox_sortable_macro::sortable;
+
+use pdm_api_types::remotes::REMOTE_ID_SCHEMA;
+use pdm_api_types::{
+ Authid, ConfigurationState, RemoteUpid, NODE_SCHEMA, PRIV_RESOURCE_AUDIT, PRIV_RESOURCE_MANAGE,
+ PRIV_RESOURCE_MIGRATE, SNAPSHOT_NAME_SCHEMA, VMID_SCHEMA,
+};
+
+use crate::api::pve::get_remote;
+
+use super::{
+ check_guest_delete_perms, check_guest_list_permissions, check_guest_permissions,
+ connect_to_remote, new_remote_upid,
+};
+
+use super::find_node_for_vm;
+
+pub const ROUTER: Router = Router::new()
+ .get(&API_METHOD_LIST_LXC)
+ .match_all("vmid", &LXC_VM_ROUTER);
+
+const LXC_VM_ROUTER: Router = Router::new()
+ .get(&list_subdirs_api_method!(LXC_VM_SUBDIRS))
+ .subdirs(LXC_VM_SUBDIRS);
+#[sortable]
+const LXC_VM_SUBDIRS: SubdirMap = &sorted!([
+ ("config", &Router::new().get(&API_METHOD_LXC_GET_CONFIG)),
+ ("rrddata", &super::rrddata::LXC_RRD_ROUTER),
+ ("start", &Router::new().post(&API_METHOD_LXC_START)),
+ ("status", &Router::new().get(&API_METHOD_LXC_GET_STATUS)),
+ ("stop", &Router::new().post(&API_METHOD_LXC_STOP)),
+ ("shutdown", &Router::new().post(&API_METHOD_LXC_SHUTDOWN)),
+ ("migrate", &Router::new().post(&API_METHOD_LXC_MIGRATE)),
+ (
+ "remote-migrate",
+ &Router::new().post(&API_METHOD_LXC_REMOTE_MIGRATE)
+ ),
+]);
+
+#[api(
+ input: {
+ properties: {
+ remote: { schema: REMOTE_ID_SCHEMA },
+ node: {
+ schema: NODE_SCHEMA,
+ optional: true,
+ },
+ },
+ },
+ returns: {
+ type: Array,
+ description: "Get a list of containers.",
+ items: { type: pve_api_types::LxcEntry },
+ },
+ access: {
+ permission: &Permission::Privilege(&["resource", "{remote}"], PRIV_RESOURCE_AUDIT, false),
+ },
+)]
+/// Query the remote's list of lxc containers. If no node is provided, all nodes are queried.
+pub async fn list_lxc(
+ remote: String,
+ node: Option<String>,
+ rpcenv: &mut dyn RpcEnvironment,
+) -> Result<Vec<pve_api_types::LxcEntry>, Error> {
+ // FIXME: top_level_allowed is always true because of schema check above, replace with Anybody
+ // and fine-grained checks once those are implemented for all API calls..
+ let (auth_id, user_info, top_level_allowed) = check_guest_list_permissions(&remote, rpcenv)?;
+
+ let (remotes, _) = pdm_config::remotes::config()?;
+
+ let pve = connect_to_remote(&remotes, &remote)?;
+
+ let list = if let Some(node) = node {
+ pve.list_lxc(&node).await?
+ } else {
+ let mut list = Vec::new();
+ for node in pve.list_nodes().await? {
+ list.extend(pve.list_lxc(&node.node).await?);
+ }
+ list
+ };
+
+ if top_level_allowed {
+ return Ok(list);
+ }
+
+ Ok(list
+ .into_iter()
+ .filter(|entry| {
+ check_guest_permissions(
+ &auth_id,
+ &user_info,
+ &remote,
+ PRIV_RESOURCE_AUDIT,
+ entry.vmid,
+ )
+ })
+ .collect())
+}
+
+#[api(
+ input: {
+ properties: {
+ remote: { schema: REMOTE_ID_SCHEMA },
+ node: {
+ schema: NODE_SCHEMA,
+ optional: true,
+ },
+ vmid: { schema: VMID_SCHEMA },
+ state: { type: ConfigurationState },
+ snapshot: {
+ schema: SNAPSHOT_NAME_SCHEMA,
+ optional: true,
+ },
+ },
+ },
+ returns: { type: pve_api_types::LxcConfig },
+ access: {
+ permission: &Permission::Privilege(&["resource", "{remote}", "guest", "{vmid}"], PRIV_RESOURCE_AUDIT, false),
+ },
+)]
+/// Get the configuration of an lxc container from a remote. If a node is provided, the container
+/// must be on that node, otherwise the node is determined automatically.
+pub async fn lxc_get_config(
+ remote: String,
+ node: Option<String>,
+ vmid: u32,
+ state: ConfigurationState,
+ snapshot: Option<String>,
+) -> Result<pve_api_types::LxcConfig, Error> {
+ let (remotes, _) = pdm_config::remotes::config()?;
+
+ let pve = connect_to_remote(&remotes, &remote)?;
+
+ let node = find_node_for_vm(node, vmid, pve.as_ref()).await?;
+
+ Ok(pve
+ .lxc_get_config(&node, vmid, state.current(), snapshot)
+ .await?)
+}
+
+#[api(
+ input: {
+ properties: {
+ remote: { schema: REMOTE_ID_SCHEMA },
+ node: {
+ schema: NODE_SCHEMA,
+ optional: true,
+ },
+ vmid: { schema: VMID_SCHEMA },
+ },
+ },
+ returns: { type: pve_api_types::LxcStatus },
+ access: {
+ permission: &Permission::Privilege(&["resource", "{remote}", "guest", "{vmid}"], PRIV_RESOURCE_AUDIT, false),
+ },
+)]
+/// Get the status of an LXC guest from a remote. If a node is provided, the guest must be on that
+/// node, otherwise the node is determined automatically.
+pub async fn lxc_get_status(
+ remote: String,
+ node: Option<String>,
+ vmid: u32,
+) -> Result<pve_api_types::LxcStatus, Error> {
+ let (remotes, _) = pdm_config::remotes::config()?;
+
+ let pve = connect_to_remote(&remotes, &remote)?;
+
+ let node = find_node_for_vm(node, vmid, pve.as_ref()).await?;
+
+ Ok(pve.lxc_get_status(&node, vmid).await?)
+}
+
+#[api(
+ input: {
+ properties: {
+ remote: { schema: REMOTE_ID_SCHEMA },
+ node: {
+ schema: NODE_SCHEMA,
+ optional: true,
+ },
+ vmid: { schema: VMID_SCHEMA },
+ },
+ },
+ returns: { type: RemoteUpid },
+ access: {
+ permission: &Permission::Privilege(&["resource", "{remote}", "guest", "{vmid}"], PRIV_RESOURCE_MANAGE, false),
+ },
+)]
+/// Start a remote lxc container.
+pub async fn lxc_start(
+ remote: String,
+ node: Option<String>,
+ vmid: u32,
+) -> Result<RemoteUpid, Error> {
+ let (remotes, _) = pdm_config::remotes::config()?;
+
+ let pve = connect_to_remote(&remotes, &remote)?;
+
+ let node = find_node_for_vm(node, vmid, pve.as_ref()).await?;
+
+ let upid = pve.start_lxc_async(&node, vmid, Default::default()).await?;
+
+ new_remote_upid(remote, upid)
+}
+
+#[api(
+ input: {
+ properties: {
+ remote: { schema: REMOTE_ID_SCHEMA },
+ node: {
+ schema: NODE_SCHEMA,
+ optional: true,
+ },
+ vmid: { schema: VMID_SCHEMA },
+ },
+ },
+ returns: { type: RemoteUpid },
+ access: {
+ permission: &Permission::Privilege(&["resource", "{remote}", "guest", "{vmid}"], PRIV_RESOURCE_MANAGE, false),
+ },
+)]
+/// Stop a remote lxc container.
+pub async fn lxc_stop(
+ remote: String,
+ node: Option<String>,
+ vmid: u32,
+) -> Result<RemoteUpid, Error> {
+ let (remotes, _) = pdm_config::remotes::config()?;
+
+ let pve = connect_to_remote(&remotes, &remote)?;
+
+ let node = find_node_for_vm(node, vmid, pve.as_ref()).await?;
+
+ let upid = pve.stop_lxc_async(&node, vmid, Default::default()).await?;
+
+ new_remote_upid(remote, upid)
+}
+
+#[api(
+ input: {
+ properties: {
+ remote: { schema: REMOTE_ID_SCHEMA },
+ node: {
+ schema: NODE_SCHEMA,
+ optional: true,
+ },
+ vmid: { schema: VMID_SCHEMA },
+ },
+ },
+ returns: { type: RemoteUpid },
+ access: {
+ permission: &Permission::Privilege(&["resource", "{remote}", "guest", "{vmid}"], PRIV_RESOURCE_MANAGE, false),
+ },
+)]
+/// Perform a shutdown of a remote lxc container.
+pub async fn lxc_shutdown(
+ remote: String,
+ node: Option<String>,
+ vmid: u32,
+) -> Result<RemoteUpid, Error> {
+ let (remotes, _) = pdm_config::remotes::config()?;
+
+ let pve = connect_to_remote(&remotes, &remote)?;
+
+ let node = find_node_for_vm(node, vmid, pve.as_ref()).await?;
+
+ let upid = pve
+ .shutdown_lxc_async(&node, vmid, Default::default())
+ .await?;
+
+ new_remote_upid(remote, upid)
+}
+
+#[api(
+ input: {
+ properties: {
+ remote: { schema: REMOTE_ID_SCHEMA },
+ node: {
+ schema: NODE_SCHEMA,
+ optional: true,
+ },
+ target: { schema: NODE_SCHEMA },
+ vmid: { schema: VMID_SCHEMA },
+ online: {
+ type: bool,
+ description: "Attempt an online migration if the container is running.",
+ optional: true,
+ },
+ restart: {
+ type: bool,
+ description: "Perform a restart-migration if the container is running.",
+ optional: true,
+ },
+ "target-storage": {
+ description: "Mapping of source storages to target storages.",
+ optional: true,
+ },
+ bwlimit: {
+ description: "Override I/O bandwidth limit (in KiB/s).",
+ optional: true,
+ },
+ timeout: {
+ description: "Shutdown timeout for restart-migrations.",
+ optional: true,
+ },
+ },
+ },
+ returns: { type: RemoteUpid },
+ access: {
+ permission: &Permission::And(&[
+ &Permission::Privilege(&["resource", "{remote}", "guest", "{vmid}"], PRIV_RESOURCE_MIGRATE, false),
+ ]),
+ },
+)]
+/// Perform an in-cluster migration of an lxc container.
+#[allow(clippy::too_many_arguments)]
+pub async fn lxc_migrate(
+ remote: String,
+ node: Option<String>,
+ vmid: u32,
+ bwlimit: Option<u64>,
+ restart: Option<bool>,
+ online: Option<bool>,
+ target: String,
+ target_storage: Option<String>,
+ timeout: Option<i64>,
+) -> Result<RemoteUpid, Error> {
+ let bwlimit = bwlimit.map(|n| n as f64);
+
+ log::info!("in-cluster migration requested for remote {remote:?} ct {vmid} to node {target:?}");
+
+ let (remotes, _) = pdm_config::remotes::config()?;
+ let pve = connect_to_remote(&remotes, &remote)?;
+
+ let node = find_node_for_vm(node, vmid, pve.as_ref()).await?;
+
+ if node == target {
+ bail!("refusing migration to the same node");
+ }
+
+ let params = pve_api_types::MigrateLxc {
+ bwlimit,
+ online,
+ restart,
+ target,
+ target_storage,
+ timeout,
+ };
+ let upid = pve.migrate_lxc(&node, vmid, params).await?;
+
+ new_remote_upid(remote, upid)
+}
+
+#[api(
+ input: {
+ properties: {
+ remote: { schema: REMOTE_ID_SCHEMA },
+ target: { schema: REMOTE_ID_SCHEMA },
+ node: {
+ schema: NODE_SCHEMA,
+ optional: true,
+ },
+ vmid: { schema: VMID_SCHEMA },
+ "target-vmid": {
+ optional: true,
+ schema: VMID_SCHEMA,
+ },
+ delete: {
+ description: "Delete the original container and related data after successful migration.",
+ optional: true,
+ default: false,
+ },
+ online: {
+ type: bool,
+ description: "Perform an online migration if the container is running.",
+ optional: true,
+ default: false,
+ },
+ "target-storage": {
+ description: "Mapping of source storages to target storages.",
+ },
+ "target-bridge": {
+ description: "Mapping of source bridges to remote bridges.",
+ },
+ bwlimit: {
+ description: "Override I/O bandwidth limit (in KiB/s).",
+ optional: true,
+ },
+ restart: {
+ description: "Perform a restart-migration.",
+ optional: true,
+ },
+ timeout: {
+ description: "Add a shutdown timeout for the restart-migration.",
+ optional: true,
+ },
+ },
+ },
+ returns: { type: RemoteUpid },
+ access: {
+ permission:
+ &Permission::Privilege(&["resource", "{remote}", "guest", "{vmid}"], PRIV_RESOURCE_MIGRATE, false),
+ description: "requires PRIV_RESOURCE_MIGRATE on /resource/{remote}/guest/{vmid} for source and target remote and vmid",
+ },
+)]
+/// Perform a remote migration of an lxc container.
+#[allow(clippy::too_many_arguments)]
+pub async fn lxc_remote_migrate(
+ remote: String, // this is the source
+ target: String, // this is the destination remote name
+ node: Option<String>,
+ vmid: u32,
+ target_vmid: Option<u32>,
+ delete: bool,
+ online: bool,
+ target_storage: String,
+ target_bridge: String,
+ bwlimit: Option<u64>,
+ restart: Option<bool>,
+ timeout: Option<i64>,
+ rpcenv: &mut dyn RpcEnvironment,
+) -> Result<RemoteUpid, Error> {
+ let user_info = CachedUserInfo::new()?;
+ let auth_id: Authid = rpcenv
+ .get_auth_id()
+ .ok_or_else(|| format_err!("no authid available"))?
+ .parse()?;
+ let target_privs = user_info.lookup_privs(
+ &auth_id,
+ &[
+ "resource",
+ &target,
+ "guest",
+ &target_vmid.unwrap_or(vmid).to_string(),
+ ],
+ );
+ if target_privs & PRIV_RESOURCE_MIGRATE == 0 {
+ http_bail!(
+ UNAUTHORIZED,
+ "missing PRIV_RESOURCE_MIGRATE on target remote+vmid"
+ );
+ }
+ if delete {
+ check_guest_delete_perms(rpcenv, &remote, vmid)?;
+ }
+
+ let source = remote; // let's stick to "source" and "target" naming
+
+ log::info!("remote migration requested");
+
+ if source == target {
+ bail!("source and destination clusters must be different");
+ }
+
+ let (remotes, _) = pdm_config::remotes::config()?;
+ let target = get_remote(&remotes, &target)?;
+ let source_conn = connect_to_remote(&remotes, &source)?;
+
+ let node = find_node_for_vm(node, vmid, source_conn.as_ref()).await?;
+
+ // FIXME: For now we'll only try with the first node but we should probably try others, too, in
+ // case some are offline?
+
+ let target_node = target
+ .nodes
+ .first()
+ .ok_or_else(|| format_err!("no nodes configured for target cluster"))?;
+ let target_host_port: Authority = target_node.hostname.parse()?;
+ let mut target_endpoint = format!(
+ "host={host},port={port},apitoken=PVEAPIToken={authid}={secret}",
+ host = target_host_port.host(),
+ authid = target.authid,
+ secret = target.token,
+ port = target_host_port.port_u16().unwrap_or(8006),
+ );
+ if let Some(fp) = target_node.fingerprint.as_deref() {
+ target_endpoint.reserve(fp.len() + ",fingerprint=".len());
+ target_endpoint.push_str(",fingerprint=");
+ target_endpoint.push_str(fp);
+ }
+
+ log::info!("forwarding remote migration requested");
+ let params = pve_api_types::RemoteMigrateLxc {
+ target_bridge,
+ target_storage,
+ delete: Some(delete),
+ online: Some(online),
+ target_vmid,
+ target_endpoint,
+ bwlimit: bwlimit.map(|limit| limit as f64),
+ restart,
+ timeout,
+ };
+ log::info!("migrating vm {vmid} of node {node:?}");
+ let upid = source_conn.remote_migrate_lxc(&node, vmid, params).await?;
+
+ new_remote_upid(source, upid)
+}
diff --git a/server/src/api/pve/mod.rs b/server/src/api/pve/mod.rs
index ae44722..48e16b2 100644
--- a/server/src/api/pve/mod.rs
+++ b/server/src/api/pve/mod.rs
@@ -3,7 +3,6 @@
use std::sync::Arc;
use anyhow::{bail, format_err, Error};
-use http::uri::Authority;
use proxmox_access_control::CachedUserInfo;
use proxmox_router::{
@@ -17,22 +16,20 @@ use proxmox_sortable_macro::sortable;
use pdm_api_types::remotes::{NodeUrl, Remote, RemoteType, REMOTE_ID_SCHEMA};
use pdm_api_types::resource::PveResource;
use pdm_api_types::{
- Authid, ConfigurationState, RemoteUpid, CIDR_FORMAT, HOST_OPTIONAL_PORT_FORMAT, NODE_SCHEMA,
- PRIV_RESOURCE_AUDIT, PRIV_RESOURCE_DELETE, PRIV_RESOURCE_MANAGE, PRIV_RESOURCE_MIGRATE,
- PRIV_SYS_MODIFY, SNAPSHOT_NAME_SCHEMA, VMID_SCHEMA,
+ Authid, RemoteUpid, HOST_OPTIONAL_PORT_FORMAT, PRIV_RESOURCE_AUDIT, PRIV_RESOURCE_DELETE,
+ PRIV_SYS_MODIFY,
};
use pve_api_types::client::PveClient;
-use pve_api_types::{
- ClusterResourceKind, ClusterResourceType, ListRealm, PveUpid, QemuMigratePreconditions,
- StartQemuMigrationType,
-};
+use pve_api_types::{ClusterResourceKind, ClusterResourceType, ListRealm, PveUpid};
use super::resources::{map_pve_lxc, map_pve_node, map_pve_qemu, map_pve_storage};
use crate::{connection, task_cache};
+mod lxc;
mod node;
+mod qemu;
mod rrddata;
pub mod tasks;
@@ -58,66 +55,17 @@ const MAIN_ROUTER: Router = Router::new()
#[sortable]
const REMOTE_SUBDIRS: SubdirMap = &sorted!([
- ("lxc", &LXC_ROUTER),
+ ("lxc", &lxc::ROUTER),
("nodes", &NODES_ROUTER),
- ("qemu", &QEMU_ROUTER),
+ ("qemu", &qemu::ROUTER),
("resources", &RESOURCES_ROUTER),
("tasks", &tasks::ROUTER),
]);
-const LXC_ROUTER: Router = Router::new()
- .get(&API_METHOD_LIST_LXC)
- .match_all("vmid", &LXC_VM_ROUTER);
-
-const LXC_VM_ROUTER: Router = Router::new()
- .get(&list_subdirs_api_method!(LXC_VM_SUBDIRS))
- .subdirs(LXC_VM_SUBDIRS);
-#[sortable]
-const LXC_VM_SUBDIRS: SubdirMap = &sorted!([
- ("config", &Router::new().get(&API_METHOD_LXC_GET_CONFIG)),
- ("rrddata", &rrddata::LXC_RRD_ROUTER),
- ("start", &Router::new().post(&API_METHOD_LXC_START)),
- ("status", &Router::new().get(&API_METHOD_LXC_GET_STATUS)),
- ("stop", &Router::new().post(&API_METHOD_LXC_STOP)),
- ("shutdown", &Router::new().post(&API_METHOD_LXC_SHUTDOWN)),
- ("migrate", &Router::new().post(&API_METHOD_LXC_MIGRATE)),
- (
- "remote-migrate",
- &Router::new().post(&API_METHOD_LXC_REMOTE_MIGRATE)
- ),
-]);
-
const NODES_ROUTER: Router = Router::new()
.get(&API_METHOD_LIST_NODES)
.match_all("node", &node::ROUTER);
-const QEMU_ROUTER: Router = Router::new()
- .get(&API_METHOD_LIST_QEMU)
- .match_all("vmid", &QEMU_VM_ROUTER);
-
-const QEMU_VM_ROUTER: Router = Router::new()
- .get(&list_subdirs_api_method!(QEMU_VM_SUBDIRS))
- .subdirs(QEMU_VM_SUBDIRS);
-#[sortable]
-const QEMU_VM_SUBDIRS: SubdirMap = &sorted!([
- ("config", &Router::new().get(&API_METHOD_QEMU_GET_CONFIG)),
- ("rrddata", &rrddata::QEMU_RRD_ROUTER),
- ("start", &Router::new().post(&API_METHOD_QEMU_START)),
- ("status", &Router::new().get(&API_METHOD_QEMU_GET_STATUS)),
- ("stop", &Router::new().post(&API_METHOD_QEMU_STOP)),
- ("shutdown", &Router::new().post(&API_METHOD_QEMU_SHUTDOWN)),
- (
- "migrate",
- &Router::new()
- .get(&API_METHOD_QEMU_MIGRATE_PRECONDITIONS)
- .post(&API_METHOD_QEMU_MIGRATE)
- ),
- (
- "remote-migrate",
- &Router::new().post(&API_METHOD_QEMU_REMOTE_MIGRATE)
- ),
-]);
-
const RESOURCES_ROUTER: Router = Router::new().get(&API_METHOD_CLUSTER_RESOURCES);
// converts a remote + PveUpid into a RemoteUpid and starts tracking it
@@ -274,128 +222,6 @@ fn check_guest_permissions(
auth_privs & privilege != 0
}
-#[api(
- input: {
- properties: {
- remote: { schema: REMOTE_ID_SCHEMA },
- node: {
- schema: NODE_SCHEMA,
- optional: true,
- },
- },
- },
- returns: {
- type: Array,
- description: "Get a list of VMs",
- items: { type: pve_api_types::VmEntry },
- },
- access: {
- permission: &Permission::Privilege(&["resource", "{remote}"], PRIV_RESOURCE_AUDIT, false),
- },
-)]
-/// Query the remote's list of qemu VMs. If no node is provided, the all nodes are queried.
-pub async fn list_qemu(
- remote: String,
- node: Option<String>,
- rpcenv: &mut dyn RpcEnvironment,
-) -> Result<Vec<pve_api_types::VmEntry>, Error> {
- // FIXME: top_level_allowed is always true because of schema check above, replace with Anybody
- // and fine-grained checks once those are implemented for all API calls..
- let (auth_id, user_info, top_level_allowed) = check_guest_list_permissions(&remote, rpcenv)?;
-
- let (remotes, _) = pdm_config::remotes::config()?;
-
- let pve = connect_to_remote(&remotes, &remote)?;
-
- let list = if let Some(node) = node {
- pve.list_qemu(&node, None).await?
- } else {
- let mut list = Vec::new();
- for node in pve.list_nodes().await? {
- list.extend(pve.list_qemu(&node.node, None).await?);
- }
- list
- };
-
- if top_level_allowed {
- return Ok(list);
- }
-
- Ok(list
- .into_iter()
- .filter(|entry| {
- check_guest_permissions(
- &auth_id,
- &user_info,
- &remote,
- PRIV_RESOURCE_AUDIT,
- entry.vmid,
- )
- })
- .collect())
-}
-
-#[api(
- input: {
- properties: {
- remote: { schema: REMOTE_ID_SCHEMA },
- node: {
- schema: NODE_SCHEMA,
- optional: true,
- },
- },
- },
- returns: {
- type: Array,
- description: "Get a list of containers.",
- items: { type: pve_api_types::VmEntry },
- },
- access: {
- permission: &Permission::Privilege(&["resource", "{remote}"], PRIV_RESOURCE_AUDIT, false),
- },
-)]
-/// Query the remote's list of lxc containers. If no node is provided, the all nodes are queried.
-pub async fn list_lxc(
- remote: String,
- node: Option<String>,
- rpcenv: &mut dyn RpcEnvironment,
-) -> Result<Vec<pve_api_types::LxcEntry>, Error> {
- // FIXME: top_level_allowed is always true because of schema check above, replace with Anybody
- // and fine-grained checks once those are implemented for all API calls..
- let (auth_id, user_info, top_level_allowed) = check_guest_list_permissions(&remote, rpcenv)?;
-
- let (remotes, _) = pdm_config::remotes::config()?;
-
- let pve = connect_to_remote(&remotes, &remote)?;
-
- let list = if let Some(node) = node {
- pve.list_lxc(&node).await?
- } else {
- let mut list = Vec::new();
- for node in pve.list_nodes().await? {
- list.extend(pve.list_lxc(&node.node).await?);
- }
- list
- };
-
- if top_level_allowed {
- return Ok(list);
- }
-
- Ok(list
- .into_iter()
- .filter(|entry| {
- check_guest_permissions(
- &auth_id,
- &user_info,
- &remote,
- PRIV_RESOURCE_AUDIT,
- entry.vmid,
- )
- })
- .collect())
-}
-
async fn find_node_for_vm(
node: Option<String>,
vmid: u32,
@@ -414,183 +240,6 @@ async fn find_node_for_vm(
})
}
-#[api(
- input: {
- properties: {
- remote: { schema: REMOTE_ID_SCHEMA },
- node: {
- schema: NODE_SCHEMA,
- optional: true,
- },
- vmid: { schema: VMID_SCHEMA },
- state: { type: ConfigurationState },
- snapshot: {
- schema: SNAPSHOT_NAME_SCHEMA,
- optional: true,
- },
- },
- },
- returns: { type: pve_api_types::QemuConfig },
- access: {
- permission: &Permission::Privilege(&["resource", "{remote}", "guest", "{vmid}"], PRIV_RESOURCE_AUDIT, false),
- },
-)]
-/// Get the configuration of a qemu VM from a remote. If a node is provided, the VM must be on that
-/// node, otherwise the node is determined automatically.
-pub async fn qemu_get_config(
- remote: String,
- node: Option<String>,
- vmid: u32,
- state: ConfigurationState,
- snapshot: Option<String>,
-) -> Result<pve_api_types::QemuConfig, Error> {
- let (remotes, _) = pdm_config::remotes::config()?;
-
- let pve = connect_to_remote(&remotes, &remote)?;
-
- let node = find_node_for_vm(node, vmid, pve.as_ref()).await?;
-
- Ok(pve
- .qemu_get_config(&node, vmid, state.current(), snapshot)
- .await?)
-}
-
-#[api(
- input: {
- properties: {
- remote: { schema: REMOTE_ID_SCHEMA },
- node: {
- schema: NODE_SCHEMA,
- optional: true,
- },
- vmid: { schema: VMID_SCHEMA },
- },
- },
- returns: { type: pve_api_types::QemuStatus },
- access: {
- permission: &Permission::Privilege(&["resource", "{remote}", "guest", "{vmid}"], PRIV_RESOURCE_AUDIT, false),
- },
-)]
-/// Get the status of a qemu VM from a remote. If a node is provided, the VM must be on that
-/// node, otherwise the node is determined automatically.
-pub async fn qemu_get_status(
- remote: String,
- node: Option<String>,
- vmid: u32,
-) -> Result<pve_api_types::QemuStatus, Error> {
- let (remotes, _) = pdm_config::remotes::config()?;
-
- let pve = connect_to_remote(&remotes, &remote)?;
-
- let node = find_node_for_vm(node, vmid, pve.as_ref()).await?;
-
- Ok(pve.qemu_get_status(&node, vmid).await?)
-}
-
-#[api(
- input: {
- properties: {
- remote: { schema: REMOTE_ID_SCHEMA },
- node: {
- schema: NODE_SCHEMA,
- optional: true,
- },
- vmid: { schema: VMID_SCHEMA },
- },
- },
- returns: { type: RemoteUpid },
- access: {
- permission: &Permission::Privilege(&["resource", "{remote}", "guest", "{vmid}"], PRIV_RESOURCE_MANAGE, false),
- },
-)]
-/// Start a remote qemu vm.
-pub async fn qemu_start(
- remote: String,
- node: Option<String>,
- vmid: u32,
-) -> Result<RemoteUpid, Error> {
- let (remotes, _) = pdm_config::remotes::config()?;
-
- let pve = connect_to_remote(&remotes, &remote)?;
-
- let node = find_node_for_vm(node, vmid, pve.as_ref()).await?;
-
- let upid = pve
- .start_qemu_async(&node, vmid, Default::default())
- .await?;
-
- new_remote_upid(remote, upid)
-}
-
-#[api(
- input: {
- properties: {
- remote: { schema: REMOTE_ID_SCHEMA },
- node: {
- schema: NODE_SCHEMA,
- optional: true,
- },
- vmid: { schema: VMID_SCHEMA },
- },
- },
- returns: { type: RemoteUpid },
- access: {
- permission: &Permission::Privilege(&["resource", "{remote}", "guest", "{vmid}"], PRIV_RESOURCE_MANAGE, false),
- },
-)]
-/// Stop a remote qemu vm.
-pub async fn qemu_stop(
- remote: String,
- node: Option<String>,
- vmid: u32,
-) -> Result<RemoteUpid, Error> {
- let (remotes, _) = pdm_config::remotes::config()?;
-
- let pve = connect_to_remote(&remotes, &remote)?;
-
- let node = find_node_for_vm(node, vmid, pve.as_ref()).await?;
-
- let upid = pve.stop_qemu_async(&node, vmid, Default::default()).await?;
-
- new_remote_upid(remote, upid)
-}
-
-#[api(
- input: {
- properties: {
- remote: { schema: REMOTE_ID_SCHEMA },
- node: {
- schema: NODE_SCHEMA,
- optional: true,
- },
- vmid: { schema: VMID_SCHEMA },
- },
- },
- returns: { type: RemoteUpid },
- access: {
- permission: &Permission::Privilege(&["resource", "{remote}", "guest", "{vmid}"], PRIV_RESOURCE_MANAGE, false),
- },
-)]
-/// Perform a shutdown of a remote qemu vm.
-pub async fn qemu_shutdown(
- remote: String,
- node: Option<String>,
- vmid: u32,
-) -> Result<RemoteUpid, Error> {
- let (remotes, _) = pdm_config::remotes::config()?;
-
- let pve = connect_to_remote(&remotes, &remote)?;
-
- let node = find_node_for_vm(node, vmid, pve.as_ref()).await?;
-
- let upid = pve
- .shutdown_qemu_async(&node, vmid, Default::default())
- .await?;
-
- //(remote, upid.to_string()).try_into()
- new_remote_upid(remote, upid)
-}
-
fn check_guest_delete_perms(
rpcenv: &mut dyn RpcEnvironment,
remote: &str,
@@ -609,670 +258,6 @@ fn check_guest_delete_perms(
)
}
-#[api(
- input: {
- properties: {
- remote: { schema: REMOTE_ID_SCHEMA },
- node: {
- schema: NODE_SCHEMA,
- optional: true,
- },
- target: { schema: NODE_SCHEMA },
- vmid: { schema: VMID_SCHEMA },
- online: {
- type: bool,
- description: "Perform an online migration if the vm is running.",
- optional: true,
- },
- "target-storage": {
- description: "Mapping of source storages to target storages.",
- optional: true,
- },
- bwlimit: {
- description: "Override I/O bandwidth limit (in KiB/s).",
- optional: true,
- },
- "migration-network": {
- description: "CIDR of the (sub) network that is used for migration.",
- type: String,
- format: &CIDR_FORMAT,
- optional: true,
- },
- "migration-type": {
- type: StartQemuMigrationType,
- optional: true,
- },
- force: {
- description: "Allow to migrate VMs with local devices.",
- optional: true,
- default: false,
- },
- "with-local-disks": {
- description: "Enable live storage migration for local disks.",
- optional: true,
- },
- },
- },
- returns: { type: RemoteUpid },
- access: {
- permission: &Permission::And(&[
- &Permission::Privilege(&["resource", "{remote}", "guest", "{vmid}"], PRIV_RESOURCE_MIGRATE, false),
- ]),
- },
-)]
-/// Perform an in-cluster migration of a VM.
-#[allow(clippy::too_many_arguments)]
-pub async fn qemu_migrate(
- remote: String,
- node: Option<String>,
- vmid: u32,
- bwlimit: Option<u64>,
- force: Option<bool>,
- migration_network: Option<String>,
- migration_type: Option<StartQemuMigrationType>,
- online: Option<bool>,
- target: String,
- target_storage: Option<String>,
- with_local_disks: Option<bool>,
-) -> Result<RemoteUpid, Error> {
- log::info!("in-cluster migration requested for remote {remote:?} vm {vmid} to node {target:?}");
-
- let (remotes, _) = pdm_config::remotes::config()?;
- let pve = connect_to_remote(&remotes, &remote)?;
-
- let node = find_node_for_vm(node, vmid, pve.as_ref()).await?;
-
- if node == target {
- bail!("refusing migration to the same node");
- }
-
- let params = pve_api_types::MigrateQemu {
- bwlimit,
- force,
- migration_network,
- migration_type,
- online,
- target,
- targetstorage: target_storage,
- with_local_disks,
- };
- let upid = pve.migrate_qemu(&node, vmid, params).await?;
- //(remote, upid.to_string()).try_into()
- new_remote_upid(remote, upid)
-}
-
-#[api(
- input: {
- properties: {
- remote: { schema: REMOTE_ID_SCHEMA },
- node: {
- schema: NODE_SCHEMA,
- optional: true,
- },
- target: {
- schema: NODE_SCHEMA,
- optional: true,
- },
- vmid: { schema: VMID_SCHEMA },
- }
- },
- access: {
- permission: &Permission::And(&[
- &Permission::Privilege(&["resource", "{remote}", "guest", "{vmid}"], PRIV_RESOURCE_MIGRATE, false),
- ]),
- },
-)]
-/// Qemu (local) migrate preconditions
-async fn qemu_migrate_preconditions(
- remote: String,
- node: Option<String>,
- target: Option<String>,
- vmid: u32,
-) -> Result<QemuMigratePreconditions, Error> {
- let (remotes, _) = pdm_config::remotes::config()?;
- let pve = connect_to_remote(&remotes, &remote)?;
-
- let node = find_node_for_vm(node, vmid, pve.as_ref()).await?;
-
- let res = pve.qemu_migrate_preconditions(&node, vmid, target).await?;
- Ok(res)
-}
-
-#[api(
- input: {
- properties: {
- remote: { schema: REMOTE_ID_SCHEMA },
- target: { schema: REMOTE_ID_SCHEMA },
- node: {
- schema: NODE_SCHEMA,
- optional: true,
- },
- vmid: { schema: VMID_SCHEMA },
- "target-vmid": {
- optional: true,
- schema: VMID_SCHEMA,
- },
- delete: {
- description: "Delete the original VM and related data after successful migration.",
- optional: true,
- default: false,
- },
- online: {
- type: bool,
- description: "Perform an online migration if the vm is running.",
- optional: true,
- default: false,
- },
- "target-storage": {
- description: "Mapping of source storages to target storages.",
- },
- "target-bridge": {
- description: "Mapping of source bridges to remote bridges.",
- },
- bwlimit: {
- description: "Override I/O bandwidth limit (in KiB/s).",
- optional: true,
- }
- },
- },
- returns: { type: RemoteUpid },
- access: {
- permission:
- &Permission::Privilege(&["resource", "{remote}", "guest", "{vmid}"], PRIV_RESOURCE_MIGRATE, false),
- description: "requires PRIV_RESOURCE_MIGRATE on /resource/{remote}/guest/{vmid} for source and target remove and vmid",
- },
-)]
-/// Perform a remote migration of a VM.
-#[allow(clippy::too_many_arguments)]
-pub async fn qemu_remote_migrate(
- remote: String, // this is the source
- target: String, // this is the destination remote name
- node: Option<String>,
- vmid: u32,
- target_vmid: Option<u32>,
- delete: bool,
- online: bool,
- target_storage: String,
- target_bridge: String,
- bwlimit: Option<u64>,
- rpcenv: &mut dyn RpcEnvironment,
-) -> Result<RemoteUpid, Error> {
- let user_info = CachedUserInfo::new()?;
- let auth_id: Authid = rpcenv
- .get_auth_id()
- .ok_or_else(|| format_err!("no authid available"))?
- .parse()?;
- let target_privs = user_info.lookup_privs(
- &auth_id,
- &[
- "resource",
- &target,
- "guest",
- &target_vmid.unwrap_or(vmid).to_string(),
- ],
- );
- if target_privs & PRIV_RESOURCE_MIGRATE == 0 {
- http_bail!(
- UNAUTHORIZED,
- "missing PRIV_RESOURCE_MIGRATE on target remote+vmid"
- );
- }
-
- if delete {
- check_guest_delete_perms(rpcenv, &remote, vmid)?;
- }
-
- let source = remote; // let's stick to "source" and "target" naming
-
- log::info!("remote migration requested");
-
- if source == target {
- bail!("source and destination clusters must be different");
- }
-
- let (remotes, _) = pdm_config::remotes::config()?;
- let target = get_remote(&remotes, &target)?;
- let source_conn = connect_to_remote(&remotes, &source)?;
-
- let node = find_node_for_vm(node, vmid, source_conn.as_ref()).await?;
-
- // FIXME: For now we'll only try with the first node but we should probably try others, too, in
- // case some are offline?
-
- let target_node = target
- .nodes
- .first()
- .ok_or_else(|| format_err!("no nodes configured for target cluster"))?;
- let target_host_port: Authority = target_node.hostname.parse()?;
- let mut target_endpoint = format!(
- "host={host},port={port},apitoken=PVEAPIToken={authid}={secret}",
- host = target_host_port.host(),
- authid = target.authid,
- secret = target.token,
- port = target_host_port.port_u16().unwrap_or(8006),
- );
- if let Some(fp) = target_node.fingerprint.as_deref() {
- target_endpoint.reserve(fp.len() + ",fingerprint=".len());
- target_endpoint.push_str(",fingerprint=");
- target_endpoint.push_str(fp);
- }
-
- log::info!("forwarding remote migration requested");
- let params = pve_api_types::RemoteMigrateQemu {
- target_bridge,
- target_storage,
- delete: Some(delete),
- online: Some(online),
- target_vmid,
- target_endpoint,
- bwlimit,
- };
- log::info!("migrating vm {vmid} of node {node:?}");
- let upid = source_conn.remote_migrate_qemu(&node, vmid, params).await?;
-
- (source, upid.to_string()).try_into()
-}
-
-#[api(
- input: {
- properties: {
- remote: { schema: REMOTE_ID_SCHEMA },
- node: {
- schema: NODE_SCHEMA,
- optional: true,
- },
- vmid: { schema: VMID_SCHEMA },
- state: { type: ConfigurationState },
- snapshot: {
- schema: SNAPSHOT_NAME_SCHEMA,
- optional: true,
- },
- },
- },
- returns: { type: pve_api_types::LxcConfig },
- access: {
- permission: &Permission::Privilege(&["resource", "{remote}", "guest", "{vmid}"], PRIV_RESOURCE_AUDIT, false),
- },
-)]
-/// Get the configuration of an lxc container from a remote. If a node is provided, the container
-/// must be on that node, otherwise the node is determined automatically.
-pub async fn lxc_get_config(
- remote: String,
- node: Option<String>,
- vmid: u32,
- state: ConfigurationState,
- snapshot: Option<String>,
-) -> Result<pve_api_types::LxcConfig, Error> {
- let (remotes, _) = pdm_config::remotes::config()?;
-
- let pve = connect_to_remote(&remotes, &remote)?;
-
- let node = find_node_for_vm(node, vmid, pve.as_ref()).await?;
-
- Ok(pve
- .lxc_get_config(&node, vmid, state.current(), snapshot)
- .await?)
-}
-
-#[api(
- input: {
- properties: {
- remote: { schema: REMOTE_ID_SCHEMA },
- node: {
- schema: NODE_SCHEMA,
- optional: true,
- },
- vmid: { schema: VMID_SCHEMA },
- },
- },
- returns: { type: pve_api_types::QemuStatus },
- access: {
- permission: &Permission::Privilege(&["resource", "{remote}", "guest", "{vmid}"], PRIV_RESOURCE_AUDIT, false),
- },
-)]
-/// Get the status of an LXC guest from a remote. If a node is provided, the guest must be on that
-/// node, otherwise the node is determined automatically.
-pub async fn lxc_get_status(
- remote: String,
- node: Option<String>,
- vmid: u32,
-) -> Result<pve_api_types::LxcStatus, Error> {
- let (remotes, _) = pdm_config::remotes::config()?;
-
- let pve = connect_to_remote(&remotes, &remote)?;
-
- let node = find_node_for_vm(node, vmid, pve.as_ref()).await?;
-
- Ok(pve.lxc_get_status(&node, vmid).await?)
-}
-
-#[api(
- input: {
- properties: {
- remote: { schema: REMOTE_ID_SCHEMA },
- node: {
- schema: NODE_SCHEMA,
- optional: true,
- },
- vmid: { schema: VMID_SCHEMA },
- },
- },
- returns: { type: RemoteUpid },
- access: {
- permission: &Permission::Privilege(&["resource", "{remote}", "guest", "{vmid}"], PRIV_RESOURCE_MANAGE, false),
- },
-)]
-/// Start a remote lxc container.
-pub async fn lxc_start(
- remote: String,
- node: Option<String>,
- vmid: u32,
-) -> Result<RemoteUpid, Error> {
- let (remotes, _) = pdm_config::remotes::config()?;
-
- let pve = connect_to_remote(&remotes, &remote)?;
-
- let node = find_node_for_vm(node, vmid, pve.as_ref()).await?;
-
- let upid = pve.start_lxc_async(&node, vmid, Default::default()).await?;
-
- new_remote_upid(remote, upid)
-}
-
-#[api(
- input: {
- properties: {
- remote: { schema: REMOTE_ID_SCHEMA },
- node: {
- schema: NODE_SCHEMA,
- optional: true,
- },
- vmid: { schema: VMID_SCHEMA },
- },
- },
- returns: { type: RemoteUpid },
- access: {
- permission: &Permission::Privilege(&["resource", "{remote}", "guest", "{vmid}"], PRIV_RESOURCE_MANAGE, false),
- },
-)]
-/// Stop a remote lxc container.
-pub async fn lxc_stop(
- remote: String,
- node: Option<String>,
- vmid: u32,
-) -> Result<RemoteUpid, Error> {
- let (remotes, _) = pdm_config::remotes::config()?;
-
- let pve = connect_to_remote(&remotes, &remote)?;
-
- let node = find_node_for_vm(node, vmid, pve.as_ref()).await?;
-
- let upid = pve.stop_lxc_async(&node, vmid, Default::default()).await?;
-
- new_remote_upid(remote, upid)
-}
-
-#[api(
- input: {
- properties: {
- remote: { schema: REMOTE_ID_SCHEMA },
- node: {
- schema: NODE_SCHEMA,
- optional: true,
- },
- vmid: { schema: VMID_SCHEMA },
- },
- },
- returns: { type: RemoteUpid },
- access: {
- permission: &Permission::Privilege(&["resource", "{remote}", "guest", "{vmid}"], PRIV_RESOURCE_MANAGE, false),
- },
-)]
-/// Perform a shutdown of a remote lxc container.
-pub async fn lxc_shutdown(
- remote: String,
- node: Option<String>,
- vmid: u32,
-) -> Result<RemoteUpid, Error> {
- let (remotes, _) = pdm_config::remotes::config()?;
-
- let pve = connect_to_remote(&remotes, &remote)?;
-
- let node = find_node_for_vm(node, vmid, pve.as_ref()).await?;
-
- let upid = pve
- .shutdown_lxc_async(&node, vmid, Default::default())
- .await?;
-
- new_remote_upid(remote, upid)
-}
-
-#[api(
- input: {
- properties: {
- remote: { schema: REMOTE_ID_SCHEMA },
- node: {
- schema: NODE_SCHEMA,
- optional: true,
- },
- target: { schema: NODE_SCHEMA },
- vmid: { schema: VMID_SCHEMA },
- online: {
- type: bool,
- description: "Attempt an online migration if the container is running.",
- optional: true,
- },
- restart: {
- type: bool,
- description: "Perform a restart-migration if the container is running.",
- optional: true,
- },
- "target-storage": {
- description: "Mapping of source storages to target storages.",
- optional: true,
- },
- bwlimit: {
- description: "Override I/O bandwidth limit (in KiB/s).",
- optional: true,
- },
- timeout: {
- description: "Shutdown timeout for restart-migrations.",
- optional: true,
- },
- },
- },
- returns: { type: RemoteUpid },
- access: {
- permission: &Permission::And(&[
- &Permission::Privilege(&["resource", "{remote}", "guest", "{vmid}"], PRIV_RESOURCE_MIGRATE, false),
- ]),
- },
-)]
-/// Perform an in-cluster migration of a VM.
-#[allow(clippy::too_many_arguments)]
-pub async fn lxc_migrate(
- remote: String,
- node: Option<String>,
- vmid: u32,
- bwlimit: Option<u64>,
- restart: Option<bool>,
- online: Option<bool>,
- target: String,
- target_storage: Option<String>,
- timeout: Option<i64>,
-) -> Result<RemoteUpid, Error> {
- let bwlimit = bwlimit.map(|n| n as f64);
-
- log::info!("in-cluster migration requested for remote {remote:?} ct {vmid} to node {target:?}");
-
- let (remotes, _) = pdm_config::remotes::config()?;
- let pve = connect_to_remote(&remotes, &remote)?;
-
- let node = find_node_for_vm(node, vmid, pve.as_ref()).await?;
-
- if node == target {
- bail!("refusing migration to the same node");
- }
-
- let params = pve_api_types::MigrateLxc {
- bwlimit,
- online,
- restart,
- target,
- target_storage,
- timeout,
- };
- let upid = pve.migrate_lxc(&node, vmid, params).await?;
-
- new_remote_upid(remote, upid)
-}
-
-#[api(
- input: {
- properties: {
- remote: { schema: REMOTE_ID_SCHEMA },
- target: { schema: REMOTE_ID_SCHEMA },
- node: {
- schema: NODE_SCHEMA,
- optional: true,
- },
- vmid: { schema: VMID_SCHEMA },
- "target-vmid": {
- optional: true,
- schema: VMID_SCHEMA,
- },
- delete: {
- description: "Delete the original VM and related data after successful migration.",
- optional: true,
- default: false,
- },
- online: {
- type: bool,
- description: "Perform an online migration if the vm is running.",
- optional: true,
- default: false,
- },
- "target-storage": {
- description: "Mapping of source storages to target storages.",
- },
- "target-bridge": {
- description: "Mapping of source bridges to remote bridges.",
- },
- bwlimit: {
- description: "Override I/O bandwidth limit (in KiB/s).",
- optional: true,
- },
- restart: {
- description: "Perform a restart-migration.",
- optional: true,
- },
- timeout: {
- description: "Add a shutdown timeout for the restart-migration.",
- optional: true,
- },
- },
- },
- returns: { type: RemoteUpid },
- access: {
- permission:
- &Permission::Privilege(&["resource", "{remote}", "guest", "{vmid}"], PRIV_RESOURCE_MIGRATE, false),
- description: "requires PRIV_RESOURCE_MIGRATE on /resource/{remote}/guest/{vmid} for source and target remove and vmid",
- },
-)]
-/// Perform a remote migration of an lxc container.
-#[allow(clippy::too_many_arguments)]
-pub async fn lxc_remote_migrate(
- remote: String, // this is the source
- target: String, // this is the destination remote name
- node: Option<String>,
- vmid: u32,
- target_vmid: Option<u32>,
- delete: bool,
- online: bool,
- target_storage: String,
- target_bridge: String,
- bwlimit: Option<u64>,
- restart: Option<bool>,
- timeout: Option<i64>,
- rpcenv: &mut dyn RpcEnvironment,
-) -> Result<RemoteUpid, Error> {
- let user_info = CachedUserInfo::new()?;
- let auth_id: Authid = rpcenv
- .get_auth_id()
- .ok_or_else(|| format_err!("no authid available"))?
- .parse()?;
- let target_privs = user_info.lookup_privs(
- &auth_id,
- &[
- "resource",
- &target,
- "guest",
- &target_vmid.unwrap_or(vmid).to_string(),
- ],
- );
- if target_privs & PRIV_RESOURCE_MIGRATE == 0 {
- http_bail!(
- UNAUTHORIZED,
- "missing PRIV_RESOURCE_MIGRATE on target remote+vmid"
- );
- }
- if delete {
- check_guest_delete_perms(rpcenv, &remote, vmid)?;
- }
-
- let source = remote; // let's stick to "source" and "target" naming
-
- log::info!("remote migration requested");
-
- if source == target {
- bail!("source and destination clusters must be different");
- }
-
- let (remotes, _) = pdm_config::remotes::config()?;
- let target = get_remote(&remotes, &target)?;
- let source_conn = connect_to_remote(&remotes, &source)?;
-
- let node = find_node_for_vm(node, vmid, source_conn.as_ref()).await?;
-
- // FIXME: For now we'll only try with the first node but we should probably try others, too, in
- // case some are offline?
-
- let target_node = target
- .nodes
- .first()
- .ok_or_else(|| format_err!("no nodes configured for target cluster"))?;
- let target_host_port: Authority = target_node.hostname.parse()?;
- let mut target_endpoint = format!(
- "host={host},port={port},apitoken=PVEAPIToken={authid}={secret}",
- host = target_host_port.host(),
- authid = target.authid,
- secret = target.token,
- port = target_host_port.port_u16().unwrap_or(8006),
- );
- if let Some(fp) = target_node.fingerprint.as_deref() {
- target_endpoint.reserve(fp.len() + ",fingerprint=".len());
- target_endpoint.push_str(",fingerprint=");
- target_endpoint.push_str(fp);
- }
-
- log::info!("forwarding remote migration requested");
- let params = pve_api_types::RemoteMigrateLxc {
- target_bridge,
- target_storage,
- delete: Some(delete),
- online: Some(online),
- target_vmid,
- target_endpoint,
- bwlimit: bwlimit.map(|limit| limit as f64),
- restart,
- timeout,
- };
- log::info!("migrating vm {vmid} of node {node:?}");
- let upid = source_conn.remote_migrate_lxc(&node, vmid, params).await?;
-
- new_remote_upid(source, upid)
-}
-
#[api(
input: {
properties: {
diff --git a/server/src/api/pve/qemu.rs b/server/src/api/pve/qemu.rs
new file mode 100644
index 0000000..9a67c10
--- /dev/null
+++ b/server/src/api/pve/qemu.rs
@@ -0,0 +1,552 @@
+use anyhow::{bail, format_err, Error};
+use http::uri::Authority;
+
+use proxmox_access_control::CachedUserInfo;
+use proxmox_router::{
+ http_bail, list_subdirs_api_method, Permission, Router, RpcEnvironment, SubdirMap,
+};
+use proxmox_schema::api;
+use proxmox_sortable_macro::sortable;
+
+use pdm_api_types::remotes::REMOTE_ID_SCHEMA;
+use pdm_api_types::{
+ Authid, ConfigurationState, RemoteUpid, CIDR_FORMAT, NODE_SCHEMA, PRIV_RESOURCE_AUDIT,
+ PRIV_RESOURCE_MANAGE, PRIV_RESOURCE_MIGRATE, SNAPSHOT_NAME_SCHEMA, VMID_SCHEMA,
+};
+
+use pve_api_types::{QemuMigratePreconditions, StartQemuMigrationType};
+
+use crate::api::pve::get_remote;
+
+use super::{
+ check_guest_delete_perms, check_guest_list_permissions, check_guest_permissions,
+ connect_to_remote, find_node_for_vm, new_remote_upid,
+};
+
+pub const ROUTER: Router = Router::new()
+ .get(&API_METHOD_LIST_QEMU)
+ .match_all("vmid", &QEMU_VM_ROUTER);
+
+const QEMU_VM_ROUTER: Router = Router::new()
+ .get(&list_subdirs_api_method!(QEMU_VM_SUBDIRS))
+ .subdirs(QEMU_VM_SUBDIRS);
+#[sortable]
+const QEMU_VM_SUBDIRS: SubdirMap = &sorted!([
+ ("config", &Router::new().get(&API_METHOD_QEMU_GET_CONFIG)),
+ ("rrddata", &super::rrddata::QEMU_RRD_ROUTER),
+ ("start", &Router::new().post(&API_METHOD_QEMU_START)),
+ ("status", &Router::new().get(&API_METHOD_QEMU_GET_STATUS)),
+ ("stop", &Router::new().post(&API_METHOD_QEMU_STOP)),
+ ("shutdown", &Router::new().post(&API_METHOD_QEMU_SHUTDOWN)),
+ (
+ "migrate",
+ &Router::new()
+ .get(&API_METHOD_QEMU_MIGRATE_PRECONDITIONS)
+ .post(&API_METHOD_QEMU_MIGRATE)
+ ),
+ (
+ "remote-migrate",
+ &Router::new().post(&API_METHOD_QEMU_REMOTE_MIGRATE)
+ ),
+]);
+
+#[api(
+ input: {
+ properties: {
+ remote: { schema: REMOTE_ID_SCHEMA },
+ node: {
+ schema: NODE_SCHEMA,
+ optional: true,
+ },
+ },
+ },
+ returns: {
+ type: Array,
+ description: "Get a list of VMs",
+ items: { type: pve_api_types::VmEntry },
+ },
+ access: {
+ permission: &Permission::Privilege(&["resource", "{remote}"], PRIV_RESOURCE_AUDIT, false),
+ },
+)]
+/// Query the remote's list of qemu VMs. If no node is provided, all nodes are queried.
+pub async fn list_qemu(
+ remote: String,
+ node: Option<String>,
+ rpcenv: &mut dyn RpcEnvironment,
+) -> Result<Vec<pve_api_types::VmEntry>, Error> {
+ // FIXME: top_level_allowed is always true because of schema check above, replace with Anybody
+ // and fine-grained checks once those are implemented for all API calls.
+ let (auth_id, user_info, top_level_allowed) = check_guest_list_permissions(&remote, rpcenv)?;
+
+ let (remotes, _) = pdm_config::remotes::config()?;
+
+ let pve = connect_to_remote(&remotes, &remote)?;
+
+ let list = if let Some(node) = node {
+ pve.list_qemu(&node, None).await?
+ } else {
+ let mut list = Vec::new();
+ for node in pve.list_nodes().await? {
+ list.extend(pve.list_qemu(&node.node, None).await?);
+ }
+ list
+ };
+
+ if top_level_allowed {
+ return Ok(list);
+ }
+
+ Ok(list
+ .into_iter()
+ .filter(|entry| {
+ check_guest_permissions(
+ &auth_id,
+ &user_info,
+ &remote,
+ PRIV_RESOURCE_AUDIT,
+ entry.vmid,
+ )
+ })
+ .collect())
+}
+
+#[api(
+ input: {
+ properties: {
+ remote: { schema: REMOTE_ID_SCHEMA },
+ node: {
+ schema: NODE_SCHEMA,
+ optional: true,
+ },
+ vmid: { schema: VMID_SCHEMA },
+ state: { type: ConfigurationState },
+ snapshot: {
+ schema: SNAPSHOT_NAME_SCHEMA,
+ optional: true,
+ },
+ },
+ },
+ returns: { type: pve_api_types::QemuConfig },
+ access: {
+ permission: &Permission::Privilege(&["resource", "{remote}", "guest", "{vmid}"], PRIV_RESOURCE_AUDIT, false),
+ },
+)]
+/// Get the configuration of a qemu VM from a remote. If a node is provided, the VM must be on that
+/// node, otherwise the node is determined automatically.
+pub async fn qemu_get_config(
+ remote: String,
+ node: Option<String>,
+ vmid: u32,
+ state: ConfigurationState,
+ snapshot: Option<String>,
+) -> Result<pve_api_types::QemuConfig, Error> {
+ let (remotes, _) = pdm_config::remotes::config()?;
+
+ let pve = connect_to_remote(&remotes, &remote)?;
+
+ let node = find_node_for_vm(node, vmid, pve.as_ref()).await?;
+
+ Ok(pve
+ .qemu_get_config(&node, vmid, state.current(), snapshot)
+ .await?)
+}
+
+#[api(
+ input: {
+ properties: {
+ remote: { schema: REMOTE_ID_SCHEMA },
+ node: {
+ schema: NODE_SCHEMA,
+ optional: true,
+ },
+ vmid: { schema: VMID_SCHEMA },
+ },
+ },
+ returns: { type: pve_api_types::QemuStatus },
+ access: {
+ permission: &Permission::Privilege(&["resource", "{remote}", "guest", "{vmid}"], PRIV_RESOURCE_AUDIT, false),
+ },
+)]
+/// Get the status of a qemu VM from a remote. If a node is provided, the VM must be on that
+/// node, otherwise the node is determined automatically.
+pub async fn qemu_get_status(
+ remote: String,
+ node: Option<String>,
+ vmid: u32,
+) -> Result<pve_api_types::QemuStatus, Error> {
+ let (remotes, _) = pdm_config::remotes::config()?;
+
+ let pve = connect_to_remote(&remotes, &remote)?;
+
+ let node = find_node_for_vm(node, vmid, pve.as_ref()).await?;
+
+ Ok(pve.qemu_get_status(&node, vmid).await?)
+}
+
+#[api(
+ input: {
+ properties: {
+ remote: { schema: REMOTE_ID_SCHEMA },
+ node: {
+ schema: NODE_SCHEMA,
+ optional: true,
+ },
+ vmid: { schema: VMID_SCHEMA },
+ },
+ },
+ returns: { type: RemoteUpid },
+ access: {
+ permission: &Permission::Privilege(&["resource", "{remote}", "guest", "{vmid}"], PRIV_RESOURCE_MANAGE, false),
+ },
+)]
+/// Start a remote qemu vm.
+pub async fn qemu_start(
+ remote: String,
+ node: Option<String>,
+ vmid: u32,
+) -> Result<RemoteUpid, Error> {
+ let (remotes, _) = pdm_config::remotes::config()?;
+
+ let pve = connect_to_remote(&remotes, &remote)?;
+
+ let node = find_node_for_vm(node, vmid, pve.as_ref()).await?;
+
+ let upid = pve
+ .start_qemu_async(&node, vmid, Default::default())
+ .await?;
+
+ new_remote_upid(remote, upid)
+}
+
+#[api(
+ input: {
+ properties: {
+ remote: { schema: REMOTE_ID_SCHEMA },
+ node: {
+ schema: NODE_SCHEMA,
+ optional: true,
+ },
+ vmid: { schema: VMID_SCHEMA },
+ },
+ },
+ returns: { type: RemoteUpid },
+ access: {
+ permission: &Permission::Privilege(&["resource", "{remote}", "guest", "{vmid}"], PRIV_RESOURCE_MANAGE, false),
+ },
+)]
+/// Stop a remote qemu vm.
+pub async fn qemu_stop(
+ remote: String,
+ node: Option<String>,
+ vmid: u32,
+) -> Result<RemoteUpid, Error> {
+ let (remotes, _) = pdm_config::remotes::config()?;
+
+ let pve = connect_to_remote(&remotes, &remote)?;
+
+ let node = find_node_for_vm(node, vmid, pve.as_ref()).await?;
+
+ let upid = pve.stop_qemu_async(&node, vmid, Default::default()).await?;
+
+ new_remote_upid(remote, upid)
+}
+
+#[api(
+ input: {
+ properties: {
+ remote: { schema: REMOTE_ID_SCHEMA },
+ node: {
+ schema: NODE_SCHEMA,
+ optional: true,
+ },
+ vmid: { schema: VMID_SCHEMA },
+ },
+ },
+ returns: { type: RemoteUpid },
+ access: {
+ permission: &Permission::Privilege(&["resource", "{remote}", "guest", "{vmid}"], PRIV_RESOURCE_MANAGE, false),
+ },
+)]
+/// Perform a shutdown of a remote qemu vm.
+pub async fn qemu_shutdown(
+ remote: String,
+ node: Option<String>,
+ vmid: u32,
+) -> Result<RemoteUpid, Error> {
+ let (remotes, _) = pdm_config::remotes::config()?;
+
+ let pve = connect_to_remote(&remotes, &remote)?;
+
+ let node = find_node_for_vm(node, vmid, pve.as_ref()).await?;
+
+ let upid = pve
+ .shutdown_qemu_async(&node, vmid, Default::default())
+ .await?;
+
+ new_remote_upid(remote, upid)
+}
+
+#[api(
+ input: {
+ properties: {
+ remote: { schema: REMOTE_ID_SCHEMA },
+ node: {
+ schema: NODE_SCHEMA,
+ optional: true,
+ },
+ target: { schema: NODE_SCHEMA },
+ vmid: { schema: VMID_SCHEMA },
+ online: {
+ type: bool,
+ description: "Perform an online migration if the vm is running.",
+ optional: true,
+ },
+ "target-storage": {
+ description: "Mapping of source storages to target storages.",
+ optional: true,
+ },
+ bwlimit: {
+ description: "Override I/O bandwidth limit (in KiB/s).",
+ optional: true,
+ },
+ "migration-network": {
+ description: "CIDR of the (sub) network that is used for migration.",
+ type: String,
+ format: &CIDR_FORMAT,
+ optional: true,
+ },
+ "migration-type": {
+ type: StartQemuMigrationType,
+ optional: true,
+ },
+ force: {
+ description: "Allow to migrate VMs with local devices.",
+ optional: true,
+ default: false,
+ },
+ "with-local-disks": {
+ description: "Enable live storage migration for local disks.",
+ optional: true,
+ },
+ },
+ },
+ returns: { type: RemoteUpid },
+ access: {
+ permission: &Permission::And(&[
+ &Permission::Privilege(&["resource", "{remote}", "guest", "{vmid}"], PRIV_RESOURCE_MIGRATE, false),
+ ]),
+ },
+)]
+/// Perform an in-cluster migration of a VM.
+#[allow(clippy::too_many_arguments)]
+pub async fn qemu_migrate(
+ remote: String,
+ node: Option<String>,
+ vmid: u32,
+ bwlimit: Option<u64>,
+ force: Option<bool>,
+ migration_network: Option<String>,
+ migration_type: Option<StartQemuMigrationType>,
+ online: Option<bool>,
+ target: String,
+ target_storage: Option<String>,
+ with_local_disks: Option<bool>,
+) -> Result<RemoteUpid, Error> {
+ log::info!("in-cluster migration requested for remote {remote:?} vm {vmid} to node {target:?}");
+
+ let (remotes, _) = pdm_config::remotes::config()?;
+ let pve = connect_to_remote(&remotes, &remote)?;
+
+ let node = find_node_for_vm(node, vmid, pve.as_ref()).await?;
+
+ if node == target {
+ bail!("refusing migration to the same node");
+ }
+
+ let params = pve_api_types::MigrateQemu {
+ bwlimit,
+ force,
+ migration_network,
+ migration_type,
+ online,
+ target,
+ targetstorage: target_storage,
+ with_local_disks,
+ };
+ let upid = pve.migrate_qemu(&node, vmid, params).await?;
+ //(remote, upid.to_string()).try_into()
+ new_remote_upid(remote, upid)
+}
+
+#[api(
+ input: {
+ properties: {
+ remote: { schema: REMOTE_ID_SCHEMA },
+ node: {
+ schema: NODE_SCHEMA,
+ optional: true,
+ },
+ target: {
+ schema: NODE_SCHEMA,
+ optional: true,
+ },
+ vmid: { schema: VMID_SCHEMA },
+ }
+ },
+ access: {
+ permission: &Permission::And(&[
+ &Permission::Privilege(&["resource", "{remote}", "guest", "{vmid}"], PRIV_RESOURCE_MIGRATE, false),
+ ]),
+ },
+)]
+/// Qemu (local) migrate preconditions
+async fn qemu_migrate_preconditions(
+ remote: String,
+ node: Option<String>,
+ target: Option<String>,
+ vmid: u32,
+) -> Result<QemuMigratePreconditions, Error> {
+ let (remotes, _) = pdm_config::remotes::config()?;
+ let pve = connect_to_remote(&remotes, &remote)?;
+
+ let node = find_node_for_vm(node, vmid, pve.as_ref()).await?;
+
+ let res = pve.qemu_migrate_preconditions(&node, vmid, target).await?;
+ Ok(res)
+}
+
+#[api(
+ input: {
+ properties: {
+ remote: { schema: REMOTE_ID_SCHEMA },
+ target: { schema: REMOTE_ID_SCHEMA },
+ node: {
+ schema: NODE_SCHEMA,
+ optional: true,
+ },
+ vmid: { schema: VMID_SCHEMA },
+ "target-vmid": {
+ optional: true,
+ schema: VMID_SCHEMA,
+ },
+ delete: {
+ description: "Delete the original VM and related data after successful migration.",
+ optional: true,
+ default: false,
+ },
+ online: {
+ type: bool,
+ description: "Perform an online migration if the vm is running.",
+ optional: true,
+ default: false,
+ },
+ "target-storage": {
+ description: "Mapping of source storages to target storages.",
+ },
+ "target-bridge": {
+ description: "Mapping of source bridges to remote bridges.",
+ },
+ bwlimit: {
+ description: "Override I/O bandwidth limit (in KiB/s).",
+ optional: true,
+ }
+ },
+ },
+ returns: { type: RemoteUpid },
+ access: {
+ permission:
+ &Permission::Privilege(&["resource", "{remote}", "guest", "{vmid}"], PRIV_RESOURCE_MIGRATE, false),
+ description: "requires PRIV_RESOURCE_MIGRATE on /resource/{remote}/guest/{vmid} for source and target remote and vmid",
+ },
+)]
+/// Perform a remote migration of a VM.
+#[allow(clippy::too_many_arguments)]
+pub async fn qemu_remote_migrate(
+ remote: String, // this is the source
+ target: String, // this is the destination remote name
+ node: Option<String>,
+ vmid: u32,
+ target_vmid: Option<u32>,
+ delete: bool,
+ online: bool,
+ target_storage: String,
+ target_bridge: String,
+ bwlimit: Option<u64>,
+ rpcenv: &mut dyn RpcEnvironment,
+) -> Result<RemoteUpid, Error> {
+ let user_info = CachedUserInfo::new()?;
+ let auth_id: Authid = rpcenv
+ .get_auth_id()
+ .ok_or_else(|| format_err!("no authid available"))?
+ .parse()?;
+ let target_privs = user_info.lookup_privs(
+ &auth_id,
+ &[
+ "resource",
+ &target,
+ "guest",
+ &target_vmid.unwrap_or(vmid).to_string(),
+ ],
+ );
+ if target_privs & PRIV_RESOURCE_MIGRATE == 0 {
+ http_bail!(
+ UNAUTHORIZED,
+ "missing PRIV_RESOURCE_MIGRATE on target remote+vmid"
+ );
+ }
+
+ if delete {
+ check_guest_delete_perms(rpcenv, &remote, vmid)?;
+ }
+
+ let source = remote; // let's stick to "source" and "target" naming
+
+ log::info!("remote migration requested");
+
+ if source == target {
+ bail!("source and destination clusters must be different");
+ }
+
+ let (remotes, _) = pdm_config::remotes::config()?;
+ let target = get_remote(&remotes, &target)?;
+ let source_conn = connect_to_remote(&remotes, &source)?;
+
+ let node = find_node_for_vm(node, vmid, source_conn.as_ref()).await?;
+
+ // FIXME: For now we'll only try with the first node but we should probably try others, too, in
+ // case some are offline?
+
+ let target_node = target
+ .nodes
+ .first()
+ .ok_or_else(|| format_err!("no nodes configured for target cluster"))?;
+ let target_host_port: Authority = target_node.hostname.parse()?;
+ let mut target_endpoint = format!(
+ "host={host},port={port},apitoken=PVEAPIToken={authid}={secret}",
+ host = target_host_port.host(),
+ authid = target.authid,
+ secret = target.token,
+ port = target_host_port.port_u16().unwrap_or(8006),
+ );
+ if let Some(fp) = target_node.fingerprint.as_deref() {
+ target_endpoint.reserve(fp.len() + ",fingerprint=".len());
+ target_endpoint.push_str(",fingerprint=");
+ target_endpoint.push_str(fp);
+ }
+
+ log::info!("forwarding remote migration requested");
+ let params = pve_api_types::RemoteMigrateQemu {
+ target_bridge,
+ target_storage,
+ delete: Some(delete),
+ online: Some(online),
+ target_vmid,
+ target_endpoint,
+ bwlimit,
+ };
+ log::info!("migrating vm {vmid} of node {node:?}");
+ let upid = source_conn.remote_migrate_qemu(&node, vmid, params).await?;
+
+ (source, upid.to_string()).try_into()
+}
--
2.39.5
_______________________________________________
pdm-devel mailing list
pdm-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pdm-devel
^ permalink raw reply [flat|nested] 13+ messages in thread
* [pdm-devel] [PATCH datacenter-manager 2/9] server: api: fix remote upid tracking for qemu remote migration
2025-01-13 15:45 [pdm-devel] [PATCH proxmox-api-types/datacenter-manager] remote migration: make target endpoint selectable Dominik Csapak
` (2 preceding siblings ...)
2025-01-13 15:45 ` [pdm-devel] [PATCH datacenter-manager 1/9] server: factor qemu/lxc code into own modules Dominik Csapak
@ 2025-01-13 15:45 ` Dominik Csapak
2025-01-13 15:45 ` [pdm-devel] [PATCH datacenter-manager 3/9] server: connection: add new function that allows for explicit endpoint Dominik Csapak
` (7 subsequent siblings)
11 siblings, 0 replies; 13+ messages in thread
From: Dominik Csapak @ 2025-01-13 15:45 UTC (permalink / raw)
To: pdm-devel
this was missing. While at it, remove an old leftover comment.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
server/src/api/pve/qemu.rs | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/server/src/api/pve/qemu.rs b/server/src/api/pve/qemu.rs
index 9a67c10..335c332 100644
--- a/server/src/api/pve/qemu.rs
+++ b/server/src/api/pve/qemu.rs
@@ -375,7 +375,7 @@ pub async fn qemu_migrate(
with_local_disks,
};
let upid = pve.migrate_qemu(&node, vmid, params).await?;
- //(remote, upid.to_string()).try_into()
+
new_remote_upid(remote, upid)
}
@@ -548,5 +548,5 @@ pub async fn qemu_remote_migrate(
log::info!("migrating vm {vmid} of node {node:?}");
let upid = source_conn.remote_migrate_qemu(&node, vmid, params).await?;
- (source, upid.to_string()).try_into()
+ new_remote_upid(source, upid)
}
--
2.39.5
* [pdm-devel] [PATCH datacenter-manager 3/9] server: connection: add new function that allows for explicit endpoint
2025-01-13 15:45 [pdm-devel] [PATCH proxmox-api-types/datacenter-manager] remote migration: make target endpoint selectable Dominik Csapak
` (3 preceding siblings ...)
2025-01-13 15:45 ` [pdm-devel] [PATCH datacenter-manager 2/9] server: api: fix remote upid tracking for qemu remote migration Dominik Csapak
@ 2025-01-13 15:45 ` Dominik Csapak
2025-01-13 15:45 ` [pdm-devel] [PATCH datacenter-manager 4/9] server: api: add target-endpoint parameter to remote migrate api calls Dominik Csapak
` (6 subsequent siblings)
11 siblings, 0 replies; 13+ messages in thread
From: Dominik Csapak @ 2025-01-13 15:45 UTC (permalink / raw)
To: pdm-devel
sometimes it's necessary to connect to a specific configured endpoint
instead of letting the code choose one automatically, so add a
function for that.
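The selection logic added below boils down to an iterator `find` with a fallback error. A minimal standalone sketch, using a hypothetical `Node` stand-in for the actual pdm remote node type:

```rust
// Hypothetical stand-in for a remote's configured node entry.
struct Node {
    hostname: String,
}

// Pick the node matching `target_endpoint`, or the first node if none given.
fn select_node<'a>(nodes: &'a [Node], target_endpoint: Option<&str>) -> Result<&'a Node, String> {
    nodes
        .iter()
        .find(|node| match target_endpoint {
            Some(target) => target == node.hostname,
            None => true,
        })
        .ok_or_else(|| match target_endpoint {
            Some(endpoint) => format!("{endpoint} not configured for remote"),
            None => "no nodes configured for remote".to_string(),
        })
}

fn main() {
    let nodes = vec![
        Node { hostname: "pve1.example".into() },
        Node { hostname: "pve2.example".into() },
    ];
    // no target given: the first configured node wins
    assert_eq!(select_node(&nodes, None).unwrap().hostname, "pve1.example");
    // explicit target: exact hostname match
    assert_eq!(select_node(&nodes, Some("pve2.example")).unwrap().hostname, "pve2.example");
    // unknown target: error instead of silent fallback
    assert!(select_node(&nodes, Some("pve3.example")).is_err());
}
```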
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
server/src/connection.rs | 61 ++++++++++++++++++++++++++++++++--------
1 file changed, 49 insertions(+), 12 deletions(-)
diff --git a/server/src/connection.rs b/server/src/connection.rs
index 8c97b4c..0adeba2 100644
--- a/server/src/connection.rs
+++ b/server/src/connection.rs
@@ -26,11 +26,21 @@ struct ConnectInfo {
/// Returns a [`proxmox_client::Client`] and a token prefix for the specified
/// [`pdm_api_types::Remote`]
-fn prepare_connect_client(remote: &Remote) -> Result<ConnectInfo, Error> {
+fn prepare_connect_client(
+ remote: &Remote,
+ target_endpoint: Option<&str>,
+) -> Result<ConnectInfo, Error> {
let node = remote
.nodes
- .first()
- .ok_or_else(|| format_err!("no nodes configured for remote"))?;
+ .iter()
+ .find(|endpoint| match target_endpoint {
+ Some(target) => target == endpoint.hostname,
+ None => true,
+ })
+ .ok_or_else(|| match target_endpoint {
+ Some(endpoint) => format_err!("{endpoint} not configured for remote"),
+ None => format_err!("no nodes configured for remote"),
+ })?;
let mut options = TlsOptions::default();
if let Some(fp) = &node.fingerprint {
@@ -66,12 +76,12 @@ fn prepare_connect_client(remote: &Remote) -> Result<ConnectInfo, Error> {
///
/// It does not actually opens a connection there, but prepares the client with the correct
/// authentication information and settings for the [`RemoteType`]
-fn connect(remote: &Remote) -> Result<Client, anyhow::Error> {
+fn connect(remote: &Remote, target_endpoint: Option<&str>) -> Result<Client, anyhow::Error> {
let ConnectInfo {
client,
perl_compat,
prefix,
- } = prepare_connect_client(remote)?;
+ } = prepare_connect_client(remote, target_endpoint)?;
client.set_authentication(proxmox_client::Token {
userid: remote.authid.to_string(),
prefix,
@@ -91,11 +101,14 @@ fn connect(remote: &Remote) -> Result<Client, anyhow::Error> {
/// This is intended for API calls that accept a user in addition to tokens.
///
/// Note: currently does not support two factor authentication.
-async fn connect_or_login(remote: &Remote) -> Result<Client, anyhow::Error> {
+async fn connect_or_login(
+ remote: &Remote,
+ target_endpoint: Option<&str>,
+) -> Result<Client, anyhow::Error> {
if remote.authid.is_token() {
- connect(remote)
+ connect(remote, target_endpoint)
} else {
- let info = prepare_connect_client(remote)?;
+ let info = prepare_connect_client(remote, target_endpoint)?;
let client = info.client;
match client
.login(proxmox_login::Login::new(
@@ -131,6 +144,13 @@ pub trait ClientFactory {
/// Create a new API client for PBS remotes
fn make_pbs_client(&self, remote: &Remote) -> Result<Box<PbsClient>, Error>;
+ /// Create a new API client for PVE remotes, but with a specific endpoint.
+ fn make_pve_client_with_endpoint(
+ &self,
+ remote: &Remote,
+ target_endpoint: Option<&str>,
+ ) -> Result<Box<dyn PveClient + Send + Sync>, Error>;
+
/// Create a new API client for PVE remotes.
///
/// In case the remote has a user configured (instead of an API token), it will connect and get
@@ -163,25 +183,34 @@ pub struct DefaultClientFactory;
#[async_trait::async_trait]
impl ClientFactory for DefaultClientFactory {
fn make_pve_client(&self, remote: &Remote) -> Result<Box<dyn PveClient + Send + Sync>, Error> {
- let client = crate::connection::connect(remote)?;
+ let client = crate::connection::connect(remote, None)?;
Ok(Box::new(PveClientImpl(client)))
}
fn make_pbs_client(&self, remote: &Remote) -> Result<Box<PbsClient>, Error> {
- let client = crate::connection::connect(remote)?;
+ let client = crate::connection::connect(remote, None)?;
Ok(Box::new(PbsClient(client)))
}
+ fn make_pve_client_with_endpoint(
+ &self,
+ remote: &Remote,
+ target_endpoint: Option<&str>,
+ ) -> Result<Box<dyn PveClient + Send + Sync>, Error> {
+ let client = crate::connection::connect(remote, target_endpoint)?;
+ Ok(Box::new(PveClientImpl(client)))
+ }
+
async fn make_pve_client_and_login(
&self,
remote: &Remote,
) -> Result<Box<dyn PveClient + Send + Sync>, Error> {
- let client = connect_or_login(remote).await?;
+ let client = connect_or_login(remote, None).await?;
Ok(Box::new(PveClientImpl(client)))
}
async fn make_pbs_client_and_login(&self, remote: &Remote) -> Result<Box<PbsClient>, Error> {
- let client = connect_or_login(remote).await?;
+ let client = connect_or_login(remote, None).await?;
Ok(Box::new(PbsClient(client)))
}
}
@@ -201,6 +230,14 @@ pub fn make_pve_client(remote: &Remote) -> Result<Box<dyn PveClient + Send + Syn
instance().make_pve_client(remote)
}
+/// Create a new API client for PVE remotes, but for a specific endpoint
+pub fn make_pve_client_with_endpoint(
+ remote: &Remote,
+ target_endpoint: Option<&str>,
+) -> Result<Box<dyn PveClient + Send + Sync>, Error> {
+ instance().make_pve_client_with_endpoint(remote, target_endpoint)
+}
+
/// Create a new API client for PBS remotes
pub fn make_pbs_client(remote: &Remote) -> Result<Box<PbsClient>, Error> {
instance().make_pbs_client(remote)
--
2.39.5
* [pdm-devel] [PATCH datacenter-manager 4/9] server: api: add target-endpoint parameter to remote migrate api calls
2025-01-13 15:45 [pdm-devel] [PATCH proxmox-api-types/datacenter-manager] remote migration: make target endpoint selectable Dominik Csapak
` (4 preceding siblings ...)
2025-01-13 15:45 ` [pdm-devel] [PATCH datacenter-manager 3/9] server: connection: add new function that allows for explicit endpoint Dominik Csapak
@ 2025-01-13 15:45 ` Dominik Csapak
2025-01-13 15:45 ` [pdm-devel] [PATCH datacenter-manager 5/9] server: api: pve: add remote cluster-status api call Dominik Csapak
` (5 subsequent siblings)
11 siblings, 0 replies; 13+ messages in thread
From: Dominik Csapak @ 2025-01-13 15:45 UTC (permalink / raw)
To: pdm-devel
so we can explicitly control which endpoint the remote migration
connects to. The endpoint must still be part of the remote
configuration.
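The selected endpoint ends up in the `target-endpoint` parameter string passed to PVE, built the same way in the qemu and lxc variants below. A small sketch of that string construction (hostnames and credentials here are illustrative only):

```rust
// Build the PVE remote-migrate `target-endpoint` parameter string,
// appending the optional TLS fingerprint when one is configured.
fn build_target_endpoint(
    host: &str,
    port: u16,
    authid: &str,
    secret: &str,
    fingerprint: Option<&str>,
) -> String {
    let mut endpoint = format!("host={host},port={port},apitoken=PVEAPIToken={authid}={secret}");
    if let Some(fp) = fingerprint {
        endpoint.push_str(",fingerprint=");
        endpoint.push_str(fp);
    }
    endpoint
}

fn main() {
    let ep = build_target_endpoint("pve2.example", 8006, "root@pam!pdm", "s3cr3t", None);
    assert_eq!(ep, "host=pve2.example,port=8006,apitoken=PVEAPIToken=root@pam!pdm=s3cr3t");

    let ep = build_target_endpoint("pve2.example", 8006, "root@pam!pdm", "s3cr3t", Some("AA:BB"));
    assert!(ep.ends_with(",fingerprint=AA:BB"));
}
```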
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
server/src/api/pve/lxc.rs | 19 +++++++++++++++++--
server/src/api/pve/qemu.rs | 21 ++++++++++++++++++---
2 files changed, 35 insertions(+), 5 deletions(-)
diff --git a/server/src/api/pve/lxc.rs b/server/src/api/pve/lxc.rs
index b16d268..f1c3142 100644
--- a/server/src/api/pve/lxc.rs
+++ b/server/src/api/pve/lxc.rs
@@ -403,6 +403,12 @@ pub async fn lxc_migrate(
description: "Add a shutdown timeout for the restart-migration.",
optional: true,
},
+ // TODO better to change remote migration to proxy to node?
+ "target-endpoint": {
+ type: String,
+ optional: true,
+ description: "The target endpoint to use for the connection.",
+ },
},
},
returns: { type: RemoteUpid },
@@ -427,6 +433,7 @@ pub async fn lxc_remote_migrate(
bwlimit: Option<u64>,
restart: Option<bool>,
timeout: Option<i64>,
+ target_endpoint: Option<String>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<RemoteUpid, Error> {
let user_info = CachedUserInfo::new()?;
@@ -472,8 +479,16 @@ pub async fn lxc_remote_migrate(
let target_node = target
.nodes
- .first()
- .ok_or_else(|| format_err!("no nodes configured for target cluster"))?;
+ .iter()
+ .find(|endpoint| match target_endpoint.as_deref() {
+ Some(target) => target == endpoint.hostname,
+ None => true,
+ })
+ .ok_or_else(|| match target_endpoint {
+ Some(endpoint) => format_err!("{endpoint} not configured for target cluster"),
+ None => format_err!("no nodes configured for target cluster"),
+ })?;
+
let target_host_port: Authority = target_node.hostname.parse()?;
let mut target_endpoint = format!(
"host={host},port={port},apitoken=PVEAPIToken={authid}={secret}",
diff --git a/server/src/api/pve/qemu.rs b/server/src/api/pve/qemu.rs
index 335c332..dea0550 100644
--- a/server/src/api/pve/qemu.rs
+++ b/server/src/api/pve/qemu.rs
@@ -450,7 +450,13 @@ async fn qemu_migrate_preconditions(
bwlimit: {
description: "Override I/O bandwidth limit (in KiB/s).",
optional: true,
- }
+ },
+ // TODO better to change remote migration to proxy to node?
+ "target-endpoint": {
+ type: String,
+ optional: true,
+ description: "The target endpoint to use for the connection.",
+ },
},
},
returns: { type: RemoteUpid },
@@ -473,6 +479,7 @@ pub async fn qemu_remote_migrate(
target_storage: String,
target_bridge: String,
bwlimit: Option<u64>,
+ target_endpoint: Option<String>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<RemoteUpid, Error> {
let user_info = CachedUserInfo::new()?;
@@ -519,8 +526,16 @@ pub async fn qemu_remote_migrate(
let target_node = target
.nodes
- .first()
- .ok_or_else(|| format_err!("no nodes configured for target cluster"))?;
+ .iter()
+ .find(|endpoint| match target_endpoint.as_deref() {
+ Some(target) => target == endpoint.hostname,
+ None => true,
+ })
+ .ok_or_else(|| match target_endpoint {
+ Some(endpoint) => format_err!("{endpoint} not configured for target cluster"),
+ None => format_err!("no nodes configured for target cluster"),
+ })?;
+
let target_host_port: Authority = target_node.hostname.parse()?;
let mut target_endpoint = format!(
"host={host},port={port},apitoken=PVEAPIToken={authid}={secret}",
--
2.39.5
* [pdm-devel] [PATCH datacenter-manager 5/9] server: api: pve: add remote cluster-status api call
2025-01-13 15:45 [pdm-devel] [PATCH proxmox-api-types/datacenter-manager] remote migration: make target endpoint selectable Dominik Csapak
` (5 preceding siblings ...)
2025-01-13 15:45 ` [pdm-devel] [PATCH datacenter-manager 4/9] server: api: add target-endpoint parameter to remote migrate api calls Dominik Csapak
@ 2025-01-13 15:45 ` Dominik Csapak
2025-01-13 15:45 ` [pdm-devel] [PATCH datacenter-manager 6/9] pdm-client: add cluster status method Dominik Csapak
` (4 subsequent siblings)
11 siblings, 0 replies; 13+ messages in thread
From: Dominik Csapak @ 2025-01-13 15:45 UTC (permalink / raw)
To: pdm-devel
so we can query the cluster status. This is currently useful for
determining the nodename of a target endpoint, since the cluster
status contains that information. For this reason we also include an
explicit target-endpoint parameter.
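Resolving an endpoint's nodename from the status list amounts to picking the entry the queried node marks as `local`. A minimal sketch, with a hypothetical stand-in for `pve_api_types::ClusterNodeStatus`:

```rust
// Hypothetical stand-in for pve_api_types::ClusterNodeStatus.
struct ClusterNodeStatus {
    name: String,
    local: Option<bool>,
}

// The queried endpoint reports exactly one entry with `local: true`;
// its `name` is the nodename of that endpoint.
fn local_nodename(status_list: &[ClusterNodeStatus]) -> Option<&str> {
    status_list
        .iter()
        .find(|status| status.local.unwrap_or(false))
        .map(|status| status.name.as_str())
}

fn main() {
    let list = vec![
        ClusterNodeStatus { name: "pve1".into(), local: Some(false) },
        ClusterNodeStatus { name: "pve2".into(), local: Some(true) },
    ];
    assert_eq!(local_nodename(&list), Some("pve2"));
    assert_eq!(local_nodename(&[]), None);
}
```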
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
server/src/api/pve/mod.rs | 41 ++++++++++++++++++++++++++++++++++++++-
1 file changed, 40 insertions(+), 1 deletion(-)
diff --git a/server/src/api/pve/mod.rs b/server/src/api/pve/mod.rs
index 48e16b2..2cefbb4 100644
--- a/server/src/api/pve/mod.rs
+++ b/server/src/api/pve/mod.rs
@@ -21,7 +21,9 @@ use pdm_api_types::{
};
use pve_api_types::client::PveClient;
-use pve_api_types::{ClusterResourceKind, ClusterResourceType, ListRealm, PveUpid};
+use pve_api_types::{
+ ClusterNodeStatus, ClusterResourceKind, ClusterResourceType, ListRealm, PveUpid,
+};
use super::resources::{map_pve_lxc, map_pve_node, map_pve_qemu, map_pve_storage};
@@ -59,6 +61,7 @@ const REMOTE_SUBDIRS: SubdirMap = &sorted!([
("nodes", &NODES_ROUTER),
("qemu", &qemu::ROUTER),
("resources", &RESOURCES_ROUTER),
+ ("cluster-status", &STATUS_ROUTER),
("tasks", &tasks::ROUTER),
]);
@@ -68,6 +71,8 @@ const NODES_ROUTER: Router = Router::new()
const RESOURCES_ROUTER: Router = Router::new().get(&API_METHOD_CLUSTER_RESOURCES);
+const STATUS_ROUTER: Router = Router::new().get(&API_METHOD_CLUSTER_STATUS);
+
// converts a remote + PveUpid into a RemoteUpid and starts tracking it
fn new_remote_upid(remote: String, upid: PveUpid) -> Result<RemoteUpid, Error> {
let remote_upid: RemoteUpid = (remote, upid.to_string()).try_into()?;
@@ -175,6 +180,40 @@ pub async fn cluster_resources(
Ok(cluster_resources.collect())
}
+#[api(
+ input: {
+ properties: {
+ remote: { schema: REMOTE_ID_SCHEMA },
+ "target-endpoint": {
+ type: String,
+ optional: true,
+ description: "The target endpoint to use for the connection.",
+ },
+ },
+ },
+ returns: {
+ type: Array,
+ description: "Cluster status of all nodes.",
+ items: { type: ClusterNodeStatus },
+ },
+ access: {
+ permission: &Permission::Privilege(&["resource", "{remote}"], PRIV_RESOURCE_AUDIT, false),
+ },
+)]
+/// Query the cluster nodes status.
+// FIXME: Use more fine grained permissions and filter on:
+// - `/resource/{remote-id}/{resource-type=node}/{resource-id}`
+pub async fn cluster_status(
+ remote: String,
+ target_endpoint: Option<String>,
+) -> Result<Vec<ClusterNodeStatus>, Error> {
+ let (remotes, _) = pdm_config::remotes::config()?;
+ let remote = get_remote(&remotes, &remote)?;
+ let client = connection::make_pve_client_with_endpoint(remote, target_endpoint.as_deref())?;
+ let status = client.cluster_status().await?;
+ Ok(status)
+}
+
fn map_pve_resource(remote: &str, resource: pve_api_types::ClusterResource) -> Option<PveResource> {
match resource.ty {
ClusterResourceType::Node => map_pve_node(remote, resource).map(PveResource::Node),
--
2.39.5
* [pdm-devel] [PATCH datacenter-manager 6/9] pdm-client: add cluster status method
2025-01-13 15:45 [pdm-devel] [PATCH proxmox-api-types/datacenter-manager] remote migration: make target endpoint selectable Dominik Csapak
` (6 preceding siblings ...)
2025-01-13 15:45 ` [pdm-devel] [PATCH datacenter-manager 5/9] server: api: pve: add remote cluster-status api call Dominik Csapak
@ 2025-01-13 15:45 ` Dominik Csapak
2025-01-13 15:45 ` [pdm-devel] [PATCH datacenter-manager 7/9] pdm-client: add target-endpoint parameter to remote migration methods Dominik Csapak
` (3 subsequent siblings)
11 siblings, 0 replies; 13+ messages in thread
From: Dominik Csapak @ 2025-01-13 15:45 UTC (permalink / raw)
To: pdm-devel
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
lib/pdm-client/src/lib.rs | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/lib/pdm-client/src/lib.rs b/lib/pdm-client/src/lib.rs
index 14e6fc8..4ef560e 100644
--- a/lib/pdm-client/src/lib.rs
+++ b/lib/pdm-client/src/lib.rs
@@ -52,6 +52,8 @@ pub mod types {
};
pub use pve_api_types::ListRealm;
+
+ pub use pve_api_types::ClusterNodeStatus;
}
pub struct PdmClient<T: HttpApiClient>(pub T);
@@ -347,6 +349,16 @@ impl<T: HttpApiClient> PdmClient<T> {
Ok(self.0.get(&query).await?.expect_json()?.data)
}
+ pub async fn pve_cluster_status(
+ &self,
+ remote: &str,
+ target_endpoint: Option<&str>,
+ ) -> Result<Vec<ClusterNodeStatus>, Error> {
+ let mut query = format!("/api2/extjs/pve/remotes/{remote}/cluster-status");
+ add_query_arg(&mut query, &mut '?', "target-endpoint", &target_endpoint);
+ Ok(self.0.get(&query).await?.expect_json()?.data)
+ }
+
pub async fn pve_list_qemu(
&self,
remote: &str,
--
2.39.5
* [pdm-devel] [PATCH datacenter-manager 7/9] pdm-client: add target-endpoint parameter to remote migration methods
2025-01-13 15:45 [pdm-devel] [PATCH proxmox-api-types/datacenter-manager] remote migration: make target endpoint selectable Dominik Csapak
` (7 preceding siblings ...)
2025-01-13 15:45 ` [pdm-devel] [PATCH datacenter-manager 6/9] pdm-client: add cluster status method Dominik Csapak
@ 2025-01-13 15:45 ` Dominik Csapak
2025-01-13 15:45 ` [pdm-devel] [PATCH datacenter-manager 8/9] ui: widget: add remote endpoint selector Dominik Csapak
` (2 subsequent siblings)
11 siblings, 0 replies; 13+ messages in thread
From: Dominik Csapak @ 2025-01-13 15:45 UTC (permalink / raw)
To: pdm-devel
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
lib/pdm-client/src/lib.rs | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/lib/pdm-client/src/lib.rs b/lib/pdm-client/src/lib.rs
index 4ef560e..1253ded 100644
--- a/lib/pdm-client/src/lib.rs
+++ b/lib/pdm-client/src/lib.rs
@@ -488,6 +488,7 @@ impl<T: HttpApiClient> PdmClient<T> {
node: Option<&str>,
vmid: u32,
target: String,
+ target_endpoint: Option<&str>,
params: RemoteMigrateQemu,
) -> Result<RemoteUpid, Error> {
let path = format!("/api2/extjs/pve/remotes/{remote}/qemu/{vmid}/remote-migrate");
@@ -496,6 +497,9 @@ impl<T: HttpApiClient> PdmClient<T> {
if let Some(node) = node {
request["node"] = node.into();
}
+ if let Some(target_endpoint) = target_endpoint {
+ request["target-endpoint"] = target_endpoint.into();
+ }
Ok(self.0.post(&path, &request).await?.expect_json()?.data)
}
@@ -581,6 +585,7 @@ impl<T: HttpApiClient> PdmClient<T> {
node: Option<&str>,
vmid: u32,
target: String,
+ target_endpoint: Option<&str>,
params: RemoteMigrateLxc,
) -> Result<RemoteUpid, Error> {
let path = format!("/api2/extjs/pve/remotes/{remote}/lxc/{vmid}/remote-migrate");
@@ -589,6 +594,9 @@ impl<T: HttpApiClient> PdmClient<T> {
if let Some(node) = node {
request["node"] = node.into();
}
+ if let Some(target_endpoint) = target_endpoint {
+ request["target-endpoint"] = target_endpoint.into();
+ }
Ok(self.0.post(&path, &request).await?.expect_json()?.data)
}
--
2.39.5
* [pdm-devel] [PATCH datacenter-manager 8/9] ui: widget: add remote endpoint selector
2025-01-13 15:45 [pdm-devel] [PATCH proxmox-api-types/datacenter-manager] remote migration: make target endpoint selectable Dominik Csapak
` (8 preceding siblings ...)
2025-01-13 15:45 ` [pdm-devel] [PATCH datacenter-manager 7/9] pdm-client: add target-endpoint parameter to remote migration methods Dominik Csapak
@ 2025-01-13 15:45 ` Dominik Csapak
2025-01-13 15:45 ` [pdm-devel] [PATCH datacenter-manager 9/9] ui: migrate: make target endpoint selectable for remote migration Dominik Csapak
2025-01-14 9:35 ` [pdm-devel] [PATCH proxmox-api-types/datacenter-manager] remote migration: make target endpoint selectable Dietmar Maurer
11 siblings, 0 replies; 13+ messages in thread
From: Dominik Csapak @ 2025-01-13 15:45 UTC (permalink / raw)
To: pdm-devel
this is a widget to select a specific endpoint, showing the hostname as
the value. This can be useful in situations where we want to explicitly
select an endpoint, e.g. remote migration.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
ui/src/widget/mod.rs | 2 +
ui/src/widget/remote_endpoint_selector.rs | 103 ++++++++++++++++++++++
2 files changed, 105 insertions(+)
create mode 100644 ui/src/widget/remote_endpoint_selector.rs
diff --git a/ui/src/widget/mod.rs b/ui/src/widget/mod.rs
index b885d1b..ee9e799 100644
--- a/ui/src/widget/mod.rs
+++ b/ui/src/widget/mod.rs
@@ -21,3 +21,5 @@ pub use search_box::SearchBox;
mod remote_selector;
pub use remote_selector::RemoteSelector;
+
+mod remote_endpoint_selector;
diff --git a/ui/src/widget/remote_endpoint_selector.rs b/ui/src/widget/remote_endpoint_selector.rs
new file mode 100644
index 0000000..779d98c
--- /dev/null
+++ b/ui/src/widget/remote_endpoint_selector.rs
@@ -0,0 +1,103 @@
+use std::rc::Rc;
+
+use wasm_bindgen::UnwrapThrowExt;
+use yew::{
+ html::{IntoEventCallback, IntoPropValue},
+ AttrValue, Callback, Component, Properties,
+};
+
+use pwt::{
+ props::{FieldBuilder, WidgetBuilder},
+ widget::form::Combobox,
+};
+use pwt_macros::{builder, widget};
+
+use crate::RemoteList;
+
+#[widget(comp=PdmEndpointSelector, @input)]
+#[derive(Clone, Properties, PartialEq)]
+#[builder]
+pub struct EndpointSelector {
+ /// The default value
+ #[builder(IntoPropValue, into_prop_value)]
+ #[prop_or_default]
+ pub default: Option<AttrValue>,
+
+ /// Change callback
+ #[builder_cb(IntoEventCallback, into_event_callback, String)]
+ #[prop_or_default]
+ pub on_change: Option<Callback<String>>,
+
+ /// The remote to list Endpoints from
+ #[builder(IntoPropValue, into_prop_value)]
+ #[prop_or_default]
+ pub remote: AttrValue,
+}
+
+impl EndpointSelector {
+ pub fn new(remote: AttrValue) -> Self {
+ yew::props!(Self { remote })
+ }
+}
+
+pub struct PdmEndpointSelector {
+ endpoints: Rc<Vec<AttrValue>>,
+}
+
+impl PdmEndpointSelector {
+ fn update_endpoint_list(&mut self, ctx: &yew::Context<Self>) {
+ let (remotes, _): (RemoteList, _) = ctx
+ .link()
+ .context(ctx.link().callback(|_| ()))
+ .unwrap_throw();
+
+ let remote_id = ctx.props().remote.as_str();
+
+ for remote in remotes.iter() {
+ if remote.id != remote_id {
+ continue;
+ }
+
+ let endpoints = remote
+ .nodes
+ .iter()
+ .map(|endpoint| AttrValue::from(endpoint.hostname.clone()))
+ .collect();
+ self.endpoints = Rc::new(endpoints);
+ break;
+ }
+ }
+}
+
+impl Component for PdmEndpointSelector {
+ type Message = ();
+ type Properties = EndpointSelector;
+
+ fn create(ctx: &yew::Context<Self>) -> Self {
+ let mut this = Self {
+ endpoints: Rc::new(Vec::new()),
+ };
+
+ this.update_endpoint_list(ctx);
+ this
+ }
+
+ fn changed(&mut self, ctx: &yew::Context<Self>, old_props: &Self::Properties) -> bool {
+ if ctx.props().remote != old_props.remote {
+ log::info!("{} {}", ctx.props().remote, old_props.remote);
+ self.update_endpoint_list(ctx);
+ }
+ true
+ }
+
+ fn view(&self, ctx: &yew::Context<Self>) -> yew::Html {
+ let props = ctx.props();
+ Combobox::new()
+ .with_std_props(&props.std_props)
+ .with_input_props(&props.input_props)
+ .on_change(props.on_change.clone())
+ .default(props.default.clone())
+ .items(self.endpoints.clone())
+ .into()
+ }
+}
--
2.39.5
* [pdm-devel] [PATCH datacenter-manager 9/9] ui: migrate: make target endpoint selectable for remote migration
2025-01-13 15:45 [pdm-devel] [PATCH proxmox-api-types/datacenter-manager] remote migration: make target endpoint selectable Dominik Csapak
` (9 preceding siblings ...)
2025-01-13 15:45 ` [pdm-devel] [PATCH datacenter-manager 8/9] ui: widget: add remote endpoint selector Dominik Csapak
@ 2025-01-13 15:45 ` Dominik Csapak
2025-01-14 9:35 ` [pdm-devel] [PATCH proxmox-api-types/datacenter-manager] remote migration: make target endpoint selectable Dietmar Maurer
11 siblings, 0 replies; 13+ messages in thread
From: Dominik Csapak @ 2025-01-13 15:45 UTC (permalink / raw)
To: pdm-devel
by showing a target endpoint selector instead of a target node selector
when the migration is a remote one.
So that the user can properly select the target storage/network, we
query the nodename of the target endpoint and update the
storage/network selectors accordingly.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
ui/src/widget/migrate_window.rs | 68 ++++++++++++++++++++++++++++++++-
1 file changed, 66 insertions(+), 2 deletions(-)
diff --git a/ui/src/widget/migrate_window.rs b/ui/src/widget/migrate_window.rs
index c87cf9a..7214ff4 100644
--- a/ui/src/widget/migrate_window.rs
+++ b/ui/src/widget/migrate_window.rs
@@ -23,6 +23,7 @@ use pdm_client::{MigrateLxc, MigrateQemu, RemoteMigrateLxc, RemoteMigrateQemu};
use crate::pve::GuestInfo;
use crate::pve::GuestType;
+use super::remote_endpoint_selector::EndpointSelector;
use super::{
PveMigrateMap, PveNetworkSelector, PveNodeSelector, PveStorageSelector, RemoteSelector,
};
@@ -62,6 +63,8 @@ impl MigrateWindow {
pub enum Msg {
RemoteChange(String),
+ EndpointChange(String),
+ NodenameResult(Result<String, proxmox_client::Error>),
Result(RemoteUpid),
LoadPreconditions(Option<AttrValue>),
PreconditionResult(Result<QemuMigratePreconditions, proxmox_client::Error>),
@@ -71,9 +74,29 @@ pub struct PdmMigrateWindow {
target_remote: AttrValue,
_async_pool: AsyncPool,
preconditions: Option<QemuMigratePreconditions>,
+ target_node: Option<AttrValue>,
}
impl PdmMigrateWindow {
+ async fn get_nodename(
+ remote: AttrValue,
+ target_endpoint: AttrValue,
+ ) -> Result<String, proxmox_client::Error> {
+ let status_list = crate::pdm_client()
+ .pve_cluster_status(&remote, Some(&target_endpoint))
+ .await?;
+
+ for status in status_list {
+ if status.local.unwrap_or(false) {
+ return Ok(status.name);
+ }
+ }
+
+ Err(proxmox_client::Error::Other(
+ "could not find local nodename",
+ ))
+ }
+
async fn load_preconditions(
remote: String,
guest_info: GuestInfo,
@@ -132,6 +155,7 @@ impl PdmMigrateWindow {
let target_remote = value["remote"].as_str().unwrap_or_default();
let upid = if target_remote != remote {
+ let target_endpoint = value.get("target-endpoint").and_then(|e| e.as_str());
match guest_info.guest_type {
crate::pve::GuestType::Qemu => {
let mut migrate_opts = RemoteMigrateQemu::new()
@@ -174,6 +198,7 @@ impl PdmMigrateWindow {
None,
guest_info.vmid,
target_remote.to_string(),
+ target_endpoint,
migrate_opts,
)
.await?
@@ -218,6 +243,7 @@ impl PdmMigrateWindow {
None,
guest_info.vmid,
target_remote.to_string(),
+ target_endpoint,
migrate_opts,
)
.await?
@@ -266,10 +292,12 @@ impl PdmMigrateWindow {
source_remote: AttrValue,
guest_info: GuestInfo,
preconditions: Option<QemuMigratePreconditions>,
+ target_node: Option<AttrValue>,
) -> Html {
let same_remote = target_remote == source_remote;
if !same_remote {
- form_ctx.write().set_field_value("node", "".into());
+ let node = target_node.unwrap_or_default().to_string();
+ form_ctx.write().set_field_value("node", node.into());
}
let detail_mode = form_ctx.read().get_field_checked("detailed-mode");
let mut uses_local_disks = false;
@@ -336,13 +364,27 @@ impl PdmMigrateWindow {
tr!("Mode"),
DisplayField::new("").name("migrate-mode").key("mode"),
)
- .with_right_field(
+ .with_field_and_options(
+ pwt::widget::FieldPosition::Right,
+ false,
+ !same_remote,
tr!("Target Node"),
PveNodeSelector::new(target_remote.clone())
.name("node")
.required(same_remote)
.on_change(link.callback(Msg::LoadPreconditions))
.disabled(!same_remote),
+ )
+ .with_field_and_options(
+ pwt::widget::FieldPosition::Right,
+ false,
+ same_remote,
+ tr!("Target Endpoint"),
+ EndpointSelector::new(target_remote.clone())
+ .placeholder(tr!("Automatic"))
+ .name("target-endpoint")
+ .on_change(link.callback(Msg::EndpointChange))
+ .disabled(same_remote),
);
if !same_remote || uses_local_disks || uses_local_resources {
@@ -460,6 +502,7 @@ impl Component for PdmMigrateWindow {
target_remote: ctx.props().remote.clone(),
_async_pool: AsyncPool::new(),
preconditions: None,
+ target_node: None,
}
}
@@ -501,6 +544,25 @@ impl Component for PdmMigrateWindow {
}
true
}
+ Msg::EndpointChange(endpoint) => {
+ let remote = self.target_remote.clone();
+ self._async_pool
+ .send_future(ctx.link().clone(), async move {
+ let res = Self::get_nodename(remote, endpoint.into()).await;
+ Msg::NodenameResult(res)
+ });
+ false
+ }
+ Msg::NodenameResult(result) => match result {
+ Ok(nodename) => {
+ self.target_node = Some(nodename.into());
+ true
+ }
+ Err(err) => {
+ log::error!("could not extract nodename from endpoint: {err}");
+ false
+ }
+ },
}
}
@@ -525,6 +587,7 @@ impl Component for PdmMigrateWindow {
let source_remote = ctx.props().remote.clone();
let link = ctx.link().clone();
let preconditions = self.preconditions.clone();
+ let target_node = self.target_node.clone();
move |form| {
Self::input_panel(
&link,
@@ -533,6 +596,7 @@ impl Component for PdmMigrateWindow {
source_remote.clone(),
guest_info,
preconditions.clone(),
+ target_node.clone(),
)
}
})
--
2.39.5
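[Editorial note: the `get_nodename` helper introduced above reduces to scanning the cluster-status list for the entry flagged as local. The following is a minimal standalone sketch of just that selection logic; `StatusEntry` is a simplified stand-in for the generated `pve-api-types` cluster-status type, not the real struct.]

```rust
// Simplified stand-in for the generated cluster-status entry type
// (the real type lives in pve-api-types' generated code).
#[derive(Debug, Clone)]
struct StatusEntry {
    name: String,
    local: Option<bool>,
}

// Return the name of the node the queried endpoint reports as local,
// mirroring the loop in PdmMigrateWindow::get_nodename: a missing
// `local` flag is treated as "not local".
fn local_nodename(status_list: &[StatusEntry]) -> Option<String> {
    status_list
        .iter()
        .find(|s| s.local.unwrap_or(false))
        .map(|s| s.name.clone())
}

fn main() {
    let list = vec![
        StatusEntry { name: "pve1".into(), local: Some(false) },
        StatusEntry { name: "pve2".into(), local: Some(true) },
    ];
    // The second entry is flagged local, so its name is returned.
    assert_eq!(local_nodename(&list).as_deref(), Some("pve2"));
    // An empty or all-remote list yields None, which the patch maps
    // to the "could not find local nodename" error.
    assert_eq!(local_nodename(&[]), None);
}
```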
_______________________________________________
pdm-devel mailing list
pdm-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pdm-devel
* Re: [pdm-devel] [PATCH proxmox-api-types/datacenter-manager] remote migration: make target endpoint selectable
From: Dietmar Maurer @ 2025-01-14 9:35 UTC (permalink / raw)
To: Proxmox Datacenter Manager development discussion, Dominik Csapak
Applied up to patch 5/9. The rest produce build failures...