From: Thomas Lamprecht <t.lamprecht@proxmox.com>
To: Proxmox Backup Server development discussion
<pbs-devel@lists.proxmox.com>,
Hannes Laimer <h.laimer@proxmox.com>
Subject: Re: [pbs-devel] [PATCH proxmox-backup v5 1/6] api: make Remote for SyncJob optional
Date: Wed, 8 Nov 2023 11:53:21 +0100
Message-ID: <ad1b8d68-4f52-40a2-86bb-dfe1b1e425bd@proxmox.com>
In-Reply-To: <20231006140529.723988-2-h.laimer@proxmox.com>
On 06/10/2023 16:05, Hannes Laimer wrote:
> diff --git a/src/api2/pull.rs b/src/api2/pull.rs
> index daeba7cf..9ed83046 100644
> --- a/src/api2/pull.rs
> +++ b/src/api2/pull.rs
> @@ -65,7 +75,7 @@ impl TryFrom<&SyncJobConfig> for PullParameters {
> PullParameters::new(
> &sync_job.store,
> sync_job.ns.clone().unwrap_or_default(),
> - &sync_job.remote,
> + sync_job.remote.as_deref().unwrap_or("local"),
isn't this still the same issue that Wolfgang asked about in his reviews of
v2 and v3? (Please be a bit more communicative and reply to reviews on the
list too, otherwise it's hard to tell whether you overlooked something or
whether it's actually OK the way it is...)
Faking a remote ID is probably never a good idea; should this stay None here?
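To illustrate the Option-vs-sentinel point in isolation (standalone sketch,
not the actual PullParameters API, just the pattern I'd prefer):

    // standalone sketch: keeping the source optional avoids a sentinel
    // string that later code might mistake for a real remote name
    struct PullSource<'a> {
        remote: Option<&'a str>, // None means "pull from a local datastore"
        store: &'a str,
    }

    fn source_label(src: &PullSource) -> String {
        match src.remote {
            Some(remote) => format!("{remote}/{}", src.store),
            None => src.store.to_string(),
        }
    }

    fn main() {
        let local = PullSource { remote: None, store: "backup1" };
        let remote = PullSource { remote: Some("pbs2"), store: "backup1" };
        assert_eq!(source_label(&local), "backup1");
        assert_eq!(source_label(&remote), "pbs2/backup1");
    }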
But I actually checked the remaining code, and commit 4/6 ("pull: refactor
pulling from a datastore") fixes this, so the question is: why add API
support before it can even work correctly? Shouldn't this commit be ordered
after the aforementioned refactor one and the "pull: add support for pulling
from local datastore" one, avoiding such intermediate "bugs" and making
review easier thanks to a more causal order?
> &sync_job.remote_store,
> sync_job.remote_ns.clone().unwrap_or_default(),
> sync_job
> @@ -91,7 +101,7 @@ pub fn do_sync_job(
> ) -> Result<String, Error> {
> let job_id = format!(
> "{}:{}:{}:{}:{}",
> - sync_job.remote,
> + sync_job.remote.as_deref().unwrap_or("-"),
> sync_job.remote_store,
> sync_job.store,
> sync_job.ns.clone().unwrap_or_default(),
> @@ -99,6 +109,13 @@ pub fn do_sync_job(
> );
> let worker_type = job.jobtype().to_string();
>
> + if sync_job.remote.is_none()
> + && sync_job.store == sync_job.remote_store
> + && sync_job.ns == sync_job.remote_ns
I'd disallow syncing to the same store even if the namespaces differ, as
they might still overlap and it's IMO just an odd use case.
If anything, we should allow moving groups and namespaces around inside the
same datastore as separate functionality, without any jobs or syncing
involved, but that would be unrelated to this series anyway.
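I.e., roughly (untested, in the context of the hunk above):

    // rough sketch of what I mean: reject any sync without a remote that
    // targets the same datastore, independent of the namespaces involved
    if sync_job.remote.is_none() && sync_job.store == sync_job.remote_store {
        bail!(
            "can't sync datastore '{}' into itself, even across namespaces",
            sync_job.store
        );
    }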
> + {
> + bail!("can't sync, source equals the target");
> + }
> +
> let (email, notify) = crate::server::lookup_datastore_notify_settings(&sync_job.store);
>
> let upid_str = WorkerTask::spawn(
> @@ -122,13 +139,33 @@ pub fn do_sync_job(
> }
> task_log!(
> worker,
> - "sync datastore '{}' from '{}/{}'",
> + "sync datastore '{}' from '{}{}'",
> sync_job.store,
> - sync_job.remote,
> + sync_job
> + .remote
> + .as_deref()
> + .map_or(String::new(), |remote| format!("{remote}/")),
nit: might be nicer to pull this out via a `let source_prefix = match ...`
statement up front.
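E.g. (untested, just to illustrate the nit):

    // compute the prefix once up front instead of inline in the macro call
    let source_prefix = match sync_job.remote.as_deref() {
        Some(remote) => format!("{remote}/"),
        None => String::new(),
    };
    task_log!(
        worker,
        "sync datastore '{}' from '{}{}'",
        sync_job.store,
        source_prefix,
        sync_job.remote_store,
    );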
> sync_job.remote_store,
> );
>
> - pull_store(&worker, &client, pull_params).await?;
> + if sync_job.remote.is_some() {
> + pull_store(&worker, &client, pull_params).await?;
> + } else {
> + if let (Some(target_ns), Some(source_ns)) = (sync_job.ns, sync_job.remote_ns) {
> + if target_ns.path().starts_with(source_ns.path())
> + && sync_job.store == sync_job.remote_store
> + && sync_job.max_depth.map_or(true, |sync_depth| {
> + target_ns.depth() + sync_depth > MAX_NAMESPACE_DEPTH
> + }) {
> + task_log!(
> + worker,
> + "Can't sync namespace into one of its sub-namespaces, would exceed maximum namespace depth, skipping"
> + );
> + }
> + } else {
> + pull_store(&worker, &client, pull_params).await?;
> + }
> + }
>
> task_log!(worker, "sync job '{}' end", &job_id);
>
> @@ -256,7 +294,7 @@ async fn pull(
> let pull_params = PullParameters::new(
> &store,
> ns,
> - &remote,
> + remote.as_deref().unwrap_or("local"),
same here w.r.t. the intermediate bug (or at least the hard-to-follow commit order)
> &remote_store,
> remote_ns.unwrap_or_default(),
> auth_id.clone(),