public inbox for pve-devel@lists.proxmox.com
From: Fiona Ebner <f.ebner@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH manager] Revert "tests: update expected replication log output"
Date: Thu, 18 Apr 2024 09:06:48 +0200	[thread overview]
Message-ID: <20240418070648.16462-2-f.ebner@proxmox.com> (raw)
In-Reply-To: <20240418070648.16462-1-f.ebner@proxmox.com>

This reverts commit 3a259c22e64ff22049856256a1dad643439c79ef.

There was an oversight in the recent replication fixes that led to
attempting to remove snapshots that do not exist (in more scenarios
than before). While this has no real consequences, the resulting log
messages are confusing to users.
This has since been fixed by pve-guest-common commit "replication:
snapshot cleanup: only attempt to remove snapshots that exist".

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---

Build-depends on new pve-guest-common.
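The idea behind the pve-guest-common fix referenced above can be sketched as
follows. This is an illustrative Python sketch, not the actual Perl
implementation; the function name `cleanup_replication_snapshots` and its
arguments are hypothetical. It shows the guard-before-delete pattern: query
the snapshots that actually exist first, and only issue removals for those,
instead of blindly attempting deletion and emitting a confusing
"delete previous replication snapshot" log line.

```python
# Illustrative sketch (hypothetical names, not the pve-guest-common code):
# only schedule deletion for candidate snapshots that actually exist.

def cleanup_replication_snapshots(existing_snapshots, candidates):
    """Return the subset of candidate snapshot names to delete,
    i.e. only those actually present on the volume."""
    existing = set(existing_snapshots)
    to_delete = []
    for name in candidates:
        if name in existing:  # guard: skip snapshots that do not exist
            # the real code would call into the storage layer here
            to_delete.append(name)
    return to_delete

# Only the snapshot that exists is scheduled for removal:
print(cleanup_replication_snapshots(
    ["__replicate_job_900_to_node2_1840__"],
    ["__replicate_job_900_to_node2_0__",
     "__replicate_job_900_to_node2_1840__"],
))
```

With this guard in place, the two "delete previous replication snapshot"
lines removed from the expected test output below are no longer logged.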

 test/replication_test5.log | 2 --
 1 file changed, 2 deletions(-)

diff --git a/test/replication_test5.log b/test/replication_test5.log
index 196d0fd0..928feca3 100644
--- a/test/replication_test5.log
+++ b/test/replication_test5.log
@@ -5,7 +5,6 @@
 1000 job_900_to_node2: create snapshot '__replicate_job_900_to_node2_1000__' on local-zfs:vm-900-disk-1
 1000 job_900_to_node2: using secure transmission, rate limit: none
 1000 job_900_to_node2: full sync 'local-zfs:vm-900-disk-1' (__replicate_job_900_to_node2_1000__)
-1000 job_900_to_node2: delete previous replication snapshot '__replicate_job_900_to_node2_0__' on local-zfs:vm-900-disk-1
 1000 job_900_to_node2: end replication job
 1000 job_900_to_node2: changed config next_sync => 1800
 1000 job_900_to_node2: changed state last_node => node1, last_try => 1000, last_sync => 1000
@@ -38,7 +37,6 @@
 3040 job_900_to_node2: incremental sync 'local-zfs:vm-900-disk-1' (__replicate_job_900_to_node2_1840__ => __replicate_job_900_to_node2_3040__)
 3040 job_900_to_node2: full sync 'local-zfs:vm-900-disk-2' (__replicate_job_900_to_node2_3040__)
 3040 job_900_to_node2: delete previous replication snapshot '__replicate_job_900_to_node2_1840__' on local-zfs:vm-900-disk-1
-3040 job_900_to_node2: delete previous replication snapshot '__replicate_job_900_to_node2_1840__' on local-zfs:vm-900-disk-2
 3040 job_900_to_node2: end replication job
 3040 job_900_to_node2: changed config next_sync => 3600
 3040 job_900_to_node2: changed state last_try => 3040, last_sync => 3040, fail_count => 0, error => 
-- 
2.39.2



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Thread overview: 3+ messages
2024-04-18  7:06 [pve-devel] [PATCH guest-common] replication: snapshot cleanup: only attempt to remove snapshots that exist Fiona Ebner
2024-04-18  7:06 ` Fiona Ebner [this message]
2024-04-18  8:23 ` [pve-devel] applied: " Thomas Lamprecht
