From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 13 Sep 2024 12:08:17 +0200 (CEST)
From: Fabian Grünbichler
To: Proxmox VE development discussion
Cc: Jing Luo
Message-ID: <834633851.29781.1726222097292@webmail.proxmox.com>
Subject: [pve-devel] NACK: [PATCH] test: remove logs and add a .gitignore file

> Jing Luo via pve-devel wrote on 12.09.2024 13:49 CEST:
> Throughout the years, 3 log files were committed to the git repo. Let's
> remove those and add a .gitignore file.

this is not super obvious (and maybe should be made so ;)), but those log
files are not accidentally committed - they are recordings of the expected
output of replication.
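in other words, these are golden files. just to illustrate the mechanism (this is NOT the actual pve-manager test code, which is Perl - names and paths here are made up), the check effectively works like this:

```python
# Hypothetical sketch of an expected-output ("golden file") check: each
# test run produces a replication log, and the test fails if it no
# longer matches the recording committed to git.
from pathlib import Path


def check_replication_log(test_name: str, generated: str,
                          test_dir: Path = Path("test")) -> None:
    expected_file = test_dir / f"{test_name}.log"
    if not expected_file.exists():
        # no recording yet: write the output as the new expected log
        expected_file.write_text(generated)
        return
    expected = expected_file.read_text()
    if generated != expected:
        raise AssertionError(
            f"replication output for '{test_name}' changed - "
            f"update {expected_file} only if the change is intended"
        )
```

so deleting the committed logs doesn't clean anything up, it just silently re-records whatever the current code produces as the new baseline.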
removing them just means a test run will recreate them; their actual reason
for existing is that if the replication behaviour or output changes, the
generated logs will no longer match and the pve-manager build will fail.

> Signed-off-by: Jing Luo
> ---
>  test/.gitignore            |  1 +
>  test/replication_test4.log | 25 ---------------
>  test/replication_test5.log | 64 --------------------------------------
>  test/replication_test6.log |  8 -----
>  4 files changed, 1 insertion(+), 97 deletions(-)
>  create mode 100644 test/.gitignore
>  delete mode 100644 test/replication_test4.log
>  delete mode 100644 test/replication_test5.log
>  delete mode 100644 test/replication_test6.log
>
> diff --git a/test/.gitignore b/test/.gitignore
> new file mode 100644
> index 00000000..397b4a76
> --- /dev/null
> +++ b/test/.gitignore
> @@ -0,0 +1 @@
> +*.log
> diff --git a/test/replication_test4.log b/test/replication_test4.log
> deleted file mode 100644
> index caefa0de..00000000
> --- a/test/replication_test4.log
> +++ /dev/null
> @@ -1,25 +0,0 @@
> -1000 job_900_to_node2: new job next_sync => 900
> -1000 job_900_to_node2: start replication job
> -1000 job_900_to_node2: end replication job with error: faked replication error
> -1000 job_900_to_node2: changed config next_sync => 1300
> -1000 job_900_to_node2: changed state last_node => node1, last_try => 1000, fail_count => 1, error => faked replication error
> -1300 job_900_to_node2: start replication job
> -1300 job_900_to_node2: end replication job with error: faked replication error
> -1300 job_900_to_node2: changed config next_sync => 1900
> -1300 job_900_to_node2: changed state last_try => 1300, fail_count => 2
> -1900 job_900_to_node2: start replication job
> -1900 job_900_to_node2: end replication job with error: faked replication error
> -1900 job_900_to_node2: changed config next_sync => 2800
> -1900 job_900_to_node2: changed state last_try => 1900, fail_count => 3
> -2800 job_900_to_node2: start replication job
> -2800 job_900_to_node2: end replication job with error: faked replication error
> -2800 job_900_to_node2: changed config next_sync => 4600
> -2800 job_900_to_node2: changed state last_try => 2800, fail_count => 4
> -4600 job_900_to_node2: start replication job
> -4600 job_900_to_node2: end replication job with error: faked replication error
> -4600 job_900_to_node2: changed config next_sync => 6400
> -4600 job_900_to_node2: changed state last_try => 4600, fail_count => 5
> -6400 job_900_to_node2: start replication job
> -6400 job_900_to_node2: end replication job with error: faked replication error
> -6400 job_900_to_node2: changed config next_sync => 8200
> -6400 job_900_to_node2: changed state last_try => 6400, fail_count => 6
> diff --git a/test/replication_test5.log b/test/replication_test5.log
> deleted file mode 100644
> index 928feca3..00000000
> --- a/test/replication_test5.log
> +++ /dev/null
> @@ -1,64 +0,0 @@
> -1000 job_900_to_node2: new job next_sync => 900
> -1000 job_900_to_node2: start replication job
> -1000 job_900_to_node2: guest => VM 900, running => 0
> -1000 job_900_to_node2: volumes => local-zfs:vm-900-disk-1
> -1000 job_900_to_node2: create snapshot '__replicate_job_900_to_node2_1000__' on local-zfs:vm-900-disk-1
> -1000 job_900_to_node2: using secure transmission, rate limit: none
> -1000 job_900_to_node2: full sync 'local-zfs:vm-900-disk-1' (__replicate_job_900_to_node2_1000__)
> -1000 job_900_to_node2: end replication job
> -1000 job_900_to_node2: changed config next_sync => 1800
> -1000 job_900_to_node2: changed state last_node => node1, last_try => 1000, last_sync => 1000
> -1000 job_900_to_node2: changed storeid list local-zfs
> -1840 job_900_to_node2: start replication job
> -1840 job_900_to_node2: guest => VM 900, running => 0
> -1840 job_900_to_node2: volumes => local-zfs:vm-900-disk-1
> -1840 job_900_to_node2: create snapshot '__replicate_job_900_to_node2_1840__' on local-zfs:vm-900-disk-1
> -1840 job_900_to_node2: using secure transmission, rate limit: none
> -1840 job_900_to_node2: incremental sync 'local-zfs:vm-900-disk-1' (__replicate_job_900_to_node2_1000__ => __replicate_job_900_to_node2_1840__)
> -1840 job_900_to_node2: delete previous replication snapshot '__replicate_job_900_to_node2_1000__' on local-zfs:vm-900-disk-1
> -1840 job_900_to_node2: end replication job
> -1840 job_900_to_node2: changed config next_sync => 2700
> -1840 job_900_to_node2: changed state last_try => 1840, last_sync => 1840
> -2740 job_900_to_node2: start replication job
> -2740 job_900_to_node2: guest => VM 900, running => 0
> -2740 job_900_to_node2: volumes => local-zfs:vm-900-disk-1,local-zfs:vm-900-disk-2
> -2740 job_900_to_node2: create snapshot '__replicate_job_900_to_node2_2740__' on local-zfs:vm-900-disk-1
> -2740 job_900_to_node2: create snapshot '__replicate_job_900_to_node2_2740__' on local-zfs:vm-900-disk-2
> -2740 job_900_to_node2: delete previous replication snapshot '__replicate_job_900_to_node2_2740__' on local-zfs:vm-900-disk-1
> -2740 job_900_to_node2: end replication job with error: no such volid 'local-zfs:vm-900-disk-2'
> -2740 job_900_to_node2: changed config next_sync => 3040
> -2740 job_900_to_node2: changed state last_try => 2740, fail_count => 1, error => no such volid 'local-zfs:vm-900-disk-2'
> -3040 job_900_to_node2: start replication job
> -3040 job_900_to_node2: guest => VM 900, running => 0
> -3040 job_900_to_node2: volumes => local-zfs:vm-900-disk-1,local-zfs:vm-900-disk-2
> -3040 job_900_to_node2: create snapshot '__replicate_job_900_to_node2_3040__' on local-zfs:vm-900-disk-1
> -3040 job_900_to_node2: create snapshot '__replicate_job_900_to_node2_3040__' on local-zfs:vm-900-disk-2
> -3040 job_900_to_node2: using secure transmission, rate limit: none
> -3040 job_900_to_node2: incremental sync 'local-zfs:vm-900-disk-1' (__replicate_job_900_to_node2_1840__ => __replicate_job_900_to_node2_3040__)
> -3040 job_900_to_node2: full sync 'local-zfs:vm-900-disk-2' (__replicate_job_900_to_node2_3040__)
> -3040 job_900_to_node2: delete previous replication snapshot '__replicate_job_900_to_node2_1840__' on local-zfs:vm-900-disk-1
> -3040 job_900_to_node2: end replication job
> -3040 job_900_to_node2: changed config next_sync => 3600
> -3040 job_900_to_node2: changed state last_try => 3040, last_sync => 3040, fail_count => 0, error =>
> -3640 job_900_to_node2: start replication job
> -3640 job_900_to_node2: guest => VM 900, running => 0
> -3640 job_900_to_node2: volumes => local-zfs:vm-900-disk-1,local-zfs:vm-900-disk-2
> -3640 job_900_to_node2: create snapshot '__replicate_job_900_to_node2_3640__' on local-zfs:vm-900-disk-1
> -3640 job_900_to_node2: create snapshot '__replicate_job_900_to_node2_3640__' on local-zfs:vm-900-disk-2
> -3640 job_900_to_node2: using secure transmission, rate limit: none
> -3640 job_900_to_node2: incremental sync 'local-zfs:vm-900-disk-1' (__replicate_job_900_to_node2_3040__ => __replicate_job_900_to_node2_3640__)
> -3640 job_900_to_node2: incremental sync 'local-zfs:vm-900-disk-2' (__replicate_job_900_to_node2_3040__ => __replicate_job_900_to_node2_3640__)
> -3640 job_900_to_node2: delete previous replication snapshot '__replicate_job_900_to_node2_3040__' on local-zfs:vm-900-disk-1
> -3640 job_900_to_node2: delete previous replication snapshot '__replicate_job_900_to_node2_3040__' on local-zfs:vm-900-disk-2
> -3640 job_900_to_node2: end replication job
> -3640 job_900_to_node2: changed config next_sync => 4500
> -3640 job_900_to_node2: changed state last_try => 3640, last_sync => 3640
> -3700 job_900_to_node2: start replication job
> -3700 job_900_to_node2: guest => VM 900, running => 0
> -3700 job_900_to_node2: volumes => local-zfs:vm-900-disk-1,local-zfs:vm-900-disk-2
> -3700 job_900_to_node2: start job removal - mode 'full'
> -3700 job_900_to_node2: delete stale replication snapshot '__replicate_job_900_to_node2_3640__' on local-zfs:vm-900-disk-1
> -3700 job_900_to_node2: delete stale replication snapshot '__replicate_job_900_to_node2_3640__' on local-zfs:vm-900-disk-2
> -3700 job_900_to_node2: job removed
> -3700 job_900_to_node2: end replication job
> -3700 job_900_to_node2: vanished job
> diff --git a/test/replication_test6.log b/test/replication_test6.log
> deleted file mode 100644
> index 91754544..00000000
> --- a/test/replication_test6.log
> +++ /dev/null
> @@ -1,8 +0,0 @@
> -1000 job_900_to_node1: new job next_sync => 1
> -1000 job_900_to_node1: start replication job
> -1000 job_900_to_node1: guest => VM 900, running => 0
> -1000 job_900_to_node1: volumes => local-zfs:vm-900-disk-1
> -1000 job_900_to_node1: start job removal - mode 'full'
> -1000 job_900_to_node1: job removed
> -1000 job_900_to_node1: end replication job
> -1000 job_900_to_node1: vanished job
> --
> 2.46.0

_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel