public inbox for pve-devel@lists.proxmox.com
From: Jing Luo via pve-devel <pve-devel@lists.proxmox.com>
To: pve-devel@lists.proxmox.com
Cc: Jing Luo <jing@jing.rocks>
Subject: [pve-devel] [PATCH] test: remove logs and add a .gitignore file
Date: Thu, 12 Sep 2024 20:49:59 +0900	[thread overview]
Message-ID: <mailman.233.1726153631.414.pve-devel@lists.proxmox.com> (raw)

Message-ID: <20240912115047.1252907-1-jing@jing.rocks>

Throughout the years, 3 log files have been committed to the git repo. Let's
remove those and add a .gitignore file.

Signed-off-by: Jing Luo <jing@jing.rocks>
---
 test/.gitignore            |  1 +
 test/replication_test4.log | 25 ---------------
 test/replication_test5.log | 64 --------------------------------------
 test/replication_test6.log |  8 -----
 4 files changed, 1 insertion(+), 97 deletions(-)
 create mode 100644 test/.gitignore
 delete mode 100644 test/replication_test4.log
 delete mode 100644 test/replication_test5.log
 delete mode 100644 test/replication_test6.log

diff --git a/test/.gitignore b/test/.gitignore
new file mode 100644
index 00000000..397b4a76
--- /dev/null
+++ b/test/.gitignore
@@ -0,0 +1 @@
+*.log
diff --git a/test/replication_test4.log b/test/replication_test4.log
deleted file mode 100644
index caefa0de..00000000
--- a/test/replication_test4.log
+++ /dev/null
@@ -1,25 +0,0 @@
-1000 job_900_to_node2: new job next_sync => 900
-1000 job_900_to_node2: start replication job
-1000 job_900_to_node2: end replication job with error: faked replication error
-1000 job_900_to_node2: changed config next_sync => 1300
-1000 job_900_to_node2: changed state last_node => node1, last_try => 1000, fail_count => 1, error => faked replication error
-1300 job_900_to_node2: start replication job
-1300 job_900_to_node2: end replication job with error: faked replication error
-1300 job_900_to_node2: changed config next_sync => 1900
-1300 job_900_to_node2: changed state last_try => 1300, fail_count => 2
-1900 job_900_to_node2: start replication job
-1900 job_900_to_node2: end replication job with error: faked replication error
-1900 job_900_to_node2: changed config next_sync => 2800
-1900 job_900_to_node2: changed state last_try => 1900, fail_count => 3
-2800 job_900_to_node2: start replication job
-2800 job_900_to_node2: end replication job with error: faked replication error
-2800 job_900_to_node2: changed config next_sync => 4600
-2800 job_900_to_node2: changed state last_try => 2800, fail_count => 4
-4600 job_900_to_node2: start replication job
-4600 job_900_to_node2: end replication job with error: faked replication error
-4600 job_900_to_node2: changed config next_sync => 6400
-4600 job_900_to_node2: changed state last_try => 4600, fail_count => 5
-6400 job_900_to_node2: start replication job
-6400 job_900_to_node2: end replication job with error: faked replication error
-6400 job_900_to_node2: changed config next_sync => 8200
-6400 job_900_to_node2: changed state last_try => 6400, fail_count => 6
diff --git a/test/replication_test5.log b/test/replication_test5.log
deleted file mode 100644
index 928feca3..00000000
--- a/test/replication_test5.log
+++ /dev/null
@@ -1,64 +0,0 @@
-1000 job_900_to_node2: new job next_sync => 900
-1000 job_900_to_node2: start replication job
-1000 job_900_to_node2: guest => VM 900, running => 0
-1000 job_900_to_node2: volumes => local-zfs:vm-900-disk-1
-1000 job_900_to_node2: create snapshot '__replicate_job_900_to_node2_1000__' on local-zfs:vm-900-disk-1
-1000 job_900_to_node2: using secure transmission, rate limit: none
-1000 job_900_to_node2: full sync 'local-zfs:vm-900-disk-1' (__replicate_job_900_to_node2_1000__)
-1000 job_900_to_node2: end replication job
-1000 job_900_to_node2: changed config next_sync => 1800
-1000 job_900_to_node2: changed state last_node => node1, last_try => 1000, last_sync => 1000
-1000 job_900_to_node2: changed storeid list local-zfs
-1840 job_900_to_node2: start replication job
-1840 job_900_to_node2: guest => VM 900, running => 0
-1840 job_900_to_node2: volumes => local-zfs:vm-900-disk-1
-1840 job_900_to_node2: create snapshot '__replicate_job_900_to_node2_1840__' on local-zfs:vm-900-disk-1
-1840 job_900_to_node2: using secure transmission, rate limit: none
-1840 job_900_to_node2: incremental sync 'local-zfs:vm-900-disk-1' (__replicate_job_900_to_node2_1000__ => __replicate_job_900_to_node2_1840__)
-1840 job_900_to_node2: delete previous replication snapshot '__replicate_job_900_to_node2_1000__' on local-zfs:vm-900-disk-1
-1840 job_900_to_node2: end replication job
-1840 job_900_to_node2: changed config next_sync => 2700
-1840 job_900_to_node2: changed state last_try => 1840, last_sync => 1840
-2740 job_900_to_node2: start replication job
-2740 job_900_to_node2: guest => VM 900, running => 0
-2740 job_900_to_node2: volumes => local-zfs:vm-900-disk-1,local-zfs:vm-900-disk-2
-2740 job_900_to_node2: create snapshot '__replicate_job_900_to_node2_2740__' on local-zfs:vm-900-disk-1
-2740 job_900_to_node2: create snapshot '__replicate_job_900_to_node2_2740__' on local-zfs:vm-900-disk-2
-2740 job_900_to_node2: delete previous replication snapshot '__replicate_job_900_to_node2_2740__' on local-zfs:vm-900-disk-1
-2740 job_900_to_node2: end replication job with error: no such volid 'local-zfs:vm-900-disk-2'
-2740 job_900_to_node2: changed config next_sync => 3040
-2740 job_900_to_node2: changed state last_try => 2740, fail_count => 1, error => no such volid 'local-zfs:vm-900-disk-2'
-3040 job_900_to_node2: start replication job
-3040 job_900_to_node2: guest => VM 900, running => 0
-3040 job_900_to_node2: volumes => local-zfs:vm-900-disk-1,local-zfs:vm-900-disk-2
-3040 job_900_to_node2: create snapshot '__replicate_job_900_to_node2_3040__' on local-zfs:vm-900-disk-1
-3040 job_900_to_node2: create snapshot '__replicate_job_900_to_node2_3040__' on local-zfs:vm-900-disk-2
-3040 job_900_to_node2: using secure transmission, rate limit: none
-3040 job_900_to_node2: incremental sync 'local-zfs:vm-900-disk-1' (__replicate_job_900_to_node2_1840__ => __replicate_job_900_to_node2_3040__)
-3040 job_900_to_node2: full sync 'local-zfs:vm-900-disk-2' (__replicate_job_900_to_node2_3040__)
-3040 job_900_to_node2: delete previous replication snapshot '__replicate_job_900_to_node2_1840__' on local-zfs:vm-900-disk-1
-3040 job_900_to_node2: end replication job
-3040 job_900_to_node2: changed config next_sync => 3600
-3040 job_900_to_node2: changed state last_try => 3040, last_sync => 3040, fail_count => 0, error => 
-3640 job_900_to_node2: start replication job
-3640 job_900_to_node2: guest => VM 900, running => 0
-3640 job_900_to_node2: volumes => local-zfs:vm-900-disk-1,local-zfs:vm-900-disk-2
-3640 job_900_to_node2: create snapshot '__replicate_job_900_to_node2_3640__' on local-zfs:vm-900-disk-1
-3640 job_900_to_node2: create snapshot '__replicate_job_900_to_node2_3640__' on local-zfs:vm-900-disk-2
-3640 job_900_to_node2: using secure transmission, rate limit: none
-3640 job_900_to_node2: incremental sync 'local-zfs:vm-900-disk-1' (__replicate_job_900_to_node2_3040__ => __replicate_job_900_to_node2_3640__)
-3640 job_900_to_node2: incremental sync 'local-zfs:vm-900-disk-2' (__replicate_job_900_to_node2_3040__ => __replicate_job_900_to_node2_3640__)
-3640 job_900_to_node2: delete previous replication snapshot '__replicate_job_900_to_node2_3040__' on local-zfs:vm-900-disk-1
-3640 job_900_to_node2: delete previous replication snapshot '__replicate_job_900_to_node2_3040__' on local-zfs:vm-900-disk-2
-3640 job_900_to_node2: end replication job
-3640 job_900_to_node2: changed config next_sync => 4500
-3640 job_900_to_node2: changed state last_try => 3640, last_sync => 3640
-3700 job_900_to_node2: start replication job
-3700 job_900_to_node2: guest => VM 900, running => 0
-3700 job_900_to_node2: volumes => local-zfs:vm-900-disk-1,local-zfs:vm-900-disk-2
-3700 job_900_to_node2: start job removal - mode 'full'
-3700 job_900_to_node2: delete stale replication snapshot '__replicate_job_900_to_node2_3640__' on local-zfs:vm-900-disk-1
-3700 job_900_to_node2: delete stale replication snapshot '__replicate_job_900_to_node2_3640__' on local-zfs:vm-900-disk-2
-3700 job_900_to_node2: job removed
-3700 job_900_to_node2: end replication job
-3700 job_900_to_node2: vanished job
diff --git a/test/replication_test6.log b/test/replication_test6.log
deleted file mode 100644
index 91754544..00000000
--- a/test/replication_test6.log
+++ /dev/null
@@ -1,8 +0,0 @@
-1000 job_900_to_node1: new job next_sync => 1
-1000 job_900_to_node1: start replication job
-1000 job_900_to_node1: guest => VM 900, running => 0
-1000 job_900_to_node1: volumes => local-zfs:vm-900-disk-1
-1000 job_900_to_node1: start job removal - mode 'full'
-1000 job_900_to_node1: job removed
-1000 job_900_to_node1: end replication job
-1000 job_900_to_node1: vanished job
-- 
2.46.0

Thread overview: 2+ messages

2024-09-12 11:49 Jing Luo via pve-devel [this message]
2024-09-13 10:08 ` [pve-devel] NACK: " Fabian Grünbichler
