From: Adam Weremczuk <adamw@matrixscience.com>
To: pve-user@lists.proxmox.com
Subject: Re: [PVE-User] CT replication error
Date: Wed, 9 Sep 2020 19:11:48 +0100
Message-ID: <6f14f3f9-87d2-f685-030a-a1b1a61de003@matrixscience.com>
In-Reply-To: <93d6fc90-f902-e33a-d2ee-b09daebc1d1b@proxmox.com>
Hi Fabian,
Yes, all replications use the same shared ZFS storage.
Are you referring to the /var/log/pve/replicate/102-0 file?
It seems to only hold information about the last run.
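(In case it helps others: the state of all replication jobs can also be
checked from the shell with

  pvesr status

which, as far as I know, lists each job with its last sync time and
current state. The 102-0 in the log path above is the job ID, i.e.
CT 102, job 0.)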
Anyway, my problem turned out to be node2 still holding the
zfs-pool/subvol-102-disk-0 dataset of the previous container.
I had deleted the old container from the web GUI before creating a new
one in its place (ID 102).
For some reason node2 still had the old disk. Once I removed it from
the shell of node2, replication started working for CT 102.
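For anyone hitting the same thing, the clean-up looks roughly like this
(a sketch, not my exact session; the dataset name comes from the error
below, so verify it with zfs list before destroying anything):

  zfs list -r zfs-pool | grep subvol-102      # confirm the stale dataset exists on node2
  zfs destroy -r zfs-pool/subvol-102-disk-0   # remove the leftover subvolume and its snapshots

The subvol is a ZFS dataset rather than a plain directory, so zfs
destroy is the usual way to remove it.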
Regards,
Adam
On 09/09/2020 08:51, Fabian Ebner wrote:
> Hi,
> could you check the replication log itself? There might be more
> information there. Do the working replications use the same storages
> as the failing one?
>
> Am 03.09.20 um 14:56 schrieb Adam Weremczuk:
>> Hi all,
>>
>> I have a dual-host setup running PVE 6.2-6.
>>
>> All containers replicate fine except for 102, which gives the following:
>>
>> Sep 3 13:49:00 node1 systemd[1]: Starting Proxmox VE replication
>> runner...
>> Sep 3 13:49:02 node1 zed: eid=7290 class=history_event
>> pool_guid=0x33A69221E174DDE9
>> Sep 3 13:49:03 node1 pvesr[6852]: send/receive failed, cleaning up
>> snapshot(s)..
>> Sep 3 13:49:03 node1 pvesr[6852]: 102-0: got unexpected replication
>> job error - command 'set -o pipefail && pvesm export
>> zfs-pool:subvol-102-disk-0 zfs - -with-snapshots 1 -snapshot
>> __replicate_102-0_1599137341__ | /usr/bin/ssh -e none -o
>> 'BatchMode=yes' -o 'HostKeyAlias=node2' root@192.168.100.2 -- pvesm
>> import zfs-pool:subvol-102-disk-0 zfs - -with-snapshots 1
>> -allow-rename 0' failed: exit code 255
>> Sep 3 13:49:03 node1 zed: eid=7291 class=history_event
>> pool_guid=0x33A69221E174DDE9
>>
>> Any idea what the problem is and how to fix it?
>>
>> Regards,
>> Adam