public inbox for pve-user@lists.proxmox.com
From: Fabian Ebner <f.ebner@proxmox.com>
To: pve-user@lists.proxmox.com
Subject: Re: [PVE-User] CT replication error
Date: Thu, 10 Sep 2020 10:58:45 +0200	[thread overview]
Message-ID: <0f1b6ce7-7983-6866-dcdc-1c16328646a6@proxmox.com> (raw)
In-Reply-To: <6f14f3f9-87d2-f685-030a-a1b1a61de003@matrixscience.com>

Hi,
glad you were able to resolve the issue. Did you use 'purge' to remove 
the container? Doing that does not (yet) clean up the replicated volumes 
on the remote nodes. We're probably going to change that.
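
Until that change lands, leftover replicated volumes have to be found by hand. A minimal sketch, assuming the pool name `zfs-pool` and CT ID 102 from this thread; the `zfs` invocation is shown as a comment because it needs a live pool:

```shell
# Search each node for datasets left behind by a removed container.
# CT ID and pool name are the ones from this thread; adjust as needed.
CTID=102
PATTERN="subvol-${CTID}-"
echo "searching datasets matching: ${PATTERN}"
# Run on every node in the cluster (needs a live ZFS pool):
#   zfs list -H -o name -t filesystem | grep "${PATTERN}"
```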

Am 09.09.20 um 20:11 schrieb Adam Weremczuk:
> Hi Fabian,
> 
> Yes, all replications use the same shared ZFS storage.
> 
> Are you referring to /var/log/pve/replicate/102-0 file?
> 
> It seems to only hold information about the last run.
> 
> Anyway, my problem turned out to be node2 still holding 
> zfs-pool/subvol-102-disk-0 of the previous container.
> 
> I had deleted the old container from the web GUI before creating a new 
> one in its place (id 102).
> 
> For some reason node2 still had the old disk. Once I rm'ed it from the 
> shell of node2, replication started working for CT-102.
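
The manual cleanup described above can be sketched as follows; the dataset and job names are the ones from this message, and `zfs destroy` / `pvesr schedule-now` are shown as comments since they act on the live cluster:

```shell
# Stale dataset on the target node that blocked replication of CT 102.
DATASET="zfs-pool/subvol-102-disk-0"
echo "dataset to remove on node2: ${DATASET}"
# On node2, after confirming no container on that node still owns it:
#   zfs destroy -r zfs-pool/subvol-102-disk-0
# Then re-run the job from the source node instead of waiting for the schedule:
#   pvesr schedule-now 102-0
```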
> 
> Regards,
> Adam
> 
> On 09/09/2020 08:51, Fabian Ebner wrote:
>> Hi,
>> could you check the replication log itself? There might be more 
>> information there. Do the working replications use the same storages 
>> as the failing one?
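
For reference, a sketch of where to look, assuming job ID 102-0 as in this thread (`pvesr status` is run on the source node):

```shell
# Replication log location for job 102-0; only the latest run is kept there.
JOB="102-0"
LOG="/var/log/pve/replicate/${JOB}"
echo "per-job log: ${LOG}"
# Full log of the most recent run:
#   cat "${LOG}"
# Overview of all jobs on this node, including the last error:
#   pvesr status
```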
>>
>> Am 03.09.20 um 14:56 schrieb Adam Weremczuk:
>>> Hi all,
>>>
>>> I have a dual-host setup, PVE 6.2-6.
>>>
>>> All containers replicate fine except for 102 giving the following:
>>>
>>> Sep  3 13:49:00 node1 systemd[1]: Starting Proxmox VE replication 
>>> runner...
>>> Sep  3 13:49:02 node1 zed: eid=7290 class=history_event 
>>> pool_guid=0x33A69221E174DDE9
>>> Sep  3 13:49:03 node1 pvesr[6852]: send/receive failed, cleaning up 
>>> snapshot(s)..
>>> Sep  3 13:49:03 node1 pvesr[6852]: 102-0: got unexpected replication 
>>> job error - command 'set -o pipefail && pvesm export 
>>> zfs-pool:subvol-102-disk-0 zfs - -with-snapshots 1 -snapshot 
>>> __replicate_102-0_1599137341__ | /usr/bin/ssh -e none -o 
>>> 'BatchMode=yes' -o 'HostKeyAlias=node2' root@192.168.100.2 -- pvesm 
>>> import zfs-pool:subvol-102-disk-0 zfs - -with-snapshots 1 
>>> -allow-rename 0' failed: exit code 255
>>> Sep  3 13:49:03 node1 zed: eid=7291 class=history_event 
>>> pool_guid=0x33A69221E174DDE9
>>>
>>> Any idea what the problem is and how to fix it?
>>>
>>> Regards,
>>> Adam
>>>
>>>
>>> _______________________________________________
>>> pve-user mailing list
>>> pve-user@lists.proxmox.com
>>> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>
>>
> 

Thread overview: 4+ messages
2020-09-03 12:56 Adam Weremczuk
2020-09-09  7:51 ` Fabian Ebner
2020-09-09 18:11   ` Adam Weremczuk
2020-09-10  8:58     ` Fabian Ebner [this message]
