From: Marco Gaiarin
Date: Wed, 16 Jul 2025 12:44:28 +0200
To: pve-user@lists.proxmox.com
Subject: [PVE-User] Leftover replication snapshots?!
I have some VMs and LXCs with replication enabled between two nodes; if I look at the snapshots I get (on both nodes):

root@cnpve1:~# zfs list -r -t snapshot -o name,used,creation rpool/data
NAME                                                         USED   CREATION
rpool/data/subvol-123-disk-0@__replicate_123-0_1752453014__  84.4M  Mon Jul 14  2:30 2025
rpool/data/subvol-124-disk-0@__replicate_124-0_1752453020__  21.3G  Mon Jul 14  2:30 2025
rpool/data/subvol-126-disk-0@__replicate_126-0_1752662087__  232K   Wed Jul 16 12:34 2025
rpool/data/subvol-128-disk-0@__replicate_128-0_1752662081__  280K   Wed Jul 16 12:34 2025
rpool/data/vm-100-disk-0@__replicate_100-0_1752453009__      53.9M  Mon Jul 14  2:30 2025
rpool/data/vm-101-disk-0@__replicate_101-0_1752662120__      6.27M  Wed Jul 16 12:35 2025
rpool/data/vm-120-disk-0@__replicate_120-0_1752453000__      0B     Mon Jul 14  2:30 2025
rpool/data/vm-121-disk-0@__replicate_121-0_1752468303__      0B     Mon Jul 14  6:45 2025
rpool/data/vm-122-disk-0@__replicate_122-0_1752468303__      112M   Mon Jul 14  6:45 2025
rpool/data/vm-131-disk-0@__replicate_131-0_1752662105__      3.09M  Wed Jul 16 12:35 2025

There is only one snapshot per disk, so it seems to me that replication works like this:

1) delete the old snapshot
2) take a new snapshot
3) sync the snapshot
4) apply the snapshot on the other node

But the snapshots do not get deleted. Why?

If I temporarily need some space, can I safely delete all these snapshots (by hand, clearly, via 'zfs destroy')?

Thanks.

-- 

_______________________________________________
pve-user mailing list
pve-user@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
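P.S.: to see how much space those `__replicate_*__` snapshots are actually holding before destroying anything, the USED column of the listing can be summed up. A minimal sketch with awk, run here against a few sample rows copied from the output above (the `rpool/data` paths and sizes are the ones shown, not live data; in practice one would pipe `zfs list -r -t snapshot -o name,used rpool/data` directly into the awk program):

```shell
# Sample rows taken verbatim from the 'zfs list' output above
# (name and USED column only).
zfs_list_sample='rpool/data/subvol-123-disk-0@__replicate_123-0_1752453014__ 84.4M
rpool/data/subvol-124-disk-0@__replicate_124-0_1752453020__ 21.3G
rpool/data/vm-120-disk-0@__replicate_120-0_1752453000__ 0B'

# Sum the USED column of replication snapshots, converting the
# human-readable K/M/G/B suffixes into bytes, and report GiB held.
printf '%s\n' "$zfs_list_sample" | awk '
  /@__replicate_/ {
    n = $2; unit = substr(n, length(n), 1); sub(/[KMGB]$/, "", n)
    mult = (unit == "K") ? 1024 : (unit == "M") ? 1024^2 : (unit == "G") ? 1024^3 : 1
    total += n * mult
  }
  END { printf "%.1f GiB held by replication snapshots\n", total / 1024^3 }'
# prints "21.4 GiB held by replication snapshots"
```

(With `zfs list -p` the sizes come out as exact byte counts and the unit conversion is unnecessary.)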