From: Fiona Ebner <f.ebner@proxmox.com>
To: pve-devel@lists.proxmox.com
Date: Thu,  6 Oct 2022 14:54:14 +0200
Message-Id: <20221006125414.58279-1-f.ebner@proxmox.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [pve-devel] [RFC cluster] status: clear stale kv stores upon sync

This avoids stale kv entries staying around when a node leaves the
CPG. Now, each kv entry will be something a node sent after (or upon)
joining the CPG.

This avoids scenarios where a user of pmxcfs (like pvestatd) on node A
might not yet have had time to broadcast up-to-date kv entries, while
a user of pmxcfs on node B already sees node A as online and uses an
outdated value (which couldn't be detected as outdated).

In particular, this should be helpful for more static information
broadcast by pvestatd which (mostly) doesn't change while a node is
running.

This could also be done as part of the dfsm_confchg() callback, but
then (at least) the additional guarantee that "confchg always happens
before the kvstore messages arrive during sync" is needed (pointed out
by Fabian). That guarantee should hold in practice, but
dfsm_process_state_update() doesn't need it and is also a fitting
place to do the clearing.

cfs_status_clear_other_kvstores() could take a hash table with the
IDs to avoid the quadratic loop, but since the number of nodes in
real-world setups is never very big, the simple loop is likely faster
than the additional work of constructing that hash table at the call
site.
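
For reference, a rough standalone sketch (plain GLib, outside of the
real pmxcfs types; IDs and loop bounds are made up for illustration)
of what the call side would have to construct, and how the per-node
check would then look:

#include <glib.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t skip_node_ids[] = { 1, 3, 4 };
	int count = 3;

	// build a set of node IDs; g_direct_hash/g_direct_equal with
	// GUINT_TO_POINTER keys avoid allocating anything per entry
	GHashTable *skip_set = g_hash_table_new(g_direct_hash, g_direct_equal);
	for (int i = 0; i < count; i++)
		g_hash_table_add(skip_set, GUINT_TO_POINTER(skip_node_ids[i]));

	// the membership test inside the node loop then becomes O(1)
	for (guint nodeid = 1; nodeid <= 5; nodeid++) {
		gboolean skip = g_hash_table_contains(skip_set, GUINT_TO_POINTER(nodeid));
		printf("node %u: %s kv store\n", nodeid, skip ? "keep" : "clear");
	}

	g_hash_table_destroy(skip_set);
	return 0;
}

With at most a handful of nodes, the O(1) lookup doesn't buy anything
over the plain loop, so the simpler signature seems preferable.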

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---

Many thanks to Fabian for helpful discussions!

Another alternative to avoid the quadratic loop would be to copy the
hash table, iterate over skip_node_ids once to remove those entries
from the copy (making sure the original values aren't freed!), and
then iterate over the remaining entries once. Not sure how much nicer
that would be though.
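
For illustration, a rough sketch of that copy-based variant (outside
of the real pmxcfs types; nodes_byid, the hash/equal functions and the
loop body are stand-ins here). The copy is created without destroy
notifiers, so removing entries from it never frees the original keys
or values:

#include <glib.h>
#include <stdint.h>

void
clear_other_kvstores_via_copy(
	GHashTable *nodes_byid,
	uint32_t *skip_node_ids,
	int count)
{
	// shallow copy: same key/value pointers, but no destroy notifiers,
	// so nothing owned by nodes_byid gets freed below
	GHashTable *copy = g_hash_table_new(g_int_hash, g_int_equal);

	GHashTableIter iter;
	gpointer key, value;

	g_hash_table_iter_init(&iter, nodes_byid);
	while (g_hash_table_iter_next(&iter, &key, &value))
		g_hash_table_insert(copy, key, value);

	// one pass over skip_node_ids to drop the nodes that are part of the sync
	for (int i = 0; i < count; i++)
		g_hash_table_remove(copy, &skip_node_ids[i]);

	// one pass over what is left: these are the kv stores to clear
	g_hash_table_iter_init(&iter, copy);
	while (g_hash_table_iter_next(&iter, &key, &value)) {
		// the real code would call g_hash_table_remove_all() on the
		// node's kvhash here
	}

	g_hash_table_destroy(copy);
}

That's two linear passes instead of the nested loop, at the cost of
building and tearing down a temporary table; probably a wash at these
node counts.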

 data/src/status.c | 47 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 47 insertions(+)

diff --git a/data/src/status.c b/data/src/status.c
index 9bceaeb..3da57cf 100644
--- a/data/src/status.c
+++ b/data/src/status.c
@@ -539,6 +539,46 @@ void cfs_status_set_clinfo(
 	g_mutex_unlock (&mutex);
 }
 
+void
+cfs_status_clear_other_kvstores(
+	uint32_t *skip_node_ids,
+	int count)
+{
+	g_mutex_lock (&mutex);
+
+	if (!cfs_status.clinfo || !cfs_status.clinfo->nodes_byid) {
+		goto unlock; /* ignore */
+	}
+
+	GHashTable *ht = cfs_status.clinfo->nodes_byid;
+	GHashTableIter iter;
+	gpointer key, value;
+
+	g_hash_table_iter_init (&iter, ht);
+
+	// Quadratic in the number of nodes, but it's safe to assume that the number is small enough
+	while (g_hash_table_iter_next (&iter, &key, &value)) {
+		uint32_t nodeid = *(uint32_t *)key;
+		cfs_clnode_t *clnode = (cfs_clnode_t *)value;
+		gboolean skip = FALSE;
+
+		for (int i = 0; i < count; i++) {
+			if (nodeid == skip_node_ids[i]) {
+				skip = TRUE;
+				break;
+			}
+		}
+
+		if (!skip && clnode->kvhash) {
+			cfs_debug("clearing kv store of node %d", nodeid);
+			g_hash_table_remove_all(clnode->kvhash);
+		}
+	}
+
+unlock:
+	g_mutex_unlock (&mutex);
+}
+
 static void
 dump_kvstore_versions(
 	GString *str,
@@ -1769,12 +1809,15 @@ dfsm_process_state_update(
 	g_return_val_if_fail(syncinfo != NULL, -1);
 
 	clog_base_t *clog[syncinfo->node_count];
+	uint32_t sync_node_ids[syncinfo->node_count];
 
 	int local_index = -1;
 	for (int i = 0; i < syncinfo->node_count; i++) {
 		dfsm_node_info_t *ni = &syncinfo->nodes[i];
 		ni->synced = 1;
 
+		sync_node_ids[i] = ni->nodeid;
+
 		if (syncinfo->local == ni)
 			local_index = i;
 
@@ -1791,6 +1834,10 @@ dfsm_process_state_update(
 		cfs_critical("unable to merge log files");
 	}
 
+	// Clear our copy of the kvstore of every node that is not part of the current sync. When
+	// such a node joins again, it will sync its current kvstore with cfs_kvstore_sync().
+	cfs_status_clear_other_kvstores(sync_node_ids, syncinfo->node_count);
+
 	cfs_kvstore_sync();
 
 	return 1;
-- 
2.30.2