Message-ID: <415429c7-eec8-43ab-b53d-fdcab83f1795@proxmox.com>
Date: Wed, 26 Feb 2025 17:13:24 +0100
From: Aaron Lauterer
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>, Fiona Ebner
References: <20221006125414.58279-1-f.ebner@proxmox.com>
In-Reply-To: <20221006125414.58279-1-f.ebner@proxmox.com>
Subject: Re: [pve-devel] [RFC cluster] status: clear stale kv stores upon sync

In the context of another patch to improve the broadcasting of version info
across a cluster, Fiona mentioned [0] that this RFC would also be of interest
to invalidate the KV data of nodes that left the cluster, to avoid stale info.

I had to adapt the paths in the patch file, but then it still applied to
current master:

/data/src/status.c -> /src/pmxcfs/status.c

I did a quick test and so far it seems to do what it promises: once I stop the
corosync service on a node, that node's version info disappears right away.

[0] https://lore.proxmox.com/pve-devel/17363828-136f-4f85-ad41-3a34b1a56689@proxmox.com/

Consider this

Tested-By: Aaron Lauterer

On 2022-10-06 14:54, Fiona Ebner wrote:
> This avoids that stale kv entries stay around when a node leaves the
> CPG. Now, each kv entry will be something a node sent after (or upon
> joining) the CPG.
>
> This avoids scenarios where a user of pmxcfs (like pvestatd) on node A
> might not yet have had the time to broadcast up-to-date kv entries,
> but a user of pmxcfs on node B sees node A as online and uses the
> outdated value (which couldn't be detected as outdated).
>
> In particular, this should be helpful for more static information
> broadcast by pvestatd which (mostly) doesn't change while a node is
> running.
>
> Could also be done as part of the dfsm_confchg() callback, but then
> (at least) the additional guarantee of "confchg always happens before
> the kvstore messages during sync arrive" is needed (pointed out by
> Fabian). It should hold in practice, but dfsm_process_state_update()
> doesn't need such guarantees and is also a fitting place to do it.
>
> The cfs_status_clear_other_kvstores() could take a hash table with the
> IDs to avoid the quadratic loop, but since the number of nodes in
> setups in practice is never too big, it's likely faster than the
> additional work of constructing that hash at the call side.
>
> Signed-off-by: Fiona Ebner
> ---
>
> Many thanks to Fabian for helpful discussions!
>
> Another alternative to avoid the quadratic loop would be to copy the
> hash table, iterating over skip_node_ids once to remove them from the
> hash table (but need to make sure the original value isn't freed!) and
> then iterate over the hash table once. Not sure how much nicer that
> would be though.
>
>  data/src/status.c | 47 +++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 47 insertions(+)
>
> diff --git a/data/src/status.c b/data/src/status.c
> index 9bceaeb..3da57cf 100644
> --- a/data/src/status.c
> +++ b/data/src/status.c
> @@ -539,6 +539,46 @@ void cfs_status_set_clinfo(
>      g_mutex_unlock (&mutex);
>  }
>  
> +void
> +cfs_status_clear_other_kvstores(
> +    uint32_t *skip_node_ids,
> +    int count)
> +{
> +    g_mutex_lock (&mutex);
> +
> +    if (!cfs_status.clinfo || !cfs_status.clinfo->nodes_byid) {
> +        goto unlock; /* ignore */
> +    }
> +
> +    GHashTable *ht = cfs_status.clinfo->nodes_byid;
> +    GHashTableIter iter;
> +    gpointer key, value;
> +
> +    g_hash_table_iter_init (&iter, ht);
> +
> +    // Quadratic in the number of nodes, but it's safe to assume that the number is small enough
> +    while (g_hash_table_iter_next (&iter, &key, &value)) {
> +        uint32_t nodeid = *(uint32_t *)key;
> +        cfs_clnode_t *clnode = (cfs_clnode_t *)value;
> +        gboolean skip = FALSE;
> +
> +        for (int i = 0; i < count; i++) {
> +            if (nodeid == skip_node_ids[i]) {
> +                skip = TRUE;
> +                break;
> +            }
> +        }
> +
> +        if (!skip && clnode->kvhash) {
> +            cfs_debug("clearing kv store of node %d", nodeid);
> +            g_hash_table_remove_all(clnode->kvhash);
> +        }
> +    }
> +
> +unlock:
> +    g_mutex_unlock (&mutex);
> +}
> +
>  static void
>  dump_kvstore_versions(
>      GString *str,
> @@ -1769,12 +1809,15 @@ dfsm_process_state_update(
>      g_return_val_if_fail(syncinfo != NULL, -1);
>  
>      clog_base_t *clog[syncinfo->node_count];
> +    uint32_t sync_node_ids[syncinfo->node_count];
>  
>      int local_index = -1;
>      for (int i = 0; i < syncinfo->node_count; i++) {
>          dfsm_node_info_t *ni = &syncinfo->nodes[i];
>          ni->synced = 1;
>  
> +        sync_node_ids[i] = ni->nodeid;
> +
>          if (syncinfo->local == ni)
>              local_index = i;
>  
> @@ -1791,6 +1834,10 @@
>          cfs_critical("unable to merge log files");
>      }
>  
> +    // Clear our copy of the kvstore of every node that is not part of the current sync. When
> +    // such a node joins again, it will sync its current kvstore with cfs_kvstore_sync().
> +    cfs_status_clear_other_kvstores(sync_node_ids, syncinfo->node_count);
> +
>      cfs_kvstore_sync();
>  
>      return 1;
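
For illustration of the hash-table alternative mentioned in the commit message
(not part of the patch): the caller would build a set of the node IDs that took
part in the sync, so the clear function needs only one constant-time lookup per
node instead of the inner loop over skip_node_ids. This is a minimal,
self-contained GLib sketch; node_t and the GUINT_TO_POINTER keying are
simplified stand-ins for the real cfs_clnode_t and the pointer-keyed
nodes_byid in status.c.

/*
 * Build: gcc sketch.c $(pkg-config --cflags --libs glib-2.0)
 */
#include <glib.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    GHashTable *kvhash;                 /* stand-in for cfs_clnode_t.kvhash */
} node_t;

static void node_free(gpointer data)
{
    node_t *n = data;
    g_hash_table_destroy(n->kvhash);
    g_free(n);
}

/* Clear the kv store of every node whose ID is not in skip_set. */
static void clear_other_kvstores(GHashTable *nodes_byid, GHashTable *skip_set)
{
    GHashTableIter iter;
    gpointer key, value;

    g_hash_table_iter_init(&iter, nodes_byid);
    while (g_hash_table_iter_next(&iter, &key, &value)) {
        node_t *node = value;

        /* constant-time membership test replaces the nested for-loop */
        if (!g_hash_table_contains(skip_set, key) && node->kvhash) {
            printf("clearing kv store of node %u\n", GPOINTER_TO_UINT(key));
            g_hash_table_remove_all(node->kvhash);
        }
    }
}

int main(void)
{
    /* nodes_byid: nodeid -> node_t*, keyed directly by the integer value */
    GHashTable *nodes_byid =
        g_hash_table_new_full(g_direct_hash, g_direct_equal, NULL, node_free);

    for (uint32_t id = 1; id <= 3; id++) {
        node_t *n = g_new0(node_t, 1);
        n->kvhash = g_hash_table_new_full(g_str_hash, g_str_equal, g_free, g_free);
        g_hash_table_insert(n->kvhash, g_strdup("version-info"), g_strdup("dummy"));
        g_hash_table_insert(nodes_byid, GUINT_TO_POINTER(id), n);
    }

    /* the skip set would be filled once per sync from syncinfo->nodes[i].nodeid */
    GHashTable *skip_set = g_hash_table_new(g_direct_hash, g_direct_equal);
    g_hash_table_add(skip_set, GUINT_TO_POINTER(1u));
    g_hash_table_add(skip_set, GUINT_TO_POINTER(3u));

    clear_other_kvstores(nodes_byid, skip_set);   /* only node 2 gets cleared */

    g_hash_table_destroy(skip_set);
    g_hash_table_destroy(nodes_byid);
    return 0;
}

Whether building such a set pays off for the handful of nodes a typical cluster
has is exactly the trade-off the commit message points out; the
copy-the-hash-table idea from the cover letter would trade the same lookup cost
against a temporary copy instead.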