* [pmg-devel] [PATCH pmg-api 0/2] fix 2 small glitches in clustered environments
@ 2020-11-18 14:52 Stoiko Ivanov
  2020-11-18 14:52 ` [pmg-devel] [PATCH pmg-api 1/2] fix clustersync after node-deletion Stoiko Ivanov
  2020-11-18 14:52 ` [pmg-devel] [PATCH pmg-api 2/2] do not create /cluster/<cid> unconditionally Stoiko Ivanov
  0 siblings, 2 replies; 5+ messages in thread
From: Stoiko Ivanov @ 2020-11-18 14:52 UTC (permalink / raw)
  To: pmg-devel

I managed to reproduce a small problem of a user who ran into trouble when
changing the TLS certificate for their PMG installation. The first patch
addresses the underlying issue: removing a node from the cluster and directly
rejoining it caused the initial cluster-sync to fail.

The second patch fixes a small glitch that has been around for quite a while:
in a clustered PMG environment each node unconditionally creates
'/cluster/<cid>/' when quarantining a mail.

Stoiko Ivanov (2):
  fix clustersync after node-deletion
  do not create /cluster/<cid> unconditionally

 src/PMG/API2/Cluster.pm | 4 ++++
 src/PMG/MailQueue.pm    | 1 -
 2 files changed, 4 insertions(+), 1 deletion(-)

-- 
2.20.1






* [pmg-devel] [PATCH pmg-api 1/2] fix clustersync after node-deletion
  2020-11-18 14:52 [pmg-devel] [PATCH pmg-api 0/2] fix 2 small glitches in clustered environments Stoiko Ivanov
@ 2020-11-18 14:52 ` Stoiko Ivanov
  2020-11-18 16:03   ` [pmg-devel] applied: " Thomas Lamprecht
  2020-11-18 14:52 ` [pmg-devel] [PATCH pmg-api 2/2] do not create /cluster/<cid> unconditionally Stoiko Ivanov
  1 sibling, 1 reply; 5+ messages in thread
From: Stoiko Ivanov @ 2020-11-18 14:52 UTC (permalink / raw)
  To: pmg-devel

This patch creates the spooldirs for a newly joining cluster node on the
master (/var/spool/pmg/cluster/<newnode-cid>/(spam|attachment|virus)).

This is necessary to prevent a failing cluster-sync for nodes joining the
cluster after that node has been deleted (this happens if you remove a node
from the cluster and directly rejoin it to the same master node):

On the first sync after a node was deleted (there is no section config for a
number < maxcid), each node tries to sync the quarantine for the deleted node
from the cluster (in order to be promotable to new master). This rsync fails
because the spooldir for that node never got created on the master.

Without this patch, the spooldir for a node only gets created on the master
on that node's first sync, which can be up to 2 minutes after it joined the
cluster (and possibly already left it again).
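
For reference, a rough sketch of what a create_spooldirs helper needs to
provide here (the real PMG::MailQueue::create_spooldirs may differ in
details such as ownership and error handling):

    use File::Path qw(make_path);

    # create the per-node quarantine spooldirs on the master, so they
    # exist even before the new node's first sync
    sub create_spooldirs {
        my ($cid) = @_;
        for my $subdir (qw(spam attachment virus)) {
            make_path("/var/spool/pmg/cluster/$cid/$subdir");
        }
    }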

Reported via our enterprise support portal.

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
---
 src/PMG/API2/Cluster.pm | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/src/PMG/API2/Cluster.pm b/src/PMG/API2/Cluster.pm
index ceda100..7eab761 100644
--- a/src/PMG/API2/Cluster.pm
+++ b/src/PMG/API2/Cluster.pm
@@ -302,6 +302,10 @@ __PACKAGE__->register_method({
 		$next_cid = ++$master->{maxcid};
 	    }
 
+	    # create spooldir for new node to prevent problems if it gets
+	    # deleted from the cluster before being synced initially
+	    PMG::MailQueue::create_spooldirs($master->{maxcid});
+
 	    my $node = {
 		type => 'node',
 		cid => $master->{maxcid},
-- 
2.20.1






* [pmg-devel] [PATCH pmg-api 2/2] do not create /cluster/<cid> unconditionally
  2020-11-18 14:52 [pmg-devel] [PATCH pmg-api 0/2] fix 2 small glitches in clustered environments Stoiko Ivanov
  2020-11-18 14:52 ` [pmg-devel] [PATCH pmg-api 1/2] fix clustersync after node-deletion Stoiko Ivanov
@ 2020-11-18 14:52 ` Stoiko Ivanov
  2020-11-18 16:03   ` [pmg-devel] applied: " Thomas Lamprecht
  1 sibling, 1 reply; 5+ messages in thread
From: Stoiko Ivanov @ 2020-11-18 14:52 UTC (permalink / raw)
  To: pmg-devel

While looking through the spooldir creation we noticed a mkdir call on a
relative path. Since the quarantining code runs with '/' as its working
directory, this creates a '/cluster/<cid>/' directory on each system that has
a cluster.conf (<cid> being the node's cluster ID). This directory is never
used, since the actual spooldirs live in '/var/spool/pmg/cluster/'.

Simply drop the mkdir call: the spooldirs get created upon cluster creation
(PMG::API2::Cluster::create) and upon joining an existing cluster.
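
To illustrate the glitch, a small standalone snippet (not PMG code; the cid
is made up, and it needs root to actually create the directory):

    use File::Path qw(mkpath);

    chdir('/');                      # the daemon's working directory is '/'
    my $lcid = 1;                    # hypothetical cluster id
    my $subdir = "cluster/$lcid/spam";
    mkpath($subdir);                 # relative path, so this creates /cluster/1/spam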

Reported-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
---
 src/PMG/MailQueue.pm | 1 -
 1 file changed, 1 deletion(-)

diff --git a/src/PMG/MailQueue.pm b/src/PMG/MailQueue.pm
index e0f56b9..435f168 100644
--- a/src/PMG/MailQueue.pm
+++ b/src/PMG/MailQueue.pm
@@ -285,7 +285,6 @@ sub quarantine_mail {
     eval {
 	if ($lcid) {
 	    my $subdir = "cluster/$lcid/$subpath";
-	    mkpath $subdir;
 	    ($fh, $uid, $path) = new_fileid ($spooldir, $subdir);
 	} else {
 	    ($fh, $uid, $path) = new_fileid ($spooldir, $subpath);
-- 
2.20.1






* [pmg-devel] applied: [PATCH pmg-api 1/2] fix clustersync after node-deletion
  2020-11-18 14:52 ` [pmg-devel] [PATCH pmg-api 1/2] fix clustersync after node-deletion Stoiko Ivanov
@ 2020-11-18 16:03   ` Thomas Lamprecht
  0 siblings, 0 replies; 5+ messages in thread
From: Thomas Lamprecht @ 2020-11-18 16:03 UTC (permalink / raw)
  To: Stoiko Ivanov, pmg-devel

On 18.11.20 15:52, Stoiko Ivanov wrote:
> This patch creates the spooldirs for a newly joining cluster node on the
> master (/var/spool/pmg/cluster/<newnode-cid>/(spam|attachment|virus)).
> [...]

applied, thanks!





* [pmg-devel] applied: [PATCH pmg-api 2/2] do not create /cluster/<cid> unconditionally
  2020-11-18 14:52 ` [pmg-devel] [PATCH pmg-api 2/2] do not create /cluster/<cid> unconditionally Stoiko Ivanov
@ 2020-11-18 16:03   ` Thomas Lamprecht
  0 siblings, 0 replies; 5+ messages in thread
From: Thomas Lamprecht @ 2020-11-18 16:03 UTC (permalink / raw)
  To: Stoiko Ivanov, pmg-devel

On 18.11.20 15:52, Stoiko Ivanov wrote:
> While looking through the spooldir creation we noticed a mkdir call on a
> relative path. [...]

applied, thanks!




