public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [PATCH storage] rbd: add support for erasure coded ec pools
@ 2022-01-26 16:07 Aaron Lauterer
  2022-01-26 18:30 ` Alwin Antreich
  0 siblings, 1 reply; 10+ messages in thread
From: Aaron Lauterer @ 2022-01-26 16:07 UTC (permalink / raw)
  To: pve-devel

The first step is to allocate rbd images correctly.

The metadata objects still need to be stored in a replicated pool, but
by providing the --data-pool parameter on image creation, we can place
the data objects on the erasure coded (EC) pool.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
Right now this only affects disk image creation, and the EC pool needs to be
created manually to test this.

The Ceph blog about EC with RBD + CephFS gives a nice introduction and
the necessary steps to set up such a pool [0].

The steps needed are:

- create an EC profile (a 2+1 profile is only useful for testing purposes in
     a 3-node cluster, not something that should be considered for
     production use!)
# ceph osd erasure-code-profile set ec-21-profile k=2 m=1 crush-failure-domain=host

- create a new pool with that profile
# ceph osd pool create ec21pool erasure ec-21-profile

- allow overwrite
# ceph osd pool set ec21pool allow_ec_overwrites true

- enable application rbd on the pool (the command in the blog seems to
    have gotten the order of parameters a bit wrong here)
# ceph osd pool application enable ec21pool rbd

- add storage configuration
# pvesm add rbd ectest --pool <replicated pool> --data-pool ec21pool
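
With such a storage configured, allocating a disk image should then result in
an rbd call roughly like the following (just a sketch with placeholder names,
assuming a 4 GiB disk for VM 100):
# rbd -p <replicated pool> create --image-format 2 --size 4096 --data-pool ec21pool vm-100-disk-0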

For the replicated (metadata) pool, either create a new one without adding a
PVE storage config for it, or use a namespace to separate it from the existing
pool.

To create a namespace:
# rbd namespace create <pool>/<namespace>

Then add the '--namespace' parameter to the pvesm add command.
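
For reference, the resulting entry in /etc/pve/storage.cfg should then look
roughly like this (a sketch; pool and namespace are the placeholders from
above, the content types are just an example):

rbd: ectest
	pool <replicated pool>
	data-pool ec21pool
	namespace <namespace>
	content images,rootdir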

To check if the objects are stored correctly, you can run rados:

# rados -p <pool> ls

This should only show metadata objects.

# rados -p <ec pool> ls

This should then show only `rbd_data.xxx` objects.
If you configured a namespace, you also need to add the `--namespace`
parameter to the rados command.
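For example (with the placeholders from above):

# rados -p <ec pool> --namespace <namespace> ls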


[0] https://ceph.io/en/news/blog/2017/new-luminous-erasure-coding-rbd-cephfs/


 PVE/Storage/RBDPlugin.pm | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index 2607d25..1ea3418 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlugin.pm
@@ -289,6 +289,10 @@ sub properties {
 	    description => "Pool.",
 	    type => 'string',
 	},
+	'data-pool' => {
+	    description => "Data Pool (for erasure coding only)",
+	    type => 'string',
+	},
 	namespace => {
 	    description => "RBD Namespace.",
 	    type => 'string',
@@ -318,6 +322,7 @@ sub options {
 	disable => { optional => 1 },
 	monhost => { optional => 1},
 	pool => { optional => 1 },
+	'data-pool' => { optional => 1 },
 	namespace => { optional => 1 },
 	username => { optional => 1 },
 	content => { optional => 1 },
@@ -516,7 +521,10 @@ sub alloc_image {
 
     $name = $class->find_free_diskname($storeid, $scfg, $vmid) if !$name;
 
-    my $cmd = $rbd_cmd->($scfg, $storeid, 'create', '--image-format' , 2, '--size', int(($size+1023)/1024), $name);
+    my @options = ('create', '--image-format' , 2, '--size', int(($size+1023)/1024));
+    push @options, ('--data-pool', $scfg->{'data-pool'}) if $scfg->{'data-pool'};
+    push @options, $name;
+    my $cmd = $rbd_cmd->($scfg, $storeid, @options);
     run_rbd_command($cmd, errmsg => "rbd create '$name' error");
 
     return $name;
-- 
2.30.2






* Re: [pve-devel] [PATCH storage] rbd: add support for erasure coded ec pools
  2022-01-26 16:07 [pve-devel] [PATCH storage] rbd: add support for erasure coded ec pools Aaron Lauterer
@ 2022-01-26 18:30 ` Alwin Antreich
  2022-01-27 11:27   ` Aaron Lauterer
  2022-01-27 15:41   ` Alwin Antreich
  0 siblings, 2 replies; 10+ messages in thread
From: Alwin Antreich @ 2022-01-26 18:30 UTC (permalink / raw)
  To: Proxmox VE development discussion

Hello Aaron,

nice to see EC pools are coming. ;)

January 26, 2022 5:07 PM, "Aaron Lauterer" <a.lauterer@proxmox.com> wrote:

> The first step is to allocate rbd images correctly.
> 
> The metadata objects still need to be stored in a replicated pool, but
> by providing the --data-pool parameter on image creation, we can place
> the data objects on the erasure coded (EC) pool.

AFAICR, there is an undocumented location for a ceph.conf,
`/etc/pve/priv/ceph/<storage-name>.conf`.

The config should have the following content.
```
[client.admin_ec]
rbd default data pool = ceph_pool_ec
```
Then rbd will use the data pool. This should probably work for all storage operations.

Newer ceph versions should also work with the config db option.
```
ceph config set client.xxx rbd_default_data_pool ceph_pool_ec
```

Cheers,
Alwin




* Re: [pve-devel] [PATCH storage] rbd: add support for erasure coded ec pools
  2022-01-26 18:30 ` Alwin Antreich
@ 2022-01-27 11:27   ` Aaron Lauterer
  2022-01-27 15:41   ` Alwin Antreich
  1 sibling, 0 replies; 10+ messages in thread
From: Aaron Lauterer @ 2022-01-27 11:27 UTC (permalink / raw)
  To: Proxmox VE development discussion, Alwin Antreich

On 1/26/22 19:30, Alwin Antreich wrote:
> Hello Aaron,
> 
> nice to see EC pools are coming. ;)
> 
> January 26, 2022 5:07 PM, "Aaron Lauterer" <a.lauterer@proxmox.com> wrote:
> 
>> The first step is to allocate rbd images correctly.
>>
>> The metadata objects still need to be stored in a replicated pool, but
>> by providing the --data-pool parameter on image creation, we can place
>> the data objects on the erasure coded (EC) pool.
> 
> AFAICR, there is an undocumented location for a ceph.conf,
> `/etc/pve/priv/ceph/<storage-name>.conf`.

Thanks for the hint, as I wasn't aware of it. It will not be considered for PVE managed Ceph though, so not really an option here.[0]

[0] https://git.proxmox.com/?p=pve-storage.git;a=blob;f=PVE/CephConfig.pm;h=c388f025b409c660913c082763765dda0fba2c6c;hb=HEAD#l192

> 
> The config should have the following content.
> ```
> [client.admin_ec]
> rbd default data pool = ceph_pool_ec
> ````
> Then rbd will use the data pool. This should probably work for all storage operations.
> 
> Newer ceph versions should also work with the config db option.
> ```
> ceph config set client.xxx rbd_default_data_pool ceph_pool_ec
> ```

What these approaches have in common is that we spread the config over multiple places and cannot set different data pools for different storages.

I'd rather keep the data pool stored in our storage.cfg and apply the parameter where needed. From what I can tell, I missed the image clone in this patch, where the data-pool also needs to be applied.
But this way we have the settings for that storage in one place we control and are also able to use different EC pools for different storages. Not that I expect that to happen a lot in practice, but you never know.
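
For the clone path, the change would presumably look similar to the alloc_image
hunk above. Just a rough sketch, not tested; $source_image and $target_image
stand in for however clone_image actually builds those arguments:

```
# sketch only: apply the same data-pool handling when cloning
my @options = ('clone', $source_image, '--snap', $snap);
push @options, ('--data-pool', $scfg->{'data-pool'}) if $scfg->{'data-pool'};
push @options, $target_image;
my $cmd = $rbd_cmd->($scfg, $storeid, @options);
```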

> 
> Cheers,
> Alwin
> 
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 
> 





* Re: [pve-devel] [PATCH storage] rbd: add support for erasure coded ec pools
  2022-01-26 18:30 ` Alwin Antreich
  2022-01-27 11:27   ` Aaron Lauterer
@ 2022-01-27 15:41   ` Alwin Antreich
  2022-01-27 16:28     ` Aaron Lauterer
  1 sibling, 1 reply; 10+ messages in thread
From: Alwin Antreich @ 2022-01-27 15:41 UTC (permalink / raw)
  To: Aaron Lauterer, Proxmox VE development discussion

January 27, 2022 12:27 PM, "Aaron Lauterer" <a.lauterer@proxmox.com> wrote:

> Thanks for the hint, as I wasn't aware of it. It will not be considered for PVE managed Ceph
> though, so not really an option here.[0]
> 
> [0]
> https://git.proxmox.com/?p=pve-storage.git;a=blob;f=PVE/CephConfig.pm;h=c388f025b409c660913c08276376
> dda0fba2c6c;hb=HEAD#l192

That's where the db config would work.

> What these approaches do have in common, is that we spread the config over multiple places and
> cannot set different data pools for different storages.

Yes indeed, it adds to the fragmentation. But since this conf file exists per storage, a data pool
per storage is already possible.

> I rather keep the data pool stored in our storage.cfg and apply the parameter where needed. From
> what I can tell, I missed the image clone in this patch, where the data-pool also needs to be
> applied.
> But this way we have the settings for that storage in one place we control and are also able to
> have different EC pools for different storages. Not that I expect it to happen a lot in practice,
> but you never know.

This sure is a good place. But to argue in favor of a separate config file. :)

Wouldn't it make sense to have a parameter for a `client.conf` in the storage definition? Or maybe
an inherent place like the one that already exists. This would allow not only setting the data pool
but also adjusting client caching, timeouts, debug levels, ...? [0] The benefit is mostly for users
who don't have administrative access to their Ceph cluster.
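
Something like this, for example (values purely illustrative, option names as
found in the Ceph client/RBD config references [0]):

```
[client.storage1]
rbd default data pool = ec21pool
rbd cache = true
rbd cache size = 67108864
debug rbd = 5
```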

Hyper-converged setups can store these settings in the config db. Each storage would need its
own user to separate the setting.

Thanks for listening to my 2 cents. ;)

Cheers,
Alwin

[0] https://docs.ceph.com/en/latest/cephfs/client-config-ref
https://docs.ceph.com/en/latest/rbd/rbd-config-ref/#




* Re: [pve-devel] [PATCH storage] rbd: add support for erasure coded ec pools
  2022-01-27 15:41   ` Alwin Antreich
@ 2022-01-27 16:28     ` Aaron Lauterer
  2022-01-28  5:50       ` Thomas Lamprecht
  2022-01-28  9:22       ` Alwin Antreich
  0 siblings, 2 replies; 10+ messages in thread
From: Aaron Lauterer @ 2022-01-27 16:28 UTC (permalink / raw)
  To: Alwin Antreich, Proxmox VE development discussion



On 1/27/22 16:41, Alwin Antreich wrote:
> January 27, 2022 12:27 PM, "Aaron Lauterer" <a.lauterer@proxmox.com> wrote:
> 
>> Thanks for the hint, as I wasn't aware of it. It will not be considered for PVE managed Ceph
>> though, so not really an option here.[0]
>>
>> [0]
>> https://git.proxmox.com/?p=pve-storage.git;a=blob;f=PVE/CephConfig.pm;h=c388f025b409c660913c08276376
>> dda0fba2c6c;hb=HEAD#l192
> 
> That's where the db config would work.
> 
>> What these approaches do have in common, is that we spread the config over multiple places and
>> cannot set different data pools for different storages.
> 
> Yes indeed, it adds to the fragmentation. But this conf file is for each storage, a data pool per
> storage is already possible.

Yeah you are right, for external Ceph clusters, with the extra config file, this would already be possible to configure per storage.

> 
>> I rather keep the data pool stored in our storage.cfg and apply the parameter where needed. From
>> what I can tell, I missed the image clone in this patch, where the data-pool also needs to be
>> applied.
>> But this way we have the settings for that storage in one place we control and are also able to
>> have different EC pools for different storages. Not that I expect it to happen a lot in practice,
>> but you never know.
> 
> This sure is a good place. But to argue in favor of a separate config file. :)
> 
> Wouldn't it make sense to have a parameter for a `client.conf` in the storage definition? Or maybe
> an inherent place like it already exists. This would allow to not only set the data pool but also
> adjust client caching, timeouts, debug levels, ...? [0] The benefit is mostly for users not having
> administrative access to their Ceph cluster.
> 

Correct me if I got something wrong, but adding custom config settings for an external Ceph cluster, which "I" as PVE admin might only have limited access to, is already possible via the previously mentioned `/etc/pve/priv/ceph/<storage>.conf`. And I have to do it manually, so I am aware of it.

In case of hyperconverged setups, I can add anything I want in the `/etc/pve/ceph.conf` so there is no immediate need for a custom config file per storage for things like changing debug levels and so on?

Anything that touches the storage setup should rather be stored in the storage.cfg. And the option where the data objects should be stored falls into that category IMO.

Of course, the downside is that if I run some commands manually (for example rbd create), I will have to provide the --data-pool parameter myself. But even with a custom config file, I would have to make sure to add it via the `-c` parameter to have any effect. And since the default ceph.conf is not used anymore, I would also need to add the mon list and auth parameters myself. So not much gained there AFAICS versus adding it to the /etc/pve/ceph.conf directly.

Besides the whole "where to store the data-pool parameter" issue, having custom client configs per storage would most likely be its own feature request. Basically extending the current way to hyperconverged storages. Though that would mean some kind of config merging as the hyperconverged situation relies heavily on the default Ceph config file.
I still see the custom config file as an option for the admin to add custom options, not to spread the PVE managed settings when it can be avoided.


> Hyper-converged setups can store these settings in the config db. Each storage would need its
> own user to separate the setting.

Now that would open a whole different box of changing how hyperconverged Ceph clusters are set up ;)

> 
> Thanks for listening to my 2 cents. ;)

It's always good to hear other opinions from a different perspective to check if one is missing something or at least thinking it through even more :)

> 
> Cheers,
> Alwin
> 
> [0] https://docs.ceph.com/en/latest/cephfs/client-config-ref
> https://docs.ceph.com/en/latest/rbd/rbd-config-ref/#
> 





* Re: [pve-devel] [PATCH storage] rbd: add support for erasure coded ec pools
  2022-01-27 16:28     ` Aaron Lauterer
@ 2022-01-28  5:50       ` Thomas Lamprecht
  2022-01-28  9:22       ` Alwin Antreich
  1 sibling, 0 replies; 10+ messages in thread
From: Thomas Lamprecht @ 2022-01-28  5:50 UTC (permalink / raw)
  To: Proxmox VE development discussion, Aaron Lauterer, Alwin Antreich

On 27.01.22 17:28, Aaron Lauterer wrote:
> Besides the whole "where to store the data-pool parameter" issue, having custom client configs per storage would most likely be its own feature request. Basically extending the current way to hyperconverged storages. Though that would mean some kind of config merging as the hyperconverged situation relies heavily on the default Ceph config file.
> I still see the custom config file as an option for the admin to add custom options, not to spread the PVE managed settings when it can be avoided.


Yeah, config merging would probably be nicer if avoided, and we can add a
`ceph-opt`-like format-string property that allows access to most of the
more relevant settings if demand comes up.

Anyhow, thanks to both of you for the constructive discussion, always
appreciated.


* Re: [pve-devel] [PATCH storage] rbd: add support for erasure coded ec pools
  2022-01-27 16:28     ` Aaron Lauterer
  2022-01-28  5:50       ` Thomas Lamprecht
@ 2022-01-28  9:22       ` Alwin Antreich
  2022-01-28  9:50         ` Aaron Lauterer
  2022-01-28 10:54         ` Alwin Antreich
  1 sibling, 2 replies; 10+ messages in thread
From: Alwin Antreich @ 2022-01-28  9:22 UTC (permalink / raw)
  To: Thomas Lamprecht, Proxmox VE development discussion, Aaron Lauterer

January 28, 2022 6:50 AM, "Thomas Lamprecht" <t.lamprecht@proxmox.com> wrote:

> On 27.01.22 17:28, Aaron Lauterer wrote:
> 
>> Besides the whole "where to store the data-pool parameter" issue, having custom client configs per
>> storage would most likely be its own feature request. Basically extending the current way to
>> hyperconverged storages. Though that would mean some kind of config merging as the hyperconverged
>> situation relies heavily on the default Ceph config file.
>> I still see the custom config file as an option for the admin to add custom options, not to spread
>> the PVE managed settings when it can be avoided.
> 
> Yeah config merging would be probably nicer if avoided, and we can add a
> `ceph-opt` like format-string property that allows access to most of the
> more relevant settings if demand comes up.
K.

Would you guys have any objection if I send a docs patch to document the current client conf possibility under /etc/pve/priv/ceph/<storage>.conf? Or should it rather be documented for /etc/pve/ceph.conf?

@Aaron, or is it counterproductive to what you are trying to do?

> Anyhow, thanks to both of you for the constructive discussion, always
> appreciated.
:)

Cheers,
Alwin




* Re: [pve-devel] [PATCH storage] rbd: add support for erasure coded ec pools
  2022-01-28  9:22       ` Alwin Antreich
@ 2022-01-28  9:50         ` Aaron Lauterer
  2022-01-28 10:54         ` Alwin Antreich
  1 sibling, 0 replies; 10+ messages in thread
From: Aaron Lauterer @ 2022-01-28  9:50 UTC (permalink / raw)
  To: Alwin Antreich, Thomas Lamprecht, Proxmox VE development discussion



On 1/28/22 10:22, Alwin Antreich wrote:
> January 28, 2022 6:50 AM, "Thomas Lamprecht" <t.lamprecht@proxmox.com> wrote:
> 
>> On 27.01.22 17:28, Aaron Lauterer wrote:
>>
>>> Besides the whole "where to store the data-pool parameter" issue, having custom client configs per
>>> storage would most likely be its own feature request. Basically extending the current way to
>>> hyperconverged storages. Though that would mean some kind of config merging as the hyperconverged
>>> situation relies heavily on the default Ceph config file.
>>> I still see the custom config file as an option for the admin to add custom options, not to spread
>>> the PVE managed settings when it can be avoided.
>>
>> Yeah config merging would be probably nicer if avoided, and we can add a
>> `ceph-opt` like format-string property that allows access to most of the
>> more relevant settings if demand comes up.
> K.
> 
> Would you guys have any objection, when I send a docs patch to document the current client conf possibility, under /etc/pve/priv/ceph/<storage>.conf? Or rather document it for /etc/pve/ceph.conf?

What exactly do you mean? How to add custom config options for an external cluster (/etc/pve/priv/ceph/<storeid>.conf) and locally in /etc/pve/ceph.conf AKA /etc/ceph/ceph.conf?

Sure, AFAICS the custom <storeid>.conf was added in 2016 [0]. I did a quick search in the admin guide and did not find anything about it.

[0] https://git.proxmox.com/?p=pve-storage.git;a=commit;h=1341722
> 
> @Aaron, or is it counter productive to what you try to do?

Right now, I am only working out the EC pools (data-pool) parameter. Having the current possibilities documented is surely a good idea.

Out of curiosity, do you have to use the custom configs often?

> 
>> Anyhow, thanks to both of you for the constructive discussion, always
>> appreciated.
> :)
> 
> Cheers,
> Alwin
> 





* Re: [pve-devel] [PATCH storage] rbd: add support for erasure coded ec pools
  2022-01-28  9:22       ` Alwin Antreich
  2022-01-28  9:50         ` Aaron Lauterer
@ 2022-01-28 10:54         ` Alwin Antreich
  2022-01-28 11:21           ` Aaron Lauterer
  1 sibling, 1 reply; 10+ messages in thread
From: Alwin Antreich @ 2022-01-28 10:54 UTC (permalink / raw)
  To: Aaron Lauterer, Thomas Lamprecht, Proxmox VE development discussion

January 28, 2022 10:50 AM, "Aaron Lauterer" <a.lauterer@proxmox.com> wrote:

> What exactly do you mean? How to add custom config options for external cluster
> (/etc/pve/priv/ceph/<storeid>.conf) and locally in the /etc/pve/ceph.conf AKA /etc/ceph/ceph.conf?
Oh. I was thinking that if /etc/pve/priv/ceph/<storage>.conf is not favored, then I could document the usage of client settings in /etc/pve/ceph.conf.

> 
> Sure, AFAICS the custom <storeid>.conf has been added in 2016 [0]. I did a quick search in the
> admin guide and did not find anything about it.
> 
> [0] https://git.proxmox.com/?p=pve-storage.git;a=commit;h=1341722
Then I'll prepare some docs patch. :)

> Right now, I am only working out the EC pools (data-pool) parameter. Having the current
> possibilities documented is surely a good idea.
> 
> Out of curiosity, do you have to use the custom configs often?
So far it might have been a handful of times (that I recall).
Currently we have a small how-to on the website with the /etc/ceph.conf.[0] I want to update this guide as well.

Cheers,
Alwin

[0] https://croit.io/docs/master/hypervisors/proxmox




* Re: [pve-devel] [PATCH storage] rbd: add support for erasure coded ec pools
  2022-01-28 10:54         ` Alwin Antreich
@ 2022-01-28 11:21           ` Aaron Lauterer
  0 siblings, 0 replies; 10+ messages in thread
From: Aaron Lauterer @ 2022-01-28 11:21 UTC (permalink / raw)
  To: Alwin Antreich, Thomas Lamprecht, Proxmox VE development discussion



On 1/28/22 11:54, Alwin Antreich wrote:
> January 28, 2022 10:50 AM, "Aaron Lauterer" <a.lauterer@proxmox.com> wrote:
> 
>> What exactly do you mean? How to add custom config options for external cluster
>> (/etc/pve/priv/ceph/<storeid>.conf) and locally in the /etc/pve/ceph.conf AKA /etc/ceph/ceph.conf?
> Oh. I was thinking, if the /etc/pve/priv/ceph/<storage>.conf is not favored, then I could document the usage of client settings in the /etc/pve/ceph.conf.

Well, it is there for external clusters, just not really documented atm. For a hyperconverged cluster (no monhost line in the storage config, line 185 [0]) it is ignored [0].
If it is configured, it is then passed via `-c` to the RBD command, and therefore the default ceph config is ignored IIUC [1].



[0] https://git.proxmox.com/?p=pve-storage.git;a=blob;f=PVE/CephConfig.pm;h=c388f025b409c660913c082763765dda0fba2c6c;hb=HEAD#l192
[1] https://git.proxmox.com/?p=pve-storage.git;a=blob;f=PVE/Storage/RBDPlugin.pm;h=2607d259d1ab20f0f430ee7d0083bcd77289c2ac;hb=HEAD#l48
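
In other words, for an external cluster the resulting invocation looks roughly
like this (a sketch with placeholder values, not the exact option set the
plugin builds):

# rbd -p <pool> -c /etc/pve/priv/ceph/<storeid>.conf -m <monhost> -n client.<username> --keyring /etc/pve/priv/ceph/<storeid>.keyring <command>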
> 
>>
>> Sure, AFAICS the custom <storeid>.conf has been added in 2016 [0]. I did a quick search in the
>> admin guide and did not find anything about it.
>>
>> [0] https://git.proxmox.com/?p=pve-storage.git;a=commit;h=1341722
> Then I'll prepare some docs patch. :)
> 
>> Right now, I am only working out the EC pools (data-pool) parameter. Having the current
>> possibilities documented is surely a good idea.
>>
>> Out of curiosity, do you have to use the custom configs often?
> So far it might have been a hand full of times. (that I recall)
> Currently we have a small how-to on the website with the /etc/ceph.conf.[0] I want to update this guide as well.

I took a quick look. JFYI, the keyring (and CephFS secret) don't need to be placed manually anymore with recent versions, as the pvesm command can take a path to those files and will place them at the correct location [2]. GUI patches for that are currently under review.
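
For example, presumably something along these lines (parameter name as
introduced by that commit, not re-checked here):

# pvesm add rbd <storeid> --monhost <mon-ip> --pool <pool> --username <user> --keyring /path/to/ceph.keyring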


Once this patch is applied, your guide will hopefully become a bit shorter. Or longer to cover both cases ;)

Have a great weekend!


[2] https://git.proxmox.com/?p=pve-storage.git;a=commit;h=22b68016f791958baf7dd982cb92d2306f0e1574
> 
> Cheers,
> Alwin
> 
> [0] https://croit.io/docs/master/hypervisors/proxmox
> 





end of thread

Thread overview: 10+ messages
2022-01-26 16:07 [pve-devel] [PATCH storage] rbd: add support for erasure coded ec pools Aaron Lauterer
2022-01-26 18:30 ` Alwin Antreich
2022-01-27 11:27   ` Aaron Lauterer
2022-01-27 15:41   ` Alwin Antreich
2022-01-27 16:28     ` Aaron Lauterer
2022-01-28  5:50       ` Thomas Lamprecht
2022-01-28  9:22       ` Alwin Antreich
2022-01-28  9:50         ` Aaron Lauterer
2022-01-28 10:54         ` Alwin Antreich
2022-01-28 11:21           ` Aaron Lauterer
