public inbox for pve-user@lists.proxmox.com
* Re: [PVE-User] Proxmox VE 7.0 (beta) released!
@ 2021-06-29 12:27 Wolfgang Bumiller
  0 siblings, 0 replies; 16+ messages in thread
From: Wolfgang Bumiller @ 2021-06-29 12:27 UTC (permalink / raw)
  To: Proxmox VE user list, Mark Schouten, Thomas Lamprecht


> On 06/29/2021 2:04 PM Mark Schouten <mark@tuxis.nl> wrote:
> 
>  
> Hi,
> 
> On 29-06-2021 at 12:31, Thomas Lamprecht wrote:
> >> I do not completely understand why that fixes it though.  Commenting out MACAddressPolicy=persistent helps, but why?
> >>
> > 
> > Because duplicate MAC addresses are not ideal, to say the least?
> 
> That I understand. :)
> 
> But, the cluster interface works when bridge_vlan_aware is off, 
> regardless of the MacAddressPolicy setting.

Yep, this may actually need more investigation, as I also had this issue on a single PVE VM on my ArchLinux host.

- definitely no duplicate mac addresses there
- no MAC related firewall settings
- network traffic *routed* off of a bridge on the host (so the final physical nic being an intel one should also not influence this)
- works when disabling `bridge-vlan-aware`
- still works when enabling vlan filtering via /sys after the fact
- also works with MACAddressPolicy commented out *regardless* of `bridge-vlan-aware`...

Also tried using systemd-networkd for the bridge in place of ifupdown2.
Same behavior when toggling `VLANFiltering` in the [Bridge] section...
Also note that, similar to manually enabling vlan filtering via /sys, simply enabling `VLANFiltering` and restarting `systemd-networkd` does not actually break it; only if I delete the bridge first and then let systemd-networkd recreate it from scratch does it break...
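For reference, the /sys toggle I mean is simply this (vmbr0/eno1 just as example bridge/port names), plus how I checked the port's promisc state:

  echo 1 > /sys/class/net/vmbr0/bridge/vlan_filtering       # enable vlan filtering on the already-existing bridge
  ip -d link show dev eno1 | grep -o 'promiscuity [0-9]*'   # promisc counter of the single bridge port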

Curious stuff...




^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PVE-User] Proxmox VE 7.0 (beta) released!
       [not found] <kcEE.HSoMZfIyQreLVdFDq7JFjQ.AFttFk5y1wE@ckcucs11.intern.ckc-it.at>
@ 2021-07-06 10:22 ` Stoiko Ivanov
  0 siblings, 0 replies; 16+ messages in thread
From: Stoiko Ivanov @ 2021-07-06 10:22 UTC (permalink / raw)
  To: Christian Kraus, pve-user

Hi,

adding pve-user as To

On Tue, 6 Jul 2021 10:02:54 +0000
"Christian Kraus" <christian.kraus@ckc-it.at> wrote:

> Thanks for the information,
> 
> 
> 
> What I also saw today:
> 
> 
> 
> created a new iSCSI shared volume
> 
> then added an LVM volume
> 
> 
> 
> all 3 cluster nodes see the iSCSI shared volume, but only the cluster node where the GUI was open has access to the created LVM volume - the other 2 I had to reboot to get access for them

How did you do those steps exactly?
(via GUI, CLI, any particular invocations/configs)?

Asking because, when trying to reproduce the clone issue, I did pretty
much that:
* added a new iSCSI volume via GUI
* created a volume group on the exported disk on one node via GUI
* marked the storage as shared

worked here - disks were visible on all nodes without a reboot.
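For comparison, the rough CLI equivalent of those steps would be something
like the following (storage IDs, portal, target and device path are just
placeholders):

  pvesm add iscsi iscsi-shared --portal 192.0.2.10 --target iqn.2021-07.example:storage --content none
  vgcreate vg-iscsi /dev/sdX                              # the LUN exported by the iSCSI storage
  pvesm add lvm lvm-shared --vgname vg-iscsi --shared 1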





^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PVE-User] Proxmox VE 7.0 (beta) released!
       [not found] ` <mailman.239.1625514988.464.pve-user@lists.proxmox.com>
@ 2021-07-06  9:55   ` Stoiko Ivanov
  0 siblings, 0 replies; 16+ messages in thread
From: Stoiko Ivanov @ 2021-07-06  9:55 UTC (permalink / raw)
  To: Christian Kraus via pve-user

Hi,


On Mon, 5 Jul 2021 19:46:27 +0000
Christian Kraus via pve-user <pve-user@lists.proxmox.com> wrote:

> Storage Migration fails to iscsi destination since upgrade to VE 7.0 beta
> 
> the log says:
> 
> create full clone of drive virtio0 (local-lvm:vm-131-disk-0)
> WARNING: dos signature detected on /dev/nvme-vg/vm-131-disk-0 at offset 510. Wipe it? [y/n]: [n]
> Aborted wiping of dos.
> 1 existing signature left on the device.
> Failed to wipe signatures on logical volume nvme-vg/vm-131-disk-0.
> TASK ERROR: storage migration failed: lvcreate 'nvme-vg/vm-131-disk-0' error: Aborting. Failed to wipe start of new LV.

Thanks for the report! - we looked into it, reproduced the issue and a
first patch for discussion was sent to pve-devel:
https://lists.proxmox.com/pipermail/pve-devel/2021-July/049231.html

Once this (or an improved version) has been applied, the issue should be
fixed.
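For anyone hitting this before the fix lands: when creating such an LV by
hand, lvcreate can be told to wipe the old signature non-interactively, e.g.
(size here is just an example; not necessarily exactly what the patch does):

  lvcreate -Wy --yes -n vm-131-disk-0 -L 32G nvme-vg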





^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PVE-User] Proxmox VE 7.0 (beta) released!
  2021-07-02 20:57   ` Thomas Lamprecht
@ 2021-07-02 21:06     ` Mark Schouten
  0 siblings, 0 replies; 16+ messages in thread
From: Mark Schouten @ 2021-07-02 21:06 UTC (permalink / raw)
  To: Thomas Lamprecht; +Cc: Proxmox VE user list

Very cool that this is fixed!

Mark Schouten

> On 2 Jul 2021 at 22:58, Thomas Lamprecht <t.lamprecht@proxmox.com> wrote:
> 
> On 29.06.21 10:05, Mark Schouten wrote:
>> Hi,
>> 
>> On 24-06-2021 at 15:16, Martin Maurer wrote:
>>> We are pleased to announce the first beta release of Proxmox Virtual Environment 7.0! The 7.x family is based on the great Debian 11 "Bullseye" and comes with a 5.11 kernel, QEMU 6.0, LXC 4.0, OpenZFS 2.0.4.
>> 
>> I just upgraded a node in our demo cluster and all seemed fine. Except for non-working cluster network. I was unable to ping the node through the cluster interface, pvecm saw no other nodes and ceph was broken.
>> 
>> However, if I ran tcpdump, ping started working, but not the rest.
>> 
>> Interesting situation, which I 'fixed' by disabling vlan-aware-bridge for that interface. After the reboot, everything works (AFAICS).
>> 
>> If Proxmox wants to debug this, feel free to reach out to me, I can grant you access to this node so you can check it out.
>> 
> 
> FYI, there was some more investigation regarding this, mostly spearheaded by Wolfgang,
> and we found and fixed[0] an actual, rather old (the commit it fixes is from 2014!) bridge bug
> in the kernel.
> 
> The first few lines of the fix's commit message[0] explain the basics:
> 
>> [..] bridges with `vlan_filtering 1` and only 1 auto-port don't
>> set IFF_PROMISC for unicast-filtering-capable ports.
> 
> Further, we saw all that weird behavior because:
> * while this is independent of any specific network driver, those specific drivers
>  vary wildly in how they do things, and some thus worked (by luck) while others did
>  not.
> 
> * It can really only happen in the vlan-aware case, as otherwise all ports are set promisc
>  no matter what, but depending on the order in which things are done the result may still
>  differ even with vlan-aware on
> 
> * It did not matter before (i.e., before systemd started to also apply their
>  MACAddressPolicy by default onto virtual devices like bridges) because then the
>  bridge basically always had a MAC from one of its ports, so the fdb always
>  contained the bridge's MAC implicitly and the bug was concealed.
> 
> So it's quite likely that this rather confusing mix of behaviors would have popped up
> in more places where bridges are used in the upcoming months, as that systemd
> change slowly rolls into stable distros, so it's actually really nice to find and fix
> (*knocks wood*) this during the beta!
> 
> Anyhow, a newer kernel build is now also available in the bullseye based pvetest
> repository, if you want to test and confirm the fix:
> 
> pve-kernel-5.11.22-1-pve version 5.11.22-2
> 
> cheers,
> Thomas
> 
> 
> [0]: https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git/commit/?id=a019abd80220

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PVE-User] Proxmox VE 7.0 (beta) released!
  2021-06-29  8:05 ` Mark Schouten
  2021-06-29  8:23   ` Stoiko Ivanov
  2021-06-29  9:46   ` Thomas Lamprecht
@ 2021-07-02 20:57   ` Thomas Lamprecht
  2021-07-02 21:06     ` Mark Schouten
  2 siblings, 1 reply; 16+ messages in thread
From: Thomas Lamprecht @ 2021-07-02 20:57 UTC (permalink / raw)
  To: Proxmox VE user list, Mark Schouten

On 29.06.21 10:05, Mark Schouten wrote:
> Hi,
> 
> On 24-06-2021 at 15:16, Martin Maurer wrote:
>> We are pleased to announce the first beta release of Proxmox Virtual Environment 7.0! The 7.x family is based on the great Debian 11 "Bullseye" and comes with a 5.11 kernel, QEMU 6.0, LXC 4.0, OpenZFS 2.0.4.
> 
> I just upgraded a node in our demo cluster and all seemed fine. Except for non-working cluster network. I was unable to ping the node through the cluster interface, pvecm saw no other nodes and ceph was broken.
> 
> However, if I ran tcpdump, ping started working, but not the rest.
> 
> Interesting situation, which I 'fixed' by disabling vlan-aware-bridge for that interface. After the reboot, everything works (AFAICS).
> 
> If Proxmox wants to debug this, feel free to reach out to me, I can grant you access to this node so you can check it out.
> 

FYI, there was some more investigation regarding this, mostly spearheaded by Wolfgang,
and we found and fixed[0] an actual, rather old (the commit it fixes is from 2014!) bridge bug
in the kernel.

The first few lines of the fix's commit message[0] explain the basics:

> [..] bridges with `vlan_filtering 1` and only 1 auto-port don't
> set IFF_PROMISC for unicast-filtering-capable ports.

Further, we saw all that weird behavior because:
* while this is independent of any specific network driver, those specific drivers
  vary wildly in how they do things, and some thus worked (by luck) while others did
  not.

* It can really only happen in the vlan-aware case, as otherwise all ports are set promisc
  no matter what, but depending on the order in which things are done the result may still
  differ even with vlan-aware on

* It did not matter before (i.e., before systemd started to also apply their
  MACAddressPolicy by default onto virtual devices like bridges) because then the
  bridge basically always had a MAC from one of its ports, so the fdb always
  contained the bridge's MAC implicitly and the bug was concealed.

So it's quite likely that this rather confusing mix of behaviors would have popped up
in more places where bridges are used in the upcoming months, as that systemd
change slowly rolls into stable distros, so it's actually really nice to find and fix
(*knocks wood*) this during the beta!

Anyhow, a newer kernel build is now also available in the bullseye based pvetest
repository, if you want to test and confirm the fix:

pve-kernel-5.11.22-1-pve version 5.11.22-2
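Roughly, if the pvetest repository is not enabled yet (repo line as in the
beta announcement):

  echo "deb http://download.proxmox.com/debian/pve bullseye pvetest" > /etc/apt/sources.list.d/pvetest-for-beta.list
  apt update
  apt install pve-kernel-5.11.22-1-pve    # then reboot into the new kernel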

cheers,
Thomas


[0]: https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git/commit/?id=a019abd80220




^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PVE-User] Proxmox VE 7.0 (beta) released!
  2021-06-29 13:31           ` Stoiko Ivanov
  2021-06-29 13:51             ` alexandre derumier
@ 2021-06-29 14:14             ` Thomas Lamprecht
  1 sibling, 0 replies; 16+ messages in thread
From: Thomas Lamprecht @ 2021-06-29 14:14 UTC (permalink / raw)
  To: Proxmox VE user list, Stoiko Ivanov, Mark Schouten

On 29.06.21 15:31, Stoiko Ivanov wrote:
> On Tue, 29 Jun 2021 14:04:05 +0200
> Mark Schouten <mark@tuxis.nl> wrote:
> 
>> Hi,
>>
>> On 29-06-2021 at 12:31, Thomas Lamprecht wrote:
>>>> I do not completely understand why that fixes it though.  Commenting out MACAddressPolicy=persistent helps, but why?
>>>>  
>>>
>>> Because duplicate MAC addresses are not ideal, to say the least?  
>>
>> That I understand. :)
>>
>> But, the cluster interface works when bridge_vlan_aware is off, 
>> regardless of the MacAddressPolicy setting.
>>
> 
> We managed to find a reproducer - my current guess is that it might have
> something to do with intel NIC drivers or some changes in ifupdown2 (or
> udev, or in their interaction ;) - Sadly if tcpdump fixes the issues, it
> makes debugging quite hard :)

The issue is that the kernel always (since close to forever) cleared the bridge's
promisc mode when there was either no port or exactly one port with flood or learning
enabled in the `br_manage_promisc` function.

Further, on toggling VLAN-aware, the aforementioned `br_manage_promisc` is called
from `br_vlan_filter_toggle`.

So, why does this break now? I really do not think it's due to some driver-specific
stuff - not impossible, but the following sounds like a better explanation of the
"why now":

Previously the MAC address of the bridge was the same as the one of the single port,
so there it didn't matter too much whether promisc was enabled on the single port
itself; the bridge could accept the packets anyway. But now, with the systemd default
MACAddressPolicy "persistent" also applying to bridges, the bridge gets a different
MAC than the port, which means the disabled promisc matters on that port quite a bit more.

So vlan-aware on "breaks" it by accident, as then a br_manage_promisc call is made
at a time when the "clear promisc for port" logic triggers, so it is rather a side-effect
than the real cause.

I'm quite tempted to drop the br_auto_port special case for the single-port case in
the kernel as a fix, but need to think about this - and probably will send that to
LKML first to poke for some comments...
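A quick way to see this state on an affected node (interface names below are
just examples, taken from Mark's report) is to compare the bridge and port MACs
and look at the port's promiscuity counter:

  ip -br link show dev vmbr999                               # bridge MAC (systemd-generated)
  ip -br link show dev eno2                                  # port MAC, now different from the bridge's
  ip -d link show dev eno2 | grep -o 'promiscuity [0-9]*'    # stays at 0 in the broken case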




^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PVE-User] Proxmox VE 7.0 (beta) released!
  2021-06-29 13:31           ` Stoiko Ivanov
@ 2021-06-29 13:51             ` alexandre derumier
  2021-06-29 14:14             ` Thomas Lamprecht
  1 sibling, 0 replies; 16+ messages in thread
From: alexandre derumier @ 2021-06-29 13:51 UTC (permalink / raw)
  To: Proxmox VE user list, Mark Schouten; +Cc: Thomas Lamprecht

Hi,
I have found a bug report about promisc && vlan-aware on the Mellanox site,
from a Proxmox user.

(Proxmox 6.4 with kernel 5.12?)

https://community.mellanox.com/s/question/0D51T00008ansfP/vlan-aware-linux-bridging-is-not-functional-on-connectx4lx-card-unless-manually-put-in-promiscuous-mode


So maybe it is a kernel change?

I don't know if ifupdown2 has been bumped to the latest master? (I
haven't seen a specific commit for this kind of change)



On Tuesday, 29 June 2021 at 15:31 +0200, Stoiko Ivanov wrote:
> On Tue, 29 Jun 2021 14:04:05 +0200
> Mark Schouten <mark@tuxis.nl> wrote:
> 
> > Hi,
> > 
> > On 29-06-2021 at 12:31, Thomas Lamprecht wrote:
> > > > I do not completely understand why that fixes it though. 
> > > > Commenting out MACAddressPolicy=persistent helps, but why?
> > > >  
> > > 
> > 
> > That I understand. :)
> > 
> > But, the cluster interface works when bridge_vlan_aware is off, 
> > regardless of the MacAddressPolicy setting.
> > 
> 
> We managed to find a reproducer - my current guess is that it might
> have
> something to do with intel NIC drivers or some changes in ifupdown2 (or
> udev, or in their interaction ;) - Sadly if tcpdump fixes the issues,
> it
> makes debugging quite hard :)
> 
> In any case - as can also be seen in the 2 reports you sent:
> with vlan-aware bridges the promisc flag of the ethernet interface (the
> bridge-port) is set to 0, when vlan-aware is not present it is set to
> 1.
> This explains the symptoms you're seeing, and why running tcpdump fixes
> it
> 
> FWIW: I think simply starting a guest would have fixed the issue as
> well
> (when a second bridge_port gets added the kernel sets the promisc flag
> on
> the port correctly)
> 
> As Wolfgang wrote - we'll look into it and will hopefully come up with
> a
> sensible solution.
> 
> Thanks for the beta-test and the report!
> 
> 





^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PVE-User] Proxmox VE 7.0 (beta) released!
  2021-06-29 12:04         ` Mark Schouten
@ 2021-06-29 13:31           ` Stoiko Ivanov
  2021-06-29 13:51             ` alexandre derumier
  2021-06-29 14:14             ` Thomas Lamprecht
  0 siblings, 2 replies; 16+ messages in thread
From: Stoiko Ivanov @ 2021-06-29 13:31 UTC (permalink / raw)
  To: Mark Schouten; +Cc: Proxmox VE user list, Thomas Lamprecht

On Tue, 29 Jun 2021 14:04:05 +0200
Mark Schouten <mark@tuxis.nl> wrote:

> Hi,
> 
> On 29-06-2021 at 12:31, Thomas Lamprecht wrote:
> >> I do not completely understand why that fixes it though.  Commenting out MACAddressPolicy=persistent helps, but why?
> >>  
> > 
> > Because duplicate MAC addresses are not ideal, to say the least?  
> 
> That I understand. :)
> 
> But, the cluster interface works when bridge_vlan_aware is off, 
> regardless of the MacAddressPolicy setting.
> 

We managed to find a reproducer - my current guess is that it might have
something to do with intel NIC drivers or some changes in ifupdown2 (or
udev, or in their interaction ;) - Sadly if tcpdump fixes the issues, it
makes debugging quite hard :)

In any case - as can also be seen in the 2 reports you sent:
with vlan-aware bridges the promisc flag of the ethernet interface (the
bridge-port) is set to 0; when vlan-aware is not present it is set to 1.
This explains the symptoms you're seeing, and why running tcpdump fixes it.

FWIW: I think simply starting a guest would have fixed the issue as well
(when a second bridge_port gets added the kernel sets the promisc flag on
the port correctly)
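Presumably forcing promisc on the port by hand would also work as a stopgap -
essentially what tcpdump does as a side effect (eno2 just as an example port name):

  ip link set dev eno2 promisc on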

As Wolfgang wrote - we'll look into it and will hopefully come up with a
sensible solution.

Thanks for the beta-test and the report!




^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PVE-User] Proxmox VE 7.0 (beta) released!
  2021-06-29 10:31       ` Thomas Lamprecht
@ 2021-06-29 12:04         ` Mark Schouten
  2021-06-29 13:31           ` Stoiko Ivanov
  0 siblings, 1 reply; 16+ messages in thread
From: Mark Schouten @ 2021-06-29 12:04 UTC (permalink / raw)
  To: Thomas Lamprecht, Proxmox VE user list

Hi,

> On 29-06-2021 at 12:31, Thomas Lamprecht wrote:
>> I do not completely understand why that fixes it though.  Commenting out MACAddressPolicy=persistent helps, but why?
>>
> 
> Because duplicate MAC addresses are not ideal, to say the least?

That I understand. :)

But, the cluster interface works when bridge_vlan_aware is off, 
regardless of the MacAddressPolicy setting.

-- 
Mark Schouten
CTO, Tuxis B.V. | https://www.tuxis.nl/
<mark@tuxis.nl> | +31 318 200208



^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PVE-User] Proxmox VE 7.0 (beta) released!
  2021-06-29 10:06     ` Mark Schouten
@ 2021-06-29 10:31       ` Thomas Lamprecht
  2021-06-29 12:04         ` Mark Schouten
  0 siblings, 1 reply; 16+ messages in thread
From: Thomas Lamprecht @ 2021-06-29 10:31 UTC (permalink / raw)
  To: Mark Schouten, Proxmox VE user list

On 29.06.21 12:06, Mark Schouten wrote:
> Hi,
> 
> On 29-06-2021 at 11:46, Thomas Lamprecht wrote:
>> Do you have some FW rules regarding MAC-Addresses or the like?
>> As the MAC-Address selection changed in Proxmox VE 7 due to a new default
>> in systemd's network link policy, as listed in our known issues[0].
> 
> There is no firewall configured on this cluster. On Stoiko's advice, I changed the systemd-link-settings and now everything works again.

Ah yeah, that advice was not posted to the list initially, so I did not see that...

> 
> I do not completely understand why that fixes it though.  Commenting out MACAddressPolicy=persistent helps, but why?
> 

Because duplicate MAC addresses are not ideal, to say the least?

I.e., quoting the second part of my original reply again:

> It's now no longer the one of the first port, but derived from the interface
> name and `/etc/machine-id`, which in combination should be unique but also
> persistent.
> 
> But, for some ISO releases (4.0 to 5.3) the machine-id for the installed host
> was not always re-generated, which could result in duplication of a MAC for
> identically named interfaces on two hosts.
> We try to actively catch and fix that on upgrade by checking if the ID is one
> of the known static ones (it's just a handful of possible IDs).
> 
> But if one cloned a machine (e.g., a colleague ran into this in a demo
> virtualized PVE test cluster they cloned from a template) that ID will be
> duplicated and thus cause problems.
> That could be easily checked by comparing the `/etc/machine-id` content and
> be fixed by re-generation[1].
> 




^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PVE-User] Proxmox VE 7.0 (beta) released!
  2021-06-29  9:46   ` Thomas Lamprecht
@ 2021-06-29 10:06     ` Mark Schouten
  2021-06-29 10:31       ` Thomas Lamprecht
  0 siblings, 1 reply; 16+ messages in thread
From: Mark Schouten @ 2021-06-29 10:06 UTC (permalink / raw)
  To: Thomas Lamprecht, Proxmox VE user list

Hi,

On 29-06-2021 at 11:46, Thomas Lamprecht wrote:
> Do you have some FW rules regarding MAC-Addresses or the like?
> As the MAC-Address selection changed in Proxmox VE 7 due to a new default
> in systemd's network link policy, as listed in our known issues[0].

There is no firewall configured on this cluster. On Stoiko's advice, I 
changed the systemd-link-settings and now everything works again.

I do not completely understand why that fixes it though.  Commenting out 
MACAddressPolicy=persistent helps, but why?
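FWIW, the effect should be roughly the same as overriding the shipped default
with something like this (NamePolicy line copied from the stock 99-default.link,
only the MAC handling switched off):

  # /etc/systemd/network/99-default.link
  [Match]
  OriginalName=*

  [Link]
  NamePolicy=keep kernel database onboard slot path
  MACAddressPolicy=none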

-- 
Mark Schouten
CTO, Tuxis B.V. | https://www.tuxis.nl/
<mark@tuxis.nl> | +31 318 200208



^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PVE-User] Proxmox VE 7.0 (beta) released!
  2021-06-29  8:05 ` Mark Schouten
  2021-06-29  8:23   ` Stoiko Ivanov
@ 2021-06-29  9:46   ` Thomas Lamprecht
  2021-06-29 10:06     ` Mark Schouten
  2021-07-02 20:57   ` Thomas Lamprecht
  2 siblings, 1 reply; 16+ messages in thread
From: Thomas Lamprecht @ 2021-06-29  9:46 UTC (permalink / raw)
  To: Proxmox VE user list, Mark Schouten

Hi,

On 29.06.21 10:05, Mark Schouten wrote:
> On 24-06-2021 at 15:16, Martin Maurer wrote:
>> We are pleased to announce the first beta release of Proxmox Virtual Environment 7.0! The 7.x family is based on the great Debian 11 "Bullseye" and comes with a 5.11 kernel, QEMU 6.0, LXC 4.0, OpenZFS 2.0.4.
> 
> I just upgraded a node in our demo cluster and all seemed fine. Except for non-working cluster network. I was unable to ping the node through the cluster interface, pvecm saw no other nodes and ceph was broken.
> 
> However, if I ran tcpdump, ping started working, but not the rest.
> 
> Interesting situation, which I 'fixed' by disabling vlan-aware-bridge for that interface. After the reboot, everything works (AFAICS).
> 
> If Proxmox wants to debug this, feel free to reach out to me, I can grant you access to this node so you can check it out.
> 

Do you have some FW rules regarding MAC-Addresses or the like?
As the MAC-Address selection changed in Proxmox VE 7 due to a new default
in systemd's network link policy, as listed in our known issues[0].

It's now no longer the one of the first port, but derived from the interface
name and `/etc/machine-id`, which in combination should be unique but also
persistent.

But, for some ISO releases (4.0 to 5.3) the machine-id for the installed host
was not always re-generated, which could result in duplication of a MAC for
identically named interfaces on two hosts.
We try to actively catch and fix that on upgrade by checking if the ID is one
of the known static ones (it's just a handful of possible IDs).

But if one cloned a machine (e.g., a colleague ran into this in a demo
virtualized PVE test cluster they cloned from a template) that ID will be
duplicated and thus cause problems.
That could be easily checked by comparing the `/etc/machine-id` content and
be fixed by re-generation[1].
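Roughly, i.e. compare and, if needed, regenerate (see [1] for the full
procedure and caveats):

  cat /etc/machine-id                               # should differ on every node/clone
  rm /etc/machine-id && systemd-machine-id-setup    # regenerate a fresh ID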

Just noting that for completeness' sake, to avoid more investigation if it's
just that.

- Thomas


[0]: https://pve.proxmox.com/wiki/Roadmap#7.0-beta-known-issues
[1]: https://wiki.debian.org/MachineId#machine_id_and_cloned_systems.2C_generating_a_new_machine_id




^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PVE-User] Proxmox VE 7.0 (beta) released!
  2021-06-29  8:23   ` Stoiko Ivanov
@ 2021-06-29  8:34     ` Mark Schouten
  0 siblings, 0 replies; 16+ messages in thread
From: Mark Schouten @ 2021-06-29  8:34 UTC (permalink / raw)
  To: Stoiko Ivanov; +Cc: Proxmox VE user list

[-- Attachment #1: Type: text/plain, Size: 594 bytes --]

Hi Stoiko,

On 29-06-2021 at 10:23, Stoiko Ivanov wrote:
>> I just upgraded a node in our demo cluster and all seemed fine. Except
>> for non-working cluster network. I was unable to ping the node through
>> the cluster interface, pvecm saw no other nodes and ceph was broken.
> Thanks for the report - could you provide some details on the upgraded
> node? Mostly which NICs are used - but also the complete hardware setup
> (If you prefer you can send me a pvereport to my e-mail)


See attached!

-- 
Mark Schouten
CTO, Tuxis B.V. | https://www.tuxis.nl/
<mark@tuxis.nl> | +31 318 200208

[-- Attachment #2: pvereport_novlanaware.txt --]
[-- Type: text/plain, Size: 65418 bytes --]


==== general system info ====

# hostname
node06

# pveversion --verbose
proxmox-ve: 7.0-2 (running kernel: 5.11.22-1-pve)
pve-manager: 7.0-5 (running version: 7.0-5/cce9b25f)
pve-kernel-5.11: 7.0-3
pve-kernel-helper: 7.0-3
pve-kernel-5.4: 6.4-3
pve-kernel-5.11.22-1-pve: 5.11.22-1
pve-kernel-5.4.119-1-pve: 5.4.119-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph: 15.2.13-pve1
ceph-fuse: 15.2.13-pve1
corosync: 3.1.2-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.0.0-1+pve5
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.21-pve1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 7.0-3
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-4
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-2
libpve-storage-perl: 7.0-6
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-1
lxcfs: 4.0.8-pve1
novnc-pve: 1.2.0-3
proxmox-backup-client: 1.1.10-1
proxmox-backup-file-restore: 1.1.10-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.1-4
pve-cluster: 7.0-2
pve-container: 4.0-3
pve-docs: 7.0-3
pve-edk2-firmware: 3.20200531-1
pve-firewall: 4.2-2
pve-firmware: 3.2-4
pve-ha-manager: 3.2-2
pve-i18n: 2.3-1
pve-qemu-kvm: 6.0.0-2
pve-xtermjs: 4.12.0-1
qemu-server: 7.0-5
smartmontools: 7.2-pve2
spiceterm: 3.2-2
vncterm: 1.7-1
zfsutils-linux: 2.0.4-pve1

# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
2a03:7900:111::dc:6 node06.demo.customers.tuxis.net node06

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
#2a03:7900:111:0:90d4:a7ff:fe7f:ccf6 pbs.tuxis.net

# pvesubscription get
message: There is no subscription key
serverid: F3A59435D1B87C5A2460F965646A3177
status: NotFound
url: https://www.proxmox.com/proxmox-ve/pricing

# cat /etc/apt/sources.list
## Managed via Ansible

deb http://debmirror.tuxis.nl/debian/ bullseye main contrib non-free
deb-src http://debmirror.tuxis.nl/debian/ bullseye main contrib non-free
deb http://security.debian.org/ bullseye-security main contrib non-free
deb-src http://security.debian.org/ bullseye-security main contrib non-free
deb http://debmirror.tuxis.nl/debian/ bullseye-updates main contrib non-free
deb-src http://debmirror.tuxis.nl/debian/ bullseye-updates main contrib non-free

# cat /etc/apt/sources.list.d/pvetest-for-beta.list
deb http://download.proxmox.com/debian/pve bullseye pvetest


# cat /etc/apt/sources.list.d/ceph.list
deb http://download.proxmox.com/debian/ceph-octopus bullseye main


# cat /etc/apt/sources.list.d/apt_tuxis_nl_tuxis.list
deb https://apt.tuxis.nl/tuxis/ tuxis-cron main
deb https://apt.tuxis.nl/tuxis/ monitoring main
deb https://apt.tuxis.nl/tuxis/ pmrb main


# lscpu
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
Address sizes:                   46 bits physical, 48 bits virtual
CPU(s):                          8
On-line CPU(s) list:             0-7
Thread(s) per core:              2
Core(s) per socket:              4
Socket(s):                       1
NUMA node(s):                    1
Vendor ID:                       GenuineIntel
CPU family:                      6
Model:                           63
Model name:                      Intel(R) Xeon(R) CPU E5-1620 v3 @ 3.50GHz
Stepping:                        2
CPU MHz:                         3600.000
CPU max MHz:                     3600.0000
CPU min MHz:                     1200.0000
BogoMIPS:                        7000.21
Virtualization:                  VT-x
L1d cache:                       128 KiB
L1i cache:                       128 KiB
L2 cache:                        1 MiB
L3 cache:                        10 MiB
NUMA node0 CPU(s):               0-7
Vulnerability Itlb multihit:     KVM: Mitigation: VMX disabled
Vulnerability L1tf:              Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds:               Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown:          Mitigation; PTI
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:        Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:        Mitigation; Full generic retpoline, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected
Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm cpuid_fault epb invpcid_single pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm ida arat pln pts md_clear flush_l1d

# pvesh get /cluster/resources --type node --output-format=yaml
---
- cpu: 0.011763712868024
  disk: 2595618816
  id: node/node06
  level: ''
  maxcpu: 8
  maxdisk: 115451363328
  maxmem: 67322179584
  mem: 3120074752
  node: node06
  status: online
  type: node
  uptime: 2238
- cpu: 0.188962970750996
  disk: 21875785728
  id: node/node04
  level: ''
  maxcpu: 8
  maxdisk: 101597184000
  maxmem: 67331584000
  mem: 23079858176
  node: node04
  status: online
  type: node
  uptime: 13969251
- cpu: 0.0102829414157591
  disk: 3339059200
  id: node/node05
  level: ''
  maxcpu: 8
  maxdisk: 115422265344
  maxmem: 67331592192
  mem: 8491147264
  node: node05
  status: online
  type: node
  uptime: 2830727

==== overall system load info ====

# top -b -c -w512 -n 1 -o TIME | head -n 30
top - 10:26:43 up 37 min,  1 user,  load average: 0.13, 0.08, 0.06
Tasks: 295 total,   1 running, 294 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.8 us,  1.5 sy,  0.0 ni, 97.7 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :  64203.4 total,  60470.7 free,   2954.7 used,    778.0 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.  60187.0 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
   1647 root      rt   0  571448 178096  53256 S   6.2   0.3   0:53.45 /usr/sbin/corosync -f
   1663 ceph      20   0 1400308 570700  28140 S   0.0   0.9   0:39.35 /usr/bin/ceph-osd -f --cluster ceph --id 5 --setuser ceph --setgroup ceph
   1656 ceph      20   0 1298416 546100  28064 S   0.0   0.8   0:27.35 /usr/bin/ceph-osd -f --cluster ceph --id 4 --setuser ceph --setgroup ceph
   1644 ceph      20   0  507260 123516  20868 S   0.0   0.2   0:24.52 /usr/bin/ceph-mon -f --cluster ceph --id node06 --setuser ceph --setgroup ceph
   1762 root      20   0  270424  95672   9780 S   0.0   0.1   0:23.76 pvestatd
   1888 www-data  20   0  346704 127472   8076 S   0.0   0.2   0:09.48 pveproxy worker
   1887 www-data  20   0  346736 126700   7424 S   0.0   0.2   0:08.86 pveproxy worker
   1889 www-data  20   0  346604 126552   7296 S   0.0   0.2   0:07.14 pveproxy worker
   1763 root      20   0  264748  84656   4284 S   0.0   0.1   0:06.55 pve-firewall
   1550 root      20   0  618448  55796  50792 S   0.0   0.1   0:05.45 /usr/bin/pmxcfs
   1643 ceph      20   0  503992 176144  19884 S   0.0   0.3   0:04.03 /usr/bin/ceph-mgr -f --cluster ceph --id node06 --setuser ceph --setgroup ceph
   1789 root      20   0  345064 124220   6524 S   0.0   0.2   0:02.80 pvedaemon worker
     13 root      20   0       0      0      0 I   0.0   0.0   0:02.48 [rcu_sched]
   1788 root      20   0  345068 123856   6204 S   0.0   0.2   0:02.40 pvedaemon worker
   1790 root      20   0  345064 123804   6168 S   0.0   0.2   0:02.40 pvedaemon worker
      1 root      20   0  165772   8880   5280 S   0.0   0.0   0:02.17 /sbin/init
   1335 Debian-+  20   0   24352  10532   5780 S   0.0   0.0   0:01.40 /usr/sbin/snmpd -LOw -u Debian-snmp -g Debian-snmp -I -smux mteTrigger mteTriggerConf -f -p /run/snmpd.pid
   1642 ceph      20   0  302080  27288  11188 S   0.0   0.0   0:00.85 /usr/bin/ceph-mds -f --cluster ceph --id node06 --setuser ceph --setgroup ceph
    421 root       1 -19       0      0      0 S   0.0   0.0   0:00.82 [z_wr_iss]
    422 root       1 -19       0      0      0 S   0.0   0.0   0:00.82 [z_wr_iss]
    423 root       1 -19       0      0      0 S   0.0   0.0   0:00.82 [z_wr_iss]
    424 root       1 -19       0      0      0 S   0.0   0.0   0:00.82 [z_wr_iss]
    425 root       1 -19       0      0      0 S   0.0   0.0   0:00.82 [z_wr_iss]

# head /proc/pressure/*
==> /proc/pressure/cpu <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=5430541

==> /proc/pressure/io <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=1300222
full avg10=0.00 avg60=0.00 avg300=0.00 total=1149011

==> /proc/pressure/memory <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=0
full avg10=0.00 avg60=0.00 avg300=0.00 total=0

==== info about storage ====

# cat /etc/pve/storage.cfg
dir: local
	path /var/lib/vz
	content images,backup,iso,vztmpl
	prune-backups keep-last=1
	shared 0

zfspool: local-zfs
	pool rpool/data
	content rootdir,images
	sparse 1

rbd: Ceph
	content rootdir,images
	krbd 0
	pool Ceph

cephfs: CephFS
	path /mnt/pve/CephFS
	content iso,snippets,backup,vztmpl
	prune-backups keep-last=1

dir: Tuxis_Marketplace
	path /mnt/pve/Tuxis_Marketplace
	content iso,backup
	is_mountpoint yes
	mkdir 0
	shared 1

dir: Tuxis_Marketplace_Beta
	path /mnt/pve/Tuxis_Marketplace_Beta
	content backup,iso
	is_mountpoint yes
	mkdir 0
	shared 1

rbd: CephKRBD
	content images
	krbd 1
	pool Ceph

pbs: pbs002.tuxis.nl
	datastore DB0220_demo
	server pbs002.tuxis.nl
	content backup
	encryption-key 68:d5:89:f6:f1:f4:67:59:1b:74:6a:78:99:11:ad:09:a0:b0:12:db:43:8d:41:19:af:38:90:77:12:c1:6d:f8
	fingerprint 45:f8:79:eb:27:96:88:6b:29:ad:21:00:13:c6:bd:b8:30:f6:f3:9b:f0:bf:dd:f3:ad:f0:09:d5:d2:9a:34:79
	prune-backups keep-last=1
	username DB0220@pbs


# pvesm status
Name                          Type     Status           Total            Used       Available        %
Ceph                           rbd     active       802501642       337252874       465248768   42.03%
CephFS                      cephfs     active       500432896        35184640       465248256    7.03%
CephKRBD                       rbd     active       802501642       337252874       465248768   42.03%
Tuxis_Marketplace              dir     active    274877906944               0    274877906944    0.00%
Tuxis_Marketplace_Beta         dir     active    274877906944               0    274877906944    0.00%
local                          dir     active       112745472         2534784       110210688    2.25%
local-zfs                  zfspool     active       110210824              96       110210728    0.00%
pbs002.tuxis.nl                pbs     active      1701798656        10516096      1691282560    0.62%

# cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc defaults 0 0

# findmnt --ascii
TARGET                            SOURCE                                                  FSTYPE     OPTIONS
/                                 rpool/ROOT/pve-1                                        zfs        rw,relatime,xattr,noacl
|-/sys                            sysfs                                                   sysfs      rw,nosuid,nodev,noexec,relatime
| |-/sys/kernel/security          securityfs                                              securityfs rw,nosuid,nodev,noexec,relatime
| |-/sys/fs/cgroup                cgroup2                                                 cgroup2    rw,nosuid,nodev,noexec,relatime
| |-/sys/fs/pstore                pstore                                                  pstore     rw,nosuid,nodev,noexec,relatime
| |-/sys/fs/bpf                   none                                                    bpf        rw,nosuid,nodev,noexec,relatime,mode=700
| |-/sys/kernel/debug             debugfs                                                 debugfs    rw,nosuid,nodev,noexec,relatime
| |-/sys/kernel/tracing           tracefs                                                 tracefs    rw,nosuid,nodev,noexec,relatime
| |-/sys/fs/fuse/connections      fusectl                                                 fusectl    rw,nosuid,nodev,noexec,relatime
| `-/sys/kernel/config            configfs                                                configfs   rw,nosuid,nodev,noexec,relatime
|-/proc                           proc                                                    proc       rw,relatime
| `-/proc/sys/fs/binfmt_misc      systemd-1                                               autofs     rw,relatime,fd=30,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=561
|-/dev                            udev                                                    devtmpfs   rw,nosuid,relatime,size=32838164k,nr_inodes=8209541,mode=755,inode64
| |-/dev/pts                      devpts                                                  devpts     rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000
| |-/dev/shm                      tmpfs                                                   tmpfs      rw,nosuid,nodev,inode64
| |-/dev/mqueue                   mqueue                                                  mqueue     rw,nosuid,nodev,noexec,relatime
| `-/dev/hugepages                hugetlbfs                                               hugetlbfs  rw,relatime,pagesize=2M
|-/run                            tmpfs                                                   tmpfs      rw,nosuid,nodev,noexec,relatime,size=6574432k,mode=755,inode64
| |-/run/lock                     tmpfs                                                   tmpfs      rw,nosuid,nodev,noexec,relatime,size=5120k,inode64
| |-/run/rpc_pipefs               sunrpc                                                  rpc_pipefs rw,relatime
| `-/run/user/0                   tmpfs                                                   tmpfs      rw,nosuid,nodev,relatime,size=6574428k,nr_inodes=1643607,mode=700,inode64
|-/rpool                          rpool                                                   zfs        rw,noatime,xattr,noacl
| |-/rpool/ROOT                   rpool/ROOT                                              zfs        rw,noatime,xattr,noacl
| `-/rpool/data                   rpool/data                                              zfs        rw,noatime,xattr,noacl
|-/var/lib/ceph/osd/ceph-4        tmpfs                                                   tmpfs      rw,relatime,inode64
|-/var/lib/ceph/osd/ceph-5        tmpfs                                                   tmpfs      rw,relatime,inode64
|-/mnt/pve/Tuxis_Marketplace_Beta s3fs                                                    fuse.s3fs  rw,nosuid,nodev,relatime,user_id=0,group_id=0
|-/mnt/pve/Tuxis_Marketplace      s3fs                                                    fuse.s3fs  rw,nosuid,nodev,relatime,user_id=0,group_id=0
|-/etc/pve                        /dev/fuse                                               fuse       rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other
|-/var/lib/lxcfs                  lxcfs                                                   fuse.lxcfs rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other
`-/mnt/pve/CephFS                 [fdb0:5bd1:dc::4],[fdb0:5bd1:dc::5],[fdb0:5bd1:dc::6]:/ ceph       rw,relatime,name=admin,secret=<hidden>,acl

# df --human
Filesystem                                               Size  Used Avail Use% Mounted on
udev                                                      32G     0   32G   0% /dev
tmpfs                                                    6.3G  1.3M  6.3G   1% /run
rpool/ROOT/pve-1                                         108G  2.5G  106G   3% /
tmpfs                                                     32G   63M   32G   1% /dev/shm
tmpfs                                                    5.0M     0  5.0M   0% /run/lock
rpool                                                    106G  128K  106G   1% /rpool
rpool/ROOT                                               106G  128K  106G   1% /rpool/ROOT
rpool/data                                               106G  128K  106G   1% /rpool/data
tmpfs                                                     32G   28K   32G   1% /var/lib/ceph/osd/ceph-4
tmpfs                                                     32G   28K   32G   1% /var/lib/ceph/osd/ceph-5
s3fs                                                     256T     0  256T   0% /mnt/pve/Tuxis_Marketplace_Beta
s3fs                                                     256T     0  256T   0% /mnt/pve/Tuxis_Marketplace
/dev/fuse                                                 30M   40K   30M   1% /etc/pve
[fdb0:5bd1:dc::4],[fdb0:5bd1:dc::5],[fdb0:5bd1:dc::6]:/  478G   34G  444G   8% /mnt/pve/CephFS
tmpfs                                                    6.3G  4.0K  6.3G   1% /run/user/0

==== info about network ====

# ip -details -statistics address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0 minmtu 0 maxmtu 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
    RX: bytes  packets  errors  dropped missed  mcast   
    55254397   58681    0       0       0       0       
    TX: bytes  packets  errors  dropped carrier collsns 
    55254397   58681    0       0       0       0       
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether 0c:c4:7a:d9:1d:f6 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 9216 
    bridge_slave state forwarding priority 32 cost 4 hairpin off guard off root_block off fastleave off learning on flood on port_id 0x8001 port_no 0x1 designated_port 32769 designated_cost 0 designated_bridge 8000.52:ed:27:6f:7b:f3 designated_root 8000.52:ed:27:6f:7b:f3 hold_timer    0.00 message_age_timer    0.00 forward_delay_timer    0.00 topology_change_ack 0 config_pending 0 proxy_arp off proxy_arp_wifi off mcast_router 1 mcast_fast_leave off mcast_flood on mcast_to_unicast off neigh_suppress off group_fwd_mask 0 group_fwd_mask_str 0x0 vlan_tunnel off isolated off numtxqueues 8 numrxqueues 8 gso_max_size 65536 gso_max_segs 65535 
    altname enp6s0
    RX: bytes  packets  errors  dropped missed  mcast   
    15982113   105257   0       74      0       5642    
    TX: bytes  packets  errors  dropped carrier collsns 
    12624090   36080    0       0       0       0       
3: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr999 state UP group default qlen 1000
    link/ether 0c:c4:7a:d9:1d:f7 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 9216 
    bridge_slave state forwarding priority 32 cost 4 hairpin off guard off root_block off fastleave off learning on flood on port_id 0x8001 port_no 0x1 designated_port 32769 designated_cost 0 designated_bridge 8000.16:2d:db:6c:6d:8a designated_root 8000.16:2d:db:6c:6d:8a hold_timer    0.00 message_age_timer    0.00 forward_delay_timer    0.00 topology_change_ack 0 config_pending 0 proxy_arp off proxy_arp_wifi off mcast_router 1 mcast_fast_leave off mcast_flood on mcast_to_unicast off neigh_suppress off group_fwd_mask 0 group_fwd_mask_str 0x0 vlan_tunnel off isolated off numtxqueues 8 numrxqueues 8 gso_max_size 65536 gso_max_segs 65535 
    altname enp7s0
    RX: bytes  packets  errors  dropped missed  mcast   
    489871250  699399   0       0       0       33      
    TX: bytes  packets  errors  dropped carrier collsns 
    268886467  574470   0       0       0       0       
4: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 52:ed:27:6f:7b:f3 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 65535 
    bridge forward_delay 0 hello_time 200 max_age 2000 ageing_time 30000 stp_state 0 priority 32768 vlan_filtering 0 vlan_protocol 802.1Q bridge_id 8000.52:ed:27:6f:7b:f3 designated_root 8000.52:ed:27:6f:7b:f3 root_port 0 root_path_cost 0 topology_change 0 topology_change_detected 0 hello_timer    0.00 tcn_timer    0.00 topology_change_timer    0.00 gc_timer   23.76 vlan_default_pvid 1 vlan_stats_enabled 0 vlan_stats_per_port 0 group_fwd_mask 0 group_address 01:80:c2:00:00:00 mcast_snooping 1 mcast_router 1 mcast_query_use_ifaddr 0 mcast_querier 0 mcast_hash_elasticity 16 mcast_hash_max 4096 mcast_last_member_count 2 mcast_startup_query_count 2 mcast_last_member_interval 100 mcast_membership_interval 26000 mcast_querier_interval 25500 mcast_query_interval 12500 mcast_query_response_interval 1000 mcast_startup_query_interval 3124 mcast_stats_enabled 0 mcast_igmp_version 2 mcast_mld_version 1 nf_call_iptables 0 nf_call_ip6tables 0 nf_call_arptables 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 
    inet6 2a03:7900:111::dc:6/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::50ed:27ff:fe6f:7bf3/64 scope link 
       valid_lft forever preferred_lft forever
    RX: bytes  packets  errors  dropped missed  mcast   
    14379299   103674   0       0       0       5565    
    TX: bytes  packets  errors  dropped carrier collsns 
    12356802   32972    0       0       0       0       
5: vmbr999: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 16:2d:db:6c:6d:8a brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 65535 
    bridge forward_delay 0 hello_time 200 max_age 2000 ageing_time 30000 stp_state 0 priority 32768 vlan_filtering 0 vlan_protocol 802.1Q bridge_id 8000.16:2d:db:6c:6d:8a designated_root 8000.16:2d:db:6c:6d:8a root_port 0 root_path_cost 0 topology_change 0 topology_change_detected 0 hello_timer    0.00 tcn_timer    0.00 topology_change_timer    0.00 gc_timer  160.33 vlan_default_pvid 1 vlan_stats_enabled 0 vlan_stats_per_port 0 group_fwd_mask 0 group_address 01:80:c2:00:00:00 mcast_snooping 1 mcast_router 1 mcast_query_use_ifaddr 0 mcast_querier 0 mcast_hash_elasticity 16 mcast_hash_max 4096 mcast_last_member_count 2 mcast_startup_query_count 2 mcast_last_member_interval 100 mcast_membership_interval 26000 mcast_querier_interval 25500 mcast_query_interval 12500 mcast_query_response_interval 1000 mcast_startup_query_interval 3124 mcast_stats_enabled 0 mcast_igmp_version 2 mcast_mld_version 1 nf_call_iptables 0 nf_call_ip6tables 0 nf_call_arptables 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 
    inet6 fdb0:5bd1:dc::6/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::142d:dbff:fe6c:6d8a/64 scope link 
       valid_lft forever preferred_lft forever
    RX: bytes  packets  errors  dropped missed  mcast   
    465681104  499419   0       0       0       33      
    TX: bytes  packets  errors  dropped carrier collsns 
    260240217  473931   0       0       0       0       

# ip -details -4 route show

# ip -details -6 route show
unicast ::1 dev lo proto kernel scope global metric 256 pref medium
unicast 2a03:7900:111::/64 dev vmbr0 proto kernel scope global metric 256 pref medium
unicast fdb0:5bd1:dc::/64 dev vmbr999 proto kernel scope global metric 256 pref medium
unicast fdb0:5bd1:cde::/64 via fdb0:5bd1:dc::ffff dev vmbr999 proto boot scope global metric 1024 pref medium
unicast fe80::/64 dev vmbr999 proto kernel scope global metric 256 pref medium
unicast fe80::/64 dev vmbr0 proto kernel scope global metric 256 pref medium
unicast default via 2a03:7900:111::1 dev vmbr0 proto kernel scope global metric 1024 onlink pref medium

# cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface eno1 inet6 manual

iface eno2 inet6 manual

auto vmbr0
iface vmbr0 inet6 static
	address 2a03:7900:111::dc:6/64
	gateway 2a03:7900:111::1
	bridge-ports eno1
	bridge-stp off
	bridge-fd 0

auto vmbr999
iface vmbr999 inet6 static
	address fdb0:5bd1:dc::6/64
	bridge-ports eno2
	bridge-stp off
	bridge-fd 0
	#bridge-vlan-aware yes
	#bridge-vids 2-4094
        post-up /usr/sbin/ip ro add fdb0:5bd1:cde::/64 via fdb0:5bd1:dc::ffff


==== info about virtual guests ====

# qm list

# pct list

==== info about firewall ====

# cat /etc/pve/local/host.fw
cat: /etc/pve/local/host.fw: No such file or directory

# iptables-save
# Generated by iptables-save v1.8.7 on Tue Jun 29 10:26:45 2021
*raw
:PREROUTING ACCEPT [10644:3275736]
:OUTPUT ACCEPT [8292:3186298]
COMMIT
# Completed on Tue Jun 29 10:26:45 2021
# Generated by iptables-save v1.8.7 on Tue Jun 29 10:26:45 2021
*filter
:INPUT ACCEPT [8292:3186298]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [8293:3186338]
COMMIT
# Completed on Tue Jun 29 10:26:45 2021

==== info about cluster ====

# pvecm nodes

Membership information
----------------------
    Nodeid      Votes Name
         1          1 node04
         2          1 node05
         3          1 node06 (local)

# pvecm status
Cluster information
-------------------
Name:             Demo
Config Version:   3
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Tue Jun 29 10:26:46 2021
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000003
Ring ID:          1.f47
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2  
Flags:            Quorate 

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 fdb0:5bd1:dc::4%32617
0x00000002          1 fdb0:5bd1:dc::5%32617
0x00000003          1 fdb0:5bd1:dc::6%32617 (local)

# cat /etc/pve/corosync.conf 2>/dev/null
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: node04
    nodeid: 1
    quorum_votes: 1
    ring0_addr: fdb0:5bd1:dc::4
  }
  node {
    name: node05
    nodeid: 2
    quorum_votes: 1
    ring0_addr: fdb0:5bd1:dc::5
  }
  node {
    name: node06
    nodeid: 3
    quorum_votes: 1
    ring0_addr: fdb0:5bd1:dc::6
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: Demo
  config_version: 3
  interface {
    linknumber: 0
  }
  ip_version: ipv4-6
  link_mode: passive
  secauth: on
  version: 2
}


# ha-manager status
quorum OK
master node06 (idle, Tue Jun  9 16:20:27 2020)
lrm node04 (idle, Tue Jun 29 10:26:43 2021)
lrm node05 (idle, Tue Jun 29 10:26:43 2021)
lrm node06 (idle, Tue Jun 29 10:26:41 2021)

==== info about hardware ====

# dmidecode -t bios
# dmidecode 3.3
Getting SMBIOS data from sysfs.
SMBIOS 3.0 present.

Handle 0x0000, DMI type 0, 24 bytes
BIOS Information
	Vendor: American Megatrends Inc.
	Version: 2.0a
	Release Date: 08/01/2016
	Address: 0xF0000
	Runtime Size: 64 kB
	ROM Size: 16 MB
	Characteristics:
		PCI is supported
		BIOS is upgradeable
		BIOS shadowing is allowed
		Boot from CD is supported
		Selectable boot is supported
		BIOS ROM is socketed
		EDD is supported
		5.25"/1.2 MB floppy services are supported (int 13h)
		3.5"/720 kB floppy services are supported (int 13h)
		3.5"/2.88 MB floppy services are supported (int 13h)
		Print screen service is supported (int 5h)
		8042 keyboard services are supported (int 9h)
		Serial services are supported (int 14h)
		Printer services are supported (int 17h)
		ACPI is supported
		USB legacy is supported
		BIOS boot specification is supported
		Targeted content distribution is supported
		UEFI is supported
	BIOS Revision: 5.6


# lspci -nnk
00:00.0 Host bridge [0600]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DMI2 [8086:2f00] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 DMI2 [15d9:0832]
00:01.0 PCI bridge [0604]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 1 [8086:2f02] (rev 02)
	Kernel driver in use: pcieport
00:01.1 PCI bridge [0604]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 1 [8086:2f03] (rev 02)
	Kernel driver in use: pcieport
00:03.0 PCI bridge [0604]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 3 [8086:2f08] (rev 02)
	Kernel driver in use: pcieport
00:03.2 PCI bridge [0604]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 3 [8086:2f0a] (rev 02)
	Kernel driver in use: pcieport
00:04.0 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DMA Channel 0 [8086:2f20] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 DMA Channel 0 [15d9:0832]
	Kernel driver in use: ioatdma
	Kernel modules: ioatdma
00:04.1 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DMA Channel 1 [8086:2f21] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 DMA Channel 1 [15d9:0832]
	Kernel driver in use: ioatdma
	Kernel modules: ioatdma
00:04.2 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DMA Channel 2 [8086:2f22] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 DMA Channel 2 [15d9:0832]
	Kernel driver in use: ioatdma
	Kernel modules: ioatdma
00:04.3 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DMA Channel 3 [8086:2f23] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 DMA Channel 3 [15d9:0832]
	Kernel driver in use: ioatdma
	Kernel modules: ioatdma
00:04.4 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DMA Channel 4 [8086:2f24] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 DMA Channel 4 [15d9:0832]
	Kernel driver in use: ioatdma
	Kernel modules: ioatdma
00:04.5 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DMA Channel 5 [8086:2f25] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 DMA Channel 5 [15d9:0832]
	Kernel driver in use: ioatdma
	Kernel modules: ioatdma
00:04.6 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DMA Channel 6 [8086:2f26] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 DMA Channel 6 [15d9:0832]
	Kernel driver in use: ioatdma
	Kernel modules: ioatdma
00:04.7 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DMA Channel 7 [8086:2f27] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 DMA Channel 7 [15d9:0832]
	Kernel driver in use: ioatdma
	Kernel modules: ioatdma
00:05.0 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Address Map, VTd_Misc, System Management [8086:2f28] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Address Map, VTd_Misc, System Management [15d9:0832]
00:05.1 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Hot Plug [8086:2f29] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Hot Plug [15d9:0832]
00:05.2 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 RAS, Control Status and Global Errors [8086:2f2a] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 RAS, Control Status and Global Errors [15d9:0832]
00:05.4 PIC [0800]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 I/O APIC [8086:2f2c] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 I/O APIC [15d9:0832]
00:11.0 Unassigned class [ff00]: Intel Corporation C610/X99 series chipset SPSR [8086:8d7c] (rev 05)
	Subsystem: Super Micro Computer Inc X10SRL-F [15d9:0832]
00:11.4 SATA controller [0106]: Intel Corporation C610/X99 series chipset sSATA Controller [AHCI mode] [8086:8d62] (rev 05)
	Subsystem: Super Micro Computer Inc C610/X99 series chipset sSATA Controller [AHCI mode] [15d9:0832]
	Kernel driver in use: ahci
	Kernel modules: ahci
00:14.0 USB controller [0c03]: Intel Corporation C610/X99 series chipset USB xHCI Host Controller [8086:8d31] (rev 05)
	Subsystem: Super Micro Computer Inc X10SRL-F [15d9:0832]
	Kernel driver in use: xhci_hcd
	Kernel modules: xhci_pci
00:16.0 Communication controller [0780]: Intel Corporation C610/X99 series chipset MEI Controller #1 [8086:8d3a] (rev 05)
	Subsystem: Super Micro Computer Inc X10SRL-F [15d9:0832]
	Kernel modules: mei_me
00:16.1 Communication controller [0780]: Intel Corporation C610/X99 series chipset MEI Controller #2 [8086:8d3b] (rev 05)
	Subsystem: Super Micro Computer Inc X10SRL-F [15d9:0832]
00:1a.0 USB controller [0c03]: Intel Corporation C610/X99 series chipset USB Enhanced Host Controller #2 [8086:8d2d] (rev 05)
	Subsystem: Super Micro Computer Inc X10SRL-F [15d9:0832]
	Kernel driver in use: ehci-pci
	Kernel modules: ehci_pci
00:1c.0 PCI bridge [0604]: Intel Corporation C610/X99 series chipset PCI Express Root Port #1 [8086:8d10] (rev d5)
	Kernel driver in use: pcieport
00:1c.4 PCI bridge [0604]: Intel Corporation C610/X99 series chipset PCI Express Root Port #5 [8086:8d18] (rev d5)
	Kernel driver in use: pcieport
00:1c.5 PCI bridge [0604]: Intel Corporation C610/X99 series chipset PCI Express Root Port #6 [8086:8d1a] (rev d5)
	Kernel driver in use: pcieport
00:1c.6 PCI bridge [0604]: Intel Corporation C610/X99 series chipset PCI Express Root Port #7 [8086:8d1c] (rev d5)
	Kernel driver in use: pcieport
00:1d.0 USB controller [0c03]: Intel Corporation C610/X99 series chipset USB Enhanced Host Controller #1 [8086:8d26] (rev 05)
	Subsystem: Super Micro Computer Inc X10SRL-F [15d9:0832]
	Kernel driver in use: ehci-pci
	Kernel modules: ehci_pci
00:1f.0 ISA bridge [0601]: Intel Corporation C610/X99 series chipset LPC Controller [8086:8d44] (rev 05)
	Subsystem: Super Micro Computer Inc X10SRL-F [15d9:0832]
	Kernel driver in use: lpc_ich
	Kernel modules: lpc_ich
00:1f.2 SATA controller [0106]: Intel Corporation C610/X99 series chipset 6-Port SATA Controller [AHCI mode] [8086:8d02] (rev 05)
	Subsystem: Super Micro Computer Inc C610/X99 series chipset 6-Port SATA Controller [AHCI mode] [15d9:0832]
	Kernel driver in use: ahci
	Kernel modules: ahci
00:1f.3 SMBus [0c05]: Intel Corporation C610/X99 series chipset SMBus Controller [8086:8d22] (rev 05)
	Subsystem: Super Micro Computer Inc X10SRL-F [15d9:0832]
	Kernel driver in use: i801_smbus
	Kernel modules: i2c_i801
06:00.0 Ethernet controller [0200]: Intel Corporation I210 Gigabit Network Connection [8086:1533] (rev 03)
	DeviceName:  Intel Ethernet i210AT #1
	Subsystem: Super Micro Computer Inc I210 Gigabit Network Connection [15d9:1533]
	Kernel driver in use: igb
	Kernel modules: igb
07:00.0 Ethernet controller [0200]: Intel Corporation I210 Gigabit Network Connection [8086:1533] (rev 03)
	DeviceName:  Intel Ethernet i210AT #2
	Subsystem: Super Micro Computer Inc I210 Gigabit Network Connection [15d9:1533]
	Kernel driver in use: igb
	Kernel modules: igb
08:00.0 PCI bridge [0604]: ASPEED Technology, Inc. AST1150 PCI-to-PCI Bridge [1a03:1150] (rev 03)
09:00.0 VGA compatible controller [0300]: ASPEED Technology, Inc. ASPEED Graphics Family [1a03:2000] (rev 30)
	DeviceName:  ASPEED Video AST2400
	Subsystem: Super Micro Computer Inc X10SRL-F [15d9:0832]
	Kernel driver in use: ast
	Kernel modules: ast
ff:0b.0 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 R3 QPI Link 0 & 1 Monitoring [8086:2f81] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 R3 QPI Link 0 & 1 Monitoring [15d9:0832]
ff:0b.1 Performance counters [1101]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 R3 QPI Link 0 & 1 Monitoring [8086:2f36] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 R3 QPI Link 0 & 1 Monitoring [15d9:0832]
	Kernel driver in use: hswep_uncore
ff:0b.2 Performance counters [1101]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 R3 QPI Link 0 & 1 Monitoring [8086:2f37] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 R3 QPI Link 0 & 1 Monitoring [15d9:0832]
	Kernel driver in use: hswep_uncore
ff:0c.0 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Unicast Registers [8086:2fe0] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Unicast Registers [15d9:0832]
ff:0c.1 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Unicast Registers [8086:2fe1] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Unicast Registers [15d9:0832]
ff:0c.2 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Unicast Registers [8086:2fe2] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Unicast Registers [15d9:0832]
ff:0c.3 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Unicast Registers [8086:2fe3] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Unicast Registers [15d9:0832]
ff:0f.0 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Buffered Ring Agent [8086:2ff8] (rev 02)
ff:0f.1 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Buffered Ring Agent [8086:2ff9] (rev 02)
ff:0f.4 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 System Address Decoder & Broadcast Registers [8086:2ffc] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 System Address Decoder & Broadcast Registers [15d9:0832]
ff:0f.5 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 System Address Decoder & Broadcast Registers [8086:2ffd] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 System Address Decoder & Broadcast Registers [15d9:0832]
ff:0f.6 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 System Address Decoder & Broadcast Registers [8086:2ffe] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 System Address Decoder & Broadcast Registers [15d9:0832]
ff:10.0 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCIe Ring Interface [8086:2f1d] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 PCIe Ring Interface [15d9:0832]
ff:10.1 Performance counters [1101]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCIe Ring Interface [8086:2f34] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 PCIe Ring Interface [15d9:0832]
	Kernel driver in use: hswep_uncore
ff:10.5 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Scratchpad & Semaphore Registers [8086:2f1e] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Scratchpad & Semaphore Registers [15d9:0832]
ff:10.6 Performance counters [1101]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Scratchpad & Semaphore Registers [8086:2f7d] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Scratchpad & Semaphore Registers [15d9:0832]
ff:10.7 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Scratchpad & Semaphore Registers [8086:2f1f] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Scratchpad & Semaphore Registers [15d9:0832]
ff:12.0 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Home Agent 0 [8086:2fa0] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Home Agent 0 [15d9:0832]
ff:12.1 Performance counters [1101]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Home Agent 0 [8086:2f30] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Home Agent 0 [15d9:0832]
	Kernel driver in use: hswep_uncore
ff:13.0 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Target Address, Thermal & RAS Registers [8086... (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Target Address, Thermal & RAS Registers [15d9:0832]
ff:13.1 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Target Address, Thermal & RAS Registers [8086... (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Target Address, Thermal & RAS Registers [15d9:0832]
ff:13.2 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel Target Address Decoder [8086:2faa] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel Target Address Decoder [15d9:0832]
ff:13.3 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel Target Address Decoder [8086:2fab] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel Target Address Decoder [15d9:0832]
ff:13.4 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel Target Address Decoder [8086:2fac] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel Target Address Decoder [15d9:0832]
ff:13.5 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel Target Address Decoder [8086:2fad] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel Target Address Decoder [15d9:0832]
ff:13.6 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DDRIO Channel 0/1 Broadcast [8086:2fae] (rev 02)
ff:13.7 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DDRIO Global Broadcast [8086:2faf] (rev 02)
ff:14.0 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 0 Thermal Control [8086:2fb0] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 0 Thermal Control [15d9:0832]
	Kernel driver in use: hswep_uncore
ff:14.1 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 1 Thermal Control [8086:2fb1] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 1 Thermal Control [15d9:0832]
	Kernel driver in use: hswep_uncore
ff:14.2 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 0 ERROR Registers [8086:2fb2] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 0 ERROR Registers [15d9:0832]
ff:14.3 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 1 ERROR Registers [8086:2fb3] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 1 ERROR Registers [15d9:0832]
ff:14.4 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DDRIO (VMSE) 0 & 1 [8086:2fbc] (rev 02)
ff:14.5 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DDRIO (VMSE) 0 & 1 [8086:2fbd] (rev 02)
ff:14.6 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DDRIO (VMSE) 0 & 1 [8086:2fbe] (rev 02)
ff:14.7 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DDRIO (VMSE) 0 & 1 [8086:2fbf] (rev 02)
ff:15.0 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 2 Thermal Control [8086:2fb4] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 2 Thermal Control [15d9:0832]
	Kernel driver in use: hswep_uncore
ff:15.1 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 3 Thermal Control [8086:2fb5] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 3 Thermal Control [15d9:0832]
	Kernel driver in use: hswep_uncore
ff:15.2 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 2 ERROR Registers [8086:2fb6] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 2 ERROR Registers [15d9:0832]
ff:15.3 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 3 ERROR Registers [8086:2fb7] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 3 ERROR Registers [15d9:0832]
ff:16.0 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 1 Target Address, Thermal & RAS Registers [8086... (rev 02)
ff:16.6 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DDRIO Channel 2/3 Broadcast [8086:2f6e] (rev 02)
ff:16.7 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DDRIO Global Broadcast [8086:2f6f] (rev 02)
ff:17.0 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 1 Channel 0 Thermal Control [8086:2fd0] (rev 02)
	Kernel driver in use: hswep_uncore
ff:17.4 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DDRIO (VMSE) 2 & 3 [8086:2fb8] (rev 02)
ff:17.5 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DDRIO (VMSE) 2 & 3 [8086:2fb9] (rev 02)
ff:17.6 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DDRIO (VMSE) 2 & 3 [8086:2fba] (rev 02)
ff:17.7 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DDRIO (VMSE) 2 & 3 [8086:2fbb] (rev 02)
ff:1e.0 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Power Control Unit [8086:2f98] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Power Control Unit [15d9:0832]
ff:1e.1 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Power Control Unit [8086:2f99] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Power Control Unit [15d9:0832]
ff:1e.2 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Power Control Unit [8086:2f9a] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Power Control Unit [15d9:0832]
ff:1e.3 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Power Control Unit [8086:2fc0] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Power Control Unit [15d9:0832]
ff:1e.4 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Power Control Unit [8086:2f9c] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Power Control Unit [15d9:0832]
ff:1f.0 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 VCU [8086:2f88] (rev 02)
ff:1f.2 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 VCU [8086:2f8a] (rev 02)

==== info about block devices ====

# lsblk --ascii
NAME                                                                                                  MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                                                                                                     8:0    0 111.8G  0 disk 
|-sda1                                                                                                  8:1    0  1007K  0 part 
|-sda2                                                                                                  8:2    0   512M  0 part 
`-sda3                                                                                                  8:3    0 111.3G  0 part 
sdb                                                                                                     8:16   0 111.8G  0 disk 
|-sdb1                                                                                                  8:17   0  1007K  0 part 
|-sdb2                                                                                                  8:18   0   512M  0 part 
`-sdb3                                                                                                  8:19   0 111.3G  0 part 
sdc                                                                                                     8:32   0 447.1G  0 disk 
`-ceph--33bdcbd7--07be--4373--97ca--0678dda8888d-osd--block--e2deed6d--596f--4837--b14e--88c9afdbe531 253:0    0 447.1G  0 lvm  
sdd                                                                                                     8:48   0 447.1G  0 disk 
`-ceph--97bdf879--bbf1--41ba--8563--81abe42cf617-osd--block--55199458--8b33--44f2--b4d2--3a876072a622 253:1    0 447.1G  0 lvm  

# ls -l /dev/disk/by-*/
/dev/disk/by-id/:
total 0
lrwxrwxrwx 1 root root  9 Jun 29 09:49 ata-INTEL_SSDSC2KW120H6_BTLT7124064S120GGN -> ../../sda
lrwxrwxrwx 1 root root 10 Jun 29 09:49 ata-INTEL_SSDSC2KW120H6_BTLT7124064S120GGN-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Jun 29 09:49 ata-INTEL_SSDSC2KW120H6_BTLT7124064S120GGN-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Jun 29 09:49 ata-INTEL_SSDSC2KW120H6_BTLT7124064S120GGN-part3 -> ../../sda3
lrwxrwxrwx 1 root root  9 Jun 29 09:49 ata-SAMSUNG_MZ7LM120HCFD-00003_S22PNYAG500437 -> ../../sdb
lrwxrwxrwx 1 root root 10 Jun 29 09:49 ata-SAMSUNG_MZ7LM120HCFD-00003_S22PNYAG500437-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Jun 29 09:49 ata-SAMSUNG_MZ7LM120HCFD-00003_S22PNYAG500437-part2 -> ../../sdb2
lrwxrwxrwx 1 root root 10 Jun 29 09:49 ata-SAMSUNG_MZ7LM120HCFD-00003_S22PNYAG500437-part3 -> ../../sdb3
lrwxrwxrwx 1 root root  9 Jun 29 09:49 ata-SAMSUNG_MZ7LM480HCHP-00003_S1YJNXAH102524 -> ../../sdd
lrwxrwxrwx 1 root root  9 Jun 29 09:49 ata-SAMSUNG_MZ7LM480HCHP-00003_S1YJNXAH102531 -> ../../sdc
lrwxrwxrwx 1 root root 10 Jun 29 09:49 dm-name-ceph--33bdcbd7--07be--4373--97ca--0678dda8888d-osd--block--e2deed6d--596f--4837--b14e--88c9afdbe531 -> ../../dm-0
lrwxrwxrwx 1 root root 10 Jun 29 09:49 dm-name-ceph--97bdf879--bbf1--41ba--8563--81abe42cf617-osd--block--55199458--8b33--44f2--b4d2--3a876072a622 -> ../../dm-1
lrwxrwxrwx 1 root root 10 Jun 29 09:49 dm-uuid-LVM-GHM6Bwl9TQ7jv5GJd8ORRD6XDearTRZhgvpxQ22a3TWdlBd9iGk1oHhop5lXn8lL -> ../../dm-1
lrwxrwxrwx 1 root root 10 Jun 29 09:49 dm-uuid-LVM-hoOm4ydEDwKOrnNdVuCBCsY31it5n1ZRDsf4uP4Irce8u2hubaahZCqfMz9IpwhI -> ../../dm-0
lrwxrwxrwx 1 root root  9 Jun 29 09:49 lvm-pv-uuid-AGbSTn-aDmD-AbAR-ngCX-8glc-2KVW-xal2xh -> ../../sdc
lrwxrwxrwx 1 root root  9 Jun 29 09:49 lvm-pv-uuid-QPw8aR-Rbbe-LzZ7-0j3t-n8gn-OeOs-YWPaoV -> ../../sdd
lrwxrwxrwx 1 root root  9 Jun 29 09:49 wwn-0x5002538c00018347 -> ../../sdb
lrwxrwxrwx 1 root root 10 Jun 29 09:49 wwn-0x5002538c00018347-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Jun 29 09:49 wwn-0x5002538c00018347-part2 -> ../../sdb2
lrwxrwxrwx 1 root root 10 Jun 29 09:49 wwn-0x5002538c00018347-part3 -> ../../sdb3
lrwxrwxrwx 1 root root  9 Jun 29 09:49 wwn-0x5002538c40146ccb -> ../../sdd
lrwxrwxrwx 1 root root  9 Jun 29 09:49 wwn-0x5002538c40146cd2 -> ../../sdc
lrwxrwxrwx 1 root root  9 Jun 29 09:49 wwn-0x55cd2e414db345fd -> ../../sda
lrwxrwxrwx 1 root root 10 Jun 29 09:49 wwn-0x55cd2e414db345fd-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Jun 29 09:49 wwn-0x55cd2e414db345fd-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Jun 29 09:49 wwn-0x55cd2e414db345fd-part3 -> ../../sda3

/dev/disk/by-label/:
total 0
lrwxrwxrwx 1 root root 10 Jun 29 09:49 rpool -> ../../sda3

/dev/disk/by-partuuid/:
total 0
lrwxrwxrwx 1 root root 10 Jun 29 09:49 4f42744a-eef7-49f5-bfa4-5cb3ca1ee4b2 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Jun 29 09:49 70166c71-7a1f-400e-bd39-f8f4be867d3e -> ../../sda3
lrwxrwxrwx 1 root root 10 Jun 29 09:49 87402126-9aa6-4be9-9c13-4704492a974b -> ../../sda1
lrwxrwxrwx 1 root root 10 Jun 29 09:49 a52ed3d9-d18c-4d5b-9d8a-c92b235fd9e1 -> ../../sdb3
lrwxrwxrwx 1 root root 10 Jun 29 09:49 de77a2cb-a1df-460e-97a2-3c8c8ae9fad5 -> ../../sdb2
lrwxrwxrwx 1 root root 10 Jun 29 09:49 fb306c92-2607-46a5-a32d-7556b04dd494 -> ../../sda2

/dev/disk/by-path/:
total 0
lrwxrwxrwx 1 root root  9 Jun 29 09:49 pci-0000:00:11.4-ata-3 -> ../../sda
lrwxrwxrwx 1 root root 10 Jun 29 09:49 pci-0000:00:11.4-ata-3-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Jun 29 09:49 pci-0000:00:11.4-ata-3-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Jun 29 09:49 pci-0000:00:11.4-ata-3-part3 -> ../../sda3
lrwxrwxrwx 1 root root  9 Jun 29 09:49 pci-0000:00:11.4-ata-3.0 -> ../../sda
lrwxrwxrwx 1 root root 10 Jun 29 09:49 pci-0000:00:11.4-ata-3.0-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Jun 29 09:49 pci-0000:00:11.4-ata-3.0-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Jun 29 09:49 pci-0000:00:11.4-ata-3.0-part3 -> ../../sda3
lrwxrwxrwx 1 root root  9 Jun 29 09:49 pci-0000:00:11.4-ata-4 -> ../../sdb
lrwxrwxrwx 1 root root 10 Jun 29 09:49 pci-0000:00:11.4-ata-4-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Jun 29 09:49 pci-0000:00:11.4-ata-4-part2 -> ../../sdb2
lrwxrwxrwx 1 root root 10 Jun 29 09:49 pci-0000:00:11.4-ata-4-part3 -> ../../sdb3
lrwxrwxrwx 1 root root  9 Jun 29 09:49 pci-0000:00:11.4-ata-4.0 -> ../../sdb
lrwxrwxrwx 1 root root 10 Jun 29 09:49 pci-0000:00:11.4-ata-4.0-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Jun 29 09:49 pci-0000:00:11.4-ata-4.0-part2 -> ../../sdb2
lrwxrwxrwx 1 root root 10 Jun 29 09:49 pci-0000:00:11.4-ata-4.0-part3 -> ../../sdb3
lrwxrwxrwx 1 root root  9 Jun 29 09:49 pci-0000:00:1f.2-ata-1 -> ../../sdc
lrwxrwxrwx 1 root root  9 Jun 29 09:49 pci-0000:00:1f.2-ata-1.0 -> ../../sdc
lrwxrwxrwx 1 root root  9 Jun 29 09:49 pci-0000:00:1f.2-ata-2 -> ../../sdd
lrwxrwxrwx 1 root root  9 Jun 29 09:49 pci-0000:00:1f.2-ata-2.0 -> ../../sdd

/dev/disk/by-uuid/:
total 0
lrwxrwxrwx 1 root root 10 Jun 29 09:49 17716103480993325194 -> ../../sda3
lrwxrwxrwx 1 root root 10 Jun 29 09:49 B851-E178 -> ../../sda2
lrwxrwxrwx 1 root root 10 Jun 29 09:49 B852-ACFC -> ../../sdb2

# iscsiadm -m node
iscsiadm: No records found

# iscsiadm -m session
iscsiadm: No active sessions.

==== info about volumes ====

# pvs
  PV         VG                                        Fmt  Attr PSize    PFree
  /dev/sdc   ceph-33bdcbd7-07be-4373-97ca-0678dda8888d lvm2 a--  <447.13g    0 
  /dev/sdd   ceph-97bdf879-bbf1-41ba-8563-81abe42cf617 lvm2 a--  <447.13g    0 

# lvs
  LV                                             VG                                        Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  osd-block-e2deed6d-596f-4837-b14e-88c9afdbe531 ceph-33bdcbd7-07be-4373-97ca-0678dda8888d -wi-ao---- <447.13g                                                    
  osd-block-55199458-8b33-44f2-b4d2-3a876072a622 ceph-97bdf879-bbf1-41ba-8563-81abe42cf617 -wi-ao---- <447.13g                                                    

# vgs
  VG                                        #PV #LV #SN Attr   VSize    VFree
  ceph-33bdcbd7-07be-4373-97ca-0678dda8888d   1   1   0 wz--n- <447.13g    0 
  ceph-97bdf879-bbf1-41ba-8563-81abe42cf617   1   1   0 wz--n- <447.13g    0 

# zpool status
  pool: rpool
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
	still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
	the pool may no longer be accessible by software that does not support
	the features. See zpool-features(5) for details.
  scan: scrub repaired 0B in 00:00:19 with 0 errors on Sun Jun 13 00:24:20 2021
config:

	NAME                                                   STATE     READ WRITE CKSUM
	rpool                                                  ONLINE       0     0     0
	  ata-SAMSUNG_MZ7LM120HCFD-00003_S22PNYAG500437-part3  ONLINE       0     0     0

errors: No known data errors

# zpool list -v
NAME                                                    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool                                                   111G  2.43G   109G        -         -     5%     2%  1.00x    ONLINE  -
  ata-SAMSUNG_MZ7LM120HCFD-00003_S22PNYAG500437-part3   111G  2.43G   109G        -         -     5%  2.18%      -  ONLINE  

# zfs list
NAME               USED  AVAIL     REFER  MOUNTPOINT
rpool             2.43G   105G      104K  /rpool
rpool/ROOT        2.42G   105G       96K  /rpool/ROOT
rpool/ROOT/pve-1  2.42G   105G     2.42G  /
rpool/data          96K   105G       96K  /rpool/data

# pveceph status
  cluster:
    id:     73045ca5-eead-4e44-a0c1-b6796ed3d7d5
    health: HEALTH_WARN
            client is using insecure global_id reclaim
            mons are allowing insecure global_id reclaim
            4 slow ops, oldest one blocked for 4666 sec, mon.node04 has slow ops
 
  services:
    mon: 3 daemons, quorum node04,node05,node06 (age 36m)
    mgr: node04(active, since 97m), standbys: node05, node06
    mds: CephFS:1 {0=node05=up:active} 2 up:standby
    osd: 6 osds: 6 up (since 37m), 6 in (since 37m)
 
  data:
    pools:   4 pools, 193 pgs
    objects: 96.62k objects, 365 GiB
    usage:   1.0 TiB used, 1.6 TiB / 2.6 TiB avail
    pgs:     193 active+clean
 
  io:
    client:   0 B/s rd, 20 KiB/s wr, 0 op/s rd, 3 op/s wr
 

# ceph osd status
ID  HOST     USED  AVAIL  WR OPS  WR DATA  RD OPS  RD DATA  STATE      
 0  node04   154G   292G      1     7372       0        0   exists,up  
 1  node04   202G   244G      1     5733       0        0   exists,up  
 2  node05   163G   283G      0     7371       0        0   exists,up  
 3  node05   193G   253G      0     4095       0        0   exists,up  
 4  node06   180G   266G      0     4095       0        0   exists,up  
 5  node06   177G   270G      0        0       0        0   exists,up  

# ceph df
--- RAW STORAGE ---
CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
ssd    2.6 TiB  1.6 TiB  1.0 TiB   1.0 TiB      39.96
TOTAL  2.6 TiB  1.6 TiB  1.0 TiB   1.0 TiB      39.96
 
--- POOLS ---
POOL                   ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
Ceph                    2  128  322 GiB   87.91k  965 GiB  42.04    444 GiB
CephFS_data             3   32   34 GiB    8.68k  101 GiB   7.03    444 GiB
CephFS_metadata         4   32  7.9 MiB       24   24 MiB      0    444 GiB
device_health_metrics   5    1   28 MiB        6   84 MiB      0    444 GiB

# ceph osd df tree
ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP     META      AVAIL    %USE   VAR   PGS  STATUS  TYPE NAME      
-1         2.61960         -  2.6 TiB  1.0 TiB  1.0 TiB   98 MiB   5.9 GiB  1.6 TiB  39.96  1.00    -          root default   
-3         0.87320         -  894 GiB  357 GiB  355 GiB   43 MiB   2.0 GiB  537 GiB  39.96  1.00    -              host node04
 0    ssd  0.43660   1.00000  447 GiB  154 GiB  153 GiB  5.5 MiB  1018 MiB  293 GiB  34.53  0.86   82      up          osd.0  
 1    ssd  0.43660   1.00000  447 GiB  203 GiB  202 GiB   38 MiB   986 MiB  244 GiB  45.38  1.14  111      up          osd.1  
-5         0.87320         -  894 GiB  357 GiB  355 GiB   30 MiB   2.0 GiB  537 GiB  39.96  1.00    -              host node05
 2    ssd  0.43660   1.00000  447 GiB  164 GiB  163 GiB   25 MiB   1.0 GiB  284 GiB  36.58  0.92   94      up          osd.2  
 3    ssd  0.43660   1.00000  447 GiB  194 GiB  193 GiB  4.8 MiB  1019 MiB  253 GiB  43.34  1.08   99      up          osd.3  
-7         0.87320         -  894 GiB  357 GiB  355 GiB   25 MiB   2.0 GiB  537 GiB  39.96  1.00    -              host node06
 4    ssd  0.43660   1.00000  447 GiB  180 GiB  179 GiB   22 MiB  1002 MiB  267 GiB  40.32  1.01   97      up          osd.4  
 5    ssd  0.43660   1.00000  447 GiB  177 GiB  176 GiB  2.9 MiB  1021 MiB  270 GiB  39.60  0.99   96      up          osd.5  
                       TOTAL  2.6 TiB  1.0 TiB  1.0 TiB   98 MiB   5.9 GiB  1.6 TiB  39.96                                    
MIN/MAX VAR: 0.86/1.14  STDDEV: 3.70

# cat /etc/ceph/ceph.conf
[global]
	 auth_client_required = cephx
	 auth_cluster_required = cephx
	 auth_service_required = cephx
	 cluster_network = fdb0:5bd1:dc::4/64
	 fsid = 73045ca5-eead-4e44-a0c1-b6796ed3d7d5
	 mon_allow_pool_delete = true
	 mon_host = fdb0:5bd1:dc::4 fdb0:5bd1:dc::5 fdb0:5bd1:dc::6
	 ms_bind_ipv4 = false
	 ms_bind_ipv6 = true
	 osd_pool_default_min_size = 2
	 osd_pool_default_size = 3
	 public_network = fdb0:5bd1:dc::4/64

[client]
	 keyring = /etc/pve/priv/$cluster.$name.keyring

[mds]
	 keyring = /var/lib/ceph/mds/ceph-$id/keyring

[mds.node04]
	 host = node04
	 mds_standby_for_name = pve

[mds.node05]
	 host = node05
	 mds_standby_for_name = pve

[mds.node06]
	 host = node06
	 mds_standby_for_name = pve

[mon.node04]
	 public_addr = fdb0:5bd1:dc::4

[mon.node05]
	 public_addr = fdb0:5bd1:dc::5

[mon.node06]
	 public_addr = fdb0:5bd1:dc::6


# ceph config dump
WHO    MASK  LEVEL    OPTION                     VALUE  RO
  mgr        unknown  mgr/dashboard/server_addr  ::1    * 
  mgr        unknown  mgr/dashboard/ssl          false  * 

# pveceph pool ls
+-----------------------+------+----------+--------+-------------+----------------+-------------------+--------------------------+---------------------------+-----------------+----------------------+---------------+
| Name                  | Size | Min Size | PG Num | min. PG Num | Optimal PG Num | PG Autoscale Mode | PG Autoscale Target Size | PG Autoscale Target Ratio | Crush Rule Name |               %-Used |          Used |
+=======================+======+==========+========+=============+================+===================+==========================+===========================+=================+======================+===============+
| Ceph                  |    3 |        2 |    128 |             |             64 | on                |                          |                           | replicated_rule |    0.420354932546616 | 1036478722631 |
+-----------------------+------+----------+--------+-------------+----------------+-------------------+--------------------------+---------------------------+-----------------+----------------------+---------------+
| CephFS_data           |    3 |        2 |     32 |             |             32 | on                |                          |                           | replicated_rule |   0.0703079700469971 |  108086587392 |
+-----------------------+------+----------+--------+-------------+----------------+-------------------+--------------------------+---------------------------+-----------------+----------------------+---------------+
| CephFS_metadata       |    3 |        2 |     32 |          16 |             16 | on                |                          |                           | replicated_rule | 1.73890748556005e-05 |      24853666 |
+-----------------------+------+----------+--------+-------------+----------------+-------------------+--------------------------+---------------------------+-----------------+----------------------+---------------+
| device_health_metrics |    3 |        2 |      1 |           1 |              1 | on                |                          |                           | replicated_rule | 6.18295161984861e-05 |      88374925 |
+-----------------------+------+----------+--------+-------------+----------------+-------------------+--------------------------+---------------------------+-----------------+----------------------+---------------+

# ceph versions
{
    "mon": {
        "ceph version 15.2.13 (1f5c7871ec0e36ade641773b9b05b6211c308b9d) octopus (stable)": 2,
        "ceph version 15.2.13 (de5fc19f874b2757d3c0977de8b143f6146af132) octopus (stable)": 1
    },
    "mgr": {
        "ceph version 15.2.13 (1f5c7871ec0e36ade641773b9b05b6211c308b9d) octopus (stable)": 2,
        "ceph version 15.2.13 (de5fc19f874b2757d3c0977de8b143f6146af132) octopus (stable)": 1
    },
    "osd": {
        "ceph version 15.2.13 (1f5c7871ec0e36ade641773b9b05b6211c308b9d) octopus (stable)": 4,
        "ceph version 15.2.13 (de5fc19f874b2757d3c0977de8b143f6146af132) octopus (stable)": 2
    },
    "mds": {
        "ceph version 15.2.13 (1f5c7871ec0e36ade641773b9b05b6211c308b9d) octopus (stable)": 2,
        "ceph version 15.2.13 (de5fc19f874b2757d3c0977de8b143f6146af132) octopus (stable)": 1
    },
    "overall": {
        "ceph version 15.2.13 (1f5c7871ec0e36ade641773b9b05b6211c308b9d) octopus (stable)": 10,
        "ceph version 15.2.13 (de5fc19f874b2757d3c0977de8b143f6146af132) octopus (stable)": 5
    }
}

[-- Attachment #3: pvereport_withvlanaware.txt --]
[-- Type: text/plain, Size: 55534 bytes --]


==== general system info ====

# hostname
node06

# pveversion --verbose
proxmox-ve: 7.0-2 (running kernel: 5.11.22-1-pve)
pve-manager: 7.0-5 (running version: 7.0-5/cce9b25f)
pve-kernel-5.11: 7.0-3
pve-kernel-helper: 7.0-3
pve-kernel-5.4: 6.4-3
pve-kernel-5.11.22-1-pve: 5.11.22-1
pve-kernel-5.4.119-1-pve: 5.4.119-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph: 15.2.13-pve1
ceph-fuse: 15.2.13-pve1
corosync: 3.1.2-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.0.0-1+pve5
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.21-pve1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 7.0-3
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-4
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-2
libpve-storage-perl: 7.0-6
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-1
lxcfs: 4.0.8-pve1
novnc-pve: 1.2.0-3
proxmox-backup-client: 1.1.10-1
proxmox-backup-file-restore: 1.1.10-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.1-4
pve-cluster: 7.0-2
pve-container: 4.0-3
pve-docs: 7.0-3
pve-edk2-firmware: 3.20200531-1
pve-firewall: 4.2-2
pve-firmware: 3.2-4
pve-ha-manager: 3.2-2
pve-i18n: 2.3-1
pve-qemu-kvm: 6.0.0-2
pve-xtermjs: 4.12.0-1
qemu-server: 7.0-5
smartmontools: 7.2-pve2
spiceterm: 3.2-2
vncterm: 1.7-1
zfsutils-linux: 2.0.4-pve1

# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
2a03:7900:111::dc:6 node06.demo.customers.tuxis.net node06

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
#2a03:7900:111:0:90d4:a7ff:fe7f:ccf6 pbs.tuxis.net

# pvesubscription get
message: There is no subscription key
serverid: F3A59435D1B87C5A2460F965646A3177
status: NotFound
url: https://www.proxmox.com/proxmox-ve/pricing

# cat /etc/apt/sources.list
## Managed via Ansible

deb http://debmirror.tuxis.nl/debian/ bullseye main contrib non-free
deb-src http://debmirror.tuxis.nl/debian/ bullseye main contrib non-free
deb http://security.debian.org/ bullseye-security main contrib non-free
deb-src http://security.debian.org/ bullseye-security main contrib non-free
deb http://debmirror.tuxis.nl/debian/ bullseye-updates main contrib non-free
deb-src http://debmirror.tuxis.nl/debian/ bullseye-updates main contrib non-free

# cat /etc/apt/sources.list.d/pvetest-for-beta.list
deb http://download.proxmox.com/debian/pve bullseye pvetest


# cat /etc/apt/sources.list.d/ceph.list
deb http://download.proxmox.com/debian/ceph-octopus bullseye main


# cat /etc/apt/sources.list.d/apt_tuxis_nl_tuxis.list
deb https://apt.tuxis.nl/tuxis/ tuxis-cron main
deb https://apt.tuxis.nl/tuxis/ monitoring main
deb https://apt.tuxis.nl/tuxis/ pmrb main


# lscpu
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
Address sizes:                   46 bits physical, 48 bits virtual
CPU(s):                          8
On-line CPU(s) list:             0-7
Thread(s) per core:              2
Core(s) per socket:              4
Socket(s):                       1
NUMA node(s):                    1
Vendor ID:                       GenuineIntel
CPU family:                      6
Model:                           63
Model name:                      Intel(R) Xeon(R) CPU E5-1620 v3 @ 3.50GHz
Stepping:                        2
CPU MHz:                         1200.000
CPU max MHz:                     3600.0000
CPU min MHz:                     1200.0000
BogoMIPS:                        6999.74
Virtualization:                  VT-x
L1d cache:                       128 KiB
L1i cache:                       128 KiB
L2 cache:                        1 MiB
L3 cache:                        10 MiB
NUMA node0 CPU(s):               0-7
Vulnerability Itlb multihit:     KVM: Mitigation: VMX disabled
Vulnerability L1tf:              Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds:               Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown:          Mitigation; PTI
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:        Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:        Mitigation; Full generic retpoline, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected
Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm cpuid_fault epb invpcid_single pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm ida arat pln pts md_clear flush_l1d

# pvesh get /cluster/resources --type node --output-format=yaml
---
- id: node/node04
  node: node04
  status: offline
  type: node
- cpu: 0
  disk: 2595225600
  id: node/node06
  level: ''
  maxcpu: 8
  maxdisk: 115451232256
  maxmem: 67322179584
  mem: 1778470912
  node: node06
  status: online
  type: node
  uptime: 17
- id: node/node05
  node: node05
  status: offline
  type: node

==== overall system load info ====

# top -b -c -w512 -n 1 -o TIME | head -n 30
top - 10:30:25 up 1 min,  1 user,  load average: 0.56, 0.23, 0.08
Tasks: 316 total,   3 running, 313 sleeping,   0 stopped,   0 zombie
%Cpu(s):  1.5 us,  3.1 sy,  0.0 ni, 95.4 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :  64203.4 total,  62239.5 free,   1723.6 used,    240.4 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.  61675.9 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
      1 root      20   0  165796   8924   5312 S   0.0   0.0   0:01.96 /sbin/init
   1908 root      20   0  333916 125364  18596 S   0.0   0.2   0:00.63 /usr/bin/perl /usr/bin/pvesh --nooutput create /nodes/localhost/startall
   1900 www-data  20   0  346652 126348   7264 S   0.0   0.2   0:00.53 pveproxy worker
   1684 root      rt   0  309108 177892  53256 R   0.0   0.3   0:00.52 /usr/sbin/corosync -f
   1680 ceph      20   0  255064  19864  10604 S   0.0   0.0   0:00.35 /usr/bin/ceph-mgr -f --cluster ceph --id node06 --setuser ceph --setgroup ceph
   1681 ceph      20   0  409952  32068  18780 S   6.2   0.0   0:00.34 /usr/bin/ceph-mon -f --cluster ceph --id node06 --setuser ceph --setgroup ceph
   1679 ceph      20   0  250860  18892  10156 S   0.0   0.0   0:00.33 /usr/bin/ceph-mds -f --cluster ceph --id node06 --setuser ceph --setgroup ceph
   1699 ceph      20   0  267296  20544  11224 S   0.0   0.0   0:00.31 /usr/bin/ceph-osd -f --cluster ceph --id 5 --setuser ceph --setgroup ceph
     31 root      rt   0       0      0      0 S   0.0   0.0   0:00.30 [migration/3]
     37 root      rt   0       0      0      0 S   0.0   0.0   0:00.30 [migration/4]
     43 root      rt   0       0      0      0 S   0.0   0.0   0:00.30 [migration/5]
     49 root      rt   0       0      0      0 S   0.0   0.0   0:00.30 [migration/6]
     55 root      rt   0       0      0      0 S   0.0   0.0   0:00.30 [migration/7]
   1701 ceph      20   0  267292  20304  11036 S   0.0   0.0   0:00.30 /usr/bin/ceph-osd -f --cluster ceph --id 4 --setuser ceph --setgroup ceph
     19 root      rt   0       0      0      0 S   0.0   0.0   0:00.29 [migration/1]
     25 root      rt   0       0      0      0 S   0.0   0.0   0:00.29 [migration/2]
   1787 root      20   0  264744  84344   3988 S   0.0   0.1   0:00.28 pve-firewall
   1901 www-data  20   0  346460 126360   7280 S   0.0   0.2   0:00.28 pveproxy worker
   1813 root      20   0  345068 123768   6148 S   0.0   0.2   0:00.25 pvedaemon worker
   1121 root      20   0   98428   3324   2640 S   0.0   0.0   0:00.23 /usr/sbin/zed -F
    734 root      20   0   40140  14964  13888 S   0.0   0.0   0:00.22 /lib/systemd/systemd-journald
    780 root      20   0   22124   3568   2380 S   0.0   0.0   0:00.19 /lib/systemd/systemd-udevd
   1339 root      20   0  396620  18592   7216 S   0.0   0.0   0:00.17 /usr/bin/python3 /usr/bin/fail2ban-server -xf start

# head /proc/pressure/*
==> /proc/pressure/cpu <==
some avg10=0.00 avg60=0.24 avg300=0.15 total=727501

==> /proc/pressure/io <==
some avg10=0.00 avg60=0.10 avg300=0.06 total=376684
full avg10=0.00 avg60=0.08 avg300=0.05 total=320128

==> /proc/pressure/memory <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=0
full avg10=0.00 avg60=0.00 avg300=0.00 total=0

==== info about storage ====

# cat /etc/pve/storage.cfg
dir: local
	path /var/lib/vz
	content images,backup,iso,vztmpl
	prune-backups keep-last=1
	shared 0

zfspool: local-zfs
	pool rpool/data
	content rootdir,images
	sparse 1

rbd: Ceph
	content rootdir,images
	krbd 0
	pool Ceph

cephfs: CephFS
	path /mnt/pve/CephFS
	content iso,snippets,backup,vztmpl
	prune-backups keep-last=1

dir: Tuxis_Marketplace
	path /mnt/pve/Tuxis_Marketplace
	content iso,backup
	is_mountpoint yes
	mkdir 0
	shared 1

dir: Tuxis_Marketplace_Beta
	path /mnt/pve/Tuxis_Marketplace_Beta
	content backup,iso
	is_mountpoint yes
	mkdir 0
	shared 1

rbd: CephKRBD
	content images
	krbd 1
	pool Ceph

pbs: pbs002.tuxis.nl
	datastore DB0220_demo
	server pbs002.tuxis.nl
	content backup
	encryption-key 68:d5:89:f6:f1:f4:67:59:1b:74:6a:78:99:11:ad:09:a0:b0:12:db:43:8d:41:19:af:38:90:77:12:c1:6d:f8
	fingerprint 45:f8:79:eb:27:96:88:6b:29:ad:21:00:13:c6:bd:b8:30:f6:f3:9b:f0:bf:dd:f3:ad:f0:09:d5:d2:9a:34:79
	prune-backups keep-last=1
	username DB0220@pbs


# pvesm status
got timeout

ERROR: command 'pvesm status' failed: got timeout


# cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc defaults 0 0

# findmnt --ascii
TARGET                            SOURCE           FSTYPE     OPTIONS
/                                 rpool/ROOT/pve-1 zfs        rw,relatime,xattr,noacl
|-/sys                            sysfs            sysfs      rw,nosuid,nodev,noexec,relatime
| |-/sys/kernel/security          securityfs       securityfs rw,nosuid,nodev,noexec,relatime
| |-/sys/fs/cgroup                cgroup2          cgroup2    rw,nosuid,nodev,noexec,relatime
| |-/sys/fs/pstore                pstore           pstore     rw,nosuid,nodev,noexec,relatime
| |-/sys/fs/bpf                   none             bpf        rw,nosuid,nodev,noexec,relatime,mode=700
| |-/sys/kernel/debug             debugfs          debugfs    rw,nosuid,nodev,noexec,relatime
| |-/sys/kernel/tracing           tracefs          tracefs    rw,nosuid,nodev,noexec,relatime
| |-/sys/fs/fuse/connections      fusectl          fusectl    rw,nosuid,nodev,noexec,relatime
| `-/sys/kernel/config            configfs         configfs   rw,nosuid,nodev,noexec,relatime
|-/proc                           proc             proc       rw,relatime
| `-/proc/sys/fs/binfmt_misc      systemd-1        autofs     rw,relatime,fd=30,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=23036
|-/dev                            udev             devtmpfs   rw,nosuid,relatime,size=32838164k,nr_inodes=8209541,mode=755,inode64
| |-/dev/pts                      devpts           devpts     rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000
| |-/dev/shm                      tmpfs            tmpfs      rw,nosuid,nodev,inode64
| |-/dev/hugepages                hugetlbfs        hugetlbfs  rw,relatime,pagesize=2M
| `-/dev/mqueue                   mqueue           mqueue     rw,nosuid,nodev,noexec,relatime
|-/run                            tmpfs            tmpfs      rw,nosuid,nodev,noexec,relatime,size=6574432k,mode=755,inode64
| |-/run/lock                     tmpfs            tmpfs      rw,nosuid,nodev,noexec,relatime,size=5120k,inode64
| |-/run/rpc_pipefs               sunrpc           rpc_pipefs rw,relatime
| `-/run/user/0                   tmpfs            tmpfs      rw,nosuid,nodev,relatime,size=6574428k,nr_inodes=1643607,mode=700,inode64
|-/rpool                          rpool            zfs        rw,noatime,xattr,noacl
| |-/rpool/ROOT                   rpool/ROOT       zfs        rw,noatime,xattr,noacl
| `-/rpool/data                   rpool/data       zfs        rw,noatime,xattr,noacl
|-/var/lib/ceph/osd/ceph-4        tmpfs            tmpfs      rw,relatime,inode64
|-/var/lib/ceph/osd/ceph-5        tmpfs            tmpfs      rw,relatime,inode64
|-/mnt/pve/Tuxis_Marketplace      s3fs             fuse.s3fs  rw,nosuid,nodev,relatime,user_id=0,group_id=0
|-/mnt/pve/Tuxis_Marketplace_Beta s3fs             fuse.s3fs  rw,nosuid,nodev,relatime,user_id=0,group_id=0
|-/etc/pve                        /dev/fuse        fuse       rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other
`-/var/lib/lxcfs                  lxcfs            fuse.lxcfs rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other

# df --human
Filesystem        Size  Used Avail Use% Mounted on
udev               32G     0   32G   0% /dev
tmpfs             6.3G  1.3M  6.3G   1% /run
rpool/ROOT/pve-1  108G  2.5G  106G   3% /
tmpfs              32G   66M   32G   1% /dev/shm
tmpfs             5.0M     0  5.0M   0% /run/lock
rpool             106G  128K  106G   1% /rpool
rpool/ROOT        106G  128K  106G   1% /rpool/ROOT
rpool/data        106G  128K  106G   1% /rpool/data
tmpfs              32G   24K   32G   1% /var/lib/ceph/osd/ceph-4
tmpfs              32G   24K   32G   1% /var/lib/ceph/osd/ceph-5
s3fs              256T     0  256T   0% /mnt/pve/Tuxis_Marketplace
s3fs              256T     0  256T   0% /mnt/pve/Tuxis_Marketplace_Beta
/dev/fuse          30M   40K   30M   1% /etc/pve
tmpfs             6.3G  4.0K  6.3G   1% /run/user/0

==== info about virtual guests ====

# qm list

# pct list

==== info about network ====

# ip -details -statistics address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0 minmtu 0 maxmtu 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
    RX: bytes  packets  errors  dropped missed  mcast   
    729230     6382     0       0       0       0       
    TX: bytes  packets  errors  dropped carrier collsns 
    729230     6382     0       0       0       0       
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether 0c:c4:7a:d9:1d:f6 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 9216 
    bridge_slave state forwarding priority 32 cost 4 hairpin off guard off root_block off fastleave off learning on flood on port_id 0x8001 port_no 0x1 designated_port 32769 designated_cost 0 designated_bridge 8000.52:ed:27:6f:7b:f3 designated_root 8000.52:ed:27:6f:7b:f3 hold_timer    0.00 message_age_timer    0.00 forward_delay_timer    0.00 topology_change_ack 0 config_pending 0 proxy_arp off proxy_arp_wifi off mcast_router 1 mcast_fast_leave off mcast_flood on mcast_to_unicast off neigh_suppress off group_fwd_mask 0 group_fwd_mask_str 0x0 vlan_tunnel off isolated off numtxqueues 8 numrxqueues 8 gso_max_size 65536 gso_max_segs 65535 
    altname enp6s0
    RX: bytes  packets  errors  dropped missed  mcast   
    507910     4689     0       3       0       270     
    TX: bytes  packets  errors  dropped carrier collsns 
    425694     1188     0       0       0       0       
3: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr999 state UP group default qlen 1000
    link/ether 0c:c4:7a:d9:1d:f7 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 9216 
    bridge_slave state forwarding priority 32 cost 4 hairpin off guard off root_block off fastleave off learning on flood on port_id 0x8001 port_no 0x1 designated_port 32769 designated_cost 0 designated_bridge 8000.16:2d:db:6c:6d:8a designated_root 8000.16:2d:db:6c:6d:8a hold_timer    0.00 message_age_timer    0.00 forward_delay_timer    0.00 topology_change_ack 0 config_pending 0 proxy_arp off proxy_arp_wifi off mcast_router 1 mcast_fast_leave off mcast_flood on mcast_to_unicast off neigh_suppress off group_fwd_mask 0 group_fwd_mask_str 0x0 vlan_tunnel off isolated off numtxqueues 8 numrxqueues 8 gso_max_size 65536 gso_max_segs 65535 
    altname enp7s0
    RX: bytes  packets  errors  dropped missed  mcast   
    720        8        0       0       0       8       
    TX: bytes  packets  errors  dropped carrier collsns 
    36292      370      0       0       0       0       
4: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 52:ed:27:6f:7b:f3 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 65535 
    bridge forward_delay 0 hello_time 200 max_age 2000 ageing_time 30000 stp_state 0 priority 32768 vlan_filtering 0 vlan_protocol 802.1Q bridge_id 8000.52:ed:27:6f:7b:f3 designated_root 8000.52:ed:27:6f:7b:f3 root_port 0 root_path_cost 0 topology_change 0 topology_change_detected 0 hello_timer    0.00 tcn_timer    0.00 topology_change_timer    0.00 gc_timer  204.43 vlan_default_pvid 1 vlan_stats_enabled 0 vlan_stats_per_port 0 group_fwd_mask 0 group_address 01:80:c2:00:00:00 mcast_snooping 1 mcast_router 1 mcast_query_use_ifaddr 0 mcast_querier 0 mcast_hash_elasticity 16 mcast_hash_max 4096 mcast_last_member_count 2 mcast_startup_query_count 2 mcast_last_member_interval 100 mcast_membership_interval 26000 mcast_querier_interval 25500 mcast_query_interval 12500 mcast_query_response_interval 1000 mcast_startup_query_interval 3124 mcast_stats_enabled 0 mcast_igmp_version 2 mcast_mld_version 1 nf_call_iptables 0 nf_call_ip6tables 0 nf_call_arptables 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 
    inet6 2a03:7900:111::dc:6/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::50ed:27ff:fe6f:7bf3/64 scope link 
       valid_lft forever preferred_lft forever
    RX: bytes  packets  errors  dropped missed  mcast   
    439912     4664     0       0       0       267     
    TX: bytes  packets  errors  dropped carrier collsns 
    414944     1063     0       0       0       0       
5: vmbr999: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 16:2d:db:6c:6d:8a brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 65535 
    bridge forward_delay 0 hello_time 200 max_age 2000 ageing_time 30000 stp_state 0 priority 32768 vlan_filtering 1 vlan_protocol 802.1Q bridge_id 8000.16:2d:db:6c:6d:8a designated_root 8000.16:2d:db:6c:6d:8a root_port 0 root_path_cost 0 topology_change 0 topology_change_detected 0 hello_timer    0.00 tcn_timer    0.00 topology_change_timer    0.00 gc_timer  204.52 vlan_default_pvid 1 vlan_stats_enabled 0 vlan_stats_per_port 0 group_fwd_mask 0 group_address 01:80:c2:00:00:00 mcast_snooping 1 mcast_router 1 mcast_query_use_ifaddr 0 mcast_querier 0 mcast_hash_elasticity 16 mcast_hash_max 4096 mcast_last_member_count 2 mcast_startup_query_count 2 mcast_last_member_interval 100 mcast_membership_interval 26000 mcast_querier_interval 25500 mcast_query_interval 12500 mcast_query_response_interval 1000 mcast_startup_query_interval 3124 mcast_stats_enabled 0 mcast_igmp_version 2 mcast_mld_version 1 nf_call_iptables 0 nf_call_ip6tables 0 nf_call_arptables 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 
    inet6 fdb0:5bd1:dc::6/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::142d:dbff:fe6c:6d8a/64 scope link 
       valid_lft forever preferred_lft forever
    RX: bytes  packets  errors  dropped missed  mcast   
    608        8        0       0       0       8       
    TX: bytes  packets  errors  dropped carrier collsns 
    36292      370      0       0       0       0       

# ip -details -4 route show

# ip -details -6 route show
unicast ::1 dev lo proto kernel scope global metric 256 pref medium
unicast 2a03:7900:111::/64 dev vmbr0 proto kernel scope global metric 256 pref medium
unicast fdb0:5bd1:dc::/64 dev vmbr999 proto kernel scope global metric 256 pref medium
unicast fdb0:5bd1:cde::/64 via fdb0:5bd1:dc::ffff dev vmbr999 proto boot scope global metric 1024 pref medium
unicast fe80::/64 dev vmbr0 proto kernel scope global metric 256 pref medium
unicast fe80::/64 dev vmbr999 proto kernel scope global metric 256 pref medium
unicast default via 2a03:7900:111::1 dev vmbr0 proto kernel scope global metric 1024 onlink pref medium

# cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface eno1 inet6 manual

iface eno2 inet6 manual

auto vmbr0
iface vmbr0 inet6 static
	address 2a03:7900:111::dc:6/64
	gateway 2a03:7900:111::1
	bridge-ports eno1
	bridge-stp off
	bridge-fd 0

auto vmbr999
iface vmbr999 inet6 static
	address fdb0:5bd1:dc::6/64
	bridge-ports eno2
	bridge-stp off
	bridge-fd 0
	bridge-vlan-aware yes
	bridge-vids 2-4094
	post-up /usr/sbin/ip ro add fdb0:5bd1:cde::/64 via fdb0:5bd1:dc::ffff


==== info about firewall ====

# cat /etc/pve/local/host.fw
cat: /etc/pve/local/host.fw: No such file or directory

# iptables-save
# Generated by iptables-save v1.8.7 on Tue Jun 29 10:30:36 2021
*raw
:PREROUTING ACCEPT [376:131554]
:OUTPUT ACCEPT [281:128086]
COMMIT
# Completed on Tue Jun 29 10:30:36 2021
# Generated by iptables-save v1.8.7 on Tue Jun 29 10:30:36 2021
*filter
:INPUT ACCEPT [281:128086]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [282:128126]
COMMIT
# Completed on Tue Jun 29 10:30:36 2021

==== info about cluster ====

# pvecm nodes

Membership information
----------------------
    Nodeid      Votes Name
         3          1 node06 (local)

# pvecm status
Cluster information
-------------------
Name:             Demo
Config Version:   3
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Tue Jun 29 10:30:37 2021
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000003
Ring ID:          3.f4c
Quorate:          No

Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      1
Quorum:           2 Activity blocked
Flags:            

Membership information
----------------------
    Nodeid      Votes Name
0x00000003          1 fdb0:5bd1:dc::6%32732 (local)

# cat /etc/pve/corosync.conf 2>/dev/null
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: node04
    nodeid: 1
    quorum_votes: 1
    ring0_addr: fdb0:5bd1:dc::4
  }
  node {
    name: node05
    nodeid: 2
    quorum_votes: 1
    ring0_addr: fdb0:5bd1:dc::5
  }
  node {
    name: node06
    nodeid: 3
    quorum_votes: 1
    ring0_addr: fdb0:5bd1:dc::6
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: Demo
  config_version: 3
  interface {
    linknumber: 0
  }
  ip_version: ipv4-6
  link_mode: passive
  secauth: on
  version: 2
}


# ha-manager status
quorum No quorum on node 'node06'!
master node06 (old timestamp - dead?, Tue Jun  9 16:20:27 2020)
lrm node04 (old timestamp - dead?, Tue Jun 29 10:27:38 2021)
lrm node05 (old timestamp - dead?, Tue Jun 29 10:27:38 2021)
lrm node06 (old timestamp - dead?, Tue Jun 29 10:27:37 2021)

==== info about hardware ====

# dmidecode -t bios
# dmidecode 3.3
Getting SMBIOS data from sysfs.
SMBIOS 3.0 present.

Handle 0x0000, DMI type 0, 24 bytes
BIOS Information
	Vendor: American Megatrends Inc.
	Version: 2.0a
	Release Date: 08/01/2016
	Address: 0xF0000
	Runtime Size: 64 kB
	ROM Size: 16 MB
	Characteristics:
		PCI is supported
		BIOS is upgradeable
		BIOS shadowing is allowed
		Boot from CD is supported
		Selectable boot is supported
		BIOS ROM is socketed
		EDD is supported
		5.25"/1.2 MB floppy services are supported (int 13h)
		3.5"/720 kB floppy services are supported (int 13h)
		3.5"/2.88 MB floppy services are supported (int 13h)
		Print screen service is supported (int 5h)
		8042 keyboard services are supported (int 9h)
		Serial services are supported (int 14h)
		Printer services are supported (int 17h)
		ACPI is supported
		USB legacy is supported
		BIOS boot specification is supported
		Targeted content distribution is supported
		UEFI is supported
	BIOS Revision: 5.6


# lspci -nnk
00:00.0 Host bridge [0600]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DMI2 [8086:2f00] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 DMI2 [15d9:0832]
00:01.0 PCI bridge [0604]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 1 [8086:2f02] (rev 02)
	Kernel driver in use: pcieport
00:01.1 PCI bridge [0604]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 1 [8086:2f03] (rev 02)
	Kernel driver in use: pcieport
00:03.0 PCI bridge [0604]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 3 [8086:2f08] (rev 02)
	Kernel driver in use: pcieport
00:03.2 PCI bridge [0604]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 3 [8086:2f0a] (rev 02)
	Kernel driver in use: pcieport
00:04.0 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DMA Channel 0 [8086:2f20] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 DMA Channel 0 [15d9:0832]
	Kernel driver in use: ioatdma
	Kernel modules: ioatdma
00:04.1 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DMA Channel 1 [8086:2f21] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 DMA Channel 1 [15d9:0832]
	Kernel driver in use: ioatdma
	Kernel modules: ioatdma
00:04.2 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DMA Channel 2 [8086:2f22] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 DMA Channel 2 [15d9:0832]
	Kernel driver in use: ioatdma
	Kernel modules: ioatdma
00:04.3 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DMA Channel 3 [8086:2f23] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 DMA Channel 3 [15d9:0832]
	Kernel driver in use: ioatdma
	Kernel modules: ioatdma
00:04.4 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DMA Channel 4 [8086:2f24] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 DMA Channel 4 [15d9:0832]
	Kernel driver in use: ioatdma
	Kernel modules: ioatdma
00:04.5 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DMA Channel 5 [8086:2f25] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 DMA Channel 5 [15d9:0832]
	Kernel driver in use: ioatdma
	Kernel modules: ioatdma
00:04.6 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DMA Channel 6 [8086:2f26] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 DMA Channel 6 [15d9:0832]
	Kernel driver in use: ioatdma
	Kernel modules: ioatdma
00:04.7 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DMA Channel 7 [8086:2f27] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 DMA Channel 7 [15d9:0832]
	Kernel driver in use: ioatdma
	Kernel modules: ioatdma
00:05.0 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Address Map, VTd_Misc, System Management [8086:2f28] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Address Map, VTd_Misc, System Management [15d9:0832]
00:05.1 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Hot Plug [8086:2f29] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Hot Plug [15d9:0832]
00:05.2 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 RAS, Control Status and Global Errors [8086:2f2a] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 RAS, Control Status and Global Errors [15d9:0832]
00:05.4 PIC [0800]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 I/O APIC [8086:2f2c] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 I/O APIC [15d9:0832]
00:11.0 Unassigned class [ff00]: Intel Corporation C610/X99 series chipset SPSR [8086:8d7c] (rev 05)
	Subsystem: Super Micro Computer Inc X10SRL-F [15d9:0832]
00:11.4 SATA controller [0106]: Intel Corporation C610/X99 series chipset sSATA Controller [AHCI mode] [8086:8d62] (rev 05)
	Subsystem: Super Micro Computer Inc C610/X99 series chipset sSATA Controller [AHCI mode] [15d9:0832]
	Kernel driver in use: ahci
	Kernel modules: ahci
00:14.0 USB controller [0c03]: Intel Corporation C610/X99 series chipset USB xHCI Host Controller [8086:8d31] (rev 05)
	Subsystem: Super Micro Computer Inc X10SRL-F [15d9:0832]
	Kernel driver in use: xhci_hcd
	Kernel modules: xhci_pci
00:16.0 Communication controller [0780]: Intel Corporation C610/X99 series chipset MEI Controller #1 [8086:8d3a] (rev 05)
	Subsystem: Super Micro Computer Inc X10SRL-F [15d9:0832]
	Kernel modules: mei_me
00:16.1 Communication controller [0780]: Intel Corporation C610/X99 series chipset MEI Controller #2 [8086:8d3b] (rev 05)
	Subsystem: Super Micro Computer Inc X10SRL-F [15d9:0832]
00:1a.0 USB controller [0c03]: Intel Corporation C610/X99 series chipset USB Enhanced Host Controller #2 [8086:8d2d] (rev 05)
	Subsystem: Super Micro Computer Inc X10SRL-F [15d9:0832]
	Kernel driver in use: ehci-pci
	Kernel modules: ehci_pci
00:1c.0 PCI bridge [0604]: Intel Corporation C610/X99 series chipset PCI Express Root Port #1 [8086:8d10] (rev d5)
	Kernel driver in use: pcieport
00:1c.4 PCI bridge [0604]: Intel Corporation C610/X99 series chipset PCI Express Root Port #5 [8086:8d18] (rev d5)
	Kernel driver in use: pcieport
00:1c.5 PCI bridge [0604]: Intel Corporation C610/X99 series chipset PCI Express Root Port #6 [8086:8d1a] (rev d5)
	Kernel driver in use: pcieport
00:1c.6 PCI bridge [0604]: Intel Corporation C610/X99 series chipset PCI Express Root Port #7 [8086:8d1c] (rev d5)
	Kernel driver in use: pcieport
00:1d.0 USB controller [0c03]: Intel Corporation C610/X99 series chipset USB Enhanced Host Controller #1 [8086:8d26] (rev 05)
	Subsystem: Super Micro Computer Inc X10SRL-F [15d9:0832]
	Kernel driver in use: ehci-pci
	Kernel modules: ehci_pci
00:1f.0 ISA bridge [0601]: Intel Corporation C610/X99 series chipset LPC Controller [8086:8d44] (rev 05)
	Subsystem: Super Micro Computer Inc X10SRL-F [15d9:0832]
	Kernel driver in use: lpc_ich
	Kernel modules: lpc_ich
00:1f.2 SATA controller [0106]: Intel Corporation C610/X99 series chipset 6-Port SATA Controller [AHCI mode] [8086:8d02] (rev 05)
	Subsystem: Super Micro Computer Inc C610/X99 series chipset 6-Port SATA Controller [AHCI mode] [15d9:0832]
	Kernel driver in use: ahci
	Kernel modules: ahci
00:1f.3 SMBus [0c05]: Intel Corporation C610/X99 series chipset SMBus Controller [8086:8d22] (rev 05)
	Subsystem: Super Micro Computer Inc X10SRL-F [15d9:0832]
	Kernel driver in use: i801_smbus
	Kernel modules: i2c_i801
06:00.0 Ethernet controller [0200]: Intel Corporation I210 Gigabit Network Connection [8086:1533] (rev 03)
	DeviceName:  Intel Ethernet i210AT #1
	Subsystem: Super Micro Computer Inc I210 Gigabit Network Connection [15d9:1533]
	Kernel driver in use: igb
	Kernel modules: igb
07:00.0 Ethernet controller [0200]: Intel Corporation I210 Gigabit Network Connection [8086:1533] (rev 03)
	DeviceName:  Intel Ethernet i210AT #2
	Subsystem: Super Micro Computer Inc I210 Gigabit Network Connection [15d9:1533]
	Kernel driver in use: igb
	Kernel modules: igb
08:00.0 PCI bridge [0604]: ASPEED Technology, Inc. AST1150 PCI-to-PCI Bridge [1a03:1150] (rev 03)
09:00.0 VGA compatible controller [0300]: ASPEED Technology, Inc. ASPEED Graphics Family [1a03:2000] (rev 30)
	DeviceName:  ASPEED Video AST2400
	Subsystem: Super Micro Computer Inc X10SRL-F [15d9:0832]
	Kernel driver in use: ast
	Kernel modules: ast
ff:0b.0 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 R3 QPI Link 0 & 1 Monitoring [8086:2f81] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 R3 QPI Link 0 & 1 Monitoring [15d9:0832]
ff:0b.1 Performance counters [1101]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 R3 QPI Link 0 & 1 Monitoring [8086:2f36] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 R3 QPI Link 0 & 1 Monitoring [15d9:0832]
	Kernel driver in use: hswep_uncore
ff:0b.2 Performance counters [1101]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 R3 QPI Link 0 & 1 Monitoring [8086:2f37] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 R3 QPI Link 0 & 1 Monitoring [15d9:0832]
	Kernel driver in use: hswep_uncore
ff:0c.0 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Unicast Registers [8086:2fe0] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Unicast Registers [15d9:0832]
ff:0c.1 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Unicast Registers [8086:2fe1] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Unicast Registers [15d9:0832]
ff:0c.2 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Unicast Registers [8086:2fe2] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Unicast Registers [15d9:0832]
ff:0c.3 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Unicast Registers [8086:2fe3] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Unicast Registers [15d9:0832]
ff:0f.0 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Buffered Ring Agent [8086:2ff8] (rev 02)
ff:0f.1 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Buffered Ring Agent [8086:2ff9] (rev 02)
ff:0f.4 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 System Address Decoder & Broadcast Registers [8086:2ffc] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 System Address Decoder & Broadcast Registers [15d9:0832]
ff:0f.5 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 System Address Decoder & Broadcast Registers [8086:2ffd] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 System Address Decoder & Broadcast Registers [15d9:0832]
ff:0f.6 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 System Address Decoder & Broadcast Registers [8086:2ffe] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 System Address Decoder & Broadcast Registers [15d9:0832]
ff:10.0 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCIe Ring Interface [8086:2f1d] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 PCIe Ring Interface [15d9:0832]
ff:10.1 Performance counters [1101]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCIe Ring Interface [8086:2f34] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 PCIe Ring Interface [15d9:0832]
	Kernel driver in use: hswep_uncore
ff:10.5 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Scratchpad & Semaphore Registers [8086:2f1e] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Scratchpad & Semaphore Registers [15d9:0832]
ff:10.6 Performance counters [1101]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Scratchpad & Semaphore Registers [8086:2f7d] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Scratchpad & Semaphore Registers [15d9:0832]
ff:10.7 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Scratchpad & Semaphore Registers [8086:2f1f] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Scratchpad & Semaphore Registers [15d9:0832]
ff:12.0 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Home Agent 0 [8086:2fa0] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Home Agent 0 [15d9:0832]
ff:12.1 Performance counters [1101]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Home Agent 0 [8086:2f30] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Home Agent 0 [15d9:0832]
	Kernel driver in use: hswep_uncore
ff:13.0 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Target Address, Thermal & RAS Registers [8086... (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Target Address, Thermal & RAS Registers [15d9:0832]
ff:13.1 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Target Address, Thermal & RAS Registers [8086... (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Target Address, Thermal & RAS Registers [15d9:0832]
ff:13.2 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel Target Address Decoder [8086:2faa] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel Target Address Decoder [15d9:0832]
ff:13.3 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel Target Address Decoder [8086:2fab] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel Target Address Decoder [15d9:0832]
ff:13.4 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel Target Address Decoder [8086:2fac] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel Target Address Decoder [15d9:0832]
ff:13.5 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel Target Address Decoder [8086:2fad] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel Target Address Decoder [15d9:0832]
ff:13.6 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DDRIO Channel 0/1 Broadcast [8086:2fae] (rev 02)
ff:13.7 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DDRIO Global Broadcast [8086:2faf] (rev 02)
ff:14.0 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 0 Thermal Control [8086:2fb0] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 0 Thermal Control [15d9:0832]
	Kernel driver in use: hswep_uncore
ff:14.1 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 1 Thermal Control [8086:2fb1] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 1 Thermal Control [15d9:0832]
	Kernel driver in use: hswep_uncore
ff:14.2 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 0 ERROR Registers [8086:2fb2] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 0 ERROR Registers [15d9:0832]
ff:14.3 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 1 ERROR Registers [8086:2fb3] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 1 ERROR Registers [15d9:0832]
ff:14.4 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DDRIO (VMSE) 0 & 1 [8086:2fbc] (rev 02)
ff:14.5 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DDRIO (VMSE) 0 & 1 [8086:2fbd] (rev 02)
ff:14.6 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DDRIO (VMSE) 0 & 1 [8086:2fbe] (rev 02)
ff:14.7 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DDRIO (VMSE) 0 & 1 [8086:2fbf] (rev 02)
ff:15.0 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 2 Thermal Control [8086:2fb4] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 2 Thermal Control [15d9:0832]
	Kernel driver in use: hswep_uncore
ff:15.1 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 3 Thermal Control [8086:2fb5] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 3 Thermal Control [15d9:0832]
	Kernel driver in use: hswep_uncore
ff:15.2 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 2 ERROR Registers [8086:2fb6] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 2 ERROR Registers [15d9:0832]
ff:15.3 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 3 ERROR Registers [8086:2fb7] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 3 ERROR Registers [15d9:0832]
ff:16.0 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 1 Target Address, Thermal & RAS Registers [8086... (rev 02)
ff:16.6 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DDRIO Channel 2/3 Broadcast [8086:2f6e] (rev 02)
ff:16.7 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DDRIO Global Broadcast [8086:2f6f] (rev 02)
ff:17.0 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 1 Channel 0 Thermal Control [8086:2fd0] (rev 02)
	Kernel driver in use: hswep_uncore
ff:17.4 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DDRIO (VMSE) 2 & 3 [8086:2fb8] (rev 02)
ff:17.5 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DDRIO (VMSE) 2 & 3 [8086:2fb9] (rev 02)
ff:17.6 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DDRIO (VMSE) 2 & 3 [8086:2fba] (rev 02)
ff:17.7 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DDRIO (VMSE) 2 & 3 [8086:2fbb] (rev 02)
ff:1e.0 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Power Control Unit [8086:2f98] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Power Control Unit [15d9:0832]
ff:1e.1 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Power Control Unit [8086:2f99] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Power Control Unit [15d9:0832]
ff:1e.2 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Power Control Unit [8086:2f9a] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Power Control Unit [15d9:0832]
ff:1e.3 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Power Control Unit [8086:2fc0] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Power Control Unit [15d9:0832]
ff:1e.4 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Power Control Unit [8086:2f9c] (rev 02)
	Subsystem: Super Micro Computer Inc Xeon E7 v3/Xeon E5 v3/Core i7 Power Control Unit [15d9:0832]
ff:1f.0 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 VCU [8086:2f88] (rev 02)
ff:1f.2 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 VCU [8086:2f8a] (rev 02)

==== info about block devices ====

# lsblk --ascii
NAME                                                                                                  MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                                                                                                     8:0    0 111.8G  0 disk 
|-sda1                                                                                                  8:1    0  1007K  0 part 
|-sda2                                                                                                  8:2    0   512M  0 part 
`-sda3                                                                                                  8:3    0 111.3G  0 part 
sdb                                                                                                     8:16   0 111.8G  0 disk 
|-sdb1                                                                                                  8:17   0  1007K  0 part 
|-sdb2                                                                                                  8:18   0   512M  0 part 
`-sdb3                                                                                                  8:19   0 111.3G  0 part 
sdc                                                                                                     8:32   0 447.1G  0 disk 
`-ceph--33bdcbd7--07be--4373--97ca--0678dda8888d-osd--block--e2deed6d--596f--4837--b14e--88c9afdbe531 253:0    0 447.1G  0 lvm  
sdd                                                                                                     8:48   0 447.1G  0 disk 
`-ceph--97bdf879--bbf1--41ba--8563--81abe42cf617-osd--block--55199458--8b33--44f2--b4d2--3a876072a622 253:1    0 447.1G  0 lvm  

# ls -l /dev/disk/by-*/
/dev/disk/by-id/:
total 0
lrwxrwxrwx 1 root root  9 Jun 29 10:29 ata-INTEL_SSDSC2KW120H6_BTLT7124064S120GGN -> ../../sda
lrwxrwxrwx 1 root root 10 Jun 29 10:29 ata-INTEL_SSDSC2KW120H6_BTLT7124064S120GGN-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Jun 29 10:29 ata-INTEL_SSDSC2KW120H6_BTLT7124064S120GGN-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Jun 29 10:29 ata-INTEL_SSDSC2KW120H6_BTLT7124064S120GGN-part3 -> ../../sda3
lrwxrwxrwx 1 root root  9 Jun 29 10:29 ata-SAMSUNG_MZ7LM120HCFD-00003_S22PNYAG500437 -> ../../sdb
lrwxrwxrwx 1 root root 10 Jun 29 10:29 ata-SAMSUNG_MZ7LM120HCFD-00003_S22PNYAG500437-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Jun 29 10:29 ata-SAMSUNG_MZ7LM120HCFD-00003_S22PNYAG500437-part2 -> ../../sdb2
lrwxrwxrwx 1 root root 10 Jun 29 10:29 ata-SAMSUNG_MZ7LM120HCFD-00003_S22PNYAG500437-part3 -> ../../sdb3
lrwxrwxrwx 1 root root  9 Jun 29 10:29 ata-SAMSUNG_MZ7LM480HCHP-00003_S1YJNXAH102524 -> ../../sdd
lrwxrwxrwx 1 root root  9 Jun 29 10:29 ata-SAMSUNG_MZ7LM480HCHP-00003_S1YJNXAH102531 -> ../../sdc
lrwxrwxrwx 1 root root 10 Jun 29 10:29 dm-name-ceph--33bdcbd7--07be--4373--97ca--0678dda8888d-osd--block--e2deed6d--596f--4837--b14e--88c9afdbe531 -> ../../dm-0
lrwxrwxrwx 1 root root 10 Jun 29 10:29 dm-name-ceph--97bdf879--bbf1--41ba--8563--81abe42cf617-osd--block--55199458--8b33--44f2--b4d2--3a876072a622 -> ../../dm-1
lrwxrwxrwx 1 root root 10 Jun 29 10:29 dm-uuid-LVM-GHM6Bwl9TQ7jv5GJd8ORRD6XDearTRZhgvpxQ22a3TWdlBd9iGk1oHhop5lXn8lL -> ../../dm-1
lrwxrwxrwx 1 root root 10 Jun 29 10:29 dm-uuid-LVM-hoOm4ydEDwKOrnNdVuCBCsY31it5n1ZRDsf4uP4Irce8u2hubaahZCqfMz9IpwhI -> ../../dm-0
lrwxrwxrwx 1 root root  9 Jun 29 10:29 lvm-pv-uuid-AGbSTn-aDmD-AbAR-ngCX-8glc-2KVW-xal2xh -> ../../sdc
lrwxrwxrwx 1 root root  9 Jun 29 10:29 lvm-pv-uuid-QPw8aR-Rbbe-LzZ7-0j3t-n8gn-OeOs-YWPaoV -> ../../sdd
lrwxrwxrwx 1 root root  9 Jun 29 10:29 wwn-0x5002538c00018347 -> ../../sdb
lrwxrwxrwx 1 root root 10 Jun 29 10:29 wwn-0x5002538c00018347-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Jun 29 10:29 wwn-0x5002538c00018347-part2 -> ../../sdb2
lrwxrwxrwx 1 root root 10 Jun 29 10:29 wwn-0x5002538c00018347-part3 -> ../../sdb3
lrwxrwxrwx 1 root root  9 Jun 29 10:29 wwn-0x5002538c40146ccb -> ../../sdd
lrwxrwxrwx 1 root root  9 Jun 29 10:29 wwn-0x5002538c40146cd2 -> ../../sdc
lrwxrwxrwx 1 root root  9 Jun 29 10:29 wwn-0x55cd2e414db345fd -> ../../sda
lrwxrwxrwx 1 root root 10 Jun 29 10:29 wwn-0x55cd2e414db345fd-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Jun 29 10:29 wwn-0x55cd2e414db345fd-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Jun 29 10:29 wwn-0x55cd2e414db345fd-part3 -> ../../sda3

/dev/disk/by-label/:
total 0
lrwxrwxrwx 1 root root 10 Jun 29 10:29 rpool -> ../../sda3

/dev/disk/by-partuuid/:
total 0
lrwxrwxrwx 1 root root 10 Jun 29 10:29 4f42744a-eef7-49f5-bfa4-5cb3ca1ee4b2 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Jun 29 10:29 70166c71-7a1f-400e-bd39-f8f4be867d3e -> ../../sda3
lrwxrwxrwx 1 root root 10 Jun 29 10:29 87402126-9aa6-4be9-9c13-4704492a974b -> ../../sda1
lrwxrwxrwx 1 root root 10 Jun 29 10:29 a52ed3d9-d18c-4d5b-9d8a-c92b235fd9e1 -> ../../sdb3
lrwxrwxrwx 1 root root 10 Jun 29 10:29 de77a2cb-a1df-460e-97a2-3c8c8ae9fad5 -> ../../sdb2
lrwxrwxrwx 1 root root 10 Jun 29 10:29 fb306c92-2607-46a5-a32d-7556b04dd494 -> ../../sda2

/dev/disk/by-path/:
total 0
lrwxrwxrwx 1 root root  9 Jun 29 10:29 pci-0000:00:11.4-ata-3 -> ../../sda
lrwxrwxrwx 1 root root 10 Jun 29 10:29 pci-0000:00:11.4-ata-3-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Jun 29 10:29 pci-0000:00:11.4-ata-3-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Jun 29 10:29 pci-0000:00:11.4-ata-3-part3 -> ../../sda3
lrwxrwxrwx 1 root root  9 Jun 29 10:29 pci-0000:00:11.4-ata-3.0 -> ../../sda
lrwxrwxrwx 1 root root 10 Jun 29 10:29 pci-0000:00:11.4-ata-3.0-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Jun 29 10:29 pci-0000:00:11.4-ata-3.0-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Jun 29 10:29 pci-0000:00:11.4-ata-3.0-part3 -> ../../sda3
lrwxrwxrwx 1 root root  9 Jun 29 10:29 pci-0000:00:11.4-ata-4 -> ../../sdb
lrwxrwxrwx 1 root root 10 Jun 29 10:29 pci-0000:00:11.4-ata-4-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Jun 29 10:29 pci-0000:00:11.4-ata-4-part2 -> ../../sdb2
lrwxrwxrwx 1 root root 10 Jun 29 10:29 pci-0000:00:11.4-ata-4-part3 -> ../../sdb3
lrwxrwxrwx 1 root root  9 Jun 29 10:29 pci-0000:00:11.4-ata-4.0 -> ../../sdb
lrwxrwxrwx 1 root root 10 Jun 29 10:29 pci-0000:00:11.4-ata-4.0-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Jun 29 10:29 pci-0000:00:11.4-ata-4.0-part2 -> ../../sdb2
lrwxrwxrwx 1 root root 10 Jun 29 10:29 pci-0000:00:11.4-ata-4.0-part3 -> ../../sdb3
lrwxrwxrwx 1 root root  9 Jun 29 10:29 pci-0000:00:1f.2-ata-1 -> ../../sdc
lrwxrwxrwx 1 root root  9 Jun 29 10:29 pci-0000:00:1f.2-ata-1.0 -> ../../sdc
lrwxrwxrwx 1 root root  9 Jun 29 10:29 pci-0000:00:1f.2-ata-2 -> ../../sdd
lrwxrwxrwx 1 root root  9 Jun 29 10:29 pci-0000:00:1f.2-ata-2.0 -> ../../sdd

/dev/disk/by-uuid/:
total 0
lrwxrwxrwx 1 root root 10 Jun 29 10:29 17716103480993325194 -> ../../sda3
lrwxrwxrwx 1 root root 10 Jun 29 10:29 B851-E178 -> ../../sda2
lrwxrwxrwx 1 root root 10 Jun 29 10:29 B852-ACFC -> ../../sdb2

# iscsiadm -m node
iscsiadm: No records found

# iscsiadm -m session
iscsiadm: No active sessions.

==== info about volumes ====

# pvs
  PV         VG                                        Fmt  Attr PSize    PFree
  /dev/sdc   ceph-33bdcbd7-07be-4373-97ca-0678dda8888d lvm2 a--  <447.13g    0 
  /dev/sdd   ceph-97bdf879-bbf1-41ba-8563-81abe42cf617 lvm2 a--  <447.13g    0 

# lvs
  LV                                             VG                                        Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  osd-block-e2deed6d-596f-4837-b14e-88c9afdbe531 ceph-33bdcbd7-07be-4373-97ca-0678dda8888d -wi-a----- <447.13g                                                    
  osd-block-55199458-8b33-44f2-b4d2-3a876072a622 ceph-97bdf879-bbf1-41ba-8563-81abe42cf617 -wi-a----- <447.13g                                                    

# vgs
  VG                                        #PV #LV #SN Attr   VSize    VFree
  ceph-33bdcbd7-07be-4373-97ca-0678dda8888d   1   1   0 wz--n- <447.13g    0 
  ceph-97bdf879-bbf1-41ba-8563-81abe42cf617   1   1   0 wz--n- <447.13g    0 

# zpool status
  pool: rpool
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
	still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
	the pool may no longer be accessible by software that does not support
	the features. See zpool-features(5) for details.
  scan: scrub repaired 0B in 00:00:19 with 0 errors on Sun Jun 13 00:24:20 2021
config:

	NAME                                                   STATE     READ WRITE CKSUM
	rpool                                                  ONLINE       0     0     0
	  ata-SAMSUNG_MZ7LM120HCFD-00003_S22PNYAG500437-part3  ONLINE       0     0     0

errors: No known data errors

# zpool list -v
NAME                                                    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool                                                   111G  2.43G   109G        -         -     5%     2%  1.00x    ONLINE  -
  ata-SAMSUNG_MZ7LM120HCFD-00003_S22PNYAG500437-part3   111G  2.43G   109G        -         -     5%  2.18%      -  ONLINE  

# zfs list
NAME               USED  AVAIL     REFER  MOUNTPOINT
rpool             2.43G   105G      104K  /rpool
rpool/ROOT        2.42G   105G       96K  /rpool/ROOT
rpool/ROOT/pve-1  2.42G   105G     2.42G  /
rpool/data          96K   105G       96K  /rpool/data

# pveceph status

ERROR: command 'pveceph status' failed: got timeout


# ceph osd status

ERROR: command 'ceph osd status' failed: got timeout


# ceph df

ERROR: command 'ceph df' failed: got timeout


# ceph osd df tree

ERROR: command 'ceph osd df tree' failed: got timeout


# cat /etc/ceph/ceph.conf
[global]
	 auth_client_required = cephx
	 auth_cluster_required = cephx
	 auth_service_required = cephx
	 cluster_network = fdb0:5bd1:dc::4/64
	 fsid = 73045ca5-eead-4e44-a0c1-b6796ed3d7d5
	 mon_allow_pool_delete = true
	 mon_host = fdb0:5bd1:dc::4 fdb0:5bd1:dc::5 fdb0:5bd1:dc::6
	 ms_bind_ipv4 = false
	 ms_bind_ipv6 = true
	 osd_pool_default_min_size = 2
	 osd_pool_default_size = 3
	 public_network = fdb0:5bd1:dc::4/64

[client]
	 keyring = /etc/pve/priv/$cluster.$name.keyring

[mds]
	 keyring = /var/lib/ceph/mds/ceph-$id/keyring

[mds.node04]
	 host = node04
	 mds_standby_for_name = pve

[mds.node05]
	 host = node05
	 mds_standby_for_name = pve

[mds.node06]
	 host = node06
	 mds standby for name = pve

[mon.node04]
	 public_addr = fdb0:5bd1:dc::4

[mon.node05]
	 public_addr = fdb0:5bd1:dc::5

[mon.node06]
	 public_addr = fdb0:5bd1:dc::6


# ceph config dump

ERROR: command 'ceph config dump' failed: got timeout


# pveceph pool ls
got timeout

# ceph versions

ERROR: command 'ceph versions' failed: got timeout


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PVE-User] Proxmox VE 7.0 (beta) released!
  2021-06-29  8:05 ` Mark Schouten
@ 2021-06-29  8:23   ` Stoiko Ivanov
  2021-06-29  8:34     ` Mark Schouten
  2021-06-29  9:46   ` Thomas Lamprecht
  2021-07-02 20:57   ` Thomas Lamprecht
  2 siblings, 1 reply; 16+ messages in thread
From: Stoiko Ivanov @ 2021-06-29  8:23 UTC (permalink / raw)
  To: Mark Schouten; +Cc: Proxmox VE user list

Hi,

On Tue, 29 Jun 2021 10:05:44 +0200
Mark Schouten <mark@tuxis.nl> wrote:

> Hi,
> 
> Op 24-06-2021 om 15:16 schreef Martin Maurer:
> > We are pleased to announce the first beta release of Proxmox Virtual 
> > Environment 7.0! The 7.x family is based on the great Debian 11 
> > "Bullseye" and comes with a 5.11 kernel, QEMU 6.0, LXC 4.0, OpenZFS 2.0.4.  
> 
> I just upgraded a node in our demo cluster and all seemed fine. Except 
> for non-working cluster network. I was unable to ping the node through 
> the cluster interface, pvecm saw no other nodes and ceph was broken.
Thanks for the report - could you provide some details on the upgraded
node? Mostly which NICs are used, but also the complete hardware setup.

(If you prefer you can send me a pvereport to my e-mail)

> 
> However, if I ran tcpdump, ping started working, but not the rest.
quite odd - the last time I ran into something like this was with an OpenBSD
router, where the promisc flag did not get passed down to the physical
port of a bridge.

the output of `ip -details a` and `ip -details l` might provide some
insight
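
For example, something along these lines (a sketch only; the vmbr999/eno2
names are taken from the pvereport in this thread, adjust to the node in
question):

# bridge/port details, incl. the PROMISC flag and vlan_filtering state
ip -details link show vmbr999
ip -details link show eno2
# per-port VLAN configuration of the bridge
bridge vlan show
# 1 = VLAN filtering is enabled on this bridge
cat /sys/class/net/vmbr999/bridge/vlan_filtering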

> 
> Interesting situation, which I 'fixed' by disabling vlan-aware-bridge 
> for that interface. After the reboot, everything works (AFAICS).
Thanks for sharing the mitigation (sadly this won't work for everybody).

> 
> If Proxmox wants to debug this, feel free to reach out to me, I can 
> grant you access to this node so you can check it out.
> 

Kind Regards,
stoiko




^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PVE-User] Proxmox VE 7.0 (beta) released!
  2021-06-24 13:16 Martin Maurer
@ 2021-06-29  8:05 ` Mark Schouten
  2021-06-29  8:23   ` Stoiko Ivanov
                     ` (2 more replies)
       [not found] ` <mailman.239.1625514988.464.pve-user@lists.proxmox.com>
  1 sibling, 3 replies; 16+ messages in thread
From: Mark Schouten @ 2021-06-29  8:05 UTC (permalink / raw)
  To: pve-user

Hi,

Op 24-06-2021 om 15:16 schreef Martin Maurer:
> We are pleased to announce the first beta release of Proxmox Virtual 
> Environment 7.0! The 7.x family is based on the great Debian 11 
> "Bullseye" and comes with a 5.11 kernel, QEMU 6.0, LXC 4.0, OpenZFS 2.0.4.

I just upgraded a node in our demo cluster and all seemed fine. Except 
for non-working cluster network. I was unable to ping the node through 
the cluster interface, pvecm saw no other nodes and ceph was broken.

However, if I ran tcpdump, ping started working, but not the rest.

Interesting situation, which I 'fixed' by disabling vlan-aware-bridge 
for that interface. After the reboot, everything works (AFAICS).
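
For reference, a minimal sketch of what that change looks like in
/etc/network/interfaces (based on the vmbr999 stanza from the pvereport in
this thread, with the two vlan-aware options commented out; addresses are
that node's):

auto vmbr999
iface vmbr999 inet6 static
	address fdb0:5bd1:dc::6/64
	bridge-ports eno2
	bridge-stp off
	bridge-fd 0
	# bridge-vlan-aware yes
	# bridge-vids 2-4094
	post-up /usr/sbin/ip ro add fdb0:5bd1:cde::/64 via fdb0:5bd1:dc::ffff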

If Proxmox wants to debug this, feel free to reach out to me, I can 
grant you access to this node so you can check it out.

-- 
Mark Schouten
CTO, Tuxis B.V. | https://www.tuxis.nl/
<mark@tuxis.nl> | +31 318 200208



^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PVE-User] Proxmox VE 7.0 (beta) released!
@ 2021-06-24 13:16 Martin Maurer
  2021-06-29  8:05 ` Mark Schouten
       [not found] ` <mailman.239.1625514988.464.pve-user@lists.proxmox.com>
  0 siblings, 2 replies; 16+ messages in thread
From: Martin Maurer @ 2021-06-24 13:16 UTC (permalink / raw)
  To: pve-devel, PVE User List

Hi all,

We are pleased to announce the first beta release of Proxmox Virtual Environment 7.0! The 7.x family is based on the great Debian 11 "Bullseye" and comes with a 5.11 kernel, QEMU 6.0, LXC 4.0, OpenZFS 2.0.4.

Note: The current release of Proxmox Virtual Environment 7.0 is a beta version. If you test or upgrade, make sure to first create backups of your data. We recommend https://www.proxmox.com/en/proxmox-backup-server to do so.

Here are some of the highlights of the Proxmox VE 7.0 beta version:

- Ceph Server: Ceph Pacific 16.2 is the new default. Ceph Octopus 15.2 comes with continued support.
- BTRFS: modern copy on write file system natively supported by the Linux kernel, implementing features such as snapshots, built-in RAID, and self healing via checksums for data and metadata.
- ifupdown2 is the default for new installations using the Proxmox VE official ISO.
- QEMU 6.0 has support for io_uring as asynchronous I/O engine for virtual drives - this is now the default for newly started or migrated guests.
- Countless GUI improvements
- and much more...

Release notes
https://pve.proxmox.com/wiki/Roadmap

Download
http://download.proxmox.com/iso

Community Forum
https://forum.proxmox.com

Bugtracker
https://bugzilla.proxmox.com

Source code
https://git.proxmox.com

FAQ
Q: Can I upgrade Proxmox VE 6.4 to 7.0 beta with apt?
A: Yes, please follow the upgrade instructions on https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0

Q: Can I upgrade a 7.0 beta installation to the stable 7.0 release via apt?
A: Yes, upgrading from beta to stable installation will be possible via apt.

Q: Which apt repository can I use for Proxmox VE 7.0 beta?
A: deb http://download.proxmox.com/debian/pve bullseye pvetest
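
For example, on an existing 6.4 installation this could look roughly like
this (a sketch only; the upgrade wiki covers the full procedure, including
switching the Debian base repositories from buster to bullseye first, and
the file name below is just an example):

pve6to7 --full    # checklist script shipped with PVE 6.4
echo "deb http://download.proxmox.com/debian/pve bullseye pvetest" \
    > /etc/apt/sources.list.d/pvetest-for-beta.list
apt update && apt dist-upgrade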

Q: Can I install Proxmox VE 7.0 beta on top of Debian 11 "Bullseye"?
A: Yes.

Q: Can I upgrade my Proxmox VE 6.4 cluster with Ceph Octopus to 7.0 beta?
A: This is a two-step process. First, you have to upgrade Proxmox VE from 6.4 to 7.0, and afterwards upgrade Ceph from Octopus to Pacific. There are a lot of improvements and changes, so please follow the upgrade documentation exactly:
https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0
https://pve.proxmox.com/wiki/Ceph_Octopus_to_Pacific

Q: When do you expect the stable Proxmox VE 7.0 release?
A: The final Proxmox VE 7.0 will be available as soon as all Proxmox VE 7.0 release critical bugs are fixed.

Q: Where can I get more information about feature updates?
A: Check the roadmap (https://pve.proxmox.com/wiki/Roadmap), the community forum (https://forum.proxmox.com), the mailing lists (https://lists.proxmox.com/cgi-bin/mailman/listinfo), and/or subscribe to our newsletter (https://www.proxmox.com/en/news).

You are welcome to test your hardware and your upgrade path and we are looking forward to your feedback, bug reports, or ideas. Thank you for getting involved!

-- 
Best Regards,

Martin Maurer

martin@proxmox.com
https://www.proxmox.com

____________________________________________________________________
Proxmox Server Solutions GmbH
Bräuhausgasse 37, 1050 Vienna, Austria
Commercial register no.: FN 258879 f
Registration office: Handelsgericht Wien




^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2021-07-06 10:23 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-06-29 12:27 [PVE-User] Proxmox VE 7.0 (beta) released! Wolfgang Bumiller
     [not found] <kcEE.HSoMZfIyQreLVdFDq7JFjQ.AFttFk5y1wE@ckcucs11.intern.ckc-it.at>
2021-07-06 10:22 ` Stoiko Ivanov
  -- strict thread matches above, loose matches on Subject: below --
2021-06-24 13:16 Martin Maurer
2021-06-29  8:05 ` Mark Schouten
2021-06-29  8:23   ` Stoiko Ivanov
2021-06-29  8:34     ` Mark Schouten
2021-06-29  9:46   ` Thomas Lamprecht
2021-06-29 10:06     ` Mark Schouten
2021-06-29 10:31       ` Thomas Lamprecht
2021-06-29 12:04         ` Mark Schouten
2021-06-29 13:31           ` Stoiko Ivanov
2021-06-29 13:51             ` alexandre derumier
2021-06-29 14:14             ` Thomas Lamprecht
2021-07-02 20:57   ` Thomas Lamprecht
2021-07-02 21:06     ` Mark Schouten
     [not found] ` <mailman.239.1625514988.464.pve-user@lists.proxmox.com>
2021-07-06  9:55   ` Stoiko Ivanov
