From: Aaron Lauterer <a.lauterer@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH manager 2/3] pvereport: rework report contents
Date: Mon, 21 Dec 2020 16:13:50 +0100
Message-ID: <20201221151351.30575-2-a.lauterer@proxmox.com>
In-Reply-To: <20201221151351.30575-1-a.lauterer@proxmox.com>

add:
* HA status
* ceph osd df tree
* ceph conf file and conf db
* ceph versions

remove:
* ceph status, as pveceph status now prints the same information

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---

@Thomas, we discussed off-list using the cluster/ceph/metadata endpoint
for more information about running services and other things, such as
restarts needed after updates.

Since it returns a lot of JSON that needs to be filtered to be useful
without littering the report, I will need a bit more time to work out
what is needed and how to filter for it.
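
Roughly, I have something like the following in mind for the filtering;
this is only a sketch, and the $metadata_json input as well as the exact
field names (name, hostname, ceph_version, the per-service keys) are
assumptions about the endpoint's output, not verified against it:

    use JSON::PP qw(decode_json);

    # $metadata_json would hold the raw response from the
    # cluster/ceph/metadata endpoint (hypothetical variable)
    my $meta = decode_json($metadata_json);

    # keep only a short per-service summary instead of the full dump;
    # the 'mon'/'mgr'/'osd' keys and field names are assumptions
    for my $type (qw(mon mgr osd)) {
        for my $svc (@{ $meta->{$type} // [] }) {
            printf "%s %s on %s: %s\n",
                $type,
                $svc->{name} // '?',
                $svc->{hostname} // '?',
                $svc->{ceph_version} // '?';
        }
    }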

For now, the changed pveceph status (ceph -s) output already gives an
overview of whether any expected services are not running, and `ceph
versions` shows which versions are present in the cluster and whether
multiple versions are mixed.
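
For reference, `ceph versions` prints a JSON summary roughly of this
shape (the version strings and daemon counts below are placeholders,
not output from a real cluster); a mixed cluster would show more than
one version key per section:

    {
        "mon": {
            "ceph version 14.2.x (...) nautilus (stable)": 3
        },
        "osd": {
            "ceph version 14.2.x (...) nautilus (stable)": 12
        },
        "overall": {
            "ceph version 14.2.x (...) nautilus (stable)": 15
        }
    }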

I think this is okay for now: it already gives a good impression of the
cluster state and quite a few more hints on where to investigate
further.

 PVE/Report.pm | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/PVE/Report.pm b/PVE/Report.pm
index 5ee3453d..f8d5e663 100644
--- a/PVE/Report.pm
+++ b/PVE/Report.pm
@@ -51,7 +51,8 @@ my $init_report_cmds = sub {
 	cluster => [
 	    'pvecm nodes',
 	    'pvecm status',
-	    'cat /etc/pve/corosync.conf 2>/dev/null'
+	    'cat /etc/pve/corosync.conf 2>/dev/null',
+	    'ha-manager status',
 	],
 	bios => [
 	    'dmidecode -t bios',
@@ -76,7 +77,9 @@ my $init_report_cmds = sub {
 
     if (-e '/etc/ceph/ceph.conf') {
 	# TODO: add (now working) rdb ls over all pools? really needed?
-	push @{$report_def->{volumes}}, 'ceph status', 'ceph osd status', 'ceph df', 'pveceph status', 'pveceph pool ls';
+	push @{$report_def->{volumes}}, 'pveceph status', 'ceph osd status',
+		'ceph df', 'ceph osd df tree', 'cat /etc/ceph/ceph.conf',
+		'ceph config dump', 'pveceph pool ls', 'ceph versions';
     }
 
     push @{$report_def->{disks}}, 'multipath -ll', 'multipath -v3'
-- 
2.20.1
