* [pve-devel] [PATCH docs v2 2/6] ceph: correct heading capitalization
2025-02-05 10:08 [pve-devel] [PATCH docs v2 1/6] ceph: add anchors for use in troubleshooting section Alexander Zeidler
@ 2025-02-05 10:08 ` Alexander Zeidler
2025-02-05 10:08 ` [pve-devel] [PATCH docs v2 3/6] ceph: troubleshooting: revise and add frequently needed information Alexander Zeidler
` (5 subsequent siblings)
6 siblings, 0 replies; 11+ messages in thread
From: Alexander Zeidler @ 2025-02-05 10:08 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Alexander Zeidler <a.zeidler@proxmox.com>
---
v2:
* no changes
pveceph.adoc | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/pveceph.adoc b/pveceph.adoc
index 93c2f8d..90bb975 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -768,7 +768,7 @@ Nautilus: PG merging and autotuning].
[[pve_ceph_device_classes]]
-Ceph CRUSH & device classes
+Ceph CRUSH & Device Classes
---------------------------
[thumbnail="screenshot/gui-ceph-config.png"]
@@ -1021,7 +1021,7 @@ After these steps, the CephFS should be completely removed and if you have
other CephFS instances, the stopped metadata servers can be started again
to act as standbys.
-Ceph maintenance
+Ceph Maintenance
----------------
[[pve_ceph_osd_replace]]
@@ -1089,7 +1089,7 @@ are executed.
[[pveceph_shutdown]]
-Shutdown {pve} + Ceph HCI cluster
+Shutdown {pve} + Ceph HCI Cluster
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To shut down the whole {pve} + Ceph cluster, first stop all Ceph clients. These
--
2.39.5
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 11+ messages in thread
* [pve-devel] [PATCH docs v2 3/6] ceph: troubleshooting: revise and add frequently needed information
2025-02-05 10:08 [pve-devel] [PATCH docs v2 1/6] ceph: add anchors for use in troubleshooting section Alexander Zeidler
2025-02-05 10:08 ` [pve-devel] [PATCH docs v2 2/6] ceph: correct heading capitalization Alexander Zeidler
@ 2025-02-05 10:08 ` Alexander Zeidler
2025-02-05 10:08 ` [pve-devel] [PATCH docs v2 4/6] ceph: osd: revise and expand the section "Destroy OSDs" Alexander Zeidler
` (4 subsequent siblings)
6 siblings, 0 replies; 11+ messages in thread
From: Alexander Zeidler @ 2025-02-05 10:08 UTC (permalink / raw)
To: pve-devel
Existing information is slightly modified and retained.
Add information:
* List which logs are usually helpful for troubleshooting
* Explain how to acknowledge listed Ceph crashes and view details (a
short command sketch follows after this list)
* List common causes of Ceph problems and link to recommendations for a
healthy cluster
* Briefly describe the common problem "OSDs down/crashed"
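The crash handling mentioned above boils down to a short command
sequence; as a quick sketch (`<crash_id>` is a placeholder taken from
the `ceph crash ls` output):

----
# List recorded service crashes and show the details of one of them
ceph crash ls
ceph crash info <crash_id>

# Acknowledge all crashes that are still marked as new
ceph crash archive-all
----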
Signed-off-by: Alexander Zeidler <a.zeidler@proxmox.com>
---
v2:
* implement all comments from Max Carrara
** using longer link texts
** fix build errors by adding two missing anchors in patch:
"ceph: add anchors for use in troubleshooting"
pveceph.adoc | 72 ++++++++++++++++++++++++++++++++++++++++++++++------
1 file changed, 64 insertions(+), 8 deletions(-)
diff --git a/pveceph.adoc b/pveceph.adoc
index 90bb975..7401d2b 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -1150,22 +1150,78 @@ The following Ceph commands can be used to see if the cluster is healthy
('HEALTH_OK'), if there are warnings ('HEALTH_WARN'), or even errors
('HEALTH_ERR'). If the cluster is in an unhealthy state, the status commands
below will also give you an overview of the current events and actions to take.
+To stop their execution, press CTRL-C.
----
-# single time output
-pve# ceph -s
-# continuously output status changes (press CTRL+C to stop)
-pve# ceph -w
+# Continuously watch the cluster status
+pve# watch ceph --status
+
+# Print the cluster status once (not being updated)
+# and continuously append lines of status events
+pve# ceph --watch
----
+[[pve_ceph_ts]]
+Troubleshooting
+~~~~~~~~~~~~~~~
+
+This section includes frequently used troubleshooting information.
+More information can be found on the official Ceph website under
+Troubleshooting
+footnote:[Ceph troubleshooting {cephdocs-url}/rados/troubleshooting/].
+
+[[pve_ceph_ts_logs]]
+.Relevant Logs on Affected Node
+
+* xref:disk_health_monitoring[Disk Health Monitoring]
+* __System -> System Log__ (or, for example,
+ `journalctl --since "2 days ago"`)
+* IPMI and RAID controller logs
+
+Ceph service crashes can be listed and viewed in detail by running
+`ceph crash ls` and `ceph crash info <crash_id>`. Crashes marked as
+new can be acknowledged by running, for example,
+`ceph crash archive-all`.
+
To get a more detailed view, every Ceph service has a log file under
`/var/log/ceph/`. If more detail is required, the log level can be
adjusted footnote:[Ceph log and debugging {cephdocs-url}/rados/troubleshooting/log-and-debug/].
-You can find more information about troubleshooting
-footnote:[Ceph troubleshooting {cephdocs-url}/rados/troubleshooting/]
-a Ceph cluster on the official website.
-
+[[pve_ceph_ts_causes]]
+.Common Causes of Ceph Problems
+
+* Network problems like congestion, a faulty switch, a shut down
+interface or a blocking firewall. Check whether all {pve} nodes are
+reliably reachable on the
+xref:pvecm_cluster_network[corosync cluster network] and on the
+xref:pve_ceph_install_wizard[Ceph public and cluster network].
+
+* Disk or connection parts which are:
+** defective
+** not firmly mounted
+** lacking I/O performance under higher load (e.g. when using HDDs,
+consumer hardware or
+xref:pve_ceph_recommendation_raid[inadvisable RAID controllers])
+
+* Not fulfilling the xref:pve_ceph_recommendation[recommendations] for
+a healthy Ceph cluster.
+
+[[pve_ceph_ts_problems]]
+.Common Ceph Problems
+ ::
+
+OSDs `down`/crashed:::
+A faulty OSD is reported as `down` and is usually set `out`
+automatically 10 minutes later. Depending on the cause, it can also
+automatically become `up` and `in` again. To try a manual activation
+via the web interface, go to __Any node -> Ceph -> OSD__, select the
+OSD and click on **Start**, **In** and **Reload**. When using the
+shell, run `ceph-volume lvm activate --all` on the affected node.
++
+To activate a failed OSD, it may be necessary to
+xref:ha_manager_node_maintenance[safely reboot] the respective node
+or, as a last resort, to
+xref:pve_ceph_osd_replace[recreate or replace] the OSD.
ifdef::manvolnum[]
include::pve-copyright.adoc[]
--
2.39.5
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 11+ messages in thread
* [pve-devel] [PATCH docs v2 4/6] ceph: osd: revise and expand the section "Destroy OSDs"
2025-02-05 10:08 [pve-devel] [PATCH docs v2 1/6] ceph: add anchors for use in troubleshooting section Alexander Zeidler
2025-02-05 10:08 ` [pve-devel] [PATCH docs v2 2/6] ceph: correct heading capitalization Alexander Zeidler
2025-02-05 10:08 ` [pve-devel] [PATCH docs v2 3/6] ceph: troubleshooting: revise and add frequently needed information Alexander Zeidler
@ 2025-02-05 10:08 ` Alexander Zeidler
2025-02-05 10:08 ` [pve-devel] [PATCH docs v2 5/6] ceph: maintenance: revise and expand section "Replace OSDs" Alexander Zeidler
` (3 subsequent siblings)
6 siblings, 0 replies; 11+ messages in thread
From: Alexander Zeidler @ 2025-02-05 10:08 UTC (permalink / raw)
To: pve-devel
Existing information is slightly modified and retained.
Add information:
* Mention and link to the sections "Troubleshooting" and "Replace OSDs"
* CLI commands (pveceph) must be executed on the affected node
* Check the "Used (%)" of OSDs in advance to avoid blocked I/O
* Check and wait until the OSD can be stopped safely
* Use `pveceph stop` instead of `systemctl stop ceph-osd@<ID>.service`
* Explain the cleanup option a bit more (a condensed command sketch
follows below)
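Condensed into shell form, the documented workflow looks roughly like
this (a sketch only; `<id>` is the numeric ID of the OSD and all
commands are run on the node holding it):

----
# Check that no OSD is close to the nearfull_ratio before rebalancing
ceph osd df tree

# Take the OSD out of the data distribution
ceph osd out <id>

# Wait until it is reported as safe to stop, then stop the service
ceph osd ok-to-stop <id>
pveceph stop --service osd.<id>

# Destroy the OSD, optionally cleaning up the partition table
pveceph osd destroy <id> --cleanup
----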
Signed-off-by: Alexander Zeidler <a.zeidler@proxmox.com>
---
v2:
* implement both suggestions from Max Carrara
** mention what the warning is about (unsafe to stop OSD yet)
** use WARNING admonition and adapt the point accordingly
pveceph.adoc | 61 +++++++++++++++++++++++++++++-----------------------
1 file changed, 34 insertions(+), 27 deletions(-)
diff --git a/pveceph.adoc b/pveceph.adoc
index 7401d2b..81a6cc7 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -502,33 +502,40 @@ ceph-volume lvm create --filestore --data /dev/sd[X] --journal /dev/sd[Y]
Destroy OSDs
~~~~~~~~~~~~
-To remove an OSD via the GUI, first select a {PVE} node in the tree view and go
-to the **Ceph -> OSD** panel. Then select the OSD to destroy and click the **OUT**
-button. Once the OSD status has changed from `in` to `out`, click the **STOP**
-button. Finally, after the status has changed from `up` to `down`, select
-**Destroy** from the `More` drop-down menu.
-
-To remove an OSD via the CLI run the following commands.
-
-[source,bash]
-----
-ceph osd out <ID>
-systemctl stop ceph-osd@<ID>.service
-----
-
-NOTE: The first command instructs Ceph not to include the OSD in the data
-distribution. The second command stops the OSD service. Until this time, no
-data is lost.
-
-The following command destroys the OSD. Specify the '-cleanup' option to
-additionally destroy the partition table.
-
-[source,bash]
-----
-pveceph osd destroy <ID>
-----
-
-WARNING: The above command will destroy all data on the disk!
+If you experience problems with an OSD or its disk, try to
+xref:pve_ceph_mon_and_ts[troubleshoot] them first to decide if a
+xref:pve_ceph_osd_replace[replacement] is needed.
+
+To destroy an OSD:
+
+. Either open the web interface and select any {pve} node in the tree
+view, or open a shell on the node where the OSD to be deleted is
+located.
+
+. Go to the __Ceph -> OSD__ panel (`ceph osd df tree`). If the OSD to
+be deleted is still `up` and `in` (non-zero value at `AVAIL`), make
+sure that all OSDs have their `Used (%)` value well below the default
+`nearfull_ratio` of `85%`. This reduces the risk that the upcoming
+rebalancing causes OSDs to run full and thereby blocks I/O on Ceph
+pools.
+
+. If the OSD to be deleted is not `out` yet, select it and click on
+**Out** (`ceph osd out <id>`). This excludes it from data
+distribution and starts a rebalance.
+
+. Click on **Stop**. If stopping is not safe yet, a warning will
+appear; click on **Cancel** and try again shortly afterwards. When
+using the shell, check whether it is safe to stop by reading the
+output of `ceph osd ok-to-stop <id>`; once it reports true, run
+`pveceph stop --service osd.<id>`.
+
+. Finally:
++
+[WARNING]
+To remove the OSD from Ceph and delete all disk data, first click on
+**More -> Destroy**. Use the cleanup option to also clean up the
+partition table and similar structures, enabling immediate reuse of
+the disk in {pve}.
+Finally, click on **Remove** (`pveceph osd destroy <id> [--cleanup]`).
[[pve_ceph_pools]]
--
2.39.5
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 11+ messages in thread
* [pve-devel] [PATCH docs v2 5/6] ceph: maintenance: revise and expand section "Replace OSDs"
2025-02-05 10:08 [pve-devel] [PATCH docs v2 1/6] ceph: add anchors for use in troubleshooting section Alexander Zeidler
` (2 preceding siblings ...)
2025-02-05 10:08 ` [pve-devel] [PATCH docs v2 4/6] ceph: osd: revise and expand the section "Destroy OSDs" Alexander Zeidler
@ 2025-02-05 10:08 ` Alexander Zeidler
2025-02-05 10:08 ` [pve-devel] [PATCH docs v2 6/6] pvecm: remove node: mention Ceph and its steps for safe removal Alexander Zeidler
` (2 subsequent siblings)
6 siblings, 0 replies; 11+ messages in thread
From: Alexander Zeidler @ 2025-02-05 10:08 UTC (permalink / raw)
To: pve-devel
Remove redundant information that is already described in the section
"Destroy OSDs" and link to it.
Mention and link to the troubleshooting section, as replacing the OSD
may not fix the underlying problem.
Mention that the replacement disk should be of the same type and size
and comply with the recommendations.
Mention how to acknowledge warnings of crashed OSDs.
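As a rough command-line sketch of the revised procedure (`<id>` is the
OSD being replaced, `/dev/sd[X]` the new disk; the create command is
taken from the linked "Create OSDs" section, not from this patch):

----
# Destroy the failed OSD as described in "Destroy OSDs"
pveceph osd destroy <id> --cleanup

# After swapping the disk, create the OSD again on the new device
pveceph osd create /dev/sd[X]

# Once the cluster is back to HEALTH_OK, acknowledge remaining crashes
ceph crash archive-all
----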
Signed-off-by: Alexander Zeidler <a.zeidler@proxmox.com>
---
v2
* no changes
pveceph.adoc | 45 +++++++++++++--------------------------------
1 file changed, 13 insertions(+), 32 deletions(-)
diff --git a/pveceph.adoc b/pveceph.adoc
index 81a6cc7..a471fb9 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -1035,43 +1035,24 @@ Ceph Maintenance
Replace OSDs
~~~~~~~~~~~~
-One of the most common maintenance tasks in Ceph is to replace the disk of an
-OSD. If a disk is already in a failed state, then you can go ahead and run
-through the steps in xref:pve_ceph_osd_destroy[Destroy OSDs]. Ceph will recreate
-those copies on the remaining OSDs if possible. This rebalancing will start as
-soon as an OSD failure is detected or an OSD was actively stopped.
+With the following steps you can replace the disk of an OSD, which is
+one of the most common maintenance tasks in Ceph. If there is a
+problem with an OSD while its disk still seems to be healthy, read the
+xref:pve_ceph_mon_and_ts[troubleshooting] section first.
-NOTE: With the default size/min_size (3/2) of a pool, recovery only starts when
-`size + 1` nodes are available. The reason for this is that the Ceph object
-balancer xref:pve_ceph_device_classes[CRUSH] defaults to a full node as
-`failure domain'.
+. If the disk failed, get a
+xref:pve_ceph_recommendation_disk[recommended] replacement disk of the
+same type and size.
-To replace a functioning disk from the GUI, go through the steps in
-xref:pve_ceph_osd_destroy[Destroy OSDs]. The only addition is to wait until
-the cluster shows 'HEALTH_OK' before stopping the OSD to destroy it.
+. xref:pve_ceph_osd_destroy[Destroy] the OSD in question.
-On the command line, use the following commands:
+. Detach the old disk from the server and attach the new one.
-----
-ceph osd out osd.<id>
-----
-
-You can check with the command below if the OSD can be safely removed.
-
-----
-ceph osd safe-to-destroy osd.<id>
-----
-
-Once the above check tells you that it is safe to remove the OSD, you can
-continue with the following commands:
-
-----
-systemctl stop ceph-osd@<id>.service
-pveceph osd destroy <id>
-----
+. xref:pve_ceph_osd_create[Create] the OSD again.
-Replace the old disk with the new one and use the same procedure as described
-in xref:pve_ceph_osd_create[Create OSDs].
+. After automatic rebalancing, the cluster status should switch back
+to `HEALTH_OK`. Any still listed crashes can be acknowledged by
+running, for example, `ceph crash archive-all`.
Trim/Discard
~~~~~~~~~~~~
--
2.39.5
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 11+ messages in thread
* [pve-devel] [PATCH docs v2 6/6] pvecm: remove node: mention Ceph and its steps for safe removal
2025-02-05 10:08 [pve-devel] [PATCH docs v2 1/6] ceph: add anchors for use in troubleshooting section Alexander Zeidler
` (3 preceding siblings ...)
2025-02-05 10:08 ` [pve-devel] [PATCH docs v2 5/6] ceph: maintenance: revise and expand section "Replace OSDs" Alexander Zeidler
@ 2025-02-05 10:08 ` Alexander Zeidler
2025-02-05 14:20 ` [pve-devel] [PATCH docs v2 1/6] ceph: add anchors for use in troubleshooting section Max Carrara
2025-03-24 16:42 ` [pve-devel] applied: " Aaron Lauterer
6 siblings, 0 replies; 11+ messages in thread
From: Alexander Zeidler @ 2025-02-05 10:08 UTC (permalink / raw)
To: pve-devel
as it has already been missed in the past or the proper procedure was
not known.
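For orientation, the removal sequence added below corresponds roughly
to these commands (a sketch; the monitor and manager commands follow
the linked sections rather than this patch, and `<local hostname>`,
`<monid>`, `<id>` and `<hostname>` are placeholders):

----
# On the node to be removed, once all of its OSDs are destroyed and
# the Ceph status is HEALTH_OK again:
pveceph mds destroy <local hostname>   # only if a metadata server runs here
pveceph mon destroy <monid>
pveceph mgr destroy <id>

# From any remaining node, drop the now empty CRUSH bucket
ceph osd crush remove <hostname>
----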
Signed-off-by: Alexander Zeidler <a.zeidler@proxmox.com>
---
v2:
* no changes
pvecm.adoc | 47 +++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 47 insertions(+)
diff --git a/pvecm.adoc b/pvecm.adoc
index cffea6d..a65736d 100644
--- a/pvecm.adoc
+++ b/pvecm.adoc
@@ -320,6 +320,53 @@ replication automatically switches direction if a replicated VM is migrated, so
by migrating a replicated VM from a node to be deleted, replication jobs will be
set up to that node automatically.
+If the node to be removed has been configured for
+xref:chapter_pveceph[Ceph]:
+
+. Ensure that sufficient {pve} nodes with running OSDs (`up` and `in`)
+continue to exist.
++
+NOTE: By default, Ceph pools have a `size/min_size` of `3/2` and a
+full node as `failure domain` at the object balancer
+xref:pve_ceph_device_classes[CRUSH]. So if fewer than `size` (`3`)
+nodes with running OSDs are online, data redundancy will be degraded.
+If fewer than `min_size` are online, pool I/O will be blocked and
+affected guests may crash.
+
+. Ensure that sufficient xref:pve_ceph_monitors[monitors],
+xref:pve_ceph_manager[managers] and, if using CephFS,
+xref:pveceph_fs_mds[metadata servers] remain available.
+
+. To maintain data redundancy, each destruction of an OSD, especially
+the last one on a node, will trigger a data rebalance. Therefore,
+ensure that the OSDs on the remaining nodes have sufficient free space
+left.
+
+. To remove Ceph from the node to be deleted, start by
+xref:pve_ceph_osd_destroy[destroying] its OSDs, one after the other.
+
+. Once the xref:pve_ceph_mon_and_ts[Ceph status] is `HEALTH_OK` again,
+proceed by:
+
+[arabic]
+.. destroying its xref:pveceph_fs_mds[metadata server] via web
+interface at __Ceph -> CephFS__ or by running:
++
+----
+# pveceph mds destroy <local hostname>
+----
+
+.. xref:pveceph_destroy_mon[destroying its monitor]
+
+.. xref:pveceph_destroy_mgr[destroying its manager]
+
+. Finally, remove the now empty bucket ({pve} node to be removed) from
+the CRUSH hierarchy by running:
++
+----
+# ceph osd crush remove <hostname>
+----
+
In the following example, we will remove the node hp4 from the cluster.
Log in to a *different* cluster node (not hp4), and issue a `pvecm nodes`
--
2.39.5
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [pve-devel] [PATCH docs v2 1/6] ceph: add anchors for use in troubleshooting section
2025-02-05 10:08 [pve-devel] [PATCH docs v2 1/6] ceph: add anchors for use in troubleshooting section Alexander Zeidler
` (4 preceding siblings ...)
2025-02-05 10:08 ` [pve-devel] [PATCH docs v2 6/6] pvecm: remove node: mention Ceph and its steps for safe removal Alexander Zeidler
@ 2025-02-05 14:20 ` Max Carrara
2025-03-24 16:42 ` [pve-devel] applied: " Aaron Lauterer
6 siblings, 0 replies; 11+ messages in thread
From: Max Carrara @ 2025-02-05 14:20 UTC (permalink / raw)
To: Proxmox VE development discussion
On Wed Feb 5, 2025 at 11:08 AM CET, Alexander Zeidler wrote:
> Signed-off-by: Alexander Zeidler <a.zeidler@proxmox.com>
> ---
> v2:
> * add two missing anchors to be usable via xref
>
To keep things short:
- Docs now build again (tested with both `make update` and `make deb`).
- Installed the .deb packages on my development VM in order to read the
docs where they'll be "deployed" - all works fine.
- Gave anchors and hrefs a smoke test by testing 'em out randomly;
seems like everything's working.
(tested on multi-page docs, single-page docs, PDF version)
- Also checked if the `id` attribute is set for the smallest headings,
and it sure is! You can't link to those sections by default (which is
an AsciiDoc and/or configuration thing, I'm guessing), *but* we should
still be able to refer to them throughout the docs, if necessary
(untested).
For example, the `id` for "Relevant Logs on Affected Node" is
set correctly as intended: `chapter-pveceph.html#pve_ceph_ts_logs`
- As mentioned before, I find the writing style to be quite nice; I
especially like how things are broken down into smaller steps and
paragraphs. No fancy idioms or figures of speech; the writing is
strictly technical and instructive.
- The instructions themselves also seem fine; I have used similar steps
the last time I was messing around with my Ceph cluster (was a while
ago though). I especially like the update on the "Destroy OSDs"
section; I personally wouldn't have thought of checking that the OSDs
should have their Used % below the nearfull_ratio before throwing an
OSD out.
Full disclosure: The only thing I haven't tried out is removing a node
that is running Ceph from a cluster. *But* because I wiped a Ceph
installation from a node before (before re-creating it again) I can
tell that the steps there are sensible (and *much* safer than what I
did back then, woops). The only thing I hadn't done was removing the
node from the CRUSH hierarchy, but I guess in my case Ceph had just
figured that out itself :P
All in all, unless I missed something or if there are any objections, I
think this can be merged.
Consider:
Reviewed-by: Max Carrara <m.carrara@proxmox.com>
Tested-by: Max Carrara <m.carrara@proxmox.com>
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 11+ messages in thread
* [pve-devel] applied: [PATCH docs v2 1/6] ceph: add anchors for use in troubleshooting section
2025-02-05 10:08 [pve-devel] [PATCH docs v2 1/6] ceph: add anchors for use in troubleshooting section Alexander Zeidler
` (5 preceding siblings ...)
2025-02-05 14:20 ` [pve-devel] [PATCH docs v2 1/6] ceph: add anchors for use in troubleshooting section Max Carrara
@ 2025-03-24 16:42 ` Aaron Lauterer
2025-03-26 10:20 ` Max Carrara
6 siblings, 1 reply; 11+ messages in thread
From: Aaron Lauterer @ 2025-03-24 16:42 UTC (permalink / raw)
To: Proxmox VE development discussion, Alexander Zeidler
On 2025-02-05 11:08, Alexander Zeidler wrote:
> Signed-off-by: Alexander Zeidler <a.zeidler@proxmox.com>
> ---
> v2:
> * add two missing anchors to be usable via xref
>
> pve-disk-health-monitoring.adoc | 1 +
> pveceph.adoc | 8 ++++++++
> pvecm.adoc | 1 +
> 3 files changed, 10 insertions(+)
>
>
thanks for the work to extend the docs!
applied with some changes and follow-ups:
* we do not want to break the anchor to the ceph recommendations. This
anchor is used to redirect customers and users to the ceph requirements.
Yes, it is ugly, but we cannot change it, so we keep it.
* I do not think that inline CLI commands are a good idea, as they make
the paragraph messy and harder to read and are hard to copy&paste.
Therefore, I rephrased those parts to have them in their own code blocks
(an illustrative sketch follows below).
* rephrased the Destroy OSD section, mainly to also place the CLI
commands into code blocks, plus a few other things you can see in the
follow-up commit.
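As a purely illustrative sketch (not the wording of the actual
follow-up commit), an inline `ceph crash archive-all` would instead be
introduced by a sentence and placed in its own block:

----
# ceph crash archive-all
----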
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [pve-devel] applied: [PATCH docs v2 1/6] ceph: add anchors for use in troubleshooting section
2025-03-24 16:42 ` [pve-devel] applied: " Aaron Lauterer
@ 2025-03-26 10:20 ` Max Carrara
2025-03-26 13:13 ` Aaron Lauterer
0 siblings, 1 reply; 11+ messages in thread
From: Max Carrara @ 2025-03-26 10:20 UTC (permalink / raw)
To: Proxmox VE development discussion, Alexander Zeidler
On Mon Mar 24, 2025 at 5:42 PM CET, Aaron Lauterer wrote:
>
>
> On 2025-02-05 11:08, Alexander Zeidler wrote:
> > Signed-off-by: Alexander Zeidler <a.zeidler@proxmox.com>
> > ---
> > v2:
> > * add two missing anchors to be usable via xref
> >
> > pve-disk-health-monitoring.adoc | 1 +
> > pveceph.adoc | 8 ++++++++
> > pvecm.adoc | 1 +
> > 3 files changed, 10 insertions(+)
> >
> >
> thanks for the work to extend the docs!
>
> applied with some changes and follow-ups:
>
> * we do not want to break the anchor to the ceph recommendations. This
> anchor is in use to redirect customers and users to ceph requirements.
> Yes it is ugly, but we cannot change it so we keep it.
> * I do not think that inline CLI commands are a good idea as they are
> make the paragraph messy and harder to read and are hard to copy&paste.
> Therefore, I rephrased those parts to have them in their own code blocks.
> * rephrased the Destroy OSD section, mainly to also place the CLI
> commands into code blocks and a few other things you can see in the
> follow up commit.
Aw, without my R-b and T-b tags, unfortunately :(
Is there any reason in particular that they weren't added?
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [pve-devel] applied: [PATCH docs v2 1/6] ceph: add anchors for use in troubleshooting section
2025-03-26 10:20 ` Max Carrara
@ 2025-03-26 13:13 ` Aaron Lauterer
2025-03-26 13:36 ` Max Carrara
0 siblings, 1 reply; 11+ messages in thread
From: Aaron Lauterer @ 2025-03-26 13:13 UTC (permalink / raw)
To: pve-devel
On 2025-03-26 11:20, Max Carrara wrote:
> On Mon Mar 24, 2025 at 5:42 PM CET, Aaron Lauterer wrote:
>>
>>
>> On 2025-02-05 11:08, Alexander Zeidler wrote:
>>> Signed-off-by: Alexander Zeidler <a.zeidler@proxmox.com>
>>> ---
>>> v2:
>>> * add two missing anchors to be usable via xref
>>>
>>> pve-disk-health-monitoring.adoc | 1 +
>>> pveceph.adoc | 8 ++++++++
>>> pvecm.adoc | 1 +
>>> 3 files changed, 10 insertions(+)
>>>
>>>
>> thanks for the work to extend the docs!
>>
>> applied with some changes and follow-ups:
>>
>> * we do not want to break the anchor to the ceph recommendations. This
>> anchor is in use to redirect customers and users to ceph requirements.
>> Yes it is ugly, but we cannot change it so we keep it.
>> * I do not think that inline CLI commands are a good idea as they are
>> make the paragraph messy and harder to read and are hard to copy&paste.
>> Therefore, I rephrased those parts to have them in their own code blocks.
>> * rephrased the Destroy OSD section, mainly to also place the CLI
>> commands into code blocks and a few other things you can see in the
>> follow up commit.
>
> Aw, without my R-b and T-b tags, unfortunately :(
>
> Is there any reason in particular that they weren't added?
Me being blind. Sorry for that.
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [pve-devel] applied: [PATCH docs v2 1/6] ceph: add anchors for use in troubleshooting section
2025-03-26 13:13 ` Aaron Lauterer
@ 2025-03-26 13:36 ` Max Carrara
0 siblings, 0 replies; 11+ messages in thread
From: Max Carrara @ 2025-03-26 13:36 UTC (permalink / raw)
To: Proxmox VE development discussion
On Wed Mar 26, 2025 at 2:13 PM CET, Aaron Lauterer wrote:
>
>
> On 2025-03-26 11:20, Max Carrara wrote:
> > On Mon Mar 24, 2025 at 5:42 PM CET, Aaron Lauterer wrote:
> >>
> >>
> >> On 2025-02-05 11:08, Alexander Zeidler wrote:
> >>> Signed-off-by: Alexander Zeidler <a.zeidler@proxmox.com>
> >>> ---
> >>> v2:
> >>> * add two missing anchors to be usable via xref
> >>>
> >>> pve-disk-health-monitoring.adoc | 1 +
> >>> pveceph.adoc | 8 ++++++++
> >>> pvecm.adoc | 1 +
> >>> 3 files changed, 10 insertions(+)
> >>>
> >>>
> >> thanks for the work to extend the docs!
> >>
> >> applied with some changes and follow-ups:
> >>
> >> * we do not want to break the anchor to the ceph recommendations. This
> >> anchor is in use to redirect customers and users to ceph requirements.
> >> Yes it is ugly, but we cannot change it so we keep it.
> >> * I do not think that inline CLI commands are a good idea as they are
> >> make the paragraph messy and harder to read and are hard to copy&paste.
> >> Therefore, I rephrased those parts to have them in their own code blocks.
> >> * rephrased the Destroy OSD section, mainly to also place the CLI
> >> commands into code blocks and a few other things you can see in the
> >> follow up commit.
> >
> > Aw, without my R-b and T-b tags, unfortunately :(
> >
> > Is there any reason in particular that they weren't added?
>
> Me being blind. Sorry for that.
Oh, no prob! Happens. :P
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 11+ messages in thread