* [PVE-User] Python problem with upgrade to proxmoxVE6.3/ CEPH Nautilus 14.2.15
@ 2020-11-27 11:47 Jean-Luc Oms
2020-11-27 13:07 ` Lindsay Mathieson
` (2 more replies)
0 siblings, 3 replies; 11+ messages in thread
From: Jean-Luc Oms @ 2020-11-27 11:47 UTC (permalink / raw)
To: Proxmox VE user list
Hello,
Upgrading to the latest Proxmox VE / Ceph Nautilus from the last 6.2
Proxmox VE release seems to introduce a Python 2/3 version problem: the
dashboard module fails and health reports an error.
root@ceph1:/usr/bin# ceph health
HEALTH_ERR Module 'dashboard' has failed: ('invalid syntax',
('/usr/share/ceph/mgr/dashboard/controllers/orchestrator.py', 34, 11,
' result: dict = {}\n'))
This syntax was introduced in Python 3.6, and strace suggests Python 2.7
is being used.
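(Editor's aside: the failing line in the traceback is a PEP 526 variable annotation, which only parses on Python 3.6 or newer. A small self-contained sketch of why Python 2.7 rejects it at parse time:)

```python
# "result: dict = {}" is a PEP 526 variable annotation, valid only on
# Python >= 3.6. A Python 2 interpreter fails at *parse* time, which is
# exactly the ('invalid syntax', ...) error the dashboard module reports.

def parses(src):
    """Return True if this interpreter can compile the snippet."""
    try:
        compile(src, "<orchestrator.py excerpt>", "exec")
        return True
    except SyntaxError:
        return False

print(parses("result: dict = {}\n"))   # True on Python 3.6+, False on 2.7
print(parses("result = {}\n"))         # plain assignment: fine everywhere
```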
Any option to resolve this? Everything was OK in 6.2-15.
Thanks
--
Jean-Luc Oms
/STI-ReseauX <https://rx.lirmm.fr>- LIRMM - CNRS/UM/
+33 4 67 41 85 93 <tel:+33-467-41-85-93> / +33 6 32 01 04 17
<tel:+33-632-01-04-17>
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PVE-User] Python problem with upgrade to proxmoxVE6.3/ CEPH Nautilus 14.2.15
2020-11-27 11:47 [PVE-User] Python problem with upgrade to proxmoxVE6.3/ CEPH Nautilus 14.2.15 Jean-Luc Oms
@ 2020-11-27 13:07 ` Lindsay Mathieson
[not found] ` <47b5a337-b2ca-ce6d-37c5-e904db8d6e03@univ-fcomte.fr>
2020-11-27 15:15 ` Jean-Luc Oms
2020-11-27 16:45 ` Marco M. Gabriel
2 siblings, 1 reply; 11+ messages in thread
From: Lindsay Mathieson @ 2020-11-27 13:07 UTC (permalink / raw)
To: pve-user
On 27/11/2020 9:47 pm, Jean-Luc Oms wrote:
> Upgrading to the latest Proxmox VE / Ceph Nautilus from the last 6.2
> Proxmox VE release seems to introduce a Python 2/3 version problem: the
> dashboard module fails and health reports an error.
Was just about to report that :)
Same here.
--
Lindsay
* Re: [PVE-User] Python problem with upgrade to proxmoxVE6.3/ CEPH Nautilus 14.2.15
2020-11-27 11:47 [PVE-User] Python problem with upgrade to proxmoxVE6.3/ CEPH Nautilus 14.2.15 Jean-Luc Oms
2020-11-27 13:07 ` Lindsay Mathieson
@ 2020-11-27 15:15 ` Jean-Luc Oms
2020-11-27 16:45 ` Marco M. Gabriel
2 siblings, 0 replies; 11+ messages in thread
From: Jean-Luc Oms @ 2020-11-27 15:15 UTC (permalink / raw)
To: pve-user
Next step...
I have a small 'preprod' cluster for testing, but without Ceph. If I
install Ceph on one node of this cluster, the ceph-mgr-dashboard package
is not installed.
If I remove this package from my prod cluster (tested on the node
running the active manager), nothing depends on it, and after a manager
restart the health is OK.
Now I have installed:
root@ceph1:~# dpkg -l | grep ceph
ii  ceph                  14.2.15-pve2  amd64  distributed storage and file system
ii  ceph-base             14.2.15-pve2  amd64  common ceph daemon libraries and management tools
ii  ceph-common           14.2.15-pve2  amd64  common utilities to mount and interact with a ceph storage cluster
ii  ceph-fuse             14.2.15-pve2  amd64  FUSE-based client for the Ceph distributed file system
ii  ceph-mds              14.2.15-pve2  amd64  metadata server for the ceph distributed file system
ii  ceph-mgr              14.2.15-pve2  amd64  manager for the ceph distributed storage system
ii  ceph-mon              14.2.15-pve2  amd64  monitor server for the ceph storage system
ii  ceph-osd              14.2.15-pve2  amd64  OSD server for the ceph storage system
ii  libcephfs2            14.2.15-pve2  amd64  Ceph distributed file system client library
ii  python-ceph-argparse  14.2.15-pve2  all    Python 2 utility libraries for Ceph CLI
ii  python-cephfs         14.2.15-pve2  amd64  Python 2 libraries for the Ceph libcephfs library
Is this OK?
Is ceph-mgr-dashboard needed?
Thanks
On 27/11/2020 at 12:47, Jean-Luc Oms wrote:
> Hello,
>
> Upgrading to the latest Proxmox VE / Ceph Nautilus from the last 6.2
> Proxmox VE release seems to introduce a Python 2/3 version problem: the
> dashboard module fails and health reports an error.
>
> root@ceph1:/usr/bin# ceph health
> HEALTH_ERR Module 'dashboard' has failed: ('invalid syntax',
> ('/usr/share/ceph/mgr/dashboard/controllers/orchestrator.py', 34, 11,
> ' result: dict = {}\n'))
>
> This syntax was introduced in Python 3.6, and strace suggests Python 2.7
> is being used.
>
> Any option to resolve this? Everything was OK in 6.2-15.
>
> Thanks
>
>
--
Jean-Luc Oms
/STI-ReseauX <https://rx.lirmm.fr>- LIRMM - CNRS/UM/
+33 4 67 41 85 93 <tel:+33-467-41-85-93> / +33 6 32 01 04 17
<tel:+33-632-01-04-17>
* Re: [PVE-User] Python problem with upgrade to proxmoxVE6.3/ CEPH Nautilus 14.2.15
2020-11-27 11:47 [PVE-User] Python problem with upgrade to proxmoxVE6.3/ CEPH Nautilus 14.2.15 Jean-Luc Oms
2020-11-27 13:07 ` Lindsay Mathieson
2020-11-27 15:15 ` Jean-Luc Oms
@ 2020-11-27 16:45 ` Marco M. Gabriel
2020-11-28 3:23 ` Lindsay Mathieson
2 siblings, 1 reply; 11+ messages in thread
From: Marco M. Gabriel @ 2020-11-27 16:45 UTC (permalink / raw)
To: Proxmox VE user list
Same problem here after upgrading from 6.2-15 to 6.3 on a test cluster.
But the problem suddenly disappeared when I also upgraded Ceph from
Nautilus to Octopus. I'm not sure that's actually why it disappeared,
and I wouldn't recommend doing this on a production cluster while Ceph
is in HEALTH_ERR.
It would be great if anyone could test and confirm that an upgrade to
Ceph Octopus resolves the issue.
Kind regards,
Marco
On Fri, 27 Nov 2020 at 12:54, Jean-Luc Oms <jean-luc.oms@lirmm.fr> wrote:
>
> Hello,
>
> Upgrading to the latest Proxmox VE / Ceph Nautilus from the last 6.2
> Proxmox VE release seems to introduce a Python 2/3 version problem: the
> dashboard module fails and health reports an error.
>
> root@ceph1:/usr/bin# ceph health
> HEALTH_ERR Module 'dashboard' has failed: ('invalid syntax',
> ('/usr/share/ceph/mgr/dashboard/controllers/orchestrator.py', 34, 11,
> ' result: dict = {}\n'))
>
> This syntax was introduced in Python 3.6, and strace suggests Python 2.7
> is being used.
>
> Any option to resolve this? Everything was OK in 6.2-15.
>
> Thanks
>
>
> --
> Jean-Luc Oms
> /STI-ReseauX <https://rx.lirmm.fr>- LIRMM - CNRS/UM/
> +33 4 67 41 85 93 <tel:+33-467-41-85-93> / +33 6 32 01 04 17
> <tel:+33-632-01-04-17>
> _______________________________________________
> pve-user mailing list
> pve-user@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
* Re: [PVE-User] Python problem with upgrade to proxmoxVE6.3/ CEPH Nautilus 14.2.15
[not found] ` <47b5a337-b2ca-ce6d-37c5-e904db8d6e03@univ-fcomte.fr>
@ 2020-11-27 16:59 ` alexandre derumier
2020-11-27 17:12 ` [PVE-User] ProxmoxVE6.3/CEPH Octopus 15.2.6 was " Jean-Daniel TISSOT
2020-11-27 21:17 ` [PVE-User] " Lindsay Mathieson
2020-11-27 17:06 ` Marco M. Gabriel
1 sibling, 2 replies; 11+ messages in thread
From: alexandre derumier @ 2020-11-27 16:59 UTC (permalink / raw)
To: pve-user
>> 1 pools have too many placement groups; Pool rbd has 128 placement
>> groups, should have 32
>>
>> I don't find any way to reduce the placement groups to 32
>>
>> Any help welcome.
You can't reduce PGs on Nautilus; that's only possible since Octopus (and
Ceph can do it automatically with the new pg autoscaler).
I think it's a warning introduced in the last Nautilus update.
If I remember correctly, there is an option to disable this warning (but I
don't remember which).
On 27/11/2020 17:29, Jean-Daniel TISSOT wrote:
> Hi,
>
> I have another problem
>
> root@dmz-pve1:~ # ceph health
> HEALTH_WARN 1 pools have too many placement groups
> root@dmz-pve1:~ # pveceph pool ls
> ┌───────────────────────┬──────┬──────────┬────────┬───────────────────┬─────────────────┬──────────────────────┬──────────────┐
> │ Name                  │ Size │ Min Size │ PG Num │ PG Autoscale Mode │ Crush Rule Name │ %-Used               │ Used         │
> ╞═══════════════════════╪══════╪══════════╪════════╪═══════════════════╪═════════════════╪══════════════════════╪══════════════╡
> │ device_health_metrics │    3 │        2 │      1 │ on                │ replicated_rule │ 4.19273845864154e-07 │      4534827 │
> ├───────────────────────┼──────┼──────────┼────────┼───────────────────┼─────────────────┼──────────────────────┼──────────────┤
> │ rbd                   │    3 │        2 │    128 │ warn              │ replicated_rule │   0.0116069903597236 │ 127014329075 │
> └───────────────────────┴──────┴──────────┴────────┴───────────────────┴─────────────────┴──────────────────────┴──────────────┘
>
>
> In the GUI:
>
> 1 pools have too many placement groups; Pool rbd has 128 placement
> groups, should have 32
>
> I don't find any way to reduce the placement groups to 32
>
> Any help welcome.
>
> Best regards,
>
> On 27/11/2020 at 14:07, Lindsay Mathieson wrote:
>> On 27/11/2020 9:47 pm, Jean-Luc Oms wrote:
>>> Upgrading to the latest Proxmox VE / Ceph Nautilus from the last 6.2
>>> Proxmox VE release seems to introduce a Python 2/3 version problem: the
>>> dashboard module fails and health reports an error.
>>
>> Was just about to report that :)
>>
>>
>> Same here.
>>
* Re: [PVE-User] Python problem with upgrade to proxmoxVE6.3/ CEPH Nautilus 14.2.15
[not found] ` <47b5a337-b2ca-ce6d-37c5-e904db8d6e03@univ-fcomte.fr>
2020-11-27 16:59 ` alexandre derumier
@ 2020-11-27 17:06 ` Marco M. Gabriel
2020-11-27 17:08 ` alexandre derumier
2020-11-27 17:18 ` Jean-Daniel TISSOT
1 sibling, 2 replies; 11+ messages in thread
From: Marco M. Gabriel @ 2020-11-27 17:06 UTC (permalink / raw)
To: Proxmox VE user list
You can enable the pg-autoscaler. It does the work for you and
optimizes the number of placement groups on a given pool.
The pg-autoscaler was introduced with Nautilus, and we ran it for a
while without any problems.
Here is an explanation of how to enable and how to use the autoscaler:
https://ceph.io/rados/new-in-nautilus-pg-merging-and-autotuning/
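(Editor's aside: a simplified sketch of the warning rule described in that post — the autoscaler suggests a PG count per pool and only raises a health warning when the actual pg_num is more than a factor of 3 away from the suggestion, which is why 128 PGs against a suggested 32 trips "too many placement groups". The function name and threshold default are illustrative, not Ceph's internal API:)

```python
# Simplified model of the pg_autoscaler health check from the linked
# ceph.io post: a pool is flagged only when its actual PG count differs
# from the autoscaler's suggestion by more than a factor of 3.

def autoscaler_warns(actual_pg, suggested_pg, threshold=3.0):
    """Return True when actual_pg is off by more than `threshold`x."""
    ratio = actual_pg / suggested_pg
    return ratio > threshold or ratio < 1.0 / threshold

print(autoscaler_warns(128, 32))  # True: 4x over the suggestion
print(autoscaler_warns(64, 32))   # False: 2x is within tolerance
```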
Best regards,
Marco
On Fri, 27 Nov 2020 at 17:37, Jean-Daniel TISSOT
<Jean-Daniel.Tissot@univ-fcomte.fr> wrote:
>
> Hi,
>
> I have another problem
>
> root@dmz-pve1:~ # ceph health
> HEALTH_WARN 1 pools have too many placement groups
> root@dmz-pve1:~ # pveceph pool ls
> ┌───────────────────────┬──────┬──────────┬────────┬───────────────────┬─────────────────┬──────────────────────┬──────────────┐
> │ Name                  │ Size │ Min Size │ PG Num │ PG Autoscale Mode │ Crush Rule Name │ %-Used               │ Used         │
> ╞═══════════════════════╪══════╪══════════╪════════╪═══════════════════╪═════════════════╪══════════════════════╪══════════════╡
> │ device_health_metrics │    3 │        2 │      1 │ on                │ replicated_rule │ 4.19273845864154e-07 │      4534827 │
> ├───────────────────────┼──────┼──────────┼────────┼───────────────────┼─────────────────┼──────────────────────┼──────────────┤
> │ rbd                   │    3 │        2 │    128 │ warn              │ replicated_rule │   0.0116069903597236 │ 127014329075 │
> └───────────────────────┴──────┴──────────┴────────┴───────────────────┴─────────────────┴──────────────────────┴──────────────┘
>
>
> In the GUI:
>
> 1 pools have too many placement groups; Pool rbd has 128 placement
> groups, should have 32
>
> I don't find any way to reduce the placement groups to 32
>
> Any help welcome.
>
> Best regards,
>
> On 27/11/2020 at 14:07, Lindsay Mathieson wrote:
> > On 27/11/2020 9:47 pm, Jean-Luc Oms wrote:
> >> Upgrading to the latest Proxmox VE / Ceph Nautilus from the last 6.2
> >> Proxmox VE release seems to introduce a Python 2/3 version problem: the
> >> dashboard module fails and health reports an error.
> >
> > Was just about to report that :)
> >
> >
> > Same here.
> >
> --
> Best regards,
> Jean-Daniel Tissot - IE CNRS http://chrono-environnement.univ-fcomte.fr
> UMR 6249 - Laboratoire Chrono-environnement UMR CNRS-UFC
> Université de Franche-Comté, 16 route de Gray, 25030 Besançon Cedex, FRANCE
> Jean-Daniel.Tissot@univ-fcomte.fr tel:+33 3 81 666 440
>
* Re: [PVE-User] Python problem with upgrade to proxmoxVE6.3/ CEPH Nautilus 14.2.15
2020-11-27 17:06 ` Marco M. Gabriel
@ 2020-11-27 17:08 ` alexandre derumier
2020-11-27 17:18 ` Jean-Daniel TISSOT
1 sibling, 0 replies; 11+ messages in thread
From: alexandre derumier @ 2020-11-27 17:08 UTC (permalink / raw)
To: pve-user
On 27/11/2020 18:06, Marco M. Gabriel wrote:
> You can enable the pg-autoscaler. It does the work for you and
> optimizes the number of placement groups on a given pool.
Oh, yes, indeed. I thought it was introduced in Octopus, but it's
already available in Nautilus :)
* [PVE-User] ProxmoxVE6.3/CEPH Octopus 15.2.6 was Python problem with upgrade to proxmoxVE6.3/ CEPH Nautilus 14.2.15
2020-11-27 16:59 ` alexandre derumier
@ 2020-11-27 17:12 ` Jean-Daniel TISSOT
2020-11-27 21:17 ` [PVE-User] " Lindsay Mathieson
1 sibling, 0 replies; 11+ messages in thread
From: Jean-Daniel TISSOT @ 2020-11-27 17:12 UTC (permalink / raw)
To: pve-user
Sorry to hijack the thread.
In fact, before upgrading to Octopus I don't remember seeing any warning.
I upgraded Proxmox and followed the wiki to upgrade Ceph. Everything
seems to be working (I didn't test HA, but migrating a VM works
perfectly).
I just have the warning on the rbd pool (1 pools have too many placement
groups; Pool rbd has 128 placement groups, should have 32).
On 27/11/2020 at 17:59, alexandre derumier wrote:
> >> 1 pools have too many placement groups; Pool rbd has 128 placement
> >> groups, should have 32
> >>
> >> I don't find any way to reduce the placement groups to 32
> >>
> >> Any help welcome.
>
> You can't reduce PGs on Nautilus; that's only possible since Octopus
> (and Ceph can do it automatically with the new pg autoscaler).
>
> I think it's a warning introduced in the last Nautilus update.
>
> If I remember correctly, there is an option to disable this warning
> (but I don't remember which).
>
>
> On 27/11/2020 17:29, Jean-Daniel TISSOT wrote:
>> Hi,
>>
>> I have another problem
>>
>> root@dmz-pve1:~ # ceph health
>> HEALTH_WARN 1 pools have too many placement groups
>> root@dmz-pve1:~ # pveceph pool ls
>> ┌───────────────────────┬──────┬──────────┬────────┬───────────────────┬─────────────────┬──────────────────────┬──────────────┐
>> │ Name                  │ Size │ Min Size │ PG Num │ PG Autoscale Mode │ Crush Rule Name │ %-Used               │ Used         │
>> ╞═══════════════════════╪══════╪══════════╪════════╪═══════════════════╪═════════════════╪══════════════════════╪══════════════╡
>> │ device_health_metrics │    3 │        2 │      1 │ on                │ replicated_rule │ 4.19273845864154e-07 │      4534827 │
>> ├───────────────────────┼──────┼──────────┼────────┼───────────────────┼─────────────────┼──────────────────────┼──────────────┤
>> │ rbd                   │    3 │        2 │    128 │ warn              │ replicated_rule │   0.0116069903597236 │ 127014329075 │
>> └───────────────────────┴──────┴──────────┴────────┴───────────────────┴─────────────────┴──────────────────────┴──────────────┘
>>
>>
>> In the GUI:
>>
>> 1 pools have too many placement groups; Pool rbd has 128 placement
>> groups, should have 32
>>
>> I don't find any way to reduce the placement groups to 32
>>
>> Any help welcome.
>>
>> Best regards,
>>
>> On 27/11/2020 at 14:07, Lindsay Mathieson wrote:
>>> On 27/11/2020 9:47 pm, Jean-Luc Oms wrote:
>>>> Upgrading to the latest Proxmox VE / Ceph Nautilus from the last 6.2
>>>> Proxmox VE release seems to introduce a Python 2/3 version problem: the
>>>> dashboard module fails and health reports an error.
>>>
>>> Was just about to report that :)
>>>
>>>
>>> Same here.
>>>
>
>
--
Best regards,
Jean-Daniel Tissot - IE CNRS http://chrono-environnement.univ-fcomte.fr
UMR 6249 - Laboratoire Chrono-environnement UMR CNRS-UFC
Université de Franche-Comté, 16 route de Gray, 25030 Besançon Cedex, FRANCE
Jean-Daniel.Tissot@univ-fcomte.fr tel:+33 3 81 666 440
Alabama, Mississippi, Minnesota, South Carolina, Oregon... not so sweet home
Black Panther Party, rise from your ashes and come back to help them
https://www.youtube.com/watch?v=ZvilFSMVHTs
* Re: [PVE-User] Python problem with upgrade to proxmoxVE6.3/ CEPH Nautilus 14.2.15
2020-11-27 17:06 ` Marco M. Gabriel
2020-11-27 17:08 ` alexandre derumier
@ 2020-11-27 17:18 ` Jean-Daniel TISSOT
1 sibling, 0 replies; 11+ messages in thread
From: Jean-Daniel TISSOT @ 2020-11-27 17:18 UTC (permalink / raw)
To: pve-user
Many thanks.
Works like a charm.
No more warnings.
Again, many thanks, Marco.
Best regards,
Jean-Daniel
On 27/11/2020 at 18:06, Marco M. Gabriel wrote:
> You can enable the pg-autoscaler. It does the work for you and
> optimizes the number of placement groups on a given pool.
>
> The pg-autoscaler was introduced with nautilus and we ran it for a
> while without any problems.
>
> Here is an explanation of how to enable and how to use the autoscaler:
> https://ceph.io/rados/new-in-nautilus-pg-merging-and-autotuning/
>
> Best regards,
> Marco
>
>
> On Fri, 27 Nov 2020 at 17:37, Jean-Daniel TISSOT
> <Jean-Daniel.Tissot@univ-fcomte.fr> wrote:
>> Hi,
>>
>> I have another problem
>>
>> root@dmz-pve1:~ # ceph health
>> HEALTH_WARN 1 pools have too many placement groups
>> root@dmz-pve1:~ # pveceph pool ls
>> ┌───────────────────────┬──────┬──────────┬────────┬───────────────────┬─────────────────┬──────────────────────┬──────────────┐
>> │ Name                  │ Size │ Min Size │ PG Num │ PG Autoscale Mode │ Crush Rule Name │ %-Used               │ Used         │
>> ╞═══════════════════════╪══════╪══════════╪════════╪═══════════════════╪═════════════════╪══════════════════════╪══════════════╡
>> │ device_health_metrics │    3 │        2 │      1 │ on                │ replicated_rule │ 4.19273845864154e-07 │      4534827 │
>> ├───────────────────────┼──────┼──────────┼────────┼───────────────────┼─────────────────┼──────────────────────┼──────────────┤
>> │ rbd                   │    3 │        2 │    128 │ warn              │ replicated_rule │   0.0116069903597236 │ 127014329075 │
>> └───────────────────────┴──────┴──────────┴────────┴───────────────────┴─────────────────┴──────────────────────┴──────────────┘
>>
>>
>> In the GUI:
>>
>> 1 pools have too many placement groups; Pool rbd has 128 placement
>> groups, should have 32
>>
>> I don't find any way to reduce the placement groups to 32
>>
>> Any help welcome.
>>
>> Best regards,
>>
>> On 27/11/2020 at 14:07, Lindsay Mathieson wrote:
>>> On 27/11/2020 9:47 pm, Jean-Luc Oms wrote:
>>>> Upgrading to the latest Proxmox VE / Ceph Nautilus from the last 6.2
>>>> Proxmox VE release seems to introduce a Python 2/3 version problem: the
>>>> dashboard module fails and health reports an error.
>>> Was just about to report that :)
>>>
>>>
>>> Same here.
>>>
>> --
>> Best regards,
>> Jean-Daniel Tissot - IE CNRS http://chrono-environnement.univ-fcomte.fr
>> UMR 6249 - Laboratoire Chrono-environnement UMR CNRS-UFC
>> Université de Franche-Comté, 16 route de Gray, 25030 Besançon Cedex, FRANCE
>> Jean-Daniel.Tissot@univ-fcomte.fr tel:+33 3 81 666 440
>>
--
Best regards,
Jean-Daniel Tissot - IE CNRS http://chrono-environnement.univ-fcomte.fr
UMR 6249 - Laboratoire Chrono-environnement UMR CNRS-UFC
Université de Franche-Comté, 16 route de Gray, 25030 Besançon Cedex, FRANCE
Jean-Daniel.Tissot@univ-fcomte.fr tel:+33 3 81 666 440
* Re: [PVE-User] Python problem with upgrade to proxmoxVE6.3/ CEPH Nautilus 14.2.15
2020-11-27 16:59 ` alexandre derumier
2020-11-27 17:12 ` [PVE-User] ProxmoxVE6.3/CEPH Octopus 15.2.6 was " Jean-Daniel TISSOT
@ 2020-11-27 21:17 ` Lindsay Mathieson
1 sibling, 0 replies; 11+ messages in thread
From: Lindsay Mathieson @ 2020-11-27 21:17 UTC (permalink / raw)
To: pve-user
On 28/11/2020 2:59 am, alexandre derumier wrote:
> you can't reduce PGs on Nautilus; that's only possible since Octopus
> (and Ceph can do it automatically with the new pg autoscaler)
Actually you can; that has been possible since Nautilus:
https://ceph.io/rados/new-in-nautilus-pg-merging-and-autotuning/
--
Lindsay
* Re: [PVE-User] Python problem with upgrade to proxmoxVE6.3/ CEPH Nautilus 14.2.15
2020-11-27 16:45 ` Marco M. Gabriel
@ 2020-11-28 3:23 ` Lindsay Mathieson
0 siblings, 0 replies; 11+ messages in thread
From: Lindsay Mathieson @ 2020-11-28 3:23 UTC (permalink / raw)
To: pve-user
On 28/11/2020 2:45 am, Marco M. Gabriel wrote:
> It would be great if anyone could test and confirm that an upgrade to
> Ceph Octopus resolves the issue.
Regarding the dashboard not working on Proxmox 6.3: upgrading to Ceph
Octopus fixed it for me. The dashboard also looks a lot more swish :)
--
Lindsay
end of thread, other threads:[~2020-11-28 3:24 UTC | newest]
Thread overview: 11+ messages
-- links below jump to the message on this page --
2020-11-27 11:47 [PVE-User] Python problem with upgrade to proxmoxVE6.3/ CEPH Nautilus 14.2.15 Jean-Luc Oms
2020-11-27 13:07 ` Lindsay Mathieson
[not found] ` <47b5a337-b2ca-ce6d-37c5-e904db8d6e03@univ-fcomte.fr>
2020-11-27 16:59 ` alexandre derumier
2020-11-27 17:12 ` [PVE-User] ProxmoxVE6.3/CEPH Octopus 15.2.6 was " Jean-Daniel TISSOT
2020-11-27 21:17 ` [PVE-User] " Lindsay Mathieson
2020-11-27 17:06 ` Marco M. Gabriel
2020-11-27 17:08 ` alexandre derumier
2020-11-27 17:18 ` Jean-Daniel TISSOT
2020-11-27 15:15 ` Jean-Luc Oms
2020-11-27 16:45 ` Marco M. Gabriel
2020-11-28 3:23 ` Lindsay Mathieson
Service provided by Proxmox Server Solutions GmbH