I have a small-scale OpenStack bionic-rocky cloud deployed with MAAS/Juju:
- 3x controller nodes (APIs, Keystone, telemetry, monitoring… deployed in containers)
- 4x hyperconverged nodes (Ceph OSD / Nova)
The issue I am facing is slow dashboard performance. Using Chrome dev tools I can see that:
- fetching instances can take up to 19 s
- the same goes for networks
- the Volumes tab takes about 3 s to load, which is fair enough
When I reboot my controllers, the same requests are much faster right after the reboot (2 to 3 seconds). However, after two days or so everything goes slow again.
Any ideas on how to solve this would be much appreciated.
Best Regards, RZ
Hi ronyzd,
Have you checked whether the response is equally slow using the OpenStack CLI? If the CLI is “fast” and the dashboard is slow, it might have to do with Django caching; otherwise I would check database performance first.
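To make that comparison reproducible, here is a minimal timing sketch. The `time_cmd` helper is a generic illustration (it assumes GNU `date` with `%N` nanosecond support); the `openstack` calls in the comments are only examples and assume an admin openrc has been sourced.

```shell
# time_cmd: run any command and print its elapsed wall time in milliseconds
# (assumes GNU coreutils date, which supports %N for nanoseconds)
time_cmd() {
  local start end
  start=$(date +%s%N)
  "$@" > /dev/null
  end=$(date +%s%N)
  echo "elapsed_ms=$(( (end - start) / 1000000 ))"
}

# Example usage (assumes sourced credentials; not run here):
#   time_cmd openstack server list --all-projects
#   time_cmd openstack network list
```

If `openstack server list` stays in the low seconds while the dashboard's instances request takes ~19 s, the bottleneck is more likely on the Horizon/Django side than in the APIs themselves.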
Offline compression is enabled.
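For what it's worth, a hedged way to double-check that setting on a controller; the path is an assumption based on the Ubuntu `openstack-dashboard` package layout, and `COMPRESS_OFFLINE` is the django-compressor setting Horizon uses.

```shell
# Look for the django-compressor offline setting in Horizon's config.
# /etc/openstack-dashboard/ is an assumption (Ubuntu package layout);
# adjust the path if your deployment differs.
grep -Rns "COMPRESS_OFFLINE" /etc/openstack-dashboard/ 2>/dev/null \
  || echo "COMPRESS_OFFLINE not found under /etc/openstack-dashboard/"
```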
One thing I don’t get is why it slows down over time after a reboot. As I said, if I reboot my controllers (one at a time) the dashboard is much faster, and then it starts slowing down again. 48 hours later it is performing really slowly…
When it is slowing down, have you tried debugging things on the control-plane nodes? One wild guess would be that the Juju agents are consuming a lot of resources (they shouldn’t). Have you tried with your controllers down, to check whether things are still slow?
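A quick first pass in that direction could look like the sketch below, run on a controller node while the dashboard is slow (standard procps tools only; `jujud` is the Juju agent binary name).

```shell
# Top CPU and memory consumers on this node while the dashboard is slow
ps aux --sort=-%cpu | head -n 10
ps aux --sort=-%mem | head -n 10

# Check whether the Juju agents (jujud) show up among them
pgrep -a jujud || echo "no jujud processes found"
```

Comparing these snapshots right after a reboot and again 48 hours later would show whether some process (or the database) is steadily growing its footprint.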
Thank you very much for trying to help with this.
When I say my controller nodes, I am talking about the OpenStack servers hosting all the different services besides Nova and Ceph OSD (dashboard, Keystone, Glance, Cinder…) inside LXD containers, not the Juju controller (I only have one).
Hello soumplis,
Just to update on the issue: I upgraded the stack to bionic-stein and now everything seems to be working fine. The dashboard Instances tab went down to 2.5 s and Networks to 2 s.