Hm, that seems ok. I have a couple more things to try. First, are you using
keystone in this deployment, or is this just vanilla
charmed-kubernetes deployed to AWS? FWIW, you would have had to manually deploy
keystone and add a relation; if you didn’t do that, you’re not using it.
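A quick way to double check: ask juju status to filter on keystone; if it was never deployed, the filtered output will come back with no matching application.

$ juju status keystone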
Assuming no keystone, set the dashboard auth back to basic (auto should also work, but let’s be explicit just in case):
juju config kubernetes-master dashboard-auth='basic'
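If you want to be sure the change landed, querying the same key should echo the value back:

$ juju config kubernetes-master dashboard-auth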
And let’s verify the admin passwords (output is just for reference; don’t share your version here):
$ juju run --application kubernetes-master -- grep admin /root/cdk/basic_auth.csv
- Stdout: |
    <admin line redacted>
- Stdout: |
    <admin line redacted>
Again, don’t share your output; I just want you to double check that the first field from that admin line matches for all k8s-master units and that it matches the password that you have in your kube config file. That first csv field value should allow you to log in using Basic auth with admin as the username and <value> as the password.
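If you need to dig that password out of your kube config to compare, a grep along these lines should show it (assuming the default ~/.kube/config location; adjust the path if yours lives elsewhere):

$ grep -E 'username|password' ~/.kube/config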
If all the passwords match, let’s try hitting the dashboard directly instead of via your kubectl proxy machine. Find the public IP of the kubeapi-load-balancer:
$ juju status kubeapi-load-balancer
App                    Version  Status  Scale  Charm                  Store       Rev  OS      Notes
kubeapi-load-balancer  1.14.0   active      1  kubeapi-load-balancer  jujucharms  729  ubuntu  exposed

Unit                      Workload  Agent  Machine  Public address  Ports    Message
kubeapi-load-balancer/0*  active    idle   4        126.96.36.199   443/tcp  Loadbalancer ready.
Two things to note from above: (a) make sure the app shows as exposed in the Notes column, and (b) use the Public address IP to access the dashboard. From above, I would navigate to the dashboard at that public address; substitute your public address in place of mine and see if you can log in with either of the dashboard auth mechanisms.
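If the app doesn’t show as exposed, exposing it and poking the load balancer from the command line is a quick sanity check before trying the browser. This is just a sketch: substitute your own public address and the admin password from the csv above.

$ juju expose kubeapi-load-balancer   # only needed if the Notes column is missing 'exposed'
$ curl -k -u admin:<password> https://126.96.36.199/   # -k skips the self-signed cert check; a JSON list of API paths means auth is working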
To summarize, the things to check:
- are you using keystone for auth?
- explicitly configure k8s-master to set dashboard-auth='basic'
- verify the admin password matches across all k8s-master units and the password from your kubeconfig file
- try to connect to the kubeapi-load-balancer directly without going through the machine that you’re using to run kubectl proxy
If any of those result in success, we can hopefully figure out where things went wrong. Thanks for your patience and willingness to debug here!