Trouble using add-k8s to access a kubernetes-core cluster on OpenStack created by another client

I deployed kubernetes-core with Juju on an OpenStack cloud. Now, on another machine, I want to run juju add-k8s and bootstrap it, but the bootstrap process seems to fail with the following error:

ERROR failed to bootstrap model: creating controller stack for controller: creating service for controller: attempt count exceeded: controller service address not provisioned

It seems like the controller is created on Kubernetes, because Juju (even on another machine) doesn’t let me create a controller with the same name.
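
For reference, the leftovers can be checked for directly on the cluster; this is a sketch, assuming the controller-<name> namespace naming that also shows up in the bootstrap output later in this thread:

kubectl get namespaces
# a namespace like controller-<name> is what a failed bootstrap leaves behind;
# deleting it should allow retrying with the same controller name:
kubectl delete namespace controller-<name>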

I deployed the following kubernetes-core bundle: https://jaas.ai/kubernetes-core

And executed the following add-k8s command:

juju add-k8s --region=hayward/RegionOne --storage=local-storage < .kube/config

I created the local-storage class manually, and I copied the .kube/config from the cluster to that machine.
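
For reference, a manually created class of that kind looks roughly like this (a minimal sketch; the kubernetes.io/no-provisioner approach assumes statically provisioned local volumes, and only the name local-storage is taken from the command above):

kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
EOF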

Can you run through the commands you ran on the second client, please? I think there’s a mix-up between add-k8s (which adds the cloud) and juju bootstrap, but I want to make sure we follow what you’re saying.

On the one where I deployed kubernetes-core I ran these commands:

juju add-cloud
juju add-credential hayward
juju deploy ./bundle.yaml # The kubernetes-core bundle

No, sorry, I meant the one where you were trying to deploy workloads into your new Kubernetes.

So I assume that’s where you ran the add-k8s command and then a bootstrap command?

I would expect you’d do something like

juju add-k8s myk8scluster --storage=local-storage
juju bootstrap myk8scluster

I’m not sure what the “hayward” cloud is. Is that the OpenStack? Is that OpenStack known on the client from which you want to create the model on top of k8s?

Yes, hayward is the OpenStack cloud on which the Kubernetes was deployed. I also added it with the juju add-cloud command, so it shows up when I execute juju list-clouds.

The bootstrap command was just as you stated:
juju bootstrap juju-cluster k8s

Ok, so having the “region” of the k8s be based on the OpenStack isn’t quite right. The client bootstrapping to k8s shouldn’t need to worry about that, I don’t think. add-k8s will find the .kube/config, I believe, so I’m curious whether the command I put up there works for you?

That might need to be:

juju add-k8s --local ...

I have always been piping the .kube/config to the add-k8s command, just in case.

If I try just --local without a region, I get an error telling me to provide --region.

I also tried --local and --region together, and I get the same result as in the beginning: it starts bootstrapping, gets stuck in the “Creating k8s resources for controller” phase, and times out with the error from the first post of this thread.

Supplying a region with juju add-k8s was introduced in Juju 2.6 to allow Juju to perform the necessary checks for recommended storage and other housekeeping. Unfortunately, this was done as a mandatory argument, so it doesn’t work with clouds where a region may not be configured.

The soon-to-be-released Juju 2.6.3 (any day now) will have a fix for this issue: the region will be optional for clouds such as OpenStack.
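
Once that lands, something along these lines should work without a region (a sketch, reusing the cloud and storage names from earlier in this thread):

juju add-k8s juju-cluster --local --storage=local-storage < ~/.kube/config
juju bootstrap juju-cluster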

Hi Ian,

I just tried it out with the new Juju 2.6.3 release, and I still get the same error. Just in case, I added the recommended Cinder storage that one of the error messages told me to, but the error continues to be the following:

ubuntu@osm-kuber:~$ juju bootstrap juju-cluster
Creating Juju controller "juju-cluster" on juju-cluster
Creating k8s resources for controller "controller-juju-cluster"
ERROR failed to bootstrap model: creating controller stack for controller: creating service for controller: attempt count exceeded: controller service address not provisioned

Are there additional requirements my Kubernetes install may need for this to work? I know that in microk8s I need to enable storage and dns, but I think this should be enabled in a standard kubernetes-core install, right?

A standard kubernetes-core install will not set up any k8s storage class on the cluster; that’s usually done by deploying the cloud-specific (OpenStack, in this case) integrator charm and relating it to the master/worker charms.
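
The usual pattern is along these lines (a sketch; the charm and application names assume the standard Charmed Kubernetes setup of this era):

juju deploy cs:~containers/openstack-integrator
juju trust openstack-integrator
juju add-relation openstack-integrator kubernetes-master
juju add-relation openstack-integrator kubernetes-worker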

Can you confirm that your cluster has a StorageClass set up and annotated as the cluster default storage? Also, what’s in the config block when you run juju show-cloud <yourk8scluster>? When you bootstrap, is a PVC created in the cluster? If so, what does kubectl say about its status? Can you try bootstrapping with --debug to get some extra output?
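
For the storage checks, something like this should show what’s there (a sketch; the namespace naming assumes the default controller-<name> pattern, and <pvc-name> is whatever the previous command lists):

# look for "(default)" next to one of the classes
kubectl get storageclasses
# inspect the controller's PVC, if one was created during bootstrap
kubectl get pvc -n controller-<name>
kubectl describe pvc <pvc-name> -n controller-<name>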

I reinstalled kubernetes-core following the instructions posted in the link you sent me:

juju deploy cs:canonical-kubernetes --overlay ./k8s-openstack-overlay.yaml
juju trust openstack-integrator

Afterwards, I connected kubectl to the k8s cluster by copying the config file from kubernetes-master to ~/.kube/config (see the command sketch after the output below).
Then I executed the script in the section “Creating a pod with a PersistentDisk-backed volume” of the examples in the link you provided.
The created storage class was also set as the default, so I got the following result:

ubuntu@kuber:~$ kubectl get storageclasses
NAME                           PROVISIONER            AGE
openstack-standard (default)   kubernetes.io/cinder   34m
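
For reference, that config file is typically fetched from the master with a command along these lines (a sketch; the unit name assumes the default deployment):

juju scp kubernetes-master/0:config ~/.kube/config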

Having done this, I executed the following command:

cat .kube/config | juju add-k8s --cloud=hayward juju-cluster --storage=openstack-standard

‘hayward’ is the OpenStack cloud where kubernetes-core was deployed. This is the result of the show-cloud command:

ubuntu@kuber:~$ juju show-cloud juju-cluster
defined: public
type: k8s
description: ""
auth-types: [userpass]
endpoint: https://192.168.20.134:6443
regions:
  "": {}
ca-credentials:
- |-
  -----BEGIN CERTIFICATE-----
  MIIDOzCCAiOgAwIBAgIJAJj4Pyw55gT5MA0GCSqGSIb3DQEBCwUAMBgxFjAUBgNV
  BAMMDTEwLjE4MS4yMi4xOTUwHhcNMTkwNjA1MDgyMTIxWhcNMjkwNjAyMDgyMTIx
  WjAYMRYwFAYDVQQDDA0xMC4xODEuMjIuMTk1MIIBIjANBgkqhkiG9w0BAQEFAAOC
  AQ8AMIIBCgKCAQEAoMMlcubGXRCws8hy0wE/rh+GzBegxoLScNNuNGk8rdaU22Jh
  MZe6xpjHBePolNTLRKolpZeVA8W8/vRKL+3JHShkV2DxjMmqOFuMjMA00eXzZc9b
  e0Lchkm6fZ/UkVcbI0kskV4+x7w9vvh497EHglUzwMV6/Nvs/KpXl5dAzVCHnfVd
  sOPLa+oDGwKP6Kz7D2f2PQEay6ZsuIO5/bQtw1s7Un7lhowHQ9pPB9nOQA0HUgcY
  uINoCd/0sUgo9pPZlhN58wfXEemhquA9g4sZPbIvblPElGcRfqmf3xvCdEsRebxn
  wXFQRjsGACsQo+MTm9CW3RGD8l6aS0R3hxogaQIDAQABo4GHMIGEMB0GA1UdDgQW
  BBSg13xEqGZiYOMDwyxI8M5C6dBjCTBIBgNVHSMEQTA/gBSg13xEqGZiYOMDwyxI
  8M5C6dBjCaEcpBowGDEWMBQGA1UEAwwNMTAuMTgxLjIyLjE5NYIJAJj4Pyw55gT5
  MAwGA1UdEwQFMAMBAf8wCwYDVR0PBAQDAgEGMA0GCSqGSIb3DQEBCwUAA4IBAQBb
  A0b6q3yLLSuySjzx7L9CE3wD+Ieon6dxxPOV+2K1ZQhYtQX2T/qsI2H+sAsW/ckd
  K9c6Yoq4k3n2avxTnfcq25RxVJMhSwLsreJeGPcVlKkca3m7gG+39NqkEe/CySZ7
  fpRlLnHnSNMmZTeXmogQ8VrWpSsEGHqIWV2w+fbBtTebTElkePIf4eRSW60MypRh
  zbd+fy/ba1eI7Hmx8IdXypOEj0pZjIZIhrV1B+f4RG1NrdArHjggGxVE554y9svh
  y7ylxk8YOjnYdPK0VGwrbzEWCkAtpemR8jDwexuNfKnAHY84pWbbJa82yD+oDQv2
  nr93xa6wTvKhqZxOArBI
  -----END CERTIFICATE-----

Finally, I executed juju bootstrap juju-cluster --debug and got the same error as before.

Here is the complete output of the bootstrap command: Ubuntu Pastebin

For those following along, it appears that the k8s service created for the Juju controller is not getting an externally accessible IP address assigned to it. We need to dig into the reason why.
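
A quick way to confirm that (a sketch; the namespace matches the bootstrap output above):

kubectl get svc -n controller-juju-cluster
# an EXTERNAL-IP stuck at <pending> means no address was ever assigned to the service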

@dominik.f, can you provide the juju model-config output for the model in which k8s is deployed, please?

Hi, sorry for the late response; I just picked this issue back up. Here is the output I get when executing juju model-config:

Attribute                     From     Value
agent-metadata-url            default  ""
agent-stream                  default  released
agent-version                 model    2.6.5
apt-ftp-proxy                 default  ""
apt-http-proxy                default  ""
apt-https-proxy               default  ""
apt-mirror                    default  ""
apt-no-proxy                  default  ""
automatically-retry-hooks     default  true
backup-dir                    default  ""
cloudinit-userdata            default  ""
container-image-metadata-url  default  ""
container-image-stream        default  released
container-inherit-properties  default  ""
container-networking-method   model    fan
default-series                default  bionic
development                   default  false
disable-network-management    default  false
egress-subnets                default  ""
enable-os-refresh-update      default  true
enable-os-upgrade             default  true
external-network              default  ""
fan-config                    model    10.0.0.0/24=252.0.0.0/8
firewall-mode                 default  instance
ftp-proxy                     default  ""
http-proxy                    default  ""
https-proxy                   default  ""
ignore-machine-addresses      default  false
image-metadata-url            default  ""
image-stream                  default  released
juju-ftp-proxy                default  ""
juju-http-proxy               default  ""
juju-https-proxy              default  ""
juju-no-proxy                 default  127.0.0.1,localhost,::1
logforward-enabled            default  false
logging-config                model    <root>=INFO;unit=DEBUG
max-action-results-age        default  336h
max-action-results-size       default  5G
max-status-history-age        default  336h
max-status-history-size       default  5G
net-bond-reconfigure-delay    default  17
network                       default  ""
no-proxy                      default  127.0.0.1,localhost,::1
policy-target-group           default  ""
provisioner-harvest-mode      default  destroyed
proxy-ssh                     default  false
resource-tags                 model    {}
snap-http-proxy               default  ""
snap-https-proxy              default  ""
snap-store-assertions         default  ""
snap-store-proxy              default  ""
ssl-hostname-verification     default  true
storage-default-block-source  model    cinder
test-mode                     default  false
transmit-vendor-metrics       default  true
update-status-hook-interval   default  5m
use-default-secgroup          default  false
use-floating-ip               default  false
use-openstack-gbp             default  false

Hey there, naively trying to bump this one. I have the same issue: my bootstrap process hangs at “Contacting Juju controller at 10.152.183.53 to verify accessibility…”

I have deployed a Juju controller on a local MAAS cloud and then, using Juju, deployed the kubernetes-core bundle. Now I’m trying to bootstrap the new k8s cloud. (I’m not even sure this is the way to go; I seem to be jujuing in circles.)

So I guess there’s something important missing in my setup regarding networking. Tips, tricks and googling suggestions are welcome; I will dump configs & logs on demand, as I don’t want to spam too much.

TY!

Hi @noobadmin
What does the cloud look like?
juju show-cloud <cloud> --client

By the way, you are very welcome to use paste.ubuntu.com to submit these kinds of text documents.

Hi @noobadmin

What’s the controller service type?

$ kubectl get svc -n controller-<your-controller-name> controller-service -o json | jq .spec.type

If it’s LoadBalancer, it might be that the kubeapi-load-balancer in kubernetes-core wasn’t configured properly.

You can test if it works:

#  https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/
$ kubectl apply -f https://k8s.io/examples/service/access/frontend.yaml
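
If the frontend service from that example never gets an external IP either, the problem is in the cluster’s load-balancer plumbing rather than in Juju. A sketch of the follow-up check, assuming the example creates a LoadBalancer service named frontend as it did at the time:

$ kubectl get svc frontend --watch
# EXTERNAL-IP stuck at <pending> confirms the cluster can't provision load balancers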

Thanks a lot for your fast responses, and sorry for the late reply.

I’ve worked around this by sshing into the Kubernetes master and bootstrapping a controller there, but that probably isn’t what it should look like :frowning: