Juju add-k8s in openstack no route to controller

Having deployed charmed-kubernetes on OpenStack, for a laugh I thought I would try add-k8s and bootstrap. I created some storage nonsense (Ceph in Ceph?); it nearly works, but no cigar.

18:16:39 INFO  juju.juju api.go:67 connecting to API addresses: []
18:26:39 ERROR juju.cmd.juju.commands bootstrap.go:795 unable to contact api server after 1 attempts: dial tcp i/o timeout

10.152.whatever is buried in kubernetes. I have no route. What should I have done first?

Many thanks for a clue.

The k8s service that gets spun up to sit in front of the controller pods needs to get a public IP address, not a cloud-local one.

One option is to configure the k8s cluster to support LoadBalancer or ExternalIP services and use bootstrap options to configure the controller service accordingly.

The supported service config options are similar to what can be specified when deploying a charm and can be seen here.


$ juju bootstrap myk8s \
  --config controller-service-type=external \
  --config controller-external-name=mydnsname \
  --config controller-external-ips='[x.x.x.x, y.y.y.y]'
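As a quick sanity check after bootstrap, the resulting controller service and whatever external address it was given can be inspected with kubectl (the controller-myk8s namespace name here is an assumption based on Juju's controller-&lt;name&gt; naming convention):

```shell
# Show the controller service's type and its external IP/hostname, if any.
# Namespace name is assumed; adjust to match your controller.
kubectl -n controller-myk8s get svc controller-service
```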

@wallyworld many thanks for the prompt but there is something that I am missing.

When I posted originally I was running charmed-kubernetes vanilla without the openstack-integrator (which gives some issues). I’ve since rebuilt with openstack-integrator, in search of the loadbalancer configuration.

I managed to get this working with a guessed combination of settings: 'floating-network-id' and 'lb-floating-network' (both set to the external provider network), plus 'subnet-id' (set to the OpenStack subnet hosting the k8s machines). I also had to set 'use-default-secgroup' on the model, and add the 'subnet-id' subnet to the ingress rules of the default security group so that the loadbalancer can connect. With all of this in place, kubectl basically works from outside, with the one remaining issue that the certificate does not include the LB IP address.
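For reference, the combination described above can be expressed roughly as follows (the UUID placeholders are assumptions; substitute your own external network and subnet IDs):

```shell
# Point the integrator at the external provider network and the k8s subnet.
juju config openstack-integrator \
    floating-network-id=<external-net-uuid> \
    lb-floating-network=<external-net-uuid> \
    subnet-id=<k8s-subnet-uuid>

# Have the model's instances use the default security group,
# so the loadbalancer ingress rule added there takes effect.
juju model-config use-default-secgroup=true
```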

I try to bootstrap the k8s with a loadbalancer:

juju bootstrap myk8s --debug --config controller-service-type=loadbalancer
21:03:03 DEBUG juju.kubernetes.provider bootstrap.go:432 creating controller service:
&Service{ObjectMeta:{controller-service  controller-myk8s    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[juju-app:controller] map[juju.io/controller:ca994d22-efab-45b6-8830-995b94d1f4ee] [] []  []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:api-server,Protocol:,Port:17070,TargetPort:{0 17070 },NodePort:0,},},Selector:map[string]string{juju-app: controller,},ClusterIP:,Type:LoadBalancer,ExternalIPs:[],SessionAffinity:,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamily:nil,TopologyKeys:[],},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},},}
21:03:04 DEBUG juju.kubernetes.provider bootstrap.go:466 polling k8s controller svc DNS, in 1 attempt, controller service address not provisioned
21:03:07 DEBUG juju.kubernetes.provider bootstrap.go:466 polling k8s controller svc DNS, in 2 attempt, controller service address not provisioned
21:03:10 DEBUG juju.kubernetes.provider bootstrap.go:466 polling k8s controller svc DNS, in 3 attempt, controller service address not provisioned

and lots more like that.
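In case it helps anyone hitting the same symptom: the usual way to see why a LoadBalancer service stays unprovisioned is to inspect the service's events, and, when openstack-integrator/Octavia is in play, the OpenStack side as well (the controller-myk8s namespace name is an assumption based on Juju's controller-&lt;name&gt; convention):

```shell
# The Events section usually says why the external address is still pending.
# Namespace name is assumed; adjust to match your controller.
kubectl -n controller-myk8s describe svc controller-service

# If the cluster provisions LBs via Octavia, check whether one was created at all.
openstack loadbalancer list
```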

I don’t understand the suggestion to set e.g. controller-external-ips here; I was expecting those to be assigned dynamically.

The bootstrap command was just an example of how you can pass in k8s params to control how the service is configured. It wasn’t intended to be the actual solution for your case. Each k8s cluster can be set up differently and you need to know how your cluster works in order to know how best to configure the appropriate front end for your services.

The error in your logs indicates that the requested loadbalancer service cannot be provisioned by k8s. I am not sure what to do there to properly configure the cluster and openstack to make things work.

Thanks, yes, there are clearly some other problems on my side. Getting my head around it now…