How to deploy charms to specific clusters on vSphere Cloud

Hi all,

I have created a vSphere cloud for use with Juju. I have 2 clusters on my vCenter (Cluster01 and Cluster02), with different datastores on each. I have created a new model and set the model-config to use DataStore02 as the datastore.

When I deploy a charm, e.g. charmed-kubernetes, the machines are created following the order of the clusters (regions??), and in most cases the machines aren’t deployed because the configured DataStore02 is not found on the first cluster.

Can I deploy the charm to a specific cluster? Can I create the vSphere cloud with regions (clusters)?

When I create the cloud it asks for a datacenter, but if I write anything other than the base datacenter (e.g. DC01/host/Cluster02) it won’t work.

The problem is that none of the charms deploy correctly…

We are running a multi-cloud, multi-user setup with an on-prem JIMM, using MAAS and vSphere, and we will bring in a few other cloud substrates down the line.

vSphere works “OK” with Juju, although there are quite a few limitations. In our setup we deploy targeting individual clouds (vsphere-cloud) + regions (datacenter1), like this:

juju add-model mymodel vsphere-cloud/datacenter1

From there, you can experiment with juju model-config to set various things. I’ve written a bit about it here: https://discourse.jujucharms.com/t/bootstrapping-controller-with-model-defaults-settings-keys/461 which might lead you in the right direction.
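For example, a minimal sketch of pointing a model at a particular datastore (the model and datastore names below are placeholders, not from your setup):

juju model-config -m mymodel datastore=DataStore02   # vSphere-specific model key
juju model-config -m mymodel                         # list the resulting settings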

A few basic checks are:

  • Do you have your datacenters defined in your clouds.yaml? How does it look? (See the clouds.yaml sketch after this list.)
  • The mapping between vSphere terms (datacenters, clusters, hosts) and Juju’s terms can be different. You can read about that here: https://discourse.jujucharms.com/t/using-vmware-vsphere-with-juju/1099 - Did you read through it?
  • A good hint is that vSphere sometimes “logs you out”, which will show up in juju status. I often need to run juju update-credentials to get back to operating/deploying etc. in vSphere clouds.
  • vSphere clouds can’t target individual “datastores” for allocating nodes. (2019-december)
  • vSphere can’t add CentOS images. (2019-december) - not even sure it can change the default images… (2019-december)
  • vSphere has no support for spaces. (2019-december)
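For reference, a vSphere clouds.yaml with the datacenter declared as a region might look roughly like this (a minimal sketch; the cloud name, datacenter name and vCenter address are placeholders):

clouds:
  vsphere-cloud:
    type: vsphere
    auth-types: [userpass]
    endpoint: 10.0.0.10            # vCenter IP or hostname (placeholder)
    regions:
      DC01:                        # each region maps to a vSphere datacenter
        endpoint: 10.0.0.10

Note that regions map to vSphere datacenters, not clusters; the clusters inside a datacenter show up in Juju as availability zones.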

@timClicks might be able to correct or fill in…

We have quite some experience with vSphere now and have fallen into a few pitfalls.

Thank you for your persistence, by the way. We’re working hard to improve vsphere support. Input from larger sites such as yours is very helpful.


I believe what you are looking for is --to zone=Cluster2. The full set of commands should be:

juju add-model cdk <cloud>/<datacenter>
juju model-config datastore=DataStore02
juju deploy charmed-kubernetes --to zone=Cluster2

Juju uses the term “availability zone”, because this is what other providers use. See the “vSphere specific features” section of Using VMware vSphere with Juju. Juju’s default behaviour is to spread the placement of units across availability zones, to increase resilience to hardware failure.
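A rough sketch of the zone placement directive in isolation (the ubuntu charm here is just a stand-in, and Cluster02 is one of your cluster names):

juju add-machine zone=Cluster02            # ask for a machine in a specific availability zone
juju deploy ubuntu --to zone=Cluster02     # or place a unit there directly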

That doesn’t work for me:

$ juju deploy kubernetes-core --to zone=Cluster02
ERROR options provided but not supported when deploying a bundle: --to

I only have one datacenter defined in the vSphere cloud, but with 4 clusters. I have also set the datastore in model-config to DataStore04.

When the machines are deployed, if the first cluster Juju tries to deploy to is, say, Cluster01, it fails because DataStore04 is not accessible from that cluster, and the machine remains in an error state.

One thing: this config worked before. When a machine failed to provision, Juju retried up to 10 times, changing the availability zone on each attempt, but now it always uses the same availability zone.

Oh, of course. It’s a bundle, not a charm. Let me think and get back to you.

Thanks. As I said before, this worked previously: Juju tried the first cluster and then tried the other clusters until the datastore defined in model-config was accessible. I don’t know why, in the latest attempts, Juju keeps trying the same availability zone…

Okay, this is very possible. As evidence, here is CDK deployed onto AWS instances, but notably they’re all deployed to the same availability zone:

Model  Controller  Cloud/Region   Version      SLA          Timestamp
cdk    c-aws       aws/us-east-1  2.8-beta1.1  unsupported  21:51:05+13:00

App                    Version  Status       Scale  Charm                  Store       Rev  OS      Notes
containerd                      waiting          0  containerd             jujucharms   46  ubuntu
easyrsa                         maintenance      1  easyrsa                jujucharms  289  ubuntu
etcd                            maintenance      3  etcd                   jujucharms  478  ubuntu
flannel                         waiting          0  flannel                jujucharms  462  ubuntu
kubeapi-load-balancer           maintenance      1  kubeapi-load-balancer  jujucharms  695  ubuntu  exposed
kubernetes-master               maintenance      2  kubernetes-master      jujucharms  778  ubuntu
kubernetes-worker               maintenance      3  kubernetes-worker      jujucharms  613  ubuntu  exposed

Unit                      Workload     Agent      Machine  Public address  Ports  Message
easyrsa/0*                maintenance  executing  0        35.175.150.212         (install) installing charm software
etcd/0                    maintenance  executing  1        54.166.190.208         (install) installing charm software
etcd/1*                   maintenance  executing  2        3.86.103.111           (install) installing charm software
etcd/2                    maintenance  executing  3        52.91.19.22            (install) installing charm software
kubeapi-load-balancer/0*  maintenance  executing  4        3.82.136.68            (install) installing charm software
kubernetes-master/0*      maintenance  executing  5        3.88.223.49            (install) installing charm software
kubernetes-master/1       maintenance  executing  6        3.83.148.230           (install) installing charm software
kubernetes-worker/0*      maintenance  executing  7        100.27.21.52           (install) installing charm software
kubernetes-worker/1       maintenance  executing  8        3.93.236.66            (install) installing charm software
kubernetes-worker/2       maintenance  executing  9        3.87.196.220           (install) installing charm software

Machine  State    DNS             Inst id              Series  AZ          Message
0        started  35.175.150.212  i-030f843238dd825e6  bionic  us-east-1c  running
1        started  54.166.190.208  i-03af8e748ef69305a  bionic  us-east-1c  running
2        started  3.86.103.111    i-018512895c7a92af3  bionic  us-east-1c  running
3        started  52.91.19.22     i-0e98e556b85ce073d  bionic  us-east-1c  running
4        started  3.82.136.68     i-019b6e9bb9199f176  bionic  us-east-1c  running
5        started  3.88.223.49     i-0f810d493aa0cfc87  bionic  us-east-1c  running
6        started  3.83.148.230    i-09a75c240ec6e5d6a  bionic  us-east-1c  running
7        started  100.27.21.52    i-0da36f48735f87e82  bionic  us-east-1c  running
8        started  3.93.236.66     i-09fb5cd4458cac88a  bionic  us-east-1c  running
9        started  3.87.196.220    i-0e8b521a83d3f391f  bionic  us-east-1c  running

To do this, you need to customise the bundle.yaml for your hosting environment. To find it, visit the bundle’s web page, then click on “bundle.yaml” in the Files box on the right-hand side.

Every application needs to have a zones constraint added. See the example below for a model.

Replace “us-east-1c” with whichever cluster you prefer. If multiple clusters can access the datastore that you want the machines stored on, provide a comma-separated list, e.g. zones=Cluster02,Cluster03. Note that the constraint is zones, not zone.

description: A highly-available, production-grade Kubernetes cluster.
series: bionic
services:
  containerd: # no constraints needed here
    annotations:
      gui-x: '475'
      gui-y: '800'
    charm: cs:~containers/containerd-46
    resources: {}
  easyrsa:
    annotations:
      gui-x: '90'
      gui-y: '420'
    charm: cs:~containers/easyrsa-289
    constraints: root-disk=8G zones=us-east-1c # changed line
    num_units: 1
    resources:
      easyrsa: 5
  etcd:
    annotations:
      gui-x: '800'
      gui-y: '420'
    charm: cs:~containers/etcd-478
    constraints: root-disk=8G zones=us-east-1c # changed line
    num_units: 3
    options:
      channel: 3.2/stable
    resources:
      core: 0
      etcd: 3
      snapshot: 0
  flannel: # no constraints needed here
    annotations:
      gui-x: '475'
      gui-y: '605'
    charm: cs:~containers/flannel-462
    resources:
      flannel-amd64: 516
      flannel-arm64: 512
      flannel-s390x: 499
  kubeapi-load-balancer:
    annotations:
      gui-x: '450'
      gui-y: '250'
    charm: cs:~containers/kubeapi-load-balancer-695
    constraints: root-disk=8G zones=us-east-1c # changed line
    expose: true
    num_units: 1
    resources: {}
  kubernetes-master:
    annotations:
      gui-x: '800'
      gui-y: '850'
    charm: cs:~containers/kubernetes-master-778
    constraints: cores=2 mem=4G root-disk=16G zones=us-east-1c # changed line
    num_units: 2
    options:
      channel: 1.16/stable
    resources:
      cdk-addons: 0
      core: 0
      kube-apiserver: 0
      kube-controller-manager: 0
      kube-proxy: 0
      kube-scheduler: 0
      kubectl: 0
  kubernetes-worker:
    annotations:
      gui-x: '90'
      gui-y: '850'
    charm: cs:~containers/kubernetes-worker-613
    constraints: cores=4 mem=4G root-disk=16G zones=us-east-1c # changed line
    expose: true
    num_units: 3
    options:
      channel: 1.16/stable
    resources:
      cni-amd64: 516
      cni-arm64: 507
      cni-s390x: 519
      core: 0
      kube-proxy: 0
      kubectl: 0
      kubelet: 0
relations:
- - kubernetes-master:kube-api-endpoint
  - kubeapi-load-balancer:apiserver
- - kubernetes-master:loadbalancer
  - kubeapi-load-balancer:loadbalancer
- - kubernetes-master:kube-control
  - kubernetes-worker:kube-control
- - kubernetes-master:certificates
  - easyrsa:client
- - etcd:certificates
  - easyrsa:client
- - kubernetes-master:etcd
  - etcd:db
- - kubernetes-worker:certificates
  - easyrsa:client
- - kubernetes-worker:kube-api-endpoint
  - kubeapi-load-balancer:website
- - kubeapi-load-balancer:certificates
  - easyrsa:client
- - flannel:etcd
  - etcd:db
- - flannel:cni
  - kubernetes-master:cni
- - flannel:cni
  - kubernetes-worker:cni
- - containerd:containerd
  - kubernetes-worker:container-runtime
- - containerd:containerd
  - kubernetes-master:container-runtime

This feels like a bug. If you are willing to share your shell history, I would be very interested in learning more about the behaviour you are seeing.


OK, let me try the modifications in the bundle file and I’ll let you know.

Thanks,

Hi,

I have put the zones directive in the machine constraints instead of the charm constraints, like this:

'0':
  series: bionic
  constraints: cpu-cores=2 mem=4G root-disk=16G zones=Cluster04
'1':
  series: bionic
  constraints: cpu-cores=4 mem=8G root-disk=16G zones=Cluster04

and so… it worked!!!

Thank you very much for your help and interest. If you want, I can provide info and shell history to show why Juju keeps trying to deploy to the same zone even though the designated datastore is not accessible, when no zones constraint is given.

Regards,


:confetti_ball: Excellent :confetti_ball:

Let me know if you encounter any more trouble. As @erik-lonroth has mentioned, the vSphere provider code is lacking features compared to MAAS. Please complain loudly if something that you need is missing. That makes it much easier to prioritise.


I’ve read through this thread and appreciate it. I’m still trying to figure out how to use multiple datastores in the same model…