Juju 2.6 beta 1 Release Notes

The Juju team is pleased to announce the release of Juju 2.6-beta1.

The 2.6 release will deliver important new features and bug fixes, some of which are ready to be tested in this first beta. We will support upgrading from the beta to the final 2.6 release, so if you install now you can move to the production release without having to reinstall.

New features and major changes

Kubernetes Support

Juju on k8s has a number of new features, the biggest of which is that it's now possible to bootstrap a controller directly on a k8s cluster - no external controller is needed anymore. There are also additions to what a charm is able to specify in terms of the pod spec it needs for its workloads, plus a number of other enhancements and polish.

microk8s is built-in

Just as with LXD, if microk8s is snap installed, Juju will recognise it out of the box. There’s no need to run add-k8s to have the “microk8s” cloud appear in list-clouds or to allow bootstrap. By default, microk8s does not enable storage or DNS, so you’ll need to do this yourself:

microk8s.enable storage dns

Juju will warn you if you bootstrap and these features are not enabled.
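
As a quick sanity check (assuming microk8s was installed from the snap), you can enable the addons and confirm that the built-in cloud is visible:

microk8s.status                  # shows which addons are currently enabled
microk8s.enable storage dns      # enable the addons Juju needs
juju list-clouds                 # the built-in "microk8s" cloud should appear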

Bootstrap

You can bootstrap to k8s just as for any other cloud, e.g. for microk8s:

juju bootstrap microk8s mytest

The controller pod will run in a namespace called controller-mytest, so it's important that you specify a controller name. This is not enforced yet but will be in the next beta.
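
If you want to see the controller pod itself, you can query that namespace directly; the kubectl invocation below is illustrative and assumes microk8s:

microk8s.kubectl get pods -n controller-mytest    # the Juju controller pod lives here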

Other controller related commands like destroy-controller and kill-controller work as expected.

Bootstrap has been tested to work on:

  • microk8s
  • AKS

There's a bug preventing it from working on CDK deployments on public clouds like AWS, due to issues with the Load Balancer initialising. There's also an upstream GKE issue with mounting volumes that we need to work around before bootstrap on GKE will work.

Using add-k8s

The add-k8s command is used to register/add a cluster to make it available to Juju. This can be done either with --local, which adds the cluster to the local cache so it can be bootstrapped, or by adding the cluster to a running controller.

When adding a cluster, Juju needs to know the cloud type and region of the underlying cloud hosting the cluster, so that suitable storage can be set up. Juju will attempt to detect the cloud details by querying the cluster, but this is not always possible. For clusters deployed using CDK, you may be asked to supply this information using the --region argument to add-k8s, e.g.

juju add-k8s --region ec2/us-east-1 mycluster

For this first beta, you will need to specify the cloud type (ec2, gce, azure). The next beta will also accept a local cloud name (so long as the cloud exists locally, the cloud type can be looked up).
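
To illustrate the two modes described above (adding to the local cache with --local versus adding to a running controller), a minimal sketch using the same placeholder region and cluster name:

juju add-k8s --local --region ec2/us-east-1 mycluster    # add to the local cache only, ready for bootstrap
juju add-k8s --region ec2/us-east-1 mycluster            # add to the current controller instead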

Workloads on public clusters (GKE, AKS)

The add-k8s command - used to register/add a k8s cluster locally so you can bootstrap, or to add a cluster to a running controller - has been enhanced to allow AKS and GKE clusters to be used. EKS has issues, so it is not supported yet.

You first need to either snap install gcloud for GKE or apt install az for AKS. After using the relevant CLI tool to log in to your account, and creating a cluster using either the CLI or the web interface, you can then run:

juju add-k8s --gke
or
juju add-k8s --aks

and you will be stepped through adding your k8s cluster to Juju. If you already know things like the region, project id or account name, you can specify those to avoid some of the interactive prompts, e.g.

juju add-k8s --gke --credential=myaccount --project=myproject --region=someregion myk8scloud

The above parameters will match what you used when creating the cluster with gcloud.
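
Once the cluster has been added to a running controller, hosting models on it works as for any other cloud; the model and cloud names below are just examples:

juju add-model myk8smodel myk8scloud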

Default storage

When running add-k8s, Juju will detect the underlying cloud on which the k8s cluster is running and ensure that there’s a default storage class using a recommended storage provisioner for the cluster. This information is stored with the cluster definition in Juju and set as model default config options for operator-storage and workload-storage. Being model config options, these can be set to something different for any given model, and you can also set up Juju storage pools for more control over how storage is provisioned for any given application.

If the k8s cluster has no suitable storage defined, you will be prompted to run add-k8s with the --storage argument, to tell Juju which storage class to use for provisioning both operator and workload storage.
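
For example, to point Juju at a specific storage class up front, or to override the detected defaults on a particular model (the class and cluster names below are placeholders):

juju add-k8s --storage mystorageclass mycluster
juju model-config operator-storage=mystorageclass workload-storage=mystorageclass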

Additional charm capabilities

Charms can now specify many additional k8s pod attributes, including:

  • init containers
  • security context
  • service annotations
  • custom resource definitions

The easiest way to describe what's supported is by looking at an example of a charm pod spec. The example is representative but not complete (e.g. not all possible security context attributes are shown).

activeDeadlineSeconds: 10
serviceAccountName: serviceAccount
restartPolicy: OnFailure
terminationGracePeriodSeconds: 20
automountServiceAccountToken: true
securityContext:
  runAsNonRoot: true
  supplementalGroups: [1,2]
hostname: host
subdomain: sub
priorityClassName: top
priority: 30
dnsPolicy: ClusterFirstWithHostNet
dnsConfig: 
  nameservers: [ns1, ns2]
readinessGates:
  - conditionType: PodScheduled
containers:
  - name: gitlab
    image: gitlab/latest
    imagePullPolicy: Always
    command: ["sh", "-c"]
    args: ["doIt", "--debug"]
    workingDir: "/path/to/here"
    ports:
    - containerPort: 80
      name: fred
      protocol: TCP
    - containerPort: 443
      name: mary
    securityContext:
      runAsNonRoot: true
      privileged: true
    livenessProbe:
      initialDelaySeconds: 10
      httpGet:
        path: /ping
        port: 8080
    readinessProbe:
      initialDelaySeconds: 10
      httpGet:
        path: /pingReady
        port: www
    config:
      attr: foo=bar; name['fred']='blogs';
      foo: bar
      restricted: 'yes'
      switch: on
    files:
      - name: configuration
        mountPath: /var/lib/foo
        files:
          file1: |
            [config]
            foo: bar
  - name: gitlab-helper
    image: gitlab-helper/latest
    ports:
    - containerPort: 8080
      protocol: TCP
  - name: secret-image-user
    imageDetails:
        imagePath: staging.registry.org/testing/testing-image@sha256:deed-beef
        username: docker-registry
        password: hunter2
  - name: just-image-details
    imageDetails:
        imagePath: testing/no-secrets-needed@sha256:deed-beef
initContainers:
  - name: gitlab-init
    image: gitlab-init/latest
    imagePullPolicy: Always
    command: ["sh", "-c"]
    args: ["doIt", "--debug"]
    workingDir: "/path/to/here"
    ports:
    - containerPort: 80
      name: fred
      protocol: TCP
    - containerPort: 443
      name: mary
    config:
      foo: bar
      restricted: 'yes'
      switch: on
service:
  annotations:
    foo: bar
customResourceDefinitions:
  tfjobs.kubeflow.org:
    group: kubeflow.org
    version: v1alpha2
    scope: Namespaced
    names:
      plural: "tfjobs"
      singular: "tfjob"
      kind: TFJob
    validation:
      openAPIV3Schema:
        properties:
          tfReplicaSpecs:
            properties:
              Worker:
                properties:
                  replicas:
                    type: integer
                    minimum: 1
              PS:
                properties:
                  replicas:
                    type: integer
                    minimum: 1
              Chief:
                properties:
                  replicas:
                    type: integer
                    minimum: 1
                    maximum: 1
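
As a rough way to see how these attributes surface in the cluster, you can inspect the pods Juju creates for the application; the namespace typically matches the model name, and the model and pod names below are hypothetical (shown for microk8s):

microk8s.kubectl get pods -n mymodel
microk8s.kubectl describe pod gitlab-0 -n mymodel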

Upgrade support

This feature is not fully ready for this beta - only upgrading the controller works. The next beta will also support upgrading hosted k8s models, and the feature will be documented fully at that time.

Multi-cloud Controllers

Controllers now support models hosted on more than one cloud. You can use the add-cloud command to register a new cloud and then specify that cloud name when using add-model.

juju add-cloud -f myclouds.yaml mycloud
juju add-model another mycloud

Controller admins can see all clouds on a controller. There’s a new grant-cloud command to give the add-model permission to one or more users.

juju grant-cloud joe add-model fluffy

And use revoke-cloud to remove access.
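
For example, to undo the grant above (assuming the same user and cloud names, and that revoke-cloud mirrors the grant-cloud syntax):

juju revoke-cloud joe add-model fluffy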

When adding a model on a controller hosting more than one cloud, specifying the cloud name becomes mandatory.

Note: the user experience around adding credentials works in this beta as it always has but needs polish to be considered truly done.

A key change is that, like most other Juju commands, the cloud commands now operate (by default) on a running controller. So, just like add-model, these commands:

  • list-clouds
  • show-cloud
  • add-cloud
  • remove-cloud
  • update-cloud

will use the current controller, or accept a -c or --controller argument to use a different one.
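
For example, to list the clouds known to a specific controller rather than the current one (the controller name is a placeholder):

juju list-clouds -c mycontroller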

For the times when you are preparing to bootstrap to a new cloud while you have existing controllers running, and you want to first create the cloud definition locally, you can use the --local argument, e.g.

juju add-cloud -f myclouds.yaml mycloud --local

Currently, interactive add-cloud is always local.

A key use case, already tried out previously using the edge snap, is to:

  • bootstrap LXD
  • register (add) a previously provisioned MAAS cloud
  • deploy Openstack on that MAAS
  • register (add) that Openstack
  • deploy to that Openstack

All of the above can be done on a single LXD based controller (no MAAS or Openstack nodes need be sacrificed to act as a controller).
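
A rough sketch of that workflow, with every cloud, file and model name purely illustrative:

juju bootstrap localhost lxd-controller          # bootstrap on the local LXD cloud
juju add-cloud -f maas.yaml mymaas               # register the previously provisioned MAAS
juju add-credential mymaas
juju add-model openstack-base mymaas             # model in which to deploy Openstack
juju add-cloud -f openstack.yaml myopenstack     # register the resulting Openstack
juju add-credential myopenstack
juju add-model workloads myopenstack             # deploy workloads to that Openstack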

Note: there are no guard rails yet to stop impossible deployments from being attempted. For example, it makes little sense to add a Google cloud to an AWS controller, as the latency between the workload agents and the controller is potentially large, as is the cost of serving out agent binaries cached on the controller. There also needs to be a TCP connection from the workload agents to the controller.

Mongo 4 from snap (amd64 only this release)

Using the mongodb-snap feature flag, you can bootstrap a Juju controller and mongo will be installed as a snap, currently version 4.0.5. See snap info juju-db for details on the snap. This won’t work yet behind a firewall.
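
The flow is roughly as follows, assuming the usual JUJU_DEV_FEATURE_FLAGS mechanism for enabling feature flags (the controller name is a placeholder):

export JUJU_DEV_FEATURE_FLAGS=mongodb-snap
juju bootstrap localhost snap-mongo-test
snap info juju-db                                # details of the snap that provides Mongo 4.0.5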

Minor changes

vSphere Improvements

Improvements have been made to the vSphere provider.

  1. Constraints to specify root disk parameters - datastore and size, e.g.
    juju deploy myapp --constraints="root-disk-size=20G root-disk-source=mydatastore"

  2. Resource groups within a host or cluster can now be specified as an availability zone constraint, e.g.
    juju deploy myapp --constraints="zones=mycluster/mygroup"
    juju deploy myapp --constraints="zones=mycluster/myparent/mygroup"

  3. Better checks at deploy time that specified datastores exist and are valid.

All changes and fixes

Every change and fix to this release is tracked on a per-bug basis on Launchpad.

All major bugs have already been fixed in the 2.5.4 release, so there's nothing noteworthy to call out here.

All bugs corresponding to changes and fixes to this release are listed on the 2.6-beta1 milestone page.

Known issues

  • bootstrap to GKE is broken - there’s an issue with volume mounts that is not yet solved (you can still add-k8s --gke to an existing controller)
  • bootstrap is broken on clusters deployed using CDK on certain clouds, e.g. AWS (due to a problem with the Load Balancer initialising)

Install Juju

Install Juju using the snap:

sudo snap install juju --classic --channel beta

Those users already tracking the ‘beta’ snap channel (as specified in the above command) will be upgraded automatically. Other packages are available for a variety of platforms (see the install documentation).

Feedback Appreciated

Let us know how you’re using Juju or of any questions you may have. You can join us on Discourse, send us a message on Twitter (hashtag #jujucharms), or talk to us in the #juju IRC channel on freenode.
