K8s Spec v3 changes

Available in the 2.8 edge snap…

New extended volume support

It’s now possible to configure volumes backed by:

  • config map
  • secret
  • host path
  • empty dir

To do this, you’ll need to mark your YAML as version 3. Version 3 also:

  • renames the config block to envConfig (to better reflect its purpose)
  • renames the files block to volumeConfig
  • allows a file mode to be specified
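The renames above, shown side by side in an abbreviated, illustrative fragment (field values and inner content elided):

```yaml
# v2
version: 2
containers:
  - name: mysql
    config:              # environment variables
      MYSQL_USER: %(user)s
    files:               # simple text files only
      - name: configurations
        mountPath: /etc/mysql/conf.d

# v3: same data, renamed blocks
version: 3
containers:
  - name: mysql
    envConfig:           # was "config"
      MYSQL_USER: %(user)s
    volumeConfig:        # was "files"; also supports the new volume types
      - name: configurations
        mountPath: /etc/mysql/conf.d
```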

With secret and config map volumes, the secret or config map must be defined elsewhere in the YAML handed to Juju - you can’t reference existing resources that were not created by the charm. If you leave out the files block, the entire secret or config map is mounted. path is optional - if not specified, the file is created with the same name as key.

The path for each file is created relative to the overall mount point.
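Two shorthand behaviours follow from this. A sketch (volume names and mount points here are illustrative):

```yaml
    volumeConfig:
      # No "files" block: every key in the secret is mounted,
      # one file per key, under /opt/robot-creds
      - name: robot-creds
        mountPath: /opt/robot-creds
        secret:
          name: build-robot-secret
      # "path" omitted: the file is created with the key's name,
      # i.e. /etc/app-config/log_level
      - name: app-config
        mountPath: /etc/app-config
        configMap:
          name: log-config
          files:
            - key: log_level
              mode: 511
```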

Here’s an example of what’s possible when creating the new volume types.

version: 3
...
    # renamed from config
    envConfig:
      MYSQL_ROOT_PASSWORD: %(root_password)s
      MYSQL_USER: %(user)s
      MYSQL_PASSWORD: %(password)s
      MYSQL_DATABASE: %(database)s
      MY_NODE_NAME:
        field:
          path: spec.nodeName
          api-version: v1
      build-robot-secret:
        secret:
          name: build-robot-secret
          key: config.yaml
    # Here's where the new volumes types are set up
    # This block was called "files" in v2
    volumeConfig:
      # This is what was supported previously (simple text files)
      - name: configurations
        mountPath: /etc/mysql/conf.d
        files:
          - path: custom_mysql.cnf
            content: |
              [mysqld]
              skip-host-cache
              skip-name-resolve
              query_cache_limit = 1M
              query_cache_size = %(query-cache-size)s
              query_cache_type = %(query-cache-type)s
      # host path
      - name: myhostpath1
        mountPath: /var/log1
        hostPath:
          path: /var/log
          type: Directory
      - name: myhostpath2
        mountPath: /var/log2
        hostPath:
          path: /var/log
          # see https://kubernetes.io/docs/concepts/storage/volumes/#hostpath for other types
          type: Directory
      # empty dir
      - name: cache-volume
        mountPath: /empty-dir
        emptyDir:
          medium: Memory # defaults to disk
      # secret
      - name: another-build-robot-secret
        mountPath: /opt/another-build-robot-secret
        secret:
          name: another-build-robot-secret
          defaultMode: 511
          files:
            - key: username
              path: my-group/username
              mode: 511
            - key: password
              path: my-group/password
              mode: 511
      # config map (a separate volume entry; mountPath here is illustrative)
      - name: log-config
        mountPath: /etc/log-config
        configMap:
          name: log-config
          defaultMode: 511
          files:
            - key: log_level
              path: log_level
              mode: 511

The lifecycle of CRDs

Juju now manages the lifecycle of CRDs: charmers can decide when a CRD gets deleted by setting the juju-resource-lifecycle label.

{
    "juju-resource-lifecycle": "model | persistent"
}
  1. If no juju-resource-lifecycle label is set, the CRD is deleted together with the application.

  2. If juju-resource-lifecycle is set to model, the CRD is not deleted when the application is removed; it is deleted only when the model is destroyed.

  3. If juju-resource-lifecycle is set to persistent, the CRD is never deleted by Juju, even after the model is destroyed.

Deploying a charm with the spec below:

version: 3
kubernetesResources:
  customResourceDefinitions:
    - name: tfjobs.kubeflow.org
      labels:
        foo: bar  # deleted with the app;
      spec:
        ...
    - name: tfjob1s.kubeflow.org1
      labels:
        foo: bar
        juju-resource-lifecycle: model  # deleted with the model;
      spec:
        ...
    - name: tfjob2s.kubeflow.org2
      labels:
        foo: bar
        juju-resource-lifecycle: persistent  # never gets deleted;
      spec:
        ...

$ juju deploy /tmp/charm-builds/mariadb-k8s/ --debug  --resource mysql_image=mariadb -n1

$ mkubectl get crds -o json | jq '.items[] | .metadata | [.name,.labels]'
[
  "tfjob1s.kubeflow.org1",
  {
    "foo": "bar",
    "juju-app": "mariadb-k8s",
    "juju-resource-lifecycle": "model",
    "juju-model": "t1"
  }
]
[
  "tfjob2s.kubeflow.org2",
  {
    "foo": "bar",
    "juju-app": "mariadb-k8s",
    "juju-resource-lifecycle": "persistent",
    "juju-model": "t1"
  }
]
[
  "tfjobs.kubeflow.org",
  {
    "foo": "bar",
    "juju-app": "mariadb-k8s",
    "juju-model": "t1"
  }
]

$ juju remove-application mariadb-k8s -m k1:t1 --destroy-storage --force
removing application mariadb-k8s
- will remove storage database/0

$ mkubectl get crds -o json | jq '.items[] | .metadata | [.name,.labels]'
[
  "tfjob1s.kubeflow.org1",
  {
    "foo": "bar",
    "juju-app": "mariadb-k8s",
    "juju-resource-lifecycle": "model",
    "juju-model": "t1"
  }
]
[
  "tfjob2s.kubeflow.org2",
  {
    "foo": "bar",
    "juju-app": "mariadb-k8s",
    "juju-resource-lifecycle": "persistent",
    "juju-model": "t1"
  }
]

$ juju destroy-model t1 --destroy-storage -y --debug --force

$ mkubectl get crds -o json | jq '.items[] | .metadata | [.name,.labels]'
[
  "tfjob2s.kubeflow.org2",
  {
    "foo": "bar",
    "juju-app": "mariadb-k8s",
    "juju-resource-lifecycle": "persistent",
    "juju-model": "t1"
  }
]

The lifecycle of CRs

Custom resources (CRs) created by the charm follow the same label-based rules:
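For reference, a sketch of how such labels might be attached to CRs in the spec. This assumes the v3 customResources block keys CR definitions by CRD name; the apiVersion and kind values are inferred from the CRD name and are illustrative, not taken from the original post:

```yaml
version: 3
kubernetesResources:
  customResources:
    tfjob1s.kubeflow.org1:
      - apiVersion: kubeflow.org1/v1   # inferred; illustrative
        kind: TFJob1                   # inferred; illustrative
        metadata:
          name: dist-mnist-for-e2e-test12
          labels:
            juju-resource-lifecycle: model       # deleted with the model
        spec:
          ...
      - apiVersion: kubeflow.org1/v1
        kind: TFJob1
        metadata:
          name: dist-mnist-for-e2e-test13
          labels:
            juju-resource-lifecycle: persistent  # never deleted by Juju
        spec:
          ...
```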

$ juju deploy /tmp/charm-builds/mariadb-k8s/ --debug  --resource mysql_image=mariadb

$ mkubectl get crds tfjob1s.kubeflow.org1 -o json | jq ' .metadata | {name: .name,"juju-resource-lifecycle": (.labels | ."juju-resource-lifecycle")}'
{
  "name": "tfjob1s.kubeflow.org1",
  "juju-resource-lifecycle": "persistent"
}

$ mkubectl get tfjob1s.kubeflow.org1 -o json | jq '.items[] | .metadata | {name: .name,"juju-resource-lifecycle":(.labels | ."juju-resource-lifecycle")}'
{
  "name": "dist-mnist-for-e2e-test11",
  "juju-resource-lifecycle": null
}
{
  "name": "dist-mnist-for-e2e-test12",
  "juju-resource-lifecycle": "model"
}
{
  "name": "dist-mnist-for-e2e-test13",
  "juju-resource-lifecycle": "persistent"
}

$ juju remove-application mariadb-k8s -m k1:t1 --destroy-storage --force
removing application mariadb-k8s
- will remove storage database/0

$ mkubectl get tfjob1s.kubeflow.org1 -o json | jq '.items[] | .metadata | {name: .name,"juju-resource-lifecycle":(.labels | ."juju-resource-lifecycle")}'
{
  "name": "dist-mnist-for-e2e-test12",
  "juju-resource-lifecycle": "model"
}
{
  "name": "dist-mnist-for-e2e-test13",
  "juju-resource-lifecycle": "persistent"
}

$ juju destroy-model t1 --destroy-storage -y --debug --force

$ mkubectl get tfjob1s.kubeflow.org1 -o json | jq '.items[] | .metadata | {name: .name,"juju-resource-lifecycle":(.labels | ."juju-resource-lifecycle")}'
{
  "name": "dist-mnist-for-e2e-test13",
  "juju-resource-lifecycle": "persistent"
}


I’m not sure I understand how to attach a Kubernetes secret to a container config from this example. It looks like the secret is added as an environment variable, rather than mounted from a secret created by the spec.

My use case is the following. My charm spec creates a secret like this:

'kubernetesResources': {
    'secrets': [
        {
            'name': 'mssql',
            'type': 'Opaque',
            'data': {
                'SA_PASSWORD': b64encode(
                    'MyC0m9l&xP@ssw0rd'.encode('utf-8')).decode('utf-8'),
            }
        }
    ]
}

So my goal is to attach this secret to my container. How would I do it without making it part of the envConfig? i.e.:

'containers': [
    {
        'name': self.framework.model.app.name,
        'image': config["image"],
        'ports': ports,
        'envConfig': container_config,
    }
],

Hi @camille.rodriguez1
You can mount the secret into the pod’s filesystem using volumeConfig like this:

    volumeConfig:
      - name: another-build-robot-secret
        mountPath: /opt/another-build-robot-secret
        secret:
          name: another-build-robot-secret
          defaultMode: 511
          files:
            - key: username
              path: my-group/username
              mode: 511
            - key: password
              path: my-group/password
              mode: 511

or expose it as an environment variable using envConfig like this:

    envConfig:
      build-robot-secret:
        secret:
          name: build-robot-secret
          key: config.yaml