Elastic Operator K8S Feedback + Custom K8S Objects

Over the last few days I have carved out the initial bits for an elastic-operator k8s charm.

I’m currently at a crossroads and don’t really know how to proceed, so I’m going to put it on blast and see what comes back. Here goes.

The Deets:

  1. The elastic-operator charm successfully deploys the elastic-operator to k8s!
  2. Following the operator deploy, I can create an Elasticsearch object via kubectl, handled by the Juju-deployed elastic-operator.
  3. I am currently stuck on how to model the Elasticsearch, Kibana, and ApmServer custom resource objects via Juju.

Install and configure microk8s

sudo snap install microk8s --classic
microk8s.enable storage
microk8s.enable dns

Bootstrap Juju, Add Model, Deploy K8S Operator

juju bootstrap microk8s
juju add-model bdx
juju deploy cs:~omnivector/elastic-operator-k8s

Check Juju Status

See Juju’s view of the successful operator deployment.

$ juju status
Model  Controller          Cloud/Region        Version    SLA          Timestamp
bdx    microk8s-localhost  microk8s/localhost  2.7-beta1  unsupported  16:51:00Z

App                   Version  Status  Scale  Charm                 Store       Rev  OS          Address         Notes
elastic-operator-k8s           active      1  elastic-operator-k8s  jujucharms    4  kubernetes  10.152.183.186  

Unit                     Workload  Agent  Address     Ports     Message
elastic-operator-k8s/0*  active    idle   10.1.35.13  9876/TCP  

Check kubectl

Verify the running pods via kubectl.

$ microk8s.kubectl get pods --namespace bdx
NAME                                             READY   STATUS    RESTARTS   AGE
elastic-operator-k8s-0                           1/1     Running   0          15m
elastic-operator-k8s-operator-0                  1/1     Running   0          15m

Follow elastic-operator logs

Follow the elastic-operator logs if you are interested.

# Follow the operator logs to see the successful deployment and reconciliations
microk8s.kubectl logs -f elastic-operator-k8s-0   --namespace bdx

Deploy Elasticsearch

At this point the elastic-operator should be running and the Elastic CRDs installed in the cluster (CRDs are cluster-scoped rather than namespaced). Run the code below to provision an Elasticsearch object.
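If you want to double-check that the CRDs made it in before creating any objects, kubectl can list them directly; the grep pattern below assumes the Elastic CRD naming (e.g. elasticsearches.elasticsearch.k8s.elastic.co):

# CRDs are registered cluster-wide by the operator on startup
microk8s.kubectl get crds | grep elastic.co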

The bit below is what I am having trouble understanding how we will model with Juju.

cat <<EOF | microk8s.kubectl apply -n bdx -f -
apiVersion: elasticsearch.k8s.elastic.co/v1beta1
kind: Elasticsearch
metadata:
  name: elasticsearch-sample
spec:
  version: 7.4.0
  nodeSets:
  - name: juju-testing-default
    config:
      # most Elasticsearch configuration parameters can be set here, e.g.:
      node.attr.attr_name: attr_value
      node.master: true
      node.data: true
      node.ingest: true
      node.ml: true
      node.store.allow_mmap: false
    # the pod template is a sibling of config, not nested inside it
    podTemplate:
      spec:
        containers:
        - name: elasticsearch
          resources:
            limits:
              memory: 4Gi
              cpu: 1
          env:
          - name: ES_JAVA_OPTS
            value: "-Xms2g -Xmx2g"
    count: 1
EOF

After running the command above, the operator will schedule, assemble, and reconcile the new Elasticsearch object into existence in the namespace of the model, in this case, “bdx”.
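If you want to watch the reconciliation happen, the Elasticsearch custom resource can be queried directly; assuming the v1beta1 CRD behaves as the ECK docs describe, the operator populates health/phase columns in its status:

# Watch the custom resource until it reports a ready phase
microk8s.kubectl get elasticsearch -n bdx --watch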

Inspect the logs of the deployed Elasticsearch object

microk8s.kubectl logs -f elasticsearch-sample-es-juju-testing-default-0 -n bdx

List pods via kubectl

Check that the pods we expect to be up and running are up and running.

$ microk8s.kubectl get pods --namespace bdx
NAME                                             READY   STATUS    RESTARTS   AGE
elastic-operator-k8s-0                           1/1     Running   0          15m
elastic-operator-k8s-operator-0                  1/1     Running   0          15m
elasticsearch-sample-es-juju-testing-default-0   1/1     Running   0          55s
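
As a further sanity check, the ECK docs describe the operator generating a secret holding the elastic superuser password and an HTTP service, both named after the cluster; assuming that convention holds here, something like this should get a response from the cluster:

# Pull the generated elastic user password out of the operator-created secret
PASSWORD=$(microk8s.kubectl get secret elasticsearch-sample-es-elastic-user -n bdx \
  -o go-template='{{.data.elastic | base64decode}}')

# Forward the cluster's HTTP service locally and query it (self-signed TLS, hence -k)
microk8s.kubectl port-forward service/elasticsearch-sample-es-http 9200 -n bdx &
curl -u "elastic:$PASSWORD" -k "https://localhost:9200"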

Check Juju Status Again

$ juju status
Model  Controller          Cloud/Region        Version    SLA          Timestamp
bdx    microk8s-localhost  microk8s/localhost  2.7-beta1  unsupported  17:00:48Z

App                   Version  Status  Scale  Charm                 Store       Rev  OS          Address         Notes
elastic-operator-k8s           active      1  elastic-operator-k8s  jujucharms    4  kubernetes  10.152.183.186  

Unit                     Workload  Agent  Address     Ports     Message
elastic-operator-k8s/0*  active    idle   10.1.35.13  9876/TCP  

From the microk8s.kubectl get pods output above, we see that the elasticsearch-sample-es-juju-testing-default-0 pod only exists in the context of the k8s namespace, not the Juju model. Because the deployment was created via kubectl, Juju is not tracking these resources.

How will we approach modeling custom k8s objects like this via Juju?

An initial hand-wavy thought…

It seems that the Elasticsearch CRD is really a charm in disguise? You deploy it and then it spins up pods as needed. And looking at elastic/crds-flavor-default.yaml, there’s very charm-like config defined in there. There’s even an attribute for the docker image to use. So can we consider writing a charm instead? That way, things will be modelled and visible to Juju.

I guess what I’m saying is that k8s has a certain way of defining operators, responsible for creating resources/pods in the cluster. We need to look at how to re-model that functionality the Juju way. At a high level, it still comes down to running pods with OCI images representing the workloads. It’s just how those pods are managed - we need to get it happening via Juju charms and relations etc. rather than a k8s “black box” opaque to Juju.
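To make that concrete, here is a purely hypothetical config.yaml sketch for such a charm, with option names invented to mirror the nodeSet attributes from the custom resource above (none of these options exist in any published charm):

options:
  # hypothetical options mirroring the Elasticsearch CRD's nodeSet config
  version:
    type: string
    default: "7.4.0"
    description: Elasticsearch version to run (maps to spec.version).
  node-count:
    type: int
    default: 1
    description: Nodes in the nodeSet (maps to spec.nodeSets[].count).
  allow-mmap:
    type: boolean
    default: false
    description: Maps to node.store.allow_mmap in the nodeSet config.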


@wallyworld is it possible that pod-spec-set could be modified to allow the handling of other types of objects?
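For the sake of discussion, here is a purely hypothetical sketch of what an extended pod spec might look like if it could carry arbitrary custom resources alongside the containers - to be clear, no customResources key exists in pod-spec-set today, and the image reference is a placeholder:

containers:
- name: elastic-operator
  image: <elastic-operator-oci-image>
  ports:
  - containerPort: 9876
# hypothetical: extra k8s objects Juju would create and track on our behalf
customResources:
- apiVersion: elasticsearch.k8s.elastic.co/v1beta1
  kind: Elasticsearch
  metadata:
    name: elasticsearch-sample
  spec:
    version: 7.4.0
    nodeSets:
    - name: juju-testing-default
      count: 1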

We want to maintain a model-driven approach. k8s services map nicely to Juju applications, and pods map to units. Juju creates a deployment controller or stateful set to manage the pods, but that’s an internal detail of how the Juju model is layered onto k8s. Scaling an application means creating additional pods/units, and because the Juju model maps to the k8s artefacts/resources, it all fits. The scale request can go via Juju (scale-application), but it can also be done by a load balancer outside of Juju; because of the well-defined mapping between Juju and k8s, Juju can notice those new pods and create matching units in the Juju model.

An issue arises when we deploy a k8s operator that spins up resources independent of Juju that are not known to the Juju model. Juju can’t track such resources and represent them in the Juju model. Juju could allow arbitrary resources to be created, but is that the best way forward? Can we design a better way?

In this case, the behaviour of the resource appears very charm-like (it spins up pods for the workload), and the implementation of how config is defined is also very charm-like (there are config attributes defined by a schema). There has been thought around the concept of a charm deploying something other than a stateful set or deployment controller to manage workload pods. So perhaps we can create a charm whose config schema matches the attributes needed by the custom resource. We’d add some additional metadata to allow the charm to declare how the k8s operator should be provisioned - I guess stateful set and deployment controller are just two examples we have currently (by “operator”, I mean something which spins up workload pods). Juju could then reasonably track new pods and create units to match. I’m not sure how scaling would work from the Juju side though, so that’s something that would need thought; there is a rough sketch of the idea below.
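As a very rough sketch of that extra metadata (entirely hypothetical - no such keys exist today), a charm might one day declare how its workload should be provisioned along these lines:

# hypothetical metadata.yaml fragment for a k8s charm
name: elasticsearch-k8s
deployment:
  # today Juju effectively chooses between a stateful set and a
  # deployment controller; this imagines declaring the mechanism,
  # or pointing at a custom resource kind, explicitly
  type: custom-resource
  custom-resource:
    apiVersion: elasticsearch.k8s.elastic.co/v1beta1
    kind: Elasticsearch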

The above is not meant to be something definitive that we’ve decided on. It’s more a thought bubble to get some discussion going.
