Implementing k8s resources in a charm that are not supported by pod_spec

Hi team,

I am wondering what the most “elegant” way is to implement, in a charm, a Kubernetes resource that is not supported by Juju’s pod_spec. For context, I am talking about charming with the Operator Framework. One idea would be to add a function to the charm that calls subprocess to execute the command manually with kubectl. Any other ideas?
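Roughly, I was thinking of something like this (just a sketch; the helper name and the manifest path are placeholders, and it assumes kubectl is available to the charm with credentials for the cluster):

import subprocess
from pathlib import Path

def apply_manifest(manifest: Path) -> None:
    # Hypothetical helper: shell out to kubectl to apply a raw manifest.
    subprocess.run(["kubectl", "apply", "-f", str(manifest)], check=True)

apply_manifest(Path("files/psp.yaml"))  # hypothetical path shipped with the charm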

As for the types of resources that are not supported by pod_spec, one example is Pod Security Policies. There is currently a bug open for this (Bug #1886694 “Podspec for k8s charm does not support Pod Securit...” : Bugs : juju), and I am not expecting to see it included in the pod spec until the next release. However, if I have to charm an application that requires it, I am stuck. Here is an example of a manifest for a Pod Security Policy:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  labels:
    app: metallb
  name: controller
  namespace: metallb-system
spec:
  allowPrivilegeEscalation: false
  allowedCapabilities: []
  allowedHostPaths: []
  defaultAddCapabilities: []
  defaultAllowPrivilegeEscalation: false
  fsGroup:
    ranges:
    - max: 65535
      min: 1
    rule: MustRunAs
  hostIPC: false
  hostNetwork: false
  hostPID: false
  privileged: false
  readOnlyRootFilesystem: true
  requiredDropCapabilities:
  - ALL
  runAsUser:
    ranges:
    - max: 65535
      min: 1
    rule: MustRunAs
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    ranges:
    - max: 65535
      min: 1
    rule: MustRunAs
  volumes:
  - configMap
  - secret
  - emptyDir

Thank you for the help!
Camille

Hi @camille.rodriguez1

Interacting with the K8s API directly is not recommended, because resources created this way might not be cleaned up when the application is removed, for example.

But if you really want to do it, you can import the K8s Python client directly in the charm, like this:

from kubernetes import client, config

# Use the service account credentials mounted into the pod
config.load_incluster_config()
v1 = client.CoreV1Api()
v1.list_namespace()

For what @camille.rodriguez1 needs, that should be under client.PolicyV1beta1Api (source).

So it should be:

from kubernetes import client, config
config.load_incluster_config()
policy_client = client.PolicyV1beta1Api()
policy_client.create_pod_security_policy(...)
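For example, here is a rough sketch for the manifest above, assuming the YAML is shipped with the charm (the path below is hypothetical) and that passing the parsed dict as the body works for your client version:

import yaml
from kubernetes import client, config

config.load_incluster_config()
policy_client = client.PolicyV1beta1Api()

# Load the PodSecurityPolicy manifest shipped with the charm (hypothetical path)
with open("files/psp.yaml") as f:
    psp = yaml.safe_load(f)

# PodSecurityPolicies are cluster-scoped, so no namespace argument is needed
policy_client.create_pod_security_policy(body=psp)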

Ok, interesting. I asked in another channel and someone suggested using custom resources instead, as is done in this example: https://github.com/juju-solutions/bundle-kubeflow/blob/master/charms/kubeflow-dashboard/reactive/kubeflow_dashboard.py#L113-L122 . What do you think of this compared to using the Kubernetes API?

I don’t think you can use load_incluster_config() from the charm code though; the charm is not running inside a pod. Or is it? It failed when I tried to use that function.

Hi @camille.rodriguez1, I think you should be able to just use the above code to talk to the K8s API directly and do whatever you want, unless the service account mounted to the pod does not have sufficient permissions granted.
The charm usually runs in the k8s operator of the application, and it might run in the workload pod if you are talking to k8s in an action script.

I am wondering what error you got from k8s?

For sure, you can use custom resources to create k8s resources, as long as you already have the custom resource definitions (CRDs) deployed and their operators running.
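The generic entry point for that in the Python client is CustomObjectsApi. A minimal sketch, where the group/version/plural and the body are just placeholders for whatever CRD you actually have installed:

from kubernetes import client, config

config.load_incluster_config()
custom_api = client.CustomObjectsApi()

# Create an instance of an already-installed CRD (all values below are placeholders)
custom_api.create_namespaced_custom_object(
    group="example.org",
    version="v1",
    namespace="metallb-system",
    plural="examples",
    body={
        "apiVersion": "example.org/v1",
        "kind": "Example",
        "metadata": {"name": "controller"},
        "spec": {},
    },
)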

I added this bit of code to my charm:

    from kubernetes import client, config
    config.load_incluster_config()
    policy_client = client.PolicyV1beta1Api()
    v1 = client.CoreV1Api()
    v1.list_namespace()
    print(v1.list_namespace())

And I encounter this error:

application-metallb-controller: 12:41:16 ERROR juju.worker.uniter.operation hook "install" (via hook dispatching script: dispatch) failed: exit status 1
application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install Traceback (most recent call last):
application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install   File "./src/charm.py", line 20, in <module>
application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install     class MetallbCharm(CharmBase):
application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install   File "./src/charm.py", line 130, in MetallbCharm
application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install     config.load_incluster_config()
application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install   File "/var/lib/juju/agents/unit-metallb-controller-0/charm/venv/kubernetes/config/incluster_config.py", line 93, in load_incluster_config
application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install     InClusterConfigLoader(token_filename=SERVICE_TOKEN_FILENAME,
application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install   File "/var/lib/juju/agents/unit-metallb-controller-0/charm/venv/kubernetes/config/incluster_config.py", line 45, in load_and_set
application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install     self._load_config()
application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install   File "/var/lib/juju/agents/unit-metallb-controller-0/charm/venv/kubernetes/config/incluster_config.py", line 51, in _load_config
application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install     raise ConfigException("Service host/port is not set.")
application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install kubernetes.config.config_exception.ConfigException: Service host/port is not set.

I found that the issue here is a bug in the way Juju passes environment variables to the charm operator. More info/workaround here: Bug #1892255 “Environment variables are not being properly passe...” : Bugs : juju
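Until that is fixed, one possible interim workaround (just a sketch, not necessarily the exact one from the bug report) is to set the two environment variables that load_incluster_config() complains about before loading the config:

import os
from kubernetes import client, config

# The in-cluster loader needs these two variables; the service account token
# and CA certificate are still read from their mounted paths in the pod.
os.environ.setdefault("KUBERNETES_SERVICE_HOST", "kubernetes.default.svc")
os.environ.setdefault("KUBERNETES_SERVICE_PORT", "443")

config.load_incluster_config()
v1 = client.CoreV1Api()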


Thanks for filing the bug, @camille.rodriguez1
We will investigate what the root cause is.