K8S on Metal - Storage Configuration for use With Juju

I’ve got a minimal charmed K8S spun up on bare metal, have successfully added the k8s to an existing controller, and am able to add a Juju model. I feel there may be some more setup or configuration of storage needed before I can start using Juju with this k8s. Upon trying to deploy a charm I hit an issue where it seems persistent volumes are not created. I’ve tried a few things, including creating different storage classes, but I feel I may need a little guidance here on what exactly needs to be done. I’m wondering if there is a storage class I need to create, or modifications that need to be made to the default storage class, to enable use of this cluster with Juju? Do I need a dynamic provisioning storage class? Can I use no-provisioner or hostpath-provisioner? What are the details for local storage provisioners here?
Thanks in advance!

For bare metal you’re looking for something to provide storage to all your workers. Usually this is something like Ceph or NFS, unless you’re running it all in LXD on a single machine, in which case a hostpath can work. The thing is that you need to be able to reach that storage from any worker on which a pod can be scheduled.
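
To answer the no-provisioner question directly: kubernetes.io/no-provisioner gives you a StorageClass with no dynamic provisioning at all, so you have to pre-create a local PersistentVolume on a specific node for every claim. A rough sketch, with placeholder node name and path:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
# bind only once a pod using the claim is scheduled, so the PV's node is known
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-0
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    # pre-existing directory or mount on the node named below
    path: /mnt/disks/vol0
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-0

That works for kicking the tires, but nothing gets provisioned on demand, so once Juju starts creating claims for you it becomes tedious fast; that’s why Ceph or NFS is the usual answer on bare metal.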

If you deploy the Ceph charms or the NFS charm you can relate them to kubernetes-master or kubernetes-worker, respectively, and get a storage class created for you. You can also just spin up something yourself and create a storage class.

@knobby thanks for this.

Even using the rbd-provisioner, we are still faced with many questions we don’t know how to answer to get things up and running. We are currently stuck, have tried many things at this point, and are going back to the drawing board to further analyze what is going on.

We are wondering if someone can provide a basic workflow or two for a storage configuration that will allow us to get things up and running for a bare metal k8s deploy, with and/or without Ceph?

We figured a few things out as far as getting the rbd storage class to work with Juju on bare metal k8s. Here is what it looked like for us:

Bare Metal K8S + Ceph Storage

Created primarily using this document.

  • Create and note the client.admin and client.kube keys to connect Kubernetes to Ceph:

sudo ceph osd pool create kube 1024
sudo ceph auth get-or-create client.kube mon \
    'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=kube' -o ceph.client.kube.keyring
sudo ceph auth get-key client.admin | base64 
sudo ceph auth get-key client.kube | base64
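
Depending on your Ceph release (Luminous and later warn about pools that have no application set), you may also need to tag the pool for rbd use before claims will bind:

sudo ceph osd pool application enable kube rbd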
  • Back on your local machine, create each of these files and fill in the appropriate key from the base64 output above:

ceph-secret.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: kube-system
data:
  key: <base64-encoded client.admin key>
type: kubernetes.io/rbd

ceph-user-secret.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: ceph-user-secret
  namespace: default
data:
  key: <base64-encoded client.kube key>
type: kubernetes.io/rbd
  • Create the secrets in Kubernetes:
kubectl create -f ceph-user-secret.yaml
kubectl create -f ceph-secret.yaml
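
To double-check that both secrets landed in the right namespaces:

kubectl get secret ceph-secret --namespace=kube-system
kubectl get secret ceph-user-secret --namespace=default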
  • Create this template file:

rbd-dynamic.yaml

apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: rbd-dynamic
  annotations:
     storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/rbd
parameters:
  monitors: <monitor-ip>:6789,<monitor-ip>:6789,<monitor-ip>:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: kube-system
  pool: kube
  userId: kube
  userSecretName: ceph-user-secret
  • Create the storageclass:
kubectl create -f rbd-dynamic.yaml
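
It’s worth confirming the class exists and was picked up as the cluster default before moving on; rbd-dynamic should be listed with (default) next to its name:

kubectl get storageclass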
  • Create the Ceph claim file:

ceph-claim.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-claim
spec:
  accessModes:     
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  • Create the Persistent Volume Claim:
kubectl create -f ceph-claim.yaml
  • Verify that everything has worked:
kubectl get pvc --all-namespaces

Should output similar to:

NAMESPACE   NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
default     ceph-claim   Bound    pvc-cbf7887d-ec69-475d-ae2f-b2697dbfab72   2Gi        RWO            rbd-dynamic    116s
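
If you want to go one step beyond the Bound status and confirm the volume actually mounts, a throwaway pod that uses the claim works; the busybox image here is just an example:

apiVersion: v1
kind: Pod
metadata:
  name: ceph-claim-test
spec:
  containers:
    - name: test
      image: busybox
      # write a file to the rbd-backed volume, then idle
      command: ["sh", "-c", "echo ok > /data/test && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: ceph-claim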
  • Add Kubernetes to your PDL-juju controller:
cat ~/.kube/config | juju add-k8s k8s-cloud --controller=juju_controller_dc_00

Output should look like:

k8s substrate "maas" added as cloud "k8s-cloud" with storage provisioned
by the existing "rbd-dynamic" storage class.

Notice that juju recognizes the default storage class we created above when adding the cloud.

  • Test adding a model:
juju add-model test-model k8s-cloud
Added 'test-model' model on k8s-cloud with credential 'admin' for user 'admin' 

See that the correct operator and workload storage are configured as model defaults:

$ juju model-config | grep storage
operator-storage              controller  rbd-dynamic
workload-storage              controller  rbd-dynamic
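
If Juju ever fails to detect a default storage class, the same operator-storage and workload-storage keys can, as far as we can tell, be supplied explicitly when the model is created:

juju add-model test-model k8s-cloud \
    --config operator-storage=rbd-dynamic \
    --config workload-storage=rbd-dynamic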

The problem with this is that we need to add the ceph-user-secret to the namespace Juju creates for every model:

apiVersion: v1
kind: Secret
metadata:
  name: ceph-user-secret
  namespace: test-model
data:
  key: <ceph-user-secret-base64> 
type: kubernetes.io/rbd

The secret needs to be created in the namespace of each model that wants to use the rbd-dynamic storage class.
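
Rather than keeping a YAML file around for every model, a one-liner along these lines should also work (kubectl base64-encodes the literal for you, so it takes the raw key; run the ceph part wherever the ceph CLI is available, and swap test-model for the namespace of the model in question):

kubectl create secret generic ceph-user-secret \
    --type=kubernetes.io/rbd \
    --from-literal=key="$(sudo ceph auth get-key client.kube)" \
    --namespace=test-model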

By adding the ceph-user-secret to each model namespace that needs the rbd-dynamic storage class, we were able to get things deployed via Juju on our bare metal k8s.

Yours is a far more robust solution allowing for different users on the cluster, but if you’re just looking to kick the tires I would suggest using juju to deploy ceph:

juju deploy -n 3 ceph-mon
juju deploy -n 3 cs:ceph-osd --constraints <constraints that make sense for your maas>
juju relate ceph-mon ceph-osd
juju relate ceph-mon:admin kubernetes-master
juju relate ceph-mon:client kubernetes-master

As I said, this doesn’t give you the same control over users, but it will easily get you a working storage class backed by Ceph. We have plans to adjust and improve this integration, but haven’t had the chance yet.
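
Once the relations settle you can check what the charms created; the exact storage class names depend on the charm revisions, but at least one Ceph-backed class should appear:

juju status ceph-mon ceph-osd
kubectl get storageclass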

We initially tried creating the relation to get the rbd-provisioner. It was a cross-model relation, though, from Ceph to k8s, so I’m not sure whether it was user error or something else, but we couldn’t get the rbd-provisioner working via the relation.

Knowing now that creating the relation is a legitimate way to get the rbd-provisioner working, I’ll possibly play around with it some more.

Thanks!