Jenkins-K8S Charm Summary/Feedback

Jenkins-K8S Charm

jenkins-k8s github src
jenkins-k8s charmstore

For the most part, I was able to follow along the lines of what was done with the previous k8s charms.
I used filesystem storage for the Jenkins home directory, and defined the TCP ports in the pod spec.

The docker image for jenkins is built from the upstream jenkins image:

FROM jenkins/jenkins:2.60.3

# Distributed Builds plugins
RUN /usr/local/bin/install-plugins.sh ssh-slaves

# install Notifications and Publishing plugins
RUN /usr/local/bin/install-plugins.sh email-ext
RUN /usr/local/bin/install-plugins.sh mailer
RUN /usr/local/bin/install-plugins.sh slack

# Artifacts
RUN /usr/local/bin/install-plugins.sh htmlpublisher

# UI
RUN /usr/local/bin/install-plugins.sh greenballs
RUN /usr/local/bin/install-plugins.sh simple-theme-plugin

# Scaling
RUN /usr/local/bin/install-plugins.sh kubernetes

# install Maven
USER root
RUN apt-get update && apt-get install -y maven
USER jenkins

Build the image

docker build . -t jamesbeedy/juju-jenkins-test:1.0

List the images to see that it's there

$ docker image list
REPOSITORY                             TAG                 IMAGE ID            CREATED             SIZE
jamesbeedy/juju-jenkins-test           1.0                 0f711167f01c        19 seconds ago      893MB
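
After the build, you can spot-check that the plugin downloads actually landed in the image (a sketch; `/usr/share/jenkins/ref/plugins` is the directory the upstream Jenkins image's plugin helper writes to):

```shell
# Sketch: list the plugins baked into the image. The ref/plugins path is
# the one used by the upstream jenkins image's plugin install helper.
IMG=jamesbeedy/juju-jenkins-test:1.0
if command -v docker >/dev/null; then
  docker run --rm --entrypoint ls "$IMG" /usr/share/jenkins/ref/plugins
fi
```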

Build the charm

make build

Push to charmstore

charm push ./build/builds/jenkins-k8s jenkins-k8s --resource jenkins_image=jamesbeedy/juju-jenkins-test:1.0

Everything seems great at this point, although I'm unaware of what other components I may be missing here. I looked at the upstream pod spec for the Jenkins pod and it seemed pretty simple, basically only defining a port, so that's what I did here.
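
For reference, a minimal sketch of the kind of pod spec that implies (the container name is a placeholder; only the port definition is the point):

```yaml
# Hedged sketch of a minimal pod spec; container name is hypothetical,
# the image and port are the ones used in this post.
containers:
  - name: jenkins
    image: jamesbeedy/juju-jenkins-test:1.0
    ports:
      - containerPort: 8080
        protocol: TCP
```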

Jenkins-K8S Deployment Process

# Bootstrap AWS
juju bootstrap aws

# Deploy K8S + AWS integrator
juju deploy cs:bundle/canonical-kubernetes-363
juju deploy cs:~containers/aws-integrator-8

# Trust aws-integrator and relate to k8s worker/master
juju trust aws-integrator
juju relate aws-integrator kubernetes-master
juju relate aws-integrator kubernetes-worker

Deployment settles and our k8s is live:

Model          Controller  Cloud/Region   Version  SLA          Timestamp
default        pdl-aws     aws/us-west-2  2.5.0    unsupported  13:52:38-08:00

App                    Version  Status  Scale  Charm                  Store       Rev  OS      Notes
aws-integrator         1.15.71  active      1  aws-integrator         jujucharms    8  ubuntu
easyrsa                3.0.1    active      1  easyrsa                jujucharms  195  ubuntu
etcd                   3.2.10   active      3  etcd                   jujucharms  338  ubuntu
flannel                0.10.0   active      5  flannel                jujucharms  351  ubuntu
kubeapi-load-balancer  1.14.0   active      1  kubeapi-load-balancer  jujucharms  525  ubuntu  exposed
kubernetes-master      1.13.2   active      2  kubernetes-master      jujucharms  542  ubuntu
kubernetes-worker      1.13.2   active      3  kubernetes-worker      jujucharms  398  ubuntu  exposed

Unit                      Workload  Agent  Machine  Public address  Ports           Message
aws-integrator/1*         active    idle   11                                       ready
easyrsa/0*                active    idle   0                                        Certificate Authority connected.
etcd/0*                   active    idle   1                        2379/tcp        Healthy with 3 known peers
etcd/1                    active    idle   2                        2379/tcp        Healthy with 3 known peers
etcd/2                    active    idle   3                        2379/tcp        Healthy with 3 known peers
kubeapi-load-balancer/0*  active    idle   4                        443/tcp         Loadbalancer ready.
kubernetes-master/0*      active    idle   5                        6443/tcp        Kubernetes master running.
  flannel/0*              active    idle                                            Flannel subnet
kubernetes-master/1       active    idle   6                        6443/tcp        Kubernetes master running.
  flannel/1               active    idle                                            Flannel subnet
kubernetes-worker/0*      active    idle   7                        80/tcp,443/tcp  Kubernetes worker running.
  flannel/2               active    idle                                            Flannel subnet
kubernetes-worker/1       active    idle   8                        80/tcp,443/tcp  Kubernetes worker running.
  flannel/4               active    idle                                            Flannel subnet
kubernetes-worker/2       active    idle   9                        80/tcp,443/tcp  Kubernetes worker running.
  flannel/3               active    idle                                            Flannel subnet

Machine  State    DNS  Inst id              Series  AZ          Message
0        started       i-08cc8c6a85f78d6dd  bionic  us-west-2a  running
1        started       i-051a93407700bc598  bionic  us-west-2a  running
2        started       i-0663d1baf47487040  bionic  us-west-2a  running
3        started       i-0e84af70f8115f57b  bionic  us-west-2a  running
4        started       i-04348b1b743d6048b  bionic  us-west-2a  running
5        started       i-02541a9b9bc1efb8a  bionic  us-west-2a  running
6        started       i-0998891271a632f3e  bionic  us-west-2a  running
7        started       i-09ed2ca17685a2393  bionic  us-west-2a  running
8        started       i-0a74512e86695fbdc  bionic  us-west-2a  running
9        started       i-0f488da3b1dfe618a  bionic  us-west-2a  running
11       started       i-0ccfa6f129e704b7f  bionic  us-west-2a  running

Now that we have K8S, it's time to create the Juju k8s model, provision storage, and deploy the k8s workload.

# Create the K8S cloud and K8S model
juju scp kubernetes-master/0:config ~/.kube/config
kubectl config view --raw | juju add-k8s myk8scloud --cluster-name=juju-cluster
juju add-model bdxk8smodel myk8scloud

# Create the storage pools (operator and application)
juju create-storage-pool operator-storage kubernetes \
    storage-class=juju-operator-storage parameters.type=gp2

juju create-storage-pool k8s-ebs kubernetes \
    storage-class=juju-ebs parameters.type=gp2

At this point Juju is configured to deploy the jenkins-k8s charm.

juju deploy cs:~jamesbeedy/jenkins-k8s-0 --storage jenkins-home=1G,k8s-ebs

Wait for the deployment to settle (it errors through a bunch of volume-creation statuses for a few minutes, but it finally settles).

$ juju status
Model        Controller  Cloud/Region  Version  SLA          Timestamp
bdxk8smodel  pdl-aws     myk8scloud    2.5.0    unsupported  14:05:46-08:00

App          Version  Status  Scale  Charm        Store       Rev  OS          Address         Notes
jenkins-k8s           active      1  jenkins-k8s  jujucharms    0  kubernetes  

Unit            Workload  Agent  Address  Ports     Message
jenkins-k8s/0*  active    idle            8080/TCP

$ juju storage
Unit           Storage id      Type        Pool     Size   Status    Message
jenkins-k8s/0  jenkins-home/0  filesystem  k8s-ebs  42MiB  attached  Successfully provisioned volume pvc-94699c34-1e8d-11e9-876a-0694b7418256 using

kubectl shows me

$ kubectl get namespaces
NAME                              STATUS   AGE
bdxk8smodel                       Active   32m
default                           Active   62m
ingress-nginx-kubernetes-worker   Active   62m
kube-public                       Active   62m
kube-system                       Active   62m

$ kubectl get all --namespace bdxk8smodel
NAME                              READY   STATUS             RESTARTS   AGE
pod/juju-jenkins-k8s-0            0/1     CrashLoopBackOff   10         31m
pod/juju-operator-jenkins-k8s-0   1/1     Running            0          32m

NAME                       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
service/juju-jenkins-k8s   ClusterIP                <none>        8080/TCP   31m

NAME                                         READY   AGE
statefulset.apps/juju-jenkins-k8s            0/1     31m
statefulset.apps/juju-operator-jenkins-k8s   1/1     32m

$ kubectl describe pods juju-jenkins-k8s-0 --namespace bdxk8smodel
Name:           juju-jenkins-k8s-0
Namespace:      bdxk8smodel
Start Time:     Tue, 22 Jan 2019 13:35:19 -0800
Labels:         controller-revision-hash=juju-jenkins-k8s-688b95b7bc
Annotations:    <none>
Status:         Running
Controlled By:  StatefulSet/juju-jenkins-k8s
    Container ID:   docker://bffdda745ae842a240e68563b7895966b0a63b37b12875c48d78f684ac10c06d
    Image ID:       docker-pullable://
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 22 Jan 2019 17:27:55 -0800
      Finished:     Tue, 22 Jan 2019 17:27:55 -0800
    Ready:          False
    Restart Count:  50
    Environment:    <none>
    Mounts:
      /var/jenkins_home from juju-jenkins-home-0 (rw)
      /var/run/secrets/ from default-token-sh6f2 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  juju-jenkins-home-0:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  juju-jenkins-home-0-juju-jenkins-k8s-0
    ReadOnly:   false
  default-token-sh6f2:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-sh6f2
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations: for 300s
        for 300s
Events:
  Type     Reason   Age                       From      Message
  ----     ------   ----                      ----      -------
  Warning  BackOff  2m12s (x1082 over 3h56m)  kubelet,  Back-off restarting failed container

After all is said and done, I’m left wondering how to better introspect the CrashLoopBackOff, and how to configure ingress networking to my application.

For the CrashLoopBackOff, I think I’m best off researching how to debug kubernetes applications.
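
A sketch of the usual first triage steps for a CrashLoopBackOff (the namespace and pod names below are the ones from this deployment):

```shell
# Sketch: common CrashLoopBackOff triage. NS/POD are the names used in
# this deployment; adjust to taste.
NS=bdxk8smodel
POD=juju-jenkins-k8s-0

if command -v kubectl >/dev/null; then
  # Logs from the previous, crashed container instance
  kubectl logs -p "$POD" --namespace "$NS"
  # Recent events usually point at the failing step
  kubectl describe pod "$POD" --namespace "$NS" | tail -n 25
fi
```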

For the ingress stuff, is there possibly a doc/spec somewhere on how ingress is done?



Ohhh I think I see it

$ kubectl logs -p juju-jenkins-k8s-0 --namespace bdxk8smodel
touch: cannot touch '/var/jenkins_home/copy_reference_file.log': Permission denied
Can not write to /var/jenkins_home/copy_reference_file.log. Wrong volume permissions?

The access mode seems correct:

$ kubectl get pvc --namespace bdxk8smodel
NAME                                                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                        AGE
jenkins-k8s-operator-volume-juju-operator-jenkins-k8s-0   Bound    pvc-7eae7d29-1e8d-11e9-876a-0694b7418256   1Gi        RWO            bdxk8smodel-juju-operator-storage   5h27m
juju-jenkins-home-0-juju-jenkins-k8s-0                    Bound    pvc-94699c34-1e8d-11e9-876a-0694b7418256   1Gi        RWO            bdxk8smodel-juju-ebs                5h26m

I found a couple of related reports, which lead me to believe that I may need to set

    fsGroup: 1000

in the pod's securityContext.

It seems I need to change permissions on the volume mount, since the process runs as the 'jenkins' user, so the above may be the fix.
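
A sketch of where that setting would sit, assuming fsGroup lands under the pod-level securityContext:

```yaml
# Sketch: pod-level securityContext granting the volume's group to the
# container process (1000 is the 'jenkins' uid/gid in the upstream image).
securityContext:
  fsGroup: 1000
```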



With regard to exposing applications, what documentation we have so far is in the Exposing Gitlab section of this post.
There's also k8s-specific config that can be used when deploying an application. Such config includes external IPs, load balancer IP, etc. - aspects of the k8s cluster deployment which may have been set up and used for ingress. There's an assumed amount of k8s knowledge required, though. We'll need to get better documentation put together.
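
As a hedged illustration of that kind of per-application config (the option name is from the 2.5-era docs; verify against `juju config <app>` on your own version):

```shell
# Hypothetical illustration: route ingress to the app via the
# juju-external-hostname application config option (name as in the
# Juju 2.5-era docs; the hostname here is a placeholder).
APP=jenkins-k8s
if command -v juju >/dev/null; then
  juju config "$APP" juju-external-hostname=jenkins.example.com
  juju expose "$APP"
fi
```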


Juju 2.5.1, due out within the next week, will support setting a bunch of additional k8s-specific pod attributes, including securityContext and serviceAccountName. You could also try it now using the 2.5 edge snap (commit dcb82c3).

These are set in the YAML file passed back to Juju via the pod-spec-set hook command.
The attributes go at the top level, next to the containers spec.

A (contrived) example.
Note that all securityContext, dnsConfig, etc. attributes are supported, not just those shown, which are examples only.

activeDeadlineSeconds: 10
serviceAccountName: serviceAccount
restartPolicy: OnFailure
terminationGracePeriodSeconds: 20
automountServiceAccountToken: true
securityContext:
  runAsNonRoot: true
hostname: host
subdomain: sub
priorityClassName: top
priority: 30
dnsConfig:
  nameservers: [ns1, ns2]
readinessGates:
  - conditionType: PodScheduled
containers:
  - name: gitlab
    image: gitlab/latest
    imagePullPolicy: Always
    command: ["sh", "-c"]
    args: ["doIt", "--debug"]
    workingDir: "/path/to/here"
    ports:
      - containerPort: 80
        name: fred
        protocol: TCP
      - containerPort: 443
        name: mary
    livenessProbe:
      initialDelaySeconds: 10
      httpGet:
        path: /ping
        port: 8080
    readinessProbe:
      initialDelaySeconds: 10
      httpGet:
        path: /pingReady
        port: www
    config:
      attr: foo=bar; name['fred']='blogs';
      foo: bar
      restricted: 'yes'
      switch: on
    files:
      - name: configuration
        mountPath: /var/lib/foo
        files:
          file1: |
            foo: bar

This is great! I have tried it out on 2.5/edge and I was able to get past the permissions error I was hitting by setting

  securityContext:
    fsGroup: 1000


@jamesbeedy I really recommend k9s by derailed for awareness of what's going on in your cluster.

Have you tried it?

