Deploy OSM-HA on AWS

The Charmed Distribution of OSM (AWS)

Welcome to The Charmed Distribution of OSM!

The objective of this page is to give an overview of the first steps to get up and running with the HA version of OSM.

User Guide

The installation process is straightforward and will walk you through installing OSM on AWS:


  • Bootstrap AWS Cloud
  • Deploy CDK and OSM-VCA on AWS
  • Bootstrap CDK Cloud
  • Deploy OSM

Bootstrap AWS Cloud

First of all, you should have juju installed on your local machine.

sudo snap install juju --classic

AWS Credentials

The next step is to add the credentials of your AWS account.

juju add-credential aws
# Output
Enter credential name: osm-ha-credential

Using auth-type "access-key".

Enter access-key:

Enter secret-key:

Credential "osm-ha-credential" added locally for cloud "aws".

Bootstrap controller

Once the credentials are added, it’s time to bootstrap a Juju controller on AWS.

juju bootstrap aws aws-osm-ha --credential osm-ha-credential

Deploy CDK and OSM-VCA on AWS

This section presents the steps to deploy CDK on AWS, which will later be used to deploy OSM on top of it. We also need to create an overlay.yaml file to include several things:

  • osm-vca: Charm needed by LCM (OSM component) to host the Proxy Charms.
  • kubernetes-worker: 4 workers are needed.
  • aws-integrator: Charm needed to have CDK working on AWS.
cat << EOF > overlay.yaml
applications:
  osm-vca:
    charm: cs:~charmed-osm/vca
    num_units: 1
  kubernetes-worker:
    constraints: mem=4G cores=2 root-disk=40G
    num_units: 4
  aws-integrator:
    charm: cs:~containers/aws-integrator
    num_units: 1
relations:
  - ['aws-integrator', 'kubernetes-master']
  - ['aws-integrator', 'kubernetes-worker']
EOF

Deploy CDK and OSM-VCA with the following commands:

juju add-model cdk
juju deploy charmed-kubernetes --overlay overlay.yaml
juju trust aws-integrator
juju offer osm-vca:osm-vca # Offer osm-vca for a Cross-Model Relation

The command juju find-offers shows the URL of the offered interface (admin/cdk.osm-vca).

Store       URL                Access  Interfaces
aws-osm-ha  admin/cdk.osm-vca  admin   osm-vca:osm-vca

Bootstrap CDK Cloud

Before bootstrapping the CDK Cloud, it’s important to wait until CDK is up and running. You will know it is ready when the kubernetes-master units show the message “Kubernetes master running.” This can take around 20 minutes.

watch -c juju status --color
Model  Controller  Cloud/Region   Version  SLA          Timestamp
cdk    aws-osm-ha  aws/us-east-1  2.6.5    unsupported  11:16:01+02:00

App                    Version   Status  Scale  Charm                  Store       Rev  OS      Notes
aws-integrator         1.16.148  active      1  aws-integrator         jujucharms   10  ubuntu
containerd                       active      6  containerd             jujucharms    2  ubuntu
easyrsa                3.0.1     active      1  easyrsa                jujucharms  254  ubuntu  
etcd                   3.2.10    active      3  etcd                   jujucharms  434  ubuntu
flannel                0.10.0    active      6  flannel                jujucharms  425  ubuntu
kubeapi-load-balancer  1.14.0    active      1  kubeapi-load-balancer  jujucharms  649  ubuntu  exposed
kubernetes-master      1.15.0    active      2  kubernetes-master      jujucharms  700  ubuntu
kubernetes-worker      1.15.0    active      4  kubernetes-worker      jujucharms  552  ubuntu  exposed
osm-vca                          active      1  vca                    jujucharms    0  ubuntu

Unit                      Workload  Agent  Machine  Public address  Ports           Message
aws-integrator/0*         active    idle   0                     Ready
easyrsa/0*                active    idle   1                   Certificate Authority connected.
etcd/0                    active    idle   2    2379/tcp        Healthy with 3 known peers
etcd/1                    active    idle   3  2379/tcp        Healthy with 3 known peers
etcd/2*                   active    idle   4  2379/tcp        Healthy with 3 known peers
kubeapi-load-balancer/0*  active    idle   5    443/tcp         Loadbalancer ready.
kubernetes-master/0       active    idle   6  6443/tcp        Kubernetes master running.
  containerd/5            active    idle                    Container runtime available.
  flannel/5               active    idle                    Flannel subnet
kubernetes-master/1*      active    idle   7      6443/tcp        Kubernetes master running.
  containerd/4            active    idle                        Container runtime available.
  flannel/4               active    idle                        Flannel subnet
kubernetes-worker/0*      active    idle   8   80/tcp,443/tcp  Kubernetes worker running.
  containerd/0*           active    idle                     Container runtime available.
  flannel/0*              active    idle                     Flannel subnet
kubernetes-worker/1       active    idle   9    80/tcp,443/tcp  Kubernetes worker running.
  containerd/2            active    idle                      Container runtime available.
  flannel/2               active    idle                      Flannel subnet
kubernetes-worker/2       active    idle   10    80/tcp,443/tcp  Kubernetes worker running.
  containerd/1            active    idle                      Container runtime available.
  flannel/1               active    idle                      Flannel subnet
kubernetes-worker/3       active    idle   11     80/tcp,443/tcp  Kubernetes worker running.
  containerd/3            active    idle                       Container runtime available.
  flannel/3               active    idle                       Flannel subnet
osm-vca/0*                active    idle   12                  configured

Machine  State    DNS             Inst id              Series  AZ          Message
0        started     i-060581800c9b3de9e  bionic  us-east-1a  running
1        started   i-0e5be760554ea0b16  bionic  us-east-1b  running
2        started    i-0c723a5c9330a17e3  bionic  us-east-1a  running
3        started  i-0ccdc065640112f5d  bionic  us-east-1b  running
4        started  i-0431891ab2dcc004b  bionic  us-east-1c  running
5        started    i-053071bbc1f012ae1  bionic  us-east-1d  running
6        started  i-091a0b6e8dadcfa6c  bionic  us-east-1a  running
7        started      i-08826546e130c1515  bionic  us-east-1b  running
8        started   i-0f73acd5c5eeef2e6  bionic  us-east-1d  running
9        started    i-09933015cbd3cd922  bionic  us-east-1c  running
10       started    i-031171240a1a70b5b  bionic  us-east-1b  running
11       started     i-029902110200145cb  bionic  us-east-1a  running
12       started  i-05592992d699d3d2f  bionic  us-east-1f  running

Offer    Application  Charm  Rev  Connected  Endpoint  Interface  Role
osm-vca  osm-vca      vca    0    0/0        osm-vca   osm-vca    provider
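Instead of watching the status by hand, the wait can be scripted. The sketch below assumes `juju status` is available against a live controller (that step is shown commented out); the helper itself is plain string matching on the readiness message from the status output above.

```shell
# Succeeds when the given status text contains the master's readiness
# message ("Kubernetes master running.", as shown in the status output).
master_ready() {
  printf '%s\n' "$1" | grep -q 'Kubernetes master running.'
}

# Hypothetical polling loop (assumes a live juju controller):
# until master_ready "$(juju status kubernetes-master)"; do sleep 60; done
```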

Get CDK credentials

The Kubernetes credentials need to be stored in ~/.kube/config. To copy the credentials and install the kubectl client, execute the following commands:

mkdir ~/.kube
juju scp kubernetes-master/0:config ~/.kube/config
sudo snap install kubectl --classic

Create Storage

These additional commands create the storage CDK needs to work on AWS.

# Create a storage class using the `kubernetes.io/aws-ebs` provisioner
kubectl create -f - <<EOY
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ebs-1
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
EOY

# Create a persistent volume claim using that storage class
kubectl create -f - <<EOY
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: testclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
  storageClassName: ebs-1
EOY
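Before moving on, it is worth confirming that dynamic EBS provisioning actually works, i.e. that the test claim reaches the Bound phase. A sketch, with the cluster call commented out because it assumes kubectl access as configured above; the phase check itself is a pure comparison.

```shell
# Succeeds when the reported PVC phase is exactly "Bound".
pvc_bound() {
  [ "$1" = "Bound" ]
}

# Hypothetical check against the cluster (assumes kubectl access):
# pvc_bound "$(kubectl get pvc testclaim -o jsonpath='{.status.phase}')" \
#   && echo "EBS provisioning OK"
```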

Bootstrap Controller

Add the Kubernetes cluster to Juju as a cloud, using the region shown earlier in the juju status output (aws/us-east-1 in this example), then bootstrap a controller on it.

cat ~/.kube/config | juju add-k8s k8s-cloud --local --region=aws/us-east-1
juju bootstrap k8s-cloud

Deploy OSM

This section covers the deployment of OSM, for which we will use an osm model (namespace).

juju add-model osm

Create storage pools

Create the needed storage pools for OSM:

juju create-storage-pool operator-storage kubernetes storage-class=ebs-1
juju create-storage-pool osm-pv kubernetes storage-class=ebs-1
juju create-storage-pool packages-pv kubernetes storage-class=ebs-1


Deploy OSM simply by executing the following:

juju deploy osm-ha

Add the cross-model relation between osm-vca and lcm. The offer is referenced as <controller_name>:<URL>.

juju add-relation lcm-k8s aws-osm-ha:admin/cdk.osm-vca


Access OSM UI

Take the public IP of one of the kubernetes-worker units (for example, from the juju status output) and execute the following commands:

juju config ui-k8s juju-external-hostname=ui.<worker_ip>.xip.io
juju expose ui-k8s
juju config prometheus-k8s juju-external-hostname=prometheus.<worker_ip>.xip.io
juju expose prometheus-k8s
juju config grafana-k8s juju-external-hostname=grafana.<worker_ip>.xip.io
juju expose grafana-k8s

The ingress module uses nginx, whose proxy-body-size option defaults to 1m. This is a problem if a VNF package larger than 1m is uploaded. To solve it, we only have to add an annotation to the ingress.

kubectl -n osm edit ingress ui-k8s

# Add the following line in the annotations section:
#   nginx.ingress.kubernetes.io/proxy-body-size: "0"
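As an alternative to editing the object interactively, the same annotation can be applied in one command. A sketch, assuming the ingress is named ui-k8s in the osm namespace as above; the apply step is commented out because it needs kubectl access to the cluster.

```shell
# The annotation key/value to apply; "0" disables nginx's body-size limit.
annotation='nginx.ingress.kubernetes.io/proxy-body-size=0'

# Hypothetical apply step (assumes kubectl access):
# kubectl -n osm annotate ingress ui-k8s "$annotation" --overwrite
```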

You can now access these services (UI, Prometheus, Grafana).

Check OSM status

$ juju status
Model  Controller           Cloud/Region         Version  SLA          Timestamp
osm    k8s-cloud-us-east-1  k8s-cloud/us-east-1  2.6.5    unsupported  21:18:23+02:00

App             Version  Status  Scale  Charm           Store       Rev  OS          Address         Notes
grafana-k8s              active      3  grafana-k8s     jujucharms   15  kubernetes
kafka-k8s                active      3  kafka-k8s       jujucharms    1  kubernetes
lcm-k8s                  active      3  lcm-k8s         jujucharms   20  kubernetes
mariadb-k8s              active      3  mariadb-k8s     jujucharms   13  kubernetes
mon-k8s                  active      3  mon-k8s         jujucharms   14  kubernetes
mongodb-k8s              active      3  mongodb-k8s     jujucharms   14  kubernetes
nbi-k8s                  active      3  nbi-k8s         jujucharms   19  kubernetes
pol-k8s                  active      3  pol-k8s         jujucharms   14  kubernetes
prometheus-k8s           active      3  prometheus-k8s  jujucharms   12  kubernetes
ro-k8s                   active      3  ro-k8s          jujucharms   14  kubernetes
ui-k8s                   active      3  ui-k8s          jujucharms   23  kubernetes  exposed
zookeeper-k8s            active      3  zookeeper-k8s   jujucharms   16  kubernetes

Unit               Workload  Agent  Address     Ports                                Message
grafana-k8s/0*     active    idle  3000/TCP                             configured
grafana-k8s/1      active    idle  3000/TCP                             configured
grafana-k8s/2      active    idle  3000/TCP                             configured
kafka-k8s/0*       active    idle  9092/TCP                             configured
kafka-k8s/1        active    idle  9092/TCP                             configured
kafka-k8s/2        active    idle  9092/TCP                             configured
lcm-k8s/0*         active    idle  80/TCP                               configured
lcm-k8s/1          active    idle  80/TCP                               configured
lcm-k8s/2          active    idle  80/TCP                               configured
mariadb-k8s/0*     active    idle  3306/TCP,4444/TCP,4567/TCP,4568/TCP  configured
mariadb-k8s/1      active    idle  3306/TCP,4444/TCP,4567/TCP,4568/TCP  configured
mariadb-k8s/2      active    idle  3306/TCP,4444/TCP,4567/TCP,4568/TCP  configured
mon-k8s/0*         active    idle  8000/TCP                             configured
mon-k8s/1          active    idle  8000/TCP                             configured
mon-k8s/2          active    idle  8000/TCP                             configured
mongodb-k8s/0      active    idle  27017/TCP                            configured
mongodb-k8s/1*     active    idle  27017/TCP                            configured
mongodb-k8s/2      active    idle  27017/TCP                            configured
nbi-k8s/0*         active    idle  9999/TCP                             configured
nbi-k8s/1          active    idle  9999/TCP                             configured
nbi-k8s/2          active    idle  9999/TCP                             configured
pol-k8s/0*         active    idle  80/TCP                               configured
pol-k8s/1          active    idle  80/TCP                               configured
pol-k8s/2          active    idle  80/TCP                               configured
prometheus-k8s/0*  active    idle  9090/TCP                             configured
prometheus-k8s/1   active    idle  9090/TCP                             configured
prometheus-k8s/2   active    idle  9090/TCP                             configured
ro-k8s/0*          active    idle  9090/TCP                             configured
ro-k8s/1           active    idle  9090/TCP                             configured
ro-k8s/2           active    idle  9090/TCP                             configured
ui-k8s/0*          active    idle  80/TCP                               configured
ui-k8s/1           active    idle  80/TCP                               configured
ui-k8s/2           active    idle  80/TCP                               configured
zookeeper-k8s/0*   active    idle  2181/TCP,2888/TCP,3888/TCP           configured
zookeeper-k8s/1    active    idle  2181/TCP,2888/TCP,3888/TCP           configured
zookeeper-k8s/2    active    idle  2181/TCP,2888/TCP,3888/TCP           configured

Scale applications

Some applications, such as MariaDB and RO, need to be scaled after the deployment has finished. We are working on improving this, but in the meantime you should execute the following commands:

  • Scale MariaDB cluster
juju scale-application mariadb-k8s 3
  • Scale RO
juju scale-application ro-k8s 3

How to clean up

The easiest way to clean up everything is executing the following commands:

juju kill-controller aws-osm-ha -t 0 -y
juju unregister k8s-cloud-us-east-1
