How to: Charmed MongoDB Performance testing for VM charm

About

The goal of this document is to provide options for load testing of Charmed MongoDB (VM charm).

Testbed

For testing purposes we will use LXD running on a t3.2xlarge AWS instance with a 500 GB SSD drive.
We will deploy a 3-node MongoDB replica set on Ubuntu 22.04 using Juju 3.1.7.

Related documentation

This document contains all the required information and commands to run performance testing, as well as the results of the tests. You can refer to the latest and more detailed information using these links:

Testbed preparation

Please perform the following steps to prepare your testbed.

1. Deploy AWS instance

This step is optional if you are running the tests on your own hardware.
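If you do use AWS, here is a minimal sketch using the AWS CLI; the AMI ID, key pair and security group are placeholders that must be replaced with values from your own account.

# launch a t3.2xlarge instance with a 500 GB gp3 root volume (placeholder IDs)
aws ec2 run-instances \
  --image-id <ubuntu-22.04-ami-id> \
  --instance-type t3.2xlarge \
  --key-name <your-key-pair> \
  --security-group-ids <your-security-group-id> \
  --block-device-mappings 'DeviceName=/dev/sda1,Ebs={VolumeSize=500,VolumeType=gp3}' \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=mongodb-load-test}]'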

2. Set up LXD

sudo snap install lxd
lxd init --auto
lxc network set lxdbr0 ipv6.address none

3. Set up Juju

sudo snap install juju --channel 3.1/stable

Juju already has built-in knowledge of LXD and how it works, so no additional setup or configuration is needed. However, because Juju 3.x is a strictly confined snap and is not allowed to create ~/.local/share, we need to create it manually.

mkdir -p ~/.local/share

4. Bootstrap a controller

juju bootstrap localhost lxd --agent-version 3.1.7

5. Add a model

juju add-model test

6. Deploy a replica set

juju deploy mongodb -n 3

You can check the status of the deployment using:

juju status

or

juju status --watch 1s --color

7. Obtain the connection string

To run tests against the deployed cluster you need to obtain a connection string that will be used to connect to the cluster.

export DB_USERNAME="operator"
export DB_NAME="admin"
export REPL_SET_NAME="mongodb"
export DB_PASSWORD=$(juju run mongodb/leader get-password | grep password| awk '{print $2}')
export HOST_IP_0=$(juju exec --unit mongodb/0 -- hostname -I | awk '{print $1}')
export HOST_IP_1=$(juju exec --unit mongodb/1 -- hostname -I | awk '{print $1}')
export HOST_IP_2=$(juju exec --unit mongodb/2 -- hostname -I | awk '{print $1}')
export MONGODB_URI=mongodb://$DB_USERNAME:$DB_PASSWORD@$HOST_IP_0,$HOST_IP_1,$HOST_IP_2:27017/$DB_NAME?replicaSet=$REPL_SET_NAME
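Optionally, you can verify the connection string by running a quick ping through the charmed-mongodb.mongosh client on the leader unit (the same client is used in the cleanup section below). The quoting below is chosen so that the URI is expanded locally before being passed to the unit:

juju ssh mongodb/leader "charmed-mongodb.mongosh '$MONGODB_URI' --eval 'db.runCommand({ ping: 1 })'"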

Integration with CoS (optional)

CoS stands for Canonical Observability Stack. Integrating the Charmed MongoDB cluster with it can help you understand the system behaviour during load testing.
The sections below contain a summary of the commands from these documents:

  1. Viewing Metrics
  2. CoS tutorial

Please refer to them for the latest updates.

Install and prepare MicroK8s

sudo snap install microk8s --channel=1.27-strict
sudo usermod -a -G snap_microk8s $(whoami)
mkdir ~/.kube
sudo chown -R $(whoami) ~/.kube
newgrp snap_microk8s
microk8s status --wait-ready
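Optionally, you can populate the ~/.kube directory created above with the MicroK8s kubeconfig, so that kubectl-based tooling can reach the cluster:

microk8s config > ~/.kube/config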

Configure MicroK8s

Install dependencies

sudo snap install jq

Configure MicroK8s

microk8s enable dns
microk8s enable hostpath-storage

IPADDR=$(ip -4 -j route get 2.2.2.2 | jq -r '.[] | .prefsrc')
microk8s enable metallb:$IPADDR-$IPADDR

microk8s kubectl rollout status deployments/hostpath-provisioner -n kube-system -w
microk8s kubectl rollout status deployments/coredns -n kube-system -w
microk8s kubectl rollout status daemonset.apps/speaker -n metallb-system -w

Add Juju K8s controller and create a model

mkdir -p $HOME/.local/share/juju

juju bootstrap microk8s k8s --agent-version 3.1.7

# wait until bootstrap is finished
# make sure that you are using k8s controller
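# for example, to verify the active controller and switch to it if needed:
juju controllers    # the current controller is marked with an asterisk
juju switch k8s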

juju add-model cos

Deploy CoS

curl -L https://raw.githubusercontent.com/canonical/cos-lite-bundle/main/overlays/offers-overlay.yaml -O
curl -L https://raw.githubusercontent.com/canonical/cos-lite-bundle/main/overlays/storage-small-overlay.yaml -O

juju deploy cos-lite \
  --trust \
  --overlay ./offers-overlay.yaml \
  --overlay ./storage-small-overlay.yaml

Again, you can see the status of the deployment using

juju status --watch 1s --color

Once CoS is deployed, you will need to:

  • Obtain password to connect to Grafana dashboard
  • Integrate it with Charmed MongoDB.

To get the password to connect to the Grafana dashboard, execute:

juju run grafana/leader get-admin-password --model cos

The username for the dashboard is admin. The dashboard will be available at this URI:

http://<your_host_public_ip_or_dns_name>/cos-grafana

Please make sure that the required security group settings are applied to the instance/machine where you are running the tests.
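For example, if the testbed is the AWS instance described above, HTTP access to the dashboard can be opened with the AWS CLI (the security group ID and source CIDR below are placeholders):

aws ec2 authorize-security-group-ingress \
  --group-id <your-security-group-id> \
  --protocol tcp \
  --port 80 \
  --cidr <your_public_ip>/32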

Integrate CoS with Charmed MongoDB

Deploy Grafana Agent in the LXD Juju controller

juju switch lxd:test

juju deploy grafana-agent --channel=stable
juju integrate grafana-agent mongodb

Consume the CoS offers and integrate them with Grafana Agent

juju consume k8s:admin/cos.alertmanager-karma-dashboard
juju consume k8s:admin/cos.grafana-dashboards
juju consume k8s:admin/cos.loki-logging
juju consume k8s:admin/cos.prometheus-receive-remote-write

juju integrate grafana-agent prometheus-receive-remote-write
juju integrate grafana-agent loki-logging
juju integrate grafana-agent grafana-dashboards
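You can confirm that the cross-model relations have been established with:

juju status --relations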

Load testing with YCSB

YCSB (Yahoo! Cloud Serving Benchmark) is an open-source tool that provides CRUD testing capabilities for different databases. The tool can be found here and the documentation specific to testing MongoDB can be found here.

Preparing YCSB

Install dependencies

sudo apt update -y
# install java, maven, python 2

sudo apt install openjdk-21-jdk maven python2 -y

# symlink python2 to python

sudo ln -s /usr/bin/python2 /usr/bin/python
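You can verify that the toolchain is in place before continuing:

java -version
mvn -version
python --version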

Install YCSB

mkdir -p ~/load-testing/

cd ~/load-testing/

curl -O --location https://github.com/brianfrankcooper/YCSB/releases/download/0.17.0/ycsb-0.17.0.tar.gz

tar xfvz ycsb-0.17.0.tar.gz

cd ycsb-0.17.0

Run tests

Load data

Adjust params as required.

RECORD_COUNT=500000
LOAD_THREADS_COUNT=16
./bin/ycsb load mongodb -s -P workloads/workloada -p recordcount=$RECORD_COUNT -threads $LOAD_THREADS_COUNT -p mongodb.url="$MONGODB_URI"

As a result of the command you should see something like this:

[OVERALL], RunTime(ms), 521122
[OVERALL], Throughput(ops/sec), 959.4682243313466
[TOTAL_GCS_G1_Young_Generation], Count, 33
[TOTAL_GC_TIME_G1_Young_Generation], Time(ms), 128
[TOTAL_GC_TIME_%_G1_Young_Generation], Time(%), 0.024562386542882474
[TOTAL_GCS_G1_Concurrent_GC], Count, 0
[TOTAL_GC_TIME_G1_Concurrent_GC], Time(ms), 0
[TOTAL_GC_TIME_%_G1_Concurrent_GC], Time(%), 0.0
[TOTAL_GCS_G1_Old_Generation], Count, 0
[TOTAL_GC_TIME_G1_Old_Generation], Time(ms), 0
[TOTAL_GC_TIME_%_G1_Old_Generation], Time(%), 0.0
[TOTAL_GCs], Count, 33
[TOTAL_GC_TIME], Time(ms), 128
[TOTAL_GC_TIME_%], Time(%), 0.024562386542882474
[CLEANUP], Operations, 16
[CLEANUP], AverageLatency(us), 351.8125
[CLEANUP], MinLatency(us), 1
[CLEANUP], MaxLatency(us), 5599
[CLEANUP], 95thPercentileLatency(us), 6
[CLEANUP], 99thPercentileLatency(us), 5599
[INSERT], Operations, 500000
[INSERT], AverageLatency(us), 16636.157778
[INSERT], MinLatency(us), 3330
[INSERT], MaxLatency(us), 595455
[INSERT], 95thPercentileLatency(us), 27871
[INSERT], 99thPercentileLatency(us), 46623
[INSERT], Return=OK, 500000
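If you prefer to keep the load parameters in a file instead of on the command line, YCSB also accepts additional property files via -P. A minimal sketch (the file name my-load.properties is arbitrary):

cat > my-load.properties <<'EOF'
recordcount=500000
threadcount=16
EOF

./bin/ycsb load mongodb -s -P workloads/workloada -P my-load.properties -p mongodb.url="$MONGODB_URI"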

Run tests

Adjust params as required.

OPERATIONS_COUNT=1000000
OPERATIONS_THREADS_COUNT=2

./bin/ycsb run mongodb -s -P workloads/workloada -p operationcount=$OPERATIONS_COUNT -threads $OPERATIONS_THREADS_COUNT  -p  mongodb.url="$MONGODB_URI"

As a result of the command you should see something like this:

[OVERALL], RunTime(ms), 1758197
[OVERALL], Throughput(ops/sec), 568.7644786107586
[TOTAL_GCS_G1_Young_Generation], Count, 81
[TOTAL_GC_TIME_G1_Young_Generation], Time(ms), 181
[TOTAL_GC_TIME_%_G1_Young_Generation], Time(%), 0.010294637062854732
[TOTAL_GCS_G1_Concurrent_GC], Count, 0
[TOTAL_GC_TIME_G1_Concurrent_GC], Time(ms), 0
[TOTAL_GC_TIME_%_G1_Concurrent_GC], Time(%), 0.0
[TOTAL_GCS_G1_Old_Generation], Count, 0
[TOTAL_GC_TIME_G1_Old_Generation], Time(ms), 0
[TOTAL_GC_TIME_%_G1_Old_Generation], Time(%), 0.0
[TOTAL_GCs], Count, 81
[TOTAL_GC_TIME], Time(ms), 181
[TOTAL_GC_TIME_%], Time(%), 0.010294637062854732
[READ], Operations, 500517
[READ], AverageLatency(us), 524.631029515481
[READ], MinLatency(us), 177
[READ], MaxLatency(us), 253183
[READ], 95thPercentileLatency(us), 975
[READ], 99thPercentileLatency(us), 2083
[READ], Return=OK, 500517
[CLEANUP], Operations, 2
[CLEANUP], AverageLatency(us), 3088.0
[CLEANUP], MinLatency(us), 6
[CLEANUP], MaxLatency(us), 6171
[CLEANUP], 95thPercentileLatency(us), 6171
[CLEANUP], 99thPercentileLatency(us), 6171
[UPDATE], Operations, 499483
[UPDATE], AverageLatency(us), 6490.39991751471
[UPDATE], MinLatency(us), 2552
[UPDATE], MaxLatency(us), 1244159
[UPDATE], 95thPercentileLatency(us), 11063
[UPDATE], 99thPercentileLatency(us), 18575
[UPDATE], Return=OK, 499483

Clean up the test database

echo "$MONGODB_URI"
juju ssh mongodb/leader
charmed-mongodb.mongosh "<result of the first command here>"
mongodb [primary] admin> show databases;
admin   524.00 KiB
config  224.00 KiB
local     4.59 MiB
ycsb      2.40 MiB

use ycsb
db.dropDatabase();
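Alternatively, the ycsb database can be dropped non-interactively from the test host, reusing the same charmed-mongodb.mongosh client on the leader unit (a sketch; the quoting lets the URI expand locally):

juju ssh mongodb/leader "charmed-mongodb.mongosh '$MONGODB_URI' --eval 'db.getSiblingDB(\"ycsb\").dropDatabase()'"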

Load testing with NoSQLBench tool

NoSQLBench provides different workloads and scenarios for testing MongoDB. Please refer to the MongoDB section in the NoSQLBench documentation for more details.

Preparing NoSQLBench

Install dependencies

sudo apt update -y
sudo apt install libfuse2* -y

Download the tool

mkdir -p ~/load-testing/
cd ~/load-testing/

wget https://github.com/nosqlbench/nosqlbench/releases/download/5.17.9-release/nb5

chmod +x nb5

Run the test

Write data to database

./nb5 run driver=mongodb workload=mongodb-keyvalue2 tags=block:rampup cycles=50k --progress console:1s connection="$MONGODB_URI" database=perf-test

Run main activity

./nb5 run driver=mongodb workload=mongodb-keyvalue2 tags='block:main.*' cycles=25k cyclerate=2500 threads=25 --progress console:1s connection="$MONGODB_URI" database=perf-test

List of workloads and scenarios provided by NoSQLBench for MongoDB

To list scenarios with related workloads execute:

./nb5 --list-scenarios  |  grep mongodb