Logging into MongoDB

The following script will SSH to a Juju 2.0 controller machine and start a mongo shell, or exec into the controller pod on Kubernetes. It optionally takes the mode (iaas by default, or caas/k8s), the machine to connect to, and the model name (defaulting to machine “0” in the model named “controller”).

This is the one script to rule them all :face_vomiting:

#!/bin/bash
# Usage: juju-db [iaas|caas] [machine] [model]

mode=${1:-iaas}
machine=${2:-0}
model=${3:-controller}

case "${mode}" in
    caas | k8s)
        # Adjust kubectl_bin if you are not using MicroK8s from the snap.
        kubectl_bin=microk8s.kubectl
        # The controller lives in a namespace named controller-<controller-name>.
        k8s_ns=$(juju whoami | grep Controller | awk '{print "controller-"$2}')
        k8s_controller_pod=$(${kubectl_bin} -n "${k8s_ns}" get pods | grep -E "^controller-([0-9]+)" | awk '{print $1}')

        echo "Connecting to mongo-db instance at: ${k8s_ns}:${k8s_controller_pod}"
        # Pull the mongo credentials out of the controller agent's agent.conf.
        mongo_user=$(${kubectl_bin} exec -n "${k8s_ns}" "${k8s_controller_pod}" -c api-server -it -- bash -c "grep tag /var/lib/juju/agents/controller-*/agent.conf | cut -d' ' -f2 | tr -d '\n'")
        mongo_pass=$(${kubectl_bin} exec -n "${k8s_ns}" "${k8s_controller_pod}" -c api-server -it -- bash -c "grep statepassword /var/lib/juju/agents/controller-*/agent.conf | cut -d' ' -f2 | tr -d '\n'")
        ${kubectl_bin} exec -n "${k8s_ns}" "${k8s_controller_pod}" -c mongodb -it -- bash -c "/bin/mongo 127.0.0.1:37017/juju --authenticationDatabase admin --ssl --sslAllowInvalidCertificates --username '${mongo_user}' --password '${mongo_pass}'"
        ;;

    *)
        # Build the command string to run on the controller machine; the quoted
        # EOF keeps everything literal so expansion happens remotely, not locally.
        read -d '' -r cmds <<'EOF'
conf=/var/lib/juju/agents/machine-*/agent.conf
user=$(sudo grep tag $conf | cut -d' ' -f2)
password=$(sudo grep statepassword $conf | cut -d' ' -f2)
# Pick a mongo client: the juju-db snap (Focal and later), the client bundled
# with Juju (older series), or whatever is installed system-wide.
if [ -f /snap/bin/juju-db.mongo ]; then
  client=/snap/bin/juju-db.mongo
elif [ -f /usr/lib/juju/mongo*/bin/mongo ]; then
  client=/usr/lib/juju/mongo*/bin/mongo
else
  client=/usr/bin/mongo
fi
# Print the credentials so they can be reused by hand if needed.
echo $user $password
$client 127.0.0.1:37017/juju --authenticationDatabase admin --ssl --sslAllowInvalidCertificates --username "$user" --password "$password"
EOF
        juju ssh -m "${model}" "${machine}" "${cmds}"
        ;;
esac
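
For example, assuming the script is saved as juju-db.sh and made executable (the filename is just a placeholder):

# Machine 0 of the "controller" model on an IaaS controller (the defaults):
./juju-db.sh
# A Kubernetes (MicroK8s) controller:
./juju-db.sh caas
# Machine 1 of another model on an IaaS controller:
./juju-db.sh iaas 1 mymodel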

This is very similar to mine. Mine uses a PATH expansion to deal with the different mongo versions.

Also, I have it saved as juju-db in ~/bin (which is on my PATH), so I can run the following:

juju db
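
Juju picks up any executable named juju-<something> on the PATH as a plugin, so the setup is roughly this (the paths are just my convention):

mkdir -p ~/bin
cp juju-db ~/bin/juju-db          # the script below, saved under the plugin name
chmod +x ~/bin/juju-db
export PATH="$PATH:$HOME/bin"     # only needed if ~/bin is not already on PATH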

Script contents:

#!/bin/bash

machine=${1:-0}
model=${2:-controller}

echo machine $machine

read -d '' -r cmds <<'EOF'
conf=/var/lib/juju/agents/machine-*/agent.conf
user=$(sudo grep tag $conf | cut -d' ' -f2)
password=$(sudo grep statepassword $conf | cut -d' ' -f2)
PATH="$PATH:$(echo /usr/lib/juju/mongo*/bin)"
mongo 127.0.0.1:37017/juju --authenticationDatabase admin --ssl --sslAllowInvalidCertificates --username "$user" --password "$password"
EOF

juju ssh -m "$model" "$machine" "$cmds"

I think perhaps the echo left in there was from me debugging at some stage.


That does work, though on Bionic it adds a literal “/usr/lib/juju/mongo*/bin” to PATH if the glob doesn’t match. It isn’t a huge deal, as a PATH entry that doesn’t exist never matches anything. :slight_smile:
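
If that bothers you, a guarded expansion along these lines (a sketch, untested across series) only appends directories the glob actually resolves:

for dir in /usr/lib/juju/mongo*/bin; do
  # An unmatched glob stays literal and fails the -d test, so nothing
  # bogus lands on the PATH.
  [ -d "$dir" ] && PATH="$PATH:$dir"
done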

Here’s a slightly cleaner/more secure version, based on the snippets found here and elsewhere while searching for a solution:

#!/bin/bash

machine="${1:-0}"
model="${2:-controller}"
juju=$(command -v juju)

read -d '' -r cmds <<'EOF'
conf=/var/lib/juju/agents/machine-*/agent.conf
user=$(sudo awk '/tag/ {print $2}' $conf)
password=$(sudo awk '/statepassword/ {print $2}' $conf)
client=$(command -v mongo)
"$client" 127.0.0.1:37017/juju --authenticationDatabase admin --ssl --sslAllowInvalidCertificates --username "$user" --password "$password"
EOF

"$juju" ssh -m "$model" "$machine" "$cmds"

Thank you for sharing +1

I have an LXD cluster.

I enabled high availability with juju enable-ha.

The agent.conf is not present on every host…

But inside the agent.conf I found apiaddresses that no longer exist (I had restarted the LXD server hosting the Juju controller).

And I can’t find a statepassword in the agent.conf.
There are only an apipassword, an oldpassword, and a cacert.

juju ssh didn’t work in my case, because I had also deleted an LXD server from the LXD cluster.
The deleted LXD server still seems to be referenced in the MongoDB configuration.

Now I will enter the Juju controller with lxc exec ... bash and see whether one of those passwords lets me connect to mongo.

Is the MongoDB automatically replicated, or do I need to change every instance?
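
(I assume that once connected I could check with the standard replica-set helper; a rough sketch using the same client and credentials as the scripts above:)

# rs.status() lists the replica-set members and their current state.
mongo 127.0.0.1:37017/juju --authenticationDatabase admin --ssl --sslAllowInvalidCertificates --username "$user" --password "$password" --eval "rs.status()"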

… I will report soon with what I find out …

Okay, thank you.

rick_h_ from Freenode helped me find the statepassword.

It was inside the controller container, and not on the host where I had searched for it.

:slight_smile:
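
For anyone else hitting this, it boils down to roughly the following (the container name here is hypothetical; lxc list shows the real one):

# Enter the controller container, then read the statepassword directly.
lxc exec juju-controller-0 -- bash
grep statepassword /var/lib/juju/agents/machine-*/agent.conf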

EDIT: the script at the top of this page has been updated to work with MongoDB instances on k8s and should be used instead of the one suggested below.

If you are trying to access a MongoDB instance on k8s, the above scripts won’t do the trick for you.
However, this will work (replace kubectl_bin accordingly if not using MicroK8s via snap):

#!/bin/bash

# Adjust kubectl_bin if you are not using MicroK8s from the snap.
kubectl_bin=microk8s.kubectl
k8s_ns=$(juju whoami | grep Controller | awk '{print "controller-"$2}')
# Grab the first pod listed in the controller namespace.
k8s_controller_pod=$(${kubectl_bin} -n "${k8s_ns}" get pods | awk 'NR==2 {print $1}')

echo "Connecting to mongo-db instance at: ${k8s_ns}:${k8s_controller_pod}"
# Pull the mongo credentials out of the controller agent's agent.conf.
mongo_user=$(${kubectl_bin} exec -n "${k8s_ns}" "${k8s_controller_pod}" -c api-server -it -- bash -c "grep tag /var/lib/juju/agents/controller-*/agent.conf | cut -d' ' -f2 | tr -d '\n'")
mongo_pass=$(${kubectl_bin} exec -n "${k8s_ns}" "${k8s_controller_pod}" -c api-server -it -- bash -c "grep statepassword /var/lib/juju/agents/controller-*/agent.conf | cut -d' ' -f2 | tr -d '\n'")
${kubectl_bin} exec -n "${k8s_ns}" "${k8s_controller_pod}" -c mongodb -it -- bash -c "mongo 127.0.0.1:37017/juju --authenticationDatabase admin --ssl --sslAllowInvalidCertificates --username '${mongo_user}' --password '${mongo_pass}'"

The script in the OP is missing the actual SSH command. Adding the one from Tim’s reply works:

juju ssh -m "$model" "$machine" "$cmds"

I’ve updated it to my latest version, which handles k8s models.


/usr/bin/mongo does not seem to be available. I simply changed it to invoke the mongo command available on the PATH.

For Focal deployments you need to use mongo from the snap, so juju-db.mongo.
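
On such a deployment the invocation looks roughly like this (same credentials and flags as the scripts above):

/snap/bin/juju-db.mongo 127.0.0.1:37017/juju --authenticationDatabase admin --ssl --sslAllowInvalidCertificates --username "$user" --password "$password"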


@simonrichardson I know this is a bit old, but I can’t get this script working anymore. I’m getting the following error:

$ juju db
sh: 1: sudo: not found
sh: 1: sudo: not found

sh: 12: /usr/bin/mongo: not found
ERROR command terminated with exit code 127

This is on a Microk8s model on the current tip of 3.1.

Never mind, found the issue. You have to invoke it like

juju db caas

Probably wouldn’t be hard to optimise this and automatically work out if it should be CaaS/IaaS.
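
Something along these lines might work (an untested sketch, reusing the namespace lookup from the script at the top): if the controller-<name> namespace exists in the local k8s cluster, assume CaaS, otherwise fall back to IaaS.

k8s_ns=$(juju whoami | grep Controller | awk '{print "controller-"$2}')
if microk8s.kubectl get namespace "${k8s_ns}" >/dev/null 2>&1; then
  mode=caas
else
  mode=iaas
fi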