Continuous Integration Testing

Juju has many Continuous Integration (CI) tests that run whenever new code is checked in. These are long-running integration tests that perform real deployments to clouds, to ensure that Juju works in real-world environments.

Local Environment Setup

Run the following Makefile target to install all the dependencies you need to run the tests.

make install-deps

The Scripts

The scripts under acceptancetests are generally divided into three categories:

  • The CI tests themselves, which are run by Jenkins and report pass/fail. These scripts have the prefix “assess”, e.g. assess_recovery.py.
  • Unit tests of the helper scripts. These scripts live in the ‘tests’ subdirectory and have the prefix “test”, e.g. tests/test_jujupy.py.
  • Helper scripts used by the CI tests. These are generally any file without one of the aforementioned prefixes.

Running Unit Tests (tests of the CI testing code)

The unit tests are written using Python’s unittest module.

To run all the tests, run make test. To run the tests for a particular test file, run python -m unittest <module_name>.
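
For example, assuming you are in the acceptancetests directory, the unit tests for the jujupy helpers can be run with:

python -m unittest tests.test_jujupy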

Running CI Tests

The CI tests are normal Python scripts. Their exit status indicates success or failure (0 for success, nonzero for failure). You can simply run a script and it will tell you the arguments it expects. In general, the tests expect that you have a working juju binary and an environments.yaml file with usable environments. Most of the scripts ask for the path to your local juju binary and the name of an environment from your environments.yaml. The script will use these to bootstrap the indicated environment and run its tests.

If the test needs to deploy a test charm, you’ll need to set the JUJU_REPOSITORY environment variable to the repository path found under the acceptancetests folder.
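
For example (substitute the actual repository directory under acceptancetests):

export JUJU_REPOSITORY=/path/to/juju/acceptancetests/<repository-dir>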

Help can be printed for any test script, for example ./assess_log_rotation.py --help. Many tests can be run without passing any arguments; in that case defaults will be assumed (and a folder will be created to contain the output).

Creating a New CI Test

Run make new-assess NAME.

Run make lint early and often.

If your tests require new charms, please write them in Python.

Exit status

Tests must exit with 0 on success, nonzero on failure.
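
A minimal sketch of this convention in Python (the names here are illustrative, not an existing helper):

    import sys

    def main():
        # Run the assessment; return 0 on success, nonzero on failure.
        # Raising an unhandled exception also yields a nonzero exit status.
        return 0

    if __name__ == '__main__':
        sys.exit(main())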

Juju binary

Tests must accept a path to the juju binary under test. A path including the binary name (e.g. mydir/bin/juju) is expected. (Some older tests use a path to the directory, but this is deprecated.)

Environment name

Tests that use an environment must accept an environment name to use, so that they can be run on different substrates by specifying different environments.

Runtime environment name

Tests that use an environment must permit a temporary runtime environment name to be supplied, so that multiple tests using the same substrate can be run at the same time.

Test mode

Tests must run juju with test-mode: True by default, so that they do not artificially inflate statistics. This is handled automatically by jujupy.temp_bootstrap_env.

Series

Tests whose results could vary by series should allow default-series to be specified.
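
Putting these conventions together, a minimal argument-handling sketch for a new test might look like the following. It uses plain argparse for illustration only; the real scripts share helpers under acceptancetests, and the names below are assumptions rather than the project's actual API:

    import argparse
    import sys

    def parse_args(argv=None):
        parser = argparse.ArgumentParser(description='Assess some juju feature.')
        parser.add_argument('env',
                            help='Environment name from environments.yaml.')
        parser.add_argument('juju_bin',
                            help='Path to the juju binary under test, e.g. mydir/bin/juju.')
        parser.add_argument('temp_env_name',
                            help='Temporary runtime environment name, so parallel runs '
                                 'on the same substrate do not collide.')
        parser.add_argument('--series',
                            help='Optional default-series, for tests whose results vary by series.')
        return parser.parse_args(argv)

    def main(argv=None):
        args = parse_args(argv)
        # Bootstrap args.env with args.juju_bin under args.temp_env_name,
        # run the assessment, then tear down. Return nonzero on failure.
        return 0

    if __name__ == '__main__':
        sys.exit(main())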

Integrating your test into CI testing

This will not happen automatically; the Jenkins configuration must be updated. See the juju-qa-jenkins project page for further instructions.
