Deploy a charm or bundle as an application or applications into your model.
You can deploy charms and bundles from your local file system, or remote charms
hosted on the public charm store.
Multiple charms/bundles should be deployed via separate commands.
When deploying, consult the documentation of the charms/bundles for deployment
instructions. Many charms require configuration settings and relations to be
in place for a deployment to succeed.
The deployment process is highly configurable. The most important options
to configure are hardware constraints (--constraints), placement directives
(--to), storage constraints (--storage) and application configuration (--config).
To deploy a charm, provide the public charm's ID, or a path to the local
charm's root directory.
# Deploy a charm from the charm store
juju deploy [--channel=<channel>] <charm>
# Deploy a charm from the local filesystem
juju deploy <path-to-charm-directory>
<charm> will typically be a charm name, such as postgresql, but can include
other details. Visit the charm's public web page for the exact details about
how to specify it on the command line.
<channel> allows you to instruct Juju to deploy from non-default channels, such
as candidate, beta and devel.
Deploying charms: Setting application configuration
Each charm provides parameters ("config") that you are able to change within
the config.yaml file in its root directory. Use the '--config' option to change
their default values. This option accepts either a path to a YAML-formatted
file, or a key=value pair.
... --config <path-to-config.yaml>
... --config <param>=<value> [--config <param>=<value> [...]]
It is also possible to combine these styles. Parameters provided on the command
line override those written in the config file.
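For example, to combine both styles (the file name here is illustrative;
settings given on the command line take precedence):
... --config ./config.yaml --config min-cluster-size=5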
<path-to-config.yaml> is a filesystem path; relative paths are resolved against
the current working directory. It should point to a YAML file that includes the
application name at the top level.
For example, the rabbitmq-server charm provides the ability for you to specify
that it should be deployed as a high-availability cluster by specifying
'min-cluster-size'. We can encode this in a config file (here requesting a
minimum cluster of three nodes) as:

    rabbitmq-server:
      min-cluster-size: 3
Each <param> matches a parameter defined within the charm's config.yaml.
Each <value> must match the type of the relevant <param>.
For further details, refer to the charm's documentation and 'juju help config'.
Deploying charms: Specify hardware requirements
To specify minimum hardware requirements, use the '--constraints' option.
This option accepts a space-delimited list of key=value pairs. To prevent the
shell from parsing the constraints as multiple arguments to Juju, they are
typically surrounded in quotes:
... --constraints "<constraint>=<value>[ <constraint>=<value>[ ...]]"
<constraint> is one of several constraint types. Constraints common to all
providers are: arch, cores, mem, root-disk. Other constraints are available
on specific clouds. See the "Further reading" section below for instructions
to access reference documentation.
<value> is typically a number, but this depends on the constraint that is
being applied. Values that describe bytes, e.g. mem and root-disk, accept a
M, G, T, or P suffix. The unit defaults to M, i.e. megabytes.
For example, to ensure that Juju provisions an instance with at least 8GB of
RAM, 4 CPU cores, and a 40GB root disk:
... --constraints "mem=8G cores=4 root-disk=40G"
Deploying charms: Control which operating system is installed on the machine
hosting the application
Use the '--series' option to specify the operating system series for the
machine(s) hosting the unit(s) to be deployed. It defaults to the model's
default series.
... --series <series>
<series> is a valid series ID supported by the charm. See the charm's
metadata.yaml file for which series it supports.
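For example, to deploy onto Ubuntu 20.04 LTS (assuming the charm supports
that series):
juju deploy postgresql --series focal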
Deploying charms: Control where unit(s) are deployed to
The '--to' option provides you with the ability to control the machines that
Juju selects for the deployment. The '--to' option accepts "placement
directives". Placement directives have several variants.
# Deploy a unit to a pre-existing machine
... --to <machine-id>
# Deploy to a new container on a new machine.
... --to <container-type>
# Deploy to a new container on a pre-existing machine.
... --to <container-type>:<machine-id>
# Deploy to a pre-existing container
... --to <machine-id>/<container-type>/<container-id>
# Deploy to a new machine within the availability zone(s).
... --to zone=<zone>[,<zone>[, ...]]
# Deploy to a new machine within the given space(s).
... --to spaces=<space>[,<space>[, ...]]
# Deploy pods to nodes that match the label (k8s models only).
... --to <kubernetes-label>
<machine-id> is the ID of a pre-existing machine. See 'juju machines' for a list
of machines currently deployed.
<container-type> should be lxd or kvm, depending on which hypervisor is desired.
See 'juju help add-machine' for details on Juju's container support.
<container-id> is the ID for a pre-existing container. Containers are tied to a
machine, and are specified along with the relevant <machine-id>.
<space> should refer to a space that has already been added to the model. See
the 'juju add-space' command for more details. Adding a caret (^) to the start
of a space name instructs Juju to avoid that space. For example,
'--to spaces=^dmz' prevents Juju from provisioning a machine for this charm
within the dmz space.
As machines can be part of multiple spaces, you can combine these two styles.
Using '--to spaces=^dmz,data' provisions a machine within the data space that
must not also belong to the dmz space.
Deploying charms: Specify storage
Many charms can make use of storage volumes provided by clouds that persist
even when machines are decommissioned. Use '--attach-storage' to assign
pre-existing storage that is currently detached. Use the '--storage' option to
request that Juju provision new storage.
# Add new storage, allocated from a storage pool
... --storage <storage-label>=[<storage-pool>],[<count>],[<size>][<suffix>]
# Attach a storage volume that is no longer attached to a unit
... --attach-storage <storage-id>
<storage-id> is a storage ID. See 'juju storage' for a list of storage
instances.
<storage-label> is specified within the charm's metadata.yaml file. Charms may
have more than one <storage-label>. See also 'juju add-storage'.
<storage-pool> is a storage pool that has been defined within the model, as
provided by the cloud. Use 'juju storage-pools' to list the available pools.
<count> is a number greater than zero, defaulting to 1.
<size> represents the number of bytes that should be reserved from the pool.
<suffix> is a multiplier used with <size> to specify how much space to reserve.
Legal values include M, G, T, and P.
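For example, to request 100 gigabytes of new storage, relying on defaults for
the pool and count (the 'pgdata' storage label here is illustrative; check the
charm's metadata.yaml for the labels it defines):
juju deploy postgresql --storage pgdata=100G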
Deploying charms: Adding resources
Use the '--resource' option to upload resources needed by the charm. This
option may be repeated if multiple resources are needed:
... --resource <resource-name>=<resource-path> [--resource ...]
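For example (the resource name and path are illustrative; see the charm's
metadata.yaml for the resources it defines):
juju deploy mycharm --resource website-archive=./site.tar.gz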
Deploying bundles
Deploying a bundle is similar to deploying a charm, except that the deploy
command supports a different set of options.
# Deploy a bundle from the charm store
juju deploy <bundle> [--overlay <path-to-overlay-bundle.yaml> [...]]
# Deploy a bundle from the local filesystem
juju deploy <path-to-bundle.yaml>
Deploying bundles: Application configuration
Bundles are typically extended via one or more "overlay bundles". An overlay
bundle extends or overrides values provided in the primary bundle.
... --overlay <path-to-overlay.yaml> [--overlay ...]
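For example, applying a local overlay while deploying a bundle (the overlay
file name is illustrative):
juju deploy <bundle> --overlay ./custom-overlay.yaml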
Deploying bundles: Mapping machine IDs between the bundle and the model
When deploying bundles, machines specified in the bundle are added to the model
as new machines, unless '--map-machines' changes this behaviour.
... --map-machines=<mapping>[,<mapping>[, ...]]
<mapping> is either the word "existing", which maps every machine in the bundle
to the model machine with the same ID, or a pair of machine IDs of the form
<bundle-machine-id>=<model-machine-id>.
For example, given a bundle specifying machines 1, 2, and 3, and a model that
already has machines 1, 2, 3, and 4 in place, the following maps the bundle's
machine 3 to the model's machine 4, while machines 1 and 2 from the bundle are
mapped to machines 1 and 2 from the model:
... --map-machines=existing,3=4
Deploying to Kubernetes
K8s models require charms written specifically for Kubernetes. Such models have
no concept of a machine, so the machine-oriented placement, series, and
container options described above do not apply.
Deploying to Kubernetes: Require device(s) attached to nodes
Use the '--device' option to require that the charm is deployed to nodes that
have specific accelerators, such as GPUs and TPUs.
... --device <label>=[<count>,](<device-class>|<vendor>/<type>)[,<attrs>]
<label> is defined within the charm's metadata.yaml file.
<count> is a number greater than 0, defaulting to 1.
<device-class> is a label that will match a group of nodes, e.g.
nvidia-tesla-p100 or nvidia-tesla-k80.
<vendor> is a hardware vendor's URL, e.g. amd.com or nvidia.com.
<type> is the type of accelerator, e.g. gpu.
Use the '--bind' option to assign application endpoint bindings to spaces.
Use the '--force' option to not enforce pre-conditions, such as that --series
matches a series supported by the charm and LXD profile validation.
Use the '--trust' option to enable your charm to access the cloud
programmatically. Consult the charm's documentation to determine whether this
option is required.
The '--budget' and '--plan' options work in conjunction with budgets, wallets
and SLAs.
# Deploy a charm, apache2, to a new machine:
juju deploy apache2
# Deploy a charm, mysql, to machine 23:
juju deploy mysql --to 23
# Deploy a charm, mysql, to a new LXD container on a new machine:
juju deploy mysql --to lxd
# Deploy to a new LXD container on machine 25:
juju deploy mysql --to lxd:25
# Deploy to LXD container 3 on machine 24:
juju deploy mysql --to 24/lxd/3
# Deploy 2 units, one on machine 3 and one to a new LXD container on machine 5:
juju deploy mysql -n 2 --to 3,lxd:5
# Deploy 3 units, one on machine 3 and the remaining two on new machines:
juju deploy mysql -n 3 --to 3
# Deploy to a machine with at least 8 GiB of memory:
juju deploy postgresql --constraints mem=8G
# Deploy to a specific availability zone (provider-dependent):
juju deploy mysql --to zone=us-east-1a
# Deploy to a specific MAAS node:
juju deploy mysql --to host.maas
# Deploy two units to machines within the 'dmz' space:
juju deploy haproxy -n 2 --constraints spaces=dmz
# Deploy a unit of postgresql to a machine outside of the 'dmz' space:
juju deploy postgresql --constraints spaces=^dmz
# Deploy a k8s charm that requires a single NVIDIA GPU:
juju deploy mycharm --device miner=1,nvidia.com/gpu
# Deploy a k8s charm that requires two NVIDIA GPUs that have an
# attribute of 'gpu=nvidia-tesla-p100':
juju deploy mycharm \
    --device miner=2,nvidia.com/gpu,gpu=nvidia-tesla-p100