Juju Storage + MAAS

(Copied from #juju)

Hello all, MAAS storage question.

I filed Bug #1765959 ("storage provider does not support dynamic storage") a few months ago.

It looks like it was marked as a duplicate of Bug #1691694 ("Support for MAAS storage binding in bundles").

I’m wondering if it’s really a duplicate…

My bug, #1765959, has nothing to do with provisioning storage via a bundle.

Here is the workflow I’m trying: Ubuntu Pastebin

It seems #1765959 (and #1691694) are still valid bugs from what I can tell.

Question: is storage for the MAAS provider just totally borked right now, per the aforementioned bugs?

I feel like these bugs would have had more priority on them if MAAS storage really were borked, which is why I suspect this must be user error.

But I also feel my workflow checks out…

Either way, any insight here (how to use MAAS storage correctly, or confirmation that MAAS storage is borked) would be appreciated.

Thank you

Hi @jamesbeedy,

I know that MAAS storage does work for some use cases, but I’m not entirely on top of all storage things.

I think I’d start step by step. It is my understanding that when you are deploying a charm with storage constraints, those constraints are taken into account when creating a machine through MAAS.

Firstly, I’d suggest looking at the issue of there being no machines that match the tags. It is also possible that the zone placement is causing poor interactions; can you try without it?

I’ve tried everything at this point. Would you mind referencing a workflow that works?

Everything checks out (the node deploys without error) when I leave off --storage; it’s when I try to deploy with --storage that things break in this way.

Let me see if there are others who have more experience around this…

Here’s what I think is happening. The issue here is the use of the --to placement directive. Here’s how things currently work and where I think the bug is.

Storage can either be dynamic or not. EBS volumes on AWS are dynamic; they can be created on demand. MAAS volumes are not dynamic - you get what’s on the machine when it’s provisioned.

When a unit with storage is deployed, the only time non-dynamic storage can be allocated to the unit is when the machine to which the unit is assigned is first provisioned. The rationale is that once a machine is provisioned, because the storage is non-dynamic, new storage cannot be created to attach to the unit [1].

Without the --to, Juju would go and ask for a new machine and one with the required storage would be picked out of the allocation pool and provisioned. Job done.

With the --to, Juju considers the machine to be existing and already provisioned; such a machine cannot have the necessary storage added to it (it is non-dynamic), so the deploy is rejected. This is clear from the error message against the unit: cannot assign unit "elasticsearch/0" to machine 0: "maas" storage provider does not support dynamic storage.

However, the placement directive here is to ensure the machine used comes from a specified zone. In that case, Juju would go ahead and create a new machine in that zone, so there is no issue with dynamic vs non-dynamic storage. But the logic that checks storage compatibility does not appear to account for this: it mistakenly treats any placement as forbidden, not just machine/container placement.

The fix is to tweak how placement directives are checked: we need to reject non-dynamic storage only for machine/container placement, not for placement handled by the provider where a new machine is created.

[1] Perhaps Juju should track which volumes on a machine have been mounted or used to satisfy previous storage requirements for already deployed units, and if there is a "clean" available volume of the right size, use that.
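
To make the distinction concrete, here is a rough sketch of the three deploy shapes discussed above, reusing the charm, pool and zone names from the workflow earlier in the thread; treat it as an illustration rather than a verified reproduction:

# No placement: Juju asks MAAS for a fresh machine whose disks can satisfy the storage.
juju deploy cs:~omnivector/elasticsearch --storage data=raid

# Zone (provider) placement: a new machine should still be provisioned in that zone,
# but the storage check treats it like machine placement and rejects it, which is the bug.
juju deploy cs:~omnivector/elasticsearch --to zone=es03 --storage data=raid

# Machine placement onto an already provisioned machine: correctly rejected, because
# MAAS storage is non-dynamic and cannot be attached after provisioning.
juju deploy cs:~omnivector/elasticsearch --to 0 --storage data=raid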

Removing the --to just leaves me with Juju iterating over my zones trying to find a machine, and it never does…

juju deploy cs:~omnivector/elasticsearch --constraints "tags=es03,data,es03-data01,complete" --storage data=raid gives:

Every 0.1s: juju status --color                                                                                           Jamess-MacBook-Pro.local: Tue Aug 14 13:29:56 2018

Model          Controller  Cloud/Region  Version  SLA          Timestamp
es03-testing2  dcmaas      dcmaas        2.4.1    unsupported  13:29:56-07:00

App            Version  Status   Scale  Charm          Store       Rev  OS      Notes
elasticsearch           waiting    0/1  elasticsearch  jujucharms   28  ubuntu

Unit             Workload  Agent       Machine  Public address  Ports  Message
elasticsearch/0  waiting   allocating  0                               waiting for machine

Machine  State    DNS  Inst id  Series  AZ  Message
0        pending       pending  xenial      failed to start machine 0 in zone "default", retrying in 10s with new availability zone: failed to acquire node: No available machine matches constraints: [('agent_name', ['9bec03b7-2752-4543-8754-accecd7a71e3']), ('storage', ['root:0,0:1(raid)']), ('tags', ['es03', 'data', 'es03-data01', 'complete']), ('zone', ['default'])] (resolved to "storage=root:0,0:1(raid) tags=complete,data,es03,es03-data01 zone=default")

One more thing to note: when I leave --storage out of the command, juju deploy finds and deploys the node successfully.

juju deploy cs:~omnivector/elasticsearch --constraints "tags=es03,data,es03-data01,complete"


Model          Controller  Cloud/Region  Version  SLA          Timestamp
es03-testing3  dcmaas      dcmaas        2.4.1    unsupported  13:43:58-07:00

App            Version  Status   Scale  Charm          Store       Rev  OS      Notes
elasticsearch           waiting    0/1  elasticsearch  jujucharms   28  ubuntu

Unit             Workload  Agent       Machine  Public address  Ports  Message
elasticsearch/0  waiting   allocating  0        10.10.70.2             waiting for machine

Machine  State    DNS         Inst id  Series  AZ    Message
0        pending  10.10.70.2  f4wpyq   xenial  es03  Deploying: ubuntu/amd64/ga-16.04/xenial/daily/boot-initrd

Another thing to note: I cannot destroy these models once they contain the failed deploy :frowning:
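
(As an aside, a hedged workaround for models stuck like this is to force-remove the failed machine before destroying the model; the machine number and model name below are taken from the status output above.)

juju remove-machine 0 --force
juju destroy-model es03-testing2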

Removing the zone placement directive was not intended as a solution, just as an explanation of where the root cause of the failure comes in. We need to fix the underlying issue, which is that Juju is confusing a zone (more accurately, provider-specific) placement directive with a machine placement directive; this confusion causes the deployment failure because MAAS storage is modelled as non-dynamic. Leaving out --storage works because that removes the need for any pre-deployment storage validation.

I think @wallyworld is right about the zone placement directive being a problem here.

In every case where I used Juju storage with MAAS, I had a storage-consuming charm deployed without any placement directives, and other charms or containers colocated with it using unit-name-based placement directives (--to ceph-osd/0). I used bundles which contained only storage binding sections, which should be equivalent to this case.

Using specific AZs is another consideration: there are no AZ constraints, and one of the ideas mentioned in Bug #1743106 ("[RFE] availability zone constraints") was to have the AZs used by a model specified in model-config; Juju would then iterate only over the ones present in that config for a given model.

I don’t know how many AZs there are, or which machines match in each, so I can’t tell why the retries are failing.

By the way, the bundle that used to work for me in the past is as follows:

cat ceph-bluestore-pool.yaml

series: xenial
variables:
  openstack-origin: &openstack-origin  cloud:xenial-queens
  constraints: &tags "tags=virtual"
services:
  ceph-mon:
    charm: cs:xenial/ceph-mon
    num_units: 3
    options:
      expected-osd-count: 3
      source: *openstack-origin
    to:
      - lxd:ceph-osd/0
      - lxd:ceph-osd/1
      - lxd:ceph-osd/2
  ceph-osd:
    charm: cs:xenial/ceph-osd
    num_units: 3
    options:
      bluestore: true
      source: *openstack-origin
      bluestore-block-wal-size: 268435456
      bluestore-block-db-size: 2147483648
    storage:
      osd-devices: 'osd-devices'
      bluestore-wal: 'bluestore-wal'
      bluestore-db: 'bluestore-db'
relations:
  - [ ceph-osd, ceph-mon ]

I have not tested it on the latest version of Juju yet but this one does not contain any zone placement directives or explicit machine placement directives.
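
For completeness, a rough sketch of how a bundle like this might be deployed against MAAS; I believe storage pools matching the names in the storage section above need to exist first (the MAAS tags here are illustrative, not taken from my setup):

# Pools backed by the MAAS provider, one per store referenced in the bundle.
juju create-storage-pool osd-devices maas tags=osd
juju create-storage-pool bluestore-wal maas tags=wal
juju create-storage-pool bluestore-db maas tags=db
juju deploy ./ceph-bluestore-pool.yaml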

This thread is over a year old now, but I’m wondering if it ever went anywhere.

I’m attempting to use the cs:~omnivector/bundle/elk-core-4 bundle to deploy an ELK stack to a set of MAAS hardware. The charm deploys properly with the exception of the Elasticsearch storage (which is always created on the root volume and mounted to the /srv/elasticsearch-data path on the machine). The machine is tagged for Elasticsearch and juju finds the machine properly via the tag. The machine has a software RAID5 array that I would like to use for storage.

I have tried configuring the RAID5 array as:

  • RAW volume with no filesystem
  • volume with one partition and an ext4 file system
  • volume with one partition, an ext4 file system, mounted at /srv/elasticsearch-data
  • volume with one partition, an ext4 file system, mounted at /srv

On MAAS I have created a tag for the RAID5 array and then created a corresponding storage pool in Juju that references the tag. In all volume configurations, when I use the --storage option, Juju is unable to find a matching machine. If I remove the --storage option, then Juju can find the machine but always creates its own volume and mounts it at /srv/elasticsearch-data.

I’m still new to MAAS and Juju, so I’m reaching out to see if I’m missing something that is obvious to others, or if I’ll have to create my own Elasticsearch charm that just uses the storage as provided by MAAS.

Thanks for reading this far!

To help diagnose any issue, can you provide the Juju commands used to create the storage pool and deploy the bundle? It would also be useful to get some trace logging. Turn it on like this:

juju model-config logging-config="<root>=INFO;juju.provider.maas=TRACE"

From the CLI you can run juju debug-log in one window and do the deploy in another.
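
For example, something along these lines should follow the MAAS provider activity during the deploy (the --include-module filter is optional):

juju debug-log --replay --include-module juju.provider.maas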

BTW, you don’t need to create your own filesystem on the MAAS volume. Juju will do that automatically.

@Dmitrii may have some specific input that could help.

I ended up pulling down the elasticsearch charm and removing the storage portion of the metadata.yaml file and also updating the reactive/elasticsearch.py file. These changes took storage control away from juju and let MAAS handle it. This is not a solution for real cloud deployments, but it gets me moving.

Patch to charm:

diff -Naur elasticsearch/metadata.yaml elasticsearch-pharper/metadata.yaml
--- elasticsearch/metadata.yaml	2019-10-17 23:27:18.859556449 +0000
+++ elasticsearch-pharper/metadata.yaml	2019-10-17 23:32:01.937208980 +0000
@@ -38,8 +38,4 @@
     "type": "file"
     "filename": "elastic.deb"
     "description": "Deb as obtained from https://www.elastic.co/downloads"
-"storage":
-  "data":
-    "type": "filesystem"
-    "location": "/srv/elasticsearch-data"
 "subordinate": !!bool "false"
diff -Naur elasticsearch/reactive/elasticsearch.py elasticsearch-pharper/reactive/elasticsearch.py
--- elasticsearch/reactive/elasticsearch.py	2019-10-17 23:27:18.943556078 +0000
+++ elasticsearch-pharper/reactive/elasticsearch.py	2019-10-18 00:14:21.250240307 +0000
@@ -117,8 +117,7 @@
     set_flag('elasticsearch.storage.available')
 
 
-@when('elasticsearch.storage.available',
-      'elastic.base.available')
+@when('elastic.base.available')
 @when_not('elasticsearch.storage.prepared')
 def prepare_es_data_dir():
     """

If you are still interested in debugging the juju storage interaction with MAAS, I can provide the debug information requested.

If you were able to provide the debug info that would be much appreciated, just so we can try and get a handle on what might be happening. We would like to diagnose the root cause of whatever is wrong.

Debug data.

This run was better than any previous run: a machine was found with the --storage option specified, but the unit and application got stuck allocating, and no volume was created or mounted. The debug log could not be attached (the system only allowed image files).

I defined a raw software RAID volume on the machine and gave it the tag ‘raid’ in MAAS. The tag I had previously used (where no matching machine was found) was ‘md0’.

The commands I used to create storage and deploy were:

$ juju storage-pools
Name    Provider  Attrs
loop    loop      
maas    maas      
rootfs  rootfs    
tmpfs   tmpfs     

$ juju create-storage-pool raid maas tags=raid
$ juju storage-pools
Name    Provider  Attrs
loop    loop      
maas    maas      
raid    maas      tags=raid
rootfs  rootfs    
tmpfs   tmpfs     

$ juju deploy --constraints tags=Elasticsearch-debug --storage data=raid ~/charms/elasticsearch
Deploying charm "local:bionic/elasticsearch-3".

$ juju status
Model    Controller       Cloud/Region  Version  SLA          Timestamp
default  maas-controller  maas-region   2.6.9    unsupported  22:42:42Z

App            Version  Status   Scale  Charm          Store  Rev  OS      Notes
elasticsearch           waiting    0/1  elasticsearch  local    3  ubuntu  

Unit             Workload  Agent       Machine  Public address  Ports  Message
elasticsearch/3  waiting   allocating  3                               waiting for machine

Machine  State    DNS  Inst id  Series  AZ  Message
3        pending       pending  bionic      failed to start machine 3 in zone "default", retrying in 10s with new availability zone: failed to acquire node: No available machine matches constraints: [('agent_name', ['f9ce5fcc-1afb-4ada-8b97-3e23422cf4ce']), ('storage', ['root:0,0:1(raid)']), ('tags', ['Elasticsearch-debug']), ('zone', ['default'])] (resolved to "storage=root:0,0:1(raid) tags=Elasticsearch-debug zone=default")

$ juju status
Model    Controller       Cloud/Region  Version  SLA          Timestamp
default  maas-controller  maas-region   2.6.9    unsupported  22:43:11Z

App            Version  Status   Scale  Charm          Store  Rev  OS      Notes
elasticsearch           waiting    0/1  elasticsearch  local    3  ubuntu  

Unit             Workload  Agent       Machine  Public address  Ports  Message
elasticsearch/3  waiting   allocating  3        10.XXX.XXX.218         waiting for machine

Machine  State    DNS             Inst id     Series  AZ       Message
3        pending  10.XXX.XXX.218  machine-3  bionic  rack_14  Deploying

Since I am a new user, the system is telling me that I am at my maximum reply count for this topic. Is there another way to upload the debug log? Last bit of the log (up to where it got stuck):

machine-3: 22:58:56 DEBUG juju.worker.fanconfigurer Fan not enabled
machine-3: 22:58:56 DEBUG juju.worker.dependency "fan-configurer" manifold worker started at 2019-10-18 22:58:56.800789384 +0000 UTC
machine-3: 22:58:56 DEBUG juju.worker.logger reconfiguring logging from "<root>=DEBUG" to "<root>=INFO;juju.provider.maas=TRACE;unit=DEBUG"
machine-3: 22:58:56 ERROR juju.worker.dependency "broker-tracker" manifold worker returned unexpected error: no container types determined
machine-3: 22:58:56 INFO juju.worker.machiner setting addresses for "machine-3" to [local-machine:127.0.0.1 local-cloud:10.XXX.XXX.218 local-cloud:192.168.100.251 local-machine:::1]
machine-3: 22:58:56 INFO juju.worker.upgradeseries no series upgrade lock present
machine-3: 22:58:56 INFO juju.worker.authenticationworker "machine-3" key updater worker started
machine-3: 22:58:56 INFO juju.worker.machiner "machine-3" started
machine-3: 22:58:57 INFO juju.api.common no addresses observed on interface "enp4s0f0"
machine-3: 22:58:57 INFO juju.api.common no addresses observed on interface "enp4s0f1"
machine-3: 22:58:57 INFO juju.api.common no addresses observed on interface "enp133s0f0"
machine-3: 22:58:57 INFO juju.api.common no addresses observed on interface "enp133s0f1"
machine-3: 22:58:57 INFO juju.api.common no addresses observed on interface "enp135s0f0"
machine-3: 22:58:57 INFO juju.api.common no addresses observed on interface "enp135s0f1"
machine-3: 22:58:57 INFO juju.worker.deployer checking unit "elasticsearch/3"
machine-3: 22:58:57 INFO juju.worker.deployer deploying unit "elasticsearch/3"
machine-3: 22:58:57 INFO juju.api.common no addresses observed on interface "enp4s0f0"
machine-3: 22:58:57 INFO juju.api.common no addresses observed on interface "enp4s0f1"
machine-3: 22:58:57 INFO juju.api.common no addresses observed on interface "enp133s0f0"
machine-3: 22:58:57 INFO juju.api.common no addresses observed on interface "enp133s0f1"
machine-3: 22:58:57 INFO juju.api.common no addresses observed on interface "enp135s0f0"
machine-3: 22:58:57 INFO juju.api.common no addresses observed on interface "enp135s0f1"
machine-3: 22:58:58 INFO juju.service Installing and starting service &{Service:{Name:jujud-unit-elasticsearch-3 Conf:{Desc:juju unit agent for elasticsearch/3 Transient:false AfterStopped: Env:map[JUJU_CONTAINER_TYPE:] Limit:map[] Timeout:300 ExecStart:/lib/systemd/system/jujud-unit-elasticsearch-3/exec-start.sh ExecStopPost: Logfile:/var/log/juju/unit-elasticsearch-3.log ExtraScript: ServiceBinary:/var/lib/juju/tools/unit-elasticsearch-3/jujud ServiceArgs:[unit --data-dir /var/lib/juju --unit-name elasticsearch/3 --debug]}} ConfName:jujud-unit-elasticsearch-3.service UnitName:jujud-unit-elasticsearch-3.service DirName:/lib/systemd/system/jujud-unit-elasticsearch-3 FallBackDirName:/var/lib/juju/init Script:[35 33 47 117 115 114 47 98 105 110 47 101 110 118 32 98 97 115 104 10 10 35 32 83 101 116 32 117 112 32 108 111 103 103 105 110 103 46 10 116 111 117 99 104 32 39 47 118 97 114 47 108 111 103 47 106 117 106 117 47 117 110 105 116 45 101 108 97 115 116 105 99 115 101 97 114 99 104 45 51 46 108 111 103 39 10 99 104 111 119 110 32 115 121 115 108 111 103 58 115 121 115 108 111 103 32 39 47 118 97 114 47 108 111 103 47 106 117 106 117 47 117 110 105 116 45 101 108 97 115 116 105 99 115 101 97 114 99 104 45 51 46 108 111 103 39 10 99 104 109 111 100 32 48 54 48 48 32 39 47 118 97 114 47 108 111 103 47 106 117 106 117 47 117 110 105 116 45 101 108 97 115 116 105 99 115 101 97 114 99 104 45 51 46 108 111 103 39 10 101 120 101 99 32 62 62 32 39 47 118 97 114 47 108 111 103 47 106 117 106 117 47 117 110 105 116 45 101 108 97 115 116 105 99 115 101 97 114 99 104 45 51 46 108 111 103 39 10 101 120 101 99 32 50 62 38 49 10 10 35 32 82 117 110 32 116 104 101 32 115 99 114 105 112 116 46 10 39 47 118 97 114 47 108 105 98 47 106 117 106 117 47 116 111 111 108 115 47 117 110 105 116 45 101 108 97 115 116 105 99 115 101 97 114 99 104 45 51 47 106 117 106 117 100 39 32 117 110 105 116 32 45 45 100 97 116 97 45 100 105 114 32 39 47 118 97 114 47 108 105 98 47 106 117 106 117 39 32 45 45 117 110 105 116 45 110 97 109 101 32 101 108 97 115 116 105 99 115 101 97 114 99 104 47 51 32 45 45 100 101 98 117 103] newDBus:0xb57120}
unit-elasticsearch-3: 22:58:59 INFO juju.cmd running jujud [2.6.9 gc go1.11.13]
unit-elasticsearch-3: 22:58:59 DEBUG juju.cmd   args: []string{"/var/lib/juju/tools/unit-elasticsearch-3/jujud", "unit", "--data-dir", "/var/lib/juju", "--unit-name", "elasticsearch/3", "--debug"}
unit-elasticsearch-3: 22:58:59 DEBUG juju.agent read agent config, format "2.0"
unit-elasticsearch-3: 22:58:59 INFO juju.worker.upgradesteps upgrade steps for 2.6.9 have already been run.
unit-elasticsearch-3: 22:58:59 DEBUG juju.worker.dependency "agent" manifold worker started at 2019-10-18 22:58:59.646885396 +0000 UTC
unit-elasticsearch-3: 22:58:59 DEBUG juju.worker.dependency "upgrade-check-gate" manifold worker started at 2019-10-18 22:58:59.647097174 +0000 UTC
unit-elasticsearch-3: 22:58:59 DEBUG juju.worker.dependency "api-config-watcher" manifold worker started at 2019-10-18 22:58:59.647358681 +0000 UTC
unit-elasticsearch-3: 22:58:59 DEBUG juju.worker.apicaller connecting with old password
unit-elasticsearch-3: 22:58:59 DEBUG juju.worker.dependency "upgrade-steps-gate" manifold worker started at 2019-10-18 22:58:59.647799832 +0000 UTC
unit-elasticsearch-3: 22:58:59 DEBUG juju.worker.introspection introspection worker listening on "@jujud-unit-elasticsearch-3"
unit-elasticsearch-3: 22:58:59 DEBUG juju.worker.introspection stats worker now serving
unit-elasticsearch-3: 22:58:59 DEBUG juju.worker.dependency "upgrade-check-flag" manifold worker started at 2019-10-18 22:58:59.656365678 +0000 UTC
unit-elasticsearch-3: 22:58:59 DEBUG juju.api successfully dialed "wss://10.XXX.XXX.226:17070/model/f9ce5fcc-1afb-4ada-8b97-3e23422cf4ce/api"
unit-elasticsearch-3: 22:58:59 INFO juju.api connection established to "wss://10.XXX.XXX.226:17070/model/f9ce5fcc-1afb-4ada-8b97-3e23422cf4ce/api"
unit-elasticsearch-3: 22:58:59 DEBUG juju.worker.dependency "upgrade-steps-flag" manifold worker started at 2019-10-18 22:58:59.659018794 +0000 UTC
unit-elasticsearch-3: 22:59:00 INFO juju.worker.apicaller [f9ce5f] "unit-elasticsearch-3" successfully connected to "10.XXX.XXX.226:17070"
unit-elasticsearch-3: 22:59:00 DEBUG juju.worker.apicaller changing password...
unit-elasticsearch-3: 22:59:00 INFO juju.worker.apicaller [f9ce5f] password changed for "unit-elasticsearch-3"
unit-elasticsearch-3: 22:59:00 DEBUG juju.api RPC connection died
unit-elasticsearch-3: 22:59:00 DEBUG juju.worker.dependency "api-caller" manifold worker stopped: restart immediately
unit-elasticsearch-3: 22:59:00 DEBUG juju.worker.apicaller connecting with current password
unit-elasticsearch-3: 22:59:00 DEBUG juju.api successfully dialed "wss://10.XXX.XXX.226:17070/model/f9ce5fcc-1afb-4ada-8b97-3e23422cf4ce/api"
unit-elasticsearch-3: 22:59:00 INFO juju.api connection established to "wss://10.XXX.XXX.226:17070/model/f9ce5fcc-1afb-4ada-8b97-3e23422cf4ce/api"
unit-elasticsearch-3: 22:59:01 INFO juju.worker.apicaller [f9ce5f] "unit-elasticsearch-3" successfully connected to "10.XXX.XXX.226:17070"
unit-elasticsearch-3: 22:59:01 DEBUG juju.worker.dependency "api-caller" manifold worker started at 2019-10-18 22:59:01.041181613 +0000 UTC
unit-elasticsearch-3: 22:59:01 DEBUG juju.worker.dependency "upgrader" manifold worker started at 2019-10-18 22:59:01.051612576 +0000 UTC
unit-elasticsearch-3: 22:59:01 DEBUG juju.worker.dependency "log-sender" manifold worker started at 2019-10-18 22:59:01.051683442 +0000 UTC
unit-elasticsearch-3: 22:59:01 DEBUG juju.worker.dependency "migration-inactive-flag" manifold worker started at 2019-10-18 22:59:01.053458345 +0000 UTC
unit-elasticsearch-3: 22:59:01 DEBUG juju.worker.dependency "upgrade-steps-runner" manifold worker started at 2019-10-18 22:59:01.053959333 +0000 UTC
unit-elasticsearch-3: 22:59:01 DEBUG juju.worker.dependency "upgrade-steps-runner" manifold worker completed successfully
unit-elasticsearch-3: 22:59:01 INFO juju.worker.upgrader abort check blocked until version event received
unit-elasticsearch-3: 22:59:01 INFO juju.worker.upgrader unblocking abort check
unit-elasticsearch-3: 22:59:01 INFO juju.worker.upgrader desired agent binary version: 2.6.9
unit-elasticsearch-3: 22:59:01 DEBUG juju.worker.dependency "upgrade-check-flag" manifold worker stopped: gate unlocked
unit-elasticsearch-3: 22:59:01 DEBUG juju.worker.dependency "upgrade-check-flag" manifold worker started at 2019-10-18 22:59:01.098426818 +0000 UTC
unit-elasticsearch-3: 22:59:01 DEBUG juju.worker.dependency "migration-fortress" manifold worker started at 2019-10-18 22:59:01.108687667 +0000 UTC
unit-elasticsearch-3: 22:59:01 DEBUG juju.worker.dependency "migration-minion" manifold worker started at 2019-10-18 22:59:01.119214019 +0000 UTC
unit-elasticsearch-3: 22:59:01 INFO juju.worker.migrationminion migration phase is now: NONE
unit-elasticsearch-3: 22:59:01 DEBUG juju.worker.dependency "charm-dir" manifold worker started at 2019-10-18 22:59:01.131532043 +0000 UTC
unit-elasticsearch-3: 22:59:01 DEBUG juju.worker.dependency "api-address-updater" manifold worker started at 2019-10-18 22:59:01.131606144 +0000 UTC
unit-elasticsearch-3: 22:59:01 DEBUG juju.worker.leadership elasticsearch/3 making initial claim for elasticsearch leadership
unit-elasticsearch-3: 22:59:01 DEBUG juju.worker.logger initial log config: "<root>=DEBUG"
unit-elasticsearch-3: 22:59:01 INFO juju.worker.logger logger worker started
unit-elasticsearch-3: 22:59:01 DEBUG juju.worker.dependency "leadership-tracker" manifold worker started at 2019-10-18 22:59:01.131706924 +0000 UTC
unit-elasticsearch-3: 22:59:01 DEBUG juju.worker.dependency "meter-status" manifold worker started at 2019-10-18 22:59:01.131892911 +0000 UTC
unit-elasticsearch-3: 22:59:01 DEBUG juju.worker.dependency "metric-spool" manifold worker started at 2019-10-18 22:59:01.131926901 +0000 UTC
unit-elasticsearch-3: 22:59:01 DEBUG juju.worker.dependency "proxy-config-updater" manifold worker started at 2019-10-18 22:59:01.132134214 +0000 UTC
unit-elasticsearch-3: 22:59:01 DEBUG juju.worker.dependency "logging-config-updater" manifold worker started at 2019-10-18 22:59:01.132169028 +0000 UTC
unit-elasticsearch-3: 22:59:01 DEBUG juju.worker.dependency "hook-retry-strategy" manifold worker started at 2019-10-18 22:59:01.134157118 +0000 UTC
unit-elasticsearch-3: 22:59:01 DEBUG juju.worker.proxyupdater applying in-process legacy proxy settings proxy.Settings{Http:"", Https:"", Ftp:"", NoProxy:"10.XXX.XXX.226,127.0.0.1,192.168.100.243,::1,localhost", AutoNoProxy:""}
unit-elasticsearch-3: 22:59:01 DEBUG juju.worker.proxyupdater saving new legacy proxy settings proxy.Settings{Http:"", Https:"", Ftp:"", NoProxy:"10.XXX.XXX.226,127.0.0.1,192.168.100.243,::1,localhost", AutoNoProxy:""}
unit-elasticsearch-3: 22:59:01 DEBUG juju.worker.proxyupdater new apt proxy settings proxy.Settings{Http:"", Https:"", Ftp:"", NoProxy:"127.0.0.1,::1,localhost", AutoNoProxy:""}
unit-elasticsearch-3: 22:59:01 DEBUG juju.worker.dependency "metric-sender" manifold worker started at 2019-10-18 22:59:01.143663895 +0000 UTC
unit-elasticsearch-3: 22:59:01 DEBUG juju.worker.dependency "uniter" manifold worker started at 2019-10-18 22:59:01.144553492 +0000 UTC
unit-elasticsearch-3: 22:59:01 INFO juju.worker.leadership elasticsearch/3 promoted to leadership of elasticsearch
unit-elasticsearch-3: 22:59:01 DEBUG juju.worker.logger reconfiguring logging from "<root>=DEBUG" to "<root>=INFO;juju.provider.maas=TRACE;unit=DEBUG"
unit-elasticsearch-3: 22:59:01 INFO juju.agent.tools ensure jujuc symlinks in /var/lib/juju/tools/unit-elasticsearch-3
unit-elasticsearch-3: 22:59:01 INFO juju.agent.tools was a symlink, now looking at /var/lib/juju/tools/2.6.9-bionic-amd64
unit-elasticsearch-3: 22:59:01 INFO juju.worker.meterstatus skipped "meter-status-changed" hook (missing)
unit-elasticsearch-3: 22:59:01 INFO juju.worker.uniter unit "elasticsearch/3" started
unit-elasticsearch-3: 22:59:01 INFO juju.worker.uniter resuming charm install
unit-elasticsearch-3: 22:59:01 INFO juju.worker.uniter.charm downloading local:bionic/elasticsearch-3 from API server
unit-elasticsearch-3: 22:59:01 INFO juju.downloader downloading from local:bionic/elasticsearch-3
unit-elasticsearch-3: 22:59:01 INFO juju.downloader download complete ("local:bionic/elasticsearch-3")
unit-elasticsearch-3: 22:59:01 INFO juju.downloader download verified ("local:bionic/elasticsearch-3")
unit-elasticsearch-3: 22:59:02 INFO juju.worker.uniter hooks are retried true

Thanks for looking into this and getting the extra debug info. I think it would be great to open a proper bug at https://bugs.launchpad.net/juju/+filebug. Then you can attach logs etc. and the bug can be tracked.