Square brackets in env vars

So in a Kubernetes YAML definition this is legal:

druid_extensions_loadList: '["postgresql-metadata-storage"]'

but when I do the same in a Juju spec file, Juju says:

parsing unit spec for dc4: error unmarshaling JSON: json: cannot unmarshal array into Go struct field EnvVar.value of type string

I’ve tried escaping the square brackets, but then the escape characters just end up in the env var inside the Docker container.

Does anyone know how I’m supposed to get those into the container? I’m trying to keep the container the same as the upstream one so we don’t have any funky patching going on.

In a plain K8s Deployment YAML, the same value happily renders into the container’s environment.

Sorry @wallyworld et al, here’s the full spec for context:

containers:
  - name: druid-coordinator
    command: ["/druid.sh"]
    args: ["coordinator"]
    workingDir: "/opt/druid"
    imageDetails:
      imagePath: {docker_image_path}
      username: {docker_image_username}
      password: {docker_image_password}
    ports:
      - containerPort: {coordinator_port}
        protocol: TCP
    config:
      DRUID_XMX: 1g
      DRUID_XMS: 1g
      DRUID_MAXNEWSIZE: 250m
      DRUID_NEWSIZE: 250m
      DRUID_MAXDIRECTMEMORYSIZE: 6172m
      druid_emitter_logging_logLevel: debug
      druid_extensions_loadList: '["postgresql-metadata-storage"]'
      druid_zk_service_host: {zk_host}
      druid_metadata_storage_host:
      druid_metadata_storage_type: {db_type}
      druid_metadata_storage_connector_connectURI: {jdbc_url}
      druid_metadata_storage_connector_user: {db_user}
      druid_metadata_storage_connector_password: {db_password}
      druid_coordinator_balancer_strategy: cachingCost
      druid_indexer_runner_javaOptsArray: ''
      druid_indexer_fork_property_druid_processing_buffer_sizeBytes: 268435456
      druid_storage_type: azure
      druid_azure_account: YOURACCOUNT
      druid_azure_key: YOURKEY
      druid_azure_container: druid
      druid_azure_protocol: https
      druid_azure_maxTries: 3
      DRUID_LOG4J: '<?xml version="1.0" encoding="UTF-8" ?><Configuration status="WARN"><Appenders><Console name="Console" target="SYSTEM_OUT"><PatternLayout pattern=" %p [%t] %c - %m%n"/></Console></Appenders><Loggers><Root level="info"><AppenderRef ref="Console"/></Root><Logger name="org.apache.druid.jetty.RequestLog" additivity="false" level="DEBUG"><AppenderRef ref="Console"/></Logger></Loggers></Configuration>'

As @thumper alluded to, it appears the square brackets get parsed as an array in Go, so I tried escaping them, but then I end up with escaped square brackets in my env var.
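To illustrate what seems to be happening (EnvVar here is a stripped-down stand-in for the Kubernetes type, not Juju’s actual parsing code, but it reproduces the same error):

package main

import (
	"encoding/json"
	"fmt"
)

// EnvVar is a simplified stand-in for the Kubernetes core/v1 EnvVar
// type that the spec's config entries end up in.
type EnvVar struct {
	Name  string `json:"name"`
	Value string `json:"value"`
}

func main() {
	// If the bracketed value reaches the JSON decoder as a real array,
	// it cannot be unmarshalled into the string-typed Value field:
	bad := []byte(`{"name": "druid_extensions_loadList", "value": ["postgresql-metadata-storage"]}`)
	var ev EnvVar
	fmt.Println(json.Unmarshal(bad, &ev))
	// json: cannot unmarshal array into Go struct field EnvVar.value of type string

	// The same array kept as a single string value unmarshals fine:
	good := []byte(`{"name": "druid_extensions_loadList", "value": "[\"postgresql-metadata-storage\"]"}`)
	fmt.Println(json.Unmarshal(good, &ev), ev.Value)
}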

And dc4 is a test Kubernetes charm I’m building out… nothing special, just trying to get the pod spec aligned.

We’ll have a fix for this in the upcoming 2.6.9 release (hopefully late next week).
It will land in the 2.6 edge snap before then, within a day.

Thanks @wallyworld. Since you mention the edge snap, do you mean it’s an issue in the charm build stage?

No, he’ll be referring to the Juju snap, e.g.

snap install --channel=2.6/edge --classic juju

Ah, duh… what I was getting at, I guess, was: does it require a model upgrade, or just a snap upgrade?

Just a Juju upgrade, and possibly a charm upgrade to force through a new pod spec YAML file. Not 100% sure whether a charm upgrade is needed.
The fix is in the 2.6 edge snap already if you wanted to try it out.

Sweet! I shall give it a go in 30 minutes, when I’ve finished some other stuff, and let you know.

What does this fix look like for the JSON?

My spec looks like:

containers:
  - name: druid-coordinator
    command: ["/druid.sh"]
    args: ["coordinator"]
    workingDir: "/opt/druid"
    imageDetails:
      imagePath: registry.jujucharms.com/spiculecharms/druid-coordinator-k8s/druid_image@sha256:74f2f4d14a105264fec3c69634e2acb262cc1adc117f4c66d94cde68342285bc
      username: docker-registry
      password: <>
    ports:
      - containerPort: 8081
        protocol: TCP
    config:
      DRUID_XMX: 1g
      DRUID_XMS: 1g
      DRUID_MAXNEWSIZE: 250m
      DRUID_NEWSIZE: 250m
      DRUID_MAXDIRECTMEMORYSIZE: 6172m
      druid_emitter_logging_logLevel: debug
      druid_extensions_loadList: '["postgresql-metadata-storage"]'
      druid_zk_service_host: 10.152.183.253:2181
      druid_metadata_storage_host:
      druid_metadata_storage_type: mysql
      druid_metadata_storage_connector_connectURI: jdbc://10.152.183.170/database
      druid_metadata_storage_connector_user: mysql
      druid_metadata_storage_connector_password: password
      druid_coordinator_balancer_strategy: cachingCost
      druid_indexer_runner_javaOptsArray: ''
      druid_indexer_fork_property_druid_processing_buffer_sizeBytes: 268435456
      druid_storage_type: azure
      druid_azure_account: YOURACCOUNT
      druid_azure_key: YOURKEY
      druid_azure_container: druid
      druid_azure_protocol: https
      druid_azure_maxTries: 3
      DRUID_LOG4J: '<?xml version="1.0" encoding="UTF-8" ?><Configuration status="WARN"><Appenders><Console name="Console" target="SYSTEM_OUT"><PatternLayout pattern=" %p [%t] %c - %m%n"/></Console></Appenders><Loggers><Root level="info"><AppenderRef ref="Console"/></Root><Logger name="org.apache.druid.jetty.RequestLog" additivity="false" level="DEBUG"><AppenderRef ref="Console"/></Logger></Loggers></Configuration>'

And with:

juju     2.7-beta1+develop-1062091  8936  edge      canonical✓  classic

and

Model       Controller  Cloud/Region              Version  SLA          Timestamp
druid-test  k8s         k8s-test-cloud/RegionOne  2.6.8    unsupported  12:13:27Z

I continue to get the same error on a brand new model.

Oh, I see Tim’s pointer to 2.6/edge, not just edge… I shall try that.

Urgh, my test controller is now 2.7-beta1, and I don’t believe you can spin up earlier-version models on it, can you? And I can’t bootstrap a 2.6 controller.

Hosed for now. I’ll test this when I can get 2.6 re-bootstrapped.

Alright folks, with the fix in place, what am I doing wrong?

Oh, I saw the PR on IRC; it seems to use backticks. I’ll try that.

God knows… I give up:

https://gitlab.com/spiculedata/juju/druid-coordinator-k8s/blob/master/reactive/spec_template.yaml#L20

What is that line specifically supposed to look like?

Le sigh. You’ve discovered an issue with our Jenkins build and upload process for the operator images. The Juju operator image for 2.6.9 on Docker Hub is stale and so doesn’t contain the fix.

For now, I’ve manually updated the image. You’ll need to remove any existing docker.io/jujusolutions/jujud-operator:2.6.9 image from your k8s cluster.

If you re-bootstrap, it should work now. I’ve tested bootstrapping using the 2.6 edge snap (rather than from source) and have verified it’s all good now. FWIW, I tested with a config like this:

    config:
      FOO: '["hello", "<@world>"]'

Alrighty here goes…

Okay, well it didn’t work, but I don’t know how to force-remove images from K8s because there’s no prune-type command; it leaves that to the GC…

docker.io/jujusolutions/jujud-operator@sha256:f52962d6e480557046f87de247bd37a2cb8907cbddd74c9ff2e131145f343cc9

That’s the one it bootstrapped with.

Same error as before.

The image you want is this one:

jujusolutions/jujud-operator@sha256:5c0b326424e47c3621a4e20af649263968fc85b404691b6868f3d128cf12ba28

It seems your k8s cluster has the old 2.6.9 edge image cached.
If you ssh into each worker node, you can remove the old images from the containerd repo using something like:
ctr -n k8s.io image rm jujusolutions/jujud-operator:2.6.9

I am not sure that "k8s.io" is the correct namespace, so check first with image ls.

This issue of image staleness only comes up like this when testing the edge snap: released jujud operators get a version tag which is fixed from that point. We’re already working on tagging the images with a SHA in addition to the Juju version to fix this in future.

Interesting. Thanks, I’ll try that shortly.