Juju is not using existing machines for bundle deploy + other issues

OS Ubuntu 20.04 Cloud Image
latest juju from snapd
Hi,
I've tried this 5 times now and I'm getting tired of it. I was following a YT tutorial AND the documentation. I added a manual cloud and 2 hosts, and that worked.

root@localhost:~# juju models
Controller: manual-controller

Model       Cloud/Region  Type    Status     Machines  Cores  Access  Last connection
controller  test/default  manual  available         1      4  admin   just now
default     test/default  manual  available         0      -  admin   10 minutes ago
k8s*        test/default  manual  available         2      8  admin   1 minute ago

Then I do:
juju deploy kubernetes-core
Since this requires 2 hosts, it should work, no?

But instead it creates new nodes and messes everything up:

root@localhost:~# juju machines
    Machine  State    DNS        Inst id           Series  AZ  Message
    0        started  10.3.3.14  manual:10.3.3.14  focal       Manually provisioned machine
    1        started  10.3.3.15  manual:10.3.3.15  focal       Manually provisioned machine
    2        down                pending           focal       manual provider cannot start instances
    2/lxd/0  pending             pending           focal       
    3        down                pending           focal       manual provider cannot start instances

Then I tried this:

Deploying a charm in a Manual cloud

juju deploy kubernetes-core --to test

Gives me: ERROR options provided but not supported when deploying a bundle: --to

I can't even clean this mess up.
If I run this:

root@localhost:~# juju destroy-model k8s
WARNING! This command will destroy the "k8s" model.
This includes all machines, applications, data and other resources.

Continue [y/N]? y
Destroying model
Waiting for model to be removed, 2 error(s), 5 machine(s), 6 application(s)....
Waiting for model to be removed, 2 error(s), 3 machine(s), 4 application(s)...
Waiting for model to be removed, 1 error(s), 2 machine(s)..................

This hangs forever; I waited 2 hours and nothing happened.

So can someone tell me how to deploy this on the 2 nodes I added?
I thought this was supposed to make things easier… it doesn't so far. It's extremely frustrating.

edit:
btw my kern.log is full of this

    [ 2691.812889] audit: type=1400 audit(1604062515.999:4374): apparmor="DENIED" operation="open" profile="snap.juju-db.daemon" name="/proc/3095/net/netstat" pid=3095 comm="ftdc" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
    [ 2691.812892] audit: type=1400 audit(1604062515.999:4375): apparmor="DENIED" operation="open" profile="snap.juju-db.daemon" name="/proc/3095/net/snmp" pid=3095 comm="ftdc" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
    [ 2692.812728] audit: type=1400 audit(1604062516.995:4376): apparmor="DENIED" operation="open" profile="snap.juju-db.daemon" name="/proc/3095/net/netstat" pid=3095 comm="ftdc" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
    [ 2692.812731] audit: type=1400 audit(1604062516.995:4377): apparmor="DENIED" operation="open" profile="snap.juju-db.daemon" name="/proc/3095/net/snmp" pid=3095 comm="ftdc" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
    [ 2693.812803] audit: type=1400 audit(1604062517.999:4378): apparmor="DENIED" operation="open" profile="snap.juju-db.daemon" name="/proc/3095/net/netstat" pid=3095 comm="ftdc" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
    [ 2693.812806] audit: type=1400 audit(1604062517.999:4379): apparmor="DENIED" operation="open" profile="snap.juju-db.daemon" name="/proc/3095/net/snmp" pid=3095 comm="ftdc" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
    [ 2694.812851] audit: type=1400 audit(1604062518.999:4380): apparmor="DENIED" operation="open" profile="snap.juju-db.daemon" name="/proc/3095/net/netstat" pid=3095 comm="ftdc" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
    [ 2694.812854] audit: type=1400 audit(1604062518.999:4381): apparmor="DENIED" operation="open" profile="snap.juju-db.daemon" name="/proc/3095/net/snmp" pid=3095 comm="ftdc" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
    [ 2695.813050] audit: type=1400 audit(1604062519.999:4382): apparmor="DENIED" operation="open" profile="snap.juju-db.daemon" name="/proc/3095/net/netstat" pid=3095 comm="ftdc" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
    [ 2695.813052] audit: type=1400 audit(1604062519.999:4383): apparmor="DENIED" operation="open" profile="snap.juju-db.daemon" name="/proc/3095/net/snmp" pid=3095 comm="ftdc" requested_mask="r" denied_mask="r" fsuid=0 ouid=0

Not sure if related to the issue

Unfortunately you can't use eoan or focal for nesting LXD containers, as you hit this issue with AppArmor. This isn't a Juju issue per se, but an LXD one. The workaround for now is to use bionic.

I do believe Juju should provide better guidance about this.

Why nested? I am providing 2 machines, I am not trying to start a container in a container

The AppArmor issues often occur when running containers in containers. Reading through the OP, however, I think that the problem here is not related.

@howaboutno in order to do what you’re trying to do, you’d need to download the bundle.yaml for kubernetes-core from the charm store, and edit it to add the machines that you’ve created under a “machines” section, then add the “to” directives to the charms in the bundle.

Alternately, you could read through the bundle and run a juju deploy command with a --to directive for each charm in the bundle.
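For illustration, that per-application approach might look roughly like the following (the application names here are illustrative, taken from typical kubernetes-core revisions, and may differ in the current bundle; the relations between the applications would still need to be added afterwards):

    juju deploy etcd --to 0
    juju deploy kubernetes-master --to 0
    juju deploy kubernetes-worker --to 1

This is a sketch of the shape of the commands, not a drop-in replacement for the bundle.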

What you can't generally do is mix CLI placement flags (like --to) with a YAML bundle specification when deploying a set of charms.
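To make the bundle-editing suggestion concrete, the edited bundle.yaml might contain something like this (a sketch only — the application names and options are illustrative, not the exact kubernetes-core contents):

    machines:
      '0': {}
      '1': {}
    applications:
      kubernetes-master:
        num_units: 1
        to: ['0']
      kubernetes-worker:
        num_units: 1
        to: ['1']

You would then deploy the local file with juju deploy ./bundle.yaml.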

Is there a reason that you're not doing your experiments with an lxd/localhost cloud? Those are generally much more straightforward to work with. Manual providers can be tricky – they require more knowledge of the Juju internals than do deployments to a local lxd "cloud" or a public cloud.

@pengale
Hi,
first of all, I am not doing any LXD stuff. I set up a manual cloud and added 2 hosts.
And I want to deploy k8s core on them for testing.
To answer your question: because public clouds are not an option for us and we run our own infra. I don't want to test on localhost; I need to test on real nodes, the same as in production.

And yes, I got to the point of checking the bundle's YAML and editing it, and I made sure the host names matched. It still didn't work; it always creates new machines with errors.

There’s no logic to the commands and the config files. Like… the YAML says:

    machines:
      '0': {}
      '1': {}

and juju says:

    Machine  State    DNS        Inst id           Series  AZ  Message
    0        started  10.3.3.14  manual:10.3.3.14  focal       Manually provisioned machine
    1        started  10.3.3.15  manual:10.3.3.15  focal       Manually provisioned machine

So why is it not using the machine IDs to deploy it on those machines?

Also, by now I'm kind of losing faith in Juju. It might all be clear to you guys since you're familiar with it, but let me tell you, from an outside perspective this is really badly documented and the internal logic is hard to follow.
Every tutorial says "yeah, just do juju deploy and magically it's all set up"; in reality there seem to be tons of config changes and edits necessary to get anything going.

2 days, 3 bugs, 2 threads, and x YT videos and web tutorials later, I still can't simply deploy a charm bundle because the instructions are missing or unclear.

In order to get Juju to use the machines that you’ve already provisioned, you can use the --map-machines=existing flag, documented here: Juju | Charm bundles (Apologies for leaving that out in my original post – that was an error on my part.)
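Assuming you have the bundle saved locally as bundle.yaml (with a machines section listing '0' and '1', as in your edit), usage looks roughly like:

    juju deploy ./bundle.yaml --map-machines=existing

You can also map bundle machine IDs to specific existing model machines explicitly, e.g. --map-machines=existing,0=0,1=1.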

Without that flag, Juju will try to be smart and abstract away the machine naming, so that you don't clobber an existing deployment by deploying an additional bundle on top of it. This is convenient if you are spinning up, say, an LMA stack on AWS. It's definitely less convenient with manually provisioned machines.

We’re always working to balance the “this just works on a public cloud” aspects of Juju with the ability to deploy into specific, bespoke environments. We don’t always get the balance right, and there is a lot we can do to make our documentation more accessible. We do appreciate feedback, and are always happy to answer questions here.

You might not be, but the bundle is.

2/lxd/0  pending             pending           focal       

This is pretty much Juju the really hard way.