Multiple Space Bindings per Endpoint

In the context of CMR and Bug #1826892 “Add support for multiple API VIPs” (vault-charm), I think it is worth raising a design question: using a space binding per relation ID or, in other words, supporting multiple bindings per endpoint.

An application may listen on 0.0.0.0 on a multi-interface host and, in the case of TCP, connected sockets end up bound to the actual interface IP address that a given request was sent to. In order for a client to discover one of the IPs available on a host, we use the ingress-address propagated to the client side via relation data.
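As a hedged illustration of that point (plain Python sockets, not charm code; the port and addresses are made up), a wildcard-bound listener accepts connections on every interface, and the local address of each accepted socket reflects whichever interface IP the client targeted, which is exactly why the client needs the ingress-address from relation data to know which IP to dial:

# Minimal sketch: a wildcard-bound listener accepts connections on every
# interface; the accepted socket's local address shows which interface IP
# the client actually connected to.
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 8200))   # listen on all interfaces
srv.listen()

conn, peer = srv.accept()
local_ip, local_port = conn.getsockname()   # the interface IP the client used
print(f"client {peer} reached us via {local_ip}:{local_port}")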

For deployments with multiple network spaces it may be desirable to use the same endpoint from metadata.yaml with bindings to different spaces (the use case of the bug above).

Conceptually:

metadata.yaml portion of example-http-server application:

bindings:
  "": oam-space
  # the first binding is the "primary" one for compatibility
  http: oam-space,internal-space,public-space

Then, at relation creation time, one should be able to specify which space binding to use on either side; otherwise the “primary” one would be used by default:

# public-space would be used for this relation
juju add-relation example-http-server:http:public-space example-http-client:http:public-space

# oam-space would be used by default at the example-http-server side
juju add-relation example-http-server:http example-http-client:http

From the implementation perspective:

network-get already supports the -r <relation-id> argument and passes it along to the NetworkInfo API call.
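For illustration, a charm hook could ask for relation-scoped network information roughly like this (a hedged Python sketch; the “access” endpoint name is just an example, and the exact output keys and flags may vary across Juju versions):

# Hedged sketch: call the network-get hook tool with a relation id and read
# the ingress address(es) Juju reports for that relation.
import subprocess
import yaml

def ingress_addresses(endpoint, relation_id):
    out = subprocess.check_output(
        ["network-get", endpoint, "-r", relation_id, "--format", "yaml"]
    )
    info = yaml.safe_load(out)
    # Juju typically reports bind-addresses, egress-subnets and
    # ingress-addresses; only the latter matters here.
    return info.get("ingress-addresses", [])

# e.g. inside a relation hook:
# addrs = ingress_addresses("access", os.environ["JUJU_RELATION_ID"])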

On the apiserver side, NetworksForRelation is called from both NetworkInfo and EnterScope; there the bound space is retrieved either by calling GetSpaceForBinding or, if that fails, via a heuristic based on whether the relation is cross-model or not.

So the idea would be to extend GetSpaceForBinding to support relation-id-aware lookups for bindings, and to extend the bundle syntax to support multiple bindings per endpoint, with a “primary” binding per endpoint for compatibility.
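Purely as illustrative pseudocode (Juju itself is Go, and these names and structures are hypothetical rather than the real apiserver API), the lookup precedence being proposed would look something like:

# Illustrative pseudocode only: fall back from a relation-specific binding
# to the endpoint's "primary" binding. Not Juju's actual code.
def space_for_binding(bindings, endpoint, relation_id=None):
    # bindings maps endpoint -> {"primary": space, "per-relation": {rel_id: space}}
    entry = bindings.get(endpoint, {})
    per_relation = entry.get("per-relation", {})
    if relation_id is not None and relation_id in per_relation:
        return per_relation[relation_id]   # binding chosen at add-relation time
    return entry.get("primary")            # first/"primary" binding by default

bindings = {
    "http": {
        "primary": "oam-space",
        "per-relation": {"http:42": "public-space"},
    }
}
assert space_for_binding(bindings, "http", "http:42") == "public-space"
assert space_for_binding(bindings, "http") == "oam-space"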

Using additional endpoints in metadata.yaml is not very useful, as they are not used for creating the actual relations (hence the reviews for 1826892 resort to convoluted code to work around that).

Thanks in advance for any feedback!

When we were designing network spaces, we certainly considered multiple addresses per endpoint. The main reason we dropped it was that the cognitive overhead was considered too high relative to the value of doing so. (Charmers can’t expect to have a single address to use for a purpose, so all charms would always have to be coded for the case where they get multiple addresses assigned; users would have to deal with more complicated syntax when they want to use multiple addresses, etc.)

I think what was proposed would be possible, and we could reevaluate our decision. But it wasn’t an oversight; it was an intentional choice to keep the model simple. For this one charm that would actually support it, you could just add more interfaces (via extra-bindings, so that you don’t have extra relations hanging off).

I guess your thought is that if the charm ignores its own “what address should I bind to” and just binds to 0.0.0.0, then it is already exposed on all addresses, and supporting multiple bindings per endpoint just becomes a model/operator concern.

@jameinel that makes sense, the complexity is there for sure.

So the patches for this bug added an endpoint via extra-bindings; however, the problem is that the charm code needs to decide which ingress-address to use when exposing a service URL to a particular remote unit, while all of the remote units use the same endpoint.

For example, in the PR for lp:1826892 the ingress-address of a remote unit (a Vault client) is used for a CIDR lookup on the Vault (server) unit to determine whether the client will be able to access the published URL. There are two endpoints: “secrets”, with which the relation is made, and “access”, which is an endpoint in extra-bindings. The problem is that is_address_in_network is used to make the decision: if the client address is within one of the CIDRs available on the Vault server then the URL is published, otherwise it is not. This only works when both the client and the server have addresses on the same subnet; if the client is several hops away this logic breaks and the URL won’t be published (which will be the case either in L3-oriented deployments with multi-VLAN spaces or with CMR, where units can be on separate network segments).

https://github.com/openstack-charmers/charm-interface-vault-kv/blob/master/provides.py#L62-L70
https://github.com/openstack-charmers/charm-interface-vault-kv/pull/5/files#diff-6e152090b45a29ed86305121942fb300R66
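To make the failure mode concrete, here is a hedged sketch of that kind of check using the standard ipaddress module (the actual interface code linked above uses charmhelpers’ is_address_in_network and different names; the CIDRs and addresses below are invented examples):

# Hedged sketch of the CIDR-based decision described above, using the
# stdlib ipaddress module instead of charmhelpers. Values are invented.
import ipaddress

def reachable_by_cidr(server_cidrs, client_ingress_address):
    # True only if the client's ingress address falls inside one of the
    # subnets locally configured on the server unit.
    client = ipaddress.ip_address(client_ingress_address)
    return any(client in ipaddress.ip_network(cidr) for cidr in server_cidrs)

server_cidrs = ["10.10.0.0/24"]          # subnets present on the Vault unit

print(reachable_by_cidr(server_cidrs, "10.10.0.15"))    # True: same subnet
# A client several L3 hops away (or behind CMR) may still reach the server
# via routing, but fails this check, so the URL would never be published:
print(reachable_by_cidr(server_cidrs, "192.168.5.20"))  # False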

So this kind of logic demonstrates that using an additional endpoint in extra-bindings is not going to solve the problem properly: the “provides” side needs heuristics for which address to use in every case, to make sure the “requires” side receives an address it can actually reach via L3.

Keeping multiple endpoints with different names but the same interface and implementation is possible but does not seem right.

So it feels like the complexity will be there anyway; if it is handled natively in Juju for all charms to use, maybe that is the better approach.