Interlock Service Clusters – with Code!
A colleague of mine was helping his client configure Interlock and wanted to know more about how to configure Interlock Service Clusters, so I referred him to my previous blog, Interlock Service Clusters. While that article helps you understand the capabilities of Interlock conceptually, it does not show any working code examples.
Let’s review what Docker Enterprise UCP Interlock provides, and then I will show you how to configure Interlock to support multiple ingresses, each tied to its own environment.
Interlock Review
The Interlock ingress provides three services.
- Interlock (I) – the overall manager of all things ingress and a listener for Swarm events. It spawns both the extension and proxy services.
- Interlock Extension (IE) – when Interlock notices Swarm service changes, it notifies the Interlock Extension to create a new Nginx configuration file. That file is returned to Interlock.
- Interlock Proxy (IP) – the core ingress listener that routes traffic to the appropriate application services based on the HTTP host header. It receives its Nginx configuration from Interlock whenever there are service changes the Interlock Proxy needs to handle.
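Once Layer 7 routing has been enabled (see Step 2 below), a quick way to see these services in one place is to filter the service list by name. This is just a sketch assuming the default ucp-interlock service names:
docker service ls --filter name=ucp-interlock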
The Interlock service containers are represented in the diagram below as I for Interlock, IE for Interlock Extension, and IP for Interlock Proxy.

The shaded sections represent Docker Collections for the dev, test, and prod environments, all managed within a single cluster. Integrating Interlock service clusters into this approach has the benefit of isolating problems to a single collection. This is much more fault tolerant and ensures downstream test and prod ingress traffic is unaffected. The second benefit is greater ingress capacity for each environment: the production Interlock Proxies are dedicated to production use only and therefore do not share their capacity with dev and test ingress traffic.
We will establish 3 Interlock service clusters and have Interlock deploy one ucp-interlock-proxy replica to each node that has the label com.docker.interlock.service.cluster.
The overall process we work thru entails the following steps.
- enable Interlock
- pull down Interlock’s configuration toml
- configure three service clusters
- upload a new configuration with a new name
- restart the Interlock service
The code I show below is applied to my personal cluster in AWS. In my cluster I have 1 manager, 1 DTR, and 3 worker nodes. Each worker node is assigned to one of 3 collections named /dev, /test, and /prod. I will set up a single dedicated interlock proxy in each of these environments to segregate ingress traffic for dev, test, and prod.
$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
ziskz8lewtzu7tqtmx ip-127-13-5-3.us-west-2.compute.internal Ready Active 18.09.7
5ngrzymphsp4vlwww7 ip-127-13-6-2.us-west-2.compute.internal Ready Active 18.09.7
qqrs3gsq6irn9meho2 * ip-127-13-7-8.us-west-2.compute.internal Ready Active Leader 18.09.7
5bzaa5xckvzi4w84pm ip-127-13-1-6.us-west-2.compute.internal Ready Active 18.09.7
kv8mocefffu794d982 ip-127-13-1-5.us-west-2.compute.internal Ready Active 18.09.7
With Code
Step 1 – Verify Worker Nodes in Collections
Let’s examine a worker node to determine its collection.
$ docker node inspect ip-127-13-5-3.us-west-2.compute.internal | grep com.docker.ucp.access.label
"com.docker.ucp.access.label": "/dev",
You can repeat this command for each node to confirm it resides in the appropriate collection; a quick way to check them all at once is sketched below.
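This is just the same grep as above wrapped in a loop over every node in the cluster:
for n in $(docker node ls --format '{{.Hostname}}'); do
  echo -n "$n: "
  docker node inspect "$n" | grep com.docker.ucp.access.label
done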
Step 2 – Enable Interlock
If you have not already done so, navigate in UCP to admin – admin settings – Layer 7 Routing. Then select the check box to Enable Layer 7 Routing.

Step 3 – Label Ingress Nodes
Add an additional label to each node whose purpose is dedicated to running interlock proxies.
$ docker node update --label-add com.docker.interlock.service.cluster=dev ip-127-13-5-3.us-west-2.compute.internal
ip-127-13-5-3.us-west-2.compute.internal
Repeat this command for each node you are dedicating for ingress traffic and assign appropriate environment values for test and prod.
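For example, the commands for the test and prod nodes look like the following. The hostnames here are placeholders; substitute the worker nodes you have assigned to the /test and /prod collections.
docker node update --label-add com.docker.interlock.service.cluster=test <test-node-hostname>
docker node update --label-add com.docker.interlock.service.cluster=prod <prod-node-hostname>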
Step 4 – Download Interlock Configuration
Now that Interlock is running, let us extract its running configuration and modify that to suit our purposes.
CURRENT_CONFIG_NAME=$(docker service inspect --format '{{ (index .Spec.TaskTemplate.ContainerSpec.Configs 0).ConfigName }}' ucp-interlock)
docker config inspect --format '{{ printf "%s" .Spec.Data }}' $CURRENT_CONFIG_NAME > config.orig.toml
Interlock’s default configuration runs two interlock proxies anywhere in the cluster. The proxy configuration resides in a section named Extensions.default, which is the heart of an interlock service cluster. We will duplicate this section two times, for a total of three sections, and then rename them to suit our needs.
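Before editing, it can help to locate the section headers you are about to duplicate. A quick grep over the extracted file shows the Extensions.default section and its sub-sections:
grep -n '\[Extensions' config.orig.toml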
Step 5 – Edit Interlock Configuration
Copy the config.orig.toml file to config.new.toml. Then, using your favorite editor (vi of course), duplicate the Extensions.default section two more times. Rename the three Extensions.default sections to Extensions.dev, Extensions.test, and Extensions.prod. Each Extensions.<env> section has sub-sections that include the same name plus a qualifier (e.g. Extensions.default.Config). These too will need to be renamed.
Now we have 3 named extensions, one each for dev, test, and prod. Next, search for PublishedSSLPort and change it to 8445 for dev and 8444 for test, leaving the value 8443 for prod. These 3 ports should be the values that the incoming load balancer uses in its back-end pools. For each environment-specific VIP (dev, test, prod) the traffic will flow into the load balancer on port 443. The VIP used to access the load balancer dictates how the traffic is routed to the appropriate interlock proxy IP address and port.
Add a new property called ServiceCluster under each of the extensions sections and give it the value dev, test, or prod.
You can also specify the constraint labels that dictate where both the Interlock Extension and Interlock Proxy services will run. Start by changing the Constraints and ProxyConstraints to use your new node labels.
ProxyReplicas indicates how many container replicas to run for the interlock proxy service. We will set ours to 1 for this demonstration. ProxyServiceName is the name of the proxy service as it is deployed into Swarm. We will name ours ucp-interlock-proxy-dev, which is specific to the environment it is supporting.
Of course you will do this for all three sections within the new configuration file. Below is a snippet of only the changes that I have made for the dev ingress configuration. You will want to repeat this for test and prod as well.
[Extensions.dev]
ServiceCluster = "dev"
ProxyServiceName = "ucp-interlock-proxy-dev"
ProxyReplicas = 1
PublishedPort = 8082
PublishedSSLPort = 8445
Constraints = ["node.labels.com.docker.interlock.service.cluster=dev"]
ProxyConstraints = ["node.labels.com.docker.interlock.service.cluster=dev"]
This snippet of the configuration file only shows the values that have changed. There are numerous others you should leave as is.
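A quick sanity check of config.new.toml before uploading can save a rollback later. This simply greps for the renamed sections and the per-environment values we just edited:
grep -nE '^\[Extensions\.(dev|test|prod)' config.new.toml
grep -nE 'ServiceCluster|ProxyServiceName|PublishedSSLPort' config.new.toml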
Step 6 – Upload new Interlock Configuration
NEW_CONFIG_NAME="com.docker.ucp.interlock.conf-$(( $(cut -d '-' -f 2 <<< "$CURRENT_CONFIG_NAME") + 1 ))"
docker config create $NEW_CONFIG_NAME config.new.toml
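You can confirm the new config object was created alongside the current one:
docker config ls | grep com.docker.ucp.interlock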
Step 7 – Restart Interlock Service with New Configuration
docker service update --update-failure-action rollback \
--config-rm $CURRENT_CONFIG_NAME \
--config-add source=$NEW_CONFIG_NAME,target=/config.toml \
ucp-interlock
ucp-interlock
overall progress: 1 out of 1 tasks
1/1: running [==================================================>]
verify: Service converged
Note: in the above scenario the service update worked smoothly. Other times, such as when there are errors in your configuration, the service will roll back. In those cases you will want to do a docker ps -a | grep interlock and look for the recently exited docker/ucp-interlock container. Once you have its container id you can run docker logs <container-id> to see what went wrong.
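Put together, the troubleshooting steps look roughly like this; the container id is a placeholder you fill in from the first command’s output:
docker ps -a | grep interlock
# note the id of the recently exited docker/ucp-interlock container, then:
docker logs <container-id>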
Step 8 – Verify Everything is Working
We need to make sure that everything started up properly and is listening on the appropriate ports.
docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
y3jg0mka0w7b ucp-agent global 4/5 docker/ucp-agent:3.1.9
xdf9q5y4dev4 ucp-agent-win global 0/0 docker/ucp-agent-win:3.1.9
k0vb1yloiaqu ucp-auth-api global 0/1 docker/ucp-auth:3.1.9
ki8qeixu12d4 ucp-auth-worker global 0/1 docker/ucp-auth:3.1.9
nyr40a0zitbt ucp-interlock replicated 0/1 docker/ucp-interlock:3.1.9
ewwzlj198zc2 ucp-interlock-extension replicated 1/1 docker/ucp-interlock-extension:3.1.9
yg07hhjap775 ucp-interlock-extension-dev replicated 1/1 docker/ucp-interlock-extension:3.1.9
ifqzrt3kw95p ucp-interlock-extension-prod replicated 1/1 docker/ucp-interlock-extension:3.1.9
l6zg39sva9bb ucp-interlock-extension-test replicated 1/1 docker/ucp-interlock-extension:3.1.9
xkhrafdy3czt ucp-interlock-proxy-dev replicated 1/1 docker/ucp-interlock-proxy:3.1.9 *:8082->80/tcp, *:8445->443/tcp
wpelftw9q9co ucp-interlock-proxy-prod replicated 1/1 docker/ucp-interlock-proxy:3.1.9 *:8080->80/tcp, *:8443->443/tcp
g23ahtsxiktx ucp-interlock-proxy-test replicated 1/1 docker/ucp-interlock-proxy:3.1.9 *:8081->80/tcp, *:8444->443/tcp
You can see there are 3 new ucp-interlock-extension-<env> services and 3 new ucp-interlock-proxy-<env> services. You can also verify that they are listening on SSL ports 8443 thru 8445. A single replica is fine for a demonstration, but you will more than likely want to set the replicas somewhere in the 2 to 5 range per environment, and of course you will determine that based on your traffic load.
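You can also confirm that each proxy landed on the node you labeled for that environment. This is a small sketch looping over the three service names, assuming the ucp-interlock-proxy-<env> naming used above:
for env in dev test prod; do
  echo "== $env =="
  docker service ps ucp-interlock-proxy-$env --format '{{.Name}}  {{.Node}}  {{.CurrentState}}'
done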
NOTE: Oftentimes after updating Interlock’s configuration you will still see the old ucp-interlock-extension and/or ucp-interlock-proxy services running. You can run the following command to remove them, as they are no longer necessary.
docker service rm ucp-interlock-extension ucp-interlock-proxy
Step 9 – Deploy an Application
Now let’s deploy a demo service that we can route thru our new ingress. We’re going to take the standard docker demo application and deploy it to our dev cluster. Start by creating the following docker-stack.yml file:
version: "3.7"
services:
  demo:
    image: ehazlett/docker-demo
    deploy:
      replicas: 2
      labels:
        com.docker.lb.hosts: ingress-demo.lab.capstonec.net
        com.docker.lb.network: ingress_dev
        com.docker.lb.port: 8080
        com.docker.lb.service_cluster: dev
        com.docker.ucp.access.label: /dev
    networks:
      - ingress_dev
networks:
  ingress_dev:
    external: true
Note that the com.docker.lb.network attribute is set to ingress_dev. I previously created this network outside of the stack. We will now utilize this network for all our ingress traffic from Interlock to our docker-demo container.
$ docker network create ingress_dev --label com.docker.ucp.access.label=/dev --driver overlay
mbigo6hh978bfmpr5o9stiwdc
Also notice that the com.docker.lb.hosts attribute is set to ingress-demo.lab.capstonec.net. I logged into our DNS server and created a CNAME record with that name pointing to my AWS load balancer for the dev environment. I also had to configure my AWS load balancer to allow traffic to a Target Group of virtual machines. We can talk about your cloud configuration in another article down the road.
Let’s deploy that stack:
docker stack deploy -c docker-stack.yml demodev
Once the stack is deployed, we can verify that the services are running on the correct machine:
docker stack ps demodev
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
i3bght0p5d0j demodev_demo.1 ehazlett/docker-demo:latest ip-127-13-5-3.ec2.internal Running Running 10 hours ago
cyqfu0ormnn8 demodev_demo.2 ehazlett/docker-demo:latest ip-127-13-5-3.ec2.internal Running Running 10 hours ago
Finally, we should be able to open a browser to http://ingress-demo.lab.capstonec.net (which routes thru the dev interlock service cluster) and see the application running.

You can also run a curl command to verify:
curl ingress-demo.lab.capstonec.net
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title></title>
...
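If your DNS or load balancer is not in place yet, you can still test the dev service cluster directly by sending the expected host header to a dev ingress node on the dev HTTP port (8082 in the configuration above). The node address here is a placeholder:
curl -H "Host: ingress-demo.lab.capstonec.net" http://<dev-ingress-node>:8082/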
Summary
Well, that was a decent amount of work, but now you’re done. You’ve successfully implemented your first Interlock service clusters, which are highly available and segmented into three environments for dev, test, and prod!
As always if you have any questions or need any help please contact us.
Mark Miller
Solutions Architect
Docker Accredited Consultant