Interlock Service Clusters

The Single-Cluster architecture uses one Docker Swarm cluster with multiple collections to separate the dev, test, and prod worker nodes; combined with RBAC, this enforces workload isolation of applications across the various runtime environments. Applications deployed to this single cluster can use Interlock's reverse-proxy capabilities, including SSL termination and path-based routing. A single Interlock deployment serves all three collections and routes the application traffic.

In this article I will show you how to configure Interlock in a multi-service-cluster configuration, which gives each of the dev, test, and prod collections its own isolated, dedicated set of Interlock Proxy instances.

Last December I had the opportunity to speak at a breakout session during DockerCon EU 18 in Barcelona, Spain. There I met two other speakers from the Ministry of Justice in the Netherlands. Their team has architected and deployed their Docker Swarm using the Single-Cluster approach quite successfully, with numerous government agencies deploying hundreds of applications into their environment.

Single-Cluster Without Layer 7 Routing

Recently I have helped two of my clients deploy separate instances of the Single-Cluster environment, using collections to separate the dev, test, and prod worker nodes. Their CI/CD pipelines deploy application stacks to the appropriate collection based on the com.docker.ucp.access.label label (a minimal stack-file sketch follows the diagram below). The following diagram illustrates what a sample Swarm cluster might look like without Interlock enabled. As you can see, the application load balancer routes to a backend pool of hosts that includes all of the cluster worker nodes.

[Diagram: Swarm cluster without Interlock]
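As a point of reference, a stack file targets a specific collection by carrying the access label on its services. The sketch below is illustrative only; the service name, image, and the /dev collection path are placeholders that would match your own UCP collection hierarchy.

```yaml
version: "3.2"
services:
  web:
    image: nginx:alpine              # placeholder application image
    deploy:
      replicas: 2
      labels:
        # The collection path tells UCP which set of worker nodes this
        # service is scheduled onto; /dev is illustrative.
        com.docker.ucp.access.label: /dev
```

The CI/CD pipeline then only needs to vary the label value (/dev, /test, or /prod) for each target environment.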

Default Interlock Service Cluster

Once you enable Interlock, Docker UCP configures four new containers: one Interlock, one Interlock Extension, and two Interlock Proxies. Docker's documentation provides an example of Interlock Service Clusters based on East and West cloud regions; here we will focus instead on the deployment environments within a single Swarm cluster, namely the dev, test, and prod collections.

[Diagram: Swarm cluster with the default Interlock service cluster]

Any web application can register itself with Interlock's Layer 7 routing by specifying the appropriate Interlock labels. An inbound browser request first hits the load balancer, which round-robins between the two Interlock Proxies. The Interlock Proxy then uses the inbound host URL to look up the appropriate Docker service and routes the traffic to it. After running in this new configuration for a couple of weeks, we encountered issues with Interlock, which Docker has since fixed, but the unpleasant experience opened our eyes to a flaw in our implementation.
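For reference, registering a web application with Interlock's Layer 7 routing is done with service labels in the stack file. The following is a minimal sketch, not our exact configuration; the image, hostname, port, and network names are placeholders, while the com.docker.lb.* keys are the standard Interlock service labels.

```yaml
version: "3.2"
services:
  demo:
    image: ehazlett/demo                           # placeholder HTTP service listening on port 8080
    networks:
      - app_net
    deploy:
      replicas: 2
      labels:
        com.docker.lb.hosts: demo.dev.example.com  # host name the proxy routes on
        com.docker.lb.port: "8080"                 # port the service listens on inside the container
        com.docker.lb.network: demo_app_net        # overlay network the proxy attaches to (stack-qualified name, assuming a stack named "demo")

networks:
  app_net:
    driver: overlay
```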

The default deployment of the Interlock service cluster turned out to be far too simplistic. We had a single Interlock Service Cluster with one set of Interlock Proxies handling all dev, test, and prod URL requests and directing traffic to containers in the appropriate collection based on hostnames. This all worked fine until a single developer made a syntax error in a docker-stack.yml file. Upon deployment of the stack file, UCP injected the erroneous configuration into the Interlock Proxy service, which caused two of our five Interlock Proxy containers to start flapping. This immediately degraded our implementation to three functioning proxies. Furthermore, in attempting to resolve the issue, we managed to take down two more proxies, leaving a single Interlock Proxy to service the entire cluster.

Interlock Multiple Service Clusters

[Diagram: Swarm cluster with multiple Interlock service clusters]

The most important lesson we learned is that we cannot afford to have a simple developer mistake impact test and production activities. This led us to implement multiple Interlock Service Clusters. In the multiple-service-cluster configuration there is still only one Interlock and one Interlock Extension process. However, the Interlock Proxies are replicated, with a dedicated set of proxies for each collection in the cluster. Each pair of proxies also gets its own load balancer, which round-robins inbound requests across the two proxies.

Now, if a developer makes a syntax error in a docker-stack.yml file and deploys it to the /dev collection, it will only impact the development Interlock Proxy containers. It will have no impact on the test and production Interlock Proxies.
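On the application side, tying a stack to its collection's proxies only requires one additional label, com.docker.lb.service_cluster, alongside the collection access label. The fragment below is a sketch of the relevant deploy section of the earlier stack file, assuming a service cluster named dev has been defined in the Interlock configuration; all values are illustrative.

```yaml
    deploy:
      labels:
        com.docker.ucp.access.label: /dev          # schedule onto the dev collection's worker nodes
        com.docker.lb.service_cluster: dev         # register with the dev Interlock proxies only
        com.docker.lb.hosts: demo.dev.example.com
        com.docker.lb.port: "8080"
        com.docker.lb.network: demo_app_net
```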

This architectural choice gains us several things. The largest benefit is the isolation of problems to a single environment: dev, test, or prod. This makes the architecture much more fault tolerant. The second benefit is greater capacity for each environment: the production Interlock Proxies are dedicated to production use only and therefore do not share their capacity with dev and test traffic.

Refinement Details

There is one thing I have glossed over in the previous diagrams: where the Interlock Proxies actually reside. They are containers that must run on worker nodes. We chose to dedicate a few nodes to these proxies and keep them free of any other container workload. One approach is to use two nodes, each running a single Interlock Proxy, and then rinse-and-repeat for dev, test, and prod. This adds up to six additional worker nodes, which comes with additional license cost.

Another approach is to use only two dedicated nodes and run one dev, one test, and one prod Interlock Proxy on each.

[Diagram: Multiple Interlock service clusters on two dedicated proxy nodes]

This keeps the additional worker nodes down to two while remaining fault tolerant and highly available.
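We identify the dedicated proxy nodes with node labels. The Interlock Proxies themselves are placed through the Interlock configuration rather than a user stack file, but the constraint expressions take the familiar node.labels form. Purely as an illustration, assuming the two dedicated nodes carry a hypothetical nodetype=interlock-proxy label, a placement constraint of this shape is what pins a workload to (or, negated, away from) those nodes:

```yaml
    deploy:
      placement:
        constraints:
          # Hypothetical label applied to the two dedicated proxy nodes
          - node.labels.nodetype == interlock-proxy
```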

Conclusion

The Interlock Multiple Service Clusters approach is far superior to the default single Interlock Service Cluster. It is more fault tolerant, especially against developer mistakes, and a problem in one environment will not take down test and production at the same time.

The final configuration does require some thought to get it just right.  If you would like some help with making your container architecture reach all of your enterprise goals, then contact anyone at Capstone. We would love to come help you.

Mark Miller
Docker Accredited Consultant
Solutions Architect for Capstone