
Interlock Service Clusters – with Code!

So a colleague of mine was helping his client configure Interlock and wanted to know more about how to set up Interlock Service Clusters. I referred him to my previous blog, Interlock Service Clusters. While that article helps you understand Interlock's capabilities conceptually, it does not show any working code examples.

Let's review what Docker Enterprise UCP Interlock provides, and then I will show you how to configure Interlock to support multiple ingresses, each tied to its own environment.

Interlock Review

The Interlock ingress provides three services.

  • Interlock (I) – an overall manager of all things ingress and a listener to Swarm events. It spawns both the extension and proxy services.
  • Interlock Extension (IE) – when Interlock notices Swarm service changes, it notifies the Interlock Extension to generate a new Nginx configuration file, which is returned to Interlock.
  • Interlock Proxy (IP) – the core ingress listener that routes traffic to the appropriate application services based on the HTTP host header. It receives its Nginx configuration from Interlock whenever there are service changes the Interlock Proxy needs to handle.
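
On a cluster where Layer 7 routing is already enabled with its default settings, these show up as ordinary Swarm services. A quick way to confirm they are present (assuming the default service names, which are the same ones you will see later in this article):

# List the Interlock-related services; a default install shows
# ucp-interlock, ucp-interlock-extension, and ucp-interlock-proxy
docker service ls --filter name=ucp-interlock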

The Interlock services containers are represented in the diagram below as I for Interlock, IE for Interlock Extension, and IP for Interlock Proxy.

[Diagram: Interlock multi service clusters]

The shaded sections represent Docker Collections for the dev, test, and prod environments, all managed within a single cluster. Integrating Interlock service clusters into this approach provides the benefit of isolating problems to a single collection: if the dev ingress breaks, downstream test and prod ingress traffic is unaffected, which makes the setup much more fault tolerant. The second benefit is greater ingress capacity for each environment. The production Interlock Proxies are dedicated to production use only and therefore do not share their capacity with dev and test ingress traffic.

We will establish three Interlock Service Clusters and have Interlock deploy one ucp-interlock-proxy replica to each node carrying the com.docker.interlock.service.cluster label.

The overall process we work thru entails the following steps.

  • enable Interlock
  • pull down Interlock's configuration TOML
  • configure three service clusters
  • upload the new configuration under a new name
  • restart the Interlock service

The code that I will show you below is going to be applied to my personal cluster in AWS. In my cluster I have 1 manager, 1 DTR, and 3 worker nodes. Each worker node is assigned to one of 3 collections named /dev, /test, and /prod. I will set up a single dedicated Interlock proxy in each of these environments to segregate ingress traffic for dev, test, and prod.

$ docker node ls
ID                            HOSTNAME                            STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
ziskz8lewtzu7tqtmx     ip-127-13-5-3.us-west-2.compute.internal   Ready     Active                          18.09.7
5ngrzymphsp4vlwww7     ip-127-13-6-2.us-west-2.compute.internal   Ready     Active                          18.09.7
qqrs3gsq6irn9meho2 *   ip-127-13-7-8.us-west-2.compute.internal   Ready     Active         Leader           18.09.7
5bzaa5xckvzi4w84pm     ip-127-13-1-6.us-west-2.compute.internal   Ready     Active                          18.09.7
kv8mocefffu794d982     ip-127-13-1-5.us-west-2.compute.internal   Ready     Active                          18.09.7

With Code

Step 1 – Verify Worker Nodes in Collections

Let's examine a worker node to determine its collection.

$ docker node inspect ip-127-13-5-3.us-west-2.compute.internal | grep com.docker.ucp.access.label
                "com.docker.ucp.access.label": "/dev",

You can repeat this command to inspect each node and determine if they reside in the appropriate collection.
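
If you have more than a handful of nodes, a small loop saves some typing. This is just a convenience sketch (it assumes your UCP client bundle is loaded so the node names resolve):

# Print each node's hostname followed by its collection label
for node in $(docker node ls --format '{{ .Hostname }}'); do
  echo "== $node"
  docker node inspect "$node" | grep com.docker.ucp.access.label
done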

Step 2 – Enable Interlock

If you have not already done so, then navigate in UCP to admin > Admin Settings > Layer 7 Routing. Then select the check box to Enable Layer 7 Routing.


Step 3 – Label Ingress Nodes

Add an additional label to each node whose purpose is dedicated to running interlock proxies.

$ docker node update --label-add com.docker.interlock.service.cluster=dev ip-127-13-5-3.us-west-2.compute.internal
ip-127-13-5-3.us-west-2.compute.internal

Repeat this command for each node you are dedicating for ingress traffic and assign appropriate environment values for test and prod.
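
For example, a rough sketch of labeling the remaining ingress nodes (the test and prod hostnames below are placeholders; substitute your own node names):

# The dev node was labeled above; label the test and prod ingress nodes the same way
docker node update --label-add com.docker.interlock.service.cluster=test <test-node-hostname>
docker node update --label-add com.docker.interlock.service.cluster=prod <prod-node-hostname>

# Confirm a label landed where you expect
docker node inspect <test-node-hostname> --format '{{ .Spec.Labels }}'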

Step 4 – Download Interlock Configuration

Now that Interlock is running, let us extract its running configuration and modify that to suit our purposes.

CURRENT_CONFIG_NAME=$(docker service inspect --format '{{ (index .Spec.TaskTemplate.ContainerSpec.Configs 0).ConfigName }}' ucp-interlock)
docker config inspect --format '{{ printf "%s" .Spec.Data }}' $CURRENT_CONFIG_NAME > config.orig.toml

The default configuration for Interlock runs two interlock proxy replicas anywhere in the cluster. The proxy configuration resides in a section named Extensions.default, which is the heart of an Interlock service cluster. We will duplicate this section two times, for a total of three sections, and then rename them to suit our needs.


Step 5 – Edit Interlock Configuration

Copy the config.orig.toml file to config.new.toml. Then, using your favorite editor (vi of course), duplicate the Extensions.default section two more times. Rename the three Extensions.default sections to Extensions.dev, Extensions.test, and Extensions.prod. Each Extensions.<env> section has sub-sections that share the same name plus a qualifier (e.g. Extensions.default.Config); these need to be renamed as well.

Now we have three named extensions, one each for dev, test, and prod. Next, search for the PublishedSSLPort and change it to 8445 for dev and 8444 for test, leaving the value 8443 for prod. These three ports should be the values that the incoming load balancer uses in its back-end pools. For each environment-specific VIP (dev, test, prod), traffic flows into the load balancer on port 443. The VIP used to reach the load balancer dictates how the traffic is routed to the appropriate Interlock proxy IP address and port.

Add a new property called ServiceCluster under each of the extension sections and give it the value dev, test, or prod, respectively.

You can also specify the constraint labels that will dictate where both the Interlock Extension and Interlock Proxies will run. Start by changing the Constraints and ProxyConstraints to use your new node labels.

The ProxyReplicas setting indicates how many container replicas to run for the interlock proxy service; we will set ours to 1 for this demonstration and scale it up later. The ProxyServiceName is the name under which the proxy service is deployed into Swarm; we will name ours ucp-interlock-proxy-dev, which is specific to the environment it supports.

Of course you will do this for all three sections within the new configuration file. Below is a snippet of only the changes that I have made for the dev ingress configuration. You will want to repeat this for test and prod as well.

  [Extensions.dev]
    ServiceCluster = "dev"
    ProxyServiceName = "ucp-interlock-proxy-dev"
    ProxyReplicas = 1
    PublishedPort = 8082
    PublishedSSLPort = 8445
    Constraints = ["node.labels.com.docker.interlock.service.cluster=dev"]
    ProxyConstraints = ["node.labels.com.docker.interlock.service.cluster=dev"]

This snippet of the configuration file represents only the values that have changed. There are numerous others you should just leave as is.
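
Before uploading, it is worth a quick sanity check that the renames took. A minimal sketch, assuming you followed the naming above:

# List the extension section headers; Extensions.dev, Extensions.test, and
# Extensions.prod should each appear, along with their renamed sub-sections
grep -n '\[Extensions\.' config.new.toml

# Each of the three sections should carry its own ServiceCluster and PublishedSSLPort
grep -nE 'ServiceCluster|PublishedSSLPort' config.new.toml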

Step 6 – Upload new Interlock Configuration

NEW_CONFIG_NAME="com.docker.ucp.interlock.conf-$(( $(cut -d '-' -f 2 <<< "$CURRENT_CONFIG_NAME") + 1 ))"
docker config create $NEW_CONFIG_NAME config.new.toml
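
Before restarting the service, you can confirm the new config object was created (the exact name depends on the revision number computed above):

# The new revision should appear alongside the current one
docker config ls --filter name=com.docker.ucp.interlock.conf
echo $NEW_CONFIG_NAME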

Step 7 – Restart Interlock Service with New Configuration

docker service update --update-failure-action rollback \
    --config-rm $CURRENT_CONFIG_NAME \
    --config-add source=$NEW_CONFIG_NAME,target=/config.toml \
    ucp-interlock
ucp-interlock
overall progress: 1 out of 1 tasks 
1/1: running   [==================================================>] 
verify: Service converged 

Note: in the above scenario the service update worked smoothly. Other times, such as when there are errors in your configuration, the service will roll back. In those cases you will want to run docker ps -a | grep interlock and look for the recently exited docker/ucp-interlock container. Once you have its container ID you can run docker logs <container-id> to see what went wrong.
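
In command form, the troubleshooting steps from the note look roughly like this (the container ID is, of course, whatever you find on your own system):

# Find the recently exited docker/ucp-interlock container
docker ps -a | grep interlock

# Inspect its logs for configuration errors (substitute the real container ID)
docker logs <container-id>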

Step 8 – Verify Everything is Working

We need to make sure that everything started up properly and is listening on the appropriate ports.

docker service ls
ID                  NAME                           MODE                REPLICAS            IMAGE                                  PORTS
y3jg0mka0w7b        ucp-agent                      global              4/5                 docker/ucp-agent:3.1.9                 
xdf9q5y4dev4        ucp-agent-win                  global              0/0                 docker/ucp-agent-win:3.1.9             
k0vb1yloiaqu        ucp-auth-api                   global              0/1                 docker/ucp-auth:3.1.9                  
ki8qeixu12d4        ucp-auth-worker                global              0/1                 docker/ucp-auth:3.1.9                  
nyr40a0zitbt        ucp-interlock                  replicated          0/1                 docker/ucp-interlock:3.1.9             
ewwzlj198zc2        ucp-interlock-extension        replicated          1/1                 docker/ucp-interlock-extension:3.1.9   
yg07hhjap775        ucp-interlock-extension-dev    replicated          1/1                 docker/ucp-interlock-extension:3.1.9   
ifqzrt3kw95p        ucp-interlock-extension-prod   replicated          1/1                 docker/ucp-interlock-extension:3.1.9   
l6zg39sva9bb        ucp-interlock-extension-test   replicated          1/1                 docker/ucp-interlock-extension:3.1.9   
xkhrafdy3czt        ucp-interlock-proxy-dev        replicated          1/1                 docker/ucp-interlock-proxy:3.1.9       *:8082->80/tcp, *:8445->443/tcp
wpelftw9q9co        ucp-interlock-proxy-prod       replicated          1/1                 docker/ucp-interlock-proxy:3.1.9       *:8080->80/tcp, *:8443->443/tcp
g23ahtsxiktx        ucp-interlock-proxy-test       replicated          1/1                 docker/ucp-interlock-proxy:3.1.9       *:8081->80/tcp, *:8444->443/tcp

You can see there are 3 new ucp-interlock-extension-<env> services and 3 new ucp-interlock-proxy-<env> services. You can also verify that they are listening on SSL ports 8443 thru 8445. One replica per proxy is fine for a demonstration, but you will more than likely want to set the replicas somewhere in the 2 to 5 range per environment, and of course you will determine that based on your traffic load.
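
If you prefer to pull the published ports programmatically rather than reading the docker service ls output, something like this works as a quick sketch:

# Show the published ports for each environment's proxy service
for env in dev test prod; do
  echo -n "ucp-interlock-proxy-$env: "
  docker service inspect "ucp-interlock-proxy-$env" \
    --format '{{ range .Endpoint.Ports }}{{ .PublishedPort }}->{{ .TargetPort }} {{ end }}'
done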

NOTE: Oftentimes after updating Interlock's configuration you will still see the old ucp-interlock-extension and/or ucp-interlock-proxy services running. You can run the following command to remove them, as they are no longer necessary.

docker service rm ucp-interlock-extension ucp-interlock-proxy

Step 9 – Deploy an Application

Now let’s deploy a demo service that we can route thru our new ingress. We’re going to take the standard docker demo application and deploy it to our dev cluster. Start by creating the following docker-compose.yml file:

version: "3.7"
  
services:
  demo:
    image: ehazlett/docker-demo
    deploy:
      replicas: 2
      labels:
        com.docker.lb.hosts: ingress-demo.lab.capstonec.net
        com.docker.lb.network: ingress_dev
        com.docker.lb.port: 8080
        com.docker.lb.service_cluster: dev
        com.docker.ucp.access.label: /dev
    networks:
      - ingress_dev

networks:
  ingress_dev:
    external: true

Note that the com.docker.lb.network attribute is set to ingress_dev. I previously created this network outside of the stack. We will now utilize this network for all our ingress traffic from Interlock to our docker-demo container.

$ docker network create ingress_dev --label com.docker.ucp.access.label=/dev --driver overlay
mbigo6hh978bfmpr5o9stiwdc

Also notice that the com.docker.lb.hosts attribute is set to ingress-demo.lab.capstonec.net. I logged into our DNS server and created a CNAME record with that name pointing to my AWS load balancer for the dev environment.

I also must configure my AWS load balancer to allow traffic to a Target Group of virtual machines. We can talk about your cloud configuration in another article down the road.

Let’s deploy that stack:

docker stack deploy -c docker-compose.yml demodev

Once the stack is deployed, we can verify that the services are running on the correct machine:

docker stack ps demodev
ID                  NAME                     IMAGE                         NODE                            DESIRED STATE       CURRENT STATE           ERROR               PORTS
i3bght0p5d0j        demodev_demo.1       ehazlett/docker-demo:latest   ip-127-13-5-3.ec2.internal   Running             Running 10 hours ago                        
cyqfu0ormnn8        demodev_demo.2       ehazlett/docker-demo:latest   ip-127-13-5-3.ec2.internal   Running             Running 10 hours ago                        

Finally, we should be able to open a browser to http://ingress-demo.lab.capstonec.net (which routes thru the dev Interlock service cluster) and see the application running.

You can also run a curl command to verify:

curl ingress-demo.lab.capstonec.net
<!DOCTYPE html>
<html lang="en">
    <head>
        <meta charset="utf-8">
        <title></title>
...

Summary

Well, that was a decent amount of work, but now you're done. You've successfully implemented your first set of Interlock service clusters, segmented into dev, test, and prod environments and ready to be scaled up for high availability!

As always if you have any questions or need any help please contact us.

Mark Miller
Solutions Architect
Docker Accredited Consultant

Kubernetes Workload Isolation

There are many images of ships with pinwheel-colored containers in a myriad of stacked configurations. In the featured image above you can clearly see three ships at dock, loaded with containers. These ships have unique destination port cities across the globe, each one carrying a distinct set of products for a discrete set of customers. These containers carry a payload.

Our virtual Docker containers carry a workload. So, the ships vary in which containers they carry, where they are transporting them, and to whom they belong. We will talk about how to get our virtual containers loaded onto a particular ship and entertain one solution to VM and container isolation.


Over the years Capstone has worked in many vertical industries. Several of Capstone's customers have extremely regulated environments, such as the banking, insurance, and financial investment industries. These industry verticals typically need to comply with numerous governing standards and often have unique ways of interpreting and applying those regulations to their IT infrastructure. All of these regulations are aimed at restricting, or at least minimizing, covert intrusion.

Traditional Application Isolation

One traditional approach to thwarting intrusion is to create virtual local area networks (VLAN's) which separate and isolate sets of virtual machines (VM's) using firewall rules. These sets of VM's are placed into VLAN's based on business-oriented Application Groups (AG's). The diagram below shows three AG's for the Test environment and three more for the Staging environment. This is typically handled by the enterprise network, security, and firewall teams.

[Figure: VLAN's isolate Virtual Machines via Firewall]

This approach helps ensure that if any particular VM is compromised by a bad actor, they cannot easily break into other important machines outside of their current VLAN. By using firewall rules and strictly enforcing that only necessary ports are opened between the VLAN's, you can achieve a high level of confidence in thwarting pervasive intrusion.

Docker Enterprise Collections

With the Docker Enterprise platform you can easily distribute work among worker VM's using Docker Collections. Collections are a native enterprise feature that groups worker nodes and supports role-based access control (RBAC) for restricting which users can access, and run workloads within, each particular collection. This allows separation of workloads but does not necessarily guarantee network isolation.

[Figure: Collections are groups of Worker Nodes]

This approach is similar to VLAN's in that VM's are separated into distinct groups. While the containerized applications are isolated and protected via RBAC, the VM's themselves are not isolated from each other. A rogue actor could still potentially hop from one VM to another across collections.

But we could combine the VLAN separation with the Collection separation and gain the benefits of both approaches.

[Figure: VLAN's combined with Docker Collections]

There are other ways to slice and dice your platform. Your approach should follow your requirements. Some customers want isolation to happen at the environment level (e.g. dev versus test) to ensure that a breach in one environment does not affect another. In this example you might have 2 VLAN’s with 3 collections each. The collections still allow for individual AG ownership and placement.

[Figure: Two VLAN's with Three Collections each]

Kubernetes Namespaces

[Figure: Kubernetes namespaces cross all VM's by default]

Docker's inclusion of Kubernetes in the Enterprise platform has a strong focus on integration. Kubernetes uses namespaces to organize deployments and pods, while Swarm leverages Collections. Docker integrated its enterprise-class RBAC model into Kubernetes to secure namespace-scoped deployments. But namespaces do not directly allow targeting of any particular node or set of nodes in the cluster. Rather, namespaces are groupings of containers that can potentially span all the VM's in a cluster.
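
As a quick illustration, creating a namespace is a one-liner, but nothing about it says which nodes its pods will land on (this assumes a UCP client bundle is loaded so kubectl points at the cluster; the namespace name is just an example):

# Create a namespace for an application group; by itself this places no
# constraint on which worker nodes the namespace's pods may be scheduled to
kubectl create namespace ag1-test
kubectl get namespace ag1-test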

Namespace Linking

However, if you want the benefits of Collections within the Kubernetes realm, Docker has the answer: linking a Kubernetes namespace to a collection. The following screenshot shows how it is done via the UCP web interface. You simply navigate to your namespace and then choose "Link Nodes in Collection". This effectively pins your namespace to the collection you choose and therefore pins the workload to the set of VM's within that collection.

[Figure: UCP Linking of Namespace with Collection]

The Trifecta

Now we can combine VLAN’s, Collections, and Namespaces all together into the cluster’s configuration to obtain firewall enforced isolation of VM’s, grouping of VM’s based on Collection, and Kubernetes namespaces linked with collections.

[Figure: VLAN's, Collections, and Namespaces combined]

There are several benefits to this approach and they include the following:

  • VM isolation within the VLAN's, enforced by the firewall
  • Container deployments to Collections enforce workload placement
  • Docker Enterprise supports the industry-acclaimed Kubernetes scheduler
  • Kubernetes namespace linking to Collections places pod deployments on Collection-based VM's
  • Kubernetes RBAC enforces access to pods/applications

Hard Work

All of this sounds great until you go to implement it. We have great VM isolation thru VLAN's, but UCP managers must be able to communicate with and manage each of the worker nodes in the cluster across all the VLAN's. This means firewall rules must be implemented to allow traffic over the numerous Docker ports that must be open between the UCP management VLAN and each of your AG VLAN's. In addition, IP-in-IP traffic (IP protocol 4) must be enabled on your firewall between the management VLAN and each AG VLAN. All of this must be factored into your rollout.
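
As a rough illustration only (the real rules belong on your enterprise firewall, and the exact port list depends on your Docker EE version, so consult the official port requirements), the kind of traffic you need to allow between the management VLAN and each AG VLAN looks something like this:

# Illustrative iptables-style sketch; 10.0.0.0/24 is a placeholder for the management VLAN
MGMT_VLAN=10.0.0.0/24
iptables -A FORWARD -s $MGMT_VLAN -p tcp --dport 2377 -j ACCEPT   # Swarm cluster management
iptables -A FORWARD -s $MGMT_VLAN -p tcp --dport 7946 -j ACCEPT   # node-to-node gossip (TCP)
iptables -A FORWARD -s $MGMT_VLAN -p udp --dport 7946 -j ACCEPT   # node-to-node gossip (UDP)
iptables -A FORWARD -s $MGMT_VLAN -p udp --dport 4789 -j ACCEPT   # overlay (VXLAN) network traffic
iptables -A FORWARD -s $MGMT_VLAN -p 4 -j ACCEPT                  # IP-in-IP (IP protocol 4) for Calico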

Summary

Using the Kubernetes orchestrator within the Docker Enterprise platform has great advantages including enterprise security and workload separation. In addition, you can apply traditional VLAN isolation of your VM’s in conjunction with Docker to enable VM isolation.

At Capstone we have a wide variety of experiences. But some of those experiences tend to have common architectural goals. Hopefully you have gained insight into how VLAN’s can be incorporated with your Docker Enterprise platform.

If you want more information or assistance you can contact me on LinkedIn or thru our Capstone site.

Mark Miller
Solutions Architect
Docker Accredited Consultant

How to securely deploy Docker EE on the AWS Cloud

Overview

This reference deployment guide provides step-by-step instructions for deploying Docker Enterprise Edition on the Amazon Web Services (AWS) Cloud. The automation uses the Docker Certified Infrastructure (DCI) template, which is based on Terraform, to launch, configure, and run the AWS compute, network, storage, and other services required to deploy a specific workload on AWS. The DCI template uses Ansible playbooks to configure the Docker Enterprise cluster environment.
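
While the full walkthrough is beyond this overview, the Terraform side of DCI follows the standard workflow. A minimal sketch, assuming you have downloaded the DCI template and filled in its variables (the directory name here is illustrative):

# From the unpacked DCI template directory (name is illustrative)
cd docker-certified-infrastructure-aws

# Review and apply the infrastructure definition
terraform init
terraform plan
terraform apply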

Viewing Container Logs thru Docker UCP

The Docker Universal Control Plane provides a wealth of information about the Docker cluster, for both Swarm and Kubernetes. There is a ton of detailed information about stacks, services, containers, networks, volumes, pods, namespaces, service accounts, controllers, load balancers, configurations, storage, etc. (I think you get the point).
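
If you prefer the command line over the UCP web interface, much of the same log information is reachable with a UCP client bundle loaded (the service name below is just a placeholder):

# Tail and follow the logs of a service across all of its tasks
docker service logs --tail 50 --follow <service-name>

# Or inspect a single container directly
docker logs <container-id>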

UCP Dashboard