July 2019

Kubernetes Workload Isolation

There are many images of ships stacked with pinwheel-colored containers in a myriad of configurations. In the featured image above you can clearly see three ships at dock loaded with containers. These ships have unique destination ports across the globe, each one carrying a distinct set of products for a discrete set of customers. Each container carries a payload.

Our virtual Docker containers carry a workload. So, the ships vary in which containers they carry, where they are transporting them, and to whom they belong. We will talk about how to get our virtual containers loaded onto a particular ship and entertain one solution to VM and container isolation.


Over the years Capstone has worked in many vertical industries. Several of Capstone's customers operate in heavily regulated environments such as banking, insurance, and financial investment. These industry verticals typically need to comply with numerous governing standards and often have unique ways of interpreting and applying those regulations to their IT infrastructure. All of these regulations are aimed at restricting, or at least minimizing, covert intrusion.

Traditional Application Isolation

One traditional approach to thwarting intrusion is to create virtual local area networks (VLANs) that separate and isolate sets of virtual machines (VMs) using firewall rules. These sets of VMs are placed into VLANs based on business-oriented Application Groups (AGs). The diagram below shows three AGs for the Test environment and three more for the Staging environment. This is typically handled by the enterprise network, security, and firewall teams.

VLANs isolate virtual machines via firewall

This approach helps ensure that if any particular VM is compromised by a bad actor, the attacker cannot easily break into other important machines outside of the current VLAN. By using firewall rules and strictly opening only the necessary ports between the VLANs, you can achieve a high level of confidence in thwarting pervasive intrusion.
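As a minimal illustration, assuming two hypothetical AG subnets of 10.10.1.0/24 and 10.10.2.0/24, the enforcement between two VLANs might boil down to rules like these (in practice this usually lives on a dedicated hardware firewall rather than host iptables):

$ # Allow only HTTPS from the AG1 VLAN to the AG2 VLAN; drop everything else
$ iptables -A FORWARD -s 10.10.1.0/24 -d 10.10.2.0/24 -p tcp --dport 443 -j ACCEPT
$ iptables -A FORWARD -s 10.10.1.0/24 -d 10.10.2.0/24 -j DROP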

Docker Enterprise Collections

With the Docker Enterprise platform you can easily distribute work among worker VMs using Docker Collections. Collections are a native enterprise feature that groups worker nodes. Each Collection is named and supports role-based access control (RBAC) for restricting which users can access and deploy within it. This allows separation of workloads but does not by itself guarantee network isolation.

Collections are groups of worker nodes
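For Swarm workloads, you deploy into a Collection using an access label on the service. Here is a minimal sketch, assuming a hypothetical Collection at the path /Shared/ag1:

$ docker service create \
    --name my-app \
    --label com.docker.ucp.access.label="/Shared/ag1" \
    nginx:1.15.2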

This approach is similar to VLANs in that VMs are separated into distinct groups. While the containerized applications are isolated and protected via RBAC, the VMs themselves are not network-isolated from each other. A rogue actor could still potentially hop from one VM to another across Collections.

But we could combine the VLAN separation with the Collection separation and gain the benefits of both approaches.

VLANs combined with Docker Collections

There are other ways to slice and dice your platform; your approach should follow your requirements. Some customers want isolation at the environment level (e.g., Dev versus Test) to ensure that a breach in one environment does not affect another. In that case you might have two VLANs with three Collections each. The Collections still allow for individual AG ownership and placement.

Two VLANs with three Collections each

Kubernetes Namespaces

Kubernetes namespaces cross all VMs by default

Docker's inclusion of Kubernetes in the Enterprise platform has a strong focus on integration. Kubernetes uses namespaces to organize deployments and pods, while Swarm leverages Collections. Docker integrated its enterprise-class RBAC model into Kubernetes to secure namespace-scoped deployments. But namespaces do not directly allow targeting of any particular node or set of nodes in the cluster. Rather, namespaces are groupings of containers that can potentially span all the VMs in a cluster.
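For example, a deployment created in a namespace may be scheduled onto any worker node (the namespace and image names here are hypothetical):

$ kubectl create namespace ag1-test
$ kubectl -n ag1-test create deployment my-app --image=nginx:1.15.2
$ # The NODE column may show any worker VM in the cluster
$ kubectl -n ag1-test get pods -o wide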

Namespace Linking

However, if you want the benefits of Collections within the Kubernetes realm, Docker has the answer: linking a Kubernetes namespace to a Collection. The following screenshot shows how it is done via the UCP web interface. You simply navigate to your namespace and choose "Link Nodes in Collection". This effectively pins your namespace to the Collection you choose and therefore pins the workload to the set of VMs within that Collection.

UCP linking of a namespace with a Collection
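Once the namespace is linked, you can verify the pinning by checking where the pods land; continuing the hypothetical example above:

$ kubectl -n ag1-test get pods -o wide
$ # The NODE column should now list only workers from the linked Collection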

The Trifecta

Now we can combine VLANs, Collections, and Namespaces in the cluster's configuration to obtain firewall-enforced isolation of VMs, grouping of VMs by Collection, and Kubernetes namespaces linked to Collections.

VLANs, Collections, and Namespaces combined

There are several benefits to this approach, including the following:

  • VM isolation within the VLANs, enforced by firewall
  • Container deployment to Collections enforces workload placement
  • Docker Enterprise supports the industry-acclaimed Kubernetes scheduler
  • Kubernetes namespace linking to Collections places pod deployments on that Collection's VMs
  • Kubernetes RBAC enforces access to pods/applications

Hard Work

All of this sounds great until you want to implement it. We have strong VM isolation through VLANs, but UCP managers must be able to communicate with and manage each of the worker nodes in the cluster across all the VLANs. This means firewall rules must be implemented to allow traffic over the numerous Docker ports that must be open between the UCP management VLAN and each of your AG VLANs. In addition, IP-in-IP traffic (IP protocol 4) must be enabled on your firewall between the management VLAN and each AG VLAN. All of this must be factored into your rollout.
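As an illustrative sketch only (the exact port list depends on your UCP version, so consult Docker's documentation; the subnets here are hypothetical), the rules from the management VLAN to one AG VLAN might look like:

$ # Management VLAN (10.10.0.0/24) to AG VLAN (10.10.1.0/24)
$ iptables -A FORWARD -s 10.10.0.0/24 -d 10.10.1.0/24 -p tcp \
    -m multiport --dports 443,2376,2377,10250,12376 -j ACCEPT
$ # VXLAN overlay networking for Swarm
$ iptables -A FORWARD -s 10.10.0.0/24 -d 10.10.1.0/24 -p udp --dport 4789 -j ACCEPT
$ # IP-in-IP (IP protocol 4) for Calico pod networking
$ iptables -A FORWARD -s 10.10.0.0/24 -d 10.10.1.0/24 -p 4 -j ACCEPT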

Summary

Using the Kubernetes orchestrator within the Docker Enterprise platform has great advantages, including enterprise security and workload separation. In addition, you can apply traditional VLAN isolation to your VMs in conjunction with Docker to enable VM isolation.

At Capstone we have a wide variety of experience, and much of it shares common architectural goals. Hopefully you have gained insight into how VLANs can be incorporated into your Docker Enterprise platform.

If you want more information or assistance, you can contact me on LinkedIn or through our Capstone site.

Mark Miller
Solutions Architect
Docker Accredited Consultant

SSL Options with Kubernetes – Part 3

In the first two posts in this series, SSL Options with Kubernetes – Part 1 and SSL Options with Kubernetes – Part 2, we saw how to use the Kubernetes LoadBalancer service type to terminate SSL for your application deployed on a Kubernetes cluster in AWS and Azure, respectively. In this post, we will see how this can be done for a Kubernetes cluster anywhere using an Ingress resource.

Rather than using an external load balancer, as the AWS and Azure cloud providers do for the LoadBalancer service type, an ingress uses an Ingress Controller to provide load balancing, SSL termination, and other services within a Kubernetes cluster. A big advantage of using an ingress is its portability across all clusters regardless of the underlying infrastructure, i.e., cloud, virtualized, or bare metal. Until recently, a disadvantage was that an ingress only supported HTTP and HTTPS, and you needed to use a NodePort service type for other protocols. However, NGINX has added support for other protocols to their ingress controller.
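As a minimal sketch, assuming the NGINX ingress controller is installed in the cluster and you have a certificate and key on hand (the host, service, and file names below are hypothetical), SSL termination at the ingress might look like:

$ kubectl create secret tls my-app-tls --cert=tls.crt --key=tls.key
$ cat <<EOF | kubectl apply -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - hosts:
    - my-app.example.com
    secretName: my-app-tls
  rules:
  - host: my-app.example.com
    http:
      paths:
      - backend:
          serviceName: my-app
          servicePort: 8443
EOF

The controller terminates TLS using the secret and forwards plain traffic to the my-app service.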

Building Images in a Heterogeneous Cluster

Recently I was troubleshooting a customer problem in their on-premises cluster, but I was not sure where the problem lay. So I switched over to using my colleague's Docker Enterprise demo cluster running in Azure. This heterogeneous cluster has 1 Universal Control Plane (UCP) manager, 1 Docker Trusted Registry (DTR) node, 2 Windows workers, and 1 Linux worker.


I was attempting to reproduce my customer's problem. However, what should have been easy turned into a problem of its own; otherwise I wouldn't be writing about it. I could not even get to my customer's problem until I resolved an issue with simply building a Linux image against a heterogeneous (Windows and Linux workers) cluster. It felt silly and frustrating at the same time. All I could do was wring my hands and groan.

I had downloaded my client bundle and sourced it in my bash shell.

$ source env.sh

The next thing I needed to do was build the Docker image from my custom Dockerfile. The Dockerfile was based on nginx and loaded a custom nginx.conf into the image.
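For reference, the Dockerfile can be reconstructed from the three build steps in the log below; it looked roughly like this:

$ cat Dockerfile
FROM nginx:1.15.2
EXPOSE 8443
COPY nginx.conf /etc/nginx/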

$ cd ~/my-app
$ docker build -t my-app:1.0 .
Sending build context to Docker daemon  4.096kB
worker-win-2: Step 1/3 : FROM nginx:1.15.2 
worker-win-2: Pulling from library/nginx
worker-win-2: 
Failed to build image: no matching manifest for unknown in the manifest list entries

OK, based on the last line of the log output, it is not obvious what the issue is. However, if you look at the machine name the build command was sent to, the problem becomes quite obvious: I cannot build a Linux-based image on a Windows machine. But how do I specify the target operating system on the command line?

I knew my friend Chuck had already encountered this problem, and this is what he told me to do: add the option --build-arg 'constraint:ostype==linux' to my build command.

$ cd ~/my-app
$ docker build --build-arg 'constraint:ostype==linux' -t my-app .
Sending build context to Docker daemon  4.096kB
worker-linux-1: Step 1/3 : FROM nginx:1.15.2
worker-linux-1: Pulling from library/nginx
worker-linux-1: Pull complete 
worker-linux-1: Digest: sha256:d85914d547a6c92faa39ce7058bd7529baacab7e0cd4255442b04577c4d1f424
worker-linux-1: Status: Downloaded newer image for nginx:1.15.2
worker-linux-1:  ---> c82521676580
worker-linux-1: Step 2/3 : EXPOSE 8443 
worker-linux-1:  ---> Running in 88e99ace1e12
worker-linux-1: Removing intermediate container 88e99ace1e12
worker-linux-1:  ---> bd98a77c3b6b
worker-linux-1: Step 3/3 : COPY nginx.conf /etc/nginx/ 
worker-linux-1:  ---> 62b9f978af24
worker-linux-1: Successfully built 62b9f978af24
worker-linux-1: Successfully tagged my-app:latest

That’s it folks. Plain and simple.

$ docker build --build-arg 'constraint:ostype==linux' -t my-app .

In a heterogeneous cluster my builds now target Linux machines instead of Windows. Of course, you can set the ostype to windows if that is your goal. Good luck, and contact us at https://capstonec.com/contact-us/ if you need help.
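For example, to target the Windows workers instead (image name hypothetical):

$ docker build --build-arg 'constraint:ostype==windows' -t my-win-app .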

Mark Miller
Solutions Architect
Docker Accredited Consultant