
Kubernetes Workload Isolation

There are many images of ships with pinwheel-colored containers stacked in a myriad of configurations. In the featured image above you can clearly see three ships at dock loaded with containers. These ships have unique destination ports across the globe, each one carrying a distinct set of products for a discrete set of customers. These containers carry a payload.

Our virtual Docker containers carry a workload. So the ships vary in which containers they carry, where they are transporting them, and to whom they belong. We will talk about how to get our virtual containers loaded onto a particular ship and entertain one solution to VM and container isolation.


Over the years Capstone has worked in many vertical industries. Several of Capstone's customers operate in extremely regulated environments such as the banking, insurance, and financial investment industries. These industry verticals typically need to comply with numerous governing standards and often have unique ways of interpreting and applying those regulations to their IT infrastructure. All of these regulations are aimed at restricting, or at least minimizing, covert intrusion.

Traditional Application Isolation

One traditional approach to thwarting intrusion is to create virtual local area networks (VLANs) which separate and isolate sets of virtual machines (VMs) using firewall rules. These sets of VMs are placed into VLANs based on business-oriented Application Groups (AGs). The diagram below shows three AGs for the Test environment and three more for the Staging environment. This is typically handled by the enterprise network, security, and firewall teams.

Figure: VLANs isolate Virtual Machines via firewall

This approach helps ensure that if any particular VM is compromised by a bad actor, they cannot easily break into other important machines outside of their current VLAN. By using firewall rules and strictly enforcing that only necessary ports are open between the VLANs, you can achieve a high level of confidence in thwarting pervasive intrusion.

Docker Enterprise Collections

With the Docker Enterprise platform you can easily distribute work among worker VMs using Docker Collections. Collections are a native enterprise feature that groups worker nodes. Each Collection is named and supports role-based access control (RBAC) for restricting which users can access and run workloads within it. This allows separation of workloads but does not necessarily guarantee network isolation.
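
Scheduling a Swarm workload into a particular collection is typically just a matter of a label. A minimal sketch, assuming a collection path of /Shared/ag1 (a hypothetical name) and a client bundle pointed at UCP:

$ docker service create \
    --name my-service \
    --label com.docker.ucp.access.label="/Shared/ag1" \
    nginx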

Figure: Collections are groups of Worker Nodes

This approach is similar to VLANs in that VMs are separated into distinct groups. While the containerized applications are isolated and protected via RBAC, the VMs themselves are not isolated from each other. A rogue actor could still potentially hop from one VM to another, even across Collection boundaries.

But we could combine the VLAN separation with the Collection separation and gain the benefits of both approaches.

Figure: VLANs combined with Docker Collections

There are other ways to slice and dice your platform, and your approach should follow your requirements. Some customers want isolation to happen at the environment level (e.g., Dev versus Test) to ensure that a breach in one environment does not affect another. In this example you might have two VLANs with three Collections each. The Collections still allow for individual AG ownership and placement.

Figure: Two VLANs with three Collections each

Kubernetes Namespaces

Figure: Kubernetes namespaces cross all VMs by default

Docker's inclusion of Kubernetes in the Enterprise platform has a strong focus on integration. Kubernetes uses namespaces to organize deployments and pods, while Swarm leverages Collections. Docker integrated its enterprise-class RBAC model into Kubernetes to secure namespace-scoped deployments. But namespaces do not directly allow targeting of any particular node or set of nodes in the cluster. Rather, namespaces are groupings of containers which potentially span all the VMs in a cluster.

Namespace Linking

However, if you want the benefits of Collections within the Kubernetes realm, Docker has the answer in linking a Kubernetes namespace to a collection. The following screenshot shows how it is done via the UCP web interface. You simply navigate to your namespace and then choose “Link Nodes in Collection”. This effectively pins your namespace to the collection you choose, and therefore pins the workload to the set of VMs within that collection.

Figure: UCP linking of a Namespace with a Collection
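
Once the namespace is linked, a quick sanity check is to look at where its pods actually land (assuming a namespace named ag1, a hypothetical name):

$ kubectl get pods -n ag1 -o wide   # the NODE column should list only VMs from the linked collection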

The Trifecta

Now we can combine VLANs, Collections, and Namespaces together in the cluster's configuration to obtain firewall-enforced isolation of VMs, grouping of VMs based on Collection, and Kubernetes namespaces linked with Collections.

Figure: VLANs, Collections, and Namespaces combined

There are several benefits to this approach, including the following:

  • VM isolation within the VLANs, enforced by firewall
  • Container deployments to Collections enforce workload placement
  • Docker Enterprise supports the industry-acclaimed Kubernetes scheduler
  • Kubernetes namespace linking to Collections places pod deployments on that Collection's VMs
  • Kubernetes RBAC enforces access to pods/applications

Hard Work

All of this sounds great until you want to implement it. We have great VM isolation through VLANs, but UCP managers must be able to communicate with and manage each of the worker nodes in the cluster across all the VLANs. This means firewall rules must be implemented to allow traffic over the numerous Docker ports that must be open between the UCP management VLAN and each of your AG VLANs. In addition, IP-in-IP traffic (IP protocol 4) must be enabled on your firewall between the management VLAN and each AG VLAN. All of this must be factored into your rollout.
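
As a rough illustration of the firewall work involved, here is a sketch in iptables terms, assuming a management VLAN of 10.10.1.0/24 and an AG VLAN of 10.10.2.0/24 (both hypothetical). The exact port list varies by Docker EE version, so verify it against the UCP documentation:

# UCP manager-to-worker control traffic (representative ports only)
$ sudo iptables -A FORWARD -s 10.10.1.0/24 -d 10.10.2.0/24 -p tcp -m multiport --dports 443,2376,2377,6443,10250,12376 -j ACCEPT
# Overlay networking (VXLAN) and gossip
$ sudo iptables -A FORWARD -s 10.10.1.0/24 -d 10.10.2.0/24 -p udp -m multiport --dports 4789,7946 -j ACCEPT
$ sudo iptables -A FORWARD -s 10.10.1.0/24 -d 10.10.2.0/24 -p tcp --dport 7946 -j ACCEPT
# Calico IP-in-IP traffic (IP protocol 4)
$ sudo iptables -A FORWARD -s 10.10.1.0/24 -d 10.10.2.0/24 -p 4 -j ACCEPT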

Summary

Using the Kubernetes orchestrator within the Docker Enterprise platform has great advantages, including enterprise security and workload separation. In addition, you can apply traditional VLAN isolation to your VMs in conjunction with Docker to enable VM isolation.

At Capstone we have a wide variety of experiences, but many of those experiences share common architectural goals. Hopefully you have gained insight into how VLANs can be incorporated into your Docker Enterprise platform.

If you want more information or assistance, you can contact me on LinkedIn or through our Capstone site.

Mark Miller
Solutions Architect
Docker Accredited Consultant

SSL Options with Kubernetes – Part 3

In the first two posts in this series, SSL Options with Kubernetes – Part 1 and SSL Options with Kubernetes – Part 2, we saw how to use the Kubernetes LoadBalancer service type to terminate SSL for your application deployed on a Kubernetes cluster in AWS and Azure, respectively. In this post, we will see how this can be done for a Kubernetes cluster anywhere using an Ingress resource.

Rather than using an external load balancer as the AWS and Azure cloud providers do for the LoadBalancer service type, an ingress uses an Ingress Controller to provide load balancing, SSL termination, and other services within a Kubernetes cluster. A big advantage of using an ingress is its portability across all clusters regardless of the underlying infrastructure, i.e. cloud, virtualized, or bare metal. Until recently, a disadvantage was that an ingress supported only HTTP and HTTPS, and you would need to use a NodePort service type for other protocols. However, NGINX has added support for other protocols to their ingress controller.
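
As a sketch of what SSL termination at an ingress looks like, here is a minimal TLS ingress, assuming an NGINX ingress controller is installed, a my-app-svc service exists, and a TLS certificate is stored in a secret named my-app-tls (all hypothetical names; the API version shown matches clusters of this era, while newer clusters use networking.k8s.io/v1):

$ kubectl apply -f - <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app-ingress
spec:
  tls:
  - hosts:
    - my-app.example.com
    secretName: my-app-tls    # certificate and key live in this secret
  rules:
  - host: my-app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app-svc
          servicePort: 8443
EOF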

Building Images in a Heterogeneous Cluster

Recently I was troubleshooting a customer problem in their on-premises cluster, but I was not sure where the problem lay. So I switched over to using my colleague's Docker Enterprise demo cluster running in Azure. This heterogeneous cluster has 1 Universal Control Plane (UCP) manager, 1 Docker Trusted Registry (DTR) node, 2 Windows workers, and 1 Linux worker.


I was attempting to reproduce my customer's problem. However, what should have been easy turned into a problem of its own; or else I wouldn't be writing about it. I could not even get to my customer's problem until I resolved an issue with simply building a Linux image against a heterogeneous (Windows and Linux workers) cluster. At the time it felt silly and frustrating all at once. All I could do was wring my hands and groan.

I had downloaded my client bundle and sourced it in my bash shell.

$ source env.sh

The next thing I needed to do was build the Docker image from my custom Dockerfile. The Dockerfile was based on nginx and loaded a custom nginx.conf into the image.
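
The Dockerfile itself was tiny; reconstructed from the three build steps shown in the output further below, it looked roughly like this:

# Base the image on stock nginx
FROM nginx:1.15.2
# The app listens on 8443
EXPOSE 8443
# Overlay the custom configuration
COPY nginx.conf /etc/nginx/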

$ cd ~/my-app
$ docker build -t my-app:1.0 .
Sending build context to Docker daemon  4.096kB
worker-win-2: Step 1/3 : FROM nginx:1.15.2 
worker-win-2: Pulling from library/nginx
worker-win-2: 
Failed to build image: no matching manifest for unknown in the manifest list entries

Ok, based on the last line of the log output alone, the issue is not obvious. However, if you look at the machine name the build command was sent to, the problem becomes quite clear: I cannot build a Linux-based image on a Windows machine. But how do I specify the target operating system on the command line?

I knew my friend Chuck had already encountered this problem, so this is what he told me to do: add the option --build-arg 'constraint:ostype==linux' to my build command.

$ cd ~/my-app
$ docker build --build-arg 'constraint:ostype==linux' -t my-app .
Sending build context to Docker daemon  4.096kB
worker-linux-1: Step 1/3 : FROM nginx:1.15.2
worker-linux-1: Pulling from library/nginx
worker-linux-1: Pull complete 
worker-linux-1: Digest: sha256:d85914d547a6c92faa39ce7058bd7529baacab7e0cd4255442b04577c4d1f424
worker-linux-1: Status: Downloaded newer image for nginx:1.15.2
worker-linux-1:  ---> c82521676580
worker-linux-1: Step 2/3 : EXPOSE 8443 
worker-linux-1:  ---> Running in 88e99ace1e12
worker-linux-1: Removing intermediate container 88e99ace1e12
worker-linux-1:  ---> bd98a77c3b6b
worker-linux-1: Step 3/3 : COPY nginx.conf /etc/nginx/ 
worker-linux-1:  ---> 62b9f978af24
worker-linux-1: Successfully built 62b9f978af24
worker-linux-1: Successfully tagged my-app:latest

That’s it folks. Plain and simple.

$ docker build --build-arg 'constraint:ostype==linux' -t my-app .

In a heterogeneous cluster my builds now target Linux machines and not Windows. Of course, you can set the ostype to windows if that is your goal (see the example below). Good luck, and contact us at https://capstonec.com/contact-us/.
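
For example, targeting the Windows workers instead (assuming a Windows-based Dockerfile and a hypothetical image name):

$ docker build --build-arg 'constraint:ostype==windows' -t my-win-app .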

Mark Miller
Solutions Architect
Docker Accredited Consultant

Kubernetes NetworkPolicies in Docker Enterprise Edition

Kubernetes running under Docker UCP uses the Calico CNI plugin, so you can use Kubernetes NetworkPolicies to control pod-to-pod communication as well as communication between pods and other network endpoints.

This blog post will walk you through an example of configuring Kubernetes NetworkPolicies. We will block traffic from one namespace into another namespace while still allowing external traffic to access the “restricted” namespace. As a high-level use case, we will consider the situation where a development team is working on multiple branches of a project, and the pods in the different branches should not be able to communicate with each other. If you are not familiar with the basic concepts of NetworkPolicies, see the Kubernetes documentation.
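
As a sketch of the namespace-isolation piece (the full walkthrough also opens access for external traffic), the following policy denies ingress from pods in other namespaces while still allowing pods within the same namespace to talk to each other, assuming a namespace named branch-a (hypothetical):

$ kubectl apply -n branch-a -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces
spec:
  podSelector: {}        # select every pod in the branch-a namespace
  ingress:
  - from:
    - podSelector: {}    # allow ingress only from pods in this same namespace
EOF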

Help! I need to change the pod CIDR in my Kubernetes cluster


Your Docker EE Kubernetes cluster has been working great for months. The DevOps team is fully committed to deploying critical applications as Kubernetes workloads using their pipeline, and there are several production applications already deployed in your Kubernetes cluster.

But today the DevOps team tells you something is wrong; they can't reach a group of internal corporate servers from Kubernetes pods. They can reach those same servers using basic Docker containers and Swarm services. You're sure it's just another firewall misconfiguration, and you enlist the help of your network team to fix it. After several hours of troubleshooting, you realize that the problem is that the CIDR (Classless Inter-Domain Routing) range you are using for your cluster's pods overlaps the CIDR range that the servers use.

Resistance is futile; management tells you that the server IP addresses can’t be changed, so you must change the CIDR range for your Kubernetes cluster. You do a little Internet surfing and quickly figure out that this is not considered an easy task. Worse yet, most of the advice is for Kubernetes clusters installed using tools like kubeadm or kops, while your cluster is installed under Docker EE UCP.
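
A quick way to see what pod address ranges your cluster is actually using (a sketch; whether the node spec records per-node ranges depends on your CNI configuration):

$ kubectl get pods --all-namespaces -o wide                  # the IP column shows addresses from the pod CIDR
$ kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'   # per-node ranges, if recorded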

Relax! In this blog post, I'm going to walk you through changing the pod CIDR range in Kubernetes running under Docker EE. There will be some disruption when the existing Kubernetes pods are restarted to use IP addresses from the new CIDR range, but it should be minimal if your applications use a replicated design.

32-bit Apps in a 64-bit Docker Container

Figure: the Kernighan & Ritchie C book cover

I started my career in December of 1989 at a company named Planning Research Corporation, which contracted a considerable amount of work with the Department of Defense. I spent one year working in Fortran 77. The next 6 years were far more interesting to me as I dove into the world of ANSI C programming using the Kernighan & Ritchie bible. I still have my copy on a shelf.

Our systems ran on 3 different Unix operating systems. We managed Makefiles that targeted the SunOS, DEC Ultrix, and IBM AIX platforms. At times this was quite challenging. However, everything in this environment was 32-bit architecture; but what did that matter to me at the time? 64-bit processors didn't come along for many more years.

SSL Options with Kubernetes – Part 2

In the first post in this series, SSL Options with Kubernetes – Part 1, we saw how to use the Kubernetes LoadBalancer service type to terminate SSL for your application deployed on a Kubernetes cluster in AWS. In this post, we will see how this can be done for a Kubernetes cluster in Azure.

In general, Kubernetes objects are portable across the various types of infrastructure underlying the cluster, i.e. public cloud, private cloud, virtualized, bare metal, etc. However, some objects are implemented through the Kubernetes concept of Cloud Providers. The LoadBalancer service type is one of these. AWS, Azure, and GCP (as well as vSphere, OpenStack, and others) all implement the load balancer service using the existing load balancer(s) their cloud provides. As such, each implementation is different. These differences are accounted for in the annotations on the Service object. For example, consider the specification we used for our service in the previous post.
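
A representative sketch of such a specification for AWS (hypothetical names and certificate ARN throughout; not necessarily the exact spec from Part 1):

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    # AWS-specific: terminate SSL at the ELB using an ACM certificate
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:123456789012:certificate/abcd-1234
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  type: LoadBalancer
  ports:
  - port: 443
    targetPort: 8080
  selector:
    app: my-app
EOF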

How to securely deploy Docker EE on the AWS Cloud

Overview

This reference deployment guide provides step-by-step instructions for deploying Docker Enterprise Edition on the Amazon Web Services (AWS) Cloud. The deployment uses the Docker Certified Infrastructure (DCI) template, which is based on Terraform, to launch, configure, and run the AWS compute, network, storage, and other services required to deploy a specific workload on AWS. The DCI template uses Ansible playbooks to configure the Docker Enterprise cluster environment.

SSL Options with Kubernetes – Part 1

In this post (and future posts) we will continue to look into questions our clients have asked about using Docker Enterprise that have prompted us to do further research and investigation. Here we are going to look into the options for enabling secure communications with our applications running under Kubernetes container orchestration on a Docker Enterprise cluster.

The LoadBalancer type of service in Kubernetes is available if you are using one of the major public clouds (AWS, Azure, or GCP) via their respective cloud provider implementations. An Ingress resource is available on any Kubernetes cluster, both on-premises and in the cloud. Both LoadBalancer and Ingress provide the capability to terminate SSL traffic. In this post we will show how this is accomplished with an AWS LoadBalancer service.

The Power in a Name

My full name is Mark Allen Miller. You can find my profile on LinkedIn under my full name at https://www.linkedin.com/in/markallenmiller/. I went to college with two other Mark Millers, and one of them also had the same middle initial as me, so my name is not the most unique name in the world. My dad's name is Siegfried Miller. At the age of 18, because he could “change the world”, he changed his last name from Mueller to Miller and, yep, he doesn't have a middle name. My grandfather's name is Karl Mueller. His Austrian surname, prior to immigrating to the US in 1950, was Müller with an umlaut, which is a mark ( ¨ ) used over a vowel to indicate a different vowel quality. Interesting trivia, you might say, but what does this have to do with Docker?

Well, Docker originally had the name dotCloud. According to Wikipedia, “Docker represents an evolution of dotCloud's proprietary technology, which is itself built on earlier open-source projects such as Cloudlets.” I had never even heard of Cloudlets until I wrote this blog.

Docker containers have names as well. These names give us humans something a little more interesting to work with than the typical container ID such as 648f7f486b24. The name of a container can be used to identify a running instance of an image, and it can also be used in most commands in place of the container ID.
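
For example (a quick sketch using the stock nginx image):

$ docker run -d --name webserver nginx    # start a container with an explicit name
$ docker logs webserver                   # the name works anywhere an ID would
$ docker stop webserver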