
Kubernetes NetworkPolicies in Docker Enterprise Edition

Kubernetes running under Docker UCP uses the Calico CNI plugin, so you can use Kubernetes NetworkPolicies to control pod-to-pod communication as well as communication between pods and other network endpoints.

This blog post will walk you through an example of configuring Kubernetes NetworkPolicies. We will block traffic from one namespace into another namespace, while still allowing external traffic to access the “restricted” namespace. As a high-level use case, we will consider the situation where a development team is working on multiple branches of a project, and the pods in the different branches should not be able to communicate with each other. If you are not familiar with the basic concepts of NetworkPolicies, see the Kubernetes documentation here.
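As a taste of what the post builds up to, blocking pod traffic from another namespace comes down to a namespace-wide ingress rule. Here is a minimal sketch (the namespace and policy names are illustrative, not the post's exact manifest); note that a complete solution also needs an additional rule, such as an ipBlock entry, to keep external traffic flowing into the "restricted" namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-other-namespaces
  namespace: restricted
spec:
  podSelector: {}        # select every pod in the "restricted" namespace
  ingress:
  - from:
    - podSelector: {}    # allow ingress only from pods in this same namespace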

Help! I need to change the pod CIDR in my Kubernetes cluster


Your Docker EE Kubernetes cluster has been working great for months. The DevOps team is fully committed to deploying critical applications as Kubernetes workloads using their pipeline, and there are several production applications already deployed in your Kubernetes cluster.

But today the DevOps team tells you something is wrong; they can't reach a group of internal corporate servers from Kubernetes pods. They can reach those same servers using basic Docker containers and Swarm services. You're sure it's just another firewall misconfiguration, and you enlist the help of your network team to fix it. After several hours of troubleshooting, you realize the problem: your cluster's pod CIDR (Classless Inter-Domain Routing) range overlaps the CIDR range the servers use.

Resistance is futile; management tells you that the server IP addresses can’t be changed, so you must change the CIDR range for your Kubernetes cluster. You do a little Internet surfing and quickly figure out that this is not considered an easy task. Worse yet, most of the advice is for Kubernetes clusters installed using tools like kubeadm or kops, while your cluster is installed under Docker EE UCP.

Relax! In this blog post, I'm going to walk you through changing the pod CIDR range in Kubernetes running under Docker EE. There will be some disruption when the existing Kubernetes pods are restarted with IP addresses from the new CIDR range, but it should be minimal if your applications use a replicated design.
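Before making any changes, you can confirm the overlap by looking at the addresses your pods actually receive; standard kubectl is enough (nothing UCP-specific here):

kubectl get pods --all-namespaces -o wide

Compare the IP column, which shows addresses allocated from the pod CIDR range, against the CIDR range your internal servers occupy.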

32-bit Apps in a 64-bit Docker Container

K&R C book cover

I started my career in December of 1989 at a company named Planning Research Corporation, which did a considerable amount of contract work with the Department of Defense. I spent one year working in Fortran 77. The next six years were far more interesting to me as I dove into the world of ANSI C programming using the Kernighan & Ritchie bible. I still have my copy on a shelf.

Our systems ran on 3 different Unix operating systems. We managed Makefiles that targeted SunOS, DEC Ultrix, and IBM AIX platforms. At times this was quite challenging. Everything in this environment, however, was 32-bit architecture. But what did that matter to me at the time? 64-bit processors didn't come along for many more years.

SSL Options with Kubernetes – Part 2

In the first post in this series, SSL Options with Kubernetes – Part 1, we saw how to use the Kubernetes LoadBalancer service type to terminate SSL for your application deployed on a Kubernetes cluster in AWS. In this post, we will see how this can be done for a Kubernetes cluster in Azure.

In general, Kubernetes objects are portable across the various types of infrastructure underlying the cluster, e.g., public cloud, private cloud, virtualized, bare metal, etc. However, some objects are implemented through the Kubernetes concept of Cloud Providers. The LoadBalancer service type is one of these. AWS, Azure, and GCP (as well as vSphere, OpenStack, and others) all implement the load balancer service using the existing load balancer(s) their cloud provides. As such, each implementation is different. These differences are accounted for in the annotations to the Service object. For example, here is the specification we used for our service in the previous post.
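The excerpt ends before the spec itself, but a representative AWS LoadBalancer service (illustrative names and a hypothetical certificate ARN, not necessarily the original from the post) looks like this:

apiVersion: v1
kind: Service
metadata:
  name: app-svc
  annotations:
    # AWS-specific: the ACM certificate the ELB uses to terminate SSL (hypothetical ARN)
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:123456789012:certificate/abcd-1234
    # SSL terminates at the ELB; traffic from the ELB to the pods is plain HTTP
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
spec:
  type: LoadBalancer
  selector:
    app: app
  ports:
  - port: 443
    targetPort: 80

The annotations are where the per-cloud differences live; the rest of the Service object stays portable.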

How to securely deploy Docker EE on the AWS Cloud

Overview

This reference deployment guide provides step-by-step instructions for deploying Docker Enterprise Edition on the Amazon Web Services (AWS) Cloud. The deployment uses the Docker Certified Infrastructure (DCI) template, which is based on Terraform, to launch, configure, and run the AWS compute, network, storage, and other services required to deploy a specific workload on AWS. The DCI template uses Ansible playbooks to configure the Docker Enterprise cluster environment.
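DCI's exact entry points vary by version, so what follows is just the generic Terraform lifecycle the template builds on, not DCI's literal commands:

terraform init     # download the providers and modules the template needs
terraform plan     # preview the AWS compute, network, and storage resources
terraform apply    # create the resources; DCI then runs Ansible playbooks to configure the cluster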

SSL Options with Kubernetes – Part 1

In this post (and future posts) we will continue to look into questions our clients have asked about using Docker Enterprise that have prompted us to do further research. Here we are going to look into the options for enabling secure communications with applications running under Kubernetes container orchestration on a Docker Enterprise cluster.

The LoadBalancer service type in Kubernetes is available if you are using one of the major public clouds (AWS, Azure, or GCP) via their respective cloud provider implementations. An Ingress resource is available on any Kubernetes cluster, both on-premises and in the cloud. Both LoadBalancer and Ingress provide the capability to terminate SSL traffic. In this post, we will show how this is accomplished with an AWS LoadBalancer service.
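For the Ingress route, SSL termination is declared with a tls section. A minimal sketch using the extensions/v1beta1 API current when this was written (hostnames and names are illustrative):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress
spec:
  tls:
  - hosts:
    - app.example.com
    secretName: app-tls      # a Kubernetes TLS secret holding the certificate and key
  rules:
  - host: app.example.com
    http:
      paths:
      - backend:
          serviceName: app-svc
          servicePort: 80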

The Power in a Name

My full name is Mark Allen Miller. You can find my profile on LinkedIn under my full name at https://www.linkedin.com/in/markallenmiller/. I went to college with two other Mark Millers, and one of them also had the same middle initial as me, so my name is not the most unique name in the world. My dad's name is Siegfried Miller. At the age of 18, because he could "change the world", he changed his last name from Mueller to Miller, and yep, he doesn't have a middle name. My grandfather's name is Karl Mueller. His Austrian surname, prior to immigrating to the US in 1950, was Müller with an umlaut, which is a mark ( ¨ ) used over a vowel to indicate a different vowel quality. Interesting trivia, you might say, but what does this have to do with Docker?

Well, Docker originally had the name dotCloud. According to Wikipedia, "Docker represents an evolution of dotCloud's proprietary technology, which is itself built on earlier open-source projects such as Cloudlets." I had never even heard of Cloudlets until I wrote this blog.

Docker containers have names as well. These names give us humans something a little more interesting to work with than a typical container id such as 648f7f486b24. The name of a container identifies a running instance of an image, and it can be used in most commands in place of the container id.
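For example:

docker container run -d --name web nginx
docker container logs web     # the name works anywhere the container id would
docker container stop web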

Kubernetes tolerations working together with Docker UCP scheduler restrictions

In this blog post we'll take a look at how the scheduler controls in Docker UCP interact with Kubernetes taints and tolerations. Both are used to control what workloads are allowed to run on manager and DTR (Docker Trusted Registry) nodes. Docker EE UCP manager nodes are also Kubernetes master nodes, and in production systems it is important to restrict what runs on the manager (master) and DTR nodes. We'll walk through deploying a Kubernetes workload on every node in a Docker EE cluster.
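As a preview, and consistent with the pod description shown later on this page, a Kubernetes workload that should be allowed onto UCP manager nodes carries a toleration for the com.docker.ucp.manager taint. A minimal pod spec fragment:

tolerations:
- key: com.docker.ucp.manager
  operator: Exists     # tolerate the UCP manager taint regardless of its effect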

Deploying a Docker stack file as a Kubernetes workload

Overview

Recently I’ve been hosting workshops for a customer who is exploring migrating from Docker Swarm orchestration to Kubernetes orchestration. The customer is currently using Docker EE (Enterprise Edition) 2.1, and plans to continue using that platform, just leveraging Kubernetes rather than Swarm. There are a number of advantages to continuing to use Docker EE including:

  • Pre-installed Kubernetes.
  • Group (team) and user management, including corporate LDAP integration.
  • Using the Docker UCP client bundle to configure both your Kubernetes and Docker client environment.
  • Availability of an on-premises registry (DTR) that includes advanced features such as image scanning and image promotion.

I had already conducted a workshop on deploying applications as Docker services in stack files (compose files deployed as Docker stacks), demonstrating self-healing replicated applications, service discovery and the ability to publish ports externally using the Docker ingress network. …
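For reference, the kind of stack file in play looks like the following; this is a simplified sketch, not the workshop's actual file. On Docker EE 2.1, UCP can deploy the same file as Swarm services or as Kubernetes workloads:

version: "3.3"
services:
  web:
    image: nginx:1.14-alpine
    ports:
    - "8080:80"        # published externally via the ingress network
    deploy:
      replicas: 3      # a self-healing, replicated service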

Using a Private Registry in Kubernetes

Docker Trusted Registry (DTR) in a Docker Enterprise Edition (EE) cluster allows users to create a private image repository for their own use. They may want to do this when they want to use the cluster for their work but can't (or don't want to) use their own system, or when they're not yet ready to share their images with others. However, using a private image repository in a Kubernetes deployment requires some additional steps. In this post, I will show you how to set up the repository and use it in your deployment.

Private repository screenshot
In the example below, we’re going to create an image repository in DTR for an nginx image for the user ken.rider.

In this case, we’re going to use the official image for nginx as the basis for the nginx image in our repository. We’ll need to pull the image from the Docker Hub, login to DTR on our cluster, re-tag the image for our repository and push the image to DTR.

ken> docker image pull nginx:1.14-alpine
1.14-alpine: Pulling from library/nginx
6c40cc604d8e: Already exists
76679ad9f124: Pull complete
389a52582f93: Pull complete
496e2dd2b91a: Pull complete
Digest: sha256:b96aeeb1687703c49096f4969358d44f8520b671da94848309a3ba5be5b4c632
Status: Downloaded newer image for nginx:1.14-alpine
ken> docker login ken-dtr.lab.capstonec.net
Username: ken.rider
Password:
Login Succeeded
ken> docker image tag nginx:1.14-alpine ken-dtr.lab.capstonec.net/ken.rider/nginx:1.14-alpine
ken> docker image push ken-dtr.lab.capstonec.net/ken.rider/nginx:1.14-alpine
The push refers to repository [ken-dtr.lab.capstonec.net/ken.rider/nginx]
129ba078f157: Layer already exists
8c8f1eccd524: Layer already exists
68442845474f: Layer already exists
503e53e365f3: Layer already exists
1.14-alpine: digest: sha256:b9734546761e49b453efce35ee523bbcaff1052d281516f133d41b090e26c0df size: 1153

As an alternative, an administrator can change the DTR setting to create a repository on push. In this case, the repository is set to private by default.

Create repository on push setting screenshot

Here the user only has to push the image to create the private repository.

ken> docker image pull alpine
Using default tag: latest
latest: Pulling from library/alpine
6c40cc604d8e: Already exists
Digest: sha256:b3dbf31b77fd99d9c08f780ce6f5282aba076d70a513a8be859d8d3a4d0c92b8
Status: Downloaded newer image for alpine:latest
ken> docker login ken-dtr.lab.capstonec.net
Username: ken.rider
Password:
Login Succeeded
ken> docker image tag alpine ken-dtr.lab.capstonec.net/ken.rider/alpine:latest
ken> docker image push ken-dtr.lab.capstonec.net/ken.rider/alpine:latest
The push refers to repository [ken-dtr.lab.capstonec.net/ken.rider/alpine]
503e53e365f3: Mounted from ken.rider/nginx
latest: digest: sha256:25b4d910f4b76a63a3b45d0f69a57c34157500faf6087236581eca221c62d214 size: 528

Here’s what we see in DTR after we push both images. Note that each repository is labeled as private.

private-repositories.jpg

Now, let’s create a Kubernetes service based on the nginx image from this private repository in DTR. (If you would like to know more about the user-kenrider namespace used below, see my previous post on how to Create a User K8S Sandbox in Docker EE.)

ken> kubectl --namespace user-kenrider create deployment nginx --image=ken-dtr.lab.capstonec.net/ken.rider/nginx:1.14-alpine
deployment.extensions "nginx" created

That looks promising. Let’s look at the deployment.

ken> kubectl --namespace user-kenrider get deployments
NAME    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx   1         1         1            0           25s
ken> kubectl --namespace user-kenrider describe deployment nginx
Name:                   nginx
Namespace:              user-kenrider
CreationTimestamp:      Sun, 03 Feb 2019 15:26:41 -0700
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision=1
Selector:               app=nginx
Replicas:               1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        ken-dtr.lab.capstonec.net/ken.rider/nginx:1.14-alpine
    Port:
    Host Port:
    Environment:
    Mounts:
  Volumes:
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    ReplicaSetUpdated
OldReplicaSets:
NewReplicaSet:   nginx-659f44b5b4 (1/1 replicas created)
Events:
  Type    Reason             Age  From                   Message
  ----    ------             ---  ----                   -------
  Normal  ScalingReplicaSet  42s  deployment-controller  Scaled up replica set nginx-659f44b5b4 to 1

Notice that the deployment says it has scaled up to 1 replica but the 1 replica is unavailable.

Let’s look at the pods.

ken> kubectl --namespace user-kenrider get pods
NAME                     READY   STATUS             RESTARTS   AGE
nginx-659f44b5b4-f7bz2   0/1     ImagePullBackOff   0          1m
ken> kubectl --namespace user-kenrider describe pod nginx-659f44b5b4-f7bz2
Name:               nginx-659f44b5b4-f7bz2
Namespace:          user-kenrider
Priority:           0
PriorityClassName:
Node:               ip-172-31-14-17/172.31.14.17
Start Time:         Sun, 03 Feb 2019 15:26:42 -0700
Labels:             app=nginx
                    pod-template-hash=2159006160
Annotations:
Status:             Pending
IP:                 192.168.87.199
Controlled By:      ReplicaSet/nginx-659f44b5b4
Containers:
  nginx:
    Container ID:
    Image:          ken-dtr.lab.capstonec.net/ken.rider/nginx:1.14-alpine
    Image ID:
    Port:
    Host Port:
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-kjtft (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-kjtft:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-kjtft
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:
Tolerations:     com.docker.ucp.manager
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age               From                      Message
  ----     ------     ----              ----                      -------
  Normal   Scheduled  1m                default-scheduler         Successfully assigned user-kenrider/nginx-659f44b5b4-f7bz2 to ip-172-31-14-17
  Normal   Pulling    40s (x3 over 1m)  kubelet, ip-172-31-14-17  pulling image "ken-dtr.lab.capstonec.net/ken.rider/nginx:1.14-alpine"
  Warning  Failed     40s (x3 over 1m)  kubelet, ip-172-31-14-17  Failed to pull image "ken-dtr.lab.capstonec.net/ken.rider/nginx:1.14-alpine": rpc error: code = Unknown desc = Error response from daemon: pull access denied for ken-dtr.lab.capstonec.net/ken.rider/nginx, repository does not exist or may require 'docker login'
  Warning  Failed     40s (x3 over 1m)  kubelet, ip-172-31-14-17  Error: ErrImagePull
  Normal   BackOff    2s (x5 over 1m)   kubelet, ip-172-31-14-17  Back-off pulling image "ken-dtr.lab.capstonec.net/ken.rider/nginx:1.14-alpine"
  Warning  Failed     2s (x5 over 1m)   kubelet, ip-172-31-14-17  Error: ImagePullBackOff

Here we see the pod tried to pull the ken-dtr.lab.capstonec.net/ken.rider/nginx:1.14-alpine image but failed. Notice it says we may need to log in to DTR. How do we do that securely with a deployment/pod? We start by going to Pull an Image from a Private Registry in the Kubernetes reference documentation. There we see we need to create a Kubernetes secret with the user login information and pass that information in the pod specification.

Private registry diagram

Let’s start by creating the secret.

ken> kubectl --namespace user-kenrider create secret docker-registry regcred --docker-server=https://ken-dtr.lab.capstonec.net --docker-username=ken.rider --docker-password=mypassword --docker-email=ken.rider@capstonec.com
secret "regcred" created

Then we’ll create a Kubernetes manifest, named deployment-nginx.yml, using this secret in the pod specification.

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    app: nginx
  name: nginx
  namespace: user-kenrider
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: ken-dtr.lab.capstonec.net/ken.rider/nginx:1.14-alpine
        imagePullPolicy: IfNotPresent
        name: nginx
      imagePullSecrets:
      - name: regcred

Now we’ll delete the previous deployment and apply this manifest file that specifies the secret we created. (We could have just done an apply but wanted to be explicit for this post.) And this time we can see the deployment is available and the pod is running.

ken> kubectl --namespace user-kenrider delete deployment nginx
deployment.extensions "nginx" deleted
ken> kubectl --namespace user-kenrider apply -f .\deployment-nginx.yml
deployment.extensions "nginx" created
ken> kubectl --namespace user-kenrider get deployments
NAME    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx   1         1         1            1           15s
ken> kubectl --namespace user-kenrider get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-67476dd867-5gw44   1/1     Running   0          24s

In this post we’ve seen how to create a private image repository in DTR on a Docker EE cluster and how to use an image from that private repository in a Kubernetes deployment on that same cluster.