Dave Thompson

Add Kubernetes Users to Your Cluster

If you are working with Kubernetes a lot, you have probably built several basic clusters for learning purposes using kubeadm and the documentation here. As you start exploring topics like RBAC Roles and Pod Security Policies, you will soon notice that only one user was created in each cluster. And since that user has the cluster-admin role, it can do anything in any namespace. To try out many of the Kubernetes security-related concepts in your clusters, you will need to add Kubernetes users that are not cluster administrators. For your basic clusters, you probably don’t have integration set up with an external system to add and authenticate users. And as the Kubernetes docs note here: “Kubernetes does not have objects which represent normal user accounts. Normal users cannot be added to a cluster through an API call.”

This blog post will show you how to create new Kubernetes users in your clusters, focusing on clusters created using kubeadm.
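
To give a flavor of one common approach, here is a minimal sketch of adding a user by issuing a client certificate signed by the cluster CA. It assumes a kubeadm cluster where the CA files live in /etc/kubernetes/pki on a control-plane node; the user name jane, group dev-team, and cluster name my-cluster are placeholders.

    # Generate a key and CSR for the new user. The CN becomes the Kubernetes
    # user name and the O becomes the user's group.
    openssl genrsa -out jane.key 2048
    openssl req -new -key jane.key -out jane.csr -subj "/CN=jane/O=dev-team"

    # Sign the CSR with the cluster CA (paths are the kubeadm defaults).
    sudo openssl x509 -req -in jane.csr \
      -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key \
      -CAcreateserial -out jane.crt -days 365

    # Add the credentials and a context for the new user to your kubeconfig.
    kubectl config set-credentials jane \
      --client-certificate=jane.crt --client-key=jane.key --embed-certs=true
    kubectl config set-context jane@my-cluster --cluster=my-cluster --user=jane

Until you bind RBAC Roles to jane, the new user can authenticate but cannot do much of anything, which is exactly the starting point you want for experimenting with RBAC.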

Hands-on with Kubernetes Pod Security Policies

Kubernetes Pod Security Policies allow you to control the security specifications that pods must adhere to in order to run in your cluster. You can block users from deploying inherently insecure pods, whether they do so intentionally or unintentionally. This sounds like a great feature and a security best practice, and it can be a big step toward keeping your cluster free of insecure resources.

However, some pods may require additional security permissions beyond what most cluster users are allowed to deploy. For example, monitoring or metrics tooling may need host network access or may need to run in privileged mode. Also, you may need to allow developers to run applications with additional capabilities during early development stages just to make progress.

How hard is it to use Pod Security Policies to judiciously secure your cluster? We’ll look at that in this blog post.
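
To give a sense of what such a policy looks like, here is a minimal, hypothetical restrictive PodSecurityPolicy that blocks privileged containers and host networking. Note that PodSecurityPolicy (policy/v1beta1) was later deprecated and removed in Kubernetes 1.25, and that a policy has no effect until users or service accounts are granted RBAC permission to use it.

    # Save as restricted-psp.yaml and apply with: kubectl apply -f restricted-psp.yaml
    apiVersion: policy/v1beta1
    kind: PodSecurityPolicy
    metadata:
      name: restricted-example
    spec:
      privileged: false
      allowPrivilegeEscalation: false
      hostNetwork: false
      hostPID: false
      hostIPC: false
      runAsUser:
        rule: MustRunAsNonRoot
      seLinux:
        rule: RunAsAny
      supplementalGroups:
        rule: RunAsAny
      fsGroup:
        rule: RunAsAny
      volumes:
      - configMap
      - secret
      - emptyDir
      - persistentVolumeClaim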

How Pods access Kubernetes DNS in Docker EE, part two

This is part two of a two-part blog about Kubernetes DNS resolution and network access by Pods in Kubernetes. In part one we looked at internal Kubernetes DNS and how DNS resolution is configured for containers. In this part, we look at how network traffic gets from the containers in user workload Pods to Pods providing DNS functionality. We’re using Kubernetes running under Docker EE UCP (Docker Enterprise Edition Universal Control Plane) in this example. You can find more information about Docker EE here. Docker EE uses the Calico network plugin for Kubernetes, so some of the details are specific to Calico.
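
As a quick preview of the traffic path, the commands below show where DNS queries from pods actually go. The Service name kube-dns and the label k8s-app=kube-dns are the usual defaults, but verify them in your own cluster, and run the host-level commands on a node rather than inside a pod.

    # Which ClusterIP do pods send DNS queries to, and which pods back it?
    kubectl -n kube-system get svc kube-dns -o wide
    kubectl -n kube-system get endpoints kube-dns
    kubectl -n kube-system get pods -o wide -l k8s-app=kube-dns

    # On a node, kube-proxy NAT rules translate the ClusterIP to those pod IPs,
    # and Calico installs the routes that carry the traffic to them.
    sudo iptables -t nat -L KUBE-SERVICES -n | grep -i dns
    ip route | grep cali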

How Pods access Kubernetes DNS in Docker EE, part one

Service discovery is one of the important benefits of using a container/Pod orchestrator. When you create a Service in Kubernetes, controllers running behind the scenes create an entry in Kubernetes DNS records. Then other applications deployed in the cluster can look up the Service using its name. Kubernetes also configures routing within the cluster to send traffic for the Service to the Service’s ephemeral endpoint Pods.
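
For example, here is a quick, hypothetical way to see that in action; the Deployment name hello and the throwaway busybox pod are placeholders.

    # Create a Deployment and expose it as a Service named "hello".
    kubectl create deployment hello --image=nginx:alpine
    kubectl expose deployment hello --port=80

    # From any other pod in the cluster, the Service resolves by its name.
    kubectl run lookup-test --image=busybox --restart=Never --rm -it -- nslookup hello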

Understanding Kubernetes DNS configuration and related traffic flow will help you troubleshoot problems accessing the cluster’s DNS from Pods. This is part one of a two-part deep-dive into how Kubernetes does this under the hood. In part one of this blog, we will look at how Kubernetes sets up DNS resolution for containers in Pods. In part two, we will look at how network traffic flows from containers in Pods for user workloads to the Pods providing DNS functionality. We’re going to use Kubernetes running under Docker Enterprise Edition for our examples in this blog.
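
As a small preview of part one: the kubelet writes a resolv.conf into each container that points at the cluster DNS Service and adds the search domains that make short Service names resolvable. You can see this from any running pod (<some-pod> below is a placeholder).

    # The nameserver should match the cluster DNS Service's ClusterIP.
    kubectl exec <some-pod> -- cat /etc/resolv.conf
    kubectl -n kube-system get svc kube-dns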

Configure Custom CIDR Ranges in Docker EE

I recently worked with a customer to customize all of the default Classless Inter-Domain Routing (CIDR) ranges used for IP address allocation by Docker Enterprise Edition 3.0 (Docker EE 3.0). The customer primarily wanted to document the customization process for future use. However, there is often a real need to change some of the default CIDR ranges to avoid conflicts with existing private IP addresses already in use within a customer’s network. Typically such a conflict will make it impossible for applications running in containers or pods to access external hosts in the conflicting CIDR range.
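
As a rough sketch of where those knobs live: the Docker Engine ranges are set in /etc/docker/daemon.json before networks are created or the node joins the swarm, while the Kubernetes pod range is set when UCP is installed. The exact flag names below should be confirmed against the installer’s --help for your UCP version, and <version> and the example CIDR are placeholders.

    # Engine-level ranges (docker0, docker_gwbridge, user-defined networks) are
    # configured in /etc/docker/daemon.json, e.g. via "bip" and
    # "default-address-pools".
    #
    # The Kubernetes pod range in Docker EE is set at UCP install time:
    docker container run --rm -it --name ucp \
      -v /var/run/docker.sock:/var/run/docker.sock \
      docker/ucp:<version> install \
      --pod-cidr 10.200.0.0/16 \
      --interactive

    # See the full list of CIDR-related install options for your UCP version:
    docker container run --rm docker/ucp:<version> install --help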


    Attack of the Kubernetes Clones

    One of the customers I support is using Kubernetes under Docker EE UCP (Enterprise Edition Universal Control Plane) and has been very impressed with its stability and ease of management. Recently, however, a worker node that had been very stable for months started evicting Kubernetes pods extremely frequently, reporting inadequate CPU resources. Our DevOps team was still experimenting with determining resource requirements for many of their containerized apps, so at first, we thought the problem was caused by resource contention between pods running on the node.
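
    When chasing a problem like this, a few standard commands tell you what the node thinks is happening; <node-name> below is a placeholder, and kubectl top requires a metrics provider to be running.

        # Which pods failed, and why does the node say it evicted them?
        kubectl get pods --all-namespaces --field-selector=status.phase=Failed
        kubectl get events --all-namespaces --field-selector=reason=Evicted

        # Compare the requests and limits scheduled onto the suspect node with
        # its capacity, and with actual usage if metrics are available.
        kubectl describe node <node-name> | grep -A 10 "Allocated resources"
        kubectl top node <node-name>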

    Kubernetes NetworkPolicies in Docker Enterprise Edition

    Kubernetes running under Docker UCP uses the Calico CNI plugin, so you can use Kubernetes NetworkPolicies to control pod-to-pod communication as well as communication between pods and other network endpoints.

    This blog post will walk you through an example of configuring Kubernetes NetworkPolicies. We will block traffic from one namespace into another namespace, while still allowing external traffic to access the “restricted” namespace. As a high-level use case, we will consider the situation where a development team is working on multiple branches of a project, and the pods in the different branches should not be able to communicate with each other. If you are not familiar with the basic concepts of NetworkPolicies, see the Kubernetes documentation here.
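
    As a sketch of the kind of policy the walkthrough builds (the namespace branch-a and the pod CIDR 192.168.0.0/16 are placeholders): applied in branch-a, the policy below admits traffic from pods in the same namespace and from addresses outside the cluster’s pod range, which blocks pods in other namespaces while leaving external access alone.

        # Save as deny-other-namespaces.yaml and apply with:
        #   kubectl apply -f deny-other-namespaces.yaml
        apiVersion: networking.k8s.io/v1
        kind: NetworkPolicy
        metadata:
          name: deny-other-namespaces
          namespace: branch-a
        spec:
          podSelector: {}            # applies to every pod in branch-a
          policyTypes:
          - Ingress
          ingress:
          - from:
            - podSelector: {}        # pods in this same namespace
            - ipBlock:               # anything outside the pod network
                cidr: 0.0.0.0/0
                except:
                - 192.168.0.0/16     # replace with your cluster's pod CIDR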

    Help! I need to change the pod CIDR in my Kubernetes cluster

     

    Your Docker EE Kubernetes cluster has been working great for months. The DevOps team is fully committed to deploying critical applications as Kubernetes workloads using their pipeline, and there are several production applications already deployed in your Kubernetes cluster.

    But today the DevOps team tells you something is wrong; they can’t reach a group of internal corporate servers from Kubernetes pods. They can reach those same servers using basic Docker containers and Swarm services. You’re sure it’s just another firewall misconfiguration, and you enlist the help of your network team to fix it. After several hours of troubleshooting, you realize that the problem is that your cluster’s pod CIDR (Classless Inter-Domain Routing) range overlaps the range those servers use.

    Resistance is futile; management tells you that the server IP addresses can’t be changed, so you must change the CIDR range for your Kubernetes cluster. You do a little Internet surfing and quickly figure out that this is not considered an easy task. Worse yet, most of the advice is for Kubernetes clusters installed using tools like kubeadm or kops, while your cluster is installed under Docker EE UCP.

    Relax! In this blog post, I’m going to walk you through changing the pod CIDR range in Kubernetes running under Docker EE. There will be some disruption when the existing Kubernetes pods are restarted to use IP addresses from the new CIDR range, but it should be minimal if your applications use a replicated design.
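
    Before changing anything, it is worth confirming what the cluster is using today. A quick sketch, assuming calicoctl is available and configured against the cluster’s Calico datastore:

        # The Calico IP pool is the pod CIDR range Kubernetes is actually using.
        calicoctl get ippool -o yaml

        # List the pod IPs currently in use and compare them with the server range.
        kubectl get pods --all-namespaces \
          -o jsonpath='{range .items[*]}{.status.podIP}{"\n"}{end}' | sort -u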


    Kubernetes tolerations working together with Docker UCP scheduler restrictions

    In this blog post we’ll take a look at how the scheduler controls in Docker UCP interact with Kubernetes taints and tolerations. Both are used to control what workloads are allowed to run on manager and DTR (Docker Trusted Registry) nodes. Docker EE UCP manager nodes are also Kubernetes master nodes, and in production systems it is important to restrict what runs on the manager (master) and DTR nodes. We’ll walk through deploying a Kubernetes workload on every node in a Docker EE cluster.
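
    To make the interaction concrete, here is a minimal, hypothetical DaemonSet that tolerates the taint UCP places on manager and DTR nodes; verify the taint key on your own nodes with kubectl describe node, and note that the names and image here are placeholders. Keep in mind that the toleration only lifts the taint-based restriction, so UCP’s own scheduler settings must also permit workloads on those nodes.

        # Save as node-probe.yaml and apply with: kubectl apply -f node-probe.yaml
        apiVersion: apps/v1
        kind: DaemonSet
        metadata:
          name: node-probe
        spec:
          selector:
            matchLabels:
              app: node-probe
          template:
            metadata:
              labels:
                app: node-probe
            spec:
              tolerations:
              - key: com.docker.ucp.manager   # taint UCP applies to manager/DTR nodes
                operator: Exists              # tolerate it regardless of effect
              containers:
              - name: probe
                image: busybox
                command: ["sh", "-c", "sleep 3600"]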

    Deploying a Docker stack file as a Kubernetes workload

    Overview

    Recently I’ve been hosting workshops for a customer who is exploring migrating from Docker Swarm orchestration to Kubernetes orchestration. The customer is currently using Docker EE (Enterprise Edition) 2.1, and plans to continue using that platform, just leveraging Kubernetes rather than Swarm. There are a number of advantages to continuing to use Docker EE, including:

    • Pre-installed Kubernetes.
    • Group (team) and user management, including corporate LDAP integration.
    • Using the Docker UCP client bundle to configure both your Kubernetes and Docker client environment.
    • Availability of an on-premises registry (DTR) that includes advanced features such as image scanning and image promotion.

    I had already conducted a workshop on deploying applications as Docker services in stack files (compose files deployed as Docker stacks), demonstrating self-healing replicated applications, service discovery and the ability to publish ports externally using the Docker ingress network. …
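
    As a preview, here is a minimal, hypothetical stack file and the way it can be handed to the Kubernetes orchestrator from a session that has the UCP client bundle loaded. The --orchestrator flag is part of the Docker EE-era CLI, and the stack name web and the published port are placeholders.

        # Save as web-stack.yml, then deploy to Kubernetes with:
        #   docker stack deploy --orchestrator kubernetes -c web-stack.yml web
        # and inspect the result with: kubectl get pods
        version: "3.7"
        services:
          web:
            image: nginx:alpine
            deploy:
              replicas: 3
            ports:
              - "8080:80"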