This is part two of a two-part blog about internal DNS resolution and network access for Pods in Kubernetes. In part one we looked at how internal DNS services are set up in Kubernetes and how DNS resolution is configured for the containers in user workload Pods. In this part, we will look at how network traffic gets from those containers to the Pods providing DNS services.
Service discovery is one of the key benefits of using a container/Pod orchestrator. When you create a Service in Kubernetes, controllers running behind the scenes create an entry in the cluster’s DNS records so that other applications deployed in the cluster can look up the Service by name. In part one of this blog, we will look at how DNS resolution is set up for containers in Pods.
In a previous post, What is Container Orchestration?, I explained container orchestration using some examples based on Docker Swarm. While Docker Swarm is undeniably easier to both use and explain, Kubernetes is by far the most prevalent container orchestrator today. So, I’m going to go through the same examples from that previous post but, this time, use Kubernetes. One of the great things about Docker Enterprise is that it supports both Swarm and Kubernetes, so I didn’t have to change my infrastructure at all.
Managing a Kubernetes cluster with one user is easy. Once you go beyond one user, you need to start using Role-Based Access Control (RBAC). But once you get beyond a couple of users and/or teams and a few namespaces for them, it quickly becomes difficult to keep track of who can do what and where. And, as time goes on and more and more people have a hand in setting up your RBAC, it can get even more confusing. You can and should have your RBAC resource definitions in source control, but they’re not easy to read and are hard to visualize. Enter the open source who-can kubectl plugin from the folks at Aqua Security. It gives you the ability to show who (subjects) can do what (verbs) to what (resources) and where (namespaces).
Docker Enterprise Edition uses several default network address CIDR ranges. Learn how you can customize those CIDR ranges to avoid conflicts with your existing networks.
How you can automatically scale Worker Nodes in your AWS Docker Enterprise cluster.
Ingress control is provided out-of-the-box with Docker Enterprise in the form of UCP Interlock. This tutorial will walk you through how to set up multiple ingress controllers across three environments, all in the same cluster.
Why did one of my normally stable Kubernetes nodes suddenly start constantly rebooting and changing its CPU count? The answer involves how it was cloned as a template for new nodes.
How you can apply Kubernetes NetworkPolicies to address your enterprise security and network concerns.
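As a taste of what such a policy looks like, here is a minimal, illustrative NetworkPolicy (the namespace, names, labels, and port are hypothetical, not taken from the post) that allows ingress to one set of Pods only from another labeled set, denying all other inbound traffic:

```yaml
# Hypothetical example: permit ingress to "backend" Pods in the "demo"
# namespace only from Pods labeled app: frontend, on TCP port 8080.
# Selecting the Pods with policyTypes: Ingress denies all other ingress.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

Note that NetworkPolicies are only enforced when the cluster's network plugin (e.g. Calico, which ships with Docker Enterprise) supports them.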
Over the last two or three years I’ve given a similar presentation on containers to operations groups at clients, potential clients, conferences and meetups. Generally, they’re just getting started with containers and are wondering what orchestration is and how it impacts them. In this post, I will talk about what container orchestration is and provide several videos with simple examples of what it means.