kubernetes

How Pods access Kubernetes DNS in Docker EE, part one

Service discovery is one of the important benefits of using a container/Pod orchestrator. When you create a Service in Kubernetes, controllers running behind the scenes create an entry for it in the cluster's DNS records. Other applications deployed in the cluster can then look up the Service by name. Kubernetes also configures routing within the cluster to send traffic for the Service to the Service's ephemeral endpoint Pods.
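
For example, here is a minimal sketch of that flow using kubectl (the name "web" and the default namespace are illustrative):

```
# Create a Deployment and expose it as a Service named "web"
kubectl create deployment web --image=nginx
kubectl expose deployment web --port=80

# From any Pod in the cluster, the Service name now resolves via Kubernetes DNS
kubectl run dnstest --rm -it --image=busybox --restart=Never -- \
  nslookup web.default.svc.cluster.local
```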

Understanding Kubernetes DNS configuration and the related traffic flow will help you troubleshoot problems accessing the cluster's DNS from Pods. This two-part series is a deep dive into how Kubernetes does this under the hood. In part one, we look at how Kubernetes sets up DNS resolution for containers in Pods. In part two, we will look at how network traffic flows from the containers in user workload Pods to the Pods providing DNS functionality. We're going to use Kubernetes running under Docker Enterprise Edition for our examples.
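
As a quick preview, you can see the DNS configuration Kubernetes writes into a container by inspecting its /etc/resolv.conf; a sketch (the nameserver IP and search domains vary by cluster):

```
kubectl run dnstest --rm -it --image=busybox --restart=Never -- cat /etc/resolv.conf
# Typical output:
#   nameserver 10.96.0.10
#   search default.svc.cluster.local svc.cluster.local cluster.local
#   options ndots:5
```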

How Pods access Kubernetes DNS in Docker EE, part one Read More »

Kubeman on aisle K8S

I rarely go to Walmart, and I definitely never thought I'd be going to Walmart for technology tools or advice. However, thanks to Kaptain, Aymen El Amri's great curated weekly email on all things Kubernetes (see https://www.faun.dev/), I ran across a great tool from Walmart Labs called Kubeman. If you are managing multiple Kubernetes clusters, and that's almost always the case if you're using Kubernetes, Kubeman is a tool worth considering for troubleshooting. In addition to making it easier to investigate issues across multiple Kubernetes clusters, it understands an Istio service mesh as well.

Kubeman on aisle K8S Read More »

Getting Tomcat logs from Kubernetes pods

I have been working with a client recently on getting Tomcat access and error logs from Kubernetes pods into Elasticsearch and visible in Kibana. As I started to look at the problem and saw Elastic Stack 7.5.0 released, it also seemed like a good idea to move the client up to the latest release. And, now that Helm 3 has been released and no longer requires Tiller, using the Elastic Stack Kubernetes Helm Charts to manage their installs made a lot of sense.
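
As a rough sketch of the starting point with Helm 3 and the official Elastic chart repository (release names are illustrative; chart versions track the 7.5.0 stack):

```
# Add the official Elastic Helm chart repository
helm repo add elastic https://helm.elastic.co
helm repo update

# Install the 7.5.0 stack components; no Tiller required with Helm 3
helm install elasticsearch elastic/elasticsearch --version 7.5.0
helm install kibana elastic/kibana --version 7.5.0
helm install filebeat elastic/filebeat --version 7.5.0
```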

Getting Tomcat logs from Kubernetes pods Read More »

What is Container Orchestration – Kubernetes Version?

In a previous post, What is Container Orchestration?, I explained container orchestration using some examples based on Docker Swarm. While Docker Swarm is undeniably easier to both use and explain, Kubernetes is by far the most prevalent container orchestrator today. So, I'm going to go through the same examples from that previous post but, this time, use Kubernetes. One of the great things about Docker Enterprise is that it supports both Swarm and Kubernetes, so I didn't have to change my infrastructure at all.
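
To give a flavor of the translation, here is a sketch of how a simple replicated web service maps from Swarm to Kubernetes (the names are illustrative, not the exact examples from the earlier post):

```
# Docker Swarm version:
#   docker service create --name web --replicas 3 -p 8080:80 nginx

# A Kubernetes equivalent:
kubectl create deployment web --image=nginx
kubectl scale deployment web --replicas=3
kubectl expose deployment web --port=80 --type=NodePort
```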

What is Container Orchestration – Kubernetes Version? Read More »

A First Look at Helm 3

Helm has been widely publicized as the package manager for Kubernetes, and we've seen the need for it over and over. Unfortunately, Helm 2 requires Tiller, and Tiller opens a lot of security questions. In particular, in a multi-user, multi-organization, and/or multi-tenant cluster, securing the Tiller service account (or accounts) was difficult and problematic. As a result, we've never recommended our clients use Helm in production. With the recent announcement of the first release candidate for Helm 3, it's time to take another look: this version no longer requires or uses Tiller, so most of our security concerns should be gone.
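
The practical difference shows up right away on the command line; a sketch (release and chart names are illustrative, and the stable repo URL is the one in use at the time of the Helm 3 release candidate):

```
# Helm 2: Tiller had to be deployed into the cluster first
#   helm init    # creates the tiller-deploy deployment in kube-system

# Helm 3: no init step; helm talks to the API server directly,
# using your own kubeconfig credentials and RBAC permissions
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm install my-release stable/mysql
```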

A First Look at Helm 3 Read More »

Who Can…?

Managing a Kubernetes cluster with one user is easy. Once you go beyond one user, you need to start using Role-Based Access Control (RBAC). I've delved into this topic several times in the past with posts on how to Create a Kubernetes User Sandbox in Docker Enterprise and Functional Kubernetes Namespaces in Docker Enterprise. But, once you get beyond a couple of users and/or teams and a few namespaces for them, it quickly becomes difficult to keep track of who can do what and where. And, as time goes on and more and more people have a hand in setting up your RBAC, it can get even more confusing. You can and should keep your RBAC resource definitions in source control, but they are not easy to read and are hard to visualize. Enter the open source who-can kubectl plugin from the folks at Aqua Security. It gives you the ability to show who (subjects) can do what (verbs) to what (resources) and where (namespaces).
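
A sketch of the plugin in action (the "dev" namespace is illustrative); it installs cleanly via krew:

```
# Install the plugin via krew
kubectl krew install who-can

# Who (subjects) can create (verb) pods (resource) in the dev namespace (where)?
kubectl who-can create pods -n dev

# Who can delete deployments in the current namespace?
kubectl who-can delete deployments
```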

Who Can…? Read More »

Configure Custom CIDR Ranges in Docker EE

I recently worked with a customer to customize all of the default Classless Inter-Domain Routing (CIDR) ranges used for IP address allocation by Docker Enterprise Edition 3.0 (Docker EE 3.0). The customer primarily wanted to document the customization process for future use. However, there is often a real need to change some of the default CIDR ranges to avoid conflicts with existing private IP addresses already in use within a customer's network. Typically, such a conflict will make it impossible for applications running in containers or pods to access external hosts in the conflicting CIDR range.
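
As one hedged example of the kind of change involved: the engine's default address pools can be overridden in /etc/docker/daemon.json, and UCP's Pod network can be set at install time. The ranges below are illustrative; verify the flag names and image tag against your Docker EE 3.0 documentation:

```
# /etc/docker/daemon.json: override the engine's default address pools
#   {
#     "default-address-pools": [
#       { "base": "10.200.0.0/16", "size": 24 }
#     ]
#   }

# Set the Pod network CIDR when installing UCP
docker container run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:3.2.0 install --pod-cidr 10.201.0.0/16 --interactive
```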

Configure Custom CIDR Ranges in Docker EE Read More »

Auto Scaling Docker Nodes in AWS

Butterball Turkey

I once heard a story from someone who worked at ConAgra. They produce and sell a variety of food products that you and I eat all the time. The most famous is the Butterball turkey; ConAgra owned Butterball from 1990 to 2006. Every Thanksgiving holiday, so I am told, Butterball would have to scale up their call center, as well as their website, to a couple hundred web servers to handle the demand for "How do I cook my turkey?" That's a lot of hardware!

We are only a couple months away from Thanksgiving. So, what do you call a turkey on the day after Thanksgiving? Lucky. #dadjoke

Auto Scaling Docker Nodes in AWS Read More »

Attack of the Kubernetes Clones

One of the customers I support is using Kubernetes under Docker EE UCP (Enterprise Edition Universal Control Plane) and has been very impressed with its stability and ease of management. Recently, however, a worker node that had been very stable for months started evicting Kubernetes pods extremely frequently, reporting inadequate CPU resources. Our DevOps team was still experimenting with determining resource requirements for many of their containerized apps, so at first, we thought the problem was caused by resource contention between pods running on the node.
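
For context, a sketch of the first things worth checking in that situation (the node name is illustrative; kubectl top requires a metrics source):

```
# What has the node allocated, and what conditions is it reporting?
kubectl describe node worker-3

# List recent evictions across all namespaces
kubectl get events --all-namespaces --field-selector reason=Evicted

# Compare requests/limits against live usage
kubectl top node worker-3
```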

Attack of the Kubernetes Clones Read More »

Kubernetes Network Isolation

In the 1980s there was a funny television commercial for an insurance company that lampooned many other insurance companies. These hideous competitors trained their agents to "Say NO, deny the claim!" thereby denying customers the benefits of the insurance policy they had purchased. It always made me chuckle, and I still remember the chant to this day. I want to show you how you can do the same thing, "Say no, deny pod access!", in Kubernetes using NetworkPolicies applied to your application deployments.

While recently working with a customer who is quite new to Docker and the world of Kubernetes, I was asked how they could isolate their applications from each other in a shared Kubernetes cluster.

In a previous blog post entitled Kubernetes Workload Isolation, I discussed how customers have segmented their clusters using a combination of VLANs, Collections, and Namespaces. But if you are not using VLANs to segment your networking among VMs, and you are not using Collections to separate VMs into different RBAC groups, then you will need a different approach, such as the default-deny NetworkPolicy sketched below.
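
Here is a minimal sketch of that default-deny approach (the "team-a" namespace is illustrative):

```
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-a
spec:
  podSelector: {}       # empty selector matches every Pod in the namespace
  policyTypes:
  - Ingress             # no ingress rules listed, so all inbound traffic is denied
EOF
```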

Kubernetes Network Isolation Read More »