Kubernetes Series – Pods (part 1)

A couple of weeks back I took the Certified Kubernetes Application Developer exam, developed by the Cloud Native Computing Foundation (CNCF) in collaboration with The Linux Foundation. For me personally, it was satisfying to complete the test and become certified.

Today’s blog will be the first in a series to share with you all that I have learned about Kubernetes and help you on your journey to understanding this container orchestration tool.

Kubeman on aisle K8S

I rarely go to Walmart, and I definitely never thought I’d be going to Walmart for technology tools or advice. However, thanks to Kaptain, Aymen EL Amri’s great curated weekly email on all things Kubernetes (see https://www.faun.dev/), I ran across a great tool from Walmart Labs called Kubeman. If you are managing multiple Kubernetes clusters, and that’s almost always the case if you’re using Kubernetes, Kubeman is a tool you need to consider for troubleshooting. In addition to making it easier to investigate issues across multiple clusters, it understands an Istio service mesh as well.

Getting Tomcat logs from Kubernetes pods

I have been working with a client recently on getting Tomcat access and error logs from Kubernetes pods into Elasticsearch and visible in Kibana. As I started to look at the problem and saw that Elastic Stack 7.5.0 had been released, it seemed like a good idea to move them up to the latest release as well. And, now that Helm 3 has been released and no longer requires Tiller, using the Elastic Stack Kubernetes Helm Charts to manage their installs made a lot of sense.
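
To give a feel for what that looks like with Helm 3, here is a minimal sketch of adding the Elastic Helm repository and installing the 7.5.0 charts; the release names and the logging namespace are illustrative choices, not the client's actual setup:

    # Add the official Elastic Helm chart repository
    helm repo add elastic https://helm.elastic.co
    helm repo update

    # Helm 3 installs straight into the cluster -- no Tiller required
    kubectl create namespace logging
    helm install elasticsearch elastic/elasticsearch --version 7.5.0 --namespace logging
    helm install kibana elastic/kibana --version 7.5.0 --namespace logging
    helm install filebeat elastic/filebeat --version 7.5.0 --namespace logging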

Dynamically Configure reCaptcha Site Key

I’ve recently been working on a new web portal and ran into a problem of trying to figure out how to dynamically configure the reCaptcha site key.

It is a basic single-page application (SPA) using the Angular web framework (with component-based development), written in TypeScript (love it!) and decorated with Bootstrap CSS (can’t live without it!). The frontend Angular application is served up from the backend Java Spring Boot application. And the whole thing is deployed in Docker containers to a Docker Swarm cluster.
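
As a rough sketch of the idea, the site key can be injected into the Swarm service as an environment variable and handed to the Angular app at startup through a small config endpoint on the Spring Boot backend. The service name, image, and /api/config endpoint below are hypothetical, purely for illustration:

    # Hypothetical service name, image, and endpoint -- for illustration only
    docker service create --name portal \
      --env RECAPTCHA_SITE_KEY=6Lc-your-site-key \
      --publish 8080:8080 \
      registry.example.com/portal:latest

    # The SPA fetches the key at startup instead of hard-coding it in the bundle
    curl http://portal.example.com/api/config
    # {"recaptchaSiteKey":"6Lc-your-site-key"}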

I have a lot to be thankful for

In the days leading up to the Thanksgiving Holiday weekend, I want to talk about being thankful.  This is not your typical tech blog post.

Today my wife Sharon and I both went outside into the front garden to trim back the shrubs, prune the rose bushes, and remove all the debris from our summer plants.  It was a pleasant afternoon with a cool breeze in the air.  The sun shone on my back and warmed me as I worked.  With a pair of leather gloves and some pruning shears in hand, we made short work of the fall garden cleanup.  Sharon and I enjoy our time together and especially working together on our property.

What is Container Orchestration – Kubernetes Version?

In a previous post, What is Container Orchestration?, I explained container orchestration using some examples based on Docker Swarm. While Docker Swarm is undeniably easier to both use and explain, Kubernetes is by far the most prevalent container orchestrator today. So, I’m going to go through the same examples from that previous post but, this time, use Kubernetes. One of the great things about Docker Enterprise is that it supports both Swarm and Kubernetes, so I didn’t have to change my infrastructure at all.
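
For a taste of what those examples look like on the Kubernetes side, here is a minimal sketch of the kind of commands involved; the nginx image and the names are placeholders rather than the actual examples from the post:

    # Roughly the Kubernetes equivalent of `docker service create --replicas 3 ...`
    kubectl create deployment web --image=nginx:1.17
    kubectl scale deployment web --replicas=3
    kubectl expose deployment web --port=80

    # The scheduler spreads the replicas across the worker nodes
    kubectl get pods -o wide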

A First Look at Helm 3

Helm has been widely publicized as the package manager for Kubernetes, and we’ve seen the need for it over and over. Unfortunately, Helm 2 requires Tiller, and Tiller raises a lot of security questions. In particular, in a multi-user, multi-organization, and/or multi-tenant cluster, securing the Tiller service account (or accounts) was difficult and problematic. As a result, we’ve never recommended our clients use Helm in production. With the recent announcement of the first release candidate for Helm 3, it’s time to take another look: this version no longer requires or uses Tiller, so many (if not most) of our security concerns should be gone.
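
To illustrate the difference, with Helm 3 there is no "helm init" step and no Tiller pod in the cluster; the client simply uses your own kubeconfig credentials and RBAC permissions. A minimal sketch, with the release and chart names as examples only:

    # No `helm init`, no Tiller service account to secure -- Helm 3 talks to the
    # cluster using your own kubeconfig credentials
    helm repo add stable https://kubernetes-charts.storage.googleapis.com
    helm install my-ingress stable/nginx-ingress
    helm ls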

Who Can…?

Managing a Kubernetes cluster with one user is easy. Once you go beyond one user, you need to start using Role-Based Access Control (RBAC). I’ve delved into this topic several times in the past with posts on how to Create a Kubernetes User Sandbox in Docker Enterprise and Functional Kubernetes Namespaces in Docker Enterprise. But once you get beyond a couple of users and/or teams and a few namespaces for them, it quickly becomes difficult to keep track of who can do what, and where. And, as time goes on and more and more people have a hand in setting up your RBAC, it can get even more confusing. You can, and should, keep your RBAC resource definitions in source control, but they’re not easy to read and are hard to visualize. Enter the open source who-can kubectl plugin from the folks at Aqua Security. It gives you the ability to show who (subjects) can do what (verbs) to what (resources) and where (namespaces).
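
For example, once the plugin is installed (it’s available through the krew plugin manager), the questions you’d otherwise have to untangle from a pile of role bindings become one-liners; the dev namespace below is just a placeholder:

    # Install the plugin via krew
    kubectl krew install who-can

    # Who can create deployments in the dev namespace?
    kubectl who-can create deployments --namespace dev

    # Who can read secrets anywhere in the cluster?
    kubectl who-can get secrets --all-namespaces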

Configure Custom CIDR Ranges in Docker EE

I recently worked with a customer to customize all of the default Classless Inter-Domain Routing (CIDR) ranges used for IP address allocation by Docker Enterprise Edition 3.0 (Docker EE 3.0). The customer primarily wanted to document the customization process for future use. However, there is often a real need to change some of the default CIDR ranges to avoid conflicts with private IP addresses already in use within a customer’s network. Typically, such a conflict makes it impossible for applications running in containers or pods to reach external hosts in the conflicting CIDR range.
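
To give a sense of where those defaults get overridden, here is a hedged sketch of the main layers involved; the specific CIDR values and the UCP image tag are illustrative only, not the customer’s actual configuration:

    # Engine level: /etc/docker/daemon.json -- address pools for local bridge networks
    #   { "default-address-pools": [ { "base": "10.10.0.0/16", "size": 24 } ] }

    # Swarm level: choose the overlay network address pool at init time
    docker swarm init --default-addr-pool 10.20.0.0/16 --default-addr-pool-mask-length 24

    # UCP (Kubernetes) level: override the pod network at install time
    docker container run --rm -it --name ucp \
      -v /var/run/docker.sock:/var/run/docker.sock \
      docker/ucp:3.2.5 install --pod-cidr 10.30.0.0/16 --interactive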