DNS and Certificates made simple for Kubernetes

One of the challenges in demonstrating what you have done in Kubernetes is directing others to your application in a secure manner. That typically means creating a DNS entry and requesting a TLS certificate. Two projects, external-dns and cert-manager respectively, are making it much easier to automate this process and hide the complexities from developers. Both projects are under active development (with commits in the last few hours or days) and attract a lot of interest (thousands of stars on GitHub).
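To give a feel for how the two projects fit together, here is a sketch of an Ingress that both tools can act on. The host name, the issuer name (letsencrypt-prod), and the backend Service details are placeholders, and the example assumes a cert-manager ClusterIssuer has already been set up; external-dns reads the host from the Ingress rules, while cert-manager's annotation triggers issuance of a certificate into the named Secret:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-app
  annotations:
    # cert-manager sees this annotation and requests a certificate for the
    # hosts listed under tls, storing it in the named Secret.
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  rules:
    - host: demo.example.com   # external-dns creates a DNS record for this host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-app
                port:
                  number: 80
  tls:
    - hosts:
        - demo.example.com
      secretName: demo-app-tls # cert-manager writes the signed certificate here
```

With this in place, neither the DNS record nor the certificate requires a manual request; both controllers reconcile them from the Ingress itself.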

How the New VMware vSphere Updates Help Developers Unlock Kubernetes

Can we do it faster and better than ever before? It’s a question that, deep down, every agile development team wants to answer. Creating and deploying software better and faster is the impetus behind the rise of DevOps culture and the growth of containerization (container usage is expected to grow 64% by 2022).

In the pursuit of app modernization, there’s an emphasis on finding new efficiencies and ways to streamline the process. These goals are achievable whether the initiative is migrating a legacy app to containerized microservices or creating cloud-native apps for use across hybrid IT environments.

You Will Love These Cloud-native App Architecture Patterns

VMworld last week continued 2020’s atypical tech-world tradition of offering all its sessions virtually. That format definitely makes the transition from session room to session room much quicker and easier. But those of us tracking our physical activity earn many, many fewer steps!

This year’s conference offered the usual, very broad selection of sessions, addressing all the new features and trends across VMware’s huge portfolio of products and offerings. In my opinion, as an application solution techie geek, the fundamental transition of vSphere to an orchestrated container platform based on Kubernetes (demystified in App Modernization) was far and away the most interesting track of the conference. The architect in me votes hands-down that Paul Czarkowski’s session, Cloud Native Operations on Kubernetes, was the best presentation of the conference. No question it’s worth an hour of your time!

Add Kubernetes Users to Your Cluster

If you are working with Kubernetes a lot, you have probably built several basic clusters for learning purposes using kubeadm and the documentation here. As you start exploring topics like RBAC Roles and Pod Security Policies, you will soon notice that only one user was created in each cluster. And since that user has the cluster-admin role, it can do anything in any namespace. To try out many of the Kubernetes security-related concepts in your clusters, you will need to add Kubernetes users that are not cluster administrators. For your basic clusters, you probably don’t have integration set up with an external system to add and authenticate users. And as the Kubernetes docs note here: “Kubernetes does not have objects which represent normal user accounts. Normal users cannot be added to a cluster through an API call.”

This blog post will show you how to create new Kubernetes users in your clusters, focusing on clusters created using kubeadm.
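At a high level, the approach on a kubeadm cluster is to issue an X.509 client certificate signed by the cluster CA; Kubernetes takes the username from the certificate’s CN field and group memberships from its O fields. The sketch below uses a throwaway CA so it can run anywhere (on a real kubeadm cluster you would sign with `/etc/kubernetes/pki/ca.crt` and `ca.key` on a control-plane node), and the user `jane` in group `developers` is a placeholder:

```shell
# Create a throwaway CA standing in for the cluster CA
# (kubeadm keeps the real one at /etc/kubernetes/pki/ca.{crt,key}).
openssl genrsa -out ca.key 2048
openssl req -x509 -new -key ca.key -subj "/CN=demo-kubernetes-ca" -days 1 -out ca.crt

# Generate a key and CSR for the new user; Kubernetes reads the
# username from CN and the user's groups from O.
openssl genrsa -out jane.key 2048
openssl req -new -key jane.key -subj "/CN=jane/O=developers" -out jane.csr

# Sign the user's CSR with the CA.
openssl x509 -req -in jane.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 1 -out jane.crt

# Confirm the issued certificate chains to the CA.
openssl verify -CAfile ca.crt jane.crt
```

The resulting key and certificate can then be added to a kubeconfig with `kubectl config set-credentials jane --client-certificate=jane.crt --client-key=jane.key`; until RBAC bindings grant the user or group permissions, the new user can authenticate but do almost nothing.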

Hands-on with Kubernetes Pod Security Policies

Kubernetes Pod Security Policies allow you to control the security specifications that pods must adhere to in order to run in your cluster. You can block users from deploying inherently insecure pods, whether they do so intentionally or unintentionally. This sounds like a great feature and a security best practice, and it can be a big step toward keeping your cluster free of insecure resources.

However, some pods may require additional security permissions beyond what most cluster users are allowed to deploy. For example, monitoring or metrics tooling may need host network access or may need to run in privileged mode. Also, you may need to allow developers to run applications with additional capabilities during early development stages just to make progress.

How hard is it to use Pod Security Policies to judiciously secure your cluster? We’ll look at that in this blog post.
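As a taste of what a policy looks like, here is a sketch of a restrictive PodSecurityPolicy; the name and the exact rule choices are illustrative, and note that a policy has no effect until users or service accounts are granted the `use` verb on it via RBAC:

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false                 # no privileged containers
  allowPrivilegeEscalation: false
  hostNetwork: false                # no access to the host's network
  hostPID: false
  hostIPC: false
  runAsUser:
    rule: MustRunAsNonRoot          # containers may not run as root
  seLinux:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:                          # only these volume types are allowed
    - configMap
    - secret
    - emptyDir
    - persistentVolumeClaim
```

A policy like this would block the privileged or host-network pods mentioned above, which is exactly why monitoring and metrics tooling typically needs a separate, more permissive policy bound only to its own service accounts.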

How Pods access Kubernetes DNS in Docker EE, part two

This is part two of a two-part blog about Kubernetes DNS resolution and network access by Pods in Kubernetes. In part one we looked at internal Kubernetes DNS and how DNS resolution is configured for containers. In this part, we look at how network traffic gets from the containers in user workload Pods to Pods providing DNS functionality. We’re using Kubernetes running under Docker EE UCP (Docker Enterprise Edition Universal Control Plane) in this example. You can find more information about Docker EE here. Docker EE uses the Calico network plugin for Kubernetes, so some of the details are specific to Calico.

How Pods access Kubernetes DNS in Docker EE, part one

Service discovery is one of the important benefits of using a container/Pod orchestrator. When you create a Service in Kubernetes, controllers running behind the scenes create an entry in Kubernetes DNS records. Then other applications deployed in the cluster can look up the Service using its name. Kubernetes also configures routing within the cluster to send traffic for the Service to the Service’s ephemeral endpoint Pods.

Understanding Kubernetes DNS configuration and related traffic flow will help you troubleshoot problems accessing the cluster’s DNS from Pods. This is part one of a two-part deep-dive into how Kubernetes does this under the hood. In part one of this blog, we will look at how Kubernetes sets up DNS resolution for containers in Pods. In part two, we will look at how network traffic flows from containers in Pods for user workloads to the Pods providing DNS functionality. We’re going to use Kubernetes running under Docker Enterprise Edition for our examples in this blog.
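As a concrete example, a Service named `myservice` in the `default` namespace gets the DNS name `myservice.default.svc.cluster.local`. Inside a container, the generated `/etc/resolv.conf` typically looks like the fragment below; the nameserver IP is the ClusterIP of the cluster’s DNS Service (`10.96.0.10` is a common kubeadm default, but it varies per cluster):

```
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
```

The search list is why pods in the same namespace can reach the Service as plain `myservice`, and the `ndots:5` option is why most short names are tried against those cluster suffixes before being sent upstream.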

Kubernetes Series – Pods (part 1)

A couple of weeks back I took the Certified Kubernetes Application Developer (CKAD) exam, developed by the Cloud Native Computing Foundation (CNCF) in collaboration with The Linux Foundation. For me personally, it was satisfying to complete the exam and become certified.

Today’s blog is the first in a series sharing all that I have learned about Kubernetes, to help you on your journey toward understanding this container orchestration tool.

Kubeman on aisle K8S

I rarely go to Walmart, and I definitely never thought I’d be going to Walmart for technology tools, advice, etc. However, thanks to Aymen EL Amri’s great curated weekly email on all things Kubernetes, Kaptain (see https://www.faun.dev/), I ran across a great tool from Walmart Labs called Kubeman. If you are managing multiple Kubernetes clusters, and that’s almost always the case if you’re using Kubernetes, Kubeman is a tool you need to consider for troubleshooting. In addition to making it easier to investigate issues across multiple clusters, it understands an Istio service mesh as well.

Getting Tomcat logs from Kubernetes pods

I have been working with a client recently on getting Tomcat access and error logs from Kubernetes pods into Elasticsearch and visible in Kibana. As I started to look at the problem and saw Elastic Stack 7.5.0 released, it also seemed like a good idea to move them up to the latest release. And, now that Helm 3 has been released and no longer requires Tiller, using the Elastic Stack Kubernetes Helm Charts to manage their installs made a lot of sense.
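For reference, the basic Helm 3 workflow for the Elastic Stack charts looks something like the sketch below; the release names and the version pin are illustrative, and the install steps assume a running Kubernetes cluster and a configured kubeconfig:

```shell
# Add the official Elastic Helm chart repository and refresh the index.
helm repo add elastic https://helm.elastic.co
helm repo update

# Install Elasticsearch, Kibana, and Filebeat (to ship the Tomcat logs),
# pinning the chart versions to match the Elastic Stack release.
helm install elasticsearch elastic/elasticsearch --version 7.5.0
helm install kibana elastic/kibana --version 7.5.0
helm install filebeat elastic/filebeat --version 7.5.0
```

With Helm 3 there is no Tiller to install first; the `helm` client talks to the cluster directly using your kubeconfig credentials.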