In the first post in this series, SSL Options with Kubernetes – Part 1, we saw how to use the Kubernetes LoadBalancer service type to terminate SSL for your application deployed on a Kubernetes cluster in AWS. In this post, we will see how this can be done for a Kubernetes cluster in Azure.
In general, Kubernetes objects are portable across the various types of infrastructure underlying a cluster, e.g., public cloud, private cloud, virtualized, or bare metal. However, some objects are implemented through the Kubernetes concept of Cloud Providers. The
LoadBalancer service type is one of these. AWS, Azure, and GCP (as well as vSphere, OpenStack and others) all implement a load balancer service using the existing load balancer(s) their cloud service provides. As such, each implementation is different. These differences are accounted for in the annotations to the
Service object. For example, here is the specification we used for our service in the previous post.
kind: Service
apiVersion: v1
metadata:
  name: kens-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:123456789012:certificate/0be2e3b9-bc43-4159-8369-d2de55d8ac43"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
spec:
  type: LoadBalancer
  selector:
    app: kens-app
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  - name: https
    protocol: TCP
    port: 443
    targetPort: 80
Notice the annotations in the metadata are specific to AWS and will not apply to Azure (or any other cloud provider). So, how do we make this work in Azure?
The Azure Cloud Provider
Your best source of information on Kubernetes cloud provider implementations is the corresponding repository on GitHub. For Azure, this is the Cloud provider for Azure repository. In this case, we are interested in the Azure LoadBalancer annotations. You will notice there are no annotations related to SSL, HTTPS, or certificates. Azure’s Basic and Standard SKU load balancers, which the Azure cloud provider implementation uses, do not support SSL termination, so the Kubernetes
LoadBalancer service can’t either. We will have to look into other options.
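For comparison, the annotations the Azure cloud provider does recognize control things like whether the load balancer is internal to your virtual network rather than anything certificate-related. As a hedged sketch (the internal variant is not something this walkthrough needs, it just illustrates the annotation mechanism), an internal-only version of our service might look like this:

```yaml
# Sketch: an internal Azure load balancer for our service.
# The annotation below is part of the Azure cloud provider's
# supported set; note there is no SSL/certificate annotation
# we could add here, unlike the AWS annotations shown earlier.
kind: Service
apiVersion: v1
metadata:
  name: kens-service
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: kens-app
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
```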
Building a Kubernetes cluster in Azure
As with the Kubernetes cluster in AWS, we will use Docker Certified Infrastructure (DCI) to build our Kubernetes cluster in Azure. DCI takes an Infrastructure as Code approach, using Terraform to provision infrastructure and Ansible to configure the operating system(s) and install software. To date, DCI has only been available to Docker Certified Consultants like Capstone IT. However, Docker announced at DockerCon last week that DCI would be included in Docker Enterprise 3.0 and will be available to customers with active subscriptions. We will explore DCI further in a future post (or posts).
Creating a LoadBalancer service in Azure
Let’s start by implementing a LoadBalancer service that load balances HTTP traffic to our NGINX deployment, which exposes an HTTP endpoint. Here is the deployment manifest we used before; we will use it again here without any changes.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: kens-deployment
spec:
  selector:
    matchLabels:
      app: kens-app
  replicas: 2
  template:
    metadata:
      labels:
        app: kens-app
    spec:
      containers:
      - name: nginx
        image: nginx:1.15
        ports:
        - containerPort: 80
We will apply this manifest on our Docker Enterprise test cluster in Azure using its built-in Kubernetes orchestrator.
ken$ kubectl apply -f kens-deployment.yaml
deployment.apps/kens-deployment created
ken$ kubectl get all
NAME                                   READY   STATUS    RESTARTS   AGE
pod/kens-deployment-65b9fd745b-tzdfd   1/1     Running   0          2m
pod/kens-deployment-65b9fd745b-vc4hx   1/1     Running   0          2m

NAME   TYPE   CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE

NAME                              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kens-deployment   2         2         2            2           2m

NAME                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/kens-deployment-65b9fd745b   2         2         2       2m
So far, so good. Let’s try creating a simple load balancer service using the following manifest.
kind: Service
apiVersion: v1
metadata:
  name: kens-service
spec:
  type: LoadBalancer
  selector:
    app: kens-app
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
When we apply it and wait about 90 seconds, we see the following.
ken$ kubectl apply -f kens-service.yaml
service/kens-service created
ken$ kubectl get svc
NAME           TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
kens-service   LoadBalancer   10.96.244.29   126.96.36.199   80:34731/TCP   1m
Azure provides a public IP address for the load balancer it created; you can see it in the EXTERNAL-IP column. Unfortunately, unlike the AWS cloud provider, which returns a DNS name, the Azure cloud provider implementation returns only the IP address. Since Azure doesn’t give us a DNS name for the load balancer, we will use this IP address to create a DNS A record in our zone. In this case, we will map kens-app.lab.capstonec.net to 126.96.36.199.
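As an aside, the Azure cloud provider also supports a DNS label annotation (this assumes a recent enough provider version in your cluster) that attaches an Azure-managed name of the form <label>.<region>.cloudapp.azure.com to the public IP, which you could then CNAME to instead of maintaining an A record by hand. A hedged sketch:

```yaml
# Sketch (assumes your cluster's Azure cloud provider version
# supports this annotation): request a DNS label on the load
# balancer's public IP, yielding a name like
# kens-app.<region>.cloudapp.azure.com.
kind: Service
apiVersion: v1
metadata:
  name: kens-service
  annotations:
    service.beta.kubernetes.io/azure-dns-label-name: "kens-app"
spec:
  type: LoadBalancer
  selector:
    app: kens-app
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
```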
Let’s see if we can access our service at http://kens-app.lab.capstonec.net.
We now know we can create a load balancer in Kubernetes on Azure for HTTP. What about HTTPS? On AWS, our service manifest had the following in the ports section of the Service specification.
- name: https
  protocol: TCP
  port: 443
  targetPort: 80
This tells the load balancer to accept traffic on port 443 (HTTPS) and send it to the matching pod(s) on port 80 (HTTP). This is possible because the AWS load balancer supports SSL offloading and can terminate SSL traffic there. We could try this on Azure, and Kubernetes will happily create a load balancer that accepts traffic on 443 and sends it to pods on 80. Unfortunately, that traffic is still SSL encrypted, and our NGINX container (as configured) doesn’t support it. As a result, you will see the following in your browser.
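For reference, the Azure version of that experiment is just our HTTP-only service manifest with the extra HTTPS port mapping carried over (and the AWS annotations removed):

```yaml
# The "try it anyway" service on Azure: port 443 is forwarded
# straight to the pods on port 80, but nothing terminates SSL,
# so the pods receive encrypted bytes NGINX can't parse.
kind: Service
apiVersion: v1
metadata:
  name: kens-service
spec:
  type: LoadBalancer
  selector:
    app: kens-app
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  - name: https
    protocol: TCP
    port: 443
    targetPort: 80
```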
At this point, we have several options using Kubernetes. Instead of a LoadBalancer, we could use an Ingress; we will look at that in my next post. We could use a sidecar proxy in our pods that offloads SSL there; we may look at that in a future post. Another option would be to update our NGINX container to support HTTPS, but that adds overhead both at run time and on developers. Yet another idea would be to use a service mesh like Istio, and we’ll get to that in a future post as well.
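To make the "update NGINX to support HTTPS" option concrete, here is a hedged sketch of what it would involve. The names (kens-tls, kens-nginx-conf) and paths are hypothetical, and you would still need to mount this ConfigMap and a TLS Secret into the deployment's pod spec via volumes, which is exactly the kind of extra plumbing that makes this option costly:

```yaml
# Hypothetical sketch of serving HTTPS from the NGINX pods themselves.
# Assumes a TLS Secret created separately, e.g.:
#   kubectl create secret tls kens-tls --cert=tls.crt --key=tls.key
# and that this ConfigMap plus the Secret are mounted into the pod at
# /etc/nginx/conf.d and /etc/nginx/tls respectively.
kind: ConfigMap
apiVersion: v1
metadata:
  name: kens-nginx-conf
data:
  default.conf: |
    server {
      listen 443 ssl;
      ssl_certificate     /etc/nginx/tls/tls.crt;
      ssl_certificate_key /etc/nginx/tls/tls.key;
      location / {
        root /usr/share/nginx/html;
      }
    }
```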
If we look outside of Kubernetes, what options are available to us in Azure?
Using Azure Front Door
The Azure Front Door Service provides a scalable, secure, and common entry point for web applications. (Azure also provides an Application Gateway, which offers similar functionality within a region rather than globally; the process for implementing SSL offloading is very similar.) Microsoft has a couple of great tutorials in their documentation on how to Add a custom domain to your Front Door and Configure HTTPS on a Front Door custom domain. We’ll use these tutorials (and their links) to:
- Create a DNS name label or an alias record for your load balancer.
  - In my case, I created an alias record, kens-app-lb.lab.capstonec.net, that resolves to my load balancer’s IP address, 126.96.36.199.
  - You don’t have to do this step, but I won’t use an IP address when a DNS name is an option.
- Create an Azure Front Door, in this case kens-app.azurefd.net.
  - You can choose to accept HTTP, HTTPS, or both.
  - The backend will be your load balancer.
  - For the forwarding protocol, choose HTTP only (since our NGINX service only accepts HTTP).
- Add our custom domain, kens-app.lab.capstonec.net, to Front Door.
  - You will need to add a DNS CNAME record for adverify.kens-app.lab.capstonec.net pointing to adverify.kens-app.azurefd.net to verify the domain, and
  - a CNAME record for kens-app.lab.capstonec.net pointing to kens-app.azurefd.net for your application.
- Add our wildcard certificate, for *.lab.capstonec.net, to Key Vault.
- Enable HTTPS on Front Door’s frontend host for our custom domain.
Here’s what we see in the Azure portal after we complete these steps.
Let’s see if we can access our service at https://kens-app.lab.capstonec.net.
We’ve successfully enabled HTTPS access to our application deployed on a Kubernetes cluster in Azure.
AWS vs. Azure for Kubernetes Load Balancers
Both the AWS and Azure Kubernetes cloud provider implementations provide a
LoadBalancer service type. However, the capabilities are very different. We saw this with their ability (or lack thereof) to offload SSL. There are other differences you may run into. For example, AWS creates a load balancer per service while Azure creates one load balancer for all services. This will impact your choices for layer 4 and/or layer 7 routing.
In the next post, we will look at an option for SSL termination that works across all Kubernetes clusters regardless of the underlying infrastructure. We will be using the concept of a Kubernetes Ingress.
Want to know more?
Capstone IT is a Docker Premier Consulting Partner as well as being an Azure Gold and AWS Select partner. If you are interested in finding out more and getting help with your Container, Cloud and DevOps transformation, please Contact Us.