SSL Options with Kubernetes – Part 1

In this post (and future posts) we will continue to look at questions our clients have asked about using Docker Enterprise that have prompted us to do further research. Here we look into the options for enabling secure communication with applications running under Kubernetes container orchestration on a Docker Enterprise cluster.

The LoadBalancer service type in Kubernetes is available if you are using one of the major public clouds (AWS, Azure, or GCP) via their respective cloud provider implementations. An Ingress resource is available on any Kubernetes cluster, both on-premises and in the cloud. Both LoadBalancer and Ingress can terminate SSL traffic. In this post, we will show how this is accomplished with an AWS LoadBalancer service.

Create a Deployment

Let’s start by creating a deployment with two replicas of a pod with one container using the nginx:1.15 image from Docker Hub. By default, an NGINX container listens on TCP port 80 (HTTP).

kind: Deployment
apiVersion: apps/v1
metadata:
  name: kens-deployment
  labels:
    app: kens-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: kens-app
  template:
    metadata:
      labels:
        app: kens-app
    spec:
      containers:
        - name: nginx
          image: nginx:1.15
          ports:
            - containerPort: 80

To create the NGINX deployment, we just need to apply the above manifest.

ken$ kubectl apply -f kens-deployment.yml
deployment.apps/kens-deployment created
ken$ kubectl get all
NAME                                          READY   STATUS    RESTARTS   AGE
pod/kens-deployment-7ff87d5f69-5s22g          1/1     Running   0          6s
pod/kens-deployment-7ff87d5f69-qlbzd          1/1     Running   0          6s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE

NAME                                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kens-deployment          2         2         2            2           7s

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/kens-deployment-7ff87d5f69          2         2         2       7s
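The transcript above can also be checked mechanically. A minimal sketch, with the pod listing pasted into a variable for illustration (in a live cluster you would pipe `kubectl get pods -l app=kens-app` instead):

```shell
# Count the Running pods in the captured 'kubectl get all' output.
# OUTPUT is a hardcoded copy of the transcript, not a live query.
OUTPUT='pod/kens-deployment-7ff87d5f69-5s22g          1/1     Running   0          6s
pod/kens-deployment-7ff87d5f69-qlbzd          1/1     Running   0          6s'
RUNNING=$(printf '%s\n' "$OUTPUT" | grep -c 'Running')
echo "running pods: $RUNNING"   # both replicas should be Running
```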

Get a Certificate

SSL requires a certificate. Since we are using AWS, the certificate for this load balancer must exist in AWS Certificate Manager. You can either request a certificate from AWS or import one from your preferred certificate authority (CA). In either case, you will need access to your DNS provider to validate the domain name(s) you use in the certificate. In this case, we'll import a wildcard certificate for our domain issued by GoDaddy. Make sure to request or import the certificate in the region your Docker Enterprise cluster resides in. You will need the ARN of this certificate (see below) for the annotations metadata of the service you create. (For a list of all the AWS LoadBalancer annotations, see Cloud Providers – AWS in the Kubernetes documentation.)

AWS Certificate Manager
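The ARN has a predictable shape, so a quick local check can catch copy/paste mistakes before you apply the service manifest. A sketch using the example ARN from the manifest below (the account ID and certificate ID are placeholders):

```shell
# Validate that a certificate ARN pasted into the service annotation has
# the expected ACM shape: arn:aws:acm:<region>:<account-id>:certificate/<id>.
CERT_ARN="arn:aws:acm:us-east-1:123456789012:certificate/0be2e3b9-bc43-4159-8369-d2de55d8ac43"
if printf '%s' "$CERT_ARN" | grep -Eq '^arn:aws:acm:[a-z0-9-]+:[0-9]{12}:certificate/[0-9a-f-]+$'; then
  echo "ARN looks valid"
else
  echo "ARN is malformed" >&2
fi
```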

Create an HTTPS Service

Next we will create a service that accepts HTTPS (TCP port 443) traffic on an AWS load balancer (ELB) and routes it as HTTP (TCP port 80) to pods carrying the label app: kens-app.

We are now ready to create the manifest for the NGINX service. We want our end users to access our NGINX deployment via HTTP and HTTPS so we need the load balancer to accept traffic on TCP ports 80 and 443. Our NGINX deployment accepts HTTP so we will need to terminate SSL at the load balancer and forward all of the backend traffic as HTTP to our pods.

kind: Service
apiVersion: v1
metadata:
  name: kens-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:123456789012:certificate/0be2e3b9-bc43-4159-8369-d2de55d8ac43"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
spec:
  type: LoadBalancer
  selector:
    app: kens-app
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      targetPort: 80

To create our NGINX service, we just need to apply the manifest above. We will then see our service along with its IP address within the cluster and the DNS name (an A record) for the public interface of the AWS ELB that was created.

ken$ kubectl apply -f kens-service.yml
service/kens-service created
ken$ kubectl get svc
NAME            TYPE           CLUSTER-IP     EXTERNAL-IP                                                              PORT(S)                      AGE
kens-service    LoadBalancer   80:33432/TCP,443:34074/TCP   3s
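The PORT(S) column is worth decoding: each comma-separated entry is servicePort:nodePort/protocol, where the node port is the port Kubernetes allocated on every node for this service. A sketch that pulls the pairs apart, with the column value hardcoded from the transcript above:

```shell
# Decode the PORT(S) column from 'kubectl get svc' -- each comma-separated
# entry has the form <servicePort>:<nodePort>/<protocol>.
PORTS='80:33432/TCP,443:34074/TCP'
printf '%s\n' "$PORTS" | tr ',' '\n' | while IFS=: read -r svc_port rest; do
  node_port=${rest%%/*}   # strip the /protocol suffix
  proto=${rest##*/}
  echo "service port $svc_port -> node port $node_port ($proto)"
done
```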

Add a DNS Entry

We want to access this service at a friendly hostname, so we will need to add a CNAME record in DNS that points to the DNS name AWS supplied for the load balancer it created. To keep your browser happy, the DNS name you use has to match the certificate you used. In this example, the wildcard certificate we used matches our DNS entry.
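A wildcard certificate covers exactly one additional DNS label, so it's worth checking that the CNAME you plan to create is actually covered. A minimal sketch, using example.com and www.example.com as hypothetical stand-ins for the real domain and hostname:

```shell
# A wildcard cert for *.example.com matches exactly one extra label:
# www.example.com is covered, but a.b.example.com and bare example.com are not.
HOST="www.example.com"
WILDCARD_BASE="example.com"
case "$HOST" in
  *.*."$WILDCARD_BASE") RESULT="not covered: wildcard matches only one label" ;;
  *."$WILDCARD_BASE")   RESULT="covered by *.$WILDCARD_BASE" ;;
  *)                    RESULT="not covered by *.$WILDCARD_BASE" ;;
esac
echo "$RESULT"
```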

Access Our HTTPS Service

Once we add the DNS entry, we can use our browser to access the service. You will notice the browser has made a secure connection, and we reach the default welcome page of an NGINX container.


Issues with LoadBalancer

There are a couple of issues with this method. First, it's only available today through certain public cloud providers; it's not available on-premises. Second, the annotations differ between cloud providers, so this method is not portable between providers, or between providers and on-premises. It also requires one load balancer per service, which can become expensive from both a cost and a management perspective if you have a lot of services.

What’s next?

In this post we’ve looked at implementing SSL termination for an HTTP/HTTPS application using an AWS LoadBalancer. Next time we will do the same thing with an Azure LoadBalancer. And we’ll follow that with an Ingress implementation.

Need help?

Do you need help with containers, cloud and/or DevOps? Capstone IT is here to help. We bring years of enterprise IT experience to help you with your digital transformations.
