Kubernetes NetworkPolicies in Docker Enterprise Edition

Kubernetes running under Docker UCP uses the Calico CNI plugin, so you can use Kubernetes NetworkPolicies to control pod-to-pod communication as well as communication between pods and other network endpoints.

This blog post will walk you through an example of configuring Kubernetes NetworkPolicies. We will block traffic from one namespace into another namespace, while still allowing external traffic to access the “restricted” namespace. As a high-level use case, we will consider the situation where a development team is working on multiple branches of a project, and the pods in the different branches should not be able to communicate with each other. If you are not familiar with the basic concepts of NetworkPolicies, see the Kubernetes documentation here.

Clarification of acronyms:

  • UCP: Docker Universal Control Plane
  • CNI: Container Network Interface
  • RBAC: Role-Based Access Control
  • CIDR: Classless Inter-Domain Routing
  • NAT: Network Address Translation

To demonstrate how NetworkPolicies work in practice, we will create two namespaces, deploy workloads in those namespaces, and then control traffic in several scenarios. For simplicity, we will use Tomcat as the application for all workloads, and we will test connectivity using the curl command that is available in the Tomcat image. We will deploy two separate instances of Tomcat in each namespace. Each Tomcat instance will be deployed as a Kubernetes deployment exposed by a Kubernetes service. This will allow us to demonstrate the effect of NetworkPolicies on pod-to-pod traffic and pod-to-service traffic in the same namespace and in different namespaces.

We will use a cluster-admin account for our demo work to avoid mixing RBAC considerations with NetworkPolicy considerations.

You can download the example manifest files for this post here if you would like to try out some of the scenarios yourself.

What we will demonstrate

Expanding on the previous description of this demo, we will explore using NetworkPolicies to control traffic in the following scenarios:

  • Deny all ingress and egress traffic to/from all pods in a namespace.
  • Deny all ingress traffic to pods in a namespace.
    • This has the side effect of also blocking communication between pods in that namespace.
  • Allow traffic between pods inside a namespace.
  • Allow ingress traffic from sources external to the cluster to pods in a namespace that otherwise blocks ingress traffic from other namespaces.

Setup for the demo

Label nodes to control where your workloads run

This step is optional, but if you skip it you will need to modify the deployment manifests used in this post. Depending on the CPU and memory resources available on your nodes, you may need to label more than one node. We will use three nodes for this demo to simulate what you might encounter in a real-world cluster.

kubectl label nodes <DEMO-NODE-1> my-nodelabel=np-demo
kubectl label nodes <DEMO-NODE-2> my-nodelabel=np-demo
kubectl label nodes <DEMO-NODE-3> my-nodelabel=np-demo
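
To confirm which nodes carry the label, you can list them by label selector:

kubectl get nodes -l my-nodelabel=np-demo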

Set up the namespaces

We will create two namespaces, dev-branch-1 and dev-branch-2:

kubectl create ns dev-branch-1
kubectl create ns dev-branch-2

Add labels to the namespaces. This is needed to select namespaces in the NetworkPolicies. We will use the labels ns-id=dev-br-1 and ns-id=dev-br-2:

kubectl label ns dev-branch-1 ns-id=dev-br-1
kubectl label ns dev-branch-2 ns-id=dev-br-2
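
You can verify the namespace labels with:

kubectl get ns dev-branch-1 dev-branch-2 --show-labels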

Set up the workloads

This is a high-level diagram of what our demo environment will look like:

There will be two workloads in each namespace, each identified by the following labels:

app: app-<APP_NUMBER>
project: project-1-br<BRANCH_NUMBER>

There will be two branches, project-1-br1 and project-1-br2, each deployed into its own namespace. Each branch will include two apps: app-1 and app-2.

This is an example of one of the deployment manifests we will use:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: app-1
  name: app-1
  namespace: dev-branch-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-1
      project: project-1-br1
  template:
    metadata:
      labels:
        app: app-1
        project: project-1-br1
    spec:
      containers:
      - image: tomcat:8
        name: tomcat-app1
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: my-nodelabel
                operator: In
                values:
                - np-demo

We will use the default ClusterIP type services for this demo. The service for app-1 will listen on port 9000 and the service for app-2 will listen on port 9002. This is an example of a service manifest:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: app-1
  name: app-1-svc
  namespace: dev-branch-1
spec:
  ports:
  - port: 9000
    protocol: TCP
    targetPort: 8080
  selector:
    app: app-1
    project: project-1-br1

All the manifest files for the deployments and services are located in a single directory so we can create them with a single command:

kubectl apply -f <DEPLOYMENT_MANIFEST_DIRECTORY>

Check that the objects were successfully created. The results for the pods and services are shown here. We will use some of this data to verify communications prior to applying NetworkPolicies:

kubectl get all -n dev-branch-1 -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP             NODE               NOMINATED NODE
pod/app-1-55b7b9c47c-z9x67   1/1     Running   0          19m   10.16.116.23   ip-172-31-10-195   <none>
pod/app-2-56d565d567-rv7mv   1/1     Running   0          19m   10.16.116.24   ip-172-31-10-195   <none>

NAME                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE   SELECTOR
service/app-1-svc   ClusterIP   10.96.7.182     <none>        9000/TCP   19m   app=app-1,project=project-1-br1
service/app-2-svc   ClusterIP   10.96.100.121   <none>        9002/TCP   19m   app=app-2,project=project-1-br1
kubectl get all -n dev-branch-2 -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP             NODE               NOMINATED NODE
pod/app-1-75dbd6d84b-9w9fg   1/1     Running   0          20m   10.16.116.25   ip-172-31-10-195   <none>
pod/app-2-9c7df5664-6gz2f    1/1     Running   0          20m   10.16.116.26   ip-172-31-10-195   <none>

NAME                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE   SELECTOR
service/app-1-svc   ClusterIP   10.96.164.54   <none>        9000/TCP   20m   app=app-1,project=project-1-br2
service/app-2-svc   ClusterIP   10.96.17.182   <none>        9002/TCP   20m   app=app-2,project=project-1-br2

Verify communications without any NetworkPolicies

Without any NetworkPolicies applied, network access is allowed between all pods in both namespaces, either directly or through the corresponding services:

Let’s verify that the pods can communicate with each other directly, and through the services. Examples are shown here for internal communications inside the dev-branch-1 namespace, and from the dev-branch-1 namespace to the dev-branch-2 namespace.

We will exec into one of the pods in the dev-branch-1 namespace so we can test communications from the shell:

kubectl -n dev-branch-1 exec -it app-1-55b7b9c47c-z9x67  bash
root@app-1-55b7b9c47c-z9x67:/usr/local/tomcat#

From the shell, we will check connectivity to pods and services:

To a pod in the same namespace:

curl -I 10.16.116.24:8080
HTTP/1.1 200
Content-Type: text/html;charset=UTF-8
Transfer-Encoding: chunked
Date: Wed, 24 Apr 2019 14:42:11 GMT

To a service in the same namespace:

curl -I app-2-svc:9002
HTTP/1.1 200
Content-Type: text/html;charset=UTF-8
Transfer-Encoding: chunked
Date: Wed, 24 Apr 2019 14:43:28 GMT

To a pod in the dev-branch-2 namespace:

curl -I 10.16.116.25:8080
HTTP/1.1 200
Content-Type: text/html;charset=UTF-8
Transfer-Encoding: chunked
Date: Wed, 24 Apr 2019 14:44:18 GMT

To a service in the dev-branch-2 namespace:

curl -I app-2-svc.dev-branch-2:9002
HTTP/1.1 200
Content-Type: text/html;charset=UTF-8
Transfer-Encoding: chunked
Date: Wed, 24 Apr 2019 14:45:10 GMT

To a server external to the cluster (a Google DNS server for example):

ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=116 time=1.42 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=116 time=1.40 ms

Without any NetworkPolicies applied, connectivity works internally within the namespace, to pods and services in other namespaces, and to external servers.

Apply NetworkPolicies

Deny all ingress and egress traffic to and from a namespace

We will apply the following NetworkPolicy to our dev-branch-1 namespace to deny all traffic to and from all pods in that namespace. Note that this is done by selecting all pods in that namespace using an empty podSelector, and then not whitelisting any ingress or egress traffic. The idea here is that we will block all traffic to start with, and then we can later add more NetworkPolicies to selectively allow only the network traffic needed for our apps to function correctly.

The following diagram shows what we think our connectivity will probably look like:

This is our NetworkPolicy:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: dev-branch-1
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress

Let’s apply the NetworkPolicy:

kubectl apply -f .\network-policies\netpol-deny-all-ns-1.yaml
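
If you want to confirm that the policy was created and review how Kubernetes interpreted it, you can inspect it with:

kubectl -n dev-branch-1 describe networkpolicy deny-all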

Now let’s try the same tests from the previous section. None of the tests are successful, including those for communications inside the namespace. In addition, if we try to ping the services by name, we get a response similar to:

ping  app-2-svc
ping: app-2-svc: Temporary failure in name resolution

The problem here is that we have blocked egress to everything, including pods and services in the same namespace, cluster-wide services such as kube-dns, and external services such as DNS. We can also infer that we will not be able to access general corporate resources such as databases and LDAP authentication servers. Unless we need extremely high security, this approach will probably require too much configuration and maintenance to be practical.
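
If you did need to keep a default-deny egress posture, you would have to add back narrow egress exceptions one by one, starting with cluster DNS. As a rough sketch only (it assumes your cluster DNS pods carry the common k8s-app=kube-dns label, which may not match your environment), such an exception might look like this:

# Hypothetical sketch: allow DNS egress from all pods in dev-branch-1.
# Assumes the cluster DNS pods are labeled k8s-app=kube-dns; adjust for your cluster.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: dev-branch-1
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53

Similar exceptions would be needed for every other destination the pods depend on, which is exactly where the configuration and maintenance burden comes from.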

This is what our connectivity diagram really looks like:


Let’s delete this NetworkPolicy so that we can try something a little less restrictive:

kubectl delete -f .\network-policies\netpol-deny-all-ns-1.yaml
networkpolicy.networking.k8s.io "deny-all" deleted

Deny all ingress to the dev-branch-1 namespace

We’ll use a NetworkPolicy that blocks ingress traffic to pods in the dev-branch-1 namespace but allows all egress traffic from pods in that namespace. We’ll just focus on one namespace for now; once we get NetworkPolicies configured the way we need them for one namespace, we can use the same approach for other namespaces.

This is what we expect our connectivity diagram to look like:

The NetworkPolicy looks like this:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: dev-branch-1
spec:
  podSelector: {}
  policyTypes:
  - Ingress

Apply the NetworkPolicy:

kubectl apply -f .\network-policies\netpol-deny-ingress-ns-1.yaml

Now we’ll check connectivity from the same pod in the dev-branch-1 namespace again. We’ll only show the result of a few of the tests here:

kubectl -n dev-branch-1 exec -it app-1-55b7b9c47c-z9x67  bash
root@app-1-55b7b9c47c-z9x67:/usr/local/tomcat#

To a pod in the same namespace:

curl -I 10.16.116.24:8080
curl: (7) Failed to connect to 10.16.116.24 port 8080: Connection timed out

To a service in the dev-branch-2 namespace:

curl -I app-2-svc.dev-branch-2:9002
HTTP/1.1 200
Content-Type: text/html;charset=UTF-8
Transfer-Encoding: chunked
Date: Wed, 24 Apr 2019 15:01:10 GMT

To a server external to the cluster (a Google DNS server for example):

ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=116 time=1.42 ms

We can access pods and services in the dev-branch-2 namespace as well as external servers, but we can’t connect to pods or services in the same namespace. This makes sense since we are denying all ingress traffic to our pods, even from pods in the same namespace.

Let’s try accessing the pods and service in the dev-branch-1 namespace from a pod in the dev-branch-2 namespace. We will exec into a pod in the dev-branch-2 namespace:

kubectl -n dev-branch-2 exec -it app-1-75dbd6d84b-9w9fg  bash
root@app-1-75dbd6d84b-9w9fg:/usr/local/tomcat#

Once in the shell in the container, we try connecting to a pod and a service in the dev-branch-1 namespace:

root@app-1-75dbd6d84b-9w9fg:/usr/local/tomcat# curl -I 10.16.116.23:8080
(returns timeout error)
root@app-1-75dbd6d84b-9w9fg:/usr/local/tomcat# curl -I app-1-svc.dev-branch-1:9000
(returns timeout error)

Both connection attempts time out, which is the result we want; we have isolated pods and services in the dev-branch-1 namespace from access by apps in other namespaces.

However, we need to allow communications between pods in the same namespace. This is what our current connectivity diagram really looks like:

Allow ingress access to pods in the dev-branch-1 namespace from other pods in the same namespace

Next, we’ll add a NetworkPolicy that allows pods in a specific namespace to communicate with each other. Our NetworkPolicy is actually a little more restrictive: it only allows pods in our project (pods with the label project=project-1-br1) to accept traffic from other pods in the same project. In the manifest below, the namespaceSelector and podSelector appear in the same from entry, so incoming traffic must satisfy both conditions: it must originate in the dev-branch-1 namespace and come from a pod labeled project=project-1-br1.

Our new NetworkPolicy looks like this:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-internal-ingress-project-1-br1
  namespace: dev-branch-1
spec:
  podSelector:
    matchLabels:
      project: project-1-br1
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          ns-id: dev-br-1
      podSelector:
        matchLabels:
          project: project-1-br1

Apply the NetworkPolicy:

kubectl apply -f .\network-policies\netpol-allow-ingress-internal-ns-1.yaml
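
Both the deny-all-ingress policy and this new allow policy now exist in the namespace. NetworkPolicies are additive, so the allow rule opens a hole in the blanket ingress deny rather than replacing it. You can list the policies to confirm:

kubectl -n dev-branch-1 get networkpolicy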

Exec into a pod in the dev-branch-1 namespace, and try connecting to a pod and a service in that same namespace. Both connection attempts succeed this time:

kubectl -n dev-branch-1 exec -it app-1-55b7b9c47c-z9x67  bash
root@app-1-55b7b9c47c-z9x67:/usr/local/tomcat# curl -I 10.16.116.24:8080
HTTP/1.1 200
Content-Type: text/html;charset=UTF-8
Transfer-Encoding: chunked
Date: Wed, 24 Apr 2019 15:14:58 GMT

root@app-1-55b7b9c47c-z9x67:/usr/local/tomcat# curl -I app-1-svc:9000
HTTP/1.1 200
Content-Type: text/html;charset=UTF-8
Transfer-Encoding: chunked
Date: Wed, 24 Apr 2019 15:15:55 GMT

You can apply the tests from previous sections to verify that connectivity from the dev-branch-2 namespace is still blocked and that outgoing connectivity to other namespaces and external servers still works.
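
For example, using the pod names from the earlier listings (yours will differ), the checks might look like this:

# From dev-branch-2: ingress into dev-branch-1 should still time out
kubectl -n dev-branch-2 exec app-1-75dbd6d84b-9w9fg -- curl -I app-1-svc.dev-branch-1:9000

# From dev-branch-1: egress to other namespaces and external servers should still work
kubectl -n dev-branch-1 exec app-1-55b7b9c47c-z9x67 -- curl -I app-2-svc.dev-branch-2:9002
kubectl -n dev-branch-1 exec app-1-55b7b9c47c-z9x67 -- ping -c 2 8.8.8.8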

This is what our connectivity diagram looks like now:

External access to pods in a namespace

Our dev and test teams need to be able to access some of the apps in the dev-branch-1 namespace for debugging and testing purposes. We’ll change one of the services in the dev-branch-1 namespace to a NodePort type service to expose a port on every node of the cluster. This is what we expect our connectivity diagram to look like, assuming we use port 33456 for the NodePort:

This is our new service manifest:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: app-2
  name: app-2-nodeport-svc
  namespace: dev-branch-1
spec:
  type: NodePort
  ports:
  - port: 9002
    protocol: TCP
    targetPort: 8080
    nodePort: 33456
  selector:
    app: app-2
    project: project-1-br1

We’ll delete the existing service and apply the new service:

kubectl delete -f .\workloads\svc-ns1-app2.yaml

kubectl apply -f .\nodeport-svc\svc-ns1-app2-nodeport.yaml
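
You can check that the new service was created with the expected NodePort:

kubectl -n dev-branch-1 get svc app-2-nodeport-svc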

This diagram shows the IP address ranges we are using in this example:

Now test communications to one of the nodes, using the NodePort. We’ll execute this command from one of the cluster nodes, or from a developer or tester workstation:

curl 172.13.14.16:33456

The connection attempt times out. Our connectivity diagram currently looks like this, since we only allow ingress from pods in the same namespace:

Maybe we can add an ipBlock to the ingress rules in our NetworkPolicy. We’ll start by trying to allow ingress from the CIDR range that our nodes and dev/test workstations are in:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-internal-ingress-project-1-br1
  namespace: dev-branch-1
spec:
  podSelector:
    matchLabels:
      project: project-1-br1
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          ns-id: dev-br-1
      podSelector:
        matchLabels:
          project: project-1-br1
    - ipBlock:
        cidr: 172.13.0.0/16

We delete our previous NetworkPolicy that allows internal pod traffic in the dev-branch-1 namespace and apply the new NetworkPolicy:

kubectl delete -f .\network-policies\netpol-allow-ingress-internal-ns-1.yaml

kubectl apply -f .\network-policies\netpol-allow-ingress-internal-ns-1-plus-ip-range.yaml

We try to connect to our NodePort service again, with no luck. After experimenting with various ipBlock ranges, we find that the only CIDR range that works, even for connections from outside the cluster, is the pod CIDR range we are using for our cluster. Unfortunately, this is not a good solution, since it allows any pod in the cluster to communicate with pods in our dev-branch-1 namespace, defeating the main purpose of our NetworkPolicy use case. Apparently, the NetworkPolicies are being processed after the traffic has already been NAT-ed by iptables. Your results may vary depending on the CNI plugin you are using, for instance if you use Weave Net instead of Calico.

Let’s try another approach next. But first, let’s revert to our previous NetworkPolicy, which allows communications between pods in the dev-branch-1 namespace but does not include a rule to allow access from an ipBlock:

kubectl delete -f .\network-policies\netpol-allow-ingress-internal-ns-1-plus-ip-range.yaml

kubectl apply -f .\network-policies\netpol-allow-ingress-internal-ns-1.yaml

Access via an Ingress

What if we use Ingress resources and an Ingress Controller to allow external access to apps in our cluster? In my demo cluster, I have an Ingress Controller deployment and service deployed in a namespace named ingress-nginx. Maybe we can whitelist traffic from that namespace to let external traffic reach the pods in our dev-branch-1 namespace while still preventing pods in other namespaces from accessing them.

We’ll assume that you have an NGINX-based Ingress Controller already set up in a namespace named ingress-nginx. If you don’t have an Ingress Controller set up, your Ingress objects will be created but will not have any effect. See Ingress and Ingress Controllers if you are not familiar with Ingress Controllers and Ingress resources. In this example, we are using the Kubernetes NGINX Ingress Controller described here.

We will apply an Ingress resource to our dev-branch-1 namespace to make the app-1-svc and app-2-svc services in that namespace available outside of the cluster. The Ingress manifest looks like:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: project-1-br1
  namespace: dev-branch-1
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: netpoltest.net
    http:
      paths:
      - path: /app-1
        backend:
          serviceName: app-1-svc
          servicePort: 9000
      - path: /app-2
        backend:
          serviceName: app-2-nodeport-svc
          servicePort: 9002

This is what our diagram will look like with the Ingress Controller in place and the Ingress object created:

Before we continue, let’s consider how we can access our target apps via the Ingress Controller in our demo cluster. In the environment used for this example, the Ingress Controller is externally exposed on port 34308 on every node on the cluster via a NodePort type service. If you had DNS entries set up to point netpoltest.net to a load balancer for the cluster, you could reach the services at http://netpoltest.net:34308/app-1 and http://netpoltest.net:34308/app-2. For this example, we will add a Host header to our curl command rather than configuring DNS.
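
If you are not sure which port your Ingress Controller is exposed on, you can look it up from its service in the ingress-nginx namespace:

kubectl -n ingress-nginx get svc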

OK, let’s get to work. First, apply the Ingress:

kubectl apply -f .\ingress\dev-branch-1-ingress.yaml

Now if we test by using the curl command from a cluster node or dev/test workstation against a node in the cluster, we will not be able to connect yet (the connection times out), since our current NetworkPolicies in the dev-branch-1 namespace do not whitelist traffic from the ingress-nginx namespace, or from any ipBlock:

curl -I --header 'Host: netpoltest.net' http://172.13.14.16:34308/app-1
# connection times out

We’re going to apply a modified version of the NetworkPolicy that allows ingress traffic to our project-1-br1 pods in the dev-branch-1 namespace from any pod in the ingress-nginx namespace, but first we need to add a label to the ingress-nginx namespace so that we can select it in our NetworkPolicy:

kubectl label namespace ingress-nginx ns-id=ingress-nginx

Now that we can select our ingress-nginx namespace, here is our new NetworkPolicy. Note that the ingress-nginx namespaceSelector is added as a separate entry in the from list, so traffic is allowed if it matches either that entry or the existing namespace-plus-project entry:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-internal-ingress-project-1-br1
  namespace: dev-branch-1
spec:
  podSelector:
    matchLabels:
      project: project-1-br1
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          ns-id: ingress-nginx
    - namespaceSelector:
        matchLabels:
          ns-id: dev-br-1
      podSelector:
        matchLabels:
          project: project-1-br1

Next, swap in our new NetworkPolicy:

kubectl delete -f .\network-policies\netpol-allow-ingress-internal-ns-1.yaml

kubectl apply -f .\network-policies\netpol-allow-ingress-internal-ns-1-plus-ingress.yaml

Now if we test external connectivity with the curl command we used previously in this section, we see that we can connect via our Ingress:

curl -I --header 'Host: netpoltest.net' http://172.13.14.16:34308/app-1
HTTP/1.1 200
Server: nginx/1.13.8
Date: Tue, 16 Apr 2019 21:28:22 GMT
Content-Type: text/html;charset=UTF-8
Connection: keep-alive
Vary: Accept-Encoding

This is what our connectivity diagram looks like now:

You can also check that communication between pods in the dev-branch-1 namespace still works, and that pods from the dev-branch-2 namespace can’t access the pods in the dev-branch-1 namespace:

kubectl -n dev-branch-1 exec -it app-1-55b7b9c47c-z9x67  bash
root@app-1-55b7b9c47c-z9x67:/usr/local/tomcat# curl -I 10.16.116.24:8080
HTTP/1.1 200
Content-Type: text/html;charset=UTF-8
Transfer-Encoding: chunked
Date: Wed, 24 Apr 2019 16:36:16 GMT

root@app-1-55b7b9c47c-z9x67:/usr/local/tomcat# curl -I app-2-nodeport-svc:9002
HTTP/1.1 200
Content-Type: text/html;charset=UTF-8
Transfer-Encoding: chunked
Date: Wed, 24 Apr 2019 16:37:36 GMT
kubectl -n dev-branch-2 exec -it app-1-75dbd6d84b-9w9fg  bash
root@app-1-75dbd6d84b-9w9fg:/usr/local/tomcat# curl -I 10.16.116.24:8080
(returns timeout error)
root@app-1-75dbd6d84b-9w9fg:/usr/local/tomcat# curl -I app-2-nodeport-svc.dev-branch-1:9002
(returns timeout error)

Conclusion

This just scratches the surface of what you can do with Kubernetes NetworkPolicies, but I’m sure you can already see that significant planning and testing are needed to satisfy anything beyond very basic requirements. Also, bear in mind that some behavior, such as the order in which NetworkPolicies are processed relative to iptables or IP Virtual Server (IPVS) rules, will depend on the CNI plugin you are using.

If you have questions or feel like you need help with Kubernetes, Docker or anything related to running your applications in containers, get in touch with us at Capstone IT.

Dave Thompson
Solutions Architect
Docker Accredited Consultant