Accessing .NET Sockets through Docker Ingress

I recently helped a development team that was running into problems while deploying their containerized application to a Docker Swarm cluster. The application is written in .NET, and its primary job is to listen on a socket for messages and then process the data. It worked standalone outside of a container, but we ran into issues whenever we tried accessing it via the Docker ingress network: the socket server never received any messages from the client. I thought it might help if I showed you how we worked the issue.

Because we’ve deployed many applications into Docker Swarm and have used ingress networks many times before, we felt confident this was neither a Swarm issue nor an ingress network issue. However, we wanted to prove that out first, so we started by creating a simple .NET socket server and client application and ensured they worked locally on my desktop.

To debug the issue, we first attempted to duplicate it in a test swarm. We approached it with the following steps.

  • Create a simple .NET socket service,
  • Create a client to send messages to the socket service,
  • Deploy the containers via a Docker stack file, and
  • Deploy a netshoot container for diagnosing issues.

The following diagram illustrates the deployment.

[Diagram: .NET Server Client Layout]

We created a .NET application that mimicked the socket bind and read of messages: the app would wait for a connection and print out the incoming message. We also created a client app that we could execute both within the swarm and externally to test the functionality of the socket server. The client would send data to the server, and we would expect that data to appear in the log output of the socket server.
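
To make the rest of the walkthrough concrete, here is a minimal sketch of that kind of server. This is our illustration rather than the team's actual code, but the bind logic on the Dns.GetHostEntry line mirrors what the app was doing, and that line turns out to matter later.

using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

class SocketServer
{
    static void Main()
    {
        // Resolve the address to bind to. Binding to "localhost" ties the
        // listener to the loopback interface inside the container.
        IPHostEntry ipHostInfo = Dns.GetHostEntry("localhost");
        IPEndPoint endPoint = new IPEndPoint(ipHostInfo.AddressList[0], 9998);

        Socket listener = new Socket(endPoint.AddressFamily, SocketType.Stream, ProtocolType.Tcp);
        listener.Bind(endPoint);
        listener.Listen(10);

        while (true)
        {
            // Wait for a connection and print the incoming message.
            using (Socket handler = listener.Accept())
            {
                byte[] buffer = new byte[1024];
                int received = handler.Receive(buffer);
                Console.WriteLine("Received: " + Encoding.ASCII.GetString(buffer, 0, received));
            }
        }
    }
}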

Since I already had a Java app that was containerized and performed the same functionality, we added it to the stack file as java-server to validate against a known-good asset.

Once we had the images built and pushed to the Docker Trusted Registry, we deployed a stack and looked at the logs for each container. I’ve included the test stack file that was used.

version: '3'
services:
  dotnet-server:
    image: dtr.lab.capstonec.net/project-a/dotnet-server:1.0
    networks:
      - backend-network
    ports:
      - "9998:9998"
    deploy:
      mode: replicated
      replicas: 1

  java-server:
    image: dtr.lab.capstonec.net/project-a/javaserver:1.0
    networks:
      - backend-network
    ports:
      - "9999:9999"
      
  dotnet-client:
    image: dtr.lab.capstonec.net/project-a/dotnet-client:1.0
    networks:
      - backend-network
    deploy:
      mode: replicated
      replicas: 1

  dotnet-client-lb:
    image: dtr.lab.capstonec.net/project-a/dotnet-client-lb:1.0
    networks:
      - backend-network
    deploy:
      mode: replicated
      replicas: 1

  netshoot:
    image: nicolaka/netshoot
    command: ["tail", "-f", "/dev/null"]
    networks:
      - backend-network
    deploy:
      mode: replicated
      replicas: 1

networks:
  backend-network:
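
For reference, a stack file like this is deployed with docker stack deploy; the file and stack names here are just examples.

$ docker stack deploy -c test-stack.yml socket-test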

Once we had the stack deployed, we started debugging from the inside and worked our way to the outside.

Our first step was to validate that the app was accepting connections from inside its own container. We exec'ed into the dotnet-server container and ran the following netcat command to validate that the app was up and accepting data on its port. We are using netcat (nc) to communicate with the socket service; therefore, we had to ensure that netcat was installed in our socket server's Docker image.

$ docker exec -it <socket-server-container-id> sh
> echo "test data" | nc localhost 9998

This worked fine, just as we expected.

Next, we wanted to verify that the client container could talk to the socket server via the overlay network named “backend-network”. So, we exec’ed into the client container and ran the following.

$ docker exec -it <client-container-id> sh
> dotnet client.dll dotnet-server 9998

We found that in this case the client app was not able to communicate with the socket server. From inside the client container, we then validated that the java-server was working by sending the same data to it.

> dotnet client.dll java-server 9999

We then ran a netcat command against the java-server from outside the swarm and everything worked, but the same command against the dotnet-server did not.

> echo "test data" | nc my-swarm.company.com 9999

At this point we believed that the Swarm and ingress network infrastructure was set up correctly, which really narrowed things down to the application code of the .NET socket server. We then exec'ed into the netshoot container and did some debugging.

  1. Validated that a container connected to the backend-network could resolve the DNS name of the dotnet-server.
  2. Tried to send data to the dotnet-server with the netcat command; this was failing. (The commands are shown below.)
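
From inside the netshoot container, the checks looked roughly like this, using the service names from the stack file above.

$ docker exec -it <netshoot-container-id> sh
> nslookup dotnet-server
> echo "test data" | nc dotnet-server 9998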

After looking into the .NET code for the dotnet-server, we found that the bind was set up incorrectly. It was using IPHostEntry ipHostInfo = Dns.GetHostEntry("localhost"). We changed this call to Dns.GetHostEntry(Dns.GetHostName()). Within a container, Dns.GetHostName() returns the container's hostname, which defaults to the container ID. With this change the client container could communicate with the socket server via its service name of "dotnet-server", but it did not fix the problem of applications accessing it from outside of the swarm.
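
Relative to the server sketch shown earlier, the change was confined to the lines that resolve the bind address (again, a sketch rather than the team's exact code):

// Original: resolves "localhost", so the listener is bound to the
// loopback interface and is only reachable from inside the container.
// IPHostEntry ipHostInfo = Dns.GetHostEntry("localhost");

// First fix: resolve the container's own hostname, which maps to its
// address on the backend-network overlay, so other services can connect.
IPHostEntry ipHostInfo = Dns.GetHostEntry(Dns.GetHostName());
IPEndPoint endPoint = new IPEndPoint(ipHostInfo.AddressList[0], 9998);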

At this point we looked at the Java code and compared it to the .NET code. We noticed that the Java code did not specify an IP address, only a port, while the .NET code specified both an IP address and a port. Through some googling of various socket server examples, we found that instead of specifying an IP address or a container/host name, we could use the constant IPAddress.Any. Finally, we could access the .NET socket server via the Swarm ingress network with the following command.

> echo "test data" | nc my-swarm.company.com 9998

What we learned is that binding to the address resolved from Dns.GetHostName() attaches the socket only to the container's interface on the overlay network. Binding to IPAddress.Any instead allows the socket to receive data arriving on any interface, including the one used by the ingress network, which is why ingress traffic now flows through correctly.
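
In the sketch, the final working version replaces the hostname resolution entirely:

// Final fix: IPAddress.Any (0.0.0.0) binds the listener to every
// interface in the container, including the ingress network interface
// used by the swarm's routing mesh.
IPEndPoint endPoint = new IPEndPoint(IPAddress.Any, 9998);
listener.Bind(endPoint);
listener.Listen(10);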

If you have any further questions, feel free to leave a message or contact someone at Capstone via https://capstonec.com/contact-us/.

Chuck Waid
Docker Accredited Consultant
Principal Consultant at Capstone
