Configure Custom CIDR Ranges in Docker EE

I recently worked with a customer to customize all of the default Classless Inter-Domain Routing (CIDR) ranges used for IP address allocation by Docker Enterprise Edition 3.0 (Docker EE 3.0). The customer primarily wanted to document the customization process for future use. However, there is often a real need to change some of the default CIDR ranges to avoid conflicts with existing private IP addresses already in use within a customer's network. Typically, such a conflict makes it impossible for applications running in containers or pods to access external hosts in the conflicting CIDR range.


    Here are a couple of examples:

    • Your company uses Docker Swarm services and has hosts external to the Docker cluster that use addresses from subnets of 10.0.0.0/8. Applications running as Swarm services within your Docker EE cluster need to access those hosts. By default, Swarm overlay networks use 10.x.y.0/24 subnets carved out of the 10.0.0.0/8 CIDR range, and the Swarm ingress network defaults to 10.0.0.0/24. If some of the external hosts that you need to access are using addresses from the 10.0.0.0/24 subnet and you use the default ingress network in your services, then you will immediately have trouble accessing those hosts from Swarm services. Even if you use overlay networks other than the default ingress network for your Swarm services, there is a chance that you will eventually create an overlay network whose addresses overlap those of some of your external hosts. You can, of course, specify a known non-conflicting subnet when creating your overlay networks, but then you must track and manage the subnets for each overlay network to avoid errors when creating networks.
    • Your company uses Kubernetes under Docker EE and has hosts outside the cluster in the 192.168.0.0/16 CIDR range, which is the default range for Kubernetes pod addresses. You have applications running in your cluster that need to access those hosts. Traffic from your apps in the Kubernetes cluster will not be correctly routed to those hosts.

    There are use cases that can cause similar problems for each of the default CIDR ranges, but by now you get the idea and there is no need to describe each use case.
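    You can check for this kind of conflict ahead of time. Here is a minimal sketch using Python's ipaddress module; the external subnet values are hypothetical:

```python
from ipaddress import ip_network

def find_conflicts(pool, external_subnets):
    """Return the external subnets that overlap the given address pool."""
    pool_net = ip_network(pool)
    return [s for s in external_subnets if ip_network(s).overlaps(pool_net)]

# Hypothetical subnets already in use on the corporate network.
external = ["10.255.12.0/24", "192.0.2.0/24"]

# The default Swarm overlay address pool is 10.0.0.0/8, so the first
# external range conflicts while the second does not.
conflicts = find_conflicts("10.0.0.0/8", external)
print(conflicts)  # ['10.255.12.0/24']
```

Running a check like this against every pool described below, before you install, is much cheaper than re-initializing a Swarm later.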

    If you have been using Docker EE for a while, note that the default subnet size for overlay networks has recently changed from /16 to /24.

    The default CIDR ranges used by Docker EE

    Docker EE uses four sets of CIDR ranges. You can read a description from the Docker documentation here.

    CIDR ranges for Docker bridge networks

    The default address pool for local bridge networks on each individual Docker node includes the CIDR ranges 172.17.0.0/16 through 172.31.0.0/16 and 192.168.0.0/20 through 192.168.240.0/20. By default, Docker selects a CIDR range from this pool whenever you create a new bridge network.

    CIDR ranges for Swarm overlay networks

    The default address pool for Swarm overlay networks is the 10.0.0.0/8 CIDR range. Every time you create a new overlay network, Docker carves out a new 10.x.y.0/24 subnet from that range for the new network.
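    To see how many /24 subnets that pool can supply, you can enumerate them with Python's ipaddress module (a quick illustration, not part of Docker itself):

```python
from ipaddress import ip_network

pool = ip_network("10.0.0.0/8")        # default Swarm overlay address pool
subnets = pool.subnets(new_prefix=24)  # lazily yields 10.x.y.0/24 subnets

first_three = [str(next(subnets)) for _ in range(3)]
print(first_three)  # ['10.0.0.0/24', '10.0.1.0/24', '10.0.2.0/24']

# A /8 pool split into /24 subnets yields 2^(24-8) = 65536 networks.
total = 2 ** (24 - 8)
```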

    CIDR range for Kubernetes pods

    The default CIDR range used to allocate addresses for Kubernetes pods is 192.168.0.0/16. Docker EE 3.0 uses Tigera Calico as its Kubernetes network plugin, and this is the default CIDR range for Calico.

    CIDR range for Kubernetes services

    The default CIDR range used to allocate Virtual IP addresses for Kubernetes services is 10.96.0.0/16.

    Pre-allocated network addresses

    Docker EE creates several networks during the installation process. By default, Docker allocates these network addresses from the address pools described in the previous section.

    • The docker0 bridge network, also seen as the bridge network from the docker network ls command. This network uses the 172.17.0.0/16 address by default. Docker creates this network on each Docker node when you first install and start Docker. When you create new stand-alone containers, Docker connects them to this network unless you specify a different network.
    • The docker_gwbridge bridge network. The docker_gwbridge network uses the 172.18.0.0/16 address by default. Docker creates this network on each individual node when you join the node to the Swarm (and when you initialize the Swarm on the first node in the Swarm). Docker connects each container in a Docker Swarm service to this network by default.
    • The ingress overlay network. This is a Swarm-scope network. Docker creates this network when you initialize the Swarm. The ingress network handles ingress traffic to containers in Swarm services. By default, Docker connects this network to all containers that have a port published by a Swarm service. The ingress network uses the 10.0.0.0/24 address by default (10.255.0.0/16 in earlier versions of Docker EE).

    Try it out yourself

    If you want to see the network address allocation process in action, create a set of bridge networks using the command docker network create my-network-X, incrementing the value of X from 1 through 20. Then inspect those networks using the command docker network inspect my-network-X and observe the value of the subnet for each network. Depending on what bridge networks you have already created, you will see the subnets start in the 172.[19-31].0.0/16 range. After that range is used up, subnets are allocated using network addresses from the 192.168.[0-240].0/20 range. You won't see 172.17.0.0/16 and 172.18.0.0/16 used in the new networks you create, since those addresses are already used for the docker0 (bridge) network and the docker_gwbridge network.

    You can try the same thing for overlay networks using the docker network create -d overlay my-overlay-network-X command.

    Note that you can provide additional parameters to the docker network create command to create bridge and overlay networks using subnets of your choice. However, right now we are discussing default behavior.

    Don’t forget to clean up the networks that you created if you tried things out in this section!

    Assigning custom values for the Docker EE CIDR ranges

    Now that we understand what the default CIDR ranges are, let’s look at how we can assign a custom value for each CIDR range and how we can change the addresses for the default networks.

    The address pool for Swarm overlay networks

    We will discuss the pool of networks for the Swarm overlay networks first since this has to be specified when the Swarm is initialized. This is a cluster-wide setting, and you cannot change this later without tearing down the cluster.

    The process for setting the default address pool is simple. Just pick your custom CIDR range and apply it when you first initialize the Swarm. For instance, if your custom CIDR range is 10.135.0.0/16, then use the command:

      docker swarm init --default-addr-pool 10.135.0.0/16

    In the example above, when new overlay networks are created, Docker will use 10.135.x.0/24 subnets carved out of the 10.135.0.0/16 network. The default subnet mask is /24, but you can specify the subnet mask to use with the --default-addr-pool-mask-length option. For instance, to use /20 subnets for new overlay networks:

      docker swarm init --default-addr-pool 10.135.0.0/16 --default-addr-pool-mask-length 20
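    The mask length trades the number of networks against their size: a /16 pool split into /20 subnets gives you 16 large networks instead of 256 small ones. You can verify the arithmetic with Python's ipaddress module (10.135.0.0/16 is this article's example pool):

```python
from ipaddress import ip_network

pool = ip_network("10.135.0.0/16")  # example custom address pool

# Number of overlay networks available at each mask length.
count_24 = len(list(pool.subnets(new_prefix=24)))  # 256 small networks
count_20 = len(list(pool.subnets(new_prefix=20)))  # 16 larger networks

print(count_24, count_20)  # 256 16
```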

    For further details, see the Docker documentation here.

    The address pool for Docker bridge networks

    This is a per-node setting, and it cannot be changed while a node is part of a Swarm. However, you can remove a node from the Swarm, change the setting, and then join the node to the Swarm again.

    If the node is part of a Swarm, first remove the node from the Swarm. Log on to the node as the docker user and execute the following command at the CLI prompt:

      docker swarm leave

    After the command completes on the node, check the status of the node from the Docker GUI, the Docker CLI on a client machine, or from the CLI on a Docker Swarm manager. Once the node status changes to unavailable/down, remove the node using the Docker UCP GUI or by using the following command at the CLI prompt on a Docker manager or client machine:

      docker node rm -f <NODE_NAME>

    The recommended way to configure this setting is to use the daemon.json file. Add the default-address-pools setting to the daemon.json file, for example:

         {
           "default-address-pools": [
             {"base":"10.200.0.0/16","size":24}
           ]
         }

    You can specify multiple address pools by providing additional {"base":"<CIDR>","size":<size>} entries, for instance:

         {
           "default-address-pools": [
             {"base":"10.200.0.0/16","size":24},
             {"base":"10.201.0.0/16","size":24}
           ]
         }

    You will need to restart the docker service to pick up the changes.
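    If you want to sanity-check a daemon.json fragment before restarting Docker, a short Python sketch can parse it and report how many bridge networks each pool supplies (the base values here are hypothetical examples):

```python
import json
from ipaddress import ip_network

# Hypothetical daemon.json content; base/size values are examples only.
daemon_json = """
{
  "default-address-pools": [
    {"base": "10.200.0.0/16", "size": 24},
    {"base": "10.201.0.0/16", "size": 24}
  ]
}
"""

config = json.loads(daemon_json)
counts = []
for pool in config["default-address-pools"]:
    base = ip_network(pool["base"])  # raises ValueError on a malformed CIDR
    counts.append(2 ** (pool["size"] - base.prefixlen))

print(counts)  # networks available per pool
```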

    If you removed the node from the Swarm, join the node to the Swarm again using the docker swarm join command. You can get the complete command text including the join token from the UCP GUI or from the CLI on a manager node:

    • From the UCP GUI, navigate to the Shared Resources -> Nodes panel, then click the Add Node button. In the resulting panel, select LINUX as the Node Type, select MANAGER or WORKER as the Node Role. Next, copy the join command from the text box at the bottom of the panel.
    • From the CLI, use the docker swarm join-token [manager/worker] command to fetch the join command. You can use the docker swarm join-token --help command to see additional options similar to the options available from the GUI.

    Note: Don’t confuse the default-addr-pool setting used for Swarm overlay networks with the default-address-pools setting used for Docker bridge networks. These are different settings for different purposes.

    Create the docker_gwbridge network or change its address

    This is a per-node setting, and you cannot change it while a node is part of a Swarm. We already covered removing a node from the Swarm and then joining the node to the Swarm again in the previous section.

    Remove the docker_gwbridge network if it exists:

      docker network rm docker_gwbridge

    Note that if any containers are connected to the docker_gwbridge network, you will first need to stop those containers. You can use the command docker network inspect docker_gwbridge to find the IDs of any containers connected to the network, and then stop each container with the command docker container stop <CONTAINER_ID>.

    Next create the docker_gwbridge network using the custom subnet and gateway you have selected, for example:

      docker network create \
        --subnet 172.30.0.0/16 \
        --gateway 172.30.0.1 \
        -o com.docker.network.bridge.enable_icc=false \
        -o com.docker.network.bridge.enable_ip_masquerade=true \
        docker_gwbridge
    For more details about how to change the docker_gwbridge network after the Swarm has been initialized, see the instructions in this article by Docker.

    Change the CIDR range for the docker0 bridge

    This is a per-node setting, but you can change it while the node is part of a Swarm.

    The recommended way to configure the docker0 bridge setting is to use the daemon.json file. You can specify one or more of the following settings to configure the docker0 bridge.

        "bip": "172.20.0.1/16",
        "fixed-cidr": "172.20.0.0/17"

    bip: Supply a specific bridge IP address for the docker0 bridge network, using standard CIDR notation. We will use 172.20.0.1/16 for our example. Note that you cannot use the network address (172.20.0.0 in this case), so we are using the first IP address in the CIDR range. This address becomes the default gateway for the docker0 bridge network if you do not specify an address with a default-gateway attribute.

    fixed-cidr: The fixed-cidr option is only needed if you want to restrict the IP range for the docker0 bridge. We are using 172.20.0.0/17 for our example. This range must be a subset of the bridge IP range (the value of bip in daemon.json). In this example, IPs for your containers will be selected from the first half of the addresses (172.20.0.0 – 172.20.127.255) available in the bip subnet (172.20.0.0/16 in this example).
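    A few lines of Python can confirm that a fixed-cidr value really is a subset of the bip subnet and show which addresses it covers (using the example values bip=172.20.0.1/16 and fixed-cidr=172.20.0.0/17; substitute your own):

```python
from ipaddress import ip_interface, ip_network

bip = ip_interface("172.20.0.1/16")   # example bip value
fixed = ip_network("172.20.0.0/17")   # example fixed-cidr value

# fixed-cidr must fall inside the bip subnet.
is_subset = fixed.subnet_of(bip.network)

# The first and last addresses the fixed-cidr range covers.
first, last = fixed[0], fixed[-1]
print(is_subset, first, last)  # True 172.20.0.0 172.20.127.255
```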

    Restart the docker service to pick up the changes.

    For further information regarding attributes in daemon.json see the Docker documentation here.

    Change the Docker Swarm ingress network address

    If you need to change the CIDR range used by the default ingress network, you will first need to stop all containers using that network, remove the network, and then create a new ingress network. You can use the command docker network inspect ingress to find the IDs of the containers connected to the ingress network, and the command docker container stop <CONTAINER_ID> to stop each container. If the containers are in Docker Swarm services, you will first need to remove those services or scale the services to 0 replicas; otherwise, Docker will start new containers automatically to replace the stopped containers.

    First, remove the default ingress network:

     docker network rm ingress

    Next, create a new overlay network using the --ingress flag, along with the custom options you want to set. This example sets the subnet to 10.11.0.0/24 and the gateway to 10.11.0.2. You can name your ingress network something other than ingress, but you can only have one ingress network, meaning one network created with the --ingress option.

      docker network create \
        --driver overlay \
        --ingress \
        --subnet=10.11.0.0/24 \
        --gateway=10.11.0.2 \
        new-ingress

    For more details about customizing the ingress network see the Docker documentation here.
    Note that there are size limitations/considerations for overlay networks, and it is recommended not to create overlay networks larger than /24. See the Docker documentation here with the details explained here.

    Set the CIDR ranges for Kubernetes pods and services

    The best time to do this is when you first install UCP. Set the CIDR range values using the --pod-cidr option to specify the range for Kubernetes pod addresses and the --service-cluster-ip-range option to specify the range for Kubernetes service addresses. For instance:

      docker container run --rm -it --name ucp \
        --volume /var/run/docker.sock:/var/run/docker.sock \
        docker/ucp:3.1.9 install \
        --host-address <host IP> \
        --pod-cidr 10.50.0.0/16 \
        --service-cluster-ip-range 10.60.0.0/16 \
        --interactive
    For more details regarding UCP installation options, see the Docker documentation here and here.
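    Before running the install, it is worth verifying that the pod range, the service range, and your host network do not overlap one another. A minimal sketch (all three ranges here are hypothetical examples):

```python
from itertools import combinations
from ipaddress import ip_network

# Hypothetical custom ranges for a UCP install.
ranges = {
    "pod-cidr": ip_network("10.50.0.0/16"),
    "service-cluster-ip-range": ip_network("10.60.0.0/16"),
    "host network": ip_network("192.0.2.0/24"),
}

# Every pair of ranges must be disjoint.
conflicts = [(a, b) for (a, net_a), (b, net_b) in combinations(ranges.items(), 2)
             if net_a.overlaps(net_b)]
print(conflicts)  # an empty list means the ranges are safe to use together
```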

    You can see an example of changing the Kubernetes pod CIDR range after UCP is already installed in my blog post on that topic.

    Have Questions?

    If you have questions or feel like you need help with Kubernetes, Docker or anything related to running your applications in containers, get in touch with us at Capstone IT.

    Dave Thompson
    Solution Architect
    Docker Accredited Consultant
    Certified Kubernetes Administrator
    Certified Kubernetes Application Developer

