Docker Networking

Bridge Network

On a standalone Docker host, bridge is the default network that containers connect to if no other network is specified. In the following example a container is created with no network parameters. Docker Engine connects it to the bridge network by default. Inside the container, notice eth0 which is created by the bridge driver and given an address by the Docker native IPAM driver.

#Create a busybox container named "c1" and show its IP addresses
host $ docker run -it --name c1 busybox sh
c1 # ip address
4: eth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 scope global eth0
...

 

A container interface’s MAC address is dynamically generated and embeds the IP address to avoid collision. Here ac:11:00:02 corresponds to 172.17.0.2.
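A quick way to check this mapping from the host shell (a small sketch; 02:42 is Docker's default MAC prefix):

#Convert the container IP to hex; the result matches the last four octets of the MAC
host $ printf '02:42:%02x:%02x:%02x:%02x\n' 172 17 0 2
02:42:ac:11:00:02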

The tool brctl on the host shows the Linux bridges that exist in the host network namespace. It shows a single bridge called docker0. docker0 has one interface, vethb64e8b8, which provides connectivity from the bridge to the eth0 interface inside container c1.

host $ brctl show
bridge name      bridge id            STP enabled    interfaces
docker0          8000.0242504b5200    no             vethb64e8b8

Inside container c1, the routing table directs traffic out eth0 and thus to the docker0 bridge.

c1# ip route
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0  src 172.17.0.2
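The gateway 172.17.0.1 is the docker0 bridge itself, which can be confirmed from the host (a minimal sketch; output abbreviated and will vary by host):

#The container's default gateway is the address of docker0 on the host
host $ ip -4 addr show docker0
...
    inet 172.17.0.1/16 scope global docker0
host $ docker network inspect bridge -f '{{range .IPAM.Config}}{{.Subnet}} gw {{.Gateway}}{{end}}'
172.17.0.0/16 gw 172.17.0.1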

User-defined Network

Creating a user-defined network, which sits alongside the default bridge network (docker0), can be achieved as follows:

$ docker network create -d bridge --subnet 10.0.0.0/24 my_bridge
$ docker run -itd --name c2 --net my_bridge busybox sh
$ docker run -itd --name c3 --net my_bridge --ip 10.0.0.254 busybox sh

brctl now shows a second Linux bridge on the host. The name of this Linux bridge, br-b5db4578d8c9, is derived from the network ID of the my_bridge network. my_bridge also has two veth interfaces connected to containers c2 and c3.

$ brctl show
bridge name      bridge id            STP enabled    interfaces
br-b5db4578d8c9  8000.02428d936bb1    no             vethc9b3282
                                                     vethf3ba8b5
docker0          8000.0242504b5200    no             vethb64e8b8

$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
b5db4578d8c9        my_bridge           bridge              local
e1cac9da3116        bridge              bridge              local
...
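Unlike the default bridge, user-defined networks also get Docker's embedded DNS server, so containers on my_bridge can reach each other by name (illustrative output; timings and extra resolv.conf entries omitted):

#Containers on a user-defined network resolve each other by name
host $ docker exec c2 ping -c 1 c3
PING c3 (10.0.0.254): 56 data bytes
...
host $ docker exec c2 cat /etc/resolv.conf
nameserver 127.0.0.11
...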

MACVLAN Network

The macvlan driver is a newer Docker implementation of the tried-and-true MACVLAN network virtualization technique. The Linux implementation is extremely lightweight because, rather than using a Linux bridge for isolation, containers are simply associated with a Linux Ethernet interface or sub-interface, which enforces separation between networks and provides connectivity to the physical network.

MACVLAN offers a number of unique features and capabilities. It has positive performance implications by virtue of having a very simple and lightweight architecture. Rather than port mapping, the MACVLAN driver provides direct access between containers and the physical network. It also allows containers to receive routable IP addresses that are on the subnet of the physical network.

MACVLAN use-cases may include:

  • Very low-latency applications
  • Network designs that require containers to be on the same subnet as, and use IP addresses from, the external host network

The macvlan driver uses the concept of a parent interface. This interface can be a physical interface such as eth0, a sub-interface for 802.1q VLAN tagging such as eth0.10 (the .10 representing VLAN 10), or even a bonded host adaptor which bundles two Ethernet interfaces into a single logical interface.

A gateway address is required when configuring a MACVLAN network. The gateway must be external to the host and provided by the network infrastructure. MACVLAN networks allow access between containers on the same network; access between different MACVLAN networks on the same host is not possible without routing outside the host.
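As a sketch, creating a MACVLAN network and attaching a container looks like the following; the subnet, gateway, and parent sub-interface used here are placeholders that must match the physical network the host is attached to:

#Create a MACVLAN network on VLAN 10 via sub-interface eth0.10 (values are placeholders)
host $ docker network create -d macvlan \
    --subnet 192.168.10.0/24 \
    --gateway 192.168.10.1 \
    -o parent=eth0.10 \
    macvlan10
host $ docker run -itd --name c4 --net macvlan10 --ip 192.168.10.50 busybox sh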

Networking and DNS


network_mode is a concern for Triton, which needs bridge; otherwise, do we care?

This is a single container, so it can only talk to Consul using WAN Serf, even though Zabbix will be fine.

Examples

Single container on standard Docker Engine (svc_discovery = consul); a verification sketch follows these scenarios:

{% dns is not defined && network_mode is not defined %}
    user-defined network is created
    dns = embedded DNS (127.0.0.11)
    container name resolution for consul fails because there is no consul container

{% dns is not defined && network_mode = bridge %}
    container is attached to default bridge network
    dns = the host's name server (on Docker for Mac, this is 192.168.65.1)
    container name resolution for consul fails because there is no consul container

{% dns = 127.0.0.1 (or any address or list) && network_mode is not defined %}
    user-defined network is created
    dns = embedded DNS (127.0.0.11)
    container name resolution for consul fails because there is no consul container

{% dns = 127.0.0.1 && network_mode = bridge %}
    container is attached to default bridge network
    dns = 127.0.0.1 (or whatever addresses are set as value of dns key in compose file)
    container name resolution for consul fails because there is no consul container
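One way to verify which of the cases above a container actually landed in is to check the resolver it was given (a sketch; app_1 is a hypothetical container name created by Compose, and the image is assumed to contain cat and nslookup):

#Check which resolver the composed container received
host $ docker exec app_1 cat /etc/resolv.conf
#  user-defined network  -> nameserver 127.0.0.11 (embedded DNS)
#  network_mode: bridge  -> the host's resolver (192.168.65.1 on Docker for Mac)
host $ docker exec app_1 nslookup consul
#  fails in every single-container case: nothing named "consul" exists to resolve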
 

Multiple containers on standard Docker Engine (svc_discovery = consul); a sketch follows these scenarios:

{% dns is not defined && network_mode is not defined %}
    user-defined network is created
    dns = embedded DNS (127.0.0.11)
    container name resolution for consul succeeds

{% dns is not defined && network_mode = bridge %}
    container is attached to default bridge network
    dns = the host's name server (on Docker for Mac, this is 192.168.65.1)
    container name resolution for consul fails because this DNS server doesn’t know the container

{% dns = 127.0.0.1 (or any address or list) && network_mode is not defined %}
    user-defined network is created
    dns = embedded DNS (127.0.0.11)
    container name resolution for consul succeeds

{% dns = 127.0.0.1 && network_mode = bridge %}
    container is attached to default bridge network
    dns = 127.0.0.1 (or whatever addresses are set as value of dns key in compose file)
    container name resolution for consul fails because this DNS server doesn’t know the container

{% dns = 127.0.0.11 && network_mode = bridge %}
    container is attached to default bridge network
    dns = 127.0.0.11 (or whatever addresses are set as value of dns key in compose file)
    container name resolution for consul fails because this DNS server doesn’t know the container
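The multi-container behaviour can be reproduced without Compose: any container named consul on the same user-defined network becomes resolvable through the embedded DNS, while the default bridge never resolves container names (a sketch using busybox as a stand-in for the real consul image):

#Name resolution succeeds only on the user-defined network
host $ docker network create app_net
host $ docker run -itd --name consul --net app_net busybox sh
host $ docker run --rm --net app_net busybox nslookup consul    #succeeds via 127.0.0.11
host $ docker run --rm busybox nslookup consul                  #default bridge: fails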
 

Single container on Triton (svc_discovery = consul):

dns shouldn't be set; CNS should be used instead. Maybe keep svc_discovery as consul and only set dns-search.

On Triton, network_mode has to be set explicitly to bridge. This is probably because Docker Compose sets it to default, and on the standard engine default = bridge, but sdc-docker does not make that translation.
 

Summary

DNS doesn't need setting in any scenario, but the cns label does (cns could always be set because it's just a label).

Therefore, because Triton does not handle network_mode in docker-compose v2+ well, we have to create separate files for it. The options are:

  1. Create multiple compose files, because a single file would need conditions it cannot express (see the invocation sketch after this list):
    1. docker-compose.yml
    2. triton-compose.yml
    3. docker-compose-integrations.yml
    4. triton-compose-integrations.yml
  2. Keep network_mode commented out in the compose files
  3. Handle the conditions within the Ansible code
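For option 1, the target is chosen at invocation time by passing the right files; a sketch, where the Triton DOCKER_HOST endpoint is a placeholder for the actual sdc-docker endpoint:

#Standard Docker engine
host $ docker-compose -f docker-compose.yml -f docker-compose-integrations.yml up -d
#Triton (sdc-docker); endpoint is a placeholder
host $ DOCKER_HOST=tcp://us-east-1.docker.joyent.com:2376 \
       docker-compose -f triton-compose.yml -f triton-compose-integrations.yml up -d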


Simply put, if sdc-docker treated network_mode: default as network_mode: bridge, there would be no problem.