Container Networking : Host networks

Anirban Mukherjee
7 min read · Aug 3, 2018


This blog is about host networking: networking between containers on the same Docker host. The host could be a physical machine or a VM hosting the Docker environment.

Docker defines the driver type ‘bridge’ for networks that exist on a single Docker host. Containers connected to a network with driver ‘bridge’ can communicate with the external world, but they will not recognize containers on other hosts as ‘containers’; they will see them simply as entities with IP addresses, just like any other server on the internet.

When a Docker environment is initialized, Docker creates a network called ‘bridge’ using the ‘bridge’ driver. The default networks in a Docker host environment (with no user-defined networks created yet) are:

Docker03:~$ docker network ls
NETWORK ID          NAME     DRIVER   SCOPE
41349f735332        bridge   bridge   local
99e6a78287cf        host     host     local
8290b71ecedb        none     null     local

The other two networks, ‘host’ and ‘none’, are not full-fledged networks; Docker uses them to start a container attached directly to the Docker host’s networking stack, or with no networking at all. When you create a container without specifying the `--network` flag, it still gets a NIC device connected to the ‘bridge’ network.

We start by creating and running two Docker containers from an Ubuntu image:

Docker03:~$ docker run --hostname docker1 --name docker1 -dit ubuntu
638abcc7c6ddca61747f675fadf81d06036ef368878c5944628434e3c0240644
Docker03:~$ docker run --hostname docker2 --name docker2 -dit ubuntu
00c25d0888994d7f46b12ca09949ed524fbc07c753ed350597aa48eaf13f880b
Docker03:~$ docker container ps
CONTAINER ID   IMAGE    COMMAND       CREATED              STATUS              PORTS   NAMES
00c25d088899   ubuntu   "/bin/bash"   6 seconds ago        Up 4 seconds                docker2
638abcc7c6dd   ubuntu   "/bin/bash"   About a minute ago   Up About a minute           docker1

Check the network devices created on the host:

Docker03:~ $ ifconfig
docker0   Link encap:Ethernet  HWaddr 02:42:92:2c:02:25
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::b209:6569:7cff:6bfa/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:20 errors:0 dropped:0 overruns:0 frame:0
TX packets:88 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1673 (1.6 KiB) TX bytes:16104 (15.7 KiB)

The ‘docker0’ device is part of the ‘bridge’ network and is the one that connects into the host’s networking stack. It has an IP address (172.17.0.1) allocated from the ‘bridge’ network’s subnet.
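As a side note, the address layout follows directly from the subnet: Docker’s default IPAM driver hands the first usable address of 172.17.0.0/16 to the docker0 gateway and subsequent addresses to containers in the order they start. A minimal Python sketch of that allocation pattern (illustrative only, not Docker’s actual IPAM code):

```python
# Sketch of Docker's default address hand-out on the 'bridge' network:
# the gateway (docker0) takes the first usable host address, and
# containers receive the following ones in start order.
import ipaddress

subnet = ipaddress.ip_network("172.17.0.0/16")
hosts = subnet.hosts()          # generator over usable host addresses

gateway = next(hosts)           # 172.17.0.1 -> assigned to docker0
first_container = next(hosts)   # 172.17.0.2 -> first container (docker1)
second_container = next(hosts)  # 172.17.0.3 -> second container (docker2)

print(gateway, first_container, second_container)
# 172.17.0.1 172.17.0.2 172.17.0.3
```

This matches the addresses we will see assigned to docker1 and docker2 below.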

When a container is created without specifying any networks, it gets a default NIC attached to the ‘bridge’ network. Inspecting the docker1 container:

Docker03:~ $ docker container inspect docker1
"NetworkSettings": {
    "Bridge": "",
    "SandboxID": "4a9e29443e55c4859c28b8cdd3823666c39f953835ee40644020e3097fa8395b",
    "HairpinMode": false,
    "LinkLocalIPv6Address": "",
    "LinkLocalIPv6PrefixLen": 0,
    "Ports": {},
    "SandboxKey": "/var/run/docker/netns/4a9e29443e55",
    "SecondaryIPAddresses": null,
    "SecondaryIPv6Addresses": null,
    "EndpointID": "80d6796477ed08f45e832c0f60b4c2ac64f97d8cd5828ca8ffdf5e6e7ca1219c",
    "Gateway": "172.17.0.1",
    "GlobalIPv6Address": "",
    "GlobalIPv6PrefixLen": 0,
    "IPAddress": "172.17.0.2",
    "IPPrefixLen": 16,
    "IPv6Gateway": "",
    "MacAddress": "02:42:ac:11:00:02",
    "Networks": {
        "bridge": {
            "IPAMConfig": null,
            "Links": null,
            "Aliases": null,
            "NetworkID": "1b6d5a047710bd32897432b910712bde2a13f1fe525f60e8eda5ecdeef5074df",
            "EndpointID": "80d6796477ed08f45e832c0f60b4c2ac64f97d8cd5828ca8ffdf5e6e7ca1219c",
            "Gateway": "172.17.0.1",
            "IPAddress": "172.17.0.2",
            "IPPrefixLen": 16,
            "IPv6Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "MacAddress": "02:42:ac:11:00:02"
        }

We can see here that the docker1 container has its first network adapter provisioned from the “bridge” network, with IP address 172.17.0.2 and MAC address 02:42:ac:11:00:02. The gateway is the ‘docker0’ device connecting to the host’s networking stack.
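The MAC address is not arbitrary either: for bridge networks, Docker derives it from the container’s IPv4 address, using the fixed prefix 02:42 followed by the four address octets in hex. A short Python sketch of this observed convention (it is an implementation detail of the bridge driver, not a guaranteed API):

```python
# Sketch: reproduce Docker's bridge-network MAC convention,
# 02:42 followed by the IPv4 octets in hex.
import ipaddress

def bridge_mac(ip: str) -> str:
    octets = ipaddress.ip_address(ip).packed  # the 4 raw bytes of the IPv4
    return "02:42:" + ":".join(f"{b:02x}" for b in octets)

print(bridge_mac("172.17.0.2"))    # 02:42:ac:11:00:02 (matches docker1)
print(bridge_mac("192.168.15.3"))  # 02:42:c0:a8:0f:03 (matches docker2 later)
```

You can cross-check the derived values against the `MacAddress` fields in the inspect outputs throughout this post.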

Let’s check by logging into the container:

Docker03:~ $ docker exec -it 638abcc7c6dd /bin/bash
root@docker1:/# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.2 netmask 255.255.0.0 broadcast 0.0.0.0
ether 02:42:ac:11:00:02 txqueuelen 0 (Ethernet)
RX packets 18155 bytes 26847332 (26.8 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 9189 bytes 620210 (620.2 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

You can ping the container to verify connectivity and its MAC address. The docker0 device connects the bridge network to the host machine, so let’s try pinging the container from the host.

Docker03:~ $ ping 172.17.0.2 -c 1
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.410 ms

--- 172.17.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms
Docker03:~ $ arp -a

? (172.17.0.2) at 02:42:ac:11:00:02 [ether] on docker0

Check the network device parameters provisioned for the docker2 container:

Docker03:~ $ docker container inspect docker2
....
    "Networks": {
        "bridge": {
            "IPAMConfig": null,
            "Links": null,
            "Aliases": null,
            "NetworkID": "1b6d5a047710bd32897432b910712bde2a13f1fe525f60e8eda5ecdeef5074df",
            "EndpointID": "bbf47f68530dfe14419eb2eff715317c97fea60db177cabec3baa1785ed88d16",
            "Gateway": "172.17.0.1",
            "IPAddress": "172.17.0.3",
            "IPPrefixLen": 16,
            "IPv6Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "MacAddress": "02:42:ac:11:00:03"
        }

Now, with the 2 containers attached to the ‘bridge’ network, let us inspect the ‘bridge’ network:

Docker03:~ $ docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "1b6d5a047710bd32897432b910712bde2a13f1fe525f60e8eda5ecdeef5074df",
        "Created": "2018-08-02T17:44:17.149291826Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "Containers": {
            "00c25d0888994d7f46b12ca09949ed524fbc07c753ed350597aa48eaf13f880b": {
                "Name": "docker2",
                "EndpointID": "bbf47f68530dfe14419eb2eff715317c97fea60db177cabec3baa1785ed88d16",
                "MacAddress": "02:42:ac:11:00:03",
                "IPv4Address": "172.17.0.3/16",
                "IPv6Address": ""
            },
            "638abcc7c6ddca61747f675fadf81d06036ef368878c5944628434e3c0240644": {
                "Name": "docker1",
                "EndpointID": "80d6796477ed08f45e832c0f60b4c2ac64f97d8cd5828ca8ffdf5e6e7ca1219c",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

We can see that the docker1 and docker2 containers are attached to the network. The “Options” key holds options specific to the driver type used by the network. The “mtu”, “name”, and “default_bridge” options are self-explanatory.

The “enable_icc=true” option allows inter-container communication between containers on this bridge (provided the Docker daemon runs with “iptables=true” on the host machine).

The “host_binding_ipv4” option is the default host address that published container ports are bound to; 0.0.0.0 means all host interfaces. You can set it to a specific host address if you don’t want published ports exposed on every host interface.
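Since `docker network inspect` emits plain JSON, the per-container addresses can also be pulled out programmatically. A minimal Python sketch, using a snippet abridged from the inspect output above (the truncated container IDs are placeholders):

```python
# Sketch: extract container name -> IPv4 mappings from
# `docker network inspect` output (abridged sample embedded below).
import json

inspect_output = """
[
  {
    "Name": "bridge",
    "Containers": {
      "00c25d088899...": {"Name": "docker2", "IPv4Address": "172.17.0.3/16"},
      "638abcc7c6dd...": {"Name": "docker1", "IPv4Address": "172.17.0.2/16"}
    }
  }
]
"""

net = json.loads(inspect_output)[0]
addresses = {c["Name"]: c["IPv4Address"] for c in net["Containers"].values()}
print(addresses)
# {'docker2': '172.17.0.3/16', 'docker1': '172.17.0.2/16'}
```

In practice you would feed this the real output of `docker network inspect bridge`, or filter on the Docker side with the command’s `--format` flag.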

Now, let us try to create a user-defined bridge network such that:

a. The network has a subnet of 192.168.15.0/24

b. Containers can manually attach to the network

c. Name of the network is test-host-bridge

d. The network has MTU=9100

e. Since it’s a bridge network, it will be available only within the same host.

Docker03:~ $ docker network create --subnet 192.168.15.0/24 --attachable --opt com.docker.network.driver.mtu=9100 test-host-bridge
Docker03:~ $ docker network inspect 8af44c3e7363384b68f2533b526662bbb8b40f1f0cdced30e51b4dc1a070f2b6
[
{
"Name": "test-host-bridge",
"Id": "8af44c3e7363384b68f2533b526662bbb8b40f1f0cdced30e51b4dc1a070f2b6",
"Created": "2018-08-03T14:09:05.390619529Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "192.168.15.0/24"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"Containers": {},
"Options": {
"com.docker.network.driver.mtu": "9100"
},
"Labels": {}
}
]

The network does not yet have any containers attached to it, but notice that "Attachable": true.

The "Internal": false setting means this network can connect to the external world. If a container is attached only to a single internal network, it will have its default route through that network alone, and will not be able to send packets out through the ‘br-xxx’ network adapter to destinations outside the Docker environment.

Since this network is allowed to connect to the host, let’s see the network adapter on the host through which this Docker network can communicate with the host machine.

Docker03:~ $ ifconfig
br-8af44c3e7363 Link encap:Ethernet  HWaddr 02:42:b6:c6:e4:bd
inet addr:192.168.15.1 Bcast:0.0.0.0 Mask:255.255.255.0
inet6 addr: fe80::b240:4a05:b34e:905d/64 Scope:Link
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

Now, let us connect the docker1 and docker2 containers to the test-host-bridge network.

Docker03:~ $ docker network connect test-host-bridge docker1
Docker03:~ $ docker network connect test-host-bridge docker2
Docker03:~ $ docker network inspect test-host-bridge
.....
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "Containers": {
            "00c25d0888994d7f46b12ca09949ed524fbc07c753ed350597aa48eaf13f880b": {
                "Name": "docker2",
                "EndpointID": "cf9662047a0e17d7c63a906c30be5779bb4aae3ac93d8f765d201ce79a5502af",
                "MacAddress": "02:42:c0:a8:0f:03",
                "IPv4Address": "192.168.15.3/24",
                "IPv6Address": ""
            },
            "638abcc7c6ddca61747f675fadf81d06036ef368878c5944628434e3c0240644": {
                "Name": "docker1",
                "EndpointID": "2e5c8684a5bd9d4e18f1e7328f860d813e6dacb0281cfae8f6569d4f0ccfffe",
                "MacAddress": "02:42:c0:a8:0f:02",
                "IPv4Address": "192.168.15.2/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.mtu": "9100"
        },
        "Labels": {}
    }
]

The two containers are now attached to the network. Let’s check the network adapters created inside a container:

root@docker2:/# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.3 netmask 255.255.0.0 broadcast 0.0.0.0
ether 02:42:ac:11:00:03 txqueuelen 0 (Ethernet)
RX packets 18366 bytes 27076143 (27.0 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 9220 bytes 667701 (667.7 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9100
inet 192.168.15.3 netmask 255.255.255.0 broadcast 0.0.0.0
ether 02:42:c0:a8:0f:03 txqueuelen 0 (Ethernet)
RX packets 75 bytes 16338 (16.3 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2 bytes 180 (180.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

Now, check the MTU set on test-host-bridge by sending large pings with the DF (don’t fragment) bit set from inside docker2 on the 192.168.15.0/24 network:

root@docker2:/# ping 192.168.15.3 -s 9000 -M do
PING 192.168.15.3 (192.168.15.3) 9000(9028) bytes of data.
9008 bytes from 192.168.15.3: icmp_seq=1 ttl=64 time=0.267 ms
9008 bytes from 192.168.15.3: icmp_seq=2 ttl=64 time=0.177 ms
9008 bytes from 192.168.15.3: icmp_seq=3 ttl=64 time=0.181 ms
9008 bytes from 192.168.15.3: icmp_seq=4 ttl=64 time=0.180 ms
^C
--- 192.168.15.3 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3120ms
rtt min/avg/max/mdev = 0.177/0.201/0.267/0.039 ms
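The sizes in that ping output can be sanity-checked with a little arithmetic: an unfragmented IPv4 ping can carry at most the MTU minus the 20-byte IP header minus the 8-byte ICMP header of payload. A quick sketch:

```python
# Sketch of the size arithmetic behind the jumbo ping on the
# MTU-9100 network.
MTU = 9100
IP_HEADER = 20    # bytes, IPv4 header without options
ICMP_HEADER = 8   # bytes, ICMP echo header

max_payload = MTU - IP_HEADER - ICMP_HEADER
print(max_payload)  # 9072 -> `ping -s 9000` fits without fragmenting

# The numbers in the output above: 9000 data bytes travel in a
# 9028-byte IP packet, and each reply line reports data + ICMP header.
print(9000 + ICMP_HEADER + IP_HEADER)  # 9028
print(9000 + ICMP_HEADER)              # 9008
```

By the same arithmetic, on the default ‘bridge’ network (MTU 1500) a `ping -s 9000 -M do` would be rejected, since the maximum unfragmented payload there is only 1472 bytes.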

So there we have it: docker1 and docker2 are both connected to the default ‘bridge’ network, and also to the user-defined test-host-bridge network with a larger MTU.

Side notes:

If you want to completely seclude the docker2 container and make test-host-bridge an internal-only network (no external traffic), that is possible too. Steps:

a. Disconnect all the containers connected to test-host-bridge.

$ docker network disconnect test-host-bridge docker1
$ docker network disconnect test-host-bridge docker2

b. Delete the network, and re-create it as an internal network.

$ docker network rm test-host-bridge
$ docker network create test-host-bridge --subnet 192.168.15.0/24 --internal --attachable --opt com.docker.network.driver.mtu=9100

c. Connect the containers back to it.

$ docker network connect test-host-bridge docker1
$ docker network connect test-host-bridge docker2

d. Disconnect the docker2 container from the ‘bridge’ network and verify that it is connected only to the test-host-bridge network

$ docker network disconnect bridge docker2
$ docker container inspect docker2

I regularly write about different topics in tech including career guidance, the latest news, upcoming technologies, and much more. This blog was originally posted on my blog at anirban-mukherjee.com


Anirban Mukherjee

Loves writing code, building projects, writing about tech stuff, running side hustles; Engineering leader by day, nerd builder by night.