Bridge Networking

The Bridge Network is the default networking mode in Docker. When you install Docker, it creates a virtual bridge interface named docker0 on the host machine. This bridge acts as a virtual switch, connecting containers to each other and to the outside world via NAT (Network Address Translation).

In this chapter, we will go beyond the basics and trace the mechanics of how packets flow through veth pairs, bridges, and the Linux kernel.

1. The Anatomy of a Bridge

To understand Bridge networking, we must first understand how Linux isolates network namespaces.

First Principles: The Virtual Cable

When a container starts, Docker creates a Network Namespace for it. This is a completely isolated network stack (IPs, routes, ARP table).

To connect this isolated namespace to the host, Docker uses a VETH Pair (Virtual Ethernet Pair).

  • VETH Pair: Think of it as a virtual patch cable with two ends.
  • End A (eth0): Placed inside the container.
  • End B (vethXXXX): Placed on the host machine and plugged into the docker0 bridge.
[Diagram: the docker0 bridge (172.17.0.1) lives in the host network namespace; the host end of the veth pair (veth3a1b) is plugged into it, while the container end (eth0, 172.17.0.2) sits in the container namespace alongside the application.]

2. IP Masquerading (NAT)

How does a container with a private IP (172.17.0.2) talk to the internet (8.8.8.8)? The host acts as a router performing NAT.

When a packet leaves docker0 bound for the internet:

  1. Source NAT (SNAT) replaces 172.17.0.2 with the Host’s eth0 IP (e.g., 192.168.1.50).
  2. The router sees the packet coming from the Host.
  3. When the reply comes back, the Host remembers the connection and forwards it back to the container.

This is implemented with Linux iptables: Docker installs a MASQUERADE rule in the nat table's POSTROUTING chain covering the bridge's subnet. You can see it by running:

sudo iptables -t nat -L -n -v

[!NOTE] Performance Impact: Bridge networking adds a small CPU overhead because every packet traverses NAT and the kernel's bridge code. For latency-sensitive applications (e.g., high-frequency trading), consider host networking (--network host) instead.

3. User-Defined Bridges

By default, all containers attach to the “Default Bridge”. This has two major flaws:

  1. No DNS: Containers cannot resolve each other by name; you must use IP addresses.
  2. Weak isolation: Every container lands on the same subnet and can reach every other container over IP.

The solution is to create your own bridge:

docker network create my-app-net

This creates a new Linux bridge interface (e.g., br-3a4b5c...) and new iptables rules to isolate it.

4. Code Example: Inspecting Interfaces

We can verify this infrastructure by running code inside the container.

package main

import (
	"fmt"
	"net"
)

// CheckInterfaces lists all network interfaces visible to the container
func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}

	fmt.Println("Network Interfaces inside Container:")
	for _, i := range ifaces {
		addrs, err := i.Addrs()
		if err != nil {
			continue // skip interfaces whose addresses cannot be read
		}
		fmt.Printf("Name: %s, MAC: %s\n", i.Name, i.HardwareAddr)
		for _, addr := range addrs {
			fmt.Printf("  IP: %s\n", addr.String())
		}
	}
}
The same check in Java:
import java.net.NetworkInterface;
import java.net.SocketException;
import java.util.Collections;
import java.util.Enumeration;

public class NetCheck {
    public static void main(String[] args) throws SocketException {
        Enumeration<NetworkInterface> nets = NetworkInterface.getNetworkInterfaces();
        System.out.println("Network Interfaces inside Container:");

        for (NetworkInterface netint : Collections.list(nets)) {
            System.out.printf("Name: %s\n", netint.getDisplayName());
            // In a container, you will typically see 'lo' and 'eth0'
            // eth0 is the Container-end of the VETH pair.
        }
    }
}

5. Summary

  • VETH Pairs: Connect the container to the host bridge.
  • Bridge (docker0): Acts as a virtual switch.
  • NAT: Allows outbound internet access.
  • User-Defined Bridges: Provide DNS and isolation.