Farhim Ferdous

The Host Network Driver | Networking in Docker #5

Learn what the host driver is, how it provides the best network performance, how to use it, possible use cases, and its limitations


Link to video: https://www.youtube.com/watch?v=qice7-Cfgzw


What is the host network driver in Docker and how does it provide the best network performance?

This blog will try to answer that (and more) as simply as possible.

Introduction

This blog is the fifth one in a series on Docker Networking.

If you are looking to learn more about the basics of Docker, I’d recommend checking out the Docker Made Easy series.

Here’s the agenda for this blog:

  • What is the host network driver?
  • How to use it?
  • When to use it? - possible use cases
  • Its limitations

Here are some quick reminders:

  • A Docker host is the physical or virtual machine that runs the Docker daemon.
  • Docker network drivers let us easily use different types of networks for containers while hiding the complexity of the underlying network implementation.

Alright, so...

What is the host driver?

When a container uses the host network driver, the container shares the network stack (namespace) of its host.


This means the network of the container is not virtualized, making the container appear as if it is the host itself, from a networking perspective.

A direct consequence of this is that if a process in a container using the host driver listens on a port, say 8000, that port is opened directly on the host machine. If that port is already in use on the host, the process will fail to bind and the container will not run successfully.

However, in all other ways (storage, process, and user namespaces, for example), the container remains isolated from the host.

NOTE: the host driver is currently supported only on Linux, i.e. it is not available on Docker Desktop for Mac or Windows.
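
If you’re not sure what your Docker engine is running on, you can ask it directly. A quick check (a small sketch; it assumes a Docker CLI recent enough to support --format on docker info):

# should print "linux" on a Linux engine
docker info --format '{{.OSType}}'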

How to use the host driver?

Example without specifying any network - using default bridge driver

Let’s start off by running an nginx container named app1 in the background (-d) without specifying any network:

docker run -d --name app1 nginx:alpine

The nginx server inside the container listens on port 80 by default. Since we did not specify a network, Docker attaches the container to the default bridge network.
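
To confirm which network app1 was attached to, we can ask Docker directly. A minimal sketch using docker inspect (the exact JSON output will vary from machine to machine):

# the top-level key in the output should be "bridge"
docker inspect -f '{{json .NetworkSettings.Networks}}' app1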

If we use curl localhost inside the container, we’ll see the nginx index page successfully:

# works
docker exec app1 curl localhost

NOTE: Did you know port 80 is the default HTTP port? That’s why curl localhost is equivalent to curl http://localhost:80.

But if we try to curl localhost from the host machine, it will fail:

# fails
curl localhost

This is because we cannot reach the nginx container from the host, as the host and the container are using separate network namespaces (when using the default bridge network).
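
As an aside, on a Linux host the container is still reachable through the IP it received on the bridge network. A quick sketch (the actual IP will differ on your machine; 172.17.0.2 below is just an example):

# look up app1's IP on the bridge network
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' app1

# curl that IP directly from the host, e.g.
curl 172.17.0.2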

The usual solution, though, is port mapping.

Let’s remove app1 and create it again using the port mapping -p option:

docker rm -f app1

docker run -d --name app1 -p 2000:80 nginx:alpine

The port mapping option (-p 2000:80) instructs Docker to map port 2000 on the host to port 80 on the container.
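
We can verify the mapping Docker has set up with docker port (the output format may differ slightly across Docker versions):

# should print something like: 80/tcp -> 0.0.0.0:2000
docker port app1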

Let’s check the results using curl:

# fails
curl localhost

# works
curl localhost:2000

# works
docker exec app1 curl localhost

We’ll learn more about bridge drivers in the next part of this series.

Example using host driver

As we’ve learnt, when using the host driver the network stack is the same for both the container and the host, so there’s no need for port mapping. Let’s check it out.

NOTE: you need a Linux machine to use the host driver.

We’ll run a similar nginx container named app2 using the host driver:

docker run -d --name app2 --network host nginx:alpine
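
Incidentally, if you combine --network host with a -p mapping, Docker ignores the mapping and should warn you that published ports are discarded when using host network mode (exact wording may vary by version). A hypothetical example, with the container name app2b made up purely for illustration:

# the -p option has no effect here; expect a warning about discarded ports
# (the nginx inside will also hit the port conflict shown later, since app2 already holds port 80)
docker run -d --name app2b --network host -p 2000:80 nginx:alpine
docker rm -f app2b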

Curling localhost will now work:

# works
curl localhost

Can you guess why?

The answer will be clear if we...

Check differences in IP configuration

Let’s first look at IP addresses assigned to all network interfaces on the host machine:

ip addr

Note the number of interfaces and the associated IPs.
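
If your version of iproute2 supports it, the -brief flag gives a compact one-line-per-interface view that is easier to compare later:

# compact view of interfaces and their IPs (newer iproute2 releases)
ip -brief addr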

What happens if we execute the same command inside the app1 container (the one using the default bridge driver)?

docker exec app1 ip addr

We should see clear differences between the host machine’s configuration and app1’s, since they are using different network namespaces.

Now let’s try the same for the app2 container (the one using the host driver):

docker exec app2 ip addr

You’ll notice that the network configuration is exactly the same as for the host machine.

This is a clear indication that the host driver makes the container share the host machine’s network stack.
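
Here are two more ways to confirm this from the host side; a minimal sketch, assuming a bash shell (for process substitution) and the ss utility are available:

# the two outputs should be identical, or very nearly so
diff <(ip addr) <(docker exec app2 ip addr)

# nginx from app2 shows up as a listener on the host itself
sudo ss -ltnp | grep ':80 '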

One consequence of using the host driver is...

Port conflicts

You cannot run multiple containers that bind to the same port when using the host driver.

To demonstrate this, we’ll run another nginx container just like app2, but named app3:

docker run -d --name app3 --network host nginx:alpine

If we check the logs:

docker logs -f app3

We’ll notice that nginx failed to start because the port it tried to bind to (80) was already in use.

We can confirm that app3 exited:

docker ps -a
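
Alternatively, docker inspect can pull the container’s state out directly. A minimal sketch (the exact exit code depends on how nginx fails):

# should print something like: exited 1
docker inspect -f '{{.State.Status}} {{.State.ExitCode}}' app3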

Cleanup

To end the hands-on lab, we’ll clean up the containers:

docker rm -f app1 app2 app3

Now that we have seen it in action, let’s learn...

When to use the host driver? - possible use cases

The following are some use cases where the host driver could be suitable.

  1. When the highest network performance is required

    The host network driver provides the best network performance compared to the other drivers since it uses the network namespace of the host machine directly and does not require port-mapping / network address translation (NAT).

  2. When a single container needs to handle a large number of ports

    If our workload requires a container to handle a large number of ports, it might be preferable to use the host network directly, instead of mapping each port to the host one by one (see the sketch just after this list).

  3. When network isolation is not required

    This can actually simplify our networking since we won’t need to bother with port mapping or NAT.
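
For example, here is a rough sketch of the difference for use case 2 (the image name myapp and the port range are made up for illustration):

# bridge driver: publish a whole range, one mapping per port under the hood
docker run -d --name range-app-bridge -p 5000-5100:5000-5100 myapp

# host driver: no mappings at all; the app's ports are simply the host's ports
docker run -d --name range-app-host --network host myapp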

With these properties in mind, we should also be aware of the...

Limitations of the host driver

  1. Lack of network isolation

    Since the container’s network is not virtualized when using the host driver, it may not provide the level of isolation required for many multi-container workloads.

    Additionally, any networking configuration applied to the host machine will also be applied to the container. If the container is deployed in privileged mode, the container could reconfigure the host’s network stack as well.

    This tight coupling with the host machine can lead to bugs or security issues.

  2. Port conflicts

    As we have seen, we cannot run multiple containers that use the same port when using the host driver.

  3. Only works on Linux machines

    The host driver is only supported on Linux, not on Docker Desktop for Mac or Windows.

Conclusion

In this blog, we learnt about the host network driver in Docker - what it is, how to use it, some possible use cases, and its limitations.

By forgoing network isolation, the host driver provides the best performance and simplicity. But its drawbacks should be kept in mind when using it in production.

I hope I could make things clearer for you, be it just a tiny bit.

In the next blog, we will learn about the bridge driver - the default network driver, and likely the one used most in development environments.

Thanks for making it so far! 👏

See you at the next one.

Till then…

Be bold and keep learning.

But most importantly,

Tech care!
