
Masayoshi Mizutani


Design & Implementation of Modern Router with Docker + Linux for home and small office


This article describes a router built with Docker + Linux for the home and small office.

Design

Principle

Modularity

Except for the parts that have to be handled by the Linux kernel itself, each service should run as independently as possible.

  • Compared to commercially available broadband routers, a Linux machine can do almost anything thanks to its extremely high degree of freedom, but the environment tends to become "dirty"
     - As minor changes pile up, the dependencies of services and the files they leave behind gradually become unclear
     - Especially when adding or removing a service, leftover garbage tends to remain
  • Rebuilding a router does not happen often, but it is desirable to keep the services separated from the OS so that they can be migrated easily when the machine is replaced

Low running cost

Compared to the past, there are now many external services (SaaS, etc.) that make operation convenient. I would like to use them actively, but since this is strictly for a home or small office, I do not want the daily running cost to balloon.

Specifically, the goal is to keep the monthly cost to a few hundred yen. I am well aware that spending more money would open up a variety of better services, but based on this policy the design sticks to the minimum necessary functionality.

Just for engineers

Building a router yourself is something only IT engineers tend to do in the first place, and if I had to state a purpose, it is to satisfy the engineer's desire to tinker. It is therefore not designed to replace an existing broadband router; for ordinary purposes it is far cheaper and easier to buy and set up a commercial one.

Assumptions

  • Hardware
     - x86 machine (a Raspberry Pi could also work, but I had no desire to struggle with its slight performance uncertainty and ARM)
     - Two or more NICs
     - RAM with some headroom (about 4 GB assumed)
     - The disk is assumed to be small, so little permanent data is kept
  • Required services
     - Packet forwarding (home network -> Internet)
     - Firewall
     - DNS cache server
     - DHCP server
     - Communication monitoring

Architecture

Based on the requirements above, the router was designed as shown in the figure below.

(architecture diagram)

  • Packet forwarding, the firewall, and other parts directly tied to the network are handled plainly by the Linux OS itself
  • Other services are modularized as Docker containers, which makes them easier to add, change, or delete
     - DNS cache server (unbound)
     - DHCP server (kea)
     - Communication monitoring (softflowd, dns-gazer)
  • External services are used where appropriate
     - Metrics monitoring (Mackerel)
     - Log storage (AWS S3)

Implementation

Preparing the machine

The machine prepared this time is the following.

XCY Intel Celeron J1900 Barebones (2 Ghz quad-core 4 threads) Gigabit LAN * 4 Fanless small space-saving (4 G RAM 32 G SSD)
https://www.amazon.com/dp/B01N6MDE01

It has four Ethernet NICs, and with a 2 GHz quad-core CPU, 4 GB of RAM and a 32 GB SSD it has more than enough performance for a router. Four NICs mean you can play with separate segments, and the chassis is small, so it is well suited for installation as a router.

Of course you do not need this exact machine; adding an extra NIC to an old desktop PC also works. The rest of this article assumes installation on this XCY box.

Installing Linux as a host

(photo)

OS Installation

Since the XCY box used this time is an ordinary x86 machine, the installation procedure is the same as for any PC or server. As the photo shows, there are two USB ports, so you can attach a CD/DVD drive or USB stick there. I have not tried it, but since there is also a COM serial port, installing over serial should be possible as well.

This time I used a USB CD/DVD drive, connected a keyboard to the other USB port, and installed while watching the VGA output on the back on a screen. When using a USB CD/DVD drive you need to adjust the BIOS boot priority (my memory is hazy since this was a while ago), but you can enter the BIOS with the ESC key right after power-on.

I chose Ubuntu Linux 16.04 as the host OS; the rest of the article assumes it.

Package Installation

$ sudo apt update
$ sudo apt upgrade -y
$ sudo apt install -y iptables docker.io docker-compose pppoeconf git

Since I want to avoid putting extra things on the host OS as much as possible, the installed packages are kept to a minimum.

Interface configuration

As in the design diagram, one NIC is used on the Internet side and another on the internal network side. On Ubuntu Linux 16.04 (Linux kernel 4.4.0-104-generic), the interface names printed on the chassis correspond to the OS device names as follows.

  • LAN1 : enp1s0
  • LAN2 : enp2s0
  • LAN3 : enp3s0
  • LAN4 : enp4s0

This time LAN1 (enp1s0) is used for the Internet side and LAN2 (enp2s0) for the internal network side. The network configuration is as follows.

  • Network: 10.0.0.0/24
  • Range of IP addresses used for DHCP: from 10.0.0.129 to 10.0.0.254
  • IP address of internal network side: 10.0.0.1

The settings of /etc/network/interfaces at this stage are as follows.

auto lo
iface lo inet loopback

auto enp2s0
iface enp2s0 inet static
    address 10.0.0.1
    netmask 255.255.255.0

Setting up PPPoE

Have your PPPoE username and password at hand and run the following command.

$ sudo pppoeconf

Running pppoeconf walks you through the settings interactively; answer the questions and it prepares the configuration under /etc/ppp/. Note that the machine must already be connected by LAN cable to the PPPoE-capable device at the time you run pppoeconf.
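
Once the files under /etc/ppp/ are in place, the link can be brought up and checked by hand. A quick sanity check, assuming pppoeconf created the default peer name dsl-provider:

$ sudo pon dsl-provider    # bring the PPPoE session up (pppoeconf usually also enables it at boot)
$ ip addr show ppp0        # confirm ppp0 exists and has the address assigned by the provider
$ sudo plog                # show recent pppd log lines if the session does not come up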

NAT & F/W Configuration

Add the following line to /etc/sysctl.conf to enable packet forwarding.

net.ipv4.ip_forward=1
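
To have the change take effect without a reboot, reload the sysctl settings:

$ sudo sysctl -p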

Edit /etc/rc.local.

/sbin/iptables-restore < /etc/network/iptables
exit 0

Normally it would be cleaner to put a script that loads iptables under /etc/network/if-pre-up.d/, but the docker service, which starts after the interfaces come up, appears to overwrite those iptables settings, so iptables-restore is run from /etc/rc.local instead.

I saved the output of iptables-save to /etc/network/iptables and edited it. After merging Docker's original rules and opening the ports needed for the services, the final result is as follows (/etc/network/iptables).

*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:DOCKER - [0:0]

# ppp0 is the PPPoE (Internet-side) connection; docker0 is the Docker bridge
-A POSTROUTING -o ppp0 -j MASQUERADE
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN

COMMIT

*filter
:INPUT DROP [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER-ISOLATION -j RETURN

-A DOCKER -i ppp0 -p udp -m state --state ESTABLISHED -j ACCEPT
-A DOCKER -i ppp0 -p tcp -m state --state RELATED,ESTABLISHED -j ACCEPT
-A DOCKER -i ppp0 -j DROP

-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p udp -m state --state ESTABLISHED -j ACCEPT

-A INPUT -p tcp --dport 22 -s 10.0.0.0/24 -j ACCEPT
-A INPUT -p icmp -s 10.0.0.0/24 -j ACCEPT

COMMIT
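
Before relying on /etc/rc.local, the saved rules can be loaded and inspected by hand to make sure the file parses cleanly:

$ sudo iptables-restore < /etc/network/iptables
$ sudo iptables -L -n -v          # filter table
$ sudo iptables -t nat -L -n -v   # NAT table, including the MASQUERADE rules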

rsyslog Configuration

As described later, all logs are saved to AWS S3 via fluentd, so the host OS's rsyslog is also configured to forward its logs to fluentd. Create a new /etc/rsyslog.d/10-remote.conf with the following content.

*.* @127.0.0.1:5514
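
For the forwarding to take effect, restart rsyslog (a standard systemd service on Ubuntu 16.04):

$ sudo systemctl restart rsyslog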

Monitoring Service Installation

For the monitoring service I use Mackerel. Similar services include Datadog and New Relic. I did not compare them rigorously, but Mackerel seemed relatively simple and easy to use, and its free tier allows custom metrics, so I chose it.

It would have been possible to monitor from a Docker container, but forcing a container to reference data on the host seemed convoluted, so I simply installed the agent on the host OS. The basic monitoring agent can be installed with the following one-liner.

$ wget -q -O - https://mackerel.io/file/script/setup-all-apt-v2.sh | MACKEREL_APIKEY='2R8VBzXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX' sh

For details, refer to the following page.
https://mackerel.io/orgs/mizutani-home/instruction-agent

Apparently up to 10 monitoring rules can be set up as well, but only the default connectivity check and rules detecting excessive CPU, disk, and memory use are enabled here.

(screenshot of the Mackerel monitoring rules)

Service Configuration

Services

fluentd (log collection)

fluentd is the well-known log collection tool. In this router, logs are funneled into fluentd and managed in one place. The main sources and destinations are as follows.

  • Sources
     - Standard output and standard error from each Docker container, sent via the logging driver
     - Host OS syslog
     - NetFlow records sent from softflowd
     - DNS query / reply logs sent from dns-gazer
  • Destinations
     - Logs are basically stored on AWS S3
     - Some metrics are sent to Mackerel for monitoring (not covered in this article, but planned for the future)

As for volume, the NetFlow and DNS logs together come to no more than about 3 MB per day. In the Tokyo region the S3 storage fee is $0.025/GB per month (for the first 50 TB). Even if roughly 1 GB accumulates per year, after 10 years that is only about 10 GB, so the cost is 10 GB * $0.025/GB = $0.25 per month, i.e. 30 yen or less, which fits the original goal.

unbound (DNS cache server)

bind has long been the famous DNS server, but if all you want is a cache server there is little reason to use bind with its full DNS feature set. unbound is a DNS server focused mainly on the cache server role, with advantages such as resistance to cache poisoning, performance, and ease of configuration. The configuration really is simple; the following is all it takes to work as a DNS cache server.

server:
  port: 53
  interface: 0.0.0.0
  access-control: 10.0.0.0/24 allow

  logfile: ""
  verbosity: 2
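
Once the unbound container is running (see the docker-compose section below), the cache can be checked from a machine on the internal network by querying the router's internal address:

$ dig @10.0.0.1 example.com    # first query is resolved upstream
$ dig @10.0.0.1 example.com    # repeated query: the reported "Query time" should drop to near 0 ms once cached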

kea (DHCP server)

For DHCP, which assigns IP addresses dynamically, the conventional choice used to be isc-dhcp-server from the Internet Systems Consortium, but more recently ISC itself has implemented kea, billed as a "Modern Open Source DHCPv4 & DHCPv6 Server". Facebook published a blog post saying they use kea in their data centers, so there already seem to be various real deployments.

Compared with the conventional ISC DHCP server, kea has features such as:

  • A RESTful API for configuration changes and other management is provided
  • An RDBMS (PostgreSQL, MySQL) can be used for the lease DB
  • DHCPv4 and DHCPv6 are implemented as separate servers

I do not use the configuration-change API this time, but I do use MySQL as the lease DB, as sketched below.
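
For reference, the part of a kea DHCPv4 configuration that points the lease database at MySQL looks roughly like this (the database name and credentials below are placeholders; in this setup the real kea-config.json is generated later by setup.py):

"Dhcp4": {
  "lease-database": {
    "type": "mysql",
    "name": "kea",
    "host": "127.0.0.1",
    "user": "kea",
    "password": "secret"
  },
  ...
}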

As an aside, the source tree is huge and building kea from source takes a tremendous amount of time (a compile of several hours on a 2013 MacBook Pro), so I recommend using a binary package.

softflowd (Flow monitoring)

softflowd monitors a network interface and exports communication flow information (source and destination IP address, IP protocol, source and destination port numbers, bytes sent/received, packet counts, start/end times, etc.).
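
In this setup softflowd runs in a container and exports NetFlow records to the fluentd container; a minimal equivalent invocation, assuming the internal interface and the collector port used later in docker-compose.yml, would look like:

$ sudo softflowd -i enp2s0 -n 127.0.0.1:2055 -v 5

Here -i names the interface to watch, -n the NetFlow collector address, and -v the NetFlow protocol version.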

Actually, nprobe (from the same project as ntopng) provides more detailed information, but using it in proper operation requires purchasing a license, so I gave it up as unsuited to the cheap design aimed at here.

For the moment I am running softflowd, but I plan to replace it with another tool, or one I develop myself, in the future.

dns-gazer (DNS monitoring)

To log DNS queries I use a tool called dns-gazer, which I develop myself. Like softflowd, it monitors the network interface and produces DNS query and reply logs. It sends the logs directly to fluentd in forward format, and here they are saved to AWS S3. dnscap exists as a similar tool for capturing DNS traffic, but its log output method and format were problematic for my purposes, so I decided to develop a separate tool.

The DNS logs are collected mainly for security purposes. Even without advanced analysis, simply checking them later against blacklisted domain names and IP addresses can reveal a malware infection. They can also enable forensics when an incident occurs (though in a normal home environment such a need is almost nonexistent).

I wanted to also cover how to make use of these logs, but that part is not mature yet. For now, the goal is simply to record the logs properly.

Preparing external services

AWS S3

As mentioned earlier, an AWS S3 bucket is used for storing the log files. I will skip the account creation steps.

Once the console is usable with your account, first create a bucket. The rest assumes an S3 bucket named home-network.

After creating the bucket, obtain an API key. Using your own account's API key is not impossible, but I recommend creating a separate user dedicated to saving logs to S3, with only that authority.

After (or while) creating the user, create a policy that allows writing to S3 and attach it to that user. An example policy is shown below. Besides s3:PutObject for creating log files, s3:GetObject and s3:ListBucket are also granted so that fluentd's out_s3 plugin can check whether files already exist in the bucket.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::home-network",
                "arn:aws:s3:::home-network/*"
            ]
        }
    ]
}

Once the policy is set, retrieve and save the AWSAccessKeyId and AWSSecretKey.

Setting up docker-compose

From here on, the explanation assumes that the environment is built with the docker-compose configuration I have prepared.

$ sudo mkdir -p /opt
$ sudo chown $USER /opt
$ cd /opt
$ git clone https://github.com/m-mizutani/docker-based-home-router
$ cd docker-based-home-router

Then, create a file named config.json.

$ cp config.template.json config.json
$ vim config.json

And rewrite the necessary parts.

{
  "s3": {
    "key": "AKXXXXXXXXXXXXXXXXXXXXXX",
    "secret": "Ib346xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
    "region": "ap-northeast-1",
    "bucket_name": "home-network"
  },
  "dhcp": {
    "interfaces": [
      "enp0s2"
    ],
    "pools": [
      "10.0.0.129 - 10.0.0.254"
    ]
  },
  "network": {
    "internal_gateway": "10.0.0.1",
    "subnet": "10.0.0.0/24"
  },
  "monitor": {
    "interface": "enp0s2"
  },
  "mackerel": {
    "api_key": "2R8xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
  }
}

A few notes on these settings:

  • The interface names under dhcp/interfaces and monitor/interface are basically the same; the former is kea's setting, the latter is the interface monitored by softflowd and dns-gazer.
  • Note that the two items under dhcp are lists (inserting multiple values should work in principle, but this is unverified at the moment)

Once config.json is ready, run setup.py (verified with Python 3.6). Running it automatically generates the various configuration files.

$ ./setup.py
2018-01-06 03:57:57,279 [DEBUG] config path: /opt/docker-based-home-router/config.json
2018-01-06 03:57:57,282 [INFO] creating fluentd config: /opt/docker-based-home-router/fluentd/fluent.conf
2018-01-06 03:57:57,286 [DEBUG] Found Mackerel API KEY in config
2018-01-06 03:57:57,286 [DEBUG] Found Mackerel ID file: /var/lib/mackerel-agent/id
2018-01-06 03:57:57,286 [INFO] creating kea config file: /opt/docker-based-home-router/kea/kea-config.json
2018-01-06 03:57:57,289 [INFO] creating env file for mysql: /opt/docker-based-home-router/mysql.env
2018-01-06 03:57:57,289 [INFO] creating env file for dns-gazer: /opt/docker-based-home-router/dns-gazer.env

Once the files have been generated, build the images.

$ sudo docker-compose build

Everything is now ready to start.

docker-compose.yml

For the overall docker-compose layout and the details of each image, reading the code is really the best way, but I will pick out the key points here.

  unbound:
    build: ./unbound
    restart: always    
    ports:
    - "53:53/udp"
    depends_on:
    - fluentd
    logging:
      driver: fluentd
      options:
        fluentd-address: localhost:24224
        tag: "docker.{{.ImageName}}.{{.ID}}"

Not only for unbound: every container's logging driver is pointed at the fluentd container, and the logs are forwarded to S3 from there. That is why every service lists the fluentd container in depends_on. The tag is set to docker.<image name>.<container ID>.
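
The actual fluent.conf is generated by setup.py, but the S3 output side is essentially fluentd's out_s3 plugin; a rough sketch using the values from config.json (the match pattern, key placeholders, and buffer settings here are only illustrative):

<match **>
  @type s3
  aws_key_id AKXXXXXXXXXXXXXXXXXXXXXX
  aws_sec_key Ib346xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
  s3_bucket home-network
  s3_region ap-northeast-1
  path logs/
  <buffer time>
    @type file
    path /var/log/fluentd/s3-buffer
    timekey 3600
  </buffer>
</match>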

  kea:
    build: ./kea
    restart: always    
    network_mode: host

kea needs to receive the broadcast packets generated by a client's DHCP discover. Since a Docker container normally sits behind the host OS's internal bridge network, DHCP discover packets do not reach the container in the default configuration. There are probably ways to work around this with more effort, but this time I handled it simply with a setting equivalent to --net=host.

  kea-db:
    build: ./kea-db
    restart: always    
    ports:
    - 127.0.0.1:3306:3306
    env_file:
    - ./mysql.env
    volumes:
    - kea-db-vol:/var/lib/mysql

On the other hand, because the kea container runs with net=host it cannot reach the MySQL container via links, so MySQL exposes 3306/tcp on localhost and kea is configured to connect to 127.0.0.1:3306.

The database initialization script is baked into the MySQL image called kea-db. The database files are persisted with a volume, so the same lease DB is carried over across restarts.

  fluentd:
    build: ./fluentd
    restart: always
    ports:
    - 127.0.0.1:24224:24224
    - 127.0.0.1:5514:5514/udp
    - 127.0.0.1:2055:2055/udp
    volumes:
    - fluentd-buffer:/var/log/fluentd

Fluentd likewise keeps its buffer directory on a volume so that it survives container restarts. The ports are exposed for the following purposes (a sketch of the matching source definitions follows the list).

  • 24224: receives logs from the Docker logging driver and dns-gazer
  • 5514: receives rsyslog from the host OS
  • 2055: receives NetFlow from softflowd
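
These map to fluentd source definitions roughly like the following (an illustrative sketch; the forward and syslog inputs are built into fluentd, the netflow input needs the fluent-plugin-netflow plugin, and the tags are placeholders):

<source>
  @type forward        # 24224: Docker logging driver + dns-gazer
  port 24224
</source>

<source>
  @type syslog         # 5514: host rsyslog
  port 5514
  tag syslog
</source>

<source>
  @type netflow        # 2055: NetFlow from softflowd
  port 2055
  tag netflow.event
</source>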

Starting docker-compose at machine boot

Once everything is ready, add the startup command to /etc/rc.local. The final version looks like this.

/sbin/iptables-restore < /etc/network/iptables
docker-compose -p router -f /opt/docker-based-home-router/docker-compose.yml --verbose up -d
exit 0

After that, running sudo /etc/rc.local (or rebooting) should bring up all the components. With this startup command the logging goes to the background, so when you need to debug it is a good idea to run the docker-compose command without the -d option.

$ sudo docker-compose -p router -f /opt/docker-based-home-router/docker-compose.yml --verbose up
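
To confirm that everything actually came up, the usual status check works with the same project name and compose file:

$ sudo docker-compose -p router -f /opt/docker-based-home-router/docker-compose.yml ps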

Experience

Being a router, there is not much visible output to show, but for what it is worth, here is how it has been running so far.

(Mackerel dashboard screenshot)

This is the monitoring screen on Mackerel. Even though two traffic monitoring processes are running, the traffic volume is low overall (last night's instantaneous peak was about 1 MByte/s = 8 Mbps), so the CPU is barely used. The memory cache dropped considerably at last night's restart, but close to 1 GB is in use.

Acknowledgements

I would like to thank Google Translate.
