
tinc-boot - full-mesh VPN without pain


Automatic, secure, distributed, with transitive connections (that is, forwarding messages when there is no direct connection between peers), without a single point of failure, peer-to-peer, time-tested, with low resource consumption, a full-mesh VPN network with the ability to "punch" NAT - is this possible?

Right answers:

  • yes, but difficult, if you are using pure tinc;
  • yes, and easy, if you are using tinc + tinc-boot.

GitHub: reddec / tinc-boot - Bootstrap your Tinc node quickly and easily

Overview of Tinc

Unfortunately, Tinc VPN does not have as big a community as OpenVPN or similar solutions; however, there are some good tutorials around the web.

The Tinc man pages are always a good source of truth.

Tinc VPN (from the official site) is a service (the tincd daemon) that creates a private network by tunneling and encrypting traffic between nodes. The source code is open and available under the GPL2 license. Like classic solutions (e.g. OpenVPN), the virtual network is available at the IP level (OSI layer 3), which generally means that no changes to applications are required.

Key features:

  • encryption, authentication and compression of traffic;
  • fully automatic full-mesh solution, which includes building connections to network nodes in a peer-to-peer mode or, if this is not applicable, forwarding messages between intermediate hosts;
  • "punching" NAT;
  • the ability to connect isolated networks at the ethernet level (virtual switch);
  • support for multiple operating systems: Linux, FreeBSD, OS X, Solaris, Windows, etc.

There are two branches of tinc development: 1.0.x (in almost all repositories) and 1.1 (eternal beta). In this article, only version 1.0.x was used.

From my point of view, one of the strongest features of Tinc is its ability to forward messages over peers when a direct connection is not possible. Routing tables are built automatically. Even nodes without a public address can become relay servers.

Case scenario

Consider a situation with three servers (China, Russia, Singapore) and three clients (Russia, China and the Philippines):

  • servers have public addresses, clients are behind NAT;
  • due to Russian censorship rules, everything else was eventually blocked except the "friendly" China (unfortunately, not so unrealistic);
  • the network border of China <-> Russia is unstable and may go down (due to both countries' censorship rules);
  • connections to Singapore are pretty stable (from personal experience);
  • Manila (Philippines) is not a threat to anyone, and is therefore allowed by everyone (due to the distance from everyone and everything).

Using the traffic exchange between Shanghai and Moscow as an example, consider the following Tinc scenarios (approximately):

  1. Normal situation: Moscow <-> russia-srv <-> china-srv <-> Shanghai
  2. Due to censorship rules, connection to China has been closed: Moscow <-> russia-srv <-> Manila <-> Singapore <-> Shanghai
  3. (after 2) in case of server failure in Singapore, traffic is transferred to the server in China and vice versa.

Whenever possible, Tinc attempts to establish a direct connection between two nodes behind NAT by punching through it.

Brief introduction to Tinc configuration

Tinc is positioned as an easy-to-configure service; however, something went wrong. To create a new node, the minimal requirements are (a rough sketch of these files follows the list):

  • describe the general configuration of the node (type, name) (tinc.conf);
  • describe the node's own host file (served subnets, public addresses) (hosts/);
  • create a key;
  • create a script that sets the host address and related parameters (tinc-up);
  • it is advisable to create a script that clears the created parameters after stopping (tinc-down).
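
For illustration only (the values below are made up, not produced by any tool), a minimal set of such files for a node named alice in a network called dnet could look roughly like this:

/etc/tinc/dnet/tinc.conf
    Name = alice
    Interface = dnet
    ConnectTo = bob

/etc/tinc/dnet/hosts/alice
    Address = 1.2.3.4
    Subnet = 10.10.0.1/32
    <RSA public key block, appended by: tincd -n dnet -K>

/etc/tinc/dnet/tinc-up
    #!/bin/sh
    ip addr add 10.10.0.1/16 dev $INTERFACE
    ip link set dev $INTERFACE up

/etc/tinc/dnet/tinc-down
    #!/bin/sh
    ip link set dev $INTERFACE down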

In addition to the above, when connecting to an existing network, you must obtain the host files (keys) of the existing nodes and provide your own.

I.e., for the second node:

(diagram: key exchange for the second node)

For the third node:

(diagram: key exchange for the third node)

By using two-way synchronization (for example, unison), the number of additional operations increases to N, where N is the number of public nodes.

To the credit of the Tinc developers, to join an existing network you only need to exchange keys with one of its nodes (a bootnode). After starting the service and connecting to that participant, tinc obtains the network topology and can work with all members.

However, if the boot host becomes unavailable and tinc is then restarted, there is no way to connect to the virtual network.

Moreover, the enormous capabilities of tinc, together with the academic documentation of the project (well written, but with few examples), provide an extensive field for making mistakes.

Reasons to create tinc-boot

Generalizing the problems described above and formulating them as tasks, we get the following requirements:

  1. simplify the process of creating a new node;
    • potentially, it should be enough to give an ordinary user ("Power User" in Windows terms) one small shell command to create a new node and join the network (I have in mind something like this to connect with my granddad, a former tech guy);
  2. provide automatic distribution of keys between all active nodes;
  3. provide a simplified procedure for exchanging keys between the bootnode and the new client.

A bootnode is a node with a public address.

Due to task 2 above, we can guarantee that after the key exchange between the bootnode and the new node and after establishing a connection to the network, the new key will be distributed automatically.

tinc-boot was created to solve these tasks.

tinc-boot is a self-contained (except for tinc itself) open-source application that provides:

  • simple creation of a new node;
  • automatic connection to an existing network;
  • setting the majority of parameters by default;
  • distribution of keys between nodes.

Architecture

(architecture overview diagram)

The tinc-boot executable contains four components: a bootnode server, a key distribution server, RPC management commands for it, and a node generation module.

Module: config generation

The node generation module (tinc-boot gen) creates all the necessary files for tinc to run successfully.

Simplified, its algorithm can be described as follows:

  1. Define the host name, network, IP parameters, port, subnet mask, etc.
  2. Normalize them (tinc has a limit on some values) and create the missing ones
  3. Check parameters
  4. If necessary, install tinc-boot into the system (can be disabled)
  5. Create the tinc-up, tinc-down, subnet-up and subnet-down scripts
  6. Create the tinc.conf configuration file
  7. Create a host file in hosts/
  8. Perform key generation (4096 bits)
  9. Exchange keys with bootnode
    1. Encrypt and sign its own host file (with its public key), a random initialization vector (nonce) and the host name using xchacha20poly1305, where the encryption key is the result of the sha256 function over the token
    2. Send the data over HTTP/HTTPS to the bootnode
    3. Receive the answer and the X-Node header containing the name of the boot node; decrypt it using the original nonce and the same algorithm
    4. If successful, save the received key in hosts/ and add a ConnectTo entry to the configuration file (i.e. a recommendation on where to connect during tinc start)
    5. Otherwise, use the next boot node address in the list and repeat from step 2
  10. Print recommendations for starting the service

Conversion via SHA-256 is used only to normalize the key to 32 bytes
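
For reference, here is a minimal sketch in Go of this key derivation and encryption scheme (sha256 over the token + xchacha20poly1305). It only illustrates the primitives named above; it is not the actual tinc-boot source:

package main

import (
	"crypto/rand"
	"crypto/sha256"
	"fmt"

	"golang.org/x/crypto/chacha20poly1305"
)

func main() {
	token := "MY TOKEN"
	// normalize the shared token to a 32-byte key via SHA-256
	key := sha256.Sum256([]byte(token))

	// xchacha20poly1305 AEAD cipher
	aead, err := chacha20poly1305.NewX(key[:])
	if err != nil {
		panic(err)
	}

	// random initialization vector (nonce), sent along with the request
	nonce := make([]byte, aead.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		panic(err)
	}

	hostFile := []byte("Address = 1.2.3.4") // payload to exchange (illustrative)
	sealed := aead.Seal(nil, nonce, hostFile, nil)
	fmt.Printf("encrypted payload: %d bytes\n", len(sealed))

	// the receiving side derives the same key from the token and decrypts
	plain, err := aead.Open(nil, nonce, sealed, nil)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(plain))
}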

For the very first node (that is, when there is nothing to specify as the boot address), step 9 is skipped by passing the --standalone flag.

Example 1 - creating the first public node

Assume that the public address is 1.2.3.4

sudo tinc-boot gen --standalone -a 1.2.3.4

  • the -a flag allows you to specify publicly accessible addresses

Example 2 - adding a non-public node to the network

The boot node will be taken from the example above. The host must have tinc-boot bootnode running (to be described later).

sudo tinc-boot gen --token "<MY TOKEN>" http://1.2.3.4:8655

  • the --token flag sets the authorization token
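
In both examples, tinc itself still has to be started for the generated network. The tool prints its own recommendations at the end of generation; on a systemd-based distribution whose tinc package ships the tinc@.service template unit, it would typically be something like the following (the exact command and the network name dnet are assumptions here, dnet being the default name used elsewhere in this article):

sudo systemctl enable --now tinc@dnet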

Module: bootnode

The boot module (tinc-boot bootnode) runs an HTTP server that serves as the API for key exchange with new clients.

By default it uses port 8655.

Simplified, the algorithm can be described as follows:

  1. Accept a request from a client
  2. Decrypt and verify the request using xchacha20poly1305, using the initialization vector (nonce) passed with the request, where the encryption key is the result of the sha256 function over the token
  3. Check the name
  4. Save the host file if there is no file with the same name yet
  5. Encrypt and sign its own host file and name using the algorithm described above
  6. Return to step 1

Together, the primary key exchange process is as follows:

(sequence diagram of the initial key exchange)

Example 1 - start bootnode

It is assumed that the initial node setup has already been done (tinc-boot gen).

tinc-boot bootnode --token "MY TOKEN"

  • the --token flag sets the authorization token. It should be the same for clients connecting to the host.

Example 2 - start bootnode as a system service

tinc-boot bootnode --service --token "MY TOKEN"

  • the --service flag creates a systemd service (for this example, tinc-boot-dnet.service by default)
  • the --token flag sets the authorization token. It should be the same for clients connecting to the host.

Module: key distribution

The key distribution module (tinc-boot monitor) starts an HTTP server with an API for exchanging keys with other nodes inside the VPN. It binds to the address issued by the network (the default port is 1655), so there is no conflict between several instances, since each network has (or must have) its own address.

There is no need to interact with this module manually, since it starts and stops automatically.

The module starts automatically when the network comes up (in the tinc-up script) and stops automatically when it goes down (in the tinc-down script).

Supported operations (illustrated with curl after the list):

  • GET / - returns the node's own host file
  • POST /rpc/watch?node=<>&subnet=<> - fetch a host file from another node, assuming a similar service is running on it. By default, each attempt times out after 10 seconds and is repeated every 30 seconds until success or cancellation.
  • POST /rpc/forget?node=<> - stop attempts (if any) to fetch the host file from another node
  • POST /rpc/kill - terminates the service
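
Purely for illustration, assuming a node whose VPN address is 10.10.0.1 (a made-up value) and the default port 1655, the API above could be exercised roughly like this:

# get the host file the monitor serves about its own node
curl http://10.10.0.1:1655/

# ask the monitor to fetch the host file of the node "bob" (hypothetical name)
curl -X POST "http://10.10.0.1:1655/rpc/watch?node=bob&subnet=10.10.0.2/32"

# cancel the attempts
curl -X POST "http://10.10.0.1:1655/rpc/forget?node=bob"

# terminate the monitor (normally done by the tinc-down script)
curl -X POST http://10.10.0.1:1655/rpc/kill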

In addition, every minute (by default) and whenever a new configuration file is received, the saved host files are indexed to look for new public nodes (those with a public IP). When nodes with the Address field are detected, an entry is added to the tinc.conf configuration file recommending a connection to them on restart.

Module: key distribution control

The commands for requesting (tinc-boot watch) and canceling a request for (tinc-boot forget) a configuration file from other nodes are executed automatically when a new node is detected (the subnet-up script) and when it disappears (the subnet-down script), respectively.

When the service is stopped, the tinc-down script runs, in which the tinc-boot kill command stops the key distribution module.
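
The actual hook scripts are generated by tinc-boot gen and call the tinc-boot watch / tinc-boot forget commands. Purely to illustrate the mechanism (this is not the generated script), a subnet-up hook could forward the NODE and SUBNET variables that tinc exports to the local monitor API described above, assuming the node's own VPN address is 10.10.0.1:

#!/bin/sh
# illustration only: ask the local key-distribution monitor to fetch
# the host file of the node whose subnet has just appeared
if [ "$NODE" != "$NAME" ]; then
    curl -X POST "http://10.10.0.1:1655/rpc/watch?node=$NODE&subnet=$SUBNET"
fi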

To summarize

This utility was created under the influence of cognitive dissonance between the genius of Tinc developers and the linearly growing complexity of setting up new nodes.

The main ideas in the development process were:

  • if something can be automated, it must be automated;
  • default values should cover at least 80% of use cases (the Pareto principle);
  • any value can be redefined using flags as well as environment variables (12-factor app);
  • the utility should help, not make you want to call down all the punishments of heaven on its creator;
  • using an authorization token for the initial bootstrap is an obvious risk; however, it was minimized as far as possible by means of encryption and authentication everywhere (even the host name in the response header cannot be forged).

During development, I actively tested on real servers and clients (the picture from the description of tinc above is taken from real life). Now the system works flawlessly.

The application code is written in Go and released under the MPL 2.0 license. The license (loosely speaking) allows commercial use (if someone suddenly needs it) without opening the derived product; the only requirement is that changes to the project must be contributed back.

Pull requests are welcome.

Useful links

Latest comments (5)

fabiobassa • Edited

Hello, good morning and absolutely compliments for your work.
It is awesome and everything is working out of the box. Some questions, please:
1) I like to have some control and to check what is going on under the hood, so I launched tinc with
tincd -D -d 5 -n dnet
and I noticed these warnings:
open /root/hosts no such file or directory
open /root/tinc.conf no such file or directory

It seems that instead of pointing to /etc/tinc/ something is pointing to /root. I made a symlink and the problem is gone, but this is a fact.

2) Does it automatically try to punch the NAT of every client even if the IP is not public, or should I set some more flags?

3) Could a flag (-K) be added to tinc-boot gen for creating a smaller key (e.g. 2048 bits), for small processors, to avoid heavy key computation?
Currently it is fixed at 4096.

Thank you in advance for your kind answers

چشم قرمزی

Also, I get nothing on ip:1655. I followed everything step by step.

reddec

I would very much appreciate it if you could tell me more about your setup. Ideally, open an issue in the GitHub project and include all possible details (mask your public IP address if needed).

چشم قرمزی

Hi there,
Thank you so much for creating this.
I am living in Iran and, as you may know, the government has totally shut down the internet here. Only a few datacenters have internet access (of course with censorship applied) and we can access those datacenters via INTRANET through home connections.
Home connection -----> Iran DC (node1 and node2 in different DCs) -------> internet (node3, node4 in DCs abroad)
I have some questions:

  1. Why not run it on port 80/443? How do I change ports?
  2. How can I route some IPs through node1 and node2 only? The nodes in the Iranian DCs are under heavy monitoring and I should route all Iran geo IPs through these nodes, so their traffic will look more normal.
  3. How do I set up and add pfSense as a client?
  4. Can I donate to you via BTC for this great effort, and how? Sorry for my bad English and thanks for supporting freedom.
reddec • Edited

I heard about it and would like to wish your country good luck in this hard situation!

  1. There are no restrictions. You may use the --port flag during configuration with tinc-boot.
  2. You may restrict connections from home to node 1, from node 1 to node 2 and so on by removing the ConnectTo parameter in the host file. However, you should also disable tinc-boot, because it will overwrite the configuration; in that case tinc-boot is used only as a configuration wizard. I should add that this only matters if you care about traffic detection, because tinc by itself can detect blocked connections (edges) and re-route traffic automatically.
  3. Not sure that I can help with it; however, you only need to allow UDP and TCP traffic for the port chosen in step 1.
  4. Donations are always welcome :)
  • ETH: 0xA4eD4fB5805a023816C9B55C52Ae056898b6BdBC
  • BTC: bc1qlj4v32rg8w0sgmtk8634uc36evj6jn3d5drnqy