Pipy is a programmable proxy that has already been used to build TCP/HTTP, MQTT, Dubbo, Redis, and Thrift proxies. Recently we were asked whether Pipy can be used to program a DNS proxy. The answer is a simple yes, and in this post we will demonstrate how Pipy can help you build a DNS proxy with just a little scripting knowledge.
By reading this article, you will learn:
- A basic introduction to DNS and how DNS resolution works
- How to implement a DNS proxy with a short script
- How to add intelligent route resolution to the proxy
DNS Introduction
The Domain Name System (DNS) is a hierarchical and distributed naming system for computers, services, and other resources in the Internet or other Internet Protocol (IP) networks. It associates various information with domain names assigned to each of the associated entities. Most prominently, it translates readily memorized domain names to the numerical IP addresses needed for locating and identifying computer services and devices with the underlying network protocols.[1] The Domain Name System has been an essential component of the functionality of the Internet since 1985.
-- excerpt from Wikipedia
A simplified version of the DNS processing flow:
- A DNS client (such as a browser, application, or device) sends a query for the domain name example.com.
- The DNS resolver receives the request, checks its local cache, and returns the cached record if it exists and has not expired.
- If the local cache is not hit, the DNS resolver starts at the DNS root servers and works its way down, first to the Top Level Domain (TLD) DNS server (in this case .com) and then on toward a server that can resolve example.com.
- The server that can resolve example.com is the authoritative DNS name server; the resolver queries it, receives the IP address and other related information, and returns the result to the client. Resolution is complete.
At some point you have probably needed to change DNS records to repoint a domain name, for example when switching environments or intercepting traffic; DNS is also often used as a means of service discovery. Usually the DNS server is maintained either by the service provider or by the company's internal network team, which makes modifying resolution records inconvenient. Moreover, because of DNS caching, each record carries a TTL and will not be refreshed until the cache expires, and neither a long nor a short TTL is ideal.
The introduction of DNS proxies can solve this problem while enabling more functionality.
Next, we will demonstrate how to use Pipy to implement a DNS proxy (more accurately, a combination of a proxy and a server) that answers DNS queries from custom records. We will also add a feature that returns different DNS records based on the client IP address, to achieve intelligent route resolution. The scripts used in the demonstration can be downloaded from here.
Solution
As shown in the figure above, the DNS proxy provides functionality similar to the original resolver. However, when the cache expires or misses, it first looks up custom resolution records: if a custom record exists, it is returned; if not, the proxy queries the upstream DNS server as in the original process.
Implementation
Before starting, let's capture some packets with Wireshark and look at the format of DNS messages. DNS query and response messages share the same format and consist of the following four parts:
- Header: Contains the ID, flags, and the number of question entries, answer entries, authority resource entries, and additional resource entries.
- Flags: Identify the message type, whether the name server is authoritative, whether recursion is requested, whether the message has been truncated, and the response status.
- Question (request) part: Contains the domain name to be resolved and the record type (A, AAAA, MX, TXT, etc.). Each label in the domain name is prefixed with its length.
- Answer (response) part: Contains the resource records for the queried domain name.
In the previous announcement, Pipy 0.70.0 is released!, it was highlighted that Pipy 0.70.0 ships with a DNS encoder/decoder. We will use it to decode DNS packets into the parts described above.
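To make these parts more concrete, here is a rough sketch of how a decoded query and response for example.com might look as JavaScript objects. The field names follow those used in the script and the custom records later in this post; the exact shape produced by Pipy's decoder may differ in detail.
// What DNS.decode() might return for the query (illustrative):
({
  id: 25868,                                      // transaction ID, echoed back in the response
  rd: 1,                                          // flag: recursion desired
  question: [{ name: 'example.com', type: 'A' }]
})
// ...and for the corresponding response (illustrative):
({
  id: 25868,
  qr: 1, aa: 1, rd: 1, ra: 1,                     // flags: response, authoritative, recursion desired/available
  rcode: 0,                                       // 0 = NOERROR
  question: [{ name: 'example.com', type: 'A' }],
  answer: [{ name: 'example.com', type: 'A', ttl: 60, rdata: '192.168.139.11' }]
})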
PipyJS Code
The script logic is very simple. For ease of reading, it is divided into several modules by function, and it implements parsing for several common record types, including A, AAAA, CNAME, MX, TXT, and NS.
├── cache.js #Cache
├── main.js #main entry script
├── records.js #Logic for customizing records
├── records.json #Custom record content
├── smart-line.js #Logic for smart line parsing
└── smart-line.json #Configuration of smart-line parsing
Here is part of the core code of main.js; the steps below correspond to the //1 to //7 annotations in the code:
1. First, use DNS.decode() to decode the data stream
2. Then find the queried domain name and type from the result
3. Query the cache
4. If the cache fails to hit, query custom records
5. Intelligent route resolution
6. Return the response
7. If steps 3 and 4 are both unsuccessful, request the upstream DNS server, then cache and return the response
.listen(5300, { protocol: 'udp' })
.replaceMessage(
msg => (
(query, res, record) => (
query = DNS.decode(msg.body), //1
query?.question?.[0]?.name && query?.question?.[0]?.type && ( //2
record = getDNS(query.question[0].type + '#' + query.question[0].name) //3
|| local.query(query.question[0].name, query.question[0].type) //4
),
record ? (
record = line.filter(__inbound.remoteAddress, record), //5
res = {},
res.qr = res.rd = res.ra = res.aa = 1,
res.id = query.id,
res.question = [{
'name': query.question[0].name,
'type': query.question[0].type
}],
record.status === 'deny' ? (
res.rcode = local.code.REFUSED
) : (
res.answer = record.rr
),
new Message(DNS.encode(res)) //6
) : (
_forward = true,
msg
)
)
)()
)
.branch(
() => _forward, $ => $
.connect(() => `${config.upstreamDNSServer}:53`, { protocol: 'udp' }) //7
.handleMessage(
msg => (
(res = DNS.decode(msg.body)) => (
res?.question?.[0]?.name && res?.question?.[0]?.type &&
!res?.rcode && (
setDNS(res.question[0].type + '#' + res.question[0].name,
{
rr: res.answer,
status: res.rcode == local.code.REFUSED ? 'deny' : null
}
)
)
)
)()
),
$ => $
)
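For reference, cache.js only needs to provide the getDNS() and setDNS() helpers used above (loaded in main.js via something like pipy.solve('cache.js')). Below is a minimal sketch of what it could look like, keyed by 'TYPE#name' with a per-entry expiry; the 30-second lifetime and the use of a plain object plus Date.now() are assumptions for illustration, and the real module may instead use Pipy's built-in caching utilities or a configurable TTL.
// cache.js (illustrative sketch, not the original module)
((
  lifetime = 30 * 1000,          // hypothetical cache lifetime in milliseconds
  cache = {},                    // key: 'TYPE#name' -> { record, expire }

  // return a cached record if present and not expired, otherwise null
  getDNS = key => (
    cache[key] && cache[key].expire > Date.now() ? cache[key].record : null
  ),

  // store a record together with its expiry time
  setDNS = (key, record) => (
    cache[key] = { record: record, expire: Date.now() + lifetime }
  )

) => ({ getDNS: getDNS, setDNS: setDNS }))()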
Custom Record
Below is the content of the custom records, which is similar in format to a DNS response. To support intelligent route resolution, some records carry label information: "labels": ["line1"]. A sketch of how these records might be looked up follows the listing.
[
{
"name": "example.com",
"type": "A",
"ttl": 60,
"rdata": "192.168.139.10",
"labels": ["line1"]
},
{
"name": "example.com",
"type": "A",
"ttl": 60,
"rdata": "192.168.139.11",
"labels": ["line2"]
},
...
{
"name": "example.com",
"type": "MX",
"ttl": 600,
"rdata": {
"preference": 10,
"exchange": "mail2.example.com"
}
},
{
"name": "example.com",
"type": "TXT",
"ttl": 600,
"rdata": "hi.pipy!"
},
{
"name": "example.com",
"type": "NS",
"ttl": 600,
"rdata": "ns1.example.com"
},
...
]
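For completeness, here is a rough sketch of how records.js might serve these entries through the local.query(name, type) call used in main.js: load records.json, match on name and type, and return the matches in the { rr, status } shape the main script expects, along with standard DNS response codes. The use of pipy.load()/JSON.decode() and the exact return shape are assumptions for illustration; the real module may handle more cases (wildcards, per-type defaults, and so on).
// records.js (illustrative sketch, not the original module)
((
  records = JSON.decode(pipy.load('records.json')),   // the custom records shown above

  // return all custom entries matching the queried name and type, or null
  query = (name, type) => (
    (matched = records.filter(r => r.name === name && r.type === type)) => (
      matched.length > 0 ? { rr: matched, status: null } : null
    )
  )(),

  // standard DNS response codes referenced by main.js
  code = { NOERROR: 0, REFUSED: 5 }

) => ({ query: query, code: code }))()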
Intelligent Route Resolution
The logic of intelligent route resolution is relatively simple: assign different route labels to different IP ranges, and when a record carries a label, return only the records whose label matches the client's range. A sketch of this filtering logic follows the configuration below.
{
"line1": [
"192.168.1.110/32"
],
"line2": [
"127.0.0.1/32"
]
}
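The line.filter(clientIP, record) call in main.js can be implemented roughly as sketched below: map the client address to a line label using the CIDR ranges above, then keep only the answers whose labels contain that line (answers without labels, and clients that match no line, are left untouched). This is an illustrative sketch with hand-rolled IPv4 CIDR matching; the helper names and the manual matching are assumptions, and the real module may rely on Pipy's own address-matching utilities instead.
// smart-line.js (illustrative sketch, not the original module)
((
  lines = JSON.decode(pipy.load('smart-line.json')),   // label -> list of CIDR ranges

  // convert a dotted IPv4 address to a 32-bit integer
  ip2int = ip => ip.split('.').reduce((n, part) => n * 256 + Number(part), 0),

  // true if addr falls inside the 'a.b.c.d/len' range
  inCIDR = (addr, cidr) => (
    ((base, len) => (
      base = ip2int(cidr.split('/')[0]),
      len = Number(cidr.split('/')[1]),
      len === 0 || ((ip2int(addr) ^ base) >>> (32 - len)) === 0
    ))()
  ),

  // find the line label whose ranges contain the client address, if any
  labelOf = addr => Object.keys(lines).find(
    label => lines[label].some(cidr => inCIDR(addr, cidr))
  ),

  // keep only the answers matching the client's line; pass everything through otherwise
  filter = (addr, record) => (
    (label = labelOf(addr)) => (
      label ? {
        status: record.status,
        rr: (record.rr || []).filter(r => !r.labels || r.labels.includes(label))
      } : record
    )
  )()

) => ({ filter: filter }))()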
Testing
Start the proxy:
$ pipy main.js
As shown in the configuration above, 127.0.0.1 is the address of the local loopback interface, 192.168.1.110 is the address of the local Ethernet interface, and the proxy listens on port 5300.
First, access the proxy via localhost, so that the client IP address seen by the proxy is 127.0.0.1. When querying the A record for example.com, the proxy directly returns 192.168.139.11, the record for route line2 that matches the local address.
$ dig @localhost -p 5300 a example.com
; <<>> DiG 9.10.6 <<>> @localhost -p 5300 a example.com
; (2 servers found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 25868
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;example.com. IN A
;; ANSWER SECTION:
example.com. 60 IN A 192.168.139.11
;; Query time: 0 msec
;; SERVER: 127.0.0.1#5300(127.0.0.1)
;; WHEN: Tue Dec 13 21:09:38 CST 2022
;; MSG SIZE rcvd: 56
Then access the proxy using 192.168.1.110; this time the client's address is 192.168.1.110, and the record returned is 192.168.139.10 for route line1.
$ dig @192.168.1.110 -p 5300 a example.com
; <<>> DiG 9.10.6 <<>> @192.168.1.110 -p 5300 a example.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 54165
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;example.com. IN A
;; ANSWER SECTION:
example.com. 60 IN A 192.168.139.10
;; Query time: 0 msec
;; SERVER: 192.168.1.110#5300(192.168.1.110)
;; WHEN: Tue Dec 13 21:12:37 CST 2022
;; MSG SIZE rcvd: 56
If we access the proxy from another machine, no route matches its address, so both records are returned.
$ dig @192.168.1.110 -p 5300 a example.com
; <<>> DiG 9.16.1-Ubuntu <<>> @192.168.1.110 -p 5300 a example.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 64873
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;example.com. IN A
;; ANSWER SECTION:
example.com. 60 IN A 192.168.139.10
example.com. 60 IN A 192.168.139.11
;; Query time: 0 msec
;; SERVER: 192.168.1.110#5300(192.168.1.110)
;; WHEN: Tue Dec 13 13:15:24 UTC 2022
;; MSG SIZE rcvd: 83
Because routes are only configured for the A records, other record types are unaffected.
$ dig @localhost -p 5300 mx example.com
; <<>> DiG 9.10.6 <<>> @localhost -p 5300 mx example.com
; (2 servers found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 33492
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;example.com. IN MX
;; ANSWER SECTION:
example.com. 600 IN MX 10 mail1.example.com.
example.com. 600 IN MX 10 mail2.example.com.
;; Query time: 0 msec
;; SERVER: 127.0.0.1#5300(127.0.0.1)
;; WHEN: Tue Dec 13 21:18:27 CST 2022
;; MSG SIZE rcvd: 117
Going Further
Those who have some understanding of Pipy may know about Repo mode; interested readers can refer to previously published articles on the Pipy Blog and the Pipy Reference Documentation. With Repo mode, all proxies (or DNS servers) on the hosts can obtain updates to the custom records from the Repo in real time and refresh their caches.
Due to space constraints, we will not discuss it in depth here. Those interested can give it a try as a way to practice and learn Pipy.
Conclusion
With this, Pipy has added yet another type of proxy to its list. DNS is everywhere, and precisely because of that, many problems can be solved at the DNS level.
If you have been following us, you may have read our earlier multi-cluster blog posts. In Kubernetes: Multi-cluster communication with Flomesh Service Mesh (Demo) we demonstrated cross-cluster service communication, where request traffic was easily scheduled to other clusters for processing. In that demo we accessed a service located in another Kubernetes cluster, yet to the caller the request was transparently forwarded to the right cluster. How was that done?
Here, a small trick was used. When the mesh's init container sets up iptables rules to intercept traffic, DNS traffic is also redirected to the DNS proxy implemented in the sidecar (listening on 127.0.0.153:5300), and business traffic is then steered via custom DNS records.