DEV Community

Vadim Ponomarev

A story about one DDoS attack and methods of protecting the Juniper routing engine

As part of my job I regularly deal with DDoS attacks on servers, but some time ago I faced a rather different attack for which I wasn’t prepared. The target was a Juniper MX80 router that maintains our BGP sessions and announces the data center’s subnets. The attackers were after a web resource hosted on one of our servers, but the attack left the whole data center without connectivity to the outside world. In this article I’ll share the details of the attack, the bench tests that reproduced it, and the methods we used to fight such unwelcome encounters.

Story of the attack

Historically, all incoming UDP traffic destined for our network has been blocked at the router. The first wave of the attack (17:22) was exactly that: UDP traffic. Here’s the graph of unicast packets on the router’s uplink port:

Unicast packets from the router

and unicast packets graph from the port of the switch connected to that router:

Unicast packets from the switch

These graphs show that the traffic did not pass the router’s filter. The unicast packet rate on the router’s uplink grew by another 400 thousand, and this phase lasted until 17:33. The attackers then escalated, adding a TCP SYN flood aimed at the server and at the router itself on top of the UDP attack. As the graph shows, the router became so overloaded that it stopped answering SNMP polls from Zabbix. After the SYN wave hit the router’s ports, the BGP peering sessions began to fail (we use three uplinks, each receiving a full view for IPv4 and IPv6), and the logs filled with these grim records:

Jun 27 17:35:07 ROUTER rpd[1408]: bgp_hold_timeout:4035: NOTIFICATION sent to ip.ip.ip.ip (External AS 1111): code 4 (Hold Timer Expired Error), Reason: holdtime expired for ip.ip.ip.ip (External AS 1111), socket buffer sndcc: 19 rcvcc: 0 TCP state: 4, snd_una: 1200215741 snd_nxt: 1200215760 snd_wnd: 15358 rcv_nxt: 4074908977 rcv_adv: 4074925361, hold timer out 90s, hold timer remain 0s
Jun 27 17:35:33 ROUTER rpd[1408]: bgp_hold_timeout:4035: NOTIFICATION sent to ip.ip.ip.ip (External AS 1111): code 4 (Hold Timer Expired Error), Reason: holdtime expired for ip.ip.ip.ip (External AS 1111), socket buffer sndcc: 38 rcvcc: 0 TCP state: 4, snd_una: 244521089 snd_nxt: 244521108 snd_wnd: 16251 rcv_nxt: 3829118827 rcv_adv: 3829135211, hold timer out 90s, hold timer remain 0s
Jun 27 17:37:26 ROUTER rpd[1408]: bgp_hold_timeout:4035: NOTIFICATION sent to ip.ip.ip.ip (External AS 1111): code 4 (Hold Timer Expired Error), Reason: holdtime expired for ip.ip.ip.ip (External AS 1111), socket buffer sndcc: 19 rcvcc: 0 TCP state: 4, snd_una: 1840501056 snd_nxt: 1840501075 snd_wnd: 16384 rcv_nxt: 1457490093 rcv_adv: 1457506477, hold timer out 90s, hold timer remain 0s

As it turned out after the attack, this TCP SYN wave drove up the load on the router’s routing engine; all BGP sessions went down, and the router could not recover on its own. The attack lasted only a few minutes, but under the extra load the router was unable to process the full view from three uplinks at once, and the sessions kept dropping again. To restore service we had to activate each BGP session separately. The rest of the attack targeted only the server.
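One way to activate sessions one at a time from the Junos CLI is to deactivate all but one BGP group, let that session converge, and then bring the others back; a sketch with hypothetical group names:

```
[edit]
user@ROUTER# deactivate protocols bgp group UPLINK-2
user@ROUTER# deactivate protocols bgp group UPLINK-3
user@ROUTER# commit
```

Once the remaining session has received the full view, `activate protocols bgp group UPLINK-2`, commit, and repeat for the last group.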

Bench tests and attack reproduction

A Juniper MX80 with the same firmware version as the production router was used for testing. A server with a 10 Gb NIC running Ubuntu Server plus Quagga emulated the attacker. To generate large volumes of traffic we used a script that called the hping3 utility. To observe the devastating effect of traffic spikes, the script generated traffic in intervals: 30 seconds of attack followed by 2 minutes of quiet. For the purity of the experiment we also established a BGP session between the server and the router. On the originally attacked router the BGP and SSH ports were open on all interfaces/addresses and were not filtered; the same configuration was used on the bench.
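The original script is not reproduced here; a minimal sketch of the idea, assuming hping3’s standard flags (`-S` for SYN, `-p` for the destination port, `--flood` for maximum rate) and coreutils `timeout` for the 30 s / 2 min cycle:

```sh
#!/bin/sh
# Hypothetical reconstruction of the bench flood script:
# 30 seconds of TCP SYN flood at the BGP port, then 2 minutes of quiet.
TARGET=9.4.8.1   # router address from the bench setup
while true; do
    timeout 30 hping3 --flood -S -p 179 "$TARGET"
    sleep 120
done
```

Run as root on the attacking server; with `--flood`, hping3 sends as fast as possible without waiting for replies.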

The first step was a TCP SYN flood against the router’s BGP port (179). The source IP address matched the peer address in the router’s config; source spoofing could not be ruled out, since our uplinks do not have uRPF enabled. The BGP session was established. Quagga stated:

BGP router identifier 9.4.8.2, local AS number 9123
RIB entries 3, using 288 bytes of memory
Peers 1, using 4560 bytes of memory

Neighbor        V    AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
9.4.8.1       4  1234    1633    2000        0    0    0 00:59:56        0

Total number of neighbors 1

Juniper stated:

user@MX80> show bgp summary 
Groups: 1 Peers: 1 Down peers: 0
Table          Tot Paths  Act Paths Suppressed    History Damp State    Pending
inet.0               
                       2          1          0          0          0          0
Peer                     AS      InPkt     OutPkt    OutQ   Flaps Last Up/Dwn State|#Active/Received/Accepted/Damped...
9.4.8.2              4567        155        201       0     111       59:14 1/2/2/0              0/0/0/0

From the start of the attack the router received ~1.2 Mpps of traffic:

Packets per second test graph

Or 380Mbit/s:

Traffic test graph

The load on the router’s RE and PFE CPUs grows:

CPU load on the router

After the 90-second hold timeout the BGP session goes down and stays down:

Jul 4 13:54:01 MX80 rpd[1407]: bgp_hold_timeout:4035: NOTIFICATION sent to 9.4.8.2 (External AS 4567): code 4 (Hold Timer Expired Error), Reason: holdtime expired for 9.4.8.2 (External AS 4567), socket buffer sndcc: 38 rcvcc: 0 TCP state: 4, snd_una: 3523671294 snd_nxt: 3523671313 snd_wnd: 114 rcv_nxt: 1556791630 rcv_adv: 1556808014, hold timer out 90s, hold timer remain 0s

The router is busy processing TCP SYNs on the BGP port and cannot re-establish the BGP session. The port is flooded with packets:

user@MX80> monitor traffic interface ge-1/0/0 count 20
13:55:39.219155 In IP 9.4.8.2.2097 > 9.4.8.1.bgp: S 1443462200:1443462200(0) win 512
13:55:39.219169 In IP 9.4.8.2.27095 > 9.4.8.1.bgp: S 295677290:295677290(0) win 512
13:55:39.219177 In IP 9.4.8.2.30114 > 9.4.8.1.bgp: S 380995480:380995480(0) win 512
13:55:39.219184 In IP 9.4.8.2.57280 > 9.4.8.1.bgp: S 814209218:814209218(0) win 512
13:55:39.219192 In IP 9.4.8.2.2731 > 9.4.8.1.bgp: S 131350916:131350916(0) win 512
13:55:39.219199 In IP 9.4.8.2.2261 > 9.4.8.1.bgp: S 2145330024:2145330024(0) win 512
13:55:39.219206 In IP 9.4.8.2.z39.50 > 9.4.8.1.bgp: S 1238175350:1238175350(0) win 512
13:55:39.219213 In IP 9.4.8.2.2098 > 9.4.8.1.bgp: S 1378645261:1378645261(0) win 512
13:55:39.219220 In IP 9.4.8.2.30115 > 9.4.8.1.bgp: S 1925718835:1925718835(0) win 512
13:55:39.219227 In IP 9.4.8.2.27096 > 9.4.8.1.bgp: S 286229321:286229321(0) win 512
13:55:39.219235 In IP 9.4.8.2.2732 > 9.4.8.1.bgp: S 1469740166:1469740166(0) win 512
13:55:39.219242 In IP 9.4.8.2.57281 > 9.4.8.1.bgp: S 1179645542:1179645542(0) win 512
13:55:39.219254 In IP 9.4.8.2.2262 > 9.4.8.1.bgp: S 1507663512:1507663512(0) win 512
13:55:39.219262 In IP 9.4.8.2.914c/g > 9.4.8.1.bgp: S 1219404184:1219404184(0) win 512
13:55:39.219269 In IP 9.4.8.2.2099 > 9.4.8.1.bgp: S 577616492:577616492(0) win 512
13:55:39.219276 In IP 9.4.8.2.267 > 9.4.8.1.bgp: S 1257310851:1257310851(0) win 512
13:55:39.219283 In IP 9.4.8.2.27153 > 9.4.8.1.bgp: S 1965427542:1965427542(0) win 512
13:55:39.219291 In IP 9.4.8.2.30172 > 9.4.8.1.bgp: S 1446880235:1446880235(0) win 512
13:55:39.219297 In IP 9.4.8.2.57338 > 9.4.8.1.bgp: S 206377149:206377149(0) win 512
13:55:39.219305 In IP 9.4.8.2.2789 > 9.4.8.1.bgp: S 838483872:838483872(0) win 512

The second step was again a TCP SYN attack on the router’s BGP port (179), with one difference: the source IP address was chosen randomly and did not match the peer address in the router’s config. The attack had the same effect on the router. I won’t pad the article with nearly identical logs, but the graph is worth seeing:

CPU utilisation

The graph shows the moment the attack started. The BGP session once again failed and stayed down.

The concept of building RE router protection

Juniper’s architecture divides the work between the routing engine (RE) and the packet forwarding engine (PFE). The PFE handles all transit traffic, filtering and routing it according to a scheme prepared in advance. The RE handles traffic addressed to the router itself (traceroute, ping, ssh), processes packets for a number of services (BGP, NTP, DNS, SNMP), and builds the filtering and routing schemes for the PFE.
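Both CPU loads discussed below can be observed with standard Junos operational commands: `show chassis routing-engine` reports RE CPU utilization, and `show chassis fpc` reports per-FPC (PFE) CPU utilization:

```
user@MX80> show chassis routing-engine
user@MX80> show chassis fpc
```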

The main idea of protecting the router is to filter all traffic destined for the RE. A filter shifts the load created by a DDoS attack from the RE CPU to the PFE CPU, leaving the RE to handle only legitimate packets instead of wasting time on junk traffic. To build this defense, we first need to define what to filter. The scheme for the IPv4 filters is based on Douglas Hanks Jr.’s Day One book, Securing the Routing Engine on M, MX, and T Series. Here’s the scheme I used:

Filtering for IPv4:

  • BGP — filter packets by source and destination IP; the source may be any IP from the BGP neighbor list. Allow only established TCP connections, so the filter drops every SYN arriving at the BGP port and sessions can only be initiated by our device (the BGP neighbor must operate in passive mode).
  • TACACS+ — filter packets by source and destination IP; the source may only be an address from the internal network. Limit the bandwidth to 1Mb/s.
  • SNMP — filter packets by source and destination IP; the source may be any IP from the snmp client lists stored in the config.
  • SSH — filter packets by destination IP only; the source may be any address, since emergency access to the device may be needed from any network. Limit the bandwidth to 5Mb/s.
  • NTP — filter packets by source and destination IP; the source may be any IP from the configured NTP server list. Limit the bandwidth to 1Mb/s (later reduced to 512Kb/s).
  • DNS — filter packets by source and destination IP; the source may be any IP from the configured DNS server list. Limit the bandwidth to 1Mb/s.
  • ICMP — filter packets, allowing only those addressed to the router’s own addresses. Limit the bandwidth to 5Mb/s (later reduced to 1Mb/s).
  • TRACEROUTE — filter packets, allowing only packets with TTL = 1 addressed to the router’s own addresses. Limit the bandwidth to 1Mb/s.

For IPv6, in my case, filtering was applied only to BGP, NTP, ICMP, DNS and traceroute. The only real difference concerns ICMP traffic, because ICMPv6 is an integral part of IPv6. The remaining protocols did not use IPv6 addressing.

Configuration howto

Junos has a handy tool for writing filters: prefix-list, which dynamically builds lists of IP addresses/prefixes for use in filters. For example, the following construct creates a list of the IPv4 addresses of all BGP neighbors present in the configuration:

prefix-list BGP-neighbors-v4 {
         apply-path "protocols bgp group <*> neighbor <*.*>";
}

With the list defined, the expanded result looks like this:

> show configuration policy-options prefix-list BGP-neighbors-v4 | display inheritance
##
## apply-path was expanded to:
## 1.1.1.1/32;
## 2.2.2.2/32;
## 3.3.3.3/32;
##
apply-path "protocols bgp group <*> neighbor <*.*>";

Define dynamic prefix lists for all the filters:

show config
/* List of all ipv4 addresses of BGP neighbours */
prefix-list BGP-neighbors-v4 {
    apply-path "protocols bgp group <*> neighbor <*.*>";
}
/* List of all ipv6 addresses of BGP neighbours  */
prefix-list BGP-neighbors-v6 {
    apply-path "protocols bgp group <*> neighbor <*:*>";
}
/* List of all ipv4 of NTP servers  */
prefix-list NTP-servers-v4 {
    apply-path "system ntp server <*.*>";
}
/* List of all ipv6 of NTP servers  */
prefix-list NTP-servers-v6 {
    apply-path "system ntp server <*:*>";
}
/* List of all ipv4 addresses assigned to router  */
prefix-list LOCALS-v4 {
    apply-path "interfaces <*> unit <*> family inet address <*>";
}
/* List of all ipv6 addresses assigned to router  */
prefix-list LOCALS-v6 {
    apply-path "interfaces <*> unit <*> family inet6 address <*>";
}
/* List of all ipv4 addresses of SNMP clients */
prefix-list SNMP-clients {
    apply-path "snmp client-list <*> <*>";
}
prefix-list SNMP-community-clients {
    apply-path "snmp community <*> clients <*>";
}
/* List of all TACACS+ servers */
prefix-list TACPLUS-servers {
    apply-path "system tacplus-server <*>";
}
/* List of router addresses that belong to internal network  */
prefix-list INTERNAL-locals {
    /* In my case the best option is to simply state the address  */
    192.168.0.1/32;
}
/* List of router addresses on the control interface, for SSH access */
prefix-list MGMT-locals {
    apply-path "interfaces fxp0 unit 0 family inet address <*>";
}
/* Private networks  */
prefix-list rfc1918 {
    10.0.0.0/8;
    172.16.0.0/12;
    192.168.0.0/16;
}
/* Loopback  */
prefix-list localhost-v6 {
    ::1/128;
}
prefix-list localhost-v4 {
    127.0.0.0/8;
}
/* List of all ipv4 BGP local addresses */
prefix-list BGP-locals-v4 {
    apply-path "protocols bgp group <*> neighbor <*.*> local-address <*.*>";
}
/* List of all ipv6 BGP local addresses  */
prefix-list BGP-locals-v6 {
    apply-path "protocols bgp group <*> neighbor <*:*> local-address <*:*>";
}  
/* List of all ipv4 of DNS servers  */                                     
prefix-list DNS-servers-v4 {
    apply-path "system name-server <*.*>";
}
/* List of all ipv6 of DNS servers */      
prefix-list DNS-servers-v6 {
    apply-path "system name-server <*:*>";
}

Create policers for the purpose of limiting the bandwidth:

show config
/* 1Mb bandwidth */
policer management-1m {
    apply-flags omit;
    if-exceeding {
        bandwidth-limit 1m;
        burst-size-limit 625k;
    }
    /* drop all that does not fit */
    then discard;
}
/* 5Mb bandwidth */
policer management-5m {
    apply-flags omit;
    if-exceeding {
        bandwidth-limit 5m;
        burst-size-limit 625k;
    }
    /* drop all that does not fit */
    then discard;
}
/* 512Kb bandwidth */
policer management-512k {
    apply-flags omit;
    if-exceeding {
        bandwidth-limit 512k;
        burst-size-limit 25k;
    }
    /* drop all that does not fit */
    then discard;
}

I am also sharing the final filter configuration, ready to copy and paste (the NTP and ICMP bandwidth limits were lowered for the reasons described in the testing section of this article). The IPv4 filter config:

show config
/* Filtering BGP traffic */
filter accept-bgp {
    interface-specific;
    term accept-bgp {
        from {
            source-prefix-list {
                BGP-neighbors-v4;
            }
            destination-prefix-list {
                BGP-locals-v4;
            }
            /* BGP neighbour passive mode. BGP session will only be established by our device. */
            tcp-established;
            protocol tcp;
            port bgp;
        }
        then {
            count accept-bgp;
            accept;
        }
    }
}
/* Filtering SSH traffic */
filter accept-ssh {
    apply-flags omit;
    term accept-ssh {
        from {
            destination-prefix-list {
                MGMT-locals;
            }
            protocol tcp;
            destination-port ssh;
        }
        then {
            /* Limiting bandwidth */
            policer management-5m;
            count accept-ssh;
            accept;
        }
    }
}
/* Filtering SNMP traffic */
filter accept-snmp {
    apply-flags omit;
    term accept-snmp {
        from {
            source-prefix-list {
                SNMP-clients;           
                SNMP-community-clients;
            }
            destination-prefix-list {
                /* connect only to address within internal network */
                INTERNAL-locals;
            }
            protocol udp;
            destination-port [ snmp snmptrap ];
        }
        then {
            count accept-snmp;
            accept;
        }
    }
}
/* Filtering ICMP traffic */
filter accept-icmp {
    apply-flags omit;
    /* Drop fragmented ICMP packets */
    term discard-icmp-fragments {
        from {
            is-fragment;
            protocol icmp;
        }
        then {
            count discard-icmp-fragments;
            discard;
        }
    }
    term accept-icmp {
        from {
            protocol icmp;
            icmp-type [ echo-reply echo-request time-exceeded unreachable source-quench router-advertisement parameter-problem ];
        }
        then {
            /* Limiting bandwidth */
            policer management-1m;
            count accept-icmp;
            accept;
        }
    }
}
/* Filtering traceroute traffic */
filter accept-traceroute {
    apply-flags omit;
    term accept-traceroute-udp {
        from {
            destination-prefix-list {
                LOCALS-v4;
            }
            protocol udp;
            /* allow only if TTL = 1 */
            ttl 1;
            destination-port 33434-33450;
        }                               
        then {
            /* Limiting bandwidth */
            policer management-1m;
            count accept-traceroute-udp;
            accept;
        }
    }
    term accept-traceroute-icmp {
        from {
            destination-prefix-list {
                LOCALS-v4;
            }
            protocol icmp;
            /* allow only if TTL = 1 */
            ttl 1;
            icmp-type [ echo-request timestamp time-exceeded ];
        }
        then {
            /* Limiting bandwidth */
            policer management-1m;
            count accept-traceroute-icmp;
            accept;
        }
    }
    term accept-traceroute-tcp {
        from {
            destination-prefix-list {
                LOCALS-v4;
            }
            protocol tcp;
            /* allow only if TTL = 1 */
            ttl 1;
        }
        then {
            /* Limiting bandwidth */
            policer management-1m;
            count accept-traceroute-tcp;
            accept;
        }
    }
}
/* Filtering DNS traffic */
filter accept-dns {
    apply-flags omit;
    term accept-dns {
        from {
            source-prefix-list {
                DNS-servers-v4;
            }
            destination-prefix-list {
                LOCALS-v4;
            }
            protocol udp;
            source-port 53;
        }                               
        then {
            /* Limiting bandwidth */
            policer management-1m;
            count accept-dns;
            accept;
        }
    }
}
/* Filter for dropping packets that did not pass all verifications */
filter discard-all {
    apply-flags omit;
    term discard-ip-options {
        from {
            ip-options any;
        }
        then {
            /* Packet counter to gather stats */
            count discard-ip-options;
            log;
            discard;
        }
    }                                   
    term discard-TTL_1-unknown {
        from {
            ttl 1;
        }
        then {
            /* Packet counter to gather stats */
            count discard-TTL_1-unknown;
            log;
            discard;
        }
    }
    term discard-tcp {
        from {
            protocol tcp;
        }
        then {
            /* Packet counter to gather stats */
            count discard-tcp;
            log;
            discard;
        }
    }
    term discard-udp {
        from {
            protocol udp;
        }
        then {
            /* Packet counter to gather stats */
            count discard-udp;
            log;
            discard;
        }
    }
    term discard-icmp {
        from {
            destination-prefix-list {
                LOCALS-v4;
            }
            protocol icmp;
        }
        then {
             /* Packet counter to gather stats */
            count discard-icmp;
            log;
            discard;
        }
    }
    term discard-unknown {
        then {
             /* Packet counter to gather stats */
            count discard-unknown;
            log;
            discard;
        }
    }
}
/* Filtering TACACS+ traffic */
filter accept-tacacs {                  
    apply-flags omit;
    term accept-tacacs {
        from {
            source-prefix-list {
                TACPLUS-servers;
            }
            destination-prefix-list {
                INTERNAL-locals;
            }
            protocol [ tcp udp ];
            source-port [ tacacs tacacs-ds ];
            tcp-established;
        }
        then {
            /* Limiting bandwidth */
            policer management-1m;
            count accept-tacacs;
            accept;
        }
    }
}
/* Filtering NTP traffic */
filter accept-ntp {
    apply-flags omit;
    term accept-ntp {
        from {
            source-prefix-list {
                NTP-servers-v4;
                localhost-v4;
            }
            destination-prefix-list {
                localhost-v4;
                LOCALS-v4;
            }
            protocol udp;
            destination-port ntp;
        }
        then {
            /* Limiting bandwidth */
            policer management-512k;
            count accept-ntp;
            accept;
        }
    }
}
/* Parent filter to simplify filtration */
filter accept-common-services {
    term protect-TRACEROUTE {
        filter accept-traceroute;
    }
    term protect-ICMP {
        filter accept-icmp;             
    }
    term protect-SSH {
        filter accept-ssh;
    }
    term protect-SNMP {
        filter accept-snmp;
    }
    term protect-NTP {
        filter accept-ntp;
    }
    term protect-DNS {
        filter accept-dns;
    }
    term protect-TACACS {
        filter accept-tacacs;
    }
}

Similar filters for IPv6:

show config
/* Filtering BGP traffic */
filter accept-v6-bgp {
    interface-specific;
    term accept-v6-bgp {
        from {
            source-prefix-list {
                BGP-neighbors-v6;
            }
            destination-prefix-list {
                BGP-locals-v6;
            }
            tcp-established;
            next-header tcp;
            port bgp;
        }
        then {
            count accept-v6-bgp;
            accept;
        }
    }
}
/* Filtering ICMP traffic */
filter accept-v6-icmp {
    apply-flags omit;
    term accept-v6-icmp {
        from {
            next-header icmp6;
            /* match by ICMP type only, without address restrictions, since ICMPv6 is an integral part of IPv6 */
            icmp-type [ echo-reply echo-request time-exceeded router-advertisement parameter-problem destination-unreachable packet-too-big router-solicit neighbor-solicit neighbor-advertisement redirect ];
        }
        then {
            policer management-1m;
            count accept-v6-icmp;
            accept;
        }
    }
}
/* Filtering traceroute traffic */
filter accept-v6-traceroute {
    apply-flags omit;
    term accept-v6-traceroute-udp {
        from {
            destination-prefix-list {
                LOCALS-v6;
            }
            next-header udp;
            destination-port 33434-33450;
            hop-limit 1;
        }
        then {
            policer management-1m;
            count accept-v6-traceroute-udp;
            accept;
        }                               
    }
    term accept-v6-traceroute-tcp {
        from {
            destination-prefix-list {
                LOCALS-v6;
            }
            next-header tcp;
            hop-limit 1;
        }
        then {
            policer management-1m;
            count accept-v6-traceroute-tcp;
            accept;
        }
    }
    term accept-v6-traceroute-icmp {
        from {
            destination-prefix-list {
                LOCALS-v6;
            }
            next-header icmp6;
            icmp-type [ echo-reply echo-request router-advertisement parameter-problem destination-unreachable packet-too-big router-solicit neighbor-solicit neighbor-advertisement redirect ];
            hop-limit 1;
        }
        then {
            policer management-1m;
            count accept-v6-traceroute-icmp;
            accept;
        }
    }
}
/* Filtering DNS traffic */
filter accept-v6-dns {
    apply-flags omit;
    term accept-v6-dns {
        from {
            source-prefix-list {
                DNS-servers-v6;
            }
            destination-prefix-list {
                LOCALS-v6;
            }
            next-header udp;
            source-port 53;
        }
        then {
            policer management-1m;
            count accept-v6-dns;
            accept;                     
        }
    }
}
/* Filtering NTP traffic */
filter accept-v6-ntp {
    apply-flags omit;
    term accept-v6-ntp {
        from {
            source-prefix-list {
                NTP-servers-v6;
                localhost-v6;
            }
            destination-prefix-list {
                localhost-v6;
                LOCALS-v6;
            }
            next-header udp;
            destination-port ntp;
        }
        then {
            policer management-512k;
            count accept-v6-ntp;
            accept;
        }
    }
}
/* Filter that drops all other packets */
filter discard-v6-all {
    apply-flags omit;
    term discard-v6-tcp {
        from {
            next-header tcp;
        }
        then {
            count discard-v6-tcp;
            log;
            discard;
        }
    }
    term discard-v6-udp {
        from {
            next-header udp;
        }
        then {
            count discard-v6-udp;
            log;
            discard;
        }
    }
    term discard-v6-icmp {
        from {                         
            destination-prefix-list {
                LOCALS-v6;
            }
            next-header icmp6;
        }
        then {
            count discard-v6-icmp;
            log;
            discard;
        }
    }
    term discard-v6-unknown {
        then {
            count discard-v6-unknown;
            log;
            discard;
        }
    }
}
/* Parent filter to simplify filtration */
filter accept-v6-common-services {
    term protect-TRACEROUTE {
        filter accept-v6-traceroute;
    }
    term protect-ICMP {
        filter accept-v6-icmp;
    }
    term protect-NTP {
        filter accept-v6-ntp;
    }
    term protect-DNS {
        filter accept-v6-dns;
    }
}

Then the filters need to be applied to the service interface lo0.0. In Junos, traffic between the PFE and the RE passes through this interface. Configuration:

show config
lo0 {
    unit 0 {
        family inet {
            filter {
                input-list [ accept-bgp accept-common-services discard-all ];
            }
        }
        family inet6 {
            filter {
                input-list [ accept-v6-bgp accept-v6-common-services discard-v6-all ];
            }
        }
    }
}

Note that the order of the filters in the interface’s input-list is critical: each packet is evaluated by the filters strictly from left to right.

Filters testing

After applying the filters I ran a series of tests on the same bench, clearing the firewall counters after each run. Normal load (with no attack) is shown on the graphs between 11:06–11:08.
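Counter cleanup and inspection were done with standard Junos operational commands; the filter instance name matches the counter outputs below:

```
user@MX80> clear firewall all
user@MX80> show firewall filter lo0.0-i
```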
Graph of pps for the whole testing period:

Packets per second graph

CPU graph for the whole testing period:

CPU graph

The first test was an ICMP flood with a bandwidth of 5Mb/s (visible on the graphs between 10:21–10:24). The filter counters and the graph show the traffic being cut to the configured bandwidth, but even the allowed share was enough to raise the load, which is why the limit was later reduced to 1Mb/s. The counters show the following:

Filter: lo0.0-i
Counters:
Name                                                Bytes              Packets
accept-bgp-lo0.0-i                                      0                    0
accept-icmp-lo0.0-i                              47225584              1686628
accept-ntp-lo0.0-i                                    152                    2
accept-snmp-lo0.0-i                                174681                 2306
accept-ssh-lo0.0-i                                  38952                  702
accept-traceroute-icmp-lo0.0-i                          0                    0
accept-traceroute-tcp-lo0.0-i                         841                   13
accept-traceroute-udp-lo0.0-i                           0                    0
discard-TTL_1-unknown-lo0.0-i                           0                    0
discard-icmp-lo0.0-i                                    0                    0
discard-icmp-fragments-lo0.0-i                          0                    0
discard-ip-options-lo0.0-i                              0                    0
discard-tcp-lo0.0-i                                   780                   13
discard-udp-lo0.0-i                                 18743                  133
discard-unknown-lo0.0-i                                 0                    0
Policers:
Name                                                Bytes              Packets
management-1m-accept-ntp-lo0.0-i                        0                    0
management-1m-accept-traceroute-icmp-lo0.0-i            0                    0
management-1m-accept-traceroute-tcp-lo0.0-i             0                    0
management-1m-accept-traceroute-udp-lo0.0-i             0                    0
management-5m-accept-icmp-lo0.0-i               933705892             33346639
management-5m-accept-ssh-lo0.0-i                        0                    0

Here’s the result of the ICMP flood retest with a bandwidth of 1Mb/s (visible on the graphs between 10:24–10:27). The load on the router’s RE dropped from 19% to 10%, and the load on the PFE dropped to 30%. The counters show the following:

Filter: lo0.0-i
Counters:
Name                                                Bytes              Packets
accept-bgp-lo0.0-i                                      0                    0
accept-icmp-lo0.0-i                               6461448               230766
accept-ntp-lo0.0-i                                      0                    0
accept-snmp-lo0.0-i                                113433                 1497
accept-ssh-lo0.0-i                                  33780                  609
accept-traceroute-icmp-lo0.0-i                          0                    0
accept-traceroute-tcp-lo0.0-i                           0                    0
accept-traceroute-udp-lo0.0-i                           0                    0
discard-TTL_1-unknown-lo0.0-i                           0                    0
discard-icmp-lo0.0-i                                    0                    0
discard-icmp-fragments-lo0.0-i                          0                    0
discard-ip-options-lo0.0-i                              0                    0
discard-tcp-lo0.0-i                                   360                    6
discard-udp-lo0.0-i                                 12394                   85
discard-unknown-lo0.0-i                                 0                    0
Policers:
Name                                                Bytes              Packets
management-1m-accept-icmp-lo0.0-i               665335496             23761982
management-1m-accept-ntp-lo0.0-i                        0                    0
management-1m-accept-traceroute-icmp-lo0.0-i            0                    0
management-1m-accept-traceroute-tcp-lo0.0-i             0                    0
management-1m-accept-traceroute-udp-lo0.0-i             0                    0
management-5m-accept-ssh-lo0.0-i                        0                    0
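The article’s full filter configuration isn’t reproduced here, but a term of this shape — ICMP accepted, yet rate-limited by a 1 Mb/s policer, with per-interface counters produced by `interface-specific` — would yield counter names like the ones above. A minimal sketch (the burst size is an assumption):

```
set firewall policer management-1m if-exceeding bandwidth-limit 1m
set firewall policer management-1m if-exceeding burst-size-limit 62500
set firewall policer management-1m then discard
set firewall family inet filter lo0.0-i interface-specific
set firewall family inet filter lo0.0-i term accept-icmp from protocol icmp
set firewall family inet filter lo0.0-i term accept-icmp then policer management-1m
set firewall family inet filter lo0.0-i term accept-icmp then count accept-icmp
set firewall family inet filter lo0.0-i term accept-icmp then accept
```

With `interface-specific` set, Junos appends the interface and direction to each counter and policer instance, which is why the output shows `accept-icmp-lo0.0-i` and `management-1m-accept-icmp-lo0.0-i`.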

Next, a flood test against the router’s BGP port was performed from an IP address that is not listed in the configuration (see the graphs between 10:29 and 10:36). As the counters show, all of the flood hit the discard-tcp term of the RE filter, which only increased the load on the PFE; the RE load remained unchanged. The counters show the following:

Filter: lo0.0-i
Counters:
Name                                                Bytes              Packets
accept-bgp-lo0.0-i                                    824                   26
accept-icmp-lo0.0-i                                     0                    0
accept-ntp-lo0.0-i                                      0                    0
accept-snmp-lo0.0-i                                560615                 7401
accept-ssh-lo0.0-i                                  33972                  585
accept-traceroute-icmp-lo0.0-i                          0                    0
accept-traceroute-tcp-lo0.0-i                        1088                   18
accept-traceroute-udp-lo0.0-i                           0                    0
discard-TTL_1-unknown-lo0.0-i                           0                    0
discard-icmp-lo0.0-i                                    0                    0
discard-icmp-fragments-lo0.0-i                          0                    0
discard-ip-options-lo0.0-i                              0                    0
discard-tcp-lo0.0-i                           12250785600            306269640
discard-udp-lo0.0-i                                 63533                  441
discard-unknown-lo0.0-i                                 0                    0
Policers:
Name                                                Bytes              Packets
management-1m-accept-icmp-lo0.0-i                       0                    0
management-1m-accept-ntp-lo0.0-i                        0                    0
management-1m-accept-traceroute-icmp-lo0.0-i            0                    0
management-1m-accept-traceroute-tcp-lo0.0-i             0                    0
management-1m-accept-traceroute-udp-lo0.0-i             0                    0
management-5m-accept-ssh-lo0.0-i                        0                    0

The session doesn’t fail:

user@MX80# run show bgp summary                
Groups: 1 Peers: 1 Down peers: 0
Table          Tot Paths  Act Paths Suppressed    History Damp State    Pending
inet.0               
                       2          1          0          0          0          0
Peer                     AS      InPkt     OutPkt    OutQ   Flaps Last Up/Dwn State|#Active/Received/Accepted/Damped...
9.4.8.2              4567         21         22       0      76        8:49 1/2/2/0              0/0/0/0
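This behavior — a flood toward the BGP port from an unknown source falling through to discard-tcp while the established session keeps working — comes from term ordering: BGP is accepted only from known peers, and everything else TCP is discarded further down the filter. A sketch under the assumption that the peers live in a prefix-list (the name `bgp-peers` and the placeholder address are illustrative):

```
set policy-options prefix-list bgp-peers ip.ip.ip.ip/32
set firewall family inet filter lo0.0-i term accept-bgp from source-prefix-list bgp-peers
set firewall family inet filter lo0.0-i term accept-bgp from protocol tcp
set firewall family inet filter lo0.0-i term accept-bgp from port bgp
set firewall family inet filter lo0.0-i term accept-bgp then count accept-bgp
set firewall family inet filter lo0.0-i term accept-bgp then accept
set firewall family inet filter lo0.0-i term discard-tcp from protocol tcp
set firewall family inet filter lo0.0-i term discard-tcp then count discard-tcp
set firewall family inet filter lo0.0-i term discard-tcp then discard
```

Since terms are evaluated top to bottom, TCP from a spoofed or unknown source never reaches the accept-bgp action and is dropped in hardware on the PFE.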

The fourth test was a UDP flood (see the graphs between 10:41 and 10:46) against the NTP port (in the filter, this port is open only to the servers listed in the router configuration). According to the graph, only the PFE load increased, by up to 28%. Counters:

Filter: lo0.0-i
Counters:
Name                                                Bytes              Packets
accept-bgp-lo0.0-i                                      0                    0
accept-icmp-lo0.0-i                                     0                    0
accept-ntp-lo0.0-i                                      0                    0
accept-snmp-lo0.0-i                                329059                 4344
accept-ssh-lo0.0-i                                  22000                  388
accept-traceroute-icmp-lo0.0-i                          0                    0
accept-traceroute-tcp-lo0.0-i                         615                   10
accept-traceroute-udp-lo0.0-i                           0                    0
discard-TTL_1-unknown-lo0.0-i                           0                    0
discard-icmp-lo0.0-i                                    0                    0
discard-icmp-fragments-lo0.0-i                          0                    0
discard-ip-options-lo0.0-i                              0                    0
discard-tcp-lo0.0-i                                     0                    0
discard-udp-lo0.0-i                            1938171670             69219329
discard-unknown-lo0.0-i                                 0                    0
Policers:
Name                                                Bytes              Packets
management-1m-accept-icmp-lo0.0-i                       0                    0
management-1m-accept-ntp-lo0.0-i                        0                    0
management-1m-accept-traceroute-icmp-lo0.0-i            0                    0
management-1m-accept-traceroute-tcp-lo0.0-i             0                    0
management-1m-accept-traceroute-udp-lo0.0-i             0                    0
management-5m-accept-ssh-lo0.0-i                        0                    0
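This matches a filter where NTP is accepted only from the configured servers, so UDP from any other source falls through to the final discard-udp term. A sketch of such a term (the prefix-list name `ntp-servers` and the placeholder address are assumptions):

```
set policy-options prefix-list ntp-servers ip.ip.ip.ip/32
set firewall family inet filter lo0.0-i term accept-ntp from source-prefix-list ntp-servers
set firewall family inet filter lo0.0-i term accept-ntp from protocol udp
set firewall family inet filter lo0.0-i term accept-ntp from port ntp
set firewall family inet filter lo0.0-i term accept-ntp then policer management-1m
set firewall family inet filter lo0.0-i term accept-ntp then count accept-ntp
set firewall family inet filter lo0.0-i term accept-ntp then accept
set firewall family inet filter lo0.0-i term discard-udp from protocol udp
set firewall family inet filter lo0.0-i term discard-udp then count discard-udp
set firewall family inet filter lo0.0-i term discard-udp then discard
```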

The fifth test was a UDP flood (see the graphs between 10:41 and 11:04) against the NTP port, this time with IP spoofing. The RE load increased by 12%, and the PFE load increased by up to 22%. The counters show that the flood was policed down to the 1 Mb/s threshold, which was still enough to raise the RE load, so the traffic threshold was subsequently reduced to 512 Kb/s. Counters:

Filter: lo0.0-i
Counters:
Name                                                Bytes              Packets
accept-bgp-lo0.0-i                                      0                    0
accept-icmp-lo0.0-i                                     0                    0
accept-ntp-lo0.0-i                               34796804              1242743
accept-snmp-lo0.0-i                                630617                 8324
accept-ssh-lo0.0-i                                  20568                  366
accept-traceroute-icmp-lo0.0-i                          0                    0
accept-traceroute-tcp-lo0.0-i                        1159                   19
accept-traceroute-udp-lo0.0-i                           0                    0
discard-TTL_1-unknown-lo0.0-i                           0                    0
discard-icmp-lo0.0-i                                    0                    0
discard-icmp-fragments-lo0.0-i                          0                    0
discard-ip-options-lo0.0-i                              0                    0
discard-tcp-lo0.0-i                                     0                    0
discard-udp-lo0.0-i                                 53365                  409
discard-unknown-lo0.0-i                                 0                    0
Policers:
Name                                                Bytes              Packets
management-1m-accept-icmp-lo0.0-i                       0                    0
management-1m-accept-ntp-lo0.0-i               3717958468            132784231
management-1m-accept-traceroute-icmp-lo0.0-i            0                    0
management-1m-accept-traceroute-tcp-lo0.0-i             0                    0
management-1m-accept-traceroute-udp-lo0.0-i             0                    0
management-5m-accept-ssh-lo0.0-i                        0                    0

Finally, a UDP flood retest (see the graph between 11:29 and 11:34) against the NTP port with IP spoofing, this time with the traffic threshold set to 512 Kb/s. The load graph:

UDP flood graph

The counters:

Filter: lo0.0-i
Counters:
Name                                                Bytes              Packets
accept-bgp-lo0.0-i                                      0                    0
accept-icmp-lo0.0-i                                     0                    0
accept-ntp-lo0.0-i                               21066260               752363
accept-snmp-lo0.0-i                                744161                 9823
accept-ssh-lo0.0-i                                  19772                  347
accept-traceroute-icmp-lo0.0-i                          0                    0
accept-traceroute-tcp-lo0.0-i                        1353                   22
accept-traceroute-udp-lo0.0-i                           0                    0
discard-TTL_1-unknown-lo0.0-i                           0                    0
discard-icmp-lo0.0-i                                    0                    0
discard-icmp-fragments-lo0.0-i                          0                    0
discard-ip-options-lo0.0-i                              0                    0
discard-tcp-lo0.0-i                                     0                    0
discard-udp-lo0.0-i                                 82745                  602
discard-unknown-lo0.0-i                                 0                    0
Policers:
Name                                                Bytes              Packets
management-1m-accept-icmp-lo0.0-i                       0                    0
management-1m-accept-traceroute-icmp-lo0.0-i            0                    0
management-1m-accept-traceroute-tcp-lo0.0-i             0                    0
management-1m-accept-traceroute-udp-lo0.0-i             0                    0
management-512k-accept-ntp-lo0.0-i             4197080384            149895728
management-5m-accept-ssh-lo0.0-i                        0                    0
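Lowering the threshold amounts to defining a tighter policer and referencing it from the NTP term in place of the 1 Mb/s one, which is why the policer counter name changes from `management-1m-accept-ntp-lo0.0-i` to `management-512k-accept-ntp-lo0.0-i`. A sketch (the burst size is an assumption):

```
set firewall policer management-512k if-exceeding bandwidth-limit 512k
set firewall policer management-512k if-exceeding burst-size-limit 32k
set firewall policer management-512k then discard
delete firewall family inet filter lo0.0-i term accept-ntp then policer management-1m
set firewall family inet filter lo0.0-i term accept-ntp then policer management-512k
```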

Conclusions

These tests resulted in an RE traffic-filtering configuration that is resistant to DDoS attacks. The configuration is now live on a production router, and so far there have been no problems. The counters from the production MX80 show the following:

Filter: lo0.0-i
Counters:
Name                                                Bytes              Packets
accept-v6-bgp-lo0.0-i                            31091878               176809
accept-v6-icmp-lo0.0-i                            1831144                26705
accept-v6-ntp-lo0.0-i                                   0                    0
accept-v6-traceroute-icmp-lo0.0-i                       0                    0
accept-v6-traceroute-tcp-lo0.0-i                    48488                  684
accept-v6-traceroute-udp-lo0.0-i                        0                    0
discard-v6-icmp-lo0.0-i                                 0                    0
discard-v6-tcp-lo0.0-i                                  0                    0
discard-v6-udp-lo0.0-i                                  0                    0
discard-v6-unknown-lo0.0-i                              0                    0
Policers:
Name                                                Bytes              Packets
management-1m-accept-v6-icmp-lo0.0-i                    0                    0
management-1m-accept-v6-traceroute-icmp-lo0.0-i         0                    0
management-1m-accept-v6-traceroute-tcp-lo0.0-i          0                    0
management-1m-accept-v6-traceroute-udp-lo0.0-i          0                    0
management-512k-accept-v6-ntp-lo0.0-i                   0                    0

Filter: lo0.0-i
Counters:
Name                                                Bytes              Packets
accept-bgp-lo0.0-i                              135948400               698272
accept-dns-lo0.0-i                                    374                    3
accept-icmp-lo0.0-i                             121304849              1437305
accept-ntp-lo0.0-i                                  87780                 1155
accept-snmp-lo0.0-i                            1265470648             12094967
accept-ssh-lo0.0-i                                2550011                30897
accept-tacacs-lo0.0-i                              702450                11657
accept-traceroute-icmp-lo0.0-i                      28824                  636
accept-traceroute-tcp-lo0.0-i                       75378                 1361
accept-traceroute-udp-lo0.0-i                       47328                 1479
discard-TTL_1-unknown-lo0.0-i                       27790                  798
discard-icmp-lo0.0-i                                26400                  472
discard-icmp-fragments-lo0.0-i                          0                    0
discard-ip-options-lo0.0-i                          35680                 1115
discard-tcp-lo0.0-i                              73399674              1572144
discard-udp-lo0.0-i                             126386306               694603
discard-unknown-lo0.0-i                                 0                    0
Policers:
Name                                                Bytes              Packets
management-1m-accept-dns-lo0.0-i                        0                    0
management-1m-accept-icmp-lo0.0-i                   38012                  731
management-1m-accept-tacacs-lo0.0-i                     0                    0
management-1m-accept-traceroute-icmp-lo0.0-i            0                    0
management-1m-accept-traceroute-tcp-lo0.0-i             0                    0
management-1m-accept-traceroute-udp-lo0.0-i             0                    0
management-512k-accept-ntp-lo0.0-i                      0                    0
management-5m-accept-ssh-lo0.0-i                        0                    0

The discard counters above clearly show how much unwanted traffic the filter drops before it ever reaches the routing engine.
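For completeness: filters like these only take effect once applied as input filters on the loopback interface, one per address family, which is how all traffic destined for the routing engine gets screened. A sketch, assuming the filter names mirror the `Filter: lo0.0-i` output above:

```
set interfaces lo0 unit 0 family inet filter input lo0.0-i
set interfaces lo0 unit 0 family inet6 filter input lo0.0-i
```

Transit traffic is unaffected; only packets addressed to the router itself pass through the lo0 filter on their way to the RE.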

Let me know if you have any questions regarding this article, I will be glad to answer them!
