Christopher Adigun

Virtual 4G Simulation Using Kubernetes and GNS3

This tutorial is about how to deploy a virtual 4G stack using GNS3 and Kubernetes. It covers the following:

  • OpenAirInterface eNodeB and UE simulator software

  • Virtual EPC (MME, PGW, SGW, HSS, PCRF) using the Open5gs software; this will be installed in a Kubernetes cluster

  • A Vyos router will be used for L3 routing

  • GNS3 will be used to host all the components

The motivation for this tutorial stems from the fact that I worked as a packet core support engineer about three years ago, before I moved into cloud native with a focus on the Kubernetes platform. So I decided to see whether a 4G stack could be simulated using open-source tools. I hope you find this as interesting as I did!

N.B - Some familiarity with Kubernetes and telecommunication networks is assumed.

GNS3 was chosen as the platform to deploy everything because it makes it easy to see everything at a glance while still interacting with the components. While telecom networks remain largely the same logically, the implementations usually differ; generally, no two telecom network deployments are the same. So kindly take the points below as just one way of implementing this; there are diverse ways to achieve it, especially when Kubernetes is used.

  • The S1AP interface of the MME was exposed directly at the POD layer instead of via a service. This is really subject to how the EPC software is developed; I chose this route because it avoids having to enable the SCTP feature flag in the Kubernetes API server, so the eNodeB connects directly to the POD instead of through a service.

  • Calico was used as the CNI at this stage; it makes it possible to advertise the POD IPs directly into your L3 network, which is quite interesting in my opinion. Calico advertised the POD IPs to the Vyos router, which made routing between the eNodeB and the EPC network seamless.
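To illustrate the first point above, a POD that serves S1AP directly can simply declare the SCTP port on its container; the snippet below is a hypothetical fragment (the names and image tag are illustrative, not the actual manifests from the repo):

```yaml
# Hypothetical fragment: S1AP is served on the POD IP itself, so no Service
# object (and hence no SCTP support in kube-proxy) is needed.
apiVersion: v1
kind: Pod
metadata:
  name: open5gs-mme          # illustrative name
spec:
  containers:
  - name: mme
    image: open5gs:v1.2.0    # illustrative image tag
    ports:
    - containerPort: 36412   # S1AP listening port
      protocol: SCTP
```

The eNodeB then targets the POD IP directly, which only works because Calico makes that IP routable from outside the cluster.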

The GNS3 network diagram is shown below:

[Image: GNS3 network diagram]

The UE and eNodeB simulators are running in the same VM while the virtual EPC stack is running in the kubernetes cluster.

Kubernetes Setup

This was installed using the official Kubeadm installation documentation, Calico was used for the CNI.

vEPC Setup

The vEPC was installed using the Open5gs software, which provides the following components:

  • HSS database: MongoDB is used for this purpose (deployed as a StatefulSet).
  • Web-UI: a web interface to administer the MongoDB database; this is used to add subscriber information.
  • HSS (deployed as a StatefulSet)
  • PGW: combines both PGW-C and PGW-U (deployed as a StatefulSet)
  • SGW: combines both SGW-C and SGW-U (deployed as a StatefulSet)
  • MME (deployed as a StatefulSet)
  • PCRF (deployed as a StatefulSet)

It uses the freeDiameter project for the components that require Diameter (PGW to PCRF over Gx, MME to HSS over S6a).

A single Docker image was used for all the vEPC core components (i.e. excluding MongoDB and the Web-UI); a dedicated image for each core EPC component may be desirable, especially to reduce the overall image size.
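As a rough sketch of what such a combined image might look like (the base image, package list, and build flags below are assumptions drawn from the upstream build instructions, not the image actually used here):

```dockerfile
# Hypothetical multi-stage build producing one image holding all EPC daemons
FROM ubuntu:18.04 AS builder
RUN apt-get update && apt-get install -y --no-install-recommends \
    git build-essential meson ninja-build pkg-config flex bison \
    libsctp-dev libgnutls28-dev libgcrypt-dev libssl-dev \
    libidn11-dev libmongoc-dev libbson-dev libyaml-dev
RUN git clone --branch v1.2.0 https://github.com/open5gs/open5gs.git && \
    cd open5gs && meson build --prefix=/open5gs/install && ninja -C build install

FROM ubuntu:18.04
COPY --from=builder /open5gs/install /open5gs/install
# An entrypoint script (not shown) would select which daemon to run,
# e.g. based on an environment variable set per StatefulSet.
```

Splitting this into one image per daemon would shrink each image but multiply the number of builds to maintain.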

The manifest files can be found in the repo:
https://bitbucket.org/infinitydon/virtual-4g-simulator/src/master/open5gs/

  • Create the open5gs namespace: kubectl create ns open5gs

  • Deploy all the manifest files (it is advisable to wait for the MongoDB POD to be running before proceeding with the rest):
    kubectl apply -f hss-database/
    kubectl apply -f hss/
    kubectl apply -f mme/
    kubectl apply -f sgw/
    kubectl apply -f pgw/
    kubectl apply -f pcrf/
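Since the other components need MongoDB at startup, one way to block until it is ready is kubectl wait (a sketch; it assumes the MongoDB StatefulSet pod is named mongo-0, as in this deployment):

```shell
# Block until the MongoDB pod reports the Ready condition (5 minute timeout)
kubectl -n open5gs wait --for=condition=Ready pod/mongo-0 --timeout=300s
```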

  • Status after applying the manifests:

chris@k8s-cp-1:~/open5gs$ kubectl -n open5gs get po -o wide
NAME                             READY   STATUS    RESTARTS   AGE   IP               NODE           NOMINATED NODE   READINESS GATES
mongo-0                          2/2     Running   2          11h   192.168.230.36   k8s-worker-1   <none>           <none>
open5gs-hss-deployment-0         1/1     Running   3          11h   192.168.230.35   k8s-worker-1   <none>           <none>
open5gs-mme-deployment-0         1/1     Running   1          10h   192.168.230.38   k8s-worker-1   <none>           <none>
open5gs-pcrf-deployment-0        1/1     Running   4          11h   192.168.230.28   k8s-worker-1   <none>           <none>
open5gs-pgw-deployment-0         1/1     Running   1          11h   192.168.230.30   k8s-worker-1   <none>           <none>
open5gs-sgw-deployment-0         1/1     Running   1          11h   192.168.230.34   k8s-worker-1   <none>           <none>
open5gs-webui-64d7b9946f-lktcs   1/1     Running   4          11h   192.168.230.33   k8s-worker-1   <none>           <none>

N.B - It is normal to see some POD restarts while the services become active.

Sample log outputs:

chris@k8s-cp-1:~/open5gs$ kubectl -n open5gs logs open5gs-mme-deployment-0
Open5GS daemon v1.2.0

02/16 10:29:27.177: [app] INFO: Configuration: '/open5gs/config-map/mme.yaml' (../src/main.c:54)
02/16 10:29:27.177: [app] INFO: File Logging: '/var/log/open5gs/mme.log' (../src/main.c:57)
02/16 10:29:27.299: [app] INFO: MME initialize...done (../src/mme/app-init.c:33)
02/16 10:29:27.305: [gtp] INFO: gtp_server() [192.168.230.31]:2123 (../lib/gtp/path.c:32)
02/16 10:29:27.305: [gtp] INFO: gtp_connect() [10.108.0.76]:2123 (../lib/gtp/path.c:59)
02/16 10:29:27.305: [mme] INFO: s1ap_server() [192.168.230.31]:36412 (../src/mme/s1ap-sctp.c:57)
02/16 10:30:58.863: [diam] INFO: CONNECTED TO 'hss.localdomain' (TCP,soc#14): (../lib/diameter/common/logger.c:108)
chris@k8s-cp-1:~/open5gs$ kubectl -n open5gs logs open5gs-pgw-deployment-0
net.ipv6.conf.all.disable_ipv6 = 0
Open5GS daemon v1.2.0

02/16 10:29:15.131: [app] INFO: Configuration: '/open5gs/config-map/pgw.yaml' (../src/main.c:54)
02/16 10:29:15.132: [app] INFO: File Logging: '/var/log/open5gs/pgw.log' (../src/main.c:57)
02/16 10:29:15.226: [gtp] INFO: gtp_server() [192.168.230.30]:2123 (../lib/gtp/path.c:32)
02/16 10:29:15.226: [gtp] INFO: gtp_server() [192.168.230.30]:2152 (../lib/gtp/path.c:32)
02/16 10:29:15.226: [app] INFO: PGW initialize...done (../src/pgw/app-init.c:31)
02/16 10:31:16.248: [diam] INFO: CONNECTED TO 'pcrf.localdomain' (TCP,soc#20): (../lib/diameter/common/logger.c:108)
chris@k8s-cp-1:~/open5gs$ kubectl -n open5gs logs open5gs-sgw-deployment-0
Open5GS daemon v1.2.0

02/16 10:29:29.234: [app] INFO: Configuration: '/open5gs/config-map/sgw.yaml' (../src/main.c:54)
02/16 10:29:29.234: [app] INFO: File Logging: '/var/log/open5gs/sgw.log' (../src/main.c:57)
02/16 10:29:29.239: [app] INFO: SGW initialize...done (../src/sgw/app-init.c:31)
02/16 10:29:29.241: [gtp] INFO: gtp_server() [192.168.230.34]:2123 (../lib/gtp/path.c:32)
02/16 10:29:29.241: [gtp] INFO: gtp_server() [192.168.230.34]:2152 (../lib/gtp/path.c:32)

At this stage neither the UE nor eNodeB are connected to the vEPC.

Also take note that Calico is advertising the POD IPs via BGP to the Vyos router so that the eNodeB can connect to the EPC PODs.

Calico BGP configuration is below:

apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  asNumber: 63400
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: bgppeer-vyos
spec:
  asNumber: 63400
  peerIP: 10.10.10.1
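For the eNodeB to reach the PODs by their real IPs, POD egress traffic should also leave un-NATed and unencapsulated; a Calico IPPool along these lines (the pool name and CIDR are assumptions, adjust to your cluster) achieves that:

```yaml
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool   # assumed pool name
spec:
  cidr: 192.168.230.0/24      # assumed cluster POD CIDR
  ipipMode: Never             # no encapsulation; the fabric routes POD IPs natively
  natOutgoing: false          # keep POD source IPs visible to the eNodeB network
```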

Similar configuration was done on the Vyos side to add Calico (using the k8s-worker-1 node IP) as a BGP peer:

set protocols bgp 63400 neighbor 10.10.10.3 remote-as '63400'
vyos@vyos:~$ show ip route
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
       T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
       F - PBR, f - OpenFabric,
       > - selected route, * - FIB route, q - queued route, r - rejected route

S>* 0.0.0.0/0 [1/0] via 192.168.122.1, eth4, 00:43:11
C>* 10.10.10.0/24 is directly connected, eth0, 00:43:12
C>* 172.16.2.0/28 is directly connected, eth1, 00:43:13
C>* 192.168.122.0/24 is directly connected, eth4, 00:43:13
B>* 192.168.230.0/26 [200/0] via 10.10.10.3, eth0, 00:43:10
vyos@vyos:~$ show ip bgp neighbors
BGP neighbor is 10.10.10.3, remote AS 63400, local AS 63400, internal link
  BGP version 4, remote router ID 10.10.10.3, local router ID 192.168.122.2
  BGP state = Established, up for 00:44:51
  Last read 00:00:39, Last write 00:00:51
  Hold time is 180, keepalive interval is 60 seconds
  Neighbor capabilities:
    4 Byte AS: advertised and received
    AddPath:
      IPv4 Unicast: TX received
      IPv4 Unicast: RX advertised IPv4 Unicast and received
    Route refresh: advertised and received(new)
    Address Family IPv4 Unicast: advertised and received
    Hostname Capability: advertised (name: vyos,domain name: n/a) not received
    Graceful Restart Capabilty: advertised and received
      Remote Restart timer is 120 seconds
      Address families by peer:
        IPv4 Unicast(preserved)

You can see that 192.168.230.0/26 (the POD CIDR for k8s-worker-1) is being learned from Calico via BGP.

eNodeB and UE Setup

The complete setup procedure can be found at https://metonymical.hatenablog.com/entry/2020/01/03/151233 (it is in Japanese, so you may need to translate it); you only need the eNodeB+UE sections, the EPC section is not needed. The sample configuration that was used is given below:

Important UE ue.nfapi.conf aspects:

L1s = (
        {
        num_cc = 1;
        tr_n_preference = "nfapi";
        local_n_if_name  = "lo";      
        remote_n_address = "127.0.0.2"; 
        local_n_address  = "127.0.0.1"; 
        local_n_portc    = 50000;
        remote_n_portc   = 50001;
        local_n_portd    = 50010;
        remote_n_portd   = 50011;
        }
);

Important eNodeB rcc.band7.tm1.nfapi.conf aspects:


    ////////// MME parameters:
    mme_ip_address      = ( { ipv4       = "192.168.230.38";  //--> MME POD IP
                              ipv6       = "192:168:30::17";
                              active     = "yes";
                              preference = "ipv4";
                            }
                          );

    NETWORK_INTERFACES :
    {
        ENB_INTERFACE_NAME_FOR_S1_MME            = "enp0s3";
        ENB_IPV4_ADDRESS_FOR_S1_MME              = "172.16.2.2";
        ENB_INTERFACE_NAME_FOR_S1U               = "enp0s3";
        ENB_IPV4_ADDRESS_FOR_S1U                 = "172.16.2.2";
        ENB_PORT_FOR_S1U                         = 2152; # Spec 2152
        ENB_IPV4_ADDRESS_FOR_X2C                 = "172.16.2.2";
        ENB_PORT_FOR_X2C                         = 36422; # Spec 36422

    };
  }
);

MACRLCs = (
        {
        num_cc = 1;
        local_s_if_name  = "lo";     
        remote_s_address = "127.0.0.1"; 
        local_s_address  = "127.0.0.2"; 
        local_s_portc    = 50001;
        remote_s_portc   = 50000;
        local_s_portd    = 50011;
        remote_s_portd   = 50010;
        tr_s_preference = "nfapi";
        tr_n_preference = "local_RRC";
        }
);

The UE SIM information, which can be found in openair3/NAS/TOOLS/ue_eurecom_test_sfr.conf (inside the oaism VM):

        IMSI="208930100001111";
        USIM_API_K="8baf473f2f8fd09487cccbd7097c6862";
        OPC="e734f8734007d6c5ce7a0508809e7e9c";

This information should be used to register the subscriber via the Web-UI POD running inside the Kubernetes cluster:

[Image: subscriber registration in the Open5gs Web-UI]
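If the Web-UI is not otherwise exposed, a port-forward is one quick way to reach it from your workstation (a sketch; the deployment name open5gs-webui is inferred from the pod listing earlier, and port 3000 is the Open5gs Web-UI default):

```shell
# Forward local port 3000 to the Web-UI, then browse http://localhost:3000
kubectl -n open5gs port-forward deployment/open5gs-webui 3000:3000
```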

Then we can proceed to start the simulators:

eNodeB:

cd ~/enb_folder/cmake_targets
sudo -E ./lte_build_oai/build/lte-softmodem -O ../ci-scripts/conf_files/rcc.band7.tm1.nfapi.conf > enb.log 2>&1

After some seconds, you can check the MME logs to see if the eNodeB is registered:

chris@k8s-cp-1:~/open5gs$ kubectl -n open5gs logs open5gs-mme-deployment-0
Open5GS daemon v1.2.0

02/16 10:29:27.177: [app] INFO: Configuration: '/open5gs/config-map/mme.yaml' (../src/main.c:54)
02/16 10:29:27.177: [app] INFO: File Logging: '/var/log/open5gs/mme.log' (../src/main.c:57)
02/16 10:29:27.299: [app] INFO: MME initialize...done (../src/mme/app-init.c:33)
02/16 10:29:27.305: [gtp] INFO: gtp_server() [192.168.230.38]:2123 (../lib/gtp/path.c:32)
02/16 10:29:27.305: [gtp] INFO: gtp_connect() [10.108.0.76]:2123 (../lib/gtp/path.c:59)
02/16 10:29:27.305: [mme] INFO: s1ap_server() [192.168.230.38]:36412 (../src/mme/s1ap-sctp.c:57)
02/16 10:30:58.863: [diam] INFO: CONNECTED TO 'hss.localdomain' (TCP,soc#14): (../lib/diameter/common/logger.c:108)
02/16 11:43:09.224: [mme] INFO: eNB-S1 accepted[172.16.2.2]:36412 in s1_path module (../src/mme/s1ap-sctp.c:109)
02/16 11:43:09.224: [mme] INFO: eNB-S1 accepted[172.16.2.2] in master_sm module (../src/mme/mme-sm.c:167)
02/16 11:43:09.224: [mme] INFO: Added a eNB. Number of eNBs is now 1 (../src/mme/mme-context.c:68)

You can see from the last line that the number of eNBs is now 1.

Then proceed to start the UE simulator:

cd ~/ue_folder/cmake_targets
sudo -E ./lte_build_oai/build/lte-uesoftmodem -O ../ci-scripts/conf_files/ue.nfapi.conf --L2-emul 3 --num-ues 1 --nums_ue_thread 1 > ue.log 2>&1

Then check the MME and PGW logs to see if a session was created:

MME:

02/16 16:24:17.123: [mme] INFO: Added a eNB. Number of eNBs is now 1 (../src/mme/mme-context.c:68)
02/16 16:25:29.897: [mme] INFO: Added a UE. Number of UEs is now 1 (../src/mme/mme-context.c:58)
02/16 16:25:29.959: [mme] INFO: Added a session. Number of sessions is now 1 (../src/mme/mme-context.c:79)

PGW:

chris@k8s-cp-1:~/open5gs$ kubectl -n open5gs logs open5gs-pgw-deployment-0
net.ipv6.conf.all.disable_ipv6 = 0
Open5GS daemon v1.2.0

02/16 15:04:53.192: [app] INFO: Configuration: '/open5gs/config-map/pgw.yaml' (../src/main.c:54)
02/16 15:04:53.192: [app] INFO: File Logging: '/var/log/open5gs/pgw.log' (../src/main.c:57)
02/16 15:04:53.273: [gtp] INFO: gtp_server() [192.168.230.41]:2123 (../lib/gtp/path.c:32)
02/16 15:04:53.274: [app] INFO: PGW initialize...done (../src/pgw/app-init.c:31)
02/16 15:04:53.274: [gtp] INFO: gtp_server() [192.168.230.41]:2152 (../lib/gtp/path.c:32)
02/16 15:05:08.939: [diam] INFO: CONNECTED TO 'pcrf.localdomain' (TCP,soc#12): (../lib/diameter/common/logger.c:108)
02/16 16:25:29.960: [pgw] INFO: UE IMSI:[208930100001111] APN:[internet] IPv4:[45.45.0.2] IPv6:[] (../src/pgw/pgw-context.c:845)
02/16 16:25:29.960: [pgw] INFO: Added a session. Number of active sessions is now 1 (../src/pgw/pgw-context.c:40)
02/16 16:25:29.961: [gtp] INFO: gtp_connect() [192.168.230.39]:2152 (../lib/gtp/path.c:59)

The PGW has assigned IP 45.45.0.2 to the UE simulator; this can also be confirmed in the VM that runs the simulator:

chris@oaism:~/enb_folder/cmake_targets$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet 127.0.0.2/8 scope host secondary lo:
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:62:55:43 brd ff:ff:ff:ff:ff:ff
    inet 172.16.2.2/24 brd 172.16.2.255 scope global enp0s3
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe62:5543/64 scope link
       valid_lft forever preferred_lft forever
3: oip1: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 100
    link/generic 00:00:00:00:00:00:00:00 brd 00:00:00:00:00:00:00:00
    inet 45.45.0.2/8 brd 45.255.255.255 scope global oip1
       valid_lft forever preferred_lft forever

Interface oip1 in the UE simulator has been configured with the same IP that the PGW assigned to it.

Sample ping tests from the UE interface to the PGW show that connectivity is working; the high reply times are because this was not run on high-end equipment:

chris@oaism:~/enb_folder/cmake_targets$ ping -I oip1 45.45.0.1
PING 45.45.0.1 (45.45.0.1) from 45.45.0.2 oip1: 56(84) bytes of data.
64 bytes from 45.45.0.1: icmp_seq=1 ttl=64 time=46.3 ms
64 bytes from 45.45.0.1: icmp_seq=2 ttl=64 time=2585 ms
64 bytes from 45.45.0.1: icmp_seq=3 ttl=64 time=4150 ms
64 bytes from 45.45.0.1: icmp_seq=4 ttl=64 time=3136 ms
64 bytes from 45.45.0.1: icmp_seq=5 ttl=64 time=2135 ms
64 bytes from 45.45.0.1: icmp_seq=6 ttl=64 time=1120 ms

Also, one advantage of GNS3 is that you can easily capture tcpdump traces on the interfaces; some sample traces that were captured are given below:

[Image: sample tcpdump capture]

[Image: sample tcpdump capture]

It should be noted that this is just for learning purposes. In my experience the OAISIM simulator (OpenAirInterface, as it is known now) sometimes stops working and I have to re-compile it to get it working again, but I think the experience should improve as the year goes on, or maybe it is just a limitation of my hardware. Also, Facebook has open-sourced its UE+eNodeB simulator (S1AP tester), but adequate documentation on how to use it is not currently available.

In the second part of this tutorial, I will implement the vEPC using multiple interfaces inside the vEPC components to simulate how it is done in some network designs, i.e. separating signalling/traffic interfaces from the general OAM network; for this we will make use of Nokia DANM (a CNI network management solution for Kubernetes).

Ideas and comments are welcome especially in the aspect of the UE+eNodeB simulators.

                            REFERENCES

https://docs.projectcalico.org/reference/resources/bgppeer#bgp-peer-definition

https://gitlab.eurecom.fr/oai/openairinterface5g/wikis/l2-nfapi-simulator/l2-nfapi-simulator-w-S1-same-machine

https://www.gns3.com/

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

https://open5gs.org/open5gs/docs/guide/03-splitting-network-elements/

https://vyos.readthedocs.io/en/latest/routing/bgp.html

Top comments (2)

eduardocalfaia

Hi Christopher,
I've followed your article, great job. I had some difficulties because my network design is different from yours, but I fixed them. When I launch the eNB application I see the error below. Have you ever seen this error? Thanks
12/18 20:44:13.421: [mme] INFO: eNB-S1 accepted[192.168.49.10]:36412 in s1_path module (../src/mme/s1ap-sctp.c:109)
12/18 20:44:13.422: [mme] INFO: eNB-S1 accepted[192.168.49.10] in master_sm module (../src/mme/mme-sm.c:167)
12/18 20:44:13.422: [mme] INFO: Added a eNB. Number of eNBs is now 1 (../src/mme/mme-context.c:68)
12/18 20:44:13.440: [mme] WARNING: S1-Setup failure: (../src/mme/s1ap-handler.c:147)
12/18 20:44:13.440: [mme] WARNING: Cannot find Served TAI. Check 'mme.tai' configuration (../src/mme/s1ap-handler.c:148)
12/18 20:44:18.566: [mme] INFO: eNB-S1[192.168.49.10] connection refused!!! (../src/mme/mme-sm.c:217)
12/18 20:44:18.566: [mme] INFO: Removed a eNB. Number of eNBs is now 0 (../src/mme/mme-context.c:73)

Alessio Diamanti

Hi,
in the mme.yaml config file the tac value is set to 12345, so just change it in the OAISIM eNB config file and it will connect.
However, there will be problems with the UE connecting; I am not sure that the web interface is correctly adding the UE info into MongoDB.