Before we start our blog post, let's understand what a Service Communication Proxy is and why we need it.
What is Service Communication Proxy?
In general terms, a Service Communication Proxy is a component that facilitates communication between different services within a distributed system or microservices architecture. Similarly, a 5G Core Service Communication Proxy is a specialized component within the 5G core network architecture that manages and facilitates communication between different network functions (NFs). It is integral to ensuring efficient, secure, and reliable interactions within the 5G core network.
The 5G core network is designed around a service-based architecture (SBA), where network functions (such as the Access and Mobility Management Function (AMF), Session Management Function (SMF), and Policy Control Function (PCF)) communicate with each other using standard web-based protocols (e.g., HTTP/2, RESTful APIs). The SCP acts as an intermediary that provides several key services to facilitate and enhance this communication.
Why do we need SCP?
The Service Communication Proxy is one of the most important components of the 3GPP Service-Based Architecture (SBA) for 5G core networks. The concept of the SCP is not entirely new: similar functionality is provided by the Signaling Transfer Point (STP), the central signaling router in 2G/3G, and by the Diameter Routing Agent (DRA) in 4G.
The 5G SCP performs multiple key functions and offers benefits such as:
- Routing, load balancing and distribution
- Enhanced Security
- Cloud-Native nature - Easy to deploy
- 5G Service Detection and Discovery
- Load Detection and Auto-Scaling
- Reduced complexity
- Better observability
SCP with LoxiLB
Background
In the past, many users have deployed Open5gs and exposed its services externally not through a load balancer but through NodePort. Using NodePort is fine for testing purposes, but it is never used in a production environment. Moreover, 3GPP introduced the Service-Based Architecture for 5G, and to further that idea, the concept of the SCP was introduced. In simple words, the most basic element of an SCP is load balancing, which cannot be accomplished with NodePort. There are a few load balancers available that can solve this problem, but there are also areas they don't particularly address, e.g. the flexibility to run in any environment (on-prem or public cloud), hitless failover, auto-scaling, L7 load balancing for 5G interfaces, SCTP multi-homing, etc. This is where LoxiLB comes into the picture.
Introduction
Let me start with a basic introduction to LoxiLB: it is an open-source cloud-native load balancer, written in Go, that uses eBPF technology for its core engine and is primarily designed to tackle independent workloads or microservices.
For more information about LoxiLB, please follow this. There are a few 5G-related blogs already published where other users have applied LoxiLB to the N2 interface. You can read them here and here.
In this blog post, we are going to discuss how we deployed LoxiLB as an SCP with a popular open-source 5G core, Open5GS, in a Kubernetes environment.
Architecture
We are going to set up a total of 6 nodes: one node each dedicated to the UE, the UPF, and LoxiLB, with the remaining 3 nodes forming a Kubernetes cluster to host the Open5gs core components. LoxiLB can run in in-cluster mode or outside the cluster. For this blog, we are running LoxiLB outside the cluster in a separate VM.
Prepare the Kubernetes cluster
We are assuming that the user has already set up a Kubernetes cluster. If not, there are plenty of LoxiLB quick-start guides to help you get started.
Prepare LoxiLB Instance
Once the Kubernetes cluster is ready, we can deploy LoxiLB. To avoid a single point of failure, there are plenty of ways to deploy LoxiLB with High Availability. Please refer to this to learn about some of the common options. For this blog, we will keep things simple and use a single LoxiLB instance.
Once the node instance is up and running, follow the steps below to start LoxiLB docker container:
$ apt-get update
$ apt-get install -y software-properties-common
#Install Docker
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ add-apt-repository -y "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
$ apt-get update
$ apt-get install -y docker-ce
#Run LoxiLB docker container
$ docker run -u root --cap-add SYS_ADMIN --restart unless-stopped --privileged -dit -v /dev/log:/dev/log --net=host --name loxilb ghcr.io/loxilb-io/loxilb:latest
Deploy kube-loxilb
kube-loxilb is used to deploy LoxiLB with Kubernetes.
$ wget https://raw.githubusercontent.com/loxilb-io/kube-loxilb/main/manifest/ext-cluster/kube-loxilb.yaml
kube-loxilb.yaml
args:
- --loxiURL=http://172.17.0.2:11111
- --externalCIDR=17.17.10.0/24
- --setLBMode=2
A description of these options follows:
loxiURL: LoxiLB API server address. kube-loxilb uses this URL to communicate with LoxiLB. The IP must be reachable from kube-loxilb (e.g. the private IP of the LoxiLB node).
externalCIDR: The VIP CIDR from which LoxiLB allocates external IPs for LB rules when a LoadBalancer service is created. In this document, we will specify a private IP range.
setLBMode: Specifies the NAT mode of the load balancer. Three modes are currently supported (0=default, 1=oneArm, 2=fullNAT); we will use mode 2 (fullNAT) for this deployment.
In the topology, the LoxiLB node's private IP is 192.168.80.9. So, values are changed to:
args:
- --loxiURL=http://192.168.80.9:11111
- --externalCIDR=123.123.123.0/24
- --setLBMode=2
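The manual edit above can also be scripted. The following is a hedged sketch: the IP and CIDR are this topology's values, and for illustration we recreate just the args lines in a scratch kube-loxilb.yaml first — on a real setup, run only the sed against the file fetched by wget earlier.

```shell
# Scratch copy of the default args (illustration only; on a real setup,
# skip this and run sed against the downloaded kube-loxilb.yaml).
LOXILB_IP=192.168.80.9
cat > kube-loxilb.yaml <<'EOF'
        - --loxiURL=http://172.17.0.2:11111
        - --externalCIDR=17.17.10.0/24
        - --setLBMode=2
EOF
# Point loxiURL at the LoxiLB node and set the external VIP CIDR.
sed -i \
  -e "s|--loxiURL=http://[^:]*:11111|--loxiURL=http://${LOXILB_IP}:11111|" \
  -e "s|--externalCIDR=.*|--externalCIDR=123.123.123.0/24|" \
  kube-loxilb.yaml
cat kube-loxilb.yaml
```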
After modifying the options, use kubectl to deploy kube-loxilb.
$ kubectl apply -f kube-loxilb.yaml
serviceaccount/kube-loxilb created
clusterrole.rbac.authorization.k8s.io/kube-loxilb created
clusterrolebinding.rbac.authorization.k8s.io/kube-loxilb created
deployment.apps/kube-loxilb created
When the deployment is complete, you can verify that the Deployment has been created in the kube-system namespace of k8s with the following command:
$ kubectl -n kube-system get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
calico-kube-controllers 1/1 1 1 18d
coredns 2/2 2 2 18d
kube-loxilb 1/1 1 1 18d
metrics-server 1/1 1 1 18d
Deploy UPF
Now, let's install Open5gs UPF on the UPF node.
Log in to the UPF node and install MongoDB first. Import the key for the installation.
$ sudo apt update
$ sudo apt install gnupg
$ curl -fsSL https://pgp.mongodb.com/server-6.0.asc | sudo gpg -o /usr/share/keyrings/mongodb-server-6.0.gpg --dearmor
$ echo "deb [ arch=amd64,arm64 signed-by=/usr/share/keyrings/mongodb-server-6.0.gpg ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/6.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-6.0.list
#Install mongodb
$ sudo apt update
$ sudo apt install -y mongodb-org
$ sudo systemctl start mongod #(if '/usr/bin/mongod' is not running)
$ sudo systemctl enable mongod #(ensure to automatically start it on system boot)
After mongodb installation is complete, install open5gs with the following command.
$ sudo add-apt-repository ppa:open5gs/latest
$ sudo apt update
$ sudo apt install open5gs
#Open5gs starts all of its processes after installation, but we only need the UPF on this node. So, stop everything with the following command.
$ sudo systemctl stop open5gs*
If you don't want the processes to start again when the node reboots, you can disable them with the following commands. However, since the * wildcard does not apply to systemctl disable, you must apply it to each process manually:
$ sudo systemctl disable open5gs-amfd
$ sudo systemctl disable open5gs-smfd
...
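Disabling each unit one by one is tedious, so here is a hedged sketch that loops over them instead. The unit list is an assumption based on a default Open5GS install (extend it with any EPC daemons such as open5gs-mmed if they are present); it is printed as a dry run — drop the "echo" in front of sudo to actually apply it.

```shell
# Control-plane units to disable; open5gs-upfd is intentionally excluded
# because this node must keep running the UPF.
units="open5gs-amfd open5gs-smfd open5gs-nrfd open5gs-scpd open5gs-ausfd \
open5gs-udmd open5gs-udrd open5gs-pcfd open5gs-nssfd open5gs-bsfd"
for unit in $units; do
  # Dry run: print the command instead of executing it.
  echo sudo systemctl disable "$unit"
done
```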
Open the /etc/open5gs/upf.yaml file. Change the addr under the upf pfcp and gtpu objects to the private IP of the UPF node.
upf:
pfcp:
- addr: 192.168.80.5
gtpu:
- addr: 192.168.80.5
subnet:
- addr: 10.45.0.1/16
- addr: 2001:db8:cafe::1/48
metrics:
- addr: 127.0.0.7
port: 9090
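The same edit can be done with sed. This is a hedged sketch: the IP is this topology's UPF address and 127.0.0.7 is the assumed stock loopback binding; we patch a scratch copy here — on the node, run the sed with sudo against /etc/open5gs/upf.yaml instead.

```shell
UPF_IP=192.168.80.5
# Scratch copy of the relevant upf.yaml fragment (illustration only).
cat > upf.yaml <<'EOF'
upf:
  pfcp:
    - addr: 127.0.0.7
  gtpu:
    - addr: 127.0.0.7
EOF
# Rebind both the PFCP and GTP-U listeners to the node's private IP.
sed -i "s/- addr: 127.0.0.7/- addr: ${UPF_IP}/g" upf.yaml
cat upf.yaml
```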
Add a route towards LoxiLB on the UPF node:
$ sudo ip route add 123.123.123.0/24 via 192.168.80.9
Restart UPF with the following command.
$ sudo systemctl start open5gs-upfd
Install the UE/RAN simulator
Follow the steps below to install my5G-RANTester, the UE/RAN simulator used in this post:
$ git clone https://github.com/my5G/my5G-RANTester.git
$ cd my5G-RANTester
$ go mod download
$ cd cmd
$ go build app.go
Add a route towards LoxiLB on the UE node:
$ sudo ip route add 123.123.123.0/24 via 192.168.80.9
Deploy Open5gs core using helm
Now, we will deploy the Open5gs core components using helm charts.
For deployment, you need helm installed locally, wherever you can already use kubectl.
$ git clone https://github.com/nik-netlox/open5gs-scp-helm-charts.git
Verify configuration
Before deploying, check the open5gs-scp-helm-charts/values.yaml file.
$ cd open5gs-scp-helm-charts
$ vim values.yaml
The Open5gs core has different components that run on the same ports. For simplicity, we have statically fixed the service IP addresses for all the services. The values of the “svc” tags indicate the service IP addresses of the components. For example:
amf:
mcc: 208
mnc: 93
tac: 7
networkName: Open5GS
ngapInt: eth0
svc: 123.123.123.1
AMF’s N2 interface service will be hosted at 123.123.123.1. The value set here is used by kube-loxilb to create the service. Check the template file for the AMF:
$ vim templates/amf-1-deploy.yaml
apiVersion: v1
kind: Service
metadata:
name: {{ .Release.Name }}-amf
annotations:
loxilb.io/probetype : "ping"
loxilb.io/lbmode : "fullnat"
loxilb.io/staticIP: {{ .Values.amf.svc }}
labels:
epc-mode: amf
spec:
type: LoadBalancer
loadBalancerClass: loxilb.io/loxilb
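The template above is truncated; a complete, self-contained Service of the same shape might look like the following sketch. The release name, selector, and port list are assumptions for illustration (the ports match the service listing shown later in this post) — the chart's template remains authoritative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: core5g-amf
  annotations:
    loxilb.io/probetype: "ping"    # liveness probe for the endpoints
    loxilb.io/lbmode: "fullnat"    # full NAT, matching setLBMode=2
    loxilb.io/staticIP: "123.123.123.1"
  labels:
    epc-mode: amf
spec:
  type: LoadBalancer
  loadBalancerClass: loxilb.io/loxilb
  selector:
    epc-mode: amf
  ports:
    - name: ngap            # N2 interface (gNB <-> AMF)
      protocol: SCTP
      port: 38412
      targetPort: 38412
    - name: sbi             # service-based interface
      protocol: TCP
      port: 80
      targetPort: 80
```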
Modify the upfPublicIP value of the smf object to the service IP for the N4 interface. For this blog post, the N4 interface service will be hosted at 123.123.123.2:8805.
smf:
N4Int: eth0
upfPublicIP: 123.123.123.2
Note: Before deploying open5gs, we must take care of one more thing. PFCP is a UDP-based two-way protocol, which means both the UPF and the SMF can initiate messages. Since the UPF is deployed as a standalone entity, we have to create a load balancer rule for SMF-initiated traffic to be steered towards the UPF.
# Create a rule to identify the SMF initiated traffic.
#loxicmd create firewall --firewallRule="sourceIP:<nodeCIDR>,minDestinationPort:8805,maxDestinationPort:8805" --allow --setmark=10
loxicmd create firewall --firewallRule="sourceIP:192.168.80.100/30,minDestinationPort:8805,maxDestinationPort:8805" --allow --setmark=10
# Create the LB rule
#loxicmd create lb <serviceIP> --udp=8805:8805 --mark=10 --endpoints=<upfIPaddress>:1 --mode=fullnat
loxicmd create lb 123.123.123.2 --udp=8805:8805 --mark=10 --endpoints=192.168.80.5:1 --mode=fullnat
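Since the firewall mark and the LB rule must stay in sync, a small helper can print both commands from the three inputs. This is a hedged sketch (the helper name and dry-run style are ours, not part of loxicmd); pipe its output to sh on the LoxiLB node to apply it.

```shell
# Print the matching firewall + LB rule pair for SMF-initiated PFCP
# traffic: node CIDR, PFCP service IP, UPF IP. Dry run by default.
pfcp_rules() {
  local node_cidr=$1 svc_ip=$2 upf_ip=$3
  echo "loxicmd create firewall --firewallRule=\"sourceIP:${node_cidr},minDestinationPort:8805,maxDestinationPort:8805\" --allow --setmark=10"
  echo "loxicmd create lb ${svc_ip} --udp=8805:8805 --mark=10 --endpoints=${upf_ip}:1 --mode=fullnat"
}
pfcp_rules 192.168.80.100/30 123.123.123.2 192.168.80.5
```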
Deploy Open5gs
After that, you can deploy open5gs with the following command:
$ kubectl create ns open5gs
$ helm -n open5gs upgrade --install core5g ./open5gs-scp-helm-charts/
When the deployment is complete, you can check the open5gs pod with the following command.
$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-74d5f9d7bb-v6td4 1/1 Running 0 18d
kube-system calico-node-5kvdw 1/1 Running 0 18d
kube-system calico-node-wnclp 1/1 Running 0 18d
kube-system coredns-7c5cd84f7b-g6rxs 1/1 Running 0 18d
kube-system coredns-7c5cd84f7b-lghq6 1/1 Running 0 18d
kube-system etcd-master 1/1 Running 0 18d
kube-system kube-apiserver-master 1/1 Running 0 18d
kube-system kube-controller-manager-master 1/1 Running 1 (22h ago) 18d
kube-system kube-loxilb-76f96b44f4-jwbht 1/1 Running 0 12d
kube-system kube-proxy-sh9nt 1/1 Running 0 18d
kube-system kube-proxy-wfrzw 1/1 Running 0 18d
kube-system kube-scheduler-master 1/1 Running 1 (22h ago) 18d
kube-system metrics-server-69fb86cf66-4vnwx 1/1 Running 3 (18d ago) 18d
open5gs core5g-amf-deployment-595f7fffb4-5n6nj 1/1 Running 0 3m8s
open5gs core5g-ausf-deployment-684b4bb9f-gpxbw 1/1 Running 0 3m8s
open5gs core5g-bsf-deployment-8f6dbd599-898jk 1/1 Running 0 3m8s
open5gs core5g-mongo-ue-import-rvtkr 0/1 Completed 0 3m8s
open5gs core5g-mongodb-5c5d64455c-vrjdz 1/1 Running 0 3m8s
open5gs core5g-nrf-deployment-b4d796466-cq597 1/1 Running 0 3m8s
open5gs core5g-nssf-deployment-5df4d988fd-5sbv6 1/1 Running 0 3m8s
open5gs core5g-pcf-deployment-7b87484dcf-sz5lh 1/1 Running 0 3m8s
open5gs core5g-smf-deployment-67f9f4bcd-p8mkh 1/1 Running 0 3m8s
open5gs core5g-udm-deployment-54bfd97d56-h5x4n 1/1 Running 0 3m8s
open5gs core5g-udr-deployment-7656cbbd7b-wwrsl 1/1 Running 0 3m8s
open5gs core5g-webui-78fc76b8f8-4vzhl 1/1 Running 0 3m8s
All the pods must be in the “Running” state except “core5g-mongo-ue-import-rvtkr”. As soon as it becomes “Completed”, the deployment can be considered complete.
Verify the services
$ sudo kubectl get svc -n open5gs
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
core5g-amf LoadBalancer 172.17.46.201 llb-123.123.123.1 38412:32670/SCTP,7777:31954/TCP,80:31678/TCP 6m56s
core5g-ausf LoadBalancer 172.17.27.89 llb-123.123.123.9 80:30211/TCP 6m56s
core5g-bsf LoadBalancer 172.17.24.86 llb-123.123.123.8 80:30606/TCP 6m56s
core5g-mongodb-svc LoadBalancer 172.17.39.185 llb-123.123.123.3 27017:31465/TCP 6m56s
core5g-nrf LoadBalancer 172.17.3.112 llb-123.123.123.4 80:31558/TCP,7777:32724/TCP 6m56s
core5g-nssf LoadBalancer 172.17.58.170 llb-123.123.123.5 80:32126/TCP 6m56s
core5g-pcf LoadBalancer 172.17.47.109 llb-123.123.123.7 80:31916/TCP 6m56s
core5g-smf LoadBalancer 172.17.20.10 llb-123.123.123.2 2123:31581/UDP,8805:31991/UDP,3868:30899/TCP,3868:30899/SCTP,7777:30152/TCP,2152:31071/UDP,9090:32299/TCP,80:30246/TCP 6m56s
core5g-udm LoadBalancer 172.17.7.145 llb-123.123.123.6 80:30852/TCP 6m56s
core5g-udr LoadBalancer 172.17.42.127 llb-123.123.123.10 80:32709/TCP,7777:32064/TCP 6m56s
core5g-webui LoadBalancer 172.17.28.242 llb-123.123.123.11 80:30302/TCP 6m56s
Verify the services at LoxiLB:
$ loxicmd get lb -o wide
| EXT IP | SEC IPS | PORT | PROTO | NAME | MARK | SEL | MODE | ENDPOINT | EPORT | WEIGHT | STATE | COUNTERS |
|-----------------|---------|-------|-------|----------------------------|------|-----|-----------|----------------|-------|--------|--------|-------------|
| 123.123.123.10 | | 80 | tcp | open5gs_core5g-udr | 0 | rr | fullnat | 192.168.80.10 | 32709 | 1 | active | 0:0 |
| | | | | | | | | 192.168.80.101 | 32709 | 1 | active | 0:0 |
| 123.123.123.10 | | 7777 | tcp | open5gs_core5g-udr | 0 | rr | fullnat | 192.168.80.10 | 32064 | 1 | active | 0:0 |
| | | | | | | | | 192.168.80.101 | 32064 | 1 | active | 0:0 |
| 123.123.123.11 | | 80 | tcp | open5gs_core5g-webui | 0 | rr | fullnat | 192.168.80.10 | 30302 | 1 | active | 0:0 |
| | | | | | | | | 192.168.80.101 | 30302 | 1 | active | 0:0 |
| 123.123.123.1 | | 80 | tcp | open5gs_core5g-amf | 0 | rr | fullnat | 192.168.80.10 | 31678 | 1 | active | 0:0 |
| | | | | | | | | 192.168.80.101 | 31678 | 1 | active | 0:0 |
| 123.123.123.1 | | 7777 | tcp | open5gs_core5g-amf | 0 | rr | fullnat | 192.168.80.10 | 31954 | 1 | active | 0:0 |
| | | | | | | | | 192.168.80.101 | 31954 | 1 | active | 0:0 |
| 123.123.123.1 | | 38412 | sctp | open5gs_core5g-amf | 0 | rr | fullnat | 192.168.80.10 | 32670 | 1 | active | 0:0 |
| | | | | | | | | 192.168.80.101 | 32670 | 1 | active | 0:0 |
| 123.123.123.2 | | 80 | tcp | open5gs_core5g-smf | 0 | rr | fullnat | 192.168.80.10 | 30246 | 1 | active | 0:0 |
| | | | | | | | | 192.168.80.101 | 30246 | 1 | active | 0:0 |
| 123.123.123.2 | | 2123 | udp | open5gs_core5g-smf | 0 | rr | fullnat | 192.168.80.10 | 31581 | 1 | active | 0:0 |
| | | | | | | | | 192.168.80.101 | 31581 | 1 | active | 0:0 |
| 123.123.123.2 | | 2152 | udp | open5gs_core5g-smf | 0 | rr | fullnat | 192.168.80.10 | 31071 | 1 | active | 0:0 |
| | | | | | | | | 192.168.80.101 | 31071 | 1 | active | 0:0 |
| 123.123.123.2 | | 3868 | sctp | open5gs_core5g-smf | 0 | rr | fullnat | 192.168.80.10 | 30899 | 1 | active | 0:0 |
| | | | | | | | | 192.168.80.101 | 30899 | 1 | active | 0:0 |
| 123.123.123.2 | | 3868 | tcp | open5gs_core5g-smf | 0 | rr | fullnat | 192.168.80.10 | 30899 | 1 | active | 0:0 |
| | | | | | | | | 192.168.80.101 | 30899 | 1 | active | 0:0 |
| 123.123.123.2 | | 7777 | tcp | open5gs_core5g-smf | 0 | rr | fullnat | 192.168.80.10 | 30152 | 1 | active | 0:0 |
| | | | | | | | | 192.168.80.101 | 30152 | 1 | active | 0:0 |
| 123.123.123.2 | | 8805 | udp | | 10 | rr | fullnat | 192.168.80.5 | 8805 | 1 | - | 279:16780 |
| 123.123.123.2 | | 8805 | udp | open5gs_core5g-smf | 0 | rr | fullnat | 192.168.80.10 | 31991 | 1 | active | 0:0 |
| | | | | | | | | 192.168.80.101 | 31991 | 1 | active | 0:0 |
| 123.123.123.2 | | 9090 | tcp | open5gs_core5g-smf | 0 | rr | fullnat | 192.168.80.10 | 32299 | 1 | active | 0:0 |
| | | | | | | | | 192.168.80.101 | 32299 | 1 | active | 0:0 |
| 123.123.123.3 | | 27017 | tcp | open5gs_core5g-mongodb-svc | 0 | rr | fullnat | 192.168.80.10 | 31465 | 1 | active | 277:49305 |
| | | | | | | | | 192.168.80.101 | 31465 | 1 | active | 250:42415 |
| 123.123.123.4 | | 80 | tcp | open5gs_core5g-nrf | 0 | rr | fullnat | 192.168.80.10 | 31558 | 1 | active | 1197:138839 |
| | | | | | | | | 192.168.80.101 | 31558 | 1 | active | 992:115387 |
| 123.123.123.4 | | 7777 | tcp | open5gs_core5g-nrf | 0 | rr | fullnat | 192.168.80.10 | 32724 | 1 | active | 0:0 |
| | | | | | | | | 192.168.80.101 | 32724 | 1 | active | 0:0 |
| 123.123.123.5 | | 80 | tcp | open5gs_core5g-nssf | 0 | rr | fullnat | 192.168.80.10 | 32126 | 1 | active | 0:0 |
| | | | | | | | | 192.168.80.101 | 32126 | 1 | active | 0:0 |
| 123.123.123.6 | | 80 | tcp | open5gs_core5g-udm | 0 | rr | fullnat | 192.168.80.10 | 30852 | 1 | active | 0:0 |
| | | | | | | | | 192.168.80.101 | 30852 | 1 | active | 0:0 |
| 123.123.123.7 | | 80 | tcp | open5gs_core5g-pcf | 0 | rr | fullnat | 192.168.80.10 | 31916 | 1 | active | 0:0 |
| | | | | | | | | 192.168.80.101 | 31916 | 1 | active | 0:0 |
| 123.123.123.8 | | 80 | tcp | open5gs_core5g-bsf | 0 | rr | fullnat | 192.168.80.10 | 30606 | 1 | active | 0:0 |
| | | | | | | | | 192.168.80.101 | 30606 | 1 | active | 0:0 |
| 123.123.123.9 | | 80 | tcp | open5gs_core5g-ausf | 0 | rr | fullnat | 192.168.80.10 | 30211 | 1 | active | 0:0 |
| | | | | | | | | 192.168.80.101 | 30211 | 1 | active | 0:0 |
Check UPF logs
Now, check the logs on the UPF node to confirm the N4 interface (PFCP) association is established.
$ tail -f /var/log/open5gs/upf.log
Open5GS daemon v2.7.1
06/18 12:46:19.510: [app] INFO: Configuration: '/etc/open5gs/upf.yaml' (../lib/app/ogs-init.c:133)
06/18 12:46:19.510: [app] INFO: File Logging: '/var/log/open5gs/upf.log' (../lib/app/ogs-init.c:136)
06/18 12:46:19.577: [metrics] INFO: metrics_server() [http://127.0.0.7]:9090 (../lib/metrics/prometheus/context.c:299)
06/18 12:46:19.577: [pfcp] INFO: pfcp_server() [192.168.80.5]:8805 (../lib/pfcp/path.c:30)
06/18 12:46:19.577: [gtp] INFO: gtp_server() [192.168.80.5]:2152 (../lib/gtp/path.c:30)
06/18 12:46:19.579: [app] INFO: UPF initialize...done (../src/upf/app.c:31)
06/18 12:46:22.866: [pfcp] INFO: ogs_pfcp_connect() [123.123.123.2]:23197 (../lib/pfcp/path.c:61)
06/18 12:47:33.639: [pfcp] INFO: ogs_pfcp_connect() [123.123.123.2]:1066 (../lib/pfcp/path.c:61)
06/18 12:47:33.640: [upf] INFO: PFCP associated [123.123.123.2]:1066 (../src/upf/pfcp-sm.c:184)
Verify active connections
Check the status for all the current active connections at LoxiLB:
$ loxicmd get ct -o wide
| SERVICE NAME | DESTIP | SRCIP | DPORT | SPORT | PROTO | STATE | ACT | PACKETS | BYTES |
|----------------------------|---------------|----------------|-------|-------|-------|---------|---------------------------------------------|---------|-------|
| | 123.123.123.2 | 192.168.80.101 | 8805 | 1066 | udp | udp-est | fdnat-123.123.123.2,192.168.80.5:8805:w0 | 4 | 224 |
| | 123.123.123.2 | 192.168.80.5 | 1066 | 8805 | udp | udp-est | fsnat-123.123.123.2,192.168.80.101:8805:w0 | 4 | 224 |
| open5gs_core5g-mongodb-svc | 123.123.123.3 | 192.168.80.101 | 4486 | 31465 | tcp | est | hsnat-0.0.0.0:27017:w0 | 18 | 1459 |
| open5gs_core5g-mongodb-svc | 123.123.123.3 | 192.168.80.101 | 13071 | 31465 | tcp | est | hsnat-0.0.0.0:27017:w0 | 18 | 1535 |
| open5gs_core5g-mongodb-svc | 123.123.123.3 | 192.168.80.101 | 19556 | 31465 | tcp | est | hsnat-0.0.0.0:27017:w0 | 75 | 21311 |
| open5gs_core5g-mongodb-svc | 123.123.123.3 | 192.168.80.101 | 20114 | 31465 | tcp | est | hsnat-0.0.0.0:27017:w0 | 15 | 7015 |
| open5gs_core5g-mongodb-svc | 123.123.123.3 | 192.168.80.101 | 27017 | 4486 | tcp | est | fdnat-123.123.123.3,0.0.0.0:31465:w0 | 19 | 1666 |
| open5gs_core5g-mongodb-svc | 123.123.123.3 | 192.168.80.101 | 27017 | 13071 | tcp | est | fdnat-123.123.123.3,0.0.0.0:31465:w0 | 19 | 1771 |
| open5gs_core5g-mongodb-svc | 123.123.123.3 | 192.168.80.101 | 27017 | 19556 | tcp | est | fdnat-123.123.123.3,0.0.0.0:31465:w0 | 148 | 13820 |
| open5gs_core5g-mongodb-svc | 123.123.123.3 | 192.168.80.101 | 27017 | 20114 | tcp | est | fdnat-123.123.123.3,0.0.0.0:31465:w0 | 16 | 1436 |
| open5gs_core5g-mongodb-svc | 123.123.123.3 | 192.168.80.101 | 27017 | 45498 | tcp | est | fdnat-123.123.123.3,192.168.80.10:31465:w0 | 148 | 13824 |
| open5gs_core5g-mongodb-svc | 123.123.123.3 | 192.168.80.101 | 27017 | 50148 | tcp | est | fdnat-123.123.123.3,192.168.80.10:31465:w0 | 17 | 1502 |
| open5gs_core5g-mongodb-svc | 123.123.123.3 | 192.168.80.101 | 27017 | 62163 | tcp | est | fdnat-123.123.123.3,192.168.80.10:31465:w0 | 20 | 1926 |
| open5gs_core5g-mongodb-svc | 123.123.123.3 | 192.168.80.101 | 27017 | 63733 | tcp | est | fdnat-123.123.123.3,192.168.80.10:31465:w0 | 20 | 1928 |
| open5gs_core5g-mongodb-svc | 123.123.123.3 | 192.168.80.10 | 45498 | 31465 | tcp | est | fsnat-123.123.123.3,192.168.80.101:27017:w0 | 75 | 21311 |
| open5gs_core5g-mongodb-svc | 123.123.123.3 | 192.168.80.10 | 50148 | 31465 | tcp | est | fsnat-123.123.123.3,192.168.80.101:27017:w0 | 15 | 6935 |
| open5gs_core5g-mongodb-svc | 123.123.123.3 | 192.168.80.10 | 62163 | 31465 | tcp | est | fsnat-123.123.123.3,192.168.80.101:27017:w0 | 19 | 1639 |
| open5gs_core5g-mongodb-svc | 123.123.123.3 | 192.168.80.10 | 63733 | 31465 | tcp | est | fsnat-123.123.123.3,192.168.80.101:27017:w0 | 20 | 1705 |
| open5gs_core5g-nrf | 123.123.123.4 | 192.168.80.101 | 80 | 2368 | tcp | est | fdnat-123.123.123.4,192.168.80.10:31558:w0 | 226 | 28580 |
| open5gs_core5g-nrf | 123.123.123.4 | 192.168.80.101 | 80 | 6687 | tcp | est | fdnat-123.123.123.4,192.168.80.10:31558:w0 | 233 | 30410 |
| open5gs_core5g-nrf | 123.123.123.4 | 192.168.80.101 | 80 | 16244 | tcp | est | fdnat-123.123.123.4,0.0.0.0:31558:w0 | 229 | 28958 |
| open5gs_core5g-nrf | 123.123.123.4 | 192.168.80.101 | 80 | 24477 | tcp | est | fdnat-123.123.123.4,0.0.0.0:31558:w0 | 232 | 30594 |
| open5gs_core5g-nrf | 123.123.123.4 | 192.168.80.101 | 80 | 27944 | tcp | est | fdnat-123.123.123.4,192.168.80.10:31558:w0 | 219 | 28251 |
| open5gs_core5g-nrf | 123.123.123.4 | 192.168.80.101 | 80 | 32565 | tcp | est | fdnat-123.123.123.4,0.0.0.0:31558:w0 | 229 | 29153 |
| open5gs_core5g-nrf | 123.123.123.4 | 192.168.80.101 | 80 | 52707 | tcp | est | fdnat-123.123.123.4,192.168.80.10:31558:w0 | 226 | 28663 |
| open5gs_core5g-nrf | 123.123.123.4 | 192.168.80.101 | 80 | 57099 | tcp | est | fdnat-123.123.123.4,0.0.0.0:31558:w0 | 235 | 30941 |
| open5gs_core5g-nrf | 123.123.123.4 | 192.168.80.101 | 16244 | 31558 | tcp | est | hsnat-0.0.0.0:80:w0 | 157 | 13259 |
| open5gs_core5g-nrf | 123.123.123.4 | 192.168.80.101 | 24477 | 31558 | tcp | est | hsnat-0.0.0.0:80:w0 | 160 | 14281 |
| open5gs_core5g-nrf | 123.123.123.4 | 192.168.80.101 | 32565 | 31558 | tcp | est | hsnat-0.0.0.0:80:w0 | 158 | 13713 |
| open5gs_core5g-nrf | 123.123.123.4 | 192.168.80.101 | 57099 | 31558 | tcp | est | hsnat-0.0.0.0:80:w0 | 162 | 15996 |
| open5gs_core5g-nrf | 123.123.123.4 | 192.168.80.10 | 80 | 38140 | tcp | est | fdnat-123.123.123.4,0.0.0.0:31558:w0 | 235 | 31048 |
| open5gs_core5g-nrf | 123.123.123.4 | 192.168.80.10 | 2368 | 31558 | tcp | est | fsnat-123.123.123.4,192.168.80.101:80:w0 | 154 | 13024 |
| open5gs_core5g-nrf | 123.123.123.4 | 192.168.80.10 | 6687 | 31558 | tcp | est | fsnat-123.123.123.4,192.168.80.101:80:w0 | 157 | 13662 |
| open5gs_core5g-nrf | 123.123.123.4 | 192.168.80.10 | 27944 | 31558 | tcp | est | fsnat-123.123.123.4,192.168.80.101:80:w0 | 151 | 13925 |
| open5gs_core5g-nrf | 123.123.123.4 | 192.168.80.10 | 38140 | 31558 | tcp | est | hsnat-0.0.0.0:80:w0 | 162 | 16041 |
| open5gs_core5g-nrf | 123.123.123.4 | 192.168.80.10 | 52707 | 31558 | tcp | est | fsnat-123.123.123.4,192.168.80.101:80:w0 | 155 | 13091 |
Configure the UE/RAN simulator
You have to edit the UE’s configuration file to connect the UE to the core. The configuration file path is ~/my5G-RANTester/config/config.yml.
gnodeb:
controlif:
ip: "172.0.14.27"
port: 9487
dataif:
ip: "172.0.14.27"
port: 2152
plmnlist:
mcc: "208"
mnc: "93"
tac: "000007"
gnbid: "000001"
slicesupportlist:
sst: "01"
sd: "000001"
ue:
msin: "0000000031"
key: "0C0A34601D4F07677303652C0462535B"
opc: "63bfa50ee6523365ff14c1f45f88737d"
amf: "8000"
sqn: "0000000"
dnn: "internet"
hplmn:
mcc: "208"
mnc: "93"
snssai:
sst: 01
sd: "000001"
amfif:
ip: "43.201.17.32"
port: 38412
logs:
level: 4
First, set the private IP of the UE node in the ip fields of the gnodeb controlif and dataif objects.
gnodeb:
controlif:
ip: "172.0.14.27"
port: 9487
dataif:
ip: "172.0.14.27"
port: 2152
Next, modify the values of mcc, mnc, and tac in the plmnlist object. These values must match the AMF settings of the Open5gs core deployed with helm. You can check them in the ./open5gs-scp-helm-charts/values.yaml file. Here are the AMF settings in the values.yaml file used in this post.
amf:
mcc: 208
mnc: 93
tac: 7
networkName: Open5GS
ngapInt: eth0
nssf:
sst: "1"
sd: "1"
The values of mcc, mnc, and tac in the UE settings must match the values above.
plmnlist:
mcc: "208"
mnc: "93"
tac: "000007"
gnbid: "000001"
The sst, sd values of the slicesupportlist object in the UE settings must match the values of the nssf object in ./open5gs-scp-helm-charts/values.yaml.
slicesupportlist:
sst: "01"
sd: "000001"
The msin, key, and opc values of the ue object in the UE settings must match the simulator ue1 object in ./open5gs-scp-helm-charts/values.yaml. Here is the relevant content of that file.
simulator:
ue1:
imsi: "208930000000031"
imei: "356938035643803"
imeiSv: "4370816125816151"
op: "8e27b6af0e692e750f32667a3b14605d"
secKey: "8baf473f2f8fd09487cccbd7097c6862"
sst: "1"
sd: "1"
If you modify the UE settings according to the contents of the values.yaml file, the mapping is:
• msin: the last 10 digits of the imsi value, excluding mcc (208) and mnc (93)
• key: secKey
• opc: op
• mcc, mnc, sst, sd: the values described above
Other values are left as default.
ue:
msin: "0000000031"
key: "8baf473f2f8fd09487cccbd7097c6862"
opc: "8e27b6af0e692e750f32667a3b14605d"
amf: "8000"
sqn: "0000000"
dnn: "internet"
hplmn:
mcc: "208"
mnc: "93"
snssai:
sst: 01
sd: "000001"
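The msin rule above can be sketched in one line of shell: strip the MCC+MNC prefix from the IMSI, and what remains is the msin.

```shell
# IMSI = MCC (208) + MNC (93) + MSIN; values from values.yaml above.
IMSI=208930000000031
MCC=208
MNC=93
# Remove the "20893" prefix, leaving the 10-digit msin.
MSIN=${IMSI#"${MCC}${MNC}"}
echo "$MSIN"    # prints 0000000031
```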
Finally, you have to modify the amfif ip value. Since the gNB connects to the AMF through the LoxiLB load balancer, this must be set to the service IP of the N2 interface, which in this topology is 123.123.123.1.
amfif:
ip: "123.123.123.1"
port: 38412
After editing the configuration file, the UE can be connected to the AMF with the following command.
$ cd ~/my5G-RANTester/cmd
$ sudo ./app ue
INFO[0000] my5G-RANTester version 1.0.1
INFO[0000] ---------------------------------------
INFO[0000] [TESTER] Starting test function: Testing an ue attached with configuration
INFO[0000] [TESTER][UE] Number of UEs: 1
INFO[0000] [TESTER][GNB] Control interface IP/Port: 192.168.80.4/9487
INFO[0000] [TESTER][GNB] Data interface IP/Port: 192.168.80.4/2152
INFO[0000] [TESTER][AMF] AMF IP/Port: 123.123.123.1/38412
INFO[0000] ---------------------------------------
INFO[0000] [GNB] SCTP/NGAP service is running
INFO[0000] [GNB] UNIX/NAS service is running
INFO[0000] [GNB][SCTP] Receive message in 0 stream
INFO[0000] [GNB][NGAP] Receive Ng Setup Response
INFO[0000] [GNB][AMF] AMF Name: open5gs-amf
INFO[0000] [GNB][AMF] State of AMF: Active
INFO[0000] [GNB][AMF] Capacity of AMF: 255
INFO[0000] [GNB][AMF] PLMNs Identities Supported by AMF -- mcc: 208 mnc:93
INFO[0000] [GNB][AMF] List of AMF slices Supported by AMF -- sst:01 sd:000001
INFO[0001] [UE] UNIX/NAS service is running
INFO[0001] [GNB][SCTP] Receive message in 1 stream
INFO[0001] [GNB][NGAP] Receive Downlink NAS Transport
INFO[0001] [UE][NAS] Message without security header
INFO[0001] [UE][NAS] Receive Authentication Request
INFO[0001] [UE][NAS][MAC] Authenticity of the authentication request message: OK
INFO[0001] [UE][NAS][SQN] SQN of the authentication request message: VALID
INFO[0001] [UE][NAS] Send authentication response
INFO[0001] [GNB][SCTP] Receive message in 1 stream
INFO[0001] [GNB][NGAP] Receive Downlink NAS Transport
INFO[0001] [UE][NAS] Message with security header
INFO[0001] [UE][NAS] Message with integrity and with NEW 5G NAS SECURITY CONTEXT
INFO[0001] [UE][NAS] successful NAS MAC verification
INFO[0001] [UE][NAS] Receive Security Mode Command
INFO[0001] [UE][NAS] Type of ciphering algorithm is 5G-EA0
INFO[0001] [UE][NAS] Type of integrity protection algorithm is 128-5G-IA2
INFO[0001] [GNB][SCTP] Receive message in 1 stream
INFO[0001] [GNB][NGAP] Receive Initial Context Setup Request
INFO[0001] [GNB][UE] UE Context was created with successful
INFO[0001] [GNB][UE] UE RAN ID 1
INFO[0001] [GNB][UE] UE AMF ID 1
INFO[0001] [GNB][UE] UE Mobility Restrict --Plmn-- Mcc: not informed Mnc: not informed
INFO[0001] [GNB][UE] UE Masked Imeisv: 1110000000ffff00
INFO[0001] [GNB][UE] Allowed Nssai-- Sst: 01 Sd: 000001
INFO[0001] [GNB][NAS][UE] Send Registration Accept.
INFO[0001] [GNB][NGAP][AMF] Send Initial Context Setup Response.
INFO[0001] [UE][NAS] Message with security header
INFO[0001] [UE][NAS] Message with integrity and ciphered
INFO[0001] [UE][NAS] successful NAS MAC verification
INFO[0001] [UE][NAS] successful NAS CIPHERING
INFO[0001] [UE][NAS] Receive Registration Accept
INFO[0001] [UE][NAS] UE 5G GUTI: [215 0 14 119]
INFO[0001] [GNB][SCTP] Receive message in 1 stream
INFO[0001] [GNB][NGAP] Receive Downlink NAS Transport
INFO[0001] [UE][NAS] Message with security header
INFO[0001] [UE][NAS] Message with integrity and ciphered
INFO[0001] [UE][NAS] successful NAS MAC verification
INFO[0001] [UE][NAS] successful NAS CIPHERING
INFO[0001] [UE][NAS] Receive Configuration Update Command
INFO[0001] [GNB][SCTP] Receive message in 1 stream
INFO[0001] [GNB][NGAP] Receive PDU Session Resource Setup Request
INFO[0001] [GNB][NGAP][UE] PDU Session was created with successful.
INFO[0001] [GNB][NGAP][UE] PDU Session Id: 1
INFO[0001] [GNB][NGAP][UE] NSSAI Selected --- sst: 01 sd: 000001
INFO[0001] [GNB][NGAP][UE] PDU Session Type: ipv4
INFO[0001] [GNB][NGAP][UE] QOS Flow Identifier: 1
INFO[0001] [GNB][NGAP][UE] Uplink Teid: 37088
INFO[0001] [GNB][NGAP][UE] Downlink Teid: 1
INFO[0001] [GNB][NGAP][UE] Non-Dynamic-5QI: 9
INFO[0001] [GNB][NGAP][UE] Priority Level ARP: 8
INFO[0001] [GNB][NGAP][UE] UPF Address: 192.168.80.5 :2152
INFO[0001] [UE][NAS] Message with security header
INFO[0001] [UE][NAS] Message with integrity and ciphered
INFO[0001] [UE][NAS] successful NAS MAC verification
INFO[0001] [UE][NAS] successful NAS CIPHERING
INFO[0001] [UE][NAS] Receive DL NAS Transport
INFO[0001] [UE][NAS] Receiving PDU Session Establishment Accept
INFO[0001] [UE][DATA] UE is ready for using data plane
Challenges and Future Work
In this blog, we exposed the N2 interface, N4 interface, MongoDB, and NRF services externally with LoxiLB. Many 5G core components advertise their service IP, but we noticed that a few components were not advertising the address correctly. We will continue this work to cover all the interfaces with the SCP and to collaborate with the Open5gs community.
About Authors
This blog was prepared by Nikhil Malik and Jung BackGyun. We are contributors to the LoxiLB project.