
Free5gc (Open-Source 5G Core) on Kubernetes

Introduction

In this article we will go through the steps to deploy the open-source 5G core, free5GC, on Kubernetes. free5GC is a complete open-source 5G core network implementation that complies with 3GPP Release 15 and beyond.

This article assumes that you already have a Kubernetes cluster set up. If not, you can follow our other article on setting up a k8s cluster.

Preparing hosts

# Install Go (master and worker nodes)
$ wget -q https://storage.googleapis.com/golang/getgo/installer_linux
$ chmod +x installer_linux
$ ./installer_linux
$ source ~/.bash_profile
$ rm -f installer_linux
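# Verify the installation on each node
$ go version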

Install Multus Daemonset

# Create multus daemonset
$ kubectl apply -f 01_multus_daemonset.yaml
# Output

customresourcedefinition.apiextensions.k8s.io/network-attachment-definitions.k8s.cni.cncf.io created
clusterrole.rbac.authorization.k8s.io/multus created
clusterrolebinding.rbac.authorization.k8s.io/multus created
serviceaccount/multus created
configmap/multus-cni-config created
daemonset.apps/kube-multus-ds-amd64 created
daemonset.apps/kube-multus-ds-ppc64le created
daemonset.apps/kube-multus-ds-arm64v8 created
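
To confirm that a Multus pod is running on every node (the pods carry the app=multus label defined in the manifest below):

$ kubectl get pods -n kube-system -l app=multus -o wide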
# 01_multus_daemonset.yaml

---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: network-attachment-definitions.k8s.cni.cncf.io
spec:
  group: k8s.cni.cncf.io
  scope: Namespaced
  names:
    plural: network-attachment-definitions
    singular: network-attachment-definition
    kind: NetworkAttachmentDefinition
    shortNames:
    - net-attach-def
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          description: 'NetworkAttachmentDefinition is a CRD schema specified by the Network Plumbing
            Working Group to express the intent for attaching pods to one or more logical or physical
            networks. More information available at: https://github.com/k8snetworkplumbingwg/multi-net-spec'
          type: object
          properties:
            apiVersion:
              description: 'APIVersion defines the versioned schema of this representation
                of an object. Servers should convert recognized schemas to the
                latest internal value, and may reject unrecognized values. More info:
                https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
              type: string
            kind:
              description: 'Kind is a string value representing the REST resource this
                object represents. Servers may infer this from the endpoint the client
                submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
              type: string
            metadata:
              type: object
            spec:
              description: 'NetworkAttachmentDefinition spec defines the desired state of a network attachment'
              type: object
              properties:
                config:
                  description: 'NetworkAttachmentDefinition config is a JSON-formatted CNI configuration'
                  type: string
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: multus
rules:
  - apiGroups: ["k8s.cni.cncf.io"]
    resources:
      - '*'
    verbs:
      - '*'
  - apiGroups:
      - ""
    resources:
      - pods
      - pods/status
    verbs:
      - get
      - update
  - apiGroups:
      - ""
      - events.k8s.io
    resources:
      - events
    verbs:
      - create
      - patch
      - update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: multus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: multus
subjects:
- kind: ServiceAccount
  name: multus
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: multus
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: multus-cni-config
  namespace: kube-system
  labels:
    tier: node
    app: multus
data:
  # NOTE: If you'd prefer to manually apply a configuration file, you may create one here.
  # In the case you'd like to customize the Multus installation, you should change the arguments to the Multus pod
  # change the "args" line below from
  # - "--multus-conf-file=auto"
  # to:
  # "--multus-conf-file=/tmp/multus-conf/70-multus.conf"
  # Additionally -- you should ensure that the name "70-multus.conf" is the alphabetically first name in the
  # /etc/cni/net.d/ directory on each node, otherwise, it will not be used by the Kubelet.
  cni-conf.json: |
    {
      "name": "multus-cni-network",
      "type": "multus",
      "capabilities": {
        "portMappings": true
      },
      "delegates": [
        {
          "cniVersion": "0.3.1",
          "name": "default-cni-network",
          "plugins": [
            {
              "ipam":{
                  "type":"calico-ipam"
               },
               "kubernetes":{
                  "kubeconfig":"/etc/cni/net.d/calico-kubeconfig"
               },
               "log_level":"info",
               "policy":{
                  "type":"k8s"
               },
               "type":"calico"                
              },
            {
               "capabilities":{
                  "portMappings":true
               },
               "snat":true,
               "type":"portmap"
            }
          ]
        }
      ],
      "kubeconfig": "/etc/cni/net.d/multus.d/multus.kubeconfig"
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-multus-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: multus
    name: multus
spec:
  selector:
    matchLabels:
      name: multus
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        tier: node
        app: multus
        name: multus
    spec:
      hostNetwork: true
      nodeSelector:
        kubernetes.io/arch: amd64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: multus
      containers:
      - name: kube-multus
        image: docker.io/nfvpe/multus:stable
        command: ["/entrypoint.sh"]
        args:
        - "--multus-conf-file=auto"
        - "--cni-version=0.3.1"
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        volumeMounts:
        - name: cni
          mountPath: /host/etc/cni/net.d
        - name: cnibin
          mountPath: /host/opt/cni/bin
        - name: multus-cfg
          mountPath: /tmp/multus-conf
      volumes:
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: cnibin
          hostPath:
            path: /opt/cni/bin
        - name: multus-cfg
          configMap:
            name: multus-cni-config
            items:
            - key: cni-conf.json
              path: 70-multus.conf
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-multus-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: multus
    name: multus
spec:
  selector:
    matchLabels:
      name: multus
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        tier: node
        app: multus
        name: multus
    spec:
      hostNetwork: true
      nodeSelector:
        kubernetes.io/arch: ppc64le
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: multus
      containers:
      - name: kube-multus
        # ppc64le support requires Multus 3.3 or later
        image: docker.io/nfvpe/multus:stable-ppc64le
        command: ["/entrypoint.sh"]
        args:
        - "--multus-conf-file=auto"
        - "--cni-version=0.3.1"
        resources:
          requests:
            cpu: "100m"
            memory: "90Mi"
          limits:
            cpu: "100m"
            memory: "90Mi"
        securityContext:
          privileged: true
        volumeMounts:
        - name: cni
          mountPath: /host/etc/cni/net.d
        - name: cnibin
          mountPath: /host/opt/cni/bin
        - name: multus-cfg
          mountPath: /tmp/multus-conf
      volumes:
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: cnibin
          hostPath:
            path: /opt/cni/bin
        - name: multus-cfg
          configMap:
            name: multus-cni-config
            items:
            - key: cni-conf.json
              path: 70-multus.conf
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-multus-ds-arm64v8
  namespace: kube-system
  labels:
    tier: node
    app: multus
    name: multus
spec:
  selector:
    matchLabels:
      name: multus
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        tier: node
        app: multus
        name: multus
    spec:
      hostNetwork: true
      nodeSelector:
        kubernetes.io/arch: arm64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: multus
      containers:
      - name: kube-multus
        image: docker.io/nfvpe/multus:stable-arm64v8
        command: ["/entrypoint.sh"]
        args:
        - "--multus-conf-file=auto"
        - "--cni-version=0.3.1"
        resources:
          requests:
            cpu: "100m"
            memory: "90Mi"
          limits:
            cpu: "100m"
            memory: "90Mi"
        securityContext:
          privileged: true
        volumeMounts:
        - name: cni
          mountPath: /host/etc/cni/net.d
        - name: cnibin
          mountPath: /host/opt/cni/bin
        - name: multus-cfg
          mountPath: /tmp/multus-conf
      volumes:
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: cnibin
          hostPath:
            path: /opt/cni/bin
        - name: multus-cfg
          configMap:
            name: multus-cni-config
            items:
            - key: cni-conf.json
              path: 70-multus.conf
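
With the DaemonSet running, Multus attaches extra interfaces to pods through NetworkAttachmentDefinition objects, which the free5gc network functions rely on for their secondary networks (e.g. N3/N4 on the UPF). As a sketch only (the name "n3network", the host interface eth1, and the static IPAM are hypothetical placeholders, not taken from the free5gc manifests), a macvlan attachment could look like this:

# Example NetworkAttachmentDefinition (placeholder values)
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: n3network
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth1",
      "ipam": { "type": "static" }
    }'

A pod requests the attachment through the annotation k8s.v1.cni.cncf.io/networks: n3network, created in the same namespace as the pod.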

Create Namespace

# Create Namespace - free5gc (all manifests below deploy into this namespace)
$ kubectl create ns free5gc

# Output
namespace/free5gc created

Install NFS Server and Client

# Install NFS Server (NFS Server = Master node)
$ sudo apt-get install -qqy nfs-kernel-server
$ sudo mkdir -p /nfsshare/mongodb
$ echo "/nfsshare *(rw,sync,no_root_squash)" | sudo tee /etc/exports
$ sudo exportfs -r
$ sudo showmount -e
# Install NFS Client (Client = worker nodes)
$ sudo apt update
$ sudo apt install nfs-common
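
To confirm that a worker can see the export on the NFS server (192.168.1.108 in this setup), run from the worker:

$ showmount -e 192.168.1.108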

Install Helm

# Install Helm
$ curl -L https://get.helm.sh/helm-v3.6.3-linux-amd64.tar.gz > helm-v3.6.3-linux-amd64.tar.gz 
$ tar -zxvf helm-v3.6.3-linux-amd64.tar.gz
$ chmod +x linux-amd64/helm 
$ sudo mv linux-amd64/helm /usr/local/bin/helm
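# Verify the Helm client
$ helm version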

# pip Install
$ sudo apt update
$ sudo apt install python3-pip

$ sudo pip install yq
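
yq (installed above; the pip package wraps jq, so the jq binary must also be present) is handy for pulling fields out of the manifests used in this article, for example listing the kinds in the Multus manifest:

$ yq -r '.kind' 01_multus_daemonset.yaml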

Database installation (mongodb)

# Create PV (pv.yaml)
# NFS Server = 192.168.1.108
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-nfs
spec:
  storageClassName: mongo
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs: 
    path: /nfsshare
    server: 192.168.1.108
---


# Apply pv.yaml

$ kubectl apply -f pv.yaml
persistentvolume/mongo-nfs created

$ kubectl describe pv mongo-nfs

Name:            mongo-nfs
Labels:          <none>
Annotations:     pv.kubernetes.io/bound-by-controller: yes
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    mongo
Status:          Bound
Claim:           free5gc/mongodb-mongo-0
Reclaim Policy:  Recycle
Access Modes:    RWX
VolumeMode:      Filesystem
Capacity:        5Gi
Node Affinity:   <none>
Message:
Source:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    192.168.1.108
    Path:      /nfsshare
    ReadOnly:  false
Events:        <none>

# Create mongodb service (service.yaml)

---
apiVersion: v1
kind: Service
metadata:
  labels:
    environment: testing
    service: mongo
  name: mongo-external
  namespace: free5gc
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: mongo
    nodePort: 31717
    port: 27017
    protocol: TCP
    targetPort: 27017
  selector:
    service: mongo
  sessionAffinity: None
  type: NodePort
# Apply service.yaml

$ kubectl apply -f service.yaml
service/mongo-external created

$ kubectl get service mongo-external -n free5gc
NAME             TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE
mongo-external   NodePort   192.168.6.114   <none>        27017:31717/TCP   29s
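
The NodePort 31717 exposes mongodb on every node. Once the StatefulSet below is running, you can check reachability from any machine that can reach a node (the node IP here is a placeholder):

$ nc -zv <node-ip> 31717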

# Create mongodb statefulset (statefulset.yaml)

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
  namespace: free5gc
spec:
  serviceName: "mongo"
  replicas: 1
  selector:
    matchLabels:
      role: mongo  
  template:
    metadata:
      labels:
        service: mongo
        role: mongo
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: subok-mongodb
          resources:
            requests:
              cpu: 100m
          image: mongo:4.1-xenial
          command:
            - mongod
            - "--bind_ip"
            - 0.0.0.0
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongodb
              mountPath: /data/db
  volumeClaimTemplates:
  - metadata:
      name: mongodb
    spec:
      accessModes: [ "ReadWriteMany" ]
      storageClassName: mongo
      resources:
        requests:
          storage: 5Gi
# Apply statefulset.yaml

$ kubectl apply -f statefulset.yaml
statefulset.apps/mongo created
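
Note that the StatefulSet sets serviceName: "mongo", and the WebUI deployment later connects to mongo-0.mongo.free5gc.svc.subok-tech.local. That per-pod DNS name only resolves if a headless Service named mongo exists. If your cluster does not already have one, a minimal sketch (assuming the labels used by the StatefulSet above) would be:

# Minimal headless Service sketch so mongo-0.mongo.free5gc.svc resolves
apiVersion: v1
kind: Service
metadata:
  name: mongo
  namespace: free5gc
spec:
  clusterIP: None   # headless: creates a DNS record per StatefulSet pod
  ports:
  - port: 27017
    targetPort: 27017
  selector:
    service: mongo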


$ kubectl describe pod mongo-0 -n free5gc
Name:         mongo-0
Namespace:    free5gc
Priority:     0
Node:         k8s5gcn3/192.168.1.124
Start Time:   Thu, 19 Aug 2021 19:44:18 +0000
Labels:       controller-revision-hash=mongo-545db8495b
              role=mongo
              service=mongo
              statefulset.kubernetes.io/pod-name=mongo-0
Annotations:  cni.projectcalico.org/containerID: ad49ac7b990e02d9017a698ab7a20e8e3ffae3cd172dc1120edbf6a3cb40b8f6
              cni.projectcalico.org/podIP: 192.168.7.73/32
              cni.projectcalico.org/podIPs: 192.168.7.73/32
              k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "",
                    "ips": [
                        "192.168.7.73"
                    ],
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "",
                    "ips": [
                        "192.168.7.73"
                    ],
                    "default": true,
                    "dns": {}
                }]
Status:       Running
IP:           192.168.7.73
IPs:
  IP:           192.168.7.73
Controlled By:  StatefulSet/mongo
Containers:
  subok-mongodb:
    Container ID:  containerd://8558a8ceec2259076e9e40e1d3a6c0374c8dbbac94ca1491b5286835ae8b61f3
    Image:         mongo:4.1-xenial
    Image ID:      docker.io/library/mongo@sha256:4a63d973ed74166f78284e28236bd45149e6b4d69a104eed6ce26dbac7687d75
    Port:          27017/TCP

... (output truncated)
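
Before moving on, confirm that the pod is Running and that its PVC has bound to the NFS-backed PV created earlier:

$ kubectl get pods -n free5gc
$ kubectl get pvc -n free5gc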

Free5gc - WebUI deployment

# free5gc-webui.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webui-deployment
  namespace: free5gc
  labels:
    app: webui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webui
  template:
    metadata:
      labels:
        app: webui
    spec:
      containers:
      - name: subok-webui
        image: sufuf3/nextepc-webui:latest
        command:
          - npm
          - run
          - dev
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000
        env:
          - name: DB_URI
            value: mongodb://mongo-0.mongo.free5gc.svc.subok-tech.local/nextepc
$ kubectl apply -f free5gc-webui.yaml
deployment.apps/webui-deployment created
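
You can watch the rollout finish with:

$ kubectl rollout status deployment/webui-deployment -n free5gc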
# free5gc-webui-service.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    environment: testing
    app: webui
  name: webui-np
  namespace: free5gc
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: webui
    nodePort: 31727
    port: 3000
    protocol: TCP
    targetPort: 3000
  selector:
    app: webui
  sessionAffinity: None
  type: NodePort
$ kubectl apply -f free5gc-webui-service.yaml
service/webui-np created
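
The WebUI should now be reachable at any node's IP on the NodePort, e.g. http://<node-ip>:31727 (the node IP is a placeholder; log in with the WebUI's default credentials). To double-check the service:

$ kubectl get svc webui-np -n free5gc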
