How to set up a connection between Mininet and a Kubernetes service running the Ryu controller

3/4/2020

I am trying to get successful communication between my default Mininet topology and a Ryu controller that runs in Docker containers managed by Kubernetes. I have a kubemaster and two kubenodes, and used Flannel to set up the pod network; the topology diagram is attached.

[Topology diagram]

Kubemaster VM [Ubuntu 16.04] has two interfaces, both with promiscuous mode set to Allow All:
- Bridge adapter – 10.0.0.141/24
- Host-only adapter – 192.168.56.10/24

Kubenode1 VM [Ubuntu 16.04] has two interfaces, both with promiscuous mode set to Allow All:
- Bridge adapter – 10.0.0.178/24
- Host-only adapter – 192.168.56.20/24

Kubenode2 VM [Ubuntu 16.04] has two interfaces, both with promiscuous mode set to Allow All:
- Bridge adapter – 10.0.0.10/24
- Host-only adapter – 192.168.56.30/24

Below are my ReplicaSet and Service YAML files.

Replica.yaml

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-replica
  namespace: pod-replica
  labels:
    app: sdn-controller
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sdn-controller
  template:
    metadata:
      name: pod-replica
      labels:
        app: sdn-controller
    spec:
      containers:
      - name: sdn-controller
        image: osrg/ryu
        # keep the container alive; ryu-manager is later started manually via exec
        command: ['/bin/bash', '-c']
        args:
          - while true; do
              echo Hi, I am RYU Replica!;
              sleep 2;
            done
        ports:
        - containerPort: 6653
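
With this spec the container's main process is just the echo loop, so nothing inside the pods listens on 6653 until ryu-manager is started by hand. For comparison, a sketch of a container spec that runs the controller as the pod's main process (ryu.app.simple_switch is my choice here because it speaks OpenFlow 1.0, matching the protocols=OpenFlow10 setting on the bridge further down; treat the block as an illustration, not my actual manifest):

containers:
- name: sdn-controller
  image: osrg/ryu
  # run ryu-manager as PID 1 so the pod really listens on 6653
  command: ['ryu-manager', '--verbose',
            '--ofp-tcp-listen-port', '6653',
            'ryu.app.simple_switch']
  ports:
  - containerPort: 6653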

Service.yaml

kind: Service
apiVersion: v1
metadata:
  name: service-nodeport
  namespace: pod-replica
spec:
  type: NodePort
  selector:
    app: sdn-controller
  ports:
  - nodePort: 32102
    protocol: TCP
    port: 6653
    targetPort: 6653
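
As I understand it, a NodePort service exposes two distinct access paths: the ClusterIP (10.98.167.121:6653) is reachable from machines participating in the cluster via kube-proxy, while the NodePort (32102) is bound on each node's own IP for traffic from outside. The ClusterIP combined with the NodePort is not a valid pair. A quick reachability sketch (assuming nc is installed on the VMs):

container@kubemaster:~$ nc -zv 10.98.167.121 6653   # ClusterIP + service port
container@kubemaster:~$ nc -zv 10.0.0.178 32102     # node IP + NodePort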

Pods running in the pod-replica namespace:

container@kubemaster:~$ sudo kubectl get pods -n pod-replica -o wide
[sudo] password for container:
NAME                READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE
pod-replica-8qw4s   1/1     Running   0          32m   192.168.2.5   kubenode2   <none>
pod-replica-v228f   1/1     Running   0          29m   192.168.2.6   kubenode2   <none>
pod-replica-vgfbm   1/1     Running   0          32m   192.168.1.4   kubenode1   <none>

All cluster nodes:

container@kubemaster:~$ sudo kubectl get nodes -o wide
NAME         STATUS   ROLES    AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
kubemaster   Ready    master   123m   v1.12.7   10.0.0.141    <none>        Ubuntu 16.04.6 LTS   4.15.0-45-generic   docker://18.6.1
kubenode1    Ready    <none>   122m   v1.12.7   10.0.0.178    <none>        Ubuntu 16.04.6 LTS   4.15.0-45-generic   docker://18.6.1
kubenode2    Ready    <none>   122m   v1.12.7   10.0.0.10     <none>        Ubuntu 16.04.6 LTS   4.15.0-45-generic   docker://18.6.1

Service details:

container@kubemaster:~$ sudo kubectl describe service service-nodeport -n pod-replica
Name:                     service-nodeport
Namespace:                pod-replica
Labels:                   <none>
Annotations:              <none>
Selector:                 app=sdn-controller
Type:                     NodePort
IP:                       10.98.167.121
Port:                     <unset>  6653/TCP
TargetPort:               6653/TCP
NodePort:                 <unset>  32102/TCP
Endpoints:                192.168.1.4:6653,192.168.2.5:6653,192.168.2.6:6653
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
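
The Endpoints line only shows that the selector matched my pods; it does not prove that anything is listening on 6653 inside them. A quick check I can run from the master (Python should be present in the osrg/ryu image since Ryu itself is Python; the pod name is one of mine from above):

container@kubemaster:~$ sudo kubectl exec pod-replica-vgfbm -n pod-replica -- \
    python -c "import socket; socket.create_connection(('127.0.0.1', 6653), 2)"

While the pods only run the echo loop, this should fail with a connection-refused error.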

Service-related environment variables inside one of the pods:

container@kubemaster:~$ sudo kubectl exec pod-replica-vgfbm -n pod-replica -- printenv | grep SERVICE
SERVICE_NODEPORT_PORT_6653_TCP_PORT=6653
SERVICE_NODEPORT_PORT_6653_TCP_ADDR=10.98.167.121
KUBERNETES_SERVICE_PORT=443
SERVICE_NODEPORT_SERVICE_HOST=10.98.167.121
SERVICE_NODEPORT_PORT=tcp://10.98.167.121:6653
SERVICE_NODEPORT_PORT_6653_TCP=tcp://10.98.167.121:6653
SERVICE_NODEPORT_PORT_6653_TCP_PROTO=tcp
KUBERNETES_SERVICE_PORT_HTTPS=443
SERVICE_NODEPORT_SERVICE_PORT=6653
KUBERNETES_SERVICE_HOST=10.96.0.1

Default Mininet topology running on the kubemaster VM:

container@kubemaster:~$ sudo mn
[sudo] password for container:
*** No default OpenFlow controller found for default switch!
*** Falling back to OVS Bridge
*** Creating network
*** Adding controller
*** Adding hosts:
h1 h2
*** Adding switches:
s1
*** Adding links:
(h1, s1) (h2, s1)
*** Configuring hosts
h1 h2
*** Starting controller

*** Starting 1 switches
s1 ...
*** Starting CLI:
mininet>
mininet> sh ovs-vsctl set-controller s1 tcp:10.98.167.121:6653 tcp:10.98.167.121:32102
mininet> sh ovs-vsctl set bridge s1 protocols=OpenFlow10
mininet>
mininet> sh ovs-vsctl show
d4ab3625-78e5-491c-814b-844090a0abeb
    Bridge "s1"
        Controller "tcp:10.98.167.121:6653"
        Controller "tcp:10.98.167.121:32102"
        fail_mode: standalone
        Port "s1-eth1"
            Interface "s1-eth1"
        Port "s1"
            Interface "s1"
                type: internal
        Port "s1-eth2"
            Interface "s1-eth2"
    ovs_version: "2.5.5"
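
Note on the two controller targets: tcp:10.98.167.121:32102 combines the ClusterIP with the NodePort, which is never a valid pair; the NodePort belongs with a node IP. A sketch of what I believe the targets should look like (node IP taken from kubenode1 above), plus a way to check the connection state:

mininet> sh ovs-vsctl set-controller s1 tcp:10.98.167.121:6653 tcp:10.0.0.178:32102
mininet> sh ovs-vsctl list controller

The list controller output has an is_connected field per controller. Also, since s1 is pinned to protocols=OpenFlow10, whatever Ryu app eventually runs must speak OpenFlow 1.0 (e.g. ryu.app.simple_switch rather than the _13 variants).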

Starting the Ryu controller manually inside one of the pods:

container@kubenode1:~$ sudo docker exec -ti c28e32ad535c /bin/bash
root@pod-replica-vwwr4:~# ryu-manager
loading app ryu.controller.ofp_handler
instantiating app ryu.controller.ofp_handler of OFPHandler
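
Run bare like this, ryu-manager loads only ofp_handler, and it dies as soon as the exec session ends; the default listen port has also varied across Ryu releases (historically 6633, 6653 in recent ones). A sketch that pins the port and loads an OpenFlow 1.0 switching app (the app choice is an assumption):

root@pod-replica-vwwr4:~# ryu-manager --verbose --ofp-tcp-listen-port 6653 ryu.app.simple_switch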

QUESTION: My Mininet instance runs on the kubemaster VM and I am pointing the switch at the service IP, but ovs-vsctl never shows is_connected: true for the controllers. How do I get successful networking in this scenario?

-- Prarthana Shedge
docker
kubernetes
kubernetes-pod
mininet
ryu

0 Answers