Cannot connect to a server deployed on a Kubernetes cluster

7/5/2019

I have already prepared my Docker image. My Dockerfile:

FROM python:3.7-alpine

# Creating Application Source Code Directory
RUN mkdir -p /FogAPP/src

# Setting Home Directory for containers
WORKDIR /FogAPP/src

# Copying src code to Container
COPY fogserver.py /FogAPP/src

# Application Environment variables
ENV APP_ENV development

# Exposing Ports
EXPOSE 31700

# Setting Persistent data
VOLUME ["/app-data"]

#Running Python Application
CMD ["python", "fogserver.py"]

My server source code, fogserver.py (socket programming):

import socket
from datetime import datetime
import os

def ReceiveDATA():
    hostname = socket.gethostname()
    i=0
    host = socket.gethostbyname(hostname)
    port = 31700
    while True:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) # Create a socket object

        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)

        s.bind((host, port))            # Bind to the port

        s.listen(10)                    # Accept up to 10 queued connections.


        print("############################# ",i+1," #################################")

        print('Server listening.... on '+ str(host))

        client, address = s.accept()

        print('Connection from : ',address[0])

        i+=1

        date=str(datetime.now())
        date=date.replace('-', '.')
        date=date.replace(' ', '-')
        date=date.replace(':', '.')

        PATH = 'ClientDATA-'+date+'.csv'

        print(date+" : File created")

        f = open(PATH,'wb') #open in binary

        # receive data and write it to file
        l = client.recv(1024)

        while (l):
            f.write(l)
            l = client.recv(1024)

        f.close()


        dt=str(datetime.now())
        dt=dt.replace('-', '.')
        dt=dt.replace(' ', '-')
        dt=dt.replace(':', '.')

        print(dt+' : '+'Successfully get the Data')

        feedback = dt

        client.send(feedback.encode('utf-8'))

        client.close()

        s.close()



if __name__ == '__main__':
    ReceiveDATA()
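The server above relies on a standard socket idiom: `recv()` returns `b""` once the peer closes (or shuts down) its sending side, which is what ends the file-receive loop. A minimal loopback sketch of that same transfer pattern (my illustration, not part of the question's code; it runs server and client in one process on 127.0.0.1 with an OS-chosen port):

```python
import socket
import threading

def run_server(ready, result, port_holder):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
    port_holder.append(srv.getsockname()[1])
    srv.listen(1)
    ready.set()
    conn, _ = srv.accept()
    chunks = []
    data = conn.recv(1024)
    while data:                       # recv returns b"" once the client signals EOF
        chunks.append(data)
        data = conn.recv(1024)
    result.append(b"".join(chunks))
    conn.close()
    srv.close()

ready = threading.Event()
result, port_holder = [], []
t = threading.Thread(target=run_server, args=(ready, result, port_holder))
t.start()
ready.wait()

payload = b"sensor,value\n1,23\n" * 100
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port_holder[0]))
cli.sendall(payload)
cli.shutdown(socket.SHUT_WR)          # signal EOF so the server's recv loop ends
cli.close()
t.join()
```

The key detail is the explicit `shutdown(SHUT_WR)`: without it, the server's `recv()` loop never sees end-of-stream and both sides hang.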

My Kubernetes cluster is ready:

kubectl get nodes

NAME         STATUS   ROLES    AGE     VERSION
rpimanager   Ready    master   3d23h   v1.15.0
rpiworker1   Ready    worker   3d23h   v1.15.0
rpiworker2   Ready    worker   3d23h   v1.15.0

Then I deployed the Docker image in two pods through the Kubernetes dashboard:

kubectl get services

NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
cluster-fogapp   NodePort    10.101.194.192   <none>        80:31700/TCP   52m
kubernetes       ClusterIP   10.96.0.1        <none>        443/TCP        3d23h

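The `PORT(S)` column `80:31700/TCP` means the Service's cluster-internal port is 80 and its NodePort (opened on every node) is 31700. What actually matters for reaching the container is the `targetPort`, which must equal the port the server listens on. A sketch of what the Service manifest likely looks like (the `selector` label and resource name are assumptions, not taken from the question):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: cluster-fogapp
spec:
  type: NodePort
  selector:
    app: cluster-fogapp        # assumed pod label
  ports:
    - port: 80                 # cluster-internal Service port
      targetPort: 31700        # container port fogserver.py listens on
      nodePort: 31700          # port opened on every node's IP
```

With this mapping, an external client should connect to `<AnyNodeIP>:31700`, not to port 80.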
So the Docker image is actually running in two pods:

kubectl get pods

NAME                             READY   STATUS    RESTARTS   AGE
cluster-fogapp-c987dfffd-6zc2x   1/1     Running   0          56m
cluster-fogapp-c987dfffd-gq5k4   1/1     Running   0          56m

I also have client source code, again using socket programming. Here I ran into a problem: which address of the server in the cluster should I use?

This is my client source code:

    import socket
    from datetime import datetime

    host = "????????????"  # Which address should I set?
    port = 31700

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((host, port))

    PATH = GenerateDATA()

    f = open(PATH, "rb")

    l = f.read(1024)

    while (l):
        s.send(l)
        l = f.read(1024)

    f.close()
    s.shutdown(socket.SHUT_WR)  # signal end of data so the server's recv loop ends

    dt = str(datetime.now())
    print(dt + ' : ' + 'Done sending')

I have tried the address of the master node, and I get a Connection refused error.
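Connection refused means nothing is listening (or no forwarding rule exists) on that exact host:port pair. A small probe (my sketch, not from the question) can confirm which node IP and port combination is actually reachable before running the full client:

```python
import socket

def can_connect(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Probe each node IP on the NodePort (node IPs here are hypothetical):
# for ip in ["192.168.1.10", "192.168.1.11"]:
#     print(ip, can_connect(ip, 31700))
```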

Just to clarify: I am working on a cluster composed of Raspberry Pi 3 boards, and the client runs on my own PC. The PC and the Raspberry Pi boards are connected to the same local network.

Thank you for helping me.

-- Abid Omar
docker
kubernetes
python

2 Answers

7/25/2019

I have succeeded in exposing the app in the cluster to the outside through NodePort mode: ManagerIP:31700. I would like to know where the Raspberry Pi cluster stores the data files it has already received.

-- Abid Omar
Source: StackOverflow
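Regarding where the files end up: `fogserver.py` opens `ClientDATA-*.csv` with a relative path, so they land in the working directory (`/FogAPP/src` per the Dockerfile) of whichever pod's container handled that connection, and they are lost when that pod is deleted. To keep them on the node, a volume can be mounted over the write location. A hedged pod-spec fragment (container name, image name, and host path are all assumptions for illustration):

```yaml
# Pod/Deployment spec fragment (names assumed), persisting received files on the node
spec:
  containers:
    - name: fogapp
      image: <your-image>             # placeholder for the image used above
      volumeMounts:
        - name: app-data
          mountPath: /FogAPP/src      # where fogserver.py writes ClientDATA-*.csv
  volumes:
    - name: app-data
      hostPath:
        path: /var/fogapp-data        # directory on the worker node
```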

7/5/2019

You can access the service with a worker node's IP, since you exposed the service as a NodePort:

WorkerNode:<NodePort>

The problem with this approach is that if the node you target dies, you might face issues. The ideal solution is to expose the service as a LoadBalancer, so that you can access it from outside the cluster with a stable external IP or DNS name.

-- Malathi
Source: StackOverflow
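One caveat on the LoadBalancer suggestion: on a bare-metal Raspberry Pi cluster there is no cloud provider to allocate external IPs, so a `type: LoadBalancer` Service stays in `Pending` unless a bare-metal load-balancer implementation such as MetalLB is installed. A sketch of the Service, assuming such an implementation is present (name and selector label are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: cluster-fogapp-lb     # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: cluster-fogapp       # assumed pod label
  ports:
    - port: 31700
      targetPort: 31700
```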