Kubernetes Django REST Framework on GKE

1/16/2020

I've tried to reach my API via the external IP of the load balancer, but I can't connect to that IP from my browser.

I get a connection refused on port 80. I don't know whether my YAML file is incorrect or whether it's a configuration issue on my load balancer.

I built my Docker image successfully with my requirements.txt and pushed it to the registry (gcr.io) so GKE can pull the image into Kubernetes.
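The build and push were essentially the following (the project ID and image name below stand in for my real values):

docker build -t gcr.io/<project_id>/<image_name> .
docker push gcr.io/<project_id>/<image_name>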

I deployed my image with the command: kubectl create -f <filename>.yaml

using the following YAML file:

# [START kubernetes_deployment]
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: example
  labels:
    app: example
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: faxi
        image: gcr.io/<my_id_stands_here>/<bucketname>
        command: ["python3", "manage.py", "runserver"]
        env:
            # [START cloudsql_secrets]
            - name: DATABASE_USER
              valueFrom:
                secretKeyRef:
                  name: cloudsql
                  key: username
            - name: DATABASE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: cloudsql
                  key: password
            # [END cloudsql_secrets]
        ports:
        - containerPort: 8080

      # [START proxy_container]
      - image: b.gcr.io/cloudsql-docker/gce-proxy:1.05
        name: cloudsql-proxy
        command: ["/cloud_sql_proxy", "--dir=/cloudsql",
                  "-instances=<I put here my instance name inside>",
                  "-credential_file=<my_credential_file"]
        volumeMounts:
          - name: cloudsql-oauth-credentials
            mountPath: /secrets/cloudsql
            readOnly: true
          - name: ssl-certs
            mountPath: /etc/ssl/certs
          - name: cloudsql
            mountPath: /cloudsql
      # [END proxy_container] 
      # [START volumes]
      volumes:
        - name: cloudsql-oauth-credentials
          secret:
            secretName: cloudsql-oauth-credentials
        - name: ssl-certs
          hostPath:
            path: /etc/ssl/certs
        - name: cloudsql
          emptyDir: {}
      # [END volumes]        
# [END kubernetes_deployment]

---

# [START service]
apiVersion: v1
kind: Service
metadata:
  name: example
  labels:
    app: example
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: example
# [END service]

It works fine. I get the output below with kubectl get pods:

NAME        READY   STATUS    RESTARTS   AGE
mypodname   2/2     Running   0          21h

and with kubectl get services

NAME         TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
example      LoadBalancer   10.xx.xxx.xxx   34.xx.xxx.xxx   80:30833/TCP   21h
kubernetes   ClusterIP      10.xx.xxx.x     <none>          443/TCP        23h

kubectl describe services example gives me the following output:

Name:                     example
Namespace:                default
Labels:                   app=example
Annotations:              <none>
Selector:                 app=example
Type:                     LoadBalancer
IP:                       10.xx.xxx.xxx
LoadBalancer Ingress:     34.xx.xxx.xxx
Port:                     <unset>  80/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30833/TCP
Endpoints:                10.xx.x.xx:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

But I can't connect to my REST API via curl or a browser. I try to connect to <external_ip>:80 and get a connection refused on that port.

I scanned the external IP with Nmap and it shows that port 80 is closed, but I'm not sure if that's the reason. My nmap output:

PORT     STATE  SERVICE
80/tcp   closed http
554/tcp  open   rtsp
7070/tcp open   realserver

Thank you for your help, guys.

-- ToniWth
django
django-rest-framework
gke-networking
google-kubernetes-engine
kubernetes

1 Answer

1/17/2020

Please create an ingress firewall rule that allows traffic to port 30833 on all the nodes; that should resolve the issue.
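For example, with gcloud the rule could look roughly like this (the rule name, network, and node target tag below are placeholders; adjust them to match your cluster):

gcloud compute firewall-rules create allow-nodeport-30833 \
    --network default \
    --direction INGRESS \
    --allow tcp:30833 \
    --source-ranges 0.0.0.0/0 \
    --target-tags <your-gke-node-tag>

You can find the node tag on one of the cluster's VM instances; it usually looks like gke-<cluster-name>-<hash>-node.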

-- Anurag Sharma
Source: StackOverflow