nginx with minikube and metallb

12/6/2020

Hello, I'm trying to launch my own deployment with my own container in minikube. Here's my YAML file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: wildboar-nginx-depl
  labels:
    app: services.nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: services.nginx
  template:
    metadata:
      labels:
        app: services.nginx
    spec:
      containers:
        - name: wildboar-nginx-pod
          image: services.nginx
          ports:
            - containerPort: 80
            - containerPort: 443
            - containerPort: 22
          imagePullPolicy: Never
---
apiVersion: v1
kind: Service
metadata:
  name: wildboar-nginx-service
  annotations: 
    metallb.universe.tf/allow-shared-ip: wildboar-key
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.1.101 
  selector:
    app: services.nginx
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30080
    - name: https
      protocol: TCP
      port: 443
      targetPort: 443
      nodePort: 30443
    - name: ssh
      protocol: TCP
      port: 22
      targetPort: 22
      nodePort: 30022

That's my Dockerfile:

FROM alpine:latest
RUN apk update && apk upgrade -U -a
RUN apk add nginx openssl openrc openssh supervisor
RUN mkdir /www/
RUN adduser -D -g 'www' www
RUN chown -R www:www /www
RUN chown -R www:www /var/lib/nginx
RUN openssl req -x509 -nodes -days 30 -newkey rsa:2048 -subj \
"/C=RU/ST=Moscow/L=Moscow/O=lchantel/CN=localhost" -keyout \
/etc/ssl/private/lchantel.key -out /etc/ssl/certs/lchantel.crt
COPY ./conf /etc/nginx/conf.d/default.conf
COPY ./nginx_conf.sh .
COPY ./supervisor.conf /etc/
RUN mkdir -p /run/nginx/
EXPOSE 80 443 22
RUN chmod 755 /nginx_conf.sh
CMD sh nginx_conf.sh

That's my nginx_conf.sh:

#!/bin/sh

cp /var/lib/nginx/html/index.html /www/
rc default
rc-service sshd start
ssh-keygen -A
rc-service sshd stop
/usr/bin/supervisord -c /etc/supervisord.conf

Applying the YAML files succeeds, but then I'm stuck with a CrashLoopBackOff error:

$ kubectl get pod
NAME                                   READY   STATUS             RESTARTS   AGE
wildboar-nginx-depl-57d64f58d8-cwcnn   0/1     CrashLoopBackOff   2          40s
wildboar-nginx-depl-57d64f58d8-swmq2   0/1     CrashLoopBackOff   2          40s

I tried rebooting, but it doesn't help. I tried describing the pod, but the information is not helpful:

$ kubectl describe pod wildboar-nginx-depl-57d64f58d8-cwcnn
Name:         wildboar-nginx-depl-57d64f58d8-cwcnn
Namespace:    default
Priority:     0
Node:         minikube/192.168.99.100
Start Time:   Sun, 06 Dec 2020 17:49:19 +0300
Labels:       app=services.nginx
              pod-template-hash=57d64f58d8
Annotations:  <none>
Status:       Running
IP:           172.17.0.7
IPs:
  IP:           172.17.0.7
Controlled By:  ReplicaSet/wildboar-nginx-depl-57d64f58d8
Containers:
  wildboar-nginx-pod:
    Container ID:   docker://6bd4ab3b08703293697d401e355d74d1ab09f938eb23b335c92ffbd2f8f26706
    Image:          services.nginx
    Image ID:       docker://sha256:a62f240db119e727935f072686797f5e129ca44cd1a5f950e5cf606c9c7510b8
    Ports:          80/TCP, 443/TCP, 22/TCP
    Host Ports:     0/TCP, 0/TCP, 0/TCP
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sun, 06 Dec 2020 17:52:13 +0300
      Finished:     Sun, 06 Dec 2020 17:52:15 +0300
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sun, 06 Dec 2020 17:50:51 +0300
      Finished:     Sun, 06 Dec 2020 17:50:53 +0300
    Ready:          False
    Restart Count:  5
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-hr82j (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  default-token-hr82j:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-hr82j
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  3m9s                                    Successfully assigned default/wildboar-nginx-depl-57d64f58d8-cwcnn to minikube
  Normal   Pulled     98s (x5 over 3m9s)   kubelet, minikube  Container image "services.nginx" already present on machine
  Normal   Created    98s (x5 over 3m9s)   kubelet, minikube  Created container wildboar-nginx-pod
  Normal   Started    98s (x5 over 3m9s)   kubelet, minikube  Started container wildboar-nginx-pod
  Warning  BackOff    59s (x10 over 3m4s)  kubelet, minikube  Back-off restarting failed container

I've run out of ideas about what to do :(

-- WildBoar
docker
kubernetes
metallb
minikube

1 Answer

12/8/2020

Well, I solved the issue with nginx. First of all, I rewrote supervisor.conf, and now it looks something like this:

[supervisord]
nodaemon=true
user = root

[program:nginx]
command=nginx -g 'daemon off;'
autostart=true
autorestart=true
startsecs=0
redirect_stderr=true

[program:ssh]
command=/usr/sbin/sshd -D
autostart=true
autorestart=true

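The point is that supervisord (with nodaemon=true) now stays in the foreground as PID 1, so the container's main process no longer finishes with exit code 0 right after starting, which is what was producing the Completed/CrashLoopBackOff loop above. The relevant Dockerfile lines boil down to something like this (a sketch, assuming the config is copied to the same path supervisord is pointed at):

COPY ./supervisor.conf /etc/supervisord.conf
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisord.conf"]
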
Second, I had a problem with the LoadBalancer. I swapped the order of the Service and Deployment configurations in the file, and also added spec.externalTrafficPolicy: Cluster to the Service (for IP address sharing).

apiVersion: v1
kind: Service
metadata:
  name: wildboar-nginx-service
  labels:
    app: nginx
  annotations: 
    metallb.universe.tf/allow-shared-ip: minikube
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.99.105
  selector:
    app: nginx
  externalTrafficPolicy: Cluster
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      targetPort: 443
    - name: ssh
      protocol: TCP
      port: 22
      targetPort: 22

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: wildboar-nginx-depl
  labels:
    app: nginx
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      restartPolicy: Always
      containers:
        - name: wildboar-nginx-pod
          image: wildboar.nginx:latest
          ports:
            - containerPort: 80
              name: http
            - containerPort: 443
              name: https
            - containerPort: 22
              name: ssh
          imagePullPolicy: Never

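For the loadBalancerIP above to be assignable, MetalLB also needs an address pool that contains it. That is the separate MetalLB config applied in the rebuild script below; it is a ConfigMap roughly like this (the address range here is only an example and has to match your minikube/VirtualBox network):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.99.100-192.168.99.110
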
Third, I rebuilt minikube and reapplied all the configs with a script like this:

#!/bin/bash

# "kubectl ns" comes from the kubens/ns plugin; without it, use: kubectl config set-context --current --namespace=default
kubectl ns default
kubectl delete deployment --all
kubectl delete service --all
kubectl ns metallb-system
kubectl delete configmap --all
kubectl ns default
docker rmi -f <your_custom_docker_image>
minikube stop
minikube delete 
minikube start --driver=virtualbox --disk-size='<your size>mb' --memory='<your_size>mb'
minikube addons enable metallb
eval $(minikube docker-env)
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/metallb.yaml
# the next line is only needed the first time you set up metallb
#kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
docker build -t <your_custom_docker_image> .
kubectl apply -f <metallb_yaml_config>.yaml
kubectl apply -f <your_config_with_deployment_and_service>.yaml

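After running the script, the result can be checked with something like this (the EXTERNAL-IP column of the Service should show the MetalLB address):

kubectl get service wildboar-nginx-service
curl http://192.168.99.105
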
I also noticed that YAML files are very sensitive to spaces and tabs, so I installed yamllint for basic linting of the YAML files. I want to thank confused genius and David Maze for their help!

-- WildBoar
Source: StackOverflow