nginx-ingress-controller: Error while initializing connection to Kubernetes apiserver

11/28/2019

The nginx-ingress-controller gives an error while initializing the connection to the Kubernetes apiserver. Is there some issue with the cluster? I am not able to understand this issue. I want to expose my services outside the cluster. Below are the docker logs with the error and my nginx-ingress-controller.yml.

docker log

Creating API client for https://10.96.0.1:443
F1128 06:30:25.376076       7 launch.go:330] Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration). Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
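For reference, a quick way to check whether any pod can reach the apiserver service IP at all (the curl image name below is just an example, assuming kubectl access to the cluster):

kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -k -m 5 https://10.96.0.1:443/version
# If this also times out, pods in general cannot reach the apiserver service IP,
# so the problem is with the cluster/pod network rather than the ingress controller.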

nginx-controller.yml

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx


---
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https


---

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: default
spec:
  replicas: 1
#  selector:
#    matchLabels:
#      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0-beta.17
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/nginx-default-backend-external
            - --logtostderr
            - --configmap=$(POD_NAMESPACE)/nginx-ingress-config
            - --default-ssl-certificate=$(POD_NAMESPACE)/default-tls
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443

What could be the reason?

-- Charvee Punia
kubernetes
nginx
nginx-ingress

1 Answer

12/10/2019

Posting as a Community Wiki based on the comments, for better visibility.

The root cause of the issue was not the nginx-controller settings but the Kubernetes cluster configuration.

1. Using the wrong CIDR.

The Original Poster used the same value for --pod-network-cidr as the host network. This is described in the documentation:

Also, beware, that your Pod network must not overlap with any of the host networks as this can cause issues. If you find a collision between your network plugin’s preferred Pod network and some of your host networks, you should think of a suitable CIDR replacement and use that during kubeadm init with --pod-network-cidr and as a replacement in your network plugin’s YAML.
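For example, a minimal sketch of re-initialising the control plane with a non-overlapping Pod CIDR (10.244.0.0/16 is Flannel's default and is only an example; pick a range that does not collide with your host network):

sudo kubeadm reset
sudo kubeadm init --pod-network-cidr=10.244.0.0/16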

2. CoreDNS crash.

The Original Poster changed the ConfigMap using kubectl -n kube-system edit configmap coredns, which holds the CoreDNS configuration, and commented out the loop plugin. The OP then installed the CNI plugin Flannel and restarted the CoreDNS pods so they picked up the new configuration from the ConfigMap, as sketched below.
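A rough sketch of those steps (the Flannel manifest URL is the one commonly used at the time; check the Flannel documentation for the current location):

kubectl -n kube-system edit configmap coredns        # comment out the 'loop' plugin line
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl -n kube-system delete pod -l k8s-app=kube-dns   # recreate CoreDNS pods so they pick up the edited ConfigMap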

After that, the nginx-controller configuration YAMLs worked fine.
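As a quick sanity check afterwards (assuming a standard kubeadm setup where CoreDNS runs in kube-system and the controller Deployment is named as in the question):

kubectl -n kube-system get pods -l k8s-app=kube-dns   # CoreDNS pods should be Running
kubectl logs deploy/nginx-ingress-controller          # the launch.go i/o timeout should no longer appear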

-- PjoterS
Source: StackOverflow