I have installed the nginx ingress Helm chart on CentOS 8 with Kubernetes 1.17 and containerd, and the ingress pod fails with the error message below. The same Helm chart worked on CentOS 7 with Docker.
I0116 04:17:06.624547 8 flags.go:205] Watching for Ingress class: nginx
W0116 04:17:06.624803 8 flags.go:250] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false)
W0116 04:17:06.624844 8 client_config.go:543] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: 0.27.1
Build: git-1257ded99
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.17.7
-------------------------------------------------------------------------------
I0116 04:17:06.624968 8 main.go:194] Creating API client for https://10.224.0.1:443
I0116 04:17:06.630907 8 main.go:238] Running in Kubernetes cluster version v1.17 (v1.17.0) - git (clean) commit 70132b0f130acc0bed193d9ba59dd186f0e634cf - platform linux/amd64
I0116 04:17:06.633567 8 main.go:91] Validated nginx-ingress/nginx-ingress-default-backend as the default backend.
F0116 04:17:06.843785 8 ssl.go:389] unexpected error storing fake SSL Cert: could not create PEM certificate file /etc/ingress-controller/ssl/default-fake-certificate.pem: open /etc/ingress-controller/ssl/default-fake-certificate.pem: permission denied
If I remove this from the deployment, the ingress pod starts:
capabilities:
  add:
  - NET_BIND_SERVICE
  drop:
  - ALL
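Once the pod is running, one way to see what ownership the failed write was up against is to list the SSL directory with numeric IDs (a diagnostic sketch; the pod name is a placeholder):

# Placeholder pod name; substitute your actual controller pod.
kubectl -n nginx-ingress exec nginx-ingress-controller-xxxxx -- ls -ln /etc/ingress-controller
# If the directory is owned by a UID other than the one the container
# runs as, writing default-fake-certificate.pem fails with "permission denied",
# matching the fatal error above.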
I'd like to understand why the same Helm chart is failing on containerd.
containerd --version
containerd github.com/containerd/containerd 1.2.0
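For reference, one way to confirm which user the controller image is built to run as is to inspect its config (an illustrative sketch; skopeo is not part of the original setup):

# Dump the image's OCI config and look at the User field; 0.27.x images
# are built to run as www-data (UID 101).
skopeo inspect --config docker://quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.27.1 | grep -i '"user"'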
Adding the deployment:
containers:
- args:
  - /nginx-ingress-controller
  - --default-backend-service=nginx-ingress/nginx-ingress-default-backend
  - --election-id=ingress-controller-leader
  - --ingress-class=nginx
  - --configmap=nginx-ingress/nginx-ingress-controller
  - --default-ssl-certificate=nginx-ingress/ingress-tls
  env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.name
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.namespace
  image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.27.1
  imagePullPolicy: IfNotPresent
  livenessProbe:
    failureThreshold: 3
    httpGet:
      path: /healthz
      port: 10254
      scheme: HTTP
    initialDelaySeconds: 10
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 1
  name: nginx-ingress-controller
  ports:
  - containerPort: 80
    name: http
    protocol: TCP
  - containerPort: 443
    name: https
    protocol: TCP
  readinessProbe:
    failureThreshold: 3
    httpGet:
      path: /healthz
      port: 10254
      scheme: HTTP
    initialDelaySeconds: 10
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 1
  resources: {}
  securityContext:
    allowPrivilegeEscalation: true
    capabilities:
      add:
      - NET_BIND_SERVICE
      drop:
      - ALL
    runAsUser: 101
  terminationMessagePath: /dev/termination-log
  terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: nginx-ingress
Error message:
-------------------------------------------------------------------------------
W0116 16:02:30.074390 8 queue.go:130] requeuing nginx-ingress/nginx-ingress-controller, err
-------------------------------------------------------------------------------
Error: exit status 1
nginx: the configuration file /tmp/nginx-cfg613392629 syntax is ok
2020/01/16 16:02:30 [emerg] 103#103: bind() to 0.0.0.0:80 failed (13: Permission denied)
nginx: [emerg] bind() to 0.0.0.0:80 failed (13: Permission denied)
nginx: configuration file /tmp/nginx-cfg613392629 test failed
I experienced the same issue. The solution is not to remove the capabilities section but to change the runAsUser.
If you download the new release (0.27.1) deployment of the NGINX ingress controller, you can see:
securityContext:
  allowPrivilegeEscalation: true
  capabilities:
    drop:
    - ALL
    add:
    - NET_BIND_SERVICE
  # www-data -> 101
  runAsUser: 101
The "runAsUser" line has a different user id. the user id in my old deployment was different so I got this error. Since I Changed the runAsUser to ID 101, the id in the kubernetes definitions is the same as the ID used in the new Nginx image and it works again :)