I have set up the NGINX Ingress Controller configuration under the data property, as shown in the YAML below.
I would like to know whether this is the correct way to set NGINX configuration, instead of providing an nginx.conf file.
Secondly, I would like to verify that the provided configuration has actually been applied. Should I exec into the pod and run nginx -T,
or is there another way to check?
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  worker-processes: "24"
  worker-connections: "100000"
  worker-rlimit-nofile: "102400"
  worker-cpu-affinity: "auto 111111111111111111111111"
  keepalive: "200"
  main-template: |
    user nginx;
    worker_processes {{.WorkerProcesses}};
    {{- if .WorkerRlimitNofile}}
    worker_rlimit_nofile {{.WorkerRlimitNofile}};{{end}}
    {{- if .WorkerCPUAffinity}}
    worker_cpu_affinity {{.WorkerCPUAffinity}};{{end}}
    {{- if .WorkerShutdownTimeout}}
    worker_shutdown_timeout {{.WorkerShutdownTimeout}};{{end}}
    daemon off;
    error_log /var/log/nginx/error.log {{.ErrorLogLevel}};
    pid /var/run/nginx.pid;
    {{- if .MainSnippets}}
    {{range $value := .MainSnippets}}
    {{$value}}{{end}}
    {{- end}}
    events {
        worker_connections {{.WorkerConnections}};
    }
    http {
        include /etc/nginx/mime.types;
        default_type application/octet-stream;
        ...
        sendfile on;
        access_log off;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 315;
        keepalive_requests 10000000;
        #gzip on;
        ...
    }
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
There are many ways to install the NGINX Ingress Controller; however, they depend on the environment you are deploying to.
For example, on minikube:
For standard usage:
minikube addons enable ingress
To check if the ingress controller pods have started, run the following command:
$ kubectl get pods -n ingress-nginx \
-l app.kubernetes.io/name=ingress-nginx --watch
You can also use Helm (v3 only): the NGINX Ingress Controller can be installed via Helm using the chart from the project repository. To install the chart with the release name ingress-nginx:
$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
$ helm repo update
$ helm install ingress-nginx ingress-nginx/ingress-nginx
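If you go the Helm route, the same ConfigMap entries can be supplied as chart values; recent versions of the ingress-nginx chart expose them under controller.config (check your chart's values.yaml, and note that the valid key names depend on which controller you run). A sketch:
$ helm install ingress-nginx ingress-nginx/ingress-nginx \
    --set controller.config.worker-processes="24"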
Then you can check the installed version:
POD_NAME=$(kubectl get pods -l app.kubernetes.io/name=ingress-nginx -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it $POD_NAME -- /nginx-ingress-controller --version
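It is also worth confirming which ConfigMap the controller actually watches: the controller is started with a --configmap argument (flag names differ slightly between the community and the NGINX Inc. controllers), and only the ConfigMap named there is read. Assuming the deployment is called nginx-ingress-controller, you can inspect its arguments with:
$ kubectl -n ingress-nginx get deployment nginx-ingress-controller \
    -o jsonpath='{.spec.template.spec.containers[0].args}'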
However, the most common way is to install the NGINX Ingress Controller in your Kubernetes cluster using Kubernetes manifests and then modify nginx-config.yaml.
Summing up: providing the configuration through a ConfigMap is the intended approach; you modify nginx-config.yaml (the ConfigMap) rather than editing the nginx.conf file directly. That gives you a clear specification which you can then easily debug.
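Once you change the ConfigMap, the controller should pick it up and reload. A quick sanity check (a sketch, assuming the manifest is saved as nginx-config.yaml and the controller pods carry the app.kubernetes.io/name=ingress-nginx label; the exact log wording varies between controller versions):
$ kubectl apply -f nginx-config.yaml
$ kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx --tail=100 | grep -i reload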
Read more: nginx-ingress-controller-installation-manifest, nginx-ingress-controller.
Even while troubleshooting, there are examples showing how to check the nginx.conf file.
To check the Ingress Controller you can, for example:
check the Ingress Resource Events
$ kubectl get ing -n <namespace-of-ingress-resource>
$ kubectl describe ing <ingress-resource-name> -n <namespace-of-ingress-resource>
check the Ingress Controller Logs
$ kubectl get pods -n <namespace-of-ingress-controller>
$ kubectl logs -n <namespace> nginx-ingress-controller
check the Nginx Configuration
$ kubectl get pods -n <namespace-of-ingress-controller>
$ kubectl exec -it -n <namespace-of-ingress-controller> nginx-ingress-controller -- cat /etc/nginx/nginx.conf
$ kubectl get svc --all-namespaces
See more: ingress-troubleshooting.
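Regarding your second question: yes, exec'ing into the controller pod and dumping the rendered configuration is a reliable way to confirm the ConfigMap values were applied. A minimal sketch, assuming the pod label and namespace used above and that your keys map to the worker_processes, worker_connections and keepalive_* directives:
$ POD_NAME=$(kubectl get pods -n ingress-nginx \
    -l app.kubernetes.io/name=ingress-nginx -o jsonpath='{.items[0].metadata.name}')
$ kubectl exec -it -n ingress-nginx $POD_NAME -- nginx -T | grep -E 'worker_processes|worker_connections|keepalive'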