I'm trying to change the client_max_body_size value so my nginx ingress will not return a 413 error. I've tested a few solutions.
Here is my test ConfigMap:
kind: ConfigMap
apiVersion: v1
data:
  proxy-connect-timeout: "15"
  proxy-read-timeout: "600"
  proxy-send-timeout: "600"
  proxy-body-size: "8m"
  hsts-include-subdomains: "false"
  body-size: "64m"
  server-name-hash-bucket-size: "256"
  client-max-body-size: "50m"
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
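For reference, I load it with a plain kubectl apply and then read it back to confirm the data is there (the file name is just what I saved the manifest as locally):
kubectl apply -f nginx-configuration.yaml
kubectl get configmap nginx-configuration -n ingress-nginx -o yaml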
These changes have no effect at all. After loading it I can see the information about the ConfigMap reload in the nginx controller log, but the values in nginx.conf stay the same:
root@nginx-ingress-controller-95db685f5-b5s6s:/# cat /etc/nginx/nginx.conf | grep client_max
client_max_body_size "8m";
client_max_body_size "1m";
client_max_body_size "1m";
client_max_body_size "1m";
client_max_body_size "1m";
client_max_body_size "1m";
client_max_body_size "1m";
client_max_body_size "1m";
client_max_body_size "1m";
client_max_body_size "1m";
client_max_body_size "1m";
client_max_body_size "1m";
client_max_body_size "1m";
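To confirm the controller is actually watching this ConfigMap, I also dumped the container args and checked the --configmap flag (the deployment name below is inferred from the pod name above, so it may differ in your setup):
kubectl -n ingress-nginx get deployment nginx-ingress-controller \
  -o jsonpath='{.spec.template.spec.containers[0].args}'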
My nginx-controller config uses this image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.13.0
How can I force nginx to pick up the new value? I need to change it globally, for all my ingresses.
I have tried both proxy-body-size and client-max-body-size in the ConfigMap and did a rolling restart of the nginx controller pods, but when I grep the nginx.conf file in the pod it still returns the default 1m. I am trying to do this within Azure Kubernetes Service (AKS) and I'm working with someone from their support. They said it's not on them, since it appears to be an nginx config issue.
The weird thing is that we had other clusters in Azure where this wasn't an issue, until we discovered it with some of the newer deployments. The initial fix they came up with is what is in this thread, but the value just refuses to change.
Below is my configmap:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  client-max-body-size: 0m
  proxy-connect-timeout: 10s
  proxy-read-timeout: 10s
kind: ConfigMap
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"nginx-nginx-ingress-controller-7b9bff87b8-vxv8q","leaseDurationSeconds":30,"acquireTime":"2020-03-10T20:52:06Z","renewTime":"2020-03-10T20:53:21Z","leaderTransitions":1}'
  creationTimestamp: "2020-03-10T18:34:01Z"
  name: ingress-controller-leader-nginx
  namespace: ingress-nginx
  resourceVersion: "23928"
  selfLink: /api/v1/namespaces/ingress-nginx/configmaps/ingress-controller-leader-nginx
  uid: b68a2143-62fd-11ea-ab45-d67902848a80
After issuing a rolling restart: kubectl rollout restart deployment/nginx-nginx-ingress-controller -n ingress-nginx
Grepping the nginx ingress controller pod to query the value now reveals:
kubectl exec -n ingress-nginx nginx-nginx-ingress-controller-7b9bff87b8-p4ppw cat nginx.conf | grep client_max_body_size
client_max_body_size 1m;
client_max_body_size 1m;
client_max_body_size 1m;
client_max_body_size 1m;
client_max_body_size 21m;
It doesn't matter where I try to change it, whether in the ConfigMap for a global setting or on the Ingress route specifically: the value above never changes.
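For anyone trying to reproduce this, what I'm attempting boils down to a merge patch against whichever ConfigMap the controller's --configmap flag points at (the ConfigMap name below is an assumption based on the Helm release naming and may differ in your setup):
kubectl -n ingress-nginx patch configmap nginx-nginx-ingress-controller \
  --type merge -p '{"data":{"proxy-body-size":"50m"}}'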
Update:
I have been experiencing the same problem and none of the solutions were working. After reading through countless blogs and docs that all had the same suggested solution, I found that the naming convention has changed.
It is no longer denoted by "proxy-body-size", or at least that key never worked for me.
The link below shows that the correct ConfigMap variable to use is "client-max-body-size".
You can use the annotation nginx.ingress.kubernetes.io/proxy-body-size to set the max body size directly on your Ingress object instead of changing the base ConfigMap.
Here is an example of its usage:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "50m"
...
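If the Ingress already exists, you can achieve roughly the same thing in place with kubectl annotate (my-app here is just the name from the example above):
kubectl annotate ingress my-app nginx.ingress.kubernetes.io/proxy-body-size=50m --overwrite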
To set it globally, the configmap.md documentation might be helpful. It turns out the variable to set is proxy-body-size, not client-max-body-size.
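As a minimal sketch, assuming a controller configured with a ConfigMap named nginx-configuration in the ingress-nginx namespace (as in the question above), the global setting would look like:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  proxy-body-size: "50m"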
When you deploy the Helm chart, you can pass --set-string controller.config.proxy-body-size="4m".
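For example, assuming a release named nginx installed from the ingress-nginx chart (adjust the release and chart names to your setup), the command would look something like:
helm upgrade --install nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --set-string controller.config.proxy-body-size="4m"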