Following the example at https://cloud.google.com/nat/docs/gke-example, I receive many warnings. I can get rid of all of them except:
WARNING: The Pod address range limits the maximum size of the cluster. Please refer to https://cloud.google.com/kubernetes-engine/docs/how-to/flexible-pod-cidr to learn how to optimize IP address allocation.
This will enable the auto-repair feature for nodes. Please see https://cloud.google.com/kubernetes-engine/docs/node-auto-repair for more information on node auto repairs.
Looking at the source for gcloud cluster creation, it appears that if enable_ip_alias is set, this warning is logged; if it is not set, a related warning about the maximum node count is logged instead.
    if options.enable_ip_alias:
      log.warning(
          'The Pod address range limits the maximum size of the cluster. '
          'Please refer to https://cloud.google.com/kubernetes-engine/docs/how-to/flexible-pod-cidr to learn how to optimize IP address allocation.'
      )
I understand the material linked from the warning. Even when I add the arguments from the examples at that URL, the warning does not go away.
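For reference, the create command from that tutorial looks roughly like this (the cluster name, node count, and master CIDR below are placeholders rather than my exact values), and the warning is printed either way:

    gcloud container clusters create nat-example-cluster \
        --num-nodes 1 \
        --enable-ip-alias \
        --enable-private-nodes \
        --master-ipv4-cidr 172.16.0.0/28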
This is my first time using GKE; can someone please confirm whether this warning is indicative of an actual issue?
Using custom IP ranges for Pods or Services gives you control over the number of Pods/Services that can run on each node by limiting the address space within the cluster. By default, there is a limit of 110 Pods per node; however, this limit can be reduced if needed.
From the docs:
Reducing the maximum number of Pods per node allows the cluster to have more nodes, since each node requires a smaller part of the total IP address space.
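To make the trade-off concrete, here is a rough sketch of the arithmetic. It assumes a /16 Pod address range (an example value, not something from your setup) and GKE's documented per-node Pod CIDR sizes (110 Pods per node reserves a /24, 32 Pods per node a /26, and so on):

    # How the max-Pods-per-node setting affects the maximum node count
    # for a fixed Pod address range.
    pod_range_bits = 16  # e.g. a /16 Pod range: 65,536 addresses

    # max Pods per node -> per-node Pod CIDR prefix length (per GKE docs)
    per_node_prefix = {110: 24, 64: 25, 32: 26, 16: 27, 8: 28}

    for max_pods, prefix in sorted(per_node_prefix.items(), reverse=True):
        max_nodes = 2 ** (prefix - pod_range_bits)
        print(f'{max_pods:>3} Pods/node -> /{prefix} per node -> up to {max_nodes} nodes')

So halving the Pods per node roughly doubles the number of nodes the same Pod range can hold, which is exactly the trade-off the warning is pointing at.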
Depending on the settings that you used to create your cluster, it might impact the maximum number of nodes that can be in its node pools, hence the warning.
This address space allocation is part of the VPC-native features that are enabled with IP aliases, so whenever you create a cluster with this option enabled, you have the option to limit those ranges.
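For example, a VPC-native cluster with explicit ranges and a reduced Pod limit could be created along these lines (the name and CIDR values are illustrative only, and depending on your gcloud version --default-max-pods-per-node may only be available in the beta track):

    gcloud container clusters create my-cluster \
        --enable-ip-alias \
        --cluster-ipv4-cidr 10.0.0.0/16 \
        --services-ipv4-cidr 10.1.0.0/20 \
        --default-max-pods-per-node 32

With 32 Pods per node, each node reserves a /26 from the Pod range instead of the default /24, so the same range supports four times as many nodes.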
Regarding the warning about node auto repair: it is a feature that periodically checks your nodes to make sure they're healthy and, if they're not, automatically recreates them. The same feature was already available for GCE managed instance groups, so you could say it was ported from there to GKE node pools.
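If you'd rather opt out of it (which should also silence that part of the warning), it can be disabled at creation time; my-cluster below is just a placeholder name:

    gcloud container clusters create my-cluster --no-enable-autorepair

Whether disabling it is a good idea is another matter: you'd be trading occasional automatic node recreation for having to notice and repair unhealthy nodes yourself.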
Now, as to whether these warnings are indicative of actual issues: node auto repair, at least, does not look like a threat. The address space limit might be, but that depends on whether your business logic is affected by a cap on the number of nodes in your cluster.
Regarding your update on if this is an actual issue and how to disable these warnings, the answer might be opinion-based. I will try to address it as objectively as possible.
According to this guideline on software design:
Warnings are meant to alert the user of an impending risk [...]. Whenever a warning is used, the risk that motivates the usage of a warning should be identified and presented clearly.
That means the tool is trying to warn you about a potential issue. In your case, you're limiting the total number of resources in your cluster by limiting its address space, which directly impacts several aspects of it.
The same applies to node auto repair: when it is triggered, it can cause temporary disruptions to the workloads scheduled on the affected nodes.
Please note that this is not an actual issue, but something that, in specific situations, can cause one. As mentioned above, whether you have an issue or not depends entirely on your business logic and how your application is designed.
Since you turned on features that can, under specific circumstances, cause disruptions, the system warns you to make sure you're aware of the risks.