We're implementing security on our k8s cluster in Azure (managed Kubernetes - AKS).
The cluster is deployed via an ARM template with the following configuration:
1 node, an availability set, a Standard load balancer, an NGINX-based ingress controller, and a set of applications deployed.
Following the documentation, we've updated the cluster to protect the API server from the whole internet:
az aks update --resource-group xxxxxxxx-xxx-xx-xx-xx-x -n xx-xx-xxx-aksCluster \
  --api-server-authorized-ip-ranges XX.XX.X.0/24,XX.XX.X.0/24,XX.XXX.XX.0/24,XX.XXX.XXX.XXX/32 \
  --subscription xxxxx-xxx-xxx-xxx-xxxxxx
The operation completed successfully.
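For reference, the ranges that were actually applied can be double-checked with az aks show (same placeholders as above):

az aks show --resource-group xxxxxxxx-xxx-xx-xx-xx-x -n xx-xx-xxx-aksCluster \
  --query apiServerAccessProfile.authorizedIpRanges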
When trying to grab logs from a pod, the following error occurs:
kubectl get pods -n lims-dev
NAME                    READY   STATUS    RESTARTS   AGE
XXXX-76df44bc6d-9wdxr   1/1     Running   0          14h
kubectl logs XXXXX-76df44bc6d-9wdxr -n lims-dev

Error from server: Get https://aks-agentpool-XXXXXX-1:10250/containerLogs/XXXX/XXXXX-76df44bc6d-9wdxr/listener: dial tcp 10.22.0.35:10250: i/o timeout
When trying to deploy using Azure DevOps, the same error is raised:
2020-04-07T04:49:49.0409528Z ##[error]Error: error installing: Post https://xxxxx-xxxx-xxxx-akscluster-dns-xxxxxxx.hcp.eastus2.azmk8s.io:443/apis/extensions/v1beta1/namespaces/kube-system/deployments: dial tcp XX.XX.XXX.142:443: i/o timeout
Of course, the subnet from which I'm running kubectl is included in the authorized ranges.
I'm trying to understand the source of the problem.
You also need to specify the --load-balancer-outbound-ips parameter when creating the AKS cluster. This IP is used by your pods to communicate with the external world, and also by the nodes to reach the AKS API server itself, so it must be included in the authorized IP ranges. Otherwise traffic originating from the cluster is blocked by the API server firewall, and the tunnel the API server uses to fetch container logs from the kubelet (port 10250) cannot be established, which is exactly the i/o timeout you see. See here
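A minimal sketch of the fix (all resource names, IDs, and addresses below are placeholders, not values from your setup):

# Inspect the outbound IP(s) the cluster's Standard load balancer currently uses.
az aks show --resource-group xxxxxxxx-xxx-xx-xx-xx-x -n xx-xx-xxx-aksCluster \
  --query networkProfile.loadBalancerProfile.effectiveOutboundIPs

# Or create a dedicated static public IP and attach it as the outbound IP
# (also possible at creation time via az aks create).
az network public-ip create --resource-group xxxxxxxx-xxx-xx-xx-xx-x \
  --name aks-egress-ip --sku Standard --allocation-method Static
az aks update --resource-group xxxxxxxx-xxx-xx-xx-xx-x -n xx-xx-xxx-aksCluster \
  --load-balancer-outbound-ips <public-ip-resource-id>

# Re-apply the authorized ranges, this time including the outbound IP itself,
# so traffic originating from the cluster can still reach the API server.
az aks update --resource-group xxxxxxxx-xxx-xx-xx-xx-x -n xx-xx-xxx-aksCluster \
  --api-server-authorized-ip-ranges XX.XX.X.0/24,XX.XX.X.0/24,XX.XXX.XX.0/24,XX.XXX.XXX.XXX/32,<outbound-ip>/32

Note that --load-balancer-outbound-ips expects the resource ID of the public IP, not the address itself.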