I have been trying to set up a Knative development environment on my system, but every time I deploy Istio, istio-pilot stays in a Pending state, which as far as I can tell is due to resource exhaustion.
I followed the basic setup guide from the Knative docs, i.e. serving/blob/master/DEVELOPMENT.md.
If I install and deploy Istio according to it, the resources get exhausted and istio-pilot stays Pending because no node is available.
If I try the same with the manifest from the installation guide, i.e. https://knative.dev/docs/install/installing-istio/, it works fine at first, but later, when I restart the cluster, the api-server stops, which from what I found by searching is also due to a lack of resources.
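To be precise, restarting the cluster here means cycling the minikube VM, roughly:

minikube stop
minikube start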
So what are the exact resource requirements for a Knative setup?
I am using a system with an 8-core processor and 32 GB of RAM.
Am I allocating resources wrong? As far as I understand, a single-node Kubernetes cluster (which is what I'm using) needs at least 8 GB of memory and 6 CPUs. And how much on top of that do the Istio and Knative deployments themselves use?
I checked the resources and limits on the node, and the limits were set to 0%.
I have already tried limiting the CPU and RAM in the minikube config, and then with --cpus and --memory when starting minikube, but the outcome remains the same.
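Concretely, this is roughly what I tried (note that minikube's flag is --cpus, plural, and the flags only take effect on a freshly created VM):

# persist the settings in the minikube config...
minikube config set cpus 6
minikube config set memory 8192

# ...or pass them when (re)creating the VM
minikube delete
minikube start --cpus=6 --memory=8192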
Minikube was then started with a plain minikube start:
Creating virtualbox VM (CPUs=6, Memory=8192MB, Disk=20000MB) ...
Preparing Kubernetes v1.15.2 on Docker 18.09.8 ...
Pulling images ...
Istio is deployed with:
kubectl apply -f ./third_party/istio-1.2-latest/istio-crds.yaml
while [[ $(kubectl get crd gateways.networking.istio.io -o jsonpath='{.status.conditions[?(@.type=="Established")].status}') != 'True' ]]; do
  echo "Waiting on Istio CRDs"; sleep 1
done
kubectl apply -f ./third_party/istio-1.2-latest/istio.yaml
The pilot pod stays Pending, and describing the pod gives:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 71s (x4 over 5m12s) default-scheduler 0/1 nodes are available: 1 Insufficient cpu.
Output of kubectl describe node nodename:
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 5105m (85%) 13800m (229%)
memory 3749366272 (45%) 9497290Ki (117%)
ephemeral-storage 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 19m kubelet, minikube Starting kubelet.
Normal NodeHasSufficientMemory 19m (x8 over 19m) kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 19m (x8 over 19m) kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 19m (x7 over 19m) kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 19m kubelet, minikube Updated Node Allocatable limit across pods
Normal Starting 18m kube-proxy, minikube Starting kube-proxy.
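To see which pods account for those requests, a per-pod breakdown can be pulled with a custom-columns query (the column names here are mine):

kubectl get pods --all-namespaces \
  -o custom-columns='NAMESPACE:.metadata.namespace,POD:.metadata.name,CPU_REQ:.spec.containers[*].resources.requests.cpu'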
The setup should have worked, since I also set limits with a ResourceQuota and a LimitRange, but nothing changed.
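For reference, what I applied was shaped roughly like this (the namespace and values here are illustrative, not my exact manifest):

kubectl apply -f - <<EOF
apiVersion: v1
kind: ResourceQuota
metadata:
  name: istio-quota
  namespace: istio-system
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 6Gi
---
apiVersion: v1
kind: LimitRange
metadata:
  name: istio-limits
  namespace: istio-system
spec:
  limits:
  - type: Container
    defaultRequest:   # applied to containers that do not set their own requests
      cpu: 100m
      memory: 128Mi
    default:          # applied to containers that do not set their own limits
      cpu: 500m
      memory: 512Mi
EOF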
What am I doing wrong here?