It seems that with Kubernetes release 1.2.0, a single-node deployment:
gcloud container clusters create "$CLUSTER_NAME" \
--machine-type "n1-standard-1" \
--num-nodes "1"
does not leave much CPU headroom: the kube-system pods request 920 of the node's 1000 millicores of CPU.
(My other question: Google Container Engine: pod creation stuck in 'Pending' status)
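To see where the CPU goes, the node's allocated resources and the per-pod requests can be inspected with kubectl; this is just one way to look at it, and the exact output differs between versions:

# Per-node summary; CPU requests show up under "Allocated resources"
kubectl describe nodes

# CPU requested by each kube-system pod (custom-columns needs a reasonably recent kubectl)
kubectl --namespace=kube-system get pods \
  -o custom-columns=NAME:.metadata.name,CPU:.spec.containers[*].resources.requests.cpu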
Is this a conscious design decision?
Is there a way to adjust it? I do not see the need for such a reservation in my case (CPU usage does not exceed 15%), and I would like to avoid moving to a larger machine.
I actually had the same issue, already on 1.1.8. I solved it by removing most of the pods in the kube-system namespace (list them with kubectl --namespace=kube-system get po) and setting explicit resource requests on my own RCs.
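Concretely, something along these lines should do it; the controller and container names below are placeholders, so check what is actually running in the cluster before deleting anything:

# See what runs in kube-system and which replication controllers manage it
kubectl --namespace=kube-system get po
kubectl --namespace=kube-system get rc

# Remove the controllers for addons you can live without
# (deleting only the pods is not enough, the RC recreates them)
kubectl --namespace=kube-system delete rc <addon-rc-name>

And in your own RC manifest, keep the CPU request small so the scheduler does not over-reserve, for example:

spec:
  template:
    spec:
      containers:
      - name: my-app                      # placeholder container name
        image: gcr.io/my-project/my-app   # placeholder image
        resources:
          requests:
            cpu: 50m                      # request only what the pod actually needs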
End result: I have 10 pods running on a single f1-micro...
I would not recommend that for a production environment, though.