In Kubernetes, will a Go container use all cores when another container is using some of them?

4/8/2020

Scenario: on a 16-core node, a Go service container runs in a pod alongside another container; the other container is allocated 4 cores, and the Go container's parallelism is governed by GOMAXPROCS.

For requests that use goroutines, will the Go program utilize all of the CPUs available to it? I think this depends on GOMAXPROCS, but I'm unsure whether it only sees the one core in use at pod startup or all of the cores on the machine.

Ideally, I'd like CPU-intensive requests to use all available CPUs, but I'm having a hard time measuring what's actually happening at runtime (on GKE).

kubectl top shows what's expected while idle:

POD        NAME            CPU(cores)   MEMORY(bytes)
pod-go-py  go-service      1m           862Mi
pod-go-py  py-service      4m           489Mi

fmt.Println(runtime.NumCPU()) shows 16 cores available. So can I trust that the Go program will utilize them all for these requests? I also imagine that as I scale pods on the node, I'll have to be mindful of throttling.
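A quick way to see both numbers from inside the container is to print runtime.NumCPU() next to runtime.GOMAXPROCS(0); the small sketch below assumes it runs inside the Go container on the 16-core node described above:

package main

import (
	"fmt"
	"runtime"
)

func main() {
	// NumCPU reports the logical cores visible to the process; Kubernetes CPU
	// limits don't hide cores, so on a 16-core node this prints 16.
	fmt.Println("NumCPU:    ", runtime.NumCPU())

	// GOMAXPROCS(0) returns the current setting without changing it. Unless it
	// is overridden, it defaults to NumCPU, so the runtime will schedule work
	// on up to 16 threads even if the cgroup quota allows far less CPU time.
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
}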

-- rambossa
cpu-usage
go
google-kubernetes-engine
kubernetes

2 Answers

5/18/2020

The container will see all of the cores on the machine. What Kubernetes limits do is set up cgroups that tell the kernel how much CPU the container can consume. That means that while Go will see all the cores, as soon as it tries to go above the limit the kernel will throttle it. That's actually a bad thing: you want Go to be aware of cgroups and scale GOMAXPROCS appropriately. For that you can use a library that reads the cgroup limit and sets GOMAXPROCS for you.
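The original link in this answer isn't preserved here; a commonly used package for this is uber-go/automaxprocs (a blank import of go.uber.org/automaxprocs sets GOMAXPROCS from the container's CPU quota at startup). The same effect can also be approximated by hand by reading the cgroup CFS quota, as in this rough sketch (cgroup v1 file paths assumed):

package main

import (
	"fmt"
	"os"
	"runtime"
	"strconv"
	"strings"
)

// cgroupCPULimit derives the container's CPU limit from the cgroup v1 CFS
// quota and period files; it returns 0 when no limit is set (quota is -1).
func cgroupCPULimit() int {
	quota, err1 := readInt("/sys/fs/cgroup/cpu/cpu.cfs_quota_us")
	period, err2 := readInt("/sys/fs/cgroup/cpu/cpu.cfs_period_us")
	if err1 != nil || err2 != nil || quota <= 0 || period <= 0 {
		return 0
	}
	return quota / period
}

func readInt(path string) (int, error) {
	b, err := os.ReadFile(path)
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(strings.TrimSpace(string(b)))
}

func main() {
	// Cap GOMAXPROCS at the cgroup limit so the runtime doesn't run more
	// threads in parallel than the kernel will actually let the container use.
	if limit := cgroupCPULimit(); limit > 0 {
		runtime.GOMAXPROCS(limit)
	}
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
}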

It looks like what you want is oversubscription: set the requests very low and the limit to 4 cores on the first container, and set the requests very low but the limit to 16 cores (or don't set a limit at all) on the second. That way the second container will be able to utilize all of the CPU.
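As a rough illustration of that layout, a pod spec along these lines (the image names are made up) keeps the Python container capped at 4 cores while letting the Go container burst:

apiVersion: v1
kind: Pod
metadata:
  name: pod-go-py
spec:
  containers:
  - name: py-service
    image: example/py-service        # hypothetical image
    resources:
      requests:
        cpu: 100m                    # low request keeps scheduling flexible
      limits:
        cpu: "4"                     # kernel throttles this container at 4 cores
  - name: go-service
    image: example/go-service        # hypothetical image
    resources:
      requests:
        cpu: 100m
      # no CPU limit: this container can use whatever CPU the node has free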

-- creker
Source: StackOverflow

5/18/2020

The answer is no.

E.g., if the other container is declared to use 4 CPUs, then the Go container will only see 12.

-- rambossa
Source: StackOverflow