Can Kubernetes share a single GPU between pods?

1/8/2019

Is it possible to share a single GPU between Kubernetes pods?

-- Gofrane Haj Ahmed
gpu
kubernetes
pod

4 Answers

1/8/2019

As the official documentation says:

GPUs are only supposed to be specified in the limits section, which means:

You can specify GPU limits without specifying requests because Kubernetes will use the limit as the request value by default.

You can specify GPU in both limits and requests but these two values must be equal.

You cannot specify GPU requests without specifying limits. Containers (and pods) do not share GPUs. There’s no overcommitting of GPUs.

Each container can request one or more GPUs. It is not possible to request a fraction of a GPU.
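For reference, a minimal pod manifest that requests one whole GPU through the limits section might look like the sketch below. It assumes the NVIDIA device plugin is installed on the node, and the container image is only illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  containers:
    - name: cuda-container
      image: nvidia/cuda:10.0-base   # illustrative image
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1   # whole GPUs only; no fractions, no sharing between containers
```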

You can also follow this discussion for a bit more information.

-- Mauro Baraldi
Source: StackOverflow

3/11/2019

Yes, it is possible, at least with NVIDIA GPUs.

Just don't specify the GPU in the resource limits/requests. That way, containers from all pods have full access to the GPU, as if they were ordinary processes on the host.
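As a sketch of that approach, the pod spec simply leaves out any GPU resource entry. This assumes the node's default container runtime already exposes the GPU to containers (for example, the NVIDIA runtime configured as Docker's default); the nodeSelector label is hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-gpu-pod
spec:
  # Hypothetical label to steer the pod onto a GPU node, since the
  # scheduler no longer knows which nodes have GPUs.
  nodeSelector:
    gpu: "true"
  containers:
    - name: worker
      image: nvidia/cuda:10.0-base   # illustrative image
      command: ["nvidia-smi"]
      # No nvidia.com/gpu under resources: the scheduler does not track the
      # GPU, so any number of such pods can share the device concurrently,
      # with no isolation or memory accounting between them.
```

The trade-off is that Kubernetes no longer enforces exclusive access or limits GPU memory, so the workloads must tolerate contention.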

-- Adam
Source: StackOverflow

11/26/2019

Yes, it's possible by making some changes to the scheduler. Someone on GitHub has kindly open-sourced their solution; take a look here: https://github.com/AliyunContainerService/gpushare-scheduler-extender
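For illustration, that extender schedules on GPU memory rather than on whole devices, so a pod requests a slice of a card roughly like the sketch below (the aliyun.com/gpu-mem resource name and GiB unit follow that project's README; check the repo for current details):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-share-pod
spec:
  containers:
    - name: trainer
      image: nvidia/cuda:10.0-base   # illustrative image
      resources:
        limits:
          aliyun.com/gpu-mem: 3   # request 3 GiB of GPU memory instead of a whole GPU
```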

-- GioGio
Source: StackOverflow

2/12/2019

The official docs say pods can't request a fraction of a GPU. If you are running a machine learning application across multiple pods, you should look into Kubeflow. They have solved this issue.

-- iamvishnuks
Source: StackOverflow