This repository has been archived by the owner on Nov 17, 2022. It is now read-only.

GPU workflows: Unable to move pods that use the entire node #68

Open
cep21 opened this issue Feb 25, 2020 · 0 comments

Comments


cep21 commented Feb 25, 2020

Hi,

K8s does not support fractional GPU reservations. We have an ASG of on-demand nodes and an ASG of spot nodes, all of which have a single GPU.

We eventually get into a state where 3 on-demand nodes are fully used (i.e., a pod on each requiring a single GPU) and 3 spot nodes are fully used (i.e., a pod on each requiring a single GPU). Both the spot rescheduler and the cluster autoscaler get stuck here, and I'm unsure how to get out. Ideally I could boot up a spot node and wait for it to fill up. The cluster autoscaler suggests low-priority sleep containers to reserve capacity, but these low-priority pods won't be considered ignorable by the spot rescheduler.
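For reference, the capacity-reservation pattern from the cluster autoscaler docs looks roughly like the sketch below: a negative-priority PriorityClass plus a pause-container Deployment that requests one GPU and is pinned to the spot group. The `node.kubernetes.io/lifecycle: spot` label and the `nvidia.com/gpu` resource name are assumptions about our setup, not something this project defines. As noted above, the issue is that the spot rescheduler doesn't treat these placeholder pods as ignorable, so they don't free up a path here.

```yaml
# Minimal sketch of the cluster-autoscaler "overprovisioning" pattern,
# adapted for a single-GPU placeholder. Label keys and the GPU resource
# name are assumptions about our cluster.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: gpu-placeholder
value: -10
globalDefault: false
description: "Placeholder pods that hold a spot GPU node and get preempted by any real workload."
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gpu-placeholder
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gpu-placeholder
  template:
    metadata:
      labels:
        app: gpu-placeholder
    spec:
      priorityClassName: gpu-placeholder
      nodeSelector:
        node.kubernetes.io/lifecycle: spot   # assumed spot-node label
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
        resources:
          requests:
            nvidia.com/gpu: 1   # reserves the whole single-GPU node
          limits:
            nvidia.com/gpu: 1
```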

Thoughts?
