🚀 The feature, motivation and pitch
It would be great to have an API for evicting all KV cache from GPU memory. By "sleep mode" I mean that, if there are technical considerations, it is acceptable for vLLM to be unavailable for inference during this period, until we manually switch it back with a separate API call. Ideally, GPU memory usage during this period should be minimal, while the time to return to normal operation should still be very fast.
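To make the ask concrete, here is a rough sketch of how such an API might look from the offline `LLM` entry point. The `sleep()` and `wake_up()` names and their semantics are illustrative assumptions for this proposal, not existing vLLM calls:

```python
# Hypothetical usage sketch -- sleep()/wake_up() are illustrative names
# proposed here, not part of the current vLLM API.
from vllm import LLM

llm = LLM(model="meta-llama/Llama-2-70b-hf")

# Serve traffic as usual.
outputs = llm.generate("Hello, world!")

# Quiet period: release the KV cache (and ideally weights) back to the GPU,
# making vLLM unavailable for inference until explicitly woken up.
llm.sleep()

# ... run other GPU computing jobs here ...

# Resume serving; this should be much faster than a cold restart, since the
# engine process and CPU-side state are kept alive.
llm.wake_up()
outputs = llm.generate("Back online?")
```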
Alternatives
An alternative would be to support changing `--gpu-memory-utilization` dynamically, as sketched below.
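A minimal sketch of what that alternative could look like; `update_gpu_memory_utilization()` is an illustrative name, not an existing vLLM call:

```python
# Hypothetical alternative sketch -- update_gpu_memory_utilization() is an
# illustrative name proposed here, not part of the current vLLM API.
from vllm import LLM

llm = LLM(model="meta-llama/Llama-2-70b-hf", gpu_memory_utilization=0.90)

llm.update_gpu_memory_utilization(0.05)  # shrink the KV cache during idle hours
# ... run other GPU computing jobs ...
llm.update_gpu_memory_utilization(0.90)  # grow it back before peak traffic
```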
Additional context
This feature would be very helpful for use cases where, during certain periods of the day, there are no inference requests, and we would like to use the GPU for other computing jobs in the meantime. Killing the vLLM inference engine is of course an option, but it incurs significant overhead when user requests come in again and we need to bring the engine back up, especially if the model is very large and loading checkpoints takes minutes. A sleep mode (for example, evicting all KV cache from GPU memory) would therefore be a great addition to vLLM.