[Feature]: Tree attention about Speculative Decoding #3960
Comments
Thanks for your interest in contributing! FYI tree attention is a bit complicated to implement with a non-contiguous KV cache, since intra-block attention masking has not been implemented anywhere AFAIK. We can get around this by limiting vLLM to a block size of 1, but this makes it difficult to optimize verification latency because it restricts the allowed vLLM configuration space. The way I'd recommend going about this is to implement intra-block attention masking first, then integrate it with vLLM. This is the surefire way to obtain the best latency reduction possible in vLLM. The steps are as follows:
After the remaining open sourcing work is complete, I'll add some documentation for this. More background information here: https://docs.google.com/document/d/1T-JaS2T1NRfdP51qzqpyakoCXxSXTtORppiwaj5asxA/edit#heading=h.kk7dq05lc6q8
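For concreteness, the core of tree attention is an attention mask in which each drafted token attends only to its ancestors in the token tree (plus itself). The sketch below is illustrative, not vLLM code; the parent-pointer representation is an assumption:

```python
# Hypothetical sketch: build the attention mask for a token tree used in
# speculative-decoding verification. parents[i] is the index of token i's
# parent in the tree, or -1 for a root. Not vLLM internals.
import numpy as np

def tree_attention_mask(parents: list[int]) -> np.ndarray:
    n = len(parents)
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        j = i
        while j != -1:
            mask[i, j] = True  # token i attends to ancestor j (and itself)
            j = parents[j]
    return mask

# A small tree: root 0 with children 1 and 2; token 3 is a child of 1.
mask = tree_attention_mask([-1, 0, 0, 1])
```

Intra-block masking means a kernel must apply a mask like this *within* a KV-cache block, which is what standard paged attention does not support today.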
Tree attention mechanisms can also be utilized to generate multiple outcomes from the same prompt by varying the seeds. This approach is an effective strategy to ensure the stability of results produced by Large Language Models (LLMs). For instance, when employing an LLM as a scoring tool to derive metrics, one could sample the LLM's outputs multiple times. By averaging these samples, a more reliable result can be obtained. This feature might become available following the implementation of tree attention mechanisms.
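The averaging idea might look like the following sketch. The scoring call is a stand-in stub (not a real vLLM API); only the pattern of seed-varied sampling plus averaging is the point:

```python
# Illustrative sketch of multi-seed scoring for stability. score_once is a
# hypothetical stand-in for an LLM scoring call, deterministic per seed.
import random
from statistics import mean

def score_once(prompt: str, seed: int) -> float:
    # Stub: a real implementation would query the model with this seed;
    # `prompt` is unused here because this is only a placeholder.
    rng = random.Random(seed)
    return rng.uniform(0.0, 1.0)

def stable_score(prompt: str, n_samples: int = 8) -> float:
    # Average several seed-varied samples to reduce run-to-run variance.
    return mean(score_once(prompt, seed) for seed in range(n_samples))
```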
@cadedaniel
@yukavio you should talk with @LiuXiaoxuanPKU , who is adding MQA scoring to vLLM
🚀 The feature, motivation and pitch
I want to implement tree attention for vLLM, as mentioned in the roadmap. But I don't know whether I should implement it based on the paged-attention kernel implemented in vLLM or on FlashInfer, because I found that there is a plan to replace this kernel in this PR.
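Independent of the kernel choice, the verification step this would feed into can be sketched as follows. This is a simplified greedy-acceptance walk along one root-to-leaf path of the drafted tree; the names and shapes are assumptions, not vLLM internals:

```python
# Hypothetical sketch: accept draft tokens along one path while they match
# the target model's greedy choice, stopping at the first mismatch.
import numpy as np

def verify_path(draft_tokens: list[int], target_logits: np.ndarray) -> list[int]:
    """draft_tokens: tokens on one path; target_logits: (len(path), vocab)."""
    accepted = []
    for tok, logits in zip(draft_tokens, target_logits):
        if int(np.argmax(logits)) != tok:
            break  # first mismatch: reject the rest of this path
        accepted.append(tok)
    return accepted
```

Tree attention lets the target model score all paths of the tree in one forward pass, after which a loop like this picks the accepted prefix.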
Alternatives
No response
Additional context
No response