Is Infini-attention support possible? #7213
Labels
enhancement
New feature or request
Comments
The paper describes a new model architecture, so supporting it would require some implementation work. The model released alongside the paper is a "very early checkpoint", so it might be wise to wait until at least one fully baked model exists for this architecture. It's a very cool model though, so it might be worth it.
Prerequisites
Please answer the following questions for yourself before submitting an issue.
Feature Description
Infini-attention, as described in the paper: https://arxiv.org/pdf/2404.07143
There is a python implementation here: https://github.com/mustafaaljadery/gemma-2B-10M
Motivation
This new attention mechanism allows an effectively unlimited context without the quadratic cost of full attention. There is a proof of concept running a 10M-token context in under 32 GB of RAM. I feel this would be extremely useful to support, but I'm uncertain what changes, if any, would be required in llama.cpp.
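To make the scaling claim concrete, here is a rough back-of-the-envelope comparison. All model dimensions below are assumptions chosen for illustration, not the actual config of any specific model: a vanilla fp16 KV cache grows linearly with the context, while Infini-attention carries a fixed-size compressive memory per head.

```python
# Rough memory comparison at 10M tokens of context.
# All model dimensions are assumed/illustrative, not a specific model's config.
n_layers   = 18        # assumed
n_kv_heads = 1         # assumed (MQA-style)
head_dim   = 256       # assumed
fp16       = 2         # bytes per element
context    = 10_000_000

# Vanilla attention: cache K and V for every past token.
kv_cache_bytes = context * n_layers * n_kv_heads * head_dim * 2 * fp16

# Infini-attention: a fixed (head_dim x head_dim) memory matrix plus a
# head_dim normalization vector per head, independent of context length.
compressive_bytes = n_layers * n_kv_heads * (head_dim * head_dim + head_dim) * fp16

print(f"vanilla KV cache  : {kv_cache_bytes / 2**30:.1f} GiB")    # ~171.7 GiB
print(f"compressive state : {compressive_bytes / 2**20:.2f} MiB")  # ~2.26 MiB
```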
Possible Implementation
The Python implementation linked above could serve as a reference: https://github.com/mustafaaljadery/gemma-2B-10M
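For anyone looking into what an implementation would involve, below is a minimal single-head NumPy sketch of the mechanism as described in the paper: memory retrieval, local causal attention within a segment, a gated combination of the two, and the "linear" memory update. It is illustrative only; the function and variable names are made up and the code is not taken from llama.cpp or the gemma-2B-10M repo.

```python
import numpy as np

def elu_plus_one(x):
    # sigma(x) = ELU(x) + 1, the nonlinearity the paper uses for memory ops
    return np.where(x > 0, x + 1.0, np.exp(x))

def infini_attention_segment(q, k, v, mem, z, beta):
    """Process one segment for a single attention head (illustrative sketch).
    q, k, v : (seg_len, d)  queries/keys/values of the current segment
    mem     : (d, d)        compressive memory carried over from past segments
    z       : (d,)          normalization term carried over from past segments
    beta    : float         learned scalar gate mixing memory vs. local attention
    Returns the segment output plus the updated (mem, z).
    """
    seg_len, d = q.shape

    # 1) Read from the compressive memory (linear-attention style retrieval).
    sig_q = elu_plus_one(q)                                      # (seg_len, d)
    a_mem = (sig_q @ mem) / (sig_q @ z + 1e-6)[:, None]

    # 2) Ordinary causal dot-product attention within the segment.
    scores = q @ k.T / np.sqrt(d)
    scores += np.triu(np.full((seg_len, seg_len), -np.inf), 1)   # causal mask
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    a_dot = weights @ v

    # 3) Gate the two read paths together.
    g = 1.0 / (1.0 + np.exp(-beta))
    out = g * a_mem + (1.0 - g) * a_dot

    # 4) Fold this segment's keys/values into the memory ("linear" update),
    #    so the carried state stays a fixed d x d matrix for any context length.
    sig_k = elu_plus_one(k)
    mem = mem + sig_k.T @ v
    z = z + sig_k.sum(axis=0)
    return out, mem, z

# Streaming usage: the carried state never grows with the number of segments.
d, seg_len = 64, 128
rng = np.random.default_rng(0)
mem, z = np.zeros((d, d)), np.zeros(d)
for _ in range(4):
    q, k, v = (rng.standard_normal((seg_len, d)) for _ in range(3))
    out, mem, z = infini_attention_segment(q, k, v, mem, z, beta=0.0)
```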
Thanks so much for looking!