Describe the bug
I'm using a quantized version of starsnatched's MemGPT-DPO-MoE-test model, and it assumes that page 1 is the first page, so it always misses the first page of possible results.

Can we set the default page to 1 and adjust the pagination formulas accordingly? If the LLM sends `"page": 0` in the query, we can simply clamp it to 1, so models that default to 0 will still work.

I'd be happy to open a PR for this if it's something that would be considered.
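A minimal sketch of the proposed change, using hypothetical helper names (`normalize_page`, `page_bounds` are illustrative, not MemGPT's actual API): pages are treated as 1-based, and a model-sent `0` is clamped up to `1` so both conventions return the first slice of results.

```python
def normalize_page(page: int) -> int:
    """Treat page 1 as the first page; clamp a model-sent 0 (or below) up to 1."""
    return max(page, 1)


def page_bounds(page: int, page_size: int) -> tuple[int, int]:
    """Return (start, end) offsets for a 1-based page number."""
    page = normalize_page(page)
    # With 1-based pages, the offset formula shifts by one.
    start = (page - 1) * page_size
    return start, start + page_size


# Pages 0 and 1 both map to the first slice, so 0-based models keep working.
print(page_bounds(0, 10))  # (0, 10)
print(page_bounds(1, 10))  # (0, 10)
print(page_bounds(2, 10))  # (10, 20)
```

This way, models that assume either 0-based or 1-based pagination see the first page of results, and the behavior for all subsequent pages is unchanged.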