
🪙 feat: Configure Max Context and Output Tokens #2648

Merged
10 commits merged into main from max-context-tokens on May 9, 2024

Conversation

danny-avila (Owner) commented on May 9, 2024

Summary

Closes #2549

Allows users to configure the maximum context tokens for all supported endpoints (OpenAI, Anthropic, Google, Plugins, custom), as well as max_tokens (OpenAI/custom), via conversation parameters/presets.

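For illustration, a conversation preset carrying the two new fields might look like the sketch below (field names follow the PR description; the exact preset shape and endpoint keys are assumptions):

```ts
// Hypothetical preset object; not the actual LibreChat schema.
interface TokenLimitedPreset {
  endpoint: string;           // e.g. 'openAI', 'anthropic', 'google', 'gptPlugins', 'custom'
  model: string;
  maxContextTokens?: number;  // caps how much conversation history is sent to the model
  max_tokens?: number;        // caps completion length (OpenAI/custom endpoints only)
}

const preset: TokenLimitedPreset = {
  endpoint: 'openAI',
  model: 'gpt-4-turbo',
  maxContextTokens: 8000, // prune older messages beyond roughly 8k tokens of context
  max_tokens: 1024,       // request at most 1024 output tokens from the API
};
```

In short, maxContextTokens bounds the prompt (conversation history) size, while max_tokens bounds the length of the generated reply.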

Other changes

  • fixes Ollama vision issues on follow-up messages after an image description, and adds Firebase compatibility (see the sketch after this list)
    • once an image is attached to the conversation, the vision model continues to be used by default.
    • use the "Resend Files" option to disable this behavior, so vision is only used when a file is explicitly attached
  • downgrades a frequent Meilisearch "error" log to a "debug" log
  • adds a DynamicInputNumber component for the new feature
  • brings back the mobile nav and improves title styling
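
A minimal sketch of the described Ollama vision behavior (the helper name and flags are assumptions for illustration; per the commit list below, the actual code keys off message_file_map):

```ts
// Decide whether the vision model should handle a request.
// Assumed shape; not LibreChat's actual implementation.
function shouldUseVisionModel(opts: {
  resendFiles: boolean;            // the "Resend Files" conversation option
  currentMessageHasImage: boolean; // an image is attached to this message
  conversationHasImage: boolean;   // an image was attached earlier in the conversation
}): boolean {
  if (opts.currentMessageHasImage) {
    return true; // an explicitly attached image always routes to the vision model
  }
  // Default behavior: stay on the vision model for follow-ups once an image exists.
  // Turning "Resend Files" off opts out, so vision is used only on explicit attachments.
  return opts.resendFiles && opts.conversationHasImage;
}
```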

Change Type

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)

Checklist

  • My code adheres to this project's style guidelines
  • I have performed a self-review of my own code
  • I have commented in any complex areas of my code
  • I have made pertinent documentation changes
  • My changes do not introduce new warnings
  • I have written tests demonstrating that my changes are effective or that my feature works
  • Local unit tests pass with my changes
  • Any changes dependent on mine have been merged and published in downstream modules.
  • New documents have been locally validated with mkdocs

danny-avila merged commit 6ba7f60 into main on May 9, 2024
3 checks passed
danny-avila deleted the max-context-tokens branch on May 9, 2024 at 17:27
danny-avila referenced this pull request May 9, 2024
* chore: remove unused mobile nav

* fix: mobile nav fix for 'more' and 'archive' buttons div

* refactor(useTextarea): rewrite handleKeyUp for backwards compatibility

* experimental: add processing delay to azure streams for better performance/UX

* experimental: adjust gpt-3 azureDelay

* fix: perplexity titles
@1Mr-Styler

Nice! What's the default value of "system"?

jinzishuai pushed a commit to aitok-ai/LibreChat that referenced this pull request May 20, 2024
* chore: make frequent 'error' log into 'debug' log

* feat: add maxContextTokens as a conversation field

* refactor(settings): increase popover height

* feat: add DynamicInputNumber and maxContextTokens to all endpoints that support it (frontend), fix schema

* feat: maxContextTokens handling (backend)

* style: revert popover height

* feat: max tokens

* fix: Ollama Vision firebase compatibility

* fix: Ollama Vision, use message_file_map to determine multimodal request

* refactor: bring back MobileNav and improve title styling
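
The "maxContextTokens handling (backend)" commit above presumably trims conversation history to fit the configured budget. A rough, self-contained sketch of that idea (names and the token estimate are assumptions, not LibreChat's code):

```ts
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };

// Crude token estimate (~4 characters per token); real code would use a tokenizer.
const countTokens = (text: string): number => Math.ceil(text.length / 4);

// Keep the most recent messages that fit within maxContextTokens.
function fitToContext(messages: ChatMessage[], maxContextTokens: number): ChatMessage[] {
  const kept: ChatMessage[] = [];
  let used = 0;
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = countTokens(messages[i].content);
    if (used + cost > maxContextTokens) break;
    kept.unshift(messages[i]);
    used += cost;
  }
  return kept;
}
```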
Development

Successfully merging this pull request may close these issues.

Enhancement: Make output configurable
2 participants