
Expose LLM Evaluator Prompts #21

Open
joeferraratonic opened this issue Jan 13, 2024 · 0 comments
Labels: good first issue (Good for newcomers)

Comments

@joeferraratonic (Collaborator)

Currently, the LLM evaluator prompts for a given metric are not exposed by the class that computes the metric. Make these prompts explicitly accessible from the metric class.
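
A minimal sketch of what this could look like; the class name, attributes, and prompt text below are illustrative assumptions, not the actual tonic_validate API:

```python
# Illustrative sketch only -- the class, attributes, and prompt text are
# hypothetical, not the actual tonic_validate implementation.


class ExampleMetric:
    """Metric that makes its LLM evaluator prompt explicitly accessible."""

    name: str = "example_metric"

    # Storing the prompt as a class-level attribute lets callers read it
    # without running an evaluation.
    prompt: str = (
        "Considering the reference answer, does the generated answer "
        "agree with it? Answer true or false."
    )

    def score(self, llm_answer: str, reference_answer: str) -> float:
        # A real metric would send self.prompt plus the answers to the
        # evaluator LLM; the scoring logic is omitted from this sketch.
        raise NotImplementedError


metric = ExampleMetric()
print(metric.prompt)  # the evaluator prompt is now exposed to callers
```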

This change will improve the tonic_validate integration in llama_index, since llama_index evaluators have methods that return their prompts. Once this change lands, a follow-up PR to llama_index should add the same functionality to the tonic_validate integration there; a rough sketch of that side follows the reference below.

Reference: run-llama/llama_index#10000 (comment)
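
For the follow-up integration work, here is a rough sketch of how a wrapper on the llama_index side could forward the newly exposed prompts. All names here are hypothetical stand-ins, not the real tonic_validate or llama_index classes:

```python
# Hypothetical sketch -- _MetricStub and TonicValidateEvaluatorSketch are
# illustrative stand-ins, not the real tonic_validate/llama_index classes.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class _MetricStub:
    # Stand-in for a tonic_validate metric that now exposes its prompt.
    name: str
    prompt: str


class TonicValidateEvaluatorSketch:
    """Wrapper that surfaces each metric's evaluator prompt, mirroring
    the prompt-returning methods llama_index evaluators already have."""

    def __init__(self, metrics: List[_MetricStub]) -> None:
        self.metrics = metrics

    def get_prompts(self) -> Dict[str, str]:
        # Forward each metric's exposed prompt, keyed by metric name.
        return {m.name: m.prompt for m in self.metrics}


evaluator = TonicValidateEvaluatorSketch(
    [_MetricStub(name="answer_consistency", prompt="<evaluator prompt>")]
)
print(evaluator.get_prompts())
```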

@akamor akamor added the good first issue Good for newcomers label Feb 3, 2024