prompttools 0.0.45 - Introducing Observability features!

@NivekT NivekT released this 02 Jan 06:57
· 31 commits to main since this release
326e8c4

Launch of PromptTools Observability (Private Beta)

We're excited to announce the addition of observability features on our hosted platform. It allows your team to monitor and evaluate your production usage of LLMs with just a one-line code change:

import prompttools.logger

The new features are integrated with our open-source library as well as the PromptTools playground. Our goal is to enable you to deploy LLM applications reliably and quickly, and to observe any issues in real time.
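As a minimal sketch of the one-line change described above: importing the logger module before making your LLM calls is what enables logging. The `try`/`except` guard here is our addition so the snippet also runs in environments where `prompttools` is not installed; it is not required in a real deployment.

```python
# Hedged sketch of enabling PromptTools observability.
# In production, the single line `import prompttools.logger` is the change;
# the guard below only keeps this snippet runnable without the package.
try:
    import prompttools.logger  # noqa: F401  # side-effect import enables logging
    logging_enabled = True
except ImportError:
    logging_enabled = False

print(f"prompttools logging enabled: {logging_enabled}")
```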

If you are interested in trying out the platform, please reach out to us.

We remain committed to expanding this open-source library. We look forward to building more development tools that enable you to iterate faster with AI models. Please have a look at our open issues to see what features are coming.

Major Feature Updates

OpenAI API Updates

  • We have updated various experiments and examples to use OpenAI's latest features and Python API
  • Make sure you are using `openai` version 1.0+
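If you want to guard against a pre-1.0 client at runtime, a simple stdlib version check works. This helper is our own illustration (not part of `prompttools` or `openai`); it compares only the leading numeric components of a version string.

```python
# Minimal stdlib check (an illustration, not part of prompttools itself):
# verify that an installed package version satisfies a 1.0+ requirement.
def meets_minimum(version: str, minimum: tuple = (1, 0)) -> bool:
    """Compare the leading numeric components of a version string."""
    parts = tuple(int(p) for p in version.split(".")[:2])
    return parts >= minimum

print(meets_minimum("1.6.1"))   # True: a 1.x client works with the updated examples
print(meets_minimum("0.28.1"))  # False: pre-1.0 clients need upgrading
```

In practice you would pass `openai.__version__` (or `importlib.metadata.version("openai")`) as the `version` argument.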

Moderation API

  • We have integrated OpenAI's moderation API as an eval function
  • This allows you to check whether your experiments' responses (from any LLM) violate content moderation policies, such as violence or harassment
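To illustrate the shape of such an eval, the sketch below extracts the triggered categories from a moderation result. The `sample_result` dict is mocked data mirroring the category structure of OpenAI's moderation endpoint, not a live API call, and `flagged_categories` is a hypothetical helper of our own, not the prompttools eval function itself.

```python
# Hedged sketch: how a moderation-based eval might flag a response.
# `sample_result` is mocked data shaped like a moderation API result.
def flagged_categories(moderation_result: dict) -> list:
    """Return the names of moderation categories that were triggered."""
    return [name for name, hit in moderation_result["categories"].items() if hit]

sample_result = {  # mocked moderation output, for illustration only
    "flagged": True,
    "categories": {"violence": True, "harassment": False, "self-harm": False},
}
print(flagged_categories(sample_result))  # ['violence']
```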

Hosted APIs

  • Production logging API
    • Contact us if you would like to get started with our hosted observability features!

Community

If you have suggestions on the API or use cases you'd like to see covered, please open a GitHub issue. We'd love to hear your thoughts and feedback. As always, we welcome new contributors to our repo, and we have a few good first issues to get you started.

Full Changelog: v0.0.41...v0.0.45