
Introduces MockLLM and provides an example #130

Open · wants to merge 5 commits into main

Conversation

@delip (Contributor) commented on Apr 1, 2024:

Rationale: For testing and other purposes, we need a Log10 LLM instance that mimics an LLM chat call but doesn't rely on existing LLMs, local or otherwise. The NoopLLM class provides an interface that always produces a constant output. However, for testing feedback/logging, you might want to have a mock LLM that produces different but deterministic outputs for different inputs. This PR provides that.

As a scoping choice, the current implementation provides only .chat() to facilitate my current work, leaving .text() for a future implementation.
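
To illustrate the distinction being drawn here, compare a constant responder with an input-dependent one (both functions are illustrative, not part of the diff):

```python
# NoopLLM-like behavior: every input maps to the same constant output.
def constant_reply(text: str) -> str:
    return "OK"

# MockLLM-style behavior: different inputs yield different,
# but fully repeatable, outputs.
def echo_reply(text: str) -> str:
    return f"echo: {text}"
```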

@wenzhe-log10 (Contributor) left a comment:


Thanks for contributing.

High-level question: what's the use case for MockLLM? Do we expect the user to set the prompt messages and the response message directly? This relates to several comments below on the MockLLM chat interface.

Please also run `ruff check log10 examples` and `ruff format log10 examples` for lint/format; the current check failed.

```python
mock_function: Callable = None
If mock_function is not provided, it is assigned to an identity function.
'''
super().__init__(hparams, log10_config)
```
@wenzhe-log10 (Contributor):

How about marking `{"model": "MockLLM"}`, maybe when `hparams["model"]` is None?
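
A minimal sketch of that suggestion, written as a hypothetical helper (`_with_default_model` is illustrative, not in the diff):

```python
def _with_default_model(hparams: dict | None) -> dict:
    # Sketch: default the model name for logging when none is supplied.
    hparams = dict(hparams or {})
    if hparams.get("model") is None:
        hparams["model"] = "MockLLM"
    return hparams
```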

log10/llm.py (outdated)

```python
request = self.chat_request(messages, hparams)

start_time = time.perf_counter()
content = self.mock_function(messages[-1].content)
```
@wenzhe-log10 (Contributor):

It would be better to check that `messages` is not empty before indexing `messages[-1]`.
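
For example, a guard along these lines in place of the excerpt above (a sketch, not the committed code):

```python
# Fail fast on an empty message list before indexing messages[-1].
if not messages:
    raise ValueError("messages must contain at least one Message")
content = self.mock_function(messages[-1].content)
```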

@wenzhe-log10 (Contributor):

`self.mock_function(messages[-1].content)`
Does this assume `mock_function` is the default `lambda x: x`? What if `self.mock_function` needs to take all of `messages`?

@delip (Contributor, Author):

@wenzhe-log10 yes, an extra check won't hurt; will do.

Yes, mock_function assumes that today. However, I will update this with another commit that supports what you are asking for through a different interface.
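
For illustration, one hypothetical shape for such an interface would let the mock consume the full conversation; the chat implementation would then pass `messages` rather than `messages[-1].content` (a sketch, not the committed design):

```python
from typing import List

from log10.llm import Message

def summarize_conversation(messages: List[Message]) -> str:
    # Deterministic output derived from the whole message list,
    # not just the final user message.
    return " | ".join(m.content for m in messages)
```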

```python
{
    "message": {
        "role": "assistant",
        "content": content,
```
@wenzhe-log10 (Contributor):

It seems the completion is the last message of `messages`, and `request` also contains the full `messages`. Depending on how you want to use the logs in the future, you may want to remove the last message (the completion) from `request`.
Or maybe have a `mock_function` do that, one that takes the full `messages`.
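
A sketch of the first option, assuming `chat_request` simply packages the messages it is given:

```python
# Sketch: build the logged request from everything except the
# final message, which is treated as the completion.
request = self.chat_request(messages[:-1], hparams)
```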

@delip (Contributor, Author):

Please check the `chat_expected` method I will be committing soon.

log10/llm.py (outdated)

```python
'''
MockLLM is an LLM interface that allows you to mock the behavior of an LLM using any Python function.
It is useful for log10 testing and development without having to set up or call a real LLM.
See examples/feedback/echo for an example.
```
@wenzhe-log10 (Contributor):

Could you please add a simple example here (maybe in doctest format)? It might be easier to follow than the examples above, which combine feedback etc.

Format it like this, but make the code runnable:

```python
Example:
    >>> from log10.llm import MockLLM, Message, Log10Config
    >>> client = MockLLM(mock_function=noisy_predictor, log10_config=config)
    >>> response = client.chat([Message(role="user", content=str(x))])
```
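
For reference, a runnable variant might look like the following; the lambda is an illustrative stand-in, and omitting `log10_config` assumes it is optional:

```python
Example:
    >>> from log10.llm import MockLLM, Message
    >>> client = MockLLM(mock_function=lambda text: text.upper())
    >>> response = client.chat([Message(role="user", content="hi")])
```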

@delip (Contributor, Author):

Sure. I like yours. I will add that.

@wenzhe-log10 removed their assignment on Apr 9, 2024.
@delip requested a review from @wenzhe-log10 on April 11, 2024 at 14:09.
@wenzhe-log10 (Contributor) left a comment:

Cool, looks good.
Do we still need the two example files in examples/feedback/echo? Maybe remove them, since you have added the docstring now.
The last thing is to fix the ruff format check (it failed here).
