Replies: 7 comments 8 replies
-
🤖 To pass the `seed` parameter to the OpenAI chat API and retrieve the `system_fingerprint` from the response using LangChain, you need to modify the methods that interact with the OpenAI API in the LangChain codebase.

First, add the `seed` parameter to the `_default_params` property of the `OpenAI` class. This property returns the default parameters for calling the OpenAI API:

```python
@property
def _default_params(self) -> Dict[str, Any]:
    """Get the default parameters for calling the OpenAI API."""
    normal_params: Dict[str, Any] = {
        "temperature": self.temperature,
        "top_p": self.top_p,
        "frequency_penalty": self.frequency_penalty,
        "presence_penalty": self.presence_penalty,
        "n": self.n,
        "logit_bias": self.logit_bias,
        "seed": <your_seed_value>,  # Add your seed value here
    }
    # ... rest of the code
```

In the above code, replace `<your_seed_value>` with the seed you want to send. Second, modify the methods that handle the response from the OpenAI API to include `system_fingerprint`:

```python
def create_llm_result(
    self,
    choices: Any,
    prompts: List[str],
    token_usage: Dict[str, int],
    *,
    system_fingerprint: Optional[str] = None,
) -> LLMResult:
    """Create the LLMResult from the choices and prompts."""
    generations = []
    for i, _ in enumerate(prompts):
        sub_choices = choices[i * self.n : (i + 1) * self.n]
        generations.append(
            [
                Generation(
                    text=choice["text"],
                    generation_info=dict(
                        finish_reason=choice.get("finish_reason"),
                        logprobs=choice.get("logprobs"),
                        system_fingerprint=system_fingerprint,  # Add this line
                    ),
                )
                for choice in sub_choices
            ]
        )
    llm_output = {"token_usage": token_usage, "model_name": self.model_name}
    if system_fingerprint:
        llm_output["system_fingerprint"] = system_fingerprint
    return LLMResult(generations=generations, llm_output=llm_output)
```

The `system_fingerprint` is retrieved from the response and added to the `generation_info` in the `create_llm_result` method.

In the context shared, the `_get_chat_params` method is where the `seed` parameter should be added to the `params` dictionary. The `system_fingerprint` should be retrieved from the `full_response` dictionary in the `_generate` and `_agenerate` methods and passed to `create_llm_result`. In the context of the `PromptLayerChatOpenAI` class, you would likewise need to modify the `_generate` and `_agenerate` methods to include the `seed` parameter in the `params` dictionary, retrieve the `system_fingerprint` from the `full_response` dictionary, and pass it to `create_llm_result`.

You can find the relevant code in the following files in the LangChain repository:

Please let me know if you need further assistance.

Sources
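As a self-contained illustration of the `create_llm_result` change above, here is a runnable sketch with stand-in `Generation`/`LLMResult` dataclasses (it deliberately does not import LangChain, and the fingerprint value is a placeholder). It shows the fingerprint landing both in each generation's `generation_info` and in `llm_output`:

```python
from dataclasses import dataclass
from typing import List, Optional

# Stand-ins for LangChain's Generation / LLMResult, so the sketch runs alone.
@dataclass
class Generation:
    text: str
    generation_info: Optional[dict] = None

@dataclass
class LLMResult:
    generations: List[List[Generation]]
    llm_output: Optional[dict] = None

def create_llm_result(choices, prompts, token_usage, n=1,
                      model_name="gpt-4", system_fingerprint=None):
    """Mirror of the modified method: group `choices` per prompt and
    attach the fingerprint to each generation and to llm_output."""
    generations = []
    for i, _ in enumerate(prompts):
        sub_choices = choices[i * n : (i + 1) * n]
        generations.append([
            Generation(
                text=choice["text"],
                generation_info=dict(
                    finish_reason=choice.get("finish_reason"),
                    logprobs=choice.get("logprobs"),
                    system_fingerprint=system_fingerprint,
                ),
            )
            for choice in sub_choices
        ])
    llm_output = {"token_usage": token_usage, "model_name": model_name}
    if system_fingerprint:
        llm_output["system_fingerprint"] = system_fingerprint
    return LLMResult(generations=generations, llm_output=llm_output)

result = create_llm_result(
    choices=[{"text": "Hello!", "finish_reason": "stop"}],
    prompts=["Say hi"],
    token_usage={"total_tokens": 12},
    system_fingerprint="fp_demo",  # placeholder fingerprint for the sketch
)
```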
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
-
The bot's response seems to have resolved my issue with the `seed` parameter, though I can't fully validate this myself. The `system_fingerprint`, however, remains unaddressed. Would it be advisable for me to submit a pull request for this? I'm also curious whether the LangChain team intends to incorporate this feature in a future update.
-
I want to use this feature. Could the LangChain team please indicate whether/when they plan to add support for this parameter? Thanks!
-
We would also like to use OpenAI's new `seed` parameter.
-
You can pass non-standard parameters to the OpenAI call by providing a `model_kwargs` dictionary when instantiating the `ChatOpenAI` object, the same way you do for `top_p` and other params: `model_kwargs={"seed": 235, "top_p": 0.01}`
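To illustrate the mechanism, here is a minimal sketch of how such a dictionary ends up in the request payload (this is an assumption about `ChatOpenAI`'s internals for illustration only — the real class does the merge for you, so in practice you just pass `model_kwargs` to the constructor):

```python
# Sketch: ChatOpenAI-style classes merge model_kwargs into the request body,
# which is how a non-standard field like "seed" reaches the OpenAI endpoint.
def build_payload(messages, model, model_kwargs):
    payload = {"model": model, "messages": messages}
    payload.update(model_kwargs)  # extra params are merged in alongside standard ones
    return payload

payload = build_payload(
    messages=[{"role": "user", "content": "hi"}],
    model="gpt-4",
    model_kwargs={"seed": 235, "top_p": 0.01},
)
# payload now carries seed=235 and top_p=0.01 next to model/messages
```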
-
@radsimu any idea on how I can access the system_fingerprint value while streaming the response?
-
How are people doing this nowadays? I'm using azure openai as my provider, and when using
-
https://platform.openai.com/docs/guides/text-generation/reproducible-outputs
OpenAI has updated their API documentation. How can I pass the `seed` parameter and retrieve the `system_fingerprint` from the response using LangChain?
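For reference, the raw flow from that guide is: send `seed` in the request, then read `system_fingerprint` off the response. The sketch below stubs out the HTTP call with a canned response dict so it runs self-contained; with the real `openai` SDK the call would instead be `client.chat.completions.create(model=..., messages=..., seed=42)`, and the fingerprint value here is a made-up placeholder:

```python
# Stubbed sketch of OpenAI's reproducible-outputs flow (no network needed).
def fake_chat_completion(payload):
    # Stand-in for the API call; a real response carries the same top-level
    # "system_fingerprint" field describing the backend configuration.
    return {
        "choices": [{"message": {"role": "assistant", "content": "Hello!"}}],
        "system_fingerprint": "fp_stub",  # placeholder, not a real fingerprint
    }

payload = {
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "hi"}],
    "seed": 42,  # same seed + same system_fingerprint -> mostly deterministic output
}
response = fake_chat_completion(payload)
fingerprint = response["system_fingerprint"]
```

Comparing the fingerprint across runs tells you whether OpenAI's backend changed between requests, which is why the guide recommends logging it alongside the seed.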