
[Bug]: pydantic.v1.error_wrappers.ValidationError: 1 validation error for Generation type unexpected value; permitted: 'Generation' (type=value_error.const; given=ChatGeneration; permitted=('Generation',)) #585

Open
sdd031215 opened this issue Dec 7, 2023 · 18 comments
Labels
next version It will be implemented in the next version

Comments

@sdd031215

Current Behavior

def init_gptcache_map(cache_obj: Cache):
    cache_base = CacheBase('sqlite')
    vector_base = VectorBase(
        'milvus',
        host='xx',
        port='19530',
        dimension=len(cache_embeddings.to_embeddings('abc')),
        collection_name='chatbot',
    )
    data_manager = get_data_manager(cache_base, vector_base)
    cache_obj.init(
        pre_embedding_func=get_content_func,
        embedding_func=cache_embeddings.to_embeddings,
        data_manager=data_manager,
        similarity_evaluation=SearchDistanceEvaluation(),  # or ExactMatchEvaluation()
    )

langchain.llm_cache = GPTCache(init_gptcache_map)

The following error occurred during the execution of langchain's tool:
" pydantic.v1.error_wrappers.ValidationError: 1 validation error for Generation type unexpected value; permitted: 'Generation' (type=value_error.const; given=ChatGeneration; permitted=('Generation',))"

@SimFG

Expected Behavior

No response

Steps To Reproduce

No response

Environment

No response

Anything else?

No response

@aravindarc

Any update on resolving this issue, or any workaround? If someone guides me, I'd like to attempt fixing this.

@SimFG
Collaborator

SimFG commented Jan 16, 2024

@aravindarc If you also have this problem, you can send me the error stack; then I can look into the cause of the problem and give you a concrete solution.

@Aneerudh2k2

Aneerudh2k2 commented Feb 26, 2024

Hey guys, same issue here; my code is the same as @sdd031215's. Here is the error stack below. Can you help me @SimFG @aravindarc @technicalpickles @jmahmood?

Traceback (most recent call last):
  File "/Users/aneerudhm/anaconda3/envs/llm/lib/python3.11/site-packages/flask/app.py", line 1488, in __call__
    return self.wsgi_app(environ, start_response)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/aneerudhm/anaconda3/envs/llm/lib/python3.11/site-packages/flask/app.py", line 1466, in wsgi_app
    response = self.handle_exception(e)
               ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/aneerudhm/anaconda3/envs/llm/lib/python3.11/site-packages/flask_cors/extension.py", line 176, in wrapped_function
    return cors_after_request(app.make_response(f(*args, **kwargs)))
                                                ^^^^^^^^^^^^^^^^^^^^
  File "/Users/aneerudhm/anaconda3/envs/llm/lib/python3.11/site-packages/flask/app.py", line 1463, in wsgi_app
    response = self.full_dispatch_request()
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/aneerudhm/anaconda3/envs/llm/lib/python3.11/site-packages/flask/app.py", line 872, in full_dispatch_request
    rv = self.handle_user_exception(e)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/aneerudhm/anaconda3/envs/llm/lib/python3.11/site-packages/flask_cors/extension.py", line 176, in wrapped_function
    return cors_after_request(app.make_response(f(*args, **kwargs)))
                                                ^^^^^^^^^^^^^^^^^^^^
  File "/Users/aneerudhm/anaconda3/envs/llm/lib/python3.11/site-packages/flask/app.py", line 870, in full_dispatch_request
    rv = self.dispatch_request()
         ^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/aneerudhm/anaconda3/envs/llm/lib/python3.11/site-packages/flask/app.py", line 855, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)  # type: ignore[no-any-return]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/aneerudhm/Documents/my documents/project_files/folder_name/nodejs-backend/src/langchain.microservice/app.py", line 754, in summary_gen
    response = ask_query_to_qa_chain(
               ^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/aneerudhm/Documents/my documents/project_files/folder_name/nodejs-backend/src/langchain.microservice/app.py", line 421, in ask_query_to_qa_chain
    qa_chain(
  File "/Users/aneerudhm/anaconda3/envs/llm/lib/python3.11/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
    return wrapped(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/aneerudhm/anaconda3/envs/llm/lib/python3.11/site-packages/langchain/chains/base.py", line 378, in __call__
    return self.invoke(
           ^^^^^^^^^^^^
  File "/Users/aneerudhm/anaconda3/envs/llm/lib/python3.11/site-packages/langchain/chains/base.py", line 163, in invoke
    raise e
  File "/Users/aneerudhm/anaconda3/envs/llm/lib/python3.11/site-packages/langchain/chains/base.py", line 153, in invoke
    self._call(inputs, run_manager=run_manager)
  File "/Users/aneerudhm/anaconda3/envs/llm/lib/python3.11/site-packages/langchain/chains/combine_documents/base.py", line 137, in _call
    output, extra_return_dict = self.combine_docs(
                                ^^^^^^^^^^^^^^^^^^
  File "/Users/aneerudhm/anaconda3/envs/llm/lib/python3.11/site-packages/langchain/chains/combine_documents/stuff.py", line 244, in combine_docs
    return self.llm_chain.predict(callbacks=callbacks, **inputs), {}
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/aneerudhm/anaconda3/envs/llm/lib/python3.11/site-packages/langchain/chains/llm.py", line 293, in predict
    return self(kwargs, callbacks=callbacks)[self.output_key]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/aneerudhm/anaconda3/envs/llm/lib/python3.11/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
    return wrapped(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/aneerudhm/anaconda3/envs/llm/lib/python3.11/site-packages/langchain/chains/base.py", line 378, in __call__
    return self.invoke(
           ^^^^^^^^^^^^
  File "/Users/aneerudhm/anaconda3/envs/llm/lib/python3.11/site-packages/langchain/chains/base.py", line 163, in invoke
    raise e
  File "/Users/aneerudhm/anaconda3/envs/llm/lib/python3.11/site-packages/langchain/chains/base.py", line 153, in invoke
    self._call(inputs, run_manager=run_manager)
  File "/Users/aneerudhm/anaconda3/envs/llm/lib/python3.11/site-packages/langchain/chains/llm.py", line 103, in _call
    response = self.generate([inputs], run_manager=run_manager)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/aneerudhm/anaconda3/envs/llm/lib/python3.11/site-packages/langchain/chains/llm.py", line 115, in generate
    return self.llm.generate_prompt(
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/aneerudhm/anaconda3/envs/llm/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 544, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/aneerudhm/anaconda3/envs/llm/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 408, in generate
    raise e
  File "/Users/aneerudhm/anaconda3/envs/llm/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 398, in generate
    self._generate_with_cache(
  File "/Users/aneerudhm/anaconda3/envs/llm/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 585, in _generate_with_cache
    cache_val = llm_cache.lookup(prompt, llm_string)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/aneerudhm/anaconda3/envs/llm/lib/python3.11/site-packages/langchain_community/cache.py", line 807, in lookup
    return [
           ^
  File "/Users/aneerudhm/anaconda3/envs/llm/lib/python3.11/site-packages/langchain_community/cache.py", line 808, in <listcomp>
    Generation(**generation_dict) for generation_dict in json.loads(res)
  File "/Users/aneerudhm/anaconda3/envs/llm/lib/python3.11/site-packages/langchain_core/load/serializable.py", line 120, in __init__
    super().__init__(**kwargs)
  File "/Users/aneerudhm/anaconda3/envs/llm/lib/python3.11/site-packages/pydantic/v1/main.py", line 341, in __init__
    raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for Generation
type
  unexpected value; permitted: 'Generation' (type=value_error.const; given=ChatGeneration; permitted=('Generation',))

@SimFG
Collaborator

SimFG commented Feb 27, 2024

@Aneerudh2k2 From the error stack, I don’t seem to see anything related to gptcache. Can you try other caches to see if there are similar issues?

@Aneerudh2k2

@SimFG this error persists when using gptcache. I tried a different cache, redis, and the error didn't occur, so I guess the issue is in langchain's integration with gptcache, because the common factor between me and @sdd031215 is langchain. Let me try a different approach; if it works, I'll post it here.

btw thanks : )
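A minimal sketch of the comparison described above, assuming langchain_community's `RedisCache` and a local Redis server (not the commenter's exact code):

```python
# Swap the LLM cache from GPTCache to RedisCache to isolate the failure.
# Assumes a Redis server listening on localhost:6379.
from redis import Redis
from langchain.globals import set_llm_cache
from langchain_community.cache import RedisCache

set_llm_cache(RedisCache(redis_=Redis(host="localhost", port=6379)))
# Re-running the same chain with this cache reportedly did not raise the
# ValidationError, pointing at the GPTCache (de)serialization path.
```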

@SimFG
Collaborator

SimFG commented Feb 28, 2024

@aravindarc thanks for your feedback, and I will confirm this asap.

@oussamaJmaaa

Any update regarding this issue?

@SimFG
Collaborator

SimFG commented Feb 29, 2024

@oussamaJmaaa I am confirming this step by step and will reply as soon as the result is available.

SimFG added the "next version" label Mar 1, 2024
@SimFG
Collaborator

SimFG commented Mar 1, 2024

Hi @oussamaJmaaa @Aneerudh2k2, I used the latest langchain and gptcache libs, and there seems to be no error.
You can check with the latest versions; if you meet other errors, please send me the error stack and the demo code to expedite problem resolution.

Here is my code:

import hashlib
import timeit

from gptcache import Cache
from gptcache.adapter.api import init_similar_cache
from gptcache.processor.pre import get_prompt
from gptcache.manager.factory import manager_factory
from langchain.globals import set_llm_cache
from langchain_community.cache import GPTCache
from langchain_openai import OpenAI

# def init_gptcache(cache_obj: Cache, llm: str):
#     cache_obj.init(
#         pre_embedding_func=get_prompt,
#         data_manager=manager_factory(
#             manager="map",
#             data_dir=f"map_cache_{llm}"
#         ),
#     )

def get_hashed_name(name):
    return hashlib.sha256(name.encode()).hexdigest()


def init_gptcache(cache_obj: Cache, llm: str):
    hashed_llm = get_hashed_name(llm)
    init_similar_cache(cache_obj=cache_obj, data_dir=f"similar_cache_{hashed_llm}")

set_llm_cache(GPTCache(init_gptcache))

llm = OpenAI(model_name="gpt-3.5-turbo-instruct", n=2, best_of=2)

execution_time = timeit.timeit(lambda: llm("Tell me a joke"), number=1)
print(f"Execution time: {execution_time} seconds")

execution_time = timeit.timeit(lambda: llm("Tell me a joke"), number=1)
print(f"Execution time: {execution_time} seconds")

execution_time = timeit.timeit(lambda: llm("Tell me joke"), number=1)
print(f"Execution time: {execution_time} seconds")

the test result: (screenshot of the timing output omitted)

the env and lib versions:

Python 3.10.13

Name: langchain
Version: 0.1.9

Name: gptcache
Version: 0.1.43

@oussamaJmaaa

oussamaJmaaa commented Mar 1, 2024

Thank you for your help. Your code works fine, but when I changed the llm instance to `llm = ChatOpenAI(model_name=llm_name, temperature=0)` and used a ConversationalRetrievalChain, it gives the same error again after the second execution. Can you please help me with that?
Here's the code:

def get_hashed_name(name):
    return hashlib.sha256(name.encode()).hexdigest()


def init_gptcache(cache_obj: Cache, llm: str):
    hashed_llm = get_hashed_name(llm)
    init_similar_cache(cache_obj=cache_obj, data_dir=f"cache2{hashed_llm}")


set_llm_cache(GPTCache(init_gptcache))

llm = ChatOpenAI(model_name="gpt-3.5-turbo")

embedding = OpenAIEmbeddings()
vector_store = Chroma(persist_directory="my_directory", embedding_function=embedding)

template = """
You are a customer service bot. Respond to the user's input in the
same language they used. Use the given context to provide a brief yet
informative response. If unsure, just say so. Finish by saying
"Thanks for asking!"\n \n\n{context}\n*\nQuestion: {question}\nHelpful Answer:"""
QA_CHAIN_PROMPT = PromptTemplate.from_template(template)

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
retriever = vector_store.as_retriever(k=5)

qa = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=retriever,
    memory=memory,
    combine_docs_chain_kwargs={"prompt": QA_CHAIN_PROMPT},
    verbose=False,
)

user_input = "hello !"
response = qa({"question": user_input})
print(response["answer"])

@SimFG
Collaborator

SimFG commented Mar 1, 2024

@oussamaJmaaa thank you for sharing your demo code; I will check it tomorrow.
But judging from the current results, I guess there is a high probability that it is not a problem with GPTCache, because GPTCache is implemented against the same cache interface as the other caches.

@oussamaJmaaa

Hi, thank you for your reply. The code works fine without GPTCache. When I used it to cache the results, the first execution works fine and saves data into the sqlite db, but on the second execution I get that error.

@SimFG
Collaborator

SimFG commented Mar 2, 2024

@oussamaJmaaa
I carefully read the cache code in the langchain part, and I found that the problem was not caused by gptcache, but that there was a problem with langchain's cache processing.

Code to store results in GPTCache:

    def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
        """Update cache.
        First, retrieve the corresponding cache object using the `llm_string` parameter,
        and then store the `prompt` and `return_val` in the cache object.
        """
        for gen in return_val:
            if not isinstance(gen, Generation):
                raise ValueError(
                    "GPTCache only supports caching of normal LLM generations, "
                    f"got {type(gen)}"
                )
        from gptcache.adapter.api import put

        _gptcache = self._get_gptcache(llm_string)
        handled_data = json.dumps([generation.dict() for generation in return_val])
        put(prompt, handled_data, cache_obj=_gptcache)
        return None

Code to get results from GPTCache:

    def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:
        """Look up the cache data.
        First, retrieve the corresponding cache object using the `llm_string` parameter,
        and then retrieve the data from the cache based on the `prompt`.
        """
        from gptcache.adapter.api import get

        _gptcache = self._get_gptcache(llm_string)

        res = get(prompt, cache_obj=_gptcache)
        if res:
            return [
                Generation(**generation_dict) for generation_dict in json.loads(res)
            ]
        return None

According to the error stack, the error happens when rebuilding the results after json.loads: the format of the saved result and the format expected when parsing are inconsistent, which causes this error. I guess this may be related to the result handling of the conversation-type llm, because if it is just openai, there is no such error. However, the current langchain is too complicated and is no longer what I originally understood it to be, so I can no longer fix this error.
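A sketch of one way to make the round trip subclass-aware, reusing `langchain_community.cache`'s existing `_dumps_generations`/`_loads_generations` helpers instead of `json.dumps` plus `Generation(**...)`; this mirrors the approach later merged as langchain-ai/langchain#19427:

```python
# Sketch of subclass-aware update/lookup bodies for the GPTCache wrapper,
# using langchain's own (de)serializer so ChatGeneration survives the trip.
from typing import Optional

from langchain_community.cache import (
    RETURN_VAL_TYPE,
    _dumps_generations,
    _loads_generations,
)


def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
    from gptcache.adapter.api import put

    _gptcache = self._get_gptcache(llm_string)
    # _dumps_generations records the concrete class (Generation or
    # ChatGeneration) in the payload instead of a bare .dict().
    put(prompt, _dumps_generations(return_val), cache_obj=_gptcache)


def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:
    from gptcache.adapter.api import get

    _gptcache = self._get_gptcache(llm_string)
    res = get(prompt, cache_obj=_gptcache)
    # _loads_generations reconstructs the original subclass on the way out.
    return _loads_generations(res) if res is not None else None
```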

@oussamaJmaaa

I see, thank you for your reply. Is there any alternative way to use a langchain RetrievalChain with memory and gptcache?

@SimFG
Collaborator

SimFG commented Mar 2, 2024

@oussamaJmaaa I have no idea about this, and I think it needs to be fixed by the langchain community.

@oussamaJmaaa

Okay, thanks a lot!

@theinhumaneme

Hello, I have looked into the code, and when changing Generation to ChatGeneration in cache.py the cache seems to work fine and is responsive again. It seems langchain has moved from Generation to ChatGeneration internally, and cache.py may have fallen behind.

I don't know if this is the right approach to solve this problem, but it works :D (locally)

@SimFG @oussamaJmaaa can I make a pull request for this change?

@SimFG
Collaborator

SimFG commented Mar 22, 2024

@theinhumaneme of course, do it!

eyurtsev pushed a commit to langchain-ai/langchain that referenced this issue Mar 26, 2024 (langchain-ai#19427)
Description:
This change fixes the pydantic validation error when looking up from GPTCache: the `ChatOpenAI` class returns `ChatGeneration` as its response, which is not handled. It uses the existing `_loads_generations` and `_dumps_generations` functions to handle it.

Trace
```
  File "/home/theinhumaneme/Documents/NebuLogic/conversation-bot/development/scripts/chatbot-postgres-test.py", line 90, in <module>
    print(llm.invoke("tell me a joke"))
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/theinhumaneme/Documents/NebuLogic/conversation-bot/venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 166, in invoke
    self.generate_prompt(
  File "/home/theinhumaneme/Documents/NebuLogic/conversation-bot/venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 544, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/theinhumaneme/Documents/NebuLogic/conversation-bot/venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 408, in generate
    raise e
  File "/home/theinhumaneme/Documents/NebuLogic/conversation-bot/venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 398, in generate
    self._generate_with_cache(
  File "/home/theinhumaneme/Documents/NebuLogic/conversation-bot/venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 585, in _generate_with_cache
    cache_val = llm_cache.lookup(prompt, llm_string)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/theinhumaneme/Documents/NebuLogic/conversation-bot/venv/lib/python3.11/site-packages/langchain_community/cache.py", line 807, in lookup
    return [
           ^
  File "/home/theinhumaneme/Documents/NebuLogic/conversation-bot/venv/lib/python3.11/site-packages/langchain_community/cache.py", line 808, in <listcomp>
    Generation(**generation_dict) for generation_dict in json.loads(res)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/theinhumaneme/Documents/NebuLogic/conversation-bot/venv/lib/python3.11/site-packages/langchain_core/load/serializable.py", line 120, in __init__
    super().__init__(**kwargs)
  File "/home/theinhumaneme/Documents/NebuLogic/conversation-bot/venv/lib/python3.11/site-packages/pydantic/v1/main.py", line 341, in __init__
    raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for Generation
type
  unexpected value; permitted: 'Generation' (type=value_error.const; given=ChatGeneration; permitted=('Generation',))
```


Although I don't seem to find any issues here, here's an
[issue](zilliztech/GPTCache#585) raised in
GPTCache. Please let me know if I need to do anything else

Thank you

---------

Co-authored-by: Bagatur <[email protected]>
rahul-trip pushed a commit to daxa-ai/langchain that referenced this issue Mar 27, 2024 (langchain-ai#19427, same change as above)
bechbd pushed a commit to bechbd/langchain that referenced this issue Mar 29, 2024 (langchain-ai#19427, same change as above)
gkorland pushed a commit to FalkorDB/langchain that referenced this issue Mar 30, 2024 (langchain-ai#19427, same change as above)
hinthornw pushed a commit to langchain-ai/langchain that referenced this issue Apr 26, 2024 (same change as above)