ChatPromptTemplate.from_template returns serialized object from vectorstore retriever #21140

Open
gpt-partners opened this issue May 1, 2024 · 0 comments
Labels
  • 🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature
  • 🔌: openai Primarily related to OpenAI integrations
  • Ɑ: retriever Related to retriever module
  • Ɑ: vector store Related to vector store module

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

from operator import itemgetter

from langchain_community.vectorstores import FAISS
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import OpenAIEmbeddings

# Build a minimal FAISS index so the retriever has a single Document to return.
vectorstore = FAISS.from_texts(
    ["harrison worked at kensho"], embedding=OpenAIEmbeddings()
)
retriever = vectorstore.as_retriever()

template = """Answer the question based only on the following context:
{context}

Question: {question}

Answer in the following language: {language}
"""
prompt = ChatPromptTemplate.from_template(template)

# The retriever's raw output (a list of Document objects) is piped into
# {context} without any formatting step.
chain = (
    {
        "context": itemgetter("question") | retriever,
        "question": itemgetter("question"),
        "language": itemgetter("language"),
    }
    | prompt
)

chain.invoke({"question": "where did harrison work", "language": "italian"})

Error Message and Stack Trace (if applicable)

No exception is raised; this is the ChatPromptValue returned by chain.invoke:

ChatPromptValue(messages=[HumanMessage(content="Answer the question based only on the following context:\n[Document(page_content='harrison worked at kensho')]\n\nQuestion: where did harrison work\n\nAnswer in the following language: italian\n")])
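
For illustration, the serialized form appears because the retriever returns a list of Document objects and the template renders that list with str(). A minimal sketch that reproduces just the formatting step (no vector store or API key needed; the variable names here are mine, not from the report):

from langchain_core.documents import Document
from langchain_core.prompts import ChatPromptTemplate

docs = [Document(page_content="harrison worked at kensho")]
prompt = ChatPromptTemplate.from_template("Context: {context}")
# str() on a list of Documents falls back to their repr(), which is the
# serialized form shown in the output above.
print(prompt.invoke({"context": docs}))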

Description

  • When I build the prompt, the content property of the HumanMessage includes the serialized form (repr) of the retrieved Document.
  • Instead, I expect only the page_content to be included as context information, for example:

ChatPromptValue(messages=[HumanMessage(content="Answer the question based only on the following context:\n"""\nharrison worked at kensho\n"""\n\nQuestion: where did harrison work\n\nAnswer in the following language: italian\n")])

  • I wonder whether this behaviour is intentional (can the LLM reliably read the serialized object, and does it help the answer?) or whether it is a bug. A possible workaround is sketched below.
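
One way to get the expected output is to format the retrieved documents before they reach the template. This is a minimal sketch, not necessarily the intended fix; format_docs is a helper introduced here for illustration, not a LangChain API (plain functions are coerced to runnables when piped in LCEL):

def format_docs(docs):
    # Keep only the page_content of each retrieved Document.
    return "\n\n".join(doc.page_content for doc in docs)

chain = (
    {
        # format_docs runs after the retriever, so {context} receives a plain string.
        "context": itemgetter("question") | retriever | format_docs,
        "question": itemgetter("question"),
        "language": itemgetter("language"),
    }
    | prompt
)

With this change, chain.invoke(...) should produce a HumanMessage whose context is the bare text rather than the Document repr.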

System Info

pip install --upgrade --quiet langchain langchain-openai
