RAG Embedding LLM + MetaGPT #1256
Labels
documentation
Improvements or additions to documentation
Comments
Zhipu embeddings do not seem to be supported by LlamaIndex yet; you need to create a custom embedding and pass it in.
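For anyone landing here before the docs exist, here is a minimal duck-typed sketch of such a custom embedding. `CustomZhipuEmbedding` and the injected `embed_fn` are hypothetical names; in a real project you would subclass LlamaIndex's `BaseEmbedding` and have `embed_fn` call the Zhipu embeddings endpoint, but the call is injected here so the sketch runs offline:

```python
from typing import Callable, List

class CustomZhipuEmbedding:
    """Sketch of a custom embedding wrapper (hypothetical class).
    In LlamaIndex you would subclass BaseEmbedding and implement
    _get_text_embedding / _get_query_embedding; here the Zhipu API
    call is injected as a plain callable so the sketch runs offline."""

    def __init__(self, embed_fn: Callable[[str], List[float]]):
        # embed_fn would be a closure around the real Zhipu embeddings API
        self._embed_fn = embed_fn

    def get_text_embedding(self, text: str) -> List[float]:
        return self._embed_fn(text)

    def get_query_embedding(self, query: str) -> List[float]:
        # many embedding models embed queries and documents the same way
        return self._embed_fn(query)

# Offline stub standing in for the real Zhipu embedding endpoint:
fake_api = lambda text: [float(len(text)), 0.0]
emb = CustomZhipuEmbedding(fake_api)
print(emb.get_text_embedding("hello"))  # [5.0, 0.0]
```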
@seehi can you add some documentation to clarify this issue?
I'm working on a task that needs an LLM to read PDFs. This is easy to do in LangChain with RecursiveCharacterTextSplitter and embeddings, but in MetaGPT it seems hard.
I found an embedding section in config2.yaml, but I don't know how to use it.
I'm using zhipuai, and ZhipuAIEmbeddings and ZhipuAILLM are my own custom classes.
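For context, the embedding block in config2.yaml looks along these lines (field names as I understand them from MetaGPT's example config, so treat this as an assumption; the values shown are placeholders):

```yaml
# Sketch of the embedding section in MetaGPT's config2.yaml
embedding:
  api_type: "openai"   # the supported types do not appear to include zhipuai
  api_key: "sk-..."
  model: "text-embedding-3-small"   # example model name only
```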
Here is my chain:
```python
from langchain.chains import RetrievalQA
from langchain_community.vectorstores import Chroma

# ZhipuAIEmbeddings and ZhipuAILLM are my own custom wrappers
embedding = ZhipuAIEmbeddings(zhipuai_api_key=api_key)
vectordb = Chroma(persist_directory=persist_directory, embedding_function=embedding)
llm = ZhipuAILLM(model="glm-4", zhipuai_api_key=api_key, temperature=0)
retriever = vectordb.as_retriever(search_type="similarity", search_kwargs={"k": 6})
qa_interface = RetrievalQA.from_chain_type(
    llm,
    chain_type="stuff",
    retriever=retriever,
    return_source_documents=True,
)
```
I want to call qa_interface() instead of self._aask() for compatibility, and would sincerely appreciate your help.
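To make that concrete, here is a minimal sketch of wrapping the chain in a MetaGPT-style Action so that run() answers from retrieval instead of self._aask(). `PdfQAAction` is a hypothetical name, and the chain is injected as an offline stub here; in a real project you would subclass metagpt.actions.Action and pass in the real qa_interface built above:

```python
import asyncio

class PdfQAAction:
    """Sketch of a MetaGPT-style Action (hypothetical class).
    In MetaGPT you would subclass metagpt.actions.Action; the
    RetrievalQA chain (qa_interface) is injected so the answer
    comes from the retriever instead of self._aask()."""

    def __init__(self, qa_interface):
        self.qa_interface = qa_interface  # the RetrievalQA chain

    async def run(self, question: str) -> str:
        # RetrievalQA is synchronous, so run it in a worker thread;
        # it returns a dict with "result" and "source_documents"
        result = await asyncio.to_thread(self.qa_interface, {"query": question})
        return result["result"]

# Offline stub standing in for the real chain:
stub_chain = lambda inputs: {
    "result": f"answer to: {inputs['query']}",
    "source_documents": [],
}
print(asyncio.run(PdfQAAction(stub_chain).run("what is in the PDF?")))
# answer to: what is in the PDF?
```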