[Question]: Can I skip using indexing if the documents are very less? #13073
Comments
For your scenario with very limited and frequently changing data, you can indeed bypass a VectorStoreIndex. To implement this, you would build or reuse an existing Knowledge Graph and then query it through a corresponding query engine. For more detailed guidance and examples, refer to the LlamaIndex documentation and the example code in their repository; those show how to leverage Knowledge Graphs for efficient retrieval in scenarios like yours.
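The Knowledge Graph idea above can be sketched without any library at all. The snippet below is a hypothetical, minimal in-memory graph of (subject, relation, object) triples; it is only an illustration of the concept, not LlamaIndex's `KnowledgeGraphIndex` API:

```python
# Minimal in-memory knowledge graph: a list of (subject, relation, object) triples.
# All names and data here are illustrative, not part of LlamaIndex.
triples = [
    ("Document1", "describes", "billing"),
    ("Document2", "describes", "shipping"),
]

def query_graph(triples, subject):
    """Return every (relation, object) pair recorded for a subject."""
    return [(rel, obj) for subj, rel, obj in triples if subj == subject]

print(query_graph(triples, "Document1"))  # [('describes', 'billing')]
```

Because the triples live in a plain Python list, they can be rebuilt on every inference call with no indexing or embedding step, which matches the "data changes at every call" constraint.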
Question Validation
Question
I just want to know whether we can call the query engine in RAG without a VectorStoreIndex, as I have very limited data for retrieval and it will change at every inference call.
```python
from llama_index import Document, VectorStoreIndex

all_sentences = ['Document1 data', 'Document2 data']
documents = [Document(text=sent) for sent in all_sentences]
self.index = VectorStoreIndex.from_documents(
    documents, service_context=self.service_context)
# Query the index with the prompt
query_engine = self.index.as_query_engine()
```