
[Question]: Can I skip using indexing if the documents are very less? #13073

Open
1 task done
saidineshpola opened this issue Apr 24, 2024 · 1 comment
Labels
question Further information is requested

Comments

@saidineshpola

Question Validation

  • I have searched both the documentation and discord for an answer.

Question

I just want to know whether I can call the query engine in RAG without a VectorStoreIndex, since I have very limited data for retrieval and it will change at every inference call.

```python
all_sentences = ['Document1 data', 'Document2 data']
documents = [Document(text=sent) for sent in all_sentences]
self.index = VectorStoreIndex.from_documents(
    documents, service_context=self.service_context)
# Query the index with the prompt
query_engine = self.index.as_query_engine()
```
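For context, with only a handful of documents that change on every call, one common alternative is to skip retrieval and indexing entirely and put all the documents into the prompt. The sketch below is a plain-Python illustration of that idea, not LlamaIndex's API; `build_prompt` is a hypothetical helper.

```python
# Minimal sketch: skip indexing entirely and stuff all documents into the prompt.
# build_prompt is a hypothetical helper, not part of any library.
def build_prompt(question: str, documents: list[str]) -> str:
    # Number each document so the model can cite which one it used.
    context = "\n\n".join(
        f"[Document {i + 1}]\n{doc}" for i, doc in enumerate(documents)
    )
    return (
        "Answer the question using only the context below.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )

all_sentences = ["Document1 data", "Document2 data"]
prompt = build_prompt("What does Document1 say?", all_sentences)
print(prompt)
```

The resulting string would then be sent directly to the LLM; because the documents are rebuilt into the prompt on each call, there is no index to keep in sync.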

@saidineshpola saidineshpola added the question Further information is requested label Apr 24, 2024

dosubot bot commented Apr 24, 2024

For your scenario with very limited and frequently changing data, you can indeed bypass using a VectorStoreIndex in RAG by leveraging alternative approaches such as a KnowledgeGraphRAGQueryEngine. This method is particularly useful as it allows for dynamic querying of a Knowledge Graph without the overhead of maintaining a vector index, making it ideal for small or frequently updated datasets.

To implement this, you would need to build or use an existing Knowledge Graph and then utilize the KnowledgeGraphRAGQueryEngine for querying. This approach dynamically generates responses based on the current data in the Knowledge Graph, thus eliminating the need for a pre-built vector index.

For more detailed guidance and examples on how to set this up, you can refer to the LlamaIndex documentation and example code provided in their repository. This will give you a clear understanding of how to leverage Knowledge Graphs for efficient information retrieval in scenarios like yours.
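The dynamic-querying idea described above can be sketched without any library. This toy example is only an illustration of querying a small, mutable knowledge graph directly instead of a vector index; it is not the `KnowledgeGraphRAGQueryEngine` API, and the class and method names here are hypothetical.

```python
# Toy knowledge graph: (subject, relation, object) triples kept in a plain dict.
# No vector index is built; queries read the current triples directly, so
# updating the data is just adding or removing triples before the next call.
from collections import defaultdict

class TinyKnowledgeGraph:
    def __init__(self):
        # subject -> list of (relation, object) pairs
        self.triples = defaultdict(list)

    def add_triple(self, subject: str, relation: str, obj: str) -> None:
        self.triples[subject].append((relation, obj))

    def query(self, subject: str) -> list[str]:
        # Return facts about the subject from the graph as it exists right now.
        return [f"{subject} {rel} {obj}" for rel, obj in self.triples[subject]]

kg = TinyKnowledgeGraph()
kg.add_triple("Document1", "describes", "billing policy")
kg.add_triple("Document1", "updated_on", "2024-04-24")
print(kg.query("Document1"))
```

The returned facts could then be placed into the LLM prompt as context, the same role retrieved chunks play in a vector-index pipeline.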

About Dosu
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

