[Question]: What is the best way to retrieve knowledge graph embeddings? #13129
Comments
To retrieve knowledge graph embeddings directly from a graph store, you can use a specialized approach involving the `KGTableRetriever` in embedding mode, backed by your own `GraphStore` implementation.

Here's a simplified example to illustrate the setup:

```python
from llama_index.core import StorageContext
from llama_index.core.indices.knowledge_graph.base import KnowledgeGraphIndex
from llama_index.core.indices.knowledge_graph.retrievers import KGTableRetriever, KGRetrieverMode
from llama_index.core.schema import QueryBundle
from llama_index.core.service_context import ServiceContext
from llama_index.core.graph_stores.types import GraphStore

class YourGraphStore(GraphStore):
    # Implement the abstract methods (get, upsert_triplet, query, etc.)
    # for your graph database backend.
    ...

your_graph_store = YourGraphStore()
service_context = ServiceContext.from_defaults()
storage_context = StorageContext.from_defaults(graph_store=your_graph_store)

# Wrap the graph store in an index; the store supplies the graph data
knowledge_graph_index = KnowledgeGraphIndex(
    nodes=[], service_context=service_context, storage_context=storage_context
)

# EMBEDDING mode retrieves triplets by embedding similarity
kg_table_retriever = KGTableRetriever(
    index=knowledge_graph_index, retriever_mode=KGRetrieverMode.EMBEDDING
)
query_bundle = QueryBundle(query_str="Your query here")
nodes_with_embeddings = kg_table_retriever.retrieve(query_bundle)
```

This setup allows you to interact directly with your graph store to retrieve embeddings, bypassing the need for local storage contexts. Ensure your graph store implements the required methods of the `GraphStore` interface. If you encounter any specific issues or need further clarification on implementing this solution, feel free to ask.
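For completeness, a retriever built this way can also be wrapped in a query engine for natural-language answers; a minimal sketch, assuming the `kg_table_retriever` from the snippet above:

```python
from llama_index.core.query_engine import RetrieverQueryEngine

# Wrap the retriever so retrieved triplets feed an LLM synthesis step
query_engine = RetrieverQueryEngine.from_args(kg_table_retriever)
response = query_engine.query("Your query here")
print(response)
```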
I have the same problem. As you mentioned, in your code `KGTableRetriever` takes `knowledge_graph_index` as a parameter, which means we need to rebuild the KG index every time. Is there a way to build the KG index once, store it in a graph database, and then query it without building the index again?
@dosu In the code you provided, `KGTableRetriever` needs `index=knowledge_graph_index`, so every time we call the retriever we have to recreate the knowledge graph index, which is what we want to avoid: `knowledge_graph_index = KnowledgeGraphIndex(service_context=service_context, graph_store=your_graph_store)`
To address the concern about rebuilding the KnowledgeGraphIndex: for scenarios where you want to persist the knowledge graph index and avoid rebuilding it on every application start, you can leverage the persistence capabilities of the storage context. Persist the index once after building it, then load it back on subsequent runs instead of reconstructing it from your documents.

This approach allows you to maintain and reuse the knowledge graph index without the overhead of rebuilding it, enhancing the efficiency of your application. Remember to replace the placeholder values with those of your own setup. This method ensures that your application can query the knowledge graph efficiently by leveraging the stored index, significantly reducing the initialization time and computational overhead associated with rebuilding the index from scratch.
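A minimal sketch of that persist-and-reload flow, assuming an illustrative `./kg_storage` directory and an index built elsewhere (both are placeholders, not fixed API values):

```python
from llama_index.core import StorageContext, load_index_from_storage

# First run: after building the index once, persist everything to disk
# kg_index.storage_context.persist(persist_dir="./kg_storage")

# Subsequent runs: reload instead of rebuilding
storage_context = StorageContext.from_defaults(persist_dir="./kg_storage")
kg_index = load_index_from_storage(storage_context)

query_engine = kg_index.as_query_engine()
response = query_engine.query("Your query here")
```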
Hi @navneteberi30 @oykunehir, have you tried using the KnowledgeGraphRAGRetriever()? You can build your index with KnowledgeGraphIndex() and then pass the graph_store you used for that process to the KnowledgeGraphRAGRetriever(). Let me know if this works for you.
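A rough sketch of that suggestion, assuming `your_graph_store` is the already-populated store from the earlier snippet:

```python
from llama_index.core import StorageContext
from llama_index.core.retrievers import KnowledgeGraphRAGRetriever
from llama_index.core.query_engine import RetrieverQueryEngine

# Point a storage context at the existing graph store; nothing is re-indexed
storage_context = StorageContext.from_defaults(graph_store=your_graph_store)

graph_rag_retriever = KnowledgeGraphRAGRetriever(
    storage_context=storage_context,
    verbose=True,
)
query_engine = RetrieverQueryEngine.from_args(graph_rag_retriever)
response = query_engine.query("Your query here")
```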
@gich2009 That doesn't work for me; once I build the index with KnowledgeGraphIndex.from_documents(), I'm unable to get any response.
@navneteberi30, let me give it a try on my side and then I'll get back to you on it.
Question
Hi, I have a general question:

How can we retrieve the knowledge graph embeddings from the graph store instead of using the local storage context? I am unable to find any documentation related to that; or is it something we can expect in the future?

The current process looks something like this: once we create the knowledge graph, we load it through the local storage context. Could you please share how we can retrieve from the graph store that we built?
```python
kg_index = load_index_from_storage(
    storage_context=storage_context,
    max_triplets_per_chunk=10,
    space_name=space_name,
    edge_types=edge_types,
    rel_prop_names=rel_prop_names,
    tags=tags,
    verbose=True,
)
```
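For reference, the `space_name` / `edge_types` / `rel_prop_names` / `tags` keyword arguments match the `NebulaGraphStore` signature, so the storage context above was presumably created along these lines (a sketch with illustrative schema values, not the poster's actual configuration):

```python
from llama_index.core import StorageContext
from llama_index.graph_stores.nebula import NebulaGraphStore

# Illustrative NebulaGraph schema; replace with your own space definition
space_name = "llamaindex"
edge_types, rel_prop_names = ["relationship"], ["relationship"]
tags = ["entity"]

graph_store = NebulaGraphStore(
    space_name=space_name,
    edge_types=edge_types,
    rel_prop_names=rel_prop_names,
    tags=tags,
)
# persist_dir is a hypothetical local path holding the index metadata
storage_context = StorageContext.from_defaults(
    graph_store=graph_store, persist_dir="./kg_storage"
)
```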