
[Question]: Which Open Source LLM is best for Knowledge Graphs? #13120

Open
nikhitaKanoj opened this issue Apr 26, 2024 · 3 comments
Labels
question Further information is requested

Comments

@nikhitaKanoj

Question Validation

  • I have searched both the documentation and Discord for an answer.

Question

I want to understand which open-source LLM is best for Knowledge Graphs.
I am using the KnowledgeGraphIndex functionality in llama-index to create Knowledge Graphs.
I used Nous-Hermes-Llama2, but the accuracy of the results is not very good.
Can you please suggest which current open-source LLM is good for this?

@nikhitaKanoj nikhitaKanoj added the question Further information is requested label Apr 26, 2024

dosubot bot commented Apr 26, 2024

For enhancing Knowledge Graphs with high accuracy and efficiency, transitioning to the Llama Index framework, especially its Knowledge Graph RAG Query Engine, is recommended. This framework is well-suited for creating and querying Knowledge Graphs, leveraging Large Language Models like OpenAI's GPT-3.5-turbo and Azure's GPT-3.5-turbo-16k. It utilizes NebulaGraphStore for graph storage, which, in combination with the Knowledge Graph RAG Query Engine, facilitates efficient data storage and querying.

The Knowledge Graph RAG Query Engine improves on a plain KnowledgeGraphIndex setup by supporting embedding models and enabling a hybrid querying approach that combines text-based and embedding-based retrieval. This approach offers a substantial upgrade in accuracy and user experience for Knowledge Graph applications.
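The hybrid text-plus-embedding retrieval idea can be illustrated in plain Python. This is a conceptual sketch, not llama-index's actual implementation: the `alpha` blending weight and the bag-of-words keyword score are hypothetical choices, and the embedding vectors are assumed to come from whatever embedding model you use.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query, text):
    """Fraction of query words that appear in the text (lexical match)."""
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / len(q) if q else 0.0

def hybrid_score(query, query_vec, text, text_vec, alpha=0.5):
    # Blend lexical overlap with embedding similarity; alpha=1.0 is
    # pure keyword retrieval, alpha=0.0 is pure embedding retrieval.
    return alpha * keyword_score(query, text) + (1 - alpha) * cosine(query_vec, text_vec)
```

Ranking candidate graph nodes by `hybrid_score` instead of keyword match alone lets semantically related but differently worded nodes surface, which is the main accuracy benefit of the hybrid mode.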

For practical implementation and further exploration, consider reviewing the following resources:

These resources should guide you in leveraging the Llama Index framework for your Knowledge Graph projects effectively.

About Dosu
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

To continue the conversation, mention @dosu.

@nikhitaKanoj
Author

@dosu, I am already using llama-index for my RAG application. I want to know which open-source LLM to use with it so that the results are better.
Current components I am using: KnowledgeGraphIndex for indexing, SimpleGraphStore() for the graph store, Nous Hermes Llama2 13B as the LLM, and all-MiniLM for text embeddings.
With these, the results are not very accurate.
I want to know how to improve its performance and which LLM is better.


dosubot bot commented Apr 26, 2024

To enhance the performance and accuracy of your Knowledge Graph application using the llama-index framework, consider the following steps:

  1. Switch to IpexLLMEmbedding for Text Embedding: IpexLLMEmbedding is optimized for Intel hardware, potentially offering better efficiency and performance. The BAAI/bge-large-en-v1.5 embedding model used there could improve retrieval quality over the all-MiniLM embeddings you are currently using. Explore its integration as detailed in the basic.py example.

  2. Benchmark the BAAI/bge-large-en-v1.5 Model: Test this model's performance in your application to see how it compares with your current setup. Focus on accuracy and efficiency, especially if you're using Intel hardware where you might see significant gains.

  3. Utilize Device-Specific Optimizations: The efficiency improvements from IpexLLMEmbedding could be substantial, especially on Intel platforms. Experimenting with different hardware setups might reveal the most cost-effective configuration for your application.

  4. Leverage Advanced Features: The IpexLLMEmbedding offers features like batch processing for text embeddings, which could improve the processing efficiency of your Knowledge Graph application. Make sure to integrate these features to fully benefit from the model's capabilities.

  5. Continuous Benchmarking and Iteration: Keep testing different models and configurations. Continuous benchmarking will help you identify the best setup for your specific needs, balancing accuracy, efficiency, and resource consumption.

  6. Share Your Findings: If you achieve significant improvements, consider sharing your experience and insights with the community. This can help others facing similar challenges and contribute to the development of the ecosystem.
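The batch-processing idea in point 4 can be sketched generically. The `embed_batch` function below is a stub standing in for whatever embedding backend you use (it is not IpexLLMEmbedding's actual API); the point is that grouping texts reduces the number of backend calls while preserving output order.

```python
def chunked(items, size):
    """Yield consecutive slices of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def embed_batch(texts):
    # Stub: a real backend would return one embedding vector per input
    # text. Here each "vector" is just the text length, for illustration.
    return [[float(len(t))] for t in texts]

def embed_all(texts, batch_size=8):
    """Embed all texts, one backend call per batch, order preserved."""
    vectors = []
    for batch in chunked(texts, batch_size):
        vectors.extend(embed_batch(batch))
    return vectors
```

Tuning `batch_size` to your hardware (larger batches amortize per-call overhead, up to memory limits) is usually where the efficiency gain comes from.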

For optimizing the KnowledgeGraphIndex, explore strategies discussed in community threads such as optimizing triplet extraction and parallelizing processes. Remember, these optimizations might impact result quality, so thorough testing is crucial to ensure they meet your application's requirements.
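Parallelizing triplet extraction, as mentioned above, can be sketched with a thread pool. The `extract_triplets` function here is a toy stand-in (a real implementation would call the LLM to pull (subject, relation, object) triplets from each text chunk); the threading pattern is the part that carries over.

```python
from concurrent.futures import ThreadPoolExecutor

def extract_triplets(chunk):
    # Stub extractor: splits on " is " to fake a single triplet. A real
    # implementation would prompt the LLM for (subject, relation, object).
    subject, _, rest = chunk.partition(" is ")
    return [(subject, "is", rest)] if rest else []

def extract_all(chunks, max_workers=4):
    # Run extraction over chunks in parallel. Order is preserved because
    # executor.map yields results in input order, so the resulting
    # triplet list is deterministic.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        results = pool.map(extract_triplets, chunks)
    return [t for triplets in results for t in triplets]
```

Threads are appropriate here because LLM-backed extraction is I/O-bound (waiting on API or inference calls); as the comment in the thread suggests, verify that parallel extraction does not degrade triplet quality for your prompts before adopting it.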


