Unable to Retrieve Chat Responses from Multiple Indexes. #13125

Open · 1 task done
rk-spirinova opened this issue Apr 26, 2024 · 2 comments
Labels
question (Further information is requested)

Comments

@rk-spirinova

Question Validation

  • I have searched both the documentation and discord for an answer.

Question

Hello everyone,

I'm currently encountering an issue while attempting to retrieve chat responses from multiple indexes in my project. Here's a brief overview of the situation:

Problem: Despite conducting thorough research, I haven't found a suitable solution for fetching chat responses from multiple indexes simultaneously.

Objective: My goal is to efficiently collect chat responses from various indexes within my application to enhance the user experience.

Below is a snippet of the code I'm currently using to handle chat requests and create indexes:

import traceback

from fastapi import HTTPException, Request
from fastapi.responses import StreamingResponse
from llama_index.core import Settings
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.llms.openai import OpenAI

# Project-specific helpers (_ollama_chat, _get_or_create_index, convert_sse, logger,
# OLLAMA_MODELS / OPENAI_MODELS and ChatData) are defined elsewhere in the app.

async def handle_chat_request(request: Request, data: ChatData) -> StreamingResponse:
    if data.config.model in OLLAMA_MODELS:
        return await _ollama_chat(request, data)
    elif data.config.model in OPENAI_MODELS:
        return await _openai_chat(request, data)
    else:
        raise HTTPException(status_code=400, detail="Invalid Model Name.")
    
async def _openai_chat(request: Request, data: ChatData) -> StreamingResponse:
    print("Received an OpenAI chat request:", request, data)
    Settings.llm = OpenAI(model=data.config.model, temperature=0.2)
    Settings.embed_model = OpenAIEmbedding(model="text-embedding-3-small")
    return await _chat_stream(request, data)


async def _chat_stream(request: Request, data: ChatData) -> StreamingResponse:
    try:
        index = _get_or_create_index(data.config.model, data.datasource)
        chat_engine = index.as_chat_engine()
        response = chat_engine.stream_chat(data.message, [])

        async def event_generator():
            for token in response.response_gen:
                if await request.is_disconnected():
                    break
                yield convert_sse(token)
            yield convert_sse({"done": True})

        return StreamingResponse(
            event_generator(),
            media_type="text/event-stream",
        )
    except Exception as e:
        full_exception = traceback.format_exc()
        logger.error(f"{data.config.model} chat error: {e}\n{40 * '~'} \n{full_exception}")
        raise HTTPException(status_code=500, detail="Internal Server Error")

Initially, I tried combining documents from multiple sources into a single index. However, that approach hurt the accuracy of the responses, so it needs to be refactored. Below is the code snippet illustrating the initial approach:

from llama_index.core import Settings, SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.readers.web import BeautifulSoupWebReader

def generate_web_index(website_url: str):
    # Scrape the site and return its pages as Document objects.
    return BeautifulSoupWebReader().load_data(urls=[website_url])

# ds_data_dir / ds_storage_dir are project-specific paths.
document1 = SimpleDirectoryReader(ds_data_dir).load_data()
document2 = generate_web_index("https://github.com")

index = VectorStoreIndex.from_documents(document1 + document2, show_progress=True,
                                        storage_context=StorageContext.from_defaults(),
                                        embed_model=Settings.embed_model)
index.storage_context.persist(ds_storage_dir)

I'm seeking advice on how to improve this approach and handle multiple indexes effectively. Any insights or suggestions would be greatly appreciated.

Thank you.

rk-spirinova added the question label on Apr 26, 2024

dosubot bot commented Apr 26, 2024

To address the challenge of fetching chat responses from multiple indexes simultaneously while maintaining accuracy and enhancing user experience, consider the following approach:

  1. Maintain Separate Indexes: Keep your data in different indexes based on the type of chat responses. This helps in preserving data integrity and improving the accuracy of responses.

  2. Implement Asynchronous Fetching: Use asynchronous programming to fetch and process chat responses from each index concurrently. This reduces response time significantly.

  3. Aggregate Results Thoughtfully: After fetching the responses, aggregate them in a way that maintains their relevance and accuracy.

  4. Utilize Advanced Chat Engines: Explore using advanced chat engines like CitationsContextChatEngine from LlamaIndex, which is designed for applications requiring context-aware responses with citations and related documents. This engine can fetch and process responses from multiple indexes effectively.

  5. Caching for Efficiency: Implement caching mechanisms for frequently accessed data to improve the efficiency of your application (a small sketch follows this list).
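
For point 5, a minimal sketch of that kind of caching (assuming a loader similar to the question's _get_or_create_index helper, keyed by a hashable datasource identifier such as a storage directory) could memoize index loading so each index is only read from storage once per process:

from functools import lru_cache

from llama_index.core import StorageContext, load_index_from_storage

@lru_cache(maxsize=8)
def get_cached_index(persist_dir: str):
    # Load the persisted index for this datasource once and reuse it across requests.
    storage_context = StorageContext.from_defaults(persist_dir=persist_dir)
    return load_index_from_storage(storage_context)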

Here's a simplified example of how to implement concurrent fetching using an advanced chat engine:

import asyncio
from llama_index.packs.cohere_citation_chat.citations_context_chat_engine import CitationsContextChatEngine

async def fetch_and_process_chat_responses(queries):
    chat_engine = CitationsContextChatEngine()  # Advanced engine for enriched responses
    
    tasks = []
    for query in queries:
        task = asyncio.create_task(chat_engine.chat(query))
        tasks.append(task)
    
    responses = await asyncio.gather(*tasks)
    for response in responses:
        print(f"Chat Response with Citations and Documents: {response}")
        # Process and aggregate responses as needed

# Example usage
queries = ['Query 1', 'Query 2', 'Query 3']
asyncio.run(fetch_and_process_chat_responses(queries))

This approach leverages asynchronous programming to concurrently fetch and process data from multiple sources, enriching the chat responses with citations and related documents. Depending on your specific needs, you may choose a different chat engine provided by LlamaIndex to better suit your application's requirements.
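
If the citation-focused engine above is more than you need, a plainer sketch that follows points 1-3 directly (assuming two already-built VectorStoreIndex objects, hypothetically named docs_index and web_index) keeps the indexes separate and queries them concurrently through their async query engines:

import asyncio

async def query_all_indexes(question: str, indexes: dict):
    # One async query engine per index; aquery() lets the calls run concurrently.
    engines = {name: idx.as_query_engine() for name, idx in indexes.items()}
    responses = await asyncio.gather(*(eng.aquery(question) for eng in engines.values()))
    # Keep track of which index each answer came from so they can be aggregated afterwards.
    return {name: str(resp) for name, resp in zip(engines, responses)}

# Example usage with hypothetical, already-built indexes:
# results = asyncio.run(query_all_indexes("Query 1", {"docs": docs_index, "web": web_index}))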

About Dosu
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

To continue the conversation, mention @dosu.

@logan-markewich
Collaborator

A few notes

  • Ollama can only process requests sequentially
  • Settings is a global singleton; you probably meant to pass the llm/embedding model in explicitly:
async def _openai_chat(request: Request, data: ChatData) -> StreamingResponse:
    print("Received an OpenAI chat request:", request, data)
    llm = OpenAI(model=data.config.model, temperature=0.2)
    embed_model = OpenAIEmbedding(model="text-embedding-3-small")
    return await _chat_stream(request, data, llm, embed_model)


async def _chat_stream(request: Request, data: ChatData, llm, embed_model) -> StreamingResponse:
    try:
        index = _get_or_create_index(data.config.model, data.datasource)
        chat_engine = index.as_chat_engine(llm=llm, embed_model=embed_model)
        response = chat_engine.stream_chat(data.message, [])

        async def event_generator():
            for token in response.response_gen:
                if await request.is_disconnected():
                    break
                yield convert_sse(token)
            yield convert_sse({"done": True})

        return StreamingResponse(
            event_generator(),
            media_type="text/event-stream",
        )
    except Exception as e:
        full_exception = traceback.format_exc()
        logger.error(f"{data.config.model} chat error: {e}\n{40 * '~'} \n{full_exception}")
        raise HTTPException(status_code=500, detail="Internal Server Error")

Also, in a server setting like this, I wouldn't be saving to/from disk on every request. That can be pretty slow (and it also blocks the async event loop). I would use a remote vector DB like Qdrant, Weaviate, etc.
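
For reference, a minimal sketch of pointing LlamaIndex at a remote Qdrant instance instead of persisting to local disk (the URL, collection name, and the documents variable below are placeholders for your own deployment and data):

import qdrant_client
from llama_index.core import StorageContext, VectorStoreIndex
from llama_index.vector_stores.qdrant import QdrantVectorStore

# Placeholder connection details; point these at your own Qdrant deployment.
client = qdrant_client.QdrantClient(url="http://localhost:6333")
vector_store = QdrantVectorStore(client=client, collection_name="chat_docs")

# Build the index once, writing embeddings to Qdrant instead of local storage.
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)

# On later requests, reconnect to the existing collection without re-indexing.
index = VectorStoreIndex.from_vector_store(vector_store)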
