
langchain-python-rag-privategpt "Cannot submit more than 5,461 embeddings at once" #4476

Open
dcasota opened this issue May 16, 2024 · 2 comments
Labels
bug Something isn't working

Comments


dcasota commented May 16, 2024

What is the issue?

In langchain-python-rag-privategpt, there is a bug 'Cannot submit more than x embeddings at once' which has already been reported in various constellations, most recently in #2572.

With Ollama version 0.1.38, the chromadb version has been updated to 0.4.7, but the max_batch_size calculation still seems to cause issues; see the active upstream issue chroma-core/chroma#2181.

Meanwhile, is there a workaround for Ollama?

```
(.venv) dcaso [ ~/ollama/examples/langchain-python-rag-privategpt ]$ python ./ingest.py
Creating new vectorstore
Loading documents from source_documents
Loading new documents: 100%|████████████████| 1355/1355 [00:15<00:00, 88.77it/s]
Loaded 80043 new documents from source_documents
Split into 478012 chunks of text (max. 500 tokens each)
Creating embeddings. May take some minutes...
Traceback (most recent call last):
  File "/home/dcaso/ollama/examples/langchain-python-rag-privategpt/./ingest.py", line 161, in <module>
    main()
  File "/home/dcaso/ollama/examples/langchain-python-rag-privategpt/./ingest.py", line 153, in main
    db = Chroma.from_documents(texts, embeddings, persist_directory=persist_directory)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/dcaso/ollama/examples/langchain-python-rag-privategpt/.venv/lib/python3.11/site-packages/langchain/vectorstores/chroma.py", line 612, in from_documents
    return cls.from_texts(
           ^^^^^^^^^^^^^^^
  File "/home/dcaso/ollama/examples/langchain-python-rag-privategpt/.venv/lib/python3.11/site-packages/langchain/vectorstores/chroma.py", line 576, in from_texts
    chroma_collection.add_texts(texts=texts, metadatas=metadatas, ids=ids)
  File "/home/dcaso/ollama/examples/langchain-python-rag-privategpt/.venv/lib/python3.11/site-packages/langchain/vectorstores/chroma.py", line 222, in add_texts
    raise e
  File "/home/dcaso/ollama/examples/langchain-python-rag-privategpt/.venv/lib/python3.11/site-packages/langchain/vectorstores/chroma.py", line 208, in add_texts
    self._collection.upsert(
  File "/home/dcaso/ollama/examples/langchain-python-rag-privategpt/.venv/lib/python3.11/site-packages/chromadb/api/models/Collection.py", line 298, in upsert
    self._client._upsert(
  File "/home/dcaso/ollama/examples/langchain-python-rag-privategpt/.venv/lib/python3.11/site-packages/chromadb/api/segment.py", line 290, in _upsert
    self._producer.submit_embeddings(coll["topic"], records_to_submit)
  File "/home/dcaso/ollama/examples/langchain-python-rag-privategpt/.venv/lib/python3.11/site-packages/chromadb/db/mixins/embeddings_queue.py", line 127, in submit_embeddings
    raise ValueError(
ValueError:
                Cannot submit more than 5,461 embeddings at once.
                Please submit your embeddings in batches of size
                5,461 or less.

(.venv) dcaso [ ~/ollama/examples/langchain-python-rag-privategpt ]$
```

OS

WSL2

GPU

Nvidia

CPU

Intel

Ollama version

0.1.38

Research findings

In ingest.py, in def main(), I've modified the else condition as follows, but it didn't help (same issue).
(attached image: modified else branch in ingest.py)
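One workaround sketch, under assumptions: split the chunk list into batches no larger than the limit reported in the error and add them to the vectorstore incrementally, instead of passing everything to Chroma.from_documents at once. The names texts, embeddings, and persist_directory follow ingest.py; the hardcoded batch size 5461 is taken from the error message and may differ per installation.

```python
# Sketch of a batching workaround. Assumption: the limit 5461 from the
# error message applies; newer chromadb clients may report a different value.
def batched(items, batch_size):
    """Yield successive slices of at most batch_size elements."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

# Hypothetical use inside main() of ingest.py:
# db = Chroma(embedding_function=embeddings, persist_directory=persist_directory)
# for batch in batched(texts, 5461):
#     db.add_documents(batch)
```

This avoids the single oversized upsert that triggers the ValueError, at the cost of multiple smaller inserts.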

@dcasota dcasota added the bug Something isn't working label May 16, 2024

dcasota commented May 16, 2024

It may be yet another subcomponent issue. With v0.1.38, the installed langchain version is 0.0.274:

```
pip3 list | grep langchain
langchain                0.0.274
```

There is no use of e.g. langchain_community.

As a workaround, I've updated all components. This is generally not recommended, since it tends to create more side effects and makes issues harder to reproduce.

```shell
pip --disable-pip-version-check list --outdated --format=json | python -c "import json, sys; print('\n'.join([x['name'] for x in json.load(sys.stdin)]))" | sudo xargs -n1 pip install -U
```

Afterwards, the following langchain packages are installed:

```
pip3 list | grep langchain
langchain                0.1.20
langchain-community      0.0.38
langchain-core           0.1.52
langchain-text-splitters 0.0.2
```

python ingest.py and python privateGPT.py run successfully, but the output contains warnings about various deprecated langchain components. Given the findings so far, a curated requirements.txt would be helpful.
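Based only on the versions that happened to work above, such a curated requirements.txt could look like the following. This is a sketch derived from the pip output in this thread, not a tested pin set:

```
langchain==0.1.20
langchain-community==0.0.38
langchain-core==0.1.52
langchain-text-splitters==0.0.2
chromadb==0.5.0
```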

python ingest.py always starts with "Creating new vectorstore"; it does not preserve already ingested documents. Why?
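One possible reason, sketched under assumptions: Chroma.from_documents always (re)builds the collection, so reusing an existing store would require ingest.py to detect the persisted database and append to it instead. The helper below is a sketch; the chroma.sqlite3 filename matches what chromadb >= 0.4 writes into persist_directory, and the constructor usage mirrors the langchain Chroma wrapper.

```python
import os

# Sketch (assumption): detect an existing persisted Chroma store by the
# chroma.sqlite3 file that chromadb >= 0.4 creates in persist_directory.
def vectorstore_exists(persist_directory):
    return os.path.exists(os.path.join(persist_directory, "chroma.sqlite3"))

# Hypothetical use in ingest.py:
# if vectorstore_exists(persist_directory):
#     db = Chroma(persist_directory=persist_directory, embedding_function=embeddings)
#     db.add_documents(texts)   # append to the existing store
# else:
#     db = Chroma.from_documents(texts, embeddings, persist_directory=persist_directory)
```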


dcasota commented May 26, 2024

Same issue with v0.1.39. Luckily the workaround works, with Nvidia drivers 552 (see #4563).

edited June 5: Same with v0.1.41.
edited June 19: Same with v0.1.44. Additionally run pip install chromadb==0.5.0.

dcasota added a commit to dcasota/ollama that referenced this issue Jun 19, 2024
With chromadb==0.4.7, ingest.py still fails with 
`Cannot submit more than 5,461 embeddings at once. Please submit your embeddings in batches of size 5,461 or less.`

See 
- ollama#4476
- ollama#2572
- ollama#533