Selecting chunking methods and Node Parser #12956
For managing a high-cardinality database with 500 columns and 75 million records, and enabling semantic search to shortlist columns for user queries, here's a brief guide on the approach. However, consider the limitations and constraints of whichever parser and index you choose, and refer to the Sources for detailed implementation and optimization.
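One common pattern (a minimal sketch, not necessarily the exact guide referenced above) is to index each attribute row as its own node, since short name/definition pairs generally don't need further chunking. The file name `column_metadata.xlsx`, the column headers, and the top-k value below are assumptions for illustration:

```python
import pandas as pd
from llama_index.core import Document, VectorStoreIndex

# Load the attribute metadata; the file name and headers are assumed here.
df = pd.read_excel("column_metadata.xlsx")  # columns: attribute_name, definition

# One Document per attribute: definitions are short, so the default node
# parser keeps each one as a single node (no extra chunking needed).
documents = [
    Document(
        text=f"{row['attribute_name']}: {row['definition']}",
        metadata={"attribute_name": row["attribute_name"]},
    )
    for _, row in df.iterrows()
]

index = VectorStoreIndex.from_documents(documents)

# Query time: shortlist candidate columns for a user question.
retriever = index.as_retriever(similarity_top_k=10)
nodes = retriever.retrieve("total sales by region last quarter")
shortlisted = [n.metadata["attribute_name"] for n in nodes]
print(shortlisted)
```

The shortlisted column names (and their definitions) can then be inserted into the prompt passed to the LLM for SQL generation; LlamaIndex also ships text-to-SQL query engines (e.g. `NLSQLTableQueryEngine`) that may cover that second step, depending on your schema setup.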
Hi Team,
I have a high-cardinality database with 500 columns and 75 million records. I also have the metadata for all 500 attributes/columns in structured form in an Excel file with two columns: the attribute name and its definition.
I want to store this in a vector database so that when a user asks a question, the system first runs a semantic search against the vector database to shortlist which columns need to be included in the user query; the shortlist is then passed to the LLM for SQL query generation.
So my question is: what is the best node parser to use for storing this?
Thanks and regards,
Pradipta