Vector search reference #2824
How to see the preview of this PR? Go to this URL: https://website-git-deploy-preview-mei-16-meili.vercel.app/docs/branch:ai-search-reference-parameter

Credentials to access the page are in the company's password manager as "Docs deploy preview".
Thank you for putting this reference together, that's a lot of work ☀️ There are some missing pieces, see inline comments
Thanks for the review, @dureuill! I have updated the […]. Also, I'm not convinced about the documentation for the […].
I agree! I'm thinking of a better way of achieving this; here's what I get: […]

Maybe for clarity we could rename these to […]. Or these could be […]. What do you think? Do you think it would make things clearer?
Sure. To send the embedding request, Meilisearch performs two steps: it starts from the static `query` object of the embedder configuration, then injects the text to embed at the path given by `inputField`.

For example, to implement the OpenAI embedder API, the final request needs to be:

```json
{
  "input": "TEXT TO EMBED",
  "model": "text-embedding-ada-002",
  "encoding_format": "float"
}
```

In this example, the `query` object is:

```json
{
  "model": "text-embedding-ada-002",
  "encoding_format": "float"
}
```

However, the […]. The final embedder configuration would be:

```json
{
  "url": "https://api.openai.com/v1/embeddings",
  "apiKey": "OPENAI_APIKEY",
  "query": {
    "model": "text-embedding-ada-002",
    "encoding_format": "float"
  },
  "inputField": ["input"]
  // omitting "inputType", "pathToEmbeddings" and "embeddingObject"
}
```

Similarly, an Ollama request looks like:

```json
{
  "model": "nomic-embed-text",
  "prompt": "TEXT TO EMBED"
}
```

So you'd have `"inputField": ["prompt"]` and the model in `query`.
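The merging step described above can be sketched in Python (a hypothetical helper written for illustration, not Meilisearch's actual code): the static `query` object becomes the base of the request body, and the text is injected at the path named by `inputField`.

```python
def build_request_body(query: dict, input_field: list, text: str) -> dict:
    """Merge the static `query` object with the text to embed,
    injected at the (possibly nested) path given by `input_field`."""
    body = dict(query)            # start from the static query parameters
    node = body
    for key in input_field[:-1]:  # walk/create intermediate objects
        node = node.setdefault(key, {})
    node[input_field[-1]] = text  # inject the text at the final key
    return body

# OpenAI-style embedder: the text goes under the top-level "input" key
openai_query = {"model": "text-embedding-ada-002", "encoding_format": "float"}
print(build_request_body(openai_query, ["input"], "TEXT TO EMBED"))
# {'model': 'text-embedding-ada-002', 'encoding_format': 'float', 'input': 'TEXT TO EMBED'}

# Ollama-style embedder: the text goes under "prompt"
print(build_request_body({"model": "nomic-embed-text"}, ["prompt"], "TEXT TO EMBED"))
# {'model': 'nomic-embed-text', 'prompt': 'TEXT TO EMBED'}
```

The same helper covers both examples in this comment, since only `query` and `inputField` differ between the two embedders.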
Ok for code samples
@dureuill, regarding the API, I think […]. Possibly stupid idea: would we gain anything from creating two main fields, `input` and `response`?

```json
{
  "default": {
    "source": "",
    "input": {
      "apiKey": "",
      "model": "",
      "revision": "",
      "dimensions": 123,
      "inputType": "",
      "pathToInput": [],
      "query": {}
    },
    "response": {
      "pathToEmbeddingArray": [],
      "pathToEmbeddingData": [],
      "distribution": {}
    }
  }
}
```

I was also thinking about […]. Regarding […].
Hey @guimachiavelli
Nice, we might consider changing the name of these parameters then :-)
I like the idea, but I'm not sure we could implement it. The fields are shared between all embedders, and the fields in […]. Also, as much as I love nesting, we should avoid it as much as possible, because it becomes very unwieldy when using the API (the previous API had the parameters nested under the source, which was easier from an implementation perspective, but harder to input).
Not really, it determines if multiple texts can be sent as input, which allows for better performance.
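As a sketch of that distinction (assumed semantics based on this thread, not Meilisearch's actual code): a `"textArray"` input type would let a whole batch of texts travel in one request, while `"text"` sends a single text per request.

```python
def payload_value(texts: list, input_type: str):
    """Value injected at `inputField`, depending on `inputType`
    (assumed semantics: "textArray" batches texts, "text" sends one)."""
    if input_type == "textArray":
        return texts    # whole batch in a single request
    [single] = texts    # "text": exactly one text per request
    return single

print(payload_value(["doc 1", "doc 2"], "textArray"))  # ['doc 1', 'doc 2']
print(payload_value(["doc 1"], "text"))                # doc 1
```

Batching is where the performance gain mentioned above comes from: one HTTP round trip instead of one per text.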
We could, but we might have to come up with some sort of smart naming 🤔
That is because the model is a first-class concept for openAi, huggingFace and ollama, but not for REST. The REST embedder configuration is just a way to tell Meilisearch how to send a POST request with a JSON body into which the text to embed is injected. You could imagine some embedders with a REST API not exposing the model at all. For instance, Hugging Face inference endpoints are created with the model already selected, so one does not pass the model with every request. For an HF inference endpoint, the REST configuration could be something like:

```json
{
  "url": "https://l2skjfwp9punv393.us-east-1.aws.endpoints.huggingface.cloud",
  "apiKey": "YOUR_TOKEN",
  "query": {
    "truncate": true
  },
  "inputField": ["inputs"],
  "inputType": "textArray",
  "pathToEmbeddings": [], // no idea what the answer looks like, would have to test
  "embeddingObject": []   // same
}
```
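On the response side, `pathToEmbeddings` and `embeddingObject` can be read as two key paths: one leading to the array of results, one leading to the vector inside each result. A hedged Python sketch of that interpretation (illustrative only, not Meilisearch's implementation):

```python
def walk(obj, path):
    """Follow a list of keys into a nested JSON-like structure."""
    for key in path:
        obj = obj[key]
    return obj

def extract_embeddings(response: dict, path_to_embeddings: list, embedding_object: list) -> list:
    """Follow `path_to_embeddings` to the array of results, then
    `embedding_object` inside each item to reach the vector itself."""
    items = walk(response, path_to_embeddings)
    return [walk(item, embedding_object) for item in items]

# An OpenAI-shaped response: {"data": [{"embedding": [...]}, ...]}
response = {"data": [{"embedding": [0.1, 0.2]}, {"embedding": [0.3, 0.4]}]}
print(extract_embeddings(response, ["data"], ["embedding"]))
# [[0.1, 0.2], [0.3, 0.4]]
```

Under this reading, an API that returns a bare array of vectors would use an empty `pathToEmbeddings` and an empty `embeddingObject`, as in the HF example above.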
Ok, thanks for the answers, @dureuill. I'm curious about how the API will evolve before we stabilise it, especially if we manage to get more direct feedback from users (perhaps by organising a poll or a couple of interviews?). A lot of my concerns might end up being fairly academic and mostly unimportant for the majority of people who are actually using vector search.

In any case, I think this PR is ready for an official review. I don't think you need to re-read everything, just the new section describing each embedder option in more detail: https://github.com/meilisearch/documentation/pull/2824/files#diff-a88efc3f5697059650c8e14b221124b09e9c2eb12aadc2290bb87a71456fd64aR1999-R2193
reference/api/settings.mdx (Outdated)

> Other models, such as those provided by Ollama and REST embedders, may also be compatible with Meilisearch.

> This field is mandatory for `openAi`, `huggingFace`, and `Ollama` embedders.
This field has default values for `openAi` and `huggingFace`, so it is only mandatory for `ollama` embedders.
What are the default values for `openAi` and `huggingFace`? `text-embedding-3-small` and `BAAI/bge-base-en-v1.5`?
`BAAI/bge-base-en-v1.5` for HF and the ada one for OpenAI.
Ah, I also think it would be nice to have a list of the embedders with the allowed/mandatory parameters per embedder.
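For illustration, minimal configurations under those defaults might look like the following (a hedged sketch based only on this thread; embedder names are placeholders and the parameter set is not verified against the final API):

```json
{
  "embedders": {
    "openai-embedder": {
      "source": "openAi",
      "apiKey": "OPENAI_API_KEY"
    },
    "hf-embedder": {
      "source": "huggingFace"
    },
    "ollama-embedder": {
      "source": "ollama",
      "model": "nomic-embed-text"
    }
  }
}
```

The `ollama` embedder must specify `model` explicitly, while `openAi` and `huggingFace` can fall back to their default models.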
Thank you for this huge addition 🎈 🎉
This PR adds an initial base reference for:
- `hybrid` and `vector` search parameters
- `embedders` index setting