MTEB: Massive Text Embedding Benchmark
Updated May 22, 2024 - Python
Work in progress: an LLM utility that serves as an evaluation step in RAG applications.
Lightweight, ultra-fast re-ranking for your search & retrieval pipelines. Supports SoTA listwise and pairwise reranking based on LLMs, cross-encoders, and more. Created by Prithivi Da; open for PRs & collaborations.
Querying local documents, powered by LLM
Advanced RAG pipeline using Re-Ranking after initial retrieval
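The retrieve-then-rerank pattern above can be sketched in a few lines. This is a toy illustration, not any repository's actual implementation: both scoring functions are hand-made stand-ins (a real pipeline would use BM25 or dense retrieval for the first stage and a cross-encoder or LLM for reranking).

```python
def first_stage_score(query: str, doc: str) -> float:
    """Cheap lexical score: fraction of query terms present in the doc."""
    q_terms = set(query.lower().split())
    d_terms = set(doc.lower().split())
    return len(q_terms & d_terms) / max(len(q_terms), 1)

def rerank_score(query: str, doc: str) -> float:
    """Stand-in for an expensive reranker: lexical overlap, boosted by
    how early the first query term appears in the document."""
    base = first_stage_score(query, doc)
    q_terms = set(query.lower().split())
    for i, word in enumerate(doc.lower().split()):
        if word in q_terms:
            return base + 1.0 / (1 + i)
    return base

def search(query: str, docs: list[str], k: int = 3) -> list[str]:
    # Stage 1: retrieve top-k candidates with the cheap scorer.
    candidates = sorted(docs, key=lambda d: first_stage_score(query, d),
                        reverse=True)[:k]
    # Stage 2: rerank only those candidates with the expensive scorer.
    return sorted(candidates, key=lambda d: rerank_score(query, d),
                  reverse=True)

docs = [
    "reranking improves retrieval quality",
    "dense retrieval with embeddings",
    "cats and dogs",
]
print(search("retrieval reranking", docs, k=2))
```

The key design point is that the expensive scorer only ever sees the top-k candidates from the cheap first stage, which is what keeps two-stage pipelines fast.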
Exploring search relevance techniques.
Embedding (SentenceTransformer) and Re-Ranking
Is ChatGPT Good at Search? LLMs as Re-Ranking Agent [EMNLP 2023 Outstanding Paper Award]
This repository showcases a comprehensive approach to information retrieval, document re-ranking, and language model integration. It incorporates techniques such as document chunking, embedding projection, and automatic query expansion to enhance the effectiveness of information retrieval systems.
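Of the techniques listed above, document chunking is the most mechanical. A minimal sketch of overlapping word-window chunking (window size and overlap are illustrative parameters, not taken from the repository):

```python
def chunk_text(text: str, size: int = 5, overlap: int = 2) -> list[str]:
    """Split text into overlapping word windows.

    `size` and `overlap` are counted in words; overlap keeps context
    that would otherwise be cut at chunk boundaries.
    """
    words = text.split()
    step = max(size - overlap, 1)  # guard against overlap >= size
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + size]))
        if start + size >= len(words):
            break
    return chunks

print(chunk_text("a b c d e f g", size=4, overlap=2))
```

Overlap trades index size for recall: a sentence split across a boundary still appears whole in at least one chunk.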
A curated list of awesome papers related to pre-trained models for information retrieval (a.k.a., pretraining for IR).
Wikipedia Semantic Search w/ Embeddings
This is an official implementation for "Robust Graph Structure Learning over Images via Multiple Statistical Tests" accepted at NeurIPS 2022.
Official repository of ICCV21 paper "Viewpoint Invariant Dense Matching for Visual Geolocalization"
Training a customized dataset on fast-reid, evaluation and visualization
Explore the progression from keyword search to dense retrieval and reranking, injecting the intelligence of LLMs into your search system to make it faster and more effective.
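The dense-retrieval half of that progression reduces to ranking documents by cosine similarity between embedding vectors. A self-contained sketch with hand-made 3-d vectors standing in for real model embeddings (in practice they would come from a model such as a SentenceTransformer):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity of two vectors; 0.0 if either is zero-length."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Toy "embeddings" for two documents (hypothetical names and values).
doc_vecs = {
    "doc_a": [0.9, 0.1, 0.0],
    "doc_b": [0.1, 0.9, 0.1],
}
query_vec = [1.0, 0.0, 0.0]

# Rank documents by similarity to the query vector.
ranked = sorted(doc_vecs, key=lambda d: cosine(query_vec, doc_vecs[d]),
                reverse=True)
print(ranked)  # doc_a first: its vector points the same way as the query
```

Unlike keyword matching, this ranks by direction in embedding space, so paraphrases with no shared terms can still score highly.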