The repository for VeriProof
BERT model on CMS synthetic EHR data for diagnosis and procedure prediction in PyTorch
[Survey] Masked Modeling for Self-supervised Representation Learning on Vision and Beyond (https://arxiv.org/abs/2401.00897)
Gateway into the John Snow Labs Ecosystem
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
Provides an enterprise-grade LLM-based development framework, tools, and fine-tuned models.
Towards evaluation of fairness in MDD models: Automatic analysis of symptom differences for gender groups in the D-vlog dataset
Minimal keyword extraction with BERT
A Unified Library for Parameter-Efficient and Modular Transfer Learning
🔍 LLM orchestration framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) to pipelines or agents that can interact with your data. With advanced retrieval methods, it's best suited for building RAG, question answering, semantic search or conversational agent chatbots.
Lightweight, Vercel-friendly version of BMX, the BookMark eXtractor: https://github.com/cooperability/BMX-bookmark-extractor
👑 Easy-to-use and powerful NLP and LLM library with an 🤗 awesome model zoo, supporting a wide range of NLP tasks from research to industrial applications, including 🗂 Text Classification, 🔍 Neural Search, ❓ Question Answering, ℹ️ Information Extraction, 📄 Document Intelligence, 💌 Sentiment Analysis, etc.
Toolkit for a learning health system
Easy multi-task learning with HuggingFace Datasets and Trainer
Tokenize and convert sample text data into vectors using BERT, load the vector representations into OpenSearch, and use kNN for semantic search
OpenSearch Neural Search example. Load BERT into OpenSearch and create embeddings as data is indexed, then use the embeddings to perform vector search (a minimal embed-and-query sketch follows this list)
Easy and lightning-fast training of 🤗 Transformers on Habana Gaudi processors (HPU)
Q&A system using BERT and a Faiss vector database
BERT classification model for processing texts longer than 512 tokens. Text is first split into smaller chunks; each chunk is fed to BERT, and the intermediate results are pooled. The implementation allows fine-tuning (a sketch of this chunk-and-pool approach follows the list).
Rust native ready-to-use NLP pipelines and transformer-based models (BERT, DistilBERT, GPT2,...)
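
The two OpenSearch entries above describe the same embed-and-query flow: produce BERT vectors, index them, and run a kNN query. Below is a minimal sketch of that flow, assuming a local OpenSearch node with the k-NN plugin enabled; the bert-base-uncased model, masked mean pooling, and the `docs`/`embedding` index and field names are illustrative choices, not taken from either repository.

```python
# Minimal sketch: BERT embeddings + OpenSearch kNN semantic search.
# Assumes OpenSearch is reachable on localhost:9200 with the k-NN plugin
# enabled; index/field names here are hypothetical.
import torch
from transformers import AutoModel, AutoTokenizer
from opensearchpy import OpenSearch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(text: str) -> list[float]:
    """Mean-pool BERT's last hidden state over non-padding tokens."""
    inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state      # (1, seq_len, 768)
    mask = inputs["attention_mask"].unsqueeze(-1).float()
    pooled = (hidden * mask).sum(1) / mask.sum(1)       # masked mean
    return pooled.squeeze(0).tolist()

client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])
client.indices.create(index="docs", body={
    "settings": {"index": {"knn": True}},
    "mappings": {"properties": {
        "text": {"type": "text"},
        "embedding": {"type": "knn_vector", "dimension": 768},
    }},
})

for doc in ["BERT is a transformer encoder.", "OpenSearch supports kNN queries."]:
    client.index(index="docs", body={"text": doc, "embedding": embed(doc)})
client.indices.refresh(index="docs")

hits = client.search(index="docs", body={
    "size": 3,
    "query": {"knn": {"embedding": {"vector": embed("What is BERT?"), "k": 3}}},
})
for hit in hits["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["text"])
```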
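
The long-text classifier entry describes chunking a sequence past BERT's 512-token limit and pooling the per-chunk results. Here is a minimal sketch of that chunk-and-pool idea; the chunk size, stride, per-chunk masked mean pooling, and linear head are assumptions for illustration, not the repository's exact implementation.

```python
# Sketch: classify texts longer than 512 tokens by slicing the token
# sequence into overlapping windows, running BERT on each window,
# pooling per-window vectors, and classifying the pooled result.
# Chunk size, stride, and pooling choice are assumptions here.
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

class ChunkedBertClassifier(nn.Module):
    def __init__(self, num_labels: int, chunk_size: int = 512, stride: int = 256):
        super().__init__()
        self.bert = AutoModel.from_pretrained("bert-base-uncased")
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_labels)
        self.chunk_size, self.stride = chunk_size, stride

    def forward(self, input_ids: torch.Tensor, attention_mask: torch.Tensor):
        chunks = []
        for start in range(0, input_ids.size(1), self.stride):
            ids = input_ids[:, start:start + self.chunk_size]
            mask = attention_mask[:, start:start + self.chunk_size]
            hidden = self.bert(input_ids=ids, attention_mask=mask).last_hidden_state
            m = mask.unsqueeze(-1).float()
            # Masked mean over the chunk's tokens -> one vector per chunk.
            chunks.append((hidden * m).sum(1) / m.sum(1).clamp(min=1.0))
            if start + self.chunk_size >= input_ids.size(1):
                break
        pooled = torch.stack(chunks, dim=0).mean(0)   # mean over chunks
        return self.classifier(pooled)                # (1, num_labels) logits

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = tokenizer("a very long document " * 500, return_tensors="pt")  # > 512 tokens
model = ChunkedBertClassifier(num_labels=2)
print(model(enc["input_ids"], enc["attention_mask"]).shape)  # torch.Size([1, 2])
```

Because gradients flow through every chunk's BERT pass, the whole module can be fine-tuned end to end, matching the entry's note that fine-tuning is supported.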