Email Auto-ReplAI is a Python tool that uses AI to automate drafting responses to unread Gmail messages, streamlining email management tasks.
Transcribes videos and describes them with OpenAI APIs or local models (a transcription sketch follows this list).
tinydogBIGDOG uses GPT4All and OpenAI API calls to create a consistent, persistent chat agent, letting you choose between the "tiny dog" and the "big dog" in a student-teacher frame. Two dogs with a single bark.
Like ChatGPT's voice conversations with an AI, but entirely offline, private, and trade-secret-friendly, using local AI models such as Llama 2 and Whisper.
An automatic docstrings generator using local LLM models.
OfflineAI is an artificial intelligence that operates offline and uses machine learning to perform various tasks based on the code provided. It is built on two Mistral AI models.
Local AI Open Orca For Dummies is a user-friendly guide to running Large Language Models locally. Simplify your AI journey with easy-to-follow instructions and minimal setup. Perfect for developers tired of complex processes!
Fluentcards AI is an enhanced version of the original Fluentcards project, integrating AI to generate detailed explanations for vocabulary collected from Kindle devices. It lets language learners export their Kindle vocabulary into Anki flashcards, with AI-powered explanations providing deeper context and understanding of each word.
Telegram bot that interacts with the local Ollama 🦙 to answer user messages (a chat sketch follows this list).
MemoryCache is an experimental development project to turn a local desktop environment into an on-device AI agent
Voice-to-voice personal assistant: fully local and GPU-vendor agnostic.
Converse effortlessly in more than 50 languages!
Extract structured data from local or remote LLM models (a JSON-extraction sketch follows this list).
Catalog of OCI images for popular open-source or open Large Language Models.
🦙 Ollama Telegram bot, with advanced configuration
MLX-VLM is a package for running Vision LLMs locally on your Mac using MLX.
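As a rough illustration of the transcription tools listed above, here is a minimal sketch of local speech-to-text with the openai-whisper package; the model size and file name are placeholders, not code from any of the listed projects.

```python
# Minimal sketch: transcribe a media file locally with openai-whisper.
# Assumes `pip install openai-whisper` and ffmpeg on PATH; "meeting.mp4"
# is a hypothetical input file.
import whisper

model = whisper.load_model("base")        # small multilingual model
result = model.transcribe("meeting.mp4")  # whisper extracts audio via ffmpeg
print(result["text"])                     # full transcript as one string
```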
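For the Ollama-backed Telegram bots above, a minimal sketch of the model-query side, assuming a default Ollama server on localhost:11434; the model name and helper function are illustrative, not the projects' actual code.

```python
# Minimal sketch: send a user message to a local Ollama server and
# return the model's reply (the piece a Telegram bot would relay).
import requests

def ask_ollama(user_message: str) -> str:
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": "llama2",  # placeholder; any locally pulled model works
            "messages": [{"role": "user", "content": user_message}],
            "stream": False,
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]

print(ask_ollama("Summarize what a local LLM is in one sentence."))
```

For long replies, the same endpoint can stream tokens incrementally by leaving `stream` enabled and reading the response line by line.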
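And for structured-data extraction, a minimal sketch using Ollama's JSON output mode; the prompt, field names, and model name are assumptions for illustration only.

```python
# Minimal sketch: ask a local Ollama model for machine-readable JSON.
# The "format": "json" option constrains the output to valid JSON.
import json
import requests

prompt = (
    "Extract the person's name and age from this sentence as JSON with "
    'keys "name" and "age": "Ada Lovelace was 36 when she died."'
)
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama2", "prompt": prompt, "format": "json", "stream": False},
    timeout=120,
)
resp.raise_for_status()
record = json.loads(resp.json()["response"])  # e.g. {"name": "Ada Lovelace", "age": 36}
print(record)
```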