WebAssembly binding for llama.cpp - Enabling in-browser LLM inference (C++ · Updated May 14, 2024)
Conversational AI Platform to build effective Proactive Digital Assistants using Visual LLM Chaining
Practical course about Large Language Models.
🤘 TT-NN operator library, and TT-Metalium low level kernel programming model.
LLMOps with Prompt Flow is an "LLMOps template and guidance" to help you build LLM-infused apps using Prompt Flow. It offers a range of features including centralized code hosting, lifecycle management, variant and hyperparameter experimentation, A/B deployment, and reporting across all runs and experiments.
🚀 Hailuo AI reverse-engineered API for free testing [specialty: ultra-natural voice]. Supports high-speed streaming output, speech synthesis, web search, long-document interpretation, image analysis, and multi-turn conversation, with zero-configuration deployment, multiple token support, and automatic cleanup of conversation traces.
A cloud-native vector database, storage for next generation AI applications
🍶 llm-distillery ⇢ use LLMs to run map-reduce summarization tasks on large documents until a target token size is met.
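The llm-distillery blurb above describes iterative map-reduce summarization: split a long document into chunks, summarize each chunk, join the results, and repeat until the text fits a target token budget. A minimal sketch of that loop follows; it is not llm-distillery's actual API, and `summarize_chunk` is a hypothetical stand-in (here a simple truncation) for a real LLM call.

```python
def count_tokens(text: str) -> int:
    # Crude whitespace tokenizer standing in for a real tokenizer (assumption).
    return len(text.split())

def summarize_chunk(chunk: str, ratio: float = 0.5) -> str:
    # Placeholder for an LLM summarization call: keep the first half of
    # the tokens. A real pipeline would prompt a model here.
    words = chunk.split()
    return " ".join(words[: max(1, int(len(words) * ratio))])

def distill(text: str, target_tokens: int, chunk_tokens: int = 200) -> str:
    # Repeat map (summarize each chunk) + reduce (join summaries)
    # until the document fits the target token budget.
    while count_tokens(text) > target_tokens:
        words = text.split()
        chunks = [" ".join(words[i : i + chunk_tokens])
                  for i in range(0, len(words), chunk_tokens)]
        text = " ".join(summarize_chunk(c) for c in chunks)
    return text
```

With each pass roughly halving the token count, a 1,000-token input converges on a 100-token budget in a few iterations; a real implementation would swap in a model-backed summarizer and tokenizer.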
The platform for customizing AI from enterprise data
Build AI-powered applications with React, Svelte, Vue, and Solid
Daring Mechanician is a Python library for building tools that use AI by building tools that AIs use.
Text analytics for LLM apps. PostHog for prompts. Extract evaluations, intents, and events from text messages. phospho leverages LLMs (OpenAI, MistralAI, Ollama, etc.)
Integrate cutting-edge LLM technology quickly and easily into your apps