ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
Updated May 10, 2024 - C++
🍷 Gracefully claim weekly free games and monthly content from Epic Store.
Open standard for machine learning interoperability
Java version of LangChain
Burn is a comprehensive dynamic deep learning framework built in Rust, with extreme flexibility, compute efficiency, and portability as its primary goals.
Tiny, no-nonsense, self-contained TensorFlow and ONNX inference
Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models
🚀 Accelerate training and inference of 🤗 Transformers and 🤗 Diffusers with easy-to-use hardware optimization tools
Neural Network Compression Framework for enhanced OpenVINO™ inference
Speech-to-text, text-to-speech, and speaker recognition using next-gen Kaldi with onnxruntime, without an internet connection. Supports embedded systems, Android, iOS, Raspberry Pi, RISC-V, x86_64 servers, websocket server/client, C/C++, Python, Kotlin, C#, Go, NodeJS, Java, Swift
Machine learning framework for both deep learning and traditional algorithms
Efficient CPU/GPU/Vulkan ML Runtimes for VapourSynth (with built-in support for waifu2x, DPIR, RealESRGANv2/v3, Real-CUGAN, RIFE, SCUNet and more!)
Visualizer for neural network, deep learning and machine learning models
Export Segment Anything Models to ONNX
A type-safe, lightweight, modern, and performant Java binding of Microsoft's ONNX Runtime
Represent trained machine learning models as Pyomo optimization formulations