TURN-KEY DEEP LEARNING MODELS
Built to run on any device, anywhere
Bringing Unparalleled Performance to Edge Devices
Our hardware-agnostic deep-learning models deliver world-leading inference speed and a minimal memory footprint across a wide range of hardware platforms, from legacy CPUs and GPUs to the latest and most advanced neural accelerators. This compute efficiency lets our customers benefit quickly from edge AI: reduced latency and a better user experience, enhanced privacy, and a significant reduction in cloud operating expenses. A minimal deployment sketch follows the hardware list below.
| CPUs & MCUs (e.g. ARM, x86)
| GPUs & NPUs
| DSPs & FPGAs
| Neural Network Native ASICs
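As an illustration only, here is a minimal sketch of hardware-agnostic deployment, assuming a model exported to ONNX and run with the open-source ONNX Runtime; the model file name, the float32 input, and the list of preferred execution providers are placeholders, not a description of our production runtime.

# Minimal sketch: one exported model, different hardware back ends via ONNX Runtime.
# "detector.onnx", the float32 input, and the provider preference are assumptions;
# the providers actually available depend on the device and the onnxruntime build.
import numpy as np
import onnxruntime as ort

MODEL_PATH = "detector.onnx"  # hypothetical exported model

# Prefer an accelerator-backed provider when present, fall back to plain CPU.
preferred = ["CUDAExecutionProvider", "CPUExecutionProvider"]
available = ort.get_available_providers()
providers = [p for p in preferred if p in available] or ["CPUExecutionProvider"]

session = ort.InferenceSession(MODEL_PATH, providers=providers)

# Query the input signature instead of hard-coding it.
inp = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]  # resolve dynamic dims
dummy = np.random.rand(*shape).astype(np.float32)            # stand-in input tensor

outputs = session.run(None, {inp.name: dummy})
print("ran on:", session.get_providers()[0], "-> output shapes:", [o.shape for o in outputs])

The same script runs unchanged whether the fastest available back end is a CPU, a GPU, or another accelerator exposed as an execution provider; only the provider list changes.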
Deploying Highly Accurate Computer Vision Models
Our field-proven models solve critical problems directly on compute-constrained embedded devices, interpreting the visual world with best-in-class accuracy and latency across a broad range of computer vision applications. A minimal detection sketch follows the list below.
| Detection (e.g. person, vehicle, pose)
| Classification (e.g. animal)
| Identification (e.g. face, license plate)
| Segmentation (e.g. background removal)
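For illustration, a minimal sketch of on-device person detection using the TensorFlow Lite runtime; the model file "person_detector.tflite" and the SSD-style output layout (boxes, classes, scores, count) are assumptions made for this example, and a real deployment would feed camera frames rather than a blank image.

# Minimal sketch: person detection on an embedded device with the TFLite runtime.
# Model file and output tensor layout are assumptions for illustration only.
import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(model_path="person_detector.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
outs = interpreter.get_output_details()

frame = np.zeros(inp["shape"], dtype=inp["dtype"])  # stand-in for a camera frame

interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()

boxes  = interpreter.get_tensor(outs[0]["index"])[0]  # [N, 4] normalized ymin, xmin, ymax, xmax (assumed)
scores = interpreter.get_tensor(outs[2]["index"])[0]  # [N] confidence per detection (assumed)

for box, score in zip(boxes, scores):
    if score > 0.5:  # confidence threshold
        print("detection:", box.tolist(), round(float(score), 2))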
Delivering a Natural Audio Interface
Whether for acoustic event detection or a custom wake word, our deep-learning models for always-on audio and speech applications dramatically reduce power consumption while improving performance, enabling a natural, hands-free interface directly on edge devices. A minimal wake-word sketch follows the list below.
| Voice commands
| Wake words
| Noise suppression
| Acoustic event detection (e.g. glass break, smoke alarm)
| Echo cancellation
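Purely as a sketch, here is what an always-on wake-word loop can look like; the model file "wake_word.tflite", the 16 kHz sample rate, the one-second raw-audio window, and the label order are assumptions, and the synthetic frame source stands in for a real microphone read.

# Minimal sketch: always-on wake-word loop over a sliding one-second window.
# Model, sample rate, window size, and labels are assumptions for illustration.
import numpy as np
import tflite_runtime.interpreter as tflite

SAMPLE_RATE = 16000
FRAME = 1600                           # 100 ms hop
WINDOW = SAMPLE_RATE                   # classify the most recent 1 s of audio
LABELS = ["background", "wake_word"]   # hypothetical label order

interpreter = tflite.Interpreter(model_path="wake_word.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

ring = np.zeros(WINDOW, dtype=np.float32)  # sliding buffer of raw samples

def next_audio_frame():
    # Stand-in for a microphone read; returns 100 ms of silence here.
    return np.zeros(FRAME, dtype=np.float32)

for _ in range(100):  # always-on loop, bounded only for this sketch
    ring = np.concatenate([ring[FRAME:], next_audio_frame()])
    interpreter.set_tensor(inp["index"], ring.reshape(inp["shape"]))
    interpreter.invoke()
    probs = interpreter.get_tensor(out["index"])[0]
    if probs[LABELS.index("wake_word")] > 0.9:  # trigger threshold
        print("wake word detected")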