Deploy models with the highest performance and accuracy
The Neural Compute SDK makes it easy for you to deploy machine learning models efficiently across a range of devices.
IMGDNN
A neural network optimisation and runtime API for integration into your application or framework.
TVM Heterogeneous Compilation
Handle multiple devices with ease: compile models for a range of hardware, with automatic partitioning of the network across devices and efficient runtime synchronisation.
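The idea behind automatic partitioning can be illustrated with a small sketch: each operator is placed on an accelerator when it is supported there, and contiguous runs of same-device operators are fused into subgraphs. The operator names and supported-op set below are purely illustrative assumptions, not the SDK's actual API or device capabilities.

```python
# Conceptual sketch of device partitioning (illustrative only, not the SDK API).
# Ops supported on the neural accelerator; anything else falls back to the GPU.
NNA_SUPPORTED = {"conv2d", "relu", "add", "maxpool"}

def partition(ops):
    """Group a linear sequence of ops into contiguous per-device subgraphs."""
    groups = []
    for op in ops:
        device = "NNA" if op in NNA_SUPPORTED else "GPU"
        if groups and groups[-1][0] == device:
            groups[-1][1].append(op)  # extend the current subgraph
        else:
            groups.append((device, [op]))  # start a new subgraph on a new device
    return groups

model = ["conv2d", "relu", "softmax", "conv2d", "maxpool"]
plan = partition(model)
# plan: [('NNA', ['conv2d', 'relu']), ('GPU', ['softmax']), ('NNA', ['conv2d', 'maxpool'])]
```

In a real heterogeneous compile the partitioner also weighs data-transfer cost between devices, but the grouping step above captures the core behaviour: supported regions run on the accelerator and unsupported operators fall back transparently.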
Quantisation Tools
Powerful tools to take full advantage of NNA and GPU hardware, including state-of-the-art quantisation and compression techniques.
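As a rough illustration of what quantisation does, the sketch below shows standard affine (scale and zero-point) quantisation of float32 weights to uint8 with NumPy. This is a generic textbook scheme, assumed here for illustration; the SDK's own tools may use different (e.g. per-channel or mixed-precision) schemes.

```python
import numpy as np

def affine_quantise(x, num_bits=8):
    # Map the tensor's observed range (always including 0) onto [0, 255].
    qmin, qmax = 0, 2**num_bits - 1
    x_min, x_max = min(float(x.min()), 0.0), max(float(x.max()), 0.0)
    scale = (x_max - x_min) / (qmax - qmin)
    zero_point = int(round(qmin - x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantise(q, scale, zero_point):
    # Recover approximate float values from the integer representation.
    return scale * (q.astype(np.float32) - zero_point)

weights = np.random.default_rng(0).normal(size=1000).astype(np.float32)
q, scale, zp = affine_quantise(weights)
error = float(np.abs(weights - dequantise(q, scale, zp)).max())
# error is bounded by half a quantisation step (scale / 2)
```

Storing weights as uint8 cuts memory traffic fourfold versus float32 and lets integer accelerator datapaths do the arithmetic, at the cost of the bounded rounding error shown above.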
Our Netron-based interface brings together the full power of the Neural Compute SDK tools, making them easy and intuitive to use.
You can find more information on the Neural Compute SDK and download the academic version from our Imagination University Programme website.