STMicroelectronics has released STM32Cube.AI version 7.2.0, the first artificial-intelligence (AI) development tool from a microcontroller (MCU) vendor to support ultra-efficient deeply quantized neural networks. STM32Cube.AI converts pre-trained neural networks into optimized C code for STM32 microcontrollers. It is a vital tool for developing cutting-edge AI solutions that make the most of the limited memory and computing capabilities of embedded devices. Moving AI to the edge, away from the cloud, provides significant advantages for applications. These include privacy by design, deterministic real-time response, better security, and lower power consumption. It also helps optimize cloud usage.
With support for advanced quantization input formats such as QKeras and Larq, developers can reduce their networks' memory footprint and latency. This opens up more opportunities for AI at the network edge, including cost-effective and power-efficient applications. Developers can create edge devices, such as self-powered IoT nodes, that deliver superior functionality and performance along with longer battery life. ST's STM32 family offers many compatible hardware platforms, ranging from ultra-low-power Arm Cortex®-M0 MCUs to high-performance devices built on Cortex-M7, Cortex-M33, and Cortex-A7 cores. STM32Cube.AI version 7.2.0 also adds support for TensorFlow 2.9 models, kernel performance improvements, support for new machine-learning algorithms from scikit-learn, and new Open Neural Network eXchange (ONNX) operators.
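To see why deeply quantized networks matter on memory-constrained MCUs: in a binary network (the kind QKeras or Larq can train), each weight is reduced to a single sign bit instead of a 32-bit float, so eight weights fit in one byte. The sketch below is a minimal, illustrative Python example of that packing idea; it is not ST's tooling or the Larq API, and the function names are hypothetical.

```python
def binarize(weights):
    """Quantize float weights to 1-bit signs (+1/-1), as in binary neural networks.
    (Illustrative only; real tools like Larq quantize during training.)"""
    return [1 if w >= 0 else -1 for w in weights]

def pack_bits(signs):
    """Pack +1/-1 signs into bytes, 8 weights per byte (LSB first)."""
    out = bytearray()
    for i in range(0, len(signs), 8):
        byte = 0
        for j, s in enumerate(signs[i:i + 8]):
            if s > 0:
                byte |= 1 << j
        out.append(byte)
    return bytes(out)

# Hypothetical example weights for one tiny layer.
weights = [0.42, -0.17, 0.05, -0.93, 0.61, -0.08, 0.33, -0.50]
float32_size = len(weights) * 4            # 4 bytes per float32 weight
packed = pack_bits(binarize(weights))
print(float32_size, len(packed))           # 32 bytes vs 1 byte: a 32x reduction
```

This 32x weight-storage reduction (before accounting for activations and per-layer scale factors) is what lets such networks fit in the small flash and RAM budgets of Cortex-M-class devices.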