Silicon Labs details ML hardware on wireless chip

By Nick Flaherty


Silicon Labs has benchmarked the performance of its hardware accelerator for machine learning in its latest wireless chip.

The MG24 and BG24 include a purpose-built matrix vector execution unit optimised for convolutional neural networks. The accelerator is designed with native support for the TensorFlow for Microcontrollers (TF4M) framework from Google, offloading the matrix calculations from the Arm Cortex-M33 microcontroller core in the chip.

This gives a low energy benchmark of 1721 µJ for image processing with a latency of 186 ms, and 45 µJ with a latency of 5 ms for anomaly detection in the MLPerf Tiny benchmarks (see link below for the other results).
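As a rough cross-check, dividing the quoted energy by the quoted latency gives the average active power drawn during an inference. The calculation below is ours, not part of the benchmark submission; only the energy and latency figures come from the article:

```python
# Average active power implied by the MLPerf Tiny figures quoted above:
# P = E / t (energy in microjoules / latency in milliseconds -> power in milliwatts).

def active_power_mw(energy_uj: float, latency_ms: float) -> float:
    """Average power in mW implied by an energy/latency pair."""
    return energy_uj / latency_ms

image = active_power_mw(1721, 186)   # image processing
anomaly = active_power_mw(45, 5)     # anomaly detection

print(f"image processing:  {image:.2f} mW")    # ~9.25 mW
print(f"anomaly detection: {anomaly:.2f} mW")  # 9.00 mW
```

Both workloads come out at roughly 9 mW of active power, which suggests the energy difference between them is dominated by how long the accelerator has to run, not by the power it draws while running.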

The low power comes from allowing the rest of the chip to shut down while the ML calculations are made.

Silicon Labs is using the TensorFlow API for customers and is working on its own frameworks. “There are customers who want to develop on their own, and we see TF4M as the most widespread framework for these Cortex-M4+ systems,” said Tamas Daranyi, product manager for IoT and AI/ML. “We are actively working on extending our ecosystem and bringing in third parties and partners who enable ready-made libraries for complete solutions. We are new to this area, so other solutions are coming. A third approach is an internal ML development tool where people can develop their own solutions in Python.”

Several companies have been sampling the MG24 and BG24 since January. These support multiple wireless protocols, including Matter, Zigbee, Wi-Fi and Bluetooth, via the separate radio sub-system.

Edge Impulse is using the chips for an embedded ML platform for companies building AI-aware products with automated data labeling, pre-built digital signal processing and ML blocks. This enables live classification testing and digital twins that are less complex, more contextual and easier to develop.

“Integrating Edge Impulse with the built-in machine learning accelerator on the BG24 and MG24 enables up to 4x faster processing of machine learning algorithms with up to 6x lower power consumption while offloading the main CPU for other applications – enabling smarter and faster edge devices with long battery life and new potential workloads,” said Zach Shelby, CEO and co-founder at Edge Impulse. “By minimizing latency and traffic over the internet for time-sensitive applications, we are strengthening privacy and security, taking full advantage of the MG24 and BG24 right at the edge.”

SensiML has ported its AI tools to use the built-in AI/ML accelerator in the MG24 and BG24 for acoustic event detection, motion analysis, gesture and keyword recognition, anomaly detection, predictive maintenance and other time series sensor signal processing. SensiML’s software tool automates the upfront development complexity and optimizes the resulting firmware to deliver accurate results with the smallest memory and power footprint possible.

www.silabs.com/
