
ROHM Develops Ultra-Low-Power On-Device Learning Edge AI Chip


ROHM has developed an on-device learning AI chip (an SoC with an on-device AI accelerator) for use as an edge computing endpoint in the IoT field. It uses artificial intelligence to detect, in real time and with extremely low power consumption, signs of impending failure (predictive failure detection) in electronic devices equipped with sensors and motors.

AI chips generally perform both learning and inference to deliver artificial intelligence capabilities. Learning requires large amounts of data to be gathered, stored in a database, and updated as circumstances change, so an AI chip that learns needs considerable computing power and consumes a great deal of energy. This has made it difficult to develop learning-capable AI chips with the low power consumption required by edge computers and endpoints, which are needed to build an efficient IoT ecosystem. ROHM's newly developed AI chip is based on an "on-device learning algorithm" devised by Professor Matsutani of Keio University and consists mainly of an AI accelerator (an AI-dedicated hardware circuit) and ROHM's high-efficiency 8-bit CPU, tinyMicon MatisseCORE™. Combining the ultra-compact 20,000-gate AI accelerator with the high-performance CPU enables learning and inference at a power consumption of only a few tens of milliwatts, roughly 1,000 times lower than that of conventional AI chips capable of learning.

This enables real-time failure prediction for a wide range of applications: anomaly detection results (an anomaly score) can be output numerically, even for input data from unknown sources, at the site where the equipment is installed, without the need for cloud servers. Going forward, ROHM plans to incorporate the AI accelerator in this chip into a variety of IC products for sensors and motors. Commercialization is planned to begin in 2023, with mass production scheduled for 2024.

Professor Hiroki Matsutani, Dept. of Information and Computer Science, Keio University, Japan: "As IoT technologies such as 5G communications and digital twins advance, cloud computing will continue to evolve, but processing all data on cloud servers is not necessarily the best solution in terms of load, cost, and power consumption. Through the 'on-device learning' we research and the on-device learning algorithms we have developed, we aim to make data processing at the edge of the network more efficient and thereby create a more efficient IoT ecosystem.

"Through this partnership, ROHM has shown how this technology can be brought to market at a reasonable cost by further developing the on-device learning circuit technology. I expect to see the first AI chips integrated into ROHM's IC products in the near future."

Find out more details about ROHM's on-device AI chip technology (2.6 MB).

tinyMicon MatisseCORE™

tinyMicon MatisseCORE™ (Matisse: micro arithmetic unit for tiny size sequencing) is ROHM's proprietary 8-bit processor, designed with the goal of making analog ICs smarter for IoT-related ecosystems. An instruction set optimized for embedded applications, combined with the latest compiler technology, provides fast arithmetic processing in a small chip area and with a small code size. The processor also targets high-reliability applications, such as those requiring certification under the ISO 26262 and ASIL-D vehicle functional safety standards. In addition, an on-board "real-time debugging function" keeps debugging from interfering with program execution, allowing debugging to take place even while the program is running.

ROHM's AI Chip (SoC with On-Device Learning AI Accelerator)

The prototype AI chip (prototype part number: BD15035) is based on an on-device learning algorithm (a three-layer neural network AI circuit) developed by Professor Matsutani of Keio University. ROHM reduced the AI circuit from its original 5 million gates to just 20,000 (0.4% of the original size) and reconfigured it as a dedicated AI accelerator (AxlCORE-ODL) controlled by ROHM's high-efficiency 8-bit CPU, tinyMicon MatisseCORE™, enabling AI learning and inference at an extremely low power consumption of only a few tens of milliwatts.
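To make the idea of on-device learning with a three-layer network more concrete, below is a minimal, self-contained C++ sketch of an autoencoder-style anomaly detector: it learns a "normal" sensor pattern through online updates and reports the reconstruction error as an anomaly score. The layer sizes, tanh activation, SGD update rule, and autoencoder formulation are all illustrative assumptions; this article does not publish the internals of Professor Matsutani's algorithm or ROHM's AxlCORE-ODL circuit.

```cpp
// Illustrative sketch only: a tiny three-layer autoencoder trained online,
// scoring anomalies by reconstruction error. This is NOT ROHM's AxlCORE-ODL
// implementation; sizes, learning rate, and update rule are assumptions.
#include <cmath>
#include <cstdio>

constexpr int kIn = 4;       // input features (e.g., sensor channels)
constexpr int kHidden = 8;   // intermediate-layer units
constexpr float kLr = 0.01f; // learning rate for online updates

struct TinyAutoencoder {
    float w1[kHidden][kIn]{};  // input -> hidden weights
    float w2[kIn][kHidden]{};  // hidden -> output weights

    TinyAutoencoder() {
        // Small deterministic initialization (a real system would randomize).
        for (int h = 0; h < kHidden; ++h)
            for (int i = 0; i < kIn; ++i)
                w1[h][i] = 0.1f * ((h + i) % 3 - 1);
        for (int i = 0; i < kIn; ++i)
            for (int h = 0; h < kHidden; ++h)
                w2[i][h] = 0.1f * ((i + h) % 3 - 1);
    }

    // Process one sample; returns the anomaly score (mean squared
    // reconstruction error). If learn is true, also applies one SGD update.
    float step(const float x[kIn], bool learn) {
        float hval[kHidden], err[kIn], score = 0;
        // Forward pass: hidden = tanh(w1 * x), output = w2 * hidden.
        for (int h = 0; h < kHidden; ++h) {
            float s = 0;
            for (int i = 0; i < kIn; ++i) s += w1[h][i] * x[i];
            hval[h] = std::tanh(s);
        }
        for (int i = 0; i < kIn; ++i) {
            float y = 0;
            for (int h = 0; h < kHidden; ++h) y += w2[i][h] * hval[h];
            err[i] = y - x[i];
            score += err[i] * err[i];
        }
        score /= kIn;
        if (learn) {
            // Backpropagate through both weight layers (plain online SGD).
            float dh[kHidden]{};
            for (int i = 0; i < kIn; ++i)
                for (int h = 0; h < kHidden; ++h) {
                    dh[h] += err[i] * w2[i][h];      // uses pre-update w2
                    w2[i][h] -= kLr * err[i] * hval[h];
                }
            for (int h = 0; h < kHidden; ++h) {
                float g = dh[h] * (1 - hval[h] * hval[h]); // tanh derivative
                for (int i = 0; i < kIn; ++i)
                    w1[h][i] -= kLr * g * x[i];
            }
        }
        return score;
    }
};

int main() {
    TinyAutoencoder ae;
    // Learn the "normal" pattern on-device, then score a deviating sample.
    const float normal[kIn] = {0.5f, 0.4f, 0.5f, 0.6f};
    for (int t = 0; t < 2000; ++t) ae.step(normal, /*learn=*/true);
    const float anomalous[kIn] = {0.9f, -0.2f, 0.8f, 0.1f};
    std::printf("normal score:    %f\n", ae.step(normal, false));
    std::printf("anomalous score: %f\n", ae.step(anomalous, false));
}
```

After training, the score for the learned pattern settles near zero while the deviating input yields a visibly larger value; that numeric gap is the essence of the "anomaly score" output described above.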

With this chip, anomaly detection results (an anomaly score) can be output numerically for unknown patterns in input data of various types (e.g., brightness, acceleration, current, or voice) at the site where the equipment is installed, without cloud servers or prior AI training. This enables real-time failure prediction (detection of predictive failure signs) through on-site AI while keeping cloud server and communication costs low.

To evaluate the AI chip, an evaluation board with Arduino-compatible terminals is available; it can be fitted with an expansion sensor board for connection to an MCU (Arduino). Wireless communication modules (Wi-Fi and Bluetooth®) and 64 kbit of EEPROM memory are mounted on the board. By connecting sensors and attaching the board to the equipment of choice, the effect of the AI chip can be verified on a display. The evaluation board can be borrowed through ROHM sales; contact ROHM for details. An illustrative host-side sketch appears after the terminology section below.

AI Chip Demo Video

A demonstration video showing this AI chip in an evaluation device is now available.

Terminology

Edge Computer / Endpoint: Computers and servers that form the core of big data processing in the cloud are referred to as cloud servers or cloud computers. Edge computers are the computers and devices located on the terminal side, and endpoints are the devices at the outermost edge of the network.

AI Accelerator: A device that speeds up the processing of AI functions using dedicated hardware circuits rather than software running on a CPU.

Digital Twin: Technology that transfers and reproduces data from the physical world in a virtual (digital) space, like a twin.

Three-Layer Neural Network: A neural network (a mathematical model inspired by the human brain) with a simple three-layer structure consisting of input, intermediate, and output layers. Deep learning adds many more intermediate layers for more advanced AI processing.

Arduino: A worldwide-popular open-source platform developed by Arduino, consisting of a program development environment and a board with input/output ports and an MCU.
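As a feel for how the evaluation board described above might be driven from an Arduino host, here is a hypothetical sketch using only the standard Arduino and Wire APIs. The I2C address (0x48), register layout, and 16-bit score format are invented placeholders for illustration; the real board's interface is documented by ROHM and may differ entirely.

```cpp
// Hypothetical Arduino host sketch for an evaluation setup like the one
// described above. The I2C address and register map are placeholders;
// consult ROHM's evaluation-board documentation for the actual interface.
#include <Arduino.h>
#include <Wire.h>

const uint8_t AI_CHIP_ADDR = 0x48; // placeholder I2C address (assumption)
const int     SENSOR_PIN   = A0;   // e.g., an analog vibration sensor

void setup() {
    Serial.begin(115200);
    Wire.begin();
}

void loop() {
    // Read one sensor sample and forward it to the AI chip.
    int sample = analogRead(SENSOR_PIN);
    Wire.beginTransmission(AI_CHIP_ADDR);
    Wire.write(0x00);               // placeholder "data in" register
    Wire.write(highByte(sample));
    Wire.write(lowByte(sample));
    Wire.endTransmission();

    // Request the current anomaly score computed on-device.
    Wire.requestFrom(AI_CHIP_ADDR, (uint8_t)2);
    if (Wire.available() >= 2) {
        uint16_t score = ((uint16_t)Wire.read() << 8) | Wire.read();
        Serial.print("anomaly score: ");
        Serial.println(score);
    }
    delay(100); // 10 Hz sampling, chosen arbitrarily for the demo
}
```

The point of the sketch is the division of labor: the Arduino only shuttles raw samples and reads back a score, while all learning and inference happen on the AI chip itself, which is what keeps cloud and communication costs out of the loop.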

Michal Pukala
Electronics and telecommunications engineer with a Master's degree in electro-energetics. Experienced lighting designer. Currently working in the IT industry.
