Step into the World of TinyML!

Table of Contents

  1. Introduction
  2. STM32 Cube AI Suite: An Overview
  3. Getting Started with X Cube AI Tool
  4. Preparing the Model
    • Converting the Model
    • Optimizing the Model for Size
  5. Importing the Model into X Cube AI Tool
  6. Setting up the Development Environment
    • Installing the X Cube AI Package
    • Configuring the Project in STM32 Cube IDE
  7. Configuring the Neural Network
    • Analyzing the Model
    • Adjusting for Resource Limitations
  8. Running Inference on the Microcontroller
  9. Testing and Validation
  10. Performance Comparison with TensorFlow Lite
  11. Pros and Cons of X Cube AI Tool
  12. Conclusion

Introduction

In the world of embedded systems, microcontrollers play a crucial role in powering various devices and applications. With the advancements in machine learning, there is a growing need to bring the power of artificial intelligence to these resource-constrained microcontrollers. This is where STMicroelectronics' STM32 Cube AI suite of tools comes into play. In this article, we will explore how to get started with the X Cube AI tool and use it to deploy a pre-trained neural network model on an STM32 microcontroller.

STM32 Cube AI Suite: An Overview

STMicroelectronics released the STM32 Cube AI suite of tools in 2019. The suite is designed to help developers create tiny machine learning applications on STM32 microcontrollers. The X Cube AI tool, part of the suite, lets users convert and deploy neural network models trained in popular frameworks such as TensorFlow, or exported to the ONNX format, on STM32 microcontrollers. The tool also provides functions for running inference, the process of using a trained model to make predictions on new, unseen data.

Getting Started with X Cube AI Tool

To get started with the X Cube AI tool, we first need to prepare our pre-trained neural network model. This involves converting the model into a format compatible with the tool and optimizing it for size. Once the model is ready, we can import it into the X Cube AI tool and begin running inference on our microcontroller.

Preparing the Model

Before we can use the X Cube AI tool, we need to convert our pre-trained neural network model into a format the tool understands. Models are typically trained in a framework such as TensorFlow or exported to the ONNX format. For TensorFlow models, we can use the TensorFlow Lite converter to produce a TensorFlow Lite (TF Lite) file, which the X Cube AI tool can import.

Optimizing the model for size is also important, since microcontrollers have limited flash and RAM. Although the X Cube AI tool applies size optimizations automatically, it is still recommended to keep the model as small as possible so it fits on the chosen microcontroller.
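The two steps above can be sketched with the TensorFlow Lite converter. This is a minimal example, not the full workflow: the tiny two-layer model here is a stand-in for your own pre-trained network, and `model.tflite` is an assumed output filename.

```python
import tensorflow as tf

# Stand-in for a real pre-trained network: a tiny two-layer model.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(2),
])

# Convert the model to a TF Lite flatbuffer that X Cube AI can import.
converter = tf.lite.TFLiteConverter.from_keras_model(model)

# Optional but useful on microcontrollers: post-training quantization
# shrinks the stored weights at a small cost in accuracy.
converter.optimizations = [tf.lite.Optimize.DEFAULT]

tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting `.tflite` file is what gets imported into the X Cube AI tool in the next step.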

Importing the Model into X Cube AI Tool

Once we have the converted TF Lite model file, we can import it into the X Cube AI tool. The tool provides a user-friendly interface for importing the model, analyzing its complexity, and configuring it for the target microcontroller. We can also visualize the model's architecture and verify its input and output formats using Netron or similar tools.
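Besides Netron, the input and output formats can be checked programmatically with the TF Lite interpreter; these are the same tensor shapes and types the X Cube AI tool will report after import. A small self-contained sketch (the model here is again a toy stand-in):

```python
import tensorflow as tf

# Toy stand-in model, converted in-memory for the example.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(2),
])
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# Query the converted model's input and output tensor details.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
print("input :", inp["shape"], inp["dtype"])
print("output:", out["shape"], out["dtype"])
```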

Setting up the Development Environment

To use the X Cube AI tool, we need to have the appropriate development environment set up. This includes installing the X Cube AI package and configuring the project in STM32 Cube IDE.

The X Cube AI package can be installed from the STMicroelectronics tab in the Embedded Software Packages manager. Once installed, we can create a new STM32 project and select the microcontroller board or processor we will be using. This ensures that only the microcontrollers supporting the X Cube AI library are shown in the target selection window.

Configuring the Neural Network

In the X Cube AI tool, we can analyze the model's complexity, reported as multiply-accumulate (MACC) operations, and check its flash and RAM requirements. If the model exceeds the microcontroller's on-chip resources, we can choose to use external flash and RAM chips. The tool also provides options to visualize the model's graph and validate it on the desktop before deploying it to the microcontroller.

Note that we can load multiple models at a time if the microcontroller has enough flash and RAM; however, attention should be paid to these resource limits to ensure efficient operation.

Running Inference on the Microcontroller

With the model imported and the development environment set up, we are ready to run inference on the microcontroller using the X Cube AI tool. We initialize the model, load the input data, perform inference, and read the output predictions. The generated code provides functions for initializing the model, running inference, and accessing the input and output tensors.

By using the X Cube AI tool, we can bring the power of machine learning to resource-constrained microcontrollers, enabling them to make intelligent predictions based on new data.
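The initialize / load input / infer / read output sequence can be tried on the desktop with the TF Lite interpreter before moving to hardware; the C code generated by X Cube AI follows the same pattern (the generated function and macro names depend on the name you give the network, so they are not shown here). A minimal sketch with a toy model:

```python
import numpy as np
import tensorflow as tf

# Toy stand-in model, converted in-memory for the example.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(2),
])
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# Initialize the model (mirrors the create/init step on the target).
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Load input data, run inference, and read the output tensor.
x = np.random.rand(1, 4).astype(np.float32)
interpreter.set_tensor(inp["index"], x)
interpreter.invoke()
y = interpreter.get_tensor(out["index"])
print("prediction:", y)
```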

Testing and Validation

To ensure the accuracy and reliability of our deployed model, it is crucial to test and validate it. The X Cube AI tool provides options for validating the model on the desktop or running validation programs on the microcontroller itself. These tests help verify the correct functioning of the model and its compatibility with the microcontroller.
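Desktop validation essentially means feeding the same inputs to the original model and the converted one and comparing the outputs. A sketch of that check, again with a toy stand-in model (for an unquantized float conversion the difference should be within floating-point tolerance):

```python
import numpy as np
import tensorflow as tf

# Toy stand-in model, converted in-memory for the example.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(2),
])
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Run the same input through both versions of the model.
x = np.random.rand(1, 4).astype(np.float32)
reference = model(x).numpy()            # original framework output
interpreter.set_tensor(inp["index"], x)
interpreter.invoke()
converted = interpreter.get_tensor(out["index"])

# The converted model should closely agree with the original.
max_err = np.max(np.abs(reference - converted))
print("max absolute error:", max_err)
```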

Performance Comparison with TensorFlow Lite

In a previous video, we compared the performance of the same neural network model running on the same microcontroller under TensorFlow Lite. The X Cube AI tool offered several advantages over TensorFlow Lite, including reduced flash usage and faster inference time.

In terms of flash usage, X Cube AI saved over 40% compared to TensorFlow Lite, occupying only about 28,000 bytes. RAM usage increased slightly, to about 4,900 bytes. Inference time with X Cube AI was around 77 microseconds, versus about 104 microseconds for TensorFlow Lite. These results highlight the efficiency and speed improvements offered by the X Cube AI tool.

Pros and Cons of X Cube AI Tool

Pros:

  • Dedicated tool specifically designed for STM32 microcontrollers
  • Optimized for size, reducing flash usage compared to TensorFlow Lite
  • Faster inference time compared to TensorFlow Lite
  • User-friendly interface with visualization and analysis capabilities

Cons:

  • Proprietary tool limited to STM32 microcontrollers
  • Potential risk of support limitations in the future

Conclusion

The X Cube AI tool provided by STMicroelectronics' STM32 Cube AI suite offers an effective solution for deploying machine learning models on microcontrollers. By following the steps outlined in this article, developers can leverage the power of artificial intelligence on resource-constrained microcontrollers. With its optimized size and faster inference time, the X Cube AI tool provides a viable option for developers seeking to integrate machine learning capabilities into their embedded systems.
