Learn TinyML using Wio Terminal and Arduino IDE #7 Machine Learning on ARM Cortex M0+ MCU Seeeduino XIAO and XIAO RP2040

This is the last article of the TinyML course series – in the previous articles we discussed how to train and deploy machine learning models for audio scene classification, anomaly detection, speech recognition and other tasks to Wio Terminal, a compact production-ready development board from Seeed Studio. All of the articles were published on the Seeed blog, and for videos you can have a look at the TinyML course with Wio Terminal playlist on YouTube.

Wio Terminal, while convenient for experiments thanks to its multiple built-in sensors and a case, might be a bit too bulky for some applications – for example, wearables. In this last project we'll go even tinier and use Seeeduino XIAO boards – namely the original XIAO and the newer XIAO RP2040 – and will briefly mention the soon-to-be-released XIAO BLE.

For more details, watch the video version of the tutorial!

Specs-wise, the original XIAO at the time of launch was likely the smallest Cortex-M0+ development board available, and it packed quite a punch for its size with an ARM® Cortex®-M0+ 48 MHz microcontroller (SAMD21G18), 256 KB of Flash and 32 KB of SRAM.

Original XIAO – as you can see, it is only a bit larger than a thumbnail.

Later the Raspberry Pi RP2040 chip arrived, delivering even better specs with its dual-core Cortex-M0+ design. Both boards are quite capable of running the tiny neural network we will use for this project, but if you have more demanding applications it does make sense to choose the XIAO RP2040 over the original XIAO.

As a software engineer I, like many of you I'm sure, spend a lot of time in my chair in front of a glowing screen. And later in the day it becomes difficult to maintain proper posture.

If only there was a way to make a device that could learn your specific body positions for proper and improper poses and warn you when you slouch too much or go into the “Python pose”… Wait a moment, there is!

The best sensor to provide the data for the machine learning model here is obviously an accelerometer. However, the XIAO series boards, being very small, do not come equipped with one. While we could use the XIAO expansion board for development and testing, it eliminates the low-footprint advantage the XIAO boards have. If you are going to create your own product, the better option is to design a custom carrier PCB for the chip or SoM. I asked our hardware engineer to design a simple carrier board for the XIAO that includes a LIS3DH accelerometer, a buzzer and a battery connector with a power switch.

Then we used the Seeed Studio Fusion service to manufacture some PCBA samples – go to https://www.seeedstudio.com/fusion_pcb.html, upload the Gerber files, which contain the PCB design, and choose the proper parameters for the board, such as the number of layers, base material, minimum drill hole size and so on. A simple 2-layer board is estimated to cost 4.9 USD for 10 pieces, plus shipping.

If you’d like to repeat the experiment without a custom PCB, you can connect a Grove LIS3DH accelerometer module to the XIAO expansion board and start collecting the data. I collected 3 data samples for each posture, 60 seconds each, with the device attached to the back of my t-shirt.

For each sample, I maintained the same pose, but included some arm, head and torso movements to simulate normal activity.

I chose a 5-second time window with a window shift of 1 second and the Flatten processing block, since we are dealing with very slow-moving data. A very plain fully-connected network provided good accuracy. Here is the link to the public version of the Edge Impulse project.

Some improvement can be achieved by collecting more data and making sure proper and improper postures are recognized with some variation in device positioning on the clothes. Since the device is meant for individual use, it does not need to generalize to different people’s postures and can easily be re-trained. After training, you can check how well it detects your postures in the Live classification tab.

After you’re satisfied with the accuracy, download the resulting model as an Arduino library and copy it to your Arduino sketches/libraries folder. You can download sample code that collects a 5-second sample, performs the inference and turns on the buzzer if one of the improper poses is detected.

void loop()
{
    // Allocate a buffer here for the values we'll read from the IMU
    float buffer[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE] = { 0 };

    for (size_t ix = 0; ix < EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE; ix += 3) {
        // Determine the next tick (and then sleep later)
        uint64_t next_tick = micros() + (EI_CLASSIFIER_INTERVAL_MS * 1000);

        lis.getAcceleration(&buffer[ix], &buffer[ix + 1], &buffer[ix + 2]);
        buffer[ix + 0] *= CONVERT_G_TO_MS2;
        buffer[ix + 1] *= CONVERT_G_TO_MS2;
        buffer[ix + 2] *= CONVERT_G_TO_MS2;

        delayMicroseconds(next_tick - micros());
    }

    // Turn the raw buffer into a signal which we can then classify
    signal_t signal;
    int err = numpy::signal_from_buffer(buffer, EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, &signal);
    if (err != 0) {
        ei_printf("Failed to create signal from buffer (%d)\n", err);
        return;
    }

    // Run the classifier
    ei_impulse_result_t result = { 0 };

    err = run_classifier(&signal, &result, debug_nn);
    if (err != EI_IMPULSE_OK) {
        ei_printf("ERR: Failed to run classifier (%d)\n", err);
        return;
    }

    // Print the predictions
    ei_printf("Predictions ");
    ei_printf("(DSP: %d ms., Classification: %d ms., Anomaly: %d ms.)",
        result.timing.dsp, result.timing.classification, result.timing.anomaly);
    ei_printf(": \n");
    for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
        ei_printf("    %s: %.5f\n", result.classification[ix].label, result.classification[ix].value);
    }
    ei_printf("    anomaly score: %.3f\n", result.anomaly);

    // Sound the buzzer if one of the improper poses is detected
    if (result.classification[1].value > ALARM_THRESHOLD || result.classification[2].value > ALARM_THRESHOLD) {
        tone(BUZZER_PIN, 523, 250);
    }
}

Since the data changes relatively slowly and we do not need fast response times, a normal sequential inference pipeline suits this application well.

A step above would be to use the newest XIAO BLE and connect the device to user’s smartphone, which would allow for better alerts, statistics and so on.

We have released the XIAO BLE product in two versions: BLE and BLE Sense. Both versions are based on the Nordic Semiconductor nRF52840 SoC, but the Sense version also features an IMU sensor and a PDM microphone, making it a great choice for TinyML projects such as the one described in this article. They are introduced in detail below.

Seeed XIAO BLE nRF52840 – Supports Arduino / MicroPython – Bluetooth5.0 with Onboard Antenna

Here are features and specifications:

  • Powerful CPU: Nordic nRF52840, ARM® Cortex™-M4 32-bit processor with FPU operating at 64 MHz
  • Wireless capabilities: Bluetooth 5.0, NFC and Zigbee, with onboard antenna
  • Ultra-small size: 21 x 17.5mm, Seeed Xiao series classic form-factor for wearable devices
  • Ultra-low sleep power: 5 μA in deep sleep mode
  • Battery charging chip: BQ25101 chip supported lithium battery charge management
  • Rich interfaces: 1x Reset button, 1x UART, 1x IIC, 1x SPI, 1x NFC, 1x SWD, 11x GPIO, 6x ADC, 1x three-in-one LED, 1x user LED
  • Onboard 2 MB flash
  • Onboard PDM microphone and 6-axis IMU (only for XIAO BLE nRF52840 Sense)
  • Single-sided components, surface mounting design
  • Supports Arduino / MicroPython / CircuitPython

If you are interested in programming embedded machine learning, our Codecraft graphical programming environment can help you quickly start your own TinyML project. And we have set up a #tinyml channel on our Discord server – please click to join for 24/7 making, sharing, discussing, and helping each other out. We can do this all day.

I hope you enjoyed my TinyML course video series and learned a lot about machine learning on microcontrollers. Tinkering with machine learning on the smallest of devices has been a truly eye-opening experience for me, after running much larger ML models on SBCs and servers. If you would like to continue learning about this topic, have a look at the free courses by Coursera and Edge Impulse, and also at Codecraft, a graphical programming environment that allows creating TinyML applications and deploying them to the Wio Terminal.


November 2021