MARK 5010 Firmware Release and Model Testing Standards

The MARK Kickstarter campaign is more than 500% funded, and we at TinkerGen are contemplating what interesting add-ons and functions we could offer as the next stretch goal. While we contemplate, we continue working non-stop on improving the quality of the existing functions.

Today we are releasing a major firmware update for MARK, available for download through Codecraft. The major changes in version 5010 are:

  • custom model support for loading from both SD card and flash memory (version 5009 only supported loading from flash)
  • all pre-trained models have been updated, increasing their accuracy
  • added a pre-loaded start-up program

In this blog post we would like to briefly explain how we test these pre-trained models and thus what sort of performance users can expect from them.

Here is our testing environment: the flashcards are 35 cm away from the camera, the illumination is ~600 as measured by a Grove Light Sensor, and the background behind the flashcards is white cardboard.

The results are only mildly affected by a cluttered background, and we could have done the testing without the white cardboard, but the results would then be harder to reproduce.

Image recognition models (also called image classification models) try to classify the WHOLE image into one of the categories. This means that for them to work properly, the object to be recognized needs to occupy most of the image. Here is the compilation video for testing the domestic animals and zoo animals models.
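The whole-image idea can be sketched in plain Python: the classifier head emits one raw score per class for the entire frame, and the prediction is simply the highest-scoring class. The class list and confidence threshold below are illustrative, not the actual layout of the shipped models.

```python
import numpy as np

# Hypothetical class list for a domestic-animals model (the actual
# class order inside the shipped model may differ).
CLASSES = ["cat", "dog", "rabbit", "hamster", "bird"]

def softmax(logits):
    """Convert raw per-class scores into probabilities."""
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

def classify(logits, threshold=0.5):
    """One prediction for the WHOLE image: the top class,
    or None if the model is not confident enough."""
    probs = softmax(np.asarray(logits, dtype=float))
    best = int(np.argmax(probs))
    if probs[best] < threshold:
        return None, float(probs[best])
    return CLASSES[best], float(probs[best])

# Example: raw scores as a classifier head might emit them.
label, conf = classify([0.2, 3.1, -0.5, 0.1, 0.4])
print(label, round(conf, 2))
```

Note there is no notion of *where* the animal is: the scores describe the frame as a whole, which is exactly why the object should fill most of it.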

A side note about using real animals or other pictures: the network was trained using transfer learning on about 1,500 images, 200+ for every class. Thus it will be able to classify real animals or other pictures of animals, not only the sample cards, as long as they are representative enough of their class. For example, the model won't have difficulty recognizing a German Shepherd as a dog, but it will struggle to properly classify this image.
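As a minimal illustration of what transfer learning means here: the pre-trained backbone is kept frozen and only a small classification head is trained on the new images. The sketch below mimics that with a fixed random projection standing in for the backbone and a logistic-regression head trained on toy data; everything in it is a stand-in, not MARK's actual training pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen backbone: a fixed random projection from raw
# inputs to feature vectors (in real transfer learning this would be a
# pre-trained convolutional network whose weights are never updated).
W_backbone = rng.normal(size=(4, 8))

def backbone(x):
    return np.tanh(x @ W_backbone)  # frozen: not trained below

# Toy two-class dataset: two well-separated Gaussian blobs.
n = 100
X = np.vstack([rng.normal(-1.0, 0.5, size=(n, 4)),
               rng.normal(+1.0, 0.5, size=(n, 4))])
y = np.array([0] * n + [1] * n)

feats = backbone(X)

# Trainable head: a single logistic-regression layer on top of the
# frozen features -- the only weights updated during training.
w = np.zeros(8)
b = 0.0
lr = 0.5
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # sigmoid
    w -= lr * (feats.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

pred = (1.0 / (1.0 + np.exp(-(feats @ w + b))) > 0.5).astype(int)
accuracy = float(np.mean(pred == y))
print(f"training accuracy: {accuracy:.2f}")
```

Because only the small head is trained, a few hundred images per class can be enough, which is why 200+ images per class suffices here.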

Next up is object detection model testing. Object detection models divide the image into a grid and search for objects in the grid cells, which is why these models can also output the coordinates of the objects. They are more complicated to train, since the model needs to learn not only how to classify things but also how to properly adjust the bounding boxes around the objects. Here is the compilation video for testing the traffic sign detection model and the number detection model.
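The grid idea can be sketched as follows: each cell predicts an object centre relative to itself plus a box size relative to the whole image, and decoding turns that into pixel coordinates. The grid size and output layout here are illustrative, in the style of YOLO-family detectors, not the exact format of MARK's models.

```python
# A minimal sketch of grid-based detection decoding.

GRID = 7                 # the image is split into GRID x GRID cells
IMG_W, IMG_H = 224, 224  # assumed input resolution (illustrative)

def decode_box(row, col, tx, ty, w, h):
    """Turn one cell's prediction into image-space pixel coordinates.

    (tx, ty) is the object's centre relative to its grid cell (0..1),
    (w, h) is the box size as a fraction of the whole image.
    """
    cell_w = IMG_W / GRID
    cell_h = IMG_H / GRID
    cx = (col + tx) * cell_w  # absolute centre x in pixels
    cy = (row + ty) * cell_h  # absolute centre y in pixels
    bw = w * IMG_W
    bh = h * IMG_H
    # convert centre/size to corner coordinates (x1, y1, x2, y2)
    return (cx - bw / 2, cy - bh / 2, cx + bw / 2, cy + bh / 2)

# Example: an object centred in the middle cell, half the image in size.
box = decode_box(row=3, col=3, tx=0.5, ty=0.5, w=0.5, h=0.5)
print(box)  # → (56.0, 56.0, 168.0, 168.0)
```

Since every cell both classifies and localizes, the training signal must teach the model two things at once, which is why these models are harder to train than plain classifiers.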

As mentioned above, these models will be able to detect real-life traffic signs or even handwritten digits – but since we cannot reliably guarantee the models' performance on handwritten digits, we only test with printed numbers.

The last pre-trained model, the common object recognition model, is a bit special when it comes to testing. It includes the following classes:

  • human
  • chair
  • book
  • cup
  • pen
  • computer
  • smartphone
  • backpack
  • pizza
  • bomb

Many people are surprised when they see “bomb” on the list of recognizable objects. The reason for including it is that we have a patrol robot task in the Autonomous Driving course, and “bomb” is the only class in the list above that has a flashcard for the recognition task. The rest of the objects are easy to find in the environment and come in different sizes, so it is difficult to design a reproducible test for them. In general, as long as an object is representative of its class, it will be recognized by this model.

Stay tuned for more articles from us and for updates on the MARK Kickstarter campaign.

For more information on the Grove Zero series, Codecraft and other hardware for makers and STEM educators, visit our website.

May 2020