SKU: 114991790
$74.99

A USB accessory that brings machine learning inferencing to existing systems. Works with Raspberry Pi and other Linux systems.

Attention

This product has shipping restrictions and can only be sent to the countries and regions listed below: Austria, Belgium, Bulgaria, Croatia, Cyprus, Denmark, Estonia, Finland, Germany, Greece, Hong Kong, Hungary, Iceland, Ireland, Italy, Japan, Korea, Latvia, Liechtenstein, Lithuania, Luxembourg, Malta, Netherlands, Norway, Poland, Portugal, Romania, Slovakia, Slovenia, Spain, Sweden, Switzerland, Turkey, United Kingdom, US

The Coral USB Accelerator brings powerful ML inferencing capabilities to existing Linux systems.

Featuring the Edge TPU, a small ASIC designed and built by Google, the USB Accelerator provides high-performance ML inferencing at low power over a USB 3.0 interface. For example, it can execute state-of-the-art mobile vision models such as MobileNet v2 at 100+ fps. This lets you add fast ML inferencing to your embedded AI devices in a power-efficient and privacy-preserving way.

 Models are developed in TensorFlow Lite and then compiled to run on the USB Accelerator.
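
As a rough sketch of that workflow, assuming the tflite_runtime package, the Edge TPU runtime (libedgetpu), and a model already compiled by the Edge TPU Compiler (the file name model_edgetpu.tflite below is a placeholder), running an inference from Python might look like this:

    # Sketch: load a model compiled for the Edge TPU and run it on the
    # USB Accelerator via the Edge TPU delegate.
    import numpy as np
    import tflite_runtime.interpreter as tflite

    interpreter = tflite.Interpreter(
        model_path='model_edgetpu.tflite',  # placeholder: compiled model file
        experimental_delegates=[tflite.load_delegate('libedgetpu.so.1')])
    interpreter.allocate_tensors()

    # Feed a dummy input matching the model's expected shape and dtype.
    input_details = interpreter.get_input_details()[0]
    dummy = np.zeros(input_details['shape'], dtype=input_details['dtype'])
    interpreter.set_tensor(input_details['index'], dummy)
    interpreter.invoke()

    output = interpreter.get_tensor(interpreter.get_output_details()[0]['index'])
    print(output.shape)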

 

Edge TPU key benefits

  • High-speed TensorFlow Lite inferencing
  • Low power
  • Small footprint

Coral is a division of Google that helps you build intelligent ideas with its platform for local AI.

 

FEATURES

  • Google Edge TPU ML accelerator coprocessor
  • USB 3.0 Type-C socket
  • Supports Debian Linux on host CPU
  • Models are built using TensorFlow
  • Fully supports MobileNet and Inception architectures, though custom architectures are possible
  • Compatible with Google Cloud

 

SPECS

Edge TPU ML accelerator

  • ASIC designed by Google that provides high performance ML inferencing for TensorFlow Lite models

Arm 32-bit Cortex-M0+ Microprocessor (MCU)

  • Up to 32 MHz
  • 16 KB Flash memory with ECC
  • 2 KB RAM

Connections

  • USB 3.1 (Gen 1) port and cable (SuperSpeed, 5 Gbps transfer speed)
  • Included cable is USB Type-C to Type-A

Local inferencing

Run on-device ML inferencing on the Edge TPU designed by Google.

 

Works with Debian Linux

Connect to any Linux-based system with the included USB Type-C cable.

 

Supports TensorFlow Lite

No need to build models from the ground up: TensorFlow Lite models can be compiled to run on the USB Accelerator.
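
As an illustrative sketch only, using the PyCoral library's higher-level helpers (the model and image file names below are placeholders), a compiled classification model could be run like this:

    # Sketch: classify an image with a compiled model using PyCoral helpers.
    from PIL import Image
    from pycoral.adapters import classify, common
    from pycoral.utils.edgetpu import make_interpreter

    interpreter = make_interpreter('mobilenet_v2_edgetpu.tflite')  # placeholder
    interpreter.allocate_tensors()

    # Resize the image to the model's input size and copy it into the input tensor.
    image = Image.open('parrot.jpg').resize(common.input_size(interpreter))
    common.set_input(interpreter, image)

    interpreter.invoke()
    for c in classify.get_classes(interpreter, top_k=3):
        print(c.id, c.score)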

An Open Software Ecosystem

We recognize that each project has different requirements and goals, so we have worked to give developers as much flexibility as possible. Our quad-core solution can run numerous software stacks, and from the choice of algorithms, NLP services, smart assistants, operating systems, and more, we want to support our users. If you have suggestions for services you would like to see officially supported, let us know on our forums, or, if you have done the work yourself, submit a merge request to our GitHub.

More Information

  • Stock Type: General
  • Shipment Date: Jun 24, 2019