"[K210] ships with pre-trained, mostly TensorFlow based, models for object, face, age/gender, voice, and abnormal vibration detection."
"El tipo de aplicaciones para las que se diseñó son las que operan en el borde de la red (edge), como las de visión por computadora, facilitando su desarrollo incluso en cuestión de minutos (según afirman desde XaLogic) en vez de en meses."
"It will also be possible to train your own model using a more powerful host machine, ideally with an NVIDIA GPU, with TensorFlow. "
"You can run neural networks on your Raspberry Pi, but they’re demanding and can suck up a lot of your processor time – especially if you’re using a Raspberry Pi Zero. This HAT lets you offload some of the processing..."
K210 AI Accelerator is a compact Raspberry Pi HAT that uses the Kendryte K210 AI processor to provide 0.5 TOPS (tera operations per second) of processing power. Using one of our many free pre-trained models, you can add deep-learning-based machine vision features to your RPi-based camera in a matter of minutes, skipping the tedious process of training your own neural networks.
This handy HAT lets you add AI features to your RPi based camera even if you don’t know how to train your model. Our plugin module, together with pre-trained models, will make your camera AI-enabled in minutes with a few Python API calls.
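The call pattern might look like the sketch below. The module, class, and method names are illustrative stand-ins, not the actual XaLogic API, and the class is a stub that simulates the driver so the example is self-contained:

```python
# Illustrative sketch only: K210Detector is a hypothetical stand-in for
# the real Python driver; its names are not taken from the actual API.
class K210Detector:
    def __init__(self, model_path):
        # The real driver would load a pre-trained .kmodel onto the K210
        # over SPI; this stub just remembers the path.
        self.model_path = model_path

    def detect(self, frame_bytes):
        # The real driver would stream the frame to the accelerator and
        # return bounding boxes; this stub returns an empty list.
        return []

detector = K210Detector("person_detect.kmodel")  # placeholder model name
frame = bytes(224 * 224 * 3)                     # placeholder RGB frame
boxes = detector.detect(frame)
print(f"{len(boxes)} objects detected")
```

The point of the design is that these few calls are the entire application-side workload; the model itself runs on the K210, not the Pi's CPU.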
Pre-trained models are currently available for:
Potential future pre-trained models:
We try to make your life easier by providing free models, but that should not stop you from developing your own.
To train your own model, you will need a separate computer, preferably with an NVIDIA GPU. We predominantly use TensorFlow and will provide an example of how to train your own object detection model. The Kendryte KModel conversion tool supports TFLite and Caffe, with limited support for the ONNX format.
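As a rough sketch, the host-side conversion flow with nncase (the open-source Kendryte model compiler) looks like the following; the file names are placeholders and the exact flags may differ between nncase versions:

```shell
# 1. Train in TensorFlow on the host (ideally with an NVIDIA GPU), then
#    export the trained model to a .tflite file.
# 2. Compile the .tflite file into a K210 .kmodel with nncase's ncc tool.
#    File names are placeholders; check your nncase version for flags.
ncc compile detector.tflite detector.kmodel -i tflite -t k210
```

The resulting `.kmodel` is what gets loaded onto the K210 at runtime.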
Using the familiar Visual Studio Code on the Raspberry Pi, together with the necessary K210 toolchain, you can develop all the K210 firmware on the Pi itself.
The K210 AI Accelerator has an Infineon Trust-M security chip onboard. This lets you establish a secure MQTT connection to AWS without exposing the device's private key, which is important when deploying IoT devices in the field.
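For context, a software-only TLS client has to load its private key from a file on disk, which is exactly what the Trust-M avoids. The standard-library sketch below shows the conventional mutual-TLS setup whose key-loading step the secure element replaces; the file names are placeholders:

```python
import ssl

# Conventional mutual-TLS client setup, e.g. for an AWS IoT MQTT endpoint.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# With a software-only client, the device private key sits in a file:
#   ctx.load_cert_chain("device-cert.pem", "device-private.key")
#
# With the on-board Trust-M, the handshake's private-key operation is
# delegated to the secure element instead, so there is no
# "device-private.key" file on the filesystem to steal.

print(ctx.minimum_version == ssl.TLSVersion.TLSv1_2)
```

Removing the key file from the attack surface is what makes this setup suitable for devices deployed in the field.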
The schematics, the C code running on the K210, and all code running on the Raspberry Pi will be open sourced. Pre-trained models are provided in binary form. Sample Caffe and TensorFlow projects are also available to help you create your own custom neural network.
| | K210 AI Accelerator | Coral USB Accelerator | Intel Neural Compute Stick 2 |
|---|---|---|---|
| Chipset | Kendryte K210 | Google Edge TPU | Intel® Movidius™ Myriad™ X |
| Form factor | Raspberry Pi Zero HAT | USB dongle | USB dongle |
| NPU performance | 0.5 TOPS¹ | 4 TOPS | 4/1 TOPS² |
| Power consumption | 0.3 W | 2 W | 1.5 W |
| Security IC | On-board Infineon Trust-M chipset | None | None |
| Open source (schematic and firmware) | Yes | No | No |
| Build application in minutes | Yes³ | Depends | Depends |
¹ SPI clock speed of 40 MHz.
² The Intel Movidius Myriad X VPU delivers 4 TOPS of overall performance, with a Neural Compute Engine capable of 1 TOPS.
³ We provide a library that helps you build applications in minutes with little to no knowledge of running ML models. The other solutions involve more of a learning curve.
K210 AI Accelerator will be manufactured in China by our partner who has manufactured our past products, including the first generation of this product, the XAPIZ3500. We have done a trial run of a small batch to iron out any manufacturing issues.
Once the boards are manufactured, they will be tested in the factory and shipped to Crowd Supply. Crowd Supply will then fulfill your pledges via their logistics partner, Mouser Electronics. For more information, you can refer to this useful guide to ordering, paying, and shipping. You can confirm and update your order details and more in your Crowd Supply account.
Hardware quality and production risks have been mitigated as much as possible for K210 AI Accelerator. The design itself is stable, the first batch has been tested, and several early beta testers have successfully run applications on it. While supply issues due to COVID-19 are manageable, we remain mindful when planning component purchases because of the long lead times for some components. Should anything occur that would shift shipping timelines, backers will be informed via project updates.
Funding ends on Mar 11, 2021 at 03:59 PM PST (11:59 PM UTC)