This page is an archive of the original crowdfunding campaign for this project. It may not be up-to-date with the latest updates and product availability.
"The compact module is powered by Intel's Myriad X neural processing unit (NPU), offering 4 TOPS of compute performance alongside 12 megapixel hardware JPEG encoding and hardware H.264 and H.265 encoding"
"... instead of just handling QVGA at around 15 to 18 fps, megaAI can support inference at 4K resolution up to 30 fps."
"megaAI is the best bet for engineers and researchers who want a module ready for direct OEM integration and for expanding the AI processing capability of their projects."
"If you are looking for a truly tiny board for artificial intelligence (AI) and computer vision projects, megaAI from Luxonis is worth your attention."
"Like the DepthAI USB module, the megaAI can work with any Linux, Mac, or Windows computer, but is primarily marketed as an add-on to the Raspberry Pi."
"In both its processing and its purely optical capabilities, this compact computer vision module comes preprogrammed to recognize up to twenty categories of objects."
megaAI with a US quarter for scale. Actual size: 43 mm by 30 mm
The megaAI is a turn-key computer vision and artificial intelligence solution that combines four TOPS (Trillion Operations Per Second) of AI processing power with a beautiful 4K, 60 FPS camera for human/object tracking in a tiny, low-power package. It’s perfect for hobbyists and researchers and is ready for direct integration by OEMs. It’s also compatible with our DepthAI ecosystem, and is therefore insanely easy to use.
The megaAI takes previously difficult computer vision tasks like real-time object detection and tracking and makes them as simple as plugging in a USB cable and running a Python script. Just clone the DepthAI git repository and run `python3 depthai.py` to see a live demonstration of MobileNetSSD running on your host system. You can even record live 4K, 30 FPS video of everything the camera sees.
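The whole flow fits in a few terminal commands. This is a sketch assuming the campaign-era layout of the public Luxonis repository; the dependency-file and script names may differ in newer checkouts:

```shell
# Clone the DepthAI repository and launch the default MobileNetSSD demo.
git clone https://github.com/luxonis/depthai.git
cd depthai

# Install the host-side Python dependencies (file name may vary by release).
python3 -m pip install -r requirements.txt

# Run the demo: opens a live preview with labeled detections overlaid.
python3 depthai.py
```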
Object localization is the capability to know what an object is and where it is in the physical world. The megaAI is able to accomplish this at 30 frames per second on a Raspberry Pi, without adding any load on the Pi.
The easiest-to-run net on megaAI is what we use as our test case: Object Detection on 20 classes (PASCAL VOC 2012):
Consequently, when you connect megaAI to a host and point the camera at any of these objects, you’ll see a labeled bounding box drawn around each one.
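MobileNetSSD-style detectors report box corners normalized to the range 0 to 1, so the host script only has to scale them to the frame size before drawing. A minimal sketch of that scaling step (the dictionary field names here are illustrative, not the exact DepthAI output format):

```python
def to_pixels(det, frame_w, frame_h):
    """Scale a normalized (0..1) detection box to pixel coordinates."""
    x1 = int(det["x_min"] * frame_w)
    y1 = int(det["y_min"] * frame_h)
    x2 = int(det["x_max"] * frame_w)
    y2 = int(det["y_max"] * frame_h)
    return x1, y1, x2, y2

# Example: a "dog" detected in the center of a 4K UHD (3840x2160) frame.
det = {"label": "dog", "confidence": 0.92,
       "x_min": 0.25, "y_min": 0.25, "x_max": 0.75, "y_max": 0.75}
print(to_pixels(det, 3840, 2160))  # (960, 540, 2880, 1620)
```

The pixel box can then be passed straight to a drawing call such as OpenCV's `cv2.rectangle`, which is how demo scripts typically overlay the label.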
Need other neural network models? There are many pre-trained models that will work right away, and many are available directly from Intel for free. Swap the new model into the included Python script and boom! You’re ready to go.
Some examples of cool megaAI-compatible models:
If you want something custom, you can train your own models based on available/public datasets and then use OpenVINO to deploy them to DepthAI. See our documentation section on creating custom tracked objects.
Not only is megaAI a fast and power-efficient way to implement stock or custom neural networks, it’s also a real-time video encoder that compresses 4K H.265 video at 30 frames per second.
Since megaAI does so much processing on its own (including optionally compressing video), the load on both USB and the CPU is lowered significantly, allowing Luxonis to achieve dramatically higher frame rates, bringing low-power hardware acceleration to any project:
Real time object detection with OpenVINO and Movidius
| | Pi 3B+, CPU Only | Pi 3B+, NCS2, OpenVINO | Pi 3B+, DepthAI, OpenVINO |
|---|---|---|---|
| MobileNetSSD (display on) | 5.88 FPS | 8.31 FPS | 25.5 FPS |
| MobileNetSSD (display off) | 6.08 FPS | 8.37 FPS | 25.5 FPS |
(Raspberry Pi/NCS2 data courtesy of the awesome folks over at PyImageSearch)
DepthAI, megaAI’s software, enables the use of the full power of the Myriad X. This was the sole mission of the DepthAI project originally, and we achieved it through custom implementation at all layers of the stack (hardware, firmware, and software). After a lot of iteration and collaboration, what resulted was an efficient and easy-to-use system that takes full advantage of the four Trillion Operations Per Second (TOPS) vision processing capability of the Myriad X.
| Hardware Block | Other Myriad X Solutions | Luxonis DepthAI |
|---|---|---|
| Neural Compute Engine | Yes | Yes |
| MIPI ISP Pipeline | Inaccessible | Yes |
| H.264 and H.265 Encoding | Inaccessible | Yes |
| Metric | megaAI w/ Raspberry Pi* | Intel NUC AI Kit |
|---|---|---|
| Picture Resolution | 12 MP (4056x3040) | 1 MP (1280x720) |
| Easy Setup & Development | Yes | No |
| Efficient Data Path | Yes | No |
| CPU Free for User Code | Yes | No |
| Hardware H.265 Support | Yes | No |
| Hardware JPEG Support | Yes | No |
| Hardware Feature Tracking Support | Yes | No |
| Power | 6 W (max, including Pi) | 50 W (max) |
| Price | $169 USD (during campaign) | $879.95 USD |
* A Raspberry Pi is not included with any megaAI product or pledge offered during this campaign.
megaAI is an open source project, so if you want to build something off of it, do it! We’ve open sourced the hardware and software to let you do exactly that:
Complete documentation on all the software
And even our documentation is open-source, so if you find an error you can submit a PR with the fix!
It’s never been easier to get up and running with machine learning, computer vision, and artificial intelligence.
With megaAI it’s just a handful of steps before you’re up and running. The following video shows everything required with a Raspberry Pi.
For other systems (macOS, Windows, and Linux variants), it’s just as easy. How about training megaAI to detect custom objects?
That’s easy too. We provide free online training through Google Colab notebooks:
The tutorials below are based on MobileNetv2-SSD, an object detector that runs natively on DepthAI. Many other object detectors could be trained on Colab and run on DepthAI, so if you have a request for a different object detector or network backend, please feel free to submit a GitHub issue! We’re constantly adding more (in fact, as of this writing, we just got some new YOLO variants running).
Easy Object Detector Training: Open in Colab
The tutorial notebook Easy_Object_Detection_With_Custom_Data_Demo_Training.ipynb shows how to quickly train an object detector based on the MobileNetv2-SSD network.
After training is complete, it also converts the model to a .blob file that runs on our DepthAI platform and modules. First the model is converted to a format usable by OpenVINO called Intermediate Representation, or IR. The IR model is then compiled to a .blob file using a server we set up for that purpose. (The IR model can also be converted locally to a blob.)
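The local route sketched as OpenVINO commands might look roughly like this. Model file names and flag values here are illustrative, and the exact tool locations vary between OpenVINO releases, so check the documentation for the version you have installed:

```shell
# 1. Convert the trained model to OpenVINO Intermediate Representation (IR).
#    FP16 is required for the Myriad X.
python3 mo.py --input_model frozen_inference_graph.pb \
              --data_type FP16 \
              --output_dir ir/

# 2. Compile the IR (.xml + .bin pair) to a Myriad X .blob for DepthAI.
myriad_compile -m ir/frozen_inference_graph.xml \
               -ip U8 \
               -o model.blob
```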
And that’s it: in less than a couple of hours you have an advanced proof-of-concept object detector that can run on megaAI to detect objects of your choice. But don’t take our word for it; keep reading for an example we built.
Example: COVID-19 Mask/No-Mask Training: Open In Colab
The Medical Mask Detection Demo Training.ipynb training notebook shows an example of a more complex object detector. The training data set consists of people wearing or not wearing masks for viral protection: almost 700 pictures with approximately 3,600 bounding box annotations. The images are complex, varying quite a lot in scale and composition. Nonetheless, the object detector does quite a good job with this relatively small dataset for such a task. Training again takes around two hours: depending on which GPU the Colab lottery assigns to the notebook instance, 10k steps take between 1.5 and 2.5 hours. Either way, that’s a short time for such a good-quality proof of concept on such a difficult task. We then performed the steps above to convert the model to a blob and ran it on our DepthAI module.
Manufacturing at scale is always a challenge, with problems appearing at high volumes that simply never surfaced in smaller production runs. We will do our best to respond nimbly to these problems as they arise and keep to the schedule, while delivering megaAI at the standard of quality we at Luxonis would expect as customers.
We have already produced a first run of the megaAI boards, so our supply chain and manufacturing process have been tested and verified successfully. Of course, delays are possible with any manufacturing process, and we will communicate any issues to backers through regular updates.
The megaAI is a ready-for-manufacture product. We delivered all four original DepthAI boards with additional features above and beyond those promised during that campaign. Not only that, but unlike some other crowdfunding campaigns, we shipped on time!
The megaAI boards will be fulfilled in the following batches:
NOTE: Roadrunner tier pledges will be shipped immediately at the conclusion of the campaign by Luxonis (based in USA). The rest of the campaign orders will be shipped using Crowd Supply’s fulfillment service. Domestic (within the USA) orders ship for free, while international shipments incur a shipping surcharge that is applied at checkout.
COMING SOON: A PoE version of megaAI!