Luxonis

megaAI

Tiny-but-mighty 4K, 60 FPS camera solution for computer vision powered by Myriad X

$36,611 raised of $15,000 goal (244% funded)

In stock. Order now, ships within three business days.

$199



The megaAI is a turn-key computer vision and artificial intelligence solution that combines the Myriad X's four TOPS (trillion operations per second) of AI processing power with a beautiful 4K, 60 FPS camera for human/object tracking, all in a tiny, low-power package. It's perfect for hobbyists and researchers and is ready for direct integration by OEMs. It's also compatible with our DepthAI ecosystem, and is therefore insanely easy to use.

megaAI with a US quarter for scale. Actual size: 43 mm by 30 mm

Hardware Features

Unbelievably Simple Object Detection

The megaAI takes previously difficult computer vision tasks like real-time object detection and tracking and makes them as simple as plugging in a USB cable and running a Python script. Just clone the DepthAI git repository and run python depthai.py to see a live MobileNetSSD object detection demo on your host system. You can even record live 4K, 30 FPS video of everything the camera sees.
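From a clean checkout, the whole process looks roughly like this (the repository URL and the requirements.txt install step are assumptions about the current repo layout, so check the README if anything has moved):

    # Grab the DepthAI demo code and its Python dependencies
    git clone https://github.com/luxonis/depthai.git
    cd depthai
    python -m pip install -r requirements.txt

    # With megaAI plugged into a USB 3 port, run the live MobileNetSSD demo
    python depthai.py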

Object Tracking & Detection

Object localization is the capability to know what an object is and where it is in the physical world. The megaAI accomplishes this at 30 frames per second when paired with a Raspberry Pi, and because the inference runs entirely on the megaAI itself, it adds essentially no load to the Pi.

The easiest-to-run net on megaAI is the one we use as our test case: object detection on the 20 PASCAL VOC 2012 classes (person, bird, cat, cow, dog, horse, sheep, aeroplane, bicycle, boat, bus, car, motorbike, train, bottle, chair, dining table, potted plant, sofa, and TV/monitor).

Connect megaAI to a host, point the camera at any of these objects, and you'll see a labeled bounding box drawn around each one.
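Under the hood, the host just reads detection results off a queue and draws them. Below is a minimal sketch of that loop using the depthai Python API; note this reflects a newer generation of the API than the depthai.py demo script above, so node and method names are assumptions about current tooling, and the mobilenet-ssd.blob path is a placeholder for a compiled model you supply:

    import cv2
    import depthai as dai

    # PASCAL VOC 2012 labels used by the demo MobileNetSSD model
    LABELS = ["background", "aeroplane", "bicycle", "bird", "boat", "bottle",
              "bus", "car", "cat", "chair", "cow", "diningtable", "dog",
              "horse", "motorbike", "person", "pottedplant", "sheep", "sofa",
              "train", "tvmonitor"]

    pipeline = dai.Pipeline()

    # Color camera node: 300x300 preview matches MobileNetSSD's input size
    cam = pipeline.create(dai.node.ColorCamera)
    cam.setPreviewSize(300, 300)
    cam.setInterleaved(False)

    # Neural network node running on the Myriad X inside megaAI
    nn = pipeline.create(dai.node.MobileNetDetectionNetwork)
    nn.setBlobPath("mobilenet-ssd.blob")  # placeholder path to a compiled model
    nn.setConfidenceThreshold(0.5)
    cam.preview.link(nn.input)

    # Send both the preview frames and the detections back to the host over USB
    xout_rgb = pipeline.create(dai.node.XLinkOut)
    xout_rgb.setStreamName("rgb")
    cam.preview.link(xout_rgb.input)

    xout_nn = pipeline.create(dai.node.XLinkOut)
    xout_nn.setStreamName("nn")
    nn.out.link(xout_nn.input)

    with dai.Device(pipeline) as device:
        q_rgb = device.getOutputQueue("rgb", maxSize=4, blocking=False)
        q_nn = device.getOutputQueue("nn", maxSize=4, blocking=False)
        while True:
            frame = q_rgb.get().getCvFrame()
            for det in q_nn.get().detections:
                # Detection coordinates are normalized 0..1; scale to pixels
                h, w = frame.shape[:2]
                x1, y1 = int(det.xmin * w), int(det.ymin * h)
                x2, y2 = int(det.xmax * w), int(det.ymax * h)
                cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
                cv2.putText(frame, LABELS[det.label], (x1, y1 - 5),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
            cv2.imshow("megaAI", frame)
            if cv2.waitKey(1) == ord("q"):
                break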

Pre-trained Detection & Tracking

Need other neural network models? There are many pre-trained models that will work right away; many are available directly from Intel for free. Swap the new model into the included Python script and boom! You're ready to go.

Some examples of cool megaAI-compatible models from Intel's free Open Model Zoo: face detection, person detection, and vehicle detection, among many others.
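As a concrete sketch of swapping one of those in, the snippet below pulls a pre-trained Intel face detector from the Open Model Zoo using Luxonis's blobconverter package and points the pipeline at it. The package, function, and model name reflect current tooling and are assumptions rather than part of the original demo script:

    import blobconverter
    import depthai as dai

    # Download a free pre-trained Intel model, compiled to a Myriad X blob
    blob_path = blobconverter.from_zoo(name="face-detection-retail-0004", shaves=6)

    pipeline = dai.Pipeline()
    nn = pipeline.create(dai.node.MobileNetDetectionNetwork)
    nn.setBlobPath(blob_path)  # everything else in the pipeline stays the same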

Custom Detection & Tracking

If you want something custom, you can train your own models on available/public datasets and then use OpenVINO to deploy them to DepthAI. See our documentation section on creating custom tracked objects; a rough sketch of the deployment step is shown below.
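Deployment boils down to two conversions: the trained model is first converted to OpenVINO's Intermediate Representation (IR), and the IR is then compiled into a Myriad X .blob. Roughly, and with the exact flags depending on your OpenVINO version and source framework (treat these as assumptions, not a recipe):

    # 1. Convert the trained model to OpenVINO IR (FP16, since the Myriad X runs FP16)
    python mo.py --input_model frozen_inference_graph.pb --data_type FP16 --output_dir ir/

    # 2. Compile the IR into a .blob that megaAI/DepthAI can load
    myriad_compile -m ir/frozen_inference_graph.xml -o custom_model.blob \
        -VPU_NUMBER_OF_SHAVES 4 -VPU_NUMBER_OF_CMX_SLICES 4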

Real-time H.264 & H.265 Encoding

Not only is megaAI a fast and power-efficient way to run stock or custom neural networks, it's also a real-time video encoder that compresses 4K H.265 video at 30 frames per second.
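Here is a minimal sketch of that encoder path with the depthai Python API (again, the node and method names reflect the current API generation and are assumptions relative to the original demo script): the camera's 4K stream is compressed to H.265 on the Myriad X, and the host simply appends the bitstream to a file.

    import depthai as dai

    pipeline = dai.Pipeline()

    # 4K color camera feeding the on-device H.265 encoder
    cam = pipeline.create(dai.node.ColorCamera)
    cam.setResolution(dai.ColorCameraProperties.SensorResolution.THE_4_K)

    enc = pipeline.create(dai.node.VideoEncoder)
    enc.setDefaultProfilePreset(30, dai.VideoEncoderProperties.Profile.H265_MAIN)
    cam.video.link(enc.input)

    xout = pipeline.create(dai.node.XLinkOut)
    xout.setStreamName("h265")
    enc.bitstream.link(xout.input)

    with dai.Device(pipeline) as device, open("video.h265", "wb") as f:
        q = device.getOutputQueue("h265", maxSize=30, blocking=True)
        for _ in range(30 * 10):          # capture roughly ten seconds at 30 FPS
            q.get().getData().tofile(f)   # raw H.265 bitstream from the device

The resulting video.h265 is a raw bitstream; wrap it in a container (for example with ffmpeg) to play it in a normal video player.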

High Frame Rates, Low Power Consumption

Since megaAI does so much processing on its own (including optionally compressing video), the load on both USB and the host CPU drops significantly, allowing dramatically higher frame rates and bringing low-power hardware acceleration to any project:

Real-time object detection with OpenVINO and Movidius:

Model | Pi 3B+, CPU Only | Pi 3B+, NCS2, OpenVINO | Pi 3B+, DepthAI, OpenVINO
MobileNetSSD (display on) | 5.88 FPS | 8.31 FPS | 25.5 FPS
MobileNetSSD (display off) | 6.08 FPS | 8.37 FPS | 25.5 FPS

(Raspberry Pi/NCS2 data courtesy of the awesome folks over at PyImageSearch)

Some Ideas For How To Use megaAI

Movidius Myriad X, Unleashed

DepthAI, megaAI's software, enables the use of the full power of the Myriad X. This was the original, sole mission of the DepthAI project, and we achieved it through custom implementation at every layer of the stack (hardware, firmware, and software). After a lot of iteration and collaboration, what resulted was an efficient and easy-to-use system that takes full advantage of the four trillion operations per second (TOPS) of vision processing capability of the Myriad X.

Hardware Block | Other Myriad X Solutions | Luxonis DepthAI
Neural Compute Engine | Yes | Yes
SHAVE Cores | Yes | Yes
Motion Estimation | Inaccessible | Yes
Edge Detection | Inaccessible | Yes
Harris Filtering | Inaccessible | Yes
Warp/De-Warp | Inaccessible | Yes
MIPI ISP Pipeline | Inaccessible | Yes
JPEG Encoding | Inaccessible | Yes
H.264 and H.265 Encoding | Inaccessible | Yes

Comparison to Another Full AI/CV Solution

Metric | megaAI w/ Raspberry Pi* | Intel NUC AI Kit
Picture Resolution | 12 MP (4056x3040) | 1 MP (1280x720)
Video Resolution | 4K | 720p
Easy Setup & Development | Yes | No
Efficient Data Path | Yes | No
Real Time | Yes | Yes
Low Latency | Yes | Yes
Embeddable | Yes | No
Productizable | Yes | No
CPU Free for User Code | Yes | No
CPU Utilization | Near-zero | High
Hardware H.265 Support | Yes | No
Hardware JPEG Support | Yes | No
Hardware Feature Tracking Support | Yes | No
Power | 6 W (max, including Pi) | 50 W (max)

* A Raspberry Pi is not included with megaAI.

Open Source - MIT Licensed

megaAI is an open source project. So if you want to build something off of it, do it!

We’ve open sourced hardware and software to allow you to do so:

Hardware: https://github.com/luxonis/depthai-hardware

Software: https://github.com/luxonis/depthai

Complete documentation on all the software: https://docs.luxonis.com

And even our documentation is open-source, so if you find an error you can submit a PR with the fix!

From Zero to Artificial Intelligence

It’s never been easier to get up and running with machine learning, computer vision, and artificial intelligence.

With megaAI it’s just a handful of steps before you’re up and running. The following video shows everything required with a Raspberry Pi.

For other systems (macOS, Windows, and Linux variants), it’s just as easy. How about training megaAI to detect custom objects?

That’s easy too. We provide free online training through Google Colab notebooks:

The tutorials below are based on MobileNetv2-SSD, an object detector that runs natively on DepthAI. Plenty of other object detectors could be trained on Colab and run on DepthAI, so if you have a request for a different object detector or network backend, please feel free to submit a GitHub issue! We're constantly adding more (in fact, as of this writing, we just got some new YOLO variants running).

Easy Object Detector Training: Open in Colab

The tutorial notebook Easy_Object_Detection_With_Custom_Data_Demo_Training.ipynb shows how to quickly train an object detector based on the MobileNetv2-SSD network.

After training is complete, the notebook also converts the model to a .blob file that runs on our DepthAI platform and modules. First the model is converted to a format usable by OpenVINO called Intermediate Representation, or IR. The IR model is then compiled to a .blob file using a server we set up for that purpose. (The IR model can also be compiled to a blob locally.)
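As of this writing, that hosted conversion service is most easily reached through Luxonis's blobconverter Python package; the function and parameters below reflect current tooling and are assumptions, and the file names are placeholders:

    import blobconverter

    # Send the OpenVINO IR (model.xml + model.bin) to the hosted conversion
    # service and get back a Myriad X .blob ready for megaAI/DepthAI
    blob_path = blobconverter.from_openvino(
        xml="model.xml",
        bin="model.bin",
        data_type="FP16",   # the Myriad X runs FP16 inference
        shaves=6,           # number of SHAVE cores to compile for
    )
    print(blob_path)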

And that's it: in less than a couple of hours you have an advanced proof-of-concept object detector that runs on megaAI to detect objects of your choice. But don't take our word for it, keep reading for an example we built.

Example: COVID-19 Mask/No-Mask Training: Open In Colab

The Medical Mask Detection Demo Training.ipynb training notebook shows an example of a more complex object detector. The training dataset consists of people wearing or not wearing masks for viral protection: almost 700 pictures with approximately 3,600 bounding-box annotations. The images are complex, varying quite a lot in scale and composition. Nonetheless, the object detector does quite a good job with this relatively small dataset for such a task. Training again takes around two hours: depending on which GPU the Colab lottery assigns to the notebook instance, 10k steps can take anywhere from 1.5 to 2.5 hours. Either way, that's a short time for such a good-quality proof of concept on such a difficult task. We then performed the steps above to convert the model to a blob and ran it on our DepthAI module.

In the Press



Produced by Luxonis in Boulder, CO.

Sold and shipped by Crowd Supply.

megaAI

$199 · $8 US Shipping / $18 Worldwide

About the Team

Luxonis

Boulder, CO  ·   luxonis.com

Brandon quit his job at Ubiquiti leading the UniFi team in order to focus on embedded machine learning and computer vision. He misses the UniFi team. But he just had to try this, as he thinks it's the future!
