DepthAI

by Luxonis

An embedded platform for combining Depth and AI, built around Myriad X

Jun 18, 2020

New Spatial AI Capabilities & Multi-Stage Inference

by Brandon G

We have a super-interesting feature set coming to DepthAI:

  • 3D feature localization (e.g. finding facial features) in physical space
  • Parallel-inference-based 3D object localization
  • Two-stage neural inference support (a sketch of this pattern follows below)

And all of these are working in initial form (in this PR).
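By two-stage inference, we mean chaining networks so that the output of a first-stage detector feeds a second-stage network. Here is a minimal sketch of the pattern in plain Python, with hypothetical detect_faces and find_landmarks callables standing in for the two networks (this is not the DepthAI API):

    def two_stage_inference(frame, detect_faces, find_landmarks):
        """Run a second-stage network on each region a first-stage detector finds.

        frame          : image as an HxW(xC) array
        detect_faces   : callable, frame -> list of (xmin, ymin, xmax, ymax) boxes
        find_landmarks : callable, crop -> list of (u, v) pixel coords in the crop
        Both callables are hypothetical stand-ins for the two neural networks.
        """
        results = []
        for (xmin, ymin, xmax, ymax) in detect_faces(frame):
            # Stage 2 runs only on the region stage 1 found.
            crop = frame[ymin:ymax, xmin:xmax]
            # Shift crop-relative landmarks back into full-frame pixel coords.
            results.append([(u + xmin, v + ymin) for (u, v) in find_landmarks(crop)])
        return results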

So, on to the details of how this works:

We are implementing a feature that allows you to run neural inference on either or both of the grayscale cameras.

This sort of flow is ideal for finding the 3D location of small objects, shiny objects, or objects for which disparity depth might struggle to resolve the distance (the z-dimension needed to get the full 3D position, XYZ). This means DepthAI can now be used in two modalities:

  1. As it's used now: the disparity depth results within the object detector's bounding box are used to re-project the XYZ location of the center of the object (see the sketch after this list).
  2. Run the neural network in parallel on both the left and right grayscale cameras, and the results are used to triangulate the locations of detected features.
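
As a rough illustration of modality 1, here is a minimal sketch in plain Python/NumPy (not the DepthAI API) of re-projecting a detector bounding box's depth into an XYZ position, assuming a rectified depth map and hypothetical pinhole intrinsics:

    import numpy as np

    def roi_center_xyz(depth_map, bbox, fx, fy, cx, cy):
        """Re-project the center of a detector bounding box into camera-frame XYZ.

        depth_map : HxW array of depth in meters (e.g. from stereo disparity)
        bbox      : (xmin, ymin, xmax, ymax) in pixels
        fx, fy    : focal lengths in pixels; cx, cy: principal point in pixels
        """
        xmin, ymin, xmax, ymax = bbox
        roi = depth_map[ymin:ymax, xmin:xmax]
        # The median over valid (nonzero) pixels is robust to holes in the depth map.
        z = float(np.median(roi[roi > 0]))
        u = (xmin + xmax) / 2.0
        v = (ymin + ymax) / 2.0
        # Pinhole re-projection: pixel position plus depth -> XYZ in meters.
        return (u - cx) * z / fx, (v - cy) * z / fy, z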

An example where modality 2 is extremely useful is finding the XYZ positions of facial landmarks, such as the eyes, nose, and corners of the mouth.

Why is this useful for facial features? For small features like these, the risk of disparity depth having a hole at that location goes up. Even worse, for faces with glasses, reflections off the lenses may throw the disparity depth calculation off entirely (in fact, it might 'properly' return the depth of the reflected object instead).

When running the neural network in parallel, none of these issues exist: the network finds the eyes, nose, and mouth corners in each image independently; the difference in pixel location of each feature between the left and right streams gives its disparity (depth is proportional to 1/disparity: z = focal_length × baseline / disparity); and this depth is re-projected through the optics of the camera to get the full XYZ position of each feature.
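
And here is a minimal sketch of that triangulation, again in plain Python rather than the DepthAI API; the focal length, baseline, and landmark coordinates in the example are hypothetical:

    def landmark_xyz(uv_left, uv_right, fx, fy, cx, cy, baseline_m):
        """Triangulate one landmark seen in both rectified grayscale images.

        uv_left, uv_right : (u, v) pixel coords of the same landmark (e.g. the
                            left eye) from the left/right inference results
        fx, fy, cx, cy    : intrinsics of the rectified stereo pair, in pixels
        baseline_m        : distance between the two cameras, in meters
        """
        disparity = uv_left[0] - uv_right[0]   # pixels; assumes rectified images
        z = fx * baseline_m / disparity        # depth from disparity
        # Re-project through the (left) camera's optics to get XYZ in meters.
        return (uv_left[0] - cx) * z / fx, (uv_left[1] - cy) * z / fy, z

    # Example: left eye at (642, 358) in the left image and (611, 358) in the
    # right image, with a hypothetical 860 px focal length and 7.5 cm baseline:
    print(landmark_xyz((642, 358), (611, 358), 860, 860, 640, 360, 0.075))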

And as you can see below, it works fine even with my quite-reflective anti-glare glasses:

Cheers,
Brandon and the Luxonis Team




Product Choices

$399

DepthAI: RPi Compute Module Edition

Complete DepthAI system, including a Raspberry Pi Compute Module and a microSD card pre-loaded with Raspbian and the DepthAI Python interface. Boots up running an object localization demo. Just connect power and an HDMI display.


$315

DepthAI: USB3 Onboard Camera Edition

This DepthAI variant interfaces with the host over USB 3 Type-C, allowing use with the (embedded) host platform of your choice, including the Raspberry Pi and other popular embedded hosts.


$215

DepthAI: USB3 FFC Edition

DepthAI for the host of your choice. Runs on anything that runs OpenVINO (a lot of things), including Mac OS X, Linux (Ubuntu 16.04, Ubuntu 18.04, CentOS, Yocto), and Windows 10.


$120

DepthAI: System on Module

Allows you to integrate the power of DepthAI into your own products. Supports three cameras in total: a dual 720p, 120 Hz global-shutter stereo pair and one 4K, 60 Hz color camera, connected through a 100-pin board-to-board connector. All power conditioning/sequencing, clock synthesis for the Myriad X and the cameras, and boot sequencing are included on the module. Just provide 5 V power and physical connections to the cameras.


$99

4K, 60 Hz Video Modular Color Camera


$105

720p, 120 Hz Global Shutter Modular Stereo Camera Pair

Credits

Luxonis

Brandon quit his job at Ubiquiti leading the UniFi team in order to focus on embedded machine learning and computer vision. He misses the UniFi team. But he just had to try this, as he thinks it's the future!

Brandon Gilles

MacroFab: Prototype Fabrication

King-Top: Manufacturing Partner

Sunny Opotech: Sourcing, Camera Modules
