Vision FPGA SoM

An FPGA-based SoM with integrated vision, audio, and motion-sensing capability



Low Power Computer Vision

The Vision FPGA SoM is a System on Module based on the Lattice iCE40 UltraPlus 5K FPGA that integrates a low-power QVGA vision sensor, a 6-axis accelerometer/gyroscope, and an I2S MEMS microphone in a small form factor (3 cm x 2 cm).

We designed the device with the following goals in mind:

  • Low-power image sensor + FPGA: Overall power consumption should be minimized, in the 10-20 mW range, to enable battery-powered applications.
  • Well-designed host interface: Easy to integrate into a larger system via an interrupt-driven SPI host interface, with programmable IO voltages to support a glueless HW interface.
  • Modular design: Complexity is localized to the SoM so that developers can quickly integrate the device into their systems using a breadboard and a simple API. All required components for vision/audio/motion (e.g., image sensor and illumination) are integrated so the developer doesn't have to cobble together parts. The solution should ideally translate into a final product with minimal changes.
  • Flexible and easy to use: Easy to integrate mechanically and electrically. The SoM appears as an SPI device, with a SW library to support it. The FPGA's configuration RAM (CRAM) can be loaded directly, instead of programming the flash and then resetting the FPGA to configure it (see the sketch after this list). Developers are free to choose their own image sensor, with a reasonable low-cost default provided.
  • Strong set of features: This should not be just another FPGA board. A reasonably sized SRAM (not DRAM!) allows temporary data storage at low power, which is especially important for vision applications where multiple image frames may need to be captured and processed. Audio and an IMU are commonly used together with vision, so it makes sense to integrate them on this platform.
  • Documented and open FPGA code: Ideally built with an open source toolchain, with well-documented code and plenty of simple examples for each subsystem, plus a larger design that ties together the various parts of the SoM.
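As a concrete illustration of the CRAM point above: iCE40 parts can be configured directly through their slave SPI port by pulsing CRESET_B low while the chip is selected and then clocking in the bitstream, with no flash write or reset cycle. The sketch below shows how a Linux host such as a Raspberry Pi might do this with the spidev and RPi.GPIO libraries; the pin assignments and bitstream filename are assumptions for illustration, not the SoM's documented wiring.

```python
# Minimal sketch: load an iCE40 bitstream into CRAM over slave SPI.
# Pin assignments and the bitstream path are illustrative assumptions;
# consult the SoM documentation for the real wiring and protocol.
import time
import spidev
import RPi.GPIO as GPIO

CRESET = 25  # hypothetical GPIO wired to the FPGA CRESET_B pin
CDONE = 24   # hypothetical GPIO wired to the FPGA CDONE pin
SS = 23      # hypothetical GPIO wired to the FPGA SPI_SS pin

GPIO.setmode(GPIO.BCM)
GPIO.setup(CRESET, GPIO.OUT, initial=1)
GPIO.setup(SS, GPIO.OUT, initial=1)
GPIO.setup(CDONE, GPIO.IN)

spi = spidev.SpiDev()
spi.open(0, 0)                 # bus 0, device 0 (assumed wiring)
spi.max_speed_hz = 12_000_000
spi.mode = 0b11                # iCE40 configuration uses SPI mode 3

# Enter SPI-slave configuration: SS held low while CRESET_B pulses low.
GPIO.output(SS, 0)
GPIO.output(CRESET, 0)
time.sleep(0.001)
GPIO.output(CRESET, 1)
time.sleep(0.002)              # let the device clear its internal CRAM

# Clock in the bitstream, then send extra clocks so CDONE can rise.
with open("som_bitstream.bin", "rb") as f:
    data = f.read()
for i in range(0, len(data), 4096):
    spi.writebytes(list(data[i:i + 4096]))
spi.writebytes([0x00] * 16)    # >= 100 trailing clocks per the config spec
GPIO.output(SS, 1)

print("Configured" if GPIO.input(CDONE) else "CDONE did not rise")
```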

The modular concept is summarized in a talk given at a recent tinyML meetup.

Potential Applications

  • Image/Sound/Motion capture
  • Trigger on scene change/sound/motion to capture video/audio (see the frame-differencing sketch after this list)
  • Capture IMU readings with images for VR/AR applications
  • Beamform audio with vision input to pay attention to areas with motion
  • Low-power edge processing: preserve privacy by processing data on the SoM
  • Enable developers to detect objects, keywords, and gestures using a low-complexity neural network with no cloud connectivity required, i.e., tinyML
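To make the trigger-on-scene-change idea concrete, a minimal sketch of one common approach, frame differencing between consecutive frames, is shown below in NumPy. The thresholds are illustrative, and on the SoM the equivalent logic would run in the FPGA gateware rather than in Python.

```python
# Minimal sketch: trigger on scene change by frame differencing.
# Thresholds are illustrative; on the SoM this logic would live in
# the FPGA gateware, operating on QVGA frames from the image sensor.
import numpy as np

PIXEL_THRESH = 16        # per-pixel brightness delta that counts as "changed"
CHANGED_FRACTION = 0.02  # fraction of changed pixels that fires the trigger

def scene_changed(prev: np.ndarray, curr: np.ndarray) -> bool:
    """Return True when enough pixels differ between two grayscale frames."""
    delta = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    changed = np.count_nonzero(delta > PIXEL_THRESH)
    return changed > CHANGED_FRACTION * curr.size

# Usage with two synthetic 320x240 (QVGA) frames:
prev = np.zeros((240, 320), dtype=np.uint8)
curr = prev.copy()
curr[60:120, 80:160] = 200  # simulate an object entering the scene
print(scene_changed(prev, curr))  # True
```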

SoM Specifications

  • The main processing element is the Lattice iCE40UP5K FPGA with 5K LUTs, 1 Mb of RAM, and 8 MAC units
  • Image sensor options
    • On-board low-power QVGA monochrome global-shutter imager (Pixart PAJ6100U6)
    • Connector for a color/monochrome rolling-shutter imager (Himax HMB010; image sensor not included)
    • Connector for an OV7670 flex-cable image sensor (image sensor not included)
  • One Knowles MEMS I2S microphone, expandable to a stereo configuration.
  • InvenSense IMU 60289 6-axis gyroscope/accelerometer
  • Memory
    • 4 Mb qSPI Flash for bitstream/code storage
    • 64 Mb qSPI SRAM for temporary data
  • LEDs
    • Tri-color LED for a user interface, driven by the FPGA
    • IR LED for low-light illumination with frame-exposure synchronization
  • Four GPIOs with programmable IO voltage
  • 4-wire SPI host interface with programmable IO voltage
  • Flexible power options:
    • Single 3.3 V operation; can supply 1.8 V and 1.2 V at up to 100 mA to external devices using on-board LDOs
    • External 3.3 V, 1.8 V, and 1.2 V supplies for lower-power operation
  • Supports the Lattice sensAI toolchain using TensorFlow/Caffe/Keras for model development, quantization, and mapping to the sensAI neural network engines (a quantization sketch follows this list):
    • Vision-based people detection
    • Audio keyword detection
  • Small size: 21.3 mm x 31.3 mm
  • The developer kit breaks out all pins and provides USB connectivity for programming/debug, power measurement over I2C, LEDs, an additional microphone, IR LEDs for illumination, a PMOD expansion header, and a small prototyping area.
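For the quantization step mentioned above, the sketch below shows a representative tiny Keras model sized for a small FPGA NN engine, quantized to int8 with the standard TensorFlow Lite converter. This is a generic stand-in, under the assumption of a 64x64 monochrome input; it is not the sensAI toolchain itself, whose own compiler performs the mapping onto the FPGA.

```python
# Minimal sketch: a tiny Keras CNN of the sort a 5K-LUT FPGA NN engine
# could run, plus generic int8 post-training quantization. This is a
# stand-in for the quantization stage, not the sensAI toolchain itself.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),        # downscaled mono frame
    tf.keras.layers.Conv2D(8, 3, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(16, 3, strides=2, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # person / no person
])

def rep_data():
    # Representative samples drive quantization calibration; real
    # training data would be used here instead of random frames.
    for _ in range(100):
        yield [np.random.rand(1, 64, 64, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = rep_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
open("person_detect_int8.tflite", "wb").write(converter.convert())
```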

Current Status

HW/SW | Status | Description
Vision SoM | Developed, in Design Verification & Testing (DVT) | The star of this campaign!
Developer kit | Developed, in DVT | Uber dev kit with all features exposed!
Breakout board | Under development | Passive board breaking out to 2.54 mm pitch headers for quick prototyping
Gateware | Under development | Verilog FPGA code
SW API | Under development | SPI-based API that lets developers control the provided example design, push models into it, and retrieve NN inferences (a hypothetical sketch follows this table)
Brainware | Under development | Google Colab training framework that lets developers train their own models and deploy them on the SoM
NN models | Under development | Object (face) detection and audio classification are the first models to be developed
BLE/Wi-Fi dongle | Planned | ESP32-based dongle with a LiPo battery that attaches to the SoM connector to form a complete system
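Since the SPI-based SW API is still under development, the sketch below is purely hypothetical: it shows the shape a host-side inference read might take on a Linux host using spidev and RPi.GPIO. The opcode, record layout, and pin numbers are invented for illustration.

```python
# Purely hypothetical sketch of the planned SPI-based SW API: wait for
# the SoM's interrupt line, then read back an NN inference record.
# The opcode, record layout, and pin numbers are invented for illustration.
import spidev
import RPi.GPIO as GPIO

IRQ = 17               # hypothetical GPIO wired to the SoM interrupt pin
READ_INFERENCE = 0xA5  # hypothetical "read inference" opcode

GPIO.setmode(GPIO.BCM)
GPIO.setup(IRQ, GPIO.IN)

spi = spidev.SpiDev()
spi.open(0, 0)
spi.max_speed_hz = 8_000_000

def read_inference():
    """Block until the SoM raises its interrupt, then fetch one result."""
    GPIO.wait_for_edge(IRQ, GPIO.RISING)
    # One opcode byte, then clock out a fixed-size result record.
    resp = spi.xfer2([READ_INFERENCE] + [0x00] * 4)
    class_id, confidence = resp[1], resp[2]
    return class_id, confidence / 255.0

cls, conf = read_inference()
print(f"class {cls} with confidence {conf:.2f}")
```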

Want to participate? We are looking for developers to get involved with the FPGA code, software, and neural network training. Please join the Discord channel!

Vision FPGA SoM on the SoM breakout board.
