Deepwave Digital
Software Defined Radio
NVIDIA has officially backed the AIR-T project. Read the full update here.
The Artificial Intelligence Radio Transceiver (AIR-T) is a high-performance software-defined radio (SDR) seamlessly integrated with state-of-the-art processing and deep learning inference hardware. The incorporation of an embedded graphics processing unit (GPU) enables real-time wideband digital signal processing (DSP) algorithms to be executed in software, without requiring specialized field-programmable gate array (FPGA) firmware development. Because the GPU is the most widely used processor for machine learning, the AIR-T significantly lowers the barrier for engineers to create autonomous signal identification, interference mitigation, and many other machine learning applications. By granting the deep learning algorithm full control over the transceiver system, the AIR-T allows for a fully autonomous software-defined and cognitive radio.
Deepwave’s AIR-T is the first software-defined radio (SDR) with embedded high-performance computing. The AIR-T lowers the price and performance barriers to autonomous signal identification, interference mitigation, and much more. The AIR-T allows for a fully autonomous SDR by giving the AI engine complete control over the hardware. Whether your background is in electrical engineering, applied physics, or a related field, the AIR-T will open up new possibilities for your work. It does this by uniquely integrating three digital processors that provide the functionality needed for any signal processing application: a Xilinx Artix-7 FPGA, an embedded multi-core ARM CPU, and a 256-core NVIDIA GPU.
This unique combination brings you the worlds of high-performance computing (HPC), artificial intelligence, deep learning, and advanced graphics and rendering on one embedded platform. The system has the versatility to function as a highly parallel SDR, data recorder, or inference engine for deep learning algorithms. The embedded GPU allows your SDR applications to process bandwidths greater than 200 MHz in real-time.
For a complete list of features, see the AIR-T Product Guide here.
RF Transceiver |
---|---
Manufacturer | Analog Devices AD9371
Number of Receive Channels | 2
Number of Transmit Channels | 2
Maximum Bandwidth | 100 MHz
Maximum Sample Rate | 125 MSPS
Frequency Tuning Range | 300 MHz - 6 GHz
Receiver Power Level Control | AGC or manual gain control
Transmitter Power Level Control | TPC or manual gain control
External Reference Input | Yes
Built-in Calibrations | Quadrature error correction, LO suppression, LO leakage correction
Processors |
Module | NVIDIA Jetson TX2
CPU 1 | ARM Cortex-A57 (4-core)
CPU 2 | ARM Denver2 (2-core)
GPU | NVIDIA Pascal (256-core)
Memory | 8 GB shared memory
Storage | 32 GB flash
FPGA |
Device | Xilinx Artix-7
LUTs | 47.2k
DSP Slices | 180
Block RAM | 3.75 Mbits
Networking |
Ethernet | 10/100/1000BASE-T
WLAN | 802.11a/b/g/n/ac dual-band 2x2 MIMO
Bluetooth | Version 4.1
Display |
HDMI | 3840 x 2160 (4K)
Peripheral Interfaces |
SATA | Version 3.1
SD Card | SD 3.0 / SDXC cards up to 2 TB
USB | USB 3.0 SuperSpeed (up to 5 Gb/s), USB 2.0 High Speed (up to 480 Mb/s), USB On-The-Go
UART | See NVIDIA Jetson TX2 datasheet for information
GPIO | See NVIDIA Jetson TX2 datasheet for information
SPI | See NVIDIA Jetson TX2 datasheet for information
I2C | See NVIDIA Jetson TX2 datasheet for information
Audio | See NVIDIA Jetson TX2 datasheet for information
Power |
Input | 8-15 VDC
Mechanical |
Board Form Factor | Mini-ITX
Dimensions | 170 x 170 x 35 mm (6.7" x 6.7" x 1.4")
Weight | 285 grams (0.63 pounds)
Software |
Operating System | Ubuntu (Linux)
Drivers | AirStack
The AIR-T comes pre-loaded with a full software stack, AirStack. AirStack includes all the components necessary to utilize the AIR-T, such as an Ubuntu-based operating system, AIR-T-specific device drivers, and the FPGA firmware. The operating system is based on NVIDIA JetPack and is upgraded periodically. Please check for the latest software at http://www.deepwavedigital.com.
Applications for the AIR-T may be developed using almost any software language, but C/C++ and Python are the primary supported languages. Various Application Programming Interfaces (APIs) are supported by AirStack and a few of the most common APIs are described below.
SoapySDR is the primary API for interfacing with the AIR-T via the SoapyAIRT driver. SoapySDR is an open-source API and run-time library for interfacing with various SDR devices. The AirStack environment includes SoapySDR and the SoapyAIRT driver to enable communication with the radio interfaces using Python or C++. The Python code below provides an operational example of how to leverage SoapyAIRT for SDR applications.
```python
#!/usr/bin/env python3
from SoapySDR import Device, SOAPY_SDR_RX, SOAPY_SDR_CS16
import numpy as np

sdr = Device(dict(driver="SoapyAIRT"))            # Create AIR-T instance
sdr.setSampleRate(SOAPY_SDR_RX, 0, 125e6)         # Set sample rate on chan 0
sdr.setGainMode(SOAPY_SDR_RX, 0, True)            # Use AGC on channel 0
sdr.setFrequency(SOAPY_SDR_RX, 0, 2.4e9)          # Set frequency on chan 0

buff = np.empty(2 * 16384, np.int16)              # Create memory buffer
stream = sdr.setupStream(SOAPY_SDR_RX,
                         SOAPY_SDR_CS16, [0])     # Setup data stream
sdr.activateStream(stream)                        # Turn on the radio

for i in range(10):                               # Receive 10x16384 windows
    sr = sdr.readStream(stream, [buff], 16384)    # Read 16384 samples
    rc = sr.ret                                   # Number of samples read
    assert rc == 16384, 'Error code = %d!' % rc   # Make sure no errors
    s0 = buff.astype(float) / np.power(2.0, 15)   # Scaled interleaved signal
    s = s0[::2] + 1j*s0[1::2]                     # Complex signal data
    # <Insert code here that operates on s>

sdr.deactivateStream(stream)                      # Stop streaming samples
sdr.closeStream(stream)                           # Turn off radio
```
A key feature of SoapySDR is its ability to translate to/from other popular SDR APIs, such as UHD. The SoapyUHD plugin is included with AirStack and enables developers to create applications using UHD or execute existing UHD-based applications on the AIR-T. This interface is described in the figure below.
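In addition to that figure, the snippet below is a minimal, hedged sketch of the UHD path: it is not taken from the AIR-T documentation, and it assumes the SoapyUHD plugin makes the AIR-T discoverable through UHD's Python API. The empty device-argument string relies on automatic discovery, and the frequency, sample rate, and gain values are arbitrary placeholders.

```python
# Hedged sketch: receive samples through UHD's Python API, relying on the
# SoapyUHD plugin for device discovery (assumption).
import uhd

usrp = uhd.usrp.MultiUSRP("")           # Empty args: auto-discover the radio
samples = usrp.recv_num_samps(16384,    # Number of samples to capture
                              2.4e9,    # Center frequency in Hz (placeholder)
                              31.25e6,  # Sample rate in Hz (placeholder)
                              [0],      # Receive on channel 0
                              30)       # Gain in dB (placeholder)
print(samples.shape)                    # (num_channels, num_samples), complex64
```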
The figure below illustrates supported Python APIs that can be used to develop signal processing applications on both the CPU and GPU of the AIR-T. In general, these have been selected because they have modest overhead compared to native code and are well suited to rapid prototyping. In addition, C++ interfaces are provided for many control and processing interfaces to the AIR-T for use in performance-critical applications.
The table below outlines the common data processing APIs that are natively supported by AirStack, along with the supported GPP for each API. Some of these are included with AirStack, while some are available via the associated URL.
API | GPP | Description |
---|---|---|
numpy | CPU | numpy is one of the most common Python modules for data analysis and processing. |
scipy.signal | CPU | SciPy is a scientific computing library for Python that contains a signal processing library, scipy.signal. |
cupy | GPU | Open-source matrix library accelerated with NVIDIA CUDA that is semantically compatible with numpy. |
cuSignal | GPU | Open-source signal processing library accelerated with NVIDIA CUDA based on scipy.signal. |
PyCUDA / numba | GPU | Python access to the full power of NVIDIA’s CUDA API. |
Custom CUDA Kernels | GPU | Custom CUDA kernels may be developed and executed on the AIR-T. |
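To make the CPU/GPU split in the table concrete, here is a minimal, illustrative sketch (not taken from the AirStack documentation) that computes an averaged power spectrum with numpy and uses cupy for the same math when it is available; the function name and FFT size are arbitrary.

```python
# Illustrative sketch: the same spectral processing on CPU (numpy) or GPU (cupy).
import numpy as np

try:
    import cupy as cp       # GPU path; assumes cupy is installed on the AIR-T
    xp = cp
except ImportError:
    xp = np                 # CPU-only fallback

def averaged_power_spectrum(iq, nfft=4096):
    """Average the power spectrum over consecutive nfft-sample windows."""
    iq = xp.asarray(iq)                                   # To GPU memory if cupy
    windows = iq[:(iq.size // nfft) * nfft].reshape(-1, nfft)
    spectra = xp.fft.fftshift(xp.fft.fft(windows, axis=-1), axes=-1)
    return (xp.abs(spectra) ** 2).mean(axis=0)

# Example usage with random complex samples standing in for receiver data
rng = np.random.default_rng(0)
iq = (rng.standard_normal(1 << 18)
      + 1j * rng.standard_normal(1 << 18)).astype(np.complex64)
psd = averaged_power_spectrum(iq)
```

Because cupy is semantically compatible with numpy, the same function body serves both processors; only the array module changes.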
The AIR-T also supports GNU Radio, one of the most widely used open-source toolkits for signal processing and SDR. Included with AirStack, the toolkit provides modules for the instantiation of bidirectional data streams with the AIR-T’s transceiver (transmit and receive) and multiple DSP modules in a single framework. GNU Radio Companion may also be leveraged for a graphical programming interface, as shown in the figure below. GNU Radio is written in C++ and has Python bindings.
Like the majority of SDR applications, most functions in GNU Radio rely on CPU processing. Since many DSP engineers are already familiar with GNU Radio, two free and open-source modules have been created for AirStack to provide GPU acceleration on the AIR-T from within GNU Radio. gr-cuda and gr-wavelearner, along with the primary GNU Radio modules for sending and receiving samples to and from the AIR-T, are shown in the table below and included with AirStack. A minimal flowgraph sketch follows the table.
GNU Radio Module | Description |
---|---|
gr-cuda | A detailed tutorial for incorporating CUDA kernels into GNU Radio. |
gr-wavelearner | A framework for running both GPU-based FFTs and neural network inference in GNU Radio. |
gr-uhd | The GNU Radio module for supporting UHD devices. |
gr-soapy | Vendor neutral set of source/sink blocks for GNU Radio. |
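The sketch below shows the general shape of a GNU Radio Python flowgraph. Because the exact gr-soapy block parameters differ between GNU Radio releases, a simulated complex tone stands in for the AIR-T's receive block, and the sample rate and filter values are arbitrary.

```python
#!/usr/bin/env python3
# Minimal GNU Radio flowgraph sketch; the signal source is a stand-in for
# the AIR-T receive block (see gr-soapy / gr-uhd in the table above).
from gnuradio import gr, blocks, analog, filter
from gnuradio.filter import firdes

class example_flowgraph(gr.top_block):
    def __init__(self, samp_rate=1e6):
        gr.top_block.__init__(self, "AIR-T flowgraph sketch")
        # Simulated 100 kHz complex tone in place of the radio source block
        src = analog.sig_source_c(samp_rate, analog.GR_COS_WAVE, 100e3, 1.0)
        throttle = blocks.throttle(gr.sizeof_gr_complex, samp_rate)
        lpf = filter.fir_filter_ccf(
            1, firdes.low_pass(1.0, samp_rate, 200e3, 50e3))
        head = blocks.head(gr.sizeof_gr_complex, int(samp_rate))  # ~1 s of samples
        sink = blocks.null_sink(gr.sizeof_gr_complex)
        self.connect(src, throttle, lpf, head, sink)

if __name__ == "__main__":
    example_flowgraph().run()   # Runs until the head block has passed its samples
```

In a real application, the source and sink would be replaced with the AIR-T transceiver blocks and a processing or recording chain, or the equivalent flowgraph would be drawn graphically in GNU Radio Companion.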
The workflow for creating a deep learning application for the AIR-T consists of three phases: training, optimization, and deployment. These steps are illustrated in the figure below and covered in the following sections.
AirPack is an add-on software package (not included with the AIR-T) that provides source code for the complete training-to-deployment workflow described in this section. More information about AirPack may be found here: https://deepwavedigital.com/airpack/.
The primary inference library used on the AIR-T is NVIDIA’s TensorRT. TensorRT allows optimized inference to run on the AIR-T’s GPU. TensorRT is compatible with models trained using a wide variety of frameworks, as shown in the table below; a short conversion sketch follows the table.
Deep Learning Framework | Description | TensorRT Support | Programming Languages |
---|---|---|---|
TensorFlow | Google’s deep learning framework | UFF, ONNX | Python, C++, Java |
PyTorch | Open source deep learning framework maintained by Facebook | ONNX | Python, C++ |
MATLAB | MATLAB has a Statistics and Machine Learning Toolbox and a Deep Learning Toolbox | ONNX | MATLAB |
CNTK | Microsoft’s open source Cognitive Toolkit | ONNX | Python, C#, C++ |
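As an illustration of the optimization step, the sketch below builds a TensorRT engine from an ONNX model using TensorRT's Python API. The file names are placeholders, and the exact builder calls vary with the TensorRT version shipped in AirStack, so treat this as a sketch rather than the documented AIR-T workflow.

```python
# Sketch: convert a trained ONNX model into a serialized TensorRT engine.
# "trained_model.onnx" and "model.plan" are placeholder file names.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("trained_model.onnx", "rb") as f:
    if not parser.parse(f.read()):              # Parse the ONNX graph
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("Failed to parse the ONNX model")

config = builder.create_builder_config()
engine_bytes = builder.build_serialized_network(network, config)

with open("model.plan", "wb") as f:             # Engine deployed on the AIR-T
    f.write(engine_bytes)
```

The serialized engine file is what the deployment application loads on the AIR-T's GPU at run time.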
We are continuously adding new tutorials to our documentation page.
One of our favorites is leveraging the open source cuSignal library to speed up the execution of a polyphase resampler by 8x using the GPU vs. the CPU. cuSignal is part of the NVIDIA RAPIDS development environment and is an effort to GPU accelerate all of the signal processing functions in the SciPy Signal Library.
The full tutorial may be found here.
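For context, a stripped-down sketch of the idea in that tutorial is shown below: the same polyphase resampling call executed on the CPU with scipy.signal and on the GPU with cuSignal. The array size and resampling ratio here are arbitrary, and the 8x speedup figure comes from Deepwave's benchmark, not from this snippet.

```python
# Sketch: polyphase resampling on CPU (scipy.signal) vs. GPU (cusignal).
import numpy as np
from scipy import signal
import cupy as cp
import cusignal

num_samples = 2 ** 20
rng = np.random.default_rng(0)
iq = (rng.standard_normal(num_samples)
      + 1j * rng.standard_normal(num_samples)).astype(np.complex64)

cpu_out = signal.resample_poly(iq, 3, 2)                # CPU polyphase resample
gpu_out = cusignal.resample_poly(cp.asarray(iq), 3, 2)  # Same call on the GPU
print(cpu_out.shape, gpu_out.shape)
```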
 | AIR-T | Ettus E310 | Ettus N310 | LimeNET Mini | Epiq Maveriq |
---|---|---|---|---|---|
GPU for Signal Processing | 256-core NVIDIA Jetson | - | - | - | - |
Deep Learning Capable | TensorFlow, Caffe, Keras, PyTorch | - | - | - | - |
CPU Cores | 6 (ARM A57, Denver2) | 2 (ARM A9) | 2 (ARM A9) | 2 (Intel i7-7500U) | 4 (Intel Atom) |
RAM (GB) | 8 | 1 | 1 | 32 | 8 |
Internal Storage (GB) | 32 | - | - | 512 | Up to 1000 |
Tx Bandwidth > 60 MHz | 100 MHz | - | 100 MHz | 61.4 MHz | - |
Rx Bandwidth > 60 MHz | 100 MHz | - | 100 MHz | 61.4 MHz | - |
Max Bandwidth for Onboard Processing (MHz) | >200 | 10 | Not Published | Not Published | Not Published |
USB 3.0 | 1 | - | - | 2 | - |
SATA | 1 | - | - | 1 (Internal Storage) | 1 (Internal Storage) |
1 Gb Ethernet | 1 | 1 | 1 | 1 | 1 |
Wi-Fi | 1 | - | - | 1 | - |
Bluetooth | 1 | - | - | 1 | - |
Display Out | HDMI (4K) | - | - | HDMI (4K) | - |
Max Power Consumption (W) | 22 | 6 | 80 | Not Published | 14 |
Price | $5,500 | $2,982 | $10,000 | $2,599 | Unavailable |
With the AIR-T, you can use deep learning to maximize applications from Wi-Fi to OpenBTS. Pairing a GPU directly with an RF front-end means you don’t have to purchase an additional computer or server for processing. Just power on the AIR-T, plug in a keyboard, mouse, and monitor and you’re ready to go. Use GNU Radio blocks to quickly develop and deploy your current or new wireless system or, if you need more control, talk directly with the drivers using Python or C++. And for you superusers, the AIR-T is an open platform, so you can program the FPGA and GPU directly.
Communicating past Pluto is hard. With the power of a single-board SDR with an embedded GPU, the AIR-T can prove your concepts before you launch them into space. The AIR-T lets you reduce development time and costs by adding deep learning to your satellite communication system. With the ability to program in Python and rapidly port existing code from GNU Radio, you can accelerate your existing applications within minutes. Yet you can do a LOT more with the AIR-T. We are committed to an open architecture, meaning you can program in Python, control the drivers with your own custom software, or program the FPGA directly. We anticipate customers developing at every level.
There is a seemingly endless number of terrestrial communication systems, with more being developed every day. From high-power, high-frequency voice communications to 60 GHz millimeter wave digital technology, there are significant challenges in every band. As the spectrum becomes more congested, we are approaching the limit of how much information can be passed over wireless systems sharing the same spectrum. AI can be used to make the most of these resources. The AIR-T is well positioned to help you quickly and easily prototype and deploy your wireless AI solution. From 300 MHz to 6 GHz, the AIR-T covers the majority of commercial communication bands.
While the AIR-T was designed for wireless development, it can process any type of data. NVIDIA is a graphics processing powerhouse and their products, including the Jetson TX2, are known for their high-performance when it comes to video. With the AIR-T, you can combine the traditional uses of image and video processing with radio frequency. With USB 3.0 & 2.0, Gigabit Ethernet, and high-speed IO, there are many ways to bring data in and out of the AIR-T. You can attach additional sensors and allow the AIR-T to fuse the data together.
The AIR-T lets you demodulate a signal and apply deep learning to the resulting image, video, or audio data in one integrated platform. For example, you used to need multiple devices to directly receive a signal that contains audio and then perform speech recognition. The AIR-T integrates this hardware into one easy-to-use package. From speech recognition to digital signal processing, the integrated NVIDIA GPU provides the horsepower needed for your cutting edge application.
Now that the crowdfunding campaign is over and all backers received their AIR-T units, we’re happy to continue stocking AIR-T through Crowd Supply. Orders placed now ship within three business days. For more information about shipping, please take a look at the Crowd Supply Guide. If you have any questions or concerns regarding a specific AIR-T order, contact the Crowd Supply Support team for assistance.
If you have any questions or simply want to find out more about AIR-T or the Deepwave team, please don’t hesitate to get in touch, or check out the latest news on our blog.
Produced by Deepwave Digital in Philadelphia, PA.
Sold and shipped by Crowd Supply.
The AIR-T is the first platform that enables out-of-the-box machine learning wireless systems. Simply port existing GNU Radio applications or code directly in Python to access the radio frequency spectrum. Includes the AIR-T board, four MCX-to-SMA cables, a getting-started tutorial, and a power supply. The AIR-T enclosure is expertly constructed from aluminum to produce a polished, elegant, and sleek metallic silver finish. It measures 192 x 182 x 79 mm (7.5 x 7.2 x 3.1 inches), and the power button illuminates blue when the system is on. All RF ports are brought to the front of the enclosure for ease of use, and all computer peripheral connections are brought to the rear.
We are a dedicated team of radio frequency and wireless industry experts with over fifty years of combined experience providing digital signal processing solutions to the commercial and defense industries. We understand both the technical capabilities as well as the limitations of these technologies, leading us to develop novel hardware and software solutions to combine the fields of artificial intelligence and signal processing.