Simplifying Radar Data Processing with NVIDIA DRIVE for Autonomy

In the current state of automotive radar, machine learning engineers cannot work with the radar equivalent of a camera's raw RGB image. Instead, they rely on the output of constant false alarm rate (CFAR) detection, which plays a role similar to edge detection in computer vision. The communication and compute architectures have not kept pace with AI trends and the needs of Level 4 autonomy, despite radar being a staple of vehicle-level sensing for years.
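To make the CFAR analogy concrete, here is a minimal NumPy sketch of one-dimensional cell-averaging CFAR, the kind of thresholding stage a conventional radar runs on-device. The function name, parameters, and the synthetic range profile are illustrative assumptions, not part of any NVIDIA or ChengTech API: each cell is compared against a scaled estimate of the local noise floor taken from neighboring training cells, with guard cells excluded.

```python
import numpy as np

def ca_cfar(signal, num_train=8, num_guard=2, scale=3.0):
    """1D cell-averaging CFAR: flag cells whose power exceeds a
    threshold derived from the mean of surrounding training cells."""
    n = len(signal)
    detections = np.zeros(n, dtype=bool)
    half = num_train // 2 + num_guard
    for i in range(half, n - half):
        # Training cells on both sides of the cell under test,
        # skipping the guard cells immediately around it
        left = signal[i - half : i - num_guard]
        right = signal[i + num_guard + 1 : i + half + 1]
        noise = np.mean(np.concatenate([left, right]))
        detections[i] = signal[i] > scale * noise
    return detections

# Synthetic range profile: exponential noise floor plus two strong returns
rng = np.random.default_rng(0)
profile = rng.exponential(1.0, 128)
profile[40] += 30.0
profile[90] += 25.0
hits = ca_cfar(profile)
print(np.flatnonzero(hits))
```

The key point for the article's argument: everything below the adaptive threshold is discarded before any learning system ever sees the data, which is exactly the information loss centralized processing avoids.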

The real 3D/4D 'image' signal is processed inside the edge device. The radar outputs objects or, in some cases, point clouds, much as if a camera output a classical Canny edge-detection image instead of raw pixels. Centralized radar processing on NVIDIA DRIVE alters this model: raw analog-to-digital converter (ADC) data moves into a centralized compute platform. From there, a software-defined pipeline accelerated by dedicated NVIDIA Programmable Vision Accelerator (PVA) hardware manages everything from raw ADC samples to point clouds, with the GPU free to apply AI at any stage of the data flow.
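The stages such a software-defined pipeline replaces can be sketched end to end in a few lines. The following NumPy example assumes a single-channel FMCW data cube (chirps × fast-time samples) with one synthetic target injected at known range and Doppler bins; the array sizes, the injected target, and the fixed threshold standing in for CFAR are all illustrative assumptions, not the DRIVE implementation.

```python
import numpy as np

# Hypothetical FMCW cube: 64 chirps x 256 fast-time ADC samples, one RX channel
rng = np.random.default_rng(1)
num_chirps, num_samples = 64, 256
adc = 0.1 * (rng.standard_normal((num_chirps, num_samples))
             + 1j * rng.standard_normal((num_chirps, num_samples)))

# Inject one synthetic target at range bin 50, Doppler bin 10
t = np.arange(num_samples)
c = np.arange(num_chirps)
adc += np.outer(np.exp(2j * np.pi * 10 * c / num_chirps),
                np.exp(2j * np.pi * 50 * t / num_samples))

# Stage 1: range FFT across fast-time samples
range_map = np.fft.fft(adc, axis=1)
# Stage 2: Doppler FFT across chirps (shifted so zero velocity is centered)
rd_map = np.fft.fftshift(np.fft.fft(range_map, axis=0), axes=0)
power = np.abs(rd_map) ** 2

# Stage 3: detection (a fixed threshold stands in for CFAR here),
# yielding (Doppler bin, range bin) pairs, i.e. a sparse point cloud
peaks = np.argwhere(power > 0.5 * power.max())
print(peaks)
```

In an edge radar, all three stages run on the sensor's DSP and only `peaks` leaves the device; in the centralized model, `adc` itself is transported to the domain controller, so an AI model can consume the dense `power` map, or even the raw samples, instead of only the detections.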

In this paradigm, machine learning systems are not constrained to edge detections; they can utilize the full-fidelity radar image, offering approximately a 100x increase in available bits of information. By removing the high-power digital signal processor/microcontroller unit (DSP/MCU) from the edge radar, centralized radar returns to its radiofrequency (RF) roots with a streamlined printed circuit board (PCB). This design reduces unit costs by over 30% and decreases volume by about 20%, achieving an ultra-slim form factor.

Leveraging the superior energy efficiency of central domain controllers, overall system power consumption drops by around 20%. This innovation not only reshapes hardware design but also aligns with global green energy trends. NVIDIA collaborated with ChengTech, the first raw-radar partner to join the DRIVE platform, to validate centralized radar processing on DRIVE with production-grade hardware.

At GTC 2026 last week, NVIDIA and ChengTech demonstrated this pipeline running in real time on DRIVE AGX Thor using production ChengTech radar units. By contrast, most production automotive radars still use an edge-processing architecture: each sensor unit integrates its own system on chip (SoC) or field-programmable gate array (FPGA), runs a fixed signal-processing chain onboard, and outputs a sparse point cloud to the central advanced driver assistance system electronic control unit (ADAS ECU).
