Computational Imaging Detection and Ranging (CIDAR) Challenge

The Defense Advanced Research Projects Agency (DARPA) seeks to explore the potential of computational imaging, machine learning, and artificial intelligence tools and techniques to enhance the accuracy of passive ranging for tactical and civil applications.

 


About Our Effort

Optical range measurements can be made by active or passive imaging. Active approaches require the observer to emit laser radiation, which can pose a hazard to the surrounding area and reveal the observer to others, compromising effective intelligence, surveillance, and reconnaissance (ISR) and sense and avoid (SAA) capabilities. Passive approaches are important because they emit no radiation and therefore cannot be detected or jammed.

DARPA seeks to explore the potential of novel computational imaging, machine learning (ML) algorithms, and artificial intelligence (AI) tools and techniques to enhance the accuracy of passive ranging for tactical and civil applications, such as augmented reality and autonomous vehicles.

The CIDAR challenge aims to discover passive imaging algorithms for high-accuracy, low-latency distance measurements that equal or exceed the performance of today’s active range measurement systems. Specifically, the challenge will extend passive range measurements to 10 km or more with high accuracy while minimizing floating point operations to achieve low latency. The Cramér-Rao bound defines the fundamental limit of distance information in images, but today’s passive imaging approaches to range measurement capture only ~1% of this information. The accuracy of range measurement algorithms improves 10–100x when information from a single spatial, spectral, or temporal optical filter is added to the information in unfiltered images. If new algorithms can integrate information from all optical filters, the accuracy of passive range measurements may increase a further 10–100x, approaching the fundamental limit of distance information in images. If the challenge succeeds, measurements now made with active systems such as laser detection and ranging (LADAR) and laser range finding (LRF) could be performed passively without sacrificing speed or accuracy.

 
Desired Outcomes for Department of Defense (DOD)
Beyond the limitations of today’s active ranging systems
Current active ranging systems for ISR are at risk of detection because of their emissions. A zero-emission range measurement capability developed through CIDAR could eliminate this limitation.

Operational superiority on the battlefield
High-accuracy passive ranging supports effective targeting on a fast timescale, leaving the target almost no time to respond.

Reduced size, weight, and power (SWaP) and costs
Advanced software for accurate range detection reduces the complexity of supporting hardware and electronic controls, lowering size, weight, power, and cost for DOD needs.

Effective and flexible access to civil airspace
Unmanned aircraft systems (UAS) need regular and more flexible access to civil airspace to enhance existing training capabilities and to improve the logistics of transporting UAS. Passive ranging of non-cooperative air traffic could enable a passive-only SAA solution for UAS.

Transportation safety and efficiency
Accurate passive range measurements can augment reality by overlaying distance information on images, improving the efficiency of autonomous driving algorithms and improving context awareness for drivers.

Fundamental limit of distance information in images
 

The Cramér-Rao bound defines the limit of distance information in images, independent of how it is measured. The bound is a function of the characteristics of the aperture (focal length, diameter, etc.), and it implies that images contain enough distance information to support high-accuracy distance measurements at long range. CIDAR will determine whether algorithms can extract this distance information from optically filtered images.

Traditional images without optical filters contain limited distance information. When a single optical filter (spatial, spectral, or temporal) is added to a traditional image, the accuracy of passive distance measurements improves by one to two orders of magnitude. If information from multiple optical filters can be integrated algorithmically, can the distance information extracted from images approach the fundamental limit?
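As an illustrative sketch only, and not part of the CIDAR problem statement, the standard form of the Cramér-Rao bound shows why combining independent filtered measurements can tighten the achievable accuracy: the variance of any unbiased range estimator is bounded below by the reciprocal of the Fisher information, and Fisher information from statistically independent measurement channels adds. The symbols below (an unbiased estimator R̂ of range R, Fisher information I_0 for the unfiltered image and I_k for each of K filtered channels) are generic placeholders rather than CIDAR-specific quantities.

\[
\operatorname{Var}(\hat{R}) \;\ge\; \frac{1}{\mathcal{I}(R)},
\qquad
\mathcal{I}_{\mathrm{total}}(R) = \mathcal{I}_{0}(R) + \sum_{k=1}^{K} \mathcal{I}_{k}(R)
\;\Longrightarrow\;
\operatorname{Var}(\hat{R}) \;\ge\; \frac{1}{\mathcal{I}_{0}(R) + \sum_{k=1}^{K} \mathcal{I}_{k}(R)}.
\]

Under these assumptions, each additional independent filtered channel increases the total Fisher information and therefore lowers the smallest range-estimation variance any algorithm could achieve.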


CIDAR Challenge timeline