
Application of Event Cameras and Neuromorphic Computing to VSLAM: A Survey

Overview
Date 2024 Jul 26
PMID 39056885
Abstract

Simultaneous Localization and Mapping (SLAM) is a crucial capability for most autonomous systems, allowing them to navigate through unfamiliar surroundings while building maps of them. Traditional visual SLAM (VSLAM) relies on frame-based cameras and structured processing pipelines, which struggle in dynamic or low-light environments. Recent advances in event camera technology and neuromorphic processing offer promising ways to overcome these limitations. Event cameras, inspired by biological vision systems, capture scenes asynchronously, consuming minimal power while providing much higher temporal resolution than frame-based sensors. Neuromorphic processors, designed to mimic the parallel processing of the human brain, enable efficient real-time computation on event-based data streams. This paper provides a comprehensive overview of recent research on integrating event cameras and neuromorphic processors into VSLAM systems. It explains the principles behind event cameras and neuromorphic processors, highlighting their advantages over traditional sensing and processing methods. It then surveys state-of-the-art approaches to event-based SLAM, including feature extraction, motion estimation, and map reconstruction techniques, and examines the integration of event cameras with neuromorphic processors, focusing on their synergistic benefits in energy efficiency, robustness, and real-time performance. The paper also discusses challenges and open research questions in this emerging field, such as sensor calibration, data fusion, and algorithmic development. Finally, it outlines potential applications and future directions for event-based SLAM systems, ranging from robotics and autonomous vehicles to augmented reality.
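
The abstract describes the two key ingredients, event cameras and neuromorphic processing, only in prose. As a concrete illustration of the first, the Python sketch below (not taken from the paper; all names, shapes, and parameter values are illustrative) shows how the asynchronous (t, x, y, p) event stream of an event camera can be accumulated into an exponentially decaying "time surface", a common intermediate representation for event-based feature extraction:

import numpy as np

# An event camera reports per-pixel brightness changes as a sparse,
# asynchronous stream of tuples (t, x, y, p): timestamp in seconds,
# pixel coordinates, and polarity (+1 brighter, -1 darker).
events = np.array(
    [(0.000, 3, 4, 1), (0.001, 3, 5, -1), (0.004, 7, 2, 1)],
    dtype=[("t", "f8"), ("x", "i4"), ("y", "i4"), ("p", "i4")],
)

def time_surface(events, shape, t_ref, tau=0.010):
    # Keep, per pixel, the time and polarity of the most recent event,
    # then decay each pixel's contribution by its age at t_ref.
    last_t = np.full(shape, -np.inf)
    last_p = np.zeros(shape)
    for e in events:
        last_t[e["y"], e["x"]] = e["t"]
        last_p[e["y"], e["x"]] = e["p"]
    return last_p * np.exp(-(t_ref - last_t) / tau)  # pixels with no event stay 0

surface = time_surface(events, shape=(8, 8), t_ref=0.005)

For the second ingredient, here is a hypothetical minimal sketch of the leaky integrate-and-fire (LIF) neuron model that underlies most neuromorphic processors, driven by per-millisecond event counts at one pixel (the gain of 200.0 is an arbitrary toy value chosen so that the neuron actually spikes):

def lif_step(v, input_current, dt=1e-3, tau_m=20e-3, v_thresh=1.0):
    # One Euler step: the membrane potential v leaks toward rest,
    # integrates the input, spikes at threshold, and resets to zero.
    v = v + dt * (-v / tau_m + input_current)
    if v >= v_thresh:
        return 0.0, True
    return v, False

v, spikes = 0.0, []
for count in [0, 3, 5, 2, 0, 4]:  # event counts per time step
    v, fired = lif_step(v, input_current=count * 200.0)
    spikes.append(fired)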

Citing Articles

Visual Localization Domain for Accurate V-SLAM from Stereo Cameras.

Di Salvo E, Bellucci S, Celidonio V, Rossini I, Colonnese S, Cattai T. Sensors (Basel). 2025; 25(3).

PMID: 39943378. PMC: 11821077. DOI: 10.3390/s25030739.
