
Self-Supervised Feature Detection and 3D Reconstruction for Real-Time Neuroendoscopic Guidance

Overview
Date 2025 Mar 3
PMID 40031732
Abstract

Objective: The transventricular approach to deep-brain targets offers direct visualization but also imparts tissue deformation that challenges accurate neuronavigation. 3D reconstruction and registration of the endoscopic view could provide up-to-date, real-time guidance. We develop and evaluate a self-supervised feature detection method for 3D reconstruction and navigation in neuroendoscopy.

Methods: Unlabeled neuroendoscopic video data from 15 clinical cases, yielding 11,527 video frames, were used to train a self-supervised learning method (R2D2-E) with 5-fold cross-validation, integrated into a simultaneous localization and mapping (SLAM) pipeline for 3D reconstruction. A series of experiments guided nominal hyperparameter selection and evaluated performance in comparison to SIFT, SURF and SuperPoint in terms of the accuracy of feature matching and 3D reconstruction.
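A SLAM pipeline of this kind depends on matching detected features between frames. The abstract does not describe the R2D2-E matcher itself, so as an illustrative sketch only (the descriptor dimensionality and ratio threshold below are assumptions, not details from the paper), putative matches between two frames' descriptors can be found by nearest-neighbor search with Lowe's ratio test:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbor descriptor matching with Lowe's ratio test.

    desc_a: (N, D) array of descriptors from frame A
    desc_b: (M, D) array of descriptors from frame B (M >= 2)
    Returns a list of (i, j) index pairs of putative matches.
    """
    # Pairwise Euclidean distances between all descriptor pairs.
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    matches = []
    for i in range(d.shape[0]):
        order = np.argsort(d[i])
        best, second = order[0], order[1]
        # Accept only if the best match is clearly better than the runner-up,
        # which suppresses ambiguous matches on repetitive texture.
        if d[i, best] < ratio * d[i, second]:
            matches.append((i, int(best)))
    return matches
```

In practice a learned detector such as R2D2-E would supply the descriptors, and the surviving matches would feed the SLAM pose estimation and mapping stages.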

Results: R2D2-E demonstrated superior performance in feature matching and 3D reconstruction. R2D2-E features achieved a median projected error of 0.64 mm, compared to 0.90 mm, 0.99 mm and 0.83 mm for SIFT, SURF and SuperPoint, respectively. The method also improved F1 score by 14%, 25% and 22% relative to SIFT, SURF and SuperPoint, respectively.
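Both reported metrics are standard and can be computed from matched correspondences. A minimal sketch of their definitions (the intrinsic matrix and point values here are illustrative placeholders, not data from the study):

```python
import numpy as np

def median_projection_error(K, pts3d, pts2d):
    """Median distance between projected 3D points and observed 2D points.

    K: (3, 3) camera intrinsic matrix
    pts3d: (N, 3) reconstructed points in the camera frame
    pts2d: (N, 2) observed pixel coordinates
    """
    proj = (K @ pts3d.T).T            # project into homogeneous image coords
    proj = proj[:, :2] / proj[:, 2:]  # perspective divide
    return float(np.median(np.linalg.norm(proj - pts2d, axis=1)))

def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall over match counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

Note that the study reports errors in millimeters, which implies the pixel-space residuals are scaled into metric units via the reconstruction; that scaling step is omitted in this sketch.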

Conclusion: The proposed feature detection approach enables accurate, real-time 3D reconstruction in neuroendoscopy, offering robust feature detection in the presence of endoscopic artifacts and providing up-to-date navigation following soft-tissue deformation.

Significance: The self-supervised feature detection method advances capabilities for vision-based guidance and augmented visualization of target structures in neuroendoscopic procedures. The approach could enhance the accuracy and precision of neurosurgery to improve patient outcomes.