
Automatic Tip Detection of Surgical Instruments in Biportal Endoscopic Spine Surgery

Overview
Journal Comput Biol Med
Publisher Elsevier
Date 2021 Apr 17
PMID 33864974
Citations 8
Abstract

Background: Recent advances in robotics and deep learning can be applied to endoscopic surgery and offer numerous advantages by freeing one of the surgeon's hands. This study aims to automatically detect the tip of the surgical instrument, localize it as a point, and evaluate the detection accuracy in biportal endoscopic spine surgery (BESS). This tip-detection work could serve as a preliminary step toward vision intelligence in robotic endoscopy.

Methods: The dataset contains 2310 frames from 9 BESS videos, with the x and y coordinates of the tip annotated by an expert. We trained two state-of-the-art detectors, RetinaNet and YOLOv2, on bounding boxes centered on the tip annotations at several margin sizes, to determine the optimal margin size for detecting the instrument tip and localizing the point. To compare models trained with different margin sizes, we calculated recall, precision, and F1-score using a fixed evaluation box size applied to both the ground-truth tip coordinates and the predicted box midpoints.
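The evaluation protocol described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the function names, the one-tip-per-frame matching, and the evaluation box size are assumptions; the paper only states that a fixed box size is applied to both ground-truth tips and predicted midpoints.

```python
# Illustrative sketch (not the paper's implementation): a square training
# box of side `margin` is centred on each annotated tip, and a prediction
# counts as a true positive when its box midpoint falls inside a
# fixed-size evaluation box around the ground-truth tip.

def box_around(tip, margin):
    """Square box of side `margin` centred on tip = (x, y)."""
    x, y = tip
    half = margin / 2
    return (x - half, y - half, x + half, y + half)

def midpoint(box):
    x0, y0, x1, y1 = box
    return ((x0 + x1) / 2, (y0 + y1) / 2)

def evaluate(gt_tips, pred_boxes, eval_box=50):
    """Recall/precision/F1 with a fixed evaluation box (size assumed)."""
    half = eval_box / 2
    tp, matched = 0, set()
    for px, py in map(midpoint, pred_boxes):
        for i, (gx, gy) in enumerate(gt_tips):
            # Midpoint lies inside the evaluation box around this tip.
            if i not in matched and abs(px - gx) <= half and abs(py - gy) <= half:
                tp += 1
                matched.add(i)
                break
    fp = len(pred_boxes) - tp
    fn = len(gt_tips) - tp
    precision = tp / (tp + fp) if pred_boxes else 0.0
    recall = tp / (tp + fn) if gt_tips else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return recall, precision, f1
```

A margin of 150 pixels would thus produce 150 × 150 training boxes around each annotated tip, while the evaluation box controls how close a predicted midpoint must be to count as a hit.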

Results: For RetinaNet, a margin size of 150 pixels was optimal, with a recall of 1.000, precision of 0.733, and F1-score of 0.846. For YOLOv2, a margin size of 150 pixels was also optimal, with a recall of 0.864, precision of 0.808, and F1-score of 0.835. The optimal 150-pixel margin for RetinaNet was then used in cross-validation to assess its overall robustness; the resulting mean recall, precision, and F1-score were 1.000 ± 0.000, 0.767 ± 0.033, and 0.868 ± 0.022, respectively.
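The reported F1-scores follow directly from the harmonic mean of precision and recall; as a quick sanity check on the figures above:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reported values reproduce to three decimals.
retinanet_f1 = round(f1(0.733, 1.000), 3)  # -> 0.846
yolov2_f1 = round(f1(0.808, 0.864), 3)     # -> 0.835
```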

Conclusions: In this study, we evaluated an automatic tip-detection method for surgical instruments in endoscopic surgery, compared two state-of-the-art detection algorithms, RetinaNet and YOLOv2, and validated the robustness with cross-validation. This method can be applied to tip detection in other types of endoscopy.

Citing Articles

Deep learning-based multimodal integration of imaging and clinical data for predicting surgical approach in percutaneous transforaminal endoscopic discectomy.

Xu Y, Liu S, Tian Q, Kou Z, Li W, Xie X Eur Spine J. 2025.

PMID: 39920320 DOI: 10.1007/s00586-025-08668-5.


Deep Learning in Spinal Endoscopy: U-Net Models for Neural Tissue Detection.

Lee H, Rhee W, Chang S, Chang B, Kim H Bioengineering (Basel). 2024; 11(11).

PMID: 39593742 PMC: 11591488. DOI: 10.3390/bioengineering11111082.


A Multi-Element Identification System Based on Deep Learning for the Visual Field of Percutaneous Endoscopic Spine Surgery.

Bu J, Lei Y, Wang Y, Zhao J, Huang S, Liang J Indian J Orthop. 2024; 58(5):587-597.

PMID: 38694692 PMC: 11058141. DOI: 10.1007/s43465-024-01134-2.


Assessment of Automated Identification of Phases in Videos of Total Hip Arthroplasty Using Deep Learning Techniques.

Kang Y, Kim S, Seo S, Lee S, Kim H, Yoo J Clin Orthop Surg. 2024; 16(2):210-216.

PMID: 38562629 PMC: 10973629. DOI: 10.4055/cios23280.


Minimizing Tissue Injury and Incisions in Multilevel Biportal Endoscopic Spine Surgery: Technical Note and Preliminary Results.

Kim S Medicina (Kaunas). 2024; 60(3).

PMID: 38541240 PMC: 10971946. DOI: 10.3390/medicina60030514.