
CAI4CAI: The Rise of Contextual Artificial Intelligence in Computer Assisted Interventions

Overview
Date: 2020 Jan 11
PMID: 31920208
Citations: 19
Abstract

Data-driven computational approaches have evolved to enable extraction of information from medical images with a reliability, accuracy and speed that are already transforming their interpretation and exploitation in clinical practice. While similar benefits are longed for in the field of interventional imaging, this ambition is challenged by a much higher heterogeneity. Clinical workflows within interventional suites and operating theatres are extremely complex and typically rely on poorly integrated intra-operative devices, sensors, and support infrastructures. Taking stock of some of the most exciting developments in machine learning and artificial intelligence for computer assisted interventions, we highlight the crucial need to take context and human factors into account in order to address these challenges. Contextual artificial intelligence for computer assisted intervention, or CAI4CAI, arises as an emerging opportunity feeding into the broader field of surgical data science. Central challenges being addressed in CAI4CAI include how to integrate the ensemble of prior knowledge and instantaneous sensory information from experts, sensors and actuators; how to create and communicate a faithful and actionable shared representation of the surgery among a mixed human-AI actor team; and how to design interventional systems and associated cognitive shared control schemes for online uncertainty-aware collaborative decision making, ultimately producing more precise and reliable interventions.
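One of the central challenges named in the abstract, online uncertainty-aware decision making, is often prototyped with approximate Bayesian techniques such as Monte Carlo dropout. The sketch below is illustrative only and is not the paper's method: the PhaseClassifier architecture, the 512-dimensional frame features, and the seven-phase label set (e.g. as in the Cholec80 benchmark) are all assumptions introduced for this example.

# Illustrative sketch only: uncertainty-aware surgical phase prediction
# via Monte Carlo dropout. All names and dimensions below are
# hypothetical placeholders, not the CAI4CAI authors' implementation.
import torch
import torch.nn as nn

N_PHASES = 7  # assumption: a seven-phase label set, e.g. Cholec80

class PhaseClassifier(nn.Module):
    """Toy per-frame phase classifier with a dropout layer."""
    def __init__(self, in_features: int = 512, n_phases: int = N_PHASES):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, 256),
            nn.ReLU(),
            nn.Dropout(p=0.5),  # kept active at test time for MC sampling
            nn.Linear(256, n_phases),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

@torch.no_grad()
def mc_dropout_predict(model: nn.Module, features: torch.Tensor,
                       n_samples: int = 20):
    """Return mean class probabilities and per-frame predictive entropy."""
    model.train()  # enable dropout; in practice, freeze batch-norm separately
    probs = torch.stack([
        torch.softmax(model(features), dim=-1) for _ in range(n_samples)
    ])                                 # (n_samples, batch, n_phases)
    mean_probs = probs.mean(dim=0)     # (batch, n_phases)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy

model = PhaseClassifier()
frame_features = torch.randn(4, 512)   # stand-in for features of 4 video frames
mean_probs, entropy = mc_dropout_predict(model, frame_features)
print(mean_probs.argmax(dim=-1), entropy)

In a cognitive shared control scheme of the kind the abstract describes, frames with high predictive entropy are exactly those the system would route back to the human members of the team rather than act on autonomously.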

Citing Articles

LoViT: Long Video Transformer for surgical phase recognition.

Liu Y, Boels M, Garcia-Peraza-Herrera L, Vercauteren T, Dasgupta P, Granados A. Med Image Anal. 2024; 99:103366.

PMID: 39418831. PMC: 11876726. DOI: 10.1016/j.media.2024.103366.


Artificial intelligence automated surgical phases recognition in intraoperative videos of laparoscopic pancreatoduodenectomy.

You J, Cai H, Wang Y, Bian A, Cheng K, Meng L. Surg Endosc. 2024; 38(9):4894-4905.

PMID: 38958719. DOI: 10.1007/s00464-024-10916-6.


SDA-CLIP: surgical visual domain adaptation using video and text labels.

Li Y, Jia S, Song G, Wang P, Jia F. Quant Imaging Med Surg. 2023; 13(10):6989-7001.

PMID: 37869278. PMC: 10585553. DOI: 10.21037/qims-23-376.


Procedural Software Toolkit in the Armamentarium of Interventional Therapies: A Review of Additive Usefulness and Current Evidence.

Al-Sharydah A, BinShaiq F, Aloraifi R, Almefleh A, Alessa S, Alobud A. Diagnostics (Basel). 2023; 13(4).

PMID: 36832254. PMC: 9955934. DOI: 10.3390/diagnostics13040765.


Translation of Medical AR Research into Clinical Practice.

Seibold M, Spirig J, Esfandiari H, Farshad M, Furnstahl P. J Imaging. 2023; 9(2).

PMID: 36826963. PMC: 9961816. DOI: 10.3390/jimaging9020044.


References
1. Zia A, Sharma Y, Bettadapura V, Sarin E, Essa I. Video and accelerometer-based motion analysis for automated surgical skills assessment. Int J Comput Assist Radiol Surg. 2018; 13(3):443-455. DOI: 10.1007/s11548-018-1704-z.

2. Lalys F, Jannin P. Surgical process modelling: a review. Int J Comput Assist Radiol Surg. 2013; 9(3):495-511. DOI: 10.1007/s11548-013-0940-5.

3. Sarikaya D, Corso J, Guru K. Detection and Localization of Robotic Tools in Robot-Assisted Surgery Videos Using Deep Neural Networks for Region Proposal and Detection. IEEE Trans Med Imaging. 2017; 36(7):1542-1549. DOI: 10.1109/TMI.2017.2665671.

4. Issenhuth T, Srivastav V, Gangi A, Padoy N. Face detection in the operating room: comparison of state-of-the-art methods and a self-supervised approach. Int J Comput Assist Radiol Surg. 2019; 14(6):1049-1058. DOI: 10.1007/s11548-019-01944-y.

5. Boudissa M, Courvoisier A, Chabanas M, Tonetti J. Computer assisted surgery in preoperative planning of acetabular fracture surgery: state of the art. Expert Rev Med Devices. 2017; 15(1):81-89. DOI: 10.1080/17434440.2017.1413347.