
A Hybrid Deep Learning Approach for Spatial Trigger Extraction from Radiology Reports

Overview
Date: 2020 Dec 18
PMID: 33336212
Citations: 5
Abstract

Radiology reports contain important clinical information about patients, which is often tied together through spatial expressions. Spatial expressions (or triggers) are mainly used to describe the positioning of radiographic findings or medical devices with respect to some anatomical structures. Because these expressions result from the mental visualization of the radiologist's interpretations, they are varied and complex. The focus of this work is to automatically identify spatial expression terms from three different radiology sub-domains. We propose a hybrid deep learning-based NLP method that includes: 1) generating a set of candidate spatial triggers by exact match with the known trigger terms from the training data, 2) applying domain-specific constraints to filter the candidate triggers, and 3) utilizing a BERT-based classifier to predict whether a candidate trigger is a true spatial trigger. The results are promising, with an improvement of 24 points in average F1 compared to a standard BERT-based sequence labeler.
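The three-step pipeline described in the abstract can be sketched in pure Python. This is a minimal illustration, not the paper's implementation: the trigger lexicon, the sentence-final constraint, and the `bert_classify` stub are all hypothetical stand-ins (a real system would build the lexicon from annotated training data and use a fine-tuned BERT binary classifier for step 3).

```python
# Step 1 lexicon: a few example spatial trigger terms. In the paper this set
# is harvested from training-data annotations; these entries are illustrative.
KNOWN_TRIGGERS = {"in", "at", "within", "along", "overlying", "adjacent to"}

def generate_candidates(tokens, lexicon):
    """Step 1: exact-match lookup of known trigger terms (bigrams, then unigrams)."""
    candidates = []
    for i in range(len(tokens)):
        for span in (2, 1):  # prefer the longer match at each position
            phrase = " ".join(tokens[i:i + span]).lower()
            if phrase in lexicon:
                candidates.append((i, i + span, phrase))
                break
    return candidates

def apply_constraints(candidates, tokens):
    """Step 2: one illustrative domain constraint -- drop a candidate that is
    sentence-final, since no anatomical argument can follow it."""
    return [c for c in candidates if c[1] < len(tokens)]

def bert_classify(candidate, tokens):
    """Step 3 placeholder: a fine-tuned BERT classifier would score the
    candidate in its sentence context; here every candidate is accepted."""
    return True

tokens = "Small effusion in the left lower lobe".lower().split()
candidates = apply_constraints(generate_candidates(tokens, KNOWN_TRIGGERS), tokens)
triggers = [c for c in candidates if bert_classify(c, tokens)]
```

Running this on the sample sentence yields a single trigger span, `(2, 3, "in")`, covering the token "in" that relates the finding ("effusion") to the anatomy ("left lower lobe").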

Citing Articles

A scoping review of large language model based approaches for information extraction from radiology reports.

Reichenpfader D, Muller H, Denecke K. NPJ Digit Med. 2024; 7(1):222.

PMID: 39182008 PMC: 11344824. DOI: 10.1038/s41746-024-01219-0.


Advancing medical imaging with language models: featuring a spotlight on ChatGPT.

Hu M, Qian J, Pan S, Li Y, Qiu R, Yang X. Phys Med Biol. 2024; 69(10).

PMID: 38537293 PMC: 11075180. DOI: 10.1088/1361-6560/ad387d.


Weakly supervised spatial relation extraction from radiology reports.

Datta S, Roberts K. JAMIA Open. 2023; 6(2):ooad027.

PMID: 37096148 PMC: 10122604. DOI: 10.1093/jamiaopen/ooad027.


Application of a Domain-specific BERT for Detection of Speech Recognition Errors in Radiology Reports.

Chaudhari G, Liu T, Chen T, Joseph G, Vella M, Lee Y. Radiol Artif Intell. 2022; 4(4):e210185.

PMID: 35923373 PMC: 9344210. DOI: 10.1148/ryai.210185.


Fine-grained spatial information extraction in radiology as two-turn question answering.

Datta S, Roberts K. Int J Med Inform. 2021; 158:104628.

PMID: 34839119 PMC: 9072592. DOI: 10.1016/j.ijmedinf.2021.104628.

References
1.
Rink B, Roberts K, Harabagiu S, Scheuermann R, Toomay S, Browning T. Extracting actionable findings of appendicitis from radiology reports using natural language processing. AMIA Jt Summits Transl Sci Proc. 2013; 2013:221. PMC: 3845763.

2.
Li X, Fu C, Zhong R, Zhong D, He T, Jiang X. A hybrid deep learning framework for bacterial named entity recognition with domain features. BMC Bioinformatics. 2019; 20(Suppl 16):583. PMC: 6886245. DOI: 10.1186/s12859-019-3071-3.

3.
Roberts K, Rink B, Harabagiu S, Scheuermann R, Toomay S, Browning T. A machine learning approach for identifying anatomical locations of actionable findings in radiology reports. AMIA Annu Symp Proc. 2013; 2012:779-88. PMC: 3540484.

4.
Datta S, Roberts K. A dataset of chest X-ray reports annotated with Spatial Role Labeling annotations. Data Brief. 2020; 32:106056. PMC: 7451761. DOI: 10.1016/j.dib.2020.106056.

5.
Datta S, Si Y, Rodriguez L, Shooshan S, Demner-Fushman D, Roberts K. Understanding spatial language in radiology: Representation framework, annotation, and spatial relation extraction from chest X-ray reports using deep learning. J Biomed Inform. 2020; 108:103473. PMC: 7807990. DOI: 10.1016/j.jbi.2020.103473.