PMID: 36932023

Artificial Intelligence-based Analysis of Whole-body Bone Scintigraphy: The Quest for the Optimal Deep Learning Algorithm and Comparison with Human Observer Performance

Abstract

Purpose: Whole-body bone scintigraphy (WBS) is one of the most widely used modalities for diagnosing malignant bone disease at an early stage. However, the procedure is time-consuming and requires rigour and experience. Moreover, interpreting WBS scans in the early stages of disease can be challenging because the patterns often resemble a normal appearance and are therefore prone to subjective interpretation. To simplify the gruelling, subjective, and error-prone task of interpreting WBS scans, we developed deep learning (DL) models to automate two major analyses, namely (i) classification of scans as normal or abnormal and (ii) discrimination between malignant and non-neoplastic bone disease, and compared their performance with that of human observers.

Materials And Methods: After applying our exclusion criteria to 7188 patients from three different centers, 3772 and 2248 patients were enrolled for the first and second analyses, respectively. Data were split into training and test sets, with a fraction of the training data held out for validation. Ten different CNN models were applied to single- and dual-view inputs (posterior and anterior views) to find the optimal model for each analysis. In addition, three different methods, namely squeeze-and-excitation (SE), spatial pyramid pooling (SPP), and attention-augmented (AA), were used to aggregate the features of the dual-view input models. Model performance was reported as area under the receiver operating characteristic (ROC) curve (AUC), accuracy, sensitivity, and specificity, and ROC curves were compared with the DeLong test. The test dataset was also read by three nuclear medicine physicians (NMPs) with different levels of experience in order to compare the performance of the AI models with that of human observers.
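The paper itself does not publish code, so the following is only an illustrative sketch of what SE-style dual-view feature aggregation could look like: feature vectors from the anterior and posterior views are concatenated, then rescaled channel-wise by a small gating network (squeeze, two dense layers, sigmoid). The weight matrices `w1` and `w2` here are random stand-ins for learned parameters, and the reduction ratio of 4 is an assumption, not a value from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def se_aggregate(anterior_feat, posterior_feat, w1, w2):
    """Squeeze-and-excitation style fusion of two view feature vectors.

    The concatenated features are reweighted by a channel gate produced
    by two dense layers (ReLU then sigmoid), so each channel is scaled
    by a value in (0, 1)."""
    z = np.concatenate([anterior_feat, posterior_feat])  # (2C,)
    s = np.maximum(w1 @ z, 0.0)                          # bottleneck, ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ s)))               # sigmoid gate in (0, 1)
    return z * gate                                      # channel-wise rescaling

C = 8                                       # per-view feature size (toy value)
ant = rng.standard_normal(C)                # anterior-view features
post = rng.standard_normal(C)               # posterior-view features
w1 = rng.standard_normal((2 * C // 4, 2 * C))   # squeeze: 2C -> 2C/4
w2 = rng.standard_normal((2 * C, 2 * C // 4))   # excite: 2C/4 -> 2C
fused = se_aggregate(ant, post, w1, w2)
print(fused.shape)  # (16,)
```

In a real model the gate would be learned jointly with the CNN backbones; the point of the sketch is only that SE fusion preserves the concatenated feature dimensionality while attenuating less informative channels.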

Results: DenseNet121_AA (DenseNet121 with dual-view input aggregated by AA) and InceptionResNetV2_SPP achieved the highest performance (AUC = 0.72) for the first and second analyses, respectively. Moreover, on average, in the first analysis, the InceptionV3 and InceptionResNetV2 CNN models and dual-view input with the AA aggregation method performed best. In the second analysis, DenseNet121 and InceptionResNetV2 as CNN backbones and dual-view input with the AA aggregation method achieved the best results. The performance of the AI models was significantly higher than that of the human observers in the first analysis, whereas performance was comparable in the second analysis, although the AI models assessed the scans in drastically less time.
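To make the reported metrics concrete, here is a minimal, self-contained sketch (not the authors' evaluation code) of how AUC, sensitivity, and specificity are computed from labels and scores. The AUC is computed via the Mann-Whitney statistic, i.e. the probability that a random positive case is scored above a random negative one, which is also the quantity underlying the DeLong comparison of ROC curves; the 0.5 decision threshold and the toy data are assumptions for illustration.

```python
import numpy as np

def auc_mann_whitney(y_true, scores):
    """AUC as the probability that a random positive outranks a random
    negative; ties between a positive and a negative score count half."""
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def sens_spec(y_true, scores, threshold=0.5):
    """Sensitivity (TP rate) and specificity (TN rate) at a fixed threshold."""
    pred = (scores >= threshold).astype(int)
    tp = np.sum((pred == 1) & (y_true == 1))
    fn = np.sum((pred == 0) & (y_true == 1))
    tn = np.sum((pred == 0) & (y_true == 0))
    fp = np.sum((pred == 1) & (y_true == 0))
    return tp / (tp + fn), tn / (tn + fp)

# Toy labels and model scores, purely illustrative.
y = np.array([0, 0, 1, 1, 1, 0])
s = np.array([0.1, 0.4, 0.35, 0.8, 0.9, 0.2])
sens, spec = sens_spec(y, s)
print(f"AUC={auc_mann_whitney(y, s):.3f} sens={sens:.2f} spec={spec:.2f}")
```

Note that AUC is threshold-free, whereas sensitivity and specificity depend on the chosen operating point, which is why the paper reports all of them together.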

Conclusion: Using the models designed in this study, a positive step can be taken toward improving and optimizing WBS interpretation. By training DL models with larger and more diverse cohorts, AI could potentially be used to assist physicians in the assessment of WBS images.

Citing Articles

Automatic detecting multiple bone metastases in breast cancer using deep learning based on low-resolution bone scan images.

Shi J, Zhang R, Yang Z, Chen Z, Hao Z, Huo L. Sci Rep. 2025; 15(1):7876.

PMID: 40050676. PMC: 11885835. DOI: 10.1038/s41598-025-92594-5.


Artificial intelligence-powered coronary artery disease diagnosis from SPECT myocardial perfusion imaging: a comprehensive deep learning study.

Hajianfar G, Gharibi O, Sabouri M, Mohebi M, Amini M, Yasemi M. Eur J Nucl Med Mol Imaging. 2025.

PMID: 39976703. DOI: 10.1007/s00259-025-07145-x.


Artificial intelligence-based cardiac transthyretin amyloidosis detection and scoring in scintigraphy imaging: multi-tracer, multi-scanner, and multi-center development and evaluation study.

Salimi Y, Shiri I, Mansouri Z, Sanaat A, Hajianfar G, Hervier E. Eur J Nucl Med Mol Imaging. 2025.

PMID: 39907796. DOI: 10.1007/s00259-025-07117-1.


Deep learning automatically distinguishes myocarditis patients from normal subjects based on MRI.

Hatfaludi C, Rosca A, Popescu A, Chitiboi T, Sharma P, Benedek T. Int J Cardiovasc Imaging. 2024; 40(12):2617-2629.

PMID: 39509018. PMC: 11618149. DOI: 10.1007/s10554-024-03284-8.


Deep learning model using planar whole-body bone scintigraphy for diagnosis of skull base invasion in patients with nasopharyngeal carcinoma.

Mu X, Ge Z, Lu D, Li T, Liu L, Chen C. J Cancer Res Clin Oncol. 2024; 150(10):449.

PMID: 39379746. PMC: 11461747. DOI: 10.1007/s00432-024-05969-y.

