Opening the Black Box: Challenges and Opportunities Regarding Interpretability of Artificial Intelligence in Emergency Medicine
References
1. Graziani M, Dutkiewicz L, Calvaresi D, Amorim J, Yordanova K, Vered M. A global taxonomy of interpretable AI: unifying the terminology for the technical and social sciences. Artif Intell Rev. 2022;56(4):3473-3504. PMC: 9446618. DOI: 10.1007/s10462-022-10256-8.
2. Avati A, Jung K, Harman S, Downing L, Ng A, Shah N. Improving palliative care with deep learning. BMC Med Inform Decis Mak. 2018;18(Suppl 4):122. PMC: 6290509. DOI: 10.1186/s12911-018-0677-8.
3. de Hond A, Leeuwenberg A, Hooft L, Kant I, Nijman S, van Os H. Guidelines and quality criteria for artificial intelligence-based prediction models in healthcare: a scoping review. NPJ Digit Med. 2022;5(1):2. PMC: 8748878. DOI: 10.1038/s41746-021-00549-7.
4. Lin C, Liu W, Tsai D, Lou Y, Chang C, Lee C. AI-enabled electrocardiography alert intervention and all-cause mortality: a pragmatic randomized clinical trial. Nat Med. 2024;30(5):1461-1470. DOI: 10.1038/s41591-024-02961-4.
5. Tsai D, Lou Y, Lin C, Fang W, Lee C, Ho C. Mortality risk prediction of the electrocardiogram as an informative indicator of cardiovascular diseases. Digit Health. 2023;9:20552076231187247. PMC: 10336769. DOI: 10.1177/20552076231187247.