
Security and Privacy in Machine Learning for Health Systems: Strategies and Challenges

Overview
Publisher Thieme
Date 2023 Dec 26
PMID 38147869
Abstract

Objectives: Machine learning (ML) is a powerful asset to support physicians in decision-making, providing timely answers. However, ML systems for health can suffer from security attacks and privacy violations. This paper investigates studies of security and privacy in ML for health.

Methods: We examine attacks, defenses, and privacy-preserving strategies, and discuss their challenges. Our research protocol was as follows: starting with a manual search, defining the search string, removing duplicate papers, filtering papers first by title and abstract and then by full text, and analyzing their contributions, including strategies and challenges. In total, we collected and discussed 40 papers on attacks, defenses, and privacy.

Results: Our findings identified the most employed strategies in each domain. We found trends in attacks, including universal adversarial perturbations (UAPs), generative adversarial network (GAN)-based attacks, and DeepFakes used to generate malicious examples. Trends in defense are adversarial training, GAN-based strategies, and out-of-distribution (OOD) detection to identify and mitigate adversarial examples (AEs). We found privacy-preserving strategies such as federated learning (FL), differential privacy, and combinations of strategies that enhance FL. Open challenges comprise attacks that bypass fine-tuning, defenses that calibrate models to improve their robustness, and privacy methods that further strengthen the FL strategy.
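To illustrate the adversarial-example mechanism underlying the attack trends named above, the sketch below applies the classic Fast Gradient Sign Method (FGSM) to a simple logistic-regression classifier. This is a generic illustration with made-up weights, assumed for demonstration only; it is not a method or model taken from the reviewed studies.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method against a logistic-regression classifier.

    Perturbs input x by eps in the direction of the sign of the gradient of
    the cross-entropy loss with respect to x, pushing the model away from
    the true label y (0 or 1).
    """
    p = sigmoid(np.dot(w, x) + b)   # predicted probability of class 1
    grad_x = (p - y) * w            # gradient of cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)

# Hypothetical classifier and input: x is correctly classified as class 1,
# but a small eps-bounded perturbation flips the prediction.
w = np.array([2.0, -1.0])
x = np.array([1.0, 1.0])
x_adv = fgsm(x, y=1, w=w, b=0.0, eps=0.5)
```

A medical-imaging analogue replaces the linear model's gradient with the gradient of a deep network's loss with respect to the input pixels; the perturbation budget eps is what keeps the change imperceptible.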

Conclusions: It is critical to explore security and privacy in ML for health, because risks have grown and vulnerabilities remain open. Our study presents strategies and challenges to guide research into security and privacy issues in ML applied to health systems.

Citing Articles

Informatics for One Health.

Fultz Hollis K, Mougin F, Soualmia L. Yearb Med Inform. 2024; 32(1):2-6.

PMID: 38575142; PMC: 10994713; DOI: 10.1055/s-0043-1768757.


Machine and Deep Learning Dominate Recent Innovations in Sensors, Signals and Imaging Informatics.

Baumgartner C, Rittner L, Deserno T. Yearb Med Inform. 2023; 32(1):282-285.

PMID: 38147870; PMC: 10751153; DOI: 10.1055/s-0043-1768743.
