
Volumetric Imitation Generative Adversarial Networks for Anatomical Human Body Modeling

Overview
Date 2024 Feb 23
PMID 38391649
Abstract

Volumetric representation is a technique for expressing 3D objects in various fields, including medical applications. However, the tomography images needed to reconstruct volumetric data see limited use because they contain personal information. Existing GAN-based medical image generation techniques can produce virtual tomographic images for volume reconstruction while preserving patient privacy. Nevertheless, these images often do not account for vertical correlations between adjacent slices, leading to erroneous 3D reconstructions. Furthermore, existing volume generation techniques tend to focus on surface modeling, making it difficult to represent internal anatomical features accurately. This paper proposes the volumetric imitation GAN (VI-GAN), which imitates a human anatomical model to generate volumetric data. The model's primary goal is to capture the attributes and 3D structure of the human anatomical model, including its external shape, internal slices, and the relationships between vertical slices. The proposed network consists of a generator, which performs feature extraction and up-sampling using a 3D U-Net and ResNet structure together with a 3D-convolution-based local feature fusion block (LFFB), and a discriminator, which uses 3D convolution to evaluate the authenticity of the generated volume against the ground truth. VI-GAN also introduces a reconstruction loss, comprising feature and similarity losses, to make the generated volumetric data converge to the human anatomical model. In the experiments, CT data from 234 people were used to assess the reliability of the results. Under volume evaluation metrics measuring similarity, VI-GAN generated volumes that represented the human anatomical model more realistically than existing volume generation methods.
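The abstract describes a reconstruction loss combining a feature loss and a similarity loss. The paper's exact formulations are not given here, so the following is only a minimal NumPy sketch under assumed forms: an L1 distance between intermediate feature maps for the feature term, and a normalized cross-correlation term for voxel-wise similarity. The function names, the NCC choice, and the weighting factor `lam` are all illustrative assumptions, not the authors' definitions.

```python
import numpy as np

def feature_loss(real_feats, fake_feats):
    # Mean L1 distance over pairs of intermediate feature maps
    # (hypothetical form of the paper's feature loss).
    return float(np.mean([np.mean(np.abs(r - f))
                          for r, f in zip(real_feats, fake_feats)]))

def similarity_loss(real_vol, fake_vol):
    # Voxel-wise similarity via normalized cross-correlation (assumed
    # metric); identical volumes give a loss near 0, anti-correlated
    # volumes give a loss near 2.
    r = real_vol - real_vol.mean()
    f = fake_vol - fake_vol.mean()
    denom = np.sqrt((r ** 2).sum() * (f ** 2).sum()) + 1e-8
    ncc = float((r * f).sum() / denom)
    return 1.0 - ncc

def reconstruction_loss(real_vol, fake_vol, real_feats, fake_feats, lam=0.5):
    # Composite loss: feature term plus weighted similarity term.
    return feature_loss(real_feats, fake_feats) + lam * similarity_loss(real_vol, fake_vol)
```

In a training loop this scalar would be added to the adversarial loss so the generator is pulled toward the anatomical ground truth rather than merely fooling the discriminator; the relative weight `lam` would be tuned per dataset.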
