
DDCNN-F: Double Decker Convolutional Neural Network 'F' Feature Fusion As a Medical Image Classification Framework

Overview
Journal: Sci Rep
Specialty: Science
Date: 2024 Jan 5
PMID: 38182607
Abstract

Melanoma is a severe skin cancer that involves abnormal cell development. This study aims to provide a new feature fusion framework for melanoma classification that includes a novel 'F' Flag feature for early detection. This novel 'F' indicator efficiently distinguishes benign skin lesions from malignant ones known as melanoma. The article proposes an architecture built as a Double Decker Convolutional Neural Network with feature fusion, called DDCNN-F. The network's first deck, a Convolutional Neural Network (CNN), identifies difficult-to-classify hairy images using a confidence factor termed the intra-class variance score. These hirsute image samples are combined to form a Baseline Separated Channel (BSC). The BSC is prepared for analysis by removing hair and applying data augmentation techniques. The network's second deck is trained on the pre-processed BSC and generates bottleneck features. The bottleneck features are merged with features derived from the ABCDE clinical bio-indicators to improve classification accuracy. The resulting hybrid fused features, together with the novel 'F' Flag feature, are fed to different types of classifiers. The proposed system was trained on the ISIC 2019 and ISIC 2020 datasets to assess its performance. The empirical findings show that the DDCNN feature fusion strategy for detecting malignant melanoma achieved a specificity of 98.4%, accuracy of 93.75%, precision of 98.56%, and Area Under Curve (AUC) value of 0.98. Owing to the DDCNN 'F' feature fusion framework, the proposed approach accurately identifies and diagnoses this fatal skin cancer and outperforms other state-of-the-art techniques. The research also found that several classifiers improved when utilising the 'F' indicator, with a specificity gain of up to 7.34%.
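As a rough illustration of the fusion step described in the abstract, the Python sketch below shows one plausible way to concatenate deck-two bottleneck features with the ABCDE clinical indicator scores and the binary 'F' flag before passing them to a conventional classifier. The array shapes, the fuse_features helper, and the choice of a random-forest classifier are illustrative assumptions only; the paper does not publish this implementation.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fuse_features(bottleneck, abcde, f_flag):
    # Concatenate deck-two CNN bottleneck features with the five ABCDE
    # clinical bio-indicator scores and the binary 'F' flag (late fusion).
    # bottleneck : (n_samples, n_bottleneck) array
    # abcde      : (n_samples, 5) array of Asymmetry, Border, Colour,
    #              Diameter, Evolution scores
    # f_flag     : (n_samples,) binary early-warning indicator
    return np.concatenate([bottleneck, abcde, f_flag.reshape(-1, 1)], axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 200
    bottleneck = rng.normal(size=(n, 128))   # placeholder bottleneck features
    abcde = rng.uniform(size=(n, 5))         # placeholder clinical indicator scores
    f_flag = rng.integers(0, 2, size=n)      # placeholder 'F' flag values
    labels = rng.integers(0, 2, size=n)      # 0 = benign, 1 = melanoma (synthetic)

    fused = fuse_features(bottleneck, abcde, f_flag)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(fused, labels)
    print("training accuracy:", clf.score(fused, labels))

Any classifier that accepts a flat feature vector (SVM, gradient boosting, a small MLP) could stand in for the random forest here; the point of the sketch is only the concatenation of learned and clinical features into one hybrid representation.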

Citing Articles

Skin Lesion Classification Through Test Time Augmentation and Explainable Artificial Intelligence.

Cino L, Distante C, Martella A, Mazzeo P. J Imaging. 2025; 11(1).

PMID: 39852328 PMC: 11766406. DOI: 10.3390/jimaging11010015.


Artificial Intelligence in Dermoscopy: Enhancing Diagnosis to Distinguish Benign and Malignant Skin Lesions.

Reddy S, Shaheed A, Patel R. Cureus. 2024; 16(2):e54656.

PMID: 38523958 PMC: 10959827. DOI: 10.7759/cureus.54656.
