
A Review on Multiscale-Deep-Learning Applications

Overview
Journal Sensors (Basel)
Publisher MDPI
Specialty Biotechnology
Date 2022 Oct 14
PMID 36236483
Abstract

In general, most existing convolutional neural network (CNN)-based deep-learning models suffer from spatial-information loss and inadequate feature representation. This is due to their inability to capture multiscale-context information and the loss of semantic information during pooling operations. In the early layers of a CNN, the network encodes simple semantic representations, such as edges and corners, while in the later layers it encodes more complex semantic features, such as complex geometric shapes. Theoretically, it is better for a CNN to extract features from different levels of semantic representation, because tasks such as classification and segmentation perform better when both simple and complex feature maps are utilized. Hence, it is also crucial to embed multiscale capability throughout the network so that the various scales of the features can be optimally captured to represent the intended task. Multiscale representation enables the network to fuse low-level and high-level features from a restricted receptive field to enhance deep-model performance. The main novelty of this review is its comprehensive taxonomy of multiscale-deep-learning methods, which details several architectures and their strengths as implemented in existing works. Predominantly, multiscale approaches in deep-learning networks can be classified into two categories: multiscale feature learning and multiscale feature fusion. Multiscale feature learning derives feature maps by applying kernels of several sizes to collect a wider range of relevant features and predict the spatial mapping of the input images. Multiscale feature fusion combines features of different resolutions to find patterns over short and long distances without requiring a deep network. Additionally, several examples of these techniques are discussed according to their applications in satellite imagery, medical imaging, agriculture, and industrial and manufacturing systems.
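As a rough illustration of these two categories, the sketch below is a minimal, hypothetical PyTorch example written for this summary, not code from the reviewed works; the module and function names are illustrative only. It applies kernels of several sizes in parallel and concatenates the results (multiscale feature learning), then upsamples a coarse, high-level feature map and concatenates it with a fine, low-level one (multiscale feature fusion).

# Minimal PyTorch sketch (hypothetical; not the review's reference code) of the
# two multiscale categories described in the abstract:
#   1) multiscale feature learning -- parallel convolutions with different kernel sizes
#   2) multiscale feature fusion   -- combining feature maps of different resolutions
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiscaleFeatureLearning(nn.Module):
    """Applies kernels of several sizes to the same input and concatenates the results."""

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        # Three parallel branches with increasing receptive fields; padding keeps
        # the spatial size identical so the outputs can be concatenated.
        self.branch1 = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        self.branch3 = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(in_channels, out_channels, kernel_size=5, padding=2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Each branch captures context at a different scale; stacking them along the
        # channel axis yields a richer representation than any single kernel size.
        return torch.cat([self.branch1(x), self.branch3(x), self.branch5(x)], dim=1)


def multiscale_feature_fusion(low_level: torch.Tensor, high_level: torch.Tensor) -> torch.Tensor:
    """Fuses a fine, low-level map with an upsampled coarse, high-level map."""
    upsampled = F.interpolate(high_level, size=low_level.shape[-2:],
                              mode="bilinear", align_corners=False)
    return torch.cat([low_level, upsampled], dim=1)


if __name__ == "__main__":
    image = torch.randn(1, 3, 64, 64)            # dummy RGB input
    learner = MultiscaleFeatureLearning(3, 16)
    low = learner(image)                         # fine, low-level features (48 channels)
    high = torch.randn(1, 32, 16, 16)            # stand-in for deeper, coarser features
    fused = multiscale_feature_fusion(low, high)
    print(fused.shape)                           # torch.Size([1, 80, 64, 64])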

Citing Articles

Advanced pathological subtype classification of thyroid cancer using efficientNetB0.

Guo H, Zhang J, Li Y, Pan X, Sun C. Diagn Pathol. 2025; 20(1):28.

PMID: 40055769 PMC: 11887243. DOI: 10.1186/s13000-025-01621-6.


Recent Advances in Deep Learning-Based Spatiotemporal Fusion Methods for Remote Sensing Images.

Lian Z, Zhan Y, Zhang W, Wang Z, Liu W, Huang X. Sensors (Basel). 2025; 25(4).

PMID: 40006322 PMC: 11859923. DOI: 10.3390/s25041093.


Comparing prediction accuracy for 30-day readmission following primary total knee arthroplasty: the ACS-NSQIP risk calculator versus a novel artificial neural network model.

Buddhiraju A, Shimizu M, Chen T, Seo H, Bacevich B, Xiao P. Knee Surg Relat Res. 2025; 37(1):3.

PMID: 39806502 PMC: 11727824. DOI: 10.1186/s43019-024-00256-z.


From multi-omics to predictive biomarker: AI in tumor microenvironment.

Hai L, Jiang Z, Zhang H, Sun Y. Front Immunol. 2025; 15:1514977.

PMID: 39763649 PMC: 11701166. DOI: 10.3389/fimmu.2024.1514977.


Neural network representations of multiphase Equations of State.

Kevrekidis G, Serino D, Kaltenborn M, Gammel J, Burby J, Klasky M. Sci Rep. 2024; 14(1):30288.

PMID: 39632913 PMC: 11618593. DOI: 10.1038/s41598-024-81445-4.


References
1.
Boulent J, Foucher S, Theau J, St-Charles P. Convolutional Neural Networks for the Automatic Identification of Plant Diseases. Front Plant Sci. 2019; 10:941. PMC: 6664047. DOI: 10.3389/fpls.2019.00941.

2.
Teng L, Li H, Karim S. DMCNN: A Deep Multiscale Convolutional Neural Network Model for Medical Image Segmentation. J Healthc Eng. 2020; 2019:8597606. PMC: 6948302. DOI: 10.1155/2019/8597606.

3.
Du X, Song Y, Liu Y, Zhang Y, Liu H, Chen B. An integrated deep learning framework for joint segmentation of blood pool and myocardium. Med Image Anal. 2020; 62:101685. DOI: 10.1016/j.media.2020.101685.

4.
Lu D, Popuri K, Ding G, Balachandar R, Beg M. Multimodal and Multiscale Deep Neural Networks for the Early Diagnosis of Alzheimer's Disease using structural MR and FDG-PET images. Sci Rep. 2018; 8(1):5697. PMC: 5890270. DOI: 10.1038/s41598-018-22871-z.

5.
Pace D, Dalca A, Geva T, Powell A, Moghari M, Golland P. Interactive Whole-Heart Segmentation in Congenital Heart Disease. Med Image Comput Comput Assist Interv. 2016; 9351:80-88. PMC: 4753059. DOI: 10.1007/978-3-319-24574-4_10.