Volume 20, Issue 1
  • ISSN: 1573-4056
  • E-ISSN: 1875-6603

Abstract

Introduction

Multimodal medical image fusion techniques play an important role in clinical diagnosis and treatment planning. Combining multimodal images involves several challenges depending on the type of modality, the transformation techniques used, and the mapping of structural and metabolic information.

Methods

Artifacts can form during data acquisition, for example from minor patient movement, or during data pre-processing steps such as registration and normalization. Unlike in single-modality images, detecting an artifact in complementary fused multimodal images is a more challenging task. Many medical image fusion techniques have been developed, but few researchers have tested the robustness of their fusion approaches. The main objective of this study is to identify and classify the noise and artifacts present in fused MRI-SPECT brain images using transfer learning with fine-tuned CNNs. Deep neural network-based techniques are capable of detecting even small amounts of noise in images. In this study, three pre-trained convolutional neural network models (ResNet50, DenseNet169, and InceptionV3) were fine-tuned by transfer learning to detect artifacts and various noises, including Gaussian, speckle, random, and mixed noise, in fused MRI-SPECT brain image datasets.
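The noise types named above can be simulated for such an experiment as follows; this is a minimal NumPy sketch, and the parameter values (standard deviations, impulse fraction) are illustrative assumptions, not the settings used in the study:

```python
import numpy as np

def add_gaussian(img, sigma=0.05, rng=None):
    """Additive zero-mean Gaussian noise, clipped back to [0, 1]."""
    rng = rng or np.random.default_rng(0)
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def add_speckle(img, sigma=0.1, rng=None):
    """Multiplicative (speckle) noise: img * (1 + n)."""
    rng = rng or np.random.default_rng(1)
    return np.clip(img * (1.0 + rng.normal(0.0, sigma, img.shape)), 0.0, 1.0)

def add_random(img, amount=0.02, rng=None):
    """Random impulse noise: replace a fraction of pixels with random values."""
    rng = rng or np.random.default_rng(2)
    out = img.copy()
    mask = rng.random(img.shape) < amount
    out[mask] = rng.random(mask.sum())
    return out

def add_mixed(img):
    """Mixed noise: Gaussian followed by speckle."""
    return add_speckle(add_gaussian(img))
```

Degraded copies of the fused images generated this way, alongside clean originals, give the labelled classes a fine-tuned CNN can be trained to distinguish.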

Results

Five-fold stratified cross-validation (SCV) was used to evaluate the performance of the networks. Across folds, the pre-trained DenseNet169 model outperformed the other models, with average five-fold accuracies of 93.8±5.8%, 98±3.9%, 97.8±1.64%, and 93.8±5.8% for Gaussian, speckle, random, and mixed noise, respectively, whereas InceptionV3 achieved 90.6±9.8%, 98.8±1.6%, 91.4±9.74%, and 90.6±9.8%, and ResNet50 achieved 75.8±21%, 84.8±7.6%, 73.8±22%, and 75.8±21%.
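Stratified cross-validation partitions the data so that each fold preserves the class proportions, and the mean±std figures quoted above are then computed over the five fold-level accuracies. A minimal NumPy sketch of the splitting scheme (in practice a library routine such as scikit-learn's `StratifiedKFold` would typically be used):

```python
import numpy as np

def stratified_kfold(labels, k=5, seed=0):
    """Yield (train_idx, test_idx) pairs with per-class proportions preserved."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    folds = [[] for _ in range(k)]
    for cls in np.unique(labels):
        idx = np.flatnonzero(labels == cls)
        rng.shuffle(idx)
        # spread each class's samples evenly over the k folds
        for i, chunk in enumerate(np.array_split(idx, k)):
            folds[i].extend(chunk.tolist())
    for i in range(k):
        test = np.array(sorted(folds[i]))
        train = np.array(sorted(j for f in folds[:i] + folds[i + 1:] for j in f))
        yield train, test

def mean_std(fold_accuracies):
    """Summarize fold accuracies as (mean, std), as reported in the Results."""
    a = np.asarray(fold_accuracies, dtype=float)
    return a.mean(), a.std()
```

Each model is trained on four folds and evaluated on the held-out fold; repeating this five times yields the per-noise-type accuracy distributions summarized above.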

Conclusion

Based on the performance results obtained, the pre-trained DenseNet169 model provides the highest accuracy among the models evaluated.

This is an open access article distributed under the terms of the Creative Commons Attribution 4.0 International Public License (CC-BY 4.0), a copy of which is available at: https://creativecommons.org/licenses/by/4.0/legalcode. This license permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
DOI: 10.2174/0115734056256872240909112137
2024-01-01
