Volume 20, Issue 1
  • ISSN: 1573-4056
  • E-ISSN: 1875-6603

Abstract

Background:

Multimodal medical image fusion currently suffers from several problems: loss of texture detail, which blurs edge contours, and loss of image energy, which reduces contrast.

Objective:

To address these problems and obtain higher-quality fused images, this study proposes an image fusion method based on local saliency energy and multi-scale fractal dimension.

Methods:

First, a non-subsampled contourlet transform (NSCT) was used to decompose each medical image into 4 layers of high-pass subbands and 1 low-pass subband. Second, to fuse the high-pass subbands of layers 2 to 4, fusion rules based on a multi-scale morphological gradient and an activity measure were used as the external stimulus of a pulse-coupled neural network (PCNN). Third, fusion rules based on an improved multi-scale fractal dimension and a new local saliency energy were proposed for the low-pass subband and for the 1st-layer high-pass subband (the layer closest to the low-pass subband), respectively. Lastly, the fused image was obtained by applying the inverse NSCT to the fused subbands.
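
Since NSCT has no standard Python implementation, the runnable sketch below only illustrates the structure of this pipeline: a difference-of-Gaussians pyramid stands in for the NSCT decomposition, and a direct activity-measure comparison stands in for the PCNN firing maps. Every function name here (`decompose`, `multiscale_morph_gradient`, `local_energy`, `fuse`) is an illustrative assumption, not the authors' code.

```python
# Minimal runnable sketch of the pipeline's structure, NOT the authors'
# implementation: a difference-of-Gaussians pyramid stands in for NSCT,
# and an activity-measure comparison stands in for PCNN firing maps.
import cv2
import numpy as np

def decompose(img, levels=4):
    """Stand-in for NSCT: split `img` into `levels` high-pass layers plus
    one low-pass residual. Layers are returned coarsest-first, so layer 1
    is the one closest to the low-pass subband, as in the paper."""
    highs, current = [], img.astype(np.float64)
    for lvl in range(levels):
        blurred = cv2.GaussianBlur(current, (0, 0), sigmaX=2.0 ** lvl)
        highs.append(current - blurred)
        current = blurred
    return highs[::-1], current

def multiscale_morph_gradient(band, scales=(3, 5, 7)):
    """Multi-scale morphological gradient averaged over several
    structuring-element sizes; the paper feeds such a gradient-based
    activity measure into a PCNN as the external stimulus."""
    grads = [cv2.morphologyEx(band, cv2.MORPH_GRADIENT,
                              cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (k, k)))
             for k in scales]
    return np.mean(grads, axis=0)

def local_energy(band, win=3):
    """Windowed sum of squared coefficients, a crude proxy for the
    paper's new local saliency energy."""
    return cv2.boxFilter(band * band, -1, (win, win), normalize=False)

def fuse(img_a, img_b):
    highs_a, low_a = decompose(img_a)
    highs_b, low_b = decompose(img_b)
    fused_highs = []
    for lvl, (ha, hb) in enumerate(zip(highs_a, highs_b), start=1):
        if lvl == 1:   # layer nearest the low-pass band: saliency-energy rule
            mask = local_energy(ha) >= local_energy(hb)
        else:          # layers 2-4: activity measure in place of PCNN firing
            mask = multiscale_morph_gradient(ha) >= multiscale_morph_gradient(hb)
        fused_highs.append(np.where(mask, ha, hb))
    # Placeholder for the improved multi-scale fractal dimension rule on
    # the low-pass subband; a plain average is used here instead.
    fused_low = 0.5 * (low_a + low_b)
    return np.clip(sum(fused_highs) + fused_low, 0, 255).astype(np.uint8)
```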

Results:

On three multimodal medical image datasets, the proposed method was compared with 7 other fusion methods using 5 common objective evaluation metrics.
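
The abstract does not name the five metrics. As a plausible illustration, the sketch below implements two metrics that are standard in the image fusion literature, information entropy (EN) and spatial frequency (SF); higher values of both are generally read as a more informative, sharper fused image.

```python
# Two objective fusion metrics commonly used in this literature, shown
# as illustrative examples (the paper's five metrics are not named here).
import numpy as np

def entropy(img):
    """Shannon entropy of an 8-bit image's gray-level histogram:
    EN = -sum_i p_i * log2(p_i)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def spatial_frequency(img):
    """SF = sqrt(RF^2 + CF^2), where RF and CF are the RMS horizontal
    and vertical first differences of pixel intensities."""
    img = img.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))
    return float(np.sqrt(rf ** 2 + cf ** 2))
```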

Conclusion:

Experiments showed that the proposed method preserves the contrast and edges of the fused image well and is highly competitive with the compared methods in both subjective and objective evaluation.

This is an open access article distributed under the terms of the Creative Commons Attribution 4.0 International Public License (CC-BY 4.0), a copy of which is available at: https://creativecommons.org/licenses/by/4.0/legalcode. This license permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.