Abstract
Multi-modal medical image fusion can reduce information redundancy, increase the understandability of images, and provide medical staff with more detailed pathological information. However, most traditional methods treat the channels of a multi-modal medical image as three independent grayscale images, which ignores the correlation between the color channels and leads to color distortion, attenuation, and other artifacts in the reconstructed image. In this paper, we propose a multi-modal medical image fusion algorithm with geometric algebra based sparse representation (GA-SR). Firstly, the multi-modal medical image is represented as a multi-vector, and the GA-SR model is introduced for multi-modal medical image fusion to avoid losing the correlation between channels. Secondly, the orthogonal matching pursuit algorithm based on geometric algebra (GAOMP) is introduced to obtain the sparse coefficient matrix, and the K-means clustering singular value decomposition algorithm based on geometric algebra (K-GASVD) is introduced to obtain the geometric algebra dictionary and to update the sparse coefficient matrix and the dictionary. Finally, we obtain the fused image by a linear combination of the geometric algebra dictionary and the coefficient matrix. The experimental results demonstrate that the proposed algorithm outperforms existing methods in subjective and objective quality evaluation and show its effectiveness for multi-modal medical image fusion.
1 Introduction
Medical image fusion integrates technologies from many fields, such as computer technology, sensor technology, artificial intelligence, and image processing. It comprehensively extracts image information collected by different sensors and concentrates it into one image, which can reduce the information redundancy of the image, enhance its readability, and provide more specific disease information for diagnosis (Riboni and Murtas, 2019; Li et al., 2021; Wang et al., 2022).
According to the types of fused images, medical image fusion can be divided into unimodal medical image fusion and multi-modal medical image fusion (Tirupal et al., 2021). Unimodal medical image fusion combines multiple images of a patient's organ collected by the same device into one image with a corresponding fusion algorithm; the purpose is to collect image information under different contrasts (Zhang et al., 2021). Multi-modal medical images are images obtained by different imaging methods. Different types of medical images contain different information, and the fused image can summarize various feature information to provide medical staff with more comprehensive pathological information (Zhu et al., 2017). Common medical images include CT, MR, and SPECT images (Thieme et al., 2012; Nazir et al., 2021; Engudar et al., 2022).
Multi-modal medical image fusion mainly includes the following methods: morphological methods, knowledge-based methods, wavelet-based methods, neural network based methods, methods based on fuzzy logic, and so on (James and Dasarathy, 2014). Naeem et al. (2016) used the discrete wavelet transform (DWT) to fuse images with different details, which changed the uniformity of the details contained in the fused image. Guruprasad et al. (2013) proposed an image fusion algorithm based on DWT-DBSS and used the maximum selection rule to obtain the detail fusion coefficients. Alfano et al. (2007) presented a novel wavelet-based method that fuses medical images according to the MRA approach and aims to put the right "semantic" content into the fused image by applying two quality indexes: variance and modulus maxima. Marshall and Matsopoulos (2002) presented a hierarchical image fusion scheme that preserves the details of the input images most relevant for visual perception.
Sparse representation (Shao et al., 2020) exploits the natural sparsity of signals, consistent with the physiological properties of the human visual system: a signal is represented as a linear combination of dictionary atoms and sparse coefficients, using as few atoms as possible from a given overcomplete dictionary. Yang and Li (2010) first introduced sparse representation into image fusion and adopted the sliding window technique to make the fusion process robust to noise and misregistration. Zong and Qiu (2017) proposed a fusion method based on classified image blocks, which uses histogram-of-oriented-gradients features to classify image blocks and build sub-dictionaries; it can reduce the loss of image details and improve the quality of image fusion.
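To make the sparse-coding idea concrete, the following is a minimal real-valued orthogonal matching pursuit sketch (the GAOMP algorithm used later in this paper generalizes this to geometric algebra multi-vectors; the dictionary below is an illustrative construction, not the adaptive dictionary of the paper):

```python
import numpy as np

def omp(D, q, sparsity):
    """Orthogonal matching pursuit: greedily select the dictionary
    atom most correlated with the residual, then re-fit all selected
    atoms by least squares, until `sparsity` atoms are chosen."""
    residual = q.astype(float).copy()
    support, a = [], np.zeros(D.shape[1])
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], q, rcond=None)
        residual = q - D[:, support] @ coef
    a[support] = coef
    return a

# Overcomplete dictionary: identity atoms plus normalized Hadamard atoms.
H = np.array([[1.0]])
for _ in range(4):
    H = np.kron(H, np.array([[1.0, 1.0], [1.0, -1.0]]))
D = np.hstack([np.eye(16), H / 4.0])
q = 2.0 * D[:, 3] - 1.5 * D[:, 10]   # exact 2-sparse signal
a = omp(D, q, sparsity=2)
print(np.flatnonzero(a))              # atoms 3 and 10 recovered
```

With an exactly sparse signal and a low-coherence dictionary, the greedy selection recovers the true support and the least-squares refit recovers the coefficients exactly.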
Traditional sparse representation fusion methods usually process the color channels separately, which easily destroys the correlation between image channels and results in loss of color in the fused image. Geometric algebra (GA) has been considered one of the most powerful tools in multi-dimensional signal processing and has witnessed great success in a wide range of applications, such as physics, quantum computing, electromagnetism, satellite navigation, neural computing, camera geometry, image processing, robotics, and computer vision (da Rocha and Vaz, 2006; Wang et al., 2019a; Wang et al., 2021a). Inspired by Wang et al. (2019b), geometric algebra based sparse representation (GA-SR) is introduced for multi-modal medical image fusion in this paper.
The rest of this paper is organized as follows. Section 2 introduces the basic knowledge of geometric algebra. Section 3 introduces the GA-SR algorithm and the fusion steps of the proposed algorithm. Section 4 provides the experimental analysis, including subjective and objective quality evaluations. Finally, Section 5 concludes the paper.
2 Geometric Algebra
Geometric algebra combines quaternions and Grassmann algebra and extends their operations to higher-dimensional spaces. Computation in geometric algebra space does not rely on coordinate information (Batard et al., 2009), and all geometric operators are included in the space. Any multi-modal medical image can be represented on a geometric algebra orthonormal base as a multi-vector and processed as a whole, which preserves the correlation between the channels of the image (da Rocha and Vaz, 2006; López-González et al., 2016).
Geometric algebra is generated from a quadratic space and is defined as follows. Let $\mathbb{G}_n$ denote the $n$-dimensional geometric algebra space generated by the orthonormal basis vectors $\{e_1, e_2, \ldots, e_n\}$, which contains the following complete orthonormal base:

$$\{1,\; e_1, \ldots, e_n,\; e_1 e_2, \ldots, e_{n-1} e_n,\; \ldots,\; e_1 e_2 \cdots e_n\} \qquad (1)$$
For example, the orthonormal base of the vector space $\mathbb{G}_3$ consists of $2^3 = 8$ elements, which are $\{1, e_1, e_2, e_3, e_1 e_2, e_1 e_3, e_2 e_3, e_1 e_2 e_3\}$.
The orthonormal base in the geometric algebra space satisfies the following basic operation rules:

$$e_i^2 = 1, \quad i = 1, \ldots, n \qquad (2)$$
$$e_i e_j = -e_j e_i, \quad i \neq j \qquad (3)$$
$$e_i \cdot e_j = 0, \quad i \neq j \qquad (4)$$
$$ab = a \cdot b + a \wedge b \qquad (5)$$

where $\wedge$ represents the outer product, $\cdot$ represents the inner product, and $ab$ represents the geometric product of $a$ and $b$, which is equal to the sum of the inner and outer products of $a$ and $b$.
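As a worked example of the geometric product rule above, for two vectors $a = a_1 e_1 + a_2 e_2$ and $b = b_1 e_1 + b_2 e_2$, expanding with $e_i^2 = 1$ and $e_2 e_1 = -e_1 e_2$ gives

```latex
ab = (a_1 e_1 + a_2 e_2)(b_1 e_1 + b_2 e_2)
   = \underbrace{(a_1 b_1 + a_2 b_2)}_{a \cdot b}
   + \underbrace{(a_1 b_2 - a_2 b_1)\, e_1 e_2}_{a \wedge b},
```

so the scalar part is the inner product and the bivector part is the outer product, matching $ab = a \cdot b + a \wedge b$.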
3 Multi-Modal Medical Image Fusion Based on Geometric Algebra Based Sparse Representation
In this section, the GA-SR based multi-modal medical image fusion is provided.
3.1 Geometric Algebra Based Sparse Representation Model
The sparse representation model of a GA multi-vector $q$ can be defined as

$$\hat{a} = \arg\min_{a} \|a\|_0 \quad \text{s.t.} \quad \|q - Da\|_2 \leq \varepsilon \qquad (6)$$

where $D = [d_1, d_2, \ldots, d_M]$ is a geometric algebra dictionary containing $M$ dictionary atoms, $a$ is a sparse coefficient vector in geometric algebra form, and $\|a\|_0$ is the objective function, which counts the number of non-zero vectors in $a$. The multi-modal medical image based on the GA-SR model can then be described as

$$q = Da = \sum_{m=1}^{M} d_m a_m \qquad (7)$$
For a three-channel multi-modal medical image, it is assumed that each image block of each channel can be converted into a vector of length $N$, and the block can be expressed as the multi-vector

$$q = q_1 e_1 + q_2 e_2 + q_3 e_3 \qquad (8)$$

where $q_1, q_2, q_3 \in \mathbb{R}^N$ are the vectorized blocks of the three channels.
For a three-channel multi-modal medical image, its sparse representation model can be defined as

$$\hat{a} = \arg\min_{a} \|a\|_0 \quad \text{s.t.} \quad \|q - Da\|_2 \leq \varepsilon \qquad (9)$$

where $D = D_1 e_1 + D_2 e_2 + D_3 e_3$ is a three-channel geometric algebra dictionary consisting of $M$ dictionary atoms, $a$ is the corresponding geometric algebra coefficient vector, and $\|a\|_0$ counts the number of non-zero elements in the vector $a$.
Therefore, the GA-SR model of the three-channel medical image can be described as

$$q = Da = (D_1 e_1 + D_2 e_2 + D_3 e_3)\,a \qquad (10)$$

For the $K$ image blocks of the image, the general form of the three-channel sparse coefficient matrix is obtained by collecting the block-wise coefficients:

$$A = [\hat{a}_1, \hat{a}_2, \ldots, \hat{a}_K] \qquad (11)$$
3.2 The Representation of Multi-Modal Medical Image
Any pixel $F$ of a multi-modal medical image can be represented as a multi-vector in the space $\mathbb{G}_3$:

$$F = f_1 e_1 + f_2 e_2 + f_3 e_3 \qquad (12)$$

where $e_1, e_2, e_3$ are the orthonormal base vectors of the geometric algebra and $f_1, f_2, f_3$ are the pixel values of the three channels of the multi-modal medical image at that position. Each channel of a multi-modal medical image can thus be encoded on an orthonormal basis of geometric algebra. Therefore, a multi-modal medical image of size $m \times n$ can be expressed as

$$F = F_1 e_1 + F_2 e_2 + F_3 e_3 \qquad (13)$$

where $F_1, F_2, F_3 \in \mathbb{R}^{m \times n}$ are the channel images.
Assuming that the multi-modal medical image is divided into image blocks $q_k$, $k = 1, \ldots, K$, where $N$ represents the size of each block and $K$ represents the number of image patches, each block can be converted into a vector of length $N$, and the geometric algebra form of the image block $q_k$ is

$$q_k = q_{k,1} e_1 + q_{k,2} e_2 + q_{k,3} e_3 \qquad (14)$$
3.3 The Proposed Fusion Algorithm
Let $F^A$ and $F^B$ represent the two multi-modal medical source images, respectively. The framework of GA-SR based multi-modal medical image fusion is shown in Figure 1.
(1) The sliding window technique is introduced to divide the two source images into several sub-image blocks; the window slides with a step size of 1. The image blocks are converted into column vectors, and the $k$th image blocks of the two source images form the column vectors $v_k^A$ and $v_k^B$, $k = 1, \ldots, K$.
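A minimal sketch of step (1) for a single channel (the 8 × 8 default window is an assumption for illustration; any block size works the same way):

```python
import numpy as np

def image_to_patches(img, patch=8, step=1):
    """Slide a patch x patch window over a 2-D channel with the given
    step and stack each block as a column: result is (patch*patch, K)."""
    h, w = img.shape
    cols = []
    for i in range(0, h - patch + 1, step):
        for j in range(0, w - patch + 1, step):
            cols.append(img[i:i + patch, j:j + patch].reshape(-1))
    return np.stack(cols, axis=1)

img = np.arange(16.0).reshape(4, 4)
V = image_to_patches(img, patch=2, step=1)
print(V.shape)  # (4, 9): 4-pixel blocks at 3 x 3 window positions
```

For a three-channel image the same extraction is applied per channel and the three column stacks are bound to the basis vectors $e_1, e_2, e_3$.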
(2) The sparse representation coefficients $a_k^A$ and $a_k^B$ of the column vectors are calculated by the GAOMP algorithm of Wang et al. (2019b):

$$a_k^A = \arg\min_{a} \|a\|_0 \quad \text{s.t.} \quad \|v_k^A - Da\|_2 \leq \varepsilon \qquad (15)$$
$$a_k^B = \arg\min_{a} \|a\|_0 \quad \text{s.t.} \quad \|v_k^B - Da\|_2 \leq \varepsilon \qquad (16)$$

where $D$ represents the adaptive dictionary of image blocks obtained by dictionary training, $a_k^A$ and $a_k^B$ respectively represent the sparse coefficient vectors obtained by GAOMP, which are combined to form the sparse coefficient matrices, and $\varepsilon$ is the cutoff condition for dictionary training.
(3) The fused sparse coefficient matrix is obtained by the L1-norm (Yanan et al., 2020) maximum rule. The L1 norm is the sum of the absolute values of the elements; it is the optimal convex approximation of the L0 norm, and it is cheaper to compute and easier to optimize. The L1 norms of the corresponding columns of the two sparse coefficient matrices are calculated, and the column with the larger norm is taken as the corresponding column of the fused sparse coefficient matrix:

$$a_k^F = \begin{cases} a_k^A, & \|a_k^A\|_1 \geq \|a_k^B\|_1 \\ a_k^B, & \text{otherwise} \end{cases} \qquad (17)$$
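Step (3) reduces to a column-wise selection between the two coefficient matrices; a real-valued sketch (the matrices `A` and `B` are illustrative):

```python
import numpy as np

def fuse_coefficients(A, B):
    """Column-wise l1-max rule: for each patch, keep the sparse
    coefficient column with the larger l1 norm."""
    keep_a = np.abs(A).sum(axis=0) >= np.abs(B).sum(axis=0)
    return np.where(keep_a, A, B)  # broadcasts over columns

A = np.array([[0.0, 2.0], [1.0, 0.0]])
B = np.array([[3.0, 0.5], [0.0, 0.5]])
F = fuse_coefficients(A, B)
print(F)  # column 0 from B (l1 norm 3 > 1), column 1 from A (2 > 1)
```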
(4) A dictionary training algorithm is used to obtain the dictionary required for sparse representation. K-SVD (Fu et al., 2019) is a classic dictionary training algorithm in sparse representation. The K-GASVD algorithm in Wang et al. (2019b) consists of two steps, sparse coding (Sandhya et al., 2021) and dictionary update (Thanthrige et al., 2020), and is used here to train the dictionary and update the sparse coefficient matrix.
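For orientation, the dictionary-update stage of classic real-valued K-SVD, which K-GASVD generalizes to geometric algebra, can be sketched as one rank-1 SVD per atom (a simplified sketch, not the K-GASVD implementation):

```python
import numpy as np

def ksvd_update(D, A, Q):
    """One K-SVD dictionary-update sweep: for each atom m, take the
    patches whose codes use it, form the residual without atom m's
    contribution, and replace (d_m, a_m) by the best rank-1 fit."""
    for m in range(D.shape[1]):
        users = np.flatnonzero(A[m])
        if users.size == 0:
            continue  # atom unused this round
        E = Q[:, users] - D @ A[:, users] + np.outer(D[:, m], A[m, users])
        U, s, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, m] = U[:, 0]
        A[m, users] = s[0] * Vt[0]
    return D, A

# Synthetic check: the approximation error never increases.
rng = np.random.default_rng(0)
D = rng.standard_normal((16, 24))
D /= np.linalg.norm(D, axis=0)
A = np.zeros((24, 40))
for k in range(40):  # 3-sparse code per patch
    A[rng.choice(24, size=3, replace=False), k] = rng.standard_normal(3)
Q = D @ A + 0.05 * rng.standard_normal((16, 40))
before = np.linalg.norm(Q - D @ A)
D, A = ksvd_update(D, A, Q)
print(np.linalg.norm(Q - D @ A) < before)  # True
```

Each atom update solves a restricted rank-1 least-squares problem, so the overall approximation error is non-increasing over a sweep, and the sparsity pattern of `A` is preserved because only the coefficients of patches already using the atom are touched.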
(5) The fusion result of $v_k^A$ and $v_k^B$ can be obtained from the fused coefficients according to the GA-SR model of the three-channel multi-modal medical image:

$$v_k^F = D\,a_k^F \qquad (18)$$
(6) All image patches are processed in the same way: each fused column vector is reshaped into an image block and returned to its original position (with overlapping pixels averaged), and the blocks compose the final fused image $F^F$.
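Step (6) for one channel can be sketched as follows, assuming patches were extracted with a sliding window of step 1 (function and variable names are illustrative):

```python
import numpy as np

def patches_to_image(V, shape, patch=2, step=1):
    """Paste each column of V back at its window position and average
    the overlapping contributions."""
    out = np.zeros(shape)
    weight = np.zeros(shape)
    k = 0
    for i in range(0, shape[0] - patch + 1, step):
        for j in range(0, shape[1] - patch + 1, step):
            out[i:i + patch, j:j + patch] += V[:, k].reshape(patch, patch)
            weight[i:i + patch, j:j + patch] += 1.0
            k += 1
    return out / weight

# Round trip: extract 2x2 patches of a 4x4 image, then reassemble.
img = np.arange(16.0).reshape(4, 4)
cols = [img[i:i + 2, j:j + 2].reshape(-1)
        for i in range(3) for j in range(3)]
V = np.stack(cols, axis=1)
rec = patches_to_image(V, img.shape, patch=2, step=1)
print(np.allclose(rec, img))  # True: averaging recovers the image
```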
FIGURE 1
4 Experimental Analysis
In order to verify the effectiveness of GA-SR based multi-modal medical image fusion, experiments are implemented in Matlab on four groups of multi-modal medical images selected from the Harvard Medical School database and compared with existing methods: the Laplacian Pyramid algorithm (Liu et al., 2019), the DWT-DBSS algorithm (Guruprasad et al., 2013), the SIDWT-Haar algorithm (Xin et al., 2013), and the Morphological Difference Pyramid algorithm (Matsopoulos et al., 1995). The source images are SPECT images obtained with different radionuclides. The spatial resolution of each image is 256 × 256. The source images used in the experiments are shown in Figure 2.
FIGURE 2
4.1 Subjective Quality Evaluation
The multi-modal medical images are fused by six different algorithms respectively, and the obtained results are shown in Figures 3–6.
FIGURE 3
FIGURE 4
FIGURE 5
FIGURE 6
In each of Figures 3–6, (A,B) are the source images used in the experiment, and (C–H) are the fused results obtained by the six algorithms. Subjectively, the edges of the images obtained by the first four algorithms are relatively complete, but the middle parts are darker; the contrast and clarity of these images are low, which indicates that these four algorithms cannot fuse the two source images completely, so the fused image information is incomplete. The fused images obtained by the SR and GA-SR algorithms are relatively complete: they comprehensively cover the color and structure information of the two source images, and the resulting images are relatively clear. However, the images obtained by the SR algorithm contain multiple red spots of different sizes, which distort the result; the red spots cover correct information from the source images, which is not conducive to clinical diagnosis. As can be seen from panel (H) of each group, the GA-SR images are relatively clear, with no obvious occluded areas and relatively high contrast, which indicates that the fused images obtained by the GA-SR algorithm comprehensively cover the source images. They can provide comprehensive pathological information for medical staff and convenience for clinical medicine.
4.2 Objective Quality Evaluation
Evaluation indicators are adopted for objective evaluation of image quality. In this paper, four indicators are used for performance analysis of the six fusion algorithms: CC (correlation coefficient) (Li and Dai, 2009), PSNR (peak signal-to-noise ratio) (Hore and Ziou, 2010), RMSE (root mean square error) (Zhao et al., 2020), and Joint-Entropy (Okarma and Fastowicz, 2020). The results for the four groups are shown in Tables 1–4.
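Assuming images scaled to [0, 1], three of these indicators can be computed as below; exact definitions vary slightly across papers, so this is a sketch rather than the authors' exact implementation:

```python
import numpy as np

def rmse(x, y):
    """Root mean square error."""
    return np.sqrt(np.mean((x - y) ** 2))

def psnr(x, y, peak=1.0):
    """Peak signal-to-noise ratio in dB for images in [0, peak]."""
    return 20.0 * np.log10(peak / rmse(x, y))

def cc(x, y):
    """Pearson correlation coefficient of the flattened images."""
    return float(np.corrcoef(x.ravel(), y.ravel())[0, 1])

src = np.linspace(0.0, 1.0, 64).reshape(8, 8)
fused = np.clip(src + 0.01, 0.0, 1.0)   # small uniform offset
print(round(rmse(src, fused), 4))        # close to 0.01
print(round(psnr(src, fused), 1))        # about 40.1 dB
print(cc(src, src))                      # 1.0 for identical images
```

Note the directions: higher CC, PSNR, and Joint-Entropy indicate a better fusion, while lower RMSE does.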
TABLE 1
| Evaluation standard | Laplacian pyramid | DWT-DBSS | SIDWT-Haar | Morphological difference pyramid | SR | GA-SR |
|---|---|---|---|---|---|---|
| CC | 0.6923 | 0.6284 | 0.6481 | 0.6842 | 0.7058 | 0.74135 |
| Joint-Entropy | 3.4538 | 3.4572 | 3.3152 | 3.4986 | 5.8191 | 5.9109 |
| PSNR | 17.442 | 17.5662 | 17.7478 | 17.0234 | 15.6494 | 17.6253 |
| RMSE | 0.1342 | 0.1323 | 0.7335 | 0.1409 | 0.1644 | 0.1309 |
Quality evaluation of fused images of the first group.
Column 1 of the table is the Evaluation standard. The other columns of the table are the evaluated values of different methods.
TABLE 2
| Evaluation standard | Laplacian pyramid | DWT-DBSS | SIDWT-Haar | Morphological difference pyramid | SR | GA-SR |
|---|---|---|---|---|---|---|
| CC | 0.6556 | 0.5840 | 0.602 | 0.6400 | 0.6832 | 0.71145 |
| Joint-Entropy | 3.9194 | 3.7748 | 3.7609 | 3.9974 | 6.67605 | 6.6487 |
| PSNR | 16.3629 | 16.5158 | 16.7125 | 15.8303 | 14.8449 | 16.8667 |
| RMSE | 0.1520 | 0.1494 | 0.1460 | 0.1616 | 0.1803 | 0.1362 |
Quality evaluation of fused images of the second group.
Column 1 of the table is the Evaluation standard. The other columns of the table are the evaluated values of different methods.
TABLE 3
| Evaluation standard | Laplacian pyramid | DWT-DBSS | SIDWT-Haar | Morphological difference pyramid | SR | GA-SR |
|---|---|---|---|---|---|---|
| CC | 0.7046 | 0.6472 | 0.6665 | 0.6737 | 0.6889 | 0.7017 |
| Joint-Entropy | 3.6714 | 3.7782 | 3.5535 | 3.5208 | 6.7404 | 6.9520 |
| PSNR | 16.8556 | 16.7812 | 17.0263 | 16.6019 | 15.194 | 17.1890 |
| RMSE | 0.1436 | 0.1449 | 0.1408 | 0.1479 | 0.1732 | 0.1327 |
Quality evaluation of fused images of the third group.
Column 1 of the table is the Evaluation standard. The other columns of the table are the evaluated values of different methods.
TABLE 4
| Evaluation standard | Laplacian pyramid | DWT-DBSS | SIDWT-Haar | Morphological difference pyramid | SR | GA-SR |
|---|---|---|---|---|---|---|
| CC | 0.6510 | 0.5897 | 0.6267 | 0.6235 | 0.56865 | 0.6582 |
| Joint-Entropy | 3.6212 | 3.6904 | 3.4017 | 3.9315 | 7.0763 | 6.61685 |
| PSNR | 17.0485 | 17.0083 | 17.1788 | 16.5814 | 15.0864 | 17.2829 |
| RMSE | 0.1405 | 0.1411 | 0.1384 | 0.1482 | 0.1754 | 0.1362 |
Quality evaluation of fused images of the fourth group.
Column 1 of the table is the Evaluation standard. The other columns of the table are the evaluated values of different methods.
Across the four groups, the CC of the images obtained by the GA-SR algorithm is the highest or among the highest, indicating that the fused images obtained by GA-SR are more strongly correlated with the source images and that the obtained image information is more complete. At the same time, the PSNR of the images obtained by the GA-SR algorithm is generally higher, and the RMSE lower, than those of the other algorithms, indicating that the fused images obtained by GA-SR are closer to the source images, with less distortion and more comprehensive information (Xiao et al., 2021; Gao et al., 2022a; Gao et al., 2022b; Gao et al., 2022c).
4.3 Further Analysis
Dictionary training is very important for sparse representation, and the quality of the dictionary directly affects the quality of image fusion. The dictionaries trained with the K-SVD and K-GASVD algorithms are shown in Figure 7.
FIGURE 7
Figure 7A is the dictionary image obtained by the K-SVD algorithm, and Figure 7B is the dictionary image obtained by the K-GASVD algorithm. The color of the dictionary obtained by K-SVD is clearly monotone: the K-SVD algorithm cannot fully handle the spectral components of the source image, so the generated dictionary contains a large number of gray image blocks. The dictionary image of K-GASVD contains richer spectral information.
In order to verify the effect of the number of dictionary atoms on the quality of the fused image, we vary the number of atoms to obtain different dictionaries and the corresponding fused images. The relationship between the PSNR of the fused images and the number of dictionary atoms is shown in Figure 8.
FIGURE 8
We can find that, as the number of dictionary atoms increases, the PSNR of the fused images obtained with the K-GASVD model is significantly higher than that with K-SVD. Conversely, at the same PSNR, the K-GASVD model requires only about 3/10 of the atoms required by the K-SVD model. Therefore, K-GASVD achieves the same fusion performance with significantly fewer atoms, and its atoms present more colorful structures.
In terms of computational complexity, the proposed method usually requires more computation time than existing real-valued algorithms because of the non-commutativity of the geometric product. Inspired by the work in Wang et al. (2021b), reduced geometric algebra (RGA) will be introduced to lower the computational complexity of our algorithm.
5 Conclusion
In this paper, the multi-modal medical image is represented as a multi-vector, and the GA-SR model is introduced for multi-modal medical image fusion to avoid losing the correlation between channels. A geometric algebra based dictionary learning method is provided to extract more specific disease information for diagnosis. The experimental results validate the rationality and effectiveness of the method. In future work, we will focus on the analysis and diagnosis of pathological information using GA-SR based multi-modal medical image fusion.
Statements
Data availability statement
The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.
Author contributions
YL: contributed to the conception of the study; NF: performed the experiment, performed the data analyses and wrote the manuscript; HW: contributed significantly to analysis and manuscript preparation; RW: helped perform the analysis with constructive discussions.
Funding
This research was funded by National Natural Science Foundation of China (NSFC) under Grant No. 61771299.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Alfano, B., Ciampi, M., and De Pietro, G. (2007). A Wavelet-Based Algorithm for Multimodal Medical Image Fusion. Int. Conf. on Semantic Multimedia (SAMT), DBLP, 117–120. doi: 10.1007/978-3-540-77051-0_13
Batard, T., Saint-Jean, C., and Berthier, M. (2009). A Metric Approach to Nd Images Edge Detection with Clifford Algebras. J. Math. Imaging Vis. 33, 296–312. doi: 10.1007/s10851-008-0115-0
Yang, B., and Li, S. (2010). Multifocus Image Fusion and Restoration with Sparse Representation. IEEE Trans. Instrum. Meas. 59, 884–892. doi: 10.1109/TIM.2009.2026612
da Rocha, R., and Vaz, J., Jr. (2006). Extended Grassmann and Clifford Algebras. Adv. Appl. Clifford Algebr. 16, 103–125. doi: 10.1007/s00006-006-0006-7
Engudar, G., Rodríguez-Rodríguez, C., Mishra, N. K., Bergamo, M., Amouroux, G., Jensen, K. J., et al. (2022). Metal-ion Coordinated Self-Assembly of Human Insulin Directs Kinetics of Insulin Release as Determined by Preclinical SPECT/CT Imaging. J. Control. Release 343, 347–360. doi: 10.1016/j.jconrel.2022.01.032
Fu, J., Yuan, H., Zhao, R., and Ren, L. (2019). Clustering K-SVD for Sparse Representation of Images. EURASIP J. Adv. Signal Process. 2019, 187–207. doi: 10.1186/s13634-019-0650-4
Gao, H., Qiu, B., Duran Barroso, R. J., Hussain, W., Xu, Y., and Wang, X. (2022). TSMAE: A Novel Anomaly Detection Approach for Internet of Things Time Series Data Using Memory-Augmented Autoencoder. IEEE Trans. Netw. Sci. Eng., 1. doi: 10.1109/TNSE.2022.3163144
Gao, H., Xiao, J., Yin, Y., Liu, T., and Shi, J. (2022). A Mutually Supervised Graph Attention Network for Few-Shot Segmentation: The Perspective of Fully Utilizing Limited Samples. IEEE Trans. Neural Netw. Learn. Syst., 1–13. doi: 10.1109/TNNLS.2022.3155486
Gao, H., Xu, K., Cao, M., Xiao, J., Xu, Q., and Yin, Y. (2022). The Deep Features and Attention Mechanism-Based Method to Dish Healthcare under Social IoT Systems: An Empirical Study with a Hand-Deep Local-Global Net. IEEE Trans. Comput. Soc. Syst. 9 (1), 336–347. doi: 10.1109/TCSS.2021.3102591
Guruprasad, S., Kurian, M. Z., Suma, H. N., and Raj, S. (2013). A Medical Multi-Modality Image Fusion of CT/PET with PCA, DWT Methods. IJIVP 4, 677–681. doi: 10.21917/ijivp.2013.0098
Hore, A., and Ziou, D. (2010). "Image Quality Metrics: PSNR vs. SSIM," in 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23-26 August 2010 (IEEE Computer Society), 2366–2369. doi: 10.1109/ICPR.2010.579
James, A. P., and Dasarathy, B. V. (2014). Medical Image Fusion: a Survey of the State of the Art. Inf. Fusion 19, 4–19. doi: 10.1016/j.inffus.2013.12.002
Li, J., and Dai, W. (2009). "Image Quality Assessment Based on Theil Inequality Coefficient and Discrete 2-D Wavelet Transform," in 2009 IEEE International Conference on Automation and Logistics, Shenyang, China, 05-07 August 2009, 196–199. doi: 10.1109/ICIMA.2009.5156594
Li, X., Zhou, F., and Tan, H. (2021). Joint Image Fusion and Denoising via Three-Layer Decomposition and Sparse Representation. Knowledge-Based Syst. 224, 107087. doi: 10.1016/j.knosys.2021.107087
Liu, F., Chen, L., Lu, L., Ahmad, A., Jeon, G., and Yang, X. (2019). Medical Image Fusion Method by Using Laplacian Pyramid and Convolutional Sparse Representation. Concurr. Comput. Pract. Exper. 32, 97–102. doi: 10.1002/cpe.5632
López-González, G., Altamirano-Gómez, G., and Bayro-Corrochano, E. (2016). Geometric Entities Voting Schemes in the Conformal Geometric Algebra Framework. Adv. Appl. Clifford Algebr. 26, 1045–1059. doi: 10.1007/s00006-015-0589-y
Marshall, S., and Matsopoulos, G. K. (2002). "Morphological Data Fusion in Medical Imaging," in IEEE Winter Workshop on Nonlinear Digital Signal Processing, Tampere, Finland, 17-20 January 1993. doi: 10.1109/NDSP.1993.767735
Matsopoulos, G. K., Marshall, S., and Brunt, J. (1995). Multiresolution Morphological Fusion of MR and CT Images of the Human Brain. IEE Vis. Image Signal Process. Conf. 141, 137–142. doi: 10.1049/ic:19950506
Naeem, E. A., Abd Elnaby, M. M., El-Sayed, H. S., Abd El-Samie, F. E., and Faragallah, O. S. (2016). Wavelet Fusion for Encrypting Images with a Few Details. Comput. Electr. Eng. 54, 450–470. doi: 10.1016/j.compeleceng.2015.08.018
Nazir, I., Haq, I. U., Khan, M. M., Qureshi, M. B., Ullah, H., and Butt, S. (2021). Efficient Pre-processing and Segmentation for Lung Cancer Detection Using Fused CT Images. Electronics 11, 34. doi: 10.3390/electronics11010034
Okarma, K., and Fastowicz, J. (2020). Improved Quality Assessment of Colour Surfaces for Additive Manufacturing Based on Image Entropy. Pattern Anal. Applic. 23, 1035–1047. doi: 10.1007/s10044-020-00865-w
Riboni, D., and Murtas, M. (2019). Sensor-based Activity Recognition: One Picture Is Worth a Thousand Words. Future Gener. Comput. Syst. 101, 709–722. doi: 10.1016/j.future.2019.07.020
Sandhya, G., Srinag, A., Pantangi, G. B., and Kanaparthi, J. A. (2021). Sparse Coding for Brain Tumor Segmentation Based on the Non-linear Features. JBBBE 49, 63–73. doi: 10.4028/www.scientific.net/JBBBE.49.63
Shao, L., Wu, J., and Wu, M. (2020). Infrared and Visible Image Fusion Based on Spatial Convolution Sparse Representation. J. Phys. Conf. Ser. 1634, 012113. doi: 10.1088/1742-6596/1634/1/012113
Thanthrige, U. S. K. P. M., Barowski, J., Rolfes, I., Erni, D., Kaiser, T., and Sezgin, A. (2020). Characterization of Dielectric Materials by Sparse Signal Processing with Iterative Dictionary Updates. IEEE Sens. Lett. 4, 1–4. doi: 10.1109/LSENS.2020.3019924
Thieme, S. F., Graute, V., Nikolaou, K., Maxien, D., Reiser, M. F., Hacker, M., et al. (2012). Dual Energy CT Lung Perfusion Imaging - Correlation with SPECT/CT. Eur. J. Radiol. 81, 360–365. doi: 10.1016/j.ejrad.2010.11.037
Tirupal, T., Mohan, B. C., and Kumar, S. S. (2021). Multimodal Medical Image Fusion Techniques - A Review. CST 16, 142–163. doi: 10.2174/1574362415666200226103116
Wang, R., Fang, N., He, Y., Li, Y., Cao, W., and Wang, H. (2022). Multi-modal Medical Image Fusion Based on Geometric Algebra Discrete Cosine Transform. Adv. Appl. Clifford Algebr. 32, 19. doi: 10.1007/s00006-021-01197-6
Wang, R., Shen, M., and Cao, W. (2019). Multivector Sparse Representation for Multispectral Images Using Geometric Algebra. IEEE Access 7, 12755–12767. doi: 10.1109/access.2019.2892822
Wang, R., Shen, M., Wang, X., and Cao, W. (2021). RGA-CNNs: Convolutional Neural Networks Based on Reduced Geometric Algebra. Sci. China Inf. Sci. 64, 129101. doi: 10.1007/s11432-018-1513-5
Wang, R., Shi, Y., and Cao, W. (2019). GA-SURF: A New Speeded-Up Robust Feature Extraction Algorithm for Multispectral Images Based on Geometric Algebra. Pattern Recognit. Lett. 127, 11–17. doi: 10.1016/j.patrec.2018.11.001
Wang, R., Wang, Y., Li, Y., Cao, W., and Yan, Y. (2021). Geometric Algebra-Based ESPRIT Algorithm for DOA Estimation. Sensors 21, 5933. doi: 10.3390/s21175933
Xiao, J., Xu, H., Gao, H., Bian, M., and Li, Y. (2021). A Weakly Supervised Semantic Segmentation Network by Aggregating Seed Cues: The Multi-Object Proposal Generation Perspective. ACM Trans. Multimed. Comput. Commun. Appl. 17, 1–19. doi: 10.1145/3419842
Xin, W., You-Li, W., and Fu, L. (2013). "A New Multi-Source Image Sequence Fusion Algorithm Based on SIDWT," in 2013 Seventh International Conference on Image and Graphics, Qingdao, China, 26-28 July 2013, 568–571. doi: 10.1109/icig.2013.119
Yanan, G., Xiaoqun, C., Bainian, L., Kecheng, P., Guangjie, W., and Mei, G. (2020). Research on Numerically Solving the Inverse Problem Based on L1 Norm. IOP Conf. Ser. Mater. Sci. Eng. 799, 012044. doi: 10.1088/1757-899X/799/1/012044
Zhang, Y., Guo, C., and Zhao, P. (2021). Medical Image Fusion Based on Low-Level Features. Comput. Math. Methods Med. 2021, 1–13. doi: 10.1155/2021/8798003
Zhao, Y., Zhang, Y., Han, J., and Wang, Y. (2020). "Analysis of Image Quality Assessment Methods for Aerial Images," in 10th International Conference on Computer Engineering and Networks. Editors Q. Liu, X. Liu, T. Shen, and X. Qiu (Singapore: Springer), 188–195. doi: 10.1007/978-981-15-8462-6_19
Zhu, L., Wang, W., Qin, J., Wong, K.-H., Choi, K.-S., and Heng, P.-A. (2017). Fast Feature-Preserving Speckle Reduction for Ultrasound Images via Phase Congruency. Signal Process. 134, 275–284. doi: 10.1016/j.sigpro.2016.12.011
Zong, J.-j., and Qiu, T.-s. (2017). Medical Image Fusion Based on Sparse Representation of Classified Image Patches. Biomed. Signal Process. Control 34, 195–205. doi: 10.1016/j.bspc.2017.02.005
Summary
Keywords
multi-modal medical image, sparse representation, geometric algebra, image fusion, dictionary learning (DL)
Citation
Li Y, Fang N, Wang H and Wang R (2022) Multi-Modal Medical Image Fusion With Geometric Algebra Based Sparse Representation. Front. Genet. 13:927222. doi: 10.3389/fgene.2022.927222
Received
24 April 2022
Accepted
30 May 2022
Published
23 June 2022
Volume
13 - 2022
Edited by
Ying Li, Zhejiang University, China
Reviewed by
Wei Xiang, La Trobe University, Australia
Zhou Jian, Shanghai Institute of Microsystem and Information Technology (CAS), China
Guo Zunhua, Shandong University, China
Copyright
© 2022 Li, Fang, Wang and Wang.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Rui Wang, rwang@shu.edu.cn
This article was submitted to Computational Genomics, a section of the journal Frontiers in Genetics