ORIGINAL RESEARCH article

Front. Phys., 02 December 2022
Sec. Physical Acoustics and Ultrasonics
Volume 10 - 2022 | https://doi.org/10.3389/fphy.2022.1053353

A new conjugate gradient algorithm for noise reduction in signal processing and image restoration

  • 1School of Mathematics and Information Science, Weifang University, Weifang, Shandong, China
  • 2School of Management Science, Qufu Normal University, Rizhao, Shandong, China

Noise-reduction methods are an area of intensive research in signal processing. In this article, a new conjugate gradient method is proposed for noise reduction in signal processing and image restoration. The superiority of this method lies in its employment of the ideas of accelerated conjugate gradient methods in conjunction with a new adaptive method for choosing the step size. Under some assumptions, the weak convergence of the designed method is established. As example applications, we implemented our method to solve signal-processing and image-restoration problems. The results of our numerical simulations demonstrate the effectiveness and superiority of the new approach.

1 Introduction

Noise reduction is an important step in signal pre-processing; it is widely applied in many fields, including underwater acoustic imaging [1, 2], pattern recognition [3], and target detection and feature extraction [4], among others [5]. In this article, a new approach based on a conjugate gradient method is derived from mathematical principles.

We consider the following degradation model for a signal or image:

$$y = A\omega + \varepsilon, \tag{1}$$

where $\omega \in \mathbb{R}^N$ is the original signal or image, $A$ is the degradation operator, $\varepsilon$ is the noise, and $y \in \mathbb{R}^M$ is the observed data. The essence of noise reduction is solving Eq. 1 to obtain $\omega$, which can be formulated as the following problem [6]:

$$\min_{\omega \in \mathbb{R}^N} \tfrac{1}{2}\|y - A\omega\|^2 \quad \text{s.t.} \quad \|\omega\|_1 \le r, \tag{2}$$

where $r > 0$ and $\|\cdot\|_1$ is the $\ell_1$ norm. Let $C = \{\omega \in \mathbb{R}^N : \|\omega\|_1 \le r\}$ and $Q = \{y\}$; then Eq. 2 can be seen as a split feasibility problem (SFP) [7–10]. Thus, we translate the noise-reduction problem into an SFP, which can be described as:

$$\text{find } \omega \in C \text{ such that } A\omega \in Q, \tag{3}$$

where $H_1$ and $H_2$ are real Hilbert spaces, $A: H_1 \to H_2$ is a bounded linear operator, and $C \subseteq H_1$ and $Q \subseteq H_2$ are nonempty, closed, convex sets. To solve the SFP, Byrne [11, 12] presented the CQ algorithm, which creates a sequence $\{\omega_i\}$:

$$\omega_{i+1} = P_C\left(\omega_i - \tau_i A^*(I - P_Q)A\omega_i\right), \tag{4}$$

where $P_C$ is the projection onto $C$, $P_Q$ is the projection onto $Q$, $\tau_i \in (0, 2/\|A\|^2)$, and $A^*$ is the adjoint operator of $A$. For convex functions $c$ and $q$, the definitions of $C$ and $Q$ are

$$C = \{\omega \in H_1 : c(\omega) \le 0\} \quad \text{and} \quad Q = \{u \in H_2 : q(u) \le 0\}.$$
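To make Eq. 4 concrete for the noise-reduction model in Eq. 1, the following minimal NumPy sketch (our illustration, not the authors' implementation) instantiates the CQ iteration with $C$ the $\ell_1$ ball of radius $r$ and $Q = \{y\}$, so that $P_Q(A\omega) = y$; the $\ell_1$-ball projection uses the standard sorting-based scheme.

```python
import numpy as np

def project_l1_ball(w, r):
    """Euclidean projection of w onto C = {x : ||x||_1 <= r} (sort-based scheme)."""
    if np.abs(w).sum() <= r:
        return w.copy()
    u = np.sort(np.abs(w))[::-1]              # |w| sorted in decreasing order
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, w.size + 1) > css - r)[0][-1]
    theta = (css[k] - r) / (k + 1.0)          # soft-threshold level
    return np.sign(w) * np.maximum(np.abs(w) - theta, 0.0)

def cq_algorithm(A, y, r, tau, n_iter=1000):
    """Byrne's CQ iteration (Eq. 4) for Eq. 1 with C the l1 ball and Q = {y}.
    tau should lie in (0, 2/||A||^2)."""
    w = np.zeros(A.shape[1])
    for _ in range(n_iter):
        residual = A @ w - y                  # (I - P_Q)(A w), since P_Q(A w) = y
        w = project_l1_ball(w - tau * (A.T @ residual), r)
    return w
```

For instance, $\tau = 1/\|A\|^2$ lies in the admissible interval $(0, 2/\|A\|^2)$.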

Many research works have been devoted to solving Eq. 3. In 2004, Yang [13] presented a relaxed CQ algorithm using $P_{C_i}$ and $P_{Q_i}$ in place of $P_C$ and $P_Q$, where the two sets at the point $\omega_i$ are defined by

$$C_i = \{\omega \in H_1 : c(\omega_i) \le \langle \zeta_i, \omega_i - \omega \rangle\}, \tag{5}$$

where $\zeta_i \in \partial c(\omega_i)$, and

$$Q_i = \{u \in H_2 : q(A\omega_i) \le \langle \vartheta_i, A\omega_i - u \rangle\}, \tag{6}$$

where $\vartheta_i \in \partial q(A\omega_i)$. For all $i > 1$, clearly, $C \subseteq C_i$ and $Q \subseteq Q_i$. In addition, $C_i$ and $Q_i$ are half-spaces. Furthermore, referring to [14], we define

$$f_i(\omega) = \tfrac{1}{2}\|(I - P_{C_i})\omega\|^2 + \tfrac{1}{2}\|(I - P_{Q_i})A\omega\|^2, \tag{7}$$

where $C_i$ and $Q_i$ are given in Eqs. 5 and 6. In this case, its gradient is

$$\nabla f_i(\omega) = (I - P_{C_i})\omega + A^*(I - P_{Q_i})A\omega. \tag{8}$$
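Since $C_i$ and $Q_i$ are half-spaces, the projections appearing in Eqs. 7 and 8 have closed forms. The sketch below is a NumPy illustration under the definitions in Eqs. 5–8 (the function and variable names are ours; the subgradients are assumed to be supplied, and the input is returned unchanged when a subgradient is zero, in which case the half-space is the whole space).

```python
import numpy as np

def project_halfspace(x, a, b):
    """Projection onto the half-space {z : <a, z> <= b}; if a = 0 the set is the
    whole space and x is returned unchanged."""
    aa = a @ a
    gap = a @ x - b
    if aa == 0.0 or gap <= 0.0:
        return x.copy()
    return x - (gap / aa) * a

def proximity_and_gradient(w, A, w_i, c_wi, zeta_i, q_Awi, vartheta_i):
    """f_i(w) and grad f_i(w) from Eqs. 7-8, where
    C_i = {w : <zeta_i, w> <= <zeta_i, w_i> - c(w_i)}            (Eq. 5)
    Q_i = {u : <vartheta_i, u> <= <vartheta_i, A w_i> - q(A w_i)} (Eq. 6)."""
    rC = w - project_halfspace(w, zeta_i, zeta_i @ w_i - c_wi)
    Aw = A @ w
    rQ = Aw - project_halfspace(Aw, vartheta_i, vartheta_i @ (A @ w_i) - q_Awi)
    f_val = 0.5 * (rC @ rC) + 0.5 * (rQ @ rQ)
    grad = rC + A.T @ rQ
    return f_val, grad
```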

Yang [13] presented a relaxed CQ algorithm in a finite-dimensional Hilbert space:

$$\omega_{i+1} = P_{C_i}\left(\omega_i - \tau_i \nabla f_i(\omega_i)\right), \tag{9}$$

where $\tau_i \in (0, 2/\|A\|^2)$. Notice that calculating $\|A\|$ is complex and costly when $A$ is a high-dimensional dense matrix. In 2005, Yang [15] presented a new adaptive step size $\tau_i$, defined as:

$$\tau_i = \frac{\rho_i}{\|\nabla f_i(\omega_i)\|}, \tag{10}$$

where

$$\sum_{i=1}^{\infty} \rho_i = \infty, \qquad \sum_{i=1}^{\infty} \rho_i^2 < +\infty.$$
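One admissible sequence is $\rho_i = \rho_0/(i+1)$; the short sketch below (an illustrative choice of ours, not the article's) shows how the step size of Eq. 10 would then be formed.

```python
def yang_step(i, grad_norm, rho0=1.0):
    """Yang's variable step (Eq. 10): tau_i = rho_i / ||grad f_i(omega_i)||, with
    rho_i = rho0 / (i + 1), which satisfies sum(rho_i) = inf and sum(rho_i^2) < inf."""
    rho_i = rho0 / (i + 1)
    return rho_i / grad_norm
```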

However, Yang’s step size (Eq. 10) requires that $Q_i$ is bounded and the matrix $A$ has full rank. Wang [16] later removed these restrictions. For the CQ algorithm, López [17] introduced a novel step size to overcome these problems, defined as:

$$\tau_i = \frac{\rho_i f_i(\omega_i)}{\|\nabla f_i(\omega_i)\|^2}, \tag{11}$$

where $\rho_i \in (0, 4)$. With López’s step size (Eq. 11), it was proved that $\{\omega_i\}$ in Eq. 9 converges weakly to a solution of the SFP.
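A relaxed CQ iteration with this step size can be sketched as follows (a NumPy-style illustration; `prox_grad` stands for an evaluation of $f_i$ and $\nabla f_i$ as in Eqs. 7 and 8, and `proj_Ci` for the half-space projection onto $C_i$, both of which are rebuilt from the current iterate in a full implementation).

```python
def relaxed_cq_lopez(w0, prox_grad, proj_Ci, rho=1.0, n_iter=1000, tol=1e-12):
    """Relaxed CQ iteration (Eq. 9) with Lopez's step size (Eq. 11):
    tau_i = rho_i * f_i(w_i) / ||grad f_i(w_i)||^2, with rho_i in (0, 4)."""
    w = w0.copy()
    for i in range(n_iter):
        f_val, g = prox_grad(w, i)        # f_i(w_i), grad f_i(w_i)
        g_norm2 = g @ g
        if g_norm2 <= tol:                # grad f_i(w_i) = 0: w_i already solves the SFP
            break
        tau = rho * f_val / g_norm2
        w = proj_Ci(w - tau * g, i)
    return w
```

If $\nabla f_i(\omega_i) = 0$, then $\omega_i$ already solves the SFP, which is why the loop terminates in that case.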

In 2005, Qu and Xiu [18] introduced a relaxed CQ algorithm improved by an Armijo line search in Euclidean space. In 2017, building on this work, Gibali et al. [19] extended it to Hilbert spaces and proved that $\{\omega_i\}$ converges weakly to a solution of the SFP, using the iteration:

$$y_i = P_{C_i}\left(\omega_i - \tau_i \nabla f_i(\omega_i)\right), \qquad \omega_{i+1} = P_{C_i}\left(\omega_i - \tau_i \nabla f_i(y_i)\right), \tag{12}$$

where $\tau_i = \gamma \ell^{l_i}$ with $\gamma > 0$, $\ell \in (0, 1)$, $\nu \in (0, 1)$, and $l_i$ is the smallest nonnegative integer such that

$$\tau_i \left\|\nabla f_i(\omega_i) - \nabla f_i(y_i)\right\| \le \nu \|\omega_i - y_i\|.$$
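The backtracking search in Eq. 12 can be sketched as follows (an illustration of the stated condition, using the same assumed callables as above; the default parameter values are ours).

```python
import numpy as np

def gibali_step(w, i, prox_grad, proj_Ci, gamma=1.0, ell=0.5, nu=0.5, max_backtracks=60):
    """One iteration of Eq. 12: backtrack tau_i = gamma * ell**l_i until
    tau_i * ||grad f_i(w_i) - grad f_i(y_i)|| <= nu * ||w_i - y_i||, then update."""
    _, g_w = prox_grad(w, i)
    tau = gamma
    for _ in range(max_backtracks):
        y = proj_Ci(w - tau * g_w, i)
        _, g_y = prox_grad(y, i)
        if tau * np.linalg.norm(g_w - g_y) <= nu * np.linalg.norm(w - y):
            break
        tau *= ell
    return proj_Ci(w - tau * g_y, i)      # w_{i+1} = P_{C_i}(w_i - tau_i * grad f_i(y_i))
```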

In 2020, Kesornprom et al. [20] introduced a gradient-CQ algorithm and derived a weak-convergence theorem for solving the SFP in the framework of Hilbert spaces. It is described as:

$$y_i = \omega_i - \tau_i \nabla f_i(\omega_i), \qquad \omega_{i+1} = P_{C_i}\left(y_i - \varphi_i \nabla f_i(y_i)\right),$$

where $C_i$, $f_i$, and $\nabla f_i$ are given in Eqs. 5, 7, and 8, respectively, and

$$\tau_i = \frac{\rho_i f_i(\omega_i)}{\|\nabla f_i(\omega_i)\|^2 + \theta_i}, \qquad \varphi_i = \frac{\rho_i f_i(y_i)}{\|\nabla f_i(y_i)\|^2 + \theta_i}, \qquad 0 < \rho_i < 4, \quad 0 < \theta_i < 1.$$
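This two-step scheme can be sketched as follows (again an illustration, with our own choice of the sequence $\theta_i$; any sequence with $0 < \theta_i < 1$ fits the stated requirement).

```python
def gradient_cq(w0, prox_grad, proj_Ci, rho=1.1, n_iter=1000):
    """Gradient-CQ iteration of Kesornprom et al. [20]:
    y_i = w_i - tau_i * grad f_i(w_i),  w_{i+1} = P_{C_i}(y_i - phi_i * grad f_i(y_i)),
    with tau_i, phi_i as displayed above (0 < rho_i < 4, 0 < theta_i < 1)."""
    w = w0.copy()
    for i in range(n_iter):
        theta = 1.0 / (i + 2)             # one admissible choice of theta_i in (0, 1)
        f_w, g_w = prox_grad(w, i)
        tau = rho * f_w / (g_w @ g_w + theta)
        y = w - tau * g_w
        f_y, g_y = prox_grad(y, i)
        phi = rho * f_y / (g_y @ g_y + theta)
        w = proj_Ci(y - phi * g_y, i)
    return w
```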

The conjugate gradient method [21] is a commonly used scheme for accelerating the steepest descent method. The conjugate gradient direction of $f$ at $\omega_{i+1}$ is

$$d_{i+1} = -\nabla f(\omega_{i+1}) + \beta_i d_i,$$

where $d_0 = -\nabla f(\omega_0)$ and $\beta_i > 0$. In this article, motivated by previous works [22–24], a new viscosity approximation method based on the conjugate gradient method is introduced. Many other iterative methods for solving the SFP have been proposed [25–29].
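For reference, the direction update and one classical choice of $\beta_i$ (the Fletcher–Reeves formula [21]) look as follows; this is a generic illustration, not the specific rule used in our algorithm.

```python
def fletcher_reeves_beta(grad_new, grad_old):
    """One classical choice of beta_i (Fletcher-Reeves); other formulas exist."""
    return (grad_new @ grad_new) / (grad_old @ grad_old)

def cg_direction(grad_new, d_prev, beta):
    """d_{i+1} = -grad f(omega_{i+1}) + beta_i * d_i, with d_0 = -grad f(omega_0)."""
    return -grad_new + beta * d_prev
```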

Herein, combining the relaxed CQ algorithm with a new step size and the conjugate gradient method, we solve the noise-reduction problem in Eq. 1 by reformulating it as an SFP in Hilbert spaces. Section 2 gives some basic definitions and lemmas. In Section 3, the weak-convergence theorem for our method is presented. In Section 4, we present experimental results and compare them with the relaxed CQ algorithms of López [17], Yang [15], and Sakurai and Iiduka [20]. Finally, conclusions are given in Section 5.

2 Preliminaries

Throughout this article, to obtain our results, some technical lemmas are used.

3 Algorithm and convergence

A novel gradient-CQ algorithm is established in this section. Furthermore, we prove that the sequence generated by our approach is weakly convergent.

4 Experimental results

In this section, we describe numerical simulations demonstrating the application of Yang’s algorithm [15], López’s algorithm [17], Sakurai and Iiduka’s algorithm [20], and the proposed algorithm (Algorithm 1) to signal processing and image recovery. The results show that the proposed method is more efficient than these well-known methods from the literature. The experiments were carried out in MATLAB 2016 on an Intel(R) Core(TM) i5-8265U CPU @ 1.60 GHz (1.80 GHz).

4.1 Signal processing

In this test, the original signal has $m$ nonzero components; according to Eq. 1, we choose $N = 4096$, $M = 2048$, and $m = 128$. The mean and variance of the Gaussian noise are 0 and $10^{-4}$, respectively. The initial points are $\omega_1 = (1, 1, \ldots, 1)^T$ and $\omega_0 = (0, 0, \ldots, 0)^T$, with $\alpha_1 = 0.8$, $\alpha_2 = 0.9$, $\rho_i = 1.1$, $\theta_i = 1/i^3$, and $r = m$. The mean squared error (MSE) is used as the evaluation criterion, defined as:

$$\text{MSE} = \frac{1}{N}\|\omega^* - \omega\|^2,$$

where $\omega$ is the original signal and $\omega^*$ is the recovered signal. We set the stopping criterion as MSE $\le 10^{-5}$. Figure 1 shows the results of this experiment, which indicate that the number of iterations and CPU time required by our approach are the lowest of the four methods.
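The test setup can be reproduced with a sketch like the following (the Gaussian measurement matrix and the $\pm 1$ spike values are assumptions of ours; the article specifies only the dimensions, the noise statistics, the parameters listed above, and the MSE criterion).

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, m = 4096, 2048, 128                  # signal length, number of measurements, nonzeros

omega = np.zeros(N)                        # original m-sparse signal
support = rng.choice(N, size=m, replace=False)
omega[support] = rng.choice([-1.0, 1.0], size=m)

A = rng.standard_normal((M, N))            # degradation operator (assumed Gaussian here)
noise = np.sqrt(1e-4) * rng.standard_normal(M)   # Gaussian noise: mean 0, variance 1e-4
y = A @ omega + noise                      # observed data, Eq. 1

def mse(w_rec, w_orig):
    """Mean squared error; iterations stop once mse(...) <= 1e-5."""
    return np.linalg.norm(w_rec - w_orig) ** 2 / w_orig.size
```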

FIGURE 1. From top to bottom: original signal, observed signal, and signals recovered by López’s algorithm, Yang’s algorithm, Sakurai and Iiduka’s algorithm, and Algorithm 1.

4.2 Image recovery

The value of each pixel in a grayscale image is in the range [0, 255]. Image restoration can be described as the minimization problem:

$$\min_{\bar{s} \in C} \|A\bar{s} - y\|_2,$$

where ‖⋅‖2 is the standard Euclidean norm, y is the observed image, s̄ is the approximation of the original image, and A is a blurring operator. When a color image is processed, we divide it into three channels: red, green, and blue. Supposing the size of the image in each channel is M × N, we have the formula for the MSE:

$$\text{MSE} = \frac{1}{MN}\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}\left(\bar{s}(i,j) - s(i,j)\right)^2,$$

where s̄ and s are the restored and original images, respectively.
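Read concretely, the per-channel MSE can be computed as below (averaging the three channel values into a single color-image score is our convention; the article defines only the per-channel quantity).

```python
import numpy as np

def channel_mse(restored, original):
    """MSE of one M-by-N channel, matching the displayed formula."""
    diff = restored.astype(float) - original.astype(float)
    return np.sum(diff ** 2) / diff.size

def color_mse(restored_rgb, original_rgb):
    """Split an H x W x 3 color image into R, G, B channels and average the channel MSEs."""
    return np.mean([channel_mse(restored_rgb[..., k], original_rgb[..., k]) for k in range(3)])
```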

To evaluate the quality of image recovery, we use the signal-to-noise ratio (SNR) and peak SNR (PSNR), which are defined as:

$$\text{SNR} = 20\log_{10}\frac{\|\bar{s}\|_2}{\|s - \bar{s}\|_2}, \qquad \text{PSNR} = 20\log_{10}\frac{255}{\sqrt{\text{MSE}}}.$$
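These quality measures translate directly into code; the sketch below assumes 8-bit images with peak value 255.

```python
import numpy as np

def snr_db(restored, original):
    """SNR = 20 * log10(||restored||_2 / ||original - restored||_2), in dB."""
    restored = restored.astype(float)
    original = original.astype(float)
    return 20.0 * np.log10(np.linalg.norm(restored) / np.linalg.norm(original - restored))

def psnr_db(restored, original):
    """PSNR = 20 * log10(255 / sqrt(MSE)) for 8-bit images, in dB."""
    mse_val = np.mean((restored.astype(float) - original.astype(float)) ** 2)
    return 20.0 * np.log10(255.0 / np.sqrt(mse_val))
```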

In short, larger SNR and PSNR values indicate better restoration of the image. Figure 2 shows the results of recovering different color images. Figure 3 shows a comparison of the SNR and PSNR values for images recovered using the four algorithms. The experimental results show that the proposed algorithm always has the largest SNR and PSNR values for the different images, which clearly indicates that it is more effective in recovery than the other algorithms.

We next applied our method to medical images. Figures 4 and 5 show computed tomography (CT) images of a knee joint and a head, and Figure 6 shows a comparison of the SNR values resulting from recovery using each algorithm for these images. From Figure 6, it can be seen clearly that the SNR of our method (the red line) is significantly higher than that of the other methods.

FIGURE 2. Comparison of recovered color images of Lena, peppers, house, and panda using different algorithms with 1,000 iterations. From left to right: original image, noisy image, López’s algorithm, Yang’s algorithm, Sakurai and Iiduka’s algorithm, and Algorithm 1.

FIGURE 3. Comparison of SNR (left) and PSNR (right) values resulting from image recovery using the four algorithms with 1,000 iterations.

FIGURE 4. Comparison of CT images of a knee joint recovered using different algorithms with 500 iterations. From left to right: original image, noisy image, López’s algorithm, Yang’s algorithm, Sakurai and Iiduka’s algorithm, and Algorithm 1.

FIGURE 5. Comparison of CT images of a head recovered using different algorithms with 500 iterations. From left to right: original image, noisy image, López’s algorithm, Yang’s algorithm, Sakurai and Iiduka’s algorithm, and Algorithm 1.

FIGURE 6. Comparison of SNR values of the knee joint (left) and head (right) images resulting from image recovery using the four algorithms with 500 iterations.

Figure 7 shows an original grayscale image, and Figure 8 shows this image sampled at different rates. Figure 9 shows the images recovered by the four algorithms at the different sampling rates, and Figure 10 compares the SNR values of these images. It can clearly be seen that the performance of our method is the best.

In summary, our method provides higher SNR and PSNR values than López’s algorithm, Yang’s algorithm, or Sakurai and Iiduka’s algorithm.

FIGURE 7. Original image.

FIGURE 8. Versions of the image in Figure 7 with sampling rates of 30%, 40%, 50%, and 60%, from left to right.

FIGURE 9. Comparison of images recovered using the four algorithms. From top to bottom: López’s algorithm, Yang’s algorithm, Sakurai and Iiduka’s algorithm, and Algorithm 1; from left to right: sampling rates of 30%, 40%, 50%, and 60%.

FIGURE 10. Comparison of the SNR values of the images in Figure 9.

5 Conclusion

In this article, we propose a new conjugate gradient method for signal recovery. The superiority of our method lies in its combination of the ideas of accelerated conjugate gradient methods with a new adaptive way of choosing the step size. Under some assumptions, the weak convergence of the designed method was established. As application demonstrations, we implemented our method to solve signal-processing and image-restoration problems. The results of our numerical simulations verify the effectiveness and superiority of the new approach. However, in the numerical experiments in this paper, we always assume that the noise is known. In future work, we will study signal and image recovery without prior knowledge of the noise using optimization methods.

Data availability statement

The raw data supporting the conclusion of this article will be made available by the authors, without undue reservation.

Author contributions

PH and KL designed the work, wrote the manuscript, and managed communication among all the authors. PH and KL analyzed the data. All authors contributed to the article and approved the submitted version.

Funding

This project is supported by the Shandong Provincial Natural Science Foundation (Grant No. ZR2019MA022).

Acknowledgments

The authors would like to thank the reviewers and editors for their useful comments, which have helped to improve this article.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

1. Zhang X, Wu H, Sun H, Ying W. Multireceiver SAS imagery based on monostatic conversion. IEEE J Sel Top Appl Earth Obs Remote Sens (2021) 14:10835–53. doi:10.1109/jstars.2021.3121405

2. Zhang X, Ying W, Yang P, Sun M. Parameter estimation of underwater impulsive noise with the Class B model. IET Radar Sonar & Navigation (2020) 14:1055–60. doi:10.1049/iet-rsn.2019.0477

3. Li Y, Geng B, Jiao S. Dispersion entropy-based lempel–ziv complexity: A new metric for signal analysis. Chaos Solitons Fractals (2022) 161:112400. doi:10.1016/j.chaos.2022.112400

4. Li Y, Tang B, Yi Y. A novel complexity-based mode feature representation for feature extraction of ship-radiated noise using VMD and slope entropy. Appl Acoust (2022) 196:108899. doi:10.1016/j.apacoust.2022.108899

5. Li Y, Mu L, Gao P. Particle swarm optimization fractional slope entropy: A new time series complexity indicator for bearing fault diagnosis. Fractal Fract (2022) 6:345. doi:10.3390/fractalfract6070345

6. Cai T, Xu G, Zhang J. On recovery of sparse signals via l1 minimization. IEEE Trans Inf Theor (2009) 55:3388–97. doi:10.1109/tit.2009.2021377

7. Censor Y, Elfving T. A multiprojection algorithm using Bregman projections in a product space. Numer Algorithms (1994) 8:221–39. doi:10.1007/bf02142692

8. Xu H. Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl (2010) 26:105018. doi:10.1088/0266-5611/26/10/105018

9. Moudafi A, Gibali A. l1-l2 regularization of split feasibility problems. Numer Algorithms (2017) 78:739–57. doi:10.1007/s11075-017-0398-6

10. Bauschke H, Combettes P. A weak-to-strong convergence principle for Fejér-monotone methods in Hilbert spaces. Mathematics OR (2001) 26:248–64. doi:10.1287/moor.26.2.248.10558

11. Byrne C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl (2002) 18:441–53. doi:10.1088/0266-5611/18/2/310

12. Byrne C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl (2004) 20:103–20. doi:10.1088/0266-5611/20/1/006

13. Yang Q. The relaxed CQ algorithm solving the split feasibility problem. Inverse Probl (2004) 20:1261–6. doi:10.1088/0266-5611/20/4/014

14. Wang F. Polyak’s gradient method for split feasibility problem constrained by level sets. Numer Algorithms (2017) 77:925–38. doi:10.1007/s11075-017-0347-4

15. Yang Q. On variable-step relaxed projection algorithm for variational inequalities. J Math Anal Appl (2005) 302:166–79. doi:10.1016/j.jmaa.2004.07.048

16. Wang F. On the convergence of CQ algorithm with variable steps for the split equality problem. Numer Algorithms (2017) 74:927–35. doi:10.1007/s11075-016-0177-9

17. Lopez G, Martin-Marquez V, Wang F, Xu HK. Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl (2012) 28:085004. doi:10.1088/0266-5611/28/8/085004

18. Qu B, Xiu N. A note on the CQ algorithm for the split feasibility problem. Inverse Probl (2005) 21:1655–65. doi:10.1088/0266-5611/21/5/009

19. Gibali A, Liu LW, Tang YC. Note on the modified relaxation CQ algorithm for the split feasibility problem. Optim Lett (2017) 12:817–30. doi:10.1007/s11590-017-1148-3

20. Kesornprom S, Pholasa N, Cholamjiak P. On the convergence analysis of the gradient-CQ algorithms for the split feasibility problem. Numer Algorithms (2020) 84:997–1017. doi:10.1007/s11075-019-00790-y

21. Nocedal J, Wright SJ. Numerical optimization. New York: Springer (2006).

22. Iiduka H. Iterative algorithm for solving triple-hierarchical constrained optimization problem. J Optim Theor Appl (2011) 148:580–92. doi:10.1007/s10957-010-9769-z

23. Nocedal J. Numerical optimization: Springer series in operations research and financial engineering. New York: Springer (2006).

24. Sakurai K, Iiduka H. Acceleration of the Halpern algorithm to search for a fixed point of a nonexpansive mapping. Fixed Point Theor Appl (2014) 2014:202. doi:10.1186/1687-1812-2014-202

25. Dang Y, Gao Y. The strong convergence of a KM-CQ-like algorithm for a split feasibility problem. Inverse Probl (2011) 27:015007. doi:10.1088/0266-5611/27/1/015007

26. Suantai S, Pholasa N, Cholamjiak P. Relaxed CQ algorithms involving the inertial technique for multiple-sets split feasibility problems. Rev Real Acad Cienc Exactas (2019) 13:1081–99. doi:10.1007/s13398-018-0535-7

27. Wang F, Xu HK. Approximating curve and strong convergence of the CQ algorithm for the split feasibility problem. J Inequal Appl (2010) 2010:1–13. doi:10.1155/2010/102085

28. Xu H. Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl (2010) 26:105018. doi:10.1088/0266-5611/26/10/105018

29. Zhao J, Zhang Y, Yang Q. Modified projection methods for the split feasibility problem and the multiple-sets split feasibility problem. Appl Math Comput (2012) 219:1644–53. doi:10.1016/j.amc.2012.08.005

30. Goebel K, Reich S. Uniform convexity. New York: Marcel Dekker (1984).

Keywords: signal processing, image restoration, weak convergence, noise reduction, conjugate gradient method

Citation: Huang P and Liu K (2022) A new conjugate gradient algorithm for noise reduction in signal processing and image restoration. Front. Phys. 10:1053353. doi: 10.3389/fphy.2022.1053353

Received: 25 September 2022; Accepted: 02 November 2022;
Published: 02 December 2022.

Edited by:

Yuxing Li, Xi’an University of Technology, China

Reviewed by:

Dezhou Kong, Shandong Agricultural University, China
Ge Tian, Xi’an University of Technology, China

Copyright © 2022 Huang and Liu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Kaiping Liu, kaipliu@163.com
