ORIGINAL RESEARCH article

Front. Phys., 02 December 2022

Sec. Physical Acoustics and Ultrasonics

Volume 10 - 2022 | https://doi.org/10.3389/fphy.2022.1053353

A new conjugate gradient algorithm for noise reduction in signal processing and image restoration

  • 1. School of Mathematics and Information Science, Weifang University, Weifang, Shandong, China

  • 2. School of Management Science, Qufu Normal University, Rizhao, Shandong, China


Abstract

Noise-reduction methods are an area of intensive research in signal processing. In this article, a new conjugate gradient method is proposed for noise reduction in signal processing and image restoration. The superiority of this method lies in its employment of the ideas of accelerated conjugate gradient methods in conjunction with a new adaptive method for choosing the step size. Under some assumptions, the weak convergence of the designed method is established. As example applications, we implemented our method to solve signal-processing and image-restoration problems. The results of our numerical simulations demonstrate the effectiveness and superiority of the new approach.

1 Introduction

Noise reduction is an important step in signal pre-processing; it is widely applied in many fields, including underwater acoustic imaging [1, 2], pattern recognition [3], and target detection and feature extraction [4], among others [5]. In this article, a new approach based on a conjugate gradient method is derived from mathematical principles.

We consider the degradation model of a signal or image:

y = Aω + ε, (1)

where ω is the original signal or image, A is the degradation operator, ε is the noise, and y is the observed data. The essence of noise reduction is solving Eq. 1 to obtain ω. Solving Eq. 1 can be cast as the following problem [6]:

min (1/2)‖Aω − y‖² subject to ‖ω‖1 ≤ r, (2)

where r > 0 and ‖ ⋅ ‖1 is the l1 norm. Let C = {ω : ‖ω‖1 ≤ r} and Q = {y}; then Eq. 2 can be seen as a split feasibility problem (SFP) [7–10]. Thus, we translate the problem of noise reduction into the SFP, which can be described as:

find ω ∈ C such that Aω ∈ Q, (3)

where H1 and H2 are real Hilbert spaces, A : H1 → H2 is a bounded linear operator, the set C ⊆ H1 is closed and convex (C ≠ ∅), and Q ⊆ H2 is closed and convex (Q ≠ ∅). In order to solve the SFP, Byrne [11, 12] presented the CQ algorithm, which creates a sequence {ωi}:

ωi+1 = PC(ωi − τA*(I − PQ)Aωi), (4)

where PC is the projection onto C, PQ is the projection onto Q, τ ∈ (0, 2/‖A‖²), and A* is the adjoint operator of A. For convex functions c and q, the definitions of C and Q are

C = {ω ∈ H1 : c(ω) ≤ 0} and Q = {y ∈ H2 : q(y) ≤ 0}.
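For concreteness, Byrne's CQ iteration can be sketched in NumPy for the exact setting used here, C = {ω : ‖ω‖1 ≤ r} and Q = {y}. This is an illustrative translation (the paper's experiments use MATLAB); the function names and the sort-based l1-ball projection are our own choices, not part of the paper:

```python
import numpy as np

def project_l1_ball(w, r):
    """Euclidean projection of w onto the l1-ball {x : ||x||_1 <= r},
    computed with the standard sort-and-threshold scheme."""
    if np.abs(w).sum() <= r:
        return w.copy()
    u = np.sort(np.abs(w))[::-1]            # sorted magnitudes, descending
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, w.size + 1) > (css - r))[0][-1]
    theta = (css[k] - r) / (k + 1.0)        # soft-threshold level
    return np.sign(w) * np.maximum(np.abs(w) - theta, 0.0)

def cq_algorithm(A, y, r, tau, n_iter=200):
    """Byrne's CQ iteration  w_{i+1} = P_C(w_i - tau * A^T (I - P_Q) A w_i)
    with C the l1-ball of radius r and Q = {y}, so that P_Q(z) = y and the
    residual (I - P_Q) A w reduces to A w - y.  Requires tau in (0, 2/||A||^2)."""
    w = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ w - y)            # A^T (I - P_Q) A w
        w = project_l1_ball(w - tau * grad, r)
    return w
```

With a singleton Q, each step is a gradient step on (1/2)‖Aω − y‖² followed by projection onto the l1-ball, which is exactly how the sparse recovery problem above fits the SFP template.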

There have been several research works devoted to solving Eq. 3. In 2004, Yang [13] presented a relaxed CQ algorithm using PCi and PQi in place of PC and PQ. Here, two relaxed sets at the point ωi are defined by

Ci = {ω ∈ H1 : c(ωi) + ⟨ζi, ω − ωi⟩ ≤ 0}, (5)

where ζi ∈ ∂c(ωi), and

Qi = {y ∈ H2 : q(Aωi) + ⟨ξi, y − Aωi⟩ ≤ 0}, (6)

where ξi ∈ ∂q(Aωi). For all i > 1, clearly, C ⊆ Ci and Q ⊆ Qi. In addition, Ci and Qi are half-spaces. Furthermore, referring to [14], we define

fi(ω) = (1/2)‖(I − PQi)Aω‖², (7)

where Ci and Qi are given as in Eqs. 5, 6. In this specific case, the gradient is

∇fi(ω) = A*(I − PQi)Aω. (8)

Yang [13] presented the relaxed CQ algorithm in a finite-dimensional Hilbert space:

ωi+1 = PCi(ωi − τ∇fi(ωi)), (9)

where τ ∈ (0, 2/‖A‖²). Notice that calculating ‖A‖ is complex and costly when A is a high-dimensional dense matrix. In 2005, Yang [15] presented a new adaptive step size τi, defined as:

τi = ρi/‖∇fi(ωi)‖, (10)

where ∑ρi = ∞ and ∑ρi² < ∞. However, Yang's step size (Eq. 10) requires that Qi is bounded and the matrix A is full rank. Recently, Wang [16] completely eliminated these requirements. Considering the CQ algorithm, López [17] introduced a novel step size to overcome these problems, defined as:

τi = ρifi(ωi)/‖∇fi(ωi)‖², (11)

where ρi ∈ (0, 4). With López's step size (Eq. 11), it was proved that {ωi} in Eq. 9 converges weakly to a solution of the SFP.
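The practical appeal of López's step size (Eq. 11) is that it needs no knowledge of ‖A‖. A minimal NumPy sketch, assuming the simple case Qi = {y} (so fi(ω) = (1/2)‖Aω − y‖²) and a user-supplied projection for the relaxed set — the function names are illustrative, not the authors' code:

```python
import numpy as np

def relaxed_cq_lopez(A, y, proj_C, rho=1.0, n_iter=200):
    """Relaxed CQ iteration with the Lopez-type step size
    tau_i = rho * f_i(w_i) / ||grad f_i(w_i)||^2,  rho in (0, 4),
    specialized to Q_i = {y}:  f(w) = 0.5*||A w - y||^2 and
    grad f(w) = A^T (A w - y).  proj_C projects onto the relaxed set C_i."""
    w = np.zeros(A.shape[1])
    for _ in range(n_iter):
        res = A @ w - y                   # (I - P_Q) A w
        grad = A.T @ res                  # gradient of f at w
        g2 = grad @ grad
        if g2 == 0.0:                     # A w already lies in Q: stop
            break
        tau = rho * 0.5 * (res @ res) / g2
        w = proj_C(w - tau * grad)
    return w
```

No estimate of ‖A‖ enters the iteration, which is precisely the advantage over the fixed step τ ∈ (0, 2/‖A‖²) in Eq. 9.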

In 2005, Qu and Xiu [18] introduced a relaxed CQ algorithm improved by an Armijo line search in Euclidean space. In 2017, building on this approach, Gibali et al. [19] extended it to Hilbert spaces and proved that {ωi} converges weakly to a solution of the SFP. Here, the step size is τi = γℓ^(li), where γ > 0, ℓ ∈ (0, 1), li is the smallest nonnegative integer, and ν ∈ (0, 1), such that

τi‖∇fi(ωi) − ∇fi(yi)‖ ≤ ν‖ωi − yi‖, with yi = PCi(ωi − τi∇fi(ωi)). (12)

In 2020, Kesornprom et al. [20] introduced a gradient-CQ algorithm with a step size of López type, τi = ρifi(ωi)/‖∇fi(ωi)‖², and derived a weak-convergence theorem for solving the SFP in the framework of Hilbert spaces; here, Ci, fi, and ∇fi are given in Eqs. 5, 7, 8, respectively.
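The Armijo-type rule used by Qu–Xiu and Gibali et al. can be sketched as a backtracking search. This is a schematic with illustrative names (`grad_f`, `proj_C`), not the authors' exact procedure:

```python
import numpy as np

def armijo_step(w, grad_f, proj_C, gamma=1.0, ell=0.5, nu=0.5):
    """Find tau = gamma * ell**l for the smallest nonnegative integer l with
        tau * ||grad f(w) - grad f(y)|| <= nu * ||w - y||,
    where y = P_C(w - tau * grad f(w)); returns (tau, y).
    The search terminates whenever grad f is Lipschitz continuous,
    which Lemma 2.2 below guarantees for the SFP objective."""
    g = grad_f(w)
    tau = gamma
    while True:
        y = proj_C(w - tau * g)
        if tau * np.linalg.norm(g - grad_f(y)) <= nu * np.linalg.norm(w - y):
            return tau, y
        tau *= ell
```

The trade-off relative to the adaptive step sizes of Eqs. 10–11 is extra gradient evaluations inside the backtracking loop.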

The conjugate gradient method [21] is a commonly used acceleration scheme for the steepest descent method. The conjugate gradient direction of f at ωi+1 is

di+1 = −∇f(ωi+1) + βidi,

where d0 = −∇f(ω0) and βi ∈ (0, 1). In this article, motivated by previous works [22–24], a new viscosity approximation method based on the conjugate gradient method is introduced. Many other iterative methods for solving the SFP have been proposed [25–29].
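As a toy illustration of this acceleration idea (not the paper's Algorithm 1, whose step sizes and coefficients differ), the direction update can be written as:

```python
import numpy as np

def cg_directions(grad_f, w0, step=0.1, beta=0.3, n_iter=100):
    """Steepest descent accelerated by conjugate-gradient-type directions:
    d_0 = -grad f(w_0),  d_{i+1} = -grad f(w_{i+1}) + beta * d_i.
    A fixed step and a fixed beta in (0, 1) are used purely for illustration."""
    w = np.asarray(w0, dtype=float)
    d = -grad_f(w)
    for _ in range(n_iter):
        w = w + step * d                 # move along the current direction
        d = -grad_f(w) + beta * d        # mix new gradient with old direction
    return w
```

The memory term βdi lets each step reuse information from previous gradients, which is what distinguishes the scheme from plain steepest descent.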

Herein, combining the relaxed CQ algorithm having a new step size with the conjugate gradient method, we solve the noise-reduction problem in Eq. 1 through the SFP in Hilbert spaces with a novel approach. Section 2 gives some basic definitions and lemmas. In Section 3, the theorem proving the weak convergence of our method is presented. In Section 4, we present experimental results and compare them with the relaxed CQ algorithms of López [17], Yang [15], and Sakurai and Iiduka [20]. Finally, conclusions are given in Section 5.

2 Preliminaries

Throughout this article, to obtain our results, some technical lemmas are used.

Lemma 2.1 [30]. Suppose the nonempty set C ⊆ H1 is closed and convex. Then, for all h1, h2 ∈ H1 and c ∈ C,

  • (i) ⟨h1 − PCh1, c − PCh1⟩ ≤ 0;

  • (ii) ‖PCh1 − PCh2‖² ≤ ⟨PCh1 − PCh2, h1 − h2⟩;

  • (iii) ‖PCh1 − c‖² ≤ ‖h1 − c‖² − ‖PCh1 − h1‖².

From Lemma 2.1(ii), letting I denote the identity operator, I − PC is a firmly nonexpansive operator, i.e.,

‖(I − PC)h1 − (I − PC)h2‖² ≤ ⟨(I − PC)h1 − (I − PC)h2, h1 − h2⟩.

Definition 1. Suppose g : H1 → ℝ is convex; the definition of its subdifferential at w is then

∂g(w) = {ζ ∈ H1 : g(v) ≥ g(w) + ⟨ζ, v − w⟩ for all v ∈ H1}.

To obtain our results, we prove the following lemma.

Lemma 2.2. Let fi(ω) be defined as in Eq. 7; then ∇fi is Lipschitz continuous with Lipschitz constant ‖A‖².

Proof. For any p, q ∈ H1,

‖∇fi(p) − ∇fi(q)‖ = ‖A*(I − PQi)Ap − A*(I − PQi)Aq‖ ≤ ‖A‖‖(I − PQi)Ap − (I − PQi)Aq‖ ≤ ‖A‖‖Ap − Aq‖ ≤ ‖A‖²‖p − q‖,

where the second inequality uses the nonexpansivity of I − PQi. So, ∇fi is ‖A‖²-Lipschitz continuous.
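The Lipschitz constant of Lemma 2.2 can be spot-checked numerically. In the singleton case PQi(z) = y, ∇fi(p) − ∇fi(q) = A*A(p − q), so the bound ‖∇fi(p) − ∇fi(q)‖ ≤ ‖A‖²‖p − q‖ is easy to verify with random vectors (a NumPy sketch; the matrix sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 30))
y = rng.standard_normal(20)
L = np.linalg.norm(A, 2) ** 2       # ||A||^2: the Lipschitz constant of Lemma 2.2

def grad_f(w):
    """Gradient A^T (I - P_Q) A w in the singleton case P_Q(z) = y."""
    return A.T @ (A @ w - y)

# spot-check the bound ||grad f(p) - grad f(q)|| <= ||A||^2 * ||p - q||
for _ in range(100):
    p, q = rng.standard_normal(30), rng.standard_normal(30)
    lhs = np.linalg.norm(grad_f(p) - grad_f(q))
    assert lhs <= L * np.linalg.norm(p - q) + 1e-9
```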

3 Algorithm and convergence

A novel gradient-CQ algorithm is established in this section. Furthermore, we prove that the sequence created by our approach is convergent.

Algorithm 1

We next state our weak-convergence theorem.

Theorem 3.1.

The following assumptions hold:
  • (C1)

  • (C2)

  • (C3)

  • (C4) and are bounded.

Then, {ωi} generated by Algorithm 1 converges weakly to ω* ∈ Ω, where Ω is the nonempty solution set of the SFP.

Proof. First, by using mathematical induction, we show that {di} and {∇fi(ωi)} are bounded. Assume that ‖di‖ ≤ M holds for some i ≤ i0. Assumption (C3) supplies the constant needed for all i ≥ i0; from Algorithm 1, the triangle inequality then guarantees that ‖di‖ ≤ M for all i ≥ i0, so {di} is bounded. Assume that the corresponding bound on ∇fi(ωi) is true for some i ≤ i0; as with the proof that ‖di‖ is bounded, we deduce that {∇fi(ωi)} is bounded.

Let z ∈ Ω. Since Q ⊆ Qi and C ⊆ Ci, we obtain PCiz = z and PQi(Az) = Az; hence, ∇fi(z) = 0. From Lemma 2.1(iii), and combining Lemma 2.1(ii), Eq. 7, and Eq. 8, we obtain an estimate for ‖yi − z‖ (Eq. 14); as with Eq. 14, a similar estimate follows (Eq. 15), and, similarly to Eq. 15, we deduce the corresponding bound for zi. Furthermore, combining Algorithm 1 and Eq. 15, we obtain two further inequalities (Eqs. 17, 18). Thus, from Eqs. 13, 17, 18, together with Theorem 3.1(C3) and 0 < ρi < 4, it follows that limi→∞‖ωi − z‖ exists; hence, {ωi} is bounded, and consequently {yi} and {zi} are bounded.

Returning to the previous step (Eq. 19), conditions (C2) and (C3) of Theorem 3.1 imply that ‖∇fi(ωi)‖ and ‖∇fi(yi)‖ are bounded. From Eqs. 20, 21, combined with Eq. 19 and Eqs. 22, 23, the successive residuals vanish. In addition, from Algorithm 1 and Theorem 3.1(C3), and considering that {ωi} is bounded, we can find a subsequence {ωij} converging weakly to some ω* ∈ H1.

We next prove ω* ∈ Ω. Using Eq. 5, the fact that the iterates lie in Ci, and the boundedness of ∂c, together with Eq. 24, we deduce c(ω*) ≤ 0; hence, ω* ∈ C. We then show that Aω* ∈ Q: the construction of Qi yields the analogous inequality, and, according to Eq. 25, we deduce q(Aω*) ≤ 0; therefore, Aω* ∈ Q. We can thus conclude that {ωi} converges weakly to ω* ∈ Ω. □

4 Experimental results

In this section, we describe numerical simulations demonstrating the applications of Yang's algorithm [15], López's algorithm [17], Sakurai and Iiduka's algorithm [20], and the proposed algorithm (Algorithm 1) in signal processing and image recovery. The results of our simulations show that the proposed method is more efficient than these well-known methods from the literature. The experiments were carried out in MATLAB 2016 on an Intel(R) Core(TM) i5-8265U CPU @ 1.60 GHz (up to 1.80 GHz).

4.1 Signal processing

In this test, the original signal ω has m nonzero components; we choose N = 4,096, M = 2,048, and m = 128 in Eq. 1. The mean and variance of the Gaussian noise are 0 and 10⁻⁴, respectively. The initial points are ω1 = (1, 1, …, 1)T and ω0 = (0, 0, …, 0)T, with α1 = 0.8, α2 = 0.9, ρi = 1.1, and r = m. The mean squared error (MSE) is chosen as the evaluation criterion, defined as

MSE = (1/N)‖ω − ω*‖²,

where ω is the original signal and ω* is the recovered signal. We set the stopping criterion as MSE ≤ 10⁻⁵. Figure 1 shows the results of this experiment, which indicate that the number of iterations and CPU time required by our approach are the best of the four methods.
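The experimental setup above can be reproduced in outline as follows (a NumPy sketch of the paper's MATLAB experiment; the ±1 spike amplitudes and the normalized Gaussian sensing matrix are our assumptions, as the paper does not state them):

```python
import numpy as np

def mse(w, w_star):
    """Mean squared error  MSE = (1/N) * ||w - w_star||^2."""
    w, w_star = np.asarray(w, float), np.asarray(w_star, float)
    return float(np.mean((w - w_star) ** 2))

def make_sparse_problem(N=4096, M=2048, m=128, noise_var=1e-4, seed=0):
    """Build an instance matching the stated setup: an m-sparse original
    signal of length N, an M x N sensing matrix, and Gaussian noise with
    mean 0 and variance 1e-4 (Eq. 1: y = A w + eps)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(N)
    support = rng.choice(N, size=m, replace=False)
    w[support] = rng.choice([-1.0, 1.0], size=m)    # assumed spike values
    A = rng.standard_normal((M, N)) / np.sqrt(M)    # assumed sensing model
    y = A @ w + np.sqrt(noise_var) * rng.standard_normal(M)
    return A, w, y
```

A recovery run then iterates until mse(original, recovered) ≤ 10⁻⁵, matching the stopping criterion above.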

FIGURE 1

From top to bottom: original signal, observed signal, and signals recovered by López’s algorithm, Yang’s algorithm, Sakurai and Iiduka’s algorithm, and Algorithm 1.

4.2 Image recovery

The value of each pixel in a grayscale image lies in the range [0, 255]. Image restoration can be described as the minimization problem

minω (1/2)‖Aω − y‖2²,

where ‖⋅‖2 is the standard Euclidean norm, y is the observed image, ω is the approximation of the original image, and A is a blurring operator. When a color image is processed, we divide it into three channels: red, green, and blue. Supposing the size of the image in each channel is M × N, the formula for the MSE is

MSE = (1/(MN))∑(ŝ − s)²,

where ŝ and s are the restored and original images, respectively.

Seeking to quantify the quality of image recovery, we use the signal-to-noise ratio (SNR) and peak SNR (PSNR), which are defined as

SNR = 10 log10(‖s‖²/‖s − ŝ‖²) and PSNR = 10 log10(255²/MSE).

In short, larger SNR and PSNR values indicate better restoration of the image. Figure 2 shows the results of recovering different color images. Figure 3 shows a comparison of the SNR and PSNR values for images recovered using the four algorithms. The experimental results show that the proposed algorithm always attains the largest SNR and PSNR values for the different images, which clearly indicates that the proposed algorithm is more effective in recovery than the other algorithms.

We next applied our method to medical images. Figures 4, 5 show computed tomography (CT) images of a knee joint and a head, and Figure 6 shows a comparison of the SNR values resulting from recovery using each algorithm for these images. From Figure 6, it can be seen clearly that the SNR of our method (the red line) is significantly higher than those of the other methods.
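Both criteria can be written directly (an illustrative sketch; `peak=255` matches the grayscale range used here):

```python
import numpy as np

def snr(s, s_hat):
    """Signal-to-noise ratio in dB: 10*log10( ||s||^2 / ||s - s_hat||^2 )."""
    s, s_hat = np.asarray(s, float), np.asarray(s_hat, float)
    return 10.0 * np.log10(np.sum(s ** 2) / np.sum((s - s_hat) ** 2))

def psnr(s, s_hat, peak=255.0):
    """Peak SNR in dB for pixel values in [0, peak]: 10*log10( peak^2 / MSE )."""
    s, s_hat = np.asarray(s, float), np.asarray(s_hat, float)
    mse = np.mean((s - s_hat) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

Note that SNR normalizes by the energy of the original image while PSNR normalizes by the fixed peak value, so the two can rank restorations slightly differently.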

FIGURE 2

Comparison of recovered color images of Lena, peppers, house, and panda using different algorithms with 1,000 iterations. From left to right: original image, noised image, López's algorithm, Yang's algorithm, Sakurai and Iiduka's algorithm, and Algorithm 1.

FIGURE 3

Comparison of SNR (left) and PSNR (right) values resulting from image recovery using the four algorithms with 1,000 iterations.

FIGURE 4

Comparison of CT images of a knee joint recovered using different algorithms with 500 iterations. From left to right: original image, noised image, López’s algorithm, Yang’s algorithm, Sakurai and Iiduka’s algorithm, and Algorithm 1.

FIGURE 5

Comparison of CT images of a head recovered using different algorithms with 500 iterations. From left to right: original image, noised image, López’s algorithm, Yang’s algorithm, Sakurai and Iiduka’s algorithm, and Algorithm 1.

FIGURE 6

Comparison of SNR values of the knee joint (left) and head (right) images resulting from image recovery using the four algorithms with 500 iterations.

Figure 7 shows an original grayscale image. In Figure 8, we investigate the use of our method on this image with different sampling rates. Figure 9 shows the images recovered by the four algorithms at different sampling rates, and Figure 10 compares the SNR values of these images. It can clearly be seen that the performance of our method is the best: it provides higher SNR and PSNR values than López's algorithm, Yang's algorithm, or Sakurai and Iiduka's algorithm.

FIGURE 7

Original image.

FIGURE 8

Versions of the image in Figure 7 with sampling rates of 30%, 40%, 50%, and 60% from left to right.

FIGURE 9

Comparison of images recovered using the four algorithms. From top to bottom: López’s algorithm, Yang’s algorithm, Sakurai and Iiduka’s algorithm, and Algorithm 1; from left to right: sampling rates of 30%, 40%, 50%, and 60%.

FIGURE 10

Comparison of the SNR values of the images in Figure 9.

5 Conclusion

In this article, we propose a new conjugate gradient method for signal recovery. The superiority of our method lies in its employment of the ideas of accelerated conjugate gradient methods together with a new adaptive way of choosing the step size. Under some assumptions, the weak convergence of the designed method was established. As application demonstrations, we implemented our method to solve signal-processing and image-restoration problems, and the results of our numerical simulations verify the effectiveness and superiority of the new approach. However, in the numerical experiments in this paper, we always assumed that the noise is known. In future work, we will devote ourselves to signal and image recovery without prior knowledge of the noise, using optimization methods.

Statements

Data availability statement

The raw data supporting the conclusion of this article will be made available by the authors, without undue reservation.

Author contributions

PH and KL designed the work, wrote the manuscript, and managed communication among all the authors. PH and KL analyzed the data. All authors contributed to the article and approved the submitted version.

Funding

This project is supported by the Shandong Provincial Natural Science Foundation (Grant No. ZR2019MA022).

Acknowledgments

The authors would like to thank the reviewers and editors for their useful comments, which have helped to improve this article.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

  • 1. Zhang X, Wu H, Sun H, Ying W. Multireceiver SAS imagery based on monostatic conversion. IEEE J Sel Top Appl Earth Obs Remote Sens (2021) 14:10835–53. doi: 10.1109/jstars.2021.3121405

  • 2. Zhang X, Ying W, Yang P, Sun M. Parameter estimation of underwater impulsive noise with the Class B model. IET Radar Sonar Navig (2020) 14:1055–60. doi: 10.1049/iet-rsn.2019.0477

  • 3. Li Y, Geng B, Jiao S. Dispersion entropy-based Lempel–Ziv complexity: A new metric for signal analysis. Chaos Solitons Fractals (2022) 161:112400. doi: 10.1016/j.chaos.2022.112400

  • 4. Li Y, Tang B, Yi Y. A novel complexity-based mode feature representation for feature extraction of ship-radiated noise using VMD and slope entropy. Appl Acoust (2022) 196:108899. doi: 10.1016/j.apacoust.2022.108899

  • 5. Li Y, Mu L, Gao P. Particle swarm optimization fractional slope entropy: A new time series complexity indicator for bearing fault diagnosis. Fractal Fract (2022) 6:345. doi: 10.3390/fractalfract6070345

  • 6. Cai T, Xu G, Zhang J. On recovery of sparse signals via l1 minimization. IEEE Trans Inf Theory (2009) 55:3388–97. doi: 10.1109/tit.2009.2021377

  • 7. Censor Y, Elfving T. A multiprojection algorithm using Bregman projections in a product space. Numer Algorithms (1994) 8:221–39. doi: 10.1007/bf02142692

  • 8. Xu H. Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl (2010) 26:105018. doi: 10.1088/0266-5611/26/10/105018

  • 9. Moudafi A, Gibali A. l1-l2 regularization of split feasibility problems. Numer Algorithms (2017) 78:739–57. doi: 10.1007/s11075-017-0398-6

  • 10. Bauschke H, Combettes P. A weak-to-strong convergence principle for Fejér-monotone methods in Hilbert spaces. Math Oper Res (2001) 26:248–64. doi: 10.1287/moor.26.2.248.10558

  • 11. Byrne C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl (2002) 18:441–53. doi: 10.1088/0266-5611/18/2/310

  • 12. Byrne C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl (2004) 20:103–20. doi: 10.1088/0266-5611/20/1/006

  • 13. Yang Q. The relaxed CQ algorithm solving the split feasibility problem. Inverse Probl (2004) 20:1261–6. doi: 10.1088/0266-5611/20/4/014

  • 14. Wang F. Polyak's gradient method for split feasibility problem constrained by level sets. Numer Algorithms (2017) 77:925–38. doi: 10.1007/s11075-017-0347-4

  • 15. Yang Q. On variable-step relaxed projection algorithm for variational inequalities. J Math Anal Appl (2005) 302:166–79. doi: 10.1016/j.jmaa.2004.07.048

  • 16. Wang F. On the convergence of CQ algorithm with variable steps for the split equality problem. Numer Algorithms (2017) 74:927–35. doi: 10.1007/s11075-016-0177-9

  • 17. López G, Martín-Márquez V, Wang F, Xu HK. Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl (2012) 28:085004. doi: 10.1088/0266-5611/28/8/085004

  • 18. Qu B, Xiu N. A note on the CQ algorithm for the split feasibility problem. Inverse Probl (2005) 21:1655–65. doi: 10.1088/0266-5611/21/5/009

  • 19. Gibali A, Liu LW, Tang YC. Note on the modified relaxation CQ algorithm for the split feasibility problem. Optim Lett (2017) 12:817–30. doi: 10.1007/s11590-017-1148-3

  • 20. Kesornprom S, Pholasa N, Cholamjiak P. On the convergence analysis of the gradient-CQ algorithms for the split feasibility problem. Numer Algorithms (2020) 84:997–1017. doi: 10.1007/s11075-019-00790-y

  • 21. Nocedal J, Wright SJ. Numerical optimization. New York: Springer (2006).

  • 22. Iiduka H. Iterative algorithm for solving triple-hierarchical constrained optimization problem. J Optim Theor Appl (2011) 148:580–92. doi: 10.1007/s10957-010-9769-z

  • 23. Nocedal J. Numerical optimization: Springer series in operations research and financial engineering. New York: Springer (2006).

  • 24. Sakurai K, Iiduka H. Acceleration of the Halpern algorithm to search for a fixed point of a nonexpansive mapping. Fixed Point Theor Appl (2014) 2014:202. doi: 10.1186/1687-1812-2014-202

  • 25. Dang Y, Gao Y. The strong convergence of a KM-CQ-like algorithm for a split feasibility problem. Inverse Probl (2011) 27:015007. doi: 10.1088/0266-5611/27/1/015007

  • 26. Suantai S, Pholasa N, Cholamjiak P. Relaxed CQ algorithms involving the inertial technique for multiple-sets split feasibility problems. Rev Real Acad Cienc Exactas (2019) 13:1081–99. doi: 10.1007/s13398-018-0535-7

  • 27. Wang F, Xu HK. Approximating curve and strong convergence of the CQ algorithm for the split feasibility problem. J Inequal Appl (2010) 2010:113. doi: 10.1155/2010/102085

  • 28. Xu H. Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl (2010) 26:105018. doi: 10.1088/0266-5611/26/10/105018

  • 29. Zhao J, Zhang Y, Yang Q. Modified projection methods for the split feasibility problem and the multiple-sets split feasibility problem. Appl Math Comput (2012) 219:1644–53. doi: 10.1016/j.amc.2012.08.005

  • 30. Goebel K, Reich S. Uniform convexity, hyperbolic geometry, and nonexpansive mappings. New York: Marcel Dekker (1984).


Keywords

signal processing, image restoration, weak convergence, noise reduction, conjugate gradient method

Citation

Huang P and Liu K (2022) A new conjugate gradient algorithm for noise reduction in signal processing and image restoration. Front. Phys. 10:1053353. doi: 10.3389/fphy.2022.1053353

Received

25 September 2022

Accepted

02 November 2022

Published

02 December 2022

Volume

10 - 2022

Edited by

Yuxing Li, Xi’an University of Technology, China

Reviewed by

Dezhou Kong, Shandong Agricultural University, China

Ge Tian, Xi’an University of Technology, China


Copyright

*Correspondence: Kaiping Liu

This article was submitted to Physical Acoustics and Ultrasonics, a section of the journal Frontiers in Physics

