ORIGINAL RESEARCH article

Front. Comput. Neurosci., 09 March 2023

Volume 17 - 2023 | https://doi.org/10.3389/fncom.2023.1120516

Approximate solutions to several classes of Volterra and Fredholm integral equations using the neural network algorithm based on the sine-cosine basis function and extreme learning machine

  • 1. School of Electronics and Information Engineering, Taizhou University, Zhejiang, Taizhou, China

  • 2. Data Mining Research Center, Xiamen University, Fujian, Xiamen, China

  • 3. School of Mathematics and Statistics, Central South University, Hunan, Changsha, China


Abstract

In this study, we investigate a new neural network method based on the sine-cosine basis function and the extreme learning machine (ELM) algorithm to solve Volterra and Fredholm integral equations. Combining the ELM algorithm, the sine-cosine basis functions, and several classes of integral equations, an improved model is designed. The novel neural network model consists of an input layer, a hidden layer, and an output layer, in which the hidden layer is eliminated by utilizing the sine-cosine basis function. Meanwhile, because the ELM algorithm fixes the hidden layer biases and the input weights randomly, without iterative tuning, we can greatly reduce the model complexity and improve the calculation speed. Furthermore, the problem of finding the network parameters is converted into solving a set of linear equations. One advantage of this method is that not only can we obtain good numerical solutions for the first- and second-kind Volterra integral equations, but we can also obtain acceptable solutions for the first- and second-kind Fredholm integral equations and for Volterra–Fredholm integral equations. Another advantage is that the improved algorithm provides the approximate solution of several kinds of linear integral equations in closed form (i.e., continuous and differentiable), so the solution can be evaluated at any point. Several numerical experiments on various types of integral equations illustrate the reliability and efficiency of the proposed method. The experimental results verify that the proposed method achieves very high accuracy and strong generalization ability.

1. Introduction

Volterra and Fredholm integral equations have many applications in the natural sciences and engineering. Linear phenomena arising in many scientific fields can be modeled by linear integral equations (Abdou, 2002; Isaacson and Kirby, 2011). For example, as mentioned by Lima and Buckwar (2015), a class of integro-differential equations, known as neural field equations, describes the large-scale dynamics of spatially structured networks of neurons. These equations are widely used in neuroscience and robotics, and they play a crucial role in cognitive robotics: the architecture of autonomous robots that interact with other agents in dealing with a mutual task is strongly inspired by the processing principles and the neuronal circuitry of the primate brain.

This study considers several kinds of linear integral equations. The general form of these linear integral equations is defined as follows:

ϵ y(x) = g(x) + λ ∫_a^b k1(x, t) y(t) dt + μ ∫_a^x k2(x, t) y(t) dt,     (1)

where the functions k1(x, t), k2(x, t), and g(x) are known, y(x) is the unknown function to be determined, a and b are constants, and ϵ, λ, and μ are parameters. Notably, we have

  • (i) Equation (1) is called a linear Fredholm integral equation of the first kind if ϵ = μ = 0 and λ = 1.

  • (ii) Equation (1) is called a linear Volterra integral equation of the first kind if ϵ = λ = 0 and μ = 1.

  • (iii) Equation (1) is called a linear Fredholm integral equation of the second kind if μ = 0 and ϵ = λ = 1.

  • (iv) Equation (1) is called a linear Volterra integral equation of the second kind if λ = 0 and ϵ = μ = 1.

  • (v) Equation (1) is called a linear Volterra–Fredholm integral equation if ϵ = λ = μ = 1.

Many methods for the numerical solution of Volterra, Fredholm, and Volterra-Fredholm integral equations have been presented in recent years. Orthogonal function expansions, e.g., wavelets (Maleknejad and Mirzaee, 2005), Bernstein polynomials (Mandal and Bhattacharya, 2007), and Chebyshev polynomials (Dastjerdi and Ghaini, 2012), were proposed for solving integral equations. The Taylor collocation method (Wang and Wang, 2014), the Lagrange collocation method (Wang and Wang, 2013; Nemati, 2015), and the Fibonacci collocation method (Mirzaee and Hoseini, 2016) are effective and convenient for solving integral equations. The Sinc-collocation method (Rashidinia and Zarebnia, 2007) and the Galerkin method (Saberi-Nadjafi et al., 2012) also perform well on Volterra integral equation problems. However, most of these traditional methods share a disadvantage: they provide the solution as an array of values at preassigned mesh points, so an additional interpolation procedure is needed to obtain the solution over the whole domain. To improve accuracy, one must either increase the order of the method or decrease the step size, which increases the computational cost.

The neural network has excellent application potential in many fields (Habib and Qureshi, 2022; Li and Ying, 2022) owing to its universal function approximation capability (Hou and Han, 2012; Hou et al., 2017, 2018). Accordingly, neural networks are widely used as effective tools for solving differential equations, integral equations, and integro-differential equations (Mall and Chakraverty, 2014, 2016; Jafarian et al., 2017; Pakdaman et al., 2017; Zuniga-Aguilar et al., 2017; Rostami and Jafarian, 2018). Golbabai and Seifollahi presented radial basis function networks for solving linear Fredholm and Volterra integral equations of the second kind (Golbabai and Seifollahi, 2006), and they solved a system of nonlinear integral equations (Golbabai and Seifollahi, 2009). Effati and Buzhabadi presented multilayer perceptron networks for solving Fredholm integral equations of the second kind (Effati and Buzhabadi, 2012). Jafarian and Nia proposed a feedback neural network method for solving linear Fredholm and Volterra integral equations of the second kind (Jafarian and Nia, 2013a,b). Jafarian presented artificial neural network-based models for solving systems of Volterra integral equations (Jafarian et al., 2015). However, traditional neural network algorithms suffer from several problems: over-fitting, difficulty in determining the number of hidden layer nodes, the need to optimize model parameters, a tendency to become trapped in local minima, slow convergence, and reduced learning speed and efficiency when the input data are large or the network structure is complex (Huang and Chen, 2008).

Huang et al. (2006a,b) proposed the extreme learning machine (ELM) algorithm, which is a single-hidden-layer feed-forward neural network. The ELM algorithm only needs to set the number of hidden nodes of the network; it does not need to adjust the input weights and bias values, and the output weights can be determined by the Moore–Penrose generalized inverse operation. The ELM algorithm thus provides faster learning speed and better generalization performance with minimal human intervention. Owing to these advantages, the ELM algorithm has been widely applied to many real-world applications, such as regression and classification problems (Wong et al., 2018). Many neural network methods based on improved extreme learning machine algorithms have been developed for solving ordinary differential equations (Yang et al., 2018; Lu et al., 2022), partial differential equations (Sun et al., 2019; Yang et al., 2020), the ruin probabilities of the classical risk model and the Erlang(2) risk model (Zhou et al., 2019; Lu et al., 2020), and one-dimensional asset pricing (Ma et al., 2021). Chen et al. (2020, 2021, 2022) proposed the block trigonometric exponential neural network, the Laguerre neural network, and the neural finite element method for the ruin probability, the generalized Black–Scholes differential equation, and the generalized Black–Scholes–Merton differential equation, respectively. Inspired by these studies, the motivation of this research is to present the sine-cosine ELM (SC-ELM) algorithm to solve linear Volterra integral equations of the first and second kinds, linear Fredholm integral equations of the first and second kinds, and linear Volterra–Fredholm integral equations.
More recently, a linear integral equation of the third kind with fixed singularities in the kernel was studied by Gabbasov and Galimova (2022), and Volterra integral equations of the first kind on a bounded interval were considered by Bulatov and Markova (2022). For more results, we refer the reader to Din et al. (2022) and Usta et al. (2022).

In this study, we propose a neural network method based on the sine-cosine basis function and an improved ELM algorithm to solve linear integral equations. Specifically, the hidden layer is eliminated by expanding the input pattern with the sine-cosine basis function, which simplifies the calculation to some extent. Moreover, the improved ELM algorithm automatically satisfies the boundary conditions and transforms the problem into solving a linear system, which provides great convenience for calculation. Furthermore, the model yields a closed-form approximate solution, from which the solution of a linear integral equation can be evaluated at any point.

The remainder of the article is organized as follows. In Section 2, a brief review of the ELM algorithm is provided. In Section 3, a novel neural network method based on the sine-cosine basis function and the ELM algorithm for solving integral equations of the form of Equation (1) is discussed. In Section 4, we present several numerical examples to demonstrate the accuracy and efficiency of the improved neural network algorithm. In Section 5, concluding remarks are presented.

2. The ELM algorithm

The ELM algorithm originated from the single-hidden-layer feed-forward network (SLFN) and was later developed into a generalized SLFN algorithm (Huang and Chen, 2007). The ELM algorithm not only is fully automatically implemented without iterative tuning but also tends to the minimum training error. It requires minimal human intervention and provides faster learning speed and better generalization performance. Therefore, the ELM algorithm is widely used in classification and regression tasks (Huang et al., 2012; Cambria and Huang, 2013).

For a data set with N+1 distinct training samples (x_i, g_i) ∈ ℝ × ℝ (i = 0, 1, ..., N), the output of a neural network with M+1 hidden neurons is expressed as follows:

o_i = Σ_{j=0}^{M} β_j f(w_j x_i + b_j),   i = 0, 1, ..., N,     (2)

where f is the activation function, w_j is the input weight of the j-th hidden layer node, b_j is the bias value of the j-th hidden layer node, and β_j is the output weight connecting the j-th hidden layer node and the output node.

The error function of the SLFN is as follows:

E = Σ_{i=0}^{N} (o_i − g_i)².     (3)

Assuming the error between the output value o_i of the SLFN and the exact value g_i is zero, the relationship between x_i and g_i can be modeled as follows:

Σ_{j=0}^{M} β_j f(w_j x_i + b_j) = g_i,   i = 0, 1, ..., N,     (4)

where both the input weights w_j and the bias values b_j are randomly generated. Equations (4) can be rewritten in the following matrix form:

Hβ = G,     (5)

where H is the output matrix of the hidden layer, defined as follows:

H = [f(w_j x_i + b_j)],  i = 0, 1, ..., N,  j = 0, 1, ..., M,  with β = (β_0, ..., β_M)^T and G = (g_0, ..., g_N)^T.     (6)

The minimum norm least-squares solution of the linear system Equation (5) is calculated by β̂ = H†G, where H† denotes the Moore–Penrose generalized inverse of H.
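The procedure above can be sketched in a few lines. The following is an illustrative Python/NumPy implementation (the paper's experiments use MATLAB); the toy target g(x) = sin(3x), the weight ranges, and all names are our own choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(x, g, M, f=np.tanh):
    """Randomly fix the input weights and biases, then solve for the output weights."""
    w = rng.uniform(-10.0, 10.0, M + 1)   # input weights: random, never tuned
    b = rng.uniform(-10.0, 10.0, M + 1)   # hidden biases: random, never tuned
    H = f(np.outer(x, w) + b)             # hidden-layer output matrix, shape (N+1, M+1)
    beta = np.linalg.pinv(H) @ g          # output weights via the Moore-Penrose inverse
    return w, b, beta

def elm_predict(x, w, b, beta, f=np.tanh):
    return f(np.outer(x, w) + b) @ beta

# Toy regression: learn g(x) = sin(3x) on [0, 1] from 51 samples.
x_train = np.linspace(0.0, 1.0, 51)
g_train = np.sin(3.0 * x_train)
w, b, beta = elm_fit(x_train, g_train, M=20)
max_err = np.max(np.abs(elm_predict(x_train, w, b, beta) - g_train))
print(max_err)  # small training error, obtained with no iterative tuning
```

The only trainable quantities are the output weights, so "training" reduces to a single least-squares solve.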

3. The proposed method

In this section, we propose a neural network method based on the sine-cosine basis function and the extreme learning machine algorithm to solve linear integral equations. The single-hidden-layer sine-cosine neural network consists of three layers: an input layer, a hidden layer, and an output layer. The hidden layer consists of two parts: the first part uses the cosine basis function, and the second part implements the superposition of the sine basis function. The structure of the sine-cosine neural network method is shown in Figure 1.

Figure 1

The steps of the sine-cosine neural network method for solving several kinds of linear integral equations are as follows:

Step 1: Discretize the interval [a, b] into a series of collocation points Ω = {a = x_0 < x_1 < ... < x_N = b}.

Step 2: Construct the approximate solution by using the sine-cosine basis as the activation function, that is,

ŷ_SC-ELM(x) = Σ_{j=0}^{M} [a_j cos(jπ(x − a)/(b − a)) + b_j sin(jπ(x − a)/(b − a))],     (7)

where a_j and b_j are the output weights to be determined.

Step 3: According to the given problem and data set, substitute the trial solution ŷ_SC-ELM into Equation (1). Collocating at the points x_i ∈ Ω converts the equation into the matrix form

Hθ = G,     (8)

where H is the collocation matrix obtained by substituting the sine-cosine basis terms (and their integrals against the kernels) into Equation (1) at the collocation points, θ = (a_0, ..., a_M, b_0, ..., b_M)^T, and G = (g(x_0), ..., g(x_N))^T.

Step 4: From the theory of the Moore–Penrose generalized inverse of the matrix H, we can obtain the network parameters θ = (a_0, ..., a_M, b_0, ..., b_M)^T as θ̂ = H†G.

Step 5: Search over the number of neurons M for the value with the smallest MSE. The corresponding M and output weights a_j, b_j are taken as the optimal number of neurons and the optimal output weights, respectively.

Step 6: Substitute the optimal a_j, b_j (j = 0, 1, 2, ..., M) into Equation (7) to obtain the final numerical solution.

Some advantages of the single-layer sine-cosine neural network method for solving integral equations are as follows:

  • (i) The hidden layer is eliminated by expanding the input pattern using the sine-cosine basis function.

  • (ii) The sine-cosine neural network algorithm only needs to determine the weights of the output layer. The problem is transformed into a linear system, and the output weights are obtained by a simple generalized inverse operation, which greatly improves the calculation speed.

  • (iii) The model yields a closed-form solution and, most importantly, the approximate solution of a linear integral equation can be evaluated at any point from it. This makes the method well suited for solving integral equations.
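To make Steps 1–4 concrete, here is a hedged Python/NumPy sketch for a second-kind Fredholm equation y(x) = g(x) + ∫_0^1 k(x, t) y(t) dt on [0, 1]; the kernel k(x, t) = xt, the test problem, and the trapezoidal quadrature are our own illustrative choices, not the paper's examples.

```python
import numpy as np

def sc_basis(x, M):
    """Sine-cosine basis cos(jπx), sin(jπx), j = 0..M (here a = 0, b = 1, so w_j = jπ)."""
    ang = np.outer(x, np.arange(M + 1) * np.pi)
    return np.hstack([np.cos(ang), np.sin(ang)])

def solve_fredholm2(k, g, M=6, N=40, nq=2001):
    x = np.linspace(0.0, 1.0, N + 1)                       # Step 1: collocation points
    t = np.linspace(0.0, 1.0, nq)                          # quadrature nodes
    wq = np.full(nq, 1.0 / (nq - 1)); wq[[0, -1]] *= 0.5   # trapezoidal weights
    B = sc_basis(x, M)                                     # Step 2: basis at x_i
    K = (k(x[:, None], t[None, :]) * wq) @ sc_basis(t, M)  # ∫ k(x_i, t) basis_j(t) dt
    H = B - K                                              # Step 3: collocation matrix
    theta = np.linalg.pinv(H) @ g(x)                       # Step 4: θ = H†G
    return lambda xs: sc_basis(np.atleast_1d(xs), M) @ theta  # closed-form solution

# Illustrative problem with exact solution y(x) = cos(πx): since
# ∫_0^1 t·cos(πt) dt = -2/π², take k(x, t) = x·t and g(x) = cos(πx) + 2x/π².
y_hat = solve_fredholm2(k=lambda x, t: x * t,
                        g=lambda x: np.cos(np.pi * x) + 2.0 * x / np.pi**2)
xs = np.linspace(0.0, 1.0, 11)
err = np.max(np.abs(y_hat(xs) - np.cos(np.pi * xs)))
print(err)  # limited mainly by the quadrature accuracy
```

Because the exact solution lies in the span of the basis, the recovered error is essentially the quadrature error; Step 5 (the validation search over M) is omitted for brevity.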

4. Numerical experiments

In this section, some numerical experiments are performed to demonstrate the reliability and effectiveness of the improved neural network algorithm. The sine-cosine neural network method based on the sine-cosine basis function and the ELM algorithm is applied to solve linear Volterra integral equations of the first and second kinds, linear Fredholm integral equations of the first and second kinds, and linear Volterra–Fredholm integral equations.

The algorithm is evaluated with MATLAB R2021a running on an Intel Xeon Gold 6226R CPU with 64.0 GB RAM. The training set is obtained by taking points at equal intervals, and the testing set is randomly selected. The validation set is the set of midpoints V = {v_i | v_i = (x_i + x_{i+1})/2, i = 0, 1, ..., N − 1}, where the x_i are the training points in the following studies. We use the mean squared error (MSE), absolute error (AE), mean absolute error (MAE), and root mean squared error (RMSE) to measure the error of the numerical solution. For n evaluation points x_i, they are defined as follows:

MSE = (1/n) Σ_{i=1}^{n} (y(x_i) − ŷ(x_i))²,  AE = |y(x_i) − ŷ(x_i)|,
MAE = (1/n) Σ_{i=1}^{n} |y(x_i) − ŷ(x_i)|,  RMSE = √MSE,

where y(x_i) denotes the exact solution and ŷ(x_i) denotes the approximate solution obtained by the proposed algorithm. Note that w_j = jπ/(b − a) and b_j = −jπa/(b − a) (j = 0, 1, 2, ..., M) are selected in our proposed method, so that w_j x + b_j = jπ(x − a)/(b − a). Moreover, the number M of hidden neurons is selected as the value that yields the minimum mean squared error on the validation set.
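As a quick illustration of these error measures, the following hedged Python snippet (our own helper, using the standard definitions) reproduces the absolute error of the first row of Table 1:

```python
import numpy as np

def metrics(y_exact, y_approx):
    """Standard error measures over n evaluation points."""
    e = np.asarray(y_exact) - np.asarray(y_approx)
    ae = np.abs(e)              # pointwise absolute error
    mse = np.mean(e ** 2)       # mean squared error
    mae = np.mean(ae)           # mean absolute error
    rmse = np.sqrt(mse)         # root mean squared error
    return ae, mse, mae, rmse

# First row of Table 1 (Example 1, exact solution cos(x) at x = 0.0624):
ae, mse, mae, rmse = metrics([0.99805375164], [0.99805375126])
print(ae[0])  # ≈ 3.8e-10, consistent with the rounded values in Table 1
```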

4.1. Example 1

Consider the linear Volterra integral equation of the second kind from Guo et al. (2012).

The analytical solution is f(x) = cos(x).

We train our proposed neural network on 50 equidistant points in the given interval [0, 1] with the first 12 sine-cosine basis functions. A comparison between the exact solution and the approximate solution obtained via our improved neural network algorithm is depicted in Figure 2A, and the corresponding error function is plotted in Figure 2B. As shown in the figures, the mean squared error is 1.3399 × 10−19, and the maximum absolute error is approximately 7.4910 × 10−10.

Figure 2

Table 1 lists the exact solution and the approximate solution obtained via our proposed neural network algorithm for 11 testing points at unequal intervals in the domain [0, 1]. The absolute errors are also listed in Table 1, from which we observe that the mean squared error is approximately 1.6789 × 10−19. These results imply that the proposed method has high accuracy.

Table 1

x | Exact solution | Approximate solution | Absolute error
0.0624 | 0.99805375164 | 0.99805375126 | 3.8610e-10
0.0915 | 0.99581679479 | 0.99581679410 | 6.9084e-10
0.1518 | 0.98850048763 | 0.98850048732 | 3.1110e-10
0.2410 | 0.97109978660 | 0.97109978647 | 1.2768e-10
0.3604 | 0.93575583912 | 0.93575583904 | 7.4031e-11
0.5252 | 0.86522368172 | 0.86522368169 | 2.7006e-11
0.6395 | 0.80239425533 | 0.80239425500 | 3.2675e-10
0.7590 | 0.72552456965 | 0.72552456924 | 4.1422e-10
0.8482 | 0.66133438071 | 0.66133438024 | 4.7606e-10
0.9084 | 0.61500816934 | 0.61500816901 | 3.3012e-10
0.9348 | 0.59397933431 | 0.59397933361 | 6.9796e-10

Comparison between the exact solution and approximate solution (Example 1).

Table 2 compares the proposed method with the LS-SVR method. The maximum absolute error of our method is approximately 6.8246 × 10−10, whereas the maximum absolute error reported in Table 5 of Guo et al. (2012) is approximately 2.4981 × 10−7. The solution accuracy of the proposed algorithm is thus higher.

Table 2

x | LS-SVR in Guo et al. (2012) | SC-ELM
0.1 | 7.4597e-08 | 6.8246e-10
0.2 | 2.7590e-08 | 3.7957e-10
0.3 | 5.1917e-09 | 3.2404e-10
0.4 | 2.3898e-07 | 1.6271e-10
0.5 | 2.4981e-07 | 9.4236e-11
0.6 | 3.8031e-08 | 4.6072e-11
0.7 | 2.3423e-07 | 1.9703e-10
0.8 | 5.2083e-08 | 2.3283e-10
0.9 | 2.4366e-07 | 3.1284e-10

Comparison between the SC-ELM method and the LS-SVR method (Example 1).

4.2. Example 2

Consider the linear Volterra integral equation of the first kind from Masouri et al. (2010).

The analytical solution is f(x) = e−x.

A total of 21 equidistant points in the given interval [0, 1] are used as the training points, and the neural network adopts the first 10 sine-cosine basis functions. Figures 3A, B show that the exact solution and the approximate solution are highly consistent. The maximum absolute error is approximately 1.3959 × 10−6.

Figure 3

Table 3 lists the exact solution and the approximate solution obtained via our proposed neural network algorithm in the domain [0, 1]. The mean squared error is approximately 2.5781 × 10−16. These findings provide strong support for the effectiveness of our proposed method.

Table 3

x | Exact solution | Approximate solution | Absolute error
0.0624 | 0.93950700882 | 0.93950703293 | 2.4118e-08
0.0915 | 0.91256131615 | 0.91256129442 | 2.1728e-08
0.1518 | 0.85916009558 | 0.85916009819 | 2.6073e-09
0.2410 | 0.78584162639 | 0.78584161623 | 1.0154e-08
0.3604 | 0.69739731135 | 0.69739729287 | 1.8481e-08
0.5252 | 0.59143706512 | 0.59143704802 | 1.7099e-08
0.6395 | 0.52755613618 | 0.52755611864 | 1.7535e-08
0.7590 | 0.46813432735 | 0.46813431582 | 1.1524e-08
0.8482 | 0.42818497165 | 0.42818496663 | 5.0275e-09
0.9084 | 0.40316877830 | 0.40316880212 | 2.3823e-08
0.9348 | 0.39266439056 | 0.39266439284 | 2.2803e-09

Comparison between exact solution and approximate solution (Example 2).

4.3. Example 3

We consider the linear Fredholm integral equation of the first kind from Rashed (2003).

The analytical solution is f(x) = x.

This problem is solved by utilizing our proposed neural network model in the given interval [0, 1]. We consider 21 equidistant points in the domain [0, 1] with the first six sine-cosine basis functions to train the model. Comparison between the exact solution and the approximate solution via our improved neural network algorithm is depicted in Figure 4A, and the error plot is depicted in Figure 4B. Note that the mean squared error is 4.5915 × 10−8 for these training points.

Figure 4

Table 4 lists the exact solution and the approximate solution obtained via our proposed neural network algorithm for 11 testing points at unequal intervals in the domain [0, 1]. We observe that the maximum absolute error is approximately 2.7433 × 10−4. The results show that the new neural network has good generalization ability.

Table 4

x | Exact solution | Approximate solution | Absolute error
0.0624 | 0.0624 | 0.0624007691887 | 7.6919e-07
0.0915 | 0.0915 | 0.0914990435138 | 9.5649e-07
0.1518 | 0.1518 | 0.1517998073669 | 1.9263e-07
0.2410 | 0.2410 | 0.2410008795702 | 8.7957e-07
0.3604 | 0.3604 | 0.3604013108547 | 1.3109e-06
0.5252 | 0.5252 | 0.5251722711564 | 2.7729e-05
0.6395 | 0.6395 | 0.6395114159462 | 1.1416e-05
0.7590 | 0.7590 | 0.7590451800564 | 4.5180e-05
0.8482 | 0.8482 | 0.8480097658921 | 1.9023e-04
0.9084 | 0.9084 | 0.9084802272880 | 8.0227e-05
0.9348 | 0.9348 | 0.9350743334136 | 2.7433e-04

Comparison between the exact solution and approximate solution (Example 3).

4.4. Example 4

We consider the linear Fredholm integral equation of the second kind from Golbabai and Seifollahi (2006).

The analytical solution is f(x) = e2x.

The improved neural network algorithm for the linear Fredholm integral equation of the second kind is trained with 50 equidistant points in the given interval [0, 1] with the first 12 sine-cosine basis functions. The approximate solution obtained by the improved neural network algorithm and the exact solution are shown in Figure 5A, and the error function is displayed in Figure 5B. In particular, the mean squared error is 2.3111 × 10−17, and the maximum absolute error is approximately 9.8998 × 10−9, which fully demonstrates the superiority of the improved neural network algorithm.

Figure 5

Finally, Table 5 provides the results of the exact solution and the approximate solution via our proposed neural network algorithm for 11 testing points at unequal intervals in the domain [0, 1]. As shown in Table 5, the mean squared error is approximately 3.1391 × 10−17, which undoubtedly shows the power and effectiveness of the proposed method.

Table 5

x | Exact solution | Approximate solution | Absolute error
0.0624 | 1.13292184603767 | 1.13292184381566 | 2.2220e-09
0.0915 | 1.20081440808083 | 1.20081441530638 | 7.2255e-09
0.1518 | 1.35472705687431 | 1.35472706011551 | 3.2412e-09
0.2410 | 1.61930978530193 | 1.61930978804438 | 2.7425e-09
0.3604 | 2.05607741480638 | 2.05607741623227 | 1.4259e-09
0.5252 | 2.85879440715296 | 2.85879441106042 | 3.9075e-09
0.6395 | 3.59304488356426 | 3.59304488863783 | 5.0735e-09
0.7590 | 4.56308988310901 | 4.56308989202265 | 8.9136e-09
0.8482 | 5.45427660976895 | 5.45427661875143 | 8.9825e-09
0.9084 | 6.15214006907956 | 6.15214007048970 | 1.4101e-09
0.9348 | 6.48570159972151 | 6.48570160778124 | 8.0597e-09

Comparison between the exact solution and approximate solution (Example 4).

Table 6 compares the proposed method with RBF networks. The maximum absolute error of our proposed method is approximately 7.7601 × 10−9, whereas the maximum absolute error reported in Table 1 of Golbabai and Seifollahi (2006) is approximately 6.7698 × 10−7. The solution accuracy of the proposed algorithm is thus higher.

Table 6

x | RBF in Golbabai and Seifollahi (2006) | SC-ELM
0.1 | 4.1721e-07 | 7.7331e-09
0.2 | 1.6226e-07 | 7.7601e-09
0.3 | 9.9728e-08 | 7.0314e-09
0.4 | 5.3328e-07 | 4.9446e-09
0.5 | 5.1282e-07 | 1.7010e-09
0.6 | 8.8658e-08 | 9.5548e-11
0.7 | 3.8239e-07 | 2.1508e-09
0.8 | 6.7698e-07 | 4.8329e-09
0.9 | 3.3687e-07 | 1.6513e-09

Comparison between the SC-ELM method and RBF method (Example 4).

4.5. Example 5

Consider the linear Volterra–Fredholm integral equation from Wang and Wang (2014).

The analytical solution is f(x) = e−x.

A total of 50 equidistant points in the given interval [0, 1] and the first 11 sine-cosine basis functions are used to train the neural network model. The comparison and error plots of the exact solution and the approximate solution are shown in Figures 6A, B, from which we can see that the mean squared error is 3.3499 × 10−18.

Figure 6

Table 7 shows the results of the exact solution and the approximate solution via the improved ELM method for 11 testing points at unequal intervals in the domain [0, 1]. As shown in the table, the maximum absolute error is approximately 2.6673 × 10−9, which reveals that the improved neural network algorithm has higher accuracy and excellent performance.

Table 7

x | Exact solution | Approximate solution | Absolute error
0.0624 | 0.93950700882 | 0.93950700692 | 1.8957e-09
0.0915 | 0.91256131615 | 0.91256131348 | 2.6673e-09
0.1518 | 0.85916009558 | 0.85916009332 | 2.2550e-09
0.2410 | 0.78584162639 | 0.78584162475 | 1.6352e-09
0.3604 | 0.69739731135 | 0.69739731026 | 1.0886e-09
0.5252 | 0.59143706512 | 0.59143706314 | 1.9856e-09
0.6395 | 0.52755613618 | 0.52755613446 | 1.7191e-09
0.7590 | 0.46813432735 | 0.46813432542 | 1.9239e-09
0.8482 | 0.42818497165 | 0.42818496954 | 2.1172e-09
0.9084 | 0.40316877830 | 0.40316877665 | 1.6478e-09
0.9348 | 0.39266439056 | 0.39266438843 | 2.1275e-09

Comparison between the exact solution and approximate solution (Example 5).

We compare the RMSE of our proposed method with that of the Taylor collocation method in Wang and Wang (2014). As can be seen from Table 8, when 5, 8, and 9 points are tested, the RMSEs of the Taylor collocation method are approximately 4.03 × 10−7, 9.50 × 10−7, and 2.15 × 10−5, whereas the RMSEs of our proposed method are 1.67 × 10−9, 1.78 × 10−9, and 1.67 × 10−9, respectively. Our algorithm is clearly more accurate than the Taylor collocation method.

Table 8

N | Taylor solution | SC-ELM solution
5 | 4.03e-07 | 1.67e-09
8 | 9.50e-07 | 1.78e-09
9 | 2.15e-05 | 1.67e-09

RMSE comparison of Example 5.

4.6. Example 6

We consider the linear Volterra integral equation of the second kind from Saberi-Nadjafi et al. (2012).

The analytical solution is f(x) = sin(2x)+cos(2x).

A total of 21 equidistant discrete points and the first 11 sine-cosine basis functions are utilized to construct the neural network model. The comparison and error plots of the exact solution and the approximate solution are displayed in Figures 7A, B. It is easy to see that the MSE is 4.2000 × 10−16, which implies that the proposed algorithm has high accuracy.

Figure 7

To verify the effectiveness of our proposed method, we provide the results of the exact solution and the approximate solution via the improved ELM method for 11 testing points at unequal intervals in the domain [0, 1], see Table 9. As shown in the table, the maximum absolute error is approximately 2.9539 × 10−8, which shows that the proposed algorithm has good generalization ability.

Table 9

x | Exact solution | Approximate solution | Absolute error
0.0624 | 1.1166988736914 | 1.1166988834032 | 9.7117e-09
0.0915 | 1.1652824720246 | 1.1652824868524 | 1.4828e-08
0.1518 | 1.2532239237305 | 1.2532239421393 | 1.8409e-08
0.2410 | 1.3496218317010 | 1.3496218507822 | 1.9081e-08
0.3604 | 1.4112638863693 | 1.4112639090979 | 2.2729e-08
0.5252 | 1.3648462234191 | 1.3648462529585 | 2.9539e-08
0.6395 | 1.2454017480945 | 1.2454017691674 | 2.1073e-08
0.7590 | 1.0513783999948 | 1.0513784162122 | 1.6217e-08
0.8482 | 0.8668485498714 | 0.8668485662364 | 1.6365e-08
0.9084 | 0.7263634857982 | 0.7263635021492 | 1.6351e-08
0.9348 | 0.6613122433253 | 0.6613122627673 | 1.9442e-08

Comparison between the exact solution and approximate solution (Example 6).

Table 10 compares the MSE of the numerical solutions obtained by the SC-ELM model when more training points are added and different numbers of hidden layer neurons are configured. From these results, it can be seen that the proposed method can achieve good accuracy. The calculation time of different examples is listed in Table 11. These data suggest that our method is efficient and feasible.

Table 10

MSE | M = 5, N = 20 | M = 10, N = 20 | M = 10, N = 100
Example 1 | 5.4420e-11 | 3.6233e-17 | 6.9420e-19
Example 2 | 2.7384e-09 | 1.9056e-12 | 6.3433e-17
Example 3 | 4.5915e-08 | 4.4380e-05 | 4.8435e-06
Example 4 | 2.8418e-08 | 4.5969e-16 | 3.8451e-16
Example 5 | 1.6398e-10 | 8.0501e-17 | 2.0969e-18
Example 6 | 2.8025e-11 | 4.2000e-16 | 4.0548e-19

Comparison of the different examples of MSE with different numbers of training points and hidden neurons.

Table 11

Example | t
Example 1 | 0.3317
Example 2 | 0.1505
Example 3 | 0.1080
Example 4 | 0.3391
Example 5 | 0.6356
Example 6 | 0.1683

Execution time of different examples.

5. Conclusion

In this study, the improved neural network algorithm based on the sine-cosine basis function and extreme learning machine algorithm has been developed for solving linear integral equations. The accuracy of the improved neural network has been checked by solving a linear Volterra integral equation of the first kind, a linear Volterra integral equation of the second kind, a linear Fredholm integral equation of the first kind, a linear Fredholm integral equation of the second kind, and a linear Volterra-Fredholm integral equation. The experimental results of the improved ELM approach with different types of integral equations show that the simulation results are close to the exact results. Therefore, the proposed model is very precise and could be a good tool for solving linear integral equations.

Statements

Data availability statement

The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.

Author contributions

All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.

Acknowledgments

The authors sincerely thank all the reviewers and the editor for their careful reading and valuable comments, which improved the quality of this study.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

  • 1

    AbdouM. A. (2002). Fredholm-Volterra integral equation of the first kind and contact problem. Appl. Math. Comput. 125, 177193. 10.1016/S0096-3003(00)00118-1

  • 2

    BulatovM. V.MarkovaE. V. (2022). Collocation-variational approaches to the solution to volterra integral equations of the first kind. Comput. Math. Math. Phys. 62, 98105. 10.1134/S0965542522010055

  • 3

    CambriaE.HuangG. B. (2013). Extreme learning machine: trends and controversies. IEEE Intell. Syst. 28, 3059. 10.1109/MIS.2013.140

  • 4

    ChenY.WeiL.CaoS.LiuF.YangY.ChengY. (2022). Numerical solving for generalized Black-Scholes-Merton model with neural finite element method. Digit. Signal Process. 131, 103757. 10.1016/j.dsp.2022.103757

  • 5

    ChenY.YiC.XieX.HouM.ChengY. (2020). Solution of ruin probability for continuous time model based on block trigonometric exponential neural network. Symmetry12, 876. 10.3390/sym12060876

  • 6

    ChenY.YuH.MengX.XieX.HouM.ChevallierJ. (2021). Numerical solving of the generalized Black-Scholes differential equation using Laguerre neural network. Digit. Signal Process. 112, 103003. 10.1016/j.dsp.2021.103003

  • 7

    DastjerdiH. L.GhainiF. M. M. (2012). Numerical solution of Volterra-Fredholm integral equations by moving least square method and Chebyshev polynomials. Appl. Math. Model36, 32833288. 10.1016/j.apm.2011.10.005

  • 8

    DinZ. U.IslamS. U.ZamanS. (2022). Meshless procedure for highly oscillatory kernel based one-dimensional volterra integral equations. J. Comput. Appl. Math. 413, 114360. 10.1016/j.cam.2022.114360

  • 9

    EffatiS.BuzhabadiR. (2012). A neural network approach for solving Fredholm integral equations of the second kind. Neural Comput. Appl. 21, 843852. 10.1007/s00521-010-0489-y

  • 10

    GabbasovN. S.GalimovaZ. K. (2022). On numerical solution of one class of integral equations of the third kind. Comput. Math. Math. Phys. 62, 316324. 10.1134/S0965542522020075

  • 11

    GolbabaiA.SeifollahiS. (2006). Numerical solution of the second kind integral equations using radial basis function networks. Appl. Math. Comput. 174, 877883. 10.1016/j.amc.2005.05.034

  • 12

    GolbabaiA.SeifollahiS. (2009). Solving a system of nonlinear integral equations by an RBF network. Comput. Math. Appl. 57, 16511658. 10.1016/j.camwa.2009.03.038

  • 13

    Guo, X. C., Wu, C. G., Marchese, M., and Liang, Y. C. (2012). LS-SVR-based solving Volterra integral equations. Appl. Math. Comput. 218, 11404–11409. doi: 10.1016/j.amc.2012.05.028

  • 14

    Habib, G., and Qureshi, S. (2022). Global Average Pooling convolutional neural network with novel NNLU activation function and HYBRID parallelism. Front. Comput. Neurosci. 16, 1004988. doi: 10.3389/fncom.2022.1004988

  • 15

    Hou, M., and Han, X. (2012). Multivariate numerical approximation using constructive L2(R) RBF neural network. Neural Comput. Appl. 21, 25–34. doi: 10.1007/s00521-011-0604-8

  • 16

    Hou, M., Liu, T., Yang, Y., Zhu, H., Liu, H., Yuan, X., et al. (2017). A new hybrid constructive neural network method for impacting and its application on tungsten price prediction. Appl. Intell. 47, 28–43. doi: 10.1007/s10489-016-0882-z

  • 17

    Hou, M., Yang, Y., Liu, T., and Peng, W. (2018). Forecasting time series with optimal neural networks using multi-objective optimization algorithm based on AICc. Front. Comput. Sci. 12, 1261–1263. doi: 10.1007/s11704-018-8095-8

  • 18

    Huang, G. B., and Chen, L. (2007). Convex incremental extreme learning machine. Neurocomputing 70, 3056–3062. doi: 10.1016/j.neucom.2007.02.009

  • 19

    Huang, G. B., and Chen, L. (2008). Enhanced random search based incremental extreme learning machine. Neurocomputing 71, 3460–3468. doi: 10.1016/j.neucom.2007.10.008

  • 20

    Huang, G. B., Chen, L., and Siew, C. K. (2006a). Universal approximation using incremental constructive feedforward networks with random hidden nodes. IEEE Trans. Neural Netw. 17, 879–892. doi: 10.1109/TNN.2006.875977

  • 21

    Huang, G. B., Zhou, H., Ding, X., and Zhang, R. (2012). Extreme learning machine for regression and multiclass classification. IEEE Trans. Syst. Man Cybern. B Cybern. 42, 513–529. doi: 10.1109/TSMCB.2011.2168604

  • 22

    Huang, G. B., Zhu, Q. Y., and Siew, C. K. (2006b). Extreme learning machine: theory and applications. Neurocomputing 70, 489–501. doi: 10.1016/j.neucom.2005.12.126

  • 23

    Isaacson, S. A., and Kirby, R. M. (2011). Numerical solution of linear Volterra integral equations of the second kind with sharp gradients. J. Comput. Appl. Math. 235, 4283–4301. doi: 10.1016/j.cam.2011.03.029

  • 24

    Jafarian, A., Measoomy, S., and Abbasbandy, S. (2015). Artificial neural networks based modeling for solving Volterra integral equations system. Appl. Soft Comput. 27, 391–398. doi: 10.1016/j.asoc.2014.10.036

  • 25

    Jafarian, A., Mokhtarpour, M., and Baleanu, D. (2017). Artificial neural network approach for a class of fractional ordinary differential equation. Neural Comput. Appl. 28, 765–773. doi: 10.1007/s00521-015-2104-8

  • 26

    Jafarian, A., and Nia, S. M. (2013a). Feedback neural network method for solving linear Volterra integral equations of the second kind. Int. J. Math. Model. Numer. Optim. 4, 225–237. doi: 10.1504/IJMMNO.2013.056531

  • 27

    Jafarian, A., and Nia, S. M. (2013b). Using feed-back neural network method for solving linear Fredholm integral equations of the second kind. J. Hyperstruct. 2, 53–71.

  • 28

    Li, Y. F., and Ying, H. (2022). Disrupted visual input unveils the computational details of artificial neural networks for face perception. Front. Comput. Neurosci. 16, 1054421. doi: 10.3389/fncom.2022.1054421

  • 29

    Lima, P. M., and Buckwar, E. (2015). Numerical solution of the neural field equation in the two-dimensional case. SIAM J. Sci. Comput. 37, B962–B979. doi: 10.1137/15M1022562

  • 30

    Lu, Y., Chen, G., Yin, Q., Sun, H., and Hou, M. (2020). Solving the ruin probabilities of some risk models with Legendre neural network algorithm. Digit. Signal Process. 99, 102634. doi: 10.1016/j.dsp.2019.102634

  • 31

    Lu, Y., Weng, F., and Sun, H. (2022). Numerical solution for high-order ordinary differential equations using H-ELM algorithm. Eng. Comput. 39, 2781–2801. doi: 10.1108/EC-11-2021-0683

  • 32

    Ma, M., Zheng, L., and Yang, J. (2021). A novel improved trigonometric neural network algorithm for solving price-dividend functions of continuous time one-dimensional asset-pricing models. Neurocomputing 435, 151–161. doi: 10.1016/j.neucom.2021.01.012

  • 33

    Maleknejad, K., and Mirzaee, F. (2005). Using rationalized Haar wavelet for solving linear integral equations. Appl. Math. Comput. 160, 579–587. doi: 10.1016/j.amc.2003.11.036

  • 34

    Mall, S., and Chakraverty, S. (2014). Chebyshev neural network based model for solving Lane-Emden type equations. Appl. Math. Comput. 247, 100–114. doi: 10.1016/j.amc.2014.08.085

  • 35

    Mall, S., and Chakraverty, S. (2016). Application of Legendre neural network for solving ordinary differential equations. Appl. Soft Comput. 43, 347–356. doi: 10.1016/j.asoc.2015.10.069

  • 36

    Mandal, B. N., and Bhattacharya, S. (2007). Numerical solution of some classes of integral equations using Bernstein polynomials. Appl. Math. Comput. 190, 1707–1716. doi: 10.1016/j.amc.2007.02.058

  • 37

    Masouri, Z., Babolian, E., and Hatamzadeh-Varmazyar, S. (2010). An expansion-iterative method for numerically solving Volterra integral equation of the first kind. Comput. Math. Appl. 59, 1491–1499. doi: 10.1016/j.camwa.2009.11.004

  • 38

    Mirzaee, F., and Hoseini, S. F. (2016). Application of Fibonacci collocation method for solving Volterra-Fredholm integral equations. Appl. Math. Comput. 273, 637–644. doi: 10.1016/j.amc.2015.10.035

  • 39

    Nemati, S. (2015). Numerical solution of Volterra-Fredholm integral equations using Legendre collocation method. J. Comput. Appl. Math. 278, 29–36. doi: 10.1016/j.cam.2014.09.030

  • 40

    Pakdaman, M., Ahmadian, A., Effati, S., Salahshour, S., and Baleanu, D. (2017). Solving differential equations of fractional order using an optimization technique based on training artificial neural network. Appl. Math. Comput. 293, 81–95. doi: 10.1016/j.amc.2016.07.021

  • 41

    Rashed, M. T. (2003). Numerical solution of the integral equations of the first kind. Appl. Math. Comput. 145, 413–420. doi: 10.1016/S0096-3003(02)00497-6

  • 42

    Rashidinia, J., and Zarebnia, M. (2007). Solution of a Volterra integral equation by the Sinc collocation method. J. Comput. Appl. Math. 206, 801–813. doi: 10.1016/j.cam.2006.08.036

  • 43

    Rostami, F., and Jafarian, A. (2018). A new artificial neural network structure for solving high-order linear fractional differential equations. Int. J. Comput. Math. 95, 528–539. doi: 10.1080/00207160.2017.1291932

  • 44

    Saberi-Nadjafi, J., Mehrabinezhad, M., and Akbari, H. (2012). Solving Volterra integral equations of the second kind by wavelet-Galerkin scheme. Comput. Math. Appl. 63, 1536–1547. doi: 10.1016/j.camwa.2012.03.043

  • 45

    Sun, H., Hou, M., Yang, Y., Zhang, T., Weng, F., and Han, F. (2019). Solving partial differential equation based on Bernstein neural network and extreme learning machine algorithm. Neural Process. Lett. 50, 1153–1172. doi: 10.1007/s11063-018-9911-8

  • 46

    Usta, F., Akyiğit, M., Say, F., and Ansari, K. J. (2022). Bernstein operator method for approximate solution of singularly perturbed Volterra integral equations. J. Math. Anal. Appl. 507, 125828. doi: 10.1016/j.jmaa.2021.125828

  • 47

    Wang, K. Y., and Wang, Q. S. (2013). Lagrange collocation method for solving Volterra-Fredholm integral equations. Appl. Math. Comput. 219, 10434–10440. doi: 10.1016/j.amc.2013.04.017

  • 48

    Wang, K. Y., and Wang, Q. S. (2014). Taylor collocation method and convergence analysis for the Volterra-Fredholm integral equations. J. Comput. Appl. Math. 260, 294–300. doi: 10.1016/j.cam.2013.09.050

  • 49

    Wong, C., Vong, C., Wong, P., and Cao, J. (2018). Kernel-based multilayer extreme learning machines for representation learning. IEEE Trans. Neural Netw. Learn. Syst. 29, 757–762. doi: 10.1109/TNNLS.2016.2636834

  • 50

    Yang, Y., Hou, M., and Luo, J. (2018). A novel improved extreme learning machine algorithm in solving ordinary differential equations by Legendre neural network methods. Adv. Diff. Equat. 469, 1–24. doi: 10.1186/s13662-018-1927-x

  • 51

    Yang, Y., Hou, M., Sun, H., Zhang, T., Weng, F., and Luo, J. (2020). Neural network algorithm based on Legendre improved extreme learning machine for solving elliptic partial differential equations. Soft Comput. 24, 1083–1096. doi: 10.1007/s00500-019-03944-1

  • 52

    Zhou, T., Liu, X., Hou, M., and Liu, C. (2019). Numerical solution for ruin probability of continuous time model based on neural network algorithm. Neurocomputing 331, 67–76. doi: 10.1016/j.neucom.2018.08.020

  • 53

    Zuniga-Aguilar, C. J., Romero-Ugalde, H. M., Gomez-Aguilar, J. F., Jimenez, R. F. E., and Valtierra, M. (2017). Solving fractional differential equations of variable-order involving operators with Mittag-Leffler kernel using artificial neural networks. Chaos Solitons Fractals 103, 382–403. doi: 10.1016/j.chaos.2017.06.030

Keywords

Volterra-Fredholm integral equations, approximate solutions, neural network algorithm, sine-cosine basis function, extreme learning machine

Citation

Lu Y, Zhang S, Weng F and Sun H (2023) Approximate solutions to several classes of Volterra and Fredholm integral equations using the neural network algorithm based on the sine-cosine basis function and extreme learning machine. Front. Comput. Neurosci. 17:1120516. doi: 10.3389/fncom.2023.1120516

Received

10 December 2022

Accepted

13 February 2023

Published

09 March 2023

Edited by

Jia-Bao Liu, Anhui Jianzhu University, China

Reviewed by

Yinghao Chen, Eastern Institute for Advanced Study, China; Zhengguang Liu, Xi'an Jiaotong University, China

*Correspondence: Hongli Sun

Disclaimer

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
