Approximate solutions to several classes of Volterra and Fredholm integral equations using the neural network algorithm based on the sine-cosine basis function and extreme learning machine

In this study, we investigate a new neural network method for solving Volterra and Fredholm integral equations based on the sine-cosine basis function and the extreme learning machine (ELM) algorithm. Combining the ELM algorithm, sine-cosine basis functions, and several classes of integral equations, an improved model is designed. The novel neural network model consists of an input layer, a hidden layer, and an output layer, in which the hidden layer is eliminated by utilizing the sine-cosine basis function. Meanwhile, by exploiting the characteristic of the ELM algorithm that the hidden-layer biases and input weights are set fully automatically without iterative tuning, we can greatly reduce the model complexity and improve the calculation speed. Furthermore, the problem of finding the network parameters is converted into solving a set of linear equations. One advantage of this method is that we can obtain not only good numerical solutions for first- and second-kind Volterra integral equations but also acceptable solutions for first- and second-kind Fredholm integral equations and Volterra-Fredholm integral equations. Another advantage is that the improved algorithm provides the approximate solution of several kinds of linear integral equations in closed form (i.e., continuous and differentiable), so the solution can be evaluated at any point. Several numerical experiments on various types of integral equations are performed to illustrate the reliability and efficiency of the proposed method. The experimental results verify that the proposed method achieves very high accuracy and strong generalization ability.

Integral equations arise in models of networks of neurons. These equations are widely used in the fields of neuroscience and robotics, and they also play a crucial role in cognitive robotics. The reason is that the architecture of autonomous robots, which are able to interact with other agents in dealing with a mutual task, is strongly inspired by the processing principles and the neuronal circuitry of the primate brain. This study considers several kinds of linear integral equations, whose general form is defined as follows:

$$\epsilon y(x) + \lambda \int_a^b k_1(x,t)\,y(t)\,dt + \mu \int_a^x k_2(x,t)\,y(t)\,dt = g(x), \quad x \in [a,b], \qquad (1)$$

where the functions k_1(x, t), k_2(x, t), and g(x) are known, y(x) is the unknown function to be determined, a and b are constants, and ε, λ, and μ are parameters. Notably, Equation (1) reduces to the particular classes studied here for special choices of the parameters: a Fredholm integral equation of the first kind (ε = 0, μ = 0), a Fredholm integral equation of the second kind (ε ≠ 0, μ = 0), a Volterra integral equation of the first kind (ε = 0, λ = 0), a Volterra integral equation of the second kind (ε ≠ 0, λ = 0), and a Volterra-Fredholm integral equation (ε ≠ 0, λ ≠ 0, μ ≠ 0).

Many methods for the numerical solution of Volterra integral equations, Fredholm integral equations, and Volterra-Fredholm integral equations have been presented in recent years. Basis-function expansions, e.g., wavelets (Maleknejad and Mirzaee, 2005), Bernstein polynomials (Mandal and Bhattacharya, 2007), and Chebyshev polynomials (Dastjerdi and Ghaini, 2012), were proposed for solving integral equations. The Taylor collocation method (Wang and Wang, 2014), the Lagrange collocation method (Wang and Wang, 2013; Nemati, 2015), and the Fibonacci collocation method (Mirzaee and Hoseini, 2016) are effective and convenient for solving integral equations. The Sinc-collocation method (Rashidinia and Zarebnia, 2007) and the Galerkin method (Saberi-Nadjafi et al., 2012) also perform well on Volterra integral equation problems. However, most of these traditional methods share a disadvantage: they provide the solution as an array of values at preassigned mesh points in the domain and require an additional interpolation procedure to yield the solution over the whole domain.
In order to have an accurate solution, one either has to increase the order of the method or decrease the step size. This, however, increases the computational cost.
The neural network has excellent application potential in many fields (Habib and Qureshi, 2022; Li and Ying, 2022) owing to its universal function approximation capability (Hou and Han, 2012; Hou et al., 2017, 2018). Accordingly, neural networks are widely used as an effective tool for solving differential equations, integral equations, and integro-differential equations (Chakraverty, 2014, 2016; Jafarian et al., 2017; Pakdaman et al., 2017; Zuniga-Aguilar et al., 2017; Rostami and Jafarian, 2018). Golbabai and Seifollahi presented radial basis function networks for solving linear Fredholm and Volterra integral equations of the second kind (Golbabai and Seifollahi, 2006) and for solving a system of nonlinear integral equations (Golbabai and Seifollahi, 2009). Effati and Buzhabadi presented multilayer perceptron networks for solving Fredholm integral equations of the second kind (Effati and Buzhabadi, 2012). Jafarian and Nia proposed a feedback neural network method for solving linear Fredholm and Volterra integral equations of the second kind (Jafarian and Nia, 2013a,b). Jafarian presented artificial neural network-based modeling for solving the Volterra integral equations system (Jafarian et al., 2015). However, traditional neural network algorithms have some problems, such as over-fitting, difficulty in determining the number of hidden-layer nodes, difficulty in optimizing the model parameters, being easily trapped in local minima, slow convergence, and reduced learning speed and efficiency when the input data are large or the network structure is complex (Huang and Chen, 2008). Huang et al. (2006a,b) proposed the extreme learning machine (ELM) algorithm, which is a single-hidden-layer feed-forward neural network. The ELM algorithm only needs to set the number of hidden nodes of the network; it does not need to adjust the input weights and biases, and the output weights can be determined by the Moore-Penrose generalized inverse operation.
The ELM algorithm provides faster learning speed and better generalization performance with minimal human intervention. Based on these advantages, the ELM algorithm has been widely applied to many real-world applications, such as regression and classification problems (Wong et al., 2018). Many neural network methods based on improved extreme learning machine algorithms have been developed for solving ordinary differential equations (Lu et al., 2022), partial differential equations (Sun et al., 2019; Yang et al., 2020), the ruin probabilities of the classical risk model and the Erlang(2) risk model (Zhou et al., 2019; Lu et al., 2020), and one-dimensional asset pricing (Ma et al., 2021). Chen et al. (2020, 2021, 2022) proposed the trigonometric exponential neural network, the Laguerre neural network, and the neural finite element method for the ruin probability, the generalized Black-Scholes differential equation, and the generalized Black-Scholes-Merton differential equation, respectively. Inspired by these studies, the motivation of this research is to present the sine-cosine ELM (SC-ELM) algorithm to solve linear Volterra integral equations of the first kind, linear Volterra integral equations of the second kind, linear Fredholm integral equations of the first kind, linear Fredholm integral equations of the second kind, and linear Volterra-Fredholm integral equations. In recent studies, a linear integral equation of the third kind with fixed singularities in the kernel was studied by Gabbasov and Galimova (2022), and Volterra integral equations of the first kind on a bounded interval were considered by Bulatov and Markova (2022). For more results, we refer to Din et al. (2022) and Usta et al. (2022).
In this study, we propose a neural network method based on the sine-cosine basis function and the improved ELM algorithm to solve linear integral equations. Specifically, the hidden layer is eliminated by expanding the input pattern with the sine-cosine basis function, which simplifies the calculation to some extent. Moreover, the improved ELM algorithm can automatically satisfy the boundary conditions, and it transforms the problem into solving a linear system, which provides great convenience for calculation. Furthermore, a closed-form solution can be obtained from this model, from which the approximate solution of a linear integral equation at any point can be provided. The remainder of the article is organized as follows. In Section 2, a brief review of the ELM algorithm is provided. In Section 3, a novel neural network method based on the sine-cosine basis function and the ELM algorithm for solving integral equations of the form of Equation (1) is discussed. In Section 4, we present several numerical examples to demonstrate the accuracy and efficiency of the improved neural network algorithm. In Section 5, concluding remarks are presented.

2. The ELM algorithm
The ELM algorithm originated from the single-hidden-layer feed-forward network (SLFN) and was later developed into a generalized SLFN algorithm (Huang and Chen, 2007). The ELM algorithm not only is fully automatically implemented without iterative tuning but also tends to the minimum training error. It requires minimal human intervention and provides faster learning speed and better generalization performance. Therefore, the ELM algorithm is widely used in classification and regression tasks (Huang et al., 2012; Cambria and Huang, 2013).
For a data set with N + 1 distinct training samples (x_i, g_i) ∈ R × R (i = 0, 1, ..., N), the output of a neural network with M + 1 hidden neurons is expressed as follows:

$$o_i = \sum_{j=0}^{M} \beta_j f(w_j x_i + b_j), \quad i = 0, 1, \ldots, N, \qquad (2)$$

where f is the activation function, w_j is the input weight of the j-th hidden-layer node, b_j is the bias of the j-th hidden-layer node, and β_j is the output weight connecting the j-th hidden-layer node to the output node.
The error function of the SLFN is

$$E = \sum_{i=0}^{N} (o_i - g_i)^2. \qquad (3)$$

Assuming the error between the output value o_i of the SLFN and the exact value g_i is zero, the relationship between x_i and g_i can be modeled as

$$g_i = \sum_{j=0}^{M} \beta_j f(w_j x_i + b_j), \quad i = 0, 1, \ldots, N, \qquad (4)$$

where both the input weights w_j and the biases b_j are randomly generated. Equations (4) can be rewritten in the following matrix form:

$$H\beta = G, \qquad (5)$$

where H is the output matrix of the hidden layer,

$$H = \begin{pmatrix} f(w_0 x_0 + b_0) & \cdots & f(w_M x_0 + b_M) \\ \vdots & \ddots & \vdots \\ f(w_0 x_N + b_0) & \cdots & f(w_M x_N + b_M) \end{pmatrix}, \quad \beta = (\beta_0, \ldots, \beta_M)^T, \quad G = (g_0, \ldots, g_N)^T.$$

The minimum-norm least-squares solution of the linear system (5) is $\hat{\beta} = H^{\dagger} G$, where $H^{\dagger}$ denotes the Moore-Penrose generalized inverse of H.

3. The proposed method

In this section, we propose a neural network method based on the sine-cosine basis function and the extreme learning machine algorithm to solve linear integral equations. The single-hidden-layer sine-cosine neural network consists of three layers: an input layer, a hidden layer, and an output layer. The hidden layer consists of two parts: the first part uses the cosine basis functions, and the second part implements the superposition of the sine basis functions. The structure of the sine-cosine neural network method is shown in Figure 1. The steps of the sine-cosine neural network method for solving the linear integral equations are as follows:

Step 1: Discretize the interval [a, b] into a series of collocation points x_i (i = 0, 1, ..., N).

Step 2: Construct the approximate solution by using the sine-cosine basis as the activation function, that is,

$$\hat{y}_{SC\text{-}ELM}(x) = \sum_{j=0}^{M} a_j \cos(jx) + \sum_{j=1}^{M} b_j \sin(jx).$$

Step 3: According to the particular problem and data set, substitute the trial solution ŷ_SC-ELM into Equation (1). Then, convert the resulting collocation equations into the matrix form Hβ = G, where β collects the output weights a_j and b_j, G = (g(x_0), ..., g(x_N))^T, and each entry of H is obtained by applying the operator on the left-hand side of Equation (1) to one basis function at one collocation point.

Step 4: By the theory of the Moore-Penrose generalized inverse of the matrix H, obtain the network parameters as β̂ = H†G.

Step 5: Find the output weights a_j, b_j and the number of neurons M with the smallest MSE; these are taken as the optimal number of neurons M and the optimal output weights β.
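To make the steps above concrete, the following Python sketch applies the same collocation idea to a second-kind Volterra equation y(x) + ∫_a^x k(x, t) y(t) dt = g(x), i.e., Equation (1) with ε = 1, λ = 0, μ = 1. The trapezoidal quadrature, the basis ordering, and the parameter defaults are assumptions of this sketch, not the paper's exact implementation:

```python
import numpy as np

def sc_elm_volterra2(g, kernel, a, b, N=50, M=12, nq=200):
    """Collocation sketch for y(x) + int_a^x k(x,t) y(t) dt = g(x)
    with a sine-cosine trial solution
    y(x) ~ sum_j c_j cos(jx) + sum_j s_j sin(jx)."""
    xs = np.linspace(a, b, N + 1)          # Step 1: collocation points
    freqs = np.arange(M + 1)

    def basis(t):                          # Step 2: sine-cosine features
        t = np.atleast_1d(t)[:, None]
        return np.hstack([np.cos(t * freqs), np.sin(t * freqs[1:])])

    H = np.empty((N + 1, 2 * M + 1))       # Step 3: collocation matrix
    for i, x in enumerate(xs):
        ts = np.linspace(a, x, nq)         # quadrature grid on [a, x]
        w = np.full(nq, (x - a) / (nq - 1))
        w[[0, -1]] *= 0.5                  # trapezoidal weights
        integral = (w * kernel(x, ts)) @ basis(ts)
        H[i] = basis(x)[0] + integral
    beta, *_ = np.linalg.lstsq(H, g(xs), rcond=None)  # Step 4: beta = H^+ G
    return lambda x: basis(x) @ beta       # closed-form approximate solution
```

As a quick check, y(x) + ∫_0^x y(t) dt = 1 has the exact solution y(x) = e⁻ˣ; this stand-in equation is used only to exercise the sketch and is not one of the paper's examples.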
Some advantages of the single-layer sine-cosine neural network method for solving integral equations are as follows: (i) The hidden layer is eliminated by expanding the input pattern using the sine-cosine basis function. (ii) The sine-cosine neural network algorithm only needs to determine the weights of the output layer. The problem is transformed into a linear system, and the output weights can be obtained by a simple generalized inverse matrix, which greatly improves the calculation speed. (iii) We can obtain the closed-form solution from this model and, most importantly, the approximate solution at any point in the domain.

4. Numerical experiments
In this section, some numerical experiments are performed to demonstrate the reliability and effectiveness of the improved neural network algorithm. The sine-cosine neural network method based on the sine-cosine basis function and the ELM algorithm is applied to solve linear Volterra integral equations of the first kind, linear Volterra integral equations of the second kind, linear Fredholm integral equations of the first kind, linear Fredholm integral equations of the second kind, and linear Volterra-Fredholm integral equations.
The algorithm is evaluated with MATLAB R2021a running on an Intel Xeon Gold 6226R CPU with 64.0 GB RAM. The training set is obtained by taking points at equal intervals, and the testing set is randomly selected. The validation set is the set of midpoints of adjacent training points, V = {(x_i + x_{i+1})/2 : i = 0, 1, ..., N − 1}, where the x_i are the training points in the following studies. We use the mean squared error (MSE), absolute error (AE), mean absolute error (MAE), and root mean squared error (RMSE) to measure the error of the numerical solution. They are defined as follows:

$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n} \big(y(x_i) - \hat{y}(x_i)\big)^2, \quad \mathrm{AE}_i = \big|y(x_i) - \hat{y}(x_i)\big|,$$
$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n} \big|y(x_i) - \hat{y}(x_i)\big|, \quad \mathrm{RMSE} = \sqrt{\mathrm{MSE}},$$

where y(x_i) denotes the exact solution and ŷ(x_i) represents the approximate solution obtained by the proposed algorithm.
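The error measures above can be sketched in a few lines of NumPy (illustrative only, not the paper's MATLAB code):

```python
import numpy as np

def error_metrics(y_exact, y_approx):
    """MSE, MAE, and RMSE over a set of test points; the pointwise
    absolute errors (AE) are the entries of `err`."""
    err = np.abs(np.asarray(y_exact) - np.asarray(y_approx))
    mse = np.mean(err ** 2)                # mean squared error
    mae = np.mean(err)                     # mean absolute error
    rmse = np.sqrt(mse)                    # root mean squared error
    return mse, mae, rmse
```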

4.1. Example 1
Consider the linear Volterra integral equation of the second kind from Guo et al. (2012), whose analytical solution is f(x) = cos(x). We train the proposed neural network on 50 equidistant points in the interval [0, 1] with the first 12 sine-cosine basis functions. The comparison between the exact solution and the approximate solution obtained by the improved neural network algorithm is depicted in Figure 2A, and the plot of the error function between them is shown in Figure 2B. As shown in the figures, the mean squared error is 1.3399 × 10⁻¹⁹, and the maximum absolute error is approximately 7.4910 × 10⁻¹⁰. Table 1 lists the exact solution and the approximate solution obtained by the proposed neural network algorithm for 11 testing points at unequal intervals in the domain [0, 1], together with the absolute errors; the mean squared error is approximately 1.6789 × 10⁻¹⁹. These results imply that the proposed method has high accuracy. Table 2 compares the proposed method with the LS-SVR method: the maximum absolute error of our method is approximately 6.8246 × 10⁻¹⁰, which compares favorably with the maximum absolute error reported in Guo et al. (2012).

4.2. Example 2
Consider the linear Volterra integral equation of the first kind from Masouri et al. (2010), whose analytical solution is f(x) = e⁻ˣ. A total of 21 equidistant points in the interval [0, 1] are used as training points, and the neural network adopts the first 10 sine-cosine basis functions. Figures 3A, B show that the exact solution and the approximate solution are highly consistent. The maximum absolute error is approximately 1.3959 × 10⁻⁶. Table 3 lists the results for the testing points; the mean squared error is approximately 2.5781 × 10⁻¹⁶. These findings provide strong support for the effectiveness of the proposed method.

4.3. Example 3
We consider the linear Fredholm integral equation of the first kind from Rashed (2003), whose analytical solution is f(x) = x. This problem is solved by the proposed neural network model in the interval [0, 1]. We use 21 equidistant points in the domain [0, 1] with the first six sine-cosine basis functions to train the model. The comparison between the exact solution and the approximate solution obtained by the improved neural network algorithm is depicted in Figure 4A, and the error plot is depicted in Figure 4B. Note that the mean squared error is 4.5915 × 10⁻⁸ for these training points. Table 4 lists the exact solution and the approximate solution obtained by the proposed neural network algorithm for 11 testing points at unequal intervals in the domain [0, 1]. We observe that the maximum absolute error is approximately 2.7433 × 10⁻⁴. The results show that the new neural network has good generalization ability.

4.4. Example 4
We consider the linear Fredholm integral equation of the second kind from Golbabai and Seifollahi (2006), whose analytical solution is f(x) = e²ˣ. The improved neural network algorithm is trained with 50 equidistant points in the interval [0, 1] with the first 12 sine-cosine basis functions. The approximate solution obtained by the improved neural network algorithm and the exact solution are shown in Figure 5A, and the error function is displayed in Figure 5B. In particular, the mean squared error is 2.3111 × 10⁻¹⁷, and the maximum absolute error is approximately 9.8998 × 10⁻⁹, which fully demonstrates the superiority of the improved neural network algorithm.
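For a Fredholm equation, the integration limits do not depend on x, so the collocation matrix can be built from a single fixed quadrature grid. The equation of this example is not reproduced above, so the sketch below uses a stand-in equation, y(x) − ∫_0^1 x t y(t) dt = 2x/3 with exact solution y(x) = x; the trapezoidal quadrature and parameter defaults are likewise assumptions of the sketch:

```python
import numpy as np

def sc_elm_fredholm2(g, kernel, a, b, N=50, M=12, nq=400):
    """Collocation sketch for y(x) - int_a^b k(x,t) y(t) dt = g(x)
    with a sine-cosine trial solution; a single fixed quadrature grid
    suffices because the limits a, b are constant."""
    xs = np.linspace(a, b, N + 1)          # collocation points
    ts = np.linspace(a, b, nq)             # fixed quadrature grid
    wq = np.full(nq, (b - a) / (nq - 1))
    wq[[0, -1]] *= 0.5                     # trapezoidal weights
    freqs = np.arange(M + 1)

    def basis(t):
        t = np.atleast_1d(t)[:, None]
        return np.hstack([np.cos(t * freqs), np.sin(t * freqs[1:])])

    Phi = basis(ts)                        # basis on the quadrature grid
    H = np.empty((N + 1, 2 * M + 1))
    for i, x in enumerate(xs):
        H[i] = basis(x)[0] - (wq * kernel(x, ts)) @ Phi
    beta, *_ = np.linalg.lstsq(H, g(xs), rcond=None)
    return lambda x: basis(x) @ beta
```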
Finally, Table 5 provides the exact solution and the approximate solution obtained by the proposed neural network algorithm for 11 testing points at unequal intervals in the domain [0, 1]. As shown in Table 5, the mean squared error is approximately 3.1391 × 10⁻¹⁷, which shows the power and effectiveness of the proposed method. Table 6 compares the proposed method with RBF networks. The maximum absolute error of the proposed method is approximately 7.7601 × 10⁻⁹, whereas the maximum absolute error reported in Table 1 of Golbabai and Seifollahi (2006) is approximately 6.7698 × 10⁻⁷. The solution accuracy of the proposed algorithm is therefore higher.

4.5. Example 5
Consider the linear Volterra-Fredholm integral equation from Wang and Wang (2014), whose analytical solution is f(x) = e⁻ˣ. A total of 50 equidistant points in the interval [0, 1] and the first 11 sine-cosine basis functions are used to train the neural network model. The comparison and error plots of the exact and approximate solutions are shown in Figures 6A, B, from which we can see that the mean squared error is 3.3499 × 10⁻¹⁸. Table 7 shows the exact solution and the approximate solution obtained by the improved ELM method for 11 testing points at unequal intervals in the domain [0, 1]. As shown in the table, the maximum absolute error is approximately 2.6673 × 10⁻⁹, which reveals that the improved neural network algorithm has high accuracy and excellent performance.
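The Volterra-Fredholm case exercises all three terms of Equation (1) at once. The following sketch combines the fixed-limit and variable-limit quadratures from the previous sketches; the stand-in test equation (k₁(x, t) = xt, k₂(x, t) = 1, g(x) = 4x/3 + x²/2, exact solution y(x) = x) and the quadrature rules are assumptions made here, not taken from Wang and Wang (2014):

```python
import numpy as np

def solve_general(g, k1, k2, a, b, eps=1.0, lam=1.0, mu=1.0, N=50, M=10):
    """Collocation sketch for the general linear integral equation
    eps*y(x) + lam*int_a^b k1(x,t)y(t)dt + mu*int_a^x k2(x,t)y(t)dt = g(x)
    with a sine-cosine trial solution and trapezoidal quadrature."""
    xs = np.linspace(a, b, N + 1)             # collocation points
    freqs = np.arange(M + 1)

    def basis(t):
        t = np.atleast_1d(t)[:, None]
        return np.hstack([np.cos(t * freqs), np.sin(t * freqs[1:])])

    def trap(f_vals, ts):
        w = np.full(len(ts), ts[1] - ts[0])   # uniform spacing
        w[[0, -1]] *= 0.5                     # trapezoidal end weights
        return w @ f_vals

    tq = np.linspace(a, b, 400)               # fixed grid for the Fredholm term
    H = np.empty((N + 1, 2 * M + 1))
    for i, x in enumerate(xs):
        row = eps * basis(x)[0]
        row = row + lam * trap(k1(x, tq)[:, None] * basis(tq), tq)
        tv = np.linspace(a, x, 200)           # variable grid for the Volterra term
        row = row + mu * trap(k2(x, tv)[:, None] * basis(tv), tv)
        H[i] = row
    beta, *_ = np.linalg.lstsq(H, g(xs), rcond=None)
    return lambda x: basis(x) @ beta
```

Setting λ = 0 or μ = 0 recovers the pure Volterra or pure Fredholm cases of the earlier examples.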
We compare the RMSE of the proposed method with that of the Taylor collocation method in Wang and Wang (2014). Table 8 clearly shows that our algorithm is more accurate than the Taylor collocation method. When 5, 8, and 9 points are tested, the RMSEs of the Taylor collocation method in Wang and Wang (2014) are approximately 4.03 × 10⁻⁷, 9.50 × 10⁻⁷, and 2.15 × 10⁻⁵, whereas the RMSEs of the proposed method are, respectively, 1.67 × 10⁻⁹, 1.78 × 10⁻⁹, and 1.67 × 10⁻⁹.

4.6. Example 6
We consider the linear Volterra integral equation of the second kind from Saberi-Nadjafi et al. (2012).
A total of 21 equidistant discrete points and the first 11 sine-cosine basis functions are used to construct the neural network model. The comparison and error plots of the exact and approximate solutions are displayed in Figures 7A, B. It is not hard to find that the MSE is 4.2000 × 10⁻¹⁶, which implies that the proposed algorithm has high accuracy.
To verify the effectiveness of the proposed method, we provide the exact solution and the approximate solution obtained by the improved ELM method for 11 testing points at unequal intervals in the domain [0, 1]; see Table 9. Table 10 compares the MSE of the numerical solutions obtained by the SC-ELM model when more training points are added and different numbers of hidden-layer neurons are configured. From these results, it can be seen that the proposed method achieves good accuracy. The calculation time for the different examples is listed in Table 11. These data suggest that our method is efficient and feasible.

5. Conclusion
In this study, an improved neural network algorithm based on the sine-cosine basis function and the extreme learning machine algorithm has been developed for solving linear integral equations. The accuracy of the improved neural network has been checked by solving a linear Volterra integral equation of the first kind, a linear Volterra integral equation of the second kind, a linear Fredholm integral equation of the first kind, a linear Fredholm integral equation of the second kind, and a linear Volterra-Fredholm integral equation. The experimental results of the improved ELM approach on these different types of integral equations show that the simulated solutions are close to the exact solutions. Therefore, the proposed model is highly accurate and can serve as a good tool for solving linear integral equations.

Data availability statement
The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.