# Exponential convergence for numerical solution of integral equations using radial basis functions

1. Introduction

Integral equations have been solved by many different methods [1, 2]. In [3], integral equations and methods for solving them are classified; that reference covers some traditional methods. More recent methods include the Adomian decomposition method (ADM) [4], the homotopy perturbation method (HPM) [5], He's variational iteration method [6], optimal control [7], wavelets [8-11], neural networks [12], simulation methods [13], the block-pulse method [14], and so forth. Many other articles contain new approaches to solving integral equations [15-18]. We recall that most of the mentioned methods are not easy to apply to integral equations in higher dimensions [19-24] or to mixed Volterra-Fredholm cases [6, 25-27]. In the present paper, we restrict ourselves to the method of radial basis functions (RBFs).

The RBF methodology was introduced by Hardy [28]. At first, it was popular in multivariate interpolation [29-34]. In 1990, Kansa introduced a way to use these functions for solving parabolic, hyperbolic, and elliptic partial differential equations [35]. Since then, radial basis functions have been widely applied in numerous fields. Although RBFs have many other applications, here we focus only on their use for solving integral equations.

In 1992, Makroglou [36] applied the collocation technique to solve various linear and nonlinear integral equations. In 2002, Galperin and Kansa [37] applied RBFs to the solution of weakly singular Volterra integral equations via global optimization. In 2007, Alipanah and Dehghan [38] used RBFs for solving one-dimensional nonlinear Fredholm integral equations without an optimization technique, via quadrature methods. In [39], the method was generalized to two-dimensional problems, and in [40] its accuracy was compared with that of a traditional spectral method. Also, Avazzadeh et al. used RBFs for solving partial integrodifferential equations [41-43].

In [38, 39], the method was applied to Fredholm integral equations. In this paper, we describe the method for additional types of integral equations, namely Volterra and mixed Volterra-Fredholm equations; some singular types of integral equations can also be solved in this way. Thus the method handles linear and nonlinear Fredholm, Volterra, and mixed Volterra-Fredholm equations, even in higher dimensions.

The paper is organized in the following way. In Section 2, radial basis functions are introduced as an approximation tool. In Section 3, we recall the method of solution for Fredholm integral equations [38, 39], and then Volterra and mixed Volterra-Fredholm integral equations are solved using radial basis functions. In Section 4, some illustrative examples are presented. The last section contains the conclusion and some ideas for future work.

2. Radial Basis Functions

Definition 1. Given a set of $n + 1$ distinct data points $\{p_j\}_{j=0}^{n}$ and corresponding data values $\{f_j\}_{j=0}^{n}$, the basic RBF interpolant is given by

$$s(p) = \sum_{j=0}^{n} c_j \phi(\|p - p_j\|), \quad (1)$$

where $\|\cdot\|$ is the Euclidean norm, $p, p_j \in \mathbb{R}^d$ ($d$ is a positive finite integer), and the $f_j$ are scalars. Also, $\phi(r)$, $r \ge 0$, is a radial basis function. The coefficients $c_j$ are determined from the interpolation conditions $s(p_j) = f_j$, $j = 0, 1, \ldots, n$, which lead to the following symmetric linear system:

$$A\bar{c} = \bar{f}, \quad (2)$$

where the entries of $\bar{c}$, $\bar{f}$, and $A$ are given by

$$\bar{c} = [c_0, \ldots, c_n]^T, \qquad \bar{f} = [f_0, \ldots, f_n]^T, \qquad A_{ij} = \phi(\|p_i - p_j\|), \quad i, j = 0, 1, \ldots, n. \quad (3)$$

Sometimes, the $\{p_j\}_{j=0}^{n}$ are called center points; each basis function is centered at one of these points. Since the points can be chosen arbitrarily, we have a mesh-free method [44-46].
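As a concrete sketch of the interpolation defined by (1)-(3), the following one-dimensional Python snippet (our own illustration; the function names are not from the paper) builds the interpolation matrix, solves for the coefficients, and evaluates the interpolant with a Gaussian RBF:

```python
import numpy as np

def rbf_interpolate(centers, values, phi):
    # Interpolation matrix A_ij = phi(||p_i - p_j||), per (2)-(3)
    A = phi(np.abs(centers[:, None] - centers[None, :]))
    return np.linalg.solve(A, values)

def rbf_eval(x, centers, coeffs, phi):
    # s(x) = sum_j c_j phi(||x - p_j||), per (1)
    return phi(np.abs(np.asarray(x)[:, None] - centers[None, :])) @ coeffs

# Example: interpolate f(p) = sin(p) on [0, 1] with a Gaussian RBF
eps = 5.0
gaussian = lambda r: np.exp(-(eps * r) ** 2)
centers = np.linspace(0.0, 1.0, 11)
coeffs = rbf_interpolate(centers, np.sin(centers), gaussian)

x = np.linspace(0.0, 1.0, 101)
err = np.max(np.abs(rbf_eval(x, centers, coeffs, gaussian) - np.sin(x)))
print(err)  # small interpolation error between the centers
```

Note that the same code works in any dimension once the distance computation is replaced by a pairwise Euclidean distance; only the coefficient vector grows, not the structure of the method.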

The sufficient conditions on $\phi(r)$ that guarantee the nonsingularity of the matrix $A$ in (3) are given in [47]. Also, Micchelli [33] showed that a larger class of functions can be considered for which the RBF interpolation problem remains uniquely solvable.

There are two kinds of radial basis functions: piecewise smooth and infinitely smooth radial functions. The infinitely smooth radial functions involve a shape parameter $\epsilon$, a free parameter controlling the shape of the functions. As $\epsilon \to 0$, the radial functions become flatter [48, 49].

Some piecewise smooth RBFs are $r^3$ (cubic) and $r^2 \log r$ (thin plate spline), and some common infinitely smooth examples of $\phi(r)$ that lead to a uniquely solvable method are the following:

linear: $r$,

Gaussian (GA): $e^{-(\epsilon r)^2}$,

multiquadric (MQ): $(1 + (\epsilon r)^2)^{\alpha/2}$, $\alpha \ne 0$, $\alpha \notin 2\mathbb{N}$,

inverse multiquadric (IMQ): $(1 + (\epsilon r)^2)^{-1/2}$,

inverse quadric (IQ): $(1 + (\epsilon r)^2)^{-1}$.

Madych and Nelson proved the exponential convergence of multiquadric approximation [31, 50]. They showed that under certain conditions the interpolation error is $e = O(\lambda^{c/h})$ (note that the MQ RBF has been redefined from Hardy's original definition by the transformation $c = 1/\epsilon$), where $h$ is the mesh size and $0 < \lambda < 1$ is a constant. As noted in [51], this implies that there are two ways to improve the approximation: reducing $h$ or increasing $c$; in particular, $e \to 0$ as $c \to \infty$. While reducing $h$ leads to heavy computations, increasing $c$ carries no extra computational cost. However, according to Schaback's "uncertainty principle" [52], as the error becomes smaller the matrix becomes more ill-conditioned, so the solution breaks down when $c$ becomes too large. Nevertheless, there is a wide range of $c$ over which highly accurate results can be produced. So, if we could solve the ill-conditioned system, we could increase $c$ and obtain the best approximation [50]. There are some experimental studies of the shape parameter, ill-conditioning, and convergence [53-55].
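The trade-off described by the uncertainty principle is easy to observe numerically. The following sketch (our own illustration, not from the paper) computes the condition number of the Gaussian interpolation matrix for several shape parameters; the matrix becomes dramatically more ill-conditioned as $\epsilon$ decreases, that is, as $c = 1/\epsilon$ grows:

```python
import numpy as np

centers = np.linspace(0.0, 1.0, 15)
r = np.abs(centers[:, None] - centers[None, :])  # pairwise distances

conds = []
for eps in (8.0, 4.0, 2.0, 1.0):
    A = np.exp(-(eps * r) ** 2)   # Gaussian interpolation matrix, per (3)
    conds.append(np.linalg.cond(A))

# Condition number grows monotonically as eps shrinks (flatter basis)
print(conds)
```

For the flattest setting the condition number approaches the reciprocal of machine precision, which is exactly the breakdown regime described above; in high-precision arithmetic (as used later in this paper) the usable range of $c$ is correspondingly wider.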

There are some methods for trading off between $c$ and the error [56, 57]. The golden section algorithm [56], a recent method for finding a good shape parameter, can be effective but is often expensive. Baxter [58] investigated the preconditioned conjugate gradient technique. Casciola et al. [59] regularized the solutions by changing the Euclidean norm to an anisotropic norm. Karageorghis et al. [60] applied a matrix decomposition algorithm for 3D elliptic problems. There are also regularization techniques for solving ill-conditioned systems, such as truncated singular value decomposition (TSVD) and the Tikhonov regularization method. The reader can find details in [58-60] and the references therein.

3. Integral Equations

3.1. Fredholm Integral Equation. Consider the following Fredholm integral equation of the Urysohn form:

$$u(x) - \lambda \int_a^b G(x, t, u(t))\, dt = f(x), \quad (4)$$

where $\lambda$ is a constant, and $f(x)$ and $G(x, t, u(t))$ are assumed to be defined for $a \le x, t \le b$. Let $\phi$ be a radial basis function; we approximate $u(x)$ by the following interpolant:

$$u(x) \approx \sum_{j=0}^{n} c_j \phi(\|x - x_j\|) = C^T \Psi(x), \quad (5)$$

where $C^T = [c_0, c_1, \ldots, c_n]$ and $\Psi(x) = [\phi(\|x - x_0\|), \ldots, \phi(\|x - x_n\|)]^T$. Now, by replacing (5) in (4), we obtain

$$C^T \Psi(x) - \lambda \int_a^b G\big(x, t, C^T \Psi(t)\big)\, dt \approx f(x). \quad (6)$$

In the above equation, the coefficients $c_j$, $j = 0, 1, \ldots, n$, are unknown. To compute them, we collocate at the points $x_i$, $i = 0, 1, \ldots, n$, as follows:

$$C^T \Psi(x_i) - \lambda \int_a^b G\big(x_i, t, C^T \Psi(t)\big)\, dt \approx f(x_i). \quad (7)$$

By applying the Legendre quadrature formula [61], (7) can be changed to the following form:

$$C^T \Psi(x_i) - \lambda \sum_{j=0}^{N} w_j\, G\big(x_i, t_j, C^T \Psi(t_j)\big) \approx f(x_i), \quad (8)$$

where $t_j$ and $w_j$ are the quadrature nodes and weights.

This is a nonlinear system of equations that can be solved by Newton's iterative method to obtain the unknown vector [C.sup.T].
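A minimal sketch of steps (5)-(8) for a simple test case may clarify the procedure. The problem below is our own (a linear kernel with a known exact solution, not an example from the paper), scipy's `fsolve` stands in for Newton's method, and all names are illustrative:

```python
import numpy as np
from scipy.optimize import fsolve

# Test problem (ours): u(x) - \int_0^1 x*t*u(t) dt = 2x/3 on [0, 1],
# whose exact solution is u(x) = x.
G = lambda x, t, u: x * t * u
f = lambda x: 2.0 * x / 3.0

eps = 3.0
phi = lambda r: np.exp(-(eps * r) ** 2)        # Gaussian RBF
xs = np.linspace(0.0, 1.0, 9)                  # centers = collocation points
tq, wq = np.polynomial.legendre.leggauss(20)   # Gauss-Legendre on [-1, 1]
tq, wq = 0.5 * (tq + 1.0), 0.5 * wq            # mapped to [0, 1]

Psi = lambda pts: phi(np.abs(np.asarray(pts)[:, None] - xs[None, :]))

def residual(c):
    u_q = Psi(tq) @ c                          # C^T Psi(t_j), per (5)
    integral = np.array([np.sum(wq * G(xi, tq, u_q)) for xi in xs])
    return Psi(xs) @ c - integral - f(xs)      # discretized equation (8)

c = fsolve(residual, np.zeros(xs.size))        # stand-in for Newton's method
err = np.max(np.abs(Psi(xs) @ c - xs))         # compare with exact u(x) = x
print(err)
```

For a nonlinear kernel $G$ nothing changes in the code except the definition of `G`; the residual simply becomes nonlinear in `c`, which is the situation the paper addresses with Newton's iteration.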

Similarly, for the two-dimensional integral equation, consider the Fredholm integral equation as follows:

$$u(x, y) - \lambda \int_c^d \int_a^b G(x, y, s, t, u(s, t))\, ds\, dt = f(x, y), \quad (x, y) \in [a, b] \times [c, d], \quad (9)$$

where $G(x, y, s, t, u(s, t))$ and $f(x, y)$ are given analytic functions. According to (1), the function $u(x, y)$ may be represented by the approximate series

$$u(p) \approx \sum_{\gamma=0}^{n} c_\gamma \phi(\|p - p_\gamma\|) = C^T \Psi(p), \quad (10)$$

where $n$ is any natural number, $p = (x, y) \in \mathbb{R}^2$, and $p_\gamma = (x_i, y_j) \in \mathbb{R}^2$. As noted in the previous section, the collocation points $\{p_\gamma\}_{\gamma=0}^{n}$ can be chosen as the centers. However, the selection of the center points can affect the accuracy; sometimes uniform points or random points are preferred.

Replacing (10) in (9) we have

$$C^T \Psi(x, y) - \lambda \int_c^d \int_a^b G\big(x, y, s, t, C^T \Psi(s, t)\big)\, ds\, dt \approx f(x, y), \quad (x, y) \in [a, b] \times [c, d]. \quad (11)$$

In the above equation, only the $c_\gamma$ ($\gamma = 0, 1, \ldots, n$) are unknown, which is an interesting technical advantage of using RBFs: the solution process becomes no more complicated as the dimension of the given problem increases.

Now we substitute the given collocation points into the above equation. The collocation points can be the same as the center points or any other arbitrary points:

$$C^T \Psi(x_i, y_j) - \lambda \int_c^d \int_a^b G\big(x_i, y_j, s, t, C^T \Psi(s, t)\big)\, ds\, dt \approx f(x_i, y_j). \quad (12)$$

By applying the quadrature integration formula, (12) can be changed to the following form:

$$C^T \Psi(x_i, y_j) - \lambda \sum_{l=0}^{N} \sum_{k=0}^{N} w_k w_l\, G\big(x_i, y_j, s_k, t_l, C^T \Psi(s_k, t_l)\big) \approx f(x_i, y_j), \quad (13)$$

where $s_k$, $t_l$ and $w_k$, $w_l$ are the quadrature nodes and weights.

This is a nonlinear system of equations that can be solved by Newton's iterative method to obtain the unknown vector $C^T$. We note that the linearized systems produced by Newton's method are ill-conditioned, and the use of regularization methods is efficient. We can also apply iterative regularization methods for solving ill-conditioned nonlinear systems of equations [62].

The mentioned method for solving Fredholm integral equation was discussed in [38, 39] and in the current paper it is generalized for solving Volterra and Volterra-Fredholm integral equations.

3.2. Volterra Integral Equation. Consider the following Urysohn Volterra integral equation:

$$u(x) - \lambda \int_a^x G(x, t, u(t))\, dt = f(x), \quad (14)$$

where $\lambda$ is a constant, and $f(x)$ and $G(x, t, u(t))$ are assumed to be defined for $a \le x, t \le b$. As in the previous section, we substitute (5) in (14) and collocate at the points $x_i$, $i = 0, 1, \ldots, n$. So we have

$$C^T \Psi(x_i) - \lambda \int_a^{x_i} G\big(x_i, t, C^T \Psi(t)\big)\, dt \approx f(x_i). \quad (15)$$

Using the linear transformation

$$t = \mu(s) = \frac{x_i - a}{2}\, s + \frac{x_i + a}{2} \quad (16)$$

reduces (15) to the following integral equation:

$$C^T \Psi(x_i) - \lambda\, \frac{x_i - a}{2} \int_{-1}^{1} G\big(x_i, \mu(s), C^T \Psi(\mu(s))\big)\, ds \approx f(x_i). \quad (17)$$

Now, by applying a quadrature integration formula, we approximate the integral in (17) and thus we have a nonlinear system of equations that can be solved by Newton's iterative method to obtain the unknown vector [C.sup.T].
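The role of the transformation (16) is to map a fixed Gauss-Legendre rule on $[-1, 1]$ onto the variable interval $[a, x_i]$. A small self-contained check (our own illustration, with hypothetical names):

```python
import numpy as np

def volterra_quad(g, a, x_i, m=20):
    # \int_a^{x_i} g(t) dt via the map (16):
    # t = ((x_i - a)/2) s + ((x_i + a)/2) takes [-1, 1] onto [a, x_i]
    s, w = np.polynomial.legendre.leggauss(m)
    t = 0.5 * (x_i - a) * s + 0.5 * (x_i + a)
    return 0.5 * (x_i - a) * np.sum(w * g(t))   # Jacobian (x_i - a)/2

# check against \int_0^0.7 cos t dt = sin 0.7
val = volterra_quad(np.cos, 0.0, 0.7)
print(abs(val - np.sin(0.7)))  # agreement to near machine precision
```

Each collocation point $x_i$ gets its own mapped set of nodes, which is the only structural difference between the Volterra solver and the Fredholm solver of Section 3.1.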

The proposed method is also easy to use for the two-dimensional Volterra integral equation

$$u(x, y) - \lambda \int_c^y \int_a^x G(x, y, s, t, u(s, t))\, ds\, dt = f(x, y), \quad (18)$$

where $G(x, y, s, t, u(s, t))$ and $f(x, y)$ are given analytic functions. Similarly, we substitute (10) in (18) and collocate at the points $(x_i, y_j)$, $i, j = 0, 1, \ldots, n$. So we have

$$C^T \Psi(x_i, y_j) - \lambda \int_c^{y_j} \int_a^{x_i} G\big(x_i, y_j, s, t, C^T \Psi(s, t)\big)\, ds\, dt \approx f(x_i, y_j). \quad (19)$$

In the above equation, we apply the linear transformations

$$s = \frac{x_i - a}{2}\, \xi + \frac{x_i + a}{2}, \qquad t = \frac{y_j - c}{2}\, \eta + \frac{y_j + c}{2}. \quad (20)$$

Therefore, (19) is reduced to an integral equation over $[-1, 1]^2$ which can be solved easily:

$$C^T \Psi(x_i, y_j) - \lambda\, \frac{x_i - a}{2}\, \frac{y_j - c}{2} \int_{-1}^{1}\!\int_{-1}^{1} G\big(x_i, y_j, s(\xi), t(\eta), C^T \Psi(s(\xi), t(\eta))\big)\, d\xi\, d\eta \approx f(x_i, y_j). \quad (21)$$

Now, by applying a quadrature integration formula, we approximate the obtained integrals; then the nonlinear system of equations can be solved by Newton's iterative method to obtain the unknown vector [C.sup.T].

3.3. The Mixed Volterra-Fredholm Integral Equation. In general, Volterra-Fredholm integral equations can be classified into different types [6, 26]. We investigate only mixed Volterra-Fredholm integral equations of the following form:

$$u(x, y) - \lambda \int_c^y \int_a^b G(x, y, s, t, u(s, t))\, ds\, dt = f(x, y), \quad (22)$$

where $G(x, y, s, t, u(s, t))$ and $f(x, y)$ are given analytic functions. Similarly, we substitute (10) in (22) and collocate at the points $(x_i, y_j)$, $i, j = 0, 1, \ldots, n$. After that, we apply the linear transformations

$$s = \frac{b - a}{2}\, \xi + \frac{b + a}{2}, \qquad t = \frac{y_j - c}{2}\, \eta + \frac{y_j + c}{2}, \quad (23)$$

and apply the quadrature integration formula corresponding to $[-1, 1]$. Similar to the previous section, we get the following system of equations:

$$C^T \Psi(x_i, y_j) - \lambda\, \frac{b - a}{2}\, \frac{y_j - c}{2} \sum_{l=0}^{N} \sum_{k=0}^{N} w_k w_l\, G\big(x_i, y_j, s(\xi_k), t(\eta_l), C^T \Psi(s(\xi_k), t(\eta_l))\big) \approx f(x_i, y_j). \quad (24)$$

The obtained nonlinear system of equations can be solved by Newton's iterative method to obtain the unknown vector $C^T$.

4. Numerical Examples

In this section, some examples are given to show the validity and efficiency of the described method. The computations were done in Maple 13 with 100-digit precision. The number of digits is an important factor because the obtained systems are ill-conditioned. There are two ways to improve the results: increasing the number of digits and applying a regularization method. Since decreasing the number of digits leads to a severe loss of accuracy [54], we preferred to use high precision rather than complicated regularization methods. This is a trade-off between the added complexity of a regularization algorithm and the cost of higher-precision arithmetic.

In our practical experiments with 100 digits, even applying SVD, QR, and iterative refinement methods did not change the results of solving the linear systems. Although this was the case in our experiments, it may depend on the degree of ill-conditioning. Hence, we report the results of solving the obtained nonlinear systems by Newton's iteration without applying any regularization method. In this study, the criterion of accuracy is the infinity norm of the error function.

Example 1. Consider the following Volterra integral equation [63]:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], (25)

where $(x, y) \in [0, 1]^2$ and the exact solution is $u(x, y) = x \sin y$. The errors for some radial basis functions and different values of $n$ are given in Table 1.

As mentioned, the interpolation error is $e = O(\lambda^{c/h})$, where $h$ is the mesh size, $c = 1/\epsilon$, and $0 < \lambda < 1$ is a constant [50]. Although this result was proved for an interpolant [50] and here we deal with a nonlinear integral equation, since we apply a collocation method, the errors of the approximation and of the interpolation are nearly the same.

Now we investigate how $n$ affects the error. Note that $e$ is directly related to $n$ because $n$ is inversely related to the mesh size:

$$\frac{1}{h} \propto n \implies e \propto \lambda^{cn}. \quad (26)$$

So, we expect that increasing $n$ decreases $e$ exponentially [44, 50]. Figure 1 shows how $n$ affects the error, and Figure 2 shows the effect of $c$. We must note that the obtained system is nonlinear; therefore, although we expect an exponential trend, the nonlinearity adversely affects the rate of convergence, which differs from problem to problem.

Example 2. Consider the following Volterra nonlinear integral equation [20]:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], (27)

with the exact solution $u(x, y) = x^2 + y^2$. The errors for some radial basis functions and different values of $n$ are given in Table 2. The effect of $n$ on the error is shown in Figure 1.

Example 3. Consider the following Volterra-Fredholm integral equation of Urysohn type [24]:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], (28)

where $(x, y) \in [0, 1]^2$ and the exact solution is $u(x, y) = -\ln\big(1 + xy/(1 + y^2)\big)$. The errors for some radial basis functions and different values of $n$ are given in Table 3. The exponential effect of $n$ on the error is shown in Figure 1.

So far, the errors have been computed in the infinity norm, as shown in Tables 1, 2, and 3. Now, we compute the root mean square residual error by the formula

$$\sqrt{\frac{1}{N} \sum_{i=1}^{N} \delta^2(x_i, y_i)}, \quad (29)$$

where

$$\delta(x, y) = \tilde{u}(x, y) - \lambda \int\!\!\int G\big(x, y, s, t, \tilde{u}(s, t)\big)\, ds\, dt - f(x, y), \quad (30)$$

such that $\tilde{u}$ is the approximate solution, $N$ is a large number, and the $(x_i, y_i)$ are points uniformly distributed over the domain. This criterion shows the tolerance of the error over the solution region.

The results for Examples 1, 2, and 3 are reported in Table 4 for the Gaussian radial basis function with N = 400. In fact, if the exact solution is known, we can compute the absolute maximum of the error, that is, the infinity norm of the error function; otherwise, we must compute the root mean square residual error defined in (29). The comparison between Tables 1, 2, 3, and 4 confirms the strong correlation between the absolute maximum of the errors and the root mean square residual errors. So, even without the exact solution, we are still able to estimate the approximation error.
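A one-dimensional analogue of the residual criterion (29)-(30) can be sketched as follows (our own illustration, using a linear test problem with a known exact solution rather than any example from the paper):

```python
import numpy as np

# Test problem (ours): u(x) - \int_0^1 x*t*u(t) dt = 2x/3, exact u(x) = x.
# We measure the residual of a slightly perturbed "approximate solution".
u_tilde = lambda x: x + 1e-4 * np.sin(10.0 * x)
f = lambda x: 2.0 * x / 3.0

tq, wq = np.polynomial.legendre.leggauss(30)
tq, wq = 0.5 * (tq + 1.0), 0.5 * wq              # quadrature on [0, 1]

def delta(x):
    # delta(x) = u~(x) - \int_0^1 G(x, t, u~(t)) dt - f(x), per (30)
    return u_tilde(x) - x * np.sum(wq * tq * u_tilde(tq)) - f(x)

N = 400
xs = np.linspace(0.0, 1.0, N)
rms = np.sqrt(np.mean(np.array([delta(x) for x in xs]) ** 2))  # formula (29)
true_err = np.max(np.abs(u_tilde(xs) - xs))
print(rms, true_err)  # residual norm is on the same scale as the true error
```

This is exactly what makes (29) useful in practice: it needs only the computed approximation and the data of the equation, not the exact solution.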

5. Conclusion

First, we point out some supplementary techniques that can be used to improve the results.

Supplementary Techniques

Quadrature Integration Methods. We applied the Gauss-Lobatto quadrature on the interval $[a, b]$, whose nodes include the endpoints of the interval, so that the endpoints coincide with collocation points. Since $x_0 = a$ and $x_n = b$ are Gauss-Lobatto nodes, the approximation on the boundary is improved and the results there are stable.

Regularization. We suggest applying regularization methods when solving the resulting linear and, particularly, nonlinear systems. Since there is no guarantee that Newton's method converges, using regularization methods such as Tikhonov or Landweber is definitely recommended for ill-conditioned nonlinear systems [62].

Partitioning. As mentioned, the systems obtained from the collocation points are ill-conditioned, and the ill-conditioning worsens as the number of collocation points increases. Therefore, to avoid it, we can partition the problem domain into smaller regions. This technique yields smaller systems that are easier to solve. Numerical experiments show that it can be effective when it is not possible to improve the computational tools.

Finding accurate solutions of two-dimensional integral equations is usually difficult. In this work, linear and nonlinear Fredholm, Volterra, and mixed Volterra-Fredholm integral equations of the second kind were solved by the radial basis function (RBF) method. The illustrative examples confirm an exponential convergence rate for integral equations, similar to the convergence rates reported in [53] for solving partial and ordinary differential equations with the RBF method.

http://dx.doi.org/10.1155/2014/710437

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The work described in this paper was supported by the National Basic Research Program of China (973 Project no. 2010CB832702), the National Science Funds for Distinguished Young Scholars of China (1077403), NSFC Funds (no. 11372097), and the 111 Project under Grant B12032. The authors also greatly appreciate the precious time of Professor Mahdi Dehghan (from Amirkabir University, Iran), who helped them generously.

References

[1] K. E. Atkinson, The Numerical Solution of Integral Equations of the Second Kind, Cambridge University Press, Cambridge, UK, 1997.

[2] A. J. Jerri, Introduction to Integral Equations with Applications, John Wiley & Sons, New York, NY, USA, 1999.

[3] A. D. Polyanin and A. V. Manzhirov, Handbook of Integral Equations, Chapman & Hall/CRC, 2008.

[4] A.-M. Wazwaz, A First Course in Integral Equations, World Scientific, Singapore, 1997.

[5] S. Abbasbandy, "Application of He's homotopy perturbation method to functional integral equations," Chaos, Solitons & Fractals, vol. 31, no. 5, pp. 1243-1247, 2007.

[6] S. A. Yousefi, A. Lotfi, and M. Dehghan, "He's variational iteration method for solving nonlinear mixed Volterra-Fredholm integral equations," Computers & Mathematics with Applications, vol. 58, no. 11-12, pp. 2172-2176, 2009.

[7] S. A. Belbas, "A new method for optimal control of Volterra integral equations," Applied Mathematics and Computation, vol. 189, no. 2, pp. 1902-1915, 2007.

[8] Y. Mahmoudi, "Wavelet Galerkin method for numerical solution of nonlinear integral equation," Applied Mathematics and Computation, vol. 167, no. 2, pp. 1119-1129, 2005.

[9] K. Maleknejad, R. Mollapourasl, and M. Alizadeh, "Numerical solution of Volterra type integral equation of the first kind with wavelet basis," Applied Mathematics and Computation, vol. 194, no. 2, pp. 400-405, 2007.

[10] G. Vainikko, A. Kivinukk, and J. Lippus, "Fast solvers of integral equations of the second kind: wavelet methods," Journal of Complexity, vol. 21, no. 2, pp. 243-273, 2005.

[11] S. Yousefi and M. Razzaghi, "Legendre wavelets method for the nonlinear Volterra-Fredholm integral equations," Mathematics and Computers in Simulation, vol. 70, no. 1, pp. 1-8, 2005.

[12] A. Golbabai, M. Mammadov, and S. Seifollahi, "Solving a system of nonlinear integral equations by an RBF network," Computers & Mathematics with Applications, vol. 57, no. 10, pp. 1651-1658, 2009.

[13] M. H. Kalos and P. A. Whitlock, Monte Carlo Methods, Wiley-Blackwell, Weinheim, Germany, 2nd edition, 2004.

[14] K. Maleknejad, M. Shahrezaee, and H. Khatami, "Numerical solution of integral equations system of the second kind by block-pulse functions," Applied Mathematics and Computation, vol. 166, no. 1, pp. 15-24, 2005.

[15] Z. Avazzadeh, M. Heydari, and G. B. Loghmani, "Numerical solution of Fredholm integral equations of the second kind by using integral mean value theorem," Applied Mathematical Modelling, vol. 35, no. 5, pp. 2374-2383, 2011.

[16] K. Maleknejad and H. Derili, "Numerical solution of integral equations by using combination of spline-collocation method and Lagrange interpolation," Applied Mathematics and Computation, vol. 175, no. 2, pp. 1235-1244, 2006.

[17] E. Mammen and K. Yu, "Nonparametric estimation of noisy integral equations of the second kind," Journal of the Korean Statistical Society, vol. 38, no. 2, pp. 99-110, 2009.

[18] H. K. Pathak, M. S. Khan, and R. Tiwari, "A common fixed point theorem and its application to nonlinear integral equations," Computers & Mathematics with Applications, vol. 53, no. 6, pp. 961-971, 2007.

[19] S. A. Belbas, "Optimal control of Volterra integral equations in two independent variables," Applied Mathematics and Computation, vol. 202, no. 2, pp. 647-665, 2008.

[20] H. Guoqiang, K. Hayami, K. Sugihara, and W. Jiong, "Extrapolation method of iterated collocation solution for two-dimensional nonlinear Volterra integral equations," Applied Mathematics and Computation, vol. 112, no. 1, pp. 49-61, 2000.

[21] G. Han and J. Wang, "Extrapolation of Nystrom solution for two dimensional nonlinear Fredholm integral equations," Journal of Computational and Applied Mathematics, vol. 134, no. 1-2, pp. 259-268, 2001.

[22] M. Heydari, Z. Avazzadeh, H. Navabpour, and G. B. Loghmani, "Numerical solution of Fredholm integral equations of the second kind by using integral mean value theorem II. High dimensional problems," Applied Mathematical Modelling, vol. 37, no. 1-2, pp. 432-442, 2013.

[23] K. Maleknejad, S. Sohrabi, and B. Baranji, "Application of 2D-BPFs to nonlinear integral equations," Communications in Nonlinear Science and Numerical Simulation, vol. 15, no. 3, pp. 527-535, 2010.

[24] A. M. Wazwaz, "A reliable treatment for mixed Volterra-Fredholm integral equations," Applied Mathematics and Computation, vol. 127, no. 2-3, pp. 405-414, 2002.

[25] M. A. Abdou, K. I. Mohamed, and A. S. Ismail, "On the numerical solutions of Fredholm-Volterra integral equation," Applied Mathematics and Computation, vol. 146, no. 2-3, pp. 713-728, 2003.

[26] A. A. Badr, "Numerical solution of Fredholm-Volterra integral equation in one dimension with time dependent," Applied Mathematics and Computation, vol. 167, no. 2, pp. 1156-1161, 2005.

[27] A. Yildirim, "Homotopy perturbation method for the mixed Volterra-Fredholm integral equations," Chaos, Solitons and Fractals, vol. 42, no. 5, pp. 2760-2764, 2009.

[28] R. L. Hardy, "Multiquadric equations of topography and other irregular surfaces," Journal of Geophysical Research, vol. 76, pp. 1905-1915, 1971.

[29] M. D. Buhmann, "Multivariate interpolation in odd-dimensional Euclidean spaces using multiquadrics," Constructive Approximation, vol. 6, no. 1, pp. 21-34, 1990.

[30] M. D. Buhmann and C. A. Micchelli, "Multiquadric interpolation improved," Computers & Mathematics with Applications, vol. 24, no. 12, pp. 21-25, 1992.

[31] W. R. Madych and S. A. Nelson, "Multivariate interpolation and conditionally positive definite functions, II," Mathematics of Computation, vol. 54, no. 189, pp. 211-230, 1990.

[32] W. R. Madych and S. A. Nelson, "Multivariate interpolation and conditionally positive definite functions," Approximation Theory and its Applications, vol. 4, no. 4, pp. 77-89, 1988.

[33] C. A. Micchelli, "Interpolation of scattered data: distance matrices and conditionally positive definite functions," Constructive Approximation, vol. 2, no. 1, pp. 11-22, 1986.

[34] G. B. Wright, Radial basis function interpolation: numerical and analytical developments [Ph.D. Dissertation], University of Colorado, 2003.

[35] E. J. Kansa, "Multiquadrics--a scattered data approximation scheme with applications to computational fluid-dynamics--II solutions to parabolic, hyperbolic and elliptic partial differential equations," Computers & Mathematics with Applications, vol. 19, no. 8-9, pp. 147-161, 1990.

[36] A. Makroglou, "Radial basis functions in the numerical solution of Fredholm integral and integro-differential equations," in Advances in Computer Methods for Partial Differential Equations, VII, R. Vichnevetsky, D. Knight, and G. Richter, Eds., pp. 478-484, Karen Hahn Imacs, New Brunswick, NJ, USA, 1992.

[37] E. A. Galperin and E. J. Kansa, "Application of global optimization and radial basis functions to numerical solutions of weakly singular Volterra integral equations," Computers & Mathematics with Applications, vol. 43, no. 3-5, pp. 491-499, 2002.

[38] A. Alipanah and M. Dehghan, "Numerical solution of the nonlinear Fredholm integral equations by positive definite functions," Applied Mathematics and Computation, vol. 190, no. 2, pp. 1754-1761, 2007.

[39] A. Alipanah and S. Esmaeili, "Numerical solution of the two-dimensional Fredholm integral equations using Gaussian radial basis function," Journal of Computational and Applied Mathematics, vol. 235, no. 18, pp. 5342-5347, 2011.

[40] Z. Avazzadeh, M. Heydari, and G. B. Loghmani, "A comparison between solving two dimensional integral equations by the traditional collocation method and radial basis functions," Applied Mathematical Sciences, vol. 5, no. 21-24, pp. 1145-1152, 2011.

[41] Z. Avazzadeh, M. Heydari, W. Chen, and G. Loghmani, "Smooth solution of partial integro-differential equations using radial basis functions," The Journal of Applied Analysis and Computation, vol. 4, no. 2, pp. 115-127, 2014.

[42] Z. Avazzadeh, Z. Beygi Rizi, F. M. Maalek Ghaini, and G. B. Loghmani, "A semi-discrete scheme for solving nonlinear hyperbolic-type partial integro-differential equations using radial basis functions," Journal of Mathematical Physics, vol. 52, no. 6, Article ID 063520, 15 pages, 2011.

[43] Z. Avazzadeh, Z. Beygi Rizi, F. M. Maalek Ghaini, and G. B. Loghmani, "A numerical solution of nonlinear parabolic-type Volterra partial integro-differential equations using radial basis functions," Engineering Analysis with Boundary Elements, vol. 36, no. 5, pp. 881-893, 2012.

[44] M. D. Buhmann, Radial Basis Functions, Cambridge University Press, 2003.

[45] M. D. Buhmann, Multivariable interpolation using radial basis functions [Ph.D. Dissertation], University of Cambridge, 1989.

[46] W. Cheney and W. Light, A Course in Approximation Theory, William Allan, New York, NY, USA, 1999.

[47] I. J. Schoenberg, "Metric spaces and completely monotone functions," Annals of Mathematics, vol. 39, no. 4, pp. 811-841, 1938.

[48] T. A. Driscoll and B. Fornberg, "Interpolation in the limit of increasingly flat radial basis functions," Computers & Mathematics with Applications, vol. 43, no. 3-5, pp. 413-422, 2002.

[49] B. Fornberg, G. Wright, and E. Larsson, "Some observations regarding interpolants in the limit of flat radial basis functions," Computers & Mathematics with Applications, vol. 47, no. 1, pp. 37-55, 2004.

[50] W. R. Madych, "Miscellaneous error bounds for multiquadric and related interpolators," Computers & Mathematics with Applications, vol. 24, no. 12, pp. 121-138, 1992.

[51] J. Li, A. H.-D. Cheng, and C.-S. Chen, "A comparison of efficiency and error convergence of multiquadric collocation method and finite element method," Engineering Analysis with Boundary Elements, vol. 27, no. 3, pp. 251-257, 2003.

[52] R. Schaback, "Error estimates and condition numbers for radial basis function interpolation," Advances in Computational Mathematics, vol. 3, no. 3, pp. 251-264, 1995.

[53] A. H. Cheng, M. A. Golberg, E. J. Kansa, and G. Zammito, "Exponential convergence and H-c multiquadric collocation method for partial differential equations," Numerical Methods for Partial Differential Equations, vol. 19, no. 5, pp. 571-594, 2003.

[54] C.-S. Huang, C.-F. Lee, and A. H.-D. Cheng, "Error estimate, optimal shape factor, and high precision computation of multiquadric collocation method," Engineering Analysis with Boundary Elements, vol. 31, no. 7, pp. 614-623, 2007.

[55] Z.-C. Li and H.-T. Huang, "Study on effective condition number for collocation methods," Engineering Analysis with Boundary Elements, vol. 32, no. 10, pp. 839-848, 2008.

[56] C. H. Tsai, J. Kolibal, and M. Li, "The golden section search algorithm for finding a good shape parameter for meshless collocation methods," Engineering Analysis with Boundary Elements, vol. 34, no. 8, pp. 738-746, 2010.

[57] J. G. Wang and G. R. Liu, "On the optimal shape parameters of radial basis functions used for 2-D meshless methods," Computer Methods in Applied Mechanics and Engineering, vol. 191, no. 23-24, pp. 2611-2630, 2002.

[58] B. J. Baxter, "Preconditioned conjugate gradients, radial basis functions, and Toeplitz matrices," Computers & Mathematics with Applications, vol. 43, no. 3-5, pp. 305-318, 2002.

[59] G. Casciola, L. B. Montefusco, and S. Morigi, "The regularizing properties of anisotropic radial basis functions," Applied Mathematics and Computation, vol. 190, no. 2, pp. 1050-1062, 2007.

[60] A. Karageorghis, C. S. Chen, and Y. S. Smyrlis, "Matrix decomposition RBF algorithm for solving 3D elliptic problems," Engineering Analysis with Boundary Elements, vol. 33, no. 12, pp. 1368-1373, 2009.

[61] P. K. Kythe and M. R. Schaferkotter, Handbook of Computational Methods for Integration, Chapman & Hall/CRC Press, 2005.

[62] B. Kaltenbacher, A. Neubauer, and O. Scherzer, Iterative Regularization Methods for Nonlinear Ill-Posed Problems, Walter de Gruyter, 2008.

[63] A. Tari, M. Y. Rahimi, S. Shahmorad, and F. Talati, "Solving a class of two-dimensional linear and nonlinear Volterra integral equations by the differential transform method," Journal of Computational and Applied Mathematics, vol. 228, no. 1, pp. 70-76, 2009.

Zakieh Avazzadeh, (1) Mohammad Heydari, (2) Wen Chen, (1) and G. B. Loghmani (3)

(1) State Key Laboratory of Hydrology-Water Resources and Hydraulic Engineering, College of Mechanics and Materials, Hohai University, Nanjing 210098, China

(2) Young Researchers and Elite Club, Islamic Azad University, Ashkezar Branch, Ashkezar 8941613695, Iran

(3) Department of Mathematics, Yazd University, P.O. Box 89195-741, Yazd, Iran

Received 4 July 2014; Revised 31 October 2014; Accepted 1 November 2014; Published 10 December 2014

```
TABLE 1: Numerical results of different RBFs for Example 1. The roots
of the Legendre polynomial are chosen as center points.

n         GA            MQ            IQ

4    7.6 x 10^-3   9.0 x 10^-3   7.5 x 10^-3
9    2.1 x 10^-4   2.0 x 10^-4   1.6 x 10^-4
16   1.6 x 10^-5   1.1 x 10^-5   1.4 x 10^-5
25   3.0 x 10^-7   7.0 x 10^-7   2.6 x 10^-7
36   6.6 x 10^-8   4.9 x 10^-8   6.2 x 10^-8
49   2.5 x 10^-10  3.7 x 10^-10  2.8 x 10^-10

TABLE 2: Numerical results of different RBFs for Example 2. The roots
of the Legendre polynomial are chosen as center points.

n         GA            MQ            IQ

4    2.2 x 10^-2   2.6 x 10^-2   2.2 x 10^-2
9    2.5 x 10^-4   1.2 x 10^-4   5.2 x 10^-4
16   3.2 x 10^-7   3.0 x 10^-7   4.4 x 10^-7
25   7.1 x 10^-8   7.4 x 10^-8   8.0 x 10^-8
36   6.0 x 10^-11  5.1 x 10^-11  5.9 x 10^-11
49   1.5 x 10^-11  2.1 x 10^-11  2.0 x 10^-11

TABLE 3: Numerical results of different RBFs for Example 3. The roots
of the Legendre polynomial are chosen as center points.

n         GA            MQ            IQ

4    1.2 x 10^-2   1.7 x 10^-2   1.2 x 10^-2
9    2.1 x 10^-3   2.4 x 10^-4   2.4 x 10^-3
16   8.0 x 10^-4   7.7 x 10^-4   6.5 x 10^-4
25   1.0 x 10^-4   3.1 x 10^-4   3.4 x 10^-4
36   1.6 x 10^-5   2.8 x 10^-5   1.4 x 10^-5
49   5.2 x 10^-6   4.9 x 10^-6   7.2 x 10^-6

TABLE 4: Root mean square residual errors computed by (29). The
results for Examples 1, 2, and 3 are reported for the Gaussian radial
basis function and N = 400. Comparison between Tables 1, 2, and 3 and
the following results confirms the strong correlation between the
absolute maximum of the error and the root mean square residual
error. In brief, estimating the approximation error via (29) is
possible even when the exact solution is not given.

n      Example 1     Example 2     Example 3

4    1.0 x 10^-3   9.3 x 10^-3   1.2 x 10^-3
9    1.8 x 10^-5   8.9 x 10^-5   2.1 x 10^-4
16   1.8 x 10^-5   1.4 x 10^-7   8.3 x 10^-4
25   1.4 x 10^-6   3.1 x 10^-8   7.5 x 10^-6
36   8.5 x 10^-8   2.0 x 10^-11  5.9 x 10^-6
49   6.8 x 10^-9   5.2 x 10^-12  8.2 x 10^-7
```