# Analytic-Numerical Solution of Random Parabolic Models: A Mean Square Fourier Transform Approach.

1 Introduction

Random partial differential equation initial value problems (IVPs) have been considered an emergent mathematical subject since the celebrated surveys edited by Albert T. Bharucha-Reid. Diffusion models with uncertainties arise frequently due to material impurities as well as measurement errors. The dispersion of pollutants in the presence of impurities is another situation where uncertainty is relevant in diffusion problems. In the evaluation of microwave heating processes, the time-dependent model is more appropriate to avoid misleading results due to the complexity of the field distribution within the oven and the variation of the dielectric properties of the material with temperature, moisture content, density and other parameters [15,20,27].

Random heat transfer models have been studied in  using a random perturbation method, in  using finite methods, and in  by applying finite difference methods. The random linear advection equation has been treated in , and the statistical moments of the solution of the random Burgers-Riemann problem and of the random transport differential equation are studied in  and [21,22], respectively. Stochastic heat transfer problems modeled in a different way to the one considered here, based upon Brownian motion and Itô calculus, may be found in . As indicated in , numerous problems, such as continuum mechanics systems, can be modeled by partial differential equations with random coefficients or random operators and stochastic initial and/or boundary conditions. One of the main difficulties in dealing with random partial differential equations is that the search for solutions and their analysis has to be carried out for every realization of the random parameters of the model equation. In this respect one usually faces arduous problems when trying to apply the usual and well-known numerical techniques of the deterministic case. Consequently, it is very attractive to look for approximate analytic solutions. Also in , the authors point out that in many complex models understanding the behavior of the system requires obtaining many realizations of the state equations, which necessitates performing simulations over a range of model parameter values. Because performing many simulations for complex partial differential equations (PDEs) is typically computationally expensive, methods have been developed to reduce the work. In , the authors propose interesting stabilization methods to overcome these drawbacks for the advection-diffusion-reaction equation. The aim of this paper is to progress in this direction: we propose the construction of analytic-numerical solutions for random parabolic-type models. To achieve this goal, we use a mean square Fourier transform approach.

Random constant coefficient parabolic models have recently been treated using a mean square (m.s.) approach based on an integral transform technique: in  using the random Laplace transform, and in [5,6] using several random Fourier transforms (trigonometric and exponential). In all these cases, the constant coefficient model allows one to obtain the exact m.s. solution of the random transformed differential problem, and the inverse integral transform captures the solution stochastic process (s.p.) of the original problem. For the random time-dependent case, capturing the solution s.p. of the original problem through the inverse integral transform involves unbounded random integrals, which makes the numerical evaluation of complicated random integrals advisable. This is a major contribution introduced here, where random numerical quadrature formulae are applied to approximate the solution s.p. of random parabolic problems obtained after using the random Fourier transform.

In this paper we solve the time dependent random parabolic problem

$$u_t(x,t) = a_2(t)\,u_{xx}(x,t) + a_1(t)\,u_x(x,t) + a_3(t)\,u(x,t), \quad -\infty < x < +\infty,\ t > 0, \qquad (1.1)$$

$$u(x,0) = f(x), \quad -\infty < x < +\infty, \qquad (1.2)$$

where $a_i(t) = a_i(t;\omega) : \,]0,+\infty[\,\times\,\Omega \to \mathbb{R}$, $1 \le i \le 3$, and $f(x) = f(x;\omega) : \mathbb{R}\times\Omega \to \mathbb{R}$ are s.p.'s, defined in a complete probability space $(\Omega,\mathcal{F},P)$, that satisfy certain hypotheses that will be specified later. To achieve this goal, we first establish some new results related to the so-called $L^p$-random calculus. Afterwards, we extend some classical quadrature formulae to the random context in order to compute reliable approximations of the mean and the variance of the solution s.p. $u(x,t)$, also termed random field, of the IVP given in (1.1)-(1.2). All our theoretical findings will be illustrated by means of several examples. The model (1.1)-(1.2) in the deterministic case was studied, using the Fourier transform, in .

The paper is organized as follows. Section 2 begins with some notation and adapted results that are introduced for the sake of clarity in the presentation. Some new auxiliary results that will be required throughout the paper are also established. In Section 2.1, the random Gauss-Hermite numerical method for the evaluation of random improper integrals is introduced and applied to a strategically placed example that will be used later in Section 3, where problem (1.1)-(1.2) is first solved analytically using the random Fourier transform. Then, using the random Gauss-Hermite quadrature introduced in Section 2.1, the solution of problem (1.1)-(1.2) is numerically approximated. Numerical examples illustrating the theoretical results are included in Section 3.

2 Preliminaries

This section introduces some preliminaries, definitions and results that will be required throughout this paper. Further details about these preliminaries can be found in [1,28]. Let $(\Omega,\mathcal{F},P)$ be a complete probability space. A complex random variable (r.v.) $x : \Omega \to \mathbb{C}$ is said to be of order $p \ge 1$ if $E[|x|^p] < +\infty$, where $E[\cdot]$ denotes the expectation operator. It can be shown that the set of all r.v.'s of order $p$,

$$L^{RV}_p(\Omega) = \left\{x : \Omega \to \mathbb{C} \ :\ E[|x|^p] < +\infty\right\}, \quad 1 \le p < +\infty, \qquad (2.1)$$

endowed with the norm

$$\|x\|_{p,RV} = \left(E[|x|^p]\right)^{1/p} < +\infty, \qquad (2.2)$$

is a Banach space [1, p.9]. The convergence inferred by the $\|\cdot\|_{p,RV}$-norm is usually referred to as $p$-th mean convergence. More precisely, a sequence of r.v.'s $\{x_n : n \ge 0\}$ in $L^{RV}_p(\Omega)$ is $p$-th mean convergent to the r.v. $x \in L^{RV}_p(\Omega)$, denoted $x_n \xrightarrow{p} x$, if and only if $\lim_{n\to\infty} \|x_n - x\|_{p,RV} = 0$. The cases $p = 2$ and $p = 4$, corresponding to the so-called mean square and mean fourth convergence, respectively, play a major role in the study of random differential equations [10,30]. This key role will be manifested throughout this paper as well.

Below, we state some inequalities for r.v.'s belonging to the space $(L^{RV}_p(\Omega), \|\cdot\|_{p,RV})$ that will be required subsequently. In accordance with Liapunov's inequality

$$\left(E[|x|^p]\right)^{1/p} \le \left(E[|x|^q]\right)^{1/q}, \quad 1 \le p < q < +\infty, \qquad (2.3)$$

one gets

$$L^{RV}_q(\Omega) \subset L^{RV}_p(\Omega), \quad 1 \le p < q < +\infty. \qquad (2.4)$$
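Liapunov's inequality (2.3), and hence the inclusion (2.4), can be illustrated by Monte Carlo sampling. The following Python sketch (the exponential r.v. is an arbitrary illustrative choice) estimates the norms $\|x\|_{p,RV} = (E[|x|^p])^{1/p}$ for $p = 1, 2, 4$ and checks that they are nondecreasing in $p$:

```python
import numpy as np

rng = np.random.default_rng(0)
# an illustrative r.v. with finite absolute moments of every order
x = rng.exponential(scale=2.0, size=200_000)

def p_norm(sample, p):
    """Sample estimate of ||x||_{p,RV} = (E[|x|^p])^(1/p)."""
    return np.mean(np.abs(sample) ** p) ** (1.0 / p)

norms = [p_norm(x, p) for p in (1, 2, 4)]
# Liapunov's inequality (2.3): p -> ||x||_{p,RV} is nondecreasing
assert norms[0] <= norms[1] <= norms[2]
```

For this choice the exact values are $\|x\|_{1,RV} = 2$, $\|x\|_{2,RV} = \sqrt{8}$ and $\|x\|_{4,RV} = 384^{1/4}$, so the monotonicity is strict.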

In dealing with random differential equations, a primary goal is to try to formulate general results in the space $(L^{RV}_2(\Omega), \|\cdot\|_{2,RV})$. Although the biggest space, corresponding to $p = 1$, has its own mathematical interest, when dealing with random differential equations the reference space is $L^{RV}_2(\Omega)$, because in practice most r.v.'s have finite variance. However, the legitimation of some mean square operational rules often requires hypotheses involving information related to $L^{RV}_4(\Omega)$. For example, it can be seen that if $x, y \in L^{RV}_4(\Omega)$, then $xy \in L^{RV}_2(\Omega)$ and

$$\|xy\|_{2,RV} \le \|x\|_{4,RV}\,\|y\|_{4,RV}.$$

Nevertheless, in general, this property does not hold if $x$ or $y$ merely belongs to $L^{RV}_2(\Omega)$. This is a consequence of the fact that the $\|\cdot\|_{p,RV}$-norm is not submultiplicative, i.e., in general $\|xy\|_{p,RV} \not\le \|x\|_{p,RV}\,\|y\|_{p,RV}$ (see ). In the particular case that $x, y \in L^{RV}_2(\Omega)$ are independent r.v.'s, the above relationship becomes an identity, i.e.,

$$\|xy\|_{2,RV} = \|x\|_{2,RV}\,\|y\|_{2,RV}. \qquad (2.5)$$

This result is a consequence of the following Proposition 1 together with the definition of the $\|\cdot\|_{p,RV}$-norm in terms of the expectation operator (see (2.2)).

Proposition 1. [19, p.92] Let $f_1, f_2 : \mathbb{R} \to \mathbb{R}$ be measurable transformations and $x_1, x_2 : \Omega \to \mathbb{R}$ be independent r.v.'s. Then $f_1(x_1)$ and $f_2(x_2)$ are independent r.v.'s and

$$E[f_1(x_1)\,f_2(x_2)] = E[f_1(x_1)]\,E[f_2(x_2)],$$

provided the above expectations exist.
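Proposition 1 can likewise be checked by simulation. In the sketch below the choices $f_1 = \sin$, $f_2(x) = x^2$ and the uniform/normal laws are illustrative assumptions; the sample mean of the product is compared with the product of the sample means:

```python
import numpy as np

rng = np.random.default_rng(1)
x1 = rng.uniform(0.0, 1.0, 500_000)   # x1 and x2 are independent r.v.'s
x2 = rng.normal(0.0, 1.0, 500_000)

f1, f2 = np.sin(x1), x2 ** 2          # measurable transformations of x1 and x2

lhs = np.mean(f1 * f2)                # estimates E[f1(x1) f2(x2)]
rhs = np.mean(f1) * np.mean(f2)       # estimates E[f1(x1)] E[f2(x2)]
assert abs(lhs - rhs) < 1e-2          # equal up to Monte Carlo error
```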

A set $\{x(v) : v \in V \subset \mathbb{R}\}$ of r.v.'s in $L^{RV}_p(\Omega)$, indexed by $v$, is said to be a s.p. of order $p$. As usual, the definitions of continuity, differentiability and integrability of a s.p. of order $p$ can be established in terms of the $\|\cdot\|_{p,RV}$-norm. For instance, a s.p. of order $p$, $x(v)$, is said to be continuous at $v \in V$ ($\|\cdot\|_{p,RV}$-continuous at $v \in V$, for short) if

$$\lim_{\Delta v \to 0} \|x(v + \Delta v) - x(v)\|_{p,RV} = 0.$$

As a direct consequence of Liapunov's inequality (2.3), one deduces that if $x(v)$ is a $\|\cdot\|_{q,RV}$-continuous (differentiable or integrable) s.p., then $x(v)$ is a $\|\cdot\|_{p,RV}$-continuous (differentiable or integrable) s.p., where $q \ge p \ge 1$. However, the reverse is not true in general.

In addition to the definition of $\|\cdot\|_{p,RV}$-integrability of a s.p. $x(v)$ defined in the space $L^{RV}_p(\Omega)$, we will use the concept of a $\|\cdot\|_{p,RV}$-absolutely integrable s.p. Namely, a s.p. $x(v) \in L^{RV}_p(\Omega)$ is said to be $\|\cdot\|_{p,RV}$-absolutely integrable if the deterministic integral

$$\int_{-\infty}^{+\infty} \|x(v)\|_{p,RV}\,dv \qquad (2.6)$$

exists and is finite. If $x(v) \in L^{RV}_p(\Omega)$ is a $\|\cdot\|_{p,RV}$-absolutely integrable s.p., then its random exponential $\|\cdot\|_{p,RV}$-Fourier transform is defined by

$$X(\xi) := \mathcal{F}[x(v)](\xi) = \int_{-\infty}^{+\infty} x(v)\,\exp(-i\xi v)\,dv, \quad \xi \in \mathbb{R}, \quad i = \sqrt{-1},$$

where this random integral defines a s.p. $\{X(\xi) : \xi \in \mathbb{R}\}$ in the Banach space $(L^{RV}_p(\Omega), \|\cdot\|_{p,RV})$. If $x(v)$ is a $\|\cdot\|_{p,RV}$-absolutely integrable s.p., it clearly admits a random $\|\cdot\|_{p,RV}$-Fourier transform, since

$$\|X(\xi)\|_{p,RV} \le \int_{-\infty}^{+\infty} \|x(v)\exp(-i\xi v)\|_{p,RV}\,dv = \int_{-\infty}^{+\infty} \|x(v)\|_{p,RV}\,dv < +\infty,$$

where we have used that $|\exp(-i\xi v)| = 1$ and that $x(v)$ is a $\|\cdot\|_{p,RV}$-absolutely integrable s.p., hence by (2.6) the last integral is finite. In [5, p.5926], the extension of the following well-known properties of the Fourier transform

$$\mathcal{F}[x'(v)](\xi) = i\xi\,\mathcal{F}[x(v)](\xi), \qquad \mathcal{F}[x''(v)](\xi) = -\xi^2\,\mathcal{F}[x(v)](\xi), \qquad (2.7)$$

to the random framework is proved, provided that the involved random $\|\cdot\|_{p,RV}$-derivatives exist and $x(v)$, $x'(v)$ and $x''(v)$ are $\|\cdot\|_{p,RV}$-absolutely integrable s.p.'s. These properties will be used later.
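The first differentiation property in (2.7) can be verified numerically on a sample path. The sketch below checks $\mathcal{F}[x'(v)](\xi) = i\xi\,\mathcal{F}[x(v)](\xi)$ for the (deterministic, hence degenerate) path $x(v) = e^{-v^2}$, discretizing the transform on a truncated grid:

```python
import numpy as np

v = np.linspace(-10.0, 10.0, 20_001)
dv = v[1] - v[0]
x = np.exp(-v ** 2)               # a smooth, absolutely integrable sample path
xp = -2.0 * v * np.exp(-v ** 2)   # its derivative x'(v)

def fourier(g, xi):
    """Truncated Riemann-sum approximation of ∫ g(v) exp(-i xi v) dv."""
    return np.sum(g * np.exp(-1j * xi * v)) * dv

for xi in (0.5, 1.0, 2.0):
    # F[x'](xi) = i xi F[x](xi), property (2.7)
    assert abs(fourier(xp, xi) - 1j * xi * fourier(x, xi)) < 1e-6
```

Since the path decays rapidly, the truncation to $[-10, 10]$ and the equally spaced sum introduce only negligible error here.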

In order to formalize our study, besides the above Banach space of complex r.v.'s having absolute moments of order $p$, $(L^{RV}_p(\Omega), \|\cdot\|_{p,RV})$, we will also need the Banach space $(L^{SP}_p(\mathbb{R}\times\Omega), \|\cdot\|_{p,SP})$, where

$$L^{SP}_p(\mathbb{R}\times\Omega) = \left\{ f : \mathbb{R}\times\Omega \to \mathbb{C} \ \text{s.p.} \ :\ \|f\|_{p,SP} < +\infty \right\}, \qquad (2.8)$$

and

$$\|f\|_{p,SP} = \int_{-\infty}^{+\infty} \|f(v)\|_{p,RV}\,dv, \quad 1 \le p < +\infty.$$

Notice that the elements of $L^{SP}_p(\mathbb{R}\times\Omega)$ are $\|\cdot\|_{p,RV}$-absolutely integrable s.p.'s (see (2.6)). Observe that if $f \in L^{SP}_p(\mathbb{R}\times\Omega)$, then the expectation $E[|f(v)|^p]$ exists and is finite for every fixed $v \in \mathbb{R}$ (otherwise the definition of the space $L^{SP}_p(\mathbb{R}\times\Omega)$ given in (2.8) would not make sense). Hence, for every fixed $v \in \mathbb{R}$, $f(v)$ is a r.v. of the space $L^{RV}_p(\Omega)$.

As usual in dealing with s.p.'s, for convenience the sample parameter $\omega \in \Omega$ will sometimes be suppressed depending on the context. Hence, an element $f$ of $L^{RV}_p(\Omega)$ or $L^{SP}_p(\mathbb{R}\times\Omega)$ will be denoted by $f(v)$ or $f(v;\omega)$ interchangeably throughout this paper.

Now, we recall an important class of r.v.'s that will be considered later. This class has been used in previous works where random differential equations are studied [5,7].

Definition 1. A real r.v. $a : \Omega \to \mathbb{R}$ defined on a probability space $(\Omega,\mathcal{F},P)$ is said to be of class C if

$$\exists\, M, H > 0 \ :\ E[|a|^m] \le M H^m < +\infty, \quad \forall m \ge 0. \qquad (2.9)$$

Remark 1. Condition (2.9) can be written in terms of Landau's symbol as

$$E[|a|^m] = O(H^m).$$

As demonstrated in , an important class of r.v.'s belonging to class C is that of bounded r.v.'s. Thus binomial, uniform, beta r.v.'s, etc. satisfy condition (2.9). Unbounded r.v.'s can be approximated using the truncation method [25, ch.V] instead of checking condition (2.9). This is particularly convenient because there exist families of r.v.'s for which a closed expression for their absolute statistical moments is not available.

Below, we establish an auxiliary result that will be used later. This result involves a class of s.p.'s that satisfy a natural generalization of condition (2.9).

Lemma 1. Let $h(\xi)$ be a complex deterministic function and let $\hat{a}(t)$ be a real s.p. such that

$$\exists\, M_{t,\hat{a}},\, H_{t,\hat{a}} > 0 \ :\ E\left[|\hat{a}(t)|^m\right] \le M_{t,\hat{a}}\,(H_{t,\hat{a}})^m < +\infty, \quad \forall m \ge 0, \qquad (2.10)$$

for every $t > 0$ fixed. Then,

$$\left\|\exp(h(\xi)\,\hat{a}(t))\right\|_{2,RV} \le \sqrt{M_{t,\hat{a}}}\,\exp\!\left(\mathrm{Re}(h(\xi))\,H_{t,\hat{a}}\right), \qquad (2.11)$$

where $\mathrm{Re}(\cdot)$ denotes the real part of a complex number.

Proof. On the one hand, it is important to point out that, following a reasoning analogous to the one exhibited in Section 3 of  and under condition (2.10), the exponential s.p. $\exp(h(\xi)\,\hat{a}(t))$ is well-defined for every $t > 0$ and $h(\xi)$ given. On the other hand, using the definition of the $p$-norm for $p = 2$ (see (2.2)), one gets

$$\left\|\exp(h(\xi)\,\hat{a}(t))\right\|_{2,RV} = \left(E\left[\left|\exp(h(\xi)\,\hat{a}(t))\right|^2\right]\right)^{1/2} = \left(E\left[\exp\!\left(2\,\mathrm{Re}(h(\xi))\,\hat{a}(t)\right)\right]\right)^{1/2} \le \sqrt{M_{t,\hat{a}}}\,\exp\!\left(\mathrm{Re}(h(\xi))\,H_{t,\hat{a}}\right),$$

where we have used that $|\exp(z)| = \exp(\mathrm{Re}(z))$ for every complex number $z$. This proves the result.

Now we apply the previous Lemma 1 to a particular case that will be required later, in Example 2.

Remark 2. Let $\hat{a}(t) = \{a\,t : t > 0\}$ be a s.p. such that $a$ is a r.v. of class C, i.e., satisfying condition (2.9). Let $\xi > 0$ and observe that applying Lemma 1 with $h(\xi) = -\xi^2$, one gets

$$\left\|\exp(-\xi^2\, t\, a)\right\|_{2,RV} \le \sqrt{M}\,\exp(-\xi^2 H t). \qquad (2.12)$$

Observe that the constant $H_{t,\hat{a}}$ that appears in (2.11) is now just $Ht$, for every $t > 0$ fixed.

2.1 Approximation of random improper integrals by the random Gauss-Hermite quadrature

We begin this section by extending to the random framework the practical Gauss-Hermite quadrature formulae for the evaluation of improper random integrals that appear in a natural way when using random integral transform methods.

For $f \in L^{SP}_2(\mathbb{R}\times\Omega)$, let us consider the integral

$$I = I[f] = \int_{-\infty}^{+\infty} f(\xi)\,\exp(-\xi^2)\,d\xi, \qquad (2.13)$$

which is a r.v. Since $0 < \exp(-\xi^2) \le 1$ for all $\xi \in \mathbb{R}$ and $f \in L^{SP}_2(\mathbb{R}\times\Omega)$ (see (2.8) with $p = 2$), one gets

$$\|I[f]\|_{2,RV} \le \int_{-\infty}^{+\infty} \|f(\xi)\|_{2,RV}\,\exp(-\xi^2)\,d\xi \le \int_{-\infty}^{+\infty} \|f(\xi)\|_{2,RV}\,d\xi = \|f\|_{2,SP} < +\infty.$$

Then $I[f]$ is well-defined. If we further assume that $f \in L^{SP}_2(\mathbb{R}\times\Omega)$ has continuous sample trajectories, i.e. $f(x)(\omega)$ is continuous with respect to $x \in \mathbb{R}$ for all $\omega \in \Omega$, then the r.v. (2.13) coincides, with probability 1, with the (deterministic) sample integrals

$$I(\omega) = I[f](\omega) = \int_{-\infty}^{+\infty} f(\xi;\omega)\,\exp(-\xi^2)\,d\xi, \quad \omega \in \Omega,$$

which are well-defined and thus convergent for all $\omega \in \Omega$ [28, Appendix I]. Then, taking advantage of the Gauss-Hermite quadrature formula of degree $N$ [13,14], we can consider the numerical approximation

$$I[f] \approx I^{G\text{-}H}_N[f] = \sum_{j=1}^{N} w_j\, f(\xi_{j,N}), \qquad (2.14)$$

where $\xi_{j,N}$, $1 \le j \le N$, are the roots of the deterministic Hermite polynomial $H_N$ of degree $N$, and $w_j = 2^{N-1}N!\sqrt{\pi}/\big(N^2\,[H_{N-1}(\xi_{j,N})]^2\big)$ are the corresponding Gauss-Hermite weights.
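The rule (2.14) is straightforward to realize with standard numerical libraries. A minimal Python sketch (using `numpy.polynomial.hermite.hermgauss` for the nodes $\xi_{j,N}$ and weights $w_j$) applies the quadrature to each realization of a simple illustrative random integrand $f(\xi) = \cos(a\xi)$, $a \sim \mathrm{Uniform}(0,1)$, for which the exact value $\int_{-\infty}^{+\infty}\cos(a\xi)\,e^{-\xi^2}\,d\xi = \sqrt{\pi}\,e^{-a^2/4}$ is known:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

N = 20
nodes, weights = hermgauss(N)   # roots of H_N and Gauss-Hermite weights

def gh_integral(f_at_nodes):
    """Rule (2.14): each row of f_at_nodes is one realization of f at the nodes."""
    return f_at_nodes @ weights

# deterministic sanity check: f ≡ 1 gives ∫ exp(-xi^2) dxi = sqrt(pi)
assert abs(gh_integral(np.ones(N)) - np.sqrt(np.pi)) < 1e-12

# random integrand f(xi) = cos(a xi), with a ~ Uniform(0, 1)
rng = np.random.default_rng(2)
a = rng.uniform(0.0, 1.0, size=10_000)
approx = gh_integral(np.cos(a[:, None] * nodes[None, :]))
exact = np.sqrt(np.pi) * np.exp(-a ** 2 / 4.0)
assert np.max(np.abs(approx - exact)) < 1e-10
```

Statistics of the random integral $I[f]$ (mean, variance) can then be estimated directly from the vector `approx` of realizations.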

Example 1. Let us consider the following random integral, whose interest will become apparent later:

$$I(x,t) = \int_{-\infty}^{+\infty} \exp\!\left(-\xi^2\left(t a_2 + \tfrac{1}{4}\right)\right)\cos(\xi(t a_1 + x))\,d\xi, \qquad (2.15)$$

for fixed $(x,t) \in \mathbb{R}\times(0,+\infty)$ and given real r.v.'s $a_1, a_2 : \Omega \to \mathbb{R}$ defined on a common probability space $(\Omega,\mathcal{F},P)$ satisfying certain properties to be specified later.

Note that the integrand of (2.15) can be brought into the form of the integrand of (2.13) by multiplying it by $\exp(\xi^2)\exp(-\xi^2)$, which yields the s.p. $f(\xi) := f(\xi)(x,t)$, that is,

$$f(\xi)(x,t) = \exp(\xi^2)\exp\!\left(-\xi^2\left(t a_2 + \tfrac{1}{4}\right)\right)\cos(\xi(t a_1 + x)) = \exp\!\left(\xi^2\left(\tfrac{3}{4} - t a_2\right)\right)\cos(\xi(t a_1 + x)). \qquad (2.16)$$

Assuming that $a_1$ and $a_2$ are such that the s.p. $f(x,t) \in L^{SP}_2(\mathbb{R}\times\Omega)$ and its sample trajectories $f(x,t;\omega)$ are continuous, then according to (2.14) we can consider the following numerical approximation of (2.15):

$$I(x,t) \approx I^{G\text{-}H}_N(x,t) = \sum_{j=1}^{N} w_j\,\exp\!\left(\xi_{j,N}^2\left(\tfrac{3}{4} - t a_2\right)\right)\cos(\xi_{j,N}(t a_1 + x)). \qquad (2.17)$$
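For one fixed realization of $a_1$, $a_2$ (the numerical values below are arbitrary illustrative choices), the approximation of $I(x,t)$ can be compared with the value obtained from the classical Gaussian integral $\int_{-\infty}^{+\infty} e^{-\beta\xi^2}\cos(b\xi)\,d\xi = \sqrt{\pi/\beta}\,e^{-b^2/(4\beta)}$, $\beta > 0$. A sketch:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

a1, a2 = 0.7, 0.4     # one fixed realization of the r.v.'s (illustrative values)
x, t = 0.2, 1.0

# Gauss-Hermite approximation of I(x, t): transformed integrand
# f(xi) = exp(xi^2 (3/4 - t a2)) cos(xi (t a1 + x)), weight exp(-xi^2)
nodes, w = hermgauss(40)
f = np.exp(nodes ** 2 * (0.75 - t * a2)) * np.cos(nodes * (t * a1 + x))
approx = w @ f

# reference value of (2.15) from the Gaussian integral formula
beta, b = t * a2 + 0.25, t * a1 + x
exact = np.sqrt(np.pi / beta) * np.exp(-b ** 2 / (4.0 * beta))
assert abs(approx - exact) < 1e-10
```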

3 Solving random parabolic problems

In this section we consider the initial value problem (1.1)-(1.2), where the time-dependent s.p.'s $a_i(t)$, $i = 1, 2, 3$, and the spatial s.p. $f(x)$ are assumed to satisfy the following conditions:

$$\{a_i(t) : t > 0\},\ 1 \le i \le 3,\ \text{and}\ \{f(x) : x \in \mathbb{R}\}\ \text{are mutually independent s.p.'s}, \qquad (3.1)$$

$$f \in L^{SP}_4(\mathbb{R}\times\Omega) \quad \text{and} \quad F(\xi) = \mathcal{F}[f(x)](\xi) \in L^{SP}_2(\mathbb{R}\times\Omega), \qquad (3.2)$$

$$a_i(t)\ \text{are}\ \|\cdot\|_{4,RV}\text{-continuous s.p.'s}, \quad \forall i : 1 \le i \le 3, \qquad (3.3)$$

and

$$\hat{a}_i(t) = \int_0^t a_i(s)\,ds, \quad 1 \le i \le 3, \qquad (3.4)$$

satisfy condition (2.10), i.e.,

$$\exists\, M_{t,\hat{a}_i},\, H_{t,\hat{a}_i} > 0 \ :\ E\left[|\hat{a}_i(t)|^m\right] \le M_{t,\hat{a}_i}\,(H_{t,\hat{a}_i})^m < +\infty, \quad \forall m \ge 0,\ 1 \le i \le 3,\ t > 0. \qquad (3.5)$$

Notice that since $\{a_i(t) : t \ge 0\}$, $1 \le i \le 3$, are $\|\cdot\|_{4,RV}$-continuous s.p.'s (see hypothesis (3.3)), by Lemma 3.16 of  the integral s.p.'s $\hat{a}_i(t)$ given in (3.4) are well-defined in $(L^{RV}_4(\Omega), \|\cdot\|_{4,RV})$ (and hence also in $(L^{RV}_2(\Omega), \|\cdot\|_{2,RV})$, see (2.4)).

In the following, we will apply the random Fourier transform approach introduced in , by assuming for the time being that problem (1.1)-(1.2) admits a solution s.p. $u(x,t)$ such that it and its first two derivatives with respect to $x$, $u_x(x,t)$ and $u_{xx}(x,t)$, regarded as s.p.'s of the spatial variable $x$, are all $\|\cdot\|_{2,RV}$-random Fourier transformable. Let $\mathcal{F}[u(\cdot,t)](\xi) = U(t)(\xi)$ be the random Fourier transform of the solution s.p. $u(x,t)$, considering $x$ as the active variable and $t$ fixed. Applying the random Fourier transform to both sides of equation (1.1) and to the initial condition (1.2), and using its linearity, one gets

$$\mathcal{F}[u_t(\cdot,t)](\xi) = a_2(t)\,\mathcal{F}[u_{xx}(\cdot,t)](\xi) + a_1(t)\,\mathcal{F}[u_x(\cdot,t)](\xi) + a_3(t)\,\mathcal{F}[u(\cdot,t)](\xi), \qquad (3.6)$$

$$\mathcal{F}[u(\cdot,0)](\xi) = \mathcal{F}[f(x)](\xi) = F(\xi). \qquad (3.7)$$

By the properties of the random Fourier transform of a s.p. stated in (2.7), one gets

$$\mathcal{F}[u_x(\cdot,t)](\xi) = i\xi\,U(t)(\xi), \qquad \mathcal{F}[u_{xx}(\cdot,t)](\xi) = -\xi^2\,U(t)(\xi).$$

Assuming that the solution s.p. $u(x,t)$ is such that $u_t(\cdot,t)$ is Fourier transformable and that the hypotheses of Lemma 2 of  hold, one gets

$$\mathcal{F}[u_t(\cdot,t)](\xi) = \frac{d}{dt}\,\mathcal{F}[u(\cdot,t)](\xi) = \frac{d}{dt}\,U(t)(\xi). \qquad (3.8)$$

Therefore, from (3.6)-(3.8) one deduces that, for each [xi] [member of] R fixed, U(t)([xi]) satisfies the random IVP

$$\frac{d}{dt}\,U(t)(\xi) = \left(-\xi^2 a_2(t) + i\xi\, a_1(t) + a_3(t)\right) U(t)(\xi), \quad t > 0, \qquad U(0)(\xi) = F(\xi). \qquad (3.9)$$

On the one hand, let us denote by

$$a(t) := a(t,\xi) = -\xi^2 a_2(t) + i\xi\, a_1(t) + a_3(t), \quad \xi \in \mathbb{R} \text{ fixed}, \qquad (3.10)$$

and assume that

$$\hat{a}(t) := \hat{a}(t,\xi) = \int_0^t a(s,\xi)\,ds \ \text{ satisfies condition (2.10), for every } \xi \in \mathbb{R} \text{ and } t > 0 \text{ fixed}. \qquad (3.11)$$

On the other hand, observe that by hypotheses (3.2) and (3.3), $a_i(t)$ and $f(x)$ belong to $L^{RV}_4(\Omega)$ for each $t > 0$ and $x \in \mathbb{R}$, respectively; then it is guaranteed that $a(t)$, defined by (3.10), and $F(\xi)$, defined by (3.7), also belong to $L^{RV}_4(\Omega)$ for $\xi \in \mathbb{R}$ fixed. Moreover, due to the hypothesis of independence among the $a_i(t)$ and $f(x)$ assumed in (3.1), $a(t)$ and $F(\xi)$ are also independent. Finally, by hypothesis (3.3), it is clear that the s.p. $a(t)$ is $\|\cdot\|_{4,RV}$-continuous. Then Theorem 8 of  guarantees that the mean square solution s.p. of the IVP (3.9) is given by

$$U(t)(\xi) = \exp\!\left(\int_0^t a(s,\xi)\,ds\right) F(\xi), \quad t > 0, \ \xi \in \mathbb{R} \text{ fixed}.$$

Now, formally applying the random inverse Fourier transform, the candidate solution s.p. of problem (1.1)-(1.2) is given by

$$u(x,t) = \frac{1}{2\pi}\int_{-\infty}^{+\infty} \exp(i\xi x)\,U(t)(\xi)\,d\xi = \frac{1}{2\pi}\int_{-\infty}^{+\infty} \exp\!\left(i\xi x + \int_0^t a(s,\xi)\,ds\right) F(\xi)\,d\xi. \qquad (3.12)$$

For every $(x,t) \in \mathbb{R}\times[0,+\infty[$ fixed, it remains to justify that the random integral given in (3.12) is convergent in the space $(L^{SP}_2(\mathbb{R}\times\Omega), \|\cdot\|_{2,SP})$ defined in (2.8) with $p = 2$. As a consequence, the s.p. $u(x,t)$ given in (3.12) is well-defined in the mean square sense, that is, in the Banach space $(L^{RV}_2(\Omega), \|\cdot\|_{2,RV})$ defined in (2.1). With this goal, let us observe that

$$\|u(x,t)\|_{2,RV} \le \frac{1}{2\pi}\int_{-\infty}^{+\infty} \left\|\exp\!\left(\int_0^t a(s,\xi)\,ds\right)F(\xi)\right\|_{2,RV} d\xi = \frac{1}{2\pi}\int_{-\infty}^{+\infty} \left\|\exp\!\left(\int_0^t a(s,\xi)\,ds\right)\right\|_{2,RV} \|F(\xi)\|_{2,RV}\,d\xi, \qquad (3.13)$$

where in the last step we have applied relationship (2.5), since by hypothesis (3.1), $\int_0^t a(s,\xi)\,ds$ and $F(\xi)$ are independent r.v.'s for every $t > 0$ and $\xi \in \mathbb{R}$. Moreover, according to hypotheses (3.1), (3.4)-(3.5), (3.10) and Lemma 1, one gets

$$\left\|\exp\!\left(\int_0^t a(s,\xi)\,ds\right)\right\|_{2,RV} \le \sqrt{M_{t,\hat{a}_2}\,M_{t,\hat{a}_3}}\,\exp\!\left(H_{t,\hat{a}_3}\right)\exp\!\left(-\xi^2 H_{t,\hat{a}_2}\right). \qquad (3.14)$$

Taking into account the bound (3.14) in (3.13), one gets

$$\|u(x,t)\|_{2,RV} \le \frac{\sqrt{M_{t,\hat{a}_2}\,M_{t,\hat{a}_3}}\,\exp(H_{t,\hat{a}_3})}{2\pi}\int_{-\infty}^{+\infty} \exp\!\left(-\xi^2 H_{t,\hat{a}_2}\right)\|F(\xi)\|_{2,RV}\,d\xi \le \frac{\sqrt{M_{t,\hat{a}_2}\,M_{t,\hat{a}_3}}\,\exp(H_{t,\hat{a}_3})}{2\pi}\int_{-\infty}^{+\infty} \|F(\xi)\|_{2,RV}\,d\xi < +\infty,$$

where the finiteness of the last integral follows because, by hypothesis (3.2), $F(\xi) \in L^{SP}_2(\mathbb{R}\times\Omega)$.

Summarizing, the following result has been established.

Theorem 1. Let us consider the random IVP (1.1)-(1.2) and assume that the coefficients $a_i(t)$, $1 \le i \le 3$, and the initial condition $f(x)$ satisfy conditions (3.1)-(3.5) and (3.10)-(3.11). Then, the mean square solution s.p. of (1.1)-(1.2) is given by

$$u(x,t) = \frac{1}{2\pi}\int_{-\infty}^{+\infty} \exp\!\left(i\xi x + \int_0^t a(s,\xi)\,ds\right) F(\xi)\,d\xi, \qquad (3.15)$$

where $F(\xi)$ is the random Fourier transform of the stochastic process $f(x)$.

Taking into account that $\int_0^t a(s,\xi)\,ds$ and $F(\xi)$ in (3.15) are independent r.v.'s for every $t > 0$ and $\xi \in \mathbb{R}$, due to condition (3.1), we can obtain the following explicit expressions for the expectation and the standard deviation of the solution s.p. (3.15) of the random IVP (1.1)-(1.2):

$$E[u(x,t)] = \frac{1}{2\pi}\int_{-\infty}^{+\infty} \exp(i\xi x)\,E\!\left[\exp\!\left(\int_0^t a(s,\xi)\,ds\right)\right] E[F(\xi)]\,d\xi, \qquad \sqrt{\mathrm{Var}[u(x,t)]} = \sqrt{E\!\left[(u(x,t))^2\right] - \left(E[u(x,t)]\right)^2}.$$

Example 2. Let us consider the following particular case of the random IVP (1.1)-(1.2)

$$u_t(x,t) = a_2\, u_{xx}(x,t) + a_1\, u_x(x,t) + a_3\, u(x,t), \quad |x| < \infty,\ t > 0, \qquad (3.16)$$

$$u(x,0) = \exp(-x^2), \quad -\infty < x < +\infty. \qquad (3.17)$$

Observe that the initial condition is deterministic and admits a deterministic Fourier transform, $F(\xi) = \mathcal{F}[f(x)](\xi) = \frac{1}{\sqrt{2}}\exp\!\left(-\xi^2/4\right)$ (see Example 1 in , for instance). We will assume that the coefficients $a_i$, $1 \le i \le 3$, in (3.16) are independent r.v.'s satisfying condition (2.9), i.e.,

$$E[|a_i|^m] \le M_i\,(H_i)^m < +\infty, \quad \forall m \ge 0, \quad \forall i : 1 \le i \le 3. \qquad (3.18)$$

Thereby, $\hat{a}_i(t) = a_i\, t$ and

$$E\left[|\hat{a}_i(t)|^m\right] = t^m\, E[|a_i|^m] \le M_i\,(H_i t)^m < +\infty, \quad \forall m \ge 0,\ 1 \le i \le 3,$$

hence it is straightforward to check that all the hypotheses of Theorem 1 hold. Using expression (3.15), for each $(x,t)$ fixed one gets

$$u(x,t) = \frac{\exp(t a_3)}{2\pi\sqrt{2}}\int_{-\infty}^{+\infty} \exp\!\left(-\xi^2\left(t a_2 + \tfrac{1}{4}\right) + i\xi(t a_1 + x)\right) d\xi. \qquad (3.19)$$

Note that the imaginary part of the integral (3.19) vanishes because the s.p.

$$y(\xi) := \sin(\xi(t a_1 + x))\,\exp\!\left(-\xi^2\left(t a_2 + \tfrac{1}{4}\right)\right)$$

is odd, i.e., $y(-\xi)(\omega) = -y(\xi)(\omega)$ for all $\omega \in \Omega$. Thus,

$$\int_{-\infty}^{+\infty} \sin(\xi(t a_1 + x))\,\exp\!\left(-\xi^2\left(t a_2 + \tfrac{1}{4}\right)\right) d\xi = 0,$$

and one gets

$$u(x,t) = \frac{\exp(t a_3)}{2\pi\sqrt{2}}\int_{-\infty}^{+\infty} \exp\!\left(-\xi^2\left(t a_2 + \tfrac{1}{4}\right)\right)\cos(\xi(t a_1 + x))\,d\xi. \qquad (3.20)$$

Since $\exp\!\left(-\xi^2\left(t a_2 + \tfrac{1}{4}\right)\right)\cos(\xi(t a_1 + x))$ is an even s.p. of the variable $\xi$, the solution s.p. of problem (3.16)-(3.17), given by (3.20), takes the form

$$u(x,t) = \frac{\exp(t a_3)}{\pi\sqrt{2}}\int_0^{+\infty} \exp\!\left(-\xi^2\left(t a_2 + \tfrac{1}{4}\right)\right)\cos(\xi(t a_1 + x))\,d\xi. \qquad (3.21)$$

The knowledge of deterministic integrals involving products of exponential and trigonometric functions, and in particular (see [18, p.515])

$$\int_0^{+\infty} \exp(-\beta\xi^2)\cos(b\xi)\,d\xi = \frac{1}{2}\sqrt{\frac{\pi}{\beta}}\,\exp\!\left(-\frac{b^2}{4\beta}\right), \quad \mathrm{Re}(\beta) > 0, \qquad (3.22)$$

suggests the possibility of finding a closed-form expression of the integral given in (3.21). Following this strategy, previously developed in , let us consider the random improper integral

$$J(x,t) = \int_0^{+\infty} \exp\!\left(-\xi^2\left(t a_2 + \tfrac{1}{4}\right)\right)\cos(\xi(t a_1 + x))\,d\xi, \quad x \in \mathbb{R},\ t > 0. \qquad (3.23)$$

Note that for each $\xi \in [0,+\infty[$, $t > 0$ and $x \in \mathbb{R}$ fixed, it is verified that

$$\|\cos(\xi(t a_1 + x))\|_{2,RV} \le 1. \qquad (3.24)$$

Taking into account the hypothesis of independence of $a_1$ and $a_2$, and then applying Proposition 1 together with (3.24) and (2.12) (see Remark 2), one gets

$$\left\|\exp\!\left(-\xi^2\left(t a_2 + \tfrac{1}{4}\right)\right)\cos(\xi(t a_1 + x))\right\|_{2,RV} = \left\|\exp\!\left(-\xi^2\left(t a_2 + \tfrac{1}{4}\right)\right)\right\|_{2,RV}\left\|\cos(\xi(t a_1 + x))\right\|_{2,RV} \le \sqrt{M_2}\,\exp\!\left(-\xi^2\left(H_2 t + \tfrac{1}{4}\right)\right),$$

where $M_2 > 0$ and $H_2 > 0$ are the constants involved in (3.18) for $i = 2$. Observe that $M_2$ and $H_2$ are independent of $x$. Applying (3.22) with $\beta = H_2 t + \tfrac{1}{4} > 0$ and $b = 0$ yields

$$\int_0^{+\infty} \exp\!\left(-\left(H_2 t + \tfrac{1}{4}\right)\xi^2\right) d\xi = \frac{1}{2}\sqrt{\frac{\pi}{H_2 t + \tfrac{1}{4}}} < +\infty,$$

hence the integral $J(x,t)$, given by (3.23), is absolutely convergent in $L^{RV}_2(\Omega)$ for each $(x,t)$ fixed. This guarantees that the mean square random integral $J(x,t)$ defined in (3.23) and its sample representation,

$$J(x,t)(\omega) = \int_0^{+\infty} \exp\!\left(-\xi^2\left(t a_2(\omega) + \tfrac{1}{4}\right)\right)\cos(\xi(t a_1(\omega) + x))\,d\xi,$$

coincide (see [28, Appendix I]). Then, applying (3.22) with $\beta = t a_2(\omega) + \tfrac{1}{4}$ (provided $\mathrm{Re}(a_2(\omega)) > -\tfrac{1}{4t}$ for $t > 0$ fixed) and $b = t a_1(\omega) + x$ (with $t$ and $x$ fixed), one gets

$$J(x,t)(\omega) = \frac{\sqrt{\pi}}{\sqrt{4t a_2(\omega) + 1}}\,\exp\!\left(-\frac{(t a_1(\omega) + x)^2}{4t a_2(\omega) + 1}\right), \quad \forall \omega \in \Omega.$$

Thereby

$$J(x,t) = \frac{\sqrt{\pi}}{\sqrt{4t a_2 + 1}}\,\exp\!\left(-\frac{(t a_1 + x)^2}{4t a_2 + 1}\right),$$

and substituting the latter expression in (3.21), one finally obtains

$$u(x,t) = \frac{1}{\sqrt{2\pi}}\,\frac{\exp(t a_3)}{\sqrt{4t a_2 + 1}}\,\exp\!\left(-\frac{(t a_1 + x)^2}{4t a_2 + 1}\right), \quad x \in \mathbb{R},\ t > 0. \qquad (3.25)$$
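As a consistency check, the closed form (3.25) can be compared, realization by realization, with the Gauss-Hermite approximation of the integral form (3.20). A Python sketch with one arbitrary fixed realization of $(a_1, a_2, a_3)$ (illustrative values):

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def u_exact(x, t, a1, a2, a3):
    """Closed-form solution (3.25) evaluated at one realization (a1, a2, a3)."""
    return (np.exp(t * a3) / np.sqrt(2.0 * np.pi * (4.0 * t * a2 + 1.0))
            * np.exp(-(t * a1 + x) ** 2 / (4.0 * t * a2 + 1.0)))

def u_gauss_hermite(x, t, a1, a2, a3, N=60):
    """Gauss-Hermite approximation of the integral form (3.20)."""
    nodes, w = hermgauss(N)
    f = np.exp(nodes ** 2 * (0.75 - t * a2)) * np.cos(nodes * (t * a1 + x))
    return np.exp(t * a3) / (2.0 * np.pi * np.sqrt(2.0)) * (w @ f)

x, t = 0.3, 0.5
a1, a2, a3 = 1.0, 0.5, 1.5   # one fixed realization (illustrative values)
assert abs(u_exact(x, t, a1, a2, a3) - u_gauss_hermite(x, t, a1, a2, a3)) < 1e-8
```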

Now, using Example 1, note that for fixed $(x,t)$, $u(x,t)$ given by (3.20) can be approximated using the random Gauss-Hermite quadrature formula (2.17):

$$u(x,t) \approx u^{G\text{-}H}_N(x,t) = \frac{\exp(t a_3)}{2\pi\sqrt{2}}\sum_{j=1}^{N} w_j\,\exp\!\left(\xi_{j,N}^2\left(\tfrac{3}{4} - t a_2\right)\right)\cos(\xi_{j,N}(t a_1 + x)). \qquad (3.26)$$

By means of the independence of the r.v.'s $a_i$, $1 \le i \le 3$, one gets the following expressions for the expectation and the standard deviation of the exact solution s.p. (3.25) and of its numerical approximation by the random Gauss-Hermite quadrature (3.26), respectively:

$$E[u(x,t)] = \frac{1}{\sqrt{2\pi}}\,E[\exp(t a_3)]\,E\!\left[\frac{1}{\sqrt{4t a_2 + 1}}\exp\!\left(-\frac{(t a_1 + x)^2}{4t a_2 + 1}\right)\right], \qquad (3.27)$$

$$E\!\left[(u(x,t))^2\right] = \frac{1}{2\pi}\,E[\exp(2t a_3)]\,E\!\left[\frac{1}{4t a_2 + 1}\exp\!\left(-\frac{2(t a_1 + x)^2}{4t a_2 + 1}\right)\right], \qquad (3.28)$$

$$E\!\left[u^{G\text{-}H}_N(x,t)\right] = \frac{E[\exp(t a_3)]}{2\pi\sqrt{2}}\sum_{j=1}^{N} w_j\,E\!\left[\exp\!\left(\xi_{j,N}^2\left(\tfrac{3}{4} - t a_2\right)\right)\right] E\!\left[\cos(\xi_{j,N}(t a_1 + x))\right], \qquad (3.29)$$

$$E\!\left[\left(u^{G\text{-}H}_N(x,t)\right)^2\right] = \frac{E[\exp(2t a_3)]}{8\pi^2}\sum_{j=1}^{N}\sum_{k=1}^{N} w_j w_k\,E\!\left[\exp\!\left(\left(\xi_{j,N}^2 + \xi_{k,N}^2\right)\left(\tfrac{3}{4} - t a_2\right)\right)\right] E\!\left[\cos(\xi_{j,N}(t a_1 + x))\cos(\xi_{k,N}(t a_1 + x))\right], \qquad (3.30)$$

$$\sqrt{\mathrm{Var}[u(x,t)]} = \sqrt{E\!\left[(u(x,t))^2\right] - \left(E[u(x,t)]\right)^2}, \qquad (3.31)$$

$$\sqrt{\mathrm{Var}\!\left[u^{G\text{-}H}_N(x,t)\right]} = \sqrt{E\!\left[\left(u^{G\text{-}H}_N(x,t)\right)^2\right] - \left(E\!\left[u^{G\text{-}H}_N(x,t)\right]\right)^2}. \qquad (3.32)$$

In order to compute the values of (3.27), (3.28) and (3.31) for the random IVP (3.16)-(3.17), and to compare them with the numerical approximations obtained by the random Gauss-Hermite quadrature, (3.29), (3.30) and (3.32), respectively, we will assume that the input r.v.'s of the IVP (3.16)-(3.17) follow particular probability distributions. Concretely, the r.v. $a_1$ has a gamma distribution of parameters (2;3) truncated on the interval $[0,6]$, $a_1 \sim \mathrm{Gamma}_{[0,6]}(2;3)$; $a_2$ has a beta distribution of parameters (2;1), $a_2 \sim \mathrm{Beta}(2;1)$; and finally $a_3$ has an exponential distribution of parameter $\lambda = 1$ truncated on the interval $[1,2]$, $a_3 \sim \mathrm{Exp}_{[1,2]}(1)$. For this choice of the r.v.'s $a_i$, $1 \le i \le 3$, condition (3.18) is guaranteed because all of them are bounded r.v.'s and, in addition, for the r.v. $a_2$ one has $\mathrm{Re}(a_2(\omega)) = a_2(\omega) > -\tfrac{1}{4t}$ with $t > 0$ fixed, since $a_2(\omega) \in (0,1)$, $\omega \in \Omega$. Observe that the rest of the hypotheses of Theorem 1 also hold in the context of this example.
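Under the distributional assumptions above, the moments in (3.27)-(3.32) can be cross-checked by plain Monte Carlo simulation, evaluating both the exact solution (3.25) and the quadrature (3.26) on the same sampled realizations. A sketch (the gamma parameters are read as shape 2 and scale 3, an assumption; the truncated laws are sampled by rejection and by inverse CDF, respectively):

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

rng = np.random.default_rng(3)
n = 20_000

# a1 ~ Gamma(2; 3) truncated to [0, 6], by rejection (shape 2, scale 3 assumed)
a1 = np.empty(0)
while a1.size < n:
    draw = rng.gamma(shape=2.0, scale=3.0, size=n)
    a1 = np.concatenate([a1, draw[draw <= 6.0]])
a1 = a1[:n]

# a2 ~ Beta(2; 1);  a3 ~ Exp(1) truncated to [1, 2], by inverse CDF
a2 = rng.beta(2.0, 1.0, size=n)
u01 = rng.uniform(0.0, 1.0, size=n)
a3 = -np.log(np.exp(-1.0) - u01 * (np.exp(-1.0) - np.exp(-2.0)))

x, t = -1.0, 0.5

# exact solution s.p. (3.25) on each realization
u = (np.exp(t * a3) / np.sqrt(2.0 * np.pi * (4.0 * t * a2 + 1.0))
     * np.exp(-(t * a1 + x) ** 2 / (4.0 * t * a2 + 1.0)))

# Gauss-Hermite approximation (3.26) on the same realizations
nodes, w = hermgauss(60)
f = (np.exp(nodes[None, :] ** 2 * (0.75 - t * a2[:, None]))
     * np.cos(nodes[None, :] * (t * a1[:, None] + x)))
u_gh = np.exp(t * a3) / (2.0 * np.pi * np.sqrt(2.0)) * (f @ w)

# sample mean and standard deviation of both versions agree closely
assert abs(u.mean() - u_gh.mean()) < 1e-6
assert abs(u.std() - u_gh.std()) < 1e-6
```

The Monte Carlo averages of `u` and `u_gh` play the roles of (3.27)/(3.29), and their sample standard deviations those of (3.31)/(3.32), up to sampling error.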

Figure 1 shows the evolution, over the time domain $0 \le t \le 1$, of the expectation $E[u(x,t)]$, computed by (3.27), and the standard deviation $\sqrt{\mathrm{Var}[u(x,t)]}$, computed by (3.27), (3.28) and (3.31), of the exact solution s.p. (3.25) of the random IVP (3.16)-(3.17) on the spatial domain $-7 \le x \le 5$. Outside this spatial range, both the expectation and the standard deviation tend to zero.

In Figure 2a and Figure 3a we compare the expectation $E[u(x_i, 0.5)]$, (3.27), and the standard deviation $\sqrt{\mathrm{Var}[u(x_i, 0.5)]}$, (3.31), respectively, against their numerical approximations, $E[u^{G\text{-}H}_N(x_i, 0.5)]$, computed by means of (3.29), and $\sqrt{\mathrm{Var}[u^{G\text{-}H}_N(x_i, 0.5)]}$, computed by means of (3.29), (3.30) and (3.32), for Hermite polynomials of several degrees $N$, at the time instant $t = 0.5$ and on the spatial domain $-5 \le x \le 5$.

Figure 2b and Figure 3b show the numerical values of the relative errors for the approximate expectation, $\mathrm{RelErr}[\mathrm{E}^{G\text{-}H}_N]$, and the approximate standard deviation, $\mathrm{RelErr}[\sqrt{\mathrm{Var}^{G\text{-}H}_N}]$, respectively, computed using the following expressions:

$$\mathrm{RelErr}[\mathrm{E}^{G\text{-}H}_N] = \left| \frac{\mathrm{E}[u(x,t)] - \mathrm{E}[u^{G\text{-}H}_N(x,t)]}{\mathrm{E}[u(x,t)]} \right|, \quad (3.33)$$

$$\mathrm{RelErr}[\sqrt{\mathrm{Var}^{G\text{-}H}_N}] = \left| \frac{\sqrt{\mathrm{Var}[u(x,t)]} - \sqrt{\mathrm{Var}[u^{G\text{-}H}_N(x,t)]}}{\sqrt{\mathrm{Var}[u(x,t)]}} \right|. \quad (3.34)$$

The spatial domain considered for these relative errors, $-4 \le x \le 1$, has been chosen because outside this interval the expectation, $\mathrm{E}[u(x_i, 0.5)]$, and the standard deviation, $\sqrt{\mathrm{Var}[u(x_i, 0.5)]}$, take very small values. The computed relative errors show that, for time $t = 0.5$, a Hermite polynomial of degree $N = 8$ in the random Gauss-Hermite quadrature suffices to obtain good approximations of the exact expectation $\mathrm{E}[u(x_i, 0.5)]$ on the spatial domain $-4 \le x \le 1$.
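Once exact values and quadrature approximations are available on a spatial grid, (3.33)-(3.34) reduce to a pointwise computation. A minimal sketch (the function name `rel_err` is ours, not the paper's):

```python
import numpy as np

def rel_err(exact, approx):
    """Pointwise relative error |exact - approx| / |exact|, as in (3.33)-(3.34)."""
    exact = np.asarray(exact, dtype=float)
    return np.abs(exact - np.asarray(approx, dtype=float)) / np.abs(exact)

# e.g. exact expectation vs. a quadrature approximation on two grid points
err = rel_err([0.5437, 0.5644], [0.5383, 0.5559])
```

The same helper applies unchanged to the standard deviations, since (3.34) has the same form with $\sqrt{\mathrm{Var}}$ in place of $\mathrm{E}$.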

Figure 4 illustrates that, at the later time $t = 1$, the approximations of the expectation, $\mathrm{E}[u^{G\text{-}H}_N(x_i, 1)]$, and of the standard deviation, $\sqrt{\mathrm{Var}[u^{G\text{-}H}_N(x_i, 1)]}$, improve as the degree $N$ of the Hermite polynomials increases.

In Tables 1 and 2 we collect the numerical values of the approximations shown in Figure 4, as well as the relative errors, on the spatial domain $-4 \le x \le 1$. We observe that, increasing $N$ from $N = 5$ up to $N = 30$, the proposed method provides a reasonable approximation to the exact expectation $\mathrm{E}[u(x_i, 1)]$, while $N = 10$ is already enough to obtain good approximations of the standard deviation $\sqrt{\mathrm{Var}[u(x_i, 1)]}$.
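The convergence pattern in Tables 1 and 2 reflects the standard behaviour of Gauss-Hermite quadrature. The paper's random quadrature (2.14) acts on the solution s.p.; the underlying scalar mechanism can be sketched for a single Gaussian parameter as follows (the helper `gh_mean` and the test integrand $e^Z$ are ours, chosen only because $\mathrm{E}[e^Z]$ has a closed form):

```python
import numpy as np

def gh_mean(g, mu, sigma, N):
    """Approximate E[g(Z)], Z ~ N(mu, sigma^2), by an N-node Gauss-Hermite rule:
    E[g(Z)] ~= (1/sqrt(pi)) * sum_k w_k * g(mu + sqrt(2)*sigma*z_k)."""
    z, w = np.polynomial.hermite.hermgauss(N)
    return float(w @ g(mu + np.sqrt(2.0) * sigma * z)) / np.sqrt(np.pi)

# Convergence check against the closed form E[e^Z] = exp(mu + sigma^2 / 2)
exact = np.exp(0.0 + 0.5 * 0.5**2)
errors = [abs(gh_mean(np.exp, 0.0, 0.5, N) - exact) for N in (2, 4, 8, 16)]
```

For smooth integrands the error decays very rapidly in $N$, which is consistent with the modest degrees ($N \le 30$) reported in the tables.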

Example 3. In this example we illustrate the theoretical results previously established with a random parabolic problem where both the initial condition and the coefficients are s.p.'s. Let us consider the random IVP (1.1)-(1.2) with

$$a_1(t) = a_1 \cos(t), \quad a_2(t) = a_2 t, \quad a_3(t) = a_3, \quad f(x) = \exp(-b x^2),$$

where the r.v.'s $a_i$, $1 \le i \le 3$, and $b$ follow the distributions

$$a_1 \sim \mathrm{Beta}(2;3), \quad a_2 \sim \mathrm{N}_{[2,4]}(3; 0.1), \quad a_3 \sim \mathrm{Exp}_{[0.5,1.5]}(1), \quad b \sim \mathrm{Un}([1,2]).$$

Following a reasoning analogous to the one exhibited in Example 2, it is straightforward to check that the hypotheses of Theorem 1 hold. Notice that the r.v. $b$ has a positive lower bound $l_1 > 0$, i.e., $b(\omega) \ge 1 > 0$ for all $\omega \in \Omega$. The expressions of the exact solution s.p. (3.15) and of its numerical approximation by the random Gauss-Hermite quadrature (2.14) now take the form, respectively,

[mathematical expression not reproducible], (3.35)

[mathematical expression not reproducible]. (3.36)

Using the statistical independence of the r.v.'s $a_i$, $1 \le i \le 3$, and $b$, one obtains the expectation and standard deviation (using (3.31)-(3.32)) of $u(x,t)$ given by (3.35), as well as the expectation and standard deviation of its numerical approximation by the random Gauss-Hermite quadrature, $u^{G\text{-}H}_N(x,t)$, given by (3.36):

[mathematical expression not reproducible], (3.37)

[mathematical expression not reproducible], (3.38)

[mathematical expression not reproducible], (3.39)

[mathematical expression not reproducible]. (3.40)
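Although (3.35)-(3.40) are not reproduced here, the role independence plays in them can be illustrated with a Monte Carlo sketch on a hypothetical separable integrand: for independent inputs, the expectation of a product of factors, each depending on one input, factorizes into a product of expectations. The factor functions below are placeholders of our own, not the paper's solution:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Hypothetical stand-in for a separable solution value u = g1(a1) * g2(a2):
a1 = rng.beta(2, 3, n)          # a1 ~ Beta(2;3), as in Example 3
a2 = rng.uniform(1, 2, n)       # placeholder second input
u = np.cos(a1) * np.exp(-a2)    # placeholder, NOT the paper's (3.35)

# Independence factorizes the mean: E[u] = E[cos(a1)] * E[exp(-a2)]
lhs = u.mean()
rhs = np.cos(a1).mean() * np.exp(-a2).mean()

# Standard deviation from the usual identity Var[u] = E[u^2] - (E[u])^2
std = np.sqrt(np.mean(u**2) - u.mean()**2)
```

The same factorization, applied term by term, is what reduces (3.37)-(3.40) to one-dimensional moments of the individual r.v.'s.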

In Figure 5a and Figure 6a, we compare the expectation $\mathrm{E}[u(x_i, 1)]$, computed using (3.37), and the standard deviation $\sqrt{\mathrm{Var}[u(x_i, 1)]}$, computed using (3.31) and (3.37)-(3.38), respectively, with their numerical approximations $\mathrm{E}[u^{G\text{-}H}_N(x_i, 1)]$, computed by (3.39), and $\sqrt{\mathrm{Var}[u^{G\text{-}H}_N(x_i, 1)]}$, computed by (3.32) and (3.39)-(3.40), for $N \in \{3, 5, 8, 10, 12\}$ at the time instant $t = 1$ on the spatial domain $-3.5 \le x \le 3.5$. Notice that the approximations improve as $N$ increases. This behaviour can be observed in Figure 5b and Figure 6b, where the relative errors, computed by (3.33)-(3.34), are shown. For the sake of clarity, since the orders of magnitude of the relative errors associated to $N \in \{3, 5, 8, 10, 12\}$ are very different, the latter two figures (Figure 5b and Figure 6b) have been plotted on a shortened spatial domain, $-2 \le x \le 2$, and only for $N \in \{8, 10, 12\}$.

4 Conclusions

In this paper we have considered the construction of exact and approximate solutions of random time-dependent parabolic partial differential initial value problems where the uncertainty is treated in the mean square sense. We have shown that a random Fourier transform method can be applied as efficiently as it has proved to be in the solution of deterministic problems. In the random case, however, not only the solution stochastic process is important, but also its expectation and standard deviation. This has motivated the consideration of random numerical integration methods to approximate infinite mean square convergent integrals. In particular, random Gauss-Hermite quadrature formulae are proposed to approximate the solution stochastic process in a more computable way. The results have been illustrated with examples.

https://doi.org/10.3846/mma.2018.006

Acknowledgements

This work has been partially supported by the Spanish Ministerio de Economia y Competitividad grant MTM2013-41765-P.

References

 L. Arnold. Stochastic Differential Equations: Theory and Applications. Dover Publ., New York, 2013.

 N. Bellomo, L. M. De Socio and R. Monaco. Random heat equation: solutions by the stochastic adaptive interpolation method. Computers and Mathematics with Applications, 16(9):759-766, 1988. https://doi.org/10.1016/0898-1221(88)90011-9.

 A. T. Bharucha-Reid. Probabilistic Methods in Applied Mathematics. Academic Press, London, 1973.

 G. Calbo, J.-C. Cortes, L. Jodar and L. Villafuerte. Analytic stochastic process solutions of second-order random differential equations. Applied Mathematics Letters, 23(12):1421-1424, 2010. https://doi.org/10.1016/j.aml.2010.07.011.

 M.-C. Casaban, R. Company, J.-C. Cortes and L. Jodar. Solving the random diffusion model in an infinite medium: A mean square approach. Applied Mathematical Modelling, 38(24):5922-5933, 2014. https://doi.org/10.1016/j.apm.2014.04.063.

M.-C. Casaban, J.-C. Cortes, B. Garcia-Mora and L. Jodar. Analytic-numerical solution of random boundary value heat problems in a semi-infinite bar. Abstract and Applied Analysis, 2013(Article ID 676372):1-9, 2013. https://doi.org/10.1155/2013/676372.

 M.-C. Casaban, J.-C. Cortes and L. Jodar. A random Laplace transform method for solving random mixed parabolic differential problems. Applied Mathematics and Computation, 259:654-667, 2015. https://doi.org/10.1016/j.amc.2015.02.091.

 M.-C. Casaban, J.-C. Cortes and L. Jodar. Solving linear and quadratic random matrix differential equations: A mean square approach. Applied Mathematical Modelling, 40(21-22):9362-9377, 2016. https://doi.org/10.1016/j.apm.2016.06.017.

R. Chiba. Stochastic heat conduction analysis of a functionally graded annular disc with spatially random heat transfer coefficients. Applied Mathematical Modelling, 33(1):507-523, 2009. https://doi.org/10.1016/j.apm.2007.11.014.

 J.-C. Cortes, L. Jodar, M.-D. Rosello and L. Villafuerte. Solving initial and two-point boundary value linear random differential equations: A mean square approach. Applied Mathematics and Computation, 219(4):2204-2211, 2012. https://doi.org/10.1016/j.amc.2012.08.066.

J.-C. Cortes, L. Jodar, L. Villafuerte and R. J. Villanueva. Computing mean square approximations of random diffusion models with source term. Mathematics and Computers in Simulation, 76(1-3):44-48, 2007. https://doi.org/10.1016/j.matcom.2007.01.020.

 M. C. C. Cunha and F. A. Dorini. Statistical moments of the solution of the random Burgers-Riemann problem. Mathematics and Computers in Simulation, 79(5):1440-1451, 2009. https://doi.org/10.1016/j.matcom.2008.06.001.

P. J. Davis and P. Rabinowitz. Methods of Numerical Integration, second edition. Computer Science and Applied Mathematics. Academic Press, Inc., San Diego, 1984.

 L. M. Delves and J. L. Mohamed. Computational Methods for Integral Equations. Cambridge University Press, New York, 1985.

 D. C. Dibben and R. Metaxas. Time domain finite element analysis of multimode microwave applicators. IEEE Transactions on Magnetics, 32(3):942-945, 1996. https://doi.org/10.1109/20.497397.

 F. A. Dorini and M. C. C. Cunha. On the linear advection equation subject to random velocity fields. Mathematics and Computers in Simulation, 82(4):679-690, 2011. https://doi.org/10.1016/j.matcom.2011.10.008.

 S. J. Farlow. Partial Differential Equations for Scientists and Engineers. Dover, New York, 1993.

 I. S. Gradshteyn and I. M. Ryzhik. Table of integrals, series and products, fifth edition. Academic Press, Inc., San Diego, 1994.

 G. R. Grimmett and D. R. Stirzaker. Probability and Random Processes. Clarendon Press, New York, 2000.

 A. F. Harvey. Microwave Engineering. Academic Press, New York, 1963.

 A. Hussein and M. M. Selim. Solution of the stochastic transport equation of neutral particles with anisotropic scattering using RVT technique. Applied Mathematics and Computation, 213(1):250-261, 2009. https://doi.org/10.1016/j.amc.2009.03.016.

A. Hussein and M. M. Selim. A general analytical solution for the stochastic Milne problem using Karhunen-Loeve (K-L) expansion. Journal of Quantitative Spectroscopy and Radiative Transfer, 125:84-92, 2013. https://doi.org/10.1016/j.jqsrt.2013.03.018.

 L. Jodar, J. I. Castano, J. A. Sanchez and G. Rubio. Accurate numerical solution of coupled time dependent parabolic initial value problems. Applied Numerical Mathematics, 47(3-4):467-476, 2003. https://doi.org/10.1016/S0168-9274(03)00086-2.

 Y. Li and S. Long. A finite element model based on statistical two-scale analysis for equivalent heat transfer parameters of composite material with random grains. Applied Mathematical Modelling, 33(7):3157-3165, 2009. https://doi.org/10.1016/j.apm.2008.10.018.

 M. Loeve. Probability Theory I, series: Graduate Texts in Mathematics, Vol. 45. Springer-Verlag, New York, 1977.

B. McLaughlin, J. Peterson and M. Ye. Stabilized reduced order models for the advection-diffusion-reaction equation using operator splitting. Computers and Mathematics with Applications, 71(11):2407-2420, 2016. https://doi.org/10.1016/j.camwa.2016.01.032.

 M. Necati Ozisik. Boundary Value Problems of Heat Conduction. Dover Publications, New York, 1968.

 T. T. Soong. Random Differential Equations in Science and Engineering. Academic Press, New York, 1973.

 S.R.S. Varadhan and N. Zygouras. Behavior of the solution of a random semilinear heat equation. Communications on Pure and Applied Mathematics, 61(9):1298-1329, 2008. https://doi.org/10.1002/cpa.20256.

L. Villafuerte, C.A. Braumann, J.-C. Cortes and L. Jodar. Random differential operational calculus: theory and applications. Computers and Mathematics with Applications, 59(1):115-125, 2010. https://doi.org/10.1016/j.camwa.2009.08.061.

Maria-Consuelo Casaban(a), Juan-Carlos Cortes (a) and Lucas Jodar (a)

(a) Universitat Politecnica de Valencia. Instituto Universitario de Matematica Multidisciplinar Building 8G, access C, 2nd floor, Camino de Vera s/n, 46022 Valencia, Spain

E-mail(corresp.): macabar@imm.upv.es

E-mail: jccortes@imm.upv.es

E-mail: ljodar@imm.upv.es

Received May 17, 2017; revised December 4, 2017; accepted December 6, 2017

Caption: Figure 1. Plot (a): surface of the expectation $\mathrm{E}[u(x,t)]$; plot (b): surface of the standard deviation $\sqrt{\mathrm{Var}[u(x,t)]}$.

Caption: Figure 2. Plot (a): $\mathrm{E}[u(x_i, 0.5)]$ vs. $\mathrm{E}[u^{G\text{-}H}_N(x_i, 0.5)]$ by random Gauss-Hermite quadrature using Hermite polynomials of degree $N \in \{3, 5, 8, 10, 12, 15\}$. Plot (b): $\mathrm{RelErr}[\mathrm{E}^{G\text{-}H}_N]$. In order to represent the relative error properly, the domain $-5 \le x \le 5$ has been shortened, since the expectation is almost zero outside the interval $-4 \le x \le 1$.

Caption: Figure 3. Plot (a): $\sqrt{\mathrm{Var}[u(x_i, 0.5)]}$ vs. $\sqrt{\mathrm{Var}[u^{G\text{-}H}_N(x_i, 0.5)]}$ by random Gauss-Hermite quadrature using Hermite polynomials of degree $N \in \{3, 5, 8, 10, 12, 15\}$. Plot (b): $\mathrm{RelErr}[\sqrt{\mathrm{Var}^{G\text{-}H}_N}]$. To be consistent with Plot (b) in Figure 2, where the relative error has been represented on the spatial interval $-4 \le x \le 1$, here we keep the same interval to plot the relative error.

Caption: Figure 4. Plot (a): $\mathrm{E}[u(x_i, 1)]$ vs. $\mathrm{E}[u^{G\text{-}H}_N(x_i, 1)]$ for the degrees $N \in \{5, 10, 20, 30\}$. Plot (b): $\sqrt{\mathrm{Var}[u(x_i, 1)]}$ vs. $\sqrt{\mathrm{Var}[u^{G\text{-}H}_N(x_i, 1)]}$.

Caption: Figure 5. Plot (a): $\mathrm{E}[u(x_i, 1)]$ vs. $\mathrm{E}[u^{G\text{-}H}_N(x_i, 1)]$ for the degrees $N \in \{3, 5, 8, 10, 12\}$. Plot (b): $\mathrm{RelErr}[\mathrm{E}^{G\text{-}H}_N]$. In order to represent the relative error properly, the domain $-3.5 \le x \le 3.5$ has been shortened.

Caption: Figure 6. Plot (a): $\sqrt{\mathrm{Var}[u(x_i, 1)]}$ vs. $\sqrt{\mathrm{Var}[u^{G\text{-}H}_N(x_i, 1)]}$ for the degrees $N \in \{3, 5, 8, 10, 12\}$. Plot (b): $\mathrm{RelErr}[\sqrt{\mathrm{Var}^{G\text{-}H}_N}]$. To be consistent with Plot (b) in Figure 5, where the relative error has been represented on the spatial interval $-2 \le x \le 2$, here we keep the same interval to plot the relative error.
```
Table 1. Exact values of E[u(x_i, t)] and sqrt(Var[u(x_i, t)]) at some spatial
points x_i in [-4, -2] at the end time t = 1, together with the approximations
E[u^{G-H}_N(x_i, t)] and sqrt(Var[u^{G-H}_N(x_i, t)]) obtained by random
Gauss-Hermite quadrature using Hermite polynomials of degree N in {5, 10, 20, 30},
and the relative errors of these approximations at each spatial point.

t = 1                          N    x_i = -4.0   -3.5         -3.0         -2.5         -2.0
E[u(x_i, t)]                        0.5437       0.5644       0.5626       0.5387       0.4931
E[u^{G-H}_N(x_i, t)]           5    0.5383       0.5559       0.5543       0.5331       0.4941
                               10   0.5343       0.5543       0.5528       0.5295       0.4847
                               20   0.5343       0.5543       0.5528       0.5295       0.4847
                               30   0.5343       0.5543       0.5528       0.5295       0.4847
RelErr[E^{G-H}_N]              5    9.9863e-03   1.5080e-02   1.4671e-02   1.0383e-02   2.1077e-03
                               10   1.7238e-02   1.7901e-02   1.7499e-02   1.7039e-02   1.7035e-02
                               20   0.5588e-01   3.3138e-01   5.5945e-02   2.4677e-01   5.4834e-01
                               30   1.7228e-02   1.7900e-02   1.7494e-02   1.7035e-02   1.6999e-02
sqrt(Var[u(x_i, t)])                0.3636       0.3511       0.3539       0.3677       0.3849
sqrt(Var[u^{G-H}_N(x_i, t)])   5    0.3471       0.3374       0.34n2       0.353n       0.3652
                               10   0.3515       0.3391       0.3417       0.3568       0.3747
                               20   0.3515       0.3391       0.3417       0.3568       0.3746
                               30   0.3515       0.3391       0.3417       0.3568       0.3746
RelErr[sqrt(Var^{G-H}_N)]      5    4.5441e-02   3.899ne-02   3.8795e-02   4.0012e-02   5.1189e-02
                               10   3.3189e-02   3.4337e-02   3.4367e-02   2.9479e-02   2.6660e-02
                               20   3.3208e-02   3.4339e-02   3.4377e-02   2.9485e-02   2.6724e-02
                               30   3.3208e-02   3.4339e-02   3.4377e-02   2.9485e-02   2.6794e-02

(The digit printed "n" in three entries is illegible in the source.)

Table 2. The same random functions of Table 1 at some spatial points
x_i in [-1, 1] at the end time t = 1.

t = 1                          N    x_i = -1.0   -0.5         0            0.5          1.0
E[u(x_i, t)]                        0.3492       0.2651       0.1859       0.1197       0.0706
E[u^{G-H}_N(x_i, t)]           5    0.3946       0.3596       0.3495       0.3663       0.4036
                               10   0.3418       0.2572       0.1758       0.1043       0.0426
                               20   0.3421       0.2585       0.1797       0.1145       0.0667
                               30   0.3421       0.2585       0.1797       0.1145       0.0667
RelErr[E^{G-H}_N]              5    1.2978e-01   3.5644e-01   8.8040e-01   2.0606       4.7138
                               10   2.1372e-02   3.0038e-02   5.3999e-02   1.2882e-01   3.9720e-01
                               20   1.0635       1.2569       1.4052       1.5137       1.5902
                               30   2.0301e-02   2.5157e-02   3.3004e-02   4.3317e-02   5.5300e-02
sqrt(Var[u(x_i, t)])                0.3776       0.3379       0.2769       0.2066       0.1402
sqrt(Var[u^{G-H}_N(x_i, t)])   5    0.3387       0.3038       0.2911       0.3153       0.3495
                               10   0.3690       0.3292       0.2685       0.2023       0.1526
                               20   0.3687       0.3282       0.2656       0.1944       0.1292
                               30   0.3687       0.3282       0.2656       0.1944       0.1292
RelErr[sqrt(Var^{G-H}_N)]      5    1.0294e-01   1.0073e-01   5.1349e-02   5.2626e-01   1.4926
                               10   2.2619e-02   2.5642e-02   3.0474e-02   2.0711e-02   8.8231e-02
                               20   2.3517e-02   2.8699e-02   4.0889e-02   5.8873e-02   7.8277e-02
                               30   2.3517e-02   2.8699e-02   4.0890e-02   5.8875e-02   7.8294e-02
```

COPYRIGHT 2018 Vilnius Gediminas Technical University