# Solution to the problem of control of a distributed parameter process

1. Introduction

The exploration and utilisation of real-world processes and phenomena of a space-time nature require an adequate description of the state variables, which are expressed as distributed parameters or field values. The issues of distributed parameter systems control were brought to the attention of the professional public at the first IFAC international conference in 1960, where Bellman (1967) and Pontryagin (1983) presented their first results. In the light of these early works, distributed parameter systems have been defined as systems whose state variables are distributed parameters or field values. The early publications included generalised methods of dynamic programming and the maximum principle. The solutions to these problems are mostly based on approximating the dynamics of the controlled systems by the equations of mathematical physics. The processes and systems are described mathematically by partial differential equations (parabolic, hyperbolic and elliptic), as well as by integral and functional equations. The solutions also make use of discretization methods. This approach provides, for each specific problem, results that are especially useful for orientation in the field. The development of numerical methods and their algorithms for solving partial differential equations was initiated mainly by the publications of Hrennikoff (1941) and Courant (1943). Owing to the dynamic development of information technology, there are today many numerical methods and algorithms applied to the solution of partial differential and integral equations on domains of complex spatial 3D shapes.
Many works in the theory of control of distributed parameter systems are based on the works of Lions (1968) and Butkovskij (1975), as well as on results from the mathematical disciplines of the calculus of variations (classical and non-classical), functional analysis and the theory of semigroups: (Bellman, 1967), (Butkovskij, 1982), (Lions, 1968), (Pontryagin et al., 1983), (Lasiecka & Triggiani, 2000). The aim of this article is to present a procedure designed for solving the problem of optimal control of a heat transfer process with distributed parameters: to express the mathematical model of the heat transfer process by a bi-dimensional partial differential equation with boundary and initial conditions according to the type of the problem, together with a criterial function expressed in functional form; to use the least squares method to solve the defined problem of optimal control of a distributed parameter process; and to show that the solution obtained by the least squares algorithm is essentially an approximation of the Green's function, or impulse transition function.

2. The problem formulation

Let us consider the set of controlled distributed parameter processes, which can be described by linear non-homogeneous partial differential equations with variable coefficients taking the form:

a(x, t) [F.sub.tt] + b(x, t) [F.sub.t] = A(x, t)[F.sub.xx] + B(x, t)[F.sub.x] + C(x, t)F + f(x, t, u(t), v(t), w(x, t)) (1)

where x is a spatial variable, [x.sub.0] [less than or equal to] x [less than or equal to] [x.sub.1], t is time, t [greater than or equal to] [t.sub.0]. In equation (1), u(t), v(t) and w(x, t) are the control functions. The state of the controlled system is characterised by the function F(x, t). The coefficients a, b, A, B, C and the function f are considered to be known. In the general case, the boundary conditions for equation (1) at x = [x.sub.0] and x = [x.sub.1] can be written as

([[alpha].sub.0] (t)[F.sub.x] + [[beta].sub.0] (t)F)[|.sub.x = [x.sub.0]] = [f.sub.0] (t, [u.sub.0] (t)), t [greater than or equal to] [t.sub.0] (2)

([[alpha].sub.1] (t)[F.sub.x] + [[beta].sub.1] (t)F)[|.sub.x = [x.sub.1]] = [f.sub.1] (t, [u.sub.1] (t)), t [greater than or equal to] [t.sub.0] (3)

where [[alpha].sub.0] (t), [[beta].sub.0] (t), [[alpha].sub.1] (t), [[beta].sub.1] (t) are the given functions of the variable t. For the uniqueness of the solution to the equation (1) with the boundary conditions (2), (3) it is necessary to define the initial conditions:

F (x, [t.sub.0]) = [F.sub.0] (x), [x.sub.0] [less than or equal to] x [less than or equal to] [x.sub.1] (4)

[F.sub.t] (x, [t.sub.0]) = [F.sub.1] (x), [x.sub.0] [less than or equal to] x [less than or equal to] [x.sub.1] (5)

The functions [u.sub.0] (t), [u.sub.1] (t) are the control functions for the boundary values. The controls u(t), v(t), w(x, t), [u.sub.0] (t), [u.sub.1] (t) define the vector of control variables

u (x, t) = (u(t), v(t), w(x, t), [u.sub.0] (t), [u.sub.1] (t)) (6)

which can be investigated as an element of some normed space. Additional restrictions can be imposed on the vector of control variables u(x, t), e.g.:

[parallel]u (x, t)[parallel] [less than or equal to] M , (7)

where M is a given number, M [greater than or equal to] 0. Let the final states of the process be given at a time t = [t.sub.f]:

F (x, [t.sub.f]) = [F.sup.*.sub.0] (x), [x.sub.0] [less than or equal to] x [less than or equal to] [x.sub.1] (8)

[F.sub.t] (x, [t.sub.f]) = [F.sup.*.sub.1] (x), [x.sub.0] [less than or equal to] x [less than or equal to] [x.sub.1] (9)

If we consider material heating simulation techniques, then the mathematical model of the process has to take into account heat transfer by emission, which is expressed by the following nonlinear boundary conditions:

[lambda] [F.sub.x] = [[sigma].sub.1] {[[([u.sub.1] (y,t) + 273)/100].sup.4] - [[(F(S,y,t) + 273)/100].sup.4]} + [[alpha].sub.1] ([u.sub.1] (y,t) - F(S,y,t)) (10)

- [lambda] [F.sub.x] = [[sigma].sub.2] {[[([u.sub.2] (y,t) + 273)/100].sup.4] - [[(F(0,y,t) + 273)/100].sup.4]} + [[alpha].sub.2] ([u.sub.2] (y,t) - F(0,y,t))

where [lambda] is the heat conductivity of the material, [sigma] the coefficient of heat transfer by emission, [alpha] the coefficient of heat transfer by convection and u(t) the medium temperature in the furnace. The subscripts 1 and 2 denote the upper and lower surface of the heated material, respectively. Furthermore, for the purposes of simulating the minimization of losses due to oxidation during heating, we may define an integral criterion as follows:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (11)

R = 1 if F (S, t) [greater than or equal to] [H.sub.F] ; R = 0 if F (S, t) < [H.sub.F]

P is the surface overburn of the metal, F(S,t) the surface temperature of the metal, [H.sub.F] the limit temperature for scale formation, [[alpha].sub.S] the aggressivity coefficient of the combustion products ([H.sub.F] and [[alpha].sub.S] are considered as given for the purposes of simulation). In optimum control problems for systems with distributed parameters, (1), (10) and (11), the control function u(t) may appear directly or indirectly in the integrand of the functional to be minimized (through the boundary conditions), (Hrubina, 1992).

Problem 1. It is necessary to find a vector of control variables u(x, t) such that the conditions (8) and (9) are satisfied in the shortest possible time t = [t.sub.f]. The vector of control variables may be subject to a restriction of the type (7). Problem 2. Let the time interval t = [t.sub.f] be given. It is necessary to find a vector of control variables u(x, t) for which the conditions (8) and (9) are satisfied. Additionally, the condition can be imposed that the norm of the vector of control variables u(x, t) attains its minimum value.

Problem 3. In this case the problem is to propose an algorithm for solving the mathematical model expressed by (1), with the given initial and boundary conditions according to the problem type, which may be considered, e.g., as given by (10), and the selected optimality criterion in the integral form (11). The result of the solution over a given time interval will be the time-dependent control function u(t) (the medium temperature inside the furnace), as well as the time dependences of the temperature of the surfaces and centre of the heated material, (Hrubina & Jadlovska, 2002), (Hrubina, 1992, 2007). The theoretical solution to the defined Problems 1 and 2 is presented in the publication by Butkovskij (1975), and the algorithm for solving Problem 2 in the works of Hrubina (1992, 2007). Finally, it is possible to show the method of transforming equations (1)-(5), (8), (9) into a form suitable for the application of the least squares method. The possibility of such a transformation stems from the fact that the function F(x, t) which satisfies equations (1) to (4) can be expressed in the form of integrals which include the functions f, [f.sub.0] and [f.sub.1]:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (12)

Considering the general theory, the interesting fact is that all the cores, i.e. the functions [N.sub.0], [N.sub.1], K, [K.sub.0], [K.sub.1], can be expressed in finite form by a single function G(x,t,[xi],[tau]), which is called either an impulse transition function or the effect function, a fundamental solution to the system (1) to (5), or the Green's function (Butkovskij, 1982); (Hulko, 1998).

3. Approximation of the solution to a partial differential equation by the least squares method

The least squares method consists in searching for a vector of unknown parameters so that the weighted sum of the squares of the residuals is minimal. The least squares method can be formulated either as a continuous method or as a discrete one, according to the weighted sum of the residuals in the given domain [OMEGA], (Hrubina & Majercak, 2011). The classification of the least squares method can also be carried out according to the selected form of the approximate solution of the partial differential equation. Following Easton (1976), we distinguish two approaches:

1) The global approach, where we search for the approximate solution in the form of the sum of a series;

2) The local approach, or the finite element method approach.

Let [OMEGA] be a bounded region of the space [R.sub.2] of the variables M([x.sub.1], [x.sub.2]), with a smooth non-self-intersecting boundary [GAMMA]. We investigate in [OMEGA] x (0,[t.sub.f]>, (0 < [t.sub.f] < [infinity]), the task of defining the approximate solution F(M, t) of the equation

div{L(M)grad F (M, t)} = C(M) [[partial derivative].sub.t] F (M, t) (13)

where

C(M) > 0 and C(M) [member of] [C.sup.0] ([bar.[OMEGA]]); [bar.[OMEGA]] = [OMEGA] + [GAMMA] ; (14)

L(M) > 0 and L(M) [member of] [C.sup.1] ([bar.[OMEGA]]), F(M, t) is the temperature in positive time t at the point M. Let the solution (13) satisfy on the border [GAMMA] the boundary condition

F(P, t) = g(P, t) (15)

and the initial condition for t = 0

F(M,0) = [F.sub.0] (M), M [member of] [bar.[OMEGA]] (16)

where the given function g(P,t) [member of] [C.sup.0](S), S = [GAMMA] x (0,[t.sub.f]), and for t = 0 it is g(P,0) = 0 and [F.sub.0](M) [member of] [C.sup.0] ([bar.[OMEGA]]).

In this paper we present some suppositions whose specification requires introducing concepts from the field of abstract differential equations. Let us denote by [L.sub.2]([OMEGA]) the Hilbert space of real functions square-integrable in the region [OMEGA], where the scalar product is defined by the equality

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Within this space, let us choose the set of functions V, the elements of which are twice continuously differentiable functions in [OMEGA].

Let us search for the approximate solution of (13) in the following form

F (M, t) = [n.summation over (i=0)] [v.sub.i] (M) [f.sub.i] (t)

where the basis functions [v.sub.i](M) [member of] V, (i = 0, 1, ..., n), represent the first n + 1 functions of the complete system of functions {[v.sub.i] (M)} and satisfy the boundary condition. Let us denote:

E = div{L(M)grad(F(M, t)}-C(M) [[partial derivative].sub.t] F(M, t) (17)

F = F (M,0) - [F.sub.0](M) (18)

We will determine the unknown functions [f.sub.i](t) so that the function F(M, t) minimises the functionals with weight functions, (Legras, 1971).

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

If we determine the solution of the homogeneous task (satisfying the equations (13), (15) and (16), where g(P,0) = 0), and the solution of the non-homogeneous task (satisfying the equations (13), (15) and (16), where [F.sub.0](M) = 0), then by their superposition we will obtain the solution to the given problem.
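Before specialising to the homogeneous and non-homogeneous tasks, the basic principle — choose the free coefficients so that the sum of squared residuals of the differential equation is minimal — can be illustrated on a small hypothetical example, not taken from the paper: a cubic trial solution of u' = u, u(0) = 1, fitted by discrete least squares at collocation points.

```python
import numpy as np

# Discrete least-squares sketch (illustrative, not from the paper):
# approximate u' = u, u(0) = 1 on [0, 1] by a cubic u(t) = 1 + sum a_i t^i,
# minimising the sum of squared residuals R(t_k) = u'(t_k) - u(t_k)
# at collocation points t_k.
t = np.linspace(0.0, 1.0, 50)
# columns: d/dt(t^i) - t^i for i = 1, 2, 3 (a_0 = 1 enforces u(0) = 1)
M = np.column_stack([i * t**(i - 1) - t**i for i in (1, 2, 3)])
rhs = np.ones_like(t)          # moves the constant term of the residual
a, *_ = np.linalg.lstsq(M, rhs, rcond=None)

def u(x):
    return 1.0 + sum(ai * x**(i + 1) for i, ai in enumerate(a))

print(u(1.0))                  # close to e = 2.71828...
```

The same idea, with the residual E of (17) and the weight functions of the functionals, underlies the derivations below.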

3.1 Solution of the homogeneous problem

Let

F(M, t) = [n.summation over (i=0)] [v.sub.i] (M)[f.sub.i] (t) = v(M)f(t) (19)

where v(M) is a row matrix and f(t) a column matrix. The coordinate functions [v.sub.i] (M) are chosen so that [v.sub.i](P) = 0; P [member of] [GAMMA]. Let [f'.sub.i](t) be a linear combination of the [f.sub.k](t):

[f'.sub.i](t) = [n.summation over (k=0)] [[alpha].sub.ik] [f.sub.k] (t) (20)

Let us denote the matrix

A = [[[alpha].sub.ik]] ; (i, k = 0,1, ..., n) (21)

The supposition (20) can be written down in the form of the matrix equation

[[partial derivative].sub.t] f (t) = Af (t) (22)

By minimisation of the functionals [I.sub.E] and [I.sub.F] we define the elements of the matrix A and the matrix f(0), i.e. the initial values of the matrix f(t). These two results define the unknown [f.sub.i](t), because from (22) it follows that f (t) = [e.sup.At]f (0). If we know f (t), then F (M, t) can be expressed in the following form

F (M, t) = v(M) [e.sup.At] f (0) (23)

If we substitute (19) for F (M, t) in (17) and use the supposition (22), we obtain:

E = {div{L(M) grad v(M)}- C(M) v(M)A}f (t) = [epsilon] f (t). (24)

where [epsilon] is a row matrix, the i-th element of which has the following form

[[epsilon].sub.i] = div{L(M)grad [v.sub.i] (M)} - C(M) [n.summation over (k=0)] [v.sub.k] (M)[[alpha].sub.ki]

Since [E.sup.2] = [{[n.summation over (i=0)] [[epsilon].sub.i] [f.sub.i]}.sup.2] [less than or equal to] (n + 1) [n.summation over (i=0)] [[epsilon].sup.2.sub.i] [f.sup.2.sub.i], it is enough to minimise [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. For each [[epsilon].sub.i] we obtain an upper approximation of the minimum of [I.sub.E]. The minimum [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] will be defined by (n + 1) equations [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], or altogether by [(n + 1).sup.2] equations (25), from which we can calculate the [(n + 1).sup.2] unknown elements of the matrix A.

[n.summation over (i=0)] {(C[v.sub.i] [v.sub.0])[[alpha].sub.0i] + (C[v.sub.i] [v.sub.1])[[alpha].sub.1i] + ... + (C[v.sub.i][v.sub.n]) [[alpha].sub.ni]} = [n.summation over (i=0)] ([v.sub.i] div{L(M)grad [v.sub.i] (M)}) (25)

where, for the sake of brevity, we use the denotation

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

If we define the matrices B and D so that

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (26)

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (27)

Then the [(n + 1).sup.2] equations of the system (25) can be written down in the form of the matrix equation BA = D, whence A = [B.sup.-1] D. We minimise the functional [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. From the conditions for the minimum of [I.sub.F], [partial derivative][I.sub.F]/[partial derivative] [f.sub.i](0) = 0, (i = 0,1, ..., n), we obtain (n + 1) equations for the calculation of the elements of the matrix f(0).

[n.summation over (i=0)] {(C[v.sub.i][v.sub.0]) [f.sub.0] (0) + (C[v.sub.i][v.sub.1]) [f.sub.1] (0) + ... + (C[v.sub.i][v.sub.n])[f.sub.n] (0)} = [n.summation over (i=0)] (C [F.sub.0] [v.sub.i]) (28)


Using the matrix B defined in (26), the system (28) can be expressed in the form Bf(0) = (C(M)[F.sub.0](M)v'(M)), where the matrix v'(M) is the transpose of the row matrix v(M); thus

f (0) = [B.sup.-1] (C(M)[F.sub.0](M) v' (M)) (29)

Writing (23) using (29), and expressing the result as an integral, we obtain the solution of the homogeneous boundary task

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] .
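The two matrix computations of this subsection — A = [B.sup.-1]D obtained from BA = D, and the propagation f(t) = [e.sup.At]f(0) — can be sketched as follows; the 2 x 2 matrices B and D are hypothetical stand-ins for the Gram matrices of (26)-(27), B symmetric positive definite and D symmetric negative definite.

```python
import numpy as np

# Hypothetical stand-ins for the Gram matrices (26)-(27)
B = np.array([[2.0, 0.5],
              [0.5, 1.0]])
D = np.array([[-3.0, -1.0],
              [-1.0, -2.0]])

# A = B^{-1} D, obtained from B A = D by a linear solve rather than
# an explicit inverse (numerically better conditioned)
A = np.linalg.solve(B, D)

# f(t) = e^{At} f(0); e^{At} via an eigen-decomposition, the
# eigenvalues of A being real and negative (see Sec. 3.3)
f0 = np.array([1.0, 0.0])
w, V = np.linalg.eig(A)
f_t = (V @ np.diag(np.exp(w * 0.1)) @ np.linalg.inv(V)).real @ f0
print(f_t)     # decays toward zero as t grows
```

The same pattern (solve for A, then exponentiate) is what the algorithm of Sec. 3.6 carries out on the full coordinate-function basis.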

3.2 Solution of the non-homogeneous problem

Let us search for F(M, t) in the following form

F (M, t) = [omega](M, t) + v(M)f(t) (30)

where [omega](P, t) = g(P, t) for P [member of] [GAMMA]. Let [f'.sub.i] (t) = [n.summation over (k=0)] [[alpha].sub.ik] [f.sub.k] (t) + [H.sub.i] (t), or in matrix form

[[partial derivative].sub.t] f (t) = Af(t) + H(t) (31)

where A is the matrix defined in (21) and H(t) is an unknown column matrix. By minimisation of the functionals [I.sub.E] and [I.sub.F] we will define the matrices A, H(t) and f(0), and thus also the unknown matrix f(t), since from (31)

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (32)

If we substitute (30) for F (M, t) in (17) and use (31), we obtain:

E = {div{L(M) grad v(M)} - C(M)v(M)A}f (t) - C(M)v(M)H(t) + x([omega](M, t))

where

x([omega](M, t)) = div{L(M)grad [omega](M, t)} - C(M) [[partial derivative].sub.t] [omega](M, t)

The minimisation of the functional [I.sub.E] will be performed in two steps.

a) Consider the expression

{div{L(M) grad v(M)} - C(M) v(M) A}f (t) (33)

Expression (33) was included in [I.sub.E] for the homogeneous boundary task, and by its minimisation the matrix A = [B.sup.-1] D is defined.

b) Let us minimise the functional [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

with the same weight function as with (33). From the (n + 1) conditions for the minimisation of I

[partial derivative]I/[partial derivative][H.sub.i] (t) = 0

we will obtain the system of the following equations:

[n.summation over (i=0)] {(C[v.sub.i][v.sub.0])[H.sub.0] + (C[v.sub.i][v.sub.1])[H.sub.1] + ... + (C[v.sub.i][v.sub.n])[H.sub.n]} = [n.summation over (i=0)] {[v.sub.i] x([omega](M, t))}

whence

H(t) = [B.sup.-1](v'(M)x([omega](M, t))) (34)

Recall that in the non-homogeneous task [F.sub.0](M) = 0; thus F in (18) reduces to

F = [omega](M ,0) + v(M)f (0)

In a similar way to the homogeneous task, from the conditions for the minimum [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] we can calculate

f (0) = -[B.sup.-1](C(M)[omega](M,0)v'(M)) (35)

After defining H(t) from (34) and f(0) from (35), f(t) in the relationship (32) can be written down in the following form

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (36)

The solution of the non-homogeneous boundary task based on (36) will be as follows:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

The general solution of the problem will be obtained by the superposition of the solutions of the homogeneous and non-homogeneous tasks, thus

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (37)

If we denote

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (38)

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (39)

3.3 The Green's function

In the following part we investigate the properties of the core K(M, M', t) and compare them with the properties of the theoretical core expressed by the Green's function. In the relationship (38) there occur the matrices A and [B.sup.-1], whose properties we are going to investigate. The function K(M, M', t) will be called "the core". According to (26) it is obvious that the matrix B is symmetric. If Green's first theorem is applied to the integral expressing the element [d.sub.ij] of the matrix D, [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], it is obvious that the matrix D is also symmetric. The matrix B is positive definite. Let Q(M) = Y(M)BY'(M) be the quadratic form associated with the matrix B, where Y(M) is a non-zero row matrix with (n + 1) elements. If

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

we can write down

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

If we perform the matrix product, we obtain

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Since C(M') > 0, the form Q(M) is positive for all non-zero vectors Y(M); thus, the matrix B is positive definite.

The matrix D is negative definite. Let us consider the quadratic form

P(M) = X (M) D X' (M)

associated with the matrix D. In a similar way to the aforementioned theorem we arrive at

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Thus, the quadratic form is negative for all non-zero vectors X(M); consequently, the matrix D is negative definite.

The eigenvalues of the matrix A = [B.sup.-1] D are real and negative.

Let [lambda] be the eigenvalue of the matrix A and X the corresponding eigenvector, then the following is valid:

AX = [lambda]X

Let us consider the two possibilities of the scalar product expression (BAX, X)

(BAX, X) = [lambda](BX, X) (40)

(BAX, X) = (X, A*B*X) (41)

where A*, B* are the matrices adjoint to A and B. Since B is real and symmetric, B* = B; the matrix A is real, thus A* = A', where A' is the matrix transposed to A. From the expression (41) we obtain:

(BAX, X) = (X, A'BX) = (X, BAX) = (X, B[lambda]X) = [lambda](X, BX) (42)

From (40) and (42) it follows that if X exists, then [lambda] = [bar.[lambda]]; thus, the eigenvalues are real. Let us consider the scalar product (BAX, X) = [lambda](BX, X). Since BA = D, we have (BAX, X) = (DX, X), whence (DX, X) = [lambda](BX, X); since (BX, X) > 0 and (DX, X) < 0, it follows that [lambda] < 0. The core K(M, M', t) minimises the functional

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

1. The core K(M,M', t) satisfies the homogeneous boundary conditions. For the Dirichlet condition v (P) = 0 for P [member of] [GAMMA], K (P, M', t) = 0.

2. The integral [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] is the mean-square approximation of the integral [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], where [[delta].sub.M](M') is the Dirac delta function, (Butkovskij, 1982).
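The spectral claim above — that the eigenvalues of A = [B.sup.-1]D are real and negative whenever B is symmetric positive definite and D symmetric negative definite — can be spot-checked numerically on random test matrices (an illustrative sketch, not part of the proof):

```python
import numpy as np

# Random B symmetric positive definite, D symmetric negative definite;
# the eigenvalues of A = B^{-1} D should come out real and negative.
rng = np.random.default_rng(0)
n = 5
R = rng.standard_normal((n, n))
B = R @ R.T + n * np.eye(n)        # symmetric positive definite
S = rng.standard_normal((n, n))
D = -(S @ S.T + n * np.eye(n))     # symmetric negative definite
w = np.linalg.eigvals(np.linalg.solve(B, D))
print(w)                           # real parts negative, imaginary parts ~ 0
```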

3.4 Approximation of the [phi](M) function and the Dirac function

Let [phi](M) be a function which we intend to approximate in the region [OMEGA] by the sum [phi](M) = [a.sub.0][v.sub.0] (M) + [a.sub.1][v.sub.1] (M) + ... + [a.sub.n][v.sub.n] (M), where the coordinate functions [v.sub.i](M) are given and [a.sub.i], (i = 0,1, ..., n), are unknown constants which we define so that the functional [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] dM is minimal. From the (n + 1) conditions for the minimum of I, [[partial derivative].sub.ai]I = 0, we obtain [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], where the matrix B is defined in (26) and a = [[a.sub.i]], (i = 0,1..., n), is a row matrix. Then [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

3. If we consider the Dirac delta function [[delta].sub.M] (M'), the integral [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] .

We can see that the integral [I.sub.1] is the mean-square approximation of the integral [I.sub.2], and thus K (M, M', 0) is an approximation of [[delta].sub.M](M').

From the theorems 5, 6 and 7 it follows that K (M, M', t) is the approximate solution of the system

div{L(M)grad F (M, M', t)} = C(M) [[partial derivative].sub.t] F (M, M', t)

F(P,M', t) = 0; P [member of] [GAMMA] (43)

F (M, M', 0) = [[delta].sub.M] (M'), the exact solution of which can be expressed by the Green's function. The core satisfies

C(M)K (M, M', t) = C(M')K (M', M, t) (44)

The symmetry of the core with respect to the points M, M' [member of] [OMEGA] can be proved if we show that the product [e.sup.At] [B.sup.-1] is symmetric, as

C(M) K(M,M', t) = C(M) C(M') v(M) [e.sup.At] [B.sup.-1] v' (M')

C(M') K(M',M, t) = C(M') C(M) v(M') [e.sup.At] [B.sup.-1] v' (M)

The products [B.sup.-1]D[B.sup.-1] and D[B.sup.-1]D are symmetric. If we expand the matrix [e.sup.At], we obtain [e.sup.At] = I + At + [A.sup.2] [t.sup.2]/2! + .... If we substitute the matrix [B.sup.-1]D for the matrix A and multiply [e.sup.At] by [B.sup.-1] on the right, we obtain [e.sup.At] [B.sup.-1] = [B.sup.-1] + [B.sup.-1] D[B.sup.-1]t + [B.sup.-1] D [B.sup.-1] D[B.sup.-1][t.sup.2]/2! + ..., from which it follows that [e.sup.At] [B.sup.-1] is a symmetric matrix, (Jadlovska et al., 2011).
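The series argument above can be checked numerically; the sketch below uses a plain truncated Taylor series for [e.sup.At] (adequate for a small matrix and a short time) and hypothetical 2 x 2 matrices B, D:

```python
import numpy as np

# Verify numerically that e^{At} B^{-1} is symmetric when A = B^{-1} D
# with B, D symmetric: every series term B^{-1}(D B^{-1})^k t^k/k! is
# symmetric, so the sum is too.
def expm_series(M, terms=30):
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

B = np.array([[2.0, 0.5], [0.5, 1.0]])     # symmetric positive definite
D = np.array([[-3.0, -1.0], [-1.0, -2.0]]) # symmetric negative definite
Binv = np.linalg.inv(B)
E = expm_series(Binv @ D * 0.2) @ Binv     # e^{At} B^{-1} at t = 0.2
print(np.allclose(E, E.T))                 # True
```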

3.5 Approximation of the Green's function by the least squares method

In the following part of the paper we are going to show that the solution F (M, t) of the equation (13) defined by the least squares method is an approximation of the exact theoretical solution expressed by the Green's function. The solution of (13) under the conditions (14), (15) and (16), if it exists, is unique. It can easily be shown that if G(M, M', t) is the Green's function, which is considered to be the solution to the system

div{L(M)grad G(M,M', t)} = C(M) [[partial derivative].sub.t] G(M,M', t); M,M' [member of] [OMEGA]

G(P, M', t) = 0; P [member of] [GAMMA]

G(M, M', 0) = [[delta].sub.M'] (M) = [[delta].sub.M] (M')

then the theoretical solution of (13) under the conditions (14), (15) and (16) has the following form

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (45)

where [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] is the derivative in the direction of the outward normal to [GAMMA] at the point P. The theoretical solution of the homogeneous boundary task, for which g (P, t) = 0, will be obtained from (45): [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. Let [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. We can easily find out that [phi](M, t) is also a solution of the homogeneous boundary task. The solution is unique, thus we must have

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (46)

The identity (46) is valid for an arbitrary function [F.sub.0] (M'), thus

C(M') G(M', M, t) = C(M) G(M, M', t) (47)

If the specific heat C(M) is constant, then the theoretical core G(M, M', t) is symmetric with respect to the points M, M' [member of] [OMEGA].

Using (47), we can write the theoretical solution (45) simply as

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (48)

3.6 Comparison of the exact and approximate solutions

We compare the exact and approximate solutions of (13) for the Dirichlet boundary task in one-dimensional space. The solution of the system

[[partial derivative].sub.x] {L(x) [[partial derivative].sub.x]F(x, t)} = C(x) [[partial derivative].sub.t]F(x, t)

F (0, t) = [g.sub.1](t); F (l, t) = [g.sub.2](t); t [member of] (0, [t.sub.f]), 0 < [t.sub.f] < [infinity]

F(x,0) = [F.sub.0] (x); x [member of] (0, l) (49)

defined by the least squares method (39) is as follows:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (50)

The second integral in (50), expressed explicitly after integration by parts with respect to the spatial variable, is as follows:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (51)

The theoretical solution for the Dirichlet boundary task in one-dimensional space, from the relationship (48), is as follows:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (52)

where G(x, y, t) is the theoretical solution of the system (53)

[[partial derivative].sub.x] {L(x) [[partial derivative].sub.x]G(x, y, t)} = C(x) [[partial derivative].sub.t] G(x, y, t)

G(0, y, t) = G(l, y, t) = 0

G(x, y, 0) = [[delta].sub.x] (y) = [[delta].sub.y] (x) (53)

If we multiply the first equation of the system (53) successively by x/l and (x - l)/l and integrate from 0 to l, then for the expression between the braces in (52) we obtain:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (54)

Using the relationship (54) and taking into consideration the orientation of the normals, the theoretical solution (52) can be written in the following form

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (55)

The theoretical solution of (49) expressed by the relationship (55) has terms comparable with those of the solution defined by the least squares method (51), where there are two extra terms

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

In the 7th theorem we have shown that the integral

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

is the mean-square approximation of [omega](x, t). Thus, the core K(x, y, t) can be considered to be an approximation of the Green's function G(x, y, t). If we choose C(x) = c, L(x) = ax + b and the coordinate functions [v.sub.i] (x) = [x.sup.i+1] (x - l), then the algorithm of the solution

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

of the homogeneous task consists of the following stages:

1. Calculation of the matrices B and [B.sup.-1]

Elements [b.sub.ij] of the matrix B are calculated according to the formula

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

N is the number of the co-ordinate functions.

The inverse matrix [B.sup.-1], calculated in single precision, is not completely symmetric; but if we perform the calculations in double precision, the inverse matrix [B.sup.-1] will also be symmetric.

2. Calculation of the matrices D and A = [B.sup.-1]D

Elements [d.sub.ij] of the matrix D are calculated according to the formula

3. We calculate the integral [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

4. We determine the eigenvalues and eigenvectors of the matrix A.

5. We select t and calculate the matrix [e.sup.At].

6. We multiply [e.sup.At] by the matrix [B.sup.-1].

7. The matrix [e.sup.At] [B.sup.-1] is multiplied by the matrix [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

8. We select x and evaluate the function F(x, t), thus obtaining:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

It is obvious that for a given time t and a constant number N of coordinate functions we can determine the solution F(x, t) for various values of x. If we change the variable t without changing the number of coordinate functions, the calculation must be restarted from the 5th stage.

If, for constant t and N, we change the initial conditions, it is enough to perform the calculation at stages 3, 7 and 8, keeping the other results. In the algorithm described above, the calculation of the matrix [e.sup.At] is the most demanding task.
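The eight stages can be sketched end to end for the model case c = 1, L(x) = 1 (a = 0, b = 1), l = 1, so that the task reduces to the heat equation [F.sub.t] = [F.sub.xx] with zero Dirichlet conditions, whose exact solution for [F.sub.0](x) = sin([pi]x) is sin([pi]x) exp(-[[pi].sup.2]t). Gauss-Legendre quadrature stands in for the exact evaluation of the integrals; the specific N, t and x below are illustrative choices.

```python
import numpy as np

# Coordinate functions v_i(x) = x^{i+1}(x - l), vanishing at x = 0, l
N, l = 4, 1.0
xg, wg = np.polynomial.legendre.leggauss(20)
xq = 0.5 * l * (xg + 1.0)           # quadrature nodes mapped to (0, l)
wq = 0.5 * l * wg

def v(i, x):   return x**(i + 1) * (x - l)
def ddv(i, x): return (i + 2) * (i + 1) * x**i - (i + 1) * i * l * x**(i - 1)

# Stages 1-2: matrices B, D (eqs. (26)-(27) with C = 1, L = 1) and A
Bm = np.array([[np.sum(wq * v(i, xq) * v(j, xq)) for j in range(N)]
               for i in range(N)])
Dm = np.array([[np.sum(wq * v(i, xq) * ddv(j, xq)) for j in range(N)]
               for i in range(N)])
A = np.linalg.solve(Bm, Dm)

# Stage 3: the integrals (C F0 v_i) and f(0) = B^{-1}(...), eq. (29)
F0 = np.sin(np.pi * xq)
f0 = np.linalg.solve(Bm, np.array([np.sum(wq * F0 * v(i, xq))
                                   for i in range(N)]))

# Stages 4-6: eigen-decomposition of A and e^{At}
t = 0.05
w, V = np.linalg.eig(A)
eAt = (V @ np.diag(np.exp(w * t)) @ np.linalg.inv(V)).real

# Stages 7-8: F(x, t) = v(x) e^{At} f(0) at a chosen x
x = 0.5
F = np.array([v(i, x) for i in range(N)]) @ eAt @ f0
print(F, np.sin(np.pi * x) * np.exp(-np.pi**2 * t))   # close agreement
```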

Similarly, we can write out the algorithm of the solution of the non-homogeneous task, in which the most difficult part is the calculation of the integral

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

where, regarding the function g (t), we have to apply approximate methods.

3.7 Approximation of the solution to a differential system with constant coefficients

For the requirements of process control design, the partial differential equations (13) to (16) can be transformed by the method of lines (i.e. through discretization of the variables [x.sub.1], [x.sub.2]) into a system of differential equations of the form:

[phi]'(t) = A[phi](t) + f (t), [phi](0) = [[phi].sub.0] (57)

where

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

A is a square matrix of order n whose elements are independent of the time t. The functions [f.sub.i] (t) are given functions whose values are known for different values of t. Furthermore, we assume that the initial values of the functions [[phi].sub.1] (t), [[phi].sub.2] (t), ..., [[phi].sub.n](t) are known. The aim is to determine the solution [[phi].sub.1](t), [[phi].sub.2](t), ..., [[phi].sub.n] (t) of the system of differential equations (57) by a numerical method. We know that the exact solution to the problem formulated with the initial values is defined by a matrix representation:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (58)

The exact solution to the problem (57) can be used in a step-by-step method which allows the calculation of the functions [[phi].sub.i] at time (t + [mu]) from the values of the functions [[phi].sub.i] (t), where [mu] is the step. Thus, the exact solution will be in the form:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (59)

For the use of the relation (59) in the network domain it is necessary to develop numerical algorithms that allow the calculation of the matrix [e.sup.A[mu]] as well as of the matrix integral:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (60)

To obtain an approximation of the defined integral (60) with sufficient accuracy by the application of a numerical method, we can use an algorithm whose calculation includes the exact values of the matrix [e.sup.A([mu]-s)], while we interpolate the matrix f (t + s) with a polynomial in s. In order to implement the interpolation, any of the numerical methods can be chosen, for example the Newton-Cotes formulas (Legras, 1978); (Hrubina & Jadlovska, 2002); (Hrubina, 2007).
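One step of the scheme (59)-(60) can be sketched as follows; the 2 x 2 test matrix A and the forcing are hypothetical, [e.sup.A[tau]] is taken from an eigen-decomposition, and for brevity f is evaluated directly at Gauss nodes instead of being interpolated by a Newton-Cotes polynomial as in the text:

```python
import numpy as np

# phi(t + mu) = e^{A mu} phi(t) + int_0^mu e^{A(mu - s)} f(t + s) ds
A = np.array([[-2.0, 1.0], [0.5, -3.0]])    # hypothetical test matrix
w, V = np.linalg.eig(A)
Vinv = np.linalg.inv(V)

def expA(tau):
    return (V @ np.diag(np.exp(w * tau)) @ Vinv).real

def step(phi, t, mu, f, nodes=8):
    xg, wg = np.polynomial.legendre.leggauss(nodes)
    s = 0.5 * mu * (xg + 1.0)               # nodes mapped to (0, mu)
    ws = 0.5 * mu * wg
    integral = sum(wk * expA(mu - sk) @ f(t + sk) for wk, sk in zip(ws, s))
    return expA(mu) @ phi + integral

# Check against the closed form for constant forcing f(t) = c, phi(0) = 0:
# phi(mu) = A^{-1}(e^{A mu} - I) c
c = np.array([1.0, 2.0])
mu = 0.3
num = step(np.zeros(2), 0.0, mu, lambda t: c)
ref = np.linalg.solve(A, (expA(mu) - np.eye(2)) @ c)
print(np.allclose(num, ref))                # True
```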

4. Application of the least squares method to the solution of the problem of distributed parameter process control

Let us consider the mathematical model of the heat transfer process (the heating of an isolated metal rod, a distributed parameter system, on the interval <0, L>) described by a partial differential equation of the parabolic type:

$$Y_t(x,t) - a^2 Y_{xx}(x,t) = U(x,t)$$

Boundary conditions:

$$Y(0,t) = g_1(t), \qquad Y(L,t) = g_2(t)$$

Initial condition:

$$Y(x,0) = Y_0(x), \qquad 0 \le x \le L,\quad t \ge 0,\quad a \ne 0$$

Standardized function:

$$R(x,t) = U(x,t) + Y_0(x)\,\delta(t) - a^2 \delta'(x)\,g_1(t) + a^2 \delta'(L-x)\,g_2(t)$$

The Green's function:

$$G(x,\xi,t) = \frac{2}{L}\sum_{n=1}^{\infty} \sin\frac{n\pi x}{L}\,\sin\frac{n\pi \xi}{L}\,\exp\!\left[-\left(\frac{n\pi a}{L}\right)^2 t\right]$$

Transfer function:

$$S(x,\xi,s) = \frac{2}{L}\sum_{n=1}^{\infty} \frac{\sin\dfrac{n\pi x}{L}\,\sin\dfrac{n\pi \xi}{L}}{s + \left(\dfrac{n\pi a}{L}\right)^2}$$

where "s" is the variable of the Laplace transformation.

The solution to the partial differential equation with boundary and initial conditions leads to the expression of the controlled process output variable, which is the convolution product of the standardized function and the Green's function:

$$Y(x,t) = \int_0^t \int_0^L G(x,\xi,t-\tau)\, R(\xi,\tau)\, d\xi\, d\tau$$

The Green's function contains components of the heating process dynamics depending on time "t" and the spatial variable "x". These components are the integral functions of a partial differential equation describing the controlled process.
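A minimal numerical sketch of the Green's function given above can be obtained by truncating the series; the truncation order N and the parameter values below are assumptions, and for t > 0 the exponential factor makes the truncation error negligible:

```python
import math

def green(x, xi, t, a=1.0, L=1.0, N=200):
    """Truncated series for G(x, xi, t):
    (2/L) * sum_{n=1..N} sin(n pi x/L) sin(n pi xi/L) exp(-(n pi a/L)^2 t).
    a, L and the truncation order N are illustrative assumptions."""
    total = 0.0
    for n in range(1, N + 1):
        total += (math.sin(n * math.pi * x / L)
                  * math.sin(n * math.pi * xi / L)
                  * math.exp(-((n * math.pi * a / L) ** 2) * t))
    return 2.0 / L * total
```

The symmetry $G(x,\xi,t) = G(\xi,x,t)$, the vanishing at the boundaries $x = 0$ and $x = L$, and the monotone decay in $t$ all follow directly from the series and can be checked numerically with this sketch.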

5. Conclusion

The contribution of this chapter lies in the presentation of the solution to the problem of heat transfer process control, whose mathematical model is expressed by a bi-dimensional partial differential equation with boundary and initial conditions. To determine the approximate solution to the homogeneous and non-homogeneous control problems, the least squares method has been used. It has been proved that the solution of the mathematical model, i.e. of a partial differential equation of parabolic type, determined by the least squares method, is an approximation of the exact theoretical solution, which is expressed by the Green's function. The comparison of the exact and approximate solutions of the mathematical model for the Dirichlet boundary value problem is shown in one-dimensional space. The chapter presents the algorithm for solving a homogeneous problem as well as the possibility of designing the algorithm for solving a non-homogeneous problem.

DOI:10.2507/daaam.scibook.2012.15

6. Acknowledgement

This work has been supported by the Scientific Grant Agency of Slovak Republic under project Vega No.1/0286/11 Dynamic Hybrid Architecture of the Multiagent Network Control System.

7. References

Bellman, R. (1967). Introduction to the Mathematical Theory of Control Processes I. Academic Press

Butkovskij, A. G. (1975). Methods of Control of Systems with Distributed Parameters, Nauka, Moscow

Butkovskij, A. G. (1982). Green's Functions and Transfer Functions Handbook. Ellis Horwood Limited, New York

Courant, R. (1943). Variational Methods for the Solution of Problems of Equilibrium and Vibrations. Bull. Amer. Math. Soc., Volume 49, Number 1, pp. 1-23

Easton, D. (1976). Nonlinear least squares - the Levenberg algorithm revisited. J. Aust. Math. Soc. B, 19, pp. 348-357

Hrennikoff, A. (1941). Solution of Problems of Elasticity by the Framework Method. ASME, J. Appl. Mech., 8

Hrubina, K. (1992). Solving Optimal Control Problems for Systems with Distributed Parameters by Means of Iterative Algorithms of Algebraic Methods. Cybernetics and Informatics. Slovak Academy of Sciences, Bratislava

Hrubina, K. and Jadlovska, A. (2002). Optimal Control and Approximation of Variational Inequalities. Kybernetes: The International Journal of Systems and Cybernetics, Vol. 9/10, Emerald, England, pp. 1401-1408, ISSN 0368-492X

Hrubina, K. (2007). Numerical optimal control problems for systems with distributed parameters by algorithms of algebraic methods. Manufacturing Engineering, No. 2, pp. 81-84, ISSN 1335-7972, Presov

Hulko, G., Antoniova, M., Belavy, C., Belansky, J., Szuda, J., & Vegh, P. (1998). Modeling, Control and Design of Distributed Parameter Systems. Publishing House STU, Bratislava, ISBN 80-227-1083-0

Jadlovska, A. & Hrubina, K. (2011). Algorithms of Optimal Control Methods for Solving Game Theory Problems. Kybernetes: The International Journal of Cybernetics, Systems and Management Sciences, Vol. 40, No. 1/2, 2011, pp. 290-299, Emerald Group Publishing Limited, ISSN 0368-492X

Jadlovska, A., Katalinic, B., Hrubina, K., Macurova, A., & Wessely, E. (2011). Optimal Control of Nonlinear Systems with Constraints. DAAAM International Scientific Book, Katalinic, B. (Ed.), Vienna, Austria, pp. 265-282, ISBN 978-3-901509-84-1, ISSN 1726-9687

Lasiecka, I. & Triggiani, R. (2000). Control Theory for Partial Differential Equations: Continuous and Approximation Theories, Cambridge University Press, ISBN 0-521-43408-4

Legras, J. (1971). Methodes et techniques de l'analyse numerique, Dunod, Paris

Lions, J. L. (1968). Controle optimal de systemes gouvernes par des equations aux derivees partielles, Dunod, Gauthier-Villars, Paris

Pontryagin, L. S., Boltyanskij, V. G., Gamkrelidze, R. V. & Mishchenko, E. F. (1983). Mathematical Theory of Optimal Processes, Nauka, Moscow

Tripathi, S. M. (2008). Modern Control Systems (An Introduction), Infinity Science Press LLC, Hingham, Massachusetts, New Delhi, ISBN 978-1-934015-21-6

Authors' data: Assoc. Prof. PhD. Jadlovska, A[nna] *, Univ. Prof. Dipl.-Ing. Dr.h.c.mult. Dr.techn. Katalinic, B[ranko] **, Assoc. Prof. PhD. Hrubina, K[amil] ***; Assoc. Prof. PhD. Macurova, A[nna]****; Assoc. Prof. CSc. Wessely, E[mil] ****, * Technical University of Kosice, Letna 1, Kosice, Slovakia, ** University of Technology, Karlsplatz 13, 1040, Vienna, Austria, *** Informatech Ltd., Kosice, Slovakia, **** University of Security Management in Kosice, Slovakia, anna.jadlovska@tuke.sk, katalinic@mail.ift.tuwien.ac.at, kamil.hrubina@tuke.sk, anna.macurova@tuke.sk, emil.wessely@vsbm.sk