# Eigenfunction expansions for a Sturm-Liouville problem on time scales.

Abstract

In this paper we investigate a Sturm-Liouville eigenvalue problem on time scales. Existence of the eigenvalues and eigenfunctions is proved. Mean square convergent and uniformly convergent expansions in the eigenfunctions are established.

AMS subject classification: 34L10.

Keywords: Time scale, delta and nabla derivatives and integrals, Green's function, completely continuous operator, eigenfunction expansion.

1. Introduction

Let $\mathbb{T}$ be a time scale and $a, b \in \mathbb{T}$ be fixed points with $a < b$ such that $(a, b)$ is not empty. Throughout, all intervals are time scale intervals. For standard notions and notation connected to time scale calculus we refer to [4, 5].

In this study we deal with the simple Sturm-Liouville eigenvalue problem

$$-y^{\Delta\nabla}(t) = \lambda y(t), \quad t \in (a, b), \qquad (1.1)$$

$$y(a) = y(b) = 0. \qquad (1.2)$$

Some aspects of Sturm-Liouville eigenvalue problems on time scales have already been considered in the literature (see [1, 6]). In the present paper we are concerned with eigenfunction expansions (generalized Fourier analysis) for problem (1.1), (1.2). In our discussion an important role is played by certain new integration by parts formulas on time scales, established recently by the author [7, 9]. These formulas contain delta and nabla derivatives and integrals at the same time; they are elaborated in Section 2. Next, in Section 3 it is shown, by using the Hilbert-Schmidt theorem on symmetric completely continuous operators, that the eigenvalue problem (1.1), (1.2) has a system of eigenfunctions that forms an orthonormal basis for an appropriate Hilbert space. This yields mean square convergent (that is, convergent in an $L^2$-metric) expansions in eigenfunctions. Finally, in Section 4 uniformly convergent expansions in eigenfunctions are obtained when the expanded functions satisfy some smoothness conditions.

2. Integration by Parts Formulas

The aim of this section is to present two integration by parts formulas on time scales, given below in Theorem 2.4. These formulas will be employed in the subsequent sections. They were recently established by the author in [7, 9].

First we formulate a theorem which gives a relationship between the delta and nabla derivatives. For its proof see [3, Theorem 2.5 and Theorem 2.6]. The derivatives at the end points of intervals are understood to be one-sided derivatives.

Theorem 2.1.

(i) If $f : [a, b] \to \mathbb{R}$ is continuous on $[a, b]$ and $\Delta$-differentiable on $[a, b)$ with continuous $f^{\Delta}$, then $f$ is $\nabla$-differentiable on $(a, b]$ and

$$f^{\nabla}(t) = f^{\Delta}(\rho(t)) \quad \text{for all } t \in (a, b].$$

(ii) If $f : [a, b] \to \mathbb{R}$ is continuous on $[a, b]$ and $\nabla$-differentiable on $(a, b]$ with continuous $f^{\nabla}$, then $f$ is $\Delta$-differentiable on $[a, b)$ and

$$f^{\Delta}(t) = f^{\nabla}(\sigma(t)) \quad \text{for all } t \in [a, b).$$

The next theorem gives a relationship between the delta and nabla integrals.

Theorem 2.2. Let $f : [a, b] \to \mathbb{R}$ be a continuous function. Then

(i) $\int_a^b f(t)\,\Delta t = \int_a^b f(\rho(t))\,\nabla t$,

(ii) $\int_a^b f(t)\,\nabla t = \int_a^b f(\sigma(t))\,\Delta t$.

Proof. We only prove (i) as (ii) can be proved similarly. Take an arbitrary partition $P$ of $[a, b]$:

$$P = \{t_0, t_1, \ldots, t_n\} \subset [a, b], \quad a = t_0 < t_1 < \cdots < t_n = b.$$

Let us set, for each $i \in \{1, \ldots, n\}$,

$$M_i = \sup\{f(t) : t \in [t_{i-1}, t_i)\}, \quad M_i' = \sup\{f(\rho(t)) : t \in (t_{i-1}, t_i]\},$$

and form the upper Darboux $\Delta$-sum $U(f, P)$ and the upper Darboux $\nabla$-sum $U'(f^{\rho}, P)$ by

$$U(f, P) = \sum_{i=1}^{n} M_i (t_i - t_{i-1}), \quad U'(f^{\rho}, P) = \sum_{i=1}^{n} M_i' (t_i - t_{i-1}),$$

respectively, where $f^{\rho}$ denotes the function $f^{\rho}(t) = f(\rho(t))$. Then, since $f$ is continuous and $f^{\rho}$ is left-dense continuous, we get that $f$ is $\Delta$-integrable over $[a, b)$ and $f^{\rho}$ is $\nabla$-integrable over $(a, b]$, and that (see [8])

$$\int_a^b f(t)\,\Delta t = \inf_P U(f, P), \qquad \int_a^b f(\rho(t))\,\nabla t = \inf_P U'(f^{\rho}, P). \qquad (2.1)$$

On the other hand, it is not difficult to see from the continuity of $f$ on $[a, b]$ that $M_i = M_i'$ for any $i \in \{1, \ldots, n\}$, and hence $U(f, P) = U'(f^{\rho}, P)$ for all partitions $P$ of $[a, b]$. Therefore from (2.1) we get statement (i) of the theorem. □

Remark 2.3. Another proof of Theorem 2.2 can be given by using Theorem 2.1. Indeed, let $F : [a, b] \to \mathbb{R}$ be a $\Delta$-antiderivative for $f$ on $[a, b]$, that is, $F$ is continuous on $[a, b]$, $\Delta$-differentiable on $[a, b)$, and $F^{\Delta}(t) = f(t)$ for all $t \in [a, b)$. Then we have, using Theorem 2.1(i),

$$F^{\nabla}(t) = F^{\Delta}(\rho(t)) = f(\rho(t)) \quad \text{for } t \in (a, b],$$

so that $F$ is at the same time a $\nabla$-antiderivative for $f^{\rho}$ on $[a, b]$. Therefore

$$\int_a^b f(\rho(t))\,\nabla t = F(b) - F(a) = \int_a^b f(t)\,\Delta t.$$

Statement (ii) of Theorem 2.2 can be proved in a similar manner by using Theorem 2.1(ii).
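Theorem 2.2 is easy to check numerically in the discrete case. The following sketch (not part of the paper; the endpoints and test function are chosen for illustration) verifies both substitution rules on $\mathbb{T} = \mathbb{Z}$, where $\sigma(t) = t + 1$, $\rho(t) = t - 1$, $\int_a^b f(t)\,\Delta t = \sum_{t=a}^{b-1} f(t)$, and $\int_a^b f(t)\,\nabla t = \sum_{t=a+1}^{b} f(t)$:

```python
# Sanity check of Theorem 2.2 on the time scale T = Z.
# Here sigma(t) = t + 1 and rho(t) = t - 1; the Delta-integral over
# [a, b) is a sum over t = a, ..., b-1, and the nabla-integral over
# (a, b] is a sum over t = a+1, ..., b.  (Illustrative choices below.)
a, b = 0, 10
f = lambda t: t**2 + 3*t + 1  # any function on [a, b] works

delta_int = sum(f(t) for t in range(a, b))                   # ∫_a^b f(t) ∆t
nabla_int_rho = sum(f(t - 1) for t in range(a + 1, b + 1))   # ∫_a^b f(ρ(t)) ∇t
assert delta_int == nabla_int_rho                            # Theorem 2.2 (i)

nabla_int = sum(f(t) for t in range(a + 1, b + 1))           # ∫_a^b f(t) ∇t
delta_int_sigma = sum(f(t + 1) for t in range(a, b))         # ∫_a^b f(σ(t)) ∆t
assert nabla_int == delta_int_sigma                          # Theorem 2.2 (ii)
```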

Now let us formulate and prove the main result of this section.

Theorem 2.4. Let $f$ and $g$ be continuous functions on $[a, b]$. Suppose that $f$ is $\Delta$-differentiable on $[a, b)$ with continuous and bounded $f^{\Delta}$, and $g$ is $\nabla$-differentiable on $(a, b]$ with continuous and bounded $g^{\nabla}$. Then

$$\int_a^b f^{\Delta}(t) g(t)\,\Delta t = f(t) g(t)\Big|_a^b - \int_a^b f(t) g^{\nabla}(t)\,\nabla t, \qquad (2.2)$$

$$\int_a^b f^{\nabla}(t) g(t)\,\nabla t = f(t) g(t)\Big|_a^b - \int_a^b f(t) g^{\Delta}(t)\,\Delta t. \qquad (2.3)$$

Proof. It is enough to prove (2.2), as (2.3) can be proved similarly. To prove (2.2) note that by the product rule for the $\Delta$-derivative we have

$$(fg)^{\Delta}(t) = f^{\Delta}(t) g(t) + f(\sigma(t)) g^{\Delta}(t).$$

Further, $\Delta$-integrating both sides of the last equation we get

$$f(t) g(t)\Big|_a^b = \int_a^b f^{\Delta}(t) g(t)\,\Delta t + \int_a^b f(\sigma(t)) g^{\Delta}(t)\,\Delta t. \qquad (2.4)$$

On the other hand, using Theorem 2.1(ii) and Theorem 2.2(ii) we have

$$\int_a^b f(\sigma(t)) g^{\Delta}(t)\,\Delta t = \int_a^b f(\sigma(t)) g^{\nabla}(\sigma(t))\,\Delta t = \int_a^b f(t) g^{\nabla}(t)\,\nabla t. \qquad (2.5)$$

Substituting (2.5) into the right-hand side of (2.4) we arrive at (2.2). []
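On $\mathbb{T} = \mathbb{Z}$ formula (2.2) reduces to the classical Abel summation by parts, which can be checked numerically. A quick illustrative sketch (the choices of $a$, $b$, $f$, $g$ are mine, not from the paper):

```python
# Check of formula (2.2) on T = Z: with f^∆(t) = f(t+1) - f(t) and
# g^∇(t) = g(t) - g(t-1), formula (2.2) is Abel's summation by parts.
a, b = 0, 10
f = lambda t: 2*t + 5
g = lambda t: t**3 - t

lhs = sum((f(t + 1) - f(t)) * g(t) for t in range(a, b))  # ∫_a^b f^∆(t) g(t) ∆t
boundary = f(b) * g(b) - f(a) * g(a)                      # f(t) g(t) |_a^b
rhs = boundary - sum(f(t) * (g(t) - g(t - 1))             # - ∫_a^b f(t) g^∇(t) ∇t
                     for t in range(a + 1, b + 1))
assert lhs == rhs
```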

3. Mean Square Convergent Expansions

Denote by $H$ the Hilbert space of all real $\nabla$-measurable functions $y : (a, b] \to \mathbb{R}$ such that $y(b) = 0$ in the case $b$ is left-scattered, and that

$$\int_a^b y^2(t)\,\nabla t < \infty,$$

with the inner product (scalar product)

$$\langle y, z\rangle = \int_a^b y(t) z(t)\,\nabla t$$

and the norm

$$\|y\| = \sqrt{\langle y, y\rangle} = \left\{\int_a^b y^2(t)\,\nabla t\right\}^{1/2}.$$

Next denote by $D$ the set of all functions $y \in H$ satisfying the following three conditions:

(i) $y$ is continuous on $(a, b]$, $y(b) = 0$, the limit $y(a) := \lim_{t \to a^{+}} y(t)$ exists, and $y(a) = 0$.

(ii) $y$ is continuously $\Delta$-differentiable on $(a, b)$, and there exist the (finite) limits $y^{\Delta}(a) := \lim_{t \to a^{+}} y^{\Delta}(t)$ and $y^{\Delta}(b) := \lim_{t \to b^{-}} y^{\Delta}(t)$.

(iii) $y^{\Delta}$ is $\nabla$-differentiable on $(a, b]$ and $y^{\Delta\nabla} \in H$.

Obviously $D$ is a linear subset dense in $H$. Now we define the operator $A : D \subset H \to H$ as follows. The domain of definition of $A$ is $D$, and we put

$$(Ay)(t) = -y^{\Delta\nabla}(t), \quad t \in (a, b],$$

for $y \in D$.

Definition 3.1. A complex number $\lambda$ is called an eigenvalue of problem (1.1), (1.2) if there exists a function $y \in D$, not identically zero, such that

$$-y^{\Delta\nabla}(t) = \lambda y(t), \quad t \in (a, b).$$

The function $y$ is called an eigenfunction of problem (1.1), (1.2) corresponding to the eigenvalue $\lambda$.

We see that the eigenvalue problem (1.1), (1.2) is equivalent to the equation

$$Ay = \lambda y, \quad y \in D, \quad y \neq 0. \qquad (3.1)$$

Theorem 3.2. We have

$$\langle Ay, z\rangle = \langle y, Az\rangle \quad \text{for all } y, z \in D, \qquad (3.2)$$

$$\langle Ay, y\rangle = \int_a^b [y^{\Delta}(t)]^2\,\Delta t \quad \text{for all } y \in D. \qquad (3.3)$$

Proof. Using the integration by parts formulas (2.2), (2.3) we have for all $y, z \in D$

$$\langle Ay, z\rangle = -\int_a^b y^{\Delta\nabla}(t) z(t)\,\nabla t = -y^{\Delta}(t) z(t)\Big|_a^b + \int_a^b y^{\Delta}(t) z^{\Delta}(t)\,\Delta t = \int_a^b y^{\Delta}(t) z^{\Delta}(t)\,\Delta t,$$

where we have used the boundary conditions $u(a) = u(b) = 0$ for functions $u \in D$; since the last expression is symmetric in $y$ and $z$, it equals $\langle y, Az\rangle$ as well, which proves (3.2).

Simultaneously we have also got

$$\langle Ay, y\rangle = -y^{\Delta}(t) y(t)\Big|_a^b + \int_a^b [y^{\Delta}(t)]^2\,\Delta t = \int_a^b [y^{\Delta}(t)]^2\,\Delta t.$$

The theorem is proved.

Relation (3.2) shows that the operator $A$ is symmetric (self-adjoint), while (3.3) shows that it is positive:

$$\langle Ay, y\rangle > 0 \quad \text{for all } y \in D, \; y \neq 0.$$

Therefore all eigenvalues of the operator $A$ are real and positive, and any two eigenfunctions corresponding to distinct eigenvalues are orthogonal. Besides, it can easily be seen that the eigenvalues of problem (1.1), (1.2) are simple, that is, to each eigenvalue there corresponds a single eigenfunction up to a constant factor (equation (1.1) cannot have two linearly independent solutions satisfying $y(a) = 0$).

Now we are going to prove the existence of eigenvalues for problem (1.1), (1.2). Note that

$$\ker A = \{y \in D : Ay = 0\}$$

consists only of the zero element. Indeed, if $y \in D$ and $Ay = 0$, then from (3.3) we have $y^{\Delta}(t) = 0$ for $t \in [a, b)$, and hence $y(t)$ is constant on $[a, b]$. Then using the condition $y(a) = 0$ (or $y(b) = 0$) we get that $y(t) \equiv 0$.

It follows that the inverse operator $A^{-1}$ exists. To present its explicit form we introduce the Green's function (see [2, 3, 9] and [4, Sec. 8.4])

$$G(t, s) = \frac{1}{b - a} \begin{cases} (t - a)(b - s), & a \le t \le s \le b, \\ (s - a)(b - t), & a \le s \le t \le b. \end{cases} \qquad (3.4)$$

Then

$$(A^{-1} u)(t) = \int_a^b G(t, s) u(s)\,\nabla s \quad \text{for any } u \in H. \qquad (3.5)$$

Equations (3.4) and (3.5) imply that $A^{-1}$ is a completely continuous (or compact) symmetric linear operator in the Hilbert space $H$.
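In the same discrete setting $\mathbb{T} = \mathbb{Z}$, $a = 0$, $b = N$, formula (3.5) says that the matrix with entries $G(t, s)$, $t, s \in \{1, \ldots, N-1\}$, inverts the second-difference matrix representing $A$. An illustrative numerical check, assuming the standard kernel $G(t, s) = (\min(t,s) - a)(b - \max(t,s))/(b - a)$ for this problem:

```python
# Check that G(t, s) = min(t, s) * (N - max(t, s)) / N  (a = 0, b = N)
# is the matrix inverse of the second-difference operator A on T = Z.
import numpy as np

N = 8
A = 2 * np.eye(N - 1) - np.eye(N - 1, k=1) - np.eye(N - 1, k=-1)

t = np.arange(1, N)  # interior points 1, ..., N-1
G = np.minimum.outer(t, t) * (N - np.maximum.outer(t, t)) / N
assert np.allclose(G, np.linalg.inv(A))   # G realizes A^{-1} as in (3.5)
```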

The eigenvalue problem (3.1) is equivalent (note that $\lambda = 0$ is not an eigenvalue of $A$) to the eigenvalue problem

$$Bu = \mu u, \quad u \in H, \quad u \neq 0,$$

where

$$B = A^{-1} \quad \text{and} \quad \mu = \frac{1}{\lambda}.$$

In other words, if $\lambda$ is an eigenvalue and $y \in D$ is a corresponding eigenfunction for $A$, then $\mu = \lambda^{-1}$ is an eigenvalue for $B$ with the same corresponding eigenfunction $y$; conversely, if $\mu \neq 0$ is an eigenvalue and $u \in H$ is a corresponding eigenfunction for $B$, then $u \in D$ and $\lambda = \mu^{-1}$ is an eigenvalue for $A$ with the same eigenfunction $u$.

Note that $\mu = 0$ cannot be an eigenvalue for $B$. In fact, if $Bu = 0$, then applying $A$ to both sides we get that $u = 0$.

Next we use the following well-known Hilbert-Schmidt theorem (see, for example, [10, Sec. 24.3]): For every completely continuous symmetric linear operator $B$ in a Hilbert space $H$ there exists an orthonormal system $\{\psi_k\}$ of eigenvectors corresponding to eigenvalues $\{\mu_k\}$ ($\mu_k \neq 0$) such that each element $f \in H$ can be written uniquely in the form

$$f = \sum_k c_k \psi_k + h,$$

where $h \in \ker B$, that is, $Bh = 0$. Moreover,

$$Bf = \sum_k \mu_k c_k \psi_k,$$

and if the system $\{\psi_k\}$ is infinite, then $\lim_{k \to \infty} \mu_k = 0$.

As a corollary of the Hilbert-Schmidt theorem we have: If $B$ is a completely continuous symmetric linear operator in a Hilbert space $H$ and if $\ker B = \{0\}$, then the eigenvectors of $B$ form an orthogonal basis of $H$.

Applying this corollary of the Hilbert-Schmidt theorem to the operator $B = A^{-1}$ and using the above described connection between the eigenvalues and eigenfunctions of $A$ and those of $B$, we obtain the following result.

Theorem 3.3. For the eigenvalue problem (1.1), (1.2) there exists an orthonormal system $\{\psi_k\}$ of eigenfunctions corresponding to eigenvalues $\{\lambda_k\}$. Each eigenvalue $\lambda_k$ is positive and simple. The system $\{\psi_k\}$ forms an orthonormal basis for the Hilbert space $H$. Therefore the number of the eigenvalues is equal to $N = \dim H$. Any function $f \in H$ can be expanded in the eigenfunctions $\psi_k$ in the form

$$f(t) = \sum_{k=1}^{N} c_k \psi_k(t), \qquad (3.6)$$

where $c_k$ are the Fourier coefficients of $f$ defined by

$$c_k = \int_a^b f(t) \psi_k(t)\,\nabla t. \qquad (3.7)$$

In the case $N = \infty$ the sum in (3.6) becomes an infinite series and it converges to the function $f$ in the metric of the space $H$, that is, in the mean square metric:

$$\lim_{n \to \infty} \int_a^b \left[f(t) - \sum_{k=1}^{n} c_k \psi_k(t)\right]^2 \nabla t = 0. \qquad (3.8)$$

Note that since

$$\int_a^b \left[f(t) - \sum_{k=1}^{n} c_k \psi_k(t)\right]^2 \nabla t = \int_a^b f^2(t)\,\nabla t - \sum_{k=1}^{n} c_k^2,$$

we get from (3.8) the Parseval equality

$$\int_a^b f^2(t)\,\nabla t = \sum_{k=1}^{N} c_k^2. \qquad (3.9)$$

Remark 3.4. Above, in the definition of the Hilbert space $H$, we required the condition $y(b) = 0$ for functions $y : (a, b] \to \mathbb{R}$ in $H$ in the case $b$ is left-scattered. This is needed to ensure that $D$ is dense in $H$. It is also needed for the validity of the mean square convergent expansion (3.6) for any function $f$ in $H$: in the case $b$ is left-scattered, (3.6) must hold at $t = b$ as a pointwise equality (according to (3.8)), and then from $\psi_k(b) = 0$ we necessarily get $f(b) = 0$. Note also that the condition $y(b) = 0$ for $H$ is necessary to guarantee the equality $H = D$ in the discrete case $\mathbb{T} = \mathbb{Z}$.

Remark 3.5. It is easy to see that the dimension of the space $H$ is finite if and only if the time scale interval $(a, b)$ consists of a finite number of points, in which case $\dim H$ is equal to the number of points in $(a, b)$.
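The conclusions of Theorem 3.3 can be observed concretely in the finite-dimensional discrete case. Below is an illustrative sketch (my own choices of $\mathbb{T} = \mathbb{Z}$, $a = 0$, $b = N$, and a random $f$; not from the paper) checking the expansion (3.6) with coefficients (3.7) and the Parseval equality (3.9):

```python
# Expansion (3.6) and Parseval (3.9) in the discrete case T = Z,
# a = 0, b = N: here H is R^{N-1} with <y, z> = sum_{t=1}^{N-1} y(t) z(t),
# and the normalized eigenvectors of A form an orthonormal basis.
import numpy as np

N = 8
A = 2 * np.eye(N - 1) - np.eye(N - 1, k=1) - np.eye(N - 1, k=-1)
lam, psi = np.linalg.eigh(A)     # columns of psi: orthonormal eigenfunctions

rng = np.random.default_rng(0)
f = rng.standard_normal(N - 1)   # an arbitrary element of H

c = psi.T @ f                    # Fourier coefficients, cf. (3.7)
assert np.allclose(psi @ c, f)                   # expansion (3.6) recovers f
assert np.isclose(np.sum(f**2), np.sum(c**2))    # Parseval equality (3.9)
```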

Remark 3.6. If we denote by $\psi(t, \lambda)$ the solution of equation (1.1) satisfying the initial conditions

$$\psi(a, \lambda) = 0, \quad \psi^{\Delta}(a, \lambda) = 1,$$

then the eigenvalues of problem (1.1), (1.2) coincide with the zeros of the function $\psi(b, \lambda)$ (the characteristic function of problem (1.1), (1.2)). So we have proved the existence of zeros of $\psi(b, \lambda)$ by proving the existence of eigenvalues of problem (1.1), (1.2). It is also possible to prove the existence of zeros of $\psi(b, \lambda)$ directly and to get in this way the existence of the eigenvalues.

4. Uniformly Convergent Expansions

In this section we prove the following result (we assume that $\dim H = \infty$, since in the case $\dim H < \infty$ the series becomes a finite sum).

Theorem 4.1. Let $f : [a, b] \to \mathbb{R}$ be a continuous function satisfying the boundary conditions $f(a) = f(b) = 0$ and possessing a $\Delta$-derivative $f^{\Delta}(t)$ everywhere on $[a, b)$ except at a finite number of points $t_1, t_2, \ldots, t_m$, the $\Delta$-derivative being continuous everywhere except at these points, at which $f^{\Delta}$ has finite limits from the left and right. Besides, assume that $f^{\Delta}$ is bounded on $[a, b) \setminus \{t_1, t_2, \ldots, t_m\}$. Then the series

$$\sum_{k=1}^{\infty} c_k \psi_k(t), \qquad (4.1)$$

where

$$c_k = \int_a^b f(t) \psi_k(t)\,\nabla t, \qquad (4.2)$$

converges uniformly on $[a, b]$ to the function $f$.

Proof. We employ a method applied in the case of the usual ($\mathbb{T} = \mathbb{R}$) Sturm-Liouville problem by Steklov [11]. First, for simplicity, we assume that the function $f$ is $\Delta$-differentiable everywhere on $[a, b)$ and that $f^{\Delta}$ is continuous and bounded on $[a, b)$.

Consider the functional

$$J(y) = \int_a^b [y^{\Delta}(t)]^2\,\Delta t,$$

so that $J(y) \ge 0$. Substituting in the functional $J(y)$

$$y(t) = f(t) - \sum_{k=1}^{n} c_k \psi_k(t),$$

where the $c_k$ are defined by (4.2), we obtain

$$J\left(f - \sum_{k=1}^{n} c_k \psi_k\right) = \int_a^b [f^{\Delta}(t)]^2\,\Delta t - 2 \sum_{k=1}^{n} c_k \int_a^b f^{\Delta}(t) \psi_k^{\Delta}(t)\,\Delta t + \sum_{k,l=1}^{n} c_k c_l \int_a^b \psi_k^{\Delta}(t) \psi_l^{\Delta}(t)\,\Delta t. \qquad (4.3)$$

Next, applying the integration by parts formula (2.2), we get

$$\int_a^b f^{\Delta}(t) \psi_k^{\Delta}(t)\,\Delta t = f(t) \psi_k^{\Delta}(t)\Big|_a^b - \int_a^b f(t) \psi_k^{\Delta\nabla}(t)\,\nabla t = \lambda_k \int_a^b f(t) \psi_k(t)\,\nabla t = \lambda_k c_k,$$

$$\int_a^b \psi_k^{\Delta}(t) \psi_l^{\Delta}(t)\,\Delta t = \psi_k(t) \psi_l^{\Delta}(t)\Big|_a^b - \int_a^b \psi_k(t) \psi_l^{\Delta\nabla}(t)\,\nabla t = \lambda_l \int_a^b \psi_k(t) \psi_l(t)\,\nabla t = \lambda_l \delta_{kl},$$

where $\delta_{kl}$ is the Kronecker symbol and where we have used the boundary conditions $f(a) = f(b) = 0$, $\psi_k(a) = \psi_k(b) = 0$, and the equation $-\psi_k^{\Delta\nabla}(t) = \lambda_k \psi_k(t)$. Therefore we have from (4.3)

$$J\left(f - \sum_{k=1}^{n} c_k \psi_k\right) = \int_a^b [f^{\Delta}(t)]^2\,\Delta t - \sum_{k=1}^{n} \lambda_k c_k^2.$$

Since the left-hand side is nonnegative, we get the inequality

$$\sum_{k=1}^{\infty} \lambda_k c_k^2 \le \int_a^b [f^{\Delta}(t)]^2\,\Delta t, \qquad (4.4)$$

analogous to Bessel's inequality, and the convergence of the series on the left follows. All the terms of this series are nonnegative, since $\lambda_k > 0$.

Note that the proof of (4.4) is entirely unchanged if we assume that the function $f$ satisfies only the conditions stated in the theorem. Indeed, when integrating by parts, it is sufficient to integrate over the intervals on which $f^{\Delta}$ is continuous and then add all these integrals (the integrated terms vanish by $f(a) = f(b) = 0$ and the fact that $f$, $\psi_k$, and $\psi_k^{\Delta}$ are continuous on $[a, b]$).
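In the finite-dimensional discrete case the Bessel-type inequality (4.4) in fact holds with equality, since the expansion terminates. An illustrative check on $\mathbb{T} = \mathbb{Z}$, $a = 0$, $b = N$ (the choices of $N$ and $f$ are mine):

```python
# Check of the Bessel-type inequality (4.4) on T = Z, a = 0, b = N.
# Here dim H < ∞, so (4.4) holds with equality:
#   sum_k λ_k c_k^2 = ∫_a^b [f^∆(t)]^2 ∆t = sum_{t=0}^{N-1} (f(t+1)-f(t))^2.
import numpy as np

N = 8
A = 2 * np.eye(N - 1) - np.eye(N - 1, k=1) - np.eye(N - 1, k=-1)
lam, psi = np.linalg.eigh(A)               # eigenvalues and eigenfunctions

rng = np.random.default_rng(1)
f = rng.standard_normal(N - 1)             # values f(1), ..., f(N-1)
c = psi.T @ f                              # Fourier coefficients (4.2)

f_ext = np.concatenate(([0.0], f, [0.0]))  # impose f(a) = f(b) = 0
dirichlet = np.sum(np.diff(f_ext) ** 2)    # ∫_a^b [f^∆(t)]^2 ∆t
assert np.isclose(np.sum(lam * c**2), dirichlet)
```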

We now show that the series

$$\sum_{k=1}^{\infty} |c_k \psi_k(t)| \qquad (4.5)$$

is uniformly convergent on the interval $[a, b]$. Obviously the uniform convergence of series (4.1) will follow from this.

Using the integral equation

$$\psi_k(t) = \lambda_k \int_a^b G(t, s) \psi_k(s)\,\nabla s,$$

which follows from $\psi_k = \lambda_k A^{-1} \psi_k$ by (3.5), we can rewrite (4.5) as

$$\sum_{k=1}^{\infty} \lambda_k |c_k g_k(t)|, \qquad (4.6)$$

where

$$g_k(t) = \int_a^b G(t, s) \psi_k(s)\,\nabla s$$

can be regarded as the Fourier coefficient of $G(t, s)$ considered as a function of $s$. By using inequality (4.4), we can write

$$\sum_{k=1}^{\infty} \lambda_k g_k^2(t) \le \int_a^b \left[G^{\Delta_s}(t, s)\right]^2 \Delta s, \qquad (4.7)$$

where $G^{\Delta_s}(t, s)$ denotes the delta derivative of $G(t, s)$ with respect to $s$. The function appearing under the integral sign is bounded (see (3.4)), and it follows from (4.7) that

$$\sum_{k=1}^{\infty} \lambda_k g_k^2(t) \le M,$$

where $M$ is a constant. Now, writing $\lambda_k = \sqrt{\lambda_k}\,\sqrt{\lambda_k}$, we apply the Cauchy-Schwarz inequality to a segment of series (4.6):

$$\sum_{k=n}^{m} \lambda_k |c_k g_k(t)| \le \left\{\sum_{k=n}^{m} \lambda_k c_k^2\right\}^{1/2} \left\{\sum_{k=n}^{m} \lambda_k g_k^2(t)\right\}^{1/2} \le \sqrt{M} \left\{\sum_{k=n}^{m} \lambda_k c_k^2\right\}^{1/2},$$

and this inequality, together with the convergence of the series with terms $\lambda_k c_k^2$ (see (4.4)), at once implies that series (4.6), and hence series (4.5), is uniformly convergent on the interval $[a, b]$.

Denote the sum of series (4.1) by $f_1(t)$:

$$f_1(t) = \sum_{k=1}^{\infty} c_k \psi_k(t). \qquad (4.8)$$

Since the series in (4.8) is uniformly convergent on $[a, b]$, we can multiply both sides of (4.8) by $\psi_l(t)$ and $\nabla$-integrate term by term to get

$$\int_a^b f_1(t) \psi_l(t)\,\nabla t = c_l.$$

Therefore the Fourier coefficients of $f_1$ and $f$ are the same. Then the Fourier coefficients of the difference $f_1 - f$ are all zero, and applying the Parseval equality (3.9) to the function $f_1 - f$ we get that $f_1 - f = 0$, so that the sum of series (4.1) is equal to $f(t)$. □

Remark 4.2. The proofs of Theorem 3.3 and Theorem 4.1 can easily be generalized to the case of the equation

$$-[p(t) y^{\Delta}(t)]^{\nabla} + q(t) y(t) = \lambda y(t),$$

where $p$ is continuously $\nabla$-differentiable with $p(t) > 0$, and $q$ is continuous with $q(t) \ge 0$.

Received February 4, 2007; Accepted April 1, 2007

References

[1] Ravi P. Agarwal, Martin Bohner, and Patricia J.Y. Wong, Sturm-Liouville eigenvalue problems on time scales, Appl. Math. Comput., 99(2-3):153-166, 1999.

[2] Douglas R. Anderson, Gusein Sh. Guseinov, and Joan Hoffacker, Higher-order self-adjoint boundary-value problems on time scales, J. Comput. Appl. Math., 194(2):309-342, 2006.

[3] F. Merdivenci Atici and Gusein Sh. Guseinov, On Green's functions and positive solutions for boundary value problems on time scales, J. Comput. Appl. Math., 141(1-2):75-99, 2002.

[4] Martin Bohner and Allan Peterson, Dynamic Equations on Time Scales: An Introduction with Applications, Birkhäuser, Boston, MA, 2001.

[5] Martin Bohner and Allan Peterson, Advances in Dynamic Equations on Time Scales, Birkhäuser, Boston, MA, 2003.

[6] Chuan Jen Chyan, John M. Davis, Johnny Henderson, and William K.C. Yin, Eigenvalue comparisons for differential equations on a measure chain, Electron. J. Differential Equations, No. 35, 7 pp. (electronic), 1998.

[7] Metin Gurses, Gusein Sh. Guseinov, and Burcu Silindir, Integrable equations on time scales, J. Math. Phys., 46(11):113510, 2005.

[8] Gusein Sh. Guseinov, Integration on time scales, J. Math. Anal. Appl., 285(1):107-127, 2003.

[9] Gusein Sh. Guseinov, Self-adjoint boundary value problems on time scales and symmetric Green's functions, Turkish J. Math., 29(4):365-380, 2005.

[10] A.N. Kolmogorov and S.V. Fomin, Introductory Real Analysis, revised English edition, translated and edited by Richard A. Silverman, Prentice-Hall, Englewood Cliffs, NJ, 1970.

[11] V.A. Steklov, Osnovnye zadachi matematicheskoi fiziki [Basic Problems of Mathematical Physics], second edition, Nauka, Moscow, 1983. Edited and with a preface by V.S. Vladimirov.

Gusein Sh. Guseinov

Department of Mathematics, Atilim University, 06836 Incek, Ankara, Turkey

E-mail: guseinov@atilim.edu.tr