# Complexity Estimates for Severely Ill-posed Problems under A Posteriori Selection of Regularization Parameter.

1 Introduction. Problem setting

The present paper is devoted to estimating the information complexity of severely ill-posed problems such as Fredholm integral equations with smooth kernels. It continues the research started in [25,26]. In those papers the regularization parameter was selected depending on the level of the input data error as well as on a parameter describing the smoothness of the desired solution. The essential disadvantage of such a selection is that the regularization parameter has to conform to the exact value of the mentioned smoothness. Obviously, such an approach is suitable only if the smoothness of the desired solution is known. Usually, however, such information is either unavailable or inaccurate. Therefore, in practice an a posteriori rule for choosing the regularization parameter is necessary; such a rule does not require any additional information about the smoothness of the solution. One of the most common a posteriori rules is the Morozov discrepancy principle, which was first applied to moderately ill-posed problems in [5,11,12]. For severely ill-posed problems this principle was employed, in particular, in [21]. Another a posteriori rule is the balancing principle, also called the Lepskii principle. It was first applied by O.V. Lepskii to statistical problems (see [6]). Later, it was used for solving linear ill-posed problems in [3,13,24] and for nonlinear problems in [2]. Recall that the balancing principle consists in selecting a regularization parameter that balances two functions in the error estimate of the algorithm. In the present paper economical projection methods are designed for the efficient solution of severely ill-posed problems. The standard Tikhonov method is used as a regularizer. In Subsections 4.1-4.2 the discrepancy principle is applied as a stopping rule, and in Subsections 4.3-4.4 we use the balancing principle.
It will be established that the proposed strategies not only achieve the optimal order of accuracy on the class of problems under consideration, but are also economical in the sense of the amount of discrete information used. We now give a rough statement of the problem.

Let us consider Fredholm's integral equation of the first kind

$$ Ax(t) = f(t), \quad t \in [0,1], \qquad (1.1) $$

with $Ax(t) = \int_0^1 a(t,\tau)\,x(\tau)\,d\tau$, acting continuously in $L_2 := L_2(0,1)$. Suppose that $\mathrm{Range}(A)$ is not closed in $L_2$ and $f \in \mathrm{Range}(A)$. We also assume that a perturbation $f_\delta$, $\|f - f_\delta\| \le \delta$, is given instead of the right-hand side $f$ of equation (1.1). Following [21], by a severely ill-posed problem we understand equation (1.1) whose kernel is much smoother than the solution of (1.1). Namely, we assume that the exact solution satisfies a logarithmic source condition, e.g. it belongs to

$$ M_p(A) := \{\, u : u = \ln^{-p}(A^*A)^{-1} v,\ \|v\| \le \rho \,\}, \qquad (1.2) $$

where $p$, $\rho$ are some positive parameters, and $A^*$ is the adjoint of $A$. The operator function $\ln^{-p}(A^*A)^{-1}$ is well defined by its spectral decomposition

$$ \ln^{-p}(A^*A)^{-1} v = \sum_{k=1}^{\infty} \ln^{-p}\!\big(\lambda_k^{-2}\big)\,(v, v_k)\,v_k, \qquad (1.3) $$

where $\{\lambda_k; u_k, v_k\}_{k=1}^{\infty}$ is the singular system of $A$. Here $\lambda_k$, $k = 1, 2, \ldots$, are the singular values of $A$. Note that exact information about the smoothness, namely the parameter $p$, is usually unavailable in practice. For this reason the set

$$ M(A) := \bigcup_{0 < p \le p_1} M_p(A) \qquad (1.4) $$

is considered in place of $M_p(A)$. Here $p_1 < \infty$ is an upper bound for the possible values of $p$. Within the framework of our research we construct approximations to the exact solution $x^\dagger$ of (1.1), which has minimal norm in $L_2$ and belongs to the set $M(A)$. From now on, we assume that the parameter $p$ is unknown.
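To make the logarithmic source condition (1.2) concrete, the following sketch applies the operator function $\ln^{-p}(A^*A)^{-1}$ to a vector via the singular value decomposition of a discretized operator. The toy matrix, the function name, and the exponential decay of the singular values are illustrative assumptions of this sketch, not part of the paper.

```python
import numpy as np

def log_source_element(A, v, p):
    """Apply the operator function ln^{-p}((A^T A)^{-1}) to v via the SVD of A.

    A singular value s of A gives the eigenvalue s^2 of A^T A, so the
    spectral multiplier is (ln(1/s^2))^{-p}; this requires s < 1.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    mult = (np.log(1.0 / s**2)) ** (-p)
    # expand v in the right singular basis, rescale, resynthesize
    return Vt.T @ (mult * (Vt @ v))

# toy severely ill-posed operator: rapidly decaying singular values e^{-k}
rng = np.random.default_rng(0)
n = 8
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.exp(-np.arange(1, n + 1))) @ Q.T
u = log_source_element(A, rng.standard_normal(n), p=1.0)
```

For a diagonal operator with singular values $e^{-1}, e^{-2}$, the multipliers are $1/2$ and $1/4$, which gives a quick sanity check.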

It is known that severely ill-posed problems were first considered in [8]. Note that in [29] it was established that the error of any approximation method for (1.1) with solutions from (1.4) cannot be less than $O(\ln^{-p} \delta^{-1})$. Methods guaranteeing this order of accuracy on the class of problems under consideration are called order-optimal. Later on, the study of severely ill-posed problems was continued in a series of works, in particular [4,7,21,24,29]. For severely ill-posed problems, error estimates of the Tikhonov method in finite-dimensional subspaces are given in [21] with the parameter chosen by the discrepancy principle, and in [24] with the parameter chosen by the balancing principle.

Now we introduce the class of equations under consideration and give a statement of the problem. Let $\{e_i\}_{i=1}^{\infty}$ be some orthonormal basis in $L_2$. By $P_m$ we denote the orthogonal projection onto $\mathrm{span}\{e_1, e_2, \ldots, e_m\}$: $P_m\varphi(t) = \sum_{i=1}^{m} (\varphi, e_i)\,e_i(t)$. Consider the following class of operators (1.5):

[mathematical expression not reproducible], (1.5)

where [mathematical expression not reproducible] otherwise.

If the kernel $a(t,\tau)$ of $A$ has mixed partial derivatives up to order $r$ in both variables and the inequalities

[mathematical expression not reproducible]

hold for all $i, j = 0, 1, \ldots, r$, then it is known (see, e.g., [19]) that $A \in H^r_\gamma$ for some $\gamma = (\gamma_0, \gamma_1)$.

From now on, the class of equations (1.1) with operators belonging to $H^r_\gamma$ (1.5) and solutions from $M(A)$ (1.4) will be denoted by $(H^r_\gamma, M(A))$. Within the framework of the present paper we concentrate on the study of projection methods for solving equations from $(H^r_\gamma, M(A))$.

Any projection scheme for the discretization of equations (1.1) with the perturbed right-hand side $f_\delta$ can be defined by means of a finite set of inner products

$$ (Ae_j, e_i), \quad (i,j) \in \Omega, \qquad (1.6) $$

[mathematical expression not reproducible], (1.7)

where $\Omega$ is a bounded domain in the coordinate plane $[1,\infty) \times [1,\infty)$. The inner products (1.6), (1.7) are called the Galerkin information about (1.1). Here $\mathrm{card}(\Omega)$ is the total number of the inner products (1.6). In particular, if $\Omega = [1,n] \times [1,m]$, then one deals with the standard Galerkin discretization scheme, $\mathrm{card}(\Omega) = nm$. Studies of such a discretization scheme for various classes of ill-posed problems were conducted in a number of works, among which we mention [9,20].
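The saving promised by a hyperbolic-cross-shaped $\Omega$ over the full grid can be illustrated numerically. Since the concrete set $\Gamma_n$ from (2.2) is not reproduced here, the sketch below counts the classic cross $\{(i,j) : ij \le N\}$, whose cardinality grows like $N \ln N$ versus $N^2$ for the full grid; it is an illustration of the general idea only, not the paper's exact scheme.

```python
def card_full_grid(N):
    # standard Galerkin scheme: all pairs (i, j) with 1 <= i, j <= N
    return N * N

def card_hyperbolic_cross(N):
    # classic cross: pairs (i, j) with i * j <= N; card ~ N log N
    return sum(N // i for i in range(1, N + 1))

for N in (16, 256, 4096):
    print(N, card_full_grid(N), card_hyperbolic_cross(N))
```

Already for $N = 256$ the cross needs roughly two orders of magnitude fewer inner products than the full grid.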

Definition 1. A projection method for solving (1.1) is a mapping $P = P(\Omega) : L_2 \to L_2$ that uses the Galerkin information (1.6), (1.7) about (1.1). The mapping $P(\Omega)$ assigns to the right-hand side of the equation being solved an element $P(A_\Omega)f_\delta$, which is a polynomial in the basis $\{e_i\}_{i=1}^{\infty}$ with harmonic numbers from $\omega_2 := \{j : (i,j) \in \Omega\}$. This element is taken as an approximate solution of (1.1).

In general, we do not require the method $P$ to be linear, stable, or continuous. Such a general approach is useful when we compare the approximation properties of various methods for solving (1.1).

The error of the method P([OMEGA]) on the class of equations ([H.sup.r.sub.[gamma]], M(A)) is defined as

[mathematical expression not reproducible]

The minimal radius of the Galerkin information is given by

[mathematical expression not reproducible].

This value describes the minimal possible accuracy of projection methods when the amount of Galerkin information is bounded. Note that [mathematical expression not reproducible] is related to the information complexity of the corresponding problem. The information complexity of a problem is defined as the least amount of discrete information needed to find an approximate solution with a given precision; the algorithmic complexity is defined as the minimal number of arithmetic operations that must be performed to construct such a solution. Thus, [mathematical expression not reproducible] characterizes the information complexity on the class of problems $(H^r_\gamma, M(A))$.

It should be noted that the investigation of complexity for various kinds of problems was started in [30,31]. In these works the fundamentals of the general theory of optimal algorithms were introduced. In recent years interest in the complexity of ill-posed problems has been increasing. In [20] the first economical projection methods for solving moderately ill-posed problems were designed; the standard Galerkin method was applied as the discretization scheme. The first estimates of the information and algorithmic complexity for moderately ill-posed problems were obtained in [19,22,23]. The authors point out that the optimal order of these values is achieved under a modified Galerkin scheme called the hyperbolic cross. The study of complexity for severely ill-posed problems was started relatively recently. This research is highlighted in a series of works, among which we mention [9,25,26,28]. Note that [9] continues the research started in [20]: there the authors consider economical discretization under the standard Galerkin scheme for problems with solutions satisfying a general source condition (which also covers severely ill-posed problems). In [25,26] estimates of the information and algorithmic complexity are obtained for severely ill-posed problems. It is established that the optimal order of these values is achieved within the framework of the projection method where the modified Galerkin scheme is applied as discretization. For an a priori parameter choice a general regularizer (whose generating function satisfies the Bakushinskii conditions) is used. In [28], for the stopping rule according to the balancing principle, error estimates and information complexity estimates are given within the framework of the same projection method with the modified Galerkin scheme as discretization. The latest result related to the discussed problem is presented in [10], where the authors consider a class of ill-posed problems generated by one fixed operator.

As mentioned above, the present paper continues the research started in [25,26,28]. In this work economical projection methods with different a posteriori rules for choosing the regularization parameter are developed for solving severely ill-posed problems.

2 Regularization and discretization methods

To guarantee stable approximations we apply the standard Tikhonov method. In this method a regularized solution $x_\alpha$ is defined as the solution of the variational problem

$$ \min_{x \in L_2} \big\{ \|Ax - f_\delta\|^2 + \alpha \|x\|^2 \big\}. \qquad (2.1) $$

For a numerical realization of the standard Tikhonov method it is necessary to carry out all computations with a finite amount of input data. For that reason the variational problem (2.1) is replaced by

$$ \min_{x \in L_2} \big\{ \|A_n x - f_\delta\|^2 + \alpha \|x\|^2 \big\}, $$

where $A_n$ is an operator of finite rank.

For an economical discretization of equations belonging to $(H^r_\gamma, M(A))$ we employ a modification of the standard Galerkin scheme called the hyperbolic cross. It should be noted that the hyperbolic cross was first applied by K.I. Babenko [1] to compute the Kolmogorov widths of some classes of periodic functions with dominant mixed partial derivatives.

The idea of applying the hyperbolic cross to operator equations of the second kind belongs to S.V. Pereverzev, who implemented it in a series of works (see, e.g., [14,15,16,18]). The efficiency of the hyperbolic cross for ill-posed problems has been demonstrated in [17,19,26,27].

In our research we apply a projection scheme with $\Omega = \Gamma_n$, where

[mathematical expression not reproducible] (2.2)

is a hyperbolic cross in the coordinate plane with respect to the basis $\{e_i\}_{i=1}^{\infty}$ involved in the definition of the class $H^r_\gamma$. One can find an approximate solution from an operator equation of the second kind

$$ \alpha x + A_n^* A_n x = A_n^* f_\delta. $$

Thus we seek an approximation $x = x^{\delta}_{\alpha,n}$ in the form

$$ x^{\delta}_{\alpha,n} = g_\alpha(A_n^* A_n)\, A_n^* f_\delta, \qquad (2.3) $$

where $g_\alpha(\lambda) = (\alpha + \lambda)^{-1}$, and

[mathematical expression not reproducible] (2.4)

Moreover, we introduce the following auxiliary elements

$$ x_\alpha = g_\alpha(A^* A)\, A^* f, \qquad x_{\alpha,n} = g_\alpha(A_n^* A_n)\, A_n^* f. \qquad (2.5) $$
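Since $g_\alpha(\lambda) = (\alpha + \lambda)^{-1}$, computing the approximation (2.3) amounts to solving the linear system $(\alpha I + A_n^* A_n)x = A_n^* f_\delta$. A minimal numpy sketch follows; the matrix `An` stands for any finite-rank discretization, and the function name is ours:

```python
import numpy as np

def tikhonov_approximation(An, f_delta, alpha):
    """x = g_alpha(An^T An) An^T f_delta with g_alpha(t) = (alpha + t)^{-1},
    i.e. the solution of the normal equation (alpha I + An^T An) x = An^T f_delta."""
    k = An.shape[1]
    return np.linalg.solve(alpha * np.eye(k) + An.T @ An, An.T @ f_delta)
```

For $A_n = I$ this reduces to the shrinkage $x = f_\delta/(1+\alpha)$, which serves as a quick sanity check.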

3 Auxiliary results

This section contains some definitions and facts, as well as a series of auxiliary assertions which are needed later.

It is well known (see, e.g., [32]) that for any linear bounded operator $A$ the following inequalities hold:

[mathematical expression not reproducible] (3.1)

Lemma 1. ([32, p. 34]) If $g$ is a bounded Borel measurable function on $[0, \gamma_0^2]$, then [mathematical expression not reproducible]. (3.2)

Lemma 2. (see [21]) Let $x^\dagger \in M_p(A)$ and [mathematical expression not reproducible]. Then for sufficiently small $\alpha \in (0, e^{-2p})$ it holds that

[mathematical expression not reproducible]

where $x_\alpha$ is determined by (2.5).

Lemma 3. ([21]) Let [mathematical expression not reproducible], and let $\alpha$ be such that

$$ \|Ax_\alpha - f\| < d\,\delta, $$

where $d > 0$ is some positive constant and $x_\alpha$ is determined by (2.5). Then the estimate

[mathematical expression not reproducible]

is fulfilled. The constant $\xi_1 > 0$ depends only on $d$, $\rho$ and $p$.

Lemma 4. Let $x^\dagger \in M(A)$. For any $\alpha > 0$ and $n \in \mathbb{N}$ the estimate

[mathematical expression not reproducible]

holds, where [mathematical expression not reproducible] are determined by (2.5), (2.3), respectively.

Proof. First, we note that

[mathematical expression not reproducible] (3.3)

Further, consider the decomposition

[mathematical expression not reproducible]

where

[mathematical expression not reproducible]

Now we estimate $S_1$, $S_2$. By relations (3.1), (3.2) we immediately find

[mathematical expression not reproducible]

It remains to estimate the norm of $S_2$. Rewrite $S_2$ as follows:

[mathematical expression not reproducible],

where

[mathematical expression not reproducible]

Further, we bound the norms of $\bar{s}_1$ and $\bar{s}_2$. By (3.1), (3.2) and (3.3) we obtain

[mathematical expression not reproducible]

Thus,

[mathematical expression not reproducible]

Summing up the above bounds, we finally get

[mathematical expression not reproducible].

The lemma is proved.

Lemma 5. ([4,24]) Let $x^\dagger \in M(A)$. Then for any $\alpha > 0$ the inequality

[mathematical expression not reproducible]

holds, where

[mathematical expression not reproducible] (3.4)

and $x_\alpha$ is determined by (2.5).

Lemma 6. Let $x^\dagger \in M(A)$. Then for any $\alpha > 0$ and $n \in \mathbb{N}$ the inequality

[mathematical expression not reproducible]

holds, where $x_\alpha$ and $x_{\alpha,n}$ are determined by (2.5).

Proof. First, rewrite

[mathematical expression not reproducible]

where

[mathematical expression not reproducible]

By (3.1) we have [mathematical expression not reproducible].

It remains to estimate $\|T_1\|$. By means of (3.2) we rewrite $T_1$ as

[mathematical expression not reproducible],

where

[mathematical expression not reproducible]

Further we bound the norms of $\bar{T}_1$ and $\bar{T}_2$. By (3.1) we obtain

[mathematical expression not reproducible]

It remains to estimate $\bar{T}_2$. By (3.2) we have

[mathematical expression not reproducible]

and applying (3.1) we get [mathematical expression not reproducible]. Hence,

[mathematical expression not reproducible]

Thus, [mathematical expression not reproducible]. The lemma is proved.

Lemma 7. The following two-sided estimate holds:

[mathematical expression not reproducible]

Proof. By means of (2.2) it holds that $\mathrm{card}(\Gamma_n) = \sum_{k=0}^{n} \mathrm{card}(Q_k)$, with

[mathematical expression not reproducible]

Hence,

[mathematical expression not reproducible]

Hence,

[mathematical expression not reproducible]

Clearly, $2^{2n} n < \mathrm{card}(\Gamma_n) \le 2 \cdot 2^{2n} n$. The lemma is proved.

It is known (see, e.g., [22]) that for any $A \in H^r_\gamma$ the inequality

[mathematical expression not reproducible] (3.5)

is fulfilled, where [A.sub.n] is determined by (2.4), and [mathematical expression not reproducible]

4 Error estimates

In this section two approaches for solving severely ill-posed problems from $(H^r_\gamma, M(A))$ are proposed. These methods combine the standard Tikhonov regularization with the discrepancy principle (Algorithm I) and with the balancing principle (Algorithm II). The difference between these principles is as follows: the values of the parameter $\alpha$ change in opposite directions, namely, $\alpha$ decreases under the discrepancy principle and increases under the balancing principle.

4.1 Algorithm I (Discrepancy principle as a stopping rule)

Let the regularization parameter [alpha] be selected as

[mathematical expression not reproducible] (4.1)

[mathematical expression not reproducible]

and the discretization parameter n is selected as follows

$$ \bar{c}\, \sqrt{n\, 2^{-2rn}} = 4\delta/(5\rho). \qquad (4.2) $$

This equality means that as the parameter $n$ we take the solution of (4.2) rounded up to the nearest integer. Further, we describe the proposed algorithm with the discrepancy principle as a stopping rule for the problem under study.
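The rounding rule can be sketched as a short search: find the smallest integer $n$ for which the left-hand side of (4.2) drops below $4\delta/(5\rho)$. The constant $\bar c$ and the exact form of the left-hand side are taken as written above and should be treated as assumptions of this sketch.

```python
import math

def choose_discretization_level(delta, rho, r, c_bar=1.0, n_max=200):
    """Smallest integer n with c_bar * sqrt(n) * 2**(-r*n) <= 4*delta/(5*rho),
    i.e. the solution of (4.2) rounded up. Constants are illustrative."""
    target = 4.0 * delta / (5.0 * rho)
    for n in range(1, n_max + 1):
        # sqrt(n * 2**(-2*r*n)) = sqrt(n) * 2**(-r*n), decreasing for r >= 1
        if c_bar * math.sqrt(n) * 2.0 ** (-r * n) <= target:
            return n
    raise ValueError("n_max too small for this delta")
```

As $\delta$ decreases, the selected $n$ grows only logarithmically in $\delta^{-1}$, which reflects the mild growth of the discretization level for smooth kernels.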

Algorithm 1.

1. Input data: $A \in H^r_\gamma$, $f_\delta$, $\delta$, $\rho$;

2. To construct [A.sub.n] (2.4) and [mathematical expression not reproducible] we compute the inner products (1.6), (1.7);

3. The cycle: for $m = 1, 2, \ldots$, [mathematical expression not reproducible] (2.3) is computed by solving the equation

[mathematical expression not reproducible]

The cycle runs until the stopping-rule conditions (4.3), (4.4) are met;

4. A stopping rule (the discrepancy principle)

[mathematical expression not reproducible], (4.3)

[mathematical expression not reproducible], (4.4)

where $m < M$, $d > \sqrt{2} + 1$, and [mathematical expression not reproducible] is determined by (2.3).

The introduced projection method (2.4), (4.1)-(4.4) is denoted by $P'_1$.
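The computational core of Algorithm 1 (steps 3-4) can be sketched as follows. Since conditions (4.3)-(4.4) are only partially reproduced above, the sketch uses the generic discrepancy stopping rule $\|A_n x - f_\delta\| \le d\delta$ together with an assumed geometric sequence $\alpha_m = \alpha_0 q^{-m}$; all constants are illustrative.

```python
import numpy as np

def discrepancy_principle(An, f_delta, delta, d=2.5, alpha0=1.0, q=2.0, M=50):
    """Decrease alpha_m = alpha0 * q**(-m) and solve the regularized normal
    equation until the residual satisfies ||An x - f_delta|| <= d * delta
    (generic discrepancy stopping rule; sequence and constants illustrative)."""
    k = An.shape[1]
    for m in range(M):
        alpha = alpha0 * q ** (-m)
        x = np.linalg.solve(alpha * np.eye(k) + An.T @ An, An.T @ f_delta)
        if np.linalg.norm(An @ x - f_delta) <= d * delta:
            return x, alpha
    return x, alpha  # iteration budget exhausted
```

Note that $\alpha$ is only decreased here, in line with the remark above that the discrepancy principle moves the regularization parameter downward.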

Lemma 8. Let $x^\dagger \in M(A)$, $A \in H^r_\gamma$, let $\alpha_M$ satisfy (4.3), let (4.4) be satisfied with $d > \sqrt{2} + 1$, and let the parameter $n$ in (2.4) be chosen by (4.2). Then there are constants $d_1, d_2 > 0$ such that the two-sided estimate

[mathematical expression not reproducible]

is fulfilled.

Proof. First, note that by (3.5) and (4.2) it holds

[mathematical expression not reproducible].

If [[alpha].sub.M] meets the condition (4.3) then

[mathematical expression not reproducible]

and applying Lemma 4 we obtain

[mathematical expression not reproducible]

At the same time, keeping (4.4) in mind, for $\alpha = \alpha_{M-1}$ we have

[mathematical expression not reproducible]. (4.5)

Using the inverse triangle inequality we find

[mathematical expression not reproducible]. (4.6)

By spectral decomposition of the operator A (see (1.3)) we get

[mathematical expression not reproducible]

Hence,

[mathematical expression not reproducible]. (4.7)

Substituting (4.5) and (4.6) in (4.7) we finally obtain

[mathematical expression not reproducible].

Thus, the lemma is proved with $d_1 = \theta(d - \sqrt{2} - 1)\delta$, $d_2 = \theta(d + \sqrt{2} + 1)\delta$.

4.2 Error estimate for Algorithm I

Theorem 1. Let $x^\dagger \in M(A)$, $A \in H^r_\gamma$, and let the regularization parameter $\alpha_M$ and the discretization parameter $n$ be chosen by (4.3) and (4.2), respectively. Then the estimate

[mathematical expression not reproducible] (4.8)

holds, where the constant [??] > 0 depends only on [mathematical expression not reproducible], and [mathematical expression not reproducible] is determined by (2.3).

Proof. It is obvious that

[mathematical expression not reproducible].

Applying (3.1) the last term can be estimated as

[mathematical expression not reproducible]. (4.9)

By Lemmas 3, 6 and relations (4.2), (4.9) we obtain

[mathematical expression not reproducible]

Further, if for $\alpha_M$ selected from (4.3) it holds that $\alpha_M \ge \delta$, then for sufficiently small $\delta$ we have

[mathematical expression not reproducible]

where [mathematical expression not reproducible].

Otherwise, if [[alpha].sub.M] [less than or equal to] [delta] then by Lemmas 2 and 8 we get

[mathematical expression not reproducible].

Hence, [mathematical expression not reproducible]. Thus,

[mathematical expression not reproducible]

where [mathematical expression not reproducible]. The theorem is proved with [mathematical expression not reproducible].

4.3 Algorithm II (Balancing principle as a stopping rule)

Let the discretization parameter $n$ be selected, as before, according to (4.2):

$$ \bar{c}\, \sqrt{n\, 2^{-2rn}} = 4\delta/(5\rho). \qquad (4.10) $$

First, we formulate a result giving an error estimate of the method with the balancing principle as a stopping rule. This bound depends only on the input data error $\delta$ and the regularization parameter $\alpha$.

Theorem 2. Let $x^\dagger \in M(A)$, $A \in H^r_\gamma$. Then the estimate

[mathematical expression not reproducible] (4.11)

holds, where $x^{\delta}_{\alpha,n}$ is determined by (2.3), and $\xi_2$ is determined by (3.4).

Proof. By the triangle inequality we get

[mathematical expression not reproducible].

By Lemmas 5,6 and relations (4.9), (4.10) we obtain

[mathematical expression not reproducible].

The theorem is proved.

For our further manipulations we denote [mathematical expression not reproducible]. Thus the error estimate (4.11) can be rewritten as

[mathematical expression not reproducible],

where $\psi_1(\alpha)$ and $\psi_2(\alpha)$ are monotonically increasing and decreasing convex functions of $\alpha$, respectively. Let us fix some real number $q > 1$ and denote by $D_M$ the set of possible values of the parameter $\alpha$:

[mathematical expression not reproducible],

where [mathematical expression not reproducible].

For solving equation (1.1) we propose a numerical algorithm that differs from Algorithm I by applying the balancing principle as the stopping rule [13].

Algorithm 2.

1. Given data: $A \in H^r_\gamma$, $\delta$, $f_\delta$, $\rho$;

2. Compute the Galerkin information:

[mathematical expression not reproducible]

3. Compute the set of elements [mathematical expression not reproducible];

for i = 1 to M solve the equations

[mathematical expression not reproducible]

end;

4. Design the set of possible indexes:

[mathematical expression not reproducible] (4.12)

5. Choose $\alpha_{i_+}$ according to the rule $i_+ = \max\{i : i \in D^+_M\}$;

6. [mathematical expression not reproducible] is an approximate solution.

Let us denote the numerical method (2.4), (4.10)-(4.12) by $P'_2$. The optimality of the projection method $P'_2$ is established in the next subsection.
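Steps 4-5 of Algorithm 2 admit a compact sketch. Following the balancing idea of [13], among regularized solutions computed for an increasing sequence of $\alpha_i$ one accepts the largest index whose solution stays within $4\psi_2(\alpha_j)$ of all earlier ones. The concrete $\psi_2$ and the data below are illustrative assumptions, since (4.12) is not fully reproduced above.

```python
import numpy as np

def balancing_principle(xs, alphas, psi2):
    """Return i_+ = max{ i : ||xs[i] - xs[j]|| <= 4 * psi2(alphas[j]) for all j < i }.

    xs are regularized solutions computed for the increasing sequence alphas;
    psi2 is the decreasing (noise-propagation) part of the error bound.
    """
    admissible = [
        i for i in range(len(xs))
        if all(np.linalg.norm(xs[i] - xs[j]) <= 4.0 * psi2(alphas[j])
               for j in range(i))
    ]
    return max(admissible)  # index 0 is always (vacuously) admissible
```

The selected approximation is then `xs[i_+]`; no knowledge of the smoothness parameter $p$ enters the selection, which is the point of the a posteriori rule.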

4.4 Error estimate for Algorithm II

For further mathematical manipulations we introduce the auxiliary values

[mathematical expression not reproducible].

Theorem 3. Let $A \in H^r_\gamma$ and $x^\dagger \in M(A)$. Then for the projection method $P'_2$ the following error bound

[mathematical expression not reproducible]

takes place.

Proof. The proof is analogous to that of Theorem 4.1 in [28].

Theorem 4. Let $A \in H^r_\gamma$ and $x^\dagger \in M(A)$. Then the error bound for the projection method $P'_2$ is the following:

[mathematical expression not reproducible],

where $\kappa_p$ is some constant that does not depend on $\delta$.

Proof. The proof is analogous to that of Theorem 4.2 in [28].

5 Minimal radius of Galerkin's information

Now we obtain an error estimate for the methods $P'_i$, $i = 1, 2$. This estimate is an upper bound for $R_{N,\delta}$. Note that the proof of this theorem is close to that of Theorem 5.1 from [28], but for the completeness of the paper we keep the proof.

Theorem 5. For sufficiently small [delta] the estimate

[mathematical expression not reproducible]

is fulfilled, where the constants $c_{i,p} > 0$, $i = 1, 2$, do not depend on $\delta$. Moreover,

[mathematical expression not reproducible].

Proof. We establish an upper bound for $R_{N,\delta}$. First, consider the algorithm $P'_1$ (2.4), (4.1)-(4.4). We express the right-hand side of (4.8) in terms of $N$, where $N = c' n 2^{2n}$, $1 < c' \le 2$ (see Lemma 7). Due to (4.2) we have

[mathematical expression not reproducible]

By obvious transformations we obtain $\ln N = \ln c' + 2n \ln 2 + \ln n$. Hence, $n \le \ln N / (2\ln 2)$, and $\delta^{-1}$ can be estimated as follows:

[mathematical expression not reproducible]

For any $\mu > 0$ there is some $N_0$ such that for all $N \ge N_0$ it holds that $\ln N \le N^{\mu}$. Consequently,

[mathematical expression not reproducible]

Without loss of generality we assume $\mu$ to be such that $r(1 - \mu) - \frac{1}{2}\mu > 0$. Then (4.8) can be rewritten as

[mathematical expression not reproducible].

By the definition of $R_{N,\delta}(H^r_\gamma, M(A))$ it follows that

[mathematical expression not reproducible].

Further, consider the algorithm $P'_2$ (2.4), (4.10)-(4.12). It is easy to establish that the estimate

[mathematical expression not reproducible]

is fulfilled.

It remains to establish

[mathematical expression not reproducible].

The theorem is completely proved.

Below we formulate a result giving the order estimate of the minimal radius of the Galerkin information.

Theorem 6. The two-sided estimate

[mathematical expression not reproducible]

holds, where [mathematical expression not reproducible]. The indicated optimal order is achieved within the framework of the algorithms $P'_i$, $i = 1, 2$.

The lower bound for $R_{N,\delta}$ was established in [28], and the upper estimate is obtained in Theorem 5.

Remark 1. Comparing the results for the strategy from [9], where the standard Galerkin method was applied as the discretization scheme, with those of Theorem 6, we can conclude that all these approaches achieve the optimal order of accuracy on the class of severely ill-posed problems under study. At the same time, the modification of the Galerkin scheme (2.2) allows one to essentially reduce the amount of discrete information.

Moreover, comparing the obtained results with those of [25,26], where the regularization parameter was selected a priori, we first note that in the present paper we consider a wider class of equations (compare (1.4) and (1.2)). On the one hand, the problem becomes more complicated (we no longer assume the exact value of $p$ to be known); on the other hand, the aim of our research is the same, namely, to provide the optimal order of accuracy with minimal cost of the Galerkin information used. If we compare the amounts of discrete information, then it turns out to be larger (by a logarithmic factor) in the case of an unknown parameter $p$. This increase in the amount of discrete information can be considered a "penalty" for not knowing the smoothness of the desired solution.

https://doi.org/10.3846/13926292.2017.1307284

References

[1] K.I. Babenko. Approximation of periodic functions of many variables by trigonometric polynomials. Dokl. Acad. Nauk SSSR, 132(2):247-250, 1960.

[2] F. Bauer and T. Hohage. A Lepskij-type stopping rule for regularized Newton methods. Inverse Probl., 21(6):1975-1991, 2005. https://doi.org/10.1088/0266-5611/21/6/011.

[3] U. Hamarik and T. Raus. About the balancing principle for choice of the regularization parameter. Numer. Funct. Anal. Optim., 30(9-10):951-970, 2009. https://doi.org/10.1080/01630560903393139.

[4] T. Hohage. Regularization of exponentially ill-posed problems. Numer. Funct. Anal. Optim., 21(3-4):439-464, 2000. https://doi.org/10.1080/01630560008816965.

[5] V.K. Ivanov. The approximate solution of operator equations of the first kind. Zh. Vychisl. Mat. Mat. Fiz., 6(6):197-205, 1966. https://doi.org/10.1016/0041-5553(66)90171-6. (in Russian)

[6] O. Lepskii. A problem of adaptive estimation in Gaussian white noise. Theory Probab. Appl., 36:454-466, 1990.

[7] Sh. Lu and S.V. Pereverzev. Regularization Theory for Ill-posed Problems. Selected Topics. Inverse and Ill-posed Problems Series, 58. De Gruyter, Berlin, 2013. https://doi.org/10.1515/9783110286496.

[8] B.A. Mair. Tikhonov regularization for finitely and infinitely smoothing operators. SIAM J. Math. Anal., 25(1):135-147, 1994. https://doi.org/10.1137/S0036141092238060.

[9] P. Mathe and S.V. Pereverzev. Discretization strategy for ill-posed problems in variable Hilbert scales. Inverse Problems, 19(6):1263-1277, 2003. https://doi.org/10.1088/0266-5611/19/6/003.

[10] P. Mathe and S.V. Pereverzev. Complexity of linear ill-posed problems in Hilbert space. J. Complexity. Special Issue, pp. 50-67, 2016.

[11] V.A. Morozov. Regularization of incorrectly posed problems and the choice of regularization parameter. Zh. Vychisl. Mat. Mat. Fiz., 6(1):242-251, 1966. https://doi.org/10.1016/0041-5553(66)90046-2.

[12] V.A. Morozov. Choice of parameter in solving functional equations by the method of regularization. Dokl. Akad. Nauk SSSR, 175(1):170-175, 1967.

[13] S. Pereverzev and E. Schock. On the adaptive selection of the parameter in regularization of ill-posed problems. SIAM J. Numer. Anal., 43(5):2060-2076, 2005. https://doi.org/10.1137/S0036142903433819.

[14] S.V. Pereverzev. On the complexity of the problem of finding the solutions of Fredholm equations of the second kind with smooth kernels. I. Ukrain. Mat. Zh., 40(1):71-76, 1988. https://doi.org/10.1007/bf01056451.

[15] S.V. Pereverzev. On the complexity of the problem of finding the solutions of Fredholm equations of the second kind with smooth kernels. II. Ukrain. Mat. Zh., 41(2):169-173, 1989. https://doi.org/10.1007/BF01060382.

[16] S.V. Pereverzev. Hyperbolic cross and the complexity of the approximate solution of Fredholm integral equations of the second kind with differentiable kernels. Sibirsk. Mat. Zh., 32(1):85-92, 1991. https://doi.org/10.1007/bf00970164.

[17] S.V. Pereverzev. Optimization of projection methods for solving ill-posed problems. Computing, 55(2):113-124, 1995. https://doi.org/10.1007/BF02238096.

[18] S.V. Pereverzev and C. Scharipov. Information complexity of equations of second kind with compact operators in Hilbert space. J.Complexity, 8(2):176-202, 1992. https://doi.org/10.1016/0885-064X(92)90014-3.

[19] S.V. Pereverzev and S.G. Solodky. The minimal radius of Galerkin information for the Fredholm problems of first kind. J. Complexity, 12(4):401-415, 1996. https://doi.org/10.1006/jcom.1996.0025.

[20] R. Plato and G.M.Vainikko. On the regularization of projection methods for solving ill-posed problems. Numer. Math., 57(1):63-79, 1990. https://doi.org/10.1007/BF01386397.

[21] E. Schock and S.V. Pereverzev. Morozov's discrepancy principle for Tikhonov regularization of severely ill-posed problems in finite-dimensional subspaces. Numer. Funct. Anal. Optim., 21(7-8):901-916, 2000.

[22] S.G. Solodky. Optimal approximation for solving linear ill-posed problems. J.Complexity, 15(4):123-132, 1999.

[23] S.G. Solodky. Optimization of the projection methods for solving linear ill-posed problems. Zh. Vychisl. Mat. Mat. Fiz., 39(2):195-203, 1999.

[24] S.G. Solodky and G.L. Myleiko. About regularization of severely ill-posed problems by standard Tikhonov's method with the balancing principle. Math. Model. Anal., 19(2):199-215, 2014. https://doi.org/10.3846/13926292.2014.909898.

[25] S.G. Solodky and G.L. Myleiko. The minimal radius of Galerkin information for severely ill-posed problems. Journal of Inverse and Ill-Posed Problems, 22(5):739-757, 2014. https://doi.org/10.1515/jip-2013-0035.

[26] S.G. Solodky and G.L. Myleiko. On optimization of projection methods for solving some classes of severely ill-posed problems. J. Appl. Anal., 95(4):826-841, 2016. https://doi.org/10.1080/00036811.2015.1036748.

[27] S.G. Solodky and E.V. Semenova. On the optimal order of accuracy of an approximate solution to the Symm's integral equation. Zh. Vychisl. Mat. Mat. Fiz., 52(3):472-488, 2012.

[28] S.G. Solodky and E.V. Semenova. About minimal informational efforts by solving exponentially ill-posed problems. Journal of Num. and Appl. Math., 2:90-100, 2015.

[29] U. Tautenhahn. Optimality for ill-posed problems under general source condition. Numer. Funct. Anal. Optim., 19(3-4):377-398, 1998. https://doi.org/10.1080/01630569808816834.

[30] J.F. Traub, G. Wasilkowski and H. Wozniakowski. A General Theory of Optimal Algorithms. Academic Press. - New York., 1980.

[31] J.F. Traub, G. Wasilkowski and H. Wozniakowski. Information-Based Complexity. Boston : Academic Press, 1988.

[32] G.M. Vainikko and A.Yu.Veretennikov. Iteration Procedures in Ill-posed Problems. Nauka, Moscow, 1986. (in Russian)

Sergii G. Solodky, Ganna L. Myleiko, Evgeniya V. Semenova

Institute of Mathematics National Academy of Sciences of Ukraine 3, Tereschenkivska St., 01601 Kiev, Ukraine

E-mail(corresp.): anna_mileyko@ukr.net

E-mail: solodky@imath.kiev.ua

E-mail: lebedeva@ipnet.kiev.ua

Received August 31, 2016; revised February 27, 2017; published online May 15, 2017

* This work was done while the authors were visiting the Johann Radon Institute for Computational and Applied Mathematics (RICAM). The authors gratefully acknowledge the partial support within the scope of the joint European project "Approximation Methods for Molecular Modelling and Diagnosis Tools" ("AMMODIT"), Grant no: 645-672.