# ON GENERALIZED ITERATED TIKHONOV REGULARIZATION WITH OPERATOR-DEPENDENT SEMINORMS

1. Introduction. We consider an equation of the form

(1.1) $Kx = y$,

where $K : X \to Y$ is a compact linear operator between generic Hilbert spaces $X$ and $Y$ over $\mathbb{R}$ or $\mathbb{C}$. Let $\mathrm{Rg}(K)$ be the range of $K$. We assume $y \in \mathrm{Rg}(K)$, i.e., that the problem (1.1) has a solution $x^\dagger = K^\dagger y$ of minimal norm. Here $K^\dagger$ denotes the (Moore-Penrose) pseudo-inverse of $K$, which is unbounded when $K$ is compact with infinite-dimensional range. Hence problem (1.1) is ill-posed and has to be regularized in order to compute a numerical solution.

We want to approximate the solution $x^\dagger$ of equation (1.1) when only an approximation $y^\delta$ of $y$ is available with

(1.2) $y^\delta = y + \eta \quad \text{and} \quad \|\eta\| \le \delta$,

where $\eta$ is called the noise vector and $\delta$ the noise level. Since $K^\dagger y^\delta$ is not necessarily close to $x^\dagger$, we approximate $x^\dagger$ by $x_\alpha^\delta := R_\alpha y^\delta$, where $\{R_\alpha\}$ is a family of continuous operators depending on a parameter $\alpha$. In this sense, a class of such regularizers is given by the so-called filter-based methods, which exploit the singular value expansion (s.v.e.) of the operator $K$ by acting on its spectrum in order to diminish the effect of the noise on the reconstructed solution. Indeed, if we denote by $(\sigma_m; v_m, u_m)_{m \in \mathbb{N}}$ the s.v.e. of $K$, then we can express $R_\alpha$ in the following way:

$R_\alpha y = \sum_{m=1}^{\infty} \frac{F_\alpha(\sigma_m)}{\sigma_m}\, \langle y, u_m \rangle\, v_m$

for every $y$ that belongs to the domain of $K^\dagger$, where $\{F_\alpha\}$ is a family of functions (filters) that, under suitable conditions, compensate for the decay to zero of the sequence of singular values $\{\sigma_m\}$ of the compact operator $K$; see, e.g., . Namely, we can write

(1.3) $x_\alpha^\delta = R_\alpha y^\delta = x^\dagger - e_a + e_n$,

where

$e_a := \sum_{m=1}^{\infty} \left( 1 - F_\alpha(\sigma_m) \right) \langle x^\dagger, v_m \rangle\, v_m$

is the approximation error and

$e_n := \sum_{m=1}^{\infty} \frac{F_\alpha(\sigma_m)}{\sigma_m}\, \langle \eta, u_m \rangle\, v_m$

is the noise error. The role of the filter function $F_\alpha$ is then to mediate between the approximation error and the noise error.
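As a sanity check, the splitting (1.3) can be reproduced numerically on a small synthetic problem; the operator, true solution, and noise below are illustrative assumptions, with the classical Tikhonov filter used as an example of $F_\alpha$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy discretization: K = U diag(sigma) V^T with decaying singular values m^(-2).
m = 50
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
V, _ = np.linalg.qr(rng.standard_normal((m, m)))
sigma = np.arange(1, m + 1, dtype=float) ** -2.0
K = U @ np.diag(sigma) @ V.T

x_true = V @ (sigma * rng.standard_normal(m))   # a reasonably smooth true solution
y = K @ x_true
eta = 1e-6 * rng.standard_normal(m)             # noise vector
y_delta = y + eta

alpha = 1e-8
F = sigma**2 / (sigma**2 + alpha)               # Tikhonov filter as an example

# x_alpha^delta = sum_m F(sigma_m)/sigma_m <y^delta, u_m> v_m
x_reg = V @ (F / sigma * (U.T @ y_delta))

# Error split (1.3): x_reg = x_true - e_a + e_n
e_a = V @ ((1 - F) * (V.T @ x_true))            # approximation error
e_n = V @ (F / sigma * (U.T @ eta))             # noise error
assert np.allclose(x_reg, x_true - e_a + e_n)
```

The final assertion verifies the identity (1.3) exactly (up to floating-point rounding), since it is pure algebra in the singular basis.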

A classical example is Tikhonov regularization with the filter $F_\alpha(\sigma) = \sigma^2 / (\sigma^2 + \alpha)$. It is well known that the Tikhonov filter suffers from a couple of undesirable properties, namely an oversmoothing effect on the regularized solution $x_\alpha^\delta$ and a saturation of the convergence rate. Indeed, if we consider for example the compact embedding operator $J_s : H^s([0, 2\pi]) \to L^2([0, 2\pi])$ of the fractional Sobolev space $H^s([0, 2\pi])$ into $L^2([0, 2\pi])$ with $s > 0$, then the regularized solution $x_\alpha^\delta$ belongs to the fractional Sobolev space $H^{2s}$, i.e., $x_\alpha^\delta$ lies in a more regular space than the true solution. Furthermore, Tikhonov regularization, under suitable a-priori assumptions and the a-priori choice rule $\alpha = \alpha(\delta) \sim c\, \delta^{2/3}$, is of optimal order, and the best possible convergence rate obtainable is

$\|x^\dagger - x_{\alpha(\delta)}^\delta\| = O\left( \delta^{2/3} \right).$

On the other hand, let $Q$ be the orthogonal projector onto $\overline{\mathrm{Rg}(K)}$. If

$\sup\left\{ \|x^\dagger - x_{\alpha(\delta, y^\delta)}^\delta\| : \|Qy - y^\delta\| \le \delta \right\} = o\left( \delta^{2/3} \right) \quad \text{as } \delta \to 0,$

then $x^\dagger = 0$ as long as $\mathrm{Rg}(K)$ is not closed. This shows that Tikhonov regularization for an ill-posed problem with a compact operator never yields a convergence rate faster than $O(\delta^{2/3})$, because the method saturates at this rate; see [10, 7].

In recent years, new types of Tikhonov-based regularization methods have been studied in  and  under the names of fractional or weighted Tikhonov, and in [17, 19], in order to dampen the oversmoothing effect of classical Tikhonov on the regularized solution and to exploit the information carried by the spectrum of the operator. Special attention has been devoted to fractional Tikhonov regularization, studied and extended in [9, 1, 15], while for Hermitian problems the fractional approach was combined with Lavrentiev regularization; see [21, 14].

In this paper we present a generalization of the filters in [15, 17]. We show that the aforementioned methods can be seen as a generalized Tikhonov method of the form

(1.4) $x_{\alpha,f} := \arg\min_{x \in X} \left\{ \|Kx - y\|^2 + \alpha \left\| f(K^*K)^{1/2} x \right\|^2 \right\},$

or equivalently,

(1.5) $R_{\alpha,f}\, y = \left( K^*K + \alpha f(K^*K) \right)^{-1} K^* y,$

where $f : [0, \sigma_1] \to \mathbb{R}$ is a bounded measurable function and the associated filter function has the structure

(1.6) $F_{\alpha,f}(\sigma) = \frac{\sigma^2}{\sigma^2 + \alpha f(\sigma^2)}.$
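A minimal numerical sketch of the filter family (1.6), evaluated for two illustrative choices of $f$: the constant $f \equiv 1$, which recovers the classical Tikhonov filter, and $f(\sigma^2) = \sigma^{1-r}$ (the weighted-I choice discussed below). The grid and parameter values are assumptions.

```python
import numpy as np

# F_{alpha,f}(sigma) = sigma^2 / (sigma^2 + alpha * f(sigma^2))
def filter_alpha_f(sig, alpha, f):
    return sig**2 / (sig**2 + alpha * f(sig**2))

sig = np.linspace(1e-4, 1.0, 1000)
alpha = 1e-2

tikhonov   = filter_alpha_f(sig, alpha, lambda t: np.ones_like(t))  # f = 1
weighted_r = filter_alpha_f(sig, alpha, lambda t: t**((1 - 0.5)/2)) # f(s^2)=s^{1-r}, r=1/2

# f = 1 recovers the classical Tikhonov filter sigma^2/(sigma^2 + alpha)
assert np.allclose(tikhonov, sig**2 / (sig**2 + alpha))
# every filter value lies in [0, 1]; F -> 1 where alpha*f(sigma^2)/sigma^2 is small
assert np.all((0 <= weighted_r) & (weighted_r <= 1))
```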

We investigate the regularization and convergence properties of the filter, and we study conditions under which the regularization of the embedding operator $J_s : H^s([0, 2\pi]) \to L^2([0, 2\pi])$ recovers the regularized solution in $H^t$ with $t \le 2s$, i.e., such that the oversmoothing effect of the Tikhonov filter is reduced and more features of the true solution can be recovered. Moreover, a saturation result for the regularization method (1.4) is proposed. To overcome this saturation, we apply the iterative approach, as was done for the classical iterated Tikhonov method . Employing the techniques in , we introduce the iterative nonstationary variant of the regularization method (1.4) with the function $f$ defined according to the proposal in , and we give sufficient conditions for convergence and stability. Nonstationary versions of Tikhonov methods can provide better restorations, see, e.g., , as also confirmed by our numerical results, and in particular they allow us to avoid the nontrivial problem of finding the optimal parameter $\alpha$.

Iterated Tikhonov with a general penalty term has recently been investigated in , where both convergence and regularization properties are proved. Here we consider general penalty terms that are particular functions of the operator, and we prove stronger results. For instance, we provide the convergence rate in the noise-free case and the regularization property in the noisy case under a weaker condition. Moreover, this paper is to be considered an evolution and integration of , providing new results and a generalization of some of the techniques introduced in that paper.

The paper is organized as follows. In Section 2 we set the basic notation and preliminaries. In Section 3 we introduce the filter functions which we study and provide order-optimality results. In Sections 4 and 5 we investigate the smoothing properties of these methods and their saturation levels, respectively. In Section 6 we present their iterated versions and provide convergence results. In Section 7 we give selected examples which confirm the theoretical analysis, while Section 8 is devoted to concluding remarks.

2. Preliminaries. In this section we recall some classical results on regularization methods that will be used in our subsequent analysis. The main concern is to prescribe sufficient conditions under which a family of approximate solutions converges uniformly to the exact solution with respect to the noise level, with the best possible upper bound on the rate of convergence. All the propositions and theorems that are not new include a reference to the paper where they first appeared. For a detailed description of the following results, see [7, 20].

DEFINITION 2.1 (Generalized inverse). Let $K$ be a compact linear operator with s.v.e. $(\sigma_m; v_m, u_m)_{m \in \mathbb{N}}$. We define the generalized inverse of $K$ as

$K^\dagger y := \sum_{m=1}^{\infty} \sigma_m^{-1}\, \langle y, u_m \rangle\, v_m, \qquad y \in \mathrm{Dom}(K^\dagger),$

where

$\mathrm{Dom}(K^\dagger) := \left\{ y \in Y : \sum_{m=1}^{\infty} \sigma_m^{-2}\, |\langle y, u_m \rangle|^2 < \infty \right\}.$

As discussed in the introduction, we consider the problem (1.1) where only an approximation $y^\delta$ of $y$ satisfying (1.2) is available. Therefore, $x^\dagger = K^\dagger y$ with $y \in \mathrm{Dom}(K^\dagger)$ cannot be approximated by $K^\dagger y^\delta$ due to the unboundedness of $K^\dagger$, and hence in practice the problem (1.1) is approximated by a family $\{R_\alpha\}$ of neighboring well-posed problems , where $R_\alpha$ is continuous for every $\alpha$ and $R_\alpha \to K^\dagger$ pointwise as $\alpha \to 0$. These properties are formalized in the next definition.

DEFINITION 2.2. By a regularization method for $K^\dagger$ we mean any family of operators

$\{R_\alpha\}_{\alpha \in (0, \alpha_0)}, \qquad R_\alpha : Y \to X,$

with the following properties:

(i) $R_\alpha : Y \to X$ is a bounded operator for every $\alpha$.

(ii) For every $y \in \mathrm{Dom}(K^\dagger)$ there exists a mapping $\alpha : \mathbb{R}_+ \times Y \to (0, \alpha_0)$ (i.e., a choice rule), with $\alpha = \alpha(\delta, y^\delta)$, such that

$\lim_{\delta \to 0}\, \sup\left\{ \|R_{\alpha(\delta, y^\delta)}\, y^\delta - K^\dagger y\| : y^\delta \in Y,\ \|y - y^\delta\| \le \delta \right\} = 0$

and

$\lim_{\delta \to 0}\, \sup\left\{ \alpha(\delta, y^\delta) : y^\delta \in Y,\ \|y - y^\delta\| \le \delta \right\} = 0.$

Throughout this paper, $c$ denotes a constant which can change from one instance to the next. For the sake of clarity, if more than one constant appears in the same line or equation, we distinguish them by means of subscripts.

The special regularization methods in which we are interested are the filter-based methods, which have been largely investigated in .

PROPOSITION 2.3 (). Let $K : X \to Y$ be a compact linear operator with s.v.e. $(\sigma_m; v_m, u_m)_{m \in \mathbb{N}}$, let $\sigma(K)$ be the closure of $\{\sigma_m : m \in \mathbb{N}\}$, and let $K^\dagger$ be its generalized inverse. Let $R_\alpha : Y \to X$ be a family of operators defined for every $\alpha \in (0, \alpha_0)$ as

(2.1) $R_\alpha y := \sum_{m=1}^{\infty} \frac{F_\alpha(\sigma_m)}{\sigma_m}\, \langle y, u_m \rangle\, v_m,$

where $F_\alpha : [0, \sigma_1] \supseteq \sigma(K) \to \mathbb{R}$ is a Borel function such that

(2.2a) $\lim_{\alpha \to 0} F_\alpha(\sigma_m) = 1$ for every $m \in \mathbb{N}$,

(2.2b) $|F_\alpha(\sigma_m)| \le c < \infty$, where $c$ does not depend on $(\alpha, m)$,

(2.2c) $\sup_m |F_\alpha(\sigma_m)| / \sigma_m =: c(\alpha) < \infty$ for every fixed $\alpha$.

Then $R_\alpha$ is a regularization method with $\|R_\alpha\| = c(\alpha)$; it is called a filter-based regularization method, and $F_\alpha$ is called a filter function.

For ease of notation, we define

(2.3) $x_\alpha := R_\alpha y, \qquad y \in \mathrm{Dom}(K^\dagger),$

(2.4) $x_\alpha^\delta := R_\alpha y^\delta, \qquad y^\delta \in Y.$

A choice rule $\alpha(\delta, y^\delta)$ that depends only on the noise level $\delta$, i.e., $\alpha(\delta, y^\delta) = \alpha(\delta)$, is called an a-priori choice rule. Since there can be no uniform convergence for any regularization method if $\mathrm{Rg}(K)$ is not closed, see [7, Proposition 3.11], convergence rates can only be proven on subsets of $X$, namely under a-priori assumptions on the exact solution.

We thus recall the definition of optimal order under an a-priori assumption.

DEFINITION 2.4 (Optimal order under an a-priori assumption). For every given $\nu, \rho > 0$, let

$X_{\nu,\rho} := \left\{ x \in X : x = (K^*K)^{\nu/2} w,\ \|w\| \le \rho \right\}.$

A regularization method $R_\alpha$ is called of optimal order under the a-priori assumption $x^\dagger \in X_{\nu,\rho}$ if

(2.5) $\Delta(\delta, X_{\nu,\rho}, R_\alpha) \le c\, \rho^{1/(\nu+1)}\, \delta^{\nu/(\nu+1)},$

where for any set $M \subseteq X$, $\delta > 0$, and a regularization method $R_\alpha$, we define

$\Delta(\delta, M, R_\alpha) := \sup\left\{ \|x^\dagger - x_\alpha^\delta\| : x^\dagger \in M,\ \|y - y^\delta\| \le \delta \right\}.$

If $\rho$ is not known, as is usually the case, then we relax the definition by introducing the set

$X_\nu := \bigcup_{\rho > 0} X_{\nu,\rho} = \mathrm{Rg}\left( (K^*K)^{\nu/2} \right),$

and we say that a regularization method $R_\alpha$ is of optimal order under the a-priori assumption $x^\dagger \in X_\nu$ if

$\Delta(\delta, X_{\nu,\rho}, R_\alpha) = O\left( \delta^{\nu/(\nu+1)} \right) \quad \text{for every fixed } \rho > 0.$

Roughly speaking, depending on the space $X$, the set $X_{\nu,\rho}$ describes the regularity of the exact solution $x^\dagger$ in terms of subdomains of $X$.

Finally, we are ready to state the main result of this section, one of the most important tools that we use for the study of the upcoming new filter methods. The next theorem provides sufficient conditions for a regularization method to be of optimal order under an a-priori assumption, i.e., sufficient conditions for achieving the best upper bound on the convergence rate with respect to the noise level.

THEOREM 2.5 (). Let $K : X \to Y$ be a compact linear operator, let $\nu, \rho > 0$, and let $R_\alpha : Y \to X$ be a filter-based regularization method. If there exists a fixed $\beta > 0$ such that

(2.6a) $\sup\left\{ |F_\alpha(\sigma)| / \sigma : \sigma \in \sigma(K) \right\} \le c_1\, \alpha^{-\beta},$

(2.6b) $\sup\left\{ \sigma^\nu\, |1 - F_\alpha(\sigma)| : \sigma \in \sigma(K) \right\} \le c_2\, \alpha^{\nu\beta},$

then $R_\alpha$ is of optimal order under the a-priori assumption $x^\dagger \in X_{\nu,\rho}$ with the choice rule

$\alpha(\delta) = c \left( \frac{\delta}{\rho} \right)^{\frac{1}{\beta(\nu+1)}}.$
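The effect of an a-priori choice rule can be illustrated on a diagonal toy problem; the model, source element, and noise below are assumptions for the sketch. With the classical Tikhonov filter, $\beta = 1/2$ and $\nu = 2$, the rule gives $\alpha \sim \delta^{2/3}$, and the observed error indeed decays at roughly the rate $\delta^{2/3}$.

```python
import numpy as np

# Diagonal model problem: sigma_m = 1/m; source condition x† = (K*K)^{nu/2} w, nu = 2.
m = 2000
sigma = 1.0 / np.arange(1, m + 1)
w = 1.0 / np.arange(1, m + 1)              # ||w|| bounded
x_true = sigma**2 * w                      # x† = (K*K) w  (nu = 2)
y = sigma * x_true                         # exact data coefficients

errs, deltas = [], [1e-4, 1e-6, 1e-8]
for delta in deltas:
    eta = np.zeros(m); eta[-1] = delta     # noise of norm exactly delta
    alpha = delta ** (2.0 / 3.0)           # a-priori rule alpha ~ delta^{2/3}
    F = sigma**2 / (sigma**2 + alpha)      # Tikhonov filter
    x_reg = F / sigma * (y + eta)
    errs.append(np.linalg.norm(x_reg - x_true))

# observed rate: slope of log(err) versus log(delta), expected ~ 2/3
rates = [np.log(errs[i+1] / errs[i]) / np.log(deltas[i+1] / deltas[i]) for i in range(2)]
assert all(0.55 < q < 0.8 for q in rates)
```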

3. Two generalized Tikhonov methods and a mixing approach. In this section we discuss two recent types of regularization methods which generalize the classical Tikhonov method and which were first introduced and studied in  and , plus a mixing method with the intent to merge the features of both preceding methods. We use the notation $F_{\alpha,\cdot}$ to indicate the new filters, where the dot is replaced by the extra parameter introduced by the respective method. Every method is studied separately to avoid confusion and misunderstandings.

DEFINITION 3.1 (). We define the weighted-I Tikhonov method as the filter-based method

$R_{\alpha,r}\, y := \sum_{m=1}^{\infty} \frac{F_{\alpha,r}(\sigma_m)}{\sigma_m}\, \langle y, u_m \rangle\, v_m,$

where the filter function is

(3.1) $F_{\alpha,r}(\sigma) = \frac{\sigma^{r+1}}{\sigma^{r+1} + \alpha}$

for $\alpha > 0$ and $r \ge 0$. For $r = 1$ the classical Tikhonov filter is recovered.

According to (2.3) and (2.4), we fix the following notation:

$x_{\alpha,r} := R_{\alpha,r}\, y, \qquad y \in \mathrm{Dom}(K^\dagger),$

$x_{\alpha,r}^\delta := R_{\alpha,r}\, y^\delta, \qquad y^\delta \in Y.$

DEFINITION 3.2 (). We define the weighted-II Tikhonov method as the filter-based method

$R_{\alpha,j}\, y := \sum_{m=1}^{\infty} \frac{F_{\alpha,j}(\sigma_m)}{\sigma_m}\, \langle y, u_m \rangle\, v_m,$

where the filter function is

(3.2) $F_{\alpha,j}(\sigma) = \frac{\sigma^2}{\sigma^2 + \alpha \left( 1 - \sigma^2/\sigma_1^2 \right)^j}$

for $\alpha > 0$ and $j \in \mathbb{N}$. For $j = 0$ the classical Tikhonov filter is recovered.

REMARK 3.3. With reference to (1.3), let us observe that for large $j$, the weighted-II filter $F_{\alpha,j}(\sigma)$ is almost $1$ when $\sigma$ belongs to the signal subspace and is almost the standard Tikhonov filter $F_\alpha(\sigma) = \sigma^2 / (\sigma^2 + \alpha)$ when $\sigma$ belongs to the noise subspace. The idea is that in the signal subspace, i.e., when $\sigma \sim \sigma_1$, where the noise error norm is controlled, the regularization is minimal, avoiding as much as possible the approximation error, while the action of the filter function is focused on the noise subspace, i.e., when $\sigma \sim 0$. Roughly speaking, the weighted-II filter acts like a switch for the regularization to take place. As above, we fix the following notation:

$x_{\alpha,j} := R_{\alpha,j}\, y, \qquad y \in \mathrm{Dom}(K^\dagger),$

$x_{\alpha,j}^\delta := R_{\alpha,j}\, y^\delta, \qquad y^\delta \in Y.$
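The switch behavior described in Remark 3.3 can be checked numerically. The concrete filter formula used below is an assumption chosen to match the stated properties ($F \approx 1$ near $\sigma_1$, $F \approx$ Tikhonov near $0$, classical Tikhonov for $j = 0$):

```python
import numpy as np

# Illustrative weighted-II-style "switch" filter (assumed form, sigma_1 = 1):
#   F(sigma) ~ 1 near sigma_1 (signal subspace: minimal regularization),
#   F(sigma) ~ Tikhonov near 0 (noise subspace: full regularization).
def F_weighted2(sig, alpha, j, sigma1=1.0):
    return sig**2 / (sig**2 + alpha * (1 - (sig / sigma1)**2)**j)

sig = np.linspace(1e-3, 1.0, 1001)
alpha, j = 1e-2, 20

Fj = F_weighted2(sig, alpha, j)
F_tik = sig**2 / (sig**2 + alpha)

assert abs(Fj[-1] - 1.0) < 1e-12                      # F = 1 at sigma = sigma_1
assert abs(Fj[0] - F_tik[0]) / F_tik[0] < 0.05        # ~ Tikhonov near sigma = 0
assert np.isclose(F_weighted2(sig, alpha, 0)[500], F_tik[500])  # j = 0 is Tikhonov
```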

Given an operator $W$ on any Hilbert space, if we consider the seminorm $\|\cdot\|_W$ induced by $W$, i.e., $\|x\|_W^2 := \langle Wx, Wx \rangle$, then the weighted-I Tikhonov method can also be defined as the unique minimizer of the functional

(3.3) $\|Kx - y\|^2 + \alpha \|x\|_W^2,$

where the seminorm $\|\cdot\|_W$ is induced by the operator $W = (K^*K)^{(1-r)/4}$. For $r > 1$, $W$ has to be interpreted via the Moore-Penrose pseudo-inverse. Looking for a stationary point of the functional in (3.3), we have that

$\left( K^*K + \alpha (K^*K)^{(1-r)/2} \right) x = K^* y,$

from which we deduce the following expression for the operator $R_{\alpha,r}$:

$R_{\alpha,r} = \left( K^*K + \alpha (K^*K)^{(1-r)/2} \right)^{-1} K^*.$

In the same way, the weighted-II Tikhonov method can be defined as the unique minimizer of the functional

$\|Kx - y\|^2 + \alpha \|x\|_W^2,$

where $W = \left( I - K^*K/\sigma_1^2 \right)^{j/2}$, and by similar calculations as above we deduce that

$R_{\alpha,j} = \left( K^*K + \alpha \left( I - K^*K/\sigma_1^2 \right)^j \right)^{-1} K^*.$

Both these methods can be classified in the more general context of weighted generalized inverse methods, namely as

(3.4) $x_\alpha := \arg\min_{x \in X} \left\{ \|Kx - y\|^2 + \alpha \|\Lambda x\|^2 \right\},$

or

(3.5) $R_\alpha y = \left( K^*K + \alpha \Lambda^* \Lambda \right)^{-1} K^* y,$

where $\Lambda$ is a suitable operator. We do not go into details; for references, see [7, Chapter 8]. We just observe that if $\Lambda^*\Lambda$ and $K^*K$ commute, then, indicating by $(\lambda_n; v_n, u_n)_{n \in \mathbb{N}}$ the s.v.e. of $\Lambda$, the operator (3.5) can be expressed as

(3.6) $R_\alpha y = \sum_{m=1}^{\infty} \frac{\sigma_m}{\sigma_m^2 + \alpha \lambda_m^2}\, \langle y, u_m \rangle\, v_m.$

Now, let $f : [0, \sigma_1] \to [0, \infty)$ be a Borel-measurable function and consider the operator $f(K^*K)$, which commutes with $K^*K$. From equations (3.6), (3.1), and (3.2), it is clear that both the weighted-I and weighted-II filter methods are of the form (3.4) with $\Lambda^*\Lambda = f(K^*K)$ and where

$f(\sigma^2) = \sigma^{1-r} \qquad \text{and} \qquad f(\sigma^2) = \left( 1 - \sigma^2/\sigma_1^2 \right)^j,$

respectively.

respectively. This is the reason that motivated us to rename the original method of Hochsten-bach and Reichel, which appeared in , into weighted-I Tikhonov method and subsequently to rename the method of Huckle and Sedlacek appearing in  into weighted-II Tikhonov method. In this way it is easier to distinguish them from the fractional Tikhonov method of Klann and Ramlau in .

We can now introduce a new mixed method that makes use of both the weighted-I and weighted-II methods by combining their filter functions.

DEFINITION 3.4. Fixing $F_{\alpha,r,j}$ such that

$F_{\alpha,r,j}(\sigma) := \frac{\sigma^2}{\sigma^2 + \alpha f_1(\sigma^2)\, f_2(\sigma^2)},$

where

$f_1(\sigma^2) := \sigma^{1-r}, \qquad f_2(\sigma^2) := \left( 1 - \sigma^2/\sigma_1^2 \right)^j,$

we define the mixed method as the filter-based method

$R_{\alpha,r,j}\, y := \sum_{m=1}^{\infty} \frac{F_{\alpha,r,j}(\sigma_m)}{\sigma_m}\, \langle y, u_m \rangle\, v_m.$

Let us notice that for $r = 1$ and $j = 0$ we recover the standard Tikhonov method.

As above, we use the notation

$x_{\alpha,r,j} := R_{\alpha,r,j}\, y, \qquad y \in \mathrm{Dom}(K^\dagger),$

$x_{\alpha,r,j}^\delta := R_{\alpha,r,j}\, y^\delta, \qquad y^\delta \in Y.$

The idea behind this method is to exploit the features of the weighted-II filter function, as observed in Remark 3.3, and use it as a switch for the regularization. Roughly speaking,

$F_{\alpha,r,j}(\sigma) \sim 1 \ \text{ when } \sigma \sim \sigma_1, \qquad F_{\alpha,r,j}(\sigma) \sim F_{\alpha,r}(\sigma) \ \text{ when } \sigma \sim 0.$

This regularization method satisfies all the nice properties of the weighted-I method that we present below (see Propositions 3.5, 4.1, and 5.1): it is of optimal order, it saturates at the rate $O(\delta^{(r+1)/(r+2)})$, it undersmooths the reconstructed solution for $0 < r < 1$, and it oversmooths the reconstructed solution for $r > 1$. For brevity, we skip most of the details of the proofs of statements regarding the mixed method. Indeed, they can easily be recovered by adapting and combining the techniques used in Propositions 3.5, 3.6, 4.2, and 5.1.
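A quick numerical check of the two asymptotic regimes of the mixed filter; the factorization $f = f_1 f_2$ and both factor formulas below are assumptions for the sketch, chosen to reproduce the described behavior:

```python
import numpy as np

# Sketch of a mixed filter with f = f1*f2 (assumed forms, sigma_1 = 1):
#   f1(s^2) = s^{1-r}  (weighted-I part),  f2(s^2) = (1 - s^2)^j  (switch part).
def F_mixed(sig, alpha, r, j, sigma1=1.0):
    f1 = sig**(1 - r)
    f2 = (1 - (sig / sigma1)**2)**j
    return sig**2 / (sig**2 + alpha * f1 * f2)

def F_weighted1(sig, alpha, r):
    return sig**(r + 1) / (sig**(r + 1) + alpha)

sig = np.linspace(1e-3, 1.0, 1001)
alpha, r, j = 1e-2, 0.5, 20

Fm = F_mixed(sig, alpha, r, j)
assert abs(Fm[-1] - 1.0) < 1e-12                     # ~ 1 near sigma_1
rel = abs(Fm[0] - F_weighted1(sig[0], alpha, r)) / F_weighted1(sig[0], alpha, r)
assert rel < 0.05                                    # ~ weighted-I near sigma = 0
```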

Proceeding further with the analysis, the optimal order of weighted-I Tikhonov regularization was proved in . The following proposition summarizes this result, highlighting the dependence of $\nu$ on $r$, and provides a converse result.

PROPOSITION 3.5 (). Let $K$ be a compact linear operator with infinite-dimensional range. For every given $r \ge 0$, the weighted-I Tikhonov method $R_{\alpha,r}$ is a regularization method of optimal order under the a-priori assumption $x^\dagger \in X_{\nu,\rho}$ with $0 < \nu \le r + 1$. The best possible rate of convergence with respect to $\delta$ is $O(\delta^{(r+1)/(r+2)})$, which is obtained for $\nu = r + 1$. On the other hand, if $\|x^\dagger - x_{\alpha,r}\| = O(\alpha)$, then $x^\dagger \in X_{r+1}$.

The next proposition provides a proof of the optimal order of weighted-II Tikhonov regularization, which instead fails to have a converse result; namely, it is not possible to infer any regularity of $x^\dagger$ from the convergence rate of $\|x^\dagger - x_{\alpha,j}\|$.

PROPOSITION 3.6. Let $K$ be a compact linear operator with infinite-dimensional range. For every given integer $j \ge 0$, the weighted-II Tikhonov method $R_{\alpha,j}$ is a regularization method of optimal order under the a-priori assumption $x^\dagger \in X_{\nu,\rho}$ with $0 < \nu \le 2$. The best possible rate of convergence with respect to $\delta$ is $O(\delta^{2/3})$, which is obtained for $\nu = 2$.

Proof. The validity of (2.2a), (2.2b), and (2.2c) is trivial to verify, with the consequence that weighted-II Tikhonov is a filter-based regularization method. Without loss of generality, we suppose that $\sigma_1 = 1$. The left-hand side of condition (2.6b) takes the form

$\sup\left\{ \sigma^\nu\, |1 - F_{\alpha,j}(\sigma)| : \sigma \in \sigma(K) \right\},$

which is bounded from above by

$\sup_{\sigma \in [0, 1]} g(\sigma), \qquad g(\sigma) := \frac{\alpha\, \sigma^\nu (1 - \sigma^2)^j}{\sigma^2 + \alpha (1 - \sigma^2)^j}.$

Let us study the function $g(\sigma)$. By taking the derivative, we observe that $\sigma_*$ is a maximal point of $g$ if and only if $\nu \in [0, 2]$ and $\sigma_*$ satisfies the equation

[mathematical expression not reproducible]

It is not difficult to see that there exists only one such $\sigma_* \in [0, 1]$; see Figure 3.1. Indeed, the functions

[mathematical expression not reproducible]

are such that $\psi_1' \le 0$ and $\psi_2' > 0$ on $(0, 1)$. Therefore, if we define

$\varphi(\sigma) := \psi_2(\sigma) - \psi_1(\sigma),$

and fix $\alpha_0 < 2/(2 - \nu)$ in the case $j = 1$, it holds that

[mathematical expression not reproducible]

By standard calculus arguments, there exists only one $\sigma_*$ with $\varphi(\sigma_*) = 0$, namely the value $\sigma_* = \sigma_*(\alpha)$, where

[mathematical expression not reproducible]

If we let $\alpha \in (0, \alpha_0)$ with $\alpha_0 < \infty$, then necessarily $\sigma_*$ lies in a closed subinterval of $[0, 1)$. Since $h(\sigma_*) = 0$ if and only if $\sigma_* = 1$, $h(\sigma_*)$ is uniformly bounded away from $0$, i.e., $h(\sigma_*) \in [c, 1]$ with $c = c(\alpha_0, j, \nu) > 0$ independent of $\alpha$. Hence, we can write

[mathematical expression not reproducible]

with $c \in (0, 1]$ bounded away from $0$ uniformly in $\alpha$, and then we have

(3.7) $\sup_{\sigma \in \sigma(K)} \sigma^\nu\, |1 - F_{\alpha,j}(\sigma)| \le c\, \alpha^{\nu/2},$

which is (2.6b) with $\beta = 1/2$. The validity of (2.6a) is shown again by studying the function $g$ with $\nu = 1$ fixed. Therefore, from Theorem 2.5, as long as $0 < \nu \le 2$, if $x^\dagger \in X_{\nu,\rho}$, then the method achieves order optimality (2.5), and the best possible rate of convergence obtainable with respect to $\delta$ is $O(\delta^{2/3})$, attained for $\nu = 2$.

REMARK 3.7. Observe that the optimal order of weighted-II Tikhonov is independent of the auxiliary parameter $j$.

COROLLARY 3.8. Let $K$ be a compact linear operator with infinite-dimensional range. For every given $r > 0$ and $j \in \mathbb{N}$, the mixed method $R_{\alpha,r,j}$ is a regularization method of optimal order under the a-priori assumption $x^\dagger \in X_{\nu,\rho}$ with $0 < \nu \le r + 1$. The best possible rate of convergence with respect to $\delta$ is $O(\delta^{(r+1)/(r+2)})$, which is obtained for $\nu = r + 1$. On the other hand, if $\|x^\dagger - x_{\alpha,r,j}\| = O(\alpha)$, then $x^\dagger \in X_{r+1}$.

Proof. Exploiting the structure of $f = f_1 f_2$ with $f_1$ and $f_2$ as in Definition 3.4, the proof can be adapted without difficulty by combining the techniques used in the proofs of Proposition 3.6 and [1, Proposition 10].

4. Smoothing effect. In this section we deal with the oversmoothing property that affects the classical Tikhonov regularization method. Indeed, it has been observed that the approximate solution is smoother than the true solution, i.e., it lives in a space of higher regularity. We will see that the weighted-I and fractional filters can overcome the oversmoothing effect by a proper choice of their extra regularization parameters. In order to understand this kind of behavior easily, we restrict our study to the fractional Sobolev spaces $H^s$ of one-dimensional functions.

Let $\Omega = [0, 2\pi]$ and let $J_s : H^s(\Omega) \to L^2(\Omega)$ be the embedding operator of the Hilbert space $H^s(\Omega)$ with $s \in (0, \infty)$. $J_s$ is compact with s.v.e. given by

$\sigma_m = (1 + m^2)^{-s/2}, \qquad u_m(t) = \frac{e^{\mathrm{i} m t}}{\sqrt{2\pi}}, \qquad v_m = (1 + m^2)^{-s/2}\, u_m, \qquad m \in \mathbb{Z}.$

Let us consider the following equation:

(4.1) $J_s x = y.$

Since $J_s$ is compact, and hence the problem is ill-posed, we regularize it by introducing a family of filter functions, obtaining the regularized solutions

$x_\alpha^\delta = \sum_{m \in \mathbb{Z}} \frac{F_\alpha(\sigma_m)}{\sigma_m}\, \langle y^\delta, u_m \rangle\, v_m.$

The true solution $x^\dagger$, i.e., the solution of $J_s x^\dagger = y$, belongs to $H^s$. We consider the regularity of the approximate solution $x_\alpha^\delta$ when dealing with general inexact data $y^\delta \in L^2$. The next propositions clarify the $H^p$ spaces in which the approximate solutions live, according to the filter method.

PROPOSITION 4.1 (). For data $y^\delta \in L^2(\Omega)$, the approximate solution $x_{\alpha,r}^\delta$ of the weighted-I Tikhonov filter for equation (4.1) belongs to $H^{s(r+1)}(\Omega)$.

PROPOSITION 4.2. For data $y^\delta \in L^2(\Omega)$, the approximate solution $x_{\alpha,j}^\delta$ of the weighted-II Tikhonov filter for the problem (4.1) belongs to $H^{2s}(\Omega)$ for any $j \in \mathbb{N}$.

Proof. We have that

$y^\delta = \sum_{m \in \mathbb{Z}} \langle y^\delta, u_m \rangle\, u_m.$

Then, the Fourier coefficients of $x_{\alpha,j}^\delta$ with respect to the basis $\{u_m\}$ are given by

$\widehat{(x_{\alpha,j}^\delta)}_m = F_{\alpha,j}(\sigma_m)\, \langle y^\delta, u_m \rangle,$

from which we can calculate the $H^p$-norm,

$\|x_{\alpha,j}^\delta\|_{H^p}^2 = \sum_{m \in \mathbb{Z}} (1 + m^2)^p\, |F_{\alpha,j}(\sigma_m)|^2\, |\langle y^\delta, u_m \rangle|^2.$

Setting $w := (1 + m^2)^{-s} \in (0, 2^{-s}]$, it is not difficult to prove that the function

$w \mapsto w^{-p/s}\, |F_{\alpha,j}(\sqrt{w})|^2$

is bounded for $p \le 2s$ and

$\sup_{w \in (0,\, 2^{-s}]} w^{-p/s}\, |F_{\alpha,j}(\sqrt{w})|^2 =: c(\alpha) < \infty.$

Therefore, since

$(1 + m^2)^p\, |F_{\alpha,j}(\sigma_m)|^2 = w^{-p/s}\, |F_{\alpha,j}(\sqrt{w})|^2,$

we can conclude that

$\|x_{\alpha,j}^\delta\|_{H^p}^2 \le c(\alpha) \sum_{m \in \mathbb{Z}} |\langle y^\delta, u_m \rangle|^2 = c(\alpha)\, \|y^\delta\|_{L^2}^2.$

The right-hand side of the above inequality is bounded for every data $y^\delta \in L^2(\Omega)$ if $p \le 2s$.

In Proposition 4.1, for $r = 1$, we recover the classical Tikhonov filter and its oversmoothing property, i.e., if the true solution $x^\dagger \in H^s$, then the approximate solution $x_\alpha^\delta \in H^{2s}$. Therefore, compared to the standard Tikhonov filter, the weighted-I Tikhonov filter smooths the approximate solution less for every $0 < r < 1$; we say that it undersmooths the approximate solution. The weighted-II Tikhonov filter instead does not provide any undersmoothing effect for any $j \in \mathbb{N}$.
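The (under)smoothing thresholds can be checked numerically: membership of the regularized solution in $H^p$ for all $L^2$ data is governed by the boundedness of the multiplier $(1+m^2)^{p/2} F(\sigma_m)$. The weighted-I filter is taken in the form $\sigma^{r+1}/(\sigma^{r+1}+\alpha)$ (with $r=1$ giving classical Tikhonov); the discretization parameters are assumptions.

```python
import numpy as np

# Embedding J_s: sigma_m = (1+m^2)^{-s/2}. The regularized solution lies in H^p
# for ALL data y^delta in L^2 iff (1+m^2)^{p/2} * F(sigma_m) stays bounded in m.
s, alpha = 1.0, 1e-3
m = np.arange(1, 200001, dtype=float)
sigma = (1 + m**2) ** (-s / 2)

def multiplier(p, r):
    F = sigma**(r + 1) / (sigma**(r + 1) + alpha)   # weighted-I filter
    return (1 + m**2) ** (p / 2) * F

# Tikhonov (r = 1): bounded at p = 2s, matching the H^{2s} oversmoothing.
assert multiplier(2 * s, 1.0).max() <= 1 / alpha + 1
# weighted-I with r = 1/2: bounded at p = s(r+1) = 1.5 s ...
assert multiplier(1.5 * s, 0.5).max() <= 1 / alpha + 1
# ... but still growing at p = 2s: the solution is undersmoothed relative to Tikhonov.
grow = multiplier(2 * s, 0.5)
assert grow[-1] > grow[len(grow) // 2] and grow[-1] > 100 / alpha
```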

5. Saturation results. The following proposition deals with a saturation property similar to a well-known result for classical Tikhonov, cf. [7, Proposition 5.3]. We extend it to generalized Tikhonov methods of the form (1.4) with the penalty term measured by the operator-dependent seminorm $\|\cdot\|_{f(K^*K)^{1/2}}$, where $f : [0, \sigma_1] \to [0, \infty)$ is a Borel-measurable function such that the corresponding filter function (1.6) satisfies the properties in (2.2).

Again, we fix the following notation:

$x_{\alpha,f} := R_{\alpha,f}\, y, \qquad y \in \mathrm{Dom}(K^\dagger), \qquad\qquad x_{\alpha,f}^\delta := R_{\alpha,f}\, y^\delta, \qquad y^\delta \in Y.$

PROPOSITION 5.1 (Saturation for weighted Tikhonov regularization). Let $K : X \to Y$ be a compact linear operator with infinite-dimensional range, and let $R_{\alpha,f}$ be the corresponding family of regularization operators as in equation (1.5). Let $\alpha = \alpha(\delta, y^\delta)$ be any parameter choice rule, and let

(5.1) $\frac{\sigma^2}{f(\sigma^2)} \sim c\, \sigma^s \quad \text{as } \sigma \to 0,$

with $c, s > 0$. If

(5.2) $\sup\left\{ \|x^\dagger - x_{\alpha(\delta, y^\delta), f}^\delta\| : \|Qy - y^\delta\| \le \delta \right\} = o\left( \delta^{s/(s+1)} \right) \quad \text{as } \delta \to 0,$

then $x^\dagger = 0$, where $Q$ is the orthogonal projector onto $\overline{\mathrm{Rg}(K)}$.

Proof. Define

$\delta_m := \sigma_m^{s+1}, \qquad y_m := y + \delta_m u_m, \qquad \alpha_m := \alpha(\delta_m, y_m).$

By the assumption that $K$ has infinite-dimensional range, it follows that $\lim_{m \to \infty} \sigma_m = 0$.

According to the expression of $R_{\alpha,f}$ in (1.5), we have

$x_{\alpha_m, f}^{\delta_m} - x_{\alpha_m, f} = R_{\alpha_m, f}(y_m - y) = \frac{\delta_m\, \sigma_m}{\sigma_m^2 + \alpha_m f(\sigma_m^2)}\, v_m.$

Hence, by (1.6), it holds that

[mathematical expression not reproducible]

Since $F_{\alpha,f}$ satisfies (2.2a), we can deduce that $f$ cannot be identically zero on any interval of the form $[0, \lambda]$, and therefore it is possible to divide by $f(\sigma_m^2)$ if we pass to a suitable subsequence $\{\sigma_{m_n}\}$. Without loss of generality, we assume $\{\sigma_m\} = \{\sigma_{m_n}\}$. Then, we have that

[mathematical expression not reproducible]

and, passing to the limsup, recalling assumption (5.1) and that $\delta_m = \sigma_m^{s+1}$, we get

(5.3) [mathematical expression not reproducible]

where $c$ is a positive constant.

Thanks to equation (1.5), we have

[mathematical expression not reproducible]

thus,

[mathematical expression not reproducible]

By hypothesis, $\lim_{\sigma \to 0} \sigma^2 / f(\sigma^2) = \lim_{\sigma \to 0} c\, \sigma^s = 0$, and by item (ii) in Definition 2.2, it holds that

$\lim_{m \to \infty} \alpha_m = \lim_{m \to \infty} \alpha(\delta_m, y_m) = 0;$

in particular, $\{\alpha_m\}$ is uniformly bounded. Hence,

[mathematical expression not reproducible]

and then

(5.4) [mathematical expression not reproducible]

Since $\delta_m = \sigma_m^{s+1}$ and, again, $\lim_{m \to \infty} \sigma_m^2 / f(\sigma_m^2) = \lim_{m \to \infty} c\, \sigma_m^s$, it follows from (5.4) that

[mathematical expression not reproducible]

Then, if $x^\dagger \ne 0$,

(5.5) [mathematical expression not reproducible]

because by assumption, [??].

Hence, the second term on the right-hand side of (5.3) tends to 1. Since by assumption the left-hand side of (5.3) tends to 0, we obtain

[mathematical expression not reproducible]

Now, from (5.2), the same quantity computed along $\{\delta_m\}$ tends to $0$ as well, so that if $x^\dagger \ne 0$, we obtain from (5.5) applied to the preceding inequality the contradiction $0 \ge c > 0$. Hence, $x^\dagger = 0$.

Note that for $f(\sigma^2) \equiv 1$ (classical Tikhonov), the previous proposition gives exactly Proposition 5.3 in  with $s = 2$.

For $f(\sigma^2) = \sigma^{1-r}$ and $f(\sigma^2) = (1 - \sigma^2/\sigma_1^2)^j$, we recover saturation results for the weighted-I and weighted-II regularization methods, respectively. Indeed,

$\frac{\sigma^2}{\sigma^{1-r}} = \sigma^{r+1} \qquad \text{and} \qquad \frac{\sigma^2}{(1 - \sigma^2/\sigma_1^2)^j} \sim \sigma^2 \quad \text{as } \sigma \to 0.$

We can state the following corollaries.

COROLLARY 5.2 (). With the same notation as in Proposition 5.1, let $R_{\alpha,r}$ be the family of regularization operators as in Definition 3.1. If

$\sup\left\{ \|x^\dagger - x_{\alpha(\delta, y^\delta), r}^\delta\| : \|Qy - y^\delta\| \le \delta \right\} = o\left( \delta^{(r+1)/(r+2)} \right) \quad \text{as } \delta \to 0,$

then $x^\dagger = 0$.

Observe that by taking a large value of $r$, it is possible to overcome the saturation level of classical Tikhonov and obtain a convergence rate arbitrarily close to $O(\delta)$.
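A toy experiment illustrating this claim; the diagonal model problem and all parameters are assumptions for the sketch. With $r = 3$, the source condition $x^\dagger \in X_{r+1}$, and the a-priori rule $\alpha \sim \delta^{(r+1)/(r+2)}$, the observed rate is close to $\delta^{4/5}$, beyond the Tikhonov saturation $\delta^{2/3}$.

```python
import numpy as np

# Weighted-I Tikhonov with r = 3 on a diagonal model problem.
m = 2000
r = 3.0
sigma = 1.0 / np.arange(1, m + 1)
w = 1.0 / np.arange(1, m + 1)
x_true = sigma**(r + 1) * w                # x† = (K*K)^{(r+1)/2} w
y = sigma * x_true

errs, deltas = [], [1e-4, 1e-6, 1e-8]
for delta in deltas:
    eta = np.zeros(m); eta[-1] = delta     # noise of norm exactly delta
    alpha = delta ** ((r + 1) / (r + 2))   # a-priori choice rule
    F = sigma**(r + 1) / (sigma**(r + 1) + alpha)   # weighted-I filter
    x_reg = F / sigma * (y + eta)
    errs.append(np.linalg.norm(x_reg - x_true))

rates = [np.log(errs[i+1] / errs[i]) / np.log(deltas[i+1] / deltas[i]) for i in range(2)]
assert all(0.7 < q < 0.9 for q in rates)   # ~ 0.8, i.e., faster than 2/3
```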

COROLLARY 5.3. With the same notation as in Proposition 5.1, let $R_{\alpha,j}$ be the family of regularization operators as in Definition 3.2. If

$\sup\left\{ \|x^\dagger - x_{\alpha(\delta, y^\delta), j}^\delta\| : \|Qy - y^\delta\| \le \delta \right\} = o\left( \delta^{2/3} \right) \quad \text{as } \delta \to 0,$

then $x^\dagger = 0$.

In this case, instead, weighted-II Tikhonov saturates at the same level as classical Tikhonov, independently of the choice of the parameter $j$.

COROLLARY 5.4. With the same notation of Proposition 5.1, let [R.sub.[alpha],r,j] be the family of regularization operators as in Definition 3.4. If

[mathematical expression not reproducible]

then [x.sup.[dagger]] = 0.

Proof. This follows immediately from

[mathematical expression not reproducible]

6. Iterated weighted-II Tikhonov regularization. To bypass the saturation property observed in Section 5, we now propose an iterated regularization method based on the weighted-II Tikhonov filter (see Proposition 6.3), following the same approach adopted in . We first investigate the stationary case and then study its nonstationary version, which avoids the crucial and delicate choice of the regularization parameter. Note that the iterated version of the weighted-I Tikhonov method was already studied in .

6.1. Stationary iterated weighted-II Tikhonov. We start by stating a general definition of stationary iterated Tikhonov methods with operator-dependent norms.

DEFINITION 6.1. Let f : [0, [[sigma].sub.1]] [right arrow] [0, [infinity]) be a Borel-measurable function such that [??] satisfies conditions (2.2a), (2.2b), and (2.2c). We define the stationary iterated general weighted Tikhonov method as

(6.1) [mathematical expression not reproducible]

or equivalently,

[mathematical expression not reproducible]

We define [x.sup.n,[delta].sub.[alpha],f] as the n-th iteration of (6.1) whenever y is replaced by [y.sup.[delta]].

Then, by applying the above definition to the special case [??], we have the following new definition.

DEFINITION 6.2 (SIWT-II). We define the stationary iterated weighted-II Tikhonov method (SIWT-II) as a stationary iterated general weighted Tikhonov method with the function [??], namely,

(6.2) [mathematical expression not reproducible]

with [alpha] > 0 and j [member of] N, or equivalently,

[mathematical expression not reproducible]

We define [x.sup.n,[delta].sub.[alpha],j] as the n-th iteration of (6.2) whenever y is replaced by [y.sup.[delta]].
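To make the iteration concrete, the following sketch implements recursion (6.2) in Python/NumPy; the experiments in Section 7 were run in Matlab, so this fragment and all names in it (e.g., `siwt2`) are our own. It assumes a discretized operator K scaled so that [parallel] K [parallel] [less than or equal to] 1 and uses the weighted-II choice f(K*K) = (I - K*K)^j.

```python
import numpy as np

def siwt2(K, y, alpha, j, n_iter):
    """Stationary iterated weighted-II Tikhonov (sketch).

    Each step solves
        (K^T K + alpha (I - K^T K)^j) x_n = K^T y + alpha (I - K^T K)^j x_{n-1},
    i.e., iteration (6.2) with f(K^T K) = (I - K^T K)^j,
    assuming K has been scaled so that ||K|| <= 1.
    """
    m = K.shape[1]
    KtK = K.T @ K
    W = np.linalg.matrix_power(np.eye(m) - KtK, j)   # f(K^T K) = (I - K^T K)^j
    A = KtK + alpha * W
    Kty = K.T @ y
    x = np.zeros(m)                                  # zero initial guess
    for _ in range(n_iter):
        x = np.linalg.solve(A, Kty + alpha * (W @ x))
    return x
```

On exact data the iterates approach the minimal-norm solution as n grows, which is the behavior Proposition 6.3 quantifies.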

PROPOSITION 6.3. For any given n,j [member of] N, the SIWT-II in (6.2) is a filter-based regularization method with filter function

[mathematical expression not reproducible]

Moreover, the method is of optimal order under the a-priori assumption [x.sup.[dagger]] [member of] [X.sub.v,[rho]], for j [member of] N and 0 < v [less than or equal to] 2n, with the best convergence rate [??], which is obtained for [??] with v = 2n.

REMARK 6.4. Let us observe that for n = 1 we recover the convergence rate of O [??] in Proposition 3.6.

Proof. Multiplying both sides of (6.1) by [(K*K + [alpha]f(K*K)).sup.n-1], iterating the process, and using the fact that f(K*K) and K*K commute, see [23, Section 12.24, p. 326], we get

[mathematical expression not reproducible]

where we have used the well-known formula [??]. Therefore, the filter function in (2.1) is equal to

[mathematical expression not reproducible]

Setting

[mathematical expression not reproducible]

we obtain the filter function for the SIWT-II, i.e.,

[mathematical expression not reproducible]

Again, condition (2.2c) is straightforward to verify. Moreover, we recover the following relation

[mathematical expression not reproducible]

from which it follows that

[mathematical expression not reproducible]

Therefore, when we specialize the general function f in the above inequalities to that in Definition 3.2, conditions (2.2a), (2.2b), and (2.6a) follow immediately for every j [member of] N thanks to Proposition 3.6. Finally, condition (2.6b) becomes

[mathematical expression not reproducible]

and then, by using the same approach as in (3.7), it is easy to verify that the last term in the above inequality is bounded by [[alpha].sup.[beta]v] with [beta] = 1/2 if and only if 0 < v [less than or equal to] 2n. The remaining assertions follow from an application of Theorem 2.5.
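As a numerical sanity check of the proof above, one can iterate the scalar (spectral) form of recursion (6.1) with exact data and compare the n-th iterate against 1 - ([alpha]f([[sigma].sup.2])/([[sigma].sup.2] + [alpha]f([[sigma].sup.2])))^n, which is our reading of the filter factor derived here. A small Python sketch (names and setup are ours):

```python
def filter_by_recursion(sigma, alpha, f, n):
    """Run the spectral form of (6.1) with exact data y-hat = sigma
    (i.e., x-hat(dagger) = 1), starting from 0; the n-th iterate
    then equals the filter factor at sigma."""
    s2 = sigma**2
    af = alpha * f(s2)
    x = 0.0
    for _ in range(n):
        x = (s2 + af * x) / (s2 + af)
    return x

def filter_closed_form(sigma, alpha, f, n):
    """Closed form 1 - (alpha f / (sigma^2 + alpha f))^n."""
    s2 = sigma**2
    af = alpha * f(s2)
    return 1.0 - (af / (s2 + af)) ** n
```

For any 0 < [sigma] [less than or equal to] 1 the closed form tends to 1 as n grows, consistent with condition (2.2c).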

6.2. Nonstationary iterated weighted Tikhonov regularization. We introduce a non-stationary version of the iteration (6.2). We study convergence, and we prove that the new iteration is a regularization method.

DEFINITION 6.5. Let [{[[alpha].sub.n]}.sub.n[member of]N] [subset] [R.sub.>0] be a sequence of positive real numbers, and let [f.sub.n] : [0, [[sigma].sub.1]] [right arrow] [0, [infinity]) be a sequence of Borel-measurable functions such that [??] satisfies (2.2a), (2.2b), and (2.2c) for every n. We introduce the following nonstationary generalized weighted Tikhonov method,

(6.3) [mathematical expression not reproducible]

or equivalently,

[mathematical expression not reproducible]

DEFINITION 6.6 (NSIWT-II). Let [{[[alpha].sub.n]}.sub.n[member of]N] [subset] [R.sub.>0] and [{[j.sub.n]}.sub.n[member of]N] [subset] N be sequences of positive real numbers and positive integers, respectively. We define the nonstationary iterated weighted-II Tikhonov method (NSIWT-II) as the nonstationary generalized weighted Tikhonov method with [??], namely,

(6.4) [mathematical expression not reproducible]

or equivalently,

[mathematical expression not reproducible]

6.2.1. Convergence analysis. We are interested in conditions for the sequence {[[alpha].sub.n]} such that the iteration (6.4) converges.

Let [??] be the spectral decomposition of the self-adjoint operator K*K, and let [sigma](K*K) denote its spectrum. Then, from well-known facts of functional analysis , we can write [??], where f : [sigma](K*K) [subset] R [right arrow] C is a Borel-measurable function and (E[x.sub.1], [x.sub.2]) is a regular complex Borel measure for every [x.sub.1], [x.sub.2] [member of] X. Hereafter, without loss of generality, we assume [[sigma].sub.1] = 1, i.e., [parallel] K [parallel] = 1.

THEOREM 6.7. For every [x.sup.[dagger]] [member of] X, the method (6.3) converges to [x.sup.[dagger]] as n [right arrow] [infinity] if and only if

[mathematical expression not reproducible]

diverges for every [sigma] [member of] [sigma](K) \ {0}.

Proof. Rewriting equation (6.3) using y = K[x.sup.[dagger]], we have

[mathematical expression not reproducible]

from which it follows that

(6.5) [mathematical expression not reproducible]

where for convenience we set [??]. As a consequence, the method converges for every [x.sup.[dagger]] if and only if

[mathematical expression not reproducible]

for every [x.sup.[dagger]] [member of] X, i.e., if and only if

[mathematical expression not reproducible]

for every Borel measure (E[x.sup.[dagger]], [x.sup.[dagger]]) induced by [x.sup.[dagger]] [member of] X. Since

[mathematical expression not reproducible]

for every n and since

[mathematical expression not reproducible]

the dominated convergence theorem implies the following equality

[mathematical expression not reproducible]

Hence, the method is convergent for every [x.sup.[dagger]] [member of] X if and only if

[mathematical expression not reproducible]

for (E[x.sup.[dagger]], [x.sup.[dagger]])-a.e. [[sigma].sup.2] and every induced Borel measure (E[x.sup.[dagger]], [x.sup.[dagger]]), i.e., for every [sigma] [member of] [sigma](K)\{0}. Now using the fact that

[[PI].sub.k[member of]N] (1 - [t.sub.k]) = 0 if and only if [[SIGMA].sub.k[member of]N] [t.sub.k] = [infinity]

whenever [{[t.sub.k]}.sub.k[member of]N] is a sequence of positive real numbers such that 0 [less than or equal to] [t.sub.k] < 1 (), the assertion follows.
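The infinite-product fact invoked at the end of the proof is easy to illustrate numerically: for 0 [less than or equal to] [t.sub.k] < 1, the partial products of (1 - [t.sub.k]) tend to zero precisely when the series of the [t.sub.k] diverges. A minimal Python illustration with the model sequences [t.sub.k] = 1/k and [t.sub.k] = 1/[k.sup.2] (our own example, not from the paper):

```python
import numpy as np

# Partial products prod_{k=2}^{N}(1 - t_k) for two model sequences:
#   t_k = 1/k    -> sum diverges  -> product tends to 0,
#   t_k = 1/k^2  -> sum converges -> product tends to a positive limit
#                   (telescoping gives the limit 1/2 here).
k = np.arange(2, 2000)
p_div = float(np.prod(1.0 - 1.0 / k))      # -> 0 as N grows
p_conv = float(np.prod(1.0 - 1.0 / k**2))  # -> 1/2 as N grows
```

The first product telescopes to 1/N and vanishes, while the second stays bounded away from zero, matching the divergence criterion of Theorem 6.7.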

COROLLARY 6.8. For every [x.sup.[dagger]] [member of] X, the method (6.4) converges to [x.sup.[dagger]] as n [right arrow] [infinity] if and only if

[mathematical expression not reproducible]

diverges for every [sigma] [member of] [sigma](K) \ {0}.

COROLLARY 6.9. If [??] diverges, then the NSIWT-II method converges.

Proof. Let us observe that

[mathematical expression not reproducible]

Hence, if the left-hand side of the above inequality diverges for every [sigma] [member of] [sigma](K) \ {0}, the result follows.

If [??], then three different cases can occur:

[mathematical expression not reproducible]

In the first two cases, [??] for every [sigma] > 0, and then the corresponding series diverges. In the third case, [??] for every [sigma] > 0, and hence the series [??] and [??] converge or diverge simultaneously by the asymptotic comparison test. Then, by [??], we deduce that [??] diverges for every [sigma] > 0, and the NSIWT-II method converges.

Now, we investigate the convergence rate of NSIWT-II.

THEOREM 6.10. Let [??] be a convergent sequence of the nonstationary generalized weighted Tikhonov method (6.3) with [x.sup.[dagger]] [member of] [X.sub.v] for some v > 0, and let [??] be a divergent sequence of positive real numbers. If

(6.6a) [mathematical expression not reproducible]

(6.6b) [mathematical expression not reproducible]

then

[mathematical expression not reproducible]

Proof. From equation (6.5), for [x.sup.[dagger]] [member of] [X.sub.v], we have

[mathematical expression not reproducible]

by (6.6b) and the dominated convergence theorem. Now the assertion follows from hypothesis (6.6a).

For example, in the case of nonstationary iterated Tikhonov, i.e., for [f.sub.k]([[sigma].sup.2]) = 1, setting [??], the conditions (6.6a) and (6.6b) are satisfied with [??]. However, this is not always the best convergence rate. Indeed, in practice a common choice for {[[alpha].sub.k]} is the geometric sequence [[alpha].sub.k] = [[alpha].sub.0][q.sup.k] with 0 < q < 1 and [[alpha].sub.0] fixed, which provides a convergence rate of the form O([q.sup.n]). We refer the interested reader to ,  and [1, Corollary 27] for more details.
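To see the effect of the geometric choice numerically: for [f.sub.k] = 1 and exact data, the spectral error of the nonstationary iteration after n steps is damped by the factor [[PI].sub.k<n] [[alpha].sub.k]/([[sigma].sup.2] + [[alpha].sub.k]), which decays rapidly in n when [[alpha].sub.k] = [[alpha].sub.0][q.sup.k]. A short Python sketch (our own illustration; parameter values are arbitrary):

```python
import numpy as np

def nsit_error_factor(sigma, alphas):
    """Spectral error damping factor prod_k alpha_k / (sigma^2 + alpha_k)
    of nonstationary iterated Tikhonov (f_k = 1) on exact data."""
    s2 = sigma**2
    return float(np.prod([a / (s2 + a) for a in alphas]))

# Geometric sequence alpha_k = alpha_0 q^k, evaluated at a fixed sigma.
alpha0, q, sigma = 0.01, 0.7, 0.5
factors = [nsit_error_factor(sigma, alpha0 * q ** np.arange(n))
           for n in (5, 10, 20)]
```

Each additional step multiplies the factor by [[alpha].sub.n]/([[sigma].sup.2] + [[alpha].sub.n]) < 1, so the damping is strictly monotone in n and collapses quickly once [[alpha].sub.n] falls below [[sigma].sup.2].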

Contextualizing the above theorem to the case of NSIWT-II, we have the following corollary.

COROLLARY 6.11. If conditions (6.6a) and (6.6b) are satisfied for

[mathematical expression not reproducible]

then we have

[mathematical expression not reproducible]

6.2.2. Analysis of convergence for perturbed data. Now we consider the convergence of the NSIWT-I/II methods when the initial datum y is perturbed. Again, we first prove a more general statement involving the method (6.3) with initial datum [y.sup.[delta]], and then the convergence results for perturbed data in the NSIWT-I/II cases follow as corollaries. We use the notation [??] for the solution of the method (6.3) with perturbed initial datum [y.sup.[delta]].

THEOREM 6.12. Under the assumptions of Theorem 6.7, let [f.sub.k] : [0, [[sigma].sub.1]] [right arrow] R be such that

(6.7) [mathematical expression not reproducible]

If {[[delta].sub.n]} is a sequence convergent to 0 with [[delta].sub.n] [greater than or equal to] 0 such that

(6.8) [mathematical expression not reproducible]

then,

[mathematical expression not reproducible]

Proof. From the definition of the method (6.3), for every given m [less than or equal to] n, we find that

[mathematical expression not reproducible]

thus,

[mathematical expression not reproducible]

Hence, by induction, setting [g.sub.i] (K*K) = [[alpha].sub.i] [[K*K + [[alpha].sub.i] [f.sub.i] (K*K)].sup.-1] [f.sub.i] (K*K) for every fixed n, we have

[mathematical expression not reproducible]

Since

[mathematical expression not reproducible]

and

[mathematical expression not reproducible]

it follows that

[mathematical expression not reproducible]

Finally, by Theorem 6.7, [??] as n [right arrow] [infinity], and therefore by equation (6.8), [??] for n [right arrow] [infinity].

COROLLARY 6.13. Under the assumptions of Corollary 6.9, let {[[alpha].sub.k]}, {[j.sub.k]} be such that [j.sub.k] [less than or equal to] [[alpha].sub.k.sup.-1]. If {[[delta].sub.n]} is a sequence convergent to 0 with [[delta].sub.n] [greater than or equal to] 0 such that

[mathematical expression not reproducible]

then the NSIWT-II method with perturbed data is convergent, that is,

[mathematical expression not reproducible]

Proof. Applying Theorem 6.12 with

[f.sub.k]([[sigma].sup.2]) = [(1 - [[sigma].sup.2]).sup.[j.sub.k]],

we have that

[mathematical expression not reproducible]

Therefore

[mathematical expression not reproducible]

Hence, hypothesis (6.7) is satisfied, and the assertion follows at once.

6.3. Iterated mixed method. In the same spirit that led us to introduce the filter method in Definition 3.4, we give the following definition of an iterated method that combines the iterated weighted-I and the iterated weighted-II methods.

DEFINITION 6.14 (NSM). Let [{[[alpha].sub.n]}.sub.n[member of]N], [{[r.sub.n]}.sub.n[member of]N] [subset] [R.sub.>0], and [{[j.sub.n]}.sub.n[member of]N] [subset] N be sequences of positive real numbers and positive integers, respectively. We define the nonstationary iterated mixed method (NSM) as a nonstationary generalized Tikhonov method (see Definition 6.5) with

[mathematical expression not reproducible].

We omit the convergence analysis, as it can be recovered without great effort by adapting the proofs of Proposition 6.3, Corollaries 6.8, 6.9, and 6.13, and Theorem 6.10.

7. Numerical examples. Here we present some numerical examples, in the direct and iterated cases, spanning all the methods we discussed in the previous sections.

To produce our results we used Matlab R2015a on a desktop PC with an Intel Core i5-4460 processor and 8 GB of RAM, running Windows 8.1. In all examples, we add the noise vector [eta] to the noise-free right-hand side y, where [eta] has normally distributed pseudorandom entries with zero mean and is normalized to correspond to a chosen noise level

[xi] = [parallel] [eta] [parallel] / [parallel] y [parallel].

As a stopping criterion for the iterative methods, we use the discrepancy principle  that terminates the iterative method at the iteration

[mathematical expression not reproducible]

i.e., this criterion stops the iterations when the norm of the residual reaches the norm of the noise so that the latter does not dominate the reconstructed solution. All the iterative methods are initialized with the zero vector.
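For concreteness, this stopping rule can be sketched around the nonstationary iterated Tikhonov method ([f.sub.k] = 1) with a geometric parameter sequence as in the experiments; the Python code below is a minimal sketch in our own notation (the paper's experiments use Matlab, and the function name is ours), initialized with the zero vector and stopped as soon as [parallel] K[x.sub.n] - [y.sup.[delta]] [parallel] [less than or equal to] [tau][delta].

```python
import numpy as np

def nsit_discrepancy(K, y_delta, delta, alpha0=0.01, q=0.7, tau=1.0,
                     max_iter=100):
    """Nonstationary iterated Tikhonov (f_k = 1) with alpha_n = alpha0 q^n,
    stopped by the discrepancy principle ||K x_n - y_delta|| <= tau * delta.
    The iteration starts from the zero vector."""
    m = K.shape[1]
    KtK, Kty = K.T @ K, K.T @ y_delta
    x = np.zeros(m)
    for n in range(max_iter):
        if np.linalg.norm(K @ x - y_delta) <= tau * delta:
            break
        a = alpha0 * q**n
        # One iterated-Tikhonov step: (K^T K + a I) x_n = K^T y_delta + a x_{n-1}
        x = np.linalg.solve(KtK + a * np.eye(m), Kty + a * x)
    return x, n
```

Stopping at the noise level prevents the later iterations from fitting the noise, which is exactly the rationale stated above.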

To compare the restorations with the different methods, we consider both the visual representation and the relative restoration error (RRE), that is,

[[parallel] x - [x.sup.[dagger]] [parallel]/[parallel] [x.sup.[dagger]] [parallel]],

for the computed approximations x.

7.1. Example 1. The following test case is Heat from the toolbox Regularization Tools by Hansen  with the parameter [kappa] = 3 and using 300 points. We add a noise vector with [xi] = 0.05. The comparisons between direct methods have been made by choosing the best set of parameters for each method.

In Table 7.1 we can observe that the mixed method gives the lowest relative restoration error.

Since the true signal [x.sup.[dagger]] is a smooth function, the best parameter r is larger than 1. This is consistent with the theory: according to Proposition 4.1, for r [greater than or equal to] 1 the Weighted-I method oversmooths the approximate solution compared to the standard Tikhonov method. In Figure 7.1 we plot the [alpha]-values against the RRE for every method. We observe an empirical behavior of the Weighted-II method that was already noted in : its curve stays below the Tikhonov curve as well as below all the others. Indeed, the slope of the Weighted-II curve in a right neighborhood of its minimum is smaller than the slopes of the other curves in right neighborhoods of their minima. This means that an overestimation of the best parameter [alpha] affects the RRE less.

Interestingly, the mixed method appears to combine the best features of both the Weighted-I and the Weighted-II methods; more precisely, the Weighted-II method enhances the Weighted-I method. This is clearly seen in Figures 7.1 and 7.2. Figures 7.2(c)-(d) highlight the differences between the standard Tikhonov method and the mixed method: the head of the signal is reconstructed better by the mixed method, which at the same time dampens the rough oscillations that the Tikhonov method produces in the tail of the reconstructed signal.

Finally, we propose a comparison between the nonstationary iterative versions of the preceding methods. We denote by NSIT the nonstationary iterated Tikhonov method and by NSM the nonstationary iterated mixed method. In Table 7.2 we report the RRE values and the corresponding parameter sequences used for each method. Even in this case, the NSM method produces a better approximation, and Figure 7.2(e) compares the approximate solutions obtained by NSIT and by NSM. We observe that the behavior of the nonstationary iterative methods is not significantly affected by the choice of [[alpha].sub.0] and q; in practice, in the iterative cases we did not have to make any particular choice of the parameters. Of course, a lower RRE would be obtained by stopping the iterative method at the iteration corresponding to the minimum RRE. Nevertheless, despite a larger RRE, visual inspection of Figure 7.2 shows that the peak is restored more accurately by NSM than by the Tikhonov method.

7.2. Example 2. For completeness and in order to give an overall view, here we report a table that summarizes different test problems from the toolbox Regularization Tools by Hansen . We have studied two different cases, where for each test we add a noise vector with [xi] = 0.05 and [xi] = 0.01, respectively. We compare only the direct methods, and the comparisons have been made by choosing the best set of parameters for each method. In most cases the mixed method provides the best RRE, and when it does not, it still improves the reconstructed solution compared to using the weighted-I or the weighted-II method alone.

7.3. Example 3. The following test case is Foxgood from the toolbox Regularization Tools by Hansen  with n = 300 points. We added to the observed data a noise vector with [xi] = 0.01.

In this example we examine only the nonstationary iterative algorithms. As in the previous example, we choose a geometric sequence for the [[alpha].sub.n] in every iterated method, while the [j.sub.n] and the [r.sub.n] are monotonically increasing sequences. The numerical results in Table 7.4 and the plots in Figure 7.3 show that the NSM method again provides the best approximate solution.

8. Conclusions. We have studied fundamental properties such as convergence, smoothing effects, and saturation for both direct and iterative methods that arise when using general operator-dependent seminorms in the penalty term. In particular, we have seen that the Weighted-II method performs well as a switch between different regularization effects, and it can be used in combination with other suitable filters to enhance the reconstruction. Clearly, the combination of more filters requires the estimation of more parameters, and techniques like those proposed in  should be considered.

As future work, the filtering operators introduced in this paper could be applied also to nonstationary preconditioned iterative methods like the approximated iterated Tikhonov methods with a general penalty term proposed in , which extends the technique introduced in  using a generalized Tikhonov method as preconditioner. Moreover, they could be applied as smoothers in multilevel regularization methods like in .

REFERENCES

 D. BIANCHI, A. BUCCINI, M. DONATELLI, AND S. SERRA-CAPIZZANO, Iterated fractional Tikhonov regularization, Inverse Problems, 31 (2015), Art. Id. 055005, 34 pages.

 M. BRILL AND E. SCHOCK, Iterative solution of ill-posed problems--a survey, in Model Optimization in Exploration Geophysics, A. Vogel, ed., Theory Practice Appl. Geophys. 1, Vieweg, Braunschweig, 1987, pp. 13-37.

 A. BUCCINI, Regularizing preconditioners by non-stationary iterated Tikhonov with general penalty term, Appl. Numer. Math., 116 (2017), pp. 64-81.

 A. BUCCINI, M. DONATELLI, AND L. REICHEL, Iterated Tikhonov regularization with a general penalty term, Numer. Linear Algebra Appl., 24 (2017), Art. Id. e2089, 12 pages.

 M. DONATELLI AND M. HANKE, Fast nonstationary preconditioned iterative methods for ill-posed problems, with application to image deblurring, Inverse Problems, 29 (2013), Art. Id. 095008, 16 pages.

 M. DONATELLI AND S. SERRA-CAPIZZANO, Filter factor analysis of an iterative multilevel regularizing method, Electron. Trans. Numer. Anal., 29 (2007/08), pp. 163-177. http://etna.ricam.oeaw.ac.at/vol.29.2007-2008/pp163-177.dir/pp163-177.pdf

 H. W. ENGL, M. HANKE, AND A. NEUBAUER, Regularization of Inverse Problems, Kluwer, Dordrecht, 1996.

 S. GAZZOLA AND L. REICHEL, A new framework for multi-parameter regularization, BIT, 56 (2016), pp. 919-949.

 D. GERTH, E. KLANN, R. RAMLAU, AND L. REICHEL, On fractional Tikhonov regularization, J. Inverse Ill-Posed Probl., 23 (2015), pp. 611-625.

 C. W. GROETSCH, The Theory of Tikhonov Regularization for Fredholm Equations of the First Kind, Pitman, Boston, 1984.

 M. HANKE AND C. W. GROETSCH, Nonstationary iterated Tikhonov regularization, J. Optim. Theory Appl., 98 (1998), pp. 37-53.

 M. HANKE AND P. C. HANSEN, Regularization methods for large-scale problems, Surveys Math. Indust., 3 (1993), pp. 253-315.

 P. C. HANSEN, Regularization tools: a Matlab package for analysis and solution of discrete ill-posed problems, Numer. Algorithms, 6 (1994), pp. 1-35.

 M. E. HOCHSTENBACH, S. NOSCHESE, AND L. REICHEL, Fractional regularization matrices for linear discrete ill-posed problems, J. Engrg. Math., 93 (2015), pp. 113-129.

 M. E. HOCHSTENBACH AND L. REICHEL, Fractional Tikhonov regularization for linear discrete ill-posed problems, BIT, 51 (2011), pp. 197-215.

 G. HUANG, L. REICHEL, AND F. YIN, On the choice of solution subspace for nonstationary iterated Tikhonov regularization, Numer. Algorithms, 72 (2016), pp. 1043-1063.

 T. K. HUCKLE AND M. SEDLACEK, Tikhonov-Phillips regularization with operator dependent seminorms, Numer. Algorithms, 60 (2012), pp. 339-353.

 E. KLANN AND R. RAMLAU, Regularization by fractional filter methods and data smoothing, Inverse Problems, 24 (2008), Art. Id. 025018, 26 pages.

 F. LENTI, F. NUNZIATA, C. ESTATICO, AND M. MIGLIACCIO, Spatial resolution enhancement of earth observation products using an acceleration technique for iterative methods, IEEE Geosci. Remote Sensing, 12 (2015), pp. 269-273.

 A. K. LOUIS, Inverse und Schlecht Gestellte Probleme, B. G. Teubner, Stuttgart, 1989.

 S. NOSCHESE AND L. REICHEL, Lavrentiev-type regularization methods for Hermitian problems, Calcolo, 52 (2015), pp. 187-205.

 W. RUDIN, Real and Complex Analysis, 3rd ed., McGraw-Hill, New York, 1987.

 --, Functional Analysis, McGraw-Hill, New York, 1991.

DAVIDE BIANCHI ([dagger]) AND MARCO DONATELLI ([dagger])

([dagger]) Dipartimento di Scienza e Alta Tecnologia, Universita dell'Insubria, via Valleggio 11, 22100 Como, Italy ({d.bianchi9, marco.donatelli}@uninsubria.it).

* Received March 3, 2017. Accepted July 22, 2017. Published online on September 18, 2017. Recommended by L. Reichel. Davide Bianchi and Marco Donatelli are partially supported by INdAM-GNCS Gruppo Nazionale per il Calcolo Scientifico and by MIUR-PRIN 2012 (grant 2012MTE38N).
```                      TABLE 7.1
Heat problem. Parameters and RRE for the direct methods.

Method       [alpha]    j  r     RRE

Tikhonov     1.3e-03    0  1     0.0241
Weighted-I   3.42e-04   0  1.48  0.0193
Weighted-II  3.1e-03   32  1     0.0186
Mixed        6.32e-04  32  1.48  0.0155

                      TABLE 7.2
Heat problem. RRE values for the nonstationary iterative methods and
iteration numbers in brackets.

Method    [[alpha].sub.n] = [[alpha].sub.0][q.sup.n]  [j.sub.n]  [r.sub.n]    RRE

NSIT      [[alpha].sub.0] = 0.01, q = 0.7             0          1            0.0733 (4)
NSIWT-I   [[alpha].sub.0] = 0.01, q = 0.7             0          1.2 + n/100  0.0590 (3)
NSIWT-II  [[alpha].sub.0] = 0.01, q = 0.7             4n         1            0.0530 (3)
NSM       [[alpha].sub.0] = 0.01, q = 0.7             4n         1.2 + n/100  0.0485 (3)

                      TABLE 7.3
RRE values for different problems from the toolbox Regularization
Tools. Here n = 300.

Problem        Method                                             RRE
                                                          [xi] = 5%  [xi] = 1%

baart(n)       Tikhonov:    [alpha] = 2.2e-05               0.5560     0.1174
               Weighted-I:  [alpha] = 1.2e-05, r = 0.85     0.4672     0.0870
               Weighted-II: [alpha] = 2e-06, j = 32         0.3390     0.0874
               Mixed:       [alpha] = 2e-06, r = 0.85, j = 32   0.1804     0.0757

deriv2(n; 3)   Tikhonov:    [alpha] = 2.2e-05               0.1555     0.0491
               Weighted-I:  [alpha] = 6.2e-05, r = 0.8      0.1425     0.0449
               Weighted-II: [alpha] = 3.2e-05, j = 16       0.1187     0.0477
               Mixed:       [alpha] = 7.2e-05, r = 0.8, j = 16  0.1327     0.0430

gravity(n; 3)  Tikhonov:    [alpha] = 5.6e-02               0.1166     0.0858
               Weighted-I:  [alpha] = 6.8e-02, r = 0.7      0.0968     0.0738
               Weighted-II: [alpha] = 1.7e-02, j = 32       0.1145     0.0779
               Mixed:       [alpha] = 4.6e-02, r = 0.7, j = 32  0.0945     0.0752

phillips(n)    Tikhonov:    [alpha] = 2.44e-02              0.0645     0.0244
               Weighted-I:  [alpha] = 1.18e-02, r = 1.2     0.0628     0.0182
               Weighted-II: [alpha] = 1.35e-02, j = 32      0.0765     0.0177
               Mixed:       [alpha] = 7.9e-03, r = 1.2, j = 32  0.0623     0.0144

                      TABLE 7.4
Foxgood problem. RRE values for the nonstationary iterative methods and
iteration numbers in brackets.

Method    [[alpha].sub.n] = [[alpha].sub.0][q.sup.n]  [j.sub.n]  [r.sub.n]    RRE

NSIT      [[alpha].sub.0] = 0.05, q = 0.8             0          1            0.0320 (15)
NSIWT-I   [[alpha].sub.0] = 0.05, q = 0.8             0          1.2 + n/100  0.0195 (21)
NSIWT-II  [[alpha].sub.0] = 0.05, q = 0.8             4n         1            0.0231 (17)
NSM       [[alpha].sub.0] = 0.05, q = 0.8             4n         1.2 + n/100  0.0188 (24)
```