# Stability of the recovery of missing samples in derivative oversampling

Abstract

This paper deals with the problem of reconstructing a band-limited signal when a finite subset of its samples and of the samples of its derivative are missing. The technique used, due to P.J.S.G. Ferreira, is based on a particular frame for band-limited functions and the related oversampling formulas. We study the matrices arising in the procedure for recovering the lost samples, finding estimates of their eigenvalues and of the dependence of the eigenvalues on the oversampling parameter and on the number of missing samples. When the missing samples are consecutive, the problem may become very ill-conditioned. We present a numerical procedure that overcomes this difficulty, also in the presence of noisy data, by using the Tikhonov regularization technique.

Key words and phrases: Frame, Riesz basis, oversampling, sampling formulas, band-limited functions.

2000 AMS Mathematics Subject Classification--94A20, 41A15

1 Introduction

This paper deals with the problem of recovering missing samples of a band-limited function and of its derivative via frame reconstruction sampling formulas. A band-limited signal is a function which belongs to the space [B.sub.[omega]] of functions in [L.sup.2](R) whose Fourier transforms have support in [-[omega], [omega]]. Functions in this space can be expanded in terms of the orthonormal basis of translates of the sinc function. The coefficients of the expansion are the samples of the function on a uniform grid on R, with "density" [omega]/[pi] (Nyquist density). Such expansions, called sampling formulas, have been generalized by replacing the orthonormal basis with more general families, like Riesz bases and frames, formed by the translates of one or more functions (generators). Frames, unlike Riesz bases, are overcomplete and their redundancy provides a perfect tool for the recovery of missing data.
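As a quick numerical illustration of the classical sampling expansion recalled above (our sketch, not part of the paper: we take [omega] = [pi], a unit sampling grid, and truncate the series at |k| [less than or equal to] 500):

```python
import numpy as np

def sinc(x):
    """Unnormalized sinc: sin(x)/x (note np.sinc is sin(pi x)/(pi x))."""
    return np.sinc(x / np.pi)

# A signal band-limited to [-pi, pi]; the Nyquist density omega/pi is 1,
# so the samples f(k) at the integers determine f.
f = lambda x: sinc(np.pi * (x - 0.5))

k = np.arange(-500, 501)            # truncation of the sampling series
x = 0.25                            # evaluation point off the sampling grid
recon = np.sum(f(k) * sinc(np.pi * (x - k)))
print(abs(recon - f(x)))            # small: only the truncation error remains
```

The terms of the truncated tail decay like 1/[k.sup.2], so the error at this truncation level is well below [10.sup.-2].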

The recovery of missing samples of band-limited functions via oversampling was first studied by R.J. Marks II in [10], [11]. In [4], P.J.S.G. Ferreira extended Marks' technique to generalized Kramer sampling, showing that, under suitable oversampling assumptions, any finite set of missing samples can be recovered from the others by solving a linear system (I - S)X = B, where the matrix S is positive definite. In [5] the same author studied the dependence of the eigenvalues of the matrix S on the oversampling parameter and the number of missing samples; see also [6] for a related result. Subsequently, D.M.S. Santos and Ferreira considered the case of a two-channel derivative oversampling formula obtained by projecting the generators of a Riesz basis of the space [B.sub.[omega]] and their duals into the space [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] with [[omega].sub.a] < [omega] [12], see also [13]. With this technique they obtain a pair of dual frames, although the dual frame is not the canonical one. In their paper, the authors show that a finite number of missing samples either of the function or of its derivative can be recovered by solving, in each case, a nonsingular linear system similar to the one of the one-channel case. In [13], the authors also consider the case when samples of both the function and the derivative are simultaneously missing. The technique of Santos and Ferreira has been applied to more general two-channel formulas by J. M. Kim and K. H. Kwon [8], who gave sufficient conditions for the recovery of missing samples, from a single channel and from both channels. However, as the authors observe, these conditions do not apply to the derivative oversampling formula studied in [12].

Subsequently, in [3], the author gave new derivative oversampling formulas of any order, where the dual generator is the canonical one, and in [2] found an expression of the dual in the two-channel case. The use of the canonical dual allowed the author to prove in [2] that simultaneously missing samples of the function and of the derivative can be recovered. However, the generators of the canonical duals are much less explicit than the non-canonical dual used by Santos and Ferreira, see [2, (5.2) p. 178] and (2.10). In this paper we study the problem of the recovery of missing samples of the function and of its derivative in Ferreira's two-channel derivative formula. The technique we use, first proposed in [12], extends in a natural way the one for a single channel in [5] and consists in solving a system (I - S)X = C, where the matrix S (see (3.8)) is a block matrix depending on the dual generators and on the positions of the missing samples, and the unknowns X are the missing samples of the function and its derivative. To the best of our knowledge, no results are known about the possibility of solving this system, i.e. of recovering simultaneously missing samples of the function and of its derivative. We obtain estimates of the minimum and maximum eigenvalues of the block submatrices of the matrix S, show that in some cases the matrix reduces to a lower triangular matrix and provide several numerical experiments on the dependence of the eigenvalues on the parameters of the problem, i.e. the oversampling parameter r, the number of missing samples and their distance. Moreover, we test the recovery of missing samples and the reconstruction of a signal, paying particular attention to the computational aspects and to the ill-conditioned problem of contiguous missing samples, also with noisy data.
We solve this problem by using the Tikhonov regularization, a technique typical of the treatment of inverse problems that consists in considering a family of approximate solutions [X.sub.[lambda]] depending on a positive parameter [lambda], called the regularization parameter. When the data are noise-free, the solution [X.sub.[lambda]] converges to the exact solution as the regularization parameter tends to zero. In the case of noisy data, one can obtain an optimal approximation of the exact solution for a positive value of the regularization parameter [lambda] ([1], Section 5).

The paper is organized as follows. In Section 2 we establish notation and collect some results to be used later. In Section 3 we describe the technique for the recovery of the missing samples. Section 4 is dedicated to the study of the eigenvalues of the coefficient matrices of the system, mainly when the missing samples are equidistant. In this case we find estimates of the minimum and maximum eigenvalues of the diagonal submatrices [S.sub.11] and [S.sub.22] of S (see (3.8)). Moreover, we find that, in some cases, the matrix S is lower triangular and that half of its eigenvalues are equal to 2r - [r.sup.2] and the other half to [r.sup.2], where r is the oversampling parameter. This extends a result in [5] to two channels. We also present some numerical experiments supporting the theory. Finally, in Section 5, we analyze the case of contiguous missing samples, both for one and two channels, when the problem may become very ill-conditioned; we present a numerical procedure to solve it, also in the presence of noisy data.

2 Preliminaries

In this section we collect some results on frames to be used later. We begin by introducing some notation. The Fourier transform of a function f in [L.sup.1] (R) is

[??]([xi]) = 1/[square root of 2[pi]] [[integral].sub.R] f(x)[e.sup.-i[xi]x] dx.

In this paper vectors in [C.sup.N] are to be considered as column-vectors. However, to save space, we shall write x = ([x.sub.1], [x.sub.2], ... , [x.sub.N]) to denote the column-vector whose components are [x.sub.1], ... , [x.sub.N]. Let [t.sub.o] be a positive number; we shall say that a closed subspace H of [L.sup.2](R) is [t.sub.o]-shift-invariant if H is invariant under translation by [t.sub.o]. Given a subset [PHI] = {[[phi].sub.j] : j = 1,2} of H, we denote by [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] the set

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

where [[tau].sub.a]f(x) = f(x + a). The family [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] is a frame for H if the operator [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]: H defined by [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] is continuous and surjective. It is well known that this happens if and only if there exist two constants 0 < A [less than or equal to] B such that

A[[parallel]f[parallel].sup.2] [less than or equal to] [summation over (j=1,2)] [summation over (n [member of] Z)] [[absolute value of <f, [[tau].sub.n[t.sub.o]][[phi].sub.j]>].sup.2] [less than or equal to] B[[parallel]f[parallel].sup.2] for all f [member of] H.

The constants A and B are called frame bounds. Denote by [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] the adjoint of [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], defined by [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. The operator [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] is called the frame operator. Denote by [PHI] * the family {[[phi].sub.j] *: j = 1,2}, where

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (2.1)

If [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] is a frame for H, then [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] is also a frame for H, called the canonical dual frame, and [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]; explicitly

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (2.2)

The elements of [PHI] are called generators of the frame and the elements of [PHI] * canonical dual generators. Given a frame [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] of a Hilbert space H, the expansion of an element in terms of translates of the generators is not unique; a frame [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] such that

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (2.3)

is called a dual frame for [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. We recall that the coefficients [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] with respect to the canonical dual frame have the minimal [l.sup.2] norm among all the sequences that represent an element f in terms of a given frame [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. If the family [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] is a frame for H and the operator [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] is injective, then [E.sub.[PHI],[t.sub.o]] is called a Riesz basis.

Let [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] be a frame for a [t.sub.o]-shift-invariant Hilbert space H with the canonical dual generators [PHI]*. Denote by P the orthogonal projection onto a [t.sub.o]-shift-invariant subspace V of H; then [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] and [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] are families of dual frames for V, so that

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Note that [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] is not necessarily the canonical dual. In this paper we study families [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] where [[phi].sub.1] and [[phi].sub.2] have Fourier transforms [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] such that [omega] [less than or equal to] 2[pi]/[t.sub.o]. The interest of these families in applications resides in their connection with derivative sampling formulas for the space [B.sub.[omega]]. To simplify notation, throughout the paper we shall set

h = 2[pi]/[t.sub.o]

The family [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] defined by

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (2.4)

is a Riesz basis for the space [B.sub.h] (see [7, p.135]). The Fourier transforms of the dual generators are

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (2.5)

hence the dual generators are

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

where sinc(x) = sin(x)/x. For any f [member of] [B.sub.h] the coefficients of the expansion in (2.2) are

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Thus the expansion formula (2.2) becomes

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

for any f [member of] [B.sub.h]. Note that the convergence is uniform. This formula is called a first order derivative sampling formula. The sampling frequency for an N-channel formula is [omega]/(N[pi]) and is called the Nyquist frequency. By using a frame instead of a Riesz basis, in [12] the authors obtained a two-channel derivative formula where the sampling frequency in each channel is greater than the Nyquist frequency [omega]/(2[pi]) (first order derivative oversampling formula). The frame is obtained by projecting on [B.sub.[omega]] the Riesz basis [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] for [B.sub.h], where h = 2[pi]/[t.sub.o] and [omega] < h. We denote by r the ratio

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (2.7)
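The first order derivative sampling formula above can be checked numerically. The sketch below is ours, not the paper's: since the generators (2.4)-(2.5) are not reproduced in this text, we use the classical Hermite-type form f(t) = [summation over n] [f(2n) + (t - 2n)f'(2n)] [sinc.sup.2]([pi](t - 2n)/2) for functions band-limited to [-[pi], [pi]], i.e. h = [pi] and [t.sub.o] = 2.

```python
import numpy as np

def sinc(x):
    return np.sinc(x / np.pi)                  # sin(x)/x

def dsinc(x):
    # derivative of sin(x)/x, safe at x = 0 where the derivative vanishes
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    nz = x != 0
    out[nz] = np.cos(x[nz]) / x[nz] - np.sin(x[nz]) / x[nz] ** 2
    return out

# A signal band-limited to [-pi, pi] and its derivative (the same test
# function used in Section 5 of the paper).
g = lambda x: sinc(np.pi * (x - 2.1)) - 0.7 * sinc(np.pi * (x + 1.7))
dg = lambda x: np.pi * (dsinc(np.pi * (x - 2.1))
                        - 0.7 * dsinc(np.pi * (x + 1.7)))

# Hermite-type first order derivative sampling: the samples of g and g'
# at the even integers (t_o = 2) reconstruct g everywhere.
n = np.arange(-2000, 2001)
t = 0.3
u = t - 2 * n
recon = np.sum((g(2 * n) + u * dg(2 * n)) * sinc(np.pi * u / 2) ** 2)
print(abs(recon - g(t)))                       # only the truncation error
```

Here each channel runs at half the Nyquist density, and the derivative channel compensates; the series terms decay like 1/[n.sup.2], so truncation at |n| [less than or equal to] 2000 leaves a very small error.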

Since the projection commutes with the translations, to project the family [E.sub.[PSI]] and the dual family, it is sufficient to project on [B.sub.[omega]] the generators and the dual generators. Thus, by (2.4), we obtain that the generators of the frame [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] are defined by

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (2.8)

and by (2.5) we obtain the Fourier transform of the dual generators

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (2.9)

A simple calculation shows that

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (2.10)

Thus we obtain the following derivative oversampling formula

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (2.11)

We observe that 1/r = 2[pi]/([omega][t.sub.o]) is the ratio between the sampling frequency and the Nyquist frequency and that r [member of] (0,1). We shall be mainly interested in the case 1/2 < r < 1, since for 0 < r [less than or equal to] 1/2 it is possible to use one channel separately for the function and for its derivative [12], [13]. Note that if r is close to 1, the frame is close to a Riesz basis, while if r is small, the frame is very redundant.

3 The system for the recovery of missing samples

In this section we briefly describe the method for the recovery of a finite number of missing samples via two-channel derivative oversampling. We may rewrite equation (2.11) as

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (3.1)

and, computing the derivative of both sides,

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (3.2)

Here

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (3.3)

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (3.4)

Observe that [[??]'.sub.1] (0) = 0.

Let U = {[l.sub.1], [l.sub.2], ... , [l.sub.N]} [subset] Z and let {f([l.sub.k][t.sub.o]), f'([l.sub.k][t.sub.o]): k = 1, ... , N} be the corresponding set of missing samples. By evaluating equations (3.1) and (3.2) at [l.sub.k][t.sub.o] and separating the unknown samples from the known ones, we obtain

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (3.5)

where

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (3.6)

Equations (3.5) form a system of 2N equations in the 2N unknowns

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (3.7)

which can be written in matrix form: denote by S = S(U, r) the real matrix

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (3.8)

where [S.sub.11], [S.sub.12], [S.sub.21], [S.sub.22] are the submatrices whose entries are

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (3.9)

k, j = 1, ... , N. Then the system (3.5) may be written

(I- S)Z = C, (3.10)

where I is the 2N x 2N identity matrix, C = [[([c.sub.k]).sup.2N].sub.k=1] is defined by (3.6) and Z = ([X.sub.1], [X.sub.2], ... , [X.sub.N], [Y.sub.1], [Y.sub.2], ... , [Y.sub.N]). The unknowns [X.sub.k] are the missing samples of the function and the [Y.sub.k] are the missing samples of the derivative. Notice that, by (2.10), (3.3) and (3.4), if r = 1, then S is the identity matrix. The four submatrices are real, [S.sub.11] and [S.sub.22] are symmetric, while [S.sub.12] and [S.sub.21] are antisymmetric. Moreover, if the distance between any two consecutive [l.sub.k]'s is constant, then the four submatrices are Toeplitz, but S is not Toeplitz. Note that for r [not equal to] 1 the matrix S is neither symmetric nor antisymmetric.
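This symmetry structure can be illustrated with hypothetical kernels standing in for the actual entries (3.9), which depend on the dual generators not reproduced in this text: an even function of [l.sub.k] - [l.sub.j] produces the symmetric diagonal blocks, odd functions the antisymmetric off-diagonal ones (our sketch, not the paper's matrices).

```python
import numpy as np

# Hypothetical stand-ins for the entries (3.9): one even kernel for the
# diagonal blocks, two unrelated odd kernels for the off-diagonal blocks.
even = lambda d: np.cos(d) / (1.0 + d ** 2)
odd1 = lambda d: d * np.exp(-np.abs(d))
odd2 = lambda d: np.sin(d) / (1.0 + d ** 2)

l = np.array([0, 3, 6, 9])              # equispaced missing-sample positions
D = l[:, None] - l[None, :]             # matrix of differences l_k - l_j

S11, S22 = even(D), 0.5 * even(D)       # symmetric Toeplitz blocks
S12, S21 = odd1(D), odd2(D)             # antisymmetric Toeplitz blocks
S = np.block([[S11, S12], [S21, S22]])

# Blocks are (anti)symmetric, but S itself is neither.
print(np.allclose(S11, S11.T), np.allclose(S12, -S12.T),
      np.allclose(S, S.T), np.allclose(S, -S.T))
```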

Suppose that in (3.7) only the values {[X.sub.i] = f([l.sub.i][t.sub.o]): i = 1, ... ,N} are missing, while all the sample values of f' are known. By considering only the first N equations of the system (3.10) and by separating the known from the unknown samples, we obtain

(I - [S.sub.11])X = [C.sub.1] + [S.sub.12]Y,

where I is the N x N identity matrix, [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. This equation may be rewritten as

(I - [S.sub.11])X = [B.sub.1] (3.11)

where [B.sub.1] = [C.sub.1] + [S.sub.12]Y. Similarly, if in (3.7) only the values {[Y.sub.i] = f'([l.sub.i][t.sub.o]), i = 1, ... , N} are missing, and the samples of f are known, one solves the system

(I - [S.sub.22])Y = [B.sub.2] (3.12)

in the unknowns [Y.sub.i], i = 1, ... , N, where [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. If r [member of] (0, 1), then the eigenvalues of the symmetric matrices [S.sub.11] and [S.sub.22] are in the interval (0, 1) [12]. This implies that the recovery of a finite number of missing samples of the function is possible if all the samples of the derivative are known and, vice versa, samples of the derivative can be recovered if all the samples of the function are known. In the next section we shall find bounds for the minimum and maximum eigenvalues of the matrices [S.sub.11] and [S.sub.22].

To the best of our knowledge, no results are known about the possibility of solving system (3.10), i.e. of recovering simultaneous missing samples of the function and of its derivative. Notice that the system is solvable if and only if 1 is not an eigenvalue of the matrix S. On the basis of numerical evidence, we conjecture that all the eigenvalues of S are real and that they lie in the interval (0, 1) for all r [member of] (0, 1). In Section 4 we present several numerical experiments supporting our conjecture.

4 The stability of the matrices

This section is dedicated to the study of the eigenvalues of the matrices S, [S.sub.11] and [S.sub.22] in the two-channel system (3.10). First, we briefly summarize the one-channel case and recall some stability results obtained in [5]. The one-channel formula is

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (4.1)

where [t.sub.o] < [pi]/[omega]. In this case the oversampling parameter is r = [omega][t.sub.o]/[pi]. Let U = {[l.sub.1], [l.sub.2], ... , [l.sub.N]} be a set of integers and

{f([l.sub.k][t.sub.o]): k= 1, ... ,N} (4.2)

be the set of missing samples; then the system to be solved is

(I - R)X = B, (4.3)

where I is the identity N x N matrix,

R(j,k) = r sinc([pi]r([l.sub.j] - [l.sub.k])), j,k = 1, ... , N, (4.4)

X = (f([l.sub.1][t.sub.o]), ... , f([l.sub.N][t.sub.o])), B = ([b.sub.1], [b.sub.2], ... , [b.sub.N]), and

[b.sub.j] = r [summation over (k [not member of] U)] f(k[t.sub.o]) sinc([pi]r([l.sub.j] - k)), j = 1, ... , N. (4.5)
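The one-channel procedure (4.3)-(4.5) is easy to reproduce numerically. The sketch below is our illustration (it anticipates the test function (5.1) of Section 5), with [omega] = [pi], r = [t.sub.o] = 0.6, well-separated missing samples and the sum (4.5) truncated at |k| [less than or equal to] 500.

```python
import numpy as np

def sinc(x):
    return np.sinc(x / np.pi)          # sin(x)/x

g = lambda x: sinc(np.pi * (x - 2.1)) - 0.7 * sinc(np.pi * (x + 1.7))

r, t_o = 0.6, 0.6                      # omega = pi, so r = omega * t_o / pi
U = np.array([0, 8, 16, 24])           # positions of the missing samples
M = 500                                # truncation of the sum (4.5)
known = np.array([k for k in range(-M, M + 1) if k not in set(U)])

# R(j,k) = r sinc(pi r (l_j - l_k)), see (4.4)
R = r * sinc(np.pi * r * (U[:, None] - U[None, :]))
# b_j = r * sum over the known k of f(k t_o) sinc(pi r (l_j - k)), see (4.5)
b = r * sinc(np.pi * r * (U[:, None] - known[None, :])) @ g(known * t_o)

X = np.linalg.solve(np.eye(len(U)) - R, b)   # system (4.3)
print(np.max(np.abs(X - g(U * t_o))))        # recovery error
```

With these spaced positions the off-diagonal entries of R are small, I - R is well-conditioned and the recovery error is of the order of the truncation error.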

The matrix R is symmetric and positive definite. In [5], the author shows that all its eigenvalues are in (0, 1) and observes that there are two situations that lead to an ill-conditioned problem: when r is close to 1 and when the integers [l.sub.k] are contiguous. In the first case the frame is close to a Riesz basis, for which the recovery is impossible, and R becomes close to the identity matrix. In the second case, the maximum eigenvalue of the matrix R grows rapidly with N; moreover, it can be close to 1 also for small values of N (see Figure 5 in [5]). In both cases, the spectral condition number of the matrix (I - R), which controls the propagation of errors on the data, becomes very large. Next, we investigate the eigenvalues of the matrix S in the two-channel case (see (3.8)) and of its submatrices [S.sub.11] and [S.sub.22]. Since the latter are real symmetric matrices, their eigenvalues give their condition number. We shall denote by [a] the maximum integer less than or equal to a. Given an N x N matrix A, we shall denote by [[lambda].sub.j](A), j = 1, ... , N, its eigenvalues and by [[lambda].sub.min](A) and [[lambda].sub.max](A) its minimum and maximum eigenvalue. Let U = {[l.sub.1], [l.sub.2], ... , [l.sub.N]} [subset] Z and let {f([l.sub.j][t.sub.o]), f'([l.sub.j][t.sub.o]): 1 [less than or equal to] j [less than or equal to] N} be a set of missing samples. Then

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

where r is the oversampling parameter (2.7). Indeed, by (3.9) and (2.10), the trace of [S.sub.11] is N(2r - [r.sup.2]); by (3.9) and (3.4), the trace of [S.sub.22] is N[r.sup.2]. Moreover, since S is a real matrix, its trace is [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]; hence

Re([[lambda].sub.min](S)) < r < Re([[lambda].sub.max](S)). (4.6)

We observe that, if the oversampling parameter r tends to 1, i.e. the frame tends to a Riesz basis, then S tends to the identity matrix.

Following Ferreira, we shall now consider the case in which the set that locates the positions of the missing samples is U = {[mi.sub.1], [mi.sub.2], ... , [mi.sub.N]}, where m is an integer and I = {[i.sub.1], [i.sub.2], ... , [i.sub.N]} is a set of integers; in the following we shall denote such sets by mI. The interest in studying these cases lies in the technique of interleaving the samples of a signal prior to their transmission or archival; the advantage of this procedure is that the transmitted (or stored) information becomes less sensitive to the burst errors that typically affect contiguous sets of samples [5]. We shall investigate how the stability of the method depends on the interleaving factor m. In Proposition 4.1 and Theorem 4.2 below we find estimates for the eigenvalues of the matrices [S.sub.11] and [S.sub.22], thus generalizing a result of Ferreira for one channel [4, Theorem 1]. First, we consider the case of integer mr. The following proposition, which extends to two channels a result in [5], shows that in this case the matrix S is lower triangular and that N eigenvalues are equal to 2r - [r.sup.2] and N are equal to [r.sup.2]. Hence, if r = p/q is rational, one can take m to be a multiple of q and obtain a lower triangular matrix S.

Proposition 4.1. Let m be a positive integer; suppose U = {[mi.sub.1], [mi.sub.2], ... , [mi.sub.N]}, where [i.sub.j], j = 1, ... , N, are integers, and let r be a real number in (0,1). If mr is an integer, then

[S.sub.11] = (2r - [r.sup.2])I [S.sub.22] = [r.sup.2] I [S.sub.12] = 0.

Moreover the entries of the matrix [S.sub.21] are

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Proof. Since [t.sub.o] = 2[pi]r/[omega], by (3.9) with [l.sub.k] = [mi.sub.k], one obtains that

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

hence by (2.10) since mr is an integer

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

where [[delta].sub.j,k] is the Kronecker symbol. This shows that [S.sub.11] = (2r - [r.sup.2])I. Similarly, by using (3.9) and (3.4), we obtain that [S.sub.22] = [r.sup.2]I and [S.sub.12]= 0. From (3.3) one finds that the off-diagonal entries of the matrix [S.sub.21] are

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

The diagonal entries in [S.sub.21] are equal to zero, since [??] (0) = 0.

In the following theorem we find estimates for the minimum and maximum eigenvalues of the matrix [S.sub.11] and [S.sub.22]. The estimates do not depend on the number of missing samples and are simple to compute.

Theorem 4.2. Suppose that 0 < r < 1 and U = mI, where I = {[i.sub.1], [i.sub.2], ... , [i.sub.N]} and m is a positive integer. If mr is not an integer, then for j = 1, ... , N

[[alpha].sub.11] < [[lambda].sub.j] ([S.sub.11]) < [[beta].sub.11] (4.7)

[[alpha].sub.22] < [[lambda].sub.j] ([S.sub.22]) < [[beta].sub.22] (4.8)

where

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Proof. From (2.9), by Fourier inversion and a change of variable, we obtain

[[??].sub.1](x) = 1/[square root of 2[pi]] [[integral].sup.r.sub.-r] (1 - |y|)[e.sup.ixy[omega]/r] dy. (4.9)

Hence by (3.9) with [l.sub.k] = [mi.sub.k],

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Let x = ([x.sub.0], [x.sub.1], ... , [x.sub.N-1]) be a column-vector in [C.sup.N] and denote by x* its conjugate transpose. Then

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

where P(y) is the 1/m-periodic function

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (4.10)

By splitting the integral in two parts and changing variable, the above equation may be written

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (4.11)

where we have set

M(y) = [[absolute value of P(y)].sup.2] + [[absolute value of P(-y)].sup.2]. (4.12)

Next we write the integral in (4.11) as a sum of D + 1 integrals, where D = [2mr],

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (4.13)

Since M [greater than or equal to] 0, we majorize (1 - y) in each integral and obtain the estimate

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (4.14)

Next, we prove that for all k = 0, ... , D - 1

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (4.15)

Indeed by (4.12), by changing variables and using the (1/m)-periodicity of P,

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

By (4.10) and the Plancherel formula, since x has norm equal to 1, we have

[[integral].sup.1/m.sub.0] [[absolute value of P(y)].sup.2] dy = 1/m. (4.16)

This concludes the proof of equation (4.15). Next we prove that

[[integral].sup.r.sub.D/2m] M(y) dy < 1/m. (4.17)

Indeed, from (4.12), by changing variable in the integral and using the 1/m-periodicity of P, we obtain

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

The length 2r - D/m of the interval of integration is less than the period 1/m of P, thus inequality (4.17) follows from (4.16).

From (4.14), by using (4.15) and (4.17), we obtain

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (4.18)

Thus, we have proved the upper bound for the eigenvalues of [S.sub.11]. From (4.13), by observing that the second integral is positive and using (4.15) we obtain

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

This proves the lower bound and concludes the proof of inequality (4.7). We omit the proof of inequality (4.8), which is similar.

In Table 1 and Table 2, we present some examples: we let U = {0, 8, 16, 24} and compare the minimum and maximum eigenvalues of [S.sub.11] and [S.sub.22] with their estimates given by Theorem 4.2, for various values of r.

From Proposition 4.1 one can see that, if mr is an integer, then the eigenvalues of [S.sub.22] are smaller than the eigenvalues of [S.sub.11]. The following corollary shows that this is also true when mr is not an integer, provided that m is sufficiently large.

Corollary 4.3. Let m be a positive integer, U = mI and I = {[i.sub.1], [i.sub.2], ... , [i.sub.N]}, 0 < r < 1. If mr is not an integer and m > (1 + 2r)/(2r(1 - r)), then

[[lambda].sub.max] ([S.sub.22]) < [[lambda].sub.min] ([S.sub.11]). (4.19)

Proof. By (4.7) and (4.8) it is sufficient to prove that if m > (1 + 2r)/(2r(1 - r)), then [[beta].sub.22] < [[alpha].sub.11]. This inequality is equivalent to [D.sup.2]/2m + D/2m + r - D < 0. Since 2mr - 1 < D [less than or equal to] 2mr, we have

[D.sup.2] / 2m + D / 2m + r - D < 2mr(r - 1) + 2r + 1

from which the corollary follows.

Numerical experiments show that for m < (1 + 2r)/(2r(1 - r)) the maximum eigenvalue of [S.sub.22] may be larger than the minimum eigenvalue of [S.sub.11].

We shall now describe the behavior of the eigenvalues of the matrix S in dependence of the parameters r and N. The numerous experiments that we have performed suggest the conjecture that the eigenvalues of this matrix are real, positive, and less than 1. In what follows, we have ordered the eigenvalues so that [[lambda].sub.i] < [[lambda].sub.i+1] for i = 0, ... , N - 2. Figure 1 shows the largest eigenvalue of S as a function of the number N of contiguous points, for various values of r. Note that, as r approaches 1, the largest eigenvalue gets close to 1, i.e. the smallest eigenvalue of the matrix I - S tends to zero. This happens even for a small number of missing samples, as in the one-channel case (see [5], Figure 5). Thus, since the spectral condition number of a matrix is greater than or equal to the ratio of the absolute values of the largest and the smallest eigenvalues, cond(I - S) becomes very large. In Table 3 we show the spectral condition numbers of (I - S) for U = {0, 1, 2, ... , 9}, for various values of r.

[FIGURE 1 OMITTED]

Figure 2 shows some of the eigenvalues of S, namely [[lambda].sub.i], i = 1, 5, 10, 11, 15, 20, as functions of the parameter r for I = {0, 1, 2, 3, ... , 9}, m = 6 (left) and m = 10 (right). The dashed lines are the parabolas 2r - [r.sup.2] and [r.sup.2]. One can see that, for large values of m, the graphs of the eigenvalues concentrate around the graphs of the two parabolas. We observe that, in the one-channel case, the eigenvalues have a similar behavior (see Figure 7 in [5]).

We shall now discuss the behavior of the eigenvalues of the matrix S as the parameter m grows. By using (3.9) with [l.sub.k] = [mi.sub.k], k = 1, ... , N, (2.10), (3.3) and (3.4), one can see that, when the parameter m tends to infinity, the off-diagonal entries tend to zero; for each [epsilon] > 0 denote by [m.sub.[epsilon]] the positive integer (depending on N) such that, if m > [m.sub.[epsilon]], the off-diagonal entries of the matrix are less than [epsilon]. For such m, by the Gershgorin theorem, N eigenvalues lie in the circle of center 2r - [r.sup.2] and radius [epsilon] and N eigenvalues lie in the circle of center [r.sup.2] and radius [epsilon]. Figure 3 illustrates this behavior: it shows the eigenvalues [[lambda].sub.i], i = 1, 5, 10, 11, 15, 20, in dependence of the parameter m for r = 0.7 and I = {0, 1, ... , 9}. Note that the limits of the eigenvalues are 2r - [r.sup.2] and [r.sup.2].

[FIGURE 2 OMITTED]

[FIGURE 3 OMITTED]

5 Reconstruction: numerical results

In this section we present some numerical experiments on the recovery of missing samples and on the reconstruction of a signal via oversampling formulas. In particular, we analyze the case of contiguous missing samples, when the problem may become very ill-conditioned, depending on the parameter r and the number of missing samples. Indeed, if r is close to 1, or the number of missing samples is large, the condition number of the matrices I - R and I - S is large, so that the system greatly amplifies the errors in the data. We solve this problem, also in the presence of noisy data, by using the Tikhonov regularization technique, which we shall now describe briefly. To solve an ill-conditioned linear system AX = B one looks for the solution of the least-squares problem

[min.sub.X] ([[parallel]AX - B[parallel].sup.2] + [lambda][[parallel]X[parallel].sup.2]),

where [lambda] is a real positive number, called the regularization parameter. For each value of [lambda], this problem has a unique solution [X.sub.[lambda]] which is given by

([A.sup.T] A + [lambda]I)[X.sub.[lambda]] = [A.sup.T] B.

If one has an estimate [epsilon] of the norm of the error, as in our case, then an approximation of the optimal value of the parameter [lambda] can be obtained by using Morozov's discrepancy principle, i.e. by choosing the value of [lambda] for which the discrepancy [[parallel][AX.sub.[lambda]] - B[parallel].sup.2] is equal to [[epsilon].sup.2] ([1], Section 5).
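The regularization step can be sketched as follows (our illustration, on a generic ill-conditioned system rather than the matrices I - R and I - S of the paper; the 8 x 8 Hilbert matrix, the exact solution and the noise level are assumptions made for the example):

```python
import numpy as np

def tikhonov(A, B, lam):
    """Regularized solution of AX = B: solve (A^T A + lam I) X = A^T B."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ B)

# Ill-conditioned test problem: 8x8 Hilbert matrix, noisy right-hand side.
n = 8
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
rng = np.random.default_rng(0)
noise = 1e-6 * rng.standard_normal(n)
B = A @ x_true + noise
eps = np.linalg.norm(noise)            # the noise level is assumed known

# Morozov's discrepancy principle: among a grid of lambdas, pick the one
# whose discrepancy ||A X_lam - B|| is closest to eps.
lams = np.logspace(-14, 2, 400)
discrepancy = np.array([np.linalg.norm(A @ tikhonov(A, B, l) - B)
                        for l in lams])
lam = lams[np.argmin(np.abs(discrepancy - eps))]

x_naive = np.linalg.solve(A, B)        # unregularized: the noise blows up
x_reg = tikhonov(A, B, lam)
print(np.linalg.norm(x_naive - x_true), np.linalg.norm(x_reg - x_true))
```

The grid search stands in for a root-finder on the (monotone) discrepancy; the regularized error is orders of magnitude smaller than the unregularized one.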

Following Ferreira, in all our experiments we shall use the test function

g(x) = sinc([pi](x - 2.1)) - 0.7 sinc([pi](x + 1.7)) (5.1)

which has band [-[pi], [pi]] [5].

First we present some numerical experiments for the recovery of contiguous missing samples via the one-channel formula (4.1). To compute the sum in (4.5), we must truncate it to the values k such that |k| [less than or equal to] M, for some integer M, thus introducing an error. In [5] the author makes the choice M = 40 to reconstruct the signal (5.1) when U = {0, 1, ... , 5} and r = 0.6 (hence [t.sub.o] = 0.6). One can verify that for M = 40 the norm of the truncation error is of order [10.sup.-3] and that the choice M = 500 reduces it to [10.sup.-4]; a larger value of M does not reduce it further, owing to round-off errors. In Table 4 we show the values [x.sub.k] = [l.sub.k][t.sub.o] (column 1), the exact samples g([x.sub.k]) (column 2), the computed samples [g.sub.k] and the relative errors [e.sub.k] with M = 40 (columns 3 and 4) and with M = 500 (columns 5 and 6). The condition number of the matrix I - R is 3.08 x [10.sup.4]. In Figure 4 we have plotted the original and the reconstructed signal in both cases. One can see the better results obtained in the case M = 500, where we have zoomed the graphs, representing them in the interval [-5, 5], to make them distinguishable. By the preceding considerations, in all the experiments described below we have chosen M = 500 to reduce the truncation error.
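The condition number quoted above is easy to check; the sketch below (our code, not the paper's) builds I - R from (4.4) for U = {0, 1, ... , 5} and r = 0.6.

```python
import numpy as np

def sinc(x):
    return np.sinc(x / np.pi)          # sin(x)/x

r = 0.6
U = np.arange(6)                       # six contiguous missing samples
# R(j,k) = r sinc(pi r (l_j - l_k)), see (4.4)
R = r * sinc(np.pi * r * (U[:, None] - U[None, :]))
cond = np.linalg.cond(np.eye(6) - R)   # spectral (2-norm) condition number
print(cond)                            # the text reports 3.08 x 10^4
```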

Next, we have considered the problem of data affected by noise. We have chosen $U = \{-2, -1, 0, 1, 2, 3\}$, $r = 0.6$, and have added random noise of order $10^{-2}$. Note that the condition number of the matrix $I - R$ is still $3.08 \times 10^4$, since it does not depend on the positions of the contiguous missing samples (see (4.4)). However, since the problem is ill-conditioned, this larger perturbation of the data causes huge errors in the solution of (3.10) (see Table 5, columns 3 and 4). To obtain better results, we have used the Tikhonov regularization technique with the discrepancy principle; this was possible because the size of the norm of the noise is known. In Table 5 we show $x_k = l_k t_0$ and the exact values $g(x_k)$ of the missing samples (columns 1 and 2), the recovered samples $g_k$ and the relative errors $e_k$ obtained without regularization (columns 3 and 4) and with regularization (columns 5 and 6). As in the previous example, comparing columns 2 and 5 shows that the absolute errors are very small: the plots of the true and reconstructed signals would be indistinguishable.

[FIGURE 4 OMITTED]

We shall now present some experiments involving the two-channel formula. To compute the right-hand side of (3.10), we have truncated the sums in (3.6), retaining only the terms with $|n| \le 500$. As in the one-channel case, this introduces an error of order $10^{-4}$. Our experiments show that, as in the one-channel case, the recovery of missing samples of the function and of its derivative is quite efficient when the distance between the missing samples is large and the oversampling parameter $r$ is not too close to 1. To give an example, for $U = \{-1, 0, 1, 2, 3, 4\}$, $m = 4$ and $r = 0.7$, the maximum absolute error is less than $8 \times 10^{-4}$. The error increases as the parameter $m$ gets smaller or $r$ gets larger.

Next, we test the recovery of consecutive missing samples of $g$ and $g'$ when $U = \{-2, -1, 0, 1, 2, 3\}$ and $r = 0.6$ (hence $t_0 = 1.2$). The results are shown in Tables 6 and 7. In this case the condition number of the matrix $I - S$ in (3.10) is $3.67 \times 10^7$, much larger than in the one-channel case. This strongly amplifies the truncation error, producing huge errors in the solution, much worse than in the one-channel case. Thus, with this distribution of missing points and this value of the parameter $r$, the system is very ill-conditioned; applying the Tikhonov regularization technique with the discrepancy principle leads to the results in columns 5 and 6 of Tables 6 and 7. In Figure 5 we show the plots of the original signal $g$ and of the signal computed with regularization.

Of course, by using a smaller oversampling parameter one can reduce the condition number of the matrix $I - S$ and obtain much better results: with $r = 0.3$ (hence $t_0 = 0.6$) and the same set $U = \{-2, -1, 0, 1, 2, 3\}$, one gets $\mathrm{cond}(I - S) = 1.92 \times 10^4$. In Section 2 we observed that, in the two-channel formula, the case $r \in (0, 0.5)$ implies oversampling in each channel separately, so the one-channel formula can be applied separately to the signal and to its derivative. However, the two-channel formula may be more efficient; we have compared the recovery of consecutive samples of the function $g$ and its derivative using the one-channel formula for $g$ and for $g'$ (with $r = 0.6$) and the two-channel formula (with $r = 0.3$). Table 8 shows the computed samples $g_k$ and the relative errors $e_k$ obtained with the one-channel formula (columns 3 and 4, $r = 0.6$) and with the two-channel formula (columns 5 and 6, $r = 0.3$). The results for the derivative are even better.

[FIGURE 5 OMITTED]

6 Conclusions

This paper deals with the recovery of a finite number of missing samples of a band-limited signal via the two-channel derivative oversampling formula. Unlike the one-channel case studied in [5], where the reconstruction matrix $R$ is symmetric, the reconstruction matrix $S$ in the two-channel case is no longer symmetric. The matrix $S$ has a $2 \times 2$ block structure, in which the diagonal blocks are symmetric and the off-diagonal blocks are antisymmetric. This makes the analysis of the stability of the reconstruction procedure more difficult than in the one-channel case. We analyzed in detail the case where the locations of the missing samples $U = \{m i_1, m i_2, \ldots, m i_N\}$ are equally spaced; here $m$ is a positive integer and the $i_k$'s are consecutive integers. In this case we found bounds for the eigenvalues of the diagonal blocks $S_{11}$ and $S_{22}$, depending on $m$ and on the oversampling parameter $r$. We proved that, if $mr$ is an integer, then the matrix $S$ is lower triangular, half of its eigenvalues are equal to $2r - r^2$, and the other half are equal to $r^2$. We also presented numerical experiments supporting our conjecture that all the eigenvalues of $S$ are real and lie in $(0, 1)$ for every value of $r \in (0, 1)$.

In the second part of the paper, we tested the reconstruction procedure on a test function, for a single channel and for two channels, with both noise-free and noisy data. For the ill-conditioned problem of consecutive missing samples we computed a regularized solution via the Tikhonov method with the discrepancy principle.

References

[1] M. Bertero and P. Boccacci, Introduction to Inverse Problems in Imaging, Institute of Physics Publ., Bristol, 1998.

[2] V. Del Prete, Recovery of missing samples in oversampling formulas for bandlimited functions, Sampl. Theory Signal Image Process., 8(2), 161-180, 2009.

[3] V. Del Prete, Frames and oversampling formulas for band-limited functions, Ann. Mat. Pura Appl. (4), 189(3), 445-474, 2010.

[4] P. J. S. G. Ferreira, Incomplete sampling series and the recovery of missing samples from oversampled band-limited signals, IEEE Trans. Signal Processing, 40(1), 225-227, 1992.

[5] P. J. S. G. Ferreira, The stability of a procedure for the recovery of lost samples in band-limited signals, Signal Processing, 40(3), 195-205, 1994.

[6] P. J. S. G. Ferreira, Stability issues in error control coding in the complex field, interpolation and frame bounds, IEEE Sig. Proc. Letters, 7(3), 57-59, 2000.

[7] J. R. Higgins, Sampling Theory in Fourier and Signal Analysis. Foundations, Oxford University Press, Oxford, 1996.

[8] J. M. Kim and K. H. Kwon, Recovery of finite missing samples in two-channel oversampling, Sampl. Theory Signal Image Process., 6(2), 185-198, 2007.

[9] V. A. Morozov, Methods for Solving Incorrectly Posed Problems, Springer, Berlin, 1984.

[10] R. J. Marks II, Restoring lost samples from an oversampled band-limited signal, Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP-31, 3, 752-755, June 1983.

[11] R. J. Marks II, Introduction to Shannon Sampling and Interpolation Theory, Springer, Berlin, 1991.

[12] D. M. S. Santos and P. J. S. G. Ferreira, Reconstruction from missing function and derivative samples and oversampled filter banks, Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 04, 3, 941-944, 2004.

[13] D. M. S. Santos, P. J. S. G. Ferreira and J. M. N. Vieira, Study of the recovery of missing samples for function and derivative oversampled filter banks, Proceedings of the 2005 International Workshop on Sampling Theory and Applications, SampTA 2005, Samsun, Turkey.

Paola Brianzi

Dipartimento di Matematica, Universita di Genova,

via Dodecaneso 35, 16146 Genova, Italia

brianzi@dima.unige.it

Vincenza Del Prete

Dipartimento di Matematica Universita di Genova,

via Dodecaneso 35, 16146 Genova, Italia

delprete@dima.unige.it
```
Table 1: λ_min(S_11), λ_max(S_11) and their estimates for
U = {0, 8, 16, 24} and several values of r.

   r      α_11    λ_min(S_11)   λ_max(S_11)    β_11

  0.55   0.719      0.768         0.811       0.844
  0.6    0.773      0.813         0.859       0.898
  0.7    0.859      0.903         0.926       0.984
  0.8    0.891      0.946         0.967       1.016
  0.9    0.929      0.984         0.998       1.055
  0.95   0.936      0.996         0.999       1.063

Table 2: λ_min(S_22), λ_max(S_22) and their estimates for
U = {0, 8, 16, 24} and several values of r.

   r      α_22    λ_min(S_22)   λ_max(S_22)    β_22

  0.55   0.219      0.271         0.315       0.350
  0.6    0.281      0.317         0.391       0.427
  0.7    0.430      0.470         0.535       0.603
  0.8    0.516      0.594         0.659       0.709
  0.9    0.711      0.766         0.871       0.932
  0.95   0.820      0.877         0.962       1.056

Table 3: Condition number of I - S for U = {0, 1, 2, ..., 9} and
r = 0.1, 0.2, ..., 0.9.

   r    cond(I - S)      r    cond(I - S)

  0.1   8.571E+01       0.6   2.829E+13
  0.2   5.870E+04       0.7   1.661E+16
  0.3   6.187E+05       0.8   2.474E+17
  0.4   1.133E+08       0.9   4.096E+17
  0.5   3.513E+10

Table 4: Recovered samples with the one-channel formula, r = 0.6: exact
samples (col. 2), computed samples and relative errors with M = 40
(col.s 3 and 4) and with M = 500 (col.s 5 and 6).

                      M = 40               M = 500
  x_k    g(x_k)     g_k      e_k         g_k      e_k

  0.0    0.1529    0.1132   0.2598      0.1498   0.0199
  0.6   -0.2906   -0.5344   0.8390     -0.3096   0.0653
  1.2    0.0856   -0.4833   6.6498      0.0410   0.5206
  1.8    0.9221    0.2132   0.7687      0.8664   0.0603
  2.4    0.8416    0.3498   0.5844      0.8029   0.0459
  3.0    0.0710   -0.0872   2.2288      0.0585   0.1751

Table 5: Recovered samples with the one-channel formula, r = 0.6,
with noise added: exact samples (col. 2), computed samples and relative
errors without regularization (col.s 3 and 4) and with regularization
(col.s 5 and 6).

                        no regul.              with regul.
  x_k    g(x_k)       g_k         e_k          g_k      e_k

 -1.2   -0.5237    50.9126     98.2227      -0.4852   0.0734
 -0.6    0.1580   180.6426   1142.5116       0.2141   0.3550
  0.0    0.1529   306.9469   2006.8097       0.1430   0.0652
  0.6   -0.2906   307.9712   1060.8440      -0.3608   0.2416
  1.2    0.0856   183.2893   2141.4934       0.0435   0.4918
  1.8    0.9221    53.8130     57.3621       0.9248   0.0030

Table 6: Recovered samples of g with two channels, r = 0.6,
no noise added: exact samples (col. 2), computed samples and relative
errors without regularization (col.s 3 and 4) and with regularization
(col.s 5 and 6).

                       no regul.            with regul.
  x_k    g(x_k)      g_k        e_k         g_k      e_k

 -2.4   -0.1868   -0.0580    -0.6893     -0.0785   0.5799
 -1.2   -0.5237    2.1433    -5.0929     -0.4569   0.1274
  0.0    0.1528   10.3983    67.0174      0.1832   0.1989
  1.2    0.0856   12.0422   139.7624     -0.1897   3.2170
  2.4    0.8416    5.1810     5.1561     -0.1480   1.1758
  3.6   -0.1782    0.1425    -1.7997     -0.4405   1.4720

Table 7: Recovered samples of g' with two channels, r = 0.6,
no noise added: exact samples (col. 2), computed samples and relative
errors without regularization (col.s 3 and 4) and with regularization
(col.s 5 and 6).

                       no regul.            with regul.
  x_k   g'(x_k)     g'_k       e_k         g'_k      e_k

 -2.4   -0.9399   -0.4742   -0.4955      -0.7919   0.1575
 -1.2    1.0457    5.5797    4.3356       0.8325   0.2039
  0.0   -0.7350    5.4046   -8.3534      -0.5690   0.2258
  1.2    1.4159   -2.6352    2.8611       0.5867   0.5857
  2.4   -1.0603   -7.1265   -5.7212      -0.9327   0.1203
  3.6    0.2127   -0.8092    4.8046       0.8036   2.7786

Table 8: Recovered samples of g. Col.s 3 and 4: computed samples and
relative errors with one channel and r = 0.6. Col.s 5 and 6: computed
samples and relative errors with two channels and r = 0.3.

                  1 chan., r = 0.6     2 chan., r = 0.3
  x_k    g(x_k)      g_k      e_k         g_k      e_k

 -1.2   -0.5237   -0.5321   0.0160     -0.5261   0.0046
 -0.6    0.1580    0.1350   0.1454      0.1506   0.0466
  0.0    0.1529    0.1255   0.1790      0.1451   0.0507
  0.6   -0.2906   -0.3059   0.0527     -0.2926   0.0069
  1.2    0.0856    0.0840   0.0184      0.0879   0.0270
  1.8    0.9221    0.9239   0.0020      0.9235   0.0016
```