# On the impossibility of uniform sparse reconstruction using greedy methods.

Abstract. It has previously been shown that a trigonometric polynomial with at most M non-vanishing coefficients can be recovered from N = O(M log(D)) random samples by the greedy methods thresholding and orthogonal matching pursuit with high probability. In this note we show that these results cannot be made uniform, in the sense that a single (random) sampling set cannot guarantee recovery of all such M-sparse trigonometric polynomials simultaneously with high probability using these two greedy methods.

Key words and phrases: random sampling, trigonometric polynomials, greedy algorithms, orthogonal matching pursuit, thresholding, sparse recovery, compressed sensing

2000 AMS Mathematics Subject Classification--94A20, 42A05

1 Introduction

The basic goal of the sparse recovery (compressed sensing) problem is to exactly reconstruct a sparse signal c [member of] [C.sup.D] with at most M nonzero components, M [much less than] D, from

N = O(M [log.sup.n](D)) (1)

nonadaptive linear measurements of c [7, 2, 13]. These measurements are given by the vector [PHI]c [member of] [C.sup.N], where [PHI] is an N x D matrix.

Essentially two types of reconstruction techniques have been proposed: [l.sub.1]-minimization (Basis Pursuit) [2, 4, 5, 7] and greedy methods such as simple thresholding and orthogonal matching pursuit (OMP) [10, 16]. Both types of methods are able to reconstruct a sufficiently sparse signal c exactly with high probability if the measurement matrix [PHI] is a random Gaussian or Bernoulli matrix [7, 4, 1, 9]. Moreover, if [PHI] is a partial random Fourier matrix, then there are rigorous results of the same type for [l.sub.1]-minimization and thresholding, while for OMP the claim is supported by partial theoretical results and extensive numerical experiments in [10, 12].

Despite recent progress on efficient solvers for [l.sub.1]-minimization, greedy algorithms are usually still considered faster than Basis Pursuit. In particular, it is hard to beat simple thresholding in terms of computation speed. However, [l.sub.1]-minimization has the advantage that recovery holds uniformly, in the sense that a single (random) measurement matrix can guarantee exact reconstruction of all sparse signals (in the range of (1)) simultaneously [3, 7, 12, 14]. In contrast, it is known that for the Gaussian ensemble and in the range of (1) thresholding and OMP cannot recover all signals--even those supported on one fixed set T--with high probability using the same matrix [8], and this phenomenon easily extends to the Bernoulli ensemble. In this note we show that also for partial Fourier matrices thresholding and OMP do not guarantee uniform recovery if N is only linear in the sparsity M (i.e., in the range of (1)). More precisely, if N [less than or equal to] C[M.sup.2], then there exists an M-sparse signal depending on the (randomly chosen) matrix [PHI] such that exact reconstruction by thresholding fails with high probability, while the corresponding statement for OMP holds under the condition N [less than or equal to] C[M.sup.3/2]. We note, however, that a recent variant called regularized orthogonal matching pursuit does ensure uniform recovery [11].

2 Notation and Previous results

For some finite subset [GAMMA] [subset] [Z.sup.d], d [member of] N, we let [[PI].sub.[GAMMA]] denote the space of all trigonometric polynomials in dimension d whose coefficients are supported on [GAMMA]. An element f of [[PI].sub.[GAMMA]] is of the form f (x) = [[summation].sub.k[epsilon][GAMMA]] [c.sub.k] [e.sup.2[pi]ik x x], x [member of] [[0,1].sup.d], with Fourier coefficients [c.sub.k] [member of] C. The dimension of [[PI].sub.[GAMMA]] will be denoted by D := [absolute value of [GAMMA]].

A trigonometric polynomial is called M-sparse if at most M coefficients [c.sub.k] are non-zero and the set of all M-sparse trigonometric polynomials in [[PI].sub.[GAMMA]] is denoted by [[PI].sub.[GAMMA]](M). The goal is to reconstruct such a sparse trigonometric polynomial f [member of] [[PI].sub.[GAMMA]](M) from sample values f([x.sub.1]),..., f([x.sub.N]), where the number N of sampling points [x.sub.1],..., [x.sub.N] [member of] [[0, 1].sup.d] is small compared to the dimension D.

Given the sampling set X = ([x.sub.1],..., [x.sub.N]) we denote by [F.sub.X] the N x D matrix (recall that D = [absolute value of [GAMMA]]) with entries

[([F.sub.X]).sub.j,k] = [e.sup.2[pi]ik x [x.sub.j]], j = 1, ..., N, k [member of] [GAMMA]. (2)

Then clearly f([x.sub.j]) = [([F.sub.X]c).sub.j] if c is the vector of Fourier coefficients of f. Let [[phi].sub.k] denote the k-th column of [F.sub.X], i.e.,

[[phi].sub.k] = [([e.sup.2[pi]ik x [x.sub.j]]).sup.N.sub.j=1],

then [F.sub.X]c = [summation over (k[member of][GAMMA])] [c.sub.k][[phi].sub.k]. By [F.sub.TX] we denote the restriction of [F.sub.X] to the columns indexed by a subset T [subset] [GAMMA].

We use the following two probability models for [x.sub.1],..., [x.sub.N]:

(1) The sampling points [x.sub.1],..., [x.sub.N] are independent and uniformly distributed random variables on the cube [[0, 1].sup.d].

(2) The sampling points [x.sub.1],..., [x.sub.N] are independent and uniformly distributed on the grid 1/m [Z.sup.d.sub.m] = [{0, 1/m, ..., (m - 1)/m}.sup.d] for some m [member of] N, m [greater than or equal to] 2.

In both cases the matrix [F.sub.X] is a random partial Fourier matrix. More precisely, in the first case it is a non-equispaced Fourier matrix, and in the second case it is a submatrix consisting of random rows of the discrete Fourier matrix on [Z.sup.d.sub.m].

So far mainly [l.sub.1]-minimization (Basis Pursuit) [2, 4, 5, 8, 10, 12] and greedy algorithms [9, 10, 12] have been proposed as methods for reconstructing the vector c of Fourier coefficients of a sparse trigonometric polynomial and, hence, the polynomial itself. Denote by y = [(f([x.sub.j])).sup.N.sub.j=1] the vector of sample values. Basis Pursuit (BP) consists in determining the solution of the minimization problem

min [[parallel]c[parallel].sub.1] = [summation over (k[member of][GAMMA])] [absolute value of [c.sub.k]] subject to [F.sub.X]c = y = [(f([x.sub.j])).sup.N.sub.j=1].

This problem can be solved with convex optimization techniques. The following reconstruction theorem has been shown recently [12, 14, 15, 4].

Theorem 2.1. Let the sampling set X = ([x.sub.1],..., [x.sub.N]) be chosen according to the probability model (1) or (2), and suppose

N/log(N) [greater than or equal to] CM [log.sup.2](M) log(D) log([[epsilon].sup.-1]). (3)

Then with probability at least 1 - [epsilon] Basis Pursuit reconstructs every M-sparse trigonometric polynomial.

Since M, N [less than or equal to] D, relation (3) is satisfied in particular if

N [greater than or equal to] CM [log.sup.4](D) log([[epsilon].sup.-1]).

Neglecting the log-factors, the required number of samples N scales linearly in the sparsity M.

Presumably the simplest greedy algorithm is thresholding. It consists of the following steps:

1. Determine the set T [subset] [GAMMA] corresponding to the M largest correlations [absolute value of <y, [[phi].sub.k]>], k [member of] [GAMMA].

2. Compute the coefficients corresponding to the orthogonal projection of y onto the linear span of {[[phi].sub.k], k [member of] T}, i.e., the non-zero coefficients of c are determined as the minimizer of the least squares problem min [[parallel][F.sub.TX]c - y[parallel].sub.2].
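These two steps amount to one correlation pass and one least-squares fit. The following is an illustrative sketch (not the implementation of [10]; the sizes and the test signal are our own choices):

```python
import numpy as np

def thresholding(F, y, M):
    """Simple thresholding: keep the M largest correlations, then project."""
    corr = np.abs(F.conj().T @ y)               # |<y, phi_k>| for every column
    T = np.sort(np.argsort(corr)[-M:])          # indices of the M largest correlations
    c = np.zeros(F.shape[1], dtype=complex)
    c[T] = np.linalg.lstsq(F[:, T], y, rcond=None)[0]   # least-squares fit on T
    return c

# demo: d = 1, D = 64 frequencies, M = 4, N = 400 samples from model (1)
rng = np.random.default_rng(0)
D, M, N = 64, 4, 400
X = rng.random(N)
F = np.exp(2j * np.pi * np.outer(X, np.arange(D)))      # the N x D matrix F_X
c_true = np.zeros(D, dtype=complex)
c_true[rng.choice(D, M, replace=False)] = np.exp(2j * np.pi * rng.random(M))
c_rec = thresholding(F, F @ c_true, M)
```

Here all non-zero coefficients have unit modulus, so the dynamic range is R = 1 and condition (4) below is comfortably satisfied for these sizes.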

Thresholding is much faster in practice than Basis Pursuit. In [10] the following reconstruction theorems have been shown.

Theorem 2.2. Let f [member of] [[PI].sub.[GAMMA]](M) with Fourier coefficients c. Define its dynamic range by

R := [max.sub.k[member of]T] [absolute value of [c.sub.k]]/[min.sub.k[member of]T] [absolute value of [c.sub.k]].

Choose the sampling points ([x.sub.1], ..., [x.sub.N]) according to the probability model (1) or (2). If for some [epsilon] > 0

N [greater than or equal to] C M [R.sup.2] log(4D/[epsilon]), (4)

then with probability at least 1 - [epsilon] thresholding recovers f exactly. The constant C is no larger than 17.89.

Apart from the dependence on the dynamic range R, there is a subtle difference from the recovery theorem for BP above: recovery is guaranteed only for the given sparse polynomial with high probability, while Theorem 2.1 states that a single sampling set can recover all sparse polynomials with high probability. One can remove this drawback for thresholding; however, this comes at the cost of dramatically increasing the required number of samples.

Theorem 2.3. Choose the sampling set X = ([x.sub.1],..., [x.sub.N]) according to the probability model (1) or (2), and assume that

N [greater than or equal to] C[M.sup.2] [R.sup.2] log (D/[epsilon]).

Then with probability at least 1 - [epsilon] thresholding recovers all M-sparse trigonometric polynomials for which the dynamic range of their coefficients is at most R.

Explicit (and small) constants can be found in [10]. Below we will show that the quadratic dependence of the number of samples N on the sparsity M cannot be improved if one requires uniformity, i.e., recovery of all sparse trigonometric polynomials from a single sampling set.

Orthogonal Matching Pursuit (OMP) is an iterative greedy algorithm, which adds a new element of the support T in each step by maximizing the correlation of the current residual with the remaining columns [[phi].sub.k]. Formally, it consists of the following steps.

1. Initialize: Set the current residual [r.sub.0] := y and the support set [T.sub.0] := [empty set].

2. Iterate until a stopping criterion is met (iteration counter s):

i. Determine [k.sub.s] := [argmax.sub.k[member of][GAMMA]] [absolute value of <[r.sub.s-1], [[phi].sub.k]>] and set [T.sub.s] := [T.sub.s-1] [union] {[k.sub.s]}.

ii. Update the residual by [r.sub.s] := y - [P.sub.[T.sub.s]]y, where [P.sub.[T.sub.s]]y denotes the orthogonal projection of y onto the span of {[[phi].sub.k], k [member of] [T.sub.s]}.

3. Set T = [T.sub.s]; the non-zero coefficients of c are given by [P.sub.T]y = [summation over (k[member of]T)] [c.sub.k][[phi].sub.k].
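The steps above can be sketched as follows, run for exactly M iterations; the demo signal has dynamic range R = 8, and all sizes are our own illustrative choices:

```python
import numpy as np

def omp(F, y, M):
    """Orthogonal matching pursuit: M greedy steps as described above."""
    T, r = [], y
    for _ in range(M):
        corr = np.abs(F.conj().T @ r)           # correlate residual with all columns
        corr[T] = 0                             # never reselect a chosen column
        T.append(int(np.argmax(corr)))
        coef = np.linalg.lstsq(F[:, T], y, rcond=None)[0]
        r = y - F[:, T] @ coef                  # residual after projecting onto the span
    c = np.zeros(F.shape[1], dtype=complex)
    c[T] = coef
    return c

# demo: coefficients 1, 1/2, 1/4, 1/8, i.e. dynamic range R = 8
rng = np.random.default_rng(1)
D, M, N = 64, 4, 400
X = rng.random(N)
F = np.exp(2j * np.pi * np.outer(X, np.arange(D)))
c_true = np.zeros(D, dtype=complex)
c_true[rng.choice(D, M, replace=False)] = 2.0 ** -np.arange(M)
c_rec = omp(F, F @ c_true, M)
```

Unlike thresholding, the recovery here does not degrade with R, since each projection step removes the already-identified large coefficients from the residual.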

More details on the implementation of this algorithm can be found in [10]. In practice, it is slower than thresholding but usually faster than BP. Moreover, numerical tests indicate a higher recovery rate than thresholding and a similar average (non-uniform) rate as BP.

Due to stochastic dependency issues, it seems difficult to fully analyze the performance of OMP, but at least we can say something about the first step [10].

Theorem 2.4. Let f [member of] [[PI].sub.[GAMMA]](M) with coefficients supported on T. Choose random sampling points ([x.sub.1],..., [x.sub.N]) according to one of our two probability models. If

N [greater than or equal to] CM log(D/[epsilon]),

then with probability at least 1 - [epsilon] OMP selects an element of the true support T in the first iteration.

Compared to the corresponding result for thresholding (Theorem 2.2), the dependence on the dynamic range is removed, and numerical experiments in [10] indicate that this remains true also for the further iterations. However, the above statement is again non-uniform, in the sense that a single sampling set X is good only for the given polynomial but not necessarily for all sparse polynomials simultaneously. As for thresholding, one can remove this drawback at the cost of dramatically increasing the required number of sampling points, which actually allows a statement about the full run of OMP.

Theorem 2.5. Let X = ([x.sub.1], ..., [x.sub.N]) be chosen according to the probability model (1) or (2). Assume that

N [greater than or equal to] C[M.sup.2] log(D/[epsilon]).

Then with probability at least 1 - [epsilon] OMP recovers every M-sparse trigonometric polynomial in M steps.

We will show below that the above statement is close to optimal if one requires uniformity: the number of required samples N must grow at least like [M.sup.3/2], so Theorem 2.4 cannot be made uniform.

3 Main Results

The following result shows that Theorem 2.3 cannot be significantly improved if one requires uniformity. For simplicity we restrict [GAMMA] to a particular set of basic frequencies, although the statement holds also for more general sets (see also the remark at the end of Section 4).

Theorem 3.1. Let [GAMMA] = [Z.sup.d.sub.m] with m [greater than or equal to] 2. Let X = ([x.sub.1],..., [x.sub.N]) be randomly chosen according to the probability model (1) or (2). Suppose M [less than or equal to] [absolute value of [GAMMA]]/4 = [m.sup.d]/4 and

N [less than or equal to] 1/(4[sigma]) [M.sup.2] - 7/2 (M - 1) (5)

for some [sigma] > 2. Then with probability exceeding

1 - 4/M - 1/[([sigma] -1).sup.2]

there exists an M-sparse trigonometric polynomial (depending on X) which thresholding fails to reconstruct.

The proof actually provides an explicit trigonometric polynomial (depending on X), which thresholding fails to reconstruct with the stated probability. A similar statement holds also for OMP.

Theorem 3.2. Let [GAMMA] = [Z.sup.d.sub.m] with m [greater than or equal to] 2. Let X = ([x.sub.1],..., [x.sub.N]) be randomly chosen according to the probability model (1) or (2). Suppose M [less than or equal to] [absolute value of [GAMMA]]/4 = [m.sup.d]/4 and

N [less than or equal to] [tau]/5 [M.sup.3/2] - 7/2(M-1) (6)

for some [tau] < 1. Then with probability exceeding 1 - 4/M - [[tau].sup.2] there exists an M-sparse trigonometric polynomial (depending on X) which OMP fails to reconstruct in M steps.

The exponent 3/2 in (6) is probably not optimal, and we expect that (6) can be improved to N [less than or equal to] [C.sub.n][M.sup.2-1/n] for any n [greater than or equal to] 2 with a constant [C.sub.n] depending on n. However, this would probably require even more tedious and technical computations than the present proof. Since the present statement already shows that the recovery Theorem 2.4 for OMP cannot be made uniform without spoiling the linear dependence N = O(M), we did not pursue this issue further here.

We note that the above theorem does not exclude the possibility that the pathological M-sparse trigonometric polynomial is reconstructed after M + 1 or more steps. The proof of the theorem actually shows that OMP selects a wrong element in the first step. However, if the reconstructed support set T' contains the true support set T (which is possible if OMP does more than M steps) and if T' is not unreasonably large (say [absolute value of T'] [less than or equal to] 2[absolute value of T]), then it will usually happen that the coefficients on T' \ T are set to zero, so that nevertheless the correct polynomial is recovered. This scenario was actually observed in numerical experiments. Nevertheless, we expect that a version of Theorem 3.2 holds which shows the impossibility of uniform recovery even when OMP is allowed to perform more than M steps.

4 Proofs

We develop the proofs of both Theorems 3.1 and 3.2 in parallel.

First note that by linear algebra necessarily N [greater than or equal to] 2M if some method (in particular OMP and thresholding) is able to recover all M-sparse trigonometric polynomials, see also [6, Lemma 3.1]. Indeed, if N < 2M and [absolute value of T] = 2M then [F.sub.TX] is not injective, i.e., there exists c supported on T with [F.sub.TX]c = 0. Split T into [T.sub.1] and [T.sub.2] with [absolute value of [T.sub.1]] = [absolute value of [T.sub.2]] = M and T = [T.sub.1] [union] [T.sub.2]. Denote by [c.sub.i] the vector that coincides with c on [T.sub.i] and is zero outside (i = 1, 2), i.e., both [c.sub.1] and [c.sub.2] are M-sparse. Then c = [c.sub.1] + [c.sub.2], but [F.sub.X]c = [F.sub.X]([c.sub.1] + [c.sub.2]) = 0, i.e., [F.sub.X][c.sub.1] = [F.sub.X](-[c.sub.2]). Hence, both [c.sub.1] and -[c.sub.2] produce the same measurements and no reconstruction method can distinguish between these two M-sparse vectors. Hence, from now on we assume N [greater than or equal to] 2M in addition to (5) or (6).
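This splitting argument is easy to verify numerically. The following sketch (with illustrative sizes of our own choosing, d = 1) constructs two different M-sparse vectors with identical samples whenever N < 2M:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 3, 5                                 # N < 2M
X = rng.random(N)                           # probability model (1), d = 1
T = np.arange(2 * M)                        # a support of size 2M
F_T = np.exp(2j * np.pi * np.outer(X, T))   # the N x 2M matrix F_TX

# N < 2M, so F_TX has a nontrivial null space; take a null vector c
c = np.linalg.svd(F_T)[2][-1].conj()        # right singular vector, F_T @ c ~ 0

c1, c2 = np.zeros(2 * M, complex), np.zeros(2 * M, complex)
c1[:M], c2[M:] = c[:M], c[M:]               # split c into two M-sparse halves

# c1 and -c2 are different M-sparse vectors with identical samples
assert np.allclose(F_T @ c1, F_T @ (-c2), atol=1e-10)
```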

Both for thresholding and OMP we choose the M-sparse trigonometric polynomial as follows. Let T [subset] [GAMMA] of size M be the support of its coefficients, and let l [member of] [GAMMA] \ T. Later we will impose some restrictions on T and l, which are not essential, however. Then we choose the non-zero components of c as

[c.sub.k] := <[[phi].sub.l], [[phi].sub.k]>, k [member of] T.

Clearly, [c.sub.k] depends on the sampling set X. Let

y = [F.sub.X]c = [summation over (k[member of]T)] <[[phi].sub.l], [[phi].sub.k]>[[phi].sub.k] (7)

be the corresponding vector of sample values. Thresholding fails to select the correct support T if

[min.sub.k[member of]T] [absolute value of <y, [[phi].sub.k]>] < [absolute value of <y, [[phi].sub.l]>].

Hence, fixing some k [member of] T and [alpha] > 0 the probability that thresholding succeeds can be estimated from above by

P([absolute value of <y, [[phi].sub.k]>] [greater than or equal to] [absolute value of <y, [[phi].sub.l]>]) [less than or equal to] P([absolute value of <y, [[phi].sub.k]>] > [alpha]) + P([absolute value of <y, [[phi].sub.l]>] < [alpha]).
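For concreteness, the adversarial construction can be simulated. The parameters below are our own illustrative choices satisfying the assumptions of Theorem 3.1 (d = 1, [GAMMA] = [Z.sub.256], M = 64, and N = 150, which lies within (5) for [sigma] around 2.5); they are not taken from the proof:

```python
import numpy as np

rng = np.random.default_rng(0)
m, M, N = 256, 64, 150
T = 4 * np.arange(M)                    # all k in T are 0 mod 4
l = 1                                   # l is 1 mod 4, l not in T
X = rng.random(N)                       # probability model (1)
F = np.exp(2j * np.pi * np.outer(X, np.arange(m)))   # N x D matrix F_X

# adversarial coefficients c_k = <phi_l, phi_k>, k in T  (they depend on X!)
c = np.array([np.vdot(F[:, k], F[:, l]) for k in T])
y = F[:, T] @ c
corr = np.abs(F.conj().T @ y)           # |<y, phi_k>| for all k in Gamma

# <y, phi_l> = sum_k |<phi_l, phi_k>|^2, cf. the proof of Lemma 4.1 below
assert np.isclose(corr[l], np.sum(np.abs(c) ** 2))
# the wrong index l out-correlates part of the true support:
assert corr[l] > corr[T].min()          # so thresholding misses T
```

The second assertion is exactly the failure event above: the correlation at the spurious frequency l exceeds the smallest correlation on the true support.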

Now we consider OMP. It selects a wrong element in the first step, and consequently cannot recover c in M steps, if

[absolute value of <y, [[phi].sub.l]>] > [max.sub.k[member of]T] [absolute value of <y, [[phi].sub.k]>].

Similarly as above, the probability that OMP does not fail in the first step can be upper bounded by

[summation over (k[member of]T)] P([absolute value of <y, [[phi].sub.k]>] > [alpha]) + P([absolute value of <y, [[phi].sub.l]>] < [alpha]). (8)

Hence, both OMP and thresholding require an analysis of P([absolute value of <y, [[phi].sub.l]>] < [alpha]) and P([absolute value of <y, [[phi].sub.k]>] > [alpha]). Assuming [alpha] [less than or equal to] 1/2 E[[absolute value of <y, [[phi].sub.l]>]], we can estimate the first term by Chebyshev's inequality as

P([absolute value of <y, [[phi].sub.l]>] < [alpha]) [less than or equal to] P([absolute value of [absolute value of <y, [[phi].sub.l]>] - E[absolute value of <y, [[phi].sub.l]>]] [greater than or equal to] 1/2 E[absolute value of <y, [[phi].sub.l]>]) [less than or equal to] 4 (E[[[absolute value of <y, [[phi].sub.l]>].sup.2]] - [(E[absolute value of <y, [[phi].sub.l]>]).sup.2])/[(E[absolute value of <y, [[phi].sub.l]>]).sup.2]. (9)

Similarly, assuming [[alpha].sup.2] [greater than or equal to] [sigma]E[[[absolute value of <y, [[phi].sub.k]>].sup.2]] for [sigma] > 1, we can estimate

P([absolute value of <y, [[phi].sub.k]>] > [alpha]) [less than or equal to] P([[absolute value of <y, [[phi].sub.k]>].sup.2] - E[[[absolute value of <y, [[phi].sub.k]>].sup.2]] > ([sigma] - 1)E[[[absolute value of <y, [[phi].sub.k]>].sup.2]]) [less than or equal to] (E[[[absolute value of <y, [[phi].sub.k]>].sup.4]] - [(E[[[absolute value of <y, [[phi].sub.k]>].sup.2]]).sup.2])/([([sigma] - 1).sup.2][(E[[[absolute value of <y, [[phi].sub.k]>].sup.2]]).sup.2]). (10)

An [alpha] satisfying the two assumed conditions exists if and only if

[(E[absolute value of <y, [[phi].sub.l]>]).sup.2] [greater than or equal to] 4[sigma]E[[[absolute value of <y, [[phi].sub.k]>].sup.2]].

It remains to compute the expectations E[absolute value of <y, [[phi].sub.l]>], E[[[absolute value of <y, [[phi].sub.l]>].sup.2]], E[[[absolute value of <y, [[phi].sub.k]>].sup.2]], and E[[[absolute value of <y, [[phi].sub.k]>].sup.4]].

Lemma 4.1. For l [not member of] T and y given by (7) it holds E[absolute value of <y, [[phi].sub.l]>] = MN.

Proof. By definition of y we have

<y, [[phi].sub.l]> = [summation over (k[member of]T)] <[[phi].sub.l], [[phi].sub.k]><[[phi].sub.k], [[phi].sub.l]> = [summation over (k[member of]T)] [[absolute value of <[[phi].sub.l], [[phi].sub.k]>].sup.2].

Furthermore,

[[absolute value of <[[phi].sub.l], [[phi].sub.k]>].sup.2] = [summation over (j, j' = 1, ..., N)] exp(2[pi]i(l - k) x ([x.sub.j] - [x.sub.j'])).

Since the [x.sub.j] are independent and uniformly distributed on [[0, 1].sup.d] or on the grid 1/m[Z.sup.d.sub.m], and since l [not equal to] k, we have

E[exp(2[pi]i(l - k) x ([x.sub.j] - [x.sub.j']))] = [[delta].sub.j,j'].

Hence, E[[[absolute value of <[[phi].sub.l], [[phi].sub.k]>].sup.2]] = N and E[[absolute value of <y, [[phi].sub.l]>]] = MN.

Lemma 4.2. It holds E [[[absolute value of <y, [[phi].sub.l]>].sup.2]] = [N.sup.2]M(M + 1) - MN.

Proof. We have

[[absolute value of <y, [[phi].sub.l]>].sup.2] = [summation over (k, k' [member of] T)] [[absolute value of <[[phi].sub.l], [[phi].sub.k]>].sup.2] [[absolute value of <[[phi].sub.l], [[phi].sub.k']>].sup.2].

Furthermore,

E[[[absolute value of <[[phi].sub.l], [[phi].sub.k]>].sup.2] [[absolute value of <[[phi].sub.l], [[phi].sub.k']>].sup.2]] = [summation over ([j.sub.1], ..., [j.sub.4] = 1, ..., N)] E[exp(2[pi]i((l - k) x ([x.sub.[j.sub.1]] - [x.sub.[j.sub.2]]) + (l - k') x ([x.sub.[j.sub.3]] - [x.sub.[j.sub.4]])))].

If k = k', then the expectation in the above sum equals 1 if and only if {[j.sub.1], [j.sub.3]} = {[j.sub.2], [j.sub.4]}, and vanishes otherwise. Hence, for [j.sub.1], [j.sub.2], [j.sub.3], [j.sub.4] [member of] {1, ..., N} this happens N + 2N(N - 1) = N(2N - 1) times. If k [not equal to] k' then the expectation equals 1 if and only if [j.sub.1] = [j.sub.2] and [j.sub.3] = [j.sub.4], and vanishes otherwise. This happens [N.sup.2] times for [j.sub.1], ..., [j.sub.4] [member of] {1, ..., N}. Combining everything, we obtain

E[[[absolute value of <y, [[phi].sub.l]>].sup.2]] = MN(2N - 1) + M(M - 1)[N.sup.2] = [N.sup.2]M(M + 1) - MN.
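Both expectations are easy to check by Monte Carlo simulation; the parameters below are illustrative (model (1) with d = 1, and a choice of T and l of our own):

```python
import numpy as np

rng = np.random.default_rng(2)
M, N, trials = 3, 10, 50000
T, l = 4 * np.arange(M), 1              # support T, extra frequency l not in T

x = rng.random((trials, N))             # `trials` independent sampling sets
# c_k = <phi_l, phi_k> = sum_j exp(2 pi i (l - k) x_j), one row per k in T
c = np.stack([np.exp(2j * np.pi * (l - k) * x).sum(axis=1) for k in T])
vals = (np.abs(c) ** 2).sum(axis=0)     # <y, phi_l>, real and non-negative

# Lemma 4.1: E <y, phi_l> = MN = 30
assert abs(vals.mean() - M * N) < 0.5
# Lemma 4.2: E |<y, phi_l>|^2 = N^2 M (M + 1) - MN = 1170
assert abs((vals ** 2).mean() - (N ** 2 * M * (M + 1) - M * N)) < 100
```

The tolerances are loose multiples of the standard error for this number of trials, so the checks pass comfortably.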

Lemma 4.3. For [k.sub.0] [member of] T and y given by (7) we have

E[[[absolute value of <y, [[phi].sub.[k.sub.0]]>].sup.2]] = [N.sup.3] + 3(M - 1)[N.sup.2] + (M - 1)(M - 2)N.

Proof. By definition of y

E[[[absolute value of <y, [[phi].sub.[k.sub.0]]>].sup.2]] = [summation over (k, k' [member of] T)] E[<[[phi].sub.l], [[phi].sub.k]><[[phi].sub.k], [[phi].sub.[k.sub.0]]> [bar.<[[phi].sub.l], [[phi].sub.k']><[[phi].sub.k'], [[phi].sub.[k.sub.0]]>]].

The expectation in the previous expression equals

[summation over ([j.sub.1], ..., [j.sub.4] = 1, ..., N)] E[exp(2[pi]i((l - k) x [x.sub.[j.sub.1]] + (k - [k.sub.0]) x [x.sub.[j.sub.2]] - (l - k') x [x.sub.[j.sub.3]] - (k' - [k.sub.0]) x [x.sub.[j.sub.4]]))].

If k = k' then the expectation in the above sum equals 1 if and only if [j.sub.1] = [j.sub.3] and either [k.sub.0] = k = k' or [j.sub.2] = [j.sub.4]. Hence, if [k.sub.0] = k = k' this leaves [N.sup.3] possibilities for [j.sub.1], ..., [j.sub.4] [member of] {1, ..., N}, while for [k.sub.0] [not equal to] k there are [N.sup.2] possibilities. Further, if k [not equal to] k' then the expectation equals 1 if and only if [j.sub.1] = [j.sub.2] = [j.sub.3] = [j.sub.4] (giving N possibilities) or k' = [k.sub.0] and [j.sub.1] = [j.sub.2] = [j.sub.3] (giving (M - 1)[N.sup.2] possibilities for k and [j.sub.1], ..., [j.sub.4]) or k = [k.sub.0] and [j.sub.1] = [j.sub.3] = [j.sub.4] (resulting in (M - 1)[N.sup.2] possibilities for k' and [j.sub.1], ..., [j.sub.4]). Altogether we obtain the stated expression for E[[[absolute value of <y, [[phi].sub.[k.sub.0]]>].sup.2]].

Lemma 4.4. Suppose [l.sub.n] [equivalent to] 1 mod 4 for some component [l.sub.n] of l [member of] [Z.sup.d] and k [equivalent to] 0 mod 4 for all k [member of] T. Then for [k.sub.0] [member of] T and y given by (7) it holds

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII].

The proof of this lemma is elementary but quite tedious, and postponed to the next section.

Equipped with these auxiliary results, we turn to the proofs of our main results. Choose T and l as in Lemma 4.4. Since M [less than or equal to] [absolute value of [GAMMA]]/4, this is certainly possible for our choice of [GAMMA]. By Lemma 4.3

[(E[[[absolute value of <y, [[phi].sub.k]>].sup.2]]).sup.2] = [N.sup.6] + [N.sup.5](6M - 6) + [N.sup.4](11[M.sup.2] - 24M + 13) + [N.sup.3](6[M.sup.3] - 24[M.sup.2] + 30M - 12) + [N.sup.2]([M.sup.4] - 6[M.sup.3] + 13[M.sup.2] - 12M + 4). (11)

Then a calculation shows that for k [member of] T

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII].

Since N [greater than or equal to] 2M, it is straightforward to verify that

[(E[[[absolute value of <y, [[phi].sub.k]>].sup.2]]).sup.2] [greater than or equal to] E[[[absolute value of <y, [[phi].sub.k]>].sup.4]] - [(E[[[absolute value of <y, [[phi].sub.k]>].sup.2]]).sup.2] (12)

for all M [greater than or equal to] 20 (say). Hence, under condition [[alpha].sup.2] [greater than or equal to] [sigma]E[[[absolute value of <y, [[phi].sub.k]>].sup.2]] by (10) we have

P([absolute value of <y, [[phi].sub.k]>] > [alpha]) [less than or equal to] 1/[([sigma] - 1).sup.2].

Furthermore, by (9) and Lemmas 4.1 and 4.2 we get

P([absolute value of <y, [[phi].sub.l]>] < [alpha]) [less than or equal to] 4 ([N.sup.2]M(M + 1) - MN - [M.sup.2][N.sup.2])/([M.sup.2][N.sup.2]) = 4(N - 1)/(MN) [less than or equal to] 4/M

under condition [alpha] [less than or equal to] 1/2 E[[absolute value of <y, [[phi].sub.l]>]]. Summarizing, if

[(E[absolute value of <y, [[phi].sub.l]>]).sup.2] [greater than or equal to] 4[sigma]E[[[absolute value of <y, [[phi].sub.k]>].sup.2]], (13)

then the probability that thresholding succeeds can be upper bounded by

4/M + 1/[([sigma] - 1).sup.2].

By Lemmas 4.1 and 4.3 the condition (13) is equivalent to

[M.sup.2][N.sup.2] [greater than or equal to] 4[sigma]((M - 1)(M - 2)N + 3(M - 1)[N.sup.2] + [N.sup.3]).

Since N [greater than or equal to] 2M, this is satisfied under (5). Moreover, for M < 20, the maximal N satisfying (5) is less than 1 and then the statement of the theorem becomes trivial. Hence, (12) is satisfied for all valid M, N, and the proof of Theorem 3.1 is finished.

By (8) the probability that OMP succeeds in M steps can be estimated from above by

[summation over (k[member of]T)] P([absolute value of <y, [[phi].sub.k]>] > [alpha]) + P([absolute value of <y, [[phi].sub.l]>] < [alpha]) [less than or equal to] M/[([sigma] - 1).sup.2] + 4/M.

We choose [sigma] such that M/[([sigma] - 1).sup.2] = [[tau].sup.2] < 1, i.e., [sigma] = [square root of (M)] [[tau].sup.-1] + 1. As above (13) is satisfied under condition (5), which now reads

N [less than or equal to] 1/(4([square root of M] [[tau].sup.-1] + 1)) [M.sup.2] - 7/2 (M - 1).

This is certainly satisfied if

N [less than or equal to] [tau]/5 [M.sup.3/2] - 7/2 (M - 1). (14)

Then the probability that OMP succeeds in M steps can be estimated from above by

4/M + [[tau].sup.2].

Moreover, if M < 20, then as above the maximal N satisfying (14) is less than 1 and, hence, we can again drop the condition M [greater than or equal to] 20 ensuring (12). This completes the proof of Theorem 3.2.

Remark.

* The proof shows that the conditions on [GAMMA] and M can be slightly relaxed in Theorems 3.1 and 3.2. We only have to require the existence of l and T as in Lemma 4.4. In particular, [GAMMA] = [{-q, -q + 1, ..., q}.sup.d] is also a valid choice, corresponding to spaces of trigonometric polynomials of degree at most q.

* In order to improve Theorems 3.1 and 3.2 with respect to the probability estimate and the exponent 3/2 in (6), one might work with higher moments rather than only the 4th moment as in Lemma 4.4. However, computing higher moments will be even more tedious than the proof of Lemma 4.4.

5 Proof of Lemma 4.4

We have

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII],

where

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII].

Given ([j.sub.1], ..., [j.sub.8]), we let A = ([A.sub.1], ..., [A.sub.r]) be a partition of {1, ..., 8} into r disjoint subsets (called blocks) such that i, i' [member of] {1, ..., 8} are contained in the same block A [member of] A if and only if [j.sub.i] = [j.sub.i'] =: [j.sub.A]. By B(r) we denote the collection of all such partitions of {1, ..., 8} into r blocks. Then by independence the previous sum can be written

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII].

The product in the previous expression contributes to the sum if and only if

[summation over (i [member of] A, i [less than or equal to] 4)] [(-1).sup.i](l - [k.sub.i]) + [summation over (i [member of] A, i [greater than or equal to] 5)] [(-1).sup.i]([k.sub.i-4] - [k.sub.0]) = 0 for all A [member of] A. (15)

The condition k [equivalent to] 0 mod 4 for all k [member of] T implies that the second sum above is always 0 modulo 4. Thus, if the sum above vanishes then necessarily also the first sum must be 0 mod 4. But, due to the condition [l.sub.n] [equivalent to] 1 mod 4, this can only happen if l cancels completely, which in turn implies that either the pairs {1, 2} and {3, 4} or the pairs {1, 4} and {2, 3} are each contained in the same block of the partition A. Note that this means as well that the partition A contains at most six blocks and we can write

E[[[absolute value of <y, [[phi].sub.[k.sub.0]]>].sup.4]] = [summation over (r = 1, ..., 6)] D(r)N(N - 1) ... (N - r + 1),

where D(r) = [[summation].sub.A[member of]B(r)] [C.sub.r] (A) and

[C.sub.r](A) = #{([k.sub.1], [k.sub.2], [k.sub.3], [k.sub.4]) [member of] [T.sup.4] : (15) is satisfied}.

Hence, we need to determine the coefficients D(r), r = 1, ..., 6. For r = 6 we must consider only the partitions

[A.sub.1] ={{1, 2}, {3, 4}, {5}, {6}, {7}, {8}}, [A.sub.2] = {{1, 4}, {2, 3}, {5}, {6}, {7}, {8}}.

But (15) then means [k.sub.i] = [k.sub.0] for all i = 1, ..., 4, and we conclude that D(6) = 2. Now consider r = 1. Then the only partition consists of the block A = {1, ..., 8} and (15) is satisfied for all possible choices of ([k.sub.1], [k.sub.2], [k.sub.3], [k.sub.4]) [member of] [T.sup.4]. Hence, D(1) = [[absolute value of T].sup.4] = [M.sup.4].

Now consider r = 5. We have the following possible partitions:

* A = {{1, 2, 3, 4}, {5}, {6}, {7}, {8}}. As above, (15) necessarily requires [k.sub.1] = [k.sub.2] = [k.sub.3] = [k.sub.4] = [k.sub.0]; hence, [C.sub.5](A) = 1.

* We take the two partitions {{1, 2}, {3, 4}} and {{1, 4}, {2, 3}} of the numbers {1, ..., 4}, and then add one of the numbers 5, ..., 8 to one of these blocks and let the other three blocks of A consist of the remaining numbers each. This gives an overall number of 16 possible partitions, and as above (15) requires [k.sub.1] = [k.sub.2] = [k.sub.3] = [k.sub.4] = [k.sub.0]. Hence, [C.sub.5](A) = 1 for each of those 16 partitions.

* We take again the two partitions {{1, 2}, {3, 4}} and {{1, 4}, {2, 3}}. Then we form a partition of {5, ..., 8} into two one-element blocks and one two-element block and combine these into a partition of {1, ..., 8}. If the two-element block is {5, 6}, then the single element blocks are {7}, {8}, and (15) requires [k.sub.3] = [k.sub.4] = [k.sub.0]. Now, if we form the combination with {{1, 4}, {2, 3}}, then (15) requires also [k.sub.2] = [k.sub.3] = [k.sub.0] and [k.sub.1] = [k.sub.4] = [k.sub.0] and, hence, [C.sub.5](A) = 1 for the corresponding partition. If we form the combination with {{1, 2}, {3, 4}}, then (15) requires [k.sub.1] = [k.sub.2], and we conclude [C.sub.5](A) = M for A = {{1, 2}, {3, 4}, {5, 6}, {7}, {8}}. A similar situation occurs for the two-element blocks {5, 8}, {6, 7}, {7, 8}. Now, if the two-element block is {5, 7}, then (15) for the remaining one-element blocks requires [k.sub.2] = [k.sub.4] = [k.sub.0]. For both {{1, 2}, {3, 4}} and {{1, 4}, {2, 3}} this implies that also [k.sub.1] = [k.sub.3] = [k.sub.0] and, hence, [C.sub.5](A) = 1 for the corresponding partition A. The same occurs for the two-element block {6, 8}. Counting all cases, we have [C.sub.5](A) = 1 for eight partitions A and [C.sub.5](A) = M for four partitions.

Altogether we have

D(5) = 1 + 16 + 8 + 4M = 4M + 25.

Now let r = 4. Then we have the following possibilities.

* We take {1, 2, 3, 4} and add one of the numbers 5, ..., 8 to it. Further, we choose the remaining three numbers as single element blocks. Then (15) requires [k.sub.1] = [k.sub.2] = [k.sub.3] = [k.sub.4] = [k.sub.0] and, hence, [C.sub.4] (A) = 1 for each of the four of such partitions.

* We take {{1, 2}, {3, 4}}, add two of the numbers 5, 6, 7, 8 to one of these blocks, and take the remaining numbers as single element blocks. Consider first the resulting partition A = {{1, 2, 5, 6}, {3, 4}, {7}, {8}}. Then (15) is equivalent to [k.sub.3] = [k.sub.4] = [k.sub.0]; hence, [C.sub.4](A) = [M.sup.2], and similarly for A = {{1, 2}, {3, 4, 7, 8}, {5}, {6}}. The remaining six partitions constructed in this way satisfy [C.sub.4](A) = M, as can easily be seen. The same considerations hold, of course, if we start with {{1, 4}, {2, 3}}. Counting cases, we get four times [C.sub.4](A) = [M.sup.2] and 12 times [C.sub.4](A) = M.

* We take {{1, 2}, {3, 4}}, add one of the numbers 5, 6, 7, 8 to {1, 2} and another one to {3, 4}, and let the remaining numbers form a single element block each. Assume first that the resulting partition is A = {{1, 2, 5}, {3, 4, 7}, {6}, {8}}. Then (15) means [k.sub.2] = [k.sub.4] = [k.sub.0]; hence, [C.sub.4](A) = [M.sup.2], and similarly for the other three partitions where 5 or 6 is added to {1, 2} and 7 or 8 is added to {3, 4}. If A = {{1, 2, 5}, {3, 4, 6}, {7}, {8}}, then (15) is satisfied iff [k.sub.2] = [k.sub.3] = [k.sub.4] = [k.sub.0]; hence, [C.sub.4](A) = M, and as well for the other three partitions where either 5 and 6 or 7 and 8 are added to the first blocks. The same considerations apply if we start with {{1, 4}, {2, 3}}. Counting cases yields eight times [C.sub.4](A) = [M.sup.2] and eight times [C.sub.4](A) = M.

* As one block of the partition we take {1, 2, 3, 4} and as the remaining three blocks we take a partition of {5, 6, 7, 8} into two one-element blocks and one two-element block. If the one-element blocks are {5}, {6}, then [k.sub.1] = [k.sub.2] = [k.sub.0], and (15) applied to {1, 2, 3, 4} yields also [k.sub.3] = [k.sub.4]; then (15) is also satisfied for the remaining block {7, 8}. Hence, in this case [C.sub.4](A) = M. The same happens for the two-element blocks {5, 6}, {5, 8}, {6, 7}. Now if the one-element blocks are {5}, {7}, then [k.sub.1] = [k.sub.3] = [k.sub.0]. Further, (15) is satisfied for both {1, 2, 3, 4} and {6, 8} if and only if [k.sub.2] + [k.sub.4] = 2[k.sub.0]. This is satisfied in particular if [k.sub.2] = [k.sub.4] = [k.sub.0], but for at most M pairs ([k.sub.2], [k.sub.4]); hence 1 [less than or equal to] [C.sub.4](A) [less than or equal to] M for this partition. The same holds for the two-element block {5, 7}. Counting cases, we get four times [C.sub.4](A) = M and twice 1 [less than or equal to] [C.sub.4](A) [less than or equal to] M.

* We take {{1, 2}, {3, 4}} and as the remaining two blocks we take a partition of {5, 6, 7, 8}. If the partition of {5, ..., 8} is {{5, 6}, {7, 8}} then (15) is satisfied if and only if [k.sub.1] = [k.sub.2] and [k.sub.3] = [k.sub.4], hence [C.sub.4](A) = [M.sup.2] in this case. If the added two blocks are {5, 8} and {6, 7}, then (15) is satisfied if and only if [k.sub.1] = [k.sub.2] = [k.sub.3] = [k.sub.4]; hence, [C.sub.4](A) = M. If we add {5, 7} and {6, 8}, then (15) is satisfied if and only if [k.sub.1] = [k.sub.2] = [k.sub.3] = [k.sub.4] = [k.sub.0]; hence [C.sub.4](A) = 1. If the remaining blocks are {5} and {6, 7, 8}, then (15) is satisfied if [k.sub.1] = [k.sub.2] = [k.sub.0] and [k.sub.3] = [k.sub.4]; hence, [C.sub.4](A) = M, and this holds as well if the single element block is {6}, {7}, or {8}. Clearly, similar observations hold if we start with {{1, 4}, {2, 3}}. Counting cases yields twice [C.sub.4](A) = [M.sup.2], ten times [C.sub.4](A) = M, and twice [C.sub.4](A) = 1.

We conclude that

D(4) [less than or equal to] 4 + 4[M.sup.2] + 12M + 8[M.sup.2] + 8M + 4M + 2M + 2[M.sup.2] + 10M + 2 = 14[M.sup.2] + 36M + 6,

(but also D(4) [greater than or equal to] 14[M.sup.2] + 34M + 8). Now let r = 3. We distinguish the following cases.

* We take {1, 2, 3, 4}, add two of the numbers 5, ..., 8 to it, and take the remaining two as single element blocks. Suppose first that these are {5}, {6}. Then (15) is satisfied if and only if [k.sub.1] = [k.sub.2] = [k.sub.0] and [k.sub.3] = [k.sub.4]; hence, [C.sub.3](A) = M. The same holds if the single element blocks are {5}, {8} or {6}, {7} or {7}, {8}, giving a total of 4 possibilities. Further, if the single element blocks are {5}, {7} or {6}, {8}, then (15) is satisfied if and only if [k.sub.1] = [k.sub.3] = [k.sub.0] and [k.sub.2] + [k.sub.4] = 2[k.sub.0]; hence, 1 [less than or equal to] [C.sub.3](A) [less than or equal to] M. Altogether, we have four times [C.sub.3](A) = M and twice 1 [less than or equal to] [C.sub.3](A) [less than or equal to] M.

* Take {1, 2, 3, 4}, add one of the numbers 5, ..., 8 to it, and form two blocks from the remaining three numbers, i.e., one single element block and one two element block. Suppose first that these blocks are {5}, {7, 8}. Then (15) is satisfied iff [k.sub.1] = [k.sub.2] = [k.sub.0] and [k.sub.3] = [k.sub.4]; hence, [C.sub.3](A) = M. The same holds for {6}, {7, 8} and {7}, {5, 6} and {8}, {5, 6}. Now suppose we have the blocks {5}, {6, 8}. Then (15) is satisfied iff [k.sub.1] = [k.sub.3] = [k.sub.0] and [k.sub.2] + [k.sub.4] = 2[k.sub.0]; hence, 1 [less than or equal to] [C.sub.3](A) [less than or equal to] M. This holds as well for {6}, {5, 7}. Counting cases yields four times [C.sub.3](A) = M and twice 1 [less than or equal to] [C.sub.3](A) [less than or equal to] M.

* We start with {1, 2}, {3, 4} and take {5, 6, 7, 8} as third block. Then (15) is satisfied iff [k.sub.1] = [k.sub.2] and [k.sub.3] = [k.sub.4], hence [C.sub.3](A) = [M.sup.2]. The same holds for the partition A = {{1, 4}, {2, 3}, {5, 6, 7, 8}}.

* Take {1, 2}, {3, 4}, form a partition of {5, 6, 7, 8} into two blocks, and add one of these two blocks to one of the sets {1, 2}, {3, 4}. Consider first the resulting partition A = {{1, 2, 5, 6}, {3, 4}, {7, 8}}. Then (15) is satisfied iff [k.sub.3] = [k.sub.4]; hence, [C.sub.3](A) = [M.sup.3]. This holds as well for the partition {{1, 2}, {3, 4, 7, 8}, {5, 6}}. The partition A = {{1, 2, 7, 8}, {3, 4}, {5, 6}} requires [k.sub.1] = [k.sub.2] and [k.sub.3] = [k.sub.4]; hence, [C.sub.3](A) = [M.sup.2], and similarly for A = {{3, 4, 5, 6}, {1, 2}, {7, 8}}. If we have A = {{1, 2, 5, 7}, {3, 4}, {6, 8}}, then (15) is satisfied iff [k.sub.2] + [k.sub.4] = 2[k.sub.0] and [k.sub.3] = [k.sub.4], while [k.sub.1] is free. Thus, M [less than or equal to] [C.sub.3](A) [less than or equal to] [M.sup.2], and similarly for the three partitions {{1, 2, 6, 8}, {3, 4}, {5, 7}}, {{3, 4, 5, 7}, {1, 2}, {6, 8}}, and {{3, 4, 6, 8}, {1, 2}, {5, 7}}. If A = {{1, 2, 5, 8}, {3, 4}, {6, 7}}, then (15) is satisfied iff [k.sub.2] = [k.sub.3] = [k.sub.4]; hence, [C.sub.3](A) = [M.sup.2], and similarly for the remaining three partitions consisting of one four-element block and two two-element blocks.

If the resulting partition is A = {{1, 2, 5}, {3, 4}, {6, 7, 8}}, then (15) is satisfied iff [k.sub.2] = [k.sub.0] and [k.sub.3] = [k.sub.4]; hence, [C.sub.3](A) = [M.sup.2], and similarly if 6 is added to {1, 2}, or 7 or 8 is added to {3, 4}, giving a total of four partitions with [C.sub.3](A) = [M.sup.2]. Now consider A = {{1, 2, 7}, {3, 4}, {5, 6, 8}}. Then (15) holds iff [k.sub.3] = [k.sub.4] = [k.sub.0] and [k.sub.1] = [k.sub.2]; hence, [C.sub.3](A) = M, and similarly if 8 is added to {1, 2}, or 5 or 6 is added to {3, 4}. Next, consider A = {{1, 2, 5, 6, 7}, {3, 4}, {8}}. Then (15) is satisfied iff [k.sub.3] = [k.sub.4] = [k.sub.0]; hence, [C.sub.3](A) = [M.sup.2]. The same holds for the partitions where 7 is a single element block and {5, 6, 8} is added to {1, 2}, and where 5 or 6 is a single element block and the remaining numbers are added to {3, 4}. Finally, consider A = {{1, 2, 6, 7, 8}, {3, 4}, {5}}. Then (15) is equivalent to [k.sub.1] = [k.sub.0] and [k.sub.3] = [k.sub.4]; thus, [C.sub.3](A) = [M.sup.2], and similarly if 6 is a single element block and the remaining numbers are added to {1, 2}, or 7 or 8 is a single element block and the remaining numbers are added to {3, 4}. Counting cases yields twice [C.sub.3](A) = [M.sup.3], 18 times [C.sub.3](A) = [M.sup.2], four times M [less than or equal to] [C.sub.3](A) [less than or equal to] [M.sup.2], and four times [C.sub.3](A) = M.

Similar considerations apply if we start with {1, 4}, {2, 3}; hence, we must count each of the above quantities twice.

* Finally, take {1, 2}, {3, 4}, partition the remaining numbers {5, 6, 7, 8} into three blocks, and add one of them to {1, 2} and another one to {3, 4}. Suppose first that the resulting partition is {{1, 2, 5, 6}, {3, 4, 7}, {8}}. Then (15) is satisfied iff [k.sub.4] = [k.sub.0]; thus, [C.sub.3](A) = [M.sup.3]. The same holds if the roles of 7 and 8 are interchanged, or if the four-element block is {3, 4, 7, 8}, resulting in four possible partitions giving [C.sub.3](A) = [M.sup.3]. Now, suppose we have A = {{1, 2, 7, 8}, {3, 4, 5}, {6}}. Then (15) requires [k.sub.2] = [k.sub.0] and [k.sub.3] - [k.sub.4] = [k.sub.1] - [k.sub.0]; hence, M [less than or equal to] [C.sub.3](A) [less than or equal to] [M.sup.2], and there are three further partitions for which similar considerations hold. Next, consider A = {{1, 2, 5, 7}, {3, 4, 6}, {8}}. This implies [k.sub.4] = [k.sub.0] and [k.sub.2] + [k.sub.3] = 2[k.sub.0]; hence, M [less than or equal to] [C.sub.3](A) [less than or equal to] [M.sup.2], and the same holds for 7 further partitions. The partition A = {{1, 2, 5}, {3, 4, 8}, {6, 7}} implies [k.sub.2] = [k.sub.3] = [k.sub.0]; hence, [C.sub.3](A) = [M.sup.2], and the same holds if 5 is replaced by 6 and/or 8 by 7 (giving a total of four partitions). Consider A = {{1, 2, 7}, {3, 4, 8}, {5, 6}}. Then [k.sub.3] = [k.sub.0] and [k.sub.1] = [k.sub.2]; hence, [C.sub.3](A) = [M.sup.2], and similarly if 7 and 8 are interchanged, and as well if the roles of 5, 6 and 7, 8 are exchanged (giving again a total of four partitions). Next, take A = {{1, 2, 8}, {3, 4, 5}, {6, 7}}. Then (15) is equivalent to [k.sub.2] = [k.sub.3] and [k.sub.2] - [k.sub.4] = [k.sub.1] - [k.sub.0]; thus, M [less than or equal to] [C.sub.3](A) [less than or equal to] [M.sup.2], and similarly if both 8 is interchanged with 7 and 5 with 6. Finally, consider A = {{1, 2, 8}, {3, 4, 6}, {5, 7}}.
Then (15) is satisfied iff [k.sub.1] + [k.sub.3] = 2[k.sub.0] and [k.sub.3] + [k.sub.2] = [k.sub.4] + [k.sub.0]; thus, 1 [less than or equal to] [C.sub.3](A) [less than or equal to] [M.sup.2], and similarly for A = {{1, 2, 7}, {3, 4, 5}, {6, 8}}. Counting cases gives four times [C.sub.3](A) = [M.sup.3], 14 times M [less than or equal to] [C.sub.3](A) [less than or equal to] [M.sup.2], eight times [C.sub.3](A) = [M.sup.2], and twice 1 [less than or equal to] [C.sub.3](A) [less than or equal to] [M.sup.2].

Again, the same considerations as above are valid if we start from {1, 4}, {2, 3}. Collecting all the above cases, we conclude that

D(3) [less than or equal to] 4M + 2M + 4M + 2M + 2[M.sup.2] + 2(2[M.sup.3] + 18[M.sup.2] + 4[M.sup.2] + 4M) + 2(4[M.sup.3] + 14[M.sup.2] + 8[M.sup.2] + 2[M.sup.2]) = 12[M.sup.3] + 94[M.sup.2] + 20M,

(but D(3) [greater than or equal to] 12[M.sup.3] + 54[M.sup.2] + 52M + 8).
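The coefficient bookkeeping behind the bounds D(4) [less than or equal to] 14[M.sup.2] + 36M + 6 and D(3) [less than or equal to] 12[M.sup.3] + 94[M.sup.2] + 20M is easy to get wrong. The following sketch simply re-adds the case counts listed above and compares the totals with the collected polynomials; it uses nothing beyond the numbers stated in the text.

```python
# Re-add the case-by-case upper bounds for D(4) and D(3) and compare
# with the collected polynomials.  Two polynomials of degree at most 3
# that agree at four points are identical, so checking a few values
# of M suffices.
for M in (1, 2, 5, 10):
    D4 = (4                                # leading constant
          + 4 * M**2 + 12 * M              # first group of cases
          + 8 * M**2 + 8 * M               # second group
          + 4 * M + 2 * M                  # third group
          + 2 * M**2 + 10 * M + 2)         # fourth group
    assert D4 == 14 * M**2 + 36 * M + 6

    D3 = (4 * M + 2 * M                    # first group of cases
          + 4 * M + 2 * M                  # second group
          + 2 * M**2                       # third group
          + 2 * (2 * M**3 + 18 * M**2 + 4 * M**2 + 4 * M)
          + 2 * (4 * M**3 + 14 * M**2 + 8 * M**2 + 2 * M**2))
    assert D3 == 12 * M**3 + 94 * M**2 + 20 * M

print("coefficient check passed")
```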

Finally, let r = 2. Then we distinguish the following cases:

* Consider A = {{1, 2, 3, 4}, {5, 6, 7, 8}}. Then (15) is equivalent to [k.sub.1] - [k.sub.2] + [k.sub.3] - [k.sub.4] = 0 and, hence, [M.sup.2] [less than or equal to] [C.sub.2](A) [less than or equal to] [M.sup.3].

* Take the block {1, 2, 3, 4}, form a partition of {5, 6, 7, 8} into two blocks, and add one of these blocks to it. If a single element is added to {1, 2, 3, 4}, say 5, then [k.sub.0] - [k.sub.2] = [k.sub.3] - [k.sub.4]; hence, [M.sup.2] [less than or equal to] [C.sub.2](A) [less than or equal to] [M.sup.3], and there are four possibilities of doing this. If an element, say 5, is kept as a single element block, then [k.sub.1] = [k.sub.0]; hence, [C.sub.2](A) = [M.sup.3], and again there are four possibilities for this. If {5, 6} is added to {1, 2, 3, 4} and {7, 8} remains as a two-element block, then [k.sub.3] = [k.sub.4]; hence, [C.sub.2](A) = [M.sup.3], and the same holds with the roles of {5, 6} and {7, 8} interchanged, and furthermore if we replace both blocks by {5, 8}, {6, 7}. Now, for A = {{1, 2, 3, 4, 5, 7}, {6, 8}}, (15) is equivalent to [k.sub.2] + [k.sub.4] = 2[k.sub.0]; hence, [M.sup.2] [less than or equal to] [C.sub.2](A) [less than or equal to] [M.sup.3], and similarly for A = {{1, 2, 3, 4, 6, 8}, {5, 7}}. Altogether we have six times [M.sup.2] [less than or equal to] [C.sub.2](A) [less than or equal to] [M.sup.3] and eight times [C.sub.2](A) = [M.sup.3].

* Take {1, 2}, {3, 4}, form a partition of {5, 6, 7, 8} into two blocks, and add one block to {1, 2} and the other one to {3, 4}. If the resulting partition is A = {{1, 2, 5, 6}, {3, 4, 7, 8}}, then (15) is always satisfied; hence, [C.sub.2](A) = [M.sup.4]. If A = {{1, 2, 7, 8}, {3, 4, 5, 6}}, then (15) is equivalent to [k.sub.1] - [k.sub.2] = [k.sub.3] - [k.sub.4]; hence, [M.sup.2] [less than or equal to] [C.sub.2](A) [less than or equal to] [M.sup.3]. For A = {{1, 2, 5, 8}, {3, 4, 6, 7}}, (15) is equivalent to [k.sub.2] = [k.sub.4]; thus, [C.sub.2](A) = [M.sup.3], and similarly for A = {{1, 2, 6, 7}, {3, 4, 5, 8}}. If A = {{1, 2, 5, 7}, {3, 4, 6, 8}}, then [k.sub.2] + [k.sub.3] = 2[k.sub.0] and [M.sup.2] [less than or equal to] [C.sub.2](A) [less than or equal to] [M.sup.3]; and similarly for A = {{1, 2, 6, 8}, {3, 4, 5, 7}}. If A = {{1, 2, 5}, {3, 4, 6, 7, 8}}, then (15) means [k.sub.2] = [k.sub.0] and [C.sub.2](A) = [M.sup.3], and similarly if 6 is added to {1, 2}, or 7 or 8 is added to {3, 4}. If A = {{1, 2, 7}, {3, 4, 5, 6, 8}}, then (15) requires [k.sub.1] - [k.sub.2] = [k.sub.3] - [k.sub.0]; hence, [M.sup.2] [less than or equal to] [C.sub.2](A) [less than or equal to] [M.sup.3], and the same holds if 8 is added to {1, 2}, or 5 or 6 is added to {3, 4}. Counting cases yields once [C.sub.2](A) = [M.sup.4], seven times [M.sup.2] [less than or equal to] [C.sub.2](A) [less than or equal to] [M.sup.3], and six times [C.sub.2](A) = [M.sup.3].

Clearly, the same considerations apply if we start with {1, 4}, {2, 3}.

Summing up all possibilities, we obtain

D(2) [less than or equal to] [M.sup.3] + 6[M.sup.3] + 8[M.sup.3] + 2([M.sup.4] + 7[M.sup.3] + 6[M.sup.3]) = 2[M.sup.4] + 41[M.sup.3] (and D(2) [greater than or equal to] 2[M.sup.4] + 20[M.sup.3] + 21[M.sup.2]). Finally, we obtain

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII],

which is precisely the statement of Lemma 4.4.
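Each case in the proof reduces (15) to a small system of linear constraints on [k.sub.1], ..., [k.sub.4], and the count [C.sub.r](A) is the number of solutions with entries in the M-element frequency set. These constraint-to-count steps can be spot-checked by brute force. The sketch below is illustrative only: the frequency set T, the normalization [k.sub.0] = 0, and the helper count are hypothetical choices, with T "generic" in the sense that no nontrivial pair of its elements sums to 2[k.sub.0]; each lambda encodes a constraint system that appears verbatim in the case analysis above.

```python
from itertools import product

def count(T, k0, pred):
    """Number of tuples (k1, k2, k3, k4) in T^4 satisfying pred."""
    return sum(1 for k1, k2, k3, k4 in product(T, repeat=4)
               if pred(k1, k2, k3, k4, k0))

# Hypothetical generic frequency set with M = 4 elements and k0 = 0:
# the only pair of elements summing to 2*k0 = 0 is (0, 0).
T, k0 = [0, 1, 3, 9], 0
M = len(T)

# "k3 = k4 = k0" leaves k1, k2 free:  C = M^2.
assert count(T, k0, lambda k1, k2, k3, k4, k0: k3 == k4 == k0) == M**2

# "k1 = k2 = k3 = k4" leaves one common value free:  C = M.
assert count(T, k0, lambda k1, k2, k3, k4, k0: k1 == k2 == k3 == k4) == M

# "k1 = k3 = k0 and k2 + k4 = 2*k0":  1 <= C <= M, with C = 1 for a
# generic frequency set.
c = count(T, k0,
          lambda k1, k2, k3, k4, k0: k1 == k3 == k0 and k2 + k4 == 2 * k0)
assert c == 1

# "k3 = k4" alone leaves k1, k2 and the common value free:  C = M^3.
assert count(T, k0, lambda k1, k2, k3, k4, k0: k3 == k4) == M**3

# "k1 - k2 + k3 - k4 = 0":  a T-dependent count with M^2 <= C <= M^3.
c = count(T, k0, lambda k1, k2, k3, k4, k0: k1 - k2 + k3 - k4 == 0)
assert M**2 <= c <= M**3

print("all constraint counts consistent")
```

Swapping in a different generic set T changes only the T-dependent counts, and only within the stated bounds.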

ACKNOWLEDGEMENT

I wish to express my gratitude to Roman Vershynin, who suggested the counterexample worked out in this note.

References

[1] R. Baraniuk, M. Davenport, R. DeVore, and M. Wakin, A simple proof of the restricted isometry property for random matrices, Constr. Approx., to appear.

[2] E. Candes, J. Romberg, and T. Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information, IEEE Trans. Inform. Theory, 52(2), 489-509, 2006.

[3] E. Candes, J. Romberg, and T. Tao, Stable signal recovery from incomplete and inaccurate measurements. Comm. Pure Appl. Math., 59(8), 1207-1223, 2006.

[4] E. Candes and T. Tao, Near optimal signal recovery from random projections: universal encoding strategies?, IEEE Trans. Inform. Theory, 52(12), 5406-5425, 2006.

[5] S. S. Chen, D. L. Donoho, and M. A. Saunders, Atomic decomposition by Basis Pursuit, SIAM J. Sci. Comput., 20(1), 33-61, 1999.

[6] A. Cohen, W. Dahmen, and R. DeVore, Compressed sensing and best k-term approximation, preprint, 2006.

[7] D. L. Donoho, Compressed sensing, IEEE Trans. Inform. Theory, 52(4), 1289-1306, 2006.

[8] D. L. Donoho, For most large underdetermined systems of linear equations the minimal [l.sub.1]-norm solution is also the sparsest solution, Comm. Pure Appl. Math., 59(6), 797-829, 2006.

[9] A. C. Gilbert and J. A. Tropp, Signal recovery from random measurements via orthogonal matching pursuit, IEEE Trans. Inform. Theory, 53(12), 4655-4666, 2007.

[10] S. Kunis and H. Rauhut, Random sampling of sparse trigonometric polynomials II--orthogonal matching pursuit versus basis pursuit, Found. Comput. Math., to appear.

[11] D. Needell and R. Vershynin, Uniform uncertainty principle and signal recovery via regularized orthogonal matching pursuit, preprint, 2007.

[12] H. Rauhut, Stability results for random sampling of sparse trigonometric polynomials, IEEE Trans. Inform. Theory, to appear.

[13] H. Rauhut, Random sampling of sparse trigonometric polynomials, Appl. Comput. Harmon. Anal., 22(1), 16-42, 2007.

[14] M. Rudelson and R. Vershynin, Sparse reconstruction by convex relaxation: Fourier and Gaussian measurements. In Proc. CISS 2006 (40th Annual Conference on Information Sciences and Systems), 2006.

[15] M. Rudelson and R. Vershynin, On sparse reconstruction from Fourier and Gaussian measurements, Comm. Pure Appl. Math., to appear.

[16] J. A. Tropp, Greed is good: algorithmic results for sparse approximation, IEEE Trans. Inform. Theory, 50(10), 2231-2242, 2004.

Holger Rauhut

NuHAG, Faculty of Mathematics, University of Vienna,

Nordbergstr. 15, A-1090 Wien, Austria

holger.rauhut@univie.ac.at

Published in Sampling Theory in Signal and Image Processing, May 1, 2008.