
Orthogonal least squares solutions for linear operators.

Abstract. This paper solves the problem of finding, in a least squares sense, the coefficients of a series expansion of a function in terms of a chosen orthogonal basis, not from knowledge of the function itself but from the action of a linear operator upon it. The coefficients are evaluated by inner products with a set of functions related to the orthogonal basis through the adjoint of the linear operator. Examples for both differential and integral operators, as well as related properties, are given.

Key words. orthogonal polynomials, linear operators, gradient operator, Radon transform

AMS(MOS) Subject Classifications. 33C90, 33C47, 42C05, 42C15, 47A05

1. Introduction. Orthogonal polynomials have properties that are very useful for solving mathematical and physical problems. They provide a convenient method for expanding a function in a series of linearly independent terms. Moreover, in many physical problems the expansion coefficients can be interpreted as physical magnitudes with a specific meaning, which is why, among all the orthogonal bases in which a function can be expanded, only one is usually chosen. For instance, the Zernike polynomials are a set of orthogonal polynomials that arise in the expansion of a wavefront function for optical systems with circular pupils [1]. Expansion coefficients in Zernike's basis have a concrete physical meaning related to the shape of the distortion of a wavefront from an ideal spherical one. Some of them even have proper names, such as astigmatism, coma, and defocus.

On the other hand, in some physical experiments the available information concerning the function (whose coefficients we want to determine) is not the function itself but one or more indirect magnitudes that can be represented as differential or integral operators acting upon the function: for instance, local slopes (gradient operator) or line integrals (Radon transform). The problem is how to evaluate the expansion coefficients in the chosen orthogonal basis, since the action of the linear operator upon the orthogonal polynomials gives rise to a new set of functions that, in general, are no longer orthogonal, and the least squares approximation of the function cannot be directly performed in an orthogonal sense.

We will see that this problem is solved by using the generalized Green's theorem and the construction of the adjoint operator. Examples and related properties will be shown.

2. Statement of the problem. Let $f(\mathbf{r})$ be a function of $n$ variables, $\mathbf{r} = (x_1, \dots, x_n)$, and let $Q_N(\mathbf{r})$ be an approximating polynomial given by

$$Q_N(\mathbf{r}) = \sum_{k=1}^{N} a_k P_k(\mathbf{r}),$$

where $\{P_k(\mathbf{r})\}$ represents a set of orthogonal polynomials in an $n$-dimensional volume $\Omega$ bounded by a surface $C$ under the following inner product:

$$(P_j, P_k) = \int_\Omega P_j(\mathbf{r})\, P_k(\mathbf{r})\, d\omega = \delta_{jk}.$$
Since the polynomials $P_k(\mathbf{r})$ are orthogonal, the expansion coefficients $a_k$ that minimize the norm of the difference between $f(\mathbf{r})$ and $Q_N(\mathbf{r})$,

$$\|f(\mathbf{r}) - Q_N(\mathbf{r})\|^2 = \int_\Omega \big(f(\mathbf{r}) - Q_N(\mathbf{r})\big)^2\, d\omega,$$

can be obtained by simply taking the inner product of $f(\mathbf{r})$ with the corresponding polynomial $P_k(\mathbf{r})$:

(2.1) $a_k = \int_\Omega P_k(\mathbf{r})\, f(\mathbf{r})\, d\omega.$

On the other hand, if $L$ is a linear operator (or a set of linear operators), the expansion coefficients $a_k$ of the approximating function cannot, in general, be obtained in the orthogonal sense defined in (2.1) (1), since $\{L(P_k(\mathbf{r}))\}$ is no longer a set of orthogonal functions.
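As a numerical illustration of (2.1) (our own sketch, not part of the paper; the function $f(x) = e^x$ and the Legendre basis are assumptions chosen for concreteness), the following fragment evaluates each expansion coefficient by an inner product with an orthonormal Legendre polynomial on $[-1, 1]$ and checks that the truncated series approximates $f$:

```python
import numpy as np
from numpy.polynomial import legendre as L

# Orthonormal Legendre polynomials on [-1, 1]:
# phat_n(x) = sqrt((2n + 1) / 2) * P_n(x), so <phat_m, phat_n> = delta_mn.
def phat(n, x):
    c = np.zeros(n + 1)
    c[n] = 1.0
    return np.sqrt((2 * n + 1) / 2) * L.legval(x, c)

def coeff(f, n, npts=200):
    # a_n = int_{-1}^{1} phat_n(x) f(x) dx, as in (2.1),
    # evaluated by Gauss-Legendre quadrature
    x, w = L.leggauss(npts)
    return np.sum(w * phat(n, x) * f(x))

f = lambda x: np.exp(x)
N = 10
a = [coeff(f, n) for n in range(N)]

# Reconstruct Q_N(x) = sum_n a_n phat_n(x) and check it approximates f
xs = np.linspace(-1, 1, 101)
QN = sum(a[n] * phat(n, xs) for n in range(N))
err = np.max(np.abs(QN - f(xs)))
print(err)   # the truncated series converges rapidly for a smooth f
```

For a smooth function the coefficients decay fast, so a short truncated series already reproduces $f$ to high accuracy.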

To solve the problem in the orthogonal sense explained above, we will make use of the generalized Green's theorem [2] and the adjoint operator of $L$, denoted $\bar{L}$. Thus, if $f$ and $g$ are functions of $\mathbf{r}$, Green's theorem establishes that

(2.2) $g\, L(f) - f\, \bar{L}(g) = \nabla \cdot \mathbf{V},$
where $\mathbf{V}$ is a generalized vector depending not only on $f$ and $g$ but also on their partial derivatives for differential linear operators, and it can be set to zero for integral operators [2]. Integrating both sides of (2.2) over $\Omega$ and applying the divergence theorem [3] to the right-hand side, we get

$$\int_\Omega g\, L(f)\, d\omega - \int_\Omega f\, \bar{L}(g)\, d\omega = \oint_C \mathbf{V} \cdot d\mathbf{s},$$
$d\mathbf{s}$ being the outward drawn vector element of area. From this expression and (2.1) it can be deduced that, if there exists a set of functions $\{g_k(\mathbf{r})\}$ such that $\bar{L}(g_k(\mathbf{r})) = P_k(\mathbf{r})$, then the expansion coefficient $a_k$ can be evaluated by the inner product of $L(f(\mathbf{r}))$ and $g_k(\mathbf{r})$, plus a surface integral containing boundary conditions in some cases of differential operators:

$$a_k = \int_\Omega g_k(\mathbf{r})\, L(f(\mathbf{r}))\, d\omega - \oint_C \mathbf{V} \cdot d\mathbf{s}.$$
3. Examples. In what follows, the problem is illustrated with two different kinds of linear operators: a differential one, the gradient of a function, and an integral one, the Radon transform.

3.1. The gradient operator. If the linear operator is the gradient of the function,

$$L(f(\mathbf{r})) = \nabla f(\mathbf{r}),$$

the above result allows us to deduce that, if there exists a set of vector functions $\{\mathbf{g}_k(\mathbf{r})\}$ such that

(3.1) $\nabla \cdot \mathbf{g}_k(\mathbf{r}) = -P_k(\mathbf{r})$ for $\mathbf{r} \in \Omega$, $\qquad \mathbf{g}_k(\mathbf{r}) \cdot \mathbf{n} = 0$ for $\mathbf{r} \in C$,

where $\mathbf{n}$ is the unit outward normal vector to $C$, then the expansion coefficients can be evaluated independently, in the orthogonal sense, as

$$a_k = \int_\Omega \mathbf{g}_k(\mathbf{r}) \cdot \nabla f(\mathbf{r})\, d\omega.$$

Examples of vector functions orthogonal to the gradient of Zernike polynomials are obtained in a different mathematical context in references [6] and [7].
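A one-dimensional analogue of this construction can be sketched numerically (our own illustration; the function, basis, and index below are assumptions, not taken from the paper). On $[-1, 1]$, with $d/dx$ in place of the gradient, the choice $g_k(x) = -\int_{-1}^{x} \hat p_k(t)\, dt$ satisfies the analogue of (3.1), and the coefficient $a_k$ is recovered from slope data alone:

```python
import numpy as np
from numpy.polynomial import legendre as L

# With orthonormal Legendre polynomials phat_k on [-1, 1], choose
#   g_k(x) = -int_{-1}^{x} phat_k(t) dt .
# Then g_k' = -phat_k (the 1-D version of (3.1)) and, for k >= 1,
# g_k(-1) = g_k(1) = 0, so integration by parts gives
#   int g_k f' dx = int phat_k f dx = a_k .

def phat_coeffs(k):
    c = np.zeros(k + 1)
    c[k] = np.sqrt((2 * k + 1) / 2)
    return c

x, w = L.leggauss(200)              # Gauss-Legendre nodes and weights

f  = lambda t: np.sin(3 * t)        # the function we never measure directly
df = lambda t: 3 * np.cos(3 * t)    # the "measured" slope data L(f) = f'

k = 3
gk = -L.legval(x, L.legint(phat_coeffs(k), lbnd=-1))   # g_k at the nodes

a_from_slope = np.sum(w * gk * df(x))                          # <g_k, f'>
a_direct     = np.sum(w * L.legval(x, phat_coeffs(k)) * f(x))  # <phat_k, f>
print(a_from_slope, a_direct)       # the two coincide
```

The boundary term of the integration by parts vanishes because $\int_{-1}^{1} \hat p_k = 0$ for $k \ge 1$, mirroring the condition $\mathbf{g}_k \cdot \mathbf{n} = 0$ on $C$.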

3.1.1. Properties of the vector functions. In general, the set of partial differential equations given in (3.1) does not have a unique solution for the set of vector functions $\mathbf{g}_k(\mathbf{r})$. The choice of the set of functions is, in principle, irrelevant, and therefore "the easier to solve the better" is a criterion for finding the vector functions. Nevertheless, if $\nabla f(\mathbf{r})$ represents a physical magnitude, the choice of the vector functions can improve the results. For instance, let us see what happens if $\nabla f(\mathbf{r})$ is affected by an uncorrelated noise vector $\mathbf{v}$ with zero mean and covariance $\sigma^2 \delta(\mathbf{r} - \mathbf{r}')$.

Let us denote by $\hat{a}_k$ the estimate of the coefficients when evaluated by

$$\hat{a}_k = \int_\Omega \mathbf{g}_k(\mathbf{r}) \cdot \big(\nabla f(\mathbf{r}) + \mathbf{v}\big)\, d\omega.$$

Then, by taking the ensemble-averaged mean square error, the variance associated with the estimated coefficient is given by

(3.2) $\langle (\hat{a}_k - a_k)^2 \rangle = \sigma^2 \int_\Omega |\mathbf{g}_k(\mathbf{r})|^2\, d\omega.$
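The variance formula (3.2) can be checked by a small Monte Carlo experiment in one dimension (our own sketch under assumed parameters, not from the paper; any square-integrable weight function serves the purpose):

```python
import numpy as np

# 1-D Monte Carlo check of (3.2): add white noise v with covariance
# sigma^2 delta(x - x') to the data and verify that the variance of the
# resulting noise term in the estimated coefficient is sigma^2 int g^2 dx.
rng = np.random.default_rng(0)

n = 400
h = 2.0 / n                                     # cell width on [-1, 1]
x = np.linspace(-1.0, 1.0, n, endpoint=False) + h / 2   # midpoint grid

g = 1.0 - x**2          # an assumed, square-integrable weight function g_k
sigma = 0.5

trials = 10000
# Discretized delta-correlated noise: variance sigma^2 / h per grid cell.
v = rng.normal(0.0, sigma / np.sqrt(h), size=(trials, n))
noise_term = (v * g).sum(axis=1) * h            # int g v dx, one per trial

var_mc     = noise_term.var()
var_theory = sigma**2 * np.sum(g**2) * h        # sigma^2 int g^2 dx
print(var_mc, var_theory)                       # agree to a few percent
```

Since the variance scales with $\int |\mathbf{g}_k|^2$, choosing the smallest-norm solution of (3.1) directly minimizes the noise propagated into the coefficients.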
Thus, the vector functions minimizing (3.2) yield a better estimate of $f(\mathbf{r})$. This can be achieved by demanding that $\mathbf{g}_k(\mathbf{r})$ satisfy not only (3.1) but also be irrotational [8], i.e., that there exists a scalar function $\xi_k(\mathbf{r})$ such that

$$\mathbf{g}_k(\mathbf{r}) = \nabla \xi_k(\mathbf{r}).$$

Thus the vector functions can be found by solving the following Poisson equation with Neumann boundary condition

(3.3) $\nabla^2 \xi_k(\mathbf{r}) = -P_k(\mathbf{r})$ for $\mathbf{r} \in \Omega$, $\qquad \nabla \xi_k(\mathbf{r}) \cdot \mathbf{n} = 0$ for $\mathbf{r} \in C$.

As an example, the Zernike polynomial formulas, as well as the related $\xi_k(\mathbf{r})$ functions obeying (3.3), are presented:


(the usual complex and two-index notation has been used; $c_{n,m}$ are normalization constants [1] and $\delta_{0,m} = 1$ if $m = 0$, $\delta_{0,m} = 0$ if $m \neq 0$).

It must be stressed that, depending on the chosen set of functions $\{P_k(\mathbf{r})\}$ and the characteristics of the integration domain $\Omega$, it is not always possible to obtain the minimum norm solution for the whole set of vector functions, nor polynomial solutions as in the previous case. For instance, for orthogonal polynomials in an annulus, $C$ consists of the inner and outer circumferences, so $\nabla \xi_k(\mathbf{r}) \cdot \mathbf{n} = 0$ for $\mathbf{r} \in C$ represents two boundary conditions, and it is not possible to find polynomial solutions for all the basis elements, since terms of the form $\log r$, $r^{-m} \sin(m\theta)$, $r^{-m} \cos(m\theta)$ are needed to fulfill both conditions. For a square or rectangular domain and an orthogonal basis formed by the Cartesian product of orthogonal polynomials in one dimension, it is not possible to get the minimum norm solution for all the basis elements because four boundary conditions are needed.

3.2. The Radon transform in two variables. It is well known that the Radon transform can be considered as a linear operator which, in polar coordinates, can be written as

$$Rf(p, \theta) = \int_0^{2\pi} \int_0^a K(r, \phi; p, \theta)\, f(r, \phi)\, r\, dr\, d\phi,$$
where $a \in (0, \infty)$ and

$$K(r, \phi; p, \theta) = \delta(p - r \cos\phi \cos\theta - r \sin\phi \sin\theta),$$

thus representing the integral of the function $f(r, \phi)$ over the line defined in normal form by $p$ and $\theta$ [4]:

$$p - x \cos\theta - y \sin\theta = 0.$$
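A minimal numerical sketch of this line integral (our own illustration, not from the paper): for the indicator function of the unit disk, the Radon transform is the chord length $2\sqrt{1 - p^2}$, which a direct discretization reproduces:

```python
import numpy as np

# The Radon transform value Rf(p, theta): integrate f along the line
# p = x cos(theta) + y sin(theta), parametrized by arc length s.
def radon(f, p, theta, smax=1.0, n=4001):
    s = np.linspace(-smax, smax, n)
    x = p * np.cos(theta) - s * np.sin(theta)
    y = p * np.sin(theta) + s * np.cos(theta)
    vals = f(x, y)
    return np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(s))  # trapezoid rule

# Indicator function of the unit disk: its Radon transform is the chord
# length 2 sqrt(1 - p^2), independent of theta.
disk = lambda x, y: (x**2 + y**2 <= 1.0).astype(float)

p = 0.6
val = radon(disk, p, theta=0.3)
print(val, 2 * np.sqrt(1 - p**2))
```

The independence from $\theta$ reflects the rotational symmetry of the disk; for a general $f$ the data depend on both $p$ and $\theta$.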

For integral operators, $\mathbf{V}$ can be set to zero [2], and thus, with some algebra, it can easily be shown that the adjoint operator $\bar{R}$ has the kernel


$$\bar{K}(r, \phi; p, \theta) = \frac{p}{r}\, K(p, \theta; r, \phi) = \frac{p}{r}\, \delta(r - p \cos\phi \cos\theta - p \sin\phi \sin\theta),$$

representing the integral over the circumference that is tangent to the line defined by p and [theta] and passes through the origin of coordinates (the tangency point and the origin define the extremes of a diameter).

In this way, if there exists a set of functions $h_k(\mathbf{r})$ such that

$$Rh_k(p, \theta) = p\, P_k(p, \theta),$$



For $a = 1$ and choosing the Zernike polynomials as the orthogonal basis, the $h_k(\mathbf{r})$ functions are related to Chebyshev polynomials of the second kind [4] (2).

As an example, the first six orthogonal polynomials of Appell and Kampé de Fériet [5] for the unit circle and the corresponding $h_k(\mathbf{r})$ functions are shown:


4. Conclusions. By the use of the generalized Green's theorem it is possible to find sets of functions that are orthogonal to the action of linear operators upon a set of orthogonal polynomials. This is equivalent to solving linear equations, both differential and integral, in an orthogonal least squares sense without solving an eigenfunction problem.

For the gradient operator, the set of functions orthogonal to the gradient of the elements of the chosen basis is not unique, and it has been shown that in some cases it is possible to find a unique set, obeying additional conditions, that minimizes the norm of the difference between the function and its approximating polynomial when the gradient of the function is affected by Gaussian noise of zero mean that is spatially uncorrelated.

For integral operators, such as the Radon transform, no boundary conditions are needed, so the set is unique and depends only on the chosen basis and the chosen integration domain. The method can be extended to inner products with weights different from unity, as well as to orthogonal functions in general.

Acknowledgements. This work was supported by the Ministerio de Educación y Ciencia, Grant No. AYA 2004-07773-C02-02.


[1] M. BORN AND E. WOLF, Principles of Optics, Pergamon Press, New York, 1986.

[2] P. M. MORSE AND H. FESHBACH, Methods of Theoretical Physics, McGraw-Hill Book Co., Inc., New York-Toronto-London, 1953, pp. 999-1978.

[3] H. F. WEINBERGER, Partial Differential Equations, Blaisdell Publishing, New York, 1965.

[4] R. B. MARR, On the reconstruction of a function on a circular domain from a sampling of its line integrals, J. Math. Anal. Appl., 45 (1974), pp. 357-374.

[5] P. APPELL AND J. KAMPÉ DE FÉRIET, Fonctions hypergéométriques et hypersphériques. Polynômes d'Hermite, Gauthier-Villars, Paris, 1926.

[6] V. P. AKSENOV AND YU. N. ISAEV, Analytical representation of the phase and its mode components reconstructed according to the wave front slopes, Opt. Lett., 17 (1992), 1180.

[7] E. ACOSTA, S. BARA, M. A. RAMA AND S. RIOS, Determination of phase mode components in terms of local wave-front slopes: an analytical approach, Opt. Lett., 20 (1995), 1083.

[8] C. J. SOLOMON, S. RIOS, E. ACOSTA AND S. BARA, Modal wavefront projectors of minimum error norm, Opt. Commun., 155 (1998), 251.

(1) Unless we wanted the approximating function to be defined in terms of the eigenfunctions of the linear operator.

(2) The orthogonality between the Radon transforms of the Zernike polynomials and the Chebyshev polynomials requires integration in the radial coordinate between -1 and 1.

EVA ACOSTA (†)

* Received December 1, 2004. Accepted for publication May 15, 2005. Recommended by J. Arvesú.

(†) Departamento de Física Aplicada, Facultad de Física, Universidad de Santiago de Compostela, 15782 Spain.
Author: Acosta, Eva
Publication: Electronic Transactions on Numerical Analysis
Date: Aug 1, 2006