
Lévy-based error prediction in circular systematic sampling.

INTRODUCTION

A long-standing problem in stereology is variance estimation in systematic sampling. One class of problems involves estimation of an integral of the form

$$Q = \int_0^{2\pi} x(\theta)\, d\theta\,, \qquad (1)$$

where $x(\theta)$ is an integrable function on $[0, 2\pi)$, called the measurement function. The estimator typically considered is based on circular systematic sampling and takes the form

$$\hat{Q}_n = \frac{2\pi}{n} \sum_{i=0}^{n-1} x\Big(\Theta + \frac{2\pi i}{n}\Big)\,, \qquad n \ge 1,$$

where $\Theta$ is uniformly distributed in $[0, 2\pi/n)$. For instance, if $Y$ is a bounded convex planar set containing the origin $O$, examples of Eq. 1 are

$$A(Y) = \frac{1}{2} \int_0^{2\pi} r(\theta)^2\, d\theta\,, \qquad B(Y) = \int_0^{2\pi} h(\theta)\, d\theta\,,$$

the area and the boundary length of $Y$,

where $r(\theta)$ and $h(\theta)$ are the radial function and the support function of $Y$ in direction $\theta$, respectively. The geometric identity (Eq. 1) is in these cases a consequence of polar decomposition in the plane and an identity for mean width (Schneider, 1993, Eq. 5.3.2). If instead $Y$ is a bounded convex spatial set containing $O$, the volume and the surface area of $Y$ may be estimated by a two-step procedure which involves circular systematic sampling in a section through $O$ and the use of the cubed radial function or the squared support function (Gundersen, 1988; Cruz-Orive, 2005). Yet another example is volume estimation by the so-called vertical rotator (Jensen and Gundersen, 1993).
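To make the estimator concrete, here is a minimal numerical sketch. The radial function $r(\theta) = 1 + 0.2\cos(2\theta)$ and the area measurement function $x(\theta) = r(\theta)^2/2$ are illustrative choices of ours, not taken from the paper:

```python
import math

def Q_hat(x, n, theta0=0.0):
    # Circular systematic sampling estimator: (2*pi/n) * sum_i x(theta0 + 2*pi*i/n)
    return (2 * math.pi / n) * sum(x(theta0 + 2 * math.pi * i / n) for i in range(n))

def x(theta):
    # Area measurement function x = r^2/2 for r(theta) = 1 + 0.2*cos(2*theta)
    r = 1 + 0.2 * math.cos(2 * theta)
    return r * r / 2

# Exact area: (1/2) * integral of r^2 = pi * (1 + 0.2**2 / 2)
Q_exact = 1.02 * math.pi

# x contains only frequencies 0, 2 and 4, so for n = 8 no sampled frequency
# (a multiple of 8) is present and the estimator is exact for every Theta,
# while n = 2 hits the frequency-2 term and carries a systematic error.
print(abs(Q_hat(x, 8, 0.3) - Q_exact))   # numerically zero
print(abs(Q_hat(x, 2, 0.0) - Q_exact))   # order one
```

This illustrates the general fact used later: only Fourier frequencies that are multiples of $n$ contribute to the error of $\hat{Q}_n$.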

Gual-Arnau and Cruz-Orive (2000) developed a design-based procedure for approximating the variance of $\hat{Q}_n$, based on modelling the covariogram of $x(\theta)$ by a polynomial model. Hobolth and Jensen (2002) consider a model-based procedure, where the measurement function is assumed to be a realization of a periodic stationary Gaussian process $X = \{X(\theta) : \theta \in [0, 2\pi)\}$. It is shown in Hobolth and Jensen (2002) that the covariogram model considered by Gual-Arnau and Cruz-Orive (2000) is a special case of a $p$-order covariance model for the process $X$ in the model-based set-up.

The $p$-order covariance model is given by

$$c(\theta) = \lambda_0 + \sum_{s=2}^{\infty} \big(\alpha + \beta(s^{2p} - 2^{2p})\big)^{-1} \cos(s\theta)\,, \qquad (2)$$

where the model parameters satisfy $p > 1/2$, $\alpha, \beta > 0$. In Hobolth et al. (2003), this parametric covariance function has been used in the modelling of the radial function of a random star-shaped planar particle. In this case, $p$ determines the smoothness of the particle's boundary, while $\alpha$ and $\beta$ determine the 'global' and the 'local' shape of the particle, respectively. (Note that in Eq. 2, $\lambda_1$ is set to zero, which ensures that the reference point of the particle is approximately the centre of mass.) The model-based counterpart of the design-based methodology of Gual-Arnau and Cruz-Orive (2000) was further developed in Jonsdottir et al. (2006), where the general form of the $p$-order covariance model (Eq. 2) was used to obtain a more accurate approximation of the prediction error $E[(\hat{Q}_n - Q)^2]$.

In Hobolth and Jensen (2002) and Jonsdottir et al. (2006), the process $X$ is assumed to be Gaussian. Motivated by the fact that powers of the radial function and the support function are used in practice, we will in this paper consider non-Gaussian models, obtained as a kernel smoothing of a so-called Lévy basis. As we will show, it is possible under the Lévy-based model to derive the distribution of the error predictor $\hat{Q}_n - Q$, which may be markedly non-Gaussian for the moderate sizes of $n$ used in practice.

Lévy-based modelling has become popular in recent years, e.g., in the modelling of turbulent flows, spatio-temporal growth, spatial point processes and random fields (Barndorff-Nielsen and Schmiegel, 2004; Jonsdottir et al., 2008; Hellmund et al., 2008; Jonsdottir et al., 2013). More specifically, we will consider stochastic processes of the form

$$X(\theta) = \mu + \int_0^{2\pi} k(\theta - \phi) Z(d\phi)\,, \qquad \theta \in [0, 2\pi),$$

where $\mu$ determines the mean of the process, $Z$ is a homogeneous and factorizable Lévy basis on $[0, 2\pi)$ and $k$ is a deterministic kernel function. In principle, any covariance model, including the $p$-order covariance model, can be induced under this modelling framework by assuming a specific form of the kernel function (see the next section). Under the $p$-order model, it is easy to control the local and global fluctuations of the stochastic process $X$. The Lévy-based models with $p$-order covariance thus constitute a flexible and tractable model class. In particular, this model class has more structure than the non-Gaussian models considered in Hobolth et al. (2003), and this allows us to derive distributional results.

The remainder of the paper is organised as follows. First, a theoretical background for stationary periodic processes with period $2\pi$, based on kernel smoothing of a Lévy basis, is given. Then, estimation of $E[(\hat{Q}_n - Q)^2]$ under the general Lévy-based model is discussed. The distribution of the error predictor $\hat{Q}_n - Q$ under the Lévy-based model is derived, and it is shown how to estimate this distribution. An example of random particles simulated from a Lévy-based model is given, together with the distribution of the $n$-point area estimator of these particles. Finally, a discussion is provided. Some technical derivations are deferred to an appendix.

LÉVY-BASED STOCHASTIC PROCESSES ON THE CIRCLE

This section provides an overview of stationary periodic processes on $[0, 2\pi)$ based on integration with respect to a Lévy basis. For further details on the general theory of Lévy bases, in particular integration with respect to a Lévy basis, the reader is referred to Barndorff-Nielsen and Schmiegel (2004) and Hellmund et al. (2008).

Let $X = \{X(\theta) : \theta \in [0, 2\pi)\}$ be a $2\pi$-periodic stationary stochastic process on $[0, 2\pi)$, given by

$$X(\theta) = \mu + \int_0^{2\pi} k(\theta - \phi) Z(d\phi)\,, \qquad \theta \in [0, 2\pi), \qquad (3)$$

where $\mu$ determines the mean of the process, $Z$ is a homogeneous and factorizable Lévy basis on $[0, 2\pi)$ and $k$ is an even periodic kernel function with period $2\pi$ and Fourier representation

$$k(\theta) = \xi_0 + \sum_{s=1}^{\infty} \xi_s \cos(s\theta)\,. \qquad (4)$$

The model (Eq. 3) is the continuous analogue of the discrete model

$$X(\theta) = \mu + \sum_{\phi} k(\theta - \phi) Z(\phi)\,,$$

where the sum is over an equally spaced set of angles and the random variables $Z(\phi)$ are independent and identically distributed, the common distribution being infinitely divisible. The integral in Eq. 3 is formally defined as a limit in probability (Rajput and Rosinski, 1989). A spatio-temporal version of Eq. 3 has previously been considered in Jonsdottir et al. (2008).
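The discrete model is straightforward to simulate. The sketch below uses a centred Gamma Lévy basis (Gamma increments $\Gamma(\eta\Delta, \sqrt{\eta})$ per cell of size $\Delta$, so that the spot variable has unit variance) and an illustrative low-order kernel; the kernel coefficients and parameter values are our own choices, not from the paper:

```python
import math
import random

def simulate_X(theta, mu, xi, eta, m=100, rng=random):
    """Discrete analogue of Eq. 3: X(theta) = mu + sum_phi k(theta - phi) Z(phi),
    with i.i.d. centred Gamma increments Z(phi) of variance 2*pi/m."""
    d = 2 * math.pi / m                          # cell size
    shape, scale = eta * d, 1 / math.sqrt(eta)   # Gamma(eta*d, rate sqrt(eta))
    X = mu
    for j in range(m):
        phi = (j + 0.5) * d
        Z = rng.gammavariate(shape, scale) - shape * scale   # centred increment
        k = xi[0] + sum(xi[s] * math.cos(s * (theta - phi)) for s in range(1, len(xi)))
        X += k * Z
    return X

rng = random.Random(1)
draws = [simulate_X(0.0, 5.0, [0.1, 0.3, 0.2], 4.0, rng=rng) for _ in range(500)]
print(sum(draws) / len(draws))   # close to mu = 5.0, since the increments are centred
```

Because the Gamma distribution is infinitely divisible, refining the grid (larger `m`) leaves the distribution of each `Z`-cell consistent with the same underlying basis.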

The Lévy basis $Z$ is extended by $Z(A + 2\pi m) = Z(A)$ for all $m \in \mathbb{Z}$ and all Borel sets $A \in \mathcal{B}([0, 2\pi))$. A Lévy basis has the property that $Z(A_1), \ldots, Z(A_n)$ are independent when $A_1, \ldots, A_n \in \mathcal{B}([0, 2\pi))$ are disjoint, and $Z(A)$ is infinitely divisible for any $A \in \mathcal{B}([0, 2\pi))$. The assumption of homogeneity implies that all the finite-dimensional distributions of $Z$ are translation invariant.

If $Z$ is Gaussian, the integral in Eq. 3 exists if $k$ is $L^2$-integrable with respect to the Lebesgue measure on $[0, 2\pi)$. When $Z$ is a so-called Lévy jump basis (e.g., a Gamma or inverse Gaussian Lévy basis), the integral exists if $k$ is integrable with respect to the Lebesgue measure on $[0, 2\pi)$ and if $\int_{\mathbb{R}} |r|\, V(dr) < \infty$, where $V$ is the Lévy measure associated with $Z$. These results follow from Hellmund et al. (2008, Lemma 1).

An important entity associated with a Lévy basis is its spot variable $Z'$, which is infinitely divisible. Without loss of generality, we will in what follows assume that the spot variable is centred, $Z' = W - E(W)$, where $W$ is an infinitely divisible random variable.

The following theorem characterizes the distribution of the stochastic variable $X(\theta)$.

Theorem 1. The stochastic variable $X(\theta)$ has the cumulant generating function

$$K_X(t) = t\big(\mu - 2\pi \xi_0 E(W)\big) + \int_0^{2\pi} K_W(t k(\theta))\, d\theta\,, \qquad (5)$$

where $K_W(t)$ is the cumulant generating function of $W$. Moreover, the derivatives of $K_X(t)$ are given by (when they exist)

$$K_X'(t) = \mu - 2\pi \xi_0 E(W) + \int_0^{2\pi} k(\theta)\, K_W'(t k(\theta))\, d\theta\,, \qquad K_X^{(r)}(t) = \int_0^{2\pi} k(\theta)^r K_W^{(r)}(t k(\theta))\, d\theta\,, \quad r \ge 2, \qquad (6)$$

where $K_W^{(r)}(t)$ denotes the $r$-th derivative of $K_W(t)$.

Proof. The result is obtained by using that the cumulant generating function of the integral $f \ast Z = \int_0^{2\pi} f(\theta) Z(d\theta)$ of a function $f$ with respect to a homogeneous and factorizable Lévy basis $Z$ is given by

$$K_{f \ast Z}(t) = \int_0^{2\pi} K_{Z'}(t f(\theta))\, d\theta\,, \qquad (7)$$

where $Z'$ is the spot variable associated with $Z$, cf. Hellmund et al. (2008, Eq. 10).

It follows that the cumulants of the stochastic variable $X(\theta)$ are given by

$$\kappa_1(X(\theta)) = \mu\,,$$

$$\kappa_r(X(\theta)) = \kappa_r(W) \int_0^{2\pi} k(\phi)^r\, d\phi\,, \qquad r \ge 2,$$

where $\kappa_r(W)$ denotes the $r$-th cumulant of the stochastic variable $W$. Possible choices for the distribution of $W$ are the Gaussian, Gamma and inverse Gaussian distributions. When the kernel function is proportional to an indicator function, $k(\theta) = c\, 1_A(\theta)$ for $A \in \mathcal{B}([0, 2\pi))$, the marginal distribution of $X(\theta)$ will be of the same type as that of $W$. Otherwise, the marginal distribution will not be as simple, but the process $X$ will inherit the name of the distribution of the underlying spot variable; e.g., when $W$ is Gamma distributed, $X$ is called a Gamma Lévy process, irrespective of the choice of the kernel function. We will typically assume that $\kappa_2(W) = 1$, i.e., the skewness and kurtosis of $W$ are equal to its third and fourth cumulant, respectively. In Table 1, we give the cumulant generating function and the third and fourth cumulants of $W$ for the three distributions mentioned above. Note that, as $W$ has unit variance, the Gamma and inverse Gaussian Lévy bases are determined by a single parameter $\eta > 0$. The Lévy measures $V$ of the Gamma and inverse Gaussian Lévy bases satisfy the condition $\int_{\mathbb{R}} |r|\, V(dr) < \infty$ for the existence of the integral in Eq. 3, cf. Hellmund et al. (2008, Example 3).

In principle, any covariance function can be modelled within this set-up. This can be seen using the theorem below.

Theorem 2. The stochastic process $X$ has mean value $\mu$ and covariance function

$$c(\theta) = \lambda_0 + \sum_{s=1}^{\infty} \lambda_s \cos(s\theta)\,, \qquad \theta \in [0, 2\pi),$$

where

$$\lambda_0 = 2\pi \xi_0^2 \kappa_2(W)\,, \qquad \lambda_s = \pi \xi_s^2 \kappa_2(W)\,, \quad s \ge 1. \qquad (8)$$

Proof. Using that

$$c(\theta) = \mathrm{Cov}(X(\theta), X(0)) = \kappa_2(W) \int_0^{2\pi} k(\theta - \phi)\, k(-\phi)\, d\phi\,,$$

we easily obtain Eq. 8.

Using Theorem 2, we can construct a candidate kernel $k$ that induces a given covariance function $c$. For instance, if $c$ follows the $p$-order model (Eq. 2), then $k$ is of the form specified in Eq. 4 with

$$\xi_0 = \sqrt{\frac{\lambda_0}{2\pi \kappa_2(W)}}\,, \qquad \xi_1 = 0\,,$$

$$\xi_s = \frac{1}{\sqrt{\pi \kappa_2(W) \big(\alpha + \beta(s^{2p} - 2^{2p})\big)}}\,, \qquad s \ge 2.$$

This choice of kernel gives a well-defined integral in Eq. 3 for a Gaussian Lévy basis if $p > 1/2$, while for a Gamma or an inverse Gaussian basis the integral is well-defined if $p > 1$.
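The inversion of Theorem 2 for the $p$-order model can be written out directly. In this sketch (with $\kappa_2(W) = 1$ by default), `porder_lambda` and `porder_kernel` are our own helper names:

```python
import math

def porder_lambda(s, alpha, beta, p):
    # Fourier coefficient lambda_s of the p-order covariance model, s >= 2
    return 1.0 / (alpha + beta * (s ** (2 * p) - 2 ** (2 * p)))

def porder_kernel(alpha, beta, p, lambda0, S, kappa2=1.0):
    # Kernel coefficients xi_s obtained by inverting Eq. 8:
    # lambda_0 = 2*pi*xi_0^2*kappa2, lambda_s = pi*xi_s^2*kappa2, with xi_1 = 0.
    xi = [math.sqrt(lambda0 / (2 * math.pi * kappa2)), 0.0]
    xi += [math.sqrt(porder_lambda(s, alpha, beta, p) / (math.pi * kappa2))
           for s in range(2, S + 1)]
    return xi

# Round trip: the covariance induced by xi recovers the target lambda_s.
alpha, beta, p = math.exp(6), math.exp(-3), 2
xi = porder_kernel(alpha, beta, p, lambda0=0.05, S=10)
assert abs(math.pi * xi[5] ** 2 - porder_lambda(5, alpha, beta, p)) < 1e-12
```

Since $\alpha, \beta > 0$ and $s^{2p} \ge 2^{2p}$ for $s \ge 2$, all the square-root arguments are positive.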

ESTIMATING $E[(\hat{Q}_n - Q)^2]$ UNDER THE LÉVY-BASED MODEL

In Hobolth and Jensen (2002), the focus was on the prediction error $E[(\hat{Q}_n - Q)^2]$. If the covariance function of $X$ has the Fourier expansion

$$c(\theta) = \lambda_0 + \sum_{s=1}^{\infty} \lambda_s \cos(s\theta)\,, \qquad \theta \in [0, 2\pi),$$

then it was shown in Hobolth and Jensen (2002) that

$$E[(\hat{Q}_n - Q)^2] = 4\pi^2 \sum_{k=1}^{\infty} \lambda_{nk}\,, \qquad (9)$$

where $\lambda_{nk}$ is the Fourier coefficient of order $n \cdot k$ in the Fourier expansion of the covariance function of $X$. Note that in Hobolth and Jensen (2002), circular systematic sampling on $[0, 1)$ instead of $[0, 2\pi)$ is considered, so Eq. 9 represents an adjusted version of Hobolth and Jensen (2002, Eq. 8); the factor $4\pi^2$ arises because $Q$ and $\hat{Q}_n$ on $[0, 2\pi)$ are $2\pi$ times their $[0, 1)$ counterparts.
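The tail sum over $\lambda_{nk}$ can be evaluated by truncation. The sketch below (our own helper, using the example values $p = 2$, $\log\alpha = 6$, $\log\beta = -3$ that appear later in the paper) shows how quickly the sum decays in $n$ under the $p$-order model:

```python
import math

def lambda_s(s, alpha, beta, p):
    # Fourier coefficients of the p-order covariance model (zero for s < 2)
    if s < 2:
        return 0.0
    return 1.0 / (alpha + beta * (s ** (2 * p) - 2 ** (2 * p)))

def tail_sum(n, alpha, beta, p, K=2000):
    # Truncated sum over k of lambda_{n*k}, the series entering Eq. 9
    return sum(lambda_s(n * k, alpha, beta, p) for k in range(1, K + 1))

alpha, beta, p = math.exp(6), math.exp(-3), 2
for n in (2, 4, 8):
    print(n, tail_sum(n, alpha, beta, p))   # strictly decreasing in n
```

Since $\lambda_s$ decays like $s^{-2p}$, the truncation error is negligible for moderate `K`.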

In Hobolth et al. (2003), a procedure for estimating the prediction error under a Gaussian $p$-order model was developed, based on the Fourier expansion of $X$,

$$X(\theta) \overset{\mathcal{D}}{=} A_0 + \sum_{s=1}^{\infty} \big(A_s \cos(s\theta) + B_s \sin(s\theta)\big)\,,$$

where $\overset{\mathcal{D}}{=}$ means equality in distribution and

$$A_0 = \frac{1}{2\pi} \int_0^{2\pi} X(\theta)\, d\theta\,,$$

$$A_s = \frac{1}{\pi} \int_0^{2\pi} X(\theta) \cos(s\theta)\, d\theta\,,$$

$$B_s = \frac{1}{\pi} \int_0^{2\pi} X(\theta) \sin(s\theta)\, d\theta\,.$$

When $X$ is a periodic stationary Gaussian process, the Fourier coefficients of $X$ become independent and normally distributed, $A_s \sim B_s \sim N(0, \lambda_s)$. As suggested in Hobolth et al. (2003), the parameters $\alpha$ and $\beta$ of the $p$-order model can then be estimated by maximum likelihood, based on the first $S$ Fourier coefficients,

$$L(\alpha, \beta) = \prod_{s=2}^{S} \frac{1}{2\pi \lambda_s(\alpha, \beta)} \exp\Big( -\frac{a_s^2 + b_s^2}{2 \lambda_s(\alpha, \beta)} \Big)\,, \qquad (10)$$

where $\lambda_s(\alpha, \beta)$, $s = 2, \ldots, S$, satisfy Eq. 2 and $a_s$ and $b_s$, $s = 2, \ldots, S$, denote discretized Fourier coefficients of $X$.
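A minimal sketch of the log of Eq. 10, evaluated on synthetic Gaussian Fourier coefficients (the data generation and parameter values below are illustrative only):

```python
import math
import random

def log_pseudolik(coefs, lams):
    # Log of Eq. 10: pairs (a_s, b_s), s = 2..S, with A_s, B_s ~ N(0, lambda_s)
    return sum(-math.log(2 * math.pi * lam) - (a * a + b * b) / (2 * lam)
               for (a, b), lam in zip(coefs, lams))

rng = random.Random(0)
p, alpha, beta = 2, math.exp(6), math.exp(-3)
lams = [1 / (alpha + beta * (s ** (2 * p) - 2 ** (2 * p))) for s in range(2, 52)]
coefs = [(rng.gauss(0, math.sqrt(l)), rng.gauss(0, math.sqrt(l))) for l in lams]

# The likelihood prefers the generating spectrum over a badly mis-scaled one.
assert log_pseudolik(coefs, lams) > log_pseudolik(coefs, [100 * l for l in lams])
```

In practice the $a_s$, $b_s$ would come from a discrete Fourier transform of the sampled measurement function, and the maximization would run over a grid or a numerical optimizer in $(\alpha, \beta)$.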

In this section, we will show that this procedure can also be used under the general Lévy-based model. The following theorem gives the distribution of the Fourier coefficients and their relations under the general Lévy-based model. In the Gaussian case, the theorem can be found, e.g., in Dufour and Roy (1976).

Theorem 3. The stationary Lévy-based stochastic process $X$ can be written in terms of its Fourier coefficients as

$$X(\theta) = A_0 + \sum_{s=1}^{\infty} \big(A_s \cos(s\theta) + B_s \sin(s\theta)\big)\,, \qquad \theta \in [0, 2\pi),$$

where $A_0 = \mu + \xi_0 Z([0, 2\pi))$ and

$$A_s = \xi_s \int_0^{2\pi} \cos(s\phi)\, Z(d\phi)\,, \qquad B_s = \xi_s \int_0^{2\pi} \sin(s\phi)\, Z(d\phi)\,. \qquad (11)$$

Moreover, the Fourier coefficients are pairwise uncorrelated, and the Fourier coefficients of order $s$ have the same distribution, characterized by the cumulant generating function $K_{A_s}(t) = K_{B_s}(t) = K_U(\xi_s t)$, where

$$K_U(t) = \int_0^{2\pi} K_W(t \cos(\theta))\, d\theta\,.$$

Proof. Writing the kernel function in terms of its Fourier representation and then calculating the Fourier coefficients of $X$ gives Eq. 11. The Fourier coefficients are uncorrelated, as

$$\mathrm{Cov}(A_s, B_r) = \kappa_2(W)\, \xi_s \xi_r \int_0^{2\pi} \cos(s\phi) \sin(r\phi)\, d\phi = 0\,,$$

for all $r, s \ge 1$, and

$$\mathrm{Cov}(A_s, A_r) = \kappa_2(W)\, \xi_s \xi_r \int_0^{2\pi} \cos(s\phi) \cos(r\phi)\, d\phi = 0\,,$$

$$\mathrm{Cov}(B_s, B_r) = \kappa_2(W)\, \xi_s \xi_r \int_0^{2\pi} \sin(s\phi) \sin(r\phi)\, d\phi = 0\,,$$

for all $r, s \ge 1$, $r \ne s$. The cumulant generating function of $A_s$ is given by

$$K_{A_s}(t) = \int_0^{2\pi} K_{Z'}(t \xi_s \cos(s\phi))\, d\phi = K_U(\xi_s t)\,,$$

using the periodicity of the integrand, and a similar argument shows that $K_{B_s}(t) = K_U(\xi_s t)$.

The cumulant generating functions of $A_s$ and $B_s$ yield simple expressions for their cumulants, which are given by

$$\kappa_1(A_0) = \mu\,, \qquad \kappa_r(A_0) = 2\pi \xi_0^r \kappa_r(W)\,, \quad r \ge 2,$$

and

$$\kappa_r(A_s) = \kappa_r(B_s) = 2\pi \xi_s^r\, \frac{(r-1)!!}{r!!}\, \kappa_r(W)\, 1(r \text{ even})\,,$$

for $s \ge 1$ and $r \ge 1$. Here and in the following, we let, for a positive integer $n$,

$$n!! = \begin{cases} n(n-2)(n-4)\cdots 2\,, & n \text{ even}, \\ n(n-2)(n-4)\cdots 1\,, & n \text{ odd}. \end{cases}$$

This means that $A_s$ and $B_s$ have mean, variance, skewness and kurtosis of the following form:

$$\kappa_1(A_s) = 0\,, \qquad \kappa_2(A_s) = \pi \xi_s^2 \kappa_2(W)\,,$$

$$\gamma_1(A_s) = 0\,, \qquad \gamma_2(A_s) = \frac{3}{4\pi}\, \gamma_2(W)\,,$$

where $\gamma_2(W)$ is the kurtosis of $W$. Moreover, the normalized Fourier coefficients of order $s = 1, 2, \ldots$, obtained as $\tilde{A}_s = A_s/\xi_s$ and $\tilde{B}_s = B_s/\xi_s$, all have the same distribution, characterized by the cumulant generating function $K_U(t)$.

From Theorems 2 and 3, it follows that for a non-Gaussian Lévy basis $Z$ the Fourier coefficients are still uncorrelated with variance $\lambda_s(\alpha, \beta)$. Furthermore, the distributions of the Fourier coefficients in the non-Gaussian and the Gaussian model only differ in even cumulants of order four and higher. Therefore, Eq. 10 can be regarded as a pseudo-likelihood function for $(\alpha, \beta)$ also in the non-Gaussian case.

Fig. 1 shows the small difference between the saddlepoint densities of the normalized Fourier coefficients and the Gaussian density for different values of $\eta$ in the case of a Gamma Lévy basis. Furthermore, a simulation study indicated that the estimates of $(\alpha, \beta)$ are robust against deviations from Gaussianity in the underlying distribution, but the mean square error of the estimates increases somewhat as $\eta$ decreases.

THE DISTRIBUTION OF $\hat{Q}_n - Q$ UNDER THE LÉVY-BASED MODEL

In the previous section, we saw that the method developed in Hobolth et al. (2003) for estimating $E[(\hat{Q}_n - Q)^2]$ based on a Gaussian process is robust against departures from the distributional assumption. In this section, we derive the distribution of the error predictor $\hat{Q}_n - Q$, which may be markedly non-Gaussian.

Theorem 4. Under the Lévy-based model, the error predictor is distributed as

$$\hat{Q}_n - Q \sim 2\pi \int_0^{2\pi} k_n(\phi)\, Z(d\phi)\,, \qquad (12)$$

where

$$k_n(\phi) = \sum_{s=1}^{\infty} \xi_{sn} \cos(sn\phi)\,.$$

The distribution of $\hat{Q}_n - Q$ is characterized by its cumulant generating function

$$K_{\hat{Q}_n - Q}(t) = \int_0^{2\pi} K_W(2\pi t\, k_n(\phi))\, d\phi\,.$$

Proof. Recall that

$$\hat{Q}_n = \frac{2\pi}{n} \sum_{i=0}^{n-1} X\Big(\Theta + \frac{2\pi i}{n}\Big)\,, \qquad n \ge 1,$$

where $\Theta$ is uniformly distributed in $[0, 2\pi/n)$. Without loss of generality we can take $\Theta = 0$, as the distribution of $\hat{Q}_n$ does not depend on $\Theta$. The result in Eq. 12 is obtained by observing that the mean of the translated kernel functions is given by

$$\frac{1}{n} \sum_{j=0}^{n-1} k\Big(\frac{2\pi j}{n} - \phi\Big) = \xi_0 + \sum_{s=1}^{\infty} \xi_{sn} \cos(sn\phi)\,.$$

The expression for the cumulant generating function of $\hat{Q}_n - Q$ is a consequence of Eq. 7.

As the cumulant generating function of $\hat{Q}_n - Q$ has a simple form, its cumulants are easily available. In particular, it enables us to obtain a saddlepoint approximation of its density. An alternative is to use Theorem 4 to simulate the distribution of $\hat{Q}_n - Q$.
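Theorem 4 indeed gives a direct simulation route: discretize $2\pi\int_0^{2\pi} k_n(\phi) Z(d\phi)$ with centred Gamma increments. The sketch below (our own discretization, with $\eta = 4$, $n = 4$ and the $p$-order parameters of the example that follows) checks the Monte Carlo variance of the error predictor against the value $4\pi^3 \sum_s \xi_{sn}^2$ implied by Eq. 12 with $\kappa_2(W) = 1$:

```python
import math
import random

def xi_s(s, alpha, beta, p):
    # p-order kernel coefficients with kappa_2(W) = 1 (xi_s = 0 for s < 2)
    if s < 2:
        return 0.0
    return math.sqrt(1.0 / (math.pi * (alpha + beta * (s ** (2 * p) - 2 ** (2 * p)))))

alpha, beta, p, eta, n, S, m = math.exp(6), math.exp(-3), 2, 4.0, 4, 8, 400
d = 2 * math.pi / m
# k_n evaluated at cell midpoints, truncated at S terms
kn = [sum(xi_s(s * n, alpha, beta, p) * math.cos(s * n * (j + 0.5) * d)
          for s in range(1, S + 1)) for j in range(m)]

rng = random.Random(2)
shape, scale = eta * d, 1 / math.sqrt(eta)

def sample_error():
    # One draw of 2*pi * int k_n(phi) Z(dphi) with centred Gamma increments
    return 2 * math.pi * sum(k * (rng.gammavariate(shape, scale) - shape * scale)
                             for k in kn)

draws = [sample_error() for _ in range(1000)]
var_mc = sum(e * e for e in draws) / len(draws)   # the mean is zero by construction
var_th = 4 * math.pi ** 3 * sum(xi_s(s * n, alpha, beta, p) ** 2
                                for s in range(1, S + 1))
print(var_mc, var_th)   # agree within Monte Carlo error
```

The simulated draws also exhibit the positive skewness of the Gamma basis, which the Gaussian approximation would miss.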

Example. Let us consider a Lévy-based model (Eq. 3) for $X$ with a Gamma Lévy basis $Z$ and $k$ chosen such that the covariance function of $X$ follows a $p$-order model. Under this model, the Fourier coefficients $A_s$ and $B_s$, $s \ge 1$, of $X(\theta)$ have mean, variance, skewness and kurtosis

$$\kappa_1(A_s) = 0\,, \qquad \kappa_2(A_s) = \pi \xi_s^2 = \lambda_s(\alpha, \beta)\,,$$

$$\gamma_1(A_s) = 0\,, \qquad \gamma_2(A_s) = \frac{9}{2\pi\eta}\,.$$

This model may, for instance, be used to model the squared radial function of random star-shaped planar particles containing the origin. Fig. 2 shows examples of particles simulated from such a model, using different values of $\eta$. The values of $p$, $\alpha$ and $\beta$ were $p = 2$, $\log \alpha = 6$ and $\log \beta = -3$. We used $\eta = 2, 4, 16$; a low value of $\eta$ corresponds to an underlying distribution with a heavy tail. The value of $\eta$ controls the frequency and size of the irregularities of the particle boundary. Small values of $\eta$ produce particles with a few large fluctuations of the boundary and fewer small ones, while higher values of $\eta$ produce particles with more frequently occurring moderate fluctuations along the boundary.

Fig. 3 shows the corresponding saddlepoint densities of the estimated area of the particles for two different values of n.

In applications, the parameter $\eta$ of the underlying Lévy basis $Z$, i.e., the parameter of the distribution of $W$, needs to be estimated. For a given kernel function $k$, we have a simple expression for the cumulant generating function of $X$ and its derivatives (when they exist), cf. Theorem 1. We suggest estimating $\eta$ by means of the saddlepoint approximation of the density of $X(\theta)$. For more details on saddlepoint approximations, cf. Jensen (1995). The first-order saddlepoint approximation of the density is given by

$$\hat{f}(x) = \frac{1}{\sqrt{2\pi K_X''(\hat{t}(x))}} \exp\big( K_X(\hat{t}(x)) - \hat{t}(x)\, x \big)\,,$$

where $\hat{t}(x)$ is the solution to the saddlepoint equation $K_X'(t) = x$.

Note that the saddlepoint equation is non-linear when $Z$ is non-Gaussian, but it can be solved numerically for a given kernel function $k$, using Newton's method. A better approximation of the density of $X(\theta)$ is obtained by multiplying the density by the standard second-order correction factor

$$1 + \frac{1}{8} \hat{\lambda}_4(x) - \frac{5}{24} \hat{\lambda}_3(x)^2\,, \qquad \hat{\lambda}_r(x) = \frac{K_X^{(r)}(\hat{t}(x))}{K_X''(\hat{t}(x))^{r/2}}\,.$$

Given an estimate of the kernel function $k$, we can establish a pseudo-likelihood function based on observations $x(\theta_1), \ldots, x(\theta_N)$ of the stochastic process $X$,

$$L(\eta) = \prod_{i=1}^{N} \hat{f}_{\eta}(x(\theta_i))\,.$$

Here, $\hat{f}_{\eta}(x(\theta_i))$ is calculated using the approximated kernel function

$$\hat{k}(\theta) = \hat{\xi}_0 + \sum_{s=1}^{S} \hat{\xi}_s \cos(s\theta)\,,$$

where the $\hat{\xi}_s$ are obtained using the estimates of $(\alpha, \beta)$. Note that we need to normalize the densities $\hat{f}_{\eta}(x(\theta_i))$ for each value of $\eta$ when maximizing the likelihood function $L(\eta)$. If the estimated likelihood function is an increasing function of $\eta$, this suggests that the underlying Lévy basis is Gaussian.

The cumulant generating function and its derivatives have simple analytic expressions when the underlying Lévy basis is a Gamma basis or an inverse Gaussian basis. These expressions can be derived by combining Theorem 1 and Table 1. For a Gamma Lévy process, the cumulant generating function of $X(\theta)$ and its derivatives are given by

$$K_X(t) = t\big(\mu - 2\pi \xi_0 \sqrt{\eta}\big) - \eta \int_0^{2\pi} \log\Big(1 - \frac{t k(\phi)}{\sqrt{\eta}}\Big)\, d\phi\,,$$

$$K_X'(t) = \mu - 2\pi \xi_0 \sqrt{\eta} + \eta \int_0^{2\pi} \frac{k(\phi)}{\sqrt{\eta} - t k(\phi)}\, d\phi\,,$$

$$K_X^{(r)}(t) = (r-1)!\, \eta \int_0^{2\pi} \frac{k(\phi)^r}{\big(\sqrt{\eta} - t k(\phi)\big)^r}\, d\phi\,, \qquad r \ge 2.$$

For an inverse Gaussian Lévy process, combining Theorem 1 with the cumulant generating function in Table 1 gives

$$K_X(t) = t\big(\mu - 2\pi \xi_0 \eta^2\big) + \eta^4 \int_0^{2\pi} \bigg(1 - \sqrt{1 - \frac{2 t k(\phi)}{\eta^2}}\bigg)\, d\phi\,,$$

$$K_X'(t) = \mu - 2\pi \xi_0 \eta^2 + \eta^2 \int_0^{2\pi} k(\phi) \Big(1 - \frac{2 t k(\phi)}{\eta^2}\Big)^{-1/2} d\phi\,,$$

$$K_X^{(r)}(t) = \frac{(2r-3)!!}{\eta^{2(r-2)}} \int_0^{2\pi} k(\phi)^r \Big(1 - \frac{2 t k(\phi)}{\eta^2}\Big)^{-(2r-1)/2} d\phi\,, \qquad r \ge 2.$$

In both cases, the saddlepoint approximation of the density of $X(\theta)$ is easily obtained using numerical integration and a Newton algorithm for finding the saddlepoint. Note that the saddlepoint approximation of the density function for an arbitrary $\eta$ can be written in terms of the saddlepoint approximation of the density, the derivatives of the cumulant generating function and the saddlepoint solution for $\eta = 1$.
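For the Gamma case, the saddlepoint equation can be solved exactly as described: midpoint-rule integration for $K_X'$ and $K_X''$, plus Newton iteration. A sketch with an illustrative kernel $k(\phi) = 0.2 + 0.3\cos\phi$, $\mu = 1$ and $\eta = 4$ (all values our own):

```python
import math

MU, ETA, M = 1.0, 4.0, 400   # mean, Gamma parameter, integration cells
XI = [0.2, 0.3]              # illustrative kernel coefficients xi_0, xi_1

def k(phi):
    return XI[0] + XI[1] * math.cos(phi)

def KX_deriv(t, r):
    # r-th derivative of K_X for a Gamma Levy process, by the midpoint rule:
    # K_X^{(r)}(t) = (r-1)! * eta * int k^r / (sqrt(eta) - t*k)^r dphi  (r >= 2),
    # with the extra constant mu - 2*pi*xi_0*sqrt(eta) for r = 1.
    d = 2 * math.pi / M
    se = math.sqrt(ETA)
    tot = sum(k((j + 0.5) * d) ** r / (se - t * k((j + 0.5) * d)) ** r
              for j in range(M)) * d * ETA * math.factorial(r - 1)
    if r == 1:
        tot += MU - 2 * math.pi * XI[0] * se
    return tot

def saddlepoint(x, iters=50, tol=1e-12):
    # Newton iteration for K_X'(t) = x, starting at t = 0 (where K_X'(0) = mu)
    t = 0.0
    for _ in range(iters):
        step = (KX_deriv(t, 1) - x) / KX_deriv(t, 2)
        t -= step
        if abs(step) < tol:
            break
    return t

t_hat = saddlepoint(1.3)
print(t_hat, KX_deriv(t_hat, 1))   # K_X'(t_hat) matches x = 1.3
```

Since $K_X'$ is increasing in $t$ on its domain, Newton's method from $t = 0$ converges quickly for $x$ in a reasonable range around the mean.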

CONCLUSION AND PERSPECTIVES

We have developed a Lévy-based error prediction procedure for circular systematic sampling. In contrast to previous model-based methods, we consider a flexible class of non-Gaussian measurement functions, based on kernel smoothing of a homogeneous Lévy basis. In particular, we have derived the distribution of the error predictor in circular systematic sampling. The modelling framework allows us to consider, in principle, any given covariance structure of the measurement function, in particular the popular $p$-order covariance model, which enables control of the local and global fluctuations of the measurement function.

Relation to generalized $p$-order models. Note that, as the Lévy-based process $X$ is strictly stationary, it can be shown that $X$ has a polar expansion of the form

$$X(\theta) = \mu + \sqrt{2} \sum_{s=0}^{\infty} \sqrt{C_s} \cos(s(\theta - D_s))\,,$$

where the random variables

$$C_s = \tfrac{1}{2}\big(A_s^2 + B_s^2\big) = \lambda_s Z_s\,, \qquad D_s \sim U[0, 2\pi/s)\,, \quad s \ge 1,$$

are independent, cf. the Appendix, and $E(Z_s) = 1$. Here, the variable $Z_s$ can be expressed as

$$Z_s = \frac{1}{2\pi \kappa_2(W)} \int_0^{2\pi} \int_0^{2\pi} \cos(s(\theta - \phi))\, Z(d\theta)\, Z(d\phi)\,.$$

This shows that the Lévy-based models are closely related to the generalized $p$-order models proposed in Hobolth et al. (2003), but the Lévy-based models have more structure, which allows for the derivation of distributional results.

Modelling particles in 2D and 3D. The model (Eq. 3) can be used directly to model the shape of featureless two-dimensional particles, by assuming that a particle $Y$ is a stochastic deformation of a template particle $Y_0$. If $r_0$ is the radial function of the template particle, we let the radial function of $Y$ be of the form $R(\theta) = r_0(\theta) + X(\theta)$, where $X$ is a zero-mean Lévy-based stochastic process. The strength of this technique is twofold. Firstly, the global and local fluctuations of the Lévy-based stochastic process are controlled by the variances of the Fourier coefficients, which are determined by the kernel function. Secondly, the underlying Lévy basis determines the frequency and size of the irregularities of the process. Finally, the methodology presented can be extended to model the shape of three-dimensional featureless particles by considering Lévy-based processes on the unit sphere $S^2$. Hansen et al. (2011) consider three-dimensional Lévy particles using different covariance models; the focus there is on the Hausdorff dimension of the boundary of particles obtained using a Gaussian basis.

Approximations of densities. The saddlepoint approximation was applied here to obtain an approximation of the density of $X(\theta)$ in the Lévy-based stochastic model. As the cumulant generating function is easily obtainable for stochastic convolutions of the type

$$X(u) = \int f(u - v)\, Z(dv)\,, \qquad u \in \mathbb{R}^n,$$

where $f : \mathbb{R}^n \to \mathbb{R}$ and $Z$ is a homogeneous Lévy basis on $\mathbb{R}^n$, the saddlepoint approximation of the density is an attractive tool for studying Lévy-based convolution models in general. For the stochastic processes considered here, other types of density approximations can also be considered, based on approximating $A_s$ and $B_s$ by differences of variables from the same family of distributions as $W$. As an example, one could approximate the density of the Fourier coefficients by a type II McKay distribution (Holm and Alouini, 2004) when the underlying Lévy basis is a Gamma Lévy basis.

Choice of kernel function. Finally, it should be emphasized that the assumption of an even kernel function does not affect the flexibility of the induced covariance model. When $k$ is not necessarily even,

$$k(\theta) = \xi_0 + \sum_{s=1}^{\infty} \big(\xi_{s,1} \cos(s\theta) + \xi_{s,2} \sin(s\theta)\big)\,,$$

the covariance function of $X$ is given by

$$c(\theta) = 2\pi \xi_0^2 + \pi \sum_{s=1}^{\infty} \big(\xi_{s,1}^2 + \xi_{s,2}^2\big) \cos(s\theta)\,, \qquad \theta \in [0, 2\pi).$$

Moreover, the Fourier coefficients of the process $X$ will still be uncorrelated and have the same distribution, described by the cumulant generating function

$$K_{A_s}(t) = K_{B_s}(t) = K_U\Big( t \sqrt{\xi_{s,1}^2 + \xi_{s,2}^2} \Big)\,,$$

and cumulants given by

$$\kappa_r(A_s) = \kappa_r(B_s) = 2\pi \big(\xi_{s,1}^2 + \xi_{s,2}^2\big)^{r/2}\, \frac{(r-1)!!}{r!!}\, \kappa_r(W)\, 1(r \text{ even})\,.$$

APPENDIX: A NOTE ON POLAR EXPANSION OF A STATIONARY PROCESS

Consider a stationary process $X = \{X(\theta) : \theta \in [0, 2\pi)\}$ with Fourier expansion

$$X(\theta) = A_0 + \sum_{s=1}^{\infty} \big(A_s \cos(s\theta) + B_s \sin(s\theta)\big)\,,$$

and polar expansion

$$X(\theta) = \mu + \sqrt{2} \sum_{s=0}^{\infty} \sqrt{C_s} \cos(s(\theta - D_s))\,,$$

where $C_s = \tfrac{1}{2}(A_s^2 + B_s^2)$ and $s D_s = \arctan(B_s/A_s)$, $s \ge 1$. It is easily seen that

$$X(\theta + h) = A_0 + \sum_{s=1}^{\infty} \big(A_s(h) \cos(s\theta) + B_s(h) \sin(s\theta)\big)\,,$$

where

$$\begin{pmatrix} A_s(h) \\ B_s(h) \end{pmatrix} = V(sh) \begin{pmatrix} A_s \\ B_s \end{pmatrix}\,, \qquad V(sh) = \begin{pmatrix} \cos(sh) & \sin(sh) \\ -\sin(sh) & \cos(sh) \end{pmatrix}\,,$$

and $V(sh)$ is a rotation matrix. As, for each $h \in [0, 2\pi)$, $\{X(\theta) : \theta \in [0, 2\pi)\}$ and $\{X(\theta + h) : \theta \in [0, 2\pi)\}$ have the same distribution,

$$\begin{pmatrix} A_s \\ B_s \end{pmatrix} \overset{\mathcal{D}}{=} V(sh) \begin{pmatrix} A_s \\ B_s \end{pmatrix}\,.$$

We now have that

$$E_s = \frac{1}{\sqrt{A_s^2 + B_s^2}} \begin{pmatrix} A_s \\ B_s \end{pmatrix} \overset{\mathcal{D}}{=} V(sh)\, E_s\,, \qquad h \in [0, 2\pi),$$

and consequently $E_s$ is uniformly distributed on the unit circle, i.e., $\arctan(B_s/A_s)$ is uniformly distributed on $[0, 2\pi)$ and $D_s$ is uniformly distributed on $[0, 2\pi/s)$. Now consider the conditional distribution of $\sqrt{2 C_s} = \sqrt{A_s^2 + B_s^2}$ given $E_s$. As

$$\sqrt{2 C_s} \,\big|\, E_s = e_s \;\overset{\mathcal{D}}{=}\; \sqrt{2 C_s} \,\big|\, E_s = V(sh)\, e_s\,,$$

the conditional distribution does not depend on $e_s$, and hence $\sqrt{2 C_s}$ is independent of $E_s$ and $D_s$.

doi: 10.5566/ias.v32.p117-125

ACKNOWLEDGEMENTS

The authors want to express their gratitude to Jan Pedersen for fruitful discussions. This research was supported by MINDLab and Centre for Stochastic Geometry and Advanced Bioimaging, supported by the Villum Foundation.

REFERENCES

Barndorff-Nielsen OE, Schmiegel J (2004). Lévy based tempo-spatial modelling; with applications to turbulence. Uspekhi Mat Nauk 159:63-90.

Cruz-Orive LM (2005). A new stereological principle for test lines in three-dimensional space. J Microsc 219:18-28.

Dufour JM, Roy R (1976). On spectral estimation for a homogeneous random process on the circle. Stoch Proc Appl 4:107-20.

Gual-Arnau X, Cruz-Orive LM (2000). Systematic sampling on the circle and the sphere. Adv Appl Prob SGSA 32:628-47.

Gundersen HJG (1988). The nucleator. J Microsc 151:3-21.

Hansen LV, Thorarinsdottir TL, Gneiting T (2011). Lévy particles: Modelling and simulating star-shaped random sets. Research report 4, CSGB, Department of Mathematical Sciences, Aarhus University. Submitted.

Hellmund G, Prokešová M, Jensen EBV (2008). Lévy-based Cox point processes. Adv Appl Prob SGSA 40:603-29.

Hobolth A, Jensen EBV (2002). A note on design-based versus model-based variance estimation in stereology. Adv Appl Prob SGSA 34:484-90.

Hobolth A, Pedersen J, Jensen EBV (2003). A continuous parametric shape model. Ann Inst Statist Math 55:227-42.

Holm H, Alouini MS (2004). Sum and difference of two squared correlated Nakagami variates in connection with the McKay distribution. IEEE Trans Commun 52:1367-76.

Jensen EBV, Gundersen HJG (1993). The rotator. J Microsc 170:35-44.

Jensen JL (1995). Saddlepoint Approximations. Oxford: Clarendon Press.

Jonsdottir KY, Hoffmann LM, Hobolth A, Jensen EBV (2006). On variance estimation in circular systematic sampling. J Microsc 222:212-6.

Jonsdottir KY, Rønn-Nielsen A, Mouridsen K, Jensen EBV (2013). Lévy-based modelling in brain imaging. Scand J Stat, in press.

Jonsdottir KY, Schmiegel J, Jensen EBV (2008). Lévy-based growth models. Bernoulli 14:62-90.

Rajput BS, Rosinski J (1989). Spectral representations of infinitely divisible processes. Prob Theory Relat Fields 82:451-87.

Schneider R (1993). Convex Bodies: The Brunn-Minkowski Theory. Cambridge: Cambridge University Press.

KRISTJANA ÝR JÓNSDÓTTIR (corresponding author) and EVA B. VEDEL JENSEN

Department of Mathematics, Aarhus University, Ny Munkegade 118, 8000 Aarhus C, Denmark

e-mail: kyj@imf.au.dk, eva@imf.au.dk

(Received October 8, 2012; revised May 14, 2013; accepted May 15, 2013)

Table 1. Examples of the distribution of $W$, together with the corresponding cumulant generating function and the third and fourth cumulants.

distribution of $W$:   Gaussian $N(0,1)$;   Gamma $\Gamma(\eta, \sqrt{\eta})$;   inverse Gaussian $IG(\eta^3, \eta)$
$K_W(t)$:              $t^2/2$;             $-\eta \log\big(1 - t/\sqrt{\eta}\big)$;   $\eta^4\big(1 - \sqrt{1 - 2t/\eta^2}\big)$
$\kappa_3(W)$:         $0$;                 $2/\sqrt{\eta}$;                     $3/\eta^2$
$\kappa_4(W)$:         $0$;                 $6/\eta$;                            $15/\eta^4$
Image Analysis and Stereology, June 2013.