# Properties of matrix variate confluent hypergeometric function distribution.

1. Introduction

The matrix variate gamma distribution has many applications in multivariate statistical analysis. The Wishart distribution, which is the distribution of the sample variance covariance matrix when sampling from a multivariate normal distribution, is a special case of the matrix variate gamma distribution.

The purpose of this paper is to give a generalization of the matrix variate gamma distribution and study its properties.

We begin with a brief review of some definitions and notations. We adhere to standard notations (cf. Gupta and Nagar ). Let A = ([a.sub.ij]) be an m x m matrix. Then, A' denotes the transpose of A; tr(A) = [a.sub.11] + ... + [a.sub.mm]; etr(A) = exp(tr(A)); det(A) = determinant of A; norm of A = [parallel]A[parallel] = maximum of absolute values of eigenvalues of the matrix A; A > 0 means that A is symmetric positive definite; and [A.sup.1/2] denotes the unique symmetric positive definite square root of A > 0. The multivariate gamma function [[GAMMA].sub.m](a) is defined by

[[GAMMA].sub.m](a) = [[integral].sub.X>0] etr(-X) det [(X).sup.a-(m+1)/2] dX, Re(a) > (m - 1)/2. (1)

The m x m symmetric positive definite random matrix X is said to have a matrix variate gamma distribution, denoted by X ~ Ga(m, v, [theta], [OMEGA]), if its probability density function (p.d.f.) is given by

det [(X).sup.v-(m+1)/2] etr (-[[OMEGA].sup.-1] X/[theta])/[[GAMMA].sub.m](v)[[theta].sup.mv] det [([OMEGA]).sup.v], (2)

where [OMEGA] is a symmetric positive definite matrix of order m, [theta] > 0, and v > (m - 1)/2. For [OMEGA] = [I.sub.m], the above density reduces to a standard matrix variate gamma density and in this case we write X ~ Ga(m, v, [theta]). Further, if [X.sub.1] ~ Ga(m, [v.sub.1], [theta]) and [X.sub.2] ~ Ga(m, [v.sub.2], [theta]) are independent gamma matrices, then the random matrix [([X.sub.1] + [X.sub.2]).sup.-1/2][X.sub.1][([X.sub.1] + [X.sub.2]).sup.-1/2] follows a matrix variate beta type 1 distribution with parameters [v.sub.1] and [v.sub.2].

By replacing etr(-[[OMEGA].sup.-1]X/[theta]) by the confluent hypergeometric function of matrix argument [sub.1][F.sub.1]([alpha]; [beta]; -[[OMEGA].sup.-1]X/[theta]), a generalization of the matrix variate gamma distribution can be defined by the p.d.f.:

C (v, [alpha], [beta], [theta], [OMEGA]) det [(X).sup.v-(m+1)/2] [sub.1][F.sub.1] ([alpha]; [beta]; -1/[theta] [[OMEGA].sup.-1] X), (3)

where X > 0 and C(v, [alpha], [beta], [theta], [OMEGA]) is the normalizing constant. In Section 2, it has been shown that, for [beta] - v > (m - 1)/2, [alpha] - v > (m - 1)/2, v > (m - 1)/2, [theta] > 0, and [OMEGA] > 0, the normalizing constant can be evaluated as

C(v, [alpha], [beta], [theta], [OMEGA]) = [[GAMMA].sub.m]([alpha])[[GAMMA].sub.m]([beta] - v) det [([theta][OMEGA]).sup.-v]/[[GAMMA].sub.m](v)[[GAMMA].sub.m]([beta])[[GAMMA].sub.m]([alpha] - v). (4)

Therefore, the p.d.f. in (3) can be written explicitly as

[[GAMMA].sub.m]([alpha])[[GAMMA].sub.m]([beta] - v) det [([theta][OMEGA]).sup.-v]/[[GAMMA].sub.m](v)[[GAMMA].sub.m]([beta])[[GAMMA].sub.m]([alpha] - v) det [(X).sup.v-(m+1)/2] [sub.1][F.sub.1]([alpha]; [beta]; -1/[theta] [[OMEGA].sup.-1]X), X > 0, (5)

where [beta] - v > (m - 1)/2, [alpha] - v > (m - 1)/2, v > (m - 1)/2, [theta] > 0, [OMEGA] > 0, and [sub.1][F.sub.1] is the confluent hypergeometric function of the first kind of matrix argument (Gupta and Nagar ). Since the density given above involves the confluent hypergeometric function, we will call the corresponding distribution a confluent hypergeometric function distribution. We will write X ~ C[H.sub.m](v, [alpha], [beta], [theta], [OMEGA], kind 1) to say that the random matrix X has a confluent hypergeometric function distribution defined by the density (5). It has been shown by van der Merwe and Roux  that the above density can be obtained as a limiting case of a density involving the Gauss hypergeometric function of matrix argument. For [alpha] = [beta], the density (5) reduces to a matrix variate gamma density and for [OMEGA] = [I.sub.m] it reduces to

[[GAMMA].sub.m]([alpha])[[GAMMA].sub.m]([beta] - v)/[[theta].sup.mv][[GAMMA].sub.m](v)[[GAMMA].sub.m]([beta])[[GAMMA].sub.m]([alpha] - v) det [(X).sup.v-(m+1)/2] [sub.1][F.sub.1]([alpha]; [beta]; -X/[theta]), X > 0, (6)

where [beta] - v > (m - 1)/2, [alpha] - v > (m - 1)/2, v > (m - 1)/2, and [theta] > 0. In this case we will write X ~ C[H.sub.m](v, [alpha], [beta], [theta], kind 1). The matrix variate confluent hypergeometric function kind 1 distribution occurs as the distribution of the matrix ratio of independent gamma and beta matrices. For m = 1, (6) reduces to the univariate confluent hypergeometric function kind 1 density given by (Orozco-Castaneda et al. )

[GAMMA]([alpha])[GAMMA]([beta] - v)/[[theta].sup.v][GAMMA](v)[GAMMA]([beta])[GAMMA]([alpha] - v) [x.sup.v-1] [sub.1][F.sub.1] ([alpha]; [beta]; -x/[theta]), x > 0, (7)

where [beta] - v > 0, [alpha] - v > 0, v > 0, [theta] > 0, and [sub.1][F.sub.1] is the confluent hypergeometric function of the first kind (Luke ). The random variable x having the above density will be designated by x ~ CH(v, [alpha], [beta], [theta], kind 1). Since the matrix variate confluent hypergeometric function kind 1 distribution generalizes the matrix variate gamma distribution, it can serve as an effective alternative to the gamma distribution.
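As a numerical sanity check of this univariate density, one can verify that it integrates to one. The sketch below assumes SciPy is available; the helper name `ch_kind1_pdf` and the parameter values are ours, chosen to satisfy [beta] - v > 0 and [alpha] - v > 0.

```python
# Sanity check (m = 1): the univariate confluent hypergeometric function
# kind 1 density integrates to one over (0, infinity).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, hyp1f1

def ch_kind1_pdf(x, v, alpha, beta, theta):
    # Gamma(alpha)Gamma(beta - v) / (theta^v Gamma(v)Gamma(beta)Gamma(alpha - v))
    # * x^(v - 1) * 1F1(alpha; beta; -x/theta)
    const = gamma(alpha) * gamma(beta - v) / (
        theta**v * gamma(v) * gamma(beta) * gamma(alpha - v))
    return const * x**(v - 1) * hyp1f1(alpha, beta, -x / theta)

v, alpha, beta, theta = 2.0, 5.0, 6.5, 1.5  # illustrative values
total, _ = quad(ch_kind1_pdf, 0, np.inf, args=(v, alpha, beta, theta))
print(total)  # close to 1
```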

Although ample information about the matrix variate gamma distribution is available, little appears to have been done in the literature to study the matrix variate confluent hypergeometric function kind 1 distribution.

In this paper, we study several properties including stochastic representations of the matrix variate confluent hypergeometric function kind 1 distribution. We also derive the density function of the matrix quotient of two independent random matrices having confluent hypergeometric function kind 1 and gamma distributions. Further, densities of several other matrix quotients and matrix products involving confluent hypergeometric function kind 1, beta type 1, beta type 2, and gamma matrices are derived.

2. Some Definitions and Preliminary Results

In this section we give some definitions and preliminary results which are used in subsequent sections.

A more general integral representation of the multivariate gamma function can be obtained as

[[GAMMA].sub.m](a) = det [(Y).sup.a] [[integral].sub.R>0] etr(-YR) det [(R).sup.a-(m+1)/2] dR, (8)

where Re(a) > (m - 1)/2 and Re(Y) > 0. The above result can be established for real Y > 0 by substituting X = [Y.sup.1/2]R[Y.sup.1/2] with the Jacobian J(X [right arrow] R) = det [(Y).sup.(m+1)/2] in (1), and it follows for complex Y by analytic continuation.

The multivariate generalization of the beta function is given by

[B.sub.m](a, b) = [[GAMMA].sub.m](a)[[GAMMA].sub.m](b)/[[GAMMA].sub.m](a + b) = [[integral].sub.0<U<[I.sub.m]] det [(U).sup.a-(m+1)/2] det [([I.sub.m] - U).sup.b-(m+1)/2] dU, (9)

where Re(a) > (m - 1)/2 and Re(b) > (m - 1)/2.

The generalized hypergeometric function of one matrix, defined in Constantine , is given by

[sub.p][F.sub.q]([a.sub.1], ..., [a.sub.p]; [b.sub.1], ..., [b.sub.q]; X) = [[infinity].summation over (k=0)] [summation over ([kappa][??]k)] [([a.sub.1]).sub.[kappa]] ... [([a.sub.p]).sub.[kappa]]/[([b.sub.1]).sub.[kappa]] ... [([b.sub.q]).sub.[kappa]] [C.sub.[kappa]](X)/k!, (10)

where [a.sub.i], i = 1, ..., p, [b.sub.j], j = 1, ..., q are arbitrary complex numbers, X is an m x m complex symmetric matrix, [C.sub.[kappa]](X) is the zonal polynomial of the m x m complex symmetric matrix X corresponding to the ordered partition [kappa] = ([k.sub.1], ..., [k.sub.m]), [k.sub.1] [greater than or equal to] ... [greater than or equal to] [k.sub.m] [greater than or equal to] 0, [k.sub.1] + ... + [k.sub.m] = k, and [summation over ([kappa][??]k)] denotes summation over all partitions [kappa] of k. The generalized hypergeometric coefficient [(a).sub.[kappa]] used above is defined by

[(a).sub.[kappa]] = [m.product over (i=1)] [(a - (i - 1)/2).sub.[k.sub.i]], (11)

where [(a).sub.r] = a(a+ 1) ... (a + r- 1), r = 1, 2, ..., with [(a).sub.0] = 1. Conditions for convergence of the series in (10) are available in the literature. From (10), it follows that

[sub.0][F.sub.0](X) = [[infinity].summation over (k=0)] [summation over ([kappa][??]k)] [C.sub.[kappa]](X)/k! = etr(X), (12)

[sub.1][F.sub.1](a; c; X) = [[infinity].summation over (k=0)][summation over ([kappa][??]k)] [(a).sub.[kappa]][C.sub.[kappa]](X)/[(c).sub.[kappa]]k! (13)

[sub.2][F.sub.1] (a, b; c; X) = [[infinity].summation over (k=0)][summation over ([kappa][??]k)] [(a).sub.[kappa]] [(b).sub.[kappa]]/[(c).sub.[kappa]] [C.sub.[kappa]](X)/k!, [parallel]X[parallel] < 1. (14)

By taking [a.sub.p] = [b.sub.q] = c in (10), it can be observed that

[sub.p][F.sub.q] ([a.sub.1], ..., [a.sub.p-1], c; [b.sub.1], ..., [b.sub.q-1], c; X) = [sub.p-1][F.sub.q-1] ([a.sub.1], ..., [a.sub.p-1]; [b.sub.1], ..., [b.sub.q-1]; X). (15)

Substituting p = 2, q = 1 in (15), the Gauss hypergeometric function [sub.2][F.sub.1](a, c; c; X) reduces to

[sub.2][F.sub.1] (a, c; c; X) = [sub.1][F.sub.0] (a; X) = det [([I.sub.m] - X).sup.-a], [parallel]X[parallel] < 1. (16)

The integral representations of the confluent hypergeometric function [sub.1][F.sub.1] and the Gauss hypergeometric function [sub.2][F.sub.1] are given by

[sub.1][F.sub.1](a; c; X) = [[GAMMA].sub.m](c)/[[GAMMA].sub.m](a)[[GAMMA].sub.m](c - a) [[integral].sub.0<R<[I.sub.m]] etr(XR) det [(R).sup.a-(m+1)/2] det [([I.sub.m] - R).sup.c-a-(m+1)/2] dR, (17)

[sub.2][F.sub.1](a, b; c; X) = [[GAMMA].sub.m](c)/[[GAMMA].sub.m](a)[[GAMMA].sub.m](c - a) [[integral].sub.0<R<[I.sub.m]] det [(R).sup.a-(m+1)/2] det [([I.sub.m] - R).sup.c-a-(m+1)/2] det [([I.sub.m] - XR).sup.-b] dR, [parallel]X[parallel] < 1, (18)

where Re(a) > (m- 1)/2 and Re(c - a) > (m- 1)/2.

Further generalizations of (8) and (9) in terms of zonal polynomials, due to Constantine , are given as

[[integral].sub.R>0] etr (-YR) det [(R).sup.a-(m+1)/2] [C.sub.[kappa]] (XR) dR = [[GAMMA].sub.m](a)[(a).sub.[kappa]] det [(Y).sup.-a] [C.sub.[kappa]]([Y.sup. -1]X) (19)

[[integral].sub.0<R<[I.sub.m]] det [(R).sup.a-(m+1)/2] det [([I.sub.m] - R).sup.b-(m+1)/2] [C.sub.[kappa]](XR) dR = [(a).sub.[kappa]]/[(a + b).sub.[kappa]] [B.sub.m](a, b) [C.sub.[kappa]](X), (20)

respectively.

For Re([alpha]) > (m - 1)/2 and Re([beta]) > (m - 1)/2, we have

[[integral].sub.R>0] etr(-ZR) det [(R).sup.[alpha]-(m+1)/2] [sub.p][F.sub.q]([a.sub.1], ..., [a.sub.p]; [b.sub.1], ..., [b.sub.q]; XR) dR = [[GAMMA].sub.m]([alpha]) det [(Z).sup.-[alpha]] [sub.p+1][F.sub.q]([a.sub.1], ..., [a.sub.p], [alpha]; [b.sub.1], ..., [b.sub.q]; X[Z.sup.-1]), (21)

[[integral].sub.0<R<[I.sub.m]] det [(R).sup.[alpha]-(m+1)/2] det [([I.sub.m] - R).sup.[beta]-(m+1)/2] [sub.p][F.sub.q]([a.sub.1], ..., [a.sub.p]; [b.sub.1], ..., [b.sub.q]; XR) dR = [B.sub.m]([alpha], [beta]) [sub.p+1][F.sub.q+1]([a.sub.1], ..., [a.sub.p], [alpha]; [b.sub.1], ..., [b.sub.q], [alpha] + [beta]; X). (22)

We can establish (21) and (22) by expanding [sub.p][F.sub.q] in series form by using (10) and integrating term by term by applying (19) and (20) and finally summing the resulting series.

Note that the series expansions for [sub.1][F.sub.1] and [sub.2][F.sub.1] given in (13) and (14) can be obtained by expanding etr(XR) and det[([I.sub.m] - XR).sup.-b], [parallel]XR[parallel] < 1, in (17) and (18) and integrating R using (20). Substituting X = [I.sub.m] in (18) and integrating, we obtain

[sub.2][F.sub.1] (a, b; c; [I.sub.m]) = [[GAMMA].sub.m](c)[[GAMMA].sub.m](c - a - b)/[[GAMMA].sub.m](c - a) [[GAMMA].sub.m](c - b), (23)

where Re(c - a - b) > (m - 1)/2, Re(c - a) > (m - 1)/2, Re(c - b) > (m - 1)/2, and Re(c) > (m - 1)/2. The hypergeometric function [sub.1][F.sub.1](a; c; X) satisfies Kummer's relation

[sub.1][F.sub.1] (a; c; -X) = etr (-X) [sub.1][F.sub.1] (c - a; c; X). (24)
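The scalar (m = 1) cases of the Gauss summation formula (23) and Kummer's relation (24) can be checked numerically. The sketch below assumes SciPy's `hyp1f1` and `hyp2f1`; the parameter values are arbitrary test points of ours.

```python
# Scalar (m = 1) checks of identities (23) and (24).
import numpy as np
from scipy.special import gamma, hyp1f1, hyp2f1

# (23): 2F1(a, b; c; 1) = Gamma(c)Gamma(c - a - b)/(Gamma(c - a)Gamma(c - b)),
# valid here since c - a - b > 0.
a, b, c = 1.3, 0.7, 4.2
lhs = hyp2f1(a, b, c, 1.0)
rhs = gamma(c) * gamma(c - a - b) / (gamma(c - a) * gamma(c - b))
assert abs(lhs - rhs) < 1e-10

# (24): 1F1(a; c; -x) = exp(-x) 1F1(c - a; c; x).
for x in [0.3, 1.0, 4.7]:
    assert abs(hyp1f1(a, c, -x) - np.exp(-x) * hyp1f1(c - a, c, x)) < 1e-10
print("identities (23) and (24) verified for m = 1")
```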

For properties and further results on these functions the reader is referred to Constantine , James , Muirhead , and Gupta and Nagar . The numerical computation of a hypergeometric function of matrix argument is very difficult. However, some numerical methods have been proposed in recent years; see Hashiguchi et al.  and Koev and Edelman .

The generalized hypergeometric function with m x m complex symmetric matrices X and Y is defined by

[sub.p][F.sup.(m).sub.q]([a.sub.1], ..., [a.sub.p]; [b.sub.1], ..., [b.sub.q]; X, Y) = [[infinity].summation over (k=0)] [summation over ([kappa][??]k)] [([a.sub.1]).sub.[kappa]] ... [([a.sub.p]).sub.[kappa]]/[([b.sub.1]).sub.[kappa]] ... [([b.sub.q]).sub.[kappa]] [C.sub.[kappa]](X)[C.sub.[kappa]](Y)/[C.sub.[kappa]]([I.sub.m])k!. (25)

It is clear from the above definition that the order of X and Y is unimportant; that is,

[sub.p][F.sup.(m).sub.q] ([a.sub.1], ..., [a.sub.p]; [b.sub.1], ..., [b.sub.q]; X, Y) = [sub.p][F.sup.(m).sub.q] ([a.sub.1], ..., [a.sub.p]; [b.sub.1], ..., [b.sub.q]; Y, X). (26)

Also, if one of the argument matrices is the identity, this function reduces to the one argument function. Further, the two-matrix argument function [sub.p][F.sup.(m).sub.q] can be obtained from the one-matrix function [sub.p][F.sub.q] by averaging over the orthogonal group O(m) using a result given in James [6, Equation 23]; namely,

[[integral].sub.O(m)] [C.sub.[kappa]] (XHYH') (dH) = [C.sub.[kappa]](X)[C.sub.[kappa]](Y)/[C.sub.[kappa]]([I.sub.m]), (27)

where (dH) denotes the normalized invariant measure on O(m). That is,

[sub.p][F.sup.(m).sub.q]([a.sub.1], ..., [a.sub.p]; [b.sub.1], ..., [b.sub.q]; X, Y) = [[integral].sub.O(m)] [sub.p][F.sub.q]([a.sub.1], ..., [a.sub.p]; [b.sub.1], ..., [b.sub.q]; XHYH') (dH), (28)

given in James [6, Equation 30].

Finally, we define the inverted matrix variate gamma, matrix variate beta type 1, and matrix variate beta type 2 distributions. These definitions can be found in Gupta and Nagar  and Iranmanesh et al. .

Definition 1. An m x m random symmetric positive definite matrix X is said to have an inverted matrix variate gamma distribution with parameters [mu], [theta], and [PSI], denoted by X ~ InvGa(m, [mu], [theta], [PSI]), if its p.d.f. is given by

det [(X).sup.-[mu]-(m+1)/2] etr (-[[PSI].sup.-1][X.sup. -1]/[theta])/det[([theta][PSI]).sup.[mu]][[GAMMA].sub.m]([mu]), X > 0, (29)

where [mu] > (m - 1)/2, [theta] > 0, and [PSI] is a symmetric positive definite matrix of order m.

Definition 2. An m x m random symmetric positive definite matrix U is said to have a matrix variate beta type 1 distribution with parameters a(> (m - 1)/2) and b(> (m - 1)/2), denoted as U ~ B1(m, a, b), if its p.d.f. is given by

det[(U).sup.a-(m+1)/2]det [([I.sub.m] - U).sup.b-(m+1)/2]/[B.sub.m](a, b), 0 < U < [I.sub.m]. (30)

Definition 3. An m x m random symmetric positive definite matrix V is said to have a matrix variate beta type 2 distribution with parameters a(> (m - 1)/2) and b(> (m - 1)/2), denoted as V ~ B2(m, a, b), if its p.d.f. is given by

det [(V).sup.a-(m+1)/2] det [([I.sub.m] + V).sup.-(a+b)]/[B.sub.m](a, b), V > 0. (31)

Note that if U ~ B1(m, a, b), then [([I.sub.m] - U).sup.-1]U ~ B2(m, a, b). Further, if [X.sub.1] and [X.sub.2] are independent, [X.sub.1] ~ Ga(m, [v.sub.1], [theta]) and [X.sub.2] ~ Ga(m, [v.sub.2], [theta]), then [([X.sub.1] + [X.sub.2]).sup.-1/2][X.sub.1][([X.sub.1] + [X.sub.2]).sup.-1/2] ~ B1(m, [v.sub.1], [v.sub.2]) and [X.sup.-1/2.sub.2][X.sub.1][X.sup.-1/2.sub.2] ~ B2(m, [v.sub.1], [v.sub.2]).
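In the scalar case (m = 1) the relation between the two beta types can be checked directly: if U ~ B1(a, b), then U/(1 - U) ~ B2(a, b), which amounts to the cdf identity P(U/(1 - U) <= t) = P(U <= t/(1 + t)). A sketch assuming SciPy's beta and betaprime families, with illustrative parameters:

```python
# Scalar (m = 1) check: U ~ B1(a, b) implies U/(1 - U) ~ B2(a, b),
# verified through the cdf identity P(U/(1-U) <= t) = P(U <= t/(1+t)).
from scipy.stats import beta, betaprime

a, b = 2.5, 3.0  # illustrative parameters
for t in [0.1, 0.5, 2.0, 10.0]:
    assert abs(betaprime.cdf(t, a, b) - beta.cdf(t / (1 + t), a, b)) < 1e-9
print("B1 -> B2 relation verified for m = 1")
```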

We conclude this section by evaluating the normalizing constant C(v, [alpha], [beta], [theta], [OMEGA]) in (3). Since the density over its support set integrates to one, we have

[[C(v, [alpha], [beta], [theta], [OMEGA])].sup.-1] = [[integral].sub.X>0] det [(X).sup.v-(m+1)/2] [sub.1][F.sub.1]([alpha]; [beta]; -1/[theta] [[OMEGA].sup. -1]X) dX. (32)

By rewriting [sub.1][F.sub.1] using Kummer's relation (24) and integrating X by applying (21), we get

[[C(v, [alpha], [beta], [theta], [OMEGA])].sup.-1] = [[GAMMA].sub.m](v) det [([theta][OMEGA]).sup.v] [sub.2][F.sub.1](v, [beta] - [alpha]; [beta]; [I.sub.m]), (33)

where Re(v) > (m - 1)/2. Finally, writing [sub.2][F.sub.1](v, [beta] - [alpha]; [beta]; [I.sub.m]) in terms of multivariate gamma functions by using (23), we obtain

C(v, [alpha], [beta], [theta], [OMEGA]) = [[GAMMA].sub.m]([alpha])[[GAMMA].sub.m]([beta] - v) det [([theta][OMEGA]).sup.-v]/[[GAMMA].sub.m](v)[[GAMMA].sub.m]([beta])[[GAMMA].sub.m]([alpha] - v), (34)

where Re([beta] - v) > (m - 1)/2, Re([alpha] - v) > (m - 1)/2, [theta] > 0, Re(v) > (m - 1)/2, and [OMEGA] > 0.

3. Properties

In this section we study several properties of the confluent hypergeometric function kind 1 distribution defined in Section 1. For the sake of completeness, we first state the following results established in Gupta and Nagar .

(1) Let X ~ C[H.sub.m](v, [alpha], [beta], [theta], [OMEGA], kind 1) and let A be an m x m constant nonsingular matrix. Then, AXA' ~ C[H.sub.m](v, [alpha], [beta], [theta], A[OMEGA]A', kind 1).

(2) Let X ~ C[H.sub.m](v, [alpha], [beta], [theta], kind 1) and let H be an m x m orthogonal matrix whose elements are either constants or random variables distributed independently of X. Then, the distribution of X is invariant under the transformation X [right arrow] HXH' if H is a matrix of constants. Further, if H is a random matrix, then H and HXH' are independent.

(3) Let X ~ C[H.sub.m](v, [alpha], [beta], [theta], [OMEGA], kind 1). Then, the cumulative distribution function (cdf) of X is derived as

P(X < [LAMBDA]) = C(v, [alpha], [beta], [theta], [OMEGA]) [B.sub.m](v, (m + 1)/2) det [([LAMBDA]).sup.v] [sub.2][F.sub.2]([alpha], v; [beta], v + (m + 1)/2; -1/[theta] [[LAMBDA].sup.1/2][[OMEGA].sup.-1][[LAMBDA].sup.1/2]), (35)

where [LAMBDA] > 0.

(4) Let X be partitioned as X = [[X.sub.11] [X.sub.12] / [X.sub.21] [X.sub.22]], where [X.sub.11] is a q x q matrix.

Define [X.sub.11x2] = [X.sub.11] - [X.sub.12][X.sup.-1.sub.22][X.sub.21] and [X.sub.22x1] = [X.sub.22] - [X.sub.21][X.sup.-1.sub.11][X.sub.12]. If X ~ C[H.sub.m](v, [alpha], [beta], [theta], kind 1), then (i) [X.sub.11] and [X.sub.22x1] are independent, [X.sub.11] ~ C[H.sub.q](v, [alpha] - (m - q)/2, [beta] - (m - q)/2, [theta], kind 1) and [X.sub.22x1] ~ C[H.sub.m-q](v - q/2, [alpha] - q/2, [beta] - q/2, [theta], kind 1), and (ii) [X.sub.22] and [X.sub.11x2] are independent, [X.sub.22] ~ C[H.sub.m-q](v, [alpha] - q/2, [beta] - q/2, [theta], kind 1) and [X.sub.11x2] ~ C[H.sub.q](v - (m - q)/2, [alpha] - (m - q)/2, [beta] - (m - q)/2, [theta], kind 1).

(5) Let A be a q x m constant matrix of rank q([less than or equal to] m). If X ~ C[H.sub.m](v, [alpha], [beta], [theta], kind 1), then AXA' ~ C[H.sub.q](v, [alpha] - (m - q)/2, [beta] - (m - q)/2, [theta], AA', kind 1) and [(A[X.sup.-1] A').sup.-1] ~ C[H.sub.q] (v - (m - q)/2, [alpha] - (m - q)/2, [beta] - (m - q)/2,[theta], [(AA').sup.-1], kind 1).

(6) Let X ~ C[H.sub.m](v, [alpha], [beta], [theta], kind 1) and let a be a nonzero m-dimensional column vector of constants. Then, [(a'a).sup.-1](a'Xa) ~ CH(v, [alpha] - (m - 1)/2, [beta] - (m - 1)/2, [theta], kind 1) and a'a[(a'[X.sup.-1]a).sup.-1] ~ CH(v - (m - 1)/2, [alpha] - (m - 1)/2, [beta] - (m - 1)/2, [theta], kind 1). Further, if y is an m-dimensional random vector, independent of X, and P(y [not equal to] 0) = 1, then it follows that [(y'y).sup.-1](y'Xy) ~ CH(v, [alpha] - (m - 1)/2, [beta] - (m - 1)/2, [theta], kind 1) and y'y[(y'[X.sup.-1]y).sup.-1] ~ CH(v - (m - 1)/2, [alpha] - (m - 1)/2, [beta] - (m - 1)/2, [theta], kind 1).

It may also be mentioned here that properties (1)-(6) given above are modified forms of results given in Section 8.10 of Gupta and Nagar .

If the m x m random matrices [X.sub.1] and [X.sub.2] are independent, [X.sub.1] ~ C[H.sub.m](v, [a.sub.1] + v, 2v, [theta], kind 1) and [X.sub.2] ~ C[H.sub.m](v, [a.sub.2] + v, 2v, [theta], kind 1), v = y/2 + (m + 1)/4, then Roux and van der Merwe  have shown that [X.sup.-1/2.sub.2][X.sub.1][X.sup.-1/2.sub.2] has a matrix variate beta type 2 distribution with parameters [a.sub.2] and [a.sub.1].

The matrix variate confluent hypergeometric function kind 1 distribution can be derived as the distribution of the matrix ratio of independent gamma and beta matrices. It has been shown in Gupta and Nagar  that if Y ~ Ga(m, v, [theta]) and U ~ B1(m, a, b), then [U.sup.-1/2]Y[U.sup.-1/2] ~ C[H.sub.m](v, a + v, a + b + v, [theta], kind 1).

The expected values of X and [X.sup.-1], for X ~ C[H.sub.m](v, [alpha], [beta], [theta], kind 1), can easily be obtained from the above results. For any fixed a [member of] [R.sup.m], a [not equal to] 0,

E [a' Xa/a'a] = E([v.sub.1]), (36)

where [v.sub.1] ~ CH(v, [alpha] - (m - 1)/2, [beta] - (m - 1)/2,[theta], kind 1), and

E [a'[X.sup.-1]a/a'a] = E (1/[v.sub.2]), (37)

where [v.sub.2] ~ CH(v - (m - 1)/2, [alpha] - (m - 1)/2, [beta] - (m - 1)/2, [theta], kind 1). Hence, for all a [member of] [R.sup.m],

a'E(X) a = E([v.sub.1]) a'a, a'E([X.sup.-1]) a = E(1/[v.sub.2]) a'a, (38)

which implies that

E(X) = [theta]v([beta] - v - (m + 1)/2)/([alpha] - v - (m + 1)/2) [I.sub.m], E([X.sup.-1]) = ([alpha] - v)/[theta](v - (m + 1)/2)([beta] - v) [I.sub.m]. (39)

The Laplace transform of the density of X, where X ~ C[H.sub.m](v, [alpha], [beta], [theta], kind 1), is given by

E[etr(-ZX)] = [[GAMMA].sub.m]([alpha])[[GAMMA].sub.m]([beta] - v)/[[GAMMA].sub.m]([beta])[[GAMMA].sub.m]([alpha] - v) det [([I.sub.m] + [theta]Z).sup.-v] [sub.2][F.sub.1]([beta] - [alpha], v; [beta]; [([I.sub.m] + [theta]Z).sup.-1]), (40)

where we have used (24) and (21). From the above expression, the Laplace transform of the density of X, where X ~ C[H.sub.m](v, [alpha], [beta], [theta], [OMEGA], kind 1), is derived as

E[etr(-ZX)] = [[GAMMA].sub.m]([alpha])[[GAMMA].sub.m]([beta] - v)/[[GAMMA].sub.m]([beta])[[GAMMA].sub.m]([alpha] - v) det [([I.sub.m] + [theta][[OMEGA].sup.1/2]Z[[OMEGA].sup.1/2]).sup.-v] [sub.2][F.sub.1]([beta] - [alpha], v; [beta]; [([I.sub.m] + [theta][[OMEGA].sup.1/2]Z[[OMEGA].sup.1/2]).sup.-1]). (41)

Theorem 4. Let X ~ C[H.sub.m](v, [alpha], [beta], [theta], [OMEGA], kind 1); then

E[det [(X).sup.h]] = det [([theta][OMEGA]).sup.h] [[GAMMA].sub.m]([beta] - v)[[GAMMA].sub.m](v + h)[[GAMMA].sub.m]([alpha] - v - h)/[[GAMMA].sub.m](v)[[GAMMA].sub.m]([alpha] - v)[[GAMMA].sub.m]([beta] - v - h), (42)

where Re(h + v) > (m- 1)/2, Re(h) < [alpha] - v - (m- 1)/2, and Re(h) < [beta] - v - (m - 1)/2.

Proof. From the density of X, we have

E[det [(X).sup.h]] = C(v, [alpha], [beta], [theta], [OMEGA]) [[integral].sub.X>0] det [(X).sup.v+h-(m+1)/2] [sub.1][F.sub.1]([alpha]; [beta]; -1/[theta] [[OMEGA].sup.-1]X) dX. (43)

Now, evaluating the above integral by using (34), we get

E[det [(X).sup.h]] = C(v, [alpha], [beta], [theta], [OMEGA]) [[GAMMA].sub.m](v + h)[[GAMMA].sub.m]([beta])[[GAMMA].sub.m]([alpha] - v - h) det [([theta][OMEGA]).sup.v+h]/[[GAMMA].sub.m]([alpha])[[GAMMA].sub.m]([beta] - v - h), (44)

where Re(h + v) > (m - 1)/2, Re(h) < [alpha] - v - (m - 1)/2, and Re(h) < [beta] - v - (m - 1)/2. Finally, simplifying the above expression, we get the desired result.

Corollary 5. Let x ~ CH(v, [alpha], [beta], [theta], kind 1); then

E ([x.sup.h]) = [[theta].sup.h][GAMMA]([beta] - v)[GAMMA](v + h)[GAMMA]([alpha] - v - h)/[GAMMA](v)[GAMMA]([alpha] - v)[GAMMA]([beta] - v - h), (45)

where Re(h + v) > 0, Re(h) < [alpha] - v, and Re(h) < [beta] - v.
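Corollary 5 can be checked numerically by quadrature. The sketch below assumes SciPy; the parameter values are illustrative and satisfy the stated conditions Re(h + v) > 0, Re(h) < [alpha] - v, and Re(h) < [beta] - v.

```python
# Numerical check of Corollary 5 (m = 1): E[x^h] by quadrature versus the
# closed form theta^h Gamma(beta - v)Gamma(v + h)Gamma(alpha - v - h) /
# (Gamma(v)Gamma(alpha - v)Gamma(beta - v - h)).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, hyp1f1

v, alpha, beta, theta, h = 1.5, 6.0, 7.0, 2.0, 1.0
const = gamma(alpha) * gamma(beta - v) / (
    theta**v * gamma(v) * gamma(beta) * gamma(alpha - v))
integrand = lambda x: x**h * const * x**(v - 1) * hyp1f1(alpha, beta, -x / theta)
numeric, _ = quad(integrand, 0, np.inf)
closed = theta**h * gamma(beta - v) * gamma(v + h) * gamma(alpha - v - h) / (
    gamma(v) * gamma(alpha - v) * gamma(beta - v - h))
print(numeric, closed)  # the two values agree
```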

Using (42) the mean and the variance of det(X) are derived as

E[det(X)] = det([theta][OMEGA]) [m.product over (i=1)] (v - (i - 1)/2)([beta] - v - 1 - (i - 1)/2)/([alpha] - v - 1 - (i - 1)/2), (46)

where [beta] > v + (m + 1)/2, [alpha] > v + (m + 1)/2, and

var[det(X)] = det [([theta][OMEGA]).sup.2] {[m.product over (i=1)] (v + 1 - (i - 1)/2)(v - (i - 1)/2)([beta] - v - 1 - (i - 1)/2)([beta] - v - 2 - (i - 1)/2)/([alpha] - v - 1 - (i - 1)/2)([alpha] - v - 2 - (i - 1)/2) - [[[m.product over (i=1)] (v - (i - 1)/2)([beta] - v - 1 - (i - 1)/2)/([alpha] - v - 1 - (i - 1)/2)].sup.2]}, (47)

where [beta] > v + (m + 3)/2 and [alpha] > v + (m + 3)/2. For an m x m symmetric matrix A, E[[C.sub.[kappa]](AX)] is derived as

E[[C.sub.[kappa]](AX)] = C(v, [alpha], [beta], [theta], [OMEGA]) [[integral].sub.X>0] det [(X).sup.v-(m+1)/2] [sub.1][F.sub.1]([alpha]; [beta]; -1/[theta] [[OMEGA].sup.-1]X) [C.sub.[kappa]](AX) dX. (48)

Replacing [sub.1][F.sub.1]([alpha]; [beta]; -[[OMEGA].sup.-1]X/[theta]) by its integral representation, namely,

[sub.1][F.sub.1]([alpha]; [beta]; -1/[theta] [[OMEGA].sup.-1]X) = [[GAMMA].sub.m]([beta])/[[GAMMA].sub.m]([alpha])[[GAMMA].sub.m]([beta] - [alpha]) [[integral].sub.0<Y<[I.sub.m]] etr(-1/[theta] [[OMEGA].sup.-1]XY) det [(Y).sup.[alpha]-(m+1)/2] det [([I.sub.m] - Y).sup.[beta]-[alpha]-(m+1)/2] dY, (49)

where Re([beta] - [alpha]) > (m - 1)/2 and Re([alpha]) > (m - 1)/2, one obtains

E[[C.sub.[kappa]](AX)] = C(v, [alpha], [beta], [theta], [OMEGA]) [[GAMMA].sub.m]([beta])/[[GAMMA].sub.m]([alpha])[[GAMMA].sub.m]([beta] - [alpha]) [[integral].sub.0<Y<[I.sub.m]] det [(Y).sup.[alpha]-(m+1)/2] det [([I.sub.m] - Y).sup.[beta]-[alpha]-(m+1)/2] [[integral].sub.X>0] etr(-1/[theta] [[OMEGA].sup.-1]XY) det [(X).sup.v-(m+1)/2] [C.sub.[kappa]](AX) dX dY. (50)

Now, evaluating the above integral by using (19), we obtain

E[[C.sub.[kappa]](AX)] = [[theta].sup.k][(v).sub.[kappa]] [[GAMMA].sub.m]([beta] - v)/[[GAMMA].sub.m]([alpha] - v)[[GAMMA].sub.m]([beta] - [alpha]) [[integral].sub.0<Y<[I.sub.m]] det [(Y).sup.[alpha]-v-(m+1)/2] det [([I.sub.m] - Y).sup.[beta]-[alpha]-(m+1)/2] [C.sub.[kappa]]([OMEGA][Y.sup.-1]A) dY. (51)

Finally, evaluating the integral involving Y by using (Khatri )

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], (52)

we get

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], (53)

where Re([alpha] - v) > (m - 1)/2 + [k.sub.1] and Re([beta] - v) > (m - 1)/2 + [k.sub.1]. Proceeding similarly and using the result (Khatri )

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], (54)

the expected value of [C.sub.[kappa]](A[X.sup.-1]) is derived as

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], (55)

where Re(v) > (m - 1)/2 + [k.sub.1]. Finally, evaluating the above integral using (20) we obtain

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. (56)

In the next theorem, we derive the confluent hypergeometric function kind 1 distribution using independent beta and gamma matrices.

Theorem 6. Let [X.sub.1] and [X.sub.2] be independent, [X.sub.1] ~ Ga(m, v, [theta]) and [X.sub.2] ~ B1(m, a, b). Then, [X.sup.-1/2.sub.2][X.sub.1][X.sup.-1/2.sub.2] ~ C[H.sub.m](v, a + v, a + b + v, [theta], kind 1).

Proof. See Gupta and Nagar .

Theorem 7. Let [X.sub.1] and [X.sub.2] be independent, [X.sub.1] ~ Ga(m, v, [theta]) and [X.sub.2] ~ B1(m, a, b). Then, [X.sup.1/2.sub.1][X.sup.-1.sub.2][X.sup.1/2.sub.1] ~ C[H.sub.m](v, a + v, a + b + v, [theta], kind 1).

Proof. The result follows from Theorem 6 and the fact that [X.sup.-1/2.sub.2][X.sub.1][X.sup.-1/2.sub.2] and [X.sup.1/2.sub.1][X.sup.-1.sub.2][X.sup.1/2.sub.1] have the same eigenvalues, and the matrix variate confluent hypergeometric function kind 1 distribution is orthogonally invariant.
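In the scalar case (m = 1), Theorems 6 and 7 both say that Y/U ~ CH(v, a + v, a + b + v, [theta], kind 1) for independent Y ~ Ga(v, [theta]) and U ~ B1(a, b). A Monte Carlo sketch (assuming NumPy; sample size and parameter values are ours) compares the sample mean of Y/U with the mean implied by Corollary 5 with h = 1, namely [theta]v([beta] - v - 1)/([alpha] - v - 1) for [alpha] = a + v and [beta] = a + b + v:

```python
# Monte Carlo sketch (m = 1) of Theorems 6 and 7: Y/U should have mean
# theta*v*(beta - v - 1)/(alpha - v - 1) = theta*v*(a + b - 1)/(a - 1).
import numpy as np

rng = np.random.default_rng(0)
v, theta, a, b = 2.0, 1.5, 3.0, 2.0
n = 400_000
y = rng.gamma(shape=v, scale=theta, size=n)  # Ga(v, theta)
u = rng.beta(a, b, size=n)                   # B1(a, b)
sample_mean = np.mean(y / u)
implied = theta * v * (a + b - 1) / (a - 1)  # = 6.0 for these parameters
print(sample_mean, implied)
```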

Theorem 8. Let [X.sub.1] and [X.sub.2] be independent, [X.sub.1] ~ Ga(m, v, [theta]) and [X.sub.2] ~ B1(m, a, b). Then, [([I.sub.m] - [X.sub.2]).sup.-1/2][X.sub.1][([I.sub.m] - [X.sub.2]).sup.-1/2] ~ C[H.sub.m](v, b + v, a + b + v, [theta], kind 1).

Proof. Noting that [I.sub.m] - [X.sub.2] ~ B1(m, b, a) and using Theorem 6 we get the result.

Theorem 9. Let [X.sub.1] and [X.sub.2] be independent, [X.sub.1] ~ Ga(m, v, [theta]) and [X.sub.2] ~ B2(m, a, b). Then, [([I.sub.m] + [X.sub.2]).sup.1/2][X.sub.1][([I.sub.m] + [X.sub.2]).sup.1/2] ~ C[H.sub.m](v, b + v, a + b + v, [theta], kind 1).

Proof. The desired result is obtained by observing that [([I.sub.m] + [X.sub.2]).sup.-1] ~ B1(m, b, a) and using Theorem 6.

Theorem 10. Let [X.sub.1] and [X.sub.2] be independent, [X.sub.1] ~ Ga(m, v, [theta]) and [X.sub.2] ~ B2(m, a, b). Then, [([I.sub.m] + [X.sup.-1.sub.2]).sup.1/2][X.sub.1][([I.sub.m] + [X.sup.-1.sub.2]).sup.1/2] ~ C[H.sub.m](v, a + v, a + b + v, [theta], kind 1).

Proof. Noting that [([I.sub.m] + [X.sup.-1.sub.2]).sup.-1] ~ B1(m, a, b) and using Theorem 6 we get the result.

Theorem 11. Let [X.sub.1] and [X.sub.2] be independent, [X.sub.1] ~ Ga(m, [v.sub.1], [theta]) and [X.sub.2] ~ Ga(m, [v.sub.2], [theta]). Then, ([X.sub.1] + [X.sub.2])[X.sup.-1.sub.1]([X.sub.1] + [X.sub.2]) ~ C[H.sub.m]([v.sub.1] + [v.sub.2], 2[v.sub.1] + [v.sub.2], 2[v.sub.1] + 2[v.sub.2], [theta], kind 1).

Proof. It is well known that [X.sub.1] + [X.sub.2] and [([X.sub.1] + [X.sub.2]).sup.-1/2][X.sub.1][([X.sub.1] + [X.sub.2]).sup.-1/2] are independent, [X.sub.1] + [X.sub.2] ~ Ga(m, [v.sub.1] + [v.sub.2], [theta]) and [([X.sub.1] + [X.sub.2]).sup.-1/2][X.sub.1][([X.sub.1] + [X.sub.2]).sup.-1/2] ~ B1(m, [v.sub.1], [v.sub.2]). Therefore, using Theorem 7, ([X.sub.1] + [X.sub.2])[X.sup.-1.sub.1]([X.sub.1] + [X.sub.2]) ~ C[H.sub.m]([v.sub.1] + [v.sub.2], 2[v.sub.1] + [v.sub.2], 2[v.sub.1] + 2[v.sub.2], [theta], kind 1).

Theorem 12. Let [X.sub.1] and [X.sub.2] be independent, [X.sub.1] ~ Ga(m, [v.sub.1], [theta]) and [X.sub.2] ~ Ga(m, [v.sub.2], [theta]). Then, ([X.sub.1] + [X.sub.2])[X.sup.-1.sub.2]([X.sub.1] + [X.sub.2]) ~ C[H.sub.m]([v.sub.1] + [v.sub.2], [v.sub.1] + 2[v.sub.2], 2[v.sub.1] + 2[v.sub.2], [theta], kind 1).

Proof. The proof is similar to the proof of Theorem 11.

4. Distributions of Sum and Quotients

In statistical distribution theory it is well known that if [X.sub.1] and [X.sub.2] are independent, [X.sub.1] ~ Ga(m, [v.sub.1], [theta]) and [X.sub.2] ~ Ga(m, [v.sub.2], [theta]), then [X.sup.-1/2.sub.2][X.sub.1][X.sup.-1/2.sub.2] ~ B2(m, [v.sub.1], [v.sub.2]), [([X.sub.1] + [X.sub.2]).sup.-1/2][X.sub.1][([X.sub.1] + [X.sub.2]).sup.-1/2] ~ B1(m, [v.sub.1], [v.sub.2]), and [X.sub.1] + [X.sub.2] ~ Ga(m, [v.sub.1] + [v.sub.2], [theta]). In this section we derive similar results when [X.sub.1] and [X.sub.2] are independent confluent hypergeometric function kind 1 and gamma matrices, respectively.
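The classical facts recalled above can be illustrated by simulation in the scalar case (m = 1), where the quotients reduce to beta type 2 and beta type 1 variables. A sketch assuming NumPy and SciPy, with our choice of parameters, compares empirical cdfs with the exact ones:

```python
# Monte Carlo sketch (m = 1): for independent X1 ~ Ga(v1, theta) and
# X2 ~ Ga(v2, theta), X1/X2 ~ B2(v1, v2) and X1/(X1 + X2) ~ B1(v1, v2).
import numpy as np
from scipy.stats import beta, betaprime

rng = np.random.default_rng(1)
v1, v2, theta = 2.0, 3.0, 0.7
n = 200_000
x1 = rng.gamma(v1, theta, size=n)
x2 = rng.gamma(v2, theta, size=n)
for t in [0.5, 1.0, 2.0]:
    assert abs(np.mean(x1 / x2 <= t) - betaprime.cdf(t, v1, v2)) < 0.01
for t in [0.2, 0.4, 0.6]:
    assert abs(np.mean(x1 / (x1 + x2) <= t) - beta.cdf(t, v1, v2)) < 0.01
print("gamma quotient checks passed (m = 1)")
```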

Theorem 13. Let [X.sub.1] and [X.sub.2] be independent, [X.sub.1] ~ C[H.sub.m]([v.sub.1], [[alpha].sub.1], [[beta].sub.1], [theta], kind 1) and [X.sub.2] ~ Ga(m, [v.sub.2], [theta]). Then, the p.d.f. of Z = [X.sup.-1/2.sub.2][X.sub.1][X.sup.-1/2.sub.2] is given by

[[GAMMA].sub.m]([[alpha].sub.1])[[GAMMA].sub.m]([[beta].sub.1] - [v.sub.1])[[GAMMA].sub.m]([v.sub.1] + [v.sub.2])/[[GAMMA].sub.m]([v.sub.1])[[GAMMA].sub.m]([[beta].sub.1])[[GAMMA].sub.m]([[alpha].sub.1] - [v.sub.1])[[GAMMA].sub.m]([v.sub.2]) det [(Z).sup.[v.sub.1]-(m+1)/2] det [([I.sub.m] + Z).sup.-([v.sub.1]+[v.sub.2])] [sub.2][F.sub.1]([[beta].sub.1] - [[alpha].sub.1], [v.sub.1] + [v.sub.2]; [[beta].sub.1]; [([I.sub.m] + Z).sup.-1]Z), Z > 0. (57)

Proof. Using the independence, the joint p.d.f. of [X.sub.1] and [X.sub.2] is given by

K det [([X.sub.1]).sup.[v.sub.1]-(m+1)/2] [sub.1][F.sub.1]([[alpha].sub.1]; [[beta].sub.1]; -[X.sub.1]/[theta]) det [([X.sub.2]).sup.[v.sub.2]-(m+1)/2] etr(-[X.sub.2]/[theta]), [X.sub.1] > 0, [X.sub.2] > 0, (58)

where

K = [[GAMMA].sub.m]([[alpha].sub.1])[[GAMMA].sub.m]([[beta].sub.1] - [v.sub.1])/[[theta].sup.m([v.sub.1]+[v.sub.2])][[GAMMA].sub.m]([v.sub.1])[[GAMMA].sub.m]([[beta].sub.1])[[GAMMA].sub.m]([[alpha].sub.1] - [v.sub.1])[[GAMMA].sub.m]([v.sub.2]). (59)

Making the transformation Z = [X.sup.-1/2.sub.2][X.sub.1][X.sup.-1/2.sub.2], [X.sub.2] = [X.sub.2], with the Jacobian J([X.sub.1], [X.sub.2] [right arrow] Z, [X.sub.2]) = det[([X.sub.2]).sup.(m+1)/2] we obtain the joint p.d.f. of Z and [X.sub.2] as

K det [(Z).sup.[v.sub.1]-(m+1)/2] det [([X.sub.2]).sup.[v.sub.1]+[v.sub.2]-(m+1)/2] etr(-[X.sub.2]/[theta]) [sub.1][F.sub.1]([[alpha].sub.1]; [[beta].sub.1]; -[X.sup.1/2.sub.2]Z[X.sup.1/2.sub.2]/[theta]). (60)

Now, rewriting [sub.1][F.sub.1] in (60) using Kummer's relation (24), integrating [X.sub.2] by applying (21), and substituting for K, we obtain the desired result.

Corollary 14. Let [X.sub.1] and [X.sub.2] be independent, [X.sub.1] ~ C[H.sub.m]([v.sub.1], [[alpha].sub.1], [[beta].sub.1], [theta], kind 1) and [X.sub.2] ~ Ga(m, [v.sub.2], [theta]). Then, the p.d.f. of [Z.sub.1] = [X.sup.1/2.sub.2][X.sup.-1.sub.1][X.sup.1/2.sub.2] is given by

[[GAMMA].sub.m]([[alpha].sub.1])[[GAMMA].sub.m]([[beta].sub.1] - [v.sub.1])[[GAMMA].sub.m]([v.sub.1] + [v.sub.2])/[[GAMMA].sub.m]([v.sub.1])[[GAMMA].sub.m]([[beta].sub.1])[[GAMMA].sub.m]([[alpha].sub.1] - [v.sub.1])[[GAMMA].sub.m]([v.sub.2]) det [([Z.sub.1]).sup.[v.sub.2]-(m+1)/2] det [([I.sub.m] + [Z.sub.1]).sup.-([v.sub.1]+[v.sub.2])] [sub.2][F.sub.1]([[beta].sub.1] - [[alpha].sub.1], [v.sub.1] + [v.sub.2]; [[beta].sub.1]; [([I.sub.m] + [Z.sub.1]).sup.-1]), [Z.sub.1] > 0. (61)

Corollary 15. Let [X.sub.1] and [X.sub.2] be independent, [X.sub.1] ~ Ga(m, [v.sub.1], [theta]) and [X.sub.2] ~ C[H.sub.m]([v.sub.2], [[alpha].sub.2], [[beta].sub.2], [theta], kind 1). Then, the p.d.f. of [Z.sub.3] = [X.sup.-1/2.sub.2][X.sub.1][X.sup.-1/2.sub.2] is given by

[[GAMMA].sub.m]([[alpha].sub.2])[[GAMMA].sub.m]([[beta].sub.2] - [v.sub.2])[[GAMMA].sub.m]([v.sub.1] + [v.sub.2])/[[GAMMA].sub.m]([v.sub.2])[[GAMMA].sub.m]([[beta].sub.2])[[GAMMA].sub.m]([[alpha].sub.2] - [v.sub.2])[[GAMMA].sub.m]([v.sub.1]) det [([Z.sub.3]).sup.[v.sub.1]-(m+1)/2] det [([I.sub.m] + [Z.sub.3]).sup.-([v.sub.1]+[v.sub.2])] [sub.2][F.sub.1]([[beta].sub.2] - [[alpha].sub.2], [v.sub.1] + [v.sub.2]; [[beta].sub.2]; [([I.sub.m] + [Z.sub.3]).sup.-1]), [Z.sub.3] > 0. (62)

Proof. Interchanging subscripts 1 and 2 in Corollary 14, the p.d.f. of [Z.sub.2] = [X.sup.1/2.sub.1][X.sup.-1.sub.2][X.sup.1/2.sub.1] is given by

[[GAMMA].sub.m]([[alpha].sub.2])[[GAMMA].sub.m]([[beta].sub.2] - [v.sub.2])[[GAMMA].sub.m]([v.sub.1] + [v.sub.2])/[[GAMMA].sub.m]([v.sub.2])[[GAMMA].sub.m]([[beta].sub.2])[[GAMMA].sub.m]([[alpha].sub.2] - [v.sub.2])[[GAMMA].sub.m]([v.sub.1]) det [([Z.sub.2]).sup.[v.sub.1]-(m+1)/2] det [([I.sub.m] + [Z.sub.2]).sup.-([v.sub.1]+[v.sub.2])] [sub.2][F.sub.1]([[beta].sub.2] - [[alpha].sub.2], [v.sub.1] + [v.sub.2]; [[beta].sub.2]; [([I.sub.m] + [Z.sub.2]).sup.-1]), [Z.sub.2] > 0, (63)

where [X.sub.1] ~ Ga(m, [v.sub.1], [theta]) and [X.sub.2] ~ C[H.sub.m]([v.sub.2], [[alpha].sub.2], [[beta].sub.2], [theta], kind 1). Now, the result follows from the fact that [Z.sub.2] = [X.sup.1/2.sub.1][X.sup.-1.sub.2][X.sup.1/2.sub.1] and [Z.sub.3] = [X.sup.-1/2.sub.2][X.sub.1][X.sup.-1/2.sub.2] have the same eigenvalues, and the distribution of [Z.sub.2] is orthogonally invariant.

Corollary 16. Let [X.sub.1] and [X.sub.2] be independent, [X.sub.1] ~ C[H.sub.m]([v.sub.1], [[alpha].sub.1], [v.sub.1] + [v.sub.2], [theta], kind 1) and [X.sub.2] ~ Ga(m, [v.sub.2], [theta]). Then, [X.sup.-1/2.sub.2][X.sub.1][X.sup.-1/2.sub.2] ~ B2(m, [v.sub.1], [[alpha].sub.1] - [v.sub.1]).

Corollary 17. Let the random matrices [X.sub.1] and [X.sub.2] be independent, [X.sub.1] ~ Ga(m, [v.sub.1], [theta]) and [X.sub.2] ~ C[H.sub.m]([v.sub.2], [[alpha].sub.2], [v.sub.2] + [v.sub.1], [theta], kind 1). Then, [X.sup.-1/2.sub.2][X.sub.1][X.sup.-1/2.sub.2] ~ B2(m, [[alpha].sub.2] - [v.sub.2], [v.sub.2]).

Corollary 18. Let [X.sub.1] and [X.sub.2] be independent, [X.sub.1] ~ Ga(m, [v.sub.1], [theta]) and [X.sub.2] ~ Ga(m, [v.sub.2], [theta]). Then, [X.sup.-1/2.sub.2][X.sub.1][X.sup.-1/2.sub.2] ~ B2(m, [v.sub.1], [v.sub.2]).

Theorem 19. Let [X.sub.1], [X.sub.2], and [X.sub.3] be independent, [X.sub.1] ~ Ga(m, [mu], [theta]), [X.sub.2] ~ B1(m, a, b), and [X.sub.3] ~ Ga(m, v, [theta]). Then, the p.d.f. of Z = [X.sup.-1/2.sub.3][X.sup.-1/2.sub.2][X.sub.1][X.sup.-1/2.sub.2][X.sup.-1/2.sub.3] is given by

[[GAMMA].sub.m](a + [mu])[[GAMMA].sub.m](a + b)[[GAMMA].sub.m]([mu] + v)/[[GAMMA].sub.m]([mu])[[GAMMA].sub.m](a + b + [mu])[[GAMMA].sub.m](a)[[GAMMA].sub.m](v) det [(Z).sup.[mu]-(m+1)/2] det [([I.sub.m] + Z).sup.-([mu]+v)] [sub.2][F.sub.1](b, [mu] + v; a + b + [mu]; [([I.sub.m] + Z).sup.-1]Z), Z > 0. (64)

Proof. Using the independence of [X.sub.1] and [X.sub.2] and Theorem 6, [X.sup.-1/2.sub.2][X.sub.1][X.sup.-1/2.sub.2] ~ C[H.sub.m]([mu], a + [mu], a + b + [mu], [theta], kind 1). Further, using the independence of [X.sup.-1/2.sub.2][X.sub.1][X.sup.-1/2.sub.2] and [X.sub.3] and Theorem 13, we obtain the desired result.

Corollary 20. Let [X.sub.1], [X.sub.2], and [X.sub.3] be independent, [X.sub.1] ~ Ga(m, [mu], [theta]), [X.sub.2] ~ B1(m, a, b), and [X.sub.3] ~ Ga(m, a + b, [theta]). Then, [X.sup.-1/2.sub.3][X.sup.-1/2.sub.2][X.sub.1][X.sup.-1/2.sub.2][X.sup.-1/2.sub.3] ~ B2(m, [mu], a).

Proof. For v = a + b, the p.d.f. of Z = [X.sup.-1/2.sub.3][X.sup.-1/2.sub.2][X.sub.1][X.sup.-1/2.sub.2][X.sup.-1/2.sub.3] given in the above theorem reduces to

[[GAMMA].sub.m](a + [mu])/[[GAMMA].sub.m](a)[[GAMMA].sub.m]([mu]) det [(Z).sup.[mu]-(m+1)/2]/det [([I.sub.m] + Z).sup.[mu]+a+b] [sub.2][F.sub.1](b, [mu] + a + b; a + b + [mu]; [([I.sub.m] + Z).sup.-1]Z). (65)

Now, simplifying the Gauss hypergeometric function as

[sub.2][F.sub.1] (b, [mu] + a + b; a + b + [mu]; [([I.sub.m] + Z).sup.-1] Z) = [sub.1][F.sub.0] (b; [([I.sub.m] + Z).sup.-1]Z) = det [([I.sub.m] - [([I.sub.m] + Z).sup.-1] Z).sup.-b] = det [([I.sub.m] + Z).sup.b], (66)

where we have used (16), the desired result is obtained.

Corollary 21. Let V and U be independent, V ~ B2(m, [mu], v) and U ~ B1(m, a, b). Then, the p.d.f. of Z = [U.sup.-1/2]V[U.sup.-1/2] is given by

[[GAMMA].sub.m](a + [mu])[[GAMMA].sub.m](a + b)[[GAMMA].sub.m]([mu] + v)/[[GAMMA].sub.m]([mu])[[GAMMA].sub.m](a + b + [mu])[[GAMMA].sub.m](a)[[GAMMA].sub.m](v) det [(Z).sup.[mu]-(m+1)/2] det [([I.sub.m] + Z).sup.-([mu]+v)] [sub.2][F.sub.1](b, [mu] + v; a + b + [mu]; [([I.sub.m] + Z).sup.-1]Z), Z > 0. (67)

Further, if v = a + b, then [U.sup.-1/2]V[U.sup.-1/2] ~ B2(m, [mu], a).

Proof. Observe that Z = [X.sup.-1/2.sub.3][X.sup.-1/2.sub.2][X.sub.1][X.sup.-1/2.sub.2][X.sup.-1/2.sub.3] and [X.sup.-1/2.sub.2][X.sup.-1/2.sub.3][X.sub.1][X.sup.-1/2.sub.3][X.sup.-1/2.sub.2] have the same eigenvalues and the distribution of Z is orthogonally invariant. Therefore, the random matrices [X.sup.-1/2.sub.3][X.sup.-1/2.sub.2][X.sub.1][X.sup.-1/2.sub.2][X.sup.-1/2.sub.3] and [X.sup.-1/2.sub.2][X.sup.-1/2.sub.3][X.sub.1][X.sup.-1/2.sub.3][X.sup.-1/2.sub.2] have identical distributions. Now, setting V = [X.sup.-1/2.sub.3][X.sub.1][X.sup.-1/2.sub.3] and U = [X.sub.2], where [X.sup.-1/2.sub.3][X.sub.1][X.sup.-1/2.sub.3] ~ B2(m, [mu], v) and [X.sub.2] ~ B1(m, a, b), we observe that [X.sup.-1/2.sub.3][X.sup.-1/2.sub.2][X.sub.1][X.sup.-1/2.sub.2][X.sup.-1/2.sub.3] and [U.sup.-1/2]V[U.sup.-1/2] have identical distributions.

Theorem 22. Let [X.sub.1] and [X.sub.2] be independent, [X.sub.1] ~ C[H.sub.m] ([v.sub.1], [[alpha].sub.1], [[beta].sub.1], [theta], kind 1) and [X.sub.2] ~ Ga(m, [v.sub.2], [theta]). Then, the p.d.f. of R = [([X.sub.1] + [X.sub.2]).sup.-1/2] [X.sub.1][([X.sub.1] + [X.sub.2]).sup.-1/2] is given by

[[GAMMA].sub.m]([[alpha].sub.1])[[GAMMA].sub.m]([[beta].sub.1] - [v.sub.1])[[GAMMA].sub.m]([v.sub.1] + [v.sub.2])/[[GAMMA].sub.m]([v.sub.1])[[GAMMA].sub.m]([[beta].sub.1])[[GAMMA].sub.m]([[alpha].sub.1] - [v.sub.1])[[GAMMA].sub.m]([v.sub.2]) det [(R).sup.[v.sub.1]-(m+1)/2] det [([I.sub.m] - R).sup.[v.sub.2]-(m+1)/2] [sub.2][F.sub.1]([[beta].sub.1] - [[alpha].sub.1], [v.sub.1] + [v.sub.2]; [[beta].sub.1]; R), 0 < R < [I.sub.m], (68)

and the p.d.f. of S = [X.sub.1] + [X.sub.2] is given by

[[GAMMA].sub.m]([[alpha].sub.1])[[GAMMA].sub.m]([[beta].sub.1] - [v.sub.1])/[[theta].sup.m([v.sub.1]+[v.sub.2])][[GAMMA].sub.m]([[beta].sub.1])[[GAMMA].sub.m]([[alpha].sub.1] - [v.sub.1])[[GAMMA].sub.m]([v.sub.1] + [v.sub.2]) det [(S).sup.[v.sub.1]+[v.sub.2]-(m+1)/2] etr(-S/[theta]) [sub.2][F.sub.2]([[beta].sub.1] - [[alpha].sub.1], [v.sub.1]; [[beta].sub.1], [v.sub.1] + [v.sub.2]; S/[theta]), S > 0. (69)

Proof. Substituting R = [([X.sub.1] + [X.sub.2]).sup.-1/2][X.sub.1][([X.sub.1] + [X.sub.2]).sup.-1/2] and S = [X.sub.1] + [X.sub.2] with the Jacobian J([X.sub.1], [X.sub.2] [right arrow] R, S) = det [(S).sup.(m+1)/2] in (58) and applying Kummer's relation (24), we obtain the joint p.d.f. of R and S as

K det [(R).sup.[v.sub.1]-(m+1)/2] det [([I.sub.m] - R).sup.[v.sub.2]-(m+1)/2] det [(S).sup.[v.sub.1]+[v.sub.2]-(m+1)/2] etr(-S/[theta]) [sub.1][F.sub.1]([[beta].sub.1] - [[alpha].sub.1]; [[beta].sub.1]; [S.sup.1/2]R[S.sup.1/2]/[theta]), (70)

Now, integration of S by using (21) yields the density of R. The marginal density of S is obtained by integrating R by using (22).

It may be remarked here that the density of R given in the above theorem can also be obtained from the density of Z = [X.sup.-1/2.sub.2][X.sub.1][X.sup.-1/2.sub.2] derived in Theorem 13 by making the transformation R = [([I.sub.m] + Z).sup.-1] Z.

Corollary 23. Let [X.sub.1] and [X.sub.2] be independent random matrices, [X.sub.1] ~ Ga(m, [v.sub.1], [theta]) and [X.sub.2] ~ C[H.sub.m]([v.sub.2], [[alpha].sub.2], [[beta].sub.2], [theta], kind 1).

Then, the p.d.f. of [R.sub.1] = [([X.sub.1] + [X.sub.2]).sup. -1/2][X.sub.1][([X.sub.1] + [X.sub.2]).sup.-1/2] is given by

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (71)

and the p.d.f. of [S.sub.1] = [X.sub.1] + [X.sub.2] is given by

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. (72)

Proof. Interchanging subscripts 1 and 2 in Theorem 22, the p.d.f. of [R.sub.2] = [([X.sub.1] + [X.sub.2]).sup.-1/2] [X.sub.2][([X.sub.1] + [X.sub.2]).sup.-1/2] is given by

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], (73)

where now [X.sub.1] ~ Ga(m, [v.sub.1], [theta]) and [X.sub.2] ~ C[H.sub.m]([v.sub.2], [[alpha].sub.2], [[beta].sub.2], [theta], kind 1). The desired result is now obtained by observing that [R.sub.2] = [I.sub.m] - [R.sub.1]. Similarly, the p.d.f. of [S.sub.1] is obtained by interchanging subscripts 1 and 2 in the p.d.f. of S.

Corollary 24. Let the random matrices [X.sub.1] and [X.sub.2] be independent, [X.sub.1] ~ C[H.sub.m]([v.sub.1], [[alpha].sub.1], [v.sub.1] + [v.sub.2], [theta], kind 1) and [X.sub.2] ~ Ga(m, [v.sub.2], [theta]). Then, [([X.sub.1] + [X.sub.2]).sup.-1/2][X.sub.1][([X.sub.1] + [X.sub.2]).sup.-1/2] ~ B1(m, [v.sub.1], [[alpha].sub.1] - [v.sub.1]).

Proof. The desired result is obtained by substituting [[beta].sub.1] = [v.sub.1] + [v.sub.2] in the p.d.f. of R and simplifying the resulting expression by using (16).

Corollary 25. Let the random matrices [X.sub.1] and [X.sub.2] be independent, [X.sub.1] ~ Ga(m, [v.sub.1], [theta]) and [X.sub.2] ~ C[H.sub.m]([v.sub.2], [[alpha].sub.2], [v.sub.1] + [v.sub.2], [theta], kind 1). Then, [([X.sub.1] + [X.sub.2]).sup.-1/2][X.sub.1][([X.sub.1] + [X.sub.2]).sup.-1/2] ~ B1(m, [[alpha].sub.2] - [v.sub.2], [v.sub.2]).

Proof. The desired result is obtained by substituting [[beta].sub.2] = [v.sub.1] + [v.sub.2] in the p.d.f. of [R.sub.1] and simplifying the resulting expression by using (16).

Corollary 26. Let the random matrices [X.sub.1] and [X.sub.2] be independent, [X.sub.1] ~ C[H.sub.m]([v.sub.1], [[alpha].sub.1], [[alpha].sub.1] + [v.sub.1] + [v.sub.2], [theta], kind 1) and [X.sub.2] ~ Ga(m, [v.sub.2], [theta]). Then [X.sub.1] + [X.sub.2] ~ C[H.sub.m]([v.sub.1] + [v.sub.2], [[alpha].sub.1] + [v.sub.2], [[alpha].sub.1] + [v.sub.1] + [v.sub.2], [theta], kind 1).

Proof. The result is obtained by substituting [[beta].sub.1] = [[alpha].sub.1] + [v.sub.1] + [v.sub.2] in the p.d.f. of S and simplifying the resulting expression by using (15).

Corollary 27. Let the random matrices [X.sub.1] and [X.sub.2] be independent, [X.sub.1] ~ Ga(m, [v.sub.1], [theta]) and [X.sub.2] ~ C[H.sub.m]([v.sub.2], [[alpha].sub.2], [[alpha].sub.2] + [v.sub.1] + [v.sub.2], [theta], kind 1). Then [X.sub.1] + [X.sub.2] ~ C[H.sub.m]([v.sub.1] + [v.sub.2], [[alpha].sub.2] + [v.sub.1], [[alpha].sub.2] + [v.sub.1] + [v.sub.2], [theta], kind 1).

Proof. The result is obtained by substituting [[beta].sub.2] = [[alpha].sub.2] + [v.sub.1] + [v.sub.2] in the p.d.f. of [S.sub.1] and simplifying the resulting expression by using (15).

Corollary 28. Let the random matrices [X.sub.1] and [X.sub.2] be independent, [X.sub.1] ~ Ga(m, [v.sub.1], [theta]) and [X.sub.2] ~ Ga(m, [v.sub.2], [theta]). Then, [([X.sub.1] + [X.sub.2]).sup.-1/2] [X.sub.1][([X.sub.1] + [X.sub.2]).sup.-1/2] ~ B1 (m, [v.sub.1], [v.sub.2]) and [X.sub.1] + [X.sub.2] ~ Ga(m, [v.sub.1] + [v.sub.2], [theta]).

Proof. Substitute [[beta].sub.1] = [[alpha].sub.1] in the p.d.f.s of R and S, or [[beta].sub.2] = [[alpha].sub.2] in the p.d.f.s of [R.sub.1] and [S.sub.1], and simplify the resulting expressions to get the desired results.
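The scalar (m = 1) case of Corollary 28 is the classical additivity of independent gamma variables with a common scale. A small numerical sketch, not from the paper and with SciPy assumed, checks the density of the sum by direct convolution:

```python
from scipy import stats
from scipy.integrate import quad

# m = 1 case of Corollary 28: for independent x1 ~ Ga(v1, theta) and
# x2 ~ Ga(v2, theta), x1 + x2 ~ Ga(v1 + v2, theta).  The density of the
# sum at a point s is the convolution of the two gamma densities.
v1, v2, theta, s = 2.0, 3.5, 1.5, 4.0
conv, _ = quad(
    lambda t: stats.gamma.pdf(t, v1, scale=theta)
    * stats.gamma.pdf(s - t, v2, scale=theta),
    0.0, s,
)
direct = stats.gamma.pdf(s, v1 + v2, scale=theta)
assert abs(conv - direct) < 1e-8
print("gamma additivity check passed")
```

In the same scalar setting, x1/(x1 + x2) ~ Beta(v1, v2), which is the m = 1 reading of the first statement of the corollary.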

5. Related Distributions

This section gives distributional results for the determinant of a random matrix having the confluent hypergeometric function kind 1 distribution.

In an unpublished report, Coelho et al. [13] have shown that if z is a positive random variable and E([z.sup.h]) is defined for h in some neighborhood of zero, then the moments E([z.sup.h]) uniquely identify the distribution of z. In the next theorem, we will use this result to derive the distribution of the product of two independent confluent hypergeometric function kind 1 variables.

Theorem 29. If [x.sub.1] ~ CH(v, [alpha], [beta], [theta], kind 1) and [x.sub.2] ~ CH(v + 1/2, [alpha] + 1, [beta] + 1, [theta], kind 1) are independent, then 2 [square root of ([x.sub.1][x.sub.2])] ~ CH(2v, 2[alpha], 2[beta], [theta], kind 1).

Proof. The hth moment of 2 [square root of ([x.sub.1] [x.sub.2])] is derived as

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. (74)

Now, using the duplication formula for gamma function, namely,

[GAMMA](2z) = [GAMMA](z)[GAMMA](z + 1/2)/[2.sup.1-2z][square root of ([pi])], (75)
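This identity is easy to confirm numerically; a quick check (not part of the paper) using only the Python standard library:

```python
import math

# Legendre duplication formula, as in (75):
#   Gamma(2z) = Gamma(z) * Gamma(z + 1/2) / (2^(1 - 2z) * sqrt(pi))
for z in (0.7, 1.5, 3.25):
    lhs = math.gamma(2 * z)
    rhs = math.gamma(z) * math.gamma(z + 0.5) / (2.0 ** (1 - 2 * z) * math.sqrt(math.pi))
    assert abs(lhs - rhs) < 1e-9 * abs(lhs)
print("duplication formula check passed")
```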

the hth moment of 2 [square root of ([x.sub.1][x.sub.2])] is rewritten as

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], (76)

where v > 0, [alpha] - v > 0, [beta] - v > 0, 2v > h, 2([alpha] - v) > h, and 2([beta] - v) > h.

Finally, comparison of the above expression with the one given in (45) yields the desired result.

Theorem 30. If X ~ C[H.sub.m](v, [alpha], [beta], [theta], [OMEGA], kind 1), then det([[OMEGA].sup.-1/2]X[[OMEGA].sup.-1/2]) is distributed as [[PI].sup.m.sub.i=1] [z.sub.i], where [z.sub.1], ..., [z.sub.m] are independent, [z.sub.i] ~ CH(v - (i - 1)/2, [alpha] - i + 1, [beta] - i + 1, [theta], kind 1), i = 1, ..., m.

Proof. Writing multivariate gamma functions in terms of ordinary gamma function, (42) is rewritten as

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. (77)

Now, comparing the above expression with (45), we get E[det[([[OMEGA].sup.-1/2]X[[OMEGA].sup.-1/2]).sup.h]] = [[PI].sup.m.sub.i=1] E([z.sup.h.sub.i]).

Corollary 31. If X ~ C[H.sub.2](v, [alpha], [beta], [theta], kind 1), then

2det [(X).sup.1/2] ~ CH (2v -1,2[alpha] - 2,2[beta] - 2,[theta], kind 1). (78)

Proof. For m = 2, det(X) is distributed as [z.sub.1][z.sub.2], where [z.sub.1] and [z.sub.2] are independent, [z.sub.1] ~ CH(v, [alpha], [beta], [theta], kind 1) and [z.sub.2] ~ CH(v - 1/2, [alpha] - 1, [beta] - 1, [theta], kind 1). From Theorem 29, we have 2 [square root of ([z.sub.1][z.sub.2])] ~ CH(2v - 1, 2[alpha] - 2, 2[beta] - 2, [theta], kind 1).

Corollary 32. If X ~ Ga(m, v, [theta], [OMEGA]), then det([[OMEGA].sup.-1/2]X[[OMEGA].sup.-1/2]) is distributed as [[PI].sup.m.sub.i=1] [z.sub.i], where [z.sub.1], ..., [z.sub.m] are independent, [z.sub.i] ~ Ga(v - (i - 1)/2, [theta]), i = 1, ..., m.
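Corollary 32 can be checked at the level of moments: for [OMEGA] = [I.sub.m], the hth determinant moment of X ~ Ga(m, v, [theta]) is [[theta].sup.mh][[GAMMA].sub.m](v + h)/[[GAMMA].sub.m](v), which must factor into the moments of the m independent scalar gamma variables. A numerical sketch, not from the paper, with SciPy's multigammaln assumed for log [[GAMMA].sub.m]:

```python
import math
from scipy.special import gammaln, multigammaln

# Moment check for Corollary 32 (Omega = I): for X ~ Ga(m, v, theta),
#   E[det(X)^h] = theta^(m h) * Gamma_m(v + h) / Gamma_m(v),
# while for independent z_i ~ Ga(v - (i-1)/2, theta),
#   E[(z_1 ... z_m)^h] = prod_i theta^h * Gamma(v + h - (i-1)/2) / Gamma(v - (i-1)/2).
# The two log-moments must agree.
m, v, theta, h = 3, 4.0, 2.0, 1.5
log_det_moment = m * h * math.log(theta) + multigammaln(v + h, m) - multigammaln(v, m)
log_prod_moment = sum(
    h * math.log(theta) + gammaln(v + h - (i - 1) / 2.0) - gammaln(v - (i - 1) / 2.0)
    for i in range(1, m + 1)
)
assert abs(log_det_moment - log_prod_moment) < 1e-10
print("determinant moment factorization check passed")
```

The agreement reflects the factorization of the multivariate gamma function used throughout the section.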

6. Distribution of Eigenvalues

In this section, we derive the density of the eigenvalues of a random matrix having the confluent hypergeometric function kind 1 distribution.

Theorem 33. Let A be a positive definite random matrix of order m with the p.d.f. f(A). Then, the joint p.d.f. of the eigenvalues [l.sub.1], [l.sub.2], ..., [l.sub.m] of A is given by

[[pi].sup.[m.sup.2]/2]/[[GAMMA].sub.m](m/2) [[PI].sub.i<j]([l.sub.i] - [l.sub.j]) [[integral].sub.O(m)] f(HLH')(dH), (79)

where [l.sub.1] > [l.sub.2] > ... > [l.sub.m] > 0, L = diag([l.sub.1], [l.sub.2], ..., [l.sub.m]), and (dH) is the unit invariant Haar measure on the group of orthogonal matrices O(m).

The proof of Theorem 33 and several other related results can be found in Muirhead [7].

Theorem 34. If X ~ C[H.sub.m](v, [alpha], [beta], [theta], [OMEGA], kind 1), then the joint p.d.f. of the eigenvalues [x.sub.1], [x.sub.2], ..., [x.sub.m] of X is given by

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], (80)

where 0 < [x.sub.m] < ... < [x.sub.1] < [infinity], L = diag([x.sub.1], ..., [x.sub.m]), and [sub.1][F.sub.1] is the confluent hypergeometric function of two matrix arguments.

Proof. The p.d.f. of X is given by (5). Applying Theorem 33, we obtain the joint p.d.f. of the eigenvalues [x.sub.1], ..., [x.sub.m] of X as

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. (81)

Now, using (28), we obtain the desired result.

7. A Generalized Form

In this section, we give a more general form of the matrix variate confluent hypergeometric function kind 1 distribution by introducing an additional factor etr(-[[PSI].sup.-1] X/[theta]) in the p.d.f. (5). The p.d.f. of X, in this case, is given by

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], (82)

where X > 0. We will write X ~ C[H.sub.m](v, [alpha], [beta], [theta], [OMEGA], [PSI], kind 1) if the density of X is given by (82). For [PSI] = [I.sub.m] and [theta] = 1, the above p.d.f. reduces to

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], (83)

which is a special case of the generalized hypergeometric function density defined by Roux [14].

Theorem 35. Let Z | [SIGMA] ~ InvGa(m, [mu], [delta], [[SIGMA].sup.-1]). Further, let the prior distribution of [SIGMA] be a generalized matrix variate confluent hypergeometric function kind 1 distribution with parameters v, [alpha], [beta], [theta], [OMEGA], and [PSI], that is, [SIGMA] ~ C[H.sub.m](v, [alpha], [beta], [theta], [OMEGA], [PSI], kind 1). Then, the marginal distribution of Z is a generalized inverted matrix variate beta with the density

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (84)

where Z > 0.

Proof. By definition, the marginal density of Z, denoted by m(Z), is obtained as

m(Z) = [[integral].sub.[SIGMA]>0] f(Z|[SIGMA])[pi]([SIGMA]) d[SIGMA]. (85)

Now, substituting for f(Z | [SIGMA]) and [pi]([SIGMA]), we get

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. (86)

Finally, evaluating the above expression by using (21) and simplifying, we get the desired result.

Theorem 36. Let Z | [SIGMA] ~ InvGa(m, [mu], [[SIGMA].sup.-1]). Further, let the prior distribution of [SIGMA] be a generalized matrix variate confluent hypergeometric function kind 1 distribution with parameters v, [alpha], [beta], [theta], [OMEGA], and [PSI], that is, [SIGMA] ~ C[H.sub.m](v, [alpha], [beta], [theta], [OMEGA], [PSI], kind 1). Then, the posterior distribution of [SIGMA] is a generalized matrix variate confluent hypergeometric function kind 1 distribution with parameters [mu] + v, [alpha], [beta], [theta], [OMEGA], and [([[PSI].sup.-1] + [Z.sup.-1]).sup.-1].

Proof. By definition and Theorem 35, we have

[pi]([SIGMA]|Z) = f(Z|[SIGMA])[pi]([SIGMA])/m(Z). (87)

Now, substituting appropriately, we get

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], (88)

which is the desired result.

From the above results it is clear that the generalized matrix variate confluent hypergeometric function kind 1 distribution is a conjugate prior in this setting. Thus, this distribution may be used as an alternative to the matrix variate gamma distribution.

http://dx.doi.org/ 10.1155/2016/2374907

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The research work of Daya K. Nagar was supported by the Sistema Universitario de Investigación, Universidad de Antioquia, through Project no. IN10164CE.

References

[1] A. K. Gupta and D. K. Nagar, Matrix Variate Distributions, Chapman & Hall/CRC Press, Boca Raton, Fla, USA, 2000.

[2] G. J. van der Merwe and J. J. J. Roux, "On a generalized matrix-variate hypergeometric distribution," South African Statistical Journal, vol. 8, pp. 49-58, 1974.

[3] J. M. Orozco-Castaneda, D. K. Nagar, and A. K. Gupta, "Generalized bivariate beta distributions involving Appell's hypergeometric function of the second kind," Computers and Mathematics with Applications, vol. 64, no. 8, pp. 2507-2519, 2012.

[4] Y. L. Luke, The Special Functions and Their Approximations, vol. 1, Academic Press, New York, NY, USA, 1969.

[5] A. G. Constantine, "Some non-central distribution problems in multivariate analysis," Annals of Mathematical Statistics, vol. 34, pp. 1270-1285, 1963.

[6] A. T. James, "Distributions of matrix variates and latent roots derived from normal samples," Annals of Mathematical Statistics, vol. 35, pp. 475-501, 1964.

[7] R. J. Muirhead, Aspects of Multivariate Statistical Theory, Wiley Series in Probability and Mathematical Statistics, John Wiley & Sons, New York, NY, USA, 1982.

[8] H. Hashiguchi, Y. Numata, N. Takayama, and A. Takemura, "The holonomic gradient method for the distribution function of the largest root of a Wishart matrix," Journal of Multivariate Analysis, vol. 117, pp. 296-312, 2013.

[9] P. Koev and A. Edelman, "The efficient evaluation of the hypergeometric function of a matrix argument," Mathematics of Computation, vol. 75, no. 254, pp. 833-846, 2006.

[10] A. Iranmanesh, M. Arashi, D. K. Nagar, and S. M. Tabatabaey, "On inverted matrix variate gamma distribution," Communications in Statistics-Theory and Methods, vol. 42, no. 1, pp. 28-41, 2013.

[11] J. J. Roux and G. J. van der Merwe, "Families of multivariate distributions having properties usually associated with the Wishart distribution," South African Statistical Journal, vol. 8, pp. 111-117, 1974.

[12] C. G. Khatri, "On certain distribution problems based on positive definite quadratic functions in normal vectors," Annals of Mathematical Statistics, vol. 37, pp. 468-479, 1966.

[13] C. A. Coelho, R. P. Alberto, and L. M. Grilo, "When do the moments uniquely identify a distribution," CMA 13-2005, Centro de Matemática e Aplicações, Departamento de Matemática,

[14] J. J. J. Roux, "On generalized multivariate distributions," South African Statistical Journal, vol. 5, pp. 91-100, 1971.

Arjun K. Gupta, (1) Daya K. Nagar, (2) and Luz Estela Sanchez (2)

(1) Department of Mathematics and Statistics, Bowling Green State University, Bowling Green, OH 43403-0221, USA

(2) Instituto de Matematicas, Universidad de Antioquia, Calle 67, No. 53-108, Medellin, Colombia

Correspondence should be addressed to Daya K. Nagar; dayaknagar@yahoo.com

Received 3 September 2015; Accepted 15 December 2015

Research Article, Journal of Probability and Statistics, January 1, 2016.