
Iterative method for the solution of the linear matrix equation AX[B.sup.T] + CY[D.sup.T] = F.

[section]1. Introduction

Some notation will be used throughout. Let [R.sup.m x n] be the set of all m x n real matrices, and let O[R.sup.n x n] and SO[R.sup.n x n] be the sets of all orthogonal and symmetric orthogonal matrices in [R.sup.n x n], respectively. For a matrix A = ([a.sub.1], [a.sub.2], ..., [a.sub.n]) [member of] [R.sup.m x n] with columns [a.sub.i] [member of] [R.sup.m], i = 1, 2, ..., n, let [A.sup.T], tr(A) and R(A) denote the transpose, the trace and the column space of A, respectively. The symbol vec(*) denotes the vec operator, i.e., vec(A) = [([a.sup.T.sub.1], [a.sup.T.sub.2], ..., [a.sup.T.sub.n]).sup.T]. Define <A, B> = tr([B.sup.T]A) as the inner product of matrices A and B of the same size, which generates the Frobenius norm, i.e., [parallel]A[parallel] = [square root of (<A, A>)] = [square root of (tr([A.sup.T]A))]. Finally, A [cross product] B stands for the Kronecker product of matrices A and B.
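For readers who wish to experiment, the following MATLAB fragment (ours, not part of the original development; the matrix sizes are arbitrary) checks the identity vec(AXB) = ([B.sup.T] [cross product] A)vec(X) and the trace form of the inner product on random data.

    % Check the vec/Kronecker identity and the trace inner product on random data.
    A = randn(3,4); X = randn(4,5); B = randn(5,2);
    lhs = reshape(A*X*B, [], 1);             % vec(AXB), columns stacked
    rhs = kron(B', A) * reshape(X, [], 1);   % (B^T (x) A) vec(X)
    norm(lhs - rhs)                          % of the order of roundoff
    M = randn(4); N = randn(4);
    abs(trace(N'*M) - sum(sum(M.*N)))        % <M,N> = tr(N^T M), also ~ roundoff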

In this paper, we consider the following problems.

Problem I. Given A [member of] [R.sup.m x p], B [member of] [R.sup.n x p], C [member of] [R.sup.m x q], D [member of] [R.sup.n x q], F [member of] [R.sup.m x n], find X [member of] [R.sup.p x p], Y [member of] [R.sup.q x q] such that

AX[B.sup.T] + CY[D.sup.T] = F. (1)

Let [S.sub.E] be the solution set of matrix equation (1).

Problem II. When SE is not empty, for given [X.sub.0] [member of] [R.sup.p x p], [Y.sub.0] [member of] [R.sup.q x q], find ([??], [??]) [member of] [S.sub.E] such that

[[parallel][??] - [X.sub.0][parallel].sup.2] + [[parallel][??] - [Y.sub.0][parallel].sup.2] = [min.sub.(X,Y)[member of][S.sub.E]]([[parallel]X - [X.sub.0][parallel].sup.2] + [[parallel]Y - [Y.sub.0][parallel].sup.2]).

There have been many elegant results on the matrix equation AX[B.sup.T] + CY[D.sup.T] = F and its particular forms, with unknown matrices X and Y. For instance, the matrix equation AX - YB = C has been investigated by Baksalary and Kala [1] and by Flanders and Wimmer [2], who gave necessary and sufficient conditions for its consistency and its general solution, respectively. For the matrix equation (1), the solvability conditions and the general solution have been derived in [3, 5, 6] by means of the g-inverse, the singular value decomposition (SVD), the generalized SVD (GSVD) [4] and the canonical correlation decomposition (CCD) of matrix pairs, respectively. In particular, the symmetric solution of the matrix equation AX[A.sup.T] + BY[B.sup.T] = C has been represented in [7] by the GSVD. In addition, for particular forms of matrix equation (1), Peng et al. [8, 9] obtained the symmetric solution and the least-squares symmetric solution of AXB = C, respectively, by iterative methods based on the classical conjugate gradient method, and Y. X. Peng et al. [10] considered the symmetric solution of the pair of simultaneous linear matrix equations [A.sub.1]X[B.sub.1] = [C.sub.1], [A.sub.2]X[B.sub.2] = [C.sub.2]. Moreover, the general solutions of the matrix equations AXB + C[X.sup.T]D = E and AXB + CYD = F have also been studied by iterative methods in [11, 12]. For any initial iterative matrix, it was shown there that the associated solutions can be obtained within finitely many iteration steps in the absence of roundoff errors.

The optimal approximation Problem II occurs frequently in experimental design, see for instance [13, 14]. Here the matrix pair ([X.sub.0], [Y.sub.0]) may be obtained from experiments and usually satisfies neither the required structure nor the minimum residual requirement, whereas the pair ([??], [??]) satisfies both.

Motivated by the iterative methods presented in [8-11], in this paper we reconsider the above two problems by an approach different from that of [12]. By choosing an arbitrary orthogonal matrix U, Problem I is transformed equivalently into solving the matrix equation MZ[N.sup.T] = F under the constraint PZP = Z, where M = (A, C)U, N = (B, D)U, and P, determined by U, is a given symmetric orthogonal matrix. In this way, Problem I and Problem II are transformed equivalently into Problems A and B (stated below), respectively. We solve Problems A and B by establishing an iterative algorithm, from which the solutions of Problems I and II are obtained naturally.

We need the following concept (see [15] for details).

Definition 1.1. Given matrix P [member of] [SOR.sup.n x n], we say that a matrix A [member of] [R.sup.n x n] is generalized centro-symmetric (or generalized central anti-symmetric) with respect to P, if PAP = A (or PAP = - A). The set of n x n generalized centro-symmetric (or generalized central antisymmetric) matrices with respect to P is denoted by [CSR.sup.n x n.sub.p] (or [CASR.sup.n x n.sub.P]).

Obviously, if P = J, where J = ([e.sub.n], [e.sub.n-1], ..., [e.sub.2], [e.sub.1]) and [e.sub.i] is the ith column of the identity matrix [I.sub.n], then generalized centro-symmetric matrices reduce to centro-symmetric matrices, which are widely applied in information theory, numerical analysis, linear estimation theory and so on (see, e.g., [16, 17, 18]).
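As a small illustration (ours, not from the paper), the following MATLAB lines construct the exchange matrix J and a centro-symmetric matrix, and verify Definition 1.1 for P = J.

    n = 5;
    J = fliplr(eye(n));          % J = (e_n, e_{n-1}, ..., e_1), symmetric orthogonal
    B = randn(n);
    A = (B + J*B*J)/2;           % symmetrization: A is centro-symmetric by construction
    norm(J*A*J - A)              % ~ 0, so A belongs to CSR^{n x n}_J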

Definition 1.2. Let M, N [member of] [R.sup.s x t], where s, t are arbitrary positive integers. We say that M and N are orthogonal to each other if tr([M.sup.T]N) = 0.

Symmetric orthogonal matrices have been discussed in many papers; they possess a special structure, which can be described as follows.

Lemma 1.1. [15] If P [member of] [SOR.sup.lxl], and p = rank(1/2([I.sub.l] + P)), then P can be expressed as

P = [U.sup.T] diag([I.sub.p], -[I.sub.q])U, (2)

where q = l - p, U [member of] O[R.sup.lxl].

Employing Lemma 1.1 and Definition 1.1, we get the following assertion.

Lemma 1.2. Let P be a symmetric orthogonal matrix of the form (2). Then A [member of] [CSR.sup.l x l.sub.P] if and only if

A = [U.sup.T] diag(X, Y)U,

where X [member of] [R.sup.p x p], Y [member of] [R.sup.q x q] are arbitrary.
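The following MATLAB sketch (ours, assuming the block form reconstructed in (2) and Lemma 1.2) generates a symmetric orthogonal P from a random orthogonal U and checks that Z = [U.sup.T] diag(X, Y)U satisfies PZP = Z.

    p = 3; q = 2; l = p + q;
    [U, ~] = qr(randn(l));                       % a random orthogonal matrix U
    P = U' * blkdiag(eye(p), -eye(q)) * U;       % symmetric orthogonal P as in (2)
    X = randn(p); Y = randn(q);
    Z = U' * blkdiag(X, Y) * U;                  % generalized centro-symmetric w.r.t. P
    norm(P*Z*P - Z)                              % ~ 0, confirming Lemma 1.2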

For an arbitrary orthogonal matrix U [member of] [R.sup.l x l], let the symmetric orthogonal matrix P in Lemma 1.1 satisfy p = rank(1/2([I.sub.l] + P)) (the order of the unknown matrix X in (1)). Denote

M = (A C)U [member of] [R.sup.m x l], N = (B D)U [member of] [R.sup.n x l],

then

MZ[N.sup.T] = AX[B.sup.T] + CY[D.sup.T] with Z = [U.sup.T] diag(X, Y)U,

where A,B,C,D are the matrices mentioned in Problem I, and X [member of] [R.sup.p x p], Y [member of] [R.sup.q x q]. It is easy to verify that the solvability of matrix equation (1) is equivalent to that of the constrained matrix equation

[MZN.sup.T] = F, Z [member of] [CSR.sup.l x l.sub.P]. (3)
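Before proceeding, a small numerical check of this equivalence may be helpful (a MATLAB sketch of ours; the sizes and the orthogonal matrix U are arbitrary choices).

    m = 4; n = 5; p = 3; q = 2; l = p + q;
    A = randn(m,p); B = randn(n,p); C = randn(m,q); D = randn(n,q);
    [U, ~] = qr(randn(l));
    M = [A, C]*U;  N = [B, D]*U;
    P = U' * blkdiag(eye(p), -eye(q)) * U;
    X = randn(p); Y = randn(q);
    Z = U' * blkdiag(X, Y) * U;                  % satisfies P*Z*P = Z
    norm(M*Z*N' - (A*X*B' + C*Y*D'))             % ~ 0: equations (1) and (3) agree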

Therefore, for the above matrices M, N, F, Problem I and Problem II can be transformed equivalently into the following Problems A and B, respectively.

Problem A. Given M [member of] [R.sup.m x l], N [member of] [R.sup.n x l], F [member of] [R.sup.m x n], and P [member of] [SOR.sup.l x l], find Z [member of] [CSR.sup.l x l.sub.P] such that (3) holds.

Problem B. Let [Z.sub.0] = [U.sup.T] diag([X.sub.0], [Y.sub.0])U, where [X.sub.0], [Y.sub.0] are as in Problem II. Find [??] [member of] S such that

[parallel][??] - [Z.sub.0][parallel] = [min.sub.Z[member of]S][parallel]Z - [Z.sub.0][parallel],

where S stands for the solution set of Problem A.

This paper is organized as follows. In section 2, an iterative algorithm will be proposed to solve Problem A, and the solution to Problem I can be obtained naturally. In section 3, we will find the solution to Problem II by using this algorithm. In section 4, some numerical examples will be given to validate our results.

[section]2. The iterative method for solving Problem I

The iterative algorithm for Problem A (and hence Problem I) can be stated as follows.

Algorithm 2.1.

Step 1: Input matrices A [member of] [R.sup.m x p], B [member of] [R.sup.n x p], C [member of] [R.sup.m x q], D [member of] [R.sup.n x q], F [member of] [R.sup.m x n], and partition P [member of] [SOR.sup.l x l] as in (2). Let M = (A C)U [member of] [R.sup.m x l], N = (B D)U [member of] [R.sup.n x l]. For arbitrary matrices [X.sub.1] [member of] [R.sup.p x p], [Y.sub.1] [member of] [R.sup.q x q], denote [Z.sub.1] = [U.sup.T] diag([X.sub.1], [Y.sub.1])U.

Step 2: Calculate

[R.sub.1] = F - M[Z.sub.1][N.sup.T],

[P.sub.1] = 1/2([M.sup.T] [R.sub.1]N + P[M.sup.T] [R.sub.1]NP),

k := 1.

Step 3: Calculate

[Z.sub.k+1] = [Z.sub.k] + [[parallel][R.sub.k][parallel].sup.2]/ [[parallel][P.sub.k][parallel].sup.2] [P.sub.k].

Step 4: Calculate

[R.sub.k+1] = F - M[Z.sub.k+1][N.sup.T]

= [R.sub.k] - [[parallel][R.sub.k][parallel].sup.2]/ [[parallel][P.sub.k][parallel].sup.2] M [P.sub.k][N.sup.T],

[P.sub.k+1] = 1/2([M.sup.T] [R.sub.k+1] N + P[M.sup.T] [R.sub.k+1] NP) + [[parallel][R.sub.k+1][parallel].sup.2]/ [[parallel][R.sub.k][parallel].sup.2] [P.sub.k].

If [R.sub.k+1] = 0, or [R.sub.k+1] [not equal to] 0 and [P.sub.k+1] = 0, stop. Otherwise, let k := k + 1 and go to Step 3.
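A possible MATLAB realization of Algorithm 2.1 is sketched below; the function name cgs_axbt_cydt, the fixed choice [Z.sub.1] = 0 and the tolerance-based stopping test are our own choices and are not prescribed by the paper.

    function [X, Y, Z, res] = cgs_axbt_cydt(A, B, C, D, F, U, tol, maxit)
    % Sketch of Algorithm 2.1: solve A*X*B' + C*Y*D' = F via the
    % constrained equation M*Z*N' = F with P*Z*P = Z.
    p = size(A, 2);  q = size(C, 2);  l = p + q;
    M = [A, C]*U;  N = [B, D]*U;
    P = U' * blkdiag(eye(p), -eye(q)) * U;   % symmetric orthogonal matrix of (2)
    Z = zeros(l);                            % Z_1 = 0 yields the least-norm solution (Theorem 2.2)
    R = F - M*Z*N';                          % residual R_1
    G = M'*R*N;  Pk = (G + P*G*P)/2;         % search direction P_1
    for k = 1:maxit
        if norm(R, 'fro') < tol || norm(Pk, 'fro') < tol
            break;                           % solved, or inconsistent (Corollary 1)
        end
        alpha = norm(R, 'fro')^2 / norm(Pk, 'fro')^2;
        Z = Z + alpha*Pk;                                % Step 3
        Rnew = R - alpha*(M*Pk*N');                      % Step 4: R_{k+1}
        G = M'*Rnew*N;
        Pk = (G + P*G*P)/2 + (norm(Rnew, 'fro')^2 / norm(R, 'fro')^2) * Pk;
        R = Rnew;
    end
    res = norm(R, 'fro');
    W = U*Z*U';                              % block-diagonal when P*Z*P = Z
    X = W(1:p, 1:p);  Y = W(p+1:l, p+1:l);
    end

In exact arithmetic at most mn + 1 iterations are needed (Theorem 2.1 below), so a call such as [X, Y] = cgs_axbt_cydt(A, B, C, D, F, U, 1.0e-10, m*n + 1) would suffice.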

Remark 1. (a) It is clear that [Z.sub.i], [P.sub.i] [member of] [CSR.sup.l x l.sub.P] in Algorithm 2.1.

(b) If [R.sub.k] = 0 in the iteration process, then [Z.sub.k] is a solution of Problem A. Based on the analysis in Section 1, a solution pair ([X.sub.k], [Y.sub.k]) of Problem I can then be read off from

U[Z.sub.k][U.sup.T] = diag([X.sub.k], [Y.sub.k]).

In the sequel, we analyze the convergence of Algorithm 2.1.

Lemma 2.1. The sequences {[R.sub.i]}, {[P.sub.i]} (i = 1, 2, ...), generated by Algorithm 2.1, satisfy that

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (4)

Proof. From Lemma 1.1 and Algorithm 2.1, we obtain

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Therefore,

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII].

Thus the proof is completed.

The following conclusion is essential for our main results.

Lemma 2.2. The sequences {[R.sub.i]}, {[P.sub.i]} generated by the iterative method satisfy that

tr([R.sup.T.sub.i][R.sub.j]) = 0, tr([P.sup.T.sub.i][P.sub.j]) = 0, i, j = 1, 2, ..., k (k [greater than or equal to] 2), i [not equal to] j. (5)

Proof. We prove this conclusion by induction.

When k = 2, resorting to Lemma 1.1, Algorithm 2.1 and the proof of Lemma 2.1 yields

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], (6)

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. (7)

Assume that (5) holds for k = s, that is, tr([R.sup.T.sub.s][R.sub.j]) = 0, tr([P.sup.T.sub.s][P.sub.j]) = 0, j = 1, 2, ..., s - 1.

We can verify similarly to (6) and (7) that tr([R.sup.T.sub.s+1][R.sub.s]) = 0, and tr([P.sup.T.sub.s+1][P.sub.s]) = 0.

Now it is enough to prove that tr([R.sup.T.sub.s+1][R.sub.j]) = 0 and tr([P.sup.T.sub.s+1][P.sub.j]) = 0 for j = 1, 2, ..., s - 1.

In fact, when j = 1, the induction hypothesis and Lemma 2.1 imply that

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII],

and

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. (8)

Moreover, when 2 [less than or equal to] j [less than or equal to] s - 1, we obtain from Lemma 2.1 that

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII].

Similar to the proof of (8), we get tr([P.sup.T.sub.s+1][P.sub.j]) = 0.

Hence, the conclusion holds for all positive integers k. The proof is completed.

Lemma 2.3. Suppose that Z is an arbitrary solution of Problem A. Then the sequences {[Z.sub.k]}, {[P.sub.k]} generated by Algorithm 2.1 satisfy

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. (9)

Proof. When k = 1, from Algorithm 2.1 and Lemma 2.2, noting that Z, [Z.sub.1] [member of] [CSR.sup.l x l.sub.P], we obtain

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII].

Assume that equality (9) holds for k = s, then

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII].

Moreover,

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII].

Therefore, we complete the proof by the principle of induction.

Theorem 2.1. If Problem A is consistent, then for an arbitrary initial iterative matrix [Z.sub.1] [member of] [CSR.sup.l x l.sub.P], a solution of Problem A (and hence of matrix equation (1)) can be obtained within finitely many iteration steps.

Proof. Suppose that mn < [l.sup.2]. When Problem A is consistent (and hence Problem I is consistent), a solution can be obtained within at most mn + 1 iteration steps. In fact, if [R.sub.i] [not equal to] 0 for i = 1, 2, ..., mn, then Lemma 2.3 implies that [P.sub.i] [not equal to] 0, so [Z.sub.mn+1] and [R.sub.mn+1] can be computed by Algorithm 2.1, and they satisfy tr([R.sup.T.sub.mn+1][R.sub.i]) = 0 and tr([R.sup.T.sub.i][R.sub.j]) = 0 for i, j = 1, 2, ..., mn, i [not equal to] j. Hence the matrices [R.sub.1], [R.sub.2], ..., [R.sub.mn] form an orthogonal basis of the matrix space [R.sup.m x n], and since [R.sub.mn+1] is orthogonal to all of them, we get [R.sub.mn+1] = 0, i.e., [Z.sub.mn+1] is a solution of Problem A. Furthermore, by Remark 1(b), a solution of Problem I is obtained as well.

From Theorem 2.1, we deduce the following conclusion.

Corollary 1. Problem A (and hence Problem I) is inconsistent if and only if there exists a positive integer k such that [R.sub.k] [not equal to] 0 but [P.sub.k] = 0.

Lemma 2.4.[8] Suppose that the consistent linear system My = b has a solution [y.sub.0] [member of] R([M.sup.T]). Then [y.sub.0] is the least-norm solution of My = b.
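As a quick illustration of Lemma 2.4 (ours, using MATLAB's pinv), the least-norm solution of a consistent underdetermined system indeed lies in R([M.sup.T]):

    M = randn(3, 6);  b = M*randn(6, 1);   % a consistent underdetermined system
    y0 = pinv(M)*b;                        % pinv gives the least-norm solution
    norm(y0 - M'*((M*M')\b))               % ~ 0: y0 = M'*w for some w, so y0 is in R(M')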

Theorem 2.2. Suppose that Problem A is consistent. If the initial iterative matrix is chosen as [Z.sub.1] = [M.sup.T]HN + P[M.sup.T]HNP, where H [member of] [R.sup.m x n] is arbitrary (in particular, [Z.sub.1] = 0 [member of] [R.sup.l x l]), then the unique least-norm solution [Z.sup.*] of Problem A can be obtained by the iterative algorithm within finitely many iteration steps. Moreover, the least-norm solution pair ([X.sup.*], [Y.sup.*]) of Problem I is given by U[Z.sup.*][U.sup.T] = diag([X.sup.*], [Y.sup.*]).

Proof. By Algorithm 2.1 and Theorem 2.1, if the initial iterative matrix is [Z.sub.1] = [M.sup.T]HN + P[M.sup.T]HNP, where H is an arbitrary matrix in [R.sup.m x n], then a solution [Z.sup.*] of Problem A is obtained within finitely many iteration steps, and it has the form [Z.sup.*] = [M.sup.T]GN + P[M.sup.T]GNP for some G [member of] [R.sup.m x n]. Hence it is enough to prove that [Z.sup.*] is the least-norm solution of Problem A.

Consider the following pair of matrix equations in Z [member of] [CSR.sup.l x l.sub.P]:

MZ[N.sup.T] = F, MPZP[N.sup.T] = F.

Since PZP = Z for any Z [member of] [CSR.sup.l x l.sub.P], the solvability of this pair is equivalent to that of matrix equation (3).

Denote vec([Z.sup.*]) = [z.sup.*], vec(Z) = z, vec(F) = f and vec(G) = g. Then the above pair of matrix equations can be written equivalently as

(N [cross product] M)z = f, ((NP) [cross product] (MP))z = f. (10)

In addition, with these notations, [Z.sup.*] can be rewritten as

[z.sup.*] = vec([M.sup.T]GN + P[M.sup.T]GNP) = [(N [cross product] M).sup.T]g + [((NP) [cross product] (MP)).sup.T]g,

which shows that [z.sup.*] lies in the column space of the transpose of the coefficient matrix of (10). Lemma 2.4 then implies that [z.sup.*] is the least-norm solution of the linear system (10); consequently, [Z.sup.*] is the least-norm solution of matrix equation (3). Furthermore, by the analysis in Section 1, the matrix U[Z.sup.*][U.sup.T] must be block-diagonal with two diagonal blocks [X.sup.*] and [Y.sup.*], which constitute the least-norm solution pair ([X.sup.*], [Y.sup.*]) of Problem I.
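The vec-Kronecker structure used in this proof can be checked numerically. The following MATLAB sketch (ours, based on the system (10) as reconstructed above, with small random data) verifies that vec([Z.sup.*]) lies in the column space of the transpose of the coefficient matrix of (10).

    % Small random data; P is symmetric orthogonal as in (2).
    m = 3; n = 4; p = 2; q = 2; l = p + q;
    M = randn(m, l); N = randn(n, l);
    [U, ~] = qr(randn(l)); P = U'*blkdiag(eye(p), -eye(q))*U;
    K = [kron(N, M); kron(N*P, M*P)];      % coefficient matrix of the reconstructed (10)
    G = randn(m, n);
    Zs = M'*G*N + P*M'*G*N*P;              % the form of Z* in Theorem 2.2
    g  = reshape(G, [], 1);
    norm(reshape(Zs, [], 1) - K'*[g; g])   % ~ 0: vec(Z*) lies in R(K')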

[section] 3. The solution of Problem II

Suppose that the solution set of Problem I is not empty. For the given matrix pair ([X.sub.0], [Y.sub.0]) in Problem II, that is, for the given [Z.sub.0] = [U.sup.T] diag([X.sub.0], [Y.sub.0])U [member of] [CSR.sup.l x l.sub.P] in Problem B, if (X, Y) [member of] [S.sub.E] and Z = [U.sup.T] diag(X, Y)U, we have

AX[B.sup.T] + CY[D.sup.T] = F <=> MZ[N.sup.T] = F <=> M(Z - [Z.sub.0])[N.sup.T] = F - M[Z.sub.0][N.sup.T] <=> M[??][N.sup.T] = [??]. (11)

In the last equation, we denote [??] = Z - [Z.sub.0] and [??] = F - M[Z.sub.0][N.sup.T].

If we choose the initial iterative matrix [[??].sub.1] = [M.sup.T][??]N + P[M.sup.T][??]NP, where [??] [member of] [R.sup.m x n] is arbitrary (in particular, [[??].sub.1] = 0 [member of] [R.sup.l x l]), then the unique least-norm solution [[??].sup.*] of matrix equation (11) can be obtained by Algorithm 2.1. Furthermore, Theorem 2.2 shows that the solution pair ([??], [??]) of Problem II can be obtained from

U([[??].sup.*] + [Z.sub.0])[U.sup.T] = diag([??], [??]). (12)
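In terms of the hypothetical solver cgs_axbt_cydt sketched after Algorithm 2.1 (which uses a zero initial iterate internally), the whole procedure of this section amounts to the following MATLAB lines, assuming A, B, C, D, F, U, X0, Y0 are already available.

    % Problem II via (11)-(12): shift by Z_0, solve for the least-norm correction,
    % then shift back.
    Ftilde = F - (A*X0*B' + C*Y0*D');          % = F - M*Z_0*N', cf. (11)
    maxit  = size(F,1)*size(F,2) + 1;
    [Xt, Yt] = cgs_axbt_cydt(A, B, C, D, Ftilde, U, 1.0e-10, maxit);
    Xhat = Xt + X0;  Yhat = Yt + Y0;           % optimal approximation pair of Problem II, cf. (12)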

[section]4. Numerical examples

In this section, some numerical examples computed with MATLAB are given to illustrate our results.

Example 4.1. Input matrices A, B, C, D, F, U, for instance,

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Then, taking p = 6 in Lemma 1.1, the symmetric orthogonal matrix is P = [U.sup.T] diag([I.sub.6], -[I.sub.6])U.

However, because of roundoff errors, the residuals [R.sub.i] (i = 1, 2, ...) will in general not be exactly zero in the iteration process. Therefore, for a sufficiently small positive number [epsilon], e.g., [epsilon] = 1.0e-010, the iteration is stopped whenever [parallel][R.sub.k][parallel] < [epsilon], and [Z.sub.k] is regarded as a solution of Problem A. Meanwhile, we know from Theorem 2.2 that ([X.sub.k], [Y.sub.k]) is then a solution pair of Problem I.

Let [Z.sub.1] = 0. By Algorithm 2.1, after 102 (< 12 x 12) iteration steps we obtain the following solution of Problem I:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII],

and [parallel][R.sub.102][parallel] = 9.9412e-011 < [epsilon].

In this case, the convergence curve of the Frobenius norm of the residual F - MZ[N.sup.T] is plotted in Figure 1.

[FIGURE 1 OMITTED]

In addition, for the given matrices [X.sub.0] and [Y.sub.0] in Problem II, let [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], and write [??] = Z - [Z.sub.0] and [??] = F - M[Z.sub.0][N.sup.T]. Applying Algorithm 2.1 to matrix equation (11), we obtain its least-norm solution

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII].

Hence, it follows from equality (12) that the unique optimal solution pair ([??],[??]) of Problem II can be expressed as

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Example 4.2. Let M be the 100 x 100 matrix with entries [M.sub.ij] = -1/(i + j + 1), i, j = 1, 2, ..., 100, and let N be the 100 x 100 tri-diagonal matrix whose diagonal entries all equal 4 and whose sub- and super-diagonal entries are (-1, -1/2, ..., -1/99) and (1, 1/2, ..., 1/99), respectively. Let P be the matrix with ones on the secondary diagonal and zeros elsewhere, and let F = M[??][N.sup.T], where [??] is the 100 x 100 matrix with all entries equal to 1. Then matrix equation (3) is consistent, since [??] is one of its solutions. Using Algorithm 2.1 and iterating 350 ([much less than] 10000) steps, we plot in Figure 2 the convergence curve of the Frobenius norm of the residual F - MZ[N.sup.T].

[FIGURE 2 OMITTED]

The convergence curve in Figure 2 shows that the iterative algorithm of this paper is efficient.
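The test matrices of this example can be generated as follows (a MATLAB sketch of ours; the sign pattern of M and of the off-diagonals of N reflects our reading of the garbled source, and the resulting M, N, P, F would be fed directly into the iteration of Algorithm 2.1).

    n = 100;
    [jj, ii] = meshgrid(1:n, 1:n);               % ii is the row index, jj the column index
    M = -1 ./ (ii + jj + 1);                     % M_ij = -1/(i+j+1)
    N = diag(4*ones(n,1)) + diag(-1./(1:n-1), -1) + diag(1./(1:n-1), 1);
    P = fliplr(eye(n));                          % ones on the secondary diagonal
    Zbar = ones(n);                              % known solution: P*Zbar*P = Zbar
    F = M*Zbar*N';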

The following example illustrates the conclusion of Corollary 1.

Example 4.3. Input matrices B, D, F, U, P as in Example 4.1, let A = C = ones(6), and denote M = (A, C)U, N = (B, D)U.

We use Algorithm 2.1 to solve Problem I. With the initial iterative matrix [Z.sub.1] = 0 [member of] [R.sup.12 x 12], we get at the 7th iteration that

[parallel][R.sub.7][parallel] = 5.0321e + 003 [much greater than] [parallel][P.sub.7][parallel] = 2.9759.

From Corollary 1, we know that matrix equation (1) is inconsistent.

References

[1] J. K. Baksalary, R. Kala, The matrix equation AX - YB = C, Linear Algebra Appl., 25 (1979), 41-43.

[2] H. Flanders, H.K. Wimmer, On the matrix equation AX - XB = C and AX - YB = C, SIAM J. Appl. Math, 32(1977), 707-710.

[3] J. K. Baksalary, R. Kala, The matrix equation AXB + CYD = E, Linear Algebra Appl, 30 (1980), 141-147.

[4] K. E. Chu, Singular value and generalized singular value decompositions and the solution of linear matrix equations, Linear Algebra Appl, 88/89(1987), 83-98.

[5] C. N. He, The general solution of the matrix equation AXB + CYD = F, Acta Sci. Nat. Univ. Norm. Hunan, China, 19(1996), 17-20.

[6] G. P. Xu, M. S. Wei, D.-S. Zheng, On solutions of matrix equation AXB + CYD = F, Linear Algebra Appl, 279(1998), 93-109.

[7] X. W. Chang, J. S. Wang, The symmetric solution of the matrix equations AX + YA = C, AX[A.sup.T] + BY[B.sup.T] = C and ([A.sup.T]XA, [B.sup.T]XB) = (C, D), Linear Algebra Appl., 179(1993), 171-189.

[8] Y. X. Peng, An iterative method for the symmetric solutions and the optimal approximation solution of the linear matrix equation AXB = C, Appl. Math. Comput., 160(2005), 763-777.

[9] Z. Y. Peng, An iterative method for the least-squares symmetric solution of the linear matrix equation AXB = C, Appl. Math. Comput., 170(2005), 711-723.

[10] Y. X. Peng, X. Y. Hu and L. Zhang, An iterative method for symmetric solutions and optimal approximation solution of the system of matrix equations [A.sub.1]X[B.sub.1] = [C.sub.1], [A.sub.2]X[B.sub.2] = [C.sub.2], Appl. Math. Comput., 183(2006), No. 2, 1127-1137.

[11] M. H. Wang, X. H. Cheng, M. S. Wei, Iterative algorithms for solving the matrix equation AXB + C[X.sup.T]D = E, Appl. Math. Comput, 187(2007), No. 2, 622-629.

[12] Z. Y. Peng, Y. X. Peng, An efficient iterative method for solving the matrix equation AXB + CYD = E, Numer. Linear Algebra Appl., 13(2006), 473-485.

[13] T. Meng, Experimental design and decision support, in: Leondes (Ed.), Expert Systems, the Technology of Knowledge Management and Decision Making for the 21st Century, Vol. 1, Academic Press, 2001.

[14] A. P. Liao, Z. Z. Bai, Best approximation of matrix equation AXB + CYD = E, SIAM J. Matrix Anal. Appl, 27(2005), 675-688.

[15] M. L. Liang, C. H. You, L. F. Dai, An efficient algorithm for the generalized centro-symmetric solution of matrix equation AXB = C, Numer. Algor, 44(2007), 173-184.

[16] J. L. Chen, X. H. Chen, Special Matrix, Tsinghua Univ. Press (in Chinese), 2001.

[17] Z. Xu, K. Y. Zhang, Q. Lu, A fast algorithm for a class of Toeplitz matrix (in Chinese), Xibei Univ. Industry Press, 1999.

[18] J. R. Weaver, Centrosymmetric matrices, their basic properties, eigenvalues, and eigenvectors, Amer. Math. Monthly, 92(1985), 711-717.

Lifang Dai ([dagger]), Maolin Liang ([double dagger]) and Wansheng He (#)

College of Mathematics and Statistics, Tianshui Normal University, Tianshui, Gansu 741001, P. R. China. E-mail: liangml2005@163.com

(1) This work is supported by the Science Foundation Project of Education Department of Gansu Province (No. 0808-04) and the Science Foundation Project of Tianshui Normal University (No. TSB0819).