# Conditions for Existence, Representations, and Computation of Matrix Generalized Inverses.

1. Introduction, Motivation, and Preliminaries

Let [C.sup.mxn] and [C.sup.mxn.sub.r] (resp., [R.sup.mxn] and [R.sup.mxn.sub.r]) denote the set of all complex (resp., real) m x n matrices and the set of all complex (resp., real) m x n matrices of rank r, respectively. As usual, the notation I denotes the unit matrix of an appropriate order. Further, [A.sup.*], R(A), rank(A), and N(A) denote the conjugate transpose, the range, the rank, and the null space of A [member of] [C.sup.mxn], respectively.

The problem of pseudoinverse computation leads to the so-called Penrose equations:

(1) AXA = A,

(2) XAX = X,

(3) [(AX).sup.*] = AX,

(4) [(XA).sup.*] = XA. (1)

The set of all matrices obeying the conditions contained in S is denoted by A{S}. Any matrix from A{S} is called an S-inverse of A and is denoted by [A.sup.(S)]. Further, A[{S}.sub.s] denotes the set of all S-inverses of A of rank s. For any matrix A there exists a unique element in the set A{1, 2, 3, 4}, called the Moore-Penrose inverse of A, which is denoted by [A.sup.[dagger]]. The Drazin inverse of a square matrix A [member of] [C.sup.nxn] is the unique matrix X [member of] [C.sup.nxn] which fulfills matrix equation (2) in conjunction with

([1.sup.l]) [A.sup.l+1] X = [A.sup.l], l [greater than or equal to] ind (A),

(5) AX = XA, (2)

and it is denoted by X = [A.sup.D]. Here, the notation ind(A) denotes the index of a square matrix A and it is defined by ind(A) = min {j | rank([A.sup.j]) = rank([A.sup.j+1])}. In the case ind(A) = 1, the Drazin inverse becomes the group inverse X = [A.sup.#]. For other important properties of generalized inverses see [1, 2].
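As an illustrative numerical sketch (not part of the original development), ind(A) and the Drazin inverse can be computed via the well-known representation [A.sup.D] = [A.sup.l][([A.sup.2l+1]).sup.[dagger]][A.sup.l] for l [greater than or equal to] ind(A); the matrix below is purely hypothetical.

```python
import numpy as np

# Illustrative matrix: an invertible block coupled with a nilpotent block,
# so that ind(A) = 2 (the matrix is hypothetical).
A = np.array([[2., 0., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])

rank = np.linalg.matrix_rank
mp = np.linalg.matrix_power

# ind(A) = min { j | rank(A^j) = rank(A^(j+1)) }
l = 0
while rank(mp(A, l)) != rank(mp(A, l + 1)):
    l += 1

# Known representation A^D = A^l (A^(2l+1))^dagger A^l, l >= ind(A)
Al = mp(A, l)
AD = Al @ np.linalg.pinv(mp(A, 2 * l + 1)) @ Al

# Drazin conditions: A^(l+1) X = A^l, XAX = X, AX = XA
assert np.allclose(mp(A, l + 1) @ AD, mp(A, l))
assert np.allclose(AD @ A @ AD, AD)
assert np.allclose(A @ AD, AD @ A)
```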

An element X [member of] A{S} satisfying R(X) = R(B) (resp., N(X) = N(C)) is denoted by [A.sup.(S).sub.R(B),*] (resp., [A.sup.(S).sub.*,N(C)]). If X satisfies both the conditions R(X) = R(B) and N(X) = N(C), it is denoted by [A.sup.(S).sub.R(B),N(C)]. The set of all S-inverses of A with the prescribed range R(B) (resp., the prescribed null space N(C)) is denoted by A[{S}.sub.R(B),*] (resp., A[{S}.sub.*,N(C)]). Definitions and notation used in the further text are from the books by Ben-Israel and Greville and Wang et al.

Full-rank representation of {2}-inverses with the prescribed range and null space is determined in the next proposition, which originates from .

Proposition 1 (see ). Let A [member of] [C.sup.mxn.sub.r], let T be a subspace of [C.sup.n] of dimension s [less than or equal to] r, and let S be a subspace of [C.sup.m] of dimension m - s. In addition, suppose that R [member of] [C.sup.nxm] satisfies R(R) = T, N(R) = S. Let R = FG be an arbitrary full-rank decomposition of R. If A has a {2}-inverse [A.sup.(2).sub.T,S], then

(1) GAF is an invertible matrix;

(2) [A.sup.(2).sub.T,S] = F[(GAF).sup.-1] G.
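As a numerical sketch of Proposition 1 (with hypothetical matrices A, F, G chosen so that GAF is invertible):

```python
import numpy as np

# A rank-2 matrix A and a full-rank decomposition R = F G (hypothetical data)
A = np.array([[1., 0., 0.],
              [0., 2., 0.],
              [0., 0., 0.]])
F = np.array([[1., 0.],
              [0., 1.],
              [0., 0.]])            # R(F) = T
G = F.T                             # N(G) = S

GAF = G @ A @ F                     # invertible by Proposition 1
X = F @ np.linalg.inv(GAF) @ G      # X = A^(2)_{T,S} = F (GAF)^{-1} G

assert np.allclose(X @ A @ X, X)    # X is a {2}-inverse of A
```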

The Moore-Penrose inverse [A.sup.[dagger]], the Drazin inverse [A.sup.D], and the group inverse [A.sup.#] are generalized inverses [A.sup.(2).sub.T,S] for appropriate choices of the subspaces T and S. For example, for a rectangular matrix A and, in the last two cases, a square matrix A with k = ind(A), the following is valid:

[A.sup.[dagger]] = [A.sup.(2).sub.R([A.sup.*]),N([A.sup.*])],

[A.sup.D] = [A.sup.(2).sub.R([A.sup.k]),N([A.sup.k])],

[A.sup.#] = [A.sup.(2).sub.R(A),N(A)]. (3)

The full-rank representation [A.sup.(2).sub.T,S] = F[(GAF).sup.-1] G has been applied in numerical computations. For example, such a representation has been exploited to define the determinantal representation of the [A.sup.(2).sub.T,S] inverse in  or the determinantal representation of the set A[{2}.sub.s] in . Many iterative methods for computing outer inverses with the prescribed range and null space have been developed; an outline of these numerical methods can be found in [5-13].

A drawback of the representation given in Proposition 1 arises from the fact that it is based on the full-rank decomposition R = FG and gives the representation of [A.sup.(2).sub.R(R),N(R)]. Besides, it requires invertibility of GAF; otherwise, it is not applicable. Finally, representations of outer inverses with only the range or only the null space prescribed, as well as representations of inner inverses with the prescribed range and/or null space, are not covered. For this purpose, our further motivation is the well-known representations of the generalized inverses [A.sup.(2).sub.T,S] and [A.sup.(1,2).sub.T,S] given by the Urquhart formula. The Urquhart formula originated in  and was later extended in [2, Theorem 1.3.3] and [1, Theorem 13, P. 72]. We restate it for the sake of completeness.

Proposition 2 (Urquhart formula). Let A [member of] [C.sup.mxn.sub.r], U [member of] [C.sup.nxp], V [member of] [C.sup.qxm], and X = U[(VAU).sup.(1)]V, where [(VAU).sup.(1)] is a fixed but arbitrary element of (VAU){1}. Then

(1) X [member of] A{1} if and only if rank(VAU) = r;

(2) X [member of] A{2} and R(X) = R(U) if and only if rank(VAU) = rank(U);

(3) X [member of] A{2} and N(X) = N(V) if and only if rank(VAU) = rank(V);

(4) X = [A.sup.(2).sub.R(U),N(V)] if and only if rank(VAU) = rank(U) = rank(V);

(5) X = [A.sup.(1,2).sub.R(U),N(V)] if and only if rank(VAU) = rank(U) = rank(V) = r.
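Proposition 2 can be probed numerically. In the sketch below (hypothetical matrices), numpy's pinv supplies one fixed {1}-inverse of VAU, since the Moore-Penrose inverse is, in particular, a {1}-inverse; the rank condition of part (5) holds, so X is a {1,2}-inverse.

```python
import numpy as np

# Hypothetical data for X = U (VAU)^(1) V; pinv plays the role of (VAU)^(1).
A = np.array([[1., 0., 0.],
              [0., 2., 0.],
              [0., 0., 0.]])
U = np.array([[1., 0.],
              [0., 1.],
              [0., 0.]])
V = U.T

VAU = V @ A @ U
X = U @ np.linalg.pinv(VAU) @ V

r = np.linalg.matrix_rank
# rank(VAU) = rank(U) = rank(V) = rank(A): the condition of part (5)
assert r(VAU) == r(U) == r(V) == r(A)
assert np.allclose(A @ X @ A, A)   # X is a {1}-inverse of A
assert np.allclose(X @ A @ X, X)   # X is a {2}-inverse of A
```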

A further motivation is the notion of a (b, c)-inverse of an element a in a semigroup, introduced by Drazin in . In light of the result from , the representation of outer inverses given in Proposition 1 concerns (R, R)-inverses. Our aim is to consider representations and computations of (B, C)-inverses, where B and C may be different.

Finally, our intention is to define appropriate numerical algorithms for computing generalized inverses

[A.sup.(2).sub.T,S], [A.sup.(1).sub.T,*], [A.sup.(1).sub.T,S], [A.sup.(2).sub.T,*], [A.sup.(2).sub.*,S], [A.sup.(1,2).sub.T,*], [A.sup.(1,2).sub.*,S], [A.sup.(1,2).sub.T,S] (4)

in both the time-varying and time-invariant cases. For this purpose, we observe that the neural dynamic approach has been exploited as a powerful tool in solving matrix algebra problems, due to its parallel distributed nature as well as its convenience for hardware implementation. Recently, many authors have shown great interest in computing the inverse or the pseudoinverse of square and full-rank rectangular matrices on the basis of gradient-based recurrent neural networks (GNNs) or Zhang neural networks (ZNNs). Neural network models for the inversion and pseudoinversion of square and full-row or full-column rank rectangular matrices were developed in [16-18]. Various recurrent neural networks for computing generalized inverses of rank-deficient matrices were introduced in [19-23]. RNNs designed for calculating the pseudoinverse of rank-deficient matrices were created in . Three recurrent neural networks for computing the weighted Moore-Penrose inverse were introduced in . A feedforward neural network architecture for computing the Drazin inverse was proposed in . The dynamic equation and induced gradient recurrent neural network for computing the Drazin inverse were defined in . Two gradient-based RNNs for generating outer inverses with prescribed range and null space in the time-invariant case were introduced in . Two additional dynamic state equations and corresponding gradient-based RNNs for generating the class of outer inverses of time-invariant real matrices were proposed in .

The global organization of the paper is as follows. Conditions for the existence and representations of the generalized inverses included in (4) are given in Section 2. Numerical algorithms arising from the representations derived in Section 2 are defined in Section 3. In this way, Section 3 defines algorithms for computing various classes of inner and outer generalized inverses by means of derived solutions of certain matrix equations. The main particular cases, as well as the global computational complexity of the introduced algorithms, are presented in the same section. Illustrative simulation and numerical examples are presented in Section 4.

2. Existence and Representations of Generalized Inverses

Theorem 3 provides a theoretical basis for computing outer inverses with the prescribed range space.

Theorem 3. Let A [member of] [C.sup.mxn] and B [member of] [C.sup.nxk].

(a) The following statements are equivalent:

(i) There exists a {2}-inverse X of A satisfying R(X) = R(B), denoted by [A.sup.(2).sub.R(B),*].

(ii) There exists U [member of] [C.sup.kxm] such that BUAB = B.

(iii) N(AB) = N(B).

(iv) rank(AB) = rank(B).

(v) B[(AB).sup.(1)] AB = B, for some (equivalently every) [(AB).sup.(1)] [member of] (AB){1}.

(b) If the statements in (a) are true, then the set of all outer inverses with the prescribed range R(B) is represented by

A[{2}.sub.R(B),*] = {B[(AB).sup.(1)] | [(AB).sup.(1)] [member of] (AB){1}}

= {BU | U [member of] [C.sup.kxm], BUAB = B}. (5)

Moreover,

A[{2}.sub.R(B),*]

= {B[(AB).sup.(1)] + BY ([I.sub.m] - AB[(AB).sup.(1)]) | Y [member of] [C.sup.kxm]}, (6)

where [(AB).sup.(1)] [member of] (AB){1} is arbitrary but fixed.

Proof. (a) (i) [??] (ii). Let X [member of] [C.sup.nxm] be such that XAX = X and R(X) = R(B). Then X = BU and B = XW, for some U [member of] [C.sup.kxm] and W [member of] [C.sup.mxk], so B = XW = XAXW = XAB = BUAB.

(ii) [??] (iii). As we know, N(B) [subset or equal to] N(AB). On the other hand, taking into account BUAB = B for some U [member of] [C.sup.kxm], it follows that N(AB) [subset or equal to] N(BUAB) = N(B), and hence N(AB) = N(B).

(iii) [??] (v). Let [(AB).sup.(1)] be an arbitrary {1}-inverse of AB. As N(AB) = N(B) implies B = VAB, for some V [member of] [C.sup.nxm], it follows that

B = VAB = VAB [(AB).sup.(1)] AB = B[(AB).sup.(1)] AB. (7)

(v) [??] (i). Let B = B[(AB).sup.(1)] AB, for some [(AB).sup.(1)] [member of] (AB){1}, and set X = B[(AB).sup.(1)]. Then

XAX = B[(AB).sup.(1)] AB[(AB).sup.(1)] = B[(AB).sup.(1)] = X, (8)

and by X = B[(AB).sup.(1)] and B = B[(AB).sup.(1)]AB = XAB it follows that X is a {2}-inverse of A which satisfies R(X) = R(B).

(iii) [??] (iv). This equivalence is well-known.

(b) From the proofs of (i) [??] (ii) and (v) [??] (i), and the fact that B = BUAB implies U [member of] (AB){1}, it follows that

A [{2}.sub.R(B),*] [subset or equal to] {BU | U [member of] [C.sup.kxm], BUAB = B}

[subset or equal to] {B[(AB).sup.(1)] | [(AB).sup.(1)] [member of] (AB) {1}}

[subset or equal to] A [{2}.sub.R(B),*], (9)

and hence (5) holds.

According to Theorem 1 [1, Section 2] (or [2, Theorem 1.2.5]), the condition (v) ensures consistency of the matrix equation BUAB = B and gives its general solution

{U [member of] [C.sup.kxm] | BUAB = B} = {[B.sup.(1)]B [(AB).sup.(1)] + Y

- [B.sup.(1)] BYAB [(AB).sup.(1)] | Y [member of] [C.sup.kxm]}, (10)

whence we obtain

A[{2}.sub.R(B),*] = {BU | U [member of] [C.sup.kxm], BUAB = B}

= {B[(AB).sup.(1)] +BY([I.sub.m] - AB[(AB).sup.(1)]) | Y [member of] [C.sup.kxm]}. (11)

This proves that (6) is true.

Remark 4. Five equivalent conditions for the existence and representations of the class of generalized inverses [A.sup.(2).sub.T,*] were given in [27, Theorem 1]. Theorem 3 gives two new and important conditions, (i) and (v). These conditions are related to the solvability of certain matrix equations. Further, representations of the generalized inverses [A.sup.(2).sub.T,*] were presented in [27, Theorem 2]. Theorem 3 gives two new and important representations: the second representation in (5) and representation (6).
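The construction behind Theorem 3 can be sketched numerically: verify condition (iv) and form X = B[(AB).sup.(1)]. The matrices below are illustrative, and numpy's pinv serves as one particular {1}-inverse of AB.

```python
import numpy as np

# Illustrative matrices satisfying rank(AB) = rank(B), i.e. condition (iv)
A = np.array([[1., 2., 0.],
              [0., 1., 0.],
              [0., 0., 0.]])
B = np.array([[1., 0.],
              [0., 1.],
              [0., 0.]])

AB = A @ B
assert np.linalg.matrix_rank(AB) == np.linalg.matrix_rank(B)

X = B @ np.linalg.pinv(AB)          # X = B (AB)^(1)
assert np.allclose(X @ A @ X, X)    # X is a {2}-inverse of A
# R(X) = R(B): the columns of X lie in R(B), and rank(X) = rank(B)
assert np.linalg.matrix_rank(np.hstack([B, X])) == np.linalg.matrix_rank(B)
assert np.linalg.matrix_rank(X) == np.linalg.matrix_rank(B)
```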

Theorem 5 provides a theoretical basis for computing outer inverses with the prescribed kernel. To the best of our knowledge, these results are new in the literature.

Theorem 5. Let A [member of] [C.sup.mxn] and C [member of] [C.sup.lxm].

(a) The following statements are equivalent:

(i) There exists a {2}-inverse X of A satisfying N(X) = N(C), denoted by [A.sup.(2).sub.*,N(C)].

(ii) There exists V [member of] [C.sup.nxl] such that CAVC = C.

(iii) R(CA) = R(C).

(iv) rank(CA) = rank(C).

(v) CA[(CA).sup.(1)]C = C, for some (equivalently every) [(CA).sup.(1)] [member of] (CA){1}.

(b) If the statements in (a) are true, then the set of all outer inverses with the prescribed null space N(C) is represented by

A[{2}.sub.*,N(C)] = {[(CA).sup.(1)] C | [(CA).sup.(1)] [member of] (CA) {1}}

= {VC | V [member of] [C.sup.nxl], CAVC = C}. (12)

Moreover,

A[{2}.sub.*,N(C)]

= {[(CA).sup.(1)] C + ([I.sub.n] - [(CA).sup.(1)] CA) YC | Y [member of] [C.sup.nxl]}, (13)

where [(CA).sup.(1)] is an arbitrary fixed matrix from (CA){1}.

Proof. The proof is analogous to the proof of Theorem 3.
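A dual sketch of Theorem 5, with illustrative matrices and pinv as the chosen [(CA).sup.(1)]:

```python
import numpy as np

# Illustrative matrices satisfying rank(CA) = rank(C), i.e. condition (iv)
A = np.array([[1., 0., 0.],
              [2., 1., 0.],
              [0., 0., 0.]])
C = np.array([[1., 0., 0.],
              [0., 1., 0.]])

CA = C @ A
assert np.linalg.matrix_rank(CA) == np.linalg.matrix_rank(C)

X = np.linalg.pinv(CA) @ C          # X = (CA)^(1) C
assert np.allclose(X @ A @ X, X)    # X is a {2}-inverse of A
# N(X) = N(C): X annihilates the vector spanning N(C)
v = np.array([0., 0., 1.])
assert np.allclose(C @ v, 0) and np.allclose(X @ v, 0)
```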

Theorem 6 is a theoretical basis for computing a {2}-inverse with the prescribed range and null space.

Theorem 6. Let A [member of] [C.sup.mxn], B [member of] [C.sup.nxk], and C [member of] [C.sup.lxm].

(a) The following statements are equivalent:

(i) There exists a {2}-inverse X of A satisfying R(X) = R(B) and N(X) = N(C).

(ii) There exists U [member of] [C.sup.kxl] such that BUCAB = B and CABUC = C.

(iii) There exist U, V [member of] [C.sup.kxl] such that BUCAB = B and CABVC = C.

(iv) There exist U [member of] [C.sup.kxm] and V [member of] [C.sup.nxl] such that BUAB = B, CAVC = C, and BU = VC.

(v) There exist U [member of] [C.sup.kxm] and V [member of] [C.sup.nxl] such that CABU = C and VCAB = B.

(vi) N(CAB) = N(B), R(CAB) = R(C).

(vii) rank(CAB) = rank(B) = rank(C).

(viii) B[(CAB).sup.(1)]CAB = B and CAB[(CAB).sup.(1)]C = C, for some (equivalently every) [(CAB).sup.(1)] [member of] (CAB){1}.

(b) If the statements in (a) are true, then the unique {2}-inverse of A with the prescribed range R(B) and null space N(C) is represented by

[A.sup.(2).sub.R(B),N(C)] = B[(CAB).sup.(1)]C = BUC, (14)

for arbitrary [(CAB).sup.(1)] [member of] (CAB){1} and arbitrary U [member of] [C.sup.kxl] satisfying BUCAB = B and CABUC = C.

Proof. (a) (i) [??] (ii). Let X [member of] [C.sup.nxm] be such that XAX = X, R(X) = R(B), and N(X) = N(C). Then there exists U [member of] [C.sup.kxl] such that X = BUC. Also, B and C satisfy B = XW and C = VX, for some W [member of] [C.sup.mxk], V [member of] [C.sup.lxn]. This further implies

B = XW = XAXW = XAB = BUCAB,

C = VX = VXAX = CAX = CABUC. (15)

(ii) [??] (vi). According to CABUC = C, for some U [member of] [C.sup.kxl], it follows that

R (C) = R (CABUC) [subset or equal to] R (CAB) [subset or equal to] R (C), (16)

and thus R(CAB) = R(C). Further, by B = BUCAB, for some U [member of] [C.sup.kxl], it follows that

N (B) [subset or equal to] N (CAB) [subset or equal to] N (BUCAB) = N (B), (17)

which yields N(CAB) = N(B).

(vi) [??] (viii). Let [(CAB).sup.(1)] be an arbitrary {1}-inverse of CAB. Since R(CAB) = R(C) implies C = CABW, for some W [member of] [C.sup.kxm], it follows that

C = CABW = CAB [(CAB).sup.(1)] CABW

= CAB [(CAB).sup.(1)] C. (18)

Similarly, N(CAB) = N(B) implies B = VCAB, for some V [member of] [C.sup.nxl] and

B = VCAB = VCAB [(CAB).sup.(1)] CAB

= B [(CAB).sup.(1)] CAB. (19)

(viii) [??] (i). Let CAB[(CAB).sup.(1)]C = C, for some [(CAB).sup.(1)] [member of] (CAB){1}, and set X = B[(CAB).sup.(1)]C. Then

XAX = B[(CAB).sup.(1)] CAB [(CAB).sup.(1)] C = B[(CAB).sup.(1)]C

= X (20)

and by X = B[(CAB).sup.(1)]C, B = B[(CAB).sup.(1)]CAB = XAB, and C = CAB[(CAB).sup.(1)]C = CAX it follows that X is a {2}-inverse of A which satisfies R(X) = R(B), N(X) = N(C).

(vi) [??] (vii). This statement follows from [2, Theorem 1.1.3, P. 3].

(ii) [??] (iii). This is evident.

(iii) [??] (ii). Let U, V [member of] [C.sup.kxl] be arbitrary matrices such that BUCAB = B and CABVC = C. Then

BUC = BUCABVC = BVC, (21)

whence

B = BUCAB = BVCAB,

C = CABVC = CABUC. (22)

Thus, (ii) holds.

(ii) [??] (iv). Let U [member of] [C.sup.kxl] be such that BUCAB = B and CABUC = C. Then

B = B (UC) AB,

C = CA (BU) C,

B (UC) = (BU) C, (23)

which means that (iv) is true.

(iv) [??] (v). Let U [member of] [C.sup.kxm] and V [member of] [C.sup.nxl] be such that BUAB = B, CAVC = C, and BU = VC. Then

B = BUAB = VCAB,

C = CAVC = CABU, (24)

which confirms (v).

(v) [??] (iv). Let U [member of] [C.sup.kxm] and V [member of] [C.sup.nxl] be such that CABU = C and VCAB = B. Then

VC = VCABU = BU,

B = VCAB = BUAB,

C = CABU = CAVC, (25)

and hence (iv) holds.

(iv) [??] (i). Let U [member of] [C.sup.kxm] and V [member of] [C.sup.nxl] be such that BUAB = B, CAVC = C, and BU = VC, and set X = BU = VC. Then

XAX = BUABU = BU = X; (26)

by X = BU and B = BUAB = XAB it follows that R(X) = R(B), and by C = CAVC = CAX it follows that N(X) = N(C). Therefore, (i) is true.

(b) According to the proofs of (i) [??] (ii) and (viii) [??] (i) and the fact that C = CABUC and BUCAB = B, for U [member of] [C.sup.kxl], imply U [member of] (CAB){1}, it follows that

[A.sup.(2).sub.R(B),N(C)] = BUC = B[(CAB).sup.(1)] C, (27)

and hence (14) holds.
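A small numerical sketch of representation (14) (illustrative matrices; pinv(CAB) is the chosen {1}-inverse):

```python
import numpy as np

# Illustrative matrices satisfying rank(CAB) = rank(B) = rank(C): condition (vii)
A = np.array([[1., 1., 0.],
              [0., 1., 0.],
              [0., 0., 3.]])
B = np.array([[1., 0.],
              [0., 1.],
              [0., 0.]])
C = B.T

r = np.linalg.matrix_rank
CAB = C @ A @ B
assert r(CAB) == r(B) == r(C)

X = B @ np.linalg.pinv(CAB) @ C     # X = B (CAB)^(1) C = A^(2)_{R(B),N(C)}
assert np.allclose(X @ A @ X, X)    # X is a {2}-inverse of A
assert np.allclose(X @ A @ B, B)    # hence R(X) = R(B)
assert np.allclose(C @ A @ X, C)    # hence N(X) = N(C)
```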

Remark 7. After a comparison of Theorem 6 with the Urquhart formula given in Proposition 2, it is evident that conditions (vi) and (vii) of Theorem 6 could be derived using the Urquhart results. All other conditions are based on the solutions of certain matrix equations, and they are new.

In addition, comparing the representations of Theorem 6 with the full-rank representation restated from  in Proposition 1, it is notable that the representations given in Theorem 6 do not require computation of a full-rank factorization R = FG of the matrix R. More precisely, the representation of [A.sup.(2).sub.R(B),N(C)] from Theorem 6 boils down to the full-rank representation of [A.sup.(2).sub.R(F),N(G)] from Proposition 1 in the case when BC = R is a full-rank factorization of R and CAB is invertible.

It is worth mentioning that Drazin in  generalized the concept of the outer inverse with the prescribed range and null space by introducing the concept of a (b, c)-inverse in a semigroup. In the matrix case, this concept can be defined as follows. Let A [member of] [C.sup.mxn], X [member of] [C.sup.nxm], B [member of] [C.sup.nxk], and C [member of] [C.sup.lxm]. Then X is called a (B, C)-inverse of A if the following relations hold:

XAB = B,

CAX = C (28)

X = BU = VC, for some U [member of] [C.sup.kxm], V [member of] [C.sup.nxl]. (29)

It is easy to see that X is a (B, C)-inverse of A if and only if X is a {2}-inverse of A satisfying R(X) = R(B) and N(X) = N(C).
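For the illustrative matrices below, the outer inverse X = B[(CAB).sup.(1)]C indeed satisfies relations (28)-(29); pinv serves as the {1}-inverse, and the factorizations X = BU = VC hold by construction.

```python
import numpy as np

# Hypothetical data; X = B (CAB)^(1) C is formed as in Theorem 6
A = np.array([[2., 0., 0.],
              [0., 1., 0.],
              [0., 0., 0.]])
B = np.array([[1., 0.],
              [0., 1.],
              [0., 0.]])
C = B.T

W = np.linalg.pinv(C @ A @ B)   # one (CAB)^(1)
X = B @ W @ C
# Relations (28): XAB = B and CAX = C
assert np.allclose(X @ A @ B, B)
assert np.allclose(C @ A @ X, C)
# Relation (29): X = BU = VC with U = W C and V = B W
assert np.allclose(B @ (W @ C), X) and np.allclose((B @ W) @ C, X)
# Consequently X is a {2}-inverse of A
assert np.allclose(X @ A @ X, X)
```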

The next theorem can be used for computing a {1}-inverse X of A satisfying R(X) [subset or equal to] R(B).

Theorem 8. Let A [member of] [C.sup.mxn] and B [member of] [C.sup.nxk].

(a) The following statements are equivalent:

(i) There exists a {1}-inverse X of A satisfying R(X) [subset or equal to] R(B).

(ii) There exists U [member of] [C.sup.kxm] such that ABUA = A.

(iii) R(AB) = R(A).

(iv) AB[(AB).sup.(1)]A = A, for some (equivalently every) [(AB).sup.(1)] [member of] (AB){1}.

(v) rank(AB) = rank(A).

(b) If the statements in (a) are true, then the set of all inner inverses of A whose range is contained in R(B) is represented by

{X [member of] A {1} | R (X) [subset or equal to] R (B)}

= {B[(AB).sup.(1)] | [(AB).sup.(1)] [member of] (AB){1}}

= {BU | U [member of] [C.sup.kxm], ABUA = A}. (30)

Moreover,

{X [member of] A{1} | R (X) [subset or equal to] R(B)} = {B[(AB).sup.(1)]A[A.sup.(1)]

+ BY - B[(AB).sup.(1)]ABYA[A.sup.(1)] | Y [member of] [C.sup.kxm]}, (31)

where [(AB).sup.(1)] [member of] (AB){1} and [A.sup.(1)] [member of] A{1} are arbitrary but fixed.

Proof. (a) (i) [??] (ii). Let X [member of] [C.sup.nxm] be such that AXA = A and R(X) [subset or equal to] R(B). Then X = BU, for some U [member of] [C.sup.kxm], so A = AXA = ABUA.

(ii) [??] (iii). Let ABUA = A, for some U [member of] [C.sup.kxm]. Then R(A) = R(ABUA) [subset or equal to] R(AB). Since the opposite inclusion always holds, we conclude that R(AB) = R(A).

(iii) [??] (iv). Let [(AB).sup.(1)] be an arbitrary {1}-inverse of AB. By R(AB) = R(A) it follows that A = ABV, for some V [member of] [C.sup.kxn], so we have that

A = ABV = AB[(AB).sup.(1)] ABV = AB[(AB).sup.(1)] A. (32)

(iv) [??] (i). Let AB[(AB).sup.(1)]A = A, for some [(AB).sup.(1)] [member of] (AB){1}, and set X = B[(AB).sup.(1)]. It is clear that AXA = A, and by X = B[(AB).sup.(1)] we obtain the fact that R(X) [subset or equal to] R(B).

(iii) [??] (v). This follows from [2, Theorem 1.1.3, P. 3].

(b) On the basis of the fact that A = ABUA implies U [member of] (AB){1} and the arguments used in the proofs of (i) [??] (ii) and (iv) [??] (i), we have that

{X [member of] A {1} | R (X) [subset or equal to] R (B)}

[subset or equal to] {BU | U [member of] [C.sup.kxm], ABUA = A}

[subset or equal to] {B[(AB).sup.(1)] | [(AB).sup.(1)] [member of] (AB) {1}}

[subset or equal to] {X [member of] A {1} | R (X) [subset or equal to] R (B)}, (33)

which confirms that (30) is true.

Once again, according to Theorem 1 [1, Section 2] (or [2, Theorem 1.2.5]) we have that

{U [member of] [C.sup.kxm] | ABUA = A} = {[(AB).sup.(1)]A[A.sup.(1)] + Y - [(AB).sup.(1)]ABYA[A.sup.(1)] | Y [member of] [C.sup.kxm]}, (34)

where [(AB).sup.(1)] [member of] (AB){1} and [A.sup.(1)] [member of] A{1} are arbitrary elements, whence we obtain that

{X [member of] A{1} | R(X) [subset or equal to] R(B)} = {BU | U

[member of] [C.sup.kxm], ABUA = A} = {B[(AB).sup.(1)]A[A.sup.(1)] + BY

- B[(AB).sup.(1)]ABYA[A.sup.(1)] | Y [member of] [C.sup.kxm]}, (35)

and hence (31) is true.
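Theorem 8 in numbers (a sketch with illustrative matrices; pinv(AB) is the chosen {1}-inverse): when rank(AB) = rank(A), X = B[(AB).sup.(1)] is an inner inverse of A with R(X) [subset or equal to] R(B).

```python
import numpy as np

# Illustrative matrices satisfying rank(AB) = rank(A), i.e. condition (v)
A = np.array([[1., 0., 0.],
              [0., 1., 0.],
              [0., 0., 0.]])
B = np.array([[1., 0.],
              [0., 1.],
              [0., 0.]])

r = np.linalg.matrix_rank
assert r(A @ B) == r(A)

X = B @ np.linalg.pinv(A @ B)       # X = B (AB)^(1)
assert np.allclose(A @ X @ A, A)    # X is a {1}-inverse of A
# R(X) is contained in R(B), since X = BU by construction
assert r(np.hstack([B, X])) == r(B)
```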

Theorem 9 can be used for computing a {1}-inverse X of A satisfying N(C) [subset or equal to] N(X). Its proof is dual to the proof of Theorem 8.

Theorem 9. Let A [member of] [C.sup.mxn] and C [member of] [C.sup.lxm].

(a) The following statements are equivalent:

(i) There exists a {1}-inverse X of A satisfying N(C) [subset or equal to] N(X).

(ii) There exists V [member of] [C.sup.nxl] such that AVCA = A.

(iii) N(CA) = N(A).

(iv) A[(CA).sup.(1)]CA = A, for some (equivalently every) [(CA).sup.(1)] [member of] (CA){1}.

(v) rank(CA) = rank(A).

(b) If the statements in (a) are true, then the set of all inner inverses of A whose null space is contained in N(C) is represented by

{X [member of] A{1} | N(C) [subset or equal to] N(X)}

= {[(CA).sup.(1)] C | [(CA).sup.(1)] [member of] (CA) {1}}

= {VC | V [member of] [C.sup.nxl], AVCA = A}. (36)

Moreover,

{X [member of] A{1} | N(C) [subset or equal to] N(X)} = {[A.sup.(1)]A[(CA).sup.(1)]C

+ YC - [A.sup.(1)]AYCA[(CA).sup.(1)]C | Y [member of] [C.sup.nxl]}, (37)

where [(CA).sup.(1)] [member of] (CA){1} and [A.sup.(1)] [member of] A{1} are arbitrary but fixed.

Theorem 10 provides several equivalent conditions for the existence and representations for computing a {1,2}-inverse with the prescribed range.

Theorem 10. Let A [member of] [C.sup.mxn] and B [member of] [C.sup.nxk].

(a) The following statements are equivalent:

(i) There exists a {1,2}-inverse X of A satisfying R(X) = R(B), denoted by [A.sup.(1,2).sub.R(B),*].

(ii) There exist U, V [member of] [C.sup.kxm] such that BUAB = B and ABVA = A.

(iii) There exists W [member of] [C.sup.kxm] such that BWAB = B and ABWA = A.

(iv) N(AB) = N(B) and R(AB) = R(A).

(v) rank(AB) = rank(A) = rank(B).

(vi) B[(AB).sup.(1)] AB = B and AB[(AB).sup.(1)] A = A, for some (equivalently every) [(AB).sup.(1)] [member of] (AB){1}.

(b) If the statements in (a) are true, then the set of all {1,2}-inverses with the prescribed range R(B) is represented by

A[{1,2}.sub.R(B),*] = A[{2}.sub.R(B),*]

= {X [member of] A {1} | R (X) [subset or equal to] R (B)}. (38)

Proof. (a) First we note that the implication (i) [??] (vi) and the equivalences (ii) [??] (iv) and (iv) [??] (vi) follow directly from Theorems 3 and 8. Also, (iv) [??] (v) follows from [2, Theorem 1.1.3] (or Example 10 [1, Section 1]).

(vi) [??] (iii). If we set W = [(AB).sup.(1)], where [(AB).sup.(1)] [member of] (AB){1} is an arbitrary element, then (vi) implies that BWAB = B and ABWA = A.

(iii) [??] (i). If W [member of] [C.sup.kxm] is such that BWAB = B and ABWA = A, then by Theorem 3 we obtain the fact that X = BW is a {2}-inverse of A satisfying R(X) = R(B), and clearly X is also a {1}-inverse of A.

(iii) [??] (ii). This implication is evident.

(b) If the statements in (a) hold, then the statements of Theorems 3 and 8 also hold, and from these two theorems it follows directly that (38) is valid.

Theorem 11 provides several equivalent conditions for the existence and representations of [A.sup.(1,2).sub.*,N(C)].

Theorem 11. Let A [member of] [C.sup.mxn] and C [member of] [C.sup.lxm].

(a) The following statements are equivalent:

(i) There exists a {1,2}-inverse X of A satisfying N(X) = N(C), denoted by [A.sup.(1,2).sub.*,N(C)].

(ii) There exist U, V [member of] [C.sup.nxl] such that CAUC = C and AVCA = A.

(iii) There exists W [member of] [C.sup.nxl] such that CAWC = C and AWCA = A.

(iv) N(CA) = N(A) and R(CA) = R(C).

(v) rank(CA) = rank(A) = rank(C).

(vi) CA[(CA).sup.(1)]C = C and A[(CA).sup.(1)]CA = A, for some (equivalently every) [(CA).sup.(1)] [member of] (CA){1}.

(b) If the statements in (a) are true, then the set of all {1,2}-inverses with the prescribed null space N(C) is given by

A[{1,2}.sub.*,N(C)] = A[{2}.sub.*,N(C)]

= {X [member of] A{1} | N (C) [subset or equal to] N (X)}. (39)

Theorem 12 is a theoretical basis for computing a {1,2}-inverse with the predefined range and null space.

Theorem 12. Let A [member of] [C.sup.mxn], B [member of] [C.sup.nxk], and C [member of] [C.sup.lxm].

(a) The following statements are equivalent:

(i) There exists a {1,2}-inverse X of A satisfying R(X) = R(B) and N(X) = N(C), denoted by [A.sup.(1,2).sub.R(B),N(C)].

(ii) There exist U [member of] [C.sup.kxm] and V [member of] [C.sup.nxl] such that BUAB = B, ABUA = A, CAVC = C, and AVCA = A.

(iii) N(AB) = N(B), R(AB) = R(A), R(CA) = R(C), and N(CA) = N(A).

(iv) rank(AB) = rank(A) = rank(B), rank(CA) = rank(A) = rank(C).

(v) rank(CAB) = rank(C) = rank(B) = rank(A).

(vi) B[(AB).sup.(1)]AB = B, AB[(AB).sup.(1)]A = A, CA[(CA).sup.(1)]C = C, and A[(CA).sup.(1)]CA = A, for some (equivalently every) [(AB).sup.(1)] [member of] (AB){1} and [(CA).sup.(1)] [member of] (CA){1}.

(b) If the statements in (a) are true, then the unique {1,2}-inverse of A with the prescribed range R(B) and null space N(C) is represented by

[A.sup.(1,2).sub.R(B),N(C)] = B[(AB).sup.(1)] A[(CA).sup.(1)]C = BUAVC

= B[(CAB).sup.(1)] C, (40)

for arbitrary [(AB).sup.(1)] [member of] (AB){1}, [(CA).sup.(1)] [member of] (CA){1}, and [(CAB).sup.(1)] [member of] (CAB){1} and arbitrary U [member of] [C.sup.kxm] and V [member of] [C.sup.nxl] satisfying BUAB = B and CAVC = C.

Proof. (a) The equivalence of the statements (i)-(iv) and (vi) follows immediately from Theorem 10 and its dual. The equivalence (i) [??] (v) follows immediately from part (5) of the Urquhart formula restated in Proposition 2 (see also [2, Theorem 1.3.7]).

(b) Let U [member of] [C.sup.kxm] and V [member of] [C.sup.nxl] be arbitrary matrices satisfying BUAB = B and CAVC = C, and set X = BUAVC. Seeing that U [member of] (AB){1} and V [member of] (CA){1}, according to (vi) we obtain the fact that ABUA = A and AVCA = A. This implies that

XAX = BUAVCABUAVC = BUAVC = X,

AXA = ABUAVCA = AVCA = A,

R(X) = R(BUAVC) [subset or equal to] R (B),

N (C) [subset or equal to] N (BUAVC) = N (X),

R (B) = R (BUAB) = R (BUAVCAB) = R (XAB)

[subset or equal to] R (X),

N (X) [subset or equal to] N(CAX) = N (CABUAVC) = N (CAVC)

= N (C), (41)

which means that X is a {1,2}-inverse of A satisfying R(X) = R(B) and N(X) = N(C), and hence the second equality in (40) is true.

The same arguments confirm the validity of the first equality in (40).
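A sketch comparing the representations in (40) on illustrative data: with pinv serving as each {1}-inverse, B[(AB).sup.(1)]A[(CA).sup.(1)]C and B[(CAB).sup.(1)]C coincide.

```python
import numpy as np

# Illustrative matrices with rank(CAB) = rank(B) = rank(C) = rank(A)
A = np.array([[1., 0., 0.],
              [0., 2., 0.],
              [0., 0., 0.]])
B = np.array([[1., 0.],
              [0., 1.],
              [0., 0.]])
C = B.T

pinv = np.linalg.pinv
X1 = B @ pinv(A @ B) @ A @ pinv(C @ A) @ C   # first representation in (40)
X2 = B @ pinv(C @ A @ B) @ C                 # third representation in (40)

assert np.allclose(X1, X2)                   # same {1,2}-inverse
assert np.allclose(A @ X1 @ A, A)            # {1}-inverse of A
assert np.allclose(X1 @ A @ X1, X1)          # {2}-inverse of A
```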

Corollary 13. Theorem 6 is equivalent to Theorem 12 in the case rank(CAB) = rank(B) = rank(C) = rank(A).

Proof. According to the assumptions, the output of Theorem 6 becomes [A.sup.(1,2).sub.R(B),N(C)]. The proof then follows from the uniqueness of this generalized inverse.

Remark 14. It is evident that only condition (v) of Theorem 12 can be derived from the Urquhart results. All other conditions are based on the solutions of certain matrix equations, and they are introduced in Theorem 12. Also, the first two representations in (40) are introduced in the present research.

3. Algorithms and Implementation Details

The representations presented in Section 2 provide two different frameworks for computing generalized inverses. The first approach arises from the direct computation of various generalizations or certain variants of the Urquhart formula, derived in Section 2. The second approach enables computation of generalized inverses by means of solving certain matrix equations.

The dynamical-system approach is one of the most important parallel tools for solving various basic linear algebra problems. Zhang neural networks (ZNN) as well as gradient neural networks (GNN) have been simulated for finding a real-time solution of the linear time-varying matrix equation AXB = C. Simulation results confirm the efficiency of the ZNN and GNN approaches in solving both time-varying and time-invariant linear matrix equations; we refer to [28, 29] for further details. In the case of constant coefficient matrices A, B, C, it suffices to use the linear GNN of the form

[??] = -[gamma][A.sup.T] (AXB - C) [B.sup.T]. (42)
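A minimal Euler-discretized sketch of model (42) for a constant equation AXB = C; the step size h and the gain [gamma] below are illustrative choices, not values taken from the paper.

```python
import numpy as np

# Constant coefficients (illustrative); the exact solution is X* = A^{-1} C0 B^{-1}
A = np.array([[2., 0.],
              [0., 1.]])
B = np.array([[1., 0.],
              [0., 3.]])
C0 = np.array([[2., 0.],
               [0., 3.]])

gamma, h = 10.0, 1e-3        # illustrative gain and Euler step
X = np.zeros((2, 2))
for _ in range(20000):
    # Euler step of the GNN dynamics X' = -gamma * A^T (A X B - C0) B^T
    X = X - h * gamma * A.T @ (A @ X @ B - C0) @ B.T

# The trajectory converges to a solution of A X B = C0
assert np.allclose(A @ X @ B, C0, atol=1e-4)
```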

The generalized nonlinearly activated GNN model (GGNN model) is applicable in both time-varying and time-invariant case and possesses the form

[??] (t) = -[gamma]A[(t).sup.T] F (A(t)X(t)B(t) - C(t))B[(t).sup.T], (43)

where F(C) is an odd and monotonically increasing function applied element-wise to the entries of a real matrix C = ([c.sub.kj]) [member of] [R.sup.nxm]; that is, F(C) = (f([c.sub.kj])), where f(*) is an odd and monotonically increasing function. The scaling parameter [gamma] could be chosen as large as possible in order to accelerate the convergence. The convergence can be proved only in the situation with constant coefficient matrices A, B, C.
```
Algorithm 1: Computing an outer inverse with the prescribed range.

Require: Time varying matrices A(t) [member of] [C.sup.mxn] and B(t)
[member of] [C.sup.nxk].

(1) Verify rank(A(t)B(t)) = rank(B(t)).

If this condition is satisfied, then continue.

(2) Solve the matrix equation B(t)U(t)A(t)B(t) = B(t) with respect to
U(t) [member of] [C.sup.kxm].

(3) Return X(t) = B(t)U(t) = A[(t).sup.(2).sub.R(B),*].
```

Besides the linear activation function, f(x) = x, in the present paper we use the power-sigmoid activation function

f(x) = [x.sup.p], if |x| [greater than or equal to] 1, and f(x) = ((1 + [e.sup.-[xi]]) / (1 - [e.sup.-[xi]])) * ((1 - [e.sup.-[xi]x]) / (1 + [e.sup.-[xi]x])), if |x| < 1, where [xi] [greater than or equal to] 2 and p [greater than or equal to] 3 is an odd integer. (44)
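An element-wise sketch of the power-sigmoid activation; the values [xi] = 4 and p = 3 below are illustrative design choices.

```python
import math

def power_sigmoid(x, xi=4.0, p=3):
    """Power-sigmoid activation: power law outside [-1, 1], sigmoid inside."""
    if abs(x) >= 1.0:
        return x ** p                       # p is an odd integer, so f stays odd
    scale = (1 + math.exp(-xi)) / (1 - math.exp(-xi))
    return scale * (1 - math.exp(-xi * x)) / (1 + math.exp(-xi * x))

# f is odd, monotonically increasing, and matches f(x) = x^p at |x| = 1
assert abs(power_sigmoid(-0.5) + power_sigmoid(0.5)) < 1e-12
assert power_sigmoid(1.0) == 1.0
assert power_sigmoid(0.2) < power_sigmoid(0.8)
```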

Theorem 3 provides not only criteria for the existence of an outer inverse A[(t).sup.(2).sub.R(B),*] with the prescribed range, but also a method for computing such an inverse. Namely, the problem of computing a {2}-inverse X of A satisfying R(X) = R(B) boils down to the problem of computing a solution to the matrix equation BUAB = B, where U is an unknown matrix taking values in [C.sup.kxm]. If U is an arbitrary solution to this equation, then a {2}-inverse X of A satisfying R(X) = R(B) can be computed as X = BU.

The Simulink implementation of Algorithm 1 in the set of real matrices is based on GGNN model (43) for solving the matrix equation B(t)U(t)A(t)B(t) = B(t) and it is presented in Figure 5. The Simulink Scope and Display Block denoted by U(t) display input signals corresponding to the solution U(t) of the matrix equation B(t)U(t)A(t)B(t) = B(t) with respect to the time t. The underlying GGNN model in Figure 5 is

[??](t) = -[gamma]B[(t).sup.T] F (B(t)U(t)A(t)B(t) - B(t))

* [(A(t)B(t)).sup.T]. (45)

The Display Block denoted by BU displays input signals corresponding to the solution X(t) = B(t)U(t).

The block subsystem implements the power-sigmoid activation function and it is presented in Figure 1.

Theorem 5 reduces the problem of computing a {2}-inverse X of A satisfying N(X) = N(C) to the problem of computing a solution to the matrix equation CAVC = C, where V is an unknown matrix taking values in [C.sup.nxl]. Then X := [A.sup.(2).sub.*,N(C)] = VC.

The Simulink implementation of Algorithm 2 which is based on the GGNN model for solving C(t)A(t)V(t)C(t) = C(t) and computing X(t) = V(t)C(t) is presented in Figure 6. The underlying GGNN model in Figure 6 is

[??](t) = -[gamma][(C(t)A(t)).sup.T]

* F(C(t)A(t)V(t)C(t) - C(t))C[(t).sup.T]. (46)

The Display Block denoted by V(t) displays input signals corresponding to the solution V(t) of the matrix equation C(t)A(t)V(t)C(t) = C(t) with respect to the simulation time. The Display Block denoted by ATS2 displays input signals corresponding to the solution X(t) = V(t)C(t).
```
Algorithm 2: Computing an outer inverse with the prescribed null space.

Require: Time varying matrices A(t) [member of] [C.sup.mxn] and C(t)
[member of] [C.sup.lxm].

(1) Verify rank(C(t)A(t)) = rank(C(t)).

If this condition is satisfied then continue.

(2) Solve the matrix equation C(t)A(t)V(t)C(t) = C(t) with respect to
an unknown matrix V(t) [member of] [C.sup.nxl].

(3) Return X(t) = V(t)C(t) = A[(t).sup.(2).sub.*,N(C)].
```

Theorem 6 provides a powerful representation of a {2}-inverse X of A satisfying R(X) = R(B) and N(X) = N(C). Also, it suggests the following procedure for computing those generalized inverses. First, it is necessary to verify whether rank(CAB) = rank(B) = rank(C). If this is true, then by Theorem 6 it follows that the equations BUCAB = B and CABVC = C are solvable and have the same sets of solutions. We compute an arbitrary solution U of the equation BUCAB = B, and then X = BUC is the desired {2}-inverse of A.
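This procedure can be sketched by vectorizing the matrix equation: with M = CAB, the identity vec(BUM) = (M^T ⊗ B) vec(U) (column-major vec) turns BUCAB = B into an ordinary least-squares problem. The matrices below are hypothetical and chosen to satisfy the rank condition of Theorem 6:

```python
import numpy as np

# Hypothetical data satisfying rank(CAB) = rank(B) = rank(C) = 2.
A = np.diag([1., 1., 0.])
B = np.array([[1., 0.], [0., 1.], [0., 0.]])
C = np.array([[1., 0., 0.], [0., 1., 0.]])

M = C @ A @ B                               # l x k
# vec(B U M) = (M^T kron B) vec(U) for column-major vec,
# so BUCAB = B becomes a linear least-squares problem in vec(U).
K = np.kron(M.T, B)
u, *_ = np.linalg.lstsq(K, B.reshape(-1, order='F'), rcond=None)
U = u.reshape(B.shape[1], C.shape[0], order='F')   # U is k x l

X = B @ U @ C                               # the {2}-inverse with range R(B), null space N(C)
assert np.allclose(B @ U @ M, B)
assert np.allclose(X @ A @ X, X)
```

The Kronecker system is only practical for small dimensions; the GGNN flow used in the paper avoids forming K explicitly.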

The Simulink implementation of the GGNN model for solving B(t)U(t)C(t)A(t)B(t) = B(t) and computing the outer inverse X(t) = B(t)U(t)C(t) defined in Algorithm 3 is presented in Figure 2. The underlying GGNN model in Figure 2 is

[??](t) = -[gamma]B[(t).sup.T]

* F(B(t)U(t)C(t)A(t)B(t) - B(t))

* [(C(t)A(t)B(t)).sup.T]. (47)

The implementation of the dual approach, based on the solution of C(t)A(t)BV(t)C(t) = C(t) and generating the outer inverse X(t) = B(t)V(t)C(t), is presented in Figure 4. The underlying GGNN model in Figure 4 is

[??](t) = -[gamma][(C(t)A(t)B(t)).sup.T] F(C(t)A(t)B(t)V(t)

* C(t) - C(t))C[(t).sup.T]. (48)

Theorem 8 can be used in a similar way to Theorem 3: if the equation ABUA = A is solvable and its solution U is computed, then a {1}-inverse X of A satisfying R(X) [subset or equal to] R(B) is computed as X = BU. The corresponding computational procedure is given in Algorithm 4.
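A minimal sketch of this use of Theorem 8 (hypothetical data; under rank(AB) = rank(A), U = (AB)^[dagger] is one particular solution of ABUA = A):

```python
import numpy as np

# Hypothetical data with rank(AB) = rank(A) (the condition of Algorithm 4).
A = np.diag([1., 1., 0.])
B = np.array([[1., 0.], [0., 1.], [0., 0.]])

# When rank(AB) = rank(A), U = (AB)^dagger solves A B U A = A,
# because AB (AB)^dagger is the orthogonal projector onto R(AB) = R(A).
U = np.linalg.pinv(A @ B)
X = B @ U

assert np.allclose(A @ B @ U @ A, A)   # U solves the equation
assert np.allclose(A @ X @ A, A)       # X is a {1}-inverse of A
```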

Similarly, Theorem 9 can be used for computing a {1}-inverse X of A satisfying N(C) [subset or equal to] N(X), as presented in Algorithm 5.

An algorithm for computing a {1,2}-inverse with the prescribed range is based on Theorem 10. According to this theorem, we first check the condition rank(AB) = rank(A) = rank(B). If it is satisfied, then the equation BUAB = B is solvable and we compute an arbitrary solution U to this equation, after which we compute a {2}-inverse X of A satisfying R(X) = R(B) as X = BU. By Theorem 10, X is also a {1}-inverse of A. Algorithm 1 differs from Algorithm 6 only in the first step. Therefore, the implementation of Algorithm 6 uses the Simulink implementation of Algorithm 1 in the case when rank(AB) = rank(A) = rank(B).
```
Algorithm 3: Computing a {2}-inverse with the prescribed range and
null space.

Require: Time varying matrices A(t) [member of] [C.sup.mxn], B(t)
[member of] [C.sup.nxk], and C(t) [member of] [C.sup.lxm].

(1) Verify rank(C(t)A(t)B(t)) = rank(B(t)) = rank(C(t)).

If these conditions are satisfied then continue.

(2) Solve the matrix equation B(t)U(t)C(t)A(t)B(t) = B(t) with
respect to an unknown matrix U(t) [member of] [C.sup.kxl].

(3) Return X(t) = B(t)U(t)C(t) = A[(t).sup.(2).sub.R(B),N(C)].

Algorithm 4: Computing a {1}-inverse X of A satisfying R(X)
[subset or equal to] R(B).

Require: Time varying matrices A(t) [member of] [C.sup.mxn] and B(t)
[member of] [C.sup.nxk].

(1) Check the condition rank(A(t)B(t)) = rank(A(t)).

If this condition is satisfied then continue.

(2) Solve the matrix equation A(t)B(t)U(t)A(t) = A(t) with respect to
U(t) [member of] [C.sup.kxm].

(3) Return a {1}-inverse X(t) = B(t)U(t) of A(t) satisfying R(X)
[subset or equal to] R(B).

Algorithm 5: Computing a {1}-inverse X of A satisfying N(C)
[subset or equal to] N(X).

Require: Time varying matrices A(t) [member of] [C.sup.mxn] and C(t)
[member of] [C.sup.lxm].

(1) Check the condition rank(C(t)A(t)) = rank(A(t)).

If this condition is satisfied then continue.

(2) Solve the matrix equation A(t)V(t)C(t)A(t) = A(t) with respect to
an unknown matrix V(t) [member of] [C.sup.nxl].

(3) Return a {1}-inverse X(t) = V(t)C(t) of A(t) satisfying N(C)
[subset or equal to] N(X).

Algorithm 6: Computing a {1,2}-inverse with the prescribed range.

Require: Time varying matrices A(t) [member of] [C.sup.mxn] and B(t)
[member of] [C.sup.nxk].

(1) Check the condition rank(A(t)B(t)) = rank(A(t)) = rank(B(t)).

If these conditions are satisfied then continue.

(2) Solve the matrix equation B(t)U(t)A(t)B(t) = B(t) with respect
to an unknown matrix U(t) [member of] [C.sup.kxm].

(3) Return a {1,2}-inverse X(t) = B(t)U(t) of A(t) satisfying
R(X) = R(B).
```

Similarly, Theorem 11 provides an algorithm for computing [A.sup.(1,2).sub.*,N(C)]. The implementation of Algorithm 7 uses the Simulink implementation of Algorithm 2 in the case rank(CA) = rank(C) = rank(A).

Theorem 12 suggests the following procedure for computing a {1,2}-inverse X of A satisfying R(X) = R(B) and N(X) = N(C). First we check the condition rank(CAB) = rank(B) = rank(C) = rank(A). If this is true,

then the equations BUAB = B and CAVC = C are solvable, and we compute an arbitrary solution U to the first one and an arbitrary solution V of the second one. According to Theorem 12, X = BUAVC is a {1,2}-inverse X of A with R(X) = R(B) and N(X) = N(C).
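The procedure of Theorem 12 can be sketched as follows (hypothetical matrices; the pseudoinverse shortcuts below supply particular solutions of BUAB = B and CAVC = C, valid under the stated rank conditions):

```python
import numpy as np

# Hypothetical data with rank(CAB) = rank(B) = rank(C) = rank(A) = 2.
A = np.diag([1., 1., 0.])
B = np.array([[1., 0.], [0., 1.], [0., 0.]])
C = np.array([[1., 0., 0.], [0., 1., 0.]])

# Particular solutions of BUAB = B and CAVC = C
# (pseudoinverse shortcuts, valid under the rank conditions above).
U = np.linalg.pinv(A @ B)
V = np.linalg.pinv(C @ A)

X = B @ U @ A @ V @ C     # the {1,2}-inverse with range R(B) and null space N(C)
assert np.allclose(A @ X @ A, A)
assert np.allclose(X @ A @ X, X)
```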

The Simulink implementation of Algorithm 8 based on the GGNN models for solving B(t)U(t)A(t)B(t) = B(t) and C(t)A(t)V(t)C(t) = C(t) and computing X(t) = B(t)U(t)A(t)V(t)C(t) is presented in Figure 8. In this case, it is necessary to implement two parallel GGNN models of the form

[??](t) = -[gamma]B[(t).sup.T] F (B(t)U(t)A(t)B(t) - B(t)) [(A(t)B(t)).sup.T],

[??](t) = -[gamma][(C(t)A(t)).sup.T] F (C(t)A(t)V(t)C(t) - C(t)) C[(t).sup.T]. (49)
```
Algorithm 7: Computing a {1,2}-inverse with the prescribed null space.

Require: Time varying matrices A(t) [member of] [C.sup.mxn] and C(t)
[member of] [C.sup.lxm].

(1) Check the condition rank(C(t)A(t)) = rank(A(t)) = rank(C(t)).

If these conditions are satisfied then continue.

(2) Solve the matrix equation C(t)A(t)V(t)C(t) = C(t) with respect to
an unknown matrix V(t) [member of] [C.sup.nxl].

(3) Return a {1,2}-inverse X(t) = V(t)C(t) of A(t) satisfying
N(X) = N(C).

Algorithm 8: Computing a {1,2}-inverse with the prescribed range and
null space.

Require: Time varying matrices A(t) [member of] [C.sup.mxn], B(t)
[member of] [C.sup.nxk] and C(t) [member of] [C.sup.lxm].

Require: Verify rank(C(t)A(t)B(t)) = rank(B(t)) = rank(C(t)) =
rank(A(t)).

If these conditions are satisfied then continue.

(1) Solve the matrix equation B(t)U(t)A(t)B(t) = B(t) with respect to
an unknown matrix U(t) [member of] [C.sup.kxm].

(2) Solve the matrix equation C(t)A(t)V(t)C(t) = C(t) with respect to
an unknown matrix V(t) [member of] [C.sup.nxl].

(3) Return X(t) = B(t)U(t)A(t)V(t)C(t) = A[(t).sup.(1,2).sub.R(B),
N(C)].
```

There is also an alternative way to compute a {1,2}-inverse X of A with R(X) = R(B) and N(X) = N(C). Namely, first we check whether rank(CAB) = rank(B) = rank(C) = rank(A). If this is true, then by Theorem 12 it follows that there exists a {2}-inverse of A with the prescribed range R(B) and null space N(C), and each such inverse is also a {1}-inverse of A. Therefore, to compute a {1,2}-inverse of A having the range R(B) and null space N(C) we have to compute a {2}-inverse X of A with R(X) = R(B) and N(X) = N(C) in exactly the same way as in Algorithm 3. In other words, we compute an arbitrary solution U to the equation BUCAB = B, and then X = BUC is the desired {1,2}-inverse of A.

3.1. Complexity of Algorithms. The general computational pattern for computing generalized inverses is based on the general representation B[(CAB).sup.(1)]C, where the matrices A, B, C satisfy various conditions imposed in the proposed algorithms.

The first approach is based on the computation of an involved inner inverse [(CAB).sup.(1)], and it can be described in three main steps:

(1) Compute the matrix product P = CAB.

(2) Compute an inner inverse U = [P.sup.(1)] of P, for example, U = [P.sup.[dagger]].

(3) Compute the generalized inverse as the matrix product BUC.
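The three steps of the first pattern can be sketched in NumPy (an illustrative sketch only; the Moore-Penrose inverse serves as the particular inner inverse in step (2), and the matrices are hypothetical):

```python
import numpy as np

# Hypothetical data; pinv(P) plays the role of the inner inverse P^(1).
A = np.diag([1., 1., 0.])
B = np.array([[1., 0.], [0., 1.], [0., 0.]])
C = np.array([[1., 0., 0.], [0., 1., 0.]])

P = C @ A @ B                 # step (1): the matrix product P = CAB
U = np.linalg.pinv(P)         # step (2): an inner inverse of P (here P^dagger)
X = B @ U @ C                 # step (3): the generalized inverse B P^(1) C

assert np.allclose(X @ A @ X, X)
```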

The second general computational pattern for computing generalized inverses can be described in three main steps:

(1) Compute matrix products included in the required linear matrix equation.

(2) Solve the generated matrix equation with respect to the unknown matrix U.

(3) Compute the generalized inverse of A as the matrix product which includes U.

According to the first approach, the complexity of computing generalized inverses can be estimated as follows:

(1) Complexity of the matrix product P = CAB

+(2) Complexity to compute an inner inverse of P

+(3) Complexity to compute the matrix product BUC

According to the second approach, the complexity of computing generalized inverses can be expressed according to the rule:

(1) Complexity of the matrix product P included in required matrix equation which should be solved.

+(2) Complexity to solve the linear matrix equation generated in (1)

+(3) Complexity of matrix products required in final representation

Let us compare the complexities of the two representations from (14). Two possible approaches are available. The first approach assumes the computation [A.sup.(2).sub.R(B),N(C)] = B[(CAB).sup.(1)]C and the second one assumes [A.sup.(2).sub.R(B),N(C)] = BUC, where BUCAB = B. The complexity of computing B[(CAB).sup.(1)]C is

(1) complexity of the matrix product P = CAB,

+(2) complexity of computation of [P.sup.(1)],

+(3) complexity of the matrix products required in the final representation B[P.sup.(1)]C.

Complexity of computing the second expression in (14) is

(1) complexity of matrix products P = CAB,

+(2) complexity to solve appropriate linear matrix equation BUP = B with respect to U,

+(3) complexity of the matrix product BUC.

3.2. Particular Cases. The main particular cases of Theorem 6 can be derived directly and listed as follows.

(a) In the case rank(CAB) = rank(B) = rank(C) = rank(A) the outer inverse [A.sup.(2).sub.R(B),N(C)] becomes [A.sup.(1,2).sub.R(B),N(C)].

(b) If A is nonsingular and B = C = I, then the outer inverse [A.sup.(2).sub.R(B),N(C)] becomes the usual inverse [A.sup.-1].

Then the matrix equation BUCAB = B becomes UA = I and [A.sup.-1] = U.

(c) In the case B = C = [A.sup.*] or when BC = [A.sup.*] is a full-rank factorization of [A.sup.*], it follows that [A.sup.(2).sub.R(B),N(C)] = [A.sup.[dagger]].

(d) The choice m = n, B = C = [A.sup.l], l [greater than or equal to] ind(A), or the full-rank factorization BC = [A.sup.l] implies [A.sup.(2).sub.R(B),N(C)] = [A.sup.D].

(e) The choice m = n, B = C = A, or the full-rank factorization BC = A produces [A.sup.(2).sub.R(B),N(C)] = [A.sup.#].

(f) In the case m = n when A is invertible, the inverse matrix [A.sup.-1] can be generated by two choices: B = C = [A.sup.*] and B = C = I.

(g) Theorem 6 and the full-rank representation of {2,4}- and {2,3}-inverses from  are a theoretical basis for computing {2,4}- and {2,3}-inverses with the prescribed range and null space.

(h) Further, Theorems 3 and 5 provide a way to characterize {1,2,4}- and {1,2,3}-inverses of a matrix.
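Cases (c) and (d) above can be checked numerically (a hedged sketch with hypothetical matrices; numpy.linalg.pinv plays the role of the inner inverse in the representation B[(CAB).sup.(1)]C):

```python
import numpy as np

# Case (c): B = C = A^* recovers the Moore-Penrose inverse via B(CAB)^(1)C.
A = np.array([[1., 2.], [3., 4.], [5., 6.]])
Bt = A.T
X = Bt @ np.linalg.pinv(Bt @ A @ Bt) @ Bt
assert np.allclose(X, np.linalg.pinv(A))

# Case (d): B = C = A^l with l >= ind(A) recovers the Drazin inverse.
A2 = np.array([[2., 1.], [0., 0.]])        # hypothetical matrix with ind(A2) = 1
l = 1
Al = np.linalg.matrix_power(A2, l)
AD = Al @ np.linalg.pinv(Al @ A2 @ Al) @ Al
assert np.allclose(AD @ A2, A2 @ AD)                       # AX = XA
assert np.allclose(AD @ A2 @ AD, AD)                       # XAX = X
assert np.allclose(np.linalg.matrix_power(A2, l + 1) @ AD,
                   np.linalg.matrix_power(A2, l))           # A^(l+1) X = A^l
```

Since ind(A2) = 1 here, the computed AD is also the group inverse of A2.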

Corollary 15. Let A [member of] [C.sup.mxn] and C [member of] [C.sup.lxm].

(a) The following statements are equivalent:

(i) There exists a {2,4}-inverse X of A satisfying R(X) = R([(CA).sup.*]) and N(X) = N(C).

(ii) There exists U [member of] [C.sup.lxl] such that [(CA).sup.*]UCA[(CA).sup.*] = [(CA).sup.*] and CA[(CA).sup.*]UC = C.

(iii) There exist U,V [member of] [C.sup.lxl] such that [(CA).sup.*]UCA[(CA).sup.*] = [(CA).sup.*] and CA[(CA).sup.*] VC = C.

(iv) There exist U [member of] [C.sup.lxm] and V [member of] [C.sup.nxl] such that [(CA).sup.*]UA[(CA).sup.*] = [(CA).sup.*], CAVC = C, and [(CA).sup.*] U = VC.

(v) There exist U [member of] [C.sup.lxm] and V [member of] [C.sup.nxl] such that CA[(CA).sup.*]U = C and VCA[(CA).sup.*] = [(CA).sup.*].

(vi) N(CA[(CA).sup.*]) = N([(CA).sup.*]), R(CA[(CA).sup.*]) = R(C).

(vii) rank(CA[(CA).sup.*]) = rank([(CA).sup.*]) = rank(C).

(viii) [(CA).sup.*][(CA[(CA).sup.*]).sup.(1)]CA[(CA).sup.*] = [(CA).sup.*] and CA[(CA).sup.*][(CA[(CA).sup.*]).sup.(1)]C = C, for some (equivalently every) [(CA[(CA).sup.*]).sup.(1)] [member of] (CA[(CA).sup.*]){1}.

(b) If the statements in (a) are true, then the unique {2,4}-inverse of A with the prescribed range R([(CA).sup.*]) and null space N(C) is represented by

[mathematical expression not reproducible], (50)

for arbitrary [(CA[(CA).sup.*]).sup.(1)] [member of] (CA[(CA).sup.*]){1} and arbitrary U [member of] [C.sup.lxl] satisfying [(CA).sup.*]UCA[(CA).sup.*] = [(CA).sup.*] and CA[(CA).sup.*]UC = C.

Proof. (a) This part of the proof is the particular case B = [(CA).sup.*] of Theorem 6.

(b) According to the general representation of outer inverses with prescribed range and null space, it follows that [mathematical expression not reproducible]. Now, it suffices to verify that X satisfies Penrose equation (4). For this purpose, it is useful to apply the known result

A[([A.sup.*] A).sup.(1)] [A.sup.*] = A[A.sup.[dagger]], (51)

which implies

XA = [(CA).sup.*][(CA[(CA).sup.*]).sup.(1)]CA = [(CA).sup.*][([(CA).sup.*]).sup.[dagger]] = [(CA).sup.[dagger]]CA, (52)

and consequently XA = [(XA).sup.*]. Hence, (50) holds.

Corollary 16. Let A [member of] [C.sup.mxn] and B [member of] [C.sup.nxk].

(a) The following statements are equivalent:

(i) There exists a {2,3}-inverse X of A satisfying R(X) = R(B) and N(X) = N([(AB).sup.*]).

(ii) There exists U [member of] [C.sup.kxk] such that BU[(AB).sup.*]AB = B and [(AB).sup.*]ABU[(AB).sup.*] = [(AB).sup.*].

(iii) There exist U, V [member of] [C.sup.kxk] such that BU[(AB).sup.*]AB = B and [(AB).sup.*] ABV[(AB).sup.*] = [(AB).sup.*].

(iv) There exist U [member of] [C.sup.kxm] and V [member of] [C.sup.nxk] such that BUAB = B, [(AB).sup.*]AV[(AB).sup.*] = [(AB).sup.*], and BU = V[(AB).sup.*].

(v) There exist U [member of] [C.sup.kxm] and V [member of] [C.sup.nxk] such that [(AB).sup.*] ABU = [(AB).sup.*] and V[(AB).sup.*]AB = B.

(vi) N([(AB).sup.*]AB) = N(B), R([(AB).sup.*]AB) = R([(AB).sup.*]).

(vii) rank([(AB).sup.*]AB) = rank(B) = rank([(AB).sup.*]).

(viii) B[([(AB).sup.*]AB).sup.(1)][(AB).sup.*]AB = B and [(AB).sup.*]AB[([(AB).sup.*]AB).sup.(1)][(AB).sup.*] = [(AB).sup.*], for some (equivalently every) [([(AB).sup.*]AB).sup.(1)] [member of] ([(AB).sup.*]AB){1}.

(b) If the statements in (a) are true, then the unique {2,3}-inverse of A with the prescribed range R(B) and null space N([(AB).sup.*]) is represented by

[mathematical expression not reproducible], (53)

for arbitrary [([(AB).sup.*]AB).sup.(1)] [member of] ([(AB).sup.*]AB){1} and arbitrary U [member of] [C.sup.kxk] satisfying BU[(AB).sup.*]AB = B and [(AB).sup.*] ABU[(AB).sup.*] = [(AB).sup.*].

Corollary 17 shows the equivalence between the first representation given in (53) of Corollary 16 and Corollary 1 from .

Corollary 17. Let A [member of] [C.sup.mxn] and B [member of] [C.sup.nxk] satisfy rank(AB) = rank(B). Then

[mathematical expression not reproducible]. (54)

Proof. It suffices to verify

[([(AB).sup.*]AB).sup.(1)] [(AB).sup.*] = [(AB).sup.(1,3)]. (55)

Indeed, since rank([(AB).sup.*]AB) = rank(AB), it follows that

AB[([(AB).sup.*]AB).sup.(1)] [(AB).sup.*] = AB. (56)

Now, the proof can be completed using the evident fact that AB[([(AB).sup.*]AB).sup.(1)][(AB).sup.*] is a Hermitian matrix.

In the dual case, Corollary 18 is an additional result to Corollary 1 from .

Corollary 18. Let A [member of] [C.sup.mxn] and C [member of] [C.sup.lxm] satisfy rank(CA) = rank(C). Then

[mathematical expression not reproducible]. (57)

Proof. In this case, the identity

[(CA).sup.*] [(CA[(CA).sup.*]).sup.(1)] = [(CA).sup.(1,4)] (58)

can be verified similarly.

Theorem 19. Let A [member of] [C.sup.mxn]. Then

[mathematical expression not reproducible]. (59)

Proof. We make use of the equalities

[mathematical expression not reproducible] (60)

Let X [member of] A{1,2,4}, that is, A = AXA, X = XAX, and [(XA).sup.*] = XA, and set U = [X.sup.*]X. Then

X = XAX = [(XA).sup.*]X = [A.sup.*][X.sup.*]X = [A.sup.*]U,

[A.sup.*]UA[A.sup.*] = [A.sup.*][X.sup.*]XA[A.sup.*] = [A.sup.*][X.sup.*][(XA).sup.*][A.sup.*]

= [(AXAXA).sup.*] = [A.sup.*]. (61)

Conversely, let X = [A.sup.*]U and [A.sup.*]UA[A.sup.*] = [A.sup.*], for some U [member of] [C.sup.mxm]. According to (5) we have that X [member of] A{1,2}. On the other hand, from X = [A.sup.*]U and [A.sup.*]UA[A.sup.*] = [A.sup.*] it follows that XA[A.sup.*] = [A.sup.*], and it is well known that this is equivalent to X [member of] A{1,4}. Thus, X [member of] A{1,2,4}.
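The construction in this proof can be verified numerically for a concrete (hypothetical) matrix, taking X = [A.sup.[dagger]] as a {1,2,4}-inverse and U = [X.sup.*]X:

```python
import numpy as np

# Illustrative check of the proof of Theorem 19: for the {1,2,4}-inverse
# X = A^dagger, the factor U = X^* X satisfies X = A^* U and A^* U A A^* = A^*.
A = np.array([[1., 0., 1.],
              [0., 1., 1.]])
X = np.linalg.pinv(A)          # a {1,2,4}-inverse of A
U = X.conj().T @ X             # U = X^* X, an m x m matrix

assert np.allclose(A.conj().T @ U, X)                              # X = A^* U
assert np.allclose(A.conj().T @ U @ A @ A.conj().T, A.conj().T)    # A^* U A A^* = A^*
```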

The following theorem can be verified in a similar way.

Theorem 20. Let A [member of] [C.sup.mxn]. Then

[mathematical expression not reproducible]. (62)

4. Numerical Examples

All numerical experiments are performed starting from the zero initial condition. The MATLAB and Simulink version is 8.4 (R2014b).

Example 21. Consider

[mathematical expression not reproducible]. (63)

(a) This part of the example illustrates the results of Theorem 6 and is based on the implementation of Algorithm 3. The matrices A, B, C satisfy rank(B) = 2, rank(C) = 4, and rank(CAB) = 2. Since the conditions in (vii) of Theorem 6 are not satisfied, there is no exact solution of the system of matrix equations BUCAB = B and CABUC = C. The outer inverse X = B[(CAB).sup.(1)]C can be computed using the RNN approach, as follows. The Simulink implementation of Algorithm 3, which is based on the GGNN model for solving the matrix equation B(t)U(t)C(t)A(t)B(t) = B(t), gives the result presented in Figure 2. The display denoted by U(t) shows an approximate solution of the matrix equation BU(t)CAB = B. The time interval is [0,0.5], the solver is ode15s, the power-sigmoid activation is selected, and [gamma] = [10.sup.6].

Step 1. Solve the matrix equation B(t)U(t)C(t)A(t)B(t) = B(t) with respect to U(t) using an appropriate adaptation of the GGNN approach developed in [28, 29] and restated in (43). In the particular case, the model becomes

[??](t) = -[gamma]B[(t).sup.T]

* F (B(t)U(t)C(t)A(t)B(t) - B(t))

* [(C(t)A(t)B(t)).sup.T]. (64)

The matrix B is of full-column rank, and it possesses the left inverse [B.sup.-1.sub.l]. Therefore, the matrix equation BUCAB = B is equivalent to the equation UCAB - I = 0. Then the GGNN model (64) reduces to the well-known GNN model for computing the pseudoinverse of CAB. The GNN models for computing the pseudoinverse of rank-deficient matrices were introduced and described in . We further confirm the results derived in MATLAB Simulink by means of the programming package Mathematica. Mathematica gives

[mathematical expression not reproducible], (65)

which coincides with the result displayed in U(t) in Figure 2.

Step 2. The matrix X(t) = B(t)U(t)C(t) is shown in Figure 2, in the display denoted by ATS2. The residual norm of X is [[parallel]XAX - X[parallel].sub.2] = 6.5360016 * [10.sup.-15].

As a confirmation, Mathematica gives

[mathematical expression not reproducible], (66)

which coincides with the contents of the Display Block denoted as ATS2 in Figure 2. Further, the matrix U = [(CAB).sup.[dagger]] is an approximate solution of the matrix equations CABUC = C and BUCAB = B. Also, X = BUC is an approximate solution of (28), since

[parallel]CABUC - C[parallel] = [parallel]CAX - C[parallel] = 2.23452290 * [10.sup.-14],

[parallel]BUCAB - B[parallel] = [parallel]XAB - B[parallel] = 9.4574123 * [10.sup.-15]. (67)

Therefore, the equations in (28) are satisfied. In addition, (29) is satisfied by the definition of X. Therefore, X is an approximate (B, C)-inverse of A.

Trajectories of the entries in the matrix B(t)U(t)C(t) generated inside the time [0,5 * [10.sup.-2]], using [gamma] = [10.sup.6] and ode15s solver, are presented in Figure 3.

(b) The dual approach in Theorem 6, as well as in the implementation of Algorithm 3, is based on the solution of C(t)A(t)B(t)V(t)C(t) = C(t) and the associated outer inverse [X.sub.1](t) = B(t)V(t)C(t). The Simulink implementation of the GGNN model which is based on the matrix equation CABV(t)C = C and the matrix product [X.sub.1](t) = BV(t)C gives the result presented in Figure 4. The display denoted by V(t) represents an approximate solution of the matrix equation CABV(t)C = C. The time interval is [0,0.5], the solver is ode15s, the linear activation is selected, and [gamma] = [10.sup.11].

Since the matrix C is right invertible, the matrix equation CABV(t)C = C gives the dual form of the matrix equation for computing [(CAB).sup.[dagger]]; that is, CABV(t) = I.

Therefore, both X and [X.sub.1] are approximations of the same outer inverse of A, equal to B[(CAB).sup.[dagger]]C. To that end, it can be verified that X and [X.sub.1] satisfy [parallel]X - [X.sub.1][parallel] = 4.143699 * [10.sup.-11].

(c) The goal of this part of the example is to illustrate Theorem 3 and Algorithm 1. The matrices A and B satisfy rank(AB) = rank(B), so it is justifiable to search for a solution U(t) of the matrix equation BU(t)AB = B and the associated outer inverse X = BU. In order to highlight the results derived by the implementation of Algorithm 1, it is important to mention that

[mathematical expression not reproducible]. (68)

On the other hand, the Simulink implementation gives another element BU(t) from A[{2}.sub.R(B),*], different from [X.sub.1] = [(AB).sup.[dagger]]. The matrix BU(t) is presented in Figure 5. The display denoted by U(t) represents an approximate solution of the matrix equation BU(t)AB = B. The time interval is [0, [10.sup.-2]] and the solver is ode15s.

(d) The goal of this part of the example is to illustrate Theorem 5 and Algorithm 2. Since rank(CA) = rank(C), it is justifiable to search for a solution of the matrix equation CAV(t)C = C. The Simulink implementation of the GGNN model which is based on the matrix equation C(t)A(t)V(t)C(t) = C(t) gives the result presented in Figure 6. The display denoted by V(t) represents an approximation of V(t). The display denoted by ATS2 represents the matrix product X = V(t)C(t). The time interval is [0,1] and the solver is ode15s. The activation is achieved by the power-sigmoid function. The corresponding outer inverse of A is X = VC [member of] A[{2}.sub.*,N(C)].

It is important to mention that the results V(t) and X = V(t)C given by the implementation of Algorithm 2 differ from the pseudoinverse of CA and from [(CA).sup.[dagger]]C, respectively, since

[mathematical expression not reproducible]. (69)

Example 22. The aim of the present example is a verification of Theorem 6 and Algorithm 3 in the important case B = C = [A.sup.T]. For this purpose, we consider the same matrix A as in Example 21. The Mathematica function Pseudoinverse gives the following exact Moore-Penrose inverse of A:

[mathematical expression not reproducible]. (70)

It can be approximated using the Simulink implementation of Algorithm 3 corresponding to the choice B = C = [A.sup.T]. Indeed, according to Example 21, the Simulink implementation of Algorithm 3 approximates the outer inverse [A.sup.T][([A.sup.T]A[A.sup.T]).sup.[dagger]][A.sup.T] = [A.sup.[dagger]]. The implementation and generated results are presented in Figure 7. The GGNN model underlying the implementation is

[??](t) = -[gamma]A(t)

* F (A[(t).sup.T]U(t)A[(t).sup.T]A(t)A[(t).sup.T] - A[(t).sup.T])

* [(A[(t).sup.T]A(t)A[(t).sup.T]).sup.T]. (71)

The display denoted by U(t) represents an approximate solution of the matrix equation [A.sup.T]U(t)[A.sup.T]A[A.sup.T] = [A.sup.T] and the display denoted by MP represents an approximation of [A.sup.[dagger]]. The time interval is [0,0.001], the solver is ode15s, and the scaling parameter is [gamma] = [10.sup.8].

Example 23. Let us consider the same matrix A as in Example 21 and

[mathematical expression not reproducible]. (72)

The matrices B and C are generated with the purpose of illustrating Theorem 12 and Algorithms 8 and 9. Conditions (iv) and (v) of Theorem 12 are satisfied. Therefore, it is expected that the results generated by Algorithms 8 and 9 are the same.

The Simulink implementation of Algorithm 9 generates the results presented in Figure 8. The simulation is performed within the time interval [0,10], the scaling constant is [gamma] = [10.sup.7], and the selected solver is ode15s.

The Simulink implementation of Algorithm 8 generates the results presented in Figure 9. The time interval is [0,0.5], [gamma] = [10.sup.11], and the solver is ode15s.

As a verification, Mathematica gives the following result:

[mathematical expression not reproducible]. (73)

Let us observe that X = [A.sup.(1,2).sub.R(B),N(C)] = B[(CAB).sup.[dagger]]C and [X.sub.1] = B[(AB).sup.[dagger]]A[(CA).sup.[dagger]]C are very close with respect to the Frobenius norm, since [parallel]X - [X.sub.1][parallel] = 4.710014456589536 * [10.sup.-12]. In the case U = [(CAB).sup.[dagger]] and X = BUC, the matrix equations CAX = CABUC = C and XAB = BUCAB = B are satisfied, since

[parallel]CABUC - C[parallel] = 1.631647583439993 * [10.sup.-13],

[parallel]BUCAB - B[parallel] = 2.405407190529498 * [10.sup.-13]. (74)

Example 24. (a) Consider the time-varying symmetric matrix [S.sub.5], belonging to n x n matrices [S.sub.n] of rank n - 1 from :

[mathematical expression not reproducible]. (75)

The Moore-Penrose inverse of [S.sub.5](t) is equal to

[mathematical expression not reproducible]. (76)

Figure 10 shows the Simulink-based computation of [S.sub.5][(t).sup.[dagger]] in the time period [0, 5 * [10.sup.-7]] using the solver ode15s and the parameter [gamma] = [10.sup.8].

Trajectories of approximations of the entries in the matrix [S.sub.5][(t).sup.[dagger]] inside the time interval [0, 5 * [10.sup.-7]], generated using [gamma] = [10.sup.8], are presented in Figure 11. It is evident that these trajectories follow the graphs of the corresponding expressions (representing entries) in [S.sup.[dagger].sub.5].

(b) Now, consider the following matrices B(t) and C(t) in conjunction with [S.sub.5](t):

[mathematical expression not reproducible]. (77)

The outer inverse [S.sub.5][(t).sup.(2).sub.R(B),N(C)] of [S.sub.5](t) corresponding to B(t) and C(t) is equal to

[mathematical expression not reproducible]. (78)

Its computation in the time period [0, 5 * [10.sup.-2]] using solver ode15s and the parameter [gamma] = [10.sup.11] is presented in Figure 12.

Example 25. Here we discuss the behaviour of Algorithm 3 in the case when the condition rank(CAB) = rank(B) = rank(C) is not satisfied. For this purpose, let us consider the matrices

[mathematical expression not reproducible]. (79)

These matrices do not satisfy the requirement rank(CAB) = rank(B) = rank(C) of Algorithm 3, since

rank (A) = 5,

rank (B) = 4,

rank (C) = 3,

rank (CAB) = 2. (80)

On the other hand, the conditions rank(AB) = rank(B) and rank(CA) = rank(C) are valid, so the conditions required in Algorithms 1 and 2 hold. An application of Algorithm 3 in the time interval [0, [10.sup.-9]], based on the scaling constant [gamma] = [10.sup.7] and the ode15s solver, gives the results for U(t) and X = BUC presented in Figure 13.

An application of the dual case of Algorithm 3 in the time interval [0, [10.sup.-8]], based on the scaling constant [gamma] = [10.sup.7] and the ode15s solver, gives the results for V(t) and X = BVC presented in Figure 14.

Trajectories of the elements of the matrix B(t)U(t)C(t) in the time interval [0, [10.sup.-9]] are presented in Figure 15.

According to the obtained results, the following can be concluded.

(1) The matrix equation BUCAB = B is not satisfied, since [parallel]BUCAB - B[parallel] = 39.53256. This fact is expected, since the conditions rank(CAB) = rank(B) = rank(C) are not satisfied, nor is the matrix B invertible. Similarly, the matrix equation CABVC = C is not satisfied, since [parallel]CABVC - C[parallel] = 27.412588.

(2) Both the matrices U and V are approximations of [(CAB).sup.[dagger]], since

[mathematical expression not reproducible]. (81)

This means that the solutions of the matrix equations BUCAB = B and CABVC = C given by the GNN model approximate the solution of the GNN model corresponding to the matrix equations UCAB = I and CABV = I, respectively, which is equal to [(CAB).sup.[dagger]].

(3) Accordingly, the output denoted by ATS2 approximates the outer inverse

[mathematical expression not reproducible] (82)

to five decimal places. In conclusion, the Simulink implementation of Algorithm 3 computes the outer inverse X = B[(CAB).sup.[dagger]]C which satisfies condition (29) from the definition of the (B, C)-inverse, but not condition (28) from the same definition. In other words, X satisfies neither R(X) = R(B) nor N(X) = N(C).

(4) Observations 2 and 3 finally imply that the GGNN model can be used for online time-varying pseudoinversion of both the matrices A and CAB.

5. Conclusion

The contribution of the present paper is both theoretical and computational. Conditions for the existence and representations of {2}-, {1,2}-, and {1}-inverses with prescribed conditions on their ranges and null spaces are proposed. A new computational framework for these generalized inverses is proposed. This approach arises from the derived general representations and involves solutions of certain matrix equations. In general, the methods and algorithms proposed in the present paper are aimed at the computation of various classes of generalized inverses of the form B[(CAB).sup.(1)]C, where [(CAB).sup.(1)] are solutions of the proposed matrix equations, solvable under specified conditions.

Our decision is to apply the GGNN approach in finding solutions of the required matrix equations. Also, we use the Simulink implementation of the underlying RNN models. This decision allows us to extend the derived algorithms to time-varying matrices. Such an approach also makes it possible to compute two types of generalized inverses, namely, inner and/or outer inverses of A and inner inverses of the matrix product CAB. Illustrative numerical and simulation examples are presented to demonstrate the validity of the derived theoretical results and the proposed methods.

It is worth mentioning that the blurring process which is applied to the original image F and produces the blurred image G is expressed by a certain matrix equation of the form

[mathematical expression not reproducible], (83)

wherein it is assumed that s = [m.sub.2] + [n.sub.1] - 1, r = [m.sub.1] + [n.sub.2] - 1, where [n.sub.1] (resp., [n.sub.2]) is the length of the horizontal (resp., vertical) blurring in pixels. Solutions of these types of matrix equations, based on the pseudoinverses of [H.sub.c] and [H.sub.r] and on least squares solutions, were investigated in [33-35]. A possible application of the proposed algorithms in finding least squares solutions of matrix equation (83) could be a topic for further research.
```
Algorithm 9: Alternative computing of a {1,2}-inverse with the
prescribed range and null space.

Require: Time varying matrices A(t) [member of] [C.sup.mxn], B(t)
[member of] [C.sup.nxk] and C(t) [member of] [C.sup.lxm].

Require: Verify rank(C(t)A(t)B(t)) = rank(B(t)) = rank(C(t)) =
rank(A(t)).

If these conditions are satisfied then continue.

(1) Solve the matrix equation B(t)U(t)C(t)A(t)B(t) = B(t) with respect
to an unknown matrix U(t) [member of] [C.sup.kxl].

(2) Return X(t) = B(t)U(t)C(t) = A[(t).sup.(1,2).sub.R(B),N(C)].
```

https://doi.org/10.1155/2017/6429725

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The first and second authors gratefully acknowledge support from the Ministry of Education and Science of the Republic of Serbia, Grant no. 174013. The first and third authors gratefully acknowledge support from the project "Applying Direct Methods for Digital Image Restoring" of the Goce Delcev University.

References

[1] A. Ben-Israel and T. N. E. Greville, Generalized Inverses: Theory and Applications, Springer, New York, NY, USA, 2nd edition, 2003.

[2] G. Wang, Y. Wei, and S. Qiao, Generalized Inverses: Theory and Computations, Science Press, New York, NY, USA, 2003.

[3] X. Sheng and G. Chen, "Full-rank representation of generalized inverse [A.sup.(2).sub.T,S] and its application," Computers & Mathematics with Applications, vol. 54, no. 11-12, pp. 1422-1430, 2007.

[4] P. Stanimirovic, S. Bogdanovic, and M. Ciric, "Adjoint mappings and inverses of matrices," Algebra Colloquium, vol. 13, no. 3, pp. 421-432, 2006.

[5] Y.-L. Chen and X. Chen, "Representation and approximation of the outer inverse [A.sup.(2).sub.T,S] of a matrix A," Linear Algebra and Its Applications, vol. 308, no. 1-3, pp. 85-107, 2000.

[6] X. Liu, H. Jin, and Y. Yu, "Higher-order convergent iterative method for computing the generalized inverse and its application to Toeplitz matrices," Linear Algebra and Its Applications, vol. 439, no. 6, pp. 1635-1650, 2013.

[7] X. Liu and Y. Qin, "Successive matrix squaring algorithm for computing the generalized inverse [A.sup.(2).sub.T,S]," Journal of Applied Mathematics, vol. 2012, Article ID 262034, 12 pages, 2012.

[8] P. S. Stanimirovic and D. S. Cvetkovic-Ilic, "Successive matrix squaring algorithm for computing outer inverses," Applied Mathematics and Computation, vol. 203, no. 1, pp. 19-29, 2008.

[9] P. S. Stanimirovic and F. Soleymani, "A class of numerical algorithms for computing outer inverses," Journal of Computational and Applied Mathematics, vol. 263, pp. 236-245, 2014.

[10] Y. Wei, "A characterization and representation of the generalized inverse [A.sup.(2).sub.T,S] and its applications," Linear Algebra and Its Applications, vol. 280, no. 2-3, pp. 87-96, 1998.

[11] Y. Wei and H. Wu, "The representation and approximation for the generalized inverse [A.sup.(2).sub.T,S]," Applied Mathematics and Computation, vol. 135, no. 2-3, pp. 263-276, 2003.

[12] Y. Wei and H. Wu, "(T, S) splitting methods for computing the generalized inverse [A.sup.(2).sub.T,S] and rectangular systems," International Journal of Computer Mathematics, vol. 77, no. 3, pp. 401-424, 2001.

[13] H. Yang and D. Liu, "The representation of generalized inverse [A.sup.(2,3).sub.T,S] and its applications," Journal of Computational and Applied Mathematics, vol. 224, no. 1, pp. 204-209, 2009.

[14] N. S. Urquhart, "Computation of generalized inverse matrices which satisfy specified conditions," SIAM Review, vol. 10, pp. 216-218, 1968.

[15] M. P. Drazin, "A class of outer generalized inverses," Linear Algebra and Its Applications, vol. 436, no. 7, pp. 1909-1923, 2012.

[16] J. Jang, S. Lee, and S. Shin, "An optimization network for matrix inversion," in Neural Information Processing Systems, pp. 397-401, College Park, Md, USA, 1988.

[17] L. Fa-Long and B. Zheng, "Neural network approach to computing matrix inversion," Applied Mathematics and Computation, vol. 47, no. 2-3, pp. 109-120, 1992.

[18] J. Wang, "A recurrent neural network for real-time matrix inversion," Applied Mathematics and Computation, vol. 55, no. 1, pp. 89-100, 1993.

[19] A. Cichocki, T. Kaczorek, and A. Stajniak, "Computation of the Drazin inverse of a singular matrix making use of neural networks," Bulletin of the Polish Academy of Sciences: Technical Sciences, vol. 40, pp. 387-394, 1992.

[20] A. Cichocki and R. Unbehauen, "Neural networks for solving systems of linear equations and related problems," IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, vol. 39, no. 2, pp. 124-138, 1992.

[21] J. Wang, "Recurrent neural networks for computing pseudoinverses of rank-deficient matrices," SIAM Journal on Scientific Computing, vol. 18, no. 5, pp. 1479-1493, 1997.

[22] Y. Wei, "Recurrent neural networks for computing weighted Moore-Penrose inverse," Applied Mathematics and Computation, vol. 116, no. 3, pp. 279-287, 2000.

[23] Y. Xia, T. Chen, and J. Shan, "A novel iterative method for computing generalized inverse," Neural Computation, vol. 26, no. 2, pp. 449-465, 2014.

[24] P. S. Stanimirovic, I. S. Zivkovic, and Y. Wei, "Recurrent neural network for computing the Drazin inverse," IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 11, pp. 2830-2843, 2015.

[25] I. S. Zivkovic, P. S. Stanimirovic, and Y. Wei, "Recurrent neural network for computing outer inverse," Neural Computation, vol. 28, no. 5, pp. 970-998, 2016.

[26] P. S. Stanimirovic, I. S. Zivkovic, and Y. Wei, "Neural network approach to computing outer inverses based on the full rank representation," Linear Algebra and Its Applications, vol. 501, pp. 344-362, 2016.

[27] C.-G. Cao and X. Zhang, "The generalized inverse [A.sup.(2).sub.T,S] and its applications," Journal of Applied Mathematics and Computing, vol. 11, no. 1-2, pp. 155-164, 2003.

[28] K. Chen, S. Yue, and Y. Zhang, "MATLAB simulation and comparison of Zhang neural network and gradient neural network for online solution of linear time-varying matrix equation AXB - C = 0," in Proceedings of the International Conference on Intelligent Computing (ICIC '08), D. S. Huang, D. C. Wunsch, D. S. Levine, and K. H. Jo, Eds., vol. 5227 of LNAI, pp. 68-75, Shanghai, China, 2008.

[29] Y. Zhang and K. Chen, "Comparison on Zhang neural network and gradient neural network for time-varying linear matrix equation AXB = C solving," in Proceedings of the IEEE International Conference on Industrial Technology (ICIT '08), April 2008.

[30] P. S. Stanimirovic, D. S. Cvetkovic-Ilic, S. Miljkovic, and M. Miladinovic, "Full-rank representations of {2,4}, {2,3}-inverses and successive matrix squaring algorithm," Applied Mathematics and Computation, vol. 217, no. 22, pp. 9358-9367, 2011.

[31] S. Srivastava and D. K. Gupta, "A new representation for [A.sup.(2,3).sub.T,S]," Applied Mathematics and Computation, vol. 243, pp. 514-521, 2014.

[32] G. Zielke, "Report on test matrices for generalized inverses," Computing, vol. 36, no. 1-2, pp. 105-162, 1986.

[33] P. S. Stanimirovic, I. Stojanovic, V. N. Katsikis, D. Pappas, and Z. Zdravev, "Application of the least squares solutions in image deblurring," Mathematical Problems in Engineering, vol. 2015, Article ID 298689, 18 pages, 2015.

[34] P. S. Stanimirovic, S. Chountasis, D. Pappas, and I. Stojanovic, "Removal of blur in images based on least squares solutions," Mathematical Methods in the Applied Sciences, vol. 36, no. 17, pp. 2280-2296, 2013.

[35] P. Stanimirovic, I. Stojanovic, S. Chountasis, and D. Pappas, "Image deblurring process based on separable restoration methods," Computational and Applied Mathematics, vol. 33, no. 2, pp. 301-323, 2014.

Predrag S. Stanimirovic, (1) Miroslav Ciric, (1) Igor Stojanovic, (2) and Dimitrios Gerontitis (3)

(1) Faculty of Science and Mathematics, Department of Computer Science, University of Nis, Visegradska 33, 18000 Nis, Serbia

(2) Faculty of Computer Science, Goce Delcev University, Goce Delcev 89, 2000 Stip, Macedonia

(3) Aristoteleion Panepistimion, Thessalonikis, Greece

Correspondence should be addressed to Predrag S. Stanimirovic; pecko@pmf.ni.ac.rs

Received 3 January 2017; Accepted 18 April 2017; Published 5 June 2017

Caption: Figure 1: Block for the implementation of the power-sigmoid activation function (a) and its subsystem (b).

Caption: Figure 2: GGNN model for computing B(t)U(t)C(t)A(t)B(t) = B(t), X(t) = B(t)U(t)C(t).

Caption: Figure 3: Trajectories of elements of the matrix BUC.

Caption: Figure 4: Simulink implementation of the GNN model for computing CABV(t)C = C, [X.sub.1] = BVC.

Caption: Figure 5: Simulink implementation of the GNN model for computing BUAB = B, X = BU [member of] A[{2}.sub.R(B),*].

Caption: Figure 6: Simulink implementation of the GNN model for computing CAVC = C, X = VC [member of] A[{2}.sub.*,N(C)].

Caption: Figure 7: Simulink implementation of the GNN model for computing [A.sup.[dagger]] using Algorithm 3.

Caption: Figure 8: Simulink implementation of Algorithm 9.

Caption: Figure 9: Simulink implementation of Algorithm 8.

Caption: Figure 11: Trajectories of elements of the matrix [S.sub.5][(t).sup.[dagger]].

Caption: Figure 12: The Simulink model for computing B(t)[(C(t)[S.sub.5](t)B(t)).sup.(1)]C(t).

Caption: Figure 13: The implementation of Algorithm 3 when its conditions are not satisfied.

Caption: Figure 14: Dual implementation of Algorithm 3 when its conditions are not satisfied.

Caption: Figure 15: Trajectories of elements in B(t)U(t)C(t) in the period of time [0, [10.sup.-9]].