
The Tensor Padé-Type Approximant with Application in Computing the Tensor Exponential Function.

1. Introduction

The tensor exponential function is an important and widely used function, owing to its key role in the solution of tensor differential equations [1-4]. For instance, a Markovian master equation can be written as the tensor differential equation ([partial derivative]/[partial derivative]t)P(t) = A * P(t), where P(t) is the probability tensor [5]. Consider the initial value problem defined by the tensor ordinary differential equation [6, 7]

Ẏ(t) = AY(t),

Y(t_0) = Y_0, (1)

where the superimposed dot denotes differentiation with respect to t, and A and Y_0 are given constant tensors. The solution of system (1) is Y(t) = exp[(t - t_0)A] Y_0, where exp(·) is the tensor exponential function.

To solve (1), we need to evaluate an exponential function of the tensor A. Recently, tensor computation, especially the computation of tensor eigenvalues, has attracted the attention of many scholars, and some important results can be found in the current literature [8-15]. For instance, the computation of exp(At) is required in applications such as finite strain hyperelastic-based multiplicative plasticity models [7, 16-19]. Explicitly, for a generic tensor A, the tensor exponential can be expressed by means of its series representation [7] (p. 749):

exp(A) = Σ_{n=0}^{∞} (1/n!) A^n, (2)

The series (2) is absolutely convergent for any argument A and, like its scalar counterpart, can be used to evaluate the tensor exponential function to any prescribed degree of accuracy [16]. The computation of (2) is carried out by simply truncating the infinite series after n_max terms:

exp(A) ≈ Σ_{n=0}^{n_max} (1/n!) A^n, (3)

with n_max being such that ‖A^{n_max}‖/n_max! < ε_tol (cf. Algorithm 23).

However, the accuracy and effectiveness of the preceding algorithm are limited by round-off error and by the choice of termination criterion [16]. The Padé approximant has become by far the most widely used tool in the calculation of exponential functions and formal power series, for the following reasons: first, the series may converge too slowly to be of any use, and the approximation can accelerate its convergence; second, only a few coefficients of the series may be known, and a good approximation to the series is needed to obtain properties of the function it represents [20]. For instance, the matrix Padé-type approximant (MPTA) [21] can be used to simplify a high-degree multivariable system by approximating the transfer function matrix G(s), which can be expanded into a power series with matrix coefficients, i.e., G(s) = Σ_{i=0}^{∞} c_i s^i, where c_i ∈ C^{s×t}. The key to constructing a TPTA is to keep the order of the different powers of the tensor A the same; for this purpose, we introduce the t-product [22, 23] of two tensors. In addition, in order to give the definition of TPTA, we introduce a generalized linear functional on the tensor space for the first time.

This paper is organized as follows. In Section 2, we provide some preliminaries: first we introduce the t-product of two tensors, and then we give the definitions of the tensor exponential function and of the Frobenius norm of a tensor. In Section 3.1, we define the tensor Padé-type approximant by means of a generalized linear functional; the TPTA has the form of a tensor numerator and a scalar denominator. We then introduce the definition of an orthogonal polynomial with respect to the generalized linear functional and sketch an algorithm to compute the TPTA. Numerical examples are given and analyzed in Section 4. Finally, we close with concluding remarks in Section 5.

2. Preliminaries

One main problem arises in approximating the tensor exponential function: how to expand e^{At} into a power series for order-p (p ≥ 3) tensors. For a symmetric second-order tensor A, higher powers of A can be computed by the Cayley-Hamilton theorem [24], but this fails for order-p (p ≥ 3) tensors. Therefore, in this section we shall utilize the t-product to obtain higher powers of order-p (p ≥ 3) tensors. First, we introduce some notation and basic definitions which will be used in the sequel. Throughout this paper, tensors are denoted by calligraphic letters (e.g., A, B), capital letters represent matrices, and lowercase letters refer to scalars.

An order-p tensor A can be written as

A = (a_{i_1 i_2 ... i_p}) ∈ R^{n_1 × n_2 × ⋯ × n_p}. (4)

Thus, a matrix is considered a second-order tensor, and a vector is a first-order tensor [22]. For i = 1, ..., n_p, we denote by A_i the tensor of order p - 1 created by holding the pth index of A fixed at i. For example, consider a third-order tensor A ∈ R^{3×3×3}. Fixing the third index of A, we obtain three 3×3 matrices (i.e., second-order tensors) A_1, A_2, and A_3, with elements

[mathematical expression not reproducible], (5)


Now, we will define the t-product of two tensors.

Definition 1 (see [22, 23]). Let A be an n_1 × n_2 × ⋯ × n_p tensor. Then the block circulant pattern tensor of A, denoted by circ(A), is given by

[mathematical expression not reproducible], (6)

Define unfold(·), which maps an n_1 × n_2 × ⋯ × n_p tensor to an n_1 n_p × n_2 × ⋯ × n_{p-1} block tensor, in the following way:

[mathematical expression not reproducible]. (7)

If A is an order-3 tensor, then unfold(A) is a block vector. Similarly, define fold(·) as the inverse operation, which takes an n_1 n_p × n_2 × ⋯ × n_{p-1} block tensor and returns an n_1 × n_2 × ⋯ × n_p tensor; then

fold (unfold (A)) = A. (8)

Definition 2 (see [23]). Let A be an n_1 × n_2 × n_3 × ⋯ × n_p tensor and B an n_2 × l × n_3 × ⋯ × n_p tensor. Then the t-product A * B is the n_1 × l × n_3 × ⋯ × n_p tensor defined recursively as

A * B = fold(circ(A) * unfold(B)). (9)

Remark 3. If A and B are order-2 tensors, then the product "*" can be replaced by standard matrix multiplication.

Remark 4. The kth power of A is defined as A^k = A * A * ⋯ * A (k factors), where "*" denotes the t-product.

Example 5. Let

[mathematical expression not reproducible], (10)

Then, from Definition 2, we have

[mathematical expression not reproducible]. (11)

Remark 6. A characteristic feature of the t-product is that it ensures that the order of the product of two tensors does not change, whereas other tensor multiplications do not have this feature; that is why we choose the t-product as the multiplication of tensors.
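For order-3 tensors, Definitions 1 and 2 can be turned into a short computational sketch. The following NumPy code (the names unfold, fold, circ, and t_product are ours) forms the block circulant matrix of Definition 1 explicitly; it is a minimal illustration for small tensors, not an efficient implementation.

```python
import numpy as np

def unfold(A):
    """Stack the frontal slices A[:, :, 0], ..., A[:, :, n3-1] vertically (eq. (7))."""
    n1, n2, n3 = A.shape
    return A.transpose(2, 0, 1).reshape(n3 * n1, n2)

def fold(M, shape):
    """Inverse of unfold (eq. (8))."""
    n1, n2, n3 = shape
    return M.reshape(n3, n1, n2).transpose(1, 2, 0)

def circ(A):
    """Block circulant matrix whose first block column is unfold(A) (Definition 1)."""
    n1, n2, n3 = A.shape
    C = np.zeros((n3 * n1, n3 * n2))
    for i in range(n3):
        for j in range(n3):
            C[i * n1:(i + 1) * n1, j * n2:(j + 1) * n2] = A[:, :, (i - j) % n3]
    return C

def t_product(A, B):
    """t-product A * B of order-3 tensors (Definition 2)."""
    n1, n2, n3 = A.shape
    assert B.shape[0] == n2 and B.shape[2] == n3
    return fold(circ(A) @ unfold(B), (n1, B.shape[1], n3))
```

With the identity tensor I of Definition 8 (first frontal slice equal to the identity matrix, the remaining slices zero), one can check that A * I = I * A = A.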

The tensor exponential function is a function of a tensor argument analogous to the ordinary exponential function; it can be defined as follows.

Definition 7. Let A be an n_1 × n_2 × ⋯ × n_p real or complex tensor. The tensor exponential function of t, denoted by e^{At} or exp(At), is the n_1 × n_2 × ⋯ × n_p tensor given by the power series

e^{At} = Σ_{k=0}^{∞} (1/k!) (At)^k, (12)

where A^0 is defined to be the identity tensor I (see Definition 8) of the same order as A.

Definition 8 (see [23]). The n × n × l_1 × ⋯ × l_{p-2} order-p identity tensor I (p ≥ 3) is the tensor such that I_1 is the n × n × l_1 × ⋯ × l_{p-3} order-(p - 1) identity tensor and I_j is the order-(p - 1) zero tensor for j = 2, 3, ..., l_{p-2}.

By Definition 8, we can define the tensor inverse, transpose, and orthogonality. However, we do not discuss these notions here, as they are beyond the scope of the present work. For the details of these definitions, we refer the reader to [22, 23, 25] and the references therein.

Let A = (a_{i_1 i_2 ⋯ i_p}) ∈ R^{n_1 × n_2 × ⋯ × n_p}; then the norm of a tensor is the square root of the sum of the squares of all its entries [25]; i.e.,

‖A‖ = (Σ_{i_1=1}^{n_1} Σ_{i_2=1}^{n_2} ⋯ Σ_{i_p=1}^{n_p} a_{i_1 i_2 ⋯ i_p}^2)^{1/2}. (13)

This is analogous to the matrix Frobenius norm. The inner product of two same-sized tensors A, B ∈ R^{n_1 × n_2 × ⋯ × n_p} is the sum of the products of their entries [25]; i.e.,

⟨A, B⟩ = Σ_{i_1=1}^{n_1} Σ_{i_2=1}^{n_2} ⋯ Σ_{i_p=1}^{n_p} a_{i_1 i_2 ⋯ i_p} b_{i_1 i_2 ⋯ i_p}. (14)

It follows immediately that ⟨A, A⟩ = ‖A‖^2.
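In code, (13) and (14) are simple elementwise reductions; a minimal NumPy sketch (the tensors A and B below are illustrative, not from the paper):

```python
import numpy as np

A = np.arange(8.0).reshape(2, 2, 2)   # a small order-3 tensor with entries 0..7
B = np.ones((2, 2, 2))

norm_A = np.sqrt((A ** 2).sum())      # tensor norm, eq. (13)
inner_AB = (A * B).sum()              # inner product <A, B>, eq. (14)

assert np.isclose((A * A).sum(), norm_A ** 2)   # <A, A> = ||A||^2
```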

3. Tensor Pade-Type Approximant

Let f(x) be a given power series with tensor coefficients; i.e.,

f(x) = Σ_{i=0}^{∞} A_i x^i, A_i ∈ R^{n_1 × n_2 × ⋯ × n_p}. (15)

Let P denote the set of scalar polynomials in one real variable whose coefficients belong to the real field R, and let P_k denote the set of elements of P of degree less than or equal to k.

Let φ be a tensor-valued linear functional on P, and let it act on t by

φ(t^i) = A_i, i = 0, 1, 2, .... (16)

Then, by the linearity of φ, we have

f(x) = Σ_{i=0}^{∞} φ(t^i) x^i = φ((1 - xt)^{-1}). (17)

3.1. Definition of the Tensor Padé-Type Approximant. Let v_n(x) = b_0 + b_1 x + ⋯ + b_n x^n (b_0 ≠ 0) be a scalar polynomial in P_n of exact degree n; in this case, v_n(x) is said to be quasi-monic. Define the tensor polynomial W_{n-1}(x) associated with v_n(x), with tensor-valued coefficients, by

W_{n-1}(x) = φ((v_n(x) - v_n(t))/(x - t)). (18)

It is easily seen that W_{n-1}(x) is a tensor polynomial of exact degree n - 1 in x. Set

ṽ_n(x) = x^n v_n(x^{-1}), (19)

W̃_{n-1}(x) = x^{n-1} W_{n-1}(x^{-1}). (20)

Then the polynomials ṽ_n(x) and W̃_{n-1}(x) are obtained from v_n(x) and W_{n-1}(x), respectively, by reversing the numbering of the coefficients. By the procedure given above, the following conclusion is obtained.

Theorem 9. f(x) - W̃_{n-1}(x)/ṽ_n(x) = O(x^n).

Proof. Expanding (v_n(x) - v_n(t))/(x - t) in (18) and applying φ yields

[mathematical expression not reproducible]. (21)

Computing ṽ_n(x) f(x), we get

[mathematical expression not reproducible]. (22)


[mathematical expression not reproducible]. (23)

Definition 10. W̃_{n-1}(x)/ṽ_n(x) is called a tensor Padé-type approximant of order n for the given power series (15) and is denoted by (n - 1/n)_f(x).

Remark 11. The polynomial v_n(x), called the generating polynomial of (n - 1/n)_f with respect to the power series f(x), can be chosen arbitrarily.

Remark 12. The tensor Padé-type approximant (n - 1/n)_f is subject to a degree constraint caused by its construction: the method does not yield a tensor Padé-type approximant of type (m/n) when m differs from n - 1.

To fill this gap, we define a new tensor Pade-type approximant by introducing a generalized linear functional.

Let φ^{(q)} be a generalized linear functional on P, and let it act on t by

φ^{(q)}(t^i) = A_{q+i}, i = 0, 1, 2, .... (24)

Similarly to what was done for W_{n-1}(x), we consider the polynomial W_q(x) associated with v_n(x), defined by

W_q(x) = φ^{(q)}((v_n(x) - v_n(t))/(x - t)). (25)


W̃_q(x) = x^{n-1} W_q(x^{-1}), (26)

and define

P_{m,n}(x) = ṽ_n(x) Σ_{i=0}^{m-n} A_i x^i + x^{m-n+1} W̃_q(x). (27)

Then we have the following conclusion.

Theorem 13. Let f(x) be given by (15) and m ≥ n; then f(x) - P_{m,n}(x)/ṽ_n(x) = O(x^{m+1}).

Proof. Let [f.sub.m-n+1] be the formal power series

f_{m-n+1}(x) = Σ_{j=0}^{∞} A_{m-n+1+j} x^j. (28)


x^{m-n+1} f_{m-n+1}(x) = f(x) - Σ_{i=0}^{m-n} A_i x^i. (29)

Expanding (25) and using (26), we obtain

W̃_q(x) = Σ_{j=0}^{n-1} (Σ_{i=0}^{j} b_{n-j+i} A_{m-n+1+i}) x^j. (30)

Computing the product ṽ_n(x) f_{m-n+1}(x), we find that

[mathematical expression not reproducible]. (31)

By Theorem 9, one has

[mathematical expression not reproducible]. (32)

Then, for m ≥ n, we deduce from (27) and (29) that

[mathematical expression not reproducible]. (33)

By the above procedure we can now obtain (m/n)_f(x); it will be denoted by P_{m,n}(x)/ṽ_n(x).

Definition 14. R_{m,n}(x) = P_{m,n}(x)/ṽ_n(x) is called a TPTA of order m + 1 and is denoted by (m/n)_f(x).

Algorithm 15 (compute R_{m,n}(x) = P_{m,n}(x)/ṽ_n(x) with v_n(x) arbitrarily chosen).

(1) Set q = m - n + 1 and choose a quasi-monic polynomial v_n(x).

(2) Use (19) to compute ṽ_n(x).

(3) Compute W_q(x) and W̃_q(x) by (25) and (26), respectively.

(4) Substitute ṽ_n(x) and W̃_q(x) into (27) to compute P_{m,n}(x).

(5) Set R_{m,n}(x) = P_{m,n}(x)/ṽ_n(x).
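Algorithm 15 can be sketched in NumPy for lists of same-shaped tensor coefficients. The numerator is assembled from eq. (27) as reconstructed above, with the coefficients of W̃_q written out as in eq. (30); the helper names tpta and series_of_quotient are ours. As a check, the order condition of Theorem 13 can be verified by matching the first m + 1 Taylor coefficients of P_{m,n}(x)/ṽ_n(x) against those of f.

```python
import math
import numpy as np

def tpta(A, b, m):
    """TPTA (m/n)_f for f(x) = sum_i A[i] x^i (Algorithm 15 sketch).

    A : list of same-shaped arrays, the tensor coefficients A_0, A_1, ...
    b : [b_0, ..., b_n], coefficients of the generating polynomial v_n(x)
    Returns (P, vtil): numerator tensor coefficients P_0, ..., P_m and the
    scalar coefficients of the reversed denominator vtil_n(x), eq. (19).
    """
    n = len(b) - 1
    q = m - n + 1                                   # step (1)
    vtil = b[::-1]                                  # step (2): reversal, eq. (19)
    # step (3): coefficients of Wtil_q(x), cf. eq. (30)
    Wtil = [sum(b[n - j + i] * A[q + i] for i in range(j + 1)) for j in range(n)]
    # step (4): P(x) = vtil(x) * sum_{i=0}^{m-n} A_i x^i + x^q * Wtil_q(x), eq. (27)
    P = [np.zeros_like(A[0]) for _ in range(m + 1)]
    for k in range(n + 1):
        for i in range(m - n + 1):
            P[k + i] = P[k + i] + vtil[k] * A[i]
    for j in range(n):
        P[q + j] = P[q + j] + Wtil[j]
    return P, vtil

def series_of_quotient(P, vtil, m):
    """First m+1 Taylor coefficients of P(x)/vtil(x); requires vtil[0] != 0."""
    c = []
    for k in range(m + 1):
        acc = P[k].copy()
        for j in range(1, min(k, len(vtil) - 1) + 1):
            acc = acc - vtil[j] * c[k - j]
        c.append(acc / vtil[0])
    return c
```

For instance, embedding the scalar exponential series as 1 × 1 × 1 tensors A_i = 1/i! and taking the generating polynomial v_2(x) = x^2 - 2x + 4 of Example 16 with m = 3, the first four series coefficients of P_{3,2}(x)/ṽ_2(x) reproduce 1, 1, 1/2, 1/6, as Theorem 13 predicts.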

Example 16. Let

[mathematical expression not reproducible]. (34)

Now we apply Algorithm 15 to compute the TPTA of type (2/2) for this example.

(1) Choose v_2(x) = x^2 - 2x + 4; then q = m - n + 1 = 1.

(2) Use (19) to compute ṽ_2(x):

ṽ_2(x) = 1 - 2x + 4x^2. (35)

(3) By using (25) and (26), we get W_1(x) and W̃_1(x):

[mathematical expression not reproducible], (36)


[mathematical expression not reproducible]. (37)

(4) Substituting ṽ_2(x) and W̃_1(x) into (27), we obtain

[mathematical expression not reproducible]. (38)

(5) Set R_{2,2}(x) = P_{2,2}(x)/ṽ_2(x). It is easy to verify that

P_{2,2}(x)/ṽ_2(x) - f(x) = O(x^3). (39)

3.2. Algorithm for Computing TPTA. In general, the accuracy of a TPTA is limited when its denominator polynomial is prescribed arbitrarily. In this subsection, in order to improve the accuracy of the approximation, we propose an algorithm for computing the denominator polynomial; its efficiency is illustrated in the next section. First, we give the following conclusion.

Theorem 17 (error formula).

f(x) - (n - 1/n)_f(x) = (x^n/ṽ_n(x)) φ(v_n(t)/(1 - xt)). (40)

Proof. Note that φ is a linear functional on P acting only on t. From (18) and (20), we deduce that

[mathematical expression not reproducible], (41)

and then this error formula holds.

In terms of the error formula, it holds that

f(x) - (n - 1/n)_f(x) = (x^n/ṽ_n(x)) Σ_{k=0}^{∞} φ(t^k v_n(t)) x^k. (42)

If we impose that v_n(t) satisfies the condition φ(v_n(t)) = 0, then the first term of (42) disappears and the order of approximation becomes n + 1. If, in addition, we also impose the condition φ(t v_n(t)) = 0, the second term in the expansion of the error also disappears, and the order of approximation becomes n + 2, and so on. Note that v_n(x) depends on n + 1 arbitrary constants; on the other hand, a rational function is defined up to a common multiplying factor in its numerator and denominator, so (n - 1/n)_f(x) depends on n arbitrary constants. Let us therefore take v_n(t) such that

φ(v_n(t) t^k) = 0, k = 0, 1, 2, ..., n - 1. (43)

Definition 18. The polynomial v_n(t) in (43) is called an orthogonal polynomial with respect to the linear functional φ, and (n - 1/n)_f(x) in (42) is also called a TPTA for the given power series (15) when (43) is satisfied.

From (43) we obtain

φ(v_n(t) t^k) = Σ_{i=0}^{n} b_i A_{i+k} = 0, k = 0, 1, 2, ..., n - 1. (44)

Setting b_n = 1 in (44), it follows that

Σ_{i=0}^{n-1} b_i A_{i+k} = -A_{k+n}, k = 0, 1, 2, ..., n - 1. (45)

Forming the scalar product of both sides of (45) with A_0, A_1, ..., A_{n-1}, respectively, we get

Σ_{i=0}^{n-1} ⟨A_k, A_{i+k}⟩ b_i = -⟨A_k, A_{k+n}⟩, k = 0, 1, 2, ..., n - 1. (46)


Write

H_n(A_0) = (⟨A_k, A_{i+k}⟩)_{k,i=0}^{n-1}, b̃_n = -(⟨A_0, A_n⟩, ⟨A_1, A_{n+1}⟩, ..., ⟨A_{n-1}, A_{2n-1}⟩)^T, (47)

and call det(H_n(A_0)) the Hankel determinant of f(x) with respect to the coefficients A_0, A_1, ..., A_{n-1}.

Then (46) is converted into

H_n(A_0) x = b̃_n, x = (b_0, b_1, ..., b_{n-1})^T. (48)

In the case of TPTA, v_n(x) is no longer chosen arbitrarily but is determined by the preceding system. This choice of v_n(x) helps to improve the accuracy of the approximation; unfortunately, so far we have not been able to guarantee that system (48) has a solution. We only give the following basic theorem about system (48), based on linear algebra.

Theorem 19. The solution of (48) exists if and only if rank(H_n(A_0)) = rank(H_n(A_0) : b̃_n). Moreover, the solution is unique if det(H_n(A_0)) ≠ 0.

Proof. The assertion follows from the elementary fact that, for a system of linear equations Ax = b, where A is a matrix and x, b are vectors, a solution exists if and only if rank(A) = rank(A : b); i.e., the right-hand side must lie in the space spanned by the columns of the coefficient matrix A. Moreover, if det(H_n(A_0)) ≠ 0, then by Cramer's rule the solution is unique.

Theorem 20 (existence). Let f(x) be the power series (15); then (n - 1/n)_f and (m/n)_f exist and are unique if and only if det(H_n(A_0)) ≠ 0.

Proof. "⇐": By Theorem 19, if det(H_n(A_0)) ≠ 0, then the nonhomogeneous system (48) has a unique solution b_0, b_1, ..., b_{n-1}, which determines v_n(x); from (19), it also determines ṽ_n(x). Hence, by the construction of (n - 1/n)_f, existence holds.

"⇒": Let (n - 1/n)_f exist and be unique; then it follows that

rank ([H.sub.n]([A.sub.0])) = rank ([H.sub.n]([A.sub.0]) : [[??].sub.n]), (49)

and if det(H_n(A_0)) = 0, then rank(H_n(A_0) : b̃_n) < n; in that case system (46) has an (n - rank(H_n(A_0) : b̃_n))-parameter family of solutions, i.e., we can construct more than one generating polynomial, which contradicts the uniqueness of (n - 1/n)_f. Hence det(H_n(A_0)) ≠ 0.

The proof of existence and uniqueness of (m/n)_f is similar to the preceding argument.

Theorem 21. Let det(H_n(A_0)) ≠ 0; then (n - 1/n)_f(x) = W̃_{n-1}(x)/ṽ_n(x), where the generating polynomial v_n(x) is given by

[mathematical expression not reproducible], (50)

where ṽ_n(x) and W̃_{n-1}(x) are given by (19) and (20), respectively.

Now we can derive an algorithm to calculate (m/n)_f(x) using (26), (27), and (50).

Algorithm 22 (compute (m/n)_f(x) = P_{m,n}(x)/ṽ_n(x)).

(1) Use (14) to calculate H_n(A_0) and b̃_n.

(2) Use (50) and (19) to compute v_n(x) and ṽ_n(x), respectively.

(3) Set q = m - n + 1 and compute W_q(x) and W̃_q(x) by

[mathematical expression not reproducible]. (51)

(4) Compute the numerator of TPTA by

[mathematical expression not reproducible].(52)

(5) Obtain (m/n)_f(x) = P_{m,n}(x)/ṽ_n(x).
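Step (1) of Algorithm 22 can be sketched as follows: assemble the linear system (48) from the inner products (14) and solve it for b_0, ..., b_{n-1}, with b_n = 1 as in (45). The entry layout H[k, i] = ⟨A_k, A_{i+k}⟩ follows our reading of (46) and (47), and the function name is ours; the nonsingularity condition of Theorem 19 is assumed. Once b is known, the remaining steps coincide with Algorithm 15.

```python
import math
import numpy as np

def hankel_generating_poly(A, n):
    """Solve system (48) for the generating polynomial v_n(x) = b_0 + ... + b_n x^n.

    A : list of same-shaped arrays containing at least A_0, ..., A_{2n-1}
    Returns [b_0, ..., b_{n-1}, 1.0], assuming det(H_n(A_0)) != 0 (Theorem 19).
    """
    def ip(X, Y):                      # inner product (14)
        return float((X * Y).sum())
    H = np.array([[ip(A[k], A[i + k]) for i in range(n)] for k in range(n)])
    rhs = np.array([-ip(A[k], A[k + n]) for k in range(n)])
    b = np.linalg.solve(H, rhs)
    return list(b) + [1.0]
```

For the scalar exponential series embedded as 1 × 1 × 1 tensors A_i = 1/i!, taking n = 2 gives b = [1/6, -2/3, 1]; after the reversal (19), ṽ_2(x) = 1 - (2/3)x + (1/6)x^2, which is the denominator of the classical [1/2] Padé approximant of e^x.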

4. Application for Computing the Tensor Exponential Function

The method of truncated infinite series has broad applications in finite single crystal plasticity for computing the tensor exponential function [16]. However, the accuracy and effectiveness of such an algorithm are limited by round-off error and by the choice of termination criterion. In this section, we utilize the method of TPTA to compute the tensor exponential function. We start by briefly reviewing some basic equations that model the behaviour of single crystals in the finite strain range [16].

Consider a single crystal model

F = [F.sup.e][F.sup.p], (53)

where F^e and F^p denote the elastic part and the plastic part, respectively.

For a single crystal with a total number n_syst of slip systems, the evolution of the inelastic deformation gradient F^p is defined by means of the following rate form:

[mathematical expression not reproducible], (54)

where γ̇^α denotes the contribution of slip system α to the total inelastic rate of deformation. The vectors s_0^α and m_0^α denote, respectively, the slip direction and the normal direction of slip system α.

The above tensor differential equation can be discretized in an implicit fashion with the use of the tensor exponential function. The implicit exponential approximation to the inelastic flow equation results in the following discrete form:

[mathematical expression not reproducible]. (55)

The above formula is analogous to the exact solution of the initial value problem (1), and it is necessary to calculate

exp(Σ_{α=1}^{n_syst} Δγ^α s_0^α ⊗ m_0^α). (56)

In [7], the authors used Algorithm 23 to calculate (56).

Algorithm 23 (truncated infinite series method [7], p. 749).

(1) Given tensor X, initialise n = 0 and exp(X) = I.

(2) Increment the counter: n := n + 1.

(3) Compute n! and X^n.

(4) Add the new term to the series: exp(X) := exp(X) + (1/n!) X^n.

(5) Check convergence: if

‖X^n‖/n! < ε_tol, then exit; else go to (2). (57)
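Algorithm 23 can be sketched for order-3 tensors using the FFT form of the t-product: circ(A) is block-diagonalized by the DFT along the third mode, so the t-product reduces to slicewise matrix products in Fourier space. The entry assignment for A below is our reading of Example 24; with it, the truncated series reproduces the values of Table 2 to roughly 10^-6.

```python
import numpy as np

def t_product(A, B):
    """t-product of order-3 tensors (Definition 2) via FFT along the third mode."""
    Ah = np.fft.fft(A, axis=2)
    Bh = np.fft.fft(B, axis=2)
    Ch = np.einsum('ijk,jlk->ilk', Ah, Bh)   # slicewise matrix products
    return np.real(np.fft.ifft(Ch, axis=2))

def t_exp(X, tol=1e-12, n_max=100):
    """Algorithm 23: exp(X) = sum_k X^k / k!, stopped when ||X^n||/n! < tol."""
    E = np.zeros_like(X)
    E[:, :, 0] = np.eye(X.shape[0])          # identity tensor, Definition 8
    term = E.copy()
    for k in range(1, n_max):
        term = t_product(term, X) / k        # term now holds X^k / k!
        E = E + term
        if np.sqrt((term ** 2).sum()) < tol: # convergence check (57)
            break
    return E

# Example 24 as we read it: a_121 = 1, a_122 = 2, a_221 = -2, a_222 = -1
A = np.zeros((2, 2, 2))
A[0, 1, 0], A[0, 1, 1], A[1, 1, 0], A[1, 1, 1] = 1.0, 2.0, -2.0, -1.0
E = t_exp(A)
```

This yields E[0,1,0] ≈ 0.159046, E[1,1,0] ≈ 0.208833, E[0,1,1] ≈ 0.791167, and E[1,1,1] ≈ -0.159046, matching the exact values of Table 2 to about 10^-6.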

Example 24. Consider the tensor exponential function exp(Ax), where the entries of A are a_121 = 1, a_122 = -a_221 = 2, a_222 = -1, and zero elsewhere.

To find a tensor Padé-type approximant of type (3/3) for the tensor exponential function, we first expand exp(Ax) into a power series by means of Definition 7; we obtain

[mathematical expression not reproducible]. (58)

By Algorithm 22, the following can be done.

(1) Use (14) to compute H_3(A_0) and b̃_3:

[mathematical expression not reproducible]. (59)

(2) Use (50) to calculate v_3(x):

v_3(x) = 15041/1080 + (3947/60)x + (1189/10)x^2 - (1493/18)x^3, (60)

and compute ṽ_3(x) by (19); we get

ṽ_3(x) = (15041/1080)x^3 + (3947/60)x^2 + (1189/10)x - 1493/18. (61)

(3) Set q = m - n + 1 = 1 and compute W_1(x) and W̃_1(x):

[mathematical expression not reproducible], (62)


[mathematical expression not reproducible]. (63)

(4) Substitute ṽ_3(x) and W̃_1(x) into (27) to compute P_{3,3}(x):

[mathematical expression not reproducible]. (64)

(5) Obtain (3/3)_f(x) = P_{3,3}(x)/ṽ_3(x).

In Table 1 we compare the number of exact figures given by the method of TPTA of type (3/3) with the corresponding exact values of exp(Ax), for the entries (1,2,1), (2,2,1), (1,2,2), and (2,2,2). We also compute the norm of the absolute residual tensor (denoted by Res). Here,

Res = ‖exp(Ax) - (3/3)_f(x)‖, (65)

where the norm ‖·‖ is defined by (13).

From Table 1, it is observed that the estimates from TPTA can reach the desired accuracy.

Example 25. Let f(x) be given by Example 24.

Applying Algorithm 22 to the preceding example again, we calculate (m/m)_f(1), m = 1, 2, 3, 4, 5. The exact values and the approximate values associated with the entries (1,2,1), (2,2,1), (1,2,2), and (2,2,2) are listed in Tables 2 and 3, respectively, where x = 1.

From Table 3, we can see that (3/3)_f gives the best approximation for this example. We also compute exp(A) by using Algorithm 23; the corresponding numerical results are listed in Table 4. Comparing Table 3 with Table 4, we find that Algorithm 22 requires at most 6 coefficients of the power series expansion of exp(Ax) (since m = 3) to achieve an error of 10^-5, while Algorithm 23 requires 11 coefficients. It follows that Algorithm 23 is more expensive than Algorithm 22, especially for higher-order tensor exponential functions. In practical applications, only a few coefficients of the series may be known, so we may still obtain the desired results by means of TPTA. This verifies the effectiveness of the proposed Algorithm 22.

5. Conclusion

In this paper, we presented the tensor Padé-type approximant method for computing the tensor exponential function; the TPTA has the form of a tensor numerator and a scalar denominator. In order to obtain a tensor Padé-type approximant with the highest possible accuracy, we proposed an algorithm for computing the denominator polynomials of TPTA, and its effectiveness has been investigated on an example of the tensor exponential function. The key to applying the TPTA to the tensor exponential function is that the latter can be expanded into a power series whose tensor coefficients all have the same order, by means of the t-product. Of course, there are several other ways to multiply tensors [26-30], but the order of the resulting tensor may change. For example, if A is n_1 × n_2 × n_3 and B is n_1 × m_2 × m_3, then the contracted product [26] of A and B is n_2 × n_3 × m_2 × m_3. Thus, the choice of the multiplication of two tensors is an open question for expanding the tensor exponential function, and the corresponding tensor Padé approximant theory is a subject of further research.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments
The work is supported by the National Natural Science Foundation of China (11371243) and Key Disciplines of Shanghai Municipality


References
[1] S. Dolgov and B. Khoromskij, "Simultaneous state-time approximation of the chemical master equation using tensor product formats," Numerical Linear Algebra with Applications, vol. 22, no. 2, pp. 197-219, 2015.

[2] N. G. van Kampen, Stochastic Processes in Physics and Chemistry, North-Holland Personal Library, Elsevier B.V., 3rd edition, 2007.

[3] P. Gelß, S. Matera, and C. Schütte, "Solving the master equation without kinetic Monte Carlo: tensor train approximations for a CO oxidation model," Journal of Computational Physics, vol. 314, pp. 489-502, 2016.

[4] W. Ding, K. Liu, E. Belyaev, and F. Cheng, "Tensor-based linear dynamical systems for action recognition from 3D skeletons," Pattern Recognition, pp. 75-86, 2017.

[5] P. Gelß, S. Klus, S. Matera, and C. Schütte, "Nearest-neighbor interaction systems in the tensor-train format," Journal of Computational Physics, vol. 341, pp. 140-162, 2017.

[6] C. Gu, Y. Z. Huang, and Z. B. Chen, Continued Fractional Recurrence Algorithm for Generalized Inverse Tensor Padé Approximation, Control and Decision, 2018, http://kns.cnki.net/kcms/detail/21.1124.TP.20180416.0932.036.html.

[7] E. A. de Souza Neto, D. Perić, and D. R. J. Owen, Computational Methods for Plasticity: Theory and Applications, Wiley, 2008.

[8] H. Chen, Y. Chen, G. Li, and L. Qi, "A semidefinite program approach for computing the maximum eigenvalue of a class of structured tensors and its applications in hypergraphs and copositivity test," Numerical Linear Algebra with Applications, vol. 25, no. 1, e2125, 16 pages, 2018.

[9] G. Zhou, G. Wang, L. Qi, and M. Alqahtani, "A fast algorithm for the spectral radii of weakly reducible nonnegative tensors," Numerical Linear Algebra with Applications, vol. 25, no. 2, e2134, 10 pages, 2018.

[10] H. Chen and Y. Wang, "On computing minimal H-eigenvalue of sign-structured tensors," Frontiers of Mathematics in China, vol. 12, no. 6, pp. 1289-1302, 2017.

[11] G. Wang, G. Zhou, and L. Caccetta, "Z-eigenvalue inclusion theorems for tensors," Discrete and Continuous Dynamical Systems - Series B, vol. 22, no. 1, pp. 187-198, 2017.

[12] K. Zhang and Y. Wang, "An H-tensor based iterative scheme for identifying the positive definiteness of multivariate homogeneous forms," Journal of Computational and Applied Mathematics, vol. 305, pp. 1-10, 2016.

[13] Y. Wang, K. Zhang, and H. Sun, "Criteria for strong H-tensors," Frontiers of Mathematics in China, vol. 11, no. 3, pp. 577-592, 2016.

[14] H. Chen, L. Qi, and Y. Song, "Column sufficient tensors and tensor complementarity problems," Frontiers of Mathematics in China, vol. 13, no. 2, pp. 255-276, 2018.

[15] Y. Wang, L. Caccetta, and G. Zhou, "Convergence analysis of a block improvement method for polynomial optimization over unit spheres," Numerical Linear Algebra with Applications, vol. 22, no. 6, pp. 1059-1076, 2015.

[16] E. A. de Souza Neto, "The exact derivative of the exponential of an unsymmetric tensor," Computer Methods Applied Mechanics and Engineering, vol. 190, no. 18-19, pp. 2377-2383, 2001.

[17] A. Cuitino and M. Ortiz, "A material-independent method for extending stress update algorithms from small-strain plasticity to finite plasticity with multiplicative kinematics," Engineering Computations, vol. 9, no. 4, pp. 437-451, 1992.

[18] A. L. Eterovic and K. Bathe, "A hyperelastic-based large strain elasto-plastic constitutive formulation with combined isotropic-kinematic hardening using the logarithmic stress and strain measures," International Journal for Numerical Methods in Engineering, vol. 30, no. 6, pp. 1099-1114, 1990.

[19] J. C. Simo, "Algorithms for static and dynamic multiplicative plasticity that preserve the classical return mapping schemes of the infinitesimal theory," Computer Methods Applied Mechanics and Engineering, vol. 99, no. 1, pp. 61-112, 1992.

[20] C. Brezinski, Padé-Type Approximation and General Orthogonal Polynomials, vol. 50 of International Series of Numerical Mathematics, Birkhäuser, Basel, Switzerland, 1980.

[21] C. Gu, "Matrix Padé-type approximant and directional matrix Padé approximant in the inner product space," Journal of Computational and Applied Mathematics, vol. 164, pp. 365-385, 2004.

[22] M. E. Kilmer, K. Braman, N. Hao, and R. C. Hoover, "Third-order tensors as operators on matrices: a theoretical and computational framework with applications in imaging," SIAM Journal on Matrix Analysis and Applications, vol. 34, no. 1, pp. 148-172, 2013.

[23] C. D. Martin, R. Shafer, and B. LaRue, "An order-p tensor factorization with applications in imaging," SIAM Journal on Scientific Computing, vol. 35, no. 1, pp. A474-A490, 2013.

[24] M. Itskov, "Computation of the exponential and other isotropic tensor functions and their derivatives," Computer Methods Applied Mechanics and Engineering, vol. 192, no. 35-36, pp. 3985-3999, 2003.

[25] T. G. Kolda and B. W. Bader, "Tensor decompositions and applications," SIAM Review, vol. 51, no. 3, pp. 455-500, 2009.

[26] M. E. Kilmer and C. D. Martin, "Factorization strategies for third-order tensors," Linear Algebra and its Applications, vol. 435, no. 3, pp. 641-658, 2011.

[27] B. W. Bader and T. G. Kolda, "Algorithm 862: MATLAB tensor classes for fast algorithm prototyping," ACM Transactions on Mathematical Software, vol. 32, no. 4, pp. 635-653, 2006.

[28] H. A. L. Kiers, "Towards a standardized notation and terminology in multiway analysis," Journal of Chemometrics, vol. 14, no. 3, pp. 105-122, 2000.

[29] L. De Lathauwer, B. De Moor, and J. Vandewalle, "A multilinear singular value decomposition," SIAM Journal on Matrix-Analysis and Applications, vol. 21, no. 4, pp. 1253-1278, 2000.

[30] D. Liu, W. Li, and S.-W. Vong, "The tensor splitting with application to solve multi-linear systems," Journal of Computational and Applied Mathematics, vol. 330, pp. 75-94, 2018.

Chuanqing Gu (1) and Yong Liu (1,2)

(1) Department of Mathematics, Shanghai University, Shanghai 200444, China

(2) Changzhou College of Information Technology, Changzhou 213164, China

Correspondence should be addressed to Chuanqing Gu;

Received 21 April 2018; Accepted 22 May 2018; Published 19 June 2018

Academic Editor: Liguang Wang
Table 1: Numerical results of Example 24 at different points by using
Algorithm 22.

X      Method        (1,2,1)        (2,2,1)        (1,2,2)        (2,2,2)        Res

0.1    exp(Ax)       0.08200959     0.82282781     0.17717281    -0.08200959
       (3/3)_f(x)    0.08200778     0.82282688     0.17717311    -0.08200778    8.34e-12

0.2    exp(Ax)       0.13495955     0.68377119     0.31622880    -0.13495955
       (3/3)_f(x)    0.13493452     0.68375764     0.31624235    -0.13493452    1.62e-9

0.3    exp(Ax)       0.16712428     0.57369394     0.42630605    -0.16712428
       (3/3)_f(x)    0.16701602     0.57363058     0.42636941    -0.16701602    3.14e-8

0.4    exp(Ax)       0.18456291     0.48575712     0.51424287    -0.18456291
       (3/3)_f(x)    0.18427224     0.48557038     0.51442961    -0.18427224    2.38e-7

Table 2: The exact value of exp (A).

            (1,2,1)        (2,2,1)        (1,2,2)        (2,2,2)

exp(A)     0.15904705     0.20883238     0.79116761    -0.15904705

Table 3: Numerical approximations of exp(A) using Algorithm 22 for
Example 25.

m      (1,2,1)        (2,2,1)        (1,2,2)        (2,2,2)         Res

1     0.50000000     0.00000000     1.00000000    -0.50000000     3.19e-1
2    -2.12500000     0.75000000     0.25000000     2.12500000     1.10e+1
3     0.15503865     0.20377270     0.79622729    -0.15503865     8.33e-5
4     0.17454584     0.22682585     0.77317481    -0.17454584     1.12e-3
5     0.17625313     0.19112365     0.80887636    -0.17625313     1.21e-3
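Table 3 shows that a low-order Padé-type approximant can reach small residuals with only a few terms. As a hedged illustration of why Padé-style rational approximation outperforms plain series truncation, the following sketch applies the classical diagonal (1,1) Padé approximant of the exponential, R(A) = (I - A/2)^(-1)(I + A/2), to an ordinary matrix. This is a scalar/matrix analogue, not the paper's tensor Algorithm 22; the function name and the 2x2 test matrix are illustrative assumptions.

```python
import numpy as np

def expm_pade11(A):
    """Diagonal (1,1) Pade approximant of exp(A):
    R(A) = (I - A/2)^(-1) (I + A/2), accurate for small ||A||.
    (Illustrative matrix analogue; not the paper's tensor algorithm.)"""
    I = np.eye(A.shape[0])
    # Solve (I - A/2) X = (I + A/2) instead of forming the inverse explicitly.
    return np.linalg.solve(I - A / 2.0, I + A / 2.0)

# Rotation generator: A^2 = -I, so exp(xA) = cos(x) I + sin(x) A exactly.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
x = 0.1
exact = np.array([[np.cos(x), np.sin(x)], [-np.sin(x), np.cos(x)]])
approx = expm_pade11(x * A)
print(np.max(np.abs(approx - exact)))  # error is O(x^3), roughly 1e-4 here
```

The single linear solve replaces many terms of the Taylor series, which mirrors the convergence-acceleration motivation for the Padé-type construction discussed in the introduction.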

Table 4: Numerical results of Example 25 by using Algorithm 23.

n_max      (1,2,1)        (2,2,1)        (1,2,2)        (2,2,2)         Res

1           1             -1              2              2             4.33
2          -1              1.5           -0.5            1             6.02
3           1.16666666    -0.83333333    1.83333333    -1.16666666    4.20
4          -0.50000000     0.87500000    0.12500000     0.50000000    1.75
5           0.50833333    -0.14166666    1.14166666    -0.50833333    4.89e-1
6           0.00277777     0.36527777    0.63472222    -0.00277777    9.77e-2
7           0.21964285     0.14821428    0.85178571    -0.21964285    1.47e-2
8           0.13829365     0.22958829    0.77041170    -0.13829365    1.72e-3
9           0.16541280     0.20246638    0.79753361    -0.16541280    1.62e-4
10          0.15735119     0.21060267    0.78939732    -0.15735119    1.22e-5
11          0.15957013     0.20838371    0.79161628    -0.15957013    8.77e-7
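Table 4 illustrates the slow, steady convergence of the plain truncated series (3) as n_max grows. As a minimal sketch of that truncation strategy, the following code applies it to an ordinary matrix rather than a tensor under the t-product; the function name and the 2x2 test matrix are illustrative assumptions, not the paper's Algorithm 23.

```python
import numpy as np

def expm_truncated(A, n_max):
    """Truncated Taylor series exp(A) ~ sum_{n=0}^{n_max} A^n / n!,
    the matrix analogue of truncation (3). Each term is built
    recursively from the previous one to avoid recomputing powers."""
    term = np.eye(A.shape[0])   # n = 0 term: A^0 / 0! = I
    total = term.copy()
    for n in range(1, n_max + 1):
        term = term @ A / n     # A^n / n! from A^(n-1) / (n-1)!
        total = total + term
    return total

# Rotation generator: A^2 = -I, so exp(A) = [[cos 1, sin 1], [-sin 1, cos 1]].
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
exact = np.array([[np.cos(1.0), np.sin(1.0)], [-np.sin(1.0), np.cos(1.0)]])
for n_max in (2, 5, 11):
    err = np.max(np.abs(expm_truncated(A, n_max) - exact))
    print(n_max, err)  # the residual shrinks as n_max grows, as in Table 4
```

As with Table 4, roughly a dozen terms are needed before the residual reaches the level that the Padé-type approximant attains with far fewer terms.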
COPYRIGHT 2018 Hindawi Limited

Article Details
Title Annotation: Research Article
Author: Gu, Chuanqing; Liu, Yong
Publication: Journal of Function Spaces
Date: Jan 1, 2018