
On the solution of a nonlinear semidefinite program arising in discrete-time feedback control design.

1. Introduction

In this paper, the following nonlinear semidefinite programming (NSDP) problem is considered:

min J(X) s.t. H(X) = 0, G(X) > 0, (1)

where J : [R.sup.nxn] x [R.sup.pxr] [right arrow] R, H : [R.sup.nxn] x [R.sup.pxr] [right arrow] [R.sup.nxn], and G : [R.sup.nxn] x [R.sup.pxr] [right arrow] [R.sup.nxn] are sufficiently smooth matrix functions, and G > 0 means that G is positive definite. This problem is assumed to be nonlinear and generally nonconvex. The considered NSDP problem originates from the static output feedback (SOF) design problem, in which the unknown X is partitioned as X = (L, F), where L [member of] [R.sup.nxn] represents the state variable and F [member of] [R.sup.pxr] represents the control variable.

In the last decade NSDP has attracted the attention of many authors in the optimization community. For instance, Jarre [1] introduced an interior-point method for nonconvex semidefinite programs. Leibfritz and Mostafa [2] proposed an interior-point trust region method for a special class of NSDP problems resulting from the continuous-time SOF problem. Kocvara et al. [3] considered an augmented Lagrangian method for a similar NSDP problem. Sun et al. [4] investigated the rate of convergence of the augmented Lagrangian approach for NSDP. Correa and Ramirez [5] proposed a sequential semidefinite programming method for nonlinear semidefinite programming. Yamashita and Yabe [6] studied local and superlinear convergence of a primal-dual interior-point method. Freund et al. [7] proposed a sequential semidefinite programming approach for a nonlinear program with nonlinear semidefinite constraints.

The design problem of optimal output feedback controllers is one of the most studied problems of the last four decades in the systems and control community. Clearly, this is due to the existence of numerous practical applications in system and control engineering, finance, and statistics. The benchmark collection [8], for instance, contains over 160 applications from system and control engineering alone. In the survey [9], Makila and Toivonen summarized several special purpose algorithms, such as the Levine-Athans method, the dual Levine-Athans method, and the descent Anderson-Moore method, as well as Newton's method, for finding a local solution of a matrix optimization problem that corresponds to the discrete-time SOF design problem. Rautert and Sachs [10] proposed a structured quasi-Newton method for the continuous-time SOF problem. Makila [11] introduced a matrix optimization problem related to the discrete-time SOF design problem, which extends the former problem considered in [9]. However, the latter problem has not yet been tested numerically.

Several constrained optimization techniques have been developed for constrained optimization problems related to the continuous-time SOF design problem. For instance, Apkarian et al. [12] introduced a linear matrix inequality approach for the [H.sub.2] synthesis problem. Leibfritz and Mostafa [13] proposed two trust region methods for two different formulations of the continuous-time SOF problem. Mostafa [14,15] developed a trust region method for the decentralized SOF problem and an augmented Lagrangian SQP method for the continuous-time SOF problem, respectively. More recently, Mostafa [16] introduced a trust region method for a particular constrained optimization problem that originates from the discrete-time SOF problem.

In the current work a different formulation of the design problem is considered that underlies the NSDP problem formulation (1). In fact such an NSDP problem can be seen as a generalization of the matrix optimization problem introduced by Makila [11]. The goal is to study and analyze an SQP method with line search for finding a local solution of such an NSDP problem. The reduced Hessian with a step decomposition approach (see, e.g., [17]) is employed with the SQP method. Moreover, we compare the SQP method numerically with Newton's method proposed by Makila and Toivonen [9] but applied to the matrix optimization problem of [11]. As is well known, SQP methods are among the most effective techniques for nonlinear constrained optimization problems; see, for example, [17]. In particular, the problem structure is exploited by the considered method.

The existence of an initial feasible point [X.sub.0] = ([L.sub.0], [F.sub.0]) with respect to the positive definite constraints is necessary to start the iteration sequence of the SQP method. Such an initial feasible point can be easy to find in simple problems but can be difficult or very time consuming for moderate and complex problems. A parametrization approach is employed on the NSDP problem that overcomes this difficulty.

This paper is organized as follows. In the next section the discrete-time SOF problem is introduced and the related NSDP problem is stated. In Section 3 the SQP method that finds a local solution of the NSDP is presented. Section 4 is devoted to overcoming the difficulty of finding an initial feasible point ([L.sub.0], [F.sub.0]) to start the iteration sequence of the SQP method. In Section 5 the SQP method is tested on various test problems derived from the benchmark collection [8] in order to demonstrate the considered approach.

Notations. For a matrix G [member of] [R.sup.nxn] the notation G > 0 (G [greater than or equal to] 0) denotes that G is strictly positive (semi)definite. The symbol [cross product] denotes the Kronecker product of two matrices and <*,*> denotes the inner product operator. Throughout the paper, [parallel]*[parallel] denotes the Frobenius norm unless otherwise mentioned. Moreover, [I.sub.n] denotes the identity matrix of order n. We omit the argument when it is known from the context; for example, we use [J.sub.k] to denote J([X.sub.k]).

2. The Static Output Feedback Control Problem

The problem of designing an optimal output feedback controller for discrete-time control systems is an important and extensively studied topic in system and control literature; see, for example, the two survey papers by Makila and Toivonen [9] and Syrmos et al. [18] and numerous references therein, including Garcia et al. [19], Lee and Khargonekar [20], Makila [11], Mostafa [21, 22], and Sulikowski et al. [23].

Consider the linear time-invariant control system described by the following discrete time state-space realization:

[x.sub.k+1] = A[x.sub.k] + B[u.sub.k] + [v.sub.k], [y.sub.k] = C[x.sub.k] + [e.sub.k], k [greater than or equal to] 0, (2)

where [x.sub.k] [member of] [R.sup.n] is the state vector, [u.sub.k] [member of] [R.sup.p] is the control (input) vector, [y.sub.k] [member of] [R.sup.r] is the measured output vector, and [v.sub.k] [member of] [R.sup.n] and [e.sub.k] [member of] [R.sup.r] are disturbance (noise) terms. Moreover A, B, and C are given constant matrices of appropriate dimensions.

We will consider the following quadratic cost function as an objective function to be minimized:

J = E [[[infinity].summation over (k=0)] <[x.sub.k], Q[x.sub.k]> + <[u.sub.k], R[u.sub.k]>], (3)

where E[*] is the expected value and Q [member of] [R.sup.nxn] and R [member of] [R.sup.pxp] are given constant weight matrices.

The following control law is often used to close the above control system:

[u.sub.k] = F[y.sub.k], (4)

where F [member of] [R.sup.pxr] is the output feedback gain matrix representing the unknown.
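As an illustration, the closed loop formed by (2) and (4) can be simulated directly. The following sketch uses small hypothetical matrices A, B, C and a hypothetical gain F (not taken from the paper), with white noise playing the role of [v.sub.k] and [e.sub.k]:

```python
# Minimal sketch (hypothetical data): simulating system (2) closed by the
# control law (4), i.e., x_{k+1} = A x_k + B u_k + v_k with u_k = F y_k.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.5, 0.1], [0.0, 0.8]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])
F = np.array([[-0.2]])              # a hypothetical output feedback gain

x = np.array([1.0, -1.0])
states = [x]
for k in range(50):
    v = 0.01 * rng.standard_normal(2)   # process noise v_k
    e = 0.01 * rng.standard_normal(1)   # measurement noise e_k
    y = C @ x + e                        # y_k = C x_k + e_k
    u = F @ y                            # u_k = F y_k, control law (4)
    x = A @ x + B @ u + v                # x_{k+1} = A x_k + B u_k + v_k
    states.append(x)
```

Since the closed-loop matrix A + BFC is stable here, the state decays toward the noise floor.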

In order to remove the dependency of the problem on the initial state vector [x.sub.0] the following assumption is often imposed.

Assumption 1. Assume that [x.sub.0] is a random variable uniformly distributed over the unit ball such that E[[x.sub.0]] = 0.

Under this assumption the optimal control problem (2)-(4) can be transformed into the matrix optimization problem (see [11]):

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], (5)

where Tr(*) is the trace operator and L(F) solves the discrete Lyapunov equation:

L(F) = (A + BFC) L(F) [(A + BFC).sup.T] + V + BF[R.sub.e][F.sup.T][B.sup.T]. (6)

Assumption 2. Throughout the paper we assume that A [member of] [R.sup.nxn], B [member of] [R.sup.nxp], and C [member of] [R.sup.rxn] are given constant matrices. Furthermore, we assume that Q [member of] [R.sup.nxn] and R [member of] [R.sup.pxp] are given constant weight matrices and [R.sub.e] [member of] [R.sup.rxr] and V [member of] [R.sup.nxn] are given covariance matrices. We assume that Q [greater than or equal to] 0, while R, [R.sub.e], and V > 0.

From Lyapunov stability theory the unknown F must be chosen from the following set:

[S.sub.F] = {F [member of] [R.sup.pxr] : [rho](A + BFC) < 1}, (7)

where [rho](*) is the spectral radius. This restriction is required ultimately to guarantee the existence of a unique solution of the discrete Lyapunov equation (6); see, for example, [24]. Moreover, from the Lyapunov stability theory the next lemma gives an equivalence between the asymptotic stability of the system (2) and fulfilling two positive definite constraints.
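For an F [member of] [S.sub.F], the solution L(F) of (6) can be computed with an off-the-shelf discrete Lyapunov solver. The sketch below uses scipy's solve_discrete_lyapunov (a stand-in for Matlab's dlyap mentioned in Section 5) on hypothetical data, after verifying the spectral radius condition (7):

```python
# A sketch (hypothetical matrices) of evaluating L(F) from the discrete
# Lyapunov equation (6); scipy's solver handles X = A X A^T + Q, and a
# unique solution requires rho(A + B F C) < 1 as in (7).
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.5, 0.1], [0.0, 0.8]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])
F = np.array([[-0.2]])
V = np.eye(2)                       # covariance matrices taken as identity
Re = np.eye(1)

Ac = A + B @ F @ C                  # closed-loop matrix A + BFC
assert np.max(np.abs(np.linalg.eigvals(Ac))) < 1.0   # F lies in S_F
W = V + B @ F @ Re @ F.T @ B.T
L = solve_discrete_lyapunov(Ac, W)  # L = Ac L Ac^T + W, i.e., (6)
```

The resulting L is symmetric positive definite, consistent with the set [D.sub.s] of (8).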

Lemma 3 ([24, Lemma 5.7.19]). Consider the SOF problem (2)-(4). An F [member of] [S.sub.F] exists if and only if there exists (L,F) [member of] [D.sub.s], where

[D.sub.s] = {(L, F) [member of] [R.sup.nxn] x [R.sup.pxr]: L > 0, Y (L, F) > 0}, (8)

where

Y(L,F) = L - (A + BFC) L[(A + BFC).sup.T]. (9)

According to Lemma 3, and by considering both L and F as independent variables (dropping the dependency of L on F imposed in (5)-(6)), the minimization problem can be stated as the following NSDP:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], (10)

s.t. H(L,F)= -L +(A + BFC) L[(A + BFC).sup.T] + V + BF[R.sub.e][F.sup.T][B.sup.T] = 0, (11)

Y(L,F) > 0, L > 0. (12)

Instead of considering (5)-(7) we would rather consider the constrained minimization problem (10)-(12), which can be seen as a generalization of the former problem. The equivalence between the stability condition (7) and the positive definite constraints of (8) provides a straightforward way to fulfill such constraints explicitly within the numerical algorithm. There are, however, different alternatives for handling the positive definite constraints within a numerical algorithm. First, one might incorporate those positive definite terms within the objective function and apply an interior-point technique to the corresponding constrained problem; see [2]. One might also use slack matrix variables for the inequality constraints and then solve the problem with only equality constraints.

We end this section by noting that the two sets [S.sub.F] and [D.sub.s] are open. Therefore, it is convenient to replace them by the two subsets [??] and [omega], respectively, where both subsets are assumed to be compact. For example, the level set

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (13)

is compact; see [9].

3. An SQP Method for the Design Problem

Let us consider the minimization problem (10)-(12). The Lagrangian function associated with the equality constraint is defined as

l(L,F,K) = J(L,F) + <K,H(L,F)>, (14)

where K [member of] [R.sup.nxn] is the Lagrange multiplier. The next lemma includes first- and second-order derivatives of l and first derivatives of the function H. Similar results can be found in [16, Lemma 2.1], albeit for a simpler problem. In the following discussion we denote A(F) = A + BFC and Q(F) = Q + [C.sup.T][F.sup.T]RFC.

Lemma 4. Consider the NSDP problem (10)-(12). The Lagrangian function l(L,F,K) is twice continuously differentiable and its first- and second-order directional derivatives as well as first-order derivatives of the constraint function H are the following:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. (15)

where [K.sub.F] = [B.sup.T]KA(F) + RFC and [L.sub.F] = A(F)L[C.sup.T] + BF[R.sub.e].

Proof. The above derivatives are obtained by applying directional differentiation to both l and H.

The Karush-Kuhn-Tucker (KKT) optimality conditions yield a nonlinear system of matrix equations as given in the next lemma.

Lemma 5 (see, e.g., [7, Section 5]). Let (L, F) [member of] [D.sub.s] be a local minimizer of the problem (10)-(12) and let (10)-(12) be regular at (L,F). Then there exists a Lagrange multiplier K [member of] [R.sup.nxn] satisfying with L and F the following KKT conditions:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], (16)

where [K.sub.F] is as defined in Lemma 4 and U = (L, F, K) [member of] [D.sub.s] x [R.sup.nxn].

Clearly, the system (16) is nonlinear in the unknowns L, F, and K. However, the last two equations of (16) can be viewed as discrete Lyapunov equations of the following form:

L = A(F) LA[(F).sup.T] + V + BF[R.sub.e][F.sup.T][B.sup.T], (17)

K = A[(F).sup.T]KA(F) + Q(F). (18)
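For a fixed stabilizing F, the two Lyapunov equations (17) and (18) decouple and can be solved directly for L and K. A hedged sketch on hypothetical data (Q, R, [R.sub.e], and V taken as identities, as in the numerical section):

```python
# Sketch: solving (17) for L and (18) for K at a fixed stabilizing gain F,
# using scipy's discrete Lyapunov solver (X = A X A^T + Q form).
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.5, 0.1], [0.0, 0.8]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])
F = np.array([[-0.2]])
Q, R, Re, V = np.eye(2), np.eye(1), np.eye(1), np.eye(2)

Ac = A + B @ F @ C                        # A(F)
Qf = Q + C.T @ F.T @ R @ F @ C            # Q(F)

# (17): L = A(F) L A(F)^T + V + B F Re F^T B^T
L = solve_discrete_lyapunov(Ac, V + B @ F @ Re @ F.T @ B.T)
# (18): K = A(F)^T K A(F) + Q(F)  (transpose the coefficient matrix)
K = solve_discrete_lyapunov(Ac.T, Qf)
```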

The SQP method is based on successively minimizing a quadratic program of the form:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], (19)

where

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. (20)

All derivatives required for constructing this QP are given in Lemma 4.

The SQP method, also known as the Newton-Lagrange method, generates a sequence of iterates by solving the augmented linear system:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (21)

and updates the step by

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], (22)

where [alpha] [member of] (0,1] is the step size.

Assumption 6. Throughout this paper we make the following assumptions.

(H1) The mappings J and H are at least twice continuously differentiable.

(H2) For a given (L, F) [member of] [D.sub.s] the mapping [[nabla].sub.L]H(L, F) is invertible.

(H3) There exists an ([L.sub.0], [F.sub.0]) [member of] [D.sub.s]. Moreover, assume that for a given ([L.sub.k], [F.sub.k]) [member of] [D.sub.s] there always exists an [[alpha].sub.k] > 0 such that ([L.sub.k] + [[alpha].sub.k][DELTA][L.sub.k], [F.sub.k] + [[alpha].sub.k][DELTA][F.sub.k]) [member of] [D.sub.s].

The next theorem provides us with the linearization of the nonlinear system (16).

Theorem 7. For a given U := (L, F, K) [member of] [D.sub.s] x [R.sup.nxn] the linearized equations of the KKT system (16) are the following:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], (23)

where [K.sub.F] and [L.sub.F] are as defined in Lemma 4.

Proof. The linearization of the nonlinear system M(U) = 0 about the point U = (L,F,K) in the direction of [DELTA]U = ([DELTA]L, [DELTA]F, [DELTA]K) takes the form

[nabla]M[(U).sup.T][DELTA]U = -M(U), (24)

where [nabla]M[(U).sup.T] is the Jacobian matrix, with which we directly obtain the system (23).

Under certain assumptions it is possible to show that the SQP direction [DELTA]U given by (23) is well defined. The difficulty one faces when solving (23) is that we do not have an explicit representation of the Jacobian of M(U). However, if we write U = (L, F, K) as a long column vector vec(U) such that f(vec(U)) = M(U), then we are able to evaluate the Jacobian of f, which is a (2[n.sup.2] + pr) x (2[n.sup.2] + pr) matrix, using the basic properties of the Kronecker product.

Theorem 8. Let U = (L, F, K) [member of] [D.sub.s] x [R.sup.nxn] be given. The Jacobian matrix of M(U) is given by:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], (25)

where [K.sub.F] and [L.sub.F] are as defined in Lemma 4 and [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] is an mn x mn commutation matrix defined by vec([W.sup.T]) = [K.sub.mn] vec(W) for any m x n matrix W. Furthermore, the linear system (23) is explicitly written as

[nabla]M[(U).sup.T] vec ([DELTA]U) = -vec (M), (26)

where

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. (27)

Proof. For any given three matrices A, B, and C of appropriate dimensions the linear matrix equation

A X B = C (28)

is equivalent to the explicit linear system

([B.sup.T] [cross product] A) vec (X) = vec (C). (29)

Applying this result to the above linear system, the entries of the Jacobian matrix are easily obtained.
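The vec/Kronecker identity (28)-(29) and the commutation matrix of Theorem 8 are easy to verify numerically. A small sketch with random (hypothetical) matrices:

```python
# Numerical check of (28) <=> (29): A X B = C iff (B^T kron A) vec(X) = vec(C),
# where vec stacks columns (column-major order), plus the commutation matrix
# K_mn satisfying vec(W^T) = K_mn vec(W) for any m x n matrix W.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((4, 4))
X = rng.standard_normal((3, 4))
C = A @ X @ B

vec = lambda M: M.reshape(-1, order="F")     # column-stacking vec operator

lhs = np.kron(B.T, A) @ vec(X)               # (B^T kron A) vec(X)
rhs = vec(C)                                  # vec(C)

# Commutation matrix: entry vec(W)[j*m + i] = W[i, j] must be routed to
# position vec(W^T)[i*n + j].
m, n = X.shape
Kmn = np.zeros((m * n, m * n))
for i in range(m):
    for j in range(n):
        Kmn[i * n + j, j * m + i] = 1.0
```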

Indeed, the last result shows that the dimension of the Jacobian matrix (25) can be very large because of the Kronecker products. Moreover, evaluating the Jacobian matrix explicitly in the SQP method is unnecessary and makes the method impractical. With this fact in mind, the reduced Hessian approach is considered instead as an ideal candidate for finding the local solution of the NSDP (10)-(12); see, for example, [2, 16].

3.1. The Reduced Hessian Approach. The reduced Hessian approach relies on the existence of the null space and the right inverse of the Jacobian [nabla][H.sup.T]. In optimal control such operators are not available explicitly. However, it is possible to construct them from the problem itself; for more details see, for example, [2,16, 25]. The two operators are defined as

Z(L, F) = (-[H.sup.-1.sub.L] (L, F) [H.sub.F] (L, F), [I.sub.P]), [??](L, F) = (-[H.sup.-1.sub.L] (L, F), 0), (30)

where [I.sub.p] and 0 are the identity and the zero operators, respectively. We note that if the Jacobian [nabla][H.sup.T] is surjective then such a right inverse [??] exists. We also note that the range space of Z coincides with the null space of the Jacobian [nabla][H.sup.T]; see [16, Lemma 2].

Next, with the help of Z and [??] the following result shows that the linearized equality constraint is decomposed into two discrete Lyapunov equations.

Lemma 9. Let (L, F) [member of] [D.sub.s] be given and let ([DELTA]L, [DELTA]F) [member of] [R.sup.nxn+pxr] be the solution of the QP (19). Then the linearized equality constraint of (19) is decomposed into the following discrete Lyapunov equations:

[DELTA][L.sup.n] = A(F) [DELTA][L.sup.n]A[(F).sup.T] + H(L, F), (31)

[DELTA][L.sup.t] = A(F) [DELTA][L.sup.t]A[(F).sup.T] + B[DELTA]F[L.sup.T.sub.F] + [L.sub.F][DELTA][F.sup.T][B.sup.T], (32)

where [L.sub.F] is as given in Lemma 4.

Proof. (see also [16, Lemma 3]). The linearized equality constraint of (19) can be rewritten as

[[nabla].sub.L]H (L, F) [DELTA][L.sup.t] + [[nabla].sub.F]H (L, F) [DELTA]F + [[nabla].sub.L]H (L, F) [DELTA][L.sup.n] + H (L, F) = 0, (33)

where [DELTA]L = [DELTA][L.sup.n] + [DELTA][L.sup.t]. According to the definition of Z and [??] the above equation can be decomposed as

[[nabla].sub.L]H(L, F) [DELTA][L.sup.n] + H(L, F) = 0 [for all] [DELTA][L.sup.n], [[nabla].sub.L]H(L, F) [DELTA][L.sup.t] ([DELTA]F) + [[nabla].sub.F]H (L, F) [DELTA]F = 0 [for all] [DELTA]F, (34)

which yield the discrete Lyapunov equations (31) and (32), respectively.

Following the proof of Lemma 9 the step ([DELTA]L, [DELTA]F) that solves the QP (19) with the help of Z and [??] is decomposed as

([DELTA]L, [DELTA]F) = Z (L, F) [DELTA]F + [??] (L, F) H (L, F) =: ([DELTA][L.sup.t] ([DELTA]F), [DELTA]F) + ([DELTA][L.sup.n], 0), (35)

where ([DELTA][L.sup.t]([DELTA]F), [DELTA]F) and ([DELTA][L.sup.n], 0) are the tangential and the normal steps, respectively.

Consequently, one can obtain the normal step ([DELTA][L.sup.n], 0) by solving the discrete Lyapunov equation (31) for [DELTA][L.sup.n]. Having obtained such a step component we solve the following tangential subproblem for [DELTA]F:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], (36)

where

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (37)

and [DELTA][L.sup.t]([DELTA]F) is the corresponding solution of the discrete Lyapunov equation (32).
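The step decomposition reduces to a pair of discrete Lyapunov solves: (31) for the normal component and, given a trial [DELTA]F, (32) for the tangential component. A hedged sketch on hypothetical data (the current iterate L and the direction [DELTA]F below are illustrative, not computed by the method):

```python
# Sketch of the normal/tangential step computation: solve (31) for dLn and,
# for a given direction DF, solve (32) for dLt, with L_F = A(F) L C^T + B F Re
# as in Lemma 4. All data here are hypothetical.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.5, 0.1], [0.0, 0.8]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])
F = np.array([[-0.2]])
Re, V = np.eye(1), np.eye(2)

Ac = A + B @ F @ C
L = np.eye(2)                                 # a hypothetical current iterate
H = -L + Ac @ L @ Ac.T + V + B @ F @ Re @ F.T @ B.T   # residual of (11)

dLn = solve_discrete_lyapunov(Ac, H)          # normal step, eq. (31)

DF = np.array([[0.05]])                       # a hypothetical direction
LF = Ac @ L @ C.T + B @ F @ Re                # L_F from Lemma 4
rhs = B @ DF @ LF.T + LF @ DF.T @ B.T
dLt = solve_discrete_lyapunov(Ac, rhs)        # tangential step, eq. (32)
```

Together the two components satisfy the linearized equality constraint (33).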

In order to find the local solution of the minimization problem (36) let us apply the necessary optimality conditions, which give the following result.

Lemma 10. Let (L, F, K) [member of] [D.sub.s] x [R.sup.nxn] and [DELTA][L.sup.n] [member of] [R.sup.nxn] be given. Moreover, let [DELTA]F [member of] [R.sup.pxr] be a local minimum for the problem (36). Then [DELTA]F solves the following linear matrix equation:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], (38)

where [DELTA][L.sup.t]([DELTA]F), [DELTA]K([DELTA]F), and S, respectively, solve the discrete Lyapunov equations (32) and

[DELTA]K = A[(F).sup.T] [DELTA]KA(F) + [K.sup.T.sub.F] [DELTA]FC + [C.sup.T] [DELTA][F.sup.T][K.sub.F], (39)

S = A[(F).sup.T] SA(F) + M + [M.sup.T], (40)

where [K.sub.F] and [L.sub.F] are as defined in Lemma 4 and

M = -K + A[(F).sup.T]KA(F) + Q(F). (41)

Proof. We use the derivatives of l given in Lemma 4 to construct [psi]([DELTA]F). Differentiating [psi] with respect to [DELTA]F and applying the necessary optimality conditions, we obtain

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], (42)

which is another representation of (38). As an example of evaluating these derivatives we obtain the last term of (42) as follows:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], (43)

where

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (44)

are the exact solutions of the discrete Lyapunov equations (32) and (40), respectively.

Because directional derivatives are used, the resulting linear system (38)-(40) is not given explicitly. However, one can modify the conjugate gradient (CG) method [17] and solve this system iteratively. The problem structure is exploited, where the partition of the unknown X = (L, F) is utilized.

For a given [DELTA]F calculated by the CG method both of [DELTA][L.sup.t]([DELTA]F) and [DELTA]K([DELTA]F) are uniquely determined by solving the discrete Lyapunov equations (32) and (39), respectively. The CG algorithm for solving the system (38)-(40) is stated in the following lines, where we use [T.sub.i], [D.sub.i], and [E.sub.i] at the ith CG inner iteration to denote the approximation of the step component [DELTA]F, the CG direction, and the residual, respectively.

Algorithm 11 (modified CG method). Let U = (L,F,K) [member of] [D.sub.s] x [R.sup.nxn] and [DELTA][L.sup.n] be given. Moreover, let A, B, C, Q, R, [R.sub.e], and V be given constant matrices. Set [T.sub.0] = 0, [E.sub.0] = -[K.sub.F]L[C.sup.T] - ([B.sup.T]KB + R)F[R.sub.e], and [D.sub.0] = [E.sub.0]. Choose [eta] and [epsilon] [member of] (0,1).

For i = 0, 1, ..., do

(1) Solve for [DELTA][L.sup.t]([D.sub.i]) and [DELTA]K([D.sub.i]) the discrete Lyapunov equations (32) and (39), respectively.

(2) Calculate

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (45)

(3) If Tr([D.sup.T.sub.i]N([D.sub.i])) [less than or equal to] [epsilon][[parallel][D.sub.i][parallel].sup.2], update the step [DELTA]F as follows: if i = 0, set [DELTA]F = [D.sub.0]; else, set [DELTA]F = [T.sub.i]. Then exit.

(4) Calculate the ratio [[theta].sub.i] = Tr([E.sup.T.sub.i][E.sub.i])/Tr([D.sup.T.sub.i]N([D.sub.i])). Set [T.sub.i+1] = [T.sub.i] + [[theta].sub.i][D.sub.i] and [E.sub.i+1] = [E.sub.i] - [[theta].sub.i]N([D.sub.i]).

(5) If [parallel][E.sub.i+1][parallel] [less than or equal to] [eta] [parallel][E.sub.0][parallel] set [DELTA]F = [T.sub.i+1] and exit.

(6) Calculate the ratio

[[beta].sub.i] = Tr ([E.sup.T.sub.i+1][E.sub.i+1])/Tr([E.sup.T.sub.i][E.sub.i]). (46)

Set [D.sub.i+1] = [E.sub.i+1] + [[beta].sub.i][D.sub.i], i := i + 1, and go to (1).

End (do)

There are two exits in this CG method. The first takes place when the calculated search direction encounters negative or zero curvature. The second occurs when the norm of the new residual is sufficiently small.
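The loop of Algorithm 11 can be sketched generically for the trace inner product <X, Y> = Tr([X.sup.T]Y), with the problem-specific operator N(*) passed as a callable. This is an illustrative skeleton under simplifying assumptions, not the paper's implementation; in the method itself N([D.sub.i]) would involve the Lyapunov solves of step (1).

```python
# A generic sketch of the modified CG loop (Algorithm 11) with the trace
# inner product and the two exits: negative/zero curvature, and a small
# residual. The operator N, tolerances, and data are hypothetical.
import numpy as np

def modified_cg(N, E0, eps=1e-8, eta=1e-6, max_iter=100):
    """Approximately solve N(T) = E0; T, E, D mirror the algorithm's names."""
    inner = lambda X, Y: np.trace(X.T @ Y)
    T = np.zeros_like(E0)
    E = E0.copy()                   # residual
    D = E0.copy()                   # CG direction
    for _ in range(max_iter):
        ND = N(D)
        curv = inner(D, ND)
        if curv <= eps * np.linalg.norm(D) ** 2:
            # step (3): negative/zero curvature exit
            return D if not T.any() else T
        theta = inner(E, E) / curv                   # step (4)
        T = T + theta * D
        E_new = E - theta * ND
        if np.linalg.norm(E_new) <= eta * np.linalg.norm(E0):
            return T                                  # step (5): small residual
        beta = inner(E_new, E_new) / inner(E, E)      # step (6)
        D = E_new + beta * D
        E = E_new
    return T

# Usage on a small SPD stand-in operator N(X) = M X (hypothetical):
M = np.array([[4.0, 1.0], [1.0, 3.0]])
E0 = np.array([[1.0], [2.0]])
T = modified_cg(lambda X: M @ X, E0)
```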

Having evaluated [DELTA][L.sup.n.sub.k] and ([DELTA][L.sup.t]([DELTA][F.sub.k]), [DELTA][F.sub.k]), the total step ([DELTA][L.sub.k], [DELTA][F.sub.k]) is obtained according to the step decomposition (35). Consequently, the new iterate takes the form

([L.sub.k+1], [F.sub.k+1]) = ([L.sub.k] + [[alpha].sub.k] [DELTA][L.sub.k], [F.sub.k] + [[alpha].sub.k] [DELTA][F.sub.k]), (47)

where [[alpha].sub.k] > 0 is the step size. Starting from [[alpha].sub.0] = 1, we calculate the step size [[alpha].sub.k] by a backtracking rule (e.g., by halving [alpha]) until we satisfy the following sufficient decrease condition:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], (48)

where [tau] [member of] (0, 1/2) is a given parameter. Let [[??].sub.k] be the achieved step size. Starting from that [[??].sub.k] we satisfy the following stability condition by backtracking in the same way until an [[alpha].sub.k] > 0 is reached such that

[L.sub.k] + [[alpha].sub.k][DELTA][L.sub.k] > 0, Y ([L.sub.k] + [[alpha].sub.k][DELTA][L.sub.k], [F.sub.k] + [[alpha].sub.k][DELTA][F.sub.k]) > 0. (49)
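The two-stage backtracking described above can be sketched abstractly: halve [alpha] until the sufficient decrease condition (48) holds, then continue halving until the positive definiteness conditions (49) hold. The merit function phi, its directional derivative, and the feasibility predicate below are hypothetical stand-ins for the paper's quantities:

```python
# Hedged sketch of the two-stage backtracking rule: Armijo-type condition
# (48) first, then the stability/definiteness conditions (49), both by
# halving alpha. A Cholesky factorization serves as the definiteness test.
import numpy as np

def is_pd(W):
    """Positive definiteness test via Cholesky (cf. cholinc in the text)."""
    try:
        np.linalg.cholesky((W + W.T) / 2)
        return True
    except np.linalg.LinAlgError:
        return False

def backtrack(phi, dphi0, feasible, tau=1e-4, alpha=1.0, alpha_min=1e-8):
    """phi(alpha): merit value along the step; dphi0: directional derivative
    at alpha = 0; feasible(alpha): truth of the conditions (49)."""
    phi0 = phi(0.0)
    while phi(alpha) > phi0 + tau * alpha * dphi0 and alpha > alpha_min:
        alpha *= 0.5                    # enforce sufficient decrease (48)
    while not feasible(alpha) and alpha > alpha_min:
        alpha *= 0.5                    # enforce definiteness (49)
    return alpha

# Usage on a 1-D model: phi(a) = (a - 1)^2, decreasing from a = 0.
alpha = backtrack(lambda a: (a - 1.0) ** 2, -2.0, lambda a: True)
```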

We assume that there always exists an [[alpha].sub.k] > 0 such that (48) is satisfied. If [alpha] is decreased and reaches a lower bound [[alpha].sub.min] > 0 without fulfilling (49), then we might follow one of the ideas often used in modifying the Hessian matrix of Newton-based methods; see, for example, [17, Section 6.3]. Let [W.sub.k] denote either of the two matrices [L.sub.k] + [[alpha].sub.min][DELTA][L.sub.k] or Y([L.sub.k] + [[alpha].sub.min][DELTA][L.sub.k], [F.sub.k] + [[alpha].sub.min][DELTA][F.sub.k]) with a spectral decomposition [W.sub.k] = [??][LAMBDA][[??].sup.T]. A correction matrix [DELTA][W.sub.k] of minimum Frobenius norm that ensures [[lambda].sub.min]([W.sub.k] + [DELTA][W.sub.k]) [greater than or equal to] [delta] [greater than or equal to] 0 is given by

[DELTA][W.sub.k] = [??] diag ([t.sub.i]) [[??].sup.T], (50)

where

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. (51)

The modified matrix is

[W.sub.k] + [DELTA][W.sub.k] = [??] ([LAMBDA] + diag ([t.sub.i])) [[??].sup.T], (52)

where [[lambda].sub.min]([W.sub.k]) denotes the smallest eigenvalue of [W.sub.k]. Note that the stability constraint is inactive for most test problems.
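The minimum Frobenius-norm correction (50)-(52) amounts to lifting every eigenvalue below [delta] up to [delta] via the spectral decomposition. A small sketch on a hypothetical indefinite matrix:

```python
# Sketch of (50)-(52): for symmetric W with eigendecomposition W = V L V^T,
# the minimum Frobenius-norm correction uses t_i = max(0, delta - lambda_i).
import numpy as np

def pd_correction(W, delta=1e-6):
    lam, Vt = np.linalg.eigh((W + W.T) / 2)   # spectral decomposition
    t = np.maximum(0.0, delta - lam)          # minimal per-eigenvalue shifts
    return Vt @ np.diag(t) @ Vt.T             # correction Delta W, eq. (50)

W = np.array([[0.5, 0.0], [0.0, -0.3]])       # indefinite example
W_mod = W + pd_correction(W)                  # modified matrix, eq. (52)
```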

The Matlab function [~, j] = cholinc(M) of the incomplete Cholesky factorization can be used for checking positive definiteness via the indicator j. On the other hand, as mentioned at the end of Section 2, we can check the stability condition via the spectral radius of the closed-loop system matrix; namely, [rho](A([F.sub.k] + [[alpha].sub.k][DELTA][F.sub.k])) < 1.

We use as a merit function the following [l.sub.1]-penalty function for globalizing the SQP method:

[[PHI].sup.u](L,F) = J(L,F) + [mu][[parallel]H(L, F)[parallel].sub.1], (53)

where [mu] > 0 is the penalty parameter. This function is directionally differentiable.

The following update rule for the penalty parameter [mu] is considered (see, e.g., [17, page 547]):

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], (54)

where [gamma] > 0 is a given constant and [DELTA][K.sub.k] is obtained by solving the discrete Lyapunov equation (39).

The following algorithm demonstrates the SQP method that finds the local solution of the NSDP (10)-(12).

Algorithm 12 (the method DSQP). Let ([L.sub.0], [F.sub.0], [K.sub.0]) [member of] [D.sub.s] x [R.sup.nxn] be given. Moreover, let A, B, C, Q, R, [R.sub.e], and V be given constant matrices. Choose [tau] [member of] (0, 1/2), [gamma] > 0, and [e.sub.tol] [member of] (0, 1).

While [parallel][[nabla].sub.U][l.sub.k][parallel] > [e.sub.tol], do

(1) Calculate [DELTA][L.sup.n.sub.k] solution of the discrete Lyapunov equation (31).

(2) For a given [DELTA][L.sup.n.sub.k] calculate the local solution ([DELTA][L.sup.t]([DELTA][F.sub.k]), [DELTA][F.sub.k]) of the problem (36) by Algorithm 11. Then set ([DELTA][L.sub.k], [DELTA][F.sub.k]) = ([DELTA][L.sup.n.sub.k], 0) + ([DELTA][L.sup.t]([DELTA][F.sub.k]), [DELTA][F.sub.k]).

(3) Calculate [DELTA][K.sub.k] by solving (39) and then update the penalty parameter [[mu].sub.k] using (54).

(4) Starting from [[alpha].sub.0] = 1 apply a backtracking step-size rule that satisfies the sufficient decrease conditions (48).

(5) If ([L.sub.k] + [[alpha].sub.k][DELTA][L.sub.k], [F.sub.k] + [[alpha].sub.k][DELTA][F.sub.k]) [not member of] [D.sub.s], then maintain the positive definiteness of (49) as described above.

(6) Set [F.sub.k+1] = [F.sub.k] + [[alpha].sub.k][DELTA][F.sub.k], [L.sub.k+1] = [L.sub.k] + [[alpha].sub.k][DELTA][L.sub.k], [K.sub.k+1] = [K.sub.k] + [[alpha].sub.k][DELTA][K.sub.k], and k := k + 1; then go to step (1).

End (While)

Step (5) of Algorithm 12 can also be modified as follows. Set [F.sub.k+1] = [F.sub.k] + [[alpha].sub.k] [DELTA][F.sub.k]. Then obtain [L.sub.k+1] = L([F.sub.k+1]) and [K.sub.k+1] = K([F.sub.k+1]) by solving the discrete Lyapunov equations (17) and (18), respectively. This means we project the search direction from the space [D.sub.s] to the control-variable space [R.sup.pxr].

The global convergence behavior of the method DSQP is beyond the scope of this work. However, we state some of the main global convergence results, which can be shown under the following general assumptions.

Assumption 13. In addition to Assumption 6 we further assume that the functions J and H together with their derivatives are bounded on [OMEGA] [subset or equal to] [D.sub.s]. We further assume that the operator [H.sup.-1.sub.L] is uniformly bounded in [D.sub.s] and that the sequences {[[nabla].sup.2][l.sub.k]}, {[L.sub.k]}, {[F.sub.k]}, {[K.sub.k]}, and {Z([L.sub.k], [F.sub.k])} are all bounded in [OMEGA].

The next lemma provides an upper bound for the directional derivative of the merit function by the computed search direction.

Lemma 14. Let ([DELTA]L, [DELTA]F, [DELTA]K) be generated by the SQP iteration (21). Then the directional derivative of [[PHI].sup.u](L, F) in the direction ([DELTA]L, [DELTA]F) satisfies

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], (55)

where the directional derivatives of [[PHI].sup.u](L, F) are given by

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. (56)

Proof. See [17, Lemma 18.2].

Following [17, Theorem 18.3], the search direction computed by the method DSQP is a descent direction for the merit function [PHI].

Theorem 15. Suppose that ([L.sub.k], [F.sub.k]) [member of] [D.sub.s] is not a stationary point of the NSDP (10)-(12) and the reduced Hessian [Z.sup.T.sub.k][[nabla].sup.2][l.sub.k][Z.sub.k] is positive definite. Then the search direction ([DELTA][L.sub.k], [DELTA][F.sub.k]) computed by the method DSQP is a descent direction for the merit function [PHI] if [mu] is updated by (54).

4. Initial Feasible Solution

As can be seen in Algorithm 12, an initial ([L.sub.0], [F.sub.0]) [member of] [D.sub.s] is required to start the method DSQP. Such a matrix pair can be obtained by using any method that calculates suboptimal solutions (see, e.g., [21, Algorithm Fstab]) or by a pole assignment method; see, for example, [18]. Another alternative is to parameterize the system matrix A so that the parameterized matrix has spectral radius less than one, in which case [F.sub.0] can be chosen as the zero matrix. Hence, according to Lemma 3, one can obtain [L.sub.0] > 0 by solving the discrete Lyapunov equation (17). This idea is described in [16] and is stated herein for completeness. In the parametrization approach one replaces the system matrix A by [A.sub.t] = (1 - t)A, where 0 [less than or equal to] t < 1 is a parameter. Then, instead of solving the NSDP problem (10)-(12), we solve the following parameterized problem:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], (57)

where [A.sub.t](F) = [A.sub.t] + BFC and the constant matrices A, B, C, Q, [R.sub.e], and V are as defined in Section 2. The matrix pair (L, F) must be chosen from the following parameterized set:

[D.sup.t.sub.s] = {(L, F) [member of] [R.sup.nxn] x [R.sup.pxr] : L > 0, [Y.sub.t] (L, F) > 0}. (58)

Obviously as t [right arrow] 0 the set [D.sup.t.sub.s] approaches the set [D.sub.s].

The initialization of the method DSQP can be as follows. Starting from t = 0 and increasing t over [0,1), we choose [t.sub.0] as the first candidate such that [rho]((1 - [t.sub.0])A) < 1 and set [F.sub.0] = 0. Having ([t.sub.0], [F.sub.0]), initial [L.sub.0] and [K.sub.0] can then be obtained by solving the discrete Lyapunov equations:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. (59)

Then Algorithm 12 can be applied directly on the parameterized problem (57) but after replacing A(F) by [A.sub.t](F). The parameter t is updated in the main iteration loop according to the decreasing sequence {[t.sub.k]}:

[t.sub.k+1] = [c.sub.k][t.sub.k], (60)

where {[c.sub.k]} [down arrow] 0.
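Since [rho]((1 - t)A) = (1 - t)[rho](A), the initial parameter [t.sub.0] can be found by a simple scan upward from t = 0. A sketch on a hypothetical unstable system matrix:

```python
# Sketch of the parametrization initialization: scan t upward from 0 on a
# grid and return the first t with rho((1 - t) A) < 1, so that F_0 = 0 is
# feasible for the parameterized problem. Grid and data are hypothetical.
import numpy as np

def first_stabilizing_t(A, grid=np.linspace(0.0, 0.99, 100)):
    for t in grid:
        if np.max(np.abs(np.linalg.eigvals((1.0 - t) * A))) < 1.0:
            return t
    return None

A_unstable = np.array([[1.2, 0.0], [0.0, 0.5]])   # rho(A) = 1.2 >= 1
t0 = first_stabilizing_t(A_unstable)
```

Here a [t.sub.0] slightly above 1 - 1/[rho](A) is returned; for a stable A the scan stops immediately at t = 0.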

On the other hand, the method DSQP can also be used to find feasible points of the problem (57), that is, points that satisfy the equality constraints as well as the positive definiteness constraints. This is achieved by solving the problem with a zero objective function. For that purpose we set the objective function and all its derivatives to zero in Algorithm 12. With the exception of (15), this simplification has no drawbacks for Algorithm 12. In that case the zero objective term in (15) implies that (18) reduces to

K = [A.sub.t][(F).sup.T]K[A.sub.t] (F), (61)

which cannot be used in its current form to update the multiplier K as suggested in Algorithm 12, because it yields the trivial solution K = 0. In order to overcome this difficulty, the multiplier is instead updated by the modified equation:

K = [A.sub.t][(F).sup.T]K[A.sub.t] (F) + [I.sub.n]. (62)

5. Numerical Results

This section describes an implementation of the method DSQP in Matlab. The method DSQP is compared with Newton's method (NM) [9] applied to the minimization problem (5)-(6). Four test problems from the benchmark collection COMPlib [8] are considered in detail, in addition to a hydraulic turbine application [26]. The benchmark COMPlib contains mainly continuous-time test problems; therefore, the function c2d from the Matlab Control System Toolbox is used to convert each continuous-time model to its discrete-time counterpart.

For test problems with [rho](A) < 1, according to Lemma 3 it is possible to start the method DSQP with [F.sub.0] = 0 and to obtain [L.sub.0] by solving the discrete Lyapunov equation (17). For test problems with [rho](A) [greater than or equal to] 1, however, an initial ([L.sub.0], [F.sub.0]) [member of] [D.sub.s] is required; if none is available, the parametrization approach described in Section 4 can be applied. The function dlyap from the Matlab Control System Toolbox is used to calculate approximate solutions of the discrete Lyapunov equations. For all test problems the tolerance is [[epsilon].sub.tol] = 1 x [10.sup.-6]. The constant weight matrices Q and R and the covariance matrices V and [R.sub.e] are taken to be identity matrices.
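For readers without the Matlab toolboxes, the same preprocessing pipeline can be sketched with SciPy; cont2discrete and solve_discrete_lyapunov play the roles of c2d and dlyap. The continuous-time model below is a made-up stand-in, not one of the COMPlib problems.

```python
import numpy as np
from scipy.signal import cont2discrete
from scipy.linalg import solve_discrete_lyapunov

# hypothetical continuous-time model (Ac, Bc, Cc, Dc)
Ac = np.array([[0.0, 1.0], [-2.0, -3.0]])
Bc = np.array([[0.0], [1.0]])
Cc = np.eye(2)
Dc = np.zeros((2, 1))

# zero-order-hold discretization with sample period 0.1 s
# (the SciPy counterpart of Matlab's c2d)
Ad, Bd, Cd, Dd, dt = cont2discrete((Ac, Bc, Cc, Dc), dt=0.1, method='zoh')

rho = max(abs(np.linalg.eigvals(Ad)))  # Schur stable iff rho < 1
# counterpart of Matlab's dlyap: solves X = Ad X Ad^T + Q
L0 = solve_discrete_lyapunov(Ad, np.eye(2))
```

Since Ac has eigenvalues -1 and -2, the discretized matrix Ad has spectral radius [e.sup.-0.1] [approximately equal to] 0.905 < 1, so a Lyapunov-based initialization applies.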

Example 16. The first test problem is a supersonic transport aircraft under a Mach 2.7 flight condition; see [8, AC16]. The discrete-time data matrices are as follows:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. (63)

Given the data matrices A, B, C, Q, R, [R.sub.e], and V, the goal is to calculate a local solution ([L.sub.*], [F.sub.*]) [member of] [D.sub.s] of the NSDP problem (10)-(12) using the method DSQP. The corresponding [F.sub.*] is considered an approximate solution of the design problem (2)-(4). The open-loop system is discrete-time Schur stable, because [rho](A) = 0.9989 < 1. Hence, according to Lemma 3 one can choose [F.sub.0] = 0 and obtain [L.sub.0] > 0 as the corresponding solution of the discrete Lyapunov equation (17). The two methods NM and DSQP converge to the same local minimum after 25 and 21 iterations, respectively. Moreover, we have also tested the method DSQP for finding feasible points, as explained at the end of Section 4. The achieved KKT and feasible points are, respectively,

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], (64)

where [L.sub.*] and [L.sub.feas] are the corresponding solutions of the discrete Lyapunov equation (17). Table 1 shows the convergence behavior of the method DSQP to the local solution.

The method DSQP converges to a local minimum in fewer iterations than the method NM for most of the considered test problems. The subsequent examples exhibit similar behavior.

Example 17. This test problem is the discrete version of a transport aircraft model [8, AC8]. The problem has the dimensions n = 9, p = 1, and r = 5; therefore, we do not list the data matrices A, B, and C.

The open-loop system of this example is not discrete-time Schur stable, since [rho](A) = 1.0012 > 1. The two methods NM and DSQP are applied with the parametrization approach, where [t.sub.0] = 0.002 and [c.sub.k] = [(0.8).sup.k]. Both methods converge to the same local minimum after 21 and 18 iterations, respectively. The achieved KKT and feasible points are, respectively,

[F.sub.*] = [2.1458 -0.5334 -1.4506 0.0386 1.0690], [F.sub.feas] = [2.5640 -0.6294 -1.6525 0.0469 1.2302], (65)

where [L.sub.*] and [L.sub.feas] are the corresponding solutions of the discrete Lyapunov equation (17).

Table 2 shows the behavior of the two methods while converging to the local solution; the objective functions and the convergence criteria are listed columnwise. Both methods show a fast local convergence rate starting from [F.sub.0] = 0.

Figure 1 shows the state variables for the open- and closed-loop systems of the current example, where the achieved output feedback controller matrix [F.sub.*] forces all state variables to decay to the zero state.

Example 18. This test problem is a decentralized interconnected system [8, DIS3]. The problem has the dimensions n = 8, p = 4, and r = 4; therefore, we do not list the data matrices A, B, and C.

The open-loop system for this example is discrete-time Schur stable, with [rho](A) = 0.9620 < 1. Starting from [F.sub.0] = 0, where [L.sub.0] is the corresponding solution of (17), the method DSQP converges to a KKT point ([L.sub.*], [F.sub.*]) in 14 iterations, while the method NM requires 16 iterations to converge to the same local minimum. Furthermore, the method DSQP requires 17 iterations to reach a feasible point. The KKT and feasible points are, respectively,

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], (66)

where [L.sub.*] and [L.sub.feas] are the corresponding solutions of (17). Furthermore, Table 3 shows the convergence behavior of the method DSQP to the local solution ([L.sub.*], [F.sub.*]) [member of] [D.sub.s].

Example 19. This test problem represents a nuclear reactor model [8, REA3]. The dimensions of this example are n = 12, p = 1, and r = 3; therefore, we skip listing the data matrices A, B, and C.

The open-loop system for this example is not discrete-time Schur stable. Applying the parametrization approach, the methods NM and DSQP converge to the same local minimizer after 31 and 27 iterations, respectively. The achieved KKT and feasible points are, respectively,

[F.sub.*] = [-0.0350 -0.3937 -8.3675], [F.sub.feas] = [-0.1269 -1.4385 -3.0157], (67)

where [L.sub.*] and [L.sub.feas] are the corresponding solutions of (17).

Table 4 shows the convergence behavior of the method DSQP to the KKT point.

Example 20 (hydraulic turbine monitoring). This model was considered by Wang and Daley [26], whose objective was to study a fault detection method for it. Here, instead, the aim is to apply the SQP method to calculate the required SOF controller [F.sub.*]. Such a controller should simultaneously stabilize the control system (2) and minimize the quadratic cost function (3).

The hydraulic turbine generating set consists of a reservoir, an inlet pipe, a hydraulic turbine, a generator, a transmission line, and an infinite bus for the produced current. A gate located at the end of the inlet pipe controls the water flow rate inside the pipe. A change in the gate position yields a change in the speed of the turbine and in the produced power.

The dynamical behavior of the hydraulic turbine generating set is represented by the following equations (see [26]):

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], (68)

where N(t), H(t), and G(t) are the speed of the turbine, the water pressure at the gate, and the gate opening position, respectively. Moreover, M(N(t), H(t), G(t)), Q(N(t), H(t), G(t)), and [M.sub.d](t) are the driving torque on the gate, the water flow inside the pipe, and the load into the infinite bus, respectively; [T.sub.a] and [T.sub.w] are the total inertia and the water starting time constant, respectively. Both M and Q are assumed to be nonlinear, unknown functions of the variables N(t), H(t), and G(t).

Let [P.sub.n]([N.sub.n], [H.sub.n], [G.sub.n]) be a nominal reference point. Linearizing the system (68) about [P.sub.n] yields the following continuous-time linear control system (see [26] for details):

[??](t) = [A.sub.c]x (t) + [B.sub.c]u(t) + [E.sub.c]v(t), (69)

where

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. (70)

The corresponding discrete-time state space model takes the following form:

[x.sub.k+1] = A[x.sub.k] + B[u.sub.k] + E[v.sub.k], [y.sub.k] = C[x.sub.k], (71)

where

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. (72)

Choosing the sampling period as 0.1 sec., the following data matrices are obtained:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], (73)

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. (74)

The discrete-time system (71) is Schur stable, with [rho](A) = 0.9871 < 1. As in the above examples, we run the methods NM and DSQP on this model using the above data matrices. The two methods converge to the same stationary point after 12 and 11 iterations, respectively, where

[F.sub.*] = [-0.0083 -0.0117]. (75)

Figure 2 shows the state variables of the open- and closed-loop systems. Although the open-loop system is Schur stable, the computed SOF controller [F.sub.*] accelerates the decay of the state variables to the zero state.
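The effect seen in Figure 2 can be illustrated by simulating the closed-loop recursion [x.sub.k+1] = (A + BFC)[x.sub.k] from (71) with [u.sub.k] = F[y.sub.k]. The matrices and gain below are made-up illustrations, not the turbine data (73)-(74):

```python
import numpy as np

# illustrative discrete-time SOF loop x_{k+1} = (A + B F C) x_k
A = np.array([[0.9, 0.1], [0.0, 0.95]])
B = np.array([[0.0], [0.1]])
C = np.eye(2)
F = np.array([[-0.5, -0.8]])           # a stabilizing SOF gain

Acl = A + B @ F @ C
rho_open = max(abs(np.linalg.eigvals(A)))      # open-loop spectral radius
rho_closed = max(abs(np.linalg.eigvals(Acl)))  # smaller than rho_open

x = np.array([1.0, 1.0])
for _ in range(200):
    x = Acl @ x
# rho_closed < rho_open < 1: both loops decay, but the feedback
# accelerates the decay of the state to zero, as in Figure 2
```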

In Table 5 we further compare the SQP method for finding the local solution of the NSDP problem (10)-(12) with Newton's method applied to the special formulation of the optimization problem (5)-(6). The comparison is performed on 50 test problems from the benchmark collection [8]. The table lists the problem name, the problem dimensions, the stability indicators for the open- and closed-loop control systems, and the number of iterations each method needs to reach the stationary points. A dash "-" in the last two columns indicates that the method could not achieve the prescribed accuracy. The obtained results show that the SQP method outperforms Newton's method with respect to the number of iterations.

6. Conclusion

In this paper, an SQP method with line search is introduced for finding a local solution of an NSDP problem resulting from the discrete-time static output feedback design problem. One of the two unknowns of the KKT point obtained by the numerical method is the output feedback controller matrix F, which represents the approximate solution of the design problem.

In order to initialize the SQP method, a straightforward strategy was considered to create a starting point that is feasible with respect to the positive definiteness constraints (12). The method DSQP outperformed Newton's method [9] applied to a special formulation of the optimization problem. Furthermore, the obtained results show that the method DSQP converges from remote starting points to a KKT point at a fast local rate, which is typical for SQP methods. Feasible points of the NSDP problem have also been calculated by the SQP method; such feasible points can be utilized, for example, by methods that search for optimal solutions. Finally, the numerical experiments show that the proposed algorithm is effective in practical applications related to this problem class.

http://dx.doi.org/10.1155/2014/683797

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

References

[1] F. Jarre, "An interior-point method for non-convex semidefinite programs," Optimization and Engineering, vol. 1, pp. 347-372, 2000.

[2] F. Leibfritz and E. M. E. Mostafa, "An interior point constrained trust region method for a special class of nonlinear semidefinite programming problems," SIAM Journal on Optimization, vol. 12, no. 4, pp. 1048-1074, 2002.

[3] M. Kocvara, F. Leibfritz, M. Stingl, and D. Henrion, "A nonlinear sdp algorithm for static output feedback problems in COMPleib," in Proceedings of the 16th Triennial World Congress of International Federation of Automatic Control (IFAC '05), pp. 1055-1060, July 2005.

[4] D. Sun, J. Sun, and L. Zhang, "The rate of convergence of the augmented Lagrangian method for nonlinear semidefinite programming," Mathematical Programming, vol. 114, no. 2, pp. 349-391, 2008.

[5] R. Correa and H. C. Ramirez, "A global algorithm for nonlinear semidefinite programming," SIAM Journal on Optimization, vol. 15, no. 1, pp. 303-318, 2004.

[6] H. Yamashita and H. Yabe, "Local and superlinear convergence of a primal-dual interior point method for nonlinear semidefinite programming," Mathematical Programming, vol. 132, no. 1-2, pp. 1-30, 2012.

[7] R. W. Freund, F. Jarre, and C. H. Vogelbusch, "Nonlinear semidefinite programming: sensitivity, convergence, and an application in passive reduced-order modeling," Mathematical Programming, vol. 109, no. 2-3, pp. 581-611, 2007.

[8] F. Leibfritz, "COMPlib: COnstraint Matrix-optimization Problem library-a collection of test examples for nonlinear semidefinite programs, control system design and related problems," Tech. Rep., 2004, http://www.complib.de/.

[9] P. M. Makila and H. T. Toivonen, "Computational methods for parametric LQ problems-a survey," IEEE Transactions on Automatic Control, vol. 32, no. 8, pp. 658-671, 1987.

[10] T. Rautert and E. W. Sachs, "Computational design of optimal output feedback controllers," SIAM Journal on Optimization, vol. 7, no. 3, pp. 837-852, 1997.

[11] P. M. Makila, "Linear quadratic control revisited," Automatica, vol. 36, no. 1, pp. 83-89, 2000.

[12] P. Apkarian, H. D. Tuan, and J. Bernussou, "Continuous-time analysis, eigenstructure assignment, and [H.sub.2] synthesis with enhanced linear matrix inequalities (LMI) characterizations," IEEE Transactions on Automatic Control, vol. 46, no. 12, pp. 1941-1946, 2001.

[13] F. Leibfritz and E. M. E. Mostafa, "Trust region methods for solving the optimal output feedback design problem," International Journal of Control, vol. 76, no. 5, pp. 501-519, 2003.

[14] E. M. E. Mostafa, "A trust region method for solving the decentralized static output feedback design problem," Journal of Applied Mathematics & Computing, vol. 18, no. 1-2, pp. 1-23, 2005.

[15] E. M. E. Mostafa, "An augmented Lagrangian SQP method for solving some special class of nonlinear semi-definite programming problems," Computational & Applied Mathematics, vol. 24, no. 3, pp. 461-486, 2005.

[16] E. M. E. Mostafa, "An SQP trust region method for solving the discrete-time linear quadratic control problem," International Journal of Applied Mathematics and Computer Science, vol. 22, no. 2, pp. 353-363, 2012.

[17] J. Nocedal and S. J. Wright, Numerical Optimization, Springer Series in Operations Research, Springer, New York, NY, USA, 1999.

[18] V. L. Syrmos, C. T. Abdallah, P. Dorato, and K. Grigoriadis, "Static output feedback-a survey," Automatica, vol. 33, pp. 125-137, 1997.

[19] G. Garcia, B. Pradin, and F. Zeng, "Stabilization of discrete time linear systems by static output feedback," IEEE Transactions on Automatic Control, vol. 46, no. 12, pp. 1954-1958, 2001.

[20] J.-W. Lee and P. P. Khargonekar, "Constrained infinite-horizon linear quadratic regulation of discrete-time systems," IEEE Transactions on Automatic Control, vol. 52, no. 10, pp. 1951-1958, 2007.

[21] E. M. E. Mostafa, "Computational design of optimal discrete-time output feedback controllers," Journal of the Operations Research Society of Japan, vol. 51, no. 1, pp. 15-28, 2008.

[22] E. M. E. Mostafa, "A conjugate gradient method for discretetime output feedback control design," Journal of Computational Mathematics, vol. 30, no. 3, pp. 279-297, 2012.

[23] B. Sulikowski, K. Galkowski, E. Rogers, and D. H. Owens, "Output feedback control of discrete linear repetitive processes," Automatica, vol. 40, no. 12, pp. 2167-2173, 2004.

[24] E. D. Sontag, Mathematical Control Theory, vol. 6, Springer, New York, NY, USA, 2nd edition, 1998.

[25] F. S. Kupfer, Reduced SQP in Hilbert space with applications to optimal control, [Ph.D. dissertation], FB IV-Mathematik, Universitat Trier, Trier, Germany, 1992.

[26] H. Wang and S. Daley, "A fault detection method for unknown systems with unknown input and its application to hydraulic turbine monitoring," International Journal of Control, vol. 57, pp. 247-260, 1993.

El-Sayed M. E. Mostafa

Department of Mathematics, Faculty of Science, Alexandria University, Moharam Bey, Alexandria 21511, Egypt

Correspondence should be addressed to El-Sayed M. E. Mostafa; emostafa@alex-sci.edu.eg

Received 20 May 2013; Accepted 11 November 2013; Published 2 January 2014

Academic Editor: Jung-Fa Tsai

TABLE 1: Convergence behavior of the method DSQP for Example 16.

k    [J.sub.k]       [parallel][[nabla].sub.U][l.sub.k][parallel]   [rho](A([F.sub.k]))   [i.sub.cg]
1    3.1135e + 005   8.4992e + 007                                  9.9838e - 001         0
2    2.1007e + 005   3.7773e + 007                                  9.9745e - 001         1
3    1.4256e + 005   1.6786e + 007                                  9.9581e - 001         1
19   1.5151e + 003   2.8400e - 003                                  9.6853e - 001         6
20   1.5151e + 003   7.6411e - 005                                  9.6853e - 001         6
21   1.5151e + 003   9.0844e - 009                                  9.6853e - 001         8

TABLE 2: Convergence behavior of the methods NM and DSQP for Example 17.

                      NM                                                            DSQP
k    [[??].sub.k]    [parallel][nabla][[??].sub.k][parallel]   k    [J.sub.k]       [parallel][[nabla].sub.U][l.sub.k][parallel]   [t.sub.k]
1    2.7105e + 003   1.3120e + 005                             1    1.9178e + 003   5.4776e + 004                                  2.00e - 003
2    4.0079e + 003   2.2196e + 005                             2    1.9541e + 003   3.9033e + 004                                  1.60e - 003
3    6.0553e + 003   3.4516e + 005                             3    2.4908e + 003   3.4364e + 004                                  1.02e - 003
19   1.5413e + 003   2.3402e - 001                             16   1.5413e + 003   1.9083e - 002                                  1.32e - 016
20   1.5413e + 003   4.6373e - 003                             17   1.5413e + 003   2.2029e - 006                                  2.98e - 018
21   1.5413e + 003   7.4319e - 007                             18   1.5413e + 003   3.8485e - 011                                  5.36e - 020

TABLE 3: Convergence behavior of the method DSQP for Example
18.

k         [J.sub.k]       [parallel]         [rho](A      [i.sub.cg]
                        [[nabla].sub.U]   ([F.sub.k]))
                           [l.sub.k]
                          [parallel]

1       1.3612e + 003    1.7167e + 004    9.6199e - 001       1
2       9.6941e + 002    7.6729e + 003    9.4539e - 001       1
3       6.9504e + 002    3.4950e + 003    9.6356e - 001       1
12      6.7653e + 001    1.2410e - 002    9.0019e - 001       5
13      6.7653e + 001    7.0258e - 005    9.0021e - 001       8
14      6.7653e + 001    1.3558e - 008    9.0021e - 001       13

TABLE 4: Convergence behavior of the method DSQP for Example 19.

k    [J.sub.k]       [parallel][[nabla].sub.U][l.sub.k][parallel]   [rho]([A.sub.t]([F.sub.k]))   [t.sub.k]     [i.sub.cg]
0    3.0141e + 008   1.5819e + 010                                  9.9779e - 001                 2.00e - 003   1
1    2.7360e + 008   1.2031e + 010                                  9.9819e - 001                 1.00e - 003   1
2    2.7696e + 008   1.0624e + 010                                  9.9877e - 001                 1.02e - 003   1
25   1.8154e + 004   8.4501e - 001                                  9.9979e - 001                 6.39e - 035   3
26   1.8154e + 004   5.7828e - 004                                  9.9979e - 001                 1.93e - 037   3
27   1.8154e + 004   7.7346e - 007                                  9.9979e - 001                 4.67e - 040   4

TABLE 5: Comparison between the methods NM and DSQP for finding the
local solutions of the problems (5)-(6) and (10)-(12), respectively;
test problems are from [8].

Problem    n     p    r    [rho](A)   [rho](A([F.sub.*]))   NM   DSQP

AC1        5     3    3     1.0000      9.6958e - 001       11     7
AC3        5     2    4     0.9991      9.5419e - 001       20    19
AC4        4     1    2     1.2942      9.9501e - 001       11    10
AC6        7     2    4     0.9992      9.1586e - 001       19    17
AC7        9     1    2     1.0174      9.9693e - 001        7     5
AC8        9     1    5     1.0012      9.6072e - 001       21    18
AC15       4     2    3     0.9990      9.6497e - 001       23    22
AC16       4     2    4     0.9989      9.6853e - 001       25    21
AC17       4     1    2     0.9723      9.4295e - 001       13    11
HE1        4     2    1     1.0280      9.9116e - 001        5     4
HE2        4     2    2     0.9971      9.6999e - 001       16    13
HE3        8     4    6     1.0088      9.6678e - 001        9     6
REA1       4     2    2     1.2203      8.9332e - 001        8     4
REA2       4     2    3     1.2227      8.9821e - 001        7     4
REA3       12    1    3     1.0000      9.9979e - 001       31    27
DIS1       8     4    4     0.9912      9.5066e - 001       13    11
DIS2       3     2    2     1.1824      7.7117e - 001        8     4
DIS3       6     4    4     0.9620      9.0021e - 001       16    14
DIS4       6     4    6     1.1551      8.7595e - 001        9     6
AGS        12    2    2     0.9786      9.7976e - 001        7     5
TG1        10    2    2     0.9768      9.6791e - 001       19    17
UWV        8     2    2     0.9989      3.0749e - 001       20    19
IH         21    11   10    1.0000      9.7497e - 001        5     3
EB1        10    1    1     0.9990      9.9370e - 001       14     8
EB2        10    1    1     0.9990      9.9370e - 001        9     8
TF1        7     2    4     1           9.9241e - 001       --     8
PSM        7     2    3     0.9495      9.1393e - 001       11     9
NN2        2     1    1     1.0000      9.4185e - 001        3     2
NN4        4     2    3     0.9969      9.3285e - 001       14    12
NN8        3     2    2     0.9971      9.5459e - 001       14    13
NN11       16    3    5     0.9048      9.0783e - 001        8     6
NN13       6     2    2     1.2147      8.0133e - 001        7     4
NN15       3     2    2     1           9.9880e - 001        7     6
NN16       8     4    4     1.0000      9.8135e - 001       10     6
HF2D10     5     2    3     1.0133      9.2187e - 001        4     3
HF2D11     5     2    3     1.0253      8.0074e - 001        4     2
HF2D12     5     2    4     0.9830      9.4527e - 001       13    --
HF2D13     5     2    4     0.9756      8.0712e - 001       13    12
HF2D14     5     2    4     1.0227      9.6188e - 001       14    27
HF2D15     5     2    4     1.1689      8.7364e - 001        8     6
HF2D17     5     2    4     1.0557      8.0215e - 001        9     5
HF2D18     5     2    2     1.0285      9.9607e - 001        8     6
MFP        4     3    2     0.9979      9.9567e - 001       --    11
TMD        6     2    4     1           9.8667e - 001        8     5
LAH        48    1    1     0.9742      9.7416e - 001        3     2
DLR1       10    2    2     0.9995      9.9902e - 001        9     6
DLR2       40    2    2     0.9995      9.9943e - 001        6     5
DLR3       40    2    2     0.9995      9.9943e - 001        7     5
HF1        130   1    2     0.9981      9.9573e - 001        7     6
ISS1       270   3    3     0.9997      9.9962e - 001       11     7
Publication: Journal of Applied Mathematics, Hindawi, 2014.