# A Quasi-Monte-Carlo-Based Feasible Sequential System of Linear Equations Method for Stochastic Programs with Recourse

1. Introduction

Stochastic programming is a framework for modeling optimization problems that involve uncertainty. It has applications in a broad range of areas, from finance and transportation to energy optimization [1, 2]. In the field of industrial production, stochastic programming is also widely used in stochastic control [3-7].

We consider the following two-stage stochastic quadratic programming problem:

min f (x) = P (x) + Q (x), (1a)

subject to c (x) [less than or equal to] 0, (1b)

where

Q (x) = [[integral].sub.[OMEGA]] Q (x, [omega]) p ([omega]) d[omega], (1c)

[mathematical expression not reproducible], (1d)

P(*) : [R.sup.n] [right arrow] R and c(*) : [R.sup.n] [right arrow] [R.sup.m] are twice continuously differentiable. G [member of] [R.sup.sxs] is symmetric positive definite. T [member of] [R.sup.sxn], q [member of] [R.sup.t], and W [member of] [R.sup.txs] are fixed matrices or vectors. [omega] [member of] [R.sup.r] and h(*) are random vectors. p(*) : [R.sup.r] [right arrow] [R.sub.+] is a continuously differentiable probability density function.

Let F = {x [member of] [R.sup.n] | c(x) [less than or equal to] 0} and Z = {y [member of] [R.sup.s] | Wy [less than or equal to] q}. We denote the active constraint set by [I.sub.0](x) = {i [member of] I | [c.sub.i](x) = 0}, where I = {1, ..., m}. Throughout the paper, the following hypotheses hold.

Assumption 1. F and Z are bounded.

Assumption 2. At every x [member of] F, the vectors [nabla] [c.sub.i] (x), i [member of] [I.sub.0](x) are linearly independent.

A basic difficulty in solving stochastic optimization problem (1a), (1b), (1c), and (1d) is that the objective function, which involves uncertainty, can be complicated or difficult to compute even approximately. The aim of this paper is to give computational approaches based on quasi-Monte-Carlo sampling techniques. To solve stochastic programming problems, one usually resorts to deterministic optimization methods. This idea is a natural one and has been used by many authors over the years [8-12]. Deterministic methods have also been applied, in a vast literature, to stochastic programming problems that involve quadratic programming. The extended linear quadratic programming (ELQP) model was introduced by Rockafellar and Wets [13, 14]. Qi and Womersley proposed a sequential quadratic programming (SQP) algorithm for ELQP problems. To solve ELQP, Chen et al. suggested a Newton-type approach and showed that this method is globally convergent and locally superlinearly convergent. At the same time, Birge et al. investigated a stochastic Newton method for ELQP with the inequality constraint Ax [less than or equal to] b. Global convergence and local superlinear convergence of the method were established.

In order to get a numerical solution of (1a), (1b), (1c), and (1d) based on quasi-Monte-Carlo techniques, consider the following approximation of (1c):

[mathematical expression not reproducible], (2)

where the sample points [[omega].sup.i] [member of] [OMEGA] are generated by lattice rules [18, 19]. Consequently, problem (1a), (1b), (1c), and (1d) is approximated by
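As a concrete illustration of the sample-average approximation in (2), the sketch below replaces an integral by an equal-weight average of the integrand over a rank-1 lattice point set. The Fibonacci-type generating vector and the toy integrand are illustrative assumptions, not the specific rules of the cited lattice-rule literature.

```python
# Sketch of approximation (2): an integral over the unit hypercube is
# replaced by an equal-weight average over quasi-Monte-Carlo points.
# Here the points come from a rank-1 lattice rule; the generating
# vector is the classical 2-D Fibonacci choice (illustrative only).
def rank1_lattice(n, z):
    """n points of the rank-1 lattice {frac(i * z / n)} in [0,1)^dim."""
    dim = len(z)
    return [[(i * z[d] / n) % 1.0 for d in range(dim)] for i in range(n)]

def qmc_average(integrand, points):
    """Equal-weight quasi-Monte-Carlo estimate, as in (2)."""
    return sum(integrand(p) for p in points) / len(points)

# Toy stand-in for Q(x, omega): the integral of omega_1 * omega_2 over
# the unit square equals 1/4, so the estimate should be close to 0.25.
pts = rank1_lattice(987, [1, 610])           # Fibonacci lattice, n = F_16
est = qmc_average(lambda w: w[0] * w[1], pts)
```

The same `qmc_average` shape is what (2) does with Q(x, [omega].sup.i]) as the integrand; only the point set and the integrand change.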

[mathematical expression not reproducible], (3a)

subject to c (x) [less than or equal to] 0. (3b)

Since Z is bounded, it follows from  that f(x) is twice continuously differentiable. Moreover, from , the approximated objective function [mathematical expression not reproducible] (x) has the following continuous first derivative in [R.sup.n]:

[mathematical expression not reproducible], (4)

where [z.sup.*](x, [[omega].sup.i]) = arg max[-(1/2)[z.sup.T] Gz + [z.sup.T] (h([[omega].sup.i]) - Tx) | z [member of] Z}.
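The maximizer [z.sup.*](x, [[omega].sup.i]) above is the solution of a concave quadratic program over Z, which in general requires a QP solver. The hedged sketch below treats only the special case of a diagonal G and a box-shaped Z (which is Wz [less than or equal to] q with W = [I; -I] and q the vector of ones), where the maximizer is a componentwise clipping of the unconstrained optimum; the function name and all concrete values are made up for illustration.

```python
# Sketch of the second-stage maximizer z*(x, omega) from (1d) in the
# special case G = diag(g) with positive entries and Z = [-1, 1]^s.
# The objective -0.5 z^T G z + z^T (h(omega) - T x) then separates
# coordinatewise, so each coordinate's maximizer is b_i / g_i clipped
# to the box.  For general G and polyhedral Z a QP solver is needed.
import numpy as np

def recourse_argmax_diag_box(g_diag, h_omega, T, x, lo=-1.0, hi=1.0):
    """Maximize -0.5 z^T diag(g) z + z^T (h(omega) - T x) over [lo, hi]^s."""
    b = h_omega - T @ x            # linear term of the concave objective
    return np.clip(b / g_diag, lo, hi)

g = np.array([2.0, 1.0])
T = np.eye(2)
x = np.array([0.5, -0.5])
h = np.array([1.5, 2.0])           # one sampled value of h(omega)
z_star = recourse_argmax_diag_box(g, h, T, x)   # b = [1.0, 2.5] -> [0.5, 1.0]
```

The clipped form makes the dependence of [z.sup.*] on x through h([omega]) - Tx explicit, which is what enters the gradient formula (4).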

Let [{[n.sub.k]}.sup.[infinity].sub.k=1] be an integer sequence satisfying 1 [less than or equal to] [n.sub.1] [less than or equal to] ... [less than or equal to] [n.sub.k] [less than or equal to] [n.sub.k+1] [less than or equal to] ... and [n.sub.k] [right arrow] [infinity] as k [right arrow] [infinity]. Generate observations {[[omega].sup.i], i = 1, ..., [n.sub.k]} on the unit hypercube according to an integration rule. Here, we choose quasi-Monte-Carlo sequences. Since F and Z are compact, it follows that there exists a constant C > 0 such that, for any x [member of] F,
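One standard quasi-Monte-Carlo choice for such observations is the Halton sequence; unlike a fixed lattice rule it is extensible in the sample size, which suits the increasing sizes [n.sub.k] used here. The base choice (2, 3) below is an illustrative assumption; any distinct primes work.

```python
# Pure-Python Halton points on the unit hypercube: coordinate d of the
# i-th point is the radical-inverse (digit reversal) of i in the d-th
# prime base.  Extensible: point i does not depend on the total count.
def van_der_corput(i, base):
    """i-th van der Corput point: digit reversal of i in the given base."""
    x, denom = 0.0, 1.0
    while i > 0:
        denom *= base
        x += (i % base) / denom
        i //= base
    return x

def halton(n, primes=(2, 3)):
    """First n points of the len(primes)-dimensional Halton sequence."""
    return [[van_der_corput(i + 1, b) for b in primes] for i in range(n)]

pts = halton(4)
# base-2 coordinates of the first points: 1/2, 1/4, 3/4, 1/8
```

To grow the sample from [n.sub.k] to [n.sub.k+1] one simply appends points i = [n.sub.k] + 1, ..., [n.sub.k+1] without recomputing the earlier ones.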

[mathematical expression not reproducible], (5)

[mathematical expression not reproducible]. (6)

This paper develops a feasible sequential system of linear equations (SSLE) approach to solve (1a), (1b), (1c), and (1d). The study is strongly motivated by the recent successful development of various SSLE algorithms for deterministic optimization problems and of quasi-Monte-Carlo simulation techniques. SSLE methods for deterministic optimization problems have been proposed by many authors over the years; the interested reader is referred to [22-26] for excellent surveys. Our algorithm has the following interesting features.

(a) Without assuming isolatedness of the accumulation points or boundedness of the Lagrange multiplier approximation sequence, every accumulation point of the iterative sequence generated by the proposed algorithm is a KKT point of problem (1a), (1b), (1c), and (1d).

(b) At each iteration, we only need to solve four symmetric systems of linear equations with a common coefficient matrix and a simple structure. Moreover, the last of these systems needs to be solved only to achieve a local one-step superlinear convergence rate.

(c) To obtain the "working set," the multiplier function [lambda](x) = [([nabla]c[(x).sup.T] [nabla]c(x) + diag ([c.sup.2.sub.i](x))).sup.-1] [nabla]c[(x).sup.T] [nabla]f(x) must first be computed in . A similar multiplier function is suggested by Facchinei et al. , while our algorithm provides a new technique to update the "working set" and consequently avoids calculating the multiplier function.

(d) To find a search direction, a quadratic programming subproblem needs to be solved at each iteration in . Consequently, the Hessian of the objective function must be approximated by a Monte Carlo (or quasi-Monte-Carlo) rule, whereas for SSLE methods this approximation is not necessary. Our algorithm solves four systems of linear equations involving only the first-order derivative of the objective function.
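Feature (b) can be sketched generically: because the four systems of one iteration share a coefficient matrix, the right-hand sides can be stacked so that a single factorization serves all of them. The matrix and right-hand sides below are placeholders, not the actual M([n.sub.k-1], k) of the algorithm, which involves [H.sub.k] and the working-set constraint gradients.

```python
# Solving several linear systems with one common coefficient matrix:
# stacking the right-hand sides as columns lets LAPACK factor the
# matrix once and reuse the factorization for every column.
import numpy as np

M = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])     # placeholder symmetric matrix

B = np.column_stack([               # four right-hand sides, as in (16)-(21)
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [1.0, 1.0, 1.0],
])

X = np.linalg.solve(M, B)           # one factorization, four solutions
```

The cost is essentially one factorization plus four cheap triangular back-substitutions, which is the point of feature (b) compared with solving a fresh quadratic program at every iteration.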

The remainder of this paper is organized as follows. Section 2 states the algorithm for (1a), (1b), (1c), and (1d) and shows that the proposed algorithm is well defined. In Section 3 we discuss the convergence of the algorithm in detail. We proceed in Section 4 by showing the local superlinear convergence. Finally, our conclusions are presented in Section 5.

2. Algorithm

The Lagrangian function associated with problem (1a), (1b), (1c), and (1d) is defined by

L(x, [lambda]) = f(x) + [[lambda].sup.T] c(x). (7)

A point [x.sup.*] in F is called a KKT point of problem (1a), (1b), (1c), and (1d) if there exists [[lambda].sup.*] such that the following KKT conditions hold:

[mathematical expression not reproducible], (8)

where

[mathematical expression not reproducible]. (9)

For x, y [member of] F, let

I (x, y, [lambda], [epsilon]) = {i [member of] I | [c.sub.i] (x) + [epsilon][rho] (y, [lambda]) > 0}, (10)

where [epsilon] is a nonnegative parameter and [rho](y, [lambda]) = [square root of ([parallel][PHI](y, [lambda])[parallel])] with

[mathematical expression not reproducible]. (11)

From the definition of [rho](y, [lambda]), [rho]([x.sup.*], [[lambda].sup.*]) = 0 if and only if ([x.sup.*], [[lambda].sup.*]) satisfies KKT conditions (8). In order to estimate the active constraint set in our algorithm, the estimate of the set I(x, y, [lambda], [epsilon]) is defined by

[mathematical expression not reproducible], (12)

where

[mathematical expression not reproducible], (13)

and [[delta].sub.2] is a parameter in (1/2, 1). Since f(x) and c(x) are continuously differentiable, it follows from Theorem 3.15 in  that [mathematical expression not reproducible] is nonnegative and continuous on [R.sup.n+m]. Hence, from (6) and the continuous differentiability of [mathematical expression not reproducible], we have that [mathematical expression not reproducible].

For simplicity, let [mathematical expression not reproducible], and

[mathematical expression not reproducible], (14)

where

[mathematical expression not reproducible]. (15)

Now we formally state our algorithm.

Algorithm 3.

(S.0) (Initialization)

Parameters: [mathematical expression not reproducible]

Data: 0 < [w.sub.0] < 1, M > 0, [[lambda].sup.0] = 0 [member of] [R.sup.m], [[epsilon].sub.0] > 0, a symmetric positive definite matrix [H.sub.0] [member of] [R.sup.nxn], [x.sup.0] = [x.sup.1] [member of] F with [c.sub.i]([x.sup.0]) < 0 for every i [member of] I, and a sequence {[[alpha].sub.k]} satisfying [[alpha].sub.k] > 0 for all k and [[summation].sup.[infinity].sub.k=0] [[alpha].sub.k] < +[infinity];

Choose [n.sub.0], [n.sub.1] such that [mathematical expression not reproducible]

Generate observations {[[omega].sup.i]: i = 0, ..., [n.sub.0]} by quasi-Monte-Carlo rules and calculate [mathematical expression not reproducible];

Set k = 1.

(S.1) (Choose Working Set)

(S1.1) If [mathematical expression not reproducible]

(S1.2) Set [epsilon] = [[epsilon].sub.k], w := [w.sub.k].

(S1.3) Calculate I([n.sub.k-1],k, [[epsilon].sub.k]) and M([n.sub.k-1], k).

(S1.4) If [mathematical expression not reproducible], then set [epsilon] = [sigma][epsilon], w = [[sigma].sub.1]w, and go to (S1.3).

(S1.5) Set [[epsilon].sub.k+1] = [epsilon], [w.sub.k+1] = w.

(S.2) (Computation of Search Direction)

If I([n.sub.k-1], k, [[epsilon].sub.k]) = [empty set], then run steps (S2.1)-(S2.4); otherwise go to (S2.5).

(S2.1) Set l = 0.

(S2.2) Generate observations {[[omega].sup.i]: i = 0, ..., [n.sub.k] + l} by quasi-Monte-Carlo rules and calculate [mathematical expression not reproducible]

(S2.3) Set [mathematical expression not reproducible]

(S2.4) If [mathematical expression not reproducible], then set l = l + 1 and go to (S2.2); otherwise set [mathematical expression not reproducible], and go to (S.3).

(S2.5) Set l = 0.

(S2.6) Generate observations {[[omega].sup.i]: i = 0, ..., [n.sub.k] + l} by quasi-Monte-Carlo rules and calculate [mathematical expression not reproducible].

(S2.7) Compute ([mathematical expression not reproducible]) by solving the system of linear equations in (d, [lambda])

[mathematical expression not reproducible]. (16)

Set [mathematical expression not reproducible].

(S2.8) Let

[mathematical expression not reproducible]. (17)

where [mathematical expression not reproducible].

Compute ([mathematical expression not reproducible]) by solving the system of linear equations in (d, [lambda])

[mathematical expression not reproducible]. (18)

If [mathematical expression not reproducible], then set l = l + 1 and go to (S2.6); otherwise set [[bar.n].sub.k] = [n.sub.k] + l and [l.sub.k] = l.

(S2.9) Compute ([mathematical expression not reproducible]) by solving the system of linear equations in (d, [lambda])

[mathematical expression not reproducible], (19)

where

[mathematical expression not reproducible], (20)

(S2.10) Compute ([mathematical expression not reproducible]) by solving the system of linear equations in (d, [lambda])

[mathematical expression not reproducible], (21)

where

[mathematical expression not reproducible]. (22)

(S.3) If [mathematical expression not reproducible], then set [mathematical expression not reproducible].

Choose [t.sub.k], the first number t in the sequence {1, [beta], [[beta].sup.2], ...} satisfying

[mathematical expression not reproducible], (23)

[mathematical expression not reproducible]. (24)

(S.4) Compute [N.sub.k] such that [mathematical expression not reproducible]

Set [mathematical expression not reproducible]. Generate a new symmetric positive definite matrix [H.sub.k+1]. Set k = k + 1 and go to (S.1).

Remarks

(a) The main purpose of (S.1) is to generate a working set and ensure that the matrix M([n.sub.k-1], k) is nonsingular for every k. Hence, ([mathematical expression not reproducible]) is well defined for all i [member of] {0, 1, 2, 3}. The calculation of the set I([n.sub.k-1], k, [[epsilon].sub.k]) is, in particular, different from the one proposed in . We use the solution [mathematical expression not reproducible] of system (18) as a substitute for the multiplier function proposed in . Moreover, M([n.sub.k-1], k) is also uniformly bounded. Details will be given subsequently.

(b) From the construction of the algorithm, four linear systems need to be solved at each iteration. To ensure that the iterate sequence globally converges to a KKT point of (1a), (1b), (1c), and (1d), we only need to solve the first three linear systems (16), (18), and (19). The linear systems (16) and (19) play important roles in proving the global convergence. The main aim of the linear system (21) is to guarantee the one-step superlinear convergence rate of the algorithm under mild conditions.

(c) It is not difficult to show that there exists [t.sub.k], the first number of the sequence {1, [beta], [[beta].sup.2], ...} which satisfies the line search conditions (23) and (24). In Section 4 we will show that [t.sub.k] = 1 for sufficiently large k. Hence, the Maratos effect is avoided.
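The backtracking structure of the step-size rule in (S.3) can be sketched generically: try t = 1, [beta], [[beta].sup.2], ... and accept the first t that passes a decrease test and a feasibility test. The Armijo condition and the strict-feasibility check below are standard stand-ins assumed for illustration, not the exact conditions (23) and (24); the toy problem at the bottom is likewise made up.

```python
# Generic backtracking line search over {1, beta, beta^2, ...}:
# accept the first t with sufficient decrease (Armijo) and strict
# feasibility c(x + t d) < 0, mimicking the structure of (S.3).
import numpy as np

def backtrack(f, c, x, d, grad, beta=0.5, alpha=0.1, max_iter=50):
    """Return the first accepted step size t, or raise if none is found."""
    t = 1.0
    for _ in range(max_iter):
        trial = x + t * d
        if (f(trial) <= f(x) + alpha * t * (grad @ d)   # Armijo decrease
                and np.all(c(trial) < 0)):              # strict feasibility
            return t
        t *= beta
    raise RuntimeError("line search failed")

# Toy problem: f(x) = ||x||^2 with the single constraint x_1 - 10 < 0.
f = lambda x: float(x @ x)
c = lambda x: np.array([x[0] - 10.0])
x = np.array([2.0, 0.0])
d = -2.0 * x                        # steepest-descent direction
t = backtrack(f, c, x, d, grad=2.0 * x)
```

For this toy data the full step t = 1 overshoots the Armijo bound and t = 0.5 is accepted; the claim in remark (c) is that near the solution the unit step always passes, so no backtracking occurs.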

(d) In numerical experiments [H.sub.k] is usually updated by the Broyden-Fletcher-Goldfarb-Shanno (BFGS) formula [27, 29]. At any iteration k, Algorithm 3 stops according to the following termination criterion, with [[epsilon].sub.stop] [member of] (0, 1) and a maximum number of iterations [N.sub.max]:

[mathematical expression not reproducible]. (25)
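The BFGS update mentioned in remark (d) can be sketched as follows; the curvature-skipping safeguard is a common implementation choice assumed here, not a rule taken from this paper.

```python
# Standard BFGS update of the Hessian approximation H_k.  It satisfies
# the secant equation H_new @ s = y and preserves positive definiteness
# whenever s^T y > 0; the guard below skips the update otherwise.
import numpy as np

def bfgs_update(H, s, y):
    """BFGS update given step s = x^{k+1} - x^k and gradient change
    y = g^{k+1} - g^k."""
    sy = s @ y
    if sy <= 1e-12:                 # bad curvature: keep H unchanged
        return H
    Hs = H @ s
    return H - np.outer(Hs, Hs) / (s @ Hs) + np.outer(y, y) / sy

H = np.eye(2)
s = np.array([1.0, 0.0])
y = np.array([2.0, 0.0])
H_new = bfgs_update(H, s, y)        # satisfies H_new @ s == y
```

Under Assumption 4 below, any update rule keeping the eigenvalues of [H.sub.k] uniformly bounded away from 0 and [infinity] is admissible; BFGS with such a safeguard is simply the usual practical choice.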

The rest of this section is devoted to showing that Algorithm 3 is well defined. We first give the following hypothesis on the choice of the matrix [H.sub.k].

Assumption 4. There exist positive constants [C.sup.1] and [C.sup.2] such that for all k and d [member of] [R.sup.n]

[mathematical expression not reproducible]. (26)

Since [c.sub.i]([x.sup.k]) < 0 for every k [member of] N, where N denotes the set of nonnegative integers, it is not difficult to see that the inner iteration (S.1) terminates finitely.

Lemma 5. [nabla]f([x.sup.k]) = 0, if there exists some k such that the following conditions hold.

(a) [mathematical expression not reproducible].

Proof. From condition (b), we have that [mathematical expression not reproducible]. It follows that

[mathematical expression not reproducible]. (27)

From independence of k and l, the result follows.

Lemma 6. [nabla]f([x.sup.k]) = 0, if there exists some k such that the following conditions hold.

(a) [mathematical expression not reproducible]

Proof. From condition (b), [mathematical expression not reproducible]. It follows that, as l [right arrow] [infinity]

[mathematical expression not reproducible]. (28)

Therefore, we get [mathematical expression not reproducible], and

[mathematical expression not reproducible]. (29)

Since [mathematical expression not reproducible], we have, as l [right arrow] [infinity],

[mathematical expression not reproducible]. (30)

This completes the proof.

It is easy to see from Lemmas 5 and 6 that [x.sup.k] is an unconstrained stationary point of f if we are not able to obtain the next iterate [x.sup.k+1] from the current iterate [x.sup.k]; that is, the inner iterations (S2.1)-(S2.8) do not terminate finitely. Since we always have [x.sup.k] [member of] F, this means that [x.sup.k] is actually a KKT point of problem (1a), (1b), (1c), and (1d). In the following section, we assume that the inner iterations (S2.1)-(S2.8) terminate finitely for all k [member of] N; namely, there always exists [l.sub.0] [member of] N such that, for every k [member of] N, one of the following conditions holds.

(i) [mathematical expression not reproducible].

(ii) [mathematical expression not reproducible].

Therefore, the algorithm generates an infinite iterative sequence {[x.sup.k]}.

Lemma 7. If there exists [k.sub.0] such that I([n.sub.k-1], k, [[epsilon].sub.k]) = [empty set] for all k > [k.sub.0], then there exists [bar.[epsilon]] > 0 such that [[epsilon].sub.k] [greater than or equal to] [bar.[epsilon]] for all k [member of] N.

Proof. Since I([n.sub.k-1], k, [[epsilon].sub.k]) = [empty set], we have that [mathematical expression not reproducible]. So the result follows from Assumption 4.

Lemma 8. If there exist [k.sub.0] and a subset K [subset] N such that I([n.sub.k-1], k, [[epsilon].sub.k]) [not equal to] [empty set] for all k [member of] K with k > [k.sub.0], then [[epsilon].sub.k+1] = [[epsilon].sub.k] for sufficiently large k.

Proof. Assume to the contrary that for any [k.sub.0] [member of] K there always exists k > [k.sub.0] such that [[epsilon].sub.k+1] [not equal to] [[epsilon].sub.k]. From the construction of the algorithm, we have that [[epsilon].sub.k] [right arrow] 0. By Assumption 1 and the finiteness of the set I, without loss of generality, we can assume that

(i) [{[x.sup.k]}.sub.K] [right arrow] [x.sup.*] with [x.sup.*] [member of] F and [[epsilon].sub.k+1] < [[epsilon].sub.k] for all k [member of] K;

(ii) I([n.sub.k-1], k, [[epsilon].sub.k]), k [member of] K, remains unchanged.

For simplicity, let [I.sub.K]:= I([n.sub.k-1],k, [[epsilon].sub.k]). Since [mathematical expression not reproducible] is bounded, it follows from [[epsilon].sub.k] [right arrow] 0 that [I.sub.K] [subset] [I.sub.0]([x.sup.*]) for sufficiently large k. Hence, by Assumption 4 and step (S.1), we get

[mathematical expression not reproducible], (31)

which contradicts Assumption 2, and the proof is complete.

From Lemmas 7 and 8 we can directly obtain Lemma 9.

Lemma 9. There exists [bar.[epsilon]] > 0 such that [[epsilon].sub.k] > [bar.[epsilon]] for all k.

Since F is compact, we get Lemma 10.

Lemma 10. M([n.sub.k-1], k) is nonsingular and uniformly bounded with respect to k [member of] N; that is, there exist [bar.W] > 0 and [bar.M] > 0 such that, for all k [member of] N,

[bar.W] [less than or equal to] det (M([n.sub.k-1],k)) [less than or equal to] [bar.M]. (32)

From Assumption 1 and Lemma 10, the following lemma is then obvious.

Lemma 11. {[mathematical expression not reproducible]} are bounded for j = 0, 1, 2, 3.

Lemma 12. If I([n.sub.k-1], k, [[epsilon].sub.k]) [not equal to] [empty set], the following results hold.

(a) [mathematical expression not reproducible]

(b) [mathematical expression not reproducible]

(c) [mathematical expression not reproducible]

Proof. (a) is a direct consequence of linear system (16). It is easy to see from linear systems (16), (18), and (19) that

[mathematical expression not reproducible]. (33)

Therefore, we have

[mathematical expression not reproducible]. (34)

This completes the proof.

3. Convergence

Lemma 13. Suppose the following conditions hold.

(i) [{[x.sup.k]}.sub.K] [right arrow] [x.sup.*].

(ii) I([n.sub.k-1], k, [[epsilon].sub.k]) = [empty set] for every k [member of] K.

(iii) There exists [K.sub.0] [subset] K such that [mathematical expression not reproducible]

Then [x.sup.*] is a stationary point; namely, [nabla]f([x.sup.*]) = 0.

Proof. We show the conclusion by contradiction. Suppose that [nabla] f ([x.sup.*]) [not equal to] 0. Without loss of generality, we assume that [mathematical expression not reproducible] and [H.sub.k] [right arrow] [H.sup.*]. So there exists [[gamma].sub.1] > 0 such that for sufficiently large k [member of] K

[mathematical expression not reproducible]. (35)

Since [mathematical expression not reproducible], for sufficiently large k [member of] [K.sub.0], there exists [p.sub.0] > 0 such that

[mathematical expression not reproducible]. (36)

So

[mathematical expression not reproducible]. (37)

Let [gamma] = min{[[gamma].sub.1], [bar.[epsilon]][p.sub.0]}. Since [mathematical expression not reproducible], it is obvious that O([parallel]t[d.sup.k][parallel]) = O(t). So we have that, for sufficiently large k [member of] K,

[mathematical expression not reproducible]. (38)

Therefore, we have

[mathematical expression not reproducible]. (39)

By (37), for all i [member of] I

[mathematical expression not reproducible]. (40)

It follows from (39) and (40) that there exists [bar.t] > 0, independent of k, such that, for any t [member of] (0, [bar.t]], both (23) and (24) hold. From (39), there exists [k.sub.0] such that, for all k [greater than or equal to] [k.sub.0] with k [member of] K, [t.sub.k] [greater than or equal to] [beta][bar.t],

[mathematical expression not reproducible]. (41)

It is not difficult to see from (23) and Lemma 12 that, for sufficiently large k,

[mathematical expression not reproducible]. (42)

Combining with (41), we get

[mathematical expression not reproducible]. (43)

It follows that f([x.sup.k]) [right arrow] -[infinity], which contradicts the fact that {f([x.sup.k])} is bounded, and the proof is complete.

Lemma 14. Suppose the following conditions hold.

(i) [{[x.sup.k]}.sub.K] [right arrow] [x.sup.*].

(ii) I([n.sub.k-1], k, [[epsilon].sub.k]) = [empty set] for every k [member of] K.

(iii) There exists [K.sub.0] [subset] K such that [mathematical expression not reproducible].

If I([n.sub.k-2], k - 1, [[epsilon].sub.k-1]) = [empty set] for every k [member of] [K.sub.0], then [x.sup.*] is a stationary point; namely, [nabla]f([x.sup.*]) = 0.

Proof. From the above conditions, we have that, for every k [member of] [K.sub.0], [[lambda].sup.k] = [[lambda].sup.k-1] = 0, [nabla] f([[bar.x].sup.*]) = 0, and therefore

[mathematical expression not reproducible]. (44)

It follows that, as k [right arrow] [infinity] and k [member of] [K.sub.0],

[mathematical expression not reproducible]. (45)

This completes the proof.

Let [mathematical expression not reproducible], j = 0, 1, 2, 3, denote the vectors in [R.sup.m] with components [mathematical expression not reproducible], respectively, where

[mathematical expression not reproducible]. (46)

Lemma 15. Suppose conditions (i)-(iii) of Lemma 14 hold. If I([n.sub.k-2], k - 1, [[epsilon].sub.k-1]) [not equal to] [empty set] for every k [member of] [K.sub.0], then [x.sup.*] is a stationary point; namely, [nabla]f([x.sup.*]) = 0.

Proof. Without loss of generality, we suppose that [mathematical expression not reproducible] remains unchanged.

From condition (iii) in Lemma 14

[mathematical expression not reproducible]. (47)

Combining with the first equation of linear system (16), [mathematical expression not reproducible]. It is easy to see from (47) that

[mathematical expression not reproducible]. (48)

Therefore, we have

[mathematical expression not reproducible]. (49)

So we get

[mathematical expression not reproducible], (50)

and for sufficiently large k [member of] [K.sub.0]

[mathematical expression not reproducible]. (51)

Since [mathematical expression not reproducible] is a KKT pair of problem (1a), (1b), (1c), and (1d), we have

[mathematical expression not reproducible]. (52)

Therefore, from (50)

[mathematical expression not reproducible]. (53)

If there is M > 0 such that [mathematical expression not reproducible], then we have from (51) that for arbitrary [mathematical expression not reproducible] and sufficiently large k [member of] [K.sub.0]

[mathematical expression not reproducible]. (54)

For [mathematical expression not reproducible], since M ([n.sub.k-1], k) is nonsingular

[mathematical expression not reproducible]. (55)

So we can also get that for sufficiently large k [member of] [K.sub.0]

[mathematical expression not reproducible]. (56)

So we have

[mathematical expression not reproducible]. (57)

Since [I.sub.K] = [empty set], [I.sup.+.sub.0]([[bar.x].sup.*]) = [empty set]. It follows that [[bar.[lambda]].sup.*] = 0 and, therefore, [nabla]f([x.sup.*]) = 0. This completes the proof.

Lemma 16. Suppose that [{([x.sup.k], [[lambda].sup.k])}.sub.K] [right arrow] ([x.sup.*], [[lambda].sup.*]). If [mathematical expression not reproducible], then ([x.sup.*], [[lambda].sup.*]) is a KKT pair of problem (1a), (1b), (1c), and (1d).

Proof. If, for every k [member of] K, I([n.sub.k-1], k, [[epsilon].sub.k]) = [empty set], the result can be obtained directly from [mathematical expression not reproducible]. Without loss of generality, we suppose that, for all k [member of] K,

(i) I([n.sub.k-1], k, [[epsilon].sub.k]) [not equal to] [empty set],

(ii) [I.sub.K] = I([n.sub.k-1], k, [[epsilon].sub.k]) remains unchanged.

By Lemma 12 and linear systems (16), (18), and (19), we have

[mathematical expression not reproducible], (58)

So [mathematical expression not reproducible]. (59)

Let [[lambda].sup.*] be an arbitrary accumulation point of [mathematical expression not reproducible]. Since [nabla]f(x) and [nabla]c(x) are continuously differentiable, we get from (6), (16), and (59) that

[mathematical expression not reproducible]. (60)

This completes the proof.

Lemma 17. Assume that the following conditions hold:

(i) [{([x.sup.k], [[lambda].sup.k])}.sub.K] [right arrow] ([x.sup.*], [[lambda].sup.*]).

(ii) There exists a subset [K.sub.0] [subset] K such that {[mathematical expression not reproducible]}.

Then ([x.sup.*], [[lambda].sup.*]) is a KKT pair of problem (1a), (1b), (1c), and (1d).

Proof. Assume to the contrary that ([x.sup.*], [[lambda].sup.*]) is not a KKT pair of problem (1a), (1b), (1c), and (1d). Without loss of generality, we suppose that conditions (i) and (ii) given in the proof of Lemma 16 hold for all k [member of] K. It is not difficult to see from Lemma 16 that there exists [[gamma].sub.1] > 0 such that, for sufficiently large k [member of] K,

[mathematical expression not reproducible]. (61)

So that, for sufficiently large k [member of] K,

[mathematical expression not reproducible]. (62)

From (61), [mathematical expression not reproducible] does not converge to 0. Therefore, without loss of generality, we can also suppose that [mathematical expression not reproducible]. Since [rho]([[bar.x].sup.*], [[bar.[lambda]].sup.*]) [not equal to] 0, for sufficiently large k [member of] [K.sub.0], there exists [[rho].sub.0] > 0 such that

[mathematical expression not reproducible]. (63)

It follows that [I.sub.K] [contains] [I.sub.0]([x.sup.*]). So, for every i [member of] [I.sub.K],

[mathematical expression not reproducible], (64)

and, for i [not member of] [I.sub.K],

[mathematical expression not reproducible]. (65)

In a way similar to the proof of Lemma 13, we get that [mathematical expression not reproducible], which contradicts the boundedness of f(x), x [member of] F. This completes the proof.

Lemma 18. Assume that the following conditions hold.

(i) [{([x.sup.k], [[lambda].sup.k])}.sub.K] [right arrow] ([x.sup.*], [[lambda].sup.*]).

(ii) There exists a subset [K.sub.0] [subset] K such that [mathematical expression not reproducible]

(iii) I([n.sub.k-1], k, [[epsilon].sub.k]) [not equal to] [empty set] for every k [member of] K.

If I([n.sub.k-2], k - 1, [[epsilon].sub.k-1]) = [empty set] for every k [member of] [K.sub.0], then ([x.sup.*], [[lambda].sup.*]) is a KKT pair of problem (1a), (1b), (1c), and (1d).

Proof. Without loss of generality, we suppose that [I.sub.K] = I([n.sub.k-1], k, [[epsilon].sub.k]) remains unchanged. Let

[mathematical expression not reproducible]. (66)

By Lemma 10, M([x.sup.*]) is nonsingular. Therefore, there exists [bar.d] such that [mathematical expression not reproducible] and ([bar.d], [[lambda].sup.*]) is the unique solution of the following linear system:

[mathematical expression not reproducible]. (67)

From Lemma 13, [[lambda].sup.k-1] = 0 for every k [member of] [K.sub.0], and

[mathematical expression not reproducible], (68)

where [mathematical expression not reproducible].

So we get, as k [right arrow] [infinity] and k [member of] [K.sub.0],

[mathematical expression not reproducible]. (69)

It follows from Lemma 13 that [nabla]f([x.sup.*]) = [nabla]f([[bar.x].sup.*]) = 0. Therefore, we have that [bar.d] = 0 and [[lambda].sup.*] = 0, and the proof is complete.

Lemma 19. Assume that conditions (i)-(iii) in Lemma 18 hold. If I([n.sub.k-2], k - 1, [[epsilon].sub.k-1]) [not equal to] [empty set] for every k [member of] [K.sub.0], then ([x.sup.*], [[lambda].sup.*]) is a KKT pair of problem (1a), (1b), (1c), and (1d).

Proof. Without loss of generality, we suppose that [mathematical expression not reproducible] remains unchanged. In a way similar to Lemma 15, we have

[mathematical expression not reproducible]. (70)

Let [[pi].sup.*] = ([[lambda].sup.*.sub.i] | i [member of] [I.sub.K]) and let [[bar.[pi]].sup.*] denote a vector with the following components:

[mathematical expression not reproducible]. (71)

Since ([[bar.x].sup.*], [[bar.[lambda]].sup.*]) is a KKT pair of (1a), (1b), (1c), and (1d), we have from (70) that (0, [[bar.[pi]].sup.*]) is a solution of the following linear system:

[mathematical expression not reproducible]. (72)

On the other hand, since [{([x.sup.k], [[lambda].sup.k])}.sub.K] [right arrow] ([x.sup.*], [[lambda].sup.*]), there exists [d.sup.*] such that [mathematical expression not reproducible]. From Assumption 2, M([x.sup.*]) is nonsingular. Therefore, ([d.sup.*], [[lambda].sup.*]) is the unique solution of the linear system (72). So we have that [[pi].sup.*] = [[bar.[pi]].sup.*] and hence [[lambda].sup.*] = [[bar.[lambda]].sup.*], and the proof is complete.

From Lemmas 13-19, we have the following.

Theorem 20. If [{([x.sup.k], [[lambda].sup.k])}.sub.K] [right arrow] ([x.sup.*], [[lambda].sup.*]), then ([x.sup.*], [[lambda].sup.*]) is a KKT pair of problem (1a), (1b), (1c), and (1d).

4. Rate of Convergence

In this section, we establish the superlinear convergence of Algorithm 3. We suppose that the algorithm generates an infinite iterative sequence {[x.sup.k]} and that there exists [k.sub.0] such that I([n.sub.k-1], k, [[epsilon].sub.k]) [not equal to] [empty set] for all k > [k.sub.0]. That is, (S2.1)-(S2.4) are never run when k > [k.sub.0], and the inner iterations (S2.5)-(S2.8) terminate finitely. Let [[tau].sup.*] = ([x.sup.*], [[lambda].sup.*]) be an accumulation point of the sequence {([x.sup.k], [[lambda].sup.k])} generated by Algorithm 3. We assume that [[nabla].sup.2]f and [[nabla].sup.2][c.sub.i], i [member of] I, are locally Lipschitz continuous on a neighborhood of [x.sup.*]. To ensure that the whole sequence {([x.sup.k], [[lambda].sup.k])} converges to ([x.sup.*], [[lambda].sup.*]), we need the following assumption.

Assumption 21. The second-order sufficient condition holds at [[tau].sup.*]; that is, the Hessian [[nabla].sup.2.sub.xx]L([x.sup.*], [[lambda].sup.*]) is positive definite on the subspace {[alpha] | ([nabla][c.sub.i]([x.sup.*]), [alpha]) = 0, [for all]i [member of] [I.sub.0]([x.sup.*])}.

We first introduce a useful proposition as follows.

Proposition 22 (see [25, Proposition 4.1]). Assume that [[omega].sup.*] [member of] [R.sup.t] is an isolated accumulation point of a sequence {[[omega].sup.k]} [subset] [R.sup.t] such that, for every subsequence [{[[omega].sup.k]}.sub.K] converging to [[omega].sup.*], there is an infinite subset [bar.K] [subset] K such that [mathematical expression not reproducible]; then the whole sequence {[[omega].sup.k]} converges to [[omega].sup.*].

Lemma 23. If [{[x.sup.k]}.sub.K] [right arrow] [x.sup.*], then [mathematical expression not reproducible].

Proof. Assume to the contrary that there exists a subset [bar.K] [subset] K such that [mathematical expression not reproducible]. By the finiteness of the set I and the boundedness of the sequence {[[lambda].sup.k]}, there exists a subset [K.sub.0] [subset] [bar.K] such that [mathematical expression not reproducible] remains unchanged. It is not difficult to see from linear system (18) that

[rho]([x.sup.*], [[lambda].sup.*]) = [parallel][H.sup.*][bar.d][parallel] [not equal to] 0. (73)

On the other hand, from Theorem 20, ([x.sup.*], [[lambda].sup.*]) is a KKT pair of problem (1a), (1b), (1c), and (1d); it follows that [rho]([x.sup.*], [[lambda].sup.*]) = 0, which contradicts (73). So we have that [mathematical expression not reproducible].

Lemma 24. If [mathematical expression not reproducible]

Proof. Since the multiplier [[lambda].sup.*] is unique with respect to [x.sup.*] and {[[lambda].sup.k]} is bounded, it follows from Theorem 20 that [{[[lambda].sup.k]}.sub.K] [right arrow] [[lambda].sup.*].

Lemma 25. If [{([x.sup.k], [[lambda].sup.k])}.sub.K] [right arrow] ([x.sup.*], [[lambda].sup.*]), then [{([x.sup.k-1], [[lambda].sup.k-1])}.sub.K] [right arrow] ([x.sup.*], [[lambda].sup.*]).

Proof. Suppose that ([[bar.x].sup.*], [[bar.[lambda]].sup.*]) is an arbitrary accumulation point of [{([x.sup.k-1], [[lambda].sup.k-1])}.sub.K]. Then, from Theorem 20,

[mathematical expression not reproducible]. (74)

In a way similar to the proofs of Lemmas 18 and 19, we get [x.sup.*] = [[bar.x].sup.*] and [[lambda].sup.*] = [[bar.[lambda]].sup.*]. From the boundedness of [{([x.sup.k-1], [[lambda].sup.k-1])}.sub.K], the result follows.

Lemma 26. If [mathematical expression not reproducible]

Proof. By Lemma 25, I([n.sub.k-1], k, [[epsilon].sub.k]) [subset] [I.sub.0] ([x.sup.*]). From Lemma 23, the result follows.

Lemma 27. Under Assumptions 1, 2, 4, and 21, the whole sequence {([x.sup.k], [[lambda].sup.k])} converges to ([x.sup.*], [[lambda].sup.*]).

Proof. Suppose that [{[x.sup.k]}.sub.K] [right arrow] [x.sup.*]. Assumptions 2 and 21 imply that [x.sup.*] is an isolated accumulation point of {[x.sup.k]}. By (S3) in Algorithm 3, [mathematical expression not reproducible]. It follows from Lemma 23 that

[mathematical expression not reproducible]. (75)

Therefore, we have from Proposition 22 that the whole sequence {[x.sup.k]} converges to [x.sup.*]. By Lemma 24, we have that [[lambda].sup.k] converges to [[lambda].sup.*]. This completes the proof.

Assumption 28. The strict complementarity condition holds at [[tau].sup.*]; that is, [[[lambda].sup.*].sub.i] - [c.sub.i]([x.sup.*]) [not equal to] 0 for all i [member of] I.
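Componentwise, strict complementarity says that no index has both [[[lambda].sup.*].sub.i] = 0 and [c.sub.i]([x.sup.*]) = 0; since a KKT pair already satisfies [lambda] [greater than or equal to] 0 and c [less than or equal to] 0, this is equivalent to [[lambda].sub.i] - [c.sub.i] [not equal to] 0 for every i. A minimal numeric sketch, with hypothetical sample data and a helper name of our own:

```python
import numpy as np

def strict_complementarity_holds(lam, c, tol=1e-12):
    """Check that no component has lam_i = c_i = 0 simultaneously
    (KKT gives lam >= 0, c <= 0, lam * c = 0 already)."""
    lam = np.asarray(lam, dtype=float)
    c = np.asarray(c, dtype=float)
    return bool(np.all(np.abs(lam - c) > tol))

# Hypothetical KKT data: constraints 1 and 3 active with positive multipliers,
# constraint 2 inactive.
lam_star = np.array([0.5, 0.0, 1.2])
c_star   = np.array([0.0, -0.3, 0.0])
print(strict_complementarity_holds(lam_star, c_star))        # True

# Degenerate case: lambda_2 = c_2 = 0 violates strictness.
print(strict_complementarity_holds([0.5, 0.0], [0.0, 0.0]))  # False
```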

Lemma 29. Let [I.sup.+.sub.0] ([x.sup.*]) = {i [member of] I | [[[lambda].sup.*].sub.i] > 0}; then, for all sufficiently large k,

[mathematical expression not reproducible]. (76)

Proof. By Theorem 20 and Lemma 27, it is easy to see that

[mathematical expression not reproducible]. (77)

In a way similar to the proof of (70) in Theorem 20, we have the following result:

[I.sup.+.sub.0] ([x.sup.*]) [subset] I([n.sub.k-1] + [l.sub.k], k, [[epsilon].sub.k]) [subset] [I.sub.0] ([x.sup.*]). (78)

By Assumption 28, the result follows.

By Lemmas 23, 27, and 29, we can directly obtain the following corollary.

Corollary 30. If Assumptions 1, 2, 4, and 21 hold, then for every i = 0, 1, 2, 3

[mathematical expression not reproducible]. (79)

By linear systems (18), (19), and (21), we have

[mathematical expression not reproducible]. (80)

Combining with the fact that [mathematical expression not reproducible], we have the following.

Lemma 31. For sufficiently large k, the following results hold.

[mathematical expression not reproducible]. (81)

Assumption 32. The sequence of matrices {[H.sub.k]} satisfies

[mathematical expression not reproducible], (82)

where [mathematical expression not reproducible]

Note. Assumption 32 is an extended Dennis-Moré condition. It was used in the QP-free algorithm for nonlinear optimization problems by Yang et al. [27]. We will show that it is a sufficient condition for our algorithm to be superlinearly convergent. In order to show the superlinear convergence, we first introduce the following proposition.
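As a made-up unconstrained illustration of the flavor of such a Dennis-Moré-type condition (not the paper's constrained Assumption 32): with the exact Hessian in place of [H.sub.k], the ratio ||([H.sub.k] - [H.sub.*])[d.sup.k]|| / ||[d.sup.k]|| tends to zero along Newton iterates, since Lipschitz continuity bounds it by a multiple of ||[x.sup.k] - [x.sup.*]||. The sample function below is our own.

```python
import numpy as np

def grad(z):   # f(z) = z1^4 + z1^2 + 2*z2^2 + z1*z2, a strictly convex quartic
    return np.array([4 * z[0]**3 + 2 * z[0] + z[1], 4 * z[1] + z[0]])

def hess(z):
    return np.array([[12 * z[0]**2 + 2, 1.0], [1.0, 4.0]])

H_star = hess(np.zeros(2))            # the unique minimizer is x^* = 0

x = np.array([1.0, 1.0])
ratios = []
for _ in range(6):
    d = np.linalg.solve(hess(x), -grad(x))    # Newton direction
    # Dennis-More-type ratio with H_k taken as the exact Hessian at x^k.
    ratios.append(np.linalg.norm((hess(x) - H_star) @ d) / np.linalg.norm(d))
    x = x + d

print(ratios[0] > ratios[-1] and ratios[-1] < 1e-4)
```

The decay of the ratio mirrors the quadratic shrinkage of ||[x.sup.k] - [x.sup.*]|| under Newton's method.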

Proposition 33 (see [27, Lemma 4.3]). For sufficiently large k, the direction [mathematical expression not reproducible] can be decomposed into

[mathematical expression not reproducible]. (83)

Lemma 34. For sufficiently large k, if I([n.sub.k-1], k, [[epsilon].sub.k]) [not equal to] [empty set], then the step [t.sub.k] = 1 is accepted.

Proof. For i [not member of] [I.sub.0]([x.sup.*]), since [c.sub.i] ([x.sup.*]) < 0, it is not difficult to see from Corollary 30 that, when k is sufficiently large, [mathematical expression not reproducible]. For i [member of] [I.sub.0]([x.sup.*]), we have from linear system (21) and Lemma 31 that

[mathematical expression not reproducible]. (84)

It follows from [eta] [member of] (2, 3) that, for sufficiently large k, [mathematical expression not reproducible]. So when k is sufficiently large, [mathematical expression not reproducible] is a strictly feasible point of problem (1a), (1b), (1c), and (1d). By (21) and (81), we have

[mathematical expression not reproducible]. (85)

Combining with (84), we have

[mathematical expression not reproducible]. (86)

It follows that for sufficiently large k

[mathematical expression not reproducible]. (87)

From Proposition 33 and Assumption 32, we have that

[mathematical expression not reproducible]. (88)

From linear system (19), for sufficiently large k,

[mathematical expression not reproducible]. (89)

Since [mathematical expression not reproducible] for all i [member of] [I.sub.0] ([x.sup.*]), we have, for sufficiently large k,

[mathematical expression not reproducible]. (90)

By (87), (88), and (89), we get

[mathematical expression not reproducible], (91)

which completes the proof.

Theorem 35. Under the stated assumptions, we have

[mathematical expression not reproducible]. (92)

Proof. By the definition of [P.sub.k], we have

[mathematical expression not reproducible]. (93)

It follows from (93) that

[mathematical expression not reproducible]. (94)

Since [mathematical expression not reproducible], it is clear from linear system (21) that

[mathematical expression not reproducible]. (95)

Let [mathematical expression not reproducible]

From (94) and (95), we have

[mathematical expression not reproducible]. (96)

From Assumption 21, it is not difficult to see that, when k is sufficiently large, [G.sub.k] has full column rank. It follows from (96) and Assumption 32 that

[mathematical expression not reproducible], (97)

which implies that

[mathematical expression not reproducible]. (98)

This completes the proof.

In the sequel, we consider the following case: the KKT point [x.sup.*] of problem (1a), (1b), (1c), and (1d) is an unconstrained stationary point with multiplier vector [[lambda].sup.*] = 0. It is clear that [nabla]f ([x.sup.*]) = 0 and [I.sub.0]([x.sup.*]) = [empty set] in this case. Therefore, we have from the construction of Algorithm 3 that, for sufficiently large k, I([n.sub.k-1], k, [[epsilon].sub.k]) = [empty set]. In order to show the superlinear convergence in this case, we first give two well-known propositions.

Proposition 36. Assume that f(x) is twice continuously differentiable and [[nabla].sup.2]f(x) is Lipschitz continuous on an open convex subset D of F. Then, for arbitrary x, u, v [member of] D, we have

[mathematical expression not reproducible], (99)

where [gamma] is a Lipschitz constant.
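The classical version of this estimate (see, e.g., Dennis and Schnabel) reads ||[nabla]f(v) - [nabla]f(u) - [[nabla].sup.2]f(x)(v - u)|| [less than or equal to] [gamma] max{||v - x||, ||u - x||} ||v - u||. The sketch below checks that form numerically on a made-up cubic whose Hessian is 6-Lipschitz; the helper names are ours.

```python
import numpy as np

def grad(z):
    return 3.0 * z**2          # f(z) = z1^3 + z2^3

def hess(z):
    return np.diag(6.0 * z)    # Hessian is 6-Lipschitz in the 2-norm

gamma = 6.0
rng = np.random.default_rng(1)
holds = True
for _ in range(200):
    x, u, v = rng.standard_normal((3, 2))     # random trial points
    lhs = np.linalg.norm(grad(v) - grad(u) - hess(x) @ (v - u))
    rhs = gamma * max(np.linalg.norm(v - x), np.linalg.norm(u - x)) \
          * np.linalg.norm(v - u)
    holds &= lhs <= rhs + 1e-12               # tolerance for rounding
print(holds)
```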

Proposition 37. Assume that f(x) and [[nabla].sup.2] f(x) satisfy the conditions in Proposition 36. If [[nabla].sup.2]f(x) is symmetric positive definite, then there exist [epsilon] > 0 and [beta] > [alpha] > 0 such that, when max{[parallel]u - x[parallel], [parallel]v - x[parallel]} [less than or equal to] [epsilon] with u, v [member of] D,

[mathematical expression not reproducible]. (100)
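Assuming the usual reading of (100), namely [alpha]||u - v|| [less than or equal to] ||[nabla]f(u) - [nabla]f(v)|| [less than or equal to] [beta]||u - v|| near x, the extreme eigenvalues of the positive definite Hessian (taken with generous margins) supply [alpha] and [beta]. A numeric sketch on a made-up smooth function:

```python
import numpy as np

def grad(z):   # f(z) = z1^2 + 3*z2^2 + sin(z1*z2), a sample smooth function
    return np.array([2 * z[0] + z[1] * np.cos(z[0] * z[1]),
                     6 * z[1] + z[0] * np.cos(z[0] * z[1])])

x = np.zeros(2)
H = np.array([[2.0, 1.0], [1.0, 6.0]])      # grad^2 f(0); cross term from sin
lam = np.linalg.eigvalsh(H)
alpha, beta = 0.5 * lam[0], 2.0 * lam[-1]   # generous margins for the sketch

rng = np.random.default_rng(0)
ok = True
for _ in range(100):
    u = 0.1 * rng.standard_normal(2)        # trial points near x = 0
    v = 0.1 * rng.standard_normal(2)
    lhs = np.linalg.norm(grad(u) - grad(v))
    uv = np.linalg.norm(u - v)
    ok &= alpha * uv <= lhs <= beta * uv
print(ok)
```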

In order to obtain the superlinear convergence for problem (1a), (1b), (1c), and (1d) under the condition [I.sub.0]([x.sup.*]) = [empty set], we give the following assumption.

Assumption 38. The sequence of matrices {[H.sub.k]} satisfies

[mathematical expression not reproducible]. (101)

Lemma 39. If [I.sub.0]([x.sup.*]) = [empty set], then the step [t.sub.k] = 1 is accepted for sufficiently large k.

Proof. Since [I.sub.0]([x.sup.*]) = [empty set], [c.sub.i] ([x.sup.*]) < 0, [for all]i [member of] I. It follows from [mathematical expression not reproducible] that, for sufficiently large [mathematical expression not reproducible]. That is, [mathematical expression not reproducible] is strictly feasible. It remains to show that inequality (23) also holds when [t.sub.k] = 1. By (5), (6), and [mathematical expression not reproducible], we have, for sufficiently large k,

[mathematical expression not reproducible], (102)

which completes the proof.

Theorem 40. Assume that [I.sub.0]([x.sup.*]) = [empty set]. If f(x) and [[nabla].sup.2] f(x) satisfy the conditions in Propositions 36 and 37, then

[mathematical expression not reproducible]. (103)

Proof. Since [mathematical expression not reproducible], we have

[mathematical expression not reproducible]. (104)

It follows from Proposition 36 that

[mathematical expression not reproducible]. (105)

By inequality (6), we have

[mathematical expression not reproducible]. (106)

Hence, from (105) and (106), we have

[mathematical expression not reproducible]. (107)

By [nabla]f ([x.sup.*]) = 0 and Proposition 37,

[mathematical expression not reproducible]. (108)

So, we have

[mathematical expression not reproducible], (109)

which implies that

[mathematical expression not reproducible]. (110)

This completes the proof.

5. Conclusion

In this paper, using quasi-Monte-Carlo-based approximations of the objective function and its first derivative, we have proposed a feasible sequential system of linear equations method for two-stage stochastic quadratic programming problems with inequality constraints. A new technique to update the "working set" is suggested. The feature of the new technique is that, in order to update the "working set," at each iteration we directly use the solution [mathematical expression not reproducible] of linear system (16) and do not compute the inverse [M.sup.-1](x) of the matrix M(x). Moreover, it does not need to approximate the Hessian by a Monte Carlo (or quasi-Monte-Carlo) rule. Therefore, our algorithm saves computational cost. The other remarkable feature of this technique is that it accurately identifies the active constraints of problem (1a), (1b), (1c), and (1d). It should be pointed out that the technique is also useful for deterministic nonlinear programming problems with inequality constraints. We have shown that the sequence generated by the proposed algorithm converges globally to a KKT point of the problem. In particular, the convergence rate is locally superlinear under some additional conditions. To get the superlinear convergence of the algorithm, we still need the strict complementarity assumption. However, we believe that, by using quasi-Monte-Carlo-based approximations and the new identification technique, it is possible to design a new algorithm without the strict complementarity assumption. Moreover, how to use parallel optimization techniques [31-33] for large-scale stochastic programs with recourse is an important topic for further research.
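To make the quasi-Monte-Carlo ingredient concrete: the recourse term Q(x) is an integral, and the method replaces it by an equal-weight average over a deterministic low-discrepancy point set. The sketch below uses a Halton sequence and a toy integrand with a known mean; both are illustrative choices of ours, not the paper's exact rule (the cited implementations [18, 19] are lattice rules).

```python
import numpy as np

def halton(n, bases=(2, 3)):
    """First n points of the Halton sequence with the given prime bases."""
    def van_der_corput(i, base):
        q, f = 0.0, 1.0
        while i > 0:
            f /= base
            q += f * (i % base)   # reflect the base-b digits of i about the point
            i //= base
        return q
    return np.array([[van_der_corput(i, b) for b in bases]
                     for i in range(1, n + 1)])

# Toy recourse-like integrand with a known mean:
# E ||w||^2 = 2/3 for w uniform on [0, 1]^2.
pts = halton(4096)
qmc_estimate = np.mean(np.sum(pts**2, axis=1))
print(qmc_estimate)
```

The equal-weight average over the 4096 Halton points lands close to the exact value 2/3, with error decaying near O((log n)^2 / n) rather than the O(1/sqrt(n)) of plain Monte Carlo.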

https://doi.org/10.1155/2017/1564642

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work is supported by the National Bureau of Statistics of the People's Republic of China (2014LZ41).

References

 J. R. Birge and F. Louveaux, Introduction to Stochastic Programming, Springer, New York, NY, USA, 1997.

 S. W. Wallace and W. T. Ziemba, Eds., Applications of Stochastic Programming, vol. 5 of MPS-SIAM Series on Optimization, Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 2005.

 X. Liu, Y. Li, and W. Zhang, "Stochastic linear quadratic optimal control with constraint for discrete-time systems," Applied Mathematics and Computation, vol. 228, pp. 264-270, 2014.

 G. Li and W. Zhang, "Study on indefinite stochastic linear quadratic optimal control with inequality constraint," Journal of Applied Mathematics, vol. 2013, Article ID 805829, 5004 pages, 2013.

 L. Liu and X. Meng, "Optimal harvesting control and dynamics of two-species stochastic model with delays," Advances in Difference Equations, vol. 2017, 18 pages, 2017.

 G. Li and M. Chen, "Infinite horizon linear quadratic optimal control for stochastic difference time-delay systems," Advances in Difference Equations, vol. 2015, article 14, 2017

 H. Yang, "Study on stochastic linear quadratic optimal control with quadratic and mixed terminal state constraints," Journal of Applied Mathematics, vol. 2013, Article ID 674327, 11 pages, 2013.

 A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro, "Robust stochastic approximation approach to stochastic programming," SIAM Journal on Optimization, vol. 19, no. 4, pp. 1574-1609, 2009.

 J. Linderoth, A. Shapiro, and S. Wright, "The empirical behavior of sampling methods for stochastic programming," Annals of Operations Research, vol. 142, no. 1, pp. 215-241, 2006.

 J. Yu, M. Li, Y. Wang, and G. He, "A decomposition method for large-scale box constrained optimization," Applied Mathematics and Computation, vol. 231, no. 12, pp. 9-15, 2014.

 G. Bayraksan and D. P. Morton, "A sequential sampling procedure for stochastic programming," Operations Research, vol. 59, no. 4, pp. 898-913, 2011.

 T. Ekin, N. G. Polson, and R. Soyer, "Augmented Markov chain Monte Carlo simulation for two-stage stochastic programs with recourse," Decision Analysis, vol. 11, no. 4, pp. 250-264, 2014.

 R. T. Rockafellar and R. J. Wets, "A dual solution procedure for quadratic stochastic programs with simple recourse," in Numerical Methods, vol. 1005 of Lecture Notes in Mathematics, pp. 252-265, Springer, Berlin, Germany, 1983.

 R. T. Rockafellar and R. J. Wets, "A Lagrangian finite generation technique for solving linear-quadratic problems in stochastic programming," Mathematical Programming Study, no. 28, pp. 63-93, 1986.

 L. Q. Qi and R. S. Womersley, "An SQP algorithm for extended linear-quadratic problems in stochastic programming," Annals of Operations Research, vol. 56, pp. 251-285, 1995.

 X. J. Chen, L. Q. Qi, and R. S. Womersley, "Newton's method for quadratic stochastic programs with recourse," Journal of Computational and Applied Mathematics, vol. 60, no. 1-2, pp. 29-46, 1995.

 J. Birge, X. Chen, and L. Qi, "A stochastic Newton method for stochastic quadratic programs with recourse," https://www.researchgate.net/publication/2255017.

 S. Joe and I. H. Sloan, "Imbedded lattice rules for multidimensional integration," SIAM Journal on Numerical Analysis, vol. 29, no. 4, pp. 1119-1135, 1992.

 S. Joe and I. H. Sloan, "Implementation of a lattice method for numerical multiple integration," ACM Transactions on Mathematical Software (TOMS), vol. 19, no. 4, pp. 523-545, 1993.

 H. Niederreiter, Random Number Generation and Quasi-Monte Carlo Methods, vol. 63 of CBMS-NSF Regional Conference Series in Applied Mathematics, Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 1992.

 J. R. Birge, "Quasi-Monte Carlo approaches to option pricing," Technical Report, Department of Industrial and Operations Engineering, University of Michigan, Ann Arbor, MI, USA, 1995.

 E. R. Panier, A. L. Tits, and J. N. Herskovits, "A QP-free, globally convergent, locally superlinearly convergent algorithm for inequality constrained optimization," SIAM Journal on Control and Optimization, vol. 26, no. 4, pp. 788-811, 1988.

 L. Sun, G. He, Y. Wang, and C. Zhou, "An accurate active set Newton algorithm for large scale bound constrained optimization," Applications of Mathematics, vol. 56, no. 3, pp. 297-314, 2011.

 Y. Li, T. Tan, and X. Li, "A log-exponential smoothing method for mathematical programs with complementarity constraints," Applied Mathematics and Computation, vol. 218, no. 10, pp. 5900-5909, 2012.

 H.-D. Qi and L. Qi, "A new QP-free, globally convergent, locally superlinearly convergent algorithm for inequality constrained optimization," SIAM Journal on Optimization, vol. 11, no. 1, pp. 113-132, 2000.

 Z. Y. Gao, G. P. He, and F. Wu, "Sequential systems of linear equations algorithm for nonlinear optimization problems with general constraints," Journal of Optimization Theory and Applications, vol. 147, no. 3, pp. 211-226, 2002.

 Y.-F. Yang, D.-H. Li, and L. Qi, "A feasible sequential linear equation method for inequality constrained optimization," SIAM Journal on Optimization, vol. 13, no. 4, pp. 1222-1244, 2003.

 F. Facchinei, A. Fischer, and C. Kanzow, "On the accurate identification of active constraints," SIAM Journal on Optimization, vol. 9, no. 1, pp. 14-32, 1998.

 M. J. D. Powell, "A fast algorithm for nonlinearly constrained optimization calculations," in Numerical Analysis, vol. 630 of Lecture Notes in Mathematics, pp. 144-157, Springer, Berlin, Germany, 1978.

 S. M. Robinson, "Strongly regular generalized equations," Mathematics of Operations Research, vol. 5, no. 1, pp. 43-62, 1980.

 C. Han, F. Zheng, T. Guo, and G. He, "Parallel algorithms for large-scale linearly constrained minimization problem," Acta Mathematicae Applicatae Sinica, vol. 30, no. 3, pp. 707-720, 2014.

 C. Han, T. Feng, G. He, and T. Guo, "Parallel variable distribution algorithm for constrained optimization with nonmonotone technique," Journal of Applied Mathematics, vol. 2013, Article ID 295147, 420 pages, 2013.

 F. Zheng, C. Han, and Y. Wang, "Parallel SSLE algorithm for large scale constrained optimization," Applied Mathematics and Computation, vol. 217, no. 12, pp. 5377-5384, 2011.

Changyin Zhou, Rui Su, and Zhihui Jiang

College of Mathematics and Systems Science, Shandong University of Science and Technology, Qingdao 266590, China

Correspondence should be addressed to Changyin Zhou; zhoucy123@163.com

Received 5 April 2017; Revised 14 July 2017; Accepted 24 July 2017; Published 24 August 2017

Mathematical Problems in Engineering, 2017.