
An improved full-Newton step infeasible O(nL) interior-point method based on a kernel function.

1. Introduction

In this paper, we consider the linear optimization (LO) problem in the standard form:

(P) min {[c.sup.T]x : Ax = b, x [greater than or equal to] 0},

with its dual problem

(D) max {[b.sup.T] y : [A.sup.T] y + s = c, s [greater than or equal to] 0},

where c, x, s [member of] [R.sup.n], b, y [member of] [R.sup.m] and A [member of] [R.sup.mxn] is of full row rank.

The use of interior-point methods (IPMs) based on kernel functions has become increasingly attractive because of their computational efficiency. Recently, Z. Liu et al. [9] proposed an algorithm for infeasible interior-point methods (IIPMs) based on a kernel function whose complexity coincides with the best known iteration bound for IIPMs. This algorithm extends the original work of Roos [6], in which no line search is needed: it uses a full Newton step instead of a damped step. For a survey of IIPMs we refer to the introduction of Roos [6]. Using the same kernel function as in [9], but with a modified third equation (3.10) (see Remark (3.2)), our approach targets the [[mu].sup.+]-center instead of the [mu]-center used in [9]. This gives a slightly wider neighborhood of quadratic convergence for the feasibility steps, which guarantees that the proximity measure falls below a threshold [tau] within a finite number of centering steps (at most 4 in our case). The resulting search direction is also more natural and intuitive.

This paper is organized as follows. In Section 2 we present some useful properties in the analysis of feasible IPM which will be exploited in the analysis of our IIPM. In Section 3 we present our improved full-Newton step IIPM. Each main step of the method consists of a feasibility step and several centering steps. We use a more natural feasibility step which targets the [[mu].sup.+]-center (see Remark (3.2)). For the centering steps we exploit a sharper quadratic convergence result which is done in a slightly wider neighborhood for the feasibility steps. Section 4 is devoted to the analysis of our feasibility step. The analysis is more simplified. In Section 5 we obtain the complexity result of our IIPM.

Finally we give some concluding remarks in Section 6.

2. Feasible Newton step for IPMs

In our analysis we recall some useful properties of central path and feasible full Newton step. For more details, we refer to [7], [8]. To solve (P) and (D), one needs to find a solution of the following system of equations.

Ax = b, x [greater than or equal to] 0, [A.sup.T] y + s = c, s [greater than or equal to] 0, xs = 0, (2.1)

In these so-called optimality conditions, the first two constraints represent primal and dual feasibility, whereas the last equation is the so-called complementarity condition. The nonnegativity constraints already make the problem nontrivial: linear systems with inequality constraints can, in general, only be solved by iterative methods. The complementarity condition is nonlinear, which makes the system even harder to solve.

2.1. Central Path

IPMs replace the complementarity condition by the so-called centering condition xs = [mu]e, where [mu] may be any positive number. This yields the system

Ax = b, x [greater than or equal to] 0, [A.sup.T] y + s = c, s [greater than or equal to] 0, xs = [mu]e. (2.2)

Surprisingly enough, if this system has a solution for some [mu] > 0, then a solution exists for every [mu] > 0, and this solution is unique. This happens if and only if problems (P) and (D) satisfy the interior-point condition (IPC), i.e., if (P) has a feasible solution x > 0 and (D) has a solution (y, s) with s > 0. If the IPC is satisfied, then the solution of (2.2) is denoted by (x([mu]), y([mu]), s([mu])) and is called the [mu]-center of (P) and (D). The set of all [mu]-centers forms a path, which is called the central path. As [mu] goes to zero, (x([mu]), y([mu]), s([mu])) converges to an optimal solution of problems (P) and (D). Of course, the system (2.2) is still hard to solve, but by applying Newton's method one can easily find approximate solutions.

2.2. Properties of the Newton step

We proceed by describing Newton's method for solving (2.2), with [mu] fixed. Given any primal feasible x > 0, dual feasible y and s > 0, we want to find displacements [DELTA]x, [DELTA]y and [DELTA]s such that

A(x + [DELTA]x) = b, [A.sup.T] (y + [DELTA]y) + (s + [DELTA]s) = c, (x + [DELTA]x)(s + [DELTA]s) = [mu]e.

According to Newton's method for solving nonlinear equations, we obtain the linear system in the search directions [DELTA]x, [DELTA]y and [DELTA]s:

A[DELTA]x = b - Ax, [A.sup.T][DELTA]y + [DELTA]s = c - [A.sup.T] y - s, x [DELTA]s + s[DELTA]x = [mu]e - xs. (2.3)

Since A has full row rank, and since the vectors x and s are positive, one may easily verify that the coefficient matrix in the linear system (2.3) is nonsingular. Hence, this system uniquely defines the search directions [DELTA]x, [DELTA]y and [DELTA]s. These search directions are used in all existing primal-dual (feasible and infeasible) IPMs.

If x is primal feasible and (y, s) is a dual feasible pair, then b - Ax = 0 and c - [A.sup.T]y - s = 0, whence the above system reduces to

A[DELTA]x = 0, [A.sup.T] [DELTA]y + [DELTA]s = 0, x[DELTA]s + s[DELTA]x = [mu]e - xs, (2.4)

which gives the usual search directions for feasible primal-dual IPMs. The new iterates are given by

[x.sup.+] = x + [DELTA]x, [y.sup.+] = y + [DELTA]y, [s.sup.+] = s + [DELTA]s.

An important observation is that [DELTA]x lies in the null space of A, whereas [DELTA]s belongs to the row space of A. This implies that [DELTA]x and [DELTA]s are orthogonal, i.e., [[DELTA]x.sup.T][DELTA]s = 0. As a consequence, we have the important property that, after a full Newton step, the duality gap assumes the same value as at the [mu]-centers, namely n[mu].

Lemma 2.1. (See [7], Lemma II.47) After a primal-dual Newton step, one has [([x.sup.+]).sup.T][s.sup.+] = n[mu].
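As a quick numerical sanity check, both the orthogonality [[DELTA]x.sup.T][DELTA]s = 0 and Lemma 2.1 can be verified by solving (2.4) directly on a tiny feasible instance. The following sketch is ours, not part of the paper; it assumes NumPy, and the instance is arbitrary:

```python
import numpy as np

# A tiny feasible primal-dual pair: Ax = b, A^T y + s = c, with x, s > 0.
A = np.array([[1.0, 1.0]])
b = np.array([2.0])
x = np.array([1.0, 1.0])
y = np.array([0.0])
s = np.array([1.0, 2.0])
c = A.T @ y + s          # chosen so that (y, s) is dual feasible
mu = 0.5

# Assemble the feasible Newton system (2.4):
#   A dx = 0,  A^T dy + ds = 0,  x ds + s dx = mu e - x s.
K = np.block([
    [A,                np.zeros((1, 1)), np.zeros((1, 2))],
    [np.zeros((2, 2)), A.T,              np.eye(2)],
    [np.diag(s),       np.zeros((2, 1)), np.diag(x)],
])
rhs = np.concatenate([np.zeros(1), np.zeros(2), mu * np.ones(2) - x * s])
dx, dy, ds = np.split(np.linalg.solve(K, rhs), [2, 3])

assert abs(dx @ ds) < 1e-12                       # dx orthogonal to ds
assert abs((x + dx) @ (s + ds) - 2 * mu) < 1e-12  # Lemma 2.1: gap = n*mu
```

On this instance the full step also keeps the iterates positive, illustrating the feasibility part of the theorem cited below.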

We measure the proximity of an iterate (x, y, s) to the [mu]-center (x([mu]), y([mu]), s([mu])) by the quantity [delta](x, s; [mu]), which is defined as follows:

[delta](x, s; [mu]): = [delta](v):= 1/2 [parallel]v - [v.sup.-1][parallel], where v:= [square root of (xs/[mu])]. (2.5)
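In code, (2.5) is a one-liner; a minimal sketch (the function name proximity is our own):

```python
import math

def proximity(x, s, mu):
    """delta(x, s; mu) = (1/2) * || v - v^{-1} ||, with v = sqrt(x*s/mu)."""
    v = [math.sqrt(xi * si / mu) for xi, si in zip(x, s)]
    return 0.5 * math.sqrt(sum((vi - 1.0 / vi) ** 2 for vi in v))

# On a mu-center, x s = mu e, so v = e and the measure vanishes:
assert proximity([2.0, 0.5], [0.5, 2.0], 1.0) == 0.0
# Off-center iterates give a positive value:
assert abs(proximity([1.0, 1.0], [2.0, 2.0], 1.0) - 0.5) < 1e-12
```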

In the analysis of the algorithm, the effect on [delta](x, s; [mu]) of a full Newton step targeting the [mu]-center of (P) and (D) will be essential. That is why Theorem II.50 of [7] was used in [9]. This theorem states that, if [delta](x, s; [mu]) [less than or equal to] 1, then the primal-dual Newton step is feasible, i.e., [x.sup.+] and [s.sup.+] are nonnegative; moreover, if [delta] < 1, then [x.sup.+] and [s.sup.+] are positive and

[delta]([x.sup.+], [s.sup.+]; [mu]) [less than or equal to] [[delta].sup.2]/[square root of (2(1-[[delta].sup.2]))].

This implies that the Newton process is locally quadratically convergent. This property has been crucial in the analyses in [7], [9], [4].

Quadratic convergence can also be obtained by using a tighter upper bound for [delta]([x.sup.+], [s.sup.+]; [mu]), which provides a slightly wider neighborhood for the feasibility step of our IIPM. We recall the following result.

Theorem 2.2. ([7], Theorem II.52) If [delta](x, s; [mu]) < 1, then

[delta]([x.sup.+], [s.sup.+]; [mu]) [less than or equal to] [[delta].sup.2]/[square root of (2(1-[[delta].sup.4]))].

We can now deduce the following immediate corollary, which we state without proof.

Corollary 2.3. If [delta](x, s; [mu]) [less than or equal to] 1/[4th root of (2)], then [delta]([x.sup.+], [s.sup.+]; [mu]) [less than or equal to] [[delta].sup.2].
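Corollary 2.3 is easy to check numerically: the bound of Theorem 2.2 stays at or below [delta]² exactly when 2(1 - [[delta].sup.4]) [greater than or equal to] 1, i.e. [delta] [less than or equal to] 1/[4th root of (2)]. A small sketch (the function name is ours):

```python
import math

def newton_bound(delta: float) -> float:
    # Theorem 2.2: delta(x+, s+; mu) <= delta^2 / sqrt(2 * (1 - delta^4))
    return delta ** 2 / math.sqrt(2.0 * (1.0 - delta ** 4))

threshold = 2.0 ** -0.25   # 1 / (4th root of 2)
for k in range(1, 1000):
    d = threshold * k / 999.0
    assert newton_bound(d) <= d ** 2 + 1e-12   # Corollary 2.3
# At the threshold the two sides coincide, since 2*(1 - 1/2) = 1:
assert abs(newton_bound(threshold) - threshold ** 2) < 1e-12
```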

3. Infeasible full-Newton step for IIPM

In the case of an infeasible method, we call the triple (x, y, s) an [epsilon]-optimal solution of (P) and (D) if the 2-norms of the residual vectors b - Ax and c - [A.sup.T]y - s do not exceed [epsilon], and if the duality gap satisfies [x.sup.T]s [less than or equal to] [epsilon]. In this section, we present an infeasible-start algorithm that generates an [epsilon]-optimal solution of (P) and (D), if it exists, or establishes that no such solution exists.

3.1. Perturbed Problems

At the beginning, we choose arbitrarily [x.sup.0] > 0 and ([y.sup.0], [s.sup.0]) with [s.sup.0] > 0 such that [x.sup.0] [s.sup.0] = [[mu].sup.0]e for some positive number [[mu].sup.0]. We denote the initial values of the primal and dual residuals [r.sup.0.sub.b] and [r.sup.0.sub.c] respectively as

[r.sup.0.sub.b] = b - [Ax.sup.0], [r.sup.0.sub.c] = c - [A.sup.T] [y.sup.0] - [s.sup.0].

For any v such that 0 < v [less than or equal to] 1, we consider the following perturbed problem ([P.sub.v]), defined by

([P.sub.v]) min{[(c - [vr.sup.0.sub.c]).sup.T] x : Ax = b - [vr.sup.0.sub.b], x [greater than or equal to] 0},

and its dual problem ([D.sub.v]), which is given by

([D.sub.v]) max{[(b - [vr.sup.0.sub.b]).sup.T]y : [A.sup.T]y + s = c - [vr.sup.0.sub.c], s [greater than or equal to] 0}.

We note that if v = 1 then x = [x.sup.0] yields a strictly feasible solution of ([P.sub.v]), and (y, s) = ([y.sup.0], [s.sup.0]) a strictly feasible dual pair solution of ([D.sub.v]). We deduce that if v = 1 then ([P.sub.v]) and ([D.sub.v]) satisfy the IPC.

Lemma 3.1. ([6], Lemma 1.1) The original problems (P) and (D) are feasible if and only if, for each v satisfying 0 < v [less than or equal to] 1, the perturbed problems ([P.sub.v]) and ([D.sub.v]) satisfy the IPC.

In the sequel, we assume that (P) and (D) are feasible.

3.2. Central Path of the Perturbed Problems

Let (P) and (D) be feasible and 0 < v [less than or equal to] 1. Then, Lemma (3.1) implies that the perturbed problems ([P.sub.v]) and ([D.sub.v]) satisfy the IPC; hence, their central paths exist. This means that the system

Ax = b - [vr.sup.0.sub.b], x [greater than or equal to] 0, (3.6)

[A.sup.T] y + s = c - [vr.sup.0.sub.c], s [greater than or equal to] 0, xs = [mu]e, (3.7)

has a unique solution for every [mu] > 0. This unique solution is denoted by (x([mu], v), y([mu], v), s([mu], v)) and is the [mu]-center of the perturbed problems ([P.sub.v]) and ([D.sub.v]). In the sequel, the parameters [mu] and v always satisfy the relation [mu] = v[[mu].sup.0].

Note that since [x.sup.0][s.sup.0] = [[mu].sup.0]e, [x.sup.0] is the [[mu].sup.0]-center of the perturbed problem ([P.sub.1]) and ([y.sup.0], [s.sup.0]) the [[mu].sup.0]-center of ([D.sub.1]). In other words, (x([[mu].sup.0],1), y([[mu].sup.0],1), s([[mu].sup.0],1)) = ([x.sup.0], [y.sup.0], [s.sup.0]).

3.3. Description of the Algorithm

It is well known that the efficiency of an algorithm is measured by its total number of iterations, which is referred to as the iteration complexity of the algorithm. The best known iteration bound for IIPMs was first obtained by Mizuno [3]:

O(n log (max{[([x.sup.0]).sup.T][s.sup.0], [parallel]b - A[x.sup.0][parallel], [parallel]c - [A.sup.T][y.sup.0] - [s.sup.0][parallel]}/[epsilon])).

Up to a constant, this bound was slightly improved by Roos [6] and then by Gu et al. [4].

At the beginning, we specify our initial iterate ([x.sup.0], [y.sup.0], [s.sup.0]). As usual in infeasible IPMs, we assume that the initial iterates are designed as follows:

[x.sup.0] = [s.sup.0] = [zeta]e, [y.sup.0] = 0, [[mu].sup.0] = [[zeta].sup.2],

where e is the all-one vector of length n, [[mu].sup.0] is the initial barrier parameter and [zeta] > 0 is such that

[parallel][x.sup.*] + [s.sup.*][[parallel].sub.[infinity]] [less than or equal to] [zeta],

for some optimal solution ([x.sup.*], [y.sup.*], [s.sup.*]) of (P) and (D).

At the start of the algorithm, we have initially [delta](x, s; [mu]) = 0, since if v = 1 and [mu] = [[mu].sup.0], then x = [x.sup.0] is the [mu]-center of the perturbed problem ([P.sub.v]) and (y, s) = ([y.sup.0], [s.sup.0]) is the [mu]-center of the perturbed problem ([D.sub.v]). In the sequel, we assume that, at the start of each iteration, just before the feasibility step, [delta](x, s; [mu]) is smaller than or equal to a threshold value [tau] > 0 which is ensured for the first iteration.

Now, we describe one (main) iteration of our algorithm. Suppose that, for some [mu] [member of] (0, [[mu].sup.0]], we have (x, y, s) satisfying the feasibility conditions (3.6) and (3.7) with v = [mu]/[[mu].sup.0] and such that [x.sup.T] s = n[mu] and [delta](x, s; [mu]) [less than or equal to] [tau]. We reduce [mu] to [[mu].sup.+] = (1 - [theta])[mu], with [theta] [member of] (0,1), and find a new iterate ([x.sup.+], [y.sup.+], [s.sup.+]) that satisfies (3.6) and (3.7), with v replaced by [v.sup.+] = (1 - [theta])v = [[mu].sup.+]/[[mu].sup.0] and such that [([x.sup.+]).sup.T][s.sup.+] = n[[mu].sup.+] and [delta]([x.sup.+], [s.sup.+]; [[mu].sup.+]) [less than or equal to] [tau].

To be more precise, this is achieved as follows. Each main iteration consists of a feasibility step and a few centering steps. The feasibility step serves to get an iterate ([x.sup.f], [y.sup.f], [s.sup.f]) that is strictly feasible for ([P.sub.v]+) and ([D.sub.v]+) and close to their [[mu].sup.+]-center (x([[mu].sup.+], [v.sup.+]), y([[mu].sup.+], [v.sup.+]), s([[mu].sup.+], [v.sup.+])). In fact, the feasibility step is designed in such a way that [delta]([x.sup.f], [s.sup.f]; [[mu].sup.+]) [less than or equal to] 1/[4th root of (2)], i.e., ([x.sup.f], [y.sup.f], [s.sup.f]) lies in the quadratic convergence neighborhood with respect to the [[mu].sup.+]-center of ([P.sub.v]+) and ([D.sub.v]+). Then we can easily get an iterate ([x.sup.+], [y.sup.+], [s.sup.+]) that is strictly feasible for ([P.sub.v]+) and ([D.sub.v]+) and such that [([x.sup.+]).sup.T][s.sup.+] = n[[mu].sup.+] and [delta]([x.sup.+], [s.sup.+]; [[mu].sup.+]) [less than or equal to] [tau], just by performing a few centering steps starting from ([x.sup.f], [y.sup.f], [s.sup.f]) and targeting the [[mu].sup.+]-center of ([P.sub.v]+) and ([D.sub.v]+).

In what follows, we describe the feasibility step in more detail. Suppose that we have a strictly feasible iterate (x, y, s) for ([P.sub.v]) and ([D.sub.v]). This means that (x, y, s) satisfies (3.6) and (3.7), with v = [mu]/[[mu].sup.0]. We need displacements [[DELTA].sup.f]x, [[DELTA].sup.f]y and [[DELTA].sup.f]s such that

[x.sup.f] = x + [[DELTA].sup.f]x, [y.sup.f] = y + [[DELTA].sup.f]y, [s.sup.f] = s + [[DELTA].sup.f]s,

are feasible for ([P.sub.v]+) and ([D.sub.v]+). One may verify easily that ([x.sup.f], [y.sup.f], [s.sup.f]) satisfies (3.6) and (3.7), with v replaced by [v.sup.+] = (1 - [theta])v, only if the first two equations in the following system are satisfied:

A[[DELTA].sup.f]x = [theta]v[r.sup.0.sub.b], (3.8)

[A.sup.T][[DELTA].sup.f]y + [[DELTA].sup.f]s = [theta]v[r.sup.0.sub.c], (3.9)

s[[DELTA].sup.f]x + x[[DELTA].sup.f]s = (1 - [theta])[mu]e - xs. (3.10)

The third equation is inspired by the third equation in the system (2.4) that we used to define the search directions for the feasible case, except that we target the [[mu].sup.+]-center of ([P.sub.v]+) and ([D.sub.v]+).

Remark 3.2. For (3.10), as in [4], we use the linearization of [x.sup.f][s.sup.f] = (1 - [theta])[mu]e, which means that we are targeting the [[mu].sup.+]-center of ([P.sub.v]+) and ([D.sub.v]+). This makes our search direction slightly different from the one used by Z. Liu et al. [9], which uses the linearization of [x.sup.f][s.sup.f] = [mu]e, targeting the [mu]-center, while in [1] the linearization of [x.sup.f][s.sup.f] = xs is used, targeting the old product xs.

We conclude that, after the feasibility step, the iterate satisfies the affine equations (3.6) and (3.7), with v = [v.sup.+]. The hard part in the analysis is to guarantee that [x.sup.f] and [s.sup.f] are positive and satisfy [delta]([x.sup.f], [s.sup.f]; [[mu].sup.+]) [less than or equal to] 1/[4th root of (2)].

After the feasibility step, we perform a few centering steps in order to get an iterate ([x.sup.+], [y.sup.+], [s.sup.+]) which satisfies [([x.sup.+]).sup.T][s.sup.+] = n[[mu].sup.+] and [delta]([x.sup.+], [s.sup.+]; [[mu].sup.+]) [less than or equal to] [tau]. By using Corollary (2.3), the required number of centering steps can be obtained easily. Indeed, assuming [delta] = [delta]([x.sup.f], [s.sup.f]; [[mu].sup.+]) [less than or equal to] 1/[4th root of (2)], after k centering steps we will have iterates ([x.sup.+], [y.sup.+], [s.sup.+]) that are still feasible for ([P.sub.v]+) and ([D.sub.v]+) and satisfy

[delta]([x.sup.+], [s.sup.+]; [[mu].sup.+]) [less than or equal to] [[delta].sup.[2.sup.k]] [less than or equal to] [(1/[4th root of (2)]).sup.[2.sup.k]].

From this, one deduces easily that [delta]([x.sup.+], [s.sup.+]; [[mu].sup.+]) [less than or equal to] [tau] holds after at most

2 + [??][log.sub.2]([log.sub.2] 1/[tau])[??] (3.11)

centering steps.
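The count (3.11) follows from repeated squaring of the proximity measure (Corollary 2.3): starting from [delta] [less than or equal to] 1/[4th root of (2)], after k steps [delta] [less than or equal to] (1/[4th root of (2)])^(2^k). A small sketch of the arithmetic (the function name is ours):

```python
import math

def centering_steps(tau: float) -> int:
    """Worst-case number of centering steps from (3.11)."""
    return 2 + math.ceil(math.log2(math.log2(1.0 / tau)))

# Repeated squaring from delta = 2^(-1/4) reaches tau = 1/16 in 4 steps:
tau = 1.0 / 16.0
d = 2.0 ** -0.25
for _ in range(centering_steps(tau)):
    d = d * d
assert d <= tau + 1e-12
assert centering_steps(tau) == 4
```

With [tau] = 1/16, as chosen later in (4.30), this gives the "at most 4" centering steps used in Section 5.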

We give below a more formal description of the algorithm in Fig. 1:

Figure 1.

Primal-Dual Infeasible IPM

Input:

 a bound parameter [zeta];
 a threshold parameter [tau] > 0;
 an accuracy parameter [epsilon] > 0;
 a barrier update parameter [theta], 0 < [theta] < 1.

begin

 x := [zeta]e; y := 0; s := [zeta]e; [mu] := [[zeta].sup.2]; v := 1;
 while max{[x.sup.T]s, [parallel]b - Ax[parallel], [parallel]c - [A.sup.T]y - s[parallel]} [greater than or equal to] [epsilon] do
 begin
  feasibility step: (x, y, s) := (x, y, s) + ([[DELTA].sup.f]x, [[DELTA].sup.f]y, [[DELTA].sup.f]s);
  [mu]-update: [mu] := (1 - [theta])[mu]; v := (1 - [theta])v;
  centering steps:
  while [delta](x, s; [mu]) > [tau] do
   (x, y, s) := (x, y, s) + ([DELTA]x, [DELTA]y, [DELTA]s);
  end while
 end

end
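The loop of Fig. 1 can be sketched end-to-end in NumPy. This is our illustrative implementation, not the paper's code: it uses the plain direction (3.10) (the p = 1 case of the kernel direction introduced below), the [theta] derived later in (4.39), and a normal-equations solve; solve_kkt, delta and iipm are names we introduce.

```python
import numpy as np

def solve_kkt(A, x, s, r_p, r_d, r_c):
    """Solve A dx = r_p, A^T dy + ds = r_d, s*dx + x*ds = r_c (x, s > 0)."""
    dy = np.linalg.solve(A @ np.diag(x / s) @ A.T,
                         r_p + A @ ((x * r_d - r_c) / s))
    ds = r_d - A.T @ dy
    dx = (r_c - x * ds) / s
    return dx, dy, ds

def delta(x, s, mu):
    v = np.sqrt(x * s / mu)                     # proximity measure (2.5)
    return 0.5 * np.linalg.norm(v - 1.0 / v)

def iipm(A, b, c, zeta, eps=1e-6, tau=1.0 / 16.0, max_iter=3000):
    m, n = A.shape
    x, y, s = zeta * np.ones(n), np.zeros(m), zeta * np.ones(n)
    mu, nu = zeta ** 2, 1.0
    rb0, rc0 = b - A @ x, c - A.T @ y - s       # initial residuals
    theta = 0.9426 / (4.0 * np.sqrt(2.0) * n)   # barrier update (4.39)
    for _ in range(max_iter):
        if max(x @ s, np.linalg.norm(b - A @ x),
               np.linalg.norm(c - A.T @ y - s)) <= eps:
            break
        # feasibility step, system (3.8)-(3.10)
        dx, dy, ds = solve_kkt(A, x, s, theta * nu * rb0, theta * nu * rc0,
                               (1.0 - theta) * mu * np.ones(n) - x * s)
        x, y, s = x + dx, y + dy, s + ds
        mu, nu = (1.0 - theta) * mu, (1.0 - theta) * nu   # mu-update
        while delta(x, s, mu) > tau:            # centering steps, system (2.4)
            dx, dy, ds = solve_kkt(A, x, s, np.zeros(m), np.zeros(n),
                                   mu * np.ones(n) - x * s)
            x, y, s = x + dx, y + dy, s + ds
    return x, y, s

# Infeasible start on a tiny LP: min x1 + 2*x2 s.t. x1 + x2 = 2, x >= 0.
A, b, c = np.array([[1.0, 1.0]]), np.array([2.0]), np.array([1.0, 2.0])
x, y, s = iipm(A, b, c, zeta=2.0)   # zeta bounds ||x* + s*||_inf here
assert x @ s <= 1e-6 and np.linalg.norm(b - A @ x) <= 1e-6
```

On this instance the iterates approach the unique optimum x* = (2, 0); both the duality gap and the residual norms shrink by the factor 1 - [theta] per main iteration, in line with the analysis below.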


Now we introduce the definition of a kernel function.

We call [psi]: (0, [infinity]) [right arrow] [0, [infinity]) a kernel function if [psi] is twice differentiable and the following conditions are satisfied

(i) [psi]'(1) = [psi] (1) = 0;

(ii) [psi]"(t) > 0 for all t > 0;

(iii) [psi](t) [right arrow] +[infinity] as t [right arrow] [0.sup.+] and as t [right arrow] [infinity].

We define

[bar.A] = A[V.sup.-1]X, V = diag(v), X = diag(x),

[d.sup.f.sub.x] = v[[DELTA].sup.f]x/x, [d.sup.f.sub.s] = v[[DELTA].sup.f]s/s (3.12)

[bar.v] = v/[square root of (1 - [theta])] (3.13)

[[bar.d].sup.f.sub.x] = [bar.v][[DELTA].sup.f]x/x, [[bar.d].sup.f.sub.s] = [bar.v][[DELTA].sup.f]s/s. (3.14)

The system (3.8)-(3.10) which defines the search directions [[DELTA].sup.f]x, [[DELTA].sup.f]y and [[DELTA].sup.f]s, can be expressed in terms of the scaled directions [d.sup.f.sub.x] and [d.sup.f.sub.s] as follows:

[bar.A][d.sup.f.sub.x] = [theta]v[r.sup.0.sub.b], 1/[mu] [[bar.A].sup.T][[DELTA].sup.f]y + [d.sup.f.sub.s] = [theta]vv[s.sup.-1][r.sup.0.sub.c], [d.sup.f.sub.x] + [d.sup.f.sub.s] = (1 - [theta])[v.sup.-1] - v. (3.15)

According to (3.14), the third equation of (3.15) can also be expressed in terms of [[bar.d].sup.f.sub.x] and [[bar.d].sup.f.sub.s] as follows:

[[bar.d].sup.f.sub.x] + [[bar.d].sup.f.sub.s] = [[bar.(v)].sup.-1] - [bar.v]. (3.16)

It is clear that the right-hand side of the equation (3.16) is the negative gradient direction of the logarithmic barrier function

[PSI]([bar.v]) := [n.summation over (i=1)][psi]([[bar.v].sub.i]), [[bar.v].sub.i] = [square root of ([x.sub.i][s.sub.i]/((1 - [theta])[mu]))],

whose kernel function is

[psi](t) = ([t.sup.2] - 1)/2 - log(t).
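For this [psi], conditions (i) and (ii) are immediate, since [psi]'(t) = t - 1/t and [psi]''(t) = 1 + 1/[t.sup.2]; a quick numerical sketch (the function names are ours):

```python
import math

def psi(t: float) -> float:
    """Logarithmic-barrier kernel: psi(t) = (t^2 - 1)/2 - log(t)."""
    return (t * t - 1.0) / 2.0 - math.log(t)

def Psi(v):
    """Barrier value of a scaled vector v."""
    return sum(psi(vi) for vi in v)

assert psi(1.0) == 0.0                         # condition (i): psi(1) = 0
h = 1e-6
assert abs(psi(1 + h) - psi(1 - h)) / (2 * h) < 1e-6   # psi'(1) = 0
assert psi(0.01) > 1 and psi(100.0) > 1        # barrier blows up at 0 and inf
assert Psi([1.0, 1.0, 1.0]) == 0.0             # minimal at v = e
```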

Inspired by the works [2], [5], [9], and by making a slight modification of the standard Newton direction, the new system is defined as follows:

[bar.A][d.sup.f.sub.x] = [theta]v[r.sup.0.sub.b], 1/[mu] [[bar.A].sup.T][[DELTA].sup.f]y + [d.sup.f.sub.s] = [theta]vv[s.sup.-1][r.sup.0.sub.c], [d.sup.f.sub.x] + [d.sup.f.sub.s] = -[square root of (1 - [theta])][psi]'([bar.v]), (3.17)

where the kernel function [psi] of [PSI] is given by

[psi](t) = ([t.sup.2] - 1)/2 + ([t.sup.1-p] - 1)/(p - 1), p [member of] [0,1). (3.18)

As in [9], since [psi]'(t) = t - [t.sup.-p], (3.17) can be rewritten as

[d.sup.f.sub.x] + [d.sup.f.sub.s] = [square root of (1 - [theta])]([[bar.v].sup.-p] - [bar.v]). (3.19)

Note that, through the factor [square root of (1 - [theta])], this direction makes clear that we are targeting the [[mu].sup.+]-center, which differs from the choice of Z. Liu et al. [9].

In the sequel, the feasibility step will be based on the equation (3.19).
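A small sketch of (3.19) in code (the function name is ours); for p = 1 it collapses to the linearization (3.10), written in scaled form as [d.sup.f.sub.x] + [d.sup.f.sub.s] = (1 - [theta])[v.sup.-1] - v:

```python
import math

def feasibility_rhs(v, theta, p):
    """Right-hand side of (3.19): sqrt(1-theta) * (vbar^(-p) - vbar),
    with vbar = v / sqrt(1 - theta), applied componentwise."""
    r = math.sqrt(1.0 - theta)
    return [r * ((vi / r) ** -p - vi / r) for vi in v]

# For p = 1 the direction reduces to (1 - theta)/v - v, i.e. (3.10):
v, theta = [0.8, 1.3], 0.1
got = feasibility_rhs(v, theta, 1.0)
ref = [(1.0 - theta) / vi - vi for vi in v]
assert all(abs(a - b) < 1e-12 for a, b in zip(got, ref))
```

Smaller values of p damp the [[bar.v].sup.-p] term, which is what widens the admissible neighborhood in the analysis below.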

3.4. Some technical lemmas

The two following lemmas, stated without proof, will be useful for our analysis.

Lemma 3.3. (See [9], Lemma 2.4) For any t > 0, one has

[absolute value of [t.sup.-p] - t] [less than or equal to] [absolute value of [t.sup.-1] - t], p [member of] [0,1].

Lemma 3.4. (See [9], Lemma 2.5) For any t > 0, one has

[absolute value of [t.sup.-1-p/2] - [t.sup.1-p/2]] [less than or equal to] [absolute value of [t.sup.-1] - t], p [member of] [0,1].
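Both inequalities are easy to spot-check numerically (reading the exponent in Lemma 3.4 as 1 - p/2; the helper name is ours):

```python
def check_lemmas(t: float, p: float) -> None:
    base = abs(1.0 / t - t)
    assert abs(t ** -p - t) <= base + 1e-12            # Lemma 3.3
    q = 1.0 - p / 2.0
    assert abs(t ** -q - t ** q) <= base + 1e-12       # Lemma 3.4

for t in (0.1, 0.5, 0.9, 1.0, 1.3, 2.0, 7.5):
    for p in (0.0, 0.25, 0.5, 0.75, 1.0):
        check_lemmas(t, p)
```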

By applying Lemma (3.4), one can easily verify that for any [theta] [member of] (0,1), we have:

[parallel][v.sup.-(1-p/2)] - [v.sup.1-p/2][parallel] [less than or equal to] [parallel][v.sup.-1] - v[parallel], (3.20)

and furthermore, according to (2.5), we obtain:

[delta]([v.sup.1-p/2]) [less than or equal to] [delta](v) (3.21)

Lemma 3.5. According to the result of Lemma (2.1), for any p [member of] [0,1] one has

[parallel][v.sup.1-p/2][parallel] [less than or equal to] [square root of (n)].

Proof. By applying Hölder's inequality, we obtain

[parallel][v.sup.1-p/2][[parallel].sup.2] = [n.summation over (i=1)][v.sub.i.sup.2-p] [less than or equal to] [([n.summation over (i=1)][v.sub.i.sup.2]).sup.(2-p)/2][n.sup.p/2] = [n.sup.(2-p)/2][n.sup.p/2] = n,

and the result follows.

The following Lemma gives an upper bound for the proximity-measure of [[bar.v].sup.1-p/2].

Lemma 3.6. Let (x, s) be a positive primal-dual pair with barrier parameter [mu] > 0. Then

4[[delta].sup.2]([[bar.v].sup.1-p/2]) [less than or equal to] 4[(1 - [theta]).sup.1-p/2][[delta].sup.2]([v.sup.1-p/2]) + [[theta].sup.2]n/(1 - [theta]).

Proof. Since (x, s) is a primal-dual pair, by applying Lemma (3.5), (3.20) and (3.21) we can get:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII],

since the last term in the equality is nonnegative. This completes the proof of the Lemma.

Lemma 3.7. (See [1], Lemma A.1) For i = 1, ..., n, let [f.sub.i]: [R.sub.+] [right arrow] R denote a convex function. Then, for any nonzero z [member of] [R.sup.n.sub.+], the following inequality

[n.summation over (i=1)][f.sub.i]([z.sub.i]) [less than or equal to] 1/([e.sup.T]z) [n.summation over (j=1)][z.sub.j]([f.sub.j]([e.sup.T]z) + [summation over (i [not equal to] j)][f.sub.i](0))

holds.
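Lemma 3.7 can be spot-checked on an arbitrary convex family (the functions and data below are our own choices):

```python
def lemma_3_7_rhs(fs, z):
    """Right-hand side of Lemma 3.7 for convex fs and nonzero z >= 0."""
    ez = sum(z)
    return sum(
        zj * (fs[j](ez) + sum(fs[i](0.0) for i in range(len(z)) if i != j))
        for j, zj in enumerate(z)
    ) / ez

fs = [lambda t: (t - 1.0) ** 2, lambda t: t * t + 0.5 * t, lambda t: 3.0 * t]
z = [0.2, 1.1, 0.7]
lhs = sum(f(zi) for f, zi in zip(fs, z))
assert lhs <= lemma_3_7_rhs(fs, z) + 1e-12   # 4.5 <= 5.85 here
```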

4. Analysis of the feasibility step

Let x, y and s denote the iterates at the start of an iteration, and assume that [x.sup.T]s = n[mu] and [delta](v) [less than or equal to] [tau], which is true at the first iteration since [delta]([v.sup.0]) = 0, according to the choice of ([x.sup.0], [s.sup.0]) stated in Sect. (3.3).

4.1. Feasibility step

As we established in Sect. (3.3), the feasibility step generates a new iterate ([x.sup.f], [y.sup.f], [s.sup.f]) that satisfies the feasibility conditions for ([P.sub.v]+) and ([D.sub.v]+), except possibly the nonnegativity constraints. A crucial element in the analysis is to show that, after the feasibility step, we get [delta]([x.sup.f], [s.sup.f]; [[mu].sup.+]) [less than or equal to] 1/[4th root of (2)], i.e., the new iterates ([x.sup.f], [y.sup.f], [s.sup.f]) are positive and within the neighborhood where the Newton process targeting the [[mu].sup.+]-center of ([P.sub.v]+) and ([D.sub.v]+) is quadratically convergent.

Note that (3.19) can be rewritten as

s[[DELTA].sup.f]x + x[[DELTA].sup.f]s = [(1 - [theta]).sup.(1+p)/2][[mu].sup.(1+p)/2][(xs).sup.(1-p)/2] - xs.

Using xs = [mu][v.sup.2], [bar.v] = v/[square root of (1 - [theta])] and [[DELTA].sup.f]x[[DELTA].sup.f]s = [mu][d.sup.f.sub.x][d.sup.f.sub.s], we obtain

[x.sup.f][s.sup.f] = [mu]((1 - [theta])[[bar.v].sup.1-p] + [d.sup.f.sub.x][d.sup.f.sub.s]). (4.22)

The feasibility condition can now be stated in the following Lemma.

Lemma 4.1. The iterates ([x.sup.f], [y.sup.f], [s.sup.f]) are strictly feasible if and only if (1 - [theta])[[bar.v].sup.1-p] + [d.sup.f.sub.x][d.sup.f.sub.s] > 0.

Proof. If [x.sup.f] and [s.sup.f] are positive, then (4.22) makes clear that (1 - [theta])[[bar.v].sup.1-p] + [d.sup.f.sub.x][d.sup.f.sub.s] > 0, proving the 'only if' part of the lemma. For the proof of the converse implication, we introduce a step length [alpha] [member of] [0,1] and define [x.sup.[alpha]] = x + [alpha][[DELTA].sup.f]x and [s.sup.[alpha]] = s + [alpha][[DELTA].sup.f]s.

We then have [x.sup.0] = x, [x.sup.1] = [x.sup.f] and similar relations for s. Hence we have [x.sup.0] [s.sup.0] = xs > 0.

Using xs = [mu][v.sup.2], [bar.v] = v/[square root of (1 - [theta])] and [[DELTA].sup.f]x[[DELTA].sup.f]s = [mu][d.sup.f.sub.x][d.sup.f.sub.s], we may write:

[x.sup.[alpha]][s.sup.[alpha]] = [mu]((1 - [alpha])[v.sup.2] + [alpha]((1 - [theta])[[bar.v].sup.1-p] + [alpha][d.sup.f.sub.x][d.sup.f.sub.s])).

For each component, since [alpha] [member of] [0,1], the term (1 - [theta])[[bar.v].sub.i.sup.1-p] + [alpha][d.sup.f.sub.xi][d.sup.f.sub.si] is at least min{(1 - [theta])[[bar.v].sub.i.sup.1-p], (1 - [theta])[[bar.v].sub.i.sup.1-p] + [d.sup.f.sub.xi][d.sup.f.sub.si]}, which is positive by assumption. Hence [x.sup.[alpha]][s.sup.[alpha]] > 0 for [alpha] [member of] [0,1], so none of the entries of [x.sup.[alpha]] and [s.sup.[alpha]] vanishes for 0 [less than or equal to] [alpha] [less than or equal to] 1. Since [x.sup.0] and [s.sup.0] are positive, and [x.sup.[alpha]] and [s.sup.[alpha]] depend linearly (hence continuously) on [alpha], this implies that [x.sup.[alpha]] > 0 and [s.sup.[alpha]] > 0 for all [alpha] [member of] [0,1]. Hence [x.sup.1] = [x.sup.f] and [s.sup.1] = [s.sup.f] are positive, proving the 'if' part of the statement in the lemma.

By slightly modifying the proof of Lemma 3.2 in Z. Liu et al. [9], we can get the following result.

Lemma 4.2. (See Lemma 3.2 in [9]) The iterates ([x.sup.f], [y.sup.f], [s.sup.f]) are strictly feasible if

2[[omega].sup.2] < [(1 - [theta]).sup.3/2]/[rho]([delta]), (4.23)

where: [rho]([delta]):= [delta] + [square root of (1 + [[delta].sup.2])].

The proof of Lemma (4.2), together with p [member of] [0,1], makes clear that the elements of the vector [[bar.v].sup.1-p] satisfy

[square root of (1 - [theta])]/[rho]([delta]) [less than or equal to] [[bar.v].sub.i.sup.1-p] [less than or equal to] [rho]([delta])/[square root of (1 - [theta])], i = 1, ..., n. (4.24)

Using (3.12) we may also write

[x.sup.f] = x + [[DELTA].sup.f]x = x + x[d.sup.f.sub.x]/v = (x/v)(v + [d.sup.f.sub.x]), (4.25)

[s.sup.f] = s + [[DELTA].sup.f]s = s + s[d.sup.f.sub.s]/v = (s/v)(v + [d.sup.f.sub.s]). (4.26)

In the sequel we denote

[[omega].sub.i] := [[omega].sub.i](v) := 1/2 [square root of ([[absolute value of [d.sup.f.sub.xi]].sup.2] + [[absolute value of [d.sup.f.sub.si]].sup.2])],

and [omega] := [omega](v) := [parallel]([[omega].sub.1], ..., [[omega].sub.n])[parallel].

This implies

[absolute value of [d.sup.f.sub.xi][d.sup.f.sub.si]] [less than or equal to] 2[[omega].sup.2.sub.i], i = 1, ..., n, and hence [parallel][d.sup.f.sub.x][d.sup.f.sub.s][[parallel].sub.[infinity]] [less than or equal to] 2[[omega].sup.2].

We proceed by deriving an upper bound for [delta]([x.sup.f], [s.sup.f]; [[mu].sup.+]). According to definition (2.5), one has

[delta]([x.sup.f], [s.sup.f]; [[mu].sup.+]) = 1/2 [parallel][v.sup.f] - [([v.sup.f]).sup.-1][parallel], where [v.sup.f] := [square root of ([x.sup.f][s.sup.f]/[[mu].sup.+])]. (4.27)

In the sequel, we denote [delta]([x.sup.f], [s.sup.f]; [[mu].sup.+]) by [delta]([v.sup.f]) and we have the following result.

Lemma 4.3. Assuming (1 - [theta])[[bar.v].sup.1-p] + [d.sup.f.sub.x][d.sup.f.sub.s] > 0, one has

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII].

Proof. According to (4.27) and (4.22), one has

[([v.sup.f]).sup.2] = [x.sup.f][s.sup.f]/((1 - [theta])[mu]) = ((1 - [theta])[[bar.v].sup.1-p] + [d.sup.f.sub.x][d.sup.f.sub.s])/(1 - [theta])

and

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII].

For each i we define the function

[f.sub.i]([z.sub.i]) = ((1 - [theta])[[bar.v].sub.i.sup.1-p] + [z.sub.i])/(1 - [theta]) + (1 - [theta])/((1 - [theta])[[bar.v].sub.i.sup.1-p] - [z.sub.i]) - 2, i = 1, ..., n.

One can easily verify that if (1 - [theta])[[bar.v].sub.i.sup.1-p] - [z.sub.i] > 0, then [f.sub.i]([z.sub.i]) is convex in [z.sub.i]. By taking [z.sub.i] = 2[[omega].sup.2.sub.i], we require

(1 - [theta])[[bar.v].sub.i.sup.1-p] - 2[[omega].sup.2.sub.i] > 0.

Using (4.24), this holds if

2[[omega].sup.2] [less than or equal to] [(1 - [theta]).sup.3/2]/[rho]([delta]). (4.28)

Furthermore, we can use Lemma (3.7) to get

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII],

where

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII].

Using Lemma (3.6), one may write

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII].

Then

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII],

where

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII],

due to (4.24) and (4.28). We thus deduce

4[[delta].sup.2]([v.sup.f]) [less than or equal to] 4[(1 - [theta]).sup.1-p/2][[delta].sup.2](v) + [[theta].sup.2]n/(1 - [theta]) + [rho](1 - [theta])/([(1 - [theta]).sup.3/2] - 2[rho][[omega].sup.2]) + 2[[omega].sup.2]/(1 - [theta]),

by using again the requirement (4.28). This completes the proof of the lemma.

Because we need to have [delta]([v.sup.f]) [less than or equal to] 1/[4th root of (2)], it follows from Lemma (4.3) that it suffices if

4[[delta].sup.2](v) + [[theta].sup.2]n/(1 - [theta]) + [rho](1 - [theta])/([(1 - [theta]).sup.3/2] - 2[rho][[omega].sup.2]) + 2[[omega].sup.2]/(1 - [theta]) [less than or equal to] 2[square root of (2)]. (4.29)

Note that the left-hand side of (4.29) is monotonically increasing with respect to [[omega].sup.2]. We choose

[tau] = 1/16, [theta] = [alpha]/(4n), 0 < [alpha] [less than or equal to] 1. (4.30)

By some elementary calculations, for n [greater than or equal to] 1 and [delta](v) [less than or equal to] [tau], we obtain

[omega] [less than or equal to] 0.3808 [right arrow] [delta]([v.sup.f]) [less than or equal to] 1/[4th root of (2)]. (4.31)

Later in this work, we will give a precise value of [alpha].

4.2. Upper Bound for [omega](v)

Let us denote the null space of the matrix [bar.A] as L. So

L :={[xi] [member of] [R.sup.n]: [bar.A][xi] = 0}.

Then the affine space {[xi] [member of] [R.sup.n] : [bar.A][xi] = [theta]v[r.sup.0.sub.b]} equals [d.sup.f.sub.x] + L. Note that the row space of [bar.A] equals the orthogonal complement [L.sup.[perpendicular to]] of L, and [d.sup.f.sub.s] [member of] [theta]vv[s.sup.-1][r.sup.0.sub.c] + [L.sup.[perpendicular to]]. We recall the following result from Roos [6].

Lemma 4.4. (See Lemma 4.6 in [6]) Let q be the (unique) point in the intersection of the affine spaces [d.sup.f.sub.x] + L and [d.sup.f.sub.s] + [L.sup.[perpendicular to]]. Then

2[omega](v) [less than or equal to] [square root of ([parallel]q[[parallel].sup.2] + [([parallel]q[parallel] + 2[delta](v)).sup.2])].

Note that (4.31) implies that [omega] [less than or equal to] 0.38 guarantees [delta]([v.sup.f]) [less than or equal to] 1/[4th root of (2)]. Due to Lemma (4.4) this will certainly hold if [parallel]q[parallel] satisfies

[parallel]q[[parallel].sup.2] + [([parallel]q[parallel] + 2[delta](v)).sup.2] [less than or equal to] 4 * [(0.38).sup.2]. (4.32)

Furthermore, from Roos [6], we have

[square root of ([mu])][parallel]q[parallel] [less than or equal to] [theta]v[zeta][square root of ([e.sup.T](x/s + s/x))]. (4.33)

In what follows, we give bounds for the vectors x/s and s/x.

4.3. Bounds for x/s and s/x

Note that x is feasible for ([P.sub.v]) and (y, s) for ([D.sub.v]), and moreover [delta](x, s; [mu]) [less than or equal to] [tau], i.e., these iterates are close to the [mu]-centers of ([P.sub.v]) and ([D.sub.v]). Based on this information we need to estimate the sizes of the entries of the vectors x/s and s/x. We now recall a useful result from Roos [6], which remains valid for our choice of [tau] = 1/16.

Corollary 4.5. Let [tau] = 1/16 and [delta](v) [less than or equal to] [tau].

Then

[square root of (x/s)] [less than or equal to] [square root of (2)]x([mu], v)/[square root of ([mu])], [square root of (s/x)] [less than or equal to] [square root of (2)]s([mu], v)/[square root of ([mu])]. (4.34)

Proof. Following the same notations as in the Appendix of Roos ([6], Corollary A.10 and Theorem A.9), and by some elementary calculations, one may easily verify that if [tau] = 1/16, then [tau]' [approximately equal to] 0.0041, [??]([tau]') [approximately equal to] 1.0645 and X([tau]') [approximately equal to] 0.9369, which gives

[??]([tau]')/X([tau]') [approximately equal to] 1.1362 < [square root of (2)] ([approximately equal to] 1.4142).

Thus the result follows.

Using (4.34) and substituting into (4.33) yields

[square root of ([mu])][parallel]q[parallel] [less than or equal to] [theta]v[zeta][square root of (2[e.sup.T]([x([mu], v).sup.2]/[mu] + [s([mu], v).sup.2]/[mu]))]. (4.35)

This gives

[mu][parallel]q[parallel] [less than or equal to] [square root of (2)][theta]v[zeta][square root of ([parallel]x([mu], v)[[parallel].sup.2] + [parallel]s([mu], v)[[parallel].sup.2])]. (4.36)

By substituting [mu] = [[mu].sup.0]v = [[zeta].sup.2]v and [theta] = [alpha]/(4n) into (4.36), we obtain the following upper bound for [parallel]q[parallel]:

[parallel]q[parallel] [less than or equal to] [alpha]/(2[square root of (2)]n[zeta])[square root of ([parallel]x([mu], v)[[parallel].sup.2] + [parallel]s([mu], v)[[parallel].sup.2])]. (4.37)

We define

$\kappa(\zeta, \nu) = \frac{\sqrt{\|x(\mu,\nu)\|^2 + \|s(\mu,\nu)\|^2}}{\zeta\sqrt{2n}}, \qquad 0 < \nu \le 1, \quad \mu = \mu^0 \nu,$

and

$\bar{\kappa}(\zeta) = \max\{\kappa(\zeta, \nu) : 0 < \nu \le 1\},$

so we obtain

$\|q\| \le \frac{\alpha}{2\sqrt{n}}\,\bar{\kappa}(\zeta).$

Since we seek the largest value of $\alpha$ that still guarantees $\delta^f(v) \le 1/\sqrt[4]{2}$, note that (4.32) holds if $\|q\|$ satisfies $\|q\|^2 + (\|q\| + 1/8)^2 \le 4\,(0.38)^2$, since $\delta(v) \le 1/16$. This is certainly satisfied if $\|q\| \le 0.4713$. We may now deduce the value of $\alpha$:

$\alpha = \frac{0.9426\,\sqrt{n}}{\bar{\kappa}(\zeta)}, \qquad (4.38)$

which guarantees that $\delta^f(v) \le 1/\sqrt[4]{2}$ holds. Furthermore, following Section 4.6 in Roos [6], one can show that $\bar{\kappa}(\zeta) \le \sqrt{2n}$. By substituting this bound into (4.38), we conclude that

$\theta = \frac{0.9426}{4\sqrt{2}\,n}. \qquad (4.39)$
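To make the derivation of (4.38)-(4.39) concrete, the following sketch recomputes the admissible threshold on $\|q\|$ and the resulting $\theta$. It assumes the bound $\bar{\kappa}(\zeta) = \sqrt{2n}$ used in the text; function names are ours, not the paper's.

```python
import math

def q_threshold():
    # Largest ||q|| with ||q||^2 + (||q|| + 1/8)^2 <= 4*(0.38)^2,
    # i.e. the positive root of 2q^2 + q/4 + 1/64 - 0.5776 = 0.
    a, b, c = 2.0, 0.25, 1.0 / 64 - 4 * 0.38**2
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

def theta(n):
    # alpha from (4.38) with kappa_bar(zeta) = sqrt(2n), then theta = alpha/(4n).
    kappa_bar = math.sqrt(2 * n)
    alpha = 2 * q_threshold() * math.sqrt(n) / kappa_bar  # ~ 0.9426*sqrt(n)/kappa_bar
    return alpha / (4 * n)

n = 1000
print(round(q_threshold(), 4))  # ~ 0.4713, the threshold quoted in the text
print(abs(theta(n) - 0.9426 / (4 * math.sqrt(2) * n)) < 1e-6)  # matches (4.39)
```

Note that $\theta$ scales as $1/n$, which is what drives the $O(nL)$ iteration bound in the next section.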

5. Iteration Bound

In the previous sections we have shown that if, at the start of an iteration, the iterates satisfy $\delta(x, s; \mu) \le \tau$ with $\tau = 1/16$, then after the feasibility step, with $\theta$ as defined in (4.39), the iterates satisfy $\delta(x^f, s^f; \mu^+) \le 1/\sqrt[4]{2}$.

According to (3.11), at most 4 centering steps then suffice to obtain iterates $(x^+, s^+)$ satisfying $\delta(x^+, s^+; \mu^+) \le \tau$. So each main iteration consists of at most 4 so-called 'inner' iterations, in each of which we need to compute a new search direction. In each main iteration both the duality gap and the norms of the residual vectors are reduced by the factor $1 - \theta$. Hence, using $(x^0)^T s^0 = n\zeta^2$, the total number of main iterations is bounded above by

$\frac{1}{\theta} \log \frac{\max\{n\zeta^2, \|r_b^0\|, \|r_c^0\|\}}{\varepsilon}.$

By using (4.39), the total number of inner iterations is thus bounded above by

$17\sqrt{2}\,n \log \frac{\max\{n\zeta^2, \|r_b^0\|, \|r_c^0\|\}}{\varepsilon}.$

6. Concluding Remarks

In this paper we presented an improved full-Newton step IIPM based on a kernel function. We benefit from the nice property of sharper quadratic convergence, which results in a wider neighborhood for the feasibility steps and makes the algorithm more stable. The iteration bound coincides with the currently best known iteration bound for IIPMs. Future research might focus on generalizations to other classes of optimization problems, such as second-order cone optimization, semidefinite optimization, and $P_*$-matrix linear complementarity problems.

References

[1] H. Mansouri and C. Roos. Simplified O(nL) infeasible interior-point algorithm for linear optimization using full-Newton steps. Optim. Methods Softw., 22:519-530, 2007.

[2] J. Peng, C. Roos, and T. Terlaky. Self-regular functions and new search directions for linear and semidefinite optimization. Math. Program., 2002.

[3] S. Mizuno. Polynomiality of infeasible-interior-point algorithms for linear programming. Math. Program., 67:109-119, 1994.

[4] G. Gu, H. Mansouri, M. Zangiabadi, Y.Q. Bai, and C. Roos. Improved full-Newton step O(nL) infeasible interior-point method for linear optimization. J. Optim. Theory Appl., 145:271-288, 2010.

[5] Y.Q. Bai, M. El Ghami, and C. Roos. A comparative study of kernel functions for primal-dual interior-point algorithms in linear optimization. SIAM J. Optim., 15:101-128, 2004.

[6] C. Roos. A full-Newton step O(n) infeasible interior-point algorithm for linear optimization. SIAM J. Optim., 16:1110-1136, 2006.

[7] C. Roos, T. Terlaky, and J.-Ph. Vial. Interior Point Methods for Linear Optimization. Springer, New York, 2006 (2nd edn. of Theory and Algorithms for Linear Optimization, Wiley, Chichester, 1997).

[8] Y. Ye. Interior Point Algorithms: Theory and Analysis. Wiley-Interscience Series in Discrete Mathematics and Optimization. Wiley, New York, 1997.

[9] Z. Liu, W. Sun, and F. Tian. A full-Newton step infeasible interior-point algorithm for linear programming based on a kernel function. Appl. Math. Optim., 60:237-251, 2009.

S. Bouali and S. Kabbaj

Department of Mathematics and Informatics, University Ibn Tofail, Faculty of Sciences, B.P. 133 Kenitra 14000, Morocco

E-mail: samir_bouali34@yahoo.fr, samkabbaj@yahoo.fr
Publication: International Journal of Computational and Applied Mathematics, March 1, 2012.
Copyright 2012 Research India Publications.