
Linear inequalities via least squares

1. INTRODUCTION

Linear inequalities are more complicated than linear equations: equations used as constraints can be transformed into linear inequalities by replacing each equation with an opposing pair of inequalities, but there is no way to transform a linear inequality into equations.

The elimination method for solving linear equations remained unknown in Europe until C. F. Gauss rediscovered it in the 19th century. Until recently, however, there was no computationally viable method for solving systems of linear constraints that include inequalities. Examples of linear constraints with inequalities began to appear in the published literature in the mid-18th century. In the 19th and early 20th centuries J. Fourier, J. Farkas, L. Kantorovich, and others initiated studies on solving such systems. The simplex method was invented by G. B. Dantzig in 1948 (see Dantzig and Thapa, 2003).

Today the applications of linear inequalities in almost all areas of science are so numerous and so well recognized that we need not repeat them here. Gale (1969) recommended using for inequalities the lexicographic variant of the simplex method of Dantzig et al. (1955).

The least-squares method is described thoroughly in the books by Lawson and Hanson (1995) and Bjorck (1996). Systems of linear inequalities are considered by Kuhn and Tucker (1956), Hu (1970), Chernikov (1971), and Papadimitriou and Steiglitz (1982). The present paper deals with the solution of a system of linear inequalities and related problems by the least-squares method (see Section 4). This highly developed method is much older than the simplex method and is used not only in mathematics but also in statistics, physics, etc. Many nonlinear problems are solved by composing a sequence of similar linear least-squares problems, each differing in a variable or a constraint. In this paper it is shown that the same idea can also be used for solving systems of linear inequalities and mathematical programming problems.

First of all, the least-squares method is recommended for degenerate and unstable problems. Orthogonal transformation matrices have a natural place in least-squares computations because they leave the Euclidean length of a vector invariant.

The basic problem used in this paper is the non-negative least-squares (NNLS) problem: minimize $\|Ex - b\|^2$ subject to $x \ge 0$ (see Section 2). This problem is equivalent to the phase I algorithm for the simplex method discussed by Leichner et al. (1993). Their algorithm solves least-squares subproblems and guarantees strict improvement on degenerate problems at each step. A similar algorithm based on the least-squares method was described by Ubi (1989).

Problems related to redundant inequalities and theorems of the alternative are considered in Section 3. In Section 4 a system of linear inequalities is solved using the least-squares method. Sections 5 and 6 present several ways of transforming linear and quadratic programming problems into systems of linear inequalities. For example, it is proposed to transform a multiobjective linear programming (LP) problem into a system of linear inequalities.

The exact methods for solving a system of linear inequalities can for the most part be categorized as either modified simplex-type methods or projection methods, the latter based on projections onto active sets of constraints. Projection methods are usually more efficient for systems of linear inequalities and related problems, and they require less storage than modified simplex-type methods. In this paper we present a projection-type dual algorithm for solving a system of linear inequalities (Section 4).

The main goal of this article is to show how the least-squares method can be applied to studying a system of linear inequalities.

2. ALGORITHMS FOR THE NON-NEGATIVE LEAST-SQUARES (NNLS) PROBLEM

Let us have an NNLS problem, an overdetermined or underdetermined system of linear equations

$Ex = b, \quad x \ge 0, \qquad (1)$

or

$\min\{\varphi(x) = 0.5\,\|Ex - b\|^2\}, \quad x \ge 0, \qquad (2)$

where E is an m x n matrix, $x \in \mathbb{R}^n$, $b \in \mathbb{R}^m$ (see Lawson and Hanson, 1995; Bjorck, 1996). These books pay the main attention to overdetermined systems, but in mathematical programming, where m < n, working with systems of normal equations is more labour-consuming.

Assumption 2.1. The columns of the matrix E and the vector b are unit vectors:

$\|E_j\| = 1, \quad j = 1, \ldots, n, \qquad \|b\| = 1.$

2.1. Seunghyun Kong's NNLS algorithm

Like the simplex algorithm, the NNLS algorithm of Kong (2007) maintains a basis B, a set of linearly independent columns of E used for solving a system of equations. A feasible basis B is a set of linearly independent columns of E for which the solution $x^*_B$ of

$\min \|b - B x_B\|^2 \qquad (3)$

has all positive components; the remaining components are $x_{E \setminus B} = 0$. Each major iteration of the NNLS algorithm solves

$\min \|b - B x_B - E_s x_s\|^2, \quad x_B \ge 0, \ x_s \ge 0, \qquad (4)$

with a feasible basis B. Each major iteration requires at least one minor iteration. A minor iteration is a repeated process (inside a major iteration) of solving the unconstrained minimization problem

$\min \|b - \bar{B} x_{\bar{B}} - E_s x_s\|^2, \qquad (5)$

where $\bar{B}$ is the subset of B that remains after dropping from B the columns corresponding to non-positive components of $x_B$. At each major iteration we choose a column $E_s$ with $(\rho, E_s) > 0$, where $\rho = b - B x_B$ is the residual. The NNLS algorithm starts with an empty basis B and $\rho = b$. Since the least-squares solution has no zero components, the solution improves strictly at each major iteration, so there is no degeneracy. The columns corresponding to zero and negative components in the solution are removed even though they are linearly independent of the other columns in B.

Algorithm NNLS: minimize $\|b - Ex\|^2$, $x \ge 0$.

1. Initialize $\rho^*_B = b$, $B = \emptyset$, $x^*_B = 0$, $x_{E \setminus B} = 0$;
2. while the index set $I := \{j : (\rho^*_B, E_j) > 0\} \ne \emptyset$ do
3. for $s \in I$ solve problem (4);
4. if the solution of (4) has non-positive components, then
5. solve problem (5) repeatedly, dropping columns, until all components of $x_B$ corresponding to $\bar{B}$ are positive;
6. end if;
7. $B = [B, E_s]$;
8. if $\rho^*_B = 0$, then stop: x is an optimal solution with $\min \|b - Ex\|^2 = 0$;
9. end if;
10. end while;
11. x is an optimal solution with $\min \|b - Ex\|^2 > 0$.

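For concreteness, the major/minor scheme can be sketched in a few lines of Python/NumPy. This is a minimal sketch in the spirit of problems (4) and (5), not Kong's implementation; the function name and tolerances are ours, and SciPy's scipy.optimize.nnls (a Lawson-Hanson implementation) can serve as a reference for checking results.

```python
import numpy as np

def nnls_active_set(E, b, tol=1e-10):
    """Minimize ||E x - b||^2 subject to x >= 0: a column with (rho, E_j) > 0
    enters the basis (major iteration); columns whose solution components
    become non-positive are dropped (minor iterations, cf. (4) and (5))."""
    m, n = E.shape
    x = np.zeros(n)
    basis = np.zeros(n, dtype=bool)      # current basis B as a column mask
    rho = b.astype(float).copy()         # residual rho = b - E x
    for _ in range(3 * n):               # finite bound on major iterations
        w = E.T @ rho                    # inner products (rho, E_j)
        w[basis] = -np.inf               # skip columns already in the basis
        if np.max(w) <= tol:             # no entering column: optimal
            break
        basis[np.argmax(w)] = True       # major iteration: column enters
        while True:                      # minor iterations
            z = np.zeros(n)
            z[basis] = np.linalg.lstsq(E[:, basis], b, rcond=None)[0]
            if z[basis].min() > tol:     # feasible basis: all components positive
                x = z
                break
            # step toward z as far as non-negativity allows, then drop columns
            drop = basis & (z <= tol)
            alpha = np.min(x[drop] / np.maximum(x[drop] - z[drop], 1e-30))
            x = x + alpha * (z - x)
            basis &= x > tol
        rho = b - E @ x
    return x, rho
```
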
2.2. The algorithm by Dantzig and Thapa

The method of Dantzig and Thapa (2003) is similar to the previous one; it is used to find the initial basis for the simplex method. Start with the basis

$B = E_s, \quad s = \arg\max_j (E_j, b)$

for the columns with

$(E_j, b) > 0.$

At the start of an iteration a basis B is given, with the corresponding components $x_B > 0$. If

$\|b - B x_B\|^2 > 0,$

then select an incoming column $E_s$ by solving the least-squares problem

$\min_{\alpha, \beta} \|b - \alpha B x_B - \beta E_s\|^2.$

Then, similarly to the previous algorithm, the inner cycle starts, during which a certain number of columns are eliminated from the basis B; the components of the vector $x_B$ corresponding to these columns are non-positive. At the same time, the sum of squares $\|\rho\|^2$ decreases at each step.

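The incoming-column test above amounts to a least-squares problem in only two variables. A minimal sketch (the helper name is ours):

```python
import numpy as np

def incoming_column_step(B, x_B, E_s, b):
    """Solve min over (alpha, beta) of ||b - alpha*(B x_B) - beta*E_s||^2,
    a two-column least-squares problem in the directions B x_B and E_s."""
    M = np.column_stack([B @ x_B, E_s])
    coef = np.linalg.lstsq(M, b, rcond=None)[0]
    alpha, beta = coef
    residual = b - M @ coef
    return alpha, beta, residual
```
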
2.3. The algorithm by Ubi

The third method for solving the problem (1) is described by Ubi (2010). It corresponds to the first version of the second method in Lawson and Hanson (1995, ch. 24). For the matrix E a QR transformation is computed, where Q is an orthogonal and R an upper triangular matrix. The order of the matrix R is one at the first iteration, two at the second iteration, and so on. Choose the starting point $x_0 = 0$; the working set of columns is empty. At each step either one variable $x_{j_0}$ is activated (the column $E_{j_0}$ forming the minimal angle with the residual $\rho = b - Ex$ is added to the triangular matrix R) or one variable $x_j \le 0$ (and its corresponding column of R) is removed. In the latter case all the columns of R corresponding to this and the following variables are replaced by the originals from the system (1). The Householder transformations that were performed before the variable $x_j$ are then applied to the replaced columns of R, and finally the replaced part is transformed back into triangular form, keeping the reflection normals in the subdiagonal part of R. The Householder transformations are memorized as products (see Lawson and Hanson, 1995). That textbook also presents a stability analysis of the algorithms; $mn^2 - n^3/3$ operations are needed for solving the least-squares problem (1).

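The column-append step can be sketched as follows, with the reflections memorized as normal vectors in the spirit of Lawson and Hanson (1995); the function names and storage layout are our assumptions, not the code of Ubi (2010).

```python
import numpy as np

def apply_reflectors(vs, a):
    """Apply the memorized Householder reflections (I - 2 v v^T) in order."""
    for v in vs:
        a = a - 2.0 * v * (v @ a)
    return a

def append_column(vs, a):
    """Append one column to an implicit QR factorization: apply the stored
    reflections, then build a new reflection that zeroes the column below
    the diagonal. Returns the nonzero part of the new column of R."""
    a = apply_reflectors(vs, np.asarray(a, dtype=float).copy())
    k = len(vs)                              # current order of R
    tail = a[k:]
    norm_tail = np.linalg.norm(tail)
    v = np.zeros_like(a)
    if norm_tail > 0.0:
        u = tail.copy()
        u[0] += (1.0 if tail[0] >= 0 else -1.0) * norm_tail
        v[k:] = u / np.linalg.norm(u)
    vs.append(v)                             # memorize the new reflection
    a = a - 2.0 * v * (v @ a)
    return a[:k + 1]
```
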
3. REDUNDANT INEQUALITIES AND THEOREMS OF THE ALTERNATIVE

Let us consider a system of linear inequalities

$Ax \le b, \qquad (6)$

where A and b are given matrices of dimensions m x n and m x 1, and x is an n x 1 vector of unknowns.

Assumption 3.1. The rows $a_i$ of the matrix A are non-zero vectors: $\|a_i\| \ne 0$, $i = 1, \ldots, m$.

Definition 3.1. An inequality $(a_i, x) \le b_i$ of the system $Ax \le b$ is called redundant if $(a_i, x) \le b_i$ is a valid inequality for the system obtained from $Ax \le b$ by removing the inequality $(a_i, x) \le b_i$.

Each inequality can be tested for redundancy by solving a linear programming problem, but in certain cases simpler rules can be applied (see Telgen, 1981; Cheng, 1987; Bazaraa et al., 2005). We shall study the application of the least-squares method to this problem.

Definition 3.2. We say that a system of inequalities $Ax \le b$ implies the inequality $(c, x) \le d$,

$Ax \le b \;\Rightarrow\; (c, x) \le d,$

if the inequality $(c, x) \le d$ is satisfied for all $x \in \{x : Ax \le b\}$, where c is a 1 x n vector.

Let us multiply $Ax \le b$ on the left by $y = (y_1, \ldots, y_m)$ and define

$c = yA, \quad y \ge 0. \qquad (7)$

If the inequality

$(y, b) \le d \qquad (8)$

is also satisfied, then

$Ax \le b \;\Rightarrow\; (c, x) \le d.$

It turns out that the converse statement is also true.

Lemma 3.1 (Farkas, 1902). The system $Ax \le b$ implies $(c, x) \le d$ if and only if the system (7) has a non-negative solution satisfying (8).

Let us compose an NNLS problem for the systems (7) and (8):

$yA = c,$

$(y, b) + y_{m+1} = d,$

$y \ge 0, \quad y_{m+1} \ge 0.$

This system has a non-negative solution if and only if $Ax \le b$ implies $(c, x) \le d$. The least-squares method can be used similarly in the case of Theorems 3.1-3.5 (see Hu, 1970; Dantzig and Thapa, 2003).

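The implication test therefore reduces to a single NNLS call: the implication holds exactly when the least-squares residual of the composed system vanishes. A minimal sketch, assuming SciPy's scipy.optimize.nnls (the helper name is ours):

```python
import numpy as np
from scipy.optimize import nnls

def implies(A, b, c, d, tol=1e-9):
    """Test Ax <= b  =>  (c, x) <= d: seek y >= 0, y_{m+1} >= 0 with yA = c
    and (y, b) + y_{m+1} = d; by Lemma 3.1 the implication holds iff the
    non-negative least-squares residual of this system is zero."""
    m, n = A.shape
    M = np.zeros((n + 1, m + 1))
    M[:n, :m] = A.T                 # yA = c, written column-wise as A^T y = c^T
    M[n, :m] = b                    # (y, b) + y_{m+1} = d
    M[n, m] = 1.0
    rhs = np.append(np.asarray(c, dtype=float).ravel(), float(d))
    _, rnorm = nnls(M, rhs)
    return rnorm <= tol
```

For instance, $x_1 \le 1$, $x_2 \le 1$ imply $x_1 + x_2 \le 2$, and implies(np.eye(2), [1, 1], [1, 1], 2) indeed returns True.
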
Theorem of the alternative 3.1 (Gale, 1969). The system $Ax \le b$ has no solution if and only if there is a vector y such that

$(y, b) = -1, \quad yA = 0, \quad y \ge 0. \qquad (9)$

Theorem 3.2. The system $Ax \le b$ has a solution if and only if

$yA = 0, \ y \ge 0 \;\Rightarrow\; (y, b) \ge 0.$

Theorem 3.3. Either the equation

$Ax = b, \quad b \ne 0,$

has a non-negative solution or the inequalities

$yA \ge 0, \quad (y, b) < 0$

have a solution.

Theorem 3.4. Either the equation

$Ax = 0$

has a solution $x > 0$ or

$\exists y: \ yA \ge 0, \ yA \ne 0.$

Theorem 3.5. Either the equation

$Ax = b, \quad b \ne 0,$

has a solution or the equations

$yA = 0, \quad (y, b) = t \ne 0$

have a solution.

4. SOLVING SYSTEMS OF LINEAR INEQUALITIES

4.1. Khachiyan's algorithm

This algorithm decides whether the system of inequalities $Ax \le b$ is consistent. Khachiyan (1980) assumed that the entries of A and b are all integers.

Each step of Khachiyan's algorithm determines a point $x^k$ and an ellipsoid $E(x^k, Q^k)$, where $Q^k$ is an n x n nonsingular matrix:

$E(x^k, Q^k) = \{x^k + Q^k y : y \in \mathbb{R}^n, \ \|y\| \le 1\}.$

At each step of the algorithm the associated ellipsoid contains a solution to the given system of linear inequalities, if one exists. The algorithm updates $x^k$ and $Q^k$ in such a way that the ellipsoid at the next step is smaller than that of the current step, while still being guaranteed to contain a solution to the given system of inequalities (if one exists). However, Khachiyan's algorithm is of little practical value for solving real-world systems of inequalities and LP problems. For a detailed rigorous treatment of the method we refer the reader to Bland et al. (1981) and Papadimitriou and Steiglitz (1982).

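For orientation, one central-cut update can be sketched as follows, written in terms of the positive-definite matrix $P = Q Q^T$ of the ellipsoid and a violated row a of the system (these are the standard update formulas for $n \ge 2$; Khachiyan's precision control and stopping rules are omitted, and the names are ours):

```python
import numpy as np

def ellipsoid_step(x, P, a):
    """One cut for a violated constraint (a, x) <= beta: replace E(x, P) by
    the smallest ellipsoid containing the half of E(x, P) on the feasible
    side of the hyperplane (a, z) = (a, x)."""
    n = x.size
    Pa = P @ a
    g = np.sqrt(a @ Pa)                        # scale of the cut direction
    x_new = x - Pa / (g * (n + 1.0))
    P_new = (n * n / (n * n - 1.0)) * (
        P - (2.0 / (n + 1.0)) * np.outer(Pa, Pa) / (g * g))
    return x_new, P_new
```
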
4.2. Gale's algorithm

A well-known approach to solving a system of linear inequalities $Ax \le b$ is to introduce artificial variables for the simplex method. Gale (1969) used a different method, based on Theorem 3.1 of the previous section. He formulated the LP problem

$\min\{z\},$

$z - (b, y) = 1, \qquad (10)$

$yA = 0, \quad z \ge 0, \quad y \ge 0.$

If $z_{\min} = 0$, then the system has no solution. In the opposite case $z_{\min} = 1$, and the optimal solution of the dual problem $Ax \le b$ is found from the optimal simplex table.

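A minimal sketch of (10) with an off-the-shelf LP solver (we call SciPy's scipy.optimize.linprog instead of Gale's simplex tableau, so the recovery of x from the optimal table is not reproduced here):

```python
import numpy as np
from scipy.optimize import linprog

def gale_consistent(A, b):
    """Gale's LP (10): min z s.t. z - (b, y) = 1, yA = 0, z >= 0, y >= 0.
    The system Ax <= b is consistent iff z_min > 0 (and then z_min = 1)."""
    m, n = A.shape
    cost = np.zeros(1 + m)
    cost[0] = 1.0                      # objective: min z
    A_eq = np.zeros((1 + n, 1 + m))    # variables ordered as (z, y)
    A_eq[0, 0] = 1.0
    A_eq[0, 1:] = -b                   # z - (b, y) = 1
    A_eq[1:, 1:] = A.T                 # yA = 0
    b_eq = np.zeros(1 + n)
    b_eq[0] = 1.0
    res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.status == 0 and res.fun > 1e-9
```
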
4.3. Least-squares method

Let us now consider finding a minimum norm solution to the system of linear inequalities $Ax \le b$:

$\min\{z = 0.5\,\|x\|^2\}, \quad Ax \le b. \qquad (11)$

Lawson and Hanson (1995) proved that the problem (11) is equivalent to the least-squares problem

$(b, u) = -1, \quad uA = 0, \quad u \ge 0 \qquad (12)$

or

$\min\{\varphi(u) = 0.5\,[(1 + (b, u))^2 + \|uA\|^2]\}, \quad u \ge 0, \qquad (13)$

where $u \in \mathbb{R}^m$. Let us compose the dual problem for (11):

$\min\{w = -0.5\,\|x\|^2 + (y, b - Ax)\}, \quad -x - A^T y = 0,$

$y_i (b_i - (a_i, x)) = 0, \quad i = 1, \ldots, m, \qquad (14)$

$y \ge 0.$

Theorem 4.1 (Cline). The relations

$\hat{x} = -A^T u / (1 + (b, u)), \qquad (15)$

$\hat{y} = u / (1 + (b, u)) \qquad (16)$

hold for the optimal solutions of the problems (11), (12), and (14).

Equation (15) was proved by Lawson and Hanson (1995); equation (16) follows from the constraints of the dual problem (14) and from equation (15). The vector x is the gradient of the objective function of (11); by equation (15) it is expressed as a linear combination of the constraint gradients with non-positive coefficients.

Example 4.1.

$x_1 - x_2 \le -1, \quad -x_2 \le -1. \qquad (17)$

The non-negative least-squares solution to the problem (12)

$-u_1 - u_2 = -1, \quad u_1 = 0, \quad -u_1 - u_2 = 0, \quad u \ge 0 \qquad (18)$

is $u = (0, 1/2)$, $x = (0, 1)^T$. In this example only the variable $u_2$ and the respective constraint $-x_2 \le -1$ are active. The first constraint is also tight but inactive. The least-squares method uses only positive variables, and a decrease of the objective function (13) is guaranteed at each step. The simplex method, by contrast, may stall in the degenerate case, performing a number of iterations at a degenerate vertex before producing any improvement in the objective value (see Leichner et al., 1993).

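The computation of Example 4.1 takes only a few lines with SciPy's scipy.optimize.nnls, stacking (13) as an ordinary NNLS problem and recovering x by formula (15):

```python
import numpy as np
from scipy.optimize import nnls

A = np.array([[1.0, -1.0],
              [0.0, -1.0]])       # system (17): x1 - x2 <= -1, -x2 <= -1
b = np.array([-1.0, -1.0])

# Problem (13) as an NNLS problem min ||M u - d||^2, u >= 0: the rows of M
# are b^T (for the term 1 + (b, u)) and A^T (for uA), d = (-1, 0, ..., 0).
M = np.vstack([b, A.T])
d = np.array([-1.0, 0.0, 0.0])
u, _ = nnls(M, d)                 # u = (0, 1/2)

x = -A.T @ u / (1.0 + b @ u)      # formula (15): x = (0, 1)
```
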
In Ubi (2007) a minimum norm solution to a system of linear inequalities is found, and results for some test problems are given.

5. TRANSFORMING LP PROBLEMS TO SYSTEMS OF INEQUALITIES

We consider the LP problem

$\min\{z = (c, x)\} \qquad (19)$

subject to

$Ax = b, \quad x \ge 0,$

and its dual

$\max\{w = (b, y)\}, \quad yA \le c, \qquad (20)$

where A, b, and c are given matrices of dimensions m x n, m x 1, and 1 x n, and x and y are n x 1 and 1 x m vectors of variables.

5.1. The least-squares method

If the primal and dual problems have solutions, then their optimal solutions satisfy the system of linear inequalities

$(c, x) \le (y, b),$

$Ax \ge b, \quad Ax \le b,$

$yA \le c, \quad x \ge 0, \quad y \ge 0.$

The solution of minimum norm to this system may be found by solving the respective NNLS problem (12).

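Assembling this optimality system is mechanical. A sketch (the helper name is ours); the resulting pair (G, g) plays the role of the system $Ax \le b$ in the NNLS problem (12):

```python
import numpy as np

def lp_optimality_system(A, b, c):
    """Write the optimality conditions of (19)-(20) as one system G z <= g
    in the combined variable z = (x, y)."""
    m, n = A.shape
    b = np.asarray(b, dtype=float)
    c = np.asarray(c, dtype=float)
    G = np.vstack([
        np.concatenate([c, -b])[None, :],       # (c, x) - (y, b) <= 0
        np.hstack([-A, np.zeros((m, m))]),      # -Ax <= -b, i.e. Ax >= b
        np.hstack([A, np.zeros((m, m))]),       #  Ax <= b
        np.hstack([np.zeros((n, n)), A.T]),     #  A^T y <= c^T, i.e. yA <= c
        -np.eye(n + m),                         #  x >= 0, y >= 0
    ])
    g = np.concatenate([[0.0], -b, b, c, np.zeros(n + m)])
    return G, g
```
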
5.2. Khachiyan's algorithm

Khachiyan (1980) proposed a polynomial-time algorithm for determining a solution to the LP problem (if one exists). It is based on the ellipsoid method, originally developed by N. Shor, D. Yudin, and A. Nemirovski (see Papadimitriou and Steiglitz, 1982) for convex nonlinear programming. Khachiyan was the first to show that linear programming belongs to the class P of polynomial-time solvable problems. He reduced an LP problem to solving a certain number of systems of linear inequalities. The ellipsoid method can be used efficiently for convex programming and for the theoretical analysis of combinatorial optimization problems, but its practical performance on linear programming is very close to its worst-case bound.

5.3. The initial solution by shifting in the dual problem

Assumption 5.1. The right-hand side of the problem (19) is a positive vector:

$b_i > 0, \quad i = 1, \ldots, m.$

When the primal-dual simplex method is used, it is usually presumed that a feasible solution to the dual problem is known. If a feasible solution is not readily available, then a non-negative variable $x_{n+1}$ is introduced and the equation $x_1 + x_2 + \ldots + x_{n+1} = b_{m+1}$, with $b_{m+1}$ a very large positive constant, is added to the constraints of (19). A feasible solution to the dual problem is then available with $y_1 = y_2 = \ldots = y_m = 0$, $y_{m+1} = \min_j c_j$. However, such an initial solution is usually very far from optimal. Because an initial solution is so easy to find, this problem has not been studied in detail earlier. Ubi (2005, 2010) makes a coordinate shift in the dual problem, which enables finding, via the least-squares method, an initial solution that may even turn out to be optimal.

Let us make the shift of coordinates

$y = y' + tb. \qquad (21)$

Example 5.1. Consider the dual problem

$\max\{w = 2y_1 + y_2 + (2/3) y_3\},$

$y_1 - y_2 - 2y_3 \le 2,$

$5y_1 + y_2 - y_3 \le 34/3,$

$5y_1 + 3y_2 + y_3 \le 17,$

$y_1 \le 2, \quad y_2 \le 2, \quad y_3 \le 2.$

In order to solve this problem, Harris (1975) changes the norms of the columns, and as a result the number of steps of the simplex method decreases. Here, with the shift t = 10, the new origin of coordinates is O' = (20, 10, 20/3) and the new right-hand side is c' = (16/3, -276/3, -359/3, -18, -8, -14/3). By solving the respective NNLS problem (12) we find the minimum norm solution y' = (-18, -25/3, -14/3) of the system $y'A \le c'$ and the corresponding solution of the initial dual problem, $y^* = y' + 10b = (2, 5/3, 2)$. This solution is optimal.

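The numbers of this example can be reproduced mechanically; below is a sketch with SciPy's scipy.optimize.nnls, using the pattern of problem (12) and the recovery formula (15) for the system $y'A \le c'$ (data transcribed from the example):

```python
import numpy as np
from scipy.optimize import nnls

# Columns of A carry the coefficients of the six dual constraints yA <= c.
A = np.array([[ 1.0, 5.0, 5.0, 1.0, 0.0, 0.0],
              [-1.0, 1.0, 3.0, 0.0, 1.0, 0.0],
              [-2.0, -1.0, 1.0, 0.0, 0.0, 1.0]])
c = np.array([2.0, 34/3, 17.0, 2.0, 2.0, 2.0])
b = np.array([2.0, 1.0, 2/3])

t = 10.0
c_shift = c - (t * b) @ A        # shift (21): c' = c - t (bA)
# c_shift = (16/3, -276/3, -359/3, -18, -8, -14/3), as in the text

# Minimum norm solution of A^T y' <= c' via the pattern of (12) and (15)
M = np.vstack([c_shift, A])
d = np.append(-1.0, np.zeros(3))
u, _ = nnls(M, d)
y_prime = -A @ u / (1.0 + c_shift @ u)   # expected (-18, -25/3, -14/3)
y_star = y_prime + t * b                 # optimal dual solution (2, 5/3, 2)
```
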
If the objective function of the dual problem is bounded and the shift is sufficiently large, then the optimal solution to the dual problem is the feasible solution of minimum norm (see Ubi, 2005). However, a scaling problem arises in the respective NNLS problem when t is too large. To avoid this, Ubi (2007) recommended using a "moderately large" shift to find the initial solution for the primal-dual simplex method.

Remark 5.1. In addition to yielding a better initial solution, the least-squares method is effective also in the case of a degenerate basis: strict improvement of the objective function is attained at each iteration, see Leichner et al. (1993).

5.4. The initial solution for the chosen value of the objective function

N. Karmarkar (see Papadimitriou and Steiglitz, 1982) assumed that the minimum of the objective function of the LP problem is known. Ubi (2010) assumed only knowledge of an estimate $z_0$ of this value. The parameter $z_0$ is easier to estimate if the problem (19) contains constraints of the form

$\mathrm{lower} \le x \le \mathrm{upper}.$

Assumption 5.2. The columns $A_j$ of the matrix A are non-zero vectors: $\|A_j\| \ne 0$, $j = 1, \ldots, n$.

Let us write for the dual problem the system of linear inequalities

$-(y, b) \le -z_0, \quad yA \le c, \qquad (22)$

which for the parameter $z_0 = z_{\min} = w_{\max}$ has a unique minimum norm solution y; in the general case this is one of the optimal solutions to the dual problem. Let us formulate the NNLS problem (12) corresponding to this system of inequalities:

$-u_0 z_0 + (c, u) = -1,$

$-u_0 b + Au = 0,$

$u_0 \ge 0, \quad u = (u_1, \ldots, u_n)^T \ge 0. \qquad (23)$

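A sketch of assembling and solving (23), again with SciPy's scipy.optimize.nnls (the helper name is ours; a zero residual signals an exact solution in the sense of Theorem 5.1 below):

```python
import numpy as np
from scipy.optimize import nnls

def solve_problem_23(A, b, c, z0):
    """Least-squares solution of (23): unknowns (u0, u) >= 0 with
    -u0*z0 + (c, u) = -1 and -u0*b + A u = 0."""
    m, n = A.shape
    M = np.zeros((1 + m, 1 + n))
    M[0, 0] = -z0
    M[0, 1:] = c
    M[1:, 0] = -np.asarray(b, dtype=float)
    M[1:, 1:] = A
    rhs = np.append(-1.0, np.zeros(m))
    w, rnorm = nnls(M, rhs)
    return w[0], w[1:], rnorm      # u0, u, residual norm (0 for exact solutions)
```
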
In Ubi (2010) the proofs of the following two theorems are given.

Theorem 5.1. If the primal problem (19) has an unbounded optimal solution, then the least-squares problem (23) has for every $z_0$ an exact solution u, $\varphi(u) = 0$.

Remark 5.2. If the LP problem (19) is infeasible, then for every $z_0$ the system of linear inequalities (22) is consistent. We solve the problem (23) and obtain by equation (15) a feasible solution to the dual problem (20).

Assumption 5.3. Let us assume that the primal problem (19) and the dual problem (20) have optimal solutions.

Case I: for the chosen parameter $z_0$ the inequality

$z_0 > z_{\min} \qquad (24)$

holds.

Theorem 5.2. In the case of inequality (24), the problem (23) has an exact solution $u_0, u$, with the corresponding $\hat{x}$,

[mathematical expression not reproducible].

Case II:

$z_0 \le z_{\min}. \qquad (25)$

Having found the least-squares solution of the problem (23), we can use equation (15) to calculate a feasible solution to the dual problem (20) whose objective value is not less than $z_0$.

In linear programming it is often sufficient to have a solution for which the objective function of the initial problem is sufficiently small or the objective function of the dual problem is sufficiently large. For example, for an appropriate $w_0$ let us solve the problem

$(y, b) \ge w_0, \quad yA \le c$

using the least-squares method. The same approach can be applied in goal programming, where each objective function can be replaced by a constraint.

If the LP problem is degenerate or ill-conditioned, or no initial basis is available, then it is expedient to start the solution process with the least-squares problem (23).

6. SOLVING QUADRATIC PROGRAMMING PROBLEMS

Hildreth (1957) proposed a quadratic programming algorithm which solves the dual problem of minimizing a quadratic function over the positive orthant. In this section we describe two methods for the strictly convex quadratic programming problem, based on Hildreth's method and on the least-squares method.

6.1. Iterative row-action method

Let us have a quadratic programming problem

$\min\{0.5\,(y, By) + (d, y)\}, \quad Gy \le h, \qquad (26)$

where B is a positive-definite n x n matrix, G is an m x n matrix, $y \in \mathbb{R}^n$, $d \in \mathbb{R}^n$, $h \in \mathbb{R}^m$. Using the Cholesky decomposition $B = D^T D$ and the change of variables $y = D^{-1} x - B^{-1} d$, the problem (26) is transformed into the problem (11) of finding the minimum norm solution of a system of linear inequalities,

$\min\{z = 0.5\,\|x\|^2\}, \quad Ax \le b, \qquad (27)$

where $A = G D^{-1}$ and $b = h + G B^{-1} d$.

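A sketch of this reduction (explicit inverses are used for brevity; a careful implementation would use triangular solves, e.g. scipy.linalg.solve_triangular):

```python
import numpy as np

def qp_to_min_norm(B, d, G, h):
    """Reduce min 0.5(y, By) + (d, y) s.t. Gy <= h (B positive definite) to
    min 0.5||x||^2 s.t. Ax <= b via B = D^T D and y = D^{-1} x - B^{-1} d."""
    L = np.linalg.cholesky(B)      # B = L L^T, so D = L^T
    D = L.T
    B_inv_d = np.linalg.solve(B, d)
    A = G @ np.linalg.inv(D)       # A = G D^{-1}
    b = h + G @ B_inv_d            # b = h + G B^{-1} d
    return A, b, D, B_inv_d

def recover_y(x, D, B_inv_d):
    """Map the minimum norm solution x of (27) back to the QP variable y."""
    return np.linalg.solve(D, x) - B_inv_d
```
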
Lent and Censor (1980) presented an iterative relaxation procedure for the problem (27), an extension of Hildreth's quadratic programming algorithm specially designed for approximating the minimum-norm element of a polyhedral convex set. They also discussed the choice of the relaxation parameter. The algorithm is suitable for handling large (or huge) sparse problems, because no changes are made in the original matrix: at each iteration access is required to only one row of the matrix (see Censor and Zenios, 1997).

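A minimal sketch of such a row-action sweep for the problem (27), with a fixed number of sweeps and a relaxation parameter omega in (0, 2); the convergence test is omitted:

```python
import numpy as np

def hildreth_min_norm(A, b, sweeps=200, omega=1.0):
    """Row-action iteration for min 0.5||x||^2 s.t. Ax <= b: cyclic coordinate
    descent on the dual multipliers lam >= 0, keeping x = -A^T lam in sync.
    Each update needs access to only one row of A."""
    m, n = A.shape
    lam = np.zeros(m)
    x = np.zeros(n)
    row_norm2 = np.einsum('ij,ij->i', A, A)   # squared row norms ||a_i||^2
    for _ in range(sweeps):
        for i in range(m):
            step = omega * (b[i] - A[i] @ x) / row_norm2[i]
            new_lam_i = max(0.0, lam[i] - step)
            x -= (new_lam_i - lam[i]) * A[i]  # maintain x = -A^T lam
            lam[i] = new_lam_i
    return x, lam
```
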
6.2. The least-squares method

The problem (26) is transformed into the NNLS problem (12) and solved using the algorithm QPLS. The QR transformation is computed, and the orthogonal transformations are memorized as products. Assuming $g_j \le 0$, $j = 1, \ldots, m$, the equality constraints $Cx = g$ are written in the form $Cx \le g$. Numerical testing on some difficult problems presented in Ubi (2008) indicates that the algorithm QPLS gives excellent accuracy.

7. CONCLUSION

The highly developed least-squares technique is used in many areas of science. The method can be applied to a system of linear inequalities by transforming the system into a non-negative least-squares (NNLS) problem. Three finite algorithms for solving the NNLS problem are described, based on the QR decomposition of the coefficient matrix. The NNLS problem can be formulated also for large problems, because the matrix A of the system $Ax \le b$ is not transformed.

Redundant inequalities, theorems of the alternative, and linear and quadratic programming problems are studied via the NNLS problem. Numerical testing on some difficult problems indicates that the least-squares method gives excellent accuracy.

The disadvantage of the proposed least-squares method for solving systems of linear inequalities is that it does not treat simple bounds on the variables directly.

Evald Ubi received his PhD in mathematical cybernetics from the Kiev Institute of Cybernetics of the Ukrainian Academy of Sciences in 1978. Currently he is Associate Professor at the Chair of Mathematical Economics at Tallinn University of Technology, Estonia. His main research interests are systems of linear inequalities; linear, quadratic, and stochastic programming; and the method of least squares.

doi: 10.3176/proc.2013.4.04

ACKNOWLEDGEMENT

The author is grateful to referees for constructive reviews. This study was financed by the Estonian Science Foundation (grant 8760).

REFERENCES

Bazaraa, M. S., Jarvis, J. J., and Sherali, H. D. 2005. Linear Programming and Network Flows. John Wiley and Sons, New Jersey.

Bjorck, A. 1996. Numerical Methods for Least-Squares Problems. SIAM, Philadelphia.

Bland, R., Goldfarb, D., and Todd, M. 1981. The ellipsoid method: a survey. Oper. Res., 29, 1039-1091.

Censor, Y. and Zenios, S. A. 1997. Parallel Optimization: Theory, Algorithms, and Applications. Oxford University Press.

Cheng, M. C. 1987. General criteria for redundant and nonredundant linear inequalities. J. Optim. Theory Appl., 1, 37-42.

Chernikov, S. 1971. Lineare Ungleichungen. Deutscher Verlag der Wissenschaften, Berlin.

Dantzig, G. and Thapa, M. 2003. Linear Programming 2: Theory and Extensions. Springer, New York.

Dantzig, G., Orden, A., and Wolfe, P. 1955. Generalized simplex method for minimizing a linear form under linear inequality restraints. Pacif. J. Math., 5, 183-195.

Farkas, J. 1902. Über die Theorie der einfachen Ungleichungen. J. Reine Angew. Math., 124, 1-24.

Gale, D. 1969. How to solve linear inequalities? Amer. Math. Monthly, 6, 589-599.

Harris, P. 1975. Pivot selection methods of the DEVEX LP Code. Math. Prog. Study, 4, 30-57.

Hildreth, C. G. 1957. A quadratic programming procedure. Nav. Res. Logist. Qu., 4, 37-43.

Hu, T. C. 1970. Integer Programming and Network Flows. Addison-Wesley Publishing Company, Massachusetts.

Khachiyan, L. G. 1980. A polynomial algorithm in linear programming. Soviet Math. Dokl., 20, 191-194.

Kong, S. 2007. Programming Algorithms Using Least-Squares Method. PhD thesis, Georgia Institute of Technology.

Kuhn, H. and Tucker, W. (eds). 1956. Linear Inequalities and Related Systems. Princeton University Press.

Lawson, C. L. and Hanson, R. J. 1995. Solving Least Squares Problems. SIAM, Philadelphia.

Leichner, S., Dantzig, G., and Davis, J. 1993. A strictly improving linear programming phase I algorithm. Ann. Oper. Res., 47, 409-430.

Lent, A. and Censor, Y. 1980. Extensions of Hildreth's row-action method for quadratic programming. SIAM J. Control Optim., 18, 444-454.

Papadimitriou, C. H. and Steiglitz, K. 1982. Combinatorial Optimization: Algorithms and Complexity. Prentice-Hall, New Jersey.

Telgen, J. 1981. Redundancy and Linear Programs. Mathematical Center, Amsterdam.

Ubi, E. 1989. Least-squares method in mathematical programming. Proc. Estonian Acad. Sci., 4, 423-432.

Ubi, E. 2005. Exact and stable least-squares solution to the linear programming problem. Cent. Eur. J. Math., 3, 228-241.

Ubi, E. 2007. On stable least-squares solution to the system of linear inequalities. Cent. Eur. J. Math., 5, 373-385.

Ubi, E. 2008. A numerically stable least-squares solution to the quadratic programming problem. Cent. Eur. J. Math., 6, 171-178.

Ubi, E. 2010. Mathematical programming via the least-squares method. Cent. Eur. J. Math., 8, 795-806.

Evald Ubi

Department of Economics, Tallinn University of Technology, Akadeemia tee 3, 12618 Tallinn, Estonia; Evald.ubi@ttu.ee

Received 13 September 2012, revised 16 January 2013, accepted 25 February 2013, available online 19 November 2013