
A game theoretic approach for solving multiobjective linear programming problems

Irinel Dragan (Libertas Mathematica, 2010)

1 The Multiobjective Linear Programming Problems and the Generalized Nucleolus

Consider the multiobjective linear programming (MOLP) problem, defined by a feasible set

$$\Omega = \{x \in \mathbb{R}^n : Ax \le b,\ x \ge 0\}, \qquad (1.1)$$

and a set of p linear functions to be minimized

$$U = \{u_k(x) : u_k(x) = \alpha_k^T x + \beta_k,\ k = 1, \ldots, p\}, \qquad (1.2)$$

where $A$ is an $m \times n$ matrix, $b \in \mathbb{R}^m$, $\alpha_k \in \mathbb{R}^n$, and $\beta_k \in \mathbb{R}$, for $k = 1, \ldots, p$. It is well known that, in general, a vector $x$ minimizing one of the functions in $U$ does not minimize the others. Therefore, a concept of solution should be defined for a problem like (1.1) and (1.2) with $p \ge 2$.

In Game Theory, if $(N, v)$ is a cooperative $n$-person game with transferable utilities (TU game), that is, $N$ is a finite set with $|N| = n$ and $v$ is a function defined on the set of subsets of $N$ with $v(\emptyset) = 0$, then any $x \in \mathbb{R}^n$ satisfying

$$\sum_{i \in N} x_i = v(N), \qquad x_i \ge v(\{i\}),\ \forall i \in N, \qquad (1.3)$$

is an efficient and coalitionally rational outcome of the game, called an imputation (see [6]). For each coalition $S$, the excess of $S$ associated with an imputation $x$ is $e(S, x) = v(S) - x(S)$, where as usual $x(S) = \sum_{i \in S} x_i$; for the set of all coalitions we have a set of $p = 2^n - 2$ linear functions

$$E = \{e(S, x) : e(S, x) = v(S) - x(S),\ S \subset N,\ S \neq \emptyset\}. \qquad (1.4)$$

Obviously, the grand coalition, which always has excess zero due to (1.3), is missing, and the subset $S = \emptyset$, which is not considered a coalition, is also missing. The larger the excess of some coalition $S$, the unhappier the coalition, because the smaller is the total gain $x(S)$ of the coalition. Therefore, on the set of imputations, we are looking for outcomes which minimize the excesses of the coalitions. We recognize that (1.3) and (1.4) form a multiobjective linear programming problem, in which (1.3) has the form (1.1) and the set $E$ is similar to (1.2). Indeed, in (1.3) we get the form (1.1) if we take as new variables the differences $x_i - v(\{i\})$, and in (1.4) the vectors $\alpha_k$ are the characteristic vectors of the coalitions, giving functions of the form (1.2).
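To make the correspondence concrete, the following minimal Python sketch lists the $p = 2^n - 2$ excess functions of a 3-person TU game, which are exactly the objectives (1.4) of the associated MOLP problem; the game $v$ below is hypothetical, chosen only for illustration.

```python
# Excess functions of a hypothetical 3-player TU game, viewed as MOLP data.
from itertools import combinations

N = [1, 2, 3]
v = {(1,): 0, (2,): 0, (3,): 0,
     (1, 2): 4, (1, 3): 3, (2, 3): 2, (1, 2, 3): 6}

# Proper nonempty coalitions: all S with 0 < |S| < |N|, so p = 2**3 - 2 = 6.
coalitions = [S for r in (1, 2) for S in combinations(N, r)]

def excess(S, x):
    """e(S, x) = v(S) - x(S), where x(S) sums the payoffs of members of S."""
    return v[S] - sum(x[i - 1] for i in S)

# An imputation: efficient (sums to v(N)) and individually rational.
x = [3.0, 2.0, 1.0]
assert sum(x) == v[(1, 2, 3)] and all(x[i - 1] >= v[(i,)] for i in N)

for S in coalitions:
    print(S, excess(S, x))   # six linear functions of x, one per coalition
```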

In [8], D. Schmeidler has introduced the nucleolus of a TU game as follows: for each imputation $x$, consider the vector $\theta(x)$ of the excesses taken in nonincreasing order; then, the nucleolus of the game is the set $Nu(N, v)$ defined by

$$Nu(N, v) = \{x' \in I : \theta(x') \le_L \theta(x),\ \forall x \in I\}, \qquad (1.5)$$

where $\le_L$ is the lexicographic ordering of the vectors of excesses and $I$ is the set of imputations. A full description of the topic can be found in [6].

In [3], M. Justman has introduced the generalized nucleolus associated with a set of linear constraints (1.1) and a set of linear functions (1.2) to be minimized, as follows: for each feasible solution $x$ of (1.1), consider the vector $\theta(x)$ of the values of the linear functions (1.2) taken in nonincreasing order; then the generalized nucleolus is the set $Nu(\Omega, U)$ defined by

$$Nu(\Omega, U) = \{x' \in \Omega : \theta(x') \le_L \theta(x),\ \forall x \in \Omega\}. \qquad (1.6)$$
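Operationally, $\theta(x)$ sorts the objective values in nonincreasing order, and $\le_L$ compares the sorted vectors lexicographically. A minimal Python sketch (the helper names are ours, not from [3]):

```python
# The ordering behind (1.5)-(1.6): sort values nonincreasingly, compare lexically.
def theta(values):
    """Objective values u_1(x), ..., u_p(x) sorted in nonincreasing order."""
    return sorted(values, reverse=True)

def lex_leq(a, b):
    """a <=_L b : a equals b or precedes it in the lexicographic order."""
    for ai, bi in zip(a, b):
        if ai != bi:
            return ai < bi
    return True

# x' is in the generalized nucleolus iff lex_leq(theta(u(x')), theta(u(x)))
# holds for every feasible x; e.g. values (3, 1, 0) beat (3, 2, -5):
print(lex_leq(theta([1, 0, 3]), theta([3, 2, -5])))  # True
```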

A comparison of the properties of the generalized nucleolus and the nucleolus follows the definition in [3]. The following result from [3] is straightforward:

If $\Omega$ is a nonempty compact set and $U$ is a set of continuous functions defined on $\Omega$, then $Nu(\Omega, U) \neq \emptyset$.

Note that these conditions hold in the case of a TU game, so the nucleolus does exist in that case; moreover, another result proved by Schmeidler establishes the uniqueness of the nucleolus of a TU game.

Note also that the interpretation of the generalized nucleolus as the point(s) of minimal unhappiness of the most unhappy player, the second most unhappy player, etc., is an argument showing that the generalized nucleolus is a "good" concept of solution for multiobjective problems. Another argument leading to the same conclusion is offered by applying the following theorem to the case of multiobjective problems:

Theorem 1.1 If $\Omega$ is a nonempty compact set, an intersection of half spaces defined in (1.1), and $U$ is a set of linear functions (1.2) defined on $\Omega$, then any point of the generalized nucleolus is a Pareto optimum point of $\Omega$ relative to $U$.

Proof: Recall that a point $x^* \in \Omega$ is a Pareto optimum point of $\Omega$, relative to the set of functions $U$, when there does not exist a point $x \in \Omega$ such that

$$u_k(x) \le u_k(x^*), \quad k = 1, \ldots, p, \qquad (1.7)$$

and

$$u_l(x) < u_l(x^*), \quad \text{for some } l \in \{1, \ldots, p\}. \qquad (1.8)$$

Suppose that $x^* \in Nu(\Omega, U)$ is not a Pareto optimum point of $\Omega$; then, there is a point $x \in \Omega$ such that (1.7) and (1.8) hold. We can assume, without loss of generality, that $\theta_k(x) = u_k(x)$ for all $k = 1, \ldots, p$, that is, the numbering of the functions was chosen in such a way that we have

$$u_1(x) \ge u_2(x) \ge \cdots \ge u_p(x). \qquad (1.9)$$

Consider the $p$-vector $\eta(x^*) = (u_k(x^*))$, whose coordinates are the values of the already numbered functions at the point $x^*$. We cannot claim that $\theta_k(x^*) = \eta_k(x^*)$ for every $k$, since the numbering was chosen for $x$, not for $x^*$. According to our hypotheses (1.7) and (1.8), we get $\theta_k(x) \le \eta_k(x^*)$, $k = 1, \ldots, p$, and $\theta_l(x) < \eta_l(x^*)$ for some $l \in \{1, \ldots, p\}$. As we have

$$\theta_1(x) \le \eta_1(x^*) \le \max_k u_k(x^*) = \theta_1(x^*), \qquad (1.10)$$

the inequality $\theta_1(x) \le \theta_1(x^*)$ holds. But $x^* \in Nu(\Omega, U)$ and $x \in \Omega$ show that the strict inequality $\theta_1(x) < \theta_1(x^*)$ cannot be true; hence, by (1.10), $\theta_1(x) = \eta_1(x^*) = \theta_1(x^*)$. If we consider the pair $\theta_2(x)$ and $\eta_2(x^*)$ and repeat the reasoning, we get $\theta_2(x) = \eta_2(x^*) = \theta_2(x^*)$. By induction, all the inequalities $\theta_k(x) \le \eta_k(x^*)$, $k = 1, \ldots, p$, are satisfied as equalities. This contradicts the hypothesis $\theta_l(x) < \eta_l(x^*)$ for some index $l \in \{1, \ldots, p\}$, and the theorem follows.

The meaning of Theorem 1.1 is that by taking any point in the set $Nu(\Omega, U)$ as a solution of the MOLP problem, we are selecting, within the set of Pareto optimum points, a point with supplementary properties.

2 Finding the generalized nucleolus as a solution for an MOLP problem

The main idea of the algorithm presented below for solving MOLP problems is that of replacing, in each step, the given problem by a new MOLP problem in which some of the objective functions are moved into the constraints defining the new feasible set, equated to some constants; recall that the same idea was used by Kopelowitz in his algorithm for computing the nucleolus (see [4] and [6]). In this way, the number of objective functions is reduced in every step of the algorithm, until the objective functions are exhausted. The algorithm keeps the generalized nucleolus of the initial feasible set inside the successive feasible sets, and one proves that the final feasible set is exactly the generalized nucleolus, i.e. the solution of the MOLP problem. Moreover, one point in the generalized nucleolus is also obtained by the algorithm. The method was suggested by [4] and [5]. The results needed to justify the algorithm are presented in this section, while the algorithm is stated in the next section.

Associated with the MOLP problem (1.1) and (1.2), consider the linear programming problem (P):

$$\min\{t : x \in \Omega,\ u_k(x) \le t,\ k = 1, \ldots, p\}. \qquad (2.1)$$

As the second group of restrictions can be written as $\max_k u_k(x) \le t$, it is clear that if $\Omega \neq \emptyset$, the problem (P) has feasible solutions, and vice versa. Thus, the case $\Omega = \emptyset$, which makes the MOLP problem meaningless, is detected when solving the problem (P). Otherwise, if $\Omega$ is a nonempty compact set, the problem (P) has an optimal solution. The crucial point in the construction of a primal method for solving the MOLP problem is the following:

Lemma 2.1 If $\Omega$ is a nonempty compact set and $t^*$ is the optimal value of (P), then there is an integer $p^*$ $(1 \le p^* \le p)$ and a set of indices $\{k_1, \ldots, k_{p^*}\} \subseteq \{1, \ldots, p\}$, such that for any optimal solution of (P), say $(x^*, t^*)$, we have

$$u_{k_i}(x^*) = t^*, \quad i = 1, \ldots, p^*. \qquad (2.2)$$

Proof: As $t^*$ is the optimal value of (P), suppose that for each $k \in \{1, \ldots, p\}$ there is an optimal solution $(x^k, t^*)$ such that $u_k(x^k) < t^*$, where the vectors $x^k \in \Omega$, $k = 1, \ldots, p$, need not be distinct. We want to show that this cannot be true. Consider the vector

$$x = \frac{1}{p} \sum_{k=1}^p x^k, \qquad (2.3)$$

which is a point in $\Omega$, because $\Omega$ is a convex set. For any $j \in \{1, \ldots, p\}$, we have

$$u_j(x^k) \le t^*,\ \text{if } j \neq k, \qquad u_k(x^k) < t^*. \qquad (2.4)$$

From (2.3), by the linearity of the objective functions, we get $u_j(x) < t^*$ for all $j \in \{1, \ldots, p\}$; indeed, $u_j(x) = \frac{1}{p}\sum_{k=1}^p u_j(x^k)$, and by (2.4) the term with $k = j$ is strictly below $t^*$. This says that $(x, \max_j u_j(x))$ is a feasible solution of (P) which gives the objective function a value lower than the optimal value $t^*$. The contradiction proves the lemma.

Note that the lemma says that by adding to the constraints of problem (P) the equations $u_{k_i}(x) = t^*$, $i = 1, \ldots, p^*$, no optimal solution is lost. This is exactly what we want to do in order to find the generalized nucleolus: the elements $x^*$ of the generalized nucleolus will later be proved to form pairs $(x^*, t^*)$ belonging to the set of optimal solutions of our problem (P).

Obviously, we should explain how the indices of the functions appearing in the equations (2.2) can be found; this is done at the end of this section. Notice that the proof of the lemma uses the linearity of the objective functions, so the extension to the nonlinear case is a genuine problem.

Example 2.2 Consider the MOLP problem

Minimize $u_1(x) = 2x + y$, $u_2(x) = x - 2y$,

on the set $\Omega$, defined by the system of linear inequalities

$$x - y \le 1, \quad -x + y \le 2, \quad x + y \le 3, \quad x + y \ge 1, \quad x \ge 0, \quad y \ge 0.$$

Note that $t$ is an unconstrained variable. The problem (P) shown in (2.1) is: Minimize $t$, subject to

$$x - y \le 1, \quad -x + y \le 2, \quad x + y \le 3, \quad x + y \ge 1, \quad 2x + y \le t, \quad x - 2y \le t, \quad x \ge 0, \quad y \ge 0.$$

We can compute an optimal solution for this problem: $x^* = 0$, $y^* = 1$, $t^* = 1$. One may check feasibility directly, and optimality by writing the dual problem and the complementarity conditions. Moreover, as will be explained at the end of the section, we shall be able to determine that for any optimal solution of (P) we have $2x + y = 1$; hence, $p^* = 1$, and this equation should be added to the constraints of problem (P), to form a new MOLP problem.
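As a numerical cross-check of this example, here is a sketch that solves (P) with SciPy's linprog (an assumed dependency, not part of the original paper), treating $(x, y, t)$ as the variables with $t$ free:

```python
# Sketch: problem (P) of Example 2.2 as an LP over (x, y, t), t unbounded.
import numpy as np
from scipy.optimize import linprog

c = np.array([0.0, 0.0, 1.0])          # minimize t
A_ub = np.array([
    [ 1.0, -1.0,  0.0],                #  x - y <= 1
    [-1.0,  1.0,  0.0],                # -x + y <= 2
    [ 1.0,  1.0,  0.0],                #  x + y <= 3
    [-1.0, -1.0,  0.0],                #  x + y >= 1, written as -x - y <= -1
    [ 2.0,  1.0, -1.0],                # u1 = 2x + y <= t
    [ 1.0, -2.0, -1.0],                # u2 =  x - 2y <= t
])
b_ub = np.array([1.0, 2.0, 3.0, -1.0, 0.0, 0.0])
bounds = [(0, None), (0, None), (None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x, res.fun)   # expected optimum: x* = 0, y* = 1, t* = 1
```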

Note that in the above problem the optimal solution is unique; therefore, there is no need to form a new MOLP problem and solve it. In general, there are several optimal solutions of problem (P), so we should form the second MOLP problem and solve it. If the optimal solution is unique, then for this solution we can compute the corresponding vector $\theta(x^*)$, whose entries are the optimal values of $t$ for the subsequent problems. Now, suppose that by some method a set of indices $\Pi$ has been found, with $|\Pi| = p^*$, $1 \le p^* \le p$, such that for any optimal solution $(x^*, t^*)$ of (P), we have

$$u_k(x^*) = t^*, \quad \forall k \in \Pi, \qquad (2.5)$$

and two cases may arise: either $p^* = p$ or $p^* < p$. Obviously, if $p^* < p$, then besides (2.5) we have the inequalities

$$u_k(x^*) < t^*, \quad \forall k \notin \Pi. \qquad (2.6)$$

Theorem 2.4 discusses the first case, which provides the stopping criterion of the algorithm, while Lemma 2.5 discusses the second case and leads to the algorithm further justified by Theorem 2.6. The following result, a general property of any element in the generalized nucleolus, is valid in both cases, $p^* = p$ and $p^* < p$.

Lemma 2.3 If $\Omega$ is a nonempty compact set, $t^*$ is the optimal value of problem (P), and $p^* \le p$ is the integer just defined, then for any element $x^* \in Nu(\Omega, U)$, the pair $(x^*, t^*)$ is an optimal solution of problem (P), and we have

$$\theta_1(x^*) = \cdots = \theta_{p^*}(x^*) = t^*. \qquad (2.7)$$

Proof: Let $(x', t^*)$ be an optimal solution of (P); then, $x' \in \Omega$ and by (1.6) we have $\theta(x^*) \le_L \theta(x')$. Thus, $\theta_1(x^*) \le \theta_1(x')$, and according to the definition of $p^*$, we get $\theta_1(x') = t^*$, that is, $\theta_1(x^*) \le t^*$. This inequality and the definition of $\theta(x^*)$ imply $u_k(x^*) \le t^*$ for all $k = 1, \ldots, p$, so that $(x^*, t^*)$ is an optimal solution of problem (P). Obviously, the maximal value of the objectives at $x^*$ is $t^*$, otherwise $t^*$ would not be the optimal value. The equalities (2.7) follow from the definition of $p^*$.

Note that by the lemma, no element of the generalized nucleolus is lost by adding (2.5) to the set of constraints. Therefore, in the second case, $p^* < p$, we should further consider the new MOLP problem of minimizing the objectives $u_k(x)$, $k \notin \Pi$, on the new feasible set

$$\Omega^* = \{x \in \Omega : u_k(x) = t^*,\ \forall k \in \Pi\}, \qquad (2.8)$$

where $t^*$ is the optimal value of (P) and $\Pi$ is defined by (2.5) and (2.6).

Now, consider the new set

$$S_{OPT}(P) = \{x \in \Omega : (x, t^*) \text{ is an optimal solution of (P)}\} \subseteq \Omega^*. \qquad (2.9)$$

Lemma 2.3 proves the inclusion and shows that $Nu(\Omega, U) \subseteq S_{OPT}(P)$; hence, from (2.9) it follows that $Nu(\Omega, U) \subseteq \Omega^*$. Recall that this result holds in both cases $p^* = p$ and $p^* < p$; but we shall now prove that in the first case we have $S_{OPT}(P) = \Omega^*$.

Theorem 2.4 If $\Omega$ is a nonempty compact set, $t^*$ is the optimal value of (P), and for any optimal solution $(x^*, t^*)$ of (P) we have $u_k(x^*) = t^*$ for $k = 1, \ldots, p$, then $Nu(\Omega, U) = \Omega^*$.

Proof: As we already remarked, we have $Nu(\Omega, U) \subseteq S_{OPT}(P)$. Suppose that there is $x' \in S_{OPT}(P)$ such that $x' \notin Nu(\Omega, U)$. Then, there is $x^* \in Nu(\Omega, U)$ such that $\theta(x^*) \le_L \theta(x')$, with strict inequality in some coordinate. But from the hypothesis we get $\theta_k(x') = t^*$ for all $k = 1, \ldots, p$, and likewise $\theta_k(x^*) = t^*$ for all $k = 1, \ldots, p$. Hence, $\theta(x') = \theta(x^*)$, and the contradiction proves the theorem.

Theorem 2.4 shows that in the case $p^* = p$ we have $Nu(\Omega, U) = \Omega^*$, and the MOLP problem is solved by determining the solution set. One solution is already available at the end of the computation needed to solve the problem (P), so a point in $Nu(\Omega, U)$ is at hand. If more solutions are needed, this set may be further explored. Theorem 2.4 is the stopping criterion of the method; this is the case described earlier by saying that the objective functions have been exhausted.

It remains to consider the case $p^* < p$; as we already remarked, by Lemma 2.3 we have $Nu(\Omega, U) \subseteq \Omega^*$, hence $\Omega^* \neq \emptyset$. We form the new MOLP problem $(P^*)$, asking to minimize the functions in $U^* = \{u_k : k \notin \Pi\}$ on the set $\Omega^*$ shown in (2.8). Moreover, $\Omega^*$ is a compact set, hence $Nu(\Omega^*, U^*) \neq \emptyset$. Consider further the second case:

Lemma 2.5 If $\Omega$ is a nonempty compact set, $t^*$ is the optimal value of (P), and $p^* < p$, so that we build the new set $\Omega^*$ given by (2.8) and the problem $(P^*)$ with the objectives $U^*$, then $x^* \in Nu(\Omega^*, U^*)$ implies that $(x^*, t^*)$ is an optimal solution of (P), and the equalities

$$\theta_1(x^*) = \cdots = \theta_{p^*}(x^*) = t^*, \qquad (2.10)$$

hold.

Proof: Let $(x', t^*)$ be an optimal solution of (P); then, $x' \in \Omega^*$ and we have $\theta(x^*) \le_L \theta(x')$. Thus, $\theta_1(x^*) \le \theta_1(x')$, and according to the definition of $p^*$, we get $\theta_1(x') = t^*$, i.e. $\theta_1(x^*) \le t^*$. This inequality and the definition of $\theta(x^*)$ imply

$$u_k(x^*) \le t^*, \quad k = 1, \ldots, p, \qquad (2.11)$$

so that $(x^*, t^*)$ is an optimal solution of (P). Obviously, $\max_k u_k(x^*) = t^*$, otherwise $t^*$ would not be the optimal value. The equalities given in the lemma follow from the definition of $p^*$.

Notice that Lemma 2.5 has proved that $Nu(\Omega^*, U^*) \subseteq S_{OPT}(P) \subseteq \Omega^* \subseteq \Omega$. It remains to be seen why we should continue to investigate $Nu(\Omega^*, U^*)$, by creating a new linear programming problem $(P^*)$, and how this procedure may end by solving the new problem.

Theorem 2.6 If $\Omega$ is a nonempty compact set, $t^*$ is the optimal value of (P), and $\Omega^* = \{x \in \Omega : u_k(x) = t^*,\ \forall k \in \Pi\}$, where $\Pi$ was defined by (2.5) and (2.6), then $Nu(\Omega, U) = Nu(\Omega^*, U^*)$.

Proof: We prove the double inclusion, to conclude the equality of the two sets; first, we prove $Nu(\Omega, U) \subseteq Nu(\Omega^*, U^*)$. Let $x^* \in Nu(\Omega, U)$; as $Nu(\Omega, U) \subseteq \Omega^*$, we have $x^* \in \Omega^*$. If $x^* \notin Nu(\Omega^*, U^*)$, then there is $x' \in Nu(\Omega^*, U^*)$ such that $\theta(x') \le_L \theta(x^*)$, with strict inequality in some coordinate. From Lemmas 2.3 and 2.5, we have

$$\theta_1(x') = \theta_1(x^*), \ \ldots, \ \theta_{p^*}(x') = \theta_{p^*}(x^*); \qquad (2.12)$$

hence, there is an index $k'$, $p^* \le k' < p$, so that $\theta_k(x') = \theta_k(x^*)$ for all $k = 1, \ldots, k'$, and $\theta_{k'+1}(x') < \theta_{k'+1}(x^*)$. As $x' \in \Omega$, the existence of $k'$ contradicts $x^* \in Nu(\Omega, U)$. In consequence, $Nu(\Omega, U) \subseteq Nu(\Omega^*, U^*)$. Second, we prove the opposite inclusion; let $x' \in Nu(\Omega^*, U^*)$ and suppose $x' \notin Nu(\Omega, U)$; as $x' \in \Omega$, we have $\theta(x^*) \le_L \theta(x')$, with strict inequality in some coordinate, for any $x^* \in Nu(\Omega, U)$. From Lemmas 2.3 and 2.5, we have

$$\theta_1(x^*) = \theta_1(x'), \ \ldots, \ \theta_{p^*}(x^*) = \theta_{p^*}(x'); \qquad (2.13)$$

hence, there is an index $k'$, $p^* \le k' < p$, so that $\theta_k(x^*) = \theta_k(x')$ for all $k = 1, \ldots, k'$, and $\theta_{k'+1}(x^*) < \theta_{k'+1}(x')$. As $Nu(\Omega, U) \subseteq Nu(\Omega^*, U^*)$ implies $x^* \in Nu(\Omega^*, U^*)$, the last result contradicts $x' \in Nu(\Omega^*, U^*)$. It follows that $x' \in Nu(\Omega, U)$; consequently, $Nu(\Omega^*, U^*) \subseteq Nu(\Omega, U)$. The equality stated in the theorem holds.

Now, Theorem 2.6 justifies the step of the algorithm in the case $p^* < p$, i.e. when the objective functions have not yet been exhausted. Precisely, for finding the generalized nucleolus of the initial MOLP problem, we can solve a new MOLP problem on the feasible set $\Omega^*$, defined in (2.8), with respect to the objective functions

$$U^* = \{u_k(x) : u_k(x) = \alpha_k^T x + \beta_k,\ k \notin \Pi\}. \qquad (2.14)$$

It remains to explain how we can find the set of indices $\Pi$ introduced in Lemma 2.1; some elements of the duality theory of linear programming will be used. Consider the dual problem of (P), taking into account that $t$ is unconstrained and $\Omega$ can be written in the form

$$\Omega = \{x \in \mathbb{R}^n : A_i x \le b_i,\ i = 1, \ldots, m;\ x \ge 0\}, \qquad (2.15)$$

where $A_i$ is row $i$ of $A$. The problem (P) is:

$$\min\{t : A_i x \le b_i,\ i = 1, \ldots, m;\ \alpha_k^T x - t \le -\beta_k,\ k = 1, \ldots, p;\ x \ge 0\}. \qquad (2.16)$$

Then, the dual problem (D) is

$$\min\Big\{\sum_{i=1}^m b_i \sigma_i - \sum_{k=1}^p \beta_k \tau_k \ : \ A^T \sigma + \sum_{k=1}^p \tau_k \alpha_k \ge 0,\ \sum_{k=1}^p \tau_k = 1,\ \sigma \ge 0,\ \tau \ge 0\Big\}, \qquad (2.17)$$

where $\sigma \in \mathbb{R}^m$ and $\tau \in \mathbb{R}^p$ are the vectors of dual variables. The complementarity conditions are

$$\sigma_i (b_i - A_i x) = 0,\ i = 1, \ldots, m, \qquad \tau_k (t - u_k(x)) = 0,\ k = 1, \ldots, p. \qquad (2.18)$$

Suppose that the problem (P) has been solved. Then, for any optimal solution $(x^*, t^*)$ of (P) and any optimal solution $(\sigma^*, \tau^*)$ of the dual (D), by the complementary slackness theorem the complementarity conditions are satisfied; among them,

$$\tau_k^* (t^* - u_k(x^*)) = 0, \quad k = 1, \ldots, p. \qquad (2.19)$$

From (2.17), we have $\sum_{k=1}^p \tau_k^* = 1$; hence $\tau^*$ has at least one positive coordinate. If we denote

$$\Pi = \{k \in \{1, \ldots, p\} : \tau_k^* > 0\}, \qquad (2.20)$$

then, for any optimal solution $(x^*, t^*)$ of (P), we have

$$u_k(x^*) = t^*, \quad \forall k \in \Pi. \qquad (2.21)$$

If $|\Pi| = p$, the MOLP problem is solved, as explained by Theorem 2.4; if $|\Pi| = p^* < p$, then the MOLP problem should be replaced by a new one, as explained by Theorem 2.6.
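Continuing the numerical sketch of Example 2.2, the set $\Pi$ of (2.20) can be read off the dual values that an LP solver reports. The sketch below assumes SciPy's HiGHS backend, whose marginals for the $A_{ub}$ rows of a minimization are nonpositive, so $\tau_k^*$ is minus the reported marginal; this sign convention is our assumption and should be verified against the solver's documentation.

```python
# Sketch of (2.20): recover Pi from the dual values of the u_k(x) <= t rows,
# using the `res` object returned by the linprog call shown earlier.
marg = res.ineqlin.marginals       # dual values of the A_ub rows (HiGHS)
tau = -marg[4:]                    # rows 4 and 5 are the objective rows
Pi = [k + 1 for k, t_k in enumerate(tau) if t_k > 1e-9]
print(tau, Pi)                     # expected: tau* = (1, 0), Pi = [1]
```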

Example 2.7 Return to Example 2.2 and write the dual problem (2.17) for the problem discussed there:

Minimize $g = \sigma_1 + 2\sigma_2 + 3\sigma_3 - \sigma_4$,

subject to

$$\sigma_1 - \sigma_2 + \sigma_3 - \sigma_4 + 2\tau_1 + \tau_2 \ge 0, \quad -\sigma_1 + \sigma_2 + \sigma_3 - \sigma_4 + \tau_1 - 2\tau_2 \ge 0, \quad \tau_1 + \tau_2 = 1, \quad \sigma \ge 0,\ \tau \ge 0.$$

Among the complementarity conditions we have

$$\tau_1 (t - 2x - y) = 0, \qquad \tau_2 (t - x + 2y) = 0.$$

The optimal solution found by solving the problem (P), i.e. $x^* = 0$, $y^* = 1$, $t^* = 1$, together with $\tau_1 + \tau_2 = 1$, gives $\tau_1^* = 1$, $\tau_2^* = 0$. As explained above in (2.20) and (2.21), we obtain $\Pi = \{1\}$, so that for any optimal solution of (P) we should have $2x + y = 1$; this equation is added to the constraints in (P), to get the new MOLP problem, which is further solved to obtain the generalized nucleolus of the initial problem. This is: minimize the function

$u_2(x) = x - 2y$,

on the set $\Omega^*$ defined by the system of linear inequalities

$$x - y \le 1, \quad -x + y \le 2, \quad x + y \le 3, \quad x + y \ge 1, \quad 2x + y = 1, \quad x \ge 0, \quad y \ge 0.$$

Solving this problem, we get the optimal solution $x^* = 0$, $y^* = 1$, $t^* = -2$. The generalized nucleolus consists of the single point $x = 0$, $y = 1$, and the objectives take the values collected in the vector $\theta^T = (1, -2)$. Of course, if we know that the solution of (P) is unique, we may compute the values of the objectives at that solution, and we need not solve the second problem. If there are more objectives and the second MOLP problem does not exhaust them, we should go on to the third MOLP problem.
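The second-stage problem can be checked numerically in the same way; a sketch (again assuming SciPy) solving $\min\, x - 2y$ over $\Omega^*$:

```python
# Sketch: second-stage problem of Example 2.7 on Omega* = Omega /\ {2x + y = 1}.
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, -2.0])                        # minimize u2 = x - 2y
A_ub = np.array([[1.0, -1.0], [-1.0, 1.0],
                 [1.0, 1.0], [-1.0, -1.0]])      # the four inequalities of Omega
b_ub = np.array([1.0, 2.0, 3.0, -1.0])
A_eq = np.array([[2.0, 1.0]])                    # 2x + y = 1, fixed in step 1
b_eq = np.array([1.0])

res2 = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
               bounds=[(0, None), (0, None)], method="highs")
print(res2.x, res2.fun)                          # expected: (0, 1), value -2
```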

3 Algorithm for solving an MOLP problem

The algorithm for solving an MOLP problem, that is, for finding a point in the generalized nucleolus of the compact feasible set $\Omega$ relative to the linear functions $U$, can be stated as follows:

STEP 0: Find out whether $\Omega \neq \emptyset$ or not. If $\Omega = \emptyset$, stop: the MOLP problem has no solution. Before step $s$, $s \ge 1$, a feasible set $\Omega^s$ and a system of linear functions $U^s$ are available. For $s = 1$, we have $\Omega^1 = \Omega$, $U^1 = U$, $p_1 = |U^1| = p$.

STEP s: 1. Solve the LP problem $(P^s)$:

Minimize $t$, s.t. $x \in \Omega^s$, $u_k^s(x) \le t$, $k = 1, \ldots, p_s$.

Let $t^s$ be the optimal value.

2. Find a dual optimal solution $(\sigma^s, \tau^s)$ and determine the set of indices $\Pi^s = \{k : \tau_k^s > 0\}$.

3. Update the feasible set and the objectives: $\Omega^{s+1} := \{x \in \Omega^s : u_k^s(x) = t^s,\ k \in \Pi^s\}$, $p_{s+1} := p_s - |\Pi^s|$, and $U^{s+1} := \{u_k^s : k \notin \Pi^s\}$, renumbered $1, \ldots, p_{s+1}$;

4. Check whether $p_{s+1} = 0$ or $p_{s+1} > 0$; in the first case, go to 5; in the second case, set $s := s + 1$ and go to a new step;

5. Stop the procedure: the solution set is $\Omega^{s+1}$; if only one element of the generalized nucleolus is desired, take the $x$-part of the optimal solution of problem $(P^s)$.
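Putting the steps together, here is a compact Python sketch of the whole procedure, under the same assumptions as above: SciPy is available, dual marginals follow the stated sign convention, and the dual solution identifies $\Pi^s$ as in (2.20) (in degenerate cases a strictly complementary dual solution may be needed). It is a sketch, not the author's original implementation.

```python
# Sketch of the algorithm of Section 3 for the generalized nucleolus.
import numpy as np
from scipy.optimize import linprog

def generalized_nucleolus(A, b, alphas, betas, tol=1e-9):
    """One point of the generalized nucleolus of Omega = {x : A x <= b, x >= 0}
    with respect to the objectives u_k(x) = alphas[k] @ x + betas[k]."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    alphas, betas = np.asarray(alphas, float), np.asarray(betas, float)
    m, n = A.shape
    active = list(range(len(alphas)))            # objectives not yet fixed
    A_eq, b_eq = np.empty((0, n + 1)), np.empty(0)
    while active:
        # Problem (P^s) over (x, t): minimize t subject to x in Omega^s and
        # alpha_k^T x - t <= -beta_k for the still-active objectives.
        obj_rows = np.hstack([alphas[active], -np.ones((len(active), 1))])
        A_ub = np.vstack([np.hstack([A, np.zeros((m, 1))]), obj_rows])
        b_ub = np.concatenate([b, -betas[active]])
        c = np.zeros(n + 1); c[-1] = 1.0
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      A_eq=A_eq if len(b_eq) else None,
                      b_eq=b_eq if len(b_eq) else None,
                      bounds=[(0, None)] * n + [(None, None)], method="highs")
        if not res.success:
            return None                          # Omega empty: no solution
        t_s = res.fun
        tau = -res.ineqlin.marginals[m:]         # tau^s (assumed sign convention)
        Pi = [active[i] for i, tk in enumerate(tau) if tk > tol]
        if not Pi:
            break                                # safety net: sum(tau) = 1 in theory
        # Move the objectives in Pi into the constraints: u_k(x) = t^s.
        A_eq = np.vstack([A_eq,
                          np.hstack([alphas[Pi], np.zeros((len(Pi), 1))])])
        b_eq = np.concatenate([b_eq, t_s - betas[Pi]])
        active = [k for k in active if k not in Pi]
    return res.x[:n]

# Example 2.2: the expected output is [0. 1.].
print(generalized_nucleolus([[1, -1], [-1, 1], [1, 1], [-1, -1]],
                            [1, 2, 3, -1], [[2, 1], [1, -2]], [0, 0]))
```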

Convergence in a finite number of steps is assured by the fact that in each step at least one objective function of the current MOLP problem is incorporated into the restrictions. Hence, the number of steps is at most $p$. Of course, a dual variant of the method given above can be stated, as has been done in the case of the nucleolus (see [1] and [2]). Finally, let us remark that in the case $p \ge n$, if there are $n$ linearly independent objectives, then the generalized nucleolus consists of exactly one point, because at some step of the algorithm all these $n$ functions will be among the constraints, equated to some constants, and these equations have only one solution.

Acknowledgment. This work started at the Institute of Mathematics, Pisa, under the direction of Professors R. Marino and F. Giannessi, and a first draft was published in the series Quaderno dei gruppi di ricerca matematica del C.N.R., Feb. 1981.

References

[1] G. Bruynel, On balanced sets with applications in game theory, Bull. Soc. Math. Belgique, XXX (1978), 93-108.

[2] I. Dragan, A procedure of finding the nucleolus of a cooperative n person game, Z.O.R., vol. 25 (1981), 119-131.

[3] M. Justman, Iterative processes with nucleolar restrictions, IJGT, vol. 6,4 (1977), 181-212.

[4] A. Kopelowitz, Computations of the kernels of simple games and the nucleolus of an n-person game, RM 31, The Hebrew Univ., Jerusalem, 1967.

[5] M. Maschler, B. Peleg, L.S. Shapley, Geometric properties of the kernel, nucleolus and related solution concepts, Math. O.R., vol. 4,4 (1979), 303-339.

[6] G. Owen, Game Theory, 3rd edition, Academic Press, New York, 1995.

[7] J. Potters, J. Reijnierse, M. Ansing, Computing the nucleolus by solving a prolonged simplex algorithm, Math. O.R., vol. 21,3 (1996), 757-768.

[8] D. Schmeidler, The nucleolus of a characteristic function game, SIAM Journal Appl. Math., vol. 17,6 (1969), 1163-1170.
