
# A new interactive method to solve multiobjective linear programming problems.

1. Introduction

During the past four decades, many methods and algorithms have been developed to solve Multiobjective Programming (MOP) problems, in which some objectives conflict and the utility function of the Decision Maker (DM) is imprecise in nature. MOP is believed to be one of the fastest growing areas in management science and operations research, since many decision making problems can be formulated in this domain. For engineering applications of MOP problems, the interested reader is referred to [1,2]. Decision making problems with several conflicting objectives are common in practice; for such problems, a single objective function is not sufficient to capture the truly desired solution. Because of this limitation, an MOP method is needed to solve many real world optimization problems [3].

Although different solution procedures have been introduced, the interactive approaches are generally believed to be the most promising, since the DM's preference information is progressively articulated during the solution process and incorporated into it [4]. The purpose of MOP in the mathematical programming framework is to optimize r different objective functions subject to a set of constraints. The mathematical formulation of an MOP is also known as the vector maximization (or minimization) problem. Generally, MOP methods can be divided into four categories.

The first and oldest group of MOP methods does not require any information from the DM during the process of finding an efficient solution. These algorithms rely solely on pre-assumptions about the DM's preferences. In this category, the L-p metric methods are notable; their objective is to minimize the deviations of the objective functions from the ideal solution. Since different objectives have different scales, they must be normalized before the minimization of deviations starts. Therefore, a new, scale-free problem is minimized [5].

The second group of MOP methods gathers cardinal or ordinal preference information before the solution process begins. In the utility function method [6], for example, the DM's utility is determined as a function of the objective functions, and this overall function is then maximized under the original constraints. Another method in this group, extensively used by many researchers, is Goal Programming (GP) [7], in which the DM determines the least (most) acceptable level of each Max (Min) function. Since attaining these values might lead to an infeasible point, the constraints are allowed to be violated, but the weighted deviations are minimized.

The third group of MOP methods provides a set of efficient solutions from which the DM has the opportunity to choose his preferred one. Although finding a single efficient solution in MOP is not difficult, finding all efficient solutions to present to the DM is not a trivial task. Many papers have discussed this important issue [8-11]. The set of all efficient feasible solutions of a Multiobjective Linear Programming (MOLP) problem can be represented by convex combinations of efficient extreme points and efficient extreme rays of the feasible region. Therefore, the set of efficient extreme points and efficient extreme rays can be regarded as the solution set of an MOLP problem. Ida [8] develops an algorithm to find the structure of efficient solutions in an MOLP using all efficient extreme points and extreme rays. Pourkarimi et al. [9] represent the structure of efficient solutions as maximal efficient faces. Youness and Emam [11] investigate the relationships among some efficient solutions in the objective space and then obtain all efficient solutions of the MOLP and their structure in the constraint space. Steuer and Piercy [10] use regression analysis to estimate the number of efficient extreme points in MOLP. Even though the final solution among the efficient solutions is usually chosen by the DM, Gass and Roy [12] propose a mathematical method for ranking the set of efficient extreme solutions in an MOLP. The idea is to enclose the given efficient solutions within an annulus of minimum width, where the width is determined by a hypersphere that minimizes the maximum deviation of the points from its surface.

Finally, the last group of MOP methods provides solutions based on a continuous interaction with the DM and tries to reach the preferred solution at the end of the algorithm. Many methods have been developed in this group [13-21]. Different procedures may be better suited for different types of decision makers, for different types of decision situations, or for different stages in the decision making process [22]. Homburg [16], for instance, proposed a hierarchical procedure consisting of two levels, a top-level and a base-level. The main idea is that the top-level only provides general preference information from the DM. Taking this information into account, the base-level then determines a compromise solution through an interactive procedure with the DM. As another example, Tchebycheff metric based approaches have become popular in this category for sampling the efficient set in a continuous interaction with the DM to narrow his choices down to a single most preferred efficient solution. The interaction proceeds by generating smaller and smaller subsets of the efficient set until a final solution is located [19]. The method proposed by Engau [14] decomposes the original MOP problem into a collection of smaller subproblems to facilitate the evaluation of tradeoffs and the articulation of preferences. A priori preferences on objective tradeoffs are integrated into this process, and the DM is supported by an interactive procedure to coordinate any remaining tradeoffs.

There are many advantages to using interactive methods, such as:

--No information is needed from the DM before the solution process begins,

--Only minor preference information is needed during the solution process,

--Since the DM continuously contributes to the problem through the analyst, he is more likely to accept the final solution,

--Fewer restrictive assumptions are involved in these methods than in the other groups of MOP methods.

However, these algorithms also have some drawbacks, of which the most important are as follows:

--The accuracy of the final solution depends entirely on the precision of the DM's answers. In other words, if the DM does not interact carefully with the analyst, the final solution may be undesirable,

--There is no guarantee of reaching a desirable solution after a finite number of iterations,

--The DM must make more effort during these algorithms than in the other groups.

During the past decades, many researchers have reviewed or discussed the strengths and weaknesses of the existing methods and compared them. Borges and Antunes [23] deal with the sensitivity analysis of the weights in MOLP. Buchanan [24] reviews and comments on ten well-known methods, including [25-27]. Each description is followed by a detailed analysis of the method, consisting of comments on the underlying approach, technical aspects, and practical considerations. All methods are compared in terms of important features such as applicability, convergence, and difficulty of the questions asked. Buchanan and Daellenbach [24] also describe a laboratory experiment that compares, from the user's point of view, the performance of four different methods for MOP problems. Reeves and Gonzalez [22] compare the computational performance and the quality of the solutions generated by two similar yet contrasting interactive procedures. Sun [4] investigates solution quality in interactive MOP, considering the value functions used, the weights assigned to the objective functions in those value functions, the size of the efficient set, and the number of objective functions; the feasibility and existence of the ideal and nadir points are also discussed. The work of Vanderpooten [28] is a very good reference on the main concepts of interactive procedures: after a brief introduction, a general technical framework for understanding the existing methods is presented.

The main goals of the papers mentioned above are to introduce criteria for measuring the efficiency of various algorithms and to characterize a good method. According to Reeves and Franz [29], the main characteristics of a proper interactive algorithm can be enumerated as follows:

1) A minimal amount of information should be required from the DM,

2) The nature of the decisions asked of the DM should be simple,

3) If the DM answers improperly in some interactions, he should have the opportunity to compensate for it in subsequent interactions,

4) The number of iterations needed to reach the final solution should be reasonable,

5) The DM should be familiar with the nature of the judgments he is asked for,

6) The algorithm should be suitable for solving large scale problems.

In this paper, we propose a new algorithm that belongs mainly to the group of interactive methods. However, we also need some information from the DM before problem solving begins; therefore, the algorithm is neither a pure interactive method nor a pure method of the second category. In addition, the proposed algorithm is based on a novel approach to the problem: it starts from an infeasible utopian point and moves towards the feasible region and then to the final efficient point. The remainder of this paper is organized as follows. Section 2 provides the definitions used in this paper. In section 3, the problem statement and the proposed algorithm are explained. Two numerical examples are presented in section 4 to illustrate the proposed algorithm. Finally, concluding remarks appear in section 5 to summarize the contribution of the paper.

2. Definitions

Consider an MOLP problem defined as follows,

$$\max \{ Z = f_k(X) = C_k^T X;\ k = 1,2,\dots,r \}$$

s.t.

$$M = \{ X \in \mathbb{R}^n \mid A_i X \le b_i;\ X \ge 0;\ i = 1,2,\dots,m \} \quad (1)$$

where,

$f_k(X)$: the kth objective function,

$C_k$: the vector of coefficients of the kth objective function,

$X$: an n-dimensional vector of decision variables,

$A_i$: the ith row of technological coefficients,

$b_i$: the right-hand side of the ith constraint, and

$M$: the feasible region.

A solution $\bar{X} \in M$ is efficient if and only if there does not exist another $X \in M$ such that $f_k(X) \ge f_k(\bar{X})$ for all $k = 1,2,\dots,r$ and $f_k(X) > f_k(\bar{X})$ for at least one $k$. Then, the vector

$$\bar{Z} = \{ f_k(\bar{X});\ k = 1,2,\dots,r \} \quad (2)$$

is called a non-dominated criterion vector. All efficient solutions in $M$ form the efficient set $E$. Although some interactive algorithms search the entire feasible region $M$, the majority are designed to search only the efficient set $E$. The vector

$$Z^* = \{ f_k(X^*) \mid f_k(X^*) = \max_{X \in M} f_k(X);\ k = 1,2,\dots,r \} \quad (3)$$

is called the ideal point or the ideal criterion vector. It should be mentioned that a single solution $X^*$ attaining all these maxima simultaneously, and hence the ideal criterion vector, does not usually exist. The vector

$$Z^{**} = \{ f_k(X^{**}) \mid f_k(X^{**}) \ge \max_{X \in M} f_k(X);\ k = 1,2,\dots,r \} \quad (4)$$

is called a utopian vector or a utopian point. Unlike the ideal criterion vector, many utopian vectors exist; however, their corresponding $X^{**}$'s are most likely infeasible.
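As a small illustration of these definitions, the ideal criterion vector of a toy MOLP can be computed by maximizing each objective separately over $M$. The sketch below uses `scipy.optimize.linprog` on a hypothetical two-objective problem (all data invented for illustration, not taken from the paper):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical MOLP: max f1 = x1, max f2 = x2  s.t.  x1 + x2 <= 1, x >= 0.
C = np.array([[1.0, 0.0],   # gradient of f1
              [0.0, 1.0]])  # gradient of f2
A_ub = np.array([[1.0, 1.0]])
b_ub = np.array([1.0])

# Ideal criterion vector Z*: maximize each f_k over M separately.
# linprog minimizes, so each objective is negated.
z_star = []
for c_k in C:
    res = linprog(-c_k, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
    z_star.append(-res.fun)

print(z_star)  # Z* = [1.0, 1.0]
```

Note that no single feasible point attains both maxima at once (the first maximum is reached only at $(1,0)$ and the second only at $(0,1)$), which is exactly the sense in which the ideal solution $X^*$ usually does not exist.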

3. Problem Statement

The majority of methods proposed in the domain of interactive procedures search the feasible region $M$ or the efficient set $E$ through interaction with the DM in order to attain the final solution. Here, we develop a new algorithm that solves MOLP problems by starting from a utopian point $X^{**}$ (which is usually infeasible) and moving towards the feasible region $M$ and then the efficient set $E$ via stepwise movements and a plain, continuous interaction with the DM, in order to stay in line with his preferences. Since there are many utopian points outside $M$, we choose the $X^{**}$ closest to $M$ as the starting point, i.e., the one with the least sum of weighted deviations from the constraints.

3.1 The Proposed Algorithm

The proposed algorithm attains an efficient solution of an MOLP through the following steps:

Algorithm:

Step 1. Ask the DM to determine $a_k$, the maximum acceptable reduction in the value of $f_k$ in any interaction. Also, ask him to determine $w_i$, a penalty for each unit of deviation from the ith constraint. In the next step, we find a utopian point while also allowing deviations from the constraints $x_j \ge 0$, so the utopian point may have some negative $x_j$'s. However, we impose a large penalty, $w'$, on each unit of such deviations.

Step 2. Maximize each $f_k(X)$ over the feasible set $M$ as follows,

$$\max f_k(X) = C_k^T X$$

s.t.

$$M = \{ X \in \mathbb{R}^n \mid A_i X \le b_i;\ X \ge 0;\ i = 1,2,\dots,m \} \quad (5)$$

Step 3. Let $f_k(X^*)$ be the optimal value of each $f_k(X)$, $k = 1,2,\dots,r$. Solve the following GP problem,

$$\min \sum_{i=1}^{m} w_i d_i + w' \sum_{j=1}^{n} d'_j \quad \text{s.t.} \quad f_k(X) \ge f_k(X^*);\ A_i X - d_i \le b_i;\ x_j + d'_j \ge 0;\ d_i, d'_j \ge 0;\ i = 1,\dots,m;\ j = 1,\dots,n;\ k = 1,\dots,r \quad (6)$$

where $d_i$ represents the deviation from the ith constraint and $d'_j$ the deviation from the nonnegativity constraint $x_j \ge 0$. In this step, we allow the solution to go outside the feasible region. Suppose $X$ is the solution of (6). Set $X^0 = X$ and go to step 4.
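The goal program of step 3 is an ordinary LP once the deviation variables are made explicit. The sketch below sets it up with `scipy.optimize.linprog` for a hypothetical toy MOLP (objectives $f_1 = x_1$, $f_2 = x_2$, single constraint $x_1 + x_2 \le 1$, so $f_1^* = f_2^* = 1$); all data and weights are invented for illustration:

```python
import numpy as np
from scipy.optimize import linprog

# Variables: [x1, x2, d1], where d1 >= 0 is the deviation of constraint 1.
# Here x >= 0 is kept as a hard bound, so the d'_j variables are omitted.
w1 = 1.0
c = np.array([0.0, 0.0, w1])          # minimize w1 * d1
A_ub = np.array([
    [ 1.0,  1.0, -1.0],               # x1 + x2 - d1 <= 1 (relaxed constraint)
    [-1.0,  0.0,  0.0],               # f1(X) >= f1*  ->  -x1 <= -1
    [ 0.0, -1.0,  0.0],               # f2(X) >= f2*  ->  -x2 <= -1
])
b_ub = np.array([1.0, -1.0, -1.0])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)

x_utopian, d1 = res.x[:2], res.x[2]
print(x_utopian, d1)   # utopian point (1, 1) with deviation d1 = 1
```

The utopian point $(1,1)$ attains both individual optima but violates the relaxed constraint by $d_1 = 1$, which is exactly the weighted deviation the objective of (6) minimizes.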

Step 4. Let $\theta_{ik}$ be the angle between the gradients $C_i$ and $C_k$. Calculate $\sin \theta_{ik}$ as follows,

$$\sin \theta_{ik} = \sqrt{1 - \left( \frac{C_i \cdot C_k}{|C_i|\,|C_k|} \right)^2} \quad (7)$$

Now, we can determine the small step $\delta$ by which we move towards the feasible region in each iteration as,

$$\delta = \min \left\{ \frac{a_i}{|C_i| \sin \theta_{ik}};\ i,k = 1,2,\dots,r;\ i \ne k \right\} \quad (8)$$
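The computation of $\sin \theta_{ik}$ and $\delta$ in (7)-(8) is a few lines of linear algebra. A sketch with hypothetical gradients and DM tolerances (all numbers invented for illustration):

```python
import numpy as np

# Hypothetical data for equations (7)-(8).
C = np.array([[1.0, 0.0],    # gradient C_1
              [1.0, 1.0]])   # gradient C_2
a = np.array([1.0, 1.0])     # maximum acceptable reductions a_k

def sin_theta(ci, ck):
    # sine of the angle between two gradient vectors, eq. (7)
    cos = ci @ ck / (np.linalg.norm(ci) * np.linalg.norm(ck))
    return np.sqrt(1.0 - cos ** 2)

# Step size delta, eq. (8): minimum over all ordered pairs i != k.
r = len(C)
candidates = [a[i] / (np.linalg.norm(C[i]) * sin_theta(C[i], C[k]))
              for i in range(r) for k in range(r) if i != k]
delta = min(candidates)
print(delta)  # 1.0 for this data
```

For these vectors $\cos \theta_{12} = 1/\sqrt{2}$, so $\sin \theta_{12} = 1/\sqrt{2}$ and the binding candidate is $a_2 / (|C_2| \sin \theta_{21}) = 1$.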

Step 5. Consider the constraints $f_k(X) \ge f_k(X^*)$ that remain active at $X^0$. Ask the DM which of the active objectives has the least desirability, and let $l$ be the index of that objective $f_l$.

Step 6. Solve the following optimization problem, in which we take a step of length $\delta$ from $X^0$ towards the feasible region while keeping the value of $f_l$,

$$\min \sum_{i=1}^{m} w_i d_i + w' \sum_{j=1}^{n} d'_j \quad \text{s.t.} \quad f_l(X) = f_l(X^0);\ |X - X^0| \le \delta;\ A_i X - d_i \le b_i;\ x_j + d'_j \ge 0;\ d_i, d'_j \ge 0 \quad (9)$$

where $|\cdot|$ is the 2-norm. In this step there is no change in the value of $f_l$, and we usually, but not necessarily, expect the other objective functions to get worse. In other words, we might encounter a situation in which the values of some active or inactive $f_k$ improve.

Step 7. If $\sum_{i=1}^{m} w_i d_i + w' \sum_{j=1}^{n} d'_j = 0$ then go to step 8; otherwise set $X^0 = X$, calculate the new values of $f_k(X^0)$, and go to step 5.

Step 8. $\sum_{i=1}^{m} w_i d_i + w' \sum_{j=1}^{n} d'_j = 0$ implies that we are inside the feasible region, but most likely not on its boundary. Therefore, we take a smaller step so as to stop on the boundary by solving,

$$\min \{ |X - X^0| \mid f_k(X) \ge f_k(X^0);\ A_i X \le b_i;\ x_j \ge 0;\ i = 1,2,\dots,m;\ j = 1,2,\dots,n;\ k = 1,2,\dots,r \} \quad (10)$$

There is no guarantee that the solution of step 8 is non-dominated, so we move along the boundary to reach a non-dominated solution. Set $X^0 = X$, $S = \{1,2,\dots,r\}$, and go to step 9.

Step 9. Ask the DM which objective in $S$ has the least desirability, and let $l$ be the index of that objective $f_l$. Solve the following optimization problem, in which we take a step of length at most $\delta$ from $X^0$ along the boundary of the feasible region while not worsening any of the values $f_k$, $k = 1,2,\dots,r$,

$$\max \{ f_l(X) \mid f_k(X) \ge f_k(X^0);\ A_i X \le b_i;\ |X - X^0| \le \delta;\ x_j \ge 0;\ i = 1,2,\dots,m;\ j = 1,2,\dots,n;\ k = 1,2,\dots,r \} \quad (11)$$

Step 10. If $f_l(X) > f_l(X^0)$ then set $S = \{1,2,\dots,r\}$ and go to step 9; otherwise set $S = S - \{l\}$ and go to step 11.

Step 11. If $S = \emptyset$ then choose $X$ as the final efficient solution; otherwise set $X^0 = X$ and go to step 9.

End.

It should be noted that steps 1-8 lead us from the closest utopian point to the feasible region $M$ in line with the DM's preferences, whereas steps 9-11 guarantee that the final solution is an efficient one, i.e., that it lies in $E$.
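The control flow of steps 9-11 can be sketched as a small driver routine in which the subproblem solver for (11) and the DM's choice are injected as callables. Everything below (function names, the stub data in the usage example) is a hypothetical illustration of the loop's logic, not code from the paper; note that updating $X^0$ after an improving step is an assumption made to keep the sketch self-contained:

```python
def efficiency_loop(x0, objectives, maximize_fl, dm_pick, tol=1e-9):
    """Steps 9-11: move along the boundary until no objective improves.

    objectives  : list of callables f_k(x)
    maximize_fl : callable (x0, l) -> x, solving problem (11) for index l
    dm_pick     : callable (S, x) -> index in S the DM finds least desirable
    """
    r = len(objectives)
    S = set(range(r))
    x = x0
    while S:                              # step 11: stop when S is empty
        l = dm_pick(S, x)                 # step 9: DM picks an objective
        x_new = maximize_fl(x, l)         # solve problem (11)
        if objectives[l](x_new) > objectives[l](x) + tol:
            x, S = x_new, set(range(r))   # step 10: improvement, reset S
        else:
            S.discard(l)                  # f_l cannot improve from here
    return x                              # final efficient solution

# Trivial usage with stub callables: the start point is already efficient,
# so no objective improves and the loop returns it unchanged.
fs = [lambda x: x[0], lambda x: x[1]]
result = efficiency_loop((0.5, 0.5), fs,
                         maximize_fl=lambda x, l: x,
                         dm_pick=lambda S, x: min(S))
print(result)  # (0.5, 0.5)
```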

3.2 Some Lemmas to Determine [delta]

Here, we show, through the following three lemmas, how to choose $\delta$ in step 4 of the proposed algorithm.

Lemma 1: Any step $\delta$ along the gradient vector $C_k$ results in a decrease (or increase) of $\delta |C_k|$ in $f_k$.

Proof: Let $\alpha_{kj}$ be the angle between $C_k$ and the axis $x_j$. Therefore,

$$C_k = \sum_{j=1}^{n} |C_k| \cos \alpha_{kj}\, e_j \quad (12)$$

where $e_j$ is the jth unit vector of the n-dimensional space. The angle between $C_k$ and $x_j$ lets us compute the projection of $C_k$ onto the axis $x_j$; i.e., if we take a step $\delta$ along the vector $C_k$, the change in coordinate $x_j$ is $\delta \cos \alpha_{kj}$ or $\delta \cos(\pi - \alpha_{kj})$, depending on the direction we choose. Figure 1 depicts the gradient vector $C_k$ and its projection in a 2-dimensional space.

Therefore,

$$\Delta x_j = \delta \cos \alpha_{kj} = \delta\, \frac{c_{kj}}{|C_k|} \quad (13)$$

or

$$\Delta x_j = \delta \cos(\pi - \alpha_{kj}) = -\delta \cos \alpha_{kj} = -\delta\, \frac{c_{kj}}{|C_k|} \quad (14)$$

Therefore, we can compute the change in the value of $f_k$ as follows,

$$|\Delta f_k| = \left| \sum_{j=1}^{n} c_{kj}\, \Delta x_j \right| = \delta \sum_{j=1}^{n} \frac{c_{kj}^2}{|C_k|} = \delta\, |C_k| \quad (15)$$

We now present a generalized form of Lemma (1).

Lemma 2: Any step $\delta$ along $C_l$, which makes the angle $\theta_{lk}$ with $C_k$, results in a decrease (increase) of $\delta |C_k| \cos \theta_{lk}$ in $f_k$.

Proof: It is clear that taking a step $\delta$ along $C_l$, which makes the angle $\theta_{lk}$ with $C_k$, is equivalent to taking a step $\delta \cos \theta_{lk}$ along $C_k$. Using the result of Lemma (1) yields,

$$|\Delta f_k| = \delta \cos \theta_{lk}\, |C_k| \quad (16)$$

[FIGURE 1 OMITTED]

[FIGURE 2 OMITTED]

Lemma 3: Let $H_l$ be a hyperplane orthogonal to $C_l$, where $C_l$ makes the angle $\theta_{lk}$ with $C_k$. Any step $\delta$ on the hyperplane $H_l$, in any direction, results in a decrease (increase) of $\delta \sin \theta_{lk}\, |C_k|$ in $f_k$.

Proof: We prove this lemma in two steps. In the first step, let $0 \le \theta_{lk} \le \pi/2$. Then taking any step $\delta$ on $H_l$ in any direction is the same as taking a step $\delta$ in a direction whose angle with $C_l$ is $\pi/2$, and which therefore makes the angle $\pi/2 \pm \theta_{lk}$ with $C_k$. Figure 2(a) demonstrates the situation in a 2-dimensional space.

According to Lemma (2), taking any step $\delta$ along a direction which makes the angle $\pi/2 + \theta_{lk}$ or $\pi/2 - \theta_{lk}$ with $C_k$ results in a change of $\delta \cos(\pi/2 + \theta_{lk})\, |C_k|$ or $\delta \cos(\pi/2 - \theta_{lk})\, |C_k|$ in $f_k$. Since $0 \le \theta_{lk} \le \pi/2$, we have,

$$\delta \cos(\pi/2 + \theta_{lk})\, |C_k| = -\delta \sin \theta_{lk}\, |C_k| \quad (17)$$

or

$$\delta \cos(\pi/2 - \theta_{lk})\, |C_k| = \delta \sin \theta_{lk}\, |C_k| \quad (18)$$

Finally, we have,

$$|\Delta f_k| = \delta \sin \theta_{lk}\, |C_k| \quad (19)$$

Now, in the second step, suppose $\pi/2 \le \theta_{lk} \le \pi$. Taking any step $\delta$ on $H_l$ in any direction is the same as taking a step $\delta$ in a direction whose angle with $C_k$ is $\theta_{lk} - \pi/2$ or $3\pi/2 - \theta_{lk}$. Figure 2(b) demonstrates the situation in a 2-dimensional space. An argument similar to the first step yields,

$$\delta \cos(\theta_{lk} - \pi/2)\, |C_k| = \delta \sin \theta_{lk}\, |C_k| \quad (20)$$

or

$$\delta \cos(3\pi/2 - \theta_{lk})\, |C_k| = -\delta \sin \theta_{lk}\, |C_k| \quad (21)$$

Finally, we have,

$$|\Delta f_k| = \delta \sin \theta_{lk}\, |C_k| \quad (22)$$
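Lemma 3 is easy to verify numerically: stepping a distance $\delta$ inside the hyperplane orthogonal to $C_l$ changes the linear objective $f_k$ by exactly $\delta \sin \theta_{lk}\, |C_k|$. A quick check with invented vectors (all data hypothetical):

```python
import numpy as np

# Numerical check of Lemma 3 with invented gradients.
C_l = np.array([1.0, 0.0])
C_k = np.array([1.0, 1.0])
delta = 0.3

# A unit direction inside the hyperplane H_l orthogonal to C_l.
u = np.array([0.0, 1.0])
assert abs(C_l @ u) < 1e-12          # u really lies in H_l

# Change in f_k after a step delta along u (f_k is linear: f_k(X) = C_k . X).
df_k = abs(C_k @ (delta * u))

# Predicted change: delta * sin(theta_lk) * |C_k|.
cos = C_l @ C_k / (np.linalg.norm(C_l) * np.linalg.norm(C_k))
predicted = delta * np.sqrt(1.0 - cos ** 2) * np.linalg.norm(C_k)

print(df_k, predicted)  # both 0.3
```

Here $\cos \theta_{lk} = 1/\sqrt{2}$, so the predicted change is $0.3 \cdot (1/\sqrt{2}) \cdot \sqrt{2} = 0.3$, matching the direct computation.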

Now, we are ready to determine the value of $\delta$ properly. Suppose the DM determines that he would not accept any reduction of more than $a_k$ in the value of $f_k$ in any interaction. When we perform step 6 of the algorithm, we keep $f_l$ unchanged; to achieve this, we must take the step $\delta$ on $H_l$. According to Lemma (3), such a step leads to an increase (decrease) of $\delta \sin \theta_{lk}\, |C_k|$ in $f_k$. There is no problem if $f_k$ increases. However, we must ensure that the step $\delta$ does not worsen $f_k$ by more than $a_k$, which suggests the following condition,

$$\delta \sin \theta_{lk}\, |C_k| \le a_k;\ k = 1,\dots,r;\ k \ne l \quad (23)$$

or

$$\delta \le \frac{a_k}{|C_k| \sin \theta_{lk}};\ k = 1,\dots,r;\ k \ne l \quad (24)$$

Holding (24) in all interactions throughout the algorithm guarantees that no $f_k$, $k \ne l$, is reduced by more than $a_k$. Since the DM is entitled to keep the value of any $f_k$ fixed, the following condition must hold in order to obtain an appropriate $\delta$,

$$\delta \le \frac{a_k}{|C_k| \sin \theta_{lk}};\ k,l = 1,\dots,r;\ k \ne l \quad (25)$$

Finally, we determine the best value of $\delta$ with consideration of the DM's intentions while reaching the feasible region in as few interactions as possible. Thus, we have,

$$\delta = \min \left\{ \frac{a_k}{|C_k| \sin \theta_{lk}};\ k,l = 1,\dots,r;\ k \ne l \right\} \quad (26)$$

4. Numerical Examples

In this section, we demonstrate the implementation of the proposed method using two numerical examples.

4.1 Example 1

Consider the following MOLP problem with two objective functions,

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (27)

We first ask the DM to specify his sensitivity to the constraints and the objectives. As already defined, the $w_i$'s are the penalties associated with the constraints and the $a_k$'s are the permitted amounts of reduction of the objective functions in each iteration. For the sake of simplicity, suppose that all constraints have equal sensitivity, i.e., $w_i = 1$, $i = 1,\dots,4$. Next, we determine the acceptable amounts of reduction of the objectives $z_1$ and $z_2$; for this example, suppose the DM specifies 2 and 3 for $a_1$ and $a_2$, respectively. The appropriate value of $\delta$ can be determined as follows,

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Then, we must find $z_1^*$ and $z_2^*$. Solving two distinct LP problems for $z_1$ and $z_2$ yields $(x_1, x_2) = (1.95, 5.49)$ with $z_1^* = 34.86$ and $(x_1, x_2) = (6.50, 1.47)$ with $z_2^* = 35.43$, respectively. In the next step, we obtain the utopian point at which both objectives attain at least their optimal values while we reach a common point. Hence, we have,

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (28)

The optimal solution of (28) is $(x_1^{**}, x_2^{**}) = (5.10, 4.96)$ with $(z_1^{**}, z_2^{**}) = (34.86, 35.43)$ and $D^{**} = 39.02$. In the next step, the DM is asked to select the objective that has the least desirability for him. Suppose in the first interaction the DM adopts $z_2$. Therefore, we must solve the following problem,

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (29)

The optimal solution of (29) is $(x_1, x_2) = (5.24, 4.61)$ with $(z_1, z_2) = (32.89, 35.43)$ and $D = 34.65$. Table 1 summarizes the results of the proposed algorithm during the subsequent iterations.

As one can observe, we reach the feasible region after 8 iterations. The final step by which we reach the feasible region goes from $(x_1, x_2) = (4.04, 4.10)$ to $(x_1, x_2) = (4.18, 3.75)$ with feasible values $(z_1, z_2) = (26.65, 28.38)$. So, in order to reach the feasible region by a smaller step, we solve,

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (30)

Problem (30) leads to $(x_1, x_2) = (4.17, 3.75)$, with $(z_1, z_2) = (26.69, 28.38)$ and $\delta = 0.37$, which is the first feasible point on the boundary of the feasible region. Then, the DM is asked to determine the objective function with the least desirability. Suppose he adopts $z_1$, so we solve,

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (31)

Problem (31) leads to $(x_1, x_2) = (4.17, 3.75)$ with $(z_1, z_2) = (26.69, 28.38)$. As one can see, $z_1$ cannot be improved by moving from $(x_1, x_2) = (4.17, 3.75)$. So, we have $S = \{2\}$ and $z_2$ is chosen to be improved. We solve,

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (32)

Problem (32) leads to $(x_1, x_2) = (4.17, 3.75)$ with $(z_1, z_2) = (26.69, 28.38)$. As one can see, $z_2$ cannot be improved by moving from $(x_1, x_2) = (4.17, 3.75)$ either. So, $S = \emptyset$ and $(x_1, x_2) = (4.17, 3.75)$ with $(z_1, z_2) = (26.69, 28.38)$ is the final efficient feasible solution.

4.2 Example 2

Consider the following MOLP problem with three objective functions,

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (33)

Suppose that the values 12, 5, 45, 2, and 6 are specified by the DM for $w_1$, $w_2$, $w_3$, $w_4$, and $w_5$, respectively, and that we set $w' = 1000$. Also, 300, 50, and 30 are determined as the acceptable amounts of reduction for $z_1$, $z_2$, and $z_3$. The value of $\delta$ is determined as follows,

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Now, $z_1^*$, $z_2^*$, and $z_3^*$ must be found. Solving three LP problems for $z_1$, $z_2$, and $z_3$ separately yields $(x_1, x_2, x_3, x_4) = (17.22, 35.05, 0, 0)$ with $z_1^* = 2975.87$, $(x_1, x_2, x_3, x_4) = (16.66, 4.52, 10.20, 0)$ with $z_2^* = 386.64$, and $(x_1, x_2, x_3, x_4) = (36.82, 0, 0, 3.98)$ with $z_3^* = 310.45$, respectively. Then, we obtain the utopian point at which the three objectives attain at least their optimal values while we reach a common point. Therefore, we have,

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (34)

The optimal solution is $(x_1^{**}, x_2^{**}, x_3^{**}, x_4^{**}) = (57.56, 30, 0, 0)$ with $(z_1^{**}, z_2^{**}, z_3^{**}) = (2975.60, 555.36, 310.48)$ and $D^{**} = 33380.43$. In the next step, the DM is asked to select the objective that has the least desirability for him. Since the constraint associated with $z_2$ is not active, the DM is allowed to select one of the objectives $z_1$ or $z_3$ to keep its value. Suppose in the first iteration the DM adopts $z_3$. Therefore, the following problem should be solved,

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (35)

The optimal solution of (35) is $(x_1, x_2, x_3, x_4) = (56.74, 28.29, -0.17, 0)$ with $(z_1, z_2, z_3) = (2826.35, 534.22, 310.43)$ and $D = 31484.82$.

Table 2 summarizes the results of the proposed algorithm for example 2. Note that the constraint associated with $z_2$ is not active until iteration 8; therefore, the DM is allowed to choose $z_2$ as the least desirable objective only from iteration 8 onward.

According to Table 2, we reach the feasible region in iteration 22. So, solving the following problem helps us attain the boundary of the feasible region,

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (36)

The optimal solution of (36) is $(x_1, x_2, x_3, x_4) = (31.86, 12.52, 0, 0)$ with $(z_1, z_2, z_3) = (1320.2, 278.8, 192.28)$ and $\delta = 0.44$.

Suppose the DM adopts $z_1$ as the objective to be improved. Hence,

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (39)

The optimal solution of (39) is $(x_1, x_2, x_3, x_4) = (31.86, 12.52, 0, 0)$ with $(z_1, z_2, z_3) = (1320.2, 278.8, 192.28)$. Since $z_1$ does not change, we have $S = \{2, 3\}$. Then, $z_2$ is adopted by the DM to be improved, which leads to,

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (40)

The optimal solution of (40) is $(x_1, x_2, x_3, x_4) = (31.86, 12.52, 0, 0)$ with $(z_1, z_2, z_3) = (1320.2, 278.8, 192.28)$. Obviously, $z_2$ remains unchanged; so $S = \{3\}$. The only remaining objective is $z_3$, and we have,

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (41)

The optimal solution of (41) is $(x_1, x_2, x_3, x_4) = (31.86, 12.52, 0, 0)$ with $(z_1, z_2, z_3) = (1320.2, 278.8, 192.28)$. Since, as with $z_1$ and $z_2$, the value of $z_3$ remains unchanged, we have $S = \emptyset$. Therefore, the final efficient feasible solution is $(x_1, x_2, x_3, x_4) = (31.86, 12.52, 0, 0)$ with $(z_1, z_2, z_3) = (1320.2, 278.8, 192.28)$.

5. Conclusions

We have proposed a new interactive algorithm for solving MOLP problems that requires some initial information about the DM's preferences. Unlike the majority of interactive methods, the proposed method starts from a utopian point, which is usually infeasible, and moves towards the feasible region and the efficient set. Based on the results of several proved lemmas, we are able to specify the size of the steps towards the feasible region. Our method satisfies most of the characteristics of a good interactive method, such as simplicity of the judgments required of the DM, the opportunity to compensate for improper decisions made in previous interactions, and the ability to handle a nonlinear utility. The implementation of the proposed method has been demonstrated using two numerical examples.

doi: 10.4236/jsea.2009.24031

Received May 20th, 2009; revised June 19th, 2009; accepted June 29th, 2009.

REFERENCES

[1] F. B. Abdelaziz, "Multiple objective programming and goal programming: New trends and applications," European Journal of Operational Research, Vol. 177, pp. 1520-1522, 2007.

[2] M. M. Wiecek, "Multiple criteria decision making for engineering," Omega, Vol. 36, pp. 337-339, 2008.

[3] J. Kim and S. K. Kim, "A CHIM-based interactive Tchebycheff procedure for multiple objective decision making," Computers & Operations Research, Vol. 33, pp. 1557-1574, 2006.

[4] M. Sun, "Some issues in measuring and reporting solution quality of interactive multiple objective programming procedures," European Journal of Operational Research, Vol. 162, pp. 468-483, 2005.

[5] M. Zeleny, "Multiple criteria decision making," McGraw-Hill, New York, 1982.

[6] R. Keeney and H. Raiffa, "Decisions with multiple objectives: Preferences and value trade-offs," J. Wiley, New York, 1976.

[7] C. Romero, "Handbook of critical issues in goal programming," Pergamon Press, Oxford, 1991.

[8] M. Ida, "Efficient solution generation for multiple objective linear programming based on extreme ray generation method," European Journal of Operational Research, Vol. 160, pp. 242-251, 2005.

[9] L. Pourkarimi, M. A. Yaghoobi and M. Mashinchi, "Determining maximal efficient faces in multiobjective linear programming problem," Journal of Mathematical Analysis and Applications, Vol. 354, pp. 234-248, 2009.

[10] R. E. Steuer and C. A. Piercy, "A regression study of the number of efficient extreme points in multiple objective linear programming," European Journal of Operational Research, Vol. 162, pp. 484-496, 2005.

[11] E. A. Youness and T. Emam, "Characterization of efficient solutions for multi-objective optimization problems involving semi-strong and generalized semi-strong e-convexity," Acta Mathematica Scientia, Vol. 28B(1), pp. 7-16, 2008.

[12] S. I. Gass and P. G. Roy, "The compromise hypersphere for multiobjective linear programming," European Journal of Operational Research, Vol. 144, pp. 459-479, 2003.

[13] J. Chen and S. Lin, "An interactive neural network-based approach for solving multiple criteria decision making problems," Decision Support Systems, Vol. 36, pp. 137-146, 2003.

[14] A. Engau, "Tradeoff-based decomposition and decision-making in multiobjective programming," European Journal of Operational Research, Vol. 199, pp. 883-891, 2009.

[15] L. R. Gardiner and R. E. Steuer, "Unified interactive multiple objective programming," European Journal of Operational Research, Vol. 74, pp. 391-406, 1994.

[16] C. Homburg, "Hierarchical multi-objective decision making," European Journal of Operational Research, Vol. 105, pp. 155-161, 1998.

[17] C. L. Hwang and A. S. M. Masud, "Multiple objective decision making methods and applications," Springer-Verlag, Amsterdam, 1979.

[18] B. Malakooti and J. E. Alwani, "Extremist vs. centrist decision behavior: Quasi-convex utility functions for interactive multi-objective linear programming problems," Computers & Operations Research, Vol. 29, pp. 2003-2021, 2002.

[19] G. R. Reeves and K. R. MacLeod, "Some experiments in Tchebycheff-based approaches for interactive multiple objective decision making," Computers & Operations Research, Vol. 26, pp. 1311-1321, 1999.

[20] R. E. Steuer, J. Silverman, and A. W. Whisman, "A combined Tchebycheff/aspiration criterion vector interactive multi-objective programming procedure," Computers & Operations Research, Vol. 43, pp. 641-648, 1995.

[21] M. Sun, A. Stam, and R. E. Steuer, "Interactive multiple objective programming using Tchebycheff programs and artificial neural networks," Computers & Operations Research, Vol. 27, pp. 601-620, 2000.

[22] G. R. Reeves and J. J. Gonzalez, "A comparison of two interactive MCDM procedures," European Journal of Operational Research, Vol. 41, pp. 203-209, 1989.

[23] A. R. P. Borges and C. H. Antunes, "A visual interactive tolerance approach to sensitivity analysis in MOLP," European Journal of Operational Research, Vol. 142, pp. 357-381, 2002.

[24] J. T. Buchanan and H. G. Daellenbach, "A comparative evaluation of interactive solution methods for multiple objective decision models," European Journal of Operational Research, Vol. 29, pp. 353-359, 1987.

[25] A. A. Geoffrion, J. S. Dyer, and A. Feinberg, "An interactive approach for multi-criterion optimization with an application to the operation of an academic department," Management Science, Vol. 19, pp. 357-368, 1972.

[26] R. E. Steuer and E. U. Choo, "An interactive weighted Tchebycheff procedure for multiple objective programming," Mathematical Programming, Vol. 26, pp. 326-344, 1983.

[27] S. Zionts and J. Wallenius, "An interactive multiple objective linear programming method for a class of underlying nonlinear utility functions," Management Science, Vol. 29, pp. 519-529, 1983.

[28] D. Vanderpooten, "The interactive approach in MCDA: A technical framework and some basic conceptions," Mathematical and Computer Modelling, Vol. 12, pp. 1213-1220, 1989.

[29] G. R. Reeves and L. Franz, "A simplified interactive multiple objective linear programming procedure," Computers and Operations Research, Vol. 12, pp. 589-601, 1985.

(1) Department of Mathematics and Computer Science, Eindhoven University of Technology, Eindhoven, Netherlands; (2) Department of Industrial Engineering, Iran University of Science & Technology, Tehran, Iran.

```
Table 1. The detailed information for implementation of the
proposed method for example 1

Iteration   Objective    x_1    x_2      D      z_1     z_2

0           max z_1      1.95   5.49    0      34.86   20.73
0           max z_2      6.50   1.47    0      15.32   35.43
0           utopian      5.10   4.96   39.02   34.86   35.43
1           keep z_2     5.24   4.61   34.65   32.89   35.43
2           keep z_2     5.38   4.25   30.27   30.88   35.43
3           keep z_1     5.01   4.32   20.90   30.88   33.69
4           keep z_2     5.17   3.91   15.87   28.63   33.69
5           keep z_1     4.80   3.97    6.50   28.63   31.94
6           keep z_1     4.42   4.04    4.28   28.63   30.18
7           keep z_1     4.04   4.10    2.14   28.63   28.38
8           keep z_2     4.18   3.75    0      26.65   28.38
9           min delta    4.17   3.75    0      26.69   28.38
10          max z_1      4.17   3.75    0      26.69   28.38
11          max z_2      4.17   3.75    0      26.69   28.38

Table 2. The detailed information for implementation of the
proposed method for example 2

Iteration   Objective    x_1     x_2     x_3    x_4       D        z_1       z_2      z_3

0           max z_1     17.22   35.05    0      0          0      2976.2    348.67   -37.49
0           max z_2     16.66    4.52   10.2    0          0       783.2    386.6    233.08
0           max z_3     36.82    0       0      3.98       0       431.88   252.76   310.48
0           utopian     57.56   30       0      0      33380.43   2975.6    555.36   310.48
1           keep z_3    56.74   28.29   -0.17   0      31484.82   2826.35   534.22   310.48
2           keep z_3    55.92   26.58   -0.33   0      29613.44   2677.35   513.33   310.48
3           keep z_1    54.40   27.09   -1.35   0      27726.82   2677.35   482.28   283.55
4           keep z_3    53.58   25.38   -1.52   0      25860.25   2528.2    461.14   283.55
5           keep z_3    52.76   23.67   -1.68   0      23988.88   2379.2    440.25   283.55
6           keep z_3    51.94   21.96   -1.85   0      22118.36   2230.0    419.11   283.55
7           keep z_3    51.12   20.25   -2.02   0      20244.01   2080.7    397.97   283.55
8           keep z_1    49.60   20.76   -3.04   0      18351.77   2080.7    366.92   256.52
9           keep z_1    48.08   21.27   -4.06   0      16465.06   2080.7    335.87   229.57
10          keep z_3    47.26   19.56   -4.23   0      14599.55   1931.65   314.73   229.57
11          keep z_3    46.44   17.85   -4.39   0      12728.18   1782.65   293.84   229.57
12          keep z_3    45.62   16.14   -4.56   0      10857.66   1633.4    272.7    229.57
13          keep z_1    44.10   16.65   -5.58   0       8965.42   1633.4    241.65   202.59
14          keep z_1    42.58   17.16   -6.6    0       7078.71   1633.4    210.6    175.64
15          keep z_3    41.12   16.21   -5.83   0       5830.92   1562.3    214.44   177.95
16          keep z_3    39.75   15.32   -4.86   0       4855.56   1501.6    224.24   183.08
17          keep z_3    38.38   14.43   -3.88   0       3882.43   1441.2    234.29   188.33
18          keep z_2    37.01   13.54   -2.91   0       2906.67   1380.55   244.09   193.46
19          keep z_2    35.64   12.65   -1.93   0       1933.53   1320.15   254.14   198.71
20          keep z_1    33.95   12.59   -1.07   0       1066.54   1320.15   265.08   195.81
21          keep z_1    32.26   12.53   -0.2    0        203.27   1320.15   276.27   193.03
22          keep z_1    30.45   12.59    0      0.52       0      1320.15   274.99   182.73
23          min delta   31.86   12.52    0      0          0      1320.2    278.8    192.28
24          max z_1     31.86   12.52    0      0          0      1320.2    278.8    192.28
25          max z_2     31.85   12.52    0      0          0      1320.2    278.8    192.28
26          max z_3     31.86   12.52    0      0          0      1320.2    278.8    192.28
```
COPYRIGHT 2009 Scientific Research Publishing, Inc.

Authors: Sadrabadi, Mahmood Rezaei; Sadjadi, Seyed Jafar. Journal of Software Engineering and Applications (JSEA), Nov. 1, 2009.