LOW-COST EFFICIENT REMEDIAL STRATEGY FOR THE STAGNATED NELDER-MEAD SIMPLEX METHOD

Byline: J. Ali, M. Saeed, N. A. Chaudhry, M. F. Tabassum and M. Luqman

ABSTRACT:

The Nelder-Mead Simplex Algorithm, proposed in the 1960s, has been an enormously popular direct search method for unconstrained minimization. Despite its popularity, there exist counterexamples on which the method fails to find optimal solutions. This paper proposes a novel simplex-volume-based strategy for rescuing the method from stagnation or complete failure. The developed method was implemented to solve state-of-the-art benchmark functions. A comparison of the obtained results demonstrates the remarkably low computational cost and the superiority of the proposed method over a number of existing methods.

Keywords: Nelder-Mead Simplex Method, Stagnation, Repeated Focused Inside Contractions, Remedy, Positive Basis.

INTRODUCTION

The Nelder-Mead Simplex Algorithm (NMSA) is an upgraded version of the original simplex-based search method (Spendley et al., 1962). NMSA is obtained by furnishing Spendley's method with moves such as expansion and contraction (Lewis et al., 2000). NMSA is popular in the optimization community due to its ease of implementation and low computational cost compared to other derivative-free methods (Box, 1957). The convergence results for the multi-directional simplex-based method of (Torczon, 1989) cannot be carried over to NMSA because the interior angles of its simplexes change. Some convergence results for NMSA have been established in low dimensions (Lagarias et al., 1998, 2012), but these cannot be generalized to higher dimensions. There exist certain counterexamples of objective functions on which NMSA fails by performing Repeated Focused Inside Contractions (RFICs) (Han, 2000; McKinnon, 1998; Dennis and Woods, 1987).

One well-known counterexample is the 2-dimensional function reported by (McKinnon, 1998), presented below:

f(x, y) = θφ|x|^τ + y + y²,  for x ≤ 0

f(x, y) = θx^τ + y + y²,  for x ≥ 0

with the standard parameter values τ = 2, θ = 6 and φ = 60.

NMSA fails to converge to the optimal point when started with (0, 0), (1, 1) and (Eq.) as the vertices of the initial simplex. Another counterexample is the 2-dimensional non-convex function reported by (Han, 2000), given below:
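For reference, McKinnon's family of counterexample functions can be evaluated directly. The sketch below assumes the widely used instance τ = 2, θ = 6, φ = 60 (the paper leaves the exact instance to the citation); its minimum value matches the f(V*) = -0.25 at V* = (0, -0.5) that NS-NMSA reports later:

```python
# McKinnon's (1998) family of counterexample functions; the parameter
# choice tau=2, theta=6, phi=60 is the widely used instance and is an
# assumption here.
def mckinnon(x, y, tau=2.0, theta=6.0, phi=60.0):
    """Continuous at x = 0; the |x| term is inflated by phi for x <= 0."""
    if x <= 0:
        return theta * phi * abs(x) ** tau + y + y * y
    return theta * x ** tau + y + y * y

# The minimizer is (0, -0.5) with value -0.25.
print(mckinnon(0.0, -0.5))   # -0.25
```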

(EQUATION)

Whenever NMSA starts with the initial simplexes H1 = {(0, 1), (0, -1), (1, 0)} and H2 = {(-a, b), (a, -b), (1, 0)} with (Eq.) and (Eq.), it fails to converge to the optimal point.

The failures of NMSA have prompted researchers to moderate and equip NMSA with additional tools. Kelley proposed the idea of oriented restarts based on stagnation detection (Kelley, 1999); however, this modification fails to guarantee the sufficient descent condition (Price et al., 2002). The idea of using positive bases and frame-based techniques was introduced by (Coope and Price, 2003). In another study, a convergent variant of NMSA based on the principle of grid restrainment was proposed by (Burmen et al., 2006). For global and large-scale optimization, the adaptive Nelder-Mead method, the Variable Neighborhood Simplex Search method and a distributed-memory implementation of NMSA have been proposed (Klein and Neira, 2014; Gao and Han, 2012; Luangpaiboon, 2012; Luersen et al., 2004). Investigations of the efficiencies of a number of variants of NMSA were carried out by (Parkinson and Hutchinson, 1972).

Through extensive numerical experiments, Byatt recommends a reversing frames based variant of NMSA (Byatt, 2000).

Most modifications of NMSA couple additional strategies with it whenever it fails to make sufficient progress. In this study, a strategy based on new experimental stagnation-detection observations is proposed. The strategy attempts to rescue the stagnated NMSA by selecting certain elements from a set of search directions through the best vertex.

MATERIALS AND METHODS

Nelder-Mead Simplex Algorithm (NMSA): For minimizing an n-dimensional, bounded-below function (Eq.), each iteration of NMSA required a non-degenerate simplex of n+1 vertices:

(Eq.) with (Eq.).

Triangles and tetrahedrons were considered as two special cases of simplexes in 2 and 3 dimensions respectively. Every iteration of NMSA was initialized by ordering all the vertices of the current simplex as:

(EQUATION)

NMSA focused on improving V_1 indirectly by replacing the worst vertex V_{n+1} with a new point P using operations such as reflection, expansion and contraction, obeying the rule:

(EQUATION)

where the centroid, or aggregate, of all the vertices except the worst vertex was defined as:

(EQUATION)

The standard choices of the parameter λ (1, 2, 0.5 and -0.5) for the reflection, expansion, outside contraction and inside contraction moves respectively were considered (Nelder and Mead, 1965). The generated points were denoted by (Eq.) respectively. In the case of failure of these moves, a shrink move was applied, which replaced all the non-best vertices by new points using the rule:

(EQUATION)

As a result, the simplex was shrunk towards the best vertex. Iterations of the method were carried out until some specific termination criterion was fulfilled. The geometry of the moves and a schematic flowchart of NMSA are presented in Fig. 1(a-b). A detailed description of NMSA with tie-breaking rules was given by (Lagarias et al., 1998).
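The moves just described can be sketched in code. The following is a minimal sketch, not the authors' implementation: it uses the λ-parametrised trial points P = c + λ(c - V_worst) with λ in {1, 2, 0.5, -0.5} and the shrink fallback, but a simplified acceptance rule in place of the full tie-breaking logic of Lagarias et al. (1998):

```python
import numpy as np

def nmsa_iteration(simplex, f):
    """One simplified Nelder-Mead iteration.  Trial points are
    P = c + lam*(c - V_worst) for lam in {1, 2, 0.5, -0.5}
    (reflect, expand, outside/inside contract); the fallback is a
    shrink of all non-best vertices toward the best vertex."""
    simplex = sorted(simplex, key=f)          # V_1 best ... V_{n+1} worst
    best, worst = simplex[0], simplex[-1]
    c = np.mean(simplex[:-1], axis=0)         # centroid of non-worst vertices

    refl = c + 1.0 * (c - worst)              # lam = 1: reflection
    if f(refl) < f(best):                     # very good: try expansion
        exp = c + 2.0 * (c - worst)           # lam = 2
        simplex[-1] = exp if f(exp) < f(refl) else refl
    elif f(refl) < f(simplex[-2]):            # better than second worst
        simplex[-1] = refl
    else:                                     # contract, else shrink
        lam = 0.5 if f(refl) < f(worst) else -0.5
        cont = c + lam * (c - worst)
        if f(cont) < min(f(refl), f(worst)):
            simplex[-1] = cont
        else:                                 # shrink toward the best vertex
            simplex = [best] + [(v + best) / 2.0 for v in simplex[1:]]
    return simplex

f = lambda v: float(v[0] ** 2 + v[1] ** 2)    # toy strictly convex function
S = [np.array(p, float) for p in [(1.0, 1.0), (1.2, 0.0), (0.0, 0.8)]]
for _ in range(150):
    S = nmsa_iteration(S, f)
print(min(f(v) for v in S))                   # close to 0
```

On this convex quadratic the sketch behaves like standard NMSA; the counterexamples cited above show why such behaviour cannot be taken for granted in general.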

Non-Stagnated Nelder-Mead Simplex Algorithm (NS-NMSA): The following claims were proposed and established for a non-degenerate simplex (Eq.) with edges as the column vectors (Eq.); the matrix M of these column vectors is defined below:

(EQUATION)

Claims: For a non-degenerate simplex

(i) The default search directions (Eq.) and (Eq.) were linearly independent.

(ii) s and t were never orthogonal.

Proof: The proof followed from the non-degeneracy of the current simplex.

The proposed strategy comprised the following two phases.

i) Stagnation Determination Phase: In this phase, to monitor the performance of the method, an iteration of NMSA was referred to as successful if the best vertex of the simplex was improved; otherwise it was taken as a failed iteration. If CFI was the number of consecutive failed iterations, the method was allowed to work as long as CFI ≤ N0, N0 being a fixed positive integer; otherwise another counter, RIC, was used to count the repeated inside contractions. The method was again allowed to work as long as RIC ≤ N1. Otherwise, stagnation was detected and the method was directed to the next phase.
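The two counters can be sketched as follows. The names CFI, RIC, N0 and N1 follow the text; the reset-on-success behaviour and the rule that RIC is counted only once CFI has exceeded N0 are one reading of the description above:

```python
# Sketch of the stagnation-determination bookkeeping; the reset and
# counting rules here are assumptions consistent with the text, not
# the authors' exact implementation.
def detect_stagnation(outcomes, N0, N1):
    """outcomes: iterable of (improved_best, was_inside_contraction)
    pairs, one per NMSA iteration.  Returns the 1-based iteration at
    which stagnation is declared, or None."""
    cfi = ric = 0
    for k, (improved_best, inside_contraction) in enumerate(outcomes, 1):
        if improved_best:
            cfi = ric = 0                 # successful iteration: reset
            continue
        cfi += 1                          # consecutive failed iterations
        if cfi > N0 and inside_contraction:
            ric += 1                      # repeated inside contractions
        if ric > N1:
            return k                      # -> enter the remedial phase
    return None

# one success followed by ten failed inside contractions, N0=2, N1=3:
hist = [(True, False)] + [(False, True)] * 10
print(detect_stagnation(hist, N0=2, N1=3))
```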

ii) Remedial Phase: Recalling the notion of a positive basis B+ for a subspace (Eq.) (Coope and Price, 2003; Davis, 1954) and the result n+1 ≤ cardinality(B+) ≤ 2n reported by (Conn et al., 2009), the remedial phase started by generating a maximal positive standard basis (Eq.). The members of B+ were utilized in two ways, as presented below:
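A common concrete choice of maximal positive standard basis (assumed here, since the generated basis is in an elided equation) is {e_1, ..., e_n, -e_1, ..., -e_n}, which attains the 2n upper bound on cardinality quoted above:

```python
import numpy as np

# Maximal positive standard basis {+e_i, -e_i}: every vector in R^n is
# a nonnegative combination of its 2n members.
def maximal_positive_basis(n):
    eye = np.eye(n)
    return np.vstack([eye, -eye])         # 2n directions, one per row

B = maximal_positive_basis(3)
print(B.shape)                            # (6, 3)
# every nonzero vector makes a nonnegative inner product with at least
# one member, so B spans R^3 positively:
v = np.array([0.3, -1.2, 0.7])
print(max(B @ v) > 0)                     # True
```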

(1) Selection of certain search directions (Eq.) satisfying (EQUATION)

(2) When (EQUATION) for some (Eq.) then s was used in place of t.

Supposing that U was the set of all selected (Eq.), the volumes of the current and the expected simplexes were calculated. An expected simplex was obtained by replacing the edge (Eq.) by the selected (Eq.). Improvement was sought in the worst vertex or the best vertex according to the following scheme.

1. Set j =1

2. IF (Eq.), go to step 5 ELSE go to step 3

3. IF (Eq.) go to step 4 ELSE set j = j+1 and go to step 2

4. IF (EQUATION)

Set (EQUATION) and terminate.

ELSE Set j = j+1 and go to step 2

5. IF the best vertex was changed, update and terminate; ELSE complete a frame about the best vertex along the members of B+ at the prescribed distance and terminate.
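The volume comparison that drives this scheme can be computed from the edge matrix M set up for the claims above. A minimal sketch, assuming the standard determinant formula vol = |det M| / n! for a non-degenerate simplex:

```python
import numpy as np
from math import factorial

# Volume of a non-degenerate simplex from its edge matrix M of column
# vectors V_i - V_1; vol = |det(M)| / n!.  Comparing this quantity for
# the current and the expected simplex is what the scheme's volume
# test needs.
def simplex_volume(vertices):
    V = np.asarray(vertices, float)       # (n+1) x n array of vertices
    M = (V[1:] - V[0]).T                  # edges as columns
    n = V.shape[1]
    return abs(np.linalg.det(M)) / factorial(n)

# unit right triangle in 2-D has area 1/2:
print(simplex_volume([(0, 0), (1, 0), (0, 1)]))   # 0.5
```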

The resulting stagnation-free algorithm was named the Non-Stagnated Nelder-Mead Simplex Algorithm (NS-NMSA).

Convergence of NS-NMSA: For any simplex Y, its diameter was defined as:

(EQUATION)

Using the aggregate (centroid) point, the inner and outer central radii of the simplex were calculated as (Eq.) and (Eq.) respectively. Then

(EQUATION)

The sequence (Eq.) was constructed as defined below:

(EQUATION)

Provided that (Eq.) and (Eq.).

Then at any iteration k: (Eq.)
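The diameter and central radii can be computed directly. Since the paper's exact formulas sit in the elided equations, the sketch below assumes the diameter is the longest edge and the inner/outer central radii are the smallest/largest centroid-to-vertex distances:

```python
import numpy as np
from itertools import combinations

# Simplex metrics used in the convergence argument; the definitions of
# the inner/outer central radii here are an assumed reading of the
# elided equations.
def simplex_metrics(vertices):
    V = np.asarray(vertices, float)
    diam = max(np.linalg.norm(a - b) for a, b in combinations(V, 2))
    c = V.mean(axis=0)                    # aggregate (centroid) point
    dists = [np.linalg.norm(v - c) for v in V]
    return diam, min(dists), max(dists)   # diameter, inner, outer radii

diam, r_in, r_out = simplex_metrics([(0, 0), (2, 0), (0, 2)])
print(diam)                               # longest edge, 2*sqrt(2)
```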

Lemma: For n > 1 and a uniformly convex objective function, the sequence of diameters of the simplexes generated by NMSA converged to zero, provided that NMSA used infinitely many expansions or contractions (Gao and Han, 2012).

Corollary 1: By (6) and the above lemma, we obtained: (EQUATION)

Proposition: Each iteration of NMSA which reduced the simplex volume also reduced at least one of the radii.

Proof: At the kth iteration, NMSA aimed to replace the worst vertex (Eq.) of the simplex (Eq.) using equation (2); in case of failure, it replaced all the non-best vertices using equation (4). The relation between the volumes of the simplexes (Eq.) was given by (Eq.).

This resulted in a decreased volume for a contraction or a shrink step. When an inside contraction was accepted, the new simplex became (EQUATION).

With the usual meanings of the centroids (Eq.) and (Eq.), the following result was obtained.

The relation (Eq.) and equation (7) led to

(EQUATION)

In the case of a shrink step,

(EQUATION)

From (8) and (9) it was concluded that at least one central radius was reduced whenever the simplex volume was reduced.

Corollary: If NMSA performed a finite number of reflections and expansions, then the inner radius tended to zero as the iterations of NMSA proceeded.

The consequences of the above results supported the following assumptions:

a) The sequence (Eq.) was bounded.

b) f was continuously differentiable with a Lipschitz gradient on a bounded subset of R^n.

c) For each (Eq.) in B+, a frame was completed about (Eq.) at a step size (Eq.) so that

(EQUATION)

d) (EQUATION)

These assumptions established the existence of a stationary point (Nelder and Mead, 1965).

RESULTS AND DISCUSSIONS

Implementation Details: NS-NMSA was implemented starting with a non-degenerate regular initial simplex (Conn et al., 2009; Jacoby et al., 1972; Spendley et al., 1962). The other settings were:

(i) The termination criterion for NS-NMSA was:

(EQUATION) (Gao and Han, 2012)

(ii) The parameters N0 and N1 were set as N0 = dimension and N1 = 10 for the test problems in Table 2, and N0 = 2(dimension) for the counterexamples.

(iii) The reported quantities were FE (the number of function evaluations required to reach the minimum), ASV (the average simplex value, (Eq.)) and ANG (the angle between s and t).

Comparison of Performance on McKinnon's Function: The optimal solution found by NS-NMSA was V* = (0, -0.5) with f(V*) = -0.25, using 196 function evaluations. Fig. 2(d) exhibits the successive simplexes of NMSA, and Fig. 3(a) shows the convergence of ASV for NS-NMSA to the optimum value and the divergence of NMSA to a non-optimal value. Fig. 3(b) presents the diminishing variations of the angle ANG for both NMSA and NS-NMSA.

Comparison of Performance on Han's Function: NS-NMSA found the optimal point (0, -1.3623898059) at a cost of only 161 and 157 function evaluations when started with the simplexes H1 and H2 respectively. NMSA found the optimal point after a long stagnation at 86% higher computational cost when using simplex H1, but converged to a non-stationary point at approximately the same additional computational cost when using H2. Table 1 shows that in both cases NS-NMSA efficiently found the optimal solutions at a very low cost compared to the original method.

Table 1. Comparison of performances of NMSA and NS-NMSA on Han's function.

| Initial Simplex | NMSA FE | NMSA Function Value | NS-NMSA FE | NS-NMSA Function Value |
| H1 | 1178 | 5.43970418863036 | 161 | 5.43970418863036 |
| H2 | 1172 | 4.84336877871108 | 157 | 5.43970418863036 |

The iteration-wise progress of NMSA and NS-NMSA on Han's function for the two initial simplexes H1 and H2 is shown in Fig. 3(c, d, e, f).

NS-NMSA was also applied to a number of benchmark test problems listed by (Jorge et al., 1981) along with their standard initial guesses. The results were obtained using a regular simplex built from the standard initial guess, except for McKinnon's function. The comparison of the performance of NS-NMSA with other approaches from the literature is presented in Table 2.

Test functions of various dimensions, presented in Table 2, were considered for comparing the performance of the proposed method. Test functions 1-7 were of dimension 2, functions 8-13 of dimension 3, functions 14-19 of dimension 4, functions 20-21 of dimension 6, functions 22-23 of dimension 8, and function 24 of dimension 10. In past studies, the problems in Table 2 were also solved by standard NMSA and its convergent variants, as reported by (Byatt et al., 2000; Price et al., 2002). Table 2 shows that NS-NMSA outperformed standard NMSA and its convergent variants on most of the test functions.

For test functions 10 and 11, NS-NMSA was slightly more costly than the convergent variants but found the correct optimal point. NMSA was economical only for functions 6 and 16, and produced optimal points only after a large number of function evaluations for functions 9, 11 and 17. Some of the problems were also solved by a Direct Search Conjugate Directions Algorithm (DSCDA), as reported by (Coope and Price, 2000a).

As observed from Table 3, DSCDA was better than NS-NMSA only on functions 2, 4, 10 and 17; for the remaining functions, NS-NMSA outperformed DSCDA in terms of solution quality.

In another study, a Generating Set Search method based on curvature information (GSSC) was proposed, and the superiority of GSSC over the Compass Search Method (CSM) was demonstrated on numerical results (Frimannslund and Steihaug, 2007). The minimum function values (with the number of function evaluations in parentheses) found by GSSC for functions 1, 3, 4, 5, 7, 14, 18, 21, 22 and 25 were 7.2x10^-3 (445.5), 1x10^0 (59), 1x10^12 (77), 6.12x10^-3 (94), 9.84x10^-4 (172), 7.71x10^-3 (344), 9.01x10^-3 (434), 9.37x10^-3 (7421), 6.44x10^-3 (301) and 5.9x10^-3 (180) respectively. It is evident from Table 3 that the performance of NS-NMSA was noticeably better than that of GSSC, and hence better than that of CSM.

Price et al. (2008) developed two schemes, one using axis-parallel frames and the other randomly oriented frames, and applied them 5 times to each of the functions 1, 4, 5, 8, 12, 15, 22, 23 and 24. The comparison of the results of NS-NMSA with both schemes is presented in Table 4.

Table 2. Comparison of the performance of NS-NMSA with other approaches. The first group of columns reports the results of (McKinnon, 1998; Coope and Price, 2000), the second those of (Coope and Price, 2000), and the third those of NS-NMSA.

| No. | Function | FE | Function value | Rank | FE | Function value | Rank | FE | Function value | Rank |
| 1 | Rosenbrock | 219 | 1.099x10^-18 | 3 | 285 | 1.3905x10^-17 | 2 | 190 | 4.2525x10^-19 | 1 |
| 2 | Freundenstien and Roth | 172 | 48.9843 | 2 | 217 | 48.9843 | 3 | 153 | 48.984253 | 1 |
| 3 | Powell badly scaled | 754 | 1.1106x10^-25 | 2 | 969 | 4.2398x10^-25 | 3 | 796 | 1.8902x10^-27 | 1 |
| 4 | Brown badly scaled | 335 | 7.0386x10^-18 | 2 | 498 | 7.9979x10^-17 | 3 | 330 | 7.5665x10^-19 | 1 |
| 5 | Beale | 162 | 6.1142x10^-18 | 3 | 191 | .0782x10^-18 | 2 | 145 | 1.8681x10^-20 | 1 |
| 6 | Jenrich and Sampson | 133 | 124.362 | 2 | 157 | 124.362 | 3 | 154 | 124.362 | 1 |
| 7 | McKinnon | 290 | 2.5 | 2 | 426 | 2.5 | 3 | 196 | 2.5 | 1 |
| 8 | Helical Valley | 428 | 4.7847x10^-17 | 2 | 342 | 9.8321x10^-16 | 3 | 294 | 1.4833x10^-18 | 1 |
| 9 | Bard | 100004 | 1.74287 | 3 | 1134 | 1.74287 | 2 | 270 | 0.008214877 | 1 |
| 10 | Gaussian | 216 | 1.1279x10^-08 | 3 | 194 | 1.1279x10^-08 | 1 | 207 | 1.1279x10^-08 | 2 |
| 11 | Meyer | 100004 | 87.945 | 3 | 2801 | 87.945 | 1 | 3204 | 87.945 | 2 |
| 12 | Gulf Research | 687 | 1.1389x10^-22 | 2 | 529 | 5.4451x10^-19 | 3 | 899 | 1.2223x10^-24 | 1 |
| 13 | Box | 701 | 3.0574x10^-22 | 2 | 478 | 8.7045x10^-21 | 3 | 455 | 2.3120x10^-27 | 1 |
| 14 | Powell singular | 956 | 3.5635x10^-28 | 2 | 1045 | 6.735x10^-26 | 3 | 836 | 1.422x10^-32 | 1 |
| 15 | Woods | 572 | 1.5639x10^-17 | 2 | 656 | 2.574x10^-16 | 3 | 438 | 3.6737x10^-18 | 1 |
| 16 | Kowalik and Osborne | 398 | 3.0750x10^-4 | 1 | 653 | 3.0750x10^-04 | 3 | 408 | 3.0750x10^-4 | 2 |
| 17 | Brown and Dennis | 100004 | 85822.2 | 3 | 603 | 85822.2 | 2 | 454 | 85822.2016 | 1 |
| 18 | Penalty I | 1371 | 2.2499x10^-05 | 2 | 1848 | 2.2499x10^-05 | 3 | 888 | 2.2499x10^-05 | 1 |
| 19 | Penalty II | 3730 | 9.3762x10^-06 | 2 | 4689 | 9.3762x10^-06 | 3 | 2808 | 9.3762x10^-06 | 1 |
| 20 | Biggs Exp6 | 1130 | 5.6556x10^-03 | 3 | 4390 | 1.1613x10^-20 | 2 | 2784 | 1.264x10^-22 | 1 |
| 21 | Extended Rosenbrock | 7015 | 2.7907x10^-17 | 2 | 3110 | 1.3584x10^-14 | 3 | 2033 | 2.4378x10^-19 | 1 |
| 22 | Extended Powell | 2513 | 5.1316x10^-7 | 3 | 7200 | 6.4382x10^-24 | 2 | 3239 | 1.3818x10^-29 | 1 |
| 23 | Variably dimensional | 3780 | 2.0847x10^-16 | 2 | 2563 | 1.2478x10^-15 | 3 | 1028 | 2.3046x10^-17 | 1 |
| 24 | Trigonometric | 3105 | 2.7950x10^-5 | 3 | 2466 | 2.7950x10^-5 | 2 | 2406 | 6.0101x10^-18 | 1 |
| | Overall Ranks | | | 2.333 | | | 2.542 | | | 1.125 |

Table 3. Comparison of the performance of NS-NMSA with DSCDA (Coope and Price, 2000a).

| No. | Function | DSCDA FE | DSCDA Function value | Rank | NS-NMSA FE | NS-NMSA Function value | Rank |
| 1 | Rosenbrock | 380 | 3.6x10^-11 | 2 | 190 | 4.2525x10^-19 | 1 |
| 2 | Freundenstien and Roth | 75 | 48.984253 | 1 | 153 | 48.984253 | 2 |
| 3 | Powell badly scaled | 1784 | 6.7x10^-18 | 2 | 796 | 1.8902x10^-27 | 1 |
| 4 | Brown badly scaled | 58 | 1.4x10^-20 | 1 | 330 | 7.5665x10^-19 | 2 |
| 5 | Beale | 87 | 5.6x10^-13 | 2 | 145 | 1.8681x10^-20 | 1 |
| 6 | Jenrich and Sampson | 154 | 124.4 | 2 | 154 | 124.362 | 1 |
| 7 | McKinnon | 11 | 0 | 2 | 196 | 2.5 | 1 |
| 8 | Helical Valley | 303 | 4.2x10^-11 | 2 | 294 | 1.4833x10^-18 | 1 |
| 9 | Bard | 200 | 17.43 | 2 | 270 | 0.008214877 | 1 |
| 10 | Gaussian | 47 | 1.1x10^-8 | 1 | 207 | 1.1279x10^-08 | 2 |
| 11 | Meyer | 9070 | 87.95 | 2 | 3204 | 87.945 | 1 |
| 12 | Gulf Research | 655 | 1.8x10^-13 | 2 | 899 | 1.2223x10^-24 | 1 |
| 13 | Box | 227 | 0.01409 | 2 | 455 | 2.3120x10^-27 | 1 |
| 14 | Powell singular | 242 | 2.6x10^-11 | 2 | 836 | 1.422x10^-32 | 1 |
| 15 | Woods | 315 | 4.9x10^-12 | 2 | 438 | 3.6737x10^-18 | 1 |
| 16 | Kowalik and Osborne | 317 | 3.1x10^-4 | 2 | 408 | 3.0750x10^-4 | 1 |
| 17 | Brown and Dennis | 232 | 85822 | 1 | 454 | 85822.2016 | 2 |
| 18 | Penalty I | --- | --- | --- | 888 | 2.2499x10^-05 | --- |
| 19 | Penalty II | --- | --- | --- | 2808 | 9.3762x10^-6 | --- |
| 20 | Biggs Exp 6 | 3403 | 1.9x10^-11 | 2 | 2784 | 1.264x10^-22 | 1 |
| 21 | Extended Rosenbrock | --- | --- | --- | 2033 | 2.4378x10^-19 | --- |
| 22 | Extended Powell | --- | --- | --- | 3239 | 1.3818x10^-29 | --- |
| 23 | Variably dimensional | --- | --- | --- | 1028 | 2.3046x10^-17 | --- |
| 24 | Trigonometric | --- | --- | --- | 2406 | 6.0101x10^-18 | --- |
| | Overall Ranks | | | 1.78 | | | 1.22 |

Table 4. Comparison of the performance of NS-NMSA with the two schemes of (Price et al., 2008).

| No. | Function | Scheme 1 FE | Function value | Rank | Scheme 2 FE | Function value | Rank | NS-NMSA FE | Function value | Rank |
| 1 | Rosenbrock | 4338 | 5x10^-15 | 3 | 4442 | 8x10^-15 | 2 | 190 | 4.2525x10^-19 | 1 |
| 4 | Brown badly scaled | 10598 | 3x10^-6 | 2 | 9606 | 8x10^-6 | 3 | 330 | 7.5665x10^-19 | 1 |
| 5 | Beale | 3638 | 5x10^-16 | 3 | 3038 | 3x10^-16 | 2 | 145 | 1.8681x10^-20 | 1 |
| 8 | Helical Valley | 8406 | 3x10^-15 | 2 | 6595 | 4x10^-15 | 3 | 294 | 1.4833x10^-18 | 1 |
| 12 | Gulf Research | 15583 | 3x10^-12 | 3 | 9159 | 7x10^-16 | 2 | 899 | 1.2223x10^-24 | 1 |
| 15 | Woods | 15610 | 3x10^-14 | 3 | 15451 | 3x10^-14 | 2 | 438 | 3.6737x10^-18 | 1 |
| 22 | Extended Powell | 11074 | 1x10^-13 | 2 | 8381 | 2x10^-4 | 3 | 3239 | 1.3818x10^-29 | 1 |
| 23 | Variably dimensional | 34679 | 2x10^-4 | 3 | 42545 | 3x10^-15 | 2 | 1028 | 2.3046x10^-17 | 1 |
| 24 | Trigonometric | 14209 | 9x10^-16 | 3 | 14367 | 8x10^-17 | 2 | 2406 | 6.0101x10^-18 | 1 |
| | Overall Ranks | | | 2.667 | | | 2.334 | | | 1 |

In Tables 2-4, the algorithm that produced the lowest final function value on a test function was ranked 1; for equal final function values, the algorithm utilizing the smaller number of function evaluations was ranked 1. The last rows of Tables 2-4 show the overall ranks of the algorithms over the problems considered.
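This ranking rule amounts to lexicographic ordering on the pair (function value, FE). A small sketch with hypothetical entries:

```python
# Sketch of the ranking rule of Tables 2-4: rank first by final
# function value, breaking ties by fewer function evaluations.  The
# entries below are illustrative, not taken from the tables.
def rank_algorithms(results):
    """results: dict name -> (function_value, FE); returns name -> rank."""
    order = sorted(results, key=lambda a: results[a])   # lexicographic
    return {name: i + 1 for i, name in enumerate(order)}

ranks = rank_algorithms({"method A": (48.9843, 172),
                         "method B": (48.9843, 217),
                         "method C": (48.9843, 153)})
print(ranks)   # equal values, so fewest evaluations wins: C, A, B
```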

Conclusion: A low-cost, non-stagnating, convergent variant of the Nelder-Mead simplex algorithm has been proposed. The numerical results verify the efficiency of NS-NMSA in finding optimal solutions at a smaller computational cost. The statistical rankings confirm that NS-NMSA is practically competitive with, and superior to, a number of direct search methods reported in the literature.

REFERENCES

Box, G. E. P. (1957). Evolutionary Operation: A method for increasing industrial productivity. Applied Statistics 6(2): 81-101

Burmen, A., J. Puhan and T. Tuma, (2006). Grid Restrained Nelder-Mead Algorithm. Comput. Optim. Appl. 34(3): 359 - 375

Conn, A. R., K. Scheinberg and L. N. Vicente. (2009). Introduction to Derivative Free Optimization. MPS-SIAM Series on Optimization 16: 146 p

Coope, I. D. and C. J. Price, (2000). Frame Based Methods for Unconstrained Optimization. J. Optim. Theory Appl. 107 (2): 261-274

Coope, I. D. and C. J. Price, (2000a). A direct search conjugate directions algorithm for unconstrained minimization. ANZIAM J. 42(E): 478-498

Coope, I. D. and C. J. Price, (2003). Positive bases in numerical optimization. Comput. Optim. Appl. 21:169-175

Davis, C. (1954). Theory of positive linear dependence. Am. J. Math. 76 (4): 733 - 746

Dennis, J. E. and D. J. Woods, (1987). Optimization on Micro-computers, The Nelder-Mead Simplex Algorithm, New Computing Environments: Microcomputers in Large-Scale Computing: SIAM, Philadelphia. 116-122

Frimannslund, L. and T. Steihaug, (2007). A generating set search method using curvature information. Comput. Optim. Appl. 38: 105-121.

Gao, F. and L. Han, (2012). Implementing the Nelder-Mead Simplex Algorithm with adaptive parameters. Comp. Optim. Appl. 51(1): 259-277

Han, L. (2000). Algorithms in Unconstrained Optimization. University of Connecticut.

Jacoby, S. L. S., J. S. Kowalik and J. T. Pizzo, (1972). Iterative Methods for Nonlinear Optimization Problems, Prentice-Hall, Englewood Cliffs, (New Jersey) 125 p

Jorge, J. M., B. S. Garbow and K. E. Hillstrom, (1981). Testing Unconstrained Optimization Software. ACM T Math Software 7 (1): 17-41

Kelley, C. T. (1999). Detection and Remediation of Stagnation in the Nelder-Mead Algorithm using a Sufficient Decrease Condition. SIAM J Optimiz 10(1): 43-55

Klein, K. and J. Neira, (2014). Nelder Mead Simplex Optimization Routine for Large-Scale Problems: A Distributed Memory Implementation. Comput Econ 43(4):447 - 461

Lagarias, J. C., J. A. Reeds, M. H. Wright and P. E. Wright, (1998). Convergence Properties of the Nelder-Mead Simplex Algorithm in Low Dimensions. SIAM J Optimiz 9 (1): 112-147

Lagarias, J. C., B. Poonen and M. H. Wright, (2012). Convergence of the restricted Nelder-Mead method in two dimensions. SIAM J Optimiz 22: 501-532

Lewis, R. M., V. Torczon and M. W. Trosset, (2000). Direct search methods: then and Now. J Comput Appl Math 124 (1-2): 191-207

Luangpaiboon, P. (2012). Variable Neighborhood Simplex Search methods for Global Optimization models. J. Computer Sci 8(4): 613-620

Luersen, M.A., R. Le Riche and F. Guyon, (2004). A constrained, globalized, and bounded Nelder- Mead method. Struct Multidisc Optim 27(1): 43-54

McKinnon, K. I. M. (1998). Convergence of the Nelder-Mead Simplex Method to a non-stationary point. SIAM J Optimiz 9:148-158

Nelder, J. A. and R. Mead, (1965). A simplex method for function minimization. The COM JNL, 7(4): 308-313

Parkinson, J. M. and D. Hutchinson, (1972). An investigation into the efficiency of variants on the simplex method. In F. A. Lootsma, editor, Numerical Methods for Non-linear Optimization. Academic Press, London and New York, p. 115 - 135

Price, C. J., I. D. Coope and D. Byatt, (2002). A Convergent Variant of the Nelder-Mead Algorithm. J Optim Theory Appl 113 (1): 5-19

Price, C. J., M. Reale and B. L. Robertson, (2008). A direct search method for smooth and non-smooth unconstrained minimization. ANZIAM J. 48(CTAC 2006): 927-948

Sanja, S. and S. Singer, (1999). Complexity Analysis of Nelder-Mead Search Iterations. Proceedings of the 1. Conference on Applied Mathematics and Computation Dubrovnik, Croatia, September 13-18, 185-196

Spendley, W., G. R. Hext and F. R. Himsworth, (1962). Sequential Application of simplex designs in optimisation and evolutionary operation. Technometrics 4: 441-461.

Torczon, V. (1989). Multi-Directional Search: A parallel Search Algorithm for Parallel Machines. Department of Mathematics Sciences, Rice University, Houston.

Woods, D. J. (1985). An Interactive Approach for Solving Multi-Objective Optimization Problems, Technical Report 85-5, Mathematical Science Department, Rice University, Houston, TX. 77251.