# Influence of Lipschitz bounds on the speed of global optimization.

1. Introduction

Global optimization is used in fields such as economics, operations research, biology, engineering design and management, as well as numerous other engineering and applied sciences. For example, prediction methods in computational finance forecast various quantities related to stock markets, such as stock prices, stock volatility and ranking measures. Instead of the classical time-series approach, the prediction problem can be formulated as that of building a multi-stock artificial trader (Bjorkman, Holmstrom 1999). The behavior of the trader is controlled by a parameter vector which is tuned for the best performance. The global optimization algorithm DIRECT (Jones et al. 1993) was applied to find the optimal parameters. Extension of optimization to decision-making systems is a challenging topic of research (Sakalauskas, Zavadskas 2009).

One of the most investigated fields of global optimization is Lipschitz optimization (Strongin 1978; Horst et al. 1995; Strongin, Sergeyev 2000; Sergeyev, Kvasov 2008). In this paper we consider the problem of global optimization of a Lipschitz continuous objective function f: R^n → R over a given compact feasible region D ⊆ R^n. A function f: D → R is said to be Lipschitz continuous if it satisfies the condition

$$|f(x) - f(y)| \le L_q \|x - y\|_p, \quad \forall x, y \in D, \qquad (1)$$

where L_q = sup_{x ∈ D} ||∇f(x)||_q is a Lipschitz constant, ∇f(x) is the gradient of the function, ||x||_q denotes the l_q-norm, and 1/p + 1/q = 1, 1 ≤ p, q ≤ ∞. Since minimization can be transformed into maximization by changing the sign of the objective function, we consider only the maximization problem

$$f^* = f(x^*) = \max_{x \in D} f(x). \qquad (2)$$

Apart from the global optimum f*, one or all global optimizers x*: f(x*) = f* should be found. In Lipschitz optimization only a point x_opt ∈ D such that f(x_opt) differs from f* by no more than a specified accuracy ε can be found.

There are at least four approaches to obtaining an estimate of the Lipschitz constant (Sergeyev, Kvasov 2009): 1) it can be given a priori; 2) an adaptive global estimate over the whole domain can be used; 3) local Lipschitz constants can be estimated adaptively using the local tuning technique (Sergeyev 1995, 1998); 4) a set of possible estimates can be used (Jones et al. 1993).

The most studied case of problem (2) is the univariate one (n = 1), for which numerous algorithms have been proposed, compared, and theoretically investigated. In the present paper, we are interested in the multivariate case (n [greater than or equal to] 2).

In Lipschitz optimization the upper bound for f(x) over a sub-region I ⊆ D is evaluated by exploiting the Lipschitz condition (1). It follows from (1) that for all x, y ∈ I

$$f(x) \le f(y) + L_q \|x - y\|_p.$$

If y [member of] I is fixed, then the convex function

$$F_y^q(x) = f(y) + L_q \|x - y\|_p \qquad (3)$$

overestimates f(x) over I. The superscript q indicates the norm used for the upper bound calculation. Let T be a finite set of distinct points in I. Then the Piyavskii-type upper bound over I, given the function values f(y), y ∈ T, and the Lipschitz constant L_q, is provided by

$$\varphi^q(I) = \max_{x \in I} \min_{y \in T} \left( f(y) + L_q \|x - y\|_p \right). \qquad (4)$$

In the univariate case, the bounding function min_{y ∈ T} F_y^q(x) is piecewise linear, and φ^q can be determined in a simple, straightforward way (Piyavskii 1972). For n ≥ 2, however, problem (4) itself constitutes a difficult optimization problem.
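
In the univariate case this maximization has a closed form: between two adjacent sample points the two Lipschitz cones meet at a peak whose height depends only on the two function values and the gap between the points. A minimal sketch in Python (the function name is ours, not from the paper):

```python
def piyavskii_upper_bound(samples, L):
    """Upper bound for max f over [min x, max x] in one dimension.

    samples: list of (x, f(x)) pairs, assumed to include the interval
    endpoints; L: a valid Lipschitz constant for f.
    """
    samples = sorted(samples)
    best = max(fx for _, fx in samples)
    for (x0, f0), (x1, f1) in zip(samples, samples[1:]):
        # apex of the two Lipschitz cones raised over neighbouring points
        peak = 0.5 * (f0 + f1) + 0.5 * L * (x1 - x0)
        best = max(best, peak)
    return best
```

The peak formula follows from equating f(x0) + L(x − x0) with f(x1) + L(x1 − x) on [x0, x1].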

Therefore, branch and bound algorithms use considerably weaker bounds. In general, the weaker bounds belong to the following two simple families, μ_1 and μ_2. Let

$$\delta_q(I) = \max_{x, y \in I} \|x - y\|_p$$

denote the diameter of I. For example, if I is an n-rectangle, then δ_q(I) is the p-norm length of its main diagonal, and if I is an n-simplex, then the diameter δ_q(I) is the length of its longest edge. A simple upper bound can then be derived from (3):

$$\mu_1^q(I) = \max_{y \in T} f(y) + L_q \delta_q(I), \qquad (5)$$

where T ⊂ I is a finite sample of points in I at which the function values of f have been evaluated. If I is a rectangle or a simplex, the set T often coincides with the vertex set V(I). A tighter, but computationally more expensive, bound than (5) is

$$\mu_2^q(I) = \min_{y \in T} \left( f(y) + L_q \max_{x \in V(I)} \|x - y\|_p \right). \qquad (6)$$

Methods with the φ^q-type bounds (4) can hardly be used to solve typical test problems with relatively large Lipschitz constants and n ≥ 2. Branch and bound algorithms with the weaker bounds can almost always provide reasonable approximate optimal solutions for n = 3 (Horst et al. 1995); they usually involve more function evaluations (and thus are less suitable for very expensive functions), but much less auxiliary computational time than the methods with the φ^q-type bounds.
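
For the Euclidean norm and T = V(I), the two families of simple bounds can be sketched as follows (function names are ours; a hedged illustration, not the authors' code):

```python
import itertools
import math

def mu1(vertices, fvals, L):
    """Bound of type (5): best known value plus L times the diameter of I."""
    diam = max(math.dist(u, v) for u, v in itertools.combinations(vertices, 2))
    return max(fvals) + L * diam

def mu2(vertices, fvals, L):
    """Bound of type (6): for each vertex y, f(y) plus L times the distance
    to the farthest vertex; the smallest of these cone tops is still valid."""
    return min(fy + L * max(math.dist(y, v) for v in vertices)
               for y, fy in zip(vertices, fvals))
```

For the triangle (0,0), (1,0), (0,1) with vertex values 0, 1, 0 and L = 2, mu2 gives 2.0 while mu1 gives 1 + 2√2 ≈ 3.83, illustrating why the second family is tighter.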

Therefore, it is important to investigate bounds tighter than (5) and (6) but computationally less expensive than (4). In this paper, we propose a bound based on the function values at the vertices of a simplex and the radius of its circumscribed sphere.

2. Branch and bound with simplicial partitions and Lipschitz bound

Branch and bound is a technique for the implementation of covering global optimization methods as well as combinatorial optimization algorithms. An iteration of a classical branch and bound algorithm processes a node in the search tree representing a not yet explored subspace of the solution space. The iteration has three main components: selection of the node to process, branching of the search tree, and bound calculation. Several selection strategies (Paulavicius et al. 2010) may be used: best-first, depth-first, breadth-first, statistical (Zilinskas, A., Zilinskas, J. 2010). Partitions may be hyper-rectangular, simplicial (Horst 1997; Gorodetskij 1999), hyper-conic or hyper-spherical. Bounds may be computed using envelopes of functions (Androulakis et al. 1995; Adjiman, Floudas 1996; Zlobec 2010), interval arithmetic (Moore 1966; Hansen, Walster 2003; Zilinskas 2005), as well as the Lipschitz condition. Most Lipschitz global optimization branch and bound algorithms use hyper-rectangular partitions. Simplicial partitions are preferable when the values of the objective function at the vertices of partitions are used to compute bounds. Another advantage of simplicial partitions is that they may be used to vertex-triangulate feasible regions of non-rectangular shape defined by linear inequality constraints (Zilinskas 2008), which allows reducing the search space of problems with symmetric objective functions (Zilinskas 2007).

For simplicial branch and bound, the feasible region should be initially covered by simplices. The most preferable initial covering is face-to-face vertex triangulation: partitioning of the feasible region into finitely many n-dimensional simplices whose vertices are also vertices of the feasible region. We use a standard triangulation into n! simplices. All simplices share the diagonal of the feasible region and are of equal hyper-volume. It is also possible to over-cover the feasible region by one simplex (Zilinskas, A., Zilinskas, J. 2002). However, the objective function may be undefined outside the feasible region. Moreover, the hyper-volume of the over-covering simplex is considerably larger than that of the feasible region. For example, one version of over-covering is to fit a hyper-rectangle into a simplex matching a vertex: one vertex of the hyper-rectangle and one vertex of the simplex are matched, the edges of the simplex from this vertex include the edges of the hyper-rectangle from this vertex, and the opposite vertex of the hyper-rectangle is placed in the opposite face of the simplex. If a hyper-cube is over-covered using this strategy, the hyper-volume of the simplex is exponentially larger than the hyper-volume of the hyper-cube (approximately e^n times, since the volume ratio is n^n/n! ≈ e^n/√(2πn) by Stirling's formula).
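
The standard triangulation into n! simplices sharing the main diagonal can be generated from the permutations of the coordinate axes; a sketch for the unit cube (naming ours):

```python
from itertools import permutations

def kuhn_triangulation(n):
    """Partition the unit n-cube into n! simplices of equal hyper-volume.

    Each permutation of the axes gives one simplex: start at the origin and
    raise one coordinate to 1 at a time, in the permutation's order.  Every
    simplex therefore contains both the origin and the all-ones vertex,
    i.e. the simplices share the main diagonal of the cube.
    """
    simplices = []
    for order in permutations(range(n)):
        vertex = [0] * n
        simplex = [tuple(vertex)]
        for axis in order:
            vertex[axis] = 1
            simplex.append(tuple(vertex))
        simplices.append(simplex)
    return simplices
```

A general rectangle is handled the same way after scaling each coordinate.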

There are several possibilities to partition a simplex into smaller ones (Zilinskas, A., Zilinskas, J. 2002). Experiments have shown that the most preferable partitioning is a subdivision of the simplex into two by a hyper-plane passing through the middle point of the longest edge and the vertices which do not belong to the longest edge. The lower and upper bounds for the maximum of the function over a simplex are estimated using the function values at the vertices. At the beginning of the algorithm the rectangular feasible region is covered by simplices and the initial values are set. Then the loop is executed while the candidate list is not empty. The simplex with the best upper bound is chosen. If the difference between its upper bound and the global lower bound is larger than the predefined precision ε, the simplex is subdivided into two new simplices, which are inserted into the candidate list. Convergence of such a branch and bound algorithm for maximizing a Lipschitz function over n-simplices, using bisection at the midpoint of one of the longest edges, follows from the exhaustiveness of the subdivision and the continuity of f(x) (Horst et al. 1995).
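
The loop just described can be sketched as follows; for self-containedness we bound a simplex simply by max f(v) + L·δ(I) with T = V(I), and the function names and the heap-based best-first selection are our choices, not the authors' code:

```python
import heapq
import itertools
import math

def simplicial_maximize(f, vertices, L, eps=1e-2):
    """Best-first simplicial branch and bound for maximizing a Lipschitz f.

    vertices: the initial simplex; the upper bound of a simplex is
    max f(vertex) + L * diameter, and branching bisects the longest edge.
    """
    def upper(simplex, fvals):
        diam = max(math.dist(u, v)
                   for u, v in itertools.combinations(simplex, 2))
        return max(fvals) + L * diam

    fvals = [f(v) for v in vertices]
    best = max(fvals)                      # global lower bound
    heap = [(-upper(vertices, fvals), list(vertices), fvals)]
    while heap:
        neg_ub, simplex, fvals = heapq.heappop(heap)
        if -neg_ub - best <= eps:          # best simplex cannot improve enough
            break
        # bisect at the midpoint of the longest edge
        i, j = max(itertools.combinations(range(len(simplex)), 2),
                   key=lambda p: math.dist(simplex[p[0]], simplex[p[1]]))
        mid = tuple((a + b) / 2 for a, b in zip(simplex[i], simplex[j]))
        fmid = f(mid)
        best = max(best, fmid)
        for k in (i, j):                   # two children share the midpoint
            child = simplex.copy()
            child[k] = mid
            cfv = fvals.copy()
            cfv[k] = fmid
            heapq.heappush(heap, (-upper(child, cfv), child, cfv))
    return best
```

On termination f* − best ≤ eps, because the largest remaining upper bound dominates f*.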

2.1. Lipschitz bound over simplices based on the function values at the vertices and the radius of the circumscribed sphere

In this section a new Lipschitz bound over simplices is proposed. The bound is often stronger than the usually used trivial bounds and still computationally cheap, especially for low-dimensional problems (n ≤ 3), for which calculation of the radius of the circumscribed sphere is cheap. To find the radius, n + 2 determinants of (n + 1) × (n + 1) matrices must be calculated.

Proposition 1. Let f: D → R, D ⊂ R^n, be a Lipschitz continuous objective function, let I ⊂ D be an n-simplex, let R_2(I) denote the radius of its circumscribed n-sphere, and let V(I) denote its set of vertices. Then

$$\psi^2(I) = \max_{v \in V(I)} f(v) + L_2 R_2(I) \ge \max_{x \in I} f(x). \qquad (7)$$

Proof. The n-simplex I is covered by the n-balls O_i, i = 1, …, n + 1, of radius R_2(I) whose centers coincide with the vertices v_i of I (a two-dimensional example is shown in Fig. 1). Indeed, for any x ∈ I at least one vertex v_i satisfies ⟨v_i − x, c − x⟩ ≤ 0, where c is the circumcenter (otherwise x could not lie in the convex hull of the vertices), and for such a vertex ||x − v_i||_2 ≤ ||v_i − c||_2 = R_2(I). Hence, by the Lipschitz condition (1),

$$f(x) \le f(v_i) + L_2 \|x - v_i\|_2 \le \max_{v \in V(I)} f(v) + L_2 R_2(I). \qquad \square$$

[FIGURE 1 OMITTED]

2.2. Calculation of the radius of the circumscribed n-sphere

In this section a general formula for the radius R_2 of the circumscribed n-sphere is derived. A geometric construction for the radius of the circumscribed 2-sphere (circumscribed circle) is given by Pedoe (1995). The equation of the n-circumsphere of the n-simplex with the vertices v_1 = (v_11, …, v_1n), …, v_{n+1} = (v_{(n+1)1}, …, v_{(n+1)n}), expressed in determinant form, is

$$\begin{vmatrix} \|x\|_2^2 & x_1 & \cdots & x_n & 1 \\ \|v_1\|_2^2 & v_{11} & \cdots & v_{1n} & 1 \\ \vdots & \vdots & & \vdots & \vdots \\ \|v_{n+1}\|_2^2 & v_{(n+1)1} & \cdots & v_{(n+1)n} & 1 \end{vmatrix} = 0.$$

Expanding the determinant by the first row we get

$$a \|x\|_2^2 + \sum_{i=1}^{n} b_i x_i + c = 0, \qquad (8)$$

and, dividing by a (a ≠ 0 for a non-degenerate simplex),

$$\|x\|_2^2 + \sum_{i=1}^{n} \frac{b_i}{a} x_i + \frac{c}{a} = 0, \qquad (9)$$

where a, b_1, …, b_n and c denote the cofactors of the first-row entries ||x||_2^2, x_1, …, x_n and 1, respectively.

Completing the square in (9) gives:

$$\sum_{i=1}^{n} \left( x_i + \frac{b_i}{2a} \right)^2 = \sum_{i=1}^{n} \frac{b_i^2}{4a^2} - \frac{c}{a},$$

which is the n-circumsphere

$$\|x - x_0\|_2^2 = R_2^2$$

with the circumcenter

$$x_0 = \left( -\frac{b_1}{2a}, \ldots, -\frac{b_n}{2a} \right)$$

and the circumradius

$$R_2 = \sqrt{\sum_{i=1}^{n} \frac{b_i^2}{4a^2} - \frac{c}{a}}. \qquad (10)$$
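
Formula (10) translates directly into code: build the rows (||v_i||², v_i1, …, v_in, 1), take the n + 2 first-row cofactors, and read off the center and radius. A sketch with a naive Laplace-expansion determinant, adequate for the small matrices involved (names ours):

```python
import math

def det(m):
    """Determinant by Laplace expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def circumradius(vertices):
    """Circumradius of the n-simplex spanned by n + 1 vertices."""
    n = len(vertices) - 1
    rows = [[sum(c * c for c in v)] + list(v) + [1.0] for v in vertices]
    # cofactors of the first row (|x|^2, x_1, ..., x_n, 1) of the
    # circumsphere determinant
    cof = [(-1) ** j * det([r[:j] + r[j + 1:] for r in rows])
           for j in range(n + 2)]
    a, b, c = cof[0], cof[1:n + 1], cof[n + 1]
    center = [-bi / (2 * a) for bi in b]
    return math.sqrt(sum(x * x for x in center) - c / a)
```

The bound (7) is then the maximum of the vertex values plus L_2 times circumradius(vertices).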

2.3. Combination of bounds with various norms

Experiments have shown (Paulavicius, Zilinskas 2006, 2007) that no single norm, with its corresponding Lipschitz constant, is best for all problems, and therefore a combination of the infinity, Euclidean and first norms is used for calculation of the upper bound

$$\mu_2^{2,\infty}(I) = \min \left\{ \mu_2^2(I), \mu_2^{\infty}(I) \right\}, \qquad (11)$$

where μ_2^2(I) and μ_2^∞(I) are the bounds of type (6) computed with the Euclidean norm and the infinity norm and the corresponding Lipschitz constants L_2 and L_∞.

The Piyavskii-type (4) Lipschitz bound with the first norm, φ^1, was proposed in (Paulavicius, Zilinskas 2008):

$$\varphi^1(I) = \max_{x \in I} \min_{y \in T} \left( f(y) + L_1 \|x - y\|_{\infty} \right). \qquad (12)$$

However, the first norm does not always give the best bounds (Paulavicius, Zilinskas 2008). Further investigation (Paulavicius, Zilinskas 2009) has shown that an aggregate bound (AB) composed of the μ_2^{2,∞} and φ^1 bounds,

$$AB(I) = \min \left\{ \mu_2^{2,\infty}(I), \varphi^1(I) \right\}, \qquad (13)$$

with the components given by (11) and (12),

yields better results. The aggregate bound (13) may be improved by including the bound based on the radius of the circumscribed sphere. Therefore, we propose and experimentally investigate an improved aggregate bound (IAB)

$$IAB(I) = \min \left\{ \mu_2^{2,\infty}(I), \varphi^1(I), \psi^2(I) \right\}. \qquad (14)$$

3. Experimental investigation

In this section the results of computational experiments are presented and discussed. Various test problems (n ≥ 2) for global optimization from (Hansen, Jaumard 1995; Jansson, Knuppel 1992; Madsen, Zilinskas 2000) have been used in our experiments. The list of all used test problems and their global minima from the literature is shown in Table 1. For the n = 2, 3 test problems we use the same precision ε as used by Hansen and Jaumard (1995). The proposed algorithm assumes that Lipschitz constants are given a priori. These constants have been evaluated in the same way as in (Hansen, Jaumard 1995): a very fine grid search of 1000^n points for the n = 2, 3 and 100^n points for the n = 4, 5, 6 dimensional test functions has been used, and the obtained estimates should be close to the best Lipschitz constants. The values are given in (Paulavicius, Zilinskas 2006, 2007). The speed of global optimization has been measured by the number of function evaluations.
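
A grid estimate of the Lipschitz constant can be sketched as follows: evaluate forward differences at the lower corner of every grid cell and keep the largest resulting gradient norm. This is our simplified reading of such a procedure, not the authors' exact code:

```python
import itertools
import math

def grid_lipschitz_estimate(f, bounds, m):
    """Estimate L_2 = max ||grad f(x)||_2 over a box from an m-per-dimension
    grid, using forward differences at the lower corner of each grid cell."""
    n = len(bounds)
    step = [(hi - lo) / (m - 1) for lo, hi in bounds]
    axes = [[lo + step[k] * i for i in range(m)]
            for k, (lo, hi) in enumerate(bounds)]
    best = 0.0
    for idx in itertools.product(range(m - 1), repeat=n):
        p = [axes[k][idx[k]] for k in range(n)]
        fp = f(p)
        grad_sq = 0.0
        for k in range(n):
            q = p.copy()
            q[k] += step[k]           # forward step along axis k
            grad_sq += ((f(q) - fp) / step[k]) ** 2
        best = max(best, math.sqrt(grad_sq))
    return best
```

For a linear function the forward differences recover the gradient exactly; for smooth nonlinear functions a fine grid gives an estimate close to (but possibly below) the true constant.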

The results of the simplicial branch and bound algorithm with the proposed bound ψ^2 (7), based on the Euclidean norm, and with the bound μ_2^2 (6), based on the same norm, are shown in Table 2. For all test problems the number of function evaluations is smaller when the proposed bound ψ^2 is used; on average it is 26% smaller than when the bound μ_2^2 is used. The rate (ψ^2 < μ_2^2) shows how often the proposed bound ψ^2 is tighter than the bound μ_2^2. For the test functions used it is tighter on average in 70% of simplices. However, the bound ψ^2 does not always improve on μ_2^2 (the rate is not equal to 1). If the function values at the vertices V(I) of the simplex I differ significantly, i.e. max_{v ∈ V(I)} f(v) − min_{v ∈ V(I)} f(v) > L_2 (δ_2(I) − R_2(I)), then the μ_2^2-type bound is tighter than the proposed ψ^2 bound. If the difference between the maximal and minimal function values is smaller than L_2 (δ_2(I) − R_2(I)), then the proposed ψ^2-type bound gives better results. Therefore, for computationally expensive functions it might be worthwhile to append the bound ψ^2 to the aggregate bound AB (13). Performance of the simplicial branch and bound algorithm with the aggregate bound AB (13) is similar to that of the best branch and bound algorithm for Lipschitz optimization, and is often better (Paulavicius, Zilinskas 2009). The number of function evaluations is on average 23% smaller when the improved aggregate bound IAB (14) is used than when the aggregate bound AB (13) is used.

The value of the objective function at the same point is required when computing bounds over neighbouring sub-regions. If this value has already been evaluated for the parent sub-region, it is not necessary to evaluate it again. In the multidimensional case, however, the same points may also be involved in subdivisions of different sub-regions. It is possible to maintain a list of all points where the objective function has been evaluated, and to avoid evaluating it at such points several times by checking whether a point is in the list before evaluation. In the two-dimensional case the initial covering consists of two simplices. When one of them is subdivided into two by a hyper-plane passing through the middle point of the longest edge, the function value at this point is evaluated. Later in the optimization process the other simplex may be subdivided, again requiring the function value at the same point. By checking this point against the list, a repeated evaluation is avoided. Such vertex verification substantially reduces the number of function evaluations, but it also increases computational time, and is therefore more suitable for computationally expensive functions. The numbers of function evaluations of the algorithm with the improved aggregate bound and verification of vertices are shown in the column ĪAB of Table 2. These results significantly improve the performance of the proposed Lipschitz optimization algorithm (Paulavicius, Zilinskas 2009), whose results were already often better than those of the best branch and bound algorithm for Lipschitz optimization.
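
Vertex verification amounts to memoizing the objective on vertex coordinates. A sketch (the wrapper name and the quantized key are our choices; quantization absorbs floating-point noise in midpoints computed along different subdivision paths):

```python
def with_vertex_cache(f, tol=1e-12):
    """Wrap an objective so repeated evaluations at (numerically) the same
    vertex are served from a table instead of re-evaluating f."""
    table = {}
    counter = {"evaluations": 0}

    def wrapped(point):
        key = tuple(round(c / tol) for c in point)
        if key not in table:
            counter["evaluations"] += 1
            table[key] = f(point)
        return table[key]

    wrapped.counter = counter
    return wrapped
```

The trade-off is exactly the one described above: fewer evaluations of an expensive f against table lookups and memory for the list of points.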

It is interesting to compare the results with bounds of a different type. The αBB method (Androulakis et al. 1995) assumes that the objective function is twice continuously differentiable and uses this property to construct convex underestimators. Since the assumption of Lipschitz continuity is weaker, Lipschitz bounds should in general be worse than those of αBB; however, the size of the difference is not known. We compare IAB with a piecewise application of the αBB underestimator (Gounaris, Floudas 2008) to see the difference. Table 3 presents the upper bounds obtained for the maximum value of test problem No. 2 after the same numbers of objective function evaluations as the results given in (Gounaris, Floudas 2008). Their method subdivides the feasible region into N1 × N2 rectangles, which requires (N1 + 1) × (N2 + 1) function evaluations. Therefore, we investigate the upper bounds estimated after 4 (N1 = N2 = 1), 9 (N1 = N2 = 2), 16 (N1 = N2 = 3) and so on function evaluations. As expected, the αBB-type bound provides better (tighter) upper bounds. In the beginning the difference is larger, but it becomes smaller as the number of partitions increases.

4. Conclusions

In this paper a new Lipschitz bound over simplices has been proposed. It is based on the function values at the vertices and the radius of the circumscribed sphere of the simplex. The speed of the simplicial branch and bound algorithm with the proposed Lipschitz bounds has been investigated and compared. Test problems of various dimensionalities (n = 2, 3, 4, 5, 6) from the literature have been used for the experimental investigation of the algorithm.

The experiments showed that the proposed bound is often better than the usually used bound. For the test functions used, the rate (ψ^2 < μ_2^2) showed that the proposed bound ψ^2 is tighter than the bound μ_2^2 on average in 70% of simplices. Accordingly, the use of the proposed bound in simplicial branch and bound reduces the number of function evaluations by 26% compared to the usual bound. If the proposed bound is used in an aggregate bound with various norms, the performance improvement is 23% compared to the aggregate bound without the proposed bound.

The proposed method appears to be inferior to the [alpha]BB method. This is to be expected because the latter method assumes that the objective function is twice continuously differentiable.

doi: 10.3846/20294913.2012.661170

Acknowledgments

This research was funded by a grant (No. MIP-108/2010) from the Research Council of Lithuania.

Received 04 January 2011; accepted 10 November 2011

References

Adjiman, C. S.; Floudas, C. A. 1996. Rigorous convex underestimators for general twice-differentiable problems, J. Glob. Optim. 9: 23-40. http://dx.doi.org/10.1007/BF00121749

Androulakis, I. P.; Maranas, C. D.; Floudas, C. A. 1995. αBB: a global optimization method for general constrained nonconvex problems, J. Glob. Optim. 7: 337-363. http://dx.doi.org/10.1007/BF01099647

Bjorkman, M.; Holmstrom, K. 1999. Global optimization using the DIRECT algorithm in Matlab, Advanced Modeling and Optimization 1(2): 17-37.

Gorodetskij, S. Iu. 1999. Mnogoekstremalnaia optimizatsiia na osnove trianguliatsii oblasti poiska [Multiextremal optimization based on triangulation of the search domain], Matematicheskoe modelirovanie i optimalnoe upravlenie, Vestnik NNGU 2(21): 249-268 (in Russian).

Gounaris, Ch. E.; Floudas, Ch. A. 2008. Tight convex underestimators for C2-continuous problems: II. multivariate functions, J. Glob. Optim. 42: 69-89. http://dx.doi.org/10.1007/s10898-008-9288-8

Hansen, E.; Walster, G. 2003. Global Optimization Using Interval Analysis. 2nd edition. Marcel Dekker, New York.

Hansen, P.; Jaumard, B. 1995. Lipschitz optimization, in Horst, R.; Pardalos, P. (Eds.). Handbook of Global Optimization. Kluwer Academic Publishers, 404-493.

Horst, R. 1997. On generalized bisection of n-simplices, Mathematics of Computation 66(218): 691-698. http://dx.doi.org/10.1090/S0025-5718-97-00809-0

Horst, R.; Nast, M.; Thoai, N. V. 1995. New LP bound in multivariate Lipschitz optimization: theory and applications, Journal of Optimization Theory and Applications 86: 369-388. http://dx.doi.org/10.1007/BF02192085

Horst, R.; Pardalos, P. M.; Thoai, N. V. 1995. Introduction to Global Optimization. Kluwer Academic Publishers.

Jansson, C.; Knuppel, O. 1992. A Global Minimization Method: the Multi-Dimensional Case, Technical Report. TU Hamburg-Harburg.

Jones, D. R.; Perttunen, C. D.; Stuckman, B. E. 1993. Lipschitzian optimization without the Lipschitz constant, Journal of Optimization Theory and Applications 79(1): 157-181. http://dx.doi.org/10.1007/BF00941892

Madsen, K.; Zilinskas, J. 2000. Testing Branch-and-Bound Methods for Global Optimization, Technical Report IMM-REP-2000-05. Technical University of Denmark.

Moore, R. E. 1966. Interval Analysis. Prentice-Hall.

Paulavicius, R.; Zilinskas, J. 2006. Analysis of different norms and corresponding Lipschitz constants for global optimization, Technological and Economic Development of Economy 12(4): 301-306. http://dx.doi.org/10.1080/13928619.2006.9637758

Paulavicius, R.; Zilinskas, J. 2007. Analysis of different norms and corresponding Lipschitz constants for global optimization in multidimensional case, Information Technology and Control 36(4): 383-387.

Paulavicius, R.; Zilinskas, J. 2008. Improved Lipschitz bounds with first norm for function values over multidimensional simplex, Mathematical Modelling and Analysis 13(4): 553-563. http://dx.doi.org/10.3846/1392-6292.2008.13.553-563

Paulavicius, R.; Zilinskas, J. 2009. Global optimization using the branch-and-bound algorithm with a combination of Lipschitz bounds over simplices, Technological and Economic Development of Economy 15(2): 310-325. http://dx.doi.org/10.3846/1392-8619.2009.15.310-325

Paulavicius, R.; Zilinskas, J.; Grothey, A. 2010. Investigation of selection strategies in branch and bound algorithm with simplicial partitions and combination of Lipschitz bounds, Optimization Letters 4(2): 173-183. http://dx.doi.org/10.1007/s11590-009-0156-3

Pedoe, D. 1995. Circles: A Mathematical View. Washington, DC: Math. Assoc. Amer.

Piyavskii, S. A. 1972. An algorithm for finding the absolute extremum of a function, USSR Computational Mathematics and Mathematical Physics 12: 57-67. http://dx.doi.org/10.1016/0041-5553(72)90115-2

Sakalauskas, L.; Zavadskas, E. K. 2009. Optimization and intelligent decisions, Technological and Economic Development of Economy 15(2): 189-196. http://dx.doi.org/10.3846/1392-8619.2009.15.189-196

Sergeev, Ia. D.; Kvasov, D. E. 2008. Diagonalnye metody globalnoi optimizatsii. Moskva: Fizmatlit (in Russian).

Sergeyev, Ya. D. 1995. An information global optimization algorithm with local tuning, SIAM J. Optim. 5(4): 858-870. http://dx.doi.org/10.1137/0805041

Sergeyev, Ya. D. 1998. Global one-dimensional optimization using smooth auxiliary functions, Mathematical Programming 81(1): 127-146. http://dx.doi.org/10.1007/BF01584848

Sergeyev, Y. D.; Kvasov, D. E. 2009. Lipschitz global optimization and estimates of the Lipschitz constant, in Chaoqun, I. M.; Lean, Y.; Dabin, Z.; Zhongbao, Z. (Eds.). Global Optimization: Theory, Methods and Applications. Global Link Publ.: Hong Kong, 518-521.

Strongin, R. G.; Sergeyev, Ya. D. 2000. Global Optimization with Non-convex Constraints: Sequential and Parallel Algorithms. Kluwer Academic Publishers, Dordrecht.

Strongin, R. G. 1978. Chislennye metody v mnogoekstremalnykh zadachakh. Moskva: Nauka (in Russian).

Zilinskas, A.; Zilinskas, J. 2002. Global optimization based on a statistical model and simplicial partitioning, Computers and Mathematics with Applications 44(7): 957-967. http://dx.doi.org/10.1016/S0898-1221(02)00206-7

Zilinskas, A.; Zilinskas, J. 2010. P-algorithm based on a simplicial statistical model of multimodal functions, TOP 18(2): 396-412. http://dx.doi.org/10.1007/s11750-010-0153-9

Zilinskas, J. 2005. Comparison of packages for interval arithmetic, Informatica 16(1): 145-154.

Zilinskas, J. 2007. Reducing of search space of multidimensional scaling problems with data exposing symmetries, Information Technology and Control 36(4): 377-382.

Zilinskas, J. 2008. Branch and bound with simplicial partitions for global optimization, Mathematical Modelling and Analysis 13(1): 145-159. http://dx.doi.org/10.3846/1392-6292.2008.13.145-159

Zlobec, S. 2010. Characterizing zero-derivative points, J. Glob. Optim. 46: 155-161. http://dx.doi.org/10.1007/s10898-009-9457-4

Remigijus Paulavicius (1), Julius Zilinskas (2)

(1,2) Vilnius University, Institute of Mathematics and Informatics, Akademijos g. 4, LT-08663 Vilnius, Lithuania (1) Vilnius Pedagogical University, Studentu g. 39, LT-08106 Vilnius, Lithuania (2) Vilnius Gediminas Technical University, Sauletekio al. 11, LT-10223 Vilnius, Lithuania E-mails: (1) remigijus.paulavicius@vpu.lt (corresponding author); (2) julius.zilinskas@mii.vu.lt

Remigijus PAULAVICIUS is a junior researcher in Systems Analysis Department at Vilnius University Institute of Mathematics and Informatics, Lithuania. Research interests: local and global optimization, parallel computing.

Julius Zilinskas is a principal researcher in Systems Analysis Department at Vilnius University Institute of Mathematics and Informatics, Lithuania. He is a member of editorial boards of "Central European Journal of Computer Science", "Central European Journal of Engineering", "Computational Management Science", "Informatica", "Mathematical Modelling and Analysis", "Optimization Letters". His research interests include global optimization, visualization of multidimensional data and parallel computing.

Global optimization is used in such fields as economics, operations research, biology, engineering design and management as well as other numerous engineering and applied sciences. For example, prediction methods in computational finance forecast various kinds of quantities related to stock markets, like stock prices, stock volatility and ranking measures. Prediction problem of building a multi-stock artificial trader (Bjorkman, Holmstrom 1999) can be used instead of the classical time series approach. The behavior of the trader is controlled by a parameter vector which is tuned for the best performance. Global optimization algorithm DIRECT (Jones et al. 1993) was applied to find optimal parameters. Extension of optimization to decision-making systems is a challenging topic of research (Sakalauskas, Zavadskas 2009).

One of the most investigated fields of global optimization is Lipschitz optimization (Strongin 1978; Horst et al. 1995; Strongin, Sergeyev 2000; Sergeyev, Kvasov 2008). In this paper we consider the problem of global optimization of a Lipschitz continuous objective function f: [R.sup.n] [right arrow] R over a given compact feasible region D [subset or equal to] .sup.n]. A function f: D [subset] R, is said to be Lipschitz continuous if it satisfies the condition

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] (1)

where [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] is a Lipschitz constant, Vf (x) is the gradient of the function,[parallel]x[[parallel].sub.q] denotes the [l.sub.q]-norm and 1/ p +1/ q = 1, 1 [less than or equal to] p,q [less than or equal to] [infinity]. Since minimization can be transformed into maximization by changing the sign of the objective function, we will consider only the maximization problem

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] (2)

Apart from the global optimum f*, one or all global optimizers [x.sup.*]: f([x.sup.*]) = [f.sup.*] should be found. In Lipschitz optimization only a point [x.sub.opt] [member of] D such that f ([x.sub.opt]) differs from [f.sup.*] by no more than a specified accuracy s can be found.

There are at least four approaches to obtain an estimate of Lipschitz constant (Sergeyev, Kvasov 2009): 1) it can be given a priori; 2) its adaptive global estimate over the whole domain can be used; 3) local Lipschitz constants can be estimated adaptively using the local tuning technique (Sergeyev 1995, 1998); 4) a set of possible estimates can be used (Jones et al. 1993).

The most studied case of problem (2) is the univariate one (n = 1), for which numerous algorithms have been proposed, compared, and theoretically investigated. In the present paper, we are interested in the multivariate case (n [greater than or equal to] 2).

In Lipschitz optimization the upper bound for f (x) over a sub-region I [subset] D is evaluated by exploiting Lipschitz condition (1). It follows from (1) that for all x,y [member of] I

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.]

If y [member of] I is fixed, then the convex function

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] (3)

overestimates f (x) over I. The superscript q indicates the norm used for the upper bound calculation. Let T be a finite set of distinct points in I. Then, the Piyavskii-type upper bound over i, given the function values f(y),y [member of] T, and the Lipschitz constant L , is provided by

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] (4)

In the univariate case, the function [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.]) is piecewise linear, and [[phi].sup.q] can be determined in a simple straightforward way (Piyavskii 1972). For (n > 2), however, problem (4) constitutes a difficult optimization problem.

Therefore, branch and bound algorithms use considerably weaker bounds. In general, weaker bounds belong to the following two simple families [[mu].sub.2] and [[mu].sub.2]. Let

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.]

denote the diameter of I. For example, if [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] is an n -rectangle, then [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] , and if i is an n -simplex, then the diameter [[delta].sub.q](I) is the length of its longest edge. Afterwards a simple upper bound can be derived from (3):

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] (5)

where T [subset] I is a finite sample of points in I, where the function values of f have been evaluated. If I is a rectangle or a simplex, the set T often coincides with the vertex set V(I). A tighter but computationally more expensive than (5) bound is

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] (6)

Methods with the [[phi].sup.q] type bounds (4) can hardly be used to solve typical test problems with relatively large Lipschitz constants and (n > 2). Branch and bound algorithms can almost always provide reasonable approximate optimal solutions for (n = 3) (Horst et al. 1995), although the methods usually involve more function evaluations (and thus are less suitable in case of very expensive functions), but much less auxiliary computational time than the methods with the p q type bounds.

Therefore, it is important to investigate possibilities of bounds tighter than (5) and (6), but computationally less expensive than (4). In this paper, we propose a bound based on the function values at the vertices and the circumradius of an n-circumsphere.

2. Branch and bound with simplicial partitions and Lipschitz bound

Branch and bound is a technique for implementing covering global optimization methods as well as combinatorial optimization algorithms. An iteration of a classical branch and bound algorithm processes a node in the search tree representing a not yet explored subspace of the solution space. The iteration has three main components: selection of the node to process, branching of the search tree, and bound calculation. Several selection strategies (Paulavicius et al. 2010) may be used: best-first, depth-first, breadth-first, statistical (Zilinskas, A., Zilinskas, J. 2010). Partitions may be hyper-rectangular, simplicial (Horst 1997; Gorodeckij 1999), hyper-conic or hyper-spherical. Bounds may be computed using envelopes of functions (Androulakis et al. 1995; Adjiman, Floudas 1996; Zlobec 2010), interval arithmetic (Moore 1966; Hansen, Walster 2003; Zilinskas 2005) or the Lipschitz condition. Most Lipschitz global optimization branch and bound algorithms use hyper-rectangular partitions. Simplicial partitions are preferable when the values of the objective function at the vertices of partitions are used to compute bounds. Another advantage of simplicial partitions is that they may be used to vertex-triangulate feasible regions of non-rectangular shape defined by linear inequality constraints (Zilinskas 2008), which allows reducing the search space of problems with symmetric objective functions (Zilinskas 2007).

For simplicial branch and bound, the feasible region should be initially covered by simplices. The most preferable initial covering is face to face vertex triangulation--partitioning of the feasible region into finitely many n-dimensional simplices whose vertices are also the vertices of the feasible region. We use a standard way of triangulation into n! simplices. All simplices share the diagonal of the feasible region and are of equal hyper-volume. It is also possible to over-cover the feasible region by one simplex (Zilinskas, A., Zilinskas, J. 2002). However, the objective function may be undefined outside the feasible region. Moreover, the hyper-volume of the over-covering simplex is considerably larger than that of the feasible region. For example, one version of over-covering is to fit a hyper-rectangle into a simplex matching a vertex: one vertex of the hyper-rectangle and one vertex of the simplex are matched, the edges of the simplex from this vertex include the edges of the hyper-rectangle from this vertex, and the opposite vertex of the hyper-rectangle is placed in the opposite face of the simplex. If a hyper-cube is over-covered using this strategy, the hyper-volume of the simplex is exponentially larger than the hyper-volume of the hyper-cube (approximately $e^{n}$ times larger: the volume ratio is $n^{n}/n! \approx e^{n}/\sqrt{2\pi n}$ by Stirling's formula).
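The standard triangulation into n! simplices sharing the main diagonal can be sketched as follows for the unit hyper-cube (`kuhn_triangulation` is a hypothetical name for this well-known construction):

```python
import itertools
import numpy as np

def kuhn_triangulation(n):
    """Triangulate the unit n-cube into n! simplices of equal hyper-volume 1/n!.
    For a permutation p, the simplex has vertices v_0 = (0,...,0) and
    v_k = v_{k-1} + e_{p[k-1]}; every simplex contains the edge from the origin
    to (1,...,1), i.e. all simplices share the main diagonal."""
    simplices = []
    for perm in itertools.permutations(range(n)):
        v = np.zeros(n)
        verts = [v.copy()]
        for axis in perm:
            v[axis] = 1.0          # step along one coordinate axis at a time
            verts.append(v.copy())
        simplices.append(np.array(verts))
    return simplices

tris = kuhn_triangulation(2)
print(len(tris))  # 2: the unit square splits into two triangles
```

A general rectangle [a, b] is handled by scaling and shifting the vertices coordinate-wise.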

There are several possibilities to partition a simplex into smaller ones (Zilinskas, A., Zilinskas, J. 2002). Experiments have shown that the most preferable partitioning is a subdivision of the simplex into two by a hyper-plane passing through the middle point of the longest edge and the vertices which do not belong to the longest edge. The lower and upper bounds for the maximum of the function over a simplex are estimated using the function values at the vertices. At the beginning of the algorithm the rectangular feasible region is covered by simplices and initial values are set. Then the main loop is executed while the candidate list is not empty. The simplex with the best upper bound is chosen. If the difference between its upper bound and the global lower bound is larger than the predefined precision [epsilon], the simplex is subdivided into two new simplices, which are inserted into the candidate list. Convergence of such a branch and bound algorithm for maximizing a Lipschitz function over n-simplices, using bisection at the midpoint of one of the longest edges and the upper bounds considered above, follows from the exhaustiveness of the subdivision and the continuity of the function f(x) (Horst et al. 1995).
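The loop just described can be sketched as a best-first branch and bound (a minimal sketch, not the paper's implementation: the simple bound "max vertex value + L times longest edge" stands in for the bounds discussed above, and `simplicial_bb_max` is a hypothetical name):

```python
import heapq
import itertools
import numpy as np

def simplicial_bb_max(f, simplices, L, eps, max_iter=200000):
    """Best-first simplicial branch and bound for maximization (a sketch)."""
    best_val, best_x = -np.inf, None
    counter = itertools.count()   # tie-breaker so heap never compares arrays
    heap = []

    def push(verts):
        nonlocal best_val, best_x
        fv = [f(v) for v in verts]
        k = int(np.argmax(fv))
        if fv[k] > best_val:      # lower bound: best value observed so far
            best_val, best_x = fv[k], verts[k]
        # longest edge and a simple Lipschitz upper bound over the simplex
        d, (i, j) = max(
            (np.linalg.norm(u - w), (a, b))
            for (a, u), (b, w) in itertools.combinations(enumerate(verts), 2))
        ub = max(fv) + L * d
        heapq.heappush(heap, (-ub, next(counter), verts, (i, j)))

    for s in simplices:
        push([np.asarray(v, float) for v in s])
    for _ in range(max_iter):
        neg_ub, _, verts, (i, j) = heapq.heappop(heap)
        if -neg_ub - best_val <= eps:    # eps-optimality proved
            break
        mid = (verts[i] + verts[j]) / 2  # bisect the longest edge
        push([mid if k == i else v for k, v in enumerate(verts)])
        push([mid if k == j else v for k, v in enumerate(verts)])
    return best_x, best_val

f = lambda x: -((x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2)
tris = [np.array([[0., 0.], [1., 0.], [1., 1.]]),
        np.array([[0., 0.], [0., 1.], [1., 1.]])]
x_best, v_best = simplicial_bb_max(f, tris, L=2.0, eps=0.05)
print(v_best)  # within eps of the true maximum 0
```

This sketch re-evaluates f at shared vertices; avoiding that is exactly the vertex verification discussed in Section 3.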

2.1. Lipschitz bound over simplices based on the function values at the vertices and the radius of the circumscribed sphere

In this section a new Lipschitz bound over simplices is proposed. The bound is often stronger than the usually used trivial bounds and still computationally cheap, especially for low-dimensional problems (n [less than or equal to] 3), for which calculation of the radius of the circumscribed sphere is cheap. To find the radius, n + 2 determinants of (n + 1) x (n + 1) matrices must be calculated.

Proposition 1. Let $f: D \to \mathbb{R}$, $D \subset \mathbb{R}^{n}$, be a Lipschitz continuous objective function, let I [subset] D be an n-simplex, let [R.sub.2](I) denote the radius of its circumscribed n-sphere and V(I) its set of vertices. Then

$$\max_{x \in I} f(x) \le \psi^{2}(I) = \max_{v \in V(I)} f(v) + L_{2} R_{2}(I).$$ (7)

Proof. The n-simplex I is covered by the n + 1 balls $O_{i} = \{ x : \lVert x - v_{i} \rVert_{2} \le R_{2}(I) \}$, $i = 1, \ldots, n + 1$, of the common radius [R.sub.2](I) centered at the vertices [v.sub.i] of I (a two-dimensional example is shown in Fig. 1), since every point of I lies within the distance [R.sub.2](I) of some vertex. Hence, for any x [member of] I there is a vertex [v.sub.i] with x [member of] [O.sub.i], and

$$f(x) \le f(v_{i}) + L_{2} \lVert x - v_{i} \rVert_{2} \le f(v_{i}) + L_{2} R_{2}(I) \le \max_{v \in V(I)} f(v) + L_{2} R_{2}(I),$$

which proves (7).

[FIGURE 1 OMITTED: a two-dimensional example of covering a simplex by circles of radius [R.sub.2](I) centered at its vertices]
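A numeric illustration of the proposition (a sketch assuming the bound takes the form "max vertex value + [L.sub.2][R.sub.2](I)", with the circumcenter found as the point equidistant from all vertices; the function names are hypothetical):

```python
import numpy as np

def circumradius(vertices):
    """Circumradius of an n-simplex: solve the linear system
    2*(v_i - v_0) . x = |v_i|^2 - |v_0|^2 for the equidistant point."""
    V = np.asarray(vertices, float)          # (n+1) x n
    A = 2 * (V[1:] - V[0])
    rhs = (V[1:] ** 2).sum(axis=1) - (V[0] ** 2).sum()
    center = np.linalg.solve(A, rhs)
    return float(np.linalg.norm(center - V[0]))

def psi2(f_vals, vertices, L2):
    """Proposed-style bound: max vertex value + L2 * circumradius."""
    return max(f_vals) + L2 * circumradius(vertices)

# right triangle (0,0), (2,0), (0,2): circumradius = half the hypotenuse
tri = [[0, 0], [2, 0], [0, 2]]
print(circumradius(tri))                   # sqrt(2)
print(psi2([0.0, 0.5, 1.0], tri, L2=1.0))  # 1.0 + sqrt(2)
```

Since the circumradius never exceeds the longest edge, this bound competes with the diameter-based bound whenever the vertex values are close to each other, matching the comparison reported in Section 3.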

2.2. Calculation of the radius of the circumscribed n-sphere

In this section a general formula to calculate the radius [R.sub.2] of the circumscribed n-sphere is derived. A geometric construction for the radius of the circumscribed 2-sphere (circumscribed circle) is given by Pedoe (1995). The equation of the n-circumsphere of the n-simplex with the vertices $v_{1} = (v_{11}, \ldots, v_{1n}), \ldots, v_{n+1} = (v_{(n+1)1}, \ldots, v_{(n+1)n})$, expressed in determinant form, is

$$\begin{vmatrix} \sum_{i=1}^{n} x_{i}^{2} & x_{1} & \cdots & x_{n} & 1 \\ \sum_{i=1}^{n} v_{1i}^{2} & v_{11} & \cdots & v_{1n} & 1 \\ \vdots & \vdots & & \vdots & \vdots \\ \sum_{i=1}^{n} v_{(n+1)i}^{2} & v_{(n+1)1} & \cdots & v_{(n+1)n} & 1 \end{vmatrix} = 0.$$

Expanding the determinant by the first row we get

$$a \sum_{i=1}^{n} x_{i}^{2} + \sum_{i=1}^{n} b_{i} x_{i} + c = 0,$$ (8)

and, after division by a (a [not equal to] 0 for a non-degenerate simplex),

$$\sum_{i=1}^{n} x_{i}^{2} + \sum_{i=1}^{n} \frac{b_{i}}{a} x_{i} + \frac{c}{a} = 0,$$ (9)

where

a, [b.sub.1], ..., [b.sub.n] and c denote the cofactors of the first-row entries $\sum_{i=1}^{n} x_{i}^{2}$, [x.sub.1], ..., [x.sub.n] and 1, respectively; these are the n + 2 determinants of (n + 1) x (n + 1) matrices mentioned above.

Completing the square in (9) gives:

$$\sum_{i=1}^{n} \Bigl( x_{i} + \frac{b_{i}}{2a} \Bigr)^{2} = \sum_{i=1}^{n} \frac{b_{i}^{2}}{4 a^{2}} - \frac{c}{a},$$

which is the n-circumsphere

$$\lVert x - x_{c} \rVert_{2}^{2} = R_{2}^{2}$$

with the circumcenter

$$x_{c} = \Bigl( -\frac{b_{1}}{2a}, \ldots, -\frac{b_{n}}{2a} \Bigr)$$

and the circumradius

$$R_{2} = \frac{\sqrt{\sum_{i=1}^{n} b_{i}^{2} - 4 a c}}{2 \lvert a \rvert}.$$ (10)
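The cofactor computation can be sketched directly (the helper name is hypothetical; n + 2 determinants of (n + 1) x (n + 1) matrices are evaluated, matching the count stated in Section 2.1):

```python
import numpy as np

def circumsphere(vertices):
    """Circumcenter and circumradius of an n-simplex from the cofactor expansion
    a*sum(x_i^2) + sum(b_i*x_i) + c = 0 of the circumsphere determinant."""
    V = np.asarray(vertices, float)          # (n+1) x n
    m, n = V.shape
    assert m == n + 1, "an n-simplex has n + 1 vertices"
    sq = (V ** 2).sum(axis=1)                # |v_j|^2 for each vertex
    M = np.hstack([sq[:, None], V, np.ones((m, 1))])   # vertex rows, (n+1) x (n+2)

    def cof(k):
        # signed cofactor of first-row entry k: delete column k, apply (-1)^k
        cols = [c for c in range(n + 2) if c != k]
        return (-1) ** k * np.linalg.det(M[:, cols])

    a = cof(0)
    b = np.array([cof(i + 1) for i in range(n)])
    c = cof(n + 1)
    center = -b / (2 * a)
    radius = np.sqrt(b @ b - 4 * a * c) / (2 * abs(a))
    return center, radius

center, R = circumsphere([[0, 0], [2, 0], [0, 2]])
print(center, R)  # (1, 1) and sqrt(2): half the hypotenuse of the right triangle
```

For n much larger than 3 the determinant evaluations dominate the cost, which is why the text recommends the bound mainly for low-dimensional problems.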

2.3. Combination of bounds with various norms

Experiments have shown (Paulavicius, Zilinskas 2006, 2007) that no single norm and corresponding Lipschitz constant is best for all problems; therefore a combination of the infinity, Euclidean and first norms is used for calculation of the upper bound

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] (11)

where

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.]

The Piyavskii-type Lipschitz bound (4) with the first norm, [[phi].sup.1], was proposed in (Paulavicius, Zilinskas 2008):

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] (12)

However, the first norm does not always give the best bounds (Paulavicius, Zilinskas 2008). Further investigation (Paulavicius, Zilinskas 2009) has shown that an aggregate bound (AB) composed of the [[mu].sup.2.sub.2], [[mu].sup.[infinity].sub.2] and [[phi].sup.1] bounds

$$\mathrm{AB}(I) = \min \bigl\{ \mu_{2}^{2}(I), \; \mu_{2}^{\infty}(I), \; \varphi^{1}(I, V(I)) \bigr\},$$ (13)

where

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.]

yields better results. The aggregate bound (13) may be improved by including the bound based on the radius of the circumscribed sphere. Therefore, we propose and experimentally investigate an improved aggregate bound (IAB)

$$\mathrm{IAB}(I) = \min \bigl\{ \mathrm{AB}(I), \; \psi^{2}(I) \bigr\}.$$ (14)
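Since every component overestimates the maximum of f over the simplex, combining them by a minimum preserves validity and can only tighten the result. A trivial sketch with hypothetical component values:

```python
def aggregate(*component_bounds):
    """Each component is a valid upper bound on max f over the simplex,
    so their minimum is also a valid upper bound (and the tightest available)."""
    return min(component_bounds)

ab = aggregate(4.2, 3.9, 5.1)         # hypothetical mu- and phi-type values
iab = aggregate(4.2, 3.9, 5.1, 3.7)   # ... plus a hypothetical psi^2 value
print(ab, iab)  # 3.9 3.7
```

Adding a component can never loosen the aggregate, so the only question, addressed experimentally below, is whether the extra tightness repays its computational cost.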

3. Experimental investigation

In this section the results of computational experiments are presented and discussed. Various test problems (n [greater than or equal to] 2) for global optimization from (Hansen, Jaumard 1995; Jansson, Knuppel 1992; Madsen, Zilinskas 2000) have been used in our experiments. The list of all used test problems and their global minima from the literature is shown in Table 1. For the n = 2, 3 test problems we use the same precision [epsilon] as Hansen and Jaumard (1995). The proposed algorithm assumes that the Lipschitz constants are given a priori. These constants have been evaluated in the same way as in (Hansen, Jaumard 1995): a very fine grid search of [1000.sup.n] points for the n = 2, 3 and [100.sup.n] points for the n = 4, 5, 6 dimensional test functions has been used, and the obtained estimates should be close to the best Lipschitz constants. The values are given in (Paulavicius, Zilinskas 2006, 2007). The speed of global optimization has been estimated using the number of function evaluations as the criterion.
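The grid-search estimation of the Euclidean Lipschitz constant can be sketched with finite differences (the function name is hypothetical, a much coarser grid is used than in the experiments, and the estimate approaches the true constant only as the grid is refined):

```python
import numpy as np

def estimate_lipschitz_l2(f, lower, upper, m=101):
    """Estimate the Euclidean Lipschitz constant as the largest gradient norm
    over a regular grid, with partial derivatives from finite differences."""
    axes = [np.linspace(l, u, m) for l, u in zip(lower, upper)]
    mesh = np.meshgrid(*axes, indexing="ij")
    vals = f(*mesh)                      # f must accept coordinate arrays
    grads = np.gradient(vals, *axes)     # list of partial-derivative arrays
    norm = np.sqrt(sum(g ** 2 for g in grads))
    return float(norm.max())

# linear test function 2x + 3y: gradient (2, 3), so L2 = sqrt(13)
L2 = estimate_lipschitz_l2(lambda x, y: 2 * x + 3 * y, [0, 0], [1, 1])
print(L2)  # ~ sqrt(13)
```

For non-linear functions the grid estimate is a lower approximation of the true constant, which is why the text uses a very fine grid.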

The results of the simplicial branch and bound algorithm with the proposed bound [[psi].sup.2] (7) based on the Euclidean norm and with the bound [[mu].sup.2.sub.2] (5) based on the same norm are shown in Table 2. For all test problems the number of function evaluations is smaller when the proposed bound [[psi].sup.2] is used; on average it is 26% smaller than with the bound [[mu].sup.2.sub.2]. The rate ([[psi].sup.2] < [[mu].sup.2.sub.2]) shows how often the proposed bound [[psi].sup.2] is better (tighter) than the bound [[mu].sup.2.sub.2]. For the used test functions it is better on average in 70% of simplices. However, the bound [[psi].sup.2] does not always improve on [[mu].sup.2.sub.2] (the rate is not equal to 1). If the function values at the vertices V(I) of the simplex I differ significantly, i.e. $\max_{v \in V(I)} f(v) - \min_{v \in V(I)} f(v) > L_{2} (\delta_{2}(I) - R_{2}(I))$, then the [[mu].sup.2.sub.2] type bound is tighter than the proposed [[psi].sup.2] bound. If the difference between the maximal and minimal function values is smaller than $L_{2} (\delta_{2}(I) - R_{2}(I))$, then the proposed [[psi].sup.2] type bound gives better results. Therefore, for computationally expensive functions it might be worthwhile to append the bound [[psi].sup.2] to the aggregate bound AB (13). Performance of the simplicial branch and bound algorithm with the aggregate bound AB (13) is similar to that of the best branch and bound algorithm for Lipschitz optimization and is often better (Paulavicius, Zilinskas 2009). The number of function evaluations is on average 23% smaller when the improved aggregate bound IAB (14) is used than when the aggregate bound AB (13) is used.

The value of the objective function at the same point is required when computing bounds over neighbouring sub-regions. If this value has already been evaluated for the parent sub-region, it is not necessary to evaluate it again. However, in the multidimensional case the same points may be involved in subdivisions of different sub-regions. It is possible to maintain a list of all points where the objective function has been evaluated and to avoid repeated evaluations by checking whether a point is already in the list before evaluation. In the two-dimensional case, the initial covering consists of two simplices. When one of them is subdivided into two by a hyper-plane passing through the middle point of the longest edge, the function value at this point must be evaluated. Later in the optimization process the other simplex may be subdivided, again requiring the function value at the same point. By checking the list, a repeated evaluation of the function is avoided. Such vertex verification reduces the number of function evaluations substantially, but it also increases computational time and is therefore most suitable for computationally expensive functions. The numbers of function evaluations of the algorithm with the improved aggregate bound and verification of vertices are shown in the column [bar.IAB] of Table 2. These results with [bar.IAB] significantly improve the performance of the proposed Lipschitz optimization algorithm (Paulavicius, Zilinskas 2009), whose results were often better than those of the best branch and bound algorithm for Lipschitz optimization.
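The vertex verification idea can be sketched with a dictionary keyed by rounded coordinates (the class name is hypothetical; rounding guards against floating-point noise when the same midpoint is computed from different parent simplices):

```python
import numpy as np

class CachedObjective:
    """Avoid re-evaluating f at shared vertices: keep a dictionary keyed by
    rounded coordinates, since edge midpoints recur in neighbouring simplices."""
    def __init__(self, f, decimals=12):
        self.f, self.decimals, self.cache = f, decimals, {}
        self.evaluations = 0
    def __call__(self, x):
        key = tuple(np.round(np.asarray(x, float), self.decimals))
        if key not in self.cache:
            self.evaluations += 1          # only genuinely new points cost an evaluation
            self.cache[key] = self.f(np.asarray(x, float))
        return self.cache[key]

g = CachedObjective(lambda x: float((x ** 2).sum()))
g(np.array([0.5, 0.5]))
g(np.array([0.5, 0.5]))
print(g.evaluations)  # 1: the second call is served from the cache
```

The trade-off noted in the text appears here as dictionary maintenance and lookups: cheap in absolute terms, but non-negligible relative to a very cheap objective function.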

It is interesting to compare the results with other types of bounds. The [alpha]BB method (Androulakis et al. 1995) assumes that the objective function is twice continuously differentiable and uses this property to construct convex underestimators. Since the assumption of Lipschitz continuity is weaker, in general Lipschitz bounds should be worse than those of [alpha]BB; however, the size of the difference is not known. We compare IAB and a piecewise application of the [alpha]BB underestimator (Gounaris, Floudas 2008) to see the difference. Table 3 presents the obtained upper bounds for the maximum value of test problem No. 2 after the same numbers of objective function evaluations as the results given in (Gounaris, Floudas 2008). Their method subdivides the feasible region into [N.sub.1] x [N.sub.2] rectangles, which requires ([N.sub.1] + 1) x ([N.sub.2] + 1) function evaluations. Therefore, we investigate the upper bounds estimated after 4 ([N.sub.1] = [N.sub.2] = 1), 9 ([N.sub.1] = [N.sub.2] = 2), 25 ([N.sub.1] = [N.sub.2] = 4) and so on function evaluations. As expected, [alpha]BB provides better (tighter) upper bounds. In the beginning the difference is larger, but it becomes smaller as the number of partitions increases.

4. Conclusions

In this paper a new Lipschitz bound over simplices is proposed. It is based on the function values at the vertices and the radius of the circumscribed sphere of the simplex. The speed of simplicial branch and bound algorithm with the proposed Lipschitz bounds has been investigated and compared. Test problems of various dimensionalities (n = 2, 3, 4, 5, 6) from the literature have been used for experimental investigation of the algorithm.

The experiments showed that the proposed bound is often better than a usually used bound. For the used test functions the rate ([[psi].sup.2] < [[mu].sup.2.sub.2]) showed that the proposed bound [[psi].sup.2] is better (tighter) than the bound [[mu].sup.2.sub.2] on average in 70% of simplices. The use of the proposed bound in simplicial branch and bound reduces the number of function evaluations by 26% compared with the usual bound. If the proposed bound is used in an aggregate bound with various norms, the performance improvement is 23% compared with the aggregate bound without the proposed bound.

The proposed method appears to be inferior to the [alpha]BB method in terms of bound tightness. This is to be expected, because the latter method relies on the stronger assumption that the objective function is twice continuously differentiable.

doi: 10.3846/20294913.2012.661170

Acknowledgments

This research was funded by a grant (No. MIP-108/2010) from the Research Council of Lithuania.

Received 04 January 2011; accepted 10 November 2011

References

Adjiman, C. S.; Floudas, C. A. 1996. Rigorous convex underestimators for general twice-differentiable problems, J. Glob. Optim. 9: 23-40. http://dx.doi.org/10.1007/BF00121749

Androulakis, I. P.; Maranas, C. D.; Floudas, C. A. 1995. aBB: a global optimization method for general constrained nonconvex problems, J. Glob. Optim. 7: 337-363. http://dx.doi.org/10.1007/BF01099647

Bjorkman, M.; Holmstrom, K. 1999. Global optimization using the DIRECT algorithm in Matlab, Advanced Modeling and Optimization 1(2): 17-37.

Gorodetskij, S. Iu. 1999. Mnogoekstremalnaia optimizatsiia na osnove trianguliatsii oblasti poiska, Matematicheskoe modelirovanie i optimalnoe upravlenie, Vestnik NNGU 2(21): 249-268 (in Russian).

Gounaris, Ch. E.; Floudas, Ch. A. 2008. Tight convex underestimators for C2-continuous problems: II. multivariate functions, J. Glob. Optim. 42: 69-89. http://dx.doi.org/10.1007/s10898-008-9288-8

Hansen, E.; Walster, G. 2003. Global Optimization Using Interval Analysis. 2nd edition. Marcel Dekker, New York.

Hansen, P.; Jaumard, B. 1995. Lipschitz optimization, in Horst, R.; Pardalos, P. (Eds.). Handbook of Global Optimization. Kluwer Academic Publishers, 404-493.

Horst, R. 1997. On generalized bisection of n-simplices, Mathematics of Computation 66(218): 691-698. http://dx.doi.org/10.1090/S0025-5718-97-00809-0

Horst, R.; Nast, M.; Thoai, N. V. 1995. New LP bound in multivariate Lipschitz optimization: theory and applications, Journal of Optimization Theory and Applications 86: 369-388. http://dx.doi.org/10.1007/BF02192085

Horst, R.; Pardalos, P. M.; Thoai, N. V. 1995. Introduction to Global Optimization. Kluwer Academic Publishers.

Jansson, C.; Knuppel, O. 1992. A Global Minimization Method: the Multi-Dimensional Case, Technical Report. TU Hamburg-Harburg.

Jones, D. R.; Perttunen, C. D.; Stuckman, B. E. 1993. Lipschitzian optimization without the Lipschitz constant, Journal of Optimization Theory and Application 79(1): 157-181. http://dx.doi.org/10.1007/BF00941892

Madsen, K.; Zilinskas, J. 2000. Testing Branch-and-Bound Methods for Global Optimization, Technical Report IMM-REP-2000-05. Technical University of Denmark.

Moore, R. E. 1966. Interval Analysis. Prentice-Hall.

Paulavičius, R.; Zilinskas, J. 2006. Analysis of different norms and corresponding Lipschitz constants for global optimization, Technological and Economic Development of Economy 12(4): 301-306. http://dx.doi.org/10.1080/13928619.2006.9637758

Paulavičius, R.; Zilinskas, J. 2007. Analysis of different norms and corresponding Lipschitz constants for global optimization in multidimensional case, Information Technology and Control 36(4): 383-387.

Paulavičius, R.; Zilinskas, J. 2008. Improved Lipschitz bounds with first norm for function values over multidimensional simplex, Mathematical Modelling and Analysis 13(4): 553-563. http://dx.doi.org/10.3846/1392-6292.2008.13.553-563

Paulavičius, R.; Zilinskas, J. 2009. Global optimization using the branch-and-bound algorithm with a combination of Lipschitz bounds over simplices, Technological and Economic Development of Economy 15(2): 310-325. http://dx.doi.org/10.3846/1392-8619.2009.15.310-325

Paulavicius, R.; Zilinskas, J.; Grothey, A. 2010. Investigation of selection strategies in branch and bound algorithm with simplicial partitions and combination of Lipschitz bounds, Optimization Letters 4(2): 173-183. http://dx.doi.org/10.1007/s11590-009-0156-3

Pedoe, D. 1995. Circles: A Mathematical View. Washington, DC: Math. Assoc. Amer.

Piyavskii, S. A. 1972. An algorithm for finding the absolute extremum of a function, USSR Computational Mathematics and Mathematical Physics 12: 57-67. http://dx.doi.org/10.1016/0041-5553(72)90115-2

Sakalauskas, L.; Zavadskas, E. K. 2009. Optimization and intelligent decisions, Technological and Economic Development of Economy 15(2): 189-196. http://dx.doi.org/10.3846/1392-8619.2009.15.189-196

Sergeev, Ia. D.; Kvasov, D. E. 2008. Diagonalnye metody globalnoi optimizatsii. Moskva: Fizmatlit (in Russian).

Sergeyev, Ya. D. 1995. An information global optimization algorithm with local tuning, SIAM J. Optim. 5(4): 858-870. http://dx.doi.org/10.1137/0805041

Sergeyev, Ya. D. 1998. Global one-dimensional optimization using smooth auxiliary functions, Mathematical Programming 81(1): 127-146. http://dx.doi.org/10.1007/BF01584848

Sergeyev, Y. D.; Kvasov, D. E. 2009. Lipschitz global optimization and estimates of the Lipschitz constant, in Chaoqun, I. M.; Lean, Y.; Dabin, Z.; Zhongbao, Z. (Eds.). Global Optimization: Theory, Methods and Applications. Global Link Publ.: Hong Kong, 518-521.

Strongin, R. G.; Sergeyev, Ya. D. 2000. Global Optimization with Non-convex Constraints: Sequential and Parallel Algorithms. Kluwer Academic Publishers, Dordrecht.

Strongin, R. G. 1978. Chislennye metody v mnogoekstremalnykh zadachakh. Moskva: Nauka (in Russian).

Zilinskas, A.; Zilinskas, J. 2002. Global optimization based on a statistical model and simplicial partitioning, Computers and Mathematics with Applications 44(7): 957-967. http://dx.doi.org/10.1016/S0898-1221(02)00206-7

Zilinskas, A.; Zilinskas, J. 2010. P-algorithm based on a simplicial statistical model of multimodal functions, TOP 18(2): 396-412. http://dx.doi.org/10.1007/s11750-010-0153-9

Zilinskas, J. 2005. Comparison of packages for interval arithmetic, Informatica 16(1): 145-154.

Zilinskas, J. 2007. Reducing of search space of multidimensional scaling problems with data exposing symmetries, Information Technology and Control 36(4): 377-382.

Zilinskas, J. 2008. Branch and bound with simplicial partitions for global optimization, Mathematical Modelling and Analysis 13(1): 145-159. http://dx.doi.org/10.3846/1392-6292.2008.13.145-159

Zlobec, S. 2010. Characterizing zero-derivative points, J. Glob. Optim. 46: 155-161. http://dx.doi.org/10.1007/s10898-009-9457-4

Remigijus Paulavicius (1), Julius Zilinskas (2)

(1,2) Vilnius University, Institute of Mathematics and Informatics, Akademijos g. 4, LT-08663 Vilnius, Lithuania (1) Vilnius Pedagogical University, Studentų g. 39, LT-08106 Vilnius, Lithuania (2) Vilnius Gediminas Technical University, Sauletekio al. 11, LT-10223 Vilnius, Lithuania E-mails: (1) remigijus.paulavicius@vpu.lt (corresponding author); (2) julius.Zilinskas@mii.vu.lt

Remigijus PAULAVICIUS is a junior researcher in Systems Analysis Department at Vilnius University Institute of Mathematics and Informatics, Lithuania. Research interests: local and global optimization, parallel computing.

Julius Zilinskas is a principal researcher in Systems Analysis Department at Vilnius University Institute of Mathematics and Informatics, Lithuania. He is a member of editorial boards of "Central European Journal of Computer Science", "Central European Journal of Engineering", "Computational Management Science", "Informatica", "Mathematical Modelling and Analysis", "Optimization Letters". His research interests include global optimization, visualization of multidimensional data and parallel computing.

Table 1. Test problems for Lipschitz optimization

| No. | n | Domain | f* | [epsilon] |
|-----|---|--------|-----|-----------|
| 1 | 2 | [0, 1]^2 | 2.51997258 | 0.355 |
| 2 | 2 | [-1.5, 4] x [-3, 3] | 1.91322295 | 0.691 |
| 3 | 3 | [-1, 1]^3 | 0.51637406 | 0.0506 |
| 4 | 3 | [-2, 2]^3 | 35.9999997 | 4.51 |
| 5 | 4 | [-5, 10]^4 | 0 | [L.sub.2] |
| 6 | 4 | [-4, 5]^4 | 0 | [L.sub.2] |
| 7 | 5 | [-5, 5]^5 | 0 | 1.5[L.sub.2] |
| 8 | 5 | [-5, 5]^5 | 0 | 1.5[L.sub.2] |
| 9 | 6 | [-5, 5]^6 | 0 | 4[L.sub.2] |
| 10 | 6 | [-6, 6]^6 | 0 | 4[L.sub.2] |

(The analytic expressions of the test functions are not reproducible in this text extraction.)

Table 2. The numbers of function evaluations for the simplicial branch and bound algorithm with various Lipschitz bounds

| Problem | [[mu].sup.2.sub.2] | [[psi].sup.2] | rate ([[psi].sup.2] < [[mu].sup.2.sub.2]) | AB | IAB | [bar.IAB] |
|---------|--------------------|---------------|-------------------------------------------|----|-----|-----------|
| 1 | 1356 | 856 | 0.93 | 967 | 716 | 412 |
| 2 | 3055 | 1734 | 0.98 | 1807 | 1495 | 830 |
| 3 | 19632 | 14368 | 0.75 | 16884 | 12032 | 3091 |
| 4 | 50812 | 20776 | 0.91 | 20487 | 17105 | 4684 |
| 5 | 1565127 | 965474 | 0.65 | 1176018 | 749518 | 52078 |
| 6 | 496904 | 426493 | 0.64 | 420417 | 333568 | 5769 |
| 7 | 7914387 | 5826460 | 0.57 | 1916941 | 1633849 | 84406 |
| 8 | 8284881 | 8079412 | 0.52 | 6064924 | 4590448 | 162989 |
| 9 | 6269636 | 1623674 | 0.57 | 821892 | 524940 | 9840 |
| 10 | 7419819 | 6818423 | 0.50 | 1868983 | 1685793 | 25398 |

Table 3. Comparison of upper bounds

| Algorithm | [N.sub.1] = [N.sub.2] = 1 | 2 | 4 | 8 | 16 | 32 | 64 |
|-----------|---------------------------|---|---|---|----|----|----|
| [alpha]BB | 15.7 | 7.35 | 3.24 | 1.97 | 1.93 | 1.921 | 1.914 |
| IAB | 46.00 | 30.21 | 15.69 | 7.42 | 3.76 | 2.428 | 2.041 |

Publication: Technological and Economic Development of Economy, Mar 1, 2012.