
Search for the Pareto point based on the maximin principle of improvement rates of objective functions.

Introduction

As is well known, optimization means finding a point in some parameter space where the objective function attains its best value. In the Estonian-language literature, the monograph [1] gives a good overview of conventional optimization theory. Unlike ordinary optimization, polyoptimization deals with more than one objective function. Although the parameter space is the same for all these functions, the optima of the individual objective functions are, as a rule, different. In this case a certain compromise point must be found in the parameter space.

In setting up the problem, two fundamentally different cases must be distinguished. In the first case, the same stakeholder seeks the best values of all objective functions, so finding the compromise point is an internal matter. In the second case, different stakeholders stand behind different objective functions, and the point in the parameter space can be chosen only by consensus. If no consensus is reached, the problem tends to shift into the field of game theory, which we shall not consider here.

In both cases, only a Pareto point of the parameter space can serve as a solution. A point $x$ is called a Pareto point, or Pareto optimum, if no point $y$ exists at which the value of at least one objective function is better and the values of the rest are no worse than at $x$. Such a point $y$ would be better than $x$ in every sense and the latter would no longer be of interest. The set of all Pareto points is called the Pareto region. As a rule, it includes the optimum of each individual objective function.

Kotarski [2] gives a good summary of the present state of the theory of searching for (or selecting) a Pareto optimum. That work pays special attention to the results of the Georgian mathematician Salukwadze [3, 4]. However, those solutions suit the first case mentioned above, where, a single decision-maker being in charge, the point can be shifted freely in the parameter space. Under consensus, a shift from the existing point can come into question only if the value of no objective function deteriorates. This article is devoted first of all to that problem.

Our results reflect studies initiated within the Baltic-German-Portuguese co-operation project of the Copernicus program in 1993-1996 [5, 6].

In numerous cases, including the one in this project [5, 6], the problem set-up is even more general, in that the objective functions are given partly in implicit form. Namely, explicit functions $H_1(z), \ldots, H_n(z)$, $z \in Z$, are given, but $Z$ is a space of larger dimension than the actual domain of the functions. The domain is cut out by a system of equations $W(z) = 0$, where the number of equations is less than the dimension of $Z$. These equations determine a surface $S \subset Z$ whose dimension is smaller than that of $Z$ by the number of equations. The surface $S$ is the actual domain of the objective functions.

In this case a subspace consisting of some of the components of the space $Z$ is chosen, with its dimension coinciding with that of $S$. The vector $x$ formed by these coordinates of a point $z$ is called the vector of independent coordinates. The equations $W$ thus give an implicit function $\varphi$ that determines the whole point $z$ from the vector $x$ of its independent coordinates in such a way that $W(\varphi(x)) = 0$. The objective functions $H_i(z)$ thereby become implicit functions of the vector $x$: $F_i(x) = H_i(\varphi(x))$.
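To make the reduction concrete, here is a minimal numerical sketch of the implicit function $\varphi$ and the reduced objectives $F_i$. The constraint $W$, the split of coordinates, and all names are illustrative assumptions, not taken from the project:

```python
import numpy as np
from scipy.optimize import fsolve

def W(z):
    # Illustrative single constraint: S is a 2-dimensional surface in a
    # 3-dimensional space Z (one equation lowers the dimension by one).
    return z[0] ** 2 + z[1] ** 2 + z[2] - 1.0

def phi(x):
    """Implicit function phi: completes the independent coordinates
    x = (z_0, z_1) to a full point z on S by solving W(z) = 0 for the
    dependent coordinate z_2, so that W(phi(x)) = 0."""
    z2 = fsolve(lambda t: W(np.array([x[0], x[1], t[0]])), x0=[0.0])[0]
    return np.array([x[0], x[1], z2])

def reduced_objective(H, x):
    """F_i(x) = H_i(phi(x)): the objective as an implicit function of x."""
    return H(phi(x))
```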

In addition, boundary conditions $z_{\min} \le z \le z_{\max}$ may be imposed on the variables $z$, where an inequality between vectors means the inequality for all components simultaneously. For the vectors $x$ this means the inequalities $z_{\min} \le \varphi(x) \le z_{\max}$. In the space of independent coordinates there then appear as many pairs of constraints as the dimension of the space $Z$; the constraints corresponding to the independent coordinates are planar, while those corresponding to the dependent coordinates are in general curved.

Since the division of the coordinates into dependent and independent ones is in general free, when fixing a certain point of the surface $S$ we shall treat as independent, first of all, those coordinates with respect to which the point lies on the boundary. When moving along the surface $S$ to some other point lying on the boundary of a hitherto dependent coordinate, we declare that coordinate independent and declare dependent some other, hitherto independent coordinate with respect to which the new point is not on the boundary. In this way we ensure that we always deal directly with planar boundaries.

Let us mention here an essential and efficient application, already implemented in practice, which in fact turned our attention to the need for the following theory and led to its elaboration. In the complex optimal control theory of large power systems (PS) and interconnected power systems (IPS), this approach has been implemented most widely and efficiently for solving hierarchical optimization sub-problems within the framework of the so-called Generalized Reduced Gradient Method (GRGM), where, in addition to implicit function theory, a rational choice of basis (i.e., of the composition of the vectors of dependent and independent parameters) is used to speed up finding an optimal solution [9-11].

In the Baltic-German-Portuguese joint project within the framework of the Copernicus program, a basis was laid for generalizing and developing GRGM to provide Pareto-optimal correction of the so-called trans-national equilibria of IPS (determined by a certain free-market mechanism), first of all of the Baltic IPS with its interstate dispatching centre DC BALTIJA in Riga, on behalf of all the partners under certain consensus conditions [6-8].

Reduction of Poly-Optimization to Mono-Optimization

Let there be a finite number of objective functions $F_1, \ldots, F_n$ defined on the same parameter space, which we assume to be a Hilbert space. Let the goal for each $F_i$ be to find an $x$ such that $F_i(x)$ is as large as possible (if the goal for some function is to decrease its value, a mathematical transformation yields a new function whose increase corresponds to the decrease of the original one). Assume also that the scales of the values of the various functions $F_i$ are unified in some natural way, so that changes in their values can be compared with one another (for example, if the values are the profits of the various parties involved, this holds when the currency exchange rates are fixed).

It is known (a proof can be found, e.g., in [1]) that a point $x$ is a Pareto optimum of the given functions if and only if there exist numerical coefficients $\alpha_1, \ldots, \alpha_n$ with $\alpha_i \ge 0$ for each $i$ and $\sum_{i=1}^{n} \alpha_i = 1$ such that $x$ is an optimum of the function $\sum_{i=1}^{n} \alpha_i F_i$. Thus the choice of a Pareto point reduces to the selection of the coefficients $\alpha_i$, $1 \le i \le n$.

It is favorable for each particular $F_i$ that its coefficient be as large as possible; if, for example, $\alpha_i = 1$ and the remaining coefficients are zero, we get the optimum point of $F_i$. The transition from one Pareto point to another means changing the coefficients $\alpha_i$. As a rule, those objective functions whose coefficients increase win, and those whose coefficients decrease lose. Thus, once some Pareto point has been reached, one cannot move to another by consensus. It is easy to conclude from this that when the initial point of the optimization is close to some point of the Pareto region, only the Pareto points close to the initial point come into consideration.

Indeed, at distant Pareto points the value of some objective function is inevitably worse than at nearby ones (as happens when moving from one point to another within the Pareto region), and hence evidently worse than at the initial point, which would make moving there by consensus impossible. Intuition suggests that when we approach the Pareto region as orthogonally as its position relative to the initial point allows, the direction of approach forms an acute angle with the gradients of all objective functions, and thus the values of all objective functions grow. Adding to the direction of movement a component parallel to the Pareto region may impair some objective function. We shall therefore try to shift the point within the domain of the objective functions in such a way that the improvement rate of the most slowly improving objective function is as high as possible. We call this the maximin principle of the improvement rate of objective functions.

Thus, the question arises of how to choose the coefficients so that the optimum of the function $\sum_{i=1}^{n} \alpha_i F_i$ lies close to the initial point. The constraint $\sum_{i=1}^{n} \alpha_i = 1$ means that only the proportions of the coefficients $\alpha_i$ are significant. Since the gradient of a function is the zero vector at its optimum and a vector of small norm near the optimum, the optimum of $\sum_{i=1}^{n} \alpha_i F_i$ is evidently close to the initial point if the coefficients $\alpha_i$ are chosen so that the norm of the gradient of $\sum_{i=1}^{n} \alpha_i F_i$ at the initial point is small.

Selection of the Direction for the Next Optimization Step

Having fixed some point in the parameter space, we must choose the direction of the next optimization step. In mono-optimization the gradient of the objective function determines the direction, because the objective function improves most rapidly in that direction. According to the considerations of the previous section, the direction of the gradient of the function $\sum_{i=1}^{n} \alpha_i F_i$ must be chosen, with the coefficients $\alpha_i$ selected so that the norm of this gradient is minimal.

Denoting the gradient of each objective function $F_i$ by $F'_i$, we see that first of all the quantity $\left\| \sum_{i=1}^{n} \alpha_i F'_i \right\|$ must be minimized over the coefficients $\alpha_i$ under the conditions $\alpha_i \ge 0$ for each $i$ and $\sum_{i=1}^{n} \alpha_i = 1$. The gradients $F'_i$ are naturally computed at the point reached before the next step. This is the minimization of the value of a quadratic form and poses no difficulty in principle.
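In a finite-dimensional parameter space this is a small quadratic program over the probability simplex. A minimal sketch, assuming the gradients are available as rows of a NumPy array (the function name is ours):

```python
import numpy as np
from scipy.optimize import minimize

def preference_coefficients(grads):
    """Choose alpha >= 0 with sum(alpha) = 1 minimizing
    ||sum_i alpha_i F'_i||^2, i.e. the quadratic form alpha^T G alpha,
    where G is the Gram matrix of the gradients."""
    n = grads.shape[0]
    G = grads @ grads.T
    cons = [{"type": "eq", "fun": lambda a: a.sum() - 1.0}]
    res = minimize(lambda a: a @ G @ a, np.full(n, 1.0 / n),
                   bounds=[(0.0, None)] * n, constraints=cons,
                   method="SLSQP")
    return res.x
```

The preference direction discussed below is then `grads.T @ preference_coefficients(grads)`.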

If this point lies on a certain boundary (which, as noted above, may be assumed planar) and for the selected coefficients $\alpha_i$ the vector $\sum_{i=1}^{n} \alpha_i F'_i$ points out of the feasible domain, then, assuming the continuity of the maximin of the improvement rates of the objective functions, the best feasible direction must lie on the boundary itself. The boundary must then be treated as part of the space of independent coordinates for this step, and the coefficients $\alpha_i$ recomputed for a step in this space of smaller dimension. For this purpose the same norm minimization is repeated, only with each $F'_i$ replaced by its projection onto the boundary.

If the point lies on more than one boundary and unconstrained minimization of the norm gives a direction pointing out of the feasible domain, then among the boundaries passing through the point a subset must be chosen such that, when the norm is minimized by the above method on the intersection of these boundaries as on a separate subspace, the resulting direction is feasible also with respect to the remaining boundaries on which the point lies.
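Since the active boundaries are planar in the independent coordinates (bounds on single coordinates), projecting each gradient onto their intersection amounts to zeroing the corresponding components, after which the norm minimization above is repeated with the projected gradients. A sketch under that assumption:

```python
def project_onto_boundaries(grads, active):
    """Project each gradient F'_i onto the intersection of the active
    planar boundaries by zeroing the components of the coordinates
    that sit at their bounds (boolean mask `active`)."""
    projected = grads.copy()
    projected[:, active] = 0.0
    return projected
```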

Having found, under the stated conditions, the point in the space of the coefficients $\alpha_i$ that minimizes the quantity $\left\| \sum_{i=1}^{n} \alpha_i F'_i \right\|$, we single out from among the coefficients $\alpha_i$ those we call 'fixed to zero'. A coefficient $\alpha_j$ is called 'fixed to zero' when the following conditions are met:

a) $\alpha_j = 0$;

b) abandoning the restriction $\alpha_j \ge 0$ for this particular $j$, a shift can be made in the space of the coefficients $\alpha_i$ under which $\left\| \sum_{i=1}^{n} \alpha_i F'_i \right\|$ still decreases.

Let us split the indices $1 \le i \le n$ into two groups, $I_1$ and $I_2$: the indices $i$ for which $\alpha_i$ has not been fixed to zero belong to $I_1$, the rest to $I_2$. The group $I_2$ may turn out to be empty.

Improvement Rate of Objective Functions in the Preference Direction

The rate of change of a function in some direction is the scalar product of the gradient of the function and the unit vector of that direction. Let the direction be $\sum_{i=1}^{n} \alpha_i F'_i$, where the coefficients $\alpha_i$ have been obtained as shown in the previous section. We call this direction the preference direction.
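Numerically, the rates are just scalar products with the normalized direction (a sketch, reusing the array-of-gradients convention of the helpers above):

```python
def improvement_rates(grads, direction):
    """Rate of change of each F_i along `direction`: the scalar
    product of F'_i with the unit vector of that direction."""
    unit = direction / np.linalg.norm(direction)
    return grads @ unit
```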

Let $k \in I_1$ be such that $\alpha_k > 0$; such an index can certainly be found. If $k$ is not the only element of $I_1$, we choose an arbitrary $j \in I_1$, $j \ne k$, and compare the rates of change of the objective functions in the preference direction.

Since the given choice of the coefficients $\alpha_i$ minimizes $\left\| \sum_{i=1}^{n} \alpha_i F'_i \right\|^2$, if we change it by replacing $\alpha_k$ with the difference $\alpha_k - \varepsilon$ and $\alpha_j$ with the sum $\alpha_j + \varepsilon$ (the sign of $\varepsilon$ being arbitrary), we obtain instead of $\left\| \sum_{i=1}^{n} \alpha_i F'_i \right\|^2$ the expression

$$\left\| \sum_{i=1}^{n} \alpha_i F'_i \right\|^2 + 2\varepsilon \left( F'_j - F'_k,\ \sum_{i=1}^{n} \alpha_i F'_i \right) + \varepsilon^2 \left\| F'_j - F'_k \right\|^2,$$

which attains its minimum at $\varepsilon = 0$, because $j \in I_1$ allows the restriction $\alpha_j \ge 0$ to be dropped without changing the minimum point. Therefore $\left( F'_j - F'_k, \sum_{i=1}^{n} \alpha_i F'_i \right) = 0$, i.e., $\left( F'_j, \sum_{i=1}^{n} \alpha_i F'_i \right) = \left( F'_k, \sum_{i=1}^{n} \alpha_i F'_i \right)$ must hold.

Thus the scalar products of $F'_j$ and of $F'_k$ with the vector $\sum_{i=1}^{n} \alpha_i F'_i$, and hence with the unit vector in the same direction, are equal, i.e., $F_j$ and $F_k$ change at the same rate in the preference direction.

Since $\alpha_i = 0$ for $i \in I_2$, we have

$$\left\| \sum_{i=1}^{n} \alpha_i F'_i \right\|^2 = \sum_{j \in I_1} \alpha_j \left( F'_j,\ \sum_{i=1}^{n} \alpha_i F'_i \right).$$

Since for the given $k \in I_1$ the equality $\left( F'_j, \sum_{i=1}^{n} \alpha_i F'_i \right) = \left( F'_k, \sum_{i=1}^{n} \alpha_i F'_i \right)$ holds for every $j \in I_1$, the considered expression takes the final form

$$\sum_{j \in I_1} \alpha_j \left( F'_j,\ \sum_{i=1}^{n} \alpha_i F'_i \right) = \left( F'_k,\ \sum_{i=1}^{n} \alpha_i F'_i \right) \sum_{j \in I_1} \alpha_j = \left( F'_k,\ \sum_{i=1}^{n} \alpha_i F'_i \right),$$

since $\sum_{j \in I_1} \alpha_j = 1$.

Thus $\left( F'_k, \sum_{i=1}^{n} \alpha_i F'_i \right) = \left\| \sum_{i=1}^{n} \alpha_i F'_i \right\|^2$, and for each $j \in I_1$, $\left( F'_j, \sum_{i=1}^{n} \alpha_i F'_i \right) = \left\| \sum_{i=1}^{n} \alpha_i F'_i \right\|^2 \ge 0$.

The last inequality is strict when the direction vector of the preference direction is not the zero vector, i.e., when the point lies outside the Pareto region.

Let us formulate the result as follows.

Theorem 1. In the preference direction all the objective functions $F_j$, $j \in I_1$, improve, and they do so at an equal rate.

Let us now choose $j \in I_2$ and this time $\varepsilon > 0$. If $\alpha_k$ is now replaced by the difference $\alpha_k - \varepsilon$ and $\alpha_j$ by the sum $\alpha_j + \varepsilon$, the square of the norm of the gradient of the function $\sum_{i=1}^{n} \alpha_i F_i$ can only increase, because the change stays within the feasible region of the restrictions. Thus the linear, or principal, part of the change, $2\varepsilon \left( F'_j - F'_k, \sum_{i=1}^{n} \alpha_i F'_i \right)$, must be non-negative. Hence $\left( F'_j, \sum_{i=1}^{n} \alpha_i F'_i \right) \ge \left( F'_k, \sum_{i=1}^{n} \alpha_i F'_i \right)$, which shows that $F_j$ improves in the preference direction at least as fast as any $F_k$, $k \in I_1$.

Moreover, since $j \in I_2$, a shift is possible in the space of the coefficients $\alpha_i$, after dropping the restriction $\alpha_j \ge 0$, under which the square of the norm of the gradient of $\sum_{i=1}^{n} \alpha_i F_i$ decreases. In this shift $\alpha_j$ is replaced by the difference $\alpha_j - \varepsilon$, $\varepsilon > 0$, while each $\alpha_i$, $i \ne j$, is replaced by the sum $\alpha_i + \varepsilon_i$, where $\sum_{i \ne j} \varepsilon_i = \varepsilon$ and, for those $i \ne j$ with $\alpha_i = 0$, $\varepsilon_i$ must of course be non-negative. The principal part of the change of the square of the norm of the gradient is then $2\left( \sum_{i \ne j} \varepsilon_i F'_i - \varepsilon F'_j,\ \sum_{i=1}^{n} \alpha_i F'_i \right)$, which must be negative. Thus

$$\sum_{i \ne j} \varepsilon_i \left( F'_i,\ \sum_{h=1}^{n} \alpha_h F'_h \right) < \varepsilon \left( F'_j,\ \sum_{h=1}^{n} \alpha_h F'_h \right).$$

If for some $i \ne j$ we have $\varepsilon_i < 0$ (if any such exist), then $\alpha_i > 0$ and hence $i \in I_1$. In this case

$$-\varepsilon_i \left( F'_i,\ \sum_{h=1}^{n} \alpha_h F'_h \right) \le -\varepsilon_i \left( F'_j,\ \sum_{h=1}^{n} \alpha_h F'_h \right),$$

because, as shown above, this inequality holds between the scalar products themselves and the coefficient $-\varepsilon_i$ is positive. Adding the sides of this non-strict inequality to the corresponding sides of the preceding strict inequality keeps the strict inequality valid. Applying the same procedure in turn to every $i \ne j$ with $\varepsilon_i < 0$, we obtain

$$\sum_{i \ne j,\ \varepsilon_i > 0} \varepsilon_i \left( F'_i,\ \sum_{h=1}^{n} \alpha_h F'_h \right) < \varepsilon' \left( F'_j,\ \sum_{h=1}^{n} \alpha_h F'_h \right), \qquad \varepsilon' = \varepsilon - \sum_{i \ne j,\ \varepsilon_i < 0} \varepsilon_i .$$

Since, as shown above, $\left( F'_i, \sum_{h=1}^{n} \alpha_h F'_h \right) \ge \left( F'_k, \sum_{h=1}^{n} \alpha_h F'_h \right)$ for each $i$, replacing all the scalar products on the left-hand side of the strict inequality with $\left( F'_k, \sum_{h=1}^{n} \alpha_h F'_h \right)$ keeps the strict inequality valid. Thus

$$\varepsilon' \left( F'_k,\ \sum_{h=1}^{n} \alpha_h F'_h \right) < \varepsilon' \left( F'_j,\ \sum_{h=1}^{n} \alpha_h F'_h \right)$$

holds. Note that since $\varepsilon = \sum_{i \ne j} \varepsilon_i$ and the removed $\varepsilon_i$ are negative, $\varepsilon' = \sum_{i \ne j,\ \varepsilon_i > 0} \varepsilon_i \ge \varepsilon > 0$.

From this, $\left( F'_k, \sum_{h=1}^{n} \alpha_h F'_h \right) < \left( F'_j, \sum_{h=1}^{n} \alpha_h F'_h \right)$. The result can be formulated as follows.

Theorem 2. In the preference direction any $F_j$, $j \in I_2$, improves faster than any $F_i$, $i \in I_1$.
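As a quick numerical illustration of Theorems 1 and 2, reusing the hypothetical helpers sketched above: with gradients $(1,0)$, $(0,1)$, $(1,1)$ the minimizing coefficients are $(1/2, 1/2, 0)$, so $I_1 = \{1, 2\}$ and $I_2 = \{3\}$:

```python
grads = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0]])
alpha = preference_coefficients(grads)   # -> approx. [0.5, 0.5, 0.0]
d = grads.T @ alpha                      # preference direction (0.5, 0.5)
print(improvement_rates(grads, d))
# F_1 and F_2 (I_1) improve at the equal minimal rate ~0.707 (Theorem 1);
# F_3 (alpha_3 fixed to zero, I_2) improves faster, ~1.414 (Theorem 2).
```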

Change of the Minimum Improvement Rate of Objective Functions under Deviation from the Preference Direction

Let us examine what happens to the improvement rates if, in taking an optimization step, we move not in the preference direction but in a somewhat different one. By the maximin principle, we are interested first of all in how the improvement rate changes for those objective functions that improve at the slowest rate in the preference direction, i.e., the functions $F_i$, $i \in I_1$.

Having fixed a certain direction, we consider the scalar product of the unit vector of this direction with the gradient vector $F'_i$, $i \in I_1$. This is the same as projecting the unit vector onto $F'_i$ and multiplying the projection by the length of $F'_i$.

Consider now the subspace spanned by the gradient vectors $F'_i$, $i \in I_1$. Projecting a unit vector of any direction onto one of these vectors $F'_i$, we may first project it onto the mentioned subspace and then project that projection onto $F'_i$; the result is the same. Since projection onto a subspace can only shorten a vector, we would get a better result by stretching the projection of the unit vector onto the subspace back to unit length before projecting it onto $F'_i$. This is equivalent to replacing the considered direction by its projection onto the mentioned subspace.

It is therefore reasonable to consider only directions lying in the subspace spanned by the vectors $F'_i$, $i \in I_1$, since replacing any other direction by its projection onto this subspace can only increase the improvement rates of all $F_i$, $i \in I_1$.

Let a direction in the mentioned subspace be given in the form $\sum_{i \in I_1} \beta_i F'_i$. Since only the proportions of the coefficients $\beta_i$ are essential, we may assume $\sum_{i \in I_1} \beta_i = 1$, i.e., $\beta_i = \alpha_i + \varepsilon_i$ with $\sum_{i \in I_1} \varepsilon_i = 0$. Then $\left\| \sum_{i \in I_1} \beta_i F'_i \right\| \ge \left\| \sum_{i \in I_1} \alpha_i F'_i \right\|$, and $\left\| \sum_{i \in I_1} \beta_i F'_i \right\|^2$ can be written in the form

$$\left\| \sum_{i \in I_1} \alpha_i F'_i \right\|^2 + 2\left( \sum_{i \in I_1} \varepsilon_i F'_i,\ \sum_{i \in I_1} \alpha_i F'_i \right) + \left\| \sum_{i \in I_1} \varepsilon_i F'_i \right\|^2.$$

Under the assumption $\sum_{i \in I_1} \varepsilon_i = 0$, this expression must be greater than or equal to $\left\| \sum_{i \in I_1} \alpha_i F'_i \right\|^2$ (with equality when $\varepsilon_i = 0$ for each $i$).

Under the same assumption, $\left( \sum_{i \in I_1} \varepsilon_i F'_i, \sum_{i \in I_1} \alpha_i F'_i \right)$ must equal zero: otherwise it could be made negative by changing, if necessary, the signs of all $\varepsilon_i$, and then, by scaling all $\varepsilon_i$ proportionally towards zero, the term $\left\| \sum_{i \in I_1} \varepsilon_i F'_i \right\|^2$ could be made negligible compared with this scalar product, contradicting the minimality.

The requirement $\left( \sum_{i \in I_1} \varepsilon_i F'_i, \sum_{h \in I_1} \alpha_h F'_h \right) = 0$ for all $\varepsilon_i$ with $\sum_{i \in I_1} \varepsilon_i = 0$ means that all the multipliers of the $\varepsilon_i$ in this linear form, i.e., the scalar products $\left( F'_i, \sum_{h \in I_1} \alpha_h F'_h \right)$, must be equal: otherwise the linear form could be made different from zero by choosing, for two indices $j$ and $k$ with different multipliers, the values $\varepsilon_j = \varepsilon$ and $\varepsilon_k = -\varepsilon$ respectively, and setting the remaining $\varepsilon_i$ to zero.

However,

$$\sum_{i \in I_1} \alpha_i \left( F'_i,\ \sum_{h \in I_1} \varepsilon_h F'_h \right) = \left( \sum_{i \in I_1} \alpha_i F'_i,\ \sum_{h \in I_1} \varepsilon_h F'_h \right) = 0.$$

Thus, if $\sum_{i \in I_1} \varepsilon_i = 0$, the scalar product $\left( F'_i, \sum_{h \in I_1} \varepsilon_h F'_h \right)$ cannot be strictly positive for every $i \in I_1$. But if for some $i \in I_1$ we have $\left( F'_i, \sum_{h \in I_1} \varepsilon_h F'_h \right) \le 0$, then

$$\left( F'_i,\ \sum_{h \in I_1} \beta_h F'_h \right) \le \left( F'_i,\ \sum_{h \in I_1} \alpha_h F'_h \right). \qquad (*)$$

If the point is not a Pareto point, then by Theorem 1 the right-hand side of the last inequality is strictly positive.

If in the last inequality the vectors $\sum_{h \in I_1} \beta_h F'_h$ and $\sum_{h \in I_1} \alpha_h F'_h$ are replaced by the unit vectors of the same directions, then in the case $\left\| \sum_{h \in I_1} \beta_h F'_h \right\| > \left\| \sum_{h \in I_1} \alpha_h F'_h \right\|$ the inequality (*) becomes strict. This means that for this $i$ the function $F_i$ increases more slowly in the direction $\sum_{h \in I_1} \beta_h F'_h$ than in the preference direction.

Thus, we have proved the following result:

Theorem 3. Under any deviation from the preference direction there is at least one $F_i$, $i \in I_1$, whose improvement rate decreases compared with the preference direction.

Thus, by the maximin principle of improvement rates, the direction $\sum_{i=1}^{n} \alpha_i F'_i$ is the most preferable direction.
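Continuing the numerical illustration above, any deviation within the span of the $I_1$ gradients lowers the slowest improvement rate, as Theorem 3 asserts:

```python
beta = np.array([0.7, 0.3, 0.0])         # a deviation, still summing to 1
d_beta = grads.T @ beta
print(improvement_rates(grads, d_beta)[:2].min(),  # ~0.394 for F_2
      improvement_rates(grads, d)[:2].min())       # ~0.707 in the preference direction
```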

Implementation

The above results can be implemented according to the following scheme.

Let $F_1, \ldots, F_n$ be the given objective functions and let the initial point in the parameter space be $x_0$, which is not a Pareto optimum. We find coefficients $\alpha_1, \ldots, \alpha_n$ such that $\alpha_i \ge 0$ for each $i$ and $\sum_{i=1}^{n} \alpha_i = 1$, and such that under these conditions the value of $\left\| \sum_{i=1}^{n} \alpha_i F'_i \right\|$ at the point $x_0$ is minimal. We then take an optimization step exactly as in conventional mono-optimization with the objective function $\sum_{i=1}^{n} \alpha_i F_i$. As shown above, the step is taken in the direction that follows from the maximin principle of the improvement rate of the objective functions. Having reached the point $x_1$ by this step, we take it as the new initial point and repeat the procedure. The process continues until we reach a point where, for a suitable choice of the $\alpha_i$, the value of $\left\| \sum_{i=1}^{n} \alpha_i F'_i \right\|$ is practically zero.
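The scheme amounts to the following loop (a minimal unconstrained sketch with a fixed step length, reusing the hypothetical helpers above; boundary handling and the GRGM machinery of the actual implementation are omitted):

```python
def pareto_ascent(grad_fs, x0, step=0.1, tol=1e-6, max_iter=1000):
    """Repeat steps along the preference direction until the minimized
    norm ||sum_i alpha_i F'_i|| is practically zero, i.e. an
    (approximate) Pareto point has been reached."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        grads = np.array([g(x) for g in grad_fs])
        alpha = preference_coefficients(grads)
        d = grads.T @ alpha
        if np.linalg.norm(d) < tol:      # practically zero: stop
            break
        x = x + step * d / np.linalg.norm(d)
    return x
```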

As mentioned in the introduction, these concepts were used for coordinating the operation of the power systems of the three Baltic states within the Copernicus programme in 1993-1996, with the objective of maximizing energy conservation in the power system of each state. In the calculations, the corresponding development of GRGM within the program package KORONA, at the disposal of DC BALTIJA, was used [5, 6].

By now this method has been developed further, within a game-theoretic approach, for the control of international IPS, considering both a certain consensus of partner coalitions and disagreements between these coalitions, especially in the case of major disturbances [7, 8]. In the case of disagreements between the partners, the methods of game theory must be used instead of the methods given above [12].

Acknowledgements

The basis of the research presented in this paper was laid in the Baltic-German-Portuguese research project CIPA-CT93-0162 "Computer-Aided Energy Management and Process Control Functions for Stabilization the Interconnected Power System of the Baltic States" within the framework of the EU COPERNICUS PROGRAM in 1993-1997, supervised by Professor L. Krumm, Member of the Estonian Academy of Sciences.

Received November 8, 2004

REFERENCES

[1.] Kaasik, U., Kivistik, L. Operation Analysis.--Tallinn, 1982 [in Estonian].

[2.] Kotarski, W. Some Problems of Optimal and Pareto Optimal Control for Distributed Parameter Systems.--Katowice, 1997.

[3.] Salukwadze, M. E. On the existence of solutions in problems of optimization under vector-valued criteria // J. of Optimization Theory and Applications. 1974. Vol. 13, No. 2.

[4.] Salukwadze, M. E. Problems of Optimization in Control Theory.--Tbilisi : Miecniereba, 1975.

[5.] Final Report Project: CIPA-CT93-0162, Computer-Aided Energy Management and Process Control Functions for Stabilization the Interconnected Power System of the Baltic States : Coordinator Dr. M. Klinger, Authors: Fraunhofer Institut für Informations- und Datenverarbeitung, Dresden--Institute of Energy Research, Tallinn (L. Krumm, U. Kurrel, A. Tauts, O. Terno)--Dispatch Center of Baltic Systems, Riga--Instituto de Engenharia de Sistemas e Computadores, Coimbra, 1996.

[6.] Krumm, L., Kurrel, U., Tauts, A., Terno, O., Vonsovich, M., Zeidmanis, I. The specific features of EPS (IPS) BALTIJA and the advanced development of generally applicable control methods // World Energy Council, Baltic Regional Forum--Energy Strategies in the Baltic States: From Support to Business, Proceedings I, Riga, 1997. P. 132-139.

[7.] Krumm, L., Kurrel, U., Tauts, A., Terno, O., Zeidmanis, I., Krishans, Z., Dale, V., Putnynsh, V., Oleinikova, I. Theory and methods of complex optimization of the interconnected power system // World Energy Council Regional Forum Central and East European Energy Policies, Markets and Technologies for the 21st Century, Proc. Vilnius, 1999. P. 198-210.

[8.] Krumm, L., Kurrel, U., Tauts, A., Terno, O. (EERI); Zeidmanis, I. (DC BALTIJA); Krishans, Z. (IPhE) Power systems control complex optimization in the new market conditions // IFAC Symposium on Power Plants & Power Systems Control (Organized by the IBRA-BIRA Federation), Belgium, Brussels, April 26-29, 2000, Preprints. P. 483-488.

[9.] Krumm, L. The Method of Reduced Gradient in the Control of Power Electric Systems.--Novosibirsk : Nauka, 1977 [in Russian].

[10.] Krumm, L. The Optimization Methods in the Control of Power Systems. --Novosibirsk : Nauka, 1981 [in Russian].

[11.] Balanovski, V., Gamm, A., Grumbkov, Y., Krumm, L., Voitov, O. et al. The Theory and Methods of the Analysis and Control of Steady States in Power Systems (L. Krumm and A. Gamm eds.).--Novosibirsk : Nauka, 1987 [in Russian].

[12.] Tauts, A. Upgrading the maxmin principle for nonzero-sum-games // Proc. Estonian Acad. Sci. Phys. Mathem. 2002. Vol. 51, No. 3. P. 160-169.

A. TAUTS, L. KRUMM *

Department of Electrical Power Engineering, Tallinn University of Technology, 5 Ehitajate Rd., Tallinn 19086, Estonia

* Corresponding author: e-mail eelab@eeri.ee