
Complexity of the numerical solutions of constrained nonlinear optimal control problems.

Introduction

Optimal control of nonlinear systems is one of the most active subjects in process optimization and control theory. The need to control chemical processes is fundamental in process engineering, because optimum operation may depend on maintaining process conditions within certain bounds so as to realize some economic gain. Emphasis has traditionally been on determining the optimal steady-state operation of processes, but many other processes are operated in a transient (dynamic) manner, in which a steady state is never practically reached and whose optimization and control are vital for economic, environmental and safety reasons. Most of the environmental impact of any process is typically created during major transients, including off-specification products, high energy consumption, and by-products from suboptimal reactor operating conditions (Abel and Marquardt, 2000). A prominent example of such systems is batch processes, whose diversity implies that separate optimal operating schemes have to be determined for each process under different situations. Batch processing, because of its enormous flexibility and diversity, is extensively used for production in a variety of industries (Terwiesch et al, 1994).

The transient operation of these systems constitutes what is generally referred to in process engineering as dynamic systems, whose control and optimization do not completely fit classical theory because of the complexity of their mathematical representation. Transient processes are characterized by highly nonlinear models that increase the mathematical complexity of any optimization problem. These, among many other considerations, motivate the application of optimal control strategies to dynamic processes in manufacturing systems, where it may be required to determine input profiles that optimize a specified performance index over some period of time. Such problems are generally referred to as dynamic optimization problems or open-loop optimal control problems, and their process models are characterized by differential-algebraic equations.

It is assumed that the dynamic system can be influenced, directly or indirectly, through a control variable, generally denoted u(t). The state of the system is represented by the state variable x(t), where t denotes the independent variable, often referred to as time, and a known relation exists between the state variable and the control variable. In order to specify what is meant by optimal, an objective function is defined that has to be maximized or minimized. For general optimal control problems, such an objective function is of the Bolza type, which is general enough to accommodate a wide class of optimization problems (Betts, 2001). Stated concisely as a mathematical programming problem, the differential-algebraic optimization problem reads:

\min_{u(t)} \Phi = \phi(x(t_f)) + \int_0^{t_f} F(x(t), u(t)) \, dt    (1)

subject to:

\dot{x} = f(x, u), \quad x(0) = x_0; \qquad c(x, u) \le 0; \qquad g(x(t_f)) \le 0    (2)

where Φ represents the objective functional, with terminal cost ϕ and running cost F; f contains the system dynamics; c and g are vectors of path and terminal constraint functions, respectively; x are the process states and u are the process inputs. The solution to this type of dynamic optimization problem is usually some trajectory for the decision variables of the optimization problem. Such decision variables could include set-points for the manipulated process variables (i.e., input variables) or measured process variables (i.e., output variables).
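As a concrete illustration (an assumed example for this review, not taken from the cited literature, and reused in the numerical sketches below), consider driving a scalar system toward the origin at minimum combined state and input cost, with a bounded input and a terminal inequality:

\min_{u(t)} \Phi = \tfrac{1}{2} x(t_f)^2 + \int_0^{t_f} \tfrac{1}{2} (x^2 + u^2) \, dt

subject to \dot{x} = u, x(0) = 1, c(u) = (u - 1, \; -u - 1)^T \le 0 and g(x(t_f)) = x(t_f) - \tfrac{1}{2} \le 0.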

This is also referred to as a dynamic optimization or open-loop optimal control problem; it can be very difficult to solve analytically, so numerical solution techniques are typically used (Tsang et al, 1975; Neuman and Sen, 1973). Conventional Nonlinear Programming (NLP) techniques cannot be applied directly to this problem because of the presence of differential equality constraints. Classical optimal control methods, on the other hand, deal with continuous control profiles but normally cannot handle general algebraic constraints (such as those represented by g or c). Thus, special numerical techniques have to be employed in order to solve these problems.

One of the primary distinctions between optimizing a dynamic system and optimizing a steady-state system is that the degrees of freedom in a dynamic system lie in an infinite-dimensional space, while the degrees of freedom in a standard steady-state optimization problem lie in a finite-dimensional space. Methods for solving dynamic optimization problems such as this generally fall under two broad frameworks: direct methods and indirect methods (Rippin, 1983).

This paper is a review of various numerical techniques for solving constrained optimal control problems. Consideration is given to the complex computational issues involved in the implementation of the various algorithms under each classification. Specifically, the methods are compared on the basis of the following: whether the control profile and/or the dynamic equations have to be discretized; whether the problem formulation requires integration of differential equations at every iteration; the overall problem complexity in terms of the number of decisions to be made; the computational effort required by different methods to solve the same problem; and the nature of the variables involved. The suitability of these methods for real-time or on-line optimization is also considered.

Numerical Solutions and Their Computational Issues:

Most early methods were based on finding a solution that satisfied the maximum principle, or related necessary conditions, rather than attempting a direct minimization of the objective function subject to the constraints in equation (2). For this reason, methods using this approach are called indirect methods. Direct methods obtain solutions through the direct minimization of the objective function subject to constraints. In this way the optimal control problem is treated as an infinite-dimensional mathematical programming problem (Tsang et al, 1975).

Direct Optimization Methods:

The direct techniques for solving dynamic optimization problems fall under two broad frameworks: variational and discretization methods (Adjiman et al, 1998). The variational approach encompasses the classical methods of the calculus of variations. It approaches the problem in the original infinite-dimensional space and attempts to determine stationary functions through the solution of the Euler-Lagrange equations (Troutman, 1996). The variational approach is extremely attractive because the optimization problem is approached in its original form without any mathematical transformations (Feehery et al, 1997). Hence, the solution is guaranteed to be a rigorous solution of the original problem.
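For reference, for an integrand F(x, \dot{x}, t) and no path constraints, the stationary functions satisfy the Euler-Lagrange equation

\frac{d}{dt} \left( \frac{\partial F}{\partial \dot{x}} \right) - \frac{\partial F}{\partial x} = 0

which, together with the boundary conditions at t = 0 and t = t_f, constitutes a two-point boundary value problem.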

Despite this benefit, the variational approach has several major drawbacks. The Euler-Lagrange equations are difficult to solve numerically because doing so amounts to solving a two-point boundary value problem. Complicating this issue is the addition of inequality constraints on the state and control variables, which are an important feature of optimal control problems; control constraints cause almost as many numerical difficulties as unbounded problems. Work on variational problems with inequality-constrained state variables has been restricted to determining necessary conditions for stationary functions, which do not yield the global minimum; indeed, satisfying these conditions does not guarantee even that a local minimum has been found (Troutman, 1996). Unfortunately, the necessary and sufficient conditions are known to match identically only for the special cases of unconstrained and control-constrained convex problems (Luus, 2000; Luus et al, 1992).

The discretization approach has the disadvantage that it is only an approximation of the infinite-dimensional problem, but it possesses the tremendous advantage of transforming the original infinite-dimensional problem into one lying at least partially in a finite-dimensional space; the problem can therefore often be solved by standard nonlinear programming methods. In the optimal control problem posed, the decision variables u(t) are infinite-dimensional. In order to utilize discretization techniques, the inputs must be parameterized using a finite set of parameters. Depending on whether the state equations are integrated explicitly or implicitly, two different techniques have been reported in the literature: sequential and simultaneous techniques, respectively.

The simultaneous method is a complete discretization of both state and control variables, often achieved through collocation (Tsang et al, 1975; Maly and Petzold, 1996). While completely transforming a dynamic system into a system of algebraic equations eliminates the problem of optimizing in an infinite-dimensional space, simultaneous discretization has the unfortunate side effect of generating a multitude of additional variables, yielding large, unwieldy nonlinear programmes (NLPs) that are often impractical to solve numerically. Previous works (Biegler, 1984; Cuthrell and Biegler, 1987; Cuthrell and Biegler, 1989; Logsdon and Biegler, 1992) have considered the issue of discretizing the dynamic equations using finite elements. Using the simultaneous method requires awareness of the tradeoff between approximation and optimization; a small sketch of the approach follows.
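To make the structure concrete, the following sketch fully discretizes the illustrative example from the introduction with an implicit-Euler scheme (a minimal sketch under assumed model details, not a production collocation code): states and controls are both decision variables, and the integration "defects" enter the NLP as equality constraints.

# Simultaneous (full-discretization) sketch of the illustrative problem:
# min 0.5*x(tf)^2 + int 0.5*(x^2 + u^2) dt, xdot = u, x(0) = 1, |u| <= 1.
import numpy as np
from scipy.optimize import minimize

N, tf, x0 = 20, 2.0, 1.0
h = tf / N

def split(z):
    # decision vector z = [x_1 .. x_N, u_0 .. u_{N-1}]
    return z[:N], z[N:]

def objective(z):
    x, u = split(z)
    return 0.5 * x[-1]**2 + 0.5 * h * np.sum(x**2 + u**2)

def defects(z):
    # implicit-Euler residuals: x_{k+1} - x_k - h*u_k = 0
    x, u = split(z)
    xprev = np.concatenate(([x0], x[:-1]))
    return x - xprev - h * u

bounds = [(None, None)] * N + [(-1.0, 1.0)] * N   # bounds on inputs only
res = minimize(objective, np.zeros(2 * N), bounds=bounds,
               constraints={'type': 'eq', 'fun': defects}, method='SLSQP')
x_opt, u_opt = split(res.x)
print(round(res.fun, 4))

Even this scalar example already carries 2N decision variables and N equality constraints; for realistic differential-algebraic models the NLP grows accordingly, which is precisely the size issue noted above.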

Sequential discretization is usually achieved through control parameterization (Teo et al, 1991), in which the control variable profiles are approximated by a series of basis functions in terms of a finite set of real parameters. These parameters then become the decision variables in a nonlinear programme (NLP) with embedded dynamics. Function evaluations are provided to this NLP through numerical solution of a fully determined initial value problem, which is obtained by fixing the control profiles. This method has the advantages of yielding a relatively small NLP and of exploiting the robustness and efficiency of modern initial value problem and sensitivity solvers (Maly and Petzold, 1996). The method is called Control Vector Parameterization (CVP) in the literature (Ray, 1981) and it has been used in many applications (Fikar et al, 1998; Sorensen et al, 1996); a sketch follows.
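A minimal CVP sketch of the same illustrative problem (again, all model details are assumptions for illustration): the input is piecewise constant on N intervals, the running cost is appended to the state so that a single initial value solve returns the objective, and the outer NLP sees only the N input parameters.

# Sequential (CVP) sketch: only the control parameters are decision
# variables; every objective evaluation integrates the ODE afresh.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

N, tf, x0 = 20, 2.0, 1.0
t_knots = np.linspace(0.0, tf, N + 1)

def objective(p):
    def rhs(t, y):
        # piecewise-constant input: u(t) = p[k] on [t_k, t_{k+1})
        k = min(np.searchsorted(t_knots, t, side='right') - 1, N - 1)
        return [p[k], 0.5 * (y[0]**2 + p[k]**2)]   # dynamics + running cost
    sol = solve_ivp(rhs, (0.0, tf), [x0, 0.0], rtol=1e-8, atol=1e-8)
    xf, running = sol.y[:, -1]
    return 0.5 * xf**2 + running

res = minimize(objective, np.zeros(N), bounds=[(-1.0, 1.0)] * N,
               method='SLSQP')
print(round(res.fun, 4))

Gradients here come from finite differences through the integrator; production CVP codes integrate sensitivity equations instead, which is the efficiency attributed above to modern initial value and sensitivity solvers.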

Indirect Optimization Methods:

The mathematical formulation of the optimization problem here demands that the problem in equations (1)-(2) be reformulated using Pontryagin's Minimum Principle (PMP) or the Hamilton-Jacobi-Bellman (HJB) principle of optimality.

Using PMP, the problem of optimizing the cost functional Φ in (1) can be reformulated as that of optimizing the Hamiltonian function H(t), defined as follows (Bryson and Ho, 1975):

H(t) = F(x, u) + \lambda^T f(x, u) + \mu^T c(x, u)    (3)

where λ ≠ 0 is the n-vector of adjoint states (Lagrange multipliers for the system equations), μ ≥ 0 is the s-vector of Lagrange multipliers for the path constraints, and ν ≥ 0 the t-vector of Lagrange multipliers for the terminal constraints. The multipliers μ and ν are nonzero when the corresponding constraints are active and zero otherwise, so that μ^T c(x, u) = 0 and ν^T g = 0 always. The necessary condition for optimality is H_u = 0, which implies that x, u, λ, μ and ν exist such that the following equalities hold:

\dot{x} = f(x, u), \qquad x(0) = x_0    (4)

\dot{\lambda}^T = -\frac{\partial H}{\partial x}, \qquad \lambda^T(t_f) = \left( \frac{\partial \phi}{\partial x} + \nu^T \frac{\partial g}{\partial x} \right) \Big|_{t_f}    (5)

\mu^T c = 0, \qquad \nu^T g = 0    (6)

\frac{\partial H}{\partial u} = \frac{\partial F}{\partial u} + \lambda^T \frac{\partial f}{\partial u} + \mu^T \frac{\partial c}{\partial u} = 0    (7)

The key to PMP-based methods is the necessary condition (7). On one hand, it provides a closed-form expression for the optimal input as a function of the state and adjoint variables, which is used in solution methods based on shooting. On the other hand, the gradient information ∂H/∂u available from (7) can be used to generate the search direction in gradient-based methods. Approaches under this principle include the shooting method, the gradient method and parametric optimization.
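For the illustrative example introduced after equation (2) (with the input bound taken as inactive, so that μ = 0), condition (7) gives the input in closed form:

H = \tfrac{1}{2} (x^2 + u^2) + \lambda u, \qquad \frac{\partial H}{\partial u} = u + \lambda = 0 \;\Rightarrow\; u^* = -\lambda, \qquad \dot{\lambda} = -\frac{\partial H}{\partial x} = -x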

In the shooting approach (Ray and Szekely, 1973; Bryson, 1999), the optimization problem is cast into that of solving a system of differential-algebraic equations: a search is performed for x, u, λ, μ and ν such that (4)-(7) are satisfied. The state equations (4) and the adjoint equations (5) need to be solved simultaneously. However, the boundary conditions for the state and adjoint equations are split, i.e., the initial conditions of the state equations and the terminal conditions of the adjoint equations are known. Thus, the PMP approach leads to a two-point boundary value problem. The optimal inputs u calculated from (7) depend on both x and λ. The resulting two-point boundary value problem reads:

\dot{x} = f(x, u(x, \lambda)), \qquad x(0) = x_0    (8)

\dot{\lambda}^T = -\frac{\partial H}{\partial x}, \qquad \lambda^T(t_f) = \left( \frac{\partial \phi}{\partial x} + \nu^T \frac{\partial g}{\partial x} \right) \Big|_{t_f}    (9)

\mu^T c = 0, \qquad \nu^T g = 0    (10)

However, once the initial conditions λ(0) are specified, (9) can in principle be integrated to give λ(t). Thus, the initial conditions λ(0) completely characterize the optimal inputs, thereby providing an efficient parameterization.

The shooting method treats the two-point boundary value problem as a multidimensional root-finding problem, where λ(0) are the roots to be found in order to satisfy the terminal conditions on λ(t_f). The use and updating of the Lagrange multipliers μ and ν depend on the problem formulation and the type of optimization algorithm; if a constrained optimization routine is used, the Lagrange multipliers are handled internally, and μ and ν need not be considered explicitly as decision variables. This approach, also referred to as the neighboring extremal method (Bryson and Ho, 1975; Kirk, 1970) or boundary condition iteration (Jaspan and Coull, 1972), has been used in several applications; a sketch follows.
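A single-shooting sketch for the illustrative example (all details assumed for illustration): guess λ(0), integrate states and adjoints forward with u = -λ from the closed-form condition above, and root-find on the terminal condition λ(t_f) = x(t_f), which follows from the terminal cost ½x(t_f)² with the terminal constraint assumed inactive (ν = 0).

# Single-shooting sketch: root-find on the unknown initial adjoint.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

tf, x0 = 2.0, 1.0

def rhs(t, y):
    x, lam = y
    return [-lam, -x]         # xdot = u = -lambda, lamdot = -dH/dx = -x

def terminal_residual(lam0):
    sol = solve_ivp(rhs, (0.0, tf), [x0, lam0], rtol=1e-9, atol=1e-9)
    xf, lamf = sol.y[:, -1]
    return lamf - xf          # want lambda(tf) = x(tf)

lam0 = brentq(terminal_residual, -10.0, 10.0)   # scalar root finding
print("lambda(0) =", round(lam0, 4))

This example is benign; as noted next, for realistic problems the forward integration of the adjoints is often unstable and good initial guesses for λ(0) are hard to obtain.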

Among the several difficulties in using this approach (Murthy et al, 1980) is that the method can have stability problems when integrating the adjoint equations forward in time. Furthermore, unless good initial guesses for the adjoint states are available (which is rarely the case, since the adjoints represent sensitivity functions), it is computationally expensive to find the optimal solution. The method does not work if there are discontinuities in the adjoints, which is typical in the presence of state constraints; additional degrees of freedom are necessary to handle these situations.

For the gradient method, note that obtaining an analytical expression for the optimal input from (7) may not always be possible (e.g., for general nonlinear dynamic systems). However, (7) provides the gradient along which the decision variables can be updated. The solution through the gradient method resembles the sequential approach of the direct formulation, except that the gradient is calculated using (7). This approach has been applied widely (Jaspan and Coull, 1972; Diwekar, 1995; Ramirez, 1997). Control Vector Iteration (CVI) (Ray, 1981) follows the same algorithm except that the input is not parameterized but considered as an infinite-dimensional variable; a sketch follows. The main advantage of the gradient method is that the initial guess of the decision variables is not detrimental to convergence. However, the drawbacks of this approach are slow convergence close to the optimum and the large number of decision variables that might be necessary to parameterize the input.
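A steepest-descent sketch on the illustrative example (assumed details as before): integrate the states forward, the adjoints backward, then update the discretized input along -∂H/∂u = -(u + λ). Plain Euler steps keep the sketch short; a production code would use an adaptive integrator.

# Gradient-method (CVI-style) sketch with a discretized input profile.
import numpy as np

N, tf, x0, alpha = 200, 2.0, 1.0, 0.2
h = tf / N
u = np.zeros(N)                          # initial input profile

for it in range(500):
    x = np.empty(N + 1); x[0] = x0       # forward pass: xdot = u
    for k in range(N):
        x[k + 1] = x[k] + h * u[k]
    lam = np.empty(N + 1); lam[-1] = x[-1]   # lam(tf) = x(tf)
    for k in range(N - 1, -1, -1):       # backward pass: lamdot = -x
        lam[k] = lam[k + 1] + h * x[k + 1]
    grad = u + lam[:-1]                  # dH/du = u + lambda on the grid
    if np.max(np.abs(grad)) < 1e-6:
        break
    u -= alpha * grad                    # steepest-descent update
print(it, float(np.max(np.abs(grad))))

The slow tail of the iteration near the optimum illustrates the convergence drawback noted above.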

When considering parametric optimization, two algorithms are involved, in which the states and adjoints are parameterized. The first, solution of nonlinear algebraic equations, is similar to the simultaneous direct optimization method: parameterization of the states and adjoints, followed by appropriate discretization, is used to avoid the problems encountered in the integration of the adjoints (Goh and Teo, 1988; Schwarz, 1989). The second is the quasi-linearization algorithm, in which the analytical expression for the inputs provided by (7) is used and the two-point boundary value problem (8)-(9) is solved by successive linearization (Kirk, 1970; Bryson and Ho, 1975). The discretization and quasi-linearization methods work well if the solution is smooth and if the unknown boundary conditions are not particularly sensitive to initialization errors. The method inherits the problems of the simultaneous method in the tradeoff between approximation and optimization (Srinivasan et al, 2000). Also, as in the shooting method, good initial guesses are needed for this method to work well.

The second principle under the indirect approach is the Hamilton-Jacobi-Bellman (HJB) formulation, which transforms the problem of optimizing the cost functional Φ in (1) into the resolution of a partial differential equation by utilizing the principle of optimality, equation (11) (Bryson and Ho, 1975; Kirk, 1970).

-\frac{\partial V}{\partial t} = \min_{u(t)} \left[ F(x, u) + \frac{\partial V}{\partial x} f(x, u) \right], \qquad V(x(t_f), t_f) = \phi(x(t_f))    (11)

where V(x, t) is the return function or, equivalently, the minimum cost if the system is in state x at time t ≤ t_f. The link between the PMP and HJB formulations is that the adjoints are the sensitivities of the cost (return function) with respect to the states:

\lambda^T = \frac{\partial V}{\partial x}    (12)

The term to be minimized in (11) is the Hamiltonian H. Thus, the partial differential equation (11) also describes the time evolution of the adjoints defined in (3):

\dot{\lambda}^T = \frac{d}{dt} \frac{\partial V}{\partial x} = \frac{\partial}{\partial x} \frac{\partial V}{\partial t} = -\frac{\partial H_{\min}}{\partial x}    (13)

where H_min is the minimum value of the Hamiltonian.

The main algorithm using this principle is dynamic programming. This approach is equivalent to computing V(x, t) in (11) with discretization performed both in the states and in the time direction. The minimization in (11) is performed using an exhaustive search; to make the search feasible, the domain of search has to be restricted, and hence the inputs are also discretized in both time and amplitude. Consider (11) written for an arbitrarily small interval [t, t + Δt]:

\min_{u} \left[ \frac{\partial V}{\partial t} \, \Delta t + \frac{\partial V}{\partial x} \, \Delta x + F(x, u) \, \Delta t \right] = 0    (14)

Since the first two terms of the minimization correspond to the difference

V(x_{t+\Delta t}, t + \Delta t) - V(x_t, t) = \frac{\partial V}{\partial t} \, \Delta t + \frac{\partial V}{\partial x} \, \Delta x

Then, (14) can be written as

V(x_t, t) = \min_{u} \left[ V(x_{t+\Delta t}, t + \Delta t) + F(x, u) \, \Delta t \right]    (15)

The time interval [0, t_f] can be divided into P stages, with [t_p, t_{p+1}] being the time interval corresponding to the (p+1)-th stage. Integrating over the time interval [t_p, t_{p+1}], the return function at time t_p can be written as:

V(x_p, t_p) = \min_{u[t_p, t_{p+1}]} \left[ V(x_{p+1}, t_{p+1}) + \int_{t_p}^{t_{p+1}} F(x, u) \, dt \right]    (16)

where x_{p+1} is the state at t_{p+1} obtained by integrating the system with inputs u and the initial condition x(t_p) = x_p over the interval [t_p, t_{p+1}]. Since the boundary condition of V is known at final time, (16) is solved iteratively for decreasing values of p.

Another complication arises from the discretization, since V(x_{p+1}, t_{p+1}) will only be calculated for a set of discrete values. When integration is performed from a discretization point x_p^d at time t_p, x_{p+1} will typically not correspond to a discretization point, so the question is how the return function at x_{p+1} can be calculated. One option is to interpolate between the return functions at the various discretization points at time t_{p+1}. Another is to use the optimal control u over [t_{p+1}, t_f] that corresponds to the grid point closest to x_{p+1} and integrate the system from t_{p+1} to t_f to get the return function (Bellman, 1957; Kirk, 1970). This approach has been used in numerous applications, and its key advantages are that it is one of the few methods available to compute the global minimum and that the number of iterations, and thereby the time needed for the optimization, can be estimated a priori. In addition, the approach provides a feedback policy which can be used for on-line implementation: if, due to mismatch in initial conditions, the real trajectory deviates from the predicted optimal behavior, the optimal inputs that correspond to the x-grid point closest to the real value at a given time instant can be chosen. The major disadvantage of dynamic programming is its computational complexity, though small problems can be handled efficiently; in the presence of constraints the computational burden is actually reduced, since the constraints limit the search space. A sketch of the recursion follows.
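A sketch of recursion (16) for the illustrative example (all grids and details assumed for illustration), using the interpolation option above: each grid state is stepped one stage forward under every candidate input, and the stored return function is interpolated at the landing state.

# Dynamic-programming sketch: backward sweep over stages on a state grid.
import numpy as np

P, tf = 40, 2.0
h = tf / P
xgrid = np.linspace(-2.0, 2.0, 81)       # state discretization
ugrid = np.linspace(-2.0, 2.0, 41)       # input discretization

V = 0.5 * xgrid**2                       # V(x, tf) = terminal cost
policy = np.zeros((P, xgrid.size))       # stored feedback policy
for p in range(P - 1, -1, -1):           # backward over stages
    Vnew = np.empty_like(V)
    for i, x in enumerate(xgrid):
        xnext = np.clip(x + h * ugrid, xgrid[0], xgrid[-1])  # one Euler stage
        cost = 0.5 * h * (x**2 + ugrid**2) + np.interp(xnext, xgrid, V)
        j = np.argmin(cost)              # exhaustive search over inputs
        Vnew[i], policy[p, i] = cost[j], ugrid[j]
    V = Vnew
print("V(x0=1, t=0) ~", round(float(np.interp(1.0, xgrid, V)), 4))

Even this scalar example visits P x 81 x 41 state-input combinations per sweep; with several states the grid grows exponentially, which is the computational complexity referred to above.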

Comparison of the Numerical Solution Methods:

Optimal control problems are typically large and difficult to solve because of the complex relationships in the stationarity conditions. Efficient methods are those which exploit certain aspects of a problem to reduce the amount of computational time and storage. This section attempts to compare, summarize and assess the complexity of all the classes of solution methods discussed. In all three broad classes of numerical methods, the solution procedures for the differential-algebraic optimization problem are all parameterizations in one form or another. Thus, the problem complexity can be reduced enormously if it is known or assumed a priori that the class of candidate profiles can be restricted to simple parameterizations without losing significant optimization potential. Here, the issue of accuracy of the problem formulations chosen by the different algorithms becomes important. Also, for the practical application of any of these techniques, the issues of solution convergence and computation time need to be analyzed; these issues become even more important for applications that need to be optimized in real time or on-line. Table 1 summarizes the assessment of the various classes of numerical solutions discussed, based on the criteria mentioned in the introduction.

CVP (the sequential approach) is a direct approach and does not have any significant advantages compared to the other approaches, except that for some problems it might be faster than IDP, with or without a trade-off in accuracy. The convergence properties of CVP for the case of feedback parameterization are better, but at the cost of an increase in problem complexity. The collocation method on finite elements is a good candidate for solving even complex problems. However, the method requires a certain level of expertise in choosing an adequate number of finite elements and collocation points. Moreover, for large systems it may result in a large number of equations and constraints, thus dramatically increasing the size of the decision variable vector. Also, the collocation and optimization approach converges to a local optimum, especially when successive quadratic programming is used from a poor starting point.

The PMP methods, such as the CVI approach, converge relatively well for nonsingular problems, especially when used with a more sophisticated optimization technique than steepest descent, and yield results of high accuracy owing to their relatively fine discretization. However, the computational cost of using a reliable integrator may be high. For singular problems, CVI may take longer to converge, as the gradient of the Hamiltonian depends only indirectly on the control. As CVI is based on a condition of optimality that is necessary but not sufficient, the user needs to verify optimality numerically by introducing small perturbations. Moreover, state constraints or a large number of end-point constraints can significantly slow down the computations, although input constraints can be enforced with high accuracy. As noted earlier, the discretization and quasi-linearization variants work well only if the solution is smooth and the unknown boundary conditions are not particularly sensitive to initialization errors; they inherit the simultaneous method's tradeoff between approximation and optimization and, like the shooting method, require good initial guesses.

The HJB method requires a certain amount of tuning as far as region contraction factors and discretization fineness are concerned. It handles constraints on control variables easily, but it is not directly suited to handling equality or inequality terminal constraints. Modifications to the IDP method, such as the use of penalty functions or of an outer loop to optimize the parameters, have to be made for different problem types, such as minimum-time problems. Nevertheless, IDP has been shown to be accurate, with good convergence properties, on small examples. However, depending on the process nonlinearities and the stiffness of the optimization problem, a relatively fine discretization at each instant may be required, which can substantially increase the computational effort.

Conclusion:

The key characteristic of transient processes is the presence of differential-algebraic equations in their models, which makes them more complex than the algebraic optimization problems posed by steady-state processes. The majority of the solution classes discussed yield approximate solutions through parameterization in one form or another. Two main frameworks of solution approaches were identified: direct and indirect methods. Generally, each of the algorithms has an advantage in accuracy or ease of convergence over the others, as discussed earlier. Most of the approaches involving integration of the differential state equations are computationally expensive, while their fully parameterized counterparts require the computation of many variables, making them computationally complex. The suitability of any of these approaches for real-time or on-line optimization will therefore demand some form of manipulation to achieve the very low time lag that real-time optimization requires.

References

Abel, O. and W. Marquardt, 2000. Scenario-integrated modeling and optimization of dynamic systems. AIChE Journal, 46(4): 803-823.

Adjiman, C.S., S. Dallwig and C.A. Floudas, 1998. A global optimization method, αBB, for general twice-differentiable constrained NLPs - II. Implementation and computational results. Computers and Chemical Engineering, 22(9): 1159-1179.

Bellman, R.E., 1957. Dynamic Programming. Princeton University Press.

Betts, J.T., 2001. Practical Methods for Optimal Control Using Nonlinear Programming. Advances in Design and Control. SIAM, Philadelphia.

Biegler, L.T., 1984. Solution of dynamic optimization problems by successive quadratic programming and orthogonal collocation. Comp. Chem. Eng., 8: 243-248.

Bryson, A.E. and Y.C. Ho, 1975. Applied Optimal Control. John Wiley, New York.

Bryson, A.E., 1999. Dynamic Optimization. Addison-Wesley, Menlo Park, California.

Cuthrell, J.E. and L.T. Biegler, 1987. On the optimization of differential-algebraic process systems. AIChE J., 33: 1257-1270.

Cuthrell, J.E. and L.T. Biegler, 1989. Simultaneous optimization methods for batch reactor control profiles. Comp. Chem. Eng., 13: 49-62.

Diwekar, U.M., 1995. Batch Distillation: Simulation, Optimal Design and Control. Taylor and Francis, Washington.

Dreyfus, S., 1962. Variational problems with inequality constraints. Journal of Mathematical Analysis and Applications., 4: 297-308.

Feehery, W.F., J.E. Tolsma and P.I. Barton, 1997. Efficient sensitivity analysis of large-scale differential-algebraic systems. Applied Numerical Mathematics, 25(1): 41-54.

Fikar, M., M.A. Latifi, F. Fournier and Y. Creff, 1998. Control-vector parameterization versus iterative dynamic programming in dynamic optimization of a distillation column. Comp. Chem. Eng., 22: S625-S628.

Goh, C.J. and K.L. Teo, 1988. Control parameterization: A unified approach to optimal control problems with general constraints. Automatica, 24: 3-18.

Jaspan, R.K. and J. Coull, 1972. Trajectory optimization techniques in chemical reaction engineering. II. Comparison of the methods. AIChE J., 18(4): 867-869.

Kirk, D.E., 1970. Optimal Control Theory : An Introduction. Prentice-Hall, London.

Logsdon, J.S. and L.T. Biegler, 1992. Decomposition strategies for large-scale dynamic optimization problems. Chem. Eng. Sci., 47: 851.

Luus, R., 2000. Iterative Dynamic Programming. Chapman & Hall/CRC, Boca Raton.

Luus, R., J. Dittrich and F.J. Keil, 1992. Multiplicity of solutions in the optimization of a bifunctional catalyst blend in a tubular reactor. Canadian Journal of Chemical Engineering, 70: 780-785.

Maly, T. and L.R. Petzold, 1996. Numerical methods and software for sensitivity analysis of differential algebraic systems. Applied Numerical Mathematics, 20: 57-79.

Murthy, B.S.N., K. Gangiah and A. Husain, 1980. Performance of various methods in computing optimal policies. Chem. Eng. Journal., 19: 201-208.

Neuman, C.P. and A. Sen, 1973. A suboptimal control algorithm for constrained problems using cubic splines. Automatica, 9: 601-613.

Ramirez, W.F., 1997. Application of Optimal Control to Enhanced Oil Recovery. Elsevier, The Netherlands.

Ray, W.H., 1981. Advanced Process Control. McGraw-Hill, New York.

Ray, W.H. and J. Szekely, 1973. Process Optimization. John Wiley, New York.

Rippin, D.W.T., 1983. Simulation of single and multiproduct batch chemical plants for optimal design and operation. Computers and Chemical Engineering, 7: 137-156.

Schwarz, H.R., 1989. Numerical Analysis--A Comprehensive Introduction. John Wiley, New York.

Sorensen, E., S. Macchietto, G. Stuart and S. Skogestad, 1996. Optimal control and on-line operation of reactive batch distillation. Comp. Chem. Eng., 20(12): 1491-1498.

Srinivasan, B., C.J. Primus, D. Bonvin and N.L. Ricker, 2000. Run-to-run optimization via constraint control. In IFAC ADCHEM 2000, pp. 797-802, Pisa, Italy.

Teo, K.L., C.J. Goh and K.H. Wong, 1991. A Unified Computational Approach to Optimal Control Problems. Pitman Monographs and Surveys in Pure and Applied Mathematics. John Wiley & Sons, Inc., New York.

Terwiesch, P., M. Agarwal and D.W.T. Rippin, 1994. Batch unit operations with imperfect modeling: A survey. Journal of Process Control, 4: 238-258.

Troutman, J.L., 1996. Variational Calculus and Optimal Control: Optimization with Elementary Convexity. Springer-Verlag, New York, second edition.

Tsang, T.H., D.M. Himmelblau and T.F. Edgar, 1975. Optimal control via collocation and nonlinear programming. International Journal of Control, 21: 763-768.

O.O. Ogunleye and F.N. Osuolale

Department of Chemical Engineering, Ladoke Akintola University of Technology, P.M.B. 4000, Ogbomoso-Nigeria

Corresponding Author: O.O. Ogunleye, Department of Chemical Engineering, Ladoke Akintola University of Technology, P.M.B. 4000, Ogbomoso-Nigeria

E-mail: ooogunleye@yahoo.com; ooogunleye@lautech.edu.ng
Table 1: Comparison of Numerical Solution Methods

Problem        Algorithm                     States          Inputs           Discretization   Integration   Complexity   Computation
Formulation
------------------------------------------------------------------------------------------------------------------------------------
DIRECT         Sequential (CVP)              Continuous      Parameterized    Yes              Yes           Low          High
               Simultaneous (Collocation)    Parameterized   Parameterized    Yes              No            High         Medium
PMP            Shooting (TPBVP)              Continuous      Continuous       Yes              Yes           High         High
               Gradient (CVI)                Continuous      Parameterized    Yes              Yes           Low          High
               Parameterization              Parameterized   Parameterized    Yes              No            High         Medium
HJB            Dynamic Programming (IDP)     Parameterized   Parameterized    Yes              Yes           High         High