
Simultaneous Measurement Bias Correction and Dynamic Data Reconciliation


One of the main thrusts of modern plant operation is to improve the quality of on-line information in distributed control systems (DCS). The information about the state of the process is usually corrupted by measurement errors consisting of random noise, measurement bias and outliers. Random noise is due to irreproducible factors that randomly affect the measurements. Measurement bias, which results when measured values are consistently higher or lower than the true value of the process variable, is often attributed to improper installation and/or miscalibration of the measuring device, and can be considered one form of gross error. The presence of random noise diminishes the precision of the information sought, while the presence of gross errors introduces inaccurate information. Data reconciliation is a technique used to improve the accuracy and precision of measurements by reducing the impact of measurement errors. It is formulated as a weighted least-squares objective function in which the sum of squared measurement errors is minimized subject to process model constraints, such as mass and heat balances. If measurements containing gross errors are reconciled, the reconciled values become distorted, degrading the quality of the information. Thus, gross errors need to be compensated within the framework of data reconciliation.

For steady-state processes, data reconciliation and gross error detection have received extensive study in the literature (e.g. Heenan and Serth, 1986; Narasimhan and Mah, 1987; Tong and Crowe, 1995; Rollins et al., 1996). These techniques involve statistical testing of the residuals produced by data reconciliation. After testing, the presence of gross errors is noted and suspicious measurements are removed; data reconciliation is then performed again and the re-calculated residuals are re-tested. This recursive scheme is repeated until no gross error remains in the data. In addition to statistical testing, Soderstrom et al. (2001) developed an approach based on mixed-integer optimization in which the detection of gross errors, the identification of their magnitudes, and data reconciliation are performed in one step.

For dynamic processes, early attempts to perform data reconciliation and gross error detection can be traced back to the 1970s, using Kalman filtering techniques (e.g. Willsky and Jones, 1976; Watanabe and Himmelblau, 1982). However, these Kalman filter or extended Kalman filter approaches were restricted to process state-space models and could not handle inequality constraints, such as lower and upper bounds. The problem of measurement bias detection as a separate topic has been widely tackled by techniques of fault detection and diagnosis (e.g. Patton et al., 1989; Gertler, 1998). Unfortunately, these techniques have not addressed measurement bias detection and correction in conjunction with data reconciliation. In recent years, there has been renewed research interest in simultaneous data reconciliation and gross error detection for dynamic processes. Albuquerque and Biegler (1996) proposed an approach to measurement outlier detection based on M-estimators (fair functions), while Chen and Romagnoli (1998) applied a cluster-analysis technique for outlier detection. Both approaches were set within a framework of non-linear dynamic data reconciliation (NDDR), but neither addressed the specific problem of measurement bias detection and estimation. Rollins et al. (2002) proposed a method using a dynamic global test (DGT) to detect measurement bias in linear dynamic systems, but did not address data reconciliation. McBrayer and Edgar (1995) presented a method for detection and estimation of measurement bias within the NDDR algorithm. For detecting measurement bias, these authors applied statistical measures, referred to as the "summation test" and the "regression test," to the residuals. Their approach required "base case" statistics, generated from a case in which no measurement bias occurred; these statistics were then used as benchmarks to detect a bias within the NDDR framework when one was present. After detection, the magnitude of the bias was estimated. One disadvantage of this approach is that prior knowledge of the measuring device is required: the "base case" statistics must be generated when the device is guaranteed to be free of bias. Moreover, the approach initially assumes no bias in the measurements, so corrections are inevitably delayed when a measurement bias actually occurs in real time. Abu-el-zeet et al. (2002) later proposed a strategy for combined measurement outlier and bias detection within NDDR. The NDDR-based algorithms studied previously are complex, and their on-line computations are expensive, because they require discretization of the non-linear differential equations at each sampling time and conversion of the problem into a non-linear programming (NLP) framework. Further, these algorithms used phenomenological process models and assumed that the models exactly represented the true dynamics of the process. Unfortunately, phenomenological models for most chemical processes are often difficult or impractical to obtain, and process models inevitably contain some degree of uncertainty. These shortcomings have impeded wide-ranging application of the NDDR-based algorithms.

The purpose of this article is to present a novel algorithm for simultaneous measurement bias correction and data reconciliation for dynamic processes when the use of phenomenological models is not practical. The algorithm takes process model error into account as an important contributing factor in the estimation of measurement bias and process state variables. Black-box models were identified and used in this algorithm. It is shown that the algorithm has computational advantages over the NDDR-based algorithms because an analytical solution is available when a linear model is used to approximate the dynamic process. More importantly, the developed algorithm is embedded inside process control loops for enhanced controller performance. To our knowledge, no previous work has dealt with simultaneous data reconciliation and measurement bias correction within a closed-loop structure.

Algorithm Development


We begin by considering a simple single-input, single-output (SISO) process. First, we assume that only measurement noise is present in the measured values of a process variable. At sampling time $t$, the measured value, $y_t$, can be described by the additive noise model:

$$y_t = x_t + \epsilon_t \qquad (1)$$

where $x_t$ represents the true value of the process variable and $\epsilon_t$ represents the measurement noise, assumed to be white Gaussian (i.e., $\epsilon_t \sim N(0, \sigma^2)$). In addition to the measurement model, we assume a process model is available to describe the dynamics of the process, expressed in the general form:

$$f(x_t, x_{t-1}, \ldots, u_{t-d}, u_{t-d-1}, \ldots) = \delta_t \qquad (2)$$

where $u_t$ represents the value of the input variable at time $t$ and $d$ is the number of sample-time delays associated with the input. $\delta_t$ represents the model error, assumed to be white Gaussian noise (i.e., $\delta_t \sim N(0, \upsilon^2)$). The simple, semi-implicit model form given by Equation (2) is used to make the development of the algorithm simpler and more tractable. More complex stochastic models, such as ARMAX and Box-Jenkins models, can also be used in the algorithm; however, some mathematical manipulation is then required to pre-whiten the autocorrelated noise terms. Combining information from both the measurement and process models, the estimate (i.e., reconciled value), $\hat{x}_{t|t}$, for the process variable at time $t$ is the one that minimizes the weighted sum of squared measurement and model errors (Bai et al., 2005a), that is:

$$\min_{\hat{x}_{t|t}} J(\hat{x}_{t|t}) = \frac{(y_t - \hat{x}_{t|t})^2}{\sigma^2} + \frac{\left[f(\hat{x}_{t|t}, \hat{x}_{t-1|t-1}, \ldots, u_{t-d}, u_{t-d-1}, \ldots)\right]^2}{\upsilon^2} \qquad (3)$$

$$\text{subject to } x_L \le \hat{x}_{t|t} \le x_U$$
where $x_L$ and $x_U$ are the lower and upper bounds for the process state variable and $J(\hat{x}_{t|t})$ is the objective function to be minimized, the term in parentheses being the decision variable. We define the residuals between the reconciled data and the raw measurements as $\hat{\epsilon}_t = y_t - \hat{x}_{t|t}$. Ideally, $\hat{\epsilon}_t$ should be white Gaussian noise and uncorrelated with $\hat{x}_{t|t}$, i.e.:

$$\hat{\epsilon}_t \sim N(0, \omega^2) \qquad (4)$$


$$\mathrm{Cov}(\hat{\epsilon}_t, \hat{x}_{t|t}) = 0 \qquad (5)$$

Since the white-noise assumption of Equation (2) is rarely satisfied in practice and a global minimum of Equation (3) may be unavailable within the feasible region, the use of Equations (4) and (5) as detection criteria may not be sensitive enough to detect a bias.
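As a concrete illustration, the sketch below generates synthetic data from the measurement model of Equation (1) and a first-order linear instance of Equation (2), then evaluates sample versions of the Equations (4) and (5) diagnostics on the ideal residuals (here, the noise sequence itself). All parameter values (`a`, `b`, the noise levels) are illustrative assumptions, not values from the article.

```python
import random

random.seed(0)

# Illustrative first-order process: f reduces to x_t - a*x_{t-1} - b*u_{t-1} = delta_t
a, b = 0.9, 0.5       # assumed model parameters
sigma = 0.25          # std dev of measurement noise epsilon_t, Eq. (1)
upsilon = 0.11        # std dev of model error delta_t, Eq. (2)

x = [0.0]                                # true states, constant input u = 1
for _ in range(1999):
    x.append(a * x[-1] + b * 1.0 + random.gauss(0.0, upsilon))
eps = [random.gauss(0.0, sigma) for _ in x]
y = [xt + e for xt, e in zip(x, eps)]    # measurements, Eq. (1)

def mean(v):
    return sum(v) / len(v)

def lag1_autocorr(r):
    """Sample lag-1 autocorrelation; near 0 for white residuals, Eq. (4)."""
    m = mean(r)
    num = sum((r[i] - m) * (r[i - 1] - m) for i in range(1, len(r)))
    return num / sum((v - m) ** 2 for v in r)

def cov(r, s):
    """Sample covariance; near 0 if residuals are unrelated to the state, Eq. (5)."""
    mr, ms = mean(r), mean(s)
    return sum((p - mr) * (q - ms) for p, q in zip(r, s)) / (len(r) - 1)

rho1 = lag1_autocorr(eps)   # whiteness diagnostic
c = cov(eps, x)             # independence diagnostic (true state used as a proxy)
```

In a real application the diagnostics would be applied to the reconciled residuals $\hat{\epsilon}_t$ rather than the (unknown) noise, which is exactly why their sensitivity degrades when the model-error assumptions fail.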

To overcome this disadvantage as well as those discussed in the introduction for other methods, a method for simultaneous measurement bias correction and data reconciliation is developed in this paper. Assuming that a measurement bias is always present in the raw measurements, the measurement model becomes:

$$y_t = x_t + \beta + \epsilon_t \qquad (6)$$

where $\beta$ is the value of a systematic bias in the measurements. If there is no bias present, the value of $\beta$ is essentially zero. Given the two models, Equations (2) and (6), the algorithm for simultaneous measurement bias correction and data reconciliation can be written as:

$$\min_{\hat{x}_{t-N|t}, \ldots, \hat{x}_{t|t}, \hat{\beta}_t} J = \sum_{i=0}^{N} \left\{ \frac{(y_{t-i} - \hat{x}_{t-i|t} - \hat{\beta}_t)^2}{\sigma^2} + \frac{\left[f(\hat{x}_{t-i|t}, \hat{x}_{t-i-1|t}, \ldots, u_{t-i-d}, \ldots)\right]^2}{\upsilon^2} \right\} \qquad (7)$$

$$\text{subject to } x_L \le \hat{x}_{t-i|t} \le x_U \text{ and } \beta_L \le \hat{\beta}_t \le \beta_U$$
where $\hat{\beta}_t$ is the estimate of the measurement bias. It is worth noting that a moving window of length $N + 1$ is employed to correct the measurement bias more effectively, as a result of the additional temporally redundant information from past measurements and model predictions. The value of the measurement bias is assumed to be constant within the moving-window horizon. It is also important to note that the assumption of white noise for the model error is usually violated; the model error is often autocorrelated. Consequently, the reconciled values for the state variable and the estimate of the measurement bias will not be globally optimal.
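One plausible rendering of this moving-window objective in code is sketched below, using a first-order linear model as a stand-in for $f$ and ignoring the bounds; all numerical values are illustrative assumptions, not values from the article.

```python
# Sketch of the moving-window objective: N+1 measurement residuals share one
# bias estimate beta_hat, and each in-window state must also satisfy the
# process model. The linear model (a, b) is an illustrative stand-in for f.

def J(x_window, beta_hat, y_window, u_window, x_anchor,
      a=0.9, b=0.5, sigma2=0.0625, upsilon2=0.0121):
    """Weighted least-squares cost over a window of N+1 samples.

    x_window : candidate reconciled states [x_{t-N|t}, ..., x_{t|t}]
    x_anchor : reconciled state just before the window, held fixed
    """
    cost = 0.0
    prev = x_anchor
    for x, y, u in zip(x_window, y_window, u_window):
        cost += (y - x - beta_hat) ** 2 / sigma2        # measurement term
        cost += (x - a * prev - b * u) ** 2 / upsilon2  # model term
        prev = x
    return cost

# With noise-free data and the correct bias estimate, the cost is exactly zero:
y_w = [5.0 + 1.5, 5.0 + 1.5]          # true state 5.0 plus a 1.5 bias
cost0 = J([5.0, 5.0], 1.5, y_w, [1.0, 1.0], 5.0)
```

Any NLP solver can minimize `J` over the window states and the bias; the point of the sketch is only the structure of the cost, with one common bias term across all measurement residuals in the window.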

In general, the solution of the optimization problem given by Equation (7) can be obtained by NLP techniques, such as the quasi-Newton method. However, if a linear model is used, an analytical solution is available. For simplicity, we assume the dynamic process model depends only on the immediate past input and output, as represented by:

$$x_t = a x_{t-1} + b u_{t-1} + \delta_t \qquad (8)$$

where $a$ and $b$ are constant model parameters. Substituting Equation (8) into Equation (7), and setting the partial derivatives of the objective function with respect to $\hat{x}_{t|t}, \ldots, \hat{x}_{t-N|t}$ and $\hat{\beta}_t$ equal to zero, it can be shown that the reconciled values, as well as the estimated measurement bias, are given by solving the set of linear equations:

$$\Phi z = \Gamma \qquad (9)$$



with $\phi_0 = \sigma^2 + \upsilon^2$, $\phi_1 = -a\sigma^2$ and $\phi_2 = \sigma^2 + a^2\sigma^2 + \upsilon^2$, and


The solution to Equation (9) is:

$$z = \Phi^{-1} \Gamma \qquad (10)$$

Within the moving window, only the reconciled value for the current measurement, $\hat{x}_{t|t}$, and the estimated bias, $\hat{\beta}_t$, are retained in the process database for monitoring and control purposes; the other reconciled values for past measurements, $\hat{x}_{t-1|t}, \ldots, \hat{x}_{t-N|t}$, estimated at the current time step, can be discarded. The sampling-time index is then advanced to $t + 1$ and the algorithm is repeated.
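The receding-horizon loop described above can be sketched for the linear model of Equation (8) with $N = 1$. The 3×3 system assembled below is our own derivation from the stated coefficients $\phi_0$, $\phi_1$, $\phi_2$, taking the decision vector as $z = [\hat{x}_{t|t}, \hat{x}_{t-1|t}, \hat{\beta}_t]$ and ignoring the bounds; the process parameters, noise levels and imposed bias are illustrative assumptions, not values from the article.

```python
import numpy as np

# Receding-horizon loop for the linear case, Eqs. (8)-(10), with N = 1.
# Assumed process parameters and noise levels:
rng = np.random.default_rng(0)
a, b = 0.9, 0.5
sig2, ups2 = 0.25 ** 2, 0.11 ** 2
beta_true = 1.5
T = 500

u = np.ones(T)
x = np.zeros(T)
for t in range(1, T):                                   # true process, Eq. (8)
    x[t] = a * x[t - 1] + b * u[t - 1] + rng.normal(0.0, np.sqrt(ups2))
y = x + beta_true + rng.normal(0.0, np.sqrt(sig2), T)   # biased data, Eq. (6)

# Our reconstruction of the 3x3 Phi from the stated phi coefficients:
phi0 = sig2 + ups2
phi1 = -a * sig2
phi2 = sig2 + a ** 2 * sig2 + ups2
Phi = np.array([[phi0, phi1, ups2],
                [phi1, phi2, ups2],
                [ups2, ups2, 2 * ups2]])

x_hat = y.copy()                 # initialise estimates with raw measurements
beta_hat = np.zeros(T)
for t in range(2, T):
    # Right-hand side anchored on the retained estimate x_hat[t-2]:
    Gamma = np.array([
        ups2 * y[t] + sig2 * b * u[t - 1],
        ups2 * y[t - 1] + sig2 * (a * x_hat[t - 2] + b * u[t - 2])
                        - a * sig2 * b * u[t - 1],
        ups2 * (y[t] + y[t - 1]),
    ])
    z = np.linalg.solve(Phi, Gamma)                     # Equation (10)
    x_hat[t], beta_hat[t] = z[0], z[2]                  # retain current values
```

With these settings the bias estimate settles near the imposed value of 1.5 once the transient from the biased initialisation dies out, mirroring the qualitative behaviour reported in the case studies below.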

This algorithm enables correction and estimation of the measurement bias in real time, whenever a measurement bias occurs, because the magnitude of the bias is directly estimated at each sampling time. A logic information flow diagram for the proposed algorithm is provided in Figure 1.

Extension of the algorithm of Equation (7) to a multivariable process is straightforward. For a process having $M$ measured state variables, Equation (7) becomes:

$$\min_{\hat{x}_{t-N|t}, \ldots, \hat{x}_{t|t}, \hat{\beta}_t} J = \sum_{i=0}^{N} \left\{ (y_{t-i} - \hat{x}_{t-i|t} - \hat{\beta}_t)^T V^{-1} (y_{t-i} - \hat{x}_{t-i|t} - \hat{\beta}_t) + f(\cdot)^T \Omega^{-1} f(\cdot) \right\} \qquad (11)$$
where $y_t$, $\hat{x}_{t|t}$ and $\hat{\beta}_t$ are the associated $M \times 1$ vectors of raw measurements, reconciled values and estimated bias values for the $M$ process state variables, $u_t$ is a vector of process input variables, $V$ is the covariance matrix of the measurement noise, and $\Omega$ is the covariance matrix of the process model errors.



The algorithm was applied to a binary (benzene/toluene) distillation column, shown in Figure 2, via process simulation. The simulation of the distillation column was based on rigorous distillation models (i.e., mass and heat balances, vapour-liquid equilibria, and tray hydraulics) in order to mimic real plant operation. The column has four PI controllers. Controllers TIC-D and TIC-B control the top and bottom temperatures by manipulating the reflux flow rate and the flow of steam to the reboiler, respectively. Controllers LIC-D and LIC-B control the reflux drum and column base liquid levels by manipulating the distillate flow rate and the bottom product flow rate, respectively. The sampling period for all measurements is 30 s. In this article, only the bottom temperature was assumed to be noisy and biased, in order to provide a straightforward illustration of the proposed approach. The extension of the approach to systems having multiple noisy variables is given by Equation (11).


Around the nominal steady state, the empirical linear dynamic model for the bottom temperature, previously identified by Bai et al. (2005b):

$$T'_{B,t} = 0.9228\,T'_{B,t-1} - 0.011\,R'_{t-8} - 0.00385\,R'_{t-9} + 4.867 \times 10^{-5}\,Q'_{t-1} + 6.084 \times 10^{-4}\,Q'_{t-2} - 7.583 \times 10^{-3}\,F'_{t-3} \qquad (12)$$

was used in the algorithm for the case studies presented, where $T_{B,t}$ is the value of the bottom temperature at sampling time $t$, $R_t$ is the reflux flow rate, $Q_t$ is the reboiler heat duty, and $F_t$ is the feed flow rate. The prime indicates the deviation form of the variables. The standard deviation of the model error was calculated as 0.11°C based on deviations between model predictions and simulated ("true") values. In practice, the standard deviation of the model error is usually difficult or impossible to determine, since the true values of the process variables are unknown. Consequently, the covariance matrix of model errors is treated as diagonal, with the diagonal elements serving as tuning parameters for the algorithm.
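Equation (12) is a simple difference equation, so it can be simulated directly; the step test below (a sustained deviation in the reboiler duty, with the other inputs held at nominal) is an illustrative input, not one from the article.

```python
# Simulation of the identified deviation model, Equation (12). All variables
# are deviations from nominal; the step size of 100 in Q' is illustrative.
n = 300
R = [0.0] * n                        # reflux deviation held at nominal
F = [0.0] * n                        # feed deviation held at nominal
Q = [0.0] * 5 + [100.0] * (n - 5)    # step in reboiler duty at t = 5
TB = [0.0] * n                       # bottom-temperature deviation T'_B

for t in range(9, n):                # start after the longest lag (9 samples)
    TB[t] = (0.9228 * TB[t - 1]
             - 0.011 * R[t - 8] - 0.00385 * R[t - 9]
             + 4.867e-5 * Q[t - 1] + 6.084e-4 * Q[t - 2]
             - 7.583e-3 * F[t - 3])

gain = TB[-1] / 100.0   # approximate steady-state gain from Q' to T'_B
```

The steady-state gain recovered from the simulation matches the analytical value $(4.867 \times 10^{-5} + 6.084 \times 10^{-4})/(1 - 0.9228) \approx 0.0085$, a quick consistency check on the coefficients.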

Data Reconciliation without Bias Correction

To establish a base case showing the impact of data reconciliation when no measurement bias was present, a simulation was carried out with white Gaussian measurement noise, having a standard deviation of 0.25°C, added to the "true" values of the bottom temperature. The raw measurements were first processed by the data reconciliation algorithm, and the reconciled values of the bottom temperature (or corrected measurements) were then fed to the controller to determine the required control moves for two step changes in the bottom temperature set point. The first step change, at 120 min, was from the nominal value of 117.4 to 119.0°C; the second, at 240 min, was from 119.0 to 116.5°C. Applying the data reconciliation algorithm of Equation (3), the measured, reconciled and true values of the bottom temperature are presented in Figure 3. As expected, this figure shows that both the accuracy and precision of the reconciled values were better than those of the raw measurements. Overall, good control performance was achieved.

Next, the same simulation was carried out, this time with a constant bias of 1.5°C added to the raw measurements of the bottom temperature. The data reconciliation procedure was applied again without accounting for the presence of the measurement bias. The simulation results are presented in Figure 4. Although the reconciled values were closer to the true values than the biased measurements, they still displayed a large bias: they were severely distorted by the measurement bias and could not reliably represent the true state of the process. Moreover, the true values of the controlled variable displayed a large deviation from the controller set point. Figure 4 clearly indicates the degradation in performance of both the dynamic data reconciliation and the controller in the presence of measurement bias.


Simultaneous Measurement Bias Correction and Data Reconciliation

The next step was to evaluate the performance of the combined dynamic data reconciliation (DDR) and measurement bias correction algorithm presented in the Algorithm Development section. This was done using the same simulations as in the previous section, the only change being the implementation of measurement bias correction. A moving window length of 2 (i.e., N = 1) was used in the algorithm.

Controller set point changes with constant measurement bias

The raw, reconciled and true values of the bottom temperature are presented in Figure 5. The reconciled data gradually approached the true values at the nominal steady state when the algorithm was initiated. After this initial period, the algorithm performed very well at the nominal steady state, with no bias in the reconciled values. When the process was driven to another steady state, the reconciled values displayed a bias; nevertheless, the bias correction in the dynamic data reconciliation algorithm was able to remove a significant portion of the systematic measurement bias. Compared to Figure 4, the deviations between the true and reconciled values, and between the true values and the controller set points, were significantly reduced.



To show the behaviour of the estimated measurement bias more clearly, its values are plotted in Figure 6. In the beginning, with the controlled bottom temperature at its nominal value, the estimated bias converged to the imposed bias of 1.5°C. However, when the process experienced step changes in the controller set point, a transient in the estimated bias was observed, followed by convergence to a new steady-state value. In these cases, the steady-state bias values deviated from the true measurement bias. These deviations resulted from model mismatch: the linear process model was developed around the nominal condition, and the bottom-temperature dynamics are non-linear away from that condition. Perhaps more importantly, simultaneous bias correction with DDR was able to keep the true value of the controlled variable within 0.2°C of the set point once the new steady state was reached, compared to 1.2°C without bias correction, despite the model mismatch and the inaccuracy of the bias estimate. In general, if model mismatch is significant, a more complex model (e.g. NARMAX, neural network) may be substituted in the algorithm to deal with the non-linearity. The compromise between model simplicity and accuracy has been discussed by Bai et al. (2007).

The magnitude of the bias was estimated at each sampling time, and the statistics of these estimates provide an objective criterion for assessing the performance of the algorithm. At the nominal steady state, the average value and standard deviation of the estimated bias over the 180 sampling times from 30 to 120 min were 1.48°C and 0.19°C, respectively, indicating a 68% probability that the bias lay in the range 1.29 to 1.67°C (mean ± one standard deviation). At the other two operating points, from 120 to 360 min, the corresponding statistics over 480 sampling times were 1.34°C and 0.28°C, placing the bias in the range 1.06 to 1.62°C with the same probability. Compared to the true value of 1.5°C, the estimate obtained at the nominal steady state was more accurate. In practice, however, the true value of the bias is not known, so prior knowledge of its plausible range is important for plant engineers to judge the accuracy of the estimate. Such knowledge comes from past experience with the measuring device and from other process information; for example, the bottom temperature of this distillation column can never exceed the boiling point of pure toluene at the column bottom pressure.
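The quoted 68% ranges are simply mean ± one standard deviation of the bias-estimate series over a given interval, as for a Gaussian. A minimal computation on a synthetic series (the true bias and spread below are illustrative, chosen to resemble the reported values):

```python
import random
import math

random.seed(2)
# Hypothetical bias-estimate series at steady state: true bias 1.5, spread 0.19
est = [1.5 + random.gauss(0.0, 0.19) for _ in range(180)]

mean = sum(est) / len(est)
std = math.sqrt(sum((e - mean) ** 2 for e in est) / (len(est) - 1))
lo, hi = mean - std, mean + std   # ~68% interval for a Gaussian
```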


Stochastic external disturbances with abrupt changes in measurement bias

In most chemical processes, there is random variability in the process inputs. Consequently, the algorithm was next evaluated with a stochastic feed flow to the column. The feed flow was generated using a model of the form:

$$F_t = \frac{\xi_t}{1 - 0.9 z^{-1}} \qquad (13)$$

where $\xi_t$ is white Gaussian noise. The realization of the feed flow used for this simulation is displayed in Figure 7. In addition, measurement bias is often not constant but varies with time to some extent. Consequently, the performance of the proposed algorithm was evaluated under extreme conditions, in which the measurement bias changed abruptly: from +1.5°C to -1.5°C at 120 min, and then to +1.0°C at 240 min. Applying the algorithm of simultaneous measurement bias correction and data reconciliation, the raw, reconciled and true values of the bottom temperature are presented in Figure 8. The reconciled and true values displayed some variation around the controller set point due to the variations in feed flow rate, but the reconciled values tracked the true values very well, and the impact of the measurement bias on controller performance was almost completely eliminated. The estimated values of the measurement bias are plotted in Figure 9. The estimated bias tracked its true value very well despite the large, abrupt changes, except for somewhat larger deviations around 60 min, when the external disturbance was largest.
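In difference-equation form, the first-order filter of Equation (13) is $F_t = 0.9 F_{t-1} + \xi_t$, which is straightforward to realise; the unit noise level below is an illustrative assumption.

```python
import random

random.seed(3)

# First-order autoregressive feed disturbance, Eq. (13): F_t = 0.9*F_{t-1} + xi_t
F = [0.0]
for _ in range(5000):
    F.append(0.9 * F[-1] + random.gauss(0.0, 1.0))

# The lag-1 autocorrelation of the realisation should be close to the pole 0.9
m = sum(F) / len(F)
num = sum((F[i] - m) * (F[i - 1] - m) for i in range(1, len(F)))
den = sum((v - m) ** 2 for v in F)
rho1 = num / den
```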



Effect of window length

The next step was to briefly investigate the effect of the moving-window length on the performance of the algorithm. This study used the same set point change simulations as in the Controller Set Point Changes with Constant Measurement Bias subsection, but with the window length increased to 4 (i.e., N = 3). The raw, reconciled and true values of the bottom temperature are presented in Figure 10, and the estimated bias values in Figure 11. Compared with the results in Figures 5 and 6, where a window length of 2 was used, the reconciled values displayed larger variations throughout the simulation; nevertheless, the estimated bias values became more accurate. With a window length of 2, the variance of the reconciled bottom temperature and the variance of the estimated measurement bias were $S^2_{\hat{x}} = 0.050$ and $S^2_{\hat{\beta}} = 0.086$, respectively. The variances were defined as the sum of squared differences between the estimated and true values divided by the number of sampling points. With a window length of 4, these variances were $S^2_{\hat{x}} = 0.055$ and $S^2_{\hat{\beta}} = 0.072$, respectively. These results indicate that a smaller window length gives more accurate reconciled values, whereas a larger window length gives better estimates of the bias. The selection of the window length is therefore a trade-off among: (i) the computational effort to implement the algorithm; (ii) the effectiveness of the algorithm in correcting the measurement bias; and (iii) its effectiveness in attenuating the measurement noise.
In general, if the window length is too large and the process is subjected to numerous input changes, the algorithm becomes sluggish and some time lag is induced in the reconciled response of the process, leading to performance degradation. An initial window length of 2 is suggested to start the algorithm. If the estimates of the measurement bias display large variations, an increase in the window length is recommended, albeit at the cost of an inflated variance of the reconciled process state variable. The window length can be treated as a tuning parameter for the algorithm and is problem-dependent.
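The $S^2$ performance metric defined above (sum of squared estimate-versus-true differences divided by the number of points) is simple to compute; a minimal helper, with a tiny hypothetical example:

```python
def perf_variance(est, true_vals):
    """S^2 metric: mean squared deviation of the estimates from the true values."""
    return sum((e - t) ** 2 for e, t in zip(est, true_vals)) / len(est)

# Hypothetical three-point series: one estimate off by 1, so S^2 = 1/3
s2 = perf_variance([1.0, 2.0, 3.0], [1.0, 2.0, 4.0])
```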





Conclusions

This work developed a simple, effective algorithm for simultaneous measurement bias correction and dynamic data reconciliation that accounts for uncertainty in the process models. The identified measurement bias and reconciled state variables can be obtained either by on-line optimization or, more conveniently, by solving a set of linear equations when linear process models are employed. Simulation results showed that measurement bias must be accounted for in data reconciliation; otherwise its presence can seriously distort the reconciled values. Accurate process models play an important role in the good performance of the algorithm, particularly in minimizing confounding between measurement bias and model mismatch. The length of the moving window affects the quality of both the reconciled data and the identified bias value: the measurement bias can be identified more precisely with a longer moving window, but at the expense of larger variations in the reconciled state variables.

Nomenclature

$a, b$  constant model parameters
$d$  number of sample-time delays associated with the input
$M$  number of state variables
$N$  moving-window length parameter (the window contains $N + 1$ samples)
$u_t$  value of the input variable at time $t$
$x_t$  true value of the process variable at sampling time $t$
$\hat{x}_{t|t}$  reconciled (estimated) value of the process variable at time $t$, estimated at time $t$
$x_L$  lower bound of the process variable
$x_U$  upper bound of the process variable
$y_t$  measured value of the process variable at time $t$
$z$  vector of the $N + 1$ state variables in the moving window and of the bias (Equation (9))
$\epsilon_t$  measurement noise at time $t$
$\hat{\epsilon}_t$  residual at time $t$
$\sigma^2$  variance of the measurement noise
$\delta_t$  model random error at time $t$
$\upsilon^2$  variance of the model random error
$\beta$  true value of the systematic measurement bias
$\hat{\beta}$  estimate of the systematic measurement bias
$\beta_L$  lower bound for the measurement bias
$\beta_U$  upper bound for the measurement bias
$\omega^2$  variance of the residuals (Equation (4))

Manuscript received January 22, 2006; revised manuscript received June 7, 2006; accepted for publication September 14, 2006

References


Abu-el-zeet, Z. H., V. M. Becerra and P. D. Roberts, "Combined Bias and Outlier Identification in Dynamic Data Reconciliation," Comput. Chem. Eng. 26, 921-935 (2002).

Albuquerque, J. S. and L. T. Biegler, "Data Reconciliation and Gross Error Detection for Dynamic Systems," AIChE J. 42, 2841-2856 (1996).

Bai, S., J. Thibault and D. D. McLean, "Closed-Loop Data Reconciliation for the Control of a Binary Distillation Column," Chem. Eng. Commun. 192, 1444-1467 (2005a).

Bai, S., D. D. McLean and J. Thibault, "Enhancing Controller Performance Via Dynamic Data Reconciliation," Can. J. Chem. Eng. 83, 515-526 (2005b).

Bai, S., D. D. McLean and J. Thibault, "Impact of Model Structure on the Performance of Dynamic Data Reconciliation," Comput. Chem. Eng. 31, 127-135 (2007).

Chen, J. and J. A. Romagnoli, "A Strategy for Simultaneous Dynamic Data Reconciliation and Outlier Detection," Comput. Chem. Eng. 22, 559-562 (1998).

Gertler, J., "Fault Detection and Diagnosis in Engineering Systems," Marcel Dekker (1998).

Heenan, W. A. and R. W. Serth, "Gross Error Detection and Data Reconciliation in Steam-Metering Systems," AIChE J. 32, 733-742 (1986).

McBrayer, K. F. and T. F. Edgar, "Bias Detection and Estimation in Dynamic Data Reconciliation," J. Process Control 5, 285-289 (1995).

Narasimhan, S. and R. S. H. Mah, "Generalized Likelihood Ratio Methods for Gross Error Identification," AIChE J. 33, 1514-1528 (1987).

Patton, R., P. Frank and R. Clark, "Fault Diagnosis in Dynamic Systems: Theory and Applications," Prentice Hall, New York (1989).

Rollins, D. K., Y. Cheng and S. Devannathan, "Intelligent Selection of Tests to Enhance Gross Error Identification," Comput. Chem. Eng. 20, 517-530 (1996).

Rollins, D. K., S. Devanathan and M. V. B. Bascunana, "Measurement Bias Detection in Linear Dynamic Systems," Comput. Chem. Eng. 26, 1201-1211 (2002).

Soderstrom, T. A., D. M. Himmelblau and T. F. Edgar, "A Mixed Integer Optimization Approach for Simultaneous Data Reconciliation and Identification of Measurement Bias," Control Eng. Practice 9, 869-876 (2001).

Tong, H. and C. M. Crowe, "Detection of Gross Errors in Data Reconciliation by Principal Component Analysis," AIChE J. 41, 1712-1722 (1995).

Watanabe, K. and D. M. Himmelblau, "Instrument Fault Detection in Systems with Uncertainties," Int. J. Syst. Sci. 13, 137-158 (1982).

Willsky, A. S. and H. L. Jones, "A Generalized Likelihood Ratio Approach to the Detection and Estimation of Jumps in Linear Systems," IEEE Trans. Auto. Control AC-21, 108-112 (1976).

Shuanghua Bai (1), David D. McLean (2) and Jules Thibault (2) *

(1.) Suncor Energy Inc., Fort McMurray, AB, Canada

(2.) Department of Chemical Engineering, University of Ottawa, Ottawa, ON Canada K1N 6N5

* Author to whom correspondence may be addressed. E-mail address:
COPYRIGHT 2007 Chemical Institute of Canada

Authors: Bai, Shuanghua; McLean, David D.; Thibault, Jules
Publication: Canadian Journal of Chemical Engineering
Date: Feb 1, 2007