
Evaluation of robust functions for data reconciliation in thermal systems/Avaliacao de funcoes robustas para reconciliacao de dados em sistemas termicos.

Introduction

Data reconciliation may be seen as a step towards improving the accuracy of data used in modeling and optimization. According to the precursors of the technique, Kuehn and Davidson (1961), data reconciliation is a tool that, among a wide range of applications, allows the optimal adjustment of measurements and estimates based on spatial redundancy and on a model built from mass and energy conservation balances. Its gains in accuracy are achieved even when the estimated standard deviations of the errors differ from the actual ones or when the errors deviate from the assumed distribution (Jiang, Liu, & Li, 2014). However, errors, especially gross errors, cannot be accurately explained or predicted.

Robust functions have been studied by Huber and Ronchetti (2009), who used robust statistical tools to solve problems in which the data do not follow a Gaussian distribution. An important feature of these functions in the reconciliation procedure is their low sensitivity when the data are corrupted with gross errors (Ozyurt & Pike, 2004).

The ability of robust functions to handle data containing gross errors has been addressed in many research papers. Ozyurt and Pike (2004) tested robust functions in case studies from the literature to establish a criterion for simultaneous gross error detection and data reconciliation. The good results obtained with different functions underscored the efficiency of the Hampel, Cauchy and Logistic functions. Prata, Schwaab, Lima, and Pinto (2010) analyzed the predictive ability of the robust Welsch function for data reconciliation and outlier detection in a propylene polymerization reactor represented by a nonlinear dynamic model. Zhang, Shao, Chen, Wang, and Qian (2010) analyzed the robustness of the Quasi-Weighted Least Squares function by comparing it to the Weighted Least Squares, Fair and redescending functions; the loss of quality of the Weighted Least Squares function was evident when compared to the proposed function. The focus of that study was the potential of the strategy for the online detection of gross errors. Nicholson, López-Negrete, and Biegler (2014) used a sequential approach for the numerical integration of nonlinear dynamic models and data reconciliation with the Huber, Fair and redescending Hampel robust functions within a moving horizon estimation scheme. Process measurements were "contaminated" with large errors to demonstrate the reconciling ability of the robust functions employed.

Even with a motivating panorama in this thematic area, there is still no consensus on the criteria for selecting robust functions. In fact, few studies have been devoted to developing and evaluating new robust functions. Jin, Hung, and Liu (2012) proposed a new formulation for data reconciliation called New Target. The reconciliation results obtained with this function were compared to those of the Cauchy and Huber functions and showed that the New Target provided more accurate data, especially when there were one or two gross errors in the measurements.

Alamgir, Khan, Khan, and Khalil (2013) proposed a function called the Alamgir Redescending M-estimator (Alarm), based on a modification of the hyperbolic tangent function. Although it was not evaluated in a data reconciliation problem, the results obtained by the Alarm function in the detection of outliers were compared to those of robust functions such as Tukey Biweight, Andrew Sine, three-part Hampel, Huber and Ordinary Least Squares. The comparison showed that increasing the robustness of the other estimators for the elimination of outliers decreased their efficiency, which was not the case with the Alarm function.

Despite the good results obtained with the New Target, the prediction capacity of this new function has only been compared with two robust functions. The open question is how it behaves when compared with other robust functions, including the Alarm function, which showed good performance in the detection of outliers.

In the case of thermal systems, Jiang et al. (2014) applied on-line data reconciliation and gross error detection to the steam turbine of a coal-fired power generation unit. The results showed that data reconciliation contributed to reducing the uncertainties of the estimated primary flow rates, the steam turbine heat rate and the heat rate sensitivity coefficients. Martinez-Maradiaga, Bruno, and Coronas (2013) applied the data reconciliation technique to steady-state operational data of absorption refrigeration systems, observing a single-effect ammonia-water absorption chiller. Data reconciliation was executed together with a gross error detection procedure and efficient results were obtained in two steps: the identification and removal of gross errors enabled a data reconciliation step with consistent results. Szega and Nowak (2015) simulated a model of the thermal system of a power unit for data reconciliation and proposed a mathematical simulation model of a supercritical steam power unit. The data reconciliation step relied on redundancy and on observable measurements, and the reconciled data showed a decrease in the system's entropy.

The current paper compares the classical Cauchy, Fair, Contaminated Normal and Logistic functions, applicable to data reconciliation problems containing gross errors, with the new robust functions New Target and Alarm. The key aim is to evaluate the potential of these new functions for real-time applications. The criterion for the selection of functions was based on the average relative error between the reconciled and the true value of each variable. The parameter settings of the robust functions were obtained from the literature for a 95% confidence interval. Two cases represented by steady-state process models, one linear and one nonlinear, were selected for the analysis. Data were corrupted with gross errors of between four and ten times the standard deviation, randomly generated to simulate real-time behavior, as sketched below.
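For illustration only, the snippet below sketches one way such corrupted measurements could be generated (Gaussian noise plus one gross error of random magnitude between 4 and 10 standard deviations on a randomly chosen stream). The routine, its parameters and the example vector are assumptions, not the authors' procedure.

```python
# A minimal sketch, assuming Gaussian measurement noise and a single gross
# error of 4 to 10 sigma added to a randomly chosen stream.
import numpy as np

rng = np.random.default_rng(0)

def corrupt(x_true, sigma, k_min=4.0, k_max=10.0):
    """Return a measurement vector with random noise and one gross error."""
    x_meas = x_true + rng.normal(0.0, sigma)            # random measurement noise
    i = rng.integers(len(x_true))                        # stream receiving the gross error
    x_meas[i] += rng.uniform(k_min, k_max) * sigma[i] * rng.choice([-1, 1])
    return x_meas, i

# Hypothetical usage with an arbitrary three-stream example:
x_meas, bad_stream = corrupt(np.array([100.0, 64.0, 36.0]), np.ones(3))
```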

Methodology

Formulation of data reconciliation problem with robust functions

The data reconciliation problem for steady-state processes represents a class of optimization problems (Sanchez & Romagnoli, 1996) formulated by minimizing an objective function subject to constraints. The optimization problem for data reconciliation may be represented by Equation 1.

$\min_{x^r, z} \sum_{i=1}^{N_{med}} \rho(\epsilon_i)$ subject to $f_k(x^r, z) = 0$, $k = 1, \dots, N$; $g_l(x^r, z) \geq 0$, $l = 1, \dots, N_{des}$; $x_i^{inf} \leq x_i^r \leq x_i^{sup}$; $z_j^{inf} \leq z_j \leq z_j^{sup}$, $j = 1, \dots, N_{unmed}$. (1)

where:

$N_{med}$ is the number of measured variables;

$N$ is the number of equality constraints, in this case the number of equations of the problem;

$N_{des}$ is the number of inequality constraints;

$N_{unmed}$ is the number of unmeasured variables;

$\rho(\epsilon_i)$ is the merit function;

$f$ is the set of equality constraints, in this case the mass or energy balances;

$g$ is the set of inequality constraints imposed on the problem;

$z$ is the vector of unmeasured variables of the process, estimated concomitantly with the reconciliation.

The reconciled and unmeasured variables are bounded below by $x_i^{inf}$ and $z_j^{inf}$ and above by $x_i^{sup}$ and $z_j^{sup}$. In the merit function, $\epsilon$ represents the relative error between the measured and reconciled rates, as shown in Equation 2, where $x^m$ denotes the measured variables of the process, $x^r$ the reconciled rates for the variables of the process and $\sigma$ the standard deviation of the measurements.

$\epsilon = \dfrac{x^m - x^r}{\sigma}$ (2)

In the case of data reconciliation with robust functions, the merit function is modified according to the functional form of interest; for example, if the robust function is the weighted least squares function, the merit function is represented by Equation 3.

$\rho(\epsilon_i) = \dfrac{1}{2}\,\epsilon_i^2$. (3)

An assessment of the functions used in the literature made it clear that the robust functions employed belong to the family of redescending M-estimators, which in turn are based on the maximum likelihood function (Hodouin & Everell, 1980). Accordingly, the Cauchy, Fair, Contaminated Normal and Logistic functions and the new functions New Target and Alarm were selected for the analysis of the data reconciliation problem, with the merit functions represented by Equations 4 to 10.

Weighted least squares (WLS)

$\dfrac{1}{2}\,\epsilon_i^2$. (4)

Cauchy

$c_C^2 \,\ln\!\left(1 + \dfrac{\epsilon_i^2}{c_C^2}\right)$. (5)

Fair

$2\,c_F^2 \left[\dfrac{|\epsilon_i|}{c_F} - \ln\!\left(1 + \dfrac{|\epsilon_i|}{c_F}\right)\right]$. (6)

Contaminated Normal

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. (7)

Logistic

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. (8)

New Target

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. (9)

Alarm Redescending

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. (10)

In Equations 5 to 10, the variables $c_C$, $c_F$, $p_{CN}$, $b_{CN}$, $c_{Lo}$, $c_{NT}$, $a$, $A$, $B$ and $c_{Al}$ are the tuning parameters of the robust functions. The parameters may be estimated according to a known (normal) distribution with a specified efficiency, or estimated together with the reconciliation problem. The values in Table 1, corresponding to 95% efficiency, were used for the case studies of interest; they were retrieved from Ozyurt and Pike (2004), Jin et al. (2012) and Alamgir et al. (2013).
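As an illustration, the sketch below (not the authors' code) implements the merit functions whose closed forms survive in the text, namely WLS (Equation 4), Cauchy (Equation 5) and Fair (Equation 6), with the corresponding Table 1 constants. The Contaminated Normal, Logistic, New Target and Alarm expressions are omitted here because they could not be reproduced from the source.

```python
# Minimal sketch of the merit functions given by Equations 4 to 6,
# with the 95%-efficiency tuning constants of Table 1.
import numpy as np

C_CAUCHY = 2.3849   # c_C, Table 1
C_FAIR = 1.3998     # c_F, Table 1

def rho_wls(eps):
    """Weighted least squares merit function, Equation 4."""
    return 0.5 * eps**2

def rho_cauchy(eps, c=C_CAUCHY):
    """Cauchy merit function, Equation 5."""
    return c**2 * np.log(1.0 + (eps / c)**2)

def rho_fair(eps, c=C_FAIR):
    """Fair merit function, Equation 6."""
    a = np.abs(eps) / c
    return 2.0 * c**2 * (a - np.log(1.0 + a))
```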

Figure 1 shows the behavior of the different robust functions in terms of the influence function (IF), which represents the sensitivity of the function to contamination by gross errors and outliers (Zhang et al., 2010; Prata et al., 2010). The desired behavior of the influence function is that, as the error increases, it converges to a small constant value and becomes insensitive to contamination by large errors. Figure 1 reveals that the Contaminated Normal, New Target and Alarm functions share the same behavior in the region of large errors, where the influence function approaches a value close to zero.

[FIGURE 1 OMITTED]

In defining the performance criterion for the selection of functions, two aspects were considered: convergence and the reduction of the relative error, represented by Equation 11 (Zhang et al., 2010). The first aspect indicates whether the function may be employed in real-time applications; the second refers to the ability of the function to approach the true rates and may be a possible indicator for gross error detection procedures.

$RER = \dfrac{\sum_i \left(MRE_i - RRE_i\right)}{\sum_i MRE_i} \times 100$. (11)

In Equation 11, MRE is the relative error of the measurement and RRE is the relative error of the reconciled value, represented by Equations 12 and 13.

$MRE_i = \dfrac{|x_i - x_i^m|}{x_i}$. (12)

$RRE_i = \dfrac{|x_i - x_i^r|}{x_i}$. (13)

where:

$x_i$ is the true rate;

$x_i^m$ is the measured rate;

$x_i^r$ is the reconciled rate.
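A short sketch of this performance criterion is given below; it evaluates Equations 11 to 13 directly from the true, measured and reconciled vectors. The function name is an assumption for illustration.

```python
# Minimal sketch of the relative error reduction criterion, Equations 11-13.
import numpy as np

def relative_error_reduction(x_true, x_meas, x_rec):
    """RER of Equation 11, built from MRE (Eq. 12) and RRE (Eq. 13)."""
    x_true, x_meas, x_rec = map(np.asarray, (x_true, x_meas, x_rec))
    mre = np.abs(x_true - x_meas) / x_true   # Equation 12
    rre = np.abs(x_true - x_rec) / x_true    # Equation 13
    return 100.0 * (mre.sum() - rre.sum()) / mre.sum()
```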

The cases of interest in the current paper were implemented in a computing environment and reconciled using a nonlinear programming strategy as the optimizer (Narasimhan & Jordache, 2000). A final tolerance of $10^{-6}$ on the objective function was specified for all cases.

Results

The two cases analyzed deal with a processing unit studied by Narasimhan and Jordache (2000) and Knopf (2012), comprising a mixer, a heat exchanger, a splitter and a recycle controlled by a bypass valve. The first application study is represented by a linear system of equations and has six measured streams with a single gross error; the energy flow is discarded and only the mass flow is taken into consideration. The second application study is a modification of the previous case, with both energy and mass flows. This problem, represented by a nonlinear system of equations, has eleven measured streams (flows and temperatures) and two gross errors. Figure 2a and b illustrate the two cases, particularly the nonlinear system, in which the energy streams of the heat exchanger may be noted.

[FIGURE 2 OMITTED]

Application study 1: linear model

Flow sheet details are shown in Figure 2a and the information on the measured variables is presented in Table 2, in which a gross error of magnitude 4σ may be perceived in stream F2, at the exit of the splitter and the entrance of the heat exchanger. A solver sketch for this case is given below.
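The sketch below illustrates how application study 1 could be reconciled with an off-the-shelf NLP solver. It is an assumption-laden illustration, not the authors' implementation: the node balances are inferred so as to be consistent with the true flow rates of Table 2 (the actual balances follow the flowsheet of Figure 2a, which is omitted here), scipy's SLSQP stands in for the unspecified optimizer, the tolerance of $10^{-6}$ follows the methodology, and the Cauchy merit function (repeated for self-containment) is used as an example.

```python
# Sketch of the linear reconciliation of application study 1, under assumed
# node balances consistent with the true flow rates of Table 2.
import numpy as np
from scipy.optimize import minimize

x_meas = np.array([101.9, 68.45, 34.65, 64.2, 36.64, 98.88])  # Table 2
sigma = np.ones(6)                                             # sigma = 1

def rho_cauchy(eps, c=2.3849):                                 # Equation 5, Table 1
    return c**2 * np.log(1.0 + (eps / c)**2)

def objective(x_rec):
    eps = (x_meas - x_rec) / sigma                             # Equation 2
    return np.sum(rho_cauchy(eps))

# Assumed balances: splitter F1 = F2 + F3, exchanger F4 = F2,
# bypass F5 = F3, mixer F6 = F4 + F5.
constraints = [
    {"type": "eq", "fun": lambda x: x[0] - x[1] - x[2]},
    {"type": "eq", "fun": lambda x: x[3] - x[1]},
    {"type": "eq", "fun": lambda x: x[4] - x[2]},
    {"type": "eq", "fun": lambda x: x[5] - x[3] - x[4]},
]

res = minimize(objective, x_meas, method="SLSQP",
               constraints=constraints, tol=1e-6)
print("reconciled flows:", np.round(res.x, 2))
```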

Figure 3 demonstrates that the WLS function showed a low relative error reduction of about 19%. Other robust functions with proven performance in the literature, such as Contaminated Normal, Fair and Logistic, were also below 50%. On the other hand, the New Target and Alarm functions showed error reductions above 70%, proving to be more accurate than the other functions.

The limited ability of the Fair and Logistic functions to mitigate large errors may be explained by Figure 1. Beyond a certain error magnitude, the IF of these functions settles at a constant and high level. This behavior is not observed for the New Target and Alarm functions, whose IF tends to zero beyond that error. This aspect confirms the robustness envisaged for the new functions, especially in the presence of large errors.

An important point to be highlighted is the lack of robustness of the Contaminated Normal function in this case. Although not commented on in the literature, this possible loss of robustness may be attributed to the tuning parameters of the function.

Application study 2: nonlinear model

Figure 2b provides the information on the energy balances and Table 3 shows the rates of the measured variables. Two gross errors may be observed, in streams F2 and T7, corresponding to the inlet stream of the hot flow in the heat exchanger and the inlet temperature of the cold flow of the heat exchanger, with magnitudes 4σ and 5σ, respectively.

Figure 4 shows that the New Target function achieved a relative error reduction of less than 15%, indicating that the reconciled rate remains close to the measured value and does not mitigate the gross error. The other functions showed reductions of the relative error close to 30%. Tests with error magnitudes close to those of the current test, as well as with large errors in the two streams, also obtained an error reduction between 26 and 35%, which was considered satisfactory (Zhang et al., 2010).

A new test was performed with gross errors of magnitude 10σ in streams F2 and T7. Table 4 gives the new rates.

Figure 5 reveals that the rise in the magnitude of the errors in streams F2 and T7 resulted in an increase in the relative error reduction of the Alarm and New Target functions; in this case, a reduction of over 90% was obtained.

The behavior of the New Target function was discussed by Jin et al. (2012), who highlighted that its accuracy increases with the magnitude of the error. This performance, however, was not followed by the other functions, which showed reductions below 10%, with the exception of the Contaminated Normal function, which achieved a reduction above 80%. Although developed for outlier detection, the Alarm function also presented an error reduction that grows with the error magnitude and is, in fact, a promising alternative for data reconciliation problems. When compared to the other functions, Alarm proved to have consistent results, without any loss of accuracy in situations where the magnitude of the error was small.

Conclusion

A general analysis of the robust functions New Target and Alarm in data reconciliation problems for linear and nonlinear systems showed great precision for gross errors of four to five times the standard deviation. The New Target function lost efficiency in the nonlinear case, demonstrating a behavior different from that reported by Jin et al. (2012). In the same context, the Alarm function showed an efficiency compatible with that of the Contaminated Normal function. When the gross error increased, the New Target and Alarm functions revealed results superior to those of the other functions and indicated a tendency to mitigate major errors.

Doi: 10.4025/actascitechnol.v28i2.28188

Acknowledgements

The authors would like to thank the Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior (Capes) and the Universidade Federal de Sergipe (UFS) for funding the current research.

References

Alamgir, A. A., Khan, S. A., Khan, D. M., & Khalil, U. (2013). A new efficient redescending M-estimator: Alamgir Redescending M-estimator. Research Journal of Recent Sciences, 2(8), 79-91.

Hodouin, D., & Everell, M. D. (1980). A hierarchical procedure for adjustment and material balancing of mineral process data. International Journal of Mineral Processing, 7(2), 91-116.

Huber, P. J., & Ronchetti, E. M. (2009). Robust statistics. Hoboken, NJ: John Wiley & Sons Inc.

Jiang, X., Liu, P., & Li, Z. (2014). A data reconciliation based framework for integrated sensor and equipment performance monitoring in power plants. Applied Energy, 134(1), 270-282.

Jin, S., Hung, L. X., & Liu, M. A. (2012). A New Target function for robust data reconciliation. Industrial & Engineering Chemistry Research, 51(30), 10220-10224.

Knopf, F. C. (2012). Modeling, analysis and optimization of process and energy systems. Hoboken, NJ: John Wiley & Sons Inc.

Kuehn, D. R., & Davidson, H. (1961). Computer control. II. Mathematics of control. Chemical Engineering Progress, 57(6), 44-47.

Martinez-Maradiaga, D., Bruno, J. C., & Coronas, A. (2013). Steady-state data reconciliation for absorption refrigeration systems. Applied Thermal Engineering, 51 (1-2), 1170-1180.

Narasimhan, S., & Jordache, C. (2000). Data reconciliation and gross error detection: An intelligent use of process data. Houston, TX: Gulf Publishing Company.

Nicholson, B., López-Negrete, R., & Biegler, L. T. (2014). On-line state estimation of nonlinear dynamic systems with gross errors. Computers and Chemical Engineering, 70, 149-159.

Ozyurt, D. B., & Pike, R. W. (2004). Theory and practice of simultaneous data reconciliation and gross error detection for chemical process operations. Computers and Chemical Engineering, 28(3), 381-402.

Prata, D. M., Schwaab, M., Lima, E. L., & Pinto, J. C. (2010). Simultaneous robust data reconciliation and gross error detection through particle swarm optimization for an industrial polypropylene reactor. Chemical Engineering Science, 65(17), 4943-4954.

Sanchez, M., & Romagnoli, J. (1996). Use of orthogonal transformations in data classification-reconciliation. Computers & Chemical Engineering, 20(5), 483-493.

Szega, M., & Nowak, G. T. (2015). An optimization of redundant measurements location for thermal capacity of power unit steam boiler calculations using data reconciliation method. Energy, 6(92), 1-7.

Zhang, Z., Shao, Z., Chen, X., Wang, K., & Qian, J. (2010). Quasi-weighted least squares estimator for data reconciliation. Computers and Chemical Engineering, 34(2), 154-162.

Regina Luana Santos de Franca (1) *, Antonio Martins de Oliveira Junior (1), Domingos Fabiano de Santana Souza (2)

(1) Programa de Pos-graduacao em Engenharia Quimica, Universidade Federal de Sergipe, Avenida Marechal Rondon, s/n, Campus Universitario Jose Aloisio de Campos, 49100-000, Sao Cristovao, Sergipe, Brazil (2) Departamento de Engenharia Quimica, Universidade Federal do Rio Grande do Norte, Natal, Rio Grande do Norte, Brazil. *Author for correspondence. E-mail: reginaquimica@gmail.com

Received on June 16, 2015.

Accepted on October 23, 2015.
Table 1. Constants for the different ρ functions, with 95% efficiency.

Function ρ              Set parameters
Cauchy                  $c_C = 2.3849$
Fair                    $c_F = 1.3998$
Contaminated Normal     $b_{CN} = 10$, $p_{CN} = 0.235$
Logistic                $b_{Lo} = 0.602$
New Target              $c_{NT} = 3$, $A = 0.65$
Alarm                   $c_{Al} = 3$

Table 2. Information on the measured variables in application study 1, with standard deviation equal to 1.

Stream    True flow rate    Measured flow rate
F1        100               101.9
F2        64                68.45
F3        36                34.65
F4        64                64.2
F5        36                36.64
F6        100               98.88

Table 3. Information on the measured variables in application study 2, with standard deviation equal to 1.

Stream    True rate    Measured rate
F1        100          101.91
F2        64           68.45
F3        36           34.65
F4        64           64.20
F5        36           36.44
F6        100          98.88
F7        140.6        140.0
T2        90           90
T4        51           51
T7        20           25
T8        43           43

Table 4. Information on the measured variables with gross errors of magnitude 10σ, with standard deviation equal to 1.

Stream    True rate    Measured rate
F1        100          101.91
F2        64           74.20
F3        36           34.65
F4        64           64.20
F5        36           36.44
F6        100          98.88
F7        140.6        140.0
T2        90           90
T4        51           51
T7        20           30.41
T8        43           43

Figure 3. Comparison of the performance of the robust functions with regard to the reduction of the relative error, application study 1.

Robust function    Relative error reduction (%)
WLS                19.204
Cauchy             41.983
NC                 18.134
NT                 72.984
Alarm              78.346
Logistic           37.519
Fair               30.085

Note: Table made from bar graph.

Figure 4. Comparison of the performance of the robust functions with regard to the reduction of the relative error, application study 2.

Robust function    Relative error reduction (%)
WLS                29.969
Cauchy             26.137
NC                 29.969
NT                 14.637
Alarm              28.131
Logistic           26.006
Fair               26.917

Note: Table made from bar graph.

Figure 5. Comparison of the performance of the robust functions with regard to the reduction of the relative error with gross errors of 10σ.

Robust function    Relative error reduction (%)
WLS                0
Cauchy             8.739
NC                 86.442
NT                 93.562
Alarm              92.633
Logistic           8.128
Fair               9.306

Note: Table made from bar graph.