# Robust Nonlinear Regression in Enzyme Kinetic Parameter Estimation

1. Introduction

Enzymes are molecules that act as biological catalysts and are responsible for maintaining virtually all life processes. Most enzymes are proteins, although a few are catalytic RNA molecules. Like all catalysts, enzymes increase the rate of chemical reactions without themselves undergoing any permanent chemical change in the process. They achieve their effect by temporarily binding to the substrate and, in doing so, lowering the activation energy needed to convert it to a product. The study of the rate at which an enzyme works is called enzyme kinetics, and it is often regarded as one of the most fascinating research areas in biochemistry.

Mathematically, the relationship between substrate concentration and reaction rate under isothermal conditions for many enzyme-catalyzed reactions can be modeled by the Michaelis-Menten equation:

$$v = \frac{V_{\max} s}{s + K_m}, \quad (1)$$

where $v$ denotes the reaction rate, $s$ is the substrate concentration, $V_{\max}$ is the maximum initial velocity, which is theoretically attained when the enzyme has been "saturated" by an infinite concentration of substrate, and $K_m$ is the Michaelis constant, a measure of the affinity of the enzyme-substrate interaction. By definition, $K_m$ is equal to the substrate concentration at half the maximum initial velocity. The Michaelis constant $K_m$ is an intrinsic parameter of enzyme-catalyzed reactions and is significant for their biological function.
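As a quick illustration of (1), a minimal Python sketch (with the illustrative values $V_{\max} = 80$ and $K_m = 240$, the parameter values used later in the simulations) confirms that the rate equals half of $V_{\max}$ when $s = K_m$:

```python
def michaelis_menten(s, v_max, k_m):
    """Initial reaction rate v for substrate concentration s (Eq. (1))."""
    return v_max * s / (s + k_m)

# By the definition of K_m, the rate at s = K_m is exactly V_max / 2.
v_half = michaelis_menten(240.0, 80.0, 240.0)  # with V_max = 80 this is 40.0
```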

The three most common methods available in the literature for determining the parameters of the Michaelis-Menten equation from a series of measurements of velocity $v$ as a function of substrate concentration are the Lineweaver-Burk plot, also known as the double reciprocal plot, the Eadie-Hofstee plot, and the Hanes-Woolf plot. All three are linearized models that transform the original Michaelis-Menten equation into a form that can be graphed as a straight line. The Lineweaver-Burk (LB) plot, still the most popular and favored among researchers, is defined by the equation:

$$\frac{1}{v} = \frac{1}{V_{\max}} + \frac{K_m}{V_{\max}} \cdot \frac{1}{s}. \quad (2)$$

The y-intercept of this plot is $1/V_{\max}$, the x-intercept in the second quadrant represents $-1/K_m$, and the slope of the line is $K_m/V_{\max}$.

The Eadie-Hofstee (EH) plot is a semireciprocal plot of $v$ versus $v/s$. The linear equation has the following form:

$$v = V_{\max} - K_m \cdot \frac{v}{s}, \quad (3)$$

where the y-intercept is $V_{\max}$ and the slope is $-K_m$.

In the Hanes-Woolf (HW) plot, $s/v$ is plotted against $s$. The linear equation is given by

$$\frac{s}{v} = \frac{K_m}{V_{\max}} + \frac{1}{V_{\max}} s, \quad (4)$$

where the y-intercept is $K_m/V_{\max}$ and the slope is $1/V_{\max}$.
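The three linearizations can be summarized in a short sketch. The helper functions below (our own illustrative code, not taken from any of the cited software) fit each straight line by ordinary least squares and back-transform the slope and intercept into $(K_m, V_{\max})$:

```python
def linfit(x, y):
    """Ordinary least-squares slope and intercept of y against x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

def fit_linearizations(s, v):
    """Recover (K_m, V_max) from each of the three classical linear plots."""
    # Lineweaver-Burk: 1/v = (K_m / V_max)(1/s) + 1/V_max
    a, b = linfit([1.0 / si for si in s], [1.0 / vi for vi in v])
    lb = (a / b, 1.0 / b)
    # Eadie-Hofstee: v = V_max - K_m (v/s)
    a, b = linfit([vi / si for si, vi in zip(s, v)], v)
    eh = (-a, b)
    # Hanes-Woolf: s/v = (1/V_max) s + K_m/V_max
    a, b = linfit(s, [si / vi for si, vi in zip(s, v)])
    hw = (b / a, 1.0 / a)
    return {"LB": lb, "EH": eh, "HW": hw}
```

On noise-free data all three transforms are exactly linear, so the three estimates coincide; their differing sensitivity to error appears only once noise enters the measurements.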

In all of the above-described linear transformations, linear regression is used to estimate the slope and intercept of the straight line, and $K_m$ and $V_{\max}$ are then computed from the straight-line parameters. Although these methods are very useful for data visualization and are still widely employed in enzyme kinetic studies, each of them possesses certain deficiencies that make them prone to errors. For instance, the Lineweaver-Burk plot has the disadvantage of compressing the data points at high substrate concentrations into a small region and emphasizing the points at lower substrate concentrations, which are often the least accurate. Moreover, because the y-intercept of the Lineweaver-Burk plot is the inverse of $V_{\max}$, any small error in measurement gets magnified. Similarly, the Eadie-Hofstee plot has the disadvantage that $v$ appears on both axes; thus, any experimental error will be present in both axes. In addition, experimental errors or uncertainties propagate unevenly and become larger over the abscissa, thereby giving more weight to smaller values of $v/s$. The Hanes-Woolf plot is the most accurate of the three; however, its major drawback is that, again, neither ordinate nor abscissa represents independent values: both are dependent on the substrate concentration.

In order to reduce the errors due to the linearization of parameters, Wilkinson proposed the use of least-squares nonlinear regression for more accurate estimation of enzyme kinetic parameters. Nonlinear regression allows direct determination of parameter values from untransformed data points. The process starts with initial estimates and then iteratively converges on parameter estimates that provide the best fit of the underlying model to the actual data points [9, 10]. The algorithms used include the Levenberg-Marquardt method, the Gauss-Newton method, the steepest-descent method, and simplex minimization. Numerous software packages, such as Excel, MATLAB, and GraphPad Prism, nowadays include readily available routines and scripts for nonlinear least-squares fitting [11, 12].
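As an illustration of how such routines work internally, the following sketch implements a bare-bones Gauss-Newton iteration for the Michaelis-Menten model (a fixed iteration count, no step-size control or damping as in Levenberg-Marquardt; illustrative only, not the implementation used in any cited package):

```python
def gauss_newton_mm(s, v, vmax0, km0, iters=100):
    """Gauss-Newton refinement of (V_max, K_m) from initial guesses."""
    vmax, km = vmax0, km0
    for _ in range(iters):
        # Residuals and Jacobian of the model v = V_max s / (s + K_m).
        r = [vi - vmax * si / (si + km) for si, vi in zip(s, v)]
        j1 = [si / (si + km) for si in s]               # d v / d V_max
        j2 = [-vmax * si / (si + km) ** 2 for si in s]  # d v / d K_m
        # Solve the 2x2 normal equations (J^T J) delta = J^T r by Cramer's rule.
        a = sum(x * x for x in j1)
        b = sum(x * y for x, y in zip(j1, j2))
        c = sum(x * x for x in j2)
        g1 = sum(x * y for x, y in zip(j1, r))
        g2 = sum(x * y for x, y in zip(j2, r))
        det = a * c - b * b
        vmax += (c * g1 - b * g2) / det
        km += (a * g2 - b * g1) / det
    return vmax, km
```

Note how the method needs starting values `vmax0` and `km0`; in practice these are often taken from one of the linear plots, which is one reason the linearizations remain useful even when the final fit is nonlinear.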

Least-squares nonlinear regression has, however, been criticized for its performance on experimental data. This is mainly because the implicit assumptions underlying nonlinear regression are in general not met in the presence of deviations that arise from biological errors (e.g., variations in the enzyme preparations due to oxidation or contamination) and/or experimental errors (e.g., variations in the measured volumes of substrates and enzymes, imprecision of the instrumentation). When outliers or influential observations are present in the data, the ordinary least-squares method can produce misleading values for the parameters of the nonlinear regression, and the estimates may no longer be reliable.

In this paper, we propose the use of a robust nonlinear regression estimator based on a modified Tukey's biweight function for determining the parameters of the Michaelis-Menten equation from experimental measurements in enzyme kinetics. The main idea is to fit a model to the data that gives resilient results in the presence of influential observations and/or outliers. To the best of our knowledge, this is the first study that examines the use of this technique in Michaelis-Menten enzyme analysis. We employ Monte Carlo simulations to validate the efficacy of the proposed procedure in comparison with the ordinary least-squares method and the Eadie-Hofstee, Hanes-Woolf, and Lineweaver-Burk plots. In addition, we illustrate the viability of our method by estimating the kinetic parameters of tyrosinase, an important enzyme widely distributed in microorganisms, animals, and plants, responsible for melanin production in mammals and enzymatic browning in plants, extracted from potato and two edible mushrooms.

The remainder of the paper is organized as follows. Section 2 provides a brief overview of the robust estimation model. Section 3 describes the experimental setup used in this research and the diagnostics that will be used to evaluate the effectiveness of the proposed procedure in determination of enzyme kinetic parameters. Results are discussed in Section 4. Finally, Section 5 summarizes the paper with a few concluding remarks.

2. Robust Nonlinear Regression

Nonlinear regression, like linear regression, relies heavily on the assumption that the scatter of the data around the ideal curve follows, at least approximately, a Gaussian (normal) distribution. This assumption leads to the well-known regression goal: to minimize the sum of the squares of the vertical distances (residuals) between the points and the curve. In practice, however, this assumption does not always hold. Analytical data often contain outliers that can play havoc with standard regression methods based on the normality assumption, causing them to produce more or less strongly biased results, depending on the magnitude of the deviation and/or the sensitivity of the procedure. It is not unusual to find an average of 10% outlying observations in data sets from some processes.

Outliers are most commonly thought of as extreme values that result from measurement or experimental errors. Barnett and Lewis provide a more cautious definition of the term outlier, describing it as an observation (or subset of observations) that appears to be inconsistent with the remainder of the data set. This definition also covers observations that do not follow the majority of the data, such as values that have been measured correctly but are, for one reason or another, far away from the other data values, while the formulation "appears to be inconsistent" reflects the subjective judgement of the observer as to whether or not an observation is declared outlying.

The ordinary least-squares (OLS) estimate $\hat{\beta}_{LS}$ of the parameter vector $\beta = [\beta_1, \beta_2, \ldots, \beta_p]$ is obtained as the solution of the problem:

$$\hat{\beta}_{LS} = \arg\min_{\beta} \sum_{i=1}^{n} \left( y_i - f(x_i, \beta) \right)^2, \quad (5)$$

where $n$ denotes the number of observations, $x = [x_1, x_2, \ldots, x_n]^T$ is an $n \times p$ matrix whose rows are $p$-dimensional vectors of predictor variables (or regressors), $y = [y_1, y_2, \ldots, y_n]^T$ is an $n \times 1$ vector of responses, and $f(x_i, \beta)$ is the model function. Since all data points are given the same weight, OLS lets observations with very large residuals dominate the fit and, consequently, the estimated parameters end up distorted if outliers are present.

In order to achieve robustness in coping with the problem of outliers, Huber introduced the class of so-called M-estimators, for which the sum of a function $\rho$ of the residuals is minimized. The resulting vector of parameters $\hat{\beta}_M$ estimated by an M-estimator is then

$$\hat{\beta}_M = \arg\min_{\beta} \sum_{i=1}^{n} \rho\left( \frac{y_i - f(x_i, \beta)}{\sigma} \right) = \arg\min_{\beta} \sum_{i=1}^{n} \rho\left( \frac{r_i}{\sigma} \right). \quad (6)$$

The residuals $r_i$ are standardized by a measure of dispersion $\sigma$ to guarantee scale equivariance (i.e., independence with respect to the measurement units of the dependent variable). The function $\rho(x)$ must be even, nondecreasing for positive values, and increasing less rapidly than the square.

The minimization in (6) can always be done directly. However, it is often simpler to differentiate the $\rho$ function with respect to $\beta$ and solve for the root of the derivative. When this differentiation is possible, the M-estimator is said to be of $\psi$-type; otherwise, it is said to be of $\rho$-type.

Let $\psi = \rho'$ be the derivative of $\rho$. Assuming $\sigma$ is known and defining the weights $w_i = \psi(r_i/\sigma)/r_i$, the estimates $\hat{\beta}_M$ can be obtained by solving the system of equations:

$$\sum_{i=1}^{n} w_i r_i \frac{\partial f(x_i, \beta)}{\partial \beta_j} = 0, \quad j = 1, 2, \ldots, p. \quad (7)$$

The weights depend on the residuals, the residuals depend on the estimated coefficients, and the estimated coefficients depend on the weights. Hence, to solve for M-estimators, an iteratively reweighted least-squares (IRLS) algorithm is employed. Starting from some initial estimates $\hat{\beta}^{(0)}$, at each iteration $t$ until convergence, the algorithm computes the residuals $r_i^{(t-1)}$ and the associated weights $w_i^{(t-1)} = w(r_i^{(t-1)})$ from the previous iteration and yields new weighted least-squares estimates.
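The IRLS idea is easiest to see in the simplest possible setting, a robust location estimate. The sketch below is illustrative only: the scale is fixed at 1 for brevity, and a Huber weight stands in for a generic $w = \psi(r)/r$ (the biweight used in this paper is introduced in the next subsection):

```python
def huber_weight(r, c=1.345):
    """Huber psi(r)/r: weight 1 inside [-c, c], c/|r| outside."""
    return 1.0 if abs(r) <= c else c / abs(r)

def irls_location(y, weight_fn, iters=30):
    """Iteratively reweighted least squares for a robust location estimate."""
    mu = sum(y) / len(y)  # start from the ordinary least-squares estimate
    for _ in range(iters):
        # Weights from the current residuals, then a weighted mean update.
        w = [weight_fn(yi - mu) for yi in y]
        mu = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    return mu
```

With data clustered near 1 plus one gross outlier at 10, the ordinary mean is pulled to about 2.5, while the IRLS estimate stays near the bulk of the data because the outlier's weight shrinks at every iteration.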

2.1. Objective Function. Several $\rho$ functions can be used. Here we opted for Tukey's biweight, or bisquare, function defined as

$$\rho(z_i) = \begin{cases} \dfrac{c^2}{6} \left[ 1 - \left( 1 - \left( \dfrac{z_i}{c} \right)^2 \right)^3 \right], & |z_i| \le c, \\[2ex] \dfrac{c^2}{6}, & |z_i| > c, \end{cases} \quad (8)$$

where $c$ is a tuning constant and $z_i = r_i/\sigma$.

The corresponding $\psi(z_i)$ function is

$$\psi(z_i) = \begin{cases} z_i \left[ 1 - \left( \dfrac{z_i}{c} \right)^2 \right]^2, & |z_i| \le c, \\[1ex] 0, & |z_i| > c. \end{cases} \quad (9)$$

Tukey's biweight estimator has a smoothly redescending $\psi$ function that prevents extreme outliers from affecting the calculation of the biweight estimates by assigning them zero weight. As can be seen in Figure 1, the weights for the biweight estimator decline as soon as $z$ departs from 0 and are 0 for $|z| > c$. Smaller values of $c$ produce more resistance to outliers, but at the expense of lower efficiency when the errors are normally distributed. The tuning constant is generally picked to give reasonably high efficiency in the normal case; in particular, $c = 4.685$ produces 95% efficiency when the errors are normal, while guaranteeing resistance to contamination of up to 10% of outliers.
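For concreteness, the objective, $\psi$, and weight functions of (8), (9), and Figure 1 can be coded directly (a straightforward transcription; `z` is the standardized residual $r/\sigma$):

```python
C_DEFAULT = 4.685  # tuning constant for 95% Gaussian efficiency

def tukey_rho(z, c=C_DEFAULT):
    """Biweight objective (Eq. (8)): bounded, so huge residuals add no more loss."""
    if abs(z) <= c:
        return (c ** 2 / 6.0) * (1.0 - (1.0 - (z / c) ** 2) ** 3)
    return c ** 2 / 6.0

def tukey_psi(z, c=C_DEFAULT):
    """Derivative of rho (Eq. (9)): redescends smoothly to 0 at |z| = c."""
    return z * (1.0 - (z / c) ** 2) ** 2 if abs(z) <= c else 0.0

def tukey_weight(z, c=C_DEFAULT):
    """IRLS weight psi(z)/z; residuals with |z| > c get weight exactly 0."""
    return (1.0 - (z / c) ** 2) ** 2 if abs(z) <= c else 0.0
```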

In applications, an estimate of the standard deviation of the errors is needed in order to use these results. Usually a robust measure of spread is used in preference to the standard deviation of the residuals. A common approach is to take $\hat{\sigma} = \mathrm{MAD}/0.6745$, where MAD is the median absolute deviation. Despite having the best possible breakdown point of 50%, the MAD is not without its weaknesses. It exhibits good statistical efficiency for contaminated data (i.e., data containing extreme scores); however, when the data approach a normal distribution, the MAD is only 37% efficient. Furthermore, it is ill-suited for asymmetrical distributions, since it attaches equal importance to positive and negative deviations from the location estimate.

Hence, the scale parameter $\sigma$ is computed using the Rousseeuw-Croux estimator $Q_n$:

$$Q_n = d \left\{ |x_i - x_j|;\ i < j \right\}_{(k)}, \quad (10)$$

where $d$ is a calibration factor and $k = \binom{h}{2}$, with $h = \lfloor n/2 \rfloor + 1$ roughly half the number of observations; that is, $Q_n$ is the $k$-th order statistic of the $\binom{n}{2}$ pairwise absolute differences. The estimator $Q_n$ has the optimal 50% breakdown point, is equally suitable for both symmetrical and asymmetrical distributions, and is considerably more efficient (about 82%) than the MAD under a Gaussian distribution.
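A brute-force sketch of $Q_n$ follows (O(n²) in the number of pairwise distances, suitable only for small samples; the Gaussian consistency factor $d \approx 2.2219$ is assumed, and finite-sample correction factors are omitted for brevity):

```python
from itertools import combinations

def qn_scale(x, d=2.2219):
    """Rousseeuw-Croux Q_n scale estimator (Eq. (10)), brute-force version."""
    n = len(x)
    # All C(n, 2) pairwise absolute differences, sorted ascending.
    diffs = sorted(abs(a - b) for a, b in combinations(x, 2))
    h = n // 2 + 1
    k = h * (h - 1) // 2  # take the k-th order statistic, k = C(h, 2)
    return d * diffs[k - 1]
```

Because only a low-order statistic of the pairwise distances is used, moving an outlier arbitrarily far away leaves the estimate unchanged, which is precisely the 50% breakdown behavior described above.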

3. Experimental Setup

To illustrate the efficacy of the proposed approach, we use both artificial data generated by Monte Carlo simulations and experimental data for the tyrosinase enzyme (EC 1.14.18.1) extracted from three different sources.

3.1. Monte Carlo Simulations. Simulation studies are useful for gaining insight into the examined algorithms' strengths and weaknesses, such as robustness, against a number of varying factors. Three outlier scenarios and a total of 18 different situations are considered in this research. The data sets are generated from the model:

$$y_i = \frac{\theta_1 x_i}{\theta_2 + x_i} + \epsilon_i, \quad i = 1, 2, \ldots, n, \quad (11)$$

where the regression coefficients are fixed at $\theta = [\theta_1, \theta_2] = [80, 240]$. The explanatory variables are set to $x_i = 100i$, and a zero-mean, unit-variance Gaussian random number $\epsilon_i$ is added as measurement error.

The factors considered in this simulation are (1) level of outlier contamination: 10% or 20%, (2) sample size: small (n = 10), medium (n = 20), or large (n = 50), and (3) distances of outliers from clean observations: 10 standard deviations, 50 standard deviations, or 100 standard deviations. There are 1200 replications for each scenario and all simulations are carried out in MATLAB. The 3 scenarios and the 18 situations considered in this research are summarized in Table 1.
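A sketch of how one replicate of model (11) might be generated follows. The exact placement of the outliers in the original MATLAB simulations is not specified, so the contamination step below (shifting randomly chosen responses by a given number of error standard deviations) is an assumption for illustration:

```python
import random

def simulate_replicate(n, contamination=0.1, outlier_sd=10.0, seed=None):
    """One simulated data set following Eq. (11) with theta = (80, 240).

    A fraction `contamination` of the responses is shifted by `outlier_sd`
    error standard deviations (the error SD is 1) to mimic outliers; the
    placement of the outliers is an assumption made for this sketch.
    """
    rng = random.Random(seed)
    x = [100.0 * i for i in range(1, n + 1)]
    y = [80.0 * xi / (240.0 + xi) + rng.gauss(0.0, 1.0) for xi in x]
    for i in rng.sample(range(n), max(1, round(contamination * n))):
        y[i] += outlier_sd  # contaminate the selected observations
    return x, y
```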

These simulated data are then used to estimate the values of $K_m$ and $V_{\max}$ using the different fitting techniques. The mean estimated values of $K_m$ and $V_{\max}$ for a particular scenario and fitting technique are subsequently calculated by averaging the $K_m$ and $V_{\max}$ values obtained in each of the 1200 trials. The estimator efficacy is assessed in terms of bias, precision, and accuracy. Bias is defined as the absolute difference between the mean estimated parameter value and the known parameter value:

$$\mathrm{Bias} = \left| \frac{1}{N} \sum_{j=1}^{N} \hat{\theta}_{ij} - \theta_i \right|, \quad (12)$$

where $N$ is the total number of replications in the simulated scenario and $\hat{\theta}_{ij}$ is the estimate of the true parameter $\theta_i$ in replication $j$. The term precision refers to the absence of random error or variability. It is measured by the coefficient of variation ($C_v$), that is, the standard deviation of the estimates expressed as a percentage of their mean:

$$C_v = \frac{\sigma_{\hat{\theta}_i}}{\bar{\hat{\theta}}_i} \times 100, \quad (13)$$

where $\sigma_{\hat{\theta}_i}$ and $\bar{\hat{\theta}}_i$ denote the standard deviation and the mean of the $N$ estimates $\hat{\theta}_{ij}$, respectively.

The prediction accuracy is defined as the overall distance between the estimated values and the true values. It is measured by the normalized mean squared error (NMSE), that is, the mean of the squared differences between the estimated and known parameter values, normalized by the mean of the estimates:

$$\mathrm{NMSE} = \frac{1}{N} \sum_{j=1}^{N} \left( \frac{\hat{\theta}_{ij} - \theta_i}{\bar{\hat{\theta}}_i} \right)^2, \quad (14)$$

where again N is the total number of replications in the simulated scenario.
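The three diagnostics (12)-(14) translate directly into code (a plain transcription of the formulas, with `estimates` holding the $N$ values obtained for one parameter across replications and `true_value` the known parameter value):

```python
def bias(estimates, true_value):
    """Absolute difference between the mean estimate and the true value (Eq. (12))."""
    return abs(sum(estimates) / len(estimates) - true_value)

def coeff_variation(estimates):
    """Standard deviation as a percentage of the mean of the estimates (Eq. (13))."""
    m = sum(estimates) / len(estimates)
    sd = (sum((e - m) ** 2 for e in estimates) / len(estimates)) ** 0.5
    return 100.0 * sd / m

def nmse(estimates, true_value):
    """Mean squared deviation from truth, normalized by the mean estimate (Eq. (14))."""
    m = sum(estimates) / len(estimates)
    return sum(((e - true_value) / m) ** 2 for e in estimates) / len(estimates)
```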

3.2. Enzyme Data Sets. Tyrosinase (EC 1.14.18.1) is a ubiquitous enzyme responsible for melanization in animals and plants [19, 20]. In the presence of molecular oxygen, this enzyme catalyzes the hydroxylation of monophenols to o-diphenols (cresolase activity) and their subsequent oxidation to o-quinones (catecholase activity). The latter products are unstable in aqueous solution, polymerizing further to undesirable brown, red, and black pigments. Tyrosinase has attracted a lot of attention with respect to its biotechnological applications, owing to its attractive catalytic ability, as the catechol products are useful as drugs or drug synthons.

For the purposes of the present study, tyrosinase was extracted from potato (Solanum tuberosum) and two species of common edible mushrooms: Agaricus bisporus (Ab) and Pleurotus ostreatus (Po). All the source materials were purchased from the local green market in Split, Croatia. The enzyme extract was prepared with 100 mL of cold 50 mM phosphate buffer (pH 6.0) per 50 g of source material. The homogenates were centrifuged at 5000 rpm for 30 min and the supernatant was collected. The sediments were mixed with cold phosphate buffer and allowed to sit in the cold with occasional shaking. The buffer containing the sediments was then centrifuged once again to collect the supernatant. These supernatants were subsequently used as sources of enzyme.

The tyrosinase activity was determined spectrophotometrically at room temperature (t = 25 °C) and λ = 475 nm, by measuring the conversion of L-DOPA to the red-coloured oxidation product dopachrome. The reaction mixture, obtained after adding 50 µL of enzyme extract to a cuvette containing 1.2 mL of 50 mM phosphate buffer (pH 6.0) and 0.8 mL of 10 mM L-DOPA, was immediately shaken, and the increase in absorbance was measured for 3 minutes. The change in absorbance was proportional to the enzyme concentration. The initial rate was calculated from the linear part of the recorded progress curve. One unit of enzyme was defined as the amount that catalyzed the transformation of 1 µmol of L-DOPA to dopachrome per minute under the above conditions. The dopachrome extinction coefficient at 475 nm was 3600 M⁻¹ cm⁻¹.

To determine the values of $K_m$ and $V_{\max}$ for tyrosinase, the experimental kinetic data summarized in Table 2 were gathered by measuring enzyme activity in a cuvette in which 50 µL of enzyme solution was added to 2 mL of 50 mM phosphate buffer (pH 6.0) containing various concentrations of L-DOPA (0-10 mM). In this case, the estimator performance is evaluated by computing the mean absolute error (MAE), that is, the mean of the absolute differences between the observed reaction rate $v_i$ and the reaction rate predicted from the estimates of $K_m$ and $V_{\max}$ at concentration $s_i$:

$$\mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left| v_i - \frac{V_{\max} s_i}{s_i + K_m} \right|, \quad (15)$$

where $n$ is the number of experimental data points. The mean absolute error is a regularly employed quality measure that provides an objective assessment of how well the various estimated values of $K_m$ and $V_{\max}$ fit the untransformed experimental data.
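Equation (15) likewise has a one-line transcription (`s` and `v` are the measured concentrations and rates; `k_m` and `v_max` are the parameter estimates being evaluated):

```python
def mean_absolute_error(s, v, k_m, v_max):
    """MAE between observed rates and Michaelis-Menten predictions (Eq. (15))."""
    return sum(abs(vi - v_max * si / (si + k_m))
               for si, vi in zip(s, v)) / len(s)
```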

4. Results and Discussion

4.1. Parameter Estimation Using Simulated Data. Figures 2-5 summarize our simulation outcomes for the different sample sizes, contamination levels, and outlier distances. The simulation results make it evident that the modified robust Tukey's biweight estimator, denoted robust nonlinear regression (RNR), outperforms the four alternative fitting techniques with respect to bias, coefficient of variation, and normalized mean squared error, yielding both accurate and precise estimates of $K_m$ and $V_{\max}$ under all test conditions. For example, looking at the set of values obtained for a small sample size (n = 10) with the minimal level of contamination present in the data and minimal outlier scatter (Situation 1, Table 1), we observe that the RNR estimator gives average estimates of $K_m$ and $V_{\max}$ of 240.08 ± 14.39 and 80 ± 1.5, respectively. When the EH, HW, and LB plots were used, the data produced 224.38 ± 38.01, 241.02 ± 44.08, and 237.47 ± 52.05, respectively, as average estimates of $K_m$, and 78.18 ± 4.46, 79.8 ± 4.85, and 79.42 ± 6.15, respectively, as estimates of $V_{\max}$. When the OLS estimator was applied, the corresponding average values of $K_m$ and $V_{\max}$ were 238.71 ± 40.04 and 79.86 ± 4.39, respectively. If the reported standard deviations are scaled by the appropriate means, the resulting coefficients of variation of $K_m$ and $V_{\max}$ are 16.9% and 5.7% (EH), 18.3% and 6.1% (HW), 21.9% and 7.7% (LB), 16.8% and 5.5% (OLS), and 6% and 1.9% (RNR), respectively. Thus, although the Hanes-Woolf, Lineweaver-Burk, and ordinary least-squares methods all have a low bias (Figures 2 and 4) and produce results in close proximity to the values obtained by the robust nonlinear regression method, their estimates are much more imprecise and as such are of limited utility.
Figure 6 shows the plots of fitted reaction curves for the randomly selected replications of situations with small sample size (Situations 1-6).

For a medium sample size (n = 20) with the same levels of contamination and outlier scatter (Situation 7, Table 1), the RNR approach yielded almost identical results, with average $K_m$ and $V_{\max}$ estimates of 239.68 ± 9.3 and 80 ± 0.68, respectively. The EH, HW, and LB plots estimated the average $K_m$ as 228.83 ± 26.39, 238.01 ± 38.12, and 238.52 ± 49.21, respectively, and $V_{\max}$ as 79.09 ± 2.12, 79.6 ± 2.73, and 79.6 ± 4.07, respectively. The average kinetic parameters $K_m$ and $V_{\max}$ obtained by the OLS method were 238.75 ± 28.43 and 79.88 ± 2.05, respectively. Again, if the standard deviations are scaled by the appropriate means, the resulting coefficients of variation of $K_m$ and $V_{\max}$ are 11.5% and 2.7% (EH), 16% and 3.4% (HW), 20.6% and 5.1% (LB), 11.9% and 2.6% (OLS), and 3.9% and 0.8% (RNR), respectively.

Similarly, for a large sample size (n = 50) with the same levels of contamination and outlier scatter (Situation 13, Table 1), the values of $K_m$ and $V_{\max}$ are estimated as 232.67 ± 22.38 and 79.71 ± 0.98 (EH), 243.2 ± 29.88 and 79.99 ± 1.11 (HW), 246.97 ± 78.33 and 80.15 ± 3.43 (LB), 240.68 ± 19.19 and 80.08 ± 0.86 (OLS), and 239.62 ± 7.19 and 80.01 ± 0.31 (RNR), respectively. In this case, the resulting coefficients of variation of $K_m$ and $V_{\max}$ are 9.6% and 1.2% (EH), 12.3% and 1.4% (HW), 31.7% and 4.3% (LB), 8% and 1.1% (OLS), and 3% and 0.4% (RNR), respectively. It should be noted that, although in all of the aforementioned cases the Eadie-Hofstee method has coefficients of variation highly comparable to those of the ordinary least-squares method, the EH estimates of $K_m$ and $V_{\max}$ are much further from the true values than those obtained by the Hanes-Woolf, ordinary least-squares, and robust nonlinear regression methods.

As the contamination level and the outlier scatter increase, the average estimates of $K_m$ and $V_{\max}$ from the linear plots and the OLS method begin to deviate significantly. The modified robust Tukey's biweight estimator, however, is able to keep the errors in check and produce results that remain highly comparable and much closer to the true parameter values.

Numerically, by looking at the estimated values alone, it is hard to tell which of the selected estimators has the best overall performance; the normalized mean squared error, however, reveals the parameter values for which the error is minimal. Thus, from Figures 3 and 5, the best (most accurate) values of $K_m$ and $V_{\max}$ are obtained in Situation 17, for which the minimum error values are 0.0541 and 8.5 × 10⁻⁴, respectively (as obtained by RNR), indicating that the estimated values are very close to the true values. The worst error values for the RNR method, 1.6982 and 0.1517, are obtained in Situation 4 (Figures 3 and 5). In all other situations, the normalized mean squared errors for the RNR method are less than 1 and 0.1, respectively (Figures 3 and 5). This demonstrates the credibility and robustness of the proposed modified Tukey's biweight estimator relative to the other methods when outliers or influential observations are present in the data. Comparing the robust nonlinear regression method with the ordinary least-squares method, we find that the normalized mean squared errors of the RNR method are on average more than 10 times lower than those produced by the OLS method.

4.2. Parameter Estimation Using Experimental Data. The viability of the proposed robust estimator was also tested using the experimental kinetic data for the tyrosinase enzyme. The corresponding $K_m$ and $V_{\max}$ values produced by the different estimation models are given in Table 3. Upon closer inspection and analysis of these values, it can be observed that, in the case of Ab mushroom and potato tyrosinase, the kinetic values $K_m$ and $V_{\max}$ yielded by the RNR method (599 µmol and 0.38 µmol/min for Ab mushroom and 10740 µmol and 0.118 µmol/min for potato, resp.) are much closer to the values yielded by the HW plot (555 µmol and 0.381 µmol/min for Ab mushroom and 10659 µmol and 0.11 µmol/min for potato, resp.) than to those of the OLS method (545 µmol and 0.376 µmol/min for Ab mushroom and 25720 µmol and 0.209 µmol/min for potato, resp.). Furthermore, it is interesting to note that the parameter values yielded by the LB plot ($K_m$ 263 µmol and $V_{\max}$ 0.248 µmol/min for Ab mushroom, $K_m$ 360 µmol and $V_{\max}$ 0.002 µmol/min for Po mushroom, and $K_m$ 1216 µmol and $V_{\max}$ 0.021 µmol/min for potato) are, for all three tyrosinase sources, very far from the values yielded by the other four estimation methods. Figures 7(a), 7(b), and 8(a) show the curves fitted to the experimental data using the modified Tukey's biweight estimator in comparison with the standard linearization methods and ordinary least-squares nonlinear regression. The mean absolute errors between the predicted reaction rates and the actual data are plotted in the right graph in Figure 8. In particular, the mean errors for the RNR method are 0.0048, 1.5 × 10⁻⁴, and 0.0014, respectively, which shows a good fit of the obtained model.

5. Conclusion

When an enzymatic reaction follows Michaelis-Menten kinetics, the equation for the initial velocity of the reaction as a function of the substrate concentration is characterized by two parameters: the Michaelis constant, $K_m$, and the maximum velocity of the reaction, $V_{\max}$. To this day, these parameters are routinely estimated using one of three linearization models: the Lineweaver-Burk plot ($1/v$ versus $1/s$), the Eadie-Hofstee plot ($v$ versus $v/s$), and the Hanes-Woolf plot ($s/v$ versus $s$). Although the linear plots obtained by these methods are very illustrative and useful in analyzing the behavior of enzymes, the common problem they all share is that the transformed data usually do not satisfy the assumptions of linear regression, namely, that the scatter of the data around the straight line follows a Gaussian distribution and that the standard deviation is equal for every value of the independent variable.

A more accurate approximation of the Michaelis-Menten parameters can be achieved through the use of nonlinear least-squares fitting techniques. However, these techniques require a good initial guess and offer no guarantee of convergence to the global minimum. On top of that, they are very sensitive to the presence of outliers and influential observations in the data, in which case they are likely to produce biased, inaccurate, and imprecise parameter estimates.

In this paper, a robust estimator of nonlinear regression parameters based on a modification of Tukey's biweight function is introduced. Robust regression techniques have received considerable attention in the mathematical statistics literature, but they have yet to receive a similar amount of attention from practitioners performing data analysis. Robust nonlinear regression aims to fit a model to the data so that the results are more resilient to extreme values and remain relatively consistent when the errors come from a heavy-tailed distribution. The experimental comparisons, using both real and synthetic kinetic data, show that the proposed robust nonlinear estimator based on the modified Tukey's biweight function outperforms the standard linearization models and the ordinary least-squares method, yielding superior results with respect to bias, accuracy, and consistency when outliers or influential observations are present in the data.

https://doi.org/10.1155/2017/6560983

Competing Interests

The authors declare that there is no conflict of interest regarding the publication of this paper.

Acknowledgments

The work reported in this paper was supported by Croatian Science Foundation Research Project "Investigation of Bioactive Compounds from Dalmatian Plants: Their Antioxidant, Enzyme Inhibition and Health Properties" (IP 2014096897).

References

 H. J. Fromm and M. Hargrove, Essentials of Biochemistry, Springer, Berlin, Germany, 2012.

 K. A. Johnson and R. S. Goody, "The original Michaelis constant: translation of the 1913 Michaelis-Menten paper," Biochemistry, vol. 50, no. 39, pp. 8264-8269, 2011.

 A. Cornish-Bowden, "The origins of enzyme kinetics," FEBS Letters, vol. 587, no. 17, pp. 2725-2730, 2013.

 H. Lineweaver and D. Burk, "The determination of enzyme dissociation constants," Journal of the American Chemical Society, vol. 56, no. 3, pp. 658-666, 1934.

 G. S. Eadie, "The inhibition of cholinesterase by physostigmine and prostigmine," The Journal of Biological Chemistry, vol. 146, pp. 85-93, 1942.

 C. S. Hanes, "Studies on plant amylases," Biochemical Journal, vol. 26, no. 5, pp. 1406-1421, 1932.

 A. Fersht, Structure and Mechanism in Protein Science: A Guide to Enzyme Catalysis and Protein Folding, Freeman, 2000.

 G. N. Wilkinson, "Statistical estimations in enzyme kinetics," The Biochemical Journal, vol. 80, pp. 324-332, 1961.

 H. Motulsky and A. Christopoulos, Fitting Models to Biological Data Using Linear and Nonlinear Regression, GraphPad Software Inc., 2003.

 C. Cobelli and E. Carson, Introduction to Modeling in Physiology and Medicine, Academic Press, 2008.

 S. R. Nelatury, C. F. Nelatury, and M. C. Vagula, "Parameter estimation in different enzyme reactions," Advances in Enzyme Research, vol. 2, no. 1, pp. 14-26, 2014.

 G. Kemmer and S. Keller, "Nonlinear least-squares data fitting in Excel spreadsheets," Nature Protocols, vol. 5, no. 2, pp. 267-281, 2010.

 C. Lim, P. K. Sen, and S. D. Peddada, "Robust nonlinear regression in applications," Journal of the Indian Society of Agricultural Statistics, vol. 67, no. 2, pp. 215-234, 291, 2013.

 R. A. Maronna, R. D. Martin, and V. J. Yohai, Robust Statistics: Theory and Methods, Wiley Series in Probability and Statistics, John Wiley & Sons, New York, NY, USA, 2006.

 V. Barnett and T. Lewis, Outliers in Statistical Data, Wiley Series in Probability and Mathematical Statistics, 3rd edition, John Wiley & Sons, Chichester, UK, 1994.

 P. J. Huber, "Robust estimation of a location parameter," Annals of Mathematical Statistics, vol. 35, no. 1, pp. 73-101, 1964.

 F. Mosteller and J. W. Tukey, Data Analysis and Regression, Addison-Wesley, 1977.

 P. J. Rousseeuw and C. Croux, "Alternatives to the median absolute deviation," Journal of the American Statistical Association, vol. 88, no. 424, pp. 1273-1283, 1993.

 M. R. Loizzo, R. Tundis, and F. Menichini, "Natural and synthetic tyrosinase inhibitors as antibrowning agents: an update," Comprehensive Reviews in Food Science and Food Safety, vol. 11, no. 4, pp. 378-398, 2012.

 S. Y. Lee, N. Baek, and T.-G. Nam, "Natural, semisynthetic and synthetic tyrosinase inhibitors," Journal of Enzyme Inhibition and Medicinal Chemistry, vol. 31, no. 1, pp. 1-13, 2016.

 K. U. Zaidi, A. S. Ali, S. A. Ali, and I. Naaz, "Microbial tyrosinases: promising enzymes for pharmaceutical, food bio-processing, and environmental industry," Biochemistry Research International, vol. 2014, Article ID 854687, 16 pages, 2014.

 Z. Yang and F. Wu, "Catalytic properties of tyrosinase from potato and edible fungi," Biotechnology, vol. 5, no. 3, pp. 344-348, 2006.

Maja Marasovic, (1) Tea Marasovic, (2) and Mladen Milos (1)

(1) Faculty of Chemistry and Technology, University of Split, Rudera Boskovica 35, 21000 Split, Croatia

(2) Faculty of Electrical Engineering, Mechanical Engineering and Naval Architecture, University of Split, Rudera Boskovica 32, 21000 Split, Croatia

Correspondence should be addressed to Tea Marasovic; tmarasov@fesb.hr

Received 18 October 2016; Accepted 6 February 2017; Published 5 March 2017

Caption: FIGURE 1: Tukey's biweight estimator objective (ρ), influence (ψ), and weight functions for c = 4.685.
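The three curves in Figure 1 are easy to reproduce. The sketch below (function names are ours, not taken from the paper) implements Tukey's biweight objective ρ, influence function ψ = ρ', and IRLS weight w(x) = ψ(x)/x for the standard tuning constant c = 4.685:

```python
import numpy as np

C = 4.685  # standard tuning constant: ~95% efficiency under Gaussian errors

def tukey_rho(x, c=C):
    """Biweight objective: behaves like x^2/2 near 0, saturates at c^2/6."""
    z = np.clip(np.abs(np.asarray(x, dtype=float)) / c, 0.0, 1.0)
    return (c ** 2 / 6.0) * (1.0 - (1.0 - z ** 2) ** 3)

def tukey_psi(x, c=C):
    """Influence function psi = rho'; identically zero for |x| > c."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) <= c, x * (1.0 - (x / c) ** 2) ** 2, 0.0)

def tukey_weight(x, c=C):
    """IRLS weight w(x) = psi(x)/x, with w(0) = 1 and w = 0 beyond c."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) <= c, (1.0 - (x / c) ** 2) ** 2, 0.0)
```

Because ψ vanishes outside [-c, c], observations whose scaled residuals exceed c receive zero weight, which is what makes the estimator resistant to gross outliers.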

Caption: FIGURE 2: Mean estimated values (black dots) and standard deviations of the Michaelis constant, K_m, for different simulated scenarios. Red lines denote the true parameter value.

Caption: FIGURE 3: Normalized mean square error of K_m for different scenarios.

Caption: FIGURE 4: Mean estimated values (black dots) and standard deviations of the maximum initial velocity, V_max, for different simulated scenarios. Red lines denote the true parameter value.

Caption: FIGURE 5: Normalized mean square error of V_max for different scenarios.

Caption: FIGURE 6: Curves fitted using different fitting techniques for random replications of situations 1-6.

Caption: FIGURE 7: Curves fitted using different fitting techniques for tyrosinase extracted from Agaricus bisporus (a) and Pleurotus ostreatus (b) mushrooms.

Caption: FIGURE 8: Curves fitted using different fitting techniques for tyrosinase extracted from potato (a). Mean absolute errors for the different regression models used (b).
```
TABLE 1: The 18 situations considered in the simulations.

Scenario   Situation   Sample size   Outliers   σ

1               1          10          10%       10
                2                      20%       10
                3                      10%       50
                4                      20%       50
                5                      10%      100
                6                      20%      100
2               7          20          10%       10
                8                      20%       10
                9                      10%       50
               10                      20%       50
               11                      10%      100
               12                      20%      100
3              13          50          10%       10
               14                      20%       10
               15                      10%       50
               16                      20%       50
               17                      10%      100
               18                      20%      100

TABLE 2: Input experimental kinetic data sets.

Concentration          Reaction rate [µmol/min]
[µmol]            Ab           Po            Potato

25              0.0243    1.1 × 10^-4     0.0004
50              0.0292    2.6 × 10^-4     0.0011
100             0.0546    2.9 × 10^-4     0.0018
250             0.1388    3.7 × 10^-4     0.0023
500             0.1726    6.9 × 10^-4     0.0030
1000            0.2374    13.1 × 10^-4    0.0101
2500            0.3023    26.9 × 10^-4    0.0124
5000            0.3395    49.6 × 10^-4    0.0375
7500            0.3515    50.3 × 10^-4    0.0485
10000           0.3652    60.7 × 10^-4    0.0569

TABLE 3: Kinetic parameter (K_m and V_max) values and mean absolute
errors for tyrosinase extracted from different sources, estimated
using different methods.

Source   Parameter      LB           HW            EH            OLS           RNR

Ab       K_m          263.09       555.18        414.09        544.79        599.41
         V_max        0.2484       0.3812        0.3470        0.3767        0.3798
         MAE          0.0511       0.0071        0.0137        0.0064        0.0048
Po       K_m          359.91       3592.1        992.27        5700.3        6630.5
         V_max        0.0017       0.0079        0.0042        0.0095        0.0099
         MAE          0.0013       2.6 × 10^-4   7.5 × 10^-4   1.7 × 10^-4   1.5 × 10^-4
Potato   K_m          1216.0       10659         2093.4        25720         10740
         V_max        0.0208       0.1104        0.0394        0.2086        0.1180
         MAE          0.0139       0.0022        0.0087        0.0018        0.0014
```
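For readers who want to approximate the RNR column of Table 3, the following is a minimal iteratively reweighted Gauss-Newton sketch applied to the Agaricus bisporus (Ab) data from Table 2. It is not the authors' exact implementation; the starting values, stopping rule, and MAD-based residual scale are our own choices:

```python
import numpy as np

# Substrate concentrations [µmol] and Ab reaction rates [µmol/min] (Table 2).
s = np.array([25., 50., 100., 250., 500., 1000., 2500., 5000., 7500., 10000.])
v = np.array([0.0243, 0.0292, 0.0546, 0.1388, 0.1726,
              0.2374, 0.3023, 0.3395, 0.3515, 0.3652])

def michaelis_menten(theta, s):
    vmax, km = theta
    return vmax * s / (s + km)

def jacobian(theta, s):
    vmax, km = theta
    d = s + km
    # Partial derivatives of the model with respect to (vmax, km).
    return np.column_stack([s / d, -vmax * s / d ** 2])

def tukey_weights(r, c=4.685):
    scale = np.median(np.abs(r)) / 0.6745  # robust (MAD) residual scale
    u = r / max(scale, 1e-12)
    return np.where(np.abs(u) <= c, (1.0 - (u / c) ** 2) ** 2, 0.0)

def robust_fit(s, v, theta0, n_iter=100, tol=1e-10):
    """IRLS: reweight residuals with Tukey's biweight, take Gauss-Newton steps."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iter):
        r = v - michaelis_menten(theta, s)
        w = tukey_weights(r)
        J = jacobian(theta, s)
        JW = J * w[:, None]
        delta = np.linalg.solve(JW.T @ J, JW.T @ r)  # (J'WJ) delta = J'W r
        theta = theta + delta
        if np.max(np.abs(delta)) < tol:
            break
    return theta

vmax_hat, km_hat = robust_fit(s, v, theta0=(v.max(), 500.0))
print(f"V_max ~ {vmax_hat:.4f}, K_m ~ {km_hat:.1f}")
```

On this fairly clean data set the biweight only mildly downweights the most discrepant points; with heavily contaminated data the weights of gross outliers would drop to zero and the fit would ignore them entirely.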
Research Article, Journal of Chemistry, 2017.