# Construction of a Smooth Lyapunov Function for the Robust and Exact Second-Order Differentiator

1. Introduction

Differentiators play an important role in (continuous) feedback control systems. For example, differentiating the system's output is often required to construct the feedback controller. There are several options to approximate the derivatives of a signal, for instance, the classical linear filters or Luenberger observers, digital filters [1, 2], high-gain observers [3], and some nonlinear observers [4, 5]. However, a class of exact differentiators has emerged from sliding mode control theory, notably the first-order robust and exact differentiator (RED) [6], also known as the Super-Twisting algorithm. Initially, this algorithm was studied through geometric methods, but later the Lyapunov approach provided several interesting results [7-11].

Theoretically, the Super-Twisting algorithm can provide exactly the first derivative of a signal in finite time, if the second derivative is uniformly bounded. To obtain higher order derivatives, one could use first-order REDs in cascade; however, this configuration produces a significant loss of precision [12]. Hence, for higher order derivatives, a RED of arbitrary order was proposed in [12], and its properties were analyzed by means of geometric methods and homogeneity properties [13]. Unlike those kinds of proofs, a Lyapunov-based approach would be very useful to analyze robustness properties, to design the differentiator's parameters, and to estimate the convergence time.

Lyapunov's direct method is one of the most important tools in analysis and design of nonlinear control systems [14-17]. It has been used for analysis and design for a wide class of nonlinear systems as, for example, continuous [15], variable structure [18], or hybrid [19] systems, adaptive fuzzy controllers [20], and fuzzy optimal control for chaotic discrete-time systems [21].

For the case of the second-order RED [12], a continuous but not differentiable Lyapunov function was proposed in [22]. Although it can be used to design the parameters of the differentiator, a set of nonlinear inequalities involving the parameters of the function and the gains of the differentiator must be solved. Thus, it is desirable to have a differentiable Lyapunov function and an easier procedure to design the parameters of the differentiator.

The contributions of this paper improve the analysis and design of the second-order RED as stated below:

(i) We provide a constructive method to determine a differentiable Lyapunov function for the second-order RED. This is the first time that a Lyapunov function for the second-order RED is provided in the literature.

(ii) The Lyapunov function designing process is useful to obtain a procedure to design the gains of the second-order RED.

(iii) We also provide some different sets of gains for the second-order RED.

This is achieved by using the Lyapunov function design method proposed in [23]. Such a method has been applied to second-order systems, and in this paper we apply it to a third-order one: the second-order RED. One of the main characteristics of the method is that it allows us to design the parameters of the system by solving a linear system of inequalities or a linear matrix inequality. A preliminary version of these results was presented in [24].

This paper is organized as follows. In Section 2, a brief description is given of the second-order RED and the Lyapunov function we are proposing. Section 3 is dedicated to the design process of the Lyapunov function, and in Section 4 we finish with some concluding remarks.

2. Lyapunov Analysis and Design for the Second-Order RED

In [12], an arbitrary order RED was proposed; however, that work provides no method to determine whether the RED converges for a given set of gains, nor how to design gains that guarantee convergence.

In this section, we will first recall Levant's RED [12] and motivate the necessity of selecting appropriate gains. We then show that it is possible to provide a (smooth) Lyapunov function to prove the convergence of the RED for appropriate gains and how to scale these gains. Although the ideas presented in the paper are valid for an arbitrary order RED, we will restrict ourselves to the second-order differentiator for simplicity and concreteness of the presentation.

2.1. The Differentiator. Consider the class of signals $G_\Delta$, containing time functions $\sigma : \mathbb{R}_+ \to \mathbb{R}$ having continuous first-order $\sigma^{(1)}$ and second-order $\sigma^{(2)}$ derivatives and a third-order derivative which exists almost everywhere and is bounded, that is, $|\sigma^{(3)}(t)| \le \Delta$ for all $t \ge 0$, for some nonnegative constant $\Delta$. The second-order RED given by [12]

[mathematical expression not reproducible] (1)

can provide exactly the first- and second-order derivatives of $\sigma \in G_\Delta$ in finite time; that is, after a finite time $T$, $s_1(t) = \sigma(t)$, $s_2(t) = \sigma^{(1)}(t)$, and $s_3(t) = \sigma^{(2)}(t)$ for all $t \ge T$. In (1), as in the whole paper, the following notation is used: for a real variable $x \in \mathbb{R}$ and a real number $\rho \in \mathbb{R}$, $\lceil x \rfloor^{\rho} = |x|^{\rho} \operatorname{sign}(x)$. Note that since (1) has a discontinuous right-hand side, its solutions should be interpreted in the sense of Filippov [25].
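The display of (1) did not survive extraction. For the reader's orientation, a sketch of one common non-recursive presentation of the second-order RED (assumed here; see [12, 22] for the authoritative form) is:

```latex
\begin{aligned}
\dot s_1 &= -k_1 \lceil s_1 - \sigma(t) \rfloor^{2/3} + s_2,\\
\dot s_2 &= -k_2 \lceil s_1 - \sigma(t) \rfloor^{1/3} + s_3,\\
\dot s_3 &= -k_3 \lceil s_1 - \sigma(t) \rfloor^{0},
\end{aligned}
\qquad \text{with } \lceil x \rfloor^{0} = \operatorname{sign}(x).
\tag{1}
```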

Differentiator (1) works only if the gains $k = [k_1, k_2, k_3]^T$ are designed properly, as the following example illustrates.

Example 1. Consider the function $\sigma : \mathbb{R} \to \mathbb{R}$ given by $\sigma(t) = \cos(2t)$. The third derivative of $\sigma$ is bounded by $\Delta = 4$. We simulate (1) with the gains $k = [17.5, 68.4, 20]^T$. In Figure 1, it can be seen that $s_1$, $s_2$, and $s_3$ converge in finite time to $\sigma$, $\sigma^{(1)}$, and $\sigma^{(2)}$, respectively. However, with $k = [18, 69, 3]^T$, $s_1$ and $s_2$ converge to $\sigma$ and $\sigma^{(1)}$, respectively, but $s_3$ does not converge to $\sigma^{(2)}$; see Figure 2. Now, with $k = [20, 5, 10]^T$, the differentiator's trajectories are bounded, but they do not converge to $\sigma$, $\sigma^{(1)}$, and $\sigma^{(2)}$; see Figure 3. Moreover, some gains produce unstable trajectories, for instance, $k = [5, 4, 25]^T$; see Figure 4.
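A simulation like the one in this example can be reproduced with a short explicit-Euler sketch. This assumes the standard non-recursive form of the second-order RED (the display of (1) is unreadable in this copy); the step size, horizon, and test signal below are illustrative choices, not taken from the paper.

```python
import math

def spow(x, p):
    """Signed power: |x|^p * sign(x)."""
    return math.copysign(abs(x) ** p, x)

def simulate_red(sigma, k, t_end=10.0, dt=1e-4):
    """Explicit-Euler integration of the (assumed) non-recursive second-order RED,
    started from s = (0, 0, 0)."""
    s1 = s2 = s3 = 0.0
    t = 0.0
    for _ in range(int(t_end / dt)):
        e = s1 - sigma(t)                      # output error s1 - sigma(t)
        sgn = math.copysign(1.0, e) if e != 0.0 else 0.0
        s1 += dt * (-k[0] * spow(e, 2 / 3) + s2)
        s2 += dt * (-k[1] * spow(e, 1 / 3) + s3)
        s3 += dt * (-k[2] * sgn)
        t += dt
    return s1, s2, s3

# Gains from the first part of Example 1, applied to a milder test signal
# sigma(t) = 0.5*cos(t), whose third derivative is bounded by 0.5.
k = (17.5, 68.4, 20.0)
s1, s2, s3 = simulate_red(lambda t: 0.5 * math.cos(t), k)
```

After the transient, `s1`, `s2`, and `s3` track $\sigma$, $\sigma^{(1)}$, and $\sigma^{(2)}$ up to discretization-induced errors; the accuracy degrades from `s1` to `s3`, as is typical for Euler discretizations of sliding-mode differentiators.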

Remark 2. A remarkable property of differentiator (1) is that, in the absence of noise, it converges exactly and in finite time to the true values of $\sigma$, $\sigma^{(1)}$, and $\sigma^{(2)}$, and it does so robustly, that is, for any signal in the class $G_\Delta$. These properties are not achievable by any continuous differentiator, which is the reason to call it a robust and exact differentiator [6, 12].

The last example makes clear the necessity of a procedure to design the parameters of (1) that guarantees the finite time convergence of $s_1$, $s_2$, and $s_3$ to $\sigma$, $\sigma^{(1)}$, and $\sigma^{(2)}$, respectively. In what follows, we give a procedure, based on a Lyapunov analysis, to design gains of the differentiator such as those used in Example 1.

2.2. Main Results: Lyapunov Function and Gain Design. Introducing the differentiation errors $x_i = s_i - \sigma^{(i-1)}$, $i = 1, 2, 3$, as in [22], the dynamics of (1) can be rewritten as follows:

[mathematical expression not reproducible], (2)

where $\pi(t) = -\sigma^{(3)}(t)$. Note that $\pi(t)$ can be considered a disturbance for the error dynamics. Thus, the robust and exact convergence of RED (1) is equivalent to the finite time stability of the origin $x = 0$ of (2), robustly with respect to arbitrary but bounded perturbations $|\pi(t)| \le \Delta$; in this sense, the robustness of the differentiator lies in the class of functions it can differentiate. A sufficient condition for this property is the existence of a robust Lyapunov function for (2), that is, a continuously differentiable and positive definite function $V(x)$ having a negative definite derivative $\dot V$ along the trajectories of (2) for every $|\pi(t)| \le \Delta$.
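Since the display of (2) is also unreadable here, we record a hedged reconstruction of the error dynamics, consistent with the homogeneity weights $r = [3, 2, 1]$ and degree $-1$ used later in this section:

```latex
\dot x_1 = -k_1 \lceil x_1 \rfloor^{2/3} + x_2, \qquad
\dot x_2 = -k_2 \lceil x_1 \rfloor^{1/3} + x_3, \qquad
\dot x_3 = -k_3 \lceil x_1 \rfloor^{0} + \pi(t).
\tag{2}
```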

Our goal in this paper is threefold:

(G1) We present a linearly parametrized family $\mathcal{F}$ of differentiable Lyapunov functions (LFs) for (2) that gives information about the class of signals $\pi$ and the set of gains $k = [k_1, k_2, k_3]^T$ such that the trajectories of (2) converge to zero in finite time.

(G2) We give a systematic procedure to determine whether there exists a LF $V(x)$ in the family $\mathcal{F}$ for a given set of gains $k$ and a given size of perturbations $\Delta \ge 0$. This problem reduces to finding a solution of a finite system of linear inequalities (LI) or a linear matrix inequality (LMI) in the parameters of the family of LFs.

(G3) The system of inequalities that determines whether $V(x)$ is a LF is linear in the parameters of the family $\mathcal{F}$ and also linear in the gains $k$ and the perturbation's size $\Delta$; however, it is bilinear in both sets of parameters. We provide a procedure to determine a set of gains $k$, a bound $\Delta$, and a LF in the family $\mathcal{F}$ that render (1) a RED.

Since the differential equation (2) with $\pi(t) \equiv 0$ (or the associated differential inclusion for $|\pi(t)| \le \Delta$) is homogeneous, we will use a family of homogeneous LFs. We therefore recall some definitions on weighted homogeneity for continuous functions and vector fields [16].

Definition 3. Let $\Lambda_\epsilon^r$ be the square diagonal matrix given by $\Lambda_\epsilon^r = \operatorname{diag}(\epsilon^{r_1}, \ldots, \epsilon^{r_n})$, with $r_i > 0$. The components of the vector $r$ are called the weights of the coordinates. Then, one has the following:

(a) A function $f : \mathbb{R}^n \to \mathbb{R}$ is homogeneous of degree $m \in \mathbb{R}$ if $f(\Lambda_\epsilon^r x) = \epsilon^m f(x)$ for all $x \in \mathbb{R}^n$ and all $\epsilon > 0$.

(b) The vector field $F = [f_1(x), \ldots, f_n(x)]^T$ is homogeneous of degree $m \in \mathbb{R}$ if every $f_i$ is homogeneous of degree $m + r_i$.

(c) A dynamical system $\dot x = F(x)$, $x \in \mathbb{R}^n$, is said to be homogeneous of degree $m$ if $F$ is homogeneous of degree $m$.
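Definition 3 can be checked numerically for the nominal error dynamics. The snippet below assumes the standard non-recursive form of (2) (the original display is unreadable) and verifies $f_i(\Lambda_\epsilon^r x) = \epsilon^{m + r_i} f_i(x)$ for $r = [3, 2, 1]$ and $m = -1$; the gains and the test point are arbitrary.

```python
import math

def spow(x, p):
    """Signed power: |x|^p * sign(x)."""
    return math.copysign(abs(x) ** p, x)

def F(x, k=(17.5, 68.4, 20.0)):
    """Nominal error dynamics (assumed form of (2) with pi = 0)."""
    x1, x2, x3 = x
    return (-k[0] * spow(x1, 2 / 3) + x2,
            -k[1] * spow(x1, 1 / 3) + x3,
            -k[2] * math.copysign(1.0, x1))

r, m = (3, 2, 1), -1
x = (0.7, -1.3, 2.1)
for eps in (0.5, 2.0, 10.0):
    xs = tuple(eps ** ri * xi for ri, xi in zip(r, x))   # Lambda_eps^r x
    fx, fxs = F(x), F(xs)
    for i in range(3):
        # component i scales with exponent m + r_i
        assert math.isclose(fxs[i], eps ** (m + r[i]) * fx[i], rel_tol=1e-9)
```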

The extensions of these definitions to discontinuous systems and differential inclusions are very similar and can be found, for example, in [11, 13]. Homogeneity is a scaling property that can be very useful: if some characteristic of a function is determined locally, homogeneity allows us to extend it to the whole domain. For instance, if the origin of a homogeneous system is locally asymptotically stable, homogeneity ensures global asymptotic stability.

Note that, in the nominal case $\pi(t) = 0$, system (2) is homogeneous of degree $m = -1$ with the weights $r = [3, 2, 1]$. Therefore, if the origin of (2) is an asymptotically stable equilibrium point, then there exists a homogeneous Lyapunov function for (2) [11, 26]. Moreover, the negative homogeneity degree together with asymptotic stability implies finite time stability of the origin of (2) [13]. Thus, if we can find a differentiable homogeneous Lyapunov function for (2), we can ensure global finite time convergence of (1).

Now, we present the main theorem of this paper.

Theorem 4. Consider the class of signals $G_\Delta$ for any $\Delta \ge 0$. There exist gains $k = [k_1, k_2, k_3]^T$ such that (1) is a RED for any signal $\sigma \in G_\Delta$.

This result is a simple consequence of the following three lemmas.

Lemma 5. There exist $\alpha = [\alpha_1, \alpha_{12}, \alpha_2, \alpha_{13}, \alpha_{23}, \alpha_3]^T$ and $k = [k_1, k_2, k_3]^T$ such that the function $V : \mathbb{R}^3 \to \mathbb{R}$ given by

[mathematical expression not reproducible] (3)

is a homogeneous differentiable Lyapunov function for (2) with $\pi(t) = 0$ for all $t \ge 0$.
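The display of (3) is unreadable in this copy. A reconstruction consistent with the homogeneity degree $m = 5$, the weights $r = [3, 2, 1]$, the sign conventions fixed after (11), and the coefficient scalings in (6) — offered as a hedged sketch, not a verbatim restoration — is:

```latex
V(x) = \alpha_1 |x_1|^{5/3} - \alpha_{12}\, x_1 x_2 + \alpha_2 |x_2|^{5/2}
     + \alpha_{13}\, \lceil x_1 \rfloor^{4/3} x_3
     - \alpha_{23}\, x_2 \lceil x_3 \rfloor^{3} + \alpha_3 |x_3|^{5}.
\tag{3}
```

Each term is homogeneous of degree 5 with respect to the weights $[3, 2, 1]$, and under the uniform scaling $x_i \mapsto x_i / L$ the coefficients pick up exactly the powers of $L$ listed in (6).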

Lemma 5 guarantees the existence of parameters $\alpha$ and $k$ such that the trajectories of (2) converge in finite time to the origin $x = 0$ for $\pi(t) = 0$ and such that $V(x)$ in (3) is a Lyapunov function. In Section 3, we prove Lemma 5 by providing a procedure to compute coefficients $\alpha$ and parameters $k$ such that $V(x)$ is a LF. It consists of finding a solution of a system of inequalities (Polya's procedure) or matrix inequalities (SOS procedure), which are linear in $\alpha$ and in $k$ but bilinear in both. Tables 1 and 2 give some values of $\alpha$ and $k$ satisfying Lemma 5, which we found by solving those inequalities; many more such values can be obtained by exploring the solution set. The nature of the parameters $p_v$ and $p_w$ will be explained in Section 3.

From Lemma 5, we obtain a RED only for a very narrow class of signals, namely $G_0$, which basically consists of polynomials of degree 3. The following lemma shows that it is in fact a RED for $G_{\Delta_0}$ for some (possibly small) value of $\Delta_0 > 0$.

Lemma 6. Suppose that $\alpha$ and $k$ satisfy Lemma 5. Then, there exists a real constant $\Delta_0 > 0$ such that the derivative of (3) along the trajectories of (2) remains negative definite for all $\pi(t)$ such that $|\pi(t)| \le \Delta_0$.

The proof of this lemma and a procedure to find a value of $\Delta_0$ are given in Section 3.2.3. Tables 1 and 2 list the values of $\Delta_0$ satisfying Lemma 6 for each set of parameters $\alpha$ and $k$.

With the results of Lemmas 5 and 6, we can prove Theorem 4 by showing that if for some gains $k$ we have a RED for the class of signals $G_{\Delta_0}$, then it is possible to scale the gains $k$ to obtain a RED for the class of signals $G_\Delta$ for every positive value of $\Delta$. This idea of scaling the gains, previously used in [6, 12], is the content of the next lemma.

Lemma 7. Let $k = [k_1, k_2, k_3]^T$ and $\Delta_0$ be such that the origin of (2) is finite time stable for any $\pi(t)$ with $|\pi(t)| \le \Delta_0$. Then, the origin of (2) is finite time stable for any $\pi(t)$ with $|\pi(t)| \le \Delta$ if the gains are chosen as $k_s = [k_{1s}, k_{2s}, k_{3s}]^T = [L^{1/3} k_1, L^{2/3} k_2, L k_3]^T$, where $L \ge \Delta/\Delta_0$.

Proof. Consider the change of coordinates $y_i = L x_i$, $0 < L \in \mathbb{R}$, $i = 1, 2, 3$. Thus, from (2), we obtain the system

[mathematical expression not reproducible]. (4)

Since by assumption the trajectories of (2) converge to zero in finite time for all disturbances $\pi(t)$ bounded by $\Delta_0$, the trajectories of (4) converge to zero in finite time for all disturbances bounded by $L\Delta_0$. By defining $\Delta = L\Delta_0$, we obtain the result.
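The computation behind this proof can be made explicit. Assuming the standard non-recursive error dynamics and using $\lceil y_1 / L \rfloor^{\rho} = L^{-\rho} \lceil y_1 \rfloor^{\rho}$, the coordinates $y_i = L x_i$ satisfy

```latex
\dot y_1 = -L^{1/3} k_1 \lceil y_1 \rfloor^{2/3} + y_2, \qquad
\dot y_2 = -L^{2/3} k_2 \lceil y_1 \rfloor^{1/3} + y_3, \qquad
\dot y_3 = -L\, k_3 \lceil y_1 \rfloor^{0} + L\,\pi(t),
```

so the $y$-system is again of the form (2) with the scaled gains $k_s = [L^{1/3} k_1, L^{2/3} k_2, L k_3]^T$ and the disturbance bound $L\Delta_0$.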

Now, we give an example about the use of Lemma 7.

Example 8. Consider $|\pi(t)| \le \Delta = 1$ and the values for $k$ given in Tables 1 and 2. Thus, with $L = \Delta_0^{-1}$ in each case, we obtain the scaled values shown in Table 3.

Remark 9. Note that Table 3 can be taken as the starting point for the scaling process given in Lemma 7. Since $\Delta_0 = 1$ in Table 3, we have $L = \Delta$. For example, taking the last row of Table 3 and considering $\Delta = 4$, we obtain the values for $k$ (up to round-off error) used in the first part of Example 1.
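The gain scaling of Lemma 7 is a one-line computation; the helper below is a sketch (the function name and the example gains are illustrative, not from the paper's tables).

```python
def scale_gains(k, delta, delta0):
    """Scale RED gains per Lemma 7: with L = delta/delta0 (the smallest admissible L),
    k_s = (L**(1/3)*k1, L**(2/3)*k2, L*k3)."""
    L = delta / delta0
    return (L ** (1 / 3) * k[0], L ** (2 / 3) * k[1], L * k[2])

# Illustrative gains valid for delta0 = 1, rescaled to tolerate delta = 8
# (L = 8, so the factors are 2, 4, and 8):
ks = scale_gains((1.0, 1.0, 1.0), delta=8.0, delta0=1.0)
```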

We have given a procedure to scale the gains of the differentiator, but it is also possible to scale the coefficients of (3) in order to obtain a Lyapunov function for the scaled system.

Corollary 10. Given $\alpha$, $k$, and $\Delta_0$ from Lemmas 5 and 6, the function

[mathematical expression not reproducible], (5)

whose coefficients $\alpha_s = [\alpha_{1s}, \alpha_{12s}, \alpha_{2s}, \alpha_{13s}, \alpha_{23s}, \alpha_{3s}]^T$ are given by

$\alpha_s = [L^{-5/3} \alpha_1,\ L^{-2} \alpha_{12},\ L^{-5/2} \alpha_2,\ L^{-7/3} \alpha_{13},\ L^{-4} \alpha_{23},\ L^{-5} \alpha_3]^T$, (6)

is a Lyapunov function for

[mathematical expression not reproducible], (7)

where $k_s$ and $L$ are obtained from Lemma 7.

In the next section, we describe the procedure used to construct the Lyapunov function for (2). The design process constitutes the proof of Lemma 5, and it can be used to obtain further values of $\alpha$ and $k$ beyond those shown in Tables 1 and 2.

3. Lyapunov Function Design Process

In the literature, there are several methods to design Lyapunov functions. Classical results are Krasovskii's method [27], the variable gradient method [28], and Zubov's method [29]. However, they are difficult to apply, or even unusable, for systems like (2). More recent techniques, like barrier Lyapunov functions [30, 31], cannot be applied to design a Lyapunov function for (2) either. There are also numerical approaches to construct Lyapunov functions (see [32] and the references therein), but the aim of this paper is to obtain an explicit Lyapunov function that allows us to design the gains of (1). For higher-order sliding mode algorithms, there are very few approaches: in [33, 34], an extension of Zubov's method is given, which requires solving a partial differential equation, and the method proposed in [35] requires calculating explicitly the trajectories of the system. Hence, none of these methods is easy to apply to design a Lyapunov function for (2).

In this section, we show the design process for the Lyapunov function given in the last section. The methodology is that of [23]. Generally speaking, it consists of the following steps:

(1) Proposing a suitable set of candidate Lyapunov functions $V(\alpha, x)$ parametrized by the coefficients $\alpha$, namely, (9) below: such functions and the vector field of (2) belong to a class of functions called generalized forms [23].

(2) Obtaining the function $W(x)$ by taking the derivative of $V(\alpha, x)$ along the trajectories of (2), that is, $\dot V = -W(x)$.

(3) Finding a polynomial representation of $V$ and $W$ by performing an adequate change of coordinates in each octant of $\mathbb{R}^3$.

(4) Designing $\alpha$ and $k$ to guarantee positive definiteness of the polynomial representations of $V$ and $W$.

For the last step, we will provide two alternatives: one requires solving a system of linear inequalities, and the other a system of linear matrix inequalities.

3.1. Generalized Forms. First, we define a class of functions used in the proposed design method.

Definition 11. A function $f : \mathbb{R}^n \to \mathbb{R}$ is a generalized form (GF) of degree $m$ if

(a) it is homogeneous (of degree m for some vector of weights r),

(b) it consists of sums, products, and sums of products of terms like

$a \lceil x_k \rfloor^{p}$, $\quad b |x_k|^{q}$, $\qquad a, b \in \mathbb{R}$, $\ 0 \le p, q \in \mathbb{Q}$. (8)

GFs as defined are a generalization of (and contain) the classical forms, that is, homogeneous polynomials consisting of monomials that are products of terms $x_k^p$ with integer powers $p$.

Note that the vector field of (2) (in its nominal form) consists of GFs. According to [23], it is suitable to propose a GF as a Lyapunov function candidate for a system described by GFs. Recall that (2) is homogeneous of degree $m = -1$ with the weights $r = [r_1, r_2, r_3] = [3, 2, 1]$. So, we start with the following Lyapunov function candidate:

[mathematical expression not reproducible]. (9)

Note that (9) is homogeneous of degree $m$ with the weights $r$ if the $\rho_i$ are selected appropriately. The conditions for differentiability and homogeneity of (9) required in [23] are fulfilled by choosing, for example, $m = 5$, $\rho_0 = \rho_1 = \rho_2 = \rho_5 = 1$, $\rho_3 = 3$, and $\rho_4 = 4/3$. With these choices, we obtain

[mathematical expression not reproducible]. (10)

By taking the derivative of (10) along the trajectories of (2), we have that $(\partial V / \partial x)\dot x = -W_\pi(x)$ with

[mathematical expression not reproducible]. (11)

To obtain positive definite terms for each variable in $W_\pi$, it is convenient to set $-\bar\alpha_{12} = \alpha_{12} > 0$, $\bar\alpha_{13} = \alpha_{13} > 0$, and $-\bar\alpha_{23} = \alpha_{23} > 0$; thus, (10) becomes (3) and $W_\pi$ becomes

[mathematical expression not reproducible]. (12)

Considering (2) in the nominal case, that is, $\pi(t) = 0$, we obtain from (12)

[mathematical expression not reproducible]. (13)

The proof of Lemma 5 requires showing that the GFs $V(x)$ and $W(x)$ are positive definite for some values of $\alpha$ and $k$. However, for the class of GFs, there are no standard procedures to verify positive definiteness. Nonetheless, for classical homogeneous polynomials (forms), there are some very useful and easy-to-compute methods to decide positive definiteness. One of the most important features of a GF is that it can be described as a set of classical forms restricted to a certain domain, by using a suitable change of coordinates in each octant. Thus, we can use the procedures established for classical forms to determine positive definiteness of a GF.

We now show how to obtain a representation of the GFs $V(x)$ and $W(x)$ as a set of classical forms. Consider the eight octants of $\mathbb{R}^3$ defined as

[mathematical expression not reproducible]. (14)

We also define $S = \{z \in \mathbb{R}^3 \mid z_1 \ge 0,\ z_2 \ge 0,\ z_3 \ge 0\}$. Now, consider the function $x : S \to O_1$ given by $[x_1, x_2, x_3]^T = [z_1^3, z_2^2, z_3]^T$. Note that this change of coordinates is a diffeomorphism in the interior of $S$. By applying it to (3), we obtain
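The effect of this change of coordinates can be checked numerically on typical GF terms: every fractional power of $x$ becomes an integer power of $z$. The specific terms below ($|x_1|^{5/3}$, $|x_2|^{5/2}$, $x_1 x_2$) are the degree-5 term shapes implied by the scalings in (6); treat them as an illustrative sketch.

```python
# Substituting x1 = z1**3, x2 = z2**2, x3 = z3 (with z in S) turns each
# generalized-form term into a plain monomial, e.g. |x1|**(5/3) -> z1**5.
z1, z2, z3 = 1.7, 0.9, 2.3                 # arbitrary point in the interior of S
x1, x2, x3 = z1 ** 3, z2 ** 2, z3

assert abs(abs(x1) ** (5 / 3) - z1 ** 5) < 1e-9    # |x1|^(5/3) = z1^5
assert abs(abs(x2) ** (5 / 2) - z2 ** 5) < 1e-9    # |x2|^(5/2) = z2^5
assert abs(x1 * x2 - z1 ** 3 * z2 ** 2) < 1e-12    # x1*x2 = z1^3 * z2^2
```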

[mathematical expression not reproducible]. (15)

Note that the values of the function $V(x)$ restricted to the set $O_1$ are in one-to-one correspondence with the values of the function $V_1(z)$ defined on $S$. Thus, the positive definiteness of (3) restricted to $O_1$ can be determined by verifying the positive definiteness of $V_1$ restricted to $S$. An analogous procedure can be applied in the remaining octants with the following changes of coordinates:

[mathematical expression not reproducible]. (16)

Octant $O_1$ is symmetric to $O_5$. For the latter, with the change of coordinates $x = [-z_1^3, -z_2^2, -z_3]^T$, we obtain again $V_1$, due to the symmetry of (3) with respect to the origin. Therefore, it suffices to study $V_1$ in order to determine positive definiteness of (3) restricted to $O_1 \cup O_5$. The same occurs for the sets $O_2 \cup O_6$, $O_3 \cup O_7$, and $O_4 \cup O_8$. Thus, to verify positive definiteness of (3), we have to establish positive definiteness of $V_1(z)$ and

[mathematical expression not reproducible], (17)

defined on $S$. Observe that the purpose of the change of coordinates is twofold: first, the functions in the new variables $z$ contain only terms with integer exponents; second, the domain of all the resulting functions is the positive octant $S$. So the functions $V_1$, $V_2$, $V_3$, and $V_4$ are forms restricted to $S$.

With the same change of coordinates, we can analyze (13) through the set of restricted forms $\{W_1, W_2, W_3, W_4\}$ given by

[mathematical expression not reproducible], (18)

where the coefficients

[mathematical expression not reproducible] (19)

are linear functions of $\alpha$ and affine functions of $k$. Note that to determine the positive definiteness of (3) and (13), it is necessary and sufficient to verify the positive definiteness of the forms $\{V_1, V_2, V_3, V_4\}$ and $\{W_1, W_2, W_3, W_4\}$ restricted to $S$. In the next section, we provide two different procedures to design the coefficients of such restricted forms so that they become positive definite.

3.2. Positive Forms

3.2.1. Polya's Theorem. One result that allows us to check the positive definiteness of classical forms restricted to S is Polya's theorem [36, 37]. We recall a strong version of it.

Theorem 12 (Polya). Let $V : \mathbb{R}^n \to \mathbb{R}$ be a form such that $V(z) > 0$ for all $z \in P_n = \{z \in \mathbb{R}^n \mid z \ne 0,\ z_i \ge 0,\ i = 1, \ldots, n\}$. Then, there exists a large enough $p \in \mathbb{N}$ such that all the coefficients of the form

$G_p(z) = (z_1 + z_2 + \cdots + z_n)^p\, V(z)$ (20)

are strictly positive.

Polya's theorem gives a necessary and sufficient condition for positive definiteness of a classical form $V$ on the nonnegative orthant. Moreover, it provides a systematic procedure to check it: we multiply $V$ repeatedly by $(z_1 + z_2 + \cdots + z_n)$, and if $V$ is positive, we will sooner or later obtain a form with positive coefficients [36, 37].
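This multiply-until-positive procedure can be sketched in a few lines. The bivariate form below ($z_1^2 - z_1 z_2 + z_2^2$, which is positive on the nonnegative orthant but has a negative coefficient) is an illustrative example, not one of the paper's restricted forms.

```python
def multiply(f, g):
    """Multiply two bivariate polynomials stored as {(i, j): coeff} dicts,
    where (i, j) are the exponents of z1 and z2."""
    h = {}
    for (i1, j1), c1 in f.items():
        for (i2, j2), c2 in g.items():
            key = (i1 + i2, j1 + j2)
            h[key] = h.get(key, 0) + c1 * c2
    return h

# V(z1, z2) = z1^2 - z1*z2 + z2^2  and the Polya multiplier  s = z1 + z2
V = {(2, 0): 1, (1, 1): -1, (0, 2): 1}
s = {(1, 0): 1, (0, 1): 1}

G, p = dict(V), 0
while min(G.values()) <= 0:      # repeat until every coefficient is strictly positive
    G = multiply(G, s)
    p += 1
# For this V the loop stops at p = 3, where
# G = z1^5 + 2*z1^4*z2 + z1^3*z2^2 + z1^2*z2^3 + 2*z1*z2^4 + z2^5.
```

In the design setting of this section, the coefficients of $V$ are unknowns, and "all coefficients of $G_p$ strictly positive" becomes, for each fixed $p$, a system of linear inequalities in those unknowns.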

Thus, we use it as a procedure to design the coefficients of a form to render it positive definite. That is, if the coefficients of $V$ are undetermined, we look for $p$ and for coefficients of $V$ such that the coefficients of $G_p$ are strictly positive. For example, if we design $\alpha_1, \alpha_{12}, \alpha_2, \alpha_{13}, \alpha_{23}, \alpha_3 > 0$ for $V_3(z)$, Polya's theorem holds trivially with $p = 0$.

Suppose we want to determine the coefficients $\alpha$ that make a form $V$ positive definite (for nonnegative $z$). For each $p$, the condition of strict positivity imposed on the coefficients of the form $G_p$ defines a system of linear inequalities in the coefficients of $V$. Thus, for each $p$, we obtain a system $A_V \alpha > 0$, where $A_V$ is a constant matrix whose number of rows equals the number of terms in $G_p$, and the inequality sign $>$ is understood componentwise.

For our problem, that is, positive definiteness of the restricted forms $\{V_1, V_2, V_3, V_4\}$ and $\{W_1, W_2, W_3, W_4\}$, we obtain a set of linear inequalities [mathematical expression not reproducible]. Since $\beta$ is linear in $\alpha$ and affine in $k$, it can be rewritten as $\beta = M(k)\alpha$ or $\beta = \bar M(\alpha)[1, k]^T$, where $M(k)$ and $\bar M(\alpha)$ are matrices depending affinely on $k$ or $\alpha$, respectively. Thus, if $k$ is given, we have to solve for $\alpha$ the system of inequalities

[mathematical expression not reproducible], (21)

which is linear in $\alpha$. This is an analysis procedure, since it finds a Lyapunov function for a given $k$.

Now, if we want to design $k$, the system of inequalities (21) becomes bilinear in $\alpha$ and $k$, which is more difficult to solve. To avoid this problem, we can choose $\alpha$ from the set of solutions of the system [mathematical expression not reproducible] and then solve for $k$ the system of linear (in $k$) inequalities

[mathematical expression not reproducible]. (22)

Systems of linear inequalities like (21) or (22) can be solved numerically. Moreover, by using the software Skeleton [38], it is possible to find the complete set of solutions of such systems. We have used that software to find $\alpha$ and $k$ in Table 1; the last two columns correspond to the values of $p$ used to satisfy Polya's theorem.

3.2.2. Sum of Squares (SOS). Another way to determine the positive definiteness of a form is through its sum of squares (SOS) representation. Consider a classical form $V : \mathbb{R}^n \to \mathbb{R}$ of degree $2m$, $0 < m \in \mathbb{Z}$. $V$ is SOS if there exists a finite number of forms $v_j$, $j = 1, 2, \ldots, N$, of degree $m$ such that $V(z) = \sum_{j=1}^N [v_j(z)]^2$. Nonnegativity of a form is thus immediate if it is an SOS. To verify positive definiteness, it is sufficient to verify that $V(z) - \epsilon \sum_{i=1}^n z_i^{2m}$ is SOS for some $0 < \epsilon \in \mathbb{R}$. So, a way to design a positive definite form $V$ is to look for coefficients that make $V(z) - \epsilon \sum_{i=1}^n z_i^{2m}$ an SOS.
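A tiny worked instance of the Gram-matrix idea (the form and matrix below are illustrative, not one of the paper's $V_{iy}$): with $\zeta = (y_1^2, y_1 y_2, y_2^2)$, the form $V(y_1, y_2) = y_1^4 - y_1^2 y_2^2 + y_2^4$ admits a positive semidefinite Gram matrix, hence it is SOS.

```python
# One valid Gram matrix for V = y1^4 - y1^2*y2^2 + y2^4 with zeta = (y1^2, y1*y2, y2^2);
# its eigenvalues are 1.5, 0.5, and 0, so P >= 0.
P = [[ 1.0, 0.0, -0.5],
     [ 0.0, 0.0,  0.0],
     [-0.5, 0.0,  1.0]]

def quad(P, z):
    """Evaluate the quadratic form z^T P z."""
    return sum(P[i][j] * z[i] * z[j] for i in range(3) for j in range(3))

for y1, y2 in [(0.3, -1.2), (2.0, 0.7), (-1.1, -0.4)]:
    zeta = (y1 ** 2, y1 * y2, y2 ** 2)
    V = y1 ** 4 - y1 ** 2 * y2 ** 2 + y2 ** 4
    # P >= 0 corresponds to the explicit SOS decomposition
    #   V = (y1^2 - 0.5*y2^2)^2 + 0.75*y2^4  (obtained by completing squares).
    sos = (y1 ** 2 - 0.5 * y2 ** 2) ** 2 + 0.75 * y2 ** 4
    assert abs(quad(P, zeta) - V) < 1e-9
    assert abs(sos - V) < 1e-9
```

In the design problem of this section, the entries of $P$ (and the unknown coefficients $\alpha$, $k$) are decision variables, and "$P \ge 0$" is the LMI solved by the SOS software.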

Note that the forms $\{V_1, V_2, V_3, V_4\}$ are not of even degree and their domain is restricted to $S$. So, in order to obtain forms of even degree defined on the whole $\mathbb{R}^3$, we perform the simple change of coordinates $z^T = [y_1^2, y_2^2, y_3^2]$. Recall that we have to include the term $-\epsilon \sum_{i=1}^3 y_i^{10}$ in each form of the first set and the term $-\epsilon \sum_{i=1}^3 y_i^{8}$ in the second one. Thus, we obtain the new sets of forms $\{V_{1y}, \ldots, V_{4y}\}$ and $\{W_{1y}, \ldots, W_{4y}\}$. For example, we have

[mathematical expression not reproducible]. (23)

Hence, we have to design $\alpha$ and $k$ such that the forms $\{V_{1y}, \ldots, V_{4y}\}$ and $\{W_{1y}, \ldots, W_{4y}\}$ become SOS.

In [39], it was shown that the problem of determining whether a form $V$ is SOS is equivalent to determining the existence of a positive semidefinite matrix $P$ such that $V(y) = \zeta^T(y) P \zeta(y)$, where $\zeta(y)$ is a vector of monomials in the variable $y$. Thus, the problem of verifying the nonnegativity of a form can be reduced to the linear matrix inequality problem $P \ge 0$.

Fortunately, there is software that helps in the task of designing the coefficients of a form such that it becomes SOS. We have used SOSTOOLS [40] to design the coefficients of $\{V_{1y}, \ldots, V_{4y}\}$ and $\{W_{1y}, \ldots, W_{4y}\}$ so that they become SOS. The values of $\alpha$ and $k$ found by SOSTOOLS with $\epsilon = 0.1$ are shown in Table 2; more values can be found by exploring the set of solutions with SOSTOOLS.

Up to this point, we have proven Lemma 5.

3.2.3. Robustness. In this subsection, we prove Lemma 6 and give a procedure to calculate a value of $\Delta_0$. Once we have obtained $\alpha$ for (3) and $k$ for the nominal case of (2), we search for a (maximum) value of $\Delta_0$ such that $W_\pi$ in (12) remains positive definite, that is, $\dot V$ remains negative definite, for any $\pi(t)$ with $|\pi(t)| \le \Delta_0$.

The argument for the existence of $\Delta_0$ also provides a procedure to compute it. Since the forms $\{W_1, \ldots, W_4\}$ or $\{W_{1y}, \ldots, W_{4y}\}$ are continuous and homogeneous, a small variation of their coefficients does not affect their positive definiteness. Note also that the bounded disturbance $\pi(t)$ affects only linearly some coefficients of (12). Therefore, it is sufficient to consider the extreme values of $\pi(t)$ to analyze (12) for positive definiteness. This can be done by considering $\Delta_0$ as a new parameter and including it in the coefficients of the forms $\{W_1, \ldots, W_4\}$ or $\{W_{1y}, \ldots, W_{4y}\}$ in two different cases: the first one is with

π(t) = +Δ_0, (24)

and the second one is with

π(t) = -Δ_0. (25)

It is important to mention that the two cases have to be analyzed simultaneously, using either of the two procedures described above, with Pólya's theorem or with SOS. In Tables 1 and 2, we show the values of Δ_0 that we have found.
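The search for the largest admissible Δ_0 can be sketched on a toy homogeneous form (not the paper's W): for W(y; Δ) = y_1^4 + y_2^4 - Δ y_1^2 y_2^2, homogeneity means positive definiteness only needs checking on the unit circle, the disturbance enters linearly so the two extremes ±Δ suffice, and a bisection on Δ recovers the analytic bound Δ_0 = 2:

```python
import numpy as np

# Toy Delta_0 search: W(y; Delta) = y1^4 + y2^4 - Delta*y1^2*y2^2 is
# homogeneous, so positivity is checked on the unit circle only; the
# disturbance enters linearly, so the extreme values +Delta and -Delta
# suffice (here they coincide by symmetry). This is NOT the paper's W.
theta = np.linspace(0.0, 2.0 * np.pi, 2000)
y1, y2 = np.cos(theta), np.sin(theta)

def pos_def(delta):
    # worst case over both extreme disturbance values on the unit circle
    w = y1**4 + y2**4 - abs(delta) * y1**2 * y2**2
    return np.min(w) > 0.0

lo, hi = 0.0, 10.0  # lo is feasible; hi is assumed infeasible
for _ in range(50):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if pos_def(mid) else (lo, mid)

assert abs(lo - 2.0) < 1e-2  # min of (y1^4+y2^4)/(y1^2*y2^2) on the circle is 2
```

In the paper the positivity check at each candidate Δ_0 is done with the Pólya or SOS machinery instead of sampling, but the outer search over Δ_0 has the same structure.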

4. Conclusions

We have provided a differentiable Lyapunov function and several different sets of gains for Levant's second-order differentiator. We have confirmed the usefulness of the method for designing Lyapunov functions provided in [23]. In principle, such a method can be used to design Lyapunov functions for the RED of any order, as we have done in this paper for the second-order RED and as has been done in [41] for the first-order RED. One of the main characteristics of the method is that it allows us to reduce the problem of finding a Lyapunov function to the algebraic problem of solving systems of linear inequalities or systems of linear matrix inequalities. We have noted that the design through SOS is computationally more efficient; however, for the design through Pólya's theorem, there exist techniques to obtain the complete set of solutions of a linear system of inequalities. Finally, the Lyapunov function presented in this work could be used to perform further analysis of the second-order RED.

http://dx.doi.org/10.1155/2016/3740834

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

The authors acknowledge the financial support from PAPIIT-UNAM (Programa de Apoyo a Proyectos de Investigación e Innovación Tecnológica), Project IN113614; Fondo de Colaboración II-FI UNAM, Project IISGBAS-100-2015; and CONACYT (Consejo Nacional de Ciencia y Tecnología), Project 241171, CVU: 371652 and CVU: 557079.

References

[1] S. C. Pei and J.-J. Shyu, "Design of FIR Hilbert transformers and differentiators by eigenfilter," IEEE Transactions on Circuits and Systems, vol. 35, no. 11, pp. 1457-1461, 1988.

[2] B. Kumar and S. C. Dutta Roy, "Design of digital differentiators for low frequencies," Proceedings of the IEEE, vol. 76, no. 3, pp. 287-289, 1988.

[3] H. K. Khalil and L. Praly, "High-gain observers in nonlinear feedback control," International Journal of Robust and Nonlinear Control, vol. 24, no. 6, pp. 993-1015, 2014.

[4] G. Besançon, Nonlinear Observers and Applications, Springer, 2007.

[5] V. Utkin, J. Guldner, and J. Shi, Sliding Mode Control in Electro-Mechanical Systems, CRC Press, Taylor & Francis, London, UK, 2nd edition, 2009.

[6] A. Levant, "Robust exact differentiation via sliding mode technique," Automatica, vol. 34, no. 3, pp. 379-384, 1998.

[7] Y. Orlov, "Finite time stability and robust control synthesis of uncertain switched systems," SIAM Journal on Control and Optimization, vol. 43, no. 4, pp. 1253-1271, 2004.

[8] A. Polyakov and A. Poznyak, "Reaching time estimation for 'super-twisting' second order sliding mode controller via Lyapunov function designing," IEEE Transactions on Automatic Control, vol. 54, no. 8, pp. 1951-1955, 2009.

[9] J. A. Moreno and M. Osorio, "A Lyapunov approach to second-order sliding mode controllers and observers," in Proceedings of the 47th IEEE Conference on Decision and Control (CDC '08), pp. 2856-2861, IEEE, Cancun, Mexico, December 2008.

[10] J. A. Moreno and M. Osorio, "Strict Lyapunov functions for the super-twisting algorithm," IEEE Transactions on Automatic Control, vol. 57, no. 4, pp. 1035-1040, 2012.

[11] E. Bernuau, D. Efimov, W. Perruquetti, and A. Polyakov, "On homogeneity and its application in sliding mode control," Journal of the Franklin Institute, vol. 351, no. 4, pp. 1866-1901, 2014.

[12] A. Levant, "Higher-order sliding modes, differentiation and output-feedback control," International Journal of Control, vol. 76, no. 9-10, pp. 924-941, 2003.

[13] A. Levant, "Homogeneity approach to high-order sliding mode design," Automatica, vol. 41, no. 5, pp. 823-830, 2005.

[14] R. Sepulchre, M. Jankovic, and P. V. Kokotovic, Constructive Nonlinear Control, Communications and Control Engineering, Springer, London, UK, 1997.

[15] H. K. Khalil, Nonlinear Systems, Prentice Hall, 3rd edition, 2002.

[16] A. Bacciotti and L. Rosier, Liapunov Functions and Stability in Control Theory, Communications and Control Engineering, Springer, Berlin, Germany, 2nd edition, 2005.

[17] M. Malisoff and F. Mazenc, Constructions of Strict Lyapunov Functions, Springer, London, UK, 2009.

[18] Y. Shtessel, C. Edwards, L. Fridman, and A. Levant, Sliding Mode Control and Observation, Control Engineering, Springer, New York, NY, USA, 2014.

[19] D. Liberzon, Switching in Systems and Control, Birkhauser, Boston, Mass, USA, 2003.

[20] Y.-J. Liu and S. Tong, "Adaptive fuzzy control for a class of unknown nonlinear dynamical systems," Fuzzy Sets and Systems, vol. 263, pp. 49-70, 2015.

[21] Y. Gao and Y.-J. Liu, "Adaptive fuzzy optimal control using direct heuristic dynamic programming for chaotic discrete-time system," Journal of Vibration and Control, vol. 22, no. 2, pp. 595-603, 2016.

[22] J. A. Moreno, "Lyapunov function for Levant's second order differentiator," in Proceedings of the IEEE 51st Annual Conference on Decision and Control (CDC '12), pp. 6448-6453, December 2012.

[23] T. Sanchez and J. A. Moreno, "A constructive Lyapunov function design method for a class of homogeneous systems," in Proceedings of the 53rd IEEE Annual Conference on Decision and Control (CDC '14), pp. 5500-5505, Los Angeles, Calif, USA, December 2014.

[24] F. A. Ortiz-Ricardez, T. Sanchez, and J. A. Moreno, "Smooth Lyapunov function and gain design for a second order differentiator," in Proceedings of the 54th IEEE Conference on Decision and Control (CDC '15), pp. 5402-5407, Osaka, Japan, December 2015.

[25] A. F. Filippov, Differential Equations with Discontinuous Right-Hand Side, Kluwer, Dordrecht, The Netherlands, 1988.

[26] H. Nakamura, Y. Yamashita, and H. Nishitani, "Smooth Lyapunov functions for homogeneous differential inclusions," in Proceedings of the 41st SICE Annual Conference (SICE '02), vol. 3, pp. 1974-1979, IEEE, August 2002.

[27] N. N. Krasovskii, Problems of the Theory of Stability of Motion, Stanford University Press, Stanford, Calif, USA, 1963.

[28] D. G. Schultz and J. E. Gibson, "The variable gradient method for generating Liapunov functions," Transactions of the American Institute of Electrical Engineers, vol. 81, no. 4, pp. 203-210, 1962.

[29] V. I. Zubov, Methods of A. M. Lyapunov and Their Applications, P. Noordhoff, Groningen, The Netherlands, 1964.

[30] K. B. Ngo, R. Mahony, and Z.-P. Jiang, "Integrator backstepping using barrier functions for systems with multiple state constraints," in Proceedings of the 44th IEEE Conference on Decision and Control, and the European Control Conference (CDC-ECC '05), pp. 8306-8312, Seville, Spain, December 2005.

[31] Y.-J. Liu and S. Tong, "Barrier Lyapunov functions-based adaptive control for a class of nonlinear pure-feedback systems with full state constraints," Automatica, vol. 64, pp. 70-75, 2016.

[32] R. Baier, L. Grüne, and S. F. Hafstein, "Linear programming based Lyapunov function computation for differential inclusions," Discrete and Continuous Dynamical Systems--Series B, vol. 17, no. 1, pp. 33-56, 2012.

[33] A. Polyakov and A. Poznyak, "Lyapunov function design for finite-time convergence analysis: 'twisting' controller for second-order sliding mode realization," Automatica, vol. 45, no. 2, pp. 444-448, 2009.

[34] A. Polyakov and A. Poznyak, "Unified Lyapunov function for a finite-time stability analysis of relay second-order sliding mode control systems," IMA Journal of Mathematical Control and Information, vol. 29, no. 4, pp. 529-550, 2012.

[35] T. Sanchez and J. A. Moreno, "Construction of lyapunov functions for a class of higher order sliding modes algorithms," in Proceedings of the 51st IEEE Conference on Decision and Control (CDC '12), pp. 6454-6459, IEEE, Maui, Hawaii, USA, December 2012.

[36] G. Pólya, "Über positive Darstellung von Polynomen," Vierteljahrsschrift der Naturforschenden Gesellschaft in Zürich, vol. 73, pp. 141-145, 1928.

[37] G. H. Hardy, J. E. Littlewood, and G. Pólya, Inequalities, Cambridge Mathematical Library, 2nd edition, 1988.

[38] N. Y. Zolotykh, "New modification of the double description method for constructing the skeleton of a polyhedral cone," Computational Mathematics and Mathematical Physics, vol. 52, no. 1, pp. 153-163, 2012.

[39] M. D. Choi, T. Y. Lam, and B. Reznick, "Sums of squares of real polynomials," Proceedings of Symposia in Pure Mathematics, vol. 58, no. 2, pp. 103-126, 1995.

[40] S. Prajna, A. Papachristodoulou, and P. A. Parrilo, "SOSTOOLS: Sum of squares optimization toolbox for MATLAB, 2002-2005," http://www.cds.caltech.edu/sostools/, http://www.mit.edu/~parrilo/sostools/.

[41] T. Sánchez and J. A. Moreno, "Construction of Lyapunov functions for high order sliding modes," in Recent Trends in Sliding Mode Control, Institution of Engineering and Technology, 2016.

Tonametl Sánchez, Jaime A. Moreno, and Fernando A. Ortiz-Ricardez

Instituto de Ingeniería, Universidad Nacional Autónoma de México, 04510 Mexico City, DF, Mexico

Correspondence should be addressed to Tonametl Sánchez; tsanchezr@iingen.unam.mx

Received 5 January 2016; Revised 28 February 2016; Accepted 14 March 2016

Academic Editor: Yan-Jun Liu

Caption: Figure 1: Differentiator's signals.

Caption: Figure 2: Differentiator's signals.

Caption: Figure 3: Differentiator's signals.

Caption: Figure 4: Differentiator's signals.
```
Table 1: Parameters α and k obtained by Pólya's procedure.

α_1    α_12   α_2   α_13   α_23   α_3   k_1   k_2   k_3    Δ_0       p_v   p_w
88     34     13    20     32     552   4     3     0.01   0.00043   18    64
460    129    48    25     50     85    5     4     0.1    0.005     25    78
878    304    93    62     50     41    6     5     0.2    0.013     32    85

Table 2: Parameters α and k obtained by the SOS procedure.

α_1    α_12   α_2   α_13   α_23   α_3   k_1   k_2   k_3   Δ_0
110    80     29    25     61     104   5     4     0.1   0.01
231    168    55    62     81     74    6     5     0.2   0.017
110    80     29    25     61     104   3     2     0.1   0.02

Table 3: Scaled parameters k.

L        L^(1/3) k_1   L^(2/3) k_2   L k_3
2325.6   53            526.6         23.3
200      29.2          136.8         20
76.9     25.5          90.4          15.4
100      23.2          86.2          10
58.8     23.3          75.6          11.8
50       11.1          27.1          5
```