# Online Adaptive Optimal Control of Vehicle Active Suspension Systems Using Single-Network Approximate Dynamic Programming

1. Introduction

Improvements in suspension systems play an important role in achieving the goal of more comfortable and safer vehicles. Suspension systems are therefore expected to be intelligent enough to adapt to varying road conditions, and many control methods, as summarized in [1], have been applied to suspension controller design. However, model uncertainties arising from parameter uncertainties and unknown road inputs in real suspension systems pose a great challenge for controller design.

It is well known that optimal controllers are normally designed offline by solving the Hamilton-Jacobi-Bellman (HJB) equation. For linear systems, the linear quadratic regulator (LQR) controller is obtained offline by solving the Riccati equation (a special case of the HJB equation). The main drawback of the conventional LQR method is that the system model must be known precisely in advance to find the optimal control law. In addition, the feedback gains are computed offline; once obtained, they cannot adapt to a changing driving environment. Thus, a more efficient control strategy is needed that can adaptively handle the active suspension control problem in real time, subject to time-varying parameters under different driving situations.

Recent studies summarized in [2] show that controller designs based on approximate dynamic programming (ADP), which merge key principles from adaptive control and optimal control, can overcome the need for an exact model while achieving optimality. The learning mechanism of ADP, supported by the actor-critic structure, has two steps [3]: policy evaluation, executed by the critic, assesses the control action; policy improvement, executed by the actor, modifies the control policy. Most available adaptive optimal control methods are based on a dual-NN architecture [4-7], in which a critic NN and an actor NN approximate the optimal cost function and the optimal control policy, respectively. This complicated structure and its computational burden make practical implementation difficult.

In this paper, an adaptive optimal control method with a simplified structure is proposed for the quarter-car active suspension system. The optimal control action is calculated directly from the critic NN instead of from actor-critic dual networks. A robust learning rule driven by the parameter estimation error is presented for the critic NN, which identifies the optimal cost function. Based on the critic NN, the optimal control action is then computed by solving the HJB equation online. Uniform ultimate boundedness (UUB) of the overall closed-loop system is guaranteed via Lyapunov theory. Compared with the conventional LQR approach, simulation results for a quarter-car active suspension system verify the improved performance of the proposed ADP-based controller in terms of ride comfort, road holding, and suspension space limitation.

The remainder of this paper is organized as follows. In Section 2, the preliminary problem formulation is given. The proposed ADP-based control algorithm is introduced in Section 3. Section 4 presents the simulation results from a vehicle suspension system and, finally, the conclusions for the whole paper are drawn in Section 5.

2. Problem Formulation

The two-degree-of-freedom quarter-car suspension system, widely used in the literature [8, 9], is shown in Figure 1. It represents the motion of the vehicle body at any one of the four wheels. The suspension is composed of a spring $k_s$, a damper $b_s$, and an active force actuator F; the active force can be set to zero for a passive suspension. The sprung mass $m_s$ denotes the quarter equivalent mass of the vehicle body, and the unsprung mass $m_u$ represents the equivalent mass of the tire assembly. The constants $k_t$ and $b_t$ represent the vertical stiffness and damping coefficient of the tire, respectively. The vertical displacements of the sprung mass, the unsprung mass, and the road are denoted by $z_s$, $z_u$, and $z_r$, respectively.

The motion equations of the sprung and unsprung mass shown in Figure 1 are given as follows [10]:

$$\begin{aligned} m_s \ddot{z}_s &= -k_s \left( z_s - z_u \right) - b_s \left( \dot{z}_s - \dot{z}_u \right) + F, \\ m_u \ddot{z}_u &= k_s \left( z_s - z_u \right) + b_s \left( \dot{z}_s - \dot{z}_u \right) - k_t \left( z_u - z_r \right) - b_t \left( \dot{z}_u - \dot{z}_r \right) - F. \end{aligned} \tag{1}$$

Define the state variables as

$$x_1 = z_s - z_u, \quad x_2 = \dot{z}_s, \quad x_3 = z_u - z_r, \quad x_4 = \dot{z}_u, \tag{2}$$

where $z_s - z_u$ is the suspension deflection, $\dot{z}_s$ is the velocity of the sprung mass, $z_u - z_r$ is the tire deflection, and $\dot{z}_u$ is the velocity of the unsprung mass.

Then, (1) can be further rewritten in the following state space form:

$$\dot{x} = Ax + Bu + B_r \dot{z}_r, \tag{3}$$

where

$$A = \begin{bmatrix} 0 & 1 & 0 & -1 \\ -\dfrac{k_s}{m_s} & -\dfrac{b_s}{m_s} & 0 & \dfrac{b_s}{m_s} \\ 0 & 0 & 0 & 1 \\ \dfrac{k_s}{m_u} & \dfrac{b_s}{m_u} & -\dfrac{k_t}{m_u} & -\dfrac{b_s + b_t}{m_u} \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ \dfrac{1}{m_s} \\ 0 \\ -\dfrac{1}{m_u} \end{bmatrix}, \quad B_r = \begin{bmatrix} 0 \\ 0 \\ -1 \\ \dfrac{b_t}{m_u} \end{bmatrix}, \quad x = \left( x_1, x_2, x_3, x_4 \right)^T, \quad u = F. \tag{4}$$
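Since the matrices in (4) do not reproduce in this extraction, the sketch below assembles the standard quarter-car state-space realization implied by (1)-(2), using the parameter values of Table 1 (the function name `suspension_dot` is ours, not the paper's):

```python
import numpy as np

# Quarter-car parameters (Table 1 of the paper).
m_s, m_u = 250.0, 35.0            # sprung / unsprung mass [kg]
k_s, k_t = 15_000.0, 150_000.0    # suspension / tire stiffness [N/m]
b_s, b_t = 450.0, 1_000.0         # suspension / tire damping [N*s/m]

# State x = [z_s - z_u, dz_s, z_u - z_r, dz_u]; input u = F; disturbance dz_r.
A = np.array([
    [0.0,       1.0,      0.0,      -1.0],
    [-k_s/m_s, -b_s/m_s,  0.0,       b_s/m_s],
    [0.0,       0.0,      0.0,       1.0],
    [k_s/m_u,   b_s/m_u, -k_t/m_u, -(b_s + b_t)/m_u],
])
B  = np.array([[0.0], [1.0/m_s], [0.0], [-1.0/m_u]])
Br = np.array([[0.0], [0.0], [-1.0], [b_t/m_u]])

def suspension_dot(x, u, dz_r):
    """x' = A x + B u + Br dz_r for the quarter-car model (3)."""
    return A @ x + B.flatten() * u + Br.flatten() * dz_r
```

Feeding the equilibrium state with zero input returns a zero derivative, which is a quick sanity check on the sign conventions.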

The objective of the controller design for the active suspension systems should consider the following three tasks [11].

First, a main task is to maintain good ride quality, that is, to reduce the vibratory forces transmitted from the axle to the vehicle body via a well-designed suspension controller, reducing the sprung mass acceleration as much as possible in the face of uncertainty in the sprung mass $m_s$ and unknown road displacement $z_r$.

Second, to assure good road holding, firm uninterrupted contact between the wheels and the road should be ensured, and the variations in normal tire load, related to the vertical tire deflection $z_u - z_r$, should be small.

Last, the suspension stroke should not exceed the maximum suspension deflection $z_{\max}$; that is, $|z_s - z_u| \le z_{\max}$.

The suspension control goal is to find a controller which ensures the stability of the closed-loop system and minimizes the following performance index:

$$V = \frac{1}{2} \int_0^t \left( x^T Q x + u^T R u \right) d\tau, \tag{5}$$

where $x = \left( x_1, x_2, x_3, x_4 \right)^T$, and Q and R are weighting matrices chosen by the designer that determine the trade-off in the optimal control law.

Remark 1. Most existing LQR controller designs are based on (3) under the assumption that all parameters are known in advance. Moreover, the feedback control law of the LQR approach is usually obtained by solving the Riccati equation offline; once obtained, the control law cannot be updated online when subjected to uncertainty in the sprung mass $m_s$ and unknown road displacement $z_r$, which may degrade the control performance. Therefore, a high-performance control approach that tolerates various operating conditions should be developed for active suspension design.
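As a concrete baseline, the offline LQR gain discussed in Remark 1 can be computed by solving the continuous-time algebraic Riccati equation numerically, for example with SciPy's `solve_continuous_are` (a minimal sketch; the helper name `lqr_gain` is ours):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Offline LQR: solve A'P + P A - P B R^{-1} B' P + Q = 0 for P,
    then K = R^{-1} B' P, giving the state feedback u = -K x."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)
```

With the suspension matrices from (4) and the weights used later in Section 4, the call would read `lqr_gain(A, B, np.diag([10, 65, 1.8, 20]), np.array([[2e-5]]))`. Note that this computation is done once, offline, which is exactly the limitation the ADP design removes.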

3. Controller Design Based on Approximate Dynamic Programming

In this section, an adaptive optimal control method for active suspension systems is realized using the ADP approach with a single critic network instead of the classic actor-critic dual networks, as presented in Figure 2. The measured outputs are the suspension states. The implementation of the state feedback control algorithm requires that all states of the system be measurable, or that unmeasured states be estimated, for example, with a Kalman filter. In this paper, we assume that all suspension states are available.

The objective of the optimal regulator problem is to design an optimal controller to stabilize system (3) and minimize the infinite horizon performance cost function

$$V(x) = \int_t^{\infty} r\left( x(\tau), u(\tau) \right) d\tau, \tag{6}$$

where the utility function is defined as $r(x, u) = x^T Q x + u^T R u$ with symmetric positive definite matrices Q and R.

According to the optimal regulator design in [4], an admissible control policy should be designed such that the infinite horizon cost function (6) associated with (3) is minimized. The Hamiltonian of (3) is

$$H\left( x, u, V_x \right) = V_x^T \left( Ax + Bu \right) + x^T Q x + u^T R u, \tag{7}$$

where $V_x \triangleq \partial V(x) / \partial x$ denotes the partial derivative of the cost function $V(x)$ with respect to x.

The optimal cost function [V.sup.*](x) is defined as

$$V^*(x) = \min_{u} \int_t^{\infty} r\left( x(\tau), u(\tau) \right) d\tau \tag{8}$$

and it satisfies the HJB equation

$$\min_{u} H\left( x, u, V_x^* \right) = 0, \tag{9}$$

where $V_x^* \triangleq \partial V^*(x) / \partial x$. Based on the assumption that the minimum on the right-hand side of (8) exists and is unique, solving $\partial H\left( x, u, V_x^* \right) / \partial u = 0$ yields the feedback form of the admissible optimal control $u^*$:

$$u^* = -\frac{1}{2} R^{-1} B^T V_x^*. \tag{10}$$

Substituting (10) into (9), we have

$$0 = x^T Q x + \left( V_x^* \right)^T A x - \frac{1}{4} \left( V_x^* \right)^T B R^{-1} B^T V_x^*. \tag{11}$$

Assumption 2 (see [4]). The solution to (9) is smooth, which allows us to invoke the Weierstrass higher-order approximation theorem. Then there exists a complete independent basis $\psi(x) \in R^l$ such that the solution $V^*(x)$ to (9) is uniformly approximated.

From (10), one can see that the optimal control $u^*$ depends on the optimal cost function $V^*(x)$. However, it is difficult to solve the HJB equation (11) for $V^*(x)$ due to the uncertain parameters in matrix A and the unknown road displacement input. The usual approach is to obtain an approximate solution via a critic NN as in [12, 13]. Hence, from Assumption 2, it is justified to assume that there exist weights $W_c$ such that the value function $V^*(x)$ is approximated as

$$V^*(x) = W_c^T \psi(x) + \xi, \tag{12}$$

where $W_c \in R^l$ is the nominal weight vector, $\xi$ is the approximation error, $\psi(x) \in R^l$ is the activation function vector, and $l$ is the number of neurons.

Then, substituting (12) into (10), one obtains

$$u^* = -\frac{1}{2} R^{-1} B^T \left( \nabla \psi^T(x) W_c + \nabla \xi \right). \tag{13}$$

In practical implementation, the control computed from the critic NN estimate is

$$u = -\frac{1}{2} R^{-1} B^T \nabla \psi^T(x) \hat{W}_c, \tag{14}$$

where $\hat{W}_c$ is the estimate of the nominal $W_c$.
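To make (12)-(14) concrete, the sketch below implements a quadratic activation basis $\psi(x)$ over the four suspension states (the same ten-neuron basis used in Section 4), its Jacobian $\nabla\psi(x)$, and the single-critic control law (14). The names `psi`, `grad_psi`, and `critic_control` are ours:

```python
import numpy as np

# Quadratic basis psi(x) for x in R^4: all products x_i x_j with i <= j (10 neurons).
IDX = [(i, j) for i in range(4) for j in range(i, 4)]

def psi(x):
    """Activation vector psi(x) in (12), shape (10,)."""
    return np.array([x[i] * x[j] for i, j in IDX])

def grad_psi(x):
    """Jacobian d psi / d x, shape (10, 4); diagonal terms get the factor 2."""
    G = np.zeros((len(IDX), 4))
    for k, (i, j) in enumerate(IDX):
        G[k, i] += x[j]
        G[k, j] += x[i]
    return G

def critic_control(x, W_hat, B, R_inv):
    """u = -1/2 R^{-1} B^T grad_psi(x)^T W_hat, the single-critic law (14)."""
    return -0.5 * R_inv @ B.T @ grad_psi(x).T @ W_hat
```

Because the basis is quadratic, the learned value function is a quadratic form, consistent with the LQR structure of the underlying linear problem.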

Remark 3. There are many ways to estimate $W_c$, such as least-squares [4] or gradient methods [14, 15]. It has been proved in [16-18] that adaptive estimation methods exploiting the parameter estimation error can greatly improve the convergence speed compared with conventional estimation methods driven by the observer error. Inspired by these facts, a novel robust estimation method for $W_c$ is presented in the following analysis.

Substituting (12) into (9), one obtains

$$Y = -W_c^T X - \xi_{HJB}, \tag{15}$$

where $X = \nabla \psi(x) \left( Ax + Bu \right)$, $Y = x^T Q x + u^T R u$, and $\xi_{HJB} = W_c^T \nabla \psi(x) B_r \dot{z}_r + \nabla \xi \left( Ax + Bu + B_r \dot{z}_r \right)$ denotes the Bellman error caused by the approximation of the cost function.

The filtered variables $X_f$, $Y_f$, and $\xi_{HJBf}$ are defined by

$$\eta \dot{X}_f + X_f = X, \quad X_f(0) = 0, \qquad \eta \dot{Y}_f + Y_f = Y, \quad Y_f(0) = 0, \qquad \eta \dot{\xi}_{HJBf} + \xi_{HJBf} = \xi_{HJB}, \quad \xi_{HJBf}(0) = 0, \tag{16}$$

where $\eta > 0$ is a filter constant. It should be noted that the fictitious filtered variable $\xi_{HJBf}$ is used only for analysis.

Then, the auxiliary regressor matrix $E \in R^{l \times l}$ and vector $F \in R^l$ are defined by

$$\dot{E} = -\eta E + X_f X_f^T, \quad E(0) = 0, \qquad \dot{F} = -\eta F + X_f Y_f, \quad F(0) = 0, \tag{17}$$

where $\eta$ is the positive constant defined in (16). The solution to (17) is

$$E(t) = \int_0^t e^{-\eta (t - r)} X_f(r) X_f^T(r) \, dr, \qquad F(t) = \int_0^t e^{-\eta (t - r)} X_f(r) Y_f(r) \, dr. \tag{18}$$

Another auxiliary vector M is defined as

$$M = E(t) \hat{W}_c + F(t). \tag{19}$$

Finally, the adaptive law of [W.sub.c] is provided by

$$\dot{\hat{W}}_c = -\mu M, \tag{20}$$

where $\mu > 0$ is the learning gain.
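Equations (16), (17), (19), and (20) amount to a set of auxiliary ODEs integrated alongside the plant. A minimal sketch, assuming the first-order filter and exponentially forgetting regressor form of (16)-(18) (the function name `estimator_rhs` is ours):

```python
import numpy as np

def estimator_rhs(state, X, Y, eta=500.0, mu=1500.0):
    """Time derivatives of the filtered variables (16), the auxiliary
    regressors (17), and the critic weights under the adaptive law (20).
    Integrate this alongside the plant with any ODE solver."""
    Xf, Yf, E, F_vec, W_hat = state
    dXf = (X - Xf) / eta                  # eta * dXf + Xf = X, Xf(0) = 0
    dYf = (Y - Yf) / eta
    dE = -eta * E + np.outer(Xf, Xf)      # E(t) = int_0^t e^{-eta(t-r)} Xf Xf' dr
    dF = -eta * F_vec + Xf * Yf
    M = E @ W_hat + F_vec                 # auxiliary vector (19)
    dW = -mu * M                          # adaptive law (20)
    return dXf, dYf, dE, dF, dW
```

No plant model enters these equations beyond the measured regressor X and utility Y, which is what makes the update implementable under parameter uncertainty.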

Theorem 4. For system (3) with the adaptive optimal control signal u given in (14) and the adaptive law (20), the control u converges to a small bound around its ideal optimal solution $u^*$ in (13).

Proof. Define the Lyapunov function as

$$L = L_o + L_c, \tag{21}$$

where $L_o$ and $L_c$ are defined as

$$L_o = \frac{1}{2\mu} \tilde{W}_c^T \tilde{W}_c, \qquad \tilde{W}_c = W_c - \hat{W}_c, \tag{22}$$

$$L_c = \Gamma x^T x + \kappa V, \tag{23}$$

where $\mu, \Gamma, \kappa > 0$ are positive constants.

Then, from (18) and (19), one obtains

$$M = -E \tilde{W}_c - \Xi, \tag{24}$$

where $\Xi = \int_0^t e^{-\eta (t - r)} X_f(r) \xi_{HJBf}(r) \, dr$.

From [16], we know that a persistently excited (PE) regressor X guarantees that the matrix E(t) defined in (18) is positive definite; that is, $\lambda_{\min}(E) > \sigma > 0$. Then the time derivative of (22) along (20) is

$$\dot{L}_o = \frac{1}{\mu} \tilde{W}_c^T \dot{\tilde{W}}_c = \tilde{W}_c^T M = -\tilde{W}_c^T E \tilde{W}_c - \tilde{W}_c^T \Xi \le -\sigma \left\| \tilde{W}_c \right\|^2 + \left\| \tilde{W}_c \right\| \left\| \Xi \right\|. \tag{25}$$

Finally, $\tilde{W}_c$ converges to the compact set $\Omega \triangleq \left\{ \tilde{W}_c : \| \tilde{W}_c \| \le \| \Xi \| / \sigma \right\}$.

From the basic inequality $ab \le \delta a^2 / 2 + b^2 / 2\delta$ with $\delta > 0$, we can rewrite (25) as

$$\dot{L}_o \le -\left( \sigma - \frac{\delta}{2} \right) \left\| \tilde{W}_c \right\|^2 + \frac{\| \Xi \|^2}{2\delta}. \tag{26}$$

The time derivative of (23) can be deduced from (3) and (8):

$$\dot{L}_c = 2\Gamma x^T \left( Ax + Bu + B_r \dot{z}_r \right) - \kappa \left( x^T Q x + u^T R u \right). \tag{27}$$

Combining (26) and (27), the time derivative of L satisfies the following inequality:

[mathematical expression not reproducible], (28)

where [mathematical expression not reproducible].

The constants $h_1, h_2, h_3$ are all positive when the design parameters are chosen to satisfy the following conditions:

[mathematical expression not reproducible]. (29)

Then $\dot{L} < 0$ if

[mathematical expression not reproducible], (30)

which means the system state x, control input u, and critic NN weight error $\tilde{W}_c$ are all uniformly ultimately bounded (UUB).

Moreover, we have

[mathematical expression not reproducible]. (31)

The steady state of the upper bound of (31) is

[mathematical expression not reproducible], (32)

where $\zeta$ depends on the critic NN approximation error.

From (32), we know that the optimal control u converges to a small bound around its ideal optimal solution [u.sup.*].

Remark 5. In this paper, a simplified ADP structure with a single critic NN approximator is proposed instead of the commonly used, more complex dual NN approximators (critic NN and actor NN). Moreover, the weight updating law of the critic NN is driven by the parameter estimation error rather than by minimizing the residual Bellman error of the HJB equation via least-squares or gradient methods, so the weight error of the critic NN is guaranteed to converge to a residual set around zero at a faster rate.

4. Simulation Results and Discussions

In this section, numerical simulations of the sedan active suspension model presented in Section 2 are carried out to evaluate the effectiveness of the proposed ADP-based adaptive control method designed in Section 3. The main parameters are listed in Table 1. Comparative results for the passive suspension and the active suspension with two different control methods (the LQR method and the proposed ADP-based adaptive optimal control method) are presented for two different cases.

Case 1 (bump road displacement). The suspension is excited by a bump road input as in [11], given by

$$z_r(t) = \begin{cases} \dfrac{a}{2} \left( 1 - \cos\left( \dfrac{2\pi V_s t}{l} \right) \right), & 0 \le t \le \dfrac{l}{V_s}, \\ 0, & t > \dfrac{l}{V_s}, \end{cases} \tag{33}$$

where a and l are the height and length of the bump, respectively, and $V_s$ is the vehicle forward velocity. Here we select a = 0.1 m, l = 5 m, and $V_s$ = 60 km/h.
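Assuming the standard isolated-bump parameterization of [11] (a raised-cosine profile, which is our reading of the expression above), the road input can be generated as:

```python
import numpy as np

def bump_road(t, a=0.1, l=5.0, Vs=60 / 3.6):
    """Isolated raised-cosine bump of height a [m] and length l [m],
    crossed at forward speed Vs [m/s]; zero before and after the bump."""
    T = l / Vs                                  # time to traverse the bump
    t = np.asarray(t, dtype=float)
    zr = 0.5 * a * (1.0 - np.cos(2.0 * np.pi * t / T))
    return np.where((t >= 0.0) & (t <= T), zr, 0.0)
```

With the paper's values (a = 0.1 m, l = 5 m, Vs = 60 km/h) the bump lasts 0.3 s and peaks at 0.1 m halfway through.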

For the LQR controller design, the weighting matrices Q and R must be predetermined. For a fixed Q, decreasing R reduces the transition time and overshoot but increases the rise time and steady-state error; for a fixed R, decreasing Q increases the transition time and overshoot but reduces the rise time and steady-state error [19]. Based on this rule, Q and R are selected for the performance cost function (6) as $R = 2 \times 10^{-5} I$ and $Q = \mathrm{diag}(10, 65, 1.8, 20)$. The state feedback gain matrix obtained from Matlab is $K = 10^4 \times [0.0166\ \ 0.5520\ \ {-5.5777}\ \ {-0.2564}]$. The proposed ADP-based adaptive optimal control (14) with the updating law (20) for the critic NN (12) is simulated with the parameters $\eta = 500$ and $\mu = 1500$, and the critic NN activation function vector is selected as $\psi(x) = \left[ x_1^2\ \ x_1 x_2\ \ x_1 x_3\ \ x_1 x_4\ \ x_2^2\ \ x_2 x_3\ \ x_2 x_4\ \ x_3^2\ \ x_3 x_4\ \ x_4^2 \right]^T$.

Simulation results with two different sprung masses are presented in Figures 3 and 4. Compared with the passive suspension, the active suspension with either the LQR or the proposed ADP-based adaptive optimal control method exhibits lower peaks and less vibration. Figures 3(a) and 4(a) show that the proposed ADP-based adaptive optimal control method achieves smaller amplitude and faster transient convergence in the suspension deflection and sprung mass acceleration than the LQR method. Figures 3(b) and 4(b) further provide the corresponding results with the sprung mass varied from 250 kg to 350 kg; the proposed method still outperforms the LQR method under the different sprung masses. Besides, Figure 3 also shows that the suspension working space stays within the acceptable range, satisfying the suspension space limitation requirement.

To ensure good road holding, the ratio between the dynamic tire load and the static load should be less than 1; that is, $|F_t + F_b| / \left( (m_s + m_u) g \right) < 1$, where $F_t = k_t (z_u - z_r)$ and $F_b = b_t (\dot{z}_u - \dot{z}_r)$ are the elastic and damping forces of the tire [20]. Here the tire damping $b_t$ is assumed negligible. From Figure 5, one can see that this ratio is always less than 1 for both the proposed ADP-based adaptive optimal control method and the LQR method, which implies that better road holding is guaranteed and the stability of the vehicle is thus ensured. Actuator output forces are presented in Figure 6, which shows that the proposed ADP-based adaptive optimal control method requires a smaller actuator force than the LQR approach.
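With tire damping neglected as above, the road-holding criterion reduces to a pointwise check on the tire-deflection state $x_3 = z_u - z_r$; a small sketch (the function name is ours):

```python
def road_holding_ratio(x3, m_s=250.0, m_u=35.0, k_t=150_000.0, g=9.81):
    """Dynamic-to-static tire load ratio |F_t| / ((m_s + m_u) g) with tire
    damping neglected, where F_t = k_t * (z_u - z_r) = k_t * x3.
    Road holding requires this ratio to stay below 1."""
    return abs(k_t * x3) / ((m_s + m_u) * g)
```

Evaluating this along the simulated trajectory reproduces the curves of Figure 5.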

Furthermore, the Root Mean Square (RMS) values of the states in Figures 3 and 4 are listed in Table 2. All RMS values of the proposed ADP-based adaptive optimal control method are smaller than those of the LQR method, further demonstrating its improved performance.
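The RMS figures of Table 2 follow the usual definition for a sampled signal:

```python
import numpy as np

def rms(signal):
    """Root Mean Square of a sampled signal, as used for Table 2."""
    s = np.asarray(signal, dtype=float)
    return float(np.sqrt(np.mean(s * s)))
```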

Case 2 (sinusoid road displacement). In this case, a sinusoidal road displacement of $0.005 \sin(5\pi t)$ m is used, and all other simulation parameters are the same as in Case 1. Comparative simulation results in Figures 7 and 8 further validate the improved performance of the proposed ADP-based adaptive optimal control method over the LQR method. It should be pointed out that the poor self-adaptive property of the LQR method leads to the small oscillation of the suspension deflection and sprung mass acceleration responses shown in Figures 7 and 8, while the smaller oscillation of the proposed ADP-based adaptive optimal control method may come from the Bellman error caused by the approximation of the value function, as shown in (15).

The reasons that lead to the improved performance of the proposed ADP-based adaptive optimal control method during the aforementioned simulation results are analyzed as follows.

From the LQR design process, one can see that it is based on the precise dynamic model (3) under the assumption that all system parameters are time invariant. The optimal state feedback gain matrix is obtained by solving the Riccati equation offline and cannot be updated online to accommodate time-varying parameters or unknown road inputs, which may degrade the control performance and even destabilize the system. The proposed ADP-based adaptive optimal control approach provides a novel solution to the online optimal control of the uncertain system: its feedback control law can be updated online under time-varying parameters such as the sprung mass and an unknown road displacement input. It can therefore be concluded that the self-adaptive property of the proposed ADP-based optimal control method provides a more effective solution for active suspension controller design, greatly enhancing passenger comfort and thus advancing the goal of more comfortable and safer vehicles.

5. Conclusion

In this paper, an ADP-based adaptive optimal controller for active suspension systems considering the performance requirements has been proposed. Compared with the commonly used LQR method, the self-adaptive property of the proposed ADP-based adaptive optimal control method yields improved performance in the basic tasks of suspension control: ride comfort, road holding, and suspension space limitation. This performance improvement increases passenger comfort and at the same time enhances vehicle handling and stability on the road. In future work, a full-vehicle suspension controller considering state constraints and actuator saturation limits will be designed.

https://doi.org/10.1155/2017/4575926

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this article.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (no. 51405436 and no. 51375452) and High Intelligence Foreign Expert Project of Zhejiang University of Technology (no. 3827102003T).

References

[1] L. Balamurugan and J. Jancirani, "An investigation on semi-active suspension damper and control strategies for vehicle ride comfort and road holding," Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering, vol. 226, no. 8, pp. 1119-1129, 2012.

[2] F. Lewis and D. Liu, Approximate Dynamic Programming and Reinforcement Learning for Feedback Control, John Wiley & Sons, Hoboken, NJ, USA, 2013.

[3] P. Werbos, "Approximate dynamic programming for real-time control and neural modeling," in Handbook of Intelligent Control, Van Nostrand Reinhold, New York, NY, USA, 1992.

[4] M. Abu-Khalaf and F. L. Lewis, "Nearly optimal control laws for nonlinear systems with saturating actuators using a neural network HJB approach," Automatica, vol. 41, no. 5, pp. 779-791, 2005.

[5] Q. Wei, D. Liu, and G. Shi, "A novel dual iterative Q-learning method for optimal battery management in smart residential environments," IEEE Transactions on Industrial Electronics, vol. 62, no. 4, pp. 2509-2518, 2015.

[6] S. Bhasin, R. Kamalapurkar, M. Johnson, K. G. Vamvoudakis, F. L. Lewis, and W. E. Dixon, "A novel actor-critic-identifier architecture for approximate optimal control of uncertain nonlinear systems," Automatica, vol. 49, no. 1, pp. 82-92, 2013.

[7] D. Liu, X. Yang, D. Wang, and Q. Wei, "Reinforcement-learning-based robust controller design for continuous-time uncertain nonlinear systems subject to input constraints," IEEE Transactions on Cybernetics, vol. 45, no. 7, pp. 1372-1385, 2015.

[8] K. Dhananjay and S. Kumar, "Modeling and simulation of quarter car semi active suspension system using LQR controller," Advances in Intelligent Systems and Computing, vol. 1, pp. 441-448, 2014.

[9] M. Aghazadeh and H. Zarabadipour, "Observer and controller design for half-car active suspension system using singular perturbation theory," Advanced Materials Research, vol. 403-408, pp. 4786-4793, 2012.

[10] R. Rajamani, Vehicle Dynamics and Control, Springer, 2012.

[11] W. Sun, H. Pan, Y. Zhang, and H. Gao, "Multi-objective control for uncertain nonlinear active suspension systems," Mechatronics, vol. 24, no. 4, pp. 318-327, 2014.

[12] P. Yan, D. Liu, D. Wang, and H. Ma, "Data-driven controller design for general MIMO nonlinear systems via virtual reference feedback tuning and neural networks," Neurocomputing, vol. 171, pp. 815-825, 2016.

[13] Q. Wei and D. Liu, "Neural-network-based adaptive optimal tracking control scheme for discrete-time nonlinear systems with approximation errors," Neurocomputing, vol. 149, pp. 106-115, 2015.

[14] X. Zhang, H. Zhang, Q. Sun, and Y. Luo, "Adaptive dynamic programming-based optimal control of unknown nonaffine nonlinear discrete-time systems with proof of convergence," Neurocomputing, vol. 91, pp. 48-55, 2012.

[15] K. G. Vamvoudakis and F. L. Lewis, "Online actor critic algorithm to solve the continuous-time infinite horizon optimal control problem," in Proceedings of the International Joint Conference on Neural Networks (IJCNN '09), pp. 3180-3187, IEEE, Atlanta, Ga, USA, June 2009.

[16] J. Na, J. Yang, X. Wu, and Y. Guo, "Robust adaptive parameter estimation of sinusoidal signals," Automatica, vol. 53, pp. 376-384, 2015.

[17] Y. Lv, J. Na, Q. Yang, X. Wu, and Y. Guo, "Online adaptive optimal control for continuous-time nonlinear systems with completely unknown dynamics," International Journal of Control, vol. 89, no. 1, pp. 99-112, 2016.

[18] P. M. Patre, W. MacKunis, M. Johnson, and W. E. Dixon, "Composite adaptive control for Euler-Lagrange systems with additive disturbances," Automatica, vol. 46, no. 1, pp. 140-147, 2010.

[19] A. Dharan, S. Olsen, and S. Karimi, "LQG control of a semi active suspension system equipped with MR rotary brake," in Proceedings of the 11th WSEAS International Conference on Instrumentation, Measurement, Circuits and Systems, pp. 176-181, 2012.

[20] Y. Huang, J. Na, X. Wu, X. Liu, and Y. Guo, "Adaptive control of nonlinear uncertain active suspension systems with prescribed performance," ISA Transactions, vol. 54, pp. 145-155, 2015.

Zhi-Jun Fu, (1) Bin Li, (2) Xiao-Bin Ning, (1) and Wei-Dong Xie (1)

(1) College of Mechanical Engineering, Zhejiang University of Technology, Hangzhou 310014, China

(2) Department of Mechanical & Industrial Engineering, Concordia University, Montreal, QC, Canada H3G 1M8

Correspondence should be addressed to Zhi-Jun Fu; fuzhijun@zjut.edu.cn

Received 12 December 2016; Revised 6 March 2017; Accepted 14 March 2017; Published 5 April 2017

Academic Editor: Asier Ibeas

Caption: Figure 1: Quarter-car active vehicle suspension.

Caption: Figure 2: Schematic of the proposed controller system.

Caption: Figure 3: The suspension deflection response.

Caption: Figure 4: The sprung mass acceleration response.

Caption: Figure 5: The ratio between tire dynamic load and stable load of active suspension systems.

Caption: Figure 6: Control inputs.

Caption: Figure 7: The suspension deflection response.

Caption: Figure 8: The sprung mass acceleration response.

Table 1: Sedan vehicle parameters used in simulation.

| Parameter | Value |
| --- | --- |
| Quarter-car sprung mass ($m_s$) | 250 kg |
| Quarter-car unsprung mass ($m_u$) | 35 kg |
| Suspension stiffness ($k_s$) | 15,000 N/m |
| Vertical stiffness of the tire ($k_t$) | 150,000 N/m |
| Suspension damping ($b_s$) | 450 N·s/m |
| Damping coefficient of the tire ($b_t$) | 1000 N·s/m |

Table 2: RMS values for the states (×10⁻⁸).

| | ADP | LQR |
| --- | --- | --- |
| $m_s$ = 250 kg: suspension deflection | 0.0048 | 1.7220 |
| $m_s$ = 250 kg: sprung mass acceleration | 0.0303 | 3.3670 |
| $m_s$ = 350 kg: suspension deflection | 0.0048 | 2.0800 |
| $m_s$ = 350 kg: sprung mass acceleration | 0.0239 | 3.9550 |

Publication: Mathematical Problems in Engineering, Research Article, 2017.