
Swarm robot control for human services and moving rehabilitation by sensor fusion.

1. Introduction

A current trend in robotics is fusing different types of sensors having different characteristics to improve the performance of the robot system and also benefit from the reduced cost of sensors. Sensor fusion is a combination of sensory data that has inherent redundancy and may provide robust recognition of a robot's working environment [1-3].

A service robot operates either semi- or fully autonomously to perform services useful to human wellbeing, excluding manufacturing operations [4-9]. Because it is expected to work in dynamic environments such as human living spaces, it requires sensor fusion to recognize its surroundings.

One application of service robots is the rehabilitation environment. The primary objectives of rehabilitation robots are to fully or partially perform tasks that benefit disabled people and to support a rehabilitee's manipulative function [10-12]. Conventionally, rehabilitation programs have relied heavily on the experience and manual manipulation of the rehabilitator. Because rehabilitation must be conducted carefully and the number of rehabilitees continues to increase, a well-designed service robot may prove effective in providing the support required for careful rehabilitation.

Although most service robots developed thus far are based on single-robot applications, multiple service robots working together as a swarm robot team are preferable for providing better service, particularly for human services and rehabilitation purposes. The swarm robots may not only support the rehabilitee's movement but also conduct various tasks such as collecting or transporting objects during rehabilitation.

A swarm of service mobile robots operating in the human living environment needs to cope with its dynamic changes. Hence, a fundamental function of the robots is to avoid static and dynamic obstacles. In particular, mobile robots in the swarm team must maintain their velocity while avoiding collisions with their swarm mates [13-20]. Existing studies have employed proximity sensors [14] and vision sensors [17, 21] for swarm robot obstacle avoidance, and radio frequency identification (RFID) for localization and navigation [21-24].

This study presents collision avoidance control for swarm robots moving in a dynamic environment with moving obstacles. A sensor fusion method is presented to achieve motion in such an environment; it combines the information obtained from several proximity sensors, an image sensor, and a localization system (RFID). This study applies a leader-follower formation in which the swarm robot team has a leader robot that follows the rehabilitee, while the other robots follow the leader robot. The robot controllers comprise a reference controller and a PI controller. The reference controller generates a robot motion trajectory by referring to sensor information in real time, and the PI controller makes the robots follow the generated trajectory. Various simulation results, which assume the presence of several static and dynamic obstacles in the human living environment, demonstrate the effectiveness of the proposed design.

2. Mobile Robot Dynamics

This study considers a typical two-wheeled differential-drive mobile robot as the swarm robot platform, as shown in Figure 1. The notation for Figure 1 and the following equations is given at the end of this section.

The dynamics of the mobile robot are given by

$I\ddot{\phi} = (D_r - D_l)L$, (1)

$M\dot{v} = D_r + D_l$. (2)

The translational and angular velocity and acceleration of the mobile robot are represented by the following equations:

$v = \frac{R}{2}(\dot{\theta}_r + \dot{\theta}_l)$, $\dot{\phi} = \frac{R}{2L}(\dot{\theta}_r - \dot{\theta}_l)$, $\dot{v} = \frac{R}{2}(\ddot{\theta}_r + \ddot{\theta}_l)$, $\ddot{\phi} = \frac{R}{2L}(\ddot{\theta}_r - \ddot{\theta}_l)$. (3)

The motor dynamics of the right and left wheels are given by

$J\ddot{\theta}_r + C\dot{\theta}_r = KV_r - RD_r$, $J\ddot{\theta}_l + C\dot{\theta}_l = KV_l - RD_l$. (4)

Substituting (1)-(3) into (4), we obtain the state equations as

$\left(\frac{2J}{R} + RM\right)\dot{v} + \frac{2C}{R}v = K(V_r + V_l)$, $\left(\frac{2JL}{R} + \frac{RI}{L}\right)\ddot{\phi} + \frac{2CL}{R}\dot{\phi} = K(V_r - V_l)$. (5)

The notation used in Figure 1 and (1)-(5) is as follows:

$I$: moment of inertia of the robot around its center of gravity

$M$: mass of the robot

$C$: damping coefficient

$\tau_u$, $F_u$: torque and force applied to the robot to follow the rehabilitee or the leader robot

$\phi$: heading angle of the robot

$\dot{\phi}$, $v$: angular and translational velocities of the robot

$\ddot{\phi}$, $\dot{v}$: angular and translational accelerations of the robot

$D_r$, $D_l$: driving forces of the right and left wheels

$L$: half width of the robot

$\theta_r$, $\theta_l$: angles of the right and left wheels

$\dot{\theta}_r$, $\dot{\theta}_l$: angular velocities of the right and left wheels

$\ddot{\theta}_r$, $\ddot{\theta}_l$: angular accelerations of the right and left wheels

$\psi$: angle between the proximity sensors

$d_i$: distance data from the proximity sensors and the vision sensor ($i = 0, 1, 2, 3$)

$J$: moment of inertia of the motors

$r$, $l$, $t$: indices for right, left, and time, respectively

$V_r$, $V_l$: input voltages to the right and left wheels

$K$: driving gain of the motors

$R$: wheel radius.
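
For readers who wish to experiment with the model, the following Python sketch integrates the combined-inertia form of the state equations in (5) with a simple Euler step. All parameter values are placeholders, not values from this study.

    # Placeholder robot parameters (not values from this study)
    I, M, C = 0.05, 5.0, 0.01            # inertia [kg m^2], mass [kg], damping
    J, K, R, L = 0.001, 1.0, 0.05, 0.15  # motor inertia, driving gain, wheel radius [m], half width [m]

    def robot_step(v, phi_dot, V_r, V_l, dt=0.01):
        """One Euler step of the differential-drive dynamics in (5)."""
        m_eff = 2.0 * J / R + R * M           # effective translational inertia
        i_eff = 2.0 * J * L / R + R * I / L   # effective rotational inertia
        v_dot = (K * (V_r + V_l) - (2.0 * C / R) * v) / m_eff
        phi_ddot = (K * (V_r - V_l) - (2.0 * C * L / R) * phi_dot) / i_eff
        return v + v_dot * dt, phi_dot + phi_ddot * dt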

3. Controller Design

This study applies a reference controller and a PI controller for collision avoidance, as shown in Figure 2. The reference controller generates a reference trajectory for the leader and follower robots, and the PI controller makes the robots follow the reference trajectory.

3.1. Reference Controller Design. The environmental information required to create the reference trajectory is provided by fusing multiple sensors: a Kinect sensor, four proximity sensors, and an RFID system. We assume that the RFID system indicates the position of the rehabilitee. The RFID tag attached to the rehabilitee is read by the RFID reader attached to the leader, which helps the leader track and follow the rehabilitee by identifying his/her position. An RFID tag is also attached to the leader, and its signal is read by the follower so that the follower can likewise track and follow the leader.

The human position is the goal position for the robots. In this study, the human position is assumed to be given by the RFID system, and the robots follow the human trajectory as the reference trajectory.
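
As a minimal illustration of this goal assignment, the sketch below assumes a hypothetical read_tag_position(tag_id) helper that returns the 2-D position reported by a robot's RFID reader for a given tag; this helper and the tag identifiers are illustrative, not the interface used in this study.

    def assign_goals(read_tag_position):
        """Leader tracks the rehabilitee's tag; follower tracks the leader's tag."""
        leader_goal = read_tag_position("rehabilitee")  # read by the leader's RFID reader
        follower_goal = read_tag_position("leader")     # read by the follower's RFID reader
        return leader_goal, follower_goal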

When there is no obstacle on the way from the initial position to the goal position, the input torque and force for the robot in (1) and (2), $\tau_u$ and $F_u$, are given as follows:

$\tau_u = I\ddot{\phi}_d$, $F_u = M\dot{v}_d$, (6)

where $\ddot{\phi}_d$ is the desired rotational acceleration and $\dot{v}_d$ is the desired translational acceleration; both are obtained from the reference (human) trajectory.

The control algorithm for the leader and the follower is the same. The existence of obstacles introduces virtual torques and forces ($\tau_1$, $\tau_2$, $F_1$, and $F_2$) into the dynamics of the mobile robots.

Therefore, in the environments where the presence of several static and dynamic obstacles is assumed, we consider the following reference model to generate the reference trajectory:

$I\ddot{\phi} + C_\phi\dot{\phi} = \tau_u - \tau_1 - \tau_2$, $M\dot{v} + C_v v = F_u - F_1 - F_2$, (7)

where $C_\phi$, $C_v$ are virtual damping coefficients and $\tau_i$, $F_i$ ($i = 1, 2$) are the virtual torques and forces for collision avoidance.

Although damping coefficients are not considered in (1) and (2) because they are normally small, $C_\phi$ and $C_v$ are included in (7) to ensure stable motion of the robot.

The reference trajectories are created by adding the virtual torques and forces ($\tau_1$, $\tau_2$, $F_1$, and $F_2$) to the dynamics of the mobile robots in (7). The virtual torques and forces are calculated from the fused data of the Kinect sensor, proximity sensors, and RFID system, as shown in Figure 2.

The torques $\tau_1$ and $\tau_2$ are designed for collision avoidance and for ensuring that the robots move parallel to the virtual walls of the passages, respectively. The force $F_1$ provides a deceleration effect based on the distance from the robots to the obstacles, and $F_2$ provides a deceleration effect based on the approaching speed of dynamic obstacles. These four virtual inputs are calculated as shown below:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (8)

where $\mathrm{sgn}(v)$ is the sign function of the translational velocity of the robots; $\alpha_i$, $\beta_i$ ($i = 1, 2$) are constants adjusted to provide effective collision avoidance; $\dot{d}_i$ is the time derivative of the distance $d_i$ to an obstacle, which corresponds to the robot's approach speed toward the obstacle; and $s(d_i)$ is a shape function that represents the relationship between the virtual external force and the distance to the obstacle; the shape function is 0 when $d_i \ge d_{\max}$ and 1 when $d_i = 0$.

The shape function of the distance in Figure 3 is designed as follows:

$s(d) = \begin{cases} 1, & 0 \le d \le \bar{d}, \\ \frac{1}{a}\left[\exp\left(-\frac{(d - \bar{d})^2}{2\sigma^2}\right) - b\right], & \bar{d} < d \le d_{\max}, \\ 0, & d > d_{\max}, \end{cases}$ (9)

$b = \exp\left(-\frac{(d_{\max} - \bar{d})^2}{2\sigma^2}\right)$, $a = 1 - b$, (10)

where $\sigma$ and $\bar{d}$ are design parameters defining the shape function of the virtual external force, and $d_{\max}$ is the maximum distance that can be measured by the sensor.

The virtual external force/torque increases as the distance to the obstacle decreases. The virtual external force is zero when the distance from the robot to the obstacle is greater than $d_{\max}$, which is a design parameter chosen according to the sensor specification.
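
For illustration, the following Python sketch implements the shape function of (9)-(10) and the deceleration terms $F_1$ and $F_2$ as they appear in (30); the parameter values and the choice of the two sensors $i = 1, 2$ are placeholders rather than values from this study.

    import math

    d_bar, sigma, d_max = 0.3, 0.4, 2.0   # placeholder design parameters [m]

    def shape(d):
        """Shape function s(d) of (9): 1 at contact, 0 beyond the sensor range d_max."""
        b = math.exp(-(d_max - d_bar) ** 2 / (2.0 * sigma ** 2))  # (10)
        a = 1.0 - b
        if d <= d_bar:
            return 1.0
        if d >= d_max:
            return 0.0
        return (math.exp(-(d - d_bar) ** 2 / (2.0 * sigma ** 2)) - b) / a

    def deceleration_terms(d1, d2, d1_dot, d2_dot, beta1=1.0, beta2=1.0):
        """Deceleration terms F_1 (distance-based) and F_2 (approach-speed-based), as in (30)."""
        F1 = beta1 * (shape(d1) + shape(d2))
        F2 = beta2 * (min(d1_dot, 0.0) + min(d2_dot, 0.0))
        return F1, F2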

3.2. PI Controller Design and Dynamics for Simulation. A PI controller is employed to make the robots follow the desired trajectory; it acts on the difference between the desired and actual wheel velocities:

$V_r = K_P(\dot{\theta}_{rd} - \dot{\theta}_r) + K_I \int (\dot{\theta}_{rd} - \dot{\theta}_r)\,dt$, $V_l = K_P(\dot{\theta}_{ld} - \dot{\theta}_l) + K_I \int (\dot{\theta}_{ld} - \dot{\theta}_l)\,dt$. (11)

The reference velocities $\dot{\theta}_{rd}$ and $\dot{\theta}_{ld}$ for the right and left wheels are calculated by

$\dot{\theta}_{rd} = \frac{v_d + L\dot{\phi}_d}{R}$, $\dot{\theta}_{ld} = \frac{v_d - L\dot{\phi}_d}{R}$, (12)

where $\dot{\theta}_{rd}$, $\dot{\theta}_{ld}$ are the reference velocities for the right and left wheels; $v_d$, $\dot{\phi}_d$ are the desired translational and angular velocities; $K_P$ is the proportional gain; and $K_I$ is the integral gain.

The PI control system design for simulation is presented in Appendix A.
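
A Python sketch of the wheel-level control in (11)-(12) follows; the gains and geometry values are placeholders, not values from this study.

    class WheelPI:
        """PI controller of (11) acting on the wheel-velocity error."""
        def __init__(self, kp=2.0, ki=0.5):   # placeholder gains
            self.kp, self.ki, self.err_int = kp, ki, 0.0

        def voltage(self, w_ref, w_meas, dt):
            err = w_ref - w_meas
            self.err_int += err * dt          # running integral, as in (A.3)/(A.6)
            return self.kp * err + self.ki * self.err_int

    def wheel_references(v_d, phi_dot_d, R=0.05, L=0.15):
        """Inverse kinematics of (12): reference wheel speeds from (v_d, phi_dot_d)."""
        return (v_d + L * phi_dot_d) / R, (v_d - L * phi_dot_d) / R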

3.3. Stability Analysis. This study considers the case where the robot is situated inside a space enclosed by four walls, as shown in Figure 4; the objective of this arrangement is to investigate the effect of the walls on the robot and thereby confirm the stability of the robot system.

The distances from the sensors to the walls in Figure 4 are given by

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (13)

where $W$, $D$ are the distances from the mobile robot to the walls; $x_s$, $y_s$ is the center position of the robot; and $\phi_s$ is the robot orientation.

The shape function in (9) is approximated as follows:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (14)

where $s_w$ and $p_w$ are constants for the stability analysis, given by

$s_w = \frac{1}{a}\left[\exp\left(-\frac{(W - \bar{d})^2}{2\sigma^2}\right) - b\right]$, (15)

$p_w = \frac{W - \bar{d}}{a\sigma^2}\exp\left(-\frac{(W - \bar{d})^2}{2\sigma^2}\right)$, (16)

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (17)

where $s_D$ and $p_D$ are constants for the stability analysis, given by

$s_D = \frac{1}{a}\left[\exp\left(-\frac{(D - \bar{d})^2}{2\sigma^2}\right) - b\right]$, $p_D = \frac{D - \bar{d}}{a\sigma^2}\exp\left(-\frac{(D - \bar{d})^2}{2\sigma^2}\right)$. (18)

In a similar manner, we have

$s(d_2) = \frac{1}{a}\left[\exp\left(-\frac{(d_2 - \bar{d})^2}{2\sigma^2}\right) - b\right] \cong s_D - Lp_D\phi_s + p_D y_s$, (19)

$s(d_3) = \frac{1}{a}\left[\exp\left(-\frac{(d_3 - \bar{d})^2}{2\sigma^2}\right) - b\right] \cong s_w - Lp_w\phi_s + p_w x_s$, (20)

where $\bar{d} < W < d_{\max}$ and $\bar{d} < D < d_{\max}$ are assumed.

Considering the dynamics in (1) and related equations, we have

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (21)

where $v_0$ is the velocity of the robot.

Substituting (14), (17), (19), and (20) into (21) results in

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (22)

Denoting

$Q = \mathrm{sgn}(v_0)\,\alpha_1(Lp_w - Lp_D) + \alpha_2(Lp_w - Lp_D)$, (23)

we have the following dynamics from (22):

$I\ddot{\phi}_s(t) + C_\phi\dot{\phi}_s(t) + 2Q\phi_s(t) - 2(\alpha_1 + \alpha_2)p_w x_s(t) = \tau_u$. (24)

From Figure 4, we have

$\dot{x}_s = -v_0 \sin\phi_s$, (25)

where $\phi_s$ is the orientation of the robot in Figure 4 and is considered to have a small magnitude so that the stability can be analysed in a typical linear manner.

Denoting $z = [\phi_s \ \dot{\phi}_s \ x_s]^T$, we have

$\dot{z} = A_\phi z + \tau_u$, (26)

where [A.sub.[phi]] is a matrix derived from (24) as follows:

$A_\phi = \begin{bmatrix} 0 & 1 & 0 \\ -2Q/I & -C_\phi/I & 2(\alpha_1 + \alpha_2)p_w/I \\ -v_0 & 0 & 0 \end{bmatrix}$. (27)

The determinant $\det(sI - A_\phi)$ is given by

$|sI - A_\phi| = s^3 + \frac{C_\phi}{I}s^2 + \frac{2Q}{I}s + \frac{2\alpha_1 p_w + 2\alpha_2 p_w}{I}|v_0|$. (28)

Because $p_w$ is positive, if the conditions

$Q > 0$, $\frac{C_\phi Q}{I} - (\alpha_1 + \alpha_2)p_w|v_0| > 0$ (29)

are satisfied, then the system in (26) is stable [25].
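
Condition (29) is the Routh-Hurwitz criterion [25] applied to the third-order polynomial in (28); the following Python sketch checks it numerically for user-supplied parameter values.

    def rotational_stable(I, C_phi, Q, alpha1, alpha2, p_w, v0):
        """Routh-Hurwitz test for s^3 + (C_phi/I)s^2 + (2Q/I)s + 2(a1+a2)p_w|v0|/I."""
        a2 = C_phi / I
        a1 = 2.0 * Q / I
        a0 = 2.0 * (alpha1 + alpha2) * p_w * abs(v0) / I
        # A cubic is Hurwitz iff all coefficients are positive and a2 * a1 > a0
        return a2 > 0 and a1 > 0 and a0 > 0 and a2 * a1 > a0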

Next, we consider the translational motion of the mobile robots. Considering (2) and the related equations, we have

$M\dot{v}(t) + C_v v(t) = F_u - \beta_1 \sum_{i=1,2} s(d_i) - \beta_2 \sum_{i=1,2} \min(\dot{d}_i, 0)$. (30)

Substituting (14), (17), (19), and (20) into (30) results in

$M\ddot{y}_s(t) + (C_v + 2\beta_2)\dot{y}_s(t) + 2\beta_1 p_D y_s(t) = F_u - 2\beta_1 s_D$. (31)

From

$\dot{y}_s = v_s(t)$ (32)

and denoting $z = [y_s \ \dot{y}_s]^T$, we have

$\dot{z} = A_v z + F_u$, (33)

where [A.sub.v] is a matrix derived from (31) as follows:

$A_v = \begin{bmatrix} 0 & 1 \\ -2\beta_1 p_D/M & -(C_v + 2\beta_2)/M \end{bmatrix}$. (34)

The determinant $\det(sI - A_v)$ is

$|sI - A_v| = s^2 + \frac{C_v + 2\beta_2}{M}s + \frac{2\beta_1 p_D}{M}$. (35)

Because (35) is a second-order polynomial, positive coefficients are sufficient for stability; thus, if $p_D$, $\beta_1$, $\beta_2 > 0$, the system in (33) is stable [25].

4. Simulation Results

Computer simulations were performed to verify the effectiveness of the proposed method. Figure 5 shows the initial condition of the human living environment, in which several static and dynamic objects exist. It includes tables low enough for the rehabilitee to step over, although these tables are static obstacles for the robots. Passing humans are the dynamic obstacles that the robots must avoid.

This simulation applies the Kinect sensor to enable the robot to "see" static and dynamic obstacles. Figure 6 shows the simulation result of the Kinect sensor application. The simulation was conducted using distance data from a real Kinect sensor detecting an approaching human, and the distance data were compared with those from the proximity sensors. The result in Figure 6 shows that when the passing human approaches the robots and the robot-human distance becomes smaller than the allowed distance, the robots stop; this occurs at a dwelling period of approximately 1500 s.
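
A minimal sketch of this stop rule, assuming a hypothetical allowed-distance threshold and fused distance readings; neither the threshold value nor the function interface comes from this study.

    def should_stop(kinect_distance, proximity_distances, d_allowed=0.8):
        """Stop when the fused distance to a human falls below the allowed distance."""
        nearest = min([kinect_distance] + list(proximity_distances))
        return nearest < d_allowed  # d_allowed is a placeholder threshold [m]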

Figure 7 shows the computer simulation screenshot, and Figure 8 shows the simulation results that demonstrate the effectiveness of the proposed system. Figure 7(a) shows the rehabilitee moving between tables. Figure 7(b) shows the environment in which the rehabilitee moves randomly.

In Figure 8, the green, red, and blue lines are the trajectories of the rehabilitee, leader, and follower, respectively. In Figure 8(a), when the rehabilitee walks between tables, some ripples occur owing to obstacle avoidance. Figure 8(b) shows the result for the random setup, in which more ripples appear. In all these cases, the swarm robots successfully follow the rehabilitee while avoiding obstacles.

5. Conclusion

This study presents the design of a collision avoidance control system for swarm robots moving in an environment that includes moving obstacles. The swarm robots follow the rehabilitee to support him/her in performing tasks in a dynamic environment. This study applies a reference controller and a PI controller. The reference controller creates the reference trajectory for the PI controller based on the fused sensor information obtained from the Kinect sensor, proximity sensors, and RFID system. The obstacle avoidance trajectory is generated by the reference controller, and the stability of the overall system is verified analytically. Various computer simulations were performed to verify the effectiveness of the proposed method; the swarm robots successfully followed the rehabilitee in all situations.

Appendices

A. PI Control System Design for Simulation

We apply the PI controller to (4), and the following dynamics are obtained:

$J\ddot{\theta}_r + C\dot{\theta}_r = K\left\{K_P(\dot{\theta}_{rd} - \dot{\theta}_r) + K_I \int (\dot{\theta}_{rd} - \dot{\theta}_r)\,dt\right\} - RD_r$, $J\ddot{\theta}_l + C\dot{\theta}_l = K\left\{K_P(\dot{\theta}_{ld} - \dot{\theta}_l) + K_I \int (\dot{\theta}_{ld} - \dot{\theta}_l)\,dt\right\} - RD_l$, (A.1)

where $\dot{\theta}_{rd}$ and $\dot{\theta}_{ld}$ are the reference velocities for the right and left wheels.

Substituting (1)-(3) into (A.1) results in

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (A.2)

where $S_r$ is the integral of the tracking error, given by

$S_r = \int (\dot{\theta}_{rd} - \dot{\theta}_r)\,dt$. (A.3)

Substituting (A.3) into (A.2), we have

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (A.4)

Similarly, we have the following dynamics for the left wheel:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (A.5)

$S_l = \int (\dot{\theta}_{ld} - \dot{\theta}_l)\,dt$. (A.6)

Equations (A.4) and (A.5) give

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (A.7)

Then, we have

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (A.8)

Defining the state vector $x = [v \ \dot{\phi} \ S_r \ S_l]^T$, the input vector $u = [v_d \ \dot{\phi}_d]^T$, and the output vector $y = [S_r \ S_l]^T$ in (A.8), we employed the following linear dynamics for the simulation:

$\dot{x} = Ax + Bu$, $y = Cx$, (A.9)

where

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (A.10)
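
As a sketch of how (A.9) can be simulated, the following Euler integration works for any consistent (A, B) pair; the matrices themselves must be built from (A.10), and the step size and horizon below are placeholders.

    import numpy as np

    def simulate_lti(A, B, u_of_t, x0, dt=0.001, T=5.0):
        """Euler integration of x_dot = A x + B u for the simulation model (A.9)."""
        x = np.asarray(x0, dtype=float)
        traj = []
        for k in range(int(T / dt)):
            x = x + dt * (A @ x + B @ u_of_t(k * dt))  # one Euler step
            traj.append(x.copy())
        return np.array(traj)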

B. Computer Simulation Screenshot and Simulation Results

Figure 9 shows several environment setups used in this study. Figures 9(a) and 9(b) show the environments without obstacles in which the rehabilitee moves around the rooms and the robots follow him/her. Figure 9(c) shows the rehabilitee moving between tables. Figures 9(d) and 9(e) show environments in which the rehabilitee steps over the tables and robots avoid them while still following him/her. Figures 9(f), 9(g), and 9(h) show the environment in which the rehabilitee moves randomly.

Figure 10 shows the simulation results for the complete environment setups in Figure 9, where the green, red, and blue lines are the trajectories of the rehabilitee, leader, and follower, respectively. Figures 10(a) and 10(b) show the setups with no obstacles. The resulting graphs show no ripples (the signature of obstacle avoidance) because the mobile robots only regulate the rehabilitee-robot distance and the inter-robot distance while the rehabilitee walks around. In Figure 10(c), the rehabilitee walks between tables; therefore, some ripples caused by obstacle avoidance appear. Figures 10(d) and 10(e) show the setups where the rehabilitee steps over tables and the robots follow the human while avoiding the tables. Figures 10(f), 10(g), and 10(h) show the results for random setups, in which more ripples appear. In all cases, the swarm robots successfully follow the rehabilitee while avoiding obstacles.

http://dx.doi.org/10.1155/2014/278659

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The authors sincerely appreciate the anonymous reviewers' valuable comments and suggestions.

References

[1] J. Llinas and D. L. Hall, "Introduction to multi-sensor data fusion," in Proceedings of the IEEE International Symposium on Circuits and Systems, pp. 537-540, June 1998.

[2] M. Kam, X. Zhu, and P. Kalata, "Sensor fusion for mobile robot navigation," Proceedings of the IEEE, vol. 85, no. 1, pp. 108-119, 1997.

[3] L. Jetto, S. Longhi, and D. Vitali, "Localization of a wheeled mobile robot by sensor data fusion based on a fuzzy logic adapted Kalman filter," Control Engineering Practice, vol. 7, no. 6, pp. 763-771, 1999.

[4] T. Haidegger, M. Barreto, P. Goncalves et al., "Applied ontologies and standards for service robots," Robotics and Autonomous Systems, vol. 61, no. 11, pp. 1215-1223, 2013.

[5] N. Tschichold-Gurman, S. J. Vestli, and G. Schweitzer, "Service robot MOPS: first operating experiences," Robotics and Autonomous Systems, vol. 34, no. 2-3, pp. 165-173, 2001.

[6] Y. Qing-Xiao, Y. Can, F. Zhuang, and Z. Yan-Zheng, "Research of the localization of restaurant service robot," International Journal of Advanced Robotic Systems, vol. 7, no. 3, pp. 227-238, 2010.

[7] Y.-H. Wu, C. Fassert, and A.-S. Rigaud, "Designing robots for the elderly: appearance issue and beyond," Archives of Gerontology and Geriatrics, vol. 54, no. 1, pp. 121-126, 2012.

[8] D. Sun, J. Zhu, C. Lai, and S. K. Tso, "A visual sensing application to a climbing cleaning robot on the glass surface," Mechatronics, vol. 14, no. 10, pp. 1089-1104, 2004.

[9] K. Kim, M. Siddiqui, A. Francois, G. Medioni, and Y. Cho, "Robust real-time vision modules for a personal service robot," in Proceedings of the 3rd International Conference on Ubiquitous Robots and Ambient Intelligence (URAI '06), 2006.

[10] M.-S. Ju, C.-C. K. Lin, D.-H. Lin, I.-S. Hwang, and S.-M. Chen, "A rehabilitation robot with force-position hybrid fuzzy controller: hybrid fuzzy control of rehabilitation robot," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 13, no. 3, pp. 349-358, 2005.

[11] C. Daimin, Y. Miao, Z. Daizhi, S. Binghan, and C. Xiaolei, "Planning of motion trajectory and analysis of position and pose of robot based on rehabilitation training in early stage of stroke," in Proceedings of the International Conference on Computer, Mechatronics, Control and Electronic Engineering (CMCE '10), pp. 318-321, August 2010.

[12] E. Akdogan and M. A. Adli, "The design and control of a therapeutic exercise robot for lower limb rehabilitation: physiotherabot," Mechatronics, vol. 21, no. 3, pp. 509-522, 2011.

[13] G. C. Pettinaro, I. W. Kwee, L. M. Gambardella et al., "Swarm robotics: a different approach to service robotics," in Proceedings of the 33rd International Symposium on Robotics, October 2002.

[14] A. E. Turgut, H. Çelikkanat, F. Gökçe, and E. Şahin, "Self-organized flocking in mobile robot swarms," Swarm Intelligence, vol. 2, no. 2-4, pp. 97-120, 2008.

[15] N. Xiong, J. He, Y. Yang, Y. He, T.-H. Kim, and C. Lin, "A survey on decentralized flocking schemes for a set of autonomous mobile robots," Journal of Communications, vol. 5, no. 1, pp. 31-38, 2010.

[16] R. Havangi, M. A. Nekoui, and M. Teshnehlab, "A multi swarm particle filter for mobile robot localization," International Journal of Computer Science Issues, vol. 7, no. 3, pp. 15-22, 2010.

[17] B. Eikenberry, O. Yakimenko, and M. Romano, "A vision based navigation among multiple flocking robots: modeling and simulation," in Proceedings of the AIAA Modeling and Simulation Technologies Conference, pp. 1-11, Keystone, Colo, USA, August 2006.

[18] R. Olfati-Saber, "Flocking for multi-agent dynamic systems: algorithms and theory," IEEE Transactions on Automatic Control, vol. 51, no. 3, pp. 401-420, 2006.

[19] K. Ishii and T. Miki, "Mobile robot platforms for artificial and swarm intelligence researches," International Congress Series, vol. 1301, pp. 39-42, 2007.

[20] H. Lee, E.-J. Jung, B.-J. Yi, and Y. Choi, "Navigation strategy of multiple mobile robot systems based on the null-space projection method," International Journal of Control, Automation and Systems, vol. 9, no. 2, pp. 384-390, 2011.

[21] T. Germa, F. Lerasle, N. Ouadah, and V. Cadenat, "Vision and RFID data fusion for tracking people in crowds by a mobile robot," Computer Vision and Image Understanding, vol. 114, pp. 641-651, 2010.

[22] T. Tammet, J. Vain, and A. Kuusik, "Distributed coordination of mobile robots using RFID technology," in Proceedings of the 8th WSEAS International Conference on Automatic Control, Modeling and Simulation, pp. 109-116, March 2006.

[23] K. Prathyusha, V. Harini, and S. Balaji, "Design and development of a RFID based mobile robot," International Journal of Engineering Science & Advanced Technology, vol. 1, no. 1, pp. 30-35, 2011.

[24] S. Park and S. Hashimoto, "Indoor localization for autonomous mobile robot based on passive RFID," in Proceedings of the IEEE International Conference on Robotics and Biomimetics, pp. 1856-1861, Bangkok, Thailand, February 2009.

[25] R. Sigal, "Algorithms for the Routh-Hurwitz stability test," Mathematical and Computer Modelling, vol. 13, no. 8, pp. 69-77, 1990.

Tresna Dewi, (1,2) Naoki Uchiyama, (2) Shigenori Sano, (2) and Hiroki Takahashi (2)

(1) Electronic Study Program, State Polytechnic of Sriwijaya, Palembang 30139, Indonesia

(2) Department of Mechanical Engineering, Toyohashi University of Technology, Toyohashi 441-8580, Japan

Correspondence should be addressed to Naoki Uchiyama; uchiyama@tut.jp

Received 19 July 2013; Accepted 11 December 2013; Published 2 February 2014

Academic Editor: Oliver Sawodny
