Iterative Learning Algorithm Design for Variable Admittance Control Tuning of a Robotic Lift Assistant System

INTRODUCTION

Robots are widely used in industry to perform repetitive operations and to free human operators from tedious, dangerous, and heavy labor. Some tasks, however, are either complex in themselves or involve interaction with a complex environment. In these cases human beings cannot be replaced, and human-robot interaction (HRI) is inevitable. The design and development of robotic devices that enhance human performance apply not only to industrial assembly robots and moving/carrying assist devices, but also to areas such as medical surgical robots and exoskeletons. The benefit of HRI is to extend the human perceiving and analyzing capability in dealing with difficult tasks in complex environments. Therefore, the main purpose and challenge of the control system design is to make robot manipulation more intuitive for human operators.

Admittance control is often adopted for human-robot interaction control. It has been observed that a controller designed with a fixed pair of virtual mass and damping values cannot reach desirable handling performance. The literature shows that varying the admittance to adapt to the operator's speed and force can improve the performance. The variable admittance is defined as a function of the human intention, and different signals are used to represent that intention, such as force, acceleration, or the derivative of acceleration. For example, in [1] the human arm stiffness is estimated from experimental data of the force and the position of the end effector, and is then used to adjust the viscosity coefficient of the impedance characteristic in a cooperative calligraphic writing task. In another case, both acceleration and speed are applied to an intelligent assist device (IAD) [2]: the acceleration is used to adjust the virtual damping in order to moderate the force demand, while the speed is used to change the virtual mass for fine positioning. In other work, the time derivative of the contact force is used to assess the acceleration and deceleration intention, and a variable admittance control scheme [3, 4] was proposed to continuously adapt the virtual damping coefficient as a function of the human intention in a speed-based variable impedance control for pick, place, and moving tasks.

With so many possible admittance parameter functions and combinations, it becomes difficult for designers to choose and verify one. Online verification is necessary because the operator must evaluate which is easier to manipulate for different tasks. Offline simulation could be used to reduce online testing time and risk, but how to emulate the operator's learning capability is a challenging question. Learning methods have been proposed and studied previously in the literature to deal with similar situations. It is pointed out in [5] that it is difficult to determine the admittance matrix elements analytically, since they depend upon many physical properties that are not easily identified; one method is to obtain the admittance matrix elements through iterative trials of a manipulation operation in a single motion. Using an iterative learning algorithm (ILA) for admittance parameter adaptation has also been studied; for example, gain scheduling with a reinforcement learning approach is adopted in [6]. A single set of admittance parameters may not fit all situations and environments, and improperly designed admittance parameters can also cause resonant oscillation in the human-robot interaction. The authors of [7] use an online fast Fourier transform of the measured manipulator end-effector forces to detect the oscillations and to adapt the admittance parameters dynamically when the environment changes. In a power-assist wheelchair robot application [8], variable-admittance-based velocity compensation with an ILA is proposed to adaptively adjust the control parameters so that the operator feels a smooth load and the operator's force is maintained at the same level. The learning ability is used to sense the change in load and adjust the virtual damping, while the virtual mass is kept constant.

In this paper, an iterative learning method is proposed for verification of the variable admittance control force. In practice, different virtual damping and virtual mass functions can be used to interpret the human intention, and the challenge is to verify them one by one and compare their performance in real testing. The learning ability is therefore introduced to find the most suitable force profile for a pre-defined speed/position trajectory offline. The intuitive relation between force and speed is then tested and verified with offline simulation data. Online learning for admittance parameter tuning will be considered in the next phase. This offline iterative learning algorithm design is applied to a five-degree-of-freedom lift assist device, and the simulation results are used to calibrate and validate the control algorithm.

A LIFT ASSIST SYSTEM

A lift assist system is built for a human operator to pick, move, and install vehicle panels on an assembly line. The operator can manipulate the lift assist system in five degrees of freedom: x, y, z, pitch, and rotation. The schematic of one channel is shown in Figure 1, illustrating the control structure of one axis of the lift assist system. The safety and speed control are implemented in a PLC (Programmable Logic Controller). The human operator applies force to the handle, and in response the controller drives the motor to move the whole structure based on the force input. Besides the spring-mass dynamics, the admittance with suitable virtual mass and damping should be considered in the control design.

As shown in Figure 1, the safety and speed control are implemented in the PLC. In the speed loop, the encoder of the servo motor feeds back the position of the toothed serving belt to the PLC. The position loop is closed by the human operator. The admittance controller is designed to fulfill the human intention by generating the speed command to the PLC, which then sends the speed command to the servo drives of the motors. The speed command should offer the maximum intuition to the operator so that the lift system can be moved freely at the operator's will. Stability, smoothness, and intuitive operation are the three basic requirements of the system design. To further improve the performance, accuracy and swiftness are to be fine-tuned for different operators, because the body and arm stiffness of different people may differ. The human is included in the overall positioning loop through the eyes and, acting as an actuator with individual stepping and arm stiffness, affects the performance of the whole motion loop. As the operator's handle is mounted on the side of the load cell (end-effector), the handle position represents the system position output in the x, y, z, pitch, and rotation directions.

ADMITTANCE CONTROL

Both impedance control and admittance control can be applied to a human-robot interaction system. Impedance control uses displacement as the system input and force as the system output; it is more suitable for light-inertia, low-friction, force-sensitive systems such as robot arms and other small structural systems. Admittance control, on the other hand, uses force as the system input and position/velocity as the system output; it is applicable to systems with large inertia, high friction, and position-targeting requirements. The contact force and the final positioning of the end-effector need more attention in the control design.

Admittance control for human-robot interaction is based on the admittance model which can be written as:

f_H = m_v(a - a_0) + c_v(v - v_0) + k_v(x - x_0) (1)

where f_H is the force sensed from the human operator and is the system input; m, c, and k are the mass, damping, and spring coefficients, respectively; the subscript v indicates the virtual component that is to be designed and implemented in the interactive system; the subscript 0 represents the system equilibrium points; and a, v, x are respectively the acceleration, speed, and position of the load cell. Since f_H represents the human intention to move the end-effector of the lift system toward the origin of the motion coordinate frame, the equilibrium points a_0, v_0, x_0 are zero in the above model. Furthermore, it is not possible to measure the desired position, which exists only in the operator's mind as minimizing the distance between the target point and the load; the position loop is closed by the human operator, not by this lift assist control system, so the last term of the admittance model is omitted. The admittance model for this system design is therefore rewritten as:

f_H = m_v·(dv/dt) + c_v·v (2)

It is clear that the system output is the speed and the input is the sensed force, which represents the human intention of how the system needs to reach the target speed while the operator closes the position loop. The system model transformed into the Laplace domain is:

v(s)/f_H(s) = 1/(m_v·s + c_v) (3)

where v(s) and f_H(s) are the Laplace transforms of the speed output v and the force input f_H, and s is the Laplace variable.
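
To make the mapping from handle force to speed command concrete, the following minimal sketch integrates the admittance model (2) with a forward-Euler step. It is an illustration only: the 10 ms sample time and the parameter values (matching the fixed admittance case used later in the simulation section) are assumptions, not part of the paper's implementation.

```python
import numpy as np

def admittance_speed(force, m_v, c_v, dt):
    """Integrate the admittance model m_v*dv/dt + c_v*v = f_H with forward Euler
    to obtain the speed command that would be sent to the PLC."""
    v = np.zeros_like(force)
    for k in range(len(force) - 1):
        v[k + 1] = v[k] + dt * (force[k] - c_v * v[k]) / m_v
    return v

# Example: a constant 10 N push on the handle with illustrative parameters
dt = 0.01                      # assumed 10 ms sample time
f_H = np.full(500, 10.0)       # 5 s of constant handle force
v = admittance_speed(f_H, m_v=5.0, c_v=10.0, dt=dt)
print(v[-1])                   # approaches f_H/c_v = 1 m/s
```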

VARIABLE ADMITTANCE CONTROL

To make it easy for the operator to learn the dynamics of the lift assist system, the more intuitive the admittance control is, the better the whole system performance will be. Making the operation smooth and intuitive is the challenging part of admittance control design. Defining and tuning the variable admittance, i.e., the virtual mass and virtual damping, is the way to improve the intuitiveness of the system operation.

As indicated in Figure 1, the actuator system dynamics include the dynamics of the admittance controller, the speed loop between the PLC and servo motors, and the structure from the motor to the load cell (end-effector). To avoid vibration and instability, the admittance dynamics should be dominant in the whole system; otherwise a position-sensor-based predictive control is needed to compensate for the dynamics of the overhead belt-pulley structure. At this stage, only the admittance control is considered, and the dynamics, from equation (3), are modeled as the first-order system b/(a·s + 1), where the gain of the transfer function is b = 1/c_v and the time constant is a = m_v/c_v. Apparently, for a given force, the virtual damping determines the steady-state speed, while the ratio of virtual mass to virtual damping determines the ramp rate, or rise time, of the speed command. Because the force is generated from the operator's hand/arm stiffness and the interaction between the movement of the load cell and the operator's stepping, a learning process is necessary before the operator can apply a suitable force profile that generates the target speed profile. Intuitive operation, i.e., being easy to manipulate and easy to learn, is the key factor determining the performance of the whole system. The intuitiveness of operation roughly corresponds to the magnitude of the transfer-function frequency response, and the ease of manipulation is also related to the time factor, or phase, in the frequency domain. If the load cell does not move easily, for example in the x-axis direction in which the operator steps and moves the upper body, the arm stiffness will increase. With respect to the whole system response dynamics, the operator will apply a certain amount of predictive control after learning the dynamics over many trials. The best situation is the best match between the suitable force domain and the target speed domain with the least human prediction.
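
As a numerical illustration of these two roles, the short sketch below evaluates the gain b = 1/c_v and the time constant a = m_v/c_v for a few parameter pairs; the first pair matches the fixed values used later in the simulation section, the others are arbitrary, and the 2.2·a rise-time rule of thumb for a first-order lag is added here only for comparison.

```python
def admittance_first_order(m_v, c_v):
    """Gain and time constant of the first-order admittance v(s)/f_H(s) = b/(a*s + 1)."""
    b = 1.0 / c_v      # steady-state speed per unit force, (m/s)/N
    a = m_v / c_v      # time constant, s
    t_rise = 2.2 * a   # approximate 10-90% rise time of a first-order lag
    return b, a, t_rise

# Illustrative parameter pairs (only the first pair appears in the paper)
for m_v, c_v in [(5.0, 10.0), (2.0, 10.0), (5.0, 30.0)]:
    b, a, tr = admittance_first_order(m_v, c_v)
    print(f"m_v={m_v:.1f} kg, c_v={c_v:.1f} Ns/m -> gain {b:.3f} (m/s)/N, "
          f"time constant {a:.2f} s, rise time {tr:.2f} s")
```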

The Disadvantage of Fixed Admittance Control

When the admittance parameters m_v and c_v are set to constants, it is hard to achieve good performance for both low and high speed, i.e., for both fast movement and fine positioning [1, 2, 3, 4]. For example, when the admittance parameters are set to high values, so that 1/c_v is small, a larger force is generally required to move the load, especially at high speed; however, this may be good for fine movements at low speed. When the parameters are set to low values, so that 1/c_v is large, it is generally easy to move the load at high speed with a small force on the handle, but more difficult to move accurately at the low-speed end. The ratio m_v/c_v plays another role as the rise time of the speed command. When the virtual mass is set to a high value, the system shows larger inertia and responds slowly, which is preferred at low speed, while a small virtual mass makes the system respond quickly to the target speed, which is preferred at the high-speed end. These are the disadvantages of fixed admittance control, and it is not easy to reach an intuitive and manipulable system with little learning effort.

Variable Admittance to Improve the Performance

Variable admittance control is proposed to automatically adjust the admittance parameters according to the human intention, thereby improving manipulability. In [2], the acceleration is used to infer the human intention: a high damping is set as the default value, and the variable virtual damping is increased or decreased by a factor of the acceleration; to obtain a smoother response, the default virtual mass is chosen by applying a safety factor to the minimal virtual-mass-to-virtual-damping ratio. Another method is to use the time derivative of the force to infer the human intention [3, 4]: the virtual damping is adjusted by the time derivative of the force, while the virtual mass is kept constant. A model of the human operator's arm stiffness, estimated from experimental data of the force and the position of the end effector, can also be used in place of the applied force [1].

ITERATIVE LEARNING ALGORITHM FOR ADMITTANCE TUNING

The human operator plays the master and guiding role in the human-robot interaction. Whether the goal of the system is position control or speed control, human learning takes place during this interaction. The human brain needs to get used to the system dynamics to find the best force profile that generates the target trajectory and speed profile. This learning process may not be explicit, and the loop is closed implicitly by coordinating eyes, hands, and body, while within the closed loop the brain works as a controller with powerful learning capability.

At this initial stage, the iterative learning algorithm (ILA) is used to find the best manipulating force input for the admittance control to generate the desired speed profile as the output. The admittance parameters can then be tuned so that the force input falls into a suitable range and is more intuitive.

Iterative learning is proposed for this purpose because the desired trajectory and speed profile can be pre-defined and the operation can be repeated by the algorithm until it finds the best way of manipulation, i.e., the best profile of the applied force. This application scenario falls into the iterative learning category, in which the target output is a periodic signal and the input is adjusted over the learning cycles. The learning and memory capability of the ILA can be utilized to find the best-fit manipulation offline, and the resulting profile can then be used to study the performance of the admittance control offline.

The simple implementation structure of the ILA and its low computational cost are other crucial factors recommending it. Its stability does not depend entirely on the system model during the design procedure [10]; therefore it has relatively higher robustness to signal perturbation and modeling errors than other control methods. Iterative learning control has been proven an effective method both in control theory [11, 12] and in practical application [13].

Iterative Learning Algorithm

Consider a given discrete-time, linear time-invariant (LTI), single-input single-output (SISO) system:

v_i(k) = G(q)·f_i(k) + d(k) (5)

where k is the time index, i is the iteration index, q is the forward time-shift operator, q·x(k) = x(k + 1), v_i is the speed output, f_i is the force control input, and d is an exogenous signal that repeats in each iteration. Repetitive disturbances, such as friction that is hard to model, can be captured in d. The plant G(q) is a proper rational function of q with a delay, or equivalently a relative degree, of m, and it is assumed to be asymptotically stable. Consider the N-sample sequences of inputs and outputs:

f_i(k), k ∈ {0, 1, ..., N-1} (6)

v_i(k), k ∈ {m, m+1, ..., m+N-1} (7)

d(k), k ∈ {m, m+1, ..., m+N-1} (8)

and the desired speed output:

v_d(k), k ∈ {m, m+1, ..., m+N-1} (9)

with the performance, or output-tracking, error defined by

e_i(k) = v_d(k) - v_i(k), k ∈ {m, m+1, ..., m+N-1} (10)

For such a system, a PD-type ILC algorithm takes the form:

f_{i+1}(k) = f_i(k) + α·e_i(k + m) + β·ė_i(k + m), k ∈ {0, 1, ..., N-1} (11)

where α and β are the learning gains. Considering the system delay, or relative degree, f_i(k) affects the system output after m steps; therefore f_{i+1}(k) is corrected from the previously learned f_i(k) using the error e_i(k + m) and its derivative. When a PD-type ILA is applied to the admittance model (3), the first-order system is easy to stabilize. However, when non-minimum-phase behavior and time delays from the motor and belts are present, the system can become unstable under a general ILC algorithm, so a robust and monotonically convergent ILC algorithm is preferred for this situation. An ILA with a forgetting factor γ [13] can be adopted as follows:

f_{i+1}(k) = γ·f_i(k) + α·e_i(k + m) + β·ė_i(k + m) (12)
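
To show how the update law (12) could be exercised against the admittance model, the sketch below discretizes the plant (2) with forward Euler and iterates the learning update until a force profile that tracks a desired speed profile is found. The sample time, learning gains α and β, forgetting factor γ, iteration count, and the finite-difference approximation of ė_i are illustrative assumptions, not the values used in this paper.

```python
import numpy as np

def simulate_plant(f, m_v, c_v, dt):
    """Forward-Euler simulation of the admittance model m_v*dv/dt + c_v*v = f_H."""
    v = np.zeros_like(f)
    for k in range(len(f) - 1):
        v[k + 1] = v[k] + dt * (f[k] - c_v * v[k]) / m_v
    return v

def ila_with_forgetting(v_d, m_v, c_v, dt, gamma=0.98, alpha=8.0, beta=0.5, n_iter=50):
    """PD-type iterative learning update (12) with forgetting factor gamma.
    The relative degree m is 1 for the Euler-discretized first-order plant."""
    N = len(v_d)
    f = np.zeros(N)
    for _ in range(n_iter):
        v = simulate_plant(f, m_v, c_v, dt)
        e = v_d - v
        e_shift = np.roll(e, -1)             # e_i(k + m) with m = 1
        e_shift[-1] = 0.0
        de_shift = np.gradient(e_shift, dt)  # finite-difference error derivative
        f = gamma * f + alpha * e_shift + beta * de_shift
    return f, v

# Learn the force profile for a half-sine desired speed (illustrative profile)
dt = 0.01
t = np.arange(0.0, 5.0, dt)
v_desired = 0.5 * np.sin(np.pi * t / 5.0)
f_learned, v_final = ila_with_forgetting(v_desired, m_v=5.0, c_v=10.0, dt=dt)
print("final RMS tracking error:", np.sqrt(np.mean((v_desired - v_final) ** 2)))
```

Because of the forgetting factor, the tracking error does not converge exactly to zero, but the iteration remains robust to perturbations, which is the trade-off motivating equation (12).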

STABILITY ISSUE

Stability is a concern when the human operator is involved in the system, because it is difficult to mathematically prove the stability of the whole closed-loop system. At this stage, only the admittance command learning is used to show the effectiveness of the ILA in admittance parameter tuning. As a next step, it is necessary to identify the dynamic model of the motor and actuator of each axis with a small-scale, safe excitation signal over a certain period of time. Once the dynamic model of the system downstream of the admittance controller is obtained, the learned force profile, representing the human operator's output, can be applied to a whole-system simulation. The stability can then be considered in the admittance design and verified before running the real test.

SIMULATION AND VERIFICATION

To emulate the human learning capability offline, an ILA with a forgetting factor is applied to the admittance model for a specific speed profile to find the corresponding manipulating force. To simulate a half-circle trajectory in the horizontal plane formed by axes X and Y, a sine-wave speed profile is adopted for the X axis and a cosine-wave speed profile for the Y axis, as shown in Figure 2. Both the fixed admittance control and the variable admittance control are applied to the system. After several learning cycles, the suitable force profile is learned by the ILA. The simulation results are shown in Figures 3 through 9.
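
For reference, desired speed profiles of this kind can be generated as in the sketch below; the radius, duration, and sample time are illustrative assumptions, not the settings used in the reported simulation.

```python
import numpy as np

# Desired speed profiles for a half circle in the horizontal X-Y plane.
# Radius, duration, and sample time are assumed values for illustration.
radius = 0.5       # m
duration = 5.0     # s to traverse the half circle
dt = 0.01
t = np.arange(0.0, duration, dt)
omega = np.pi / duration                            # half a revolution over the duration

v_x_desired = radius * omega * np.sin(omega * t)    # sine-wave speed for X
v_y_desired = radius * omega * np.cos(omega * t)    # cosine-wave speed for Y

# Integrating the speed commands recovers the half-circle path
x = np.cumsum(v_x_desired) * dt
y = np.cumsum(v_y_desired) * dt
deviation = np.abs(np.sqrt((x - radius) ** 2 + y ** 2) - radius)
print("max deviation from the circle:", deviation.max())   # small, limited by Euler integration
```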

When the fixed admittance control is applied, the virtual mass and virtual damping are chosen as follows:

m_v = 5 kg

c_v = 10 Ns/m

The suitable force profile that produces a speed matching the desired speed is shown in Figure 3. It indicates that a large force, up to 55 N, needs to be applied. The relationship between the force and the speed profile is hard for a human operator to master, seeing that the rising part of the force leads the speed profile considerably. More prediction and more learning cycles may be needed for both the acceleration and deceleration phases. A long period of counter force is also required, as shown by the negative part of the manipulating force starting at 22 s.

When the variable admittance control is applied to the same system, the virtual damping is chosen as a function of the force, which is used to indicate the human intention. The virtual damping function is defined in equation (13), and its symmetrical profile is shown as the blue curve in Figure 5.

[mathematical expression not reproducible] (13)

The virtual mass is defined as the function in equation (14). The purpose of this function is to decrease the virtual mass at the high-speed end to gain better mobility, while increasing the mass at the low-speed end to provide fine position-adjusting capability. The shape of the mass function is shown as the red dashed line in Figure 5.

[mathematical expression not reproducible] (14)
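
Since the exact expressions of equations (13) and (14) could not be recovered from the source, the sketch below only illustrates the qualitative behavior described above: a force-dependent virtual damping with a symmetric profile that is high near zero force and drops as the force grows, and a speed-dependent virtual mass that is high at low speed and drops at high speed. The function shapes and constants are hypothetical assumptions, not the paper's actual schedules.

```python
import numpy as np

# Hypothetical variable-admittance schedules; shapes and constants are assumptions
# chosen only to illustrate the behavior described in the text, not the paper's
# actual functions (13) and (14).

def virtual_damping(force, c_min=4.0, c_max=12.0, f_scale=15.0):
    """Symmetric in force: high damping near zero force for fine positioning,
    lower damping as |force| grows to ease fast motion."""
    return c_min + (c_max - c_min) * np.exp(-(force / f_scale) ** 2)

def virtual_mass(speed, m_min=2.0, m_max=6.0, v_scale=0.3):
    """High virtual mass at low speed for fine adjustment, low mass at high
    speed for a quicker response."""
    return m_min + (m_max - m_min) * np.exp(-(speed / v_scale) ** 2)

print(virtual_damping(np.array([0.0, 10.0, 30.0])))   # decreasing with force
print(virtual_mass(np.array([0.0, 0.2, 0.6])))        # decreasing with speed
```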

After a certain number of learning cycles, the speed output (green) matches the desired speed (red dashed line), and the learned force profile (blue curve) is shown in Figure 6. As indicated, its amplitude is much smaller than with fixed admittance control, so the robot is easier to handle. The improvement in intuitiveness and in the similarity between the force and speed profiles is visible after the acceleration period (20 to 20.25 s), which means the human operator needs less learning with this more intuitive dynamic character. The deceleration period is also much reduced, to about 0.2 s. Figure 7 shows the virtual damping (blue curve) and the virtual mass (red dashed line) varying with the speed and the applied force.

The simulations in Figures 8 and 9 show how the system behaves when no deceleration force is applied. The blue curve in Figure 9 demonstrates that the speed picks up after a short time and then follows the desired speed in both the high-value and low-value periods. Before the end point at 22.5 s, when the simulated operator releases the handle, it takes about 0.5 s for the system to stop. Because the system is moving at low speed, it does not produce a large position error, as shown by the green solid line matching the red solid line with reasonable agreement.

CONCLUSIONS

To emulate the human operator's learning capability, an iterative learning algorithm (ILA) is proposed for this intelligent robotic assist lift. Fixed admittance control and variable admittance control are designed for offline validation of a robotic lift system. After a certain number of learning cycles, a suitable manipulating force profile is calculated for the designated speed profile. The intuitiveness of the admittance control is investigated through the relationship between the force profiles and the desired speed trajectory offline, before the control strategy is implemented on the test system.

By utilizing the iterative learning algorithm, different admittance strategies are tested offline and the force-speed relationship can be refined, reducing the human learning practice required. In this way, the tuning of the admittance parameters is verified more explicitly, and different variable admittance algorithms can be compared before running real tests.

Simulation results have shown the effectiveness of the proposed approach. Pre-tuning and verification of the variable admittance can be done in a virtual environment, thereby reducing testing time and risk.

REFERENCES

[1.] Tsumugiwa, T., Yokogawa, R., and Hara, K., "Variable Impedance Control Based on Estimation of Human Arm Stiffness for Human-Robot Cooperative Calligraphic Task," Presented at IEEE International Conference on Robotics and Automation 2002, USA, May 2002.

[2.] Lecours, A., Mayer-St-Onge, B., and Gosselin, C., "Variable Admittance Control of a Four-Degree-of-Freedom Intelligent Assist Device," Presented at IEEE International Conference on Robotics and Automation 2012, USA, May 14-18, 2012.

[3.] Duchaine, V., St-Onge, M. B., Gao, D. L., and Gosselin, C., "Stable and Intuitive Control of an Intelligent Assist Device," IEEE Transactions on Haptics, Vol. 5, No. 2, 2012.

[4.] Duchaine, V. and Gosselin, C., "General Model of Human-Robot Cooperation Using a Novel Velocity Based Variable Impedance Control," Presented at EuroHaptics Conference 2007 and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, World Haptics 2007, doi:10.1109/WHC.2007.59.

[5.] Hirai, S., Inatsugi, T., and Iwata, K., "Learning of Admittance Matrix Elements for Manipulative Operations," Presented at Intelligent Robots and Systems 1996, 8-8 Nov. 1996, doi:10.4271/950513.

[6.] Buchli, J., Theodorou, E., Stulp, F., and Schaal, S., "Variable Impedance Control A Reinforcement Learning Approach, " Presented at Robotics Science and Systems 2010, Spain, June 27-30, 2010.

[7.] Okunev, V., Nierhoff, T., and Hirche, S., "Human-Preference-Based Control Design: Adaptive Robot Admittance Control for Physical Human-Robot Interaction," Presented at IEEE RO-MAN 2012, Nov 2012, doi:10.1109/ROMAN.2012.6343792.

[8.] Suzuki, T., Zhu, C., Sakai, T., and Okada, Y., "Power Assistance of an Omnidirectional Hybrid Walker and Wheelchair with Admittance Model and Iterative Learning Control, " Presented at IEEE 13th International Workshop on Advanced Motion Control (AMC) 2014, March 2014, doi: 10.1109/AMC.2014.6823323.

[9.] Tsumugiwa, T., Yokogawa, R., and Hara, K., "Variable Impedance Control with Regard to Working Process for Man-Machine Cooperation-Work System," Presented at IEEE International Conference on Intelligent Robots and Systems 2001, USA, Nov, 2001

[10.] Wu, H., Chen, J., Li, M., Durrett, R. et al., "Iterative Learning Control for a Fully Flexible Valve Actuation in a Test Cell," SAE Int. J. Passeng. Cars - Electron. Electr. Syst. 5(1):55-61, 2012, doi:10.4271/2012-01-0162.

[11.] Ahn, H. S., Moore, K. L., and Chen, Y. Q., "Iterative Learning Control: Robustness and Monotonic Convergence for Interval Systems," Springer, New York, ISBN 3-540-40173-3, 2003.

[12.] Bien, Z., and Xu, J. X., "Iterative Learning Control- Analysis, Design, Integration and Application, " Kluwer Academic Publisher, Boston, ISBN 978-0-7923-8213-3, 1998.

[13.] Arimoto, S., Naniwa, T., and Suzuki, H., "Robustness of P-type Learning Control with a Forgetting Factor for Robotic Motions," presented at Decision and Control 1990, Honolulu, HI, USA, Dec 5-7, 1990.

CONTACT INFORMATION

Hai Wu, Test System Engineering, Research and Development, General Motors, Warren Technical Center.

hai.wu@gm.com

ACKNOWLEDGMENTS

The authors would like to express their appreciation to Dalong Gao, Nathan Thompson, James O'Dell, and Jinglin Li for their support in implementation, testing, and discussion.

DEFINITIONS/ABBREVIATIONS

HRI - Human-Robot Interaction

ILA - Iterative Learning Algorithm

IAD - Intelligent Assist Device

PLC - Programmable Logic Controller

Hai Wu and Meng-Feng Li

General Motors LLC

doi:10.4271/2017-01-0288