Inverse-Dynamics Whole-Body Motion

Whole-body control of a humanoid robot is a challenging task due to the kinematic and dynamic complexity of the system. Humanoids are highly kinematically redundant robots with a non-trivial tree-like structure, and they are inherently unstable due to their upright posture. At every moment, the robot must not only execute some task, but must also keep its dynamic balance and avoid falling while performing it. As described in the previous section, several methods aiming at solving these problems have been proposed recently, and the research area is very active. The generic method proposed in this thesis is based on the inverse-dynamics model of the robot, considering contact constraints and a set of tasks specified with a given priority. The resolution of each task is posed as a minimization problem, subject to several constraints that ensure dynamic feasibility, and uses a hierarchical scheme based on hierarchized QPs. Since the motion is generated using several prioritized tasks, and the dynamic model of the robot is taken into account, the term Dynamic Stack of Tasks (SoT) is sometimes used to refer to the framework. With respect to other operational-space inverse-dynamics (OSID) approaches, the advantages of the proposed method are the capability to handle both equality and inequality constraints at any level of the hierarchy, the fast computation time that allows its execution in real time, and the direct feasibility of the generated motion, which does not need any further processing or validation before being executed on the robot.
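As a rough illustration of how such a prioritized resolution behaves, the following sketch implements the classical equality-only task hierarchy by recursive null-space projection; the thesis framework generalizes this to inequality constraints via cascaded QPs, and the function name and API here are illustrative, not taken from the actual software:

```python
import numpy as np

def solve_hierarchy(tasks, n):
    """Least-squares resolution of a strict task hierarchy.

    tasks: ordered list of (A, b) pairs, highest priority first,
           each task asking A @ x = b in the least-squares sense.
    Each level is solved inside the null space of all higher levels,
    so a lower-priority task can never disturb a higher-priority one.
    """
    x = np.zeros(n)
    N = np.eye(n)  # projector onto the null space of solved levels
    for A, b in tasks:
        AN = A @ N
        # correct x using only the remaining free directions
        x = x + np.linalg.pinv(AN) @ (b - A @ x)
        N = N @ (np.eye(n) - np.linalg.pinv(AN) @ AN)
    return x
```

When two tasks conflict on a component, the higher-priority value wins, which is exactly the "stack" semantics: each level only uses the freedom left over by the levels above it.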
This work is supported by the cooperation agreement ROB4FAM.
Fig. 1: Walking on Debris.
Fig. 2: Climbing Stairs.

was mostly used by position-controlled robots, which gives, in general, the fastest walk for electrically actuated robots, until a recent publication . Torque control needs model-based controllers tuned for each motor, achieved by modeling the actuators of the robot . However, position control cannot match the intrinsic compliance of torque-controlled robots. Torque control is more suitable for interactions with humans, and also for multi-contact problems where external interactions and several contact points are needed. The approach using the AM dynamics in control has shown great results for walking in torque, even on non-flat terrain [4, 9, 13].
Their application to humanoid robots, however, is not straightforward, as constraints other than collision avoidance must be taken care of.  proposes a whole-body motion planning method that deals with obstacle avoidance and dynamic balance constraints. It explores a set of pre-computed statically stable postures with an RRT-like algorithm and then filters the configuration-space path into a dynamically stable trajectory.  deals with global motion planning under task constraints. The method is also an extension of the RRT algorithm, where the sampled configurations are projected onto the sub-manifold of CS that solves a task. The limit of this work is that it only considers very specific constraints, basically restrictions on the motion of a robot end-effector relative to some world-fixed frame. This is not sufficient to solve problems such as static stability, which can be seen as positioning an abstract end-effector, the center of mass of the robot, within a given region. A similar drawback is that it cannot deal with multiple and prioritized constraints, whereas the corresponding local Jacobian methods exist in the literature. More recently, a method similar to ours has been presented and used on manipulator arms in .
The emergence of humanoid robotics opens new opportunities in assistive robotics for frail persons. Their integration in social environments would allow them to assist elderly or physically disabled people in daily tasks. Indeed, humanoid robots are, by design, best suited to evolve and interact in a human environment: their deployment won't require a complete restructuring of human environments, such as houses and their implements. Unfortunately, the current level of ground robotics technologies, such as artificial intelligence and visual perception, does not allow humanoid robots to interact physically and socially with humans autonomously and intuitively. Recent tools for programming and conveying human intentions are much closer to teleoperation tasks. Of course, humanoid robots, whatever sophistication they reach, must be endowed with capabilities of learning through acquiring knowledge using (i) web services, (ii) demonstration, or (iii) imitation. When the necessary skills are acquired, one needs to instruct the robot with the desired tasks to be achieved. This is done through instruction interfaces such as speech or gestures. Speech recognition has reached a considerable level of achievement, but transforming human verbal/language instructions into robotic actions is a challenging open issue. Gestures, when possible for the user, have limited semantics, and it is difficult to express most of our daily tasks only by body or hand gestures. There is a clear need for more intuitive and easy-to-instruct interfaces. Note that current agreed trends in robot teleoperation are based on shared autonomy and task-space control. Indeed, it is not necessary to specify the tasks at the trajectory level or even at the intrinsic-constraints level. For example, the user won't instruct the robot by specifying which actuators have to move and by how much; s/he also does not need to tell the humanoid to keep balance and not collide...
as these are handled by the embedded autonomy. It is worth noting that the instructional process is not unilateral. Indeed, recent studies show that during such teleoperation, the humanoid robot also affects the operator's behavior and feelings . But whatever interface is used, human-robot interface synergy and intuitiveness of use are best reached when a high level of embodiment can be achieved.
This work was partly funded by the Oseo Romeo 2 Project.
I. INTRODUCTION
This paper deals with the problem of generating whole-body motion with contacts in real time for humanoid robots. This generalized form of locomotion and manipulation would allow such robots to manipulate heavy objects, climb on uneven terrain, and use the environment to stay balanced while in multiple contacts (as in Fig. 1), with one single controller. To achieve this, the robot's dynamical model, forces, motor torques, self-collisions, and multi-contact dynamical balance have to be considered all together over a time horizon. This makes the problem very hard compared to classical instantaneous inverse kinematics. MuJoCo ,  realizes this by using model predictive control and a specific contact model. So far, it was mostly tested in simulation . This paper describes a first evaluation towards a full deployment of this approach on a real humanoid robot, namely HRP-2.
software architecture for the rectification, combination and adaptation of captured motion. Richard Kulpa [Kulpa, 2005] proceeded with these implementations and included new functionalities of adaptation to kinematic, kinetic and dynamic constraints. This led to a motion simulation method called MKM, for "Manageable Kinematic Motion". One of the major interests of this approach is the use of a morphology-independent representation to carry out almost all operations. Thus, independently of the subject on which the motion capture is done or the avatar to be animated, this method abstracts away the adaptation problems that imply high computation time. MKM presented a possible way of defining models of movements capable of adapting to the environment while best respecting the style contained in the original data. This approach systematically links the biomechanical analysis and its consideration in the kinematic model. To help conceive such models, [Multon, 2006] developed a tool for the definition of models, the "S4D Maker", in which a certain number of necessary constraints can be parameterized. One of the applications of these works is the animation of virtual avatars for interactive applications. For the generation of pre-computed animations, it is possible to use other approaches that take more computation time, such as dynamic simulators. Some commercial software products have been developed for this type of operation while ensuring coherence with captured trajectories (for example, Endorphin by NaturalMotion or the products of the Havok company). For this type of application, MKM is better suited to animating secondary characters or crowds for which intuitive and quick results are desired.
Accordingly, this type of approach is adapted to simulated environments in which one wants, for example, to test a spatial plan for a neighborhood or a building, or to simulate the behavior of users inside sub-stations to improve the quality of service. This requires a high number of characters to be animated simultaneously, together with interactive visualization. The adaptation method of MKM is able to resolve this issue; however, it is necessary to develop a level of reciprocal coordination between the behaviors and the adaptation of captured motion. Even though sometimes dynamics is taken
This technique consists in recording or measuring the three-dimensional motions of a physical object or a human performer (Figure 4.1) in order to apply them to a synthetic model. The most commonly used technologies for capturing full-body motions are electromagnetic sensors and optical markers. These techniques have proven to be very useful for generating plausible human-like motions, as the recorded motion already contains the complexity of the dynamic motion. However, as examples are recorded for specific types of motions and for specific morphologies, an editing process is, more often than not, required. The editing process has to preserve the realism of the motion as much as possible while allowing the data to be reused. Motions can be edited to adapt recorded movements to different morphologies, as in Gleicher  or more recently in Kulpa et al. . Motion capture editing has led to the development of whole new approaches, the so-called data-driven techniques. The goal of these methods is to generate new motions from existing recorded examples. A great variety of techniques have been applied for this. Interpolation methods to combine motions with different characteristics have been presented, for example, in Witkin and Popovic , Unuma et al.  or Rose et al. . Motions have also been adapted to respect spatio-temporal constraints, adapting to a given path or environment, as for instance in Witkin and Kass , Rose et al. , Popovic and Witkin  or Gleicher [2001b]. Even when using editing methods, the capacity to generate new motions depends largely on the number and discriminability of the recorded motions.
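As a minimal instance of the interpolation-based, data-driven methods mentioned above, the following sketch blends two time-aligned joint-angle clips with a single weight; real systems additionally time-warp the clips and interpolate rotations on quaternions rather than raw angles (the function name is illustrative):

```python
import numpy as np

def blend_motions(clip_a, clip_b, w):
    """Blend two time-aligned joint-angle clips (frames x joints).

    w in [0, 1]: w=0 returns clip_a, w=1 returns clip_b.
    Angles are assumed unwrapped so linear blending is valid.
    """
    clip_a, clip_b = np.asarray(clip_a, float), np.asarray(clip_b, float)
    assert clip_a.shape == clip_b.shape, "clips must be time-aligned"
    return (1.0 - w) * clip_a + w * clip_b
```

Varying w over time yields a smooth transition from one recorded motion to the other, which is the basic mechanism behind the interpolation techniques cited above.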
body motions on a humanoid robot accounting for various tasks and constraints is inverse kinematics. Based on the task-function approach, this class of methods enables the coordination of robot movements to execute several tasks in parallel and to account for sensor feedback in real time, thanks to its low computation cost. To some extent, it also enables us to deal with some of the robot constraints (e.g., joint limits or visibility) and to manage the quasi-static balance of the robot. In order to fully use the whole range of possible motions, this paper proposes extending the task-function approach to handle the full dynamics of the robot multibody, along with any constraint written as an equality or inequality on the state and control variables. The definition of multiple objectives is made possible by ordering them inside a strict hierarchy. Several models of contact with the environment can be implemented in the framework. We propose a reduced formulation of multiple rigid planar contacts that keeps a low computation cost. The efficiency of this approach is illustrated by presenting several multicontact dynamic motions in simulation and on the real HRP-2 robot.
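In the usual notation, the contact-constrained multibody dynamics referred to here can be written as follows (a standard formulation, not quoted verbatim from this text):

```latex
M(q)\,\ddot q + b(q,\dot q) = S^{\top}\tau + J_c(q)^{\top} f, \qquad
J_c(q)\,\ddot q + \dot J_c(q,\dot q)\,\dot q = 0
```

where $M$ is the joint-space inertia matrix, $b$ collects Coriolis, centrifugal and gravity terms, $S$ selects the actuated joints (the free-floating base is unactuated), $\tau$ are the joint torques, $J_c$ is the stacked contact Jacobian and $f$ the contact forces; the second equation expresses the rigid, non-sliding contact constraint at the acceleration level.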
Fig. 1. The robot has to reach a desired position with regard to an object
We claim that a visual servoing control scheme is well suited for vision-based walking motion generation because it compensates for model errors. Visual servoing has proved to be successful for grasping tasks with standing ,  or walking humanoids , . In , visual servoing is used to control a humanoid avatar along landmarks. The upper body is approximated by the kinematic chain that links an on-board camera to the CoM. The lower body is controlled by adding two translational degrees of freedom to the CoM. The translational velocity of the CoM is sent to a kinematic locomotion module which controls the leg motion. In , a whole-body visual servoing scheme based on a hierarchical stack of tasks is introduced. However, the footsteps are predefined. The leg motion is thus set to be the task of highest priority, and visual servoing in this context is projected into the null space of the pre-defined walking path. On the contrary, in this work the controller driving the walk is directly guided by vision.
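The classical image-based visual servoing law underlying such schemes computes the camera velocity from the feature error through the pseudoinverse of the interaction matrix; a minimal sketch, with an illustrative function name:

```python
import numpy as np

def ibvs_velocity(L, e, lam=0.5):
    """Classical IBVS control law: v = -lambda * pinv(L) @ e.

    L:   interaction matrix, stacking one 2x6 block per image feature.
    e:   current feature error (measured minus desired features).
    lam: positive gain; the error then decays exponentially in the
         ideal case where the model of L is exact.
    """
    return -lam * np.linalg.pinv(L) @ e
```

The closed loop only needs the error measured in the image, which is why such schemes compensate for calibration and model errors better than open-loop trajectory replay.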
Contact forces pre-integration for the whole-body estimation of legged robots
Médéric Fourmy†, Thomas Flayols†, Nicolas Mansard† and Joan Solà†∗
Abstract— State estimation, in particular estimation of the base position, orientation and velocity, plays a big role in the efficiency of legged robot stabilization. The estimation of the base state is particularly important because of its strong correlation with the underactuated dynamics, i.e. the evolution of the center of mass and of the angular momentum. Yet this estimation is typically done in two phases: first estimating the base state, then reconstructing the center of mass from the robot model. The underactuated dynamics is thus not properly observed, and any bias in the model would not be corrected from the sensors. While it has already been observed that force measurements make such a bias observable, these are often only used for a binary estimation of the contact state. In this paper, we propose to estimate the base and the underactuation state by using all measurements simultaneously. To this end, we propose several contributions to implement a complete state estimator using factor graphs. Contact forces altering the underactuated dynamics are pre-integrated using a novel adaptation of the IMU pre-integration method, which constitutes the principal contribution. IMU pre-integration is also used to measure the positional motion of the base. Encoder measurements then participate in the estimation in two ways: by providing leg odometry displacements, contributing to the observability of the IMU biases; and by relating the positional and centroidal states, thus connecting the whole graph and producing a tightly-coupled whole-body estimator. The validity of the approach is demonstrated on real data captured by the Solo12 quadruped robot.
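To give an idea of what pre-integration computes, the following simplified sketch accumulates acceleration samples between two keyframes into relative velocity and position deltas that are independent of the absolute state at the first keyframe; the actual method also integrates rotation and carries Jacobians with respect to the sensor biases (all names are illustrative):

```python
import numpy as np

def preintegrate(accels, dt):
    """Accumulate acceleration samples (IMU- or force-derived)
    between two keyframes into relative deltas (delta_v, delta_p),
    expressed in the frame of the first keyframe.

    Rotation integration and bias Jacobians, essential in the full
    method, are omitted here for brevity.
    """
    dv = np.zeros(3)  # relative velocity delta
    dp = np.zeros(3)  # relative position delta
    for a in accels:
        dp = dp + dv * dt + 0.5 * a * dt**2
        dv = dv + a * dt
    return dv, dp
```

Because the deltas depend only on the measurements, they can be computed once and reused by the factor-graph optimizer every time the keyframe states are re-linearized, which is the key computational benefit of pre-integration.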
Studies on robot motion planning: The majority of studies on human-robot object handovers were carried out in the past with traditional robotic arm manipulators attached either to a stationary base or to a wheeled mobile robotic system [23, 53, 68, 76, 111]; later studies often relate to robot motion planning and navigation in an ample space and lack proactive behaviour using the 'biped' locomotion of a humanoid robot, which is capable of walking as humans do. To the best of our knowledge, bi-directional object handover with biped walking has not been considered in previous works on human-robot dyad object handover. [30, 109] have utilized humanoid robots capable of biped walking in their studies, but without considering locomotion. Therefore, for the robot to be sufficiently proactive, we believe it is crucial to consider the possibility of the robot taking a step to hand over or exchange an object with the human co-worker, in scenarios where short-distance travel is required. We do not focus on the problem of motion planning or navigation in a large cluttered environment [67, 74, 105]. Instead, we concentrated our efforts on solving and optimizing the object handover problem, which requires immediate shared efforts between a human-robot dyad in a small space where a few steps are necessary and sufficient for a comfortable and convenient object handover. We propose simple but effective methods to take advantage of a biped humanoid robot and deal with the problem of bi-directional object handover using robot whole-body control. Though there are many state-of-the-art methods available to generate walking patterns for a biped humanoid robot, the study done in Chapter 4 primarily adopted the walking pattern generator (WPG) designed and tested in our group [25, 59], along with its native stabilizer [26, 60].
In this paper, we propose a transversal answer to these three challenges and report the first complete demonstration of whole-body MPC on the torque-controlled humanoid robot TALOS . We performed a reactive manipulation task involving 28 degrees of freedom, contacts, balance and collision avoidance at a rate of 100 Hz. To this end, we relied on several key components: first, an efficient OCP solver, tailored for robot dynamics, enables us to evaluate the MPC at 100 Hz. Then, a memory of motion is used in parallel to predict a good warm start and help the solver handle non-convexities. The robot servo control implements torque control running at 2.3 kHz and is able to robustly apply the control references produced by the MPC. The paper is structured as follows. Section II discusses the basic formulation of optimal control for whole-body motion with contact. Section III explains the construction of the memory of motion and how it improves the convergence of DDP. The implementation details are given in Section IV. Section V presents the experimental results on the humanoid robot TALOS.
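A standard ingredient of such a receding-horizon scheme is warm-starting each solve from the previous solution shifted by one node; a minimal sketch of that shift (the memory of motion described in the paper replaces this simple heuristic near non-convexities; the function name is illustrative):

```python
import numpy as np

def shift_warm_start(xs, us):
    """Shift an OCP solution by one node for the next MPC iteration.

    xs: state trajectory, shape (N + 1, nx).
    us: control trajectory, shape (N, nu).
    Drops the first node and repeats the last one, so the new guess
    stays close to the optimum of the slightly shifted problem.
    """
    xs_ws = np.vstack([xs[1:], xs[-1]])
    us_ws = np.vstack([us[1:], us[-1]])
    return xs_ws, us_ws
```

Starting the solver near the previous optimum is what makes one or two DDP iterations per control cycle sufficient, and hence what makes the 100 Hz rate reachable.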
stepping stones with height variations. We could also generate trajectories with two stepping stones 6 cm higher than the other ones. These terrain irregularities produce a trunk modulation in roll and pitch, as can be observed in Fig. 9d; the terrain height is used for the trajectory generation, not for the optimization (as explained in Fig. 5). The execution performance on stepping stones without changes in terrain elevation is shown in Fig. 9c. Crossing the terrain in Fig. 9a-d is only possible with the coupled planner, since we managed to increase the foothold region from (20 cm × 23.5 cm) to (34 cm × 28 cm). For all our optimizations, we define a stability margin of r = 0.1 m, which is a good trade-off between modeling error and allowed trunk attitude adjustment on the HyQ robot. Note that the origin of this region is defined by the stance frame (see Fig. 7), and that increasing this region enables broader foothold options. In fact, the coupled planning considers the robot's dynamics, as it jointly optimizes the CoM motions and foothold locations, while the decoupled planning can only consider the robot's kinematics for the foothold planning. Additionally, we show in simulation that our planner can climb stairs (see Fig. 10). The robot computes a footstep sequence for climbing up and down using the terrain costmap and the same optimization weights as before. This supports that previously tuned weights generalize well to new terrains.
In many areas illustrated in Fig. 1, workers often operate in very unergonomic postures. In such cases, they use multi-contact postures to relax stress, be in better equilibrium or brace their body posture (e.g. contact with knees, shoulders, back). Humanoids shall be endowed with similar multi-contact behaviors, which is a key technology in Comanoid. It allows transforming a humanoid robot into a reconfigurable multi-limb system that can adapt to narrow spaces and increase its equilibrium robustness and even its manipulation capabilities. 1) Multi-contact planning: A recent review of multi-contact technology  reveals that this problem is consensually approached through a multi-level computation. The first level plans contacts around a sort of free-motion pre-planned "guide" that exploits some properties under various hypotheses (there are many variants), but has as an output a set of contact sequences and associated transitions. In a second phase, the latter are the input to a simplified model (e.g. CoM) to generate a consistent centroidal dynamics trajectory under balance criteria (presented later). In the last phase, this generated trajectory is the input to the whole-body controller, which also deals with other task objectives and constraints. The problem in such a multi-level computation is to make each phase likely feasible for the upcoming one, and to make sure that if any phase turns out not to be feasible for the given input, the latter is quickly re-computed from the current state. Although impressive results have been obtained, this approach does not work well in practice. In the context of manufacturing, it is certainly not a good idea to plan contacts as if we did not know at all where and how they should be made. Assembly operations are quite repetitive in many aspects; only a few variations are to be dealt with locally.
Despite numerous efforts to address this large-scale nonlinear problem with roughly ten thousand variables [Koenemann 2015, Dai 2014b], no solution yet exists to generate physically consistent controls in real time using humanoid robot embedded computers. On the other hand, template models projecting the overall robot dynamics onto its CoM are used in research works [Wensing 2014, Orin 2012, Kajita 2003a, Englsberger 2015], already showing promising performance. Motion generation with template models can sometimes be solved analytically, and in such cases provides fast solutions that are particularly well suited for platforms with limited computational power. However, when increased CPU power is available, MPC-based solutions with the whole model are much more complete and reliable. Furthermore, as they can be easily modified, they provide more adaptive functionalities. In this chapter, with a bottom-up approach, we try to increase the functional level of a control architecture that already works on an existing humanoid robot, HRP-2 [Herdt 2010a]. The point of this chapter is to present extensions of the linear MPC scheme presented in [Herdt 2010a], which allows automatic foot placement in real time. For instance, the problem depicted in Fig. 1.1 shows the humanoid robot HRP-2 driven by a desired velocity provided by the user. The former scheme [Herdt 2010a] was specifically formulated as a cascade of two quadratic programs (QPs). Foot-step orientations are the solution of the first problem, while the solution of the second QP provides the CoM trajectory and foot-step positions. This separation is efficient because the constraints are linear. If an obstacle has to be taken into account, then the constraints have the shape depicted in Fig. 1.2, which is not convex anymore. To maintain convexity, the solution would be to pre-process the obstacle and the feasibility area of the foot-steps.
However, a linearization of the obstacle boundary is equivalent to adding a linear constraint, as depicted in Fig. 1.2. The algorithm proposed in this chapter performs a similar operation, and therefore no pre-processing is necessary. This is one of the major contributions of this chapter in comparison to [Herdt 2010a]. The proposed nonlinear extension takes into account the exact expression of constraints such as, for instance, locally avoiding a convex obstacle. Other formulations for walking motion generation have already been proposed. [Deits 2014] uses mixed-integer convex optimization for planning foot-steps with Atlas. [Ibanez 2014] uses mixed-integer convex optimization for MPC control and foot-step timing. In this work, we introduce three nonlinear inequalities to handle balance, foot-step orientation and obstacle avoidance. This new real-time walking pattern generator has been successfully tested on the humanoid robot HRP-2, as depicted in Fig. 1.1. A key ingredient for achieving real-time performance was the following observation:
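To illustrate the kind of local linearization discussed above, the following sketch turns the non-convex keep-out constraint ||p - c|| >= r of a circular obstacle into a linear half-plane constraint tangent to the obstacle at the current iterate; this is a generic construction under assumed notation, not the exact formulation of the chapter:

```python
import numpy as np

def linearize_obstacle(p0, c, r):
    """Linearize ||p - c|| >= r around the current iterate p0.

    Returns (n, rhs) for the linear constraint n @ p >= rhs: p must
    stay on the far side of the plane tangent to the obstacle, with
    n the unit vector pointing from the obstacle center c toward p0.
    """
    d = p0 - c
    n = d / np.linalg.norm(d)  # undefined if p0 == c (inside center)
    return n, n @ c + r
```

Since the half-plane is contained in the feasible set of the original constraint, any footstep satisfying the linearized constraint also clears the obstacle locally, which is why no pre-processing of the feasibility area is needed.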
‡ LGP-ENIT, 47 avenue d'Azereix, 65016, Tarbes, France, Université de Toulouse (INPT)

Keywords: humanoid robot; whole-body movement; dynamics; stack of tasks; torque control
This work deals with the generation of human-like whole-body movements on anthropomorphic systems. We propose a general framework to generate robot movements from the definition of an ordered stack of tasks, and a global resolution scheme that enables us to consider different kinds of constraints. We compare qualitatively the robot movements generated with this software to similar recorded human movements. We start with a direct global comparison of body movements. Then we analyze the magnitude of the reconstructed human torques and compare them with the simulated robot torques during the motion.
This chapter presents a novel way to use randomized motion-planning algorithms coupled with local inverse kinematics techniques.
4.1.1 Whole-Body task motion planning
The problem of inverse kinematics for a humanoid robot, or any articulated structure, is to compute a joint motion that achieves a goal task. As the robots we deal with are redundant, it is natural to take advantage of this redundancy by specifying multiple tasks, potentially with different priorities. This problem has been widely studied in the robotics planning and control literature, and many Jacobian-based solutions have been proposed, among which [2, 33, 50, 59]. Obstacle avoidance can be taken into account with similar local methods. To do so, one has to include the obstacles as additional task constraints to be satisfied. A recent contribution on this subject is [30].
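A minimal instance of such Jacobian-based prioritized resolution is the classical two-priority resolved-rate scheme, where the secondary task acts only in the null space of the primary one; this is a textbook construction, not a reimplementation of any specific cited method:

```python
import numpy as np

def two_task_ik(J1, dx1, J2, dx2):
    """Two-priority velocity IK.

    J1, dx1: Jacobian and desired velocity of the primary task.
    J2, dx2: same for the secondary task, executed in the null
             space of J1 so it can never disturb the primary task.
    Returns the joint velocity dq.
    """
    J1p = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1p @ J1  # null-space projector of task 1
    dq = J1p @ dx1
    dq = dq + np.linalg.pinv(J2 @ N1) @ (dx2 - J2 @ dq)
    return dq
```

When the two tasks conflict, the primary one is satisfied exactly (when feasible) and the secondary one is realized only as far as the remaining redundancy allows.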
Abstract— This paper describes the implementation of a canonical motion generation pipeline guided by vision for a TALOS humanoid robot. The proposed system uses a multicontact planner, a Differential Dynamic Programming (DDP) algorithm, and a stabilizer. The multicontact planner provides a set of contacts and dynamically consistent trajectories for the Center of Mass (CoM) and the Center of Pressure (CoP). It provides a structure to initialize a DDP algorithm which, in turn, provides a dynamically consistent trajectory for all the joints, as it integrates the full dynamics of the robot together with rigid contact models and the visual task. Tested in Gazebo, the resulting trajectory had to be stabilized with a state-of-the-art algorithm to be successful. In addition to testing the motion generated from high-level specifications down to the stabilized motion in simulation, we express visual features at the level of the Whole-Body Generator, which is a DDP-formulated solver. It handles non-linearities such as the ones introduced by the projection of visual features expressed and minimized in the image plane of the camera.
appears just before 8 s. The robot shifts its CoM to the left foot and puts the right foot on the first stair. Then the robot has three contacts with its environment and starts to utilize the hand contact. It pushes with the right leg and pulls with the hand to climb the stair. This particular motion excites the flexibility located under the ankle of HRP-2. Between 10 s and 12 s, the flexibility perturbs the system, but the forces on the hand compensate for it. The hand contact stabilizes the robot so that it can move the hand towards the next grasping position at 12 s, where the robot is back in a stable state again. During the hand movement, the robot's CoM is affected by the flexibility, but not enough to fall down, thanks to the stabilizing influence of the grasping contact before and after the double support phase. The motion is repeated once. The only difference is that the hand does not release contact with the handrail at the end of the motion. This helps the robot to stabilize and return to an equilibrium state. Current consumption