Keywords: digital human model, collaborative robotics, dynamic simulation, human-robot physical interaction, ergonomics, robot design, human motion simulation.
Work-related musculoskeletal disorders (WMSDs) are the leading cause of occupational disease in developed countries (Schneider, 2010; Parent-Thirion, 2012; US Department of Labor, 2016). They represent a major health issue and a significant cost for companies. WMSDs develop when biomechanical demands at work repeatedly exceed workers’ physical capacity (Punnett, 2004). Despite growing automation, numerous strenuous tasks cannot be fully automated, either at all or at a reasonable cost. With the increase of product variants built on the same assembly line, combined with small order sizes, human flexibility and cognitive skills remain needed. In such situations, collaborative robotics has the potential to reduce workers’ exposure to WMSD risk factors while keeping them in control of the task execution (Krüger, 2009; Schmidtler, 2015). Collaborative robotics takes multiple forms, from shared workspaces, where a human and a robot work side by side without physical separation, to direct physical interaction, where a human and a robot cooperatively work on a common task (co-manipulation, Fig. 1). Specifically, co-manipulation robots can provide a variety of benefits, such as strength enhancement, weight compensation or movement guidance (Colgate, 2003).
A digital human tool for guiding the ergonomic design of collaborative robots
Maurice P.1, Padois V.2, Measson Y.3, Bidaud P.2,4
1 Université de Lorraine, CNRS, Inria, LORIA, F-54000 Nancy, France
2 Sorbonne Universités, UPMC Univ Paris 06, CNRS UMR 7222, Institut des Systèmes Intelligents et de Robotique,
Abstract: Digital human models can be used for the biomechanical risk-factor assessment of a workstation and for work-activity design when no physical equipment exists that could be tested with actual human postures and forces. Yet using digital human model software packages is usually complex and time-consuming. A challenging aim therefore consists in developing an easy-to-use digital human model capable of computing dynamic, realistic movements and internal characteristics in quasi-real time, based on a simple description of future work tasks, in order to achieve reliable ergonomic assessments of various work-task scenarios. We developed such a dynamic digital human model, which is automatically controlled in force and acceleration, inspired by human motor control, and based on robotics and physics simulation. In our simulation framework, the digital human model motion is governed by real-world Newtonian physical and mechanical laws. We also simulated and assessed experimental insert-fitting activities according to the occupational repetitive actions (OCRA) ergonomic index. The simulation led to satisfactory results: experimental and simulated ergonomic evaluations were consistent, and both the joint torques and the digital human model movements were realistic and coherent with human-like behaviours and performance.
et al., 2006), Jack, Ramsis, Sammie (Delleman et al., 2004). The manikin is animated through motion capture data, direct or inverse kinematics, or pre-defined postures and behaviors. Various ergonomic assessment methods are included in these software products. The first class of methods estimates the level of risk depending on the exposure to the main MSD factors. The most widely known are RULA (Rapid Upper Limb Assessment), REBA (Rapid Entire Body Assessment), OWAS (Ovako Working Posture Analysis System), the OCRA index (Occupational Repetitive Action), and the OSHA checklist (Li and Buckle, 1999; David, 2005). The second class of methods consists of equations or tables that give physiological limits not to be exceeded in order to minimize the MSD risk during manual handling operations. The best known are the NIOSH equation (Waters et al., 1993) and the Snook and Ciriello tables (Snook and Ciriello, 1991), which determine a maximum acceptable load weight depending on the task features. Though a wide variety of methods is available, they are not well suited to the design of collaborative robots. Such robots must be optimized considering the whole activity and the whole human body; but the tasks these robots may address are varied and often complex, whereas the existing assessment methods are specific to a type of activity, to a body part, or to both. Evaluating the entire activity will therefore very likely require the use of several methods, whose results are mostly not homogeneous and thus cannot be compared. Moreover, perhaps the main drawback of these observational methods is that they are static: dynamic phenomena are not taken into account. Yet it has been established that fast motions increase the risk of MSD because of the efforts they generate in biological tissues.
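As a concrete illustration, the revised NIOSH lifting equation cited above can be sketched as below. The multiplier formulas are the published ones (Waters et al., 1993), but the frequency (FM) and coupling (CM) multipliers are normally read from tables, so this sketch takes them as inputs rather than reimplementing the tables; the numeric example values are hypothetical.

```python
# Sketch of the revised NIOSH lifting equation (Waters et al., 1993):
#   RWL = LC * HM * VM * DM * AM * FM * CM   (metric form, LC = 23 kg)
# FM and CM come from published tables, so they are passed in directly.

def recommended_weight_limit(h_cm, v_cm, d_cm, a_deg, fm=1.0, cm=1.0):
    """Recommended weight limit (kg) for a lifting task.

    h_cm  : horizontal distance of the load from the ankles (cm)
    v_cm  : vertical height of the hands at lift origin (cm)
    d_cm  : vertical travel distance of the lift (cm)
    a_deg : asymmetry (trunk twisting) angle (degrees)
    fm, cm: frequency and coupling multipliers from the NIOSH tables
    """
    lc = 23.0                                     # load constant (kg)
    hm = min(1.0, 25.0 / max(h_cm, 25.0))         # horizontal multiplier
    vm = 1.0 - 0.003 * abs(v_cm - 75.0)           # vertical multiplier
    dm = min(1.0, 0.82 + 4.5 / max(d_cm, 25.0))   # distance multiplier
    am = 1.0 - 0.0032 * a_deg                     # asymmetry multiplier
    return lc * hm * vm * dm * am * fm * cm

def lifting_index(load_kg, rwl_kg):
    """LI > 1 indicates an elevated risk of low-back injury."""
    return load_kg / rwl_kg

# Hypothetical task: hands 30 cm out, at 75 cm height, 50 cm lift,
# no twisting, FM = 0.94 read from the frequency table.
rwl = recommended_weight_limit(30.0, 75.0, 50.0, 0.0, fm=0.94)
li = lifting_index(10.0, rwl)
```

For these made-up task parameters the recommended weight limit comes out around 16.4 kg, so a 10 kg load yields a lifting index below 1.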
In collaborative robotics, evaluating the dynamic stages of the activity is even more important because, though designed to be, the robot is never perfectly backdrivable. Some phenomena can be hard to compensate, even with a dedicated control law; in this case manipulating the robot requires extra effort from the worker. For instance, collaborative robots providing strength amplification are usually powerful and thus heavy: they are highly inertial, so leaving dynamic stages out of the assessment can lead to an underestimation of the risk.
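The underestimation argument can be made concrete with a rough sketch: for an imperfectly backdriven robot, the force the worker feels is roughly an apparent-inertia term plus residual friction. The apparent mass and friction values below are invented for illustration, not measured on any specific robot.

```python
# Rough sketch of the extra force a worker must exert to move an
# imperfectly backdriven co-manipulation robot.  Apparent mass and
# residual viscous friction values are hypothetical.

def handling_force(m_apparent, b_viscous, accel, vel):
    """Force (N) felt by the worker: inertial term + residual friction."""
    return m_apparent * accel + b_viscous * vel

# Quasi-static phase: negligible acceleration, slow motion.
f_static = handling_force(m_apparent=30.0, b_viscous=20.0, accel=0.0, vel=0.1)

# Dynamic phase: accelerating the same robot at 2 m/s^2 while moving
# at 0.5 m/s -- the force jumps by more than an order of magnitude.
f_dynamic = handling_force(m_apparent=30.0, b_viscous=20.0, accel=2.0, vel=0.5)
```

A purely static assessment would only see the first number, which is exactly the underestimation discussed in the text.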
Damsgaard M., Rasmussen J., Christensen S.T., Surma E., and de Zee M., 2006. Analysis of musculoskeletal systems in the AnyBody modeling system. Simulation Modelling Practice and Theory, 14(8), 1100–1111.
David G.C., 2005. Ergonomic methods for assessing exposure to risk factors for work-related musculoskeletal disorders. Occupational Medicine, 55(3), 190–199.
Delleman N.J., Haslegrave C.M., and Chaffin D.B., 2004. Digital human models for ergonomic design and engineering. In: Working Postures and Movements – Tools for Evaluation and Engineering. CRC Press.
Delp S.L., Anderson F.C., Arnold A.S., Loan P., Habib A., John C.T., Guendelman E., and Thelen D.G., 2007. OpenSim: open-source software to create and analyze dynamic simulations of movement. IEEE Transactions on Biomedical Engineering, 54(11), 1940–1950.
Goldberg D.E., 1989. Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley.
Kajita S., Kanehiro F., Kaneko K., Fujiwara K., Harada K., Yokoi K., and Hirukawa H., 2003. Biped walking pattern generation by using preview control of zero-moment point. Proceedings of the IEEE International Conference on Robotics and Automation, 2, 1620–1626.
Arts et Métiers ParisTech, LCPI, 151 bd de l’Hôpital, 75013 Paris, France; email@example.com
Current industrial PLM tools generally rely on Concurrent Engineering (CE), which involves conducting product design and manufacturing stages in parallel and integrating technical data for sharing among different experts. Various experts use domain-specific software to produce various data. This package of data is usually called the Digital Mock-Up (DMU), or the Building Information Model (BIM) in architectural engineering [SNA12]. To share DMU data, much work has been done to improve interoperability among engineering software tools and among models in the domains of mechanical design [FR07] and eco-design [RRR13]. However, the computer-human interaction (CHI) currently used in CE project reviews is not optimized to enhance interoperability among the various experts of different domains. Here CHI concerns both complex DMU visualization and multi-user interaction.
The second criticism that can be addressed to both kinds of DHM software concerns the animation of the DHM. The DHM motion is generated through forward or inverse kinematics, pre-defined postures and behaviors (e.g. walk towards, reach towards), or from motion capture data. Apart from motion capture, none of these animation techniques can produce a truly realistic human motion. Kinematic techniques do not take into account the inertial properties of the human body or the external load, so the simulated motion is rarely human-like [6]. Pre-defined behaviors result in more realistic motions since they rely on a database of pre-recorded motions, but only a limited number of behaviors can be simulated, and they become unrealistic when external conditions are modified (e.g. adding a load in a reaching motion). In general, the obtained motion is not even dynamically consistent; for instance, the DHM balance is never considered, though it affects the relevance of the evaluation [18]. As for motion capture, the human subject and the avatar must experience a similar environment to obtain a realistic simulation. In particular, the interaction forces with the environment are crucial, so the subject must either be provided with a physical mock-up (Fig. 2) or be equipped with complex instrumentation (digital mock-up through virtual reality and force-feedback devices). Motion capture is therefore highly time- and resource-consuming. In order to circumvent the above-mentioned issues, De Magistris et al. developed an optimization-based DHM controller to automatically simulate dynamically consistent motions [19]. The dynamic controller computes DHM joint torques from a combination of anticipatory feedforward and feedback control. It has many advantages over kinematic techniques, such as ensuring DHM balance and generating hand trajectories that are in accordance with some psychophysical principles of voluntary movements.
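One classical instance of the psychophysical principles mentioned above is the minimum-jerk profile of point-to-point hand trajectories (Flash and Hogan): a straight path with zero velocity at both ends and a bell-shaped speed profile. A minimal sketch, not the controller of [19]:

```python
import numpy as np

# Minimum-jerk point-to-point trajectory:
#   x(t) = x0 + (xf - x0) * (10 s^3 - 15 s^4 + 6 s^5),  s = t / T
# Zero velocity at both ends, bell-shaped speed profile peaking at T/2,
# a common model of voluntary human reaching movements.

def min_jerk(x0, xf, T, t):
    s = np.clip(t / T, 0.0, 1.0)
    return x0 + (xf - x0) * (10 * s**3 - 15 * s**4 + 6 * s**5)

def min_jerk_speed(x0, xf, T, t):
    s = np.clip(t / T, 0.0, 1.0)
    return (xf - x0) / T * (30 * s**2 - 60 * s**3 + 30 * s**4)
```

For a 0.4 m reach over 1 s, the hand starts and ends at rest and peaks at 1.875 × D/T = 0.75 m/s at mid-movement.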
However, though this controller has been successfully used for a virtual ergonomic assessment, the Jacobian-transpose method used in the feedback control does not guarantee the optimality of the solution, because joint torque limits cannot be explicitly included in the optimization.
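To illustrate why explicit torque limits matter, the sketch below contrasts (a) Jacobian-transpose feedback with a posteriori torque clipping and (b) a bounded least-squares solve where the limits enter the optimization itself. The 2-DoF arm model (inertia matrix, Jacobian, bias torques, limits) is entirely made up, and scipy's bounded linear least squares stands in for a full constrained optimization; this is not the DHM controller of [19].

```python
import numpy as np
from scipy.optimize import lsq_linear

# Made-up 2-DoF arm at one instant: joint-space inertia M, task Jacobian J,
# bias torques h.  Task acceleration model: a = J M^-1 (tau - h).
M = np.array([[2.0, 0.3], [0.3, 1.0]])
J = np.array([[1.0, 0.5], [0.0, 1.0]])
h = np.array([0.5, 0.2])
a_des = np.array([3.0, -1.0])     # desired task-space acceleration
tau_max = np.array([1.0, 1.0])    # torque limits (actuation or comfort)

A = J @ np.linalg.inv(M)
b = a_des + A @ h                 # task residual is r(tau) = A tau - b

# (a) Jacobian-transpose feedback (hypothetical unit gain) with clipping:
# limits are respected, but the clipped torque is no longer task-optimal.
tau_jt = np.clip(J.T @ a_des, -tau_max, tau_max)
res_jt = np.linalg.norm(A @ tau_jt - b)

# (b) Bounded least squares: the limits are part of the optimization, so
# the returned torque is the best achievable inside the bounds.
sol = lsq_linear(A, b, bounds=(-tau_max, tau_max))
res_opt = np.linalg.norm(A @ sol.x - b)
```

Since the clipped torque is one feasible point of the bounded problem, the optimized residual can never be worse, and is strictly better whenever clipping lands off the constrained optimum.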
The purpose of these collaborative robots is to decrease the risk of MSD by alleviating the worker’s physical load and improving his or her posture. One of the main issues in the design process of a collaborative robot is taking into account the human presence and capabilities, and performing an ergonomic assessment of such a robot is essential to check its usefulness to the worker. Many assessment methods exist (Li and Buckle, 1999), based on the observation of an actual worker, but they need a prototype of the robot, which is a significant limitation in terms of cost and time. An alternative is to perform the assessment in a digital world, using a virtual manikin to simulate the worker. Digital human models are already available to evaluate the design of workstations, such as JACK, RAMSIS, SAFEWORK or SAMMIE (Blanchonette, 2010; Porter et al., 2004). But they do not allow a fully automatic and dynamic simulation of realistic movements. Moreover, the commercial software frameworks in which these manikins are integrated were not designed with collaborative robotics in mind.
When automation is introduced into a workplace, the common narrative is that it will replace human performance of dangerous, repetitive, or error-prone tasks. However, it is important to be cautious in the deployment of technology and to consider the larger context of work. Failing to do so means it is possible that preferred work tasks will be replaced, human interaction will be reduced, and work paces will increase unsustainably. While the human-robot interaction (HRI) research community has been working for decades on leveraging the uniquely independent strengths of humans and robots to build effective teams, the manufacturing industry has been slower to adopt close integration, “[treating] automation and manual labor as separate issues”. This disparate view has led to several problems which both disadvantage businesses economically and contribute to negative workplaces for workers. First, because the two laborers (robot and human) are viewed as separate entities, when problems occur companies have no means for collaborative problem solving, often causing work to be stopped entirely. Work stoppage is extremely expensive for companies, with tangible cost estimates ranging between $1M and $7M per hour [32, 33, 34].
Figure 1: Some humanoid robots from recent years, from left to right: Boston Dynamics’ Atlas, Kawada’s HRP4, Honda’s ASIMO, KAIST’s HUBO, PAL Robotics’ REEM-C, DLR’s TORO, Aldebaran’s Romeo.
search labs, industry has long adapted robots for manufacturing. Many of these specialized, fixed-base robots are big, powerful machines designed to be efficient and fast. As a consequence, however, they need to be isolated because of the inherent danger they represent: oftentimes, these robots require their own workspace or even safety cages, which prevents them from working alongside humans. In recent years, however, a significant emerging branch of industrial robots has brought along a paradigm shift: collaborative robots designed to safely work side by side with humans. Although slow in comparison to classic industrial robots, they are still efficient and reliable for certain tasks. Furthermore, they are designed to be easily reconfigurable (as opposed to being specialized) for a variety of tasks. The technology has now become mature enough and is starting to be commercialized by several companies, with Fig. 2 showing some examples. Although some companies adopt the single-arm design of classical industrial robots, others have chosen a dual-arm design, in some cases even including a functional head. The design choice of having two arms and a head seems to have stemmed from copying the upper-body humanoid design, or perhaps to help their applicability for working together with humans. Whatever the case may be, these robots only need functional legs to be formally called humanoids. However, adding these legs has many implications on how the robot must be controlled and programmed, returning us to the main topic of this thesis: how do we reconnect collaborative and humanoid robotics?
Although it lacks the cognitive abilities of a human expert, the software agent has some advantages that can be beneficial for a collaborative DR task. The system can access online dictionaries of synonyms, hyponyms, hypernyms and “see also” links, allowing it to find terms related to the user’s verbalization. For the reformulation, the system can make use of the lexical resources on the query terms 18. In the context of CISMeF, Soualmia 19 offers tools to correct, refine and enrich queries. On top of that, the assistant agent can store previous DR sessions and take advantage of them (by linking information needs, queries and documents) to find terms related to the current search 20. It also has the ability to launch queries “in the background” (i.e. without notifying the user) before suggesting any query modification or launch. This makes it possible, before suggesting a modification to the user, to check whether it brings interesting results.
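The kind of lexical expansion described above can be sketched with a toy dictionary. The mini-lexicon and its entries are invented for illustration; a real assistant would query online lexical resources of the kind used in CISMeF.

```python
# Toy sketch of lexical query expansion for a search-assistant agent.
# The mini-lexicon below is invented; a real system would query online
# resources for synonyms, hyponyms, hypernyms and "see also" links.

LEXICON = {
    "myocardial infarction": {
        "synonyms": ["heart attack"],
        "hypernyms": ["cardiovascular disease"],
        "see_also": ["angina"],
    },
}

def expand_query(terms, lexicon, relations=("synonyms", "hypernyms")):
    """Return the original terms plus related terms for chosen relations."""
    expanded = list(terms)
    for term in terms:
        entry = lexicon.get(term.lower(), {})
        for rel in relations:
            for related in entry.get(rel, []):
                if related not in expanded:
                    expanded.append(related)
    return expanded
```

Queries expanded this way can first be launched “in the background” to check whether the extra terms actually bring interesting results before anything is suggested to the user.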
study for the variation of some relevant parameters (such as pitch, voice quality, and articulation) under many emotional states. Moreover, Cahn explains the emotionally driven changes in the voice signal’s acoustic features under physiological effects, in order to understand how the vocal acoustic features accompanying internal states differ. Roy and Pentland present a spoken affect analysis system that can recognize speaker approval versus speaker disapproval from child-directed speech. Similarly, Slaney and McRoberts propose a system that can recognize praise, prohibition, and attentional bids from infant-directed speech. Breazeal and Aryananda investigate a more direct scope for affective intent recognition in robotics: they extract acoustic features (i.e., pitch and energy) and discuss how these can change the total recognition score of the affective intent in robot-directed speech. A framework for human emotion recognition from voice via gender differentiation has also been described. Generally, the results of offline emotion recognition in terms of the above-mentioned vocal characteristics are reasonable.
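Two of the acoustic features recurring in these systems, pitch and short-time energy, can be extracted with elementary signal processing. The sketch below runs on a synthetic voiced frame; a real affect recognizer would window the speech and track these features over many frames.

```python
import numpy as np

# Minimal extraction of two features used in vocal affect recognition:
# short-time energy and an autocorrelation-based pitch (F0) estimate.

def short_time_energy(frame):
    return float(np.mean(frame ** 2))

def estimate_f0(frame, fs, f0_min=80.0, f0_max=400.0):
    """Pitch from the strongest autocorrelation peak in [f0_min, f0_max]."""
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_lo = int(fs / f0_max)          # shortest plausible period
    lag_hi = int(fs / f0_min)          # longest plausible period
    best_lag = lag_lo + int(np.argmax(ac[lag_lo:lag_hi]))
    return fs / best_lag

# Synthetic "voiced frame": a 200 Hz sine sampled at 8 kHz for 50 ms.
fs = 8000
t = np.arange(int(0.05 * fs)) / fs
frame = np.sin(2 * np.pi * 200.0 * t)
```

On this synthetic frame the estimator recovers the 200 Hz fundamental; affect recognizers then feed statistics of such per-frame features (mean, range, contour) to a classifier.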
Keywords: 3D reconstruction, fruit tree, 3D point cloud, modeling plant structure

INTRODUCTION
Rapid and automatic reconstruction of plant structure is an interesting and challenging topic in both computer graphics and agronomic research. In most applications of 3D modeling for fruit trees, an entire and detailed mesh model is expected, to enable further applications (e.g. calculating light interception of the canopy, or demonstrating the difference between varieties and outside appearance). Since popular methods for modeling plant structure (e.g. L-systems, functional-structural models and interactive design methods) meet difficulties in reconstructing a 3D model of a fruit tree with an accuracy satisfying these kinds of applications, digitized data from real objects have recently been used extensively for creating 3D models, and more and more methods reproduce virtual 3D plant models from real measured data. Electromagnetic digitizers were used earlier to measure the spatial position and orientation of stems and leaves, giving a quantitative assessment of the tree geometry (Sinoquet and Rivet, 1997; Sonohat et al., 2006). However, digitizing a tree structure with electromagnetic digitizers is a tedious and time-consuming job, and often not precise enough to accurately capture the detailed organ geometry. Recently, non-contact laser scanners have been used for various plant measurement and reconstruction tasks (Kaminuma et al., 2004; Dornbusch et al., 2007). Laser scanners enable us to rapidly quantify the surface of an object as a dense set of points. But if an organ or a part of the plant is invisible to the scanner, its information will be missing from the captured point cloud. The missing information can be estimated by using existing or statistical knowledge about the morphological structure of plants (Xu et al., 2007), but this loses accuracy for the measured plant, so these methods are more suitable for digital entertainment than for agronomic research.
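As a small illustration of working with such laser-scanned data, the dominant axis of a roughly cylindrical branch segment can be recovered from its points by principal component analysis. The synthetic noisy point cloud below stands in for a real scan; this is an elementary building block, not the reconstruction pipeline proposed in this paper.

```python
import numpy as np

# Estimate the axis of a roughly cylindrical branch segment from its
# scanned points: the first principal component of the centred points.

def branch_axis(points):
    """Return (centroid, unit axis direction) of a point set via SVD."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[0]             # first right singular vector

# Synthetic branch: points scattered around a line of direction d_true.
rng = np.random.default_rng(0)
d_true = np.array([0.2, 0.1, 1.0])
d_true = d_true / np.linalg.norm(d_true)
s = rng.uniform(0.0, 0.5, size=(200, 1))      # positions along the axis (m)
noise = 0.005 * rng.normal(size=(200, 3))     # ~5 mm radial scatter
points = s * d_true + noise

center, d_est = branch_axis(points)
```

Because the along-axis spread dominates the radial scatter, the first singular vector aligns closely with the true branch direction (up to sign).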
In this paper, we aim to provide a method for automatically and accurately reconstructing the structure of a fruit tree from a colored point cloud measured with a laser scanner, and we demonstrate some experimental results.
Enhancing the performance of technical postures or movements at work, in sports or in rehabilitation is of great concern for humans, and aims both at improving operational results and at reducing biomechanical demands on the body. Advances in human biomechanics and modeling tools make it possible to evaluate human performance in more and more detail using digital human models. However, the reliability of these force-related biomechanical measurements is questionable, because most mappings of motion capture data onto a digital model do not ensure the dynamic consistency of the resulting motion. Moreover, once an existing movement is evaluated, finding the right modifications to improve the performance is still addressed with extensive trial-and-error processes.
The use of robotic systems can improve productivity since, like automatic machines, they can execute tasks with high repeatability and reliability. However, they can also be reprogrammed, and therefore, like human operators, can be assigned to different tasks depending on the production needs. Nonetheless, their productivity can still be limited. As mentioned before, one of the main advantages of having humans perform a given operation is their ability to adapt to mutable contexts, in which, for example, objects might not be precisely positioned on a conveyor. To provide the same level of robustness and flexibility, robotized manufacturing systems must be equipped not only with proprioceptive sensors, such as the motor encoders used in low-level joint control, but also with exteroceptive ones, allowing them to gather information about both the objects to interact with and the surrounding environment. These additional sensors often correspond to vision systems composed of one or more cameras, providing information about the relative position between a target object and the actuated robot. Visual signals can then be used to evaluate control inputs that are robust with respect to several uncertainties, such as modeling and calibration errors, ultimately allowing an arm to precisely position itself with respect to the observed object.
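Such vision-based control loops are classically formulated as image-based visual servoing: the camera velocity is chosen as v = −λ L⁺ e, where e stacks the point-feature errors and L the corresponding interaction matrices. A minimal sketch with three point features follows; the feature coordinates and depths are made up, and this is the textbook law, not any specific product's controller.

```python
import numpy as np

# Image-based visual servoing sketch: v = -lambda * pinv(L) @ e.

def interaction_matrix(x, y, Z):
    """2x6 interaction matrix of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, lam=0.5):
    """Camera twist (vx, vy, vz, wx, wy, wz) driving e = s - s* to zero."""
    e = (features - desired).ravel()
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    return -lam * np.linalg.pinv(L) @ e

# Made-up current/desired normalized image coordinates and point depths.
features = np.array([[0.10, 0.20], [-0.20, 0.10], [0.15, -0.10]])
desired  = np.array([[0.00, 0.10], [-0.10, 0.00], [0.10, -0.15]])
depths   = np.array([1.0, 1.2, 0.8])

v = ibvs_velocity(features, desired, depths, lam=0.5)
```

With three non-degenerate points L is 6×6 and generically invertible, so the closed loop satisfies ė = −λe: the feature error decays exponentially despite modeling and calibration uncertainty, which is the robustness property mentioned above.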
The first goal is achieved by fabricating all parts of this toolkit with a commercial material-jetting 3D printer, a Stratasys Objet500 Connex3, using the commercial materials VeroWhitePlus (a stiff polymer) and Agilus30 (a compliant elastomer), as well as digital blends between the two materials to achieve a range of materials with different stiffnesses. The removal of support material, dissolved using a chemical solution (2% NaOH, 1% Na2SiO3), is simplified by eliminating enclosed volumes in the designs. The second and third goals are achieved by evaluating the actuation pressure, speed, and displacement against designs found in the literature using similar mechanisms. The last is demonstrated through the assembly of two robots using the same components.
contribute to the delivery of diagnostic and prognostic information.
Therefore, achieving Collaborative Digital Anatomic Pathology is a global integrated effort that consists not only in acquiring all the computer equipment and imaging devices needed for the management of Anatomic Pathology reports and their corresponding images within the hospital, but also in developing an architecture that allows collaborative work between different healthcare facilities. Collaborative processes require sharing or exchanging Anatomic Pathology information (data and images) that is unambiguously understandable to human beings. Digitalizing and standardizing this information so that it also becomes unambiguously understandable by machines allows the development of advanced services supporting the interactions between healthcare providers involved in various activities related to patient care coordination, as well as epidemiology or clinical research. Collaborative Digital Anatomic Pathology can only be fully achieved using medical informatics standards. The goal of the international Integrating the Healthcare Enterprise (IHE) initiative is precisely to specify how data standards should be implemented to meet specific health care needs, and to make systems integration more efficient and less expensive. The international IHE initiative, developed in North America, Europe and Asia, builds, in many healthcare domains and along annual cycles, integration profiles, each of which is an implementable specification of an interoperable solution fulfilling a set of use cases. Each annual cycle is concluded by the organization of international interoperability testing platforms (called “connectathons”) that give IHE its unique efficiency. Participation of European researchers in IHE Anatomic Pathology is fostered and partly coordinated by the COST action IC0604.
The results already achieved by IHE Anatomic Pathology, launched in 2005, consist of a technical framework including the integration profile “Anatomic Pathology Workflow”, which successfully addresses basic image acquisition and reporting processes within hospitals [3-5].
Competitions based on the concept of games can organize individuals to work toward a common objective with the incentive of a monetary or non-monetary reward. Individuals with a diversity of skills can participate, with participants picking up and contributing to the tasks they are best at. Collaboration allows individuals to work together to achieve larger goals. However, meaningful development through competitions requires a careful balance of competition and collaboration. One of the important tenets of this thesis is that competition and collaboration are not mutually exclusive. While big competitions ‘challenge’ the public with a difficult objective, a series of smaller challenges can be used to engage multiple participants if the challenge structure includes collaboration. Collaboration among the participants allows larger tasks to be accomplished by multiple people, and the performance of each participant to be improved by learning from others. There are a number of ways to bring collaboration into a competitive model while retaining the benefits of competition. MMORPGs, or Massively Multiplayer Online Role-Playing Games, as introduced before, always have the common feature of social interaction. The games are designed such that some degree of teamwork is required in order to achieve game objectives. Strategies are decided upon through communication via typed conversation, and thanks to the large online forums available, players often find like-minded players to collaborate with. While some individuals may be outcasts in the real world, they can become whomever they want in these virtual worlds, and can find other players with similar interests and personalities. In one survey, 39.4% of males and 53.3% of females felt that their MMORPG companions were comparable to or even better than their real-world friends.
Researchers and industry practitioners have proposed fully autonomous solutions to the coordination of these teams (Alsever 2011; Bertsimas and Weismantel 2005; Gombolay, Wilcox, and Shah 2013). These solutions work well in domains where people are able to fully encode the domain knowledge for the autonomous system. However, people often make use of implicit knowledge or previous experience, which can be time-consuming to translate for an AI agent. In such cases, the human remains a critical component of the system, providing high-level guidance and feedback on the generated plans and schedules (Durfee, Boerkoel Jr., and Sleight 2013; Hamasaki et al. 2004; Zhang et al. 2012; Clare et al. 2012). Significant research effort has been aimed at supporting the human’s role through careful design and validation of supervisory control interfaces (Adams 2009; Barnes et al. 2011; Chen, Barnes, and Qu 2010; Cummings, Brzezinski, and Lee 2007; Goodrich et al. 2009; Jones et al. 2002; Hooten, T. Hayes, and Adams 2011). Collaborative human-robot decision-making, in which a human shares decision-making authority with an autonomous robot, is less well studied.