J. Comput. Sci. & Technol., Sept. 2004, Vol.19, No.5, pp.575-584

Automatic Modeling of Virtual Humans and Body Clothing

Nadia Magnenat-Thalmann, Hyewon Seo, and Frederic Cordier

MIRALab, University of Geneva, 25 rue du General Dufour, CH-1211 Geneva, Switzerland
E-mail: {thalmann, seo, cordier}@miralab.unige.ch

Revised June 9, 2004.

Abstract: Highly realistic virtual human models are rapidly becoming commonplace in computer graphics. These models, which often have complex shapes and require a labor-intensive creation process, pose the problem of automatic modeling. The problem and solutions to automatic modeling of animatable virtual humans are studied. Methods for capturing the shape of real people, and parameterization techniques for modeling the static shape (the variety of human body shapes) and the dynamic shape (how the body shape changes as it moves) of virtual humans are classified, summarized and compared. Finally, methods for clothed virtual humans are reviewed.

Keywords: human body modeling, animatable model, automatic method

1 Introduction

Human body modeling and animation have long been among the most difficult tasks encountered by animators. In particular, realistic human body modeling requires an accurate geometric surface throughout the simulation.

At this time, a variety of human body modeling methodologies are available, which can be classified into three major categories: creative, reconstructive, and interpolated. Anatomically based modelers, such as Scheepers et al. [1], Shen and Thalmann [2], and Wilhelms and Van Gelder [3], fall into the first approach. They observe that the models should mimic actual components of the body, and their models consist of multiple layers for simulating individual muscles, bones and tissues. While allowing for interactive design, they nevertheless require considerable user intervention and thus suffer from a relatively slow production time and a lack of efficient control facilities.

Lately, much work has been devoted to the reconstructive approach, which builds the 3D geometry of humans automatically by capturing existing shape. Some of these methods rely on stereo [4], structured light [5], or 3D scanners [6]. Some systems use 2D images either from video sequences [7] or from photos [8,9]. While they are effective and visually convincing, one limiting factor of these techniques lies in that they hardly give any control to the user; i.e., it is very difficult to automatically modify the resulting models to different shapes as the user intends.

The third major category, interpolated modeling, uses sets of example models with an interpolation scheme to construct new models. Because interpolation provides a way to leverage existing models to generate new ones with a high level of control in interactive time, it has gained growing popularity for various graphical objects including human models.

This paper reviews automatic modeling techniques for animatable virtual humans, primarily for real-time applications. We focus our study on body models which are readily animatable. Model-based reconstructive methods and interpolated methods are discussed in detail, because anatomical models are more suited to interactive design.

This paper is organized as follows. First we look at methods for shape capture of real people in Section 2. Then we review methods for modeling the variety of human body shapes in Section 3. After studying methods for dynamic shape change as the body moves in Section 4, we continue in Section 5 to the methods for dealing with dressed humans. We conclude the paper in Section 6.

2 Shape Capture

Since the advent of 3D image capture technology, there has been a great deal of interest in the application of that technology to the measurement of the human body. Several systems are now available on the market that are optimized either for extracting accurate measurements from parts of the body, or for realistic visualization for use in games, virtual environments and, lately, e-commerce applications [10].

For many years, the goal has been to develop techniques to convert the scanned data into complete, readily animatable models. Apart from solving the classical problems such as hole filling and noise reduction, the internal skeleton hierarchy should be appropriately estimated in order to make the models move. Accordingly, several approaches have been under active development to endow the scan data with semantic structure. Dekker et al. [11] have used a series of meaningful anatomical assumptions in order to optimize, clean and segment data from a Hamamatsu whole body range scanner, in order to generate quad mesh representations of human bodies and build applications for the clothing industry (see Fig.1). Ju and others [12] introduce methods to automatically segment the scan model to conform it to an animatable model.

Fig.1. After a scanned data (left) is segmented (middle) for an estimation of the skeleton structure, the posture can be modified (right) [11].

Allen et al. [13] proposed an optimization technique for estimating the poses and kinematics of human body scans. A template model is used, which is equipped with the skeleton hierarchy and a skin mesh with markers placed on it. By finding the DoF values that minimize the difference between corresponding marker positions, the pose and kinematics of the scan can be found. Once the global proportion of the physique is captured, a displacement map is added by ray casting. Holes are filled by interpolating the rays.

Seo et al. [14] adopt a similar optimization technique based on manually selected feature points.

There are two main phases in the algorithm: the skeleton fitting and the fine refinement. The skeleton fitting phase finds the linear approximation (posture and proportion) of the scanned model by conforming the template model to the scanned data through skeleton-driven deformation. Based on the feature points, the most likely joint parameters are found that minimize the distance between corresponding feature locations. The fine refinement phase then iteratively improves the fitting accuracy by minimizing the shape difference between the template and the scan model. The found shape difference is saved into the displacement map of the target scan model. A conformation result on a female scan is shown in Fig.2.

Fig.2. Conformation of a template model (left) onto a scanned data (middle) [14]. The template after the conformation is shown (right).
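To make the skeleton-fitting step above concrete, the sketch below poses a toy articulated template so that feature points attached to its bones land as close as possible to feature points picked on a scan. This is an illustrative assumption-laden example (a 2D two-bone chain, made-up feature points, scipy's general-purpose optimizer), not the implementation of the surveyed methods.

```python
# Hypothetical sketch of skeleton fitting: minimize the distance between
# corresponding feature points over the joint parameters of a template.
import numpy as np
from scipy.optimize import minimize

# Feature points expressed in the local frame of each bone (bone index, offset).
template_features = [(0, np.array([0.5, 0.1])),    # on the upper arm
                     (1, np.array([0.4, -0.05]))]  # on the forearm
bone_lengths = [1.0, 0.8]

def forward(params):
    """World positions of the template feature points for given parameters."""
    root, angles = params[:2], params[2:]
    positions = []
    for bone, local in template_features:
        origin, cum_angle = root.copy(), 0.0
        for b in range(bone + 1):
            cum_angle += angles[b]
            if b < bone:  # advance the origin along every bone before this one
                origin = origin + bone_lengths[b] * np.array(
                    [np.cos(cum_angle), np.sin(cum_angle)])
        R = np.array([[np.cos(cum_angle), -np.sin(cum_angle)],
                      [np.sin(cum_angle),  np.cos(cum_angle)]])
        positions.append(origin + R @ local)
    return np.array(positions)

# Feature points selected (manually, as in the surveyed method) on the scan.
scan_features = np.array([[0.9, 0.7], [1.4, 1.3]])

def objective(params):
    return np.sum((forward(params) - scan_features) ** 2)

res = minimize(objective, x0=np.zeros(4))   # root (x, y) + two joint angles
print("fitted joint parameters:", res.x)
```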

It is only recently that the development of technologies especially for human body modeling has become a popular area. To recover the degrees of freedom associated with the shape and motion of a moving human body, most of the existing approaches introduce simplifications by using a model-based approach. Kakadiaris et al., in [15], use 2D images from three mutually orthogonal views to fit a deformable model that approximates the different body sizes of subjects. The model can then be segmented into different body parts as the subject moves. Plaenkers et al. [16] also use video cameras with stereo pairs for the acquisition of body-part models. A person's movements such as walking or raising the arms are recorded in several video sequences, and the program automatically extracts range information and tracks the outline of the body. The problem to be solved is twofold: first, robustly extract silhouette information from the images; second, fit the reference models to the extracted information. The data is used to instantiate the models, and the models, augmented by knowledge about the human body and its possible range of motions, are in turn used to constrain the feature extraction. They focus however more on the tracking of movement, and the extraction of a subject's model is considered as the initial part of a tracking process.

Recently, more sophisticated models were introduced which limit their aims to the construction of a realistic human model. Recent work of Hilton et al. [9] involves the extraction of body silhouettes from a number of 2D views (front, side and back) and the subsequent deformation of a 3D template to fit the silhouettes. The 2D views are then mapped as texture onto the deformed model to enhance realism. Similarly, Lee et al. [8] proposed a feature-based approach where silhouette information from three orthogonal images is used to deform a generic model to produce a personalized animatable model.

Fig.3. Image-based shape capture by Hilton et al. [9]. (a) Input images. (b) Reconstructed model.

Fig.4. Image-based shape capture by Lee et al. [8]. (a) Input images. (b) Reconstructed model.

Based on adding details or features to an existing generic model, these approaches concern mainly individualized shape and visual realism using high quality textures. While they are effective and visually convincing in the cloning aspect, they hardly give any control to the user; i.e., it is very difficult to modify these meshes to a different shape as the user intends. These approaches also have the drawback that they must deal with special cases using ad hoc techniques.

3 Static Shape

A common problem in human modeling is how to systematically model the variety of human body shapes. In most cases, modifying an existing model, variously named the reference, generic or template model, tends to be popular due to the expense of recovering 3D geometry.

Automatically modifying shapes is desirable for at least two reasons. Firstly, it is often the case that we want to modify shapes to meet new needs or requirements. In a garment application, for example, we might want to create a 3D body model in a way that satisfies a number of measurement constraints with minimum user intervention, and in an interactive runtime setting. Secondly, automatic modification makes it easy to avoid redundancy in a crowd. Without having to create or reconstruct each individual, one can obtain crowds by blending existing models, for instance.

In this section, we review several techniques that automate this task.

3.1 Anthropometric Models

Anthropometry, the biological science of human body measurement, systematically studies human variability in faces and bodies. Systematic collection of anthropometric measurements has made possible a variety of statistical investigations of groups of subjects, which provides useful information for the design of products such as clothing, footwear, safety equipment, furniture, vehicles and any other objects with which people interact. Since anthropometry was first introduced in computer graphics [17], a number of researchers have investigated the application of anthropometric data in the automatic creation of virtual humans. The Spreadsheet Anthropometry Scaling System (SASS) presented by Azuola et al. enables the user to create properly scaled human models that can be manipulated in their animation system "Jack" [18] (see Fig.5). The system creates a standardized human model based on given statistically processed population data or, alternatively, a given person's dimensions can be directly used in the creation of a virtual human model. In the former case, it automatically generates dimensions of each segment of a human figure based upon population data supplied as input. Their initial virtual human was composed of thirty-one segments, of which twenty-four had a geometrical representation. For each segment or body structure with a geometrical representation, three measurements were considered, namely the segment length, width, and depth or thickness. Measurements were compiled from the NASA Man-Systems Integration Manual [19] and the Anthropometry Source Book [5]. The desired dimension was primarily implemented by rigid scaling of each component, although they later showed an extension of the system equipped with partially deformable models. Later, Seo et al. [20] reported a similar approach, but with face models incorporated.

Fig.5. Anthropometric human models by Azuola et al. [18].

More recently, DeCarlo et al. [21] have shown that the problem of generating face geometries can be reduced to that of generating sets of anthropometric measurements by adopting a variational modeling technique. The underlying idea is to generate a shape that shares, as much as possible, the important properties of a prototype face and yet still respects a given set of anthropometric measurements. They cast the problem as a constrained optimization: anthropometric measurements are treated as constraints, and the remainder of the face is determined by optimizing an objective function on the surface. A variety of faces are then automatically generated for a particular population. This is an interesting approach that, unfortunately, is slow in creation time (approximately one minute per face) owing to the nature of variational modeling. Also, the shape remains a passive constituent, as the prototype shape is conformed to satisfy the measurement constraints while "fairness", i.e., smoothness of the shape, is maximized. Therefore, every desirable facial feature that is observable in real faces, such as a hooked nose or a double chin, has to be explicitly specified as a constraint in order to obtain a realistic shape in the resulting model.

3.2 I n t e r p o l a t i o n T e c h n i q u e s

In the literature, a considerable amount of work has been undertaken with respect to editing existing models and blending between more than two examples to generate new ones. Although they are domain independent, interpolation techniques or example-based approaches have been used intensively for parameterized motion blending to leverage existing motion data. Rose et al.'s paper "Verbs and Adverbs" [22] has shown results in this area using radial basis functions (RBF). Each example motion is manually annotated with a set of adverb values, such as happy, angry, tired, etc. After normalization, each annotated motion is used to form adverb and verb spaces using scattered data interpolation. Once the continuous range spaces are formulated, at any point in the adverb space a corresponding motion is derived through RBF interpolation of the example motions.
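The sketch below illustrates the scattered-data RBF interpolation idea just described: examples annotated with coordinates in an "adverb" space are blended so that the interpolant reproduces each example exactly and smoothly fills the space in between. The adverb values, example vectors, and Gaussian kernel width are illustrative placeholders, not data from the cited work.

```python
# Minimal sketch of RBF scattered-data interpolation over an "adverb" space.
import numpy as np

adverbs = np.array([[0.0], [0.5], [1.0]])          # annotation of each example
examples = np.array([[0.0, 1.0, 0.2],              # example pose/motion vectors
                     [0.3, 0.8, 0.5],
                     [0.9, 0.1, 0.9]])

def kernel(a, b, sigma=0.5):
    return np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))

# Solve K w = examples so the interpolant reproduces every example exactly.
K = np.array([[kernel(a, b) for b in adverbs] for a in adverbs])
weights = np.linalg.solve(K, examples)             # one weight row per example

def interpolate(adverb):
    phi = np.array([kernel(adverb, a) for a in adverbs])
    return phi @ weights

print(interpolate(np.array([0.25])))               # blended in-between result
```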

Sloan et al. [23] have shown the application of a similar technique to the generation of face models. Using a number of example face models obtained from image-based capture, they have shown interactive blending results with control parameters such as gender and age.

More recently, novel interpolation methods have been introduced that start with range scan data and use data interpolation to generate a controllable diversity of appearance in human face and body models. Arguably, the captured geometry of real people provides the best available resource to model and estimate correlations between measurements and shape. In work with similar goals but applied to face models, other researchers [24] have introduced a 'morphable face model' for manipulating an existing model according to changes in certain facial attributes. New faces are modeled by forming linear combinations of prototypes that are collected from 200 scanned face models. Manual assignment of attributes is used to define shape and texture vectors that, when added to or subtracted from a face, will manipulate a specific attribute (see Fig.6).

"q . . . s h a p p e

Fig.6. Manipulation of a face model by facial attributes [24].

The automatic modeling approach introduced by Seo and Magnenat-Thalmann [14] is aimed at realistic human models whose sizes are controllable by a number of anthropometric parameters. Instead of a statistically analyzed form of anthropometric data, they make direct use of captured sizes and shapes of real people from range scanners to determine the shape in relation to the given measurements. The body geometry is represented as a vector of fixed size (i.e., the topology is known a priori) by conforming the template model onto each scanned model. A compact vector representation is adopted by using principal component analysis (PCA). A new desired physique is obtained by deformation of the template model, which is considered to have two distinct entities: rigid and elastic deformations. The rigid deformation is represented by the corresponding joint parameters, which determine the linear approximation of the physique. The elastic deformation is essentially vertex displacements which, when added to the rigid deformation, depict the detailed shape of the body. Using the dataset prepared from the scanners, interpolators are formulated for both deformations. Given a new arbitrary set of measurements at runtime, the joint parameters as well as the displacements to be applied to the template model are evaluated from the interpolators. And since an individual can simply be modeled by providing a number of parameters to the system, modeling a population is reduced to the problem of automatically generating a parameter set. The resulting models, as shown in Fig.7, exhibit visual fidelity and demonstrate the performance and robustness of the implementation.
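The compact PCA body representation mentioned above can be sketched as follows: once every scan is conformed to the template, each body becomes a fixed-size vertex vector, and PCA gives a low-dimensional body space in which new physiques can be encoded and decoded. The random "scans" below are placeholders standing in for a real conformed database.

```python
# Illustrative sketch of a PCA body space over conformed, fixed-topology scans.
import numpy as np

rng = np.random.default_rng(0)
n_bodies, n_coords = 50, 3 * 1000        # 1000 template vertices, x/y/z each
bodies = rng.normal(size=(n_bodies, n_coords))

mean = bodies.mean(axis=0)
# Principal components of the centered data via SVD.
_, s, vt = np.linalg.svd(bodies - mean, full_matrices=False)
components = vt[:10]                      # keep the 10 strongest modes

def encode(body):
    return components @ (body - mean)     # compact PCA coefficients

def decode(coeffs):
    return mean + coeffs @ components     # back to vertex displacements

coeffs = encode(bodies[0])
print("reconstruction error:", np.linalg.norm(decode(coeffs) - bodies[0]))
```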

Recently, Seo et al. [25] have shown that the modification of an individual model can be driven by statistics that are compiled from the example models (see Fig.8).

Fig.8. Variations of a body model according to fat percentage [25]. (a) Original model. (b) Modification of the physique (fat percentage 38%). (c) Modification of the physique (fat percentage 22%).

Regression models are built upon the female scan database, using shape parameters such as fat percentage as estimators, and each component of the body vector as response variables. In order to avoid erroneous estimation arising from a relatively small and skewed dataset, sample calibration is carried out first. In cases where tall-slim and short-overweight bodies are overrepresented in the database, for instance, weights for each sample are determined so that the linear function mapping the height to the fat percentage has a slope of zero. All resulting models remain readily animatable through recalculation of the initial skin attachment.
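A hedged sketch of the regression step described above follows: one component of the body vector is regressed on fat percentage by weighted least squares, with per-sample calibration weights compensating for a skewed database. The measurements and weights are placeholders, not the authors' dataset; how the weights are derived follows the calibration idea in the text and is not reproduced here.

```python
# Weighted least-squares regression of one body-vector component on fat percentage.
import numpy as np

fat_pct = np.array([18.0, 22.0, 27.0, 33.0, 38.0])        # estimator
component = np.array([0.4, 0.1, -0.2, -0.6, -0.9])        # one body-vector entry
weights = np.array([0.8, 1.0, 1.2, 1.0, 0.7])             # calibration weights

# Solve (X^T W X) b = X^T W y for slope and intercept.
X = np.column_stack([fat_pct, np.ones_like(fat_pct)])
W = np.diag(weights)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ component)

print("slope, intercept:", beta)
print("predicted component at 30% fat:", np.array([30.0, 1.0]) @ beta)
```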

Fig.9. Variations of a body model by modifying the height and the weight [26].

Fig.7. Various body models generated by controlling sizing parameters [14].

Allen et al. [26] have shown similar results, but with several measurements, such as height and weight, controlled simultaneously, by learning a linear mapping from the database models. Once the correspondence is established for all example models, a mapping function is found by solving for a transformation that maps the body features, such as height and weight, onto the orthogonal body space that has been formed by PCA (see Fig.9).
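The mapping idea attributed to Allen et al. above can be sketched as a least-squares fit from measured features to PCA coefficients; given a new person's height and weight, the mapping predicts coordinates in the body space. All numbers below are synthetic placeholders, and the affine formulation is an assumption for illustration.

```python
# Least-squares mapping from [height, weight, 1] to per-body PCA coefficients.
import numpy as np

rng = np.random.default_rng(1)
n = 40
features = np.column_stack([rng.uniform(150, 195, n),     # height (cm)
                            rng.uniform(45, 110, n),      # weight (kg)
                            np.ones(n)])                  # affine term
pca_coeffs = rng.normal(size=(n, 10))                     # per-body PCA coefficients

# Solve for M such that features @ M approximates pca_coeffs.
M, *_ = np.linalg.lstsq(features, pca_coeffs, rcond=None)

new_person = np.array([172.0, 68.0, 1.0])
print("predicted body-space coordinates:", new_person @ M)
```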

4 Dynamic Shape

Modeling how the body changes shape as it moves has been a long-standing problem in character animation. In this section, we review methods for systematically deforming the skin shape during the animation.

4.1 Skeleton Driven Deformation (SDD)

The skeleton-driven deformation, a classical method for basic skin deformation, is perhaps the most widely used technique in 3D character animation. In the research literature, an early version was presented by Magnenat-Thalmann et al. [27], who introduced the concept of Joint-dependent Local Deformation (JLD) operators to smoothly deform the skin surface. This technique has been given various names such as Sub-Space Deformation (SSD), linear blend skinning, or smooth skinning. The method works first by assigning a set of joints with weights to each vertex in the character. The location of a vertex is then calculated by a weighted combination of the transformations of the influencing joints, as shown in (1):

v = \sum_{i} w_i \, T_i \, T_{i,\mathrm{dress}}^{-1} \, p_{\mathrm{dress}}    (1)

The skeletal deformation makes use of an initial character pose, namely the dress pose, in which the transformation matrix T_{i,dress} of the i-th influencing joint and the position p_dress of the vertex are defined; w_i is the weight of the i-th joint and T_i its current transformation. While this method provides fast results and is compact in memory, its drawback is the undesirable deformation artifacts that occur in case of important variation of joint angles among the influencing joints.
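A minimal numpy sketch of (1) is given below: each skinned vertex is a weighted sum of its dress-pose position transformed by the current joint matrices composed with the inverse dress-pose matrices. The two-joint setup, weights and transforms are toy values for illustration only.

```python
# Linear blend skinning of a single vertex, following equation (1).
import numpy as np

def translation(t):
    M = np.eye(4); M[:3, 3] = t; return M

def rot_z(a):
    M = np.eye(4)
    M[0, 0], M[0, 1] = np.cos(a), -np.sin(a)
    M[1, 0], M[1, 1] = np.sin(a),  np.cos(a)
    return M

# Dress-pose and current joint transforms for two influencing joints.
dress = [translation([0, 0, 0]), translation([1, 0, 0])]
current = [rot_z(0.3), translation([1, 0, 0]) @ rot_z(0.6)]
weights = [0.7, 0.3]                         # per-vertex joint weights, sum to 1

v_dress = np.array([1.5, 0.1, 0.0, 1.0])     # vertex position in the dress pose

# v = sum_i w_i * T_i * T_{i,dress}^{-1} * p_dress
v = sum(w * (C @ np.linalg.inv(D) @ v_dress)
        for w, C, D in zip(weights, current, dress))
print("skinned position:", v[:3])
```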

4.2 Overcoming the Problems with Skeleton Driven Deformation

Several attempts have been made to overcome the limitations of geometric skin deformation by using examples of varying postures and blending them during animation. Aimed mostly at real-time applications, these example-based methods essentially seek solutions to efficiently leverage realistic shapes that come either from captured skin shapes of real people, from physically based simulation results, or from shapes sculpted by skilled designers.

Pose space deformation [28] approaches the problem by using artistically sculpted skin surfaces of varying posture and blending them during animation. Each vertex on the skin surface is associated with a linear combination of radial basis functions that compute its position given the pose of the moving character. These functions are formed by using the example pairs: the poses of the character, and the vertex positions that comprise the skin surface. Two deformation results of a bending arm, one by PSD and the other by the classical skeleton-driven deformation, are comparatively illustrated in Fig.10. More recently, Kry et al. [29] proposed an extension of that technique by using principal component analysis (PCA), allowing for optimal reduction of the data and thus faster deformation.

Fig.10. Pose space deformation (left) result compared to the skeleton driven deformation (right) [28].

Fig.11. Example-based skin deformation by Sloan et al. [23].

Sloan et al. [23] have shown similar results using RBF for blending the arm models (see Fig.11).

Their contribution lies in that they make use of an equivalent of cardinal basis functions. The blending functions are obtained by solving the linear system per example rather than per degree of freedom, which is potentially of a large number, thus resulting in improved performance.

Allen et al. [13] present yet another example-based method for creating realistic skeleton-driven deformation (see Fig.12). Unlike previously published works, however, they start from unorganized scan data instead of using existing models sculpted by artists. After the correspondence is established among the dataset through a feature-based optimization on a template model followed by a refitting step, they build a blending function based on the pose, using k-nearest-neighbors interpolation. Additionally, they build functions for combining subparts of the body, allowing for blending several datasets of different body parts such as arms, shoulder and torso.

Fig.12. Example-based skin deformation by Allen et al. [13].

More recently, Mohr et al. [30] showed an extension of SDD by introducing pseudo joints. The skeleton hierarchy is completed with extra joints inserted between existing ones to reduce the dissimilarity between two consecutive joints. These extra joints can also be used to simulate some nonlinear body deformation effects such as muscle bulges. Once all the extra joints have been defined, they use a fitting procedure to set the skinning parameters of these joints. The weights and the dress positions of the vertices are defined by a linear regression so that the resulting skin surface fits example body shapes designed by artists. With the weights well defined, those examples can be discarded at runtime.
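A hedged sketch of the weight-fitting idea in that last step: with the dress-pose position held fixed, the skinning weights of a vertex can be found by linear least squares so that the skinned result matches example shapes over several poses. The joint matrices and target positions below are toy placeholders, not the cited paper's data or exact procedure.

```python
# Fit per-vertex skinning weights from example poses by linear least squares.
import numpy as np

def rot_z(a):
    M = np.eye(4)
    M[0, 0], M[0, 1] = np.cos(a), -np.sin(a)
    M[1, 0], M[1, 1] = np.sin(a),  np.cos(a)
    return M

dress_inv = [np.eye(4), np.eye(4)]              # inverse dress-pose joint matrices
p_dress = np.array([1.0, 0.2, 0.0, 1.0])

# For each example pose: current joint matrices and an artist-given vertex position.
poses = [[rot_z(0.0), rot_z(0.0)],
         [rot_z(0.5), rot_z(1.0)],
         [rot_z(-0.4), rot_z(0.2)]]
targets = [np.array([1.0, 0.2, 0.0]),
           np.array([0.75, 0.8, 0.0]),
           np.array([0.95, -0.15, 0.0])]

# One least-squares block per example: columns are candidate transformed positions.
A = np.vstack([np.column_stack([(T @ Dinv @ p_dress)[:3]
                                for T, Dinv in zip(pose, dress_inv)])
               for pose in poses])
b = np.concatenate(targets)
weights, *_ = np.linalg.lstsq(A, b, rcond=None)
print("fitted skinning weights:", weights)
```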

5 Dressed Virtual Models


Recent years have witnessed an increasing number of modeling and simulation techniques developed for dressed virtual humans. In this section we discuss several cloth modeling and simulation methods that are closely related to body deformation and animation.

5.1 Garment Modeling

Despite several attempts to capture and model the appearance and behavior of 3D cloth models automatically [31,32], garment models worn by virtual models today come mostly from an interactive process. In the work presented by Cordier et al. [6,33], the garments worn on a 3D body model are automatically resized as the body changes its dimensions (see Fig.14). They first attach the cloth mesh to the body surface by defining attachment data of the garment mesh to the skin surface. The cloth deformation method makes use of the shape of the underlying skin. Each vertex of the garment mesh is associated with the closest triangle, edge or vertex of the skin mesh. In Fig.13, the garment vertex C is in collision with the skin triangle S1S2S3. C' is defined as the closest point to C located on the triangle S1S2S3. The barycentric coordinates of C' are then computed with respect to S1, S2 and S3.

Fig.13. Mapping of attachment information [33].

These barycentric coordinates provide an easy way to compute the rest shape of the garment by using the locations of the skin vertices. For every position of the triangle S1S2S3, the position of the cloth vertex can be easily computed. The association between the original body and the garment permits the garment to be smoothly interpolated so that it remains appropriately worn on the virtual body.
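The attachment mapping just described can be sketched as follows: a garment vertex is stored as barycentric coordinates of its closest skin triangle (plus, as an assumption here, an offset along the triangle normal) and is reconstructed for any new position of that triangle. Coordinates and triangle positions are illustrative values, not data from the cited system.

```python
# Attach a garment vertex to a skin triangle and follow the triangle as it moves.
import numpy as np

def barycentric(p, s1, s2, s3):
    """Barycentric coordinates of point p with respect to triangle (s1, s2, s3)."""
    u, v, w = s2 - s1, s3 - s1, p - s1
    d00, d01, d11 = u @ u, u @ v, v @ v
    d20, d21 = w @ u, w @ v
    denom = d00 * d11 - d01 * d01
    b = (d11 * d20 - d01 * d21) / denom
    c = (d00 * d21 - d01 * d20) / denom
    return np.array([1.0 - b - c, b, c])

# Attachment step: skin triangle S1 S2 S3 in the reference pose and garment vertex C.
S = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
C = np.array([0.3, 0.3, 0.05])
bary = barycentric(C, *S)
normal = np.cross(S[1] - S[0], S[2] - S[0]); normal /= np.linalg.norm(normal)
offset = (C - bary @ S) @ normal              # distance of the vertex above the skin

# Animation step: the skin triangle has moved; recompute the garment vertex.
S_new = S + np.array([0.0, 0.0, 0.2])
n_new = np.cross(S_new[1] - S_new[0], S_new[2] - S_new[0])
n_new /= np.linalg.norm(n_new)
C_new = bary @ S_new + offset * n_new
print("garment vertex follows the skin:", C_new)
```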

5.2 Real-Time Garment Simulation

Extensive research has been carried out in cloth simulation. Several works have focused on the quality of garment simulation, where the constraint of real-time performance was not a priority. Their aim was to develop a physics-based method able to simulate the dynamics of cloth independently of its use, whether as clothing or in other situations such as a furnishing tablecloth. They integrated complex collision detection and were able to simulate the physical behavior of garments [34-36].

Fig.14. Resizing cloth models in accordance with the body deformation [33].

Other research has focused on the real-time aspect of the animation of deformable objects using physical simulation.

Baraff et al. [37,38] have used the implicit Euler integration method to compute the cloth simulation. They stated that the bottleneck of fast cloth simulation is the fact that the time step must be small enough to avoid instability. They described a method that can stably take large time steps, suggesting the possibility of real-time animation of simple objects.
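The large-time-step idea can be illustrated with a deliberately reduced sketch: for a 1D chain of particles connected by linear springs, the force Jacobian is folded into the system matrix of an implicit Euler step, so the step stays stable even when it is large. This is an illustration under simplifying assumptions (1D, linearized springs, no damping), not the cited paper's full cloth solver.

```python
# Implicit Euler step for a 1D hanging chain: (M - h^2 K) dv = h (f + h K v).
import numpy as np

n, k, m, h, g = 5, 100.0, 0.1, 0.05, -9.81
rest = 0.2
y = -rest * np.arange(n, dtype=float)       # particles hanging below the origin
v = np.zeros(n)
y_rest = y.copy()

# Stiffness (Laplacian) matrix of the chain of linear springs.
L = np.zeros((n, n))
for i in range(n - 1):
    L[i, i] += 1; L[i + 1, i + 1] += 1
    L[i, i + 1] -= 1; L[i + 1, i] -= 1
K = -k * L                                   # force Jacobian df/dy

for _ in range(100):
    f = K @ (y - y_rest) + m * g             # spring + gravity forces
    A = m * np.eye(n) - (h ** 2) * K         # system matrix of the implicit step
    b = h * (f + h * (K @ v))
    A[0, :] = 0; A[0, 0] = 1; b[0] = 0       # pin the top particle
    dv = np.linalg.solve(A, b)
    v += dv
    v[0] = 0
    y += h * v

print("near-equilibrium positions:", y)
```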

Meyer et al. [39] and Desbrun et al. [40] have used a hybrid explicit/implicit integration algorithm to animate clothes in real time; integrated with this is a voxel-based collision detection algorithm.

Other research has focused on collision detection, stating that it is one of the bottlenecks of real-time animation. Vassilev et al. [41] proposed to use the z-buffer for collision detection to generate depth and normal maps. The computation time of their collision detection does not depend on the complexity of the body. The main drawback is that the maps need to be pre-computed before simulation, restricting the real-time application.

Another approach, presented by Grzeszczuk et al. [42], uses a neural network to animate dynamic objects. They replaced physics-based models by a large neural network that automatically learns to simulate similar motions by observing the models in action. Their method works in real time. However, it has not been proven that this method can be used for complex simulations such as cloth.

In [6], Cordier et al. introduced a method for cloth animation in real time. The algorithm works in a hybrid manner, exploiting the merits of both physics-based and geometric deformations. It makes use of predetermined conditions between the cloth and the body model, avoiding complex collision detection and physical deformations wherever possible. Garments are segmented into pieces that are simulated by various algorithms, depending on how they are laid on the body surface and whether they stick to or flow over it.

6 Conclusion

Although the problem of modeling virtual humans has been a well-studied area, the growing needs of applications requiring a variety of realistic, controllable virtual humans call for efficient solutions that automatically generate high-quality models. We have reviewed several techniques devoted to the automatic generation of the static and dynamic shape of the human body.

In comparison with undressed body models, little attention has been devoted to the task of automatically dressing virtual human models. This situation will change rapidly, since most practical applications today involve virtual humans, often crowds, with clothes. This is particularly the case with the growing power of graphics hardware and software techniques for supporting large numbers of virtual human models in real-time applications.

References

[1] Scheepers F, Parent R E, Carlson W E, May S F. Anatomy-based modeling of the human musculature. In Proc. SIGGRAPH'97, Los Angeles, CA, USA, 1997, pp.163-172.
[2] Shen J, Thalmann D. Interactive shape design using metaballs and splines. In Proc. Implicit Surfaces'95, Grenoble, France, 1995, pp.187-196.
[3] Wilhelms J, Van Gelder A. Anatomically based modeling. In Proc. SIGGRAPH'97, Los Angeles, CA, USA, 1997, pp.173-180.
[4] Devernay F, Faugeras O D. Computing differential properties of 3-D shapes from stereoscopic images without 3-D models. In Proc. Computer Vision and Pattern Recognition, Seattle, WA, USA, 1994, pp.208-213.
[5] NASA Reference Publication 1024. The Anthropometry Source Book, Volumes I and II.
[6] Cordier F, Magnenat-Thalmann N. Real-time animation of dressed virtual humans. Computer Graphics Forum, Blackwell Publishers, 2002, 21(3): 327-336.
[7] Fua P. Human modeling from video sequence. Geomatics Info Magazine, July 1999, 13(7): 63-65.
[8] Lee W, Gu J, Magnenat-Thalmann N. Generating animatable 3D virtual humans from photographs. In Proc. Eurographics'2000, Interlaken, Switzerland, 2000, Computer Graphics Forum, 19(3): 1-10.
[9] Hilton A, Beresford D, Gentils T, Smith R, Sun W. Virtual people: Capturing human models to populate virtual worlds. In Proc. Computer Animation, Geneva, Switzerland, IEEE Press, 1999, pp.174-185.
[10] Daanen H A M, Van de Water G J. Whole body scanners. Displays, Elsevier, 1998, 19(3): 111-120.
[11] Dekker L. 3D human body modeling from range data [Dissertation]. University College London, 2000.
[12] Ju X, Siebert J P. Conforming generic animatable models to 3D scanned data. In Proc. 6th Numerisation 3D/Scanning 2001 Congress, Paris, France, 2001.
[13] Allen B, Curless B, Popovic Z. Articulated body deformation from range scan data. In Proc. SIGGRAPH'02, San Antonio, TX, USA, Addison-Wesley, 2002, pp.612-619.
[14] Seo H, Magnenat-Thalmann N. An automatic modelling of human bodies from sizing parameters. In Proc. ACM SIGGRAPH Symposium on Interactive 3D Graphics, Monterey, CA, USA, 2003, pp.19-26, 234.
[15] Kakadiaris I A, Metaxas D. 3D human body model acquisition from multiple views. In Proc. the Fifth International Conference on Computer Vision, Boston, MA, USA, June 20-23, 1995, pp.618-623.
[16] Plaenkers R, Fua P, D'Apuzzo N. Automated body modeling from video sequences. In Proc. IEEE-ICCV Workshop on Modeling People, Corfu, Greece, 1999, pp.45-52.
[17] Dooley M. Anthropometric modeling programs - A survey. IEEE Computer Graphics and Applications, 1982, 2(9): 17-25.
[18] Azuola F, Badler N I, Ho P et al. Building anthropometry-based virtual human models. In Proc. IMAGE VII Conf., Tucson, AZ, June 1994.
[19] NASA Man-Systems Integration Manual (NASA-STD-3000).
[20] Seo H, Yahia-Cherif L, Goto T, Magnenat-Thalmann N. GENESIS: Generation of E-population based on statistical information. In Proc. Computer Animation, Geneva, Switzerland, 2002, pp.81-85.
[21] DeCarlo D, Metaxas D, Stone M. An anthropometric face model using variational techniques. In Proc. SIGGRAPH'98, Orlando, FL, USA, Addison-Wesley, 1998, pp.67-74.
[22] Rose C, Cohen M, Bodenheimer B. Verbs and adverbs: Multidimensional motion interpolation using RBF. IEEE Computer Graphics and Applications, 1998, 18(5): 32-40.
[23] Sloan P P, Rose C, Cohen M. Shape by example. In Proc. ACM SIGGRAPH Symposium on Interactive 3D Graphics, NC, USA, 2001, pp.135-143.
[24] Blanz V, Vetter T. A morphable model for the synthesis of 3D faces. In Proc. SIGGRAPH'99, Los Angeles, CA, USA, Addison-Wesley, 1999, pp.187-194.
[25] Seo H, Cordier F, Magnenat-Thalmann N. Synthesizing animatable body models with parameterized shape modifications. In Proc. ACM SIGGRAPH/Eurographics Symposium on Computer Animation, San Diego, CA, USA, 2003, pp.120-125.
[26] Allen B, Curless B, Popovic Z. The space of all body shapes: Reconstruction and parameterization from range scans. In Proc. SIGGRAPH'03, San Diego, CA, USA, Addison-Wesley, 2003, pp.587-594.
[27] Magnenat-Thalmann N, Laperriere R, Thalmann D. Joint-dependent local deformations for hand animation and object grasping. In Proc. Graphics Interface, Edmonton, Alberta, Canada, 1988, pp.26-33.
[28] Lewis J P, Cordner M, Fong N. Pose space deformations: A unified approach to shape interpolation and skeleton-driven deformation. In Proc. SIGGRAPH'00, New Orleans, LA, USA, 2000, pp.165-172.
[29] Kry P G, James D L, Pai D K. EigenSkin: Real time large deformation character skinning in graphics hardware. In Proc. ACM SIGGRAPH Symposium on Computer Animation, San Antonio, TX, USA, 2002, pp.153-159.
[30] Mohr A, Gleicher M. Building efficient, accurate character skins from examples. In Proc. SIGGRAPH'03, San Diego, CA, USA, Addison-Wesley, 2003, pp.165-172.
[31] Bhat K, Twigg C, Hodgins J et al. Estimating cloth simulation parameters from video. In Proc. ACM SIGGRAPH/Eurographics Symposium on Computer Animation, San Diego, CA, USA, 2003, pp.37-51.
[32] Furukawa T, Gu J, Lee W-S, Magnenat-Thalmann N. 3D clothes modeling from photo cloned human body. In Proc. Virtual Worlds'2000, Paris, France, 2000, pp.159-170.
[33] Cordier F, Seo H, Magnenat-Thalmann N. Made-to-measure technologies for online clothing store. IEEE Computer Graphics and Applications, Special Issue on Web Graphics, Jan/Feb 2003, 23(1): 38-48.
[34] Volino P, Magnenat-Thalmann N. Comparing efficiency of integration methods for cloth animation. In Proc. Computer Graphics International'01, Hong Kong, China, 2001, pp.265-274.
[35] Choi K-J, Ko H-S. Stable but responsive cloth. In Proc. SIGGRAPH'02, San Antonio, TX, USA, 2002, pp.604-611.
[36] Eberhardt B, Weber A, Strasser W. A fast flexible particle-system model for cloth draping. IEEE Computer Graphics and Applications, Sept. 1996, 16(5): 52-59.
[37] Baraff D, Witkin A, Kass M. Untangling cloth. In Proc. SIGGRAPH'03, San Diego, CA, USA, 2003, pp.862-870.
[38] Baraff D, Witkin A. Large steps in cloth simulation. In Proc. SIGGRAPH'98, Orlando, FL, USA, Addison-Wesley, 1998, pp.43-54.
[39] Meyer M, Debunne G, Desbrun M, Barr A H. Interactive animation of cloth-like objects in virtual reality. Journal of Visualization and Computer Animation, John Wiley.
[40] Desbrun M, Schroder P, Barr A H. Interactive animation of structured deformable objects. In Proc. Graphics Interface'99, Kingston, Ontario, Canada, 1999, pp.1-8.
[41] Vassilev T, Spanlang B. Fast cloth animation on walking avatars. Eurographics, Blackwell Publishers, 2001, 20(3): 137-150.
[42] Grzeszczuk R, Terzopoulos D, Hinton G. NeuroAnimator: Fast neural network emulation and control of physics-based models. In Proc. SIGGRAPH'98, Orlando, FL, USA, Addison-Wesley, 1998, pp.9-20.

Nadia Magnenat-Thalmann has researched virtual humans for more than 20 years. She studied psychology, biology, and chemistry at the University of Geneva and obtained her Ph.D. degree in computer science in 1977. In 1989 she founded MIRALab, an interdisciplinary creative research laboratory at the University of Geneva. Some recent awards for her work include the 1992 Moebius Prize for the best multimedia system awarded by the European Community, "Best Paper" at the British Computer Graphics Society congress in 1993, an award from the Brussels Film Academy for her work in virtual worlds in 1993, and election to the Swiss Academy of Technical Sciences in 1997. She has published several books and more than 300 papers, and is editor of several scientific journals, among which she is Editor-in-Chief of The Visual Computer published by Springer Verlag and co-Editor-in-Chief of the Journal of Computer Animation and Virtual Worlds published by John Wiley. She can be contacted at thalmann@miralab.unige.ch.

Hyewon Seo recently obtained her Ph.D. degree in computer science at MIRALab, University of Geneva, Switzerland. She was awarded a Swiss Federal Scholarship for Foreign Students from 1998 to 1999. She obtained her B.S. and M.S. degrees in computer science from the Korea Advanced Institute of Science and Technology (KAIST), Taejon, Korea, in 1996 and 1998, respectively. Her current research interests are in the areas of modeling and deformation of graphical objects, biology-inspired visual computing, and digital human modeling for human-centered design. She is a member of ACM and Eurographics.

Frederic Cordier recently received his Ph.D. degree in computer science at MIRALab, University of Geneva, where he works on real-time skin and cloth deformation for simulating dressed virtual humans. Before joining MIRALab, he studied at the University Lyon-I in Lyon, France, and received his Master's degree in computer graphics in 1997. Since 1998, he has published over 17 papers in the fields of medical imaging, skin deformation, and real-time cloth simulation. His current research interests include collision detection, physically-based techniques for digital human modeling, and sketch-based interfaces.
