3.3 Controllable and Adaptable Virtual Fashion Model

At the base of some of the application scenarios lies a real-time virtual fashion model simulation library named the Virtual Try-On (VTO); we therefore introduce this library before the scenario sections. The VTO consists of three interconnected parts: a body sizing module, a motion adaptation module and a real-time cloth simulation library. The combination of these three libraries provides us with a template model that can be fully customized to match the user's measurements. Any animation that comes with the template body is adapted in real time to fit the personalized anatomy, and the cloth simulation library allows any type of garment to be simulated on the body, as well as the real-time adaptation of garment size. This allows the user to quickly evaluate how a certain garment looks on him or her, and to visualize different garment sizes in real time to select the best fit. The body resizing module is based on a parametrized human body model as described by Kasap and Magnenat-Thalmann [70].
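
The interplay of the three modules can be illustrated with a minimal sketch; the class and method names below are hypothetical stand-ins for illustration, not the actual library interfaces:

```cpp
#include <cstdio>

// Hypothetical stubs standing in for the three VTO modules; names and
// behaviour are illustrative only.
struct BodySizer      { void setMeasurement(int id, float cm) { std::printf("resize area %d -> %.1f cm\n", id, cm); } };
struct MotionAdapter  { void adaptToBody()                    { std::printf("retarget animation to new anatomy\n"); } };
struct ClothSimulator { void step(float dt)                   { std::printf("simulate cloth for %.3f s\n", dt); } };

int main() {
    BodySizer body; MotionAdapter motion; ClothSimulator cloth;

    body.setMeasurement(/*e.g. waist girth*/ 7, 82.0f);  // user measurement in centimetres
    motion.adaptToBody();                                 // keep the animation valid for the new anatomy
    for (int frame = 0; frame < 3; ++frame)
        cloth.step(1.0f / 30.0f);                         // one simulation step per rendered frame
    return 0;
}
```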

Based on a template model we can modify up to twenty-three measurement parameters, corresponding to twenty-three different body areas. If a parameter involves a change in the length of a body part, the underlying skeleton used in animation is resized as well. Body sizes are generated according to anthropometric measurement standards, assuring a physically plausible deformation. Since the deformation is based on actual measurements it can be adapted to match the measurements of the user. This process is completely dynamic: the body is modifiable both at the rest pose and during animation. The results of this process can be seen in figure 3.7.
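
As a rough illustration of this parameter-driven resizing, the sketch below shows how a length measurement could also trigger a skeleton update; the enum values, the 23-entry layout and the update rule are assumptions for illustration only:

```cpp
#include <array>
#include <cstdio>

// Hypothetical measurement-driven resizing; only the idea that length
// parameters also affect the animation skeleton is taken from the text.
enum class Measure { ArmLength, LegLength, WaistGirth, ChestGirth /* ... 23 in total */ };

struct BodyParameters {
    std::array<float, 23> valueCm{};   // current measurement per body area

    void set(Measure m, float cm) {
        valueCm[static_cast<int>(m)] = cm;
        // Length parameters also rescale the animation skeleton so that
        // bone lengths stay consistent with the deformed mesh.
        if (m == Measure::ArmLength || m == Measure::LegLength)
            rescaleBones(m, cm);
    }

    void rescaleBones(Measure m, float cm) {
        std::printf("rescaling skeleton segment %d to %.1f cm\n", static_cast<int>(m), cm);
    }
};

int main() {
    BodyParameters body;
    body.set(Measure::LegLength, 78.5f);   // triggers a skeleton update
    body.set(Measure::WaistGirth, 82.0f);  // pure mesh deformation, skeleton untouched
}
```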

Figure 3.7: Starting from a template model on the left, the body is increasingly adapted during animation to its final anatomy on the right.

Changing body sizes and lengths means that the animation recorded or made for the template model no longer matches the new anatomy. A change in the lengths of parts of the skeleton will cause visual errors, such as foot skating, if it is not taken into account in the animation. These changes, as well as changes in girth, might result in incorrect inter-penetration between body parts. In addition, the animation might become physically implausible, because animation based on a small and light body does not look correct on a larger, heavier-built avatar. In order to resolve these undesirable effects we have integrated a motion adaptation library based on the work of Lyard and Magnenat-Thalmann [71], [72]. Any change to the parameterized human body model causes the animation to be adapted to the new anatomy, resulting in a visually pleasing and plausible animation of the personalized avatar. Again, all of this can be performed completely interactively and without any significant performance impact on the application.
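
To give an intuition of why motion adaptation is needed, the sketch below rescales the recorded root translation by the ratio of leg lengths so that longer legs produce longer strides; this is a deliberately simplified illustration, not the method of Lyard and Magnenat-Thalmann [71], [72]:

```cpp
#include <cstdio>

// If the legs become longer, the recorded root translation must be
// rescaled, otherwise the feet slide over the ground ("foot skating").
struct Pose { float rootX, rootY, rootZ; };

Pose adaptRootTranslation(const Pose& recorded, float templateLegLenCm, float newLegLenCm) {
    const float stride = newLegLenCm / templateLegLenCm;   // longer legs -> longer strides
    return { recorded.rootX * stride, recorded.rootY, recorded.rootZ * stride };
}

int main() {
    Pose frame{0.40f, 0.95f, 0.00f};                        // recorded for the template body
    Pose adapted = adaptRootTranslation(frame, 80.0f, 92.0f);
    std::printf("root x: %.2f -> %.2f\n", frame.rootX, adapted.rootX);
}
```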

To complete the virtual fashion model we need to combine the adaptable and animated avatar with a cloth simulation module. Cloth simulation is a complex and computationally intensive process that can easily become the bottleneck for the performance of the entire application. This can be partially alleviated by reducing the geometric complexity of the simulated garments, choosing an efficient but less accurate simulation scheme, or approximating the collision interaction between garment and body. In this specific case, however, the need to represent garment behaviour and its fit to a body as accurately as possible puts restrictions on the abstractions we can make. We have therefore integrated a simulation module as described by Magnenat-Thalmann et al. [73]. This module, which uses an optimized implementation of the tensile cloth model by Volino and Magnenat-Thalmann [74], allows us to simulate garments based on measured fabric parameters with fairly realistic results. It furthermore allows us to switch between different garment sizes on the fly, based on integrated grading information.

Collision detection is performed using a skinned representation of the garment and its distance to the nearest body vertex as described in [73].
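
The following sketch conveys the idea of keeping each garment vertex at a minimum distance from its nearest body vertex; the brute-force nearest-neighbour search and the push-out rule are simplifications for illustration and do not reproduce the skinned-garment scheme of [73]:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

static float dist(const Vec3& a, const Vec3& b) {
    return std::sqrt((a.x-b.x)*(a.x-b.x) + (a.y-b.y)*(a.y-b.y) + (a.z-b.z)*(a.z-b.z));
}

// Push every garment vertex out to at least 'thickness' from its nearest body vertex.
void resolveCollisions(std::vector<Vec3>& garment, const std::vector<Vec3>& body, float thickness) {
    for (Vec3& g : garment) {
        const Vec3* nearest = &body[0];                  // brute-force nearest body vertex
        for (const Vec3& b : body)
            if (dist(g, b) < dist(g, *nearest)) nearest = &b;
        const float d = dist(g, *nearest);
        if (d < thickness && d > 1e-6f) {
            const float s = thickness / d;               // push out along the body->garment direction
            g = { nearest->x + (g.x - nearest->x) * s,
                  nearest->y + (g.y - nearest->y) * s,
                  nearest->z + (g.z - nearest->z) * s };
        }
    }
}

int main() {
    std::vector<Vec3> body    = {{0, 0, 0}};
    std::vector<Vec3> garment = {{0.002f, 0, 0}};        // too close to the body
    resolveCollisions(garment, body, 0.005f);            // keep a 5 mm cloth offset
    std::printf("garment x after push-out: %.3f\n", garment[0].x);
}
```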

The VTO's three individual libraries are integrated through a thin wrapper class that hides most of the complex interaction between the three libraries described above, exposing only basic functionality to load and interact with body and cloth. Everything is built on top of, or in close relation to, OpenSceneGraph (OSG). We have created several custom callbacks to perform the necessary per-frame updates for the skinning procedures and cloth simulation calls. The model itself, as well as the garments, animation and parameters used in body sizing, is stored in a single COLLADA file¹, for which we have modified the OSG plug-in to load custom data. Some examples of the results we obtain with the VTO can be seen in figure 3.8.
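
A per-frame hook of the kind described above can be realized with an OSG update callback; the VtoSession class and its methods below are hypothetical placeholders for the wrapper's functionality, only the callback mechanism itself is standard OSG:

```cpp
#include <osg/Node>
#include <osg/NodeCallback>
#include <osg/NodeVisitor>

// Hypothetical stand-in for the thin wrapper class around the VTO libraries.
struct VtoSession {
    void updateSkinning()   {}   // deform the body mesh for the current pose
    void stepClothPhysics() {}   // advance the garment simulation one step
};

// OSG update callback that advances skinning and cloth once per frame.
class VtoUpdateCallback : public osg::NodeCallback {
public:
    explicit VtoUpdateCallback(VtoSession* vto) : _vto(vto) {}

    void operator()(osg::Node* node, osg::NodeVisitor* nv) override {
        _vto->updateSkinning();
        _vto->stepClothPhysics();
        traverse(node, nv);      // continue the regular update traversal
    }
private:
    VtoSession* _vto;
};

// Attaching the callback to the avatar's node in the scene graph:
//   avatarNode->setUpdateCallback(new VtoUpdateCallback(&session));
```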

In order to put the use of this simulation into perspective we have tested it on a range of hardware. The simulation is implemented in C++ and built for x86 systems. For every frame the application performs a simulation calculation followed by a rendering cycle, so every frame rendered is also a simulation cycle. Later versions decouple simulation and rendering; since rendering is often also tied to interface aspects, a more responsive interface benefits from this decoupling. Table 3.2 shows the results of this test. The UMPC, as our slowest unit, is barely capable of rendering a single frame per second; in fact the frame-rate capture rounds it up to one fps. The tablet, which is far superior to the UMPC, does not do much better. The only decent results are obtained from desktop systems.

¹ COLLADA: http://www.collada.org/

Figure 3.8: The Virtual Try-On in action. On the left, a front and side view of the base model wearing a size 34 dress; on the right, a more voluptuous model wearing the same dress scaled to a size 38.

Table 3.2: VTO performance in frames per second

    System   UMPC (vii)   Tablet (viii)   PC (iii)   PC (iv)   PC (v)   PC (vi)
    FPS               1               2         20        27       30       30

This makes the VTO an excellent example to showcase the benefit of streaming. It should be noted that the implementation of the VTO runs as a single process and does not use threading, which greatly limits its performance on current systems, all of which contain multi-core processors. An attempt to overcome this problem is presented by Kevelham [75] and is partially described in section 3.10, where an early build of that simulation is used as a replacement.
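
The decoupling mentioned above can be illustrated with a minimal two-thread sketch in which the simulation advances at its own pace while the render loop stays responsive; this only illustrates the principle and is not the multi-threaded simulation of Kevelham [75]:

```cpp
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

int main() {
    std::atomic<bool> running{true};
    std::atomic<int>  simSteps{0};

    // Simulation thread: a slow cloth step no longer blocks rendering.
    std::thread simulation([&] {
        while (running) {
            std::this_thread::sleep_for(std::chrono::milliseconds(50)); // stand-in for a cloth step
            ++simSteps;
        }
    });

    // Render/interface loop keeps its own pace and stays responsive.
    for (int frame = 0; frame < 10; ++frame) {
        std::this_thread::sleep_for(std::chrono::milliseconds(16));
        std::printf("frame %d rendered, %d simulation steps done\n", frame, simSteps.load());
    }

    running = false;
    simulation.join();
}
```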