


3.9.4 Multi-user Iterative Image Segmentation

Our segmentation algorithm is based on an iterative and progressive evolution of physically-based discrete deformable models [76], [102], [103]. In our case, these deformable models are represented by 2-Simplex meshes [104] that deform under the influence of forces. Each mesh vertex is considered as a particle with mass whose state (position and velocity) is derived from the Newtonian law of motion and the applied forces. At each iteration step, the particles' state is updated by an implicit Euler numerical integration. External forces are based on image information (e.g., gradients, intensity distribution) to drive the model towards the desired anatomical boundaries. Conversely, internal forces enforce the mesh geometry to respect smoothness and shape constraints. Shape constraints derive from statistical shape models (SSM), which ensure that meshes can only adopt valid configurations expressed by statistics inferred from a collection of training shapes. SSMs proved to be very efficient and robust in medical image segmentation [105] and have been successfully applied to segment a wide variety of structures (e.g., bone [76], [106], liver [105] and bladder [107]). Our segmentation algorithm allows the simultaneous segmentation of various structures of interest. To cope with model inter-penetration, efficient collision detection and response are implemented. Coupled with a multi-resolution approach (from coarse to fine), a fast and interactive segmentation algorithm is derived. In order to monitor and possibly correct the algorithm evolution, interactive control must be provided to users [108], especially in our context of collaborative work. This is achieved by means of internal, external or frontier constraint points that deform the mesh so that the points lie respectively in the interior, at the exterior or on the surface of the mesh (see figure 3.22 and [66]).
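To make the integration step concrete, the following minimal sketch advances each vertex, treated as a particle, by one time step. It uses a simplified semi-implicit (symplectic) Euler step rather than the fully implicit scheme described above, which would require solving a linear system over all vertices; the force function, damping term and array shapes are assumptions made only for illustration.

```python
import numpy as np

def integrate_step(positions, velocities, masses, force_fn, dt, damping=0.1):
    """One (simplified) integration step for the deformable-model particles.

    `positions`, `velocities`: (N, 3) arrays; `masses`: (N,) array.
    `force_fn` is assumed to return the sum of internal (smoothness/shape)
    and external (image-based) forces per vertex as an (N, 3) array.
    """
    forces = force_fn(positions, velocities)          # total force per vertex
    accelerations = forces / masses[:, None]          # Newton's second law
    velocities = (1.0 - damping) * (velocities + dt * accelerations)
    positions = positions + dt * velocities           # update with new velocity
    return positions, velocities
```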

Figure 3.22: Example of constraint points on MRI images. The blue point (right) is an internal point (inside the ROI shape shown in green), the red point is an external point, and the yellow points are frontier points.

In practice, constraint points attract or repel meshes by creating forces on some vertices. An example of an internal constraint point P is depicted in figure 3.23. The closest face of the mesh, here represented by two vertices

P0 and P1, is attracted under the action of two forces f1 and f2. Each force fi is computed as:

fi = α · wi (P − Pp)    (3.1)

where wi is the barycentric weight computed from the projection Pp of P on the face, and α denotes a global weighting coefficient specific to the constraint point type (internal, external or frontier). These external forces have a local influence (only the closest faces are affected), while the

Figure 3.23: Illustration of internal CPs: the closest face (P0, P1) to the CP P is attracted by creating two forces f1 and f2 on P0 and P1 respectively, whose calculation depends on P and its projection Pp on the face.

modification of the force weight α can globally affect the segmentation since all constraint point forces of the same type share the same weight. The next section will explain how such weights can be tuned to account for the various collaborative segmentation scenarios. This segmentation algorithm is thus a good candidate for our collaborative application as it fulfils the requirements defined in section 3.9.3 and allows the concurrent segmentation of multiple structures. In this case, the model contours and the constraint points are overlaid in the slice and represent what was previously denoted in section 3.9.3 as segmentation overlay (see also figure 3.21).
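As an illustration of equation (3.1), the following sketch computes the two forces applied to the closest face when it is reduced to a segment (P0, P1), as in figure 3.23. The projection and barycentric weights are computed for this simplified two-vertex case; the function name and the clamping of the projection onto the segment are assumptions made for the example, not the actual implementation.

```python
import numpy as np

def constraint_point_forces(P, P0, P1, alpha):
    """Forces exerted by a constraint point P on the closest face (P0, P1).

    Sketch of Eq. (3.1): P is projected onto the segment (P0, P1),
    barycentric weights w0, w1 are taken from the projection Pp, and each
    vertex receives fi = alpha * wi * (P - Pp). `alpha` is the global
    weight shared by all constraint points of the same type.
    """
    edge = P1 - P0
    t = np.clip(np.dot(P - P0, edge) / np.dot(edge, edge), 0.0, 1.0)
    Pp = P0 + t * edge                    # projection of P on the face
    w0, w1 = 1.0 - t, t                   # barycentric weights of Pp
    direction = P - Pp                    # attract the face towards P
    return alpha * w0 * direction, alpha * w1 * direction
```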

3.10 Collaborative Services with Shared Data Models

In this application scenario we integrate all aspects of the architecture and present it in three parts. The first part focuses on the collaborative aspect and on sharing data between users, where consistency is of high importance. The proposed application is based on generating 3D meshes from 2D pattern drawings. The second part takes the data output from the first part and uses it in real time for its simulation services, together with single-user access for remote rendering of the simulation results. The 2D patterns are transformed into 3D meshes, and for each mesh a cloth simulation service is started that simulates the given 3D mesh.

Any update to the base 2D pattern resets the simulation and updates the 3D mesh. Simulation parameters can be changed by outside services that subscribe to the simulation. This involves the 3D Adaptive Rendering and Simulation Control services. The 3D Adaptive Rendering service either provides the rendering remotely or can be integrated into the client application. The Simulation Control service provides the means to change several aspects of the simulation: depending on the type of simulation, the internal parameters can be changed, but also more general parameters

such as the simulation speed and the update rate to the subscribers. The third part takes the output from the second part and uses it as a means to populate a virtual environment. The output from the second part is the transformed mesh, which is then used either as an NPC or as a user's avatar representation. Users can connect to the DVE through different clients that, as described in the second part, either integrate the 3D Adaptive Rendering or subscribe through a 3D Adaptive Rendering service. This application scenario shows the data propagation and mutation from its initial creation to its deployment and usage within a Virtual World.
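To give an idea of the kind of control messages the Simulation Control service could handle, the following is a minimal sketch; the parameter names and the message format are assumptions for illustration, not the actual service interface.

```python
from dataclasses import dataclass, field

@dataclass
class SimulationParameters:
    """Hypothetical parameter set exposed by a Simulation Control service."""
    simulation_speed: float = 1.0                   # time scaling of the cloth simulation
    update_rate_hz: float = 30.0                    # rate at which subscribers receive updates
    internal: dict = field(default_factory=dict)    # simulation-type-specific values

def apply_control_message(params: SimulationParameters, message: dict) -> SimulationParameters:
    """Merge a control message sent by an outside service into the current parameters."""
    params.simulation_speed = message.get("simulation_speed", params.simulation_speed)
    params.update_rate_hz = message.get("update_rate_hz", params.update_rate_hz)
    params.internal.update(message.get("internal", {}))
    return params
```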

Interactive performance in terms of responsiveness is one of the key challenging issues for remote interactive 3D applications. We introduce run-time presentation and dynamic interface-adaptation mechanisms which aim to preserve the real-time interactive performance of 3D content, taking into account heterogeneous devices in user-centric pervasive computing environments. To support perceptual real-time interaction with 3D content, temporal adjustment of the presentation quality is used. In other words, the quality of presentation on client devices is dynamically adjusted according to the current device context. To overcome the inevitable physical heterogeneity in display capabilities and input controls on client devices, we provide a dynamic user interface reconfiguration mechanism for interaction with 3D content.
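The sketch below illustrates one possible form of such temporal quality adjustment: the time taken to render the last frame on the client device is measured and the presentation quality (e.g., mesh LOD or image resolution) is lowered or raised to keep the perceived interaction rate near a target. Function and parameter names are illustrative assumptions, not the actual implementation.

```python
import time

def adapt_quality(render_frame, level: int, target_dt: float = 1.0 / 30.0,
                  min_level: int = 0, max_level: int = 4) -> int:
    """One iteration of a hypothetical temporal quality-adaptation loop."""
    start = time.perf_counter()
    render_frame(level)                       # render at the current quality level
    elapsed = time.perf_counter() - start

    if elapsed > 1.2 * target_dt and level > min_level:
        level -= 1                            # too slow for the device: degrade quality
    elif elapsed < 0.8 * target_dt and level < max_level:
        level += 1                            # headroom available: improve quality
    return level
```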

We then extend these concepts to a Multi-user environment, offering a variety of services which are linked together by their functionalities. The full scenario is shown as a service diagram in


Figure 3.24: Deployment of the architecture.

figure 3.24, where three Cloth Designer client applications are connected to the Cloth Creator, which maintains the connections, consistency and interactivity notification. It either connects to or integrates the Shared Memory Space (SMS) service, which is basically a database containing any data a subscriber wants to store in it. In this case, the Cloth Creator is the actual publisher to the SMS and does not directly act as a subscriber. However, since the data is shared with the other Cloth Designers, for consistency, depending on the implementation, either the Cloth Creator directly notifies the Cloth Designers or the Cloth Designers are subscribed to the SMS and are updated upon data modification. The possibilities of implementation are further explored in section 4.8. The Cloth Creator stores the 2D pattern as well as a generated 3D mesh constructed

from the 2D pattern. Then the Cloth Simulation services take the data and start simulating it.
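To make the publisher/subscriber relation between the Cloth Creator, the SMS and the other services more concrete, the following is a minimal sketch of an SMS-like key/value store with change notification, which corresponds to one of the consistency strategies mentioned above. The class and method names are assumptions, not the actual service interface.

```python
from collections import defaultdict
from typing import Any, Callable

class SharedMemorySpace:
    """Minimal sketch of an SMS-like store: keyed data plus change notification."""

    def __init__(self) -> None:
        self._data: dict = {}
        self._subscribers = defaultdict(list)   # key -> list of callbacks

    def subscribe(self, key: str, callback: Callable[[str, Any], None]) -> None:
        """Register a service (e.g., a Cloth Designer) for updates on `key`."""
        self._subscribers[key].append(callback)

    def publish(self, key: str, value: Any) -> None:
        """Store new data and notify every subscriber of that key."""
        self._data[key] = value
        for callback in self._subscribers[key]:
            callback(key, value)

# Hypothetical usage: the Cloth Creator publishes a regenerated mesh and a
# subscribed Cloth Simulation service restarts from the new data.
sms = SharedMemorySpace()
sms.subscribe("cloth/mesh", lambda key, mesh: print(f"restart simulation with {key}"))
sms.publish("cloth/mesh", {"vertices": [], "faces": []})
```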

However, without control, not much is going to happen. Therefore the Cloth Simulation can be controlled through outside services; as shown in the figure, these are the Cloth Viewer Phone, Cloth Viewer Tablet and Virtual Environment services. The Mesh Adaptation does not control the direct input to the Cloth Simulation, only the output; this can be done through Adaptive Rendering or LOD adjustments, streaming the content to the Cloth Viewer for rendering. The Virtual Environment is shown as the biggest module. This merely indicates that it is a DVE and uses IM mechanisms to keep the data rate to its subscribers optimal, load-balances the VE among its resources and provides scalability. The next three sections explain in more detail the three parts of the application scenario.