HAL Id: inria-00355619

https://hal.inria.fr/inria-00355619

Submitted on 23 Jan 2009


A dynamic drawing algorithm for interactive painterly rendering

David Vanderhaeghe, Pascal Barla, Joëlle Thollot, François X. Sillion

To cite this version:

David Vanderhaeghe, Pascal Barla, Joëlle Thollot, François X. Sillion. A dynamic drawing algorithm for interactive painterly rendering. SIGGRAPH ’06: ACM SIGGRAPH 2006 Sketches, Aug 2006, Boston, United States. pp. 100. ⟨10.1145/1179849.1179974⟩. ⟨inria-00355619⟩


A dynamic drawing algorithm for interactive painterly rendering (sketch 0387)

David Vanderhaeghe, Pascal Barla, Joëlle Thollot, François X. Sillion
ARTIS GRAVIR/IMAG INRIA

Figure 1: Three different painterly styles produced by our method at interactive rates. Left: long strokes are drawn along silhouettes using surface normals. Middle: strokes have a common global orientation set by the user. Right: surface principal curvature is used to orient thick strokes. Note that illumination features such as shading and highlights are correctly represented independently of the user’s stylistic decisions.

1 Motivation

Painterly rendering is a technique that takes inspiration from traditional paintings, usually focusing on effects similar to those achieved with oil or acrylic paint, where individual brush strokes remain more or less perceptible. The main idea is to render a scene projected onto the image plane as a set of 2D vector paint strokes holding style attributes (color, texture, etc.). This representation abstracts the rendering by using primitives larger than pixels, and emphasizes the image plane through the 2D nature of the paint strokes.
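As a concrete illustration of this representation, the following minimal Python sketch defines such a 2D paint-stroke record; the field names (anchor, orientation, texture_id, etc.) are illustrative assumptions, not the authors' actual data structure.

```python
# Hypothetical 2D paint-stroke record: a position in the image plane plus the
# style attributes mentioned above. Field names are assumptions for illustration.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class PaintStroke:
    anchor: Tuple[float, float]                  # 2D seed position in the image plane
    orientation: float                           # stroke direction, in radians
    length: float                                # extent in image space (pixels)
    thickness: float                             # constant image-space width
    color: Tuple[float, float, float]            # sampled from the rendered scene
    texture_id: int = 0                          # brush texture chosen by the user
    path: List[Tuple[float, float]] = field(default_factory=list)  # drawn polyline
```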

In the general case, paint strokes simultaneously represent information about objects in the scene (such as the shape or reflective properties of a surface from the current point of view) while following a stroke style provided by the user (straight or curved brush strokes, thick or thin outlines, etc.). During the animation they also follow the 2D or 3D motion of some scene elements. The main issues in painterly rendering originate from these conflicting goals (as stated by Durand [2002]). Temporal coherence of the strokes’ motion is of primary interest: it comes from the desire to link the motion of a 2D primitive (a stroke) to that of a 3D primitive (e.g. a surface).

Another important aspect is the density of strokes: when zooming in or out from an object, the number of strokes used to represent it must increase or decrease in order to maintain a uniform density in the picture plane while keeping a constant stroke thickness in image space.
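As a rough, back-of-the-envelope illustration of this requirement (not the paper's actual density mechanism), the sketch below budgets enough constant-size strokes to cover an object's projected screen area; the coverage factor and areas are assumed values.

```python
import math

def target_stroke_count(projected_area_px: float, stroke_area_px: float,
                        coverage: float = 1.0) -> int:
    """Enough constant-size strokes to cover the object's projected area."""
    return max(1, math.ceil(coverage * projected_area_px / stroke_area_px))

# Zooming in by 2x quadruples the projected area, so the stroke budget grows
# by 4x while each stroke keeps the same size on screen.
print(target_stroke_count(10_000, 200))   # 50 strokes
print(target_stroke_count(40_000, 200))   # 200 strokes after a 2x zoom
```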

Finally, an ideal painterly renderer would let the user fully specify the strokes’ style in a way that is independent of the depicted scene, but at the same time should ensure that some properties of the scene are well represented, such as object silhouettes or lighting.

2 Contributions

We present an object-space, particle-based system that extends the pioneering work of Meier [1996]. Our main contribution is a fully view- and lighting-dependent behavior that explicitly performs a trade-off between a user-specified stroke style and the faithful representation of the depicted scene. This way, our system offers broader expressiveness by removing the constraints usually found in previous approaches, while still ensuring temporal coherence.

Following Durand’s study on computer depiction [2002], our system is composed of three stages: attribute design, particle distribution and stroke drawing. The attribute stage takes the 3D scene as input and computes scene attributes using G-buffers. Attributes such as normals or curvature can be used to control stroke attributes, e.g. orientation and thickness. We blur the color and depth attributes to filter out details that could not be represented by thick strokes during the drawing stage.

The distribution stage takes the 3D scene as input, distributes 3D particles over the scene surfaces and, for each frame, projects a subset of the particles to compute the stroke positions while ensuring complete coverage of the image.
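The following CPU-side Python sketch approximates the two steps just described, under stated assumptions: the color and depth G-buffers are blurred with a Gaussian, and 3D particles are projected with at most one kept per image-space cell to approximate complete, uniform coverage. Buffer layouts, the cell size and the function names are illustrative choices; the authors' system performs these stages on the GPU.

```python
# Illustrative sketch only: blur the attribute buffers, then project particles
# and keep roughly one per screen cell for uniform image coverage.
import numpy as np
from scipy.ndimage import gaussian_filter

def blur_attribute_buffers(color: np.ndarray, depth: np.ndarray, sigma: float = 3.0):
    """Low-pass the color (HxWx3) and depth (HxW) buffers so that details finer
    than a thick stroke disappear before the drawing stage samples them."""
    blurred_color = gaussian_filter(color, sigma=(sigma, sigma, 0))
    blurred_depth = gaussian_filter(depth, sigma=sigma)
    return blurred_color, blurred_depth

def select_covering_particles(particles_ws: np.ndarray, view_proj: np.ndarray,
                              image_size, cell_px: int = 16):
    """Project Nx3 world-space particles with a 4x4 view-projection matrix and
    keep at most one per cell_px x cell_px image cell."""
    w, h = image_size
    homo = np.hstack([particles_ws, np.ones((len(particles_ws), 1))])
    clip = homo @ view_proj.T
    ndc = clip[:, :2] / clip[:, 3:4]              # perspective divide
    px = (ndc * 0.5 + 0.5) * np.array([w, h])     # to pixel coordinates
    kept, seen = [], set()
    for i, (x, y) in enumerate(px):
        if 0 <= x < w and 0 <= y < h:
            cell = (int(x) // cell_px, int(y) // cell_px)
            if cell not in seen:                  # one stroke seed per cell
                seen.add(cell)
                kept.append(i)
    return px[kept], kept
```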

In this way, our system allows the stroke density to be controlled by means of a view-dependent particle distribution. Finally, the stroke drawing stage takes as input the 2D stroke positions, the stroke attributes computed from the extracted G-buffers, and a set of user-defined parameters: length, bending, orientation, texture, etc. Strokes are then drawn starting from their 2D position. As stroke attributes remain constant over the drawing, a stroke is cut off whenever the underlying attributes drift too far from those at its starting point. This ensures that the important details of the scene are preserved, yielding a good depiction while using the user-supplied style.
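The cutoff rule lends itself to a short sketch: a stroke grows from its seed along its orientation and stops as soon as the blurred color or depth underneath drifts too far from the values sampled at the seed. The thresholds, step size and buffer conventions below are assumptions for illustration, not the authors' parameters.

```python
# Hedged sketch of the attribute-based cutoff: grow a stroke until the scene
# under it no longer matches what was sampled at its starting point.
import numpy as np

def grow_stroke(seed_xy, direction, color_buf, depth_buf,
                max_len_px=40.0, step_px=2.0,
                color_tol=0.15, depth_tol=0.05):
    x, y = seed_xy
    dx, dy = direction                            # unit 2D direction of the stroke
    h, w = depth_buf.shape
    seed_color = color_buf[int(y), int(x)]        # attributes at the seed position
    seed_depth = depth_buf[int(y), int(x)]
    path = [(x, y)]
    travelled = 0.0
    while travelled < max_len_px:
        x, y = x + dx * step_px, y + dy * step_px
        if not (0 <= x < w and 0 <= y < h):
            break                                 # stroke left the image
        c = color_buf[int(y), int(x)]
        d = depth_buf[int(y), int(x)]
        # cut the stroke off when the underlying attributes diverge from the seed
        if np.linalg.norm(c - seed_color) > color_tol or abs(d - seed_depth) > depth_tol:
            break
        path.append((x, y))
        travelled += step_px
    return path
```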

Compared to Meier’s work, each stage provides improved functionality: greater freedom in the choice of stroke attributes, view-dependent density management, and a faithful representation of scene features. Most of the work is done on the GPU, yielding a dynamic painting algorithm that bears some similarities with traditional techniques and runs at interactive frame rates.

References

DURAND, F. 2002. An invitation to discuss computer depiction. In 2nd International Symposium on Non-Photorealistic Animation and Rendering (NPAR’02).

MEIER, B. J. 1996. Painterly rendering for animation. In SIGGRAPH ’96: Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, ACM Press, 477–484.

Figure 2: Our painting method preserves depth cues in the scene: note how paint strokes stop at object silhouettes.
