Efficient Surface Remeshing by Error Diffusion

Efficient Depth Map Compression based on Lossless Edge Coding and Diffusion

Abstract — The multi-view plus depth (MVD) video format has recently been introduced for 3DTV and free-viewpoint video (FVV) scene rendering. Given one or several views with their depth information, depth image-based rendering techniques can generate intermediate views. The MVD format, however, generates large volumes of data which need to be compressed for storage and transmission. This paper describes a new depth map encoding algorithm which aims at exploiting the intrinsic properties of depth maps. Depth images represent the scene surface and are characterized by areas of smoothly varying grey levels separated by sharp edges at object boundaries. Preserving these characteristics is important to enable high-quality view rendering at the receiver side. The proposed algorithm proceeds in three steps: the edges at object boundaries are first detected using a Sobel operator; the positions of the edges are encoded using the JBIG algorithm; the luminance values of the pixels along the edges are then encoded using an optimized path encoder. The decoder runs a fast diffusion-based inpainting algorithm which fills in the unknown pixels within the objects, starting from their boundaries. The performance of the algorithm is assessed against JPEG 2000 and HEVC, both in terms of PSNR of the depth maps versus rate and in terms of PSNR of the synthesized virtual views.
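The decoder-side fill can be pictured as solving a discrete Laplace equation with the decoded edge pixels as boundary conditions. The sketch below is a plain Jacobi iteration in Python/NumPy, given for illustration only — it is not the paper's optimized fast inpainting, and `depth` and `known` are hypothetical inputs holding the decoded edge values and their mask:

```python
import numpy as np

def diffusion_inpaint(depth, known, n_iter=2000, tol=1e-4):
    """Fill unknown depth pixels by isotropic diffusion (Jacobi iterations
    of the discrete Laplace equation), keeping known pixels fixed.

    depth : 2D float array, valid only where `known` is True
    known : 2D bool array, True at encoded pixels (object boundaries)
    """
    d = depth.copy()
    d[~known] = d[known].mean()            # neutral initial guess
    for _ in range(n_iter):
        # 4-neighbour average, with edge-replicated borders
        padded = np.pad(d, 1, mode="edge")
        avg = 0.25 * (padded[:-2, 1:-1] + padded[2:, 1:-1]
                      + padded[1:-1, :-2] + padded[1:-1, 2:])
        new = np.where(known, d, avg)      # known pixels stay fixed
        if np.max(np.abs(new - d)) < tol:  # stop when the fill has settled
            return new
        d = new
    return d
```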

Error-Bounded and Feature Preserving Surface Remeshing with Minimal Angle Improvement

Feature preservation is crucial in remeshing. However, automatically identifying sharp features on a surface mesh is a difficult problem that depends both on the local shape and on the global context and semantic information of the model. This makes feature detection a strongly ill-posed problem. A wide range of approaches address this problem [30]–[32]; however, none of them works reliably for all kinds of meshes. Most remeshing algorithms avoid this problem by assuming that the features have been specified in advance [8], [9], [19], [21], [22], [33]. Some remeshing techniques try to preserve features implicitly [34], [35]. Vorsatz et al. [36] first apply a relaxation in the parameter domain, and then snap the vertices to feature edges and corners. Since they separate remeshing and feature-snapping, the resulting mesh quality near sharp features might be poor. Valette [4] alleviates this issue by embedding the Quadric Error Metric (QEM) approximation criterion inside the CVT optimization. However, the performance of this cluster-based method is highly dependent on the quality of the input, and sharp features might not be well preserved when users specify a small vertex budget. Jakob et al. [37] propose a general framework for isotropic triangular/quad-dominant remeshing using a unified local smoothing operator, in which the edges naturally align to sharp features. However, little attention is paid to the approximation error and element quality. We address this problem by optimizing the element quality explicitly, in combination with implicit feature preservation based on newly defined feature intensity functions.

Fab is the most efficient format to express functional antibodies by yeast surface display

Minimizing the Multi-view Stereo Reprojection Error for Triangular Surface Meshes

Another limitation of the algorithms implemented in [6, 20] is that the authors assume the scene is perfectly Lambertian. In real life, such surfaces are rare, and multi-view stereo algorithms therefore have to be robust to non-Lambertian reflections. To improve robustness, a common strategy is to modify the input images in order to remove specular highlights [23]. However, these methods are strongly limited by the specific lighting configuration. Some authors do not consider the image pixels that potentially have specular reflection components: they treat these points as outliers [1, 8]. To compensate for the loss of information, they then need many images. Other authors try to improve the robustness to non-Lambertian effects by directly incorporating a specular reflectance model in the mathematical formulation of the problem [9, 21]; nevertheless, until now these formulations have led to very complex and time-consuming optimization processes which return rather inaccurate results. Also, this kind of formulation is necessarily limited to specific materials (having the same reflectance as the one used in the modelling). Another widespread idea is to use robust similarity measures [10, 12, 19]. However, these are rather complex to implement and cannot deal with strongly erroneous information, as in the case of camera occlusions. So the second contribution of this paper is a modification of the Lambertian model in order to take into account deviations from the constant-brightness assumption via a smooth image-based component. We do not explicitly model the reflectance properties of the scene (contrary to [9, 21]) or other complex photometric phenomena introduced by the cameras. Thus the method is fully generic, simple, easy to implement and very efficient. It recovers quite robustly the shape and the diffuse radiance of non-Lambertian scenes viewed from flawed cameras.

Efficient surrogate construction by combining response surface methodology and reduced order modeling

The sequential approximation approach led here to using 13 basis vectors. In Figure 4, the approximation error based on the residual e_rb (cf. Eq. 5) is plotted as a function of the experiment number. Recall that in the sequential approach the experiments are covered one after another. At the beginning, the algorithm solves the finite element problem exactly with the parameters of the first point of the DoE and adds the solution vector to the basis used for the reduced order modeling. It then solves the second experiment by projection on this basis and checks whether the residual error is higher than the considered threshold (2×10⁻³ here). Obviously, a single vector in the reduced basis is insufficient to capture the variations of the displacement field for this problem. This can be seen in Figure 4 from the fact that the residual error for experiment 2 is still significantly above the threshold (red dotted line). In this case the full problem is solved for experiment 2 and the resulting displacement vector is added to the reduced basis. The approach continues sequentially with the following points until the end of the DoE. Each experiment point is first solved by projection on the reduced basis; if the residual error exceeds the threshold, the full problem is solved and its solution enriches the basis.
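As a sketch of this greedy loop — with `solve_full`, `solve_reduced` and `residual` as hypothetical callables standing in for the finite element solver, the projected solve, and the e_rb residual, and with a QR re-orthonormalisation added as a numerical safeguard not mentioned in the excerpt:

```python
import numpy as np

def sequential_reduced_basis(solve_full, solve_reduced, residual,
                             doe, tol=2e-3):
    """Greedy reduced-basis construction over a design of experiments (DoE).

    solve_full(p)       -> full finite element solution vector at parameters p
    solve_reduced(B, p) -> solution of the problem projected on basis B
    residual(u, p)      -> scalar residual error e_rb of candidate u at p
    The first DoE point is always solved exactly; afterwards the expensive
    full model is called only when the reduced solution misses the tolerance.
    """
    basis = None
    n_full = 0
    for p in doe:
        if basis is not None:
            u = solve_reduced(basis, p)
            if residual(u, p) <= tol:
                continue                  # reduced model is good enough here
        u = solve_full(p)                 # expensive full-order solve
        n_full += 1
        # append the new snapshot, re-orthonormalised for stability
        basis = u[:, None] if basis is None else np.linalg.qr(
            np.column_stack([basis, u]))[0]
    return basis, n_full
```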

Numerical simulation of metal forming processes with 3D adaptive Remeshing strategy based on a posteriori error estimation

Conclusion: In this paper, an automatic adaptive remeshing method based on local mesh modification has been presented to simulate various 3D metal forming processes in large elastoplastic deformations. The process is driven step by step with a small test displacement in order to adapt automatically to the constantly changing physical fields and geometrical shapes. At each time step, the remeshing iteration is repeated until the total number of optimal elements does not exceed a given threshold; this avoids useless iterations and reduces the computational cost. To avoid numerical diffusion during the mapping of variables, the data transfer error is examined using analytical functions (linear, quadratic, trigonometric and exponential) with volumetric and surface L2 error norms. The results show that the transfer techniques differ in efficiency, so, to combine their advantages, the SPR technique is used to transfer the variables inside the domain and the SPR-P technique to transfer the surface ones. Without an equilibration step, the numerical load-displacement curves under large displacement and/or high deformation rate show that after each remeshing step the system is no longer in mechanical equilibrium: the numerical curves exhibit a small fluctuation in the load-displacement response after each remeshing step. With the proposed equilibration step, however, good agreement with experimental data is observed and the imbalance fluctuations are reduced significantly. Also, good element quality is maintained with the adaptive strategy, whereas severe mesh distortion is observed without mesh adaptation, which can significantly reduce the accuracy of the numerical results. Finally, the overall results are very encouraging and show the efficiency and robustness of the proposed strategy in avoiding mesh distortion and simulating more complex geometries, with better filling of the die matrices compared to the standard approach. Several points can still be improved, such as the damage evolution. Moreover, it is important to take into account contact management between the piece and the rigid tool to avoid any interpenetration.

Anisotropic Polygonal Remeshing

In this paper, we propose a principal-curvature stroke-based anisotropic remeshing method that is both efficient and flexible. Lines of minimum and maximum curvature are discretized into edges in regions with obvious anisotropy (Figure 3, left), while traditional point sampling is used in isotropic regions and at umbilic points where there is no favored direction (as typically done by artists; see Figure 3, right). This approach guarantees an efficient remeshing, as it adapts to the natural anisotropy of a surface in order to reduce the number of necessary mesh elements. We also provide control over the mesh density, the adaptation to curvature, and the amount of anisotropy desired in the final remeshed surface. Thus, our technique offers a unified framework to produce quad-dominant polygonal meshes ranging from isotropic to anisotropic, and from uniform to adapted sampling.
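A minimal sketch of the region classification this relies on, assuming per-vertex principal curvatures k1, k2 are already estimated; the contrast measure and the 0.3 threshold below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def classify_anisotropy(k1, k2, thresh=0.3, eps=1e-12):
    """Split vertices into anisotropic vs. isotropic-or-umbilic sets from
    principal curvatures k1, k2 (arrays, one value per vertex).

    The normalised curvature contrast |k1 - k2| / (|k1| + |k2|) is ~0 at
    umbilic points (k1 == k2) and ~1 where one direction clearly dominates.
    """
    contrast = np.abs(k1 - k2) / (np.abs(k1) + np.abs(k2) + eps)
    anisotropic = contrast > thresh      # trace curvature lines here
    return anisotropic, ~anisotropic     # point-sample the rest
```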

An Efficient Bit Allocation for Compressing Normal Meshes with an Error-driven Quantization

Table 3 gives the PSNR values for the proposed coder and for the NMC coder as a function of the global bitrate for all the models. The proposed coder is always better than NMC; in particular, we observe an improvement of up to +2.5 dB in coding performance. In addition, Fig. 11 shows some visual benefits of the proposed coder. This figure shows the distribution of the reconstruction error on the object Feline, quantized with the proposed coder (Fig. 11(a)) and with NMC (Fig. 11(b)). The colour corresponds to the magnitude of the point-to-surface distance, normalized by the bounding box diagonal, between the input irregular mesh and the quantized one (computed with MESH [27]). One can see that NMC leads to larger local errors than the proposed algorithm. Moreover, Fig. 12 shows renderings of different compressed versions of Venus, demonstrating that even at low bitrates the meshes quantized with the proposed algorithm are not far from the original irregular one.

Diffusion Matrices from Algebraic-Geometry Codes with Efficient SIMD Implementation

…circulant and has small coefficients. More recently, the PHOTON hash function [8] introduced the use of matrices that can be obtained as the power of a companion matrix, whose sparsity may be useful in lightweight hardware implementations. The topic of finding such so-called recursive diffusion layers has been quite active in the past years, and has led to a series of papers investigating some of their various aspects [17, 22, 2]. One of the most recent developments shows how to systematically construct some of these matrices from BCH codes [1]. This allows in particular the construction of very large recursive MDS matrices, for instance of dimension 16 over F_{2^8}. This defines a linear mapping over a …
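To make the companion-matrix idea concrete, here is a small Python sketch that builds such a "recursive" layer over GF(2^8). The field polynomial (AES's) and the bottom-row coefficients are illustrative assumptions, and no MDS check is performed:

```python
def gf_mul(a, b, poly=0x11B):
    """Carry-less multiplication in GF(2^8) modulo `poly` (AES polynomial)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def _xor_sum(values):
    r = 0
    for v in values:
        r ^= v
    return r

def mat_mul(A, B):
    """Matrix product over GF(2^8): XOR replaces addition."""
    n = len(A)
    return [[_xor_sum(gf_mul(A[i][k], B[k][j]) for k in range(n))
             for j in range(n)] for i in range(n)]

def companion(last_row):
    """Companion matrix: ones on the superdiagonal, `last_row` at the bottom.
    One application needs multiplications for a single row only -- the
    sparsity that makes 'recursive' layers cheap in hardware."""
    n = len(last_row)
    C = [[1 if j == i + 1 else 0 for j in range(n)] for i in range(n)]
    C[n - 1] = list(last_row)
    return C

C = companion([0x02, 0x03, 0x01, 0x04])   # hypothetical coefficients
M = C
for _ in range(3):
    M = mat_mul(M, C)                      # M = C^4: dense 4x4 diffusion matrix
```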

Towards better error statistics for atmospheric inversions of methane surface fluxes

…studies which build the errors from physical considerations (e.g., Bergamaschi et al., 2010). At most sites (GIF, JFJ and KAS excepted), the three methods attribute averaged observation errors that follow the same order: DS > ML > ND. The errors from R_ML are on average 34 % smaller than the errors in R_DS. The error variances in R_ND are calculated to be even smaller (54 % less than in R_DS on average). In the Bayesian unbiased framework, the inversion with the tuple from the maximum of likelihood is then expected to be more constrained by the observations than the one from Desroziers' scheme. The non-diagonal tuple seems more constrained by the observations than the other two, but the covariances make it difficult to compare only the variances precisely. The three methods share the same day/night patterns at all sites apart from the three mountain sites (JFJ, KAS and PUY) and the two sites BIS and BIK: compared to the errors during the "night" (17:00–00:00 plus 00:00–12:00), the errors during the "day" (12:00–17:00) are 25 % (resp. 23 % and 31 %) smaller for the DS (resp. ML and ND) method. The errors are consistently smaller when the PBL is well developed, i.e., when the local emissions are quickly mixed in the atmosphere, and hence when the CTM simulates the atmospheric concentrations more realistically. At the mountain sites (JFJ, KAS and PUY; see Table 1), the rough DS method does not produce the same patterns as the other two; this primarily suggests that, for the sites mostly located in the free troposphere in spring (characterised by synoptic variability), the averaging over "day" and "night" intervals is less relevant than for the sites influenced by the PBL. Additionally, at mountain sites, the low-precision DS method disagrees with the other two because it cannot compute the errors that occur when the PBL height is close to the site altitude and when polluted air masses can be locally uplifted to the site. This phenomenon occurs at time scales that are smaller than, and not synchronized with, the partitioning made in Desroziers' scheme …

Efficient Delta-Parametrization of 2D Surface-Impedance Solutions

[Equations (9)–(10) are garbled in this extraction: (9) gives an expansion in powers of δ up to δ³, and (10) imposes a condition in the domain Ω_O.] Here the δ term is the first-order parametrisation of Section II, which together with (10) guarantees that our objectives (8) are met. For the δ² and δ³ terms, this expansion reuses the arguments of the complex terms of development (5), whereas we have so far formally justified it only for the surface-impedance solution.

An Efficient Hybrid Optimization Strategy for Surface Reconstruction

An efficient and general surface reconstruction strategy is presented in this study. The proposed approach can deal with both open and closed surfaces of genus greater than or equal to zero, and it is able to approximate non-convex sets of target points (TPs). The surface reconstruction strategy is split into two main phases: (a) the mapping phase, which makes use of the shape preserving method (SPM) to get a proper parametrisation of each sub-domain composing the TPs set; (b) the fitting phase, where each patch is fitted by means of a suitable Non-Uniform Rational Basis Spline (NURBS) surface without introducing simplifying hypotheses and/or rules on the parameters tuning the shape of the parametric entity. Indeed, the proposed approach aims at stating the surface fitting problem in the most general sense, by integrating the full set of design variables (both integer and continuous) defining the shape of the NURBS surface. To this purpose, a new formulation of the surface fitting problem is proposed: it is stated in the form of a special Constrained Non-Linear Programming Problem (CNLPP) defined over a domain of variable dimension, wherein both the number and the values of the design variables are simultaneously optimised. To deal with this class of CNLPPs, a hybrid optimisation tool has been employed. The optimisation procedure is split into two steps: first, an improved genetic algorithm (GA) optimises both the values and the number of design variables by means of a two-level Darwinian strategy allowing the simultaneous evolution of individuals and species; second, the solution provided by the GA constitutes the initial guess for the subsequent deterministic optimisation, which aims at improving the accuracy of the fitting surfaces. The effectiveness of the proposed methodology is proven through some meaningful benchmarks taken from the literature.
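A structural sketch of such a two-stage hybrid loop — not the authors' two-level Darwinian GA: the variable-length genome, the mutation rates and the toy fitness below are all illustrative assumptions, and in the paper the parameter vector would encode a NURBS patch:

```python
import random
import numpy as np
from scipy.optimize import minimize

def hybrid_fit(fitness, n_min=4, n_max=12, pop=30, gens=50, sigma=0.2):
    """Stage 1: a small GA searches over BOTH the number of design variables
    and their values (variable-length genomes).  Stage 2: a deterministic
    optimiser polishes the GA's best candidate.  `fitness(x)` maps a vector
    of any admissible length to a scalar error."""
    def rand_ind():
        return np.random.uniform(-1.0, 1.0, random.randint(n_min, n_max))

    population = [rand_ind() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness)
        survivors = population[: pop // 2]
        children = []
        for parent in survivors:
            child = parent + sigma * np.random.randn(parent.size)
            if random.random() < 0.2:      # structural mutation: grow/shrink
                if child.size > n_min and random.random() < 0.5:
                    child = np.delete(child, random.randrange(child.size))
                elif child.size < n_max:
                    child = np.insert(child, random.randrange(child.size + 1),
                                      np.random.uniform(-1, 1))
            children.append(child)
        population = survivors + children

    best = min(population, key=fitness)
    # deterministic refinement of the continuous variables only
    res = minimize(fitness, best, method="Nelder-Mead")
    return res.x, res.fun

# toy usage: the size penalty mimics parsimony on the number of variables
x, err = hybrid_fit(lambda v: float(np.sum(v**2)) + 0.01 * v.size)
```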

Efficient Search for Optimal Diffusion Layers of Generalized Feistel Networks

…each round make the problem easier, and in [?], Kales et al. give such a construction for any number of blocks. Our contribution. In this paper, we focus on even-odd permutations and we complete the work on the 10-year-old problem (introduced by [?]) of finding optimal even-odd permutations for 32 blocks, as well as finding optimal even-odd permutations for 28, 30 and 36 blocks, which were not given in the previous literature. To do so, we propose a new characterization of a permutation reaching full diffusion after a given number of rounds. Using this characterization, we are able to create a very efficient algorithm which, in the previously mentioned cases, yields all the permutations that achieve full diffusion in 9 rounds. Note that our algorithm essentially uses branch-and-bound techniques, and thus it is hard to evaluate its exact complexity. However, the size of the search space goes from 2^43 …
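Full diffusion itself is easy to check by simulation: propagate, round by round, the set of input blocks that each state block depends on. The sketch below assumes the standard Type-2 convention (even block 2j is fed through F into odd block 2j+1, then the block permutation is applied); this convention and the example permutation are assumptions of the sketch, not the paper's exact model:

```python
def rounds_to_full_diffusion(perm, max_rounds=64):
    """Number of rounds after which every block of a Type-2 generalised
    Feistel network depends on every input block; perm[i] is the position
    block i is sent to after the Feistel mixing step."""
    n = len(perm)
    deps = [{i} for i in range(n)]        # deps[i]: inputs block i depends on
    for r in range(1, max_rounds + 1):
        # Feistel step: each odd block absorbs its even neighbour's deps
        mixed = [set(s) for s in deps]
        for j in range(n // 2):
            mixed[2 * j + 1] |= deps[2 * j]
        # block shuffle
        deps = [None] * n
        for i in range(n):
            deps[perm[i]] = mixed[i]
        if all(len(s) == n for s in deps):
            return r
    return None

# the classic cyclic shift diffuses slowly compared with optimal permutations
print(rounds_to_full_diffusion([(i - 1) % 8 for i in range(8)]))
```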

Adaptive remeshing for incremental forming simulation

Simulating this process is a complex task. First, the tool diameter is small compared to the size of the metal sheet. Moreover, during its displacement, the tool deforms almost every part of the sheet, which implies that small elements are required everywhere on the sheet. For implicit simulations, the computation time is thus prohibitive. In this paper, the simulations were performed using the finite element code Lagamine [3], developed at the University of Liege. In order to decrease the simulation time, a new method using adaptive remeshing has been developed.

Anisotropic Error Estimate for High-order Parametric Surface Mesh Generation

ᵗx_i Q x_i ≥ 1, for all i ∈ {1, …, m}. (7) The first line of (7) expresses the fact that we are looking for the metric with the largest area (or volume in 3D). Since the cost function of this problem is nonlinear, one rewrites it as a problem in L = log(Q). Notice that L is not a metric but only a symmetric matrix. This formulation also allows the discrete counterpart of the problem to be well-posed; indeed, in [18] it is shown that the discrete form of (7) is ill-posed. Since det(Q) = exp(trace(L)), a linear cost function is recovered by replacing Q by L in (7). On the other hand, the constraints, which are linear in Q, become nonlinear when written in terms of L; this can lead to really expensive computations. To avoid this problem, the convexity property of the exponential is used to replace these constraints by approximate linear ones. More precisely, through the classic convexity inequality, if x ∈ R^n satisfies ᵗ…
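A small 2D sketch of the resulting linearised problem, an illustration under stated assumptions rather than the paper's formulation: minimising trace(L) minimises det(Q) and so maximises the unit-ball area of Q = exp(L); the constraints are linearised via the convexity bound exp(L) - (I + L) ⪰ 0 for symmetric L, so requiring ᵗx_i (I + L) x_i ≥ 1 guarantees ᵗx_i exp(L) x_i ≥ 1. The sample points are arbitrary:

```python
import numpy as np
from scipy.optimize import linprog
from scipy.linalg import expm

def largest_metric(points):
    """Unknown L = log(Q) is symmetric, parameterised by z = (a, b, c) with
    L = [[a, b], [b, c]].  Minimise trace(L) = a + c under the linearised
    constraints x^T (I + L) x >= 1 for every sample point x."""
    A_ub, b_ub = [], []
    for u, v in points:
        # -(a*u^2 + 2*b*u*v + c*v^2) <= (u^2 + v^2) - 1
        A_ub.append([-u * u, -2 * u * v, -v * v])
        b_ub.append(u * u + v * v - 1.0)
    res = linprog(c=[1.0, 0.0, 1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * 3)
    a, b, c = res.x
    return expm(np.array([[a, b], [b, c]]))   # recover Q = exp(L)

# toy usage: points the metric's unit ball must exclude
Q = largest_metric([(0.5, 0.0), (0.0, 0.25), (0.3, 0.3)])
```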

Surface Modification of Powder Metallurgy Titanium by Colloidal Techniques and Diffusion Processes for Biomedical Applications

…the sintered substrate, SintTi–Mo, which presents a uniform and compact diffusion area with a depth of 45 µm along the sample. In Figure 6c, the material modified by introduction of molybdenum plus a titanium nitride coating, Ti–Mo–TiN, shows the deepest diffusion area, of 115 µm. The brightest zones in Figure 6a and b indicate remnants of undiffused molybdenum, and greater porosity is visible in the GreenTi–Mo sample. Microstructures and diffusion layer depths are very reproducible. In previously published work with similar diffusion temperatures, the diffusion layer obtained presents a thickness of around 200 µm; however, the diffusion time employed is longer, requiring a higher level of energy and the use of an activator. [15] On the other hand, it has been demonstrated that increasing the energy supplied to introduce an alloying element does not imply an increase in the content of that alloying element. [24] Figure 6d shows the consolidated 5 µm TiN layer obtained by electrophoretic deposition onto the polished titanium substrate, Ti–TiN. As can be seen, it is a well-defined coating with a continuous and homogeneous character. The aspect and thickness of this TiN coating can be compared to those presented in Mendoza et al. [22] Both coatings, prepared by electrophoretic deposition, provide similar TiN layers in a thickness range of 5–20 µm, depending on the substrate material.

Thermal diffusion of nanocarbons near a thermoplastic polymer surface

1) Fabrication of thin nanotube films. Among the various routes for fabricating thin composite layers discussed above, methods based on impregnating nanotube layers are the best suited to the fabrication of functional composites. Most methods rely on a solvent, on in-situ polymerisation, or on a combination of both to control the viscosity of the mixture and optimise the penetration of the polymer into the nanotube film. A final, less explored route, developed by E. Pavlenko et al. [1] at CEMES, consists of fabricating thin composite layers by thermal diffusion of nanotubes deposited on the surface of a polymer. An experimental fabrication protocol was developed for an application with poly-ether-ether-ketone. It relies on the drop-by-drop deposition of a high-concentration NMP/MWCNT dispersion on the polymer surface, followed, after evaporation of the solvent, by annealing above the melting point of the polymer. After annealing, the nanotube film is impregnated by the polymer and a thin nanocomposite layer forms, with an average thickness of one micrometre and a high nanotube concentration at the polymer surface. The fabricated layer is opaque and exhibits a resistivity on the order of ten kOhm. The nanotubes are preferentially aligned parallel to the surface, and a penetration front can be observed at the interface between the thin nanocomposite layer and the polymer. However, the drop-by-drop deposition of the nanotube dispersion limits the homogeneity of the nanotube films fabricated before impregnation. NMP has a very low vapour pressure and a boiling point around 150 °C, and is very effective at solvating certain polymers. Consequently, residual NMP in the fabricated composite layers can degrade the electrical and mechanical properties of the films in an unpredictable way. Using nanotube dispersions in solvents with a higher vapour pressure and a lower overall affinity for the polymer than NMP should allow better control over the properties of the fabricated thin nanocomposite layers, as well as a faster fabrication process. The search for a fast fabrication process yielding homogeneous thin nanotube films over surfaces of a few square centimetres opens up several application possibilities, such as strain or chemical sensors, electromagnetic shielding, or repair processes for the surface of structural nanocomposites.

TVD remeshing formulas for particle methods

Figure 1 shows the solution obtained at t = 0.8 for the original and TVD Λ2 remeshing, for h = 0.02 and, following the above analysis, a CFL number of 2/3. The TVD remeshing formula used a Van Leer limiter. The improvement obtained by the limiter is clear. We continue with the case of the passive transport of a scalar in a 2D incompressible flow. Although it does not fall within the general case, this case is useful to illustrate how the TVD formulas are extended to the multidimensional setting and how the CFL condition can be relaxed. To deal with advection in a multidimensional field, the approach we choose follows the classical splitting used in finite-difference methods. In push-and-remesh methods, this means that particles are advected in one direction, then remeshed, then advected in a second direction, and so on. This is clearly a first-order-in-time method, and higher-order strategies can be devised as for classical differential equations. Note that this method, for non-constant velocities, is no longer equivalent to a finite-difference method, because particles in the second and subsequent advection stages "see" velocity values at the locations where they have been remeshed.
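For reference, a minimal 1D push-and-remesh step with the (unlimited) Λ2 kernel, in Python/NumPy; the paper's TVD correction with the Van Leer limiter is deliberately omitted, and the periodic grid and function names are assumptions of this sketch:

```python
import numpy as np

def advect_and_remesh_lambda2(u, velocity, dt, h):
    """One push-and-remesh step for 1D advection on a periodic grid: push
    each particle with its local velocity, then redistribute its weight
    onto the 3 nearest grid points with the Lambda_2 kernel (which
    conserves the total mass and the first moment)."""
    n = u.size
    x = (np.arange(n) + velocity * dt / h) % n   # particle positions (grid units)
    j = np.round(x).astype(int) % n              # nearest grid point
    y = x - np.round(x)                          # signed offset, |y| <= 1/2
    # Lambda_2 weights for grid points j-1, j, j+1 (they sum to 1)
    w_m = 0.5 * y * (y - 1.0)
    w_0 = 1.0 - y * y
    w_p = 0.5 * y * (y + 1.0)
    new = np.zeros_like(u)
    np.add.at(new, (j - 1) % n, w_m * u)         # scatter with periodic wrap
    np.add.at(new, j, w_0 * u)
    np.add.at(new, (j + 1) % n, w_p * u)
    return new
```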
