In this paper, we have developed an efficient iterative algorithm for the solution of frictional dynamic contact problems with locally refined geometries. For this, we have employed an overlapping domain decomposition with an independent fine mesh around the contact zone. For linear problems, we have shown that under some reasonable assumptions, the convergence rate of the corresponding algorithm is bounded independently of the material and discretization parameters. Further, we have investigated how the scheme can be extended to nonlinear contact problems as an inexact inner solver within the semismooth Newton iteration. Finally, we have shown that the overlapping discretization can be extended to a locally finer time scale, and that the resulting coupled problem is energy conserving in the linear case and can be solved efficiently by an iterative procedure.
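As a minimal illustration of the semismooth Newton idea mentioned above, the sketch below applies a primal-dual active set iteration (a classical semismooth Newton variant for contact-type complementarity conditions) to a 1D obstacle problem. The grid, load, and obstacle are toy assumptions, not the paper's setting.

```python
# Primal-dual active set (semismooth Newton) sketch for a 1D obstacle
# problem: min 0.5 u^T A u - f^T u subject to u >= psi, with homogeneous
# Dirichlet boundary conditions. All data below are illustrative.
import numpy as np

n = 99                                    # interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2      # 1D FD Laplacian
f = -10.0 * np.ones(n)                    # downward load
psi = -0.1 - 0.3 * (x - 0.5)**2           # obstacle below the membrane

u, lam, c = np.zeros(n), np.zeros(n), 1e3
for k in range(50):
    active = lam + c * (psi - u) > 0      # predicted contact set
    # Linearized complementarity system: active rows enforce u = psi,
    # inactive rows keep the equilibrium equations A u = f.
    M, rhs = A.copy(), f.copy()
    M[active, :] = 0.0
    M[active, active] = 1.0
    rhs[active] = psi[active]
    u = np.linalg.solve(M, rhs)
    lam = A @ u - f                       # multiplier from equilibrium residual
    lam[~active] = 0.0
    if np.array_equal(active, lam + c * (psi - u) > 0):
        break                             # active set settled -> converged
```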
Abstract
This paper presents an efficient and robust automatic adaptive local multilevel mesh refinement strategy for unilateral frictional contact problems in elastostatics. The proposed strategy couples the Local Defect Correction (LDC) multigrid method (Hackbusch, 1984) with the ZZ a posteriori error estimator (Zienkiewicz and Zhu, 1987). An extension of the LDC method to frictional contact problems is introduced. An interesting feature of this extended LDC algorithm is that it still relies only on interpolations of displacement fields: neither force conservation nor exchange of contact status is required between the refinement levels. The ZZ a posteriori error estimator is exploited to automatically build the sub-grids of the LDC method, using a criterion based on the relative error in stress. The efficiency of the proposed strategy is analyzed on examples derived from nuclear engineering. Practical numerical choices are proposed and justified. The refinement process automatically adapts and stops with respect to a given tolerance. Post-processing shows that the sub-grids concentrate around the contact areas and that the converged LDC solution always respects the prescribed tolerance.
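To make the coupling pattern concrete, here is a minimal 1D sketch of the LDC iteration on a Poisson problem with one fine patch; it is a toy stand-in for the elastostatic contact setting, and the grid sizes, patch location, and right-hand side are my assumptions. It illustrates the point made above: the levels communicate only through interpolated field values (fine-grid Dirichlet data at the patch boundary) and a local defect on the coarse right-hand side.

```python
# Local Defect Correction (LDC) sketch for -u'' = f on [0,1], u(0)=u(1)=0,
# with a fine patch on [0, 1/2]. Exact solution: u = sin(pi x).
import numpy as np

def laplacian(n, h):
    """Tridiagonal 1D finite-difference Laplacian on n interior nodes."""
    return (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

f = lambda x: np.pi**2 * np.sin(np.pi * x)
NH, ratio = 8, 4                         # coarse cells, refinement ratio
H = 1.0 / NH; h = H / ratio
xH = np.linspace(0.0, 1.0, NH + 1)       # coarse nodes
nl = (NH // 2) * ratio                   # fine cells covering the patch [0, 1/2]
xh = np.linspace(0.0, 0.5, nl + 1)       # fine nodes
AH, Ah = laplacian(NH - 1, H), laplacian(nl - 1, h)

d = np.zeros(NH - 1)                     # local defect correction (coarse RHS)
for k in range(10):
    # 1) coarse solve with current defect correction
    uH = np.linalg.solve(AH, f(xH[1:-1]) + d)
    uH_full = np.concatenate(([0.0], uH, [0.0]))
    # 2) local fine solve; Dirichlet data at the interface x = 1/2 is taken
    #    (here: injected) from the coarse displacement field
    rhs = f(xh[1:-1]).copy()
    rhs[-1] += uH_full[NH // 2] / h**2
    uh = np.linalg.solve(Ah, rhs)
    uh_full = np.concatenate(([0.0], uh, [uH_full[NH // 2]]))
    # 3) defect at coarse nodes strictly inside the patch, computed from the
    #    restricted fine solution (coarse nodes coincide with fine nodes)
    r = uh_full[::ratio]
    d_new = np.zeros(NH - 1)
    for i in range(1, NH // 2):
        d_new[i - 1] = (-r[i - 1] + 2 * r[i] - r[i + 1]) / H**2 - f(xH[i])
    if np.max(np.abs(d_new - d)) < 1e-12:
        break                            # uH_full / uh_full hold the composite solution
    d = d_new
```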
3. Nonlinear domain decomposition strategies for frictional multi-contact problems
Domain decomposition methods (substructuring techniques) are efficient because they reduce memory storage and computation time. Moreover, these methods take advantage of the new multi-processor generation of computers, as they exhibit an intrinsic parallelism with high granularity. The main component of a domain decomposition algorithm is a numerical solver based on the solution of local independent subproblems on the subdomains. In addition, these methods are efficient solvers in a classical mono-processor environment as well. First we investigate domain decomposition methods introduced for linear systems. The method used hereafter is the primal Schur complement method, which consists in imposing displacement continuity on the interfaces and in controlling the normal stress gap. Next we develop a strategy to solve a large-scale multi-contact problem by combining the GNM with the domain decomposition method.
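As a concrete reminder of the condensation behind the primal Schur complement method, the sketch below statically condenses the interior unknowns of two subdomains onto shared interface degrees of freedom. The matrices are random SPD placeholders, not an actual finite element assembly, and the GNM coupling is not shown.

```python
# Primal Schur complement (substructuring) sketch for two subdomains
# sharing ng interface dofs. All matrices are toy placeholders.
import numpy as np

rng = np.random.default_rng(0)
def spd(n):
    """Random SPD placeholder for a subdomain stiffness block."""
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

n1, n2, ng = 8, 10, 4                   # interior dofs (domains 1, 2), interface dofs
A11, A22, Agg = spd(n1), spd(n2), spd(ng)
A1g = 0.1 * rng.standard_normal((n1, ng))   # weak coupling keeps the toy S definite
A2g = 0.1 * rng.standard_normal((n2, ng))
f1, f2, fg = rng.standard_normal(n1), rng.standard_normal(n2), rng.standard_normal(ng)

# Eliminate interior unknowns: S ug = g, with
#   S = Agg - A1g^T A11^{-1} A1g - A2g^T A22^{-1} A2g
S = Agg - A1g.T @ np.linalg.solve(A11, A1g) - A2g.T @ np.linalg.solve(A22, A2g)
g = fg - A1g.T @ np.linalg.solve(A11, f1) - A2g.T @ np.linalg.solve(A22, f2)
ug = np.linalg.solve(S, g)              # interface displacements (continuity built in)
u1 = np.linalg.solve(A11, f1 - A1g @ ug)   # independent local back-substitutions,
u2 = np.linalg.solve(A22, f2 - A2g @ ug)   # parallel across subdomains
```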
Solution of frictional contact problems in a general three-dimensional setting is a challenging task. Much research has been conducted for the solution of 2D and 3D contact problems.
In the present work, we have adopted an adaptation technique based on hierarchical estimators; in other words, we use higher-order interpolation to evaluate local errors. It is noteworthy that edge-based errors are well suited for contact problems owing to their ability to generate anisotropic meshes. They were introduced mainly in Habashi et al. [1996], D'azevedo [1991] and D'azevedo and Simpson [1991]. We have formulated these techniques for the case of a mesh r-adaptation procedure. It turns out that, with some restrictions that will be outlined in the paper, the adaptation allows using a moderately coarse mesh with acceptable accuracy.
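A minimal 1D sketch of the hierarchical, edge-based idea: the indicator on each edge is the gap between higher-order and linear interpolation at the edge midpoint, and the nodes are then relocated (r-adaptation) to equidistribute it. The layer function and node count are illustrative assumptions, not the paper's setup.

```python
# Hierarchical edge error indicator driving 1D r-adaptation by
# equidistribution. The "solution" is a known toy function with a layer,
# re-evaluated analytically after each node relocation.
import numpy as np

u = lambda x: np.tanh(20 * (x - 0.5))    # sharp internal layer at x = 0.5
x = np.linspace(0.0, 1.0, 21)            # initial uniform mesh

for sweep in range(5):
    xm = 0.5 * (x[:-1] + x[1:])          # edge midpoints
    lin = 0.5 * (u(x[:-1]) + u(x[1:]))   # linear interpolant at the midpoints
    eta = np.abs(u(xm) - lin)            # hierarchical edge indicator
    # Equidistribute: map nodes to cumulative "error mass", then invert
    # the (monotone, piecewise-linear) map at uniform levels.
    w = np.concatenate(([0.0], np.cumsum(eta + 1e-12)))
    w /= w[-1]
    x = np.interp(np.linspace(0.0, 1.0, x.size), w, x)
```

After a few sweeps the nodes cluster in the layer, which is the anisotropic-refinement behaviour the edge indicator is meant to produce.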
1 Introduction
The numerical simulation of contact problems is still a delicate matter, especially when large transformations are involved. In that case, relatively large sliding must be taken into account between the contact surfaces, and the discretization error induced by linear finite elements may not be acceptable. In particular, linear elements lead to a facetization of the contact surface, meaning an unavoidable discontinuity of the normal vector to this surface, with uncertainty over the precision of the results and irregularity of the speed […]
Figure 7. Upper view of the intermediate block with multi-body calculation (left) and with single-body calculation (right)
Figure 8. Geometric description
In this calculation, remeshing is allowed. An optimal topology method for tetrahedra is used to automatically generate a better mesh when degeneracy of elements occurs [COU 97a, COU 00]. This is important for large-strain problems, in which the mesh degenerates long before the end of the simulation. The algorithm improves the mesh, and the nodal velocity fields are interpolated from the old mesh to the new one, as sketched below. In this study only the slave body is remeshed, while the other remains unchanged throughout a given simulation. Using the remeshing algorithm does not deteriorate the satisfaction of the contact conditions, because the possible penetrations of the slave domain into the master ones are small enough to be corrected. The penetrations of the master domain into the slave one are of the same magnitude as those due to the master-slave approach.
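A hedged sketch of the field-transfer step on scattered nodes, using linear interpolation with a nearest-neighbour fallback for nodes that fall outside the old point cloud's hull; the coordinates and the field are toy stand-ins, and a vector field would be handled one component at a time.

```python
# Transfer of a nodal field from an old mesh to a new one by interpolation,
# in the spirit of the remeshing step described above.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)
old_nodes = rng.random((200, 2))                 # nodal coordinates, old mesh
v_old = np.sin(2 * np.pi * old_nodes[:, 0])      # one velocity component (toy field)
new_nodes = rng.random((300, 2))                 # nodal coordinates after remeshing

# Linear interpolation where the new node lies inside the old nodes' convex
# hull; nearest-neighbour fallback elsewhere (e.g., a smoothed boundary).
v_new = griddata(old_nodes, v_old, new_nodes, method="linear")
outside = np.isnan(v_new)
v_new[outside] = griddata(old_nodes, v_old, new_nodes[outside], method="nearest")
```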
Geometrically nonlinear formulations handle the situation of large rigid body motion. In contrast to the linear elasticity setting, the rigid body motions, i.e., rotations and translations, do not contribute to the elastic energy. However, the resulting equations cannot be solved directly, and even though small strains are considered, a rather expensive iteration scheme has to be applied. Moreover, when considering high-speed rotation with large time steps, this method shows unsatisfactory numerical results: the approximation of the rotation is poor, and spurious oscillations may be observed. Such problems become even more difficult to solve in the case of contact and have led to many research activities from both the numerical and the theoretical points of view (see [5, 12, 13, 17, 19] and the references therein for an overview of the topic). To tackle both high-speed rotating bodies and contact, a suitable combination of time solvers and algorithms to compute the contact zone has to be applied.
a result is available only in a finite-dimensional setting [10]. Here, we extend this latter approach to the (infinite-dimensional) problem of unilateral contact with cohesive forces, assuming the surjectivity of the operator B defined by (6.3) and using a compactness argument in the (closure of the) cone of feasible directions. Sections 6.2 and 6.3 are set in a general framework encompassing the particular case of unilateral contact problems with cohesive forces. In Section 6.4, we analyze mixed finite element approximations of the augmented Lagrangian formulation of unilateral contact problems with cohesive forces. Since a nonlinear problem needs to be solved for the normal displacement on Γ, it is convenient to use a collocation method. In the same way, numerical integration can be employed to build the Jacobian matrix in Newton's method. A key point is the use of discontinuous finite element spaces leading to a collocation method, while ensuring an inf-sup condition which is the discrete counterpart of the surjectivity of the operator B. The resulting mixed finite element approximation is nonconforming. Numerous works have been devoted to the error analysis of mixed formulations for unilateral contact problems, especially for two-field formulations (bulk displacement-displacement on Γ, or bulk displacement-normal stress on Γ). To our knowledge, the only work dealing with the three-field augmented Lagrangian formulation is [24], in a conforming and consistent case. Here, we prove a priori error estimates in the present nonconforming setting for various finite element spaces, under the simplifying assumption that the cohesive forces are mild enough. In Section 6.5, we describe the algorithms. We prove the convergence of the decomposition-coordination method in the particular case of a convex functional split into a convex part and a nonconvex part. Finally, numerical simulations illustrating the theoretical results are presented in Section 6.6.
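The decomposition-coordination idea can be illustrated on a toy convex problem; the sketch below is an ALG2/ADMM-type iteration for min ½‖Ax − b‖² subject to x ≥ 0, alternating a smooth subproblem, a projection, and a multiplier (coordination) update. It shows only the splitting pattern, not the chapter's cohesive-contact formulation.

```python
# Decomposition-coordination (augmented Lagrangian) sketch:
# split min f(x) + g(z) subject to x = z, with f = 0.5||Ax-b||^2 and
# g the indicator of the nonnegative orthant. Data are toy placeholders.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
rho = 1.0                                 # augmentation (penalty) parameter
x = z = u = np.zeros(10)
AtA, Atb = A.T @ A, A.T @ b
for k in range(200):
    # smooth subproblem: minimize f(x) + (rho/2)||x - z + u||^2
    x = np.linalg.solve(AtA + rho * np.eye(10), Atb + rho * (z - u))
    z = np.maximum(0.0, x + u)            # nonsmooth subproblem: projection
    u = u + (x - z)                       # coordination: scaled multiplier update
    if np.linalg.norm(x - z) < 1e-10:
        break                             # primal feasibility reached
```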
Figure 1.8: Signorini’s exact solution for our numerical results from Chapter 2. The magenta dots represent the values of the numerical solution at the face and cell barycenters of a hexagonal mesh (depicted in black). The contact boundary corresponds to the side {y = 0}.

[…] terms, either using the trace of the cell unknowns (cell-based trace version) or using the face unknowns directly (face-based trace version). The face-based trace version uses equal-order polynomials for cell and face unknowns, whereas the cell-based trace version uses cell unknowns of one order higher than face unknowns. The latter choice turns out to be the only one leading to theoretically optimal error estimates for Signorini’s problem. For Dirichlet conditions, optimal error estimates are established for both versions. The key idea in the analysis, inspired by Burman & Ern [24], is to devise a local reconstruction operator without contribution from Dirichlet/contact faces. Numerical results are presented for a test case with an analytical solution to verify the theoretical predictions. The exact solution is depicted in Figure 2.6. Finally, although we do not exploit here the capability of hybrid discretization methods to support polyhedral meshes, we note that the idea of using polyhedral meshes for contact problems has been advocated, for instance, by Wriggers et al. [131], where the authors use a Virtual Element method combined with either Lagrange multipliers or penalized formulations for contact problems. The work developed in this chapter has been submitted as the paper [36].
The chosen rotation speed is the maximum speed of the vertical lathe: Ω = 300 rpm (i.e., 10π rad/s, or 5 Hz). If the frequency of the mode with n = 10 nodal diameters is approximated by f_c = 200 Hz, it follows that the blade frequency is about f_a = 150 Hz.
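For reference, these numbers are consistent with the usual unit conversion and (my reading, not stated explicitly here) the rotating-frame relation for a mode with n nodal diameters:

\[
\Omega = 300\ \text{rpm} = \frac{300 \cdot 2\pi}{60}\ \text{rad/s} = 10\pi\ \text{rad/s} \;\widehat{=}\; 5\ \text{Hz},
\qquad
f_a \approx f_c - n\,\Omega = 200 - 10 \times 5 = 150\ \text{Hz}.
\]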
A first series of measurements shows how the response of the casing evolves as the rotation speed of the lathe increases. This series of measurements was carried out by setting the lathe in rotation, then bringing the blade into contact (light pressure: the blade touches the casing over a single zone about 30 cm long); the responses of the blade and the casing were recorded over the interval [0..300 Hz], averaged over 3 acquisitions. In order to limit the effects of the form defect, the contact was made in the median part of the casing (see Figure VI.3). The contact could have been made in the lower part, since the radius there is practically constant, but three reasons argue against this:
A similar idea is presented in [Robles, 1995], although there the focus is assembly. In this thesis, an action map for states that might be visited given the nominal […]
3. Originality of the finite element discretizations
The X-FEM method is based on the partition of unity (Melenk et al., 1996) and enriches the shape-function basis (Moës et al., 1999) in a neighborhood of the crack. We focus our attention on the finite elements completely cut in two by the crack, enriched with a Heaviside function. The distinctive feature of this method applied to contact is that the displacement jump in equation [1] is directly related to these enriched degrees of freedom.
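In the standard notation (mine, not necessarily the paper's), the Heaviside-enriched approximation and the resulting jump read

\[
u_h(x) = \sum_{i \in I} N_i(x)\, u_i \;+\; \sum_{j \in J} N_j(x)\, H(x)\, a_j,
\qquad
[\![ u_h ]\!](x) = 2 \sum_{j \in J} N_j(x)\, a_j,
\]

where H = ±1 on either side of the crack; the continuous part cancels in the jump, which therefore depends only on the enriched degrees of freedom \(a_j\), as stated above.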
et al., 2011]. In fact, the sound emitted by breathing is not a vibration as in the case of speech (vibrations of the air produced by the vocal cords), but a consequence of the air displacement generated by inhalation and exhalation. This is why the amplitude of the emitted sound is low. The identification of the exhalation and inhalation phases from the acoustic spectrum of the breathing sound can be done reliably [Alshaer et al., 2011]. Indeed, the algorithm was tested on a sample of ten subjects wearing a respiratory belt. Out of a total of 436 breathing phases, 424 were identified correctly. The computed Kappa is 0.96, which indicates a strong correlation between the sound-based measurement and the one from the respiratory belt. Fisher's exact test gives a p-value below 0.001, indicating a strong association between the two methods. The most important condition for validation is that the microphone be well positioned.

• Use of highly sensitive sensors to measure air displacement, humidity, temperature, or the difference in CO2 concentration. All these techniques (measurement of air displacement, of condensation [André et al., 2011], of temperature [Storck et al., 1996], or of the difference in CO2 concentration between the two phases of breathing) are not contactless measurements, since the sensors must be placed near the nose or mouth, as illustrated in Figure 2.1. Nevertheless, they are very reliable and safe measurements, although with a narrower field of application than contactless ones. They could very well be applied to home telerehabilitation if combined with a wireless link. In terms of performance, a proof of concept was demonstrated in a hospital environment by a walking test (lasting between 15 and 40 minutes) providing the RR in real time [André et al., 2011]. No details regarding the correlation with another device were presented. However, among the other systems, performance varies strongly depending on the technology used.
DFT was used to study the band structure of graphene-metal complexes, along with their work function and bonding energy for various metals. Those studies allowed distinguishing two categories of complexes according to the strength of the metal-graphene binding: physisorbed graphene, where graphene's band structure is mostly preserved; and chemisorbed graphene, where the contact is more intimate and the band structure of the complex differs from both that of the metal and that of graphene. Chemisorbed metals can provide better mechanical stability and electrical connection than physisorbed ones [97]. However, for the purpose of an equivalent circuit of contacted graphene, in this manuscript no distinction will be made between chemisorbed and physisorbed metals. The formation of the graphene-metal complex is conceptually divided into four steps in Fig. 2.2. In (a) the clean metal and intrinsic graphene are separated. The different magnitudes of their work functions induce doping in graphene when the vacuum potentials of the materials are aligned (b). The common Fermi level is pinned to the metal's one, and graphene's band structure is shifted (towards higher energies in this case), creating a doping potential ΔE_F.
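As a hedged back-of-the-envelope complement (ignoring the interface dipole included in the full models), the Dirac dispersion of graphene relates this Fermi-level shift to the induced carrier density:

\[
E_F = \hbar v_F \sqrt{\pi n}
\quad\Longrightarrow\quad
n = \frac{(\Delta E_F)^2}{\pi \hbar^2 v_F^2},
\]

with \(v_F \approx 10^6\) m/s; in this simplest picture the sign of the doping follows the work-function difference between the metal and graphene.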
The transient heating characteristics of the polymer are used to deduce the appropriate optical intensity and exposure time necessary to cure the resist through its thickness.
‘honey badger’ (lit. ‘ripper of ankles’)
Taine-Cheikh (2008: 126) stresses that it is somewhat difficult to trace back the origin of these compounds. Accordingly, she speaks of a process of convergence between the two languages, rather than determining the direction of the semantic transfer. However, it should be observed that these compound nouns are not attested in other spoken varieties of Arabic. Furthermore, since at least the mid-twentieth century, Berbers in Mauritania have been gradually losing competence in Zenaga in favour of Arabic (Taine-Cheikh 2012: 100), while Zenaga is rarely acquired as a second language by Ḥassāniyya Arabic speakers. In such a context, the most probable agents of contact-induced change were former Berber-dominant speakers who gradually shifted to Arabic. Thus, it seems plausible that the transfer of the semantic properties of Zenaga compounds was achieved through imposition rather than through borrowing.
• It is not even necessary for Alice to exist as a physical person, because the virus could transfer onto a surface as a result of its coming into contact with other contaminated surfaces, such as articles being sent from an infected region.
• If Alice had been infected in the same conditions as Bob but two days earlier in another supermarket, and if she had been informed about the potential risk of infection, then she would probably have taken a unitary test (RT-PCR) and, if positive, she would have stayed at home rather than touching the items in the supermarket, and thus would probably not have infected Bob.
CONTACT OF LANGUAGES AND CONCEPTUAL TRANSFERS
Abstract
This paper aims to illustrate the relations between culture and language through two examples. The first one concerns the transfer of use from the word vieux to the word grand in the French of Lebanon, through the medium of the Arabic word ﺮﯿﺒﻛ [kabîr]; the second one concerns the recent transfer of use from the French word problème to the word souci, through the medium of the English word worry. The study is based on a conceptual and cognitive theory of lexical meaning, where transfers appear as the result of an alteration of cultural representations in the case of code switching. The phenomena are connected to their political frame, i.e. colonialism for the first one, and the globalisation of liberalism for the second.