Modeling and Simulation in Science, Engineering and Technology
Antonio Romano
Addolorata Marasco
Continuum Mechanics using Mathematica®
Fundamentals, Methods, and Applications
Second Edition
Series Editor
Nicola Bellomo, Politecnico di Torino, Torino, Italy

Editorial Advisory Board

K.J. Bathe, Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA
M. Chaplain, Division of Mathematics, University of Dundee, Dundee, Scotland, UK
P. Degond, Department of Mathematics, Imperial College London, London, United Kingdom
A. Deutsch, Center for Information Services and High-Performance Computing, Technische Universität Dresden, Dresden, Germany
M.A. Herrero, Departamento de Matemática Aplicada, Universidad Complutense de Madrid, Madrid, Spain
P. Koumoutsakos, Computational Science & Engineering Laboratory, ETH Zürich, Zürich, Switzerland
H.G. Othmer, Department of Mathematics, University of Minnesota, Minneapolis, MN, USA
K.R. Rajagopal, Department of Mechanical Engineering, Texas A&M University, College Station, TX, USA
T.E. Tezduyar, Department of Mechanical Engineering & Materials Science, Rice University, Houston, TX, USA
A. Tosin, Istituto per le Applicazioni del Calcolo “M. Picone”, Consiglio Nazionale delle Ricerche, Roma, Italy
More information about this series at http://www.springer.com/series/4960
Continuum Mechanics using Mathematica®
Fundamentals, Methods, and Applications
Second Edition
Antonio Romano
Department of Mathematics and Applications “R. Caccioppoli”
University of Naples Federico II, Naples, Italy

Addolorata Marasco
Department of Mathematics and Applications “R. Caccioppoli”
University of Naples Federico II, Naples, Italy
ISSN 2164-3679    ISSN 2164-3725 (electronic)
ISBN 978-1-4939-1603-0    ISBN 978-1-4939-1604-7 (eBook)
DOI 10.1007/978-1-4939-1604-7
Springer New York Heidelberg Dordrecht London
Library of Congress Control Number: 2014948090
Mathematics Subject Classification: 74-00, 74-01, 74AXX, 74BXX, 74EXX, 74GXX, 74JXX
© Springer Science+Business Media New York 2006, 2014
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. Exempted from this legal reservation are brief excerpts in connection with reviews or scholarly analysis or material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the work. Duplication of this publication or parts thereof is permitted only under the provisions of the Copyright Law of the Publisher’s location, in its current version, and permission for use must always be obtained from Springer.
Permissions for use may be obtained through RightsLink at the Copyright Clearance Center. Violations are liable to prosecution under the respective Copyright Law.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
While the advice and information in this book are believed to be true and accurate at the date of publication, neither the authors nor the editors nor the publisher can accept any legal responsibility for any errors or omissions that may be made. The publisher makes no warranty, express or implied, with respect to the material contained herein.
Printed on acid-free paper
Springer is part of Springer Science+Business Media (www.birkhauser-science.com). Additional material to this book can be downloaded from http://extras.springer.com
The motion of any body depends both on its characteristics and on the forces acting on it. Although taking into account all possible properties makes the equations too complex to solve, sometimes it is possible to consider only the properties that have the greatest influence on the motion. Models of ideal bodies, which contain only the most relevant properties, can be studied using the tools of mathematical physics.
Adding more properties to a model makes it more realistic, but it also makes the motion problem harder to solve.
In order to highlight the above statements, let us first suppose that a system $S$ of $N$ unconstrained bodies $C_i$, $i = 1, \dots, N$, is sufficiently described by the model of $N$ material points whenever the bodies have negligible dimensions with respect to the dimensions of the region containing the trajectories. This means that all the physical properties of $C_i$ that influence the motion are expressed by a positive number, the mass $m_i$, whereas the position of $C_i$ with respect to a frame $I$ is given by the position vector $\mathbf{r}_i(t)$ versus time. To determine the functions $\mathbf{r}_i(t)$, one has to integrate the following system of Newtonian equations:

$$m_i \ddot{\mathbf{r}}_i = \mathbf{F}_i \equiv \mathbf{f}_i(\mathbf{r}_1, \dots, \mathbf{r}_N, \dot{\mathbf{r}}_1, \dots, \dot{\mathbf{r}}_N, t), \quad i = 1, \dots, N,$$

where the forces $\mathbf{F}_i$, due both to the external world and to the other points of $S$, are assigned functions $\mathbf{f}_i$ of the positions and velocities of all the points of $S$, as well as of time. Under suitable regularity assumptions about the functions $\mathbf{f}_i$, the previous (vector) system of second-order ordinary differential equations in the unknowns $\mathbf{r}_i(t)$ has one and only one solution satisfying the given initial conditions

$$\mathbf{r}_i(t_0) = \mathbf{r}_i^0, \quad \dot{\mathbf{r}}_i(t_0) = \dot{\mathbf{r}}_i^0, \quad i = 1, \dots, N.$$
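When no closed-form solution of the Newtonian system is available, it can be integrated numerically. The following sketch is not from the book; the function names and the test problem are our own illustrative choices, and a fixed-step fourth-order Runge–Kutta scheme is assumed to be accurate enough for the chosen problem.

```python
import numpy as np

def integrate_newton(masses, force, r0, v0, t0, t1, steps=10_000):
    """Integrate m_i r_i'' = f_i(r, v, t) for N material points with a
    fixed-step fourth-order Runge-Kutta scheme (illustrative sketch)."""
    m = np.asarray(masses, dtype=float)[:, None]
    dt = (t1 - t0) / steps

    def rhs(t, r, v):
        # First-order form: r' = v, v' = f / m.
        return v, force(r, v, t) / m

    r, v, t = np.array(r0, dtype=float), np.array(v0, dtype=float), t0
    for _ in range(steps):
        k1r, k1v = rhs(t, r, v)
        k2r, k2v = rhs(t + dt / 2, r + dt / 2 * k1r, v + dt / 2 * k1v)
        k3r, k3v = rhs(t + dt / 2, r + dt / 2 * k2r, v + dt / 2 * k2v)
        k4r, k4v = rhs(t + dt, r + dt * k3r, v + dt * k3v)
        r = r + dt / 6 * (k1r + 2 * k2r + 2 * k3r + k4r)
        v = v + dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += dt
    return r, v

# Single point of unit mass under the restoring force f = -r: the exact
# solution with r(0) = (1,0,0) and zero initial velocity is r(t) = cos(t)(1,0,0).
r, v = integrate_newton([1.0], lambda r, v, t: -r,
                        [[1.0, 0.0, 0.0]], [[0.0, 0.0, 0.0]], 0.0, np.pi)
```

With this test problem the computed position at $t = \pi$ can be compared with the exact value $\cos\pi = -1$, which gives a quick check of the scheme.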
A second model that more closely matches physical reality is represented by a system $S$ of constrained rigid bodies $C_i$, $i = 1, \dots, N$. In this scheme, the extension of $C_i$ and the presence of constraints are taken into account. The position of $C_i$ is represented by the three-dimensional region occupied by $C_i$ in the frame $I$. Owing to the supposed rigidity of both bodies $C_i$ and constraints, the configurations of $S$ are described by $n \le 6N$ parameters $q_1, \dots, q_n$, which are called Lagrangian coordinates. Moreover, the mass $m_i$ of $C_i$ is no longer sufficient for describing the physical properties of $C_i$, since we have to know both its density and geometry. To determine the motion of $S$, that is, the functions $q_1(t), \dots, q_n(t)$, the Lagrangian expressions of the kinetic energy $T(q, \dot q)$ and of the active forces $Q_h(q, \dot q)$ are necessary. Then a possible motion of $S$ is a solution of the Lagrange equations

$$\frac{d}{dt}\frac{\partial T}{\partial \dot q_h} - \frac{\partial T}{\partial q_h} = Q_h(q, \dot q), \quad h = 1, \dots, n,$$

satisfying the given initial conditions

$$q_h(t_0) = q_h^0, \quad \dot q_h(t_0) = \dot q_h^0, \quad h = 1, \dots, n,$$

which once again fix the initial configuration and the velocity field of $S$.
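As a worked illustration (not taken from the book), consider a simple pendulum: a single Lagrangian coordinate $q_1 = \theta$, a mass $m$ at the end of a massless rod of length $\ell$ in a uniform gravitational field $g$ (all symbols are our own choices). The Lagrange equations then reduce to one familiar ODE:

```latex
T(\theta,\dot\theta) = \tfrac{1}{2}\, m \ell^2 \dot\theta^2,
\qquad
Q_\theta = -\, m g \ell \sin\theta,
\\[4pt]
\frac{d}{dt}\frac{\partial T}{\partial \dot\theta}
   - \frac{\partial T}{\partial \theta} = Q_\theta
\;\Longrightarrow\;
m \ell^2 \ddot\theta = -\, m g \ell \sin\theta
\;\Longrightarrow\;
\ddot\theta + \frac{g}{\ell}\,\sin\theta = 0 .
```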
We face a completely different situation when, to improve the description, we adopt the model of continuum mechanics. In fact, in this model the bodies are deformable and, at the same time, the matter is supposed to be continuously distributed over the volume it occupies, so that the molecular structure is completely erased. In this book we will show that replacing rigidity with deformability requires us to determine three scalar functions of three spatial variables and time in order to find the motion of $S$. Consequently, the fundamental evolution laws become partial differential equations. This consequence of deformability is the root of the mathematical difficulties of continuum mechanics.
This model must include other characteristics that allow us to describe the different macroscopic material behaviors. In fact, bodies undergo different deformations under the influence of the same applied loads. The mathematical description of different materials is the object of the constitutive equations. These equations, although they have to describe a wide variety of real bodies, must in any case satisfy some general principles, called constitutive axioms, which reflect general rules of behavior. These rules, although they imply severe restrictions on the form of the constitutive equations, permit us to describe different materials. The constitutive equations can be divided into classes describing the behavior of material categories: elastic bodies, fluids, etc. The choice of a particular constitutive relation cannot be made a priori but instead relies on experiments, since the macroscopic behavior of a body is strictly related to its molecular structure. Because the continuum model erases this structure, the constitutive equation of a particular material has to be determined by experimental procedures. However, the introduction of deformability into the model does not permit us to describe all the phenomena accompanying the motion. In fact, the viscosity of $S$, as well as the friction between $S$ and any external bodies, produces heating, which in turn causes heat exchanges among parts of $S$ or between $S$ and its surroundings. Mechanics is not able to describe these phenomena, and we must resort to the thermomechanics of continuous systems. This theory combines the laws of mechanics and thermodynamics, provided that they are suitably generalized to a deformable continuum at a nonuniform temperature.
The situation is much more complex when the continuum carries charges and currents. In such a case, we must take into account Maxwell’s equations, which describe the electromagnetic fields accompanying the motion, together with the thermomechanical equations. The coexistence of all these equations gives rise to a compatibility problem: Maxwell’s equations are covariant under Lorentz transformations, whereas the laws of thermomechanics are covariant under Galilean transformations.
This book is devoted to those readers interested in understanding the basics of continuum mechanics and its fundamental applications: balance laws, constitutive axioms, linear elasticity, fluid dynamics, waves, etc. It is self-contained, as it illustrates all the required mathematical tools, starting from an elementary knowledge of algebra and analysis.
It is divided into 13 chapters. In the first two chapters the elements of linear algebra are presented (vectors, tensors, eigenvalues, eigenvectors, etc.), together with the foundations of vector analysis (curvilinear coordinates, covariant derivative, Gauss and Stokes theorems). In the remaining eleven chapters the foundations of continuum mechanics and some fundamental applications to simple continuous media are introduced. More precisely, the finite deformation theory is discussed in Chap. 3, and the kinematics, the singular surfaces, and the general differential formulae for surfaces and volumes are presented in Chap. 4. Chapter 5 contains the general integral balance laws of mechanics, as well as their local Eulerian or Lagrangian forms. In Chaps. 6 and 7 the constitutive axioms, thermo-viscoelastic materials, and their symmetries are discussed. In Chap. 8, starting from the characteristic surfaces, the classification of quasi-linear partial differential systems is discussed, together with ordinary waves and shock waves. The following two chapters cover the application of the general principles presented in the previous chapters to perfect or viscous fluids (Chap. 9) and to linearly elastic systems (Chap. 10). In Chap. 11, a comparison of some proposed thermodynamic theories is presented. The great importance of fluid dynamics in meteorology is shown in Chap. 12; in particular, this chapter sketches the arduous path from the equations of fluid dynamics to chaos. In the last chapter, we present a brief introduction to navigation, since the analysis of this problem is an interesting example of the interaction between the equations of fluid dynamics and the dynamics of rigid bodies. Finally, in Appendix A the concept of a weak solution is introduced.
This volume contains many programs written with Mathematica® [69]. These programs, whose files are available at Extras.Springer.com, apply to topics discussed in the book such as the equivalence of applied vector systems, differential operators in curvilinear coordinates, kinematic fields, deformation theory, classification of systems of partial differential equations, motion representation of perfect fluids by complex functions, waves in solids, and so on. This approach has already been adopted by two of the authors in other books (see [1, 33]).¹
Many other important topics of continuum mechanics are not considered in this volume, which is essentially devoted to the foundations of the theory. A second volume [56], already published, discusses continua with directors, nonlinear elasticity, mixtures, phase changes, electrodynamics in matter, ferromagnetic bodies, and relativistic continua.
Naples, Italy Antonio Romano
Addolorata Marasco
¹The reader interested in other fundamental books in continuum mechanics can consult, for example, the references [26, 30, 36, 66].
1 Elements of Linear Algebra. . . 1
1.1 Motivation to Study Linear Algebra. . . 1
1.2 Vector Spaces and Bases. . . 2
Examples. . . 3
1.3 Euclidean Vector Space. . . 5
1.4 Base Changes . . . 10
1.5 Vector Product . . . 12
1.6 Mixed Product . . . 13
1.7 Elements of Tensor Algebra. . . 14
1.8 Eigenvalues and Eigenvectors of a Euclidean Second-Order Tensor. . . 20
1.9 Orthogonal Tensors. . . 25
1.10 Cauchy’s Polar Decomposition Theorem. . . 28
1.11 Higher Order Tensors. . . 29
1.12 Euclidean Point Space. . . 31
1.13 Exercises . . . 32
1.14 The Program VectorSys. . . 38
Aim of the Program VectorSys. . . 38
Description of the Problem and Relative Algorithm. . . 38
Command Line of the Program VectorSys . . . 39
Parameter List. . . 39
Worked Examples. . . 40
Exercises . . . 42
1.15 The Program EigenSystemAG. . . 42
Aim of the Program EigenSystemAG. . . 42
Description of the Algorithm. . . 43
Command Line of the Program EigenSystemAG. . . 43
Parameter List. . . 43
Worked Examples. . . 43
Exercises . . . 44
2 Vector Analysis. . . . 47
2.1 Curvilinear Coordinates. . . 47
2.2 Examples of Curvilinear Coordinates. . . 50
2.3 Differentiation of Vector Fields. . . 53
2.4 The Stokes and Gauss Theorems. . . 58
2.5 Singular Surfaces . . . 59
2.6 Useful Formulae. . . 63
2.7 Some Curvilinear Coordinates. . . 66
Generalized Polar Coordinates. . . 66
Cylindrical Coordinates . . . 68
Spherical Coordinates . . . 69
Elliptic Coordinates. . . 70
Parabolic Coordinates . . . 70
Bipolar Coordinates. . . 71
Prolate and Oblate Spheroidal Coordinates. . . 72
Paraboloidal Coordinates. . . 72
2.8 Exercises . . . 73
2.9 The Program Operator. . . 76
Aim of the Program. . . 76
Description of the Algorithm. . . 76
Command Line of the Program Operator. . . 77
Parameter List. . . 77
Use Instructions. . . 78
Worked Examples. . . 78
Exercises . . . 82
3 Finite and Infinitesimal Deformations . . . 83
3.1 Deformation Gradient . . . 83
3.2 Stretch Ratio and Angular Distortion. . . 86
3.3 Invariants of C and B. . . 89
3.4 Displacement and Displacement Gradient. . . 90
3.5 Infinitesimal Deformation Theory . . . 92
3.6 Transformation Rules for Deformation Tensors. . . 94
3.7 Some Relevant Formulae. . . 95
3.8 Compatibility Conditions. . . 98
3.9 Curvilinear Coordinates. . . 102
3.10 Exercises . . . 103
3.11 The Program Deformation. . . 107
Aim of the Program. . . 107
Description of the Algorithm and Instructions for Use. . . 107
Command Line of the Program Deformation. . . 108
Parameter List. . . 108
Worked Examples. . . 109
Exercises . . . 112
4 Kinematics . . . 115
4.1 Velocity and Acceleration. . . 115
4.2 Velocity Gradient . . . 118
4.3 Rigid, Irrotational, and Isochoric Motions. . . 119
4.4 Transformation Rules for a Change of Frame. . . 122
4.5 Singular Moving Surfaces. . . 123
4.6 Time Derivative of a Moving Volume . . . 126
4.7 Exercises . . . 130
4.8 The Program Velocity. . . 133
Aim of the Program, Input and Output . . . 133
Worked Examples. . . 134
Exercises . . . 135
5 Balance Equations . . . 137
5.1 General Formulation of a Balance Equation. . . 137
5.2 Mass Conservation. . . 142
5.3 Momentum Balance Equation. . . 143
5.4 Balance of Angular Momentum. . . 146
5.5 Energy Balance . . . 147
5.6 Entropy Inequality. . . 149
5.7 Lagrangian Formulation of Balance Equations. . . 152
5.8 The Principle of Virtual Displacements. . . 157
5.9 Exercises . . . 158
6 Constitutive Equations . . . 163
6.1 Constitutive Axioms . . . 163
6.2 Thermoviscoelastic Behavior. . . 167
6.3 Linear Thermoelasticity. . . 173
6.4 Exercises . . . 177
7 Symmetry Groups: Solids and Fluids. . . 179
7.1 Symmetry . . . 179
7.2 Isotropic Solids . . . 182
7.3 Perfect and Viscous Fluids. . . 186
7.4 Anisotropic Solids. . . 189
7.5 Exercises . . . 192
7.6 The Program LinElasticityTensor. . . 193
Aim of the Program. . . 193
Description of the Problem and Relative Algorithm. . . 194
Command Line of the Program LinElasticityTensor. . . 194
Parameter List. . . 195
Worked Examples. . . 195
Exercises . . . 196
8 Wave Propagation. . . 197
8.1 Introduction. . . 197
8.2 Cauchy’s Problem for Second-Order PDEs . . . 198
8.3 Characteristics and Classification of PDEs. . . 202
8.4 Examples. . . 204
8.5 Cauchy’s Problem for a Quasi-Linear First-Order System . . . 207
8.6 Classification of First-Order Systems. . . 209
8.7 Examples. . . 211
8.8 Second-Order Systems. . . 214
8.9 Ordinary Waves. . . 216
8.10 Linearized Theory and Waves. . . 220
8.11 Shock Waves. . . 224
8.12 Exercises . . . 227
8.13 The Program PdeEqClass. . . 229
Aim of the Program PdeEqClass. . . 229
Description of the Problem and Relative Algorithm. . . 229
Command Line of the Program PdeEqClass . . . 230
Parameter List. . . 230
Use Instructions. . . 230
Worked Examples. . . 231
Exercises . . . 232
8.14 The Program PdeSysClass. . . 234
Aim of the Program PdeSysClass. . . 234
Description of the Problem and Relative Algorithm. . . 234
Command Line of the Program PdeSysClass . . . 235
Parameter List. . . 235
Use Instructions. . . 236
Worked Examples. . . 237
Exercises . . . 238
8.15 The Program WavesI. . . 240
Aim of the Program WavesI. . . 240
Description of the Problem and Relative Algorithm. . . 240
Command Line of the Program WavesI. . . 241
Parameter List. . . 241
Use Instructions. . . 241
Worked Example. . . 242
Exercises . . . 243
8.16 The Program WavesII. . . 246
Aim of the Program WavesII. . . 246
Description of the Problem and Relative Algorithm. . . 246
Command Line of the Program WavesII. . . 248
Parameter List. . . 248
Use Instructions. . . 248
Worked Example. . . 248
Exercises . . . 250
9 Fluid Mechanics. . . 251
9.1 Perfect Fluid. . . 251
9.2 Stevino’s Law and Archimedes’ Principle. . . 253
9.3 Fundamental Theorems of Fluid Dynamics . . . 256
9.4 Boundary Value Problems for a Perfect Fluid. . . 261
9.5 2D Steady Flow of a Perfect Fluid. . . 262
9.6 D’Alembert’s Paradox and the Kutta–Joukowsky Theorem. . . 270
9.7 Lift and Airfoils. . . 273
9.8 Newtonian Fluids . . . 278
9.9 Applications of the Navier–Stokes Equation. . . 279
9.10 Dimensional Analysis and the Navier–Stokes Equation. . . 281
9.11 Boundary Layer. . . 282
9.12 Motion of a Viscous Liquid Around an Obstacle. . . 287
9.13 Ordinary Waves in Perfect Fluids. . . 295
9.14 Shock Waves in Fluids. . . 297
9.15 Shock Waves in a Perfect Gas. . . 300
9.16 Exercises . . . 304
9.17 The Program Potential. . . 306
Aim of the Program Potential. . . 306
Description of the Problem and Relative Algorithm. . . 307
Command Line of the Program Potential. . . 307
Parameter List. . . 307
Worked Examples. . . 308
Exercises . . . 312
9.18 The Program Wing. . . 313
Aim of the Program Wing. . . 313
Description of the Problem and Relative Algorithm. . . 313
Command Line of the Program Wing. . . 313
Parameter List. . . 313
Worked Examples. . . 314
Exercises . . . 315
9.19 The Program Joukowsky . . . 315
Aim of the Program Joukowsky. . . 315
Description of the Problem and Relative Algorithm. . . 316
Command Line of the Program Joukowsky . . . 316
Parameter List. . . 316
Worked Examples. . . 317
Exercises . . . 318
9.20 The Program JoukowskyMap . . . 318
Aim of the Program JoukowskyMap. . . 318
Description of the Problem and Relative Algorithm. . . 318
Command Line of the Program JoukowskyMap . . . 318
Parameter List. . . 318
Use Instructions. . . 319
Worked Examples. . . 319
Exercises . . . 321
10 Linear Elasticity. . . 323
10.1 Basic Equations of Linear Elasticity. . . 323
10.2 Uniqueness Theorems. . . 327
10.3 Existence and Uniqueness of Equilibrium Solutions. . . 329
10.4 Examples of Deformations. . . 333
10.5 The Boussinesq–Papkovich–Neuber Solution. . . 335
10.6 Saint–Venant’s Conjecture. . . 336
10.7 The Fundamental Saint–Venant Solutions. . . 341
10.8 Ordinary Waves in Elastic Systems. . . 344
10.9 Plane Waves. . . 350
10.10 Reflection of Plane Waves in a Half-Space. . . 356
SH Waves. . . 361
10.11 Rayleigh Waves. . . 362
10.12 Reflection and Refraction of SH Waves. . . 365
10.13 Harmonic Waves in a Layer. . . 368
10.14 Exercises . . . 371
11 Other Approaches to Thermodynamics. . . 373
11.1 Basic Thermodynamics. . . 373
11.2 Extended Thermodynamics . . . 376
11.3 Serrin’s Approach. . . 378
11.4 An Application to Viscous Fluids. . . 382
12 Fluid Dynamics and Meteorology. . . 385
12.1 Introduction. . . 385
12.2 Atmosphere as a Continuous System. . . 387
12.3 Atmosphere as a Mixture. . . 390
12.4 Primitive Equations in Spherical Coordinates. . . 394
12.5 Dimensionless Form of the Basic Equations. . . 397
12.6 The Hydrostatic and Tangent Approximations. . . 401
12.7 Bjerknes’ Theorem. . . 404
12.8 Vorticity Equation and Ertel’s Theorem . . . 406
12.9 Reynolds Turbulence . . . 409
12.10 Ekman’s Planetary Boundary Layer. . . 412
12.11 Oberbeck–Boussinesq Equations . . . 416
12.12 Saltzman’s Equations. . . 418
12.13 Lorenz’s System . . . 421
12.14 Some Properties of Lorenz’s System. . . 424
13 Fluid Dynamics and Ship Motion. . . 429
13.1 Introduction. . . 429
13.2 A Ship as a Rigid Body. . . 430
13.3 Kinematical Transformations. . . 434
13.4 Dynamical Equations of Ship Motion . . . 436
13.5 Final Form of Dynamical Equations. . . 439
13.6 About the Forces Acting on a Ship. . . 440
13.7 Linear Equations of Ship Motion. . . 442
13.8 Small Motions in the Presence of Regular Small Waves. . . 444
13.9 The Sea Surface as Free Surface. . . 448
13.10 Linear Approximation of the Free Boundary Value Problem. . . 451
13.11 Simple Waves. . . 454
13.12 Flow of Small Waves. . . 456
13.13 Stationary Waves. . . 459
A A Brief Introduction to Weak Solutions . . . 463
A.1 Weak Derivative and Sobolev Spaces. . . 463
A.2 A Weak Solution of a PDE. . . 467
A.3 The Lax–Milgram Theorem. . . 469
References. . . 471
Index. . . 475
Chapter 1
Elements of Linear Algebra
In this chapter the fundamental concepts of linear algebra are presented. More precisely, we introduce vector spaces, bases, and the contravariant components of a vector relative to a basis, together with their transformation formulae under a change of basis. Then, Euclidean vector spaces and some fundamental operations in these spaces are analyzed: vector product, mixed product, etc. The elementary definition of $n$-tensors is given, together with elements of tensor algebra. The eigenvalue problem for symmetric 2-tensors and orthogonal 2-tensors is widely discussed, and Cauchy’s polar decomposition theorem is proved. Finally, Euclidean point spaces are introduced. The last sections contain exercises and detailed descriptions of some packages written with Mathematica® [69], which allow the user to solve by computer many of the problems considered in this chapter.
1.1 Motivation to Study Linear Algebra
In describing reality mathematically, several tools are used. The most familiar is certainly the concept of a function, which expresses the dependence of some quantities $(y_1, \dots, y_m)$ on others $(x_1, \dots, x_n)$ by $m$ analytical relations

$$y_i = f_i(x_1, \dots, x_n), \quad i = 1, \dots, m. \quad (1.1)$$

For instance, the coordinate $x$ of a material point moving on a straight line with harmonic motion varies with time $t$ according to the law

$$x = A \sin(\omega t + \varphi),$$
where $\omega = 2\pi\nu$ and $A$, $\nu$, and $\varphi$ are the amplitude, the frequency, and the phase of the harmonic motion, respectively. Similarly, an attractive elastic force $\mathbf{F}$ depends on the lengthening $\mathbf{s}$ of the spring, according to the rule

$$F_i = -k x_i, \quad i = 1, 2, 3.$$

In this formula $F_i$ denotes the $i$th force component, $k > 0$ is the elastic constant, and $x_i$ is the $i$th component of $\mathbf{s}$. A final example is a dielectric, in which the components $D_i$ of the electric induction $\mathbf{D}$ depend on the components $E_i$ of the electric field $\mathbf{E}$.
Later, it will be shown that the tension in a neighborhood $I$ of a point $P$ of an elastic material $S$ depends on the deformation of $I$; both the deformation and the tension are described by 9 variables. Therefore, to find the relation between deformation and tension, we need 9 functions like (1.1), depending on 9 variables.
The previous examples, and many others, tell us how important it is to study systems of real functions depending on many real variables. In studying mathematical analysis, we learn how difficult this can be, so it is quite natural to start with linear relations $f_i$. In doing this we are making the assumption that small causes produce small effects. If we interpret the independent variables $x_i$ as causes and the dependent variables $y_i$ as effects, then the functions (1.1) can be replaced with their Taylor expansions at the origin $(0, \dots, 0)$. Limiting the Taylor expansions to first order gives linear relations, which can be explored using the techniques of linear algebra. In this chapter we present some fundamental aspects of this subject.¹
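The linearization step described here can be carried out symbolically. In the sketch below (the two nonlinear functions are illustrative choices of ours, not from the text), a pair of relations $y_i = f_i(x_1, x_2)$ is replaced by its first-order Taylor expansion at the origin, $y \approx f(0) + J(0)\,x$, where $J$ is the Jacobian matrix:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
# An illustrative nonlinear pair y_i = f_i(x1, x2) (our own choice).
f = sp.Matrix([sp.sin(x1) + x1 * x2,
               sp.exp(x1) - 1 + x2**2])

# First-order Taylor expansion at the origin: y ≈ f(0) + J(0) x,
# which replaces the nonlinear relations with linear ones.
f0 = f.subs({x1: 0, x2: 0})
J0 = f.jacobian([x1, x2]).subs({x1: 0, x2: 0})
```

Here `f0` vanishes and `J0` collects the coefficients of the linear relations studied by linear algebra.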
1.2 Vector Spaces and Bases
Let $\mathbb{R}$ ($\mathbb{C}$) be the field of real (complex) numbers. A vector space over $\mathbb{R}$ ($\mathbb{C}$) is an arbitrary set $E$ equipped with two algebraic operations, called addition and multiplication. The elements of this algebraic structure are called vectors.

The addition operation associates with any pair of vectors $\mathbf{u}$, $\mathbf{v}$ of $E$ their sum $\mathbf{u} + \mathbf{v} \in E$ in a way that satisfies the following formal properties:

$$\mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u};$$

only one element $\mathbf{0} \in E$ exists for which $\mathbf{u} + \mathbf{0} = \mathbf{u}$;

for all $\mathbf{u} \in E$, only one element $-\mathbf{u} \in E$ exists such that $\mathbf{u} + (-\mathbf{u}) = \mathbf{0}$. \quad (1.2)

The multiplication operation, which associates the product $a\mathbf{u} \in E$ with any number $a$ belonging to $\mathbb{R}$ ($\mathbb{C}$) and any vector $\mathbf{u} \in E$, has to satisfy the formal properties
¹For a more extensive study of the subjects of the first two chapters, see [28, 38, 51, 52].
$$1\mathbf{u} = \mathbf{u}, \quad (a + b)\mathbf{u} = a\mathbf{u} + b\mathbf{u}, \quad a(\mathbf{u} + \mathbf{v}) = a\mathbf{u} + a\mathbf{v}. \quad (1.3)$$
Let $W = \{\mathbf{u}_1, \dots, \mathbf{u}_r\}$ be a set of $r$ distinct vectors belonging to $E$; $r$ is also called the order of $W$. The vector

$$a^1 \mathbf{u}_1 + \dots + a^r \mathbf{u}_r,$$

where $a^1, \dots, a^r$ are real (complex) numbers, is called a linear combination of $\mathbf{u}_1, \dots, \mathbf{u}_r$. Moreover, the set $\{\mathbf{u}_1, \dots, \mathbf{u}_r\}$ is linearly independent, or free, if any vanishing linear combination

$$a^1 \mathbf{u}_1 + \dots + a^r \mathbf{u}_r = \mathbf{0} \quad (1.4)$$

implies that $a^1 = \dots = a^r = 0$. In the opposite case, the vectors are linearly dependent.

If it is possible to find linearly independent vector systems of order $n$, where $n$ is a finite integer, but there is no free system of order $n + 1$, then we say that $n$ is the dimension of $E$, which is denoted by $E_n$. Any linearly independent system $W$ of order $n$ is said to be a basis of $E_n$. When it is possible to find linearly independent vector sets of any order, the dimension of the vector space is said to be infinite.

We state the following theorem without proof.

Theorem 1.1. A set $W = \{\mathbf{e}_1, \dots, \mathbf{e}_n\}$ of vectors of $E_n$ is a basis if and only if any vector $\mathbf{u} \in E_n$ can be expressed as a unique linear combination of the elements of $W$:

$$\mathbf{u} = u^1 \mathbf{e}_1 + \dots + u^n \mathbf{e}_n. \quad (1.5)$$

For brevity, from now on we denote a basis by $(\mathbf{e}_i)$. The coefficients $u^i$ are called the contravariant components of $\mathbf{u}$ with respect to the basis $(\mathbf{e}_i)$. The relations among the contravariant components of the same vector with respect to two bases will be analyzed in Sect. 1.4.
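In coordinates, checking whether a set of vectors is free reduces to a rank computation: the combination $a^1 \mathbf{u}_1 + \dots + a^r \mathbf{u}_r = \mathbf{0}$ has only the trivial solution exactly when the matrix whose columns are the $\mathbf{u}_i$ has rank $r$. A minimal sketch, with example data of our own choosing:

```python
import numpy as np

def is_free(vectors):
    """True iff the columns of `vectors` are linearly independent,
    i.e. iff the matrix of components has full column rank."""
    vectors = np.atleast_2d(vectors)
    return np.linalg.matrix_rank(vectors) == vectors.shape[1]

# Columns are candidate vectors u_1, u_2, u_3 in R^3; here u_3 = u_1 + u_2,
# so the set is linearly dependent (illustrative example).
U = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 0.0]])
```

With this data `is_free(U)` is false, while the canonical basis `np.eye(3)` is of course free.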
Examples
• The set of the oriented segments starting from a point $O$ of ordinary three-dimensional space constitutes the simplest example of a three-dimensional vector space over $\mathbb{R}$. Here the addition is defined by the usual parallelogram rule, and the product $a\mathbf{u}$ is defined as the oriented segment having a length $|a|$ times the length of $\mathbf{u}$ and a direction coinciding with the direction of $\mathbf{u}$, if $a > 0$, or with the opposite one, if $a < 0$.

• The set of the $m \times n$ matrices

$$\begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix},$$

where the coefficients are real (complex) numbers, equipped with the usual operations of summation of two matrices and multiplication of a matrix by a real (complex) number, is a vector space. Moreover, it is easy to verify that one basis for this vector space is the set of $m \times n$ matrices having all elements equal to zero except for one, which is equal to 1. Consequently, its dimension is equal to $mn$.

• The set $P$ of all polynomials

$$P(x) = a_0 x^n + a_1 x^{n-1} + \dots + a_n$$

of degree at most $n$ with real (complex) coefficients is an $(n+1)$-dimensional vector space, since the polynomials

$$P_1(x) = x^n, \quad P_2(x) = x^{n-1}, \quad \dots, \quad P_{n+1}(x) = 1 \quad (1.6)$$

form a basis of $P$.

• The set $F$ of continuous functions on the interval $[0, 1]$ is a vector space whose dimension is infinite since, for any $n$, the set of polynomials (1.6) is linearly independent.
From now on we will use Einstein’s notation, in which summation over two repeated indices is understood, provided that one is a subscript and the other a superscript. In this notation, relation (1.5) is written as

$$\mathbf{u} = u^i \mathbf{e}_i. \quad (1.7)$$

Moreover, for the sake of simplicity, all of the following considerations refer to a three-dimensional vector space $E_3$, although almost all the results are valid for any vector space.
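Einstein’s convention maps directly onto `numpy.einsum`, where a repeated index letter is summed. For instance, (1.7) with an illustrative basis and components of our own choosing:

```python
import numpy as np

# Basis (e_i) of R^3 stored as rows; contravariant components u^i
# (both chosen only for illustration).
e = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 2.0]])
u_comp = np.array([2.0, -1.0, 0.5])

# u = u^i e_i : the repeated index i is summed, exactly as in (1.7).
u = np.einsum('i,ij->j', u_comp, e)
```

Here `u` collects the Cartesian components of the vector $u^i \mathbf{e}_i$.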
A subset $U \subseteq E$ is a vector subspace of $E$ if

$$\mathbf{u}, \mathbf{v} \in U \Rightarrow \mathbf{u} + \mathbf{v} \in U, \qquad a \in \mathbb{R}, \; \mathbf{u} \in U \Rightarrow a\mathbf{u} \in U. \quad (1.8)$$

Let $U$ and $V$ be two subspaces of $E_3$ having dimensions $p$ and $q$ ($p + q \le 3$), respectively, and let $U \cap V = \{\mathbf{0}\}$. The direct sum of $U$ and $V$, denoted by

$$W = U \oplus V,$$
Fig. 1.1 Direct sum
is the set of all the vectors $\mathbf{w} = \mathbf{u} + \mathbf{v}$, with $\mathbf{u} \in U$ and $\mathbf{v} \in V$.

Theorem 1.2. The direct sum $W$ of the subspaces $U$ and $V$ is a new subspace whose dimension is $p + q$.

Proof. It is easy to verify that $W$ is a vector space. Moreover, let $(\mathbf{u}_1, \dots, \mathbf{u}_p)$ and $(\mathbf{v}_1, \dots, \mathbf{v}_q)$ be two bases of $U$ and $V$, respectively. The definition of the direct sum implies that any vector $\mathbf{w} \in W$ can be written as $\mathbf{w} = \mathbf{u} + \mathbf{v}$, where $\mathbf{u} \in U$ and $\mathbf{v} \in V$ are uniquely determined. Therefore,

$$\mathbf{w} = \sum_{i=1}^{p} u^i \mathbf{u}_i + \sum_{i=1}^{q} v^i \mathbf{v}_i,$$

where the coefficients $u^i$ and $v^i$ are again uniquely determined. It follows that the vectors $(\mathbf{u}_1, \dots, \mathbf{u}_p, \mathbf{v}_1, \dots, \mathbf{v}_q)$ represent a basis of $W$. $\square$

Figure 1.1 illustrates the previous theorem for $p = q = 1$.
1.3 Euclidean Vector Space

Besides the usual operations of vector sum and product of a real number times a vector, the operation of scalar or inner product of two vectors can be introduced. This operation, which associates a real number $\mathbf{u} \cdot \mathbf{v}$ to any pair of vectors $(\mathbf{u}, \mathbf{v})$, is defined by the following properties:

1. it is distributive with respect to each argument:
$$(\mathbf{u} + \mathbf{v}) \cdot \mathbf{w} = \mathbf{u} \cdot \mathbf{w} + \mathbf{v} \cdot \mathbf{w}, \qquad \mathbf{u} \cdot (\mathbf{v} + \mathbf{w}) = \mathbf{u} \cdot \mathbf{v} + \mathbf{u} \cdot \mathbf{w};$$
2. it is associative with respect to multiplication by a number:
$$a\mathbf{u} \cdot \mathbf{v} = \mathbf{u} \cdot (a\mathbf{v});$$
3. it is symmetric:
$$\mathbf{u} \cdot \mathbf{v} = \mathbf{v} \cdot \mathbf{u};$$
4. it satisfies the condition
$$\mathbf{u} \cdot \mathbf{u} \ge 0,$$
the equality holding if and only if $\mathbf{u}$ vanishes.
From now on, a space in which a scalar product is defined is referred to as a Euclidean vector space.

Property (4) allows us to define the length of the vector $\mathbf{u}$ as the number

$$|\mathbf{u}| = \sqrt{\mathbf{u} \cdot \mathbf{u}};$$

in particular, a vector is a unit or normal vector if

$$|\mathbf{u}| = 1.$$

Two vectors $\mathbf{u}$ and $\mathbf{v}$ are orthogonal if

$$\mathbf{u} \cdot \mathbf{v} = 0.$$

The following two inequalities, due to Minkowski and Schwarz, respectively, can be proved:

$$|\mathbf{u} + \mathbf{v}| \le |\mathbf{u}| + |\mathbf{v}|, \quad (1.9)$$
$$|\mathbf{u} \cdot \mathbf{v}| \le |\mathbf{u}|\,|\mathbf{v}|. \quad (1.10)$$

The last inequality permits us to define the angle $0 \le \varphi \le \pi$ between two vectors $\mathbf{u}$ and $\mathbf{v}$ by the relation

$$\cos\varphi = \frac{\mathbf{u} \cdot \mathbf{v}}{|\mathbf{u}|\,|\mathbf{v}|},$$

which allows us to give the inner product the following elementary form:

$$\mathbf{u} \cdot \mathbf{v} = |\mathbf{u}|\,|\mathbf{v}|\cos\varphi.$$
A vector system $(\mathbf{u}_1, \dots, \mathbf{u}_n)$ is called orthonormal if

$$\mathbf{u}_i \cdot \mathbf{u}_j = \delta_{ij},$$

where $\delta_{ij}$ is the Kronecker symbol

$$\delta_{ij} = \begin{cases} 1, & i = j, \\ 0, & i \ne j. \end{cases}$$

It is easy to verify that the vectors belonging to an orthonormal system $\{\mathbf{u}_1, \dots, \mathbf{u}_m\}$ are linearly independent; in fact, it is sufficient to take the scalar product of any of their vanishing linear combinations with a vector $\mathbf{u}_h$ to verify that the $h$th coefficient of the combination vanishes. This result implies that the number $m$ of vectors of an orthonormal system is $\le 3$.
The Gram–Schmidt orthonormalization procedure permits us to obtain an orthonormal system starting from any set of three independent vectors $(\mathbf{u}_1,\mathbf{u}_2,\mathbf{u}_3)$. In fact, if this set is independent, it is always possible to determine the constants in the expressions
$$\mathbf{v}_1=\mathbf{u}_1,\qquad \mathbf{v}_2=\alpha_{12}\mathbf{v}_1+\mathbf{u}_2,\qquad \mathbf{v}_3=\alpha_{13}\mathbf{v}_1+\alpha_{23}\mathbf{v}_2+\mathbf{u}_3,$$
in such a way that $\mathbf{v}_2$ is orthogonal to $\mathbf{v}_1$ and $\mathbf{v}_3$ is orthogonal to both $\mathbf{v}_1$ and $\mathbf{v}_2$. By a simple calculation, we find that these conditions lead to the following values of the constants:
$$\alpha_{12}=-\frac{\mathbf{u}_2\cdot\mathbf{v}_1}{|\mathbf{v}_1|^2},\qquad \alpha_{23}=-\frac{\mathbf{u}_3\cdot\mathbf{v}_2}{|\mathbf{v}_2|^2},\qquad \alpha_{13}=-\frac{\mathbf{u}_3\cdot\mathbf{v}_1}{|\mathbf{v}_1|^2}.$$
Then the vectors $\mathbf{v}_h$, $h=1,2,3$, are orthogonal to each other, and it is sufficient to consider the vectors $\mathbf{v}_h/|\mathbf{v}_h|$ to obtain the desired orthonormal system.
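The procedure above can be sketched in a few lines of code. The book's companion code is in Mathematica; what follows is a plain-Python sketch (the helper names `dot`, `scale`, `add`, and `gram_schmidt` are ours, not the book's), restricted to the standard inner product of $\mathbb{R}^3$ and using the constants $\alpha_{12},\alpha_{23},\alpha_{13}$ exactly as written above.

```python
# Gram-Schmidt orthonormalization of three independent vectors of R^3,
# following the constants alpha_12, alpha_23, alpha_13 of the text.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def scale(a, c):
    return [c * x for x in a]

def add(a, b):
    return [x + y for x, y in zip(a, b)]

def gram_schmidt(u1, u2, u3):
    v1 = u1
    a12 = -dot(u2, v1) / dot(v1, v1)        # makes v2 orthogonal to v1
    v2 = add(scale(v1, a12), u2)
    a13 = -dot(u3, v1) / dot(v1, v1)        # makes v3 orthogonal to v1
    a23 = -dot(u3, v2) / dot(v2, v2)        # makes v3 orthogonal to v2
    v3 = add(add(scale(v1, a13), scale(v2, a23)), u3)
    # normalize: v_h -> v_h / |v_h|
    return [scale(v, 1 / dot(v, v) ** 0.5) for v in (v1, v2, v3)]

e1, e2, e3 = gram_schmidt([1, 1, 0], [1, 0, 1], [0, 1, 1])
```

Note that the orthogonalization uses only scalar products, so the same code would work for any inner product by swapping out `dot`.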
Let $(\mathbf{e}_i)$ be a basis and let $\mathbf{u}=u^i\mathbf{e}_i$, $\mathbf{v}=v^j\mathbf{e}_j$ be the representations of two vectors in this basis; then the scalar product becomes
$$\mathbf{u}\cdot\mathbf{v}=g_{ij}u^iv^j, \quad (1.11)$$
where
$$g_{ij}=\mathbf{e}_i\cdot\mathbf{e}_j=g_{ji}. \quad (1.12)$$
When $\mathbf{u}=\mathbf{v}$, (1.11) gives the square length of a vector in terms of its components:
$$|\mathbf{u}|^2=g_{ij}u^iu^j. \quad (1.13)$$
Owing to property (4) of the scalar product, the quadratic form on the right-hand side of (1.13) is positive definite, so that
$$g\equiv\det(g_{ij})>0. \quad (1.14)$$
If the basis is orthonormal, then (1.11) and (1.13) assume the simpler forms
$$\mathbf{u}\cdot\mathbf{v}=\sum_{i=1}^{3}u^iv^i, \quad (1.15)$$
$$|\mathbf{u}|^2=\sum_{i=1}^{3}(u^i)^2. \quad (1.16)$$
Due to (1.14), the matrix $(g_{ij})$ has an inverse; if $g^{ij}$ are the coefficients of this inverse matrix, we have
$$g^{ih}g_{hj}=\delta^i_j. \quad (1.17)$$
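The quantities $g_{ij}$, the squared length (1.13), and the inverse matrix $g^{ij}$ of (1.17) can all be computed concretely. Below is a plain-Python sketch (function names are ours); the example basis is a hypothetical non-orthonormal one chosen only for illustration, and the 3x3 inverse uses the cyclic-cofactor formula, which is legitimate here because (1.14) guarantees $\det(g_{ij})>0$.

```python
# Metric components g_ij = e_i . e_j of a basis of E^3, squared lengths,
# and the inverse matrix g^ij with g^ih g_hj = delta^i_j.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def metric(basis):
    # g_ij = e_i . e_j  (a symmetric, positive-definite 3x3 matrix)
    return [[dot(ei, ej) for ej in basis] for ei in basis]

def sq_length(g, u):
    # |u|^2 = g_ij u^i u^j, with u given by its contravariant components
    return sum(g[i][j] * u[i] * u[j] for i in range(3) for j in range(3))

def inverse3(g):
    # inverse of a 3x3 matrix via cofactors with cyclic indices
    det = (g[0][0] * (g[1][1] * g[2][2] - g[1][2] * g[2][1])
         - g[0][1] * (g[1][0] * g[2][2] - g[1][2] * g[2][0])
         + g[0][2] * (g[1][0] * g[2][1] - g[1][1] * g[2][0]))
    cof = [[g[(i + 1) % 3][(j + 1) % 3] * g[(i + 2) % 3][(j + 2) % 3]
          - g[(i + 1) % 3][(j + 2) % 3] * g[(i + 2) % 3][(j + 1) % 3]
            for j in range(3)] for i in range(3)]
    return [[cof[j][i] / det for j in range(3)] for i in range(3)]

basis = [[1, 0, 0], [1, 1, 0], [0, 0, 2]]   # hypothetical non-orthonormal basis
g = metric(basis)                            # [[1, 1, 0], [1, 2, 0], [0, 0, 4]]
ginv = inverse3(g)                           # the coefficients g^ij
```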
If a basis $(\mathbf{e}_i)$ is assigned in the Euclidean space $E_3$, it is possible to define the dual or reciprocal basis $(\mathbf{e}^i)$ of $(\mathbf{e}_i)$ by the relations
$$\mathbf{e}^i=g^{ij}\mathbf{e}_j. \quad (1.18)$$
These new vectors constitute a basis since from any vanishing linear combination of them,
$$\lambda_i\mathbf{e}^i=\lambda_ig^{ih}\mathbf{e}_h=\mathbf{0},$$
we derive
$$\lambda_ig^{ih}=0,$$
when we recall that the vectors $(\mathbf{e}_i)$ are independent. On the other hand, this homogeneous system admits only the trivial solution $\lambda_1=\lambda_2=\lambda_3=0$, since $\det(g^{ih})>0$.
The chain of equalities, see (1.18), (1.12), (1.17),
$$\mathbf{e}^i\cdot\mathbf{e}_j=g^{ih}\mathbf{e}_h\cdot\mathbf{e}_j=g^{ih}g_{hj}=\delta^i_j \quad (1.19)$$
Fig. 1.2 Dual bases
proves that $\mathbf{e}^i$ is orthogonal to all the vectors $\mathbf{e}_j$, $j\ne i$ (see Fig. 1.2 and Exercise 1.2). It is worthwhile to note that if the basis $(\mathbf{e}_i)$ is orthogonal, then (1.18) implies that
$$\mathbf{e}^i=g^{ii}\mathbf{e}_i=\frac{\mathbf{e}_i}{g_{ii}},$$
so that any vector $\mathbf{e}^i$ of the dual basis is parallel to the vector $\mathbf{e}_i$. From this relation we have
$$\mathbf{e}^i\cdot\mathbf{e}_i=1,$$
so that
$$|\mathbf{e}^i|\,|\mathbf{e}_i|\cos 0=1,$$
and
$$|\mathbf{e}^i|=\frac{1}{|\mathbf{e}_i|}. \quad (1.20)$$
In particular, when the basis $(\mathbf{e}_i)$ is orthonormal, $\mathbf{e}^i=\mathbf{e}_i$, $i=1,\dots,n$.
With the introduction of the dual basis, any vector $\mathbf{u}$ admits two representations:
$$\mathbf{u}=u^i\mathbf{e}_i=u_j\mathbf{e}^j. \quad (1.21)$$
The quantities $u_i$ are called covariant components of $\mathbf{u}$. They can be obtained as the components of $\mathbf{u}$ with respect to the reciprocal basis or as the projections of $\mathbf{u}$ onto the axes of the original basis:
$$\mathbf{u}\cdot\mathbf{e}_i=(u_j\mathbf{e}^j)\cdot\mathbf{e}_i=u_j\delta^j_i=u_i. \quad (1.22)$$
In particular, if the basis is orthonormal, then the covariant and contravariant components are equal.
Relation (1.22) shows that the covariant and contravariant components can be expressed respectively as
$$u_i=\mathbf{u}\cdot\mathbf{e}_i=g_{ij}u^j,\qquad u^j=\mathbf{u}\cdot\mathbf{e}^j=g^{ji}u_i. \quad (1.23)$$
It follows from (1.23) that the scalar product (1.11) can be written in one of the following equivalent forms:
$$\mathbf{u}\cdot\mathbf{v}=g_{ij}u^iv^j=u_jv^j=g^{ij}u_iv_j. \quad (1.24)$$
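The equivalence of the forms in (1.24) is easy to check numerically. The following plain-Python sketch (the basis and component values are hypothetical, chosen only for illustration) lowers an index with $g_{ij}$ as in (1.23) and compares the resulting scalar products with the one computed directly from the Cartesian vectors.

```python
# Numerical check of (1.23)-(1.24): u_i = g_ij u^j, and
# u.v = g_ij u^i v^j = u_j v^j, both equal to the direct dot product.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

basis = [[1, 0, 0], [1, 1, 0], [0, 0, 2]]            # e_1, e_2, e_3 (not orthonormal)
g = [[dot(ei, ej) for ej in basis] for ei in basis]  # g_ij = e_i . e_j

u_con = [2.0, -1.0, 0.5]    # contravariant components u^i
v_con = [1.0, 3.0, -2.0]    # contravariant components v^i

# covariant components: u_i = g_ij u^j
u_cov = [sum(g[i][j] * u_con[j] for j in range(3)) for i in range(3)]

# two of the equivalent forms of u . v in (1.24)
uv_gg = sum(g[i][j] * u_con[i] * v_con[j] for i in range(3) for j in range(3))
uv_mixed = sum(u_cov[j] * v_con[j] for j in range(3))

# the same product computed from the Cartesian vectors u = u^i e_i, v = v^i e_i
u_vec = [sum(u_con[i] * basis[i][k] for i in range(3)) for k in range(3)]
v_vec = [sum(v_con[i] * basis[i][k] for i in range(3)) for k in range(3)]
uv_direct = dot(u_vec, v_vec)
```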
1.4 Base Changes
Covariant or contravariant components of a vector are relative to the chosen basis. Consequently, it is interesting to determine the corresponding transformation rules under a change of basis. Let us consider the base change $(\mathbf{e}_i)\to(\mathbf{e}'_i)$; noting that any vector $\mathbf{e}'_i$ can be represented as a linear combination of the vectors $(\mathbf{e}_i)$, we have²
$$\mathbf{e}'_i=A^j_i\mathbf{e}_j,\qquad \mathbf{e}_i=(A^{-1})^j_i\mathbf{e}'_j. \quad (1.25)$$
In other words, the base change is assigned by giving the nonsingular matrix $(A^i_j)$ of the components of the vectors $(\mathbf{e}'_i)$ with respect to the basis $(\mathbf{e}_i)$.
On the other hand, any vector $\mathbf{u}$ admits the following two representations:
$$\mathbf{u}=u'^i\mathbf{e}'_i=u^j\mathbf{e}_j,$$
from which, taking into account (1.25), there follow the transformation formulae of vector components under base changes:
$$u'^i=(A^{-1})^i_ju^j,\qquad u^i=A^i_ju'^j. \quad (1.26)$$
In order to determine the corresponding transformation formulae of covariant components, we note that (1.23) and (1.25) imply that
$$u'_i=\mathbf{u}\cdot\mathbf{e}'_i=\mathbf{u}\cdot A^j_i\mathbf{e}_j,$$
and then
$$u'_i=A^j_iu_j \quad (1.27)$$
² It is easy to verify, by using considerations similar to those at the end of the previous section, that the independence of the vectors $(\mathbf{e}'_i)$ is equivalent to requiring that $\det(A^i_j)\ne 0$.
or
$$u_i=(A^{-1})^j_iu'_j. \quad (1.28)$$
A comparison of (1.25), (1.26), and (1.27) justifies the names covariant and contravariant given to the components $u_i$ and $u^i$: the transformation of the covariant components involves the coefficients of the base transformation (1.25), whereas the transformation of the contravariant components involves the coefficients of the inverse base transformation.
It is also interesting to determine the transformation formulae of the quantities $g_{ij}$ and $g^{ij}$ associated with the scalar product. To this end, we note that (1.12) and (1.25) imply that
$$g'_{ij}=\mathbf{e}'_i\cdot\mathbf{e}'_j=A^h_iA^k_j\,\mathbf{e}_h\cdot\mathbf{e}_k;$$
that is,
$$g'_{ij}=A^h_iA^k_jg_{hk}. \quad (1.29)$$
Similarly, from the invariance of the scalar product with respect to a base change and from (1.24), we have
$$g'^{ij}u'_iv'_j=g^{hk}u_hv_k,$$
and taking into account (1.28), we obtain
$$g'^{ij}=(A^{-1})^i_h(A^{-1})^j_kg^{hk}. \quad (1.30)$$
From (1.18), (1.30), and (1.25), it also follows that
$$\mathbf{e}'^i=(A^{-1})^i_h\mathbf{e}^h. \quad (1.31)$$
Finally, let $(\mathbf{e}_i)$ and $(\mathbf{e}'_i)$ be two bases of $E_3$ related to each other by the transformation formulae (1.25). When
$$\det(A^i_j)>0, \quad (1.32)$$
we say that the two bases have the same orientation. It is easy to recognize that this connection between bases is an equivalence relation $R$ in the set $B$ of the bases of $E_3$. Moreover, $R$ divides $B$ into two equivalence classes; the bases belonging to each class define one of the two possible orientations of $E_3$.
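The transformation rules (1.26)-(1.28) can be checked numerically. In the plain-Python sketch below, the base-change matrix $A$ and the component values are hypothetical examples; note that the contraction $u_iu^i$ is automatically invariant because the covariant components transform with $A$ while the contravariant ones transform with $A^{-1}$.

```python
# Check of (1.26)-(1.28): contravariant components transform with A^-1,
# covariant components with A (transposed in matrix form), and the
# contraction u_i u^i is invariant under the base change.

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(3)) for i in range(3)]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# an arbitrary nonsingular base-change matrix A^i_j, with det A = 2 > 0
A = [[2, 1, 0], [0, 1, 0], [1, 0, 1]]
Ainv = [[0.5, -0.5, 0], [0, 1, 0], [-0.5, 0.5, 1]]   # its inverse
check = matmul(A, Ainv)                               # should be the identity

u_con = [1.0, 2.0, 3.0]     # contravariant components u^j
u_cov = [4.0, -1.0, 0.5]    # covariant components u_j (illustrative values)

# u'^i = (A^-1)^i_j u^j   and   u'_i = A^j_i u_j  (note the transpose)
u_con_new = matvec(Ainv, u_con)
AT = [[A[j][i] for j in range(3)] for i in range(3)]
u_cov_new = matvec(AT, u_cov)

# invariance of the contraction u_i u^i
s_old = sum(u_cov[i] * u_con[i] for i in range(3))
s_new = sum(u_cov_new[i] * u_con_new[i] for i in range(3))
```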
1.5 Vector Product
Let $(\mathbf{e}_i)$ be a basis of $E_3$. The vector product or cross product is an internal operation which associates, with any pair of vectors $\mathbf{u}$ and $\mathbf{v}$ of $E_3$, a new vector $\mathbf{u}\times\mathbf{v}$ whose covariant components are
$$(\mathbf{u}\times\mathbf{v})_i=\sqrt{g}\,e_{ijh}u^jv^h, \quad (1.33)$$
where $u^j$ and $v^h$ denote the contravariant components of $\mathbf{u}$ and $\mathbf{v}$, and $e_{ijh}$ is the Levi–Civita symbol, defined as follows:
$$e_{ijh}=\begin{cases}0, & \text{if two indices are equal},\\ 1, & \text{if the permutation } ijh \text{ is even},\\ -1, & \text{if the permutation } ijh \text{ is odd}.\end{cases} \quad (1.34)$$
The attribute even or odd of the index sequence $ijh$ depends on the number of index exchanges needed to reproduce the fundamental sequence $123$. For example, $231$ is even, since two exchanges are needed: the first produces $132$ and the second, $123$. On the other hand, $213$ is odd, since a single exchange of indices is sufficient to come back to $123$.
From the definition (1.34), we see that the exchange of two indices in the permutation symbol implies a sign change; that is,
$$e_{ijk}=-e_{kji}=e_{kij}=-e_{ikj}.$$
Moreover, we remark that the product of two permutation symbols can be expressed by the Kronecker symbol $\delta$ using the following identity:
$$e_{miq}e_{jkq}=\delta_{mj}\delta_{ik}-\delta_{mk}\delta_{ij}=\begin{vmatrix}\delta_{mj} & \delta_{mk}\\ \delta_{ij} & \delta_{ik}\end{vmatrix}. \quad (1.35)$$
This formula can be memorized as follows: the first two indices of the first $e$ appear in the determinant as row indices, whereas the first two indices of the second $e$ appear as column indices.
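Identity (1.35) involves only finitely many index combinations, so it can be verified exhaustively. The plain-Python sketch below (function names are ours) implements the permutation symbol of (1.34) with 0-based indices and checks (1.35) for all values of the free indices.

```python
# The permutation symbol of (1.34), with indices 0, 1, 2, and an exhaustive
# check of the e-delta identity (1.35):
#   e_miq e_jkq (summed over q) = delta_mj delta_ik - delta_mk delta_ij.

def eps(i, j, h):
    # +1 for even permutations of (0, 1, 2), -1 for odd, 0 if a repeated index;
    # for distinct indices the product (j-i)(h-i)(h-j) equals +2 or -2
    return (j - i) * (h - i) * (h - j) // 2 if len({i, j, h}) == 3 else 0

def delta(i, j):
    return 1 if i == j else 0

def lhs(m, i, j, k):
    return sum(eps(m, i, q) * eps(j, k, q) for q in range(3))

def rhs(m, i, j, k):
    return delta(m, j) * delta(i, k) - delta(m, k) * delta(i, j)

identity_holds = all(lhs(m, i, j, k) == rhs(m, i, j, k)
                     for m in range(3) for i in range(3)
                     for j in range(3) for k in range(3))
```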
It is easy to verify that the composition law (1.33) is distributive, associative with respect to multiplication by a scalar, and skew-symmetric. In order to recognize that the present definition of the cross product $\mathbf{u}\times\mathbf{v}$ coincides with the elementary one, we consider an orthonormal basis $(\mathbf{e}_i)$ such that $|\mathbf{u}|\mathbf{e}_1=\mathbf{u}$ and the plane $(\mathbf{e}_1,\mathbf{e}_2)$ contains $\mathbf{v}$. In this basis, $\mathbf{u}$ and $\mathbf{v}$ have components $(|\mathbf{u}|,0,0)$ and $(|\mathbf{v}|\cos\varphi,|\mathbf{v}|\sin\varphi,0)$, where $\varphi$ is the angle between $\mathbf{v}$ and $\mathbf{e}_1$, that is, the angle between $\mathbf{v}$ and $\mathbf{u}$. Finally, due to (1.33), we easily derive that $\mathbf{u}\times\mathbf{v}$ is orthogonal to the plane defined by $\mathbf{u},\mathbf{v}$ and its component along $\mathbf{e}_3$ is $|\mathbf{u}|\,|\mathbf{v}|\sin\varphi$.
It remains to verify that the right-hand side of (1.33) is independent of the basis or, equivalently, that the quantities on the left-hand side of (1.33) transform according to the law (1.27). First, from (1.29) we derive
$$\sqrt{g'}=\pm\det(A^i_j)\sqrt{g}, \quad (1.36)$$
where the $+$ sign holds if $\det(A^i_j)>0$ and the $-$ sign holds in the opposite case.
Moreover, the definition of the determinant implies that
$$e_{ijh}=\frac{1}{\det(A^q_p)}A^l_iA^m_jA^n_he_{lmn}, \quad (1.37)$$
so that, taking into account (1.33) and (1.36), we write
$$(\mathbf{u}\times\mathbf{v})'_i=\sqrt{g'}\,e_{ijh}u'^jv'^h=\pm A^l_ie_{lmn}\sqrt{g}\,u^mv^n.$$
Then it is possible to conclude that the cross product defines a vector only for base changes which preserve the orientation (Sect. 1.4). Vectors with this characteristic are called axial vectors or pseudovectors, whereas vectors transforming according to the law (1.27) are called polar vectors.
Examples of axial vectors are given by the moment $\mathbf{M}_O$ of a force $\mathbf{F}$ with respect to a pole $O$ or by the angular velocity $\boldsymbol{\omega}$, whose direction changes with the basis orientation. This is not true for force, velocity, etc., which are examples of polar vectors (see Exercise 10).
At a less formal but more intuitive level, the nature of the vector under consideration can be recognized by looking at its behavior under a reflection (see Fig. 1.3): if the reflection preserves the direction, we are in the presence of an axial vector, while if the reflection reverses its direction, the vector is polar.
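The axial character of the cross product can be exhibited numerically. In the plain-Python sketch below (helper names and sample vectors are ours), we compare the cross product of two reflected vectors with the reflection of their cross product: under the orientation-reversing map the two results differ by a sign, which is exactly the pseudovector behavior described above.

```python
# Axial vs. polar behavior under a reflection through the plane z = 0.
# For an orientation-reversing map R: cross(Ru, Rv) = -R(cross(u, v)).

def cross(u, v):
    # elementary cross product in an orthonormal basis (sqrt(g) = 1)
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def reflect_z(w):
    # reflection through the plane z = 0; its determinant is -1
    return [w[0], w[1], -w[2]]

u = [1.0, 2.0, 0.5]
v = [-1.0, 0.0, 3.0]

w1 = cross(reflect_z(u), reflect_z(v))   # cross product of the reflected vectors
w2 = reflect_z(cross(u, v))              # reflection of the cross product
# w1 = -w2: the cross product transforms as an axial vector
```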
1.6 Mixed Product
The mixed product of three vectors $\mathbf{u},\mathbf{v},\mathbf{w}$ is the scalar quantity $(\mathbf{u}\times\mathbf{v})\cdot\mathbf{w}$. When (1.24) and (1.33) are taken into account, we obtain the following coordinate form of the mixed product:
$$(\mathbf{u}\times\mathbf{v})\cdot\mathbf{w}=\sqrt{g}\,e_{ijh}w^iu^jv^h. \quad (1.38)$$
Fig. 1.3 Axial vector $\boldsymbol{\omega}$ (angular velocity); polar vector $\mathbf{F}$ (force)
The skew-symmetry properties of the Levi–Civita symbol (see Sect. 1.5) allow us to verify the following cyclic property of the mixed product:
$$(\mathbf{u}\times\mathbf{v})\cdot\mathbf{w}=(\mathbf{w}\times\mathbf{u})\cdot\mathbf{v}=(\mathbf{v}\times\mathbf{w})\cdot\mathbf{u}.$$
It is easy to recognize the geometric meaning of the mixed product. In fact, it has already been said (Sect. 1.5) that the vector product $\mathbf{u}\times\mathbf{v}$ is a vector orthogonal to the plane defined by $\mathbf{u}$ and $\mathbf{v}$, with norm equal to the area of the parallelogram with sides $|\mathbf{u}|$ and $|\mathbf{v}|$. The scalar product $(\mathbf{u}\times\mathbf{v})\cdot\mathbf{w}$ is equal to the component of $\mathbf{w}$ along $\mathbf{u}\times\mathbf{v}$ times the length of $\mathbf{u}\times\mathbf{v}$. Finally, the mixed product represents the volume of the parallelepiped with edges $\mathbf{u},\mathbf{v}$, and $\mathbf{w}$.
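In an orthonormal basis, where $\sqrt{g}=1$, formula (1.38) reduces to the familiar $3\times 3$ determinant with rows $\mathbf{u},\mathbf{v},\mathbf{w}$; this and the cyclic property can be checked directly. A plain-Python sketch (sample vectors are ours, chosen only for illustration):

```python
# The mixed product (u x v) . w in an orthonormal basis equals the 3x3
# determinant with rows u, v, w (the signed volume of the parallelepiped),
# and it is invariant under cyclic permutation of the three vectors.

def cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def det3(u, v, w):
    # cofactor expansion along the first row
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
          - u[1] * (v[0] * w[2] - v[2] * w[0])
          + u[2] * (v[0] * w[1] - v[1] * w[0]))

u, v, w = [1.0, 0.0, 1.0], [0.0, 2.0, 0.0], [1.0, 1.0, 3.0]
mixed = dot(cross(u, v), w)
```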
1.7 Elements of Tensor Algebra
Let $E_3$ be a Euclidean vector space. The mapping
$$\mathbf{T}:E_3\to E_3 \quad (1.39)$$
is linear if, using the notation $\mathbf{T}(\mathbf{u})=\mathbf{T}\mathbf{u}$, we have
$$\mathbf{T}(a\mathbf{u}+b\mathbf{v})=a\mathbf{T}\mathbf{u}+b\mathbf{T}\mathbf{v}, \quad (1.40)$$
for all $a,b\in\mathbb{R}$ and for all $\mathbf{u},\mathbf{v}\in E_3$.
We will let $\mathrm{Lin}(E_3)$ denote the set of linear mappings of $E_3$ into $E_3$. These mappings are also called endomorphisms, Euclidean double tensors, or 2-tensors of $E_3$.
The product $\mathbf{S}\mathbf{T}$ of two tensors $\mathbf{S}$ and $\mathbf{T}$ is the composition of $\mathbf{S}$ and $\mathbf{T}$; that is, for all $\mathbf{u}\in E_3$ we have
$$(\mathbf{S}\mathbf{T})\mathbf{u}=\mathbf{S}(\mathbf{T}\mathbf{u}). \quad (1.41)$$
Generally, $\mathbf{S}\mathbf{T}\ne\mathbf{T}\mathbf{S}$; if $\mathbf{S}\mathbf{T}=\mathbf{T}\mathbf{S}$, the two tensors are said to commute. The transpose $\mathbf{T}^T$ of $\mathbf{T}$ is the new tensor defined by the condition
$$\mathbf{u}\cdot\mathbf{T}^T\mathbf{v}=\mathbf{v}\cdot(\mathbf{T}\mathbf{u}),\qquad \forall\,\mathbf{u},\mathbf{v}\in E_3. \quad (1.42)$$
The following properties can easily be established:
$$(\mathbf{T}+\mathbf{S})^T=\mathbf{T}^T+\mathbf{S}^T,\qquad (\mathbf{T}\mathbf{S})^T=\mathbf{S}^T\mathbf{T}^T,\qquad (\mathbf{T}^T)^T=\mathbf{T}.$$
A tensor $\mathbf{T}$ is symmetric if
$$\mathbf{T}=\mathbf{T}^T, \quad (1.43)$$
and skew-symmetric if
$$\mathbf{T}=-\mathbf{T}^T. \quad (1.44)$$
The tensor product of vectors $\mathbf{u}$ and $\mathbf{v}$ is the tensor $\mathbf{u}\otimes\mathbf{v}$ such that
$$(\mathbf{u}\otimes\mathbf{v})\mathbf{w}=\mathbf{u}\,(\mathbf{v}\cdot\mathbf{w}),\qquad \forall\,\mathbf{w}\in E_3. \quad (1.45)$$
The set $\mathrm{Lin}(E_3)$ of tensors defined on $E_3$ is itself a vector space with respect to the ordinary operations of addition and multiplication of a mapping by a real number.
For this space the following theorem holds:
Theorem 1.3. The vector space $\mathrm{Lin}(E_3)$ of Euclidean double tensors is 9-dimensional. The tensor systems $(\mathbf{e}_i\otimes\mathbf{e}_j)$, $(\mathbf{e}^i\otimes\mathbf{e}_j)$, $(\mathbf{e}^i\otimes\mathbf{e}^j)$ are bases of $\mathrm{Lin}(E_3)$, so that every tensor $\mathbf{T}\in\mathrm{Lin}(E_3)$ admits the representations
$$\mathbf{T}=T^{ij}\,\mathbf{e}_i\otimes\mathbf{e}_j=T_i^{\;j}\,\mathbf{e}^i\otimes\mathbf{e}_j=T_{ij}\,\mathbf{e}^i\otimes\mathbf{e}^j. \quad (1.46)$$
The matrices $(T^{ij})$, $(T_i^{\;j})$, and $(T_{ij})$ are called the contravariant, mixed, and covariant components of $\mathbf{T}$, respectively. Finally, the relations among the different components are given by the equations
$$T_{ij}=g_{jh}T_i^{\;h}=g_{ih}g_{jk}T^{hk}. \quad (1.47)$$
Proof. To verify that the system $(\mathbf{e}_i\otimes\mathbf{e}_j)$ is a basis of $\mathrm{Lin}(E_3)$, it is sufficient to prove that it is a linearly independent system and that any tensor $\mathbf{T}$ can be expressed as a linear combination of the elements of $(\mathbf{e}_i\otimes\mathbf{e}_j)$. First, from
$$\lambda^{ij}\,\mathbf{e}_i\otimes\mathbf{e}_j=\mathbf{0},$$
and from (1.45), (1.12), we derive
$$\lambda^{ij}\,\mathbf{e}_i\otimes\mathbf{e}_j(\mathbf{e}_h)=\lambda^{ij}\mathbf{e}_i\,(\mathbf{e}_j\cdot\mathbf{e}_h)=\lambda^{ij}g_{jh}\mathbf{e}_i=\mathbf{0}$$
for any $h=1,2,3$. But the vectors $(\mathbf{e}_i)$ form a basis of $E_3$, so that the previous relation implies, for any index $i$, the homogeneous system
$$\lambda^{ij}g_{jh}=0$$
of three equations in the three unknowns $\lambda^{ij}$, where $i$ is fixed. Since the determinant $\det(g_{ij})$ of this system is different from zero, see (1.14), all the unknowns $\lambda^{ij}$ with fixed $i$ vanish. From the arbitrariness of $i$, the independence follows.
Similarly, from the linear combination
$$\lambda_i^{\;j}\,\mathbf{e}^i\otimes\mathbf{e}_j=\mathbf{0},$$
when (1.19) is taken into account, we have
$$\lambda_i^{\;j}\,\mathbf{e}^i\otimes\mathbf{e}_j(\mathbf{e}^h)=\lambda_i^{\;j}\mathbf{e}^i\,(\mathbf{e}_j\cdot\mathbf{e}^h)=\lambda_i^{\;j}\delta_j^h\mathbf{e}^i=\lambda_i^{\;h}\mathbf{e}^i=\mathbf{0},$$
so that $\lambda_i^{\;h}=0$ for any choice of $i$ and $h$. In the same way, the independence of the tensors $(\mathbf{e}^i\otimes\mathbf{e}^j)$ can be proved.
To show that any tensor $\mathbf{T}$ can be written as a linear combination of any one of these systems, we start by noting that, from (1.21) and the linearity of $\mathbf{T}$, we have
$$\mathbf{T}\mathbf{u}=u_j\mathbf{T}\mathbf{e}^j.$$
On the other hand, $\mathbf{T}\mathbf{e}^j$ is a vector of $E_3$ and therefore can be represented in the following form, see (1.21):
$$\mathbf{T}\mathbf{e}^j=T^{ij}\mathbf{e}_i.$$
If the definition of covariant components (1.23) is recalled together with (1.45), we have
$$\mathbf{T}\mathbf{u}=T^{ij}u_j\mathbf{e}_i=T^{ij}\,\mathbf{e}_i\otimes\mathbf{e}_j(\mathbf{u}).$$
Finally, owing to the arbitrariness of $\mathbf{u}$, we obtain $\mathbf{T}=T^{ij}\,\mathbf{e}_i\otimes\mathbf{e}_j$. To show (1.46)$_2$, it is sufficient to remark that
$$\mathbf{T}\mathbf{u}=u_j\mathbf{T}\mathbf{e}^j,$$
so that, introducing the notation $\mathbf{T}\mathbf{e}^j=T_i^{\;j}\mathbf{e}^i$, we derive
$$\mathbf{T}\mathbf{u}=T_i^{\;j}u_j\mathbf{e}^i.$$
Noting that (1.23) and (1.22) hold, we have $u_j=g_{ji}u^i=g_{ji}\,\mathbf{e}^i\cdot\mathbf{u}=\mathbf{e}_j\cdot\mathbf{u}$, and the previous relation gives
$$\mathbf{T}\mathbf{u}=T_i^{\;j}\,\mathbf{e}^i\otimes\mathbf{e}_j(\mathbf{u}).$$
From this expression, due to the arbitrariness of $\mathbf{u}$, (1.46)$_2$ is derived. In a similar way, (1.46)$_3$ can be proved.
Relation (1.47) is easily verified using (1.46) and the definition (1.18) of the reciprocal basis, so that the theorem is proved. $\square$
Example 1.1. Verify that the matrices of contravariant or covariant components of a symmetric (skew-symmetric) tensor are symmetric (skew-symmetric), using the definition of transpose and symmetry.
The symmetry of $\mathbf{T}$ ($\mathbf{T}=\mathbf{T}^T$) and (1.42) imply that
$$\mathbf{v}\cdot\mathbf{T}\mathbf{u}=\mathbf{u}\cdot\mathbf{T}\mathbf{v},$$
so that
$$v_iT^{ij}u_j=u_iT^{ij}v_j=u_jT^{ji}v_i.$$
The arbitrariness of $\mathbf{u}$ and $\mathbf{v}$ leads to $T^{ij}=T^{ji}$. In a similar way, it can be proved that $T^{ij}=-T^{ji}$ when $\mathbf{T}$ is skew-symmetric.
Example 1.2. Using the different representations (1.46), verify that the contravariant, mixed, and covariant components of the unit or identity tensor $\mathbf{I}$,
$$\mathbf{I}\mathbf{u}=\mathbf{u},\qquad \forall\,\mathbf{u}\in E_3,$$
are given by the following matrices, respectively:
$$(g^{ij}),\qquad (\delta^i_j),\qquad (g_{ij}).$$
Starting from the condition defining $\mathbf{I}$, written in terms of components,
$$(I_{ij}\,\mathbf{e}^i\otimes\mathbf{e}^j)(u^h\mathbf{e}_h)=u_i\mathbf{e}^i,$$
we derive the equation
$$I_{ih}u^h\mathbf{e}^i=g_{hi}u^h\mathbf{e}^i,$$
which implies that $I_{ih}=g_{ih}$. In a similar way the other results follow.
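Example 1.2 can also be verified numerically: taking $I^{ih}=g^{ih}$ as the contravariant components, lowering one index as in (1.47) must produce the mixed components $\delta^i_j$. A plain-Python sketch with a hypothetical non-orthonormal basis:

```python
# Check of Example 1.2: with contravariant components I^{ih} = g^{ih},
# lowering one index with g gives the mixed components
# g_{jh} g^{ih} = delta^i_j, in agreement with (1.17).

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

basis = [[1, 0, 0], [1, 1, 0], [0, 0, 2]]             # non-orthonormal basis
g = [[dot(ei, ej) for ej in basis] for ei in basis]   # g_ij

# g^{ij}: inverse of (g_ij) via the cyclic-cofactor formula for 3x3 matrices
det = (g[0][0] * (g[1][1] * g[2][2] - g[1][2] * g[2][1])
     - g[0][1] * (g[1][0] * g[2][2] - g[1][2] * g[2][0])
     + g[0][2] * (g[1][0] * g[2][1] - g[1][1] * g[2][0]))
ginv = [[(g[(j + 1) % 3][(i + 1) % 3] * g[(j + 2) % 3][(i + 2) % 3]
        - g[(j + 1) % 3][(i + 2) % 3] * g[(j + 2) % 3][(i + 1) % 3]) / det
         for j in range(3)] for i in range(3)]

# mixed components of the identity: g_{jh} g^{ih}
mixed = [[sum(g[j][h] * ginv[i][h] for h in range(3)) for j in range(3)]
         for i in range(3)]
```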
The image of $\mathbf{T}$, denoted by $\mathrm{Im}(\mathbf{T})$, is the subset of $E_3$ such that
$$\mathrm{Im}(\mathbf{T})=\{\mathbf{v}\in E_3 \mid \exists\,\mathbf{u}\in E_3 : \mathbf{v}=\mathbf{T}\mathbf{u}\}, \quad (1.48)$$