Delaunay Triangulation and Meshing
© Editions HERMES, Paris, 1998 Editions HERMES 8, quai du Marche-Neuf 75004 Paris ISBN 2-86601-692-0 Catalogage Electre-Bibliographie George, Paul-Louis*Borouchaki, Houman Delaunay Triangulation and Meshing : Application to Finite Elements. Paris : Hermes, 1998 ISBN 2-86601-692-0 RAMEAU : grilles (analyse numerique) triangulation elements finis, methode des DEWEY : 516 : Geometric. Trigonometric 624.1 : Genie Civil. Techniques de la construction Le Code de la propriete intellectuelle n'autorisant, aux termes de 1'article L. 122-5, d'une part, que les "copies ou reproductions strictement reservees a 1'usage prive du copiste et non destinies a une utilisation collective" et, d'autre part, que les analyses et les courtes citations dans un but d'exemple et d'illustration, "toute representation ou reproduction integrate, ou partielle, faite sans le consentement de 1'auteur ou de ses ayants droit ou ayants cause, est illicite" (article L. 122-4). Cette representation ou reproduction, par quelque proc6d6 que ce soit, constituerait done une contrefacon sanctionnee par les articles L. 335-2 et suivants du Code de la propriete intellectuelle.
Delaunay Triangulation and Meshing Application to Finite Elements
Paul-Louis George Hournan Borouchaki
HERMES
Web Site : http://www.editions-hermes.fr
Contents 1 Triangle, tetrahedron, triangulation, mesh 1.1 Introduction 1.2 About the triangle 1.3 About the tetrahedron 1.4 Simplex 1.5 Triangulation 1.6 Mesh 1.7 Useful element sets 1.7.1 Element sets 1.7.2 About the construction of such sets 1.7.3 About the construction of the edges of a triangulation 1.7.4 About the construction of the faces of a triangulation 1.7.5 About set membership 1.8 Notes
5 5 5 9 13 13 20 23 23 25 30 32 32 32
2 Delaunay triangulation 2.1 Introduction 2.2 From Dirichlet to Delaunay 2.3 Delaunay lemma 2.4 Incremental method 2.5 Other methods 2.5.1 Method by edge swapping in two dimensions . . . . 2.5.2 Divide and conquer 2.5.3 Sweeping algorithm 2.6 Computational aspects 2.6.1 Robustness and complexity 2.6.2 Reduced incremental method scheme 2.6.3 Cavity correction 2.6.4 Using the kernel 2.6.5 Access to the base
33 33 34 38 41 46 46 47 49 50 50 51 52 57 57
II
CONTENTS 2.6.6 Inheritance 2.6.7 Computational background 2.6.8 Dynamic management of the background 2.7 About some results 2.8 Applications 2.9 Notes
58 61 62 63 68 71
3 Constrained triangulation 73 3.1 Introduction 73 3.2 Constraints and triangulation 74 3.2.1 Some definitions 74 3.2.2 Constrained triangulation problems 76 3.3 The two-dimensional case 76 3.3.1 Construction of a Delaunay admissible constraint . . 76 3.3.2 Method by constraint partitioning 79 3.3.3 Method by enforcing the constraints 80 3.4 Constrained Delaunay triangulation 85 3.5 The three-dimensional case 86 3.5.1 Construction of a Delaunay admissible constraint . . 86 3.5.2 Method by constraint partitioning 87 3.5.3 Method by enforcing the constraints 89 3.6 Higher dimensions 99 3.6.1 Constraint partitioning method 99 3.7 Computational aspects in three dimensions 103 3.7.1 Searching the missing constraints 103 3.7.2 Local configurations 103 3.7.3 Tentative scheme for an algorithm 103 3.8 Some application examples 104 3.9 Applications 110 3.10 Notes Ill 4 Anisotropic triangulation 4.1 Introduction 4.2 Notion of a metric 4.2.1 Metrics and distances 4.2.2 Multiple metrics 4.3 Incremental method 4.3.1 Euclidean space 4.3.2 Riemannian space 4.3.3 Discrete approximations in two dimensions 4.3.4 Discrete approximations in three dimensions
113 113 114 115 116 119 120 121 123 125
CONTENTS
III
4.4 Computational aspects 4.5 Some results 4.6 Applications 4.7 Notes
128 129 129 130
5 Meshing in two dimensions 5.1 Introduction 5.2 The empty mesh construction 5.3 Field points (creation) 5.3.1 Several methods for field point creation 5.4 Control space 5.5 Creation along the edges, classical case 5.6 Creation along the edges, isotropic case 5.7 Creation along the edges, anisotropic case 5.8 Advancing-front type creation 5.9 Field points (insertion) 5.10 Optimization 5.11 General scheme for the mesh generator 5.12 Some results 5.13 Notes
131 131 132 136 136 138 140 143 147 149 151 151 152 154 159
6 Parametric surface meshing 6.1 Introduction 6.2 The fundamental forms and related metrics 6.2.1 Metric of the tangent plane 6.2.2 Metric related to the main curvatures 6.2.3 Physically-based metric 6.3 Surface meshing " 6.3.1 General scheme 6.3.2 Construction of a metric in 6.3.3 Classification of the useful metrics 6.3.4 Boundary meshing 6.3.5 Domain meshing 6.3.6 Surface mapping 6.4 Some results 6.5 Applications 6.5.1 Cylindrical meshing 6.5.2 Sampled surface meshing 6.5.3 Arbitrary surface meshing 6.5.4 Adaptive meshing 6.6 Notes
161 161 163 164 166 172 173 174 174 175 176 176 177 177 187 187 190 190 192 192
fi
IV
CONTENTS
7 Meshing in three dimensions 7.1 Introduction 7.2 The empty mesh construction 7.3 Field points (creation) 7.3.1 Several methods for field points creation 7.4 Control space 7.5 Creation along the edges, classical case 7.6 Creation along the edges, isotropic case 7.7 Creation along the edges, anisotropic case 7.8 Advancing-front type creation 7.9 Field points (insertion) 7.10 Specified internal edges and faces 7.11 Optimization 7.12 General scheme of the mesh generator 7.13 About some results 7.14 Notes
195 195 196 199 199 201 201 203 203 204 206 207 207 208 209 213
8 Optimizations 8.1 Introduction 8.2 Mesh quality 8.2.1 Shape and size qualities 8.2.2 Classification 8.2.3 Other (isotropic) quality measures 8.3 Topological operators 8.3.1 Edge swapping in two dimensions 8.3.2 Ball remeshing in two dimensions 8.3.3 Shell transformation in three dimensions 8.3.4 Entity suppression by local remeshing 8.3.5 Suppression by means of reduction 8.3.6 Edge splitting 8.3.7 Valance relaxation 8.4 Geometric operators 8.4.1 Local geometric operator 8.4.2 Global geometric operators 8.5 Remarks on surface optimization 8.6 Algorithmic aspects 8.6.1 How to use an optimization operator 8.6.2 How to control an optimization operator 8.6.3 How to control an optimization process 8.7 Some results 8.8 Applications
215 215 215 216 220 221 222 223 223 223 227 227 229 229 230 230 234 234 235 236 236 237 238 242
CONTENTS 8.9
Notes
V 242
9 Mesh adaptation 243 9.1 Introduction 243 9.2 Mesh adaptivity methods 244 9.2.1 The r-method 244 9.2.2 The /i-method 245 9.2.3 The p-method 245 9.2.4 The ftp-method 247 9.3 Modification versus reconstruction 247 9.3.1 Adaptivity based on local modifications 248 9.3.2 Adaptivity based on a complete reconstruction . . . 250 9.4 General scheme for an adaptation loop 251 9.5 Control space 252 9.5.1 Definition of the successive control spaces 252 9.5.2 Control space construction 253 9.6 Boundary meshing (or remeshing) 256 9.6.1 Curve meshing (or remeshing) 256 9.6.2 Surface meshing (or remeshing) 256 9.7 Domain meshing 256 9.8 Solution interpolation 259 9.9 General scheme of an adaptive loop 264 9.10 Some results 265 9.10.1 An isotropic example 266 9.10.2 An anisotropic example 272 9.11 Notes 276 10 Data structures 277 10.1 Introduction 277 10.2 Useful information (tentative list) 278 10.2.1 Recalling the notion of a mesh 278 10.2.2 For a (static) problem using a P1 approximation . . 278 10.2.3 For a (static) problem using a P2 approximation . . 283 10.2.4 For an adaptive computational process 286 10.2.5 Constraining a mesh 287 10.3 A general data structure 288 10.3.1 A general data structure 288 10.4 A geometric data structure 296 10.4.1 The two-dimensional case 296 10.4.2 A few remarks about three dimensions 299 10.5 Geometric representation 302
VI
CONTENTS 10.5.1 The two-dimensional case 10.5.2 The three-dimensional case 10.6 Mesh data structure 10.6.1 The two-dimensional case 10.6.2 The three-dimensional case 10.7 Notes
302 304 311 311 313 315
11 Boundary meshing 317 11.1 Introduction 317 11.2 Boundary meshing in two dimensions 317 11.2.1 CAD definition of a boundary 318 11.2.2 Related (discrete) database 318 11.2.3 Construction of a polygonal discrete line 318 11.2.4 Meshing 319 11.3 Boundary meshing in three dimensions 327 11.3.1 Curve meshing 328 11.3.2 Meshing of a surface consisting of several patches . . 328 11.3.3 Surface remeshing using optimization 335 11.4 Results 336 11.4.1 Planar curve mesh 336 11.4.2 Mesh of a surface defined by several patches 339 11.4.3 Surface mesh (remeshing via optimization) 341 11.5 Notes 342 12 Finite element applications 12.1 Introduction 12.2 Metric definition and metric construction 12.2.1 Hessian computation 12.2.2 Remark about the metric computation 12.2.3 Metric associated with the usual norms 12.2.4 Relative error metric 12.2.5 Intersection of several metrics 12.2.6 Transfer of the solution from one mesh to the other 12.3 Three CFD examples 12.3.1 General presentation 12.3.2 Supersonic scramjet 12.3.3 Viscous transonic flow for a Naca-0012 12.3.4 Viscous supersonic flow around a cylinder 12.4 Notes
345 345 345 348 349 349 350 350 351 352 352 354 357 360 365
CONTENTS
VII
13 Other applications 13.1 Introduction 13.2 Medial axis and medial surface 13.2.1 Medial axis 13.2.2 Medial surface 13.2.3 Several applications based on the skeleton 13.3 Parallel computing 13.3.1 A posteriori partitioning 13.3.2 A priori partitioning 13.3.3 Partitioning by inductive Delaunay triangulation . . 13.3.4 Partitioning a set of points by induction from the Delaunay triangulation of the convex hull 13.3.5 Partitioning from the domain boundary 13.4 Minimal roughness of a surface 13.5 Notes
367 367 367 367 368 369 370 370 371 371
.
Appendix
383
.
Bibliography
393
Index
411
373 374 380 381
This page intentionally left blank
Preface A wide range of engineering applications uses triangulations or meshes as spatial support. Given a set of points in Rd (d > 2), a triangulation of this cloud of points fills the corresponding convex hull with a set of elements which are, in general, simplicial in nature (triangle for d = 2 and tetrahedron for d = 3), such that some properties are satisfied. Conversely, given a polygonal domain (d — 2) or a polyhedral domain (d = 3), a mesh of this domain covers the domain with simple geometric elements (triangle, quadrilateral, in two dimensions and tetrahedron, pentahedron, hexahedron, in three dimensions) such that some adequate properties hold. Numerous computational geometry papers and books are devoted to the algorithms used to construct triangulations, with special attention paid to those resulting in Delaunay triangulations. These algorithms, in a suitable way, are an important part of the algorithms used to construct meshes. To this end, triangulation algorithms are of great interest for defining meshing algorithms. A large portion of scientific computing in engineering is the solution of partial differential equations of various type (for solid mechanics, fluid mechanics, thermal modeling, ...) by means of the finite element method. This method requires a mesh of the domain upon which the equations are formulated. Thus, meshing algorithms are of major importance in every numerical simulation based on the finite element method. In particular, the accuracy and even the validity of a solution is strongly tied to the properties of the underlying mesh of the domain under consideration. The aim of this book is to describe, in the first chapters, the different algorithms suitable for constructing a triangulation and, more precisely, a Delaunay triangulation. Then, the following chapters will indicate the way in which triangulation methods can be extended to develop meshing algorithms. Only Delaunay type methods are discussed here while observ-
ing that a large variety of meshing algorithms exists. To this end, the book is divided into three parts. The first part, devoted to triangulations, comprises the first four chapters. The second part dealing with meshing algorithms is made up of the five following chapters and the third part discusses several applications in the four last chapters. A technical appendix and an index are also included in the book. In Chapter 1, general definitions relative to elements, triangulations and meshes are given. Algorithmic hints are given regarding some key-issues that will be used extensively in the algorithms and methods developed throughout the book. In Chapter 2, several methods are discussed that result in the construction of a Delaunay triangulation. Given a set of points in Rd with d = 2 or d = 3, we propose several methods that make the construction of the Delaunay triangulation possible. The definition of a Delaunay triangulation is first given and then several construction methods are presented. A "popular" method, referred to as the incremental method, is emphasized and a reduced version of it is discussed in detail. Other approaches are also given. Algorithmic or computational aspects of the reduced method are mentioned, while indicating the numerical difficulties that can be expected along with some proposed solutions. Chapter 3 deals with constrained triangulations. A Delaunay triangulation is given along with a set of constraints. These constraints are indeed a set of edges in two dimensions and a set of edges and faces in three dimensions. The question is then how to enforce these entities into the triangulation so that they exist, in some sense, as entities of the resulting triangulation. The case of higher dimensions is also mentioned. Chapter 4 is devoted to the way in which anisotropic triangulations can be obtained. Given a set of points and a specified metric field, the purpose is to construct a triangulation which satisfies the given field. The metric specifies the properties that the triangulation should enjoy, in terms of prescribed sizes and directional information. The following chapters discuss the way in which arbitrarily shaped domains can be meshed. Algorithms developed to this end are derived from triangulation algorithms as described in the previous chapters. A chapter is devoted to two dimensions, another deals with parametric surfaces, while a third one discusses the three-dimensional case. The two-dimensional case is detailed in Chapter 5. A domain in R2 is given via a discretization of its boundary, where this boundary is given as a
list of segments. The problem then is how to construct a mesh of the given domain. This construction involves mainly two steps, one being related to the triangulation problem (as described previously), the other dealing with the the way in which a suitable set of internal points can be created. The notion of a control space is introduced as a way to govern the creation of the relevant field points, as well as to specify the nature of the expected point to point connections. This framework in discussed for a classical case, where the boundary discretization is the only input data available, for the isotropic case, where desired sizes are specified and, for the pure anisotropic situation, where both directional and size specifications are given. Chapter 6 indicates how to mesh a parametric surface. A field of metrics is constructed following the fundamental forms of the surface. This field serves to control the mesh construction. In particular, we show how to construct a so-called geometric mesh that is a close approximation of the surface. Chapter 7 follows the same steps as Chapter 5 while discussing the meshing problem of a domain in R*. This domain is defined by a discretization of its boundary, in other words, a surface mesh. Chapter 8 is devoted to mesh optimization, the meshes in question being composed of triangles (in two dimensions) or tetrahedra (in three dimensions). Several local tools are introduced and we propose a strategy that makes the development of a global optimization algorithm possible. In Chapter 9, mesh adaptivity is discussed by focusing on the computational aspect of this topic. Two approaches are proposed. The first one, which is only discussed briefly, relies on local modifications of an existing mesh. The other approach which is discussed in more depth, relies on the construction of the entire mesh. The general scheme of a fully automatic adaptivity loop is given. The last chapters show how to use the previous materials and methods in a true application of the finite element method in two dimensions. Chapter 10 introduces some data structures which enable us to develop a meshing theoretical background. A conceptual data structure is proposed and two applications are discussed. The first one is related to a data structure suitable for describing the geometry of the domain of interest while the second structure corresponds to the mesh representation. In Chapter 11 some details are given concerning the way in which a R2 or R3 domain boundary can be meshed or remeshed. The resulting
mesh serves as input for the meshing methods applied in the corresponding domain. In Chapter 12, significant mesh examples are given. To this end, several two-dimensional C.F.D. applications are used. In Chapter 13, several applications of triangulation methods are described which are not necessarily related to the finite element context. Software packages are listed in the appendix that are mainly devoted to mesh generation or mesh management for finite element purposes. These packages deal with mesh generation for two or three dimensions (most of them are issued from INRIA). The book ends with an index and a bibliography. As a conclusion, we would like to thank here those who contributed, in one way or another, to this book. Among these are the members of the Gamma group at INRIA, P.J. Frey, P. Laug, F. Hecht and E. Saltel who have contributed in various ways to this book. We would also like to thank B. Mohammadi and J. Galtier for their help. We are indebted to M. Desnous, P. Joly, A. Marrocco, A. Perronnet and J.D. Boissonnat, whose fruitful comments helped us to improve several aspects of the book. This text is translated from "Triangulation de Delaunay et Maillage. Applications aux elements finis", first published in french by Hermes, Paris, in 1997. The first translation, done by the authors with the help of P.J. Frey, has been greatly improved by Scott A. Canann, whom we would like to thank here.
Chapter 1
Triangle, tetrahedron, triangulation, mesh 1.1
Introduction
This chapter has several goals. First of all, the different notions that will be used have to be precisely defined, then some sets of elements that will be used in most of the algorithms described furthermore will be introduced. In this respect, triangles and tetrahedra as basic elements of the topic are briefly described. To be more general, indications regarding simplices in any dimension are given. Afterwards, some basic notions are introduced concerning triangulations, conforming triangulations, Delaunay triangulation and general meshes. The last part of the chapter deals with some sets of elements that will be used intensively in the following. These sets are defined and indications regarding how they can be constructed are mentioned. The aim of this part is to introduce some basic data structures and to show how they can be used in the present context. It can be noticed that these aspects are discussed in detail in numerous specialized references and are just presented in this chapter to make the framework precise. In principle, only two and three dimensions are described. However some of the definitions, formula or proposed constructions extend without major difficulties to higher dimensions.
1.2
About the triangle
Elementary definitions. While the triangle is a well-known entity, we would like to be clear with respect to some aspects about it.
6
CHAPTER 1. TRIANGLE, TETRAHEDRON, ...
Roughly speaking, a triangle is a 3-sided polygon. It is defined by a triple, namely the ordered list of its three vertices, denoted as P;, which are given counterclockwise A'= (^1,^2,^3). There are six ways (or permutations) for expressing the triple defining a triangle. In the case where an orientation is defined, only three permutations are relevant. Thus, for a triangle in a plane, the orientation is implicitly defined (using the normal of the plane) and the three possible definitions imply that its surface, denoted as SKI is signed. Thus, the triangles we will consider will have a strictly positive surface. For a triangle on an arbitrary surface, introducing the notion of an orientation is not always possible (except if the surface is orientable, which is where all the triangles can be oriented by adjacency starting from a given oriented triangle). Therefore, except for this case, the surface area SK is positive and given
by (1.1)
or
(1.2)
where x,, yi are the coordinates of vertex Pi (i = 1, 3) and |.| stands for the determinant. Moreover, this definition enables us to explicitly define the sides (or edges) of a given triangle. Indeed, edge i, (i — 1,3), denoted as a t , is the edge1 joining (in this order) vertex P{+\ to vertex P{+i (while Pj = Pj-3 if j > 3 is assumed). Additional information can be associated with each triangle (in a triangulation or in a mesh (see below)). This information can be of geometric or physical nature and will be used in the following chapters. Circumcircle. As will be seen later, the circumcircle of a triangle will play an important role in the triangulation algorithms. This circle is defined by its center (circumcenter) and its radius (circumradius). The computation of these two entities can be achieved in two different ways. Solving a linear system is a first solution while using the equation of the circle is an is the numbering convention chosen, while other choices are possible.
1.2. ABOUT THE TRIANGLE
7
alternate way to define the circumcircle. The equation of the circumcircle is then given by with
where I2 — z 2 + y2 and / 2 = z 2 + y 2 , (i = 1, 3); by expanding this determinant, the coordinates, xc et yc, of the circumcenter are explicitly obtained as well as r#, the circumradius. Exercise 1.1.
Establish expression (1.4) and relation (1.3).
The second method comes from finding the intersection of the perpendicular bissectors associated with two edges of triangle K and the circumradius is then obtained as the distance between this intersection point and any of the vertices of the triangle. This can be achieved by solving the following linear system \
/
\
1 / / 2 i
2\
f 2 _i_
2\
2
£3 - #2 ys - y2 J \ yc ) ~~ 2 y (z§ + y|) - (z , + y|) A formula which gives the circumradius, without using the circumcenter, is as follows r
K =
*
2
(1-6)
where Li is the length of edge az and SK is the surface area of triangle K. Exercise 1.2.
Establish relation (1.6).
Inscribed circle. The inscribed circle or incircle associated with a triangle will be also used in the following. In fact, its radius, the inradius, will be of great interest. This quantity, denoted as px, is defined by SK PK = — PK
where p% is the half-perimeter of K.
/, ^ (1.7)
CHAPTER 1. TRIANGLE,
TETRAHEDRON,
Figure 1.1: About the triangle (vertex numbering, edge numbering, incircle and circumcircle). Exercise 1.3.
Establish relation (1.7).
Quality. The quality or aspect ratio of a given triangle K is a value measuring the shape of the element. Our concern with element quality is due to the fact that the accuracy of a finite element computation is strongly dependent of the quality of the elements in the mesh serving as support. In practice, other measures of element quality are possible. Only two of them will be be used here as will be seen below. Furthermore other definitions will be introduced to take specific constraints into account, for example, in the case where sizes or directional properties are required. The first quality measure of an element K can be defined as /^
QK = <*
""max PK
'
= oi-
SK
(1.8)
where a is a normalisation factor such that an equilateral triangle gives a value of one (a = ^), hmax is the longest edge of the triangle, i.e. its diameter and px is its inradius. This quality adequately measures the shape or aspect ratio of a given element. It ranges from 1, for an equilateral triangle, to oo, for a totally flat element; to return to a range of variation from 0 to 1, the inverse of QK must be used. An alternate measure is
(1.9) where (3 is a normalization factor to ensure a unit quality for an equilateral r-
l~3
triangle (/? = ^y), and hs = < / £ L? where Li is the length of edge i of the V *—i
9
1.3. ABOUT THE TETRAHEDRON
triangle. This value measures the aspect ratio of a given element, similarly to the previous definition. Nevertheless, this function appears to be less sensitive in determining the badly-shaped or ill-shaped elements especially in three dimensions. Exercise 1.4. Determine the value of the coefficients a and /3.
1.3
About the tetrahedron
Elementary definitions. As mentioned above, several definitions regarding a tetrahedral element are given in this section. A tetrahedron is a polyhedron with four triangular faces. It is well defined by a quadruple, the ordered list of its four vertices Pi
There exist twelve permutations for expressing the quadruple defining an oriented tetrahedron (and 24 permutations in general). This text assumes that the faces are oriented, thus their normals are also oriented. In addition, the volume is signed. Let VK be the volume of element A", then VK is defined as : %2 — ^1
yV2 *• ~ Vl yi
ys ~~ y\ y4 ~ y\
Z2 ~ Zi
£3 — Z\
(1.10)
£4 — Z\
or
VK = £4
*3 Z4
(1.11)
where £;, yi, Z{ are the coordinates of vertex Pi of the element. This format enables us to implicitly define the four faces of the element. A face of K is an ordered triple, and we have (with a possible permutation preserving the orientation) : •face 1 : P4
PS
•face 2 : Pi
PS
^4,
•face 3 : P4
Pi
Pi,
•face 4 : Pi
P2
PS-
CHAPTER 1. TRIANGLE,
10
TETRAHEDRON,
Similarly, the edges of K are implicitly defined as the following ordered pairs (the edges are considered as entities of the element and not as members of the faces) : • edge 1 : P1
P2,
• edge 2 : P1
P3,
• edge 3 : PI
P4,
• edge 4 : P2
P3,
• edge 5 : P2
PI,
• edge 6 : PS
P4.
Each edge is defined from its first endpoint to its second endpoint. Circumsphere. Each tetrahedron has an associated circumsphere. Its circumcenter and its circumradius are obtained in the same way as in two dimensions. Either using the equation : h(1.12) with
(1.13)
where / 2 = x2 + y2 + z2 and = x\ + yf + zf (i = 1,4), or by solving a linear system of equations to find the circumcenter. This system indicates that this point is located at the intersection of the mediator planes of the 3 edges emanating from a vertex. Then the circumradius is obtained by computing the distance between this point and any of the element vertices. It should be noted that this means of defining the circumsphere is less computationally expensive (in terms of CPU) than the first way. As in two dimensions, there exists a formula that gives the circumradius explicitly, thereby avoiding the compuhtation of the circumcenter.
where a, 6 et c are the products of the lengths of two opposite edges.
1.3. ABOUT THE TETRAHEDRON Insphere.
11
The radius of the insphere, or inradius, is given by :
PK
= s1 + s!V+s3 + s4
(L14)
where Si is the surface area of face i of the element. Exercise 1.5.
Establish Relation (1.14).
Figure 1.2: About the tetrahedron.
Quality. The quality (ies), value(s) measuring the shape of an element, are defined as for a triangle. Hence, we have : /-^
""max
""max^K
PK
3 VK
QK = a - = <x———
/-. -, r \
1.15)
where SK is the sum of the 5,-'s and a = ^j and
where hs = \ J2 L^ and Z/z is the length of edge i of the tetrahedron and 0= ^ r 216'
Exercise 1.6.
Again, find the values of a and ft.
In [Parthasarathy et al. 1993], a list of quality measures can be found along with the analysis of their sensitivity (see also the Chapter 8).
12
CHAPTER 1. TRIANGLE,
TETRAHEDRON,
Figure 1.3: Degenerated triangles and tetrahedron (right-hand side) and well-shaped elements (left-hand side). Important remark. In two dimensions, a triangle with a circumcircle of bounded radius, whose surface is almost zero (or as small as we want) and whose edges have a non-degenerate length, does not exist. However, in three dimensions, a tetrahedron whose volume is as small as we want but whose circumradius is bounded and whose edges enjoy a reasonable length (Figure 1.3) exists. Such an element, the so-called "sliver", is a typical entity of three dimensional tetrahedral meshes (nevertheless, as will be seen, such a sliver can be a Delaunay element). Characteristical values for a regular unit element. Table 1.1 reports the characteristic values related to an equilateral triangle with unit length along with those of a regular unit tetrahedron (a tetrahedron is regular if its faces are equilateral triangles).
Si triangle tetrahedron
& 4
SK
VK rK - ^3 3 A V3 & 12 4 & 4
PK
& 6 & 12
Table 1.1: Characteristical values for a regular unit element.
1.4. SIMPLEX
1.4
13
Simplex
Let d be the spatial dimension, Rd being this space, and let <S be a set of points in Rd. If the A^s are these points, then
represents a linear combination of points in S. These combinations of n n members of «S, for ^ A,; = 1, define a subspace of R which is referred to 1=1 as the affine hull of the A^s. If, for all i, \i > 0, such combinations are said to be convex. The convex hull of <S, denoted as Conv(S), is the subset of Rd generated by all the convex linear combinations of the members of «S. This hull is the smallest convex set including S. The convex hull of a finite number of points in Rd is a polytope and, this set is a closed bounded set of Rd. A polytope in dimension k (whose affine hull is of dimension k) is a fc-polytope. The convex hull of k + 1 points, k < d, not in an affine space of dimension k — 1 is a particular fc-polytope called a simplex or more generally a Ar-simplex. Thus, in two dimensions, a 2-simplex is nothing more than a triangle while, in three dimensions, a 3-simplex is a tetrahedron. In higher dimensions, one can simply say simplex and, in the following, we will say simplex regardless of the spatial dimension. Most of the results and formula introduced for the triangle and the tetrahedron are still valid in higher dimensions. These includes the volume, circumball, inball, circumradius, inradius, as well as the quality definitions.
1.5
Triangulation
Let 5 be a set of points (in arbitrary position2) in Rd (d = 2 or d = 3), the convex hull of 5, denoted as Conv(S), defines a domain Q in Rd. If K is a simplex (triangle or tetrahedron according to d), then Definition 1.1. Tr is a simplicial covering of Q if the following conditions hold 2 The problem we are interested in is the construction of meshes for complex domains, thus, the points we have to proceed are necessarily in any arbitrary position and not in a general position. Let us recall that a set of points is said to be in a general position if there does not exist, in two dimensions, patterns with three cohnear points or patterns with four cocyclic points. In three dimensions, it means that patterns with four coplanar points or five cospherical points are not allowed.
14
CHAPTER 1. TRIANGLE, TETRAHEDRON, • (HQ)
The vertices of the elements in Tr is exactly 5.
• (HI)
Q
...
Every element K in Tr is non empty. • (H3) The intersection of the interior of any two elements is an empty set. Here is a "natural" definition. Condition (HI] is not strictly necessary to define a covering, it is nevertheless pragmatic with respect to the context and, thus, will be assumed. Similarly, we will asuume that the covering will be conformal, and we will be referred to as a triangulation. Definition 1.2. Tr is a conformal triangulation, a conforming triangulation or simply a triangulation of fi if Tr is a covering following Definition 1.1 and if, in addition, the following condition holds : • (H4)
the intersection of any two elements in Tr is either
-the empty set, — a vertex, - an edge, — a face (d = 3). More generally, in d dimensions, such an intersection must be a fc-face3, for k = — 1, ...,d- 1. The Euler formula, and more generally the Dehn-Sommerville relations, relate the number of fc-faces (k = 0, ..., d— 1) in a triangulation of fi. Euler formula.
In two dimensions, we have the general relation ns — na -f- ne + c = 2 ,
(1-18)
where ns is the number of vertices in the triangulation, na is the number of edges, ne is the number of elements and c is the number of connected components of the boundary of the triangulation. Similarly, in three dimensions, one has ns — na + nf — ne = cste , (1-19) nf being the number of faces in the triangulation, cste being a constant related to the topology of the domain, i.e. the genus of its surface 3
A (— l)-face is the empty set, a 0-face is a vertex, a 1-face is an edge, a Ar-face is in fact a fc-simplex with k < d.
1.5. TRIANGULATION
15
Figure 1.4: Conformal triangles (left-hand side) and non conformal triangles (right-hand side). • cste — 1 for a domain homeomorphic to a ball, • cste = 0 for a domain homeomorphic to a torus, • cste = 2 for a domain homeomorphic to a ball having a (spherical) hole, • etc.
Consequently, in two dimensions, if there is no hole in the triangulation, i.e. if the number of connected components of its boundary is exactly one, then ns — na + ne = 1, also, the relation naj — 2 X nd{ + 3 X ne — 0
relates the number of elements (ne), the number of internal edges (na ? ) and the number of boundary edges (naj). In three dimensions, a triangulation of a closed surface satisfies the relation ns — na + nf = 2,
which means that ns, the number of boundary vertices in a triangulation, na the number of its boundary edges and nf the number of its boundary faces are related. It should be noted that numerous other relations exist, also referred to as Euler relations, depending of the relationship which is sought, see [Berger-1978] [Klee-1966] or [Grunbaum-1967].
16
CHAPTER L TRIANGLE, TETRAHEDRON, ...
Representation of a triangulation. A triangulation is a set of entities described in a suitable manner by picking an adequate data structure. Algorithms for triangulation construction will create this structure, or output structure, but will use in fact a specific structure constructed in such a way as to make possible the different steps included in them. Thus, it will be useful for this internal structure to include, in one way or another, a table4 of the elements in the triangulation (each element being referred to by a number) as well as the neighborhood relationships between the elements (see below). These two pieces of information form the adjacency graph of the triangulation. They are the minimal set of information we need to define to obtain a suitable framework. In particular this data set is well suited for incremental algorithms that will be discussed hereafter. Since a triangulation is a graph, it is possible to define the notion of neighborhood with respect to an edge for a planar triangulation (or a surface triangulation) and with respect to a face for a triangulation in three dimensions. In this way a neighborhood relationship for triangles or tetrahedra is defined, giving the element(s) sharing an edge or a face with a given element. In two dimensions, the ith neighbor, (z = 1,3), of a given triangle, if any, is the triangle sharing «„, edge i of the initial triangle. In the case where an edge is a boundary edge, the associated neighbor does not exist and, by convention, its number is set to 0. Thus, if V(i, j) stands for the "table" of the neighbors, V(i,j) = k indicates that neighbor i of triangle j is the element of number k (k = 0 means that edge i of element j is a boundary edge). In other words, the voyeur in triangle j of element k is vertex Pi, the vertex number i of K. The inverse relationship enables us to find, for a given triangle, the voyeurs of its neighboring triangles. It can be obtained using a matrix, the so-called connectivity matrix, denoted as CK. This matrix is defined as CK = (Cij) (i,j = 1,3) where Ca is the index of the vertex of the neighbor i of K voyeur of K, while C,-j, for i / j, indicates the number of the vertex j of K in the neighbor i of element K. To illustrate this definition, we would like to give a simple example. Let K be a triangle and let Ki (i — 1, 3) be its three neighbors (see Table 1.2 and Figure 1.5).
4 At this time, we don't worry about the practical way in which the notion of a table will be implemented in the computer. The notion of a "table" must be taken in a general sense.
1.5.
TRIANGULATION Vertex of index K Ki K2 K3
17
I Pi Pi Pi C
2 Pi A P3 Pi
3 PS PS B Pi
Table 1.2: Vertices in K and vertices in
Figure 1.5: Definition of K and Ki's. Then the corresponding connectivity matrix is
Thus, the connectivity matrix contains the local numbering of the vertices in the neighboring elements, enabling us to establish the relationships between the vertices and the adjacent triangles. The neighborhood relationships between the tetrahedra in a triangulation can be obtained in the same way. Similarly, a connectivity matrix can be constructed. Validity of a triangulation. A triangulation of a set of points (thus, that of its convex hull) is said to be valid if the conditions (#0), (#1), (#2), (#3) and (H4) are satisfied. These properties are numerically expressed by analyzing the set of elements and the neighborhood relationships related to these elements. In fact, it is sufficient to check the following conditions
18
CHAPTER 1. TRIANGLE, TETRAHEDRON,
...
• for (HQ), every point of the set is a vertex of the triangulation,
• for (HI), (HI), (H3) et (H4), — every zero neighbor defines a face whose supporting hyperplane separates the space in two half-spaces, one containing the set of points, — the neighborhood relationships, in terms of faces, is symmetric (with respect to the pairs of adjacent elements), — every element has a strictly positive volume (surface area, if d = 2). The symmetry of the neighborhood relationships means that the neighbors of a given element have this element as a neighbor and that the edge (the face) shared by two elements is listed in an appropriate way in the element and in the corresponding neighbor. A triangulation is said to be valid if it satisfies these conditions related to the covering aspect, topology and metric. Delaunay triangulation. Among the different possible types of triangulations, the Delaunay triangulation 5 will focus our attention. Let us recall that S is a set (or a cloud) of points and that £1 is Conv(S). Definition 1.3. Tr is a Delaunay triangulation of fi if the open6 (balls) discs circumscribed associated with its elements are empty. This criterion, referred to as the empty sphere criterion, Figure 1.6, means that the open balls associated with the elements do not contain any vertices (while the closed balls contain the vertices of the element in consideration only). This criterion is a characterization of the Delaunay triangulation. This property leads to several other characteristics enjoyed by a Delaunay triangulation. About some properties in two dimensions. Among the different possible triangulations of S, the Delaunay triangulation • lexicographically maximizes the minimum angle formed by the triangulation edges, 5
As will be seen hereafter, the triangulations constructed in practice will not be strictly Delaunay. This fact is related, especially, to the round-off errors due to any computer implementation and can also be a consequence of some constraints that will be added in the construction. 6 Considering closed discs leads to the same result while non simplicial elements may be formed.
1.5. TRIANGULATION
19
Figure 1.6: The empty sphere criterion is violated, the disc of K encloses the point P. • lexicographically minimizes the maximum circumradius (circles associated with the triangulation elements). As a consequence, a triangulation whose angles are non obtuse is a Delaunay triangulation. Let T be any triangulation of <S, o:z:(T) be the angles between the edges of T and rt-(T) be the radius of the circumcircles associated with the triangles of T. Let V(T) (respectively W(T}} be the vector defined by the c*j(T)'s (resp. by the n(T)'s) ordered according to the increasing order (resp. decreasing order), then the Delaunay triangulation maximizes (resp. minimizes) V(T] (resp. W(T}} with respect to the ordering defined by V(T\) < V(T-2) if there exists an index j such that oti(T\) — a,- (72) for i < j and c*j+i(7i) < aj+i(72) (resp. W(T\) < W(Ti] if there exists an index j such that r;(7i) = ri(T?) for i < j and r j+1 (7i) < rj Some properties in any dimension. The following results hold in any dimension. For a proof, see [Rajan-1994] • the maximum radius of the minimal spheres associated with the elements is minimum (the minimal sphere of a given element is the smallest sphere that contains this element), • in a Delaunay triangulation, the union of the circumballs associated with the elements sharing an internal point is included in the corresponding union associated with the same point for any other type of triangulation,
20
CHAPTER 1. TRIANGLE, TETRAHEDRON,
...
• the sum of the squares of the edge lengths weighted by the sum of the volumes of the elements sharing these edges is minimal. Moreover, if all the simplices in a triangulation enclose the center of their circumsphere, then this triangulation is a Delaunay triangulation (this property is an extension of the property related to triangulations with non obtuse angles in two dimensions). These properties ensure a certain regularity 7 for the Delaunay triangulation. An important problem is to ensure that a set of input edges (in two dimensions) or a set of input edges and faces (in three dimensions) exists in a triangulation. In the following, Const will denote a set of such entities. Definition 1.4. Tr is a constrained triangulation ofQ if the members of Const are entities ofTr. In particular, a constrained triangulation 8 can satisfy the Delaunay criterion locally, except in some neighborhood of the constrained edges and faces. Up to now, we have implicitly discussed isotropic triangulations. The problem of constructing anisotropic triangulations will be dealt with later. Let us mention, at this time, that an anisotropic triangulation is a triangulation whose elements are aligned with some specified directions.
1.6
Mesh
Now we turn to a different problem. Let fi be a closed bounded domain in R2 or jR3, the question is how to construct a conforming triangulation of this domain. Such a triangulation will be referred to as a mesh of fi and will be denoted by Tr or Th for reasons that will be made clear in the following. Thus, Definition 1.5. Th is a mesh ofQ, if
• (HI]
Q= U K. Kerh
• (H2)
Every element K in Th is non empty.
7 This does not mean, according to some authors, that a Delaunay triangulation is a priori optimal when used in finite element computations, a sliver being a counter example refuting this point of view. 8 While a constrained Delaunay triangulation (for example in two dimensions) is a triangulation which satisfies the empty sphere criterion but where a ball can contain a point in the case where the latter is not seen, due to a constrained edge, by all the vertices of the considered element.
1.6. MESH
21
• (H3)
The intersection of the interior of any two elements is empty.
• (H4)
The intersection of any two elements in Th is either,
— the empty set, - a vertex, — an edge, — a face (d = 3). Clearly, the set of definitions related to a conforming triangulation is met again. The fundamental difference between a triangulation and a mesh is that a triangulation is a covering of the convex hull of a set of points (the latter being given) while a mesh is a covering of a given domain defined, in general, by a discretization of its boundary. Then, at least two new problems occur. They are related to : • the enforcement, in some sense, of the boundary of the domain meaning that the triangulation is a constrained triangulation, • the necessity of constructing the set of points which will form the vertices of the mesh since usually only the boundary points of the given boundary discretization are, in general, given as input. Nevertheless, it is clear that the algorithms to be developed to construct a mesh will be derived, using an appropriate scheme, from those used for constructing a triangulation. Remark. In the finite element method, the meshes9 are, in general, denoted by Th, where the index h of the notation refers to the diameters of the elements in the mesh. These quantities are used in the error majoration theorems, see, for instance, [Ciarlet-1991]. Remark. Meshes, as discussed in this book, will be used in support of finite element computations. Hence, they must also enjoy a series of properties which will be introduced hereafter. In this respect, they must have a suitable quality (see after). This quality is a value that is used in the theoretical theorems of error majoration. 9
It should be noted that people with a finite element background use with the term triangulation as a synonym of the term mesh.
22
CHAPTER 1. TRIANGLE, TETRAHEDRON,
...
Validity of a mesh. Similar to the notion of a valid triangulation, the validity of a mesh is achieved if conditions ( H I ) , (H2), (H3) and (#4) of the mesh definition are satisfied. These properties, as in the case of triangulations, are numerically expressed by analyzing the set of elements in the mesh as well as the neighborhood relationships of the mesh. In practice, it is necessary to check the following • every zero neighbor corresponds to a boundary face of the domain, • the neighborhood relationships, in terms of faces, is symmetric (for the pairs of adjacent elements), • every element has a strictly positive volume (surface area, if d = 2). Quality. The quality QM of a mesh 7^ is related to the quality of the elements in the mesh. It can be expressed in several ways, such as • a global quality defined as (1.20) the distribution of the elements according to their quality, the comparison between the value QM and a target value. QM is the quality of the worst element in the mesh. The target value is the quality of the best element which can be constructed given the worst edge (face), i.e., in general, an edge or a face of the given boundary discretization. In two dimensions, the target value is independent of the data, meaning that it is always possible to construct an equilateral triangle from a given edge (assuming that the optimal point required in this construction belongs to the domain). The same property does not extend to three dimensions. In fact the quality of the best tetrahedron that can be constructed from a given triangular face is strongly related to the quality of this triangle. While not analytically proven, the following relation appears to hold
In this relation, Q^D is the quality of the optimal tetrahedron that can be derived from a given face whose quality is valued
1.7. USEFUL ELEMENT SETS
23
Remark. Size variation between boundary edges (faces) may also alter the mesh quality, but it is not as obvious just how to evaluate this factor. The previous materials only take into account the geometric aspects of the elements. The need of additional features will lead us to the problem of adaptation, which will be discussed later.
1.7
Useful element sets
The algorithms involved when constructing a Delaunay triangulation and more generally a mesh, make extensive use of a certain number of specific sets of elements10. The aim of this section is to introduce these useful sets and to propose several methods to make their construction possible. In this regard, the reader will be more familiar with the notions (notations as well as basic ideas), algorithms and data structures frequently used in the following chapters. 1.7.1
Element sets
To make the notation precise, Tr is a conforming triangulation, P is a point, assumed to be distinct from all element vertices in the triangulation, K is an element in Tr and CK is the circumcircle (circumsphere) associated with K. The sets we are interested in are introduced alphabetically. Formal definitions are given which are then detailed at the time the relevant element set is used. Ball. Let P be an element vertex, the ball associated with P is the set of elements including P as a vertex. In general, this element set11 can have any cardinality. Note that this notion of a ball, as a set of elements, will be used along with the classical topological notion of a ball, the context indicating which type of ball is considered. Base. If P is a point enclosed in the triangulation, the base associated with P is the set of elements in Tr enclosing P. 10
We use both the set of the elements or their unions, the context making clear for any case the relevant notion. 11 This set is also referred to as an umbrella when P is inside the domain or a fan when P belongs to a boundary (d — l)-face. Indeed, the balls we will consider in the following will be of the umbrella type.
24
CHAPTER 1. TRIANGLE, TETRAHEDRON,
...
Conversely, in the case where P is not enclosed in the triangulation, the base associated with P is the set of elements formed by joining P with the edges (faces) of the elements in the triangulation which are visible12 from P. In two dimensions, if P is enclosed in the triangulation, the base is reduced to one element (the one within which P falls), whereas if P belongs to an edge, the base is formed by the two elements sharing this edge. Following this assumption, in three dimensions, the base is reduced to one element (if P falls within an element), includes two elements (if P lies on a face) or encompasses the set of elements sharing an edge (when P lies on this edge). Cavity. Under some assumptions, cf. Chapter 2, the cavity associated with a given point P is the set of elements in Tr whose circumdiscs (circumballs) enclose the point. An adequate definition of this set13 is a key of the Delaunay triangulation algorithm, as will be seen. Pipe. Let s be a segment enclosed in Tr, we assume that the endpoints of s are two vertices in the triangulation. Then, in two dimensions, the pipe associated with s is the set of elements having at least one edge intersected by s. In three dimensions, the pipe, or more precisely, the simple pipe, associated with s is the set of elements which have at least one face intersected by s but have no edges that are intersected by s. The previous definition, in three dimensions, is not sufficient to characterize all the intersection cases between the elements in a triangulation and a given segment s whose endpoints are element vertices. Thus, the notion of a general pipe is required. General pipe. Let s be a segment enclosed in Tr, we assume that the endpoints of s are two vertices in the triangulation. Then, in three dimensions, the general pipe associated with s is the set of elements having at least one face or one edge intersected by s. 12
An edge is said to be visible from a given point if the triangle formed by this edge and this point does not intersect any entity in the triangulation. 13 Also referred to as a core.
1.7. USEFUL ELEMENT SETS
25
From this definition, when an edge is intersected by s, then the general pipe includes at least one shell (see below) whose defining edge is that intersected by s.
Shell. Let a be an edge in Tr, the shell associated with a is the set of elements sharing this edge. In this case, edge a is said to be the defining edge of the shell. This pattern, quite common in three dimensions, can include any number of elements. It will play an important role in the algorithms developed for mesh optimization. A closed shell (i.e. its defining edge is not a boundary edge) includes at least 3 elements. While a shell is, in principle, associated with an edge, for sake of simplicity, we will also name a shell any set of two elements sharing a face. In this case, the shell will be a face shell instead of an edge shell. In two dimensions, the notion of an edge shell infers the two triangles sharing an edge. On the other hand, this restriction does not exist for a three-dimensional edge.
1.7.2
About the construction of such sets
The construction of these sets relies either on a topological analysis of the triangulation, i.e. based on the neighborhood relationships, or on a geometric analysis of the triangulation, requiring computations such as searching, intersection, and so on. In terms of computing issues, a topological construction will be exact while a geometric construction can be affected by the numerical accuracy of the computations. In what follows, we assume that a triangulation is given whose elements are known as well as the corresponding neighborhood relationships. In some cases (for instance when considering the construction of a cavity), we assume also that the circumcenters and the circumradii of the elements are known. Enumerating the ball associated with a vertex. We assume that we have one element with P as a vertex, where P is the point whose ball is sought. In two dimensions, the ball enumeration is completed by using a stack, following the neighborhood relationships, the neighbor of the vertex following or preceding P in the visited element. To ensure convergence one can check that the element candidate to be put on the stack has not been previously added. This method does not use the connectivity matrix. The convergence is completed at the time the element to be put on the
26
CHAPTER 1. TRIANGLE, TETRAHEDRON, ...
stack is the initial element. By using explicitly the connectivity matrix, it is possible to loop around point P, thus avoiding any further control. In three dimensions, the connectivity matrices lead to the same algorithm while using the neighborhood relationships requires a more sophisticated algorithm. It is indeed not a trivial task to ensure that no cyclic paths are created while cycling around a point. To avoid this, a stack is used in combination with dynamic coloring14 of the elements already on the stack in such a way so as to decide if a new element candidate must be put on the stack or is already there, thus the convergence of the method is ensured. This convergence is completed when the stack that determines if any neighbors must be put on the stack is empty. Remark. It is also possible to find the ball of a point without using explicitly the neighborhood relationships. This construction is left as an exercise. Construction of the base associated with a given point. We consider one element, say A'o, in the triangulation and a point P enclosed in the latter15. The problem is to visit the triangulation by adjacency, by using the neighborhood relationships, starting from KQ until one element of the sought base is obtained (Figure 1.7). Two possible paths are proposed to solve this problem. Let G be the barycenter of KQ. Two consecutive elements in the path are defined by a non empty intersection of their common edge (face) with segment [GP]. By definition, this path is well defined and the last element in the path is one element of the sought base. In this process, the computation of the intersection of an edge (a face) with a segment is the sole operation requested. Any edge (face) / intersects [GP] if all the edges (faces) of the virtual simplex [/, P], formed by joining / and P, except / are visible by G. This first method converges necessarily, but each intersection check costs d + 1 (d being the space dimension) surface (volume) computations meaning that the (d — l)-faces are visible or not by G. Therefore, a second method is proposed based on a visibility criterion that reduces the computational expense. In fact, we select in the path the 14
A dynamic coloring avoid to use a boolean to mark the entities under consideration and thus makes the management of the corresponding table easier. This technique can be appb'ed in numerous computational procedures and is a powerful and low-cost way to decide if a given entity has been previously considered or not. 15 The case where P is outside the triangulation will be not used in the following (due to an adequate assumption)
1.7. USEFUL ELEMENT SETS
27
Figure 1.7: Searching the base by adjacency starting from triangle KQ. neighbor associated with the first visibility criterion that is not satisfied (thus, the number of computations is, in average, less than d+ 1.) Two consecutive elements in the path share a (d — l)-face whose supporting hyperplane separates the first element and point P. In this case, the path is not unique, nor well defined. The path can be cyclic making it impossible to reach an element of the base, as can be seen in Figure 1.8.
Figure 1.8: Cyclic path (academic example). Starting from triangle 1, triangles 2, 3, 4, 5 and 6 are visited before returning to triangle 1. The process stopsis if no previously visited triangles can be visited again by changing the way in which a further element is selected. To avoid a cyclical path, we propose to consider in a random order
one of the faces (actually a (d − 1)-face, i.e. an edge or a face depending on the dimension d) of the current simplex whose supporting hyperplane separates the simplex and point P, in the case where a choice is possible. As such a path always exists, this algorithm converges. This method, relying on a visibility criterion, is less expensive than the previous one and has proven to be computationally efficient. Determining the existence of a cyclic path in a Delaunay triangulation is a tedious problem. Nevertheless, from a practical point of view, it is advised that this possible situation be taken into account.
Exercise 1.7. Express, in two dimensions, the visibility criterion in terms of surface sign evaluations (Hint: the analysis of the signs of the surfaces of the triangles formed by joining P with the edges of a given triangle K fully determines the location of P with respect to K; notice that 7 regions are possible).
Exercise 1.8. Repeat Exercise 1.7 in three dimensions (Hint: use the volumes; 21 regions are then possible).
Remark. Another method consists of finding the closest vertex to point P by adjacency, starting from an arbitrary vertex, while decreasing the distance between the current vertex and the point P. One of the elements including the last vertex of the path is then a member of the base associated with P. Indeed, P belongs to the Voronoï cell related to the last vertex found. This method only makes sense for Delaunay triangulations.
The efficiency of the first two methods relies on the choice of the simplex initializing the path, K_0. In particular, if K_0 is close enough to P, the length of the path joining K_0 and P is reduced, as is the number of elements that must be visited. Several techniques enable us to find a candidate element K_0 close enough to P. The easiest method consists of using a grid16 and of encoding the points of the triangulation in this grid. The grid enables us to sort the points and to group the neighboring points in a packet. Thus, it is easy to find in which box of the grid a given point P falls and, by visiting some neighborhood, a box enclosing a previously encoded point is sought. Hence, any element having this point as a vertex can be selected as an initial guess.
16 This very simple data structure can be replaced by a more sophisticated structure such as a quadtree in two dimensions or an octree in three dimensions. One can notice that the term grid is taken here in a restrictive sense: a grid is structured, meaning, in two dimensions, that it is composed of squares or rectangles (each element vertex, except those located on the boundary, is shared by exactly four elements).
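A possible sketch of such a bucketing grid is given below; the function names and the choice of a dictionary of cells are mine, not the book's. Once a nearby point index is returned, any element having it as a vertex (for instance through a point-to-element table) initializes the walk.

    def build_grid(pts, nx, ny):
        """Bucket point indices into a uniform nx-by-ny grid covering their bounding box."""
        xs = [p[0] for p in pts]
        ys = [p[1] for p in pts]
        xmin, xmax, ymin, ymax = min(xs), max(xs), min(ys), max(ys)
        def box(p):
            i = max(0, min(nx - 1, int((p[0] - xmin) * nx / ((xmax - xmin) or 1.0))))
            j = max(0, min(ny - 1, int((p[1] - ymin) * ny / ((ymax - ymin) or 1.0))))
            return i, j
        cells = {}
        for idx, p in enumerate(pts):
            cells.setdefault(box(p), []).append(idx)
        return cells, box

    def nearby_point(cells, box, p, rmax=64):
        """Return the index of a previously encoded point found in the box of p,
        or in a progressively enlarged neighborhood of that box (None if none)."""
        i0, j0 = box(p)
        for r in range(rmax):
            for i in range(i0 - r, i0 + r + 1):
                for j in range(j0 - r, j0 + r + 1):
                    if cells.get((i, j)):
                        return cells[(i, j)][0]
        return None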
Construction of the cavity associated with a given point. In principle, there is no difficulty in constructing the cavity associated with a point. Indeed, to obtain this cavity, it is required to find exhaustively all the elements in the triangulation that violate the Delaunay criterion. To avoid this exhaustive search, expensive by itself, a different method can be employed, relying on the fact that the cavity is a connected set (cf. Chapter 2). Let P be the point of interest, assumed to be enclosed in the triangulation; then the base17 (cf. above) can be obtained. The cavity is then initialized by the base and, by adjacency, this set is enriched by incorporating all elements whose circumball encloses P. Each stacked element is marked (by a color) and, when we have to examine a neighbor of a previously stacked element, we check first that it has not been previously stacked. Checking whether a point falls in a ball relies on the comparison of the distance d between P and the circumcenter of the element in question with the corresponding circumradius r:

d − r < 0 .    (1.22)
It is well known that the above test, as performed in a computer, leads to major numerical troubles (which is perhaps the main motivation of the following chapters). In the case where the circumcenters and the circumradii are not updated and stored in the structure, it is possible to directly use their equations. Thus, in two dimensions for example, point P with coordinates (x_P, y_P) falls in the circumcircle (falls in the open disc associated with it) or belongs to this circle if

Δ(x_P, y_P) ≤ 0 ,    (1.23)
where Δ is the determinant previously introduced (see Relations 1.4 or 1.13 in this chapter).
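A common equivalent form of this determinant test is sketched below; the exact layout of Relations 1.4 and 1.13 is the book's, and the sign convention used here (inside corresponds to a positive determinant for a counterclockwise triangle) may be the opposite of the one adopted for Δ.

    def in_circumcircle(a, b, c, p):
        """Return True if p lies strictly inside the circumcircle of the
        counterclockwise triangle (a, b, c). With integer coordinates the
        determinant below is evaluated exactly by Python's integers."""
        ax, ay = a[0] - p[0], a[1] - p[1]
        bx, by = b[0] - p[0], b[1] - p[1]
        cx, cy = c[0] - p[0], c[1] - p[1]
        det = ((ax * ax + ay * ay) * (bx * cy - by * cx)
             - (bx * bx + by * by) * (ax * cy - ay * cx)
             + (cx * cx + cy * cy) * (ax * by - ay * bx))
        return det > 0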
17 Or any element whose circumball encloses P.
Enumerating the shell associated with an edge. Let us consider that we have an element with an edge a, the edge defining the shell we are interested in. Finding this shell can be completed by an algorithm quite similar to that used for finding a ball. Using the connectivity matrices leads to the result without difficulty, while using the sole neighborhood relationships in terms of faces needs some care. Indeed, it is required to turn around an edge while ensuring the convergence of the process. To avoid putting the same element on the stack several times, it is sufficient to check that every element to be put on the stack for a given element is not the one preceding it in the stack. The convergence is completed when the element to be put on the stack is the initial element of the shell, in the case of a closed shell. Otherwise, a is a boundary edge and the shell is not a closed shell.
Exercise 1.9. If a = αβ denotes the defining edge of a closed shell, show that it is possible to express this shell in a generic way, written as (M_i, α, β, M_{i+1}) for i = 1, ..., n_c, with n_c the number of elements in the shell and the M_i's the vertices other than α and β (M_{n_c+1} = M_1) (Hint: use the possible permutations for enumerating an element).
Construction of the pipe associated with a segment. The algorithm allowing the construction of a pipe (or a general pipe) associated with a segment is similar to the first method used to find the base of a point. Intersection computations are strictly required here, leading to the computation of d + 1 surfaces (volumes). The algorithm is initialized with one element having one of the endpoints of the segment as a vertex.
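Returning to the shell enumeration described above, here is a rough three-dimensional sketch, assuming tetrahedra stored as 4 vertex indices in tet and a neighbor array adj where adj[t][k] is the tetrahedron opposite vertex k (or -1 on the boundary); these names are assumptions, and for an open shell a second walk from t0 in the other direction would be needed.

    def shell_3d(tet, adj, t0, a, b):
        """Enumerate the tetrahedra sharing edge (a, b), starting from t0.

        Turns around the edge through face neighbors, never stepping back to
        the element just left; stops when t0 is reached again (closed shell)
        or when the boundary is hit (open shell)."""
        shell, prev, t = [t0], -1, t0
        while True:
            others = [v for v in tet[t] if v != a and v != b]
            nxt = None
            for v in others:                  # the two faces containing (a, b)
                n = adj[t][tet[t].index(v)]   # neighbor through the face opposite v
                if n != prev and n != -1:
                    nxt = n
                    break
            if nxt is None or nxt == t0:
                return shell
            shell.append(nxt)
            prev, t = t, nxt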
1.7.3 About the construction of the edges of a triangulation
Here is an operation that will be used intensively and that could be quite expensive if no efficient algorithm is used. The problem is to construct all the edges (or only some edges with a specific property) of a given triangulation and to store them in such a way as to ensure easy retrieval (cf. below).
The two-dimensional case. There is no difficulty in finding the edges of a triangulation in two dimensions. A possible method consists in visiting the elements and stacking all edges whose voyeur18 is not marked, while marking their voyeur in the element adjacent to the visited element. The algorithm then leads to locally marking the vertices. On the other hand, the storage of these edges must be done carefully. A possible solution is hashing.
18 The voyeur, in a triangle, of one of its edges is the vertex other than the endpoints of this edge.
Let e_1 and e_2 be the endpoints of an edge a. We assume that we have a table Sum(1 : 2*np), where np is the number of points19 in the triangulation, a table Link(1 : na), with na the number of edges in the triangulation, and a table Min(1 : na). The storage then consists in (after initializing i to 0 and all the tables to 0):
• computing s = e_1 + e_2,
• if Sum(s) = 0, setting i = i + 1, Sum(s) = i and Min(i) = min(e_1, e_2),
• else, with k = Sum(s), if Min(k) ≠ min(e_1, e_2),
  – if Link(k) = 0, setting i = i + 1, Link(k) = i and Min(i) = min(e_1, e_2),
  – else visiting the table Link (i.e. setting j = Link(k) and k = j) while Min(j) ≠ min(e_1, e_2), so as to find an index j such that Link(j) = 0, and then setting i = i + 1, Link(j) = i and Min(i) = min(e_1, e_2).
As a result, i is the number of edges in the triangulation. Moreover, it is easy to analyze table Min in parallel with table Link to check whether an edge is a new one or already exists in the stack. This point will be strictly needed in three dimensions (as discussed below).
Exercise 1.10. Modify the above algorithm to find the boundary edges of a triangulation.
Exercise 1.11. Show that the encoding based on the sum and the min can be replaced by an encoding using the min and the max. For both methods, compare the number of collisions (a collision exists, in the case of an encoding using a sum, if there are pairs of different e_i's for which the sum is the same).
The three-dimensional case. As mentioned above, constructing the table of the edges of a triangulation in three dimensions is more tedious. This is due to the fact that it is possible to meet the same edge several times while visiting the elements. Thus, the construction is made at the same time as the storage, because of the possible collision detection. Using this principle, the previous algorithm can be employed.
19 The points are assumed to be sequentially numbered in a connected way. If this property is not satisfied, np must be the largest point number.
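A minimal sketch of this Sum/Link/Min scheme is given below; the table names follow the text, while the Python layout, the upper bound taken for na and the loop over triangle edges are assumptions of the sketch. Note that each internal edge is encountered twice and that the chain detects the duplicate, which is exactly the collision detection needed in three dimensions.

    def hash_edges(triangles, np):
        """Collect the edges of a 2D triangulation with the Sum/Link/Min encoding.

        Returns the number of edges and the three tables; points are numbered
        from 1 as in the text."""
        na = 3 * len(triangles)                 # upper bound on the number of edges
        Sum  = [0] * (2 * np + 1)
        Link = [0] * (na + 1)
        Min  = [0] * (na + 1)
        i = 0
        for t in triangles:
            for e1, e2 in ((t[0], t[1]), (t[1], t[2]), (t[2], t[0])):
                s, m = e1 + e2, min(e1, e2)
                if Sum[s] == 0:                 # first edge with this sum
                    i += 1; Sum[s] = i; Min[i] = m
                    continue
                k = Sum[s]
                while Min[k] != m and Link[k] != 0:
                    k = Link[k]                 # follow the collision chain
                if Min[k] != m:                 # new edge colliding on the same sum
                    i += 1; Link[k] = i; Min[i] = m
                # otherwise the edge was already stored
        return i, Sum, Link, Min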
1.7.4 About the construction of the faces of a triangulation
Here is another step that can be time-consuming if no efficient algorithms are used. The problem is to find all of the faces (or some selected faces) of a triangulation and to store them in a convenient way. To find the faces of a triangulation, an algorithm similar to that used for finding the edges of a triangulation in two dimensions can be employed. On the other hand, the storage of such faces requires some care. In fact, one can follow the type of storage discussed previously in the case of the edges of a triangulation while modifying it slightly. As an exercise, one can see that it is sufficient to store the sum (table Sum), the minimum and the maximum of the numbers of the three vertices of the face (tables Min and Max).
1.7.5 About set membership
The question is to know (quickly) if an edge or a face is a member of a table storing a set of edges or faces. Clearly, the manner in which the storage has been described enables us to answer the question.
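As an illustration, a retrieval counterpart of the hashing sketch given in Section 1.7.3 could look as follows; it assumes the same table layout and is only a sketch.

    def find_edge(Sum, Link, Min, e1, e2):
        """Return the index of edge (e1, e2) in the Sum/Link/Min tables, or 0 if
        the edge is not stored."""
        s, m = e1 + e2, min(e1, e2)
        k = Sum[s]
        while k != 0 and Min[k] != m:
            k = Link[k]          # follow the collision chain for this sum
        return k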
1.8 Notes
As a conclusion, this chapter has introduced several definitions relative to elements, triangles and tetrahedra as well as triangulations, by making precise the notions of conformity, neighborhood relationships and the background (circumcircles, circumspheres) required to design the algorithms related to triangulation construction. Some sets of elements that will be used intensively by the algorithms developed in the subsequent chapters have been briefly presented. Indications regarding the construction of such sets have been given, leading to the introduction of data structures (such as stacks, tables, ...) and some classical tools (such as hashing, coloring, ...) that make the management of these structures easier. The aim of this book is not the detailed description of these classical data structures and basic tools. We refer the reader to the ad hoc literature devoted to computer science, data structures and associated algorithms or computational geometry. For instance, one can consult [Aho et al. 1983] about data structures, [Shamos,Preparata-1985], [Edelsbrunner-1987] as well as [O'Rourke-1993] or [Mulmuley-1993] and [Boissonnat,Yvinec-1995] which, while dealing with computational geometry, contain material about data structure construction and management.
Chapter 2
Delaunay triangulation

2.1 Introduction
Triangulations and more specifically the Delaunay triangulation have been investigated for a long time, in particular in computational geometry1. Meanwhile, the increasing demand for suitable meshes in order to perform efficient computation of the solution of realistic problems formulated in terms of partial differential equations led engineers to pay some attention to triangulation techniques. Others are also interested in Delaunay triangulation, for instance in robotics, image processing, CAD environments, geographic sciences, and so on.
It is a rather tedious task to try to establish an exhaustive chronology about triangulations. While going back to Euclid2 is not strictly necessary, it seems justified to mention Dirichlet3, Voronoï4 and also Delaunay5, before discussing the recent developments regarding triangulations. A short section aims to summarize the milestones of this evolution.
Practically speaking, in this work we consider the Euclidean space R^d (with d = 2 or d = 3) along with the usual Euclidean metric. Given a set (or cloud) of points, denoted by S, we aim at constructing a triangulation of either Conv(S), the convex hull of S, or Box(S), a box enclosing S; the purpose of the latter being emphasized in the next chapters. Among the possible triangulations, we focus on the Delaunay triangulation and the proposed method will then be referred to as a Delaunay method.
1 New name of an old topic.
2 Euclid, 330-270 B.C.
3 Peter Gustav Lejeune-Dirichlet, 1805-1859.
4 Georges Voronoï, also known as Georgy Fedoseevich Voronoy, 1868-1908.
5 Boris Nikolaevich Delone or Delaunay, 1890-1980. See [Faddeev et al. 1992] for a biography of Delaunay.
Actually, several different approaches can be taken if one wants to introduce this class of methods. Among those, a particular method known as the incremental method will be detailed. Our interest in this method is motivated by two aspects. First of all, it is the easiest method to describe as well as to implement and, on the other hand, it has proved to be the most suitable for our purpose. The aim of this chapter is to discuss this approach in more detail, by first providing a history of the ways in which the Delaunay triangulation has been previously considered.
2.2 From Dirichlet to Delaunay
Figure 2.1: B.N. Delaunay.
Dirichlet, [Dirichlet-1850], at the end of the 19th century, has shown that, for a given set of points in two dimensions, it is possible to partition the plane into convex cells using a proximity criterion. At the turn of this century, Voronoï, [Voronoï-1908], focused on quadratic forms related to proximity problems. From this is derived a quite interesting application
of the "parallelledres primitifs6", that is, the elementary polyhedra. As a consequence, Dirichlet's results can be extended in the three dimensional space. The concept of a Voronoi diagram, a set of cells corresponding to the proximity notion for a set of points, is then introduced. In the 30's, Delaunay, [Delaunay-1934], established that it is possible to deduce a triangulation from these diagrams by duality. His name being associated with this kind of triangulations, the Delaunay triangulation is introduced. Several issues related to this type of triangulations are then established including its uniqueness. This property is expressed using a key feature, referred to as the empty sphere criterion, see Chapter 1 where this notion has already been mentioned. In the 70's, i.e. approximately 40 years later, Lawson, [Lawson-1977], notices that, in two dimensions, the Delaunay triangulation and diagonal swapping are closely related. He shows that a local pattern of two non-Delaunay adjacent elements can be replaced by a Delaunay configuration by simply swapping the common edge. As a main consequence of this result, a Delaunay triangulation can be derived from any arbitrary triangulation using local modifications (edge swapping) only. Green and Sibson, [Green,Sibson-1978], pointed out that Delaunay triangulation enjoys a series of interesting properties. For instance, a "maxmin" criterion is achieved. That is the Delaunay triangulation is the one, among all the possible triangulations, that maximizes the minimum of the angles formed by the edges between the elements (cf. Chapter 1). One has still to wait until the 80's to find some effective algorithms for constructing the Delaunay triangulation in terms of computational efficiency, particularly in dimensions higher than two; [Hermeline-1980] as well as [Bowyer-1981], [Watson-1981] and [Avis,Bhattacharya-1983] being the main references related to this aspect. Voronoi diagram. Let «S be a set of points7 P t , i = 1,..., n, in d dimensions. The construction of the Delaunay triangulation of the convex hull of this set can be completed by considering that this triangulation is the dual of the Voronoi diagram associated with <S. The Voronoi diagram is the set of cells or poly topes, Vi, defined by Vi = {P
such that
d(P, Pi) < d(P, />•),
Vj £ i}
(2.1)
where d(., .) is the distance between two points, this distance being induced by the Euclidean metric8.
6 In French.
7 Sites can be used instead of points.
A cell9, V_i, is then the locus of the points closer to P_i than to any other point in S. The V_i's are closed convex polygons (polyhedra in three dimensions, d-polytopes in d dimensions); these (open) non-overlapping cells cover the space and constitute, in the plane (i.e., if d = 2), the Dirichlet tessellation ([Dirichlet-1850]) or, in a more general sense, the so-called Voronoï diagram in R^d.
Figure 2.2: Voronoi diagram (example in two dimensions).
Delaunay, Voronoï dual. Based on its definition, each cell V_i is a non-empty set and is associated with one point in S. From these V_i's, the dual can be constructed, which is the desired Delaunay triangulation. This can be considered as the fundamental result of Delaunay. For instance, in two dimensions, the cell sides are midway between the two points they separate; thus, they form the perpendicular bisectors of the edges of the triangulation. In other words, joining the vertices in S belonging to two adjacent cells results in the triangulation.
There is also, in any dimension, an orthogonality relationship between the Voronoï cells and their dual, the Delaunay triangulation.
8 This metric induces a distance, although other distances, related to other metrics, can be envisaged, leading to a different proximity definition.
9 These cells are also known as the Wigner-Seitz cells.
The k-faces of the Voronoï cells are orthogonal to the (d − k)-faces of the Delaunay triangulation.
The Delaunay triangulation or a Delaunay triangulation? In two and three dimensions, the cases we are mostly interested in, if a set of points in general position is given, then the dual of the Voronoï cells associated with the given cloud is nothing else than the Delaunay triangulation of the corresponding convex hull. Moreover, this triangulation is unique and is constituted by triangles (tetrahedra) only. The given points are said to be in general position if no configuration of d + 2 cocyclic (cospherical) points exists. Under this assumption, the previous result holds while, otherwise, the above dual is a unique covering which can include elements other than simplices (i.e. convex polygons (polyhedra) other than triangles (tetrahedra)). In the envisaged application, namely a finite element computation, the desired elements must be triangles (tetrahedra) only. Hence, to achieve this goal, non-simplicial elements of the covering shall be further subdivided into triangles (tetrahedra) using an obvious splitting procedure. Consequently, several triangulations are possible. For the sake of simplicity, these will also be referred to as Delaunay triangulations.
Figure 2.3: The Delaunay covering.
Figure 2.3 depicts an example including 4 cocyclic points, such that the true Delaunay covering contains one quadrilateral element. This quadrilateral is then subdivided into 2 triangles (Figure 2.4), thus leading to two
solutions, either of which can be retained and simply called the Delaunay triangulation. Indeed, this trick will allow us to avoid repeating that such a triangulation is only one of the possible solutions, each time such a situation is encountered.
Figure 2.4: The Delaunay triangulation.
2.3 Delaunay lemma
Delaunay, [Delaunay-1934], has proved the following result, considered as a key issue for most of the Delaunay triangulation algorithms.
General lemma. Let T be a given arbitrary triangulation of the convex hull of a set of points S. If, for every pair of adjacent simplices in T, the empty sphere criterion holds, then this criterion holds globally and T is a Delaunay triangulation.
More recently, Lawson, [Lawson-1986], has proposed a different and rather elegant proof of this lemma, based on contradiction.
Proof. At first, the Delaunay criterion is symmetric. Let us consider a couple of adjacent simplices sharing a common (d − 1)-face and let P (resp. Q) be the vertex in these simplices opposite to that face. Then

Q ∈ B_P ⟺ P ∈ B_Q ,    (2.2)
where B_P (resp. B_Q) is the ball associated with the simplex having P (resp. Q) as a vertex.
Exercise 2.1. Establish relation (2.2).
Figure 2.5: Balls and hyperplanes.
Then, if Q ∉ B_P, one can notice that

B_P ∩ H_Q ⊂ B_Q ∩ H_Q    (2.3)
and conversely (B_Q ∩ H_P ⊂ B_P ∩ H_P), with, considering the same configuration of adjacent simplices, H_P (resp. H_Q) being the half-space containing P (resp. Q) bounded by the hyperplane supporting the common (d − 1)-face.
Exercise 2.2. Establish relation (2.3).
This result leads to the conclusion. Actually, we assume that the empty sphere criterion is satisfied for every pattern of two adjacent elements and that there exists in the triangulation a point P_n included in the ball associated with a simplex which is not adjacent to the simplices having P_n as a vertex. Let K_0 be this simplex (Figure 2.6). We consider the point G, centroid of K_0, and we define the line GP_n. This line joins the simplex K_0 to a simplex denoted as K_n (one of the simplices of the ball of P_n)
and intersects a set of simplices, denoted as K_0, K_1, K_2, ..., K_n, where two consecutive simplices have an intersection included in a (d − 1)-face10.
Figure 2.6: For the Lawson proof.
By assumption, we have P_n ∈ B_0, B_i being the ball associated with the simplex K_i and, as K_{n-1} is adjacent to K_n (after the empty sphere criterion for the adjacent simplices),

P_n ∉ B_{n-1} .

The simplices K_i are sorted from K_0 to K_n according to their intersection with the line GP_n. Let P_i be the vertex in K_i not included in K_{i-1} and let H_i be the half-space containing K_i but not K_{i-1}. According to the definition, we have P_n ∈ H_i. It is now proved that an index i exists such that

P_n ∈ B_i ,

hence

P_n ∈ B_{i+1} .

As P_n ∈ H_{i+1} and as B_i ∩ H_{i+1} ⊂ B_{i+1} ∩ H_{i+1}, then P_n ∈ B_{i+1}, completing the result. While P_n ∈ B_0, then P_n ∈ B_{n-1} holds, refuting the assumption P_n ∉ B_{n-1} and therefore achieving the proof. □
10 Points are supposed to be in a general position; by slightly modifying the location of G, one can always return to this case.
2.4 Incremental method
The Delaunay triangulation construction can be completed by using several methods. One such method has already been mentioned, which uses the duality of the Voronoï diagram and the Delaunay triangulation. Among the other possible methods, the one which draws our attention is referred to as the incremental method, the latter being also popular under different names, including the so-called Watson algorithm. The following paragraphs aim at discussing this method in detail.
Method description. Given T_i the Delaunay triangulation of the convex hull of the first i points in S, we consider the (i+1)-th point of this set, denoted as P. The purpose of the incremental method is to obtain T_{i+1}, the Delaunay triangulation including P as an element vertex, starting from T_i. To this end, we introduce a procedure referred to as the Delaunay kernel.
Delaunay kernel. This kernel is the local procedure that follows:

T_{i+1} = T_i − C_P + B_P .    (2.4)
The ball associated with P, B_P, matches the definition of a ball as described in Chapter 1. More precisely, B_P is the set of elements formed by joining P with the external edges (faces) of C_P, the cavity associated with P, whose construction is now detailed. The location of P with respect to T_i falls into two categories:
• either P is enclosed in T_i,
• or P is outside of T_i.
In the first case (Figure 2.7), the cavity of P (cf. Chapter 1) is the set of elements in T_i whose circumdisc (circumball) contains P. In the second case (Figures 2.8 and 2.9), this cavity is the same set enriched by the set of elements formed by joining P with the edges (faces) in T_i visible from P and not already selected by the previous criterion. The key issue of this construction relies on the following theorem.
Theorem 2.1. Given T_i a Delaunay triangulation of the convex hull of the first i points of a set S, then T_{i+1} is a Delaunay triangulation of this hull that includes P, point i + 1 of the set, as a vertex.
Several proofs can be given, one of which uses two steps. First it is proved that T_{i+1} is a valid triangulation and then this triangulation is
Figure 2.7: Inserting point P (P ∈ T_i, case 1).
Figure 2.8: Inserting P (P outside of T_i and outside of any circumdisc, case 2.1).
Figure 2.9: Inserting P (P outside of T_i and inside one circumdisc, case 2.2).
proved to be a Delaunay triangulation. An alternate approach consists of using more directly the Voronoï-Delaunay duality. We would like to propose a proof based on the second approach, which corresponds to case 1 (Figure 2.7).
Proof. First of all, we assume that the points are in a general position. The triangulation of the complement of the cavity C_P remains unchanged since, following the definition, the balls circumscribed to the simplices of this complement are empty and thus these simplices are Delaunay (they follow the empty sphere criterion). We turn now to establishing that the sole possible Delaunay triangulation of C_P, when considering point P, is the ball B_P defined by joining P with the external faces of C_P. The result is proved by contradiction. We assume that a simplex of the new triangulation does not count P as a vertex; thus, necessarily, this simplex belongs to C_P because the points are in a general position. In this case, the triangulation of the cavity is unique (being the dual of the Voronoï diagram). Hence, this simplex violates the Delaunay criterion. In other words, the new triangulation is composed of the simplices formed by joining P with the external faces of C_P, which means that B_P is this new triangulation of C_P. In the case where the points are not in a general position, let us assume that a simplex, denoted by K, in the new triangulation does not count P as a vertex. Then, there exists one or several other simplices in C_P whose circumball is identical to that of K. Consequently, K violates the Delaunay criterion, leading to the same result. □
A proof of this theorem using the first approach has been given by [Hermeline-1980].
In practice, the major key to the construction using the Delaunay kernel is that the cavity is a star-shaped11 polygon (polyhedron) with respect to the point under consideration. This obvious property ensures the validity of the resulting triangulation.
The completion of the12 Delaunay triangulation of the convex hull of
11 A polygon is star-shaped with respect to a point P if, for every point X in the (closed) polygon, the (open) segment PX is internal. The set of points verifying this property constitutes the kernel of the polygon. For instance, the kernel of a convex polygon is its interior. This notion of star-shapedness is similar to that of visibility: the point P is visible from every point of the boundary of the polygon. This notion extends to the d-polytopes and, in particular, to the polyhedra. For the sake of simplicity, we call the cavity a star-shaped cavity (instead of having to specify that the cavity is a d-polytope, the latter being star-shaped).
12 Return to the remark regarding the uniqueness of this triangulation.
the set S relies on applying the Delaunay kernel procedure to every point in S. The process is initialized by one element formed by choosing d + 1 affinely independent points so as to define T_{d+1}, where d is the spatial dimension.
Exercise 2.3. Prove the theorem in the case where the points are in situations 2.1 or 2.2 (Figures 2.8 and 2.9).
Reduced incremental method. The reduced version of the incremental method is based on the proper definition of a box enclosing the points in set S in such a way as to meet the first situation only, i.e. P is always enclosed in the current triangulation. This situation, as depicted in Figure 2.7, is a trick that simplifies the construction of the relevant algorithms. Moreover, this trick maintains the generality of the proposed procedure. At completion of the insertion of all the points in S, the Delaunay triangulation of the enclosing box, Box(S), is obtained. It is then possible to obtain a triangulation of a hull of S, not necessarily the convex hull, by removing some elements from this triangulation (i.e., those having at least one vertex identical to one of the box corners). Conversely, obtaining the convex hull by this method is a tedious task, at least from the practical point of view. In theory, one simply has to define a box at an infinite distance from S, which is obviously not realistic (in terms of computer implementation).
There are several ways to define the box in order to find T_0, the initial starting point of the process. A solution is to construct one simplex (triangle or tetrahedron) enclosing S. Another approach is to define an elementary polygon (polyhedron), for instance a square or a quadrilateral element in two dimensions, a cuboid or a parallelepiped in three dimensions. The reduced incremental method then relies on defining this enclosing box and, if required, on triangulating it (with a few simplices) before invoking the Delaunay kernel.
Delaunay measure. The Delaunay measure13 is a numerical characterization enabling us to construct the cavity associated with a given point. This notion is not strictly required at this time. However, as will be seen, it will be a convenient way to define a constructive Delaunay kernel process, in an anisotropic context for example. Let us recall that the cavity is the set of elements whose open circumdisc (circumball) encloses the considered point P.
13 The notion of a measure is simply a way to evaluate the Delaunay criterion.
If d(P, O_K) stands for the Euclidean distance between P and O_K, this point being the circumcenter of the disc (ball) associated with element K, and if r_K denotes the radius of this disc (ball), then an element K in T_i is a member of the cavity of P if

d(P, O_K) − r_K < 0 ,    (2.5)

or

d(P, O_K) / r_K < 1 .    (2.6)
Definition 2.1. The ratio d(P, O_K) / r_K, denoted by α(P, K), is called the Delaunay measure of point P with respect to element K.
One may notice that α(P, K) is measured in the usual Euclidean metric and, consequently, is independent of P. A characterization of the cavity of P is then

K ∈ C_P ⟺ α(P, K) < 1 .    (2.7)
As previously mentioned, this rather obvious numerical characterization will be of great interest in the following.
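A minimal sketch of this characterization applied to the cavity construction by adjacency is given below; it assumes stored circumcenters and squared circumradii per element and a two-dimensional point layout, and it applies the test α(P, K) < 1 in squared form to avoid square roots. The set used for marking plays the role of the coloring mentioned in Chapter 1.

    from collections import deque

    def cavity(adj, centers, radii2, base, p):
        """Collect the cavity of point p by adjacency from the base elements.

        centers[t] / radii2[t] are the circumcenter and squared circumradius of
        element t; adj[t] lists the neighbors of t (-1 on the boundary)."""
        in_cavity = set(base)
        stack = deque(base)
        while stack:
            t = stack.pop()
            for n in adj[t]:
                if n == -1 or n in in_cavity:
                    continue
                dx = p[0] - centers[n][0]
                dy = p[1] - centers[n][1]
                if dx * dx + dy * dy < radii2[n]:      # alpha(P, K) < 1
                    in_cavity.add(n)
                    stack.append(n)
        return in_cavity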
2.5 Other methods
There exist other ways to construct a Delaunay triangulation, especially in two dimensions.
2.5.1 Method by edge swapping in two dimensions
Edge swapping14 is a topological process which modifies the triangulation of a convex (edge) shell by swapping its diagonal.
Theorem 2.2. Given T_1, an arbitrary triangulation of the convex hull of a set of points S, the Delaunay triangulation T_Del of this hull can be obtained through edge swapping.
Proof. First of all, one can notice that a diagonal swapping applied to two triangles sharing an edge violating the Delaunay criterion is always possible (because the two triangles necessarily form a convex polygon).
14 Or diagonal swapping (if one considers two triangles sharing an edge, the latter being the diagonal of the so-formed quadrilateral).
Moreover, the new configuration verifies this criterion. It shall be emphasized that this property (regarding the convexity) does not hold in R^3. Hence, the sole process needed is to apply edge swapping everywhere the Delaunay criterion is not satisfied and to iterate this process. The Delaunay lemma ensures the proof, since the process is prevented from entering an infinite loop. This means that, using swapping, it is impossible to return to a previous configuration. This property is due to the equivalence between edge swapping and the maximization of the minimum angle between the edges associated with any configuration of two triangles sharing an edge. □
This theorem is also proved in [Cherfils,Hermeline-1990]. As a corollary, the following result holds.
Theorem 2.3. Given T_1 an arbitrary triangulation of the convex hull of a set of points S, then any other triangulation T_2 of this hull can be obtained by edge swapping.
Proof. We simply use Theorem 2.2 to obtain triangulation T_Del from the given triangulation T_1 and, afterwards, we use the same theorem reciprocally to obtain T_2. This process is made possible because edge swapping is a reversible process (although this proof assumes that the triangulation is unique, which is verified only if the points are in a general position). □
This result will be used for constructing constrained triangulations in two dimensions. The extension of this result to three dimensions has not yet been established; see for instance [Joe-1989]. On the other hand, [Joe-1991] presents an incremental Delaunay triangulation construction based on local transformations (edge-face swapping). Provided that the points follow a lexicographical ordering, the triangulation at stage i is obtained by connecting the point under insertion (which lies outside the convex hull of the first i − 1 points) with the visible faces and by swapping the newly constructed faces.
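A rough sketch of this swapping loop is given below; the triangle-set representation, the repeated rebuilding of the edge map and the orientation-independent in-circle test are choices of the sketch, not the book's implementation, and efficiency is deliberately ignored.

    from itertools import combinations

    def orient(a, b, c):
        return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

    def violates(a, b, c, p):
        """True if p lies strictly inside the circumcircle of triangle (a, b, c),
        whatever the orientation of (a, b, c)."""
        ax, ay = a[0]-p[0], a[1]-p[1]
        bx, by = b[0]-p[0], b[1]-p[1]
        cx, cy = c[0]-p[0], c[1]-p[1]
        det = ((ax*ax + ay*ay) * (bx*cy - by*cx)
             - (bx*bx + by*by) * (ax*cy - ay*cx)
             + (cx*cx + cy*cy) * (ax*by - ay*bx))
        return det > 0 if orient(a, b, c) > 0 else det < 0

    def delaunay_by_swaps(pts, triangles):
        """Lawson's strategy: swap the diagonal of any pair of adjacent triangles
        violating the empty circle criterion until no such pair remains."""
        tris = {frozenset(t) for t in triangles}
        while True:
            edge_tris = {}
            for t in tris:
                for e in combinations(sorted(t), 2):
                    edge_tris.setdefault(e, []).append(t)
            swapped = False
            for (a, b), shared in edge_tris.items():
                if len(shared) != 2:
                    continue                           # boundary edge
                t1, t2 = shared
                c = next(iter(t1 - {a, b}))            # vertex opposite ab in t1
                d = next(iter(t2 - {a, b}))            # vertex opposite ab in t2
                if violates(pts[a], pts[b], pts[c], pts[d]):
                    tris -= {t1, t2}                   # swap diagonal ab -> cd
                    tris |= {frozenset((a, c, d)), frozenset((b, c, d))}
                    swapped = True
                    break                              # the edge map is now stale
            if not swapped:
                return tris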
2.5.2 Divide and conquer
Divide and conquer is a method that applies in numerous computational constructions. The basic idea of this type of method is to split the given problem into two sub-problems of lower complexity (in terms of size), this step being known as the dividing step of the whole process. Subsequently, the two sub-problems are dealt with and the global solution is achieved by merging the solutions of the two sub-problems. The key feature of this class of methods is the merging step.
Recursively applied, this method leads, in principle, to the consideration of an elementary problem. To construct a triangulation of Conv(S), the method relies on finding a separator capable of partitioning S into two disconnected sub-sets, S_1 and S_2. Afterwards, the triangulations of Conv(S_1) and Conv(S_2) are constructed and finally merged in order to obtain the final triangulation. The question is how to define a way to obtain a Delaunay triangulation of Conv(S) by merging the two Delaunay triangulations (as these triangulations could be disconnected).
One possible method, in two dimensions, consists of finding the two "extremal" points of each sub-set with respect to the separating line of these sets. In each part, the list of points lying between these extrema is then established and sorted. To obtain a triangulation joining Conv(S_1) and Conv(S_2), it is only necessary to join the points according to the previous sort. The boundary points of each sub-set are assumed to be clockwise ordered. In each sub-set, the point closest to the separation line is searched. Then the edge with these points as endpoints (a_1 in the first sub-set and a_2 for the other) is constructed. Considering a_1 and proceeding by vicinity with respect to the boundary of Conv(S_2), we pick, starting from a_2, all the points in this boundary visible from a_1 and we construct the corresponding edges. This process is repeated, replacing a_1 by the last point considered while visiting the sub-set not containing this point. The process stops when no visible point remains. The last points found through this process are the above mentioned extrema. By applying edge swapping to the resulting triangulation, the Delaunay triangulation is obtained (nevertheless, it can be observed that edge swapping may lead us to entirely modify, or nearly so, the two initial triangulations).
In three dimensions, a similar method exists that enables the construction of the convex hull of a set of points, see [Boissonnat,Yvinec-1995]. However, the construction of a triangulation having all the points in Conv(S_1) and in Conv(S_2) as vertices is not trivial.
Remark. An algorithm with an optimal complexity is also introduced by [Shamos,Preparata-1985], which relies on merging the two Voronoï diagrams. This is done by constructing the new edges resulting from this merge.
2.5.3 Sweeping algorithm
A sweeping algorithm is an alternate solution to obtain a triangulation in two dimensions, while the three-dimensional case is still open. The Voronoï diagram is constructed and, using the duality, the desired triangulation is completed. This algorithm has been proposed by Fortune, [Fortune-1987]. A similar construction is also given in [O'Rourke-1993] and its main lines are given below.
The key idea is to consider a line (D), parallel to the y-axis, which sweeps the plane (xy) from x = −∞ to x = +∞ in order to construct the Voronoï diagram corresponding to the "left" part of (D). One has to notice that a point located at the left side of (D) can interact with the construction. To overcome this problem, Fortune introduced the following method.
Figure 2.10: Sweeping cones.
With each point in the cloud S is associated a cone whose apex is the point, having its axis parallel to the z-axis, in such a way that the angle defined by the plane (xy) and all generatrices is 45 degrees, see Figure 2.10. Similarly, with the line (D) is associated a plane (π) whose inclination is 45 degrees. This plane intersects the cones of the points at the left side of (D), forming parabola patches. Using an orthogonal projection of the intersection points of these patches onto the plane (xy), we obtain a series of points constituting the desired diagram. Indeed, this diagram is constructed at the left side of (D) with a shift due to the chosen angle. This trick enables us to anticipate the effect of a point at the right side of this line even though it has not been discovered at this time.
Exercise 2.4. Prove that the orthogonal projection of the intersection points of the branches of a parabola in the plane (xy) defines the edges of the Voronoï diagram.
2.6 Computational aspects
In this section, we only consider the reduced incremental method and we present one of the possible algorithms that can be developed to implement it. We have two objectives, i.e. to obtain some level of robustness (in particular, insensitivity to round-off errors under some assumptions) as well as some level of efficiency, in terms of CPU. Before discussing these two points, we would like to give some ideas about the robustness and the complexity of a given algorithm.

2.6.1 Robustness and complexity
The robustness of a given algorithm is strongly related to its sensitivity to the round-off errors that are obviously encountered in any effective computation. Nevertheless, regardless of which algorithm is used, it involves a certain amount of calculation. The latter is mostly of integer type or involves floating-point operations (boolean operations can also exist, leading to the evaluation of integer or floating-point expressions). An integer operation is exact (provided that the range of the values involved falls within an adequate interval), while a floating-point operation necessarily includes a degree of uncertainty.
The complexity of a given algorithm can be measured with respect to
• its size complexity, i.e. how big the memory space requirements are,
• its "operational" complexity, i.e. how many operations are required to achieve the solution.
In general, these two aspects are inversely proportional: increasing the memory resources allows some operations to be avoided, while decreasing these resources increases the computational effort. A "satisfactory" algorithm can be defined as an algorithm where a good balance is obtained between these two aspects. A precise analysis of the complexity of a well-balanced algorithm with respect to the two aspects above is necessary when designing an algorithm.
A remark that makes sense. The complexity (in terms of CPU time) of an algorithm can be reduced in two ways, in particular for an algorithm involved in an iterative process (which is the case for the incremental method). Firstly, by minimizing the number of operations involved and, secondly, by reducing the size of the inputs. The first solution leads to a local optimization, while the second choice induces a global optimization. The computational steps described hereafter are discussed according to one or the other of these points or, even, both of them in certain cases.

2.6.2 Reduced incremental method scheme
We would like to propose a scheme corresponding to the reduced incremental method. This scheme indicates the main steps that must be performed. Each point is furthermore discussed in detail, giving some indications as to how to implement the process. The purpose is to write the Delaunay kernel in terms of operators and then of operations and data structures. That is

T_{i+1} = T_i − C_P + B_P .
Concerning the operators, this construction leads to completing different steps, among which are the following (a sketch of the whole loop is given below):
• the analysis of S so as to pick its extrema,
• the construction of an enclosing box,
• the triangulation of this box (let T_0 be this triangulation and let i = 0),
• the insertion of each point in S, which consists of
  – picking the element(s) in T_i which enclose(s) the point under consideration (this is the search for the base associated with the point),
  – constructing the cavity using (d − 1)-face adjacencies, starting from the base,
  – removing the cavity, enumerating the ball and replacing the cavity by this ball, then setting i = i + 1, before considering the following point in S.
Operations required in this scheme include searches (to find the base), comparisons (to obtain the cavity, it is required to compare distances with
radii) and operations related to the update of the current triangulation. This leads to updating the corresponding data structure when a new point insertion is done (that is, at the time a cavity is replaced by the corresponding ball). While detailed, this description does not go into the numerous difficulties that are mostly due to the problem of accuracy encountered when performing floating-point operations. This point is clearly a key issue if some level of robustness is to be achieved.
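The driver loop announced above could be organized as in the sketch below; the helper callables correspond to the steps of the scheme and are assumed to be provided (their names are illustrative), and the random insertion order anticipates the remark of Section 2.6.4.

    import random

    def delaunay_reduced(points, box_triangulation, find_base, build_cavity, fill_ball):
        """Driver of the reduced incremental scheme (schematic).

        box_triangulation(points) returns T_0 (the triangulated enclosing box);
        find_base / build_cavity / fill_ball implement the three insertion steps
        listed above and return, respectively, the base, the cavity and one of
        the new elements (used as the next starting guess)."""
        mesh = box_triangulation(points)            # T_0
        order = list(range(len(points)))
        random.shuffle(order)                       # randomized insertion order
        guess = 0                                   # any element of T_0
        for idx in order:
            p = points[idx]
            base = find_base(mesh, guess, p)        # element(s) enclosing p
            cav = build_cavity(mesh, base, p)       # elements whose circumball contains p
            guess = fill_ball(mesh, cav, idx)       # T <- T - C_P + B_P
        return mesh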
2.6.3 Cavity correction
The cavity construction is done by adjacency, given the base as initialization, while other methods exist. However, the proposed solution offers more advantages (as will be seen, for instance, in the case of a constrained algorithm (cf. Chapter 3), where a multi-connected cavity cannot be defined in this way). The question is to decide whether a given element is a member of the cavity of the current point P. Let K be the visited element, let O_K be its circumcenter (i.e. the center of its circumcircle (circumsphere)) and let r_K be the corresponding circumradius. Theoretically speaking, one has just to consider the Delaunay measure α(P, K) and to check whether α(P, K) < 1.
Accuracy and robustness problems for the kernel construction. The relevant check leads to comparing d(P, O_K) and r_K. As these two quantities are not precisely valued, this check may be inaccurate, specifically if the region in which P falls is close to the boundary of the disc (the ball) C_K of K. This uncertainty may have dramatic consequences and the Delaunay kernel may result in a non-valid triangulation. These ambiguous configurations fall into two classes:
• the cavity is not empty, meaning that there exists at least one vertex of a previously created element inside the cavity; this defect is usually due to a proximity problem,
• the cavity is not a connected set; in general, this denotes a cocyclic (cospherical) configuration.
Figure 2.11: The two ambiguous configurations.
The first case leads to a vertex being missed (the resulting triangulation is still valid albeit wrong in this respect). The second case leads to a triangulation having overlapping regions. Figure 2.11 depicts these two situations.
• The first case of failure, due to imprecise computations, corresponds to the case where all the triangles in the figure are picked; point G is then strictly included in the cavity.
• The second case of failure, also caused by imprecise calculations, corresponds to the case where all the triangles of the figure but triangle (ADB) are selected, thus resulting in a non-connected cavity.
To overcome these problems, several solutions have been investigated. They consist of
• not introducing any point causing a problem,
• (slightly) perturbing all points leading to the problem,
• introducing a threshold value, ε, in the comparisons,
• performing exact computations,
• or, finally, suppressing the ambiguity using a different formulation of the algorithm.
The first solution requires that the current point be placed on a stack, so that its insertion will be done later when the local context is modified. The second approach moves the point upon insertion and modifies the quantities involved in the construction, thereby expecting to remove the ambiguity. The third solution, which introduces a threshold value ε in the comparison, has been investigated by numerous authors but does not lead to satisfactory results: an adequate value of ε for a given case is not suitable for other cases. The fourth approach implicitly assumes integer-type coordinates for the vertices and is not based on the Delaunay measure (meaning that the circumcenters and the circumradii are not computed or updated). Instead, it is related to the equivalent formulation (let us consider the two-dimensional case)
Δ_K(x_P, y_P) < 0 ,

where Δ_K(x, y) is the determinant introduced in Chapter 1. This inequality includes quantities in the range of a length to the power d + 1, which involve additions (subtractions) and multiplications only. Consequently, a restriction is imposed on the range of the vertex coordinates. In other words, the minimal distance between two points is limited. Indeed, if b is the number of bits of the mantissa of a double memory word, the largest value (denoted as l) that can be expressed in the above expression must satisfy the relation

l^(d+1) ≤ 2^b .

Assuming that the vertex coordinates start from the origin, this relation states that these coordinates must range from 0 to l < 65536 in two dimensions and from 0 to l < 4096 in three dimensions and, on the other hand, that the distance between two points is at least 1, with a typical computer15 for which b = 50. These limits give both the maximal possible number of points according to the d directions as well as the minimal distance between two points. We have introduced, de facto, the separating power or the resolution of the method. The limit resulting from this discussion is obviously too restrictive and, consequently, while a priori elegant, this method is not adequate in general. A determinant evaluation method can be found in [Avnaim et al. 1994], which overcomes this limit at the expense of increased complexity.
15 Double precision words are employed, with a priori 51 significant bits. For safety reasons, we limit ourselves to 50 bits. It should be noted that this limit depends on the technology; actually, 128-bit computers already exist.
Another way to avoid this limit is to introduce an extended arithmetic and, more specifically, to use infinite precision16 in the computations. See for instance [Guibas et al. 1989], [Fortune,Van Wyk-1993] (among others) and [Perronnet-1988b] or [Peraire,Morgan-1997] for a meshing application.
The fifth method is the one we would like to recommend. We assume that the vertex coordinates are of integer type and we propose a new formulation for the Delaunay kernel, resulting in a robust and exact algorithm in this context. The discussion of this method is the aim of the following paragraph. Briefly, the assumptions about the coordinates allow us to find the base exactly. This base enables us to define an approximated cavity which is then corrected so as to ensure the expected properties (emptiness, connectedness and star-shapedness). This method will result in a valid triangulation which will not strictly be Delaunay.
A correction algorithm. The problem centers on expressing the Delaunay kernel in such a way as to obtain an efficient constructive algorithm despite the round-off errors that may occur in the actual computation scheme. As already mentioned, the given coordinates are assumed to be of integer type, ensuring exact surface (or volume) evaluations (obviously, to this end, we compute twice the surface area or six times the volume so as to avoid the division needed for an exact value). In this context, a two-part algorithm is proposed. This algorithm includes the above method serving to initialize the cavity, the latter being wrong in some cases. The process is completed by a new algorithm, referred to as the correction algorithm. Let P be the current point to be inserted and let T_i be the current triangulation; the first stage of the method leads to
• using the Delaunay measure to construct the cavity associated with P, C_P, by adjacency, given the base.
As this algorithm can result in a non-valid cavity, a correction step is needed as the second part of the process. This correction relies on removing some elements from C_P so as to meet the desired properties again. Thus, the correction algorithm can be described as follows (a sketch is given at the end of this section):
• if a vertex of T_i falls in the cavity, find one of the simplices17 in C_P, not in the base, having this point as a vertex and remove this element from C_P,
16 This approach requires some comments. Indeed, if we consider the example of a surface strictly positively valued in infinite precision, it is not obvious that the same surface will be computed in the same way when used in a different software.
17 One can select as a simplex candidate the first element found that can be removed, or select one of the possible simplices enjoying a desired property.
• if there is a (d − 1) boundary face of C_P not visible from P, pick and remove the simplex having this face from the cavity,
• repeat this process as long as the number of elements in the cavity changes.
One iteration results in starting the whole analysis of the elements remaining in the cavity again. This is done either from the base, proceeding by adjacency, or by considering the last element not affected by the actual process. The main feature of this method can be summarized by the following lemma.
Lemma 2.1. This algorithm converges.
Proof. In the worst case, the cavity is reduced to the base, thus leading to the convergence. Using adjacency relationships in the process ensures that the cavity is a connected set; as the base is necessarily included in the cavity, the latter contains point P. □
Moreover, the visibility checks (surface or volume computations according to d) guarantee the star-shapedness property of the cavity. In summary, the proposed method ensures a valid cavity triangulation construction having P as a vertex. Indeed, each element in this triangulation is defined by P and a (d − 1)-face of the boundary of the cavity (we meet the definition of the ball associated with P). This point is our main concern; to this end, the Delaunay criterion may be violated in some regions, when removing from the cavity the elements violating this criterion (so, these non-Delaunay elements remain in the triangulation). As a consequence, the resulting triangulation may not be strictly Delaunay; nevertheless, this triangulation turns out to be suitable for the proposed application of the method (i.e. the construction of a finite element mesh).
The proposed algorithm is constructive and the computations are integer in nature (and thus are exact) or such that only surface (volume) evaluations, or equivalent computations, have been used. Thus, it is possible to obtain a computationally exact algorithm with a limit of application, as discussed above, partly extended. Indeed, the limit is now

l^d ≤ 2^b ,
leading to l < 33554432 in two dimensions and l < 65536 in three dimensions. Obviously, an order of magnitude has been obtained and the
separation power of the method is increased. Hence, this method is usually well-suited. The l value gives the separation power of the method and indicates the maximal number of points in each direction (the minimum distance from point to point being unity). In conclusion, under the aforementioned assumption, we have obtained an algorithm insensitive (at least less sensitive) to round-off errors. The starting point of this algorithm is the knowledge of one element in the base. In the first chapter, a classical method of finding such an element was presented.
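The correction loop announced above could be sketched, in two dimensions, as follows; the interior-vertex detection through the cavity boundary and the sign-product visibility test are concrete choices of the sketch, not the book's exact criteria, and efficiency is ignored.

    from itertools import combinations

    def orient(a, b, c):
        return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

    def correct_cavity(tri, pts, cav, base, p):
        """Shrink the cavity 'cav' (set of triangle ids) until no vertex lies
        strictly inside it and every boundary edge is visible from p; the base
        triangles (those enclosing p) are never removed."""
        cav = set(cav)
        changed = True
        while changed:
            changed = False
            count = {}
            for t in cav:
                for e in combinations(sorted(tri[t]), 2):
                    count[e] = count.get(e, 0) + 1
            boundary = {e for e, c in count.items() if c == 1}   # cavity boundary edges
            on_boundary = {v for e in boundary for v in e}
            for t in list(cav):
                if t in base:
                    continue
                # a cavity vertex not on the boundary lies strictly inside the cavity
                if any(v not in on_boundary for v in tri[t]):
                    cav.discard(t); changed = True; break
                # every boundary edge of t must be visible from p: p and the
                # opposite vertex of t must lie strictly on the same side
                for e in combinations(sorted(tri[t]), 2):
                    if e in boundary:
                        c = next(v for v in tri[t] if v not in e)
                        s1 = orient(pts[e[0]], pts[e[1]], p)
                        s2 = orient(pts[e[0]], pts[e[1]], pts[c])
                        if s1 * s2 <= 0:
                            cav.discard(t); changed = True; break
                if changed:
                    break
        return cav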
2.6.4 Using the kernel
Computational geometry, [Shamos,Preparata-1985], [Edelsbrunner-1987], [Mulmuley-1993], [Boissonnat,Yvinec-1995], etc., includes the study of triangulations and in particular of the Delaunay triangulation. The complexity of the algorithms devoted to triangulation construction is an important part of this discipline. A well-known result is that the introduction of some extent of randomization in an incremental method results in an important advantage. This technique leads to constant-size cavities on average (in terms of the number of elements, regardless of the number of points), and thus globally minimizes the number of required operations. Consequently, the loop
• For each point in S,
part of the general scheme (cf. 2.6.2), is randomized: the points in S are introduced in random order. As mentioned previously, the benefit is related to the reduction of the input data size of the algorithm, i.e., the number of elements of each cavity or, similarly, that of the corresponding ball. As every element operation induces a certain amount of operations, the average minimization of the number of elements implies an average minimization of the number of required operations.
2.6.5 Access to the base
This task has been described in Chapter 1 and can be applied here without any problem. Let us recall that a regular grid can be defined in order to improve the efficiency of the searching algorithm used to find a first element close enough to this base.
2.6.6 Inheritance
The Delaunay kernel or any other method used when inserting a point leads to the derivation of T_{i+1} from T_i. As this process is purely local, the modifications needed to construct the new triangulation by updating the previous one are local. Specifically, the old elements in the cavity must be deleted, new elements must be defined and the neighbouring relationships, circumcenters and circumradii18 of these elements must be defined or computed. Due to the nature of the method, this last information is inherited, in some sense, from the corresponding old values.
Neighbouring relationships. When inserting a point P, only the neighbouring relationships related to the elements in the cavity associated with P are affected. Indeed, two kinds of neighbouring relationships are encountered:
• those relating the elements in the cavity to the elements in the complement of this set; these are the relationships related to external edges or faces (d = 3) of the cavity,
• those relating the elements in the cavity to one another, the intra-cavity relationships; these are the relationships related to internal edges or faces (d = 3) of the cavity.
Let K be an element constructed from an edge (a face) F_j of the cavity and let K_del be the element in T_i having this edge (face) before being removed. Clearly, the former neighbour of K_del through this entity becomes the neighbour of K through this same entity. This obvious remark can be exploited to quickly establish the neighbouring relationships related to the entities of the cavity boundary.
Similarly, the intra-cavity neighbouring relationships can be deduced from the former ones. To this end, the connectivity matrix (cf. Chapter 1) is used. Given K and K', two adjacent simplices in the ball of P, there exist two edges (faces) F and F' in the cavity such that K = Conv(F, P) and K' = Conv(F', P); as K and K' are adjacent, F and F' share a vertex (an edge). Thus, the adjacency relationship from K to K' can be obtained by rotating
18 Maintaining correct circumcenters and circumradii is not strictly required, as these quantities can be evaluated at the time they are used, when a cavity is constructed. Nevertheless, while this way of processing is less demanding in terms of memory resources, it does not make it possible to benefit from the inheritance (as will be described later) between old and new quantities so as to minimize the number of required calculations.
around this common vertex (edge), starting from a former element including F so as to reach, element by element, a former element including F'. This operation can be completed using the connectivity matrix corresponding to the affected elements, [Borouchaki et al. 1996]. When the operation is completed, the relevant connectivity matrices are updated. Remark. This way of processing, using the connectivity matrices, is memory intensive; indeed, we need to define and update these matrices. This is why this rather elegant method is not fully satisfactory. A less memory intensive algorithm, which does not require these matrices, can be developed, as will be seen later. Circumcenter transport. As above, new circumcenters inherit from some relevant previous circumcenters.
Figure 2.12: Circumcenter transport from K_det to K.
Let K be an element constructed with the edge (face) F_j of the cavity and let K_det be the former element in T_i having this entity as an edge (face). We denote by X the point in K_det that will be replaced by P, formally speaking. Clearly (cf. Figure 2.12), the circumcenter of K can be deduced from that of K_det. Let us consider a two-dimensional situation. Let n_K be the unit normal to the edge of K opposite to P; by definition O_K, the circumcenter of K, belongs to the line with direction vector
n_K passing through O_Kdet, the circumcenter of element K_det. Hence, a real value t exists such that

O_K = O_Kdet + t n_K.    (2.8)

Let A be one of the endpoints of F_j in K_det; then O_K is equidistant from A and P. If M denotes the midpoint of segment [A, P], then O_K belongs to the hyperplane having normal vector AP and passing through M. Thus

<O_K - M, AP> = 0.    (2.9)

The factor t can be obtained from equations (2.8) and (2.9). Indeed, since O_K = O_Kdet + t n_K and <O_K - M, AP> = 0, we have

t <n_K, AP> = <M - O_Kdet, AP> = (||O_Kdet P||^2 - ||O_Kdet A||^2) / 2,

and

<n_K, AP> = H_Fj(P),

where H_Fj(P) stands for the power of point P w.r.t. the hyperplane supporting F_j. As ||O_Kdet A|| = r_Kdet, we can deduce that

t = (||O_Kdet P||^2 - (r_Kdet)^2) / (2 H_Fj(P)),

and, by substituting t in equation (2.8), we obtain O_K and consequently r_K = ||O_K P||.
A fundamental remark. The computation of t in the above expression involves ||O_Kdet P||^2 - (r_Kdet)^2 and H_Fj(P). These two quantities correspond respectively to the cavity construction step and the correction phase of this cavity. As a consequence, they have already been evaluated.
This means that the computational effort to obtain t is reduced to one simple division. A similar relationship exists in three dimensions and, more generally, in any arbitrary dimension.
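These relations translate into a few lines of code. The following sketch (our names, two-dimensional, numpy arrays) computes t by the single division mentioned above and transports the circumcenter and circumradius:

```python
import numpy as np

def transported_circumcenter(o_det, r_det, p, n_k, a):
    """Circumcenter and circumradius of K = Conv(F_j, P), inherited from K_det.

    o_det, r_det : circumcenter and circumradius of the old element K_det,
    p            : the inserted point P,
    n_k          : unit normal to F_j (the entity shared by K_det and K),
    a            : one endpoint (vertex) of F_j.
    All names are illustrative; the two quantities entering t are already
    available from the cavity construction and correction steps.
    """
    power = float(np.dot(n_k, p - a))                              # H_Fj(P)
    t = (float(np.dot(p - o_det, p - o_det)) - r_det ** 2) / (2.0 * power)
    o_k = o_det + t * n_k                                          # relation (2.8)
    r_k = float(np.linalg.norm(p - o_k))                           # new circumradius
    return o_k, r_k

# Two-dimensional check: O_Kdet = (0,0), r_Kdet = 1, F_j on the x-axis through A = (1,0)
o, r = transported_circumcenter(np.array([0.0, 0.0]), 1.0,
                                np.array([0.0, 0.5]),      # P
                                np.array([0.0, 1.0]),      # n_K
                                np.array([1.0, 0.0]))      # A
# o = (0, -0.75), r = 1.25: equidistant from (1,0), (-1,0) and P, as expected.
```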
Exercise 2.5. Establish the relation expressing the inheritance of the circumcenter and circumradius for an element constructed from a face of a given element. As a final remark, it can be observed that the above relationships avoid the solution of a linear system.
2.6.7 Computational background
Throughout this section, we give some indications regarding the "tables"19 required to effectively implement the different phases discussed before. Other resources, not mentioned here, will be needed due to programming constraints. Information associated with an element. An element table is obviously strictly required. A triple (quadruple) is associated with each element; this table contains the vertex numbering. In addition, a table is defined to store the neighbouring relationships (in terms of edge or face adjacencies according to the spatial dimension). This table is quite similar to the previous one. With every element are also associated the circumcenter (d coordinates) and the circumradius. This is the minimum set of data required. This data represents
• 3 × (d + 1) × nemax values where nemax is the (pre-specified) maximum number of elements.
Information associated with a vertex. The vertices are defined by their d coordinates, thus
• d × npmax values are required, where npmax is the (pre-specified) maximum number of vertices.
19 cf. supra about the meaning of the notion of a table.
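As an illustration, one possible layout of these tables in the three-dimensional case; the structure and names are ours, not a prescription of the text:

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Triangulation3D:
    """Minimal tables for d = 3; layout and names are illustrative.

    Per element: 4 vertex indices, 4 neighbour indices (one per face),
    the circumcenter (3 values) and the circumradius, i.e. the
    3 x (d + 1) values per element counted above.
    """
    nemax: int
    npmax: int
    vertices: np.ndarray = field(init=False)     # element -> its 4 vertices
    neighbours: np.ndarray = field(init=False)   # element -> its 4 neighbours (-1 on the hull)
    centers: np.ndarray = field(init=False)      # element -> circumcenter
    radii: np.ndarray = field(init=False)        # element -> circumradius
    coords: np.ndarray = field(init=False)       # vertex  -> d coordinates

    def __post_init__(self):
        self.vertices = np.full((self.nemax, 4), -1, dtype=np.int64)
        self.neighbours = np.full((self.nemax, 4), -1, dtype=np.int64)
        self.centers = np.zeros((self.nemax, 3))
        self.radii = np.zeros(self.nemax)
        self.coords = np.zeros((self.npmax, 3))
```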
Other information. The grid used for searching the base is a table with d indices, varying from 0 to g, a given integer value. The latter must be defined in such a way as to be just large enough. Indeed, the vertices are encoded in this grid and too few boxes in the grid implies a large number of collisions, as several points fall within the same box. On the other hand, too many boxes, while minimizing the number of collisions, can be demanding in terms of memory resources. Actually, (g + 1)^d boxes are necessary. According to the spatial distribution of the given points, it is possible to define a different type of grid. As an example, the number of boxes can be different from one direction to another, the boxes being other than square (cuboid). A quadtree or octree type of data structure could also be envisaged. Memory requirements. In three dimensions, the number of elements is on average about 6 times the number of points in a triangulation (this is an upper bound). Hence, the above estimate requires about 75 × npmax words (excluding the grid) in terms of npmax, the maximum number of allowable points.
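A sketch of the encoding of a point into such a grid (names and the clamping convention are ours):

```python
def grid_index(point, bbox_min, bbox_max, g):
    """Index of the grid box containing `point` ((g + 1) boxes per direction).

    Names and the clamping convention are ours; collisions (several points
    falling in the same box) are handled by chaining, not shown here.
    """
    idx = []
    for x, lo, hi in zip(point, bbox_min, bbox_max):
        i = int((g + 1) * (x - lo) / (hi - lo)) if hi > lo else 0
        idx.append(min(max(i, 0), g))            # points on the upper boundary go in the last box
    return tuple(idx)

# Example: a 3D point hashed into a (g + 1)^3 grid
box = grid_index((0.25, 0.5, 0.75), (0.0, 0.0, 0.0), (1.0, 1.0, 1.0), g=3)   # -> (1, 2, 3)
```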
2.6.8 Dynamic management of the background
The dynamic management of the background includes the two previously discussed aspects (the update of the neighbouring relationships and that of the circumcenters and circumradii) as well as the update of the grid used for the base search. To update the grid, one needs to encode in it the point P being inserted, while dealing at the same time with the collisions, if any (meaning that there are already some vertices in the grid box within which point P falls). As previously indicated, the update of the neighbouring intra-cavity relationships can be completed by exploiting the connectivity matrices, at the expense of a larger amount of memory. In two dimensions, it is possible to complete this by finding the pair of elements sharing a given edge. As such an edge joins P and a point Xj lying on the cavity boundary, it is only required to find two elements having Xj as a vertex to ensure that these elements are adjacent through the edge PXj. As the elements are oriented, it can be checked that one contains the edge PXj, while the other has XjP as an edge. In three dimensions, the similar problem consists of searching for the pairs of elements sharing a face and, as above, this implies that the search concerns the elements containing a given edge, i.e., an oriented pair. Therefore, an
algorithm similar to that described in Chapter 1 for constructing an edge table can be employed.
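In two dimensions, the pairing described above can be sketched as follows, assuming each new triangle is stored as an oriented triple (P, a, b) built on the external cavity edge (a, b) (an illustrative layout, not the book's data structure):

```python
def intra_cavity_neighbours(new_triangles):
    """Adjacency between the new triangles of the ball of P (a sketch).

    Each new triangle is assumed to be stored as an oriented triple
    (p, a, b), where (a, b) is the external cavity edge it was built on.
    The triangle (p, a, b) and the triangle (p, b, c) are adjacent through
    the internal edge p-b: one contains the oriented edge (b, p), the
    other (p, b).
    """
    first = {a: t for t, (p, a, b) in enumerate(new_triangles)}   # boundary vertex -> triangle starting at it
    return [(t, first[b]) for t, (p, a, b) in enumerate(new_triangles)]  # neighbour across edge p-b

# Example: cavity boundary 10-11-12-13 (a quadrilateral), new point 99
ball = [(99, 10, 11), (99, 11, 12), (99, 12, 13), (99, 13, 10)]
pairs = intra_cavity_neighbours(ball)   # [(0, 1), (1, 2), (2, 3), (3, 0)]
```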
2.7 About some results
In this section, we would like to illustrate the material discussed throughout this chapter by depicting some application examples along with comments. The selected test to evaluate the proposed methods consists of constructing a triangulation of a box enclosing a given set of points. Examples in two dimensions. Two examples are now depicted. The first is a set of points defined on the boundary of a realistic domain, while the other concerns a set of points randomly created along lines and circles, the latter also being randomly defined. This last example is interesting as it is well known that Delaunay triangulation methods are not well-suited for cocyclic or even collinear cases. This fact is related to the way in which the cavities are constructed, by analyzing the circles circumscribed to the elements in the triangulation. Table 2.1 reports, for these two examples, the number of points, np, the number of triangles, ne, the required time, t, and the triangulation construction speed-up, v, this value indicating the number of triangles constructed per second.
                 np        ne       t        v
example 1       570     1,134    0.04   27,000
example 2    17,132    34,258    1.15   29,680

Table 2.1: CPU time (in seconds) versus number of elements.

To fully appreciate the efficiency of the proposed triangulation algorithm, we give here a third example. It is a quadrilateral enclosing a square whose boundary is discretized by a given number of segments. By varying the discretization step, we simulate different models with an increasing number of elements.

Case          1          2          3           4           5
np       25,453     72,897    239,360     713,557     713,557
ne       50,328    144,816    476,942   1,423,736   1,423,736
t          1.36       4.67       17.2       55.85         72.
v        37,000     31,000     28,000      25,500      19,800

Table 2.2: CPU time (in seconds) versus number of elements.
Figure 2.13: Example 1 (the given points are those of a domain boundary along with the four corners of an enclosing rectangle).
Figure 2.14: Example 2 (the given points are randomly created on circles and straight lines that have been randomly defined).
Table 2.2 reports statistics about the resulting triangulations (on a HP 9000/735, 99 MHz; notice that this time corresponds to the phase devoted to the triangulation construction only). This deserves some comments. It appears that a robust algorithm in two dimensions is quite fast. Example 5 in the table is the same as example 4; we have only modified the size of the grid used for searching the base (the boxes in this grid have been subdivided by a factor of 4). In fact, the reduced incremental method coupled with a correction phase, as proposed in this chapter, does not lead to significant problems when implemented on a computer and, despite the cost of the correction phase, its global cost remains quite reasonable.
Examples in three dimensions. At first, we give two examples of box triangulations (Figures 2.15 and 2.16), where the given sets of points correspond to the boundary of very simple domains. Figures 2.17 and 2.18 display an enlargement of the triangulations so as to better show the geometry of the two domains whose boundary points define the sets S.
Figure 2.15: Case 1.
Figure 2.16: Case 2.
It may be of interest to give some statistics on the CPU times needed to construct such box triangulations (HP 9000/735, 99 MHz). In Table 2.3 (as in the previous tables), np is the vertex number, ne is the element number, t is the required time and v stands for the number of elements constructed per second.
Figure 2.17: Case 1 (enlargement). Figure 2.18: Case 2 (enlargement).
np        390     1,055     1,765     2,600     19,258     33,371     26,012
ne      2,046     6,468    11,312    15,337    113,373    149,291    166,560
t        0.05      0.20      0.34      0.47       5.30       5.66       8.64
v      40,900    32,300    33,270    32,600     21,400     26,400     19,300

Table 2.3: CPU time (in seconds) versus number of elements.
In this table, we can observe that the speed-up, v, although it seems to be high, is not constant: the behavior is not strictly linear (notice that the first examples are not relevant in this respect), as can be seen in the last examples which express this behavior. So what kind of complexity can be expected? Most likely n log(n), according to the theory, where n is the size of the input (number of points or number of elements). Nevertheless, the given points are not in a general position (because they lie on the surface of some realistic domains) and, because of this particularity, the given points may compose collinear, cocyclic, cospherical and coplanar patterns, all of which penalize Delaunay-type methods. Thus, the theoretical speed-up behavior cannot be achieved. To avoid the above problem, we now propose an example in which the points are in a general position and we report in Table 2.4 the CPU time required to construct the triangulation as a function of the number of created elements. The speed-up aspect is depicted in Figure 2.19, which corresponds to Table 2.4.
Figure 2.19: CPU time (in seconds) versus number of tetrahedra.
ne   150,986   219,694   286,862   320,034   385,230   449,938   575,855
t       4.19      8.14     11.84     13.64     17.23     20.80     28.05
v     36,000    27,000    24,000    23,500    22,000    21,800    20,500
Table 2.4: Some values of the curve in Figure 2.19.
As far as we can see, the algorithm appears to be better in terms of complexity. The behaviour seems to be almost linear. An additional remark must be given. The speed-up of the algorithm in three dimensions is not very different from that of the corresponding algorithm in two dimensions. This is partly due to the fact that the scale concerns the number of elements and not the number of vertices in the triangulation. In this way, the measure is biased, since the ratio ne/np is used.
2.8 Applications
Numerous applications of the proposed method for triangulation construction can be found, as they apply in different fields of activity. Hereafter, a selection of possible applications is given. Convex hull by extraction in a box mesh. The reduced incremental method enables us to complete the triangulation of a box enclosing a given set of points. Extracting from this triangulation a triangulation of the convex hull is a tedious task. If we remove all the simplices having at least one vertex coincident with a corner of the box, the sought convex hull is
not, in general, obtained and, to find such a set, it is necessary to consider the problem globally. This means that the reduced incremental method is not well suited for this purpose. Convex hull as a triangulation boundary. We complete the triangulation corresponding to the given set of points without the help of an enclosing box. To this end, the incremental method (in its general version) is employed. The boundary entities of the triangulation are the desired result. To increase the efficiency of the process, points of the set already enclosed in the current triangulation are not inserted (a point in the situation depicted in Figure 2.7 does not contribute to the convex hull construction). Dynamic convex hull. To obtain a fast algorithm, one has to observe that the desired result is not a triangulation (a set of triangles or a set of tetrahedra depending on the spatial dimension) but a set of segments (in two dimensions) or a surface composed of triangles (in three dimensions). The algorithm then relies on updating a list of edges or a list of triangles, according to d, and not a list of simplices. This task is governed by a visibility criterion.
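A two-dimensional sketch of this third approach, with the hull stored as a counterclockwise list of vertices and the visibility criterion deciding which edges are replaced (function names are ours; degenerate, collinear configurations are ignored):

```python
def orient(a, b, c):
    """Twice the signed area of (a, b, c); > 0 when c lies to the left of a->b."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def insert_point(hull, p):
    """Update a counterclockwise hull (list of vertices) with one point.

    Edge i joins hull[i] to hull[i+1]; it is visible from p when p lies
    strictly to its right.  The visible edges form a contiguous arc: they
    are removed and replaced by the two edges joining p to the arc ends.
    """
    n = len(hull)
    visible = [orient(hull[i], hull[(i + 1) % n], p) < 0 for i in range(n)]
    if not any(visible):
        return hull                      # p lies inside the current hull: nothing to do
    s = next(i for i in range(n) if visible[i] and not visible[i - 1])       # first visible edge
    e = next(i for i in range(n) if visible[i] and not visible[(i + 1) % n]) # last visible edge
    kept = []
    i = (e + 1) % n                      # walk from the vertex after the arc back to its start
    while True:
        kept.append(hull[i])
        if i == s:
            break
        i = (i + 1) % n
    return kept + [p]

# Usage: start from any triangle of the set and insert the remaining points
hull = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
for q in [(1.0, 1.0), (0.5, 0.5), (2.0, 0.5)]:
    hull = insert_point(hull, q)
```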
Figure 2.20: Domain 1 and its convex hull (courtesy of SDRC).
Figure 2.21: Domain 2 and its convex hull (courtesy of ENST). Examples in Figures 2.20 and 2.21 depict, in three dimensions, the convex hull obtained as the boundary of a triangulation. For domain 2, 12,762 tetrahedra have been constructed within 0.78 seconds, while the third method completes the convex hull within 0.05 seconds! Localization search. Given a triangulation and a point P in space, the question is to find the element within which P falls. Indeed, we return to a problem similar to that of searching for a base (cf. Chapter 1); the only difference is that P could be identical to a vertex of the given triangulation (while this case was not covered in the previous discussion).
Meshing an arbitrarily shaped domain in R2 or R3. This is the aim of this book. The following chapters will deal in detail with this issue. They will indicate how the triangulation algorithms can be used to define meshing algorithms for arbitrary geometrical domains.
Surface meshing. A chapter will be devoted to the parametric surface case.
2.9 Notes
Other metrics. The usual Euclidean metric can be replaced by any other Euclidean metric. This leads to a modification of the notion of proximity and, thus, results in a different Voronoi diagram whose dual is the triangulation associated with the chosen metric.
Delaunay tree. This tree is a graph which stores in memory, for every point insertion, the local modification applied to update the current triangulation. Hence, this graph memorizes the hierarchy of the construction. The reason for constructing such a graph is to quickly locate the elements in the cavity associated with the point being inserted. The complexity of this algorithm is optimal, in any dimension, if the points are randomly inserted. On the other hand, the amount of memory required is relatively large. This notion of a Delaunay tree is discussed in [Boissonnat,Yvinec-1995] and the reader is referred to this reference. Delaunay triangulation from the convex hull. Given a set S in R^d, this cloud of points can be mapped into R^{d+1} using a geometrical transformation defined as X_i = x_i, i = 1, ..., d, the last coordinate X_{d+1} lifting the point onto a paraboloid (X_{d+1} = x_1^2 + ... + x_d^2 in the usual construction);
when the convex hull of the X_i's is completed, the Delaunay triangulation of the x_i's is obtained, cf. [Brown-1979], by projection (the x_i, i = 1, ..., d, denote the coordinates of a point in R^d, while the X_i, i = 1, ..., d + 1, define the points in R^{d+1}).
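A sketch of this construction for d = 2, using the paraboloid lifting written above and the convex hull routine of scipy (the lifting map is the common textbook choice; the original reference may use an equivalent transformation):

```python
import numpy as np
from scipy.spatial import ConvexHull

def delaunay_via_lifting(points_2d):
    """Delaunay triangles of a 2D point set via the convex hull in R^3.

    Each point (x, y) is lifted to (x, y, x**2 + y**2); the lower facets of
    the 3D convex hull (outward normal with negative z-component) project
    back onto the Delaunay triangles.
    """
    pts = np.asarray(points_2d, dtype=float)
    lifted = np.c_[pts, (pts ** 2).sum(axis=1)]
    hull = ConvexHull(lifted)
    lower = hull.equations[:, 2] < 0            # facets facing downwards
    return hull.simplices[lower]

# Example: four corners of a square plus its centre
tris = delaunay_via_lifting([(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)])
```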
Voronoi diagram of higher level entities. An example of such diagrams is that associated with a set of edges. A useful application of this notion is the construction of the skeleton of a polygon (polyhedron).
Complexity. The incremental method, as proposed and discussed in this chapter, has O(n^2) complexity in the worst case (in particular for cocyclic points in two dimensions or cospherical points in three dimensions). However, in practice, it enjoys a linear behavior (for a reasonable number of points) both in two and three dimensions.
In two dimensions, any arbitrary triangulation of n points includes O(n) triangles, as is true for Delaunay triangulations. In three dimensions, there are configurations of n points whose Delaunay triangulation may include O(n^2) tetrahedra, cf. [Shamos,Preparata-1985], [Klee-1980] and [Seidel-1982]. In [Edelsbrunner et al. 1990], a triangulation method in R^3 is described that exhibits an optimal time complexity and results in 3n − 11 tetrahedra in the worst case. Higher dimensions. The incremental method extends, in fact, to any dimension. Experimental checks have been completed up to dimension seven ([Borouchaki-1993]). In these implementations, the general position for the points has been assumed so as to avoid numerical problems. The following table reports some statistics about this. The number of simplices (ne) created as a function of the number of points (np) in terms of the spatial dimension (d) is indicated. Clearly, the number of created simplices increases very rapidly, thus leading to memory difficulties, see [Klee-1980].

np         50      100      200      500    1,000    2,000    5,000
d = 2                                971    1,965    3,958    9,935
d = 3                              2,943    6,138   12,661   32,398
d = 4     568    1,532    3,717   11,050
d = 5   1,550    5,040   14,207   44,904

Table 2.5: Statistics about some triangulations for dimensions 2 to 5.
Some open problems. The sweeping method, as well as the divide and conquer approach, are still open problems in three dimensions and in higher dimensions as well.
Chapter 3
Constrained triangulation
3.1 Introduction
We consider the space R^d with d = 2 or d = 3. Given a set or cloud of points, denoted S, we assume that a set of constraints, Const, is specified as a set of edges in two dimensions or a set of edges and faces in three dimensions. The set S is enriched by including the endpoints of the constrained entities (edge endpoints or face vertices) and the points of the so-created set S are assumed to be distinct. The problem then is to complete a triangulation, for instance a triangulation of Conv(S), the convex hull of S, or a triangulation of a box, Box(S), as defined in Chapter 2, such that the constraints in Const are represented in some sense in this triangulation. The triangulation algorithm discussed in Chapter 2 enables us to obtain a triangulation of Conv(S) or Box(S), which are sets where we would like to find the members of Const as triangulation entities in some sense. In general, except under some specific assumptions regarding the regularity of Const, this result does not hold. Therefore, after determining whether the problem has one or several solutions, it is necessary to develop algorithms which achieve the desired result. The first idea is to constrain the construction itself (a member of the set of constraints existing at one stage of the process will never be removed). Although it appears to be useful, it is not sufficient; thus other techniques must be involved. A suitable method is one which enables us to generate or regenerate, in a proper sense, the constraints in a given triangulation. Two classes of methods are encountered depending on how the constraints must be satisfied. The first kind acts by local modifications to enforce the given constraints, while the other kind tends to modify the constraints so as to create an "admissible" set of constraints. Spatial dimensions higher than three are also investigated.
3.2 Constraints and triangulation
Throughout this section, several definitions are given, both to formulate the problem in one sense or another and to define a toolkit suitable for developing algorithms for the desired solution. The set of constraints, Const, consists of a set of k-faces, 1 ≤ k ≤ d − 1. We assume that the members of the constraint do not self-intersect. The given set can include sub-sets forming, in two dimensions, polygonal lines and, in three dimensions, polygonal lines and polyhedral surfaces. These types of sets, if any, will be denoted by £.
3.2.1 Some definitions
Definition 3.1. A k-face is Delaunay admissible if the k + 1 Voronoi cells associated with the k + 1 vertices of the k-face share a (d − k)-face. As a corollary, a Delaunay admissible k-face will be formed whenever a Delaunay algorithm is employed. This means that if Const includes only Delaunay admissible entities, then the Delaunay triangulation completed by inserting the vertices of the constraint (i.e., the edge endpoints and the face vertices) will exactly include Const (according to the next Definition 3.2). In other words, the constrained triangulation is obtained by constructing the Delaunay triangulation associated with the set of vertices in the constraint. A stronger condition ensuring this property related to Delaunay admissibility is that, in two dimensions, every circle whose diameter is an edge of the constraint is empty (this circle is the smallest circle passing through the endpoints of the given edge). We have a similar characterization in three dimensions and, in general, in any arbitrary dimension. This key point is established hereafter. Definition 3.2. A given triangulation Tr satisfies exactly a given set of constraints Const if any member of Const is an entity of Tr. This definition is quite demanding and achieving such a triangulation is a rather tricky problem in three dimensions (albeit being obvious in two dimensions), as will be seen. Definition 3.3. A given triangulation Tr satisfies weakly a given set of constraints Const if any member of Const is an entity of Tr, exactly or as a partition.
This definition is less demanding and, from a computational point of view, is relatively easy to satisfy. In Figure 3.1, at the top on the left-hand side, a set of four edges (a1, a2, a3, a4) is depicted, forming the given set of constraints. The triangulation shown in this figure at the top on the right-hand side includes as edges all the ai's (Definition 3.2) and, specifically, the edge a4. The last triangulation of the figure includes a1, a2 and a3, while edge a4 does not exist as such, but can be found as the union of the two edges a'4 and a''4 (Definition 3.3).
Figure 3.1: Satisfying the constraints.
The notion of a triangulation of a polygon in two dimensions and, in three dimensions, of a polyhedron (actually, this is the triangulation or the mesh of a d-polytope) is now introduced; it is different from the notion of a triangulation (of a set of points). Definition 3.4. The triangulation of a polygon (resp. of a polyhedron) is a set of elements whose union is this polygon (resp. polyhedron) such that the conformal properties hold. In practice, we are interested in a triangulation which includes the boundary k-faces of the given polygon (polyhedron) as k-faces. Therefore, regarding this property, all the triangulations of a polygon (polyhedron) are said to be equivalent. Definition 3.5.
Two triangulations of a polygon (polyhedron) are
said to be weakly equivalent if they include the k-faces of the given polygon (polyhedron), or a partition of these k-faces, as k-faces.
Figure 3.2: Equivalent triangulations or weakly equivalent triangulations.
The first three triangulations of Figure 3.2 (left) are equivalent, while the last one (right) is weakly equivalent to the first three triangulations.
3.2.2 Constrained triangulation problems
Different classes of triangulation problems are encountered, depending on how Const must be represented in a triangulation:
• according to Definition 3.2, the integrity of the members of the field of constraints must be preserved. In this case, the method consists, in principle, of locally modifying the current triangulation so as to regenerate the entities of the constraint,
• according to Definition 3.3, the set of constraints can be modified, and thus either a Delaunay admissible set (if any) or a suitable set that is easier to process can be obtained. In this formulation of the problem, sets of type £ or equivalent sets may appear.
3.3 The two-dimensional case
In two dimensions, Const is a series of edges. Several different methods to constrain such a set are now discussed.
3.3.1 Construction of a Delaunay admissible constraint
The main point is that a Delaunay algorithm constructs the Delaunay triangulation in which the set of constraints must be present. To ensure this property, it is required to have Delaunay admissible (Definition 3.1) edges
in the constraint. Then, the method relies on partitioning the edges of the given field in such a way that the members of this partition form a Delaunay admissible field. Afterwards, this new field becomes the constraint to be satisfied.
Figure 3.3: A Delaunay admissible constraint.
A quite simple algorithm can be developed to complete an admissible field. This algorithm relies on the following proposition. Proposition 3.1. Let a = AB be an edge and Ca be the disc of diameter a (thus passing through A and B). The line supporting a partitions the plane into two half-planes, denoted by H+ and H−. Then the edge a is said to be Delaunay admissible if
• Ca is empty (no vertex is included in Ca), or
• Ca ∩ H+ (resp. Ca ∩ H−) encloses some points but C+ (resp. C−) is empty, C+ (resp. C−) being the circumdisc of the triangle formed by a and the point in Ca ∩ H+ (resp. Ca ∩ H−) such that C+ (resp. C−) does not include any other point of Ca ∩ H+ (resp. Ca ∩ H−).
Proof. The second condition is obvious as the defined triangle is a Delaunay element. The first condition implies that the midpoint M of a is closer to A and B than any other vertex of the triangulation. As a consequence, the
Voronoi cells corresponding to A and B share point M, at least. This point cannot be a Voronoi vertex because, in this case, it would be equidistant from at least three vertices of the triangulation. Hence, M lies on a Voronoi edge and thus the cells associated with A and B share only one edge, leading to the result. □
As a corollary, if the set of constrained edges satisfies the first condition of the above proposition alone, then this set is Delaunay admissible. The algorithm consists in processing each edge a recursively as follows:
• the disc Ca is empty, then a is Delaunay admissible (see the edge a2 and the circle Ca2 on the left-hand side of Figure 3.3),
• Ca is not empty, then one or more points exist in (for instance) Ca ∩ H+ (see the edge a2 and the disc Ca2 on the right-hand side of Figure 3.3; this disc is not empty, it actually encloses the other endpoint of a3),
— Method 1: define on a the point P, projection of the point1 of Ca ∩ H+ associated with the disc C+, and replace a by the two edges AP and PB.
— Method 2: search the point in Ca ∩ H+ associated with the disc C+ and analyze this disc, the circumdisc of the triangle formed by a and this point,
* the above disc is empty, the initial edge is Delaunay admissible (the disc Ca2,a3 in the figure is empty),
* the above disc is not empty, apply Method 1.
At each edge partition, the corresponding discs are strictly enclosed in the disc associated with the initial edge. Moreover, the new discs enclose at least one less point. As the number of points in the set is finite, the convergence of this algorithm is ensured. It may be observed, by the way, that replacing every non-Delaunay edge by introducing its midpoint is a method converging if a uniform refinement is applied globally. Indeed, the radii of the corresponding discs decrease, although it is not guaranteed that these discs enclose at least one less point than the former discs; thus, only a uniform splitting fine enough can lead to the result. One could consult [Weatherill-1985] about this topic. Notice that this algorithm satisfies the constraint in the sense of Definition 3.3. In addition, the resulting triangulation is not Delaunay.
1 Return to the definition of this point in Proposition 3.1.
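A sketch of Method 1, under the assumption that the point of the diametral disc associated with C+ (or C−) is the enclosed point subtending the largest angle over [A, B]; names are ours and collinear degeneracies are ignored:

```python
import math

def split_until_admissible(a, b, cloud, out):
    """Split edge [a, b] until each piece has an empty diametral disc.

    `cloud` holds the other points of S and `out` collects the sub-edges.
    When the diametral disc is not empty, the enclosed point subtending the
    largest angle over [a, b] is projected onto [a, b] and both halves are
    processed recursively.
    """
    mx, my = (a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0
    r2 = ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) / 4.0
    inside = [q for q in cloud
              if q not in (a, b) and (q[0] - mx) ** 2 + (q[1] - my) ** 2 < r2]
    if not inside:
        out.append((a, b))                       # diametral disc empty: admissible
        return
    def angle(q):                                # angle a-q-b
        u = (a[0] - q[0], a[1] - q[1])
        v = (b[0] - q[0], b[1] - q[1])
        c = (u[0] * v[0] + u[1] * v[1]) / (math.hypot(*u) * math.hypot(*v))
        return math.acos(max(-1.0, min(1.0, c)))
    q = max(inside, key=angle)
    ab = (b[0] - a[0], b[1] - a[1])
    t = ((q[0] - a[0]) * ab[0] + (q[1] - a[1]) * ab[1]) / (ab[0] ** 2 + ab[1] ** 2)
    p = (a[0] + t * ab[0], a[1] + t * ab[1])     # projection of q onto [a, b]
    split_until_admissible(a, p, cloud, out)
    split_until_admissible(p, b, cloud, out)

edges = []
split_until_admissible((0.0, 0.0), (4.0, 0.0), [(2.0, 0.5), (1.0, -0.4)], edges)
```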
3.3.2 Method by constraint partitioning
In this case, we consider a triangulation not satisfying the constraint. So, some edges in Const are not edges of the given triangulation. The key idea is to retriangulate every triangle intersected by a constrained edge while ensuring that the so-created sub-edges are edges in the resulting triangulation. The proposed algorithm consists of processing each missing edge. Let a be the segment corresponding to a missing edge. Then one has to
• find the pipe2 associated with segment a. Let A and B be the endpoints of a (these two points are vertices in the current triangulation),
• find the intersection points of the edges interior to this pipe with a. Let P1, P2, ..., Pn be these points,
• introduce the edges AP1, P1P2, ..., PnB in the triangulation,
• remesh the triangles while maintaining this list of edges.
Each missing edge a, with endpoints A and B, can then be retrieved in the triangulation as the edges AP1, P1P2, ..., PnB.
Figure 3.4: Initial pipe and partition of the constrained edges.
While quite obvious, this algorithm extends to higher dimensions, see [Borouchaki-1993]. In practice, a unique operator is required, used recursively if needed. Given a triangle and a point lying on one of its edges, this operator remeshes the triangle into two sub-triangles having this point as a vertex.
2 Return to Chapter 1 where the definition as well as the construction of a pipe are detailed.
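This elementary operator can be sketched in a few lines on vertex indices (names and conventions are ours):

```python
def split_triangle(triangle, edge, p):
    """Split an oriented triangle on one of its edges at the vertex index p.

    `triangle` is an oriented triple of vertex indices and `edge` a pair of
    two of its vertices; the triangle is replaced by the two sub-triangles
    sharing `p`, with the same orientation.
    """
    a, b = edge
    c = next(v for v in triangle if v not in edge)   # vertex opposite the split edge
    i = triangle.index(a)
    if triangle[(i + 1) % 3] == b:                   # edge (a, b) follows the triangle orientation
        return (a, p, c), (p, b, c)
    return (b, p, c), (p, a, c)

# Splitting triangle (0, 1, 2) on its edge (0, 1) at the new vertex 7:
t1, t2 = split_triangle((0, 1, 2), (0, 1), 7)        # -> (0, 7, 2), (7, 1, 2)
```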
As before, this method ensures the constraint in terms of Definition 3.3 and the resulting triangulation is not Delaunay.
3.3.3 Method by enforcing the constraints
In this case, we start with a triangulation that does not include the given constraints. Thus, there are some edges in Const which are not element edges in the triangulation. After discussing the existence of a solution, we would like to propose a method which enforces the constraints in terms of Definition 3.2. Existence of a solution. Before discussing the existence of a solution for this problem, we establish a preliminary result which can be seen as a complement of Theorems 2.2 and 2.3 of Chapter 2. Keep in mind that these results show, on the one hand, that it is possible to modify an arbitrary triangulation so as to obtain a Delaunay triangulation by means of edge swapping only and, on the other hand, that it is possible to modify any arbitrary triangulation so as to complete a different arbitrary triangulation. Theorem 3.1. There is a triangulation without internal vertex covering any domain with a non self-intersecting polygonal boundary. Proof. Let the Pi be the endpoints of the edges defining the polygonal domain in question. We assume the Pi's to be ordered in such a way that the points Pi−1, Pi and Pi+1 are successively encountered when the boundary of the domain is visited counterclockwise with respect to its exterior component. We denote the triple (Pi−1, Pi, Pi+1) by (ki). Then, there is at least one triple (ki) such that the corresponding triangle, Ki = (Pi−1, Pi, Pi+1), has a strictly positive surface area. This result is established by contradiction: a polygon (or a domain with a polygonal boundary) cannot be properly defined if all the triples associated with three consecutive boundary vertices are either null or strictly negative. If (ki) is such a triple, then only two states can exist. Either the triangle associated with the triple is empty (this is the case of triangle K8 = (P7P8P9) associated with the triple (k8) in Figure 3.5) or the corresponding triangle is not empty (on the same figure, see the triangle K4 = (P3P4P5), whose triple is (k4), which encloses point P12). The first state enables us to construct, as a triangle of the triangulation, the empty element associated with the actual triple. The initial domain is then modified by removing this triangle and the process is repeated after having reordered (formally speaking) the points of the new boundary. The second state leads to
Figure 3.5: The initial domain and the ordered vertices.
• picking all of the edges having at least one endpoint enclosed in the current triangle Ki,
• constructing the convex hull of these edges,
• replacing Pi+1 (or Pi−1) by one of the points of this convex hull, say Pl, by choosing the point that ensures that the triangle associated with the triple Pi−1PiPl (or PlPiPi+1) is empty.
Hence, we return to the first state and its construction rules are applied.
Figure 3.6: Creation of two triangles and resulting domain.
In the second case, three steps are required to show that the triangle Pi−1PiPl (or PlPiPi+1) is empty. First, the fact that the initial triangle has three consecutive points on the domain boundary implies that all edges intersecting this triangle have at least one endpoint included in this triangle. Indeed, if the last property does not hold, then the edge under consideration has a non-empty intersection
Figure 3.7: Convex hull of the edges intersecting a triangle. with one of the edges P;_iP; or P;P;+i. This is not permitted as the boundary of the domain is not supposed to be crossed. Second, since every point of the convex hull sees at least one of the edges Pj_iP; or P^P J+ i, then the triangle formed by joining such a point and the visible edge has a strictly positive surface. The Figure 3.7 depicts a configuration in which the triangle Pj_iP z Pj+i is non empty. The convex hull of the edges having a non empty intersection with this triangle encloses the points P/i and P/2. The triangles PiPi+\Pn and P;Pl+iP;2 have a positive surface area. Third, the closest point of the convex hull to the selected edge (Pt-_iP2- or PjPj'+i) ensures that the formed triangle is empty. The triangle PjPj+iP^ in the Figure 3.7 is empty. D The set of theorems used here provides a theoretical support for the following. Thus, discussing about how to enforce an edge in a triangulation is equivalent to discussing about how to mesh a polygon. We consider a triangulation and a set of constrained edges, Const, whose endpoints are vertices of the triangulation. Assuming that some constrained edges are missing in the current triangulation, we show that it is possible to modify the current triangulation so as to complete a (non-Delaunay) triangulation including these edges as element edges. Therefore, the existence of a solution for the given problem will be established. To this end, a constructive method3 is developed. Let a be an edge of Const currently missing in the triangulation. We denote also by a the segment joining A and B, the endpoints of a. We construct the pipe Ta 3
3 One can observe that the proof of the above theorem can be used to define a method, although it can have a poor complexity.
associated with the constraint a, and we denote by P the corresponding polygon. For the sake of simplicity, we assume that P is a simple4 polygon and we would like to triangulate the polygon P in such a way as to ensure the existence of a. By definition, a separates P into two sub-polygons P1 and P2 (each of them being located on one side of a). Let us consider, for instance, P1 and let us denote by P0 = A, P1, P2, ..., Pn−1, Pn = B the vertices of this polygon. We search among the Pj (j = 1, ..., n − 1) for the point closest to a. This point exists (if it is not unique, the first encountered can be selected); we call this point Pk. Then, APk and BPk separate P1 into at most three sub-polygons: one is the triangle ABPk, the two others are polygons whose number of sides is now less than that of the initial polygon. So, the polygon P1 is replaced by the triangle ABPk and, in the worst case, by two sub-polygons constituting the new triangulation problem. To complete this triangulation, it is only necessary to repeat the same method for all sub-polygons having more than three vertices (i.e., more than three sides) while replacing the edge a by the relevant edge (APk or PkB in the first recursion).
Figure 3.8: Initial polygon and first recursion.
This process converges. Upon convergence, a triangulation of Ta is completed and a = AB exists. This method, as applied to all edges of
4 A polygon is said to be simple if it is not self-intersecting.
Const, implies the existence of a solution and, moreover, completes the latter (assuming non-self-intersecting edges in Const).
Exercise 3.1. Give a proof of this algorithm (Hint: the proof is based on the fact that the initial polygon has a finite number of vertices. It also uses the fact that the complexity of the sub-problems recursively defined is strictly less than that of their parent problems. Notice, on the one hand, that since the initial polygon is simple (as supposed at first), the series of polygons involved in the process maintains this feature, and, on the other hand, that a polygon with three sides stops the recursion).
Exercise 3.2. Discuss how to handle the case of a non-simple polygon (Hint: find a method to return to the case of a simple polygon).
Method using edge swapping. According to the previous material, a solution of the problem exists. Consequently, it is sufficient to apply Theorem 2.3, meaning that the solution can be achieved by means of edge swapping. Note that this result has been established in two dimensions only. The method consists of locally modifying the triangulations of some relevant polygons (cf. Definition 3.4), taking advantage of the equivalency of such triangulations. It is now possible to propose the algorithm that follows. Other examples include those presented in [Hecht,Saltel-1989] and [Dyn,Goren-1993], where a slightly different approach is involved. Let a be the segment corresponding to a missing edge, whose endpoints are vertices of the triangulation, then
• determine the pipe associated with the segment a,
• apply edge swapping, as long as possible, to every edge of the triangulation intersected by a, while avoiding an infinite loop.
The following algorithm is quite elegant:
• determine the pipe associated with the segment a,
• randomly apply edge swapping to every edge of the triangulation intersected by a.
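The two geometric tests at the heart of these swapping loops can be sketched as follows (two dimensions, points in general position; names are ours):

```python
import random

def orient(a, b, c):
    """Twice the signed area of (a, b, c); > 0 when c lies to the left of a->b."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def crosses(a, b, c, d):
    """True if the open segments ]a,b[ and ]c,d[ intersect."""
    return (orient(a, b, c) * orient(a, b, d) < 0 and
            orient(c, d, a) * orient(c, d, b) < 0)

def swappable(c, d, e, f):
    """The edge c-d, shared by two triangles of apexes e and f, can be swapped
    to e-f exactly when the quadrilateral around c-d is strictly convex,
    i.e. when the two diagonals cross."""
    return crosses(c, d, e, f)

# Inside the enforcement loop (sketch): pick at random an edge (c, d) of the
# pipe for which crosses(A, B, c, d) holds and swap it when swappable(c, d, e, f) is true.
```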
Figure 3.9: Initial pipe and first three edge swaps.
The convergence of these algorithms is ensured since a solution exists by means of edge swapping operations and since the number of required edge swaps is necessarily bounded. As no (infinite) loop is possible (as a consequence of an explicit check or due to the random aspect of the second algorithm), the convergence is guaranteed. In Figure 3.9, one may notice that the number of elements in the pipe does not necessarily decrease when an edge is swapped. Nevertheless, the number of elements in the pipe tends globally to two, so that a final edge swap achieves the solution.
3.4 Constrained Delaunay triangulation
This notion, briefly introduced in Chapter 1, is specific to two-dimensional space. It leads to the extension of the concept of a Delaunay triangulation to the case where one or several edges (forming the constraint) are specified. Definition 3.6. Tr is said to be a constrained Delaunay triangulation of Ω if the open circumdiscs (circumballs) of its elements do not include any point visible5 from the vertices of these elements. Constructing a constrained Delaunay triangulation can be achieved in three steps. First, the given points and the endpoints of the constrained
5 A point is said to be visible from another point in the mesh if the segment joining these two points does not intersect any constrained edge.
edges are inserted. Then, the specified edges are enforced and, finally, edge swapping is applied to all created edges if the Delaunay criterion is violated according to the above definition. This kind of triangulation can be used in several different applications, including computer vision, CAD and many others. On the other hand, this type of triangulation is not useful in the meshing context for finite element purposes, as will be seen in Chapter 5.
3.5 The three-dimensional case
While the two-dimensional problem is successfully solved, the corresponding three-dimensional problem is still not fully treated. Numerous problems are still open and, specifically, the existence of a solution, according to Definition 3.2, is not clearly established. Nevertheless, this does not mean that more or less heuristic methods have not been developed, which aim at constructing a suitable solution. In practice, several authors have shown that it is possible to achieve a constrained triangulation in most of the cases. Within the discussion, we will split the problem into two phases. The first phase concerns the regeneration of the edges in the constraint field, in some sense. The other is devoted to the enforcement of the faces specified in the given constraint. Indeed, it must be noticed that a specified face may be missing while its three (specified) edges are present in the current triangulation. In what follows, a discussion similar to the two-dimensional case is made and we indicate, with respect to the different approaches of satisfying the specified constraints, the existence of a solution if any.
3.5.1 Construction of a Delaunay admissible constraint
Delaunay admissible edges. Dealing with the non Delaunay admissible edges included in the constraints leads to the same result as in two dimensions. Such an edge can be split using Method 1, detailed in two dimensions. Similar reasons ensure the convergence; indeed, the spheres involved in the construction are strictly enclosed in the former spheres they replace. Remark. Extending the two dimensional Method 2 previously discussed is a more tedious problem. It is actually not so easy to guarantee the Delaunay criterion for a given edge by analyzing all the tetrahedra that can be formed from this edge.
Delaunay admissible faces. The construction of a set of Delaunay admissible faces, given a set of arbitrary faces, seems to be an open problem. In fact, we think that even the existence of a solution does not always hold. Splitting a face into sub-triangles by introducing a point inside the given face does not lead to spheres strictly included in the sphere they replace. Similarly, introducing the edge midpoints on the current face leads to the same negative conclusion. Moreover, dealing with a face may result in the need to modify adjacent faces, contrary to the case of the edges. To avoid such a propagation, one solution consists of constructing a uniformly refined mesh, leading to a large size problem.
Exercise 3.3. Construct an example of the above situation (Hint: splitting a triangle face into four may result in repeating the same pattern indefinitely).
Nevertheless, the following proposition can be used to decide if a given face is Delaunay admissible.
Proposition 3.2. Let f = ABC be a face and Cf be the smallest ball passing through A, B and C. The plane through f partitions the space into two half-spaces, denoted by H+ and H−. Then, the face f is Delaunay admissible if
• Cf is empty (no vertex is included in it), or
• Cf ∩ H+ (resp. Cf ∩ H−) encloses some points but C+ (resp. C−) is empty, C+ (resp. C−) being the circumball of the tetrahedron formed by f and the point in Cf ∩ H+ (resp. Cf ∩ H−) such that C+ (resp. C−) does not include any other point of Cf ∩ H+ (resp. Cf ∩ H−).
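The first condition of this proposition can be tested directly; a sketch (names are ours), where the centre of the smallest ball through A, B and C is the circumcentre of the triangle computed in its supporting plane:

```python
import numpy as np

def smallest_ball_empty(a, b, c, cloud):
    """Is the smallest ball passing through the face f = ABC empty?

    The ball has the circumcentre of the triangle ABC as its centre and the
    circumradius as its radius; the face passes the test when no other point
    of `cloud` lies strictly inside that ball.
    """
    a, b, c = (np.asarray(v, dtype=float) for v in (a, b, c))
    ab, ac = b - a, c - a
    n = np.cross(ab, ac)
    centre = a + (np.dot(ac, ac) * np.cross(n, ab)
                  + np.dot(ab, ab) * np.cross(ac, n)) / (2.0 * np.dot(n, n))
    radius = np.linalg.norm(centre - a)
    return all(np.linalg.norm(np.asarray(q, dtype=float) - centre) >= radius
               for q in cloud
               if not any(np.array_equal(np.asarray(q, dtype=float), v) for v in (a, b, c)))
```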
3.5.2 Method by constraint partitioning
Splitting the edges. In this type of solution, the method proposed in two dimensions can be extended without difficulty. We search the pipe associated with the segment a corresponding to the missing edge. This pipe is actually a general pipe (cf. Chapter 1). The intersection points of a with the elements fall into the following categories:
• a intersects one or two faces of a given tetrahedron,
• a intersects one or two edges of a given tetrahedron,
• a intersects one face and one edge of a tetrahedron of the pipe.
The algorithm consists of finding the intersection points and of defining the required operators:
• an operator that splits an element having a face intersected by a into three sub-tetrahedra. This is the operator OP2 described later (see Figure 3.10),
• an operator which deals with the intersection through an element edge (where the corresponding shell is considered).
Recursively applied, these operators enable us to manage all the possible configurations and result in ensuring the existence of a (non-Delaunay) solution, which is completed at the same time.
Figure 3.10: Operator OP2.
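At the connectivity level, this operator, applied to the pair of tetrahedra sharing the intersected face, can be sketched as follows (orientation conventions are left aside; names are ours):

```python
def op2(face, d, e, p):
    """Insert a point p lying on the face (a, b, c) shared by the
    tetrahedra (a, b, c, d) and (a, b, c, e).

    Each of the two elements is replaced by three sub-tetrahedra around p.
    """
    a, b, c = face
    upper = [(a, b, p, d), (b, c, p, d), (c, a, p, d)]
    lower = [(a, b, p, e), (b, c, p, e), (c, a, p, e)]
    return upper + lower

# Example with vertex indices: face (0, 1, 2) shared by apexes 3 and 4, new point 9
new_tets = op2((0, 1, 2), 3, 4, 9)
```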
Splitting the faces. In principle, the face problem can be solved in a similar way. The method consists of directly enforcing the sub-faces resulting from the intersection of the constrained face and the tetrahedra of the current triangulation.
Exercise 3.4. Prove that an algorithm enforcing the sole edges of the sub-faces does not lead to the desired result (Hint: analyze the case where a sub-face is a four-sided polygon).
Exercise 3.5. Develop the algorithm which directly enforces the sub-faces (Hint: take care to preserve a conforming triangulation). The reader will find hereafter the description of this algorithm for arbitrary dimensions.
3.5.3 Method by enforcing the constraints
Satisfying the constraint in terms of Definition 3.2 results in a problem for which the existence of a solution is not proved. We will now point out the reason for this lack of solution. Nevertheless, we will propose a heuristic method suitable for most cases. Existence of a solution? The Schönhardt polyhedron (at the bottom of Figure 3.11) is a rather simple polyhedron for which it is not possible to find a triangulation (consisting of strictly positive volume tetrahedra) without introducing internal point(s). Such points are called Steiner points. Theorem 3.1 does not extend to three dimensions. In this example, this results from the fact that the triangulation without any internal points contains a tetrahedron whose volume is exactly zero. On the other hand, a point located anywhere in the interior of the polyhedron belongs to the (visibility) kernel of this set. Consequently, joining the faces of the polyhedron to this point results in a valid triangulation. This example, while rather obvious, points out a fundamental difference between the two-dimensional and the three-dimensional problem. In two dimensions, a solution without (Steiner) points always exists. The example also leads us to expect some difficulties in detecting this type of problem as well as in determining the adequate Steiner points. Thus, an open problem is the determination of the minimum number of Steiner points required to make the constrained triangulation possible. Another open question is that of finding appropriate locations for these points, with a polynomial complexity.
Exercise 3.6. "Twist" the Schönhardt polyhedron (i.e., reduce the size of its kernel) and find the location of the Steiner point making its triangulation possible (To play this game, one can use construction paper to build a model. This is a good way to develop some insight into the difficulty of this problem).
Figure 3.11: Possible prism and Schönhardt polyhedron.
To conclude, it has not been proved that an algorithm with a polynomial cost can be exhibited to ensure the existence of a solution for the given problem (see [Ruppert,Seidel-1992]). As this does not rule out the existence of a solution, we would like to propose a heuristic method suitable for most cases. This method is detailed hereafter.
Method by means of local transformations. The key idea is to use local transformations to regenerate the missing members of Const. Its principle is based on the equivalency of polyhedral triangulations (cf. Definition 3.4). In practice, operators will be defined so as to remove an edge or a face intersected by any missing entity of the constraint. Useful local operators. The number of operators dealing with an (elementary) polyhedron is small. Indeed, one can
• introduce a point along an edge and remesh the corresponding shell (cf. Chapter 1). This operator is denoted by OP1,
• introduce a point on a face and remesh the shell formed by the two elements sharing this face, OP2,
• create a point inside a tetrahedron and subdivide it into four elements, OP3,
• introduce two points on two faces of a tetrahedron and remesh it into five tetrahedra, OP4,
• remove the edge defining a shell, OP5,
• remove the face common to two tetrahedra, OP6,
• relocate a point, OP7.
The operators OP1, OP2, OP3 and OP4 are rather simple and apply without restriction. On the other hand, the operators OP5 and OP6 do not apply in all situations, as the resulting triangulation can be made invalid. These tools, suitable for optimization purposes, will be described in detail in this context (see Chapter 8). Finally, the operator OP7, also described later, can be used without any difficulty6, at least in theory. In summary,
• OP1 operates on a shell and a point, denoted as OP1(C, P). This operator results in two half-shells,
• OP2 operates on two elements and a point, denoted as OP2(K1, K2, P).
Exercise 3.7. Develop the operators OP1, OP2 (see Figure 3.10), OP3 and OP4 (the more delicate operators OP5 and OP6 will be described in detail in the chapter devoted to triangulation optimization).
At this stage, we have defined a list of operators which allow for the creation of additional points (the Steiner points) and the removal or creation of an edge or a face. It can be observed that ensuring the conformity of the triangulation when applying an operator may result in the modification of one or more elements in the neighborhood of the operation. With this material, we would now like to define a tentative method leading to the solution. If such a method is found, then the existence of a solution will be obtained, at least in theory. Following the previous discussion about the nature of the problem, it is tedious to obtain such a result (the presumed thoughtful reader will take pleasure in finding where the proposed method could fail).
Enforcing an edge. Considering the current triangulation and a missing edge a, member of Const, we construct the pipe, Ta, corresponding to a. According to the type of the intersection points of a and the elements of the triangulation,
• Ta is a (simple) pipe,
• Ta is a general pipe, including some shells whose defining edges are intersected by a.
The first situation is relatively easy to consider, at least formally speaking. While the repeated use of operator OP6 is not possible in all the configurations, the method relies on observing that a pipe consisting of two tetrahedra is necessarily convex and thus the solution can be achieved (via the operator OP6). Hence, we aim at defining an algorithm decreasing the number of members in a series of pipes, the first pipe of this series being Ta. Let A and B be the endpoints of a; we set i = 0, define T'i as the empty set and denote the pipe Ta by Ti. Then Ta = Ti + T'i holds. Let the Kj's be the ni tetrahedra of Ti; we denote by K1 the tetrahedron having A as a vertex and whose face opposite to A is intersected by a. Similarly, we denote by Kni the tetrahedron having B as a vertex and whose face opposite to B is intersected by a. With these notations, we assume that the Kj's are ordered, meaning that K1, K2, ..., and finally Kni, are successively encountered while traversing a from A to B. Then the algorithm consists of decreasing the number of elements of Ti while ensuring, by means of an adequate construction of T'i, that, for all i, Ta = Ti + T'i holds. Then
• (α) If ni = 2, apply OP6 to (K1, K2), END.
• Else, consider the first7 elements of Ti:
(β) If the polyhedron formed by K1 and K2 is convex, apply OP6 to (K1, K2). K1 and K2 are then replaced by three tetrahedra, E1, E2 and E3; among them, only one8 has its face opposite to A intersected by a. Let E1 be this element; then
Ti+1 = (Ti \ {K1, K2}) ∪ {E1},  T'i+1 = T'i ∪ {E2, E3},
implying that ni+1 = ni − 1; then GO TO (α).
(γ) If the polyhedron formed by K1 and K2 is not convex, OP6 cannot be applied. We then consider element K3. If Q is the intersection point of a with the common face between K2 and K3, and if P is the vertex of K2 voyeur of K1, then a point R lying on PQ exists, visible from A (the plane (π) in Figure 3.13 limits (at the bottom) the visibility cone from A, thus defining the point R*; R then lies between this point R* and Q). Once this point is constructed, we apply OP2 to (K2, K3, R). Let E1, E2, E3 be the three elements replacing K2 and E4, E5, E6 those replacing K3. If only one element in each of these triples is intersected by a, assuming that these are E1 and E4, then
Ti+1 = (Ti \ {K2, K3}) ∪ {E1, E4},  T'i+1 = T'i ∪ {E2, E3, E5, E6},
implying that ni+1 = ni; then GO TO (α).
If more than one element of the second triple is intersected by a (for the first triple, there is no problem because of the construction of the point R on PQ), a more delicate construction is required to achieve the decreasing property of the series ni. Let DP be the set of elements of Ti having P as a vertex; we construct a sequence of points Ri similarly to the above point R (this
implying that r^+i = nt-, then, GO TO (a). If more than one element of the second triple is intersected by a (for the first triple, there is no problem because of the construction of the point R in PQ), a more delicate construction is required to achieve the decreasing property of the series n,-. Let Dp be the set of elements of Tl having P as a vertex, we construct the sequence Rj similarly as the above point R (this Obviously, one can also consider the last elements as well. If P denotes the point of KI voyeur of K\, this feature suppose that AB does not belong to the plane supporting one of the faces APP\, APPi or APP$. If not, indeed, not only one but two elements must be added to the newly created pipe and the desired property is not achieved. We are actually in the general configuration, i.e the second case described hereafter.
point is actually the point R1). This construction relies on defining the Qi's, intersection points of AB with the common faces between the elements of DP. Then, we introduce the Ri's as Ri = ωi P + (1 − ωi) Qi, such that a proper choice of ωi ensures that the distance between AB and Ri decreases while going away from A (Figure 3.14 illustrates this construction using a cutting plane). The first and the last elements of DP are remeshed into three tetrahedra (OP2) while the intermediary elements are remeshed into five (OP4). Thanks to the location of the points Ri, step (β) is reached when dealing sequentially with the resulting pipes (Ri being considered, then Ri+1 becomes visible from A). As we have
Ta = Ti + T'i
at each step of the algorithm, the initial pipe is retriangulated and, moreover, the desired edge a now exists.
Proof. This algorithm constructs a sequence ni that is strictly decreasing in spite of step (γ). Indeed, if (γ) is performed, the next step necessarily involves step (β). □
Figure 3.12: Operator OP6 applied to K1 and K2.
In practice, the proposed algorithm uses only the operators OP2 and OP6, while in a computational implementation, other operators can be considered. In the second case, i.e., if there is at least one shell corresponding to the edge a, we will try to remove the shell(s) leading to the general pipe configuration and, consequently, return to the previous case.
Figure 3.13: The polyhedron (K1, K2) is not convex (case 1).
Figure 3.14: The polyhedron (K1, K2) is not convex (case 2).
Let then C be the shell associated with an edge αβ whose intersection with a is not empty, and let Q be this intersection point. Then,

• we construct a point P on αβ, distinct from Q, for instance the midpoint of αβ,
• we apply OP1 to (C, P),
• we relocate P by means of OP7.

This rather simple scheme removes the impeding edge and, when applied to all edges of this kind, it makes it possible to return to a (simple) pipe situation.
Figure 3.15: Removing the edge leading to the general pipe.

Proof. The point P can be moved, thus proving the result. Indeed, the visibility kernel of P is not empty (Hint: the ball of P exists, consisting of some tetrahedra of positive volume; we then assume that moving P by a step as small as desired cannot result in a negative or null volume). □

Enforcing a face. We assume in the following that all the edges of Const have been enforced. In numerous cases, once the edges of a face exist, the face in question also exists. Nevertheless, this is not necessarily guaranteed in general. Indeed, there are configurations in which some faces are missing while their three edges all exist in the triangulation. This means that one or more edges of the current triangulation intersect the plane supporting the desired face inside the triangle corresponding to this face. Such a configuration is said to be a pebble. The following paragraphs show how to reduce such pebbles by suppressing all the intersecting edges in order to regenerate the missing faces.
A pebble is then a set of elements having an edge intersecting a missing face.
Figure 3.16: Pebble corresponding to the face P1P2P3.

• Case of a three-element pebble. The simplest pebble we can imagine is the one where the face is intersected by only one edge. This pebble is necessarily a shell with three elements. As the impeding edge, αβ, intersects the triangle P1P2P3 whose vertices are the endpoints of the considered face, this pebble is convex and OP5 applies.

• Case of an arbitrary pebble. In general, pebbles can include an arbitrary number of intersecting edges (in Figure 3.16, a case is depicted for which two edges, α1β1 and α2β2, intersect the face P1P2P3). A shell of an arbitrary number of elements is associated with each intersecting edge. The key is to remove these edges by reducing their corresponding shells. The pebble is constructed by adjacency, starting with the three shells with edges P1P2, P2P3 and P3P1. Such a shell, for instance that of P1P2, can be written as the collection of the tetrahedra (M_j, P1, P2, M_{j+1}), j = 1, ..., n_c (modulo n_c), where n_c is the number of elements in this set. We examine the edge M_k M_l opposite to the edge P1P2 and, in the case where an intersection with the triangle P1P2P3 is found, a new shell to be considered is exhibited. We then visit the edges of this shell, other than M_k M_l, in
order to determine the other possible shells, and this process is repeated. The problem now turns into removing the series of edges intersecting any face by processing all of the corresponding shells.

• (1) If the pebble includes only 3 elements, apply OP5, END.
• Else,
  – For all of the existing shells, apply OP5 whenever this is possible;
  – For all of the shells still present, we modify each initial shell, with a defining edge denoted by αβ and whose number of elements is n_c, in such a way as to obtain a reducible shell. To this end, we define the plane P⁺ parallel to the face P1P2P3, passing through a point Q⁺ suitable for every edge in the configuration. We construct the point N⁺, intersection of αβ with P⁺. In other words, a point is introduced "above" the face plane; we similarly introduce a point "below", namely the point denoted by N⁻. Each initial shell

    (P1, α, β, P2), (P2, α, β, P3), ..., (P_{n_c}, α, β, P1)

can then be rewritten, once the edge αβ has been subdivided by N⁺ and N⁻, as the shells associated with the three sub-edges of αβ; among these, the shell associated with the sub-edge N⁺N⁻, the only one which needs to be considered, is reducible.

Proof. (Key ideas for the proof.) By construction, the initial shell has been triangulated in a different way. This mesh includes some new shells which are reducible (meaning that OP5 applies⁹). This feature is due to the way in which the points N⁻ and N⁺ have been constructed, by an adequate choice of the Q⁻ and Q⁺ involved in the construction. Moreover, a pebble is a set enjoying some specificity which offers the properties required to ensure the constructiveness of the process. □
⁹ This operator does not require the shell under consideration to be convex in order to be successfully used; the convexity property is too strong a condition.
A brief conclusion. As described in this section, the method to enforce the edges and the faces of a given constraint indicates a way to achieve a constrained triangulation in three dimensions, starting from an initial (Delaunay or non-Delaunay!) triangulation whose vertices include the endpoints of the considered constraint. This method does not take care of certain existence arguments of the form "there exists a point such that ...", nor of the complexity aspect (number of operations or number of required (Steiner) points). From a computational point of view, we have obtained a solution for the given problem in three dimensions in most of the experimental cases. Obviously, the resulting triangulation is not a Delaunay triangulation.
3.6
Higher dimensions
According to the previous results relative to three dimensions, one can easily imagine that the same problem is not solved in higher dimensions. The only possible method, in our opinion, consists of splitting the constraints. This method is now described.

3.6.1
Constraint partitioning method
This method, previously discussed in two and three dimensions, applies regardless of the spatial dimensions or the dimension of the constrained face under consideration.
Figure 3.17: A configuration depicting the intersection points with a face.

The method consists in identifying the simplices intersected by the constrained face (a k-face in practice) and in retriangulating them independently, such that
• the triangulation of each intersected simplex contains a triangulation of the polytope resulting from this intersection,
• every pair of adjacent intersected simplices shares the same triangulation of their common (d − 1)-face.

The first property implies that the constrained face is retrieved as a partition in the triangulation, while the second property ensures that the resulting triangulation is a conforming one. Figure 3.17 shows, for a three-dimensional example, a representative configuration of this problem (among the different possible cases). The face f intersects the two tetrahedra Aαβγ and αβγB; the intersection points are denoted by P_i, i = 1, ..., 6. In a first stage, we define a triangulation, said to be a lazy triangulation, which will be used to retriangulate the simplices intersected by the constrained face under consideration. Then, we propose an algorithm suitable to construct the constrained faces.

Let E be a set of n points in the space R^d, E = (P_i), i = 1, ..., n. We denote by Lex(E) this set of points when the latter are lexicographically ordered¹⁰. As a consequence of this ordering, the (i+1)-st point of Lex(E) does not belong to the convex hull of the first i points of Lex(E). For simplicity, we assume now that E is ordered accordingly (thus E = Lex(E)).

Lazy triangulation. We denote by E_k the first k points of E and we construct the lazy triangulation T(E_k) of E_k recursively, in the affine hull of E_k, as follows
• T(E_{k+1}), in the affine hull of E_{k+1}, denoted by Aff(E_{k+1}), is defined by

    T(E_{k+1}) = T(E_k) ∪ ⋃_{f ∈ F_k} Conv(P_{k+1}, f)   if P_{k+1} ∈ Aff(E_k),
    T(E_{k+1}) = ⋃_{s ∈ T(E_k)} Conv(P_{k+1}, s)          if P_{k+1} ∉ Aff(E_k),

where F_k is the set of faces of the convex hull of E_k, whose dimension is maximal, and which are visible from the point P_{k+1}; Conv(P_{k+1}, f) is the d-simplex defined by the face f and the point P_{k+1}, and Conv(P_{k+1}, s) is the d-simplex defined by the (d − 1)-simplex s of T(E_k) and the point P_{k+1}.
¹⁰ If x_i^j denotes the j-th coordinate of the point P_i, the lexicographic ordering is defined by the relation: P_i < P_k iff there exists m, 1 ≤ m ≤ d, such that x_i^l = x_k^l for all l < m and x_i^m < x_k^m.
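As a small illustration (ours, not the book's), the lexicographic relation of the footnote is exactly the componentwise comparison of coordinate tuples, so ordering a point set into Lex(E) is immediate:

```python
import functools

def lex_less(p, q):
    """Lexicographic order: p < q iff, at the first coordinate where they
    differ, p is smaller (cf. footnote 10 above)."""
    for a, b in zip(p, q):
        if a != b:
            return a < b
    return False                      # identical points: not strictly smaller

def lex_sort(points):
    """Return Lex(E) for a point set E given as coordinate tuples."""
    return sorted(points, key=functools.cmp_to_key(
        lambda p, q: -1 if lex_less(p, q) else (1 if lex_less(q, p) else 0)))

E = [(0.7, 0.2), (0.1, 0.9), (0.1, 0.3), (0.7, 0.0)]
print(lex_sort(E))   # [(0.1, 0.3), (0.1, 0.9), (0.7, 0.0), (0.7, 0.2)]
```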
3.6.
HIGHER
DIMENSIONS
101
Exercise 3.8. Prove that this triangulation is well defined (Hint: the point P_{k+1} does not belong to the convex hull of E_k).

Retriangulating an intersected simplex. Let f be the constrained face under consideration, K be one of the simplices intersected by f, S(K) be the set of vertices of K and S(K_f) be the set of vertices of the (d − 1)-polytope K_f resulting from the intersection. Then, the retriangulation of K is defined
by T(Lex(S(K_f)), Lex(S(K))), and the final triangulation of f is defined by
    ⋃_K T(Lex(S(K_f))) ,

where K ranges over the set of simplices intersected by the face f.
Figure 3.18: Triangulation of the face αβγ by introduction of the points according to the lexicographic order.

Going back to the example of Figure 3.17, the lexicographic order is formed, for the first element, by its vertices A, α, β, γ together with the intersection points it contains and, for the second element, by α, β, γ, B together with the corresponding intersection points. Without enumerating the simplices resulting from the construction, we give the triangulation related to the part of f intersected by each of the simplices Aαβγ and αβγB. Afterwards, we turn to the triangulation of their common face. The triangulation of the part of f intersected by the first simplex is given by
the lazy triangulation of the corresponding intersection points, which is a set composed of two triangles. The triangulation of the part of f intersected by the second simplex is given by T(P2, P1, P5, P6), which is also a set formed by two triangles. On the other hand, the construction of the mesh of the face common to the two elements is that depicted in Figure 3.18; it is obtained by inserting, in lexicographic order, the points P2, P1, α, β and γ.
This leads to the creation of the triangulation by:

• incorporating P2: one obtains the vertex P2 (a 0-simplex), as the starting point of the construction,
• incorporating P1: one obtains the segment P2P1 (a 1-simplex), because P1 ∉ Aff(P2),
• incorporating α: one obtains the triangle P2P1α (a 2-simplex), because α ∉ Aff(P2, P1),
• incorporating β: one obtains the two triangles P2P1β and P2αβ (two 2-simplices), because β ∈ Aff(P2, P1, α),
• and then incorporating γ: one obtains the two triangles P1αγ and P1βγ (two 2-simplices), because γ ∈ Aff(P2, P1, α, β).

Consequently, the resulting triangulation is a conforming one. The two following exercises analyze this construction in detail so as to prove that the two desired properties are satisfied.
Exercise 3.9. Show that the resulting triangulation encloses a triangulation of Kf (first expected property).
Exercise 3.10. Show that the above process ensures the same triangulation in the common (d − 1)-faces (second expected property). A full description of this algorithm can be found in [Borouchaki-1994].
3.7
Computational aspects in three dimensions
Let us consider a set of constraints and a triangulation, for instance a triangulation completed by the Delaunay method with the constrained kernel. The goal is to obtain the list of the edges and faces that are still missing once their endpoints have been inserted. Then, by means of the local modification algorithms, we will define a scheme which results in the regeneration of these missing entities.
3.7.1
Searching the missing constraints
In order to recover the missing constraints, one can simply follow the method proposed in Chapter 1 relative to the construction of the edge (face) table of a given triangulation. Once this table is completed, the missing entities are identified by visiting the elements of the current triangulation.
3.7.2
Local configurations
A pipe is associated with every missing edge. With every missing face, once the three corresponding edges have been regenerated, is associated its pebble. In practice, the missing entities are dealt with one at a time, starting with the missing edges and then moving on to the missing faces.
3.7.3
Tentative scheme for an algorithm
For the missing edges. The previously described edge recovery algorithm is used, enabling us to define a computational method. The following scheme is proposed (for every missing edge, say AB):

• Remove the edges resulting in a general pipe
  – by means of a direct remeshing (operator OP5),
  – by introducing a point along every such edge (operators OP1 and OP7);
• Process the (simple) pipes
  – by considering the "first" or the "last" two elements (while visiting the missing edge AB from A to B or from B to A),
  – by considering two consecutive elements,
  – by considering a set of elements (using the points R described in the algorithm).
In contrast to the algorithm of Section 3.5.3, our concern here is to limit the number of (Steiner) points. This means that the point creation phase is delayed as much as possible. Indeed, a configuration requiring a Steiner point can disappear as a neighboring pattern is modified.

For the missing faces. Similarly, we follow the above algorithm to define a computational method. We propose the following scheme (for every pebble):

• Suppress the edges resulting in this pebble
  – by direct remeshing (operator OP5),
  – by incorporating some relevant points along every such edge.

Our concern is to limit the number of (Steiner) points, just as in the previous scheme.
3.8
Some application examples
Throughout this section, several application examples are given and discussed. These cases aim to illustrate some of the methods previously proposed. Examples in two dimensions. We give several examples analyzed by means of two methods. The first method consists of modifying the constrained edges of the given constraint so as to obtain a set of Delaunay admissible edges. Then, the second method solves the problem by means of edge swapping. In Figure 3.19 a constraint (a set of edges forming a polygon) is shown and in Figure 3.20 the little circles11 associated with the constrained edges are depicted. Figure 3.21 displays the smallest circles related to the edges after partitioning. One may notice that the partition used has resulted in circles enclosing one point at the most. In other words, the so-created subdivision is composed of Delaunay admissible edges only. Figure 3.22 presents the triangulation derived from this subdivision. The next example illustrates the edge swapping method (randomly applied). Using the same example, Figure 3.23, we show the box triangulation completed when inserting the endpoints of the constrained edges. Afterwards, in Figure 3.24, we show the triangulation of the same box created by using edge swaps to recover the missing edges.
Figure 3.19: The set of edges composing the given constraint.
Figure 3.20: The little circles related to the edges of the constraint.
Figure 3.21: The little circles after partition.
Figure 3.22: The triangulation now preserves the subdivided edges.
Figure 3.23: The triangulation of the set of edge endpoints.
Figure 3.24: The constrained triangulation obtained by edge swapping.
Figure 3.25: The constrained triangulation obtained by edge swapping (example 2, the edges of the constraint are randomly created).
The final example in two dimensions consists of 5,000 points randomly created and includes a set of 1,000 edges that constitute the given constraint. These edges have been randomly constructed as well. Figure 3.25 displays the resulting triangulation. Examples in three dimensions. A few examples in three dimensions are now presented. In fact, we show the geometries under consideration. They give a rough idea of the constraints that must be satisfied. Indeed, the surface mesh of these domains defines the set of constrained edges and faces. For each case, we report the statistics about the entities of the given set of constraints missing after their endpoints have been inserted in the triangulation.
Figure 3.26: Case 1.
Figure 3.27: Case 2 (GIAT-EDF).

            nac      nfc      na      nf
  case 1    1,440    960      0       0
  case 2    2,394    1,596    210     347
  case 3    4,134    2,756    342     662
  case 4    5,991    3,994    991     1,729

Table 3.1: Number of constraints to deal with.

¹¹ The little circle related to a given edge is the smallest circle passing through the edge endpoints. Thus, its diameter is nothing else than the edge.
Figure 3.28: Case 3 (MDTV).
Figure 3.29: Case 4 (DA).
nac and nfc denote the numbers of edges and faces of the constraint; na and nf are the numbers of edges and faces that are missing when the triangulation of the corresponding vertices is completed. One may notice that the constraint in the first example is Delaunay admissible, since na = 0 as well as nf.
3.9
Applications
The potential applications of the proposed triangulation method are quite numerous and may be of interest to many different fields. Hereafter some typical applications are given. Domain meshing. As mentioned in the previous chapter, this is the aim of this work. The next chapters will discuss this application in detail. It will point out how to use a triangulation method and a constrained triangulation method so as to develop meshing algorithms suitable for arbitrarily shaped domains. These arbitrary domains are described by their boundary discretization that forms the set of constraints to be enforced. Dynamic constraint. A quite interesting application of constrained triangulations is to construct a non-constrained triangulation in which some constraints are introduced dynamically and then removed at another step. In this case, the constraints are said to be dynamic.
Triangulation of a domain following its boundary with a given threshold ε. Considering a two-dimensional case, our goal is to construct a triangulation (or, more precisely, a mesh) of a polygon whose boundary is given, in such a way as to ensure that the distance from a boundary edge to the boundary (curve) is bounded by a given threshold ε. This means that the boundary curvature must be checked and respected. We consider a boundary discretization, not necessarily fine enough, i.e. such that the desired property is not satisfied everywhere. The problem then becomes that of determining the regions where a violation has occurred and of locally remeshing these regions by enforcing new edges. Then, we return to the problem of constrained triangulation construction, i.e. we still have to enforce some constraints in a given triangulation. A different application of this problem is to achieve triangulations that are compatible with finite elements of order greater than one, allowing these elements to be properly defined at a later stage.
3.10
Notes
Regardless of the technique used to enforce the constraints in a triangulation, constructing such a constrained triangulation is an almost trivial task in two dimensions. The same problem is still widely open in three dimensions and in higher dimensions. At first, one can imagine that this issue is related to the way in which the triangulation is constructed, say a Delaunay-type method. Conversely, one can think that a different method for triangulation construction, such as an advancing-front technique, overcomes the difficulty. This feeling is obviously unsatisfactory. Indeed, an advancing-front method, starting from the given constraints, results in elements including necessarily these constraints but it leads to a convergence problem. We consider this to be quite analogous to the problem where Steiner points must be used so as to permit the desired convergence. Along with the methods discussed in this chapter, a very different method described in [Coupez-1991] is briefly presented. It consists of constructing a constrained triangulation of a domain starting from an initial triangulation which is, on the one hand, valid in terms of topology, and, on the other hand, wrong in terms of geometry. The key-idea is a clever use of local modifications which enables us to regenerate a valid (conformal) triangulation by optimizing the current triangulation. To get an idea about this method, let us consider a discretized boundary (a set of constrained
edges) in two dimensions. A first triangulation is formed by joining one of the given points with all of the others. This results in a topologically valid triangulation, while the latter can include overlapping elements as well as elements with a negative or null surface area. A function defined as the sum of the absolute values of the element surface areas is then optimized. This process is achieved by edge swapping and by local remeshing applied in some quite simple patterns (including the possible creation of points). The convergence is completed when a valid triangulation exists which contains the given edges. This method extends to three dimensions and can be used to design algorithms for edge or face enforcement in a given triangulation. Thus, this is an alternative method as compared with the methods depicted in this chapter.

Moreover, a theoretical work, [Chazelle,Palios-1990], discusses non-convex polyhedron partitioning in terms of a small number of convex polyhedra. The main result concerns the partitioning of a polyhedron having n vertices and r reflex edges¹² into a number of tetrahedra in the range of n + r². In addition, an example of a polyhedron, having n vertices, which results in the creation of a number of Steiner points in the range of n² is exhibited in [Chazelle-1984], the above limit being an upper bound. We find again one of the previously mentioned difficulties related to the Steiner points. How many Steiner points are strictly required to make the triangulation of a given polyhedron possible is still an open question. In this vein, [Ruppert,Seidel-1992] discuss the complexity of the triangulation construction for a non-convex polyhedron and show that this problem is, in general, NP-complete, even for star-shaped polyhedra. This paper also discusses how many Steiner points are needed to achieve such a triangulation. The above material leads us to believe that the problem of constructing a constrained triangulation, in three dimensions, can only be solved by heuristics (as claimed earlier in this chapter).
¹² An edge is said to be reflex if the dihedral angle formed between the two faces sharing this edge is greater than π.
Chapter 4
Anisotropic triangulation

4.1
Introduction
The construction of governed triangulations (and thus governed meshes), either isotropic (when the control concerns the size requirement at any space location1) or anisotropic (non-isotropic) (when the control concerns both the desired size and directionality) is a relatively recent field of interest motivated by different requests. Among the possible applications are finite element computations for problems where the expected solutions vary very rapidly and/or have directional features. To give some keys about the anisotropic context, one may consider flow problems with shocks or boundary layers, in fluid mechanics. Another type of problem, in solid mechanics, provides interesting applications for isotropic triangulations with size constraints or for anisotropic triangulations. Parametric surface meshing (see Chapter 6) can also be developed by means of anisotropic mesh generation algorithms applied in the parametric space. In a more general context (as will be seen in Chapter 9), adaptation problems naturally lead to the construction of triangulations (or more precisely of meshes) subjected to size specification (i.e. with an isotropic control) or to size and direction prescriptions (anisotropic control). In fact, this control is a way to govern the mesh construction as part of an iterative adaptation process so as to conform to the physical features of the given problem. Up to now, such triangulations were constructed almost manually using the expertise of the engineers or following some heuristics. Thus, the construction was more or less heuristic, based on local transformations or opti1 One application example of a mesh subjected to a size criterion is provided by bathymetric meshes. Such meshes are used in water simulations with two-dimensional elements whose size is governed by a third dimension, namely the depth.
mization procedures (see, for instance, [Peraire et al. 1987], [Lohner-1989] and [Vallet-1990] for anisotropic construction schemes). In this chapter, we aim at extending the Delaunay incremental method, as described in Chapter 2, so as to propose a systematic approach for isotropic size-conforming or anisotropic triangulation construction. Let us consider the space R^d with d = 2 or d = 3. We assume that a set or a cloud of points, denoted as S, is provided, the latter being properly located. We would like to define the environment suitable for the desired extension. The fundamental issue, as will be seen, consists in defining a metric everywhere in the space, this metric indicating both the directions to follow and the expected sizes with respect to these directions. Thus, the aim of a governed isotropic or anisotropic triangulation algorithm is to construct elements conforming to a given size map or to both the given directions and the related sizes. Regarding these anisotropic constraints, the optimal element is no longer a classical Euclidean equilateral triangle, in two dimensions. This chapter will first review some key notions related to the concept of a metric. Then, it will consider the classical triangulation case (i.e. the Delaunay kernel as described in Chapter 1) interpreted in terms of a metric. Finally, the classical case will be extended to the present context (in particular regarding the classical Delaunay kernel). We will show indeed that the only requirement is to replace the Euclidean structure implicitly used in the classical situation by a Riemannian structure². Nevertheless, from a computational point of view, it will be seen that these materials are not directly usable, requiring us to develop discrete approaches so as to propose an effective Delaunay-type algorithm suitable for anisotropic triangulation construction.
4.2
Notion of a metric
As described in Chapter 2, the Delaunay triangulation method relies on a simple construction, referred to as the Delaunay kernel, involving the so-called Delaunay measure. This construction is based on a proximity criterion and thus relies on distance comparisons. For this reason, we will now focus on the notion of distance. 'Bernhart Riemann, 1826-1866.
4.2.1
Metrics and distances
Let Ω (R^d or Conv(S) or Box(S)) be the considered domain and let X be an arbitrary point in Ω. Let us assume that a metric, or a metric tensor, is specified at any point X in Ω, consisting in practice of a d × d symmetric positive definite matrix M(X). In two dimensions, such a matrix is defined by

    M(X) = ( a_X  b_X )
           ( b_X  c_X )

with a_X > 0, c_X > 0 and a_X c_X − b_X² > 0. The field (M(X))_{X∈Ω} is assumed to be continuous. A Riemannian structure is thus obtained in Ω. This field, along with this structure, is denoted by (Ω, (M(X))_{X∈Ω}). If M(X) is independent of X, the problem is reduced to the regular Euclidean case. In the general case, we will define the notion of a distance, and thus that of the proximity used to find the cavity C_P involved in the Delaunay kernel, by discussing a way to design a constructive method. To this end, we first recall the notion of a distance in such a space. Let A and B be two points in (Ω, (M(X))_{X∈Ω}) and M be a point lying on the geodesic³, denoted as Γ, joining A and B. Let γ(t) be a parametrization, at least of class C¹, of Γ (t ranging from 0 to 1) such that γ(0) = A and γ(1) = B. Then d_M(A, B), the distance between A and B, is l(Γ), the length of Γ, defined by
    d_M(A, B) = l(Γ) = ∫₀¹ √( ᵗγ′(t) . M(γ(t)) . γ′(t) ) dt .     (4.1)
In practice, this relation cannot be used because γ(t) is quite difficult to define. On the other hand, if the metric is region independent, then the Riemannian space is nothing more than a Euclidean space. The geodesics are then reduced to straight segments, and γ(t) can be expressed as A + t·AB, t ranging from 0 to 1. Then,

    d_M(A, B) = ∫₀¹ √( ᵗAB . M . AB ) dt ,     (4.2)

where M is the defining metric of the Euclidean space. Thus, we have

    d_M(A, B) = √( ᵗAB . M . AB ) .     (4.3)
³ We assume this notion to be well defined in our context (a rigorous proof of such a fact being beyond the scope of this text).
Consequently, the distance between A and B is simply given by

    d_M(A, B) = ||AB||_M ,

with

    ||u||_M = √( ⟨u, u⟩_M )   and   ⟨u, v⟩_M = ᵗu . M . v ,

where u and v stand for two vectors.
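To make these formulas concrete, here is a small numerical sketch (ours, not the book's): it evaluates the Euclidean-metric distance of relation (4.3) and approximates the Riemannian length (4.1) along the segment AB by sampling the metric field; the sampling rule and the example metric are illustrative assumptions.

```python
import numpy as np

def distance_in_metric(A, B, M):
    """Relation (4.3): d_M(A,B) = sqrt( (B-A)^T M (B-A) ) for a constant
    symmetric positive definite matrix M."""
    AB = np.asarray(B, float) - np.asarray(A, float)
    return float(np.sqrt(AB @ M @ AB))

def length_riemannian(A, B, metric_at, n=20):
    """Crude approximation of relation (4.1), the geodesic being replaced by
    the straight segment AB: the integral is evaluated by sampling the
    metric field metric_at(X) at the midpoints of n sub-segments."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    AB = B - A
    length = 0.0
    for k in range(n):
        t = (k + 0.5) / n                       # midpoint of the k-th sub-segment
        M = metric_at(A + t * AB)
        length += np.sqrt((AB / n) @ M @ (AB / n))
    return float(length)

# Example: a metric prescribing size 0.1 along x and 1 along y
# (a unit length in the metric corresponds to these sizes).
M = np.diag([1 / 0.1**2, 1 / 1.0**2])
print(distance_in_metric((0, 0), (1, 0), M))           # 10.0: ten segments of size 0.1 fit
print(length_riemannian((0, 0), (1, 0), lambda X: M))  # same value for a constant field
```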
4.2.2
Multiple metrics
In the following, we assume that only one metric is specified. As there are some applications (as will be seen later) where several metrics are associated with each point, we have to define a way so as to return to the case where only one metric is given. Before doing this, we assume that the various metrics are compatible4 with each other with respect to range limits (see Chapter 12). Two methods for dealing with a multiple metric case can be envisioned. The first method assumes that each metric is comparable to the others, in terms of its behavior. This method, referred to as the metric intersection method, relies on the simultaneous reduction of the quadratic forms related to the metrics. The second approach induces that one of the considered metrics plays a peculiar role. Metric intersections. In the case where several metric maps are specified simultaneously (as when adapted meshes must be created so as to conform to several criteria), we propose a method resulting in the merging of these maps so as to obtain only one map. Let us consider a point in the space at which two different metrics are specified. We need to determine a unique metric in accordance with the two given ones. For sake of simplicity, we consider the two-dimensional case while observing that the relations that will be established apply in three dimensions (a circle being replaced by a sphere, an ellipse by an ellipsoid and the indices ranging up to three). 4
In a finite element application, the metrics result from the solution analysis, in terms of error estimates. Several metrics are obtained at the same point when several variables are present. The range of the variables as well as their possible relationships can then induce a wide variation in the metric specifications. There is a problem of adimensionality that must be considered so as to ensure a reasonable compatibility.
Let us consider the unit circles (which are ellipses) associated with the two initial metrics. The desired solution is then the metric associated with the intersection of these two ellipses. As, in general, the result is not an ellipse, we consider the largest ellipse that fits in this intersection region. It defines a metric referred to as the intersection metric. The simultaneous reduction⁵ of the two quadratic forms corresponding to the two metrics leads to the definition of the intersection metric related to the two initial metrics. Indeed, if M1 and M2 are two metrics, the two corresponding unit circles can be expressed, in the base associated with the simultaneous reduction of the matrices M1 and M2, as

    ᵗX M1 X = λ1 x² + λ2 y² = 1   and   ᵗX M2 X = μ1 x² + μ2 y² = 1 ;     (4.4)

the intersection metric M1 ∩ M2 is then defined by

    M1 ∩ M2 = ᵗP⁻¹ diag( max(λ1, μ1), max(λ2, μ2) ) P⁻¹ ,

where P is the matrix mapping the canonical base to that associated with the simultaneous reduction of the two metrics. Figure 4.1 depicts the metric intersection of two given metrics. When several metrics (M_i) are specified at the same point, the intersections are performed pairwise, one metric at a time.

Exercise 4.1. Prove that M1 ∩ M2 defines a metric (Hint: check that the relevant properties hold).

⁵ The simultaneous reduction of two positive quadratic forms is possible as soon as one of them is definite, meaning that the matrix associated with the bilinear form is invertible. Let M1 and M2 be the matrices associated with two metrics. If M1 is supposed invertible, then the matrix N = M1⁻¹ M2 is M1-symmetric and thus diagonalisable in R². Let (e1, e2) be the eigenvectors of N, defining a base of R²; then

    ᵗe1 M1 e2 = ᵗe1 M2 e2 = 0 .

If X = x1 e1 + x2 e2 is an arbitrary vector of R² and if λ_i = ᵗe_i M1 e_i, i = 1, 2, and μ_i = ᵗe_i M2 e_i, i = 1, 2, then we have

    ᵗX M1 X = λ1 x1² + λ2 x2²   and   ᵗX M2 X = μ1 x1² + μ2 x2² .
Figure 4.1: Metric intersection based on the reduction.

Exercise 4.2. Is the intersection scheme an associative or a commutative scheme?
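A possible numerical realization of this intersection is sketched below (an illustration based on the simultaneous reduction recalled above, not the book's code; the reduction is carried out through a Cholesky factor of M1 so that only symmetric eigenproblems are solved, which is an implementation choice of ours).

```python
import numpy as np

def metric_intersection(M1, M2):
    """Largest-ellipse 'intersection' of two SPD metrics via their
    simultaneous reduction."""
    M1 = np.asarray(M1, float)
    M2 = np.asarray(M2, float)
    L = np.linalg.cholesky(M1)              # M1 = L L^T
    Linv = np.linalg.inv(L)
    C = Linv @ M2 @ Linv.T                  # symmetric; its eigenvalues are the mu_i
    mu, Q = np.linalg.eigh(C)
    P = Linv.T @ Q                          # P^T M1 P = I,  P^T M2 P = diag(mu)
    D = np.diag(np.maximum(1.0, mu))        # keep the larger coefficient per direction
    Pinv = np.linalg.inv(P)
    return Pinv.T @ D @ Pinv

# Example: two anisotropic 2D metrics (sizes 0.1/1 and 1/0.2 along x/y).
M1 = np.diag([100.0, 1.0])
M2 = np.diag([1.0, 25.0])
print(metric_intersection(M1, M2))          # ~ diag(100, 25): dominates both metrics
```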
A different method. The above method relies on finding a maximal ellipse included in the intersection region of the initial ellipses. This requirement does not preserve, in any way, the directions of one or the other of the given metrics. As this property can be of great interest⁶, we would like to propose a different method resulting in a metric whose directions are identical to those of one of the initial metrics. Then, a maximal ellipse with particular directions will be found. In the case depicted in Figure 4.2, the directions specified in metric M1 are preferred. The intersection metric M1 ∩ M2 is then obtained by keeping the eigenvectors of M1 and by retaining, in each of these directions, the larger of the coefficients induced by M1 and by M2, if one wants to preserve the shape of the metric M1. To conclude, we assume in the following that only one metric map is defined everywhere in the space.

⁶ For instance, when triangulating some surfaces.
Figure 4.2: Metric intersection preserving the directionality of one of the two metrics. The metric M1 ∩ M2 keeps the directionality of the metric M1.
4.3
Incremental method
The incremental method, and more precisely the Delaunay kernel, relies on distance evaluations or proximity computations (cf. Chapter 2). By constructing, using the Delaunay measure, a cavity satisfying the proximity criterion implicitly involved in this measure, the resulting cavity is valid. Thus, a valid triangulation can be obtained in this way. In other words, the proximity criterion ensures that a visibility criterion holds. The point is now to show that by applying the relation

    T_{i+1} = T_i − C_P + B_P ,

which leads to a triangulation including a new point P (the (i+1)-st point of a cloud), we obtain, similarly to the classical situation (Chapter 2), a valid result. This means that, starting from T_i, the triangulation whose element vertices are the first i points of the given cloud, it is possible, using this scheme, to construct T_{i+1}, the triangulation having P as a vertex. This requires that we prove that an adequate construction of C_P, the cavity of P, results in a valid ball B_P. To this end, one has to establish that C_P is a star-shaped region with respect to P. Let us recall that C_P is defined by adjacency, starting from the base associated with point P, while some criteria are satisfied, relying on distance evaluations and comparisons (i.e. involving the metrics). To start with, we will discuss the Euclidean case before dealing with
the general Riemannian situation.
4.3.1
Euclidean space
In this case, the metric map is independent of the location in space. Then M(γ_AB(t)) = M holds if γ_AB(t) is a parametrization of the geodesic joining A and B, the latter being a straight segment. Thus, the distance between A and B is

    d_M(A, B) = √( ᵗAB . M . AB ) ,

and the Delaunay measure, used, when inserting a point P, to decide whether a triangle K (in a two-dimensional case) must be removed and then put into the cavity,

    α(P, K) = d(P, O_K) / r_K

(d being the usual distance), with O_K the center of the circumcircle corresponding to K and r_K the radius of this circle, is actually replaced by

    α_M(P, K) = d_M(P, O_K) / r_K ,

where, now,

• O_K stands for the point equidistant, in the sense of d_M, from the vertices of K. This means that this point is the center of the circumcircle of K according to the Euclidean space defined by M (in general, this circle is an ellipse in the usual Euclidean space) and,
• r_K is the radius of this circle according to the Euclidean space defined by M.
The matrix can simply be written as
Then, for every pair of points A and B,
4.3. INCREMENTAL
METHOD
121
Thus, the circumcircle corresponding to K according to the metric M. is nothing more than the circumcircle of this element in a usual metric expanded by a factor of ^J~a. Thus, we have
As a remark, it can be noted that since a is a constant value, the isotropic case with size specifications (that will be discussed later) is similar to the previous situation. This means that the circumcircle associated with a given triangle is only a function of the locations of the vertices of this element and, more specifically, is not related to a given size, if any. Hence the construction does not involve M.. Anisotropic case. The matrix for the anisotropic case is written as
A variable change (a rotation coupled with a mono-directional scaling) exists such that this matrix is diagonal with identical diagonal coefficients. Thus the above situation is met again. One may notice that the construction remains unchanged if any scaling factor is applied. If a, 6 and c are multiplied by a factor of fc, the same result holds. Exercise 4.3. In the anisotropic case, give the center of the circumcircle corresponding to a triangle A', in accordance with a given metric.
4.3.2
Riemannian space
In such a case, the metrics vary from one location to the other. In dimension d, for a given simplex K = (P1, P2, ..., P_{d+1}), it is tempting to define the point O_K equidistant to P1, P2, ..., P_{d+1}, if such a point exists, as the solution of the following system:

    d_M(O_K, P1) = d_M(O_K, P2) = ··· = d_M(O_K, P_{d+1}) .     (4.9)
Since, in general, it is rather tedious to explicitly develop the equations corresponding to the geodesics involved in this system, it is not possible to find O_K in practice. Moreover, it is quite difficult to prove the existence of such a point. Thus, the Delaunay measure corresponding to K and a given point P cannot be defined properly. In other words, a relationship like

    α_M(P, K) = d_M(P, O_K) / r_K < 1 ,     (4.10)
required to write the Delaunay kernel, is not achieved. One must be convinced that a constructive method is not properly defined in the general case, requiring that approximate solutions be investigated. Specifically, the geodesics involved in the expression are not well defined. In what follows, we will present a simplified form of the general formulation above before proposing different approximate solutions.

Simplified form of the general case. The proximity criterion uses O_K, the point solution of System (4.9), and, via Relation (4.10), indicates that P is closer to O_K than to any other vertex of K. Since we cannot solve this system, it is useless to pursue this approach further.

Remark. It would be interesting to look at this triangulation construction problem in a Riemannian space by means of the corresponding Voronoi diagram, using duality to complete the (affine) triangulation associated with this diagram. If the Voronoi cells are properly defined, this problem is nothing more than proving that joining the centroids of two cells sharing a face results in the expected triangulation.

In a first attempt, we particularize System (4.9) so as to allow us to find a point O_K. To this end, the distance evaluation d_M is simplified. Let d_{M(X)}(A, B) be the distance between A and B evaluated in the metric M(X) attached to a given point X. Then one has the approximate evaluations (4.11) or (4.12). Thus, the point O_K, solution of the resulting (approximate) system, is an a priori suitable approximation of the solution. Nevertheless, as above, this non-linear system is quite tedious to solve and, a fortiori, Relation (4.10) is quite difficult to deal with.
This leads us to introduce some more simplified approximate solutions. This is the aim of the next sections.
4.3.3
Discrete approximations in two dimensions.
The key idea is to return to a Euclidean case in which all the difficulties related to a Riemannian context are removed. In two dimensions, we propose three approximate solutions.

A first approximation. To go back to a Euclidean case, we locally approach the Riemannian space by a Euclidean space defined by the metric of only one point. The point P, the point under insertion, is a natural candidate. Then, the linear system

    d_{M(P)}(O_K, P1) = d_{M(P)}(O_K, P2) = d_{M(P)}(O_K, P3)

leads to finding O_K, and the relation

    d_{M(P)}(P, O_K) < r_K ,

where r_K = d_{M(P)}(O_K, P1), expresses the proximity criterion. By means of the Delaunay measure, it means that

    α_{M(P)}(P, K) = d_{M(P)}(P, O_K) / r_K < 1 .     (4.15)
Notice that this value provides an evaluation of the element K during the insertion of the point P according to M(P), the metric of P.

Proposition 4.1. The relationship α_{M(P)}(P, K) < 1 results in a valid Delaunay kernel.

Proof. The cavity is initialized with the base of P, the latter being obviously star-shaped with respect to P. Then, this cavity is enriched by adjacency. Given a star-shaped cavity including a certain number of elements, we simply have to prove that adding one element, using the above criterion, preserves the star-shapedness of the resulting cavity. Every point in the ellipse α_{M(P)}(P, K) = 1 is visible from P, and the edge (denoted by a in Figure 4.3, where the triangle being processed is depicted by a continuous line, the triangle in dotted lines being already in the cavity; the ellipse circumscribing the triangle K has been evaluated using the metric of P), common to the current cavity and the element K in question, separates⁷ the ellipse into two non-connected components. Thus, P is visible from the two other edges of K, edges that will be part of the new cavity. □
⁷ This property will not be satisfied in three dimensions.
Figure 4.3: To prove Proposition 4.1.

A second approximation. A second, almost natural, choice (Figure 4.4) consists of defining a Delaunay measure relative to the triangle K and the point P by combining two Delaunay measures of the previous type into one measure. Hence, two metrics are used: that of the point P, the point to be inserted, and that of a point, denoted as P1, which is the vertex of the triangle K under examination not yet included in the cavity (obviously, the two other vertices are included in this cavity). The Delaunay criterion can then be written as

    α_{M(P)}(P, K) + α_{M(P1)}(P, K) < 2 .     (4.16)

This choice is clearly better than the previous one, as the point P1 is involved and as, if the triangle K is selected, the edge PP1 will be constructed. Obviously, this solution is more expensive than the previous one, which required only one Delaunay measure. Moreover, it may be observed that each Delaunay measure requires the computation of a center O_K, which differs from one measure to the other. To close the discussion, we notice that the C_P cavity construction ensures a star-shaped region with respect to P. This can be summarized by the following proposition.

Proposition 4.2. The relationship α_{M(P)}(P, K) + α_{M(P1)}(P, K) < 2 results in a valid Delaunay kernel.

Proof. The proof relies on a rather obvious remark. Indeed, the resulting cavity will be star-shaped if one of the two ellipses involved in
the construction encloses the point P, because it is not possible to have α_{M(P)}(P, K) > 1 and α_{M(P1)}(P, K) > 1 simultaneously. As a consequence, the proof of Proposition 4.1 can be invoked. □

Figure 4.4: To prove Proposition 4.2.

Figure 4.4 displays a point P and the ellipses E_P and E_{P1} circumscribing the given triangle K according to the metrics of P and of P1. In the depicted case, only E_P encloses the point P.

A third approximation. The last solution consists of involving the metrics of the four points concerned when inserting the point P and analyzing the triangle K, with vertices P1, P2 and P3. The proximity criterion is then

    α_{M(P)}(P, K) + α_{M(P1)}(P, K) + α_{M(P2)}(P, K) + α_{M(P3)}(P, K) < 4 .     (4.17)
Proposition 4.3. This approximation results in a valid Delaunay kernel.

Proof. Similar to that of Proposition 4.2. □

This choice calls for the same comments as before. Notice also that four centers are involved in the scheme.
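As an illustration of how these three criteria differ only in the number of metric evaluations, the following sketch (ours; the helper `alpha` is assumed to be a Delaunay-measure routine such as the one sketched earlier) selects a triangle for the cavity according to the chosen approximation:

```python
def in_cavity(P, K, metric_at, alpha, approximation=1):
    """Cavity criterion for inserting P; K = (P1, P2, P3) is the triangle.
    alpha(P, K, M) is the Delaunay measure evaluated in the metric M;
    metric_at(X) returns the metric attached to the point X.
    approximation = 1, 2 or 3 corresponds to relations (4.15)-(4.17)."""
    if approximation == 1:                       # metric of P only
        return alpha(P, K, metric_at(P)) < 1.0
    if approximation == 2:                       # P and the vertex not yet in the cavity
        P1 = K[0]                                # assumption: K[0] plays the role of P1
        return alpha(P, K, metric_at(P)) + alpha(P, K, metric_at(P1)) < 2.0
    # approximation == 3: P and the three vertices of K
    s = alpha(P, K, metric_at(P)) + sum(alpha(P, K, metric_at(V)) for V in K)
    return s < 4.0
```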
4.3.4
Discrete approximations in three dimensions
Formally speaking, we again find the same type of approximations. The first approximation uses the point P under insertion. In addition to P, the
second one takes into account the vertex of the tetrahedron K not already in the cavity (meaning that the neighbor through the face opposite to this vertex of K is already in the cavity). Finally, the last choice involves the five points concerned, i.e. P and the four vertices of the tetrahedron K.

The first approximation. In this case, a solution similar to that of the two-dimensional situation is retrieved. Nevertheless, this solution may result in a non-valid triangulation. The proof used in two dimensions (while proving Proposition 4.1) does not extend to three dimensions. This negative feature is due to the fact that a path may exist which joins the point P and a face of the examined tetrahedron without intersecting the face of the same element common to an element already in the cavity, this path being strictly included in the circumball of K. This means that the separation property which held in two dimensions is no longer satisfied. To make this point clear, we give three lemmas related to this situation.

Lemma 4.1. Let f be the edge of K shared with an adjacent element which is already in the cavity. Then the point P is located in the intersection of the circumdisc of K and the half-plane, bounded by the line supporting f, not containing the vertex of K opposite to f.

Figure 4.5 illustrates this property.
Figure 4.5: "Necessary" location of the point P.

Proof. We just have to remark that the edges of the triangle K partition its circumdisc into four regions whose interiors are disjoint. □
Thus, this implies that the two edges of K other than f are visible from P. This property does not extend to three dimensions, because of the following lemma.

Lemma 4.2. Let f be the face of K common to an adjacent element already in the cavity. Then a path joining the point P and the vertex of K opposite to f may exist such that the face f is not intersected.

The example in Figure 4.6 illustrates this situation.
Figure 4.6: There is a path from P to the voyeur of f in K which does not intersect this face.

Then the three faces of K other than f are not necessarily visible from P. Thus, if no neighbor of K other than those already in the cavity (i.e. the element sharing f) is added to the cavity, the latter will not be star-shaped with respect to P. In a classical case, this situation is never encountered (according to the theory). Indeed, we have the following lemma.

Lemma 4.3. We assume that K belongs to the cavity associated with the point P and that there exists a face g of K which is not visible from P. Then the ball circumscribing the element adjacent to K through g contains P.

Exercise 4.4. Prove this lemma by using the results of Chapter 2.

As a corollary, this element will be added to the cavity and the boundary faces of the latter are then necessarily visible from P. Thus, the cavity will be star-shaped with respect to P. When the problem is approached using one metric, Lemma 4.3 does not apply, because it is possible to obtain a cavity which is not star-shaped.
This is due to the fact that the elements are not Delaunay in the given metric. Thus, to ensure a valid construction, we advocate the use of the correction algorithm proposed in Chapter 2. In this way, the cavity is explicitly checked and the construction is necessarily valid.

The other approximations. The solutions proposed in two dimensions extend to three dimensions. The success of the process is guaranteed in the same way (by means of the cavity correction algorithm).

Exercise 4.5. Examine the case where the sum of several measures is replaced by their product (as compared with 1).
4.4
Computational aspects
Most of the aspects of the algorithms described in Chapter 2 for the construction of classical triangulations are still valid for anisotropic constructions (and obviously directly extendable to isotropic constructions with prescribed sizes). Nevertheless, some remarks can be made.

Correcting the cavity. The cavity correction algorithm, as seen in Chapter 2, is rather useful, specifically in three dimensions. This is the sole way to ensure a posteriori that the cavity is star-shaped with respect to the inserted point (a minimal version of this check is sketched below).

About inheritance. As seen in Chapter 2, the triangulation constructed by inserting a new point inherits, at least to some extent, from the former triangulation. This inheritance has two aspects. First, when updating the vicinity relationships, the new relationships are relatively simple to deduce from the old ones. Second, updating the information related to the balls associated with the elements (centers and circumradii) is inexpensive, by transporting the former values. For an anisotropic triangulation, the mesh topology aspect, i.e. the vicinity relationships, remains quite useful. On the other hand, it is not necessary to store the ellipses (assuming a two-dimensional case) corresponding to the triangles. Indeed, an ellipse involved when constructing the cavity corresponding to a point is not defined as the ellipse associated with an element but as one related to the points used to evaluate the selected metrics.
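For completeness, here is a small sketch (an illustration under our own assumptions, not the book's algorithm) of the a posteriori check mentioned above: a cavity is accepted only if every boundary face of the cavity forms a positively oriented tetrahedron with the inserted point; offending elements would then be removed from the cavity by the correction procedure of Chapter 2.

```python
import numpy as np

def signed_volume(A, B, C, P):
    """6 x the signed volume of the tetrahedron (A, B, C, P)."""
    A, B, C, P = (np.asarray(X, float) for X in (A, B, C, P))
    return float(np.linalg.det(np.array([B - A, C - A, P - A])))

def cavity_is_star_shaped(P, boundary_faces, eps=1e-12):
    """Check that the point P sees every boundary face of its cavity.
    boundary_faces: list of vertex triples, each oriented towards P
    (i.e. towards the interior of the cavity)."""
    return all(signed_volume(A, B, C, P) > eps for (A, B, C) in boundary_faces)

# Toy usage: the boundary of a one-tetrahedron cavity, seen from a point
# placed inside it (all four faces are visible).
faces = [((0, 0, 0), (1, 0, 0), (0, 1, 0)),
         ((0, 0, 0), (0, 0, 1), (1, 0, 0)),
         ((0, 0, 0), (0, 1, 0), (0, 0, 1)),
         ((1, 0, 0), (0, 0, 1), (0, 1, 0))]
print(cavity_is_star_shaped((0.1, 0.1, 0.1), faces))   # True
```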
Required data structures. Throughout this section, we briefly discuss some points regarding the "tables"⁸ required to effectively implement the different phases discussed in this chapter. Other resources, due to programming constraints, will also be needed, which are not mentioned here.

• Information associated with an element: an element table is obviously strictly required. A triple (quadruple) is associated with each element; this table contains the vertex numbering. Meanwhile, a table is defined to store the neighbouring relationships (in terms of edge or face adjacencies according to the spatial dimension). This table is quite similar to the previous one. This requires

    2 × (d + 1) × nemax values,

where nemax is the maximal number of elements allowed.

• Information associated with a vertex: the vertices are known through their d coordinates. With each vertex is associated a d × d matrix representing the metric, i.e.

    (d + d²) × npmax values,

where npmax stands for the maximal number of vertices allowed.

• Other information: we basically require the same resources as in the classical case (cf. Chapter 2).
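As a rough illustration of these storage requirements (array shapes only; the names and sizes are ours, not the book's):

```python
import numpy as np

d = 3                      # spatial dimension
nemax = 200_000            # maximal number of elements allowed
npmax = 40_000             # maximal number of vertices allowed

# Element table: (d+1) vertex indices per element plus (d+1) neighbours
# (one per face), i.e. 2 x (d+1) x nemax integer values in total.
elem_vertices   = np.zeros((nemax, d + 1), dtype=np.int32)
elem_neighbours = np.zeros((nemax, d + 1), dtype=np.int32)

# Vertex table: d coordinates and a d x d metric per vertex,
# i.e. (d + d*d) x npmax floating-point values in total.  (A symmetric
# metric could be stored with d(d+1)/2 values; the count follows the text.)
vertex_coords  = np.zeros((npmax, d))
vertex_metrics = np.zeros((npmax, d, d))

total_int   = elem_vertices.size + elem_neighbours.size        # 2*(d+1)*nemax
total_float = vertex_coords.size + vertex_metrics.size         # (d+d*d)*npmax
print(total_int, total_float)
```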
4.5
Some results
We refer the reader to the next chapters. Indeed, it is hard to give examples at this stage, since giving a purely academic example will not be interesting, while exhibiting a realistic example requires a background which has not yet been fully provided at this time.
4.6
Applications
Domain meshing. Chapters 5 and 7 discuss the use of governed triangulation methods (isotropic or anisotropic) in order to obtain a mesh of a domain in R² or R³.

⁸ Cf. supra about the meaning of the notion of a table.
Parametric surface meshing. Chapter 6 shows how to use the method for anisotropic triangulation construction when a mesh of a parametric surface is needed. The main idea is to develop an anisotropic mesh governed by the intrinsic properties of the given surface. This construction is achieved in the parametric space and then mapped onto the surface.

Image processing. The techniques that make the generation of anisotropic meshes possible can be applied in such a way as to segment an image or to extract some characteristics from it.
4.7
Notes
Isotropic triangulation construction with prescribed sizes, as well as anisotropic triangulation construction using a direct method (as opposed to the methods using local modifications applied to a classical triangulation), have also been investigated using other approaches, see [Peraire et al. 1987] and [Lohner-1989].

It would be of interest to try to replace the classical Delaunay measure α(P, K) = d(P, O_K)/r_K by a different measure before developing the anisotropic extension. For instance, a measure of the same form, based on a different characteristic point of the element K, is one possible solution.
Chapter 5
Meshing in two dimensions

5.1
Introduction
We now consider the space R² and assume that a given domain, Ω, is known through its boundary discretization. The latter is a list of edges defining a constraint (in the sense of Chapter 3). The problem is to construct a mesh of Ω using this sole data. In fact, the resulting mesh will be a mesh of the polygonal approximation of Ω defined by the given discretization. This mesh is expected to be well suited for the envisaged application. In particular, for a finite element application, it is required to construct a mesh of Ω whose elements are as close to equilateral as possible (in the usual sense or following a definition that will be clarified later).

From a practical point of view, the data may include edges other than the boundary edges and, moreover, some internal or specified points. The given internal edges are added to the boundary edges so as to define the constraint. On the other hand, the specified points enrich the list of points formed by the endpoints of all the given edges, meaning that, after the point insertion process, the specified points will be mesh vertices without any specific treatment.

Several situations are discussed, leading to different ways of considering the mesh generation problem. In the first case, the so-called classical case, the sole data available is formed by the constrained edges (specifically, the edges defining the domain boundary) along with some additional internal points, if any. The second case, also called the governed isotropic case, corresponds to having a size map defined everywhere in the domain, which extends the previous type of (classical) data. This size specification is a way to define the desired size of the elements to be created. The last case, which is also referred to as the anisotropic case, assumes that a map is
specified, which includes both size and directional information.

In any case, the mesh generation methods include at first a creation stage resulting in a mesh without internal points¹. Such a mesh is referred to as the empty mesh. It can be achieved using the results and algorithms discussed in Chapters 2 and 3. From here, the methods differ in how the required internal points are created. These steps are the main concepts discussed in this chapter. The notion of a control space is introduced, which is a very natural way to drive the internal point creation.
5.2
The empty mesh construction
Definition 5.1. The empty mesh of a given domain Ω is a mesh whose vertices are the sole boundary points of this domain.

In principle, the mesh vertices are only the boundary points. Consequently, there is no point inside Ω (and this mesh is called the empty mesh). By extension, the term empty mesh can also be associated with a mesh containing the boundary points along with the (internal) points already known (if such data is provided, the corresponding points are expected to be element vertices). Since this mesh does not include any internal points, except any specified points, it is not generally well suited for computational purposes. Nevertheless, this is the first mesh we can construct to cover the domain (in addition, this mesh will be used to identify this domain). Moreover, the empty mesh can serve as a geometric background for the stage when the required field points are created. Also note that the Delaunay method is a rather elegant way to construct such a mesh (even empty). The existence of an empty mesh can be proved, in two dimensions, by Theorem 3.1 in Chapter 3.

Bounding box construction. The construction of a box enclosing all the points known at this stage (basically the boundary points of the domain under consideration) is made for the sake of simplicity. It does not limit the range of applicability of the mesh generation method. In this way, we return to a situation where the reduced incremental method can be used, as presented in Chapter 2. The bounding box is defined by the extrema of the given point coordinates. For instance, it can be chosen as a quadrilateral such that it encloses
¹ Except for any specified points.
these extrema. The bounding box is then divided into two triangles, the resulting mesh being the mesh denoted by T0 (one may notice in Figure 5.2 that a few points have been added to this box; this feature, while not strictly needed, improves the efficiency of the method).

Inserting the boundary points. The boundary points (actually, all the points known at this stage) are inserted in T0 by means of the reduced Delaunay kernel (used in its constrained extension, cf. Chapter 3):

    T_{i+1} = T_i − C_P + B_P ,

where P denotes the (i+1)-st point (i.e. P_{i+1}) of the set of known points (i = 0, 1, 2, ...), C_P is the cavity associated with P, and B_P is the ball of P. At the completion of these insertions, we have a mesh, T_Box, of the box enclosing the domain, whose element vertices are the P_i's and the few points added to define the box. This mesh completely covers the box and, therefore, is not a mesh of the domain Ω. To obtain the latter, we first have to check that the boundary discretization entities (edges in this two-dimensional case) are also mesh entities. Indeed, this is the only way to determine, by adjacency, whether a triangle belongs to the domain or to its exterior.
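A minimal sketch of this initialization step (the function name and the enlargement factor are our own choices, not the book's):

```python
import numpy as np

def initial_box_mesh(points, margin=0.05):
    """Build the enclosing box of a point set, slightly enlarged, and split it
    into two triangles: the mesh T0 in which the points are then inserted."""
    pts = np.asarray(points, float)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    pad = margin * (hi - lo).max()
    lo, hi = lo - pad, hi + pad
    corners = np.array([[lo[0], lo[1]],     # 0: bottom-left
                        [hi[0], lo[1]],     # 1: bottom-right
                        [hi[0], hi[1]],     # 2: top-right
                        [lo[0], hi[1]]])    # 3: top-left
    triangles = [(0, 1, 2), (0, 2, 3)]      # the two triangles of T0
    return corners, triangles

boundary_points = [(0.2, 0.1), (0.9, 0.4), (0.5, 0.8)]
corners, T0 = initial_box_mesh(boundary_points)
print(corners)
print(T0)
```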
Figure 5.1: A few edges are missing while their endpoints are all element vertices.
Boundary enforcement. In general, the mesh TBOX does not include as element edge all of the boundary edges given as input to the mesh genera-
134
CHAPTER 5. MESHING IN TWO
DIMENSIONS
tion scheme. This means that an edge whose endpoints are mesh vertices is not necessarily present in the current mesh. Nevertheless, in two dimensions, one may notice that this drawback is rarely encountered in a mesh constructed by means of the proposed method. The simple example in Figure 5.1 depicts a relevant case (in advance, we actually display the triangles interior to the domain). Thus, the materials described in Chapter 3 are used to retrieve the missing boundary edges (and, more generally, all the initial edges, i.e. the boundary edges and the internal edges, if any are specified). Starting from TBOX, Figure 5.2, we apply local modifications so as to achieve a new mesh Tgox which includes the given boundary discretization2 exactly. After removing the elements exterior to the domain by means of one of the algorithms proposed hereafter, we then obtain the empty mesh depicted in Figure 5.3.
Figure 5.2: Mesh of the box enclosing Figure 5.3: a domain. mesh.
Corresponding empty
Connected components. As the boundary edges defining the domain have been enforced, it is now possible to define the empty mesh related to this domain (Figure 5.3). More precisely, we determine the internal triangles and we define the different connected components of the domain. To this end, one can follow the following algorithm 1. Assign the value v = —1 to all elements of the box mesh 7gox and set c = 0 (in fact, c can be thought of a color); 2 We consider the case where the (boundary) constraint is exactly satisfied, according to the Definition 3.2.
5.2. THE EMPTY MESH CONSTRUCTION
135
2. Find an element having one vertex identical to one of the box corners and set it to the value v = c; 3. Visit the elements by means of edge neighborhood relationships: • if the color of the current element is not —1, GO TO 3 (the element has been colored previously); • else if the edge crossed when reaching the current triangle is not a boundary entity, assign the value v = c to this triangle and GO TO 3; • else if the edge crossed when reaching the current triangle is a boundary member, GO TO 3; 4. Set c = c + 1, and GO TO 3 as long as a triangle exists such that v = -l. D A variant of this algorithm, using the notion of a seed, also provides a solution for connected components determination. At first, the elements are stored in a list. 1. Assign the value v = — 1 to all elements of the box mesh, Tg0:r, set c = 0 and introduce a seed g, with g = 0; 2. Pick one element having a box corner as vertex, assign it to v — c and put this element on top of the list; 3. Visit the list of elements by means of edge neighborhood relationships: • if the color of the visited element is not —1, GO TO 3 (the element is already colored); • else if the edge crossed when reaching the current triangle is not a boundary entity, assign v = c to this triangle and GO TO 3; • else if the edge crossed to reach the examined triangle is a boundary member and if g = 0, DO g = c + 1, store the current triangle and GO TO 3; 4. As long as a non-null seed g exists, set c = g, put the corresponding element on top of the list, set g = 0 and GO TO 3. D In this way, the triangles are classified with respect to the different connected components of the domain, c being the component number. The triangles with color 0 are outside the domain, it is thus possible to remove them. Nevertheless, this operation is not done at this time in order to
136
CHAPTER 5. MESHING IN TWO
DIMENSIONS
preserve a convex background allowing us to maintain a simple context for the field point creation (in particular when searching for which element a a given point falls in). Note that, at completion of one of these algorithms, it is easy to detect some invalid configurations. Indeed, if at the end of the first algorithm, all the elements are marked with the value 0, then the domain is not well defined as the boundary is not a closed boundary or is self-intersecting. If the second algorithm has been used, a lack in the boundary is determined if a seed needs to be marked with a different value, meaning that one boundary (between two components) is not closed or is self-intersecting. Hence, a few pathological cases can be detected which allow us to determine that one of the boundary component is topologically invalid.
5.3
Field points (creation)
In general, points must be created and inserted into the domain so as to construct a suitable mesh, in peculiar for finite element computations. Numerous methods can be used to perform this step. So much so that it would be a tedious task to establish an exhaustive list of all such methods. Nevertheless, we will mention the most popular methods, while observing that some of them may not enjoy the properties required to handle all the situations we are interested in. In particular, we will pay special attention to two methods with a relatively broad range of usage (at least in our opinion).
5.3.1
Several methods for field point creation
The existing methods can be classified with respect to different criteria. Methods can be defined according to their type by indicating their class of applications (i.e. classical mesh, isotropic mesh with size constraint or purely anisotropic mesh). They can also be classified by the applications for which they are well-suited. We would like to follow the first argument indicating at the same time the flexibility of a method with respect to the anticipated usage. Tentative list of methods. As mentioned before, numerous methods are suitable for internal point creation. In fact, two questions need to be considered. The first question concerns the field point location and corollary, the second question is related to the number of requested points. The principal methods proposed in the literature are based on
5.3. FIELD POINTS (CREATION)
137
• circumcenter creation for every triangle that violates certain criteria, see [Shenton et al. 1985] or [Holmes,Snyder-1988], surface evaluation (the surface is judged too large), inradius length (also considered as too large), aspect ratio (or quality), etc., • centroid creation (weighted or not) for the triangles judged too large, [Hermeline-1980], • creation of points along the edges of the current mesh, see for instance [Borouchaki,George-1997] which will be described in detail later in this chapter, • different meshing techniques, for instance — using a quadtree3. Then, the resulting partition is used to create the internal points. It is possible to define the cell corners or the cell centroids as internal points (one may observe that the internal point density is defined as a function of the boundary point density input as sole data. Indeed, the cell size is directly related to the distance between two consecutive boundary points), — using an advancing-front method4. The advancing-front method enables the internal points to be located from the edges of a front defined with this purpose and then to connect the points so as to create the elements (see below); - using a given point distribution function, the latter being defined as a set of the cell vertices of a regular grid (as before) or being a set of points following a given pattern (along circles of given radius, along straight lines sweeping the plane, etc.). • by means of a "variogram". This notion relies on constructing a list of points by first including the boundary points. Afterwards, an internal point is constructed, which is the point that maximizes the distance from all the other points in the list. The created point is then included in the list and the process is repeated until saturation, 3
A quadtree is a data structure based on a recursive subdivision scheme. This approach consists of constructing a rectangle enclosing the domain. This rectangle forms the parent cell of a recursive subdivision scheme, every current cell in the structure being subdivided into four similar cells as long as the considered cell contains more than one boundary point. This method results in a set of cells whose size is directly related to the boundary discretization density. 4 An advancing-front technique consists of constructing the "optimal" set of points associated with a current front. The initial front is formed by the boundary edges input as data. Then, the front is updated after each operation.
138
CHAPTER 5. MESHING IN TWO
DIMENSIONS
cf. [Tacher,Parriaux-1996]. This final stage is achieved when no point is farther from any other point by more than a given global or local threshold. Thus, the general principle is to either create a point and insert it immediately by means of the Delaunay method (the so-called Delaunay kernel), repeating the process as long as points can be created, or to generate a series of points, insert this series and iterate the process as long as a non empty series is created. Classical case. In this context, the only goal is to locate the points so as to obtain (nearly) equilateral elements in the resulting mesh or, at least, element with the best possible quality. In this context (in two dimensions), all the briefly mentioned methods5 lead to suitable results. Isotropic case with size prescription. The aim is to now locate the points so that the resulting mesh consists of equilateral triangles (or element with the best possible quality) and, in addition, such that the triangle size is as close as possible to a pre-specified size. For this case, all the above methods6 are also usable. Anisotropic case. Here the goal is to locate the points so that the resulting mesh consists of triangles whose sizes and directions are as close as possible to a pre-specified input. Then, it seems that the method relying on edge point creation is satisfactory. The method using an advancing-front approach probably leads to the same result. However, the flexibility of the other methods is not yet decided7.
5.4
Control space
It is quite useful to define the notion of a control space to govern the internal point creation. This is a way to facilitate the choices being made. Indeed, this space serves to determine the current background. The ideal control space is the input of a function H(x,y) defined at any point P(z, y] of R2. In other words, the function is defined analytically and specifies the size 5 As the method in question has been properly developed, in terms of computer implementation. 6 Same remark as 5. 7 Clearly, this opinion can be refuted.
5.4. CONTROL SPACE
139
and the directional features that must be conformed anywhere in the space. In practice, a control space can be defined as follows, [George-1991]. Definition 5.2. domain ft if
(A, H) is a control space for the mesh T of a given
• ft C A where A covers the domain ft, • a function H(P,d) is associated with every point P G A , where d is the direction of the disc S1 for sphere 52 in three dimensions): H(P,d) : A xS 1 -»fl. The function H, whose support is the covering triangulation A, allows the specification of the requested properties or the criteria to which the elements of the mesh shall conform. In terms of geometry, A is an arbitrary covering triangulation. For example, it can be one of the following a) a quadtree type partition, b) a regular partition, such as a finite difference type, c) an arbitrary user-constructed mesh, d) a current mesh, for instance, the last mesh in an iterative process. In addition to this partitioning aspect, (A, H) contains, by means of the function H, the global information related to the physics of the problem. These values allow us to determine if the mesh T, under construction, conforms to the function everywhere. To construct H, one can consider one of the following approaches • compute, from the data, the local stepsizes h (the value h being the desired distance between two points) related to the given points. A generalized interpolation then enables us to obtain H (this process is purely geometric in the sense that it relies on the geometric data properties: boundary edge lengths, etc.); • manually define the value of H for every element of A. A desired size is then given everywhere in the space for isotropic control, or the desired sizes according to specific directions are given for dnisotropic control;
140
CHAPTERS.
MESHING IN TWO DIMENSIONS
• by manually specifying H by giving its value for each element of the covering triangulation constructed in this purpose (we return here in a type "c" space as introduced above); • use the cell sizes (in the above case "a"), where this size is used to encode the value of H. This then leads to the construction of the (A, H} space so as to satisfy this requirement; • define H from an a posteriori error estimate. We are then in an iterative adaptive process. A mesh T is constructed, the corresponding solution is computed and the error estimate analyzes this solution so as to complete H. The pair (T, H) forms the control space used to govern the new mesh construction (cf. Chapter 9). For each of the different cases, this definition results in one or the other control space types. In what follows, we will show how to create the internal points in accordance with the specifications contained in this space. When the geometric locus of P + H(P, d] is a circle (a sphere in three dimensions), with P in A and d varying, the control space is isotropic. When this locus is an ellipse (ellipsoid), the control space is anisotropic. Only these two cases will be discussed hereafter, leading to the definition of the metric map which governs the construction.
5.5
Creation along the edges, classical case
In this case, the sole data is the discretization of the domain boundaries. Hence, it means that we don't have an explicitely defined control space. Nevertheless, to return to the general framework, we would like to construct, at best we can, a control space. To this end, the edges of the current mesh, say the empty mesh (Figure 5.4) are used. Control space.
The control space is then
• A which is the current mesh, i.e. the initial empty mesh and, in case of iterations, as will be seen latter, the mesh corresponding to the former iteration, along with • H which is defined by means of a P1 interpolation, using the sizes, denoted as hioc, associated with the points of the domain boundaries and, furthermore, during the iterative process, by using the hioc assigned to the vertices of A.
5.5. CREATION ALONG THE EDGES, CLASSICAL CASE
141
The initial sizes can be computed by averaging the lengths of the edges emanating from a boundary point. Point creation. With this control background, we now proceed to describe the internal point construction method along the internal edges. The current mesh edges are examined and their lengths are compared with the stepsizes related to their endpoints, hioc\ and /i/ OC 2- The goal of the method is to decide if one or several points must be created along the visited edge. If yes, both the number of required points, n, and their location must be determined. The key-issue is, on the one hand, to saturate the edge, and on the other hand, to obtain a smooth point distribution. It is possible to use an arithmetic type of point distribution. Thus, (for every edge AB), if • /i(0) — hiod denotes the stepsize associated with P0 = A, one of the endpoints, • h(n + 1) = hiOC2 is that related to Pn+i = -B, the other endpoint, one can define the sequence oti by
where d(P z , Pi+i) is the (Euclidean) distance between Pt and P,-+i, while r is the ratio of the distribution. The problem requires the solution of the system
to find r and n. This system yields to the solutions
and
As n must be an integer value, the solution is rescaled so as to obtain an exact discretization of the visited edge in terms of n and r. The sequence of points is determined at the time n and r are established. Then, with
142
CHAPTER 5. MESHING IN TWO DIMENSIONS
each so-defined point is associated a value, /i, derived from the /i's of the supporting edge. It means that the control space is completed "on the fly". The process is repeated for all the current mesh edges and the series of so-created points is then filtered, using simply a grid (cf. Chapter 2). This treatment is needed due to the fact that the vertices are well-positioned along one edge but not necessarily in a global sense. For instance, one may think of the case of all the edges emanating from one point. The retained points are then inserted using the Delaunay kernel and the entire process is iterated as long as any of the mesh edges need to be subdivided.
Figure 5.4: The edges of the empty Figure mesh. mesh.
5.5:
Final corresponding
Remark. A different lecture of this algorithm can be given. Indeed, the algorithm attemps to construct unit length edges everywhere in the domain. The unit value is evaluated in a local metric which is nothing more than a linear (or smooth enough) interpolation between /i(0) and h(n+ 1). Remark. This internal points creation method offers several advantages. First, one may observe that the algorithm is dimensionless meaning that the same method extends to three dimensions. Second, in our experience, the resulting meshes enjoy reasonable quality, especially in two dimensions. In fact, the so-created meshes do not strictly require any optimization steps (cf. Chapter 8). Finally, as this method constructs a series of points, the randomization is very efficient during their insertion, in particular if the series are large (in terms of the number of points).
5.6. CREATION ALONG THE EDGES, ISOTROPIC CASE
143
Exercise 5.1 Discuss the case of a geometric progression or of a different type progression. A variation of the above method consists of taking into account only the first and the last point of the edges judged not significative or too long. Such an edge can be exhibited by observing the number of points that are expected. If this number is greater than a given threshold value, the edge is considered as non significative. Looping about the edges of the current mesh then leads to a saturation of the domain thereby achieving the desired result.
5.6
Creation along the edges, isotropic case
Similar to the classical case, we define the control space used to govern the internal point construction and, we propose a method to achieve this creation (i.e. suitable to determine both the number of required points and their locations). Control space. We assume that an ideal control space is given for which the value H(x,y) specifying the desired sizes is known everywhere in R2. Indeed, H(x,y) is a metric map consisting, from a practical point of view, of a matrix field of the form
or, when a straight line is considered, as assumed in the discussed method
with M(t) the point (x,y), t being the parameter. Actually, we have also
h(i) denoting the expected size at the point of parameter t. This size is the desired length of all the edges emanating from this point. Here is an academic situation used to introduce the internal point construction method in a formal manner. The realistic case, directly suitable for realistic problems will be discussed in Chapter 9 in its theoretical aspects and in Chapter 12 for a real application where the control space can be constructed practically using the current background.
144
CHAPTERS.
MESHING IN TWO
DIMENSIONS
Point creation. Similar to the classical case, we use the current internal mesh edges as support. We subdivide them by creating the points in such a way as to conform (as best we can) to the properties described in the control space. The key-idea is to obtain (nearly) equilateral triangles with unit size (i.e. having unit length edges) according to the metric of the control space. In other words, this means that elements of size h (whose edges have a length of h) are required in the usual metric space. Formally speaking, this implies that locally the unit circle in the control space is the circle of radius h in the usual space. We assume that the empty mesh constructed as previously mentioned is such that its boundary is discretized in compliance with the control space requirements. Then, we examine the edges of this mesh and we compute their lengths in the control space metric. Let A and B be the endpoints of the visited edge, then, if M(t] = A + tAB stands for the parametrization of segment AB, with parameter t (t ranging from 0 to 1), we have
1M(A,B)= t \l*~A&.M(M(t)).A&dt, Jo
(5.6)
and, in the case where M(i) is independent of £, this leads to
(5.7) 2
) is the given metric, the length of AB
s ljv((A,B} = 1 is equivalent with ||A5|| = h.
The length IM(A, B) enables us to find the number of points that shall be constructed along the edge AB so as to subdivide it into unit length sub-edges. Let m be the integer value such that m < IM(A, B} < m -f 1. The process consists of splitting the edge into m or m + 1 segments by creating m — I or m points so as to approach IM(A, B). Indeed, this value is not an integer value, while m must be an integer value. Once the number of points is known, it is possible to construct them. We still have to describe in detail how to compute an edge length, and, given this value, how to find the number and the location of the points. This is the aim of the following paragraphs. • Edge length computation.
5.6. CREATION ALONG THE EDGES, ISOTROPIC CASE
145
Let AB be an edge, as defined above. The length of AB cannot be computed directly. Therefore, we propose a method resulting in the computation of an approximate length. Let a and (5 be the components of the vector A5, then, due to the specific form of M. , 1 can be written as
or as
and, finally, (5.9)
where d is the (usual) distance between A and B. We use then the trapezoid (A corresponds to t — 0, B to t — 1), we have
Figure 5.6: For the length computation of the edge AB. Let 6 be a threshold value (8 < 1, for instance 0.5). If IM(A, B} < 6, the edge AB shall not be subdivided, otherwise we introduce the point Qi, the midpoint of A5, and we evaluate the lengths IM(A,Q\) and I M ( Q I , B ) . We analyze these two quantities and as long as one of these is greater than the threshold value <£, the process is repeated using the corresponding midpoints. At convergence (we assume that the map M. and thus the
146
CHAPTERS.
MESHING IN TWO DIMENSIONS
function h(t) are smooth enough), we have a sequence of points Q; such that, (after index renumbering),
and
lM(A,B) = '£ilM(QilQi+1)
(5.11)
i
Consequently, the number of points to be created along AB is known and, moreover, this method enables us to define the location of these points as will be seen hereafter. • Construction of the points subdividing the edge AB. We take advantage of the way the length of AB has been evaluated to locate the points subdividing this edge. Let Si be the length of QiQi+i (A = Qo and B = Qn], then, the point location algorithm can be described as follows.
Figure 5.7: Construction of the points subdividing AB. We pick the smallest index i such that /o i — Z) $j > 1 and we construct j=o
a point, PI, as the average of the points Qi and <2»+i weighted by the difference from 1 of the above sum
with
where /;_i )4 - is the length of Qi-iQi (according to the control space) and di-iti is the usual length of this segment.
5.7. CREATION ALONG THE EDGES, ANISOTROPIC CASE
147
Iterating this process, for a given edge, results in the construction of the points, one at a time. Then, applying this set of methods to all the current mesh edges gives the cloud of internal points. As with the classical situation, this cloud is filtered so as to remove all points too close to each other for the specified metric. Remark. When h(t) is an arithmetic (geometric, ...) progression, it is possible to compute the integral 5.9 exactly.
5.7
Creation along the edges, anisotropic case
We follow a similar approach, by first introducing the control space and by then discussing how to construct the internal points. Control space. We now assume that an ideal control space is given in which, for every point of R2, the value H(x,y) indicates the expected directions and sizes. More precisely, H ( x , y ) is a metric map specified as a matrix map, a typical matrix being like (5.12) or, if a straight line is the considered support in the proposed method
with M(t) the point (a;,y) and t the parameter. Point creation. As before, the key-idea is to use the edges of the current mesh as support for the construction. The points are constructed along these edges so as to conform to the properties described by the control space. Hence, the aim is to construct equilateral triangles of unit size (i.e. with unit length edges) according to the metric defined in the anisotropic control space. In terms of the usual space, it implies that the triangles enjoy a size (/ii, ^2)5 i-e. their edges have a length (h\, h*}), where h\ denotes the desired length in the first principal direction, di, and h^ is the expected size in the orthogonal principal direction, d^. This means, formally speaking, that
148
CHAPTER 5. MESHING IN TWO
DIMENSIONS
locally the unit circle in the control space is the ellipse of directions (di, d^} (angle 6) with length h\ according to d\ and length hi following d% in the usual space (see Figure 5.8). The equilateral triangle in the control space is the triangle image of the equilateral triangle of the usual space while a rotation (0) and two directional dilation factors (h\,hi} are applied.
Figure 5.8: Unit circle according to the control space. As before, we assume that the empty mesh provided as input conforms to the control space (and, more specifically that its boundary follows the specifications) . The edges of this mesh are analyzed and their lengths are computed with respect to the metric. Let A and B be the endpoints of a visited edge, if M(t) = A + tAB is the parametrization, with parameter £, of the segment AB, we then have
1M(A,B}= I JiA&.M(M(t)).A$dt. Jo
(5.14)
• Edge length computation for AB. As defined before, the length of AB is not easy to compute, thus the previous approximate method is used. The length IM(A, B] in relation (5.14) is approximated by the trapezoid rule, we then have (if A corresponds to t = 0 and B to t = 1) 1M(A,B) = --- . Then, let 8 be the threshold value (see above), if IM(A, B} < £, then edge AB does not require any subdivision. Othrewise, we introduce the point Qi and recursively we define a sequence of points Qi such that lM(Qi,Qi+i) < $
(5.15)
5.8. ADVANCING-FRONT TYPE CREATION
149
and, as above, we have
Afterwards, the number of points that must be created along AB is known and, moreover, the way in which the length of AB has been evaluated can be used in order to locate the required points. • Constructing the points subdividing the edge AB. We follow the same method as that in the isotropic case and we exploit the algorithm used to compute the length of AB in such a way as to locate the points along this edge. Applied to every edge in the current mesh, this sequence of algorithms results in a set of points. As with the classical case, this set is filtered to discard all points too close to any other points (in the metric).
5.8
Advancing- front type creation
We now focus on an alternative method not based on the mesh edge analysis. We return to the classical case, i.e. where the desired mesh is a mesh with equilateral triangles with no additional constraints in terms of size or directional properties. Control space. Hence, the control space is similar to that of the previous case • A is the current mesh, the initial empty mesh and, in case of an iterative scheme (as will be seen), the mesh corresponding to the previous iteration, • H is defined by means of a P1 interpolation, starting from the sizes associated with the boundary points, /i/oc, and then from the hioc assigned to the vertices of A. The initial sizes are computed as the average of the lengths of the edges emanating from a boundary point.
150
CHAPTER 5. MESHING IN TWO DIMENSIONS
Point creation. Provided this control background, we aim to introduce a point creation method based on an advancing-front point placement strategy. The goal is to combine two mesh generation methods so as to take advantages of both while avoiding their respective weakness, if any. The main advantages of the Delaunay triangulation coupled with those of an advancing-front have been pointed out recently by several authors including [Merriam-1991], [Mavriplis-1992], [Muller et al. 1992], [Rebay-1993] and more recently by [Marcum,Weatherill-1995]. In all these two-dimensional approaches, the usual point insertion algorithms (Bowyer/Watson, GreenSibson) as well as the reduced incremental method are combined with the point placement strategy of the advancing-front method leading to optimal point construction (to some extend). The front represents the interface between an "acceptable" element and an "unacceptable" element with respect to a quality measure (size, aspect ratio, etc.). Starting from the empty mesh, a list of triangles is determined based on the intrinsic properties of the elements (in-radius). The control space is then used to find the maximal parameters defining a suitable triangle. Thus, the triangles are divided in suitable triangles (i.e. having a size conforming the control space) and triangles that must be processed. By default, every triangle outside of the domain is classified as suitable. The active triangles, neighbors of the suitable triangles, are those where a point can be defined. The points to be inserted are located along the medial line of the edge that separates an active triangle and its suitable neighbor. These points are located so as to permit the construction of an optimal triangle, the optimal feature being related to the control space. The resulting meshes enjoy a nice gradation in terms of size variation and have nicely shaped elements as usually produced by the advancing-front method. Moreover, size shocks are quite easily avoided unlike to the classical advancing-front method when a front contains edges with a wide difference in size. This method is quite simple, efficient and robust as the well known drawbacks of the advancing-front method are removed such as the requirement of complex data structure so as to locate the neighbors and determine the intersection between the entities involved in the construction. Moreover, the mesh is constructed point by point and not element by element. With each point creation are associated all the elements sharing this point resulting in a substancial efficiency by observing that the number of elements in a two-dimensional mesh is about twice the number of its vertices.
5.9. FIELD POINTS (INSERTION)
151
Other situations. The governed isotropic case as well as the pure anisotropic case differ from the previous case by how the control space is defined and how it is used to govern the "optimal" point placement. This time, the goal is not to create equilateral triangles (in the usual space). Thus, extending the previous method when an isotropic or an anisotropic control is expected can be achieved without major difficulty.
5.9
Field points (insertion)
Any point construction method results in a set of points. These points are then inserted by means of the appropriate Delaunay kernel. • In a classical context, we consider the constrained Delaunay kernel as seen in the Chapter 2 and Chapter 3. • In the governed isotropic case, the same kernel is well-suited. • In the anisotropic case, one must consider one of the constrained Delaunay kernels described in Chapter 4. The point insertion process is completed by successive waves. The first wave results from the empty mesh edge analysis (edge method) or from the empty mesh front analysis (advancing-front method). The following waves correspond to the analysis of the edges of the previous mesh. For an advancing-front strategy, the waves follow the analysis of the front associated with the current mesh; this front being the interface between the suitable elements and the active elements, those being the candidates for point placement.
5.10
Optimization
In two dimensions, as will be detailed in Chapter 8, there is only a few mesh optimization operators. In fact, one can • use the edge swapping, • move a free8 vertex, • remove a free vertex, 8
Only the free vertices can be moved. A vertex is said to be free if it is not a boundary vertex (when it is not possible to move such a point along the boundary lines) or if it is not explicitely defined as a specified point.
152
CHAPTER 5. MESHING IN TWO
DIMENSIONS
• collapse an edge by merging its two endpoints (when it won't invalidate the mesh topology). In our experience, mesh optimization is not strictly required in the classical meshing case. Indeed, the two mesh generation methods discussed in detail in this chapter result in good quality meshes (in terms of the element aspect ratio, see Chapter 8). This is due to the good point placement strategy and thus no additional moving procedure is needed. Moreover, the Delaunay kernel used in the mesh generation process induces a good connectivity, thereby avoiding any edge swapping9. In the governed isotropic case as well as in the anisotropic case, some degree of mesh optimization can be useful for obtaining a certain level of improvement. Specifically, the few "ill-shaped" elements that are constructed anyway, are usualy improved or removed. One may notice that, in these cases, the mesh optimization is governed by the quality related to the provided metric map (for more details, refer to the Chapter 8).
5.11
General scheme for the mesh generator
In this section, we summarize the materials discussed in this chapter so as to propose a general scheme for a mesh generation method. Seven steps can be identified as follows. • Preparation step. — Data input: point coordinates, boundary edges and internal edges (if any), — Construction of the bounding box, — Meshing of this box by means of a few triangles. • Construction of the box mesh. — Insertion of the given points in the box mesh using the Delaunay kernel. • Construction of the empty mesh. — Search for the missing specified edges, - Enforcement of these edges, 9
These simple remarks argue in favor of the proposed methods, nevertheless, as will be seen in three dimensions, this positive feature is not met so easily.
5.11. GENERAL SCHEME FOR THE MESH GENERATOR
153
— Definition of the connected components of the domain. • Internal point creation and point insertion. — Control space definition, - (1) Internal edge analysis, point creation along these edges. — Point insertion via the Delaunay kernel and return to (1). • Domain definition. - Removal of the elements exterior to the domain, - Classification of the elements with respect to the connected components. • Optimization. — edge swapping, — point relocation, ... • File output. When using the advancing-front approach described in this chapter, one has to replace the step denoted by (1) of the general scheme. The analysis of the edges of the current mesh is then replaced by the front analysis. It is possible to develop a somewhat different scheme. Each step can be discussed and modified. An enclosing box is not strictly required. Without this trick, one has to consider as Delaunay kernel a process suitable for all the possible configurations and not just the reduced incremental method. The internal point creation using the internal edges or following different successive fronts first uses the empty mesh as a basis and then the current mesh. While this method offers numerous advantages, other methods exist which are not based on the empty mesh concept. For instance, [Weatherill,Hassan-1994] define the internal points in advance (using a different method, see the above section about this) and all the resulting points are inserted before dealing with the boundary integrity problem. The enforcement of the missing edges as well as the exact definition of the domain take place at the very end of the mesh generation method. It is not reasonable to claim, ex abrupto, that one approach is better than the other. Nevertheless, we advocate the approaches discussed in this chapter based on their efficiency (both in terms of CPU time and mesh quality). Moreover, these methods are quite flexible, and, in Chapter 7, we will see that the three-dimensional mesh generation problem is not too much different from the two-dimensional case discussed here.
154
5.12
CHAPTER 5. MESHING IN TWO
DIMENSIONS
Some results
To illustrate the proposed edge method, we would like to show some mesh examples resulting from its application. First, two classical applications are given. Then we turn to an example of a governed isotropic situation where a size specification is provided and, finally, a third example is given for an anisotropic mesh generation case. The last two application examples are academic in that the specified metric map is analytically defined. A realistic case where the metric map is related to an effective computation will be discussed in Chapter 12. Figure 5.9 depicts the mesh of a domain defined by a boundary discretization. This example is a section of a brake disk (courtesy of Telma). The resulting mesh has 4,442 triangles and 2,496 vertices. Figure 5.10 shows a domain modeling a circuit. The resulting mesh includes 7,646 triangles and 3, 975 vertices. The worst triangle quality is 2.05 in the first example, 38 elements have a quality worse than 1.5, only one element exceeds the value 2 (let us recall that the optimum is 1 and the greater the value, the worse the triangle). In the second example, we have respectively 2.09 as worst quality, 101 elements worse than 1.5 and only 2 triangles worse than 2.
Figure 5.9: Brake disk (section).
Figure 5.10: Circuit.
The example in Figure 5.12 is a mesh of a simple rectangle with dimensions 7 x 9 . The mesh is constructed so as to conform to a given element size specification. The specification is depicted in Figure 5.11 where the radii of the circles correspond to the desired sizes. In this case , the metric
5.12. SOME RESULTS
155
specification is analytically defined (thus, the control space is ideal) by the following function h ( x , y ]
h(x,y] =
1.-0.95-
if y < 2
0.05 x 20
if 2 < y < 4.5 if 4.5 < y < 7
-7\4
Figure 5.11: Governed isotropic case with a size specification.
if 7 < y < 9 .
Figure 5.12: Resulting mesh.
Table 5.1 reports the characteristics of the mesh constructed subjected to this isotropic size constraint. The number of points, np, number of triangles, ne, the worst triangle quality (aspect ratio), Q, minimal desired size, /imt-n, and the maximal prescribed size, hmax, are reported. np
2,227
ne
Q
"•mm
""max
4,448
1.14
0.05
1.
Table 5.1 : Statistics for the example of Figure 5.12. The diagram of Figure 5.13 reports the number of edges with a given length in the metric associated with the control space. Clearly, it may be noticed that this diagram is well centered on the value 1.
156
CHAPTER 5. MESHING IN TWO
DIMENSIONS
Figure 5.13: Dispersion diagram related to Figure 5.12. The example in Figure 5.14 shows, in a different case, the construction, step by step, of a mesh subjected to a size prescription. The first plot (iteration step 0) is the empty mesh of the domain, the second mesh (iteration step 1) is the mesh resulting after inserting some points along the edges of the empty mesh. After a few iteration steps (6), the final mesh is saturated (the edge analysis does not require any point creation). The example in Figure 5.16 displays the mesh of the same domain as above, a rectangle of dimensions 7 x 9 . This mesh is constructed so as to conform to a given specification including size and directional properties. The metric map is shown in Figure 5.15 where the ellipses with different radii and directions illustrate the expected features. This metric map is analytically defined (the control space is thus ideal) by the following functions h\(x,y) and h^x^y]
5.12. SOME RESULTS
Figure 5.14: Initial mesh and iteration steps following the edges.
157
158
CHAPTER 5. MESHING IN TWO DIMENSIONS
These two functions allow to define the following metric maps 2
,
I/ h~ ll-i (r [a/. 11} UI
n U
\\
.-2,
which enable us to construct the control space. Table 5.2 reports the number of points, np, number of triangles, ne, the worst triangle quality (aspect ratio), Q, and the minimal and maximal sizes, hmin and hmax, that are expected as well as the minimal and maximal stretching ratios (this is the ratio of the largest size and the smallest one for all the directions), rm{n and
np 1,298
ne 2,590
Q 2.04
i^min
""max
0.02
0.5
T
' mm I.
^max
19.5.
Table 5.2 : Statistics for the example related to the Figure 5.16. The diagram in Figure 5.17 reports the number of edges with a given length in the metric associated with the control space. Clearly, it may be also noticed that this diagram is well centered on the value 1.
Figure 5.15: Anisotropic case with a directional and size specification).
Figure 5.16: Resulting mesh.
5.13. NOTES
159
Figure 5.17: Dispersion diagram for the mesh in Figure 5.16.
5.13
Notes
Periodic mesh. Periodic mesh construction is required for numerical simulations for some specific problems. The periodic feature can be related to both the equations that must be solved as well as the geometry acting as computational support or only related to the geometry. A simple case of a periodic mesh is a "quadrilateral" domain in which the two opposite sides must be periodic or identical. Then, there are two possible solutions • enforcing an identical discretization for the mesh along the two edges under consideration, • transforming the domain so as to naturally obtain the periodic condition. The first method, the simplest one, results in a locally periodic mesh as two triangles related to two equivalent edges are not adjacent. On Figure 5.18, we display a domain in which the mesh of lines AB and DC must be identical. The triangles c*i/?i7i and otilifti are such that the edges 0:171 and 0:272 are corresponding edges. Nevertheless these two triangles are not adjacent. The second approach is more delicate from a computational point of view but results in a globally periodic mesh (the two triangles as above
160
CHAPTER 5. MESHING IN TWO DIMENSIONS
mentioned are indeed adjacent). We refer to Chapter 6 for an application example of this notion of a periodic mesh.
Figure 5.18: Locally periodic mesh.
Medial axis. Given an empty Delaunay mesh of a domain in .R2, it is possible to compute its medial axis. This line is the locus of the centers of the circles of maximal radius inscribed in the domain; this line is also referred to as the skeleton of the domain. A description of a method for medial axis construction can be found in Chapter 13 along with some applications for such a construction.
Chapter 6
Parametric surface meshing 6.1
Introduction
Surface triangulation is critical for numerous applications. For instance such meshes are needed for plate or shell problems (for both numerical simulations and realistic visualizations), and for mesh generation methods in three dimensions. In the latter application, the reason is that the threedimensional mesh construction algorithms require the input of a surface mesh of the domain under consideration. Surface triangulation is not a trivial problem, in particular, as the surface definition is not unified (in fact, the mathematical support differs from one package to the other and, moreover, in a given system, several representations can be used so as to define the different situations). Although the parametric surface case is easier to study, it is nevertheless still tedious. Indeed, let fi be a domain of R2 and a be an adequatly smooth function, then the surface E defined by
a : Q —>• R3, (u,v) \—»• a(u, v) can be meshed in f2 by means of any two-dimensional mesh generation method. The resulting mesh can then be mapped to R3 using a. This simple method does not ensure in advance a good approximation as the resulting mesh may be an inaccurate geometric approximation of the real surface. On the other hand, achieving an accurate approximation may require a stepsize so fine that the resulting mesh is then difficult to handle as its number of elements may be too large. Indeed, the problem is to approximate the surface as accurately as possible using piecewise planar patches such that a sufficient regularity between these patches holds. In particular, if every patch is a triangle, one needs to
162
CHAPTER 6. PARAMETRIC SURFACE MESHING
control the gap between these mesh triangles and the real surface. It is not easy to include such a control in the above mesh generation method, thus other approaches have been introduced. In this respect, several methods for surface meshing have been proposed. The more classical method consists of recursively subdividing an initial coarse mesh in the parametric space so as to obtain a suitable mesh, meaning that the gap between a triangle and the surface is controlled. This control ensures a few peculiar properties. The case of planar triangles has been investigated by [Filip et al. 1986] who propose an upper bound for the gap. An extension for arbitrarily shaped triangles has been proposed by [Sheng,Hirsch-1992]. Then, let £ be a surface with a parametrization
where IK is the length of the longest edge of K while
Thus, if the quantities X\, £2 and 1$ are known for K, we observe that the gap is controlled by IK- The difficulty lies in evaluating the quantities Jt-, 1 < * < 3. When <j(w, v) is a polynomial form, an estimation of such quantities is proposed in [Filip et al. 1986] and [Dolenc,Makela-1990], although in a more general case, this estimation can be tedious. The approach we would like to discuss in detail consists of constructing a mesh in the parametric space so that the mesh obtained by applying the transformation to the surface is close enough to this surface. This feature is achieved by controlling the construction in parametric space, ensuring that the gap between a mesh edge and the surface is bounded. To this end, the fundamental forms of the surface are used to achieve a mesh with unit length edges according to a control space related to these forms.
6.2. THE FUNDAMENTAL FORMS AND RELATED METRICS
163
The key-idea of the method is to first define a control space related to the fundamental forms of S, then to derive a Riemannian structure in the parametric space, and finally to construct a unit mesh in this context. Consequently, we return to an application of the anisotropic triangulation construction as described in Chapter 4. To define the metric map, we use the basic notions in differential geometry related to the intrinsic properties of a surface, i.e. its two fundamental forms. Based on this forms, we outline a method resulting in a unit mesh, in terms of the Riemannian structure corresponding to the metric map. We aim to define this metric map and to show how to use it for meshing both the surface boundaries and the surface. Different types of surface meshes can be obtained (uniform isotropic meshes, arbitrary isotropic meshes, anisotropic meshes, and so on), depending upon the choices selected to define the control space.
6.2
The fundamental forms and related metrics
Let E be a regular surface defined by a the following parametrization a : fi —»• R3, (w,u) '—> a(u,v) where $7 is a domain in R2 and a is a function of class C2.
Figure 6.1: A given surface and its parametrization. Two fundamental quadratic forms are defined at every point P of S, allowing, in particular, to compute the length of a curve plotted on E and the radius of curvature everywhere along this curve. At first, we will review these basic notions of differential geometry in terms of metrics.
164
CHAPTER 6. PARAMETRIC SURFACE MESHING
6.2.1
Metric of the tangent plane
Let P = cr(u,v) be the point of parameter (w, v). The tangent plane Tp at P is directed by the two vectors T\ = a'u(u,v] and r? = a'v(u,v). Let v — --- be the unit normal of Tp at P, then
Iki x ^ l l
fl/(P) = (ri,7*,i/)
(6.1)
constitutes a base of R3, the so-called local base at P. Every vector V of Tp can be written as V = AiTi+A2r 2 , with AI, A 2 € R and we have < V, V >= E\\ + 2FX1X2 + G\\ where < .,. > stands for the usual scalar product, while E, F and G are defined as
E=< n,Ti >, F =< Ti,T 2 >, G=< T2,T2 > .
If
f denotes the quadratic form associated with the scalar product, then the restriction of the form $f (V) =< V, V > to Tp is called the first fundamental form of £ at P. By introducing
we can write The matrix1 Aii(P) is symmetric positive definite, and thus represents a metric, the so-called tangent plane metric to S at P. It is then possible to evaluate the lengths of the segments constructed in the parametric space in accordance with the geometry of the surface by means of this metric A4\. Indeed, let F be a curved segment plotted on E and defined by 7 : [a, 6] —> #3, t H-» 7(0 where 7 is a function of class C2. The length of F is then given by
1
Also said to be the first fundamental matrix.
6.2. THE FUNDAMENTAL FORMS AND RELATED METRICS
165
where ||.|| is the usual Euclidean norm. As F is plotted on S, a function exists, and is defined by
such that 7 = a o u.
Figure 6.2: Curve on the surface and in the parametric space. Then, we have I \
/ 1 J \ i ^ ™~"
(6.2)
but, as we have also (6.3)
which reduces to (6.4)
and we can deduce that
(6.5) In particular, if u>(t) is a segment AB of Q, we have (6.6)
166
CHAPTER 6. PARAMETRIC SURFACE
MESHING
Geometric interpretation of M\. Let O be the point of fi (i.e. the pair (-u, v}} and Mi(cr(O)) be the tangent plane metric in o(O}. Let e be an arbitrary real value, then the locus of the points X of 17 such that
is an ellipse, centered at O, denoted by £(O,e). In the R2 space provided with the Euclidean metric defined by Ati(cr(O)), £(O,e) is a circle centered at O whose radius is e. Assuming that £(O,e) C fi, for every X € £(O,e} we consider the curve TX = cr([OX]) plotted on S, image by a of the segment [OX] of fi. Then the length of TX is given by i
L(TX) =
ytQMiWO + tO))Odt .
(6.8)
o
For e small enough, irrespective of the parameter t, we have
Mi((T(O + to3))&Mi(
(6.9)
and then
L(TX) = \JtOXMi((7(O))OX
= £.
(6.10)
Thus, the locus of the curves, plotted on £, of origin cr(O} and length £ (for £ small enough) is the image by a of the ellipse £(O,e] plotted on fi. To summarize, the first fundamental form, related to the tangent planes, allows for the computation of the length of the curves plotted on the surface. Therefore, it will be used in the following sections by the triangulation algorithms based on length evaluations.
6.2.2
Metric related to the main curvatures
The main radii of curvature can be used to construct different metric maps that will be used in the following paragraphs. Let V = \\TI + ^2^1 be a vector in the tangent plane Tp of S at P. Also let v be the unit normal vector to Tp at P defined as
if
6.2. THE FUNDAMENTAL
FORMS AND RELATED METRICS
167
then, the projection of the quadratic form onto Tp
is called the second fundamental form of E in Tp. The two following results hold (see the proof in [Lelong,Arnaudies-1977]) Lemma 6.1. The normal curvature K(V) of S at point P, related to the tangent vector V = X\TI + X^TI, is the value
Lemma 6.2. The curvature Km(V) of S at point P corresponding to an arbitrary section related to the tangent vector V = X\TI + X^ is such that the following relationships holds
where m is the unit normal vector of the curve plotted on S defined by this section. The first lemma enables us to compute the radius of curvature of the curve plotted on S defined by a normal section associated with a given tangent vector V (this plane contains P, V and z/). Similarly, the second lemma allows the same computation for an arbitrary section related to V. One can obtain the curvatures related to different normal sections of S at point P by changing V, in Tp, in the expression of «(V). In particular, the extrema of «(V) are termed the main curvatures of S at point P. These curvatures satisfy the following relations
K'XI — 0
and
K'x2 = 0 ,
and
(FL - EM}\\ + (GL - EN)^ + (GM - FN)\\ = 0. If all three coefficients in this trinomial expression are null, then $2*00 is proportional to $C(V) irrespective of the value of V, ^(V), an(^ ^ ne curvature is constant along all the normal sections. In such a case, we have
168
CHAPTER 6. PARAMETRIC SURFACE MESHING
In general, this equation has two distinct solutions V\ and V2, such that < V\,V-2 >= 0. These two vectors define two directions, the soVi V2 called principal directions. Let us define W\ = .. and W2 = 'HI
ll"2|
This pair of vectors (W\, W2) forms an orthonormed base of Tp and Bp = (W\, W2, v) is a base called the local principal base at P. Let us define the principal curvature and the principal radius of curvature corresponding to V\ (respectively V2) by KI and define p\ = — (resp. K2 and p2 — — ). KI
K2
Then the following result holds, see [Lelong,Arnaudies-1977j. Lemma 6.3. Let V = \\V\\(WiCosO-\- W2sinO) be an arbitrary vector ofTp and let p(V) —
be the corresponding radius of curvature. If
= \/p(V)cosO and if x2 = \/p(V] sin 9 then
In other words, the square root of the radii of curvature follows a conic section in the tangent plane. From the above results, it will then be possible to define a metric. A metric related to the principal curvatures Let P be a point on E. Let W\ and W2 be the two principal directions and let p\ and p2 be the corresponding radii of curvature. Provided that Pi < P2i we have the following lemma. Lemma 6.4. Let V = \\V\\ (Wi cos# + W2 sin 9} be an arbitrary vector of Tp of origin P and let p(V) be the radius of curvature corresponding to the normal section. Let Y be the intersection point of the half-line A whose origin is at P, aligned with vector V with ellipse £ plotted on Tp and defined by % +% =! Pi Pi
(6-11)
where (z1} z2] = PZ is a point of this ellipse, then we have (see Figure 6.3) \\P?\\
(6.12)
6.2. THE FUNDAMENTAL FORMS AND RELATED METRICS
169
Figure 6.3: Metric related to the principal radii of curvature. it is sufficient to verify if g(B] < 0 holds where . (6.14) This relation is satisfied since (6.15) Lemma 6.4 establishes that a metric can be defined in the tangent plane Tp at P, such that the unit length in all directions does not exceed the radius of curvature range in this direction. Such a metric is given by
In conclusion, the specification of such a metric at P allows us to control the lengths of the edges emanating from P so as to obtain a mesh that is a second order approximation of the surface. In fact, we have the following lemma Lemma 6.5. Let V be an arbitrary vector in T(P) of origin P. The normal section related to V intersects E along a curve F and is the osculating plane of T at P. In this plane, a second order approximation of F in the vicinity of P is a circle called the osculating circle. Let 7(5) be a parametrization of F where s is the curvilinear abscissa. V Let us define r = TTTTTT (respectively is) the unit tangent vector (resp. the
170
CHAPTER 6. PARAMETRIC SURFACE MESHING
unit normal vector) to F at P in the osculating plane. If P — 7(^0), then
7(s0 4- As) = P + rAs +
2p(V )
z/As 2 + As 2 e(As) ,
(6.17)
where As represents a small increment of s, p(V) is the radius of curvature corresponding to the specified section and e(As) is a quantity vanishing with As. Then, we have simply to observe that the point defined by (6.18) belongs to the circle centered at O — P + p(Y}v whose radius is p(V) (Figure 6.4).
Figure 6.4: Osculating circle. In the neighborhood of P, the curve can be approximated by a circle of radius p(V) and the metric specification Mp in P leads to discretize this circle using a stepsize smaller than p(V). It is also possible to discretize this circle with a stepsize smaller than ap(V), thereby allowing us to control the gap between the circle approximation and the circle itself. If 8 stands for the gap (Figure 6.5), the control relies on defining a in such a way that, for a given £, the following relation holds (6.19) as we have (6.20)
6.2. THE FUNDAMENTAL
FORMS AND RELATED METRICS
171
Figure 6.5: Discretization of a circle. then,
a < 2^(2 - e) .
(6.21)
With this control, the metric Mp must be replaced by (6.22)
In fact, this proves the following proposition Proposition 6.1. The specification of the metric —^ in the tangent plane, at every point of the surface, allows us to control the lengths of the edges emanating from the point and results in a mesh that is a second order approximation of the surface. Isotropic metric based on the minimal radius of curvature If p, in fact />(P), is the smaller of the principal radii of curvature pi and P2 then the metric map
(6.23)
is called the isotropic map related to the minimal radius of curvature and, according to Formula (6.16) and Proposition 6.1, this map enables us to obtain an isotropic mesh with a second order approximation of the surface.
Anisotropic metric based on the principal curvatures. Similarly, assuming ρ₁ ≤ ρ₂ (where ρ₁ and ρ₂ are functions of P), the metric map
where λ is an arbitrary scalar value, is called the anisotropic map based on the principal radii of curvature and, according to Formula (6.16) and Proposition 6.1, this map allows us to obtain an anisotropic mesh with a second order approximation of the surface.

6.2.3 Physically-based metric
The metric maps defined above are directly related to the intrinsic properties of the considered surface. In an adaptive analysis of a given (physical) problem, the analysis of the corresponding solution may result in a metric map which is purely physical in nature (i.e., unrelated to the surface geometry). Constructing a surface mesh using such a metric alone does not result, in general, in a mesh that closely approximates the surface. Hence, a new metric map must be defined so as to combine the physical metric map with the geometric metric map. Such a metric, referred to here as the intersection metric, can be obtained by either of two approaches. Let M₃(P) be the physical metric and let, for instance, M₃(P, ρ₁, ρ₂) be the geometric metric we have chosen. Then the intersection of these two maps (cf. Chapter 4) can be created by either

• maintaining the aspect of the initial metric map, or
• approaching the two metrics in question as closely as possible.

The first solution consists in defining a scalar value a such that the unit ball corresponding to the intersection metric a M₃(P) is the largest ball included in the unit ball of the metric M₃(P, ρ₁, ρ₂). The second solution defines the metric from the largest ball included in both the unit ball of M₃(P) and that of M₃(P, ρ₁, ρ₂).
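As an illustration of the second approach, here is a minimal sketch based on the classical simultaneous-reduction procedure for intersecting two metrics (one standard way of realizing the intersection of Chapter 4; it is not necessarily the authors' implementation, and the numeric example metrics are arbitrary):

```python
import numpy as np

def intersect_metrics(M1, M2):
    """Metric whose unit ball approximates the largest ellipsoid included in both
    unit balls, obtained by simultaneous reduction of M1 and M2."""
    # Eigenvectors of N = M1^{-1} M2 give a basis in which both metrics are diagonal.
    N = np.linalg.solve(M1, M2)
    _, P = np.linalg.eig(N)
    P = np.real(P)
    l1 = np.diag(P.T @ M1 @ P)
    l2 = np.diag(P.T @ M2 @ P)
    # Keep, direction by direction, the largest entry, i.e. the most restrictive size.
    D = np.diag(np.maximum(l1, l2))
    Pinv = np.linalg.inv(P)
    return Pinv.T @ D @ Pinv

# Example: an isotropic "physical" metric intersected with an anisotropic geometric one.
Mphys = np.diag([1.0 / 0.1 ** 2] * 3)
Mgeom = np.diag([1.0 / 0.05 ** 2, 1.0 / 0.5 ** 2, 1.0 / 0.5 ** 2])
print(intersect_metrics(Mphys, Mgeom))
```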
6.3 Surface meshing
In this section we propose a general scheme for parametric surface mesh construction. We show how to define a controlling metric map and we give one possible classification of the different metrics that can be used. Then, we discuss how to mesh the surface boundaries. Given the surface boundary discretization, we indicate how to create a mesh of the parametric domain which in turn allows us to obtain the desired surface mesh by applying the mapping function.

6.3.1 General scheme
Let Ω be the domain of R² corresponding to the parametrization of a given surface Σ. After defining a Riemannian structure on this space, the meshing problem can be formulated as the construction of a mesh of Ω in such a way that unit length edges are created in accordance with the Riemannian structure. This structure, used to govern the mesh construction of Ω, is defined in terms of the expected result (isotropic mesh, anisotropic mesh, mesh with specified sizes, and so on). In particular, the above control space may rely on the first fundamental form (through its metric map) and even on the second fundamental form (via its metric) of the surface Σ. From a practical point of view, the metric map is defined, by means of interpolation, using the discrete metric map associated with the vertices of a given mesh of Ω (thus serving as a background mesh). The pair constituted by the background mesh and the discrete map is the control space of our mesh construction problem, as introduced in the previous chapter. Meshing the domain, based on this control space, is achieved in two stages by

1. meshing the boundaries of Ω,
2. meshing Ω using the previous boundary mesh.

In what follows, we summarize the outline of this algorithm, discussed in detail in [Borouchaki et al. 1997a] and [Borouchaki et al. 1997b]. This algorithm enables us to achieve a governed mesh of Ω so as to obtain a mesh of Σ conforming (as much as possible) to the input specified properties (given via the control space).

6.3.2 Construction of a metric in Ω
The mesh of Σ, based on the given metric map M₃, is deduced from the mesh of Ω. The mesh of Ω is governed by the metric map M₂ that we now have to define based on the metric map M₃. Hence, we will define M₂ as a function of M₃ and P such that the equivalence

    governed mesh of Σ  ⟺  governed mesh of Ω

holds, where X ∈ Ω and P ∈ Σ are related by P = σ(X). According to Formula (6.6), which computes the length of a curve plotted on Σ using the parametric space, M₂(X) is the metric induced by M₃(P) on the tangent plane of the surface Σ at P. If Π(P) denotes the matrix transforming the R³ canonical basis into the local basis at P, the metric M₂ is obtained by transporting M₃ back to the parametric space through the derivative of σ.
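A minimal sketch of this pull-back, assuming the induced-metric relation M₂(X) = J(X)ᵀ M₃(σ(X)) J(X) with J the Jacobian of σ, which follows from Formula (6.6); the functions σ and M₃ below are user-supplied placeholders and the numeric example is hypothetical:

```python
import numpy as np

def induced_metric(sigma, M3, X, h=1e-6):
    """2x2 metric M2(X) induced in the parametric domain by the 3x3 metric
    M3(P) on the surface P = sigma(X), via the Jacobian of sigma."""
    X = np.asarray(X, dtype=float)
    J = np.empty((3, 2))                      # finite-difference Jacobian (3x2)
    for j in range(2):
        dX = np.zeros(2); dX[j] = h
        J[:, j] = (np.asarray(sigma(X + dX)) - np.asarray(sigma(X - dX))) / (2 * h)
    P = np.asarray(sigma(X))
    return J.T @ M3(P) @ J

# Hypothetical example: the surface z = 3 sin(2x) cos(2y) with a uniform
# isotropic target size h = 0.2, i.e. M3 = (1/h^2) I.
sigma = lambda X: np.array([X[0], X[1], 3.0 * np.sin(2 * X[0]) * np.cos(2 * X[1])])
M3 = lambda P: np.eye(3) / 0.2 ** 2
print(induced_metric(sigma, M3, [0.3, -0.1]))
```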
6.3.3 Classification of the useful metrics
At first, we may notice that three classes of metrics can be considered, either alone or after a combination (a metric intersection). Two metrics are of a purely geometric nature, related to the intrinsic properties of Σ: M₃(P, ρ), the metric associated with the smallest radius of curvature, and M₃(P, ρ₁, ρ₂), the metric associated with the principal radii of curvature. Moreover, a (non-geometric) metric M₃ may be specified, which could be uniform,
or of an arbitrary type (6.27). This general metric can be expressed in a more comprehensive way, as
The first metric is isotropic; it specifies a constant size h expected for the mesh elements. The second metric is anisotropic; it specifies the sizes h₁, h₂ and h₃ in the base vector directions of B(P). Then, based on our choice, we obtain the following possible mesh types.

Uniform isotropic mesh. We consider the metric map M₃(P, h) only.

Physics-based anisotropic mesh. We consider the metric map M₃(P) only.

Geometric mesh based on the minimal radius of curvature. We consider the metric map M₃(P, ρ) only.

Geometric mesh based on the principal radii of curvature. We consider the metric map M₃(P, ρ₁, ρ₂) only.

Physico-geometric mesh based on the minimal radius of curvature. We take into account the metric map M₃(P, ρ) intersected with the given uniform or arbitrary metric map.

Physico-geometric mesh based on the principal radii of curvature. We take into account the metric map M₃(P, ρ₁, ρ₂) intersected with the given uniform or arbitrary metric map.

Thus, there are six different possible situations that will be depicted in the application examples provided in a following section.
6.3.4 Boundary meshing
The geometry of the domain Ω is described by a discretization of its boundaries. This discretization is used to define a support by means of an adequate mathematical representation. The boundary mesh is governed by the metric map M₂ and is a discretization of the above support into unit length arc segments (according to the control space). To achieve this
boundary mesh, the support is approximated by polygonal segments which are in turn subdivided into unit length polygonal segments, resulting in a unit boundary mesh. Chapter 11 will discuss this boundary meshing process in more detail.
6.3.5 Domain meshing
The boundary mesh of Ω is a set of edges constituting a set of constrained edges, these edges having a set of points denoted by S(Ω) as vertices. To achieve the mesh of the domain Ω, we first construct, by means of the Delaunay method [Borouchaki,George-1997], an empty mesh of Ω [George-1991], containing only the points of S(Ω) as vertices. Then, by adding the relevant points inside this empty mesh, a new mesh is constructed, which is then optimized to obtain the final mesh of Ω. The internal points are created and then inserted in the domain iteratively. Indeed, at the first step of the process, the domain mesh is the empty mesh; then, at each step, the internal edges of the current mesh are analyzed and the internal points are

• constructed along the edges so as to, on the one hand, subdivide a given edge into unit length segments and, on the other hand, ensure that the so-created points are not too close (less than 1 apart) to any already existing points (with respect to the control space), and
• inserted in the current mesh using the constrained Delaunay kernel in a Riemannian context.

This process is repeated as long as the current mesh is affected. The set of processes and procedures required for the effective construction of a unit mesh has been described in the previous chapters, to which the reader is referred.
6.3.6 Surface mapping
Mapping the mesh of Ω onto the surface Σ is rather obvious: one just has to apply the function σ.
6.4 Some results
In this section, we will show some meshes resulting from the proposed method. Specifically, we will pay attention to the metric effect on the
resulting mesh. The selected example is a surface, analytically given by the equation

    (x, y, z = 3 sin(2x) cos(2y)) ,        (6.29)

where the parametric domain is a circle of center O = (0, 0) and of radius r = 3, the circle lying in the xy-plane.

Uniform isotropic mesh in the parametric domain. At first, the parametric domain is meshed in a uniform isotropic way, most of its elements having a size h = 0.2 (Figure 6.6, left-hand side, top). This mesh includes 876 points and 1,746 triangles. Using σ, this mesh is mapped onto the surface (Figure 6.6, right-hand side, top); an arbitrary surface mesh is then obtained which does not a priori conform to the geometry of the surface.
Figure 6.6: Uniform mesh (h = 0.2) in Ω and its mapping onto Σ (top). Uniform mesh (h = 0.4) onto Σ and the governed mesh of Ω supporting the construction (bottom).
General metrics. To clarify the different categories of surface meshes obtained by specifying a metric map of the type

    M₃(P) = ( a(P)  b(P)  c(P)
              b(P)  d(P)  e(P)
              c(P)  e(P)  f(P) )        (6.30)
with a(P) > 0, a(P) d(P) − b²(P) > 0 and Det(M₃(P)) > 0 in R³, we recall how to find the metric in R² governing the process from the above metric in R³. The metric map M₂ is induced from M₃ through the tangent vectors (1, 0, α(P)) and (0, 1, β(P)) of the surface,
where α(P) = 6 cos(2x) cos(2y) and β(P) = −6 sin(2x) sin(2y).
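As an aside, here is a small sketch (not from the book) of how the principal radii of curvature used by the geometric maps M₃(P, ρ) and M₃(P, ρ₁, ρ₂) can be evaluated for this example surface, from its first and second fundamental forms; the function name and cut-off value are illustrative only:

```python
import numpy as np

def principal_radii(x, y):
    """Principal radii of curvature of the surface z = 3 sin(2x) cos(2y),
    computed from the first and second fundamental forms."""
    gx = 6.0 * np.cos(2 * x) * np.cos(2 * y)      # dz/dx (alpha)
    gy = -6.0 * np.sin(2 * x) * np.sin(2 * y)     # dz/dy (beta)
    gxx = -12.0 * np.sin(2 * x) * np.cos(2 * y)
    gxy = -12.0 * np.cos(2 * x) * np.sin(2 * y)
    gyy = -12.0 * np.sin(2 * x) * np.cos(2 * y)
    I = np.array([[1 + gx * gx, gx * gy], [gx * gy, 1 + gy * gy]])
    II = np.array([[gxx, gxy], [gxy, gyy]]) / np.sqrt(1 + gx * gx + gy * gy)
    # Principal curvatures are the eigenvalues of I^{-1} II.
    k = np.real(np.linalg.eigvals(np.linalg.solve(I, II)))
    k = np.where(np.abs(k) < 1e-12, 1e-12, k)     # avoid division by zero
    return np.sort(np.abs(1.0 / k))               # (rho_1, rho_2) with rho_1 <= rho_2

print(principal_radii(0.3, -0.1))
```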
Uniform isotropic surface mesh. We successively specify the uniform isotropic metric maps M₃(P, 0.4), M₃(P, 0.2) and M₃(P, 0.1) and we obtain the meshes shown at the bottom of Figure 6.6 and in Figure 6.7, respectively. This choice results in setting b, c and e to 0 in the above general expression and setting a = d = f = h⁻², where h is 0.4, 0.2 and 0.1 respectively. The characteristics of these meshes are summarized in Table 6.1, where np is the number of points, ne is the number of triangles, tcpu is the CPU time (HP 735/99 MHz) required to achieve the mesh and e is the maximal stretching factor specified (i.e., the ratio of the largest specified size to the smallest one in any direction) in the parametric domain.
Metric map     np       ne       tcpu   e
M₃(P, 0.4)     1,098    2,190    1.4    6.
M₃(P, 0.2)     3,905    7,804    4.     6.
M₃(P, 0.1)     14,839   29,672   12.    6.
Table 6.1: Statistics related to the uniform metric maps.

These meshes do not conform to the geometry although they follow the given metric map specifications, as can be seen in the figures. Clearly, by decreasing the value of h, one can expect a mesh which conforms to the geometry of Σ. This feature can only be achieved at the expense of a large number of elements.

Geometric isotropic or anisotropic surface mesh. We now consider two geometric metric maps. The first is isotropic, M₃(P, ρ), while the other is anisotropic, M₃(P, ρ₁, ρ₂). These two maps are subjected to the constraint

    ρmin ≤ ρ ≤ ρmax
(6.35)
with ρmin = 0.05 and ρmax = 2.0 so as to avoid any degeneracy in terms of element sizes. The resulting meshes are displayed in Figure 6.8 and the corresponding statistics are reported in Table 6.2. By definition, these meshes conform to the geometry and give a second order approximation of the surface.

Metric map        np       ne       tcpu   e
M₃(P, ρ)          15,464   30,922   12.    6.
M₃(P, ρ₁, ρ₂)     2,243    4,480    2.     8.

Table 6.2: Statistics related to the geometric metric maps.

One may notice that the second mesh adequately approaches the surface with only 4,480 triangles, i.e., with almost one seventh the number of elements in the first mesh.
Figure 6.7: Uniform meshes (h = 0.2, top, and h = 0.1, bottom) onto Σ and the governed meshes of Ω which support the construction.
Figure 6.8: Isotropic geometric mesh (top) and anisotropic mesh (bottom) of Σ (right) and the governed meshes in Ω (left) which support the construction.
Arbitrary meshes. We now consider two metric maps. The first is isotropic and defined by
(6.36)
where f(r) = 0.05 (1 + r²) with r² = x² + y², and the second map, M₃(P, aniso), an anisotropic map, is defined by
where θ(z) = 0.1 |z² − 2.25| + 0.01. The first map leads to a quadratic increase in sizes with respect to x and y, corresponding to an increasing desired size as we move away from the origin, the minimal size 0.05 being specified at the origin. The second map specifies more element stretching as we approach the two planes z = 1.5 and z = −1.5, with a maximal stretching factor of 20. Figure 6.9, top (6.9, bottom, resp.), depicts the mesh corresponding to the map M₃(P, iso) (M₃(P, aniso), resp.) and Table 6.3 reports the corresponding statistics.

Metric map       np      ne       tcpu   e
M₃(P, iso)       6,065   12,124   6.     6.
M₃(P, aniso)     9,279   18,552   17.    95.

Table 6.3: Statistics related to the arbitrary maps.

In both cases, the resulting meshes conform to the specified metric maps but not to the geometry.
Figure 6.9: Arbitrary governed meshes, isotropic (top) and anisotropic (bottom), of Σ (right) and the governed meshes of Ω (left) which support the construction.
Geometric arbitrary meshes. We again consider the above two metric maps M₃(P, iso) and M₃(P, aniso), which are now modified to include the metric maps M₃(P, ρ) and M₃(P, ρ₁, ρ₂) respectively, so as to produce a metric map which approximates the geometry more accurately. Thus, the first metric map "intersection", M₃(P, iso, geom), is deduced from M₃(P, iso) by replacing f(r) by min(f(r), ρ), where P = (x, y, z = 3 sin(2x) cos(2y)) and ρ is the minimal radius of curvature at P. The second map, M₃(P, aniso, geom), is obtained by intersecting the initial metric map M₃(P, aniso) with the metric M₃(P, ρ₁, ρ₂), the anisotropic metric map of the radii of curvature. Figure 6.10 shows the resulting meshes, while the statistics for these meshes are given in Table 6.4.

Metric map            np       ne       tcpu   e
M₃(iso, geom)         17,192   34,378   14.    6.
M₃(aniso, geom)       32,212   64,418   45.    97.

Table 6.4: Statistics related to the combined geometric metric maps.

One may observe that the specified conditions are not satisfied everywhere, due to geometric features which constrain the construction.
Figure 6.10: Geometric governed meshes, isotropic (top) and anisotropic (bottom), of Σ (right) and the governed meshes in Ω (left) which support the construction.
6.5 Applications
Numerous applications of this mesh generation method can be exhibited.
6.5.1 Cylindrical meshing
A surface with a cylindrical topology is well-suited for the proposed mesh generation method. We assume that this surface, Σ, is defined by a set of values ρ = f(θ, z) or z = g(θ, ρ), where z is the altitude, ρ is the radius and θ is the corresponding angle. Such a surface can then be mapped onto a rectangle and the mesh generation method applies to this rectangle. Nevertheless, this first solution does not result in an optimal mesh. Indeed the vertical sides of the cylinder (or the horizontal sides, according to the way the mapping is defined), and more precisely the mesh segments corresponding to these sides, will be mesh edges lying on the surface Σ after mapping. Thus, an "artificial" line may be observed and the resulting mesh is no longer a periodic mesh. Figure 6.12 illustrates this method. The input data is a set of values associated with a uniform 120 × 69 grid (courtesy of ENST). The resulting geometric isotropic mesh includes 24,847 triangles and 12,569 points. To overcome the non-periodic nature of the mesh, a simple transformation is possible, which relies on mapping the cylinder onto a ring, instead of a rectangle, to define the planar parametric domain Ω to be meshed. Indeed, it is possible to find a new space by applying a stereographic projection from a point Q = (0, 0, L), where L > d, d being such that 0 ≤ z ≤ d, onto the plane z = 0 (Figure 6.11). The new parameters are then defined
by

    x' = (L r / (L − z)) cos(θ) ,     y' = (L r / (L − z)) sin(θ) .        (6.38)
The new parametric domain is thus a ring (see Figure 6.13, where the input data is a uniform 180 × 144 grid (courtesy of ENST) and where the resulting geometric anisotropic mesh has 20,374 triangles and 10,190 points), defined by the two circles centered at O whose radii are r and Lr/(L − d) respectively; the cylinder is defined, through this domain, by the following
Figure 6.11: Projection onto the plane z = 0.

equations
(6.39)
This projection trick allows for the creation of the mesh in Figure 6.13 which can be compared, to some degree, with the mesh in Figure 6.12 which was obtained using the classical approach.
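A minimal sketch of the projection (6.38) and of its inverse, as assumed from the geometry of Figure 6.11; the values of r, d and L are free parameters chosen here for illustration only:

```python
import numpy as np

def cylinder_to_ring(theta, z, r=1.0, L=4.0):
    """Map a point (theta, z) of the cylinder of radius r onto the planar ring,
    by stereographic projection from Q = (0, 0, L), cf. (6.38)."""
    s = L * r / (L - z)
    return s * np.cos(theta), s * np.sin(theta)

def ring_to_cylinder(xp, yp, r=1.0, L=4.0):
    """Inverse map: recover (theta, z) from a point of the ring."""
    s = np.hypot(xp, yp)
    theta = np.arctan2(yp, xp)
    z = L * (1.0 - r / s)          # from s = L r / (L - z)
    return theta, z

theta, z = ring_to_cylinder(*cylinder_to_ring(0.7, 2.5, r=1.0, L=4.0))
print(theta, z)                     # back to (0.7, 2.5) up to round-off
```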
Figure 6.12: Uniform isotropic geometric mesh of a cylinder created by the first approach (where the domain Ω is a rectangle).
6.5.2 Sampled surface meshing
For some types of surfaces (for example, surfaces with a cylindrical topology), a scanning device can provide (depending on the technology used) a set of values z = f(u, v) based on a uniform grid. It is then possible to exploit this grid so as to complete a parametric representation of Σ along with a discrete metric map. Thus, we return to a case where the mesh generation method described in this chapter applies.
6.5.3 Arbitrary surface meshing
CAD-CAM software packages define surfaces as a set of patches and construct triangular meshes (or quadrilateral or mixed meshes) from the mesh of these patches using a specific method. The process is performed on a patch basis (patch-dependent meshing) or by combining several patches together if possible (patch-independent meshing). The problems encountered in this kind of process include the following, among others:

• the tedious case where the surface is defined by a large number of patches,
• the case where the sizes of the patches vary widely,
• the case where the patch size is "small", so that the patch mesh is the patch itself (if it conforms to a triangular or quadrilateral shape),
• the fact that the patches do not form, in general, a (coarse) conforming mesh of the surface; indeed, overlapping regions and gaps may exist,
• the non-uniqueness of the mathematical basis for each patch (i.e., the description may differ from one patch to the other),
• and several others.

Thus, the mesh generation method as described in this chapter cannot be applied by itself. Nevertheless, we think that some of the ideas and concepts developed in this discussion can be used fruitfully in many general situations.
Figure 6.13: Anisotropic geometric mesh of a cylinder using the second approach (where the domain Ω is a ring).
6.5.4 Adaptive meshing
We consider a cycle of computations, for instance using the finite element method. At each iteration step, it is necessary to mesh the surface of the computational domain. This mesh is governed by a metric map deduced from the solution of the problem under consideration analyzed via an a posteriori error estimate. We return then to the previous situation where the geometric metric map associated with the intrinsic properties of the surface and a physical metric map derived from the solution analysis must be combined.
6.6 Notes
Surface meshing, in the context of finite element applications (i.e., surface meshing so as to provide the input to an R³ domain meshing method), has been investigated using several different approaches.

Octree approach. Based on the combination of the octree decomposition technique with a local meshing process at the octant level, the "octree"-type methods are probably among the oldest and simplest methods for surface mesh generation (see [Yerry,Shephard-1984], [Kela-1989] and [Perucchio et al. 1989]). For every domain whose surface is to be meshed, a size map is given, either known in advance or user-specified (for instance to take into account the curvature of the geometric model). The mesh generation method proceeds in two steps. In the first stage, using a bounding box, a tree is constructed by recursive subdivision such that all the model entities are present at the octant level ([Shephard,Georges-1991]). At every boundary octant, the vertices, edges and faces of the mesh are successively created (so as to maintain a valid representation) after analyzing the intersection of the model with the current octant. The part of the model surface intersected by an octant is explicitly known using an interface with the geometric modeler supporting the domain representation. The pattern analysis as well as the neighborhood analysis corresponding to these intersections enable us to achieve the surface mesh at the octant level. The resulting surface mesh is then the union of all these local meshes. A common feature of all these techniques is that every entity resulting from the interaction of the domain boundary with an octant boundary is present in the final mesh. In some cases, this feature may lead to ill-shaped elements. Thus, mesh optimization algorithms are needed.
We think that such a method (i.e., a tree-dependent method) is rather difficult to use if anisotropic meshes are desired.

Voxel approach. This approach follows the same idea as the octree method but differs in the way the model is described ([Lorensen,Cline-1987], [Frey-1993], [Frey et al. 1994]). The method consists of triangulating an implicit surface (thus, not a parametric surface) using a tetrahedral or hexahedral partitioning of the domain. From every cell of this partitioning, a set of faces is extracted, resulting in an iso-surface (via a "marching-cubes" algorithm).

Advancing-front approach. Using an advancing-front method in two dimensions ([A.George-1971]) in the case of surface meshing has been investigated in several papers ([Lo-1985], [Marcum-1996], [Rypl,Krysl-1994] and [Möller,Hansbo-1995]). The optimal points constructed from the edges of the current front are located on the surface mesh and their locations can then be corrected according to the curvature of the real surface. The merging (or shocks) between two fronts is dealt with using the geometry.
Chapter 7
Meshing in three dimensions

7.1 Introduction
We consider the R³ space and assume that a given domain, Ω, is known through its boundary discretization. This discretization is a list of faces (assumed to be triangular) defining a constraint (according to Chapter 3). The problem is to construct a mesh of Ω using only this input data. In fact, the resulting mesh will be a mesh of the polyhedral approximation of Ω defined by the given discretization. This mesh is expected to be well-suited for the expected application (e.g., finite element computations). From a practical point of view, the data may include faces other than the boundary faces and, in addition, it may contain specified edges or points. The given internal faces and edges are added to the boundary faces and edges so as to define the actual constraint. On the one hand, the specified points enrich the list of points formed by the endpoints of all the given faces and edges, meaning that after the point insertion process, the specified points will be mesh vertices without any special treatment. On the other hand, internal faces or edges will require a specific treatment. This chapter follows exactly the same framework as that of Chapter 5. Similarly, several situations are discussed, leading to different ways of considering the mesh generation problem. In the first case, the so-called classical case, the sole available data is formed by the constrained faces and edges (specifically, the faces and edges defining the domain boundary) and, if any, some additional internal points. The second case, also called the governed isotropic case, corresponds to the data of a size map defined everywhere in the domain that completes the previous type of (classical) data. This size specification is a way to define the desired size of the elements to be created. The last case, also referred to as the anisotropic case,
assumes that a similar map is specified, including both size and directional information. In any case, the mesh generation methods follow the same scheme as in two dimensions and thus include an initial creation stage resulting in a mesh without internal points¹. A major difference with the two-dimensional case is that the above mesh may not be an empty mesh in some cases, as will be seen. Nevertheless, for the sake of simplicity, such a mesh is also referred to as the empty mesh. It can be achieved using the results and algorithms discussed in Chapters 2 and 3. Afterwards, the methods differ in the way the required internal points are created. These steps constitute the main issues discussed in this chapter. As mentioned previously, this chapter follows in principle the same discussion as that used for the two-dimensional case. This is essentially due to the fact that the internal point creation methods used in two dimensions also apply in three dimensions without major changes. This is a feature which shows a posteriori the power of these methods; specifically, the edge method applies in any dimension. Despite the similarity between this chapter and Chapter 5, we will give a description which is detailed enough to permit an independent reading. Nevertheless, less detail will be given, except in the cases where there are conceptual differences between two and three dimensions.
7.2 The empty mesh construction
Let us recall the definition of the so-called empty mesh.

Definition 5.1. The empty mesh of a given domain Ω is a mesh whose vertices are the sole boundary points of this domain.

From a practical point of view, in three dimensions, Theorem 3.1 (Chapter 3) does not apply. In other words, there exist polyhedra with a valid boundary (in particular, a non-crossing boundary) for which it is not possible to construct a mesh without introducing a few internal points, also called Steiner points. The need for such points is made clear by the simple configuration in Figure 3.11, where it is not possible to construct three tetrahedra with positive volume lying on the given boundary faces. Nevertheless, assuming that such impeding configurations can be successfully dealt with, it is possible to obtain a mesh of the given domain.

¹ Apart from the specified points, if any.
Since, except for the possible specified points and the Steiner points required to achieve it, this mesh does not include any internal points, it is not generally well-suited for computational purposes. Nevertheless, this is the first mesh we can construct to cover the domain. In addition, this mesh will be used to identify the domain. Moreover, the empty mesh can serve as a geometric background when the required field points are created.
Bounding box construction. The construction of a box enclosing all the points known at this stage (basically the boundary points of the domain under consideration) is made for the sake of simplicity. This trick does not limit the range of applications of the mesh generation method. Using the enclosing box, the reduced incremental method can be used, as seen in Chapter 2. The bounding box is defined according to the extrema of the given point coordinates. For instance, it can be chosen as a cube or a rectangular parallelepiped enclosing these extrema. The bounding box is then divided into five (or six) tetrahedra, the resulting mesh being denoted by T₀ (one may notice, as in two dimensions, that a few points can be added to this box; while optional, this can improve the efficiency of the method).
Inserting the boundary points. The boundary points (actually, all the points known at this stage) are inserted in T₀ by means of the reduced Delaunay kernel (used in its constrained extension, cf. Chapter 3)

    T_{i+1} = T_i − C_P + B_P ,

where P denotes the (i+1)st point (i.e. P_{i+1}) of the set of known points (i = 0, 1, 2, ...), C_P is the cavity associated with P, and B_P is the ball of P. Upon completion of these insertions, we have a mesh, T_box, of the box enclosing the domain, whose element vertices are the P_i's and the few points added to define the box. This mesh completely fills the box and, therefore, is not a mesh of the domain Ω. To obtain a domain mesh, we first have to check that the boundary discretization entities are also mesh entities. Indeed, this is the only way to determine, by adjacency, whether a tetrahedron belongs to the domain or to its exterior.
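A schematic sketch of this insertion loop; the cavity and ball routines are placeholders standing for the procedures of Chapter 2, not existing library calls:

```python
def insert_points(T0, points, cavity, ball):
    """Reduced Delaunay kernel: T_{i+1} = T_i - C_P + B_P.
    `cavity(T, P)` returns the set of tetrahedra whose circumsphere contains P;
    `ball(C, P)` returns the tetrahedra joining P to the boundary faces of C."""
    T = set(T0)
    for P in points:
        C = cavity(T, P)       # elements to be removed
        B = ball(C, P)         # elements filling the cavity again
        T = (T - C) | B
    return T
```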
Boundary enforcement. In general, the mesh T_box does not include, in its list of edges and faces, all of the input boundary edges and faces. This means that an edge or a face whose endpoints are mesh vertices is not necessarily present in the current mesh. It should be noted that, in three dimensions, this drawback is encountered almost all the time, irrespective of the boundary mesh quality (especially when the domain is not convex). Thus, the material described in Chapter 3 is used to retrieve first the missing boundary edges and then the missing faces (and, more generally, all the initial edges and faces, i.e. the boundary edges and faces and the internal edges and faces, if any). Starting from T_box, we apply local modifications so as to achieve a new mesh including the given boundary discretization² exactly. After removing the elements exterior to the domain by means of one of several possible algorithms, we obtain the desired empty mesh. Contrary to the two-dimensional case, this empty mesh may include some internal points (i.e. Steiner points introduced so as to successfully recover the missing boundary entities³).

Remark. The algorithm for boundary enforcement allows for the detection of some invalid boundary configurations and thus provides a means of verifying that the given boundary discretization is correct. Therefore, overlapping entities, crossing entities and various inconsistencies can be detected.

Connected components. As the boundary faces defining the domain have been enforced, it is now possible to define the empty mesh related to this domain. More precisely, we determine the internal tetrahedra and then define the various connected components of the domain. To this end, we return to one of the algorithms used in two dimensions, in its extension to three dimensions:

1. Assign the value v = −1 to all elements of the box mesh and set c = 0 (c can be thought of as a color);

2. Find an element having one vertex identical to one of the box corners and set it to the value v = c;
² We consider the case where the (boundary) constraint is exactly satisfied, according to Definition 3.2.
³ As a bounding box is used, the problem related to determining the existence of a mesh of the domain whose element vertices are solely the boundary face points is not relevant.
3. Visit the elements by means of face neighborhood relationships:
   • if the color of the current element is not −1, GO TO 3 (the element has been colored previously);
   • else, if the face crossed when reaching the current tetrahedron is not a boundary entity, assign the value v = c to this tetrahedron and GO TO 3;
   • else, if the face crossed when reaching the current tetrahedron is a boundary member, GO TO 3;

4. Set c = c + 1 and GO TO 3 as long as a tetrahedron exists such that v = −1.

Using this algorithm, the tetrahedra are sorted into the various connected components of the domain, c being the component number. The tetrahedra with color 0 are outside the domain; it is thus possible to remove them. Nevertheless, this operation is not completed at this time, in order to preserve a convex background allowing us to maintain, as in two dimensions, a simple context for the field point creation.

Remark. As in two dimensions, the analysis of the colors enables the detection of some improperly defined configurations (like a boundary with a hole in it).
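A compact sketch of this coloring procedure, assuming each tetrahedron stores its face-neighbors together with a test telling whether the shared face is a boundary face (the data layout is hypothetical):

```python
def color_components(tets, neighbors, is_boundary_face, seed):
    """Assign a component color to every tetrahedron.
    `neighbors[t]` lists the tetrahedra adjacent to t through its faces,
    `is_boundary_face(t, n)` tells whether the face shared by t and n is a
    boundary face, `seed` is an element touching a corner of the box."""
    color = {t: -1 for t in tets}
    c = 0
    stack = [seed]
    while True:
        while stack:                       # flood one connected component
            t = stack.pop()
            if color[t] != -1:
                continue
            color[t] = c
            for n in neighbors[t]:
                if color[n] == -1 and not is_boundary_face(t, n):
                    stack.append(n)
        rest = [t for t in tets if color[t] == -1]
        if not rest:
            return color                   # color 0 corresponds to the exterior
        c += 1
        stack = [rest[0]]
```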
7.3 Field points (creation)
Due to element quality concerns, it is generally necessary to insert points inside the domain, especially for finite element computations. Numerous methods to perform this step are feasible, just as in two dimensions.

7.3.1 Several methods for field point creation
Formally speaking, again we find that some of the methods discussed in two dimensions apply without change to three dimensions while others require some modification to extend them. Tentative list of methods. As mentioned before, numerous methods are suitable for internal point creation. In fact, two questions need to be considered. The first question concerns the field point location and the second question is related to the number of requested points. The principal methods proposed in the literature are based on
• circumcenter creation for every tetrahedron that violates certain criteria related to volume, inradius, aspect ratio (or quality), etc.,
• centroid creation (weighted or not) for tetrahedra judged too large,
• creation of points along the edges of the current mesh,
• creation using other meshing techniques, for instance
  – using an octree⁴ method,
  – using an advancing-front method⁵,
  – using a given point distribution, created by another meshing method,
• creation by means of a "variogram"; for a discussion of this, see Chapter 5 where this method has already been discussed for two dimensions.

Thus, the general principle is either to create a point and to insert it immediately by means of the Delaunay method (using the so-called Delaunay kernel), repeating the process as long as points can be created, or to generate a set of points, to insert them and to iterate the process as long as a non-empty set is created.

Classical case. The only goal here is to locate the points so as to obtain nearly equilateral elements in the resulting mesh or, at least, elements with the best possible quality. In this context (in three dimensions), all the methods briefly mentioned lead to suitable results at the expense of more or less optimization applied to the mesh resulting from the field point insertion. This feature, typical of three dimensions, is due to the possible creation of tetrahedra whose edges have satisfactory lengths and whose faces enjoy a nice aspect ratio but whose volume can be close to 0. This situation, already mentioned in Chapter 1, implies that the mesh constructed
⁴ An octree is a data structure based on a recursive subdivision scheme. In this context, this approach consists in constructing a cube or a rectangular parallelepiped enclosing the domain. This box forms the parent cell of a recursive subdivision scheme, every current cell in the structure being subdivided into eight similar cells as long as the considered cell contains more than one boundary point. This method results in a set of cells whose size is directly related to the boundary discretization density.
⁵ An advancing-front technique consists in constructing the "optimal" set of points associated with a current front. The initial front is formed by the boundary faces input as data. Then, the front is updated after each operation.
from a set of nicely located points is not necessarily a mesh with nicely shaped elements. This points out a difficulty specific to three dimensions which makes point insertion quite different from that in two dimensions.

Isotropic case with size prescription. The aim here is to locate the points so that the resulting mesh consists of nearly equilateral tetrahedra (or elements with the best possible quality) and, in addition, such that the tetrahedra sizes are as close as possible to a pre-specified size. For this case, all the above methods are feasible, keeping in mind that elements with degenerate volume may be created.

Anisotropic case. The goal here is to locate the points so that the resulting mesh consists of tetrahedra whose sizes in each direction are as close as possible to the pre-specified data. For this, the method based on edge point creation leads to satisfactory results. The method using an advancing-front approach probably leads to similar results. With regard to the other methods, the question is more difficult than it was in two dimensions and is not yet decided⁶.
7.4 Control space
The background introduced in two dimensions for the internal point creation stage clearly applies in three dimensions. The notion of a control space appears to be well suited so as to govern this stage. As given in Chapter 5, the control space definition is still valid and we assume in what follows that an "ideal" control space is given (see Chapters 9 and 12 for realistic mesh generation situations).
7.5 Creation along the edges, classical case
In this case, the sole data is the discretization of the domain boundaries. Hence, we do not have an explicitly defined control space. Nevertheless, to return to the general framework, we would like to construct, as best we can, such a control space. To this end, the edges of the current mesh, say the empty mesh, are used.

⁶ Clearly, this advice can be refuted.
Control space. The control space is then
• A, the current mesh, i.e. the initial empty mesh and, in case of iterations, as will be seen later, the mesh corresponding to the previous iteration, along with
• H, defined by means of a P1 interpolation, using the sizes, denoted h_loc, associated with the points of the domain boundaries and, furthermore, in an iterative process, by using the h_loc assigned to the vertices of A.

The initial sizes can be computed by averaging the lengths of the edges emanating from a boundary point or by considering an average of the surface areas of the faces sharing a given vertex. One may observe that it is a tedious task to assign a size to a Steiner point; this is why, in certain cases, these points⁷ will be ignored, meaning that every edge with such points as endpoints will not be considered in the point creation process. After a few iterations through the mesh edges, these artificially rigid edges are usually removed due to the side effect of inserting points along neighboring edges.

Point creation. With this control background, we can now develop the point creation method discussed in two dimensions. It is based on edge subdivision and it applies without modification in three dimensions. The current mesh edges (see the remark about the Steiner points) are examined and their lengths are compared with the stepsizes related to their endpoints, h_loc1 and h_loc2. The objective of the method is to decide whether one or more points must be created along each edge. If so, both the number of required points and their locations must be determined. The objective is, on the one hand, to saturate the edge and, on the other hand, to obtain a smooth point distribution. The required algorithm is similar to its two-dimensional counterpart given in Chapter 5. Once all the current mesh edges have been processed, a series of points is obtained and then filtered, simply using a grid (cf. Chapter 2). This filtering, as in two dimensions, is needed because the vertices are well-positioned along each edge but may be ill-spaced globally: for instance, one may observe the case where all of the edges emanate from a given point. The retained points are then inserted using the Delaunay kernel and the entire process is repeated as long as any mesh edges need to be subdivided.
⁷ Removing the Steiner points is not always possible. This task can be successful in some cases but, in general, not in all.
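The edge-saturation step just described can be sketched as follows; the linear interpolation of the size between the endpoint values h_loc1 and h_loc2 and the uniform placement of the points are illustrative simplifications, not the book's exact algorithm:

```python
import numpy as np

def subdivide_edge(A, B, h1, h2):
    """Return the points to create along edge AB so that each sub-segment has a
    length close to the local size, here averaged between h1 and h2."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    length = np.linalg.norm(B - A)
    n = int(round(2.0 * length / (h1 + h2)))   # target number of sub-segments
    if n <= 1:
        return []                              # edge already saturated
    # Uniform distribution of the parameter (a graded distribution could be
    # used to better follow a strong size variation between h1 and h2).
    ts = np.linspace(0.0, 1.0, n + 1)[1:-1]
    return [tuple(A + t * (B - A)) for t in ts]

print(subdivide_edge((0, 0, 0), (1, 0, 0), 0.25, 0.25))   # 3 internal points
```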
7.6 Creation along the edges, isotropic case
We now return to the method described in Chapter 5. Here, the control space is based on a metric map provided as a matrix map, where a matrix is

    M(x, y, z) = ( a(x,y,z)   0          0
                   0          a(x,y,z)   0
                   0          0          a(x,y,z) )        (7.1)

or, when a straight line is considered, as assumed in the discussed method,

    M(M(t)) = ( a(t)  0     0
                0     a(t)  0
                0     0     a(t) )        (7.2)
with M(t) being the point (x, y, z) and t being the parameter. We also have

    a(t) = 1/h²(t) ,        (7.3)

with h(t) denoting the expected size at the point with parameter value t. This size is the desired length of all edges emanating from this point. The internal point creation stage is performed as it was in two dimensions. The targeted elements are unit length tetrahedra according to the control space, meaning that unit length edges are expected in the control space, while edges of length h are locally expected in the usual space. This means that the unit sphere in the control space is the sphere of radius h in the usual space. Hence, the point creation method proceeds as in two dimensions. The edge lengths are computed and the edges are subdivided, if needed, by creating a number of points equal, as closely as possible, to these lengths minus one. Then the points are located in such a way that the distance between points is at least 1 in the control space.
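For instance, the metric length of an edge AB under (7.1)–(7.3) can be approximated by integrating 1/h along the edge; the quadrature sketch and the example size field below are illustrative, not the book's formulas:

```python
import numpy as np

def metric_length(A, B, h, samples=100):
    """Length of edge AB in the isotropic control space, i.e. the integral of
    ||B - A|| / h(M(t)) for t in [0, 1], approximated by the midpoint rule."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    ts = (np.arange(samples) + 0.5) / samples
    hs = np.array([h(A + t * (B - A)) for t in ts])
    return np.linalg.norm(B - A) * np.mean(1.0 / hs)

# Example: size growing linearly with x; the edge has Euclidean length 1.
h = lambda M: 0.1 + 0.2 * M[0]
print(metric_length((0, 0, 0), (1, 0, 0), h))   # roughly the number of unit sub-edges
```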
7.7 Creation along the edges, anisotropic case
Just as with the method described in Chapter 5, the control space is now related to a metric map (or matrix map) whose matrices are of the form
    M(x, y, z) = ( a(x,y,z)  b(x,y,z)  c(x,y,z)
                   b(x,y,z)  d(x,y,z)  e(x,y,z)
                   c(x,y,z)  e(x,y,z)  f(x,y,z) )        (7.4)
or, when a straight line is considered, as assumed in the discussed method,

    M(M(t)) = ( a(t)  b(t)  c(t)
                b(t)  d(t)  e(t)
                c(t)  e(t)  f(t) )        (7.5)

with M(t) the point (x, y, z) and t being the parameter. The aim, as in two dimensions, is to construct nearly equilateral tetrahedra of unit size (i.e. with unit length edges) according to the metric defined in the control space. In terms of the usual space, this implies that the tetrahedra enjoy a size (h₁, h₂, h₃), i.e., their edges have a length (h₁, h₂, h₃), where hᵢ denotes the desired length in the direction dᵢ, i = 1, 3. This means, formally speaking, that the unit sphere in the control space is locally the ellipsoid of principal directions (d₁, d₂, d₃), with length h₁ following d₁, length h₂ following d₂ and length h₃ following d₃ in the usual space. The dᵢ's, as well as the hᵢ's, are the eigen-elements (eigenvectors and eigenvalues) of the matrix M (more precisely, dᵢ is the ith eigenvector and hᵢ = 1/√λᵢ with λᵢ the corresponding eigenvalue). Again, the point creation method proceeds as in two dimensions. The edge lengths are computed and the edges are subdivided as needed by creating a number of points equal, as closely as possible, to these lengths minus one. Then the points are located in such a way that the distance between points is at least 1 in the control space.
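A sketch of the corresponding anisotropic length computation: the metric is sampled along the edge and the integral of √(ABᵀ M AB) over [0, 1] is approximated by a midpoint rule (illustrative only; the example metric is arbitrary):

```python
import numpy as np

def anisotropic_length(A, B, M, samples=100):
    """Length of edge AB in the Riemannian control space defined by the field
    of symmetric positive definite matrices M(point)."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    AB = B - A
    ts = (np.arange(samples) + 0.5) / samples
    return sum(np.sqrt(AB @ M(A + t * AB) @ AB) for t in ts) / samples

# Example: desired sizes 0.1 along x and 0.5 along y and z, constant in space.
M = lambda P: np.diag([1 / 0.1 ** 2, 1 / 0.5 ** 2, 1 / 0.5 ** 2])
print(anisotropic_length((0, 0, 0), (1, 1, 0), M))   # ~ sqrt(10^2 + 2^2)
```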
7.8 Advancing-front type creation
The method, introduced in Chapter 5, combining the advancing-front point placement strategy and the Delaunay approach to connect the points, can be extended to three dimensions, provided that some modifications are made.

Control space. The control space is similar to that of the previous case:

• A is the current mesh, the initial empty mesh or, in the case of an iterative scheme (as will be seen), the mesh corresponding to the previous iteration,
• H is defined by means of a P1 interpolation, starting from the sizes associated with the boundary points, h_loc, and then from the h_loc assigned to the vertices of A.

The initial sizes are computed as the average of the lengths of the edges emanating from a boundary point (different methods can be used to evaluate these quantities, for instance, by considering the surface areas of the faces sharing the boundary point under examination).
Point creation. Given this control background, we introduce a point creation method based on an advancing-front point-placement strategy. The front represents the interface between "acceptable" and "unacceptable" elements with respect to a quality measure. Starting from the empty mesh, a list of tetrahedra is determined based on the intrinsic properties of the elements (in-radius). Thus, the tetrahedra are divided into suitable tetrahedra (i.e. having a size conforming to the control space) and tetrahedra that must be processed. By default, every tetrahedron exterior to the domain is classified as suitable. The active tetrahedra, neighbors of the suitable tetrahedra, are those where a point can be created. The interface between the retained elements and the elements that must be processed is a set of triangular faces defining a front which forms the basis for the internal point creation. The points to be inserted are located, for a given front face, so as to permit the construction of an optimal tetrahedron according to the control space. The set of so-created points is filtered, resulting in a set of points that must be effectively inserted. The goal is to discard all points that do not result in a mesh improvement. To complete this task, we use the circumspheres of the elements within which the points fall. If the visited point is interior to such a sphere, it is retained, otherwise it is removed from the list (in fact, inserting a point in the latter situation by a Delaunay method would not result in including this element in the corresponding cavity).
Figure 7.1: Retained point.
Figure 7.2: Rejected point.
The set of retained points at this step is then globally filtered so as to discard any point too close to an already existing point coming from the
analysis of a neighbouring face. Figures 7.3 and 7.4 show, respectively, the initial front and the front after the fourth iteration of this mesh generation process. The example depicted is the domain exterior to an airplane (courtesy of DA).
Figure 7.3: Initial front.
Figure 7.4: Front after 4 iterations.
The proposed algorithm, including some control processes, as presented in [Frey et al. 1996], converges and produces good quality isotropic meshes (without requiring any optimization process).
7.9 Field points (insertion)
Any point construction method can be used to create a set of points. These points can then be inserted by means of the adequate Delaunay kernel.

• In a classical context, we consider the constrained Delaunay kernel as described in Chapters 2 and 3.
• For the governed isotropic case, the same kernel is well-suited.
• For the anisotropic case, one must consider one of the constrained Delaunay kernels described in Chapter 4.

The point insertion process is completed in successive waves. The first wave results from the empty mesh edge analysis (edge method) or from the empty mesh front analysis (advancing-front method). The following waves correspond to the analysis of the edges of the previous mesh or to that of the front associated with the current mesh. This front is the interface between
the suitable elements and the active elements, which are the candidate elements for point placement.
7.10 Specified internal edges and faces
Once the empty mesh is constructed, the set of specified edges and faces (i.e. the boundary discretization (notice that a boundary edge is necessarily a boundary face edge) as well as the internal edges and faces, if any) exists as mesh entities. During the point insertion process, it is possible to remove, unless specific attention is paid to them, one or more specified internal edges or faces. This happens in three dimensions but is not a problem in two dimensions. This is due to the way in which the cavities are defined, these cavities being the support of the point insertion. Indeed, as discussed in Chapter 2, the cavities are constructed by adjacency starting from the base. As such, it is impossible to flip an edge in two dimensions (every edge inside a cavity separates it into two parts with disjoint interiors). However, in three dimensions, it is common to flip a specified internal edge or face⁸. In such cases, even the constrained version of the Delaunay kernel does not avoid this drawback. In fact, a specified internal face can be an internal face of the cavity while never being explicitly crossed when visiting the elements by adjacency. As a result, an internal edge or face existing at some step of the process (for instance, after enforcing the constraints) can be removed when inserting a point in some neighborhood. Thus, the resulting mesh will no longer satisfy the initial constraint. There are several ways to solve this problem. One solution consists of explicitly checking whether any specified internal edge or face would be removed when a point is proposed for insertion. That is a constraint which must be added to the cavity construction so as to avoid this problem.
7.11 Optimization
In three dimensions, as will be detailed in Chapter 8, there are several mesh optimization operators. In fact, one can

• remove a free face⁹,
⁸ While it is not possible to turn around a boundary face, since the boundary has been enforced (otherwise, it would mean that the boundary is not closed).
⁹ Cf. supra about this notion of a free point, edge or face.
• remove a free edge by means of shell modifications,
• collapse an edge by merging its two endpoints (when it is topologically valid),
• move a free vertex,
• remove a free vertex.

In our experience, mesh optimization is very useful in the classical meshing case. The purpose is, in this case, to obtain a mesh with a nice quality (in terms of the element aspect ratio, see Chapter 8) and, while the two point creation methods discussed in this chapter result in a good point placement, their insertion may lead to rather poor connections, resulting in poorly shaped elements. In the governed isotropic case as well as in the anisotropic case, some amount of mesh optimization proved to be useful to achieve a certain level of improvement. Specifically, the few "ill-shaped" elements that are constructed anyway are largely improved or removed. In addition, even the nicely shaped elements are globally improved. One may notice that, in these cases, the mesh optimization is governed by the quality based on the provided metric map (for more details, refer to Chapter 8).
7.12 General scheme of the mesh generator
In this section, we collect the material discussed throughout this chapter so as to propose a general scheme for a mesh generation method. Seven steps can be identified as follows (a schematic driver is sketched after the list).

• Preparation step.
  – Data input: point coordinates, boundary faces as well as internal faces and edges (if any),
  – Construction of the bounding box,
  – Meshing of this box with a few tetrahedra.
• Construction of the box mesh.
  – Insertion of the given boundary and specified internal points in the box mesh using the Delaunay kernel.
• Construction of the empty mesh.
  – Search for the missing specified edges,
  – Enforcement of these edges,
  – Search for the missing specified faces,
  – Enforcement of these faces,
  – Definition of the connected components of the domain.
• Internal point creation and insertion.
  – Control space definition,
  – (1) Internal edge analysis, point creation along these edges,
  – Point insertion via the Delaunay kernel and return to (1).
• Domain definition.
  – Removal of the elements exterior to the domain,
  – Classification of the elements with respect to the connected components.
• Optimization.
  – Local modifications,
  – Point relocation, ...
• File output.

When using the advancing-front approach described in this chapter, one has to replace the step denoted by (1) in the general scheme: the analysis of the edges of the current mesh is then replaced by the front analysis. It is possible to develop a somewhat different scheme, as proposed by other authors. One is referred to the remarks given in the two-dimensional discussion in Chapter 2.
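A schematic driver corresponding to the seven steps above; the `kit` object and every method called on it are hypothetical placeholders for the building blocks of Chapters 2 to 8, not an existing library interface:

```python
def generate_mesh(kit, boundary_faces, internal_entities=(), specified_points=()):
    """Skeleton of the three-dimensional mesh generator described in this chapter."""
    points = list(specified_points) + kit.endpoints(boundary_faces, internal_entities)
    T = kit.box_mesh(kit.bounding_box(points))              # a few tetrahedra
    T = kit.insert(T, points)                               # Delaunay kernel
    T = kit.enforce(T, boundary_faces, internal_entities)   # edges, then faces
    parts = kit.connected_components(T, boundary_faces)
    control = kit.control_space(T)
    while True:                                             # successive insertion waves
        new_pts = kit.filter(kit.create_edge_points(T, control), control)
        if not new_pts:
            break
        T = kit.insert(T, new_pts)
    T = kit.remove_exterior(T, parts)
    return kit.optimize(T, control)                         # local modifications, relocation
```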
7.13 About some results
Here we will present four classical mesh examples for three-dimensional domains. The mesh generation process is based on the edge method. The input data is simply the discretization of the domain boundary. The goal is to obtain, on the one hand, a maximum number of good elements, i.e. with a quality as close as possible to 1 and, on the other hand, to have the worst elements having a quality as close as possible to a target value
imposed by the worst data faces (while checking that the worst elements have these faces as entities). We present examples that were previously (partially) dealt with. The first example is the domain enclosed in the box of Figure 2.15. The next examples (2, 3 and 4) correspond to Figures 3.27, 3.28 and 3.29, to which the reader is referred to get a feeling of the geometries. The statistics related to these examples are reported in Table 7.1, where the following values are given:

• ne, the number of tetrahedra,
• np, the number of vertices,
• target, the quality target value (cf. Chapters 1 and 8), which is that of the best tetrahedron that can be constructed using the worst input face,
• QM, the global quality of the mesh, i.e. that of its worst element (to be compared with the value target),
• 1-2, the percentage of elements that are well-shaped (i.e. those whose quality values range between 1 and 2),
• t, the required CPU time (in seconds on a HP 9000/735) for the mesh generation (including i/o).
         ne        np       target   QM      1-2   t       see the figure
case 1   286       105      7.38     9.61    66    0.36    2.15 or 2.17
case 2   2,658     880      6.01     7.05    78    1.19    3.27
case 3   7,230     1,917    10.24    11.65   76    2.75    3.28
case 4   369,304   62,495   38.06    42.06   91    53.30   3.29
Table 7.1: Statistics related to the four classical examples (isotropic mesh constructed using only the boundary discretization of the domain as input).

These results demonstrate several characteristics of the meshes. First, QM is indeed close to the target. Second, the proportion of nicely shaped elements increases with the domain size. Finally, the meshing time per element created decreases for larger domains. This is because the proportion of the pure triangulation stage of the algorithm, as compared with the other stages (boundary integrity recovery, internal point construction, and so on), increases for larger meshes.
To more clearly appreciate the efficiency of the mesh generation algorithm, we also give Table 7.2, in which v is the number of elements constructed per minute, for six examples¹⁰ of different sizes. Moreover, we denote by Del the percentage of the mesh generation time devoted solely to the triangulation process (the point insertion).
            np          ne          t (sec., HP 9000/735)   v         Del
Example 1   1,014       3,601       1.54                    140,000   17
Example 2   36,252      191,279     28.26                   406,000   49
Example 3   62,495      369,304     53.30                   415,000   44
Example 4   214,184     1,243,871   179.                    417,000   46

            np          ne          t (sec., HP PA 8000)    v         Del
Example 5   518,759     3,067,937   373.62                  492,000   54
Example 6   1,452,192   8,700,574   1125.                   464,000   49

Table 7.2: Mesh generation algorithm efficiency.
One may observe that v is linearly proportional to Del and, as previously mentioned, v is smaller for small meshes than for large ones. It should also be noted that the efficiency of the mesh generation method is about three times lower than the efficiency of a triangulation method (as seen in Chapter 2). For the isotropic case with size control and the purely anisotropic case, we do not have, at this time, anything other than academic examples where the geometries are quite simple (sphere, cuboid). Thus we do not give any concrete examples related to these mesh generation situations. The examples depicted in Figure 7.5 and in the following figures show a simple sphere. Figure 7.5 shows a cut of the classical mesh obtained using only the uniform mesh of the domain boundary as input. Figures 7.6 and 7.7 show, by means of a cut again, the mesh resulting from a given size specification. In the first case, the sizes vary linearly from the centroid to the boundary, with a repulsive effect of the centroid of the sphere. Conversely, the other figure corresponds to the same kind of data with an attractive effect at the centroid. Table 7.3 reports the statistics related to these two examples of controlled meshing.
¹⁰The last two examples have been run on a different computer due to memory resource considerations.
Figure 7.5: A classical mesh without any specific control (cut). As the boundary is uniformly meshed, the expected mesh must be as uniform as possible. This mesh includes 11,957 points and 65,257 elements.
Figure 7.6: Size control (repulsion towards the centroid). Figure 7.7: Size control (attraction towards the centroid).
         ne        np       target   QM     1-2   see Figure
case 1   65,257    11,957   1.13     2.71   99    7.5
case 2   11,681    2,991    1.13     2.32   93    7.6
case 3   227,883   39,019   1.13     2.70   99    7.7
Table 7.3: Statistics related to three examples of isotropic meshes, without size control (case 1) or with size control (cases 2 and 3).
7.14 Notes
Periodic mesh. Periodic mesh construction is required for numerical simulations of some specific problems. The periodic nature is either related to both the equations that must be solved and the geometry acting as computational support, or it is only related to the geometry.

Medial surface. Given an empty Delaunay mesh of a domain in R³, it is possible to compute its medial surface. This surface is the locus of
the centers of the spheres of maximal radius inscribed in the domain; this surface is also referred to as the skeleton of the domain. A description of a method for medial surface construction can be found in Chapter 13 along with some applications for such constructions.
Chapter 8

Optimizations

8.1 Introduction
The quality of a finite element solution is strongly related to the quality of the underlying mesh. Mesh optimization can be characterized in two ways, either as part of a mesh generation algorithm (see the previous chapters) or as an a posteriori process (applied to a mesh resulting from a given construction method). Irrespective of the characterization, optimization techniques rely on some basic operators. Mesh optimization is governed by a notion of quality, which is based on the context (isotropic or anisotropic) and on the problem requirements. The purpose of this chapter is to define the notion of quality and to introduce the various optimization algorithms that can be used for triangular or tetrahedral meshes. The topological operators, which act on the vertex connections, and the geometric operators, which act on the vertex locations, will be described. One may notice that most of the operators have a local effect and, therefore, defining a global mesh optimization method is a rather cumbersome task, both for strategy and efficiency reasons.
8.2 Mesh quality
In Chapter 1, the notion of mesh quality, based on element quality, was introduced. Based solely on geometric criteria, this measure only quantifies the element shapes. These results will be reviewed and the notion of quality will be extended to various pertinent situations encountered in finite element simulations.
8.2.1 Shape and size qualities
Isotropic case, aspect ratio. Let K be an element in a given mesh M. The value¹

    Q_K = α h_max / ρ_K                (8.1)

where α is a normalization factor, h_max is the element diameter² and ρ_K is the inradius (the radius of the in-circle (or the in-sphere) related to the element), represents a measure of the element aspect ratio (cf. Chapter 1) and is called the element shape quality.

¹The inverse of this quantity is also used, its range of variation being the interval [0,1]. The aim is then to maximize the quality. The worst element in a given set of elements is the element whose quality is the closest to 0. We would prefer to use the quality expressed in Relation 8.1, as the variation ranges from one to infinity, thus having a larger amplitude of variation (instead of the above interval [0,1]). In this way, a fast discrimination of the elements is made possible.
²The diameter of a simplicial element is the longest edge length.

The quality of a set of elements, E, is defined as that of its worst element. This quantity, Q_E, is defined by

    Q_E = max_{K in E} Q_K .           (8.2)

The shape quality, Q_P, of a point P is measured through the quality of the elements forming its ball B_P as

    Q_P = max_{K in B_P} Q_K .         (8.3)

Similarly, the shape quality, Q_a, of a given edge a is that of its shell. Finally, the quality of a mesh is defined as

    Q_M = max_K Q_K ,                  (8.4)

the maximum being taken over all the elements K of the mesh.

In this context, the sole purpose of an optimization process is to minimize the quality of the elements and, specifically, the value Q_M. To this end, one has to locally improve the values of the Q_P's so as to improve the values Q_a, Q_E and, finally, Q_M.

Isotropic case, element size. Provided a size map given everywhere in the space, this map specifies the desired size expected in the vicinity of a point. In other words, this map indicates the desired lengths of the edges sharing this point. Let a be a given edge.
We denote by l_a its current length and by h_a its desired length as specified by the size map. Then the size quality of edge a is defined by

    Q'_a = max ( l_a / h_a , h_a / l_a )           (8.5)

and, consequently, the mesh size quality is, as in the above case, the quality value associated with the worst edge in the mesh. According to this definition, the size quality ranges from 1 to infinity. The objective is ultimately to minimize Q_M (the mesh shape quality), while conforming to the required size quality for the mesh edges. To this end, it is sufficient to minimize Q'_a, i.e., to ensure that this value tends towards 1 for all of the edges in the mesh, while preserving an acceptable shape quality. In this way, a control of the element size is obtained by means of the edge length control. This control clearly induces the grading variations over the mesh elements.

It may be observed that a triangle having three approximately unit length edges is necessarily a well-shaped element (in terms of aspect ratio). However, this does not apply for a tetrahedron. In other words, optimizing the edge lengths in three dimensions may not result in a mesh where the elements are well-shaped. To retrieve such a mesh, two solutions can be used:
• successively apply a size optimization and a shape optimization (and iterate),
• define as a quality function a function controlling both the size and the aspect ratio (cf. hereafter).

Anisotropic case. Let us consider a metric map given everywhere in the space. This map requires the classical shape quality relation

    Q_K = α h_max / ρ_K                            (8.6)

which holds in a Euclidean space, to be replaced by a similar relation in the Riemannian case. As seen in Chapter 4, only an approximation can be developed to handle this notion. Let us consider the two-dimensional case. Let K be a triangle and let P_1, P_2 and P_3 be its vertices. The shape quality of K can then be defined as

    Q_K = max_{i=1,3} Q_K^i                        (8.7)
where Q_K^i is the triangle quality, in Euclidean space, related to the metric specified at vertex P_i of K. To evaluate the quality Q_K^i of K, one just has to transform the Euclidean space associated with the metric specified at vertex P_i into the classical Euclidean space and then consider the quality of the triangle K^i (which is the image triangle of K). This means that

    Q_K^i = Q_{K^i} .                              (8.8)

Let (M_i)_{i=1,3} be the metrics specified at the three vertices of K. Each M_i can be decomposed as

    M_i = R_i^t Λ_i R_i                            (8.9)

where R_i is the (orthonormal) matrix of the eigenvectors of M_i and Λ_i = diag(λ_{i,1}, λ_{i,2}) is the diagonal matrix formed by the eigenvalues of M_i. Let (h_{i,j})_{j=1,2} be defined by h_{i,j} = 1/√λ_{i,j}. The transformation associated with M_i is then

    U_i = H_i R_i                                  (8.10)

where H_i is the diagonal matrix diag(1/h_{i,1}, 1/h_{i,2}). As a result, the vertices of K^i are U_i P_iP_1, U_i P_iP_2 and U_i P_iP_3 (the image triangle being defined up to a translation), and the quality Q_K^i can be expressed, using the diameter, the half-perimeter and the surface area of K^i, as
    Q_K^i = α h^i_max p_{K^i} / S_{K^i}            (8.11)

with α being the same normalization factor and where h^i_max, p_{K^i} and S_{K^i} denote the diameter, the half-perimeter and the surface area of K^i. However, since the edges of K^i are the images U_i e of the edges e of K,

    ||U_i e|| = sqrt( <M_i e, e> )

and

    S_{K^i} = sqrt( det M_i ) S_K ,

we have

    Q_K^i = α ( max_e sqrt(<M_i e, e>) ) ( (1/2) Σ_e sqrt(<M_i e, e>) ) / ( sqrt(det M_i) S_K ) ,

where the maximum and the sum are taken over the three edges e of K. One may observe that the above expression is exactly the usual quality relation (Relation 8.6) when the matrices are the identity matrices (i.e., it reduces to the classical case). Indeed, if we do so, we have successively

    max_e ||e|| = h_max  and  (1/2) Σ_e ||e|| = p_K ,

where p_K denotes the half-perimeter of K, and

    sqrt(det M_i) S_K = S_K ,

where S_K is the surface area of K, which then reduces to

    α h_max p_K / S_K ,

a formula equivalent to Relation 8.6 (since ρ_K = S_K / p_K). In three dimensions, a similar expression exists for the tetrahedron quality definition.
Exercise 8.1. Establish the formula giving the shape quality of a tetrahedron (i.e., extend Formula 8.11).
Back to the isotropic case with specified sizes. The shape quality as defined in the general anisotropic case enables us to elegantly define the notion of an element quality whenever a size map is specified. Formally speaking, to return to the isotropic case, one simply has to use diagonal matrices with equal diagonal factors in the above expression.
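To make the basic Euclidean quantity underlying all these definitions concrete, here is a minimal two-dimensional sketch of the shape quality of Relation 8.1, computing the inradius as the ratio of the surface area to the half-perimeter. The normalization factor α = √3/6 (chosen so that an equilateral triangle scores 1) and the function name are our own illustrative choices, not values prescribed by this chapter.

```python
import math

def triangle_shape_quality(p1, p2, p3, alpha=math.sqrt(3.0) / 6.0):
    """Isotropic shape quality Q = alpha * h_max / rho (Relation 8.1), 2D case.

    rho is the inradius, computed as area / half-perimeter; alpha is an
    assumed normalization so that an equilateral triangle gets Q = 1.
    Larger Q means a worse-shaped element.
    """
    a = math.dist(p2, p3)
    b = math.dist(p1, p3)
    c = math.dist(p1, p2)
    h_max = max(a, b, c)                       # element diameter
    half_perimeter = 0.5 * (a + b + c)
    area = abs((p2[0]-p1[0])*(p3[1]-p1[1]) - (p3[0]-p1[0])*(p2[1]-p1[1])) / 2.0
    rho = area / half_perimeter                # inradius
    return alpha * h_max / rho

# An equilateral triangle gives Q ~ 1, a flat one a much larger value.
print(triangle_shape_quality((0, 0), (1, 0), (0.5, math.sqrt(3) / 2)))   # ~1.0
print(triangle_shape_quality((0, 0), (1, 0), (0.5, 0.05)))               # >> 1
```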
8.2.2 Classification
In this section, we assume the Euclidean framework by default. Our concern is the classical element quality (i.e., shape quality or aspect ratio). Although the function Q (introduced in Chapter 1 and reviewed in this chapter) allows us to classify the elements, Q is not capable of characterizing the badly-shaped elements (i.e., Q is unable to determine the cause of this state). Therefore, finding a more selective function would be of great interest for further discrimination between the elements. So, we need to obtain the nature of the poorly-shaped elements, so as to develop an optimization method adapted to the various pathologies that can be encountered.

Triangle classification. By means of edge lengths, three types of triangles can be encountered:
• Ttr1: the three edges have comparable lengths,
• Ttr2: one edge is clearly smaller than the two other edges,
• Ttr3: one edge has a length close to the sum of the other two edge lengths.

An alternate method to classify the elements can use the angles:
• Ttr1: the three angles are comparable,
• Ttr2: one angle is significantly acute,
• Ttr3: one angle is significantly obtuse.

From a practical point of view, thresholds must be defined so as to compare the edge lengths or the angles, to obtain the classification (a small sketch of such a thresholded classification is given after the tetrahedron case below).

Tetrahedron classification. According to the same criterion, eight types of tetrahedra can be found. These types can be qualified using the volume information (V) and the face type (a face being a triangle subjected to the previous classification):
• Tte1: all faces are of type Ttr1 and V is reasonable,
• Tte2: all faces are of type Ttr1, but V is inconsistently small³,
• Tte3: all the faces are of type Ttr1 but V is inconsistently small and, in addition, one vertex is arbitrarily close to the opposite face⁴,
• Tte4: three faces are of type Ttr1, the fourth being of type Ttr3,
• Tte5: two faces are of type Ttr1, the two others being of type Ttr2 (this element is called a wedge),
• Tte6: three faces are of type Ttr2, the fourth being of type Ttr1 (this element is a needle),
• Tte7: two faces are of type Ttr2, the two others being of type Ttr3,
• Tte8: the four faces are of type Ttr2.

³This is the sliver element described in Chapter 1, whose faces are well-shaped although its volume can be arbitrarily small.

Several other means of classifying the elements are also possible (dihedral angles, ...). From a practical point of view, the proposed classification is also related to threshold values.

Exercise 8.2. Show that, for a given tetrahedron, the face area analysis is sufficient to distinguish types Tte2 and Tte3.
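The sketch announced above illustrates the thresholded triangle classification by edge lengths; the function name and the threshold values (small_ratio, flat_ratio) are purely illustrative assumptions, since no specific thresholds are prescribed here.

```python
def classify_triangle(lengths, small_ratio=0.25, flat_ratio=0.95):
    """Classify a triangle from its three edge lengths into Ttr1/Ttr2/Ttr3.

    Assumed thresholds: the triangle is "flat" (Ttr3) when the longest edge
    exceeds flat_ratio times the sum of the two others, and an edge is
    "clearly smaller" (Ttr2) when it is below small_ratio times the next one.
    """
    l1, l2, l3 = sorted(lengths)               # l1 <= l2 <= l3
    if l3 >= flat_ratio * (l1 + l2):
        return "Ttr3"                          # one edge close to the sum of the others
    if l1 <= small_ratio * l2:
        return "Ttr2"                          # one edge clearly smaller
    return "Ttr1"                              # three comparable edges

print(classify_triangle((1.0, 1.0, 1.0)))      # Ttr1
print(classify_triangle((0.1, 1.0, 1.02)))     # Ttr2
print(classify_triangle((1.0, 1.0, 1.98)))     # Ttr3
```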
8.2.3 Other (isotropic) quality measures
For the isotropic case, in addition to the two measures already introduced (cf. Chapter 1), there are other ways of defining the quality of a simplicial element. One may consult, among others, [Cavendish et al. 1985], [Baker-1989] or [Cougny et al. 1990], [Dannelongue,Tanguy-1991], or a comprehensive synthesis by [Parthasarathy et al. 1993], about this point.

Before enumerating some of the possible quality measures (in three dimensions), we need to introduce a few notations. For a given element K, with volume V_K, the inradius is denoted by ρ_K and the circumradius by r_K. The length of edge i of K is L_i, the surface area of face i is S_i. We now introduce S_K = Σ_{i=1,4} S_i, the sum of the surface areas of the faces of K. h_max is the diameter of K, i.e., h_max = max_i L_i, h_min = min_i L_i and, finally, L_moy is the average of the L_i's. Given this notation, the quality measures are:
• ρ_K / r_K, the ratio between the radii of the two relevant spheres,
⁴Hence, this vertex is close to the face barycenter, thus making a difference with the previous type. The considered element is a flat element whose inradius is quite large, contrary to that of the sliver. This element is called a cap.
• ρ_K / h_max, the ratio between the inradius and the element diameter,
• h_min / h_max, the ratio between the edges with extremal lengths,
• a ratio between the volume and the surface areas of the element,
• a ratio between the average edge length and the volume,
• the maximal dihedral angle⁵ between two faces,
• the minimal solid angle⁶ associated with the vertices of K.

Each of these measures offers some advantages as well as some weaknesses. However, we think that none of these measures provides a perfect classification of the type (based on the above element classification) of the element under investigation. While they can serve as a complement for this classification, several of these measures require a relatively large computational effort (in terms of CPU) to compute.

Exercise 8.3. Find the value of the different measures for a regular tetrahedron (and thus obtain the normalization factors ensuring a unit value for this element).

Exercise 8.4. Simplify these measures to work for triangles.

Exercise 8.5. Extend these measures for use in an anisotropic context.
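As an illustration, here is a sketch computing a few of the above measures for a tetrahedron given by its four vertices. The inradius is obtained as 3 V_K / S_K; the last returned quantity is one possible dimensionless combination of the volume and the total face area and, like the function name, it is an assumption of ours rather than the exact normalization used in the references cited above.

```python
import itertools, math

def tet_quality_measures(p):
    """A few isotropic quality measures for a tetrahedron (4 points in 3D).

    The returned ratios are not normalized: the factors of Exercise 8.3 would
    have to be applied so that a regular tetrahedron scores 1.
    """
    def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
    def cross(a, b): return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def norm(a): return math.sqrt(dot(a, a))

    edges = [norm(sub(p[i], p[j])) for i, j in itertools.combinations(range(4), 2)]
    h_max, h_min = max(edges), min(edges)
    volume = abs(dot(sub(p[1], p[0]), cross(sub(p[2], p[0]), sub(p[3], p[0])))) / 6.0
    # total surface area of the four triangular faces
    s_total = sum(0.5 * norm(cross(sub(p[j], p[i]), sub(p[k], p[i])))
                  for i, j, k in itertools.combinations(range(4), 3))
    rho = 3.0 * volume / s_total              # inradius of the tetrahedron
    return {"rho/h_max": rho / h_max,
            "h_min/h_max": h_min / h_max,
            "V/S^(3/2)": volume / s_total ** 1.5}   # assumed volume/area combination

# Regular tetrahedron with unit edges
reg = [(0, 0, 0), (1, 0, 0), (0.5, math.sqrt(3)/2, 0),
       (0.5, math.sqrt(3)/6, math.sqrt(6)/3)]
print(tet_quality_measures(reg))
```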
8.3 Topological operators
In this section several local operators are described which can modify the topology of a mesh, i.e., they change the mesh connectivity. An operator is said to be valid if its use results in a valid mesh, meaning that the elements are conforming and the surface areas (volumes) are strictly positive. Obviously, an optimization operator will be more demanding than this simple requirement. Actually, the resulting mesh must be valid 5
⁵The dihedral angle between two faces of a tetrahedron is the value π + arccos(n_1, n_2) or π − arccos(n_1, n_2), depending on the configuration of the two faces with respect to n_i, the normals of the faces of interest.
⁶The solid angle at a vertex of a tetrahedron is the surface area of the portion of a unit sphere, centered at this vertex, bounded by the three faces sharing this point.
and the mesh quality must be improved, too. Thus, a given operator must improve the quality while maintaining the validity of the considered mesh.
8.3.1 Edge swapping in two dimensions
As seen previously, specifically in Chapter 2, this operator is the only local topological operator⁷ in two dimensions. Edge swapping is possible if the polygon formed by the two triangles sharing the edge in question is a convex quadrilateral. The operator acts on the two triangles sharing this common edge (Figure 8.1).
Figure 8.1: Edge swapping.
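A minimal sketch of this operation: signed areas are used to test that the quadrilateral formed by the two triangles is strictly convex before the swap is accepted. The function names and the vertex-ordering convention are illustrative assumptions.

```python
def orient2d(a, b, c):
    """Twice the signed area of triangle abc (positive for counterclockwise)."""
    return (b[0]-a[0]) * (c[1]-a[1]) - (c[0]-a[0]) * (b[1]-a[1])

def try_swap(a, b, c, d):
    """Attempt to swap edge ab, shared by the counterclockwise triangles
    abc (with c above ab) and bad (with d below ab).

    The swap is allowed only if the quadrilateral adbc is strictly convex,
    i.e. both new triangles adc and bcd have positive area; otherwise the
    original pair is kept.
    """
    if orient2d(a, d, c) > 0 and orient2d(b, c, d) > 0:
        return (a, d, c), (b, c, d)      # new pair, sharing edge cd
    return (a, b, c), (b, a, d)          # swap rejected: keep the old pair

print(try_swap((0, 0), (2, 0), (1, 1), (1, -1)))       # convex: swapped
print(try_swap((0, 0), (2, 0), (1, 1), (-0.5, -0.1)))  # non-convex quadrilateral: rejected
```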
8.3.2 Ball remeshing in two dimensions

In this case, the balls associated with the free vertices are processed. The set of the possible remeshings of the polygon related to a given ball is analyzed, as the point defining the ball is removed. For instance, if the ball of the point consists of five triangles, then the possible alternatives are those depicted in Figure 8.5.

8.3.3 Shell transformation in three dimensions
The purpose of a shell transformation is to remesh the shell so as to remove its defining edge. The extension to three dimensions of the edge swapping operator leads us to consider two tetrahedra sharing a face and to remove
this face (if the polyhedron formed by these two elements is convex). Therefore, we define the operator, denoted Tr_{2->3} and depicted in Figure 8.2; it allows us to replace the two initial elements with three tetrahedra.

⁷Other topological operators can also be defined, corresponding, in the end, to a repeated application of the edge swapping process.
Figure 8.2: Transformation Tr_{2->3}.

The reverse operator seems to be that illustrated in Figure 8.3, which replaces three elements with two elements and is denoted by Tr_{3->2}⁸. Actually, this operator is only a special case of a more general operator that consists of remeshing a shell made of an arbitrary number of elements by removing its defining edge. For instance, for a four element shell, the related operator Tr_{4->4} is depicted in Figure 8.4. Formally speaking, this kind of operator considers all of the possible triangulations of a pseudo-polygon. The vertices of this polygon are defined by the shell vertices other than α and β, the two endpoints of the edge defining the shell. Figure 8.5 shows the possible retriangulations for the case of a five element shell, where every triangulation is created by joining all the triangles of the polygon with both α and β (so as to define the pairs of tetrahedra of the sought triangulation). The Catalan number of order n,

    Cat(n) = (2n − 2)! / ( n! (n − 1)! ) ,
⁸One may notice that this operation is not always possible. For instance, the case where the two vertices of the defining edge are not in the same half-space related to the triangle associated with the other three vertices of the shell is a typical example of an invalid triangulation. Such a shell is called a firtree ("sapin" in French). This type of configuration may exist for a shell with an arbitrary number of elements.
Figure 8.3: Transformation Tr_{3->2}. Figure 8.4: Transformation Tr_{4->4}.
gives the maximal number of topologically possible triangulations, N_n, of a shell⁹ of n elements. Indeed, we have N_n = Cat(n − 1).

Figure 8.5: The 5 triangulations related to a 5-point polygon.
n      3   4   5    6    7    8     9     10      11      12       13
N_n    1   2   5    14   42   132   429   1,430   4,862   16,796   58,786
Tr_n   1   4   10   20   35   56    84    120     165     220

Table 8.1: Number of topologically valid triangulations.
Table 8.1 reports N_n, the number of possible triangulations, as a function of n. It also indicates Tr_n, the number of different triangles involved in the possible triangulations. In this enumeration, the validity of the triangulations is not considered (only the topological aspect is taken into account).

Exercise 8.6. Show that the maximum number of possible triangulations for a shell with n (n > 3) elements verifies the relation

    N_n = Σ_{k=2}^{n−1} N_k N_{n−k+1} ,

with N_2 = 1, and thus reduce it to the Catalan number.

⁹The topologically possible solutions in three dimensions are constructed by enumerating all the two-dimensional geometrically valid retriangulations of a convex (planar) polygon with n vertices.
To conclude this section, the operator acting on a shell, based on the retriangulation of its outer polygon, is a powerful and inexpensive basic operator for tetrahedral mesh transformation.
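Assuming that the recurrence of Exercise 8.6 is the one stated above, the following sketch computes N_n from it and checks the values of Table 8.1 against the closed form N_n = Cat(n − 1).

```python
from math import comb

def shell_triangulation_counts(n_max):
    """N_n, the number of topologically possible retriangulations of an
    n-element shell, via the recurrence of Exercise 8.6 (with N_2 = 1),
    cross-checked against the closed form N_n = Cat(n - 1)."""
    N = {2: 1}
    for n in range(3, n_max + 1):
        N[n] = sum(N[k] * N[n - k + 1] for k in range(2, n))
    for n in range(3, n_max + 1):
        catalan = comb(2 * (n - 1) - 2, n - 2) // (n - 1)  # Cat(n-1) = (2n-4)!/((n-1)!(n-2)!)
        assert N[n] == catalan
    return N

print(shell_triangulation_counts(13))
# -> {2: 1, 3: 1, 4: 2, 5: 5, 6: 14, ..., 13: 58786}, matching Table 8.1
```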
8.3.4 Entity suppression by local remeshing
Edge removal in two dimensions. A simple edge swap results in an edge removal (see also, hereafter, edge removal by reduction). This operation is valid if the polygon made up of the two triangles sharing the edge is convex.

Edge removal in three dimensions. This is a different way to view the shell transformation operator described previously (again, this operation is valid if the polyhedron is convex; otherwise, it may be invalid).

Vertex suppression in two dimensions. The main idea is to consider the ball associated with the vertex under consideration as a cavity. Then a triangulation algorithm without internal point creation is applied to this cavity. One may also (as explained later) apply edge swap(s) so as to obtain a ball having only three triangles (or four triangles in a degenerate configuration), as shown in Figure 8.7.

Vertex suppression in three dimensions. First of all, this operation is not always possible, the Schönhardt polyhedron being an example. Nevertheless, the key idea is, as before, to process the ball of the point so as to reduce its number of elements to four (except for degenerate configurations).
8.3.5 Suppression by means of reduction
Removing a free edge or a free vertex in a triangulation can also be done by means of a reduction. In the case of an edge, this process consists of replacing this edge with only one point connected to all the vertices formerly connected to the initial edge. In the vertex case, two methods can be considered, either a ball retriangulation or an adequate application of edge swaps.
Edge suppression. Let us consider an edge αβ. We replace this edge with only one point A. Formally speaking, this results in locating α on β, β on α, or locating the merged vertex between α and β. Figure 8.6 shows the three possible solutions for this example. From a practical point of view, it is sufficient to check whether the ball of point A, resulting from the reduction, is valid. To this end, we examine the shell of αβ and we check the validity of the balls of α and β when these two vertices are replaced by the point A. This operator may be classified as a geometric operator since it maintains the connectivities: the new connections to A can be seen as the "union" of the former connections to α and β.
Figure 8.6: Edge removal by reduction.
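A two-dimensional sketch of the validity test just described: the reduction of edge αβ onto a single point is simulated and every surviving triangle of the two balls is required to keep a strictly positive area. The data layout (index triples, coordinate dictionary) and function names are illustrative assumptions.

```python
def orient2d(a, b, c):
    """Twice the signed area of triangle abc."""
    return (b[0]-a[0]) * (c[1]-a[1]) - (c[0]-a[0]) * (b[1]-a[1])

def collapse_is_valid(triangles, coords, alpha, beta, new_point):
    """Check whether edge alpha-beta can be reduced to new_point (one of the
    three choices of Figure 8.6): triangles containing both endpoints vanish,
    every other affected triangle must keep a positive area."""
    for tri in triangles:
        if alpha in tri and beta in tri:
            continue                      # this triangle disappears in the collapse
        if alpha not in tri and beta not in tri:
            continue                      # unaffected triangle
        pts = [new_point if v in (alpha, beta) else coords[v] for v in tri]
        if orient2d(*pts) <= 0.0:
            return False                  # an inverted or flat triangle would appear
    return True

coords = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1), 4: (0.4, 0.5), 5: (0.6, 0.5)}
tris = [(0, 1, 5), (0, 5, 4), (0, 4, 3), (1, 2, 5), (2, 4, 5), (2, 3, 4)]
print(collapse_is_valid(tris, coords, 4, 5, (0.5, 0.5)))   # True: edge 4-5 can be merged
```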
Vertex suppression. Obviously, a vertex whose ball is composed of d+1 elements (d being the spatial dimension) can be removed by replacing these elements with only one element. In general, the vertex degree (see above) is greater than d+1. Thus, edge swaps (in two dimensions) or shell transformations applied to the edges emanating from this vertex (Figure 8.7) can be used to reduce to this case. A different method, which is more robust¹⁰ in practice, is to consider
¹⁰I.e., successful in most situations.
all the edges emanating from this vertex and to apply to one of the edges (for instance to the shortest edge) the edge reduction process previously described.
Figure 8.7: Removing a vertex A by modifying its ball.

Notice that a vertex removal is always possible in two dimensions, while this is not always the case in three dimensions.

8.3.6 Edge splitting

Splitting a free mesh edge αβ consists of creating a point, P, along this edge and of replacing the original edge by the two edges αP and Pβ. Hence, edge splitting, in two dimensions, amounts to replacing the two initial triangles sharing edge αβ with four triangles. In three dimensions, we have the same effect on the initial shell, which is then replaced by the shells of the two edges αP and Pβ.

8.3.7 Valence relaxation
Definition 8.1 The valence of a mesh vertex is the number of edges¹¹ emanating from this point. □

For a two-dimensional isotropic mesh, a valence of six is an optimal value. A vertex with a valence less than six is said to be under-connected, while a vertex with a valence larger than six is said to be over-connected.
¹¹It is also, in two dimensions, the number of elements sharing the point.
Relaxing the valences of a two-dimensional mesh consists in modifying the vertex valences, by means of topological operators, so as to tend towards a value of six on average, see [Frey,Field-1991]. In three dimensions, the targeted value is 12, due to the fact that a ball with 20 optimal elements corresponds to the triangulation, by a regular icosahedron, of the space centered at the point defining this ball. A similar notion applies to shells (in three dimensions): the valence of a shell (the number of its elements) can be relaxed. The ideal value¹² is 6, a shell with a lower valence being said to be under-connected and a shell with a larger value over-connected.
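A small sketch of how vertex valences can be collected from a triangle list in order to detect under- and over-connected vertices; the function names and the fact that boundary vertices are not treated separately are simplifying assumptions.

```python
from collections import defaultdict

def vertex_valences(triangles):
    """Valence (number of incident edges) of each vertex of a triangular
    mesh given as a list of vertex-index triples."""
    neighbours = defaultdict(set)
    for a, b, c in triangles:
        neighbours[a].update((b, c))
        neighbours[b].update((a, c))
        neighbours[c].update((a, b))
    return {v: len(nbs) for v, nbs in neighbours.items()}

def relaxation_candidates(triangles, target=6):
    """Vertices whose valence departs from the two-dimensional optimum of six
    are candidates for valence relaxation (boundary vertices, which would
    need a smaller target, are not distinguished in this sketch)."""
    return {v: val for v, val in vertex_valences(triangles).items() if val != target}

tris = [(0, 1, 2), (0, 2, 3), (0, 3, 4), (0, 4, 1)]
print(vertex_valences(tris))            # {0: 4, 1: 3, 2: 3, 3: 3, 4: 3}
```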
8.4 Geometric operators
A geometric operator preserves the mesh connectivities. Thus, such operators basically modify the point locations. Several methods may be used to perform this type of operation, by means of a local or a global approach and by considering different types of constraints.
8.4.1 Local geometric operator
Local geometric operators modify the location of the mesh points so as to optimize a given criterion. The proposed process is iterative and results in each free vertex being processed, one at a time, so as to finally relocate all of them. So, we consider a vertex P and its ball B_P. We then construct, using any method, an "optimal" point denoted by P*. Irrespective of the method used, a sub-relaxation is introduced, governed by a parameter ω which ranges from 0 to 1. This process enables us to define a new point P̃ as

    P̃ = (1 − ω) P + ω P* .                          (8.13)
The point P is then replaced by P̃ if the quality of B_P̃ is improved as compared with that of B_P. Afterwards, the process is repeated. The relaxation is a good way to keep P̃ in the kernel of B_P. Different methods are possible, related to a given choice for defining the "optimal" point P*. The possible choices are now discussed. In what follows, we denote by n the number of elements in the ball of the point P under consideration.

¹²A value like 5 may also be a suitable objective.
Classical barycentric smoothing. Let P_j (j = 1, ..., n) be the vertices of B_P other than P. The method consists in minimizing Σ_{j=1,n} ||PP_j||² with respect to P. This results in the definition of P* as

    P* = (1/n) Σ_{j=1,n} P_j                         (8.14)

or, equivalently,

    P* = P + (1/n) Σ_{j=1,n} (P_j − P) .             (8.15)

This operation minimizes the gap between the lengths of the edges emanating from P; it is well known as Laplacian smoothing and presents an analogy with elasticity problems (such as a set of springs). The efficiency of this process holds when there are no size constraints (or, more generally, no metric constraints) that must be honored. A blind application without an explicit surface area check (volume check in three dimensions) leads to a valid mesh provided every ball is convex. Conversely, for a non-convex ball, such explicit checks are strictly required.
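The following sketch combines Relations 8.13 and 8.14: the barycenter of the ball vertices is computed, the move is relaxed by ω, and it is accepted only if every triangle of the ball keeps a positive area. The ball representation (a list of oriented external edges) and the default ω = 0.5 are illustrative assumptions.

```python
def orient2d(a, b, c):
    """Twice the signed area of triangle abc (positive for counterclockwise)."""
    return (b[0]-a[0]) * (c[1]-a[1]) - (c[0]-a[0]) * (b[1]-a[1])

def smooth_vertex(p, ball, omega=0.5):
    """Relaxed Laplacian smoothing of a free vertex (Relations 8.13 and 8.14).

    p is the current position; ball is the list of external edges (a, b) of
    B_P, ordered counterclockwise around P.  The move is accepted only if
    every triangle (a, b, new point) keeps a positive area.
    """
    neighbours = {q for edge in ball for q in edge}
    # Relation 8.14: optimal point = barycenter of the ball vertices
    p_star = tuple(sum(q[i] for q in neighbours) / len(neighbours) for i in (0, 1))
    # Relation 8.13: sub-relaxation between the old and the optimal position
    p_new = tuple((1.0 - omega) * p[i] + omega * p_star[i] for i in (0, 1))
    if all(orient2d(a, b, p_new) > 0.0 for a, b in ball):
        return p_new
    return p          # reject the move: the ball would become invalid

ball = [((1, 0), (0, 1)), ((0, 1), (-1, 0)), ((-1, 0), (0, -1)), ((0, -1), (1, 0))]
print(smooth_vertex((0.6, 0.2), ball))   # moves halfway towards the barycenter (0, 0)
```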
Weighted barycentric smoothing. The relation

    P* = ( Σ_{j=1,n} α_j P_j ) / ( Σ_{j=1,n} α_j )   (8.16)

where α_j is a weight associated with the point P_j, is another possible method for barycentric smoothing, provided adequate coefficients α_j. Numerous choices of the weight coefficients are possible. For instance, if a size map is specified, one can define α_j = 1/h_j, where h_j is the average of the expected sizes at point P and at point P_j.

Exercise 8.7. Justify the choice of α_j described above.

Exercise 8.8. Consider the case where a metric is specified everywhere and find the corresponding α_j's.
Isotropic and anisotropic optimal shape relocation. As opposed to the previous cases, we now use points other than the vertices of the ball B_P to define the optimal point P*. Let the f_j's be the external edges (faces in three dimensions) of B_P and let the K_j's be the elements of this ball formed by the f_j's and P. Let I_j be the "ideal" location with respect to the external edge j of B_P in two dimensions and, in three dimensions, with respect to the external face j. The above location is said to be ideal as it ensures an optimal quality for the element defined by f_j and I_j. Then, the proposed method consists of defining P* by, cf. [Briere,George-1995],

    P* = ( Σ_{j=1,n} α_j I_j ) / ( Σ_{j=1,n} α_j ) .   (8.17)

Several different choices for the coefficients α_j are possible in a scheme of this type, among which the following can be considered:
• α_j = 1, the weights are constant and the scheme reduces to the classical barycentric smoothing method,
• α_j = 0 for every element of the ball except for the worst one (in terms of quality), for which we set α_j = 1,
• α_j = Q_{K_j}, i.e., the weight is related to the quality of element K_j,
• α_j = Q²_{K_j}, the weights are related to the square of the element qualities,
• or, more generally, α_j = g(Q_{K_j}), meaning that the weights are related to a certain function g of the quality of the elements in B_P.

This method can be applied to an anisotropic case by using a relevant definition of quality, each of which results in locating the points I_j differently.

Isotropic optimal size relocation. In this case, we also use points other than those of the ball B_P to define the optimal point P*. The main idea of the point moving method is based on the lengths of the edges emanating from the point. The edge lengths are evaluated in an average metric related to the sizes specified at the edge endpoints. The process consists of trying to obtain a unit length (for the considered metric) for all of the
affected edges. Thus, the new locations, P̃_j, are defined by

    P̃_j = P_j + ( h_j / ||P_jP|| ) (P − P_j)          (8.18)

where h_j is the average size associated with edge P_jP, and the optimal point P* is the barycenter of the points P̃_j.

Anisotropic optimal size relocation. The same method can also be applied in this case by generalizing the definition of P̃_j. We take

    P̃_j = P_j + ( 1 / l_M(P_j, P) ) (P − P_j)         (8.19)

where l_M(P_j, P) is the length of the edge P_jP evaluated in the metric M associated with the edge. The lengths are then measured in the metric associated with the anisotropic control space and the point relocation tries to obtain unit length edges for all the edges emanating from the point. One may observe that this formula is more general than the previous one (isotropic case), where the length of the edge P_jP in the specified metric has been approximated by the simpler expression

    l_M(P_j, P) ≈ ||P_jP|| / h_j .                    (8.20)

Relocation by local remeshing. We consider the ball of the point to be moved. Any mesh generation method is applied to the polygon (polyhedron) which would have resulted in creating a new point in the ball (for instance, an advancing-front method, cf. [Rassineux-1995]). The so-created point is then considered as the new location of the initial point. Actually, this method is a different way of expressing the previous method.

Relocation subjected to topological constraints. The objective is to relocate a vertex and, in addition, to
• move it away from a given edge,
• move it away from a given plane,
• move it along a given edge,
• etc.
Any method of the above type can be used by adding the constraint during the analysis of the criterion that must be optimized.
8.4.2 Global geometric operators
All of the previous geometric operators are local, but a global approach can also be envisioned, at least formally speaking. The idea is to solve a global system over the entire mesh. In this respect, one can imagine a spring analogy: the mesh edges are replaced by springs and the resulting system is brought to equilibrium. The general principle is to define a cost function and to optimize it, for instance by means of a gradient method, cf. [Kohli,Carey-1993] among other references. Another type of global system can be developed considering quantities related to various types of energies (where one analogy could be based on solid mechanics problems). An example of such a construction is proposed by [Jacquotte-1992], for which the distortion of the elements with respect to an optimal element is considered. A cost function is then defined that can be interpreted in terms of a non-linear elasticity problem.
8.5 Remarks on surface optimization
We consider a surface mesh composed of triangles only. Any purely two-dimensional optimization method can be applied while checking that the initial geometry is preserved. The two main tools for surface optimization are point relocation and edge swapping.

Point relocation. Moving a free point must ensure that the new location remains on the surface and that the new triangles remain close to this surface (at least as close as before the optimization). A (free) point lying on a first order discontinuity (a ridge) must remain on this discontinuity.

Edge swapping. Similarly, edge swapping must be controlled so as to avoid the situation depicted in Figure 8.8. A topological particularity related to the swap of a surface edge occurs when the ball associated with one of the endpoints of the candidate edge consists of only three elements. Actually, in the initial configuration of
Figure 8.8: Edge swapping on a surface mesh.

Figure 8.9, the edge P₃P₄ already exists as an entity (in the triangle P₁P₃P₄) and swapping the common edge of the triangles P₁P₂P₃ and P₁P₂P₄ would construct the edge P₃P₄ a second time, thus resulting in an invalid topology. Although this configuration is a simple example, more complex configurations may obviously result in the same pathology. Thus, in general, before performing an edge swap, it is necessary to check that the resulting edge does not already exist in the initial mesh. Once the validity of the operation is established, one must ensure that the quality has improved in the resulting mesh (with regard to the given criterion). To this end, one may use the usual quality measure or, by analogy with the Delaunay criterion (cf. Chapter 1), one may base the decision on the angles formed by the triangle edges. The configurations composed of two adjacent triangles are analyzed by evaluating the six possible angles, and a swap is applied if the smallest angle of the new configuration is larger than that of the initial pattern.
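A planar sketch of the swap decision just described: the new edge must not already exist, and the smallest of the six angles must increase. The flag cd_already_exists stands for the topological check against the existing mesh, and the handling of truly three-dimensional surface patches (geometric deviation from the surface) is deliberately left out; names and parameters are illustrative assumptions.

```python
import math

def angles(p, q, r):
    """The three angles of triangle pqr, in radians (law of cosines)."""
    a, b, c = math.dist(q, r), math.dist(p, r), math.dist(p, q)
    def ang(opp, x, y):
        return math.acos(max(-1.0, min(1.0, (x*x + y*y - opp*opp) / (2*x*y))))
    return ang(a, b, c), ang(b, a, c), ang(c, a, b)

def should_swap(a, b, c, d, cd_already_exists=False):
    """Decide whether edge ab, shared by triangles abc and abd, should be
    swapped: the new edge cd must not already exist elsewhere in the mesh
    (the three-element-ball pathology of Figure 8.9), and the smallest of the
    six angles must increase."""
    if cd_already_exists:
        return False                     # would create a duplicate edge (invalid topology)
    worst_before = min(angles(a, b, c) + angles(a, b, d))
    worst_after = min(angles(a, d, c) + angles(b, c, d))
    return worst_after > worst_before

print(should_swap((0, 0), (2, 0), (1, 0.3), (1, -0.3)))   # True: swap improves the angles
```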
8.6 Algorithmic aspects
At this point, we have defined a toolbox of (local) operators that must be adequately used so as to obtain the desired result, i.e., the optimization of a given mesh. An operator will be considered successful if, on the one hand, the resulting mesh is still valid and, on the other hand, the mesh quality is improved. We will now discuss a way to control this mesh improvement; the notions of quality associated with a mesh will also be revisited. In this respect, we shall discuss mesh quality in terms of the quality of a set of elements as well as that of any given element.
Figure 8.9: Edge swapping in a three element ball.

8.6.1 How to use an optimization operator
Using an optimization operator is quite simple. First, its result is simulated in terms of both the validity and the quality evolution of the concerned elements. In the case where an improvement is achieved, the simulated output is retained. In the case where several solutions are possible, the best one is selected (or the first possible solution which has been observed). When the simulation includes a large number of possible solutions, CPU cost considerations require that we optimize the simulation-application pair. The operations used in the simulation are of a purely geometric nature (surface area or volume computations, element quality evaluations, before and after the process in question has been applied), while the effective application of the operator leads to the definition of the new elements and the new neighborhood relationships that can be affected in the process. A careful implementation of these two phases allows us to minimize the overall CPU cost of the process.

8.6.2 How to control an optimization operator
Classical optimization. First we compute the initial quality of the point, edge, element or set of such entities included in the initial configuration. We then compute the same quality for the affected entities in the proposed solution(s). Finally, we determine whether we should apply the optimization process, based on the quality results. Several strategies can be
chosen to govern the decision. The process is applied if
• the resulting configuration is improved,
• the resulting configuration is improved to some extent,
• in the case of multiple possible solutions, by selecting the first solution occurring in the simulation or by choosing the best solution among all,
• and so on.

Another issue consists of defining the way in which the operator is used; one can decide to process
• all of the mesh entities, starting from the first and going to the last,
• only some entities, properly selected (using a stack based on a relevant criterion, the edge length if edges are processed, or using a quality threshold, ...),
• all the entities, or only some of them, randomly picked,
• and so on.
The question is now to design an automatic and global optimization method by deciding on a strategy for both the choice of the local operators and the order in which to use them (described hereafter), assuming that the strategy related to a given local operator is fixed.

Constrained optimization. In this case, a similar discussion can be given; only the quality notion must be changed accordingly.

8.6.3 How to control an optimization process
A mesh optimization method by means of a sequence of the available local operators requires the definition of a strategy on how to order them, the local strategy for all of them being fixed. Several observations can be helpful in defining such a strategy. Assuming that a local operator sequence is given, a stopping criterion has to be defined first. In fact, several classes of criteria are possible, including the following ones. The process is repeated as long as
• the mesh is affected by one operation,
• the mesh is affected by one or many operations,
• a given threshold (in terms of quality) is not achieved,
• and so on.
Provided this, a strategy must be defined. One has some flexibility in ordering the possible choices. Indeed, one can
• apply every local operator to all the entities concerned by its application, and then turn to a different operator,
• consider a given mesh entity and apply all the possible local operators before turning to a different entity,
• combine the two above approaches.

It is also possible to classify the pathologies (cf. the above classification section) and to deal with the mesh entities accordingly. As a reasonable example, we have developed a three-dimensional optimization algorithm whose scheme can be formally written as follows (a sketch of such a driver is given after this list):
• the mesh elements are classified in terms of element qualities,
• the "bad" elements (for a given threshold) are identified with respect to their pathology,
• with each pathology is associated a specific sequence of local operators developed so as to get rid of the actual pathology,
• once the worst elements have been processed, we then consider all of the mesh elements and
  - shell modifications are applied to each mesh edge,
  - the free vertices are relocated,
and the actual process is repeated as long as a significant modification is encountered.

This strategy has proven to be efficient although its justification relies on experimental tests. That is, no theoretical results can be presented to justify the choices.
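A hedged sketch of the driver suggested by the scheme above. Every mesh-level operation is passed in as a callable because it stands for one of the local tools of Sections 8.3 and 8.4; none of the names below designates a routine defined in this book, and the default threshold is arbitrary.

```python
def optimize_mesh(mesh, classify_pathology, pathology_operators,
                  optimize_shell, relocate_free_vertices,
                  quality_threshold=3.0, max_sweeps=5):
    """Sketch of the optimization strategy described above.

    mesh is assumed to expose elements, edges and a quality() method; the
    remaining arguments are user-supplied callables standing for the local
    operators (each one checks validity and quality improvement itself).
    """
    # 1. classify the elements and isolate the "bad" ones
    bad = [k for k in mesh.elements if mesh.quality(k) > quality_threshold]
    # 2. each pathology gets its own dedicated sequence of local operators
    for element in bad:
        for operator in pathology_operators[classify_pathology(mesh, element)]:
            operator(mesh, element)
    # 3. global sweeps: shell modifications on every edge, then vertex
    #    relocation, repeated as long as a significant modification occurs
    for _ in range(max_sweeps):
        changed = False
        for edge in list(mesh.edges):
            changed |= optimize_shell(mesh, edge)
        changed |= relocate_free_vertices(mesh)
        if not changed:
            break
    return mesh
```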
8.7 Some results
Classical optimization in two dimensions. We show in Figures 8.10 and 8.11 an example of node relocation by barycentric smoothing. The mesh on the left-hand side is the initial mesh, while that on the right-hand side is the mesh resulting from the barycentric smoothing of all the free vertices. Improved regularity of the elements can be observed.
Figure 8.10: Initial mesh.
Figure 8.11: Mesh resulting from node relocation.
Governed optimization in two dimensions. We now consider a disc of radius 6 centered at the origin. The provided size map specifies that the sizes be approximately 0.01 along the perimeter of a circle centered at the origin with a radius of 3, that the size increase linearly towards the interior of that circle and that the size be 1.5 in its exterior region (thus, the size varies widely, in the range from 0.01 to 1.5). Figure 8.12 displays a mesh of this domain which respects, more or less, the given metric map, this mesh being created by a given mesh generation method. The goal of the optimization step is then to obtain a new mesh, cf. Figure 8.13, as a modification of the previous mesh by means of a process governed by both the size and the aspect ratio parameters. The initial mesh has a quality of 83 (where this value is that of an element adjacent to the circle, towards its exterior). The mesh consists of 13,264 triangles, 7% having a quality (aspect ratio) worse than 2. There are 19,865 edges whose lengths vary from 0.34 to 26 in the metric map. Among these edges, 5% have a quality (in terms of size) worse than 2. The optimized mesh has a quality of 6 (in the same region). Among the 13,264 triangles, 4% have a quality (aspect ratio) worse than 2. The edge lengths vary from 0.4 to 13 in the metric; 7% of the edges have a quality (size) worse than 2. One may notice in this example that it is difficult to satisfy two mutually opposed constraints (size and aspect ratio) at the same time, especially if the specified control is strongly discontinuous. Nevertheless, a gain factor of about 2 has been obtained with respect to the size constraint.
Figure 8.12: Initial mesh.
Figure 8.13: Mesh resulting from size and aspect ratio optimization.
Remark. The simple observation, in two dimensions, of a mesh (see the figures) is not always a good way to appreciate the element qualities or the evolution of these qualities. It appears to be more fruitful and faster to give some pertinent numerical values, curves, histograms or distribution diagrams. This is strictly required in three dimensions, where simple visualization, even using powerful graphic facilities, is a rather poor way of investigation.

Classical optimization in three dimensions. In this case, we consider two mesh examples. The following tables report the mesh statistics before and after optimization. The first line gives the distribution, in terms of element percentage, of the elements falling in the quality ranges from 1 to 2, from 2 to 3, from 3 to 10 and then greater than 10 (the second line reports the corresponding number of elements); target is the quality of the best tetrahedron that can be constructed using the worst boundary face, QM is the global quality of the mesh, ne denotes the number of elements while np is the number of points in the mesh. The rest of the table reports the same information for the optimized mesh.
Q         1-2      2-3      3-10    > 10   target   QM      ne       np
before    21       63       12      3      8.30     755     3,917    1,004
          824      2,475    486     132
after     72       21       6       0      8.30     11.44   3,608    1,015
          2,620    759      224     5
Table 8.2: Aspect ratio before and after optimization (example 1).
Q         1-2       2-3      3-10    > 10   target   QM      ne        np
before    76        15       7       0      38.06    4,471   75,816    12,872
          58,248    11,611   5,251   706
after     85        12       2       0      38.06    53.80   71,510    12,903
          61,215    8,695    1,524   76
Table 8.3: Aspect ratio before and after optimization (example 2).

The histogram is now better centered around 1 and a large reduction in poorly shaped elements can be observed. The value QM is also closer to the target value.
Governed optimization in three dimensions. For such examples, we refer the reader to [Briere,George-1995].

8.8 Applications
Basically, three types of applications for an optimization algorithm can be envisioned.
• The optimization algorithm is standalone, i.e., a software package which processes a given mesh so as to achieve an optimized mesh (according to the desired objectives). The program reads the data, constructs the necessary "tables", chains the local operators and saves the output.
• The optimization algorithm is part of a mesh generation algorithm (cf. Chapters 5 and 7, for instance). The program takes advantage of the internal data structures of the mesh generation program (for example, neighborhood relationships can be easily used by any local operator).
• The optimization algorithm is part of a local adaptation process which serves to adapt a region of the mesh with respect to some criteria (a posteriori error estimate), cf. Chapter 9.
8.9 Notes
Some specific applications of the optimization operators can be developed, not to improve a criterion but for topological purposes. In what follows, we give two such examples.

The first such application is the generation of non-obtuse meshes in two dimensions. A (triangular) mesh is said to be non-obtuse if the element angles formed by the pairs of edges are non-obtuse. Such meshes can be requested for some specific computations.

The second example deals with the suppression of over-constrained elements or over-constrained edges in a mesh. In two dimensions, such an element has two boundary edges; in three dimensions, it has three boundary faces. An over-constrained edge is an internal edge whose endpoints are both located on the boundary. Elements and edges with this feature are processed by one or the other adequate operator so as to obtain a suitable configuration of edges or elements (in terms of the mentioned constraint).
Chapter 9

Mesh adaptation

9.1 Introduction
Using mesh adaptivity to conform to the physical properties of a given problem is a promising method that is rapidly becoming commonplace. Its purpose is to obtain better results by adapting the mesh to the physical behavior of the problem. This technique serves to control the accuracy of the computed solution and is a key issue in developing a fully automatic computational process. The proper (or incorrect) adaptation of a mesh to a given problem can be envisaged in many different ways (possibly competing). It is based on the number of mesh nodes, and thus on the number of mesh elements, as well as on the nodal locations and element shapes. Thus, mesh adaptivity may affect several properties including the nodal density, the element density (i.e., the level of mesh refinement in certain regions), the element shapes (isotropic meshes or anisotropic meshes where the elements are aligned in the directions of some relevant curves or surfaces) and also the presence of various element characteristics. An example of one such property is element orthogonality¹ with respect to curves or surfaces in some regions.

The key issue is that an adapted mesh leads to an accurate solution of the computational process. This means that the fine level phenomena (rapid solution variations, boundary layers, shocks, ...) are expected to be well captured while minimizing the number of elements so as to reduce the computational cost. There are numerous mesh adaptivity methods. Formally speaking, the

¹In two dimensions, a triangle is said to be orthogonal with respect to a curve if it has two orthogonal edges, one of them following the given curve. In three dimensions, we can infer a similar definition involving two faces.
adaptivity discussed here relies on an a posteriori error estimate. It is an iterative process: starting from an initial mesh, a first solution is computed. This solution is analyzed using the given a posteriori error estimate and the result of this analysis is translated into criteria directly usable to govern the mesh generation process. A new mesh is then constructed (by means of a local or a global process) and the entire process is repeated.

Indeed, mesh adaptivity methods can be classified into two categories. The first category relies on local processes which locally or globally modify a current mesh so as to adapt it. On the other hand, the second category corresponds to a quite different approach: the mesh is entirely reconstructed at each iteration step of the adaptation process. It is also possible to combine these two approaches. Mesh adaptivity using local modifications is performed during some iteration steps, after which an entire mesh reconstruction is done and the process is repeated.

Irrespective of the classification, mesh adaptivity can be implemented in several different ways. Some of the methods available include the r-methods², the h-methods, the p-methods and a combination of the last two, the so-called hp-methods. In this chapter, we will discuss these different approaches. We assume that a constraint field is given (including size and directional specifications) which enables us to govern the adaptive process. Thus, we assume an ideal or academic situation and, more specifically, we are not directly interested in the way the control field has been constructed (in general using an error estimate, ...). We also consider that a suitable geometric description of the domain boundaries is provided (in a later chapter, the non-trivial problem of constructing such a description will be discussed).
9.2 Mesh adaptivity methods
In this section, we introduce the main ideas used in some classical adaptive methods. Mesh adaptive methods are not the purpose of this book, thus only the mesh generation aspect of these methods is considered, while theoretical issues related to convergence, accuracy, ..., will not be covered.

9.2.1 The r-method
Using the r-method, the mesh connectivity is unchanged. Instead, node relocation is used to move the mesh nodes either by means of a weighted barycentric smoothing based on the location and the weight of the nodes 2
²Also referred to as the s-methods.
in some neighborhood (cf. Chapter 8), or by means of element distortion. The criteria (weights) governing these operations are obtained by analyzing the current solution. In this method, the connections remain unchanged; it is only by modifying the node locations that the adaptivity can be achieved, which gives the method a certain rigidity. Nevertheless, some problems, where for instance the point locations are a function of a "source" (a point, a straight line, a curve, a surface), can be successfully addressed by an r-method.

The principal difficulty of the r-methods is that of ensuring that the resulting elements do not degenerate (very small or even negative surface areas or volumes) and that the node relocations do not result in overlapping elements. A solution to control these phenomena is to move the points step by step, checking the validity of the solution at each step, and then to iterate the process (cf. Chapter 8). In this way, deformations that are too large at a single step are avoided, although large displacements can nevertheless still be obtained: every point is moved with a reasonable stepsize but, as a consequence of the global iteration steps, the mesh can be widely modified after a while.

9.2.2 The h-method
H-method adaptivity is defined in terms of local or global mesh enrichment by means of refining (by partitioning) or coarsening selected elements or all the elements in a mesh. A complete regeneration of the mesh, based on the analysis of the solution obtained, is another, more flexible, form of h-method. This method will be discussed in more detail hereafter.

9.2.3 The p-method
This approach is based on an invariant mesh (in terms of points (nodes) and elements) and adjusts the degree (in terms of the interpolation functions) of the finite elements constructed on the mesh elements as a function of the current solution analysis. Considered as an a priori elegant approach, this method is anything but easy to implement for complex geometries. The main difficulty consists in properly defining the nodes that must be created on the domain boundaries as the degree of the interpolation increases, since such a construction needs to result in
• properly located nodes,
• satisfactory elements in terms of aspect ratio, positive jacobian, ...
Figure 9.1: Creation of a P2 finite element (simple case).

This node construction implicitly assumes that the initial mesh, i.e., the mesh obtained by a mesh generation method, is compatible, in some sense, with the required order p approximation. This requirement is to ensure that the initial mesh is compatible with the domain geometry and, in particular, that this mesh conforms to the boundary curvatures accurately. When the geometry is "straight" (in two dimensions) or planar (in three dimensions), the requested compatibility automatically holds and the p-method is a widely efficient method to handle problems that would be rather difficult to consider otherwise (for instance, a problem where (linear or planar) cracks must be followed accurately). Conversely, considering an arbitrary domain geometry (i.e., an arbitrary boundary), a practical computer implementation of the p-method leads to some difficulties. Figure 9.1 illustrates a two-dimensional case where a P1 element of vertices P, α and β has an edge αβ discretizing the boundary Γ. Three edge "midpoints" must be created so as to translate the P1 interpolation into a P2 interpolation. The midpoints of the two edges connected to P are clearly well suited for the P2 construction, while the edge αβ must be replaced by two sub-edges after introducing a suitable point α₁ on Γ. In the depicted configuration, Γ and the edge αβ are close enough, so that the location of point α₁ can be easily found.

The following figure, Figure 9.2, presents a more difficult configuration where the initial mesh is not compatible. Indeed, the edge αβ is not close enough to Γ and thus the point α₁ is such that the P2 triangle that will be constructed will be poorly shaped. This situation is clearly related to the fact that the P1 mesh is a bad geometric approximation of the boundary Γ. One may easily imagine other configurations where the P2 triangle is such that the sub-edges are self-intersecting or such that the point α₁ falls in a triangle other than the triangle Pαβ (then a more complex process will be required, for instance using the hp-method).
Figure 9.2: Creation of a P2 finite element (for a more difficult configuration).
Notice that these difficulties are also present in three dimensions in many cases and that finding a suitable solution is a rather tedious task. A possible solution for this problem relies on first constructing a compatible mesh, for which we need to use a technology similar to that used for enforcing constraints in a given mesh (cf. Chapter 3). In conclusion, there are no major difficulties in two dimensions. However, the same problem in three dimensions requires more subtle care.

9.2.4 The hp-method
The hp-method combines the h-method and the p-method. This solution could be a nice complement to the p-method since some extent of flexibility is added. Returning to the example of Figure 9.2, we first create the point α₁, then we define the triangle αα₁β and finally we consider the two boundary edges αα₁ and α₁β. Hence, a reasonable situation is encountered where the presumed boundary edges are close enough to the boundary Γ. The final remark of the previous section remains valid in general. Indeed, it is a tedious task to develop an hp-method algorithm for complex geometries.
9.3 Modification versus reconstruction
In what follows, we consider an h-type adaptivity method. As previously mentioned, two approaches can be envisaged to obtain (or at least to try to obtain) an adapted mesh. The first approach modifies the current mesh
to create the adapted mesh while the second one constructs a new mesh using a governed mesh generation method. Whatever the case, constructing a mesh relies on the data of the domain geometry and of a specified field (which serves as a control space). This map specifies the expected characteristics of the mesh everywhere in the space.
9.3.1 Adaptivity based on local modifications
An adaptive scheme which employs local modifications can be envisioned in two ways: either by applying mesh refinement and mesh derefinement tools, or by using the local transformations included in an optimization algorithm. Before discussing a modification-based adaptivity scheme, we will establish the list of the tools suitable for this purpose. As a preliminary comment, one may notice that these tools are well-suited for mesh refinement purposes and, conversely, that it is usually rather delicate to derefine a given mesh (more specifically, derefining a mesh in a region where it has been previously refined is possible in some cases but it can be tedious elsewhere).

Several local tools. These local tools are applied to the elements that have been selected based on the error estimate (through the control space). The refinement process splits a given element and may also result in splitting some elements in the neighborhood so as to maintain mesh conformity³. There are several ways to refine an element based on the simple patterns shown in Figures 9.3 (in two dimensions) and 9.4 (in three dimensions). Used recursively, these operators allow us to refine, one or more times, a given element or a set of elements.

In two dimensions, the basic constructions consist either of introducing a point on an edge, dealing with two edges (the same operator is applied twice) or three edges, or of creating a point inside the triangle (for instance the centroid of the element). This series of operators results in refining the mesh. On the other hand, derefining a mesh is not trivial, except if these operators can be used in reverse. This means that the derefinement operator can be used only in a region previously refined. This specificity is obviously a weakness related to this type of adaptive method. Nevertheless, such a method can be governed
³One may notice that some solution methods do not require a conforming mesh, thus removing this constraint.
by a strategy enabling the selection of the elements to be processed so as to lead to the desired result, while avoiding the creation of poorly-shaped elements. Numerous authors have proposed such strategies and the reader is referred to [Rivara-1986], [Bai,Brandt-1987] and [Bansch-1991].

Figure 9.3: Local splitting in two dimensions.

In three dimensions, there are also a few basic operators (see Figure 9.4). Three basic tools exist that can be used recursively so as to obtain all of the desired configurations. Indeed, it is possible to define a point along an edge, on a face or inside a tetrahedron. As before, it is more difficult to obtain a derefinement effect.
Figure 9.4: Local partitioning in three dimensions.

Elements other than simplices are not discussed here; nevertheless, it is also possible to define some local operators capable of handling such elements.

A global tool. The only operator which results in a uniform processing of all the elements in a given mesh consists of partitioning each mesh element by adding n points along its edges. Then (n + 1)^d geometrically identical elements are obtained (d being the spatial dimension), as in Figure 9.5. This process results in a uniform refinement of the given mesh. This type of
operation is useful for comparing the mesh adaptivity method, with its variable and local stepsize, against a "reference" solution computed on a fine uniform mesh.
Figure 9.5: Global partitioning of a triangle.
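To make the global operator concrete, the following sketch (in Python; the function name and data layout are illustrative assumptions, not taken from the text) partitions a single triangle by adding n points on each edge, producing the (n + 1)² sub-triangles mentioned above.

# Sketch: uniform partition of a triangle ABC by adding n points on each edge,
# giving (n+1)**2 geometrically similar sub-triangles (two-dimensional case).
def split_triangle(A, B, C, n):
    """Return the list of (n+1)**2 sub-triangles, each as a 3-tuple of points."""
    m = n + 1                                   # number of subdivisions per edge
    # lattice of points P[i][j] = A + (i/m)*(B-A) + (j/m)*(C-A), with i + j <= m
    P = [[tuple(A[k] + (i / m) * (B[k] - A[k]) + (j / m) * (C[k] - A[k])
                for k in range(2))
          for j in range(m - i + 1)]
         for i in range(m + 1)]
    tris = []
    for i in range(m):
        for j in range(m - i):
            tris.append((P[i][j], P[i + 1][j], P[i][j + 1]))        # "upward" triangle
            if j < m - i - 1:                                       # "downward" triangle
                tris.append((P[i + 1][j], P[i + 1][j + 1], P[i][j + 1]))
    return tris

# Example: n = 1 gives 4 sub-triangles, n = 2 gives 9.
print(len(split_triangle((0.0, 0.0), (1.0, 0.0), (0.0, 1.0), 2)))   # 9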
Tentative scheme. The general scheme of a local adaptivity method based on mesh modification includes the following steps:
• the initial mesh construction resulting in Tj, with j = 0,
• (1) the computation of the solution uj associated with this support,
• the error estimation and its formulation as a list of elements to be processed,
• the modification, if required, of the current mesh so as to obtain the new mesh Tj, with j = j + 1,
• the adequate transfer of the current solution onto Tj so as to permit, after going back to (1), the whole process to be repeated.
Local modification relies either on using one or more local tools to refine (or derefine) the selected elements or on optimizing the current mesh. Adaptivity by means of mesh optimization can use the same tools as well as the set of optimization procedures described in Chapter 8. We refer the reader again to the above references for a more exhaustive investigation of this type of scheme. Nevertheless, one may read the following to find more details about the steps of the general scheme that will be found in the approach proposed hereafter.
9.3.2
Adaptivity based on a complete reconstruction
The h-adaptivity method that we will present now does not rely on local or global modifications of the current mesh but instead requires an entire reconstruction of the mesh for each iteration step of an iterative process.
The main idea then is to define an iterative process by using an automatic mesh generation method governed by a control space, the latter resulting from the error estimate included in the computational process.

Constrained mesh generation methods. In two dimensions, the required meshing operations affect the domain boundaries and the domain itself. Thus, two types of mesh generation methods must be envisaged. The first is a boundary mesh generation method, basically a method adapted to curve or surface meshing. The second one is a domain mesh generation method using the boundary mesh previously created so as to create the mesh elements of the domain, accordingly. In three dimensions, one will successively need a curve meshing operator, a surface meshing operator and a domain meshing operator (based on the discretization of the domain surface resulting from the previous operator). Whatever the entity to be meshed (a curve, a planar domain, a surface or a volume), the expected mesh shall conform to the set of criteria (size, direction) specified via a control space; thus the required mesh generation operators must necessarily be governed or constrained so as to conform to the specifications defined in the control space. Before giving a detailed description of these algorithms, we would like to propose a general scheme which allows for the development of an automatic computational loop based on mesh adaptation.
9.4
General scheme for an adaptation loop
A solution computation in an automatic adaptive process includes the stages described in the following formal scheme.
• Initial mesh construction (using any classical method),
• (1) Computation of the solution based on the current support,
• Analysis of the solution quality using an error estimate and construction of the related control space,
  - If the current solution has "converged" (discussed hereafter), then we have reached the end of the computational process.
  - If the current solution has not converged, construct the related control map which will serve as meshing specifications and GO TO (1).
As previously mentioned, the construction of a new mesh governed by a control map so as to iterate the process includes several aspects that will be discussed in the following. We will successively discuss the (formal) way to construct a control space, then we will focus on the boundary meshing (or remeshing), before detailing how to mesh the domain. We will also mention the problem of transferring the solutions from one mesh to the other so as to preserve the suitable properties (for instance without introducing any numerical dissipation). Indeed, such a transfer step can be a crucial point, in particular when the solution method is iterative.
9.5
Control space
As seen in Chapter 5 and in the following, the control space is a convenient way to govern an automatic mesh generation algorithm. In an adaptive scheme, the notion of a control space is clearly a useful requirement.
9.5.1
Definition of the successive control spaces
We denote by Tj the mesh constructed at step j of the mesh generation process, T0 being the initial mesh. We denote by CSj the control space used to govern the construction of the mesh Tj. Hence,
• in principle, CS0 is not defined at the time the initial mesh T0 must be constructed. Nevertheless, the discretization (the mesh) of the domain boundaries is made according to user-provided specifications (specified sizes, constant sizes, sizes automatically derived from the geometric features of the boundaries (radius of curvature, etc.), ...). Thus, once the boundary mesh is created, it is possible to define the control space CS0 which can be used to govern the construction of the domain mesh T0. The initial mesh is related to the given boundary discretization.
• CSj, the control space used in the following iteration steps to govern the construction of the mesh Tj (j = 1, 2, ...), is automatically defined by

CS_j = (T_{j-1}, H_{j-1})    (9.1)

where the pair (Tj, Hj) represents the mesh at step j and the control map supplied to this mesh. This map results from the error estimate used to analyse the solution uj related to Tj. This construction gives Hj as a metric map, which is a matrix map from a practical point of view. Thus, CSj is in essence a discrete field, where the function Hj
is only known at the element vertices of the mesh Tj. This is one of the expected difficulties in any adaptive process and constitutes a major difference with any purely academic problem (where Hj is assumed to be well-defined everywhere in the space).
9.5.2
Control space construction
We will present in this section several issues related to the construction of the control spaces associated with each iteration step of the adaptive process. The purpose is to define the pair (Tj, Hj), which in practice only requires that we define Hj, since Tj is simply the current mesh. At first, we assume that the a posteriori error estimate used in the computational part of the global scheme provides the values defining Hj. The realistic case of a finite element situation will be discussed in Chapter 12. Here we would like to emphasize two difficulties:
• the field Hj is a discrete field and so it is necessary to define an interpolation operator so as to know Hj everywhere,
• the error estimate may be made up of several unknowns and then may define several fields Hj that must be combined in some way.
We will summarize the notions of a metric interpolation and that of a metric intersection in what follows.

Metric interpolation. This operation enables us to find the metric related to any point in the domain from the metrics defined at the vertices of the current mesh. Several situations are relevant depending on the position of the point under investigation. This point can be located along an edge for which the metric is known at the endpoints, or this point can fall within an element for which the metric is available at the vertices. Let us consider the first situation and, for the sake of simplicity, let us assume a two-dimensional problem. Let a = AB be an edge and let P be a point along this edge. We assume that the metrics are given at A and B; let M_A and M_B be these metrics. The question is then to find a metric M_P at P. The solution is trivial in an isotropic context. Indeed, if the first metric is simply λ I_2 and the second is μ I_2, then the expected sizes are h_A = 1/\sqrt{λ} for the first metric and h_B = 1/\sqrt{μ} for the second one. Then the interpolation function, assuming that an arithmetic size distribution is expected,
is defined by

M(t) = \frac{1}{\left( h_A + t\,(h_B - h_A) \right)^2}\, I_2 , \qquad 0 \le t \le 1 ,    (9.2)
with M(0) = M_A and M(1) = M_B. One may also consider other types of point distribution such as geometric or sinusoidal distributions. In the anisotropic case, several approaches can be envisaged, which are now discussed. By analogy with the isotropic case, where a metric is usually written as M = \frac{1}{h^2} I_2, we observe that the variations related to the h's are "equivalent" to the variations related to the M^{-1/2}'s. Thus, we obtain the following interpolation scheme

M(t) = \left( (1-t)\, M_A^{-1/2} + t\, M_B^{-1/2} \right)^{-2} , \qquad 0 \le t \le 1 .    (9.3)
Computing M^{-1/2} requires evaluating the eigenvalues of M. To avoid this, one can consider the interpolation

M(t) = \left( (1-t)\, M_A^{-1} + t\, M_B^{-1} \right)^{-1} , \qquad 0 \le t \le 1 ,    (9.4)
by noticing that this relation emphasizes the smallest sizes. The interpolation scheme based on a metric exponent is properly defined, indeed
• if M is a metric, then t M^a is also a metric, where t > 0 and a are two arbitrary real values;
• if M_A and M_B are two metrics, M_A + M_B is also a metric.
To prove these results, it is only necessary to verify that, in both cases, the resulting matrix is symmetric and positive definite. This kind of interpolation has some weakness; in particular, the variations in terms of h are not explicitly controlled. Thus, we consider the simultaneous reduction of two metrics (which are two quadratic forms) which, as will be seen, allows us to control the h specification in two directions (in two dimensions). This operation supplies a basis where these two forms are defined by two diagonal matrices. The matrix N = M_A^{-1} M_B is M_A-symmetric and can then be diagonalized in R^2 (we consider the two-dimensional case). Let (e_1, e_2) be the eigenvectors of N, defining a basis in R^2, then
Let X = x_1 e_1 + x_2 e_2 be an arbitrary vector in R^2 provided with the basis (e_1, e_2); if (λ_i = {}^t e_i M_A e_i)_{i=1,2} and (μ_i = {}^t e_i M_B e_i)_{i=1,2} are defined, then for all i = 1, 2 the inequalities λ_i > 0 and μ_i > 0 hold, and we have

{}^t X M_A X = λ_1 x_1^2 + λ_2 x_2^2   and   {}^t X M_B X = μ_1 x_1^2 + μ_2 x_2^2 .

Let us denote (h_{A,i} = 1/\sqrt{λ_i})_{i=1,2} and (h_{B,i} = 1/\sqrt{μ_i})_{i=1,2}; the value h_{A,i} (respectively h_{B,i}) represents the unit length in the metric M_A (resp. M_B) along the axis direction e_i. Let t be the parameter related to edge a; we define, as in Chapter 11, two continuous monotonous functions H_i(t), t ∈ [0, 1], which allow us to go smoothly from A to B in terms of size variations. Then, the metric interpolation between A and B is given by

M_P = M(t) = {}^t P^{-1} \, diag\!\left( \frac{1}{H_1(t)^2}, \frac{1}{H_2(t)^2} \right) \, P^{-1} , \qquad 0 \le t \le 1 ,

where P is the matrix composed of the row vectors (e_1, e_2), (H_1(t), H_2(t)) being such that H_i(0) = h_{A,i} and H_i(1) = h_{B,i} for i = 1, 2. Hence, M_P is known.

Exercise 9.1. Consider the case where a metric must be established at a point falling within a given element using the metrics at the element vertices.

Metric intersection. This operation allows us to define a unique metric given several metrics. The resulting metric will serve to govern the mesh generation algorithm. For instance, cf. Chapter 4, if M_1 and M_2 are the two matrices associated with the given metrics and if λ_i (respectively μ_i), i = 1, d, are the eigenvalues of these matrices in the basis of the simultaneous reduction, then the intersection metric M_1 ∩ M_2 can be defined by

M_1 ∩ M_2 = {}^t P^{-1} \, diag\!\left( \max(λ_i, μ_i) \right)_{i=1,d} \, P^{-1} ,
where P is the transition matrix from the canonical basis to the basis related to the simultaneous reduction of the two initial metrics.
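Both operations can be sketched in a few lines. The following fragment (Python with numpy; the function names, and the choice of Relation (9.4) for the interpolation, are illustrative assumptions, not the book's implementation) interpolates a metric along an edge and intersects two metrics through their simultaneous reduction.

import numpy as np

def interpolate_metric(MA, MB, t):
    """Relation (9.4): interpolate the inverses; this favours the smallest sizes."""
    return np.linalg.inv((1.0 - t) * np.linalg.inv(MA) + t * np.linalg.inv(MB))

def metric_intersection(M1, M2):
    """Intersection by simultaneous reduction: keep, in each common direction,
    the largest eigenvalue, i.e. the smallest size."""
    N = np.linalg.inv(M1) @ M2
    _, P = np.linalg.eig(N)               # columns of P: the common basis (e1, e2)
    P = np.real(P)                        # the eigenvalues of N are real and positive
    lam = np.array([P[:, i] @ M1 @ P[:, i] for i in range(len(M1))])
    mu  = np.array([P[:, i] @ M2 @ P[:, i] for i in range(len(M1))])
    Pinv = np.linalg.inv(P)
    return Pinv.T @ np.diag(np.maximum(lam, mu)) @ Pinv

# Example: two isotropic metrics of sizes 0.1 and 0.4.
MA, MB = np.eye(2) / 0.1**2, np.eye(2) / 0.4**2
M_mid = interpolate_metric(MA, MB, 0.5)
M_int = metric_intersection(MA, MB)
print(1.0 / np.sqrt(M_mid[0, 0]))   # intermediate size between 0.1 and 0.4
print(1.0 / np.sqrt(M_int[0, 0]))   # ~0.1: the intersection keeps the smaller size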
9.6
Boundary meshing (or remeshing)
The mesh generation method for constructing the mesh of the domain uses the mesh of the domain boundaries as data. This choice will be justified in Chapter 11.
9.6.1
Curve meshing (or remeshing)
Starting from the mesh describing the curve under consideration, a mathematical representation of the curve is inferred. This form is then used so as to obtain a mesh of the curve conforming to the given specifications (either size or size and directional features). We refer the reader to Chapter 11 for a detailed description of this process.
9.6.2
Surface meshing (or remeshing)
The reader is also referred to Chapter 11 for this phase of the meshing process.
9.7
Domain meshing
Constructing the domain mesh, given the mesh of its boundary, consists of using a governed mesh generation algorithm. Two situations are encountered. The first one corresponds to the construction of the initial mesh (j = 0) while the other one concerns the construction of the meshes at successive iterations (j = 1, 2, ...). To create the initial mesh T0, one has to follow the classical method as proposed in Chapters 5 (in two dimensions) and 7 (in three dimensions). Using the discretization of the domain boundaries, we construct the corresponding space CS0 and the classical mesh generation method is used. Constructing the successive meshes, Tj, is more subtle. We use the methods previously proposed except that we will now use a discrete control map as opposed to the continuous control maps assumed in Chapters 5 and 7. For a more comprehensive survey of the mesh generation process (for a given j), we will briefly review the mesh generation method on which the process is based by focusing on the mesh edge analysis method used to create the relevant field points. The scheme is summarized as follows (cf. Chapter 5):
• Preparation step, i.e. construction of the bounding box and creation of the related mesh.
• Insertion of the given points in the box mesh using the Delaunay kernel.
• Construction of the empty mesh.
• (1) Field point creation according to the current (internal) edge analysis.
• Insertion of these internal points and GO TO (1).
• etc.
The main idea of this method is the computation of the edge lengths in the current mesh and, accordingly, the creation of the internal points such that the distance between two points is unity with respect to the control space CSj. Once all the current edges have been analyzed, we have a set of points, which are filtered and then inserted into the current mesh. The resulting mesh replaces the current mesh and the procedure is repeated as long as a point is created. Let us assume that CSj is a continuous space; then, if AB denotes a current mesh edge, the length of AB is defined by

l_M(A, B) = \int_0^1 \sqrt{ {}^t\vec{AB} \; \mathcal{M}(M(t)) \; \vec{AB} } \, dt ,    (9.6)
where t is the parameter defining AB, M(t) is the current point along AB, with M(0) = A and M(1) = B, and \mathcal{M} is the matrix related to the specification included in CSj. This computation is approximated by a numerical quadrature (9.7). Let δ be a threshold value (cf. Chapter 5); if l_M(A, B) < δ, the edge AB does not need to be subdivided, otherwise we introduce the point Q_1 and, step by step, the series of the Q_i's such that two consecutive points are at unit distance with respect to the metric (9.8), and the length of AB is approached by
This means that

AB = AQ_1 + Q_1Q_2 + ... + Q_{n-1}Q_n + Q_nB .    (9.9)
This length enables us both
• to determine the number of required points along AB,
• to locate these points.
In the present case, CSj is a discrete space, thus the length of a given edge cannot be obtained by the above method. Indeed, CSj relies on Tj-1 and, as a consequence, the matrices \mathcal{M} are defined only at the vertices of Tj-1. Computing the length of an edge AB can then be performed using the following scheme
Figure 9.6: Intersection of AB, edge of Tj, with Tj-1.

• we define the intersection points of AB with the elements of Tj-1. Let (Figure 9.6) R_k be these points (k = 1, 2, ..., p),
• at every R_k is computed the value of the control (i.e. the corresponding matrix \mathcal{M}) by interpolation between the values at the endpoints of the edge of Tj-1 supporting the point R_k in question (see Figure 9.7 where R_3 falls on the edge αβ of Tj-1),
• at the point A (resp. B), this matrix is necessarily known. Indeed, either these points are boundary points and their metrics are well-defined (as data) or these points are the endpoints of an edge of the current mesh and their metrics have been evaluated at the time they were created,
• the length of AB is then
l_M(A, B) = l_M(A, R_1) + \sum_{k=1}^{p-1} l_M(R_k, R_{k+1}) + l_M(R_p, B) ,    (9.10)
Figure 9.7: Metric evaluation at point R_3.
Figure 9.8: Length of AB.

• and every sub-segment is evaluated by the initial formula. Thus, we introduce the series of required points Q_i (Figure 9.8). The length of AB being now computed, it is now possible (as above) to
• determine the number of required points along AB,
• locate these points.
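The following sketch (Python with numpy; all names are hypothetical) evaluates the length of an edge from metrics sampled at its intersection points, in the spirit of Relation (9.10); the endpoint average used for each sub-segment is only one possible quadrature, not necessarily the one of Relation (9.7).

import numpy as np

def segment_length(P, Q, MP, MQ):
    """Approximate metric length of PQ from the metrics at its two endpoints."""
    v = np.asarray(Q, dtype=float) - np.asarray(P, dtype=float)
    lP = np.sqrt(v @ MP @ v)          # length of PQ measured in the metric at P
    lQ = np.sqrt(v @ MQ @ v)          # length of PQ measured in the metric at Q
    return 0.5 * (lP + lQ)            # simple average as quadrature (assumption)

def edge_length(points, metrics):
    """points: [A, R_1, ..., R_p, B]; metrics: matching list of 2x2 matrices."""
    return sum(segment_length(points[k], points[k + 1], metrics[k], metrics[k + 1])
               for k in range(len(points) - 1))

# Example: a uniform size h = 0.25 along a unit edge gives a metric length of 4.
M = np.eye(2) / 0.25**2
print(edge_length([(0, 0), (0.5, 0), (1, 0)], [M, M, M]))   # 4.0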
9.8
Solution interpolation
By definition, an adaptive process requires that the solution of the given problem be computed at every iteration. An iterative solution method is advocated in order to take advantage of the context. Indeed, the computation of the solution u_{j+1} using an iterative solver can be initialized by the solution u_j that is currently known. As u_{j+1} and u_j are assumed to be reasonably "close", this initialization leads to time savings and speeds up the convergence of the iterative solver. The initialization of u_{j+1} by u_j consists of transferring the solution u_j known at the nodes of Tj to the nodes of the mesh Tj+1. This operation
must be performed without loss of information and without introducing any (numerical) dissipation.

Transfer by interpolation. The simplest solution relies on using the interpolation operator related to the finite elements that are used. We search for the element of Tj that contains the node of Tj+1 under examination and we apply the classical interpolation formula in this element, i.e.,

\tilde u_{j+1}(x) = \Pi(u_j(x)) = \sum_{i=1}^{3} u_j(a_i)\, \lambda_i(x) ,    (9.11)

where x is the node under consideration, Π is the interpolation operator, a_i and λ_i are respectively the nodes of the triangle of Tj in which the point x lies and the basis functions in this element (we assume that a P1 interpolation is used). This solution is not well-suited for all types of problems; in particular, it does not preserve the integral of the transferred solution. Actually, our wish is to have

\int_{\Omega} \tilde u_{j+1} = \int_{\Omega} u_j ,

i.e., a conservative transfer.

Transfer by corrected interpolation. To satisfy the above equation, we can construct \tilde u_{j+1} as

\tilde u_{j+1} = \Pi(u_j) + \frac{ \int_{\Omega} u_j - \int_{\Omega} \Pi(u_j) }{ \int_{\Omega} \omega } \; \omega ,    (9.14)

where ω is a function defined in Tj+1, for instance Π(u_j) or ∇Π(u_j) (∇ standing for the gradient). Although it is conservative, this solution does not clearly improve the quality of the solution transfer from Tj to Tj+1. Another solution consists of processing the elements of Tj+1 rather than the nodes one at a time. To this end, a mesh intersection is required between Tj and Tj+1 so as to permit the construction of the desired transfer operator.
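For Relation (9.11), once the containing triangle has been located, the interpolation itself reduces to a barycentric evaluation, as in the following sketch (Python with numpy; the names are illustrative).

import numpy as np

def p1_interpolate(x, tri, values):
    """tri: the 3 vertices of the containing triangle of Tj; values: u_j at these vertices."""
    a1, a2, a3 = (np.asarray(p, dtype=float) for p in tri)
    T = np.column_stack((a2 - a1, a3 - a1))
    l2, l3 = np.linalg.solve(T, np.asarray(x, dtype=float) - a1)
    lam = np.array([1.0 - l2 - l3, l2, l3])    # barycentric coordinates = P1 basis functions
    return lam @ np.asarray(values, dtype=float)

# Example: interpolating a linear field (u = x + 2y) reproduces it exactly.
print(p1_interpolate((0.25, 0.25), [(0, 0), (1, 0), (0, 1)], [0.0, 1.0, 2.0]))  # 0.75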
Mesh intersection. The question we are interested in is the following: given Tj and Tj+1, how do we determine, for every element of Tj+1, the elements of Tj which are partially or entirely overlapped? If we assume that the boundary of Tj+1 is identical to that of Tj, then every point of Tj+1 is strictly included in Tj. The intersection of the two meshes in question is not a difficult task in this case. Let P be a vertex of Tj+1,
• we locate P in Tj using an adequate (searching) method (cf. Chapter 1),
• the solution element initializes the effective searching procedure and all of the elements of Tj with a non-empty intersection with any element of Tj+1 having P as vertex are obtained by adjacencies.
In the case where the boundaries at indices j and j + 1 are not strictly coincident, a specific procedure is needed so as to find the element which can initialize the above searching procedure. At the completion of this operation, we have for every point P of Tj+1 the list of all the elements having it as vertex, i.e., the ball of P, denoted by B_P (following the notations introduced in the previous chapters) and, for every element K of Tj+1, we have the set of elements of Tj, denoted by S_K, whose intersection with K is not empty.

Transfer by L2 projection. If Ω is the computational domain and f is an arbitrary function of L^2(Ω), called a test function, and if we suppose that \tilde u_{j+1} is the L^2 projection of u_j onto the closed sub-space V_{j+1}, then

\int_{\Omega} (\tilde u_{j+1} - u_j)\, f = 0 \qquad \forall f \in V_{j+1} .    (9.15)
Since V_{j+1} is the finite element space associated with the mesh Tj+1, whose basis functions are the test functions λ_P attached to the vertices P, by integrating over this mesh we obtain

\int_{T_{j+1}} \tilde u_{j+1}\, \lambda_P = \int_{T_{j+1}} u_j\, \lambda_P \qquad \forall P \in S_{j+1} ,    (9.16)

where S_{j+1} denotes the set of vertices of Tj+1. Then, because λ_P = 1 at P and is null at the other vertices of Tj+1, the integrals reduce to the ball of P,

\int_{B_P} \tilde u_{j+1}\, \lambda_P = \int_{B_P} u_j\, \lambda_P .    (9.17)
An adequate numerical quadrature (indeed, a mass lumping) allows us to compute this integral. If we assume that \tilde u_{j+1} is constant over every ball, we have

\tilde u_{j+1}(P) \int_{B_P} \lambda_P = \int_{B_P} u_j\, \lambda_P ,    (9.18)

where \int_{B_P} \lambda_P is proportional to |B_P|, the area of B_P. Determining \tilde u_{j+1} is then immediate via the formula

\tilde u_{j+1}(P) = \frac{ \int_{B_P} u_j\, \lambda_P }{ \int_{B_P} \lambda_P } .    (9.19)

This approach leads to some difficulties when the two mesh boundaries are not coincident; indeed, there are points P of Tj+1 for which the ball B_P is not entirely covered by the elements of Tj (Figure 9.9). Assuming that \tilde u_{j+1} is linear on every element of Tj+1, the above relation can be written as

\tilde u_{j+1} = \sum_{Q \in S_{j+1}} \tilde u_{j+1}(Q)\, \lambda_Q .

By replacing \tilde u_{j+1} in the equality (9.16), we obtain a linear system written as

\sum_{Q \in S_{j+1}} \tilde u_{j+1}(Q) \int_{T_{j+1}} \lambda_P\, \lambda_Q = \int_{T_{j+1}} u_j\, \lambda_P \qquad \forall P \in S_{j+1} ,    (9.20)

and, in matrix notation, we have A U_{j+1} = B with A = (a_{PQ}) and B = (b_P), these coefficients being defined by

a_{PQ} = \int_{T_{j+1}} \lambda_P\, \lambda_Q , \qquad b_P = \int_{T_{j+1}} u_j\, \lambda_P .

The matrix A is a sparse matrix since a_{PQ} ≠ 0 only if P and Q belong to the same triangle, and it is also a symmetric matrix. An iterative method can be used to solve this system and \tilde u_{j+1} is then determined.
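The lumped variant (9.18)-(9.19) can be sketched as follows (Python; the piecewise representation of the ball intersection is assumed to be available from the mesh intersection above, and all names are illustrative).

# Lumped L2 projection for one vertex P of T_{j+1}. Each "piece" of the ball B_P is
# given as a triple (area, mean of u_j over the piece, mean of lambda_P over the piece).
def lumped_projection(pieces):
    num = sum(area * uj * lam for area, uj, lam in pieces)     # ~ integral of u_j * lambda_P
    den = sum(area * lam for area, _, lam in pieces)           # ~ integral of lambda_P
    return num / den                                           # value u_{j+1}(P), Relation (9.19)

# Example: two pieces of equal area carrying the values 1.0 and 3.0.
print(lumped_projection([(0.5, 1.0, 1.0 / 3.0), (0.5, 3.0, 1.0 / 3.0)]))   # 2.0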
Figure 9.9: The boundaries of the two meshes are not coincident.

For a detailed discussion about these solution transfers ensuring the desired properties, we refer the reader to [Ouachtaoui-1997]. In our experience, the method involving the L2 projection and based on the intersection of the two meshes (the mesh at step j and that of the previous iteration) seems to give a suitable answer to the solution transfer problem, in particular when a linear function is used. More investigation is needed for this problem so as to compare the accuracy (and the conservative aspect) of the process for different choices.
9.9
General scheme of an adaptive loop
The whole general scheme of an adaptive process as described in this chapter can be formally written as
• Construction of an initial mesh Tj with j = 0 (by means of a classical method).
• (1) Computation of the solution, uj, associated with Tj.
• Analysis of the quality of uj via an error estimate,
  - the solution uj is "converged": END.
  - the solution uj is not yet converged: construct CSj,
  - construction of Tj+1,
    * mesh of the domain boundaries with respect to CSj,
    * construction of the empty mesh of the domain,
    * (2) based on an analysis of the edges of the current mesh according to CSj, create points along these edges,
    * point insertion according to CSj and GO TO (2) as long as a point is inserted,
  - transfer of uj onto Tj+1 (construction of uj+1) and GO TO (1) with j = j + 1.
In this scheme, the line identified by (2) corresponds to the governed mesh generation process. This line stands for a repeated loop over the edges of the current mesh and then for a loop over the current mesh as long as edges remain to be processed. The loop denoted by (1) corresponds to the general loop of the adaptation process. The following flowchart shows this general scheme and summarizes the sequence of steps involved in a three-dimensional problem.
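As a schematic analogue of this loop, the following runnable Python toy adapts a one-dimensional mesh of [0, 1] to an analytic size map; the size map h(x) and the threshold are arbitrary choices made for the illustration, and the real scheme would of course include the solution and error-estimation steps.

def h(x):                          # prescribed size map (assumed for the example)
    return 0.02 + 0.3 * abs(x - 0.5)

def metric_length(a, b):           # length of [a, b] for the metric 1/h(x)^2 (midpoint rule)
    return (b - a) / h(0.5 * (a + b))

def adapt(vertices, threshold=1.5, max_iter=30):
    for _ in range(max_iter):      # loop (1): would normally include a solution step
        new = [vertices[0]]
        refined = False
        for a, b in zip(vertices, vertices[1:]):   # loop (2): edge analysis
            if metric_length(a, b) > threshold:
                new.append(0.5 * (a + b))          # point creation
                refined = True
            new.append(b)
        vertices = new
        if not refined:            # "converged": every edge has an admissible length
            return vertices
    return vertices

print(len(adapt([0.0, 1.0])))      # the final vertex count reflects the size map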
9.10
Some results
We now give some academic examples in this section (the realistic examples will be given in Chapter 12). To this end and for each of them, we consider a metric map analytically provided. We construct an initial mesh and we perform an adaptive loop while the obtained results are analyzed. An isotropic case is first discussed for which a size specification is given; then an anisotropic problem is depicted, for which a size as well as a directional specification is provided. In particular, we indicate the
relationship between the (local) mesh size and the proper capture of the solution variation, if any. Such capture is the key to constructing the proper control space required to achieve the new mesh for the next iteration. We return to the two academic examples given in Chapter 5 so as to simulate a real mesh adaptivity process and to point out the difficulties that can be expected.
9.10.1
An isotropic example
The example we will discuss is a 7 × 9 rectangle on which an initial mesh is constructed. This mesh is displayed in Figure 9.11 on the top-left side. The problem then is to obtain a mesh of this domain by means of successive iteration steps, such that the specifications illustrated in Figure 9.10 are satisfied. On this figure, the sizes of the specification have been represented at the vertices of a uniform grid by means of circles. The radii of these circles stand for the expected sizes. In other words, the desired elements should conform as best they can to the sizes (actually, their edges must conform to these lengths) depicted in this figure. The mesh adaptivity problem can then be formally written as "Find a mesh whose edge lengths are u", where u = h(x, y), the size u being analytically defined (the control space being ideal).
The adaptation process is as follows:
• construction of an initial mesh,
• (1) computation of the edge lengths in the current mesh and comparison of these values with the specification,
• if all the edges have satisfactory lengths, END,
• otherwise
  - boundary mesh generation using unit length segments,
  - construction of the empty mesh,
  - (2) computation of the edge lengths of the current mesh and point creation along the relevant edges (if any),
  - insertion of the so-created points (after filtering) and GO TO (2) as long as a point is added,
• GO TO (1).

Figure 9.10: Controlled isotropic case (the specification).

The stage denoted as (1) in the above scheme emulates the error estimate. The stage denoted as (2) is the point creation process according to the method proposed in the previous chapters. This scheme requires some further comments. The edge length, at stage (2), is computed using the intersection points of the examined edge with the previous mesh. Indeed, the control space is known in a discrete form at the vertices of the elements of this mesh and the value of the specification is then obtained at these intersection points. We use the previous method to compute the edge length and to decide if one or several points must be created. Thus, it is possible to miss a rapid size variation not compatible with the current mesh density. In other words, the initial mesh must be reasonably dense so as to expect to adequately capture the features of the problem.
Figure 9.11: Initial mesh and iteration steps.
The example depicted in Figure 9.11 shows that the final mesh (obtained at the time that the edge lengths of the current mesh do not require any further subdivision, the current mesh being then stable) is satisfactory with respect to the considered control space. Hence, the initial mesh was sufficiently fine so as to ensure that the solution variations were adequately captured. To more precisely analyze the quality of the different iteration steps, we give Table 9.1 and various diagrams related to the mesh edge length dispersions.

iter    np       ne       Q        h_min    h_max
0       106      206      1.205    1.       1.
1       1,754    3,502    1.562    0.05     1.
2       2,218    4,430    1.428    0.05     1.
3       2,227    4,448    1.428    0.05     1.

Table 9.1: Statistics for the isotropic example.

In this table, iter, np and ne denote the iteration step number, the number of points and the number of elements, respectively. The values h_min and h_max are the minimum and maximum sizes. The value observed for Q, the shape quality, is related to the rapid size variation within a small interval. The curves (Figures 9.12 to 9.15) each show two diagrams. The first diagram, D1, reports the number of mesh edges of Tj-1 conforming to the size specification and the second diagram reports the same information with respect to the mesh Tj. At convergence, the two diagrams are overlapping, thus ending the iterative loop. If a denotes an edge, we display l_a if l_a is greater than 1 while, conversely, we display 1/l_a; the unit value being that of the control space. The scale on the x axis must be viewed as follows
• the value 100 corresponds to a dispersion equal to 1, meaning that the edge lengths are adequate,
• the dispersion is symmetric around this value (ranging from 0 to 1 and then from 1 to infinity in a symmetric way with respect to the value 100).
Figure 9.12: Initial dispersion diagrams (iteration step 0).
Figure 9.13: Dispersion diagrams after first iteration.
Figure 9.14: Dispersion diagrams after two iterations.
Figure 9.15: Dispersion diagrams after three iterations.
9.10.2
An anisotropic example
This example shows a case where an anisotropic metric map is specified everywhere in the domain. The domain is again a 7 × 9 rectangle. The metric map is defined, in this rectangle, by the 2 × 2 diagonal matrices

\mathcal{M}(x, y) = diag\!\left( \frac{1}{h_1(x)^2}, \frac{1}{h_2(y)^2} \right) ,

where h_1 is defined piecewise over the intervals x < 2, 2 ≤ x ≤ 3.5, 3.5 ≤ x ≤ 5 and 5 ≤ x ≤ 7 (decreasing from 1 down to the constant value 0.05 and then increasing again towards 1), and h_2 is defined in the same fashion over the intervals y < 2, 2 ≤ y ≤ 4.5, 4.5 ≤ y ≤ 7 and 7 ≤ y ≤ 9.
This data specifies that the expected sizes in the abscissa direction are (via a projection) the discretization on the bottom side of the rectangle and the expected sizes in the ordinate direction follow the discretization on the left side of the rectangle. One can check the adequate correspondence between the given specification and both the size and the shape of the elements obtained at iteration step 3. The maximum stretching factor is about 20. To help the analysis of the different iteration steps, we report the same information as in the isotropic example by means of Table 9.2 and the dispersion diagrams of Figures 9.17 to 9.20. The values r_min and r_max correspond to the minimal and maximal stretching factors.

iter    np       ne       Q       h_min    h_max    r_min    r_max
0       52       98       1.25    1.5      1.5      -        -
1       815      1,624    2.22    0.02     0.5      1.       9.7
2       1,328    2,650    2.27    0.02     0.5      1.       19.5
3       1,298    2,590    2.32    0.02     0.5      1.       19.5

Table 9.2: Statistics for the anisotropic example.
Figure 9.16: Initial mesh and iteration steps.
Figure 9.17: Dispersion diagrams at iteration step 0.
Figure 9.18: Dispersion diagrams at iteration step 1.
Figure 9.19: Dispersion diagrams after two iterations.
Figure 9.20: Dispersion diagrams after three iterations.
9.11
Notes
Mesh adaptivity as described in detail in this chapter covered mainly the h-method. Indications were given as to how a new mesh can be constructed at each iteration of an iterative process. From a practical point of view, this approach is not necessarily the best or, at least, must be used with great care. Indeed, it is possible to develop an adaptive process by coupling together
• mesh optimization algorithms and
• global mesh construction algorithms.
In this way, it is not necessary to reconstruct the entire mesh at each iteration and, specifically, the delicate solution transfer problem from one step to the next is simplified.
Chapter 10
Data structures 10.1
Introduction
Defining the data structures useful in a finite element numerical simulation is a rather tedious task. There is no standard or even a definition that has been accepted worldwide and, conversely, every software package has developed its own definition. While it is easy to specify the general principles to define an adequate data structure, it is more difficult to find a well-suited solution which satisfies the requirements for all known situations and for the new situations that can arise. It is even harder still to create a data structure that is comprehensive, readable, easy-to-use, extensible and maintainable. The recent evolution of computational needs has resulted in an increasing demand for fully automatic computational methods based on adaptive schemes. These changes have helped to point out the weaknesses of existing data structures, which have somewhat been addressed by heuristics. In this chapter, we will first establish the list of information needed in a computational process and thus, that must be included in the data structure(s) dedicated to a finite element application. Several requirements must be clearly stated, related to the mesh generation algorithms as well as to the computational step of the solution process. In other words, the question is what information is needed by the mesh generation method and what kind of information the mesher shall provide to the solution computation step. This chapter does not claim to exhaustively cover all data structure issues, but it does propose a solution suitable for all of the problems we have encountered.
10.2
Useful information (tentative list)
In order to establish a list of the relevant information for a specific meshing or computational situation, we will proceed by steps. We will discuss some application examples, starting from a very simple example, by discussing both the data flow and the nature of this data.
10.2.1
Recalling the notion of a mesh
As was seen in the first chapters and following the present topic (i.e. the data structure aspect of the problem), constructing a d-dimensional mesh (in the classical case) consists of
• considering the mesh of the domain boundaries, i.e. a (d−1)-dimensional mesh,
• applying a mesh generation algorithm to achieve the desired mesh, i.e., a d-dimensional mesh.
Hence, the input data flow is a discretization of the boundaries of the domain of interest: a set of edges in two dimensions or a set of triangles in three dimensions¹. The output data flow is a set of vertex coordinates and a set of elements, i.e., the triples or quadruples related to the element vertices. In other words, the mesh data structure is formally of the type
• from 1 to np : the d vertex coordinates,
• from 1 to ne : the (d + 1)-uples defining the element vertices,
(np being the number of vertices and ne the number of elements in the mesh). Obviously, it is not usually possible to perform any effective computation using only this information. Therefore, we discuss how to enrich this data so as to permit the desired computations.
10.2.2
For a (static) problem using a P1 approximation
To this end, we consider a simple academic example. Let Ω be a domain and Γ be its boundary. We assume that Ω is composed of two sub-domains, denoted as Ω_i, and that Γ includes three parts, denoted as Γ_i. We want to solve the following problem: find u, the solution of
¹ We will not discuss how this discretization has been obtained. This will be the objective of Chapter 11.
−div(k_i ∇u) = F_i    in Ω_i (i = 1, 2),
k_i \frac{\partial u}{\partial n} + g_i u = f_i    on Γ_i (i = 1, 2),    (10.1)
u = 0    on Γ_3,
where ∇ stands for the gradient and \frac{\partial u}{\partial n} is the derivative with respect to the normal along Γ_i.
Figure 10.1: The domain Ω and its boundary Γ.

This system of partial differential equations models a heat transfer problem. The physical conditions to which the problem is subjected are:
• the domain Ω_i has a conductivity coefficient k_i,
• the source term for the sub-domain Ω_i is F_i,
• the flux term for the boundary Γ_i is f_i,
• the transfer coefficient for the boundary Γ_i is g_i,
• and finally, the condition u = 0 is prescribed on Γ_3.
Before computing the finite element solution to this problem, we have to apply the various steps included in the method. Thus a better suited formulation, called the variational formulation, or weak formulation, is derived from the above formulation written in terms of partial differential equations (P.D.E.), and can be written as follows. Find u ∈ V such that

a(u, v) = f(v) \qquad \forall v \in V ,

where V is the space of admissible values. In this way, we find the operator a and the form f associated with the above operators.
This continuous problem is then replaced by an approximate problem. In practice, the explicit solution of the continuous problem is not possible in general. This has led to the investigation of approximate solutions using the finite element method. It consists of the construction of a sub-space V_h, which is a finite dimensional sub-space of the space V, and the definition of u_h, an approximate solution of u, where u_h is the solution of the following problem. Find u_h ∈ V_h such that

a(u_h, v_h) = f(v_h) \qquad \forall v_h \in V_h .
It can be shown that this problem has a unique solution, u_h, and that the convergence of u_h to the solution, u, is directly related to the manner in which the functions v_h of space V_h approach the functions v of space V, and therefore the manner in which the space V_h is defined. The finite element method consists of the construction of a finite dimensional space V_h such that, on the one hand, a suitable approximation is obtained and, on the other hand, the actual computer implementation is not too difficult. This construction is based on the following three basic ideas.
1. The creation of a mesh, denoted by T_h, of domain Ω. The domain is written as the finite union of elements K (cf. Chapter 1);
2. the definition of V_h as the set of functions v_h whose restriction to each element K in T_h is a polynomial;
3. the existence of a basis for the space V_h whose functions have a small support.
Thus, the restriction of a function u_h over any element K is written as
u_h|_K = \sum_{i=1}^{N} u_i\, v_i ,

where N denotes the number of degrees of freedom in element K, u_i is the value of the function at the degrees of freedom and v_i is the basis function i of the polynomial space previously defined. A finite element is then characterized by a suitable choice of (following the notation used in [Ciarlet-1978])
• K, the geometrical element (triangle, tetrahedron, etc.);
• Σ_K, the set of degrees of freedom defined over K. These degrees of freedom are the values at the nodes² of the function (for a Lagrange type finite element), or the values and the directional derivatives of the function (for a Hermite type finite element);
• P_K, the basis polynomials on K.
The implementation of a finite element method can be summarized as follows:
• the mathematical analysis of the variational formulation of the problem along with the investigation of its properties;
• the construction of a triangulation (a mesh) of the domain under consideration, i.e. the creation of T_h;
• the definition of the finite elements, i.e., the choice of the triple (K, Σ_K, P_K);
• the generation of element matrices due to the contribution of each element, K, to the matrix and the generation of the right-hand side of the system;
• the assembly of the global system;
• the consideration of essential³ boundary conditions (in our example, u = 0 on Γ_3);
• the solution of the system, i.e., the computation of the solution field approaching the solution;
• the post-processing of the results.
This scheme is quite general and works for any problem solved by a finite element method. From a practical point of view, the mesh, i.e., the information associated with the vertices and the elements, must provide the information needed to create both the matrix and the right-hand side for the resulting discrete system. The mesh also allows for the essential boundary conditions to be taken into account. Thus, we have to:
² The nodes are the degree of freedom supports.
³ The nature of boundary conditions is based on the selected variational formulation.
• compute integrals over the elements (depending on the element sub-domains),
• compute boundary integrals for each of the boundary segments (i.e., along edges or over faces),
• consider the condition u = 0, which results in a query to see if a vertex is on boundary segment Γ_3 (for the above example).

Physical attributes. It is then convenient to associate with every mesh entity a physical attribute which enables us to decide which set of mesh entities this entity belongs to. To this end, we define
• a (physical) sub-domain number or material domain number associated with every element,
• a (physical) boundary number associated with every element face,
• a (physical) boundary number associated with every element edge,
• a (physical) boundary number associated with every element vertex.
With every attribute will be associated a specific treatment or computation related to this number. This seems to be a very simple way to decide if a given treatment is required and to find the relevant numerical coefficients (k_i, f_i, ...) related to this treatment. Hence, this method allows us to define in a concise way, i.e., via a simple number, a (large) set of entities. In the above example, we need at least two material numbers and three boundary numbers. Then, the data structure becomes (the differences between the actual data structure and the initial one are reported in italics)
• from 1 to np : the d vertex coordinates and a physical number,
• from 1 to ne : the (d + 1)-uples defining the element vertices and, for every element,
  - a material number,
  - the physical boundary numbers for its faces and edges.
In this list, we are not concerned, at this time, with the organization of this information. In particular, it has not yet been decided if numbers, booleans, character strings, pointers, ..., will be used. At the moment, we just have to keep in mind that physical information must be associated with the mesh entities.
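A minimal sketch of such an enriched structure, using hypothetical Python names, could be the following.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Element:
    vertices: Tuple[int, ...]            # (d+1)-uple of vertex indices (simplex)
    material: int = 0                    # physical sub-domain number
    boundary_refs: Tuple[int, ...] = ()  # physical number per face (or edge), 0 = internal

@dataclass
class Mesh:
    coords: List[Tuple[float, ...]] = field(default_factory=list)   # 1..np vertex coordinates
    vertex_refs: List[int] = field(default_factory=list)            # physical number per vertex
    elements: List[Element] = field(default_factory=list)           # 1..ne elements

# Example: one triangle of sub-domain 1 whose first edge lies on boundary Gamma_1.
mesh = Mesh(coords=[(0, 0), (1, 0), (0, 1)], vertex_refs=[1, 1, 0],
            elements=[Element(vertices=(0, 1, 2), material=1, boundary_refs=(1, 0, 0))])
print(mesh.elements[0].material)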
10.2.3
For a (static) problem using a P2 approximation
We now discuss a case where P2 finite elements must be constructed. As seen previously, the mesh generation algorithms usually result in implicitly P1 meshes. Every mesh vertex is then considered as a node and the geometric elements are actually the finite elements. In this section, we assume that a mesh implicitly of a P1 type is given and we would like to define a P2 type mesh. We assume in addition that the provided mesh is compatible⁴ with respect to a P2 interpolation. The problem then comes down to defining the "midpoint" nodes along the boundary edges (see Figure 10.2 for an example in two dimensions), as the extra nodes required for the interior edges are trivial to obtain. Boundary nodes must be properly located on the boundary and thus we need to know which edge is a boundary edge and, for a boundary edge, we need to have a suitable representation of the geometry, so as to permit the proper location of the new node.
Figure 10.2: Construction of a P2 finite element.
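A sketch of the corresponding node creation (Python; the projection operator, standing for the boundary representation discussed below, and all other names are assumptions) is given here.

from math import hypot

def p2_nodes(coords, edges, is_boundary, project):
    """Return one new node per edge; `project` maps a midpoint and its edge onto the geometry."""
    nodes = {}
    for edge in edges:                            # edge = (i, j)
        i, j = edge
        mid = tuple(0.5 * (a + b) for a, b in zip(coords[i], coords[j]))
        nodes[edge] = project(mid, edge) if is_boundary(edge) else mid
    return nodes

# Example: a circular boundary of radius 1; boundary midpoints are pushed onto the circle.
coords = [(1.0, 0.0), (0.0, 1.0), (0.0, 0.0)]
edges = [(0, 1), (0, 2), (1, 2)]
proj = lambda p, e: (p[0] / hypot(*p), p[1] / hypot(*p))
nodes = p2_nodes(coords, edges, lambda e: e == (0, 1), proj)
print(nodes[(0, 1)])   # lies on the unit circle instead of on the chord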
Geometric attributes. It is then convenient to associate with every mesh entity a geometric attribute to decide if it belongs to one or another category of mesh entities. To this end, we define
• a (geometric) boundary number associated with every element face,
• a (geometric) boundary number associated with every element edge,
• a (geometric) boundary number associated with every element vertex.
A segment of the boundary will be associated with a given number. Thus, all related treatments can be envisioned. As above, this processing method is a concise way to qualify a (large) set of entities by means of a single value. The data structure is then enriched as follows:
⁴ To see more about this notion, see the p-method as presented in Chapter 9.
• from 1 to np : the d vertex coordinates, a physical number and a geometric number,
• from 1 to ne : the (d + 1)-uples defining the element vertices and, for every element,
  - a material number,
  - the physical boundary numbers and the geometric numbers for its faces and edges.
Again, we have not yet considered how this information will be organized.

Boundary representation. In order to construct the desired P2 finite elements, we have to define some points on the domain boundaries (i.e. along the boundary edges and on the boundary faces). Whatever the construction method used, a boundary representation is needed. This representation must enjoy some properties; in particular, it must be
• easy to construct,
• easy to use and
• inexpensive, in terms of computational cost.
In addition it should be
• as general as possible,
• able to give a reliable and accurate description of the "true" geometry,
• flexible and easy to extend so as to handle new situations that may appear in the future.
Several choices have to be made so as to achieve these goals. The first fundamental choice is that of disassociating the geometric representation used in the mesh generation step, the interpolation definition step (when constructing the P2 elements) and the solution step⁵ from the CAD definition related to the geometry of the domain. While it can be debated, this principle results in numerous advantages:
• the geometric representation used in the computational loop is independent of the CAD system used to define the geometry,
⁵ Specifically, this choice is primordial in a computational loop including both a meshing stage and a solution stage, as will be seen afterwards.
• and, as a corollary, some range of efficiency can be expected.
Nevertheless, to validate this principle, it must be proved that
• it is possible to construct the geometric representation given the CAD information, either directly or using some simple procedures,
• the so-defined geometry is "close" enough to the true geometry of the domain as defined by the CAD system.
The second choice relies on deciding that the geometric representation can be constructed using a mesh, where this mesh is referred to as the support, and shall be defined with this objective. We denote by Suppgeom the geometric representation and by Mgeom this support mesh (cf. hereafter). The third choice is to define several links from the mesh (the mesh data structure) to Suppgeom by means of Mgeom. Let M be the mesh constructed from Suppgeom. Every entity of M (point, edge, face, element) points to one entity of Mgeom, thus
• a point of M
  - is indeed a point of Mgeom,
  - is located along an edge of Mgeom, and also along the related entity in Suppgeom,
  - is located on a face of Mgeom, and also on the related entity in Suppgeom,
  - falls within a sub-domain described in Mgeom,
• an edge of M
  - is indeed an edge of Mgeom,
  - is supported by an edge of Mgeom and thus by the related entity in Suppgeom,
  - is supported by a face of Mgeom and thus by the related entity in Suppgeom,
  - falls within a sub-domain described in Mgeom,
• a face of M
  - is indeed a face of Mgeom,
  - is supported by a face of Mgeom and thus by the related entity in Suppgeom,
  - falls within a sub-domain described in Mgeom,
• an element of M
  - falls within a sub-domain described in Mgeom.
Thus, an adequate definition of Mgeom enables us to
• construct Suppgeom,
• and then, given a mesh, to qualify all of its entities using the links previously defined.
10.2.4
For an adaptive computational process
In this section, we consider a computational scheme as part of an adaptive loop. Such a process can be formally written as follows (cf. Chapter 9):
• Initialization step, j = 0, construction of Mj,
• (1) Current solution computation on Mj,
• Check of the convergence
  - if an iteration is required: j = j + 1, construction of Mj and GO TO (1),
  - otherwise: END.
Assuming the above principles, this scheme leads us to
• construct the supporting mesh Mgeom,
• define its representation Suppgeom,
• construct a first mesh, j = 0, relying on
  - meshing the domain boundaries using Suppgeom and the given specifications, if any,
  - meshing the domain using the above mesh and the given specifications,
• (1) compute the solution associated with Mj,
• check if another iteration is needed,
  - if yes, j = j + 1, construct Mj, i.e. (cf. above),
    * mesh the domain boundaries using Suppgeom and the given specifications, if any,
    * mesh the domain using the above mesh and the given specifications,
    and GO TO (1),
  - otherwise: END.
Hence, the retained choices seem to lead to a scheme that is reasonably easy to implement.
10.2.5
Constraining a mesh
It is also possible to indicate some characteristics to which the mesh must conform. In particular, we would like to point out how to specify:
• that a given point (or a given set of points) must be an element vertex (vertices) in the mesh,
• that a given edge (or a given set of edges) must be an element edge(s) in the mesh,
• that a given face (or a given set of faces) must be an element face(s) in the mesh.
Additionally, we can specify
• that a given point (or a given set of points) must be located on a given entity (edge, curve, face, surface) of the mesh,
• that an edge will be used to model a crack and that it must be considered as two separate entities at the computational step level.
Again, such specifications are apparently possible using the mesh Mgeom and the representation Suppgeom.
10.3
A general data structure
Following the previous discussion, we have formally introduced three different entities: Mgeom, a mesh such that a geometric representation of the domain can be obtained, Suppgeom, and M the current mesh. In practice, as seen above, only two distinct entities are present, a mesh, Mgeom or M, and a representation Suppgeom. In this section, we would like to give a general description of an abstract data structure which allows for the definition of a mesh (in the classical meaning) as well as a mesh support of the geometry. Then, we discuss the case of a mesh support, indicating specifically how to construct a geometric representation based on this mesh. Finally, we discuss how to define a mesh using this theoretical structure. 10.3.1
A general data structure
First, we define a general data structure⁶, which is, in fact, a mesh, and we discuss the two applications in which we are interested. We present the data structure for the mesh support of the domain geometry and the structure for the mesh serving as finite element spatial support. After introducing the relevant notations, we will give a technical description of the abstract structure along with some comments. The following sections will then show how to restrict the abstract definition so as to obtain the two particular data structures.

Notations. The terms set in a special font are file items. The blanks, «new lines» and tabs are item separators. The comments start with the character "#" and end at the end of the line, unless they are in a string. The comments are placed between the fields. The notation (... , i=1,n) stands for an implicit DO loop as in FORTRAN. The syntactic entities are field names, integer values (I), (double) floating values (R), strings (C*) (up to 1024 characters) placed between "". The blanks and «new lines» are significant when used between quotes and, to use a quote " in a string, one has to type it twice "" (as in FORTRAN). Booleans (B): 0 for false and any other value for true (1 in general). A number, for instance a vertex number, is denoted by @Vertex.

⁶ Definition subject to modification.
The entities of number type (assuming that the numbering starts from 1 as in FORTRAN) are the vertex numbers @Vertex, the edge numbers @Edge, the triangle numbers @Tria, the quadrilateral numbers @Quad, the tetrahedron numbers @Tetra, the pentahedron numbers @Penta, the hexahedron numbers @Hexa, the numbers of a vertex in the appropriate support (described later) @Vertexsupp, the numbers of a support edge @Edgesupp, the numbers of a support triangle @Triasupp, the numbers of a support quadrilateral @Quadsupp, the numbers of a support tetrahedron @Tetrasupp, the numbers of a support pentahedron @Pentasupp, the numbers of a support hexahedron @Hexasupp. In addition, Refφ denotes a number based on a physical attribute.
Description in extenso. The data structure first includes a string identifying the release
• MeshVersionFormatted 0
Then, the fields are defined as follows
• Dimension (I) dim
• Vertices (I) NbOfVertices
  ( ((R) x_i^j, j=1,dim) , (I) Refφ_i , i=1, NbOfVertices )
• Edges (I) NbOfEdges
  ( @Vertex_i^1 , @Vertex_i^2 , (I) Refφ_i , i=1, NbOfEdges )
• Triangles (I) NbOfTriangles
  ( (@Vertex_i^j, j=1,3) , (I) Refφ_i , i=1, NbOfTriangles )
• Quadrilaterals (I) NbOfQuadrilaterals
  ( (@Vertex_i^j, j=1,4) , (I) Refφ_i , i=1, NbOfQuadrilaterals )
• Tetrahedra (I) NbOfTetrahedra
  ( (@Vertex_i^j, j=1,4) , (I) Refφ_i , i=1, NbOfTetrahedra )
• Pentahedra (I) NbOfPentahedra
  ( (@Vertex_i^j, j=1,6) , (I) Refφ_i , i=1, NbOfPentahedra )
• Hexahedra (I) NbOfHexahedra
  ( (@Vertex_i^j, j=1,8) , (I) Refφ_i , i=1, NbOfHexahedra )
• SubDomain (I) NbOfSubDomain
  ( (I) type_i , { @Edge_i if type_i = 2 ; @Tria_i if type_i = 3 ; @Quad_i if type_i = 4 } , (I) Orientation_i , (I) Refφ_i^s , i=1, NbOfSubDomain )
• Corners (I) NbOfCorners
  ( @Vertex_i , i=1, NbOfCorners )
• Ridges (I) NbOfRidges
  ( @Edge_i , i=1, NbOfRidges )
• RequiredVertices (I) NbOfRequiredVertices
  ( @Vertex_i , i=1, NbOfRequiredVertices )
• RequiredEdges (I) NbOfRequiredEdges
  ( @Edge_i , i=1, NbOfRequiredEdges )
• RequiredTriangles (I) NbOfRequiredTriangles
  ( @Tria_i , i=1, NbOfRequiredTriangles )
• RequiredQuadrilaterals (I) NbOfRequiredQuadrilaterals
  ( @Quad_i , i=1, NbOfRequiredQuadrilaterals )
• TangentAtEdges (I) NbOfTangentAtEdges
  ( @Edge_i , (I) VertexInEdge , ((R) x_i^j, j=1,dim) , i=1, NbOfTangentAtEdges )
• NormalAtVertices (I) NbOfNormalAtVertices
  ( @Vertex_i , ((R) x_i^j, j=1,dim) , i=1, NbOfNormalAtVertices )
• NormalAtTriangleVertices (I) NbOfNormalAtTriangleVertices
  ( @Tria_i , (I) VertexInTria. , ((R) x_i^j, j=1,dim) , i=1, NbOfNormalAtTriangleVertices )
• NormalAtQuadrilateralVertices (I) NbOfNormalAtQuadrilateralVertices
  ( @Quad_i , (I) VertexInQuad. , ((R) x_i^j, j=1,dim) , i=1, NbOfNormalAtQuadrilateralVertices )
• AngleOfCornerBound (R) θ
• Geometry (C*) FileNameOfGeometricSupport
  - VertexOnGeometricVertex (I) NbOfVertexOnGeometricVertex
    ( @Vertex_i , @Vertex_i^geo , i=1, NbOfVertexOnGeometricVertex )
  - EdgeOnGeometricEdge (I) NbOfEdgeOnGeometricEdge
    ( @Edge_i , @Edge_i^geo , i=1, NbOfEdgeOnGeometricEdge )
  - TriangleOnGeometricTriangle (I) NbOfTriangleOnGeometricTriangle
    ( @Tria_i , @Tria_i^geo , i=1, NbOfTriangleOnGeometricTriangle )
  - TriangleOnGeometricQuadrilateral (I) NbOfTriangleOnGeometricQuadrilateral
    ( @Tria_i , @Quad_i^geo , i=1, NbOfTriangleOnGeometricQuadrilateral )
  - QuadrilateralOnGeometricTriangle (I) NbOfQuadrilateralOnGeometricTriangle
    ( @Quad_i , @Tria_i^geo , i=1, NbOfQuadrilateralOnGeometricTriangle )
  - QuadrilateralOnGeometricQuadrilateral (I) NbOfQuadrilateralOnGeometricQuadrilateral
    ( @Quad_i , @Quad_i^geo , i=1, NbOfQuadrilateralOnGeometricQuadrilateral )
• MeshSupportOfVertices (C*) FileNameOfMeshSupport
  - VertexOnSupportVertex (I) NbOfVertexOnSupportVertex
    ( @Vertex_i , @Vertex_i^supp , i=1, NbOfVertexOnSupportVertex )
  - VertexOnSupportEdge (I) NbOfVertexOnSupportEdge
    ( @Vertex_i , @Edge_i^supp , (R) u_i^supp , i=1, NbOfVertexOnSupportEdge )
  - VertexOnSupportTriangle (I) NbOfVertexOnSupportTriangle
    ( @Vertex_i , @Tria_i^supp , (R) u_i^supp , (R) v_i^supp , i=1, NbOfVertexOnSupportTriangle )
  - VertexOnSupportQuadrilateral (I) NbOfVertexOnSupportQuadrilateral
    ( @Vertex_i , @Quad_i^supp , (R) u_i^supp , (R) v_i^supp , i=1, NbOfVertexOnSupportQuadrilateral )
  - VertexOnSupportTetrahedron (I) NbOfVertexOnSupportTetrahedron
    ( @Vertex_i , @Tetra_i^supp , (R) u_i^supp , (R) v_i^supp , (R) w_i^supp , i=1, NbOfVertexOnSupportTetrahedron )
  - VertexOnSupportPentahedron (I) NbOfVertexOnSupportPentahedron
    ( @Vertex_i , @Penta_i^supp , (R) u_i^supp , (R) v_i^supp , (R) w_i^supp , i=1, NbOfVertexOnSupportPentahedron )
  - VertexOnSupportHexahedron (I) NbOfVertexOnSupportHexahedron
    ( @Vertex_i , @Hexa_i^supp , (R) u_i^supp , (R) v_i^supp , (R) w_i^supp , i=1, NbOfVertexOnSupportHexahedron )
292
CHAPTER 10. DATA STRUCTURES • CrackedEdges (I) NbOfCrackedEdges ( CEdgeJ, CEdge? , i=l, NbOfCrackedEdges ) • CrackedTriangles (I) NbOfCrackedTriangles ( CTriaJ , CTria? , i=l, NbOfCrackedTriangles ) • CrackedQuadrilaterals (I) NbOfCrackedQuadrilaterals ( CQuad^ , OQuad? , i=l, NbOfCrackedQuadrilaterals ) • EquivalentEdges (I) NbOfEquivalentEdges ( CEdgeJ, CEdge? , i=l, NbOfEquivalentEdges ) • EquivalentTriangles (I) NbOfEquivalentTriangles ( CTria^ , CTria? , i=l, NbOfEquivalentTriangles ) • EquivalentQuadrilaterals (I) NbOfEquivalentQuadrilaterals ( CQuad* , OQuad? , i=l, NbOfEquivalentQuadrilaterals ) • PhysicsReference (I) NbOfPhysicsReference ( (I) Reffa , (C*) CommentOnThePhysic , i=l, NbOfPhysicsReference j • IncludeFile (C*) filename • BoundingBox ( (R) Mm, (R) Max,,, i=l , dim)
Remarks. In the following, we give some comments concerning the different fields. At first, one may notice that some fields are strictly required while some others are optional7. The comments and remarks are given according to their introduction order in the above description. The string MeshVersionFormatted indicates the release identificator and the type of the file. MeshVersionUnf ormatted is an alternative case for this field. The edge table, Edges, includes only, a priori, the edges with a significant reference number Ref4>. The elements are given with respect to their geometric nature (triangle, quadrilateral, etc.). In this way, when several types of elements exist in the mesh, it is not required to manage a table of element types. The sub-domains are defined using one edge in two dimensions or one face in three dimensions combined with the orientation information, 7
In this way, it will be possible to add some fields that are not yet defined.
10.3. A GENERAL DATA STRUCTURE
293
(Orientation;), indicating on which side of this entity the sub-domain lies. The sub-domain number is Ref(f)s. A corner point, Corners (for a support type structure), is a point where there is a C° continuity between the edges sharing the point. Thus, a corner will necessarily be a mesh vertex. A ridge is an edge where there is a C° continuity between the faces sharing it. Thus, a ridge will necessarily be a mesh edge. The required vertices, RequiredVertices, are the vertices of the support that must be present in the mesh as element vertices. Similarly, some edges or (triangular or quadrilateral) faces can be required. The tangent vector to an edge, TangentAtEdges, gives the tangent vector (with respect to the surface) for this edge at the indicated endpoint. Giving the tangent vector of an edge by means of the tangent vector at a point enables us to deal with the case where several edges (boundary lines) emanate from a point. The normal at a vertex, NormalAtVertices, gives the normal vector (with respect to the surface) at this vertex. The normal at a vertex of a triangle (in practice, a triangular face), NormalAtTrianglesVertices, gives the normal vector at the vertex of the specified triangle, as these values may change between neighbouring triangles. The corner threshold, AngleOfCornerBound, is a value which enables us to determine the continuity type between two edges or two faces that was not clearly defined or not explicitely specified. The mesh vertices are related to the type of support in which they exist. There are two categories of supports, a geometric support and a current mesh. If the support is of a geometric nature, Geometry, defined by a file, it gives the relationships between the vertices, boundary edges and boundary faces of the current mesh with the geometric entities. Thus, a mesh vertex can be identical to a geometric vertex, a mesh edge can have a geometric edge as support and, in three dimensions, a face (a triangle or a quadrilateral) can have a geometric face as support. These relationships allow us to classify the entities of the current mesh with respect to an entity defining the domain geometry (this information will be particularly useful when constructing finite elements of order greater than one).
294
CHAPTER 10. DATA STRUCTURES
If the support is a (usual) mesh by itself, MeshSupportOf Vert ices, defined by a file, gives the relationships between the current mesh and the above mesh. A vertex of the current mesh belongs to an entity8 of the support mesh (this information may be relevant when interpolating or transferring a solution from one mesh to another in an adaptive iterative process for instance) . Hence, in an iterative computational process, the support for the mesh at a given iteration step is the mesh of the previous step. In this way, we indicate that a vertex, z, of the current mesh • is identical with a vertex of the support, • lies on an edge of the support at abcissa w, • falls within a triangle of the support, w, v being the coordinates in the reference element, • falls within a quadrilateral of the support, w, v (idem), • falls within a tetrahedron of the support, «, f , w (idem), • falls within a pentahedron of the support, u,v,w (idem), • falls within an hexahedron of the support, w, u, w (idem). A vertex not in this "table" is considered to be a free vertex. The relationships defined in this way enable us to know the location of a vertex using the reference element related to the support entity which includes this vertex. To use the reference element to arrive to the current element, one must use one of the following relations based on the geometric type of the element : • for an edge with endpoints ki and k^
• for a triangle with vertices ki ,
/ = 1 ,3
• for a quadrilateral with vertices &/,
/ = 1,4
'For a boundary element, a projection will be needed to obtain the desired location.
295
10.3. A GENERAL DATA STRUCTURE
Figure 10.3: Canonical numbering. • for a tetrahedron with vertices fc/, / = 1,4
• for a pentahedron with vertices fc/, / = 1,6
for an hexahedron with vertices A;/,
/ = 1,8
Figure 10.4: Canonical numbering.
296
CHAPTER 10. DATA
STRUCTURES
Remark. This information is naturally known by the mesh generation algorithm and is relatively easy to obtain (cf. Chapter 9). Moreover, when simplicial elements are used, the bary centric coordinates are trivial to obtain and thus do not strictly need to be stored. Crack definition is the purpose of three fields, CrackedEdges, CrackedTriangles and CrackedQuadrilaterals; we specify then that an edge (a face, respectively) is identical in terms of geometry to another edge (face). The three fields, EquivalentEdges, EquivalentTriangles and EquivalentQuadrilaterals indicate that two edges (resp. faces) must be meshed the same way (for instance, in periodic meshes). A comment about the meaning of the physical reference numbers is provided in the field PhysicsRef erence. It is possible to include a file in the data structure, IncludeFile. This inclusion will be made without ensuring any compatibility. For some applications, it is useful to know the size of the domain, i.e., the extrema of its point coordinates. This will be stored in the field BoundingBox.
10.4
A geometric data structure
The general structure allows us to specify a mesh describing the geometry of the given domain. This mesh is used to define both the geometry and the physical conditions of the problem under consideration. In this case, some of the above fields are not relevant. We would like to give some indications about this geometric structure in two dimensions and add specific comments related to three dimensions. One is referred to latre in this chapter to see how such a structure can be effectively used to construct a geometric representation and to Chapter 11 to see how to use this representation for boundary meshing (remeshing) purposes.
10.4.1
The two-dimensional case
We give the list of the fields used in this case by first indicating the required fields • MeshVersionFormatted 0 • Dimension (I) dim
10.4. A GEOMETRIC DATA STRUCTURE • Vertices (I) NbOfVertices ( ( (R) xj ,j=l,dim) , (I) Reftf
,i=l, NbOfVertices )
• Edges (I) NbOfEdges ( CVertex}, OVertex? , (I) Reftf
, i=l,NbOfEdges )
297
• SubDomain (I) NbOfSubDomain ( (I) typei,
{ typei == 2 : OEdge, } , (I) Orientation, , (I) Reftf i=l, NbOfSubDomain)
,
and then the optional fields • Corners (I) NbOfCorners ( OVert ex;, i=l, NbOfCorners ) • RequiredVertices (I) NbOfRequiredVertices ( OVert ex, , i=l, NbOfRequiredVertices ) • RequiredEdges (I) NbOfRequiredEdges ( OEdge,-, i=l, NbOfRequiredEdges ) • EdgesTangent (I) NbOfEdgesTangent ( OEdgej, (I) VertexInEdge , ( (R) x£ ,j=l,dim) , i-1, NbOfEdgesTangent) • MaximalAngleOfCorner (R) 0 • CrackedEdges (I) NbOfCrackedEdges ( OEdgeJ , OEdge? , i=l, NbOfCrackedEdges ) • EquivalentEdges (I) NbOfEquivalentEdges ( OEdge} , OEdge? , i=l, NbOfEquivalentEdges ) • PhysicsReference (I) NbOfPhysicsReference ( (I) Reffa ,(C*) CommentOnThePhysic , i=l, NbOfPhysicsReference J • IncludeFile (C*) filename • BoundingBox ( (R) Mm,- (R) Max, ,i=l ,dimj The geometric representation of a boundary in two dimensions will be detailed hereafter. At this time, we will use the edges provided in the data structure so as to define some curves of order three in the following way 9 • an edge whose endpoints are corners and if no additional information is provided will be represented by a straight segment, 9
A different choice is clearly possible which would lead to the modification of the type of representation and thus the way in which the related information is used later on.
298
CHAPTER 10. DATA
STRUCTURES
• an edge whose endpoints are corners but whose tangent is provided at one end point will be represented by a curve of degree two, • an edge whose endpoints are corners but whose tangents are provided at both corners will be represented by a curve of degree three, • an edge whose endpoints are not corners and with no additional information will be represented by a curve of degree three. Indeed, we use in this case the adjacent edges so as to evaluate the tangents at the edge endpoints, • etc.
In short, an edge defined by two pieces of information will be approximated by a straight line, three items allow us to obtain a curve of degree two and four items allow for an approximation10 of degree three. Now that this means of constructing the geometric support from a geometric support mesh has been briefly established, we would now like to give an example. We consider the rather simple domain of Figure 10.5 where F denotes (with no more precise information at this stage) the boundary associated with the segments CF and FD. We will discuss the different ways to construct a mesh data structure by observing in each case the resulting geometric definition.
Figure 10.5: The domain to be defined. Hence, if we provide the following information as input data 10
One also may say that an approximation of degree three is constructed, in some cases, when some data is missing, where the tangents can be defined by the edge itself.
10.4. A GEOMETRIC DATA STRUCTURE
299
• MeshVersionFormatted 0 • Dimension 2 • Vertices 6
(xA,yA,i), (xB,yB,l), (a?c,yc,l), (*D,yD,l), (xE,yE,l), (xF,yF,l) • Edges 6
(A, B, I), (B, C, 1), (A, E, 1), (E, D, 1), (F, D, 1), (F, C, 1)
• SubDomain 1 2 4 - 1 10 • Corners 5 ABC DE Six points have been implicitely defined11, as well as six edges, the domain is on the "right side" of the edge number 4, alias ED and its material number is 10. The edges AB, BC, AE and ED will be approximated by straight lines, the "curve" F is the union of the segments FD and FC with a tangent defined at F due to DC (as F is not a corner) and, at D (resp. at C), a tangent supported by FD (resp. FC) as D (resp. C) is a corner, thus F will be a piecewise curve of degree two. Remark. If the edge number 4 is DE (instead of ED), the sub-domain must be described by the sequence 1 2 4 1 10 instead of 1 2 4 - 1 10. How do we define the geometric data structure so that F can be a circle (at least a curve closely approximating a circle)? It is only necessary to enrich the structure by providing more points along F, then an approximation of degree two will be obtained for each terminal sub-segment and of degree three elsewhere. One may also define the tangents at the endpoints so as to obtain an approximation of degree three everywhere. One may notice that we do not explain how to construct the desired data structure from a practical point of view. Clearly a CAD type preprocessor will be in charge of this task. 10.4.2
A few remarks about three dimensions
In this dimension, the required fields are the following • MeshVersionFormatted 0 • Dimension (I) dim 11
In this example, a point and its number stand for the same thing, for instance the point A is the same as the point with number A.
300
CHAPTER 10. DATA
STRUCTURES
• Vertices (I) NbOfVertices ( ( (R) x? ,j=l,dim) , (I) RefVi ,i=l , NbOfVertices )
• Triangles (I) NbOfTriangles ((«VertexJ,j=l,3), (I) Reftf
, i=l , NbOfTriangles)
• Quadrilaterals (I) NbOfQuadrilaterals ((overtexf ,j=l,4) , (I) Reftf , i=l .NbOfQuadrilaterals) • SubDomain (I) NbOfSubDomain ((I>*«pe,-,
l^;":^
cSuad! } > (^Orientation,, (I)
i=l , NbOfSubDomain ) then, the optional fields are • Corners (I) NbOfCorners ( ©Vertex; , i=l , NbOfCorners ) • Ridges (I) NbOfRidges ( QEdge, , i=l , NbOfRidges ) • RequiredVertices (I) NbOfRequired Vertices ( QVert ext , i— 1 , NbOfRequired Vertices ) • RequiredEdges (I) NbOfRequired Edges ( QEdge; , i=l , NbOfRequiredEdges ) • RequiredTriangles (I) NbOfRequired Triangles iaj , i=l , NbOfRequiredTriangles ) • RequiredQuadrilaterals (I) NbOfRequiredQuadrilaterals ( OQuadj , i=l , NbOfRequiredQuadrilaterals ) • Tangent AtEdges (I) NbOfTangentAtEdges ( @Edget , (I) VertexInEdge , ( (R) x{ , j=l,dim ) , i=l , NbOfTangentAtEdges ) • Normal AtVert ices (I) NbOfNormal At Vertices ( ©Vertex; , ( (R) xj , j=l,dim ) , i=l , NbOfNormalAt Vertices ) • NormalAtTrianglesVertices (I) NbOfNormalAtTrianglesVertices ( QTria,- , (I) VertexInTrian. , ( (R) xj , j=l,dim ) , i=l , NbOfNormalAtTrianglesVertices )
10.4. A GEOMETRIC DATA STRUCTURE
301
• NormalAtQuadrilateralsVertices (I) NbOfNormalAtQuadrilateralsVertices ( QQuad;, (I) VertexInQuad., ( (R) x^ , j=l,dim ) , i=l, NbOfNormalAtQuadrilateralsVertices ) • AngleOfCornerBound (R) 0 • CrackedEdges (I) NbOfCrackedEdges ( flEdgeJ , QEdge? , i=l, NbOfCrackedEdges ) • CrackedTriangles (I) NbOfCrackedTriangles ( QTriaJ, QTria? , i=l, NbOfCrackedTriangles ) • CrackedQuadrilaterals (I) NbOfCrackedQuadrilaterals ( QQuad*, QQuad? , i=l, NbOfCrackedQuadrilaterals ) • EquivalentEdges (I) NbOfEquivalentEdges ( OEdgeJ, ©Edge? , i=l , NbOfEquivalentEdges ) • EquivalentTriangles (I) NbOfEquivalentTriangles ( QTriaJ , OTria? , i=l , NbOfEquivalentTriangles ) • EquivalentQuadrilaterals (I) NbOfEquivalentQuadrilaterals ( QQuadJ, QQuad? , i=l , NbOfEquivalentQuadrilaterals ) • PhysicsRef erence (I) NbOfPhysicsReference ( (I) Reffa , (C*) CommentOnThePhysic , i=l, NbOfPhysicsReference ) • IncludeFile (C*) filename • BoundingBox ( (R) Mini (R) Max; ,i=l ,dim ) The geometric representation of a surface in three dimensions will be detailed hereafter. At this time, one may keep in mind that the triangles (quadrilaterals) provided in the structure will be used to obtain quadratic surfaces using the available information. In particular, it is crucial to find the ridges and the corners, if any. An approximate definition of these two types of entities will not result in a proper definition of the surface near these entities. As will be seen, every element of the data structure will serve as support for constructing a G1 representation of the domain surface by using only the normals (i.e., the tangent plane) at the points defined in the structure; these normals, aim of the construction, must be known as accurately as possible.
302
CHAPTER 10. DATA STRUCTURES
10.5
Geometric representation
A geometric support characterized by a mathematical representation is derived from the geometric data structure.
10.5.1
The two-dimensional case
As previously mentioned, the given geometric mesh is interpreted so as to construct a geometric support used furthermore for boundary meshing purpose, prior to the domain meshing step. The key-idea is to use the edges in the structure for defining a mathematical form 12 of degree three (with possible degeneracies). The construction is completed edge by edge using the available information (tangent, corner, ...). Constructing the representation function. First, we consider an edge, AB, and we assume that we know the tangents at A and at 5, IA and IB, then the edge allows us to define the following curve f ( s ) =o 0 -|- ais + a2s2 + a3s3 where ai € R2 We have
(10.4)
(i = 0, 3) and s is the parameter ranging from 0 to 1.
f'(s) - ai + 2a2s + 3a3s2 .
(10.5)
Hence, the four known pieces of information result in the system :
Whose solution is
12
The mathematical representation described in the following is only an example of what we can do, other forms are possible leading in principle to a similar discussion.
10.5. GEOMETRIC
REPRESENTATION
303
When one or the other tangent is not provided, we look for a function of degree two, i.e., we fix «s = 0, then, if tA is known, we have (10.8)
while if is is known, we obtain a0
A
=
ai =
2(B - A) - CB
a2
-(B-A)+t~B.
=
(10.9)
When no tangent is provided, we look for a function of degree one, i.e., a3 = 0 and «2 = 0 and we obtain
a0 = A ai = B — A ,
(10.10)
meaning that f(s) = A + s(B - A). Another kind of situation is where A and/or B are not corner(s) but we know a point "before" A, denoted as A_i, and/or a point "after" 5, denoted as J5+i, then we can define a function of degree three or two by returning to the previous cases. The tangents at AB in A and/or B are evaluated by using the points A-\ and/or B+\. The final type is determined once the data is combined. Table 10.1 indicates the degree that can be expected for the approximation of AB as a function of the data categories. degree tA tA
A-i tA
A-i A-i
A A A A A A A A A
B B B B B B B B B
tB
tB tB B+i B+i
B+i
3 2 2 1 3 3 3 2 2
Table 10.1 : Degree of the approximation based on the data type.
CHAPTER 10. DATA STRUCTURES
304
Remarks. In general, the function /(s) constructed from two points and the two related tangents may offer different aspects. Some of these may have obviously undesirable characteristics for the envisioned applications. For example, the presence of a loop, i.e., if there are two different values si and 52 for which /(si) = /(s2), corresponds necessarily to a ill-suited input. In fact, we assume that • g(x] is an application. where g(x) denotes the function which associates with every x of the segment AB the value of the function / taken for the s value whose projection on AB is x (cf. Figure 10.6 where AB is horizontal, s, is a value of s and Xi is the related value on AB).
Figure 10.6: f(s) and g ( x ) . The assumed property implies that • the curve has no loop, • the tangent keeps the same sign (if AB is not horizontal, this property must be satisfied after a rotation such that AB becomes horizontal). From a practical point of view, we can also assume that the distance between AB and the curve is bounded by a reasonably small threshold. In other words, the segment AB is close to the curve meaning that the edges in the data structure already correspond to a reasonable approximation of the geometry. 10.5.2
The three-dimensional case
Several methods have been proposed for the construction of composite G1 surfaces starting from a polyhedral triangular representation of this surface,
10.5. GEOMETRIC REPRESENTATION
305
where every triangle defines a patch. In other words, this leads to the construction of a geometric support based on a (discrete) mesh that provides a reasonable description of the surface. To this end, adjacent patches must induce tangency continuity between patches. The initial stage of these methods relies on defining a network of boundary curves (around the boundary of each patch). Then, each patch is constructed from its boundary so as to ensure the desired continuity for the corresponding tangent plane. The method proposed in [Walton, Meek- 1996] defines the tangent planes at each patch boundary independently, i.e., using only the patch boundary curves. Then, to ensure the geometric continuity inside each patch, the classical scheme of [Gregory- 1974] is used. The construction scheme is then as follows (assuming that the surface is G1 everywhere (no ridges or corners)) : • a curved segment of degree 3 passing through the edge endpoints is associated with each mesh edge. It is defined such that the principal normals at the curve endpoints are parallel to the unit normals specified at these points (thus ensuring tangent plane continuity at each mesh vertex), • for each patch, the tangent plane associated with its boundary curve (as described above) is then based on the curve tangent vector and the bi-normal vector (normal to the plane defined by the tangent vector and the principal normal of the Frenet frame system), • for each patch, a Gregory polynomial surface of degree 4 is constructed by increasing the degree of the boundary curves. Before detailing the steps of the scheme, we will review the definition of Bernstein-Bezier polynomials. Then, these steps are described and some examples are given. Bernstein polynomials and Bezier curves. The Bernstein polynonmials
where Cf = in^\\.i\ constitute a basis for the polynomials of degree n defined on the interval [0, 1]. They allow for the construction of Bezier curves that can be written as i=0
with Pi being the control points for the curve.
306
Bezier patches.
CHAPTER 10. DATA STRUCTURES The polynomials
with i + j + k = n allow for the construction of patches whose form is
where the /^,j,jt's are the control points for the patch under consideration. Gl patch associated with each triangle. We consider a triangle K with vertices PI, PI and P% and we suppose that the normals nt (i = 1, 3) at the surface at these points13 are given.
Figure 10.7: T/ie boundaries of the patch associated with the triangle K (example shows edge number 2 of this triangle). The first part of the construction deals with the definition of cubic curves modelling the edges of K. Then, one will turn to the construction of the surface associated with K having these curves as boundaries. With every edge of K is associated a cubic curve. Hence, for the first edge (cf. Chapter 1 for the numbering convention, edge i is opposite to vertex i), if C^(t) denotes this curve, we have
13
In the following formula, P* is the same as PI and PS as PI. The same convention applies to the indices at the normals.
10.5.
GEOMETRIC REPRESENTATION
307
with Vi?o = PI, Vi,3 = PS, the edge endpoints, and V\,\,V\,i computed as indicated hereafter. More generally, we define the three curves Cf~ (t) related to the three edges of K, with Vi,Q = />•+!
(10.13)
Vi,3 = Pi+2
(10.14)
Vi,i = K',0 + otdi (67, - 2yw + (nni+i)
(10.15)
Vi,2 = V-,3 + adi (67,- + Pini - 2c7t-nt-+i)
(10.16)
where i is the number of the edge under treatment, a = ^, pt ; = 6 "''^"a"*'1 ^ a a> ^ _ 6 a''*4-a. s '° , with, at last a,- = n t -.n t -+i, a;o' = ^-.7,- and a t '1 = nt-+i.7,-
where we have 7,- = following property.
*'3^ '|0 and rfj- = IJV^a — Vi^oll- The result is the
Proposition 10.1. The curve C^ joins the two endpoints of the edge i of triangle K and the curve's normal at t = 0 is parallel to nt-+i as its normal at t = 1 is parallel to n t+2 . This proposition is proved in [Walton, Meek- 1996]. The vector tangent to the curve Cf" (t) is a quadratic Bezier curve of the form
with Witk = Vi,k+i - Vijk for k = 0, 1, 2. At this time, we have defined the curves for the edges of K. We still need to define the corresponding surface. Figure 10.7 shows the construction for edge number two of the triangle. For this task, the required data is the two endpoints, PS and PI, i.e., the values of 1/2,0 and V^,3 respectively and the two normals 713 and n\. We define thus V2,i and ^2,2 and obtain the three tangents W^i , i — 0,2 (plotted using arrow lines in the figure). A degree four patch can be associated with the triangle A', whose expression is
where the /^,j,&'s are respectively defined now. First we have P4)0,0 - PI ,
(10.18)
308
CHAPTER 10. DATA STRUCTURES
Figure 10.8: The patch associated with the triangle K (the control points are used to define the desired representation). ^0,4,0 = ^2,
(10.19)
Po,o,4 = Ps-
(10.20)
To obtain the Pi,j,k's of the formula other than these, we use a technique to increase the degree of a curve so as to obtain a degree four for the above C^ (i)'s curves starting from the current degree three curves. To this end, we introduce the following control points
for i = 1, 2, 3 and j = 1, 2, 3 (for j = 0 we meet the definitions for the three vertices above). These points allow us to obtain the Pjj^'s by means of the relationships : PO^-J = L0jJ , (10.21) Pj|4-,-,o - LIJ ,
(10.22)
P4-j,o,j — L2,j • (10.23) We still need to define the control points inside the patch, i.e., the points PI, 1,2) Pi,2,i and P2,i,i so a sto complete the definition of S A (r, s, t). This construction is rather technical and is given in several steps. The Cf" (£)"s are known, thus the Wij's are known. If
10.5. GEOMETRIC REPRESENTATION
309
where the A,-j's are
One can prove that the Cf" (£)"s and the H* (£)'s allow us to define a suitable continuity between the patches. One constructs the values Dij for i = 1,2,3 and j = 0,1,2,3.
Using these D,-j's and the C^ (£)'X we obtain the A^-'s, for z = 1,2, 3.
Using the Dijs and the //(^)'s, we obtain the /Wj-j's, for i = 1,2,3.
From the A l?J 's, the //;,j's, the Z/ij's, the At-j's and the W,-,j's, we deduce the Gt-j's, for z = 1,2, 3 and j = 1, 2.
310
CHAPTER 10. DATA STRUCTURES Then, one may obtain the desired control points. This gives successively
The patch is then well defined via the expression 10.17 and the relations 10.18 to 10.26. Proposition 10.2. The resulting surface is Gl. This property results from the fact that the edges, patch interfaces, defined only by the endpoints and normals, allow us to define a unique tangent, thus ensuring the desired transition from one patch to the next. Moreover, the constructed curves depend only on one triangle (thus a patch can be dealt with independently from the others) and do not require other information. For instance, they are independent from the third vertex of the triangle. A simple example. On Figure 10.9 a surface is displayed (by means of a mesh) resulting from the above process for a single triangle given through its three vertices and the normals at these points. The case of a complex domain with no singularities. On the righthand side of Figure 10.10 is shown a surface (via a mesh) which was obtained by using the previous method applied to the surface of a geometry described by the triangular mesh as shown on the left-hand side of the same figure. Two plots are given for each case, the entire surface (on top) and an enlargement (bottom). The case of a realistic domain. For such realistic geometry (a mechanical part for instance), there are, in general, several singularities (corners and ridges). Thus, the above method does not directly apply everywhere. The resulting surface will be G1 except in the neighborhood of these singularities that will be considered using the same algorithm while prescribing the tangent values so as to locally be G°. Indeed, applying this method to compute the tangents will result in small gaps between patches.
10.6. MESH DATA STRUCTURE
311
Figure 10.9: G1 patch associated with a triangle.
10.6
Mesh data structure
The mesh data structure, which is the output of a mesh generation algorithm, refers to the geometric data structure and in some cases to another mesh data structure. 10.6.1
The two-dimensional case
In this case, the fields are : • MeshVersionFormatted 0 • Dimension (I) dim • Vertices (I) NbOfVertices ( ( (R) xj ,j=l,dim) , (I) Reftf
, i=l,NbOfVertices )
• Edges (I) NbOfEdges ( evertexj, CVertex? , (I) Ref^ , i=l, NbOfEdges ) • Triangles (I) NbOfTriangles ( (<5Vertex^ ,j=l,3) , (I) Reftf
, i=l,NbOfTriangles)
• Quadrilaterals (I) NbOfQuadrilaterals ( (fiVertex:? ,j=l,4) , (I) Ref<j>\ , i=l,NbOfQuadrilaterals) • Geometry (O) FileNameOfGeometricSupport
312
CHAPTER 10. DATA STRUCTURES
Figure 10.10: Definition mesh andG1 representation.
10.6. MESH DATA STRUCTURE
313
- VertexOnGeometricVertex ;i) NbOfVertexOnGeometricVertex CVertex,, CVertexf 60 , i=l, NbOfVertexOnGeometric Vertex ) - EdgeOnGeometricEdge (I) NbOfEdgeOnGeometricEdge ( CEdge,, CEdgef60 , i=l.NbOfEdgeOnGeometricEdge ) • CrackedEdges (I) NbOfCrackedEdges ( CEdge} , CEdge? , i=l, NbOfCrackedEdges ) When the current mesh refers to a previous mesh, we also have : • MeshSupportOfVertices (O) FileNameOfMeshSupport — VertexOnSupportVertex (I) NbOfVertexOnSupportVertex ( OVert ex,, CVertex*upp , i=l, NbOfVertexOnSupport Vertex ) — VertexOnSupportEdge (I) NbOfVertexOnSupportEdge (cVertexi,CEdge* upp , (R) u?pp ,i=l,NbOfVertexOnSupportEdge) — VertexOnSupportTriangle (I) NbOfVertexOnSupportTriangle ( CVertex,-, CTria* upP , (R) u\npp , (R) v*upp , i=l, NbOfVertexOnSupportTriangle ) - VertexOnSupportQuadrilateral (I) NbOfVertexOnSupportQuadrilateral ( CVertex,-, CQuadrPP, (R) u-" pp , 00 $upp , i=l, NbOfVertexOnSupportQuadrilateral ) 10.6.2
The three-dimensional case
In this case, the fields are • MeshVersionFormatted 0 • Dimension (I) dim • Vertices (I) NbOfVertices ( ( (R) xj ,j=l,dim) , (I) Reffl
, i = l , NbOfVertices )
• Edges (I) NbOfEdges ( CVertex} , CVertex? , (I) Reftf
, i=l, NbOfEdges )
314
CHAPTER 10. DATA STRUCTURES
• Triangles (I) NbOfTriangles ^ , j = l , 3 ) , (I) Ref$
, i=l , NbOfTriangles)
• Quadrilaterals (I) NbOf Quadrilaterals ((oVertex^',j=l,4), (I) Reftf , i=l , NbOfQuadrilaterals ) • Tetrahedra (I) NbOfTetrahedra ((€VertexJ,j=l,4), (I) Reftf , i=l , NbOfTetrahedra) • Pentahedra (I) NbOfPentahedra ((oVertexf ,j=l,6), (I) Ref
, i=l , NbOfHexahedra)
• Geometry (O) FileNameOfGeometricSupport - VertexOnGeometricVertex (I) NbOfVertexOnGeometricVertex ( OVert ex,, OVertexf60 , i=l, NbOfVertexOnGeometric Vertex ) - EdgeOnGeometricEdge (I) NbOfEdgeOnGeometricEdge ( OEdge,-, CEdgef60 , i=l,NbOfEdgeOnGeometricEdge ) — TriangleOnGeometricTriangle (I) NbOfTriangleOnGeometricTriangle ( OTria,, OTriaf60 , i=l,NbOfTriangleOnGeometricTriangle) — TriangleOnGeometricQuadrilateral (I) NbOfTriangleOnGeometricQuadrilateral ( OTriai, CQuadf60 , i=l,NbOfTriangleOnGeometricQuadrilateral) — QuadrilateralOnGeometricTriangle (I) NbOfQuadrilateralOnGeometricTriangle ( OQuadi, OTriaf60 ,i=l,NbOfQuadrilateralOnGeometricTriangle) — QuadrilateralOnGeometricQuadrilateral (I) NbOfQuadrilateralOnGeometricQuadrilateral ( OQuad,-, OQuadf 60 , i=l,NbOfQuadrilateralOnGeometricQuadrilateral) When the current mesh refers to a previous mesh, we have in addition • MeshSupportOfVertices (O) FileNameOfMeshSupport — VertexOnSupportVertex (I) NbOfVertexOnSupportVertex ( OVert ex,, OVert ex*upp , i=l,NbOfVertexOnSupport Vertex )
10.7. NOTES
315
- VertexOnSupportEdge (I) NbOfVertexOnSupportEdge ( CVertex,;, OEdge'upp , (R) u*upp , i=l,NbOfVertexOnSupportEdge ) - VertexOnSupportTriangle (I) NbOfVertexOnSupportTriangle (cVertex,-,CTriar PP , (R) u*upp , (R) v?" , i=l, NbOfVertexOnSupportTriangle ) - VertexOnSupportQuadrilateral (I) NbOfVertexOnSupportQuadrilateral («Vertex,-,Cquadr PP , (R) u*upp , (R) t^PP , i=l, NbOfVertexOnSupportQuadrilateral J — VertexOnSupportTetrahedron (I) NbOfVertexOnSupportTetrahedron ( CVertex,, CTetra* u PP, (R) t/*upP , (R) <"PP , (R) w-UPP » i=l, NbOfVertexOnSupportTetrahedron ) — VertexOnSupportPentahedron (I) NbOfVertexOnSupportPentahedron ( CVertex,, CPenta t supP , (R) «- upp , (R) < upp , (R) w\u" , i=l, NbOfVertexOnSupportPentahedron J - VertexOnSupportHexahedron (I) NbOfVertexOnSupportHexahedron (oVertex^OHexa^PP, (R) u?pp , (R) t;,'upp , (R) ^"PP , i=l, NbOfVertexOnSupportHexahedron J
10.7
Notes
The abstract data structure and thus the two data structures described in this chapter are only reasonable proposals for organizing the information related to a geometric description or a mesh. They have been tested and validated in two dimensions and are in "beta-test" in three dimensions. The way to construct a mathematical representation for characterizing a domain geometry is a widely open problem, especially in three dimensions. Numerous methods are available for the construction of Gl surfaces. One could consult [Farin-1983] which deals with the triangles by adding an internal vertex to then obtain three sub-triangles. These sub-triangles are then used, considering also the neighboring elements, to fix the value of the parameters so as to ensure the desired continuity from one triangle to another. One may also consider Gregory patches ([Gregory-1974]) which are another approach to solve the same problem. Moreover, a finite element
316
CHAPTER 10. DATA STRUCTURES
of type HCT, [Farin-1985], can serve as support for the construction of a G1 support.
Chapter 11
Boundary meshing 11.1
Introduction
The algorithms used to construct the mesh of a domain in R2 or R3 require the input of a discretization or a mesh of the domain boundary. In two dimensions, the boundary consists of a curve or a series of curves and the aim of a boundary mesh generation method is to convert this data into a polygonal line or a series of such lines, where each line is composed of a set of straight segments. In three dimensions, the boundary consists of a surface or a series of surfaces and the aim of a boundary mesh generator is then to discretize this (these) surface(s) with a set of triangles (and/or quadrilaterals). In this chapter, we will propose several boundary mesh generation methods (for R2 or R3 domain boundaries) without claiming to exhaustively cover all such methods. Any boundary mesh generation method is strongly dependent on the way in which the boundaries are described, i.e., it depends on the mathematical representation used to define the considered geometries (cf. Chapter 10).
11.2
Boundary meshing in two dimensions
The main idea is to consider the boundary not as a mathematical representation (line, circle, cubic, ...) but as a unique representation of these entities, namely a polygonal line. In this way, the boundary mesh method deals with an unique definition and is thus simplified. The general scheme of the process starts from the boundary description and results in the desired mesh after several steps. These steps include the following
318
CHAPTER 11. BOUNDARY
MESHING
• the boundary CAD definition, • the conversion of the so-defined entities in terms of a unique mathematical support, • the creation of the polygonal lines associated with this support, • the mesh creation of these polygonal lines.
11.2.1
CAD definition of a boundary
Numerous CAD software packages can be found which are not based on a unique standard mathematical representation for the geometry definition.
11.2.2
Related (discrete) data base
We assume that the available CAD system has completed the data base described in the Chapter 10. This allows for the definition of the mathematical support (by means of degree 3 curves), as explained in the previous chapter.
11.2.3
Construction of a polygonal discrete line
The previous mathematical support is converted into a polygonal line or a series of such lines. Indeed, each entity of the support is split into polygonal segments in order to approximate it as closely as possible. Construction of polygonal segments. This construction will serve as support for the construction of the segments of the desired boundary mesh. It relies on approximating the boundary lines by a set of straight segments, such that • the gap between a segment and the line supporting this segment is bounded. Two methods satisfy this requirement. The first method consists of uniformily repartitioning the curve with a fairly large number of vertices. The second method relies on finding the minimal set of vertices required, thus leading, in general, to a smaller number of points. We advocate this second approach which, for a given relative threshold, ensures a proper geometric adherence in the regions with large boundary curvatures. Let C be an edge of the polygonal segment and let C be the arc of the curve segment having the same endpoints (cf. Figure 11.1).
11.2. BOUNDARY MESHING IN TWO
DIMENSIONS
319
Figure 11.1: Detail of the polygonal segment (emphasized threshold value e = 0.08). In order to satisfy the relation ^ < £, where h is the distance between the point and the edge, d is the edge length and e is the a priori given relative threshold (for example e = 0.01) everywhere along the curve, • we find the extremum value (in terms of h) of the patch, or its two extrema values (if the curve has an inflexion point), denoted as {£;}, and • if the relative threshold is not exceeded at {£,-}, the patch is judged satisfactory. Otherwise, the segment is subdivided at {£;} and the process is recursively applied on each resulting segment. Figure 11.2 shows the partitioning obtained with the previous algorithm for this example. Finally, the set of polygonal segments constitutes the geometric support Suppgeom that will be used later for constructing the desired mesh. We will first discuss the general case (anisotropic case) and then deduce the solution for the isotropic case.
11.2.4
Meshing
The expected mesh, according to the specifications, is constructed by considering the previously created polygonal segments as support. For an initial mesh, the specifications are user-supplied. For instance, the user may
320
CHAPTER 11. BOUNDARY
MESHING
Figure 11.2: Polygonal segment (thresholds — 0.01). specify that a constant size mesh is expected or the user may explicitely provide the necessary information so as to define the relevant control space. In the context of an adaptative computational loop, a metric map, which is a function of the current solution of the problem, is known at this stage and is used to govern the boundary mesh generation method. User-supplied specifications. The user specifies, at least at the endpoints of the polygonal lines, the sizes (isotropic case) or the directions and the expected mesh sizes (anisotropic case). The input of a number of points or that of a uniform size may also be used to define the desired control. Specifications supplied via a metric map. In this case, a (discrete) metric map is known as a result of the solution analysis (actually in a computational loop). Anisotropic meshing. The aim is now to split every polygonal segment C of the geometric support Suppgeom according to the specifications given by the following data (cf. Figure 11.3) • the coordinates of the vertices {Pi}i=i..p+i of C, where p is the number of edges of C, • the mesh of C at the previous iteration step, or the background mesh, with the vertices {Qi}i=i,.q+i- From a practical point of view, it is
11.2. BOUNDARY MESHING IN TWO DIMENSIONS
321
only necessary to know the curvilinear abscissa of these points1. • a metric map2 {Mi}i=i..q+i defined at the points {Qi}t-=i..g+i.
Figure 11.3: Data for the curve mesh generation operator. By interpolating the metrics {Mi}i=i.,q+i along the polygonal segment C, one can define a control space. A satisfactory mesh of C is one where every sub-segment is a unit segment with respect to this space. To achieve this mesh, we propose the following algorithm 1. Find an interpolation J\A(s) of the metrics {A1;} along C. 2. Compute / = length^ ( S )(C), the total length of C in the metric M(s). 3. Compute an integer value n "close" to the real value /. 4. Split C into n curved sub-segments with the same length u = — (thus, n close to the unit value) in the metric At (s). We still need to clearly specify each of the different steps involved in this algorithm. The first point concerns the definition of the metric map at all points, while it is only known in a discrete way. This definition is based on an interpolation. Provided two ellipses M\ and M.I, we have to define a function which can enable us to pass from one ellipse to the other in a continuous monotonous way, in terms of size. *In order to obtain a general-purpose mesh generation method, we assume consider that this data exists (including for the first iteration step of an adaptation loop), even if the vertices {Qi} are reduced to the two endpoints of C. 2 We assume that only one metric map is specified, otherwise, it is necessary to merge the metric maps.
322
CHAPTER 11. BOUNDARY
MESHING
To this end, we consider the simultaneous reduction of the two metrics (which are two quadratic forms). This operation results in a base in which these two forms are defined by two diagonal matrices. Let M.\ and M.^ be two arbitrary metrics, then the matrix M = M±lMI is Mi-symmetric and can then be diagonalized in R2. Let (61,62) be the eigenvectors of J\f defining a base of R2, then t
e\M\e2 = ie\Mie
Let X = x\e\ + #262 De an arbitrary vector in R2 supplied with the base (e!,e2) ; with (A; = *c,-Aiiei) t - = i f 2 and (m = *e;Al2e;)i=i,2, we have, by definition, for all i — 1,2, At > 0, //; > 0, thus *XM.\X = \\x\-\-\ix\
and
Let (hij = —-=)j-=1)2 and (h,2,i = —=)z-=i,2, the value h\^ (respectively V A; ' ' h,2,i) is the unit length in the metric MI (resp. A^) according to the axis e;. The metric interpolation between M\ and M.^. is defined by
where P is the matrix formed by the column vector (ei, 62), and (//i(£ are the continuous monotonic functions such that Hi(0) = h\j and Hi (I) = hij for i — 1,2. In practice, one can consider the following interpolation functions : - a linear function :
Hi(t) = h\^ + t (h,2,i — h\^} ,
- a geometric function :
2-j HAi] = hn \ -—
fh y
or
'
- a sinusoidal function :
Hi(t) = | f /iiit- + /i2,i + (h\^ — /i2,i) cos(?r t) } .
Notice that this interpolation is only controlled along the directions of the axis e\ and 62- As an example, Figures 11.4 and 11.5 show two initial metrics and the interpolated metrics for the case of a linear function (and a geometric function respectively). Stage 2 of the algorithm consists of computing the total length / of the polygonal segment C in the interpolated metric J(4(s). This length is
11.2. BOUNDARY MESHING IN TWO DIMENSIONS
323
Figure 11.4: Linear interpolation : the metrics. computed as the sum of the lengths of the edges of C. Along such edges [Pi, -Pt'+i], the metrics are defined at several points: at its two endpoints Pi and Pi+i (metric obtained by interpolation), and possibly, at some vertices {Qj}j=ji..j2 of the background mesh. The length of edge [P;, Pi+i] is evaluated as the sum of those of the subsegments [P;, QjJ, [Qjj, QJ,+I], . . . , [Qj2,Pi+i\- For the sake of simplicity, the edge is denoted by [A, B] and one of its sub-segments is denoted by [P,Q](cf. Figure 11.6). We assume that the components (a, (3) of the vector AB are known, as well as the curvilinear abscissa denoted by 5,4, s#, sp and SQ. Considering again the length formula for a straight segment, by changing the variable t 6 [0,1] by the variable s = sp + t (SQ — sp) G [sp, SQ], we obtain
-SQ f-=+
_ , dt
/ ' sp
as
with
sB - sA
,
\ P )
, and
dt -r- = ds SQ- sP '
thus (a, 6 and c being the coefficients of the matrix), length ^w v ; (P, Q) = ---
SB - SA JSp
v
a(s)^ + 2b(s)aO + c(s}^ ds .
In general, the formal computation of this integral is too complex, thus we propose a method based on the "trapezoid rule". If /(s) is the function
CHAPTER 11. BO UNDARY MESHING
324
Figure 11.5: Geometric interpolation : the metrics.
Figure 11.6: Edge [A,B] and sub-segment [P, Q]. to be integrated, we have length If the resulting value is smaller than a given threshold 5, then the value is considered to be satisfactory. Otherwise, the segment PQ is subdivided into two segments of equal size (in the usual metric) and the process is repeated recursively. From a practical point of view, a threshold value 6 = 0.5 has been seen to be adequate. To summarize, the desired length is approximatively the value length(sp,SQ), where length is a function that can be written (in pseudocode) as Function length(sp,SQ)
length =
SB - sA
11.2. BOUNDARY MESHING IN TWO DIMENSIONS
325
If length < 6 then the output is satisfactory. Otherwise length = length ( sp, V End of the function length.
^
— j -f length ( / \
^
—, SQ \. /
Once the length is known, the number of sub-segments that must be defined is deduced, this number being, for example3, the closest integer value to length. The question now is how to split the polygonal segment C into n subsegments of the same length u (close to one) in the metric M(s). Thus, from the curvilinear Al-abscissa / of a point,
we have to find its curvilinear J-abscissa s. But, when computing the total length, C has been subdivided into subsegments of length lesser than a certain threshold value 8 (for instance, 8 = 0.5). The endpoints of these sub-segments define a "graduation" of the curve, i.e., a series of points at which both the curvilinear Z-abscissa s; and the curvilinear .M-abscissa /; are known. It is only necessary to find an interval i such that li < I < /;+i, and then to compute the desired value s using a linear interpolation s — Si
I —L
The solution s defines 5x, the abscissa of the point X, that must be constructed. Isotropic meshing. The isotropic case is a particular case where every metric follows the form M.{ — A;!, where A; £ R+ and J is the order 2 identity matrix. Then all the material suitable for the anisotropic case remains valid while, in the isotropic case, an exact computation of the integrals used to obtain the lengths is possible. Then we simply obtain
and the total length of the segment C is the sum of the lengths of the parts [Qt, Qi+i], Qi and Q;+i being two consecutive points at which the metrics 3 In practice, the closest integer value does not lead to a suitable solution in all cases, a more precise analysis allows us to select this value or to add one to it or subtract one from it.
326
CHAPTER 11. BOUNDARY MESHING
are known. For example, for i = 1, on the part [si,S2]> the function h(s) s — Si is equal to an interpolation function H(S) with S = - G [0, 1], thus, we compute the length as
$2 - si
Once the function H(S) is chosen, for example for a geometric interpolation function where
by defining
we obtain
and then
If /ii is close to /^2, the previous formula is not determined, but a limited expansion gives
The problem is then to split the segment C, assumed to be a straight line, into n sub-segments of length u (close to the unity) in the metric M(s). Thus, we have, knowing the curvilinear .M-abscissa lx of a point A", lx = (J ~ 1) u
j = 1 .. n + 1 ,
to find its curvilinear Z-abscissa s% • When computing the total length C, we can store the curvilinear Xabscissa st and the curvilinear Ai-abscissa li of each point Qi where the metrics are given. We then look for a segment i such as /,- < lx < /;+i, and assuming that we find i = I for sake of simplicity. We then have
11.3. BOUNDARY MESHING IN THREE DIMENSIONS
327
In the segment [§1,82], the function h(s) is equal to an interpolation function #(S), with 5 =
— G [0,1]. We then look for the value
Sx = SX ~Sl € [0,1], such that
The direct solution of the previous equation results in a rather complex expression for Sx • In practice, it is advised to use the value of \i — l\ that has been previously computed, which leads to
For example, if the interpolation function is a geometric one, we obtain by expanding the expression of I(S) previously computed as Log
x /i 2
- LX (h? - hi]
7~1^ Log U
If h\ is close to /i2, the above formula is again undetermined, but a Taylor expansion gives
enabling us to construct the desired point. The solutions for several different choices of interpolation functions can be found in [Laug et al. 1996].
11.3
Boundary meshing in three dimensions
We encounter the case of curve meshing for curves considered as surface member or surface boundaries, and the case of a surface meshing by itself.
328 11.3.1
CHAPTER 11. BOUNDARY
MESHING
Curve meshing
The previous method applies for curves in three dimensions. As mentioned previously, the curves we are interested in are either curves plotted on a surface or the curve boundaries of a surface. The difference between the two and the three dimensions mainly affects the way the polygonal segments associated with a given curve are constructed. Once this task is completed, it is possible to compute the unit lengths involved in the construction.
11.3.2
Meshing of a surface consisting of several patches
The parametric surface case has been described in detail in Chapter 6 to which the reader is referred. This situation still remains academic as only one patch is considered. As such a case is not realistic in the type of application examples we want to handle (finite element applications for complex geometries), we need to find other methods. In this section, we aim to extend the parametric surface mesh construction method to the case where the surface is defined by a few patches, which are usually known as a parametric surface. The case where the surfaces are not defined in this way will be considered in a later section. By applying the method described in Chapter 6 to each patch, the junction between two patches is not compatible a priori. The question is then, on the one hand, to ensure a conforming mesh at the patch interfaces and, on the other hand, to guarantee an adequate regularity. Then the proposed method consists of first meshing the patch interface so as to obtain the desired conformity and, afterwards, of constraining the patch meshes so as to achieve the desired regularity. The difficult point of such a method is to transfer the interface discretization in R3 onto the boundaries of the parametric domains. Indeed, even if a, the function which maps the parametric domain (in R2) to the surface (in R3) is given, it is still tedious to obtain the reverse transformation. We would like to propose a mesh construction algorithm by illustrating its various steps on an example composed of two quarters of a cylinder, £1 and £2, which perpendicularity cut each other (cf. Figure 11.14). The surface £1 of axis Ox with vertices {1,2,3,4} is parametrized by the function v\
329
11.3. BOUNDARY MESHING IN THREE DIMENSIONS
where QI is a planar domain whose vertices are also denoted by {1,2,3,4} (Figure 11,13, left-hand side). The coordinates of the points 3 and 4 are respectively (|,/i) and (0,/i), l\ being a given length. The points 1 and 2 are the endpoints of a curve that will be defined hereafter. Similarly, the surface £2 of axis Oy with vertices {1,2,5,6,7} is parametrized by the function <72 X
C
V) =
=
y =
T2 COSM
V
r2 sin u
where Q2 is a planar domain whose vertices are also denoted by {1,2,5,6,7} (Figure 11.13, right-hand side). The coordinates of the points 5, 6 and 7 are respectively (f ,0), (f 5/2) and (0,/2), /2 being a given length. We assume that the lengths r l 5 r 2 , /i and \i satisfy the inequalities r i < r 2> ^i > r2 and /2 > T\. In our example, r\ = 1, r-z — 1.1 and /!« = /2 = 2. The intersection of the surfaces Si and £2 gives the equation of the curve {1,2} of domain QI as u = t v
— r-
efl 2 ,
from which the coordinates of the points 1 and 2 of domain Q\ are obtained, respectively as (0,r 2 ) and (|,r 2 yl - % ). Similarly, the equation of curve {1,2} of domain ^2 is : u = t
t G [0, arcsin — ] C R
v =
from which the coordinates of the points 1 and 2 of domain ^2 are obtained to be (0,r!) and (arcsin(p-),0) respectively. To achieve the surface meshes in the space (Figure 11.14), we apply the following steps. • We consider the domains 17i and ^2 along with the functions a\ and 01 to be given as input. The domains are defined by a certain number of points which are the control points for their boundaries, while the functions a\ and 02 are specified (see Figure 11.7).
330
CHAPTER 11. BOUNDARY
MESHING
• Applying the function a\ (resp. cr^) to the control points of the boundary of QI (resp. ^2)5 results in control points in R3. Notice that the control polygon of the curve {1,2} in R3 may be obtained in two different ways (as the image of QI by a\ or as the image of f^ by a^). Here we retain the first definition (see Figure 11.8). • From the control points in R3, we deduce piecewise cubic curves passing through these points, and obtain a polygonal geometric support (see Figure 11.9) using a method similar to the two-dimensional method. • We define a discretization of the geometric support based on the curvature radii. The minimum and maximum edge lengths are prescribed as hmin — 10~6 and hmax = 0.2 to avoid any degeneracy. In addition, a relative threshold, e = 10~3, is given to control the gap between the mesh surface and the surface (see Figure 11.10). • The curve discretization in R3 is mapped onto the domains QI and ^2 in R2. To this end, we first compute a piecewise cubic curves passing through these points and then deduce the polygonal geometric support of the two domains (see Figure 11.11). Then, we sample these supports and relate the support discretization in R2 with the corersponding one in R3. This enables us to construct the discretization of the boundary of the planar domains (see Figure 11.12). • We can now create the triangular mesh of the domains QI and fi2 by means of the process described in Chapter 6. Every mesh relies on the previous boundary discretization and matches the curvatures of the corresponding surfaces (in R3} (Ei or £2), see Figure 11.13. • Finally, the image of the domains QI and f^ gives a mesh of the surfaces in the space (see Figure 11.14) by means of a\ and a?..
11.3. BOUNDARY MESHING IN THREE DIMENSIONS
Figure 11.7: Control polygons in R2.
Figure 11.8: Control polygons in R3.
331
332
CHAPTER 11. BOUNDARY
Figure 11.9: Polygonal geometric supports in R"1
Figure 11.10: Discretization of the curves in
MESHING
11.3. BOUNDARY MESHING IN THREE DIMENSIONS
Figure 11.11: Polygonal geometric supports in R2.
Figure 11.12: Discretization of the curves in R2.
333
334
CHAPTER 11. BOUNDARY
Figure 11.13: Surface meshes in R2.
Figure 11.14: Surface mesh in R3.
MESHING
11.3. BOUNDARY MESHING IN THREE DIMENSIONS
11.3.3
335
Surface remeshing using optimization
In this section, we consider a general case and assume that the surface is provided geometrically either directly or by means of an (adequate) mesh. The case where the surface is known directly infers that we are able to obtain the information required for the remeshing process in some way (for example by means of a relevant series of "queries"). When the surface is defined by means of a mesh, this mesh is used to construct a mathematical support as indicated in Chapter 10. This support permits us to again be able to query the surface so as to obtain the necessary information. In addition, we assume that an arbitrary mesh (which may be the mesh serving as geometric support, in the second case) is provided and we aim to propose a method which creates a new mesh. The problem then becomes a problem of remeshing. The information we need for such a remeshing process is the following : • how to locate a point on the surface, • the surface geometry at a given point (minimal curvature radius, principal curvature radii, ...). Then, we define a few local operators so as to • create a point, • remove a point, • swap an edge. These logical operators use local operators, most of them having been described in the previous chapters. More precisely, from a topological point of view, we need to • swap an edge, • remesh a polygon (in order to remove a point). Using this set of tools, it is possible to remesh a surface. The main idea is always the same, we define a metric map and remesh so as to obtain unit length edges in the resulting mesh while maintaining good element quality. The metric reflects the surface geometry. Notice that the surface ridges are processed first. The three entities that are encountered during the remeshing process are the initial mesh, the support mesh which serves as geometric support and the related mathematical representation (or the CAD software) and, finally, the mesh under construction.
336
CHAPTER 11. BOUNDARY
MESHING
Pure geometric remeshing. The metric governing the process is only based on the surface geometry. When the CAD system can be accessed to define the metric, it will be used. However when a discrete definition held, the associated support (cf. Chapter 10) is used. Remeshing according to a physic related map. The metric governing the process is based on both the surface geometry and a (physical) map defined on the initial mesh. To obtain the desired metric from the two given metrics, the metric intersection method is applied to the geometric metric and the physical metric. A remeshing method suitable for surface which is already meshed . The given surface mesh initializes the new mesh. The singularities of the mesh are then found (corners and ridges). Corners are points that must remain unchanged through the process. The ridges are meshed first. To this end, their lengths are computed and, if necessary, points are added so as to form unit length edges. Thus, the mesh of these (peculiar) edges is established. Inserting a point along an edge is done by splitting all of the triangles sharing this edge by joining the point with the vertices opposite to the edge. Edge swapping is then applied if the element quality is improved and the swapped edge still conforms to the geometry. Now, we simply have to analyze the lengths of the current mesh edges. Thus, • an edge with a length greater than one is split into sub-edges that are as close to unit length as possible, • an edge considered too short is removed (if possible). To this end, several methods can be utilized as indicated in Chapter 8.
11.4 Results
To illustrate the methods described within this chapter, we will give some boundary mesh examples. First, the case of a planar curve (isotropic or anisotropic mesh of a segment and an anisotropic mesh of a curved segment) is shown. Then we give some surface examples.
11.4.1 Planar curve mesh
We first consider the simple case of a straight segment and then the case of a curved segment. The isotropic case (the given specification only gives the
lengths) as well as the anisotropic case (the given specification indicates both directions and lengths) are considered.
Isotropic mesh of a segment. We consider a straight segment whose endpoints are Q_1 and Q_5, along with a discrete size map specified at each of the different points Q_i. These expected sizes are denoted as h_i and we want to mesh the initial segment so as to obtain a set of sub-segments which conform as closely as possible to the given specification. This input specification is displayed in Figure 11.15 and the related values are reported in Table 11.1.
Figure 11.15: Size (isotropic) specification.
        Q_1    Q_2    Q_3    Q_4    Q_5
 s_i    0.0    3.0    4.0    8.0   10.0
 h_i    1.5    0.5    2.0    0.1    0.7
Table 11.1: Discrete isotropic specifications at the points Q_i.

Figures 11.16, 11.17 and 11.18 display the resulting meshes based on the different types of interpolation functions (linear, geometric or sinusoidal). This choice results in the definition of the size variation between two points whose sizes are known. Each figure shows the previously defined points {Q_i}, the interpolation function h(s) and the resulting mesh. The latter is the set of segments having the points {R_i} as endpoints. The histogram provided with each figure is composed of rectangles for which one side is an edge [R_i, R_{i+1}]. The intersection points of the curve h(s) with this histogram show that the obtained sizes conform to the expected values. One may notice, as a function of the interpolation selection, the differences
between the different meshes based on the way the size varies from one specified point to the next. Thus, the final mesh is a function of this choice and results in both a different number of elements and a faster or slower size variation from one element to the next.
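As an illustration of the unit-length subdivision discussed above, the following sketch distributes points along a straight segment according to a discrete size map, using a linear interpolation of the sizes; the function and variable names are illustrative and not taken from the book.

```python
import numpy as np

def mesh_segment(s_pts, h_pts, samples=2000):
    """Subdivide [s_pts[0], s_pts[-1]] so that each sub-segment has a length close
    to the interpolated size h(s).  Linear interpolation of h is assumed here;
    the geometric or sinusoidal variants only change how h is evaluated."""
    s = np.linspace(s_pts[0], s_pts[-1], samples)
    h = np.interp(s, s_pts, h_pts)                 # size prescribed at abscissa s
    ds = np.diff(s)
    # "metric length" of the segment: integral of ds / h(s) (trapezoidal rule)
    metric_len = np.concatenate(([0.0], np.cumsum(ds * 0.5 * (1 / h[:-1] + 1 / h[1:]))))
    n = max(1, int(round(metric_len[-1])))         # target number of unit sub-segments
    targets = np.linspace(0.0, metric_len[-1], n + 1)
    return np.interp(targets, metric_len, s)       # abscissae of the new points

# Example with the data of Table 11.1 (isotropic sizes)
print(mesh_segment([0.0, 3.0, 4.0, 8.0, 10.0], [1.5, 0.5, 2.0, 0.1, 0.7]))
```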
Figure 11.16: Mesh resulting from a linear interpolation.
Figure 11.17: Mesh resulting from a geometric interpolation.
Anisotropic mesh of a segment. The second example shows the mesh of a segment where a (discrete) anisotropic map is provided. The metrics are specified at the points reported in Table 11.2 (also see Figure 11.19 where the "unit circles" associated with these metrics are shown).
          Q_1     Q_2     Q_3     Q_4     Q_5
 s_i      0.0     2.0     3.0     6.0     7.5
 θ_i      30°     90°      0°    -60°    -45°
 h_1,i    0.70    1.10    0.30    1.50    1.00
 h_2,i    0.25    0.15    0.30    0.40    0.35

Table 11.2: Discrete anisotropic specifications at the points Q_i.

The meshes resulting from linear, geometric or sinusoidal interpolation functions are depicted in Figures 11.20, 11.21 and 11.22. They show the correspondence between the expected metrics and the resulting meshes.
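To make the notion of unit length concrete, here is a small sketch computing the length of a segment in metrics of the kind given in Table 11.2. The linear interpolation of the metric along the edge is only one possible choice (the text also considers geometric and sinusoidal variants), and all names are illustrative.

```python
import numpy as np

def metric_from_spec(theta_deg, h1, h2):
    """2x2 metric tensor whose unit circle is the ellipse with semi-axes h1, h2
    rotated by theta (the format of Table 11.2)."""
    t = np.radians(theta_deg)
    r = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    return r @ np.diag([1.0 / h1**2, 1.0 / h2**2]) @ r.T

def edge_length(a, b, metrics, quad_pts=8):
    """Length of edge ab in a metric field interpolated linearly between the
    metrics given at a and b (a simple approximation of the Riemannian length)."""
    ab = np.asarray(b, float) - np.asarray(a, float)
    ts = (np.arange(quad_pts) + 0.5) / quad_pts
    total = 0.0
    for t in ts:
        m = (1.0 - t) * metrics[0] + t * metrics[1]   # linear metric interpolation
        total += np.sqrt(ab @ m @ ab) / quad_pts
    return total

m1 = metric_from_spec(30.0, 0.70, 0.25)   # metric at Q_1 (values of Table 11.2)
m2 = metric_from_spec(90.0, 1.10, 0.15)   # metric at Q_2
print(edge_length([0.0, 0.0], [2.0, 0.0], (m1, m2)))
```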
Figure 11.18: Mesh resulting from a sinusoidal interpolation.

Anisotropic mesh of a curved segment. The last example concerns the mesh of a curved segment. The expected metrics along this curved segment are shown in Figure 11.23 and the resulting meshes using linear or geometric interpolation functions are depicted in Figures 11.24 and 11.25. The analysis of the interpolated metrics again shows a good correlation between the expected metrics and the resulting meshes.
Remark. The construction of anisotropic line meshes is more than just an academic exercise. Indeed, in the context of a finite element application where an anisotropic mesh is expected inside a domain whose boundary is this line, the boundary mesh must conform to the same anisotropic metric as the interior of the domain.

11.4.2 Mesh of a surface defined by several patches
We would now like to give an application example of the surface mesh generation method for a surface defined by conforming patches (cf. Chapter 1 regarding the notion of a conforming mesh). Figure 11.26 depicts the mesh
Figure 11.19: Specification of the directions and sizes.
Figure 11.20: Mesh resulting from a linear interpolation.
Figure 11.21: Mesh resulting from a geometric interpolation
Figure 11.22: Mesh resulting from a sinusoidal interpolation.
Figure 11.23: Anisotropic size specification.
of a model where a sphere has been subtracted from a cube. This domain is defined by eight patches (the six cuboid faces and the two hemispheres used to define the sphere). Figure 11.27 depicts another mesh of this surface based on a smaller threshold value (for which the polyhedral approximation of the surface is improved).

Figure 11.24: Mesh resulting from a linear interpolation.
11.4.3 Surface mesh (remeshing via optimization)
We now turn to an application example of the remeshing method based on optimization. We consider a meshed surface (Figure 11.28) which we will remesh based on a metric map associated with its geometry. Figure 11.29 shows the resulting mesh. The initial mesh consists of 1,189 vertices and
2,378 triangles while the final mesh is made up of 7,538 triangles and 3,769 vertices. The new mesh is finer than the initial mesh because the regions of high curvature have been remeshed (see the figure). To obtain a coarser mesh (in terms of the number of elements), it is necessary to prescribe a minimal edge size constraint. Notice that no point relocation (smoothing procedure) has been applied to this mesh.

Figure 11.25: Mesh resulting from a geometric interpolation.
11.5 Notes
While the proposed method is only one of several possible approaches, we feel we have adequately covered the topic of boundary mesh generation in two dimensions. For the three-dimensional case, the solution based on remeshing has shown great flexibility. The expected difficulties usually result from the fact that the mesh serving as the geometric support must be interpreted so as to deduce the geometry of the surface. Other approaches also exist to address the remeshing problem encountered in adaptation loops. Among them is the method recently proposed by [Lohner-1996], which uses an advancing-front type method based on the former mesh as geometric definition. To conclude, we think that, at least in three dimensions, the surface remeshing problem will be a field of intensive research over the next few years.
Figure 11.26: Mesh of the hollowed cuboid ("wiffle cube").
Figure 11.27: Mesh of the hollowed cuboid with a better geometric approximation (especially visible around the sphere and circles).
Figure 11.28: Initial mesh of the surface.
Figure 11.29: Surface mesh after remeshing.
Chapter 12
Finite element applications

12.1 Introduction
Chapter 5 described a method suitable for constructing a mesh that conforms to a given metric map. Chapters 10 and 11 have shown how to define a geometry and how to mesh a domain boundary based on a given metric map. Thus, we have all the components required to develop an automated computational loop for realistic applications. The last ingredient we need is the proper definition and construction of a metric map to govern the process. In order to show how to use this background, we select several computational fluid dynamics examples numerically computed by the finite element (finite volume) method using adapted meshes as spatial supports. Then, we formally return to the situation depicted in the chapter devoted to adaptation methods (Chapter 9), with a realistic computation now governing the process. Two-dimensional examples are given in this chapter and a few comments are made regarding the three-dimensional case. Before describing the realistic examples, we would first like to propose a method to define the metric map which will govern the process.
12.2 Metric definition and metric construction
The goal is to define the metric M needed for the mesh construction method. For the sake of simplicity, we will assume at first that the problem involves only one unknown, denoted by η. The aim is to find a metric which locally distributes the interpolation error equally. First, we give some keys about the one-dimensional case when a P¹
interpolation is used. We consider η and we denote by Π_h η the P¹ interpolate of the solution, this solution being considered as regular enough. In one dimension, the interpolation error can be defined by ε_∞ = |η − Π_h η|_∞, where |·|_∞ denotes the L^∞ norm. Along a segment [a, b] of a one-dimensional mesh (Figure 12.1), we have (Π_h η)(a) = η(a) and (Π_h η)(b) = η(b). A simple Taylor expansion, for all x ∈ ]a, b[, gives
Figure 12.1: Linear interpolation in one dimension.
η(x) = η(a) + (x − a) η'(a) + ((x − a)²/2) η''(a) + O((x − a)³) .   (12.1)
By construction of Π_h, we have
(Π_h η)(x) = η(a) + (x − a) (η(b) − η(a)) / (b − a) ,   (12.2)
and using again a Taylor expansion to evaluate η(b), we have
η(b) = η(a) + (b − a) η'(a) + ((b − a)²/2) η''(a) + O((b − a)³) .   (12.3)
From Equations (12.1) and (12.3), we obtain
η(x) − (Π_h η)(x) = ((x − a)(x − b)/2) η''(a) + O((b − a)³) ,   (12.4)
and thus
ε_∞ = ½ max_{x ∈ [a,b]} |(x − a)(x − b)| |η''(a)| + O((b − a)³) .   (12.5)
But, since
max_{x ∈ [a,b]} |(x − a)(x − b)| = (b − a)²/4 ,   (12.6)
we simply deduce the interpolation error ε_∞ along the segment [a, b] as
ε_∞ = ((b − a)²/8) |η''(a)| + O((b − a)³) .   (12.7)
The main idea is, as was done in one dimension, to find a similar relationship in two (and three) dimensions, where the second derivative is replaced by the Hessian of the variable. We consider a triangle K whose vertices are a, b and c. The P¹ interpolate (Π_h η)(x) is, by definition, equal to η(x) if x is one of the vertices of K. We have
η(b) = η(a) + (∇η)(a)·ab + ½ ⟨ab, (∇²η)(a) ab⟩ + O(|ab|³) ,   (12.8)
η(c) = η(a) + (∇η)(a)·ac + ½ ⟨ac, (∇²η)(a) ac⟩ + O(|ac|³) .   (12.9)
If x is a point in K, we have x = λ_a a + λ_b b + λ_c c with λ_a + λ_b + λ_c = 1, then ax = λ_b ab + λ_c ac. Thus,
(Π_h η)(x) = η(a) + (∇η)(a)·ax + (λ_b/2) ⟨ab, (∇²η)(a) ab⟩ + (λ_c/2) ⟨ac, (∇²η)(a) ac⟩ + O(|ax|³) .   (12.10)
As
η(x) = η(a) + (∇η)(a)·ax + ½ ⟨ax, (∇²η)(a) ax⟩ + O(|ax|³) ,   (12.11)
the interpolation error ||η − Π_h η||_{∞,K} is written as
||η − Π_h η||_{∞,K} = max_{x ∈ K} | ½ ⟨ax, H_a ax⟩ − (λ_b/2) ⟨ab, H_a ab⟩ − (λ_c/2) ⟨ac, H_a ac⟩ | + O(h_K³) ,   (12.12)
where
H_a = (∇²η)(a)  and  h_K is the diameter of K.   (12.13)
As a consequence, this error is controlled by the quadratic forms ⟨·, |H| ·⟩ evaluated on the edge vectors of K, where |H| is defined hereafter.
Following the adaptation loop scheme given in Chapter 9, we assume that a first solution has been computed on the given mesh. In addition, we assume that the interpolation is piecewise linear (P¹ finite element). Then,
from the work of [Azevedo,Simpson-1989] and [Azevedo,Simpson-1991], the interpolation error ε is related to the Hessian of the variable η,
ε = |η − Π_h η| ,   (12.14)
where Π_h η is the (P¹) interpolate of η, |·| is the H¹(Ω) norm (Ω being the computational domain) and
H = ∇²η = ( ∂²η / ∂x_i ∂x_j )_{i,j=1,2} .   (12.15)
If the absolute value of a symmetric definite matrix is defined by
|H| = R diag(|λ₁|, |λ₂|) R⁻¹ ,   (12.16)
where R is the orthogonal matrix (i.e., ᵗR = R⁻¹) which diagonalizes the symmetric matrix H, and λ₁, λ₂ are the eigenvalues of H such that
H = R diag(λ₁, λ₂) R⁻¹ ,   (12.17)
then the error related to an edge a_i of the current mesh can be computed as
ε_i = ⟨a_i, |H| a_i⟩ .   (12.18)
To minimize the number of mesh vertices (which is one of the goals of the mesh adaptation strategy), we try to equilibrate the error. Provided a threshold value ε₀, the aim is to obtain an error ε_i in the range of this threshold value. This relies on constructing a mesh with unit length edges based on the metric
M = (1/ε₀) |H| .   (12.19)

12.2.1 Hessian computation
For a problem using a P¹ finite element interpolation, the key to defining the metric map is to use the second derivatives. In principle, these derivatives are null (for each triangle, as the computed solution is P¹), thus a weak formulation using Green's formula must be used to obtain the Hessian. The Hessian is based on the computed solution η_h (which is actually
known, as opposed to the interpolate Π_h(η) which is not known). It can be written as
∫_Ω H_ij v_h = − ∫_Ω (∂η_h/∂x_i)(∂v_h/∂x_j) + ∫_∂Ω (∂η_h/∂x_i) v_h ν_j ,   (12.20)
where v_h is the classical P¹ test function, ν is the outward normal to ∂Ω and H = (H_ij)_{i,j=1,2}. To solve the linear problem (12.20), a mass lumping technique is used which results in a diagonal problem. The discrete Hessian H_ij^k at vertex k is computed by
H_ij^k = (1 / ∫_Ω v^k) ( − ∫_Ω (∂η_h/∂x_i)(∂v^k/∂x_j) + ∫_∂Ω (∂η_h/∂x_i) v^k ν_j ) ,   (12.21)
where vfc is the piecewise linear hat function associated with the vertex k.
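A minimal sketch of this lumped weak Hessian for a P¹ field on a triangulation is given below. The boundary integral of Relation (12.21) is deliberately omitted (so the values near the boundary would still need the correction mentioned in Section 12.3.1), and the array layouts chosen for points, triangles and nodal values are assumptions for the illustration.

```python
import numpy as np

def p1_gradients(tri_pts):
    """Gradients of the three P1 hat functions on a triangle (rows of a 3x2 array)."""
    p0, p1, p2 = tri_pts
    # coefficients (a, b, c) of each hat function a + b x + c y: solve a 3x3 system
    mat = np.array([[1.0, *p0], [1.0, *p1], [1.0, *p2]])
    return np.linalg.solve(mat, np.eye(3))[1:, :].T   # shape (3, 2)

def lumped_weak_hessian(points, triangles, eta):
    """Discrete Hessian at each vertex, following the weak (Green formula) recipe
    with mass lumping on the dual cells; interior vertices only (boundary term dropped)."""
    n = len(points)
    hess = np.zeros((n, 2, 2))
    omega = np.zeros(n)                              # lumped dual-cell areas
    for tri in triangles:
        pts = points[tri]
        area = 0.5 * abs(np.linalg.det(np.column_stack((pts[1] - pts[0], pts[2] - pts[0]))))
        grads = p1_gradients(pts)                    # hat-function gradients, one per vertex
        grad_eta = eta[tri] @ grads                  # constant gradient of eta_h on the triangle
        for local, k in enumerate(tri):
            omega[k] += area / 3.0
            # minus the integral of (d eta_h / dx_i)(d v^k / dx_j) over the triangle
            hess[k] -= area * np.outer(grad_eta, grads[local])
    hess /= omega[:, None, None]
    return 0.5 * (hess + hess.transpose(0, 2, 1))    # symmetrize
```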
12.2.2 Remark about the metric computation
It is useful (and judicious) to bound the mesh edge lengths so as to avoid unrealistic situations. This is, in general, possible because we have an idea as to what these values should be for the problem under consideration. More precisely, the eigenvalues (related to the desired lengths) as defined in Relation (12.17) are bounded by
λ̃_{1,2} = min( max( |λ_{1,2}|, 1/h²_max ), 1/h²_min ) ,
where h_min and h_max are the minimal and maximal allowable edge lengths.
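The following sketch combines Relations (12.16)–(12.19) with the eigenvalue bounds just described; eps0, h_min and h_max are the threshold and size bounds, and all names are chosen here for illustration only.

```python
import numpy as np

def metric_from_hessian(hess, eps0, h_min, h_max):
    """Metric M = (1/eps0)|H| of Relation (12.19), with the eigenvalues bounded so that
    prescribed sizes stay within [h_min, h_max] (Section 12.2.2)."""
    lam, r = np.linalg.eigh(hess)                      # H = R diag(lam) R^T (symmetric)
    lam = np.abs(lam) / eps0                           # |H| / eps0
    lam = np.minimum(np.maximum(lam, 1.0 / h_max**2), 1.0 / h_min**2)
    return r @ np.diag(lam) @ r.T

H = np.array([[40.0, 5.0], [5.0, -2.0]])
print(metric_from_hessian(H, eps0=0.01, h_min=1e-3, h_max=1.0))
```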
12.2.3 Metric associated with the usual norms
To modify the norm in Relation (12.14) when computing the error ε = |η − Π_h η|, we introduce a new family of metrics defined by
M = (1/ε) |A|^{1/2} |H|^p |A|^{1/2} ,   (12.22)
where the matrix power p is defined, according to the notations of Relation (12.16), by
|H|^p = R diag(|λ₁|^p, |λ₂|^p) R⁻¹ .   (12.23)
The selection of a value for p and the choice of a matrix A result in a control for a given norm. Thus,
• for the L^∞ norm, we take p = 1 and we define A = Id₂. The error is then
ε = ||f − Π_h(f)||_∞ ;   (12.24)
• for the H¹ norm, we take p = 1 and we define A = Id₂. The error is then
ε = |f − Π_h(f)|_{H¹} ;   (12.25)
• for the energy norm, we take p = 2 and for the matrix A we use the matrix of the problem; the error is then
ε = || √( ∇(f − Π_h(f)) · A ∇(f − Π_h(f)) ) || ;   (12.26)
• for the L² norm, we take p = 1/2 and A = Id₂, and we have
ε = ||f − Π_h(f)||_{L²} .   (12.27)
12.2.4 Relative error metric
The previous material relies on using a global error. In the case where the solution has a different order of magnitude from one region to another, a global error analysis may miss the regions where the solution varies significantly but remains small in magnitude compared with the rest of the domain. To overcome this phenomenon, we advocate the use of a relative error. The latter, denoted ε_r, may be defined as
ε_r = |η − Π_h η| / max(|η|, cutoff) ,   (12.28)
where cutoff is a positive real value used to avoid any division by zero. The metric tensor (i.e., the metric map) M associated with the relative error ε_r is then
M = (1/ε₀) |H| / max(|η|, cutoff) .   (12.29)
Notice that this is a dimensionless error, thus any specific analysis is avoided.
12.2.5 Intersection of several metrics
In the case of a problem with several unknowns (and when all of them are used to govern the process), each unknown provides a metric map, so that several metric maps are available. The aim is then to reduce the problem to the case where only one metric map is used, so as to return to the situation above and thereby make the definition of the control space possible. We refer the reader to the previous chapters in which the metric intersection operator was introduced.
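A possible sketch of this metric intersection operator, by simultaneous reduction of two metric tensors (the operator described in Chapter 4), is given below; in the common reduction basis, the largest of the two eigenvalues is kept so that the resulting unit ball lies in both unit balls. Names and dimensions are illustrative.

```python
import numpy as np

def intersect_metrics(m1, m2):
    """Intersection of two symmetric positive definite metric tensors by
    simultaneous reduction."""
    n = np.linalg.solve(m1, m2)              # N = M1^{-1} M2, has real eigenvalues
    _, p = np.linalg.eig(n)                  # columns of p: common reduction basis
    lam1 = np.diag(p.T @ m1 @ p)             # M1 and M2 are diagonal in this basis
    lam2 = np.diag(p.T @ m2 @ p)
    d = np.diag(np.maximum(lam1, lam2))      # keep the most restrictive size in each direction
    p_inv = np.linalg.inv(p)
    return p_inv.T @ d @ p_inv

m1 = np.diag([100.0, 1.0])                   # fine in x, coarse in y
m2 = np.diag([1.0, 25.0])                    # coarse in x, fine in y
print(intersect_metrics(m1, m2))             # close to diag(100, 25) in this simple case
```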
12.2.6 Transfer of the solution from one mesh to the other
We assume here that we are solving a problem by means of an adaptive computational loop. The solution at the iteration step j, associated with the mesh constructed at this step, is used to define a metric map that serves to govern the mesh generation for the iteration step j + 1. The problem now is to transfer the current solution, which is known at the nodes of the current mesh (iteration step j), to the nodes of the new mesh (iteration step j + 1). This transfer must be processed with no loss of information, no diffusion, etc. We refer the reader to Chapter 9 in which a description of the computational process stage can be found.
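A minimal sketch of such a transfer for a P¹ nodal field is given below, under the assumption that points and fields are NumPy arrays and triangles are index triples; the brute-force point location used here would, in practice, be replaced by the localization structures discussed earlier in the book.

```python
import numpy as np

def transfer_p1(old_pts, old_tris, old_vals, new_pts, tol=1e-10):
    """Transfer a P1 nodal field from the mesh of step j to the mesh of step j+1 by
    linear interpolation: locate each new node in an old triangle and use its
    barycentric coordinates as interpolation weights."""
    new_vals = np.empty(len(new_pts))
    for i, x in enumerate(new_pts):
        for tri in old_tris:
            a, b, c = (old_pts[j] for j in tri)
            t = np.column_stack((b - a, c - a))
            lb, lc = np.linalg.solve(t, x - a)        # barycentric coordinates w.r.t. b and c
            la = 1.0 - lb - lc
            if min(la, lb, lc) >= -tol:               # point inside (or on) this triangle
                new_vals[i] = (la * old_vals[tri[0]] + lb * old_vals[tri[1]]
                               + lc * old_vals[tri[2]])
                break
        else:
            raise ValueError("point outside the old mesh")
    return new_vals
```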
12.3 Three CFD examples
In this section, we will depict several applications of the mesh generation methods previously described to cases where a computational loop is used. Three two-dimensional computational fluid dynamics problems are described: a supersonic scramjet case, a transonic viscous flow around a Naca-0012 profile and a supersonic viscous flow around a cylinder.
12.3.1 General presentation
We shall consider this series of complex flow configurations to be modeled by means of the Euler equations or the Navier-Stokes equations. The regimes considered are the transonic regime and the supersonic regime respectively. We denote by ρ the density, u = (u₁, u₂) the velocity and T the temperature. The total energy is E = T + |u|²/2, p = (γ − 1) ρ T is the pressure, ∇u = (u_{i,j}) denotes the gradient of u and D = u_{i,i} is its divergence, while S = (∇u + ∇ᵗu) − (2/3) D Id is the deformation tensor. Using these notations, the dimensionless Navier-Stokes equations are
∂ρ/∂t + ∇·(ρ u) = 0 ,   (12.30)
∂(ρ u)/∂t + ∇·(ρ u ⊗ u) + ∇p = ∇·(μ S) ,   (12.31)
∂(ρ E)/∂t + ∇·((ρ E + p) u) = ∇·(μ S u) + ∇·(κ ∇T) ,   (12.32)
with κ = γ μ / Pr, γ = 1.4 and Pr = 0.72, while μ is the inverse of the laminar Reynolds number or viscosity coefficient. The laminar viscosity is given by the Sutherland law (12.33), where the indices ∞ stand for the reference values.
The NSC2KE Navier-Stokes solver, described in [Mohammadi-1994], is used. This solver relies on a Galerkin finite volume technique where the unknowns are supported at the nodes. This makes the treatment of the viscous variables easy, as in a finite element method. A four-step Runge-Kutta method is used for the time integration. The metric map is the intersection of the metrics related to the conservative variables (ρ, ρu, ρv, ρ(C_v T + |u|²/2)). More precisely, for every conservative variable (ψ_i, i = 1, ..., 4), we define the metric tensor M_i (2 × 2) by
• normalizing ψ_i between [0, 1] (to avoid any problem with dimension),
• computing the metric M_i related to ψ_i, proportional to the Hessian of ψ_i, evaluated in a weak manner at each node i_s, where w_{i_s} is the finite element P¹ base function at node i_s and ω_{i_s} is the surface area of the dual cell of node i_s; L defines the edge length in the metric M_i, so that all the edges have a length of L in this metric,
• correcting the Hessian at the boundary nodes in the normal direction by using an average of the values at the internal nodes,
• limiting the eigenvalues by λ_min = 1/h²_max and λ_max = 1/h²_min, where h_min and h_max are the extremal edge lengths that are expected in the mesh,
• intersecting the metric M_i with the M_k for k ≠ i, following the algorithm described in Chapter 4.
The computational scheme includes:
• the construction of an initial mesh (Figure 12.2, top) which is fine enough to capture the physical behavior of the problem,
• a first solution with this mesh as spatial support,
• (1) the construction, via the solution Hessian, of the metric maps related to the various variables,
• the construction of the global control map by intersecting the previous maps,
• the construction of a new mesh which conforms to the metric map above,
• the solution of the problem with the current mesh; if the solution is not stable, the whole process is repeated (GO TO (1)).
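A schematic rendering of this computational scheme is given below; the solver and the meshing/metric tools are hypothetical stand-ins passed as parameters (they are not functions of any actual library), so the sketch only shows the control flow of the adaptation loop.

```python
def adaptation_loop(initial_mesh, solver, hessian_metrics, intersect_all,
                    adapt_mesh, converged, max_steps=6):
    """Schematic adaptation loop: solve, build metric maps from the Hessians,
    intersect them, remesh, and repeat until the solution is stable."""
    mesh = initial_mesh
    solution = solver(mesh)                       # first solution on the initial mesh
    for _ in range(max_steps):
        maps = hessian_metrics(mesh, solution)    # (1) one metric map per variable
        metric = intersect_all(maps)              # global control map by metric intersection
        mesh = adapt_mesh(mesh, metric)           # new mesh conforming to the metric map
        previous, solution = solution, solver(mesh)
        if converged(previous, solution):         # stop when the solution is stable
            break
    return mesh, solution
```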
12.3.2 Supersonic scramjet
This case regards an Euler type computation for a scramjet at Mach 3. Although the geometry is symmetric, the computation has been performed on the whole domain so as to determine if a symmetric solution can be obtained despite the use of an unstructured mesh. The upper wall is defined by the following 6 points (the dimensions are in meters): (0.0, 3.5), (0.4, 3.5), (4.9, 2.9), (12.6, 2.12), (14.25, 1.92) and (16.9, 1.7). The upper obstacle is defined by the 5 points: (4.9, 1.4), (12.6, 1.4), (14.25, 1.2), (9.4, 0.5) and (8.9, 0.5). The axis of symmetry is defined by the segment joining the points (0.0, 0.0) and (16.9, 0.0). Six adaptive loops have been processed, separated by 400 iteration steps in the Navier-Stokes solver. The resulting solution is symmetric even though the mesh is an arbitrary mesh. One may notice that the solver handles the large distortions included in the mesh and does not produce any oscillations. The adaptation coefficients are the following:
h_min = 2.5·10⁻³ ,   h_max = 2. ,   L = 10⁻² .
Due to the complexity of the configuration, a similar computation with a uniform structured mesh would require about 10⁷ points so as to obtain the same accuracy for a regular discretization of the domain consisting of elements sized as h_min. In our case, the finest mesh includes 30,000 points (i.e., a gain of a factor 300 w.r.t. the number of points). Moreover, the mesh adaptivity reduces the cost of the computation because of its multigrid effect. In this respect, only 2,400 explicit iteration steps were needed for the convergence. On the other hand, as a local time-stepping strategy has been used, the adapted mesh method leads to a great advantage, as using a regular grid cannot result in such an effect. Figures 12.2 and 12.3, respectively, depict the meshes and the iso-density contours at different iteration steps, including step 0 (initial configuration), steps 2 and 6 (final configuration). The initial mesh includes about 4,000 points, at iteration step 2, it includes about 23,000 points and, finally, at step 6, it has about 30,000 points. Thus the number of points remains almost constant between steps 4, 5 and 6. At iteration step 6, the maximal stretching is in the range of 120 and the CPU time to construct the mesh is about 30 seconds on an HP735/99MHz workstation (i.e., about 1% of the total cost).
Figure 12.2: Meshes of the Scramjet at iteration steps 0,2 and 6.
Figure 12.3: Iso-density contours at iteration steps 0,2 and 6.
12.3.3 Viscous transonic flow for a Naca-0012
This application example deals with a viscous computation around a Naca-0012 profile at Mach 0.95 with a Reynolds number of 5,000. A fishtail configuration is obtained, including a region of unsteadiness due to the interaction of the straight shock and the wake. One can observe that the intersection of the metrics of the different variables enables us to capture phenomena of different nature such as shocks, boundary layers and wakes. The adaptation coefficients are as follows:
h_min = 5·10⁻³ ,   h_max = 1. ,   L = 5·10⁻³ .
The adaptive loop includes six steps. For the first three steps, 500 iteration steps of the Navier-Stokes solver have been processed. In this configuration, the region of unsteadiness is located in the wake. Thus, we have first obtained a good description of the flow nature before capturing the unsteadiness feature. For the last three steps, 3,000 iteration steps have been used to capture the unsteadiness feature. Figures 12.4 and 12.5 illustrate the meshes and the iso-density contours at steps 0 (initial step), 2, 4 and 6 (the final step). The initial mesh includes 2,282 points, at iteration step 2, there are 7,697 points, at iteration step 4, we have 17,850 points and at the final iteration, we have 18,003 points. This last mesh is completed in 20 seconds and its maximal stretching factor is about 50.
Figure 12.4: Fish Tail, meshes and iso-density contours.
Figure 12.5: Fish Tail, meshes and iso-density contours (following).
12.3.4 Viscous supersonic flow around a cylinder
The final application example is a supersonic simulation around a cylinder with a unit diameter at Mach 2 and Reynolds number 5,000. The cylinder is in a channel of length 30 and width 10 with slip boundary conditions on the lateral walls. This case appears to be a complex one since it includes some unsteadiness related both to the wake and to the location of the expected shocks. The goal then is to illustrate the capability of the method to follow the shock fronts, as the unsteadiness is not limited to only one region, as opposed to the previous example. The adaptation coefficients are:
h_min = 5·10⁻³ ,   h_max = 2. ,   L = 3·10⁻³ .
The complexity of the configuration would again require a uniform structured mesh to contain elements of size h_min in order to obtain a similar accuracy. This would require about 6 × 10⁷ (i.e., 30 × 10 / (5 × 10⁻³)²) points. On the other hand, the unstructured mesh requires only about 50,000 points. The adaptive loop includes 110 steps. At each step, 500 iteration steps of the Navier-Stokes solver have been applied. Figures 12.6 to 12.12 depict the meshes and the iso-density contours after 60 and 110 adaptive loops. It can be noted that the locations of the shocks change from one step to the next and that the adaptive process follows this movement, while the complexity of these configurations does not permit such a computation with a uniform structured mesh. The metric intersection enables us to capture shocks, boundary layers as well as wakes and contact discontinuities. The mesh at iteration step 60 has 50,883 points while at iteration step 110, it includes 54,460 points; the time required for the mesh construction is about 74 seconds and the maximal stretching factor is about 170.
Figure 12.6: The cylinder mesh at iteration step 60.
Figure 12.7: Mesh enlargement.
Figure 12.8: The cylinder mesh at iteration step 110.
Figure 12.9: Mesh enlargement.
Figure 12.10: Mesh enlargement at iteration step 60.
Figure 12.11: Mesh enlargement at iteration step 110.
Figure 12.12: Iso-density contours at iteration step 60.
Figure 12.13: Iso-density contours at iteration step 110.
12.4 Notes
The application examples depicted in this chapter merit a few comments. One can notice that the given examples are CFD problems and that only two-dimensional cases have been considered. One can also remark that the metrics involved in the adaptive loops have been obtained by means of the Hessian of the various variables of the problem. Regarding the spatial dimension, there is currently no automatic mesh generation method fully available for anisotropic mesh creation purposes in three dimensions. This does not mean that no adaptive computations have been performed. In this respect, the construction of anisotropic boundary layers for CFD problems can be envisioned using an advancing-front approach. This approach allows for the creation of several boundary layers (in general by means of pentahedra) that are then joined with the elements resulting from an automatic mesh generator applied to the rest of the computational domain. Another way of constructing an (isotropic) adaptive mesh is to use a classical mesh generation method by introducing several control points (or sources) which govern the sizing in the mesh algorithm. Another approach starts with a mesh resulting from a given method and applies some form of mesh optimization (cf. Chapter 8). We refer the reader to [Ladeveze et al. 1991] for the construction of adaptive meshes for structural mechanics applications and, in particular, for the error estimates. In this paper, different applications (static or dynamic) are considered for linear or non-linear problems. With respect to the construction of the required metric maps, we have described one way to obtain such maps by means of the Hessian of the problem variables (when a P¹ approximation is used). Obviously other methods can be advocated for obtaining these metric maps. The relevant literature is highly recommended concerning a posteriori error estimates (for instance, one may consult [Babuska,Rheinboldt-1978], [Verfurth-1996] and, for a comprehensive overview of the more recent papers, see [Bernardi-1996]). In this respect, methods based on residual analysis, local problem solution, hierarchical approaches or average-based methods are mainly found. On the other hand, in some cases a priori error estimates may be involved to define the metric map serving to govern the process. Anyway, the purpose of this book is not to thoroughly cover the area of error estimates and thus we refer the reader to the above references.
Chapter 13
Other applications

13.1 Introduction
Delaunay triangulations and Delaunay-type meshes have proven to be useful in several fields of application, some of them being more or less related to finite element problems, while others concern other areas of application. This chapter does not claim to exhaustively cover all applications of the Delaunay triangulation method. However, the objective is to show that this type of triangulation can be used for several specific applications. To this end, several examples are selected and described in the following sections.
13.2 Medial axis and medial surface
Let Ω be a domain of R^d where the problem of interest is to find the medial axis (d = 2) of the domain or its medial surface (d = 3), also referred to as the skeleton of Ω. From a practical point of view, we assume that the domain is described by means of a discretization of its boundary, i.e., a polygonal line in two dimensions or a polyhedral surface in three dimensions.
13.2.1 Medial axis
Let us first briefly give the definition of the medial axis of a polygon.
Definition 13.1. The medial axis of a polygonal domain is the locus of the centers of the circles of maximum radius that can be inscribed in the domain.
Actually, we are interested in a discrete approximation of this "line". We denote by Γ the boundary of Ω and we assume that Γ is discretized
by a set of edges. Then, the following theorem holds.
Theorem 13.1. If h tends to 0, where h is the length of the longest edge of the discretization of the boundary Γ, and if M is the empty Delaunay mesh of Ω, then the union of the segments formed by joining the centers of the circumscribing circles associated with the elements of M, considering each pair of elements sharing an edge, approaches the skeleton of Ω.
Proof. This proof is given as an exercise. Notice that as h → 0, the boundary discretization is Delaunay admissible; then it is sufficient to prove that the centers of the circumcircles are inside Ω and belong to the desired skeleton. □
Remark. An empty mesh of Ω exists, as shown in Theorem 3.1.
One can refer to [Armstrong et al. 1995] for a detailed description of a medial axis extraction method starting from an empty Delaunay mesh. From a practical point of view, the boundary discretization is not, in general, infinitely fine and the previous construction is not realistic. A similar idea relies on analyzing the elements of the mesh M using adjacency relationships. This results in a series of segments constructed as follows:
• (1) if an element includes one boundary edge a, we define the segment whose endpoints are the midpoints of the edges other than a,
• (2) if an element includes two boundary edges a_1 and a_2, we define the segment joining the endpoint shared by a_1 and a_2 with the midpoint of the third edge,
• (3) if an element does not include any boundary edge (or includes three such edges), we define the point G, centroid of the triangle, and we define the three segments joining G with the midpoints of the three edges.
The union of the so-defined segments forms a polygonal line denoted by P which does not correspond to the exact skeleton of the domain but is easy to obtain and may be used for many applications.
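A sketch of rules (1)-(3) is given below. The data layout is an assumption for the illustration: points is an (n, 2) NumPy array, triangles a list of vertex-index triples and boundary_edges a set of frozensets of vertex indices.

```python
import numpy as np

def approximate_medial_axis(points, triangles, boundary_edges):
    """Polygonal approximation of the medial axis following rules (1)-(3) above."""
    def mid(edge):
        i, j = tuple(edge)
        return 0.5 * (points[i] + points[j])

    segments = []
    for a, b, c in triangles:
        edges = [frozenset(e) for e in ((b, c), (c, a), (a, b))]   # edges opposite to a, b, c
        on_boundary = [e in boundary_edges for e in edges]
        nb = sum(on_boundary)
        if nb == 1:                                   # rule (1): join midpoints of the two other edges
            others = [e for e, onb in zip(edges, on_boundary) if not onb]
            segments.append((mid(others[0]), mid(others[1])))
        elif nb == 2:                                 # rule (2): shared vertex to midpoint of third edge
            e1, e2 = [e for e, onb in zip(edges, on_boundary) if onb]
            shared = (e1 & e2).pop()
            third = [e for e, onb in zip(edges, on_boundary) if not onb][0]
            segments.append((points[shared], mid(third)))
        else:                                         # rule (3): centroid to the three edge midpoints
            g = points[[a, b, c]].mean(axis=0)
            segments.extend((g, mid(e)) for e in edges)
    return segments
```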
13.2.2 Medial surface
First, we give the definition of the medial surface of a polyhedron. Definition 13.2. The medial surface of a polyhedral domain is the locus of the centers of the spheres of maximal radius inscribed in the domain.
Theorem 13.1 can be formally applied in three dimensions. The difficulty results from the fact that it is tedious to construct an empty Delaunay mesh for an arbitrary polyhedron (see Chapter 3). The existence itself of such a mesh is in general not well established. Nevertheless in the case where the discretization of the domain boundary is Delaunay admissible (cf. Chapter 3), a solution exists that can be used to construct the desired surface.
13.2.3 Several applications based on the skeleton
We assume that the domain skeleton (medial axis in two dimensions, medial surface in three dimensions) is provided. This skeleton provides a topological knowledge of the domain and thus can be used for several interesting applications.
Domain partitioning. In this case, the skeleton acts as a support for the partitioning of the domain into sub-domains.
Idealization. The skeleton is analyzed so as to determine local topological properties of the geometry (these properties concerning either small or local features). Hence, under some conditions, it is possible to
• identify the principal characteristics of the geometry (inertial axis, branch detection, ...),
• simplify the geometry by removing some details while preserving the general shape of the domain.
Dimension reduction. The skeleton is considered as a dimensional reduction of the domain on which it is based. In two dimensions, a polygonal domain is replaced by a series of lines, while in three dimensions, a polyhedral domain is replaced by a series of surfaces.
Quadrilateral meshing. Using the skeleton and the partition of a domain Ω of R², it is possible to obtain several sub-sets of Ω which are polygons with a limited number of sides. For a three-sided or a four-sided polygon, applying an algebraic mesh generation method (for instance, see [Cook-1974]) allows us to obtain a quadrilateral mesh (after splitting any triangles into three quadrilaterals using their edge midpoints). For the other polygons, a subdivision method introducing midpoints on each side allows us to return to the previous simple case.
13.3 Parallel computing
Parallel computing is a solution to handle large size problems (i.e., with a large number of unknowns). When coupled with a domain decomposition solution method, this approach requires the construction of the meshes of several sub-domains whose union covers the entire initial domain. These sub-meshes must enjoy a series of properties and must make communication possible between sub-meshes. From the meshing point of view, a parallel computation relies on a partitioning of the domain consisting of several meshes (so as to furthermore distribute the computation over the various processors, where one processor is in charge of one sub-mesh). Constructing this partitioning, as well as constructing each of the sub-meshes, can be achieved using many different approaches. Actually, three kinds of approaches can be found, which we briefly discuss hereafter. (The purpose of this section is not to provide a detailed study of partitioning methods, but solely to mention some ideas related to the a posteriori methods and to introduce an a priori method based on an extension of Delaunay-type algorithms.)
13.3.1 A posteriori partitioning
Provided a (fine) mesh of the domain under consideration, an a posteriori partitioning method splits this mesh into sub-meshes so as to extract the sub-domains. Numerous methods exist to address this problem. For more details, we refer the reader to the rich literature about this topic (see for instance [Simon-1991] or [Farhat,Lesoinne-1993]). The main drawback of this approach is that the memory required to complete the partitioning is approximately the sum of the size needed to store the initial mesh and the size of at least one of the sub-meshes. Furthermore, various classical difficulties related to the partitioning methods are observed (related to the sub-meshes, load balancing, the interface smoothness, the interface sizes, etc.).
An important remark. This a posteriori approach is most likely the worst way to perform the parallelism at the mesh level. Nevertheless, this method leads to good results for reasonably sized meshes. Conversely, if the size of the problem is very large (for instance, of the order of several million elements), the creation of the initial mesh may not even be possible. For instance, it is possible to achieve a mesh with about 9 million elements (cf. Table 7.2 in Chapter 7) but, while it is possible to create a mesh a little bit larger, this mesh usually cannot be written onto a file for memory size
reason. Hence, the a priori approach makes sense, even though it requires more effort in order to make it reasonably efficient.
13.3.2 A priori partitioning
Following this approach, we will attempt to avoid the difficulties and weaknesses related to the a posteriori method (large memory space requirements, no parallelism at the mesh generation step, etc.) by first constructing a partitioning. This stage may start from a coarse mesh (i.e., with a small number of elements), the meshes of the different members of the partition then being constructed concurrently, thus taking advantage of parallelism at the mesh generation step. Another approach starts only from the surface mesh of the domain. The major difficulty expected with this method is to find an appropriate way of ensuring a good load balancing between the different sub-meshes and to define a suitable way to handle the interface between the sub-domains. Indeed, the load balancing can be addressed by using the information provided by either a coarse mesh or the surface mesh. For the first approach, we can consider the empty mesh as defined in Chapters 5 or 7, or a mesh with a relatively small number of internal points, as the coarse mesh. For the second approach, the coarse mesh will be constructed using the surface mesh as sole input data. Similarly, the sub-domain interface, irrespective of the method, must be constructed using either the coarse mesh or the surface mesh. The purpose of the following sections is to discuss these two methods. To this end, we introduce the notion of an inductive Delaunay triangulation.
13.3.3 Partitioning by inductive Delaunay triangulation
As a complement to Chapter 2, we define herein the notion of an inductive Delaunay triangulation. We consider an "ideal" context where the set P of points in R³ is assumed to be in general position. Then, the sets (cf. Chapter 2)
V_i = { P ∈ R³ such that d(P, P_i) ≤ d(P, P_j), ∀ j ≠ i } ,   (13.1)
where d(·, ·) is the usual distance from point to point, define the Voronoi cells whose dual is the Delaunay triangulation of Conv(P), the convex hull of P. This triangulation satisfies the empty sphere criterion. Provided a plane, denoted (Π), we assume that this plane
• does not pass through any vertex of the cells V_i,
• intersects at least one bounded Voronoi cell.
As a consequence (of the first assumption), any vertex of the cells V_i^(Π), as hereafter defined, will be of degree three (meaning that a vertex is shared by exactly three cells). Then, the sets
V_i^(Π) = { P ∈ (Π) such that d(P, P_i) ≤ d(P, P_j), ∀ j ≠ i } ,   (13.2)
13.3. PARALLEL COMPUTING
373
Figure 13.1: The inductive empty sphere criterion. Proposition 13.3. The set of faces JT(n) is a sub-set of the set of faces of the Delaunay triangulation ofConv(P). Proof. If S(n) stands for the point intersection of (II) with the edge common to three Voronoi cells related to P, then • the dual in R3 of this common edge is a face of the Delaunay triangulation of P, • the dual of £(n) is a face of ^ n ). Hence every face of J^^ is a face of the triangulation of P.
d
As a corollary, the following result holds Proposition 13.4. Conv(P).
The union of the elements of F^ separates
It is then possible to split Conv(P) into two parts.
13.3.4
Partitioning a set of points by induction from the Delaunay triangulation of the convex hull.
Provided a set of points P and a plane (IT). We construct the Delaunay triangulation of P (in fact, that of Conv(P}}. The construction of a partitioning consisting of two parts of Conv(P) can be obtained by identifying the faces of the triangulation whose circumsphere with centers on (II) is empty.
374
CHAPTER 13. OTHER APPLICATIONS
Remark. If (II) intersects only unbounded edges of the Voronoi cells in .R3, then j^(n) consists of the faces of the Delaunay triangulation of V belonging to the boundary of Conv(P] and thus is not a separator. This fact justifies the second part of the initial assumption regarding the given plane. 13.3.5
Partitioning from the domain boundary
In this case, we use the surface discretization of a domain in R3 to obtain a separator enabling us to split the given domain which has not yet been meshed. The construction of this separator is based on the inductive extension of the Delaunay triangulation as described above. Provided that the surface discretization of the domain is Delaunay admissible (cf. Chapter 3) and defining P as the set of the vertices of the surface triangles, we recall that if the triangulation of Conv(P) is given, then a triangulation of the separator can be easily obtained. In fact, one needs only to find the Voronoi cells in R3 which intersect the plane (II) and to construct in R3 the network of triangles forming the desired triangulation which involves the duality of the Voronoi cells in (II). It may be observed that, on the one hand, this triangulation (in fact, this mesh) may include some faces exterior to the domain and, on the other hand, that it is empty in the sense that the element vertices are only the points of Conv(P) (as no points have been added). A few questions are still open : • the surface triangulation is not Delaunay admissible (thus the above method cannot be directly applied to define a triangulation of a separator) , • the points must be created on the separator so as to enrich this separator (the goal being to obtain a mesh suitable for a finite element type application), • the plane (II) must be effectively defined, in the case where a meshed surface is considered and not a convex hull of points. Scheme for the separator construction. We consider a set of points P which are the vertices of a mesh of a surface F which discretizes the boundaries of a polyhedral domain Q. In addition, we consider a plane
13.3. PARALLEL COMPUTING
375
(II). The question then is to construct a separator for P and to derive from it a partitioning of Q. First of all, besides the above problems, we have to notice that the given problem is slightly different from the problem previously discussed. The context is no longer "ideal", the points are not in general position and the domain is not a convex hull problem. Before describing the partitioning method, we would like to indicate the scheme of the construction. Provided a plane (II), the proposed method consists of : • finding a polygonal line (a set of edges), £, such that — all of the edges of C are edges of F, - the line £ is Delaunay admissible with respect to the inductive empty sphere criterion, • inserting the points of £ so as to complete the Delaunay triangulation of a surface whose boundary is C (this is a three-dimensional triangulation problem where the topology is two-dimensional). Upon completion of this stage, we have a empty mesh of the above surface. Then, one has to • create points on this surface (by means of inserting points along the mesh edges as discussed in Chapter 5), • insert these points and repeat the process as long as a point is created. A mesh of the above surface is obtained once this stage is completed. This mesh is the desired separator. As a consequence, two sub-domains can be defined whose boundaries consist of the union of • the mesh of the above separator, • and the relevant parts of the mesh of F. By iterating this process, the initial domain can be split into two or more sub-domains. The interface between two sub-domains is constructed in a unique manner and then a mesh construction algorithm (as described in Chapter 7) can be applied in each sub-domain whose union covers Q, and whose interfaces are conforming. It may be noticed, as a positive effect of the proposed method, that the mesh generation method can be used in parallel. We now turn to give some details about this set of construction stages. At this time, we assume that the plane (II) is given (furthermore we will address the problem of how to define this plane).
376
CHAPTER 13. OTHER APPLICATIONS
Construction of an empty mesh of the separator. We consider a square in the plane (II) enclosing the region intersection of ft with this plane. This square is meshed by means of two triangles. We find the set of triangles of F intersected by the plane. We insert, using the inductive Delaunay kernel the vertices of these triangles in the initial mesh of the square. The resulting mesh is a set of triangles in R3 which form a polyhedral surface. By extracting from this mesh a polygonal boundary, the line £ (a sub-set of the edges of the intersected triangles) we obtain an empty mesh of the separator. Exercise 13.1. Can the cavity associated with any vertex of an intersected triangle be empty and, if so, is it possible to find a line £ suitable to define the separator ? Construction of the final mesh of the separator. The "internal" points are created on each face edge of the empty mesh. The points are then inserted using the inductive Delaunay kernel. The internal point insertion process is similar to that depicted in Chapter 5. Exercise 13.2. Show that the cavity associated with every so-defined internal point is a non-empty set. Possible definitions of the plane (II). Several strategies can be adopted to define the plane (II) which will serve as a support for the separator. The most popular methods consist of : • enclosing the surface F in a box and using the extrema of this box so as to determine the plane, • finding the inertial axes of the domain and using them. The inertial axes are defined through the eigenvectors of the inertia matrix associated with the points of F, i.e., with the set V. This matrix is as follows :
13.3. PARALLEL COMPUTING
377
where G is the centroid of the points of P and Xyx = Xxy (etc., so the above matrix is a symmetric matrix). evaluating the volume of tt and defining a plane so as to balance the volumes, • etc.
One may consult [Galtier,George-1996] and [Galtier-1997] for more details about the construction of the mesh of the separator using this method. Application example. In this paragraph, we depict an example of partitioning for a mechanical device. To appreciate this example, we report the characteristics of the mesh obtained when the whole domain is considered and those of the sub-meshes resulting from the partitioning. The partitioning process has been completed in two phases, first the initial surface has been split into two parts and in turn these parts have again each been split into two parts.
                                      np       ne
  initial surface                   4,756    9,512
  surface of the first sub-domain   1,346    2,688
  surface of the second sub-domain  1,435    2,866
  surface of the third sub-domain   1,309    2,610
  surface of the fourth sub-domain  1,363    2,722
Table 13.1: Characteristics of the different surface meshes.
Figure 13.2: The data: the surface mesh related to the whole domain.
Figure 13.3: The output: the meshes related to the surface partitioning.
                        np        ne        t        Q
  entire mesh         9,161    40,392    10.32     9.28
  first sub-domain    2,503    10,764     2.59    11.05
  second sub-domain   2,546    10,673     2.87     8.49
  third sub-domain    2,269     9,427     2.22    21.43
  fourth sub-domain   2,396    10,095     2.38     9.28
Table 13.2: Characteristics of the different volume meshes.

By observing the initial surface and the four constructed surfaces, we see a well balanced load. The same holds when we examine the different three-dimensional meshes constructed, as compared with the mesh resulting from the initial surface. One may remark, in Figure 13.3, that there are five parts while only four were expected. This is due to the fact that one of the parts includes two connected components. This may or may not be a source of concern depending on the type of algorithm used in the envisioned computational process.
Remarks. The areas that demand more attention, in particular for finite element type applications, concern the size of the sub-domain interfaces (in terms of the number of points or elements), their smoothness (in fact, this property is related to the way in which the points are created on the interfaces, by not directly using the plane, but instead using a smoother surface) and the proper load balancing of the sub-domains. Moreover, the assumption that guarantees the validity of the method, i.e., the Delaunay admissibility of the surface Γ (at least in some neighborhood of (Π) on the supporting surface), should probably be avoided.
13.4 Minimal roughness of a surface
We turn here to a surface mesh problem for a Cartesian surface defined by a finite set of points. We assume that the points are given in a Cartesian manner, (x, y, f(x, y)), where f is the function describing the surface. Let T be an arbitrary triangulation of the points (x, y) in the plane and let T̃ be the corresponding triangulation defined in R³ by the triples (x, y, f(x, y)) related to the (x, y)'s. The roughness value of T̃ is the quantity
R(T̃) = Σ_{i=1..n} |g|²_{H¹(K_i)} ,
where K_i is a triangle of T̃, n being the number of elements in T̃ and |g|_{H¹(K_i)} being defined by
|g|²_{H¹(K_i)} = ∫_{K_i} ( (∂g/∂x)² + (∂g/∂y)² ) dx dy ,
with g the piecewise linear surface representing T̃. Thus, [Rippa-1990] shows that the Delaunay triangulation of the points (x, y) in the plane serving to define the triangulation T̃ is the one which minimizes the roughness value of the surface. A possible application of this property is the reconstruction of terrains (in geology) starting from a series of points, with the purpose of obtaining a surface with a minimal roughness value.
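A small sketch of the roughness computation for a given triangulation is shown below; points_xy, z and triangles are assumed to be a NumPy array of plane coordinates, an array of heights f(x, y) and a list of vertex-index triples, and the gradient of each linear patch is obtained from a 2 × 2 linear system.

```python
import numpy as np

def roughness(points_xy, z, triangles):
    """Roughness value of the triangulation: sum over the triangles of the squared
    H1 semi-norm of the piecewise linear surface g (as in [Rippa-1990])."""
    total = 0.0
    for tri in triangles:
        p = points_xy[tri]
        mat = np.column_stack((p[1] - p[0], p[2] - p[0]))        # 2x2 edge-vector matrix
        area = 0.5 * abs(np.linalg.det(mat))
        rhs = np.array([z[tri[1]] - z[tri[0]], z[tri[2]] - z[tri[0]]])
        grad = np.linalg.solve(mat.T, rhs)                       # gradient of the linear patch
        total += area * (grad @ grad)                            # integral of |grad g|^2 over K_i
    return total
```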
13.5 Notes
As discussed in this chapter, Delaunay triangulations or Delaunay-type meshes can be involved in various applications. In this respect, an "exotic" application of the anisotropic Delaunay triangulation can be mentioned, namely the construction of color palettes. If we consider the RGB space (Red, Green, Blue) as the color space, we are actually in a non-isotropic context (the color perception of the human eye is indeed not linear). This implies that the proximity notion between two given colors can be expressed by an anisotropic metric, thus leading to an anisotropic Delaunay method to construct the colors which interpolate one or several given colors so as to define a nice looking palette. As a final remark, our feeling is that there exist other fields of application for the concepts (and thus for some of the algorithms) discussed in this book. Indeed, most problems relying on the notion of proximity can benefit from a formulation based on the notions or properties existing in a Delaunay triangulation.
Introduction

In this short appendix, we indicate several software packages mainly devoted to mesh generation, most of them issued from INRIA (or developed in cooperation with INRIA). For more details, the reader interested in any of these packages is invited to visit the ftp sites mentioned herein or to contact the authors of the book at the following addresses:
[email protected]
[email protected]
BL2D
The BL2D software package (cf. [Borouchaki,Laug-1996]) includes several parts devoted to:
• domain boundary definition,
• boundary discretization,
• Delaunay-type mesh generation following an algorithm suitable for isotropic or anisotropic mesh construction. This algorithm uses a metric map to govern the mesh generation process.
This software is mainly based on algorithms that have been, for the most part, discussed in this book. It allows for the definition of adaptive computational loops. The package can be accessed at the following address:
ftp://ftp.inria.fr/INRIA/Projects/Gamma
GHS3D
GHS3D is an automatic tetrahedral mesh generation algorithm based on a (classical) isotropic Delaunay method (cf. [Borouchaki et al. 1995]). It constructs a mesh for a volume which is defined by a conforming mesh of its boundary. The volume mesh is controlled by the properties of the surface mesh (thereby defining both the element density and the element quality). The resulting mesh includes the input faces and edges as element faces and edges. In other words, the resulting mesh satisfies Definition 3.2 of Chapter 3. TetMesh-GHS3D is the commercial release of this software, which is available from Simulog (a commercial subsidiary of INRIA).
Emc2
This software offers the capability to construct two-dimensional meshes. To this end, it includes an editing system for mesh and boundary management. Formally speaking, Emc2 consists of three parts: a part devoted to data construction (points, lines, arcs, boundaries, ...), a part related to mesh generation (by means of a Delaunay method) and a part which allows for manipulation of existing meshes (for quadrilateral element generation, symmetry construction, ...). This software is compatible with the Modulef software package described hereafter. Regarding the different capabilities of Emc2, we refer the reader to [Hecht,Saltel-1989], where a detailed description can be found.
Modulef
The Modulef library (cf. [Bernadou et al. 1988]), developed mainly at INRIA, proposes a series of two-dimensional mesh generation algorithms, including a user-driven method, an algebraic method ([Cook-1974]), an advancing-front type method ([George,Seveno-1994]), a Delaunay type algorithm (cf. Chapter 5) and a multiblock approach. In three dimensions, the user-driven method and the multiblock method (using locally algebraic approaches) are available. A set of programs which allow for mesh modifications (symmetry, refinement, ...) is also provided. Furthermore, a series of preprocessors assists the user in most cases, cf. [George-1993].
Simail
Simail is a graphic interactive software package suitable for mesh generation and mesh manipulation. Line, surface as well as volume meshes are carried out. This software is available through Simulog (http://www.simulog.fr). The aim of Simail is to significantly reduce the time needed for complex mesh creation. To this end, the software includes efficient algorithms coupled with user-friendly interfaces. The CAD interface module of Simail enables the user to pick curves or surfaces from various CAD modelers. Simail offers surface mesh generation facilities for these surfaces. Structured as well as unstructured meshes are supported by Simail, making this software suitable for various problems in numerical simulation (computational fluid dynamics, structural analysis, electromagnetic problems, ...). Its rather complete set of mesh generation methods (including GHS3D and some other algorithms developed at INRIA) and of mesh manipulation tools for surface or volume applications allows users to construct meshes of high quality very quickly. Simail is easy to use thanks to an attractive interactive multiwindow environment.
Gfem
To simplify FEM programs, the dedicated programming language Gfem allows for the manipulation of P¹ functions, explicit descriptions of partial differential equations and of display modules. Gfem is implemented in a freeware version called freefem (http://www.asci.fr) and in commercial menu-driven programs: MacGfem, PCGfem and XGfem. All have an automatic 2D triangular mesh generator based on the Voronoi-Delaunay algorithm with boundary recovery, and also a mesh adaptivity module based on the Delaunay-Voronoi algorithm with variable metrics given by the Hessians of P¹ functions (this algorithm was developed by [Castro,Hecht-1995]).
Surface mesh of a mechanical device (data courtesy of the MacNeal-Schwendler Corp.). Top: original surface mesh; bottom: geometric surface remeshing.
Tet mesh of a carrier wheel meshed using ANSYS.
Tet mesh of a rudder stock component meshed using ANSYS, courtesy of Rolls Royce Industrial Power Group.
Geometry of a cylinder head, courtesy of IFP (Institut Francais du Petrole) and Simulog.
Tet mesh resulting from TetMesh-GHS3D (Simulog-INRIA).
Geometry of a part of an automotive cylinder head, courtesy of Renault and Simulog.
Surface mesh of the cylinder head, courtesy of Renault and Simulog. This mesh has been used for the numerical simulation of an admission cycle with the N3S-MUSCL software package.
Bibliography
[Aggarwal et al. 1989] A. AGGARWAL, L.J. GUIBAS, J. SAXE AND P.W. SHOR (1989), A linear-time algorithm for computing the Voronoi diagram of a convex polygon, Discrete Comput. Geom. 4(6), 591-604. [Aho et al. 1983] A. AHO, J. HOPCROFT AND J. ULLMAN (1983), Data structures and algorithms, Addison-Wesley, Reading, Mass. [Armstrong et al. 1995] C.G. ARMSTRONG, D.J. ROBINSON, R.M. McKEAG, T.S. LI, S.J. BRIDGETT AND R.J. DONAGHY (1995), Applications of the medial axis transform in analysis modelling, in Proc. of the Fifth International Conference on Reliability of Finite Element Methods for Engineering Applications, Nafems, 415-426. [Aurenhammer-1987] F. AURENHAMMER (1987), Power diagrams: properties, algorithms and applications, SIAM J. Comput. 16, 78-96. [Aurenhammer-1990] F. AURENHAMMER (1990), A new duality result concerning Voronoi diagrams, Discrete Comput. Geom. 5, 243-254. [Aurenhammer-1991] F. AURENHAMMER (1991), Voronoi diagrams: a survey of a fundamental geometric data structure, ACM Comput. Surv. 23, 345-405. [Avis,Bhattacharya-1983] D. AVIS AND B.K. BHATTACHARYA (1983), Algorithms for computing d-dimensional Voronoi diagrams and their duals, in F.P. Preparata (Ed.), Advances in Computing Research 1, 159-188. [Avis,ElGindy-1987] D. AVIS AND H. ELGINDY (1987), Triangulating point sets in space, Discrete Comput. Geom. 2, 99-111. [Azevedo-1991]
E.F. D'AZEVEDO (1991), Optimal triangular mesh generation by coordinate transformation, Siam J. Sci. Stat. Comput. 12, 755-786.
[Azevedo,Simpson-1989] E.F. D'AzEVEDO AND B. SIMPSON (1989), On optimal interpolation triangle incidences, Siam J. Sci. Stat. Comput. 10, 1063-1075. [Azevedo,Simpson-1991] E.F. D'AZEVEDO AND B. SIMPSON (1991), On optimal triangular meshes for minimizing the gradient error, Numerische Mathematik 59(4), 321-348. [Avnaim et al. 1994] F. AVNAIM, J.D. BOISSONNAT, O. DEVILLERS, F.P. PREPARATA AND M. YVINEC (1994), Evaluating signs of determinants using single-precision arithmetic, RR INRIA 2306. [Babuska,Rheinboldt-1978] I. BABUSKA AND W.C. RHEINBOLDT (1978), A posteriori error estimates for the finite element method, Int. j. numer. methods eng. 12, 1597-1615. [Babuska et al. 1994] I. BABUSKA, T. STROUBOULIS, C.S. UPADHYAY, S.K. GANGARAJ AND K. COPPS (1994), Validation of a posteriori error estimation by numerical approach, Int. j. numer. methods eng. 37, 1073-1123. [Bai,Brandt-1987] D. BAI AND A. BRANDT (1987), Local mesh refinement multilevel techniques, Siam J. Sci. Stat. Comput. 8(2), 109-134. [Baker-1986]
T.J. BAKER (1986), Three dimensional mesh generation by triangulation of arbitrary point sets, Proc. AIAA 8th Comp. Fluid Dynamics Conf. Honolulu, HI, 87-1124.
[Baker-1988a]
T.J. BAKER (1988), Generation of tetrahedral meshes around complete aircraft, Numerical grid generation in computational fluid mechanics '88, Miami, FL.
[Baker-1989]
T.J. BAKER (1989), Developments and trends in threedimensional mesh generation, Appl. Num. Math. 5, 275-304.
[Baker-1989]
T.J. BAKER (1989), Element quality in tetrahedral meshes, Proc. 7th Int. Conf. on Finite Element Methods in Flow Problems. Huntsville, AL.
[Baker-1989]
T.J. BAKER (1989), Automatic Mesh Generation for Complex Three-Dimensional Regions Using a Constrained Delaunay Triangulation, Eng. Comp. 5, 161-175.
[Baker-1991]
T.J. BAKER (1991), Shape Reconstruction and Volume Meshing for Complex Solids, Int. j. numer. methods eng. 32, 665-675.
[Baker-1992]
T.J. BAKER (1992), Mesh Generation for the Computation of Flowfields over Complex Aerodynamic Shapes, Comp. Math. Applic. 24(5-6), 103-127.
[Bank,Chan-1986] R. E. BANK AND T.F. CHAN (1986), PLTMGC: a multigrid continuation program for parametrized nonlinear elliptic system, Siam J. Sci. Stat. Comput. 7(2), 540-559. [Bansch-1991]
E. BANSCH (1991), Local Mesh Refinement in 2 and 3 Dimensions, Impact of Comp. in Sci. and Eng. 3, 181-191.
[Berger-1978]
M. BERGER (1978), Geometrie, tome 3 : convexes et polytopes, polyedres reguliers, aires et volumes, Fernand Nathan, Paris.
[Berger,Jameson-1985] M.J. BERGER AND A. JAMESON (1985), Automatic adaptive grid refinement for Euler equations, AIAA J. 23(4), 561-568. [Bernardi-1996] C. BERNARDI (1996), Estimations a posteriori: une mode ou un outil de base, Matapli 48, 19-30. [Bernadou et al. 1988] M. BERNADOU, P.L. GEORGE, A. HASSIM, P. JOLY, P. LAUG, B. MULLER, A. PERRONNET, E. SALTEL, D. STEER, G. VANDERBORCK ET M. VIDRASCU (1988), Modulef: une bibliotheque modulaire d'elements finis, INRIA (Eds.). [Boissonnat-1984] J.D. BOISSONNAT (1984), Geometric structures for 3-dimensional shape representation, ACM Transactions on Graphics 3(4), 266-286. [Boissonnat et al. 1992] J.D. BOISSONNAT, O. DEVILLERS, R. SCHOTT, M. TEILLAUD AND M. YVINEC (1992), Applications of Random Sampling to On-Line Algorithms in Computational Geometry, Discrete Comput. Geom. 8, 51-71. [Boissonnat,Teillaud-1986] J.D. BOISSONNAT AND M. TEILLAUD (1986), A hierarchical representation of objects: the Delaunay Tree, Second ACM Symp. on Comput. Geom., Yorktown Heights. [Boissonnat,Yvinec-1995] J.D. BOISSONNAT ET M. YVINEC (1995), Geometrie Algorithmique, Ediscience, Paris. [Borouchaki-1993] H. BOROUCHAKI (1993), Graphe de connexion et triangulation de Delaunay, These, Universite Paris VII, Paris. [Borouchaki-1994] H. BOROUCHAKI (1994), Triangulation sous contraintes en dimension quelconque, RR INRIA 2373. [Borouchaki et al. 1995] H. BOROUCHAKI, F. HECHT, E. SALTEL AND P.L. GEORGE (1995), Reasonably efficient Delaunay based mesh generator in 3 dimensions, in 4th International Meshing Roundtable, Albuquerque, New Mexico, 3-14.
[Borouchaki,George-1996a] H. BOROUCHAKI ET P.L. GEORGE (1996), Triangulation de Delaunay et metrique riemannienne. Applications aux maillages elements finis, Revue europeenne des elements finis 5(3), 323-340. [Borouchaki,George-1996b] H. BOROUCHAKI ET P.L. GEORGE (1996), Triangulation de Delaunay et metrique riemannienne. C. R. Acad. Sci. Paris, t. 323, Serie I, 1195-1200. [Borouchaki,George-1996c] H. BOROUCHAKI ET P.L. GEORGE (1996), Mailleur de Delaunay gouverne par une carte. C. R. Acad. Sci. Paris, t. 323, Serie I, 1141-1146. [Borouchaki et al. 1996] H. BOROUCHAKI, P.L. GEORGE AND S.H. Lo (1996), Optimal Delaunay point insertion, Int. j. numer. methods eng. 39(20), 3407-3438. [Borouchaki,George-1997] H. BOROUCHAKI AND P.L. GEORGE (1997), Aspects of 2D Delaunay mesh generation, Int. j. numer. methods eng. 40, 1957-1975. [Borouchaki,Laug-1996] H. BOROUCHAKI ET P. LAUG (1996), Le mailleur adaptatif bidimensionnel BL2D: manuel d'utilisation et documentation, RT INRIA 0185. [Borouchaki et al. 1997a] H. BOROUCHAKI, P.L. GEORGE, F. HECHT, P. LAUG AND E. SALTEL (1997), Delaunay mesh generation governed by metric specifications. Part I: Algorithms, Finite Elements in Analysis and Design, 25(1-2), 61-83. [Borouchaki et al. 1997b] H. BOROUCHAKI, P.L. GEORGE AND B. MOHAMMADI (1996), Delaunay mesh generation governed by metric specifications. Part II: Application examples, Finite Elements in Analysis and Design, 25(1-2), 85-109. [Borouchaki,Lo-1995] H. BOROUCHAKI AND S.H. Lo (1995), Fast Delaunay triangulation in three dimensions, Comput. Methods Appl. Mech. Engrg. 128, 153-167. [Bowyer-1981]
A. BOWYER (1981), Computing Dirichlet tesselations, The Comp. J. 24(2), 162-167.
[Briere,George-1995] E. BRIERE DE L'ISLE AND P.L. GEORGE (1995), Optimization of tetrahedral meshes, IMA Volumes in Mathematics and its Applications, I. Babuska, W.D. Henshaw, J.E. Oliger, J.E. Flaherty, J.E. Hopcroft and T. Tezduyar (Eds.), 75, 97-128. [Bristeau,Periaux-1986] M.O. BRISTEAU AND J. PERIAUX (1986), Finite element methods for the calculation of compressible viscous flows using self-adaptive refinement, VKI lecture notes on CFD.
[Brown-1979]
K.Q. BROWN (1979), Voronoi diagrams from convex hull, Inform. Process. Lett., 9, 223-228.
[Castro-1994]
M.J. CASTRO-DIAZ (1994), Mesh refinement over triangulated surfaces, RR INRIA 2462.
[Castro,Hecht-1995] M.J. CASTRO-DIAZ AND F. HECHT (1995), Anisotropic surface mesh generation, RR INRIA 2672. [Castro et al. 1995] M.J. CASTRO-DIAZ, F. HECHT AND B. MOHAMMADI (1995), New Progress in Anisotropic Mesh Adaption for Inviscid and Viscous Flow Simulations, RR INRIA 2671.
E. CATMULL (1974), A Subdivision Algorithm for Computer Display of Curved Surfaces, Univ. Utah Comp. Sci. Dept., UTEC-CSc-74-133.
[Cavendish et al. 1985] J.C. CAVENDISH, D.A. FIELD AND W.H. FREY (1985), An approach to automatic three-dimensional finite element mesh generation, Int. j. numer. methods eng. 21, 329-347. [Cendes et al. 1985] Z.J. CENDES, D.N. SHENTON AND H. SHAHNASSER (1985), Magnetic field computations using Delaunay triangulations and complementary finite element methods, IEEE Trans. Magnetics 21. [Chazelle-1984] B. CHAZELLE (1984), Convex partitions of polyhedra: a lower bound and worst-case optimal algorithm, SIAM J. Comput 13, 488-507. [Chazelle,Palios-1990] B. CHAZELLE AND L. PALIOS (1990), Triangulating a nonconvex poly tope, Discrete Comput. Geom. 5, 505-526. [Chazelle-1991] B. CHAZELLE (1991), Triangulating a simple polygon in linear time, Discrete Comput. Geom. 6, 485-524. [Cherfils,Hermeline-1990] C. CHERFILS AND F. HERMELINE (1990), Diagonal swap procedures and characterizations of 2D-Delaunay triangulations, Rairo, Math. Mod. and Num. Anal. 24(5), 613-626. [Chew-1989]
L.P. CHEW (1989), Constrained Delaunay Triangulations, Algorithmica 4, 97-108.
[Chew,Drysdale-1985] L.P. CHEW AND R.L. DRYSDALE (1985), Voronoi diagrams based on convex distance functions, ACM 0-89791-163-6, 235-244. [Ciarlet-1978]
P.G. CIARLET (1978), The Finite Element Method, North Holland.
[Ciarlet-1986]
P.G. CIARLET (1986), Elasticite Tridimensionnelle, RMA n° 1, Masson, Paris.
[Ciarlet-1988]
P.G. CIARLET (1988), Mathematical Elasticity, Vol 1: ThreeDimensional Elasticity, North-Holland.
[Ciarlet-1991]
P.G. CIARLET (1991), Basic Error Estimates for Elliptic Problems, in Handbook of Numerical Analysis, vol II, Finite Element methods (Part 1), P.G. Ciarlet and J.L. Lions Eds, North Holland, 17-352.
[Clarkson-1987] K. CLARKSON (1987), New applications of random sampling in computational geometry, Discrete Comput. Geom. 2, 195-222. [Clarkson,Shor-1989] K. CLARKSON AND P.W. SHOR (1989), Applications of random sampling in computational geometry, Discrete Comput. Geom. 4, 387-421. [Clarkson et al. 1989] K. CLARKSON, R.E. TARJAN AND C.J. VAN WYK (1989), A fast Las Vegas algorithm for triangulating a simple polygon, Discrete Comput. Geom. 4, 423-432. [Cook-1974]
W.A. COOK (1974), Body oriented coordinates for generating 3-dimensional meshes, Int. j. numer. methods eng. 8, 27-43.
[Coorevits et al. 1995] P. COOREVITS, P. LADEVEZE AND J.P. PELLE (1995), An automatic procedure for finite element analysis in 2D elasticity, Comput. Methods Appl. Mech. Engrg. 121, 91-120. [Cougny et al. 1990] H. COUGNY, M.S. SHEPHARD AND M.K. GEORGES (1990), Explicit Node Point smoothing within Octree, Scorec Report 10(1990), RPI, Troy, NY. [Coulomb-1987] J.L. COULOMB (1987), Maillage 2D et 3D. Experimentation de la triangulation de Delaunay, Conf. on Automated mesh generation and adaptation, Grenoble. [Coupez-1991]
TH. COUPEZ (1991), Grandes transformations et remaillage automatique, These ENSMP, Paris.
[Coxeter et al. 1959] H.S.M. COXETER, L. FEW AND C.A. ROGERS (1959), Covering space with equal spheres, Mathematika 6, 147-151. [Dannelongue,Tanguy-1991] H. DANNELONGUE AND P.A. TANGUY (1991), Three-dimensional adaptive finite element computations and applications to non-Newtonian flows Int. J. Num. Meth. Fluids 13, 145-165. [Delaunay-1934] B. DELAUNAY (1934), Sur la sphere vide, Bui. Acad. Sci. URSS, Class. Sci. Nat., 793-800.
[Devillers-1992] O. DEVILLERS (1992), Randomization yields simple O(n log* n) algorithms for difficult Omega(n) problems, Internat. J. Comput. Geom. Appl. 2(1), 97-111. [Devillers et al. 1992] O. DEVILLERS, S. MEISER AND M. TEILLAUD (1992), Fully dynamic Delaunay triangulation in logarithmic expected time per operation, Comput. Theory Appl. 2(2), 55-80. [Dirichlet-1850] G.L. DIRICHLET (1850), Uber die reduction der positiven quadratischen formen mit drei unbestimmten ganzen zahlen, Z. Angew Math. Mech. 40(3), 209-227. [Dolenc,Makela-1990] A. DOLENC AND I. MAKELA (1990), Optimized triangulation of parametric surfaces, Mathematics of Surfaces IV. [Dyn,Goren-1993] N. DYN AND I. GOREN (1993), Transforming triangulations in polygonal domains, Computer Aided Geometric Design 10, 531-536. [Dwyer-1987]
R.A. DWYER (1987), A faster Divide-and-Conquer algorithm for constructing Delaunay triangulations, Algorithmica 2, 137-151.
[Dwyer-1991]
R.A. DWYER (1991), Higher-Dimensional Voronoi diagrams in linear expected time, Discrete Comput. Geom. 6, 342-367.
[Edelsbrunner-1987] H. EDELSBRUNNER (1987), Algorithms in Combinatorial Geometry, vol 10, EATCS Monographs on Theoretical Computer Science, Springer-Verlag. [Edelsbrunner et al. 1990] H. EDELSBRUNNER, F.P. PREPARATA AND D.B. WEST (1990), Tetrahedrizing point sets in three dimensions, J. Symbolic Computation 10, 335-347. [Faddeev et al. 1992] D.K. FADDEEV, N.P. DOLBILIN, S.S. RYSHKOV AND M.I. SHTOGRIN (1992), B.N. Delone (on his life and creative work), Proceedings of the Steklov Inst. of Math. 4, 1-9. [Farhat,Lesoinne-1993] C. FARHAT AND M. LESOINNE (1993), Automatic partitioning of unstructured meshes for the parallel solution of problems in computational mechanics, Int. j. numer. methods eng. 36, 745-764. [Farin-1983]
G. FARIN (1983), Smooth interpolation to scattered 3D data, in R.E. Barnhill and W. Boehm (Eds.), Surfaces in Computer Aided Geometric Design, North-Holland, 43-63.
[Farin-1985]
G. FARIN (1985), A modified Clough-Tocher interpolant, Computer Aided Geometric Design 2, 19-27.
[Farin-1986]
G. FARIN (1986), Triangular Bernstein-Bezier patches, Computer Aided Geometric Design 3(2), 83-127.
[Field-1988]
D.A. FIELD (1988), Laplacian smoothing and Delaunay triangulations, Commun. numer. methods eng. 4, 709-712.
[Field,Smith-1991] D.A. FIELD AND W.D. SMITH (1991), Graded tetrahedral finite element meshes, Int. j. numer. methods eng. 31, 413-425. [Field,Yarnall-1989] D.A. FIELD AND K. YARNALL (1989), Three dimensional Delaunay triangulations on a Cray X-MP, in Supercomputing 88, vol 2, Science et Applications, IEEE C.S. and ACM Sigarch. [Filip et al. 1986] D. FILIP, R. MAGEDSON AND R. MARKOT (1986), Surface algorithm using bounds on derivatives, Comput. Aided Geom. Des. 3, 295-311. [Fortune-1987]
S.J. FORTUNE (1987), A sweepline algorithm for Voronoi diagrams, Algorithmica 2, 153-174.
[Fortune-1992]
S.J. FORTUNE (1992), Voronoi diagrams and Delaunay triangulations, in D.Z. Du and F.K. Hwang (Eds.), Computing in Euclidean Geometry, vol 1 of Lecture Notes Series on Computing, World Scientific, Singapore, 193-233.
[Fortune,Van Wyk-1993] S.J. FORTUNE AND C.J. VAN WYK (1993), Efficient exact arithmetic for computational geometry, Proc. 9th ACM Sympos, Comput. Geom., 163-172. [Frey-1993]
P.J. FREY (1993), Generation automatique de maillages 3D dans des ensembles discrets. Application biomedicale aux methodes d'elements finis. These, Universite de Strasbourg I.
[Frey et al. 1994] P.J. FREY, B. SARTER AND M. GAUTHERIE (1994), Fully automatic mesh generation for 3-D domains based upon voxel sets, Int. j. numer. methods eng. 37, 2735-2753. [Frey et al. 1996] P.J. FREY, H. BOROUCHAKI AND P.L. GEORGE (1996), Delaunay Tetrahedralization using an Advancing-Front Approach, in 5th International Meshing Roundtable, Pittsburg, PA, 31-43. [Frey,Field-1991] W.H. FREY AND D.A. FIELD (1991), Mesh relaxation: a new technique for improving triangulations, Int. j. numer. methods eng. 31, 1121-1131. [Frey,Borouchaki-1996] P.J. FREY ET H. BOROUCHAKI (1996), Texel: triangulation de surfaces implicites. Partie I: aspects theoriques, RR INRIA 3066.
[Galtier-1997]
J. GALTIER (1997), Structures de donnees irregulieres et architectures haute performance. Une etude du calcul numerique intensif par le partitionnement de graphes, These, Universite de Versailles Saint-Quentin.
[Galtier,George-1996] J. GALTIER AND P.L. GEORGE (1996), Prepartitioning as a way to mesh subdomains in parallel, in 5th International Meshing Roundtable, Pittsburg, PA, 107-121. [Garey et al. 1978a] M.R. GAREY, D.S. JOHNSON, F.P. PREPARATA AND R.E. TARJAN (1978), Triangulating a simple polygon, Inform. Proc. Letters 7(4), 175-180. [Gaudel et al. 1987] M.C. GAUDEL, M. SORIA ET C. FROIDEVAUX (1987), Types de Donnees et Algorithmes : Recherche, Tri, Algorithmes sur les Graphes, collection didactique INRIA, 4(2). [A.George-1971] J.A. GEORGE (1971), Computer implementation of the finite element method, Stan-CS, Ph. D. [George-1991]
P.L. GEORGE (1991), Automatic mesh generation. Applications to finite element methods, Wiley.
[George-1993]
P.L. GEORGE (1993), Construction et Modification de Maillages, Guide Modulef 3.
[George-1996a]
P.L. GEORGE (1996), Automatic Mesh Generation and Finite Element Computation, in Handbook of Numerical Analysis, vol IV, Finite Element methods (Part 2), Numerical Methods for Solids (Part 2), P.G. Ciarlet and J.L. Lions Eds, North Holland, 69-190.
[George-1997]
P.L. GEORGE (1997), Improvement on Delaunay based 3D automatic mesh generator, Finite Elements in Analysis and Design, 25(3-4), 297-317.
[George et al. 1990] P.L. GEORGE, F. HECHT AND E. SALTEL (1990), Fully automatic mesh generator for 3D domains of any shape, Impact of Comp. in Sci. and Eng. 2, 187-218. [George et al. 1991a] P.L. GEORGE, F. HECHT AND E. SALTEL (1991), Automatic mesh generator with specified boundary, Comput. Methods Appl. Mech. Engrg. 92, 269-288. [George et al. 1991b] P.L. GEORGE, F. HECHT AND M.G. VALLET (1991), Creation of internal points in Voronoi's type method, Control and adaptation, Adv. in Eng. Soft. 13(5/6), 303-313.
[George,Hermeline-1992] P.L. GEORGE AND F. HERMELINE (1992), Delaunay's mesh of a convex polyhedron in dimension d. Application to arbitrary polyhedra, Int. j. numer. methods eng. 33, 975-995. [George,Seveno-1994] P.L. GEORGE AND E. SEVENO (1994), The advancing-front mesh generation method revisited, Int. j. numer. methods eng. 37, 3605-3619. [Green,Sibson-1978] P. GREEN AND R. SIBSON (1978), Computing Dirichlet tesselations in the plane, Comput. J. 21, 168-173. [Gregory-1974]
J.A. GREGORY (1974), Smooth interpolation without twist constraints, in R.E. Barnhill and R.F. Riesenfeld (Eds.), Computer Aided Geometric Design, Academic Press, 71-87.
[Grunbaum-1967] B. GRUNBAUM (1967), Convex Polytopes, Wiley, New York. [Guibas et al. 1989] L.J. GUIBAS, D. SALESIN AND J. STOLFI (1989), Epsilon geometry: building robust algorithms from imprecise computations, Proc. 5th ACM Sympos. Comput. Geom., 208-217. [Guibas et al. 1992] L.J. GUIBAS, D.E. KNUTH AND M. SHARIR (1992), Randomized incremental construction of Delaunay and Voronoi diagrams, Algorithmica 7, 381-413. [Hecht,Saltel-1989] F. HECHT ET E. SALTEL (1990), Emc2 : Un logiciel d'edition de maillages et de contours bidimensionnels, RT INRIA 118. [Hermeline-1980] F. HERMELINE (1980), Une methode automatique de maillage en dimension n, These, Universite Paris VI, Paris. [Ho Le-1988]
K. Ho LE (1988), Finite element mesh generation methods: a review and classification, Comp. Aided Design 20, 27-38.
[Holmes,Snyder-1988] D.G. HOLMES AND D.D. SNYDER (1988), The generation of unstructured triangular meshes using Delaunay triangulation, Numerical grid generation in computational fluid mechanics'88, Miami, 643-652. [Hosaka-1992]
M. HOSAKA (1992), Modeling of Curves and Surfaces in CAD/CAM, Springer-Verlag.
[Jacquotte-1992] O.P. JACQUOTTE AND G. COUSSEMENT (1992), Structured mesh adaptation: space accuracy and interpolation methods, Comput. Methods Appl. Mech. Engrg. 101, 397-432. [Joe-1989]
B. JOE (1989), Three-dimensional triangulations from local transformations, SIAM J. Sci. Stat. Comput. 10(4), 718-741.
[Joe-1991]
B. JOE (1991), Construction of three-dimensional Delaunay triangulations using local transformations, Comput. Aided Geom. Design 8, 123-142.
[Joe-1991a]
B. JOE (1991), Delaunay versus max-min solid angle triangulations for three-dimensional mesh generation, Int. j. numer. methods eng. 31(5), 987-997.
[Johnson-1995]
A. A. JOHNSON (1995), Mesh Generation and Update Strategies for Parallel Computation of Flow Problems with Moving Boundaries and Interfaces, Thesis lecture, U. of Minnesota at Minneapolis.
[Johnson,Hansbo-1992] C. JOHNSON AND P. HANSBO (1992), Adaptive finite element methods in computational mechanics, Comput. Methods Appl. Mech. Engrg. 101, 143-181. [Kela-1989]
A. KELA (1989), Hierarchical octree approximations for boundary representation-based geometric models, Comp. Aided Des. 21, 355-362.
[Klee-1966]
V. KLEE (1966), Convex polytopes and linear programming, Proc. IBM Sci. Comput. Symp.: Combinatorial Problems, 123-158.
[Klee-1980]
V. KLEE (1980), On the complexity of d-dimensional Voronoi diagrams, Archiv der Mathematik 34, 75-80.
[Klein-1989]
R. KLEIN (1989), Concrete and Abstract Voronoi diagrams, Lecture Notes in Computer Science, Vol 400, Springer Verlag.
[Klein et al. 1993] R. KLEIN, K. MEHLHORN AND S. MEISER (1993), Randomized incremental construction of abstract Voronoi diagrams, Comput. Geom. Theory Appl. 3(3), 157-184. [Kohli,Carey-1993] H.S. KOHLI AND G.F. GAREY (1993), Shape optimization using adaptive shape refinement, Int. j. numer. methods eng. 36(x), 2435-2451. [Ladeveze et al. 1991] P. LADEVEZE, J.P. PELLE AND P. ROUGEOT (1991), Error estimates and mesh optimization for finite element computation, Eng. comput. 8, 69-80. [Laug et al. 1996] P. LAUG, H. BOROUCHAKI ET P.L. GEORGE (1996), Maillage de courbes gouverne par une carte de metriques, RR INRIA 2818. [Lawson-1972]
C.L. LAWSON (1972), Generation of a triangular grid with application to contour plotting, California institute of Technology, JPL, 299.
[Lawson-1977]
C.L. LAWSON (1977), Software for C1 surface interpolation, Math. Soft., 3, J. Rice ed., Academic Press, New York.
[Lawson-1986]
C.L. LAWSON (1986), Properties of n-dimensional triangulations, Computer Aided Geometric Design 3 , 231-246.
[Lee-1980]
D.T. LEE (1980), Two-dimensional Voronoi diagrams in the Lp-metric, J. of the ACM 27(4), 604-618.
[Lee,Schachter-1980] D.T. LEE AND B.J. SCHACHTER (1980), Two algorithms for constructing a Delaunay Triangulation, Int. J. Comp. Inf. Sci. 9(3), 219-242. [Lee,Wong-1980] D.T. LEE AND C.K. WONG (1980), Voronoi diagrams in L1 (L-infinity) metrics with 2-dimensional storage applications, SIAM J. Comput. 9, 200-211. [Lee,Lin-1986]
D.T. LEE AND A.K. LIN (1986), Generalized Delaunay triangulation for planar graphs, Discrete Comput. Geom. 1, 201-217.
[Lelong,Arnaudies-1977] J. LELONG-FERRAND ET J.M. ARNAUDIES (1977), Cours de Mathematiques, Tome 3, Geometrie et cinematique, Dunod Universite, Bordas. [Leon-1991]
J.C. LEON (1991), Modelisation et construction des surfaces pour la CFAO, Editions Hermes, Paris.
[Lewis et al. 1996] R.W. LEWIS, Y. ZHENG AND D.T. GETHIN (1996), Three-dimensional unstructured mesh generation: Part 3. Volume meshes, Comput. Methods Appl. Mech. Engrg. 134, 285-310. [Lo-1985]
S.H. Lo (1985), A new mesh generation scheme for arbitrary planar domains, Int. j. numer. methods eng. 21, 1403-1426.
[Lo-1989]
S.H. Lo (1989), Delaunay triangulation of non-convex planar domains, Int. j. numer. methods eng. 28, 2695-2707.
[Lohner-1988]
R. LOHNER (1988), Some useful Data Structures for the Generation of unstructured grids, Commun. numer. methods eng. 4, 123-135.
[Lohner-1989]
R. LOHNER (1989), Adaptive remeshing for transient problems, Comput. Methods Appl. Mech. Engrg. 75, 195-214.
[Lohner-1995]
R. LOHNER (1995), Surface gridding from discrete data, in 4th International Meshing Roundtable, Albuquerque, NM, 29-45.
[Lohner-1996]
R. LOHNER (1996), Regridding Surface Triangulations, Jour. of Comput. Phys. 126, 1-10.
[Lorensen,Cline-1987] W.E. LORENSEN AND H.E. CLINE (1987), Marching cubes: a high resolution 3D surface construction algorithm, Comput. Graphics 21(4), 163-169. [Marcum-1996] D.L. MARCUM (1996), Unstructured grid generation components for complete systems, Vth Int. Conf. on Grid Generation in Comp. Field Simulations, Mississippi State, USA, 1-5 April. [Marcum,Weatherill-1995] D.L. MARCUM AND N.P. WEATHERILL (1995), Unstructured grid generation using iterative point insertion and local reconnection, AIAA Journal 33(9), 1619-1625. [Mavriplis-1992] D.J. MAVRIPLIS (1992), An advancing front Delaunay triangulation algorithm designed for robustness, ICASE report 92-49. [Meagher-1982] D. MEAGHER (1982), Geometric modeling using octree encoding, Comput. Graphics and Image Proc. 19, 129-147. [Mehlhorn et al. 1991] K. MEHLHORN, S. MEISER AND C. O'DUNLAING (1991), On the construction of abstract Voronoi diagrams, Discrete Comput. Geom. 6, 211-224. [Merriam-1991] M.L. MERRIAM (1991), An efficient advancing front algorithm for Delaunay triangulation, AIAA paper 91-0792. [Mitty et al. 1993] T.J. MITTY, A. JAMESON AND T.J. BAKER (1993), Solution of three-dimensional supersonic flowfields via adapting unstructured meshes, Computer Fluids 22(2/3), 271-283. [Mohammadi-1994] B. MOHAMMADI (1994), Fluid dynamics computation with NSC2KE - a User Guide, Release 1.0, RT INRIA 164. [Mohammadi,Pironneau-1994] B. MOHAMMADI AND O. PIRONNEAU (1994), Analysis of the K-Epsilon Turbulence Model, Wiley and Masson (Eds.). [Moller,Hansbo-1995] P. MOLLER AND P. HANSBO (1995), On advancing-front mesh generation in three dimensions, Int. j. numer. methods eng. 38, 3551-3569. [Muller et al. 1992] J.D. MULLER, P.L. ROE AND H. DECONINCK (1992), A frontal approach for node generation in Delaunay triangulations, Unstructured grid methods for advection dominated flows, VKI Lecture notes, pp. 91-97, AGARD Publication R-787. [Mulmuley-1993] K. MULMULEY (1993), Computational Geometry: An Introduction Through Randomized Algorithms, Prentice Hall, New York. [O'Rourke-1993] J. O'ROURKE (1993), Computational geometry in C, Cambridge University Press.
[Ouachtaoui-1997] R. OUACHTAOUI (1997), Intersection de maillages et transport de solutions d'un maillage a l'autre. RR INRIA, to appear. [Palmerio-1994] B. PALMERIO (1994), An attraction-repulsion mesh adaption model for flow solution on unstructured grid, Comp. and Fluids 23(3), 487-506. [Parthasarathy et al. 1993] V.N. PARTHASARATHY, C.M. GRAICHEN AND A.F. HATHAWAY (1993), A comparison of tetrahedron quality measures, Finite Elements in Analysis and Design 15, 255-261. [Peraire et al. 1987] J. PERAIRE, M. VAHDATI, K. MORGAN AND O.C. ZIENKIEWICZ (1987), Adaptive remeshing for compressible flow computations, Jour. of Comput. Phys. 72, 449-466. [Peraire,Morgan-1997] J. PERAIRE AND K. MORGAN (1997), Unstructured mesh generation including directional refinement for aerodynamic flow simulation, Finite Elements in Analysis and Design 25(3-4), 343-355. [Perronnet-1988a] A. PERRONNET (1988), Tetraedrisation d'un objet multimateriaux ou de l'exterieur d'un objet, Laboratoire d'analyse numerique 189, Universite Paris 6. [Perronnet-1988b] A. PERRONNET (1988), A generator of tetrahedral finite elements for multi-material object and fluids, Numerical grid generation in computational fluid mechanics '88, Miami, FL. [Perucchio et al. 1989] R. PERUCCHIO, M. SAXENA AND A. KELA (1989), Automatic mesh generation from solid models based on recursive spatial decompositions, Int. j. numer. methods eng. 28, 2469-2501. [Piegl,Richard-1995] L.A. PIEGL AND A.M. RICHARD (1995), Tesselation of trimmed NURBS surfaces, Comput. Aided Des. 7(1), 12-26. [Preparata,Hong-1977] F.P. PREPARATA AND S.J. HONG (1977), Convex hull of finite sets of points in two and three dimensions, Comm. of the ACM 20(2), 87-93. [Price et al. 1995] M.A. PRICE, C.G. ARMSTRONG AND M.A. SABIN (1995), Hexahedral mesh generation by medial surface subdivision; Part I. Solids with convex edges, Int. j. numer. methods eng. 38, 3335-3359. [Price,Armstrong-1997] M.A. PRICE AND C.G. ARMSTRONG (1997), Hexahedral mesh generation by medial surface subdivision; Part II. Solids with flat and concave edges, Int. j. numer. methods eng. 40, 111-136.
[Rajan-1994]
V.T. RAJAN (1994), Optimality of the Delaunay Triangulation in Rd, Discrete Comput. Geom. 12 , 189-202.
[Rank et al. 1993] E. RANK, M. SCHWEINGRUBER AND M. SOMMER (1993), Adaptive mesh generation and transformation of triangular to quadrilateral meshes, Commun. numer. methods eng. 9(2), 121-129. [Rassineux-1995] A. RASSINEUX (1995), Maillage automatique tridimensionnel par une methode frontale pour la methode des elements finis, These, Nancy I. [Rebay-1993]
S. REBAY (1993), Efficient unstructured mesh generation by means of Delaunay triangulation and Bowyer/Watson algorithm, J. of Comput. Physics 106, 125-138.
[Rippa-1990]
S. RIPPA (1990), Minimal roughness property of the Delaunay triangulation, Comput. Aided Geom. Design. 7, 489-497.
[Risler-1991]
J.J. RISLER (1991), Methodes mathematiques pour la C.A.O., RMA n° 18, Masson, Paris.
[Rivara-1984a]
M.C. RIVARA (1984), Mesh refinement processes based on the generalised bisection of simplices, SIAM J. of Num. Anal. 21, 604-613.
[Rivara-1984b]
M.C. RIVARA (1984), Algorithms for refining triangular grids suitable for adaptive and multigrid techniques, Int. j. numer. methods eng. 21, 745-756.
[Rivara-1986]
M.C. RIVARA (1986), Adaptive finite element refinement and fully irregular and conforming triangulations, in Accuracy Estimates and Adaptive Refinements in Finite Element Computations, I. Babuska et al., John Wiley and Sons.
[Rivara-1990]
M.C. RIVARA (1990), Selective refinement/derefinement algorithms for sequences of nested triangulations, Int. j. numer. methods eng. 28(12), 2889-2906.
[Ruppert,Seidel-1992] J. RUPPERT AND R. SEIDEL (1992), On the difficulty of triangulating three-dimensional nonconvex polyhedra, Discrete Comput. Geom. 7, 227-253. [Rypl,Krysl-1994] D. RYPL AND P. KRYSL (1994), Triangulation of 3-D surfaces, Tech. Report, Czech Tech. Univ., Prague. [Sapidis,Perucchio-1991] N. SAPIDIS AND R. PERUCCHIO (1991), Delaunay triangulation of arbitrarily shaped planar domains, Comput. Aided Geom. Des. 8, 421-437.
[Schonhart-1928] E. SCHONHART (1928), Uber die Zerlegung von Dreieckspolyedern, Mathematische Annalen 98, 309-312. [Seidel-1982]
R. SEIDEL (1982), The complexity of Voronoi diagrams in higher dimensions, Proc. 20th Ann. Allerton Conf. Commun., Control, Comput., 94-95.
[Seidel-1991]
R. SEIDEL (1991), Small-dimensional linear programming and convex hulls made easy, Discrete Comput. Geom. 6, 423-434.
[Shamos,Preparata-1985] F.P. PREPARATA AND M.I. SHAMOS (1985), Computational geometry, an introduction, Springer-Verlag. [Sheehy et al. 1996] D.S. SHEEHY, G.C. ARMSTRONG AND D.J. ROBINSON (1996), Shape description by medial surface construction, IEEE Trans. on Visualization and Comput. Graph. 2(1), 62-72. [Sheng,Hirsch-1992] X. SHENG AND B.E. HIRSCH (1992), Triangulation of trimmed surfaces in parametric space, Comput. Aided Des. 24(8), 437-444. [Shenton et al. 1985] D.N. SHENTON AND Z.J. CENDES (1985), Three-dimensional finite element mesh generation using Delaunay tesselation, IEEE Trans. Magnetics 21(6), 2535-2538. [Shephard,Georges-1991] M.S. SHEPHARD AND M.K. GEORGES (1991), Automatic three-dimensional mesh generation by the finite octree technique, Int. j. numer. methods eng. 32, 709-749. [Simon-1991]
H. SIMON (1991), Partitioning of unstructured problems for parallel processing, Comp. Systems in Eng. 2, 135-148.
[Simpson-1994] R.B. SIMPSON (1994), Anisotropic mesh transformation and optimal error control, Applied Num. Math. 4, 183-198. [Sloan-1987]
S.W. SLOAN (1987), A fast algorithm for constructing Delaunay triangulations in the plane, Adv. Eng. Soft. 9(1), 34-55.
[Sloan,Houlsby-1984] S.W. SLOAN AND G.T. HOULSBY (1984), An implementation of Watson's algorithm for computing 2-dimensional Delaunay triangulations, Adv. Eng. Soft. 6(4), 192-197. [Tacher,Parriaux-1996] L. TACHER AND A. PARRIAUX (1996), Automatic nodes generation in N-dimensional space, Commun. numer. methods eng. 12, 243-248. [Talon-1987a]
J.Y. TALON (1987), Algorithmes de generation et d'amelioration de maillages en 2D, Rapport Technique Artemis-Imag 20.
[Talon-1987b]
J.Y. TALON (1987), Algorithmes d'amelioration de maillages tetraedriques en 3 dimensions, Rapport Technique Artemis-Imag 21.
[Tam,Armstrong-1991] T.K. TAM AND G.C. ARMSTRONG (1991), 2D finite element mesh generation by medial axis subdivision, Adv. in Engrg. Soft. 13, 313-324. [Tarjan,Van Wyk-1988] R.E. TARJAN AND C.J. VAN WYK (1988), An O(nloglogn)-time algorithm for triangulating a simple polygon, SIAM J. Comput. 17, 143-178. [Vallet-1990]
M.G. VALLET (1990), Generation de maillages anisotropes adaptes - Application a la capture de couches limites, RR INRIA 1360.
[Verfurth-1996]
R. VERFURTH (1996), A review of a posteriori error estimation and adaptive refinement techniques, Wiley Teubner.
[Voronoi-1908]
G. VORONOI (1908), Nouvelles applications des parametres continus a la theorie des formes quadratiques. Recherches sur les parallelloedres primitifs. Journal Reine angew. Math. 134.
[Walton,Meek-1996] D.J. WALTON AND D.S. MEEK (1996), A triangular G1 patch from boundary curves, Comp. Aided Design 28, 113-123. [Watson-1981]
D.F. WATSON (1981), Computing the n-dimensional Delaunay Tesselation with applications to Voronoi polytopes, Computer Journal 24(2), 167-172.
[Weatherill-1985] N.P. WEATHERILL (1985), The generation of unstructured grids using Dirichlet tesselation, MAE report no. 1715, Princeton Univ. [Weatherill-1988] N.P. WEATHERILL (1988), A method for generating irregular computational grids in multiply connected planar domains, Int. J. Num. Meth. Fluids 8. [Weatherill-1988] N.P. WEATHERILL (1988), A strategy for the use of hybrid structured-unstructured meshes in CFD, Num. Meth. for Fluids Dynamics, Oxford University Press. [Weatherill-1990] N.P. WEATHERILL (1990), The integrity of geometrical boundaries in the 2-dimensional Delaunay triangulation, Commun. numer. methods eng. 6, 101-109. [Weatherill,Hassan-1994] N.P. WEATHERILL AND O. HASSAN (1994), Efficient three-dimensionnal Delaunay triangulation with automatic point creation and imposed boundary constraints, Int. j. numer. methods eng. 37, 2005-2039.
[Yao-1981]
A.C. YAO (1981), A lower bound to finding convex hulls, J. ACM 28, 780-787.
[Yerry,Shephard-1984] M.A. YERRY AND M.S. SHEPHARD (1984), Automatic three-dimensional mesh generation by the modified-octree technique, Int. j. numer. methods eng. 20, 1965-1990. [Zheng et al. 1996a] Y. ZHENG, R.W. LEWIS AND D.T. GETHIN (1996), Three-dimensional unstructured mesh generation: Part 1. Fundamental aspects of triangulation and point creation, Comput. Methods Appl. Mech. Engrg. 134, 249-268. [Zheng et al. 1996b] Y. ZHENG, R.W. LEWIS AND D.T. GETHIN (1996), Three-dimensional unstructured mesh generation: Part 2. Surface meshes, Comput. Methods Appl. Mech. Engrg. 134, 269-284.
Index
a posteriori partitioning, 370 a priori partitioning, 371 adaptation, 243 adjacency graph, 16 advancing-front method, 137, 150, 200, 204
algebraic method, 369 arbitrary position (points in), 13 aspect ratio, 8 assembly, 281 B ball, 23, 91 base, 23 Bernstein polynomial, 305 Bezier curve, 305 boundary condition, 281
Catalan number, 224 cavity, 24, 52 circumcircle, 6 circumsphere, 10 collision, 31, 62 compatible mesh, 246 complexity, 50 conformal triangulation, 14 connectivity, 244 connectivity matrix, 16, 59 constrained triangulation, 74 constrained Delaunay triangulation, 85
constrained triangulation, 20 control space, 138 (ideal) control space, 138 convex hull, 68 covering, 14 curvature, 167 cylindrical mesh, 187 D data structure, 277 Dehn-Sommerville (formula), 14 Delaunay admissible edge, 77 Delaunay admissible face, 74 Delaunay kernel, 41 Delaunay lemma, 38 Delaunay measure, 45, 123 Delaunay triangulation, 18 dihedral angle, 222 divide and conquer, 47 dynamic coloring, 26 dynamic constraint, 110 E edge swapping, 46, 151, 223 empty mesh, 132, 196 empty sphere criterion, 18 equivalent triangulation, 75 Euler formula, 14
filtration, 142, 147, 202, 205 finite differences, 139 finite element, 279 first fundamental form, 164
firtree, 224
G general pipe, 24, 25, 87, 92 general position (points in), 13 geodesic, 115 geometric attribute, 283 Gregory patch, 315 H hashing, 30 HCT finite element, 316 h-method, 245 hp-method, 247
ideal control space, 138 incircle, 7 incremental method, 41 inductive Delaunay triangulation, 374 inductive Delaunay triangulation, 371 infinite precision, 55 inheritance, 58, 128 inscribed circle, 7 insphere, 11 K kernel, 89, 230 kernel of a polygon, 44
Laplacian smoothing, 231 lazy triangulation, 100 little circle, 109 local base, 164 localization search, 70 local principal base, 168 M medial axis, 160, 367
medial surface, 213, 367 mesh, 20, 280, 281 mesh intersection, 261 metric, 115 metric simultaneous reduction, 322 metric interpolation, 253 metric intersection, 116, 255 multiple metric, 116 N node, 281 non obtuse mesh, 242 O octree, 62, 192, 200 optimization, 215 orthogonal mesh, 243 osculating circle, 170 over-connected edge, 230 over-connected point, 229 over-constrained edge, 242 over-constrained element, 242 P P1 -approximation, 278 P2-approximation, 283 parallel computing, 370 pebble, 96 periodic mesh, 159, 187, 213, 296 physical attribute, 282 pipe, 24, 87, 92 p-method, 245 polytope, 13 principal directions, 167
Q quadtree, 62, 137, 139 quality, 8, 22 R randomization, 57, 142 reduced incremental method, 45
reflex edge, 112 regular tetrahedron, 12 r-method, 244
S Schonhart polyhedron, 89 second fundamental form, 167 seed, 135 separating power, 54 separation power, 57 shape quality, 216 shell, 25, 90 simple polygon, 83 simplex, 13 simultaneous reduction of two quadratic forms, 117 size quality, 217 skeleton, 160, 214, 367 solid angle, 222 star-shaped polygon, 44 star-shaped polyhedron, 44 Steiner point, 89, 196, 202 stereographic projection, 187 surface roughness, 381 sweeping algorithm, 49
T target value, 22, 209 tetrahedron, 9 regular tetrahedron, 12 triangle, 5 triangulation, 14, 281
U under-connected edge, 230 under-connected point, 229
V validity of a mesh, 22 validity of a triangulation, 17 variational formulation, 279 variogram, 137 vertex valence, 229 Voronoi cells, 371 Voronoi diagram, 35 voyeur, 16
W weakly equivalent triangulation, 76
CET OUVRAGE A ETE COMPOSE PAR LES EDITIONS HERMES, REPRODUIT ET ACHEVE D'IMPRIMER PAR L'IMPRIMERIE FLOCH A MAYENNE EN MAI 1998.
DEPOT LEGAL : MAI 1998. N° D'IMPRIMEUR : 43716.
Imprime en France