3. Construction of an optimal mesh#

3.1. Generalities#

The objective of an adaptation procedure is to guarantee the user a level of precision on the overall error while minimizing calculation costs. To assess discretization errors, we use a relative global measure of error \(\varepsilon\) and the associated local contributions \({\varepsilon }_{E}\) with:

\({\varepsilon }^{2}=\sum _{E}{{\varepsilon }_{E}}^{2}\) eq 3.1-1

The idea is to use the results of this first finite element analysis and the error estimators to determine an optimal mesh \({T}^{\ast }\), that is, a mesh that achieves the desired precision while minimizing calculation costs. The mesh \({T}^{\ast }\) is then constructed with an automatic mesh generator and a second finite element analysis is carried out.

3.2. Definition of optimality#

For a given global error \({\varepsilon }_{0}\), a \({T}^{\ast }\) mesh is optimal with respect to an error measure \(\varepsilon\) if:

\(\begin{array}{cc}{\varepsilon }^{\ast }={\varepsilon }_{0}& \text{required precision}\\ {N}^{\ast }& \text{the number of elements of }{T}^{\ast }\text{ is minimal}\end{array}\) eq 3.2-1

This optimization criterion naturally leads to the minimization of calculation costs.

3.3. Determination of an optimal mesh#

To determine the characteristics of the optimal mesh \({T}^{\ast }\), the method consists in calculating on each element \(E\) of the mesh \(T\) a size modification coefficient:

\({r}_{E}=\frac{{h}_{E}^{\ast }}{{h}_{E}}\) eq 3.3-1

Where \({h}_{E}\) is the current size of element \(E\) and \({h}_{E}^{\ast }\) is the (unknown) size that should be imposed on the elements of \({T}^{\ast }\) in the region of \(E\) to ensure optimality [Figure 3.3-a]. A possible choice for the size \({h}_{E}\) of an element is the length of its largest side. The determination of the optimal mesh thus reduces to the determination, on the initial mesh \(T\), of a map of size modification coefficients.

_images/10000B3400001987000018E8342BCBC933D31FCE.svg

Figure 3.3-a: Defining sizes (an element \(E\subset T\) and the corresponding elements \({E}^{\ast }\subset {T}^{\ast }\))

The calculation of the coefficients \({r}_{E}\) is based on the convergence rate of the error:

\(\varepsilon =O({h}^{q})\) eq 3.3-2

where \(q\) depends on the type of finite element used but also on the regularity of the exact solution of the problem being treated. For « classical » error estimators, the convergence rate of the error estimator is assumed to be equal to the order of convergence of the finite element solution. For estimators in terms of quantities of interest, this convergence rate is equal to twice the order of convergence of the finite element solution ([bib4]).

Subsequently, to calculate the size modification coefficient \({r}_{E}\), a distinction is made between the case of a regular solution (\(q\) depends only on \(p\), the interpolation degree of the shape functions of the element) and the case of a singular solution (\(q\) depends only on \(\alpha\), the order of the singularity of the displacement field).

3.4. « Classic » error estimators#

« Classic » error estimators refer to estimators that provide a norm (L2 norm, H1 norm, energy norm) of the error on the solution.

3.4.1. Case of the regular solution#

First, we assume that the exact solution is regular enough that the value of \(q\) depends only on the type of finite elements used and is equal to the degree of interpolation used \(p\) (\(p\) is 1 for linear finite elements and 2 for quadratic finite elements). In this case, to predict the optimal sizes, we write that the size ratio is related to the ratio of the contributions to the error by:

\(\frac{{\varepsilon }_{E}^{\ast }}{{\varepsilon }_{E}}={\left[\frac{{h}_{E}^{\ast }}{{h}_{E}}\right]}^{p}={r}_{E}^{p}\) eq 3.4.1-1

where \({\varepsilon }_{E}^{\ast }\) represents the contribution of the elements of \({T}^{\ast }\) located in zone \(E\), i.e.:

\({\varepsilon }_{E}^{\ast }={\left[\sum _{{E}^{\ast }\subset E}{\varepsilon }_{{E}^{\ast }}^{2}\right]}^{1/2}\) eq 3.4.1-2

\({\varepsilon }_{{E}^{\ast }}\) is the error for element \({E}^{\ast }\) calculated on mesh \(T\).

The square of the error on mesh \({T}^{\ast }\) can therefore be evaluated by:

\(\sum _{E}{({\varepsilon }_{E}^{\ast })}^{2}=\sum _{E}{r}_{E}^{2p}{\varepsilon }_{E}^{2}\) eq 3.4.1-3

and the number of elements of \({T}^{\ast }\) by:

\({N}^{\ast }=\sum _{E}\frac{1}{{r}_{E}^{d}}\) eq 3.4.1-4

Where \(d\) is the dimension of space (in practice, \(d=2\) or \(3\)).

Indeed, \({r}_{E}=\frac{{h}_{E}^{\ast }}{{h}_{E}}={(\frac{V}{{N}_{E}^{\ast }})}^{1/d}{(\frac{N}{V})}^{1/d}\), with \(V\) the volume of \(E\), \(N\) the number of elements of \(T\) in \(E\) (therefore \(N=1\)) and \({N}_{E}^{\ast }\) the number of elements of \({T}^{\ast }\) in the region of \(E\). We therefore have \({N}_{E}^{\ast }=\frac{1}{{r}_{E}^{d}}\), hence \({N}^{\ast }=\sum _{E}{N}_{E}^{\ast }=\sum _{E}\frac{1}{{r}_{E}^{d}}\) for the total number of elements of \({T}^{\ast }\).

So the problem to be solved is:

\(\text{Minimize }{N}^{\ast }=\sum _{E}\frac{1}{{r}_{E}^{d}}\text{ subject to }\sum _{E}{r}_{E}^{2p}{\varepsilon }_{E}^{2}={\varepsilon }_{0}^{2}\) eq 3.4.1-5

This is an optimization problem with a constraint on optimization variables.

By introducing a Lagrange multiplier, denoted \(A\), problem [éq 3.4.1-5] amounts to finding an extremum of the Lagrangian:

\(L({\left\{{r}_{E}\right\}}_{E\in T}\text{;}A)=\sum _{E}\frac{1}{{r}_{E}^{d}}+A(\sum _{E}{r}_{E}^{2p}{\varepsilon }_{E}^{2}-{\varepsilon }_{0}^{2})\) eq 3.4.1-6

The stationarity conditions give:

\(\frac{\partial L}{\partial {r}_{E}}=\frac{-d}{{r}_{E}^{d+1}}+2Ap{\varepsilon }_{E}^{2}{r}_{E}^{2p-1}=0\quad \forall E\in T\) eq 3.4.1-7

From where:

\({r}_{E}={\left[\frac{d}{2Ap{\varepsilon }_{E}^{2}}\right]}^{1/(2p+d)}\) eq 3.4.1-8

By transferring to the second equation of [éq 3.4.1-5], we deduce \(A\):

\(A=\frac{d}{2p}{\left[\frac{\sum _{E}{\varepsilon }_{E}^{2d/(2p+d)}}{{\varepsilon }_{0}^{2}}\right]}^{(2p+d)/2p}\) eq 3.4.1-9

We replace the expression for \(A\) thus obtained in equation [éq 3.4.1-8] to obtain \({r}_{E}\):

\({r}_{E}=\frac{{\varepsilon }_{0}^{1/p}}{{\varepsilon }_{E}^{2/(2p+d)}{\left[\sum _{E}{\varepsilon }_{E}^{2d/(2p+d)}\right]}^{1/2p}}\) eq 3.4.1-10
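As an illustration (not part of the source), the size map of [éq 3.4.1-10] can be evaluated directly once the local contributions \({\varepsilon }_{E}\) are known. The following sketch, in Python/NumPy with hypothetical element errors, computes \({r}_{E}\) and checks that the constraint of [éq 3.4.1-5] is recovered:

```python
import numpy as np

def size_ratios_regular(eps_elem, eps0, p=1, d=2):
    """Size modification coefficients r_E of eq 3.4.1-10 (regular case).

    eps_elem : local error contributions eps_E on the mesh T
    eps0     : prescribed global error for the optimal mesh T*
    p        : interpolation degree, d : space dimension
    """
    eps_elem = np.asarray(eps_elem, dtype=float)
    # S = sum of eps_E^{2d/(2p+d)}, the sum appearing in eq 3.4.1-10
    S = np.sum(eps_elem ** (2.0 * d / (2 * p + d)))
    return eps0 ** (1.0 / p) / (eps_elem ** (2.0 / (2 * p + d)) * S ** (1.0 / (2 * p)))

# Hypothetical local errors on a 4-element 2D linear mesh, target eps0
eps_elem = np.array([0.05, 0.02, 0.01, 0.04])
eps0 = 0.03
r = size_ratios_regular(eps_elem, eps0, p=1, d=2)

# Constraint of eq 3.4.1-5 recovered: sum r_E^{2p} eps_E^2 = eps0^2
assert np.isclose(np.sum(r**2 * eps_elem**2), eps0**2)

# Predicted number of elements of T* (eq 3.4.1-4)
N_star = np.sum(1.0 / r**2)
```

Elements with large contributions get \({r}_{E}<1\) (refinement) while elements with small contributions get \({r}_{E}>1\) (coarsening), so the prescribed error is spread evenly for a minimal element count.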

3.4.2. Case of singular areas#

To predict the optimal sizes, we use a convergence rate \({q}_{E}\) defined by element:

\(\frac{{\varepsilon }_{E}^{\ast }}{{\varepsilon }_{E}}={\left[\frac{{h}_{E}^{\ast }}{{h}_{E}}\right]}^{{q}_{E}}={r}_{E}^{{q}_{E}}\) eq 3.4.2-1

where \({\varepsilon }_{E}^{\ast }\) represents the contribution of the elements of \({T}^{\ast }\) located in zone \(E\), i.e.:

\({\varepsilon }_{E}^{\ast }={\left[\sum _{{E}^{\ast }\subset E}{\varepsilon }_{{E}^{\ast }}^{2}\right]}^{1/2}\) eq 3.4.2-2

The square of the error on mesh \({T}^{\ast }\) can therefore be evaluated by:

\(\sum _{E}{({\varepsilon }_{E}^{\ast })}^{2}=\sum _{E}{r}_{E}^{2{q}_{E}}{\varepsilon }_{E}^{2}\) eq 3.4.2-3

and the number of elements of \({T}^{\ast }\) is still evaluated by:

\({N}^{\ast }=\sum _{E}\frac{1}{{r}_{E}^{d}}\) eq 3.4.2-4

So the new problem to be solved is:

\(\text{Minimize }{N}^{\ast }=\sum _{E}\frac{1}{{r}_{E}^{d}}\text{ subject to }\sum _{E}{r}_{E}^{2{q}_{E}}{\varepsilon }_{E}^{2}={\varepsilon }_{0}^{2}\) eq 3.4.2-5

which is an optimization problem with a constraint on optimization variables.

By introducing a Lagrange multiplier, denoted \(A\), the problem amounts to finding an extremum of the Lagrangian:

\(L({\left\{{r}_{E}\right\}}_{E\in T}\text{;}A)=\sum _{E}\frac{1}{{r}_{E}^{d}}+A(\sum _{E}{r}_{E}^{2{q}_{E}}{\varepsilon }_{E}^{2}-{\varepsilon }_{0}^{2})\) eq 3.4.2-6

The stationarity conditions give:

\(\frac{\partial L}{\partial {r}_{E}}=-\frac{d}{{r}_{E}^{d+1}}+2A{q}_{E}{\varepsilon }_{E}^{2}{r}_{E}^{2{q}_{E}-1}=0\quad \forall E\in T\) eq 3.4.2-7

From where:

\({r}_{E}={\left[\frac{d}{2A{q}_{E}{\varepsilon }_{E}^{2}}\right]}^{1/(2{q}_{E}+d)}\) eq 3.4.2-8

By transferring to the second equation of [éq 3.4.2-5], we get a non-linear equation in \(A\) (because \({q}_{E}\) depends on the elements):

\(\sum _{E}\left[{\left[\frac{d}{2A{q}_{E}}\right]}^{2{q}_{E}/(2{q}_{E}+d)}{\varepsilon }_{E}^{2d/(2{q}_{E}+d)}\right]-{\varepsilon }_{0}^{2}=0\) eq 3.4.2-9

It is solved by Newton’s method; the Lagrange multiplier is initialized with its value for the regular case, i.e. the solution [éq 3.4.1-9] of equation [éq 3.4.2-9] with \({q}_{E}=p\). Once \(A\) is calculated, \({r}_{E}\) is deduced from it by equation [éq 3.4.2-8].
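A minimal sketch of this Newton iteration (an assumption of this note, not the source implementation; Python/NumPy, with made-up local errors and one singular element) could read:

```python
import numpy as np

def size_ratios_singular(eps_elem, q_elem, eps0, p=1, d=2, tol=1e-12, itmax=50):
    """Newton solve of the scalar equation eq 3.4.2-9 for the multiplier A,
    then size ratios r_E from eq 3.4.2-8 (classical estimators, singular case)."""
    eps_elem = np.asarray(eps_elem, dtype=float)
    q = np.asarray(q_elem, dtype=float)

    a = 2.0 * q / (2.0 * q + d)   # exponent of d/(2*A*q_E) in eq 3.4.2-9
    b = 2.0 * d / (2.0 * q + d)   # exponent of eps_E in eq 3.4.2-9

    def F(A):
        return np.sum((d / (2.0 * A * q)) ** a * eps_elem ** b) - eps0 ** 2

    def dF(A):
        # d/dA of (d/(2*A*q))^a = -(a/A) * (d/(2*A*q))^a
        return np.sum(-a / A * (d / (2.0 * A * q)) ** a * eps_elem ** b)

    # Initialization: multiplier of the regular case, eq 3.4.1-9 (q_E = p)
    S = np.sum(eps_elem ** (2.0 * d / (2 * p + d)))
    A = d / (2.0 * p) * (S / eps0 ** 2) ** ((2 * p + d) / (2.0 * p))

    for _ in range(itmax):
        step = F(A) / dF(A)
        A -= step
        if abs(step) < tol * abs(A):
            break

    # r_E from eq 3.4.2-8
    r = (d / (2.0 * A * q * eps_elem ** 2)) ** (1.0 / (2.0 * q + d))
    return A, r

# Hypothetical data: one singular element (q_E = 0.5) among regular ones (q_E = p = 1)
eps_elem = [0.05, 0.02, 0.01, 0.04]
q_elem = [0.5, 1.0, 1.0, 1.0]
A, r = size_ratios_singular(eps_elem, q_elem, eps0=0.03, p=1, d=2)

# Constraint of eq 3.4.2-5 recovered: sum r_E^{2 q_E} eps_E^2 = eps0^2
```

Since \(F(A)\) is a sum of decreasing convex power functions of \(A\), the Newton iteration started from the regular-case multiplier converges monotonically in practice.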

3.5. Error estimators in quantities of interest#

Error estimators in quantities of interest refer to estimators that provide an error on a specific physical quantity (quantity of interest) over a selected area.

3.5.1. Case of the regular solution#

In the case of estimators in terms of quantity of interest, the value of \(q\) is equal to \(2p\) [bib4] (\(p\) is equal to 1 for linear finite elements and 2 for quadratic finite elements). To predict the optimal sizes, we write that the size ratio is related to the ratio of the contributions to the error by:

\(\frac{{\varepsilon }_{E}^{\ast }}{{\varepsilon }_{E}}={\left[\frac{{h}_{E}^{\ast }}{{h}_{E}}\right]}^{2p}={r}_{E}^{2p}\) eq 3.5.1-1

where \({\varepsilon }_{E}^{\ast }\) represents the contribution of the elements of \({T}^{\ast }\) located in zone \(E\), i.e.:

\({\varepsilon }_{E}^{\ast }=\sum _{{E}^{\ast }\subset E}{\varepsilon }_{{E}^{\ast }}\) eq 3.5.1-2

\({\varepsilon }_{{E}^{\ast }}\) is the error for element \({E}^{\ast }\) calculated on mesh \(T\).

The error on mesh \({T}^{\ast }\) can therefore be evaluated by:

\(\sum _{E}{\varepsilon }_{E}^{\ast }=\sum _{E}{r}_{E}^{2p}{\varepsilon }_{E}\) eq 3.5.1-3

and the number of elements of \({T}^{\ast }\) by:

\({N}^{\ast }=\sum _{E}\frac{1}{{r}_{E}^{d}}\) eq 3.5.1-4

Where \(d\) is the dimension of space (in practice, \(d=2\) or \(3\)).

So the problem to be solved is:

\(\text{Minimize }{N}^{\ast }=\sum _{E}\frac{1}{{r}_{E}^{d}}\text{ subject to }\sum _{E}{r}_{E}^{2p}{\varepsilon }_{E}={\varepsilon }_{0}\) eq 3.5.1-5

Again, this is an optimization problem with a constraint on optimization variables.

By introducing a Lagrange multiplier, denoted \(A\), problem [éq 3.5.1-5] amounts to finding an extremum of the Lagrangian:

\(L({\left\{{r}_{E}\right\}}_{E\in T}\text{;}A)=\sum _{E}\frac{1}{{r}_{E}^{d}}+A(\sum _{E}{r}_{E}^{2p}{\varepsilon }_{E}-{\varepsilon }_{0})\) eq 3.5.1-6

The stationarity conditions give:

\(\frac{\partial L}{\partial {r}_{E}}=-\frac{d}{{r}_{E}^{d+1}}+2Ap{\varepsilon }_{E}{r}_{E}^{2p-1}=0\quad \forall E\in T\) eq 3.5.1-7

From where:

\({r}_{E}={\left[\frac{d}{2Ap{\varepsilon }_{E}}\right]}^{1/(2p+d)}\) eq 3.5.1-8

By transferring to the second equation of [éq 3.5.1-5], we deduce \(A\):

\(A=\frac{d}{2p}{\left[\frac{\sum _{E}{\varepsilon }_{E}^{d/(2p+d)}}{{\varepsilon }_{0}}\right]}^{(2p+d)/2p}\) eq 3.5.1-9

We replace the expression for \(A\) thus obtained in equation [éq 3.5.1-8] to obtain \({r}_{E}\):

\({r}_{E}=\frac{{\varepsilon }_{0}^{1/2p}}{{\varepsilon }_{E}^{1/(2p+d)}{\left[\sum _{E}{\varepsilon }_{E}^{d/(2p+d)}\right]}^{1/2p}}\) eq 3.5.1-10
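As with the classical estimators, [éq 3.5.1-10] can be evaluated directly; the only differences are that the contributions \({\varepsilon }_{E}\) enter linearly (not squared) and the convergence rate is \(2p\). A sketch (not from the source; Python/NumPy with hypothetical quantity-of-interest contributions):

```python
import numpy as np

def size_ratios_qoi_regular(eps_elem, eps0, p=1, d=2):
    """Size ratios r_E of eq 3.5.1-10 (quantity-of-interest estimator,
    regular case): local contributions eps_E enter linearly and the
    convergence rate is 2p instead of p."""
    eps_elem = np.asarray(eps_elem, dtype=float)
    # S = sum of eps_E^{d/(2p+d)}, the sum appearing in eq 3.5.1-10
    S = np.sum(eps_elem ** (d / (2.0 * p + d)))
    return eps0 ** (1.0 / (2 * p)) / (eps_elem ** (1.0 / (2 * p + d)) * S ** (1.0 / (2 * p)))

# Hypothetical local contributions on a quadratic 3D mesh, target error eps0
eps_elem = np.array([4e-4, 1e-4, 2e-4])
eps0 = 2e-4
r = size_ratios_qoi_regular(eps_elem, eps0, p=2, d=3)

# Constraint of eq 3.5.1-5 recovered: sum r_E^{2p} eps_E = eps0
assert np.isclose(np.sum(r**4 * eps_elem), eps0)
```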

3.5.2. Case of singular areas#

To predict the optimal sizes, we now impose:

\(\frac{{\varepsilon }_{E}^{\ast }}{{\varepsilon }_{E}}={\left[\frac{{h}_{E}^{\ast }}{{h}_{E}}\right]}^{2{q}_{E}}={r}_{E}^{2{q}_{E}}\) eq 3.5.2-1

where \({\varepsilon }_{E}^{\ast }\) represents the contribution of the elements of \({T}^{\ast }\) located in zone \(E\), i.e.:

\({\varepsilon }_{E}^{\ast }=\sum _{{E}^{\ast }\subset E}{\varepsilon }_{{E}^{\ast }}\) eq 3.5.2-2

The error on mesh \({T}^{\ast }\) can therefore be evaluated by:

\(\sum _{E}{\varepsilon }_{E}^{\ast }=\sum _{E}{r}_{E}^{2{q}_{E}}{\varepsilon }_{E}\) eq 3.5.2-3

and the number of elements of \({T}^{\ast }\) is still evaluated by:

\({N}^{\ast }=\sum _{E}\frac{1}{{r}_{E}^{d}}\) eq 3.5.2-4

So the new problem to be solved is:

\(\text{Minimize }{N}^{\ast }=\sum _{E}\frac{1}{{r}_{E}^{d}}\text{ subject to }\sum _{E}{r}_{E}^{2{q}_{E}}{\varepsilon }_{E}={\varepsilon }_{0}\) eq 3.5.2-5

which is an optimization problem with a constraint on optimization variables.

By introducing a Lagrange multiplier, denoted \(A\), the problem amounts to finding an extremum of the Lagrangian:

\(L({\left\{{r}_{E}\right\}}_{E\in T}\text{;}A)=\sum _{E}\frac{1}{{r}_{E}^{d}}+A(\sum _{E}{r}_{E}^{2{q}_{E}}{\varepsilon }_{E}-{\varepsilon }_{0})\) eq 3.5.2-6

The stationarity conditions give:

\(\frac{\partial L}{\partial {r}_{E}}=-\frac{d}{{r}_{E}^{d+1}}+2A{q}_{E}{\varepsilon }_{E}{r}_{E}^{2{q}_{E}-1}=0\quad \forall E\in T\) eq 3.5.2-7

From where:

\({r}_{E}={\left[\frac{d}{2A{q}_{E}{\varepsilon }_{E}}\right]}^{1/(2{q}_{E}+d)}\) eq 3.5.2-8

By transferring to the second equation of [éq 3.5.2-5], we get a non-linear equation in \(A\):

\(\sum _{E}\left[{\left[\frac{d}{2A{q}_{E}}\right]}^{2{q}_{E}/(2{q}_{E}+d)}{\varepsilon }_{E}^{d/(2{q}_{E}+d)}\right]-{\varepsilon }_{0}=0\) eq 3.5.2-9

It is solved by Newton’s method; the Lagrange multiplier is initialized with its value for the regular case, i.e. the solution [éq 3.5.1-9] of equation [éq 3.5.2-9] with \({q}_{E}=p\). Once \(A\) is calculated, \({r}_{E}\) is deduced from it by equation [éq 3.5.2-8].