6. Matrix system resolution#

We therefore obtain a linear system to be solved:

(6.1)#\[\langle \tilde{u}\rangle \cdot [A]\cdot \left\{u\right\}-\langle \tilde{u}\rangle \cdot \left\{L\right\}=0\]

Since this relation must hold for any virtual displacement field \(\langle \tilde{u}\rangle\), it follows that:

(6.2)#\[[A]\cdot \left\{u\right\}=\left\{L\right\}\]

6.1. Imposition of kinematic boundary conditions#

Kinematic boundary conditions of the \(u={u}^{D}\) type are treated in two different ways:

  1. The "kinematic" method (AFFE_CHAR_CINE in Code_Aster) consists in modifying the matrix and the right-hand side directly. This method is fast and introduces no additional unknowns. On the other hand, it is not general and does not allow complex boundary conditions, such as linear relations \(\sum {a}_{i}\cdot {u}_{i}={u}^{D}\), to be applied.

  2. The dualization method (AFFE_CHAR_MECA in Code_Aster) consists in introducing a vector of Lagrange multipliers \(\lambda\), which increases the number of unknowns but makes it possible to treat all cases (both approaches are sketched after the equation below):

(6.3)#\[\begin{split}\left\{\begin{array}{l}[A]\cdot \left\{u\right\}+{[Q]}^{T}\cdot \left\{\lambda \right\}=\left\{L\right\}\\ [Q]\cdot \left\{u\right\}=\left\{{u}^{D}\right\}\end{array}\right.\end{split}\]
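To make the two approaches concrete, here is a minimal NumPy sketch (not Code_Aster's actual implementation; the matrix and values are arbitrary) that imposes \({u}_{0}={u}^{D}\) on a toy \(3\times 3\) symmetric system, first by modifying the matrix and right-hand side, then by dualization with a Lagrange multiplier:

```python
import numpy as np

# Toy 3x3 symmetric system [A]{u} = {L}; we impose u_0 = uD.
# All names and values here are illustrative.
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
L = np.array([1.0, 2.0, 3.0])
uD = 0.5

# 1) "Kinematic" method: modify the matrix and right-hand side.
#    Shift the contribution of the known DOF to the right-hand side,
#    then replace the first equation by u_0 = uD.
A1, L1 = A.copy(), L.copy()
L1 -= A1[:, 0] * uD
A1[0, :] = 0.0
A1[:, 0] = 0.0
A1[0, 0] = 1.0
L1[0] = uD
u_kin = np.linalg.solve(A1, L1)

# 2) Dualization: augment the system with a Lagrange multiplier,
#    enforcing [Q]{u} = {u^D} with Q = [1, 0, 0] (equation 6.3).
Q = np.array([[1.0, 0.0, 0.0]])
K = np.block([[A, Q.T],
              [Q, np.zeros((1, 1))]])
rhs = np.concatenate([L, [uD]])
u_dual = np.linalg.solve(K, rhs)[:3]

print(u_kin)   # both methods give the same displacement field
print(u_dual)
```

Both variants yield the same displacement field; the dualized matrix, however, is larger and indefinite, which matters for the choice of solver.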

6.2. Resolution#

The linear system can be solved by a number of numerical methods. The methods used in Code_Aster are a block \({\mathit{LDL}}^{T}\) factorization, a multifrontal method (or its equivalent with pivoting, MUMPS), a preconditioned conjugate gradient, and the PETSc collection of iterative solvers.

Resolution methods fall into two categories:

  • Direct methods that solve the system exactly (up to rounding errors)

  • Iterative methods that build a sequence of vectors converging towards the solution

The matrices resulting from the finite element method are very sparse: they contain a large majority of zero terms. In practice, on standard-sized systems (a few tens of thousands of equations), the density of non-zero terms rarely exceeds 0.01%. They are therefore stored in sparse form and take up little space in memory. On the other hand, matrices stored this way cannot be used efficiently with optimized mathematical libraries dedicated to dense matrices (BLAS, for example). Solvers are therefore developed specifically for these problems.
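As an illustration, here is a small SciPy sketch (the tridiagonal pattern and size are arbitrary) that stores a matrix of 100 000 equations in CSR ("compressed sparse row") format and measures its density and memory footprint:

```python
import scipy.sparse as sp

# Illustrative tridiagonal matrix with 100 000 equations, stored in
# CSR format; dense float64 storage of the same matrix would require
# about 80 GB (1e10 terms x 8 bytes).
n = 100_000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")

density = A.nnz / (n * n)
memory_mb = (A.data.nbytes + A.indices.nbytes + A.indptr.nbytes) / 1e6

print(f"non-zero terms: {A.nnz}")     # about 3 per row
print(f"density: {density:.2e}")      # ~3e-5, i.e. 0.003 %
print(f"sparse storage: {memory_mb:.1f} MB")
```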

The principle of a direct solver is to decompose the matrix into a product of matrices with a particular structure. For example, the \({\mathit{LDL}}^{T}\) decomposition:

(6.4)#\[[A]=[L]\cdot [D]\cdot {[L]}^{T}\]

Where matrix \(D\) is diagonal and matrix \(L\) is lower triangular with a unit diagonal. This decomposition is only valid for symmetric matrices; if the matrix is not symmetric, other decompositions (such as \(LU\)) must be used.

The principle is as follows:

  • Starting from the initial (very sparse) matrix, a product of remarkable matrices is constructed. This is the *factorization* operation.

  • These remarkable matrices then make it possible to solve the system very quickly. This is the *descent-ascent* phase (forward and backward substitution), sketched after this list.
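The following SciPy sketch illustrates the two phases on a small symmetric matrix. It is a toy stand-in: a real solver works on a sparse factored matrix and must also handle pivoting, which is ignored here because this matrix needs none (the permutation returned by `ldl` is the identity and \(D\) is purely diagonal):

```python
import numpy as np
from scipy.linalg import ldl, solve_triangular

# Small symmetric stand-in for [A] and its right-hand side {L}.
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([1.0, 2.0, 3.0])

# Factorization A = L D L^T: the expensive step, done once.
Lmat, D, perm = ldl(A)

# Descent (forward substitution): solve L y = b.
y = solve_triangular(Lmat, b, lower=True, unit_diagonal=True)
# Diagonal solve: D z = y (D is purely diagonal for this matrix).
z = y / np.diag(D)
# Ascent (backward substitution): solve L^T u = z.
u = solve_triangular(Lmat.T, z, lower=False, unit_diagonal=True)

print(np.allclose(A @ u, b))   # True: descent-ascent solves [A]{u} = {b}
```

Once the factors are available, each new right-hand side costs only one descent-ascent, which is far cheaper than the factorization itself.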

The factorization phase is the most expensive. For the most common decompositions, the computational cost grows as \({n}^{3}\), where \(n\) is the number of equations. The memory cost depends on the profile of the matrix, that is, on the numbering of the unknowns. Automatic renumbering algorithms seek to optimize this numbering in order to obtain a structure that is as compact as possible (an example is given below). Even with this optimization, it is common for the factored matrix to take several hundred, or even several thousand, times more memory than the initial matrix, because factorization fills in many of the zero terms. Direct solvers therefore consume a lot of memory, and this becomes prohibitive beyond several hundred thousand degrees of freedom, even on the most powerful machines. On the other hand, these direct methods are particularly robust. Problems in structural and solid mechanics often lead to poorly conditioned matrices (this is particularly the case with recent numerical methods that use mixed formulations with many Lagrange multipliers).
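For instance, the following SciPy sketch (with an arbitrary random sparsity pattern standing in for an assembled stiffness matrix) applies the reverse Cuthill-McKee renumbering and compares the matrix bandwidth before and after:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee

# Random symmetric sparsity pattern with a poor initial numbering.
rng = np.random.default_rng(0)
n = 500
A = sp.random(n, n, density=0.01, format="csr", random_state=rng)
A = (A + A.T).tocsr()

def bandwidth(M):
    """Largest distance |i - j| between a non-zero term and the diagonal."""
    coo = M.tocoo()
    return int(np.abs(coo.row - coo.col).max())

# Renumber the unknowns, then permute rows and columns accordingly.
perm = reverse_cuthill_mckee(A, symmetric_mode=True)
A_renumbered = A[perm][:, perm]

print("bandwidth before renumbering:", bandwidth(A))
print("bandwidth after renumbering: ", bandwidth(A_renumbered))
```

A smaller bandwidth means the factored matrix stays closer to the original sparsity pattern, which directly limits the memory taken by the fill-in.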

When possible, iterative methods are preferred. Their principle consists in building an approximation of the inverse of the matrix (a preconditioner), then proceeding with an iterative resolution, step by step, which uses only matrix-vector products; these products are very efficient and inexpensive in memory.
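Here is a minimal SciPy sketch of this principle: a preconditioned conjugate gradient applied to a 1D Laplacian (an arbitrary stand-in for a stiffness matrix), with a simple Jacobi (diagonal) preconditioner playing the role of the approximate inverse:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

# Sparse symmetric positive-definite test matrix (a 1D Laplacian).
n = 1000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Jacobi (diagonal) preconditioner: a crude approximation of A^{-1},
# applied, like everything else, through a matrix-vector product.
inv_diag = 1.0 / A.diagonal()
M = LinearOperator((n, n), matvec=lambda x: inv_diag * x)

# Each iteration costs essentially one sparse matrix-vector product.
u, info = cg(A, b, M=M)
print("info:", info)                           # 0 means convergence
print("residual:", np.linalg.norm(b - A @ u))
```

The quality of the preconditioner governs the number of iterations; a better approximation of the inverse converges faster but is more expensive to build and apply.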

However, these iterative methods have several shortcomings:

  • They are less robust than direct methods, especially when the conditioning of the matrix is poor

  • Preconditioning methods are very numerous: there are as many as there are types of problem (and often several candidates per problem). This requires the user to juggle the various methods, without ever being assured of obtaining a result in the end.

  • Being iterative, these methods require a criterion for stopping the process, and therefore a parameter to be managed; they also raise problems with the accumulation of rounding errors.