2. Choosing the damage model

To represent the degradation of a material, or even its failure, one possible method (especially when the mode of failure is not known in advance) is to use a "softening" law of behavior, that is, one such that, once a threshold has been passed (in stress or in strain), the stress decreases while the strain increases. This type of behavior can be obtained:

  • by introducing a damage variable \(D\) between \(0\) and \(1\) (damage mechanics as introduced by Kachanov or Lemaitre): laws ENDO_FRAGILE, ENDO_ISOT_BETON, ENDO_ORTH_BETON, MAZARS in Code_Aster;

  • by using plasticity models with negative "work-hardening", such as laws BETON_DOUBLE_DP and DRUCK_PRAGER, but also the various soil laws available in Code_Aster;

  • by introducing a variable such as porosity coupled with plasticity for ductile rupture: the ROUSSELIER law in Code_Aster.
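Purely as an illustration of such a softening response (a generic scalar sketch with arbitrary numerical values, not one of the Code_Aster laws listed above), the first option can be written as:

```python
# Minimal sketch of a softening law using a scalar damage variable D in [0, 1]:
# sigma = (1 - D) * E * eps. D is chosen so that the stress decreases linearly
# once the strain threshold eps0 is passed. All values are illustrative.

E = 30.0e9     # Young's modulus [Pa]
eps0 = 1.0e-4  # strain threshold at which damage starts
epsf = 1.0e-3  # strain at which the material is fully damaged (D = 1)

def damage(eps):
    """Damage variable D, growing from 0 at eps0 to 1 at epsf."""
    if eps <= eps0:
        return 0.0
    if eps >= epsf:
        return 1.0
    # This evolution yields a linearly decreasing (softening) stress branch.
    return 1.0 - (eps0 / eps) * (epsf - eps) / (epsf - eps0)

def stress(eps):
    """Stress of the damaged material: sigma = (1 - D) * E * eps."""
    return (1.0 - damage(eps)) * E * eps

# Past the threshold, stress decreases while strain increases (softening):
print(stress(1e-4), stress(2e-4), stress(5e-4), stress(1e-3))
```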

Whatever the method used, all these "local" softening laws lead to the same difficulty: at some point the problem becomes ill-posed and the computed solution becomes highly mesh-dependent. In practice, the damage localizes in a band whose thickness is a single element (hence a cracking energy that tends to zero as the mesh is refined), and the cracking "path" depends on the topology of the mesh. The relevance of such calculations (although still quite widespread in the literature) is therefore very questionable, and making the dissipated energy depend on the mesh size provides only a very partial remedy.

To get around this problem, it is commonly accepted that a characteristic length must be introduced into the problem to be solved; it controls the thickness of the damaged zone independently of the mesh and allows the solution to converge again upon mesh refinement.
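The mesh dependence described above can be made concrete with a back-of-the-envelope sketch (the dissipation density `g_f` below is a hypothetical value, not tied to any particular law): when damage localizes in a band one element wide, the energy dissipated per unit crack surface scales with the element size and vanishes on refinement.

```python
# Energy dissipated per unit crack surface when a local softening law
# localizes damage in a band one element wide: it scales with the element
# size h, hence tends to zero under mesh refinement.

g_f = 1.0e4  # hypothetical dissipation density per unit volume [J/m^3]

def dissipated_energy_per_surface(h):
    """Energy per unit crack surface; the band thickness equals h."""
    return g_f * h

# Halving or dividing h by 10 divides the cracking energy accordingly:
for h in (1e-1, 1e-2, 1e-3, 1e-4):
    print(h, dissipated_energy_per_surface(h))
```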

Several regularization methods have been developed in recent years, each of which has its advantages and limitations. In Code_Aster, the available methods all rely on a notion of gradient (as opposed to integral-type methods, which are also widely used in the literature).

A distinction is made between methods that introduce:

  • the gradient of the damage variable (model GRAD_VARI, cf. doc [R5.04.01]);

  • second gradient and second gradient of dilatation models, which introduce an energy totally or partially dependent on the components of the strain gradient (models 2DG and DIL, cf. doc [R5.04.03], cf. doc []);

  • swelling gradient models (model INCO_UPG, cf. doc [R3.06.08]).

In all cases, the principle is to penalize, from an energetic point of view, the localization of the damage.
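Schematically, and in generic notation not taken from the Code_Aster reference documentation, this energetic penalization amounts to augmenting the local free energy with a term in the gradient of the damage field:

```latex
% Schematic gradient-regularized energy (generic notation, for illustration):
% the local free energy \varphi is supplemented by a term penalizing sharp
% spatial variations of the damage field D; the coefficient c carries the
% characteristic length (dimensionally, c \sim c_0 \, l_c^2).
E(u, D) = \int_\Omega \varphi\bigl(\varepsilon(u), D\bigr)\,\mathrm{d}\Omega
        + \int_\Omega \frac{c}{2}\,\lVert \nabla D \rVert^2 \,\mathrm{d}\Omega
```

A sharply localized damage band would make \(\lVert \nabla D \rVert\) large, so minimizing this energy spreads the damage over a zone of width governed by \(l_c\), independently of the mesh.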

The table below summarizes, for the various laws of behavior, which regularized modeling is available (and valid) in Code_Aster.

| Law of behavior  | Modeling        |
|------------------|-----------------|
| ENDO_FRAGILE     | GRAD_VARI / 2DG |
| ENDO_ISOT_BETON  | GRAD_VARI / 2DG |
| MAZARS           | 2DG             |
| ENDO_ORTH_BETON  | 2DG             |
| DRUCK_PRAGER     | 2DG / DIL       |
| ROUSSELIER       | INCO_UPG        |

Table 1: correspondence between damage law and non-local modeling

Notes:

  1. There is no non-local version of the BETON_DOUBLE_DP model. However, the implemented version includes a Hillerborg-type adjustment to avoid the dissipated energy tending to zero as the size of the elements tends to zero.

  2. All laws of behavior can be used with the 2DG and DIL models. However, DIL modeling has so far only been used with soil laws, and it only makes sense for regularizing "volume damage": it is therefore well suited to dilatant materials, hence to soils. In addition to DRUCK_PRAGER, it is therefore possible to use the laws CAM_CLAY (which is a particular case of Hujeux's law) and HUJEUX (cf. thesis by Alexandre Foucault and CR-AMA-09-154), VISC_DRUCK_PRAG (see note H-T64-2009-03498), as well as LETK, BARCELONE, CJS and HOEK_BROWN; but experience with these laws is still limited.

  3. INCO_UPG modeling is currently poorly mastered, and must be the subject of additional studies and development. Its use is not recommended.

  4. The CPU cost of regularized models is significant, on the one hand because they require fairly fine meshes (see § 3), and on the other hand because they introduce additional degrees of freedom, making the matrices to be inverted much larger and less sparse.
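The Hillerborg-type adjustment mentioned in note 1 can be sketched as follows (a generic illustration with hypothetical material values, not the BETON_DOUBLE_DP implementation): the ultimate strain of a linear softening branch is rescaled with the element size h so that the energy dissipated per unit crack surface always equals the target fracture energy G_f.

```python
# Sketch of a Hillerborg-type adjustment (illustration only): the softening
# branch is rescaled with the element size h so that the energy dissipated
# per unit crack surface stays equal to the fracture energy G_f.

G_f = 100.0   # target fracture energy [J/m^2] (hypothetical value)
f_t = 3.0e6   # tensile strength [Pa] (hypothetical value)

def ultimate_strain(h):
    """Ultimate strain of a linear softening branch, adjusted so that
    (1/2) * f_t * eps_u * h = G_f for a band of width h."""
    return 2.0 * G_f / (f_t * h)

def dissipated_energy(h):
    """Energy per unit crack surface dissipated by the adjusted law."""
    return 0.5 * f_t * ultimate_strain(h) * h

# The dissipated energy no longer depends on the element size:
for h in (1e-1, 1e-2, 1e-3):
    print(h, dissipated_energy(h))
```

The design choice is simply to trade a material parameter (the softening slope) for a mesh-dependent one, so that the product "dissipation density × band width" is invariant; it removes the energy pathology but not the mesh dependence of the crack path.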