6. Quality control#

The overall quality of a study depends on several factors. Among the causes that can degrade this quality, we can distinguish in particular:

  • uncertainties in the data,

  • modeling choices (the transition from the physical problem to its numerical representation),

  • discretization errors, once the data has been established and the modeling has been chosen.

6.1. Uncertainties about the data#

On this point, uncertainties can be numerous, since they may concern each of the following aspects.

  • Material data: even once the constitutive law has been chosen, and its parameters correctly identified on appropriate characterization tests, uncertainty may remain in the values of the parameters of this law, due to the variability of the materials.

  • The exact geometry of the structure to be modeled may be uncertain: the real dimensions are not those specified at the design stage, or the deterioration of the shape of a part leads to a poorly controlled geometry (independently of any discretization aspect).

  • Loads (including thermal loads or those due to other control variables: irradiation, hydration, drying, etc.) and boundary conditions can present numerous uncertainties.

  • Initial conditions, in particular those due to manufacturing (residual stresses, initial work hardening).

For each of the uncertain parameters mentioned above, it may sometimes be necessary to perform a sensitivity analysis: the user performs a few simulations for extreme values of the uncertain parameters. This is sufficient when the result varies monotonically with these parameters, but that is not always the case: for example, the dependence of the stresses on extreme temperature values, with material coefficients that are themselves functions of temperature, is not trivial.

In this regard, it should be noted that parametric calculations can be performed very simply with *Code_Aster* (cf. u2.08.07).
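
For illustration, here is a minimal sketch of such a parametric loop in a command file; the concepts mail, modele and charg are assumed to have been defined beforehand, and the handling of concepts inside a loop depends on the version (see u2.08.07 for the reference approach):

```python
# Hypothetical parametric loop over Young's modulus (values in MPa
# are illustrative); mail, modele and charg are assumed to exist.
for young in (190000.0, 200000.0, 210000.0):
    acier = DEFI_MATERIAU(ELAS=_F(E=young, NU=0.3))
    chmat = AFFE_MATERIAU(MAILLAGE=mail,
                          AFFE=_F(TOUT='OUI', MATER=acier))
    resu = MECA_STATIQUE(MODELE=modele,
                         CHAM_MATER=chmat,
                         EXCIT=_F(CHARGE=charg))
    # ... post-process resu for this parameter value ...
    DETRUIRE(CONCEPT=_F(NOM=(acier, chmat, resu)))  # free the concepts
                                                    # (syntax is version-dependent)
```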

6.2. Modeling choice and data verification#

Even if the data is reliable, the quality of the results also depends on the choices made by the user concerning the simulation to be performed. The preceding paragraphs describe these choices. Let’s recall some general tips:

  • Beyond the mesh, it can be useful to check the calculation data: for example, by visualizing the boundary conditions and loads (direction and location of application), the material assignments, and the elementary characteristics (orientation of beams, thicknesses, sections). To do this, use IMPR_RESU/CONCEPT (see the sketch after this list).

  • Other checks can be carried out directly in the command file: for example, the consistency of the system of units, the boundary conditions, and the elementary characteristics. It is always useful to perform a first elastic calculation before any non-linear study.

  • To verify the material data, it may be useful to perform a test with a *Code_Aster* data set simpler than the study, for example on a single material point (SIMU_POINT_MAT): do we recover the tension-compression curve?

  • Calculating the mass (or volume) of the structure is also one of those simple but sometimes useful checks.
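
By way of illustration, the checks above might look as follows in a command file. This is only a sketch: the concept names (mail, modele, chmat, cara, charg, acier, linst, fmult) are assumed from the study, and the keyword details are to be checked against the reference documentation:

```python
# Print the assigned data (materials, elementary characteristics,
# loads) for visual checking; the FORMAT value is an assumption.
IMPR_RESU(FORMAT='MED',
          CONCEPT=_F(CHAM_MATER=chmat,
                     CARA_ELEM=cara,
                     CHARGE=charg))

# Simple mass / volume check of the structure.
masse = POST_ELEM(MASS_INER=_F(TOUT='OUI'),
                  MODELE=modele,
                  CHAM_MATER=chmat,
                  CARA_ELEM=cara)
IMPR_TABLE(TABLE=masse)

# Material point test: do we recover the tension-compression curve?
# (behavior and imposed strain component are illustrative)
resupt = SIMU_POINT_MAT(COMPORTEMENT=_F(RELATION='VMIS_ISOT_TRAC'),
                        MATER=acier,
                        INCREMENT=_F(LIST_INST=linst),
                        EPSI_IMPOSE=_F(EPXX=fmult))
```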

6.3. Controlling errors due to spatial discretization#

On this point, we recall some generalities here, but reading u2.08.01 is strongly recommended.

Regardless of any intention to perform mesh adaptation, we recommend checking the initial mesh with the MACR_INFO_MAIL command (u7.03.02), as in the sketch after the list below. This command makes it possible to carry out the following checks at low cost:

  • check the correspondence of the mesh with the initial geometry (in dimension, in volume);

  • list the GROUP_MA and GROUP_NO, which is useful for a correct modeling of the boundary conditions;

  • diagnose possible problems (connectivity, holes, mesh interpenetration);

  • provide mesh quality criteria (evaluated element by element).
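
A minimal call might look as follows; the exact keyword list is to be checked against u7.03.02:

```python
# Sketch: ask MACR_INFO_MAIL for the main diagnostics on the initial
# mesh; each keyword activates one of the checks listed above
# (keyword names are quoted from memory, cf. u7.03.02).
MACR_INFO_MAIL(MAILLAGE=mail,
               QUALITE='OUI',           # quality criteria, element by element
               CONNEXITE='OUI',         # connectivity, holes
               TAILLE='OUI',            # dimensions of the domain
               INTERPENETRATION='OUI')  # mesh interpenetration
```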

To control the effect of mesh quality more easily, an automatic mesh adaptation strategy is operational: it is based on the HOMARD mesh adaptation software, which can be called directly from the Code_Aster command file or from the Salome-Meca platform, and which uses, to control the adaptation, either error indicators, the values of a field, or the jump of a field from one element to the next. Several reasons may lead one to adapt a mesh:

  • The mesh is very complicated to produce: we start with a simple version and entrust an automatic process with the task of refining it.

  • We want to ensure the convergence of the numerical solution: rather than creating ever finer meshes by hand, we let the software identify the places where the mesh should be refined to increase the precision of the result.

  • The conditions of the calculation change as it proceeds: the areas that must be finely meshed move. If the mesh is made fine everywhere from the start, it is too large. By adapting it as the calculation goes along (refinement and coarsening), the mesh is fine only where necessary: its size remains moderate and the quality of the solution remains good.

The areas to be refined can be identified (a sketch of a typical call is given after this list):

  • Either with an error indicator. Note that the simplest ones (ZZ1, of the Zienkiewicz-Zhu type) are very robust and available in 2D and 3D; even if they do not provide the sharpest error estimates, their variations are sufficient to control the adaptation. Quantity-of-interest estimators are more relevant for providing a bound on the error committed.

  • Or with a relevant field (strain, internal variable). Beware: in plasticity, the von Mises stress is not always a relevant choice. One can then use:

  • either the extreme values of this field (for example, asking to refine the 5% of elements that have the highest values),

  • or the jump of the values of this field across the boundary between two finite elements, to drive the adaptation.

  • Or by boxes (uniform refinement in geometric areas).
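
By way of illustration of the first two options, here is a hedged sketch of one adaptation step with MACR_ADAP_MAIL; the field name, component name and keyword details are assumptions to be checked against u2.08.01:

```python
# Refine/coarsen mail0 into mail1, piloted by an error field carried
# by the result resu; CRIT_RAFF_PE=0.05 asks to refine the 5% of
# elements with the highest values (cf. the example above).
MACR_ADAP_MAIL(ADAPTATION='RAFF_DERA',      # refinement and coarsening
               MAILLAGE_N=mail0,            # mesh at the current step
               MAILLAGE_NP1=CO('mail1'),    # adapted mesh (new concept)
               RESULTAT_N=resu,             # result carrying the pilot field
               NOM_CHAM='ERZ1_ELEM',        # e.g. a ZZ1-type indicator (assumption)
               NOM_CMP='ERREST',            # piloting component (assumption)
               CRIT_RAFF_PE=0.05,           # refine the 5% worst elements
               CRIT_DERA_PE=0.05)           # coarsen the 5% best ones
```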

For more information, see u2.08.01.

6.4. Time discretization errors#

For most nonlinearities (behavior, contact, large deformations), the time discretization influences the result. Convergence results exist (fortunately) in all classical cases (elasto-visco-plasticity, contact), but it is nevertheless necessary to ensure that, for the chosen time step, the solution is sufficiently close to the time-continuous solution.

The simplest solution is similar to uniform refinement in mesh adaptation: it consists of uniformly refining the time step over the entire transient and restarting STAT_NON_LINE. This can be automated using the features of DEFI_LIST_INST (see u2.04.01, Tips for using STAT_NON_LINE, and u4.34.03).
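
For example, a sketch of this mechanism (the keyword values are illustrative; see u4.34.03 for the reference syntax):

```python
# Base time discretization, then automatic recutting on failure:
# if STAT_NON_LINE fails to converge on a step, the step is cut
# into 4, with at most 3 nested levels of subdivision.
linst = DEFI_LIST_REEL(DEBUT=0.0,
                       INTERVALLE=_F(JUSQU_A=1.0, NOMBRE=10))
times = DEFI_LIST_INST(DEFI_LIST=_F(LIST_INST=linst),
                       ECHEC=_F(EVENEMENT='ERREUR',
                                ACTION='DECOUPE',
                                SUBD_METHODE='MANUEL',
                                SUBD_PAS=4,
                                SUBD_NIVEAU=3))
# The resulting list is then passed to STAT_NON_LINE through
# INCREMENT=_F(LIST_INST=times).
```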

Another criterion for controlling the time step is subdivision according to a quantity of interest: DEFI_LIST_INST/DELTA_GRANDEUR makes it possible to recut the time step whenever the maximum variation of a given quantity (for example, a component of the plastic strain) exceeds a user-provided threshold.
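
A sketch of this criterion; the field and component names below are assumptions that depend on the constitutive law:

```python
# Recut the time step whenever the increment of an internal variable
# (here V1, often the cumulated plastic strain) exceeds 1.e-3.
times = DEFI_LIST_INST(DEFI_LIST=_F(LIST_INST=linst),
                       ECHEC=_F(EVENEMENT='DELTA_GRANDEUR',
                                NOM_CHAM='VARI_ELGA',  # internal variables
                                NOM_CMP='V1',          # law-dependent component
                                VALE_REF=1.e-3,        # threshold on the increment
                                ACTION='DECOUPE'))
```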

In addition, for elastoplastic behaviors, it is possible to directly estimate the error due to time discretization, in a manner similar to RESI_RADI_RELA (see 3.2.4). If this criterion was not taken into account during the calculation, it can be computed in post-processing with CALC_CHAMP: the ERR_RADI component of the DERA_ELGA option contains the error estimate at each time step and at each integration point.
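
A post-processing sketch; the placement of DERA_ELGA under the CRITERES keyword is an assumption to be checked against the CALC_CHAMP documentation:

```python
# Compute the DERA_ELGA option on an existing nonlinear result, then
# print its ERR_RADI component (time-discretization error estimate).
resu = CALC_CHAMP(reuse=resu,
                  RESULTAT=resu,
                  CRITERES='DERA_ELGA')
IMPR_RESU(FORMAT='RESULTAT',
          RESU=_F(RESULTAT=resu,
                  NOM_CHAM='DERA_ELGA',
                  NOM_CMP='ERR_RADI'))
```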