3. Numerical resolution methods#
Solving the incremental equations leads either to a non-linear scalar equation or to a non-linear system with two unknowns. The numerical methods used are presented below. We also examine the calculation of the tangent matrix, which may be used by the global algorithm of STAT_NON_LINE (cf. [R5.03.01]).
3.1. Scalar equation: secant method#
It involves solving a non-linear scalar equation by looking for the solution within a confidence interval. To do this, it is proposed to couple a secant method with a control of the search interval. Let the following equation be solved:
\(\text{f}(x)=0,\quad x\in \left[a,b\right],\quad \text{f}(a)<0,\quad \text{f}(b)>0\) eq 3.1-1
The secant method consists in building a series of points \({x}^{n}\) that converge towards the solution. It is defined by recurrence (linear approximation of the function by its chord):
\({x}^{n+1}={x}^{n-1}-\text{f}({x}^{n-1})\frac{{x}^{n}-{x}^{n-1}}{\text{f}({x}^{n})-\text{f}({x}^{n-1})}\) eq 3.1-2
Moreover, if \({x}^{n+1}\) were to leave the interval, then we replace it with the limit of the interval in question:
\(\{\begin{array}{c}\text{if }{x}^{n+1}<a\text{ then }{x}^{n+1}:=a\\ \text{if }{x}^{n+1}>b\text{ then }{x}^{n+1}:=b\end{array}\) eq 3.1-3
On the other hand, if \({x}^{n+1}\) is in the current interval, then we update the interval:
\(\{\begin{array}{c}\text{if }{x}^{n+1}\in \left[a,b\right]\text{ and }\text{f}({x}^{n+1})<0\text{ then }a:={x}^{n+1}\\ \text{if }{x}^{n+1}\in \left[a,b\right]\text{ and }\text{f}({x}^{n+1})>0\text{ then }b:={x}^{n+1}\end{array}\) eq 3.1-4
Convergence is declared when \(\text{f}\) is close enough to 0 (a tolerance set by the user). As for the first two points of the sequence, one can choose the limits of the interval or, if an estimate of the solution is available, that estimate together with one of the interval limits.
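As an illustration only, a minimal Python sketch of this bracketed secant iteration could read as follows; the names `f`, `tol` and `max_iter` are illustrative and do not correspond to the actual routine:

```python
def bracketed_secant(f, a, b, x0=None, tol=1.0e-8, max_iter=50):
    """Solve f(x) = 0 on [a, b], with f(a) < 0 < f(b), by the secant
    iteration of eq 3.1-2, clamped to and shrinking the interval
    (eq 3.1-3 and eq 3.1-4)."""
    # First two points: the interval limits, or an estimate and one limit.
    x_prev, x_curr = (a, b) if x0 is None else (a, x0)
    f_prev, f_curr = f(x_prev), f(x_curr)
    for _ in range(max_iter):
        # Secant step: linear approximation of f by its chord (eq 3.1-2).
        x_next = x_prev - f_prev * (x_curr - x_prev) / (f_curr - f_prev)
        # Keep the iterate inside the confidence interval (eq 3.1-3).
        x_next = min(max(x_next, a), b)
        f_next = f(x_next)
        # Converged when f is close enough to zero.
        if abs(f_next) <= tol:
            return x_next
        # Update the bracketing interval (eq 3.1-4).
        if f_next < 0.0:
            a = x_next
        else:
            b = x_next
        x_prev, f_prev = x_curr, f_curr
        x_curr, f_curr = x_next, f_next
    return x_curr  # best iterate if the tolerance was not reached
```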
3.2. Nonlinear systems: Newton method and line search#
Here we present a Newton method combined with a line search technique and a control of the descent direction, so that the iterates do not leave the search domain (bounds on the unknowns).
Consider the following nonlinear system:
\(\{\begin{array}{c}\text{F}(x,y)=0\\ \text{G}(x,y)=0\end{array}\quad \text{with}\quad \{\begin{array}{c}{x}_{\text{min}}\le x\le {x}_{\text{max}}\\ {y}_{\text{min}}\le y\le {y}_{\text{max}}\end{array}\) eq 3.2-1
If \((x,y)\) is a point of the search domain, a sequence of points \(({x}^{n},{y}^{n})\) that converges towards a solution (or at least is expected to) is constructed by the following method.
Determination of the direction of descent
A descent direction \((\delta x,\delta y)\) is given by the resolution of the 2 × 2 linear system:
\(\left[\begin{array}{cc}{F}_{,x}^{n}& {F}_{,y}^{n}\\ {G}_{,x}^{n}& {G}_{,y}^{n}\end{array}\right]\left[\begin{array}{c}\delta x\\ \delta y\end{array}\right]=-\left[\begin{array}{c}{F}^{n}\\ {G}^{n}\end{array}\right]\) eq 3.2-2
Correction of the direction of descent
We correct the descent direction \((\delta x,\delta y)\) so that the points considered remain in the search domain (with \({\rho }_{\mathrm{max}}\) the maximum length that may be travelled along the descent direction):
\(\{\begin{array}{cccc}\text{if }x+{\rho }_{\mathrm{max}}\delta x<{x}_{\mathrm{min}}& \delta x:=\frac{{x}_{\mathrm{min}}-x}{{\rho }_{\mathrm{max}}}& \text{if }x+{\rho }_{\mathrm{max}}\delta x>{x}_{\mathrm{max}}& \delta x:=\frac{{x}_{\mathrm{max}}-x}{{\rho }_{\mathrm{max}}}\\ \text{if }y+{\rho }_{\mathrm{max}}\delta y<{y}_{\mathrm{min}}& \delta y:=\frac{{y}_{\mathrm{min}}-y}{{\rho }_{\mathrm{max}}}& \text{if }y+{\rho }_{\mathrm{max}}\delta y>{y}_{\mathrm{max}}& \delta y:=\frac{{y}_{\mathrm{max}}-y}{{\rho }_{\mathrm{max}}}\end{array}\) eq 3.2-3
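A minimal sketch of this correction (eq 3.2-3), with the illustrative names `bounds` and `rho_max`:

```python
def clamp_direction(x, y, dx, dy, bounds, rho_max=2.0):
    """Shrink the descent direction (dx, dy) so that the furthest point
    considered, (x + rho_max*dx, y + rho_max*dy), stays inside the
    admissible box (eq 3.2-3)."""
    x_min, x_max, y_min, y_max = bounds
    if x + rho_max * dx < x_min:
        dx = (x_min - x) / rho_max
    elif x + rho_max * dx > x_max:
        dx = (x_max - x) / rho_max
    if y + rho_max * dy < y_min:
        dy = (y_min - y) / rho_max
    elif y + rho_max * dy > y_max:
        dy = (y_max - y) / rho_max
    return dx, dy
```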
Line search
All that remains is to minimize the quantity \(\text{E}=({\text{F}}^{2}+{\text{G}}^{2})/2\) along the descent direction. Note that the quantity \(\text{E}\) minimized in this way measures the error committed in solving the system: it vanishes when \((x,y)\) is the solution of the system [éq 3.2-1]. To minimize \(\text{E}\), we simply try to cancel its derivative, that is, to solve the scalar equation:
\(\underbrace{\frac{\partial }{\partial \rho }\left[\text{E}(x+\rho \delta x,y+\rho \delta y)\right]}_{(\text{F}\,{\text{F}}_{,x}+\text{G}\,{\text{G}}_{,x})\delta x+(\text{F}\,{\text{F}}_{,y}+\text{G}\,{\text{G}}_{,y})\delta y}=0\quad \text{and}\quad 0\le \rho \le {\rho }_{\mathrm{max}}\) eq 3.2-4
Convergence criterion
Convergence is declared when the error \(\text{E}\) is less than a prescribed quantity. Moreover, if the norm of the descent direction becomes too small (another quantity set by the user), it can be concluded that the algorithm is failing to converge.
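Putting the three steps together, a hedged sketch of the complete iteration might read as follows. It reuses the `clamp_direction` helper sketched above; `F`, `G` and `jac` are illustrative callables returning the residuals and the 2 × 2 Jacobian, and the tolerances are simplified with respect to the actual RESI_INTE_RELA normalisation described in [§ 3.3]:

```python
import numpy as np

def newton_with_line_search(F, G, jac, x, y, bounds, tol=1.0e-8,
                            max_iter=20, rho_max=2.0, ls_iter=3,
                            ls_tol=1.0e-2):
    """Damped Newton iteration of section 3.2: descent direction from
    eq 3.2-2, clamped by eq 3.2-3, then a short line search on the step
    length rho minimising E = (F**2 + G**2)/2 (eq 3.2-4)."""
    for _ in range(max_iter):
        Fv, Gv = F(x, y), G(x, y)
        if 0.5 * (Fv**2 + Gv**2) <= tol:          # error E small enough
            return x, y
        J = np.asarray(jac(x, y))                 # [[F_x, F_y], [G_x, G_y]]
        dx, dy = np.linalg.solve(J, [-Fv, -Gv])                   # eq 3.2-2
        dx, dy = clamp_direction(x, y, dx, dy, bounds, rho_max)   # eq 3.2-3

        def dE_drho(rho):
            # Derivative of E along the descent direction (eq 3.2-4).
            xr, yr = x + rho * dx, y + rho * dy
            Fr, Gr = F(xr, yr), G(xr, yr)
            Jr = np.asarray(jac(xr, yr))
            return ((Fr * Jr[0, 0] + Gr * Jr[1, 0]) * dx
                    + (Fr * Jr[0, 1] + Gr * Jr[1, 1]) * dy)

        # Line search: a few secant steps on dE/drho over [0, rho_max].
        g_ref = dE_drho(0.0)
        rho_prev, g_prev, rho = 0.0, g_ref, 1.0
        for _ in range(ls_iter):
            g = dE_drho(rho)
            if abs(g) <= ls_tol * abs(g_ref) or g == g_prev:
                break
            rho_next = rho - g * (rho - rho_prev) / (g - g_prev)
            rho_prev, g_prev = rho, g
            rho = min(max(rho_next, 0.0), rho_max)
        x, y = x + rho * dx, y + rho * dy
    return x, y
```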
3.3. Stopping criteria#
So far, the stopping tolerances and the maximum numbers of iterations of the above resolution methods have not been specified. Two cases must be distinguished.
When seeking to satisfy the consistency conditions (scalar equation or non-linear system depending on the situation), precise results are expected; their relative tolerance \(\eta\) is set by the user in the STAT_NON_LINE command under the keyword RESI_INTE_RELA (cf. [U4.32.01]). Depending on whether one seeks to solve \(\text{F}=0\), \(\text{G}=0\) or simultaneously \(\text{F}=\text{G}=0\), the stopping criterion is expressed respectively as:
\(\left|\frac{\text{F}}{{R}^{0}}\right|\le \eta \quad \text{or}\quad \left|\frac{\text{G}}{{R}^{0}}\right|\le \eta \quad \text{or}\quad \frac{1}{{R}^{0}}\sqrt{\frac{{\text{F}}^{2}+{\text{G}}^{2}}{2}}\le \eta\)
\({R}^{0}\) initial yield stress, provided by the user, cf. [§ 1.1].
In addition, the user always specifies a maximum number of iterations in the STAT_NON_LINE command under the ITER_INTE_MAXI keyword (cf. [U4.32.01]).
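For instance, the relative test above for the coupled case \(\text{F}=\text{G}=0\) amounts to a simple check of the following kind (illustrative sketch; `R0` and `eta` stand for the yield stress \({R}^{0}\) and the tolerance RESI_INTE_RELA):

```python
def is_converged(F_val, G_val, R0, eta):
    """Relative stopping test of section 3.3 for the coupled case:
    the residual norm, scaled by the initial yield stress R0, must be
    below the user tolerance eta (RESI_INTE_RELA)."""
    return ((F_val**2 + G_val**2) / 2.0) ** 0.5 / R0 <= eta
```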
When performing line search iterations, we seek faster (or at least more robust) convergence. However, excessive time should not be devoted to it. This is why a maximum number of iterations equal to 3, a maximum step length \({\rho }_{\mathrm{max}}\) equal to 2 and a relative stopping criterion of 1% have been set once and for all:
\({\left.\frac{\partial }{\partial \rho }\left[\text{E}(x+\rho \delta x,y+\rho \delta y)\right]\right|}_{\rho }\le {10}^{-2}\,{\left.\frac{\partial }{\partial \rho }\left[\text{E}(x+\rho \delta x,y+\rho \delta y)\right]\right|}_{\rho =0}\)
3.4. Tangent matrix#
In order to solve the (global) equilibrium equations by a Newton method, it is essential to determine the consistent tangent matrix of the behaviour (cf. Simo et al. [bib4]). This matrix is classically composed of an elastic contribution and a plastic contribution:
\(\frac{\delta \sigma }{\delta \varepsilon }=\frac{\delta {\sigma }^{e}}{\delta \varepsilon }-2\mu \frac{\delta \Delta {\varepsilon }^{p}}{\delta \varepsilon }\) eq 3.4-1
It is immediately deduced that in the elastic regime (classical or pseudo-discharge), the tangent matrix reduces to the elastic matrix:
Elastic regime:
\(\frac{\delta \sigma }{\delta \varepsilon }=\frac{\delta {\sigma }^{e}}{\delta \varepsilon }\) eq 3.4-2
On the other hand, in the plastic regime, the variation of the plastic strain is no longer zero. The chain rule then yields:
\(\frac{\delta \Delta {\varepsilon }^{p}}{\delta {\tilde{\sigma }}^{e}}=\frac{3}{2}\left[{s}^{0}\otimes \frac{\delta p}{\delta {\tilde{\sigma }}^{e}}+\Delta p\,\frac{\delta {s}^{0}}{\delta {\tilde{\sigma }}^{e}}\right]=\frac{3}{2}\left[{s}^{0}\otimes \frac{\delta p}{\delta {\tilde{\sigma }}^{e}}+\frac{\Delta p}{{s}_{\mathrm{eq}}^{e}}(\mathrm{Id}-\frac{3}{2}{s}^{0}\otimes {s}^{0})\right]\) eq 3.4-3
\(\otimes\) tensor product
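The second bracket in [éq 3.4-3] can be retrieved by differentiating the normalised deviator; assuming, as suggested by the notation of the preceding sections, that \({s}^{0}={\tilde{\sigma }}^{e}/{s}_{\mathrm{eq}}^{e}\) with \({s}_{\mathrm{eq}}^{e}=\sqrt{\frac{3}{2}{\tilde{\sigma }}^{e}:{\tilde{\sigma }}^{e}}\), one indeed finds:

\(\frac{\delta {s}^{0}}{\delta {\tilde{\sigma }}^{e}}=\frac{\mathrm{Id}}{{s}_{\mathrm{eq}}^{e}}-\frac{{\tilde{\sigma }}^{e}}{{({s}_{\mathrm{eq}}^{e})}^{2}}\otimes \frac{\delta {s}_{\mathrm{eq}}^{e}}{\delta {\tilde{\sigma }}^{e}}=\frac{1}{{s}_{\mathrm{eq}}^{e}}\left(\mathrm{Id}-\frac{3}{2}{s}^{0}\otimes {s}^{0}\right)\quad \text{since}\quad \frac{\delta {s}_{\mathrm{eq}}^{e}}{\delta {\tilde{\sigma }}^{e}}=\frac{3}{2}{s}^{0}\)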
It can be noted that it is preferable to differentiate with respect to \({\tilde{\sigma }}^{e}\) rather than \(\varepsilon\), knowing that:
\(\frac{\delta \Delta {\varepsilon }^{p}}{\delta \varepsilon }=\frac{\delta \Delta {\varepsilon }^{p}}{\delta {\sigma }^{e}}\cdot \frac{\delta {\sigma }^{e}}{\delta \varepsilon }=2\mu \frac{\delta \Delta {\varepsilon }^{p}}{\delta {\tilde{\sigma }}^{e}}\cdot P\quad \text{with}\quad P:\{\begin{array}{c}S\to S\\ \varepsilon \to \tilde{\varepsilon }\end{array}\) eq 3.4-4
\(S\) symmetric tensor space
\(P\) projector onto the deviators
Finally, it only remains to calculate the variation of \(p\). To do this, one must distinguish between the classical plasticity regime \((\Delta {\sigma }^{p}=0)\) and the two-surface plasticity regime. Thus:
Classical plasticity: \(\text{F}(p,{\tilde{\sigma }}^{e})=0\)
\({\text{F}}_{,p}(p,{\tilde{\sigma }}^{e})\,\delta p=-{\text{F}}_{,{\tilde{\sigma }}^{e}}(p,{\tilde{\sigma }}^{e})\,\delta {\tilde{\sigma }}^{e}\quad \Rightarrow \quad \frac{\delta p}{\delta {\tilde{\sigma }}^{e}}=-\frac{{\text{F}}_{,{\tilde{\sigma }}^{e}}(p,{\tilde{\sigma }}^{e})}{{\text{F}}_{,p}(p,{\tilde{\sigma }}^{e})}\) eq 3.4-5
Two-surface plasticity: \(\text{F}(p,{\sigma }^{p},{\tilde{\sigma }}^{e})=0\quad \text{and}\quad \text{G}(p,{\sigma }^{p},{\tilde{\sigma }}^{e})=0\)
\(\left[\begin{array}{cc}{\text{F}}_{,p}& {\text{F}}_{,{\sigma }^{p}}\\ {\text{G}}_{,p}& {\text{G}}_{,{\sigma }^{p}}\end{array}\right]\left[\begin{array}{c}\delta p\\ \delta {\sigma }^{p}\end{array}\right]=-\left[\begin{array}{c}{\text{F}}_{,{\tilde{\sigma }}^{e}}\\ {\text{G}}_{,{\tilde{\sigma }}^{e}}\end{array}\right]\cdot \delta {\tilde{\sigma }}^{e}\quad \Rightarrow \quad \frac{\delta p}{\delta {\tilde{\sigma }}^{e}}=\frac{\left|\begin{array}{cc}{\text{F}}_{,{\sigma }^{p}}& {\text{F}}_{,{\tilde{\sigma }}^{e}}\\ {\text{G}}_{,{\sigma }^{p}}& {\text{G}}_{,{\tilde{\sigma }}^{e}}\end{array}\right|}{\left|\begin{array}{cc}{\text{F}}_{,p}& {\text{F}}_{,{\sigma }^{p}}\\ {\text{G}}_{,p}& {\text{G}}_{,{\sigma }^{p}}\end{array}\right|}\) eq 3.4-6
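As an illustration, the sensitivity \(\delta p/\delta {\tilde{\sigma }}^{e}\) of [éq 3.4-5] and [éq 3.4-6] can be evaluated directly from the partial derivatives (illustrative sketch only; the derivatives with respect to \({\tilde{\sigma }}^{e}\) are handled here as plain numpy arrays):

```python
import numpy as np

def dp_dsig_trial(F_p, F_sig, F_sp=None, G_p=None, G_sp=None, G_sig=None):
    """Sensitivity of the plastic increment p to the trial deviator:
    single-surface case (eq 3.4-5) if only F is active, two-surface
    case by Cramer's rule (eq 3.4-6) otherwise.  F_p, F_sp, G_p, G_sp
    are scalar partial derivatives; F_sig, G_sig are the tensor-valued
    derivatives stored as numpy arrays."""
    if G_p is None:                        # classical plasticity, eq 3.4-5
        return -np.asarray(F_sig) / F_p
    det = F_p * G_sp - F_sp * G_p          # denominator of eq 3.4-6
    return (F_sp * np.asarray(G_sig) - G_sp * np.asarray(F_sig)) / det
```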
A careful examination of the expressions [éq 2.1-5] and [éq 2.1-6] reveals that the variations of \(\text{F}\) and \(\text{G}\) with respect to \({\tilde{\sigma }}^{e}\) are not necessarily collinear with \({s}^{0}\). Taking [éq 3.4-3] into account, we then deduce that the tangent matrix is in general not symmetric in the plastic regime. Rather than requiring the use of a non-symmetric solver, which is much more expensive in computation time, it is preferred to symmetrize this matrix.
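A minimal illustration of this symmetrisation step, on a generic tangent matrix stored as a numpy array (not the actual operator assembly):

```python
import numpy as np

def symmetrized(K_tan):
    """Replace the generally non-symmetric plastic tangent matrix by its
    symmetric part, so that a symmetric linear solver can still be used."""
    K_tan = np.asarray(K_tan)
    return 0.5 * (K_tan + K_tan.T)
```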
3.5. Plane stresses#
The treatment of plane stresses adds a non-linear equation to be solved, coupled with the systems [éq 2.3-6] and [éq 2.3-8] (cf. [R5.03.02]). Faced with this significant difficulty and in the absence of a demonstrated need, it was preferred not to offer the possibility of enforcing a plane stress state at the level of the constitutive law. In other words, the C_PLAN modelling is not available for the VISC_TAHERI constitutive law.