3.1 Linear regression method#

3.1.1 Principle#

The gap between theory and experiment is measured by the expression:

\(\sum _{i}{(\text{LogLog}(\frac{1}{1-{P}_{f}^{i}})-\text{LogLog}(\frac{1}{1-{P}_{f}({\sigma }_{W}^{i})}))}^{2}\) eq 3.1.1-1

("Log" denotes the natural logarithm). We seek to minimize this gap with respect to (\(m\), \({\sigma }_{u}\)).
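As an illustration, the gap of [éq 3.1.1-1] can be evaluated with a short Python sketch (the function names are hypothetical; `p_exp` holds the experimental failure probabilities \({P}_{f}^{i}\) and `p_model` the values \({P}_{f}({\sigma }_{W}^{i})\) predicted by the model):

```python
import math

def loglog(p):
    # LogLog(1/(1-p)), with Log the natural logarithm
    return math.log(math.log(1.0 / (1.0 - p)))

def gap(p_exp, p_model):
    # Sum of squared differences of eq 3.1.1-1 between experimental
    # failure probabilities and the model values P_f(sigma_W^i)
    return sum((loglog(pe) - loglog(pm)) ** 2
               for pe, pm in zip(p_exp, p_model))
```

The gap vanishes exactly when the model probabilities coincide with the experimental ones, and is strictly positive otherwise.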

3.1.2 Resolution#

The adjustment method usually employed is based on successive linear regressions. At iteration \((k)\), the values (\({m}_{k},{\sigma }_{u(k)}\)) of the WEIBULL modulus and of the cleavage stress are known. With these values, the WEIBULL stresses \({\sigma }_{W(k)}^{i}\) at the various instants of rupture can be computed using [éq 2.1-2]. These WEIBULL stresses are then sorted in increasing order, and the new estimates of the failure probabilities \({P}_{f(k)}^{i}\) at iteration \((k)\) are deduced from them. For these fixed WEIBULL stress values, the minimization of [éq 3.1.1-1] reduces to a simple linear regression on the point cloud (\(\text{Log}({\sigma }_{W(k)}^{i})\), \(\text{LogLog}(\frac{1}{1-{P}_{f(k)}^{i}})\)): if \(\text{LogLog}(\frac{1}{1-{P}_{f}})\) is plotted against \(\text{Log}({\sigma }_{W})\), the points lie on a line of slope \(m\) that intersects the x-axis at \(\text{Log}({\sigma }_{u})\). The new values (\({m}_{k+1},{\sigma }_{u(k+1)}\)) of these parameters are therefore obtained by setting the partial derivatives of [éq 3.1.1-1] with respect to each parameter to zero:

\({m}_{k+1}=\frac{\frac{1}{N}\sum _{i,j}{X}_{i(k)}{Y}_{j(k)}-\sum _{i}{X}_{i(k)}{Y}_{i(k)}}{\frac{1}{N}\sum _{i,j}{X}_{i(k)}{X}_{j(k)}-\sum _{i}{X}_{i(k)}^{2}}\) eq 3.1.2-1

\({\sigma }_{u(k+1)}=\text{exp}(\frac{1}{N}(\sum _{i}{X}_{i(k)}-\frac{1}{{m}_{k+1}}\sum _{i}{Y}_{i(k)}))\), eq 3.1.2-2

with \({X}_{i(k)}=\text{Log}({\sigma }_{W(k)}^{i})\) and \({Y}_{i(k)}=\text{LogLog}(\frac{1}{1-{P}_{f(k)}^{i}})\).
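With these definitions, one regression iteration [éq 3.1.2-1] and [éq 3.1.2-2] can be sketched in Python (a minimal illustration, not the actual implementation; the function name and calling convention are hypothetical):

```python
import math

def regression_update(sigma_w, p_f):
    """One regression iteration: returns (m_{k+1}, sigma_{u(k+1)}).

    sigma_w : Weibull stresses sigma_W(k)^i, sorted in increasing order
    p_f     : failure probability estimates P_f(k)^i
    """
    X = [math.log(s) for s in sigma_w]                      # X_i(k)
    Y = [math.log(math.log(1.0 / (1.0 - p))) for p in p_f]  # Y_i(k)
    N = len(X)
    sx, sy = sum(X), sum(Y)
    # eq 3.1.2-1: least-squares slope of Y against X
    m = (sx * sy / N - sum(x * y for x, y in zip(X, Y))) \
        / (sx * sx / N - sum(x * x for x in X))
    # eq 3.1.2-2: x-intercept of the fitted line gives Log(sigma_u)
    sigma_u = math.exp((sx - sy / m) / N)
    return m, sigma_u
```

On noiseless data generated from \({P}_{f}=1-\text{exp}(-({\sigma }_{W}/{\sigma }_{u}{)}^{m})\), the point cloud is exactly a line and a single iteration recovers \(m\) and \({\sigma }_{u}\).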

These iterations are repeated as long as the difference between the parameter sets obtained at iterations (k) and (k+1) remains significant (typically, about five iterations are needed). This difference is measured by: \(\text{Max}\left[∣\frac{{m}_{k+1}-{m}_{k}}{{m}_{k}}∣,∣\frac{{\sigma }_{u(k+1)}-{\sigma }_{u(k)}}{{\sigma }_{u(k)}}∣\right]\).
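The stopping test translates directly into code (a minimal sketch; the tolerance value is an assumption, the text does not prescribe one):

```python
def converged(m_k, m_k1, sigma_u_k, sigma_u_k1, tol=1e-3):
    # Max of the relative variations of m and sigma_u between
    # iterations (k) and (k+1); tol is an assumed threshold.
    diff = max(abs((m_k1 - m_k) / m_k),
               abs((sigma_u_k1 - sigma_u_k) / sigma_u_k))
    return diff < tol
```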

Note:

If \(m\) is fixed, \({\sigma }_{u(k+1)}\) is still given by [éq 3.1.2-2]. On the other hand, if \({\sigma }_{u}\) is fixed, \({m}_{k+1}\) is no longer given by [éq 3.1.2-1] but by: \({m}_{k+1}=\frac{\sum _{i}{X}_{i(k)}{Y}_{i(k)}}{\sum _{i}{X}_{i(k)}^{2}-\text{Log}({\sigma }_{u})\sum _{i}{X}_{i(k)}}\) .
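For the fixed-\({\sigma }_{u}\) case, the note's formula reads in Python as follows (a sketch only; the function name is hypothetical):

```python
import math

def update_m_fixed_sigma_u(sigma_w, p_f, sigma_u):
    # m_{k+1} when sigma_u is held fixed (formula of the note)
    X = [math.log(s) for s in sigma_w]
    Y = [math.log(math.log(1.0 / (1.0 - p))) for p in p_f]
    num = sum(x * y for x, y in zip(X, Y))
    den = sum(x * x for x in X) - math.log(sigma_u) * sum(X)
    return num / den
```

On exact data, where \({Y}_{i}=m({X}_{i}-\text{Log}({\sigma }_{u}))\), both numerator and denominator factor so that the formula returns \(m\) exactly.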