15. Appendix 3. Jacobi method#

15.1. Principle#

The Jacobi method [LT86] makes it possible to compute all the eigenvalues of a generalized problem whose matrices are symmetric positive definite (the matrices obtained at each iteration by the Bathe & Wilson method satisfy these properties; cf. § 8). It consists in transforming the matrices \(A\) and \(B\) of the GEP \(\mathrm{A}\mathrm{u}\mathrm{=}\lambda \mathrm{B}\mathrm{u}\) into diagonal matrices, using successive orthogonal similarity transformations (Givens rotation matrices). The process can be summarized as follows:

Starting from \({\mathrm{A}}_{1}\mathrm{=}\mathrm{A}\) and \({\mathrm{B}}_{1}\mathrm{=}\mathrm{B}\), each iteration applies \({\mathrm{A}}_{k+1}\mathrm{=}{\mathrm{Q}}_{k}^{T}{\mathrm{A}}_{k}{\mathrm{Q}}_{k}\) and \({\mathrm{B}}_{k+1}\mathrm{=}{\mathrm{Q}}_{k}^{T}{\mathrm{B}}_{k}{\mathrm{Q}}_{k}\), so that \({\mathrm{A}}_{k}\) and \({\mathrm{B}}_{k}\) tend to the diagonal matrices \({\mathrm{A}}^{d}\) and \({\mathrm{B}}^{d}\) as \(k\) grows.

Algorithm 3.1. Jacobi process

The eigenvalues are given by \(\lambda \mathrm{=}{\mathrm{A}}^{d}{({\mathrm{B}}^{d})}^{\text{-1}}\), i.e. \({\lambda }_{i}\mathrm{=}\frac{{\mathrm{A}}_{\mathit{ii}}^{d}}{{\mathrm{B}}_{\mathit{ii}}^{d}}\), and the eigenvector matrix verifies:

\(\Phi \mathrm{=}{\mathrm{Q}}_{1}{\mathrm{Q}}_{2}\mathrm{\cdots }{\mathrm{Q}}_{k}{({\mathrm{B}}^{d})}^{\mathrm{-}1\mathrm{/}2}\)

Each matrix \({\mathrm{Q}}_{k}\) is chosen so that a non-zero off-diagonal term \((i,j)\) of \({\mathrm{A}}_{k}\) or \({\mathrm{B}}_{k}\) becomes zero after the transformation.
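
To make this concrete, here is a minimal NumPy sketch (an illustration only, not Code_Aster's implementation; the names `apply_step` and `extract_eigenpairs` are hypothetical) of one transformation step and of the way the eigenpairs are read off once both matrices are numerically diagonal:

```python
import numpy as np

def apply_step(A, B, Q):
    """One Jacobi step: congruence transformation of both matrices by Q."""
    return Q.T @ A @ Q, Q.T @ B @ Q

def extract_eigenpairs(A_d, B_d, Q_prod):
    """Read off the eigenpairs once A and B are (numerically) diagonal.

    lambda_i = A^d_ii / B^d_ii; the eigenvectors are the columns of the
    accumulated product of rotations, each divided by sqrt(B^d_ii) so that
    Phi^T B Phi = I (cf. the mode correction at the end of the algorithm).
    """
    lam = np.diag(A_d) / np.diag(B_d)
    Phi = Q_prod / np.sqrt(np.diag(B_d))  # scales each column by 1/sqrt(B^d_ii)
    return lam, Phi
```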

15.2. A few choices#

In this algorithm, the important points are the following:

  • Which off-diagonal terms should be cancelled, and in what order?

  • How is the diagonal character of the matrices measured as \(k\) increases?

  • How is convergence measured?

15.2.1. Non-diagonal terms to cancel#

To choose the terms to cancel, there are several methods:

  • The first consists, at each step \(k\), in choosing the off-diagonal element of largest modulus in the matrix \({\mathrm{A}}_{k}\) or \({\mathrm{B}}_{k}\) and cancelling it by performing a rotation. This choice guarantees the convergence of the Jacobi method but is relatively expensive (search for the maximum element).

  • The second solution consists in cancelling all the non-diagonal elements of these matrices in succession, following the natural order \({\mathrm{A}}_{12}^{k},\mathrm{\dots },{\mathrm{A}}_{\mathrm{1n}}^{k},{\mathrm{A}}_{23}^{k},\mathrm{\dots }\). When \({\mathrm{A}}_{n\mathrm{-}\mathrm{1,}n}^{k}\) is reached, the cycle is started again. This method converges slowly.

A variant of this method, while still following the natural order of the terms, consists in cancelling only those whose coupling factor (cf. § 15.2.2) exceeds a given precision \({\varepsilon }_{k}\). At the end of a cycle, the value of this criterion is reduced and a new cycle is started. This is the strategy used in *Code_Aster*, as sketched below.
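
A minimal sketch of one such thresholded sweep, assuming helper routines `coupling_factor` and `rotate` that implement the quantities of § 15.2.2 and § 15.2.4 (the names are hypothetical):

```python
def sweep(A, B, eps_k, coupling_factor, rotate):
    """One cycle over the upper triangle in natural order; a rotation is
    performed only when a coupling factor exceeds the current threshold."""
    n = A.shape[0]
    for i in range(n - 1):
        for j in range(i + 1, n):
            if max(coupling_factor(A, i, j), coupling_factor(B, i, j)) >= eps_k:
                rotate(A, B, i, j)  # zeroes A[i, j] and B[i, j]
```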

15.2.2. Convergence test#

To test the convergence and the diagonal character of the matrices, the following procedure is used. It is verified that all the coupling factors, defined by:

\({f}_{{\mathrm{A}}_{\mathit{ij}}}\mathrm{=}\frac{∣{\mathrm{A}}_{\mathit{ij}}∣}{\sqrt{∣{\mathrm{A}}_{\mathit{ii}}{\mathrm{A}}_{\mathit{jj}}∣}}\) \({f}_{{\mathrm{B}}_{\mathit{ij}}}\mathrm{=}\frac{∣{\mathrm{B}}_{\mathit{ij}}∣}{\sqrt{∣{\mathrm{B}}_{\mathit{ii}}{\mathrm{B}}_{\mathit{jj}}∣}}\) \(i\ne j\)

are less than a given precision (diagonal character of the matrices). The convergence of the eigenvalues is also checked via the indicator

\({f}_{\lambda }=\underset{i}{\mathrm{max}}\frac{∣{\lambda }_{i}^{k}-{\lambda }_{i}^{k-1}∣}{{\lambda }_{i}^{k-1}}\)

which must remain below a given precision \({\varepsilon }_{\mathrm{jaco}}\).
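
These two tests can be written as follows (a sketch only; the function names are not those of Code_Aster):

```python
import numpy as np

def coupling_factor(M, i, j):
    """f_Mij = |M_ij| / sqrt(|M_ii * M_jj|) for i != j."""
    return abs(M[i, j]) / np.sqrt(abs(M[i, i] * M[j, j]))

def is_diagonal_enough(A, B, eps):
    """True when every off-diagonal coupling factor of A and B is below eps."""
    n = A.shape[0]
    return all(coupling_factor(M, i, j) < eps
               for M in (A, B)
               for i in range(n) for j in range(n) if i != j)

def eigenvalue_shift(lam_new, lam_old):
    """f_lambda = max_i |lam_i^k - lam_i^{k-1}| / lam_i^{k-1}."""
    return np.max(np.abs(lam_new - lam_old) / np.abs(lam_old))
```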

15.2.3. Algorithm implemented in Code_Aster#

The algorithm implemented in *Code_Aster* can be summarized as follows:

Initialization of the eigenvector matrix to the identity matrix.

Initialization of eigenvalues.

Define the dynamic convergence precision \({\varepsilon }_{\mathrm{dyn}}\).

Define global accuracy \({\varepsilon }_{\mathrm{glob}}\).

For each cycle \(k\mathrm{=}\mathrm{1,}{\text{n\_max}}_{\mathit{jaco}}\)

Define dynamic tolerance: \({\varepsilon }_{k}={({\varepsilon }_{\mathrm{dyn}})}^{k}\),

\(l=0\),

For each row \(i=\mathrm{1,}n\)

For each column \(j=\mathrm{1,}n\)

Calculating coupling factors, \({f}_{{\mathrm{A}}_{\mathit{ij}}^{k,l}}\mathrm{=}\frac{∣{\mathrm{A}}_{\mathit{ij}}^{k,l}∣}{\sqrt{∣{\mathrm{A}}_{\mathit{ii}}^{k,l}{\mathrm{A}}_{\mathit{jj}}^{k,l}∣}}\) \({f}_{{\mathrm{B}}_{\mathit{ij}}^{k,l}}\mathrm{=}\frac{∣{\mathrm{B}}_{\mathit{ij}}^{k,l}∣}{\sqrt{∣{\mathrm{B}}_{\mathit{ii}}^{k,l}{\mathrm{B}}_{\mathit{jj}}^{k,l}∣}}\) \(i\ne j\)

If \({f}_{{\mathrm{A}}_{\mathit{ij}}^{k,l}}\text{ or }{f}_{{\mathrm{B}}_{\mathit{ij}}^{k,l}}\ge {\varepsilon }_{k}\) then

Calculation of the Givens rotation coefficients,

Transformation of matrices \({\mathrm{A}}^{k\mathrm{,}\mathrm{l}}\) and \({\mathrm{B}}^{k\mathrm{,}\mathrm{l}}\),

Transformation of eigenvectors,

\(l=l+1\)

End if.

End loop.

End loop.

Calculating eigenvalues \({\lambda }_{i}^{k}\mathrm{=}\frac{{\mathrm{A}}_{\mathit{ii}}^{k,l}}{{\mathrm{B}}_{\mathit{ii}}^{k,l}}\).

Calculation of \({f}_{\lambda }=\underset{i}{\mathrm{max}}\frac{∣{\lambda }_{i}^{k}-{\lambda }_{i}^{k-1}∣}{{\lambda }_{i}^{k-1}}\).

Calculating global coupling factors

\({f}_{\mathrm{A}}\mathrm{=}\underset{\begin{array}{c}i,j\\ i\mathrm{\ne }j\end{array}}{\mathit{Max}}\frac{∣{\mathrm{A}}_{\mathit{ij}}^{k,l}∣}{\sqrt{∣{\mathrm{A}}_{\mathit{ii}}^{k,l}{\mathrm{A}}_{\mathit{jj}}^{k,l}∣}}\) \({f}_{\mathrm{B}}\mathrm{=}\underset{\begin{array}{c}i,j\\ i\mathrm{\ne }j\end{array}}{\mathit{Max}}\frac{∣{\mathrm{B}}_{\mathit{ij}}^{k,l}∣}{\sqrt{∣{\mathrm{B}}_{\mathit{ii}}^{k,l}{\mathrm{B}}_{\mathit{jj}}^{k,l}∣}}\)

If \({f}_{\text{A}}\le {\varepsilon }_{\mathrm{glob}}\text{ and }{f}_{\text{B}}\le {\varepsilon }_{\mathrm{glob}}\text{ and }{f}_{\lambda }\le {\varepsilon }_{\mathrm{glob}}\) then

Correction of the natural modes (division by \(\sqrt{{\mathrm{B}}_{\mathit{ii}}^{k}}\)),

Exit;

End if.

End loop.

Algorithm 3.2. Jacobi method implemented in Code_Aster.

\({\text{n\_max}}_{\mathrm{jaco}}\) denotes the maximum number of iterations allowed.

Note:

  • Since the matrices \(\mathrm{A}\) and \(\mathrm{B}\) are symmetric, only half of their terms are stored, in vector form (a possible packed indexing is sketched below).
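
As an illustration of such a storage, a common lower-triangle packing (not necessarily the exact layout used by Code_Aster) maps the index pair \((i,j)\) to a position in a vector of length \(n(n+1)/2\):

```python
def packed_index(i, j):
    """Index of entry (i, j) of a symmetric matrix stored as its lower
    triangle, row by row, in a single vector (0-based indices)."""
    if i < j:            # symmetry: only one of (i, j) / (j, i) is stored
        i, j = j, i
    return i * (i + 1) // 2 + j

# For a 3x3 symmetric matrix the vector holds
# [M00, M10, M11, M20, M21, M22]; entry (1, 2) maps to position 4 (= M21).
assert packed_index(1, 2) == 4
```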

15.2.4. Givens rotation matrix#

At each step, we seek to cancel the terms in position \((i,j)\) \((i\ne j)\) of the matrices \({A}^{k,l}\) and \({B}^{k,l}\) by transforming them with a rotation matrix which has the following form:

\({\mathrm{G}}_{\mathit{ll}}\mathrm{=}1\text{ for }l\mathrm{=}\mathrm{1,}n;\quad {\mathrm{G}}_{\mathit{ij}}\mathrm{=}a;\quad {\mathrm{G}}_{\mathit{ji}}\mathrm{=}b\)

the other terms being zero. We want \({\mathrm{A}}_{\mathit{ij}}^{k,l+1}\mathrm{=}{\mathrm{B}}_{\mathit{ij}}^{k,l+1}\mathrm{=}0\), which leads to:

\(\mathrm{\{}\begin{array}{c}a{\mathrm{A}}_{\mathit{ii}}^{k,l}+(1+\mathit{ab}){\mathrm{A}}_{\mathit{ij}}^{k,l}+b{\mathrm{A}}_{\mathit{jj}}^{k,l}\mathrm{=}0\\ a{\mathrm{B}}_{\mathit{ii}}^{k,l}+(1+\mathit{ab}){\mathrm{B}}_{\mathit{ij}}^{k,l}+b{\mathrm{B}}_{\mathit{jj}}^{k,l}\mathrm{=}0\end{array}\)

because \({\mathrm{A}}^{k,l+1}\mathrm{=}{\mathrm{G}}^{T}{\mathrm{A}}^{k,l}\mathrm{G}\) and \({\mathrm{B}}^{k,l+1}\mathrm{=}{\mathrm{G}}^{T}{\mathrm{B}}^{k,l}\mathrm{G}\). If the two equations are proportional we have:

\(a=0\) and \(b\mathrm{=}\frac{\mathrm{-}{\mathrm{A}}_{\mathit{ij}}^{k,l}}{{\mathrm{A}}_{\mathit{jj}}^{k,l}}\)

otherwise:

\(a=\frac{{c}_{2}}{d}\) and \(b=\frac{-{c}_{1}}{d}\)

with:

\({c}_{1}\mathrm{=}{\mathrm{A}}_{\mathit{ii}}^{k,l}{\mathrm{B}}_{\mathit{ij}}^{k,l}\mathrm{-}{\mathrm{B}}_{\mathit{ii}}^{k,l}{\mathrm{A}}_{\mathit{ij}}^{k,l}\) \({c}_{2}\mathrm{=}{\mathrm{A}}_{\mathit{jj}}^{k,l}{\mathrm{B}}_{\mathit{ij}}^{k,l}\mathrm{-}{\mathrm{B}}_{\mathit{jj}}^{k,l}{\mathrm{A}}_{\mathit{ij}}^{k,l}\)

\({c}_{3}\mathrm{=}{\mathrm{A}}_{\mathit{ii}}^{k,l}{\mathrm{B}}_{\mathit{jj}}^{k,l}\mathrm{-}{\mathrm{B}}_{\mathit{ii}}^{k,l}{\mathrm{A}}_{\mathit{jj}}^{k,l}\) \(d=\frac{{c}_{3}}{2}+\mathrm{sign}({c}_{3})\sqrt{{(\frac{{c}_{3}}{2})}^{2}+{c}_{1}{c}_{2}}\)

Note:

  • If \(d=0\) then we are in the proportional case.
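
Putting together the algorithm of § 15.2.3 and the rotation coefficients of § 15.2.4, a minimal self-contained NumPy sketch could look as follows. It is an illustration of the method described above, not the Code_Aster implementation; the function names, default tolerances and test data are arbitrary choices made here:

```python
import numpy as np

def rotation_coefficients(A, B, i, j):
    """Coefficients (a, b) of the matrix G (G_ij = a, G_ji = b, G_ll = 1)
    that zero A[i, j] and B[i, j] simultaneously (cf. § 15.2.4)."""
    c1 = A[i, i] * B[i, j] - B[i, i] * A[i, j]
    c2 = A[j, j] * B[i, j] - B[j, j] * A[i, j]
    c3 = A[i, i] * B[j, j] - B[i, i] * A[j, j]
    sgn = 1.0 if c3 >= 0.0 else -1.0          # the text uses sign(c3)
    d = 0.5 * c3 + sgn * np.sqrt((0.5 * c3) ** 2 + c1 * c2)
    if d == 0.0:                              # proportional case
        return 0.0, -A[i, j] / A[j, j]
    return c2 / d, -c1 / d

def coupling(M, i, j):
    """Coupling factor |M_ij| / sqrt(|M_ii M_jj|)."""
    return abs(M[i, j]) / np.sqrt(abs(M[i, i] * M[j, j]))

def generalized_jacobi(A, B, n_max=30, eps_dyn=1e-2, eps_glob=1e-12):
    """Sketch of the generalized Jacobi method for symmetric positive
    definite A and B.  Returns (lam, Phi) with A Phi ~ B Phi diag(lam)
    and Phi^T B Phi ~ I."""
    A, B = A.astype(float).copy(), B.astype(float).copy()
    n = A.shape[0]
    Phi = np.eye(n)                           # eigenvector matrix, init to identity
    lam_old = np.diag(A) / np.diag(B)         # initial eigenvalue estimates
    for k in range(1, n_max + 1):
        eps_k = eps_dyn ** k                  # dynamic threshold of cycle k
        for i in range(n - 1):
            for j in range(i + 1, n):
                if max(coupling(A, i, j), coupling(B, i, j)) < eps_k:
                    continue
                a, b = rotation_coefficients(A, B, i, j)
                G = np.eye(n)
                G[i, j], G[j, i] = a, b
                A, B = G.T @ A @ G, G.T @ B @ G   # congruence transformation
                Phi = Phi @ G                     # accumulate the rotations
        lam = np.diag(A) / np.diag(B)
        f_lam = np.max(np.abs(lam - lam_old) / np.abs(lam_old))
        f_A = max(coupling(A, p, q) for p in range(n) for q in range(n) if p != q)
        f_B = max(coupling(B, p, q) for p in range(n) for q in range(n) if p != q)
        if f_A <= eps_glob and f_B <= eps_glob and f_lam <= eps_glob:
            break
        lam_old = lam
    Phi = Phi / np.sqrt(np.diag(B))           # mode correction: Phi^T B Phi = I
    return np.diag(A) / np.diag(B), Phi

# Usage on a small random SPD pair (hypothetical data).
rng = np.random.default_rng(0)
M, N = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
A = M @ M.T + 4.0 * np.eye(4)
B = N @ N.T + 4.0 * np.eye(4)
lam, Phi = generalized_jacobi(A, B)
assert np.allclose(Phi.T @ B @ Phi, np.eye(4), atol=1e-8)
assert np.allclose(A @ Phi, B @ Phi @ np.diag(lam), atol=1e-6)
```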