7. Description of document versions
| Aster version | Author(s), Organization(s) | Description of changes |
|---|---|---|
| 4 | | Initial text |
| 11.2 | | Some cosmetic formula corrections |
| 12.1 | I. Zentner | Vanmarcke formula correction |
A1 Power Spectral Density Conventions
A1.1 Introduction
To maintain the consistency required for all the calculations and for the comparisons with experiment (cf. [§2.3] and [§3.3]), we develop below the two sets of definitions consistent with the random response and post-processing calculations as they are used in Aster:
the first is based on spectral data expressed as a function of frequency; it is this set that is consistent with the calculation carried out by the operator CALC_INTE_SPEC [U4.56.03];
the second is based on spectral data expressed as a function of pulsation (angular frequency).
Both sets are consistent with the post-processing performed by POST_DYNA_ALEA.
Each time, we specify the unit in which the various quantities handled are expressed, as a function of the unit u of the reference signal. The explanations given are brief; for more details, refer to reference [bib10].
A1.2 Signal types and power definition
We consider four types of signals:
finite energy signals,
periodic signals,
deterministic finite power signals,
stationary random signals satisfying the ergodicity hypothesis.
In random dynamic analysis, the signals are random. For the interpretation of experimental results, the signals are either periodic or deterministic with finite power.
For each type of signal, we define a quantity that is either an energy or a power and that will be referred to in the following paragraphs under the single term power:
Finite energy signals are defined by their energy \(E\), expressed in \({u}^{2}s\):
\(E=\underset{\text{-}\infty }{\overset{\text{+}\infty }{\int }}x{(t)}^{2}\text{dt}\text{<+}\infty\) eq An1.2-1
Periodic signals are defined by the power \(P\) of the signal, expressed in \({u}^{2}\):
\(P=\frac{1}{T}\underset{\left[T\right]}{\int }{\mid x(t)\mid }^{2}\text{dt}\) eq An1.2-2
\(T\) refers to the period of the signal. \(\left[T\right]\) is an interval of length \(T\).
Finite power signals are defined by the average power \(P\) of the signal expressed in \({u}^{2}\):
\(P=\underset{T\to \text{+}\infty }{\text{lim}}(\frac{1}{T}\underset{-T/2}{\overset{+T/2}{\int }}{\mid x(t)\mid }^{2}\text{dt})\text{<+}\infty\) eq An1.2-3
Random signals are defined by the average power \(P\) of the signal expressed in \({u}^{2}\):
\(P=E\left[{\mid X(t)\mid }^{2}\right]=\underset{T\to \text{+}\infty }{\text{lim}}(\frac{1}{T}\underset{-T/2}{\overset{+T/2}{\int }}{\mid x(t)\mid }^{2}\text{dt})\text{<+}\infty\) eq An1.2-4
Here, we use the ergodicity hypothesis, which implies that the statistical average and the time average computed on a single realization of the process are identical.
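As an elementary illustration of these definitions (a minimal sketch with an arbitrary test signal, not taken from the reference), the Python fragment below estimates the average power of eq An1.2-3 by a discrete time average and compares it with the theoretical value \(A^{2}/2\) for a sine of amplitude \(A\).

```python
import numpy as np

# Hypothetical test signal: a sine of amplitude A, whose average power is A**2 / 2.
A, f0 = 2.0, 5.0                 # amplitude (in the unit u) and frequency (Hz), arbitrary
dt = 1.0e-3                      # sampling step (s)
t = np.arange(0.0, 100.0, dt)
x = A * np.cos(2.0 * np.pi * f0 * t)

# Discrete counterpart of eq An1.2-3: P = lim (1/T) * integral of |x(t)|^2 dt.
P = np.mean(np.abs(x) ** 2)

print(f"estimated power = {P:.4f} u^2, theoretical value A^2/2 = {A ** 2 / 2:.4f} u^2")
```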
A1.3 Autocorrelations
Taking into account the statistical reminders made in the body of the text, for each type of signal previously defined, we have:
Autocorrelation of finite energy signals, expressed in \({u}^{2}/\mathrm{Hz}\):
\({R}_{\text{XX}}(\tau )=\underset{\text{-}\infty }{\overset{\text{+}\infty }{\int }}\overline{x(t)}x(t+\tau )\text{dt}\) eq An1.3-1
Autocorrelation of periodic signals, expressed in \({u}^{2}\):
\({R}_{\text{XX}}(\tau )=\frac{1}{T}\underset{\left[T\right]}{\int }\overline{x(t)}x(t+\tau )\text{dt}\) eq An1.3-2
Autocorrelation of finite power signals, expressed in \({u}^{2}\):
\({R}_{\text{XX}}(\tau )=\underset{T\to \text{+}\infty }{\text{lim}}\frac{1}{T}\underset{-T/2}{\overset{+T/2}{\int }}\overline{x(t)}x(t+\tau )\text{dt}\) eq An1.3-3
Autocorrelation of random signals, expressed in \({u}^{2}\):
\({R}_{\text{XX}}(\tau )=E\left[\overline{X(t)}X(t+\tau )\right]=\underset{T\to \text{+}\infty }{\text{lim}}\frac{1}{T}\underset{-T/2}{\overset{+T/2}{\int }}\overline{x(t)}x(t+\tau )\text{dt}\) eq An1.3-4
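As an illustration (a minimal sketch with an arbitrary test record), the fragment below estimates \({R}_{\text{XX}}(\tau )\) for a sampled record by the time average of eq An1.3-3 / An1.3-4 and checks that \({R}_{\text{XX}}(0)\) equals the average power of eq An1.2-4.

```python
import numpy as np

def autocorr_time_average(x, max_lag):
    """Biased time-average estimate of R_XX(tau) for a real sampled record
    (discrete counterpart of eq An1.3-3 / An1.3-4): R[k] = (1/N) * sum_n x[n] * x[n+k]."""
    n = len(x)
    return np.array([np.dot(x[: n - k], x[k:]) / n for k in range(max_lag + 1)])

# Hypothetical test record: a sine buried in white noise (arbitrary parameters).
rng = np.random.default_rng(0)
dt = 1.0e-3
t = np.arange(0.0, 20.0, dt)
x = 2.0 * np.cos(2.0 * np.pi * 5.0 * t) + rng.normal(scale=0.5, size=t.size)

R = autocorr_time_average(x, max_lag=2000)
print(f"R_XX(0) = {R[0]:.3f} u^2, average power = {np.mean(x ** 2):.3f} u^2")
```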
A1.4 Power Spectral Density Definition
A1.4.1 Frequency expression
Power spectral density is defined by:
\({S}_{\text{XX}}(f)=\underset{\text{-}\infty }{\overset{\text{+}\infty }{\int }}{R}_{\text{XX}}(\tau ){e}^{-2i\pi f\tau }d\tau\) or \({G}_{\text{XX}}(f)=\underset{0}{\overset{\text{+}\infty }{\int }}{R}_{\text{XX}}(\tau ){e}^{-2i\pi f\tau }d\tau\) eq An1.4.1-1
Since, in mechanics, only positive values of frequency and time are of interest, the function \({G}_{\text{XX}}\) is more often used.
It can be shown, when the Fourier transforms of the signals exist, that this definition is equivalent (Wiener-Khinchine theorem) to the following definitions of the power spectral density.
For finite energy signals:
\({G}_{\text{XX}}(f)={\mid X(f)\mid }^{2}\) expressed in \({u}^{2}/{\mathrm{Hz}}^{2}\) eq An1.4.1-2
For periodic signals:
If \(X(f)=\sum _{n\text{=-}\infty }^{n\text{=+}\infty }{C}_{n}\delta (f-{\mathrm{nf}}_{0})\) then \({G}_{\text{XX}}(f)=\sum _{n\text{=-}\infty }^{n\text{=+}\infty }{C}_{n}^{2}\delta (f-{\mathrm{nf}}_{0})\) eq An1.4.1-3
\({G}_{\text{XX}}(f)\) is expressed in \({u}^{2}/\mathrm{Hz}\).
\({f}_{0}\) is the inverse of the signal period.
\({C}_{n}\) is the coefficient of the Dirac function.
For finite power signals:
\({G}_{\text{XX}}(f)=\underset{T\to \text{+}\infty }{\text{lim}}(\frac{1}{T}{\mid {X}_{\left[T\right]}(f)\mid }^{2})\) in \({u}^{2}/\mathrm{Hz}\) eq An1.4.1-4
where \({X}_{\left[T\right]}\) refers to the restriction of \(x(t)\) to \(\left[-T/2;T/2\right]\).
For random signals:
\({G}_{\text{XX}}(f)=\underset{T\to \text{+}\infty }{\text{lim}}E\left[\frac{1}{T}{\mid {X}_{\left[T\right]}(f)\mid }^{2}\right]\) in \({u}^{2}/\mathrm{Hz}\) eq An1.4.1-5
where \({X}_{\left[T\right]}\) refers to the restriction of \(x(t)\) to \(\left[-T/2;T/2\right]\).
Link between PSD and power.
With the definitions given above for power spectral densities, for all the signals, we have the relationship:
\(P=\underset{\text{-}\infty }{\overset{\text{+}\infty }{\int }}{G}_{\text{XX}}(f)\text{df}\) eq An1.4.1-6
This relationship is established using Parseval's theorem.
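As a numerical illustration (a minimal sketch with arbitrary signals and parameters, not taken from the reference), the fragment below checks eq An1.4.1-6 for two of the signal types above: a periodic signal, whose PSD reduces to the line spectrum of eq An1.4.1-3, and a stationary random signal, whose one-sided PSD is estimated with scipy.signal.welch, an averaged-periodogram estimate in the spirit of eq An1.4.1-5.

```python
import numpy as np
from scipy import signal

# --- Periodic signal: x(t) = 1 + 2 cos(2*pi*f0*t) + 0.5 sin(2*pi*3*f0*t), f0 = 2 Hz ---
T0, N = 0.5, 1024                          # period (s) and samples per period, arbitrary
t = np.arange(N) * T0 / N
x_per = 1.0 + 2.0 * np.cos(2 * np.pi * 2.0 * t) + 0.5 * np.sin(2 * np.pi * 6.0 * t)

C = np.fft.fft(x_per) / N                  # two-sided Fourier coefficients C_n (eq An1.4.1-3)
P_lines = np.sum(np.abs(C) ** 2)           # power carried by the spectral lines
P_time = np.mean(np.abs(x_per) ** 2)       # (1/T) * integral of |x|^2 dt (eq An1.2-2)
print(f"periodic: sum |C_n|^2 = {P_lines:.4f} u^2, time average = {P_time:.4f} u^2")

# --- Stationary random signal: band-pass filtered white noise (arbitrary parameters) ---
rng = np.random.default_rng(0)
fs = 1000.0                                # sampling frequency (Hz)
b, a = signal.butter(4, [40.0, 60.0], btype="bandpass", fs=fs)
x_rnd = signal.lfilter(b, a, rng.normal(size=200_000))

# Averaged-periodogram (Welch) estimate of the one-sided PSD G_XX(f), in u^2/Hz.
f, Gxx = signal.welch(x_rnd, fs=fs, nperseg=4096)

# Link between PSD and power (eq An1.4.1-6): the integral of G_XX recovers the power.
P_psd = np.sum(Gxx) * (f[1] - f[0])
print(f"random: int G_XX df = {P_psd:.5f} u^2, time-average power = {np.mean(x_rnd**2):.5f} u^2")
```

Because of windowing and the finite record length, the two values for the random signal agree only approximately.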
A1.4.2 Pulsation expression
In pulsation, the power spectral density is defined by:
\({G}_{\text{XX}}^{\text{'}}(\omega )=\frac{1}{2\pi }\underset{\text{-}\infty }{\overset{\text{+}\infty }{\int }}{R}_{\text{XX}}(\tau ){e}^{-i\omega \tau }d\tau\) eq An1.4.2-1
As for the frequency expression, it can be shown, when the Fourier transforms of the signals exist, that this definition is equivalent (Wiener-Khinchine theorem) to the following definitions of the power spectral density.
For finite energy signals:
\({G}_{\text{XX}}^{\text{'}}(\omega )=2\pi {\mid {X}^{\text{'}}(\omega )\mid }^{2}\) expressed in \({u}^{2}/{\mathrm{Hz}}^{2}\) eq An1.4.2-2
For periodic signals:
If \({X}^{\text{'}}(\omega )=\sum _{n\text{=-}\infty }^{n\text{=+}\infty }{C}_{n}\delta (\omega -n{\omega }_{0})\) then \({G}_{\text{XX}}^{\text{'}}(\omega )=\sum _{n\text{=-}\infty }^{n\text{=+}\infty }{C}_{n}^{2}\delta (\omega -n{\omega }_{0})\) eq An1.4.2-3
\({G}_{\text{XX}}^{\text{'}}(\omega )\) is expressed in \({u}^{2}/\mathrm{Hz}\), and \({\omega }_{0}=\frac{2\pi }{T}\) where \(T\) is the signal period.
\({C}_{n}\) is the coefficient of the Dirac function.
For finite power signals:
\({G}_{\text{XX}}^{\text{'}}(\omega )=\underset{T\to \text{+}\infty }{\text{lim}}(\frac{2\pi }{T}{\mid {X}_{\left[T\right]}^{\text{'}}(\omega )\mid }^{2})\) in \({u}^{2}/\mathrm{Hz}\) eq An1.4.2-4
\({X}_{\left[T\right]}^{\text{'}}\) refers to the restriction of \(x(t)\) to \(\left[-T/2;T/2\right]\).
For random signals:
\({G}_{\text{XX}}^{\text{'}}(\omega )=\underset{T\to \text{+}\infty }{\text{lim}}E\left[\frac{2\pi }{T}{\mid {X}_{\left[T\right]}^{\text{'}}(\omega )\mid }^{2}\right]\) in \({u}^{2}/\mathrm{Hz}\) eq An1.4.2-5
\({X}_{\left[T\right]}^{\text{'}}\) refers to the restriction of \(x(t)\) to \(\left[-T/2;T/2\right]\).
Link between PSD and power.
Likewise, for all signals, we have the following relationship, which follows from Parseval's theorem:
\(P=\underset{\text{-}\infty }{\overset{\text{+}\infty }{\int }}{G}_{\text{XX}}^{\text{'}}(\omega )d\omega\) eq An1.4.2-6
A1.4.3 Relationship between the PSD in frequency and the PSD in pulsation
For the four types of signals:
\({G}_{\text{XX}}^{\text{'}}(\omega )=\frac{1}{2\pi }{G}_{\text{XX}}(f)\) with \(\omega =2\pi f\) eq An1.4.3-1
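A minimal numerical check of this conversion (an arbitrary white-noise record, the PSD in frequency being estimated here with scipy.signal.welch): whichever variable is used, the integral of the PSD must return the same power.

```python
import numpy as np
from scipy import signal

# One-sided PSD in frequency, G_XX(f), of a hypothetical white-noise record (u^2/Hz).
rng = np.random.default_rng(1)
fs = 200.0                                     # sampling frequency (Hz), arbitrary
x = rng.normal(size=100_000)
f, G_f = signal.welch(x, fs=fs, nperseg=2048)

# Conversion to a PSD in pulsation (eq An1.4.3-1), with omega = 2*pi*f.
omega = 2.0 * np.pi * f                        # pulsation axis (rad/s)
G_w = G_f / (2.0 * np.pi)                      # G'_XX(omega) = G_XX(f) / (2*pi)

# The power is independent of the variable used for the integration.
P_f = np.sum(G_f) * (f[1] - f[0])
P_w = np.sum(G_w) * (omega[1] - omega[0])
print(f"int G_XX df = {P_f:.4f} u^2, int G'_XX domega = {P_w:.4f} u^2")
```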
Hilbert transform
Let \(X(t)\) be a real signal with Fourier transform \(X(\omega )\), and let \(H(\omega )\) be the transfer function:
\(H(\omega )=j\text{sign}(\omega )=\{\begin{array}{cc}j& \omega >0\\ -j& \omega <0\\ 0& \omega =0\end{array}\)
The system with transfer function \(H(\omega )\) transforms \(X(t)\) into its Hilbert transform, denoted \(\stackrel{ˆ}{X}(t)\); it produces a phase shift of \(+90°\) for positive frequencies and \(-90°\) for negative frequencies. It follows from the convolution theorem that \(\stackrel{ˆ}{X}(t)\) can also be defined as the convolution of \(X(t)\) with the corresponding impulse response, namely \(h(t)=1/\pi t\).
\(\stackrel{ˆ}{X}(t)\) is also real; a second application of the Hilbert transform restores the initial signal with its sign changed and with any constant (DC) component removed.
Example: \(x(t)=A\text{cos}\omega t\to \stackrel{ˆ}{x}(t)\text{=-}A\text{sin}\omega t\)
This property is the basis for using the Hilbert transform to define the envelope of a narrow band process.
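As an illustration of this use (a minimal sketch with an arbitrary amplitude-modulated test signal), the fragment below computes the Hilbert transform and the envelope with scipy.signal.hilbert. Note that SciPy uses the \(h(t)=1/\pi t\) convention, so the sign of the transform may be opposite to the example above; the envelope \(\sqrt{{x}^{2}+{\stackrel{ˆ}{x}}^{2}}\) is unaffected.

```python
import numpy as np
from scipy import signal

# Hypothetical narrow-band signal: an 80 Hz carrier with a slow amplitude modulation.
fs = 1000.0                                        # sampling frequency (Hz), arbitrary
t = np.arange(0.0, 2.0, 1.0 / fs)
env_true = 1.0 + 0.5 * np.cos(2.0 * np.pi * 2.0 * t)
x = env_true * np.cos(2.0 * np.pi * 80.0 * t)

# scipy.signal.hilbert returns the analytic signal x + j*x_hat, where x_hat is the
# Hilbert transform of x (with the h(t) = 1/(pi*t) convention).
xa = signal.hilbert(x)
x_hat = np.imag(xa)
envelope = np.abs(xa)                              # sqrt(x^2 + x_hat^2): signal envelope

# Compare with the known modulation away from the record ends (edge effects).
inner = slice(200, -200)
err = np.max(np.abs(envelope[inner] - env_true[inner]))
print(f"maximum envelope error (interior of the record) = {err:.3e}")
```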