Definitions

A heuristic definition of a random signal could be the following: a random signal is a signal that is not reproduced identically when an experiment is repeated under the same conditions. More precisely, a random signal can be defined as a random variable that depends on a time variable (discrete or continuous).

Random signal Let $(\Omega,\mathcal{F},\mathcal{P})$ be a probability space, and $\mathcal{L}(\Omega)$ the space of random variables on $(\Omega,\mathcal{F},\mathcal{P})$. A random signal is a mapping $$ \begin{aligned} X:\mathbb{R} &\rightarrow \mathcal{L}(\Omega) \\ t &\mapsto X(t) \end{aligned} $$ In other words, a random signal $X$ is a function such that for each time $t\in\mathbb{R}$, $X(t)$ is a random variable.

A random signal is a stochastic process, i.e. an evolution of a random variable over time. For the measured signal $x(t)$, each observed value $x(t)$ is a realization of the random variable $X(t)$. For a fixed outcome $\omega\in\Omega$, the function $t\mapsto X(\omega,t)$ is called a trajectory of the random signal $X$.
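To make the trajectory picture concrete, here is a minimal numerical sketch, assuming a toy signal model (a sinusoid with a random phase plus additive noise, chosen for illustration and not from the course); each row of the array `trajectories` is one trajectory $t\mapsto X(\omega_k,t)$:

```python
# A toy random signal: sinusoid with random phase plus additive noise.
# Repeating the "experiment" (a new omega) never gives the same signal.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)     # sampled time axis
n_traj = 5                         # number of realizations omega_k

trajectories = np.empty((n_traj, t.size))
for k in range(n_traj):
    phase = rng.uniform(0.0, 2.0 * np.pi)       # drawn once per realization
    noise = 0.3 * rng.standard_normal(t.size)
    trajectories[k] = np.sin(2 * np.pi * 5 * t + phase) + noise

# trajectories[k] is the trajectory t -> X(omega_k, t).
```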

Statistical descriptors

As with random variables, a random signal can be characterized by a few statistical descriptors.

First order descriptor: expectation $$ \mu_X(t)=\text{E}(X(t)) $$

Second order descriptors Instantaneous power: $$ \text{P}_X(t) = \text{E}(|X(t)|^2) $$ Variance: $$ \text{Var}(X(t))=\text{E}(|X(t)-\mu_X(t)|^2) $$ Standard deviation: $$ \sigma_X(t) = \sqrt{\text{Var}(X(t))} $$
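These descriptors can be estimated empirically by averaging over an ensemble of trajectories. A minimal sketch, assuming trajectories are stored row-wise in an array (the array name and the toy ensemble are illustrative, not from the course):

```python
import numpy as np

rng = np.random.default_rng(1)
trajectories = rng.standard_normal((1000, 500)) + 2.0  # toy ensemble, mean 2

mu_t    = trajectories.mean(axis=0)                         # mu_X(t) = E(X(t))
power_t = (np.abs(trajectories) ** 2).mean(axis=0)          # P_X(t) = E(|X(t)|^2)
var_t   = (np.abs(trajectories - mu_t) ** 2).mean(axis=0)   # Var(X(t))
sigma_t = np.sqrt(var_t)                                    # sigma_X(t)

# Sanity check of the identity P_X(t) = Var(X(t)) + |mu_X(t)|^2
assert np.allclose(power_t, var_t + np.abs(mu_t) ** 2)
```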

Second order random signal A random signal $X$ is said to be of second order if $\forall t$, $\text{E}(|X(t)|^2)<+\infty$. It is uniformly of second order if there exists $B>0$ such that $\forall t$, $\text{E}(|X(t)|^2)< B$.

A meaningful descriptor for random signals is given by the autocorrelation function.

Autocorrelation $$ R_X(t,s) = \text{E}(X(t)\overline{X(s)}) $$ (the conjugate matters for complex-valued signals; for real-valued signals this reduces to $\text{E}(X(t)X(s))$).

Covariance $$ \begin{aligned} C_{X}(t,s) & = \text{E}\left((X(t)-\mu_X(t))\overline{(X(s)-\mu_X(s))}\right)\\& = R_X(t,s) - \mu_X(t)\overline{\mu_X(s)} \end{aligned} $$
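A minimal sketch of how $R_X(t,s)$ and $C_X(t,s)$ can be estimated on a discrete time grid, as matrices indexed by $(t,s)$, again averaging over a toy ensemble (illustrative, not from the course):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((2000, 64))     # toy ensemble: rows are trajectories

n_traj = X.shape[0]
R = X.T @ X.conj() / n_traj             # R[t, s] ~ E(X(t) conj(X(s)))
mu = X.mean(axis=0)                     # mu_X(t)
C = R - np.outer(mu, mu.conj())         # C_X(t,s) = R_X(t,s) - mu(t) conj(mu(s))
```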

Stationarity

An important assumption is the stationarity of a random signal.

Strict sense stationarity A random signal is said to be stationary (in the strict sense) iff all its statistical properties are time-shift invariant, i.e. for all $n$, all times $t_1,\dots,t_n$ and all shifts $\tau$, the vector $(X(t_1+\tau),\dots,X(t_n+\tau))$ has the same joint distribution as $(X(t_1),\dots,X(t_n))$.

In practice, we will focus on the first two orders.

Weak sense stationarity Let $X$ be a second order random signal. $X$ is stationary in a weak sense iff $$ \mu_X(t) = \mu_X(0) = \mu_X $$ $$ R_X(t+\tau,s+\tau) = R_X(t,s) = R_X(t-s) $$ for all $t,s,\tau\in\mathbb{R}$ (with a slight abuse of notation, $R_X$ then becomes a function of the single lag variable $t-s$).

Remark: $|R_X(t-s)|\leq R_X(0)$, a direct consequence of the Cauchy-Schwarz inequality.
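This remark can be checked numerically. A sketch with discrete white noise (a weak-sense stationary toy signal, assumed here purely for illustration), estimating $R_X(\tau)$ over an ensemble and verifying the bound:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((5000, 128))   # toy WSS ensemble (zero-mean white noise)

max_tau = 20
R = np.array([(X[:, max_tau:] * X[:, max_tau - tau : X.shape[1] - tau]).mean()
              for tau in range(max_tau)])   # R_X(tau) ~ E(X(t) X(t - tau))

assert np.all(np.abs(R) <= R[0] + 1e-12)    # |R_X(tau)| <= R_X(0)
```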

Ergodicity

In practice, stationarity is not enough. This hypothesis says that the statistics of a signal are time-shift invariant, but in practice we only have access to a single trajectory of the random signal. We need one more hypothesis, called ergodicity, in order to be able to compute these statistics from a single trajectory.

Ergodicity A random signal is said to be ergodic iff its time averages are deterministic, i.e. they do not depend on the observed trajectory.

Ergodicity of a stationary signal A stationary signal is ergodic iff for every function $g$ $$ \lim_{T\rightarrow +\infty} \frac{1}{T}\int_{-\frac{T}{2}}^{\frac{T}{2}} g(X(t))\,\mathrm{d} t = \text{E}(g(X(t))) $$ In particular, the time average equals the expectation of the process: $$ \lim_{T\rightarrow +\infty} \frac{1}{T}\int_{-\frac{T}{2}}^{\frac{T}{2}} X(t)\,\mathrm{d} t = \mu_X $$
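A minimal sketch of the ergodic property, with a toy signal whose samples are i.i.d. (an assumption made here for illustration, not from the course): the time average of a single trajectory approaches $\mu_X$. A stationary counterexample, $X(t)=A$ for a random constant $A$, shows a signal that is stationary but not ergodic:

```python
import numpy as np

rng = np.random.default_rng(4)
mu_X, T = 2.0, 1_000_000
x = mu_X + rng.standard_normal(T)   # ONE trajectory of an ergodic toy signal

print(x.mean())                     # time average ~ 2.0 = mu_X

# Counterexample: X(t) = A, a random constant. Stationary, but the time
# average equals A itself -- random, hence not deterministic: not ergodic.
A = rng.standard_normal()
y = np.full(T, A)
print(y.mean(), "(depends on the trajectory, unlike E(X(t)) = 0)")
```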

In practice, ergodicity is always assumed to hold!

Intercorrelation and Intercovariance

For two random signals $X$ and $Y$, one can define the intercorrelation (cross-correlation) and intercovariance functions $$ \begin{aligned} R_{XY}(t_1,t_2) & = \text{E}(X(t_1)\overline{Y(t_2)}) \\ C_{XY}(t_1,t_2) & = R_{XY}(t_1,t_2)-\mu_X(t_1)\overline{\mu_Y(t_2)} \end{aligned} $$

If $X$ and $Y$ are jointly weak-sense stationary, then $R_{XY}(t_1,t_2) = R_{XY}(t_1-t_2)$ and $C_{XY}(t_1,t_2) = C_{XY}(t_1-t_2)$.
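A classical use of the intercorrelation is delay estimation: the peak of $R_{XY}(\tau)$ locates the time shift between two signals. A minimal sketch with real-valued toy signals (the delay-estimation setup is illustrative, not from the course), using the convention $R_{XY}(\tau)=\text{E}(X(t)\,Y(t-\tau))$:

```python
import numpy as np

rng = np.random.default_rng(5)
n, delay = 10_000, 30
x_full = rng.standard_normal(n + delay)
x = x_full[delay:]                              # X(t)
y = x_full[:n] + 0.1 * rng.standard_normal(n)   # Y(t) = X(t - delay) + noise

def R_xy(tau):
    """Empirical R_XY(tau) ~ E(X(t) Y(t - tau)) for real-valued signals."""
    if tau >= 0:
        return np.mean(x[tau:] * y[:n - tau])
    return np.mean(x[:n + tau] * y[-tau:])

taus = np.arange(-60, 61)
vals = np.array([R_xy(tau) for tau in taus])
print(taus[np.argmax(vals)])   # -> -30: Y is X delayed by 30, so the peak
                               #    of R_XY sits at tau = -delay
```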