# Background

In this lesson, we will explore how to examine relationships among observations that are made in a sequence (time series). We will focus on characterizing additional patterns within a single time series.

## Autocovariance and autocorrelation

Autocovariance for positive values of lag $$k$$ is defined in a form similar to the cross-covariance:

$c_{xx}(k) = \frac{1}{N} \sum_{t=1}^{N-k} (x_t - \overline{x})(x_{t+k} - \overline{x})$

Similarly, the autocorrelation is:

$r_{xx}(k) = \frac{c_{xx}(k)}{c_{xx}(0)}$
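As a minimal sketch (not part of the original text), these definitions translate directly into Python with NumPy; the function names here are illustrative:

```python
import numpy as np

def autocovariance(x, k):
    """Sample autocovariance c_xx(k) with the 1/N normalization used above."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    xbar = x.mean()
    # Sum of lagged products over the N - k overlapping pairs
    return np.sum((x[:N - k] - xbar) * (x[k:] - xbar)) / N

def autocorrelation(x, k):
    """Autocorrelation r_xx(k) = c_xx(k) / c_xx(0)."""
    return autocovariance(x, k) / autocovariance(x, 0)
```

Note that $$r_{xx}(0) = 1$$ by construction, and $$c_{xx}(0)$$ reduces to the (population) sample variance.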

Autocorrelation coefficients and the correlogram (plot of $$r_{xx}(k)$$ as a function of lag $$k$$) can provide insight into the underlying processes (Chatfield 2003).

• randomness: low $$r_{xx}(k)$$ for all $$k > 0$$.
• short-term correlations: a high $$r_{xx}(1)$$ followed by a rapid drop-off.
• non-stationarity (the time series contains a trend): $$r_{xx}(k)$$ remains non-zero up to large values of $$k$$.
• periodic fluctuations (with period corresponding to lag $$k$$): $$r_{xx}(k)$$ oscillates at the frequency of the periodic fluctuation.
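The last behavior, an oscillating correlogram for a periodic signal, is easy to demonstrate numerically. The sketch below (an illustration, not from the original text) computes the correlogram of a sinusoid with period 12, such as a monthly seasonal cycle:

```python
import numpy as np

def correlogram(x, max_lag):
    """Return r_xx(k) for k = 0..max_lag (1/N-normalized sample estimates)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    N = len(x)
    c0 = np.sum(x * x) / N
    return np.array([np.sum(x[:N - k] * x[k:]) / N / c0
                     for k in range(max_lag + 1)])

# A sinusoid with period 12 yields a correlogram that oscillates
# with the same period: near +1 at lags 0, 12, 24, ... and near -1
# at lags 6, 18, ...
t = np.arange(240)
x = np.sin(2 * np.pi * t / 12)
r = correlogram(x, 24)
```

The slight decay toward zero at higher lags is an edge effect of the finite sum (fewer overlapping pairs contribute as $$k$$ grows).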

Example correlograms (notated as “$$r_k$$”) are shown below. The corresponding time series signals (“$$x_t$$”) are shown directly above for the first three figures. In the last figure (lower right quadrant), correlograms are shown for monthly air temperature observations (top) and for the short-term correlations that remain after the seasonal contribution to the signal is removed (bottom).

Such analyses can serve as a basis for constructing purely statistical models of time series (e.g., ARIMA models), which are primarily used for forecasting, but we can also use autocorrelations to isolate and describe relationships among sequential observations.

For instance, these representations can be included in discussions of:

• locally emitted pollutants, which might exhibit short-term autocorrelations, or
• pollutants produced by regional photochemistry and/or transported from non-local sources, which might lead to longer-term autocorrelations.

## Power spectrum

We can project a time series (or an arbitrary function) onto a basis set consisting of harmonic functions of various frequencies, and determine which coefficients contribute most to reproducing the original signal.

Next, we will describe the Fourier series transformation of a sequential series of observations (time series) using notation adapted from Marr and Harley (2002).

The original time series $$x_t$$ can be represented by a Fourier series:

$x_t = \sum_{k=0}^{N-1} X(k) \exp\left(2\pi i \nu_k t\right)$

where $$N$$ is the number of observations and $$\nu_k = k/N$$. The coefficients $$X(k)$$ are given by the discrete Fourier transform:

$X(k) = \frac{1}{\sqrt{N}} \sum_{t=0}^{N-1} x_t \exp\left(-2\pi i \nu_k t\right)$

for $$k = 0, 1, \ldots, N-1$$. The periodogram or power spectrum is defined as:

$P(\nu_k) = |X(k)|^2$

and indicates the strength of frequencies in a time series.
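As an illustrative sketch (not from the original text), the periodogram can be computed with NumPy's FFT, rescaled to match the $$1/\sqrt{N}$$ convention above:

```python
import numpy as np

def periodogram(x):
    """P(nu_k) = |X(k)|^2, using the 1/sqrt(N) DFT normalization above."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    # numpy's fft computes the unnormalized sum; rescale by 1/sqrt(N)
    X = np.fft.fft(x) / np.sqrt(N)
    return np.abs(X) ** 2

# A pure sinusoid at frequency nu_k = k/N concentrates its power at
# index k and, for a real signal, its mirror image at N - k.
N = 64
t = np.arange(N)
x = np.cos(2 * np.pi * 8 * t / N)
P = periodogram(x)
```

With this convention, Parseval's theorem gives $$\sum_k |X(k)|^2 = \sum_t x_t^2$$, which is a useful sanity check.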

The periodogram is a finite Fourier transform of the autocovariance $$\{c_{xx}(k)\}$$ (Chatfield 2003).
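This relationship can be checked numerically. The sketch below (an illustration under stated assumptions, not from the original text) uses the circular version of the autocovariance, for which the identity is exact on a mean-removed series; the open-ended sum in $$c_{xx}(k)$$ above agrees with it up to edge effects:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(128)
x = x - x.mean()          # remove the mean, as in the autocovariance definition
N = len(x)

# Circular autocovariance: c(m) = (1/N) * sum_t x_t * x_{(t+m) mod N}
c = np.array([np.sum(x * np.roll(x, -m)) for m in range(N)]) / N

# Periodogram with the 1/sqrt(N) DFT convention used above
P = np.abs(np.fft.fft(x) / np.sqrt(N)) ** 2

# The DFT of the circular autocovariance should be real and equal to P
P_from_c = np.fft.fft(c)
```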