Have you ever watched a sculptor at work? If not, you’ve missed the opportunity to see the sculptor coax a form out of an apparently formless chunk of raw material. Generally, the sculptor will start out by creating a very coarse approximation of a shape. Then, the shape is refined until it looks roughly like what the sculptor envisages. Finally, the sculptor gives the sculpture a few finishing touches to give it the right texture. To illustrate this process, here are a few images detailing the progress of an ice sculpture:


Images of ice carving, extracted from the video found at http://www.militarychefs.com/1A/2_Media/IceVideos/IceBirdOfPrey.html (With kind permission from James Sewson)

To generate the rough shape at the beginning, the sculptor needs to use large coarse cuts, but at the end, when the fine detail needs to be added, the sculptor uses small precise cuts. Thus, the final shape is generated by a combination of cuts on different scales.

In signal analysis, a similar approach can be used to approximate any function of interest. Here, instead of cuts, we use sine waves of different periods. When the correct combination of sine waves is added together, the result is identical to the function of interest, except possibly at isolated points (for example, where the function of interest is not continuous). The period, amplitude, and phase of each component sine wave can be found using the Fourier transformation.


Left) If we add up two or more sinusoidal waves, we can easily calculate the resulting signal. Right) But if we have a given signal, which waves (“Frequency, Amplitude, Phase”) should we add up to best approximate our signal? The “Fourier Transformation” solves this problem.
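The inverse problem posed on the right can be illustrated numerically. The following sketch (in Python/NumPy for illustration, while the worked examples later in this chapter use Matlab) builds a signal from two known sinusoids and uses the FFT to recover their frequencies and amplitudes; all signal parameters are chosen arbitrarily:

```python
import numpy as np

fs = 1000                      # sampling frequency (Hz), assumed for this sketch
t = np.arange(0, 1, 1/fs)      # 1 s of data
# Signal composed of two known sinusoids
y = 2.0*np.sin(2*np.pi*50*t) + 0.5*np.sin(2*np.pi*120*t)

Y = np.fft.fft(y)
freqs = np.fft.fftfreq(len(t), d=1/fs)

# Amplitude of each component: |Y|/N, doubled for the single-sided spectrum
amps = 2*np.abs(Y)/len(t)
pos = freqs > 0
# The two largest peaks should sit at 50 Hz and 120 Hz
peaks = freqs[pos][np.argsort(amps[pos])[-2:]]
print(float(min(peaks)), float(max(peaks)))   # → 50.0 120.0
```

For a pure sinusoid of amplitude \(A\) that fits an integer number of periods into the data window, the single-sided spectrum returns exactly \(A\) at the corresponding frequency bin.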


Composition of a function out of sine waves.

An excellent overview of Fourier analysis can be found in:

C. Harris. (1998) The Fourier analysis of biological transients. Journal of Neuroscience Methods, 83, 15-34.

Fourier integral

To transform a continuous function, one uses the Fourier Integral:

\[\label{eq:FourierIntegral} F(k) = \int_{ - \infty }^\infty {f(t){e^{ - 2\pi ikt}}dt}\]

where \(k\) represents frequency. The inverse transform is given by

\[\label{eq:FourierIntegralInverse} f(t) = \int_{ - \infty }^\infty {F(k){{\mathop{\rm e}\nolimits} ^{2\pi ikt}}dk}\]

Typically, the time-dependent function is written in lowercase and the Fourier transform in uppercase.

Different conventions

It should be noted that some authors choose to define the Fourier transform (and inverse transform) in terms of the angular frequency \(\omega = 2\pi k\). In this case, a scaling factor with a combined value of \(\frac{1}{2\pi}\) is required. The \(\frac{1}{2\pi}\) may scale either the forward transform or the inverse transform, or \(\frac{1}{\sqrt{2\pi}}\) may scale both (symmetrical Fourier transformation).

Complex exponential notation

Note that \({e^{2\pi ikt}}\) is simply shorthand notation for the sum of phase-shifted sine waves. To understand this, one needs to understand that:

  • Any combination of a sine and a cosine wave with frequency \(k\) is again a sinusoid with this frequency.
  • The amplitude and the phase depend on the relative sizes of the sine and cosine contributions.


\[a\cos (2\pi kt) + b\sin (2\pi kt) = c\sin (2\pi kt + \phi )\]

where \(c = \sqrt{a^2 + b^2}\) and \(\phi = \tan^{-1}(a/b)\). Euler’s formula states that

\[\label{eq:complexExp} {e^{2\pi ikt}} = \cos (2\pi kt) + i\sin (2\pi kt)\]

From this, it follows that sine and cosine waves can be expressed in terms of \({e^{2\pi ikt}}\) and \({e^{ - 2\pi ikt}}\) :

\[\begin{split}\label{eq:complexSinCos} \begin{array}{l} \sin (2\pi kt) = \frac{1}{2i}\left( e^{2\pi ikt} - e^{-2\pi ikt} \right)\\ \cos (2\pi kt) = \frac{1}{2}\left( e^{2\pi ikt} + e^{-2\pi ikt} \right) \end{array}\end{split}\]
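These identities can be checked numerically, e.g. with NumPy's complex arithmetic (an illustrative sketch; the values of \(k\) and \(t\) are arbitrary):

```python
import numpy as np

k, t = 3.0, 0.17               # arbitrary frequency and time point
z = np.exp(2j*np.pi*k*t)       # e^{2 pi i k t}
zm = np.exp(-2j*np.pi*k*t)     # e^{-2 pi i k t}

# Euler's formula: e^{2 pi i k t} = cos(2 pi k t) + i sin(2 pi k t)
assert np.isclose(z, np.cos(2*np.pi*k*t) + 1j*np.sin(2*np.pi*k*t))

# Sine and cosine expressed via complex exponentials
assert np.isclose(np.sin(2*np.pi*k*t), (z - zm)/(2j))
assert np.isclose(np.cos(2*np.pi*k*t), (z + zm)/2)
```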

Fourier transform of a constant

Let us start with the constant function

\[f(t) = 1\]

The Fourier transform is

\[F(k) = \int_{ - \infty }^\infty {{e^{ - 2\pi ikt}}dt}\]

This integral turns out to be the Dirac delta function centred at zero frequency:

\[F(k) = \delta (k)\]

Fourier transform of a pure sinusoid

Now, take a simple sinusoid with frequency \(\nu\):

\[f(t) = A\sin (2\pi \nu t)\]

The Fourier transform is

\[F(k) = \int_{ - \infty }^\infty {A\sin (2\pi \nu t)} \,{e^{ - 2\pi ikt}}dt\]

Using the complex exponential representation of a sine wave, this becomes

\[\begin{split}\begin{array}{c} F(k) = \int_{ - \infty }^\infty {A\left( {\frac{{{e^{2\pi i\nu t}} - {e^{ - 2\pi i\nu t}}}}{{2i}}} \right){e^{ - 2\pi ikt}}dt} \\ = \frac{A}{{2i}}\int_{ - \infty }^\infty {\left( {{e^{ - 2\pi i(k - \nu )t}} - {e^{ - 2\pi i(k + \nu )t}}} \right)dt} \end{array}\end{split}\]

Finally, we write this in terms of Dirac delta functions:

\[F(k) = \frac{A}{{2i}}\left[ {\delta (k - \nu ) - \delta (k + \nu )} \right]\]

Fourier Series

In reality, measurement signals are never infinite. They always have a start and an end. Using a little trick, such signals can be turned into “periodic” signals, simply by repeating the signal again and again (see Fig. [fig:FourierSeries]).


Artificially extended signal.

It can be shown that every periodic function can be decomposed into a sum of sinusoids with frequencies that are multiples of a fundamental frequency:

\[f(t) = {a_0} + \sum\limits_{n = 1}^\infty {\left[ {{a_n}\cos (2\pi n{k_g}t) + {b_n}\sin (2\pi n{k_g}t)} \right]}\]

The fundamental frequency \({k_g}\) is determined by:

\[{k_g} = \frac{1}{{{T_D}}}\]

where \({T_D}\) is the length of the data set, i.e., the length of the artificial period. The Fourier series can also be written as

\[\label{eq:FourierSeries} f(t) = \sum\limits_{n = - \infty }^\infty {{F_n}\,{e^{2\pi in{k_g}t}}}\]

where the values of \(F_n\) are given by

\[\label{eq:FourierCoefficients} F_n = \frac{1}{T_D}\int_\tau^{\tau + T_D} f(t)\,e^{-2\pi i\,n\,t/T_D}\,dt\]

Example: Square Wave

Using Fourier expansion with cycle frequency f over time t, we can represent an ideal square wave with a peak to peak amplitude of 2 as an infinite series of the form

\[\begin{split}x_{\mathrm{square}}(t) & {} = \frac{4}{\pi} \sum_{k=1}^\infty {\sin{\left (2\pi (2k-1) ft \right )}\over(2k-1)} \\ & {} = \frac{4}{\pi}\left (\sin(2\pi ft) + {1\over3}\sin(6\pi ft) + {1\over5}\sin(10\pi ft) + \cdots\right )\end{split}\]

First four Fourier approximations for a square wave. (from Wikipedia)
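The convergence of this series can be checked numerically. The following sketch (in Python/NumPy as an illustrative equivalent to the chapter's Matlab examples) sums the first terms of the series and compares them to an ideal square wave:

```python
import numpy as np

f = 1.0                                 # cycle frequency (Hz)
t = np.linspace(0, 2, 2001)             # two periods
square = np.sign(np.sin(2*np.pi*f*t))   # ideal square wave, amplitude +/-1

def partial_sum(n_terms):
    """Partial Fourier sum: (4/pi) * sum of sin(2 pi (2k-1) f t)/(2k-1)."""
    x = np.zeros_like(t)
    for k in range(1, n_terms + 1):
        x += np.sin(2*np.pi*(2*k - 1)*f*t)/(2*k - 1)
    return 4/np.pi*x

# The mean-square error shrinks as more terms are added
err = [np.mean((partial_sum(n) - square)**2) for n in (1, 4, 16, 64)]
assert err[0] > err[1] > err[2] > err[3]
```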

Discrete Fourier Transformation

In practice, real data are sampled from a continuously varying signal, and are thus discrete and time-limited. For a data set of discrete observations, we do not need an infinite number of waves to represent the signal. Specifically, if we have \(N\) discrete data points, we only require \(N\) waves to make up our signal.

If the data are sampled with a constant sampling frequency and there are \(N\) data points,

\[\label{eq:dft} {f_\tau } = \frac{1}{N}\sum\limits_{n = 0}^{N - 1} {{F_n}\,{e^{2\pi i\,\frac{{n\,\tau }}{N}}}}\]

The coefficients \(F_n\) can be obtained by

\[\label{eq:idft} {F_n} = \sum\limits_{\tau = 0}^{N - 1} {{f_\tau }\,{e^{ - 2\pi i\,\frac{{n\tau }}{N}}}}\]

Since the transform involves a discrete, limited number of data points and a discrete, limited number of waves, it is referred to as the Discrete Fourier Transform (DFT).
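The two sums above translate directly into code. The following Python/NumPy sketch implements them naively (illustrative only; note that `np.fft.fft` happens to use the same convention, with the \(1/N\) factor in the inverse transform):

```python
import numpy as np

def naive_dft(f):
    """Forward DFT: F_n = sum_tau f_tau * exp(-2 pi i n tau / N)."""
    N = len(f)
    n = np.arange(N).reshape(-1, 1)   # frequency index (column)
    tau = np.arange(N)                # time index (row)
    return np.exp(-2j*np.pi*n*tau/N) @ f

def naive_idft(F):
    """Inverse DFT: f_tau = (1/N) * sum_n F_n * exp(+2 pi i n tau / N)."""
    N = len(F)
    tau = np.arange(N).reshape(-1, 1)
    n = np.arange(N)
    return (np.exp(2j*np.pi*n*tau/N) @ F)/N

rng = np.random.default_rng(0)
f = rng.standard_normal(16)

# Matches NumPy's FFT (same sign and normalization convention) ...
assert np.allclose(naive_dft(f), np.fft.fft(f))
# ... and the inverse transform recovers the original signal
assert np.allclose(naive_idft(naive_dft(f)), f)
```

This naive version performs \(O(N^2)\) multiplications; the FFT discussed below achieves the same result far more efficiently.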

Fast Fourier Transformation

The calculation of \(F_n\) can be computationally expensive, since it requires \(O(N^2)\) multiplication operations. However, if there are exactly \(2^n\) data points, the formula for the coefficients \(F_n\) simplifies considerably, and \(F_n\) can then be computed with only \(O(N \log N)\) multiplication operations using a Fast Fourier Transform (FFT). The difference in speed can be substantial, especially when there are many data points.

In Matlab, the Discrete Fourier Transform F of data set f is found via

F = fft(f);

and the inverse transform with

f = ifft(F);

A frequent source of confusion is the question: which frequency corresponds to \(F_n\)? If there are \(N\) data points and the sampling period is \(T_s\), the \(n^{th}\) frequency is given by

\[\label{eq:FFTfrequencies} f_n = \frac{n}{N \cdot T_s}\ \text{(in Hz)}, \qquad 1 \le n \le N\]

In other words, the lowest frequency is \(\frac{1}{N \cdot T_s}\) Hz, i.e. the inverse of the signal duration, while the highest independent frequency is, due to the Nyquist-Shannon theorem, half the sampling frequency, \(\frac{1}{2 T_s}\).
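This mapping from bin index to frequency is easy to verify with a pure sinusoid (Python/NumPy sketch; the sampling parameters are chosen arbitrarily):

```python
import numpy as np

Ts = 0.001                         # sampling period: 1 ms -> fs = 1000 Hz
N = 1000                           # number of samples
t = np.arange(N)*Ts
x = np.sin(2*np.pi*130*t)          # 130 Hz sinusoid

X = np.fft.fft(x)
# Bin n corresponds to frequency n/(N*Ts); search only up to Nyquist
n_peak = np.argmax(np.abs(X[:N//2]))
print(n_peak/(N*Ts))               # → 130.0
```

Here the frequency resolution is \(\frac{1}{N \cdot T_s} = 1\) Hz, so the 130 Hz component falls exactly on bin 130.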

Power Spectrum

Most FFT functions return the complex Fourier coefficients \(F_n\). If we are only interested in the size of the contribution at the corresponding frequency, we can obtain this information by taking the square of the magnitude of the coefficients:

\[\label{eq:PowerSpectrum} {P_n} = {F_n} \cdot {F_n}^* = {\left| {{F_n}} \right|^2}\]

This is the power spectrum of our signal.

We will take a closer look at a modified version of the FFT example found in the Matlab documentation:

Fs = 1000;                    % Sampling frequency
T = 1/Fs;                     % Sample time
L = 1000;                     % Length of signal
t = (0:L-1)*T;                % Time vector

% Sum of a 50 Hz sinusoid and a 120 Hz sinusoid, with noise
y = 0.7*sin(2*pi*50*t) + sin(2*pi*120*t) + 2*randn(size(t));

figure(1); subplot(4,1,1)
plot(1000*t, y)
title('Signal Corrupted with Zero-Mean Random Noise')
xlabel('time (ms)')

NFFT = 2^nextpow2(L); % Next power of 2 from length of y
Y = fft(y,NFFT)/L;

% Plot double-sided power spectrum
subplot(4,1,2)
f_full = Fs*linspace(0,1,NFFT);
plot(f_full, log10(abs(Y).^2))
title('Double-Sided Power Spectrum')
xlabel('Frequency (Hz)')
ylabel('log Power')

% Plot single-sided power spectrum
subplot(4,1,3)
f_half = Fs/2*linspace(0,1,NFFT/2+1);
plot(f_half, log10(2*abs(Y(1:NFFT/2+1)).^2))
title('Single-Sided Power Spectrum')
xlabel('Frequency (Hz)')
ylabel('log Power')

% Welch periodogram
subplot(4,1,4)
pwelch(y,[],[],[],Fs);
ylabel('Power Spectral Density')

This code produces the graphs in the Figure above.


Power spectra

The top curve in the Figure shows the signal. Note that it is practically impossible to discern the main frequency components of the signal by eye; we need the Fourier transform to extract them from the data.

The second curve shows the double-sided power spectrum. Two prominent peaks are visible; these correspond to the 50 Hz and 120 Hz components of the signal. Since the input data (i.e. the original signal) are real, the power spectrum is symmetric about the Nyquist frequency, which here is 500 Hz.

Because the two halves of the double-sided power spectrum are identical, it is common to collapse all of the power onto the positive frequencies, which gives the single-sided power spectrum, shown in the third line.

To reduce the noise in the power spectrum, a procedure called the Welch periodogram can be employed, resulting in the bottom graph. The data set is broken into a number of separate segments, the power spectrum is calculated for each segment, and the resulting power spectra are averaged. The resulting power signal is normalized and plotted on a linear scale (Power Spectral Density, PSD). Assuming that the frequency distribution stays approximately constant, this enhances the dominant components and reduces random noise.
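The averaging step at the heart of Welch's method can be sketched in a few lines (Python/NumPy for illustration; Matlab's `pwelch` additionally applies windowing and overlapping segments, which are omitted here for clarity):

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 1000
t = np.arange(0, 8, 1/fs)                     # 8 s of data
y = np.sin(2*np.pi*50*t) + rng.standard_normal(len(t))

def averaged_power(sig, n_seg):
    """Split the signal into n_seg pieces and average their power spectra."""
    segs = np.split(sig, n_seg)
    return np.mean([np.abs(np.fft.fft(s))**2 for s in segs], axis=0)

P1 = averaged_power(y, 1)                     # plain periodogram (1 segment)
P8 = averaged_power(y, 8)                     # averaged over 8 segments

# Relative fluctuation of the noise floor (bins between 200 and 400 Hz)
rel1 = np.std(P1[1600:3200])/np.mean(P1[1600:3200])
rel8 = np.std(P8[200:400])/np.mean(P8[200:400])
assert rel8 < rel1    # averaging smooths the noise floor
```

For white noise, the standard deviation of each spectral estimate drops roughly by a factor \(\sqrt{M}\) when \(M\) segments are averaged, at the cost of a coarser frequency resolution.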

Fourier Transform and Convolution

For short signals, convolutions can be calculated efficiently by direct evaluation. For longer signals, however, it becomes much more efficient to calculate them through the Fourier transformation. The basis for this is the convolution theorem

\[\mathcal{F}\{f*g\} = \mathcal{F}\{f\} \cdot \mathcal{F}\{g\}\]

where \(\mathcal{F}\) denotes the Fourier transform, \(*\) denotes convolution, and \(\cdot\) denotes point-wise multiplication.

By applying the inverse Fourier transform \(\mathcal{F}^{-1}\), we can write:

\[f*g= \mathcal{F}^{-1}\big\{\mathcal{F}\{f\}\cdot\mathcal{F}\{g\}\big\}\]
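This relation can be verified directly. Note that for finite sequences the product of DFTs corresponds to circular convolution, so both signals are zero-padded to the length of the linear convolution (Python/NumPy sketch with arbitrary test signals):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(50)
g = rng.standard_normal(20)

# Direct (linear) convolution
direct = np.convolve(f, g)

# Via the convolution theorem: zero-pad, multiply the transforms, transform back
L = len(f) + len(g) - 1
via_fft = np.fft.ifft(np.fft.fft(f, L)*np.fft.fft(g, L)).real

assert np.allclose(direct, via_fft)
```

For long signals the FFT route requires \(O(L \log L)\) operations instead of the \(O(L^2)\) of direct evaluation.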

The Next Level

Short Time Fourier Transform (STFT)

While the Fourier transform provides information about the frequency content of the whole signal, we often want to know when a certain frequency contribution occurs, and how strong it is at each point in time. This information can be obtained by computing the Fourier transform over a short window that slides along the data, and is referred to as the Short Time Fourier Transform (STFT). An example is the analysis of auditory signals, such as the one in Fig. [fig:spectrogram].

You can find more information on http://en.wikibooks.org/wiki/Sensory_Systems/Auditory_System#Computer_Simulations_of_the_Auditory_System.


Spectrogram of the German vowels “a,e,i,o,u”. These correspond approximately to the vowels in the English words “hut, hat, hit, hot, put”. Calculated using the Matlab command “spectrogram(data, 512,256, 512, fs)”.
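A minimal STFT can be written as a loop of windowed FFTs (Python/NumPy sketch with an artificial test signal; Matlab's `spectrogram` does essentially this, with more refined window and overlap handling):

```python
import numpy as np

fs = 1000
t = np.arange(0, 2, 1/fs)
# Frequency changes half-way: 50 Hz in the first second, 200 Hz in the second
x = np.where(t < 1, np.sin(2*np.pi*50*t), np.sin(2*np.pi*200*t))

win = 256                                   # window length (samples)
frames = [x[i:i+win]*np.hanning(win)        # windowed, half-overlapping snippets
          for i in range(0, len(x)-win, win//2)]
stft = np.abs(np.fft.rfft(frames, axis=1))  # one spectrum per time frame

# Dominant frequency per frame, in Hz (resolution fs/win)
peak_freq = np.argmax(stft, axis=1)*fs/win
print(peak_freq[0], peak_freq[-1])          # low early, high late
```

A plain Fourier transform of the whole signal would show both peaks but could not tell which frequency occurred when; the STFT resolves this at the cost of frequency resolution (here \(fs/\mathrm{win} \approx 3.9\) Hz).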


Exercise 1: DFT - Frequency Domain -> Time Domain

Calculate the inverse Fourier Transform of the following data, and explain what happens:

  1. fd = zeros(1,101); fd(11) = 1; fd(92) = 1;
  2. fd = zeros(1,101); fd(11) = i; fd(92) = -i;
  3. fd = zeros(1,101); fd(11) = i; fd(92) = i;
  4. fd = zeros(1,101); fd(11) = 1; fd(92) = 1; fd(1) = 50;

Exercise 2: Power Spectrum

Part 1

Calculate the power spectrum of the following two sine waves, and explain the difference.

  1. t = (0:99)*4*pi/100; x = cos(t);
  2. t = (0:70)*4*pi/100; x = cos(t);

Part 2

Calculate the power spectrum of

t = 0:0.01:10;
s = RandStream('mt19937ar','Seed',1);
randNr = s.randn(size(t));
x = sin(2*pi*0.3*t) + 2*sin(2*pi*2*t) + randNr;

What happens when you replace x by x(1:4:end)?

What happens when you replace x by x+10?

Exercise 3: DFT - Time Domain -> Frequency Domain

Take the signal described below, sampled at a frequency of 100 Hz, and calculate the power spectrum. Then increase the sampling frequency to 400 Hz by interpolating between existing data points, and re-calculate the power spectrum. Which features of the power spectrum change, and why?

t = 0:0.01:10;  % in [sec]
s = RandStream('mt19937ar','Seed',1);
randNr = s.randn(size(t));
x = 10 + sin(2*pi*0.3*t) + 2*sin(2*pi*2*t) + randNr;

Exercise 4: Handwork

For the signal

t = 0:0.1:10;
x = sin(t) + 3*cos(3*t);

calculate the Discrete Fourier Transform by hand, and compare it to fft(x).