Fourier’s wonderful idea: Breaking complex signals into simple pieces

That’s Maths: The Fast Fourier Transform dramatically reduced how computation scales with problem size, making it practicable to solve many problems that could not otherwise be tackled

Joseph Fourier, born just 250 years ago, introduced a wonderful idea that revolutionised science and mathematics: any function or signal can be broken down into simple periodic sine waves. Radio waves, microwaves, infrared radiation, visible light, ultraviolet light, X-rays and gamma rays are all forms of electromagnetic radiation, differing only in frequency.

The ability to break down signals into components of different frequencies is very useful. Examples include tuning a radio to a specific station, using ultrasound to examine a developing foetus and extracting a single phone conversation from a complex combination of inputs.

The equations that govern many physical processes were formulated in the 17th century, but nobody knew how to solve them, even though many of them were linear – that is, with the output proportional to the input. These partial differential equations remained intractable for 100 years or more. Around 1807, Joseph Fourier developed a powerful method of expressing the solutions as combinations of simple trigonometric functions, and thereby solving the equations. His ideas had a profound effect, and now permeate all areas of mathematical physics and engineering.

Fourier showed that (almost) any periodic function can be expressed as a sum of sine and cosine functions. This allowed him to solve many of the equations of physics. Fourier analysis also turned out to be an ideal language for quantum mechanics. He showed that functions that are not periodic, but that become small rapidly outside a bounded range, can be analysed into periodic components using what is now called the Fourier transform.
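
For readers who like to see the formulas, the two statements can be written (in one common convention; the placement of the factors of 2π varies between textbooks) as

$$
f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\bigl(a_n \cos nx + b_n \sin nx\bigr)
$$

for a function of period $2\pi$, and

$$
\hat{f}(\omega) = \int_{-\infty}^{\infty} f(t)\,e^{-i\omega t}\,dt,
\qquad
f(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \hat{f}(\omega)\,e^{i\omega t}\,d\omega
$$

for the Fourier transform and its inverse.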


This transform is like a mathematical prism, breaking up inputs into constituents with different frequencies, just as a prism splits light into colours having different wavelengths.

Partial differential equations are solved by Fourier analysis via a “detour” from the physical space (or the time domain) to Fourier space (or the frequency domain), where the solution is simple, and then returning to physical space by an inverse transformation of the solution.

We might compare this to solving the problem of calculating with Roman numerals: what is LXXXVI multiplied by XLI? Roman accountants and book-keepers had various tables to help them, but it was still a cumbersome process. There is an easier way: we convert the two numbers to the “decimal domain”, LXXXVI = 86 and XLI = 41. Then we multiply to get 3,526 and convert this back to the “Roman domain” to get MMMDXXVI.
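
The Roman-numeral detour can even be sketched in a few lines of Python; the helper functions here are written purely for illustration, not taken from any library.

```python
# The "detour": convert to the decimal domain, multiply, convert back.

ROMAN = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
         (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
         (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

def roman_to_int(s):
    """Convert a Roman numeral to an integer."""
    values = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}
    total = 0
    for ch, nxt in zip(s, s[1:] + " "):
        v = values[ch]
        # Subtract when a smaller symbol precedes a larger one (e.g. XL = 40).
        total += -v if nxt != " " and values.get(nxt, 0) > v else v
    return total

def int_to_roman(n):
    """Convert an integer to a Roman numeral."""
    out = []
    for value, symbol in ROMAN:
        while n >= value:
            out.append(symbol)
            n -= value
    return "".join(out)

a, b = roman_to_int("LXXXVI"), roman_to_int("XLI")   # 86, 41
print(a * b)                   # 3526
print(int_to_roman(a * b))     # MMMDXXVI
```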

Partial differential equations are solved in an analogous manner: we transform from the time domain, where all the components are entangled, to the frequency domain where they are distinguishable, solve for each component and transform back to the time domain to get the solution.
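
As a concrete sketch of that detour (an illustrative example of mine, not one from the article), the short Python program below solves the one-dimensional heat equation with periodic boundary conditions: it transforms the initial temperature profile into the frequency domain, where each Fourier mode simply decays on its own, and then transforms back.

```python
import numpy as np

# Heat equation u_t = nu * u_xx on the periodic domain [0, 2*pi),
# solved by a "detour" through Fourier (frequency) space.
N = 256                       # number of grid points (illustrative choice)
nu = 0.1                      # diffusion coefficient (illustrative choice)
t = 1.0                       # time at which we want the solution
x = np.linspace(0, 2 * np.pi, N, endpoint=False)

u0 = np.exp(np.cos(x))        # a smooth periodic initial condition

# Step 1: transform to the frequency domain.
u0_hat = np.fft.fft(u0)
k = np.fft.fftfreq(N, d=1.0 / N)    # integer wavenumbers for this grid

# Step 2: in Fourier space each mode evolves independently:
# d/dt u_hat_k = -nu * k**2 * u_hat_k, so u_hat_k(t) = u_hat_k(0) * exp(-nu * k**2 * t).
u_hat = u0_hat * np.exp(-nu * k**2 * t)

# Step 3: transform back to physical space.
u = np.real(np.fft.ifft(u_hat))

print(u[:5])   # the smoothed-out temperature profile at time t
```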

With the development of computers it became possible to analyse complicated signals using Fourier analysis. But the amount of calculation grows rapidly with the length of the signal. The Fast Fourier Transform (FFT), developed around 1965, provided a solution to this problem. It dramatically reduced how the computation scales with problem size – from the order of N² operations to the order of N log N for a signal of length N – making it practicable to solve many problems that could not otherwise be tackled.
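
A minimal sketch of that difference, using NumPy (the signal length and test signal are illustrative choices of mine): the direct transform below does roughly N² operations, while the FFT needs only about N log N, and the gap is already obvious for a few thousand samples.

```python
import time
import numpy as np

def naive_dft(x):
    """Direct discrete Fourier transform: about N^2 operations."""
    N = len(x)
    n = np.arange(N)
    # Matrix of complex exponentials e^{-2*pi*i*k*n/N}.
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)
    return W @ x

signal = np.random.default_rng(0).standard_normal(2048)

t0 = time.perf_counter()
slow = naive_dft(signal)
t1 = time.perf_counter()
fast = np.fft.fft(signal)             # Fast Fourier Transform: about N log N operations
t2 = time.perf_counter()

print(np.allclose(slow, fast))        # True: the two methods give the same result
print(f"naive: {t1 - t0:.3f}s, FFT: {t2 - t1:.5f}s")
```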

Peter Lynch is emeritus professor at UCD School of Mathematics & Statistics – he blogs at thatsmaths.com