# Cross-Correlation

Published May 19, 2022  -  By Marco Garosi

You can find an introduction to signals here. This post is part of a series on Image and Signal Processing. If you are looking for convolution, you may find it here.

Cross-correlation is a measure of similarity between two signals; that measure is computed as a function of the displacement of one relative to the other.

## Mathematical definition

Cross-correlation is mathematically defined as:

$[f_1 \bigotimes f_2](t) = \int_{-\infty}^{+\infty} \bar{f_1}(\tau) f_2(\tau - t) d\tau$

Cross-correlation formula

Now, the important part is understanding what’s going on here. The integrand is just the point-by-point product of the two functions — $\bar{f_1}$ and $f_2$. That means you: evaluate $\bar{f_1}$ at every point from $-\infty$ to $+\infty$; do the same for $f_2$; multiply the two values at each point, e.g. $\bar{f_1}(0) \cdot f_2(0)$, $\bar{f_1}(1) \cdot f_2(1)$, etc.; and finally integrate (sum) all those products into a single number.

As you may see, $f_2$ also features another term: $t$. As explained in Signals, the basics, subtracting some quantity $t$ from the input of a function/signal does nothing but translate it to the right (it gets delayed in time). Of course, if $t$ is negative, the two $-$ signs cancel out and you get a function translated to the left.

As you may have already noticed, the only input to the cross-correlation is $t$: you compute the integral of $f_1$ against $f_2$ with $f_2$ translated left or right by $t$. This means that the integral computes the similarity between the two signals once they are fixed on the axis, while $t$ lets you decide where to move $f_2$.

### Sliding

Computing cross-correlation is no more than fixing the first signal, sliding the second signal over it, and computing the integral for every possible lag $t$. Taken literally, this is impossible on a computer: the signals extend from $-\infty$ to $+\infty$, so you would need infinite memory.

This is not a problem in practice, however: cross-correlation is widely used to support many image and signal processing techniques. How do we do that? Well, computers work with digital values, which means we sample signals and only have a finite amount of data. What we actually compute is a discretized version of the cross-correlation (explained below).
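To make the sliding idea concrete, here is a minimal sketch in Python with NumPy (the function name `xcorr` is illustrative, and the signals are assumed real-valued and finite-length, so the conjugate is a no-op): for every lag, shift one signal, multiply the overlapping samples point-by-point, and sum.

```python
import numpy as np

def xcorr(x1, x2):
    """Discrete cross-correlation of two finite, real-valued signals.

    For each lag n, x2 is shifted by n and multiplied point-by-point
    with x1; the products are summed (the discrete analogue of the
    integral in the continuous definition).
    """
    M, N = len(x1), len(x2)
    # Every lag that leaves at least one overlapping sample.
    lags = range(-(N - 1), M)
    out = []
    for n in lags:
        s = 0.0
        for k in range(M):
            j = k - n
            if 0 <= j < N:           # only overlapping samples contribute
                s += x1[k] * x2[j]
        out.append(s)
    return np.array(out)

a = np.array([1.0, 2.0, 3.0])
b = np.array([0.0, 1.0, 0.5])
print(xcorr(a, b))                      # manual sliding sum
print(np.correlate(a, b, mode="full"))  # NumPy's built-in, for comparison
```

The explicit double loop is only for clarity; in practice you would call `np.correlate` (or an FFT-based routine) directly.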

## Normalized cross-correlation

Signals are often subject to noise: they are not clean, pure, mathematically-defined signals — they come from the real world. Plain cross-correlation may thus produce misleading values. To mitigate this, you can use normalized cross-correlation, which divides by the energies of $f_1$ and $f_2$ to make sure that the result is… well, normalized.

Normalized cross-correlation produces values bounded in $[-1, 1]$: $[f_1 \bar{\bigotimes} f_2](t) \in [-1, 1]$. It is mathematically defined as:

$[f_1 \bar{\bigotimes} f_2](t) = \frac{\int_{-\infty}^{+\infty} \bar{f_1}(\tau) f_2(\tau - t) d\tau}{\sqrt{E_{f_1} E_{f_2}}}$

Normalized cross-correlation formula
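The discrete version of this normalization can be sketched as follows (real-valued signals assumed; the helper name `normalized_xcorr` is illustrative):

```python
import numpy as np

def normalized_xcorr(x1, x2):
    """Normalized cross-correlation of two real-valued signals.

    Divides the raw cross-correlation by the square root of the
    product of the signal energies, bounding the result in [-1, 1].
    """
    raw = np.correlate(x1, x2, mode="full")
    energy = np.sqrt(np.sum(x1**2) * np.sum(x2**2))
    return raw / energy

x = np.array([1.0, 2.0, 1.0])
# A scaled copy of x: the peak of the normalized cross-correlation
# reaches exactly 1, regardless of the scaling factor.
print(normalized_xcorr(x, 3.0 * x).max())  # → 1.0
```

The $[-1, 1]$ bound is just the Cauchy–Schwarz inequality: equality (a peak of exactly $\pm 1$) happens only when one signal is a scaled, shifted copy of the other.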

## Autocorrelation

There’s a special case: when $f_1 = f_2$ — that is, $f_1$ and $f_2$ are exactly the same signal. In that case, cross-correlation is called autocorrelation, since the signal is cross-correlated with itself.

Informally, this is the measure of similarity between two different observations of the same signal as a function of the time lag $t$ between them.

It’s particularly useful for finding repeating patterns, detecting periodic signals obscured by noise, estimating a signal’s fundamental frequency, etc.

## Discrete cross-correlation

Computers work with discrete sets of values. That is why we use discrete cross-correlation, which is defined as:

$[x_1 \bigotimes x_2](n) = \sum_{k = -\infty}^{+\infty} \bar{x_1}(k)\, x_2(k - n), \quad k \in \mathbb{Z}$

Discrete cross-correlation

If $x_1$ has length $M$ and $x_2$ has length $N$, then the discrete cross-correlation $\bigotimes$ is going to have length $M + N - 1$.
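You can check this length with NumPy’s built-in `np.correlate`, whose `"full"` mode keeps every lag with at least one overlapping sample:

```python
import numpy as np

x1 = np.array([1.0, 2.0, 3.0, 4.0])   # length M = 4
x2 = np.array([1.0, 0.0, -1.0])       # length N = 3

# "full" mode returns one value per lag with any overlap,
# hence M + N - 1 values in total.
c = np.correlate(x1, x2, mode="full")
print(c.size)   # → 6  (4 + 3 - 1)
```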

## Cross-correlation with images

Images can be thought of as 2D signals. In fact, $image(m, n) = z$, where $m$ and $n$ are the pixel coordinates and $z$ is the grayscale or RGB value of the pixel at $(m, n)$.

It is therefore possible to compute the discrete cross-correlation between two images — call them $x_1$ and $x_2$. The formula is:

$[x_1 \bigotimes x_2](m, n) = \sum_{u = -\infty}^{+\infty} \sum_{v = -\infty}^{+\infty} x_1(u, v)\, x_2(u - m, v - n)$

Discrete 2D cross-correlation

Image $x_1$ is said to be the template or kernel, while $x_2$ is called the image.

2D cross-correlation is particularly useful for finding something in an image: if you have a sample of what you are looking for (the kernel), you can slide it over the whole image and find where it best matches the image itself. If the peak of the cross-correlation exceeds a threshold, you can be pretty confident you found the match you were looking for.

2D cross-correlation can also be used to filter images with “special” kernels: you may use it to blur an image, to remove noise, to reinforce its edges and shapes, etc. Cross-correlation is a really powerful tool that comes in very handy.
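As a toy example of the template-matching idea above, the sketch below (using SciPy’s `correlate2d`; the image and template are made up) slides a small template over an image and reads off the best-match position from the peak of the correlation map:

```python
import numpy as np
from scipy.signal import correlate2d

# A toy 8x10 grayscale "image", all black except a bright 2x2
# patch whose top-left corner is at row 3, column 5.
image = np.zeros((8, 10))
image[3:5, 5:7] = 1.0

# The template (kernel) we are searching for.
template = np.ones((2, 2))

# Slide the template over the image; "valid" keeps only the
# positions where the template fits entirely inside the image.
score = correlate2d(image, template, mode="valid")

# The peak of the correlation map is the best-match position.
row, col = np.unravel_index(np.argmax(score), score.shape)
print(row, col)   # → 3 5  (top-left corner of the match)
```

On real images, raw correlation scores are dominated by bright regions, which is exactly why the normalized variant from earlier is usually preferred for template matching.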
