Published May 19, 2022  -  By Marco Garosi

You can find an introduction to signals here. This post is part of a series on Image and Signal Processing. If you are looking for convolution, you may find it here.

Cross-correlation is a measure of similarity between two signals; that measure is computed as a function of the displacement of one relative to the other.

Mathematical definition

Cross-correlation is mathematically defined as:

[f_1 \bigotimes f_2](t) = \int_{-\infty}^{+\infty} \bar{f_1}(\tau) f_2(\tau - t)\, d\tau

Cross-correlation formula

Now, the important part is understanding what’s going on here. The integral is just the point-by-point multiplication of the values of the two functions, f_1 and f_2. That means that you: evaluate \bar{f_1} at every point from -\infty to +\infty; do the same for f_2; multiply the two values at each point, e.g. \bar{f_1}(0) \cdot f_2(0), \bar{f_1}(1) \cdot f_2(1), etc.; and finally sum (integrate) all those products.

As you may see, f_2 also features another term: t. As explained in Signals, the basics, subtracting some quantity t from the input of a function/signal does nothing but translate it to the right (it gets delayed in time). Of course, if t is negative, the two minus signs cancel out and you get a function translated to the left.

As you may have already seen, the only input to the cross-correlation is t: you compute the cross-correlation between f_1 and f_2, with f_2 translated left or right. This means that the integral computes the similarity between the two signals once they are fixed on the axis, while t lets you decide where to move f_2.


Computing cross-correlation is no more than fixing the first signal, making the second signal slide over it and, for every possible lag t, computing the integral. Computed this way, cross-correlation is basically impossible to evaluate on a computer: a continuous signal over an infinite domain would need infinite memory.
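To make the sliding-and-integrating idea concrete, here is a brute-force sketch in Python with NumPy, using made-up toy signals. For sampled signals the integral becomes a sum over the overlapping samples at each lag:

```python
import numpy as np

def cross_correlation(f1, f2):
    """Brute-force cross-correlation of two real, finite, sampled signals.

    For each lag t, f2 is shifted by t and the overlapping samples are
    multiplied point by point and summed (the discrete analogue of the
    integral in the definition above).
    """
    M, N = len(f1), len(f2)
    result = []
    for t in range(-(N - 1), M):        # every lag with at least one overlap
        total = 0.0
        for k in range(M):
            j = k - t                   # f2 evaluated at (k - t)
            if 0 <= j < N:
                total += f1[k] * f2[j]
        result.append(total)
    return np.array(result)

f1 = np.array([1.0, 2.0, 3.0])
f2 = np.array([0.0, 1.0, 0.5])
print(cross_correlation(f1, f2))  # same as np.correlate(f1, f2, mode="full")
```

The double loop is deliberately naive to mirror the definition; in practice you would call an optimized routine such as `np.correlate`.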

This is not a problem, however: cross-correlation is widely used to support many image and signal processing techniques. How do we do that? Well, computers work with digital values, which means we sample signals and only have a finite amount of data. What we actually compute is a discretized version of the cross-correlation (explained below).

Normalized cross-correlation

Signals are often subject to noise: they are not clean, pure, mathematically-defined signals. They come from the real world. Computing a cross-correlation on them may thus produce misleading values. To try to avoid this, you can use normalized cross-correlation, which basically takes into account the energy of f_1 and f_2 to make sure that the result is… well, normalized.

Normalized cross-correlation produces values bounded in [-1, 1]: [f_1 \bar{\bigotimes} f_2](t) \in [-1, 1]. It is mathematically defined as:

[f_1 \bar{\bigotimes} f_2](t) = \frac{\int_{-\infty}^{+\infty} \bar{f_1}(\tau) f_2(\tau - t)\, d\tau}{\sqrt{E_{f_1} E_{f_2}}}

Normalized cross-correlation formula
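A minimal sketch of this for sampled real-valued signals, assuming the energy of a discrete signal is the sum of its squared samples, and leaning on `np.correlate` for the raw correlation:

```python
import numpy as np

def normalized_cross_correlation(x1, x2):
    """Cross-correlation divided by the square root of the product of the
    signals' energies, so the result is bounded in [-1, 1]."""
    energy = np.sqrt(np.sum(x1 ** 2) * np.sum(x2 ** 2))
    return np.correlate(x1, x2, mode="full") / energy

x = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
ncc = normalized_cross_correlation(x, x)
print(ncc.max())  # 1.0 at zero lag: the signal matches itself perfectly
```

The bound follows from the Cauchy–Schwarz inequality: no lag can produce a product larger than the geometric mean of the two energies.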


There’s one special case: when f_1 = f_2, that is, when f_1 and f_2 are exactly the same signal. In that case, cross-correlation is called autocorrelation, since the signal is cross-correlated with itself.

Informally, this is the measure of similarity between two different observations of the same signal as a function of the time lag t between them.

It’s particularly useful for finding repeating patterns, detecting a periodic signal obscured by noise, estimating the fundamental frequency, etc.
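As an illustration, here is a sketch of recovering the period of a noisy sine via autocorrelation; the signal, its period and the noise level are all made up for the example:

```python
import numpy as np

# A noisy sine with period 50 samples (all values here are illustrative).
rng = np.random.default_rng(0)
period = 50
t = np.arange(1000)
signal = np.sin(2 * np.pi * t / period) + 0.2 * rng.standard_normal(t.size)

# Autocorrelation = cross-correlation of the signal with itself;
# keep only the non-negative lags.
ac = np.correlate(signal, signal, mode="full")[signal.size - 1:]

# Skip the big lobe around lag 0: find where the autocorrelation first
# dips below zero, then take the highest peak after that point.
first_negative = int(np.argmax(ac < 0))
lag = first_negative + int(np.argmax(ac[first_negative:]))
print(lag)  # close to the true period of 50
```

Despite the noise, the autocorrelation peaks again near multiples of the period, and the first such peak gives an estimate of the fundamental.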

Discrete cross-correlation

Computers work with discrete sets of values. That is why we use discrete cross-correlation, which is defined as:

[x_1 \bigotimes x_2](n) = \sum_{k = -\infty}^{+\infty} \bar{x_1}(k)\, x_2(k - n), \quad k \in \Z

Discrete cross-correlation

If x_1 has length M and x_2 has length N, then the discrete cross-correlation is going to have length M + N - 1.
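This is easy to check with NumPy’s `np.correlate` in `full` mode, which returns one value for every lag with at least one overlapping sample:

```python
import numpy as np

x1 = np.array([1.0, 2.0, 3.0, 4.0])   # M = 4
x2 = np.array([1.0, 0.0, -1.0])       # N = 3

# "full" mode keeps every lag where the signals overlap at all.
full = np.correlate(x1, x2, mode="full")
print(len(full))  # 6 == M + N - 1
```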

Cross-correlation with images

Images can be thought of as 2D signals. In fact, image(m, n) = z, where m and n are the pixel coordinates and z is the grayscale or RGB value of the pixel at (m, n).

It is therefore possible to compute the discrete cross-correlation between two images, call them x_1 and x_2. The formula is:

[x_1 \bigotimes x_2](m, n) = \sum_{u = -\infty}^{+\infty} \sum_{v = -\infty}^{+\infty} x_1(u, v)\, x_2(u - m, v - n)

Discrete 2D cross-correlation

Image x_1 is said to be the template or kernel, while x_2 is called the image.

2D cross-correlation is particularly useful for finding something in an image: if you have a sample of what you are looking for (the kernel), you can slide it over the whole image and find where it best matches. If the peak of the cross-correlation exceeds a threshold, you can be pretty confident you found the match you were looking for.
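A minimal, illustrative sketch of this template-matching idea, using a made-up toy image and kernel (real code would typically reach for an optimized routine such as `scipy.signal.correlate2d`):

```python
import numpy as np

def match_template(image, kernel):
    """Slide `kernel` over `image` and return the 2D cross-correlation at
    every position where the kernel fully fits inside the image."""
    H, W = image.shape
    h, w = kernel.shape
    out = np.empty((H - h + 1, W - w + 1))
    for m in range(out.shape[0]):
        for n in range(out.shape[1]):
            out[m, n] = np.sum(image[m:m + h, n:n + w] * kernel)
    return out

# A tiny "image" with a bright 2x2 blob at row 3, column 4.
image = np.zeros((8, 8))
image[3:5, 4:6] = 1.0
kernel = np.ones((2, 2))  # a sample of what we are looking for

scores = match_template(image, kernel)
peak = tuple(int(i) for i in np.unravel_index(np.argmax(scores), scores.shape))
print(peak)  # (3, 4): where the kernel best matches the image
```

In practice you would also normalize the scores (as in normalized cross-correlation) so that bright regions don’t dominate purely because of their intensity.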

2D cross-correlation can also be used to filter images with “special” kernels: you may use it to blur an image, remove noise, reinforce its edges and shapes, etc. Cross-correlation is a really powerful tool that comes in very handy.
