In convolution, the kernel specifies the weights and positions of each pixel's neighbors. To find the convolved value of a pixel, the central pixel of the kernel is placed on that pixel; the values of every overlapping pixel in the kernel and image are then multiplied by each other and summed over all the kernel pixels. For the kernel to have a single central pixel, its sides have to be an odd number of pixels. This process effectively mixes each pixel's value with those of its neighbors, producing an image that is blurred relative to the sharper input.

Formally, convolution is one kind of linear ‘spatial filtering’ in image-processing texts. Assuming the kernel has \(2a+1\) and \(2b+1\) pixels along its two sides, the convolved value of a pixel at position \(x\) and \(y\) (\(C_{x,y}\)) can be calculated from the neighboring pixel values of the input image (\(I\)) and the kernel (\(K\)) through

$$C_{x,y}=\sum_{s=-a}^{a}\sum_{t=-b}^{b}K_{s,t}\times{}I_{x+s,y+t}.$$

In the equation above, any pixel that falls outside of the image is considered to be zero (although, see Edges in the spatial domain).
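The sum above can be sketched directly in pure Python (an illustration only, not Gnuastro's actual implementation, which is in C and parallelized). Here `a` and `b` are the kernel half-sizes along the rows and columns respectively, the kernel is assumed to be already flipped (or symmetric), and pixels outside the image are taken as zero, as just described:

```python
def convolve(image, kernel):
    """Convolve a 2D list `image` with a 2D list `kernel` (odd sides)."""
    ih, iw = len(image), len(image[0])
    a, b = len(kernel) // 2, len(kernel[0]) // 2   # kernel half-sizes
    out = [[0.0] * iw for _ in range(ih)]
    for y in range(ih):
        for x in range(iw):
            total = 0.0
            for s in range(-a, a + 1):
                for t in range(-b, b + 1):
                    yy, xx = y + s, x + t
                    if 0 <= yy < ih and 0 <= xx < iw:  # outside image -> 0
                        total += kernel[s + a][t + b] * image[yy][xx]
            out[y][x] = total
    return out
```

For example, convolving an image containing a single bright pixel with a 3×3 kernel whose weights are all 1/9 spreads that pixel's value evenly over a 3×3 region.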
When the kernel is symmetric about its center, the convolved image keeps the same orientation as the original. However, if the kernel is not symmetric, features in the output are shifted in the direction opposite to the kernel's asymmetry; this is a natural consequence of the definition of spatial filtering. To avoid this, the kernel can be rotated about its center by 180 degrees, so the convolved output retains the original orientation (this is done by default in the Convolve program). Technically speaking, the process is only known as *convolution* when the kernel is flipped; without the flip it is known as *correlation*.
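The 180-degree rotation can be sketched as follows (an illustrative helper, not a Gnuastro function). For a kernel that is symmetric about its center the flip is a no-op, which is why the distinction between convolution and correlation only matters for asymmetric kernels:

```python
def flip180(kernel):
    """Rotate a 2D kernel by 180 degrees about its center:
    reverse the order of the rows, then of each row's pixels."""
    return [row[::-1] for row in kernel[::-1]]
```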

For the process to be a weighted average, the sum of the weights (the pixel values of the kernel) has to be unity. As a consequence, the convolved image of an object and the unconvolved object will have the same brightness (see Brightness, Flux, Magnitude and Surface brightness). This is natural, because convolution should not eat up the object's photons; it only disperses them.
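Normalizing a kernel to unit sum is a one-liner (a hypothetical helper for illustration; it is not part of Gnuastro's interface):

```python
def normalize(kernel):
    """Divide every kernel pixel by the sum of all pixels, so the
    weights sum to unity and total brightness is preserved."""
    total = sum(sum(row) for row in kernel)
    return [[v / total for v in row] for row in kernel]
```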

The convolution of each pixel is independent of the other pixels, and in some cases it may be necessary to convolve different parts of an image separately (for example, when you have different amplifiers on the CCD). Therefore, to speed up spatial convolution, Gnuastro first defines a tessellation over the input, grouping its pixels into “tiles”, and then convolves the tiles in parallel. For more on how Gnuastro’s programs create the tile grid (tessellation), see Tessellation.
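The tessellation idea can be sketched as generating the pixel bounds of each tile (a simplified illustration; Gnuastro's real tiling also handles channels and tile-edge effects):

```python
def tiles(height, width, tile_h, tile_w):
    """Yield (y0, y1, x0, x1) bounds of the tiles covering an image;
    tiles on the last row/column may be smaller than the rest."""
    for y0 in range(0, height, tile_h):
        for x0 in range(0, width, tile_w):
            yield (y0, min(y0 + tile_h, height),
                   x0, min(x0 + tile_w, width))
```

Each tile's bounds can then be handed to a separate thread, which convolves only the pixels inside them.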


GNU Astronomy Utilities 0.23 manual, July 2024.