7.4.3.7 Surface brightness limit of image

As we make more observations of one region of the sky and add/combine the observations into one dataset, both the signal and the noise increase. However, the signal increases much faster than the noise: assuming you add \(N\) datasets with equal exposure times, the signal increases as a multiple of \(N\), while the noise increases as \(\sqrt{N}\). Therefore the signal-to-noise ratio increases by a factor of \(\sqrt{N}\). Visually, fainter (per pixel) parts of the objects/signal in the image become more visible/detectable. This noise level is known as the dataset’s surface brightness limit.
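
For a quick numerical check of this scaling, here is a minimal Python sketch (independent of Gnuastro; the per-exposure signal, noise level and number of exposures are arbitrary illustrative values):

     import numpy as np
     rng = np.random.default_rng(1)

     signal, sigma, N = 10.0, 5.0, 100     # Per-exposure signal, noise and number of exposures.

     # Stack N exposures of many independent pixels: the signal adds
     # linearly, while the noise adds in RMS.
     exposures = signal + rng.normal(0, sigma, size=(N, 100000))
     stack = exposures.sum(axis=0)

     print(stack.mean() / stack.std())     # Measured S/N of the stack (~20).
     print(np.sqrt(N) * signal / sigma)    # Expected: sqrt(N) x single-exposure S/N = 20.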

You can think of the noise as muddy water that is completely covering a flat ground(216). The signal (coming from astronomical objects in real data) will be summits/hills that start from the flat sky level (under the muddy water) and whose summits can sometimes reach above the muddy water. Let’s assume that in your first observation the muddy water has just been stirred and, except for a few small peaks, you cannot see anything through the mud. As you wait and make more observations/exposures, the mud settles down and the depth of the transparent water increases. As a result, more and more summits become visible and the lower parts of the hills (parts with lower surface brightness) can be seen more clearly. In this analogy(217), height (from the ground) is the surface brightness, and the height of the muddy water at the moment you combine your data is your surface brightness limit for that moment.

The outputs of NoiseChisel include the Sky standard deviation (\(\sigma\)) on every group of pixels (a tile), calculated from the undetected pixels in each tile, see Tessellation and NoiseChisel output. Let’s take \(\sigma_m\) as the median \(\sigma\) over the successful tiles in the image (prior to interpolation or smoothing). It is recorded in the MEDSTD keyword of the SKY_STD extension of NoiseChisel’s output.
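
If you want to inspect this value yourself, one simple way (outside of Gnuastro, as a minimal sketch) is to read the keyword with Python’s Astropy; the file name below is hypothetical:

     from astropy.io import fits

     # Hypothetical NoiseChisel output; MEDSTD is a keyword of its SKY_STD extension.
     with fits.open("image_detected.fits") as hdul:
         sigma_m = hdul["SKY_STD"].header["MEDSTD"]

     print(sigma_m)    # Median Sky standard deviation, in the image's pixel units.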

On different instruments, pixels cover different spatial angles over the sky. For example, the width of each pixel on the ACS camera on the Hubble Space Telescope (HST) is roughly 0.05 seconds of arc, while the pixels of SDSS are each 0.396 seconds of arc (almost eight times wider(218)). Nevertheless, irrespective of its sky coverage, a pixel is our unit of data collection.

To start with, we define the low-level Surface brightness limit or depth, in units of magnitude/pixel with the equation below (assuming the image has zero point magnitude \(z\) and we want the \(n\)th multiple of \(\sigma_m\)).

$$SB_{n\sigma,\rm pixel}=-2.5\times\log_{10}{(n\sigma_m)}+z \quad\quad [mag/pixel]$$
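
As an illustration of this equation, here is a minimal Python sketch (the zero point, \(\sigma_m\) and \(n\) are arbitrary example values, not from any real dataset):

     import numpy as np

     def sb_limit_pixel(n, sigma_m, z):
         """n-sigma surface brightness limit, in mag/pixel."""
         return -2.5 * np.log10(n * sigma_m) + z

     # Illustrative values only: zero point of 25 mag, median Sky
     # standard deviation of 0.01 counts/pixel, 1-sigma limit.
     print(sb_limit_pixel(n=1, sigma_m=0.01, z=25))    # --> 30.0 mag/pixel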

As an example, the XDF survey covers part of the sky that the HST has observed the most (for 85 orbits) and is consequently very small (\(\sim4\) minutes of arc, squared). On the other hand, the CANDELS survey is one of the widest multi-color surveys done by the HST, covering several fields (about 720 arcmin\(^2\)), but its deepest fields have only 9 orbits of observation. The \(1\sigma\) depth of the XDF and CANDELS-deep surveys in the near infrared WFC3/F160W filter are respectively 34.40 and 32.45 magnitudes/pixel. In a single orbit image, this same field has a \(1\sigma\) depth of 31.32 magnitudes/pixel. Recall that a larger magnitude corresponds to fainter objects, see Brightness, Flux, Magnitude and Surface brightness.

The low-level magnitude/pixel measurement above is only useful when all the datasets you want to use, or compare, have the same pixel size. However, you will often find yourself using, or comparing, datasets from various instruments with different pixel scales (projected pixel width, in arc-seconds). If we know the pixel scale, we can obtain a more easily comparable surface brightness limit in units of magnitude/arcsec\(^2\). But another complication is that astronomical objects are usually larger than 1 arcsec\(^2\). As a result, it is common to measure the surface brightness limit over a larger (but fixed, depending on context) area.

Let’s assume that every pixel is \(p\) arcsec\(^2\) and we want the surface brightness limit for an object covering \(A\) arcsec\(^2\) (so \(A/p\) is the number of pixels that cover an area of \(A\) arcsec\(^2\)). On the other hand, noise is added in RMS(219), hence the noise level in \(A\) arcsec\(^2\) is \(n\sigma_m\sqrt{A/p}\). But we want the noise level per unit arcsec\(^2\), so we should divide this by \(A\) arcsec\(^2\): \(n\sigma_m\sqrt{A/p}/A=n\sigma_m\sqrt{A/(pA^2)}=n\sigma_m/\sqrt{pA}\). Plugging this into the magnitude equation, we get the \(n\sigma\) surface brightness limit, over an area of \(A\) arcsec\(^2\), in units of magnitudes/arcsec\(^2\):

$$SB_{n\sigma,\rm A\,arcsec^2}=-2.5\times\log_{10}{\left({n\sigma_m\over\sqrt{pA}}\right)}+z \quad\quad [mag/arcsec^2]$$
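
Continuing the sketch above, the same calculation over an area (the pixel scale, area and multiple of \(\sigma\) are again arbitrary assumptions):

     import numpy as np

     def sb_limit_area(n, sigma_m, z, p, A):
         """n-sigma surface brightness limit over A arcsec^2, in mag/arcsec^2.
         p is the area of one pixel in arcsec^2 (pixel scale squared)."""
         return -2.5 * np.log10(n * sigma_m / np.sqrt(p * A)) + z

     # Illustrative values: 3-sigma limit over 100 arcsec^2, with
     # 0.27 arcsec-wide pixels (so p = 0.27**2 arcsec^2).
     print(sb_limit_area(n=3, sigma_m=0.01, z=25, p=0.27**2, A=100))   # ~29.9 mag/arcsec^2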

MakeCatalog will calculate the input dataset’s \(SB_{n\sigma,\rm pixel}\) and \(SB_{n\sigma,\rm A\,arcsec^2}\) and will write them as the SBLMAGPIX and SBLMAG keywords of the output catalog(s), see MakeCatalog output. You can set your desired \(n\)th multiple of \(\sigma\) and the \(A\) arcsec\(^2\) area using the following two options respectively: --sfmagnsigma and --sfmagarea (see MakeCatalog output). Just note that \(SB_{n\sigma,\rm A\,arcsec^2}\) is only calculated if the input has World Coordinate System (WCS); without WCS, the pixel scale cannot be derived.
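
For example, assuming the catalog was written as a FITS table (the file name below is hypothetical), the two keywords can be read back with Astropy:

     from astropy.io import fits

     # Hypothetical MakeCatalog output; the keyword names are those given above.
     with fits.open("catalog.fits") as hdul:
         hdr = hdul[1].header
         print(hdr["SBLMAGPIX"], hdr["SBLMAG"])   # mag/pixel and mag/arcsec^2 limits.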

As you saw in its derivation, the calculation above extrapolates the noise in one pixel over all the input’s pixels! In other words, all pixels are treated independently in the measurement of the standard deviation, so the calculation implicitly assumes that the noise is the same in all of the pixels. But this only happens in individual exposures: reduced data will have correlated noise because they are a stack of many individual exposures that have been warped (thus mixing the pixel values). A more accurate measure, which provides a realistic value for every labeled region, is the upper-limit magnitude, discussed in the next section (Upper limit surface brightness of image).


Footnotes

(216)

The ground is the sky value in this analogy, see Sky value. Note that this analogy only holds for a flat sky value across the surface of the image or ground.

(217)

Note that this muddy water analogy is not perfect, because while the water-level remains the same all over a peak, in data analysis, the Poisson noise increases with the level of data.

(218)

Ground-based instruments like the SDSS suffer from strong smoothing due to the atmosphere. Therefore, increasing the pixel resolution (or decreasing the width of a pixel) will not increase the received information.

(219)

If you add three datasets with noise \(\sigma_1\), \(\sigma_2\) and \(\sigma_3\), the resulting noise level is \(\sigma_t=\sqrt{\sigma_1^2+\sigma_2^2+\sigma_3^2}\), so when \(\sigma_1=\sigma_2=\sigma_3\equiv\sigma\), then \(\sigma_t=\sigma\sqrt{3}\). In this case, the area \(A\) is covered by \(A/p\) pixels, so the noise level is \(\sigma_t=\sigma\sqrt{A/p}\).