
7.4.3.3 Surface brightness error of each detection

We can derive the error in measuring the surface brightness from the surface brightness (SB) equation (see Brightness, Flux, Magnitude and Surface brightness) and the generic magnitude error \(\Delta{M}\) (see Magnitude measurement error of each detection). Let \(A\) represent the area and \(\Delta{A}\) the error in measuring that area. For more on \(\Delta{A}\), see the description of --spatialresolution in MakeCatalog inputs and basic settings.

$$\Delta{(SB)} = \Delta{M} + \left|{-2.5\over \ln(10)}\right|\times{\Delta{A}\over{A}}$$
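
As a brief sketch of where the second term comes from (standard first-order error propagation, using the SB definition \(SB = M + 2.5\log_{10}(A)\) of Brightness, Flux, Magnitude and Surface brightness):

$$SB = M + 2.5\log_{10}(A) \quad\Rightarrow\quad \left|{\partial{(SB)}\over\partial{A}}\right|\times\Delta{A} = \left|{2.5\over \ln(10)}\right|\times{\Delta{A}\over{A}}$$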

In the surface brightness equation mentioned above, \(A\) is in units of arcsecond squared, and the conversion between arcseconds and pixels is just a multiplicative factor. Therefore, as long as \(A\) and \(\Delta{A}\) have the same units, it does not matter whether they are given in arcseconds or pixels. Since the measure of spatial resolution (or area error) is the FWHM of the PSF, which is usually defined in units of pixels, it is more intuitive to use pixels for \(A\) and \(\Delta{A}\).
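
To make the formula concrete, here is a minimal Python sketch (purely illustrative, not part of Gnuastro or MakeCatalog; the function name and the example numbers are hypothetical) that evaluates \(\Delta{(SB)}\) from a magnitude error, an area and an area error, where the last two only need to share the same unit (for example, pixels):

     import math

     def sb_error(delta_m, area, delta_area):
         """First-order surface brightness error:
         Delta(SB) = Delta(M) + |2.5/ln(10)| * Delta(A)/A.
         'area' and 'delta_area' must share the same unit (e.g. pixels);
         the unit cancels in the ratio Delta(A)/A."""
         return delta_m + abs(2.5 / math.log(10)) * (delta_area / area)

     # Hypothetical numbers: a 0.05 mag magnitude error, a detection
     # covering 250 pixels, and an area error of 9 pixels.
     print(sb_error(0.05, 250.0, 9.0))    # --> ~0.089 (mag/arcsec^2)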