GNU Astronomy Utilities

7.4.3.4 Upper limit magnitude of each detection

Due to the noisy nature of data, it is possible to measure arbitrarily low values for a faint object’s brightness (or arbitrarily high magnitudes). Given the scatter caused by the dataset’s noise, values fainter than a certain level are meaningless: another observation of similar depth will give a radically different value.

For example, assume that you have done your detection and segmentation on one filter and now you do measurements over the same labeled regions, but on other filters, to measure colors (as we did in the tutorial Segmentation and making a catalog). Some objects will not have any significant signal in the other filters, but you measure a magnitude of 36 for one of them! This is clearly unreliable (no dataset in current astronomy can detect such a faint signal). In another image of the same depth, in the same filter, you might measure a magnitude of 30 for it, and yet another might give you 33. Furthermore, in some images of the same depth the total brightness might actually be negative (due to noise). In such cases, no magnitude can be defined and MakeCatalog will place a NaN there (recall that a magnitude is a base-10 logarithm).
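A magnitude is only defined for a positive total sum, so a noise-dominated, negative sum has no magnitude. A minimal Python sketch (the zero point of 22.5 and the three sums are made-up values, only for illustration) shows the effect:

```python
import numpy as np

z = 22.5  # hypothetical zero point, only for illustration

# Three made-up total sums: bright, faint, and noise-dominated (negative).
for b in (1000.0, 3.2, -0.8):
    with np.errstate(invalid="ignore"):      # silence the log10(<0) warning
        m = -2.5 * np.log10(b) + z           # magnitude from total sum
    print(f"sum={b:8.1f}  ->  magnitude={m}")
```

The first two sums give ordinary magnitudes; the negative one produces NaN, which is exactly what MakeCatalog reports in this situation.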

Using such unreliable measurements will directly affect our analysis, so the raw measurements must not be used. When approaching the limits of your detection method, it is therefore important to be able to identify such cases. But how can we know how reliable a measurement of one object on a given dataset is?

When we confront such unreasonably faint magnitudes, there is one thing we can deduce: if something actually exists under our labeled pixels (possibly buried deep under the noise), its intrinsic magnitude is fainter than an upper limit magnitude. To find this upper limit magnitude, we place the object’s footprint (segmentation map) over a random, detection-free part of the image and measure the total brightness within the footprint. Doing this a large number of times will give us a distribution of brightness values. The standard deviation ($$\sigma_m$$) of that distribution can be used to quantify the upper limit magnitude for that particular object (given its particular shape and area):

$$M_{up,n\sigma}=-2.5\times\log_{10}{(n\sigma_m)}+z \quad\quad [mag/target]$$
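The procedure can be sketched in a few lines of Python. Everything here (the pure-noise image, the footprint, the zero point, and the number of random placements) is a made-up stand-in for what MakeCatalog does internally on real data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for the undetected part of an image: pure Gaussian sky noise.
sky_std = 5.0
img = rng.normal(0.0, sky_std, size=(200, 200))

# Hypothetical footprint (segmentation map): a roughly circular boolean mask.
yy, xx = np.mgrid[:9, :9]
footprint = (yy - 4)**2 + (xx - 4)**2 <= 16        # 49 True pixels

def upper_limit_mag(img, footprint, n=1000, nsigma=1, zeropoint=22.5):
    """Place the footprint over n random positions, sum the pixels under
    it each time, and convert the scatter of those sums into an
    n-sigma upper limit magnitude."""
    fh, fw = footprint.shape
    ih, iw = img.shape
    sums = np.empty(n)
    for i in range(n):
        y = rng.integers(0, ih - fh)
        x = rng.integers(0, iw - fw)
        sums[i] = img[y:y+fh, x:x+fw][footprint].sum()
    return -2.5 * np.log10(nsigma * sums.std()) + zeropoint

print(upper_limit_mag(img, footprint))
```

For 49 pixels of uncorrelated noise with $$\sigma=5$$, the sums scatter with a standard deviation of about $$5\sqrt{49}=35$$ counts, so with this zero point the 1$$\sigma$$ upper limit comes out near magnitude 18.6.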

Traditionally, the photometry of faint or small objects was done with fixed circular apertures (for example, with a diameter of $$N$$ arc-seconds), and there was not much processing involved (to make a deep stack). Hence, the upper limit was synonymous with the surface brightness limit discussed above: one value for the whole image. The problem with this simplified approach is that the number of pixels in the aperture directly affects the final distribution, and thus the magnitude. The correlated noise of the image may also create certain patterns, so the shape of the object can affect the final result too. Fortunately, with the much more advanced hardware and software of today, we can make customized segmentation maps (footprints) for each object and have enough computing power to actually place each footprint over many random positions. As a result, the per-target upper-limit magnitude and the general surface brightness limit have diverged.
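The area dependence is easy to verify numerically: for uncorrelated noise, the scatter of the aperture sums grows as the square root of the number of pixels. The sketch below uses made-up square apertures on pure noise (the image size, noise level, and aperture sides are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
sky_std = 5.0
img = rng.normal(0.0, sky_std, size=(500, 500))   # pure sky noise

# Sum over 2000 randomly placed square apertures of increasing area, and
# compare the measured scatter with the sqrt(N_pixels)*sigma prediction.
for side in (3, 6, 12):
    sums = np.empty(2000)
    for i in range(2000):
        y = rng.integers(0, 500 - side)
        x = rng.integers(0, 500 - side)
        sums[i] = img[y:y+side, x:x+side].sum()
    print(f"{side*side:3d} pixels: measured scatter={sums.std():6.2f}, "
          f"sqrt(N)*sigma={sky_std*side:6.2f}")
```

Quadrupling the aperture area doubles the scatter of the sums, so an upper limit quoted for one aperture size says nothing about another: the footprint of each object has to be used.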

When any of the upper-limit-related columns are requested, MakeCatalog will randomly place each target’s footprint over the undetected parts of the dataset as described above, and estimate the required properties. The procedure is fully configurable with the options in Upper-limit settings. You can get the full list of upper-limit-related columns of MakeCatalog with this command (the extra -- before --upperlimit is necessary(189)):

$ astmkcatalog --help | grep -- --upperlimit


Footnotes

(189)

Without the extra --, grep will assume that --upperlimit is one of its own options, and will thus abort, complaining that it has no option with this name.