Due to the noisy nature of data, it is possible to get arbitrarily faint magnitudes, especially when you use labels from another image (for example, see Working with catalogs (estimating colors)). Given the scatter caused by the dataset’s noise, values fainter than a certain level are meaningless: another observation of similar depth will give a radically different value. In such cases, a measurement like the image magnitude limit is not useful because it is estimated for a certain morphology and is given for the whole image (it is a crude generalization; see Metameasurements on full input). You want a quality measure that is specific to each object.
For example, assume that you have done your detection and segmentation on one filter, and now you do measurements over the same labeled regions, but on other filters, to measure colors (as we did in the tutorial Segmentation and making a catalog). Some objects are not going to have any significant signal in the other filters, but you measure a magnitude of 36 for one of them! This is clearly unreliable (no dataset in current astronomy is able to detect such a faint signal). In another image of the same depth, using the same filter, you might measure a magnitude of 30 for it, and yet another might give you 33. Furthermore, the total sum of pixel values might actually be negative in some images of the same depth (due to noise). In these cases, no magnitude can be defined and MakeCatalog will place a NaN there (recall that a magnitude is a base-10 logarithm).
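As a quick demonstration that a non-positive sum has no defined magnitude, you can evaluate the magnitude formula on a negative sum with awk (the zero point of 22.5 here is an arbitrary example value); awk will warn about the negative argument to log and print nan:

$ echo "-0.5" | awk '{print -2.5*log($1)/log(10) + 22.5}'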
Using such unreliable measurements will directly affect our analysis, so we must not use them at face value. When approaching the limits of your detection method, it is therefore important to be able to identify such cases. But how can we know how reliable a measurement of one object on a given dataset is?
When we confront such unreasonably faint magnitudes, there is one thing we can deduce: if something actually exists under our labeled pixels (possibly buried deep under the noise), its inherent magnitude is fainter than an upper-limit magnitude. To find this upper-limit magnitude, we place the object’s footprint (segmentation map) over a random part of the image where there are no detections, and measure the sum of pixel values within the footprint. Doing this a large number of times gives us a distribution of such sums. The standard deviation (\(\sigma\)) of that distribution can then be used to quantify the upper-limit magnitude for that particular object (given its particular shape and area):
$$M_{up,n\sigma}=-2.5\times\log_{10}{(n\sigma_m)}+z \quad\quad [mag/target]$$
Here, \(\sigma_m\) is the (sigma-clipped) standard deviation of the distribution of random sums described above, \(n\) is the desired significance level (set with --upnsigma), and \(z\) is the image zero point.
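As a quick numeric sanity check of this formula, suppose \(\sigma_m=0.01\) (in image units), \(n=3\) and \(z=22.5\) (all three values are made up for illustration):

# awk's log() is the natural logarithm; divide by log(10) to get log10.
$ echo 0.01 | awk '{print -2.5*log(3*$1)/log(10) + 22.5}'
26.3072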
Traditionally, faint/small object photometry was done using fixed circular apertures (for example, with a diameter of \(N\) arc-seconds) and there was not much processing involved (to make a deep coadd). Hence, the upper limit was synonymous with the surface brightness limit discussed above: one value for the whole image. The problem with this simplified approach is that the number of pixels in the aperture directly affects the final distribution, and thus the magnitude. Also, the image’s correlated noise can create certain patterns, so the shape of the object can also affect the final result. Fortunately, with the much more advanced hardware and software of today, we can make a customized segmentation map (footprint) for each object, and have enough computing power to actually place that footprint over many random positions. As a result, the per-target upper-limit magnitude and the general surface brightness limit have diverged.
When any of the upper-limit-related columns are requested, MakeCatalog will randomly place each target’s footprint over the undetected parts of the dataset as described above, and estimate the required properties.
The procedure is fully configurable with the options in Upper-limit settings.
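For example, a typical invocation might look like the following sketch (the file names and values are only illustrative; see Upper-limit settings for the exact behavior of each option). It asks for the upper-limit magnitude with 1000 random placements per label and a \(3\sigma\) threshold; --envseed lets you fix the random seed through the GSL_RNG_SEED environment variable for reproducibility:

$ astmkcatalog seg.fits --ids --magnitude --upperlimitmag \
               --upnum=1000 --upnsigma=3 --envseed \
               --zeropoint=22.5 --output=cat.fits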
You can get the full list of upper-limit related columns of MakeCatalog with this command (the extra -- before --upperlimit is necessary: without it, grep would assume --upperlimit is one of its own options, and would abort, complaining that it has no option with that name):
$ astmkcatalog --help | grep -- --upperlimit
--upperlimit
The upper limit value (in units of the input image) for this object or clump. This is the sigma-clipped standard deviation of the random distribution, multiplied by the value of --upnsigma. This is very important for the fainter and smaller objects in the image, where the measured magnitudes are not reliable.
--upperlimitmag
The upper limit magnitude for this object or clump. This is very important for the fainter and smaller objects in the image, where the measured magnitudes are not reliable.
--upperlimitonesigma
The \(1\sigma\) upper limit value (in units of the input image) for this object or clump. When --upnsigma=1, this column’s values will be the same as --upperlimit.
--upperlimitsigma
The position of the label’s sum, measured within the distribution of randomly placed upper-limit measurements, in units of the distribution’s \(\sigma\) or standard deviation.
--upperlimitquantile
The position of the label’s sum within the distribution of randomly placed upper-limit measurements, as a quantile (a value between 0 and 1). If the object is brighter than the brightest randomly placed profile, a value of inf is returned. If it is fainter than the faintest, a value of -inf is reported.
--upperlimitskew
This column contains the non-parametric skew of the \(\sigma\)-clipped random distribution that was used to estimate the upper-limit magnitude. Taking \(\mu\) as the mean, \(\nu\) as the median and \(\sigma\) as the standard deviation, it is defined as \((\mu-\nu)/\sigma\).
This can be a good measure of how much you can trust the random measurements, or in other words, how accurately the regions with signal have been masked/detected. If the skewness is strongly positive, you can tell that there is a lot of undetected signal in the dataset, and therefore that the upper-limit measurement (and other measurements) are not reliable.
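As a sketch of how these columns can be used in practice (the column and file names below follow from the illustrative command given earlier; check your own catalog’s metadata with asttable --information), you can reject measurements that are not significantly above the random distribution, and directly inspect the random distribution of a single (hypothetical) label with --checkuplim:

# Keep only rows at least 3 sigma above the random distribution.
$ asttable cat.fits --range=UPPERLIMIT_SIGMA,3,inf --output=reliable.fits

# Save the table of random positions and sums for label 123 to inspect it.
$ astmkcatalog seg.fits --checkuplim=123 --upnum=1000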