Suppose we have taken two images of the same field of view with the same CCD, once with a smaller telescope and once with a larger one. Because we used the same CCD, the noise will be very similar. However, the larger telescope gathers more light, so the same star or galaxy will have a higher signal-to-noise ratio (S/N) in its image. The same applies to a stacked image of the field compared to a single exposure with the same telescope.
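As a rough numeric sketch of this effect (all numbers here are hypothetical), assume the noise is dominated by the detector (so it is the same in both images) while the collected signal scales with the telescope's collecting area:

```python
import math

# Hypothetical values, for illustration only: the per-object noise is
# fixed by the shared CCD, and the signal scales with collecting area.
noise = 50.0             # fixed noise level from the same CCD (counts)
flux_per_area = 1000.0   # counts collected per unit aperture area (m^2)

def snr(diameter_m):
    """S/N of the same source through an aperture of the given diameter,
    assuming detector-dominated (constant) noise and a signal that is
    proportional to the mirror's collecting area."""
    area = math.pi * (diameter_m / 2) ** 2
    signal = flux_per_area * area
    return signal / noise

for d in (1.0, 2.0, 4.0):
    print(f"{d:.0f} m telescope: S/N = {snr(d):.1f}")
```

Under these assumptions, doubling the diameter quadruples the collecting area and therefore quadruples the S/N of the same object; the same reasoning applies to stacking, where the effective exposure (and hence signal) grows with the number of co-added frames.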
This concept is used by some researchers to define a “magnitude limit” or “detection limit” at a certain S/N (for example 10, 5 or 3, also written as \(10\sigma\), \(5\sigma\) or \(3\sigma\)). To do this, they measure the magnitude and signal-to-noise ratio of all the objects within an image and take the mean (or median) magnitude of the objects at the desired S/N. A fully working example of deriving the magnitude limit is available in the tutorials section: Measuring the dataset limits.
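The core of that procedure can be sketched in a few lines. The catalog below is synthetic (a rough fainter-means-lower-S/N trend with scatter); in practice the magnitude and S/N columns would come from a real catalog of the image, and the `width` parameter (how close to the target S/N an object must be to count) is an illustrative choice, not a standard value:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical catalog: magnitudes and S/N values of detected objects,
# synthesized so that fainter objects have lower S/N, with 10% scatter.
mag = rng.uniform(18, 26, 5000)
sn = 10 ** (0.4 * (26.5 - mag)) * rng.normal(1.0, 0.1, mag.size)

def magnitude_limit(mag, sn, target_sn=5.0, width=0.5):
    """Median magnitude of the objects whose measured S/N falls within
    [target_sn - width, target_sn + width]."""
    near = (sn > target_sn - width) & (sn < target_sn + width)
    return np.median(mag[near])

print(f"5-sigma magnitude limit: {magnitude_limit(mag, sn):.2f}")
```

Using the median rather than the mean makes the estimate robust to the few outliers (for example blended or badly measured objects) that inevitably land near the target S/N.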
However, this method should be used with extreme care, because the shape of the object matters here: a sharper object will have a higher measured S/N than a more diffuse object of the same magnitude. Beyond the inherent shape/sharpness of the object, effects like the PSF also become important (since what matters is the finally observed shape of each object): two surveys with the same surface brightness limit (see Surface brightness limit of image) will have different magnitude limits if one is taken from space and the other from the ground.