GNU Astronomy Utilities



2.8.4 Larger step sizes for better calibration

In Preparing input and generating exposure map we saw that a small pointing pattern is not good for the reduction of data from a large object like M94! M94 is about half a degree in diameter; so let’s set step_arcmin=15. This is one quarter of a degree and will put the centers of the four exposures on the four corners of M94’s main ring. Furthermore, in Script with pointing simulation steps so far, the steps were summarized into a script to allow easy changing of variables without manually re-entering the individual/separate commands.
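
For example, assuming the script of the previous section keeps the pattern parameters in simple shell variables near its top (the variable name below just follows the one used in this tutorial), the only necessary change is:

## Inside the script of the previous section: a quarter-degree step
## (all the other variables keep their previous values).
step_arcmin=15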

After you change step_arcmin=15 and re-run the script, you will get a total area (from summing the per-pixel areas) of approximately 0.96 square degrees. This is roughly half the previous area and will barely fit M94! To understand the cause, let’s have a look at the full stack (not just the deepest area):

$ astscript-fits-view build/stack.fits

Compared to the first run (with step_arcmin=1), we clearly see that there are indeed fewer pixels that receive photons in all 5 exposures. While the area of the deepest part has decreased, the areas with fewer exposures have grown. Let’s define our deep region to be the pixels with 3 or more exposures. Please set deep_thresh=3 in the script and re-run it. You will see that the “deep” area is now approximately 2.02 square degrees! This is (slightly) larger than the first run (with step_arcmin=1)!
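
The script reports these areas for you, but if you would like to measure them manually from the exposure map, a minimal sketch like the following can be used. It assumes the stack name used above and an approximately constant pixel area over the field; ‘build/deep-mask.fits’ is just a hypothetical name for this check.

## Pixel scale of the stack (from which the area of one pixel follows).
$ astfits build/stack.fits --pixelscale

## Binary image: 1 for pixels with 3 or more exposures, 0 otherwise.
$ astarithmetic build/stack.fits 3 ge --output=build/deep-mask.fits

## Number of "deep" pixels (sum of the binary image); multiply this by
## the area of one pixel to estimate the deep area in square degrees.
$ aststatistics build/deep-mask.fits --sum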

The difference between 3 exposures and 5 exposures seems like a lot at first. But let’s calculate how much it actually affects the achieved signal-to-noise ratio and the surface brightness limit. Both the surface brightness limit and the upper-limit surface brightness are calculated by applying the definition of magnitude to the standard deviation of the background. So we should first calculate how much this difference in depth affects the sky standard deviation. For a complete discussion on the definition of the surface brightness limit, see Quantifying measurement limits.

Deep images will usually be dominated by Photon counting noise (or Poisson noise). Therefore, if a single exposure image has a sky standard deviation of \(\sigma_s\), and we combine \(N\) such exposures by taking their mean, the final/stacked sky standard deviation (\(\sigma\)) will be \(\sigma=\sigma_s/\sqrt{N}\). As a result, the surface brightness limit between the regions with \(N\) exposures and \(M\) exposures differs by \(2.5\times\log_{10}(\sqrt{N/M}) = 1.25\times\log_{10}(N/M)\) magnitudes. If we set \(N=3\) and \(M=5\), we get a surface brightness magnitude difference of 0.28!
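
If you would like to check this number on the command-line, a plain awk call is enough (awk’s log is the natural logarithm, so we divide by log(10) to get the base-10 logarithm):

$ echo "3 5" | awk '{printf "%.2f\n", 1.25*log($1/$2)/log(10)}'
-0.28

The negative sign simply shows that the 5-exposure regions reach a (slightly) fainter limit than the 3-exposure regions.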

This is a very small difference, given all the other sources of error that will be present, especially when compared to how much it helps the calibration. Therefore, at the cost of a surface brightness limit that is only 0.28 magnitudes brighter, we are now able to calibrate the individual exposures much better, and even cover a larger area!

The argument above didn’t involve any image and was primarily theoretical. For the more visually-inclined readers, let’s add raw Gaussian noise (with a \(\sigma\) of 100 counts) to each simulated exposure. We will then instruct astscript-pointing-simulate to stack them as we would stack actual data (by taking the sigma-clipped mean). The command below is identical to the previous call to the pointing simulation script, with the following differences. Note that this is just for demonstration, so you should not include this in your script (unless you want to see the noisy stack every time; at double the processing time).

--output

We are using a different output name, so we can compare the output of the new command with the previous one.

--stack-operator

This should be one of the Arithmetic program’s Stacking operators. By default the value is sum: since each pixel of each exposure is given a value of 1, stacking with the summation operator produces the exposure map that you have already seen above.

But in this run, we are adding noise to each input exposure (through the hook that is described below) and stacking them (as we would stack actual science images). Since the purpose differs here, we are using this option to change the operator.

--hook-warp-after

This is the most visible difference between this command and the previous one. Through a “hook”, you can give any arbitrarily long (series of) command(s) that will be added to the processing of this script at a certain location. This particular hook gets applied “after” the “warp”ing phase of each exposure (when the pixels of each exposure are mapped to the final pixel grid; but not yet stacked).

Since the script runs in parallel (the actual work-horse is a Makefile!), you cannot assume any fixed file name for the input(s) and output. Therefore the inputs to, and output(s) of, hooks are some pre-defined shell variables that you should use in the command(s) that you hook into the processing. They are written in all-caps to be clear and separate from your own variables. In this case, they are $WARPED (the input file of the hook) and $TARGET (the output name that the next steps in the script will operate on). As you see from the command below, through this hook we are calling the Arithmetic program to add noise to all non-zero pixels in the warped image. For more on the noise-adding operators, see Random number generators.

$ center_ra=192.721250
$ center_dec=41.120556
$ astscript-pointing-simulate build/pointing.txt --img=input/ref.fits \
           --center=$center_ra,$center_dec \
           --width=2 --stack-operator="3 0.2 sigclip-mean" \
           --output=build/stack-noised.fits \
           --hook-warp-after='astarithmetic $WARPED set-i \
                              i i 0 uint8 eq nan where \
                              100 mknoise-sigma \
                              --output=$TARGET'

$ astscript-fits-view build/stack.fits build/stack-noised.fits

When you visually compare the two images of the last command above, you will see that (at least by eye) it is almost impossible to distinguish the noise level of the regions with 3 exposures from that of the regions with 5 exposures. But the regions with a single exposure are clearly visible! This is because the surface brightness limit in the single-exposure regions is \(1.25\times\log_{10}(1/5)=-0.87\) magnitudes brighter. This almost one magnitude difference in surface brightness is significant and clearly visible in the stacked image (recall that magnitudes are measured on a logarithmic scale).
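
The same awk one-liner as before confirms this value:

$ echo "1 5" | awk '{printf "%.2f\n", 1.25*log($1/$2)/log(10)}'
-0.87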

Thanks to the argument above, we can now have a sufficiently large area with a usable depth. However, the center of each pointing will still contain the central parts of the galaxy. In other words, M94 will be present in all the exposures while doing the calibrations. Even in not-too-deep observations, we already see a large ring around this galaxy. When we do a low surface brightness optimized reduction, there is a good chance that the size of the galaxy is much larger than that ring. This very extended structure will make it hard to do the calibrations very accurately. Accurate calibration is necessary if you do not want to lose the faint photons that have been recorded in your exposures.

Calibration is very important: Better calibration can result in a fainter surface brightness limit than a larger number of exposures with poor calibration; this is especially true for very low surface brightness signal that covers a large area and is systematically affected by calibration issues.

Ideally, you want your target to be on the four edges/corners of each image. This will make sure that a large fraction of each exposure is not covered by your final target, allowing you to calibrate much more accurately.