GNU Astronomy Utilities

2.6.2 Color image using linear transformation

Previously (in Color channels in same pixel grid), we downloaded three SDSS filters of M51 and described how you can put them all in the same pixel grid. In this section, we will explore the raw and low-level process of generating color images using the input images (without modifying the pixel value distributions). We will use Gnuastro’s ConvertType program (with executable name astconvertt).

Let’s create our first color image using the aligned SDSS images mentioned in the previous section. The order in which you provide the images matters, so ensure that you sort the filters from redder to bluer (iSDSS and gSDSS are respectively the reddest and bluest of the three filters used here).

$ astconvertt aligned/i-sdss.fits aligned/r-sdss.fits \
              aligned/g-sdss.fits -g1 --output=m51.pdf

Other color formats: In the example above, we are using PDF because this is usually the best format for later adding the marks that are commonly necessary in scientific publications (see Marking objects for publication). But you can also generate JPEG and TIFF outputs simply by using a different suffix for your output file (for example --output=m51.jpg or --output=m51.tiff).
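For example, the same three channels can be written to a JPEG simply by changing the output suffix (this assumes the aligned/ images from the previous section exist):

```shell
# Same inputs and HDU as the PDF example; only the suffix differs.
astconvertt aligned/i-sdss.fits aligned/r-sdss.fits \
            aligned/g-sdss.fits -g1 --output=m51.jpg
```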

Open the image with your PDF viewer and have a look. Do you see something? Initially, it appears predominantly black. However, upon closer inspection, you will discern very tiny points where some color is visible. These points correspond to the brightest part of the brightest sources in this field! The reason you saw much more structure when looking at the image in DS9 previously in Color channels in same pixel grid was that astscript-fits-view used DS9’s -zscale option to scale the values in a non-linear way! Let’s have another look at the images with the linear minmax scaling of DS9:

$ astscript-fits-view aligned/*-sdss.fits \
           --ds9extra="-scale minmax -lock scalelimits"

You see that it looks very similar to the PDF we generated above: almost fully black! This phenomenon exemplifies the challenge discussed at the start of this tutorial (in Color images with full dynamic range). Given the vast number of pixels close to the sky background level compared to the relatively few very bright pixels, visualizing the entire dynamic range simultaneously is tricky.

To address this challenge, the low-level ConvertType program allows you to selectively choose the pixel value ranges to be displayed in the color image. This can be accomplished using the --fluxlow and --fluxhigh options of ConvertType. Pixel values below --fluxlow are mapped to the minimum value (displayed as black in the default colormap), and pixel values above --fluxhigh are mapped to the maximum value (displayed as white). The choice of these values depends on the pixel value distribution of the images.

But before that, we have to account for an important difference between the filters: the brightness of the background also has different values in different filters (the sky has colors!). So before making more progress, you generally first have to subtract the sky from all three images you want to feed to the color channels. In a previous tutorial (Detecting large extended targets) we used these same images as a basis to show how you can do perfect sky subtraction in the presence of large extended objects like M51. Here we are just doing a visualization and bringing pixels to 8-bit, so we don't need the level of precision reached there (we won't be doing photometry!). Therefore, let's just keep the --tilesize=100,100 of NoiseChisel.

$ mkdir no-sky
$ for f in i r g; do \
    astnoisechisel aligned/$f-sdss.fits --tilesize=100,100 \
                   --output=no-sky/$f-sdss.fits; \
  done

Accounting for zero points: An important step that we have not implemented in this section is to unify the zero points of the three filters. In the case of SDSS (and some other surveys), the images have already been brought to the same zero point, but that is not generally the case. So before subtracting sky (and estimating the standard deviation) you should also unify the zero points of your images (for example through Arithmetic’s counts-to-nanomaggy or counts-to-jy described in Unit conversion operators). If you don’t already have the zero point of your images, see the dedicated tutorial: Zero point of an image.
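As a sketch of such a unification (with a hypothetical zero point value and generic file name; see Unit conversion operators for the operator details), each image could be converted to nanomaggies with Arithmetic before sky subtraction:

```shell
# Assumption: a zero point of 22.5 for this image; replace both the
# file name and the zero point with your own measured values (see the
# tutorial "Zero point of an image").
astarithmetic image.fits 22.5 counts-to-nanomaggy -h1 \
              --output=image-nanomaggy.fits
```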

Now that we know the noise fluctuates around zero in all three images, we can start to define the values for --fluxlow and --fluxhigh. But the sky standard deviation depends on the sky brightness, which differs between filters! Let's have a look by taking the median value of the SKY_STD extension of NoiseChisel's output:

$ aststatistics no-sky/i-sdss.fits -hSKY_STD --median

$ aststatistics no-sky/r-sdss.fits -hSKY_STD --median

$ aststatistics no-sky/g-sdss.fits -hSKY_STD --median
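Since the same command is repeated for all three filters, the three calls above can also be written as a loop (assuming the no-sky/ outputs of the NoiseChisel step exist):

```shell
# Print the median sky standard deviation of each filter in turn
# (in the same i, r, g order as the individual commands above).
for f in i r g; do
    aststatistics no-sky/$f-sdss.fits -hSKY_STD --median
done
```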

You see that the sky standard deviation of the reddest filter (i) is almost three times that of the bluest filter (g)! This is usually the case (redder emission requires less energy and is absorbed less, so the background is usually brighter in the redder filters). As a result, we should define our limits based on the noise of the reddest filter. Let's set the minimum flux to 0 and the maximum flux to ~50 times the noise of the i-band image (\(0.027\times50=1.35\)).
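The threshold arithmetic above can also be scripted, so that it updates automatically when the noise estimate changes. A minimal sketch (the 0.027 value is hard-coded here for illustration; in practice take it from the aststatistics command shown above):

```shell
# Median sky standard deviation of the i-band image; in practice use:
#   std=$(aststatistics no-sky/i-sdss.fits -hSKY_STD --median)
std=0.027

# Maximum flux for the color image: 50 times the noise level.
fluxhigh=$(awk -v s="$std" 'BEGIN{printf "%.2f", s*50}')
echo "$fluxhigh"
```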

$ astconvertt no-sky/i-sdss.fits no-sky/r-sdss.fits no-sky/g-sdss.fits \
              -g1 --fluxlow=0.0 --fluxhigh=1.35 --output=m51.pdf

After opening the new color image, you will observe that the spiral arms of M51 and its companion M51B (or NGC 5195, which is interacting with M51) become visible. However, the majority of the image remains black. Feel free to experiment with different values for --fluxhigh to set the maximum value closer to the noise level and see the more diffuse structures. For instance, try --fluxhigh=0.27 (the brightest pixels will then have a signal-to-noise ratio of 10), or even --fluxhigh=0.135 (a signal-to-noise ratio of 5). But you will notice that the brighter areas of the galaxy become "saturated": you no longer see the structure of the brighter parts of the galaxy. As you bring down the maximum threshold, the saturated areas also increase in size: losing some useful information on the bright side!

Let’s go to the extreme and decrease the threshold to close to the noise level (for example --fluxhigh=0.027 to have a signal-to-noise ratio of 1)! You will see that the noise now becomes colored! You generally don’t want this because the difference in filter values of one pixel is only physically meaningful when it has a high signal-to-noise ratio. For lower signal-to-noise ratios, we should avoid color.
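In terms of the command above, this extreme corresponds to the following (same inputs, only --fluxhigh changed; the output name is arbitrary):

```shell
# Maximum flux at the i-band noise level: even pixels with a
# signal-to-noise ratio of 1 get a color.
astconvertt no-sky/i-sdss.fits no-sky/r-sdss.fits no-sky/g-sdss.fits \
            -g1 --fluxlow=0.0 --fluxhigh=0.027 --output=m51-snr1.pdf
```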

Ideally, we want to see both the brighter parts of the central galaxy, as well as the fainter diffuse parts together! But with the simple linear transformation here, that is not possible! You need some pre-processing (before calling ConvertType) to scale the images. For example, you can experiment with taking the logarithm or the square root of the images (using Arithmetic) before creating the color image.
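For example, a square-root stretch could be sketched like this (the sqrt-* file names are hypothetical; note that negative sky-subtracted pixels will become NaN under the square root):

```shell
# Take the square root of each sky-subtracted image with Arithmetic,
# then build the color image from the transformed files.
for f in i r g; do
    astarithmetic no-sky/$f-sdss.fits sqrt -h1 \
                  --output=sqrt-$f.fits
done
astconvertt sqrt-i.fits sqrt-r.fits sqrt-g.fits \
            -g1 --output=m51-sqrt.pdf
```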

These non-linear functions transform the pixel values, mapping them to a new range. After applying such transformations, you can use the transformed images as inputs to astconvertt to generate color images (similar to how we subtracted the sky, which is a linear operation). In addition, it is possible to use a different color scheme for showing the different brightness ranges, as explained in the next section. In the next section (Color for bright regions and grayscale for faint), we’ll review one high-level installed script that simplifies all of this pre-processing and helps you produce images with more information in them.