As optics and camera technologies continue to improve, just how short of an exposure can we get away with in deep-sky astrophotography?

Amateurs first began using electronic sensors to capture images of the deep sky three decades ago. Since then, advances in camera technology have made these sensors larger and more sensitive, vastly outperforming cameras from even five years ago. In turn, those advances have spurred the development of fast optical systems with a large corrected field that concentrates more light onto the smaller pixels of these sensors.

Fast astrographs such as Celestron's new RASA are revolutionizing how deep-sky astrophotography is done.
Richard S. Wright Jr.

Yet some people are still clinging to rules of thumb and best practices devised back when optics were slower and cameras were far noisier. Meanwhile, others are charging wildly forward, experimenting with techniques that ignore basic physics and sound principles of signal processing (which at its root is what imaging is).

Basic Principles: Stacking vs. Shot Noise

Some things are never going to change. Every image you take contains both signal and noise, and to get a good image you need far more signal than noise – a high signal-to-noise ratio. Further, only some of that noise comes from the camera.

Shot noise, which I first described here, arises from the random arrival of photons and is most punishing on the faint targets we capture. The only way to overcome it is by taking longer exposures or stacking many short ones. If it takes one hour of signal collection to obtain a good image, you can instead combine 12 individual five-minute exposures.
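To see why only the total integration time matters for shot noise, here's a minimal sketch with an assumed photon rate (the flux value is purely illustrative): one 60-minute exposure and twelve 5-minute subexposures collect the same total signal and, in the shot-noise-limited case, reach the same signal-to-noise ratio.

```python
# A rough sketch (hypothetical flux) of why total integration time is what matters
# for shot noise: stacking N short frames collects the same signal, and the
# shot-noise-limited SNR depends only on the total exposure time.
import math

flux = 2.0                 # assumed photon rate from the target, photons/pixel/second
single_exposure_s = 3600   # one 60-minute exposure
sub_exposure_s = 300       # twelve 5-minute subexposures
n_subs = 12

signal_long = flux * single_exposure_s
signal_stack = flux * sub_exposure_s * n_subs

# For a shot-noise-limited signal, noise = sqrt(signal), so SNR = sqrt(signal).
snr_long = signal_long / math.sqrt(signal_long)
snr_stack = signal_stack / math.sqrt(signal_stack)

print(signal_long, signal_stack)   # identical total signal
print(snr_long, snr_stack)         # identical SNR (read noise ignored here)
```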

In theory, you can stack ever more, ever shorter exposures. For example, 60 one-second exposures accumulate the same amount of signal as one 60-second exposure. But in practice this doesn't work: There are both practical and physical problems with the idea. Consider that if it takes one second to download each image and start the next one, an hour of image-taking results in only a half-hour of exposure time. Yikes.

Consider also the problem of data storage: Even if you could get away with one-second exposures, an hour's worth amounts to 3,600 images. With a 16-megapixel camera, that corresponds to about 60 gigabytes worth of data to store, calibrate, align, and stack. All that for a single hour of exposure time!
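A quick back-of-the-envelope check of those numbers. The per-frame dead time matches the one-second download above; the ~16 MB per-frame file size is an assumption chosen to be consistent with the storage figure in the text (actual size depends on bit depth and file format).

```python
# Back-of-the-envelope check of the overhead described above.
exposure_s = 1.0
download_s = 1.0                 # assumed dead time to read out and save each frame
session_s = 3600.0

frames = int(session_s // (exposure_s + download_s))   # 1,800 frames actually captured
duty_cycle = frames * exposure_s / session_s           # 0.5 -> half the hour is wasted

frame_mb = 16                                          # assumed file size per 16-Mpix frame
storage_gb = 3600 * frame_mb / 1000                    # ~58 GB if you did collect 3,600 subs
print(frames, duty_cycle, storage_gb)
```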

See Janesick's book for a solid footing before getting into any debates about noise and cameras.
Richard S. Wright Jr.

I often see amateurs coming to incorrect conclusions about shorter exposures because they don’t fully understand the underlying mechanisms. Signals and noise are well-studied and understood engineering and physical phenomena, but it takes time to learn the foundations — there are whole books and college courses on this topic. One of the best books on the subject of understanding camera noise sources and performance is Photon Transfer by James R. Janesick.

What to Consider for Short Exposures

Nevertheless, shorter exposures are tempting because of the benefits they offer. Sidestepping mount-tracking issues and skipping guiding altogether are reasons enough to consider going down this route. However, how short you can go depends on a lot of factors. Here's a quick list of the top issues you need to consider.

Read Noise

Read noise is noise that comes from the electronics of your camera. You pay the price of read noise every time you download an image from the camera, and it is independent of the amount of signal you've captured. If the signal from the faintest details in your target is not higher than the read noise, you can consider that detail lost no matter how many short exposures you stack.
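A simple way to see the penalty is to fold read noise into the signal-to-noise calculation: every frame you read out contributes its own dose of read noise to the stack. The sketch below uses assumed, illustrative numbers (a faint detail's photon rate and a typical read noise) and ignores sky background and dark current.

```python
# Simplified SNR model (illustrative numbers; sky background and dark current
# ignored) showing how per-frame read noise penalizes very short subexposures.
import math

def stack_snr(flux_e_per_s, total_time_s, n_subs, read_noise_e):
    """Shot noise + read noise SNR for n_subs equal subexposures."""
    signal = flux_e_per_s * total_time_s              # total electrons collected
    noise = math.sqrt(signal + n_subs * read_noise_e ** 2)
    return signal / noise

faint_flux = 0.05      # assumed: a faint nebula detail, electrons/pixel/second
total_time = 3600      # one hour of total integration either way
read_noise = 2.0       # assumed camera read noise, electrons RMS

print(stack_snr(faint_flux, total_time, 12, read_noise))    # 12 x 5-minute subs: SNR ~ 12
print(stack_snr(faint_flux, total_time, 3600, read_noise))  # 3,600 x 1-second subs: SNR ~ 1.5
```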

A stack of hundreds of 15-second exposures can do a great job on bright targets.
Richard S. Wright Jr.

Pattern Noise

Pattern noise could be lumped in with read noise, since it too comes from the camera electronics, but some low-end CMOS cameras have significant pattern noise that is higher than their quoted read noise. If this pattern noise is not repeatable (and with some CMOS implementations it isn't), it won't be corrected through image calibration. Pattern noise again raises the bar on how much signal you need in each individual exposure.
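Here's an illustrative simulation with synthetic data showing why calibration can't help when the pattern changes from frame to frame: a master dark built from frames with different random banding leaves the banding in the light frame essentially untouched.

```python
# Illustrative sketch (synthetic data) of non-repeatable pattern noise: each frame
# gets random horizontal banding, so a master dark built from other frames does
# not cancel the banding in any given light frame.
import numpy as np

rng = np.random.default_rng(0)

def banded_frame(shape=(100, 100), band_sigma=5.0):
    """A frame whose only content is random row-to-row banding (pattern noise)."""
    rows = rng.normal(0.0, band_sigma, size=(shape[0], 1))
    return np.repeat(rows, shape[1], axis=1)

light = banded_frame()                                          # banding in the light frame
master_dark = np.mean([banded_frame() for _ in range(20)], axis=0)

calibrated = light - master_dark
print(np.std(light), np.std(calibrated))   # the banding is essentially unchanged
```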

Camera Gain

Gain is the amplification of the signal from the sensor. In some cases, it can boost the desired signal above the read noise. Turning up the gain sounds like a good idea, but if you turn it up too high, you lose significant dynamic range. In extreme cases, you might as well be shooting with an 8-bit camera.
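A rough sketch of that trade-off, using assumed sensor numbers (the full-well capacity, ADC bit depth, and read-noise values are all illustrative): as the gain rises, fewer electrons fit within the ADC's range, and dynamic range shrinks even if read noise improves slightly.

```python
# Illustrative gain trade-off: higher gain (fewer electrons per ADU) can lift faint
# signal above the read noise, but the ADC saturates at fewer electrons, so dynamic
# range shrinks. All sensor numbers below are assumptions.
import math

adc_bits = 12
adc_max = 2 ** adc_bits - 1          # 4095 counts for a 12-bit converter
full_well_e = 20000                  # assumed sensor full-well capacity, electrons

for gain_e_per_adu in (5.0, 1.0, 0.25):                  # low, unity-ish, and high gain
    # assumed: read noise is a bit lower at higher gain settings (fewer e-/ADU)
    read_noise_e = 3.0 if gain_e_per_adu > 1 else 1.5
    saturation_e = min(full_well_e, adc_max * gain_e_per_adu)
    dr_stops = math.log2(saturation_e / read_noise_e)
    print(gain_e_per_adu, saturation_e, round(dr_stops, 1))   # ~12.7, ~11.4, ~9.4 stops
```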

Bit-Depth

Does your camera have a 12-bit, 14-bit, or 16-bit readout? Bit depth also plays a part: something called quantization error can lump your faint signal back in with the read noise, and no amount of stacking will save it.
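Here's a synthetic illustration of that effect with an exaggerated ADC step size (all values are assumed): when the quantization step dwarfs both the faint signal and the per-frame noise, every readout lands on the same count, and averaging thousands of frames still returns the same number.

```python
# Illustrative sketch (synthetic numbers) of quantization error: if the ADC step is
# much larger than the per-frame signal and noise, every readout digitizes to the
# same count, and no amount of stacking recovers the faint detail.
import numpy as np

rng = np.random.default_rng(1)
adc_step_e = 512.0          # assumed: electrons per ADU at this gain/bit depth

def digitize(frames_e):
    """Quantize electron values to coarse ADC counts, then convert back to electrons."""
    return np.floor(frames_e / adc_step_e) * adc_step_e

sky = 100.0                                        # assumed background level, electrons
faint_detail = 4.0                                 # a few extra photons per frame
read_noise = rng.normal(0.0, 3.0, size=10000)      # assumed read noise, electrons

background = digitize(sky + read_noise)
target = digitize(sky + faint_detail + read_noise)

# Both stacks average to exactly the same value: the faint detail is quantized away.
print(background.mean(), target.mean())
```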

Finally, Signal

Remember, too, it’s all about the signal! I’ve seen so many short-exposure demonstrations that show the Orion Nebula (M42) and the Andromeda Galaxy (M31), two relatively bright deep-sky targets. The minimum exposure needed to record a bright target is not going to be the same for a faint one.

This blog is only an introduction to a big topic. Watch for a full-length article on this topic in an upcoming issue of Sky & Telescope.

Tags

stacking

Comments



Alain Maury

December 20, 2019 at 3:16 pm

Good article... It's clear that this is the way things are going. Pretty soon, deep-sky imaging will use the same techniques as planetary imaging.
Ideally one could go to even shorter exposure times. If you make a sequence of images of a bright star with one-second exposures, you get a star image with a given diameter; if you decrease the time to 1/10th of a second, the star image is smaller, and at 1/100th of a second (10 ms) the image is very sharp (at least the pack of speckles is smaller) but dancing all over the place (which gives the larger diameter you got with one-second exposures). Lucky imaging and morphing are the way to go.
Storage is not much of a problem today with 1 TB M.2 SSDs (and NAS storage, which can easily reach tens of TB or more). One can store each image while the next one is being exposed to reduce the dead time between exposures.
Processing: just one word: GPU.
Clearly the people who write data-acquisition software have to start learning CUDA or VCL or Numba (which allow the programming of GPUs in Python), if they haven't already. Using the GPU (Graphics Processing Unit), one can gain a factor of 100 in processing time.
The arrival of cameras like the ZWO 6200 and QHY600, together with the RASA telescopes, will get us to new heights in imaging: very low noise, very high DQE, and a very wide field while preserving good image sampling. I have dreamt of this for a long time, and I don't see how we will be able to improve after that...



Fengshui

December 20, 2019 at 7:16 pm

Sorry that I have to disagree.

A nitpick: regarding the point on bit depth, stacking definitely overcomes bit depth as long as you know what you are doing. A stack of 2 8-bit images creates a 9-bit image if you store the result in a 9-bit or deeper format. Similarly, a stack of 64 8-bit images becomes a 14-bit image if the final result is stored in a 14-bit or deeper format, like a 16-bit PSD file. The signal-to-noise ratio of the resulting image increases 64-fold (6 stops or 20 dB better) as well.

Read noise decreases in a similar way to the bit depth mentioned above.

Gain clipping is not a drawback of short exposures. In fact, the margin of error for gain clipping is narrower the longer the exposure used. As mentioned above, as long as you know what you are doing, short-exposure stacking is affected less by gain clipping.

The only noise mentioned in the article that the short-exposure stacking technique does not minimize is pattern noise. It affects a single long exposure and a stacked shot equally.

Stacking does have its overhead. No one will argue that.
However, as the technology advances, the overhead becomes trivial (in terms of downtime, storage requirements, processing requirements, etc.). It is so trivial that auto-stacking is already a built-in function of some consumer cameras and smartphones.



Richard S. Wright Jr.

December 22, 2019 at 10:32 am

Thanks for chiming in. You are correct about how many bits can be used to represent data after it is stacked, and planetary imagers can often regain quite a bit of dynamic range by stacking a great many 8-bit images. The difference, however, is that planetary imagers have significantly more signal than deep-sky imagers do. The error here is an intuitive misunderstanding of the difference between precision and accuracy, a sticky point I remember from my first numerical analysis course when I was an engineering student.

Quantization error occurs because, let's say, all the numbers between 0 and 512 are clumped together when digitized. The read noise might span the range 0 to 300, for example, and there is some signal, a few photons getting the counts up into the 400s. When this is digitized, they are all clumped together and there is no difference left. So you have many different values read from the chip in the range 0 to 512, but they are all represented by the number 512 because of the bit depth of the ADC. No matter how many 512's you average, the result is always 512.

I hope this makes sense. I will have another analogy and some figures for the longer article to try and explain this better.

Richard



Richard S. Wright Jr.

December 18, 2020 at 8:35 am

>A stack of 2 8-bit images creates a 9-bit image if you store the result in a 9-bit
>or more format. Similarly, a stack of 64 8-bit images becomes a 14-bit image if
>the final result is stored in a 14-bit or more format, like a 16-bit psd file.

Day one of my Numerical Analysis course in college (dear lord, some 35 years ago now) was about the difference between 'accuracy' and 'precision'. They are often confused, and you are making a very similar mistake.

No matter how many 8-bit numbers you sum, when you average you will get a value that can be represented by an 8-bit container.

128 times a million, divided by a million... is still 128. No matter how many millions you use.

If each exposure is the same length on a fixed light source, then any variation between 128 and 127, for example, is... noise, not signal.

You can scale the values up or sum them all you want, but the coarseness between brightness levels is the same.



John

December 20, 2019 at 8:17 pm

Another issue is light pollution - I'm forced to expose for only a couple of minutes in LRGB or my background ADU count soars. I can do 20 minutes or so in narrowband, so that's the direction I've had to go. I shoot for 3 or 4x my dark-frame level to make sure I have more signal from the sky than from the camera.



John-Murrell

December 23, 2019 at 6:56 am

It sounds as though there could be a series of articles on this topic. One thing that needs to be covered is colour cameras. The pixels are grouped into fours by the Bayer matrix: 2 green, 1 red, and 1 blue. As a result, the camera is twice as sensitive in the green, though there is not a lot of green in astronomical targets (with the exception of the oxygen line). This also impacts the resolution, as the diffusing filter in front of the matrix is designed to spread a point source of light over at least 4 pixels. Otherwise you get very odd effects - for instance, a red point source that was imaged onto the green- or blue-filtered pixels would not be recorded.

Pixel binning is another subject that needs covering - I have seen an astro colour camera specification stating that pixels can be binned 3 x 3. Bearing in mind that the Bayer matrix is 2 x 2, it seems adjacent bins will have different numbers of green, red, and blue pixels.

Having covered that, there is the impact of the microlenses, which are designed to have light arrive at them perpendicular to the chip surface. This is done in DSLR lenses by a bit of clever design based on a reverse-telephoto principle. If the light arrives from a non-perpendicular direction, as it will with fast telescopes, some of the light will be lost.

The difference between CCD and CMOS chips needs to be covered as well - I am not sure what the impact of an amplifier per pixel on a CMOS chip is compared with one amplifier per chip on a CCD.
