As optics and camera technologies continue to improve, just how short of an exposure can we get away with in deep-sky astrophotography?
Amateurs first began using electronic sensors to capture images of the deep sky three decades ago. Since then, advances in camera technology have made these sensors larger and more sensitive, vastly outperforming cameras from even five years ago. In turn, those advances have spurred the development of fast optical systems with a large corrected field that concentrates more light onto the smaller pixels of these sensors.
Yet some people are still clinging to rules of thumb and best practices devised back when optics were slower and cameras were far noisier. Meanwhile, others are charging wildly forward, experimenting with techniques that ignore basic physics and sound principles of signal processing (which at its root is what imaging is).
Basic Principles: Stacking vs. Shot Noise
Some things are never going to change. Every image you take contains both signal and noise, and to get a good image, you need far more signal than noise – a high signal-to-noise ratio. Further, only some of the noise comes from the camera.
Shot noise, which I first described here, comes from the faint nature of the targets you are capturing. The only way to overcome it is by taking longer exposures or stacking many short ones. If it takes one hour of signal collection to obtain a good image, you can instead combine 12 individual five-minute exposures.
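Here's a minimal sketch of the arithmetic behind that equivalence, counting shot noise only (the photon rate is an invented value, purely for illustration):

```python
import math

# Shot-noise-only comparison: one long exposure vs. many stacked subs.
rate = 2.0  # photons per second per pixel from the target (assumed value)

def shot_noise_snr(sub_length, n_subs):
    """SNR of n_subs stacked subs, counting shot noise only."""
    total_signal = n_subs * rate * sub_length
    return total_signal / math.sqrt(total_signal)  # shot noise = sqrt(signal)

one_hour = shot_noise_snr(3600, 1)     # a single 60-minute exposure
twelve_subs = shot_noise_snr(300, 12)  # twelve 5-minute exposures
print(one_hour, twelve_subs)  # identical: sqrt(7200) ≈ 84.9 in both cases
```

However you slice the hour, the accumulated signal (and hence the shot-noise-limited SNR) is the same – which is exactly why the catch lies elsewhere, in the camera and the overhead.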
In theory, you can stack many more, shorter exposures. For example, 60 one-second exposures will accumulate the same amount of signal as a single 60-second exposure. But in practice, this doesn't work: There are both practical and physical problems with the idea. Consider: if it takes one second to download each image and start the next one, then an hour of image-taking results in only a half-hour of actual exposure time. Yikes.
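To put numbers on that overhead penalty, a quick sketch assuming one second of download overhead per sub, as in the example above:

```python
# Duty cycle: the fraction of a session actually spent collecting photons.
session = 3600.0  # one hour of wall-clock time, in seconds

def efficiency(exposure, overhead):
    """Fraction of the session spent exposing, given per-sub overhead."""
    n_subs = int(session // (exposure + overhead))  # completed subs
    return n_subs * exposure / session

print(efficiency(exposure=1.0, overhead=1.0))    # 0.5 - half the hour wasted
print(efficiency(exposure=300.0, overhead=1.0))  # ≈ 0.92 - overhead negligible
```

With five-minute subs the same one-second overhead costs almost nothing, which is why overhead only becomes a problem as exposures get very short.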
Consider also the problem of data storage: Even if you could get away with one-second exposures, an hour of them amounts to 3,600 images. With a 16-megapixel camera saving uncompressed 16-bit files, that corresponds to more than 100 gigabytes of data to store, calibrate, align, and stack. All that for a single hour of exposure time!
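The arithmetic, assuming uncompressed 16-bit data (2 bytes per pixel; actual file sizes vary with format and compression):

```python
# Storage for an hour of one-second subs, assuming uncompressed 16-bit frames.
megapixels = 16
bytes_per_pixel = 2  # 16-bit data (an assumption; some formats pack tighter)
n_frames = 3600      # one hour of one-second exposures

frame_bytes = megapixels * 1_000_000 * bytes_per_pixel  # 32 MB per frame
total_gb = n_frames * frame_bytes / 1e9
print(total_gb)  # 115.2 GB for a single hour of exposure time
```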
I often see amateurs coming to incorrect conclusions about shorter exposures because they don’t fully understand the underlying mechanisms. Signals and noise are well-studied and understood engineering and physical phenomena, but it takes time to learn the foundations — there are whole books and college courses on this topic. One of the best books on the subject of understanding camera noise sources and performance is Photon Transfer by James R. Janesick.
What to Consider for Short Exposures
Nevertheless, shorter exposures are tempting because of the benefits they offer. Easing the demands on mount tracking and eliminating the need to guide are reasons enough to consider going down this route. However, how short you can go depends on a lot of factors. Here's a quick list of the top issues you need to consider.
Read noise is noise that comes from the electronics of your camera. You pay the price of read noise every time you download an image from the camera, and it is independent of the amount of signal that you've captured. If the signal from the faintest details in your target is not higher than the read noise, you can consider that detail lost no matter how many short exposures you stack.
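To see why read noise sets a floor for short subs, here's a sketch extending the shot-noise arithmetic to add a per-frame read-noise penalty. The signal rate and read-noise figure are assumed values, not from any particular camera, and sky background and dark current are ignored:

```python
import math

def stacked_snr(signal_rate, sub_length, total_time, read_noise):
    """SNR after stacking subs of sub_length seconds over total_time.
    signal_rate: electrons/sec from a faint detail;
    read_noise: electrons RMS added to every frame readout."""
    n_subs = total_time / sub_length
    signal = signal_rate * sub_length   # electrons collected per sub
    variance = signal + read_noise**2   # shot noise + read noise, per sub
    return (n_subs * signal) / math.sqrt(n_subs * variance)

# Hypothetical faint detail (0.5 e-/sec) with 5 e- RMS read noise:
print(stacked_snr(0.5, 300, 3600, 5.0))  # twelve 5-minute subs: SNR ≈ 39
print(stacked_snr(0.5, 1, 3600, 5.0))    # 3,600 one-second subs: SNR ≈ 6
```

Same total hour, same photons – but paying the read-noise toll 3,600 times instead of 12 times crushes the faint detail.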
Pattern noise could be lumped in with read noise, since it too comes from the camera electronics, but some low-end CMOS cameras exhibit pattern noise that is higher than the quoted read noise. If this pattern noise is not repeatable (and with some CMOS implementations it isn't), it won't be corrected through image calibration. Pattern noise again raises the bar on how much signal you need in an individual exposure.
Gain is the amplification of the signal from the sensor. In some cases, it can boost the desired signal above the read noise. Turning up the gain sounds like a good idea, but if you turn it up too high, you lose significant dynamic range. In extreme cases, you might as well be shooting with an 8-bit camera.
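As a rough sketch of that trade-off (the sensor numbers here are invented for illustration, not any specific camera): dynamic range in stops is the base-2 log of the brightest recordable signal over the read-noise floor, and the ADC clips well below full well once the gain is high.

```python
import math

# Illustrative sensor numbers - invented, not from any specific camera.
FULL_WELL = 50_000  # electrons
ADC_BITS = 12       # 12-bit readout: 4,096 levels

def dynamic_range_stops(gain_e_per_adu, read_noise_e):
    """Usable dynamic range in stops: brightest recordable signal over
    the read-noise floor. The ADC clips at 2**ADC_BITS * gain electrons."""
    ceiling = min(FULL_WELL, (2**ADC_BITS) * gain_e_per_adu)
    return math.log2(ceiling / read_noise_e)

# Low gain: coarse digitization, slightly higher read noise.
low = dynamic_range_stops(gain_e_per_adu=12.0, read_noise_e=3.5)
# High gain: lower read noise, but the ADC saturates far below full well.
high = dynamic_range_stops(gain_e_per_adu=0.25, read_noise_e=1.5)
print(low, high)  # ≈ 13.8 stops vs. ≈ 9.4 stops
```

In this made-up example, cranking the gain buys lower read noise but surrenders more than four stops of dynamic range – bright stars saturate long before the sensor's wells are full.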
Does your camera have a 12-bit, 14-bit, or 16-bit readout? This also plays a part: something called quantization error can lump your faint signal back down into the read noise, and no amount of stacking will recover it.
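An idealized, noise-free sketch of the problem (the gain value is invented; in a real camera, noise dithers the signal across quantization steps, but the basic hazard stands):

```python
# Quantization: at a coarse gain of 8 e-/ADU, the camera records signal
# in 8-electron steps. In this noise-free sketch, a 3-electron detail
# digitizes to zero in every frame, so stacking any number of frames
# still yields zero.
gain = 8.0          # electrons per ADU (assumed coarse, low-gain setting)
faint_signal = 3.0  # electrons per exposure from a faint detail

adu = round(faint_signal / gain)  # the value actually written to the file
print(adu)  # 0
```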
Remember, too, it’s all about the signal! I’ve seen so many short-exposure demonstrations that show the Orion Nebula (M42) and the Andromeda Galaxy (M31), two relatively bright deep-sky targets. The minimum exposure needed to record a bright target is not going to be the same for a faint one.
This blog is only an introduction to a big topic. Watch for a full-length article on this topic in an upcoming issue of Sky & Telescope.