Advice On Binning

Binning is not always best, particularly if you have light pollution. Steve Cannistra wrote in a post on the CCD-NewAstro Yahoo! group:

Steve, binning helps you increase the signal to noise ratio, as Ron
said. Binning 2x2 means that your camera now considers 4 pixels as
one unit, thereby creating a functional "superpixel." Read noise
occurs when the electron contents of a pixel are being converted into
digital units (ADU conversion). Normally the electrons in 4 separate
pixels (i.e., unbinned) would be subjected to 4 separate read noise
events, since each pixel is being read separately. But with binning,
the superpixel is now considered the unit, and it is subjected to only
one read noise event.

Compare the results of 4 pixels binned 2x2, versus those same 4 pixels
unbinned. The signal, S, is the same in both groups, since the same
amount of light is falling on the same number of pixels (4 pixels in
either case). Let's call the read noise per functional pixel unit "R"
(in electron RMS). By functional pixel unit, I'm referring to the
fact that the read noise is R for each pixel in the unbinned example,
and it would also be R for the entire "superpixel" in the binned case.

Ignoring for a moment the contribution of dark signal and sky flux to
the noise (i.e., assume a relatively short exposure at a dark site),
the noise in the binned example would be sqrt(R^2), or R, and the
signal to noise would be S/R (binned case). Now, compare this to the
unbinned case. The signal collected by 4 pixels is the same, S, but
there will be 4 read noise events (for the 4 individual pixels)
instead of 1. Since noise adds in quadrature, it would be sqrt(R^2 +
R^2 + R^2 + R^2), or sqrt(4 x R^2), for the unbinned group. This means
that the signal to noise unbinned would be S/2R (i.e., this is bad,
since the signal to noise ratio has been cut in half compared to the
binned 2x2 example). Put it another way, from the standpoint of read
noise, binning 2x2 allows you to double your S/N ratio, binning 3x3
allows you to triple the S/N ratio, etc., compared to the unbinned
case. Increasing the signal to noise ratio means that you will need
less subexposure time to become photon limited (i.e., to have sky
noise overwhelm read noise), so binning can be very helpful at dark
sites, where the sky flux is very low, in order to avoid excessively
long subs. Conversely, as the contribution from sky noise increases
at light polluted sites, this same reasoning dilutes the signal to
noise advantage of binning (remember that the assumption above was
that we could ignore noise from sky glow, which isn't the case at a
light polluted site).
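
Steve's arithmetic can be checked numerically. Here is a minimal sketch (the signal, read noise, and sky values are illustrative numbers of my choosing, not from the post), comparing one binned 2x2 superpixel against the same four pixels read separately:

```python
import math

# SNR for one 4-pixel block, as in the text: signal S, n_reads read-noise
# events (1 if binned 2x2, 4 if unbinned), plus optional sky shot noise.
# Signal shot noise and dark current are ignored, as in the derivation above.
def snr(signal_e, read_noise_e, n_reads, sky_e=0.0):
    noise = math.sqrt(n_reads * read_noise_e**2 + sky_e)
    return signal_e / noise

S, R = 400.0, 10.0  # illustrative values, in electrons

# Dark site (no sky glow): binning doubles the SNR, exactly as derived.
print(snr(S, R, 1))  # binned:   400/10 = 40.0
print(snr(S, R, 4))  # unbinned: 400/20 = 20.0

# Light-polluted site (10,000 sky electrons in the block): sky shot noise
# dominates both cases, and the binning advantage nearly vanishes.
print(snr(S, R, 1, 10000) / snr(S, R, 4, 10000))  # ~1.01, not 2.0
```

The last line is the whole point of the paragraph above: once sky noise swamps read noise, the number of readout events barely matters.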

There are downsides to binning, the most obvious one being that your
image scale will proportionally increase, because the size of the
functional pixel unit has increased. So binning 2x2 means that you
have doubled your image scale, for instance. This can lead to
decreased resolution (if you are already undersampled), which may be
acceptable for RGB images, but would be less acceptable for luminance.
That's why many people bin color, and try to avoid binning luminance.
That said, if your unbinned image scale is .4 arcsec/pixel (i.e., you
might be oversampled), and it increases to .8 arcsec/pixel with 2x2
binning, this still represents very respectable sampling at most
imaging sites. In that case binning can cut down on subexposure time,
especially at a dark site, with minimal loss in resolution.
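
The image-scale arithmetic is just the standard plate-scale formula. A quick sketch, with a hypothetical pixel size and focal length chosen to match the .4 to .8 arcsec/pixel example above:

```python
# Plate scale in arcsec per (binned) pixel:
# 206.265 arcsec per milliradian * pixel size (um) / focal length (mm).
def image_scale(pixel_um, focal_mm, binning=1):
    return 206.265 * pixel_um * binning / focal_mm

# Hypothetical setup: 5.4 um pixels on a 2800 mm focal length scope.
print(round(image_scale(5.4, 2800), 2))     # 0.4 arcsec/pixel unbinned
print(round(image_scale(5.4, 2800, 2), 2))  # 0.8 arcsec/pixel binned 2x2
```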

I image at an image scale of between 1.7 and 2.2 arcsec/pixel, so I'm
already undersampled (my best seeing is about 3 arcsec/pixel, which
isn't very good, but this means that I should ideally be sampling at
an image scale of 3/3.3 (Nyquist), or .9 arcsec/pixel. So at 2.2
arcsec/pixel, I'm undersampled). I really have no room to move with
respect to increasing my image scale, without further compromising my
sampling. Add to this the fact that my sky glow is significant. Both
of these factors mean that binning affords me very little advantage.
But if my image scale were much lower, and if I were at a dark site,
then binning would make perfect sense.

PS- thanks to Ron for teaching me everything that I know about binning

Steve Cannistra

Ron Wodaski also has a very informative post:

There have been several messages posted over the last few days about color
imaging. There has been a mix of correct and incorrect information, so I
thought I would take some time to set the record straight.

First, I want to start where discussions about color imaging seldom start -
but where they should always start: noise. More specifically, signal to
noise ratio.

What is true of luminance images - it is all about the noise - is also true
of color images. Everything good and bad you experience about color imaging
ultimately has to do with the signal to noise ratio in the data.

Yep, data. It is certainly convenient that our eyes can take photons as
input and generate an image as output. But the bottom line is that what we
are talking about here is _data_. The rules that apply to data (signal and
noise) apply here. That is good news, and that is bad news.

The bad news is that data is not an intuitive concept. Oh, we like to think
we understand, but the fact is, the combination of data and noise behaves in
some pretty non-intuitive ways. You can't use your experience of choosing
avocados at the market, or your instinctive understanding of chasing a
moving target, to understand noise. Noise behaves in ways that have nothing
to do with survival or mating. <G> As a result, it can surprise us. The tool
that we use to understand (and more importantly, validate) these surprises
is math.

If we approach signal and noise with math, we learn some things that are
surprising (a good euphemism for non-intuitive and "just plain weird").

So let's start at the very beginning, and build to some supportable
conclusions.

Most imagers eventually get a decent feel for signal to noise. The key
concept is one I posted an extended discussion of recently: optimal
sub-exposure time. I turned that discussion into a tutorial on the book web
page. Normally, you need to have a paid-up subscription to the web site to
view tutorials, but I made this one publicly viewable because it's an
important concept. Knowing why, and how, to optimize sub-exposure times
leads to a deeper understanding of how data, signal, and noise co-exist. And
that can make a big difference in the results you get from imaging.

Here is a link to the sub-exposure tutorial:

http://www.newastro.com/newastro/tutorials/noise/noise.asp

I'm going to assume that you have either read the above, or that you
understand the basic concept behind optimal sub-exposure times. Briefly:

Whereas read noise results in the same level of uncertainty in the data
every time;

Whereas shot noise is always the square root of the signal (and thus grows
with exposure time, but at a slower rate than signal grows);

Be it therefore agreed that exposures of sufficient length will allow the
shot noise to swamp the read noise;

And be it further agreed that this allows a large total exposure time to be
subdivided into shorter, more practical individual exposures, without
significant noise penalty from the numerous individual readouts.

Witnesseth, that this therefore yields excellent signal to noise ratio
without having to make sacrifices to the God of Read Noise.
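
The swamping condition above can be turned into a back-of-the-envelope formula. This is a sketch under my own assumptions (the swamp factor of 3 and the flux figures are illustrative, not from Ron's tutorial): sky shot noise sqrt(flux * t) exceeds the read noise R by a factor k once t >= (k*R)^2 / flux.

```python
# Shortest sub-exposure (seconds) for which sky shot noise exceeds read
# noise by swamp_factor: sqrt(flux * t) >= k * R  =>  t >= (k*R)^2 / flux.
def min_sub_exposure(sky_flux_e_per_s, read_noise_e, swamp_factor=3.0):
    return (swamp_factor * read_noise_e) ** 2 / sky_flux_e_per_s

R = 10.0  # read noise in electrons RMS (illustrative)
print(min_sub_exposure(2.0, R))   # dark site, 2 e-/s/pixel:  450 s subs
print(min_sub_exposure(50.0, R))  # bright sky, 50 e-/s/pixel: 18 s suffices
```

Notice how a dark site demands much longer subs to reach the shot-noise-limited regime, which is exactly why binning (which cuts read-noise events) helps most there.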

Once you are on this path of optimal exposure time, your luminance exposures
can be as deep as you are willing to go in total exposure time. (Granted,
the God of Light Pollution still demands his tithe, but you hopefully get my
point.)

If this path is good for luminance, might it also be good for color? Yes!

Noise controls how rich, deep, and accurate your color can be.

When you take weighted exposure times for color, what you are really doing
is attempting to get the same signal to noise ratio in all three color
channels. HOWEVER: if your color exposure times are too short, then read
noise will be a significant factor, and your color ratios will not work
out as intended.

So the goal for BALANCED color is equal signal to noise in all three
channels. The primary obstacles to simple achievement of this goal are:

* Variations in the quantum efficiency of your CCD sensor with wavelength
(your chip might be more sensitive in red, for example)

* Variation in the amount of light passed by a given filter (your blue
filter might pass less light, for example).

There are multiple ways to calculate the right answer for any given
filter/chip combination, but the goal is the same: to adjust exposure times
through each filter to account for these variations.
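
One simple way to make that adjustment is to scale each filter's exposure so that (QE x filter transmission x time) is equal across channels. A sketch with made-up QE and transmission figures (check your own sensor and filter curves; these numbers are assumptions for illustration only):

```python
# Scale exposures so qe * transmission * time is equal in all channels,
# normalized so the reference channel keeps the base exposure.
def weighted_exposures(base_s, qe, transmission, ref="G"):
    ref_rate = qe[ref] * transmission[ref]
    return {ch: round(base_s * ref_rate / (qe[ch] * transmission[ch]), 1)
            for ch in qe}

# Hypothetical responses: the chip is more sensitive in red, and the blue
# filter passes less light -- so blue needs the longest exposures.
qe = {"R": 0.60, "G": 0.50, "B": 0.40}
trans = {"R": 0.90, "G": 0.90, "B": 0.70}
print(weighted_exposures(300, qe, trans))
# {'R': 250.0, 'G': 300.0, 'B': 482.1}
```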

Note: there are other ways to achieve balanced color, such as taking more
exposures through a certain filter. However, this results in a different
balance between signal/shot noise and read noise, which complicates the
calculation.

There is a simple way to make all of this simpler: determine the optimal
sub-exposure times for each of your color channels, and use exposure
durations for individual exposures that are at least long enough to swamp
the read noise with shot noise. Then, even if you do not get perfectly
balanced color, you have enough signal to work with to achieve a
satisfactory balance.

Note: if read noise is significant, then the differences in signal to noise
will give the colors different strengths. A noisier color is a weaker color.
The noisiest color in your RGB set controls how much color you can have
overall, since boosting saturation will reveal the noise at some point, and
the weakest color will reveal noise first, leading to a color balance
problem that can't be fixed unless you decrease color saturation to hide the
noise. This may or may not sink in immediately, but it's an extremely
important concept to understand clearly.

The last important thing to know is the effect of signal to noise on color
in the image. The previous paragraph says it all, though pretty tersely and
the inference might not be clear.

So let's back up and look at luminance imaging. What happens when you work
with the optimal sub-exposure time in luminance, and then take more and more
individual images? You increase S/N, and dim objects become clearer, subtle
contrast becomes discernable, etc. All good things in an image come from
reducing uncertainty in the data (that is, having better S/N).

With color, similar good things happen. As you improve the S/N of your color
data by taking larger numbers of optimal sub-exposures, you remove
uncertainty from the color information so that dimmer and dimmer objects
take on clean color. And of course the color of bright objects gets richer
and richer.

That's all there is to it.

Ron Wodaski

Created by Andy. Last Modification: Sunday 31 of October, 2010 17:02:32 CDT by Andy. (Version 1)