Camera Sensitivity


Introduction

The sensitivity of a camera is one of the most important aspects of camera performance – with inadequate sensitivity, your experiment may simply be impossible. However, sensitivity cannot be represented by one number or factor alone – while one camera may outperform another at one light level or exposure time, the situation may be reversed at higher light or longer exposures. How then do we quantify and compare sensitivity? We calculate the signal to noise ratio.

Sensitivity is reliant on four main factors:

  1. Quantum efficiency
  2. Photon wavelength
  3. The physical size of the sensor pixels
  4. Noise sources such as read noise

Quantum Efficiency and Photon Wavelength

Quantum efficiency (QE) is the percentage of incident photons that will release photoelectrons in the sensor, i.e. be detected. Due to the properties of sensor materials, QE is highly dependent on the wavelength of the detected photon. Fig.1 shows a typical QE curve for a modern sCMOS camera.

Figure 1: The quantum efficiency of the Prime 95B at different photon wavelengths.

Pixel Size

The number of photons detected per pixel is even more strongly influenced by the size of the pixel – a larger pixel area collects more photons, yet generally experiences no more noise than a smaller pixel. Photon counts scale with pixel area – a pixel with twice the side length has four times the area and will therefore collect four times as many photons.
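
As a rough numerical sketch of this scaling (the photon flux and pixel sizes below are assumed, illustrative values, not a spec for any particular camera), the photon count is simply the flux multiplied by the pixel area:

```python
# Sketch: photon counts scale with pixel area (illustrative numbers only).
photon_flux = 10.0   # photons per um^2 reaching the sensor in one exposure (assumed)
small_side = 6.5     # um, a typical sCMOS pixel side length
large_side = 13.0    # um, twice the side length -> four times the area

small_counts = photon_flux * small_side ** 2   # ~423 photons
large_counts = photon_flux * large_side ** 2   # ~1690 photons, four times as many

print(small_counts, large_counts)
```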

This is in direct opposition to resolution, where larger pixels reduce the camera’s ability to discern fine details. As mentioned before, sensitivity and resolution need to be balanced when designing a sensor and an optical setup: if higher resolution is required, smaller pixels can be used, but sensitivity will fall as a result. Conversely, binning (grouping pixels on the sensor into ‘super-pixels’) increases the effective pixel size and therefore increases sensitivity, at the cost of resolution.
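
To illustrate the principle, here is a minimal NumPy sketch of 2x2 software binning; on-sensor binning instead sums charge before readout, but the arithmetic is the same:

```python
import numpy as np

def bin_2x2(image: np.ndarray) -> np.ndarray:
    """Sum 2x2 blocks of pixels into 'super-pixels' (software-binning sketch)."""
    h, w = image.shape
    image = image[:h - h % 2, :w - w % 2]                  # trim any odd edge rows/columns
    return image.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

rng = np.random.default_rng(1)
frame = rng.poisson(lam=5.0, size=(4, 4))   # toy low-light frame
binned = bin_2x2(frame)                     # 2x2 result; each super-pixel holds ~4x the counts
```

Each super-pixel collects roughly four times the signal of a single pixel, at the cost of halving the resolution in each direction.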

Signal-To-Noise Ratio

Signal to noise ratio (SNR) is the relationship between signal and noise in a pixel. If the signal from the sample is weak compared to the noise levels, it can be hard to see, measure, or even detect at all.

The signal for our camera is the number of detected photoelectrons in that pixel. A larger pixel or higher QE results in a higher photoelectron count, and so a higher signal.

Noise is often mistaken for the background grey level of the image – in fact, noise is the error on the measurement made by every sensor pixel. Noise is fundamental and unavoidable and occurs with every camera and every microscope system. We can’t measure exactly how much noise was in an image, but we can calculate how far a measurement may be from the ‘true’ value. Rather than each pixel reporting an exact photoelectron count, each measurement falls within a range, as seen in Fig.2A. While noise is unavoidable, it is best to reduce it as much as possible, or it will interfere with the signal.

As signal is desirable and noise is undesirable, it is best to have as high an SNR as possible. The higher the SNR, the better the image. Image contrast is also hugely dependent on SNR – the closer the signal from your sample is to the random variation in pixel intensities caused by noise, the more difficult it will be to see. Different imaging and analysis techniques require different minimum SNR levels – for example, resolving ultra-fine sub-cellular details requires a very high SNR, whereas tracking the movement of a bright object can be performed at a much lower SNR. Some examples of how images appear at different representative SNR values can be seen in Fig.2B.

Note: while SNR varies at the single-pixel level, entire images are often described with one SNR value as shorthand (as in Fig.2B). This representative value is the SNR at the peak intensities of your signal of interest.

Figure 2: Demonstrating noise and SNR. A) The left-hand array shows just the signal, while the right-hand array shows the signal plus noise. The peak SNR across this array is 5:1 (from the peak signal value of 25). B) How an image changes at different SNRs. At 15:1, the image is high quality and plenty of information is available. At 10:1 the image may appear similar, but the magnified insert shows that information has been lost. At lower SNRs, significant amounts of information are lost.
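
As a quick check of the 5:1 figure in Fig.2A, assuming the noise there is dominated by photon shot noise (described in the next section):

```python
import math

peak_signal = 25                      # photoelectrons at the brightest pixel in Fig.2A
shot_noise = math.sqrt(peak_signal)   # Poisson (shot) noise on that signal = 5 electrons
peak_snr = peak_signal / shot_noise   # 5.0, i.e. the 5:1 peak SNR quoted in the caption
```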

Sources Of Noise

A common mistake with noise is to attempt to measure it from images – noise cannot be measured in this way, but fortunately it is easy to calculate. There are three main sources of noise to consider when using a scientific camera: read noise, dark current and photon shot noise.

Read Noise

Read noise (or readout noise) is the inherent limit on the accuracy of measuring (reading) the detected photoelectrons, and it does not depend on exposure time or light level. Each camera typically has a fixed read noise quoted by the manufacturer – for example, our Prime 95B has a read noise of 1.6 electrons, meaning the measured electron count typically varies by around 1.6 electrons when it is read off the sensor and converted to grey levels. When a detected signal is hundreds or thousands of electrons this fluctuation is negligible, so read noise only has a noticeable effect at very low signal levels (such as low-light imaging). Read noise is reduced as far as possible through camera electronic design and optimization of camera parameters.
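
To see why read noise only matters at low signal levels, here is a rough sketch. It assumes the 1.6 e⁻ figure above, treats shot noise (introduced below) as the only other noise source, and uses the fact that independent noise sources add in quadrature:

```python
import math

read_noise = 1.6   # electrons (rms), the figure quoted above

def snr(signal_electrons):
    # Independent noise sources add in quadrature; shot noise variance equals the signal.
    total_noise = math.sqrt(signal_electrons + read_noise ** 2)
    return signal_electrons / total_noise

print(snr(5))        # ~1.8 -> read noise noticeably degrades a 5 e- signal
print(snr(10_000))   # ~100 -> effectively shot-noise limited
```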

Dark Current

Dark current is a consequence of heat in the silicon of the camera sensor. The silicon reacts to photons and produces photoelectrons, which are stored in a ‘well’ in the pixel before readout. Crucially, the sensor counts all electrons in the well, regardless of where they came from. While most are photoelectrons created by photons hitting the silicon, some are thermal electrons that have jumped into the well due to thermal motion in the silicon. The rate at which this occurs is usually small, but if an experiment needs long exposure times, there is more time for dark current to build up (as seen in Fig.3B). Dark current can be decreased simply by cooling the camera, which is why most cameras come with a fan for air cooling and are compatible with liquid cooling. The cooler the camera, the lower the dark current, and manufacturers often advertise how cold the sensor runs while imaging at room temperature.
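
A back-of-the-envelope sketch of the exposure-time dependence (the dark current rate below is an assumed, illustrative value, not a spec for any particular camera):

```python
dark_current = 0.5   # thermal electrons per pixel per second (assumed value)

for exposure_s in (0.1, 1, 10, 40):
    thermal_electrons = dark_current * exposure_s   # builds up linearly with exposure time
    print(f"{exposure_s:>5} s exposure -> {thermal_electrons:.2f} e- of dark current per pixel")
```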

Photon Shot Noise

Photon shot noise is the most commonly observed noise source – when you see variations in intensity from pixel to pixel even at high light levels, that’s photon shot noise. It is caused by the inherent randomness in the timing of photon emission and detection. Whatever the light source – a filament lamp, an LED, or even a fluorophore – photons are emitted at random times and in random directions rather than at regular intervals. We can predict how many photons are emitted on average in a given time interval, but the actual count will vary around this number from measurement to measurement. This is known as Poisson behavior, and the variation is simple to calculate: if a signal of N photons is received, the photon shot noise will be √N (so 10 photons of shot noise for 100 photons of signal).
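
A minimal simulation of this Poisson behavior (the expected photon count of 100 is an arbitrary example value):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
mean_photons = 100                                     # expected photons per pixel per exposure
samples = rng.poisson(lam=mean_photons, size=100_000)  # many repeated 'measurements'

print(samples.mean())   # ~100 -> the average matches the expected count
print(samples.std())    # ~10  -> the spread is close to sqrt(100), the shot noise
```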

Figure 3: Different types of noise. A) While an image may look smooth at first glance, every image still contains some noise, which can be seen at higher magnifications. The insert shows the noise in this image. B) Dark current noise, which increases with exposure. These images of a blank screen range from 0 to 40 seconds of exposure, with dark current noise increasing at each step. C) Photon shot noise. The left image shows idealized conditions, where each pixel receives a set number of photons at set times. The right image shows realistic conditions, where photon emission and detection are random in time.

Summary

While photon shot noise is just a quirk of physics, some camera technologies can worsen its effects (see our article on EMCCDs). Read noise can be reduced with smart camera design, and dark current can be reduced by cooling the camera. By maximizing signal collection and reducing noise as far as possible, the signal to noise ratio is maximized, improving the quality of images and analysis.
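
Putting the pieces together, a common back-of-the-envelope model combines the detected signal with all three noise sources added in quadrature. The sketch below is illustrative only: the function name, the default QE, read noise, dark current, and exposure values are assumptions for the example, not the specification of any particular camera.

```python
import math

def camera_snr(photons, qe=0.95, read_noise=1.6, dark_current=0.5, exposure_s=0.1):
    """Estimate the SNR for a single pixel (illustrative model, assumed default values)."""
    signal = qe * photons                   # detected photoelectrons
    shot_var = signal                       # shot noise variance equals the signal (Poisson)
    dark_var = dark_current * exposure_s    # dark current electrons are also Poisson
    noise = math.sqrt(shot_var + dark_var + read_noise ** 2)
    return signal / noise

print(camera_snr(10))      # low light: read noise and dark current matter
print(camera_snr(10_000))  # bright scene: effectively shot-noise limited
```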