Why Do CCDs Have to Be Cooled?

In practice, the achievable SNR is lower than photon statistics alone would suggest: other noise components, which are not associated with the specimen photon signal, are contributed by the CCD and camera system electronics and add to the inherent photon statistical noise.

Once accumulated in collection wells, charge arising from noise sources cannot be distinguished from photon-derived signal. Most of the system noise results from readout amplifier noise and thermal electron generation in the silicon of the detector chip.
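
The relationship above can be sketched with the standard CCD noise model: shot noise and dark noise are Poisson-distributed (each contributing the square root of its accumulated charge), and independent noise sources add in quadrature with the read noise. This is a minimal illustration; the function names and example numbers are my own, not from the source.

```python
import math

def total_noise(signal_e, dark_e, read_noise_e):
    """Total noise (electrons, rms) for one CCD pixel.

    Shot noise and dark noise are Poisson, so each contributes
    sqrt(accumulated charge); independent sources add in quadrature.
    """
    return math.sqrt(signal_e + dark_e + read_noise_e ** 2)

def snr(signal_e, dark_e, read_noise_e):
    """Signal-to-noise ratio for the same pixel."""
    return signal_e / total_noise(signal_e, dark_e, read_noise_e)

# With 10,000 signal electrons, 100 dark electrons and 10 e- rms read
# noise, the total noise is sqrt(10000 + 100 + 100), roughly 101 e- rms.
```

Note that once charge is in the well, the model cannot tell the terms apart; only the statistics differ.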

The thermal noise is attributable to kinetic vibrations of silicon atoms in the CCD substrate that liberate electrons or holes even when the device is in total darkness, and which subsequently accumulate in the potential wells.

For this reason, the noise is referred to as dark noise, and represents the uncertainty in the magnitude of dark charge accumulation during a specified time interval. The rate of generation of dark charge, termed dark current, is unrelated to photon-induced signal but is highly temperature dependent. In similarity to photon noise, dark noise follows a statistical square-root relationship to dark current, and therefore it cannot simply be subtracted from the signal.

Cooling the CCD reduces dark charge accumulation by an order of magnitude for approximately every 20 degrees Celsius of temperature decrease, and high-performance cameras are usually cooled during use. Cooling even to 0 degrees Celsius is highly advantageous, and with deeper cooling, dark noise is reduced to a negligible value for nearly any microscopy application.
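
The temperature dependence can be modeled with the commonly quoted rule of thumb that dark current falls by roughly an order of magnitude for every 20 °C of cooling (equivalently, halving every 6 °C or so). The reference rate and temperature below are illustrative assumptions, not manufacturer figures.

```python
def dark_current(temp_c, ref_rate=100.0, ref_temp_c=20.0):
    """Approximate dark current (e-/pixel/s) at temp_c, assuming an
    order-of-magnitude reduction per 20 degrees C of cooling, scaled
    from an assumed reference rate at the reference temperature."""
    return ref_rate * 10.0 ** ((temp_c - ref_temp_c) / 20.0)

# Cooling from +20 C to -40 C (a 60-degree drop) cuts the dark
# current by a factor of about 1000 under this rule of thumb.
```

Real devices deviate from this exponential at the extremes, which is why vendors publish measured curves rather than a single slope.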

Providing that the CCD is cooled, the remaining major electronic noise component is read noise, primarily originating with the on-chip preamplifier during the process of converting charge carriers into a voltage signal. Although the read noise is added uniformly to every pixel of the detector, its magnitude cannot be precisely determined, but only approximated by an average value, in units of electrons root-mean-square (rms) per pixel. Some types of readout amplifier noise are frequency dependent, and in general, read noise increases with the speed of measurement of the charge in each pixel.

The increase in noise at high readout and frame rates is partially a result of the greater amplifier bandwidth required at higher pixel clock rates. Cooling the CCD reduces the readout amplifier noise to some extent, although not to an insignificant level. A number of design enhancements are incorporated in current high-performance camera systems that greatly reduce the significance of read noise, however.

One strategy for achieving high readout and frame rates without increasing noise is to electrically divide the CCD into two or more segments in order to shift charge in the parallel register toward multiple output amplifiers located at opposite edges or corners of the chip. This procedure allows charge to be read out from the array at a greater overall speed without excessively increasing the read rate and noise of the individual amplifiers.
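
The speed advantage of multi-port readout is straightforward to quantify: splitting the array among several output amplifiers divides the per-amplifier pixel count, so the frame can be read faster without raising any individual amplifier's clock rate. The sensor size and pixel rate below are illustrative assumptions.

```python
def frame_readout_time(width, height, pixel_rate_hz, n_ports=1):
    """Time (s) to read one full frame when the array is divided among
    n_ports output amplifiers, each digitizing at pixel_rate_hz."""
    pixels_per_port = (width * height) / n_ports
    return pixels_per_port / pixel_rate_hz

# A 1024 x 1024 sensor at 10 MHz per amplifier: one port needs about
# 105 ms per frame; four ports read the same frame in about 26 ms
# while each amplifier still runs at the same, noise-friendly rate.
```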

Cooling the CCD in order to reduce dark noise provides the additional advantage of improving the charge transfer efficiency CTE of the device. This performance factor has become increasingly important due to the large pixel-array sizes employed in many current CCD imagers, as well as the faster readout rates required for investigations of rapid dynamic processes. With each shift of a charge packet along the transfer channels during the CCD readout process, a small portion may be left behind.

While individual transfer losses at each pixel are minuscule in most cases, the large number of transfers required, especially in megapixel sensors, can result in significant losses for pixels at the greatest distance from the CCD readout amplifier(s) unless the charge transfer efficiency is extremely high.

The occurrence of incomplete charge transfer can lead to image blurring due to the intermixing of charge from adjacent pixels. In addition, cumulative charge loss at each pixel transfer, particularly with large arrays, can result in the phenomenon of image shading, in which regions of images farthest away from the CCD output amplifier appear dimmer than those adjacent to the serial register.
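
The cumulative effect of transfer losses compounds geometrically: a charge packet shifted n times retains CTE^n of its original charge. A short sketch (the array size is an illustrative assumption):

```python
def retained_fraction(cte, n_transfers):
    """Fraction of a charge packet surviving n_transfers shifts,
    each with charge transfer efficiency cte (compounds as cte**n)."""
    return cte ** n_transfers

# For a 1024 x 1024 array, the pixel farthest from the amplifier
# undergoes roughly 2048 transfers (parallel plus serial shifts):
#   cte = 0.9999  -> about 81% of the charge arrives
#   cte = 0.99999 -> about 98% arrives
```

This is why CTE figures that look nearly identical on paper matter enormously for the far corner of a large sensor.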

Charge transfer efficiency values in cooled CCDs can approach unity, so that only a minute fraction of charge is lost at each transfer. Both hardware and software methods are available to compensate for image intensity shading. A software correction is implemented by capturing an image of a uniform-intensity field, which is then utilized by the imaging system to generate a pixel-by-pixel correction map that can be applied to subsequent specimen images to eliminate nonuniformity due to shading.
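
The software (flat-field) correction described above can be sketched in a few lines: each specimen pixel is divided by the normalized response measured from the uniform-intensity field. Plain nested lists are used here to stay dependency-free; the example shading values are my own.

```python
def flat_field_correct(image, flat):
    """Shading correction: scale each pixel by mean(flat) / flat,
    so a uniformly lit scene comes out uniform after correction."""
    mean_flat = sum(sum(row) for row in flat) / (len(flat) * len(flat[0]))
    return [[pix * mean_flat / f for pix, f in zip(img_row, flat_row)]
            for img_row, flat_row in zip(image, flat)]

# A uniform specimen imaged through up to 20% shading is restored:
flat = [[1.0, 0.9], [0.9, 0.8]]          # measured uniform-field image
image = [[100.0, 90.0], [90.0, 80.0]]    # specimen image, same shading
corrected = flat_field_correct(image, flat)  # every pixel becomes 90.0
```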

Software correction techniques are generally satisfactory in systems that do not require large correction factors relative to the local intensity. Larger corrections, up to approximately fivefold, can be handled by hardware methods through the adjustment of gain factors for individual pixel rows.

The required gain adjustment is determined by sampling signal intensities in five or six masked reference pixels located outside the image area at the end of each pixel row. Voltage values obtained from the columns of reference pixels at the parallel register edge serve as controls for charge transfer loss, and produce correction factors for each pixel row that are applied to voltages obtained from that row during readout.

Correction factors are large in regions of some sensors, such as areas distant from the output amplifier in video-rate cameras, and noise levels may be substantially increased for these image areas. Although the hardware correction process removes shading effects without apparent signal reduction, it should be realized that the resulting signal-to-noise ratio is not uniform over the entire image. In many applications, an image capture system capable of providing high temporal resolution is a primary requirement.

For example, if the kinetics of a process being studied necessitates video-rate imaging at moderate resolution, a camera capable of delivering superb resolution is, nevertheless, of no benefit if it only provides that performance at slow-scan rates, and performs marginally or not at all at high frame rates.

Full-frame slow-scan cameras do not deliver high resolution at video rates, requiring approximately one second per frame for a large pixel array, depending upon the digitization rate of the electronics. If specimen signal brightness is sufficiently high to allow short exposure times (on the order of 10 milliseconds), the use of binning and subarray selection makes it possible to acquire about 10 frames per second at reduced resolution and frame size with cameras having electromechanical shutters.

Faster frame rates generally necessitate the use of interline-transfer or frame-transfer cameras, which do not require shutters and typically can also operate at higher digitization rates.

The latest generation of high-performance cameras of this design can capture full-frame, high-bit-depth images at near video rates. The excellent spatial resolution of CCD imaging systems is coupled directly to pixel size, and has improved consistently as technological advances have allowed CCD pixels to be made increasingly smaller while maintaining the other performance characteristics of the imagers.

In comparison to typical film grain sizes (approximately 10 micrometers), the pixels of many CCD cameras employed in biological microscopy are smaller and provide more than adequate resolution when coupled with commonly used high-magnification objectives that project relatively large-radii diffraction Airy disks onto the CCD surface.

Interline-transfer scientific-grade CCD cameras are now available having pixels smaller than 5 micrometers, making them suitable for high-resolution imaging even with low-magnification objectives. The relationship of detector element size to relevant optical resolution criteria is an important consideration in choosing a digital camera if the spatial resolution of the optical system is to be maintained.

The Nyquist sampling criterion is commonly utilized to determine the adequacy of detector pixel size with regard to the resolution capabilities of the microscope optics. Nyquist's theorem specifies that the smallest diffraction disk radius produced by the optical system must be sampled by at least two pixels in the imaging array in order to preserve the optical resolution and avoid aliasing. As an example, a pixel several micrometers across provides adequate sampling of the relatively large Airy disks projected by high-magnification objectives.

With typical pixel sizes, sufficient margin is available that the Nyquist criterion is nearly satisfied even with 2 x 2 pixel binning. Detector quantum efficiency (QE) is a measure of the likelihood that a photon having a particular wavelength will be captured in the active region of the device to enable liberation of charge carriers. The parameter represents the effectiveness of a CCD imager in generating charge from incident photons, and is therefore a major determinant of the minimum detectable signal for a camera system, particularly when performing low-light-level imaging.
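
The Nyquist check above amounts to comparing the projected Airy disk radius (0.61 λ / NA, multiplied by the objective magnification) against twice the pixel size. A small sketch, with the objective parameters chosen as illustrative assumptions:

```python
def airy_radius_on_sensor(wavelength_um, numerical_aperture, magnification):
    """Radius (micrometers) of the Airy disk projected onto the CCD:
    0.61 * lambda / NA, scaled by the objective magnification."""
    return 0.61 * wavelength_um / numerical_aperture * magnification

def nyquist_ok(pixel_um, wavelength_um, na, magnification):
    """Nyquist criterion: the smallest Airy radius must be sampled
    by at least two pixels."""
    return airy_radius_on_sensor(wavelength_um, na, magnification) >= 2 * pixel_um

# A 100x / NA 1.4 objective at 550 nm projects a ~24 um Airy radius,
# so 6.8 um pixels comfortably satisfy the criterion.
```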

No charge is generated if a photon never reaches the semiconductor depletion layer or if it passes completely through without transferring significant energy. The nature of the interaction between a photon and the detector depends upon the photon's energy and corresponding wavelength, and is directly related to the detector's spectral sensitivity range. Although conventional front-illuminated CCD detectors are highly sensitive and efficient, none have 100-percent quantum efficiency at any wavelength.

Image sensors typically employed in fluorescence microscopy detect photons across the visible and into the near-infrared spectral range, with peak sensitivity normally in the green-to-red region. Maximum QE values of conventional designs are modest, although the newest designs may reach 80 percent efficiency. Figure 10 illustrates the spectral sensitivity of a number of popular CCDs in a graph that plots quantum efficiency as a function of incident light wavelength.

Most CCDs used in scientific imaging are of the interline-transfer type, and because the interline mask severely limits the photosensitive surface area, many older versions exhibit very low QE values. With the advent of surface microlens technology to direct more incident light to the photosensitive regions between transfer channels, newer interline sensors are much more efficient, and many achieve markedly higher quantum efficiency values. Sensor spectral range and quantum efficiency are further enhanced in the ultraviolet, visible, and near-infrared wavelength regions by various additional design strategies in several high-performance CCDs.

Because aluminum surface transfer gates absorb or reflect much of the blue and ultraviolet wavelengths, many newer designs employ other materials, such as indium-tin oxide, to improve transmission and quantum efficiency over a broader spectral range.

Even higher QE values can be obtained with specialized back-thinned CCDs, which are constructed to allow illumination from the rear side, avoiding the surface electrode structure entirely. To make this possible, most of the silicon substrate is removed by etching, and although the resulting device is delicate and relatively expensive, quantum efficiencies of approximately 90 percent can routinely be achieved. Other surface treatments and construction materials may be utilized to gain additional spectral-range benefits.

Performance of back-thinned CCDs in the ultraviolet wavelength region is enhanced by the application of specialized antireflection coatings.

Modified semiconductor materials are used in some detectors to improve quantum efficiency in the near-infrared. Sensitivity to wavelengths outside the normal spectral range of conventional front-illuminated CCDs can be achieved by the application of wavelength-conversion phosphors to the detector face. Phosphors for this purpose are chosen to absorb photon energy in the spectral region of interest and emit light within the spectral sensitivity region of the CCD.

As an example of this strategy, if a specimen or fluorophore of interest emits light at a wavelength where the sensitivity of any CCD is minimal, a conversion phosphor can be employed on the detector surface that absorbs efficiently at that emission wavelength and re-emits within the peak sensitivity range of the CCD. The dynamic range of a CCD detector expresses the maximum signal intensity variation that can be quantified by the sensor. The quantity is specified numerically by most CCD camera manufacturers as the ratio of pixel full well capacity (FWC) to the read noise, with the rationale that this value represents the limiting condition in which intrascene brightness ranges from regions that are just at pixel saturation level to regions that are barely lost in noise.

The sensor dynamic range determines the maximum number of resolvable gray-level steps into which the detected signal can be divided. To take full advantage of a CCD's dynamic range, it is appropriate to match the analog-to-digital converter's bit depth to the dynamic range in order to allow discrimination of as many gray scale steps as possible.

Analog-to-digital converters with bit depths of 10 and 11 are capable of discriminating 1024 and 2048 gray levels, respectively. As stated previously, because a computer bit can only assume one of two possible states, the number of intensity steps that can be encoded by a digital processor (ADC) reflects its resolution (bit depth), and is equal to 2 raised to the value of the bit depth specification.

Therefore, 8-, 10-, and 12-bit processors can encode a maximum of 256, 1024, or 4096 gray levels, respectively, and higher bit depths extend the count by further powers of two. Specifying dynamic range as the ratio of full well capacity to read noise is not necessarily a realistic measure of useful dynamic range, but is valuable for comparing sensors. In practice, useful dynamic range is smaller, both because the CCD response becomes nonlinear before full well capacity is reached and because a signal level equal to the read noise is unacceptable visually and virtually useless for quantitative purposes.
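
Matching the ADC to the sensor reduces to simple arithmetic: the dynamic range is FWC divided by read noise, and the minimum useful bit depth is the smallest power of two that covers that ratio. The FWC and read noise values below are illustrative assumptions.

```python
import math

def dynamic_range(full_well_e, read_noise_e):
    """Dynamic range as the ratio of full well capacity to read noise."""
    return full_well_e / read_noise_e

def bits_needed(dr):
    """Smallest ADC bit depth whose 2**bits gray levels cover dr."""
    return math.ceil(math.log2(dr))

dr = dynamic_range(16000, 10)  # 1600:1
# 2**10 = 1024 levels is too few; 2**11 = 2048 covers the range,
# so a 12-bit (4096-level) ADC is a comfortable match.
```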

Note that the maximum dynamic range is not equivalent to the maximum possible signal-to-noise ratio, although the SNR is also a function of full well capacity.

The photon statistical noise associated with the maximum possible signal, or FWC, is the square root of the FWC value; for the previous example of a 16,000-electron FWC, this is approximately 126 electrons. The photon noise represents the minimum intrinsic noise level, and both detected stray light and electronic system noise diminish the maximum SNR that can be realized in practice to values below this theoretical maximum, since these sources reduce the effective FWC by adding charge that is not signal to the wells.
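
The photon-limited ceiling follows directly from Poisson statistics: at saturation the SNR is FWC / sqrt(FWC) = sqrt(FWC). A one-line sketch using the same illustrative 16,000-electron full well:

```python
import math

def max_snr(full_well_e):
    """Photon-limited SNR at saturation: FWC / sqrt(FWC) = sqrt(FWC)."""
    return math.sqrt(full_well_e)

# A 16,000-electron full well gives a best-case SNR of about 126:1.
```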

Although a manufacturer will typically equip a camera with an ADC whose bit depth matches its calculated dynamic range, several factors are relevant in considering the match between sensor dynamic range and the digitizing capacity of the processor. For some of the latest interline-transfer CCD cameras, the dynamic range determined from the FWC and read noise is considerably smaller than the range implied by the bit depth of the processor supplied with the camera.

However, a number of current designs include an option for setting gain below 1x. This strategy takes advantage of the fact that pixels of the serial register are designed to have twice the electron capacity of parallel-register pixels, and when the camera is operated in 2 x 2 binning mode (common in fluorescence microscopy), high-quality images that exploit the full bit depth can be obtained. It is important to be aware of the various mechanisms by which electronic gain can be manipulated to utilize the available bit depth of the processor; when the dynamic range of different cameras is being compared, the best approach is to calculate the value from the pixel full well capacity and camera read noise.

It is common to see camera systems equipped with processing electronics having much higher digitizing resolution than required by the inherent dynamic range of the camera.

In such a system, operation at the conventional 1x electronic gain setting results in a potentially large number of unused processor gray-scale levels. It is possible for the camera manufacturer to apply an unspecified gain factor greater than 1x, which might not be obvious to the user; although this practice does boost the signal to utilize the full bit depth of the ADC, it produces increased digitization noise because the number of electrons constituting each gray-level step is reduced.
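
The trade-off above can be made concrete by computing the electrons represented by one gray level (one ADU): raising electronic gain spreads fewer electrons across each digitization step. The full well and bit depth below are illustrative assumptions, not figures from any particular camera.

```python
def electrons_per_adu(full_well_e, bit_depth, gain=1.0):
    """Electrons represented by one gray level (ADU). Higher electronic
    gain means each digitization step spans less real charge."""
    return full_well_e / (2 ** bit_depth) / gain

# 16,000 e- spread over a 14-bit range at 1x gain: ~0.98 e- per level,
# already finer than typical read noise, so additional gain mostly
# digitizes noise rather than signal.
```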

Charge-coupled devices, or CCDs, are sensitive detectors of photons that can be used in telescopes instead of film or photographic plates to produce images. CCDs were invented in the late 1960s and are now used in digital cameras, photocopiers and many other devices. Their inventors, Willard Boyle and George E. Smith, received the Nobel Prize in Physics in 2009 for their work. A CCD is a tiny microchip onto which the light that the telescope collects is focused.

The microchip consists of a large grid of individual light-sensing elements called pixels; each pixel converts the photons that strike it into stored electric charge. Now imagine a camera delivering far lower noise than a conventional uncooled sensor; to hit such specifications it would need to be cooled significantly. Raptor Photonics recently launched a camera called the Kingfisher V that delivers this type of performance using a new breed of Sony ICX sensors. So what is the secret?

In fact, the cooler you can get the chip, the better the performance. Most applications require only minimal cooling for enhanced performance, but high-end applications like single-molecule detection require the ultimate in sensitivity. Applications requiring long exposures will see an increase in noise over time, so these applications demand cooling.

Minimum thermal impedance leads to maximum efficiency, enabling these long exposures. So why do you need to deep cool? There are several contributing factors to the noise experienced by a CCD, the main ones being dark current and readout noise. Thermal energy alone is enough to excite electrons into the image pixels and these cannot be distinguished from the actual image photoelectrons.

This, of course, has its limits. The reduction in dark current achieved by lowering temperature can be seen in Figure 1. Often forgotten, or deliberately avoided, the distribution of dark current across a CCD can also be an issue. Most dark current specifications quote the average value for the complete 2D CCD array, and in some cases the median value is used. For any CCD there will be a distribution of dark current values across the array, with some pixels generating far more dark current than the typical pixel. Deep cooling also benefits these often-ignored pixels by reducing their dark current beyond detectable limits, making more of the CCD array available for accurate scientific image detection.
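
The mean-versus-median distinction matters because a handful of hot pixels can dominate the average. A small sketch with a hypothetical dark-current map (all values are invented for illustration):

```python
import statistics

# Hypothetical dark-current map (e-/pixel/s): mostly quiet pixels
# plus a few "hot" outliers, as found on any real CCD array.
dark_map = [0.1] * 97 + [5.0, 8.0, 12.0]

mean_dc = statistics.mean(dark_map)      # pulled up by the hot pixels
median_dc = statistics.median(dark_map)  # reflects the typical pixel

# Here the mean is ~0.35 while the median is 0.1 -- which is why two
# vendors can publish very different-looking figures for similar chips,
# and why the hot-pixel tail deserves separate attention.
```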

So the case for cooling a CCD well below ambient temperature is easy to make. But as anyone who has sipped a cold drink on a warm day knows, a cold surface will cause moisture to condense out of the air, so cooled sensors must be protected against condensation. The dark level can differ per pixel and depends on the temperature, the integration time of the detector, and the electronics that read out the detector and convert its signal to a digital value. Therefore, even with the detector's specifications at hand, it is difficult to determine in advance what dark level to expect and what the noise on this dark level (the dark noise) will be.

The dark level is determined by the electronic offset of the readout circuitry plus the dark current integrated over the exposure time. Figure 1: a sketch of the dark level on a logarithmic scale for different integration times. The measured dark level of the detector is shown in Figure 2. For each series, the average signal over the pixels was calculated [2].

The results show that the dark level is essentially constant up to integration times of 50 ms. This is the regime where the dark current is so small that it does not contribute to the dark level.

At longer integration times the dark current becomes important, and with it the need for cooling. Figure 2: average dark level for different integration times and detector temperatures.
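
The two regimes described above follow from a simple linear model: a fixed electronic offset plus dark current multiplied by the integration time. The offset and rate values here are illustrative assumptions, not the measured figures behind Figure 2.

```python
def dark_level(offset_counts, dark_current_counts_per_s, t_s):
    """Dark level model: fixed electronic offset plus dark current
    integrated over the exposure time t_s."""
    return offset_counts + dark_current_counts_per_s * t_s

# At short exposures the offset dominates, so the dark level looks
# constant; at long exposures the dark-current term takes over, which
# is exactly where cooling (a smaller rate) pays off.
short = dark_level(150.0, 2.0, 0.05)  # 50 ms -> 150.1 counts
long_ = dark_level(150.0, 2.0, 30.0)  # 30 s  -> 210.0 counts
```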


