
How to measure CCD and CMOS image sensor qualities

January 7, 2020 By David Herres

Today, about 95% of all digital cameras use CMOS image sensors with the rest employing CCDs. From the standpoint of sensor outputs, the main difference between CMOS and CCD sensors is that each pixel in a CMOS sensor has its own readout circuit next to the photosensitive area. In CCDs, charge collected in individual pixels is subsequently shifted along transfer channels under the influence of voltages applied to a gate structure for readout.

To understand image sensor measurements, you must grasp the basic structure of these sensor technologies. Note, too, that there are only limited opportunities for measuring image sensors built into end-use equipment such as mobile phones, because there is no direct access to the sensor outputs. Measurements typically take place on the sensor chips themselves, so we'll quickly review a few fundamentals of these ICs.

CCD chips contain a large number of light-sensing (pixel) elements arranged in a two-dimensional array. The elements trap and hold photon-induced charge carriers when biased correctly. The fundamental light-sensing unit of the CCD is a metal oxide semiconductor (MOS) capacitor operated as a photodiode and charge-carrier storage device. Reverse bias causes negatively charged electrons to migrate to an area underneath a positively charged gate electrode. Electrons liberated by photon interaction get stored in the depletion region up to what's called the full-well reservoir capacity.

In a complete CCD, individual sensing elements in the array are segregated in one dimension by voltages applied to the surface electrodes. They are also electrically isolated from their neighbors in the other direction by insulating barriers, or channel stops, within the silicon substrate.

Light-sensing photodiode elements of the CCD respond to incident photons by absorbing much of their energy, thereby liberating electrons. The process forms electron-deficient sites (holes) within the silicon crystal lattice, one electron-hole pair from each absorbed photon. The resulting charge that accumulates in each pixel is linearly proportional to the number of incident photons.

External voltages applied to each pixel’s electrodes control the storage and movement of accumulated charges. Although either negatively charged electrons or positively charged holes can be accumulated (depending on the CCD design), the charges generated by incident light are usually referred to as photoelectrons.

The image generation process of a CCD is often divided into four stages: charge generation through photon interaction with the device photosensors, collection and storage of the liberated charge, charge transfer, and charge measurement. First, electrons and holes are generated in response to incident photons in the depletion region of the MOS capacitor, and the liberated electrons migrate into a potential well formed beneath an adjacent, positively biased gate electrode. A system of aluminum or (transparent) polysilicon surface-gate electrodes is separated from the charge-carrying channels by a layer of insulating silicon dioxide placed between the gate structure and the silicon substrate.

Electrons generated in the depletion region are initially collected into electrically positive potential wells associated with each pixel. During readout, the collected charge is subsequently shifted along the transfer channels as voltages are applied to the gate structure with the proper timing.

Stored charge from each sense element in a CCD is transferred to a readout node via a charge-transfer process. Charge is moved across the device by manipulating voltages on the capacitor gates in a pattern that causes charge to spill from one capacitor to the next, or from one row of capacitors to the next. Because the CCD is a serial device, the charge packets are read out one at a time.

A combination of parallel and serial transfers deliver each sensor element’s charge packet, in sequence, to a single measuring node. The CCD electrode (gate) network forms a shift register for charge transfer. The charge-coupled shift of the entire parallel register moves the row of pixel charges nearest the register edge into a specialized single row of pixels along one edge of the chip called the serial register. From this row charge packets move in sequence to an on-chip amplifier for measurement. Once emptied, the serial register is refilled by another row-shift of the parallel register, and the cycle repeats.
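
To make that bucket-brigade sequence concrete, here is a minimal sketch in Python (NumPy) that simulates parallel row shifts into a serial register followed by serial shifts to a single output node. The array values, function name, and frame size are illustrative only and do not model any particular vendor's readout.

```python
# Toy model of CCD readout: the parallel (vertical) register shifts whole rows
# toward a serial (horizontal) register, which then shifts charge packets one
# at a time to a single output node. Array values stand in for charge packets.
import numpy as np

def ccd_readout(charge_image):
    """Read out a 2-D array of pixel charges CCD-style: row shifts, then serial shifts."""
    parallel = np.array(charge_image, dtype=float)  # charge still held in the pixel array
    rows, cols = parallel.shape
    output_stream = []                              # values seen by the output amplifier, in order

    for _ in range(rows):
        serial = parallel[-1, :].copy()             # row nearest the edge spills into the serial register
        parallel = np.roll(parallel, 1, axis=0)     # every remaining row shifts one row closer
        parallel[0, :] = 0                          # the vacated row is now empty
        for _ in range(cols):
            output_stream.append(serial[-1])        # one charge packet reaches the measuring node
            serial = np.roll(serial, 1)             # serial register shifts by one pixel
            serial[0] = 0
    return np.array(output_stream)

# Example: a 3 x 4 "image" of accumulated photoelectrons
frame = np.arange(12).reshape(3, 4)
print(ccd_readout(frame))
```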

The need for nearly perfect charge transfer complicates the manufacture of CCD image sensors.

A CMOS imager 4T cell. Note that some more advanced designs employ 5T cells and other refinements.

CMOS imagers avoid the problem. The simplest CMOS imagers employed pixels without amplification, with each pixel consisting of a photodiode and a MOSFET switch. Today's CMOS sensors are called active pixel sensors (APS), with each pixel containing one or more MOSFET amplifiers that convert the photo-generated charge to a voltage, amplify the signal, and reduce noise. CMOS sensors also use a special kind of photodetector called a pinned photodiode that is optimized for low lag, low noise, high quantum efficiency, and low dark current.

The standard CMOS APS pixel today consists of a photodetector (pinned photodiode), a floating diffusion, and the so-called 4T cell consisting of four CMOS transistors including a transfer gate, reset gate, selection gate, and a source-follower readout transistor. The pinned photodiode allows complete charge transfer to the floating diffusion (which is further connected to the gate of the readout transistor), eliminating lag.

A typical two-dimensional array of pixels is organized into rows and columns. Pixels in a given row share reset lines, so that a whole row resets simultaneously. The row-select lines of each pixel in a row are tied together so only one row is selected at a time. The outputs of each pixel in any given column are also tied together.
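
As a rough illustration of that row-at-a-time organization, the sketch below simulates a rolling readout in which each selected row's charge is converted to a voltage on the shared column outputs. The conversion gain, noise level, and function names are assumptions made for illustration, not figures from any particular sensor.

```python
# Illustrative rolling readout of a CMOS active-pixel array: one row at a time
# is selected, and every pixel in that row drives its shared column line.
import numpy as np

rng = np.random.default_rng(0)

def read_row(photo_charge, row, conversion_gain=50e-6, read_noise_e=2.0):
    """Convert one row's photoelectron counts to column-output voltages (illustrative units)."""
    electrons = photo_charge[row, :].astype(float)          # charge transferred to each floating diffusion
    noise = rng.normal(0.0, read_noise_e, electrons.shape)  # source-follower / readout noise, in electrons
    return (electrons + noise) * conversion_gain            # assumed ~50 uV per electron conversion gain

def rolling_readout(photo_charge):
    """Read the whole array row by row, as the row-select lines would sequence it."""
    return np.vstack([read_row(photo_charge, r) for r in range(photo_charge.shape[0])])

# Example: photoelectrons accumulated during one integration period
frame = rng.poisson(lam=500, size=(4, 6))
print(rolling_readout(frame))
```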

One big difference between CMOS and CCD sensing is that each CMOS sensor pixel has its own readout circuit sitting next to the photosensitive area. CMOS image sensors are inexpensive enough to be used in smartphones and consume less power than CCD sensors. They also allow image processing at the pixel level (for zones of interest, binning, filtering, and so forth). But CMOS sensors often exhibit a lower dynamic range, more readout noise, and a less uniform spatial response. For these reasons it can be important to measure sensor properties.

One technique for making simple sensor measurements employs a small integrating sphere (basically a hollow spherical cavity with its interior covered with a diffuse white reflective coating) that is illuminated by a white LED, a standard calibrated photodiode (or a light power meter) for which the spectral response is known, and a small monochromator transmitting a narrow band of selectable light wavelengths.

A datasheet spectral response graph, this one for a Teledyne CMOS sensor chip.
One approach to measuring sensor qualities starts with the sensor in the dark. In sensors with a settable integration time (that is, the amount of time the photosensors are exposed to incoming light), the usual approach is to keep the integration time to about a millisecond or less. Then measure the dark output, which is often listed on datasheets in terms of ADUs (analog-to-digital units), once also known as least-significant bits. This reading can be compared to the value listed on the datasheet and will likely be higher, because datasheet values are usually specified at 25°C. The difference illustrates why sensors used in low-illumination settings (as in stargazing) must be cooled.
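
A minimal sketch of that dark-level check, assuming raw dark frames are already available as arrays and that the datasheet lists a dark level in ADU, might look like this (the frame data and the datasheet figure below are hypothetical placeholders):

```python
# Estimate the sensor's dark level in ADU from a stack of dark frames taken with
# a short (~1 ms) integration time, and compare it to a (hypothetical) datasheet value.
import numpy as np

def dark_level_adu(dark_frames):
    """Mean dark output (ADU) and its pixel-to-pixel spread across a stack of frames."""
    stack = np.asarray(dark_frames, dtype=float)
    per_pixel_mean = stack.mean(axis=0)      # average out temporal noise per pixel
    return per_pixel_mean.mean(), per_pixel_mean.std()

# Example with synthetic data; replace with real captures from the sensor under test.
rng = np.random.default_rng(1)
frames = rng.normal(loc=45.0, scale=3.0, size=(32, 480, 640))   # 32 dark frames
mean_adu, spread_adu = dark_level_adu(frames)

datasheet_dark_adu = 40.0    # hypothetical 25 degC datasheet figure
print(f"measured dark level: {mean_adu:.1f} ADU (datasheet: {datasheet_dark_adu} ADU)")
print(f"fixed-pattern spread: {spread_adu:.1f} ADU")
```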

In the common Czerny–Turner monochromator, a broad-band illumination source (A) points at an entrance slit (B). The amount of light energy available for use depends on the intensity of the source in the space defined by the slit (width × height) and the acceptance angle of the optical system. The slit is placed at the effective focus of a curved mirror (the collimator, C) so the light from the slit reflected from the mirror is collimated (focused at infinity). The collimated light is diffracted from the grating (D) and collected by another mirror (E), which refocuses the light, now dispersed, on the exit slit (F). In a prism monochromator, a reflective prism takes the place of the diffraction grating, in which case the light is refracted by the prism. At the exit slit, the colors of the light are spread out. Because each color arrives at a separate point in the exit-slit plane, there is a series of images of the entrance slit focused on the plane. Because the entrance slit is finite in width, parts of nearby images overlap. The light leaving the exit slit (G) contains the entire image of the entrance slit of the selected color plus parts of the entrance slit images of nearby colors. A rotation of the dispersing element causes the band of colors to move relative to the exit slit, so the desired entrance slit image is centered on the exit slit. The range of colors leaving the exit slit is a function of the width of the slits. The entrance and exit slit widths are adjusted together. (By FeuRenard – Own work, original by DrBob [GFDL (www.gnu.org/copyleft/fdl.html) or CC-BY-SA-3.0 (https://creativecommons.org/licenses/by-sa/3.0/)], CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=35646782)

To figure out the spectral response of the detector, we use a monochromator and a photodiode for which we know the spectral response. As a quick review, a monochromator transmits a narrow band of wavelengths that the operator selects. The use of a calibrated photodiode allows the measurement of sensor irradiance, the power per unit area of radiant energy falling on the sensor.

Sensor chip makers publish the spectral response of their devices, usually given as a plot of output level for 1 nJ/cm² input vs. wavelength in nanometers. Monochromator readings can be compared to the published levels to verify the sensor response.
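
Putting the pieces together, a hedged sketch of that comparison is shown below: at each monochromator setting, the calibrated photodiode gives the irradiance on the sensor, which together with the integration time yields the exposure in nJ/cm²; the dark-subtracted sensor signal divided by that exposure can then be compared against the published curve. All wavelengths, signal levels, and irradiances in the sketch are made-up placeholders.

```python
# Sketch of a spectral-response check: step the monochromator across wavelengths,
# use the calibrated photodiode reading to get the irradiance on the sensor, and
# express the sensor output as ADU per nJ/cm^2 for comparison with the datasheet plot.

def response_point(mean_signal_adu, dark_adu, irradiance_w_cm2, integration_time_s):
    """Sensor response in ADU per nJ/cm^2 at one wavelength."""
    exposure_nj_cm2 = irradiance_w_cm2 * integration_time_s * 1e9   # J/cm^2 -> nJ/cm^2
    return (mean_signal_adu - dark_adu) / exposure_nj_cm2

# Hypothetical sweep data: wavelength (nm), mean sensor signal (ADU),
# and irradiance (W/cm^2) reported by the calibrated photodiode.
sweep = [
    (450, 812.0, 2.1e-7),
    (550, 1460.0, 2.0e-7),
    (650, 1105.0, 1.9e-7),
]
dark_adu = 45.0
t_int = 1e-3    # 1 ms integration time

for wavelength_nm, signal_adu, irradiance in sweep:
    r = response_point(signal_adu, dark_adu, irradiance, t_int)
    print(f"{wavelength_nm} nm: {r:.1f} ADU per nJ/cm^2")
```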

Finally, we should mention a new kind of image sensor, the Quanta image sensor (QIS). The QIS computes an image by literally counting photoelectrons over space and time. The QIS consists of specialized pixels called jots. Their full-well capacity (the number of charge carriers they can hold before saturating) amounts to just a few electrons, and they don't use avalanche multiplication.

The QIS may contain hundreds of millions or possibly billions of jots read out at speeds of perhaps 1,000 fps or higher, implying raw data rates approaching 1 Tbit/sec. Through use of advanced denoising algorithms, good grayscale images can be captured in extreme low light of less than one photon per pixel on average.
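
The data-rate figure follows directly from the jot count, field rate, and single-bit readout; a quick back-of-envelope check, assuming one billion jots, is shown below.

```python
# Back-of-envelope check of the QIS raw data rate: jots x frame rate x bits per jot.
jots = 1e9            # one billion jots (assumed)
frame_rate = 1000     # fields per second
bits_per_jot = 1      # single-bit (photon present / absent) readout

raw_rate_bits = jots * frame_rate * bits_per_jot
print(f"raw data rate: {raw_rate_bits / 1e12:.1f} Tbit/s")   # -> 1.0 Tbit/s
```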

Ordinary CCD and CMOS image sensors integrate received photocharges and digitize them. Their full-well capacity defines the upper end of their dynamic range, while read noise defines the lower end. On the low end, sensors that instead count photons by means of avalanche multiplication suffer from variance in the charge gain in low light. They are also sensitive to silicon defects, leading to high dark charge-carrier count rates that limit low-light performance and manufacturing yield.

On the other hand, a QIS photon-counting image sensor forms an image pixel computationally from a series of jot values registered over time and within a defined space. While the QIS images one photon at a time, it can still realize a high dynamic range (>120 dB) using special multiple high-speed exposures.
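
As a toy illustration of that computational image formation (not any production reconstruction algorithm), the sketch below sums binary jot readings over a small spatial block and a short run of fields to produce one conventional pixel value. The block size and field count are arbitrary assumptions.

```python
# Toy QIS image formation: sum single-bit jot readings over a small space-time
# neighborhood to produce one conventional grayscale pixel value.
import numpy as np

def form_pixel_image(jot_fields, block=4):
    """Collapse a (time, rows, cols) stack of binary jot fields into a grayscale image.

    Each output pixel is the photon count over a block x block jot neighborhood,
    summed across all fields in the stack.
    """
    t, rows, cols = jot_fields.shape
    rows -= rows % block
    cols -= cols % block
    summed = jot_fields[:, :rows, :cols].sum(axis=0)                      # sum over time
    return summed.reshape(rows // block, block, cols // block, block).sum(axis=(1, 3))

# Simulate sparse photon arrivals: ~0.2 photons per jot per field on average.
rng = np.random.default_rng(2)
jot_fields = (rng.random((16, 64, 64)) < 0.2).astype(np.uint8)   # 16 fields of binary jots
image = form_pixel_image(jot_fields)
print(image.shape, image.max())   # (16, 16) pixel image; each pixel sums 16 fields x 16 jots
```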
