Long before photography, there was the camera obscura. It consisted of a closed, darkened room, eventually reduced to the size of a small box, with a pinhole opening in one wall. On a screen at the opposite side, an inverted image of the scene outside was projected. At first, the only way to preserve the image was for a person inside the room to trace it on the screen. It is said that Renaissance artists learned to render perspective this way.
The camera obscura came into use in ancient times, and it was also long known that certain substances change when exposed to light. These insights first came together around 1800, when Thomas Wedgwood attempted, unsuccessfully, to build a photographic camera that would capture images on a surface coated with a light-sensitive chemical. His photographs were unsatisfactory shadowy images, but it was a start.
Wedgwood went on to perform many experiments, using paper and white leather coated with silver nitrate. What eluded him was a method to “fix” his finished photographs so they would not degrade with further exposure to light. Several subsequent attempts were made to devise a viable photographic process. Nicéphore Niépce built a working camera in the 1820s, but exposure required several days and the results were crude at best.
A major breakthrough came in 1839 when Louis Daguerre introduced what came to be known as the daguerreotype process. This innovation marked the beginning of practical photography.
A digital camera is essentially the same as a film camera. Both are camera obscura arrangements with optical lenses rather than a pinhole, both have mechanical shutters to control exposure time, and both have similar zoom and focusing mechanisms.
The defining difference is that in a digital camera, a sensor array serves as the film. The principal sensor technologies are the charge-coupled device (CCD) and the complementary metal-oxide-semiconductor (CMOS) sensor. CMOS has replaced CCD in most midrange digital cameras, while CCD still dominates the inexpensive entry-level point-and-shoot field and remains prominent in upscale applications such as astrophotography. Both have their advantages. CMOS uses far less electrical power, which is critical for photographers who want to do field work unencumbered by heavy batteries or frequent recharging.
Both CCD and CMOS image sensors consist of columns and rows of cells, generally arranged in a flat rectangular panel. Multiplying the number of columns by the number of rows gives the number of cells, or picture elements (pixels). This determines the resolution and, ultimately, the clarity of the end product. The newest smartphones are rated at an astonishing 16 megapixels, while DSLRs from Nikon and Canon are pushing 36- and 50-megapixel ratings, respectively.
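The arithmetic is straightforward, as the short sketch below shows. The column and row counts used here are merely illustrative figures in the neighborhood of typical 16-, 36-, and 50-megapixel sensors, not the specifications of any particular camera.

```python
# Pixel count is simply columns x rows. The dimensions below are
# illustrative examples, not the specs of any specific sensor.
def megapixels(columns: int, rows: int) -> float:
    """Return sensor resolution in megapixels (millions of pixels)."""
    return columns * rows / 1_000_000

print(megapixels(4608, 3456))   # ~15.9 MP, a typical "16 MP" sensor
print(megapixels(7360, 4912))   # ~36.2 MP
print(megapixels(8688, 5792))   # ~50.3 MP
```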
In both CCD and CMOS image sensors, the light energy that strikes each cell is converted to electrical charge. In a CCD, that charge is shifted across the chip and read out at one corner of the array, where an analog-to-digital converter digitizes it on a cell-by-cell basis.
In contrast, the usual CMOS configuration places a group of transistors at each cell to amplify the signal and move it over conventional wiring. This lets each pixel be read as a separate entity. A downside of the CMOS approach is that it is more prone to picking up noise. Also, the transistors within the array take up area that would otherwise gather light, reducing the overall light sensitivity of the array.
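As a rough illustration of that per-pixel addressing, here is a toy model of a CMOS-style active-pixel sensor. The array size, gain, and noise figures are invented for the example and do not describe any real device or interface.

```python
import random

# Toy model of CMOS-style (active pixel) readout: each cell has its own
# amplifier, so any pixel can be addressed and read on its own. All the
# numbers here (gain, noise level, array size) are illustrative only.
class CmosSensor:
    def __init__(self, columns, rows, gain=2.0, noise=0.05):
        self.columns, self.rows = columns, rows
        self.gain, self.noise = gain, noise
        # Charge accumulated in each cell during exposure (arbitrary units).
        self.charge = [[0.0] * columns for _ in range(rows)]

    def expose(self, scene):
        # scene(x, y) returns the light intensity falling on cell (x, y).
        for y in range(self.rows):
            for x in range(self.columns):
                self.charge[y][x] = scene(x, y)

    def read_pixel(self, x, y):
        # Per-cell amplification, with a little noise added at the cell.
        return self.gain * self.charge[y][x] + random.gauss(0, self.noise)

sensor = CmosSensor(columns=8, rows=6)
sensor.expose(lambda x, y: (x + y) / 13)   # a simple gradient "scene"
print(sensor.read_pixel(3, 2))             # read one pixel directly
```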
Because CMOS sensors are fabricated in generic silicon fabs, they are considerably less expensive than CCD sensors, and this low fabrication cost creates further economies of scale.
The charge-coupled device (CCD) is an integrated circuit etched in the usual way onto a silicon substrate. What distinguishes it from other ICs is that its constituent elements are light-sensitive pixels.
The defining application is the conversion of optical images into digital electrical signals. CMOS image sensors are used in many digital cameras, but their image quality, despite recent advances, still falls short of that of CCDs.
CCDs are directly responsible for the high performance of the Hubble Space Telescope, which continues to produce astonishing high-resolution images of deep-sky objects. A CCD works by sampling the analog value of incident light energy at discrete intervals. Electrical charges, corresponding to light intensity, shift bucket-brigade style down the rows of cells. All of this happens in discrete time, and of course very rapidly.
Besides their use in digital cameras, CCDs are well suited to serve as analog memory components, for example in telephone answering machines. Their principal use, however, remains optical imaging.
At its inception in 1969, the CCD was conceived as a promising new form of semiconductor computer memory. Bell Labs’ George Smith and Willard Boyle worked out the preliminary details, and they anticipated imaging uses as well as computer memory.
Other Bell Labs researchers extended the Smith-Boyle CCD, building working solid-state video cameras. Kodak and Nikon later offered CCD-based digital cameras with resolutions of over one megapixel.
Within a CCD, an array of MOS capacitors formed on p-doped silicon makes up the pixels. Charge moves from one capacitive bin to the next within the device. When a CCD is used to capture an image, light is focused by the optical lens to form an image on the capacitor array, and each capacitor acquires an electric charge that is a measure of the light intensity impinging on it.
After the image has been stored in the capacitor array, an electronic control circuit directs each capacitor to transfer its charge to its neighbor, in the manner of a shift register. At the end of the line, the charge reaches a charge amplifier, which converts it to a voltage level.
From this point, the process may be analog or digital, depending on the desired product.
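A rough software analogy of this bucket-brigade readout is sketched below. The charge values, amplifier gain, and 8-bit quantization are arbitrary choices made for illustration, not the behavior of any particular CCD.

```python
def ccd_readout(row, gain=3.0, full_scale=255):
    """Shift a row of stored charges out one cell at a time, convert each
    charge to a voltage at the charge amplifier, then quantize it the way
    an ADC would. All constants are illustrative, not from a real device."""
    voltages = []
    charges = list(row)                 # charge held in each capacitor
    while charges:
        end_charge = charges.pop(0)     # charge reaching the output node
        voltages.append(gain * end_charge)   # charge amplifier: charge -> voltage
        # The remaining charges have effectively shifted one cell toward
        # the output, just as in a hardware shift register.
    # Digitize each voltage to an 8-bit value, clamping at full scale.
    return [min(full_scale, int(v)) for v in voltages]

print(ccd_readout([10.0, 42.5, 63.1, 80.0]))   # -> [30, 127, 189, 240]
```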