Standard Article

Photography, Digital

  1. Michael A. Kriss

Published Online: 15 APR 2003

DOI: 10.1002/3527600434.eap584

Encyclopedia of Applied Physics

How to Cite

Kriss, M. A. 2003. Photography, Digital. In Encyclopedia of Applied Physics.

Author Information

  1. University of Rochester, Center for Electronic Image Systems, Rochester, New York, U.S.A.

Publication History

  1. Published Online: 15 APR 2003

This is not the most recent version of the article; a revised version was published online 15 OCT 2004.

1 Introduction

  1. Introduction
  2. The Digital Imaging Chain
  3. Image Capture
  4. Image Interpolation
  5. Sampling, Aliasing and Artifacts
  6. Image Compression and Storage
  7. Image Processing and Manipulation
  8. Color Reproduction
  9. Digital Hard Copy
  10. Image Quality
  11. Works Cited
  12. Further Reading

Digital photography has emerged as an alternative to traditional silver-halide–based photographic systems. As with video imaging systems, this new imaging technology has replaced film in markets that require instant access, or where a combination of rapid access and lower image quality meets the needs of the user. However, digital imaging systems still lack the price–performance characteristics of film and, on account of the exact nature of sampled images and their limited resolution, do not produce images of equal quality. The goal of this article is to provide a systematic look at the underlying technologies required to build a digital imaging system. The emphasis will be on digital still cameras rather than video cameras. Particular attention will be given to those aspects of digital imaging systems that control the overall quality of the final image. Where appropriate, comparisons with silver-halide–based imaging systems will be presented.

2 The Digital Imaging Chain


Digital photography was born in 1970 (Boyle and Smith, 1970; Amelio et al., 1970) with the creation of the charge-coupled device (CCD). Early CCDs were used mainly as highly accurate delay lines (Sangster and Teer, 1969). Commercial CCD imaging area arrays were first developed for compact video systems, where they replaced a variety of electron imaging tubes such as vidicons, saticons, and image orthicons (Takizawa et al., 1983; Weimer, 1975).

Early uses of large CCD imaging arrays were confined to special purposes such as satellites and astronomical observations, where the large budgets of such projects made it possible to incorporate CCD arrays in the imaging systems. Sony Corporation demonstrated the first electronic still imaging system for the consumer market, the MAVICA, in 1985. The MAVICA used a CCD sensor to capture the image, but the image signal was recorded in an analog format suitable for display on an NTSC-based imaging system. Building on rapid advances in microelectronics and image-processing algorithms, many true digital still cameras (DSCs) were introduced into the market from 1988 to 1998. Current digital cameras range in resolution from 320 × 240 pixels to 3000 × 2000 pixels.

Figure 1 shows a simplified version of the digital imaging chain (DIC).

Figure 1. Digital imaging chain.

The image-capture stage of the DIC includes the taking lens (similar to those found in all conventional film-based cameras) and an optical prefilter that is used to remove some of the high–spatial-frequency content of the image to reduce image artifacts due to aliasing. The image capture stage also includes the imaging sensor (usually some form of a CCD), a color-filter array (CFA) used to encode color, and in some cases a special microlens array used to increase the speed of the camera and further reduce aliasing.

The second stage of the DIC is image compression and storage. DSCs store the image data in a digital format on magnetic disks, magneto-optic disks, or special solid state memory cards. The digital data are often compressed before being stored on account of the rather large data files associated with a normal image. A moderate resolution system using a single CCD image array (750 pixels × 500 pixels) requires 375 000 bytes of storage for a single image. Without compression, 24 images (the most commonly used number of exposures on a roll of film) would require 9 megabytes of storage. A high-density magnetic diskette can hold about 1.4 megabytes of information.
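The storage arithmetic above can be checked with a short script (illustrative Python; the one-byte-per-pixel figure follows from the single CFA-encoded sensor described in the text):

```python
# Uncompressed storage needed by a moderate-resolution DSC image,
# following the figures quoted in the text (one byte per pixel from a
# single CFA-encoded 750 x 500 sensor).
def image_bytes(width_px, height_px, bytes_per_pixel=1):
    """Uncompressed size of one image in bytes."""
    return width_px * height_px * bytes_per_pixel

single = image_bytes(750, 500)    # 375 000 bytes for one image
roll = 24 * single                # 9 000 000 bytes for 24 exposures
floppies = roll / 1_400_000       # ~6.4 high-density diskettes

print(single, roll, round(floppies, 1))
```

The mismatch between the 9-megabyte "roll" and the 1.4-megabyte diskette is exactly why compression is applied before storage.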

The third stage of the DIC is image reconstruction. Image reconstruction generally takes place in a computer with software supplied by the camera manufacturer. Since cameras may have differing resolutions, CFAs, and compression algorithms, the software supplied by the manufacturer can only be used with images from a specific camera. The reconstruction algorithms must first decompress the image, then transform the decompressed data to a red–green–blue set of image planes, and, finally, fill in the missing image values that are introduced by the CFA when the image is captured.
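The final reconstruction step, filling in the values the CFA never sampled, can be illustrated with a minimal sketch. The text does not name a specific CFA here, so a checkerboard green layout (as in the common Bayer pattern) is assumed, with missing green values estimated by averaging the sampled neighbors:

```python
# A minimal sketch of the "fill in missing values" step, assuming a
# hypothetical CFA in which green is sampled in a checkerboard pattern
# (as in the common Bayer layout). Missing green values are estimated
# by averaging the available horizontal and vertical neighbors.
def interpolate_green(raw, green_mask):
    """raw: 2D list of sensor values; green_mask[i][j] is True where
    the CFA actually sampled green. Returns a full green plane."""
    h, w = len(raw), len(raw[0])
    green = [row[:] for row in raw]
    for i in range(h):
        for j in range(w):
            if green_mask[i][j]:
                continue
            neighbors = [raw[y][x]
                         for y, x in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                         if 0 <= y < h and 0 <= x < w and green_mask[y][x]]
            green[i][j] = sum(neighbors) / len(neighbors)
    return green

# Checkerboard green mask on a 4x4 tile and a flat gray patch:
mask = [[(i + j) % 2 == 0 for j in range(4)] for i in range(4)]
flat = [[100] * 4 for _ in range(4)]
assert interpolate_green(flat, mask)[0][1] == 100   # flat field survives
```

Real camera software uses more elaborate, edge-aware interpolation, but the structure — one sparse plane per color, filled to full resolution — is the same.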

The fourth stage of the DIC allows for the enhancement and manipulation of the images. Enhancement algorithms that improve sharpness, reduce noise, adjust tone scale, and improve color reproduction can be applied to the digital image. These algorithms can be written by the user or purchased in the form of software packages from many manufacturers. Image manipulation differs from enhancement in that the original image is changed in some way to meet a special need. This could be as simple as applying a false-color transformation, or as involved as altering the content of the image by combining other images with it or distorting it in a specific way. Working with digital images allows for unlimited special effects.

The fifth stage of the DIC is image rendering. The digital file that contains the image information must be converted into a signal that drives either a soft display device like a computer monitor or a hard-copy device like an inkjet printer. This rendering process converts the basic image data values into digital data streams that drive the device in question. For a binary color inkjet printer, the rendering process creates a pattern of on-and-off dots (half-tones) for each of the four inks (cyan, magenta, yellow, and black). These rendering algorithms are usually supplied by the manufacturer of the printers, but the users can develop their own rendering algorithms and drivers for the printer.
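As a small illustration of what a rendering algorithm does for a binary printer, the sketch below halftones a single ink channel with a 2 × 2 ordered-dither matrix; this is an assumed, simplified scheme — production drivers use much larger matrices or error diffusion and handle all four inks:

```python
# Binary halftoning of one ink channel with a 2x2 ordered-dither
# (Bayer) threshold matrix. Illustrative only: real printer rendering
# uses larger matrices or error diffusion.
BAYER_2X2 = [[0, 2],
             [3, 1]]   # matrix entries, scaled to 0-255 thresholds below

def halftone(gray):
    """gray: 2D list of 0-255 values; returns 2D list of 0/1 dots."""
    out = []
    for i, row in enumerate(gray):
        out_row = []
        for j, v in enumerate(row):
            # Map the matrix entry to a threshold in the 0-255 range.
            threshold = (BAYER_2X2[i % 2][j % 2] + 0.5) * 255 / 4
            out_row.append(1 if v > threshold else 0)
        out.append(out_row)
    return out

mid_gray = [[128] * 4 for _ in range(4)]
print(halftone(mid_gray))   # roughly half the dots fire for mid-gray
```

The on/off dot pattern approximates the continuous tone by area coverage, which is exactly the halftoning idea described above.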

The last stage of the DIC is the actual soft-display or hard-copy device used to visualize the digital data. Soft-display devices include (among many) cathode-ray tubes (CRTs), liquid-crystal displays and projectors, and digital mirror projectors. Hard-copy printers are segregated into three categories: those that can produce continuous-tone images, those that use binary images (halftones) to approximate continuous-tone images, and those that produce multi-gray-level dots. Thermal dye-transfer printers and photographic printers represent the majority of the continuous-tone printers, while inkjet, electrophotographic, thermal wax transfer, and photographic printers represent the bulk of binary and multi-gray-level printers.

3 Image Capture


3.1 Camera Optics

Image capture in DSC systems has both similarities to and differences from conventional film-based cameras. The camera lenses are, for all practical purposes, the same. In many cases, existing single-lens reflex cameras have been modified to accept solid state imaging sensors.

Since most solid state sensors are smaller than the conventional 24 × 36-mm² film size, the field of view of the DSC is less than that of its film-based counterpart using the same lens. From the user's perspective, the final print shows a greater magnification, like that generally associated with a telephoto lens. This is often expressed by stating an effective focal length based on the apparent magnification of the system. For example, if the diagonal of a sensor is 16.22 mm compared with the 43.27 mm of conventional 35-mm film, the effective focal length of a 50-mm lens made for the 35-mm camera is 127 mm when used with the smaller sensor in the same camera. This can best be understood by looking at the field of view of a camera lens, Θ. If the vertical dimension of the sensor (or film) is given by V and the horizontal dimension by H, then the diagonal L and Θ are given by

  L = \sqrt{V^2 + H^2}   (1)

  \Theta = 2\arctan\left(\frac{L}{2f}\right)   (2)

where f is the focal length of the lens. The field of view for a conventional 35-mm camera with a 50-mm lens is, by Eqs. 1 and 2, 46.8°. If the smaller sensor is used in the same camera, the field of view becomes 18.4°, or 2.54 times smaller. This appears, relative to the conventional 35-mm camera, to be a telephoto lens with a focal length 2.54 times larger than the standard 50-mm focal length, or 127 mm.
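The worked example can be reproduced directly from Eqs. 1 and 2 (illustrative Python; the smaller sensor is characterized in the text only by its diagonal, so its field of view is computed from the diagonal directly):

```python
import math

# Field of view from Eqs. (1) and (2), and the effective focal length
# taken, as in the text, from the ratio of the two fields of view.
def field_of_view_deg(V, H, f):
    """V, H: sensor dimensions (mm); f: lens focal length (mm)."""
    L = math.hypot(V, H)                              # Eq. (1)
    return 2 * math.degrees(math.atan(L / (2 * f)))   # Eq. (2)

full_frame = field_of_view_deg(24, 36, 50)            # ~46.8 degrees
crop = 2 * math.degrees(math.atan(16.22 / (2 * 50)))  # ~18.4 degrees
effective_f = 50 * (full_frame / crop)                # ~127 mm

print(round(full_frame, 1), round(crop, 1), round(effective_f))
```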

In some camera designs special optical relays are designed to allow the use of conventional film-camera bodies and lenses (Miyake and Iwabe, 1995). In the case of the popular “point-and-shoot” cameras, the lenses are designed specifically for the solid state sensor used. Again, since the sensors are normally smaller than 35-mm film, the focal lengths are expressed with their effective values as well as their actual values.

For the purposes of this article, only the imaging properties of the lenses will be important and not their exact design. Thus, the two parameters that will be important are the optical transfer function (OTF) of the lens or its magnitude, the modulation transfer function (MTF), and its effective F-number (F) that measures the lens's light-gathering capabilities.

DSC systems often employ two other optical elements of importance. The first is an optical prefilter that is used to blur the image in a controlled manner to reduce artifacts in the final image. The other optical element is a microlens array that is integrated with the solid state sensor array to boost the light-gathering properties of the sensor.

3.1.1 Camera Lenses

The design parameters for a DSC camera are similar to those of any conventional film camera in terms of focal length, field of view, depth of focus, and F-number. Good reviews of camera design can be found in the literature (Kingslake, 1978). For the purpose of this article, only the imaging characteristics of the lens that affect sharpness will be considered. The impact on sharpness of a lens is defined by its optical point-spread function (OPSF) (Goodman, 1968). When a point of light at infinity is focused in the image plane of a lens, one finds not a point of light but a distribution of light. This distribution of light is called the OPSF. The OPSF can be very complex and can vary from point to point in the image plane, reflecting the exact nature of the aberrations in the lens. In what follows it will be assumed that the lens OPSF will be given by the ideal diffraction-limited case where the OPSF depends only on the diameter of the lens, D, the focal length of the lens, f, and the wavelength of light, λ. Under these assumptions the OPSF is given by the normalized Airy pattern,

  \mathrm{OPSF}(r) = \left[\frac{2J_1(\beta)}{\beta}\right]^2   (3)

where J1 is the cylindrical Bessel function of the first kind, r is the radial distance from the center, and

  \beta = \frac{\pi r D}{\lambda f} = \frac{\pi r}{\lambda F}   (4)

Figure 2 shows OPSFs for green light (λ = 550 nm) for the cases of F = 5.6 and F = 16. Note that as F becomes larger, the OPSF broadens, introducing a larger blur in the image plane. A lens might have F-numbers that range from F/1.2 (wide open) to F/22 (smallest aperture). The sharpness of these ideal lenses decreases with increasing F-number. In the case of real lenses, the aberrations associated with small F-numbers usually dominate, resulting in an OPSF that is broader than that given by Eq. 3. As the lens is stopped down (higher F-numbers) the aberrations are reduced and the sharpness improves; stopping down further again increases the spread of the OPSF. Thus, most lenses have an optimum OPSF somewhere between the maximum aperture and the smallest. In what follows, the ideal OPSF will be used to define a level of lens sharpness.

Figure 2. Optical point-spread functions.
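Equations 3 and 4 can be evaluated numerically; the sketch below uses only the standard library, computing J1 from its integral representation rather than relying on an external special-functions package:

```python
import math

# Diffraction-limited OPSF of Eqs. (3)-(4). J1 is evaluated from the
# integral representation
#   J1(x) = (1/pi) * integral_0^pi cos(theta - x*sin(theta)) dtheta
# by the midpoint rule, so no external library is needed.
def bessel_j1(x, n=2000):
    h = math.pi / n
    s = sum(math.cos((k + 0.5) * h - x * math.sin((k + 0.5) * h))
            for k in range(n))
    return s * h / math.pi

def opsf(r_um, F, wavelength_um=0.55):
    """Normalized Airy pattern, Eq. (3), with beta from Eq. (4)."""
    beta = math.pi * r_um / (wavelength_um * F)
    if beta == 0.0:
        return 1.0
    return (2 * bessel_j1(beta) / beta) ** 2

# The first dark ring sits at the Rayleigh radius r = 1.22*lambda*F:
r_rayleigh = 1.22 * 0.55 * 5.6     # ~3.76 um for F/5.6, green light
print(round(opsf(r_rayleigh, 5.6), 4))   # ~0 at the first zero
```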

Since, other than the wavelength of light, the only important parameter is F, the term effective F-number OPSF will be used to describe the sharpness properties of a lens, but not its light-gathering properties. For example, an F/1.8 lens with an effective F/8 OPSF means that the lens operating at F/1.8 (in terms of aperture) has at F/1.8 an OPSF that matches the diffraction-limited OPSF of an F/8 lens.

Figure 3 shows the importance of the OPSF relative to the pixel size of a CCD imaging array. The square pixel shown in Fig. 3 is 9 µm square. The effective OPSFs for F/2 and F/5.6 fall well within the pixel dimensions. In these two cases, the lens does not limit the sharpness or resolution of the system. Instead the pixel size dominates the sharpness and resolution. For the case of F/11 the effective OPSF spread is close to that of the pixel, indicating that both will affect the ultimate sharpness of the image. For the case of F/22, the effective OPSF is much larger than the pixel, thus dominating the sharpness (lack of sharpness) of the system.

Figure 3. OPSFs and 0.009-mm pixel.

From the above discussion, it would seem that having an OPSF that is less than the pixel dimension is required for a good image. However, because of the image artifacts introduced by aliasing, such a lens might degrade the image more than a less sharp one. This will be covered in Sec. 5.

The OPSF provides a very good visualization of the impact of a lens on image sharpness. However, it does not provide a simple means of making the analytical calculations required for a systems analysis (see Sec. 10.). The optical-transfer function (OTF) provides a better analytical tool for evaluating the image sharpness of a lens. The OTF is the two-dimensional Fourier transform of the OPSF (Goodman, 1968). As such, the OTF is a complex mathematical function. For the purposes of analytical calculations, it is better to use the modulation-transfer function, MTF, which is the absolute value of the OTF. Making use of the circular symmetry of the OPSF, the MTF of a diffraction-limited lens is given by

  \mathrm{MTF}(\rho) = \frac{2}{\pi}\left[\cos^{-1}\!\left(\frac{\rho}{\rho_0}\right) - \frac{\rho}{\rho_0}\sqrt{1 - \left(\frac{\rho}{\rho_0}\right)^2}\,\right], \qquad \rho \le \rho_0   (5)

where ρ is the spatial frequency and the cutoff frequency ρ0 is given by

  \rho_0 = \frac{1}{\lambda F} = \frac{D}{\lambda f}   (6)

Spatial frequency is the spatial analog of the temporal frequency commonly employed in the analysis of electronics and sound. Any two-dimensional image can be decomposed into a continuous (or discrete) distribution of sine and cosine waves in the x and y directions. The amplitudes of these sine and cosine waves define the two-dimensional Fourier transform of the image. Figure 4 shows the MTFs for a series of diffraction-limited lenses.

Figure 4. MTFs of diffraction-limited lenses as a function of spatial frequency in cycles per millimeter.

From Eq. 3, the value of r that gives the first zero in the OPSF or Airy pattern is given by

  r_R = 1.22\,\lambda F = \frac{1.22\,\lambda f}{D}   (7)

The value rR defines the Rayleigh resolving-power criterion: if the peaks of two OPSFs are separated by rR, they are said to be resolvable. The resolving frequency (resolving power) is given by 1/rR. Evaluating Eq. 5, with Eq. 6, at this frequency gives an MTF value of roughly 0.09. Thus, as a rule of thumb, one can say the resolving power of a lens is given by the frequency at which the MTF = 0.10. A more conservative measure of resolving power is given by the frequency at which the MTF = 0.20.
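The 0.09 figure follows directly from Eqs. 5–7; a quick numerical check (green light at F/5.6 is assumed here purely for concreteness):

```python
import math

# Diffraction-limited MTF, Eq. (5), evaluated at the Rayleigh resolving
# frequency 1/r_R to confirm the ~0.09 rule of thumb.
def mtf_diffraction(rho, rho0):
    if rho >= rho0:
        return 0.0
    x = rho / rho0
    return (2 / math.pi) * (math.acos(x) - x * math.sqrt(1 - x * x))

F, lam_mm = 5.6, 550e-6                   # F-number; wavelength in mm
rho0 = 1 / (lam_mm * F)                   # cutoff, Eq. (6): ~325 cycles/mm
rho_rayleigh = 1 / (1.22 * lam_mm * F)    # 1/r_R from Eq. (7)

print(round(mtf_diffraction(rho_rayleigh, rho0), 3))   # ~0.09
```

Because the ratio ρ/ρ0 = 1/1.22 is independent of F and λ, the ~0.09 value holds for any diffraction-limited lens.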

All lenses experience a falloff of irradiance from the center of the lens to the edge of the image plane (Kingslake, 1983). In conventional photographic systems (negative–positive or reversal systems) this falloff is not noticeable on account of the compensating effects of the printer or projector. However, in a DSC system the falloff is retained in the digital data and needs to be removed by digital processing. The irradiance falloff is approximated by

  I(\theta) = I(0)\cos^4\theta   (8)

where I(0) is the irradiance at the center of the image plane and θ the angle made by a ray from the center of the lens to a point in the image plane with the normal ray from the center of the lens to the image plane. Figure 5 shows the intensity falloff for two typical DSC systems. The first system represents a sensor with a diagonal of 8.11 mm and a lens with a focal length of 8 mm; the maximum value for θ is 27 °. The second system represents a system with a diagonal of 32.5 mm and lens of focal length 50 mm; the maximum value for θ is 18 °. While the short focal length system suffers from more intensity falloff, both need digital correction to compensate for the losses.

Figure 5. Intensity falloff due to the cos⁴(θ) law.
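The corner-falloff values for the two example systems can be verified from Eq. 8:

```python
import math

# Corner falloff from the cos^4 law, Eq. (8), for the two systems
# described in the text.
def falloff(theta_deg):
    """Relative irradiance I(theta)/I(0) under the cos^4 law."""
    return math.cos(math.radians(theta_deg)) ** 4

short_fl = falloff(27)   # 8-mm lens, 8.11-mm diagonal: ~0.63 at the corner
long_fl = falloff(18)    # 50-mm lens, 32.5-mm diagonal: ~0.82 at the corner

print(round(short_fl, 2), round(long_fl, 2))
```

A roughly one-stop corner loss for the short-focal-length system is why digital correction of the falloff is needed.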

3.1.2 Optical Prefilters

The discrete sampling nature of CCD imaging arrays leads to image artifacts. These artifacts take the form of low-frequency banding where high-frequency information is aliased to low-frequency information. A detailed review of sampling and the associated aliasing will be given in Sec. 5.

Optical prefilters are used to control the amount of high-frequency information that reaches the CCD imaging array. The most commonly used optical prefilter in DSC systems is based on the birefringent properties of uniaxial crystals (Françon, 1967; Greivenkamp, 1990). Many optical materials have a single optical axis, but depending on the polarization of the electric field vector, the light waves “see” a different index of refraction. If the crystal is cut in the appropriate fashion, an incident nonpolarized light wave will split into two rays traveling in different directions; see Fig. 6. When the two waves exit the crystal of thickness t, they are separated by some distance δ. The wave that has not been displaced is called the ordinary wave and is designated by E0, and the wave that is refracted by an angle α is called the extraordinary wave and is designated by Ee.

Figure 6. Birefringent crystal used as an optical prefilter.

If the birefringent crystal is rotated about the axis of the incoming light ray, the extraordinary wave (Ee) will rotate about the ordinary wave (E0). This rotational property is shown in Fig. 7.

Figure 7. The rotational properties of a birefringent crystal.

A controlled smearing (blurring) of the image can be accomplished by adjusting the thickness of the crystal and the angle of rotation. The exact amount of displacement, δ, is given by (see Fig. 6)

  \delta = t\,\frac{(n_e^2 - n_o^2)\sin\omega\cos\omega}{n_e^2\cos^2\omega + n_o^2\sin^2\omega}   (9)

where ω is the angle the crystal face makes with the optical axis, n_e is the index of refraction for the extraordinary wave, and n_o is the index of refraction for the ordinary wave. Equation 9 can be simplified by picking ω = π/4. The total displacement is then given by

  \delta = t\,\frac{n_e^2 - n_o^2}{n_e^2 + n_o^2}   (10)

Many crystals are birefringent; however, to be used in a DSC system the crystal must have very good optical surface quality and be strong and stable enough to withstand normal handling and climatic changes. Quartz is the material of choice. For quartz, n_o = 1.544 and n_e = 1.553. If the distance (pitch) between pixels in a CCD imaging array is 10 µm, then a crystal thickness of 1.72 mm is required to shift the image associated with the extraordinary wave by one pixel. The MTF of such a filter is given by

  \mathrm{MTF}_{\text{optical filter}}(\rho) = \left|\cos(\pi\,\delta\,\rho)\right|   (11)

where ρ is the spatial frequency. Figure 8 shows a plot of the optical-prefilter MTF. Note that this optical prefilter does allow periodic bands of high-frequency information to pass on to the imaging sensor. As a result, very high-frequency scene content can still produce low-frequency banding, while moderately high-frequency content will not.

Figure 8. The MTF of an optical prefilter that displaces the image 0.01 mm.
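The quartz-thickness figure and the behavior of Eq. 11 can be checked numerically (a sketch; the 10-µm pixel pitch is the example used in the text):

```python
import math

# Quartz prefilter design per Eqs. (10)-(11): the thickness needed to
# displace the extraordinary image by one 10-um pixel, and the first
# null of the resulting MTF.
n_o, n_e = 1.544, 1.553
delta_mm = 0.010                 # desired shift: one 10-um pixel

# Invert Eq. (10): t = delta * (n_e^2 + n_o^2) / (n_e^2 - n_o^2)
t_mm = delta_mm * (n_e**2 + n_o**2) / (n_e**2 - n_o**2)
print(round(t_mm, 2))            # ~1.72 mm, as quoted in the text

# Eq. (11): MTF = |cos(pi * delta * rho)| has its first zero at
# rho = 1/(2*delta) = 50 cycles/mm for a 10-um displacement.
first_null = 1 / (2 * delta_mm)
mtf = abs(math.cos(math.pi * delta_mm * first_null))
print(first_null, mtf)           # null at 50 cycles/mm
```

Above 50 cycles/mm the cosine rebounds, which is the "periodic bands of high-frequency information" leakage noted in the text.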

Figure 9 shows a simulation of the effect of an optical prefilter that displaces the image 10 pixels (out of 128) to the right. In DSC systems, the displacement is usually between one-half pixel and three pixels, depending on the nature of the color-filter array used to encode the color signal.

Figure 9. The original image at the left is smeared by an optical prefilter that displaces the image 10 pixels to the right.

3.1.3 Microlens Arrays

The light-sensitive regions of a typical solid state sensor can vary from nearly 100% for a frame-transfer device (see Imaging Detectors) to less than 25% for some interline-transfer devices. Section 3.2 will cover the layout and architectures of imaging sensors in detail. Figure 10 in Sec. 3.2.4 shows a simplified view of a frame-transfer configuration and Fig. 11 shows an interline-transfer configuration. In the frame-transfer architecture almost all the surface area is exposed to light. In an interline-transfer system much of the surface area is devoted to moving the stored charges out of the sensor and is not used to collect light. Microlens arrays are used to help compensate for this loss of effective sensitivity (Furukawa et al., 1992; Deguchi et al., 1992).

Figure 10. A full-frame transfer CCD.

Figure 11. An interline-transfer CCD imaging array. The clear areas are the photodiodes; the hashed area represents the transfer gates that move the charge into the vertical shift registers denoted by the gray areas.

Microlens arrays can be fabricated on the CCD sensors to improve the effective light sensitivity. Since each pixel of a CCD imaging array integrates the light falling on it, using a microlens to collect light that would otherwise fall outside the light-sensitive area of a pixel does not greatly degrade the sharpness of the final image and will help reduce the potential for image artifacts due to aliasing.

Figure 12 shows how the microlens collects light that would otherwise miss the light-sensitive area. Rays that are either normal or oblique to the surface of the imaging array will be refracted toward the light-sensitive regions.

Figure 12. A microlens is used to collect light that would otherwise miss the light-sensitive region of a pixel. N represents the normal to the surface of the microlens.

Microlens arrays are most effective at high F-numbers (small apertures) where the light falling on the pixels is in the form of a tight cone. Most of the light that falls on a microlens will be collected in this case. As the F-number is decreased (larger apertures) the light tends to form a wider cone and some of the rays falling on a given microlens will not be focused in the light-sensitive region. The microlenses become less effective as one moves from the center of the array to the edge of the array. This falloff in intensity is much the same as the cos⁴(θ) intensity variation discussed above. Hence, while microlens arrays can almost double the effective speed of a DSC system, they can introduce additional non-uniformity across the image plane.

3.2 Image Sensors

The heart of a DSC system is the imaging sensor that replaces the photographic film used in a conventional camera (see Photographic Recording). Image sensors are defined in three ways. The first characteristic is how an individual pixel turns light into stored electrons. The second characteristic is how those stored charges are read out. The third characteristic is the sensor architecture that defines how the first two characteristics are combined to construct a functional imaging sensor.

The other major factors are the sensor's resolution (number of pixels), noise characteristics, charge transfer efficiency, color filter array pattern, and quantum efficiency (speed).

3.2.1 Photodiodes

The p–n junction of a photodiode provides a natural sensing mechanism for photon-induced electron–hole pairs (Saleh and Teich, 1991). Figure 13 shows a photodiode in its reverse-biased mode. In an unbiased photodiode (V = 0) the holes and the electrons from the p-type area and n-type area, respectively, diffuse across the junction. When equilibrium is reached, a potential (charge density) profile is established as shown in Fig. 13. In the reverse-biased case, this potential profile is enhanced, preventing the flow of current. If the p–n photodiode is biased in the forward direction, this potential profile (barrier) is eliminated and current will flow.

Figure 13. A p–n junction photodiode in the reverse-biased mode.

Figure 14 shows how the p–n photodiode is used as an imaging device (Sadashige, 1987). The switch in Fig. 14 is first placed in the “up” position to reverse bias the photodiode. This creates a reservoir of charge in the depletion region surrounding the junction. The next step is to place the switch in the “down” position. In this state, any electron–hole pairs that are created in the depletion region will be swept out and current will flow, charging the capacitor C. In the absence of light, thermally generated electron–hole pairs will be created and cause the charge in the depletion region to be transferred to the capacitor. These thermally generated electron–hole pairs add to the noise component of the final image. The dark decay described above takes several seconds. If light is directed on the junction region of the photodiode, many electron–hole pairs are generated, one pair for each photon absorbed. These electrons are immediately swept into the collection capacitor. Thus, if the exposure time is short compared with the natural decay time of the photodiode, the number of signal electrons stored will be much greater than that of the thermally generated noise electrons. The switch is next returned to the “up” position to re-establish the initial state of the photodiode. At the same time the charges stored in the capacitor are read out in one of several ways. The imaging process can now be repeated.

Figure 14. A p–n junction photodiode used as an imaging device.

The storage capacitance associated with the p–n junction photodiode varies with the reverse-bias voltage. A capacitance of about 1 pF is common. The number of electrons that can be stored is about 10⁶ per volt. Other types of photodiodes used in imaging devices include the pin photodiode and the Schottky-barrier photodiode.

3.2.2 MOS Capacitors

A simple physical device to store electrons created when a photon produces an electron–hole pair is the metal–oxide–semiconductor (MOS) capacitor (Barbe, 1980; Séquin and Tompsett, 1975; Hobson, 1978; Theuwissen, 1995; Schroder, 1990). Figures 15 and 16 show two such devices. In both cases a capacitor is formed by depositing a silicon dioxide (SiO2) layer on top of a p-type silicon layer (which in turn has been grown on a silicon substrate). Small conducting electrodes are deposited on the SiO2 layer. These electrodes can be transparent, thus not limiting the amount of surface area used to collect photons. Polysilicon (see Silicon, Polycrystalline) is often used to make these transparent electrodes. As shown in these figures, the imaging mode is defined when the voltage on the electrodes marked V1 is less than the voltage V2 in the center. Under these circumstances a potential well is formed in the p-type material. After an absorbed photon creates an electron–hole pair, the holes are swept away through the substrate and the light-induced electrons are captured in the potential well. Figure 15 shows a surface-channel device. Here the light-induced electrons are stored close to the surface. When the charges are to be moved (as part of the image readout process), their proximity to the surface can cause some of the electrons to be left behind, thus reducing the overall effectiveness of the device and introducing image smear. Figure 16 shows a bulk-channel device. The bulk-channel device has an additional layer of n-type material. This allows the electrons to be stored deeper in the device and away from the surface. The bulk-channel configuration allows for a more efficient electron-transfer mechanism that is critical to the image quality of all CCD image sensors.

Figure 15. A surface-channel CCD MOS capacitor.

Figure 16. A bulk-channel CCD MOS capacitor.

Other imaging elements that can be used in forming sensor arrays include charge-injection devices and charge-priming devices.

3.2.3 Charge Readout Methods

There are two basic ways to read out the light-induced charges in sensor arrays (Sadashige, 1987; Thorpe et al., 1988). The first uses sensing lines that can address the individual pixels and the second uses CCD shift registers to move the charges from the pixels to an output amplifier.

Figure 17 shows a section of an X-Y–addressable imaging array of photodiodes. As explained in Sec. 3.2.1, when light falls on the photodiodes, current is generated in proportion to the number of photons absorbed. This results in a charged capacitor within each imaging site. A series of transistors is now used to read out the charges. Consider the first pixel in the second column of the array in Fig. 17. A clocking pulse in the Y-scan direction generator turns on all the gates (transistors) in the first row of pixels. The X-scan direction generator can now be used to turn on its output gates (transistors) selectively to read out a single pixel. If the second transistor controlled by the X-scan direction generator is turned on, then all the pixels in the second column can be probed. Since only the first pixel in that column has its gate open (from the Y-scan direction generator), only that pixel's charge will be used as input to the output amplifier. In a similar fashion any given pixel can be addressed. In most DSC systems, the X- and Y-scan direction generators are run in such a fashion as to read out one row at a time.

Figure 17. An X-Y–addressable imaging sensor using photodiodes.

Most electronic imaging sensors employ some form of the CCD shift register. Figures 18 and 19 show two popular versions of CCD shift registers. The CCD shift register is a series of MOS capacitors fabricated on the silicon wafer. Above the SiO2 layer, a series of transparent (or opaque) conducting pads are placed to form the potential wells to confine the charges and to move them along the “bucket brigade.” Consider Fig. 18 that shows a section of a three-phase CCD shift register. Note that there are three electrodes associated with each pixel. In the top diagram the charges are confined under the electrodes defined by V2, which is greater than V1 and V3. Keeping V1 and V2 fixed, V3 is increased until it matches the value of V2. This is shown in the middle diagram. The potential well confining the charges is increased in size. Next, V1 remains constant and V2 is lowered until it matches the value of V1. This pushes the charges until they reside under the electrodes marked V3. The net effect of these voltage shifts has been to move the charges one step to the right. The next step will be to increase the voltage of V1, keeping V2 and V3 fixed. The pattern should be self-evident. It takes three of these cycles to move the charge one full pixel to the right; hence it is a three-phase process. It is desirable to reduce the number of phases it takes to move the image charge one full pixel. Fewer steps means that the clocks required to generate the voltage changes can operate at a lower rate. Figure 19 shows a two-phase CCD shift register. Note that that each of the gates has been fabricated at two thicknesses. This variation in gate thickness modifies the potential well so that the confined electrons tend to move to the right. As the values for V1 and V2 are reversed, the potential barrier to the right is reduced and the one to the left increased, pushing the electrons further to the right. 
While the fabrication process for a two-phase system is more complex, the reduction in clock frequency makes the two-phase CCD shift register more reliable and can help improve charge-transfer efficiency.
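The clocking sequence can be illustrated with a toy Python model (purely illustrative; the three-gates-per-pixel layout and the charge value are assumptions, not device parameters):

```python
# Toy model of three-phase CCD clocking: each pixel has three gates, and a
# charge packet sits under whichever gate is held at the highest voltage.
# Raising the next gate and then lowering the current one moves the packet
# one gate to the right; three such gate steps move it one full pixel.

def shift_one_pixel(gates):
    """Shift every charge packet one full pixel (three gate steps) right."""
    for _ in range(3):
        gates = [0] + gates[:-1]   # each step moves every packet one gate right
    return gates

pixels = 4
gates = [0] * (3 * pixels)
gates[1] = 1000                    # a 1000-electron packet in pixel 0
gates = shift_one_pixel(gates)
print(gates.index(1000) // 3)      # packet is now in pixel 1
```

A two-phase register does the same net transport in fewer clock steps by building the directional "push" into the gate structure itself.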


Figure 18. A three-phase CCD shift register.


Figure 19. A two-phase CCD shift register.

The X-Y–addressable arrays can be run at higher rates than ones with CCD shift registers. However, they have two drawbacks. First, the capacitance associated with the extensive sensing lines introduces considerable noise into the image. Second, the architectures that use them (see Sec. 3.2.4) allocate much of the sensor surface to the sensing lines and transistors required to probe the photodiodes. This loss of light-sensitive area reduces the overall sensitivity and dynamic range of the sensor.

3.2.4 Charge-Coupled Device Architectures

There are many possible configurations for CCD imaging arrays, each with its benefits and drawbacks (Sadashige, 1987; Theuwissen, 1995). Figure 10 shows a full-frame–transfer CCD imaging array. The imager is divided into two regions. The top of the array is dedicated to collecting light while the bottom of the array is dedicated to temporary storage and readout. The storage area is covered by an opaque layer to prevent photon-induced electron–hole pair creation. The image-sensing and storage sites are MOS capacitors as described in Sec. 3.2.2. During exposure each pixel collects charge in proportion to the incident number of photons. The exposure is controlled by a mechanical shutter much like that in a standard 35 mm camera. Once the shutter is closed, the charges stored in the MOS capacitors are shifted downward to the storage area. This is accomplished by the methods described in Sec. 3.2.3. Once the storage area is filled with the image charge packets, they are shifted out of the sensor array by two horizontal shift registers located at the bottom of the sensor. Two horizontal shift registers are used to increase the readout speed of the sensor. Like the storage area, the shift registers are covered by an opaque coating to prevent light from producing non-image related electrons. The shift registers are terminated by output amplifiers that convert the charge into a voltage signal that is used by the camera electronics to form the final output signal.

A variation of the full-frame transfer CCD is shown in Fig. 20. In this case the storage area is eliminated; otherwise the operation is the same. By eliminating the storage area considerable cost reduction is gained, but a second image cannot be captured until the entire array has been read out. In the case of the full-frame transfer CCD, it is possible to take a second image before one starts to read out the storage area.


Figure 20. A frame-transfer CCD without storage area.

Figure 11 shows the architecture of an interline-transfer CCD. In this case the imaging element is a photodiode; see Sec. 3.2.1. Other than the imaging element itself (photodiode versus MOS capacitor), one of the most significant differences between this architecture and that of a frame-transfer CCD is that no mechanical shutter is required to control the exposure time. The exposure cycle is as follows. The photodiodes are reverse biased to store charges in the p–n junction, and then they are switched to the storage capacitor mode as shown in Fig. 14. This effectively starts the exposure cycle. During the exposure cycle, light falls on the photodiodes causing electrons to flow to the storage capacitors. At the end of the exposure (determined by some form of electronic exposure control) the storage capacitors are isolated from the photodiodes, and the charge is transferred to the adjacent elements of the vertical shift registers. This simultaneously takes place for all the photodiodes. The vertical shift registers are covered with an opaque layer to ensure that no light-induced electrons are formed in the MOS capacitors that constitute the shift register. Once the image charge packets are in the vertical shift registers they are transferred to the two horizontal shift registers and then to the output amplifiers. There are several advantages to interline-transfer imaging sensors. The first is that they provide the opportunity for electronic exposure control and eliminate the need for a shutter. Second, photodiodes are more sensitive to blue light than MOS capacitors since they do not have the polysilicon layer used by MOS capacitors in frame-transfer CCD imaging arrays. Polysilicon absorbs blue light. The third advantage is that interline-transfer CCD sensors can be used for video signals. 
However, much of the surface area of an interline-transfer CCD sensor is allocated to moving the image-related electron packets out of the imaging area and into the horizontal shift registers. The smaller fraction of imaging area results in an overall loss in light sensitivity and speed. Furthermore, the smaller pixel area allocated to imaging increases the probability of image artifacts due to aliasing; this is covered in Sec. 5.

Figure 21 shows one of the more sophisticated interline-transfer architectures, the frame-interline–transfer CCD. The architecture is similar to that of the conventional interline transfer but with two additions: a storage area at the bottom of the sensor and a sink for electrons at the top. The electron sink allows one to "flush" unwanted electrons from the vertical shift registers before they are filled with the image-generated electron packets. The exposure cycle is as follows. While the photodiodes transfer charges to their respective storage capacitors, the vertical shift registers are flushed by a high-speed transfer to the electron sink at the top of the sensor. This removes any residual charge from previous exposure cycles as well as thermally generated electrons. Next the image-related electron packets are transferred to the vertical shift registers. This is followed by a high-speed transfer to the storage area at the bottom of the sensor. The stored electrons are then read out in the same fashion as in the full-frame–transfer CCD. The frame-interline–transfer CCD has the additional advantages of better exposure control and less noise as a result of the flushing of the vertical shift registers. However, it suffers from the same deficiencies as noted above for interline-transfer CCD sensors.


Figure 21. A frame-interline–transfer CCD.

3.2.5 Recent Advances in Imaging Sensor Technology

The ability to employ complementary metal–oxide–semiconductor (CMOS) fabrication technology for imaging arrays has given rise to new interest in active-pixel sensor architecture (Blouke, 1995; Blouke, 1997). The imaging elements in these CMOS-based sensors are the photodiodes discussed in Sec. 3.2.1. These active-pixel sensors are addressed in much the same way as described in Sec. 3.2.3 for the X-Y–addressable sensors shown in Fig. 17. The important difference is that all the control, stabilization, and gain electronics can now be placed at each pixel site, and all the control clocks and circuitry can be fabricated on the same chip as the sensor. In the older X-Y–addressable sensors, the control clocks and circuitry were placed on separate chips, and interconnects were used to communicate with the sensor. The increased flexibility will lead to the development of "smart focal-plane sensors," which process the information in real time, rather than using separate general-purpose or special-purpose microprocessors to do the same task. Much of the work on these smart sensors is being directed to the development of imaging chips that replicate some of the imaging characteristics of human and insect visual systems.

As with the active-pixel architecture outlined above, modern fabrication techniques have led to many new and novel imaging sensor architectures. The reader is encouraged to refer to the literature for details (Bernard, 1996).

3.2.6 Charge-Transfer Efficiency

Most imaging sensors have some form of CCD shift register (Janesick, 1997). In frame-transfer CCD devices the entire sensor acts as a shift register as well as an image-induced–charge storage device. Interline-transfer devices employ both vertical and horizontal shift registers to read out the stored charges. It is very important that each of the transfer operations be very efficient. Consider a typical 750- by 500-pixel frame-transfer CCD. The last charge packet to be read out will be transferred 1250 times before it reaches the output amplifier. If the efficiency of each transfer (from pixel to pixel) is 95%, then the fraction of charge remaining in the last pixel read out will be essentially zero. If one requires that 95% of the charge packet remain after 1250 transfers, then a charge-transfer efficiency of 0.999 958 966 is required. A 3000- by 2000-pixel frame-transfer CCD (5000 worst-case transfers) would require a charge-transfer efficiency of 0.999 989 7. Since the charge packet from each pixel in the CCD imaging sensor undergoes a different number of transfers, the final image can show an overall shift in image brightness (mapping the shift patterns) and a loss of image sharpness due to the "mixing" of charge packets.
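The arithmetic above is easy to check (a quick sketch; the worst-case transfer count is taken, as in the text, as width plus height):

```python
# Per-transfer efficiency needed so that a given fraction of a charge packet
# survives the worst-case number of transfers (numbers from the text).

def required_cte(transfers, surviving_fraction):
    return surviving_fraction ** (1.0 / transfers)

# 750 x 500 sensor: the last packet makes 750 + 500 = 1250 transfers.
print(round(required_cte(1250, 0.95), 9))    # 0.999958966, as in the text
# At only 95% efficiency per transfer, essentially nothing survives:
print(0.95 ** 1250 < 1e-20)                  # True
```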

Consider a linear CCD shift register like the one shown in Fig. 22. A packet of charge Q is placed in the first CCD element. It is then transferred N times to the end of the linear array. At each transfer the amount of charge left behind is αQ, where α is the charge transfer inefficiency. A fraction 1 − α of the charge is transferred forward. After N transfers, the charge Qi in the ith CCD element is given by (Barbe, 1975)

  • \( Q_i = Q \binom{N}{N-i}\,\alpha^{\,N-i}\,(1-\alpha)^{\,i} \)   (12)

Figure 22. A linear CCD shift register.

Figure 23 shows a plot of Qi/Q for several values of N and α. The charge packets spread with increasing N and α; moreover, as α increases, the peak of the electron packet lags further behind the leading edge, introducing a phase lag.
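The spreading follows the standard binomial transfer model; since the indexing conventions vary, the sketch below states the model explicitly (charge lagging k positions behind the leading, fully transferred position):

```python
from math import comb

# Binomial model of charge-packet spreading: a single packet of charge Q,
# after N transfers with loss fraction alpha per transfer, leaves charge
#     Q_k = Q * C(N, k) * alpha**k * (1 - alpha)**(N - k)
# at a lag of k positions behind the leading position.

def trailing_distribution(Q, N, alpha, kmax):
    return [Q * comb(N, k) * alpha**k * (1 - alpha)**(N - k)
            for k in range(kmax + 1)]

full = trailing_distribution(1.0, 128, 0.01, 128)
print(abs(sum(full) - 1.0) < 1e-9)     # True: total charge is conserved
print(round(full[0], 3))               # leading packet keeps 0.99**128 ~ 0.276
```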


Figure 23. The spread of a charge packet as a function of the number of transfers N and the charge-transfer inefficiency α.

The image smear associated with the transfer process gives rise to a modulation transfer function, MTFtrans, given by (Sequin and Tompsett, 1975)

  • \( \mathrm{MTF}_{\mathrm{trans}} = \exp\!\left\{ -N\alpha\left[ 1 - \cos(2\pi/\lambda) \right] \right\} \)   (13)

where λ is the wavelength of the signal measured in the number of CCD elements; it can vary from 2 CCD elements to infinity. Equation 13 represents what the signal modulation will be after a series of charge packets with a sine-wave profile have been introduced into the first element of the linear array and transferred N times to the output of the linear array. Figure 24 shows a plot of MTFtrans for the two cases shown in Fig. 23 with N = 128.
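The transfer-smear MTF has the form exp{−Nα[1 − cos(2π/λ)]} (Sequin and Tompsett, 1975), and evaluating it shows how quickly the short-wavelength response collapses:

```python
import math

# Transfer-smear MTF after N transfers with inefficiency alpha, for a sine
# wave of wavelength lam measured in CCD elements (2 <= lam < infinity):
#     MTF_trans = exp(-N * alpha * (1 - cos(2*pi/lam)))

def mtf_trans(N, alpha, lam):
    return math.exp(-N * alpha * (1.0 - math.cos(2.0 * math.pi / lam)))

# Response of a 128-element register at the shortest wavelength (lam = 2):
print(round(mtf_trans(128, 0.001, 2), 3))   # exp(-0.256) ~ 0.774
print(round(mtf_trans(128, 0.01, 2), 3))    # exp(-2.56)  ~ 0.077
```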


Figure 24. The response of a 128-element CCD linear array to a sine wave for two charge-transfer inefficiencies; λ is given in CCD elements.

Note that as the values of λ become small (high frequencies) the response of the CCD shift register rapidly goes to zero for both values of α. The effect is best seen in images. Figures 25 and 26 show two simulations of the effect of charge-transfer inefficiencies on images. The charge is transferred down and out to the left of the array. As α becomes larger, the resulting image becomes more uniform, grading from the top right to the lower left. Considerable charge remains in the sensor after the readout process has finished. Figure 26 shows the effect on a 64 × 64 pixel image for α = 0.01.


Figure 25. The effect of charge-transfer inefficiencies on a checkerboard image for various values of α. Note the residual image left behind in this 16 × 16 frame-transfer CCD.


Figure 26. The effect of charge-transfer inefficiencies on an image from a 64 × 64 CCD frame-transfer sensor.

3.3 Color-Filter Arrays

The ideal way to encode a color image is to use three CCD sensors that are optically aligned as shown in Fig. 27. Such systems are used in very expensive professional and studio video cameras and in some consumer video cameras. However, the size and cost of the optics, the three sensors, and the additional electronics preclude their use in most digital still cameras. Most digital still cameras instead use a single CCD sensor with a color-filter array (CFA) to encode color (Peddie, 1993; Robertson and Fisher, 1985; Dillon et al., 1978; Lee et al., 1983; Aoki et al., 1982; Manabe et al., 1983; Takizawa et al., 1983).


Figure 27. A prism system used to separate an image into red, green, and blue components.

3.3.1 Additive and Subtractive Arrays

Color-filter arrays (CFAs) can take on many configurations, using additive colors (red, green and blue), subtractive colors (cyan, yellow and magenta), or a combination of both (Kriss, 1996a). Figure 28 shows four CFA configurations. The CFA with red, green, and blue stripes was used in Sony's MAVICA camera and represents the simplest type of CFA. The CFA using cyan, yellow, green, and white (clear) elements is similar to one of the first CFAs used in video cameras by Hitachi. The CFA with green columns and alternating red–blue columns was introduced by Sony in its higher quality digital still cameras. The green–red–blue checkerboard was developed at Kodak (Bayer CFA) (Bayer, 1976) and represents the most commonly used CFA in digital still cameras.


Figure 28. Samples of various color-filter arrays.

Since each of these CFAs produces sparse color images, some form of interpolation is required to reconstruct a full-color image. The nature of these interpolation algorithms is discussed in Sec. 4. The sparse color sampling further aggravates the aliasing problem and will be covered in Sec. 5.

Subtractive CFAs have the advantage of higher speed since each element (cyan, yellow, or magenta) passes roughly two-thirds of the incident light. Additive CFAs are slower since each element passes roughly one-third of the incident light. However, subtractive CFAs require a matrixing of the signals to produce the red, green, and blue signals needed to form the final image. The matrix for the Hitachi CFA in Figure 28 is

  • equation image(14)

If the CFA filter elements are ideal, the white (clear) channel receives three times the signal of the green channel. In a similar fashion, the yellow and cyan channels receive twice the signal of the green channel. If one assumes that the noise in each channel is proportional to the signal and that the noise in the green channel is σ, then the noise in each of the red, green, and blue output channels will be 2.12σ (Parulski, 1985). Subtractive CFAs also introduce image artifacts not seen in additive CFAs, as will be outlined in Sec. 4.

3.3.2 Multiple-array Configurations

Some digital still cameras employ two or three CCDs with different CFA patterns to achieve increased resolution and color fidelity (Morimoto et al., 1995; Miura et al., 1993; Sakai et al., 1994). Figure 29 shows one example of a two-CCD system. Since the human visual response (sharpness) is best for green light, one CCD array is fully devoted to green. The second CCD array uses columns of red and blue elements at half the resolution of the green. After the red and blue sparse images are interpolated to produce their respective full images, they are combined with the green image to form the final reconstructed scene.


Figure 29. A two-chip CCD system.

3.4 Signal, Noise and Speed

The signal, as well as the noise, will be measured by the number of photoelectrons captured by the individual sensor elements. The key factors are the photon flux falling on the sensor, the quantum efficiency η(λ) of the sensor, and the full-well capacity (FWC) of the CCD material. The photon flux depends on the ambient light conditions, the shutter speed τ of the camera, and the F-number F of the camera lens. The FWC depends on the sensor fabrication process and varies between 10⁹ and 10¹² electrons per cm² (Balk and Folberth, 1986; Tredwell, 1985; Schroder, 1990; Carr, 1972). If d is the edge of a square pixel, then the signal S is given by

  • \( S = d^2\,\tau \int P(\lambda)\,\eta(\lambda)\,d\lambda \)   (15)

where P(λ) is the photon flux on the sensor plane and η(λ) is the quantum efficiency of the CCD. The signal electrons cannot exceed the number defined by the full-well capacity and the area of the pixel; Smax = d² × FWC. The value of Smax is very important since it sets the dynamic range of the CCD sensor. A typical CCD array with d = 9 µm will hold about 85 000 electrons, indicating an FWC of 1.05 × 10¹¹ electrons per cm². If a digital still camera is to have the same dynamic range as film, then it must record up to 10 stops of exposure, or an exposure ratio of 1024. The FWC and pixel area will determine the exposure latitude of a CCD image sensor.
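The full-well arithmetic can be checked directly (values from the text):

```python
# Full-well arithmetic from the text: Smax = d^2 * FWC.
d_cm = 9e-4         # 9-um pixel edge, in cm
FWC = 1.05e11       # full-well capacity, electrons per cm^2
Smax = d_cm ** 2 * FWC
print(round(Smax))  # 85050 electrons, i.e. "about 85 000"
```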

Figure 30 shows the quantum efficiency for a typical frame-transfer CCD imaging array (Eastman Kodak, 1992). Note that the response extends well into the infrared and is relatively low in the blue. The low blue response is due to the light absorption in the polysilicon layer that covers most of the surface of the CCD. The fluctuations in the quantum efficiency are due to interference effects introduced by the thin silicon dioxide layer used to insulate the electron-storage area from the conducting polysilicon layers. In most digital still cameras, an infrared-blocking filter is used to limit the amount of infrared exposure and preserve good color reproduction.


Figure 30. Quantum efficiency of a CCD imaging array.

There are several important noise sources in CCD sensors and the electronics used to detect (measure) the charge stored in the individual imaging elements (Barbe, 1980; Sequin and Tompsett, 1975; Theuwissen, 1995; Hynecek, 1986; Teranishi and Mutoh, 1986). The most fundamental is the shot noise Ns, which reflects the statistical fluctuations in the number of photons absorbed by the CCD array. If N is the number of electrons stored in an individual CCD element, the shot noise is given by

  • \( N_{\mathrm{s}} = \sqrt{N} \)   (16)

A second source of noise is the thermal noise generated in the resistors of the output amplifiers, given by

  • equation image(17)

where R is the resistance, BW is the bandwidth of the signal, T is the absolute temperature and k is Boltzmann's constant. The output amplifier “sees” an output capacitance C. After each measurement of the charge from a pixel, the output amplifier is reset and it sees a different amount of residual charge on the output capacitor. The reset noise associated with this residual charge is given by

  • \( N_C = \sqrt{kTC}\,/\,q \)   (18)

where q is the charge of an electron in coulombs. Fixed-pattern noise, FPN, is due to the fixed variations in the quantum efficiency of each pixel (or, in the case of sensors that have transistors at each site, the variation in the transistor gain). The noise associated with the FPN is given as a fraction κ of the number of signal electrons:

  • \( N_{\mathrm{FPN}} = \kappa N \)   (19)

There is also flicker noise NJ, which depends on the materials used in the amplifiers and is often referred to as 1/f noise because of the shape of its spectrum, which varies inversely with frequency. The last, and most important, source of noise is the dark current. When a CCD sensor is placed in the dark, the thermal vibration of the crystal structure and the surface states interact to create thermal electrons that are captured by the CCD imaging elements. This noise depends on the integration (exposure) time τ of the sensor. If the dark current were constant over the CCD array, it would be an easy matter to subtract it from the final signal. However, the surface-state density varies from point to point in the CCD, thus giving rise to noise electrons whose number is proportional to the variance of the surface-state density. The dark-current noise is given by

  • equation image(20)

where Nss is the surface state density, d is the edge dimension of the sensor pixel, χ is the electron-capture (trapping) cross section in the potential wells of the pixels, υn is the thermal velocity of the electrons, and ni is the intrinsic free-carrier (electron) concentration. The total noise for a CCD array is given by

  • \( N_{\mathrm{total}} = \left( N_{\mathrm{s}}^2 + N_{\mathrm{t}}^2 + N_C^2 + N_{\mathrm{FPN}}^2 + N_J^2 + N_D^2 \right)^{1/2} \)   (21)

The noise due to the resetting of the readout amplifiers, NC, can be as high as several hundred rms electrons. However, techniques such as correlated double sampling can reduce this value to about 10 rms electrons (Hynecek, 1988).

Figure 31 shows a plot of the signal, noise, and combined signal of a typical CCD sensor array. This plot represents a 10-µm pixel with an exposure time of 0.01 s and the FPN given by κ = 0.0025. The curves do not take into consideration the full-well capacity of the pixel. For FWC = 3 × 10¹¹, a 10-µm pixel can hold Smax = 3 × 10⁵ electrons; this is shown as the full-well capacity line in Fig. 31. The dynamic range of this system (measured from S/N = 1 to full-well capacity) is about 3 × 10⁴, or about 15 stops. As is discussed below, a more practical dynamic range is given by S/N = 10 to the full-well capacity. This gives a dynamic range of about 10³, or about 10 stops, matching the requirement previously noted.
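A rough signal-to-noise sketch reproduces a dynamic range of this order (assumptions: a constant noise floor of 10 rms electrons standing in for reset, 1/f, and dark-current terms; κ and Smax follow the text):

```python
import math

# Rough S/N model: shot noise, fixed-pattern noise, and an assumed constant
# noise floor of 10 rms electrons, added in quadrature.

def total_noise(S, kappa=0.0025, floor=10.0):
    shot = math.sqrt(S)                      # photon shot noise
    fpn = kappa * S                          # fixed-pattern noise
    return math.sqrt(shot**2 + fpn**2 + floor**2)

Smax = 3e5
S10 = 10.0
while S10 / total_noise(S10) < 10.0:         # signal level where S/N ~= 10
    S10 *= 1.01
decades = math.log10(Smax / S10)
print(round(decades, 1))                     # ~3.3 decades of usable exposure
```

With these assumed numbers the usable range from S/N = 10 to full well is about three decades, consistent with the roughly 10-stop figure quoted above.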


Figure 31. The response of a CCD sensor. The "signal electrons" curve contains the base noise due to dark current. The full-well capacity line represents a 10-µm pixel with Smax = 3 × 10⁵ electrons.


Figure 32. Interpolation results for a Bayer CFA using a simple convolution kernel (middle) and logical interpolation (right). The black and white original image (on the left) is reproduced with alternating blue and yellow pixels on the edges when the simple convolution kernel is used. The logical interpolation results in uniformly colored edges.

There are no set rules for defining the speed of a CCD imaging array (Kriss, 1996b, 1998; Holms, 1996; Yoshida, 1995, 1996), but several methods have been proposed. Each defines speed in terms of the exposure EM, in lux-seconds, required to reach some specified signal-to-noise ratio for the sensor. In a negative film system the ISO speed is defined by (e.g., Altman, 1977)

  • \( \mathrm{ISO\;speed} = 0.8 / E_M \)   (22)

where EM is determined as the exposure at which the signal density is 0.1 density units above fog. As a photographic "rule of thumb," when a camera calibrated for the above ISO speed is used, a normal exposure on a bright, sunny day is F/16 at a shutter speed of τ = 1/(ISO speed). Thus, for an ISO 200 speed film, the shutter speed at F/16 would be τ = 0.005 s, or any other combination that gives the same result based on the equation

  • \( \tau = F^2 / \left( 16^2 \times \mathrm{ISO\;speed} \right) \)   (23)

The above exposure criterion will ensure an excellent print from the exposed negative. If the same criterion for speed, Eq. 22, is used for a CCD imaging array, then EM is the exposure value at which the signal-to-noise ratio S/N is given by

  • equation image(24)

Under this criterion, the speed of a typical monochrome CCD image array with d = 10 µm would be about ISO 186. If a color-filter array is used to encode color, then the speed falls to ISO 96. The above holds for short exposure times, τ < 0.01 s. For longer exposure times the noise due to the dark current builds up and the effective speed and dynamic range decrease. The two most important factors in the speed of a CCD imaging array are the quantum efficiency and the pixel area. Photodiode arrays do not require the polysilicon layer or the thin layer of silicon dioxide; hence they have greater blue speed than frame-transfer devices that use the MOS capacitors to store the photoelectrons. However, as noted earlier, the photodiodes have smaller active pixel areas in the interline-transfer format, thus limiting their ultimate speed.
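The sunny-day rule of thumb translates into a couple of lines (the helper name is hypothetical; the F/16 reference point is from the text):

```python
# Sunny-16 rule from the text: correct exposure on a bright sunny day is
# F/16 at tau = 1/(ISO speed); any (F, tau) pair with the same tau/F^2
# gives the same exposure.

def sunny16_shutter(iso, f_number=16.0):
    return (1.0 / iso) * (f_number / 16.0) ** 2

print(sunny16_shutter(200))        # 0.005 s at F/16, as in the text
print(sunny16_shutter(200, 8.0))   # 0.00125 s at F/8 (same exposure)
```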

4 Image Interpolation


As pointed out in Sec. 3.3, when a color-filter array (CFA) is used to encode a color image with a single CCD imaging sensor, the resulting color images are sparsely sampled. The "holes" in the images must be filled in by some form of interpolation. Each interpolation algorithm has an associated modulation transfer function (MTF) that adds to the degradation of the overall image sharpness and in some cases can add colored artifacts that greatly diminish the quality of the final image.

4.1 Whittaker–Shannon Interpolation

When the spectrum of the image contains no information greater than half the sampling frequency fS, then it is possible to recover the sampled images completely. The sampling frequency for a frame-transfer device is given by

  • \( f_S = 1/d \)   (25)

where d is the edge dimension of the pixel. In the case of an interline-transfer device, the sampling frequency is given by

  • \( f_S = 1/p \)   (26)

where p is the pitch of the pixels. Half the sampling frequency is called the Nyquist frequency, fN:

  • \( f_N = f_S/2 \)   (27)

The Nyquist frequency plays an important role in defining the conditions for aliasing as is demonstrated in Sec. 5.

If I(id, jd) is the sampled image, where i and j take on integer values, then the completely reconstructed image I′(x, y) is given by

  • \( I'(x,y) = \sum_{i}\sum_{j} I(id,jd)\,\mathrm{sinc}\!\left(\frac{x-id}{d}\right)\mathrm{sinc}\!\left(\frac{y-jd}{d}\right), \quad \mathrm{sinc}(u) = \frac{\sin(\pi u)}{\pi u} \)   (28)

The exact implementation of this Whittaker–Shannon (Whittaker, 1915; Shannon, 1949) method requires an infinite number of image points. However, adequate reconstruction can be obtained with fewer than 20 image points in each direction (a 20 × 20 array). Even this reduced array is not practical for a typical digital still camera because of the intensive computation required to fill in the missing pixels.
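A one-dimensional version of this reconstruction is easy to sketch (a finite, truncated sum, so it is only approximate near the ends of the sample array):

```python
import math

# 1-D Whittaker-Shannon reconstruction from samples at pitch d, using a
# finite sinc sum; the exact formula needs infinitely many samples, but a
# modest neighborhood of samples is adequate in practice.

def sinc(u):
    return 1.0 if u == 0 else math.sin(math.pi * u) / (math.pi * u)

def reconstruct(samples, d, x):
    return sum(s * sinc(x / d - i) for i, s in enumerate(samples))

# A sine wave below the Nyquist frequency (fN = 0.5 cycles/sample for d = 1)
# is recovered almost exactly at an off-sample position:
d, f = 1.0, 0.2
samples = [math.sin(2 * math.pi * f * i * d) for i in range(400)]
x = 200.37
print(abs(reconstruct(samples, d, x) - math.sin(2 * math.pi * f * x)) < 1e-2)
```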

4.2 Simple Interpolation by Convolution

A more common method of filling in the missing pixels is simple convolution based on nearest-neighbor pixels. These nearest-neighbor interpolation algorithms belong to the more general class of Lagrange interpolation functions (Davis and Polonsky, 1964).

Consider, as an example, the Bayer CFA shown in Fig. 28. The green image looks like a simple checkerboard with half of the sampled image values missing. Using a nearest-neighbor interpolation algorithm, one can construct a convolution kernel given as

  • \( K_G = \begin{pmatrix} 0 & 1/4 & 0 \\ 1/4 & 1 & 1/4 \\ 0 & 1/4 & 0 \end{pmatrix} \)   (29)

The green kernel shown in Eq. 29 is used in the following manner to fill in the missing pixels. The matrix (template) given by Eq. 29 is placed on one of the missing pixel sites in the sparse green image. The weights shown in the matrix are then multiplied by the known sampled values associated with each of the matrix elements. The resulting values are then summed to give the value of the missing pixel. The matrix is then moved to the next empty site and the process is repeated. This is equivalent to convolving the kernel with the entire sparse green image. The values of the kernel are such that the known sampled points will not be altered. The equivalent kernels for the sparse red and blue images resulting from the use of the Bayer CFA are identical, and the kernel is

  • \( K_{RB} = \begin{pmatrix} 1/4 & 1/2 & 1/4 \\ 1/2 & 1 & 1/2 \\ 1/4 & 1/2 & 1/4 \end{pmatrix} \)   (30)

The blue/red kernel is designed to pick the appropriate values for the interpolation as it moves through the sparse blue or red layers.
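The kernel operation can be sketched as a small convolution; the weights below are the standard bilinear values commonly used with the Bayer pattern, stated here explicitly as an assumption:

```python
# Bilinear interpolation of a sparse CFA plane: convolve the plane (with
# zeros at missing sites) against a kernel that averages the nearest known
# neighbors while leaving the known samples untouched.

G_KERNEL = [[0.0, 0.25, 0.0], [0.25, 1.0, 0.25], [0.0, 0.25, 0.0]]
RB_KERNEL = [[0.25, 0.5, 0.25], [0.5, 1.0, 0.5], [0.25, 0.5, 0.25]]

def convolve(plane, kernel):
    h, w = len(plane), len(plane[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += kernel[dy + 1][dx + 1] * plane[yy][xx]
            out[y][x] = acc
    return out

# Green samples of a flat 100-unit field lie on a checkerboard; the kernel
# restores the missing sites exactly for this flat scene and leaves the
# known sites unchanged.
flat = [[100.0 if (x + y) % 2 == 0 else 0.0 for x in range(6)] for y in range(6)]
full = convolve(flat, G_KERNEL)
print(full[2][3], full[2][2])   # 100.0 100.0
```

Note that the center weight of 1 is what guarantees the known sampled points are not altered, as stated above.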

As noted above, the interpolation process tends to soften the images. The MTF due to the green kernel is given by

  • \( \mathrm{MTF}_{\mathrm{kernel}} = \tfrac{1}{2} + \tfrac{1}{4}\left[ \cos(\pi f_x/f_N) + \cos(\pi f_y/f_N) \right] \)   (31)

where fx and fy are the spatial frequencies in the x and y directions, respectively. The frequency range of the above MTF due to interpolation is between zero and fN in both directions. Thus the maximum loss is along the diagonal where fx = fy = fN and the MTF vanishes. Along the x or y direction, the maximum loss is MTFkernel = 0.5. The MTFkernel for the blue and red layers is given by

  • \( \mathrm{MTF}_{\mathrm{kernel}} = \tfrac{1}{4}\left[ 1 + \cos(\pi f_x/f_N) \right]\left[ 1 + \cos(\pi f_y/f_N) \right] \)   (32)

Note that the MTF associated with the blue and red layers will vanish at the Nyquist frequency along each axis. Thus, the blue and red images will be less sharp than the green image after interpolation. Each CFA configuration will have interpolation MTFs that correspond to the interpolation algorithms used to fill in the sparse images.

4.3 Logical Interpolation

Instead of using a convolution kernel to perform the interpolation, it is possible to use a logic-based approach. Several approaches can be applied (Cok, 1994; Adams, 1995; Tsumura et al., 1997). The simplest is to test whether the missing pixel lies along a vertical, horizontal, or diagonal line. This is done by looking for the direction with the smallest change in pixel values. If none of the three directions shows a minimum, then the average given by the kernels described above is used.
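A minimal sketch of the direction test (restricted to horizontal and vertical for brevity; the threshold value is an arbitrary assumption):

```python
# At a missing green site, compare the pixel-value change in the vertical
# and horizontal directions and average along whichever varies least; fall
# back to the four-neighbor average when neither direction clearly wins.

def logical_green(up, down, left, right, threshold=5.0):
    dv = abs(up - down)      # change along the vertical
    dh = abs(left - right)   # change along the horizontal
    if dv + threshold < dh:          # vertical line/edge: interpolate along it
        return (up + down) / 2.0
    if dh + threshold < dv:          # horizontal line/edge
        return (left + right) / 2.0
    return (up + down + left + right) / 4.0

# A vertical edge: dark to the left, bright above, below, and to the right.
# Interpolating along the edge avoids averaging across it.
print(logical_green(up=200, down=200, left=10, right=200))   # 200.0
```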

The next level of logic-based interpolation is to create a series of templates that represent lines, edges, corners, and other local geometric shapes. The known pixels around the missing pixel are compared with the set of templates and the template that shows the highest correlation to the known pixels is used to generate the missing pixel.

A more sophisticated approach is to use one of the above methods to fill in the missing pixels of the layer that has the most sampled points, usually the green layer. Once this layer is reconstructed, its pixels are used along with the known pixels in the other two layers to estimate the missing pixels in these layers. This method is very good in smoothly varying regions, but can fail in regions where there are rapid changes in detail or color.

Figure 32 shows two examples of interpolation on a simple target. The left image is the original input. The middle image is the result of sampling with a Bayer CFA and then using the kernels defined by Eqs. 29 and 30 to fill in the three sparse layers. Note the colored edge patterns; they are very easy to see in an image. The right image shows the results of using a logical interpolation algorithm that looks for horizontal, vertical and diagonal lines. The colored edge transitions are much smoother and are not as noticeable as the ones created by the interpolation kernel.

5 Sampling, Aliasing and Artifacts


Digital still cameras differ from conventional cameras in that the images are sampled by a CCD image array. The discrete sampling introduces the possibility for image artifacts due to aliasing. The aliasing in monochrome systems results in banding and jagged edges depending on the scene content. When CFAs are used to encode the color information of the scene, then the aliasing introduces artifacts that have very noticeable color fringing or banding. The aliasing can be eliminated by ensuring that the spectrum of the image as seen by the CCD sensor contains no information beyond the Nyquist frequency of the sensor (Goodman, 1968; Gonzalez and Woods, 1993).

5.1 Sampling and Aliasing

Aliasing is best demonstrated by a simple one-dimensional example. Consider a sine wave that is sampled at different rates. At very high sampling rates the sine wave can be easily reconstructed with little error. However, as the sampling rate drops below twice the frequency of the sine wave, aliasing occurs. The aliased signal appears to be a sine wave with a lower frequency than the original. Figure 33 shows an example of this process. Note that as the sampling rate decreases, the sine wave is distorted and is reproduced at lower frequencies. The relationship between the observed frequency fO and the input frequency fI is given by

  • \( f_O = \left| f_I - n f_S \right| \)   (33)

where fS is the sampling frequency and n is an integer. For all the cases shown in Figure 33, n = 1 and fI = 16. Hence, for fS = 31, one finds fO = 15. Likewise, for fS = 15, one finds fO = 1. The aliased signal frequency will always be less than the Nyquist frequency, fN = fS/2. For any given values of fS and fI, n is adjusted to find fO to meet the criterion 0 < fO < fN.
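The folding relation is simple to compute, choosing the integer n that lands the output frequency in the range 0 to fN:

```python
# Aliased output frequency: fO = |fI - n*fS|, with integer n chosen so
# that 0 <= fO <= fN = fS/2 (numbers below match the text's examples).

def aliased_frequency(f_in, f_s):
    n = round(f_in / f_s)
    return abs(f_in - n * f_s)

print(aliased_frequency(16, 31))    # 15, as in the text
print(aliased_frequency(16, 15))    # 1, as in the text
print(aliased_frequency(16, 100))   # 16: below Nyquist, no aliasing
```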

Figure 33. A 16 c/mm sine wave sampled at four different rates.

Figure 34 shows two aliased color images taken with a digital still camera. The images were taken from different distances to demonstrate how sampling-induced aliasing produces different artifacts depending on the effective spatial frequency of the target image in the plane of the sensor. Both images show the effects of aliasing. The top image was taken at 24 ft, so the image in the plane of the CCD contains much higher spatial frequencies than that of the image taken from 8 ft, and these higher frequencies are aliased to lower values. Any slight rotation between the target and the CCD sensor will introduce complex patterns, as will any distortion introduced by the camera lens.

Figure 34. Both color images were taken with a high-resolution CCD camera using a Bayer CFA without an optical prefilter. The left image is the original test target. The top right image was taken from 24 ft and the bottom right image from 8 ft.

The amount of aliasing will strongly depend on what type of CCD imaging array is used. A frame-transfer device will show less aliasing than an interline-transfer device. Refer to Figs. 18–21. In a frame-transfer device, the pitch between pixels is equal to the edge dimension d. However, in an interline-transfer device, the edge dimension is less than the pitch p. The MTFpixel for a pixel of edge d (one-dimensional case) is given by

  • MTFpixel(f) = sin(πdf)/(πdf)   (34)

where f is the spatial frequency. As d decreases in size, MTFpixel retains more response at higher spatial frequencies. This means that the probability of aliasing increases; see Fig. 35, where the MTFpixel curves for two values of d are shown. The value d = 0.01 mm represents a frame-transfer device, while d = 0.005 mm represents an interline-transfer device. The pitch is the same for both: p = 0.01 mm. Since all information beyond the Nyquist frequency will be aliased, it is clear from Fig. 35 that the smaller pixel in the interline-transfer device will have considerably more aliasing. Figure 36 shows an example of what happens when one decreases the pixel size but keeps the pitch fixed. The top left image is the input image. The top right image represents a frame-transfer device with d equal to p. Note that the averaging over the pixel has greatly reduced the image detail and has introduced a hint of aliasing. The bottom left image represents an interline-transfer device where d = 0.5p. The image looks a little sharper, but significant banding appears. The bottom right image also represents an interline-transfer device, but with d = 0.25p; the banding becomes stronger. These artifacts can only be eliminated by removing any vestige of the details of the bricks by some form of optical prefiltering; see Sec. 3.1.2.
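The two pixel sizes can be compared at the Nyquist frequency with a short numeric check of Eq. (34) (function name is mine):

```python
import math

def mtf_pixel(f, d):
    """MTF of a square pixel aperture of edge d (mm) at spatial frequency f (c/mm)."""
    x = math.pi * d * f
    return 1.0 if x == 0 else abs(math.sin(x) / x)

# Nyquist frequency for a pitch p = 0.01 mm is 50 c/mm.
f_nyq = 50.0
print(round(mtf_pixel(f_nyq, 0.010), 2))  # frame transfer, d = p: about 0.64
print(round(mtf_pixel(f_nyq, 0.005), 2))  # interline transfer, d = p/2: about 0.90
```

The smaller aperture passes roughly 90% modulation at the Nyquist frequency versus 64% for the full-pitch pixel, which is why the interline device passes more alias-prone signal.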

Figure 35. The MTF of a pixel. The signal to the left of the Nyquist frequency is not aliased, the signal to the right is aliased.

Figure 36. The effect of sampling on an image. The top left is the original image. The top right represents a frame-transfer device. The two bottom images represent interline-transfer devices with d = 0.5p on the left and d = 0.25p on the right.

5.2 Sampling in Color Systems

When a CFA (see Fig. 28) is used to encode a color image, the aliasing artifacts become more pronounced as a result of sparse sampling and phase shifts between the red, green, and blue layers (Kriss, 1997). Figure 37 shows the full-color image and the three separation images from a high-resolution (2000 × 3000 pixels) digital still camera using a Bayer CFA. The target is a square wave. The green sampling frequency is twice that of the red and blue. The resulting red and blue images are aliased to the same low frequency, but they are about 180° out of phase. The green signal is reproduced close to the Nyquist frequency. The resulting color image shows very strong orange–blue bands. Different CFA patterns will result in different color banding. While the Bayer CFA is the most popular pattern for digital still cameras, these strong color artifacts can only be eliminated by appropriate optical prefiltering.

Figure 37. The full-color image of a test pattern taken with a high-resolution digital still camera using a Bayer CFA, and the red, green, and blue separations.

5.3 Potential for Aliasing

The actual amount of aliasing in the final image will depend on the sampling characteristics of the digital still camera and the scene content. Uniform areas will show no aliasing, edges will show color “jaggies,” and areas with regular patterns will show strong color banding. One measure of the potential for aliasing of a given CCD imaging array is the ratio of aliased power to nonaliased power (Kriss, 1990; Kriss, 1997). Refer to Fig. 35. The power associated with the MTFpixel response to the left of the Nyquist frequency is the nonaliased power, and that to the right is the aliased power. For a two-dimensional, frame-transfer CCD array, where the ratio between the square pixel edge d and the pixel pitch p is given by

  • β = d/p   (35)

the potential for aliasing, Ω, is given by

  • Ω = PA/PN   (36)

where

  • PN = ∫∫|fx|,|fy|≤fN MTF²pixel(fx, fy) dfx dfy,   PA = ∫∫ MTF²pixel(fx, fy) dfx dfy − PN,   MTFpixel(fx, fy) = (sin u/u)(sin v/v)   (37)

and where u = πdfx and v = πdfy (so that u = v = πdf for fx = fy = f). The potential for aliasing, Ω, gives the ratio of the signal power aliased from spatial frequencies above the Nyquist frequency to the nonaliased signal power below the Nyquist frequency. As this ratio increases, the DSC system will be more prone to introduce aliasing artifacts. Figure 38 shows a plot of the potential for aliasing, Ω, versus β. For a frame-transfer device, β = 1 and Ω = 0.67, indicating that 40% of the image power is due to aliased information. For an interline-transfer device, β = 0.5 and Ω = 3.58, or 78% of the image power is due to aliased information. This result agrees with the results shown in Fig. 36. In general, the value of Ω will be inversely proportional to the number of sampling points. Thus the green image from a Bayer CFA will have twice the potential for aliasing of a monochrome sensor. Likewise, the blue and red images from a Bayer CFA will have twice the potential for aliasing of the green layer and four times that of a monochrome sensor.

Figure 38. The potential for aliasing, Ω, as a function of β = d/p.

One method to reduce aliasing is to record the image by dithering the CCD array. Four images of the scene are taken: one at the original position, and one each with the CCD array displaced one-half pixel to the left, one-half pixel down, and one-half pixel along the diagonal. The four images are then combined to form a final image. This corresponds to a value of β = 2 and Ω = 0.23; that is, 19% of the image power is due to aliased information.
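The quoted values of Ω can be reproduced by numerical integration of the squared pixel MTF, under the same flat-input-spectrum assumption (a sketch; the helper names are mine):

```python
import math

def potential_for_aliasing(beta, n=4000):
    """Ratio of aliased to nonaliased power for a square pixel, beta = d/p.

    Uses MTF_pixel(f) = sin(pi*d*f)/(pi*d*f) per dimension.  The nonaliased
    power is integrated (Simpson's rule) over the square |fx|, |fy| <= f_N
    with f_N = 1/(2p); the total power per dimension is the analytic value
    1/d of the integral of sinc^2 over all frequencies.
    """
    d, p = beta, 1.0                     # work in units of the pitch p
    f_nyq = 0.5 / p

    def mtf2(f):
        x = math.pi * d * f
        return 1.0 if x == 0.0 else (math.sin(x) / x) ** 2

    h = f_nyq / n                        # Simpson integration over [0, f_N]
    s = mtf2(0.0) + mtf2(f_nyq)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * mtf2(i * h)
    nonaliased_1d = 2.0 * (h / 3.0) * s  # symmetric interval [-f_N, f_N]
    total_1d = 1.0 / d
    nonaliased = nonaliased_1d ** 2      # separable two-dimensional MTF
    return (total_1d ** 2 - nonaliased) / nonaliased

for beta in (1.0, 0.5, 2.0):
    print(beta, round(potential_for_aliasing(beta), 2))
```

Running this gives Ω ≈ 0.67 for β = 1, Ω ≈ 3.58 for β = 0.5, and Ω ≈ 0.23 for the dithered case β = 2, matching the values quoted in the text.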

In the above, the input spectrum was assumed to be flat. Real images do not have flat spectra and there will be further losses due to the MTF of the lens and any optical prefilters (see Sec. 10). Thus, while the above measure represents the worst case, it gives a very good relative measure of the potential for aliasing for a given CCD imaging array layout.

6 Image Compression and Storage

  1. Top of page
  2. Introduction
  3. the Digital Imaging Chain
  4. Image Capture
  5. Image Interpolation
  6. Sampling, Aliasing and Artifacts
  7. Image Compression and Storage
  8. Image Processing and Manipulation
  9. Color Reproduction
  10. Digital Hard Copy
  11. Image Quality
  12. Works Cited
  13. Further Reading

Storing image data is a very critical part of any digital photography system. To understand the magnitude of the problem it is best to calculate the storage capacity of a 35-mm frame of film in terms of a frame-transfer device. What follows is an approximation that underestimates the true storage capacity of film but does clearly indicate the need for image compression. A quick comparison between film and a CCD can be obtained by finding the “equivalent” pixel size d that gives a CCD MTF [see Eq. 32] that has the same 50% response value as the film MTF in question (Kriss, 1987) (see Fig. 39). From Eq. 34 the 50% MTFpixel value is given by

  • MTFpixel(f50%) = sin(πdf50%)/(πdf50%) = 0.5, or d ≈ 0.6/f50%   (38)

In Fig. 39, the film's MTF is 50% at f50% = 60 c/mm. Thus the equivalent pixel size is 10 µm. The dimensions of a 35-mm frame are 24 × 36 mm². Thus, considering all three layers, the total number of equivalent pixels is 2.592 × 10⁷. Current storage technology will allow the storage of 640 megabytes of data on a magneto-optical disk with a 3.5-in. (8.7-cm) form factor. This means that about 24 images could be stored without compression, or the equivalent of one roll of film. While only the most expensive digital cameras have effective resolutions close to that of film, even those with much more moderate resolution require considerable storage capacity in the absence of compression.
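The arithmetic behind these figures can be reproduced directly (a sketch; one byte per equivalent pixel, i.e., three bytes per full-color pixel spread over three layers, is assumed):

```python
# Equivalent pixel edge from the 50% MTF point of the film: d ~ 0.6 / f50.
f50 = 60.0                      # c/mm, from Fig. 39
d = 0.6 / f50                   # -> 0.01 mm (10 micrometers)

frame_w, frame_h = 36.0, 24.0   # mm, 35-mm frame
pixels_per_layer = (frame_w / d) * (frame_h / d)
total_pixels = 3 * pixels_per_layer     # three color layers
print(total_pixels)                     # about 2.592e7 equivalent pixels

disk_bytes = 640e6                      # 640-MB magneto-optical disk
images = disk_bytes // total_pixels     # one byte per equivalent pixel
print(images)                           # about 24 images, one roll of film
```

The same arithmetic applied to a 1280 × 960 camera (3.7 MB interpolated) shows why a 10-MB picture card holds only a handful of uncompressed images.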

Figure 39. Determining the equivalent pixel size for a photographic film.

Most consumer digital still cameras have SVGA resolution of 800 × 600 pixels, thus requiring a total of 1.44 MB per image after full interpolation has taken place. Some consumer digital still cameras have a resolution of 1280 × 960 pixels and require about 3.7 MB of storage after full interpolation. Almost all new digital still cameras use some type of solid state memory, digital picture cards (DPC), to store the image data. These DPCs have about 10-MB capacities. Thus, at the above resolution only three to seven images can be stored without compression. Clearly, considerable image compression is required for digital still cameras if they are to be used in the same way that conventional cameras are used by consumers.

6.1 JPEG Compression

Many image-compression algorithms have been developed over the past 30 years. The two most commonly used are those of the Joint Photographic Experts Group (JPEG) (JPEG, 1969) for still images and the Moving Picture Experts Group (MPEG-I, 1990; MPEG-II, 1992) for video images.

Figures 40 through 42 show a greatly simplified version of the JPEG compression process. The entire compression process is dedicated to producing an image that “looks good” to an observer but uses the least possible number of bits per pixel. The first step is to transform the color image into a luminance channel and two chrominance channels. Plate 4 shows such a separation, where the two chrominance channels have been biased to a neutral value in order to print them. The luminance channel contains most of the detail information, while the two chrominance channels carry the color information but little spatial detail. Color-television transmission is based on this principle.

Figure 40. Separation of the color image into luminance and chrominance channels.

Figure 41. Processing of luminance channel.

Figure 42. Subsampling of the chrominance channels.

The next step is to break up the luminance channel into 8 × 8 blocks. Each block is then transformed by means of a discrete cosine transform (DCT) (Rao and Yip, 1990) into 64 coefficients that represent specific spatial-frequency information over the 8 × 8 block of pixels. Each coefficient is then quantized differently to use the least number of bits to represent the 8 × 8 block.
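An orthonormal DCT can be sketched directly from its definition (pure Python, for illustration only; production JPEG codecs use fast factorizations of this transform):

```python
import math

def dct_1d(v):
    """Orthonormal DCT-II of a one-dimensional sequence."""
    N = len(v)
    out = []
    for k in range(N):
        c = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        out.append(c * sum(v[n] * math.cos(math.pi * (n + 0.5) * k / N)
                           for n in range(N)))
    return out

def dct_2d(block):
    """Separable two-dimensional DCT: transform rows, then columns."""
    rows = [dct_1d(r) for r in block]
    cols = [dct_1d(col) for col in zip(*rows)]
    return [list(r) for r in zip(*cols)]

# A uniform 8x8 block: all signal energy lands in the single DC coefficient.
block = [[100.0] * 8 for _ in range(8)]
coeffs = dct_2d(block)
print(round(coeffs[0][0]))   # DC coefficient: (1/8) * (64 * 100) = 800
```

For a uniform block, every AC coefficient is (numerically) zero, which is the energy-compaction property the quantizer exploits: smooth blocks need very few bits.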

Consider Fig. 43, which shows the DCT of a full 64 × 64 image. Note that for this image there is very little high-frequency power in the DCT spectrum. These higher frequencies are not seen well by the human visual system and are assigned fewer bits than those at lower frequencies. In some cases the coefficients corresponding to the highest frequencies are set equal to zero. The 8 × 8 blocks used in the JPEG process represent a more limited frequency range than shown in Fig. 43. The lowest frequency measured by the 8 × 8 block is eight times the lowest frequency from the full 64 × 64 image. However, each 8 × 8 block does address the same high-frequency information as the full image, but in a local fashion.

Figure 43. The DCT of a 64 × 64 image.

The bit depth assigned to each coefficient of the DCT is based on the human visual response function, MTFeye, shown in Fig. 44. Since the actual response of the eye depends on the viewing distance, the JPEG compression algorithm assumes a standard viewing distance and image size. Images that are enlarged and viewed at the standard viewing distance will show compression artifacts.

Figure 44. The visual MTF for a viewing distance of 300 mm.

The next step in the compression process is to compare the values of the same coefficients from adjacent blocks. By use of differential pulse code modulation (DPCM) (Rabbani and Jones, 1991) techniques, the differences between the values of the coefficients are calculated. This operation reduces the variance in the transformed image data, thus reducing the number of bits required to store the image data. Finally, a Huffman code is used to compress the data further before it is stored on the storage medium using error-correcting algorithms. This whole process allows one to store an image using from 4 bits to 0.15 bit per pixel, depending on the image and the desired final quality. This covers a compression range of 2:1 to 50:1.

The same process is used for the two chrominance channels except that they are first subsampled at a rate of 2:1. The subsampling is possible since, as seen in Fig. 45, very little detail is carried by the chrominance channels. The decompression is done in the reverse order.

Figure 45. The luminance and chrominance channels of a color image. The luminance channel on the left carries most of the detail information. The two chrominance channels have been biased with a neutral level to make possible their printing.

Figure 46 shows four levels of JPEG compression provided by a digital still camera. As the compression ratio increases, more image artifacts will be seen, along with a general loss of sharpness. Since this image is only a small portion of the full 960 × 1280 frame, the enlargement makes the noncompressed image appear to be unsharp. This demonstrates the need for high-resolution DSCs if one is to make enlargements.

Figure 46. JPEG compression of an image from a digital camera using a Bayer CFA. The top left image has no compression. The top right image is compressed at a ratio of 3.5:1, the bottom left image is compressed at a ratio of 6:1, and the image at the bottom right is compressed at a ratio of 10:1. Only a small portion of the image is shown.

7 Image Processing and Manipulation

Conventional photographic systems use sophisticated chemistry to reduce granularity (noise), increase sharpness, and improve color reproduction. Image manipulation (special effects) is accomplished by many hours of “darkroom” work. Digital photographic systems have emerged as part of the computer revolution and make good use of the processing power of the desk-top computer to enhance and alter images. The advent of inexpensive film and paper scanners also makes it possible to apply the same enhancement techniques to images captured by conventional photographic systems. Complex operations that were once restricted to industrial research laboratories and academic researchers have become available to the general public in the form of sophisticated software and very fast desk-top computers.

7.1 Noise Reduction and Edge Enhancement

Noise in the final image can be classified as part of the signal that does not carry (useful) information. The goal of all noise-reduction algorithms is to eliminate the noise without removing useful or aesthetic information. Many sophisticated algorithms have been developed to remove noise, but only one class will be discussed. A one-dimensional model will be used for ease of exposition.

Figure 47 shows a typical noisy edge trace, Z(x). The goal is to remove the noise and retain the information contained in the edge. The signal Z(x) is decomposed into three “smoothed” images and difference signals, ΔZi, as shown in Fig. 48. Difference signals are obtained by using an aperture, L, to smooth the original image and then subtracting the smoothed image from the original. The difference signals tend to emphasize detail and noise, and represent specific spatial-frequency bands. This process is repeated three times, using the output of the first stage as the input of the second stage, and so on.

Figure 47. A typical noisy edge in an image.

Noise reduction is achieved by comparing each of the difference signals to a threshold value that characterizes the noise in the given band. The threshold values are obtained by implementing the operations outlined above on a uniform area that is imaged with the system in question. The standard deviation of the difference signals forms the basis for determining the threshold values. Once the noise has been removed from the difference signals by comparison with the threshold values, the “noise-reduced” difference signals are amplified before they are added back to the least sharp image in Fig. 48. The resulting signal, Fig. 49, has less noise and a much sharper edge transition and will appear much sharper to an observer.
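The decomposition-and-coring scheme can be sketched in one dimension (box smoothing, three stages; the aperture size, thresholds, and gains below are illustrative assumptions, not values from the article):

```python
def smooth(z, half):
    """Box-aperture smoothing (aperture L = 2*half + 1 samples, edge-clamped)."""
    n = len(z)
    return [sum(z[max(0, min(n - 1, i + k))] for k in range(-half, half + 1))
            / (2 * half + 1) for i in range(n)]

def core(delta, t):
    """Coring: zero any difference smaller in magnitude than the noise threshold."""
    return [0.0 if abs(x) < t else x for x in delta]

def enhance(z, thresholds=(0.0, 0.0, 0.0), gains=(1.0, 1.0, 1.0), half=2):
    """Three-stage decomposition: each stage smooths and keeps the difference band."""
    diffs, s = [], list(z)
    for _ in range(3):
        sm = smooth(s, half)
        diffs.append([a - b for a, b in zip(s, sm)])
        s = sm
    # Add the cored, amplified difference bands back to the least sharp image.
    out = list(s)
    for d, t, g in zip(diffs, thresholds, gains):
        out = [o + g * x for o, x in zip(out, core(d, t))]
    return out

signal = [0, 0, 1, 0, 10, 10, 9, 10, 10]   # noisy edge
print(enhance(signal, thresholds=(0.5, 0.5, 0.5), gains=(2.0, 1.5, 1.2)))
```

With all thresholds at zero and unit gains the bands telescope back to the original signal; raising the thresholds suppresses small (noise-like) differences, and gains above one sharpen the edge, as in Fig. 49.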

Figure 48. A three-stage process to break down a signal (image) into well defined frequency bands.

Figure 49. The noisy edge after noise reduction and edge enhancement.

The above approach can be extended by creating subimages that have specific spatial orientations as well as specific spatial-frequency content. Each subimage is processed by methods similar to those described above. The altered subimages are then recombined to form a greatly enhanced image (Bayer and Powell, 1986).

7.2 Examples of Image Enhancement

The methods outlined above have their roots in a multistage photographic process called unsharp masking. These methods and others have been incorporated into many commercial software packages. Figure 50 shows an example of image enhancement using one of these packages.

Figure 50. Digital unsharp masking. The image at the left is the original. The next two images show increased levels of unsharp masking.

7.3 Image Manipulation and Protection

The power of the computer has made it possible to manipulate images in many ways. The complexity of the image processing precludes an in-depth discussion, but a few examples will demonstrate what can be done. Figure 51 shows a simple case of combining images. Figure 52 shows three examples of image manipulations that can easily be done with commercial packages.

Figure 51. Combining images.

Figure 52. One example each of image distortion and image rendering. The image on the left is the original.

It may be necessary to establish the ownership of an image. One method of doing this is to apply a “digital watermark” to the image that cannot be detected upon casual inspection but can be recovered by image processing. Two simple methods will be presented here. The first method is to replace the least significant “bit plane” of an image with the desired watermark. Refer to Fig. 53. The least significant bit plane looks like noise. One can add the watermark such that only those bits that are “off” are turned on. This will minimize the amount of image degradation that the watermark introduces. As seen in Fig. 53, one cannot see the effect of the watermark. If the image with the watermark is subtracted from the original, the watermark is seen. However, if the image with the watermark is significantly altered, the watermark may be lost. For example, when ten units of random noise were added to the image with the watermark, this was enough to change most of the bits in the least significant bit plane; thus, when the image subtraction takes place, only noise is seen.
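A minimal sketch of the bit-plane method (here the least significant bit is simply replaced rather than OR-ed in as the text describes, so recovery does not require the original image; 8-bit grayscale pixels are assumed):

```python
def embed_watermark(pixels, mark):
    """Replace the least significant bit of each pixel with a watermark bit."""
    return [(p & ~1) | (m & 1) for p, m in zip(pixels, mark)]

def extract_watermark(pixels):
    """Read the least significant bit plane back out."""
    return [p & 1 for p in pixels]

image = [200, 13, 77, 254, 0, 129]
mark = [1, 0, 1, 1, 0, 1]
marked = embed_watermark(image, mark)

print(extract_watermark(marked))                       # -> [1, 0, 1, 1, 0, 1]
print(max(abs(a - b) for a, b in zip(image, marked)))  # each pixel moves by at most 1
```

Adding even a few units of random noise to `marked` scrambles this bit plane, which illustrates the fragility noted above and motivates the spectral method of Fig. 54.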

Figure 53. Watermark placed in the least significant bit plane.

The second method is shown in Fig. 54. Here one adds the spectrum of the watermark to the spectrum of the original image and then the combined spectrum is used to form the image with the watermark. This method distributes the watermark throughout all the image bit planes. Upon subtraction the watermark is clearly seen. If the same ten units of noise are added to the image with the watermark, upon subtraction from the original the watermark is clearly resolved.

Figure 54. Watermark created by using the addition of spectra.

8 Color Reproduction

Few areas of electronic imaging are richer or more full of pitfalls than color reproduction. The richness is due to the complete flexibility one has, by means of software and hardware, to alter or enhance the color properties of an image. The pitfalls are due to the many different types of DSCs, monitors, color printers, and software that are available. Obtaining the “perfect” print requires very rigorous calibration of all the equipment used to render the final image. An image rendered for one printer or monitor may not look very good on a different printer or monitor. The difficulties involved in solving these problems are beyond the scope of this article.

In what follows, it is assumed that the final viewing system is a continuous-tone display or hard copy. Many soft displays and hard-copy systems employ some form of digital halftones. Color reproduction in these digital halftone systems is also beyond the scope of this article.

The properties of CFAs and of CCD sensors are discussed in mostly general terms, but some analytical examples are given for ideal systems. Three-chip systems provide “cleaner” signals, but at nearly three times the cost. Some work has been done on the optimization of color filters for CCD cameras, but this work reaches well beyond the intended scope of this review (Englehardt and Seitz, 1993).

The overall color reproduction of a DSC requires considerable manipulation of the recorded signals, and these manipulations will amplify the noise content of the captured (or stored) image. A DSC (with multiple sensors or with a single sensor with a CFA) will have spectral sensitivities that are defined by the native spectral sensitivity of the CCD (including any effects of the polysilicon layer as described above), the spectral transmittances of the filters used, and, to a lesser degree, the spectral transmittance of the camera lens. The resulting spectral sensitivities will record colors as seen by humans (the standard observer) if they are a linear combination of the CIE colormatching functions; see Fig. 55 (Wyszecki and Stiles, 1982).

Figure 55. CIE color-matching functions.

The original CIE spectral distribution curves were obtained with three monochromatic light sources, and these sources are referred to as the primaries for the CIE trichromatic system of color. These original curves have negative lobes, which clearly indicate that some colors cannot be reproduced by the simple use of the three primaries. The current CIE color-matching functions, Fig. 55, are a linear combination of these spectral distributions; they have no negative values and possess other properties that make them easy to use in color-matching calculations.

Using the CIE color-matching function shown in Fig. 55 it is possible to develop a color transformation matrix between the signals captured by a DSC and the values required to drive the voltages of the CRT monitor. The process is as follows. A large array of colors is measured to get their spectral reflectance values (for a given illuminant). These spectral reflectance curves are then used with the CIE color-matching functions to come up with three values (one each for red, green, and blue) that represent how the human observer responds to each of the colors in the array. Next, the spectral emission spectra of the CRT monitor phosphors are used with the CIE color-matching functions to calculate the amount of each required to give the same red, green, and blue values from the original array. Next, the red, green, and blue output values from the DSC are obtained by experiment. This set of values is then regressed with the values of the monitor phosphors to get a 3 × 3 conversion matrix. The net result is that the DSC output values when operated on by the conversion matrix will give the correct values to drive the CRT monitor to produce colors close to the originals. Two examples are given here.
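The regression step can be sketched as a least-squares fit of a 3 × 3 matrix (synthetic data and all names are mine; the normal equations are solved with a small Gaussian elimination):

```python
def solve3(a, b):
    """Solve a 3x3 linear system a.x = b by Gauss-Jordan elimination with pivoting."""
    m = [row[:] + [bi] for row, bi in zip(a, b)]      # augmented matrix
    for c in range(3):
        pivot = max(range(c, 3), key=lambda r: abs(m[r][c]))
        m[c], m[pivot] = m[pivot], m[c]
        for r in range(3):
            if r != c:
                f = m[r][c] / m[c][c]
                m[r] = [x - f * y for x, y in zip(m[r], m[c])]
    return [m[r][3] / m[r][r] for r in range(3)]

def fit_color_matrix(camera, monitor):
    """Least-squares 3x3 matrix M with monitor ~ M . camera (normal equations)."""
    ata = [[sum(c[i] * c[j] for c in camera) for j in range(3)] for i in range(3)]
    rows = []
    for k in range(3):   # one row of M per output (monitor) channel
        atb = [sum(c[i] * m[k] for c, m in zip(camera, monitor)) for i in range(3)]
        rows.append(solve3(ata, atb))
    return rows

# Synthetic check: build monitor values from a known matrix and recover it.
M0 = [[1.2, -0.1, 0.0], [-0.2, 1.1, 0.1], [0.0, -0.3, 1.3]]
camera = [[0.9, 0.1, 0.2], [0.2, 0.8, 0.1], [0.1, 0.3, 0.7], [0.5, 0.5, 0.5]]
monitor = [[sum(M0[i][j] * c[j] for j in range(3)) for i in range(3)] for c in camera]
M = fit_color_matrix(camera, monitor)
print([[round(x, 3) for x in row] for row in M])
```

In practice the color array would contain far more patches than unknowns, so the fit is over-determined and the recovered matrix minimizes the residual color error rather than matching exactly.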

Much of the pioneering work in this area was done for television camera systems. A matrix developed for television monitors for NTSC standards and modern phosphors (Robertson and Fisher, 1985) is given as (an illuminant D65 white point is assumed)

  • equation image(39)

Using the same method, the conversion matrix for an Eastman Kodak DSC and modern CRT phosphors is given by (Martin, 1993)

  • equation image(40)

The differences in the matrices reflect the basic differences in the two imaging systems. For the DSC there is no interference between the red and blue signals, indicating that the red and blue spectral sensitivities of the DSC have good separation. However, there is clear overlap between the red and green and between the green and blue spectral sensitivities.

9 Digital Hard Copy

The images from digital still cameras or images enhanced or manipulated on computers require some form of scanning printer to create high-quality hard copy. There are several types of marking engines (printers) and several types of rendering technologies associated with each of these engines.

9.1 Marking Engines

Marking engines can be segregated into three broad classifications: continuous-tone, multilevel, and binary. Each of these marking engines has its advantages and drawbacks.

9.2 Continuous-tone Printers

Continuous-tone printers provide the highest quality available for digital hard copy. The highest-quality printers use photographic paper as the output medium, exposed by either lasers or CRT spots. The resolution can be as high as 20 lines/mm (500 dpi, dots per inch) with 255 levels. While there are 255 addressable levels, photographic paper may only resolve 64 levels, but this is more than adequate for excellent hard copy.

Thermal dye printers are very popular since they do not require any chemical processing. A typical thermal dye printer has about 300 dpi and can support a full 255 addressable levels. However, like their photographic printer counterparts, they provide only about 64 well-defined levels of density on the final print.

One of the goals of any raster line printer is to reproduce a uniform field that looks like a conventional photographic print; this means that the scan lines should not be visible (Patel, 1966; Biberman, 1973). A DSC image with 500 lines printed to a picture height of 4 in. will have a raster frequency of 5 lines/mm, or a pitch of p = 0.2 mm. From Fig. 56 this requires that the modulation in a uniform area be less than 4%. If the DSC has 1000 lines, then the raster frequency is 10 lines/mm with p = 0.1 mm. This will require a modulation of less than 8%.

Figure 56. Visibility at 305 mm of a raster in a uniform area.

If the raster lines have a Gaussian profile defined by standard deviation σ and pitch p, then the modulation in a uniform area will be a function only of the ratio σ/p. Figure 57 shows a plot of the modulation as a function of σ/p. From this curve it is reasonable to pick σ/p = 0.4 as a good compromise for the two DSCs described above, or

Figure 57. Modulation of a Gaussian raster in a uniform area.

  • σ = 0.4p   (41)

Thus, when designing a raster printing system for continuous-tone images, one should have a pitch about 2.5 times the standard deviation of the Gaussian profile. To improve overall quality it is best to have the raster frequency twice the sampling frequency. This doubling, coupled with Eq. 41, will provide ample safeguard against any visible raster patterns.
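The dependence of raster visibility on σ/p can be verified by sampling one period of a sum of Gaussian raster lines (a sketch; modulation is taken as (max − min)/(max + min), and the sampling parameters are mine):

```python
import math

def raster_modulation(sigma_over_p, lines=41, samples=400):
    """Modulation of a uniform field written as Gaussian raster lines of pitch 1."""
    s = sigma_over_p
    profile = []
    for i in range(samples):
        x = i / samples                       # one period of the raster
        v = sum(math.exp(-((x - k) ** 2) / (2 * s * s))
                for k in range(-(lines // 2), lines // 2 + 1))
        profile.append(v)
    hi, lo = max(profile), min(profile)
    return (hi - lo) / (hi + lo)

for r in (0.3, 0.4, 0.5):
    print(r, round(raster_modulation(r), 3))
```

The modulation falls off very rapidly with σ/p (essentially as exp(−2π²(σ/p)²)), which is why a modest widening of the spot, to σ/p ≈ 0.4, is enough to suppress visible raster structure.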

9.3 Binary Printers

Binary or halftone printers are very popular because of their relatively low cost and high quality. There are both color and black-and-white versions of binary printers based on laser-electrophotographic and ink-jet technology. The laser printers tend to have higher resolution (up to 2400 dpi), but the ink-jet printers (Rezanka and Eschbach, 1996) tend to be considerably less expensive at moderate resolution (up to 1440 × 720 dpi).

All binary printers render a continuous-tone image as a series of small black dots and empty area on the page. Besides high resolution (dpi), the major goal in binary printing is to develop rendering algorithms that create images that look as much as possible like the original continuous-tone image without introducing any distracting artifacts.

The most straightforward approach is to create a halftone dot that matches those created by graphic arts techniques. Consider Fig. 58. The image is made up of many continuous-tone values that have been quantized to 255 levels. Each pixel will be rendered as a dot where the area of the black dot represents the closest possible match to the value of the pixel. Assume that the halftone pixel has an N × N substructure. The number of gray values that can be generated is N², ranging from white (no filled sub-dots) to black (all the sub-dots are filled). A typical laser printer has a resolution of 600 dpi. A typical value for N is 16, providing the required 256 levels for a full-tone scale. At 600 dpi, a 16 × 16 pixel is equivalent to 37.5 pixels/in. This translates to a 300 × 300 pixel image for an 8-in. (20-cm) square format. Most digital images are at least 500 × 750 or larger. A 600 × 600 pixel image printed to the same size at 600 dpi would use an 8 × 8 pixel with 64 gray levels. In the same fashion, a 1200 × 1200 pixel image printed to the same size would use a 4 × 4 pixel with only 16 gray levels. Thus, there is a definite tradeoff between spatial resolution of the image and the gray levels printed at a fixed printer dpi.

Figure 58. A simple digital halftone dot. Each pixel in the image is rendered as an N × N set of binary states.

Instead of simulating a conventional graphic-arts halftone dot, many binary printers use dither patterns to render continuous-tone images. Figure 59 shows four dither patterns in a 4 × 4 pixel format. The digital halftone dots in these four are filled in according to the numbers given in each subpixel. If “0” is black and “16” is white, any given level between them is approximated by filling in all squares (making them white) that are equal to or less than the gray level. Thus, if the gray level is 10, all squares marked 10 or less will be made white. The Bayer (Bayer, 1973) dot is very popular, for it is designed to minimize the amount of visual low-frequency structure in a uniform area. Figure 60 shows four renderings of a uniform gray area.
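Ordered dithering can be sketched with the standard 4 × 4 Bayer index matrix, renumbered 1–16 to match the fill rule described above (the matrix layout is the commonly published one and may differ from the exact numbering in Fig. 59):

```python
# Standard 4x4 Bayer index matrix, renumbered 1..16.
BAYER4 = [[1, 9, 3, 11],
          [13, 5, 15, 7],
          [4, 12, 2, 10],
          [16, 8, 14, 6]]

def dither_tile(level):
    """Render one 4x4 tile of a uniform gray level (0..16): 1 = white, 0 = black.

    A cell is turned white when its matrix entry is <= the gray level, so a
    level-10 area has exactly ten white cells per tile.
    """
    return [[1 if BAYER4[r][c] <= level else 0 for c in range(4)]
            for r in range(4)]

tile = dither_tile(10)
print(sum(sum(row) for row in tile))   # -> 10 white cells
```

Because every value 1–16 appears exactly once in the matrix, each gray level turns on exactly that many cells per tile, and the Bayer ordering scatters them to avoid low-frequency clumps.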

Figure 59. Dither patterns used to create digital halftone dots.

Another class of halftone renderings is based on stochastic processes. One of the major problems with all structured halftones is that when they are imaged by a DSC or a discrete scanner, strong aliasing artifacts are introduced. One way of preventing this is to form a random rendering of the image. There are two general classes of stochastic halftones. The first is a random mask where the value of each pixel is compared with a random threshold value (0 to 255); and if the pixel value equals or exceeds the random mask value the pixel is printed as “white,” otherwise as “black.” The exact nature of the random mask can vary to minimize local clumping of dots as in the case of the “blue noise mask” (Ulichney, 1987; Ulichney, 1988; Mitsa and Parker, 1992), emphasize preferred spatial directions, or confine the power of the random mask to desired spatial-frequency domains. Since a fixed mask is used, the rendering is very rapid.

The second class of stochastic rendering is called error diffusion (Floyd and Steinberg, 1976). This computationally intensive algorithm works in the following manner. A random threshold value is generated for each pixel. If the pixel value is greater than the threshold value, the pixel is printed as “white,” otherwise as “black.” The difference between the actual pixel value (0 to 255) and the printed value (0 or 255) is then calculated; this represents the error in the tone-scale reproduction. This error is then distributed algebraically, by a weighting matrix, over a set of neighboring pixels that have not yet been processed. As the process continues, all the original pixel values are modified before they are thresholded. The result is similar to that shown in the lower-right example of Fig. 60. Error-diffusion rendering is superior to the other forms of stochastic processing. Figure 61 shows three examples of stochastic processing.
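A minimal sketch of error diffusion, using the classic Floyd–Steinberg weights (7, 3, 5, 1)/16 and a fixed mid-scale threshold as an assumption of this sketch; the random per-pixel threshold described above could be substituted without changing the error bookkeeping.

```python
# A sketch of error diffusion using the classic Floyd-Steinberg weights
# (7, 3, 5, 1)/16 and a fixed mid-scale threshold; the random per-pixel
# threshold described in the text could be substituted without changing
# the error bookkeeping.

def error_diffuse(image):
    """Render a 2-D list of 0-255 gray values as a 0/255 binary image."""
    h, w = len(image), len(image[0])
    buf = [[float(v) for v in row] for row in image]  # working copy
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = buf[y][x]
            new = 255 if old >= 128 else 0
            out[y][x] = new
            err = old - new  # tone-scale error at this pixel
            # Distribute the error over unprocessed neighbors.
            for dx, dy, wgt in ((1, 0, 7 / 16), (-1, 1, 3 / 16),
                                (0, 1, 5 / 16), (1, 1, 1 / 16)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h:
                    buf[ny][nx] += err * wgt
    return out

# A uniform mid-gray patch comes out roughly half white, half black,
# with the dots scattered rather than clumped.
patch = [[128] * 8 for _ in range(8)]
white = sum(row.count(255) for row in error_diffuse(patch))
```

Note how the working copy `buf` is modified ahead of the scan position, so every pixel is thresholded only after it has absorbed the errors of its already-processed neighbors, exactly as described above.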

Figure 60. Four digital halftone renderings of a uniform gray scale of value 10 for a range of 16. The bottom right is a stochastic rendering. The central section has been enlarged by a factor of two.

Figure 61. Three stochastic digital halftone renderings. The upper left is the original. The upper right uses a white noise mask. The lower right is rendered by an error-diffusion process. The lower left used a stochastic mask that confines the power to higher frequencies.

Current color ink-jet printers use a combination of multilevel dots and error diffusion to produce images that are close to photographic quality at 600 dpi. These printers use six inks: cyan, magenta, yellow, light cyan, light magenta, and black.

10 Image Quality

Image quality in digital photographic systems can be divided into four basic areas: sharpness, color reproduction, tone scale, and image artifacts. Only sharpness and image artifacts will be considered in this article. The two are related, but each can be treated with its own quality metric.

10.1 System Analysis for Sharpness

It is possible to treat digital photographic systems as linear when considering sharpness (Kriss et al., 1989). The concept of the cascaded modulation-transfer (CMT) acutance (Crane, 1964; Gendron, 1973; Kriss, 1977) was developed for conventional photographic systems. It is used here to describe the sharpness of a DSC system.

Each component of a DSC can be defined by its MTF. Let MTFi(mif) be the MTF of the ith component in the plane of the hard copy, where f is the spatial frequency in the plane of the hard copy. The magnification mi is used to adjust the values of the MTF in the image plane of the DSC to that of the hard copy. The overall system MTF, MTFsystem, is given by

  • MTFsystem(f) = ∏i MTFi(mif)  (42)

System sharpness is related to how a human observer sees the information on the final print (or screen). The MTFeye(f) (Mannos and Sakrison, 1974) is given by (see Fig. 52)

  • MTFeye(f) = 2.6(0.0192 + 0.114f′) exp[−(0.114f′)^1.1]  (43)

where f′ is the spatial frequency in cycles per degree at the eye; at a viewing distance of 300 mm, f′ ≈ 5.2f for f in cycles/mm in the print plane.

This equation is scaled for a viewing distance of 300 mm. The concept of an MTF representing the spatial filtering characteristics of the human visual system is at best an approximation. However, the MTFeye(f) given by Eq. 43 is used in a variety of image-quality metrics and does give results consistent with the responses of observers. How the standard observer responds to the sharpness of an image is given by

  • MTFtotal(f) = MTFsystem(f) MTFeye(f)  (44)

The area under MTFtotal(f) has been shown to be an adequate measure of the system sharpness as seen by a standard observer. It is best to normalize Eq. 44 by the area under MTFeye(f). The response, R, is then defined as

  • R = [∫0∞ MTFtotal(f) df] / [∫0∞ MTFeye(f) df]  (45)

Experiments have shown that observers relate system sharpness in the following way:

  • CMT = 100 + 66 log10(R)  (46)

A CMT value of 100 indicates that the DSC system has an MTFsystem(f) = 1.0 over the entire frequency response of the eye. CMT values above 92 are considered to represent excellent images and CMT values above 85 are considered to be good images. An untrained observer can detect a change of one CMT unit. It should be noted that when either chemical adjacency effects in films or digital enhancement techniques are used in DSC systems, it is possible to have CMT values greater than 100.
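Numerically, the CMT acutance calculation reduces to cascading the component MTFs, weighting by the eye response, integrating, and normalizing. In the sketch below, the eye-MTF constants (a Mannos–Sakrison form rescaled to cycles/mm at roughly a 300-mm viewing distance) and the mapping CMT = 100 + 66 log10(R) are assumptions of this sketch.

```python
# A numeric sketch of the CMT acutance calculation: cascade the component
# MTFs, weight by the eye response, integrate, and normalize. The eye-MTF
# constants (a Mannos-Sakrison form rescaled to cycles/mm at roughly a
# 300-mm viewing distance) and the mapping CMT = 100 + 66 log10(R) are
# assumptions of this sketch.

import math

def mtf_eye(f):
    """Approximate visual response; f in cycles/mm in the print plane."""
    a = 0.6 * f
    return 2.6 * (0.0192 + a) * math.exp(-a ** 1.1)

def cmt_acutance(component_mtfs, fmax=20.0, n=2000):
    """component_mtfs: list of MTFs as functions of print-plane frequency."""
    df = fmax / n
    num = den = 0.0
    for i in range(n):
        f = (i + 0.5) * df
        sys = 1.0
        for m in component_mtfs:
            sys *= m(f)               # cascaded system MTF
        num += sys * mtf_eye(f) * df  # area under the total MTF
        den += mtf_eye(f) * df        # normalizing area (eye alone)
    return 100.0 + 66.0 * math.log10(num / den)

# A perfect system (all component MTFs = 1) scores CMT = 100 by
# construction; any blur pulls the score below 100.
print(round(cmt_acutance([lambda f: 1.0])))  # 100
blur = lambda f: math.exp(-(f / 4.0) ** 2)   # illustrative Gaussian MTF
print(cmt_acutance([blur]) < 100)            # True
```

Because R is a ratio of areas, a system whose MTF exceeds 1.0 over part of the band (edge enhancement) yields R > 1 and hence CMT above 100, consistent with the remark above.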

The nominal DSC system (Kriss, 1990) consists of a taking lens (F/5.6 diffraction-limited MTF), an optical prefilter, a CCD sensor, some form of digital enhancement, a laser printer, and photographic paper. The laser printer is assumed to print at twice the capture resolution for the reasons cited in Sec. 9; printing at twice the taking resolution can add 5 CMT units to a DSC system. When a birefringent optical prefilter is used, the filter is optimized to minimize aliasing. The digital enhancement is optimized for normal viewing of the final print and adds at least 5 CMT units to the overall sharpness. Figure 62 shows all the MTFs for a DSC with a 1000 × 1500 CCD array, each pixel being 10 µm square. Figure 63 shows MTFsystem along with MTFeye; as can be seen, the system response extends well beyond that of the eye. The CMT acutance value for this system (at a 400-mm viewing distance) is CMT = 96, representing an excellent print.

Figure 62. Component MTFs for a 1000 × 1500 DSC. This DSC is a frame-transfer device and does not require any interpolation.

Figure 63. MTFsystem and MTFeye for the MTFs in Fig. 62; CMT = 96.

If the same basic DSC is used as a color camera with a Bayer CFA, the CMT acutance drops to CMT = 87, which is still a good image but not an excellent one. Figures 64 and 65 show the component MTFs and MTFsystem, respectively. Note that the interpolation MTF greatly reduces the quality of the system response. From Fig. 65 it can be seen that the eye response is clearly greater than the system response.

Figure 64. Component MTFs for a 1000 × 1500 color DSC using a Bayer CFA. This DSC is a frame-transfer device and does require interpolation.

Figure 65. MTFsystem for color DSC; CMT = 87.

Figure 66 shows a plot of CMT acutance for a series of different DSCs plotted versus the vertical resolution of the camera. Each of these cameras has the same 10-µm pitch, and all but the interline-transfer camera have 10-µm square pixels; the interline-transfer camera has 5-µm square pixels. The loss in sharpness in the color cameras is due both to the need for an interpolation filter and to the use of an optical prefilter. The viewing distance for the CMT acutance calculations is about 400 mm.

Figure 66. CMT acutance for a series of DSCs.

Figure 67. Potential for aliasing as a function of vertical line resolution for a series of DSCs.

Figure 68. A parametric plot of CMT acutance and the potential for aliasing as a function of vertical line resolution.

10.2 System Analysis for Aliasing and Artifacts

As indicated in Sec. 5.3, one measure of the potential for aliasing and artifacts is the ratio of the power of the aliased signal to that of the nonaliased signal between the system's Nyquist frequencies. The work cited in Sec. 5.3 was a two-dimensional analysis. A one-dimensional analysis, consistent with the CMT acutance analysis above, can also be used to predict the system potential for aliasing accurately. Figure 67 shows a plot of the potential for aliasing as a function of the vertical line resolution of the DSC for the same set of cameras shown in Fig. 66. While the aliasing takes place in the plane of the CCD, the aliased and nonaliased spectra are modified by the rest of the DSC components; the results in Fig. 67 therefore represent the impact of the entire system. It is very clear that the sparse sampling in color cameras (due to the CFA) greatly increases the potential for aliasing. The use of a good optical prefilter can reduce the aliasing in frame-transfer cameras, but the smaller pixel size in interline-transfer cameras makes it more difficult to reduce the potential for aliasing. Even when the best prefilter is used, a frame-transfer color camera has about three times the potential for aliasing of the black-and-white camera.
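A one-dimensional version of this measure can be sketched directly: fold the power of the pre-sampling spectrum back across multiples of the sampling frequency and compare it with the baseband power. The Gaussian MTFs below are illustrative stand-ins for the combined lens/prefilter/pixel-aperture response, not the article's measured curves.

```python
# A one-dimensional sketch of the potential for aliasing: the power of the
# pre-sampling spectrum that folds back below the Nyquist frequency,
# divided by the nonaliased power there. The Gaussian MTFs are
# illustrative stand-ins for the combined lens/prefilter/pixel-aperture
# response.

import math

def aliasing_potential(presample_mtf, f_nyquist, replicas=4, n=4000):
    """Omega = (power folded into |f| < fN) / (unaliased power there)."""
    fs = 2.0 * f_nyquist  # sampling frequency
    df = f_nyquist / n
    non_aliased = aliased = 0.0
    for i in range(n):
        f = (i + 0.5) * df  # baseband frequency
        non_aliased += presample_mtf(f) ** 2 * df
        # Spectral replicas centered at k*fs fold power back onto f.
        for k in range(1, replicas + 1):
            aliased += presample_mtf(k * fs - f) ** 2 * df
            aliased += presample_mtf(k * fs + f) ** 2 * df
    return aliased / non_aliased

sharp = lambda f: math.exp(-(f / 30.0) ** 2)  # weak prefiltering
soft = lambda f: math.exp(-(f / 10.0) ** 2)   # strong prefiltering
print(aliasing_potential(sharp, 20.0) > aliasing_potential(soft, 20.0))  # True
```

The comparison at the end mirrors the tradeoff discussed above: a stronger prefilter suppresses the spectrum beyond Nyquist and so lowers Ω, at the cost of sharpness.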

10.3 System Tradeoffs

The models developed to analyze DSC systems provide ample opportunity to study system tradeoffs; only one example is given here. Figure 68 shows a parametric plot of CMT acutance versus the potential for aliasing as a function of the vertical line resolution. It is clear from this plot that color cameras using a CFA cannot obtain the same quality as a black-and-white camera or a three-CCD color camera. It is critical to have some form of optical prefiltering to lower the potential for aliasing. It is also clear that interline-transfer color cameras will always have more aliasing than their frame-transfer counterparts. While the sharpness of a frame-transfer color DSC will increase greatly with increasing resolution, the potential for aliasing will not decrease as much, and if no optical prefilter is used (as is the case for some very high-resolution cameras) the potential for aliasing will be very high.

Considering the above analysis, it is safe to say that the best DSCs will use three-CCD image sensors. A three-CCD system with a resolution of 750 × 1025 pixels and appropriate digital enhancement can equal the quality of film-based images in the popular 4 × 6-in. format. Enlargements of 16 × 20 in. and above will require a three-CCD camera with resolutions approaching 2000 × 3000 pixels.

Glossary

Aliasing

The introduction of low-frequency signals when high-frequency information is sampled below the Nyquist frequency of an imaging sensor array.

Capacitor Reset Noise

Noise associated with the residual charge on a capacitor.

Cascaded Modulation Transfer (CMT) Acutance

A measure of sharpness for linear imaging systems based on the component MTFs.

CCD Shift Register

A linear array of CCD elements used to shift charges.

Charge-Coupled Device (CCD)

A solid state device that can both store and transport electrons.

Charge-Transfer Efficiency

The fraction of the original charge read out at the end of a shift register.

Charge-Transfer Inefficiency

The fraction of charge that is left behind when a charge packet is transferred from one element of a CCD shift register to the next element.

Chrominance Channels (I, Q)

The color components of a color (video) image.

CIE Color-Matching Functions

A set of three response curves that represent how a standard observer will respond to a color.

Color-Filter Array (CFA)

A well defined pattern of color filters that is integrated with a CCD image array to encode color.

Complementary Metal-on-Silicon (CMOS)

A process that allows the use of p-type and n-type devices on the same silicon substrate.

Dark-Current Noise (NDC)

The noise associated with the variance in the number of thermally generated electrons.

Dark Current

Current due to thermally generated electrons that are captured by the CCD potential wells.

Differential Pulse Code Modulation (DPCM)

A method of compression that operates on the differences between adjacent image (signal) values.

Digital Imaging Chain (DIC)

A series of operations that define a DSC.

Digital Mirror

A microelectronic device consisting of an array of very small mirrors that can be time multiplexed to form a high-resolution color digital-image projector.

Digital Picture Cards (DPC)

Solid state storage cards for DSCs.

Digital Still Camera (DSC)

An electronic camera that uses a CCD to record an image and store it in a digital format.

Digital Watermark

A digital signal, embedded in an image, used as a source of identification.

Discrete Cosine Transform (DCT)

One of several mathematical transforms that expresses an image in terms of its frequency content.

Effective F-Number

The F-number associated with the diffraction-limited MTF of a lens; it is based solely on the lens diameter, the focal length of the lens, and the wavelength of light.

F-Number (F/N)

The number that characterizes the light-gathering properties of a lens; it is given by the ratio of the lens's focal length to the lens's diameter.

Fixed-Pattern Noise (NFPN)

The noise associated with the variance in the sensitivity of the pixels in a CCD array.

Frame-Interline–Transfer Device

An imaging array that combines the properties of a Frame-Transfer Device and an Interline-Transfer Device.

Frame-Transfer Device

A CCD image array that uses the imaging elements to transfer the image electrons to a storage area or an output shift register.

Full-Well Capacity (FWC)

The intrinsic capacity to store electrons in a doped silicon material.

Huffman Coding

The optimum method of assigning code values to a set of data values from a finite memoryless source.

Image Artifacts

Unwanted spatial patterns in an image introduced by aliasing or digital image processing.

Image Rendering

The process by which a continuous-tone image is transformed to a binary image.

Interline-Transfer Device

A photodiode imaging array that uses vertical shift registers to transfer image electrons to an output shift register.

ISO Speed

Conventional photographic speed used to define the proper exposure of film.

Johnson Noise (NJ)

Electronic noise associated with amplifiers.

Joint Photographic Expert Group (JPEG)

A standards committee established to determine compression algorithms for still images.

Luminance Channel (Y)

The black-and-white component of a color (video) image.

Metal-on-Silicon (MOS) Capacitor

A capacitor formed on a doped-silicon substrate used to form and capture photoelectrons.

Microlens Array

A two-dimensional array of small lenses integrated with a CCD image sensor to improve the light-gathering properties of the sensor.

Modulation-Transfer Function (MTF)

The amplitude characterization of a linear system as a function of frequency.

Motion Picture Expert Group (MPEG)

A standards committee established to determine compression algorithms for video images.

NTSC

National Television System Committee.

Nyquist Frequency (fN)

Half the sampling frequency.

Optical Point-Spread Function (OPSF)

The two-dimensional intensity profile of a point of light at infinity imaged in the focal plane of a lens.

Optical Prefilter

An optical element that is used in a DSC to eliminate specific high-frequency information from the image.

Optical-Transfer Function (OTF)

The amplitude and phase characterization of an optical element as a function of spatial frequency.

Polysilicon

A polycrystalline form of silicon that is electrically conductive.

Potential for Aliasing (Ω)

The ratio of the aliased power to the nonaliased power.

Quantum Efficiency (η)

The normalized efficiency with which incident photons create photoelectrons, as a function of the wavelength of light.

Reset Noise (NC)

Noise associated with resetting the output capacitors of a CCD.

Sampling Frequency (fS)

The rate at which a CCD samples an image.

Shot Noise (NS)

Noise due to the quantum nature of light.

Stochastic Mask

A halftone mask based on a random rendering process.

Super Video Graphics Adapter (SVGA)

The standard graphics video display format for computer monitors.

Thermal Dye-Transfer Printer

A printer that forms a continuous image by means of transferring dyes from a donor sheet to a receiving sheet by means of local heating.

Thermal Noise (NT)

The noise generated in the output resistance of a CCD image array.

Uniaxial Crystal

A crystal that has one optical axis.

Visual Response Function (MTFeye)

The effective spatial-frequency response of the human eye.

Works Cited

  • Adams, J. E. (1995), in: C. N. Anagnostopoulos, M. P. Lesser (Eds.), Cameras and Systems for Electronic Photography and Scientific Imaging, SPIE Proceedings Vol. 2416, Bellingham, WA: SPIE.
  • Altman, J. H. (1977), in: H. T. James (Ed.), The Theory of the Photographic Process, 4th ed., New York: Macmillan.
  • Amelio, G. F., Tompsett, M. F., Smith, G. E. (1970), Bell Sys. Tech. J., 49, 593–600.
  • Aoki, M., Ando, F., Ohba, S., Takemoto, I., Nagahara, S., Nakano, T., Kubo, M., Fujita, T. (1982), IEEE Trans. Electron. Dev. ED-29, 745–750.
  • Balk, P., Folberth, O. G. (Eds.) (1986), Solid State Devices, Studies in Electrical and Electronic Engineering Vol. 30, New York: Elsevier.
  • Barbe, D. F. (1975), Proc. IEEE 63, 38–67.
  • Barbe, D. F. (Ed.) (1980), Charge-Coupled Devices, Berlin: Springer-Verlag.
  • Bayer, B. E. (1973), in: Proceedings, IEEE 1973 International Conference on Communications, Vol. 1, New York: IEEE, pp. 26-11 to 26-15.
  • Bayer, B. E. (1976), U.S. Patent No. 3,971,065.
  • Bayer, B. E., Powell, P. G. (1986), Advances in Computer Vision and Image Processing, Vol. 2, New York: JAI Press.
  • Bernard, T. M. (Ed.) (1996), Advanced Focal Plane Arrays and Electronic Cameras, Proceedings Europto Series Volume 2950, Bellingham, WA: SPIE.
  • Biberman, L. M. (1973), Perception of Displayed Information, New York: Plenum Press.
  • Blouke, M. M., (Ed.) (1995), Charge-Coupled Devices and Solid State Optical Sensors V, SPIE Proceedings Vol. 2415, Bellingham, WA: SPIE.
  • Blouke, M. M. (Ed.) (1997), Solid State Sensor Arrays: Development and Applications, SPIE Proceedings Vol. 3019, Bellingham, WA: SPIE.
  • Boyle, W. S., Smith, G. E. (1970), Bell Sys. Tech. J., 49, 587–593.
  • Carr, W. N. (1972), in: R. E. Sawyer, J. R. Miller (Eds.), MOS/LSI Design and Application, New York: McGraw-Hill.
  • Cok, D. R. (1994), in: IS&T Proceedings, 47th Annual Conference, Springfield, VA: IS&T, pp. 380385.
  • Crane, E. (1964), SMPTE J. 73, p. 643.
  • Davis, P. J., Polonsky, I. (1964), in: M. Abramowitz, I. A. Stegun (Eds.), Handbook of Mathematical Functions, National Bureau of Standards Applied Mathematics Series No. 55, Washington, DC: U.S. GPO, Chap. 25.
  • Deguchi, M., Maruyama, T., Yamasaki, F., Hamamoto, T., Izumi, A. (1992), IEEE Trans. Consum. Electron. CE-38, No. 3, 583–589.
  • Dillon, P. L. P., Lewis, D. M., Kaspar, F. G. (1978), IEEE Trans. Electron. Dev. ED-25, No. 2, 102–107.
  • Eastman Kodak, (1992), Performance Specifications, KAF-1600.
  • Englehardt, K., Seitz, P. (1993), Appl. Opt. 32, 3015–3023.
  • Floyd, R. W., Steinberg, L. (1976), Proc. SID 17, 75–77.
  • Françon, M. (1967), in: A. C. S. Van Heel (Ed.), Advanced Optical Techniques, Amsterdam: North-Holland.
  • Furukawa, J., Hiroto, I., Takamura, Y., Wada, T., Keigo, Y., Izumi, A., Nishibori, K., Tatebe, R., Kitayama, S., Shimura, M., Matsui, H. (1992), IEEE Trans. Consum. Electron. CE-38, 595–600.
  • Gendron, R. (1973), SMPTE J. 82, 1009–1015.
  • Gonzalez, R. C., Woods, R. E. (1993), Digital Image Processing: New York: Addison-Wesley.
  • Goodman, J. W. (1968), Introduction To Fourier Optics, New York: McGraw-Hill.
  • Greivenkamp, J. E. (1990), Appl. Opt. 29, 676–684.
  • Hobson, G. S. (1978), Charge-Transfer Devices, New York: John Wiley & Sons.
  • Holm, J. (1996), in: IS&T Proceedings, 49th Annual Conference, Springfield, VA: IS&T, pp. 253–264.
  • Hynecek, J. (1986), IEEE Trans. Electron. Devices ED-33, 850–862.
  • Hynecek, J. (1988), IEEE Trans. Electron. Devices ED-36, 646–652.
  • Janesick, J. R. (1997), in: Blouke, M. M. (Ed.), Solid State Sensor Arrays: Development and Applications, SPIE Proceedings Vol. 3019, Bellingham, WA: SPIE, pp. 70–103.
  • JPEG (1969), Joint Photographic Experts Group-ISO/IEC JTC1/SCS/WG8 CCITT SGVIII, JPEG Technical Specification Revision 8.
  • Kingslake, R. (1978), Lens Design Fundamentals, New York: Academic Press.
  • Kingslake, R. (1983), Optical System Design, New York: Academic Press.
  • Kriss, M. A. (1977), in: H. T. James (Ed.), The Theory of the Photographic Process, 4th ed., New York: Macmillan.
  • Kriss, M. A. (1987), J. Soc. Photogr. Sci. Technol. Jpn. 50, 357–377.
  • Kriss, M. A. (Ed.) (1990), Can AM Eastern ‘90, SPIE Proceedings Vol. 1398, Bellingham, WA: SPIE, pp. 1–13.
  • Kriss, M. A. (1996a), Color Filter Arrays for Digital Electronic Still Cameras, in: 49th Annual Conference, IS&T Proceedings, Springfield, VA: IS&T, pp. 272–278.
  • Kriss, M. A. (1996b), The Photographic Speed of CCD Imaging Devices, in: 49th Annual Conference, IS&T Proceedings, Springfield, VA: IS&T, pp. 264–268.
  • Kriss, M. A. (1997), in: C. N. Proudfoot (Ed.), Handbook of Photographic Science and Engineering, Springfield, VA: IS&T.
  • Kriss, M. A. (1998), in: G. Williams (Ed.), Digital Solid State Cameras: Design and Application, SPIE Proceedings Vol. 3302, Bellingham, WA: SPIE, pp. 56–67.
  • Kriss, M. A., Parulski, K., Lewis, D. M. (1989), in: J. C. Urbach (Ed.), Applications of Electronic Imaging, SPIE Proceedings Vol. 1082, Bellingham, WA: SPIE, pp. 157–184.
  • Lee, T. H., Tredwell, T. J., Burkey, B. C., Kelly, T. M., Khosla, R. P., Losee, D. L., Lo, F. C., Nielson, R. L., McColgin, W. C. (1983), Tech. Dig. IEDM83, 492–496.
  • Manabe, D., Ohta, T., Shimidzu, Y. (1983), Proceedings CICC83, 451–455.
  • Mannos, J. L., Sakrison, D. J. (1974), IEEE Trans. Inf. Theory IT-20, 525–536.
  • Martin, G. J. (1993), in: H. Marv, R. L. Nielsen (Eds.), Cameras, Scanners, and Image Acquisition Systems, SPIE Proceedings Vol. 1901, Bellingham, WA: SPIE, pp. 120–135.
  • Mitsa, T., Parker, K. J. (1992), J. Opt. Soc. Am. A 9, 1920–1929.
  • Miura, H., Ohno, S., Caleca, S. (1993), IS&T's 46th Annual Conference, Proceedings IS&T, Springfield, VA: IS&T, pp. 29–33.
  • Miyake, I., Iwabe, K. (1995), in: SPSTJ 70th Anniversary Symposia on Fine Imaging, Tokyo: Society for Photographic Science and Technology of Japan, pp. 77–80.
  • Morimoto, Y., Sato, K., Sakakibara, K., Naruto, H., Yamanaka, M., Okano, Y., Ishida, T. (1995), in: SPSTJ 70th Anniversary Symposia on Fine Imaging, Tokyo: Society for Photographic Science and Technology of Japan, pp. 73–76.
  • MPEG-I (1990), ISO/IECJTC1/SC2/WG11. Draft: Information Technology—Coding of Moving Pictures and Associated Audio for Digital Storage Media up to about 1.5 Mbits/s.
  • MPEG-II (1992), ISO/IEC JTC1/SC2/WG11, Draft: Information Technology-List of Requirements for MPEG-2 Video.
  • Parulski, K. (1985), IEEE Trans. Electron. Devices ED-32(8), p. 1381.
  • Patel, A. S. (1966), J. Opt. Soc. Am. 56, 689–790.
  • Peddie, J. (1993), High-Resolution Graphic Display Systems, New York: Windcrest/McGraw-Hill.
  • Rabbani, M., Jones, P. W. (1991), Digital Image Compression Techniques, Tutorial Texts in Optical Engineering, Vol. 7, Bellingham, WA: SPIE.
  • Rao, K. R., Yip, R. (1990), Discrete Cosine Transform, New York: Academic Press.
  • Rezanka, I., Eschbach, R. (Eds.) (1996), Recent Progress in Ink Jet Technologies, Springfield, Virginia: IS&T.
  • Robertson, A. R., Fisher, J. F. (1985), in: K. B. Benson (Ed.), Television Engineering Handbook, New York: McGraw-Hill.
  • Sadashige, K. (1987), SMPTE J. 96, 180–185.
  • Sakai, A., Ichimura, E., Ohno, S. (1994), in: 47th Annual Conference/ICPS, IS&T Proceedings, Springfield, VA: IS&T, pp. 658–660.
  • Saleh, B. E. A., Teich, M. C. (1991), Fundamentals of Photonics, New York: John Wiley & Sons.
  • Sangster, F. L. J., Teer, K. (1969), J. Solid-State Circuits SC-4, 131.
  • Schroder, D. K. (1990), Advanced MOS Devices, Reading, MA: Addison-Wesley.
  • Sequin, C. H., Tompsett, M. F. (1975), Charge Transfer Devices, New York: Academic Press.
  • Shannon, C. E. (1949), Proc. IRE 37, 10–21.
  • Takizawa, S., Kotaki, H., Saito, K., Sugiki, T., Takemura, Y. (1983), IEEE Trans. Consum. Electron. CE-29, 358–364.
  • Teranishi, N., Mutoh, N. (1986), IEEE Trans. Electron. Devices ED-33, 1696.
  • Theuwissen, A. J. P. (1995), Solid-State Imaging with Charge-Coupled Devices, Boston: Kluwer Academic Publishers.
  • Thorpe, L., Tamura, E., Iwasaki, T. (1988), SMPTE J. 97, 378–387.
  • Tredwell, T. (1985), in: 1985 International Conference on Solid State Sensors and Actuators, New York: IEEE, pp. 424–428.
  • Tsumura, N., Tanaka, T., Haneishi, H., Miyake, Y. (1997), in: IS&T's 50th Annual Conference, IS&T Proceedings, Springfield, VA: IS&T, pp. 373–376.
  • Ulichney, R. (1988), Proc. IEEE 76, 65–79.
  • Ulichney, R. (1987), Digital Halftoning, Cambridge, MA: MIT Press.
  • Weimer, P. (1975), Adv. Electron. Electron Phys. 37, 181–262.
  • Whittaker, E. T. (1915), Proc. R. Soc. Edinburgh A, 35, 181.
  • Wyszecki, G., Stiles, W. S. (1982), Color Science: Concepts and Methods, Quantitative Data and Formulae, New York: John Wiley & Sons.
  • Yoshida, H. (1995), in: SPSTJ 70th Anniversary Symposia on Fine Imaging, Tokyo: Society for Photographic Science and Technology of Japan, pp. 81–84.
  • Yoshida, H. (1996), in: SPSTJ Symposium on Digital Photography ‘96, Tokyo: Society for Photographic Science and Technology of Japan, pp. 75–78.

Further Reading

  • Holst, G. C. (1996), CCD Arrays, Cameras and Displays, Bellingham, WA: SPIE.
  • Hunt, R. W. G. (1987), The Reproduction of Color in Photography, Printing and Television, 4th ed., London: Fountain Press.
  • James, T. H., (Ed.) (1977), The Theory of the Photographic Process, 4th ed., New York: Macmillan.
  • Kang, H. R. (1997), Color Technology for Electronic Imaging Devices, Bellingham, WA: SPIE.
  • Landy, M. S., Movshon, J. A. (Eds.) Computational Models of Visual Processing, Cambridge, MA: MIT Press.
  • Proudfoot, C. N. (Ed.) (1998), Handbook of Photographic Science and Engineering, 2nd ed., Springfield, VA: IS&T.
  • Schroder, D. K. (1990), in: G. W. Neudeck, R. F. Pierret (Eds.), Advanced MOS Devices, Modular Series on Solid State Devices, Reading, MA: Addison Wesley.
  • Theuwissen, A. J. P. (1995), Solid-State Imaging with Charge-Coupled Devices, Boston: Kluwer Academic Publishers.
  • Wandell, B. A. (1995), Foundations of Vision, Sunderland, MA: Sinauer Associates, Inc.
  • Wyszecki, G., Stiles, W. S. (1992), Color Science: Concepts and Methods, Quantitative Data and Formulae, 2nd ed., New York: John Wiley & Sons.