Understanding how sensors operate is important because, ultimately, the resolution and quality of your digital image are largely determined by the solid-state capture array. Certainly, the lens that focuses the image on the sensor plays an equal role, but optical technology is fairly mature, dating back to the invention of the first magnifying glasses (used for reading) in the 13th century. Even fairly recent developments, such as aspheric (non-spherical) lenses and optics designed especially for digital cameras, involve little more than refined application of well-understood principles.
Digital sensors are a whole new ballgame. The first CCD sensors were created around 30 years ago, and Kodak introduced the first megapixel sensor (with an incredible 1.4 million pixels) in 1986. The technology has improved in several directions, the most important of which for digital photographers is the increase in resolution that has come hand in hand with a huge reduction in price. We now have sensors with 16 million pixels or more, and the cost to produce them has dropped enough that cameras that can capture 6 to 8 megapixels or more can be purchased for much less than $1,000. During the life of this book, I fully expect the pixel counts to double while the cost is cut in half.
The most interesting developments seem to be coming in technology like that used in the Foveon X3 sensor, the first sensor capable of capturing all three primary colors of light at every pixel position. As odd as it might seem, standard digital camera sensors grab only part of a picture with each exposure, using a mathematical process called interpolation to make educated guesses about the missing pixels. This will all become clear after a bit of explanation.
Figure 2.9 shows a six-by-six pixel section of a CCD or CMOS sensor. The full sensor for a 6-megapixel camera would have something like 3,000 columns and 2,000 rows of pixels, but the array essentially would be identical to the section in the illustration. For the figure, I've broken the sensor section apart into two layers. The gray layer on the bottom is the actual photosites that capture photons for each pixel's information. The colorful layer on top consists of colored filters that each pass red, green, or blue light and block the other colors. Because of the filters, each of the six million pixels in the sensor can register only the amount of one of these three colors at its position.
Figure 2.9. A typical sensor consists of a sensor array overlaid with a series of filters arranged in a mosaic pattern.
Of course, a pixel designated as green might not be lucky enough to receive green light. Perhaps that pixel should have registered red or blue instead. Fortunately, over a 6-million pixel range, enough green-filtered pixels will receive green light, red-filtered pixels red light, and blue-filtered pixels blue light that things average out with a fair degree of accuracy. Algorithms built into the camera can look at surrounding pixels and calculate with some precision what each pixel should be. Those are the pixels saved when an image is stored as a JPEG or TIF file on your memory card. (Saving the uninterpolated RAW data is another option, which I'll explain in Chapter 4.)
For reasons shrouded in the mists of color science, the pixels in a sensor array are not arranged in a strict red-green-blue alternation, as you might expect. Instead, the pixels are laid out in what is called a Bayer pattern, which you can see in Figure 2.9: one row that alternates green and red filters, followed by a row that alternates green and blue filters. Green is "over-represented" because of the way our eyes perceive light: We're most sensitive to green illumination. That's why monochrome monitors of the computer dark ages were most often green-on-black displays. Some sensors alternate true green pixels with a sort of blue-green-sensitive pixel called emerald in an attempt to provide even better color correction.
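To make the layout concrete, here's a minimal Python sketch of the same six-by-six Bayer mosaic shown in Figure 2.9. This illustrates the pattern only; the letters are simply channel labels, not real filter hardware.

```python
# Build the 6x6 Bayer color-filter layout: rows alternate
# green/red and blue/green, so half of all photosites are green.
ROWS, COLS = 6, 6

def bayer_filter(row, col):
    """Return the filter color at one photosite."""
    if row % 2 == 0:                        # green/red rows
        return 'G' if col % 2 == 0 else 'R'
    return 'B' if col % 2 == 0 else 'G'     # green/blue rows

pattern = [[bayer_filter(r, c) for c in range(COLS)] for r in range(ROWS)]
for row in pattern:
    print(' '.join(row))

counts = {color: sum(row.count(color) for row in pattern)
          for color in 'GRB'}
print(counts)   # {'G': 18, 'R': 9, 'B': 9} -- half green, as described
```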
The arrangement used is called a mosaic or Bayer pattern, and one result is that a lot of the light reaching the sensor is wasted. Only about half of the green light reaching the sensor is captured, because each row consists of half green pixels and half red or blue. Worse, only 25 percent of the red and blue light is registered. Figure 2.10 provides a representation of what is going on. In our 36-pixel array segment, there are just 18 green-filtered photosites and 9 each of red and blue. Because so much light is not recorded, the sensitivity of the sensor is reduced (requiring that much more light to produce an image), and the true resolution is drastically reduced. Your digital camera, ostensibly with 6 megapixels of resolution, actually captures three separate images measuring 3 megapixels (of green) and 1.5 megapixels each (of red and blue).
Figure 2.10. A sensor's mosaic captures 50 percent of the green information, but only 25 percent of the red and blue.
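The "educated guessing" mentioned earlier can be illustrated with a deliberately simplified sketch. Actual in-camera demosaicing algorithms are far more sophisticated than this; the version below just averages the four green neighbors that surround every red or blue photosite in a Bayer layout.

```python
def estimate_green(raw, row, col):
    """Estimate the missing green value at a red or blue photosite.
    In a Bayer layout, the four edge-adjacent neighbors of every
    red/blue site are green, so a simple average works (edge pixels
    omitted for brevity; real demosaicing is far more elaborate)."""
    return (raw[row - 1][col] + raw[row + 1][col] +
            raw[row][col - 1] + raw[row][col + 1]) / 4

# Toy single-channel readout; each value arrived through one filter.
raw = [[12, 200, 14, 205],
       [90, 13, 95, 15],
       [11, 210, 12, 202],
       [88, 14, 92, 16]]
print(estimate_green(raw, 1, 2))   # green estimate at a blue site
```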
The Unfulfilled Promise of Foveon Technology
The Foveon sensor is a CMOS device that works in a dramatically different way, but to date it hasn't been perfected enough to threaten to displace traditional Bayer-arrayed CCD and CMOS sensors. It's long been known that the various colors of light penetrate silicon to varying depths. So, the Foveon device doesn't use a Bayer filter mosaic like that found in a conventional sensor. Instead, it uses three separate layers of photodetectors, shown in Figure 2.11 colored blue, green, and red. All three colors of light strike each pixel in the sensor at the appropriate strength, as reflected by or transmitted through the subject. The blue light is absorbed by, and registers in, the top layer. The green and red light continue through the sensor to the green layer, which absorbs and registers the amount of green light. The remaining red light continues on and is captured by the bottom layer.
Figure 2.11. The Foveon sensor can record red, green, and blue light at each pixel position, with no interpolation needed.
So, no interpolation (called demosaicing) is required. Without the need for this complex processing step, a digital camera can potentially record an image much more quickly. Moreover, the Foveon sensor can have much higher resolution for its pixel dimensions, and, potentially, less light is wasted. Of course, as a CMOS sensor, the Foveon device is less sensitive than a CCD sensor to begin with, so photons are wasted in a different way. Compare its coverage pattern, shown in Figure 2.12, with that of the conventional mosaic in Figure 2.10.
Figure 2.12. The Foveon sensor captures each color at every pixel position, using three layers of photodetectors.
To date, the Foveon sensor is used in only a few cameras, including two models from Sigma. Although the vendors bill them as "10-megapixel" cameras, they are, in truth, 3.3-megapixel models. Because no interpolation has to be done to calculate the true color value of pixels, a camera equipped with a Foveon sensor does provide higher effective resolution, even if the absolute number of pixels is lower. So far, camera buyers have not flocked to this emerging technology.
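The megapixel math behind those competing claims is easy to verify; here it is as a quick sketch, using the approximate figures above.

```python
# A Foveon-style sensor stacks three photodetectors (blue, green,
# red) at each pixel location, so vendors multiply locations by
# layers when quoting "megapixels."
locations_mp = 3.3    # approximate pixel positions, in millions
layers = 3            # stacked photodetectors per position
print(f"{locations_mp * layers:.1f} million photodetectors")  # ~9.9, sold as "10 MP"
```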
CMOS vs CCD
After reading the previous section, you might conclude that CCD sensors are headed for the dustbin and that, in the very near future, we'll all be using the next generation of CMOS or Foveon-style imagers. In the real world, while the newest technologies have a lot of theoretical advantages, there are currently some disadvantages that need to be overcome. The next section discusses some of the techie issues. If you're not interested in the nuts and bolts, you can skip ahead.
CCD and CMOS sensors have been duking it out for the past several years. As recently as a few years ago, most digital cameras, especially the highest-quality models, used CCDs. As I mentioned earlier, CMOS devices were most often used in lower-end cameras. Today, that distinction is no longer valid. Canon, which uses a type of CMOS sensor in its higher-end cameras, and Foveon, which produces the sensor used initially in the Sigma SLR, have mastered the art of coaxing high-quality images from CMOS sensors. Even Nikon has joined the fold with its $5,000 pro-model D2X, and the recently discontinued Kodak DCS Pro/n and Pro/c cameras used CMOS sensors, too.
The two types of sensors manipulate the light they capture in different ways. A CCD is an analog device. Each photosite is a photodiode that has the ability (called capacitance) to store an electrical charge that accumulates as photons strike the cell. The design is a simple one, requiring no logic circuits or transistors dedicated to each pixel. Instead, the accumulated image is read by applying voltages to the electrodes connected to the photosites, causing the charges to be "swept" to a readout amplifier at the corner of the sensor chip.
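Here's a toy model of that sweep. It's purely conceptual: real CCDs shift analog charge packets under clocked voltages, not values in software.

```python
# Toy model of CCD readout: charges can't be addressed directly;
# every one is "swept" in sequence to a single readout amplifier
# at the corner of the chip.
def ccd_readout(sensor):
    values = []
    for row in sensor:                     # shift one row at a time
        serial_register = list(row)        # into the serial register,
        while serial_register:             # then clock charges out
            values.append(serial_register.pop(0))  # one by one
    return values

sensor = [[101, 98, 120],
          [99, 130, 97],
          [105, 96, 111]]
print(ccd_readout(sensor))    # all nine charges, strictly in order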
A CMOS sensor, on the other hand, includes transistors at each photosite, and every pixel can be read individually, much like a computer's random access memory (RAM) chip. It's not necessary to sweep all the pixels to one location, and, unlike CCD sensors, with which all information is processed externally to the sensor, each CMOS pixel can be processed individually and immediately. That allows the sensor to respond to specific lighting conditions as the picture is being taken. In other words, some image processing can be done within the CMOS sensor itself, something that is impossible with CCD devices.
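And, for contrast, a matching toy model of CMOS-style addressing, again a conceptual sketch rather than real sensor circuitry.

```python
# Toy model of CMOS readout: each photosite has its own transistors,
# so pixels can be addressed like RAM -- including reading just a
# small window without sweeping the whole array past one amplifier.
def cmos_read_window(sensor, top, left, height, width):
    return [row[left:left + width] for row in sensor[top:top + height]]

sensor = [[101, 98, 120],
          [99, 130, 97],
          [105, 96, 111]]
print(sensor[2][0])                          # one pixel, directly
print(cmos_read_window(sensor, 1, 1, 2, 2))  # or a 2x2 region of interest
```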
However, the chief advantage of CMOS technology is that CMOS chips are less expensive to produce. They can be fabricated using the same kinds of processes used to create most other computer chips. CCDs require special, more expensive, production techniques. So, in the war between CCD and CMOS, there is quite an array of pros and cons facing each type of sensor. Things become even more interesting in the case of the Foveon chip, which has some additional limitations that I haven't mentioned yet.
First, you'll recall that light of all three primary colors strikes the Foveon chip, passing through the blue, green, and red layers. Some light is absorbed in each layer, so a much smaller amount reaches the bottom layer, providing reduced color information. In addition, a phenomenon called blooming, or the spreading of light from one layer to another, can occur. If one layer is overexposed, the excess light can "bleed" into the layer below. When you add these to the reduced sensitivity and extra noise of CMOS chips, you can see that, as promising as the Foveon sensor is, there is plenty of room for improvement before digital camera vendors abandon CCD technology.
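A toy absorption cascade shows why the bottom layer is starved for signal. The 40 percent figure here is invented purely for illustration; real absorption in silicon varies strongly with wavelength and depth.

```python
# Hypothetical absorption cascade: each layer soaks up a fraction
# of whatever light reaches it, so deeper layers see less and less.
photons = 1000.0
for layer in ('top (blue)', 'middle (green)', 'bottom (red)'):
    captured = photons * 0.40
    photons -= captured
    print(f"{layer} layer: captured {captured:.0f}, {photons:.0f} pass deeper")
```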
There are other characteristics of sensors that are relevant and interesting, such as the infrared sensitivity that's inherent in CCD sensors. Indeed, camera vendors must install infrared-blocking filters in front of sensors, or include a component called a hot mirror to reflect infrared, to provide a more accurate color image. Luckily (for the serious photographer), enough infrared light sneaks through that it's possible to take some stunning infrared photos with many digital cameras. That's a capability we're going to have a lot of fun with in Chapter 7, "Scenic Photography."
A relatively recent development is the "4/3" standard proposed by Kodak and Olympus, which would establish a common 4:3 aspect ratio for sensors used in digital cameras, along with a standard sensor size and back-focus distance. If adopted, it would mean that, among other things, lenses for digital cameras would perform similarly regardless of which camera they were used with.