Thursday, March 28, 2024

Optronic Sensors for Imaging (Part 2 of 6)

By Dr Anil Kumar Maini & Nakul Maini


Imaging sensors are an important component of a wide range of defence systems and are projected to play a growing role in the coming years. This article discusses CCD, CMOS and Ladar sensors in detail.

The two most common types of imaging sensors with potential military applications are charge-coupled device (CCD) sensors and complementary metal-oxide semiconductor (CMOS) sensors. The ladar sensor, which employs a two-dimensional array of avalanche photodiodes, is another important imaging sensor and finds application in precision-guided weapon seeker systems.

Both CCD and CMOS sensors use a two-dimensional array of thousands to millions of discrete pixels. Light falling on each pixel generates free electrons, and the number of electrons, and hence the quantum of charge, depends on the intensity of the impinging photons. The two sensor types differ in the way this charge is converted into a voltage and subsequently read out of the chip for further processing.


Charge-coupled device

A charge-coupled device (CCD) is basically an array of thousands to millions of light-sensitive elements, called pixels, etched onto a silicon surface. Each pixel is a buried-channel MOS capacitor.

CCDs are typically fabricated on a p-type substrate, and a buried channel is implemented by forming a thin n-type region on its surface. A silicon dioxide layer is grown on top of the n-type region, and an electrode, also called a gate, on top of the insulating silicon dioxide completes the MOS capacitor. The electrode could be metal, but is more likely to be a heavily doped polycrystalline silicon conducting layer (Fig. 1). The sensor surface is not actually flat; it has tiny cavities (like wells) that trap the incoming light and allow it to be measured. Each of these wells or cavities is a pixel.

Fig. 1: MOS capacitor

The size of a CCD is specified in megapixels. The megapixel value is computed by multiplying the number of pixels in a row by the number of pixels in a column. For example, 1000 pixels in a row and 1000 pixels in a column make a 1.0-megapixel CCD chip.
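As a quick illustration, the snippet below (plain Python, using the same hypothetical 1000x1000 array) performs this calculation:

# Hypothetical sensor dimensions: pixels per row and pixels per column
rows, cols = 1000, 1000
megapixels = rows * cols / 1_000_000
print(f"{megapixels:.1f} megapixel")   # prints: 1.0 megapixel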

When the light reflected off the target to be imaged is incident on the array of pixels, the impinging photons generate free electrons in the region underneath the pixels. To make sure that these free electrons do not recombine with holes and dissipate as heat, the electrons underneath each pixel are held in place by applying a positive bias voltage to the pixel.

If all pixels of the sensor array are exposed to light for the same time, the number of electrons, and hence the quantum of charge, held under a given pixel varies directly with the luminous intensity that pixel is exposed to. This charge pattern represents the light pattern falling on the device. The charge is read out by suitable electronics and then converted into a digital bit pattern that can be understood by and stored in a computer. This digital bit pattern then represents the image.
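The conversion from light pattern to digital bit pattern can be sketched in software. The model below is a minimal sketch assuming a simple linear response; the quantum efficiency, full-well capacity and ADC resolution used are purely illustrative assumptions, not figures from the article:

import numpy as np

QE = 0.6            # assumed quantum efficiency: fraction of photons converted to electrons
FULL_WELL = 20000   # assumed maximum number of electrons a pixel can hold
ADC_BITS = 8        # assumed resolution of the analogue-to-digital converter

def expose(photon_counts):
    # Charge collected per pixel, limited by the full-well capacity
    electrons = np.minimum(photon_counts * QE, FULL_WELL)
    # Digitise the charge (via an equivalent voltage) into values 0..255
    return np.round(electrons / FULL_WELL * (2**ADC_BITS - 1)).astype(np.uint8)

# A small test pattern of photon counts standing in for the light falling on the array
photons = np.array([[1000, 5000, 20000, 40000]] * 4)
print(expose(photons))   # brighter pixels yield larger digital values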

The charge held in the bins corresponding to the different pixels is read out, converted into equivalent analogue voltages and then digitised with the help of an analogue-to-digital converter. Charge on a CCD is shifted in two stages: a parallel shift that moves the charge pattern row by row towards the serial register, followed by a serial shift that moves each row, one charge packet at a time, to the output node and the measurement electronics. One way to make the read-out process faster is to split the image into two or four sections, each with its own output, with every section going through the same sequence of parallel and serial shifts. Fig. 2 shows a typical CCD sensor with a single-point read-out. Fig. 3 shows a typical packaged CCD chip.

Fig. 2: CCD array
Fig. 3: Packaged CCD chip
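A minimal software analogy of this read-out sequence is sketched below (plain Python; the 3x4 charge values and the orientation of the shifts are illustrative assumptions, not taken from Fig. 2):

# Illustrative 3x4 array of charge packets (arbitrary units); the last row is
# the one adjacent to the serial register.
charge = [[9, 12,  3,  7],
          [4,  8, 11,  2],
          [6,  1, 10,  5]]

read_out = []
while charge:
    serial_register = charge.pop()      # parallel shift: the row next to the serial
                                        # register is transferred into it
    while serial_register:
        packet = serial_register.pop()  # serial shift: one charge packet at a time
        read_out.append(packet)         # reaches the output node and is digitised

print(read_out)
# [5, 10, 1, 6, 2, 11, 8, 4, 7, 3, 12, 9] -- the order in which packets are measured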

The CCD sensor described in the previous paragraphs can only determine the number of photons collected by each pixel and therefore carries no information about the wavelength or colour of those photons. As a result, the CCD sensor records the image only in monochrome.

In order to record images in full colour, a filter array is bonded to the sensor substrate. A common choice is the Bayer colour filter array (CFA), which comprises an arrangement of red, green and blue filters to capture colour information. It is made up of alternating rows of red/green and blue/green filters and is sometimes called an RGBG filter.

Fig. 4 shows a CCD array with a Bayer filter bonded to its surface. A particular colour filter allows photons of only that colour to pass through to the pixel below it, as illustrated in Fig. 5. The number of photons collected by each pixel in this case therefore corresponds to the colour allowed by the filter above it. In the Bayer pattern there are twice as many green squares as red or blue squares, because the human eye is much more sensitive to green light than to red or blue and has much greater resolving power in that range.

Fig. 4: CCD array with Bayer’s filter
Fig. 5: Colour discrimination by Bayer’s filter
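To make the 2:1:1 ratio of green to red to blue filters concrete, the sketch below builds the repeating Bayer mosaic for a small array (plain Python with NumPy; the 4x4 size and the corner on which the pattern starts are illustrative assumptions):

import numpy as np

def bayer_mask(rows, cols):
    # One common variant of the Bayer layout: G/R on even rows, B/G on odd rows
    mask = np.empty((rows, cols), dtype='<U1')
    mask[0::2, 0::2] = 'G'
    mask[0::2, 1::2] = 'R'
    mask[1::2, 0::2] = 'B'
    mask[1::2, 1::2] = 'G'
    return mask

print(bayer_mask(4, 4))
# Counting the entries gives 8 G, 4 R and 4 B: twice as many green filters as red or blue

In an actual camera, the two colour values missing at each pixel are later estimated from neighbouring pixels (a step known as demosaicing) so that a full-colour image can be reconstructed.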
