CCD (Pixel) and Image Processing Basics

Image processing refers to the ability to capture objects on a two-dimensional plane. This has led to image processing being widely used in automated inspections as an alternative to visual inspections. This section introduces CCD (pixel) sensors—the foundation of image processing—and image processing basics.

CCD image sensor

A digital camera has almost the same structure as that of a conventional (analog) camera, but the difference is that a digital camera comes equipped with an image sensor called a CCD. The image sensor is similar to the film in a conventional camera and captures images as digital information, but how does it convert images into digital signals?

CCD stands for Charge Coupled Device, a semiconductor element that converts images into digital signals. It measures approximately 1 cm in both height and width and consists of small pixels arranged in a grid.

When taking a picture with a camera, the light reflected from the target passes through the lens and forms an image on the CCD. When a pixel on the CCD receives this light, an electric charge corresponding to the light intensity is generated. The electric charge is converted into an electric signal to obtain the intensity value (gray level) of the light received by each pixel.

1/1.8-inch (approx. 9 mm)

This means that each pixel is a sensor that detects light intensity (a photodiode), and a 2-million-pixel CCD is a collection of 2 million photodiodes.

A photoelectric sensor can detect the presence or absence of a target of a specified size at a specified location. A single sensor, however, is not effective for more complicated applications such as detecting targets in varying positions, detecting and measuring targets of varying shapes, or performing overall position and dimension measurements. The CCD, which is a collection of hundreds of thousands to millions of sensors, greatly expands the range of possible applications, including the four major application categories on the first page.

Use of pixel data for image processing

This last section of the guide briefly explains how the light intensity received by each pixel is converted into usable data and then transferred to the controller for processing.

Individual pixel data (in the case of a standard black-and-white camera)

Image of 256 brightness levels

In many vision sensors, each pixel transfers data in 256 levels (8 bits) according to the light intensity it receives. In monochrome (black-and-white) processing, black is defined as “0” and white as “255”, which allows the light intensity received by each pixel to be converted into numerical data. This means that every pixel of a CCD has a value between 0 (black) and 255 (white). For example, a gray exactly halfway between black and white is converted into “127”.
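
As a rough sketch of this representation, the short Python/NumPy example below builds a tiny monochrome image as an array of 8-bit values; the pixel values themselves are made up purely for illustration.

```python
import numpy as np

# Hypothetical 4 x 4 monochrome image: each pixel holds one 8-bit value,
# 0 = black, 255 = white, 127 = roughly mid-gray.
image = np.array([
    [  0,  30,  90, 255],
    [ 30,  30,  90, 200],
    [ 90,  90, 127, 200],
    [255, 200, 200, 255],
], dtype=np.uint8)

print(image.min(), image.max())  # values always stay within 0-255
print(image[2, 2])               # the mid-gray pixel -> 127
```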

An image is a collection of 256-level data

Image data captured with a CCD is a collection of data from the pixels that make up the CCD, and each pixel's data is reproduced as 256-level contrast data.

  1. Raw image
  2. When the image on the left is represented with 2500 pixels
  3. The eye is enlarged and represented as 256-level data. The eye itself has a value of 30, which is almost black, while the surrounding area has a value of 90, which is brighter.

As in the example above, image data is represented with a value between 0 and 255 per pixel. Image processing finds features in an image by performing a variety of calculations on this per-pixel numerical data, as shown below.

Example: Stain / Defect inspection

The inspection area is divided into small areas called segments and the average intensity data (0 to 255) in the segment is compared with that of the surrounding area. As a result of the comparison, spots with more than a specified difference in intensity are detected as stains or defects.

The average intensity of a segment (4 pixels x 4 pixels) is compared with that of the surrounding area. Stains are detected in the red segment in the above example.
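
A minimal sketch of this segment-comparison idea is shown below in Python/NumPy. The segment size, intensity threshold, and 8-neighborhood definition are assumptions chosen for illustration, not the exact algorithm of any particular vision system.

```python
import numpy as np

def find_stains(image, seg=4, threshold=20):
    """Flag segments whose average intensity (0-255) differs from the
    average of the surrounding segments by more than `threshold`."""
    h, w = image.shape
    rows, cols = h // seg, w // seg
    # Average intensity of each seg x seg segment.
    means = (image[:rows * seg, :cols * seg]
             .reshape(rows, seg, cols, seg)
             .mean(axis=(1, 3)))

    stains = []
    for r in range(rows):
        for c in range(cols):
            # Segments bordering (r, c), excluding the segment itself.
            block = means[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            if block.size < 2:
                continue
            surround = (block.sum() - means[r, c]) / (block.size - 1)
            if abs(means[r, c] - surround) > threshold:
                stains.append((r, c))  # (row, column) of a suspect segment
    return stains

# Invented test image: uniform gray background with one darker 4 x 4 patch.
img = np.full((32, 32), 90, dtype=np.uint8)
img[12:16, 12:16] = 30
print(find_stains(img))  # -> [(3, 3)]
```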

Summary of CCD and image processing basics

Machine vision can detect areas (number of pixels), positions (points of change in intensity), and defects (changes in intensity) using the 256-level intensity data from each pixel of a CCD image sensor. By selecting systems with higher pixel counts and higher speeds, you can easily expand the number of possible applications for your industry.
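
As a rough illustration of how such measurements reduce to arithmetic on the 0-to-255 pixel values, the sketch below counts pixels above a binarization threshold (area) and locates the first intensity step along one scan line (position). The image contents, threshold, and scan line are invented for the example.

```python
import numpy as np

# Hypothetical 8-bit image: dark background (intensity 40) with a
# brighter rectangular part (intensity 200) placed on it.
image = np.full((100, 100), 40, dtype=np.uint8)
image[30:70, 20:60] = 200

threshold = 128  # illustrative binarization level

# Area: count the pixels brighter than the threshold.
area_in_pixels = int(np.count_nonzero(image > threshold))  # 40 * 40 = 1600

# Position: along one horizontal scan line, the column where the
# intensity first steps across the threshold (a simple edge position).
profile = image[50, :] > threshold
edge_position = int(np.argmax(profile))  # first bright column -> 20
```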

The next topic will be “lenses and lighting methods.” Because image processing detects changes in intensity data through calculations, a clear image must be captured to ensure stable detection. The next guide will cover the lenses and lighting methods necessary to obtain a clear image.