Convolution Kernels - Java Tutorial
Many of the most powerful image processing algorithms rely upon a process known as convolution (or spatial convolution), which can be used to perform a wide variety of operations on digital images. Among the image processing techniques that these algorithms make available to microscopists are noise reduction through spatial averaging, sharpening of image details, edge detection, and image contrast enhancement. The choice of convolution kernel is paramount in determining the nature of the convolution operation.
The tutorial initializes with a randomly selected specimen image (captured in the microscope) appearing in the left-hand window entitled Specimen Image. Each specimen name includes, in parentheses, an abbreviation designating the contrast mechanism employed in obtaining the image. The following nomenclature is used: (FL), fluorescence; (BF), brightfield; (DF), darkfield; (PC), phase contrast; (DIC), differential interference contrast (Nomarski); (HMC), Hoffman modulation contrast; and (POL), polarized light. Visitors will note that specimens captured using the various techniques available in optical microscopy behave differently during image processing in the tutorial.
Positioned on the right of the Specimen Image window is the Output Image window, which displays the image produced by convolving the specimen image with the convolution kernel shown directly beneath this window. To operate the tutorial, select an image from the Choose A Specimen pull-down menu, and select a kernel from the Choose A Kernel pull-down menu. For some of the available kernels, it is possible to change the dimensions of the kernel mask with the Kernel Size slider. The default kernel mask is an N × N Blur with a 5 × 5 kernel size. Visitors should explore the effects of convolving the specimen image with the variety of convolution kernels available in the tutorial.
Many powerful image-processing methods rely on multipixel operations, in which the intensity of each output pixel is computed as a function of the intensity values of its neighboring pixels in the input image. This type of operation is called a convolution or spatial convolution. Convolution involves multiplying a group of pixels in the input image by a corresponding array of weights in a convolution mask or convolution kernel. Each output value produced by a spatial convolution operation is a weighted average of an input pixel and its neighbors, with the weights given by the elements of the convolution kernel. This is a linear process because it involves only the summation of weighted pixel brightness values and multiplication (or division) by constants determined by the values in the convolution mask.
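The weighted-average computation can be illustrated with a minimal Java sketch for a grayscale image stored as a two-dimensional array; the class and method names here are illustrative and are not taken from the tutorial's applet code:

```java
// A minimal sketch of spatial convolution on a grayscale image stored as a
// 2D int array of 0-255 values. Class and method names are illustrative,
// not taken from the tutorial's applet code.
public class Convolver {

    // Convolves the image interior with an odd-sized square kernel; border
    // pixels are skipped here (border strategies are discussed below).
    public static int[][] convolve(int[][] image, double[][] kernel) {
        int height = image.length;
        int width = image[0].length;
        int radius = kernel.length / 2;
        int[][] output = new int[height][width];

        for (int y = radius; y < height - radius; y++) {
            for (int x = radius; x < width - radius; x++) {
                double sum = 0.0;
                // Weighted sum of the pixel and its neighbors.
                for (int ky = -radius; ky <= radius; ky++) {
                    for (int kx = -radius; kx <= radius; kx++) {
                        sum += image[y + ky][x + kx]
                             * kernel[ky + radius][kx + radius];
                    }
                }
                // Truncate to the display range (clipping; see below).
                output[y][x] = (int) Math.min(255, Math.max(0, Math.round(sum)));
            }
        }
        return output;
    }
}
```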
Spatial filters can be implemented through a convolution operation. By using the brightness information of the input pixel's neighbors, spatial convolution techniques compute a measure of spatial frequency activity in the neighborhood of each pixel, and are therefore capable of spatially filtering the frequency content of a digital image. The weighting values of the convolution kernel determine what type of spatial filtering operation will result. In addition, the size of the kernel influences the flexibility and precision of the spatial filter. The numerical values employed in the convolution kernel may be any real numbers, and the size of the convolution kernel can be as large as the processing capability of the computer system allows. In the tutorial, a variety of convolution kernels are available that perform operations such as high-pass (Laplacian) and low-pass (blur) filtering as well as edge detection.
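For concreteness, the following kernels are common textbook forms of a low-pass blur, a high-pass Laplacian, and an edge-detection operator; the exact coefficients used by the tutorial's kernels may differ:

```java
// Representative kernels in common textbook form; the coefficients used by
// the tutorial's kernels may differ. Any of these arrays could be passed to
// the convolve() sketch above.

// Low-pass (3 x 3 box blur): every neighbor is weighted equally, and the
// weights sum to one so overall image brightness is preserved.
double[][] boxBlur = {
    { 1 / 9.0, 1 / 9.0, 1 / 9.0 },
    { 1 / 9.0, 1 / 9.0, 1 / 9.0 },
    { 1 / 9.0, 1 / 9.0, 1 / 9.0 }
};

// High-pass (Laplacian): uniform regions map to zero, while rapid intensity
// changes produce large responses, emphasizing fine detail.
double[][] laplacian = {
    {  0, -1,  0 },
    { -1,  4, -1 },
    {  0, -1,  0 }
};

// Horizontal Sobel operator, a classic edge-detection kernel.
double[][] sobelX = {
    { -1, 0, 1 },
    { -2, 0, 2 },
    { -1, 0, 1 }
};
```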
The convolution operation on a pixel neighborhood can produce a wide range of numerical values, so each output value must be adjusted to match the range of the display or memory storage device. One method, termed clipping, simply truncates the values to the bounds of the display range. Another method, called normalization, generally produces better results by rescaling the values to fit the display range. In the tutorial, the term bias refers to the absolute value of the sum of the negative coefficients in the kernel mask, and it represents the largest possible deviation of the output values below the lower bound of the display range. The coefficient is the sum of the absolute values of the elements in the convolution kernel; the initial coefficient has a value of 44, corresponding to the default 5 × 5 blur convolution kernel. The bias is added to each output value to ensure that it does not fall below the lower boundary of the display range, and the coefficient is used to scale the output values down to fit within the upper boundary of the display range. By default, the bias value is set to zero at tutorial initialization and after selecting a new specimen. When normalization is enabled by selecting the Allow Normalization checkbox, the output values may be normalized, as indicated above the checkbox; when normalization is disabled, the output values are clipped.
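The sketch below shows one plausible implementation of the two strategies for 8-bit pixels in the range 0 to 255, following the definitions of bias and coefficient given above; the exact scaling arithmetic used by the tutorial's applet is an assumption here:

```java
// One plausible reading of clipping versus normalization for 8-bit pixels.
// The bias/coefficient arithmetic follows the article's definitions; the
// tutorial applet's exact scaling may differ in detail.
static int adjust(double raw, double[][] kernel, boolean normalize) {
    if (!normalize) {
        // Clipping: truncate to the bounds of the display range.
        return (int) Math.min(255, Math.max(0, Math.round(raw)));
    }
    double bias = 0.0;        // |sum of negative kernel coefficients|
    double coefficient = 0.0; // sum of |kernel coefficients|
    for (double[] row : kernel) {
        for (double w : row) {
            if (w < 0) bias -= w;
            coefficient += Math.abs(w);
        }
    }
    // Normalization: shift the raw value up by the largest possible
    // negative contribution (bias * 255), then divide by the coefficient
    // so the full range of possible outputs fits within [0, 255].
    double scaled = (raw + bias * 255.0) / coefficient;
    return (int) Math.round(scaled);
}
```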
An important issue in the convolution process is that the convolution kernel extends beyond the borders of the image when it is applied to border pixels. One technique commonly utilized to remedy this problem, usually referred to as centered, zero boundary superposition, is simply to ignore the problematic pixels and to perform the convolution operation only on those pixels that are located at a sufficient distance from the borders. This method has the disadvantage of producing an output image that is smaller than the input image. A second technique, called centered, zero padded superposition, involves padding the missing pixels with zeroes. Yet another technique regards the image as a single element in a tiled array of identical images, so that the missing pixels are taken from the opposite side of the image. This method is called centered, wrapped boundary superposition (a circular convolution) and has the advantage of allowing for the use of modulo arithmetic in the calculation of pixel addresses, eliminating the need to treat border pixels as a special case.
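The modulo addressing used by the tiled-image technique can be illustrated with a small helper; this sketch is not drawn from the tutorial's code:

```java
// Modulo addressing for the tiled-image (wraparound) technique: coordinates
// that fall outside the image are mapped back in from the opposite side, so
// border pixels need no special-case logic.
static int wrap(int coord, int size) {
    // Java's % operator can return a negative remainder, so add size
    // before reducing a second time.
    return ((coord % size) + size) % size;
}

// Inside the convolution loop the neighbor lookup becomes:
//   int pixel = image[wrap(y + ky, height)][wrap(x + kx, width)];
```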
Each of these techniques is useful for specific image-processing applications. The zero padded and wrapped boundary methods are commonly applied in image enhancement filtering, while the zero boundary method is often utilized in edge detection and in the computation of spatial derivatives. In the tutorial, the method of centered, constant padded superposition is employed, which is similar to zero padded superposition except that the missing pixels are replaced with the average intensity of the pixels along the image border (instead of with zeroes). This method has the advantage of producing less visible distortion at the border of the output image than zero padded superposition.
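A minimal sketch of this constant-padding strategy might look as follows; the helper names fetch() and borderMean() are chosen for illustration and do not come from the tutorial:

```java
// A sketch of centered, constant padded superposition as described above:
// out-of-range coordinates yield the mean intensity of the image's border
// pixels. The helper names fetch() and borderMean() are illustrative.
static int fetch(int[][] image, int y, int x, int padValue) {
    int h = image.length, w = image[0].length;
    if (y < 0 || y >= h || x < 0 || x >= w) {
        return padValue; // constant padding outside the image
    }
    return image[y][x];
}

// Computes the average intensity of the pixels along the image border.
static int borderMean(int[][] image) {
    int h = image.length, w = image[0].length;
    long sum = 0;
    int count = 0;
    for (int x = 0; x < w; x++) {      // top and bottom rows
        sum += image[0][x] + image[h - 1][x];
        count += 2;
    }
    for (int y = 1; y < h - 1; y++) {  // left and right columns, corners excluded
        sum += image[y][0] + image[y][w - 1];
        count += 2;
    }
    return (int) (sum / count);
}
```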