Java 2D API
The Java 2D API provides a set of classes that define operations on BufferedImage and Tile objects.
BufferedImage extends java.awt.Image. It allows you to access individual pixels in an image. Tile defines the data and data layout within an image.

These image processing classes share a common architecture. Each image processing operation is embodied in a class, and each class defines a filter method that performs the actual image manipulation. This filter method might operate on a source and destination BufferedImage, or a source and destination Tile. Figure 5-1 illustrates the basic model for Java 2D API image processing.
The operations supported include:
Affine transformation
Band combination
Color conversion
Convolution
Lookup-table modification
Rescaling
Thresholding
The classes that implement these operations include AffineTransformOp and its subclasses BilinearAffineTransformOp and NearestNeighborAffineTransformOp, BandCombineOp, ColorConvertOp, ConvolveOp, LookupOp, RescaleOp, and ThresholdOp. These classes can be used to geometrically transform, blur, sharpen, enhance contrast, threshold, and color correct images.

Figure 5-2 illustrates edge detection and enhancement, an operation that emphasizes sharp changes in intensity within an image. Edge detection is commonly used in medical imaging and mapping applications because it increases the contrast between adjacent structures in an image, allowing the viewer to discriminate greater detail.
Figure 5-3 demonstrates lookup table manipulation via rescaling and thresholding. Rescaling can increase or decrease the intensity of all points. Rescaling can be used to increase the dynamic range of an otherwise neutral image, bringing out detail in a region that appears neutral or flat. Thresholding can clip ranges of intensities to a specified level.
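Rescaling of this sort can be sketched with the RescaleOp class. The following is a minimal example, not the guide's own code; the tiny synthetic image and its pixel values are hypothetical stand-ins for a real loaded image.

```java
import java.awt.image.BufferedImage;
import java.awt.image.RescaleOp;

public class RescaleExample {
    public static void main(String[] args) {
        // A tiny synthetic source image (a stand-in for a real loaded image)
        BufferedImage src = new BufferedImage(2, 2, BufferedImage.TYPE_INT_RGB);
        src.setRGB(0, 0, 0x404040); // dark gray: 64 in each channel

        // dst = src * 1.5 + 0 for every color band, clamped to 0..255
        RescaleOp brighten = new RescaleOp(1.5f, 0.0f, null);
        BufferedImage dst = brighten.filter(src, null);

        int red = (dst.getRGB(0, 0) >> 16) & 0xFF; // 64 * 1.5 = 96
        System.out.println(red);
    }
}
```

Thresholding can be expressed in a similar style with LookupOp, using a lookup table that maps every value below the threshold to one level and every value at or above it to another.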
The image processing classes provided by the Java 2D API are summarized in Table 5-1. Each operates on either a BufferedImage or Tile object.

Table 5-1 Image processing classes

Processing an Image
The following code fragment illustrates how to use one of the image processing classes, ConvolveOp. Convolution is the process that underlies most spatial filtering algorithms: the value of each pixel in an image is weighted or averaged with the values of neighboring pixels. This allows each output pixel to be affected by the immediate neighborhood in a way that can be mathematically specified with a kernel. In this example, each pixel in the source image is averaged equally with the eight pixels that surround it.
float weight = 1.0f / 9.0f;
float[] elements = new float[9]; // create a one-dimensional array of kernel values
// fill the array with nine equal elements
for (int i = 0; i < 9; i++) {
    elements[i] = weight;
}
// use the array of elements as the argument to create a Kernel
Kernel myKernel = new Kernel(3, 3, elements);
ConvolveOp simpleBlur = new ConvolveOp(myKernel);
// sourceImage and destImage are instances of BufferedImage
simpleBlur.filter(sourceImage, destImage); // blur the image
The variable simpleBlur contains a new instance of ConvolveOp that implements a blur operation on a BufferedImage or a Tile. Suppose that sourceImage and destImage are two instances of BufferedImage. When you call filter, the core method of the ConvolveOp class, it sets the value of each pixel in the destination image by averaging the corresponding pixel in the source image with the eight pixels that surround it. The convolution kernel in this example can be represented by the following matrix, with elements specified to four significant figures:

0.1111   0.1111   0.1111
0.1111   0.1111   0.1111
0.1111   0.1111   0.1111
When an image is convolved, the value of each pixel in the destination image is calculated by using the kernel as a set of weights to average the pixel's value with the values of surrounding pixels. This operation is performed on each channel of the image.
The following formula shows how the weights in the kernel are associated with the pixels in the source image when the convolution is performed. Each value in the kernel is tied to a spatial position in the image. For a 3 x 3 kernel centered on the pixel being computed:

dst(x, y) = sum over i = -1..1 and j = -1..1 of kernel(i, j) * src(x + i, y + j)
The value of a destination pixel is the sum of the products of the weights in the kernel multiplied by the values of the corresponding source pixels. For many simple operations, the kernel is a square, symmetric matrix whose weights sum to one.
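To make the weighted sum concrete, the following fragment performs the arithmetic for a single destination pixel under the 3 x 3 averaging kernel used above; the neighborhood of one-channel source values is hypothetical.

```java
public class ConvolveByHand {
    public static void main(String[] args) {
        float weight = 1.0f / 9.0f; // every kernel element is 1/9

        // Hypothetical 3 x 3 neighborhood around one source pixel (one channel)
        int[][] neighborhood = {
            { 90,  90, 90 },
            { 90, 180, 90 },
            { 90,  90, 90 }
        };

        // Destination value: sum of (kernel weight * source pixel value)
        float dst = 0.0f;
        for (int row = 0; row < 3; row++) {
            for (int col = 0; col < 3; col++) {
                dst += weight * neighborhood[row][col];
            }
        }
        // (8 * 90 + 180) / 9 = 100
        System.out.println(Math.round(dst));
    }
}
```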
The convolution kernel in this example is relatively simple. It weights each pixel from the source image equally. By choosing a kernel that weights the source image at a higher or lower level, a program can increase or decrease the intensity of the destination image. The Kernel object, which is set in the ConvolveOp constructor, determines the type of filtering that is performed. By setting other values, you can perform other types of convolutions, including blurring (such as Gaussian blur, radial blur, and motion blur), sharpening, and smoothing operations.
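As a sketch of one such variation, the following fragment builds a common 3 x 3 sharpening kernel whose weights also sum to one, so flat regions pass through unchanged while differences from neighbors are amplified. The test image and its pixel values are hypothetical.

```java
import java.awt.image.BufferedImage;
import java.awt.image.ConvolveOp;
import java.awt.image.Kernel;

public class SharpenExample {
    public static void main(String[] args) {
        // A common sharpening kernel: weights sum to one
        float[] sharp = {
             0.0f, -1.0f,  0.0f,
            -1.0f,  5.0f, -1.0f,
             0.0f, -1.0f,  0.0f
        };
        ConvolveOp sharpen = new ConvolveOp(new Kernel(3, 3, sharp));

        // Synthetic image: gray background (100 per channel) with a slightly
        // brighter center pixel (120 per channel)
        BufferedImage src = new BufferedImage(3, 3, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < 3; y++) {
            for (int x = 0; x < 3; x++) {
                src.setRGB(x, y, 0x646464);
            }
        }
        src.setRGB(1, 1, 0x787878);

        BufferedImage dst = sharpen.filter(src, null);
        int center = dst.getRGB(1, 1) & 0xFF; // 5 * 120 - 4 * 100 = 200
        System.out.println(center);
    }
}
```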