Arguably the most important step in pattern recognition is the appropriate choice of numbers to represent an image (such numerical descriptors of an image are called features). Since a long-term goal is a system that is able to distinguish the localization of many proteins, not just the five patterns used in this study, it was decided that two sets of "general purpose" features would be used, rather than tailoring individual features to discriminate particular patterns.
Since cells in fluorescence images have arbitrary location and
orientation, all features were required to be invariant to the
translation and rotation of cells within a field of view. The search
for such features led first to moment invariants [35] and then to the more
appealing Zernike moments [33] (Equation
2.4). Although originally used in the representation
of optical aberration [36,37], the Zernike
polynomials, on which the Zernike moments are based, have recently
found application in pattern recognition [20,38,21,39,40,41]. Based on this previous work, Zernike moments up
to degree 12 were calculated (n ≤ 12 in Equation
2.4), providing 49 numbers for describing each
image. These complex-valued moments are not themselves invariant to
rotation, so the final 49 features were obtained by taking their
magnitudes, which are invariant to rotation. The magnitudes of the 49
moments span very different numerical ranges, and this disparity
hindered subsequent classification when using a neural network
classifier (see below). The features were therefore normalized as
described in Section 2.2.6 before being used with the BPNN
classifier.
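To make this computation concrete, the sketch below calculates the 49 magnitudes with NumPy, following the standard definition of the Zernike moments (which may differ in minor conventions from Equation 2.4). It assumes a single-channel image in which the cell has already been centered within the circular region of interest; `radial_poly`, `zernike_magnitudes`, and `normalize_features` are illustrative names, and the z-score normalization shown is only a stand-in for the actual procedure of Section 2.2.6.

```python
import numpy as np
from math import factorial

def radial_poly(n, l, rho):
    """Zernike radial polynomial R_nl evaluated on the array rho
    (requires 0 <= l <= n with n - l even)."""
    out = np.zeros_like(rho)
    for s in range((n - l) // 2 + 1):
        coeff = ((-1) ** s * factorial(n - s)
                 / (factorial(s)
                    * factorial((n + l) // 2 - s)
                    * factorial((n - l) // 2 - s)))
        out = out + coeff * rho ** (n - 2 * s)
    return out

def zernike_magnitudes(img, max_degree=12):
    """Rotation-invariant |Z_nl| for all n <= max_degree (49 values for 12).

    Z_nl = (n+1)/pi * integral over the unit disk of f * conj(V_nl),
    where V_nl = R_nl(rho) * exp(i*l*theta). The image is mapped onto
    the unit disk about its geometric center; pixels outside are ignored.
    """
    h, w = img.shape
    y, x = np.mgrid[:h, :w]
    xn = (x - (w - 1) / 2) / ((w - 1) / 2)   # normalize coordinates to [-1, 1]
    yn = (y - (h - 1) / 2) / ((h - 1) / 2)
    rho, theta = np.hypot(xn, yn), np.arctan2(yn, xn)
    inside = rho <= 1.0
    f, r, t = img.astype(float)[inside], rho[inside], theta[inside]
    dA = (2.0 / (w - 1)) * (2.0 / (h - 1))   # pixel area in disk coordinates
    feats = []
    for n in range(max_degree + 1):
        for l in range(n % 2, n + 1, 2):     # n - l even; |Z_n,-l| = |Z_nl|
            Vconj = radial_poly(n, l, r) * np.exp(-1j * l * t)
            feats.append(abs((n + 1) / np.pi * np.sum(f * Vconj) * dA))
    return np.asarray(feats)

def normalize_features(train_feats, feats):
    """Stand-in z-score normalization against training-set statistics;
    the actual procedure used in this study is described in Section 2.2.6."""
    mu, sd = train_feats.mean(axis=0), train_feats.std(axis=0)
    return (feats - mu) / sd
```

For a square image this returns a 49-element vector whose entries are unchanged (up to pixelation effects) when the image is rotated about its center, consistent with the rotation invariance noted above.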
Since the Zernike polynomials form an orthogonal basis set (a set of functions for which the integral of the product of any pair of distinct functions is zero), the Zernike moments calculated for a particular image can be used to reconstruct that image. In theory, error-free reconstruction of a continuous (i.e., not pixelated) image requires an infinite number of Zernike moments. Since only 49 moments were used to describe the images, it was of interest to examine representative images reconstructed from those moments. The reconstructions (Figure 2.4) provide some insight into how much information is retained in the 49 Zernike moments used for classification; it is clear that much of the detailed information in each image is not preserved in the low-degree moments. Note, in particular, that the five reconstructed images are visibly different, despite representing a compression of more than 1400 to 1 for the circular region defined around each cell (70,650 pixels in a 300-pixel-diameter circle vs. 49 Zernike moments).
[Figure 2.4: the five representative images reconstructed from their 49 Zernike moments]
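The reconstruction itself follows directly from the orthogonality property: the image is approximated by the truncated expansion f ≈ Σ Z_nl V_nl summed over the unit disk. A minimal sketch, reusing `radial_poly` and the same illustrative coordinate conventions from the sketch above (again, not the code used in this study), makes this explicit:

```python
import numpy as np

def reconstruct(img, max_degree=12):
    """Approximate an image from its Zernike moments up to max_degree.

    Orthogonality gives f ~ sum over n, l of Z_nl * V_nl on the unit disk.
    For a real-valued image Z_{n,-l} = conj(Z_nl), so each l > 0 term and
    its negative-l partner combine into 2 * Re(Z_nl * V_nl).
    Reuses radial_poly from the previous sketch.
    """
    h, w = img.shape
    y, x = np.mgrid[:h, :w]
    xn = (x - (w - 1) / 2) / ((w - 1) / 2)
    yn = (y - (h - 1) / 2) / ((h - 1) / 2)
    rho, theta = np.hypot(xn, yn), np.arctan2(yn, xn)
    inside = rho <= 1.0
    f, r, t = img.astype(float)[inside], rho[inside], theta[inside]
    dA = (2.0 / (w - 1)) * (2.0 / (h - 1))   # pixel area in disk coordinates
    partial = np.zeros(f.shape)
    for n in range(max_degree + 1):
        for l in range(n % 2, n + 1, 2):     # n - l even, l >= 0
            V = radial_poly(n, l, r) * np.exp(1j * l * t)
            Z = (n + 1) / np.pi * np.sum(f * np.conj(V)) * dA
            partial += (Z * V).real * (1 if l == 0 else 2)
    out = np.zeros((h, w))
    out[inside] = partial
    return out
```

Comparing `reconstruct(img, 12)` against the original cell image gives a direct visual sense of how much detail survives truncation at degree 12, as in Figure 2.4.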