
### Features Calculated Using Only the Protein Localization Image

The number of fluorescent objects in the image
Objects were identified by applying the Matlab bwlabel function to a binarized version of the processed image. The bwlabel function defines an object as a contiguous group of non-zero pixels in an 8-connected neighborhood (i.e., a given pixel is considered adjacent to all eight of its surrounding pixels, including the diagonals).
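The object-counting step can be sketched in Python, where `scipy.ndimage.label` plays the same role as Matlab's bwlabel when given an all-ones 3×3 structuring element (8-connectivity). This is an illustrative stand-in, not the original Matlab code:

```python
import numpy as np
from scipy import ndimage

def count_objects(binary_image):
    """Count contiguous groups of non-zero pixels, 8-connected."""
    structure = np.ones((3, 3), dtype=int)   # 8-connected neighborhood
    _, num_objects = ndimage.label(binary_image != 0, structure=structure)
    return num_objects

# Three objects: the 8-connected blob at top-left, the vertical pair at
# right, and the isolated pixel at bottom-left.
img = np.array([[1, 1, 0, 0],
                [0, 1, 0, 1],
                [0, 0, 0, 1],
                [1, 0, 0, 0]])
print(count_objects(img))  # → 3
```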
The Euler number of the image
The Matlab imfeature function was used to calculate the number of objects in the image minus the number of holes. A hole is defined as a contiguous group of zero-valued pixels contained entirely within an area of non-zero pixels.
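A hedged Python sketch of the same objects-minus-holes computation (the original used Matlab's imfeature; here the hole count is derived directly, treating holes as 4-connected background regions that do not touch the image border, the usual complement of 8-connected foreground):

```python
import numpy as np
from scipy import ndimage

def euler_number(binary_image):
    """Euler number: 8-connected objects minus 4-connected holes."""
    eight = np.ones((3, 3), dtype=int)
    _, num_objects = ndimage.label(binary_image != 0, structure=eight)
    # Label the background 4-connected (scipy's default structure); regions
    # that touch the image border are outside, everything else is a hole.
    bg_labels, num_bg = ndimage.label(binary_image == 0)
    border = np.zeros(bg_labels.shape, dtype=bool)
    border[0, :] = border[-1, :] = border[:, 0] = border[:, -1] = True
    outside = set(bg_labels[border].ravel()) - {0}
    num_holes = num_bg - len(outside)
    return num_objects - num_holes
```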

The average number of above-threshold pixels per object
Simply the mean number of non-zero pixels per object over the entire cell.

The variance of the number of above-threshold pixels per object
The ratio of the size of the largest object to the smallest
The number of pixels in the largest object divided by the number of pixels in the smallest object.
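The three object-size features above (mean, variance, and largest-to-smallest ratio of per-object pixel counts) can be computed together; a minimal Python sketch, again using `scipy.ndimage.label` in place of bwlabel:

```python
import numpy as np
from scipy import ndimage

def object_size_features(binary_image):
    """Mean, variance, and largest/smallest ratio of per-object pixel counts."""
    eight = np.ones((3, 3), dtype=int)       # 8-connected, like bwlabel
    labels, _ = ndimage.label(binary_image != 0, structure=eight)
    sizes = np.bincount(labels.ravel())[1:]  # drop the background count
    sizes = sizes.astype(float)
    return sizes.mean(), sizes.var(), sizes.max() / sizes.min()
```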
The average object distance to the center of fluorescence (COF)
The center of fluorescence of the image was calculated for the entire image and used to calculate distances to the centers of fluorescence of each object in that cell:

$$\bar{x} = \frac{\sum_x \sum_y x \, f(x,y)}{\sum_x \sum_y f(x,y)}, \qquad \bar{y} = \frac{\sum_x \sum_y y \, f(x,y)}{\sum_x \sum_y f(x,y)}$$

where x and y are the coordinates of each pixel, and f(x,y) is the function describing the intensity of each pixel in the image. The centers of the objects were calculated in a similar manner, using only the pixels from that object to calculate $\bar{x}$ and $\bar{y}$.
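A sketch of the COF computation in Python, assuming the standard intensity-weighted centroid formula implied by the text and Euclidean object-to-COF distances:

```python
import numpy as np
from scipy import ndimage

def center_of_fluorescence(image):
    """Intensity-weighted centroid (x-bar, y-bar) of an image f(x, y)."""
    total = image.sum()
    ys, xs = np.indices(image.shape)
    return (xs * image).sum() / total, (ys * image).sum() / total

def object_cof_distances(image):
    """Distance from each object's COF to the whole-image COF."""
    eight = np.ones((3, 3), dtype=int)
    labels, n = ndimage.label(image != 0, structure=eight)
    cx, cy = center_of_fluorescence(image)
    dists = []
    for i in range(1, n + 1):
        # Use only this object's pixels for its own centroid.
        ox, oy = center_of_fluorescence(np.where(labels == i, image, 0.0))
        dists.append(np.hypot(ox - cx, oy - cy))
    return dists
```

The average, variance, and largest-to-smallest ratio of these distances give the three COF features listed here.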

The variance of object distances from the COF
The ratio of the largest to the smallest object-to-COF distance
The fraction of the non-zero pixels that are along an edge
Edge detection was performed on each image using the method described by Canny, as implemented in the Matlab edge function. Canny's method calculates the gradient of the image using the derivative of a Gaussian filter. It then assigns edges to strong and weak categories; weak edges are included in the final output only if they are connected to strong edges. This approach is less sensitive to image noise than other edge detection methods. The area of the binarized edge image was then divided by the area of the binarized input image.
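A rough Python sketch of the edge-fraction feature. A simple Sobel gradient-magnitude threshold stands in for the Canny detector here (the original used Matlab's edge function), and `threshold` is an illustrative parameter, not one from the original method:

```python
import numpy as np
from scipy import ndimage

def edge_pixel_fraction(image, threshold):
    """Area of a binarized edge image divided by area of the binarized input.
    NOTE: gradient-magnitude thresholding is a simplified stand-in for Canny."""
    gx = ndimage.sobel(image.astype(float), axis=1)
    gy = ndimage.sobel(image.astype(float), axis=0)
    edges = np.hypot(gx, gy) > threshold
    return edges.sum() / np.count_nonzero(image)
```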
Measure of edge direction homogeneity 1
The ratio of the largest to smallest value in an eight-element histogram of image gradient directions. To generate this histogram, each image (I) was first convolved separately with the kernels in Equations 3.2 and 3.3 to find the intensity gradients in two orthogonal directions ($G_N$ and $G_W$):

$$K_N = \begin{bmatrix} 1 & 1 & 1 \\ 0 & 0 & 0 \\ -1 & -1 & -1 \end{bmatrix} \qquad (3.2)$$

$$K_W = \begin{bmatrix} 1 & 0 & -1 \\ 1 & 0 & -1 \\ 1 & 0 & -1 \end{bmatrix} \qquad (3.3)$$

The overall gradient direction at each point in the image G was then calculated from the convolved images ($G_N$ and $G_W$) using

$$G = \arctan\!\left(\frac{G_N}{G_W}\right) \qquad (3.4)$$

The value of each pixel in the image G is therefore the direction (from $-\pi$ to $\pi$) of the intensity gradient at that same point in the original image, I. An eight-bin histogram was then calculated using all of the values in the gradient image G. Images with patterns containing edges oriented predominantly along a particular direction (some patterns of actin filaments, for example) will therefore produce edge gradient histograms in which a few bins dominate.
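The gradient-direction histogram and the two homogeneity measures can be sketched as follows. The Prewitt-style "north" and "west" kernels are assumed stand-ins for Equations 3.2 and 3.3, `np.arctan2` supplies the quadrant-aware direction, and the `max(..., 1)` guards against empty bins are an illustrative addition, not part of the original definitions:

```python
import numpy as np
from scipy import ndimage

# Assumed "north" and "west" gradient kernels (Prewitt-style).
K_N = np.array([[ 1,  1,  1],
                [ 0,  0,  0],
                [-1, -1, -1]], dtype=float)
K_W = np.array([[1, 0, -1],
                [1, 0, -1],
                [1, 0, -1]], dtype=float)

def direction_histogram(image, bins=8):
    """Eight-bin histogram of per-pixel gradient directions."""
    g_n = ndimage.convolve(image.astype(float), K_N)
    g_w = ndimage.convolve(image.astype(float), K_W)
    directions = np.arctan2(g_n, g_w)            # -pi .. pi at every pixel
    hist, _ = np.histogram(directions, bins=bins, range=(-np.pi, np.pi))
    return hist

def homogeneity_measures(hist):
    """Measure 1: largest / smallest bin. Measure 2: largest / next largest."""
    h = np.sort(hist)[::-1].astype(float)
    measure1 = h[0] / max(h[-1], 1.0)   # guard: avoid division by zero
    measure2 = h[0] / max(h[1], 1.0)
    return measure1, measure2
```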

Measure of edge direction homogeneity 2
The ratio of the largest to the next-largest value in the image gradient direction histogram calculated above. This feature was included because the first measure of edge direction homogeneity can become arbitrarily large when the smallest histogram bin is very small.
Measure of edge intensity homogeneity
The fraction of all values that fall in the first two bins of a histogram of edge intensity. An image of edge intensities was calculated using the same convolved images described above ($G_N$ and $G_W$). The intensity of the gradient at all points in the image was calculated using

$$\|G\| = \sqrt{G_N^2 + G_W^2} \qquad (3.5)$$

An eight-element histogram was calculated for the values in this edge intensity image.
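The edge intensity homogeneity feature can be sketched similarly, using the same assumed Prewitt-style kernels as stand-ins for Equations 3.2 and 3.3:

```python
import numpy as np
from scipy import ndimage

# Assumed "north" and "west" gradient kernels (Prewitt-style).
K_N = np.array([[1, 1, 1], [0, 0, 0], [-1, -1, -1]], dtype=float)
K_W = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]], dtype=float)

def edge_intensity_homogeneity(image, bins=8):
    """Fraction of gradient-magnitude values in the first two of eight bins."""
    g_n = ndimage.convolve(image.astype(float), K_N)
    g_w = ndimage.convolve(image.astype(float), K_W)
    magnitude = np.hypot(g_n, g_w)       # sqrt(G_N^2 + G_W^2)
    hist, _ = np.histogram(magnitude, bins=bins)
    return (hist[0] + hist[1]) / hist.sum()
```

Mostly-uniform images concentrate their gradient magnitudes in the low bins, driving this fraction toward 1.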