Feature Extraction Using Spatial Context

David Opitz

When classifying the contents of imagery, human interpreters rely on only a few attributes: shape, size, color, texture, pattern, shadow, and association. Traditional image processing techniques incorporate only color (spectral signature), and perhaps texture or pattern, into an involved expert workflow; this is why they typically fail when populating GIS databases. This paper presents a successful machine learning method for integrating spatial context into the feature extraction process, thus leveraging the remaining attributes of shape, size, shadow, and association. Results demonstrate the utility of this approach.


Introduction

Humans have a unique ability to easily recognize complex features in an image. They do so by utilizing the attributes of shape, size, texture, pattern, shadow, and association (Caylor, 2001). Despite these abilities, manual methods fall woefully short of meeting government and commercial sector needs for three key reasons: (1) the lack of available trained analysts; (2) the laborious, time-consuming nature of manual classification; and (3) the high labor costs involved. Consequently, there has been a considerable amount of research on automating feature extraction. These automated approaches typically use only the spectral signature of the object; however, spectral signature alone is ineffective for many features of interest. This article discusses a new software product, Feature Analyst (www.featureanalyst.com), that succeeds in incorporating these other six attributes into the feature extraction process.

Background

Features extracted from an image populate a GIS database and support decision-makers in a wide variety of applications such as land-use planning, disaster and emergency services, and telecommunications. Image classifiers can also extract specific objects or targets from imagery, such as land-cover types from multispectral imagery (MSI). In the real-world task of feature extraction, however, it is impossible to extract many features using only spectral information. For example, one cannot distinguish between asphalt roads and asphalt parking lots without the contextual information provided by spatial signatures. The classifier must look not only at the spectral value of a pixel but also at the surrounding pixels to capture a notion of spatial perspective.

Besides manual classification and traditional image processing techniques, an alternate and more recent approach is to model the feature extraction process using statistical and machine learning techniques (Maloof et al. 1998; McKeown 1996). The idea behind this approach is that, given a sample of extracted features from the image, a learning algorithm automatically develops a model that correlates known data (such as pixel values from images, terrain data, vector overlays, grids, etc.) with the targeted features. The learned model then automatically classifies and extracts the remaining features in the imagery. Traditional supervised classification methods have predominantly used basic statistical methods, such as Maximum Likelihood. These methods require a priori statistical assumptions and often fail on small-feature capture because of their inability to (a) classify disjunctive concepts, (b) take into account spatial context, and (c) remove clutter.
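To make the traditional baseline concrete, a per-pixel Maximum Likelihood classifier of the kind described above can be sketched as follows. This is a minimal numpy illustration of the general technique (fitting a Gaussian to each class's spectral values and assigning each pixel to the most likely class), not the algorithm of any particular product; the function names are ours.

```python
import numpy as np

def train_ml_classifier(pixels, labels):
    """Fit a multivariate Gaussian to the spectral values of each class.

    pixels: (n, bands) array of training pixel values
    labels: (n,) array of integer class labels
    """
    model = {}
    for c in np.unique(labels):
        x = pixels[labels == c]
        mean = x.mean(axis=0)
        # Small ridge term keeps the covariance invertible
        cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(x.shape[1])
        model[c] = (mean, np.linalg.inv(cov), np.log(np.linalg.det(cov)))
    return model

def classify(pixels, model):
    """Assign each pixel the class with the highest Gaussian log-likelihood."""
    classes = sorted(model)
    scores = []
    for c in classes:
        mean, inv_cov, log_det = model[c]
        d = pixels - mean
        # Log-likelihood up to a constant shared by all classes
        scores.append(-0.5 * (np.einsum('ij,jk,ik->i', d, inv_cov, d) + log_det))
    return np.array(classes)[np.argmax(scores, axis=0)]
```

Note that the decision for each pixel depends only on its own spectral vector, which is exactly why this approach cannot separate spectrally identical features such as roads and parking lots.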

Feature Analyst makes inductive learning the focal point of its extraction process. It provides (a) a simple interface and workflow process, (b) the ability to take into account spatial context (contextual classification), (c) the ability to mitigate clutter, and (d) the ability to learn disjunctive concepts.

The Value of Spatial Context

Problem representation is of the utmost importance for inductive learning. With feature extraction, the difficulty is including enough spatial information without overwhelming the learner. Figures 1-3 (from Opitz, 2002) show the value of using spatial context in feature extraction. Figure 1 shows an image in which we want to extract the white lines on airport runways. Figure 2 demonstrates the best one can do using only spectral information: all materials with reflectance similar to the white lines are extracted, so there is too much clutter. The result from Feature Analyst using a "square 7x7" input window is shown in Figure 3; only thin white lines surrounded by pavement and/or grass are extracted. Opitz (2002) showed that using a proper spatial context when extracting NIMA's Foundation Feature Classes reduced the error rate of traditional supervised classification by over 50%. Feature Analyst also showed a labor savings of over 165 times compared to heads-up digitizing, without sacrificing accuracy.
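The idea of a square input window can be sketched in a few lines: instead of feeding the learner one pixel's spectral values, each training example is the flattened neighborhood around the pixel. This is a minimal numpy sketch of the general windowing technique, under our own assumptions about padding and layout, not Feature Analyst's implementation.

```python
import numpy as np

def window_features(image, row, col, size=7):
    """Return the flattened size x size neighborhood around (row, col).

    image: (H, W, bands) array. Reflection padding (our choice) ensures
    every pixel, including those at the edges, has a full window.
    """
    half = size // 2
    padded = np.pad(image, ((half, half), (half, half), (0, 0)),
                    mode='reflect')
    # (row, col) in the original image is the center of this patch
    patch = padded[row:row + size, col:col + size, :]
    return patch.ravel()
```

With a 7x7 window on b-band imagery, each pixel contributes 49*b attributes to the learner rather than b, which is exactly the trade-off the paragraph above describes: more spatial information at the cost of a larger representation.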

Figure 1: Sample image where we want the white lines on the airport runway.

Figure 2: Supervised classification with no spatial context.

Figure 3: Feature Analyst classification with spatial context.

Conclusions

Feature Analyst is an exciting new product that uses spatial context to effectively extract features from imagery. Proper spatial context can automate the feature extraction process and yield significant labor savings. Feature Analyst, with its easy-to-use interface, promises to change the way analysts currently extract features from imagery.

Acknowledgements

This work was supported by NIMA contract NMA201-01-C-0016 and National Science Foundation grant IRI-9734419.

References

Caylor, J. 2001. Personal Communication. USDA Forest Service Remote Sensing Applications Center.

Maloof, M., et al. 1998. "Learning to Detect Rooftops in Aerial Images." Image Understanding Workshop, 835-845. Monterey, CA.

McKeown, D. 1996. "Top Ten Lessons Learned in Automated Cartography," Technical Report CMU-CS-96-110, Computer Science Department, Carnegie Mellon University, Pittsburgh, PA.

Nayar, S., and Poggio, T., eds. 1996. Early Visual Learning. New York, NY: Oxford University Press.

Opitz, D. 2002. "The Use of Spatial Context in Image Understanding." Ninth Biennial Remote Sensing Applications Conference.


David Opitz
Associate Professor
University of Montana
Computer Science Department