Measuring the Performance of Ground Slope Generation

Demetra Voyadgis and William Ryder, U.S. Army Topographic Engineering Center

Calculation of ground slope is fundamental to many traditional Geographic Information System (GIS) applications. Slope is an important component of engineering, mining, hydrological, geomorphological, and environmental analyses. Various methods exist for calculating slope. Manual slope generation, based on contour line information, is a long-established and generally accepted method. With automated tools, however, analysts must assume that the algorithm in use is accurate and comparable to the manual method of slope generation. As the Army's center of expertise for digital topographic data, the Digital Concepts and Analysis Center (DCAC) at the U.S. Army Topographic Engineering Center (TEC) has conducted a study of the accuracy of slope generation. The purpose of the study was to field test existing methods of slope generation used by Army terrain analysts: the manual slope wedge method and the slope algorithm embedded in the ArcInfo GRID module. A manual slope map was compiled from a 1:24,000-scale contour map using the slope wedge method, and the slope areas were digitized for analysis in ArcInfo. The elevation data used for slope generation in GRID was produced at TEC and covered the Yakima Training Center, Yakima, Washington. The elevation data was collected at 5-meter post spacing and was thinned to allow further comparisons at resolutions generally available to terrain analysts (30 and 100 meters). Rapid field collection of point slope data, a key aspect of this study, was made possible by Global Positioning System (GPS) and laser range-finding technology. To assess slope accuracy, field slope measurements were compared with the manual and GRID-generated slope values. Results are summarized and discussed in the paper.
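The GRID-generated slope values discussed above come from a finite-difference estimator applied to the elevation grid. The abstract does not reproduce the algorithm; as a sketch, Horn's 3x3 method is assumed here, since it is a common choice in raster slope tools:

```python
import math

def slope_percent(dem, i, j, cell):
    """Estimate slope (%) at interior cell (i, j) of a DEM, given as a
    list of rows, using Horn's 3x3 finite-difference method (assumed
    here for illustration; not taken from the paper)."""
    # Pull the 3x3 elevation window centered on (i, j).
    z = [[dem[i + di][j + dj] for dj in (-1, 0, 1)] for di in (-1, 0, 1)]
    # Weighted rates of change in the x (east-west) and y (north-south)
    # directions, with `cell` the post spacing in the same units as elevation.
    dzdx = ((z[0][2] + 2 * z[1][2] + z[2][2]) -
            (z[0][0] + 2 * z[1][0] + z[2][0])) / (8 * cell)
    dzdy = ((z[2][0] + 2 * z[2][1] + z[2][2]) -
            (z[0][0] + 2 * z[0][1] + z[0][2])) / (8 * cell)
    return 100 * math.sqrt(dzdx ** 2 + dzdy ** 2)

# A plane rising 1 m per 10 m eastward should give about a 10% slope.
dem = [[0.0, 1.0, 2.0],
       [0.0, 1.0, 2.0],
       [0.0, 1.0, 2.0]]
print(slope_percent(dem, 1, 1, cell=10.0))  # ≈ 10.0
```

Thinning a 5-meter DEM to 30- or 100-meter post spacing changes `cell` (and the window footprint), which is one reason slope estimates vary with resolution.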



Error Detection and Correction of Hypsography Layers

Kevin S. Larson, Berger and Co.

As with any large data set, the Digital Chart of the World (DCW) contains errors introduced during map digitizing and subsequent processing. The problem then becomes how to detect these errors, and possibly correct them, since functions like TOPOGRID will not give the desired results with incorrect input. Fortunately, the DCW hypsography layers label the data systematically, allowing for a systematic solution. ArcInfo's vector and raster tools are used to detect errors within the hypsography layers. Errors within the contours are detected first. This is done by examining the elevation difference between the current arc and its neighbor and determining whether it is within a specified tolerance. ArcInfo's raster function EUCALLOCATION forms a polygon zone for each arc, and the border arcs are used to compute the needed difference. Arcs not within the specified tolerance are flagged as errors. A similar approach is then taken for the points. The contours are also used with the points; here, however, the actual values of the two neighboring contours are needed. The boundaries formed by EUCALLOCATION are expanded back to the original arcs' locations with ArcInfo's COSTALLOCATION function. Points with an elevation value outside that range are flagged as errors. The only case where point data correction can be automated is when the point data has been generated from another layer. The DCW supplemental point hypsography layer represents the locations and values of collapsed contours. Because they are collapsed contours, their elevations are based on the surrounding contours. Contour correction would be much more difficult, and less certain, because more data than just an arc's neighbors is needed. Potential data errors are flagged and, where possible, corrected after processing.
Using the data to detect and correct itself keeps problems associated with incorporating external ancillary data, such as registration and projection differences, from complicating the situation. Two major limitations are present in this solution. First, raster processing cannot represent vector data exactly. Second, the raster functions used are slow, taking several hours to produce results.
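The two checks described above reduce to simple comparisons once each arc's neighbors are known. In the paper, neighbor relationships come from the zones built with EUCALLOCATION; in this hedged sketch they are simply given as hypothetical records, with the contour interval assumed as the tolerance:

```python
# Hypothetical arc records: elevation plus ids of neighboring arcs
# (in the paper, neighbors come from EUCALLOCATION zones; here they
# are supplied directly for illustration).
arcs = {
    1: {"elev": 100, "neighbors": [2]},
    2: {"elev": 120, "neighbors": [1, 3]},
    3: {"elev": 300, "neighbors": [2]},  # mislabeled contour
}

def flag_arc_errors(arcs, tolerance):
    """Flag arcs whose elevation differs from a neighbor's by more than
    the tolerance -- the neighbor-difference test for contours."""
    flagged = set()
    for arc_id, arc in arcs.items():
        for nb in arc["neighbors"]:
            if abs(arc["elev"] - arcs[nb]["elev"]) > tolerance:
                flagged.add(arc_id)
    return flagged

def flag_point_errors(points, lo, hi):
    """Flag points whose elevation falls outside the range spanned by
    the two neighboring contours -- the range test for points."""
    return {pid for pid, elev in points.items() if not lo <= elev <= hi}

CONTOUR_INTERVAL = 20  # assumed tolerance: one contour interval
print(sorted(flag_arc_errors(arcs, CONTOUR_INTERVAL)))   # [2, 3]
print(sorted(flag_point_errors({"a": 110, "b": 450}, 100, 120)))  # ['b']
```

Both tests are local, which is what makes the systematic labeling of the DCW hypsography layers sufficient for detection without external ancillary data.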



The Dot-Probability Paradigm for the Storage of Spatial Data

Michael Kennedy, University of Kentucky and Patricia E. Bomba, Mid-America Remote Sensing Center Murray State University

Organizations and agencies that use GISs are requiring increasingly precise "metadata" describing the confidence one might place in stored spatial data. This is true not only for primary data sets but for derived data sets as well. The need for such metadata, and for the quality control (QC) it supports, will increase as GISs are used more often to decide issues that may produce litigation. The approach proposed herein allows the user to interactively ascertain the degree of accuracy of the spatial data concerned. The intent of its design is to provide a universal data frame that promotes truly "honest" GIS processing, while at the same time permitting a "fuzziness" in GIS data that both the polygon and cell paradigms deny. The Dot-Probability Paradigm (DPP) is a GIS data frame for the storage and manipulation of areal, network, and point spatial data; further, the DPP has built into it the ability to provide the user with detailed information about the quality of the data contained in a given data set. The DPP project was sponsored by Esri and the Ohio Center for Mapping.
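The abstract does not spell out the DPP's actual data layout, so the following is only an illustrative sketch of the general idea: attaching a membership probability to each stored dot, so that "fuzziness" travels with the data and the user can probe quality interactively. All names and values here are invented:

```python
# Hypothetical dot records: (x, y, probability that the dot truly
# belongs to the class being mapped). Layout and class are invented
# for illustration; the paper's actual DPP structure is not given.
dots = [
    (1.0, 1.0, 0.95),
    (1.5, 1.2, 0.60),
    (2.0, 0.8, 0.30),
]

def expected_members(dots, threshold=0.0):
    """Expected number of dots in the class, optionally ignoring dots
    below a confidence threshold -- one way a user might interactively
    ascertain how much of the data meets a required accuracy."""
    return sum(p for _, _, p in dots if p >= threshold)

print(expected_members(dots))       # ≈ 1.85
print(expected_members(dots, 0.5))  # ≈ 1.55
```

Unlike a polygon (inside/outside) or a cell (single class per cell), each dot carries its own confidence, which is the sense in which the paradigm permits fuzziness that both classic data frames deny.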



