Inertial Measurement and LIDAR Meet Digital Ortho Photography:

A Sensor Fusion Boon for GIS

 

Robert G. Kletzli and John L. Peterson

 

 

 

We see the future of the airborne spatial data collection industry as producing spatial data in real time from a variety of commonly geo-referenced airborne sensors, with the potential for delivery back to the end user in near real time. We refer to this as "Sensor Fusion Technology". By this we mean that rapidly interchangeable suites of sensors will be able to meet a user's mapping and GIS needs by developing accurate map products at the same time the source data is being collected. Ground-based operations will then be used to fuse, reformat, and provide automated feature extraction from the geo-referenced sensor data. This paper will present the components of Digital Ortho Photo/LIDAR Fusion Technology and the impacts it is already making on GIS technology. Concepts and a case study will be discussed. For more information on digital orthophotography, LIDAR, and Sensor Fusion Technology visit us at:

www.earthdata.com

 

 

 

Introduction

As a result of the increasing availability of digital orthophotography coming from such sources as the National Aerial Photography Program (NAPP), and with recent strides made in the world of softcopy (computer-assisted) photogrammetry, the demand for digital orthophotography has exploded in the last three years. This is in large part due to the recognition of digital orthophotography as an excellent foundation upon which to build a GIS. Raster backdrops also lend a sense of reality to vectorized features that reinforces their value. NAPP photography (black-and-white, 1-meter resolution data) has contributed greatly to this trend. However, once exposed to the utility of digital orthophotography and the high quality of the newest color products now available, many users have found the 1-2 year post-flight turnaround time and several-year repeat cycle for NAPP products reason to seek other sources.

Natural color orthophotography and topographic mapping, once cost prohibitive and time consuming to produce, can now be delivered to a custom specification sooner than by traditional means, with equivalent accuracy, at a similar or in some cases lower cost. All of these points weigh in on the benefit side of the scale for GIS users. This is in large part due to advancements in the automation of the elemental components of the digital (softcopy) orthorectification and mapping processes: geo-referencing and digital terrain modeling, respectively.

This paper will first look at what is involved in softcopy photogrammetric processes and then at two of the most revolutionary advancements to come along in the photogrammetry world to date, which address the geo-referencing and terrain modeling challenges: Inertial Measurement Unit (IMU) technology and Light Detection and Ranging (LIDAR). By integrating IMU and LIDAR into the photogrammetric process, GIS users are now the beneficiaries of high quality data collection with turnaround times never seen before.

 

A special case study will be described later in this paper; however, for the benefit of readers not familiar with the orthorectification process, we will provide a brief summary of the processes undertaken for a typical GIS orthophoto base-mapping project. Figure 1 summarizes the cross reference between the older, more traditional operations required for orthorectification and the newer softcopy processes.

 

Figure 1 - Traditional versus Digital (Softcopy) Photogrammetry

 

As you can see in figure 1, softcopy processes run parallel to the traditional processes when viewed sequentially. However, just as CAD was to traditional drafting, the benefits of softcopy over traditional photogrammetry soon become apparent when related to the data-intensive GIS environment.

 

Softcopy Photogrammetry: The Processes

Softcopy photogrammetry refers to the automated realm of processes previously performed via opto-mechanical devices such as stereo plotters, point marking tools, and photo labs with large-format copy cameras. Thanks to research and development coming out of the defense community and, more recently, the motion picture industry, raster scanners, workstations, and powerful software have now redefined the science of photogrammetry. This marriage of technologies has brought the power of rapidly generated and highly accurate mapping to the doorstep of end users. When softcopy techniques are integrated with technology such as Airborne Global Positioning Systems (ABGPS), which automate and drastically reduce the labor- and time-intensive tasks of surveying ground control panels, additional time and resource savings are also realized.

On the frontier of related technology, these results are enhanced by another order of magnitude when Inertial Measurement Unit (IMU) and Light Detection and Ranging (LIDAR) technologies are integrated.

A digital orthophoto is a digital image of an aerial photograph in which displacements caused by the camera and the terrain have been removed. The specific processes required to accomplish a digital orthophoto project include, in sequential order: airborne GPS/photo acquisition, photo scanning, triangulation, autocorrelation, editing, mosaicking, vector extraction (point, line, and area features), sheet formatting (tiling), and CD production.

 

The Photo Mission and Airborne GPS

To initiate a project, a flight plan is prepared by digital means for the aerial photo mission. The mission is flown with an aerial camera calibrated by the USGS. When airborne GPS is used for a mapping project, the number of surveyed ground control points is reduced significantly, thereby reducing the field time involved by mapping survey personnel by as much as 85%. Upon completion of the mission, the GPS data is processed using precise differential software that combines the ground and airborne phase measurements. This method provides horizontal and vertical control to within 10 cm accuracy or better.

 

Film Scanning

After acquisition of the color aerial photography, diapositives (mylar contact prints) are generated and scanned by a calibrated photogrammetric scanner at resolutions of approximately 20 microns, or 1,270 pixels per inch. The digital data is then transferred to magnetic tape media for softcopy processing.
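To make the scan numbers concrete, the short Python sketch below converts a 20-micron pixel to pixels per inch and to a ground footprint per pixel. The 1:7200 photo scale used in the example is a hypothetical value chosen only for illustration, not a specification from any project described in this paper.

MICRONS_PER_INCH = 25_400

def pixels_per_inch(pixel_size_microns):
    """Convert a scanner pixel size in microns to pixels per inch."""
    return MICRONS_PER_INCH / pixel_size_microns

def ground_pixel_size_m(pixel_size_microns, photo_scale_denominator):
    """Ground footprint of one scanned pixel, in meters, for an assumed photo scale."""
    return (pixel_size_microns * 1e-6) * photo_scale_denominator

print(pixels_per_inch(20))               # -> 1270.0 pixels per inch
print(ground_pixel_size_m(20, 7200))     # -> 0.144 m per pixel at a 1:7200 photo scale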

 

Aerial Triangulation/Air Trig (AT)

Once the diapositives are scanned, aerial triangulation is initiated. This is the process by which each stereo model's orientation (roll, pitch, and yaw angles of the airplane at the time of exposure) is derived. In the softcopy environment, mensuration is computer aided by image matching techniques. Individual points are matched by an algorithm designed to look at digital patches of imagery common to both diapositives, thereby eliminating errors caused by operator interpretation. The semi-automated AT process increases throughput while also increasing accuracy. It is our experience that this process is about 30% more accurate than its traditional counterpart. Later in this paper we will see how, with an Inertial Measurement Unit, orientation angles are collected in real time, thereby eliminating several large batch operations.
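The patch matching described above can be illustrated with a minimal sketch. Normalized cross-correlation is one common patch-similarity measure; the code below shows the general technique under that assumption and is not the specific algorithm used in our production software.

import numpy as np

def normalized_cross_correlation(patch_a, patch_b):
    """Similarity of two equally sized image patches, in the range [-1, 1]."""
    a = patch_a.astype(float) - patch_a.mean()
    b = patch_b.astype(float) - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_match(template, search):
    """Slide the template over a search window; return (row, col, score) of the best fit."""
    th, tw = template.shape
    best = (0, 0, -1.0)
    for r in range(search.shape[0] - th + 1):
        for c in range(search.shape[1] - tw + 1):
            score = normalized_cross_correlation(template, search[r:r + th, c:c + tw])
            if score > best[2]:
                best = (r, c, score)
    return best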

 

AutoCorrelation (AC)

Following the aerial triangulation adjustment, digital stereo models are created. The objective of this process is to create a digital surface from the stereo models that represents the real world. The softcopy method of autocorrelation (AC) uses a batch program without operator intervention. AC is the process by which the computer matches patches of imagery on each side of the stereo pair; if it determines the imagery to be similar enough, it calculates the positions of the patches and works through the photogrammetric (collinearity) equations to calculate a three-dimensional coordinate. This process is repeated tens of thousands of times for each stereo pair, with the result being a dense grid of mass points throughout the stereo model. Mass points are the calculated heights of the objects that the pixel patches represent, be they dirt, concrete, bush, tree, or building. Since an accurate surface requires all mass points to lie on the ground, any point that falls on an above-ground feature such as a tree or building must be edited by a stereo-capable operator, who "pushes" the point down to the ground or deletes it. Terrain break lines are also added in this stereo phase where required. Later, we will discuss how the use of LIDAR terrain modeling technology supplants this process in many cases.
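As a minimal illustration of how a matched pair of image patches becomes a three-dimensional mass point, the sketch below intersects two rays, one from each exposure station, and returns the midpoint of their closest approach. It assumes the exposure positions and ray directions are already known from the aerial triangulation; it is a simplification of the full collinearity solution, not our production code, and the example coordinates are made up.

import numpy as np

def intersect_rays(c1, d1, c2, d2):
    """Closest-approach midpoint of two rays, each given by a camera center c and a
    pointing direction d (the ray from the exposure station through the matched pixel)."""
    c1, d1, c2, d2 = (np.asarray(v, dtype=float) for v in (c1, d1, c2, d2))
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    # Solve for t1, t2 so that c1 + t1*d1 and c2 + t2*d2 are as close as possible.
    a = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
    t1, t2 = np.linalg.solve(a, b)
    return (c1 + t1 * d1 + c2 + t2 * d2) / 2.0

# Illustrative exposure stations and directions only:
print(intersect_rays([0, 0, 3000], [0.1, 0.0, -1.0], [600, 0, 3000], [-0.1, 0.0, -1.0]))
# -> approximately [300, 0, 0]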

 

Digital Orthophotography and Mosaics

In softcopy, all setup issues have already been addressed by the time the surfaces are completed; hence, only the rectification process is an additional step. This involves fitting the digital imagery to the surface created by autocorrelation or, as described later, a LIDAR-derived surface. As a result, the additional cost to produce digital orthophotos in a softcopy environment is significantly less than adding them to a traditional photogrammetric workflow (figure 2).
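The rectification step itself can be sketched as follows: for each cell of the output grid, look up the elevation from the surface, project the resulting ground point into the photo with the collinearity equations, and sample the scanned image at that location. The coordinate-transform helpers (dem_to_ground, photo_to_pixel) are hypothetical placeholders; a production system would also apply camera calibration terms and better resampling than the nearest-neighbour lookup shown here.

import numpy as np

def collinearity_project(ground_xyz, camera_xyz, rotation_m, focal_len):
    """Project a ground point into photo coordinates via the collinearity equations.
    rotation_m rotates object-space vectors into the image-space frame."""
    dx = np.asarray(ground_xyz, dtype=float) - np.asarray(camera_xyz, dtype=float)
    u, v, w = rotation_m @ dx
    return -focal_len * u / w, -focal_len * v / w

def orthorectify(dem, dem_to_ground, image, photo_to_pixel, camera_xyz, rotation_m, focal_len):
    """For each DEM cell: take its elevation, project the 3D ground point into the
    photo, and sample the scanned image there (nearest-neighbour resampling)."""
    ortho = np.zeros(dem.shape, dtype=image.dtype)
    for row in range(dem.shape[0]):
        for col in range(dem.shape[1]):
            x, y = dem_to_ground(row, col)                      # ground coordinates of the cell
            px, py = collinearity_project((x, y, dem[row, col]), camera_xyz, rotation_m, focal_len)
            r, c = photo_to_pixel(px, py)                       # photo coordinates -> pixel indices
            if 0 <= r < image.shape[0] and 0 <= c < image.shape[1]:
                ortho[row, col] = image[int(r), int(c)]
    return ortho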

 

 

Figure 2 - Sample of digital orthophotography corrected using autocorrelated surface

 

After orthorectification has been completed, a necessary QC step is to visually check the edgematch to all adjacent orthophotos. Using hardware born out of Hollywood, we are able to digitally mosaic hundreds of digital orthophotos at a time. Radiometric (tone and color) corrections are performed by image histogram analysis and applied across the entire mosaic. Because the mosaic is digital, it is also possible to resample it to any larger pixel size for display at various scales and as a backdrop to any GIS data.
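Histogram matching is one simple way to carry out the kind of radiometric balancing described above; the single-band sketch below remaps the grey values of a source image so its cumulative histogram follows that of a reference image. It illustrates the general idea only and is not the proprietary adjustment applied by our mosaicking hardware.

import numpy as np

def match_histogram(source, reference):
    """Remap the grey values of `source` so its cumulative histogram follows that of
    `reference` -- one simple way to balance tone between images in a mosaic."""
    src_values, src_counts = np.unique(source.ravel(), return_counts=True)
    ref_values, ref_counts = np.unique(reference.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # For each source grey level, find the reference level with the nearest CDF value.
    mapped_values = np.interp(src_cdf, ref_cdf, ref_values)
    return np.interp(source, src_values, mapped_values)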

 

Vector Capture

Most GIS databases are currently made up of vector information including boundaries, road details, buildings, parcels, and drainage features. If captured by the older heads-up digitizing method, the features are traced in the instrument by following each feature and are displayed on a graphics workstation in close proximity to the instrument itself. In softcopy, the vectors and the contours are superimposed in color and 3D on top of the stereo image displayed on the workstation's screen. Therefore, as soon as the operator digitizes a feature it immediately appears on the imagery in front of him or her. This workflow aids in the QC process because all vectors are dynamically updated when changes are made (figure 3). Standard off-the-shelf CAD packages are used to collect and manipulate the vector information.

 

 

Figure 3 - Digital orthophotography with softcopy derived vector features

 

Digital Output

Final products are delivered on CDs. In this step the mosaic is tiled into indexed files of manageable size. The delivery format for the digital raster products is typically 5,000 x 5,000 pixels per tile (75-megabyte files) and 8 tiles per CD. All vector data accompanies its respective raster file. We find that this is a manageable size and format for quick display when using desktop CAD or GIS systems. We find ArcView is an excellent tool for both displaying and plotting softcopy-derived orthophotography.
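Tiling a large mosaic into the 5,000 x 5,000-pixel delivery format can be sketched in a few lines. A three-band, 8-bit tile at this size is about 75 megabytes, matching the file sizes noted above; the (row, column) indexing scheme shown is an illustrative assumption, not a delivery specification.

TILE = 5000  # pixels per tile edge; a 3-band, 8-bit tile is ~75 MB (5000 * 5000 * 3 bytes)

def tile_mosaic(mosaic):
    """Split a mosaic array (rows x cols x bands) into indexed TILE x TILE blocks.
    Edge tiles may be smaller than TILE x TILE."""
    tiles = {}
    for r0 in range(0, mosaic.shape[0], TILE):
        for c0 in range(0, mosaic.shape[1], TILE):
            tiles[(r0 // TILE, c0 // TILE)] = mosaic[r0:r0 + TILE, c0:c0 + TILE]
    return tiles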

 

 

The Next Mapping Revolution

 

It is generally agreed that the integration of new technologies is fueling the spatial science realm. In photogrammetry we refer to our integration process as "Sensor Fusion". Two of the most revolutionary technologies to enter the domestic mapping world to date are Inertial Measurement and LIDAR. Harnessed together through the use of GPS technology and its inherent use of time (t) as a foundation, these two sciences are molding the future of airborne mapping (figure 4). To the GIS data user this means that accurate data, whether natural color, multispectral, or thermal, can now be produced less expensively and faster than ever before.

 

 

Figure 4 - LIDAR System Hardware with Inertial Measurement Unit

 

Inertial Measuring Unit (IMU)

 

At the heart of sensor fusion is the integration of differential GPS onboard the aircraft (x, y, z, and time) with an inertial measuring unit (IMU) on each sensor to precisely position and provide orientation parameters (pitch, roll, and heading) for all sensors. In traditional aerial photography, the position and orientation of an airborne sensor are typically unknown as it collects data. Classical procedures require surveying a number of points on the ground and using advanced mathematical adjustments to compute the position and orientation of the sensor. This process, referred to as "airtrig", consumes a significant part of the mapping process and is a requirement to successfully utilize imagery for further compilation and extraction of elevations and features. Further, many remote sensing projects using digital and/or multispectral camera systems simply forego this process, resulting in compromised spatial accuracies. Recent studies have shown that the accuracy required for the position of the sensor varies with the scale of the desired map products, ranging from 5 to 110 centimeters for map scales from 1:600 to 1:7200. The required accuracy of the orientation parameters, however, remains constant at all scales.

Generating accurate map products, with the various images rectified to each other, requires the camera orientation to be measured to an accuracy of 10 arc seconds. Over the past several years, differential GPS has been documented to provide the stated positional accuracies, using on-the-fly ambiguity resolution techniques for the carrier phase ambiguities for the higher accuracies and differential code techniques for the lower accuracies. Using GPS to position the sensor itself, without orientation parameters, has allowed a reduction in ground control of 90 to 95%. At the same time, the mathematical airtrig process has become more complex. By installing an inertial measuring unit on the camera and measuring the orientation of the platform to 10 arc second accuracy, the requirement to install ground control and the airtrig process become unnecessary to accurately rectify the imagery for the map compilation process. This is extremely significant to end users of camera or sensor data utilizing an IMU because it assures consistent geo-referenced data that will easily merge into GIS-based applications and studies such as airborne multispectral analysis, temporal change detection, land use inventories, etc.
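The way the IMU angles replace the recovered orientation can be sketched as a simple rotation matrix built from roll, pitch, and heading. Axis conventions and sign choices vary between systems, and the boresight and lever-arm calibrations applied in practice are omitted, so this is an illustrative assumption rather than any particular vendor's formulation.

import numpy as np

def rotation_from_rph(roll, pitch, heading):
    """3x3 rotation built from IMU roll, pitch, and heading angles (radians).
    Boresight and lever-arm calibration are omitted for brevity."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    ch, sh = np.cos(heading), np.sin(heading)
    r_roll = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])      # about the x (flight) axis
    r_pitch = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])     # about the y (wing) axis
    r_heading = np.array([[ch, -sh, 0], [sh, ch, 0], [0, 0, 1]])   # about the vertical axis
    return r_heading @ r_pitch @ r_roll

# With GPS supplying the exposure-station position and the IMU supplying this rotation,
# each image carries its exterior orientation directly, which is what makes the
# ground-control and airtrig steps unnecessary.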

The consistency with which IMU technology produces absolute spatial accuracies in the ten arc second range adds credibility and defensibility to data results if they are ever challenged. There is also the obvious benefit of saving time, and hence money, when all things are considered in decision processes that rely on the data.

 

LIDAR (Light Detection and Ranging)

 

Airborne laser systems have been in use for many years to measure points on the earth's surface. As with so much of our industry, LIDAR actually grew out of the defense industry. In the early 1980s a second generation of systems was in use around the world. However, these systems were expensive and had limited capability, generally restricted to use by Federal agencies. Early experiments recognized several limitations, such as the requirement for enhanced laser systems with improved signal-to-noise ratios, shorter laser pulse widths, and higher laser repetition rates. However, the single largest concern was the ability to geo-locate the sensor in the aircraft. Further improvements to aircraft positioning and attitude subsystems were required to facilitate application to the mapping community. As discussed above, with the enhanced computer power available today and with the latest positioning and orientation systems, LIDAR systems have become a commercially viable alternative for development of digital elevation models (DEMs) of the earth's surface.

Airborne LIDAR is simply an aircraft mounted laser system designed to measure the three dimensional coordinates of a passive target. This is achieved by combining a laser with positioning and orientation measurements. The laser measures the range to the ground surface, or target, and when combined with the position and orientation of the sensor yields the 3D position of the target (figure 5).

 

Figure 5 - LIDAR derived digital elevation model of Roswell, New Mexico

The laser range subsystem operates differently than a surveyor's laser distance measuring unit. A surveyor's "distance meter" measures the phase shift of a laser pulse, modulated through a series of frequencies to resolve finer and finer units until the total distance is measured to a high degree of precision.

For LIDAR applications, the laser generates the range to a passive target by measuring the time of flight for a single laser pulse to make the round trip from the laser source to the target and back to the laser receiver. LIDAR systems typically use a single wavelength, usually 1064 nm or 532 nm, corresponding to the infrared and green areas of the electromagnetic spectrum, respectively. The electronic circuits measure this time to an accuracy of about 1/3 nanosecond, which corresponds to a distance resolution of about 5 cm. The ranging is primarily dependent on the ability of a system to detect the half-peak points of the transmit and receive pulses. This ability is a function of the pulse width and the "steepness" of the rising and falling edges of the pulse.
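The time-of-flight relationship is simple enough to state directly: the one-way range is half the round-trip travel time multiplied by the speed of light, so a timing resolution of about 1/3 nanosecond corresponds to roughly 5 cm of range, as the short sketch below confirms.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def range_from_time_of_flight(round_trip_seconds):
    """One-way range from the measured round-trip travel time of a laser pulse."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A timing resolution of about 1/3 nanosecond corresponds to roughly 5 cm of range:
print(range_from_time_of_flight((1.0 / 3.0) * 1e-9))   # -> ~0.05 m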

The ability to detect a return signal from targets with weak reflective signatures and from targets at a great distance is a function of the laser power. Finally, the number of laser spots hitting the ground, hence the density of the resulting DEM, is a function of the pulse rate of the laser. Unfortunately, these three characteristics (pulse width, laser power and pulse rate) work against each other in the selection of the best laser source. As the pulse rate increases, to produce a DEM with greater density, the power decreases and the pulse width increases. The lower power results in fewer target returns and the increased width results in a lower range resolution.

The position of the final point on the ground is derived by combining the range with the position and orientation information from the aircraft positioning system. This is typically achieved through the use of differential carrier phase GPS and an inertial measuring unit. Some systems have included the positions of ground targets to aid the orientation process, due to the limited accuracy of low-cost inertial units, and to help solve the GPS ambiguity problem.
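Putting the pieces together, each laser return can be geo-referenced by adding the range vector, rotated from the scanner frame into the mapping frame, to the GPS-derived sensor position. The sketch below assumes a single cross-track scan angle and reuses a simple roll/pitch/heading rotation like the one shown earlier; real systems also apply boresight and lever-arm corrections, so this is an illustration only.

import numpy as np

def lidar_ground_point(sensor_xyz, rotation_m, scan_angle_rad, laser_range):
    """Ground coordinates of a laser return: the GPS position of the sensor plus the
    range vector rotated from the scanner frame into the mapping frame."""
    # Pointing vector in the scanner frame: nominally straight down, tilted cross-track.
    pointing = np.array([0.0, np.sin(scan_angle_rad), -np.cos(scan_angle_rad)])
    return np.asarray(sensor_xyz, dtype=float) + laser_range * (rotation_m @ pointing)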

Early systems simply shot the laser downward, normal to the aircraft in a profile along the flight path. Systems today utilize a scanning mechanism to scan a path beneath the aircraft, yielding a pattern of points on the ground. These points then become the basis for a DEM after a minimal amount of processing.

A key characteristic of all systems operating today is the scan width, or ground footprint. Today's systems provide a footprint of roughly 20 to 30% of the ground coverage of a conventional camera. The most recent developments will operate at altitudes and swath widths that match the footprint of conventional camera systems, yet remain tunable to allow the ground to be "painted" with data for engineering applications. The most direct application of LIDAR technology is the creation of a DEM for use in mapping products. This technique is much faster than conventional photogrammetric techniques, and the data is easily combined, through many commercial software products, into numerous mapping applications.
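The relationship between flying height, scan angle, swath width, and point density can be sketched with simple geometry. The altitude, scan angle, pulse rate, and ground speed in the example are hypothetical values chosen only to illustrate the calculation, not parameters of any system described in this paper.

import numpy as np

def swath_width(altitude_m, full_scan_angle_deg):
    """Ground swath width for a given flying height above ground and full scan angle."""
    return 2.0 * altitude_m * np.tan(np.radians(full_scan_angle_deg) / 2.0)

def mean_point_spacing(pulse_rate_hz, ground_speed_ms, swath_m):
    """Rough average spacing between laser points, assuming pulses spread evenly over the swath."""
    area_per_second = ground_speed_ms * swath_m        # square meters of new coverage each second
    return np.sqrt(area_per_second / pulse_rate_hz)    # meters between neighbouring points

print(swath_width(1000, 30))                   # -> ~536 m swath from 1,000 m above ground
print(mean_point_spacing(10_000, 60, 536.0))   # -> ~1.8 m average point spacing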

As an active device, the laser pulse is less susceptible to shadows and sun angle. The timing and recording units may also record multiple returns from a single laser pulse, indicating canopy height or vegetation density. The data is easily contoured for topographic applications, and because each data point is geo-referenced, the data is also easily merged with other feature data or imagery sources. LIDAR data is typically 3 or more times denser than photogrammetrically captured elevation data and provides an ideal DEM for the rectification of orthophoto images. Whereas photogrammetric elevation accuracy is a direct function of the flying height, LIDAR is relatively insensitive to height, yielding similar accuracies from any height. One of the most exciting applications of LIDAR data is its value in aiding automated feature extraction. LIDAR's ability to easily discriminate the heights of nearby features yields valuable data for the automated interpretation and extraction of those features. The characteristics of the laser frequency may also play a role in this process. For example, infrared laser light provides excellent discrimination along water boundaries. LIDAR data can also be collected to "see" discrete features such as power lines, providing a unique opportunity for engineering applications.
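A minimal sketch of the multiple-return idea: subtracting last-return elevations (ground) from first-return elevations (canopy top) gives an approximate vegetation height. The example elevations are made up, and this is an illustration of the concept rather than a complete vegetation classification workflow.

import numpy as np

def canopy_height(first_return_z, last_return_z):
    """Approximate vegetation height from multi-return LIDAR: the first return tends to
    come from the canopy top, the last return from the ground beneath it."""
    return np.asarray(first_return_z, dtype=float) - np.asarray(last_return_z, dtype=float)

print(canopy_height([152.4, 148.9], [140.2, 140.1]))   # -> heights of ~12.2 m and ~8.8 m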

 

Case Study: New Mexico Highway 44 Widening Project

 

This project involved collection and processing of both natural color digital orthophotography and topographic LIDAR data for a 2,000-ft-wide by 120-mile-long highway corridor for New Mexico Highway 44 between San Ysidro and Bloomfield. The objective of the overall 2-year effort is to widen the existing two-lane highway into a four-lane highway in a minimum amount of time. LIDAR data for the 120 miles was collected in one day, in conjunction with airborne GPS used to augment ground control. LIDAR elevation data points were collected at 10- to 14-ft postings with 6-inch vertical accuracy. The LIDAR data was then processed over the course of 4 weeks into an accurate terrain surface. This was the first time LIDAR data had been used for a New Mexico highway design project (figure 6).

 

Figure 6 - LIDAR derived digital terrain model of Rio Puerco River crossing of NM 44

It is estimated that by using LIDAR to define detailed terrain features in the center of the corridor, several months were saved in the preliminary design process.

 

Digital Orthophotography and LIDAR

 

With the terrain component of the orthorectification process handled by LIDAR, GIS users become the beneficiaries of terrain data and orthorectified photography with creation turnaround times never before seen commercially. Because all of the data share a common geo-reference, they can more easily be utilized for analysis with other data sets and software tools such as Esri's 3D Analyst and GRID. Figure 7 is an example of a three-dimensional rendering of digital orthophotography draped on top of the LIDAR-derived terrain surface for New Mexico Highway 44.

 

 

Figure 7 - Digital orthophotography of NM 44 draped over LIDAR derived surface

 

Conclusion

Advancements in the collection and delivery of color digital orthophotography are making strides parallel to those seen in the computer industry. Digital orthophotography, rapidly becoming a valuable commodity, can now be delivered to end users faster than ever before through the fusing of Inertial Measurement and LIDAR technologies. As the GIS industry grows exponentially, sensor fusion will serve as a boon to data-hungry GIS users and decision-makers tasked with "doing more with less".

 

 

 

 

Acknowledgements:

The authors would like to thank our co-partner, 3001 Inc., for their contributions to the development of the AeroScan LIDAR System.

 

Author Information:

Robert G. Kletzli

President

EarthData International of New Mexico

6100 Seagull Lane NE, Suite 105

Albuquerque, New Mexico 87109

505-872-0207

bkletzli@earthdata.com

www.earthdata.com

 

John L. Peterson

Business Development Manager

EarthData International of New Mexico

6100 Seagull Lane NE, Suite 105

Albuquerque, New Mexico 87109

505-872-0207

jpeterson@earthdata.com

www.earthdata.com