Two years ago, the term "softcopy photogrammetry" was almost unheard of in GIS circles. Today, the availability of low cost softcopy photogrammetry systems has opened up a vast range of data provision and updating options to GIS users. The two primary datasets created by softcopy photogrammetry are terrain data, in the form of a Digital Terrain Model (DTM), and an orthorectified image (orthoimage), which is a georeferenced image free from any sensor or relief distortion. This paper discusses why softcopy photogrammetry is needed, covers in brief the digital processes involved in the production of such data, compares these processes with traditional manual methods, and closes with a description of the types of GIS project currently integrating softcopy photogrammetry.

THE IMPORTANCE OF TERRAIN DATA

A large number of military and commercial GIS applications rely entirely on the ready availability of digital terrain databases. Their success or failure depends on the timely production and ultimate accuracy of the terrain models that are fed into them. The military applications range from simulation, mission planning and mission rehearsal to terrain referenced navigation and weapons guidance systems. Commercial applications include land use monitoring and assessment, such as the EC MARS program, and base mapping for oil and gas exploration activities.

At a time when budgets are under scrutiny, there is a greater need than ever for digital terrain data to be generated cost effectively and delivered in a timely manner, without compromising its accuracy. With this in mind, new softcopy photogrammetric techniques have evolved which go some way towards providing a solution. To understand how this has been achieved, it is necessary to look at conventional techniques for building terrain databases.

There are three common methods, the most popular of which is traditional analytical photogrammetry. An alternative is digitising contours or spot heights from hardcopy maps and creating a surface from them. The derivation of a surface from digitised height features involves a degree of interpolation, so the surface will be inherently more generalised than a photogrammetric compilation. Given that maps and charts are themselves derived from aerial photogrammetric surveys, any errors in the original photogrammetric compilation will also be propagated once they are digitised; true photogrammetric compilation will therefore always provide a more accurate terrain database than map digitising.

A third source of terrain data is to use existing products. Terrain databases already exist in digital form and are available as standard products, such as the US Defense Mapping Agency's Digital Terrain Elevation Data (DTED). Similar products are available from other mapping organisations, including the USGS, the UK Ordnance Survey and a number of other national mapping agencies throughout the world. DTED has been the most widely available dataset for military applications and is widely accepted, but its predominance masks some inherent problems associated with it and with other digital elevation model (DEM) products. With DTED, the resolution of the height data is fixed and is generally available at either 100m spacing (Level 1) or at a nominal 50m spacing (Level 2). However, because of its higher resolution, Level 2 is only available to authorised users on a restricted basis.
Whilst 100m spacing may be suitable for broad area applications, higher resolution DEM data is needed to provide greater levels of detail and to fulfil the potential of terrain based applications. The Level 1 product, even at its 100m resolution, represents a very generalised view of the terrain, and some of the finer terrain detail is lost. This is sometimes done deliberately, to protect compilation sources and provide collateral against national sources, or simply because insufficiently accurate source material was available at the time of compilation. Either way, the relatively poor resolution constrains the potential capabilities of the applications.

This highlights the second major drawback: the user has no control over the accuracy and quality of the DEM. DTED is compiled from whatever sources were available for a given area, using either photogrammetric extraction techniques or digitised mapping. RMS figures are provided for both horizontal and vertical accuracies, but in some instances the application may demand higher levels of accuracy and detail. Mission planning projects, for example, require higher quality at terminal locations, whereas a lower quality may suffice for en route positions. The user needs to be able to specify the resolution required, to vary it to suit the application, and to edit the DEM to increase its accuracy if needed. The same criteria apply to commercial applications, where accuracy has an impact upon commercial decisions rather than human lives.

THE PHOTOGRAMMETRIC PROCESS

Photogrammetry has established itself as the main technique for obtaining precise three dimensional measurements. It involves the use of overlapping images to recreate the original stereo geometry of each adjacent pair of images, from which precise three dimensional measurements can be derived. Conventional photogrammetry uses specialist and expensive plotting equipment to mimic, with optical trains, the stereo geometry at the moment of image exposure. The operator first has to measure calibrated points on the film, either fiducials or reseau marks, to establish a relationship between film space coordinates and model space coordinates. The machines are set up, or 'oriented', using a pair of original hardcopy diapositives in left and right stage plates. Each stage plate can be positioned with respect to the other and oriented in x, y and z using threaded spindles, to emulate the precise attitude and position of each diapositive relative to the other. In this way any roll, pitch or yaw in the taking camera or satellite can be recreated, replicating the attitude and position of each image at the moment of its exposure. At this point the images are said to be in relative orientation. Absolute orientation, based on real world coordinates, requires the operator to observe and measure known ground control points in the model space as well.

Once oriented, all residual y-parallax will have been eliminated, allowing the operator to view the model in stereo, a projective geometry termed epipolar. When viewed in stereo, conjugate image points appear in different positions in each of the images. This 'apparent' movement of the imaged point is due to the movement of the observer (in this case the aircraft or satellite platform) and is known as parallax. Its measurement forms the basis of determining height. The only remaining parallax will be in the x direction, the amount of x-parallax being a function of height.
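For an idealised pair of near-vertical frame photographs, that relationship can be written p = B.f / (H - h), where p is the absolute x-parallax, B the air base, f the focal length, H the flying height above the datum and h the elevation of the imaged point. The following is a minimal sketch of this relationship in Python; the function name and units are illustrative only and are not taken from any particular system.

    # Illustrative sketch only: idealised, truly vertical frame photography is assumed.
    # Units: parallax and focal length in millimetres; flying height, air base and
    # the returned elevation in metres.
    def height_from_parallax(parallax_mm, flying_height_m, air_base_m, focal_length_mm):
        """Elevation above datum from absolute x-parallax, using p = B*f / (H - h)."""
        return flying_height_m - (air_base_m * focal_length_mm) / parallax_mm

    # Example: H = 3000 m, B = 900 m, f = 152 mm; a parallax of 54.72 mm gives h = 500 m.
    print(height_from_parallax(54.72, 3000.0, 900.0, 152.0))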
Using a half mark etched in the optics of each lens, the operator can 'float' the point and move it in a vertical direction. By placing the point "on the ground", individual features in the model can be heighted.

This process has some fundamental drawbacks when compared to digital techniques. Firstly, it is based entirely on very specialised hardware. It is largely mechanical (analogue), although some plotters can be upgraded with linear encoders driven by servo-motors (analytical), which will drive the operator to pre-defined points for measurement. Both analogue and analytical machines, however, are designed to carry out these specific tasks alone and cannot be used for other applications. Secondly, the process is a highly skilled one which requires many hours of training, and hence increases staffing costs. Most of the operations are also very labour intensive, particularly the collection of height data, as each point has to be visited and measured individually. Experienced operators can measure between 6 and 10 points a minute and, like all manual work, this rate can only be maintained at the desired accuracy for a limited period, certainly no more than 8 hours. This too contributes significantly to overall production costs.

AUTOMATED DEM GENERATION

With the advent of sophisticated photogrammetric software and ever increasing, inexpensive computer power, softcopy photogrammetric workstations replace the human operator to a large extent and create the DTM automatically by means of digital image processing. With production speeds in excess of 150 points per second, DTM production time is significantly reduced. The history of digital photogrammetry can be traced back to the late 1950s, since which time photogrammetry has undergone tremendous change, and softcopy photogrammetry now offers the potential to generate terrain databases with greater speed, at lower cost and with less training and photogrammetric skill than ever before.

The major difference between digital and conventional photogrammetric systems is that the images used in digital systems are in digital format and hence suitable for processing by computers. If conventional aerial photographs are used, they need to be scanned prior to input into the system. The systems can also make use of image data collected digitally, such as satellite imagery. In this context the SPOT satellite is the most commonly used, as it currently provides the highest resolution stereo overlap coverage, although other digital CCD cameras could also be used.

As with conventional analytical instruments, digital photogrammetric workstations carry out the same orientation process in order to model the original stereo geometry. The principles are exactly the same, but the implementation is faster and offers greater ease of use through intuitive software interfaces. A variety of automated tools, based on cross correlation of image patches, locate and measure fiducials, tie points, pass points and ground control points in the imagery. The correlator can be trained to recognise and measure fiducials for various camera types and, with the exception of observing a minimal amount of ground control, the entire orientation process is automated, requiring very little attendance and operator time. The area where most research has been concentrated is that of automated DEM collection.
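Both the automated orientation measurements described above and the DEM correlation discussed in the next section rest on the same basic operation: a small reference patch from one image is compared with candidate patches in the other, and the position with the highest normalised cross correlation is kept. The following is a minimal, purely illustrative sketch of that comparison using Python and NumPy; production correlators add pyramid search, epipolar constraints and sub-pixel refinement, and the function names here are invented for the example.

    import numpy as np

    def ncc(patch_a, patch_b):
        """Zero-mean normalised cross correlation of two equal-sized image patches."""
        a = patch_a - patch_a.mean()
        b = patch_b - patch_b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return float((a * b).sum() / denom) if denom > 0 else 0.0

    def best_match(template, search_window):
        """Slide the template across the search window; return (row, col, score) of the peak."""
        th, tw = template.shape
        best = (0, 0, -1.0)
        for r in range(search_window.shape[0] - th + 1):
            for c in range(search_window.shape[1] - tw + 1):
                score = ncc(template, search_window[r:r + th, c:c + tw])
                if score > best[2]:
                    best = (r, c, score)
        return best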
Sophisticated algorithms have been developed to replace manual collection and, whilst there are differences between the various collection algorithms, the problem of automating DEM capture has generally been solved. The methods most commonly used are either area-based or feature-based matching techniques, using correlation of small image templates between image pairs. Once the model is oriented, the software computes the coefficients of a set of rational polynomials which summarise the stereo geometry; these are used by the DEM correlator to emulate the projective geometry of the cameras.

The normalised cross correlation approach discussed here is an area-based algorithm that digitally correlates points based on the tonal variations present in each image. Areas with high tonal and textural variation will be correlated very quickly, as the correlator uses the high frequency components to correlate on; image content is the single biggest factor in correlation success. Low frequency areas will generally correlate more slowly, although the templates automatically increase in size until sufficient texture exists within the template to allow correlation. The correlator visits and attempts to height every point in the DEM and, if a correlation cannot be accurately computed, a height is interpolated. The interpolation is performed by a weighting technique based on the radial distances of points in a neighbourhood.

Evidence has shown that collecting hierarchically, from a very coarse resolution through successively finer levels, reduces the possibility of generating a false fix. By reducing the scale of the imagery, changes in elevation are less pronounced in image space, so the effect of height variations is minimised and searches over broad ranges of elevation are quicker. Correlation at reduced scales also tends to reduce the confusion between similar looking objects, by locking on to the gross areas which contain them. The heights derived from each level are used as estimates for the next higher resolution collection level.

EDITING TOOLS AND ACCURACY

Central to generating an accurate DTM is the ability to edit the computer generated model. This is required where man made features (with sharp edges) need to be highlighted, where cliff edges need to be added as 'breaklines', or simply where the computer has failed to find suitable matching points from which to generate a height. Correlated points are assigned a 'quality figure' based on a user-defined set of signal-to-noise ranges, as either 'good', 'fair' or 'poor'. Failed attempts at correlation are labelled 'interpolated', and all of these rankings are made available to the operator to assist at the editing stage.

Whilst the automated collection generates heights based on statistical correlations, it is important, for the sake of ensuring accuracy, that the height values can be validated by the operator. The software displays all the points at their correlated positions in a stereo view, colour coded to allow a rapid visual inspection of the whole model. The editing tools provide an interactive method of modifying the height of any points the operator deems to be in error. In this way the resulting DEM has both a statistical statement of accuracy and one that has been verified by an operator using their skill and judgment.
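As an illustration of the quality figures and of the interpolation of failed points described above, the following is a minimal sketch. The signal-to-noise thresholds and names are invented for the example (in the software the ranges are user defined), and inverse-distance weighting is shown here as one plausible form of the radial-distance weighting mentioned earlier.

    import math

    def quality_label(snr):
        """Classify a correlated post by signal-to-noise ratio; None means correlation failed."""
        if snr is None:
            return 'interpolated'
        if snr >= 5.0:          # illustrative threshold only
            return 'good'
        if snr >= 2.0:          # illustrative threshold only
            return 'fair'
        return 'poor'

    def interpolated_height(x, y, neighbours, power=2.0):
        """Inverse-distance weighted height from neighbouring correlated posts [(xi, yi, zi), ...]."""
        num = den = 0.0
        for xi, yi, zi in neighbours:
            d = math.hypot(x - xi, y - yi)
            if d == 0.0:
                return zi
            w = 1.0 / d ** power
            num += w * zi
            den += w
        return num / den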
As the accuracy of the correlation depends on the accuracy with which the stereo geometry was computed, the software provides a full summary of all mathematical calculations, including standard deviations for the final computations of camera position and attitude.

DERIVED PRODUCTS

As well as being used in a range of military and commercial spatial analysis applications, the terrain database can be used to generate additional products, such as orthoimages. These are images that have been corrected for displacements due to relief variation and sensor imperfection. In any imaging system each imaged point has its own perspective geometry, and in order to view each pixel in an orthogonal projection (i.e. from a nadir view, as if each pixel were being viewed from directly above) the effects of terrain have to be removed. The DTM is used to model the relief variation present in the image, and each pixel in the raw image is resampled into an orthogonal projection which the user can define (a simplified sketch of this relief correction is given below). The orthoimage is vital, especially if perspective views are to be rendered for mission planning or visualisation, as it ensures that all features are in their true position with respect to the underlying elevation model; curious effects such as rivers flowing uphill can be avoided! It is also vital for use as a highly accurate, up to date base map for commercial applications, including database updating.

USES WITHIN GIS APPLICATIONS

Uses within GIS applications can be divided into two primary types: those requiring height information (the DTM) or a derivative (slope, aspect), and those requiring high precision base maps (the orthoimage), either as backdrops or as a source of vector data. Many spatial modelling applications, such as site location and route planning, require height derived "layers" as part of the process. For example, where new housing developments are being planned, combining soil type with slope can show areas where land slippage may occur. In military route planning, slope is again important, as certain vehicles may only be able to negotiate low angle slopes. One area where aspect (i.e. south facing, north facing, etc.) is important is vineyard location: it is important that vines are planted at the optimum location to produce the best quality grapes! Slope and aspect are simple local derivatives of the DTM grid, as sketched below.

DTMs can also be used in visualisation, specifically in environmental and military applications. The siting of new facilities can first be assessed using the spatial analysis described above. The proposed site can then be viewed in 3D, with the facility "added" to the DTM, either by adding a polygon of the appropriate value or "height" in mono view, or by accurately adding the height of the facility in stereo. This enables the user to check on its visibility from surrounding areas. Viewshed analysis can also be used in the opposite sense, to show whether your own location can be seen from other areas, for the purposes of concealment.

Orthoimages, as described above, provide the most accurate (and up to date!) base maps of all. Many natural resource management applications now simply use a symbolised base map instead of a complex vector based map. The old adage "a picture is worth a thousand words" could easily be changed to "a pixel is worth a thousand vectors" in this instance! However, the largest demand for orthoimages lies in the data provision aspect of GIS. An orthoimage can be used for generating vector map and other measurement information directly from the computer screen.
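The slope and aspect layers referred to above are computed directly from the DTM grid. The following is a minimal sketch using central differences; it assumes a regular grid stored with north at the top, and the names and conventions are illustrative rather than those of any particular package.

    import math

    def slope_aspect(dem, row, col, cell_size_m):
        """Slope (degrees) and aspect (degrees clockwise from north, downslope direction)
        at an interior cell of a regular DEM grid stored with north at row 0."""
        dz_dx = (dem[row][col + 1] - dem[row][col - 1]) / (2.0 * cell_size_m)  # east gradient
        dz_dy = (dem[row - 1][col] - dem[row + 1][col]) / (2.0 * cell_size_m)  # north gradient
        slope = math.degrees(math.atan(math.hypot(dz_dx, dz_dy)))
        aspect = (math.degrees(math.atan2(-dz_dx, -dz_dy)) + 360.0) % 360.0
        return slope, aspect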
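The relief correction described under DERIVED PRODUCTS can be illustrated for the much simplified case of a single truly vertical frame photograph, where a point of elevation h is displaced radially from the nadir point by d = r.h/H. A real orthoimage is produced by resampling every pixel through the full sensor model (the rational polynomials mentioned earlier), so the sketch below is illustrative only and its names are invented for the example.

    import math

    def relief_corrected_position(x_mm, y_mm, elevation_m, flying_height_m):
        """Remove the radial relief displacement d = r*h/H for a truly vertical frame photo.
        (x_mm, y_mm) are photo coordinates relative to the nadir point; the returned
        coordinates correspond to an orthogonal (map-like) view of the same point."""
        r = math.hypot(x_mm, y_mm)
        if r == 0.0:
            return x_mm, y_mm
        displacement = r * elevation_m / flying_height_m   # outward shift caused by relief
        scale = (r - displacement) / r
        return x_mm * scale, y_mm * scale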
In the past, extracting vector map data in this way had to be done by the photogrammetrist on an analytical stereo plotter using stereo imagery, as this was the only way to obtain accurate x, y and z map information unaffected by relief distortion. Today, the operator can simply use the mouse to digitise vector map information (and attributes) directly from the orthoimage on screen, using the computer as a "monoplotter". This de-skills the entire vector generation process and hence reduces the cost of creating the database. As this is normally the major cost component of a GIS, softcopy photogrammetry is a way of reducing that cost.

3D GIS

Finally, a glimpse into the future. The computer world as we know it is becoming a 3D world. No longer are simple planimetric views enough; users are demanding perspective views and real time flythroughs. In Autumn 95, ERDAS will be releasing its own real time 3D viewer, IMAGINE Virtual GIS, which will allow DTMs generated in IMAGINE OrthoMAX (or from any other source) to be flown around in real time. Vectors (such as ArcInfo coverages), symbols and annotation can also be draped and flown around. One unique feature of the software will be its 3D GIS capability, which allows the 3D image to be queried in real time. Essentially, it will provide all the functionality of a 2D GIS, but in 3D! This is the first step in a new direction for GIS, in which the real world can be modelled, analysed and queried in 3D on the desktop.

CONCLUSION

It is easy to appreciate the advantages brought to the production of terrain databases and derived products, such as orthoimages, by the introduction of well developed software algorithms combined with the increasing availability of powerful desktop workstations. It is fair to say that, despite these tremendous advances, there is still significant caution in the user community, and there are many published technical evaluations that bear witness to this. Without doubt, however, digital systems are here to stay and are being constantly improved. They have already proved that they offer significant improvements in production throughput, and their ability to operate with minimal operator intervention means that reduced production costs can readily be achieved. It is anticipated that fully automated systems will become available, requiring not only less operator time but also less skilled operators. As systems become easier to use, the technology will become accessible to a wider range of end users who were traditionally excluded from undertaking photogrammetric projects by the technical complexity and the level of operator training required.

From the application engineer's perspective, it is now possible to generate digital terrain models and orthoimages as and when required, on standard commercially available hardware. More importantly, the information can be generated on demand to the exact density, area and quality required by the particular project, and, as all GIS users know, having the correct data in place on day one is the first major step towards a successful project.