BUILDING LANDSCAPE VISUALIZATIONS FOR INSTRUCTION

Robert MacArthur – University of Arizona, School of Renewable Natural Resources

Aaryn Olsson – University of Arizona, Math and Computer Science

Abstract: Higher education courses in certain disciplines use GIS extensively, but GIS is rarely used in large, general education courses because the learning curve for the software is too steep. Visualization provides an instructional avenue into spatial data modeling by giving students powerful visual tools.

At the University of Arizona, students in English Composition are using visualization in land management applications (see http://ag.arizona.edu/agnet/srer/) as an aid to collaborative learning and visual impact analysis. In freshman Art courses, students are working on a campus mall redesign. They insert their sculptures into synthetic scenes. Fellow students, their parents, and the public can access these scenes through a Web browser, walk around the sculptures, move them, scale them, change their color, and so on (see http://ag.arizona.edu/agnet/icac/).

The heightened interactivity of this visualization-based learning model is pedagogically appealing. It gives students with very limited technological skills access to complex spatial data.

Background: Recent learning theories place significant value on visualization, simulations built on virtual reality, "virtual learning environments", and the like (Hodges). Learners like to control where and how they learn, and visualization makes a very complicated technology relatively simple for the non-technical to use. More recently, the emergence of technologies that merge visualization with GIS and other database technology has extended this notion of building information environments that are open to the non-technical.

We’ll examine visualization in three parts:

  1. Data capture via digital cameras or some other photographic source
  2. Computer graphics
  3. Merging these first two with non-visual data sources like DEMs and GIS to create immersive environments

1. Real world photogrammetric data capture

Visualizations can be fully computer generated – they don’t need real-world input. But the emergence of digital cameras and of software that can manipulate the imagery they capture has made creating real-world visualizations possible for anyone. Such visualizations are very popular in Web advertising, particularly in fields like real estate. A realtor can list a house, and visitors from other states can go online and "walk through" the house, going from room to room, spinning around to see the entire room, and zooming in and out on particular parts of it. In a classroom example at the University of Arizona, students in English Composition can take a "virtual field trip" of the Santa Rita Experimental Range south of Tucson:

Synthetic walk-throughs via panoramas made from mosaicked photos

Digital photos can also be enhanced with filters to create artistic effects or to highlight some feature. You can also use image processing technology to isolate or detect pixels that reflect a certain theme or phenomenon in a picture. With multi-spectral photography, even more opportunities to isolate themes abound, whether for building inventories of natural resources or for use in a modeling exercise.

Visualizations based solely on photography suffer from one limitation – they are strictly 2D. They can be used to give an illusion of 3D, but the features in them are part of a bit-plane, not free-standing data objects. Information from them can be used in GIS models, but those models are separate from the image. The features have no topological relationships with each other and are not tied to real-world coordinates, or to any other reference system for that matter.

2. Computer Graphics

Computer graphics can solve some of the limitations of photogrammetry, but at a cost. The verisimilitude of photos is lost in computer-generated elements, although some of it can be recovered with techniques such as surface rendering, texture mapping, and illumination.

There are two basic types of computer graphics – simple, pixel-based graphics such as those produced with Microsoft Paint, and the object-oriented graphics used in CAD and 3D modeling.

Pixel or bit-mapped graphics have the same constraints as imagery and photogrammetry. There are some interesting things you can do with them in terms of effects, but they are not a modeling tool. You can isolate pixels to inventory certain features or themes, as is done with image processing of remotely sensed image data, but bit-mapped graphics are essentially a display vehicle, not easily used in an interactive environment. They are also quite large, which adds to their general clumsiness in the classroom.

Bit-Mapped images before and after picture effects

Object graphics are much more useful in an interactive instructional environment. As data objects they possess more modularity and reusability, and they can be assigned attributes, which they can pass on to descendants or share with classes of their type. They can also be stored in archives, which can eventually become shared academic data libraries.

Graphics Objects

Either type of graphics needs spatial topology to become more useful. Integrated with a database, they then become a true GIS. But GIS is itself limited by:

  1. the learning curve of the technology itself
  2. a 2D visual interface
  3. the fact that, until recently, only an expert could do much modeling with GIS

Immersive technology is intended to overcome these limitations.

3. Immersive technology

Immersive technology is all about merging computer graphics with non-visual data sources like databases, DEMs, and GIS to build complete models of an area that simulate the real thing. These synthetic environments are geared toward bringing out the most positive aspect of on-line learning – a high level of interactivity (Mesher).

Below is an example (a work in progress) of an immersive environment that integrates several types of data, including visual data.

Synthetic Environment with GIS

Building a synthetic environment

The process for making these synthetic environments has not been all that easy.

The first ingredient is VRML, the Virtual Reality Modeling Language, an ISO standard created in the 1990’s as an object-oriented 3D modeling language for the Internet. VRML 1.0 was released in 1995 and updated in 1997 with more nodes and better functionality. One of those new nodes, ElevationGrid, serves as the foundation for the entire SRER VRML model. An ElevationGrid is like a digital elevation model (DEM), but stored as text: a comma-delimited list of real numbers. With a satellite image draped over the ElevationGrid as an associated ImageTexture, a reasonably realistic terrain model is a snap to make. To save CPU cycles, a distance-based technique built on the LOD (Level of Detail) node can render lower-resolution pieces from a distance and switch to a higher-resolution version when the viewer comes within a certain radius of the object. In addition, to keep the scene modular, we found it necessary to use Inlines. An Inline node is a pointer that includes a node from an external file; Inlines are very useful when creating an object at different levels of detail.
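To make those pieces concrete, here is a minimal sketch (not taken from the SRER model itself) of an ElevationGrid with a draped ImageTexture, wrapped in an LOD whose coarser level is pulled in through an Inline. The file names, dimensions, and elevation values are illustrative only.

    #VRML V2.0 utf8
    # Minimal sketch of one terrain tile. File names and values are
    # illustrative, not from the actual SRER model.
    LOD {
      center 480.0 0.0 480.0            # centre of this tile, in metres
      range [ 3000.0 ]                  # switch to the coarse version beyond 3 km
      level [
        Shape {                         # high-resolution version, shown up close
          appearance Appearance {
            texture ImageTexture { url "tile_hi.jpg" }   # satellite image drape
          }
          geometry ElevationGrid {
            xDimension 4  zDimension 4        # 4 x 4 grid of elevation posts
            xSpacing 320.0  zSpacing 320.0    # post spacing in metres
            height [ 915.0 917.2 920.1 918.5
                     914.3 916.8 919.4 917.9
                     913.1 915.5 918.2 916.6
                     912.4 914.9 917.0 915.8 ]
          }
        }
        Inline { url "tile_lo.wrl" }    # coarser version kept in its own file
      ]
    }

A full terrain is tiled from many such files, each tile pulling in its own coarser versions through Inlines.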

But there are problems. The VRML specifications are somewhat vague about how a VRML browser should load and unload the files associated with a given scene. As a result, some browsers load all of the Inlines into memory when the scene is first loaded, even those that are not currently being rendered, which causes problems when you wish to browse a large data set. A large scene created from high-resolution imagery and DEMs can take hundreds of megabytes of memory this way, even though the scene may be modeled to display only 10MB at a time. The entire set of VRML specifications can be found at http://www.vrml.org/fs_specifications.htm.

Our model allows the user to visualize in three dimensions objects that are defined as points, lines, and polygons on the 2D side of things. The data we incorporated lets the user visualize the polygonal regions defined by soil coverage, pasture boundaries, and major vegetation coverage; the line coverages defined by roads and contours; and the point coverages associated with raingauges, repeat photo sites, corrals, and transects. All of these coverages are tied to the wealth of data the University of Arizona has been collecting for over 100 years. The user has a heads-up display, that is, a menu that allows them to turn different coverages on and off, in addition to the normal controls of the VRML browser of their choice. Thus the user can turn on any combination of data layers, view the model from any point in the scene, and navigate their own fly-throughs. In addition, the point data is linked dynamically to a database that serves up records such as rainfall history and repeat photography.

With that said, let’s get into the guts behind it. How do you make an interactive GIS in VRML? We already mentioned that the ElevationGrid is a major component, but neglected the details. There were several stages to this undertaking. ArcView’s 3D Analyst would be a perfect tool for the task if it generated multiple levels of resolution and gave developers models that are more "scriptable," so to speak. The interactivity planned for this model, however, required us to go to greater lengths. We started with raw Landsat7 satellite data, projected the grids into a UTM coordinate system and clipped them using Arc/Info, imported the bands into both a color image and a greyscale DEM using Adobe Photoshop, then resampled both at varying levels of resolution using MrSid ImageServer before we were ready to create the base VRML file. That is just for the terrain map, which is there mostly for realism. Getting the point, line, and polygon data into VRML required a different tack, and this is where the marriage of GIS with VRML could be vastly improved.

Base layer (ElevationGrid and satellite photo) – processing steps:

  1. Raw satellite data
  2. Projected and clipped (Arc/Info 7)
  3. Spectral bands applied to an RGB image (Photoshop)
  4. DEM applied to a greyscale image (Photoshop)
  5. Image and DEM resampled at different resolutions (MrSid)
  6. Converted to multi-resolution VRML (shell script)

We started with 7 Landsat7 spectral bands in raw (BSQ) format and chose bands 1, 2, and 4 as the components of our RGB image. Using Arc/Info 7, we reprojected the image into its UTM coordinates using gridshift, then used gridclip to select the subset we wished to use. Our primary motivation for doing this was to line up the DEM and the satellite bands as closely as possible, because nothing stands out in a 3D model more than an image that is not quite lined up with its DEM. The AML for this can be found at http://ag.arizona.edu/~aaryn/srer/aria/nad83tonad27.aml.

After selecting the grid subregions of the DEM and the spectral bands, we imported them as RAW images in Adobe Photoshop 5.0. The satellite bands were 8-bit BSQ files, which means that every byte stands for a different 30x30-meter section of Earth with a value between 0 and 255. Choosing which bands would become the red, green, and blue components of an RGB image required some experimentation, but we eventually settled on band 4 (0.76-0.90 µm, near-infrared) for red, band 2 for green, and band 1 (0.45-0.52 µm) for blue. Because vegetation does not markedly register on the green band, we selected the infrared (IR) band for red. This causes areas covered in vegetation to be pronouncedly red and allows students to see areas of dense vegetation on what would otherwise be an almost black representation of the forest. After some color balancing, the image was saved in TIFF format for later use.

The DEM was a 16-bit BSF file, which means that every 30x30-meter section of Earth in this region takes up 2 bytes, because the number of unique integer elevation values exceeds 255. Because our version of MrSid only handles images with 8-bit channels, it was necessary to stretch the values of the DEM across the full range of 0 to 65,535 and then convert the channels to 8-bit format. As a result some data was lost, and we have been battling the resulting loss of vertical precision ever since.

To get the DEM and the satellite image into VRML format, we set up a dummy VRML file with an ImageTexture draped over an ElevationGrid as a template for the files we would be creating. Then we looped through the image/DEM pair at different scales, generating 1023 models of varying resolution by selecting subregions of both with variations of MrSid’s image_convert.pl. That completed the base model.
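For illustration only, the dummy template might look something like the sketch below; the %TOKEN% placeholders are our own notation for the values a script would substitute for each tile and resolution level, not the project's actual markers.

    #VRML V2.0 utf8
    # Hypothetical template file; a script substitutes the %TOKENS%
    # for each tile and resolution level.
    Shape {
      appearance Appearance {
        texture ImageTexture { url "%IMAGE_FILE%" }
      }
      geometry ElevationGrid {
        xDimension %XDIM%    zDimension %ZDIM%
        xSpacing %XSPACING%  zSpacing %ZSPACING%
        height [ %HEIGHT_VALUES% ]
      }
    }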

After the base model was complete, we went to work on bringing the data into the VRML model. The point data was stored in DBF files, so converting it to ASCII and incorporating it into VRML was no problem. We wanted each data type to be visually distinct within the scene, so we created a prototype for each one. A raingauge is a clickable blue cylinder of varying height; at close range its name is visible, and clicking on it brings up a webpage summarizing rainfall going back as far as we have data. A repeat photo point is a red sphere, also clickable, whose name becomes visible at close range and which brings up a webpage of black-and-white photographs taken from that point on the SRER, going back as far as 1900. Transect photos are yellow spheres and corrals are green spheres, both with unique identifiers that become visible at close range. With the point data, we ended up with two files for each data set: a prototype and a master that positions, links, and names the prototype in its respective place on the base model.
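As a rough illustration of that prototype/master split (the node and field names below are ours, and the placement and page name are hypothetical), a raingauge prototype might look like this:

    #VRML V2.0 utf8
    # Sketch of a clickable raingauge prototype; field names and the
    # example values are illustrative, not the project's actual code.
    PROTO Raingauge [
      field SFVec3f  position    0 0 0    # where to stand the gauge on the terrain
      field SFFloat  gaugeHeight 10.0     # cylinder height, varies per gauge
      field MFString dataUrl     [ ]      # webpage with the rainfall history
    ]
    {
      Transform {
        translation IS position
        children Anchor {                 # clicking opens the rainfall-history page
          url IS dataUrl
          children Shape {
            appearance Appearance {
              material Material { diffuseColor 0.0 0.0 1.0 }   # blue cylinder
            }
            geometry Cylinder { height IS gaugeHeight radius 5.0 }
          }
        }
      }
    }
    # The "master" file then instantiates one of these per database record, e.g.:
    Raingauge {
      position 1250.0 912.0 3400.0        # hypothetical placement on the base model
      gaugeHeight 12.0
      dataUrl "gauge_history.html"        # hypothetical page name
    }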

The line data and the polygon data were more difficult to convert to VRML, and we were going to let that wait indefinitely until we found that ArcView’s 3D Analyst will convert both 3D lines and polygons to VRML. We did that only recently to make the model more complete, but these portions of the model are not interactive; no data can be seen behind their visual VRML representation. This example illustrates how the ArcView 3D Analyst extension could bridge the gap between online content providers and GIS specialists. The ability to generate interactive VRML from within ArcView is a very promising prospect.

With the base model and the data layers all converted to VRML, we needed the glue between them: the menu. Creating an interface for users to load and unload data from the model was a difficult and complicated task. VRML does not have a menu utility, so creating a menu actually involved several steps. The first, of course, was creating the visual appearance of the menu. For each data layer, a four-sided IndexedFaceSet (IFS – a rectangle in this case) was generated, and a button texture was rendered and draped over the IFS. Then we positioned the buttons relative to each other as we wanted to see them in the scene.

The second step involved making the menu move when the user navigates the world. This may sound strange, but VRML does not provide for menu-style interaction between objects and the avatar out of the box. Fortunately, some very clever people have already solved this problem, and we are forever grateful to them and specifically to Floppy’s VRML Guide, which can be found at http://www.vapourtech.com/vrmlguide/. The specifics can be found directly in the code of our model at http://ag.arizona.edu/agnet/srer/vrml/, but we will summarize briefly here. The model has a ProximitySensor node that generates events which can trigger other actions when we move. For instance, when we rotate, an eventOut called orientation_changed is generated that carries a rotation value; likewise, when we move, an eventOut called position_changed is generated that we can use to modify another node. Nodes also have eventIns such as set_rotation and set_translation, so when the viewer’s orientation changes, the rotation of another object in the world can be changed by the same amount.
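A stripped-down sketch of that heads-up-display wiring, in the spirit of the technique described in Floppy's VRML Guide, is shown below; the sensor size, offsets, and texture file name are illustrative, not values from our model.

    #VRML V2.0 utf8
    # Sketch of a menu that follows the viewer. A ProximitySensor big enough
    # to enclose the whole scene reports the viewer's position and orientation,
    # and ROUTEs copy those values onto the Transform holding the menu.
    DEF NEAR_VIEWER ProximitySensor {
      size 100000 100000 100000           # large enough to cover the entire model
    }
    DEF HUD Transform {
      children Transform {
        translation 0.0 -0.3 -1.0         # park the button just below centre of view
        children Shape {
          appearance Appearance {
            texture ImageTexture { url "button_soils.png" }   # hypothetical button texture
          }
          geometry IndexedFaceSet {       # the 4-sided IFS acting as a button
            coord Coordinate {
              point [ -0.2 -0.05 0,  0.2 -0.05 0,
                       0.2  0.05 0, -0.2  0.05 0 ]
            }
            coordIndex [ 0 1 2 3 -1 ]
          }
        }
      }
    }
    ROUTE NEAR_VIEWER.position_changed    TO HUD.set_translation
    ROUTE NEAR_VIEWER.orientation_changed TO HUD.set_rotation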

After getting the menu to move with the user, we rigged the menu items to load and unload the data layers. Another little trick involved keeping those freshly loaded models from moving and rotating with the menus that loaded them.
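The actual loading logic lives in the model's own files linked above; purely as a sketch of one common way to wire a button to a layer, a TouchSensor can feed a small Script that flips a Switch holding the layer's Inline. All names below are ours.

    #VRML V2.0 utf8
    # Sketch only: toggling a data layer from a menu button. The real SRER
    # menu code differs; node names and the layer file are hypothetical.
    DEF SOILS_LAYER Switch {
      whichChoice -1                      # start hidden
      choice Inline { url "soils_coverage.wrl" }
    }
    DEF SOILS_BUTTON TouchSensor { }      # placed alongside the button geometry
    DEF SOILS_TOGGLE Script {
      eventIn  SFTime  clicked
      eventOut SFInt32 which_changed
      field    SFBool  visible FALSE
      url "javascript:
        function clicked(value, time) {
          visible = !visible;                     // flip the layer state
          which_changed = visible ? 0 : -1;       // 0 shows the Inline, -1 hides it
        }"
    }
    ROUTE SOILS_BUTTON.touchTime     TO SOILS_TOGGLE.clicked
    ROUTE SOILS_TOGGLE.which_changed TO SOILS_LAYER.set_whichChoice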

But there was still something not quite right, and this is an area of 3D development that really complicates rendering. One of the hardest operations in a dynamically changing 3D environment is picking. When two objects are relatively close to each other and the user is moving around them, how do you tell which one is closer? Which one do you render? This problem befuddles the best; an expensive combination of square roots for each ray in the intersection of the two objects is not a feasible solution in a changing environment. We were faced with this problem when laying the polygon and line coverages over the ElevationGrid (EG). If a coverage is too close to the EG, the renderer may draw the satellite photo, or the coverage, or mix the two by drawing alternating stripes. To combat this we floated the coverages about 50 meters above the EG, which is hardly a realistic solution, so we used LOD to lower a coverage as the user approaches it.
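That workaround reads roughly like the sketch below: the same coverage file is inlined twice, floated high when seen from far away and dropped close to the terrain as the user approaches. The distances and file name are illustrative.

    #VRML V2.0 utf8
    # Sketch of the distance-based lowering described above; the offsets,
    # range, and file name are illustrative.
    LOD {
      center 5000.0 900.0 5000.0          # centre of the coverage
      range [ 2000.0 ]                    # switch heights at 2 km
      level [
        Transform {                       # near: sit almost on the terrain
          translation 0.0 5.0 0.0
          children Inline { url "soils_coverage.wrl" }
        }
        Transform {                       # far: float ~50 m up to avoid striping
          translation 0.0 50.0 0.0
          children Inline { url "soils_coverage.wrl" }
        }
      ]
    }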

But because the polygonal regions were created in ArcView, we had little or no control over the resolution of the node it exported to VRML. While the base EG itself is 217KB, the soils coverage is a whopping 1.6MB and covers only one-fifth the extent of the base EG. Considering that the coverage should lie directly on the EG, this is far from an optimal solution.

We have considered decoding the shapefile specifications in order to integrate the polygonal and line coverages into our model more smoothly, but we are optimistic about the prospects of ArcView’s 3D Analyst. ArcView was built for GIS and does an excellent job of it. We hope that 3D Analyst carries that functionality and maneuverability into the online realm of VRML.

On-line modeling

We also need on-line modeling tools to further engage students in the scientific method and, from there, to engage their critical decision-making skills, the goal we are seeking (Polichar, Bagwell). We need to allow students to do more than fly around and retrieve data. They need a tool that lets them run "what-if" scenarios, playing with the coefficients to produce different weighted outcomes. The decision matrix we developed for English Composition students, available through the link in the Links section below, was built to teach them something about land management and public issues in an argumentation class. It incorporates, at a very simple level, the principles we sought. ESRI’s recently released Model Builder will do a better job, and we plan to migrate to that tool.

Conclusions

There are technical obstacles facing this project:

  1. some VRML browsers load every Inline into memory at once, so large scenes can consume far more memory than they display
  2. converting the 16-bit DEM to 8-bit channels cost vertical precision
  3. coverages draped over the ElevationGrid conflict with it during rendering and must be floated above the terrain
  4. the line and polygon coverages exported from ArcView are not interactive and are far larger than the base model

Despite these issues, our assessment studies indicate that our students like learning this way and learn at least as much as their peers in more traditional classes. Immersive technology has great potential not only for instruction but for industry as a whole. Its value to e-commerce, for example, is just being realized (Messmer). Students who use this technology can look forward to jobs in a variety of fields. As we assess and document its use further, we hope to tap more of its promise.

References

Hodges, Mark, "Seeing Data in Depth", Computer Graphics World, May 2000, pp. 43-49.

Mesher, "Designing Interactivities for Internet Learning", Syllabus, March 1999, pp. 16-20.

Messmer, "E-comm Yet to Embrace Virtual Reality", Network World, May 8, 2000, p. 87.

Polichar, Valerie E., and Bagwell, Christine, "Pedagogical Principles of Learning in the Online Environment", Syllabus, May 2000, pp. 52-56.

Stolic, Mladen, "Digital Photography: Unleash the Power of 3-D GIS", GeoWorld, May 2000, pp. 30-37.

Links

Panoramas and decision matrix – http://ag.arizona.edu/englishcomp

Integrated GIS/VRML - http://ag.arizona.edu/agnet/srer/

Trip report that explains more about the technology - http://ag.arizona.edu/~robmac/web3d.htm