Marc Van Liedekerke, Arwyn Jones, Giovanni Graziani

The European Tracer Experiment Information System: where GIS and WWW meet

The European Tracer Experiment (ETEX) project has accumulated large sets of Europe-wide geo-referenced data relating to atmospheric dynamics. The experiment, which involved some 40 organisations active in research on long-range atmospheric transport models, released an atmospheric tracer which was monitored at a number of sampling stations throughout the European continent. The participating organisations, by using their models, attempted to predict the location of the tracer cloud at various intervals after the initial release.

To facilitate the comparison of the various models, an information system is being developed which seeks to give improved access to the predicted and measured data. Since the datasets involved are geo-referenced, a GIS is used as the development platform. ArcInfo and AML have been selected to construct an effective interface that allows a user to visualize the various data and to query, by simple point and click operations, a database of associated information. An additional requirement of the information system is the viewing of time series data (e.g. the movement of tracer clouds) in animated form.

Parallel to the development of the information system, we have investigated how to make the same information accessible to a much wider public without sacrificing the flexibility offered by AML. Through the WWW, easy access to any kind of information has become reality; the effective use of the HTTP protocol, the HTML language and implementations of suitable client/server systems such as NCSA's Mosaic provide sufficient ground to reconstruct the same information system on such a client/server architecture with only slightly reduced functionality. The client part sufficiently emulates the interface constructed with ARC and AML, while the server part is still based on ARC, connected through the CGI-BIN interface of an HTTP server.


1. Introduction

The aim of this paper is to demonstrate and discuss the development of a distributed information system for the European Tracer Experiment (ETEX). The ETEX Information System (EIS) will attempt to provide user friendly access to data resulting from the ETEX experiment. The information system is being developed along two approaches: firstly, using the Arc Macro Language (AML) to construct a menu-based interface which is launched from an ArcInfo session; secondly, by customising a network browser to interact with ArcInfo, allowing users to access datasets from anywhere in the world without leaving the environment of their networking application.

Both interfaces must allow users to select, display, query and analyse any dataset within the project. Both interfaces must be able to provide facilities for generating and displaying map based documents and other graphical representations of the data. It is imperative that both interfaces are user friendly. The paper describes and discusses the technologies behind the implementation of the two interfaces and examines their respective functionality, strengths and limitations.

2. The ETEX Experiment

Following the Chernobyl Nuclear Power Plant Accident in 1986, there has been considerable interest in being able to accurately predict the transfer of harmful substances by atmospheric processes. A variety of long range atmospheric transport models have been established in different countries for application in emergency management but their quality can only be assessed with difficulty since the experimental datasets against which the model results should be tested are frequently inconsistent or even non-existent.

For this reason, an international programme was agreed involving experimental studies of long range atmospheric transport over the European continent (i.e. ETEX). The major sponsors of the experiment are the World Meteorological Organisation, the International Atomic Energy Agency and the Joint Research Centre (JRC) of the European Commission. Within the JRC, the project is being coordinated by the Environment Institute at Ispra in Italy.

The overall aims of ETEX were fourfold:

  1. to conduct a long range atmospheric tracer experiment involving controlled releases under well defined atmospheric conditions together with coordinated atmospheric sampling at distances of some 2000 km;
  2. to provide notification of the tracer releases to institutes responsible for producing rapid forecasts of atmospheric dispersion over long distances and to test the capability of these institutes to produce such forecasts in real time;
  3. to evaluate the validity of their forecasts in light of the actual measurements;
  4. to assemble a database of predicted, measured and environmental data and to develop tools which will allow the evaluation and comparison of long range atmospheric dispersion models.

2.1 The Tracer Experiment

From a selected site in western France, two controlled releases of atmospheric tracers (perfluorocarbons, or PFCs) were performed in late 1994. These substances are environmentally safe and non-toxic, are not washed out by rain, and have an extremely high analytical detection sensitivity. Appropriate analytical techniques for PFCs have been developed by the Brookhaven National Laboratory and adopted by the Environment Institute of the JRC at Ispra. They have been successfully applied in long-range experiments: for example, CAPTEX (Cross-Appalachian Tracer Experiment) and ANATEX (Across North America Tracer Experiment) in the United States, and TRANSALP (Transalpine atmospheric transport) and TRACT (Transport over complex terrain) in Europe. The release location was selected following indications from model runs with actual and forecast wind fields, with the aim of having the highest number of ground sampling stations, and the majority of European countries, hit by the tracer plume.

Before and during the release, constant-level balloons were launched in order to locate the plume position along the first 100 to 200 km. Air samples (taken at 3-hour intervals over a period of 72 hours) were collected at about 170 sampling stations spread over Europe. For logistical reasons, all sampling points are located at synoptic meteorological stations. A total of about 9000 samples were collected, shipped to Ispra and analysed for their PFC content.

The tracer concentration in air was also measured at altitude, in order to obtain information on the vertical structure of the cloud. Three aircraft, the Hercules of the British Met Office, the Stemme Motorglider of the Swiss MetAir and the DO228 of the German Weather Service were employed.

In addition to routine meteorological measurements, several national meteorological services offered to perform extra measurements during the ETEX experiment in order to obtain a comprehensive meteorological database. This information will be used to improve general knowledge of atmospheric flow patterns in Europe and to enhance the performance of the atmospheric models.

Concurrent with the release of the tracer, a number of organisations were asked to predict the concentration of the tracer cloud across Europe. These predictions were derived from highly sophisticated atmospheric transport models. The results of these models were collated by the JRC Environment Institute.

2.2 The ETEX Information System

To facilitate the evaluation and comparison of the various models, an information system has been developed which seeks to give improved access to the collected data. Since most of the datasets involved are geo-referenced, a GIS (i.e. ArcInfo) has been used as the basic development platform.

2.2.1 Functionality of the EIS

The primary aims of the EIS were:

  1. to integrate all the ETEX data (predicted, measured and meteorological) within the ArcInfo environment;
  2. to define the functionality of tools that will allow a user to access and use datasets within the EIS;
  3. to select suitable technologies to implement the functionality of the tools;
  4. to provide remote access to the ETEX data and other related information (e.g. documentation, information on models, participating organisations), which is potentially stored in a variety of formats.

As the information system is being designed for users with a variety of competencies, access to the system needs to be in the form of a user friendly interface.

The extraction of relevant data from the participating organisations, and the primary data handling which prepares the data for input into ArcInfo, were carried out through a series of AML and external programs. For instance, the latitude and longitude locations of the measuring stations were used to generate a point coverage, and the ETEX concentration data for the various sampling locations are attached by means of relates on the station names.
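As a sketch of this kind of primary data handling, the fragment below converts a station listing into the ID,X,Y point input expected by ArcInfo's GENERATE command, together with an ID-to-name lookup file for the subsequent relates. The input format, file names and station identifiers are invented for illustration; they are not the actual ETEX files.

```shell
#!/bin/sh
# Hypothetical example data: one line per station, as
#   <station-name> <longitude> <latitude>
# (not the actual ETEX file format).
cat > stations.txt <<'EOF'
F21 -2.008 48.058
D44 10.500 51.200
EOF

# Produce ID,X,Y records for ArcInfo's GENERATE command, assigning
# sequential IDs, plus an ID -> station-name lookup so that the
# concentration tables can later be related on the station name.
awk '{ printf "%d,%s,%s\n", NR, $2, $3 }' stations.txt > stations.gen
echo "END" >> stations.gen
awk '{ printf "%d,%s\n", NR, $1 }' stations.txt > stations.lut
```

The resulting stations.gen file can be read with GENERATE's POINTS option, after which the lookup table carries the station names into INFO.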

In collaboration with the users of the ETEX data, a series of specific functionalities for the information system were outlined. These include tools:

  1. to select a relevant dataset from a catalogue of possible choices;
  2. to view the position of sampling locations on a map base with an indication of the concentrations of the tracer;
  3. to query any individual point for the concentration, for any given time or for the duration of the experiment;
  4. to select all stations that satisfy particular selection criteria;
  5. to dynamically view the movement of the tracer cloud with time;
  6. to view and compare more than one dataset;
  7. to generate and display isolines of the tracer cloud;
  8. to generate automatic hardcopy versions of any information in the system as graphs, maps or tables.

It was decided to develop the information system on two fronts. The first, and obvious, approach for starting the development of the EIS was to exploit the development and customisation power of ArcInfo's AML. This would allow the construction of a specific user interface, based on windows and simple point-and-click mouse-driven operations, that satisfies the above functionality. Although this method quickly produces an effective interface, it would not easily satisfy the requirements of access for remote users and to non-spatially organised data. Therefore, our second approach uses an environment that copes with these problems but does not, inherently, have the functionality of a GIS. The following sections present fuller discussions of both methods.

3. The AML Interface

A straightforward method for creating a user friendly interface to data held in ArcInfo is by means of AML. In addition to many programming tools, AML provides the ability to create a menu-based interface. Menus provide a simple-to-use and highly visual means of providing a customised front-end to specific datasets such as ETEX. With AML menu functions, a user makes a selection from a list of choices displayed on a terminal; the choice invokes an action that is then taken by the AML program. The result of the selection can be the execution of a command, the evaluation of a function, the running of an AML program, the invocation of another menu, the passing of a value to a command, or some other action.

In the case of the EIS interface, extensive use was made of pulldown and form menus. A pulldown menu appears as a bar on the screen with a number of keywords which represent possible choices. In some instances, these keywords have associated sub-menus; these second-level choices are "pulled down" from the menu bar. A form menu is a dynamic interface that presents graphic widgets which, when manipulated, dynamically define the action to be performed. These widgets include choice fields, slider bars, buttons, scrolling lists and check fields. A major feature of a form menu is that some element of structure can be applied to complex operations. In addition, input into the form can be validated, and certain variables may be set manually before executing an operation.

An initial draft of the interface has been developed and, in conjunction with the managers of the ETEX databases, is being constantly revised and upgraded to provide increased functionality and user-friendliness.

The development of this interface has been, on the whole, relatively straightforward. Numerous examples of similar interfaces can be found throughout the world. However, the major limitations are that it is currently accessible only locally, within the Environment Institute at the JRC in Ispra, and only on X-based workstations (i.e. the interface would not work for the majority of PC or Mac users). The interface could be made available to an audience outside the Institute by providing users with a 'telnet' login to the ArcInfo server and asking them to pass the details of their display environment in order to run an X display of the EIS locally. This has a number of drawbacks. In particular, response times can be slow and, again, the full functionality could not be made available to users without X capabilities.

A further limitation of the AML development environment is the difficulty of elegantly integrating ETEX data with information which is not geo-referenced; e.g. the inclusion of supporting text and images has to be handled independently. In a similar manner, the provision of gateways to other databases from the current working environment is not a straightforward exercise.

4. The Network Interface

Parallel to the development of the AML version of the information system, we have investigated how to make the same information and functionality accessible to a much wider public, by means of distributed information technologies, without losing too much of the flexibility offered by AML, and how to integrate differing information types with the enhanced system.

During the last few years, many people have made increased use of the facilities offered by various computer networks and their associated tools for accessing, sharing and publishing information. Many of the applications (also known as facilities, utilities or tools) that have emerged for use in networked infrastructures were initially simple and straightforward, and were gradually refined and extended. Examples of such applications are electronic mail and file transfer, which automate and simplify interpersonal and group correspondence. Ordinary e-mail of text (ASCII) and binary files between interested parties has been upgraded to the power of multimedia mail (e.g. [MIME]), which allows the inclusion of pictures, graphs, even sound and video, in mail messages.

There is a growing awareness among organisations and individuals that their computers are, or can be, part of a fully interconnected network; this has started, and accelerated, the tendency to make information and services available to other users on the network. There are many reasons for doing this. The ease and directness of publicising an organisation and the research programmes, projects and activities it is involved in, both actively and passively, and of providing pointers to related matters, are a few of the major ones. Giving access to original research papers and results (e.g. in the form of data and their derived value-added products) could significantly enhance the distribution of research work, both in terms of speed and quality.

Having information and services available on the various nodes of a network is only effective if end users have the necessary tools to access the information. These tools are known as Resource Discovery or Network Information Retrieval (NIR) applications.

4.1 NIR Tools

The key to the availability of network resources is the provision of servers on computers all over the network. A server consists of special software which accepts requests (or queries or commands) and sends a response automatically. Requests received by the server may have originated from a user on the same computer as the server software, or from a user on a computer on the other side of the world. Software programs which ask for resources from servers are called client programs; they are clients of the server software. Clients send requests to a server using a standardized format called a protocol. The server responds by supplying information, usually in the form of files containing text or data of various sorts. Such client software is being developed all the time, providing better and more convenient ways of interacting with servers. Different versions of a particular client may be developed for different desktop computers, since these are increasingly sophisticated, with advanced graphical, audio and storage capabilities. Thus different versions of a client may be provided for use on PCs, Apple Macintoshes or Unix computers. There are also X Window System versions of many of the clients.

These tools can be divided into three functional themes. Several of the tools have more than one function, but they are classified here according to their main purpose. The first theme covers two services, the World-Wide Web and Gopher, which use the client-server model to provide a means of moving through a wide range of network resources in a uniform and intuitive way. The second concerns searching through databases throughout the network; WAIS (the Wide Area Information Server) is a tool that implements such functionality. The third theme addresses the problem of locating files and programs in the network, with Archie as a typical example client for searching archives of filenames.

This paper will concentrate on addressing the relevance of the WWW for developing our interface.

4.2 Networking and the World Wide Web

The vision of the World Wide Web is of a collection of programs that understand the numerous different information-retrieval protocols (e.g. FTP, Telnet, WAIS, Gopher) currently in use on the Internet, as well as the data formats used with those protocols (e.g. ASCII, GIF, JPEG, MPEG, PostScript, DVI, TeXinfo), and provide a single consistent user interface to them all. In addition, these programs understand a new protocol (HTTP) and a new data format, the HyperText Markup Language (HTML), which are both geared toward hypertext and hypermedia. Hypertext is a term that describes a computer interface to text which allows information to be cross-referenced: by clicking with the mouse on a cross-referenced phrase, the user brings up the document at the 'other end' of the cross-reference. Hypermedia is the extension of this to include graphics and audio as items which can be selected or viewed.

Clients
Among the more popular WWW clients are the freely available Mosaic from NCSA and, on a commercial basis, Netscape Navigator from Netscape Communications Inc. Both are available for various hardware and software platforms. Netscape Navigator is able to encrypt requests and decrypt received information based on the HTTP-S protocol. Netscape also appears to be more user-friendly and faster, and presents more capabilities to the user.

The HTTP Protocol
The HyperText Transfer Protocol (HTTP) is a protocol for a distributed, collaborative hypermedia information system. HTTP is the transfer protocol used by the WWW to retrieve information from distributed servers; it performs the request and retrieval functions necessary to display documents stored on remote computers.

For reasons of secure transfer, extensions have been made to the basic HTTP protocol. For instance, Netscape Communications Inc. has designed and specified a protocol for providing data security layered between application protocols (such as HTTP, SMTP, NNTP or FTP) and TCP/IP. This security protocol, called the Secure Sockets Layer (SSL), provides data encryption, server authentication, message integrity and optional client authentication for a TCP/IP connection. SSL uses technology licensed from RSA Data Security Inc. "https" is simply HTTP layered on top of SSL: for HTTP URLs with SSL one writes "https://", whereas for HTTP URLs without SSL "http://" is used as before.

Servers
Many servers provide access to information based on the HTTP protocol. Apart from security issues, they offer similar capabilities and are free of charge. Only the Netsite server, which is available commercially, offers powerful means for securing information, based on authentication and encryption built on the HTTP-S extension of the HTTP protocol. An information provider running this server can 'secure' a document by delivering only an encrypted version of it to a requesting client; such encrypted data is of use only to clients which are able to decrypt it.

The Netsite Commerce Server is the first HTTP-compatible server designed for the conduct of electronic commerce, including online transactions, and the exchange of sensitive documents over networks. Its advanced security features make distributing confidential information to a select audience or accepting both orders and payment online simple and secure.

WWW Documents
Each document in the WWW information space has a unique identifier, called a Uniform Resource Locator (URL), which identifies the location (machine and directory) of the document and the protocol to be used to transfer it (e.g. http://rea.ei.jrc.it/about_ei/ein.html identifies the HTML document ein.html in the directory /about_ei on the machine rea.ei.jrc.it, which can be retrieved via the HTTP protocol). In the case of the WWW, the natural method of retrieving information is browsing, carried out by following links within WWW pages.
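As a small illustration of this structure, the shell fragment below splits the example URL above into the three parts just described (protocol, machine and document path), using nothing but POSIX parameter expansion:

```shell
#!/bin/sh
# Split a URL into protocol, machine and path, using the example URL
# from the text. Plain POSIX parameter expansion suffices for
# well-formed "protocol://machine/path" URLs.
URL="http://rea.ei.jrc.it/about_ei/ein.html"

PROTO=${URL%%://*}      # everything before "://" -> protocol
REST=${URL#*://}        # everything after "://"
HOST=${REST%%/*}        # up to the first "/"     -> machine
DOC=/${REST#*/}         # the remainder           -> document path

echo "$PROTO $HOST $DOC"
```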

Creation of WWW documents
The WWW open system allows different kinds of data to be put on-line by writing a simple script to generate a hypertext "view" of the database. The hypertext model is flexible and is potentially powerful as a communications medium as it allows one to put in a link whenever the reader might need background information.

HTML, a hypertext development language
The insertion of documents into the WWW is relatively straightforward. However, to exploit the full potential of the hypermedia capabilities, documents have to be written in HTML. Since this requires some additional effort from the person making the data available, a number of translators and editors have been built. It is also possible for documents to have no further links and consequently need no additional elaboration (e.g. text files or images). HTML 2.0, the version currently used to develop WWW documents, is based on the Standard Generalized Markup Language (SGML). Unlike conventional desktop publishing, many details of how documents are laid out are left to the capabilities of individual browsers or to reader preferences, rather than being completely specified by authors. This allows documents to be viewed on a very wide range of equipment. HTML is a collection of styles, indicated by markup tags, that define the various components of a WWW document. Due to the diversity of clients on the Web, HTML documents have to be written so as to display well on any client. The key feature of HTML comes from its ability to link regions of text or images to other documents.
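As a minimal sketch of these markup tags, the script below writes a small HTML 2.0 document in the way our gateway scripts generate pages; the page content and the linked URL are invented for illustration.

```shell
#!/bin/sh
# Write a minimal HTML 2.0 document. Structure is indicated by paired
# markup tags, and the <A HREF=...> anchor links a region of text to
# another document (the URL below is hypothetical).
cat > example.html <<'EOF'
<HTML>
<HEAD><TITLE>ETEX sampling stations</TITLE></HEAD>
<BODY>
<H1>Sampling stations</H1>
<P>Concentration data for each station can be reached through
<A HREF="http://rea.ei.jrc.it/etex/stations.html">this link</A>.
</BODY>
</HTML>
EOF
```

How the heading and paragraph are rendered is left entirely to the browser; only the anchor behaviour (following the link on a click) is fixed.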

4.3 Developing applications in the Web environment: Gateways

The original idea of the Web, simply linking hypermedia documents through links, has been extended by various client/server implementations to include the possibility of remotely accessing any application.

At the client side, this has been achieved by offering a developer the possibility of creating a FORMS-based interface, featuring all kinds of buttons and menus. Through simple mouse operations such as pointing and clicking, the user submits a request to a customised program on a remote server. One of the most appealing elements to exploit is the possibility of including sensitive maps in a document: a developer can declare some areas within a picture as 'sensitive'; by clicking with the mouse on such an area, an action specific to that area is undertaken at the server side.

At the server side, the so-called CGI-BIN interface takes care of the incoming requests from FORMS or sensitive maps and routes them to the custom programs (or scripts) provided by the developer. For instance, a developer could create a FORMS interface that allows a user to specify a database query. On submission of the query to the server, CGI-BIN routes the request to a custom program that parses the request and constructs a corresponding query for a database; the result of the query is then processed to create a plain text file, a new HTML document, or even a sophisticated plot of the retrieved data, which is sent back to the requesting client.
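A minimal sketch of such a CGI-BIN script is shown below. The parameter names ("dataset", "station") and the reply are invented, and URL-decoding of special characters is omitted for brevity: the HTTP server places the submitted FORM parameters in the QUERY_STRING environment variable, and the script parses them and writes an HTML reply to standard output.

```shell
#!/bin/sh
# Sketch of a CGI-BIN gateway script (GET requests only; no
# URL-decoding). The parameter names "dataset" and "station" are
# hypothetical. A default QUERY_STRING is set so the script can also
# be run standalone.
QUERY_STRING=${QUERY_STRING:-"dataset=measured&station=F21"}

# Split the "name=value&name=value" pairs and pick out two parameters.
DATASET=$(echo "$QUERY_STRING" | tr '&' '\n' | sed -n 's/^dataset=//p')
STATION=$(echo "$QUERY_STRING" | tr '&' '\n' | sed -n 's/^station=//p')

# A real gateway would now query the database (e.g. start ARC with an
# AML script) and format the result; here the parsed request is simply
# echoed back as a small HTML document.
echo "Content-type: text/html"
echo ""
echo "<HTML><BODY><H1>ETEX request</H1>"
echo "<P>dataset: $DATASET, station: $STATION"
echo "</BODY></HTML>"
```

The blank line after the Content-type header is required by the protocol; everything after it is the document sent back to the client.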

Such a mechanism, which gives access through a server to an application, is called a gateway. Some applications lend themselves to a general approach to the construction of a gateway. An example of one such gateway is GSQL, a well-structured gateway to SQL-queryable databases. The main advantages of a gateway based on a general approach are the ease and speed with which a new network application can be implemented, and its portability across potential developers. A major drawback is, because of that generality, its limited flexibility in processing the results of an application request.

4.4 WWW and ETEX

Given the capabilities of Web-based clients and servers (i.e. the ease of networked hypertext document creation, the integration of various information types, and remote access to applications through gateways), a Web-based ETEX information system has been developed which exploits the use of FORMS and sensitive maps at the client side and a custom-made gateway to ArcInfo at the server side, and integrates these with a document base containing ETEX-relevant information.

4.4.1 Results from the Web ETEX interface.

At the time of writing, the Web version is still under development. However, we can already report many interesting results and observations.

FORMS and Sensitive Maps: The use of FORMS alone to create an interface that solicits requests from a user has proven to be relatively straightforward, and it is certainly sufficient to construct any request that could be formulated in the AML interface. A major disadvantage is that the limited capacity for arranging interface elements obliged us to exceed the page size, so that scrolling within a document is needed to fill in a full request; this contrasts with the AML interface, where all interface elements are clearly visible during the creation of a request. Splitting up a FORMS request document into a hierarchy of smaller request documents, so that one request is made through a sequence of documents, does not alleviate the problem; on the contrary, if users want to restructure their request, they have to go through the sequence of documents again instead of just scrolling a page to fix it.

It is impossible to use only sensitive maps (assuming that a map represents a geographical region) to create a full request, because clicking on such a map immediately sends the clicked coordinates to the server, invoking the specific action corresponding to the area surrounding the point. Embedding a sensitive map in a FORM should work in theory, provided that the user knows that clicking on the map immediately stops further submission of other parameters in the FORM and forwards a request to the server containing the clicked coordinates and the FORM parameters selected thus far. A possible approach is to have a sequence of FORMS and sensitive-map documents which solicit the request parameters from the user. In ETEX, sensitive maps have been used to allow the graphical specification of a location for which information is required.

Currently, one of the main drawbacks of sensitive maps is that it is not possible for a user to specify a region of interest in the map by clicking more than once in the map as the first 'click' would automatically dispatch a request to the server. A development of sensitive maps that would tolerate such an action would have enormous benefits for spatially organised data. For example, a user may be able to select a region on a map and zoom in to see information in higher detail. Alternatively, a user may wish to calculate an area or measure a distance, both actions would involve the selection of more than one point on a sensitive map.

The ability to flexibly add help functionality for the user, through embedded text or links to other help documents, has been much appreciated.

In contrast with the AML interface, a user does not see the results of a request embedded in the environment from which the request was made; results are presented as a new document, losing the visual connection with the original request FORM.

It is also not possible to build visibly on results from previous requests. For instance, with the AML interface, a user could first request to view isolines for the tracer plume at a certain moment, and subsequently overlay the result with isolines of other datasets. In the Web interface, the user must either decide from the outset that the result should be an overlay of isolines from two specific datasets or, as the result of a first request, view the isolines for one dataset embedded in a document that prompts, via a FORMS interface, to overlay the current graphical result with the isolines of another dataset. In the latter case, the server and ARC must (re)compute the isolines for both datasets and overlay them in a new picture or, if a caching technique is used to store the first coverage, fetch the first result and overlay it with the isolines of the newly selected dataset.

At the server side, we found it easy to parse a forwarded FORMS request and to invoke the appropriate actions. Typically, the server launches a Unix shell script which starts ARC on a remote server, invoking a specific AML script to which Unix environment parameters are passed. On completion of the AML script, the Unix shell retrieves the ARC output files for further processing, creating for instance a new HTML document with embedded text and images, a plain text file, an image, or even a sequence of images coded with the MPEG standard. Writing such scripts requires good skills with various programming tools and is sometimes not straightforward at all. For instance, we experienced a lot of trouble when piping the results of a 'tables' request to the standard output, due to the paging behaviour of the tables output.
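As an illustration of the kind of post-processing involved, the fragment below strips the page breaks and repeated headers from a paged listing. The paged format shown (a form feed plus a repeated column header at the top of each page) and the station data are an assumed stand-in, not the actual output of the 'tables' command.

```shell
#!/bin/sh
# Build an assumed paged listing (illustration only): two pages,
# separated by a form feed, each starting with the column header.
cat > tables.out <<'EOF'
STATION  CONC
F21      0.41
D44      0.12
EOF
printf '\f' >> tables.out
cat >> tables.out <<'EOF'
STATION  CONC
NL05     0.07
EOF

# Remove form feeds, then keep the first header line and drop its
# later repetitions, leaving one continuous table.
tr -d '\f' < tables.out \
  | awk 'NR == 1 { hdr = $0; print; next } $0 != hdr { print }' \
  > tables.txt
```

The cleaned tables.txt can then be wrapped in an HTML document or fed to a plotting program.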

An innovative exercise has been the creation of detailed sensitive maps for the selection of station locations. Basically, we faced and solved two problems.

The first problem required passing parameters other than the clicked coordinates in a sensitive map to scripts in the CGI-BIN interface; an example of such a parameter is the name of the station linked to particular coordinates. This was solved elegantly by adapting the mapping file on the server (which provides the links between a sensitive area on the sensitive map and the action to take) so that such area-related parameters are also passed to a script.

The second problem concerned how to generate sensitive maps on the fly, with corresponding meaningful mapping files. We have solved this problem for sensitive maps in which the sensitive areas are derived from a point coverage which also forms one of the layers used to create the image to be included as the sensitive map. Both the creation of the sensitive map (an image in GIF format, derived from a PostScript file produced in ArcPlot) and of the corresponding mapping file have been automated.
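A sketch of the mapping-file generation step is given below. The pixel coordinates, the gateway script URL and the 'station' parameter are all invented for illustration; the entries follow the NCSA imagemap "point" style, where the URL of the nearest listed point is invoked.

```shell
#!/bin/sh
# Hypothetical input: one line per station with its name and its pixel
# position in the GIF image (coordinates invented).
cat > pixels.txt <<'EOF'
F21 142 88
D44 310 65
EOF

# Emit an NCSA-style imagemap mapping file whose URLs carry the station
# name as an extra parameter, as described in the text; the gateway
# script name "etexmap" is an illustrative placeholder.
awk '{ printf "point /cgi-bin/etexmap?station=%s %s,%s\n", $1, $2, $3 }' \
    pixels.txt > stations.map
echo "default /cgi-bin/etexmap?station=none" >> stations.map
```

Because the same station list drives both the ArcPlot layer and this mapping file, the sensitive areas stay consistent with the image whenever a new map is generated.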

4.4.2 Examples of the ETEX gateway

This section shows three pages taken from the ETEX Web interface (i.e. Mosaic) which illustrate some of the functionality described above. Figure 1 shows a page with a FORMS interface featuring a few graphical elements (e.g. radio buttons, option menus and scroll lists) that allow a user to specify a request to the INFO database. In this case, a specific dataset is being selected.

"/library/userconf/proc95/to050/p0222.gif">Figure 2 shows a sensitive map that could be produced on the basis of the request from
Figure 1. This example only shows the location of the sampling stations. However, the map could easily show the concentrations of the tracer at various points by the use of markersymbols and Arcplot LUTs.

"/library/userconf/proc95/to050/p0223.gif">Figure 3 is an example of a typical output that results from the sequence of selections within the FORMS and the sensitive map. This document lists the concentrations of the tracer cloud against time for one measuring station, both graphically and as a table.

4.5. Future implications and required developments

There are a number of issues which could significantly enhance the development of the interface described in this paper. One major development will be the release of HTML+, also called HTML 3.0, which is a set of extensions to the HTML hypertext markup format. HTML 3.0 will support text flow around floating figures, more advanced fill-out forms, tables and mathematical equations, and better features for controlling the layout of the document.

In particular, it will become possible to embed sensitive maps within fill-out forms, and sensitive maps will support the selection of areas instead of simply points. Forms will offer a range of input controls, including check boxes, radio buttons, single and multiple text fields and selection menus. The handling of constraints on the client side will be possible via simple scripts linked to the form. HTML 3.0 is due for release in the spring of 1995.

In addition, there are a number of areas where Esri could facilitate further developments. One valuable area would be the extension of AML towards a 'higher-level' programming language. Such developments could include true array handling, enhanced file and string manipulation (e.g. the provision to write multiple strings to one output record, cf. Unix tools such as awk and sed), and easier integration of AML programs with the calling environment (e.g. the passing of parameters). Such developments would avoid the need to jump between AML and other programming tools while developing gateway scripts.

5. Conclusions

This paper has described and discussed two possible approaches to the provision of an interface to the ETEX datasets. AML was used to create a successful prototype interface which contained much of the functionality that the end users required in terms of data manipulation. However, the AML development alone did not possess the capability to function as an integrated information system.

As a result, the EIS was based on a combination of networking technologies which are widely available together with the analytical power of ArcInfo. Through the WWW, easy access to any kind of information has become reality. The effective use of the HTTP protocol, the HTML language and implementations of suitable client/server systems, such as NCSA's Mosaic or the Netscape Navigator, have provided sufficient ground to reconstruct a similar information system based on such a client/server architecture with only slightly reduced functionality over the AML interface.


Marc Van Liedekerke, Arwyn Jones, Giovanni Graziani
Joint Research Centre of the European Commission
Environment Institute
21020 Ispra (VA)
ITALY
tel. +39-332-785179
fax. +39-332-789256
e-mail : vanliede@ei.jrc.it