Requirements Analysis For GIS Over A Wide Area Network
The City of Philadelphia is building one of the largest
integrated municipal GIS systems in the world. We have a total
of 22 different coverages completed or in construction ranging
from fire hydrants to orthophotography. Parts of our multi-server
system, which also incorporates new and legacy databases, are
up and running over CityNet, our wide area network. However, the
GIS-generated Ethernet traffic we are looking at now is only a
tiny fraction of what we anticipate, and we clearly recognize
the need to design and build new network infrastructure.
In order to effectively predict the City's network
requirements to support the GIS we had to identify and generalize
GIS user types and access methods, and then measure local Ethernet
traffic in a number of scripted scenarios. We also had to take
into consideration emerging communications technologies such as
Fast Ethernet and ATM. In addition, we had to find a way to mathematically
model the impact that unleashing access to the GIS would have
on CityNet.
Recent statistical analysis of Ethernet traffic at
Bellcore has shown that Ethernet traffic is self-similar or fractal
in nature; that there is no natural length of a "burst".
This startling but widely accepted result challenges the traditional
assumptions of a Poisson model for the interarrival distribution
of data communications traffic. This paper is a description of
the methodology that Philadelphia, looking beyond the Poisson
model, applied to the traffic measurements during scripted events
to help the City design network capacity and speed appropriate
for the additional demand that we anticipate GIS will be placing
on CityNet.
OVERVIEW OF PHILADELPHIA'S ENTERPRISE-WIDE GIS
Philadelphia is fortunate to have an administration
that is supportive of GIS. While capital funding for other information
systems is routinely denied, this administration has interpreted
the need for building GIS datasets as appropriate for the capital
budget. Already the City has spent over ten million dollars building
GIS infrastructure.
The Mayor's Office of Information Services (MOIS)
is charged with the responsibility for information technology
Citywide and is ultimately responsible for, and must sign off
on, all information technology procurements. This gives the MOIS
the ability to create and enforce standards which have been extremely
important in the success of the Citywide GIS. Our environment
includes a GIS software standard of Esri products and a hardware
platform of IBM RS6000 with AIX UNIX. The applications are on
Ethernet LANs running TCP/IP as the network protocol.
MOIS functions as the coordinator of the various
departmental GIS efforts and provides technical and administrative
support. There are currently four departments with large scale
GIS implementations whose activities are briefly described below.
The Philadelphia City Planning Commission
(PCPC) is the GIS production center where map products are created
to meet the needs of departments without access to the system
as well as PCPC's own extensive mapping requirements. Currently
PCPC maintains the parcel and zoning bases, as well as numerous
other geo-political boundary coverages. There are five full time
GIS staff using the system as well as approximately 10 other planners
who are intermittent users. The number of staff and users will
increase only slightly over the next few years.
The Streets Department maintains the street
centerline coverage and uses it in an increasing number of applications
supporting their operations such as trash truck routing and curb
cut compliance. There are currently 8 full time GIS staff but
this number as well as the number of users is expected to grow
considerably. Streets wants to use GIS as a basis for a number
of other facilities management and operation support systems including
pavement management, street lighting maintenance, traffic control,
striping, routing for oversize and HAZMAT vehicles, snow plowing
control, and customer information service. In two years, there
will likely be as many as 50 users and staff accessing the system.
The Water Department has a number of GIS initiatives
underway. They are planning to convert the water distribution
and sewer return plans to GIS and build a new maintenance management
system around it. They are going to use impervious surface percentages
as part of a new method for storm water billing. They are also
using GIS for combined sewer/stormwater overflow planning. They
have procured Citywide orthophotography and numerous planimetric
layers including curblines, building footprints, impervious surfaces,
fire hydrants and sewer inlets, streetpoles, and two foot contours
to support these efforts. Water and sewer features will be maintained
by the Water Department but maintenance plans for the rest of
the features have not yet been solidified. They currently have
just three full time GIS staff, but the number of staff and
users is expected to increase dramatically, to a likely total of
50.
The Records Department is in the process of
converting the registry maps to GIS which will bring a whole new
level of accuracy to the parcel base. When complete, responsibility
for the parcel base will shift from PCPC to Records. Records
will have approximately 25 users and staff.
While there are other Departments which, on a much
smaller scale, currently use or plan to use GIS, our focus is on
WAN requirements for the four departments with major initiatives
as well as MOIS.
Since 1991 network requirements have been nagging
for attention in the background while resources were focused on
building the system. The advent of orthophotography brought the
need for attention to network requirements instantly to the forefront.
The Citywide orthophotographic compilation is 35GB. All of the
departments want access to it. In our current environment, described
below, access over the network is virtually impossible. The alternative
to increasing the capacity of the network is populating the departments
with redundant servers. From a GIS management point of view, redundant
servers would lead to a maintenance management nightmare and on
a Citywide basis would likely result in a much higher overall
cost. An understanding of the network capacity requirement became
critical.
NETWORK DESIGN ISSUES
Our existing WAN consists of two interconnected SONET
rings: one links major city buildings, including the five
buildings with major GIS initiatives in the commercial core, and
a second links the Bell Atlantic COs (central offices)
that best meet local access requirements for many smaller city
locations. This WAN infrastructure has been in existence for
a few years and was put in place to build Citywide connectivity
at low cost and with high reliability. Both rings are currently
being upgraded (3Q97) from OC3 (155Mbps) to OC12 (622Mbps). Up until
this point in time the WAN requirements fit what could be called
the Frame Relay model, i.e., most applications needed FT1 (fractional
T1) from the local site back to a central site serviced by one
or two T1s. Hence our OC3 was a "bundle" of multiplexed
T1s and did not allow us to exploit all the advantages of SONET
technology, although it did meet our need to build a low cost,
highly reliable backbone.
One of the first things needed was a determination of the WAN
bandwidth requirement. After that, the question of whether or
not ATM would be needed to the desktop would be a key issue.
If it turned out that ATM to the desktop was not the right approach
then how we carried IP over ATM would become a critical issue.
The two contending architectures that were possible for us were
MPOA (multiprotocol over ATM) and LANE (LAN emulation). Since
GIS was known to be a high bandwidth, delay sensitive application
(something we confirmed through measurement), the issue of router
latency in the LANE solution needed to be confronted and understood.
In every case where we needed to model traffic, e.g., the WAN
SONET ring, LANE router latency, we felt we were dealing with
high bandwidth "bursty" Ethernet traffic that often
required back to back images. We knew that LAN traffic of this
sort is statistically self-similar, that none of the commonly
used traffic models is able to capture this fractal behavior,
that such behavior has serious implications for the design, control,
and analysis of high speed, cell-based (B-ISDN) networks, and that
aggregating streams of such traffic typically intensifies the
self-similarity ("burstiness") instead of smoothing it.
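The aggregation effect can be illustrated with a short sketch: combining heavy-tailed (Pareto) on/off sources, a standard model for the Bellcore findings, leaves the aggregate bursty at every time scale, unlike Poisson traffic, which smooths out. The source count, Pareto shape, and window sizes below are illustrative assumptions, not measurements from CityNet:

```python
import random

def pareto_on_off(alpha=1.5, steps=10_000):
    """One traffic source whose on/off period lengths are heavy-tailed
    (Pareto) -- there is no natural length of a burst."""
    out, on = [], True
    while len(out) < steps:
        duration = int(random.paretovariate(alpha)) + 1
        out.extend([1 if on else 0] * duration)
        on = not on
    return out[:steps]

def peak_to_mean(series, window):
    """Burstiness seen at a given time scale: peak window sum over mean."""
    sums = [sum(series[i:i + window]) for i in range(0, len(series), window)]
    mean = sum(sums) / len(sums)
    return max(sums) / mean

random.seed(1)
# Aggregate 20 such sources, as a backbone port aggregating many LANs would.
agg = [sum(cells) for cells in zip(*(pareto_on_off() for _ in range(20)))]
for window in (10, 100, 1000):
    print(window, round(peak_to_mean(agg, window), 2))
```

For independent Poisson sources the peak-to-mean ratio collapses toward 1 as the observation window grows; in typical runs of this sketch the heavy-tailed aggregate stays noticeably above 1 at every window size, which is the "aggregation intensifies burstiness" effect described above.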
APPROACH
The following steps outline the major activities that were followed
in the network design process:
1. Understand the application and its users. Before we began any
analysis in support of the network design and engineering, we spent
time understanding how the application was used and what its users
perceived its benefits to be. This included observing the use of
the application in its current deployment in a stand-alone LAN
environment and discussing what some future uses might be, such
as 911 emergency services.
2. Define the system architecture. This included understanding the
connectivity topology, i.e., where the users would be, where the
data would reside, the performance objectives, LAN protocols to
be used, the application system architecture, database access, etc.
3. Benchmark the GIS transactions. Based upon a controlled test,
we measured how many bytes were transmitted and received for each
transaction type. This included accessing ortho image, map, and
text data.
4. Identify feasible networking solutions. Based upon the measured
GIS transaction data; the response time, reliability, and
maintainability objectives; assumptions about the peak period
transaction arrival rate; and the current and near-term available
technology, we determined the feasible set of networking solutions.
5. Evaluate the solutions. Each networking solution was evaluated
against criteria that management had helped identify and rank in
importance. This included a debate over how to trade off the benefits
of some technologies that reduced latency but were more "bleeding
edge" against other technologies that had higher latency but were
more proven in the business world.
6. Iterate. This was an iterative process that converged over time
and is in fact still evolving at this writing.
NETWORK DESIGN: REQUIREMENTS & ASSUMPTIONS
BASE THE DESIGN ON BENCHMARK RESULTS, AND LATER USE A C++ SIMULATION
MODEL TO CAPTURE THE FRACTAL NATURE OF TRAFFIC ARRIVAL BEHAVIOR
FIRST-CUT DESIGN
The networking assumptions above, together with assumptions about
the initial number of users (in the chart below),
allowed us to make some very preliminary estimates of what the
WAN options might be. The assumption of a mean and maximum network
transaction size of 11MB and 40MB respectively was based on observing
the coverage and image file sizes. This information served as
a "first cut" to get us off the ground and begin to
understand the nature and range of WAN options to be evaluated.
Another assumption was that most of the users would need to access
most of the data. We also made some assumptions about the number
and type of transactions. GIS applications do not typically involve
the constant transfer of data. Data are requested and generally
used in some way or analyzed before more data are requested.
We estimated that in a peak 15 minute period each user would make
5 requests for 11MB of data.
Liza Casey, Program Manager - GIS, City of Philadelphia
Arthur J. Petrella, Network Engineer; and,
James L. Querry, Jr., GIS Technical Coordinator,
City of Philadelphia
[Chart: the large Citywide SONET ring connecting Bell Atlantic COs
and the small downtown ring connecting major buildings]
DEPARTMENT | # USERS |
Records (City Hall) | 25 |
Streets | 50 |
Water (Water only) | 25 |
MOIS | 15 |
Planning | 15 |
Other (Spread across many locations) | 50 |
Total Users | 180 |
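As a rough sanity check on the first-cut assumptions (5 requests of 11MB per user in a peak 15-minute period), the user counts above can be converted into an aggregate offered load; the arithmetic, not any particular tool, is the point:

```python
MBIT = 1_000_000

def offered_load_mbps(users, requests=5, mb_each=11, peak_seconds=15 * 60):
    """Mean aggregate load (Mbps) implied by the stated peak-period demand."""
    bits = users * requests * mb_each * 8 * MBIT
    return bits / peak_seconds / MBIT

print(offered_load_mbps(180))   # 88.0 Mbps
```

A mean load of 88Mbps for the initial 180 users already rules out T1-class links and fits inside an OC3c (155Mbps), though with limited headroom once queueing and growth are considered.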
These numbers represent what might be expected over the next twelve
months (up to 2Q98). Later, perhaps by 4Q99 we might expect this
number to double to around 350. Based on these assumptions we
were able to identify four feasible options for the WAN backbone
interconnecting the five major sites. The four options are described
below and an evaluation matrix for these four options follows:
OPTION#1: A "STAR-HUB" WITH 4 POINT TO POINT ATM PVCs
@ DS3(45Mbps)
OPTION#2: A "STAR-HUB" WITH 4 POINT TO POINT HDLC CIRCUITS
@ DS3(45Mbps)
OPTION#3: A "STAR-HUB" WITH 4 POINT TO POINT ATM PVCs
@ OC3c(155Mbps)
OPTION#4: A CONCATENATED RING @ OC3c(155Mbps) RUNNING ATM
The rationale for identifying these four WAN options is based on
the following reasoning: We set an aggressive WAN component response
time goal (a second or less on average in the peak period). Since
the maximum data transfer that could occur was around 40MB
(320,000,000 bits), we could not, of course, consider T1s. Depending
on the distribution of data transfer sizes, a minimum of DS3 (45Mbps)
would need to be considered for WAN connectivity. At the other end
of the spectrum, OC3c (155Mbps) was both a limit that would seem
to meet the 1 second criterion and was practical in terms of the
availability of router and ATM switching equipment capabilities.
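That reasoning can be checked with simple serialization arithmetic; the figures below ignore protocol overhead, propagation delay, and queueing, so they are best-case transfer times for a single 40MB request:

```python
def transfer_seconds(megabytes, link_mbps):
    """Best-case serialization time for one transfer; ignores protocol
    overhead, propagation delay, and queueing."""
    return megabytes * 8 / link_mbps

for name, mbps in (("T1", 1.544), ("DS3", 45.0), ("OC3c", 155.0)):
    print(f"{name:5s} {transfer_seconds(40, mbps):7.1f} s")
```

Even before queueing is considered, T1 needs over three minutes for the maximum transfer, DS3 about seven seconds, and OC3c about two seconds, which is why the feasible options span DS3 to OC3c.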
GIS WAN OPTIONS - RESPONSE TIME(IN SECONDS):
The four response time curves shown above are based on assuming
an M/M/1 model. In queuing theory an M/M/1 model breaks down
as follows: an M in the first position indicates a Poisson traffic
arrival distribution; an M in the second position indicates an
exponentially distributed service time; and the number in the
third position represents the number of servers in the queuing
model (in this case, the single SONET OC3c ring). Because
the Poisson distribution does not reflect the self-similar nature
of bursty Ethernet traffic, we know the model to be suspect, but
it nevertheless allows us to put the four options into perspective.
Options #1 and #2 are the same from the standpoint of the response
time curve, since whether we run ATM PVCs or HDLC over DS3 circuits
is immaterial. The HDLC option (a peer to peer, point to point WAN
protocol) allows a non-ATM approach to be considered. The OC3c
concatenated ring has two flavors: one assumes ATM OC3c attached
file servers and one assumes 100Mbps Ethernet attached file servers.
The only reason to consider the 100Mbps file server option would
be the unavailability of an ATM NIC card for the RS6000s. The
M/M/1 calculation for the OC3c
concatenated ring with an ATM attached file server is shown
below:
OC3c Concatenated Ring, File Server ATM Attached:
Network Response Time = service time / (1 - circuit utilization)
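The formula above reduces to a few lines of code. The service time and arrival rate below come from the first-cut assumptions (an 11MB mean transaction on a 155Mbps link, 180 users making 5 requests per peak 15 minutes); they are illustrative inputs, not the exact figures behind the published curves:

```python
def mm1_response(service_s, arrivals_per_s):
    """M/M/1 mean response time: R = S / (1 - rho), with rho = lambda * S."""
    rho = arrivals_per_s * service_s
    if rho >= 1:
        raise ValueError("offered load exceeds capacity; queue is unstable")
    return service_s / (1 - rho)

service = 11 * 8 / 155            # mean 11MB transaction on OC3c, in seconds
lam = 180 * 5 / (15 * 60)         # 180 users, 5 requests per peak 15 minutes
print(round(mm1_response(service, lam), 2))   # ≈ 1.31 s
```

With these inputs the WAN component comes out near the roughly one-second average cited for the concatenated ring option.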
GIS WAN OPTIONS - EVALUATION MATRIX:
An evaluation matrix for the four primary WAN options
is shown below. The evaluation criteria are the database topology
(i.e., whether we have all servers at a central location or distributed
across a number of remote locations); the associated network
topology (i.e., point to point connections from remote sites to
a central site vs a ring); the bandwidth required and average
response time for that option; the DTE equipment at the premise
location; file server and workstation connectivity; protocols
used; and finally the % of the SONET ring bandwidth that would
be consumed by that option.
These data suggested to us that Option#4(ATM),
an OC3c concatenated ring with ATM attached file servers was
the most cost/effective and appropriate choice. Although Option#3
with OC3c ATM PVCs would deliver an average .7 second response
time over the WAN compared to 1.1 seconds for Option#4(ATM),
it would consume the entire OC12 SONET ring which would be a prohibitive
cost. Options #1 and #2 had an unacceptable WAN response time at
over 4 seconds on average. We could be required to start with
Option#4(100Mbps) and move to Option#4(ATM) if ATM NIC cards
for the RS6000 were not available when first needed. Once we
have upgraded our existing OC3 ring to OC12 we will need to install
optical multiplexors at each site to manage and control at the
OC3c level, to isolate the 155Mbps needed for GIS.
GIS WAN DESIGN - ATM BACKBONE:
At this point it is necessary to say something, however
brief, about ATM in general and specifically as it relates to
how we propose using it here for the WAN backbone. The need
to provide an indivisible 155Mbps "pipe", i.e., an OC3c
concatenated SONET ring, as transport for the GIS backbone requires
that ATM be used as the WAN protocol. ATM is one of the general
class of packet technologies that relay traffic via an address
contained within the packet. Unlike more familiar technologies
such as X.25 or Frame Relay, ATM uses very short, fixed-length
packets called cells. In fact, the umbrella technology for this
type of service is cell relay. ATM represents a specific type
of cell relay service defined as part of the overall B-ISDN(broadband
integrated services digital network) standard.
What makes packet technologies attractive is their
suitability for data communications. For such applications, packets
or cells make much more efficient use of communications channels
than TDM(time division multiplexing) technologies. ATM's small
fixed-length cells allow for very low switching latency and thus
ATM fits the high speed transport medium provided by optical fiber
technology or SONET.
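The fixed cell size does carry a per-cell cost. A minimal sketch of the arithmetic, using the standard 53-byte cell with a 48-byte payload and AAL5's 8-byte trailer, shows what it costs to carry one of our large image transfers as cells:

```python
import math

CELL_BYTES, PAYLOAD_BYTES = 53, 48   # standard ATM cell: 5-byte header

def cells_for_pdu(pdu_bytes, aal5_trailer=8):
    """Cells needed to carry one PDU under AAL5 (8-byte trailer, padded
    out to a whole number of 48-byte cell payloads)."""
    return math.ceil((pdu_bytes + aal5_trailer) / PAYLOAD_BYTES)

image = 16 * 1_000_000               # one measured 16MB ortho display
cells = cells_for_pdu(image)
overhead = cells * CELL_BYTES / image - 1
print(cells, f"{overhead:.1%}")      # ~10% carried above the raw image size
```

The roughly 10% "cell tax" is the price paid for the very low, predictable switching latency that makes ATM attractive on the backbone.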
[Charts: "WHAT IS ATM?" and "WHY ATM?", printed with permission
of BCR Enterprises, Inc.]
BENCHMARK GIS TRANSACTIONS AND COMPUTE M/G/1:
A next step in refining the estimate of the WAN bandwidth
was to benchmark two classes of GIS transactions, one that accessed
ortho images and one that accessed vector data. Up to this point
in time we had used M/M/1 assumptions, i.e., Poisson arrival
distribution, exponential service time distribution, single server.
Now we were ready to benchmark these transactions, measuring how
many bytes of data were transmitted and received each time a request
for data was made, and to calculate the mean and variance of the
service time distribution under M/G/1 assumptions. The difference
between this and the M/M/1 model is the use of the benchmark
measurements in a General service time distribution. This approach
still left us with the assumption of a Poisson arrival distribution,
but it promised to reflect the service time distribution much more
accurately, since it would be based upon measurements of a
representative range of transaction types.
A Hewlett Packard network analyzer was used to measure the results summarized below.
The network analyzer was attached to an Ethernet
port on the same LAN as the RS6000 workstation that was generating
the requests for GIS data. The data resided on a file server at
a remote building connected via a T1 link over the SONET ring.
The server's file system where the data were located was mounted
on the workstation. We chose ArcView as the access mechanism
since in the future the majority of users will access the system
with ArcView. The response time was measured in minutes since
we were saturating the T1 with every request. The aim was to measure
the number of bytes that need to be transmitted and received for
a representative range of transaction types and calculate the
parameters of a general service time distribution over an OC3c
concatenated ring.
We broke down the activity into specific events observable
with the network analyzer. The first step, which we called "access",
corresponds to Add Theme. This was of a somewhat
variable size dependent on the size and structure of the directory
at which ArcView first looked. The mean measurement on the benchmark
file system was about 600KB. Using the Add Theme Browse
Box to change directories, which we called "Navigate
Directory" also caused network activity. In our case it
was about 200KB per move. Hitting the OK button in the Add
Theme Browse Box, which we called "Create Link"
caused small but measurable activity. The most interesting result
occurred when we clicked the check box to display the theme, which
we called "Display" while we were looking at an ortho
image. We set the extent to the full extent of the ortho tile,
which was about 3,000' x 4,000', and expected to see the full
file size, 56MB, move across the network. Instead, consistently,
16MB was all that came across. Apparently ArcView does some form
of dynamic resampling according to what it can reasonably display.
The display of vector data also proved to cause less network
activity than expected. The network transaction size was about
half the size of the coverage file. The final two activities
that we measured were Pan and Zoom. Using
the Pan and Zoom on the vector data caused no additional network
activity. Data about the complete coverage was already at the
workstation. However, performing these same functions using the
ortho caused interesting results. Zooming in from the full extent
invariably caused the transfer of additional data, typically about
8.5MB. Panning sometimes triggered the transfer of additional
data and sometimes not. Our unverified conclusion was that Zooming
brought additional data in "chunks" and sometimes we
Panned to an area where the data was already present and sometimes
where additional data still needed to be transferred.
Overall, in spite of the fact that we erred in our
assumption that the maximum network transaction size would be
up to 40MB, the mean network transaction size measured in the
course of the benchmark was 10.4MB, off just .6MB from our assumed
mean of 11MB.
SUMMARY OF BENCHMARK DATA:
The results of this benchmark were illuminating in
a number of ways. It gave us not only a better feel for what
the WAN component of overall response time might be for various
levels of users and/or transaction rates (see chart on following
page), but also an understanding of how the transactions
request data when PAN and ZOOM functions are invoked.
Previous analysis indicated that an OC3c concatenated
SONET ring was the most cost/effective way to provision the required
WAN bandwidth. With this in mind the benchmark results were put
in an M/G/1 model where the service times assumed an OC3c(155Mbps)
concatenated ring with various transaction arrival rates.
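The M/G/1 computation can be sketched with the Pollaczek-Khinchine formula. The service-size sample below reuses the event sizes observed in the benchmark (600KB access, 200KB navigate, 16MB ortho display, 8.5MB zoom) as an illustrative, not exhaustive, transaction mix, with one network transaction per second as an assumed arrival rate:

```python
def mg1_response(sizes_mb, arrivals_per_s, link_mbps=155.0):
    """Pollaczek-Khinchine mean response time for an M/G/1 queue:
    R = E[S] + lambda * E[S^2] / (2 * (1 - rho))."""
    s = [mb * 8 / link_mbps for mb in sizes_mb]        # service times (s)
    es = sum(s) / len(s)
    es2 = sum(x * x for x in s) / len(s)
    rho = arrivals_per_s * es
    if rho >= 1:
        raise ValueError("offered load exceeds capacity; queue is unstable")
    return es + arrivals_per_s * es2 / (2 * (1 - rho))

# Measured event sizes: access, navigate, ortho display, zoom (MB)
sizes = [0.6, 0.2, 16.0, 8.5]
print(round(mg1_response(sizes, arrivals_per_s=1.0), 2))   # ≈ 0.49 s
```

With this sample the WAN component stays under half a second, consistent with the sub-second result reported for the OC3c ring; the high variance of the size mix is what the General service distribution captures and the M/M/1 model misses.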
RESULTS OF BENCHMARK - RESPONSE TIME CURVE:
The chart below shows a WAN component of response
time of under 1 second even if the transaction arrival rate is
2 or 3 times what we expect for 150 to 300 users. The "X"
axis of this chart is in "business transactions" as opposed
to network transactions. A business transaction would be composed
of a sequence of network interactions with the server and/or
workstation, i.e., an ACCESS, NAVIGATE DIRECTORY, CREATE LINK, etc.
This distinction is emphasized since business transactions are the
things that can usually be forecast based on some number of users.
At this point we need to remember that this work is still based on Poisson arrival distribution assumptions. If we were dealing with general Internet traffic, use of Poisson models would certainly result in major underestimation of the WAN bandwidth requirements. We have a much simpler, bounded, and controlled situation, since we can describe our traffic characteristics with some assurance, although we still have uncertainty as regards the effect of aggregated traffic sources.
This chart would suggest that an OC3c concatenated ring would be sufficient for the initial deployment of between 150 and 300 users. It would also mean that we would have sufficient time to take corrective action if we were off by a factor of two or three in terms of the actual bandwidth required. From a practical business perspective we would proceed with plans (which we are doing) to provision an OC3c concatenated ring for GIS over the WAN based on these results.
This SONET backbone connecting major sites is depicted
below:
At this point, then, we had a solution in which we could have a
high degree of confidence: the OC3c concatenated ring will deliver
the needed WAN response time and does not warrant further effort
at this time. When we turn our attention to the problem of how to
carry LAN traffic over the ATM boundary (MPOA vs. LANE), alluded
to in the introduction, however, we are again faced with the problem
of how to deal with "bursty" Ethernet traffic. The following section
examines how we envisioned connecting major locations to the WAN
backbone and the performance and architectural issues that were
identified.
PREMISE LAN CONNECTIVITY:
The basic requirements that determined the design
options for the premise LAN environment were:
The number of users, applications, and network connections that
would need to be supported from each location would preclude any
thought of a "flat" network.
10 and/or 100Mbps switched Ethernet and the emergence
of Gigabit Ethernet would mean ATM would only make sense for us
over the backbone WAN.
The benchmark had shown that requesting multimegabyte
back to back images would be done as a matter of course.
As the number of users grew, multiple users would
access data at the same location so we should expect that "bursty/fractal"
traffic behavior would result. We knew that near term future uses
of GIS would involve production environments including "911"
and other public safety users so that responsiveness would become
more of a concern over time. This meant that we might need to
be wary of any architectures that required that the data incur
traditional router latency when moving across IP subnets.
The two basic architectures that were available to
us to carry TCP/IP traffic over ATM were MPOA and LANE. The MPOA
solution was, at the time of this writing (April 1997), offered
only by Newbridge, with a proprietary solution they termed "VIVID".
That same month the ATM Forum voted to make MPOA a standard,
and we knew that most major vendors had announced, or would announce,
support of MPOA. The issue of when a vendor would be able to deliver
equipment would become important since we needed to be in a position
to order and install equipment by the 2Q98. MPOA had the advantage
that it would avoid "router latency": when a user requested back
to back images across the WAN, each multimegabyte request for data
would result in an initial "visit" to a router to resolve MAC and
ATM addressing; an ATM SVC (switched virtual circuit) would then
be established, after which the data would be streamed across the
network, bypassing the router.
The chart below shows a Newbridge "VIVID" implementation
for a typical "large" GIS location.
The LANE solution, on the other hand, would route
every packet through the router at the file server location to
another router across the WAN where the user resides. This approach
works very well when you have "small" transmissions
to "many" locations. What we have are "large"
transmissions to a "few" locations. The back to back
image transfers using PAN and ZOOM, which are done as a matter
of course, are a very clear example of this. The issue of router
latency centers around the router's buffering capacity to handle
multiple users requesting back to back multimegabyte data transfers.
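The buffering concern can be made concrete with a toy fluid model: bursts arrive from the WAN faster than the router's forwarding path can drain them, so backlog accumulates for the duration of each multimegabyte image. The 155/100Mbps rates and half-second think time are illustrative assumptions; the 8.5MB burst is the measured zoom transfer:

```python
def peak_backlog(in_mbps, out_mbps, burst_mb, idle_s, bursts, step_s=0.1):
    """Fluid sketch of router buffer occupancy (in Mbit) when bursts
    arrive faster than the router's outbound port can drain them."""
    backlog = peak = 0.0
    for _ in range(bursts):
        bits = burst_mb * 8.0
        while bits > 0:
            arrived = min(in_mbps * step_s, bits)
            bits -= arrived
            backlog = max(0.0, backlog + arrived - out_mbps * step_s)
            peak = max(peak, backlog)
        # user think time between back to back requests lets the queue drain
        backlog = max(0.0, backlog - out_mbps * idle_s)
    return peak

# 8.5MB zoom bursts from a 155Mbps WAN through a 100Mbps forwarding path
print(peak_backlog(155, 100, 8.5, idle_s=0.5, bursts=3))
```

Even a single user's zoom burst briefly parks over 20Mbit (several megabytes) in the router under these assumed rates; multiple users doing this concurrently multiplies the requirement, which is precisely the buffering exposure of routing every packet under LANE.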
A BayNetworks LANE implementation is shown below:
There is more to the MPOA vs. LANE evaluation than
the issue of router latency, however critical that might be. The
chart below indicates the other major issues that needed to be
considered as of this writing. There is, of course, the issue
of costs: one-time equipment costs, recurring circuit costs, and
maintenance and support costs, all of which will enter into any
final decision. Since BayNetworks has also announced support for
MPOA, we need at this writing to determine from them the timetable
for delivering on this announcement.
We felt we had convincing evidence that a LANE implementation
would be untenable once the usage matured to the 150 to 300 user
range due to the router latency issue stated previously. This
conclusion was based upon the benchmark which demonstrated that
typical usage of the application would generate multimegabyte
back to back data transfers across the WAN. We corresponded with
Mr. Vern Paxson and others at Bellcore who had done work on
the self-similar, fractal nature of bursty LAN traffic, and
they helped us understand that fetching back to back large images
fell into the category of traffic behavior they were studying.
We also attended a seminar sponsored by NetworkWorld and given
by Mr. Scott Bradner, who runs a telecommunications testing lab
at Harvard University. He indicated that the router buffer performance
under a LANE scenario would be something we would need to measure
carefully.
This led us to conclude that an MPOA-like solution
was needed. Whether or not it was a BayNetworks, Newbridge, or
CISCO solution or some other vendor was not as yet the issue.
We are in the midst of taking additional steps to add
a final degree of confidence to our conclusions. Those steps
are twofold as of this writing. One is to build an MPOA prototype
with the help of the vendors and create the expected peak conditions
which would demonstrate the ability of MPOA to deliver the performance
required. At this point we see no need to go through the considerable
effort that it would take to demonstrate that LANE could not deliver
the needed performance. It was not designed for the high bandwidth,
latency sensitive traffic we need to service, and every indication
we had (the benchmark, the studies from Bellcore, etc.) was that
we needed to avoid such latency.
The second, parallel effort is to use a C++ simulation model that
was developed by Mr. Paxson's colleagues. At this writing we
do not know which approach may prove most practical and useful
so at this stage we are pursuing both.
SUMMARY:
The challenge that faced us was how to determine
the network design and technology that would ensure the needed
reliability, availability, maintainability, scalability and performance
over time for Philadelphia's GIS, a high bandwidth latency sensitive
application, which originally existed in a stand-alone LAN environment.
This application is expected to grow substantially in its number
of production applications and users. Therefore it was important
to anticipate any hidden latency that might only become evident
as traffic levels grew and to ensure that our architecture and
technology were scalable.
These general guidelines helped lead us to some major
conclusions regarding the network architecture and the types of
technology we plan to employ. ATM was determined to be the protocol
to service the network backbone, primarily because we needed an
indivisible 155Mbps "pipe" and because we viewed this
backbone in strategic terms, i.e., we expected it to service
emerging B-ISDN applications like Internet and intranet access in
addition to GIS and hence to grow transparently to higher bandwidth.
On the premise side we plan to connect file servers
with ATM at 155Mbps since in general they need to service aggregate
traffic from many locations. If an ATM NIC card is not available
for the RS6000s in the time frame required, we would connect them
at 100Mbps switched Ethernet until such time. Desktops would be
connected at 10Mbps switched Ethernet and could grow to 100Mbps
switched Ethernet and eventually Gigabit Ethernet over time. We
also concluded that an MPOA-like architecture was required to
carry IP traffic over ATM since we needed to avoid the router
latency of a traditional LANE implementation. This led us to
undertake building an MPOA prototype to replicate peak traffic
conditions using GIS applications and to pursue a Bellcore-supplied
C++ simulation model until one approach proves more feasible.
Hopefully we will be able to report on the results at the Esri
User Conference.
ACKNOWLEDGEMENTS
We would like to acknowledge the generous help of
Mr. Russel Hale, Bell Atlantic Corp., Wilmington, DE; Mr. Harold
Farrey, CIGNA Corp, Voorhees, NJ; and Mr. Vern Paxson, Lawrence
Berkeley Laboratory, Berkeley, CA.