Paper No. 243
Building Heights and GIS: Different Technologies Require Different Solutions
Building height data has become one of the hottest commodities in the telecommunications industry as the latest wireless networks are rolled out across the country. But building heights are not a one-size-fits-all solution. An uninformed choice could result in excessive cost or an inferior solution.
This tutorial discusses the differences between the new wireless technologies and the varying characteristics of their radio frequencies. It then clearly differentiates the building height and GIS requirements for each technology, providing the reader with a firm basis for understanding the options and offerings in this burgeoning market.
Cellular Technologies
The analog phone in the car sends a radio signal out. The nearest tower receives the signal with the greatest strength. The signal is authenticated to make sure the right person is going to get the bill. The signal is then routed across landlines or microwave to the receiving phone user. If the transmitting phone goes out of range, the call is transferred to the appropriate in-range tower. The re-routing instructions can either be done through a separate microwave system between the towers or through landlines between the towers.
Whether the re-routing was landline based or cellular was the traditional distinguishing characteristic of the cellular alternatives in any market. Historically the consumer would choose between only these two alternatives. Now the car-phone consumer is offered these two alternatives (converted or converting to digital), a Specialized Mobile Radio provider at 800 MHz (Nextel in 353 markets), and up to six PCS providers between 1850 MHz and 2200 MHz. The choice of up to nine competitors to complete the car phone call can be overwhelming.
Conventional Cellular
With both of the conventional cellular providers at 850 MHz, sending a large wave with a lot of power (up to 20 W), both could locate their cells with a large amount of distance between the towers. With radii of 1-20 km, the towers could be up to 20 miles apart. These large cells are known as macrocells, and each tower serves an average of about 36 sq. mi.
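As a rough check on the 36 sq. mi. figure, a cell can be approximated as a circle (a simplification; real cell shapes are irregular):

```python
import math

def cell_area_sq_mi(radius_km: float) -> float:
    """Approximate a cell's coverage as a circle of the given radius."""
    radius_mi = radius_km * 0.621371  # convert km to miles
    return math.pi * radius_mi ** 2

# A macrocell radius of roughly 5.4 km gives the ~36 sq. mi. average cited above.
print(round(cell_area_sq_mi(5.4)))  # ~35 sq. mi.
```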
The actual area covered by any cellular cell is a function of the frequency being transmitted, the power behind the transmission, and the topography/coverage/clutter of the area being served. The lower the frequency, the larger the individual radio waves. The larger the wave, the larger the object(s) needed to disrupt it.
Generally, attenuation (the rate at which the signal weakens) doubles as the frequency doubles. Since PCS operates at slightly more than double the frequency of conventional cellular, with the same power behind each signal the conventional cellular signal would always be expected to be at least twice as strong as PCS at a common distance from the tower.
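This rule of thumb is visible in the frequency term of the standard free-space path loss model: doubling the frequency adds about 6 dB of loss at the same distance. A minimal sketch (free space only; real-world clutter adds more loss):

```python
import math

def free_space_path_loss_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB for a distance in km and frequency in MHz."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

loss_cellular = free_space_path_loss_db(1.0, 850.0)   # conventional cellular band
loss_pcs = free_space_path_loss_db(1.0, 1900.0)       # PCS band
print(round(loss_pcs - loss_cellular, 1))  # ~7 dB more loss at the higher frequency
```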
In addition to the larger wave, conventional cellular uses transmissions that are 10 to 200 times more powerful than PCS. Combined with the size of the wave, it is little wonder that current PCS users cluster near the windows of the office building just to pick up a signal, while the conventional cellular user is still using the phone in the elevator.
The third component of the size of the cell is the topography, coverage, and clutter of the cell area. Generically, a cell with deserts sloping down in every direction from the tower will be larger than a cell in wet forest with hills on every side.
With PCS users clustered near windows while conventional cellular users can still hear the other caller while they are in the middle of the building, it might seem logical that callers would stay on conventional cellular. However, the higher frequency and lower power needs of PCS phones means the handsets can be much smaller and have much longer battery lives than the conventional phones.
Sending the signal as 1s and 0s allows the voice to be received with greater clarity, and will allow the PCS phone to transmit data with greater ease.
The use of newer digital equipment throughout the PCS environment should allow the PCS providers to lower costs to end users, while increasing the number of enhanced service features. First and foremost of the enhanced features available from all digital equipment is increased ease of roaming. Additional features associated with digital equipment include voice mail, weather updates, sports and wake up calls.
In order to combat loss of market share to PCS, the landline and cell based conventional cellular providers are blurring the distinction between PCS and conventional cellular, by converting the existing analog networks to digital networks in the same frequency spectrum. Now the differentiator between the two conventional cellular providers in each market can include their methodology of dividing the available digital spectrum.
The choices are to divide up the spectrum by codes or by increments of time. The proponents of code use CDMA (Code Division Multiple Access). The proponents of time use TDMA (Time Division Multiple Access). However, the TDMA proponents are better known by the name of their marketing alliance, GSM (Global System for Mobile Communications). GSM/TDMA backers include AT&T, BellSouth, and SBC. CDMA proponents include Alltel, Ameritech, Bell Atlantic/NYNEX, Sprint, and AirTouch/US West.
As these companies rush to convert to all digital, there exists an opportunity to use the superior data available today to better model the effects of objects that induce attenuation, such as hills, trees, and buildings. Conventional cellular providers have a unique opportunity to both improve their service through digital deployment and through simultaneous redesign of cells associated with customer service issues such as dead spots and dropped calls.
PCS
The first operational PCS deployment in the US came only in the summer of 1996. Since then this bandwidth has flourished: PCS has added as many users in two to three years as conventional cellular did in ten. By the year 2003, industry pundits predict that 50% of the U.S. will have some sort of cellular instrument, and it is thought 50% of those will be PCS.
Starting in 1993 and concluding in 1997 (except for the C band re-auction), the FCC auctioned the bandwidth from 1850 MHz to 2200 MHz as 102 A- and B-band, 30 MHz wide licenses covering MTAs (Major Trading Areas). Smaller communities and fractions of the MTAs were divided into 493 C-band, 30 MHz wide licenses covering BTAs (Basic Trading Areas), and 1,479 D- through F-band, 10 MHz wide BTA licenses.
The C bands were desirable because their increased width translated to the ability to serve more simultaneous customers than D-F. The C bands were overbid, leaving little capital to build out. While 51 of the 102 A and B bands were up and working within 12 months of the conclusion of the FCC auction, only 3 of the 493 C bands were up in the same period. As a consequence, a number of the C bands were returned to the FCC or reclaimed by the FCC following bankruptcies. The re-bid will conclude in 1999.
Building the PCS infrastructure may involve up to 4000 times more RF Engineering than conventional cellular. The PCS frequency being higher means attenuation occurs sooner and from more sources than conventional cellular. It has been said that to a PCS signal, "…A tree is equivalent to the Great Wall of China."
Combined with the higher frequency is the relative lack of power of the PCS signal and the resulting smaller cells. PCS cells are microcells that may only have a radius of 100 meters.
The users to be reached from PCS towers are similar to conventional cellular users in that both are mobile, not clustered in one location. Where conventional cellular could serve wide areas with a limited number of cells, PCS cells serving under a square mile each will require a large number to serve the same area. This is one of the reasons for the popularity of dual-mode and tri-mode phones that switch the user to analog conventional cellular service (and/or digital, in the case of tri-mode phones).
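The scale of the buildout can be sketched by comparing cell footprints (a rough count that ignores overlap and real-world packing):

```python
import math

SQ_KM_PER_SQ_MI = 2.58999

def cells_to_cover(area_sq_mi: float, cell_radius_km: float) -> int:
    """Rough count of circular cells needed to cover an area (ignores overlap)."""
    cell_area_sq_mi = math.pi * cell_radius_km ** 2 / SQ_KM_PER_SQ_MI
    return math.ceil(area_sq_mi / cell_area_sq_mi)

# One 36 sq. mi. macrocell vs. 100 m (0.1 km) radius PCS microcells:
print(cells_to_cover(36.0, 0.1))  # on the order of 3,000 microcells
```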
In the densest areas of downtowns an opposite problem occurs. Increased cellular traffic leads to increased reflected out-of-phase signals, co-channel interference from adjacent or near cells, and even harmonics from two-way radios at 450 MHz and paging at 900 MHz. These sources of interference in turn lead to propagating deadspots. Propagating deadspots are a reason to redesign already existing PCS networks.
Conventional cellular and PCS A and B bands should account for the addition of 28,300 more towers (Paul Kagan and Associates) to the number of towers already up and being managed for multiple uses by multiple cellular companies by "Tower Lords." PCS C band, LMDS, and data over cellular are not included in these numbers. If the business traveler starts to check e-mail on a cellular phone the moment the plane lands, or when stopped in traffic, the amount of PCS infrastructure will have to increase dramatically.
LMDS
LMDS stands for Local Multipoint Distribution Service. Two bandwidths were sold in each market. The bandwidths were wide spans of frequency in the 28 GHz and 31 GHz ranges.
Because the frequency allocations awarded are so wide, the user served by LMDS differs from the user served by PCS and conventional cellular. The LMDS user will consume much more bandwidth per connection. LMDS users will either be the phone system of a small to medium sized business, or a mobile Internet surfer.
The fixed-location, small to medium business user is the more developed and more typical LMDS user. These users look to the LMDS provider for a number of voice and data connections, which can include high-speed data and wireless cable TV. Because the LMDS provider can offer an alternative to, or a bypass of, the local telephone company to connect to the long distance network, the LMDS carrier is considered a CAP (Competitive Access Provider). The LMDS provider can also connect to a land-based system to reach numbers local to the user, or possibly serve the desired number on the same LMDS system. When the LMDS provider connects local calls it becomes a CLEC (Competitive Local Exchange Carrier).
One difficulty with LMDS providers being CAPs/CLECs is the amount of uptime they can guarantee in areas where the rain falls in bigger drops. While light or continuous rain is too fine to disrupt LMDS, big raindrops can disrupt a wave that is about the size of a man's thumbnail.
LMDS providers will deploy either point-to-point or point-to-multipoint technology. Whether the user antenna points back to a dedicated transmitting antenna or a shared one, the user antenna is usually located on the roof of the user's building and connected to the MDF (Main Distribution Frame) in the basement through riser cable.
MMDS
MMDS (Multichannel Multipoint Distribution Service) resembles LMDS in that user locations are fixed. MMDS differs from LMDS in that the target customers are homes, not small and medium businesses. Rather than multiple voice and data lines, MMDS delivers wireless cable TV and high-speed Internet access.
MMDS frequencies were formerly reserved for local access public television. In major markets one of the MMDS providers will be granted the legacy public television bandwidth and its 35-mile radius. The other MMDS provider will be granted permission to transmit to the MTA boundaries.
The MMDS provider with the 35-mile radius will need a tower tall enough to serve the entire 3,900 sq. mi. area of homes. The MMDS signal will be aimed at small dish antennas located under the eaves of the homes. With one high tower in the center, the home antennas will probably not transmit back (upload) to the MMDS tower. The MMDS deployment will probably involve uploads over an alternate technology such as PCS or landlines.
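The ~3,900 sq. mi. figure is simply the area of the 35-mile circle:

```python
import math

radius_mi = 35.0
coverage_sq_mi = math.pi * radius_mi ** 2  # area of the 35-mile service circle
print(round(coverage_sq_mi))  # ~3,848 sq. mi., matching the ~3,900 cited above
```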
GIS Data Products
Depending on the cellular market being served, different combinations of various GIS data products may be appropriate. Each of the GIS products should be delivered in the format appropriate for the GIS software or the RF Engineering software. Each of the GIS products should be delivered in a common earth projection, coordinate system, and level of accuracy. Otherwise simultaneous display of layers will cause cognitive dissonance with streets appearing to go through the middle of buildings.
Building Heights
The most intensive area of GIS data products is building heights. Building heights can be built either by hand or through an automated computer program. When building heights are built by hand, the digitizer uses software that applies trigonometry to calculate the difference in elevation between the ground and the roof surfaces that are parallel or nearly parallel to the ground. Auto-correlation software instead uses the differences between gray values to infer 3D building structures.
Hand digitizing has the advantage of hard corners and edges. These hard corners and edges are necessary for PCS microcell redesigns because using a building to stop a signal from bleeding over is as important as determining signal attenuation. The hard corners and edges are necessary for LMDS because the line of sight is so critical.
The downsides to hand digitizing are the expense and the need for manual quality control. To control the cost of digitizing all the parallel and nearly parallel surfaces by hand, buildings shorter than a certain threshold may be ignored in the hand digitizing; those shorter buildings are then captured in the canopy DEM. Manual QC is needed for hand digitizing because of the possibility of missing a building near the height cut-off.
Auto-correlation has the advantages of capturing every surface and of being very fast. The major downside to auto-correlation is misinterpretation of shadow. Generically, the misinterpretation of shadow manifests itself as the "melted ice cream" effect, where corners and edges are incorrectly rounded. This melted ice cream effect may manifest itself all in one direction from the buildings or may obscure alleys between buildings.
Auto-correlation can be improved by surveying the values through finer grids, multiple passes of the software, and/or the inclusion of stereo imagery drapes and some hand digitizing. Unfortunately, all of these improvements add cost and reduce the speed of production.
A third possibility exists in the calculation of building heights through airborne (usually helicopter-based) laser ranging systems, known as LIDAR. The laser signal is broadcast from the aircraft and the distance from the roof surface to the aircraft is measured. With airborne GPS this distance is converted into X, Y, and Z values for the surface. The downsides to this technique appear to be cost and matching this information with the X, Y of the building structures and the other layers.
Since LMDS is concerned with lines of sight on rooftops and can easily be interfered with by air-conditioning equipment, stairwells, elevator equipment, etc., the building heights need to include "significant" rooftop structures. What constitutes a significant rooftop structure is open to interpretation, but will generically be a factor of both the number of visible pixels that represent the object being digitized and the minimum difference in height from the roof to the top of the rooftop structure. Depending on the size of the pixel used in the digital files, the number of pixels will translate to a number of square feet or meters.
In addition to driving a requirement for the most rooftop surfaces, LMDS also requires the greatest degree of accuracy. The higher the frequency, the smaller the wave used in the frequency. The smaller the wave, the smaller the size of the object that can interfere with the signal. The smaller the object that can interfere with the signal, the greater the need for vertical (Z) accuracy.
LMDS and PCS frequencies are high enough to require height accuracies in the area of sub-meter. Conventional cellular with its lower frequency and wider signal can tolerate far less height accuracy.
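Wave size follows directly from frequency (wavelength = c/f). A minimal sketch of the wavelengths involved:

```python
SPEED_OF_LIGHT_M_S = 299_792_458

def wavelength_cm(freq_mhz: float) -> float:
    """Wavelength in centimeters for a frequency given in MHz."""
    return SPEED_OF_LIGHT_M_S / (freq_mhz * 1e6) * 100

print(round(wavelength_cm(850), 1))    # conventional cellular: ~35.3 cm
print(round(wavelength_cm(1900), 1))   # PCS: ~15.8 cm
print(round(wavelength_cm(28000), 1))  # LMDS: ~1.1 cm, about a thumbnail
```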
The horizontal accuracy (X and Y) of the building edges and corners becomes a function of the amount of control necessary to build the desired vertical accuracy. In order to generate building heights with sub-meter Z accuracy, the produced data will need to have an X/Y accuracy of 2-4 meters. Generating multi-meter Z accuracy can be done with X/Y accuracies of 13 meters.
Bald Earth DEM
The software producing the building heights can record the heights as all being above mean sea level or as above ground level. When the building heights are measured above ground level, the height of the ground becomes critical. Or when the buildings are shown with the ground in a combined DEM (Digital Elevation Model), the height of the ground becomes critical. Or when the LMDS signal has to be sent from the roof of one building to the ground adjacent to the cable fault in a building without enough riser cable, the height of the ground becomes critical.
A bald earth DEM uses computer software to model what the earth would look like without buildings and vegetation. The most common US product describing the height of the ground is the USGS Level 1 DTED, with postings describing the Z every 30 meters. Under the NMAS definition, these postings equate to 90% of the points falling within 15 meters of the actual Z.
With 15-meter holes, even four-story buildings could disappear into holes in the USGS DEM. It would not appear to make sense to spend a great deal of money making the building heights as accurate as possible only to have a very accurate building disappear into a hole in the DEM. With sub-meter accuracy on the buildings, it probably makes sense to have 5 to 10 meter postings and 2.5 to 5 meter accuracy, so that only one-story buildings would disappear.
Accuracy and the posting distance are coordinated in the NMAS definition, but do not have to be in the delivered products. In the NMAS definition, the accuracy of a DEM is assumed to be equal to one half of the posting distance. It is possible to build a DEM with a coordinated posting distance and accuracy and then interpolate additional points in between so that the posting distance becomes finer without increasing the accuracy. Likewise it is possible to build a DEM with a certain accuracy and then to downsample the posting distance. The tradeoff is visual appearance vs. computer storage.
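The interpolation step can be sketched along a single profile line (a simplified 1-D illustration; real DEMs are 2-D grids):

```python
def refine_postings(profile):
    """Halve the posting distance along a DEM profile by linear interpolation.
    The grid becomes finer, but accuracy does not improve: every new Z is
    just the average of two original postings."""
    fine = []
    for a, b in zip(profile, profile[1:]):
        fine.extend([a, (a + b) / 2.0])  # keep the posting, then the midpoint
    fine.append(profile[-1])             # keep the final posting
    return fine

coarse = [10.0, 12.0, 20.0]        # e.g. 30 m postings
print(refine_postings(coarse))     # 15 m postings: [10.0, 11.0, 12.0, 16.0, 20.0]
```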
Canopy DEM
When the taller buildings are captured in the building heights and the surface of the ground is captured in the BEDEM (Bald Earth DEM), the vegetation and the smaller buildings are most cost-effectively displayed in a canopy DEM. Other terms for canopy DEMs include auto-correlated or cross-correlated DEMs.
As was the case with auto-correlation used for taller buildings, the canopy DEM will have rounded corners and edges. In addition, the auto-correlation procedure can produce anomalous information. In areas without enough difference in gray values, this anomalous information shows up as false spikes of elevation. These false spikes can be perceived as false buildings over lakes, parking lots without painted lines, highways exactly parallel to the flightpath, etc.
The most cost-effective way of dealing with these spikes is to visually compare the canopy DEM to other layers such as the taller building heights and the imagery. However, just as in the taller building heights it is possible to improve the quality of the canopy DEM through processing with finer grids, multiple passes, and post-processing masking of lakes and parking lots. These improvements increase cost and decrease turnaround.
Another problem with canopy DEMs is what is actually being measured in vegetation. Since the auto-correlation is based on differences in gray values, individual plants in an area of vegetation will probably not cast enough shadow (particularly between 10 a.m. and 2 p.m. local sun time, when aerial and satellite imagery is captured). Also, since the imagery used for traditional production is flown leaf-off, the changes in gray values from deciduous trees will occur within the tree itself, where the branches are dense enough to make a difference.
Therefore, canopy DEMs should be considered as showing the relative height of a group of vegetation, and not the actual heights of individual trees. When using a canopy DEM only to measure a line of sight, the RF Engineer should allow a reasonable amount of room for error.
Combined DEM
Occasionally the RF Engineering or GIS software tool will require data input that combines the square corners and edges of the building heights along with the rounded presentation of vegetation, the ground in-between the buildings, or all three. Or upper management may want a presentation on the virtual 3D appearance of a city. In either case, what is needed is a combined DEM with the building polygons placed on the other surface(s). This is accomplished in Production and then the combination is sampled as a discrete product.
Depending on the input needs of the RF Engineering, GIS, or presentation software, it may be desirable for the combined DEM to have a posting distance fine enough to keep the corners and edges square. When this is done, the source BEDEM and canopy DEM are not made magically more accurate: a combined DEM is only as accurate as its least accurate component.
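A minimal sketch of the combination step, assuming building heights are recorded above ground level and gridded to the same postings as the bald earth DEM (the function and sample values are illustrative, not a production workflow):

```python
def combine_dem(bald_earth, buildings_agl):
    """Combine a bald earth DEM with building heights (above ground level)
    by stacking the building on the ground at each posting. The result is
    only as accurate as the least accurate input layer."""
    return [[ground + building
             for ground, building in zip(ground_row, building_row)]
            for ground_row, building_row in zip(bald_earth, buildings_agl)]

ground = [[100.0, 101.0], [102.0, 103.0]]   # bald earth Z, above mean sea level
heights = [[0.0, 45.0], [0.0, 45.0]]        # building heights AGL (0 = no building)
print(combine_dem(ground, heights))          # rooftops at 146.0 and 148.0
```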
Imagery
A picture is worth a thousand words, and no GIS presentation is complete without the use of that old adage. Another old adage with applicability to the use of imagery in RF Engineering, Sales and Marketing, and Customer Service (the principal applications of GIS data in telephony) is, "…Being able to see the forest for the trees." When continually viewing a city as a series of wireframes, it is easy to lose one’s place. The city makes sense, the big picture is restored, and/or the anomaly is recognized when the software user takes a step back and views the other GIS layers overlaid on actual imagery.
The level of resolution needed to make sense of the GIS data, see the big picture, and/or recognize the anomaly is somewhat a judgement call. However, there are four truths to the resolution question:
Resolution is often expressed in terms of the smallest object that is visible:
Landsat: the inside of a baseball diamond
SPOT: a suburban home
IRS-1C: a Cessna
Russian 2-meter imagery: a Volkswagen Beetle
1-meter imagery: a St. Bernard
1-foot imagery: a basketball
However, there is a difference between just being visible and being recognizable (particularly to an untrained eye). With a finer resolution, even larger objects have more pixels being used to describe the object, providing more detail, and increasing the ease of recognition.
It is a mathematical impossibility to derive accuracy greater than the source input. For example, if a room were measured with a tape measure marked only in inches, it would not make sense to store the measurements of the room in fractions of an inch. In GIS, sub-meter (1 foot) pixels are suggested for determining building heights to sub-meter accuracy. One-meter pixels can be used to develop building heights with multi-meter accuracies.
While color files are natively three times larger than black and white files, the human eye is able to assemble more information from a color photo. Without the fuzzy logic and inference used in the human mind, computer programs may derive more information content from black and white imagery than color imagery, because the signal to noise ratio is greater with black and white imagery.
One foot, color imagery stored natively will take up approximately 90MB psm (per square mile). The resolution vs. computer costs equation can be modified by converting 24-bit color into 8-bit color and by compression.
The downsides to compression are that decompression must be done with the same software as was used in the compression, and that even the best wavelet or fractal compression is still "lossy": information content is lost through the compression algorithm. However, the lost information is now in the range of 1-2% of all the information that would have been available from the image, and deriving the remaining 98% is still beyond most unskilled users anyway. The decompression software can now be included with the compressed imagery in a cost-effective manner.
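The storage figures above follow from simple pixel arithmetic (a sketch assuming 3 bytes per 24-bit pixel, no file overhead, and an assumed 10:1 compression ratio for illustration):

```python
FEET_PER_MILE = 5280

pixels_per_sq_mi = FEET_PER_MILE ** 2       # one-foot pixels in a square mile
raw_mb = pixels_per_sq_mi * 3 / 1_000_000   # 24-bit color at 3 bytes per pixel
print(round(raw_mb))                        # ~84 MB, close to the ~90 MB cited

eight_bit_mb = raw_mb / 3                   # converting 24-bit color to 8-bit
compressed_mb = eight_bit_mb / 10           # assumed 10:1 wavelet compression
print(round(eight_bit_mb), round(compressed_mb, 1))
```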
In order to correctly drape a two-dimensional photograph from an airplane being buffeted around (or a satellite shooting through the entire image distorting atmosphere) onto a three dimensional earth, the photographs have to be DO’d (Digitally Orthorectified). The DO process can be done to various degrees of accuracy. A reason to do the DO to a level of accuracy less than the building is to match the imagery to other GIS layers. For example:
Commercial intelligent street products: 1:24,000 to 1:100,000 NMAS (40 to 167 feet)
Commercial building geocoding products: 1:96,000 to 1:100,000 NMAS (160 to 167 feet)
USGS Quad Maps: 1:24,000 NMAS (40 feet)
SPOT 10-meter satellite imagery: 1:24,000 NMAS (40 feet)
If the DO used is the same level of accuracy as the sub-meter building heights, the commercial intelligent street product has a much higher chance of showing the street going through the building. If the DO is at the same level of accuracy as the commercial intelligent street product and they are in the same earth projection and coordinate system, the streets and the imagery of the buildings will be appropriate, but the wireframe layer of buildings will be offset.
This problem may be best addressed with the use of two DO products, varying depending on the application. RF Engineering might use a DO at 1:2400 NMAS to match the sub-meter wireframes, while Sales and Marketing and Customer Service use a DO at 1:24000 to match the intelligent streets and the building geocoding.
Building Geocoding
The building heights and imagery can be instantly more valuable to the LMDS Sales and Marketing departments with a linkage from the visual data to more tabular real estate information such as building owner, tenant names, available square feet, etc. The value to every cellular company's Real Estate and Customer Service departments can be increased in the same way. The link from the visual side to the tabular real estate information is building geocoding.
Previously, building geocoding consisted of address ranges collected by the Census Bureau: TIGER files. These files attempted to show the range of addresses in a block or group of blocks, but did not attempt to place the correct address on the mapspace occupied by that building. TIGER files have accuracies of 167 feet.
Vendors have tried to take the TIGER address ranges and make them more accurate. Generically, these "corrected-TIGER" files have an accuracy of only 160 feet. Placing the address of a particular building and all of the associated information directly on the wireframe of the building will require starting from scratch.
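Placing an address record directly on a building footprint amounts to a point-in-polygon test against the wireframe's base polygon. A minimal sketch using the standard ray-casting algorithm (the footprint coordinates and record fields below are hypothetical):

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting point-in-polygon test; polygon is a list of (x, y) vertices."""
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        # Toggle whenever a horizontal ray from (x, y) crosses edge (j, i).
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

# Hypothetical footprint (local coordinates) and real estate record:
footprint = [(0, 0), (50, 0), (50, 30), (0, 30)]
record = {"address": "100 Main St", "owner": "Acme Realty", "sq_ft": 250_000}
if point_in_polygon(25, 15, footprint):
    print("link", record["address"], "to this wireframe")
```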
Where Telephony and GIS Intersect
Data vs. Drive Testing
The largest competitor for any GIS data vendor in telephony is not a fellow GIS firm. Instead, all data vendors find themselves competing with the status quo. Prior to the commercial availability of cost-effective, accurate, OTS (off-the-shelf) and near-OTS data, the only option available to cellular companies was to drive test the area.
Two-man crews collected points of sample information around an area using a temporary test transmission antenna and either phone instruments or a receiving antenna. Some of these crews have logged close to 1 million miles of testing.
The advantages of this incumbent way of collecting information are:
With the advent of a cost-effective alternative in OTS, 3D modeling, the disadvantages to drive testing have become:
1 day of conventional drive testing,
The coffee service for that city's engineering department, or
The monthly 7-11 expenses for that city's drive teams.
In reality, neither 100% drive testing nor 0% drive testing and 100% modeling are the correct answer. The correct answer is a proactive balance of the empirical information from some drive testing and the use of modeling to determine where to optimize the use of the drive testing.
LMDS
With the users of this service being fixed and being located in clusters in a city’s actual downtown and the suburban pseudo-downtowns, the GIS data delivery for the RF Engineering department of this company needs to concentrate on fairly defined areas of building heights. The building heights need to have rooftop structures included and be fairly accurate.
Depending on the software used and/or the visualizations needed, a BEDEM of the same area may be appropriate. This BEDEM need not be nearly as accurate as the building heights, but cannot be so coarse as to allow buildings to disappear into holes.
Particularly in the pseudo-downtowns where the shorter, more spread-out office parks are hidden in amongst groups of trees, canopy DEMs can be a very helpful addition to the taller building heights and the BEDEM. Depending on the LMDS company’s RF Engineering Department’s rules of thumb for avoiding the possibility of sending the signal into vegetation, the canopy DEM might be at the same level of granularity and accuracy as the BEDEM.
While imagery and ancillary layers are always a great idea for every department and for presentations to building owners and tenants, the geocoding of all of these layers to the building polygons will give Sales and Marketing the much-coveted hot list. The hot list is the list of buyers who fall into the set with:
Proven ability to be served from the GIS building height data and the GIS line-of-sight or RF Engineering software,
High likelihood of need from a demographic layer, and
Known name and size (square feet can be translated to employees and phone usage) from the linked real estate databases.
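The hot-list logic above can be sketched as a simple filter over joined records (all field names, scores, and the 250 sq. ft. per employee rule of thumb are illustrative assumptions, not figures from the text):

```python
# Hypothetical building records joined from the GIS and real estate layers:
buildings = [
    {"name": "Tower One", "line_of_sight": True,  "demand_score": 0.9, "sq_ft": 400_000},
    {"name": "Low Rise",  "line_of_sight": False, "demand_score": 0.8, "sq_ft": 60_000},
    {"name": "Plaza",     "line_of_sight": True,  "demand_score": 0.4, "sq_ft": 90_000},
]

SQ_FT_PER_EMPLOYEE = 250  # assumed rule of thumb for translating size to phone usage

hot_list = [
    # Keep buildings that are servable (line of sight) and likely to need service,
    # and estimate employee count from square footage.
    {**b, "est_employees": b["sq_ft"] // SQ_FT_PER_EMPLOYEE}
    for b in buildings
    if b["line_of_sight"] and b["demand_score"] >= 0.5
]
print([b["name"] for b in hot_list])  # ['Tower One']
```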
PCS
In the C Band area where new deployments still need to occur and in the areas between current build-outs where users are served only on tri-mode phones, the GIS data product provisioning is best viewed as a bulls-eye. In the center of the bulls-eye is the downtown with a need for building heights of a fairly high degree of accuracy, but with no need for rooftop structures.
In the ring around this downtown, the canopy DEM is the best way of seeing the shorter buildings and vegetation that can obscure the cell phone user on foot or in their car. The canopy DEM gains value when combined with imagery. False positives for blockage are quickly resolved.
In the outer rings from there, bald earth DEMs of increasing coarseness can be used for tower locations. At a certain point, it would not make sense to combine a coarse DEM with a fine image, so less expensive satellite imagery would be matched to the coarseness of the DEM.
In an effort to be first to each market, the A and B band winners sometimes built infrastructure first and planned and engineered after the fact. This rapid deployment has left an A and B band winner with a need for $2B in infrastructure redeployment.
Redeployment needs will increase as deadspot propagation starts to occur. In Scandinavia the rule of thumb had been that when 30% of all conversations in a dense urban area are completed on PCS bandwidths, microcell redesign is needed.
Microcell redesign data needs are very different from those of initial deployment. In order to minimize cross-channel interference, PCS operators faced with microcell redesign negotiate to move the antennas from the top of the building either down to the corners of the roof, or even further down, to 50' up the building.
When the antenna is partially up the side of the building, the need for accuracy of the Z value of the roof surface decreases. The argument can be made that PCS users are usually on the street or near the windows of the office floors. They are seldom using their PCS phones from the roof of the building. So without users on the rooftops the Z of the rooftops can be a little less accurate.
The Z of the rooftop cannot, however, become so inaccurate that the X/Y of the building edges is degraded. In microcell redesign, RF Engineers use building edges proactively to stop signals so as not to create cross-channel interference, and use the reflective surfaces of buildings to bounce signals to users on the street who could not otherwise be reached. With the antenna height at only 50’, microcell redesign might require more buildings to be captured as separate wireframes than an LMDS application would. Or the problem might be best resolved by combining canopy DEMs with the wireframes, BEDEM, and imagery in a full GIS solution.
Microcell redesign might be limited to modeling only the area where cellular traffic is great enough to be causing the propagating deadspot phenomenon.
MMDS
Sales and Marketing activities of MMDS companies can be optimized through a 3D model that shows:
Upscale homes with multiple TVs and computers,
Located on ridges visible from the central antenna, and
Whose eaves are not obscured by vegetation.
The GIS data product that yields this optimal "Hot" set of prospects would include demographics, BEDEMs, canopy DEMs, and imagery.
As the MMDS signal attenuates from the center outward across a service area of up to 3,900 sq. mis., the point of diminishing returns for the GIS data sets can be avoided with a bulls-eye approach. The center of the bulls-eye has the finest DEMs and imagery. Surrounding rings have ever less expensive, coarser DEMs combined with coarser satellite imagery from different sources.
Conventional Cellular
With the lowest frequency, widest wave, higher power output, and completed deployments, conventional cellular has the lowest immediate need for GIS data, and the least need for finer, more accurate data. However, three factors will drive conventional cellular’s ongoing need for GIS data:
In order to minimize lost marketshare to new PCS providers in the same markets, conventional cellular providers will try to minimize PCS’ major competitive advantage while attacking PCS’s greatest weakness. The PCS advantage is being digital, and this is minimized by digital redeployment in existing markets. The PCS weakness is comparative customer service, e.g. coverage problems and response to those coverage problems. The digital firm with the competitive rate that can complete the most calls, or respond best to dropped calls, will win the customer loyalty.
The number one negative impact on the profitability of all cellular companies is customer churn: the opposite of customer loyalty. To minimize this negative effect on profitability, having viable 3D models and imagery of the area served available to everyone in Customer Service can be the difference between profit and loss.
Quite honestly, a unique feature of the US cellular market makes the last point harder to follow to a logical conclusion. Due to cross-ownership issues, the PCS companies that could be most concerned about the conventional cellular digital strategy in some markets could be the very same firms (or parents) pursuing the conventional cellular strategy in other markets.
Mobile computing will change the face of the cellular industry. If mobile computing ever equals current voice loads, the number of antennas would have to double, and the number of towers would have to increase to handle the increase in antennas.
With fewer problems from harmonics, and cells spaced further apart, conventional cellular will take longer to succumb to propagating deadspots, but will have similar needs when macrocell redesign is needed.
How Accurate Are Those Building Heights?
The discussion of accuracy in our industry is complex and many-sided. When representatives of another industry (such as wireless telecom RF Engineers) are added, things get even more complex.
There are a number of opportunities for misunderstanding, and for the flip side of misunderstanding, customer education:
What is being measured?
How is the accuracy being described?
What is a valid test of this accuracy statement?
What is not a valid test of this accuracy?
What is specifically excluded from this product?
What is being measured?
The most important point to be made here is:
Any individual point’s accuracy is NOT measured.
The second most important point to be made is:
Most accuracy measurements deal with absolute accuracy. Most RF Engineering software is concerned with relative accuracy.
GIS, mapping, and photogrammetric expressions of accuracy are all descriptions of the behavior of a large population of points. Any individual point could differ from the accuracy statement and the statement could still be true.
In fact, all of the points in the sample could be wrong absolutely (as long as they were relatively accurate to each other), and still work for the purposes of RF engineering. The reason is that the signal is being aimed from a point in the sample, past intervening points in the sample, to another point in the sample. If all the points were (A) feet too high due to a datum shift, or (B) miles off in X/Y due to the wrong State Plane, the data would still work to depict whether or not the signal will reach the intended destination(s).
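The shift-invariance argument above can be demonstrated with a minimal sketch. The `line_of_sight` function and the profile values below are hypothetical, chosen only to show that a uniform datum shift in Z leaves the clear/blocked verdict unchanged; this is not an RF propagation model.

```python
# Sketch: a uniform datum shift in Z does not change a line-of-sight test,
# which is why relatively accurate (but absolutely shifted) data can still
# serve RF engineering. All heights are made-up illustration values.

def line_of_sight(tx_z, rx_z, terrain_z):
    """True if a straight ray from transmitter to receiver clears every
    terrain sample; samples are assumed evenly spaced between the two."""
    n = len(terrain_z)
    for i, z in enumerate(terrain_z, start=1):
        ray_z = tx_z + (rx_z - tx_z) * i / (n + 1)  # ray height at sample i
        if z >= ray_z:
            return False
    return True

profile = [10.0, 25.0, 18.0]               # intervening heights (m)
clear = line_of_sight(40.0, 35.0, profile)

shift = 100.0                              # pretend the whole DEM is 100 m "off"
shifted = [z + shift for z in profile]
clear_shifted = line_of_sight(40.0 + shift, 35.0 + shift, shifted)

print(clear == clear_shifted)              # True: the verdict is shift-invariant
```

Because every comparison in the test involves only differences between heights, adding the same constant to all of them cancels out, exactly as the datum-shift example in the text suggests.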
How is the accuracy being described?
The description of the absolute accuracy of a population of points is usually a description of a portion of a graphical curve used in statistical analysis, where the Y axis is the number of points and the X axis is the amount of inaccuracy of each point.
This curve reflects a large number of points; the sampling population is large enough to assure randomness. The statistical depiction of such a random population of errors is the normal (Gaussian) distribution, which in the popular literature is referred to as a bell curve.
In 1947, the USGS published a National Map Accuracy Standard (NMAS). The 1947 NMAS specified the behavior of 90% of the randomly tested points. Generally, the industry refers to this 90% by two other terms: 2 sigma, or 2 standard deviations from the mean. However, depending on the actual population of points, 2 standard deviations from the mean may or may not actually equal 90%.
Also in use throughout the industry are 3 other percentages of points. These range from:
50%+, or a simple majority of the points,
67%, 1 sigma, or 1 standard deviation (in general parlance these terms are assumed to be equivalent, but are not necessarily statistically the same), to
95%, 3 sigma, or 3 standard deviations (again, the equivalency is assumed but may not hold), the level used by the NSSDA standard.
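The caveats above, that the sigma labels are assumed rather than exact, can be checked directly for an idealized bell curve. The sketch below assumes errors follow a normal distribution and uses only the standard library; real point populations may deviate from these figures, which is precisely the document's point.

```python
# For a normal (bell curve) error population, compute the fraction of
# points falling within +/- k standard deviations of the mean.
import math

def within_sigma(k: float) -> float:
    """Fraction of a normal population within +/- k standard deviations."""
    return math.erf(k / math.sqrt(2))

print(round(within_sigma(1.0), 3))    # ~0.683, often rounded to "67%"
print(round(within_sigma(1.645), 3))  # ~0.90, the NMAS 90% level
print(round(within_sigma(1.96), 3))   # ~0.95, the 95% level
print(round(within_sigma(2.0), 3))    # ~0.954, so "2 sigma" is not exactly 90%
```

For a truly normal population, 90% actually corresponds to about 1.645 standard deviations and 95% to about 1.96, which is why the industry shorthand of "2 sigma" for 90% and "3 sigma" for 95% should be treated as loose parlance.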
The toughest of these standards is the proposed update/replacement for the older 1947 NMAS: the National Standard for Spatial Data Accuracy (NSSDA) from the Federal Geographic Data Committee.
The NSSDA standard also specifies the maximum inaccuracy of any point in the 5%, or "flyer," population: the most inaccurate a point can be is 3X the RMSE.
RMSE stands for Root Mean Square Error. Mathematically this number is derived by:
Squaring the errors,
Adding the squares,
Dividing by the number of errors, and
Taking the square root of the result.
The result of this calculation models the 360° nature of photogrammetric error. In GIS, RMSE includes the differences of scale, skew, rotation, and translation.
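The four-step RMSE recipe above translates directly into code. The sample errors below are hypothetical values chosen for illustration only.

```python
# A minimal sketch of the RMSE recipe: square the errors, add the squares,
# divide by the number of errors, and take the square root of the result.
import math

def rmse(errors):
    """Root Mean Square Error of a list of point errors."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

errors_ft = [3.0, -4.0, 0.0, 5.0]   # hypothetical Z errors at test points
value = rmse(errors_ft)
print(round(value, 3))               # 3.536

# The NSSDA "flyer" rule mentioned above caps any single error at 3x RMSE:
print(all(abs(e) <= 3 * value for e in errors_ft))  # True
```

Note that squaring makes the sign of each error irrelevant, so an error of -4’ contributes as much as one of +4’; RMSE measures magnitude of error, not direction.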
What is a valid test of this accuracy statement?
Using a suite of at least 20 Ground Test Points (GTPs), either an error distribution or an RMSE analysis should be conducted on the entire population. Anything fewer than 20 points will suffer from the "small group sampling" problem, where false assumptions can be derived from the information. The best-known example of the small group sampling phenomenon is drawing 10 coins out of a bag of 1,000. If all 10 were heads-up, one could falsely assume that the whole population of 1,000 coins was heads-up in the bag, instead of the actual 50%/50%.
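The coin-drawing pitfall above can be simulated. This is an illustrative sketch with made-up trial counts, not a formal power analysis; it simply shows that an all-heads sample of 10 from a fair 50/50 bag is rare but plausible, while an all-heads sample of 20 is vanishingly so.

```python
# Simulate drawing small samples from a bag of 1,000 coins, half heads-up,
# and count how often the sample is misleadingly all heads.
import random

random.seed(1)
bag = [True] * 500 + [False] * 500   # 1,000 coins, 50%/50% heads-up

trials = 50_000
all_heads_10 = sum(all(random.sample(bag, 10)) for _ in range(trials))
all_heads_20 = sum(all(random.sample(bag, 20)) for _ in range(trials))

print(all_heads_10 / trials)   # roughly 1 in 1,000 samples of 10 mislead
print(all_heads_20 / trials)   # all-heads samples of 20 are far rarer
```

Doubling the sample from 10 points to 20 squares away almost all of the risk of a misleading sample, which is the statistical reason behind the 20-GTP minimum.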
The 20 GTPs need to have the following characteristics:
The points need to be on the ground. (Points on tops of buildings and overpasses might be accurate for Z, but due to building lean will be off on X and Y.)
The points need to be photo-identifiable. (If the object or point on the ground cannot be seen from the photo, there would be no way to measure whether or not that point was measured correctly.)
The measurement of the GTPs must be equal to or better than the stated standard of the product being measured. (For example, it would not make sense to measure a 1:2400 NMAS BEDEM, where 90% of the points are within 7’, against a USGS DEM that has a stated accuracy of 1:24,000 NMAS, where 90% of the points are within 40’.)
What is not a valid test of this accuracy?
Some examples of invalid tests that we have seen include:
Comparing one point in our data to an "Almanac" listing of what the building was planned to be, with an unstated precision. One point is not enough; what is planned will usually differ from what is built. If the almanac’s measurements were less accurate than 1:4800, it would also be an irrelevant measurement.
Comparing our points to architectural drawings. Since the drawings usually do not include any geo-information, the X and Y will not be measured. The Z in the drawing is measured from the planned ground directly at the edge of the building. The Z in the data is from the actual ground out in the street where the ground is visible in the imagery.
Comparing our more accurate BEDEM to the less accurate USGS DEM. When differences occurred, it was the more accurate product that was considered suspect.
Examples of exclusions