GIS Implementation: Measuring the Value of Lifelong GIS Learning

By Avis L. Webster and Kristi Lombard

 

Abstract. The short life cycles of GIS technology products, compounded by an increased complexity in job roles, heighten the demand for lifelong GIS training. Trends in GIS training include shifts from training to performance and from training to lifelong learning. Increasingly it is essential to demonstrate training outcomes, measure performance outcomes, deliver specialized training, emphasize knowledge management, and rapidly develop and deploy training. Efforts to cut GIS training costs underscore the need for accurate evaluation and measurement methods. This paper discusses the use of evaluation and measurement techniques to promote continuous improvement in the cost and performance effectiveness of GIS learning.


 

Introduction

Learning is the key to prosperity . . . . Investment in human capital will be the foundation of success in the knowledge-based global economy of the twenty-first century . . . . learning throughout life will build human capital by encouraging the acquisition of knowledge and skills and emphasizing creativity and imagination. To achieve stable and sustainable growth, we will need a well-educated, well-equipped and adaptable labour force. To cope with rapid change and the challenge of the information and communication age, we must ensure that people can return to learning throughout their lives. As well as securing our economic future, learning has a wider contribution. It helps make ours a civilised society, develops the spiritual side of our lives and promotes active citizenship. Learning enables people to play a full part in their community. It strengthens the family, the neighbourhood and consequently the nation. It helps us fulfil our potential and opens doors to a love of music, art and literature. That is why we value learning for its own sake as well as for the equality of opportunity it brings. -- David Blunkett, Secretary of State for Education and Employment, England

 

Our paper discusses a topic near and dear to our hearts. As GIS professionals, we (the authors, and likely you too) often experience a persistent, overwhelming sense of urgency to re-tool and re-train in order to perform quality work competitively amid ever-changing technological, methodological and organizational conditions. The last ten years have witnessed changes in the GIS industry on a massive scale: from command-line driven applications to GUI applications that increasingly must be delivered across the Net using client-server architectures, exploiting data from enormously complex data warehouses and enterprise RDBMSs, on new generations of hardware, according to rapidly evolving standards. Within this ten-year timeframe, we have been in danger of becoming obsolete end users of GIS technology. Those of us who continued to make our living in GIS have had to acquire whole new skill sets. In the face of the pressure to maintain our status as valuable employees, we are compelled to continue training and learning. We see no end in sight; hence the notion of lifelong learning in GIS.

In a broader context, we find ourselves in the knowledge-based economy of the post-technological Information Age, where knowledge and skills have become the key factor in competition and where knowledge is the only lasting source of competitive advantage. As we approach the millennium, concerns about the decline of Western firms, technological change, social change, market drivers, demographic changes, and the increasing globalization of business are fueling a "desperate quest" for new approaches to management and organization. One of the latest outcomes of the search for new paradigms in management is the theory and practice of the learning organization. The learning organization is a key means of adapting to and shaping a rapidly evolving and complex environment because it mobilizes the learning of all its members in a process of continuous self-transformation. Knowledge management is the next logical step beyond the learning organization (Starkey 1998, Demchenko 1999, DfEE 1998). Within this framework, many enterprises are positioning themselves as knowledge based. To realize that goal, these organizations must become learning organizations that exercise knowledge management.

Concurrently, the Information Technology industry faces a problem of global proportions: many CIOs fear that if IT skills and labor shortages aren't addressed, national IT sectors will lose their competitive edge, economies will suffer and innovation will slow (Busse and Brandel 1999). So it’s no surprise that training budgets are on the rise. The most significant trend in the marketplace is a shift from training (skills acquisition) to improving performance; the second is computer skills training; and the third is a shift from training to learning (ASTD 1997, Ernst & Young LLP 1999). According to Ernst & Young LLP, in 1997 companies spent $18 billion worldwide on IT training alone; the same Webpage cites research published by International Data Corp. estimating that this figure will reach $27.9 billion by 2001. Training expert Jack J. Phillips (1994) estimates that total spending on training in the U.S. rose 7% in 1993 to about $30 billion, of which training for Information Systems professionals is a significant part.

The Return on the Investment. Bits and bytes from the Ernst & Young Webpage cite a study entitled Competencies and Competitive Edge by Watson Wyatt Worldwide: companies that link skill development to business strategy have a 40% higher shareholder return than companies that don’t. The ASTD reports the results of another survey based on a sample of 40 publicly traded firms in a broad range of industries. This study provides preliminary evidence of a relationship between training expenditures per employee and measures of financial performance (Bassi and McMurrer 1998). During the period of the study, market-to-book ratios showed greater increases for companies that ranked in the top half of training expenditures. Publicly traded firms that spent more money on training tended to have higher net sales and gross profits per employee and were more highly valued on Wall Street. According to Eric Rolfe Greenberg, Director of the American Management Association (AMA) International Research Reports, companies are 80% more likely to increase employee productivity with training (AMA 1998).

Despite the labor and skills shortage, and despite preliminary evidence that training is positively correlated with profitability, there is increasing demand to measure the return on investment in human resource development (Malaysia HRD Online 1998, AMA 1998). The pressure to measure the ROI of training continues to grow.

 

"Kat" by Mr. Kidane Habte

Figure 1. "How does the enterprise benefit from training!?"

 

Measuring the success of training

Measurement is the only way of providing hard evidence to senior management of the value and the bottom-line impact of training. The following sections of this paper present a framework for measuring the success of training and provide the reader with an introduction to issues, tools and methods for calculating ROI. We recommend that readers who want more than an introduction to designing training programs that raise their return on investment, or who plan to apply the methods for calculating ROI, explore the Websites referenced in our bibliography. In addition to being the source for the information presented below, a number of the Websites offer free tools for evaluating training. The tools we found available on the Web (for free) include evaluation matrices, anecdotal record forms, expert review checklists, focus group protocols, formative review logs, implementation logs, interview protocols, questionnaires, user interface rating forms, evaluation report samples and even spreadsheets for calculating ROI. (For example see the Georgia Tech Research Institute Intelligent Machines Branch (GTRI) Website currently at http://mime1.marc.gatech.edu/MM_Tools/evaluation.html.)

 

Kirkpatrick’s Four Levels

In 1959, Donald Kirkpatrick created what is still the most widely used method of evaluating training programs. An ASTD survey indicates that 67% of organizations that conduct training evaluations use the Kirkpatrick method, to the degree that "level 1" and "level 2" have become part of the training lexicon. Nearly all organizations perform Level 1 and Level 2 evaluations, and many perform Level 3 and Level 4 evaluations (Stone and Watson 1999). Kirkpatrick's four levels of evaluation are summarized in Table 1.

 

Table 1. Kirkpatrick’s four levels (Alliger, et al. 1997; Tannebaum 1998)

Level 1: Reaction

Definition: Assesses participants' initial reactions to a course. Offers insight into participants' satisfaction, or the effectiveness (value) of the training as perceived by the trainee. Usually assessed through a survey, aka a "smiley sheet." Does not measure learning.

Questions addressed: Were the participants pleased? What do they plan to do with what they learned?

Guidelines for measuring this level:
  • Determine what you want to find out.
  • Design a form that allows questionnaire results to be easily tabulated.
  • Encourage honest written comments and suggestions.
  • Attain an immediate response rate of 100 percent.
  • Develop standards.
  • Measure reactions against the standards and take appropriate action.
  • Communicate participants' reactions.
  • Use focus groups to acquire qualitative feedback (i.e., more specific comments).

Level 2: Learning

Definition: Assesses the amount of information (principles, facts and techniques) understood and absorbed by trainees. May use a criterion-referenced test.

Questions addressed: What skills, knowledge, or attitudes have changed? By how much?

Guidelines for measuring this level:
  • Use a control group, if feasible.
  • Evaluate knowledge, skills, or attitudes both before and after the training.
  • Attain a response rate of 100 percent.
  • Use the results of the evaluation to take appropriate action.

Level 3: Behavior, or Transfer

Definition: Assesses the amount of material used on the job a week to six months (or longer) after taking the course. On-the-job behavior is assessed against the objectives of the course through tests, observations, surveys and interviews.

Questions addressed: Did the participants change their behavior based on what was learned in the program?

Guidelines for measuring this level:
  • Use a control group, if feasible.
  • Allow enough time for a change in behavior to take place.
  • Survey or interview one or more of the following groups: trainees, their bosses, their subordinates, and others who often observe trainees' behavior on the job.
  • Choose a statistically significant sample, or 100 employees.
  • Repeat the evaluation.
  • Consider the cost of evaluation versus the potential benefits.

Level 4: Business results

Definition: Measures results such as reduced costs, higher quality, increased production, and lower rates of employee turnover. Measured six months to two years after completing the course.

Questions addressed: Did the change in on-the-job behavior positively affect the organization?

Guidelines for measuring this level:
  • Use a control group, if feasible.
  • Allow time for results to be achieved; the amount of time depends on the course context.
  • Measure both before and after training, and repeat the measurement.
  • Consider the cost of evaluation versus the potential benefits.

 

The Augmented Kirkpatrick Model

Recognizing the prevalence and simplicity of Kirkpatrick’s typology, Alliger, et al. (1997) analyzed prior training effectiveness studies in order to provide more specificity about the levels. Their research, conducted with the USAF Armstrong Labs and the University at Albany, received the American Society for Training and Development (ASTD) 1997 Research of the Year Award.

Alliger, et al. examined the research literature for studies that reported correlations among more than one measure of training effectiveness. Using "meta-analytic" procedures to combine results across 115 correlations from 34 prior evaluation studies, the researchers developed an augmented framework that provides clearer distinctions within and between the levels. (See Table 2.)

 

Table 2. Comparison of Frameworks (From Alliger, et al. 1997)

Kirkpatrick’s Typology → Augmented Framework (criterion – definition)

LEVEL 1: Reactions – trainees’ perceptions of the training, generally their satisfaction with the training. Split in the augmented framework into:
  • Affective Reactions – trainees’ perceptions of the training, generally their satisfaction with the training
  • Utility Judgments – trainees’ beliefs about the value and usefulness of the training; the extent to which they believe they will use the training on the job

LEVEL 2: Learning – acquisition of knowledge as a result of the training. Split into:
  • Immediate Knowledge – the assessment of knowledge acquisition at the conclusion of training
  • Knowledge Retention – the retention of knowledge at some point after the immediate conclusion of training
  • Behavior/Skill Demonstration – demonstrated capability, or "can do," within the training context

LEVEL 3: Behavior – demonstrated behavior change as a result of training (either within the training context or on the job). Corresponds to:
  • Transfer – demonstrated on-the-job performance some time after the conclusion of training ("does do")

LEVEL 4: Results – organizational impact. Corresponds to:
  • Results – organizational impact

 

Results. Because few of the studies reported Level 4 results, the researchers limited their findings to Levels 1 through 3.

The research examined reliability and correlational results. "Reliability" refers to the consistency or repeatability of a measure; for example, the authors explain that a scale that is consistently five pounds light is reliable, but not accurate. Low reliability places a ceiling on accuracy (e.g., a scale that intermittently gives different readings for the same weight can be only so accurate). The researchers saw stronger correlations between measures when both measures were related to training content.

Based on their results, they present several recommendations for workplace learning and performance professionals; see Alliger, et al. (1997) for the full set.

 

Return on Investment (ROI)

Dr. Jack Phillips (1997) adds a fifth level in his model for evaluating training results – Return on Investment (ROI). ROI is the dollar value of benefits obtained by an organization over a specified time period in return for a given investment in a training program. An ROI analysis is not complete until the measured training results have been converted to monetary values and compared with the cost of the program. This shows the true contribution of training.

It was once considered impossible to measure the ROI of training, but now many organizations are doing so. It is a difficult and complex process that depends on a long-term perspective. Engaging cost accounting experts can help, and once the process of measuring ROI is begun, it will improve with practice. This section describes the steps in calculating ROI, techniques for increasing ROI, and some challenges in applying ROI analysis to the measurement of training benefits.

 

Steps in Calculating ROI

The framework shown in Figure 2 serves as a simple and practical tool for developing ROI, from data collection to calculating the actual monetary return. ROI is calculated by dividing the net benefit of the training program by its total cost.

 

Figure 2. Flowchart for calculating ROI (From Malaysia HRD Online 1998)

 

The following steps, detailed in the sections below, outline the process for calculating ROI: isolate the effects of training, convert hard and soft data into monetary values, tabulate the benefits and costs of the training program, and calculate the ROI.

 

Isolating the effects of training

Training results can be divided into hard data and soft data (Phillips 1997).

Hard data are the traditional measures of organizational performance, such as labor cost, production rates and sales. Hard data can be used to measure output, quality, time and cost. They're objective, easy to measure, and easy to convert to monetary values. Management tends to find hard data highly credible. Hard data is available in most types of organizations, including manufacturing, service, not-for-profit, government, and educational.

Soft data is usually data that is behaviorally driven, such as absenteeism and turnover rates. This data is more difficult to measure, difficult to quantify and more subjective. It is difficult to convert to monetary values. Management tends to find this data less credible as a measure. For training that focuses on developing "soft" skills (e.g., communication skills), soft data is often required as a measurement tool.

Table 3 provides examples of hard and soft data.

 

Table 3. Phillips' examples of hard and soft data.

Hard data

Output
  • units produced
  • items assembled or sold
  • forms processed
  • tasks completed

Quality
  • scrap
  • waste
  • rework
  • product defects or rejects

Time
  • equipment downtime
  • employee overtime
  • time to complete projects
  • training time

Cost
  • overhead
  • variable costs
  • accident costs
  • sales expenses

Soft data

Work Habits
  • employee absenteeism
  • tardiness
  • visits to the dispensary
  • safety-rule violations

Work Climate
  • employee grievances
  • employee turnover
  • discrimination charges
  • job satisfaction

Attitudes
  • employee loyalty
  • employees' self-confidence
  • employees' perceptions of job responsibilities
  • perceived changes in performance

New Skills
  • decisions made
  • problems solved
  • conflicts avoided
  • frequency in use of new skills

Development and Advancement
  • number of promotions or pay increases
  • number of training programs attended
  • requests for transfer
  • performance-appraisal ratings

Initiative
  • implementation of new ideas
  • successful completion of projects
  • number of employee suggestions
  • frequency of goal setting

 

Converting hard and soft data into monetary values

1. Focus on a single unit. For hard data, identify a particular unit of improvement in output (measured in products or sales), quality (measured in errors or rejects), or time (measured in the amount required to complete an order or finish a product). For soft data, use a single employee grievance or a single instance of turnover as the unit.

2. Determine the value of each unit. This is easy for hard data and difficult for soft data.
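As a minimal sketch of these two steps, assume the hard-data unit is one product defect avoided; all of the figures below are hypothetical and would come from your own cost accounting:

```python
# Sketch of the two-step conversion (all figures are hypothetical).

# Step 1: focus on a single unit of improvement.
# Assume the hard-data unit is one product defect avoided per month.
defects_before = 120   # monthly defects before training (assumed)
defects_after = 95     # monthly defects after training (assumed)
units_improved = defects_before - defects_after

# Step 2: determine the value of each unit.
# Assume cost accounting puts the fully loaded cost of one defect
# (rework labor, scrap, inspection time) at $310.
value_per_unit = 310.0

monthly_benefit = units_improved * value_per_unit
print(f"Monthly benefit: ${monthly_benefit:,.2f}")  # -> Monthly benefit: $7,750.00
```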

 

Benefits of training

The following list of benefits, adapted from Shepard (1999), can be used as a guideline for forecasting and measuring benefits. These categories are not mutually exclusive, so take care to avoid counting the same basic benefit more than once.

Labor savings – training results in less effort being needed to achieve the same level of output.

Productivity increases – additional output can be achieved with the same level of effort. (This implies that increased output is a goal; if it is not, the benefit is better expressed as a cost saving.)

Other cost savings – savings other than those related to labor.

Other income generation – new income earned as a direct result of the training.

Each training benefit outlined should be converted to monetary value. Two examples follow:

Time savings = number of hours saved * average hourly salary

Production gains = monetary value of extra goods produced
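As a minimal sketch, these two conversions might be coded as follows; the hours, rates, and unit values are assumptions for illustration only:

```python
# Illustrative conversion of the two benefit formulas above.
# All inputs are hypothetical and would come from your own records.

def time_savings(hours_saved: float, avg_hourly_salary: float) -> float:
    """Time savings = number of hours saved * average hourly salary."""
    return hours_saved * avg_hourly_salary

def production_gains(extra_units: int, unit_value: float) -> float:
    """Production gains = monetary value of extra goods produced."""
    return extra_units * unit_value

# Example: 400 staff-hours saved at an average loaded rate of $28/hour,
# plus 150 extra units valued at $95 each.
print(time_savings(400, 28.0))      # -> 11200.0
print(production_gains(150, 95.0))  # -> 14250.0
```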

 

Costs of training

The cost of training includes one-time costs (course development, equipment) and recurring costs (workbooks, room rental, trainer wages, and productivity loss). The following list of costs, adapted from Shepard (1999) and broken down by functional task, can be used as a guideline for calculating training costs. Keep in mind that this list is not all-inclusive and that other factors may apply to your specific organization.

Design and development costs – the costs of creating the training program.

Promotional costs – the costs of promoting or internally ‘advertising’ the training.

Administration costs – the costs related to the time taken by the training department to administer the training.

Faculty costs – the costs related to the time taken by the training department to deliver the training.

Materials costs – e.g., workbooks and other course materials.

Facilities costs – e.g., room rental.

Student costs – e.g., trainees’ wages and lost productivity while in training.

Evaluation costs – the costs of the time spent evaluating the training and performing ROI analysis.

 

Calculate ROI

ROI is essentially the comparison of the monetary value of the benefits and costs. There is more than one method of comparison.

The simplest comparison is to subtract the costs from the benefits. The difference, then, is the ROI:

$ROI = value of benefits – cost of training

Another method, used by Shepard (1999), computes a direct ratio of benefits to costs, expressed as a percentage:

%ROI = (benefits / costs) * 100

The third method, used by Chase (1997), uses a slightly different ratio, calculating the difference between benefits and costs and normalizing by the training cost:

%ROI = (value of benefits – cost of training) / cost of training

The latter two ROI calculations provide information on the percentage return earned over a specified period as a result of investing in a training program.

Another related measure is the payback period (Shepard 1999). This is a measure of how long it will take (in months) before the benefits of the training match the costs incurred, i.e., when the training ‘pays for itself’:

Payback period = costs / monthly benefits

If the payback period is short (a few months), management will be more inclined to make the investment. The payback period also avoids the need to choose an arbitrary evaluation period.
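The measures above amount to a few lines of arithmetic. The following is a sketch of the formulas as stated in the text; the function names are our own:

```python
# Sketch of the ROI measures described above. Function names are ours;
# the formulas follow the text (Shepard 1999, Chase 1997).

def roi_dollars(benefits: float, costs: float) -> float:
    """$ROI = value of benefits - cost of training."""
    return benefits - costs

def roi_ratio_pct(benefits: float, costs: float) -> float:
    """%ROI (Shepard) = (benefits / costs) * 100."""
    return benefits / costs * 100

def roi_net_pct(benefits: float, costs: float) -> float:
    """%ROI (Chase) = (benefits - costs) / costs * 100."""
    return (benefits - costs) / costs * 100

def payback_months(costs: float, benefits: float, period_months: int = 12) -> float:
    """Payback period = costs / monthly benefits."""
    return costs / (benefits / period_months)
```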

The following example, adapted from Shepard (1999), demonstrates these methods of calculating ROI.

 

Table 4. Listing of costs and benefits of a training program

Duration of training: 33 hours
Estimated student numbers: 750
Period over which benefits are calculated: 12 months

COSTS
  • Design and development: 40,930
  • Promotion: 4,744
  • Administration: 12,713
  • Faculty: 86,250
  • Materials: 15,000
  • Facilities: 40,500
  • Students: 553,156
  • Evaluation: 872
Total Costs: 754,165

BENEFITS
  • Labor savings: 241,071
  • Productivity increases: 675,000
  • Other cost savings: 161,250
  • Other income generation: 0
Total Benefits: 1,077,321

 

Calculations:

$ROI = value of benefits – cost of training
     = 1,077,321 – 754,165
     = 323,156

%ROI = (benefits / costs) * 100
     = (1,077,321 / 754,165) * 100
     = 143%

%ROI = (value of benefits – cost of training) / cost of training
     = 323,156 / 754,165
     = 43%

Payback period = costs / monthly benefits
               = 754,165 / (1,077,321 / 12)
               = 754,165 / 89,776
               = 8.4 months
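The figures above can be checked with a short, self-contained script; this sketch simply reproduces the Table 4 arithmetic:

```python
# Reproducing the Table 4 calculations.
costs = {
    "Design and development": 40_930, "Promotion": 4_744,
    "Administration": 12_713, "Faculty": 86_250, "Materials": 15_000,
    "Facilities": 40_500, "Students": 553_156, "Evaluation": 872,
}
benefits = {
    "Labor savings": 241_071, "Productivity increases": 675_000,
    "Other cost savings": 161_250, "Other income generation": 0,
}

total_costs = sum(costs.values())        # 754,165
total_benefits = sum(benefits.values())  # 1,077,321

print(total_benefits - total_costs)                               # 323156 ($ROI)
print(round(total_benefits / total_costs * 100))                  # 143 (%ROI, Shepard)
print(round((total_benefits - total_costs) / total_costs * 100))  # 43 (%ROI, Chase)
print(round(total_costs / (total_benefits / 12), 1))              # 8.4 (payback, months)
```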

 

Raising Your Training ROI

There are several things a company can do to maximize the benefits of training. The following suggestions are outlined in Chase (1997).

One of the most effective methods of increasing the effects of training is to prepare employees with a pre-training briefing. This provides a forum where management can tell trainees the purpose and objectives of their training, why they were selected, and what business need the training is intended to meet.

The purpose of training is to transfer the skills learned back to the job. After training, new skills need to be nurtured until they become habits; this time is the most critical. Techniques learned in training may go unapplied because trainees feel they don’t have time or because managers don’t cooperate. Management support at this time has more influence on whether training yields any benefit than anything the trainer can do (Chase 1997). It is suggested that management ask employees what they learned, have them develop an action plan to implement it, and follow up on the plan. Employee rewards also provide incentives; rewards can take the form of financial bonuses, certificates, acknowledgement or promotion.

 

Challenges in Applying ROI

First, decide which programs to evaluate for ROI. For large companies, it may be necessary to develop criteria for which training programs are most important to track. For example, management may wish to track the most expensive or controversial training programs, or those that run most frequently and involve the most people.

Because ROI analysis is not an exact science, it is difficult to isolate the effects of training; many influences are at work. For example, a training class may coincide with a new marketing gimmick, making it difficult to pinpoint how much of an increase in sales is attributable to the training and how much to the marketing. It is also difficult to know when to measure results, as they may accumulate over months or years.

Assuming that benefits continue to accrue for some time after training is completed, it is important to specify an appropriate time period for the ROI analysis. Choose a period that fits well with the organization’s planning cycle (one year, two years). Alternatively, a period that corresponds to the lifetime of the benefit can be used; this may be more difficult, as you would need to know how long the average student stays in a position where the skills being taught are used.

Some practitioners emphasize that it is not necessary to prove absolute cause and effect. It is sufficient to look for indicators that training has improved performance and productivity.

Lastly, how do you know what a good ROI is? For many companies, a good goal is 25%; others consider training worthwhile even if it merely breaks even. Some management training achieves returns of 100% or more, since a manager’s influence can be tracked across the whole team rather than just one person.

 

Conclusion

Lifelong learning is increasingly important to the competitiveness of our nation and to the success and well-being of our individual citizens. Nevertheless, increasing ROI is the bottom line for management. The challenges to justifying investments in training are significant, but a real commitment from the enterprise to connect training and learning to the enterprise mission through evaluation can provide solutions.

 

Bibliography

Alexander, S. Want to Give Your Career its Best Skills Boost? Invest in These Honey Pots, Say Top IT Execs. In Computerworld, June 21,1999. Available: http://www2.computerworld.com/home/features.nsf/all/000621buzz [7/25/99].

Alliger, G., Tannebaum, S., Bennett, W., Traver, H. and Shotland, A. 1997. Selecting the "Right" Measures of Training Effectiveness: Lessons Learned From Over 30 Evaluation Studies. Available: http://www.ecqweb.com/r1/eval_rstc.html [7/26/99].

American Society for Training and Development (ASTD). 1996. The ASTD Training and Development Handbook: A Guide to Human Resource Development, 4th Edition, R. L. Craig (Ed.), McGraw Hill.

ASTD. 1998a. National HRD Survey: Information Technology Training, 1998 Second Quarterly Report. Available: http://www.astd.org/virtual/_commun…ch/nhrd_executive_survey_98it.html [7/25/99].

ASTD. 1998b. ASTD Award Winner Research Award. Available: http://www.astd.org/virtual_community/awards/research_winner.html [7/26/99].

Borzo, J., Essick, K. and D’Amico. European View: The West Comes Up Short, Despite Abundance to the East. In Computerworld, December 7, 1998. Available: http://www2.computerworld.com/home/print.nsf/9812078282.html [7/25/99].

Brown, S. M. 1997. Changing Times and Changing Methods for Evaluating Training. Available on the Knowledge Transfer International Website: http://www.ktic.com/tOPIC7/14_BROWN.HTM [8/5/99].

Busse, T. and Brandel, M. 1999. The Skills Struggle: It's Time to Restock the Global IT Labor Pool Through Training and Education. Available: http://computerworld.idg.com.au/globalinnovators/feb1999/skills.html [8/3/99].

Carliner, S. 1998. Demonstrating the Effectiveness and Value of Technical Communication Products and Services: A Four-Level Process. Available: http://www.fredcomm.com/articles/value/kirkpatr.htm [7/26/99].

Chase, N. 1997. Raise Your Training ROI. Available on Quality Magazine's Website: http://qualitymag.com/0997f3.html [7/25/99].

Demchenko, Y. 1999. Paradigm Change in Education in Conditions of Emerging New Information Technologies and Global Information Infrastructure Building. In ED-MEDIA/ED-TELECOM 98 Proceedings. Freiburg, Germany. Available: http://www.uazone.com/demch/papers/edte98demch.html [7/26/99].

Department for Education and Employment (DfEE). 1998. The Learning Age: A Renaissance of New Britain. Available: http://www.lifelonglearning.co.uk/greenpaper/index.htm [8/4/99].

Foxon, M. and Coopers & Lybrand. 1989. Evaluation of Training and Development Programs: A Review of the Literature. In Australian Journal of Educational Technology, 5(2), 89-104. Available: http://cleo.murdoch.edu.au/gen/aset/ajet5/su89p89.html [7/29/99].

ICESA. 1998. The Workforce Development Staff Skills and Training Challenge: A Report to the Workforce Development Leadership. Available: http://www.icesa.org/national/update/trainch.htm [7/25/99].

Kirkpatrick, D. L. Implementation Guidelines for the Four Levels of Evaluation. From Training & Development, January 1996. Available on the American Society for Training and Development Website: http://www.astd.org/CMS/templates/template_1.html?articleid=20842 [7/26/99].

Klienholtz, A. 1999. Systems ReThinking: An Inquiring Systems Approach to the Art and Practice of the Learning Organization. Available: http://www.cba.uh.edu/~parks/fis/inqre2a1.htm [7/26/99].

Malaysia HRD Online. 1998. Evaluating Training Results. Available at the Online Human Resource Development Malaysia Website: http://www.asia-online.com.my/hrmalaysia/news/article2.html [8/2/99].

Phillips, J. J. 1997. How Much is the Training Worth? Available on the American Society for Training and Development Website: http://www.astd.org/CMS/templates/template_1.html?articleid=11019 [7/25/99].

Robinson, D. G. and Robinson, J. C. 1989. Training for Impact: How to Link Training to Business Needs and Measure the Results. Jossey-Bass Publishers, San Francisco.

Rubin Systems, Inc. 1998. The Top 10 Mistakes in IT Measurements. Available on the Rubin Systems Inc. Website: http://www.hrubin.com/headline/mistake.html [7/26/99].

Shepard, C. 1999. Assessing the ROI of Training. Available on the Fastrak Consulting Ltd. Website: http://www.fastrak-consulting.co.uk/tactix/Features/tngroi/tngroi.htm [8/4/99].

Stone, J. and Watson, V. 1999. Evaluation of Training. Available: http://www.ispi-atlanta.org/Evaluation.htm [8/2/99].

Software Futures. 1999. Approach for Outcomes-based IT Training. Available: http://www.sfg.co.az/training/approach.html [7/26/99].

Takacs, G. J. 1995. FAQ – Does Training Get Results? Available: http://kell67.ed.psu.edu/trdev-l/summary/Result.txt [7/26/99].

Teach, E. How to Succeed in IT. In CFO Magazine, July 1997. Available: http://www.cfonet.com/html/Articles/CFO/1997/97JLhowt.html [7/26/99].

 


Avis L. Webster
Berlin Consulting Associates, Inc.
Email: awebster@earthling.net, berlin1@bigplanet.com
Phone: 301.897.9079

Kristi Lombard
University of Maryland College Park, Institute of Systems Research
Email: klombard@wam.umd.edu
Phone: 301.718.8027