System Implementation
System implementation is made up of many activities. The six major activities we are concerned with in this chapter are coding, testing, installation, documentation, training, and support (see Figure 13-1). The purpose of these steps is to convert the physical system specifications into working and reliable software and hardware, document the work that has been done, and provide help for current and future users and caretakers of the system. Coding and testing may have already been completed by this point if Agile Methodologies have been followed. Using a plan-driven methodology, coding and testing are often done by other project team members besides analysts, although analysts may do some programming. In any case, analysts are responsible for ensuring that all of these various activities are properly planned and executed. Next, we will briefly discuss these activities in two groups: (1) coding, testing, and installation and (2) documenting the system and training and supporting users.
Coding, Testing, and Installation Processes
Coding, as we mentioned before, is the process whereby the physical design specifications created by the analysis team are turned into working computer code by the programming team. Depending on the size and complexity of the system, coding can be an involved, intensive activity. Regardless of the development methodology followed, once coding has begun, the testing process can begin and proceed in parallel. As each program module is produced, it can be tested individually, then as part of a larger program, and then as part of a larger system. You will learn about the different strategies for testing later in this chapter. We should emphasize that, although testing is done during implementation, you must begin planning for testing earlier in the project. Planning involves determining what needs to be tested and collecting test data. This is often done during the analysis phase because testing requirements are related to system requirements.
Installation is the process during which the current system is replaced by the new system. This includes conversion of existing data, software, documentation, and work procedures to those consistent with the new system. Users must give up the old ways of doing their jobs, whether manual or automated, and adjust to accomplishing the same tasks with the new system. Users will sometimes resist these changes, and you must help them adjust. However, you cannot control all the dynamics of user–system interaction involved in the installation process.
Deliverables and Outcomes from Coding, Testing, and Installation
Table 13-1 shows the deliverables from the coding, testing, and installation processes. Some programming languages include utilities to generate documentation automatically, while others require more effort on the part of the coder to establish good documentation. But even well-documented code can be mysterious to maintenance programmers who must maintain the system for years after the original system was written and the original programmers have moved on to other jobs. Therefore, clear, complete documentation for all individual modules and programs is crucial to the system’s continued smooth operation. The results of program and system testing are important deliverables from the testing process because they document the tests as well as the test results. For example, what type of test was conducted? What test data were used? How did the system handle the test? The answers to these questions can provide important information for system maintenance because changes will require retesting, and similar testing procedures will be used during the maintenance process.
Table 13-1 Deliverables for Coding, Testing, and Installation
The next two deliverables, user guides and the user training plan, result from the installation process. User guides provide information on how to use the new system, and the training plan is a strategy for training users so that they can quickly learn the new system. The development of the training plan probably began earlier in the project, and some training, on the concepts behind the new system, may have already taken place. During the early stages of implementation, the training plans are finalized and training on the use of the system begins. Similarly, the installation plan lays out a strategy for moving from the old system to the new, from the beginning to the end of the process. Installation includes installing the system (hardware and software) at central and user sites. The installation plan answers such questions as when the new system will be installed, which installation strategies will be used, who will be involved, what resources are required, which data will be converted and cleansed, and how long the installation process will take. It is not enough that the system is installed; users must actually use it.
As an analyst, your job is to ensure that all of these deliverables are produced and are done well. You may produce some of the deliverables, such as test data, user guides, and an installation plan; for other deliverables, such as code, you may only supervise or simply monitor their production or accomplishment. The extent of your implementation responsibilities will vary according to the size and standards of the organization you work for, but your ultimate role includes ensuring that all the implementation work leads to a system that meets the specifications developed in earlier project phases.
The Processes of Documenting the System, Training Users, and Supporting Users
Although the process of documentation proceeds throughout the life cycle, it receives formal attention during the implementation phase because the end of implementation largely marks the end of the analysis team’s involvement in systems development. As the team is getting ready to move on to new projects, you and the other analysts need to prepare documents that reveal all of the important information you have accumulated about this system during its development and implementation. There are two audiences for this final documentation: (1) the information systems personnel who will maintain the system throughout its productive life, and (2) the people who will use the system as part of their daily lives. The analysis team in a large organization can get help in preparing documentation from specialized staff in the information systems department.
Larger organizations also tend to provide training and support to computer users throughout the organization. Some of the training and support is very specific to particular application systems, whereas the rest is general to particular operating systems or off-the-shelf software packages. For example, it is common to find courses on Microsoft Windows® in organization-wide training facilities. Analysts are mostly uninvolved with general training and support, but they do work with corporate trainers to provide training and support tailored to particular computer applications they have helped to develop. Centralized information system training facilities tend to have specialized staff who can help with training and support issues. In smaller organizations that cannot afford to have well-staffed centralized training and support facilities, fellow users are the best source of training and support that users have, whether the software is customized or off the shelf.
Deliverables and Outcomes from Documenting the System, Training Users, and Supporting Users
Table 13-2 shows the deliverables from documenting the system, training users, and supporting users. At the very least, the development team must prepare user documentation. For most modern information systems, documentation includes any online help designed as part of the system interface. The development team should think through the user training process: Who should be trained? How much training is adequate for each training audience? What do different types of users need to learn during training? The training plan should be supplemented by actual training modules, or at least outlines of such modules, that at a minimum address the three questions stated previously. Finally, the development team should also deliver a user support plan that addresses issues such as how users will be able to find help once the information system has become integrated into the organization. The development team should consider a multitude of support mechanisms and modes of delivery. Each deliverable is addressed in more detail later in this chapter.
Table 13-2 Deliverables for Documenting the System, Training, and Supporting Users
As we mentioned previously, in traditional plan-driven systems development projects, analysts prepare system specifications that are passed on to programmers for coding. Although coding takes considerable effort and skill, the practices and processes of writing code do not belong in this text. However, because software application testing is an activity that analysts plan (beginning in the analysis phase) and sometimes supervise, depending on organizational standards, you need to understand the essentials of the testing process. Although this section of the text focuses on testing from the perspective of traditional development practices, many of the same types of tests can be used during the analyze–design–code–test cycle common to the Agile Methodologies. Coding and testing in eXtreme Programming will be discussed briefly toward the end of this section on testing.
Software testing begins early in the SDLC, even though many of the actual testing activities are carried out during implementation. During analysis, you develop a master test plan. During design, you develop a unit test plan, an integration test plan, and a system test plan. During implementation, these various plans are put into effect and the actual testing is performed.
The purpose of these written test plans is to improve communication among all the people involved in testing the application software. The plan specifies what each person’s role will be during testing. The test plans also serve as checklists you can use to determine whether the master test plan has been completed. The master test plan is not just a single document, but a collection of documents. Each of the component documents represents a complete test plan for one part of the system or for a particular type of test. Presenting a complete master test plan is far beyond the scope of this book. To give you an idea of what a master test plan involves, we present an abbreviated table of contents of one in Table 13-3.
Table 13-3 Table of Contents of a Master Test Plan
(Source: Adapted from Mosley, 1993.)
A master test plan is a project within the overall systems development project. Because at least some of the system testing will be done by people who have not been involved in the system development so far, the Introduction provides general information about the system and the need for testing. The Overall Plan and Testing Requirements sections are like a Baseline Project Plan for testing, with a schedule of events, resource requirements, and standards of practice outlined. Procedure Control explains how the testing is conducted, including how changes to fix errors will be documented. The fifth and final section explains each specific test necessary to validate that the system performs as expected.
Some organizations have specially trained personnel who supervise and support testing. Testing managers are responsible for developing test plans, establishing testing standards, integrating testing and development activities in the life cycle, and ensuring that test plans are completed. Testing specialists help develop test plans, create test cases and scenarios, execute the actual tests, and analyze and report test results.
Seven Different Types of Tests
Software application testing is an umbrella term that covers several types of tests. Mosley (1993) organizes the types of tests according to whether they employ static or dynamic techniques and whether the test is automated or manual. Static testing means that the code being tested is not executed. The results of running the code are not an issue for that particular test. Dynamic testing, on the other hand, involves execution of the code. Automated testing means the computer conducts the test, whereas manual testing means that people complete the test. Using this framework, we can categorize the different types of tests, as shown in Table 13-4.
Inspections
A testing technique in which participants examine program code for predictable language-specific errors.
Let’s examine each type of test in turn. Inspections are formal group activities where participants manually examine code for occurrences of well-known errors. Syntax, grammar, and some other routine errors can be checked by automated inspection software, so manual inspection checks are used for more subtle errors. Each programming language lends itself to certain types of errors that programmers make when coding, and these common errors are well-known and documented. Code inspection participants compare the code they are examining with a checklist of well-known errors for that particular language. Exactly what the code does is not investigated in an inspection. It has been estimated that code inspections detect from 60 to 90 percent of all software defects as well as provide programmers with feedback that enables them to avoid making the same types of errors in future work (Fagan, 1986). The inspection process can also be used for tasks such as design specifications.
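The checklist-driven character of an inspection can be illustrated mechanically. The Python sketch below scans source text for two well-known language-specific errors (mutable default arguments and bare except clauses); the checklist entries and patterns are illustrative assumptions only, not a complete inspection checklist.

```python
import re

# A hypothetical inspection checklist: each entry pairs a well-known
# language-specific error with a pattern that flags candidate lines.
CHECKLIST = [
    ("mutable default argument", re.compile(r"def \w+\(.*=\s*(\[\]|\{\})")),
    ("bare except clause",       re.compile(r"^\s*except\s*:")),
]

def inspect(source: str) -> list:
    """Return (line_no, error_name) pairs for lines matching the checklist."""
    findings = []
    for line_no, line in enumerate(source.splitlines(), start=1):
        for name, pattern in CHECKLIST:
            if pattern.search(line):
                findings.append((line_no, name))
    return findings

code = "def add_item(item, items=[]):\n    try:\n        items.append(item)\n    except:\n        pass\n"
print(inspect(code))  # flags the mutable default (line 1) and the bare except (line 4)
```

A real inspection, of course, relies on the participants' judgment for the subtler errors that no pattern can catch.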
Table 13-4 A Categorization of Test Types

|         | Manual                       | Automated                                |
| Static  | Inspections                  | Syntax checking                          |
| Dynamic | Walk-throughs, Desk checking | Unit test, Integration test, System test |
(Source: Adapted from Mosley, 1993.)
Unlike inspections, what the code does is an important question in a walk-through. The use of structured walk-throughs is a very effective method of detecting errors in code. As you saw in Chapter 5, structured walk-throughs can be used to review many systems development deliverables, including logical and physical design specifications as well as code. Whereas specification walk-throughs tend to be formal reviews, code walk-throughs tend to be informal. Informality tends to make programmers less apprehensive about walk-throughs and helps increase their frequency. According to Yourdon (1989), code walk-throughs should be done frequently when the pieces of work reviewed are relatively small and before the work is formally tested. If walk-throughs are not held until the entire program is tested, the programmer will have already spent too much time looking for errors that the programming team could have found much more quickly. The programmer’s time will have been wasted, and the other members of the team may become frustrated because they will not find as many errors as they would have if the walk-through had been conducted earlier. Further, the longer a program goes without being subjected to a walk-through, the more defensive the programmer becomes when the code is reviewed. Although each organization that uses walk-throughs conducts them differently, there is a basic structure that you can follow that works well (see Figure 13-2).
Figure 13-2 Steps in a typical walk-through (Source: Based on Yourdon, 1989.)
It should be stressed that the purpose of a walk-through is to detect errors, not to correct them. It is the programmer’s job to correct the errors uncovered in a walk-through. Sometimes it can be difficult for the reviewers to refrain from suggesting ways to fix the problems they find in the code, but increased experience with the process can help change a reviewer’s behavior.
What the code does is important in desk checking, an informal process in which the programmer or someone else who understands the logic of the program works through the code with a paper and pencil. The programmer executes each instruction, using test cases that may or may not be written down. In one sense, the reviewer acts as the computer, mentally checking each step and its results for the entire set of computer instructions.
Desk checking
A testing technique in which the program code is sequentially executed manually by the reviewer.
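Desk checking amounts to recording, by hand, the value of each variable after every instruction. The short Python function below is a hypothetical example; the trace comments show what a reviewer playing computer would write down for one test case.

```python
# Desk checking in spirit: the reviewer "plays computer," recording the
# value of each variable after every instruction. The trace below is for
# a hypothetical function that sums the even numbers in a list.

def sum_evens(values):
    total = 0                  # trace: total = 0
    for v in values:           # trace: v takes 3, then 4, then 7, then 10
        if v % 2 == 0:
            total += v         # trace: total = 4, then total = 14
    return total

# Test case worked through on paper first, then confirmed by execution:
print(sum_evens([3, 4, 7, 10]))  # desk check predicts 14
```

The value of the exercise is the trace itself: a discrepancy between the predicted and actual result pinpoints the faulty instruction.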
Among the list of automated testing techniques in Table 13-4, only one technique is static—syntax checking. Syntax checking is typically done by a compiler. Errors in syntax are uncovered but the code is not executed. For the other three automated techniques, the code is executed.
Unit testing, sometimes called module testing, is an automated technique whereby each module is tested alone in an attempt to discover any errors that may exist in the module’s code. But because modules coexist and work with other modules in programs and the system, they must also be tested together in larger groups. Combining modules and testing them is called integration testing. Integration testing is gradual. First you test the coordinating module and only one of its subordinate modules. After the first test, you add one or two other subordinate modules from the same level. Once the program has been tested with the coordinating module and all of its immediately subordinate modules, you add modules from the next level and then test the program. You continue this procedure until the entire program has been tested as a unit. System testing is a similar process, but instead of integrating modules into programs for testing, you integrate programs into systems. System testing follows the same incremental logic that integration testing does. Under both integration and system testing, not only do individual modules and programs get tested many times, so do the interfaces between modules and programs.
Unit testing
Each module is tested alone in an attempt to discover any errors in its code.
Integration testing
The process of bringing together all of the modules that a program comprises for testing purposes. Modules are typically integrated in a top-down, incremental fashion.
System testing
The bringing together of all of the programs that a system comprises for testing purposes. Programs are typically integrated in a top-down, incremental fashion.
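The distinction between testing a module alone and testing modules together can be sketched in code. The example below is a minimal illustration using Python's standard unittest framework; the two modules (a tax calculator and an invoice totaler that calls it) are hypothetical.

```python
import unittest

# Two hypothetical modules: a tax calculator and an invoice totaler
# that depends on it.

def sales_tax(amount, rate=0.08):
    return round(amount * rate, 2)

def invoice_total(line_items, rate=0.08):
    subtotal = sum(line_items)
    return round(subtotal + sales_tax(subtotal, rate), 2)

class UnitTests(unittest.TestCase):
    # Unit test: sales_tax is exercised alone.
    def test_sales_tax(self):
        self.assertEqual(sales_tax(100.00), 8.00)

class IntegrationTests(unittest.TestCase):
    # Integration test: invoice_total is exercised together with sales_tax.
    def test_invoice_total(self):
        self.assertEqual(invoice_total([40.00, 60.00]), 108.00)

if __name__ == "__main__":
    unittest.main(argv=["unit-and-integration"], exit=False)
```

In a real project the integration suite would grow incrementally, adding one or two subordinate modules per round, exactly as described above.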
Current practice calls for a top-down approach to writing and testing modules. Under a top-down approach, the coordinating module is written first. Then the modules at the next level in the structure chart are written, followed by the modules at the next level, and so on, until all of the modules in the system are done. Each module is tested as it is written. Because top-level modules contain many calls to subordinate modules, you may wonder how they can be tested if the lower-level modules haven’t been written yet. The answer is stub testing. Stubs are two or three lines of code written by a programmer to stand in for the missing modules. During testing, the coordinating module calls the stub instead of the subordinate module. The stub accepts control and then returns it to the coordinating module.
Stub testing
A technique used in testing modules, especially where modules are written and tested in a top-down fashion, where a few lines of code are used to substitute for subordinate modules.
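A stub really is just a few lines of code. In the hypothetical Python sketch below, the coordinating module `apply_late_fee` can be tested even though its subordinate has not been written; the stub accepts control and returns a canned value.

```python
# Top-down testing with a stub: the coordinating module is written and
# tested before its subordinate exists. The module names are hypothetical.

def fetch_customer_balance(customer_id):
    """Stub standing in for the unwritten subordinate module. It accepts
    control and immediately returns a canned value."""
    return 100.00

def apply_late_fee(customer_id, fee=5.00):
    """Coordinating module under test; it calls the subordinate."""
    balance = fetch_customer_balance(customer_id)
    return balance + fee

# The coordinating module's control flow can now be exercised even though
# the real fetch_customer_balance has not been written.
print(apply_late_fee("C-001"))  # 105.0 with the stubbed balance
```

When the real subordinate module is written, it replaces the stub and the same tests are rerun.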
System testing is more than simply expanded integration testing where you are testing the interfaces between programs in a system rather than testing the interfaces between modules in a program. System testing is also intended to demonstrate whether a system meets its objectives. This is not the same as testing a system to determine whether it meets requirements—that is the focus of acceptance testing, which will be discussed later. To verify that a system meets its objectives, system testing involves using nonlive test data in a nonlive testing environment. Nonlive means that the data and situation are artificial, developed specifically for testing purposes, although both the data and the environment are similar to what users would encounter in everyday system use. The system test is typically conducted by information systems personnel and led by the project team leader, although it can also be conducted by users under MIS guidance. The scenarios that form the basis for system tests are prepared as part of the master test plan.
The Testing Process
Up to this point, we have talked about the master test plan and seven different types of tests for software applications. We haven’t said very much about the process of testing itself. There are two important things to remember about testing information systems:
The purpose of testing is to confirm that the system satisfies requirements.
Testing must be planned.
These two points have several implications for the testing process, regardless of the type of test being conducted. First, testing is not haphazard. You must pay attention to many different aspects of a system, such as response time, response to boundary data, response to no input, response to heavy volumes of input, and so on. You must test anything (within resource constraints) that could go wrong or be wrong with a system. At a minimum, you should test the most frequently used parts of the system and as many other paths throughout the system as time permits. Planning gives analysts and programmers an opportunity to think through all the potential problem areas, list these areas, and develop ways to test for problems. As indicated previously, one part of the master test plan is creating a set of test cases, each of which must be carefully documented (see Figure 13-3 for an outline of a test case description).
Figure 13-3 Test case description form (Source: Adapted from Mosley, 1993.)
A test case is a specific scenario of transactions, queries, or navigation paths that represent a typical, critical, or abnormal use of the system. A test case should be repeatable so that it can be rerun as new versions of the software are tested. This is important for all code, whether written in-house, developed by a contractor, or purchased. Test cases need to determine that new software works with other existing software with which it must share data. Even though analysts often do not do the testing, systems analysts, because of their intimate knowledge of applications, often make up or find test data. The people who create the test cases should not be the same people as those who coded and tested the system. In addition to a description of each test case, there must also be a description of the test results, with an emphasis on how the actual results differed from the expected results (see Figure 13-4). This description will indicate why the results were different and what, if anything, should be done to change the software. This description will then suggest the need for retesting, possibly introducing new tests to discover the source of the differences.
Figure 13-4 Test case results form (Source: Adapted from Mosley, 1993.)
One important reason to keep such a thorough description of test cases and results is so that testing can be repeated for each revision of an application. Although new versions of a system may necessitate new test data to validate new features of the application, previous test data usually can and should be reused. Results from the use of the test data with prior versions are compared to new versions to show that changes have not introduced new errors and that the behavior of the system, including response time, is no worse. A second implication for the testing process is that test cases must include illegal and out-of-range data. The system should be able to handle any possibility, no matter how unlikely; the only way to find out is to test.
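Reusing documented test cases against a new version is, in effect, a regression check. The Python sketch below shows the idea; the test-case IDs, the `discount` function, and the recorded expected results are all hypothetical.

```python
# Regression check sketch: rerun the documented test cases against a new
# version and compare actual results with the expected results recorded
# from the prior version.

def discount(price, pct):          # the new version of the function under test
    return round(price * (1 - pct / 100), 2)

test_cases = [
    # (test case id, inputs, expected result recorded from prior version)
    ("TC-01", (100.00, 10), 90.00),
    ("TC-02", (59.99, 0),   59.99),
    ("TC-03", (20.00, 50),  10.00),
]

def run_regression(cases):
    failures = []
    for case_id, args, expected in cases:
        actual = discount(*args)
        if actual != expected:
            failures.append((case_id, expected, actual))
    return failures

print(run_regression(test_cases))  # an empty list means no regressions
```

Any nonempty result would be written up on a test case results form, noting how the actual result differed from the expected one.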
Testing often requires a great deal of labor. Manual code reviews are time-consuming, tedious work and, most importantly, are not always the best solution. For this reason, special-purpose testing software, called a testing harness, has been developed for a variety of environments to help designers automatically review the quality of their code. In many situations, a testing harness greatly enhances the testing process because it can automatically expand the scope of the tests beyond the current development platform and can be rerun every time there is a new version of the software. For instance, with the testing harness called Costello, a developer can answer questions such as: How stable is the code? Does the code follow standard rules? Will the code work across multiple platforms? For large-scale, multiplatform projects, automated code review systems have become a necessity.
Testing harness
An automated testing environment used to review code for errors, standards violations, and other design flaws.
Combining Coding and Testing
Although coding and testing are in many ways part of the same process, it is not uncommon in large and complicated systems development environments to find the two practices separated from each other. Big companies and big projects often have dedicated testing staffs that develop test plans and then use the plans to test software after it has been written. You have already seen how many different types of testing there are, and you can deduce from that how elaborate and extensive testing can be. As you recall, with eXtreme Programming (XP) (Beck and Andres, 2004) and other Agile Methodologies, coding and testing are intimately related parts of the same process, and the programmers who write the code also write the tests. The general idea is that code is tested soon after it is written.
After testing, all of the code that works may be integrated at the end of each working day, and working versions of the system will be released frequently, as often as once per week in some cases. XP developers design and build working systems in very little time (relative to traditionally organized methods).
One particular technique used in XP to continually improve system quality is refactoring. Refactoring is nothing more than simplifying a system, typically after a new feature or set of features has been added. As more features are added to a system, it becomes more complex, and this complexity will be reflected in the code. After a time of increasing complexity, XP developers stop and redesign the system. The system must still pass the test cases written for it after it has been simplified, so rework continues until the tests can be passed. Different forms of refactoring include simplifying complex statements, abstracting solutions from reusable code, and removing duplicate code. Refactoring and the continuing simplification it implies reflect the iterative nature of XP and the other Agile Methodologies. As development progresses and the system gets closer to being ready for production, the iterations and the evolution of the system slow, a process Beck (2000) calls “productionizing.” A system ready to go into production is ready to be released to users, either customers ready to buy the software or internal users.
Refactoring
Making a program simpler after adding a new feature.
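The "remove duplicate code" form of refactoring can be shown concretely. In the hypothetical Python sketch below, formatting logic duplicated in a report function is abstracted into a shared helper; because behavior is unchanged, the original test case still passes.

```python
# Refactoring sketch: duplicated code is removed by abstracting a shared
# helper. Behavior is unchanged, so the existing tests still pass.
# The report functions are hypothetical.

# Before: the function builds its header inline, as every report did.
def sales_report_v1(region):
    return "=== REPORT ===\n" + "Region: " + region.upper()

# After: the duplicated header formatting is factored into one helper
# that all reports can share.
def _header(title):
    return "=== REPORT ===\n" + title

def sales_report_v2(region):
    return _header("Region: " + region.upper())

# The refactored version must still pass the original test case:
assert sales_report_v1("west") == sales_report_v2("west")
print(sales_report_v2("west"))
```

The assertion at the end is the essential discipline: refactoring is finished only when the simplified code passes the same tests as the code it replaced.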
Acceptance Testing by Users
Once the system tests have been satisfactorily completed, the system is ready for acceptance testing, which is testing the system in the environment where it will eventually be used. Acceptance refers to the fact that users typically sign off on the system and “accept” it once they are satisfied with it. The purpose of acceptance testing is for users to determine whether the system meets their requirements. The extent of acceptance testing will vary with the organization and with the system in question. The most complete acceptance testing will include alpha testing, in which simulated but typical data are used for system testing; beta testing, in which live data are used in the users’ real working environment; and a system audit conducted by the organization’s internal auditors or by members of the quality assurance group.
Acceptance testing
The process whereby actual users test a completed information system, the end result of which is the users’ acceptance of it.
Alpha testing
User testing of a completed information system using simulated data.
Beta testing
User testing of a completed information system using real data in the real user environment.
During alpha testing, the entire system is implemented in a test environment to discover whether the system is overtly destructive to itself or to the rest of the environment. The types of tests performed during alpha testing include the following:
Recovery testing—forces the software (or environment) to fail in order to verify that recovery is properly performed.
Security testing—verifies that protection mechanisms built into the system will protect it from improper penetration.
Stress testing—tries to break the system (e.g., what happens when a record is written to the database with incomplete information or what happens under extreme online transaction loads or with a large number of concurrent users).
Performance testing—determines how the system performs in the range of possible environments in which it may be used (e.g., different hardware configurations, networks, operating systems, and so on); often the goal is to have the system perform with similar response time and other performance measures in each environment.
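The stress-testing idea in the list above can be sketched in miniature. The Python example below drives a hypothetical in-memory order store with a large burst of writes, including records with incomplete information, and checks that it degrades gracefully rather than crashing; the store and its rules are assumptions for illustration.

```python
# Stress-test sketch: a large transaction load plus incomplete records
# against a hypothetical in-memory order store.

class OrderStore:
    def __init__(self):
        self.orders = {}
        self.rejected = 0

    def write(self, order_id, customer=None):
        if order_id is None or customer is None:   # incomplete record
            self.rejected += 1                     # reject, don't crash
            return False
        self.orders[order_id] = customer
        return True

store = OrderStore()
for i in range(10_000):                       # extreme transaction load
    store.write(i, customer=f"cust-{i}")
store.write(None, customer="cust-x")          # incomplete records
store.write(10_001, customer=None)

print(len(store.orders), store.rejected)      # 10000 2
```

A real stress test would also push concurrency and resource limits; the point here is only that the system's response to bad and voluminous input is observed, not assumed.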
In beta testing, a subset of the intended users runs the system in the users’ own environments using their own data. The intent of the beta test is to determine whether the software, documentation, technical support, and training activities work as intended. In essence, beta testing can be viewed as a rehearsal of the installation phase. Problems uncovered in alpha and beta testing in any of these areas must be corrected before users can accept the system. Systems analysts can tell many stories about long delays in final user acceptance due to system bugs.
Installation
The process of moving from the current information system to the new one is called installation. All employees who use a system, whether they were consulted during the development process or not, must give up their reliance on the current system and begin to rely on the new system. Four different approaches to installation have emerged over the years: direct, parallel, single-location, and phased (Figure 13-5). The approach an organization decides to use will depend on the scope and complexity of the change associated with the new system and the organization’s risk aversion.
Installation
The organizational process of changing over from the current information system to a new one.
Direct Installation
Direct installation
Changing over from the old information system to a new one by turning off the old system when the new one is turned on.
The direct, or abrupt, approach to installation (also called “cold turkey”) is as sudden as the name indicates: The old system is turned off and the new system is turned on (Figure 13-5a). Under direct installation, users are at the mercy of the new system. Any errors resulting from the new system will have a direct impact on the users and how they do their jobs and, in some cases—depending on the centrality of the system to the organization—on how the organization performs its business. If the new system fails, considerable delay may occur until the old system can again be made operational and business transactions are reentered to make the database up to date. For these reasons, direct installation can be very risky. Further, direct installation requires a complete installation of the whole system. For a large system, this may mean a long time until the new system can be installed, thus delaying system benefits or even missing the opportunities that motivated the system request. On the other hand, it is the least expensive installation method, and it creates considerable interest in making the installation a success. Sometimes, a direct installation is the only possible strategy if there is no way for the current and new systems to coexist, which they must do in some way in each of the other installation approaches.
(a) Direct installation
(b) Parallel installation
(c) Single-location installation (with direct installation at each location)
(d) Phased installation
Figure 13-5 Comparison of installation strategies
Parallel Installation
Parallel installation
Running the old information system and the new one at the same time until management decides the old system can be turned off.
Parallel installation is as riskless as direct installation is risky. Under parallel installation, the old system continues to run alongside the new system until users and management are satisfied that the new system is effectively performing its duties and the old system can be turned off (Figure 13-5b). All of the work done by the old system is concurrently performed by the new system. Outputs are compared (to the greatest extent possible) to help determine whether the new system is performing as well as the old. Errors discovered in the new system do not cost the organization much, if anything, because errors can be isolated and the business can be supported with the old system. Because all work is essentially done twice, a parallel installation can be very expensive; running two systems implies employing (and paying) two staffs to not only operate both systems, but also to maintain them. A parallel approach can also be confusing to users because they must deal with both systems. As with direct installation, there can be a considerable delay until the new system is completely ready for installation. A parallel approach may not be feasible, especially if the users of the system (such as customers) cannot tolerate redundant effort or if the size of the system (number of users or extent of features) is large.
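The output comparison at the heart of a parallel run can be sketched in a few lines. This is only an illustrative sketch: the transaction format, the bulk-discount rule, and the two `process_*` stand-ins are invented for the example and are not part of any real system.

```python
def process_old(txn):
    # Stand-in for the legacy system's pricing logic.
    return round(txn["qty"] * txn["unit_price"], 2)

def process_new(txn):
    # Stand-in for the new system, which (hypothetically) adds a bulk discount.
    total = txn["qty"] * txn["unit_price"]
    if txn["qty"] >= 10:
        total *= 0.9
    return round(total, 2)

def compare_parallel_run(transactions):
    """Run every transaction through both systems; report any disagreements."""
    mismatches = []
    for txn in transactions:
        old_result = process_old(txn)
        new_result = process_new(txn)
        if old_result != new_result:
            mismatches.append((txn["id"], old_result, new_result))
    return mismatches
```

Each mismatch flags a transaction to investigate: it may expose an error in the new system, or (as in this sketch, where the new system intentionally discounts bulk orders) a deliberate difference in business rules that the comparison procedure must be told to ignore.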
Single-Location Installation
Single-location installation, also known as location or pilot installation, is a middle-of-the-road approach compared with direct and parallel installation. Rather than convert all of the organization at once, single-location installation involves changing from the current to the new system in only one place or in a series of separate sites over time. (Figure 13-5c depicts this approach for a simple situation of two locations.) The single location may be a branch office, a single factory, or one department, and the actual approach used for installation in that location may be any of the other approaches. The key advantage to single-location installation is that it limits potential damage and potential cost by limiting the effects to a single site. Once management has determined that installation has been successful at one location, the new system may be deployed in the rest of the organization, possibly continuing with installation at one location at a time. Success at the pilot site can be used to convince reluctant personnel at other sites that the system can be worthwhile for them as well. Problems with the system (the actual software as well as documentation, training, and support) can be resolved before deployment to other sites. Even though the single-location approach may be simpler for users, it still places a large burden on information systems (IS) staff to support two versions of the system. On the other hand, because problems are isolated at one site at a time, IS staff members can devote all of their efforts to success at the pilot site. Also, if different locations require sharing of data, extra programs will need to be written to synchronize the current and new systems; although this will happen transparently to users, it is extra work for IS staff. 
As with each of the other approaches (except phased installation), the whole system is installed; however, some parts of the organization will not get the benefits of the new system until the pilot installation has been completely tested.
Single-location installation
Trying out a new information system at one site and using the experience to decide if and how the new system should be deployed throughout the organization.
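In terms of routing work, a single-location installation amounts to a switch keyed on site: the pilot site uses the new system while every other site stays on the old one. The sketch below assumes hypothetical site names and request fields; the two handler functions are placeholders for the real systems.

```python
PILOT_SITES = {"branch_17"}  # hypothetical pilot location

def handle_with_old_system(request):
    # Stand-in for the legacy system serving non-pilot sites.
    return "old:" + str(request["order_id"])

def handle_with_new_system(request):
    # Stand-in for the new system running at the pilot site.
    return "new:" + str(request["order_id"])

def route(request):
    """Send pilot-site work to the new system; all other sites stay on the old one."""
    if request["site"] in PILOT_SITES:
        return handle_with_new_system(request)
    return handle_with_old_system(request)
```

Once management judges the pilot a success, rollout is a matter of adding sites to the pilot set one at a time, which is what makes the damage (and the cost of backing out) so easy to contain.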
Phased Installation
Phased installation
Changing from the old information system to the new one incrementally, starting with one or a few functional components and then gradually extending the installation to cover the whole new system.
Phased installation, also called staged installation, is an incremental approach. With phased installation, the new system is brought online in functional components; different parts of the old and new systems are used in cooperation until the whole new system is installed. (Figure 13-5d shows the phase-in of the first two modules of a new system.) Phased installation, like single-location installation, is an attempt to limit the organization’s exposure to risk, whether in terms of cost or disruption of the business. By converting gradually, the organization’s risk is spread out over time and place. Also, a phased installation allows for some benefits from the new system before the whole system is ready. For example, new data-capture methods can be used before all reporting modules are ready. For a phased installation, the new and replaced systems must be able to coexist and probably share data. Thus, bridge programs connecting old and new databases and programs often must be built. Sometimes, the new and old systems are so incompatible (built using totally different structures) that pieces of the old system cannot be incrementally replaced, so this strategy is not feasible. A phased installation is akin to bringing out a sequence of releases of the system. Thus, a phased approach requires careful version control, repeated conversions at each phase, and a long period of change, which may be frustrating and confusing to users. On the other hand, each phase of change is smaller and more manageable for all involved.
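A bridge program of the kind described above can be as simple as copying records captured by an already-converted module into the legacy store so that un-converted modules keep working. The dict-based "databases", the schema names, and the field mapping below are illustrative assumptions, not a real bridge.

```python
def bridge_new_orders_to_old(new_db, old_db):
    """Copy orders captured by the converted module into the legacy store."""
    for order_id, order in new_db["orders"].items():
        if order_id not in old_db["ORDERS"]:
            # The (hypothetical) legacy schema keeps one flat record per order.
            old_db["ORDERS"][order_id] = {
                "CUST": order["customer_id"],
                "AMT": order["total"],
            }
    return old_db
```

In practice such bridges run on a schedule or on every update, and they are throwaway code: each one is retired as the phase that needed it completes, which is part of the version-control burden a phased installation carries.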
Planning Installation
Each installation strategy involves converting not only software, but also data and (potentially) hardware, documentation, work methods, job descriptions, offices and other facilities, training materials, business forms, and other aspects of the system. For example, it is necessary to recall or replace all the current system documentation and business forms, which suggests that the IS department must keep track of who has these items so that they can be notified and receive replacement items. In practice, you will rarely choose a single strategy to the exclusion of all others; most installations will rely on a combination of two or more approaches. For example, if you choose a single-location strategy, you have to decide how installation will proceed there and at subsequent sites. Will it be direct, parallel, or phased?
Of special interest in the installation process is the conversion of data. Because existing systems usually contain data required by the new system, current data must be made error free, unloaded from current files, combined with new data, and loaded into new files. Data may need to be reformatted to be consistent with more advanced data types supported by newer technology used to build the new system. New data fields may have to be entered in large quantities so that every record copied from the current system has all the new fields populated. Manual tasks, such as taking a physical inventory, may need to be done in order to validate data before they are transferred to the new files. The total data conversion process can be tedious. Furthermore, this process may require that current systems be shut off while the data are extracted so that updates to old data, which would contaminate the extract process, cannot occur.
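The clean-reformat-populate-load cycle described above can be sketched as a small conversion script. The legacy field names (`CUSTNO`, `NAME`, `EMAIL`), the cleanup rules, and the new `migrated_on` field are all assumptions made for the example.

```python
from datetime import date

def convert_record(old_rec):
    """Clean and reformat one legacy record for the new system."""
    return {
        "customer_id": int(old_rec["CUSTNO"]),       # text key becomes numeric
        "name": old_rec["NAME"].strip().title(),     # normalize inconsistent casing
        "email": old_rec.get("EMAIL", "").lower() or None,
        "migrated_on": date.today().isoformat(),     # new field populated in bulk
    }

def convert_all(old_records):
    """Convert every record; set aside any that fail for manual cleanup."""
    converted, rejected = [], []
    for rec in old_records:
        try:
            converted.append(convert_record(rec))
        except (KeyError, ValueError):
            rejected.append(rec)
    return converted, rejected
```

The rejected list is where the tedium lives: those are the records that need the manual validation work (such as a physical inventory count) mentioned above before they can be corrected and reloaded.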
Any decision that requires the current system to be shut down, in whole or in part, before the replacement system is in place must be done with care. Typically, off-hours are used for installations that require a lapse in system support. Whether a lapse in service is required or not, the installation schedule should be announced to users well in advance to let them plan their work schedules around outages in service and periods when their system support might be erratic. Successful installation steps should also be announced, and special procedures put in place so that users can easily inform you of problems they encounter during installation periods. You should also plan for emergency staff to be available in case of system failure so that business operations can be recovered and made operational as quickly as possible. Another consideration is the business cycle of the organization. Most organizations face heavy workloads at particular times of year and relatively light loads at other times. A well-known example is the retail industry, where the busiest time of year is the fall, right before the year’s major gift-giving holidays. You wouldn’t want to schedule installation of a new point-of-sale system to begin December 1 for a department store. Make sure you understand the cyclical nature of the business you are working with before you schedule installation.
Planning for installation may begin as early as the analysis of the organization supported by the system. Some installation activities, such as buying new hardware, remodeling facilities, validating data to be transferred to the new system, and collecting new data to be loaded into the new system, must be done before the software installation can occur. Often the project team leader is responsible for anticipating all installation tasks and assigns responsibility for each to different analysts.
Each installation process involves getting workers to change the way they work. As such, installation should be looked at not as simply installing a new computer system, but as an organizational change process. More than just a computer system is involved—you are also changing how people do their jobs and how the organization operates.
Documenting the System
In one sense, every systems development project is unique and will generate its own unique documentation. The approach taken by the development team, whether more traditional and plan oriented or more Agile, will also determine the amount and type of documentation that is generated. System development projects do have many similarities, however, which dictate that certain activities be undertaken and which of those activities must be documented. Bell and Evans (1989) illustrate how a generic SDLC maps onto a generic list of when specific systems development documentation elements are finalized (Table 13-5). As you compare the generic life cycle in Table 13-5 with the life cycle presented in this book, you will see that there are differences, but the general structure of both life cycles is the same because both include the basic phases of analysis, design, implementation, and project planning. Specific documentation will vary depending on the life cycle you are following, and the format and content of the documentation may be mandated by the organization for which you work. However, a basic outline of documentation can be adapted for specific needs, as shown in Table 13-5. Note that this table indicates when documentation is typically finalized; you should start developing documentation elements early, as the information needed is captured.
Table 13-5 SDLC and Generic Documentation Corresponding to Each Phase

Generic Life-Cycle Phase | Generic Document
Requirements Specification | System Requirements Specification
| Resource Requirements Specification
Project Control Structuring | Management Plan
| Engineering Change Proposal
System Development |
Architectural design | Architecture Design Document
Prototype design | Prototype Design Document
Detailed design and implementation | Detailed Design Document
Test specification | Test Specifications
Test implementation | Test Reports
System Delivery | User’s Guide
| Release Description
| System Administrator’s Guide
| Reference Guide
| Acceptance Sign-Off
(Source: Adapted from Bell and Evans, 1989.)
We can simplify the situation even more by dividing documentation into two basic types, system documentation and user documentation. System documentation records detailed information about a system’s design specifications, its internal workings, and its functionality. In Table 13-5, all of the documentation listed (except for System Delivery) would qualify as system documentation. Whereas system documentation is intended primarily for maintenance programmers (see Chapter 14), user documentation is intended primarily for users. An organization may have definitive standards on system documentation. These standards may include the outline for the project dictionary and specific pieces of documentation within it. Standards for user documentation are not as explicit.
System documentation
Detailed information about a system’s design specifications, its internal workings, and its functionality.
User documentation
Written or other visual information about an application system, how it works, and how to use it.
User Documentation
User documentation consists of written or other visual information about an application system, how it works, and how to use it. An excerpt of online user documentation for Microsoft Word appears in Figure 13-6. The documentation is the result of a search for “saving as PDF.” The information provided shows how a user can save a Word file as PDF, including where to save the new file. Such presentation methods have become standard for help files in online PC documentation.
Figure 13-6 An example document from Microsoft Word Help.(Source: Microsoft Corporation)
Figure 13-6 shows a help file, which is just one type of user documentation. Other types of user documentation include reference guides, quick reference guides, release descriptions, system administrator’s guides, and acceptance sign-offs (Table 13-5). A reference guide consists of an exhaustive list of a system’s functions and commands, usually in alphabetic order. Reference guides are very good for locating specific information; they are not as good for learning the broader picture of how to perform all of the steps required for a given task. A quick-reference guide provides essential information about operating a system in a short, concise format. When computer resources are shared and many users perform similar tasks on the same machines (as with airline reservation or mail-order catalog clerks), quick-reference guides are often printed on index cards or as small books and mounted on or near the computer terminal. An outline for a generic user’s guide (from Bell and Evans, 1989) is shown in Table 13-6. The purpose of such a guide is to provide information on how users can use a computer system to perform specific tasks. The information in a user’s guide is typically ordered by how often tasks are performed and by their complexity.
Table 13-6 Outline of a Generic User’s Guide

Preface
1. Introduction
2. User Interface
3. Getting Started
n. [Task]
Appendix A—Error Messages
([Appendix])
Glossary
Terms
Acronyms
Index
(Source: Adapted from Bell and Evans, 1989.)
In Table 13-6, sections with an “n” and a title in square brackets mean that there are many such sections, each for a different topic. For example, for an accounting application, sections 4 and beyond might address topics such as entering a transaction in the ledger, closing the month, and printing reports. The items in parentheses are optional, included as necessary. An index becomes more important for larger user’s guides. Figure 13-7 shows a quick start guide for Microsoft Excel. This particular reference guide is intended for people who have never used Excel before. The organization of user’s guides differs from one software product to the next. User guides also differ depending on the intended audience, whether novice or expert. You may want to compare the guide in Figure 13-7 with ones for other packages to identify differences.
A release description contains information about a new system release, including a list of documentation for the new release, features and enhancements, known problems and how they have been dealt with in the new release, and information about installation. A system administrator’s guide is intended primarily for those who will install and administer a new system. It contains information about the network on which the system will run, software interfaces for peripherals such as printers, troubleshooting, and setting up user accounts. Finally, an acceptance sign-off allows users to test for proper system installation and then signify their acceptance of the new system with their signatures.
Figure 13-7 A quick start guide for Excel.(Source: Microsoft Corporation)
Training and Supporting Users
Training and support are critical for the success of an information system. As the person whom the user holds responsible for the new system, you and other analysts on the project team must ensure that high-quality training and support are available. Although training and support can be talked about as if they are two separate things, in organizational practice the distinction between the two is not all that clear because the two sometimes overlap. After all, both deal with learning about computing.
Support
Providing ongoing educational and problem-solving assistance to information system users. For in-house developed systems, support materials and jobs will have to be prepared or designed as part of the implementation process.
Training Information Systems Users
Computer use requires skills, and training people to use computer applications can be expensive for organizations. Training of all types is a major activity in American corporations, but information systems training is often neglected. Many organizations tend to underinvest in computing skills training. It is true that some organizations institutionalize high levels of information system training, but many others offer no systematic training at all.
The type of training needed will vary by system type and user expertise. The list of potential topics from which you will determine if training will be useful includes the following:
Use of the system (e.g., how to enter a class registration request)
General computer concepts (e.g., computer files and how to copy them)
Information system concepts (e.g., batch processing)
Organizational concepts (e.g., FIFO inventory accounting)
System management (e.g., how to request changes to a system)
System installation (e.g., how to reconcile current and new systems during phased installation)
As you can see from this partial list, many potential topics go beyond simply how to use the new system. It may be necessary for you to develop training for users in other areas so that users will be ready, conceptually and psychologically, to use the new system. Some training, such as concept training, should begin early in the project because this training can assist in the “unfreezing” (helping users let go of long-established work procedures) element of the organizational change process.
Each element of training can be delivered in a variety of ways. Table 13-7 lists the most common training methods used by information system departments. The most common delivery method for corporate training remains traditional instructor-led classroom training (U.S. GAO, 2003). Many times, users turn to the resident expert and to fellow users for training. Users are more likely to turn to local experts for help than to the organization’s technical support staff because the local expert understands the users’ primary work and the computer systems they use. Given their dependence on fellow users for training, it should not be surprising that end users describe their most common mode of computer training as self-training.
Table 13-7 Types of Training Methods

Resident expert
Traditional instructor-led classroom training
E-learning/distance learning
Blended learning (combination of instructor-led and e-learning)
Software help components
External sources, such as vendors
One conclusion from the experience with user training methods is that an effective strategy for training on a new system is to first train a few key users and then organize training programs and support mechanisms that involve these users to provide further training, both formal and on demand. Often, training is most effective if you customize it to particular user groups, and the lead trainers from these groups are in the best position to provide this training to their colleagues.
Increasingly, corporations are turning to e-learning as a key delivery mode for training. Although the term e-learning is not precisely defined, it generally means the same thing as distance learning; that is, a formalized learning system designed to be carried out remotely, using computer-based electronic communication. You may have taken a distance-learning course at your school, or you may have experience in on-campus classes with some of the dominant software packages used in e-learning, such as WebCT, Blackboard, or Desire2Learn. E-learning courses can be delivered over the Internet or over company intranets. Such courses can be purchased from vendors or prepared by the corporation’s in-house training staff. E-learning is relatively inexpensive compared to traditional classroom training, and it has the additional advantage of being available anytime from just about anywhere. Students can also learn at their own pace. E-learning systems can make available several different elements that enhance the learning experience, including simulations, online access to mentors and experts, e-books, net meetings, and video on demand. Another trend in corporate training is blended learning, the combining of e-learning with instructor-led classroom training. A recent survey reported that over 80 percent of respondents were using e-learning or blended learning to train their employees (Kim et al., 2008). Half of the respondents in the study believed that e-learning would become the dominant training delivery method in their organizations.
Another training method listed in Table 13-7 is software help components. Figure 13-8 shows the beginning of a tutorial for new users of Microsoft’s Excel 2013. The tutorial is designed for users of past versions of Excel who are switching to Excel 2013. Users can go through the tutorial at their own pace, whenever they want, stopping and starting it as necessary.
As both training and support are increasingly delivered online in modules, some embedded directly in software packages and applications, the already blurred distinction between training and support blurs even more. Some of the issues most particular to computer user support are examined in the next section.
Supporting Information Systems Users
Historically, computing support for users has been provided in one of a few forms: on paper, through online versions of paper-based support, by third-party vendors, or by other people who work for the same organization. As we stated earlier, support, whatever its form, has often been inadequate for users’ needs. Yet users consider support to be extremely important.
As computing spread throughout organizations, especially with the advent of personal computers, the need for support increased as more and more employees came to rely on computing to do their jobs. As organizations moved to client/server architectures, their need for support increased even more, and organizations began to rely more and more on vendor support (Crowley, 1993). This increased need for support came in part from the lack of standards governing client/server products and the resulting need to make equipment and software from different vendors compatible. Vendors are able to provide the necessary support, but as they have shifted their offerings from primarily expensive mainframe packages to inexpensive off-the-shelf software, they find they can no longer bear the cost of providing the support for free. Most vendors now charge for support, and many have instituted 900 numbers or sell customers unlimited support for a given monthly or annual charge.
Figure 13-8 A video tutorial for learning Excel 2013.(Source: Microsoft Corporation)
Automating Support
In an attempt to cut the costs of providing support and to catch up with the demand for additional support services, vendors have automated many of their support offerings. Online support forums provide users access to information on new releases, bugs, and tips for more effective usage. Forums are offered over the Internet or over company intranets. Voice-response systems allow users to navigate option menus that lead to prerecorded messages about usage, problems, and workarounds. Organizations have established similar support mechanisms for systems developed or purchased by the organization. Internal e-mail and office automation can be used to support such capabilities within an organization.
Vendors may offer support that enables users to access a vendor’s knowledge bases, including electronic support services, a single point of contact, and priority access to vendor support personnel (Schneider, 1993). Product knowledge bases include all of the technical and support information about vendor products and provide additional information for on-site personnel to use in solving problems. Vendors routinely supply complete user and technical documentation via the Internet, including periodic updates, so that a user organization can provide this library of documentation, bug reports, workaround notices, and notes on undocumented features online to all internal users. Electronic support services include all of the vendor support services discussed earlier, but they are tailored specifically for the corporation. The single point of contact is a system engineer who is often based on-site and serves as a liaison between the corporation and the vendor. Finally, priority access means that corporate workers can always get help via telephone or e-mail from a person at the vendor company, usually within a prespecified response time of four hours or less.
Such vendor-enhanced support is especially appropriate in organizations where a wide variety of a particular vendor’s products is in use, or where most in-house application development either uses the vendor’s products as components of a larger system or builds applications directly on the vendor’s products. An example of the former would be an organization that has set up a client/server architecture based on a particular vendor’s SQL server and APIs. Applications developed in-house to run under that architecture depend heavily on the server and APIs, so direct vendor support for problems in these components would be very helpful to the enterprise information systems staff. An example of the latter would be order entry and inventory control applications developed using Microsoft’s Access or Excel. In this case, the system developers and users, who are sometimes the same people for such package-based applications, can benefit considerably from directly questioning vendor representatives about their products.
Providing Support through a Help Desk
Whether assisted by vendors or going it alone, the center of support activities for a specific information system in many organizations is the help desk. A help desk is an information systems department function and is staffed by IS personnel. The help desk is the first place users should call when they need assistance with an information system. The help desk staff members either deal with the users’ questions or refer the users to the most appropriate person.
Help desk
A single point of contact for all user inquiries and problems about a particular information system or for all users in a particular department.
Help desk personnel need to be good at communicating with users, listening to their problems, and intelligently communicating potential solutions. These personnel also need to understand the technology they are helping users with. It is crucial, however, that help desk personnel know when new systems and releases are being implemented and when users are being trained for new systems. Help desk personnel should be well trained on new systems. One sure recipe for disaster is to train users on new systems but not train the help desk personnel these same users will turn to for their support needs.
Support Issues for the Analyst to Consider
Support is more than just answering user questions about how to use a system to perform a particular task or about the system’s functionality. Support also consists of tasks such as providing for recovery and backup, disaster recovery, and PC maintenance; writing newsletters and offering other types of proactive information sharing; and setting up user groups. It is the responsibility of analysts for a new system to be sure that all forms of support are in place before the system is installed.
For medium to large organizations with active information system functions, many of these issues are dealt with centrally. For example, users may be provided with backup software by the central information systems unit and a schedule for routine backup. Policies may also be in place for initiating recovery procedures in case of system failure. Similarly, disaster recovery plans are almost always established by the central IS unit. Information systems personnel in medium-to-large organizations are also routinely responsible for PC maintenance because the PCs belong to the enterprise. IS unit specialists might also be in charge of composing and transmitting newsletters or overseeing automated bulletin boards and organizing user groups.
When all of these (and more) services are provided by central IS, you must follow the proper procedures to include any new system and its users in the lists of those to whom support is provided. You must design training for the support staff on the new system and make sure that system documentation will be available to it. You must make the support staff aware of the installation schedule and keep these people informed as the system evolves. Similarly, any new hardware and off-the-shelf software has to be registered with the central IS authorities.
When there is no official IS support function to provide support services, you must devise a creative plan to provide as many services as possible. You may have to write backup and recovery procedures and schedules, and the users’ departments may have to purchase and be responsible for the maintenance of their hardware. In some cases, software and hardware maintenance may have to be outsourced to vendors or other capable professionals. In such situations, user interaction and information dissemination may have to be more informal than formal: Informal user groups may meet over lunch or over a coffeepot rather than in officially formed and sanctioned forums.
Organizational Issues in Systems Implementation
Despite the best efforts of the systems development team to design and build a quality system and to manage the change process in the organization, the implementation effort sometimes fails. Sometimes employees will not use the new system that has been developed for them or, if they do use it, their level of satisfaction with it is very low. Why do systems implementation efforts fail? This question has been the subject of information systems research for over 60 years. In the first part of this section, we will try to provide some answers, looking at the factors that research has identified as important to implementation success. In the second part of this section, you will read about another important organizational issue for information systems, security. You will read about the various threats to the security of organizational systems and some of the remedies that can be applied to help deal with the problem.
System Implementation Failures
Hershey, famous all over the world for its chocolate, faced a crisis in October 1999. October is one of the key months of the year for candy makers because of the Halloween holiday. In 1999, Hershey was having problems trying to get its candy delivered to warehouses and from there to retailers in time for Halloween. Hershey was having trouble getting orders into its new system and getting the order details to the warehouses for fulfillment. Its new $112 million order-fulfillment system, containing components from SAP, Siebel, and Manugistics, was not working correctly. The system was supposed to have been installed in April of that year, but conversion was delayed until July due to incomplete development and testing. The remaining problems with the system were not found until the next high-volume ordering event of the candy maker’s year occurred, Halloween.
Another case of implementation failure involves SAP and the city of Richmond, California. Richmond began installing SAP in 2000. By mid-2004, the city had spent $4.5 million, and the implementation was still not complete. Instead of presenting the city with the functionality it wanted in SAP’s R/3, some of the city’s department heads said that the system had actually created more work for them. The finance director reported that using the system to prepare the budget had actually required hundreds of hours of extra work on the part of his staff. The planning director reported that the system fell far short of his needs for billing and revenue-tracking. While the city attorney contemplated a lawsuit against SAP and the Denver-based consulting company hired to help with implementation, the city’s information technology director maintained that the system implementation was not a failure at all. At the time, she said that the problems cited by staff were just the usual complaints from people not yet used to new technology. By the end of 2008, the city had decided to switch from SAP R/3 to a system called MUNIS, a system designed specifically for municipalities. The date for going live with MUNIS was set for January 1, 2009.
Avis Europe provides another example. In 2004, Avis Europe incurred a £45m charge as a result of shutting down its credit hire business and due to problems with information technology. In 2003, the company had announced it planned to implement PeopleSoft. A year later, it terminated the project due to delays and additional costs blamed on problems with the system’s design and implementation. The cancellation took place before the system had been rolled out to any aspect of Avis’s business, minimizing disruption to operations.
Sources: “Supply chain: Hershey’s bittersweet lesson.” http://www.cio.com/article/2440386/supply-chain-management/supply-chain—hershey-s-bittersweet-lesson.html. Accessed April 4, 2015; “City Manager’s Weekly Report for the week ending October 17th, 2008.” Available at www.ci.richmond.ca.us/Archive.asp?ADID=1931. Accessed April 4, 2015; Best, J. 2004. “Avis bins PeopleSoft after £45m IT failure.” ZDNet Australia. Available at http://www.zdnet.com/article/avis-bins-peoplesoft-system-after-eur45m-it-failure/. Accessed April 4, 2015.
Why Implementation Sometimes Fails

The conventional wisdom that has emerged over the years is that there are at least two conditions necessary for a successful implementation effort: management support of the system under development and the involvement of users in the development process (Ginzberg, 1981b). Conventional wisdom holds that if both of these conditions are met, you should have a successful implementation. But despite the support and active participation of management and users, information systems implementation sometimes fails (see the box “System Implementation Failures” for examples).
Management support and user involvement are important to implementation success, but they may be overrated compared to other factors that are also important. Research has shown that the link between user involvement and implementation success is sometimes weak (Ives and Olson, 1984). User involvement can help reduce the risk of failure when the system is complex, but user participation in the development process only makes failure more likely when there are financial and time constraints in the development process (Tait and Vessey, 1988). Information systems implementation failures are too common, and the implementation process is too complicated, for the conventional wisdom to be completely correct.
Over the years, other studies have found evidence of additional factors that are important to a successful implementation process. Three such factors are: commitment to the project, commitment to change, and the extent of project definition and planning (Ginzberg, 1981b). Commitment to the project involves managing the systems development project so that the problem being solved is well understood and the system being developed to deal with the problem actually solves it. Commitment to change involves being willing to change behaviors, procedures, and other aspects of the organization. The extent of project definition and planning is a measure of how well the project was planned. The more extensive the planning effort is, the less likely implementation failure is. Still another important factor related to implementation success is user expectations (Ginzberg, 1981a). The more realistic a user’s early expectations about a new system and its capabilities are, the more likely it is that the user will be satisfied with the new system and actually use it.
Although there are many ways to determine if an implementation has been successful, the two most common and trusted are the extent to which the system is used and the users’ satisfaction with the system (Lucas, 1997). Lucas, who has studied information systems implementation in depth, identified six factors that influence the extent to which a system is used:
User’s personal stake. How important the domain of the system is for the user; in other words, how relevant the system is to the work the user performs. The user’s personal stake in the system is itself influenced by the level of support management provides for implementation and by the urgency to the user of the problem addressed by the system. The higher the level of management support and the more urgent the problem, the higher the user’s personal stake in the system.
System characteristics. Includes aspects of the system’s design such as ease of use, reliability, and relevance to the task the system supports.
User demographics. Characteristics of the user, such as age and degree of computer experience.
Organizational support. These are the same issues of support you read about earlier in this chapter. The better the system support, the more likely an individual will be to use the system.
Performance. What individuals can do with a system to support their work will have an impact on extent of system use. The more users can do with a system and the more creative ways they can develop to benefit from the system, the more they will use it. The relationship between performance and use goes both ways. The higher the levels of performance, the more use. The more use, the greater the performance.
Satisfaction. Use and satisfaction also represent a two-way relationship. The more satisfied the users are with the system, the more they will use it. The more they use it, the more satisfied they will be.
The factors identified by Lucas and the relationships they have to each other are shown in the model in Figure 13-9. In the model, it is easier to see the relationships among the various factors, such as how management support and problem urgency affect the user’s personal stake in the system. Notice also that the arrows that show the relationships between use and performance and satisfaction have two heads, illustrating the two-way relationships between these factors.
Figure 13-9 Implementation success (Source: From Henry C. Lucas (1997). Information Technology for Management. Copyright © 1997 by McGraw-Hill. Reprinted by permission.)
It should be clear that, as an analyst and as someone responsible for the successful implementation of an information system, you have more control over some factors than others. For example, you will have considerable influence over the system’s characteristics, and you may have some influence over the levels of support that will be provided for users of the system. You have no direct control over a user’s demographics, personal stake in the system, management support, or the urgency of the problem to the user. This doesn’t mean you can ignore factors that you can’t change. On the contrary, you need to understand these factors very well because you will have to balance them with the factors you can change in your system design and in your implementation strategy. You may not be able to change a user’s demographics or personal stake in a system, but you can help design the system and your implementation strategy with these factors in mind.
The factors mentioned so far are straightforward. For example, a lack of computer experience can make a user hesitant, inefficient, and ineffective with a system, leading to a system that is not providing its full potential benefit. If top management does not seem to care about the system, why should subordinates care? However, additional factors can be categorized as political and might be more hidden, difficult to affect, and even unrelated to the system that is being implemented, yet instrumental to the system’s success.
The basis for political factors is that individuals who work in an organization have their own self-interested goals, which they pursue in addition to the goals of their departments and of their organizations. For example, people might act to increase their own power relative to that of their co-workers; at other times, people will act to prevent co-workers with more power (such as bosses) from using that power or from gaining more. Because information is power, information systems often are seen as instruments of one’s ability to influence and exert power. It is helpful to understand the history and politics around an information system, and to deal with negative political factors as well as the more objective and operational ones. Sometimes political interpretations provide better explanations for the implementation process and why events took the course they did.
Once an information system has been successfully implemented, the importance of documentation grows. A successfully implemented system becomes part of the daily work lives of an organization’s employees. Many of those employees will use the system, but others will maintain it and keep it running.
Security Issues

The security of information systems has become an increasingly important issue for organizations and their management. According to CERT/CC (Computer Emergency Readiness Team/Coordination Center) at Carnegie Mellon University, the number of unique system vulnerabilities cataloged in 2007 was 7,236, nearly seven times the 1,090 vulnerabilities reported in 2000. A vulnerability is a weakness in a system that can be readily exploited by someone who knows about it and knows how to take advantage of it. CERT/CC stopped reporting the number of actual security-related incidents in 2003, when the number hit 137,539, because such incidents had become so commonplace. Hard numbers about losses due to security breaches are difficult to obtain because most companies that have suffered breaches are too embarrassed to admit it, and they are certainly too embarrassed to communicate the actual dollar value of any losses. One estimate of how much security breaches cost companies comes from a survey on security conducted by PriceWaterhouseCoopers: for 2014, the reported estimated annual average financial loss due to cybersecurity incidents was $2.7 million USD. Because most firms are hesitant to report their true losses, we can be sure the actual amount of loss across the entire global economy is much, much more.
If organizations are victims of security breaches, what are the sources of these threats? Table 13-8 provides some of the answers. As you might expect, a majority of firms report that they have been victims of external threats, including malware (malicious software). Other external security threats include phishing attacks, exploitation of applications, denial of service, and theft of computing or storage devices. Denial of service is a popular tactic used to prevent access to a website, orchestrated by sending the website’s server more messages than it can handle in a short period of time. Note that although external threats are common, internal threats are common as well. The top three internal threats come from current employees, former employees, and contractors. Employee abuse includes such seemingly innocent activities as sending personal e-mail on company systems or surfing the Internet for personal use during work hours. Although these activities may not damage company systems, they do consume company resources that may be needed for work; downloading large music or video files on company equipment, for example, can consume bandwidth and slow work processes. Unauthorized access to information or privilege escalation by insiders is more devious, as these activities are committed with the intent to harm the firm.
Companies can act, and most do, to deal with information security issues. On average, companies spend more on systems security than the average loss due to cybersecurity incidents for firms of their size. When companies and individuals start to think about systems security, they first think about technological solutions to the problem (Schneier, 2000). As Table 13-8 shows, common solutions include firewalls, email security and spam-filtering software, antivirus software, virtual private networks, and data encryption.
Firewalls, used by 93 percent of firms, are built to keep intruders out. A firewall is a set of related programs that protects the resources of a network from users on other networks. Basically, a firewall works closely with a router program to examine each network packet and determine whether to forward it toward its destination. A firewall is often installed on a specially designated computer separate from the rest of the network so that no incoming request can get directly at private network resources.
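To make the packet-examination idea concrete, here is a minimal sketch of firewall-style filtering. It is not a real firewall; the rule fields (source prefix, destination port, action) and the rules themselves are illustrative assumptions, but the core logic — match each packet against an ordered rule list, first match wins, default deny — is how simple packet filters decide whether to forward a packet.

```python
# Illustrative packet-filter rules: first matching rule wins.
# An empty src_prefix matches any source address.
RULES = [
    {"src_prefix": "10.0.", "port": 80,  "action": "allow"},  # internal web traffic
    {"src_prefix": "10.0.", "port": 443, "action": "allow"},  # internal HTTPS
    {"src_prefix": "",      "port": 23,  "action": "deny"},   # block telnet from anywhere
    {"src_prefix": "",      "port": 80,  "action": "allow"},  # public web traffic
]

def filter_packet(src_ip: str, dest_port: int) -> str:
    """Return 'allow' or 'deny' for a packet based on the ordered rule list."""
    for rule in RULES:
        if src_ip.startswith(rule["src_prefix"]) and dest_port == rule["port"]:
            return rule["action"]
    return "deny"  # default-deny: anything unmatched is dropped
```

Real firewalls inspect far more than two fields, but the default-deny posture shown here is the standard design choice: traffic is blocked unless a rule explicitly permits it.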
Yet the weakest link in any computer defense is the people who use the computer system. For example, many system users fail to use good passwords: they may tell other people (including strangers) their passwords, or write their passwords on sticky notes they post on their computer monitors. The best defensive technology in the world cannot overcome human laziness and negligence. Experts argue that the human aspect of computer security can be dealt with through the implementation of procedures and policies regarding user behaviors (Denning, 1999; Mitnick and Simon, 2002). Such policies involve system users not giving out passwords, changing passwords regularly, keeping operating system and virus detection software updated, and so on. Sound systems security practice demands the effective use of appropriate information technologies as well as the diligent involvement of employees and decision makers in the defense of organization information technology assets.
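Such policies can be partly automated. The sketch below shows illustrative checks for two of the practices mentioned above — choosing strong passwords and changing them regularly. The specific thresholds (minimum length, character mix, a 90-day maximum age) are assumptions for the example, not rules from the chapter.

```python
from datetime import date, timedelta

MAX_PASSWORD_AGE = timedelta(days=90)  # illustrative policy: rotate every 90 days

def password_ok(password: str) -> bool:
    """Check an illustrative complexity policy: length plus character mix."""
    return (len(password) >= 10
            and any(c.isupper() for c in password)
            and any(c.islower() for c in password)
            and any(c.isdigit() for c in password))

def must_change(last_changed: date, today: date) -> bool:
    """Flag passwords older than the policy's maximum age."""
    return today - last_changed > MAX_PASSWORD_AGE
```

Checks like these enforce the letter of a policy; the human side — not sharing passwords or writing them on sticky notes — still depends on training and diligence.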
Table 13-8 Selected Statistics on IT Security (Data compiled from various sources.)

Information Security Budget by Company Size, 2014
Small (revenues less than $100 million) | $0.73 million
Medium (revenues $100 million – $1 billion) | $3.0 million
Large (revenues greater than $1 billion) | $10.8 million

Average Financial Losses due to Security Incidents, 2014
Small (revenues less than $100 million) | $0.41 million
Medium (revenues $100 million – $1 billion) | $1.3 million
Large (revenues greater than $1 billion) | $5.9 million

Top Three Insider Sources of Security Incidents, 2014
Current Employees | 35%
Former Employees | 30%
Current Service Providers/Consultants/Contractors | 18%

Top Three Identified External Sources of Security Incidents, 2014
Hackers | 24%
Competitors | 24%
Activists/Activist Organizations/Hacktivists | 16%
Information Brokers | 16%

Top Five Security Products in Use, 2014
Firewalls | 93%
Email Security and Spam Filtering | 90%
Endpoint Protection (e.g., antivirus) | 89%
Virtual Private Network (VPN) | 83%
Data Encryption | 73%

Top Five Security Breaches, 2014
Malware (e.g., viruses) | 76%
Phishing | 59%
Web or Software Applications Exploited | 35%
Denial of Service | 26%
Theft of Computers or Storage Devices | 25%
As with many other analysis and design activities, system implementation and operation of an Internet-based electronic commerce application follows the same processes used for other types of applications. Previously, you read how Jim Woo and the Pine Valley Furniture (PVF) development team transformed the conceptual data model for the WebStore into a set of normalized relations. Here we will examine how the WebStore system was tested before it was installed and brought online.
The programming of all WebStore software modules has been completed. The programmers have extensively tested each unique module, and it is now time to perform a systemwide test of the WebStore. In this section, we will examine how test cases were developed, how bugs were recorded and fixed, and how alpha and beta testing were conducted.
Developing Test Cases for the WebStore

To begin the systemwide testing process, Jim and the PVF development team developed test cases to examine every aspect of the system. Jim knew that system testing, like all other aspects of the SDLC, needed to be a very structured and planned process. Before opening the WebStore to the general public, every module and component of the system needed to be tested within a controlled environment. Based on his experience in implementing other systems, Jim felt that they would need to develop approximately 150 to 200 separate test cases to fully examine the WebStore. To help focus the development of test cases and to assign each member of his team primary responsibility for specific areas of the system, Jim developed the following list of testing categories:
Simple functionality: Add to cart, list section, calculate tax, change personal data
Multiple functionality: Add item to cart and change quantity, create user account, change address
Function chains: Add item to cart, check out, create user account, purchase
Elective functions: Returned items, lost shipments, item out of stock
Emergency/crisis: Missing orders, hardware failure, security attacks
The development group broke into five separate teams, each working to develop an extensive set of cases for one of the testing categories. Each team had one day to develop its test cases. Once these were developed, each team would lead a walk-through so that everyone would know the totality of the testing process and so that each team would receive extensive feedback, making the testing process as comprehensive as possible. To make this point, Jim stated, “What happens when a customer repeatedly enters the same product into the shopping cart? Can we handle that? What happens when the customer repeatedly enters and then removes a single product? Can we handle that? Although some of these things are unlikely to ever occur, we need to be confident that the system is robust to any type of customer interaction. We must develop every test case necessary to give us confidence that the system will operate as intended, 24-7-365!”
A big part of successful system testing is to make sure that no information is lost and that all tests are described in a consistent way. To achieve this, Jim provided all teams with standard forms for documenting each case and for recording the results of each test. This form had the following sections:
Test Case ID
Category/Objective of Test
Description
System Version
Completion Date
Participant(s)
Machine Characteristics (processor, operating system, memory, browser, etc.)
Test Result
Comments
The teams also developed standard codes for each general type of test, and these were used to create the Test Case ID. For example, all tests related to “Simple Functionality” were given an ID with SF as a prefix and a number as the suffix (e.g., SF001). The teams also developed standards for categorizing tests, listing objectives, and writing other test form contents. Establishing these standards ensured that the testing process would be consistently documented.
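The standard form and ID scheme can be sketched as a simple record type. The field names below mirror the sections of PVF's form; only the SF prefix is given in the text, so the prefixes for the other four categories are plausible guesses, not the team's actual codes.

```python
from dataclasses import dataclass, field

# Test Case ID prefixes by category. Only "SF" appears in the text;
# the other four prefixes are hypothetical.
CATEGORY_PREFIXES = {
    "Simple functionality": "SF",
    "Multiple functionality": "MF",
    "Function chains": "FC",
    "Elective functions": "EF",
    "Emergency/crisis": "EC",
}

_counters: dict = {}  # running number per prefix

def next_case_id(category: str) -> str:
    """Build an ID like 'SF001': category prefix plus an incrementing number."""
    prefix = CATEGORY_PREFIXES[category]
    _counters[prefix] = _counters.get(prefix, 0) + 1
    return f"{prefix}{_counters[prefix]:03d}"

@dataclass
class TestCase:
    """One record per test, with fields matching the standard form's sections."""
    case_id: str
    category: str
    description: str
    system_version: str
    completion_date: str = ""
    participants: list = field(default_factory=list)
    machine_characteristics: str = ""  # processor, OS, memory, browser, etc.
    test_result: str = ""
    comments: str = ""
```

Recording every test in one structured shape is what makes the results comparable across teams — exactly the consistency the standard forms were meant to guarantee.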
Bug Tracking and System Evolution

An outcome of the testing process is the identification of system bugs. Consequently, in addition to setting a standard method for writing and documenting test cases, Jim and the teams established several other rules to ensure a smooth testing process. Experienced developers have long known that an accurate bug-tracking process is essential for rapid troubleshooting and repair during the testing process. You can think of bug tracking as creating a “paper trail” that makes it much easier for programmers to find and repair the bug. To make sure that all bugs were documented in a similar way, the team developed a bug-tracking form that had the following categories:
Bug Number (simple incremental number)
Test Case ID That Generated the Bug
Is the Bug Replicable?
Effects
Description
Resolution
Resolution Date
Comments
The PVF development team agreed that bug fixes would be made in batches, because all test cases would have to be redone every time the software was changed. Rerunning every test case after each change ensures that, in the process of fixing one bug, no other bugs are introduced into the system. As the system moves along in the testing process and batches of bugs are fixed, the version number of the software is incremented. During the development and testing phases, the version is typically below the “1.0” first-release version.
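The batch-and-retest discipline can be sketched as follows. The version-numbering scheme and the test-runner callable are illustrative assumptions; the point is that every batch of fixes triggers a full regression pass before the version number is bumped.

```python
def apply_bug_batch(version: str, bugs_fixed: int, run_all_tests) -> str:
    """Apply a batch of fixes, re-run the full test suite, and bump the version.

    run_all_tests is a callable returning True only if every test case passes,
    guarding against a fix introducing new bugs elsewhere in the system.
    """
    if bugs_fixed == 0:
        return version  # nothing changed, nothing to retest
    if not run_all_tests():
        raise RuntimeError("Regression detected: batch introduced new failures")
    major, minor = version.split(".")
    return f"{major}.{int(minor) + 1}"  # e.g., 0.3 -> 0.4, still below 1.0
```

Batching fixes amortizes the cost of the full regression run: one complete pass covers many repairs instead of one pass per bug.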
Alpha and Beta Testing the WebStore

After completing all system test cases and resolving all known bugs, Jim moved the WebStore into the alpha-testing phase, in which the entire PVF development team as well as personnel around the company would put the WebStore through its paces. To motivate employees throughout the company to actively participate in testing the WebStore, several creative promotions and giveaways were held. All employees were given a T-shirt that said, “I shop at the WebStore, do you?” Additionally, all employees were given $100 to shop at the WebStore and were offered a free lunch for their entire department if they found a system bug while shopping on the system. Also during alpha testing, the development team conducted extensive recovery, security, stress, and performance testing. Table 13-9 provides a sample of the types of tests performed.
Table 13-9 Sample of Tests Conducted on the WebStore during Alpha Testing

Test Type | Tests Performed
Recovery |
Security |
Stress |
Performance |
After completing alpha testing, PVF recruited several of their established customers to help in beta testing the WebStore. As real-world customers used the system, Jim was able to monitor the system and fine-tune the servers for optimal system performance. As the system moved through the testing process, fewer and fewer bugs were found. After several days of “clean” usage, Jim felt confident that it was time to open the WebStore for business.
WebStore Installation

Throughout the testing process, Jim kept PVF management aware of each success and failure. Fortunately, because Jim and the development team followed a structured and disciplined development process, there were far more successes than failures. In fact, he was confident that the WebStore was ready to go online and would recommend to PVF’s top management that it was time to “flip the switch” and let the world enter the WebStore.
Project Closedown

In Chapter 3, you learned about the various phases of project management, from project initiation to closing down the project. If you are the project manager and you have successfully guided your project through all of the phases of the SDLC presented so far in this book, you are now ready to close down your project. Although the maintenance phase is just about to begin, the development project itself is over. As you will see in the next chapter, maintenance can be thought of as a series of smaller development projects, each with its own series of project management phases.
As you recall from Chapter 3, your first task in closing down the project involves many different activities, from dealing with project personnel to planning a celebration of the project’s ending. You will likely have to evaluate your team members, reassign most to other projects, and perhaps terminate others. As project manager, you will also have to notify all of the affected parties that the development project is ending and that you are now switching to maintenance mode.
Your second task is to conduct post-project reviews with both your management and your customers. In some organizations, these post-project reviews will follow formal procedures and may involve internal or EDP (electronic data processing) auditors. The point of a project review is to critique the project, its methods, its deliverables, and its management. You can learn many lessons to improve future projects from a thorough post-project review.
The third major task in project closedown is closing out the customer contract. Any contract that has been in effect between you and your customers during the project (or as the basis for the project) must be completed, typically through the consent of all contractually involved parties. This may involve a formal “signing off” by the clients stating that your work is complete and acceptable. The maintenance phase will typically be covered under new contractual agreements. If your customer is outside your organization, you will also likely negotiate a separate support agreement.
As an analyst member of the development team, your job on this particular project ends during project closedown. You will likely be reassigned to another project dealing with an organizational problem. Maintenance on your new system will begin and continue without you. To complete our consideration of the SDLC, however, we will cover the maintenance phase and its component tasks in Chapter 14.