
Post Implementation Evaluation of Computer-Based Information Systems: Current Practices

Kuldeep Kumar

Communications of the ACM, Vol. 33, No. 2 (Feb. 1990)

Abstract: Post-implementation evaluation of computer-based information systems (CBIS) may lead to improved development practices, beneficial decisions, and the evaluation and training of personnel. A study of summative evaluations involving a survey of computer users is presented. Issues and concerns summarized in the evaluation literature were used to develop a questionnaire that was distributed to 462 senior information systems executives. Ninety-two of these responded. Thirty percent of the respondents were evaluating 75 percent or more of their CBIS; 26 percent were evaluating between 25 and 49 percent of the installed systems; and 21 percent were not evaluating any systems. User managers and system development managers are the personnel most involved in performing evaluations. It is concluded that the main reason for post-implementation evaluation is to verify the completed development project against specifications and to transfer responsibility for the system to end users. This use limits the benefits of evaluation. A longer-term view of the system and its development process would consider the ultimate impact on the organization and the effectiveness of system users. Evaluators should consider adopting this approach and using a more global set of evaluative criteria in order to realize the full benefits of post-implementation evaluation.

With the increasing investment in computers and computer-based information systems (CBIS), the evaluation of these systems is becoming an important issue in the management and control of CBIS [1, 3-5, 21, 29]. Both management [4, 27] and IS professionals [23] recognize evaluation of the applications as one of the important unresolved concerns in managing computer resources. A 1976 SHARE study [7] recommends evaluation as the primary technique for establishing the worth of information systems.

This evaluation of the systems as they are developed and implemented may take place at the completion of various stages of the systems development life cycle (SDLC) [13]. For example, when a system is evaluated prior to undertaking systems development, evaluation is referred to as feasibility assessment. The next set of evaluation activities may be performed at the end of requirements specification and the logical design phase (specification and design reviews, and approvals), followed by evaluations at the end of physical design, coding, or testing. Finally, evaluations may be performed just before (acceptance tests and management reviews) or just after (post installation reviews) installation. This will be followed by evaluations of the system once it has had a chance to settle down (systems-operations post installation reviews) [13].

A useful way of summarizing and classifying the variety of evaluations comes from the program and curriculum evaluation literature [24, 28]. This literature distinguishes between formative and summative evaluations. Formative evaluation produces information that is fed back during development to help improve the product under development. It serves the needs of those who are involved in the development process. Summative evaluation is done after the development is completed. It provides information about the effectiveness of the product to those decision makers who are going to be adopting it.

In this study we focus on the summative or post implementation evaluation of computer-based information systems. The summative evaluation, as defined above, serves the evaluative information needs of those (user and top management, systems management, and system developers) who would finally be accepting and using the information system. Therefore, post implementation evaluations include evaluations performed just before installation, just after installation, and considerably after installation, once the system has had a chance to settle down.

The information systems literature lists a variety of benefits of post implementation evaluation of information systems. Hamilton [12] suggests that information system evaluation may result in beneficial outcomes such as improvement of systems development practices; decisions to adopt, modify, or discard information systems; and evaluation and training of personnel responsible for systems development. Green and Keim [10] include benefits such as ensured compliance with user objectives, improvements in the effectiveness and productivity of the design, and realization of cost savings by modifying systems through evaluation before, rather than after, real operation. Zmud [30] states that evaluation makes the computer-based information system "concrete" for managers and users so that they can recognize if and how the existing information systems need to be modified. Evaluations are critical to IS investment evaluation [20] and are highly rated by IS executives as a technique to evaluate information systems effectiveness [6]. The need for evaluation and its associated benefits have also been described by others [8, 16, 19, 25].

Despite the perceived importance of and the need for post implementation evaluation, the state of knowledge concerning current information systems evaluation practices is relatively minimal [13, 17, 22]. The common perception seems to be that post implementation evaluation is seldom performed [12, 26] or is not being performed adequately [9, 10, 12, 30].

There are three studies which provide limited empirical evidence on post implementation evaluation practices. The first is a survey of 31 member companies of the Diebold Group, performed in 1977 [6]. The second is an unpublished survey of 51 mid-western U.S. organizations by Hamilton in 1979-80 [13]. Both these studies are somewhat dated and were conducted with limited, unrepresentative samples. Furthermore, the Diebold study was limited to a fairly localized sample of 31 participants in the Diebold Research Program. Finally, a study by Hamilton [11] provides empirical evidence about the criteria, organization, and system characteristics commonly correlated with the selection of applications for post implementation reviews.

The purpose of our study is to document the current state of practice of post implementation evaluation of computer-based information systems in business organizations. Specifically, it attempts to answer the following questions:

* How prevalent is CBIS post implementation evaluation?
* Which stakeholders are typically involved in the evaluation process?
* What criteria are currently being used for evaluating CBIS?
* What benefits are attributed to CBIS evaluation?
* What are likely barriers to post implementation evaluation?

This study is useful from both practitioner and researcher perspectives. For the practitioner, it highlights the current practices and identifies areas which currently do not receive adequate attention. Practitioners can also use the study to compare their organization's evaluation practices against the overall norm and investigate the differences (if any). For the researcher, the study highlights areas which require further research efforts and are relevant to business executives' evaluation needs.

RESEARCH METHODOLOGY

The research approach consisted of three phases. In Phase I of the study, an extensive review of the evaluation literature was performed. The survey revealed that although there was a variety of information systems literature dealing with the evaluation of information systems, most was limited to prescriptive and normative techniques for performing the evaluation of CBIS. There was also a vast amount of literature providing descriptive evaluations of the existing inventory of installed computer-based information systems. With the exception of the three studies [6, 11, 13] we have mentioned, however, very little attention seems to have been given to understanding and describing information systems evaluation practices in their organizational setting.

In Phase II, the issues and concerns summarized from the literature survey were used to develop a questionnaire that dealt with evaluation practices in organizations. The questionnaire was designed using the principles used in market research and pretested at two locations (Southeast U.S. and Southwest Ontario) using a total of five information systems executives and three information systems academics. Finally, the questionnaire was reviewed by a committee of four information systems professionals and academics.

In Phase III of the research project, a cover letter and the questionnaire were addressed and mailed to 462 senior information systems executives of the top 500 firms in the Canadian Dun and Bradstreet Index. In order to maintain the integrity and the independence of the data-collection, data-coding, and data-conversion procedures, a professional marketing research firm was engaged to manage the questionnaire mailing, collection, coding, and data-conversion tasks in this phase. Of the 462 questionnaires mailed, 32 were returned as "individual moved--address unknown," and 92 completed questionnaires were returned, for a total response rate of 21 percent.
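The reported 21 percent appears to treat the 32 undeliverable questionnaires as outside the effective sample. A minimal sketch of that arithmetic, as we read it from the text (the variable names are ours, not the author's), is:

    # Response-rate arithmetic as we read it: undeliverable questionnaires
    # are removed from the denominator before computing the rate.
    mailed = 462          # questionnaires mailed
    undeliverable = 32    # returned as "individual moved--address unknown"
    completed = 92        # completed questionnaires returned

    effective_sample = mailed - undeliverable      # 430
    response_rate = completed / effective_sample   # about 0.214

    print(f"effective sample = {effective_sample}")
    print(f"response rate = {response_rate:.1%}")  # ~21.4%, i.e., the reported 21 percent

Dividing by all 462 mailed questionnaires would instead give roughly 20 percent, so the 21 percent figure is consistent with excluding the undeliverable forms.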
RESPONDENTS' CHARACTERISTICS

The range and distribution of the size of the information systems departments in the survey, as measured by the monthly hardware budget (rental equivalent) and shown in Figure 1, indicate that the sample includes a wide breadth of MIS organizations. The median monthly hardware budget for the firms in the sample was between $20,000 and $50,000, with the mode being between $100,000 and $500,000. Ten percent of the organizations in the sample had monthly hardware budgets exceeding $500,000.

A majority of the respondent organizations (69.6 percent) have a long history (greater than 10 years) of computer-based information systems use. On average, the organizations in the sample have been using computer-based information systems for approximately 15 years. The detailed distribution of the number of years of CBIS use for the respondent sample, shown in Figure 2, is consistent with that reported in a 1979 study (adjusted for time) using the same population base [4, p. 81, Table 5.4]. This evidence is a further confirmation of the representativeness of the sample.

Along with the maturing use of computer-based information systems, the IS function appears to be becoming independent of its earlier origins, where it was often a subunit of accounting, finance, or some other operating department. The sample statistics regarding the organizational location of the IS function, shown in Figure 3, are consistent with this trend. In 41 percent of the organizations, MIS is an independent line function, and in 15 percent of the organizations, it is a staff department reporting directly to top management. Only in 40 percent of the organizations does the MIS department continue to report to the accounting or finance departments.

Finally, the approximate percentage of the MIS budget (operations, development, and maintenance) spent on the three major categories of information systems in the IS portfolio [2] is presented in Figure 4. It reflects the current preponderance of transaction processing and operation-support applications, with a move towards management control and strategic planning systems.

RESEARCH FINDINGS

This section presents the detailed research findings regarding post implementation evaluation (PIE) practices in the respondent organizations.

The study found that 30 percent of the organizations surveyed were evaluating 75 percent or more of their computer-based information systems, as shown in Figure 5. Another 26 percent of the organizations were evaluating between 25 and 49 percent of the installed CBIS. Twenty-one percent of the organizations were not evaluating any of their installed CBIS. (These respondents were eliminated from further analysis.) These figures are consistent with Hamilton's earlier findings that in 1980 approximately 80 percent of the organizations were either performing post implementation reviews (PIRs) or indicated plans for implementing PIRs [13, p. 14].

Timing of Evaluation

Respondents were asked to indicate the most frequent stage in the systems development process at which post implementation evaluation was performed. As shown in Figure 6, most of the organizations performed post implementation evaluations either just before (28 percent) or just after (22 percent) the cut-over to the newly installed CBIS. These, along with the evaluations performed at cut-over (4 percent), constitute the majority (52 percent) of the organizations. The distribution had two minor peaks at 3 months (18 percent) and 6 months (14 percent), indicating the presence of systems operations PIRs performed after the system is fully installed, has had a chance to settle down, and meaningful performance data are available. In total, 39 percent and 18 percent of the organizations reported that such operations PIRs are performed three or more months and six or more months after the cut-over, respectively.

Who is Involved in Evaluation?

Table I presents summary statistics about the nature of the involvement of different system stakeholders in the system evaluation process. The data indicate that the system development team members are the major participants in post implementation evaluation. In 54 percent of the organizations, they actively manage and perform evaluation, and in 32 percent of the cases, they determine both the evaluation criteria and the evaluation method. In only 18 percent of the organizations, however, are the team members allowed to approve follow-up action (such as system enhancements or modifications) that may result from evaluation.

After development, the user managers (32 percent) and the systems department managers (18 percent) are the most involved in managing and performing evaluation. In 25 percent and 19 percent of the organizations, respectively, they also determine the criteria used for evaluation. This reflects their interest in adopting an effective system and maintaining adequate quality of the systems implemented.

Being a post implementation or summative evaluation, the evaluation process produces evaluative information for those decision makers who adopt and use the system. This is reflected in the high percentage of organizations where the results of evaluation are reviewed by the user management (56 percent), the systems department management (54 percent), and the corporate senior management (32 percent). (1) They are also the major participants in approving follow-up action (SD managers, 38 percent; user managers, 32 percent; corporate senior management, 26 percent).

Finally, in the management and external-auditing literature, there is an increasing indication of the desirability of auditor involvement in the development and evaluation of computer-based information systems. Our data seem to indicate cautious progress towards this goal. In 18 percent of the organizations, internal auditors actively perform or manage evaluations. Though in 24 percent of the organizations they are not involved in the evaluative process, they review the results of evaluations in 40 percent of the organizations. In 15 percent of cases, they are instrumental in determining evaluation criteria and evaluation methods.

CBIS Evaluation Criteria--What is being Evaluated?

A substantive issue in evaluation is the question of what is being evaluated. In order to measure this, the respondents were presented with a list of criteria or factors which are commonly mentioned in the information systems literature as candidates for evaluation. The respondents were asked to indicate the frequency with which these criteria were considered in the evaluation process. A five-point scale ranging from "never evaluated" through "occasionally," "frequently," and "usually evaluated," to "always evaluated" was used to determine the extent to which these criteria were being evaluated in practice.

As shown in Table II, the five most frequently evaluated criteria, in order of the frequency with which they are evaluated, were the accuracy of information; the timeliness and currency of information; user satisfaction and attitudes towards the system; internal controls; and project schedule compliance. These top criteria reflect the user, systems development team, and management and internal-audit participation in the criteria determination process discussed in the previous section.

The five least used criteria, sorted from lowest to highest frequency, were the system's fit with and impact upon the organization structure; quality of programs; net operating costs and savings of the system; the system's impact on users and their jobs; and quality and completeness of system documentation. In the context of previous studies [14, 18], which indicate the lack of interest in socio-technical issues exhibited by systems professionals, it is not surprising that the two criteria dealing with these issues (the system's fit with the organization and the system's impact on users and their jobs) were among the least frequently evaluated criteria. In light of the large amount of professional and research literature dealing with program and documentation quality and cost-benefit analysis of information systems, however, it was surprising to find that technical and economic issues such as program quality, quality and completeness of documentation, and net operating costs and savings were also among the least frequently evaluated criteria.
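As an illustration of how such frequency rankings can be derived from five-point scale responses, the sketch below codes hypothetical responses (the criterion names and response values are invented for illustration and are not the survey data) from 1 for "never evaluated" to 5 for "always evaluated" and orders criteria by mean score, which is the kind of ordering summarized in Table II:

    # Illustrative only: hypothetical responses on the five-point frequency
    # scale (1 = never evaluated ... 5 = always evaluated), not the survey data.
    from statistics import mean

    responses = {
        "Accuracy of information":      [5, 4, 5, 5, 4],
        "Timeliness of information":    [4, 4, 5, 4, 4],
        "User satisfaction":            [4, 3, 4, 5, 3],
        "Quality of programs":          [2, 1, 2, 2, 1],
        "System documentation quality": [2, 2, 1, 3, 2],
    }

    # Rank criteria from most to least frequently evaluated by mean scale score.
    ranked = sorted(responses.items(), key=lambda kv: mean(kv[1]), reverse=True)
    for criterion, scores in ranked:
        print(f"{criterion:30s} mean = {mean(scores):.2f}")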
Finally, in order to understand the underlying structure of the evaluation criteria, a factor analysis of the criteria was performed. (2) After a factor-loading cutoff level of 0.5 was employed, a three-factor structure resulted, with sixteen of the seventeen criteria loading at that level. The results of the factor analysis are shown in Table III. The first factor includes all criteria related to the information product of the system (i.e., accuracy, timeliness and currency, adequacy, and appropriateness of information) and has been named the "Information Criteria" factor. The second factor includes those criteria that do not directly influence the use and effectiveness of the primary system product (information) but are important aspects of the continuing operation of the system (such as system security, internal control, user satisfaction, net operating costs and savings, and quality of documentation). We call this the "System Facilitating Criteria" factor. The third factor includes those criteria concerned with evaluating the consequences or impacts of the newly installed system (the system's impact on users and their jobs, the system's fit with and impact upon the organization, system usage, and the user friendliness of the system interface) and is termed the "System Impact Criteria" factor. The only criterion that did not load onto any of the three factors at the 0.5 level was "Quality of Programs," which was also found to be one of the least evaluated (second from bottom) criteria in practice. While no a priori loadings were hypothesized, the factor analysis indicates that a logical structure of criteria (i.e., Information Criteria, System Facilitating Criteria, and System Impact Criteria) does exist.

Uses and Benefits of Evaluation

The senior information systems executives, as the major reviewers of the evaluation results and the most frequent approvers of follow-up action, were asked their opinion about the more important uses of the results. The importance of a variety of uses and benefits was measured on a five-point, Likert-like importance scale ranging from 1 for low importance to 5 for high importance. The results are presented in Table IV.

The five most important uses, in order of importance, are to verify that the installed system meets user requirements; to provide feedback to the development personnel; to justify the adoption, continuation, or termination of the installed system; to clarify and set priorities for needed modifications; and to transfer the responsibility for the system from the development team to the users. The least important use indicated is the evaluation of the systems development personnel. This finding should reassure those who may be resisting a formal system evaluation because of apprehension about its use as a personnel evaluation device. The use of the evaluation process to assess the system's development methodology and the project management method is also rather low on the importance scale, thereby indicating that systems management has not been able to conceptualize the link between development methodologies and the quality of the information systems produced. The results of a factor analysis on the uses and benefits of evaluation variables were inconclusive.

Inhibitors of Evaluation

All respondents (including those who did not perform evaluations) were asked to rate reasons for not performing evaluations. The reasons were rated on a five-point scale from "Very Unlikely to Inhibit Evaluation (1)" to "Very Likely to Inhibit Evaluation (5)." As shown in Table V, the reason most likely to inhibit evaluation was the unavailability of users to spend time on the evaluation activities. This, along with the unavailability of qualified personnel and management perceiving inadequate benefits from evaluation, were the greatest inhibitors of evaluation efforts. The IS executives did not seem to feel that the lack of evaluation methodologies and the lack of agreement on evaluation criteria were likely to hinder post implementation evaluation.

After a factor-loading cutoff of 0.5 was employed, a factor analysis (Table V) of the inhibiting variables resulted in a two-factor structure, with four of the seven variables loading at that level. The first factor, which we term "Evaluator Availability," included the variables "users not available to spend time on evaluation" and "project personnel reassigned; not available for evaluation." The second factor, "Evaluation Criteria and Methods," included two relatively weak inhibitors: "the lack of an appropriate methodology" and "the lack of agreement on evaluation criteria."

DISCUSSION

This study investigated the current practices in the post implementation evaluation of computer-based information systems. The results of the study indicate that 79 percent of the organizations surveyed are currently performing post implementation evaluations of some or most of their installed CBIS. Only 30 percent of the organizations, however, evaluate a majority (75 percent or more) of their CBIS, whereas 26 percent evaluate between 25 and 49 percent of their CBIS. This finding is consistent with Hamilton's earlier findings [12, 13] that post implementation evaluation is performed only on a small fraction of the systems developed.

Among those organizations that perform post implementation evaluations, most evaluations are performed either just before or just after system cut-over and installation. This may reflect the high importance attached to project cut-over and close-out uses of the evaluation process, such as verification that system requirements are met by the installed system, justification of the adoption or termination of the installed system, clarification and priority setting for further modifications, and transfer of responsibility for the installed system to the user. Only 18 percent of the organizations perform systems operations PIRs (six or more months after installation [12]), with the primary intention of assessing and improving the systems product rather than closing out the development project.

The view that, in most cases, evaluation could be a project close-out device is further supported by the finding that the major participants in evaluation are the members of the systems development team. As the developers are usually interested in finishing up the current project so that they can move on to the next set of development projects, the closing out of the current project could be a motivation for performing evaluation.

The research findings reveal that much of evaluation is performed and managed by the members of the systems development team. These are the people who have the most say in determining evaluation criteria and evaluation methodology. Since the design ideals and the values of the developers are instrumental in shaping the system design and the systems development process [14, 18], it is unlikely that an evaluation managed and performed by the development team will discover any basic flaws in the process or the product of design.

Nonetheless, both user managers and systems development managers participate in evaluation and are the major stakeholders who review the results of evaluation and approve follow-up action. As long as this participation is substantive, some of the concerns about the bias of developer-conducted evaluation may be mitigated. Though internal-audit groups have not made inroads as major participants in performing evaluations, they help to determine criteria and to review results.

The most frequently evaluated criteria include evaluations of the information product (accuracy, timeliness, adequacy, and appropriateness of information), user satisfaction with the system, and internal controls. Not surprisingly, reflecting the current value biases of systems developers [14, 18], socio-technical factors such as the system's impact on the users and the organization were among the least evaluated criteria. Finally, two criteria, quality of programs and the quality and completeness of system documentation, which are usually emphasized in both the practitioner and computer science literature as being important to future operations and maintenance of the system, are also among the least frequently evaluated criteria. This could reflect the use of evaluation primarily as a responsibility transfer device and as a method for the justification and adoption of the installed system.

It seems that at least two primary stakeholders in the evaluation process, i.e., the systems development team and systems management, use evaluation primarily as a means of closing out the systems project and disengaging from the system. The most important uses of evaluation results included the verification that the installed system met requirements; the justification of the adoption, continuation, or termination of the new system; the clarification and prioritization of further modifications; and the transfer of system responsibility to the user. All of these activities are important for closing out the development project. The use of evaluation results as a feedback device for improving future development and project management methods and for evaluating (and improving) the systems development project personnel was found to be unimportant, thereby reinforcing the conclusion regarding the primary use of evaluation as a disengagement strategy.

The factors most likely to inhibit evaluation were found to be the unavailability of two of the major participants in the systems development process--the users and the qualified project team personnel. This again suggests that once the system is completed and implemented, the major stakeholders are interested in getting on with other work and use evaluation as a milestone for completion.

It was also felt by the respondents that corporate management did not perceive adequate benefits from evaluation. Hamilton [12, pp. 133-137] has empirically demonstrated that the behavioral intention to perform post implementation reviews is strongly influenced by the evaluators' normative beliefs about what salient referents think should be done and by the motivation to comply with them. Since corporate management is a strong salient referent, and since it does not perceive adequate benefits from evaluation, evaluation is less likely to be performed as an evaluative device rather than as a close-out device.

Finally, the lack of agreement on evaluation criteria and the lack of an appropriate methodology for evaluation were not found to be major inhibitors of evaluation. Given the current controversy in the information systems literature regarding appropriate criteria, measures, and methods for information systems evaluation, the finding that these factors do not inhibit evaluation is surprising. It is possible that, given the close-out nature of evaluation, the evaluators have given only superficial consideration to the substantive issues that make the criteria and methods controversial.

CONCLUSIONS AND RECOMMENDATIONS

The study findings point to three key conclusions. First, it appears that the major reason for performing post implementation evaluation is the formalization of the completion of the development project, whereby the deliverable (i.e., the installed system) is verified against specifications, any unfinished business, such as further modifications, is noted, and the responsibility for the system is transferred to the users. Evaluation then becomes a major tactic in a project disengagement strategy for the systems development department. Evaluation does not seem to serve the purpose of long-term assessment of the system's impact and effectiveness, or of providing feedback to modify inappropriate development and project management practices; nor is it used to counsel and educate ineffective project team personnel.

This conclusion seems to be reinforced by the finding that the majority of evaluations are performed either just before, at, or after system cut-over, and only in 18 percent of the organizations are true systems operations PIRs performed. Given the limited objectives of evaluation, it is doubtful that management and the users perceive adequate benefits from this exercise. This could be the reason for the study finding that the top inhibitors of evaluation include the unavailability of users and development personnel for evaluation activities and management not perceiving adequate benefits from evaluation.

Second, much of evaluation is managed and performed, and evaluation criteria and methods are determined, by those who have designed the system being implemented. Since the designers would already have designed in most of the factors they consider important, it is not likely that such an evaluation will uncover any basic flaws in the product or the process of systems design.

Third, the most frequently evaluated criteria seem to be information quality criteria (accuracy, timeliness, adequacy, and appropriateness) along with facilitating criteria such as user satisfaction and attitudes and internal controls. Socio-technical criteria, such as the system's impacts on the user and the organization, as well as criteria relating to the long-term maintenance and growth of the system (system documentation and program quality), are evaluated much less frequently.

These conclusions suggest that post implementation evaluations are being performed for the limited, short-term reason of formalizing the end of the development project and may not provide the more important long-term, feedback-improvement benefits of the evaluation process. In order to realize these benefits, the evaluators need to take a longer-term view of the system and its development process. In such a view, long-term impacts (such as the system's impact on the organization and on the system users and their effectiveness) would be considered, and long-term viability (in terms of cost savings, security, maintenance, program quality, and documentation quality) would be assessed. This would require that corporate and systems management formally recognize the role of post implementation evaluation as a tool for providing feedback about both the systems development product and the development process, and realize that this feedback is invaluable for improving both the product and the process. The results of the evaluation process can then be reviewed to see which of these long-term objectives have been addressed by the evaluation.

As this evaluation would be looking at the longer-term impacts and viability, the formal evaluation should be performed when the system has had a chance to settle down and its impacts are becoming visible through continued operation. Depending on the scope of the system, this may be anywhere between three and twelve months after the system cut-over.

Next, in order to ensure the independence of evaluation and a more global set of criteria than those conceived by the developers, evaluation should be managed and performed by people other than the members of the development team. The mechanism for performing post implementation evaluation may be either an independent quality assurance group or a multi-stakeholder evaluation team led by the users. An evaluation group independent of the development team does not preclude the possibility of the developers contributing to the evaluation process. The existence of a formal quality assurance group will also reduce the effect of two of the major inhibitors, i.e., the unavailability of the users and of the development personnel for evaluation.

Finally, a longer-term, feedback-improvement-oriented post implementation evaluation, with the accompanying system and development process improvement benefits, would be helpful in gaining corporate management support for evaluations, thereby increasing the possibility of more substantive and meaningful evaluations being performed. Unless the above recommendations are implemented, post implementation evaluations will continue to serve the limited purpose of closing out the development project.

(1) For the sake of brevity, the detailed statistics for corporate senior management, external auditors, and MIS staff other than the project team members are not included in Table I. Significant statistics for these stakeholder groups are presented in the accompanying narrative.

(2) Factor analysis is a statistical technique used to discover which of the elements or variables in a sample population vary together and therefore may be candidates for grouping into groups called factors. For an introduction to and an explanation of factor analysis, see [15].
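A minimal, hypothetical sketch of the loading-cutoff procedure described in footnote (2) and applied above to the criteria and inhibitor variables is given below. It uses synthetic data with a known three-factor structure and scikit-learn's FactorAnalysis with a varimax rotation; the rotation choice, the variable indices, and the library are our assumptions for illustration, not the study's actual data or analysis software.

    # Illustrative sketch: fit a three-factor model to synthetic ratings and
    # group variables by the factor on which their absolute loading is >= 0.5.
    # (rotation="varimax" requires scikit-learn 0.24 or later.)
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)
    n = 500

    # Three latent factors, each driving a block of three observed variables.
    latent = rng.normal(size=(n, 3))
    noise = 0.5 * rng.normal(size=(n, 9))
    data = np.hstack([
        latent[:, [0]] + noise[:, 0:3],   # variables 0-2 load on factor 1
        latent[:, [1]] + noise[:, 3:6],   # variables 3-5 load on factor 2
        latent[:, [2]] + noise[:, 6:9],   # variables 6-8 load on factor 3
    ])
    data = (data - data.mean(axis=0)) / data.std(axis=0)   # standardize

    fa = FactorAnalysis(n_components=3, rotation="varimax", random_state=0)
    fa.fit(data)
    loadings = fa.components_.T   # shape: (n_variables, n_factors)

    # Apply the 0.5 loading cutoff: a variable belongs to a factor only if its
    # absolute loading on that factor is at least 0.5.
    for factor in range(loadings.shape[1]):
        members = [v for v in range(loadings.shape[0])
                   if abs(loadings[v, factor]) >= 0.5]
        print(f"Factor {factor + 1}: variables {members}")

With this block structure, each variable loads strongly (around 0.9) on its own factor and near zero elsewhere, so the cutoff recovers the three groups cleanly; in the study, the analogous procedure yielded the three criteria factors of Table III and the two inhibitor factors of Table V.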
REFERENCES

[1] Ball, L., and Harris, R. SMIS members: A membership analysis. MIS Q. 6, 1 (Mar. 1982), 19-38.
[2] Benbasat, I., Dexter, A., and Mantha, R. W. Impact of organizational maturity on information system skill needs. MIS Q. 4, 1 (Mar. 1980), 21-34.
[3] Brancheau, J. C., and Wetherbe, J. C. Key issues in information systems management. MIS Q. 11, 1 (Mar. 1987), 23-45.
[4] Cooke, J. E., and Drury, D. H. Management planning and control of information systems. Society of Management Accountants Research Monograph, Hamilton, Ontario, 1980.
[5] Dickson, G. W., et al. Key information system issues for the 1980s. MIS Q. 8, 3 (Sept. 1984), 135-153.
[6] The Diebold Group, Inc. Key measurement indicators of ADP performance. Doc. No. S25, Diebold Research Program, New York, N.Y., 1977.
[7] Dolotta, T. A., et al. Data Processing in 1980-1985: A Study of Potential Limitations to Progress. John Wiley and Sons, New York, 1976.
[8] Domsch, M. Effectiveness measurement of computer-based information systems through cost-benefit analysis. In Design and Implementation of Computer-Based Information Systems. N. Szyperski and E. Grochla, Eds., Sijthoff & Noordhoff, Alphen aan den Rijn, The Netherlands, 1979.
[9] Dumas, P. J. Management information systems: A dialectic theory and the evaluation issue. Ph.D. Dissertation, Univ. of Texas, Austin, 1978.
[10] Green, G. I., and Keim, R. T. After implementation what's next? Evaluation. J. Syst. Manage. 34, 9 (Sept. 1983), 10-15.
[11] Hamilton, J. S. EDP quality assurance: Selecting applications for review. In Proceedings of the Third International Conference on Information Systems (Ann Arbor, Mich., Dec. 13-15). ACM/SIGBDP, New York, 1982, pp. 221-238.
[12] Hamilton, J. S. Post installation systems: An empirical investigation of the determinants for use of post installation reviews. Ph.D. Dissertation, Univ. of Minnesota, 1981.
[13] Hamilton, J. S. A survey of data processing post installation evaluation practices. MIS Research Center Working Paper MISRC-WP-80-06, Univ. of Minnesota, 1980.
[14] Hedberg, B., and Mumford, E. The design of computer systems: Man's vision of man as an integral part of the system design process. In Human Choice and Computers. E. Mumford and H. Sackman, Eds., North-Holland, Amsterdam, The Netherlands, 1975.
[15] Kim, J.-O., and Mueller, C. W. Introduction to Factor Analysis. Sage Publications, Beverly Hills, Calif., 1978.
[16] Kleijnen, J. P. C. Computers and Profits. Addison-Wesley, Waltham, Mass., 1980.
[17] Kriebel, C. H. The evaluation of management information systems. IAG J. 4, 1 (1971), 1-14.
[18] Kumar, K., and Welke, R. J. Implementation failure and system developer values: Assumptions, truisms and empirical evidence. In Proceedings of the Fifth International Conference on Information Systems (Tucson, Ariz., Nov. 28-30). ACM/SIGBDP, New York, 1984, pp. 1-12.
[19] Land, F. Evaluation of systems goals in determining a strategy for a computer-based information system. Comput. J. 19, 4 (1978), 290-294.
[20] Matlin, G. L. What is the value of investment in information systems? MIS Q. 3, 3 (Sept. 1979), 5-34.
[21] Mautz, R. K., et al. Senior management control of computer-based information systems. Research Monograph of the Research Foundation of the Financial Executives Institute, New Jersey, 1983.
[22] Norton, R. L., and Rau, K. G. A Guide to EDP Performance Management. QED Information Sciences, Wellesley, Mass., 1978.
[23] Powers, R. F., and Dickson, G. W. MIS project management: Myths, opinions, and reality. Calif. Manage. Rev. 15, 3 (Spring 1973), 147-156.
[24] Scriven, M. The methodology of evaluation. In Perspectives of Curriculum Evaluation. R. W. Tyler, R. M. Gagne, and M. Scriven, Eds., AERA Monograph Series on Curriculum Evaluation, Vol. 1, Rand McNally and Co., Chicago, 1967, pp. 39-83.
[25] Seibt, D. User and specialist evaluations in system development. In Design and Implementation of Computer-Based Information Systems. N. Szyperski and E. Grochla, Eds., Sijthoff and Noordhoff, Alphen aan den Rijn, The Netherlands, 1979.
[26] Sollenberger, H. M., and Arens, A. A. Assessing information systems projects. Manage. Account. (Sept. 1973), 37-42.
[27] Waldo, C. Which departments use the computer best? Datamation 27 (Mar. 1980), 201-202.
[28] Weiss, C. H. Evaluation Research: Methods for Assessing Program Effectiveness. Prentice-Hall, Englewood Cliffs, N.J., 1972.
[29] Welke, R. J. Information systems effectiveness evaluation. Working Paper, Faculty of Business, McMaster University, Hamilton, Ontario, Canada.
[30] Zmud, R. W. Information Systems in Organizations. Scott, Foresman and Company, Glenview, Ill., 1983.

CR Categories and Subject Descriptors: K.6.1 [Management of Computing and Information Systems]: Project and People Management--life cycle; management techniques; system development; K.6.4 [Management of Computing and Information Systems]: System Management--management audit; quality assurance.

General Terms: Management

Additional Key Words and Phrases: Information systems evaluation, post implementation evaluation, post implementation review, post implementation audit

KULDEEP KUMAR is an assistant professor of Computer Information Systems in the College of Business, Georgia State University. He is a member of IFIP WG 8.2 and the IEEE Computer Society and has served on several program committees for IFIP and the International Conference on Information Systems. His current research interests include management of information systems, information systems planning, information systems development methodologies, and methodology engineering. Author's Present Address: Computer Information Systems, Georgia State University, University Plaza, Atlanta, GA 30303.

Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission.

Copyright 1990 Association for Computing Machinery, Inc.