
Post Implementation Evaluation of Computer-Based Information Systems: Current Practices

With the increasing investment in computers and computer-based information systems (CBIS), the evaluation of these systems is becoming an important issue in the management and control of CBIS [1, 3-5, 21, 29]. Both management [4, 27] and IS professionals [23] recognize evaluation of applications as one of the important unresolved concerns in managing computer resources. A 1976 SHARE study [7] recommends evaluation as the primary technique for establishing the worth of information systems.

This evaluation of the systems as they are developed and implemented may take place at the completion of various stages of the systems development life cycle (SDLC) [13]. For example, when a system is evaluated prior to undertaking systems development, evaluation is referred to as feasibility assessment. The next set of evaluation activities may be performed at the end of requirements specification and the logical design phase (specification and design reviews, and approvals), followed by evaluations at the end of physical design, coding, or testing. Finally, evaluations may be performed just before (acceptance tests and management reviews) or just after (post installation reviews) installation. This will be followed by evaluations of the system once it has a chance to settle down (systems-operations post installation reviews) [13].

A useful way of summarizing and classifying the variety of evaluations is from the program and curriculum evaluation literature [24, 28]. This literature distinguishes between formative and summative evaluations. Formative evaluation produces information that is fed back during development to help improve the product under development. It serves the needs of those who are involved in the development process. Summative evaluation is done after the development is completed. It provides information about the effectiveness of the product to those decision makers who are going to be adopting it.

In this study we focus on the summative or post implementation evaluation of computer-based information systems. Summative evaluation, as defined above, serves the evaluative information needs of those (user and top management, systems management, and system developers) who will finally be accepting and using the information system. Post implementation evaluations therefore include evaluations performed just before installation, just after installation, and considerably after installation, once the system has had a chance to settle down.

The information systems literature lists a variety of benefits of post implementation evaluation of information systems. Hamilton [12] suggests that information system evaluation may result in beneficial outcomes such as improvement of systems development practices; decisions to adopt, modify, or discard information systems; and evaluation and training of personnel responsible for systems development. Green and Keim [10] include benefits such as ensured compliance with user objectives, improvements in the effectiveness and productivity of the design, and realization of cost savings by modifying systems through evaluation before, rather than after, real operation. Zmud [30] states that evaluation makes the computer-based information system "concrete" for managers and users so that they can recognize if and how the existing information systems need to be modified. Evaluations are critical to IS investment evaluation [20] and are highly rated by IS executives as a technique for evaluating information systems effectiveness [6]. The need for evaluation and its associated benefits have also been described by others [8, 16, 19, 25].

Despite the perceived importance of and the need for post implementation evaluation, the state of knowledge concerning current information systems evaluation practices is relatively minimal [13, 17, 22]. The common perception seems to be that post implementation evaluation is seldom performed [12, 26] or is not being performed adequately [9, 10, 12, 30].

There are three studies which provide limited empirical evidence on post implementation evaluation practices. The first is a survey of 31 member companies of the Diebold Group, performed in 1977 [6]. The second is an unpublished survey of 51 mid-western U.S. organizations by Hamilton in 1979-80 [13]. Both these studies are somewhat dated and were conducted with limited, unrepresentative samples. Furthermore, the Diebold study was limited to a fairly localized sample of 31 participants in the Diebold Research Program. Finally, a study by Hamilton [11] provides empirical evidence about the criteria, organization, and system characteristics commonly correlated with the selection of applications for post implementation reviews.

The purpose of our study is to document the current state of practice of post implementation evaluation of computer-based information systems in business organizations. Specifically, it attempts to answer the following questions:

* How prevalent is CBIS Post Implementation Evaluation?

* Which stakeholders are typically involved in the evaluation process?

* What criteria are currently being used for evaluating CBIS?

* What benefits are attributed to CBIS evaluation?

* What are likely barriers to post implementation evaluation?

This study is useful from both practitioner and researcher perspectives. For the practitioner it highlights the current practices and identifies areas which currently do not receive adequate attention. Practitioners can also use the study to compare their organization's evaluation practices against the overall norm and investigate the differences (if any). For the researcher, the study highlights areas which require further research efforts and are relevant to the business executives' evaluation needs.

RESEARCH METHODOLOGY

The research approach consisted of three phases. In Phase I of the study an extensive review of the evaluation literature was performed. This review revealed that although a variety of information systems literature deals with the evaluation of information systems, most of it is limited to prescriptive and normative techniques for performing the evaluation of CBIS. There is also a vast amount of literature providing descriptive evaluations of the existing inventory of installed computer-based information systems. With the exception of the three studies [6, 11, 13] mentioned above, however, very little attention seems to have been given to understanding and describing information systems evaluation practices in their organizational setting.

In Phase II, the issues and concerns summarized from the literature review were used to develop a questionnaire dealing with evaluation practices in organizations. The questionnaire was designed using market research principles and pretested at two locations (Southeast U.S. and Southwest Ontario) with a total of five information systems executives and three information systems academics. Finally, the questionnaire was reviewed by a committee of four information systems professionals and academics.

In Phase III of the research project, a cover letter and the questionnaire were addressed and mailed to 462 senior information systems executives of the top 500 firms in the Canadian Dun and Bradstreet Index. In order to maintain the integrity and the independence of the data-collection, data-coding, and data-conversion procedures, a professional marketing research firm was engaged to manage the questionnaire mailing, collection, coding, and data-conversion tasks in this phase. Of the 462 questionnaires mailed, 32 were returned as "individual moved--address unknown," and 92 completed questionnaires were returned, for a total response rate of 21 percent.
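
As a minimal sketch only: assuming the reported 21 percent is computed on the effective sample (questionnaires mailed minus those returned as undeliverable), the response-rate arithmetic can be reproduced as follows in Python, using the figures quoted above.

    # Response-rate arithmetic; assumes the rate is based on the effective
    # sample (mailed minus undeliverable). Figures are those reported above.
    mailed = 462          # questionnaires mailed to senior IS executives
    undeliverable = 32    # returned as "individual moved--address unknown"
    completed = 92        # completed questionnaires returned

    effective_sample = mailed - undeliverable      # 430
    response_rate = completed / effective_sample   # approximately 0.214

    print(f"Effective sample: {effective_sample}")
    print(f"Response rate: {response_rate:.1%}")   # prints 21.4%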

RESPONDENTS' CHARACTERISTICS

The range and distribution of the size of the information systems departments in the survey, as measured by the monthly hardware budget (rental equivalent) shown in Figure 1, indicate that the sample includes a wide breadth of MIS organizations. The median monthly hardware budget for the firms in the sample was between $20,000 and $50,000, with the mode being between $100,000 and $500,000. Ten percent of the organizations in the sample had monthly hardware budgets exceeding $500,000.
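
As a brief, hedged illustration (the detailed distribution of Figure 1 is not reproduced here), the Python sketch below shows how the modal bin and the bin containing the median can be read off a grouped monthly-hardware-budget distribution. The bin labels and counts are hypothetical and were chosen only to be consistent with the summary statistics quoted above.

    # Hypothetical counts per monthly hardware budget bin (number of
    # responding organizations); NOT the actual Figure 1 data.
    bins = [
        ("under $20,000",      24),
        ("$20,000-$50,000",    24),
        ("$50,000-$100,000",    8),
        ("$100,000-$500,000",  27),
        ("over $500,000",       9),
    ]

    total = sum(count for _, count in bins)
    modal_bin = max(bins, key=lambda item: item[1])[0]   # bin with the largest count

    # Walk the cumulative counts to find the bin containing the median case.
    cumulative, median_bin = 0, None
    for label, count in bins:
        cumulative += count
        if cumulative >= total / 2:
            median_bin = label
            break

    print("Modal bin: ", modal_bin)    # "$100,000-$500,000"
    print("Median bin:", median_bin)   # "$20,000-$50,000"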

A majority of the respondent organizations (69.6 percent) have a long history (greater than 10 years) of computer-based information systems use. On average, the organizations in the sample have been using computer-based information systems for approximately 15 years. The detailed distribution of the number of years of CBIS use for the respondent sample, shown in Figure 2, is consistent with that reported in a 1979 study (adjusted for time) using the same population base [4, p. 81, Table 5.4]. This evidence is a further confirmation of the representativeness of the sample.

Along with the maturing use of computer-based information systems, the IS function appears to be becoming independent of its earlier origins, where it was often a subunit of accounting, finance, or some other operating department. The sample statistics regarding the organizational location of the IS function, shown in Figure 3, are consistent with this trend. In 41 percent of the organizations, MIS is an independent line function, and in 15 percent of the organizations, it is a staff department reporting directly to top management. Only in 40 percent of the organizations does the MIS department continue to report to the accounting or finance departments.

Finally, the approximate percentage of the MIS budget (operations, development, and maintenance) spent on the three major categories of information systems in the IS portfolio [2] is presented in Figure 4. It reflects the current preponderance of transaction processing and operation-support applications with a move towards management control and strategic planning systems.

RESEARCH FINDINGS

This section presents the detailed research findings regarding post implementation evaluation (PIE) practices in the respondent organizations.

The study found that 30 percent of the organizations surveyed were evaluating 75 percent or more of their computer-based information systems, as shown in Figure 5. Another 26 percent of the organizations were evaluating between 25 percent and 49 percent of the installed CBIS. Twenty-one percent of the organizations were not evaluating any of their installed CBIS. (These respondents were eliminated from further analysis.) These figures are consistent with Hamilton's earlier finding that in 1980, approximately 80 percent of the organizations were either performing post implementation reviews (PIRs) or indicated plans for implementing PIRs [13, p. 14].

Timing of Evaluation

Respondents were asked to indicate the stage in the systems development process at which post implementation evaluation was most frequently performed. As shown in Figure 6, most of the organizations performed post implementation evaluations either just before (28 percent) or just after (22 percent) the cut-over to the newly installed CBIS. These, along with the evaluations performed at cut-over (4 percent), constitute the majority (52 percent) of the organizations. The distribution had two minor peaks at 3 months (18 percent) and 6 months (14 percent), indicating the presence of systems operations PIRs performed after the system is fully installed, has had a chance to settle down, and meaningful performance data are available. In total, 39 percent of the organizations reported that such operations PIRs are performed three or more months after cut-over, and 18 percent reported that they are performed six or more months after cut-over.

Who is Involved in Evaluation?

Table I presents summary statistics about the nature of the involvement of different system stakeholders in the system evaluation process. The data indicate that the system development team members are the major participants in post implementation evaluation. In 54 percent of the organizations, they actively manage and perform evaluation, and in 32 percent of the cases, they determine both the evaluation criteria and the evaluation method. In only 18 percent of the organizations, however, the team members are allowed to approve follow-up action (such as system enhancements or modifications) that may result from evaluation.

After development, the user managers (32 percent) and the systems department managers (18 percent) are the most involved in managing and performing evaluation. In 25 percent and 19 percent of the organizations, respectively, they also determine the criteria used for evaluation. This reflects their interest in adopting an effective system and maintaining adequate quality of the systems implemented.

Being a post implementation or summative evaluation, the evaluation process produces evaluative information for those decision makers who adopt and use the system. This is reflected in the high percentage of organizations where the results of evaluation are reviewed by user management (56 percent), systems department management (54 percent), and corporate senior management (32 percent). (1) These stakeholders are also the major participants in approving follow-up action (systems department managers (38 percent), user managers (32 percent), and corporate senior management (26 percent)).

Finally, in the management and external-auditing literature, there is an increasing indication of the desirability of auditor involvement in the development and evaluation of computer-based information systems. Our data seem to indicate cautious progress towards this goal. In 18 percent of the organizations, internal auditors actively perform or manage evaluations. Though in 24 percent of the organizations they are not involved in the evaluative process, they review the results of evaluations in 40 percent of the organizations. In 15 percent of cases, they are instrumental in determining evaluation criteria and evaluation methods.

CBIS Evaluation Criteria--What is being Evaluated?

A substantive issue in evaluation is the question of what is being evaluated. In order to measure this, the respondents were presented with a list of criteria or factors commonly mentioned in the information systems literature as candidates for evaluation. The respondents were asked to indicate the frequency with which these criteria were considered in the evaluation process. A five-point scale ranging from "never evaluated" through "occasionally," "frequently," and "usually evaluated," to "always evaluated" was used to determine the extent to which these criteria were being evaluated in practice.
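
Because the raw survey responses are not reproduced in this article, the following Python sketch uses hypothetical stand-in ratings only; it merely illustrates how responses coded on such a five-point scale can be averaged and ranked to order the criteria by evaluation frequency. The criterion names and values shown are illustrative.

    import pandas as pd

    # Hypothetical responses (rows = respondents, columns = criteria),
    # coded 1 = "never evaluated" ... 5 = "always evaluated".
    ratings = pd.DataFrame({
        "Accuracy of information":     [5, 4, 5, 5, 4],
        "Timeliness of information":   [4, 5, 4, 5, 4],
        "User satisfaction":           [4, 4, 5, 3, 4],
        "Internal controls":           [3, 4, 4, 4, 3],
        "Project schedule compliance": [3, 3, 4, 4, 3],
        "Quality of programs":         [2, 1, 2, 2, 1],
    })

    # Mean rating per criterion, ordered from most to least frequently evaluated.
    summary = ratings.mean().sort_values(ascending=False).rename("mean rating")
    print(summary)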

As shown in Table II, the five most frequently evaluated criteria, in order of the frequency with which they are evaluated, were the accuracy of information, timeliness and currency of information, user satisfaction and attitudes towards the system, internal controls, and project schedule compliance. These top criteria reflect the user, systems development team, management, and internal-audit participation in the criteria determination process discussed in the previous section.

The five least used criteria, sorted from lowest to highest frequency, were the system's fit with and impact upon the organization structure; quality of programs; net operating costs and savings of the system; system's impact on users and their jobs; and quality and completeness of system documentation. In the context of previous studies which indicate the lack of interest in socio-technical issues exhibited by systems professionals [14, 18], it is not surprising that the two criteria dealing with these issues (system's fit with the organization and system's impact on users and their jobs) were among the least frequently evaluated criteria. In light of the large amount of professional and research literature dealing with program and documentation quality and cost-benefit analysis of information systems, however, it was surprising to find that technical and economic issues such as program quality, quality and completeness of documentation, and net operating costs and savings were also among the least frequently evaluated criteria.

Finally, in order to understand the underlying structure of the evaluation criteria, a factor analysis of the criteria was performed. (2) After a factor-loading cutoff level of 0.5 was employed, a three-factor structure resulted, with sixteen of the seventeen criteria loading at that level. The results of the factor analysis are shown in Table III. The first factor includes all criteria related to the information product of the system (i.e., accuracy, timeliness and currency, adequacy, and appropriateness of information) and has been named the "Information Criteria" factor. The second factor includes those criteria that do not directly influence the use and effectiveness of the primary system product (information) but are important aspects for the continuing operation of the system (such as system security, internal control, user satisfaction, net operating costs and savings, and quality of documentation). We call this the "System Facilitating Criteria" factor. The third factor includes those criteria concerned with evaluating the consequences or impacts of the newly installed system (system's impact on users and their jobs, system's fit with and impact upon the organization, system usage, and the user friendliness of the system interface) and is termed the "System Impact Criteria" factor. The only criterion that did not load onto any of the three factors at the 0.5 level was "Quality of Programs," which was also found to be one of the least evaluated (second from the bottom) criteria in practice. While no a priori loadings were hypothesized, the factor analysis indicates that a logical structure of criteria (i.e., Information Criteria, System Facilitating Criteria, and System Impact Criteria) does exist.
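
The study does not publish its factor-analysis data or code, so the sketch below uses random stand-in ratings and generic criterion names; it only illustrates the general procedure of extracting three factors and applying a 0.5 loading cutoff to assign criteria to factors. The varimax rotation is an assumption, and the factor labels are simply those reported in Table III.

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)
    criteria = [f"criterion_{i + 1}" for i in range(17)]   # 17 evaluation criteria
    X = rng.integers(1, 6, size=(92, 17)).astype(float)    # 92 respondents, 1..5 scale (stand-in data)

    fa = FactorAnalysis(n_components=3, rotation="varimax")
    fa.fit(X)
    loadings = fa.components_.T                            # shape: (17 criteria, 3 factors)

    CUTOFF = 0.5
    factor_labels = ("Information", "System Facilitating", "System Impact")
    for j, label in enumerate(factor_labels):
        members = [criteria[i] for i in range(len(criteria))
                   if abs(loadings[i, j]) >= CUTOFF]
        print(f"{label} factor: {members}")

    unassigned = [criteria[i] for i in range(len(criteria))
                  if not any(abs(loadings[i, :]) >= CUTOFF)]
    print("Below cutoff on every factor:", unassigned)

With real response data one would expect a structure like the Information, System Facilitating, and System Impact factors of Table III; random stand-in data will not reproduce it, and which generic criterion falls into which factor here is arbitrary.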

Uses and Benefits of Evaluation

The senior information systems executives, as the major reviewers of the evaluation results and the most frequent approvers of follow-up action, were asked their opinion about the more important uses of the results. The importance of a variety of uses and benefits was measured on a five-point, Likert-like importance scale ranging from 1 for low importance to 5 for high importance. The results are presented in Table IV.

The five most important uses, in order of importance, are to verify that the installed system meets user requirements; to provide feedback to the development personnel; to justify the adoption, continuation, or termination of the installed system; to clarify and set priorities for needed modifications; and to transfer the responsibility for the system from the development team to the users. The least important use indicated is the evaluation of the systems development personnel. This finding should reassure those who may be resisting a formal system evaluation because of apprehension about its use as a personnel evaluation device. The use of the evaluation process to assess the system's development methodology and the project management method is also rather low on the importance scale, thereby indicating that systems management has not been able to conceptualize the link between development methodologies and the quality of information systems produced. The results of a factor analysis on the uses and benefits of evaluation variables were inconclusive.

Inhibitors of Evaluation

All respondents (including those who did not perform evaluations) were asked to rate reasons for not performing evaluations. The reasons were rated on a five-point scale from "Very Unlikely to Inhibit Evaluation (1)" to "Very Likely to Inhibit Evaluation (5)." As shown in Table V, the reason most likely to inhibit evaluation was the unavailability of users to spend time on the evaluation activities. This, along with the unavailability of qualified personnel and management's perception of inadequate benefits from evaluation, constituted the greatest inhibitors of evaluation efforts. The IS executives did not seem to feel that the lack of evaluation methodologies and the lack of agreement on evaluation criteria were likely to hinder post implementation evaluation.

After a factor-loading cutoff of 0.5 was employed, a factor analysis (Table V) of the inhibiting variables resulted in a two-factor structure, with four of the seven variables loading at that level. The first factor, which we term "Evaluator Availability," included the variables "users not available to spend time on evaluation" and "project personnel reassigned; not available for evaluation." The second factor, "Evaluation Criteria and Methods," included two relatively weak inhibitors: "the lack of an appropriate methodology" and "the lack of agreement on evaluation criteria."

DISCUSSION

This study investigated the current practices in the post implementation evaluation of computer-based information systems. The results of the study indicate that 79 percent of the organizations surveyed are currently performing post implementation evaluations of some or most of their installed CBIS. Only 30 percent of the organizations, however, evaluate a majority (75 percent or more) of their CBIS, whereas 26 percent evaluate between 25 percent and 49 percent of their CBIS. This finding is consistent with Hamilton's earlier findings [12, 13] that post implementation evaluation is performed on only a small fraction of the systems developed.

Among the organizations that perform post implementation evaluations, most evaluations are performed either just before or just after system cut-over and installation. This may reflect the high importance attached to project cut-over and close-out uses of the evaluation process, such as verification that system requirements are met by the installed system, justification of the adoption or termination of the installed system, clarification and priority setting for further modifications, and transfer of responsibility for the installed system to the user. Only 18 percent of the organizations perform systems operations PIRs (six or more months after installation [12]) with the primary intention of assessing and improving the systems product rather than closing out the development project.

The view that, in most cases, evaluation could be a project close-out device is further supported by the finding that the major participants in evaluation are the members of the systems development team. As the developers are usually interested in finishing up the current project so that they can move on to the next set of development projects, the closing out of the current project could be a motivation for performing evaluation.

The research findings reveal that much of evaluation is performed and managed by the members of the systems development team. These are the people who have the most say in determining evaluation criteria and evaluation methodology. Since the design ideals and the values of the developers are instrumental in shaping the system design and the systems development process [14, 18], it is unlikely that an evaluation managed and performed by the development team will discover any basic flaws in the process or the product of design.

Nonetheless, both user managers and systems development managers participate in evaluation and are the major stakeholders who review the results of evaluation and approve follow-up action. As long as this participation is substantive, some of the concerns about the bias of developer-conducted evaluation may be mitigated. Though internal-audit groups have not made inroads as major participants in performing evaluations, they help to determine criteria and to review results.

The most frequently evaluated criteria include evaluations of the information product (accuracy, timeliness, adequacy, and appropriateness of information), user satisfaction with the system, and internal controls. Not surprisingly, reflecting the current value biases of systems developers [14, 18], socio-technical factors such as the system's impact on the users and the organization were among the least evaluated criteria. Finally, two criteria, quality of programs and the quality and completeness of system documentation, which are usually emphasized in both practitioner and computer science literature as being important to future operations and maintenance of the system, are also among the least frequently evaluated criteria. This could reflect the use of evaluation primarily as a responsibility transfer device and as a method for the justification and adoption of the installed system.

It seems that at least two primary stakeholders in the evaluation process, i.e., the systems development team and systems management, use evaluation primarily as a means of closing out the systems project and disengaging from the system. The most important uses of evaluation results included the verification that the installed system met requirements; the justification of the adoption, continuation, or termination of the new system; the clarification and prioritization of further modifications; and the transfer of system responsibility to the user. All of these activities are important for closing out the development project. The use of evaluation results as a feedback device for improving future development and project management methods and for evaluating (and improving) the systems development project personnel was found to be unimportant, thereby reinforcing the conclusion regarding the primary use of evaluation as a disengagement strategy.

The factors most likely to inhibit evaluation were found to be the unavailability of two of the major participants in the systems development process--the users and the qualified project team personnel. This again suggests that once the system is completed and implemented, the major stakeholders are interested in getting on with other work and use evaluation as a milestone for completion.

It was also felt by the respondents that corporate management did not perceive adequate benefits from evaluation. Hamilton [12, pp. 133-137] has empirically demonstrated that the behavioral intention to perform post implementation reviews is strongly influenced by the evaluators' normative beliefs about what salient referents think should be done and the motivation to comply with them. Since corporate management is a strong salient referent and since it does not perceive adequate benefits from evaluation, evaluation is less likely to be performed as an evaluative rather than a close-out device.

Finally, the lack of agreement on evaluation criteria and the lack of appropriate methodology for evaluation were not found to be major inhibitors of evaluation. Given the current controversy in the information systems literature regarding appropriate criteria, measures, and methods for information systems evaluation, the finding that these factors do not inhibit evaluation is surprising. It is possible that, given the close-out nature of evaluation, the evaluators have given only superficial consideration to the substantive issues that make the criteria and methods controversial.

CONCLUSIONS AND RECOMMENDATIONS

The study findings point to three key conclusions. First, it appears that the major reason for performing post implementation evaluation is the formalization of the completion of the development project, whereby the deliverable (i.e., the installed system) is verified against specifications, any unfinished business, such as further modifications, is noted, and the responsibility for the system is transferred to the users. Evaluation then becomes a major tactic in a project disengagement strategy for the systems development department. Evaluation does not seem to serve the purpose of either long-term assessment of system impact and effectiveness or of providing feedback to modify inappropriate development and project management practices. Nor does it serve to counsel and educate ineffective project team personnel.

This conclusion seems to be reinforced by the finding that the majority of evaluations are performed either just before, at, or after system cut-over, and only in 18 percent of the organizations are true systems operations PIRs performed. Given the limited objectives of evaluation, it is doubtful that management and the users perceive adequate benefits from this exercise. This could be the reason for the study finding that the top inhibitors of evaluation include the unavailability of users and development personnel for evaluation activities and management not perceiving adequate benefits from evaluation.

Second, much of evaluation is managed and performed, and evaluation criteria and methods are determined, by those who have designed the system being implemented. Since the designers would already have built into the system most of the factors they consider important, such an evaluation is not likely to uncover any basic flaws in the product or the process of systems design.

Third, the most frequently evaluated criteria seem to be information quality criteria (accuracy, timeliness, adequacy, and appropriateness) along with facilitating criteria such as user satisfaction and attitudes and internal controls. Socio-technical criteria, such as the system's impacts on the users and the organization, as well as criteria bearing on the long-term maintenance and growth of the system (system documentation and program quality), are evaluated much less frequently.

These conclusions suggest that post implementation evaluations are being performed for the limited, short-term reason of formalizing the end of the development project and may not provide the more important long-term, feedback-improvement benefits of the evaluation process.

In order to realize these benefits, the evaluators need to take a longer-term view of the system and its development process. In such a view, long-term impacts (such as the system's impact on the organization and the system users and their effectiveness) would be considered, and long-term viability (in terms of cost savings, security, maintenance, program quality, and documentation quality) would be assessed. This would require that corporate and systems management formally recognize the role of post implementation evaluation as a tool for providing feedback about both the systems development product and the development process, and realize that this feedback is invaluable for improving both the product and the process. The results of the evaluation process can then be reviewed to see which of these long-term objectives have been addressed by the evaluation.

As this evaluation would be looking at the longer-term impacts and viability, the formal evaluation should be performed when the system has had a chance to settle down and its impacts are becoming visible through continued operation. Depending on the scope of the system, this may be somewhere between three and twelve months after the system cut-over.

Next, in order to ensure the independence of evaluation and a more global set of criteria than those conceived by the developers, evaluation should be managed and performed by people other than the members of the development team. The mechanism for performing post implementation evaluation may either be an independent quality assurance group or a multi-stakeholder evaluation team led by the users.

An evaluation group independent of the development team does not preclude the possibility of the developers contributing to the evaluation process. The existence of a formal quality assurance group will also reduce the effect of two of the major inhibitors, i.e., the unavailability of the users and the development personnel for evaluation.

Finally, a longer-term, feedback-improvement-oriented post implementation evaluation, with the accompanying system and the development process improvement benefits, would be helpful in gaining corporate management support for evaluations, thereby increasing the possibility of more substantive and meaningful evaluations being performed. Unless the above recommendations are implemented, post implementation evaluations will continue to serve the limited purpose of closing out the development project.

(1) For the sake of brevity, the detailed statistics for corporate senior management, external auditors and MIS staff other than the project team members are not included in Table I. Significant statistics for these stakeholder groups are presented in the accompanying narrative.

(2) Factor analysis is a statistical technique used to discover which of the elements or variables in a sample population vary together and therefore may be candidates for grouping together into groups called factors. For an introduction to and an explanation of factor analysis see [15].

REFERENCES

[1] Ball, L., and Harris, R. SMIS members: A membership analysis. MIS Q. 6, 1 (Mar. 1982), 19-38.

[2] Benbasat, I., Dexter, A., and Mantha, R. W. Impact of organizational maturity on information system skill needs. MIS Q. 4, 1 (Mar. 1980), 21-34.

[3] Brancheau, J.C., and Wetherbe, J. C. Key issues in information systems management. MIS Q. 11, 1 (Mar. 1987), 23-45.

[4] Cooke, J. E., and Drury, D. H. Management planning and control of information systems. Society of Management Accountants Research Monograph, Hamilton, Ontario, 1980.

[5] Dickson, G. W., et al. Key information system issues for the 1980s. MIS Q. 8, 3 (Sept. 1984), 135-153.

[6] The Diebold Group Inc. Key measurement indicators of ADP performance. Doc. No. S25, Diebold Research Program, New York, N.Y., 1977.

[7] Dolotta, T. A., et al. Data Processing in 1980-1985: A Study of Potential Limitations to Progress. John Wiley and Sons, New York, 1976.

[8] Domsch, M. Effectiveness measurement of computer-based information systems through cost-benefit analysis. In Design and Implementation of Computer-Based Information Systems. N. Szyperski and E. Grochla, Eds., Sijthoff & Noordhoff, Alphen aan den Rijn, The Netherlands, 1979.

[9] Dumas, P. J. Management Information Systems: A dialectic theory and the evaluation issue. Ph.D Dissertation, Univ. of Texas, Austin, 1978.

[10] Green, G. I., and Keim, R. T. After implementation what's next? Evaluation. J. Syst. Manage. 34, 9 (Sept. 1983), 10-15.

[11] Hamilton, J. S. EDP quality assurance: Selecting applications for review. In Proceedings of the Third International Conference on Information Systems (Ann Arbor, Mich., Dec. 13-15). ACM/SIGBDP, New York, 1982, pp. 221-238.

[12] Hamilton, J. S. Post Installation systems: An empirical investigation of the determinants for use of post installation reviews. Ph.D. Dissertation, Univ. of Minnesota, 1981.

[13] Hamilton, J. S. A survey of data processing post installation evaluation practices. MIS Research Center Working Paper, MISRC-WP-80-06, Univ. of Minnesota, 1980.

[14] Hedberg, B., and Mumford, E. Design of computer systems: Man's vision of Man as an integral part of the system design process. In Human Choice and Computers. E. Mumford and H. Sackman, Eds., North-Holland, Amsterdam, The Netherlands, 1975.

[15] Kim, J.-O., and Mueller, C. W. Introduction to Factor Analysis. Sage University Press, Beverly Hills, Calif., 1978.

[16] Kleijnen, J.P.C. Computer and Profits. Addison-Wesley, Waltham, Mass., 1980.

[17] Kriebel, C. H. The evaluation of Management Information Systems. IAG J. 4, 1 (1971), 1-14.

[18] Kumar, K., and Welke, R. J. Implementation failure and system developer values: Assumptions, truisms and empirical evidence. In Proceedings of the Fifth International Conference on Information Systems (Tucson, Ariz., Nov. 28-30). ACM/SIGBDP, New York, 1984, pp. 1-12.

[19] Land, F. Evaluation of systems goals in determining a strategy for a computer-based Information System. Comput. J. 19, 4 (1978), 290-294.

[20] Matlin, G. L. What is the value of investment in Information Systems? MIS Q. 3, 3 (Sept. 1979), 5-34.

[21] Mautz, R. K., et al. Senior management control of computer-based Information Systems. Research Monograph of the Research Foundation of Financial Executives Institute, New Jersey, 1983.

[22] Norton, R. L., and Rau, K. G. A Guide to EDP Performance Management. QED Information Sciences, Wellesley, Mass., 1978.

[23] Powers, R. F., and Dickson, G. W. MIS project management: Myths, opinions, and reality. Calif. Manage. Rev. 15, 3 (Spring 1973), 147-156.

[24] Scriven, M. The methodology of evaluation. In Perspectives of Curriculum Evaluation. R. W. Tyler, R. M. Gagne, and M. Scriven, Eds., AERA Monograph Series on Curriculum Evaluation, Vol. 1, Rand McNally and Co., Chicago, 1967, pp. 39-83.

[25] Seibt, D. User and specialist evaluations in system development. In Design and Implementation of Computer-Based Information Systems. N. Szyperski and E. Grochla, Eds., Sijthoff and Noordhoff, Alphen aan den Rijn, The Netherlands, 1979.

[26] Sollenberger, H. M., and Arens, A. A. Assessing Information Systems projects. Manage. Account. (Sept. 1973), 37-42.

[27] Waldo, C. Which departments use the computer best. Datamation 27, (Mar. 1980), 201-202.

[28] Weiss, C. H. Evaluation Research--Methods for Assessing Program Effectiveness. Prentice-Hall, Englewood Cliffs, N. J., 1972.

[29] Welke, R. J. Information Systems effectiveness evaluation. Working Paper, Faculty of Business, McMaster University, Hamilton, Ontario, Canada.

[30] Zmud, R. W. Information Systems in Organizations. Scott, Foresman and Company, Glenview, Ill., 1983.

CR Categories and Subject Descriptors: K.6.1 [Management of Computing and Information Systems]: Project and People Management--life cycle; management techniques; system development; K.6.4 [Management of Computing and Information Systems]: System Management--management audit; quality assurance.

General Terms: Management

Additional Key Words and Phrases: Information Systems evaluation, post implementation evaluation, post implementation review, post implementation audit.

KULDEEP KUMAR is an assistant professor of Computer Information Systems in the College of Business, Georgia State University. He is a member of IFIP WG 8.2 and the IEEE Computer Society and has served on several program committees for IFIP and the International Conference on Information Systems. His current research interests include management of information systems, information systems planning, information systems development methodologies, and methodology engineering. Author's Present Address: Computer Information Systems, Georgia State University, University Plaza, Atlanta, GA 30303

Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission.
COPYRIGHT 1990 Association for Computing Machinery, Inc.