
Evaluating educational system designs.

Keywords: evaluation; education; system design; system evaluation

INTRODUCTION

Background

How adequate are the more traditional evaluation theories and methodologies for the task of evaluating new system designs? Created in response to the large-scale development of social programs in the 1960s, current evaluation practice does not adequately serve the information needs of school districts attempting to bring about educational change through system design. Over the past half century, an extraordinarily impressive literature concerning evaluation theory, issues, definitions, and problems has been produced (McGlaughlin and Phillips, 1991). Evaluation theory initially focused on finding effective solutions to social problems, and that emphasis has persisted over the past several decades. Evaluation theory development has been strongly influenced by an idealized problem-solving sequence: (a) problem identification, (b) generating and implementing alternatives to reduce symptoms, (c) evaluating the alternatives, and (d) adopting one or more of the most satisfactory (Shadish et al., 1991).

Not surprisingly, the evaluation thinking that emerged differed substantially in how evaluation theorists viewed such issues as the purposes of evaluations, the audiences for evaluation findings, the roles of the evaluator and of program stakeholders in designing and conducting the evaluation, the extent to which goodness or value can be assigned by an external factor, the nature of the knowledge produced by evaluations, and the data collection methods employed by evaluators. A number of comprehensive evaluation-planning frameworks and models, applicable to educational settings, were developed. These frameworks were classified either as judgmental strategies, represented by Lee J. Cronbach, Michael Scriven, and Robert Stake, or as decision-making strategies, represented by Daniel L. Stufflebeam, Marvin Alkin, Robert Hammond, and Malcolm Provus (Worthen and Sanders, 1973).

The critical omission of the various evaluation models produced is that they are largely silent concerning the evaluation of a system as a system. The notions of evaluating a system for its contextual appropriateness or its overall performance are missing. Instead, the concern is limited to the amelioration of problems possessed by existing systems or to selecting the best program or practice among competing alternatives. In short, they ignore system concepts. There is little acknowledgment that in open systems everything affects everything else, that if one thing is altered other conditions change in response, and that direct causal relationships seldom exist. One of several accepted definitions of evaluation, as it is commonly practiced, makes the point: evaluation is `determining the worth of a thing. It includes obtaining information for use in judging the worth of a program, product, procedure, or objective, or the potential utility of alternative approaches designed to obtain specified objectives' (Worthen and Sanders, 1973). This paper suggests that both evaluation theory and practice need to be expanded to include attention to system theory concepts and to those pertaining to the design of systems.

The Impact of System Design Thinking

The thinking and particular formulations that have contributed to traditional evaluation theory and practice have recently been challenged. For example, and with substantial implications for the evaluation of system designs, `naturalistic inquiry' has been proposed as a more appropriate way of thinking about and conducting research (Lincoln and Guba, 1985). Lincoln and Guba's construction of the naturalist paradigm includes the following axioms: (a) realities are multiple, constructed, and holistic; (b) the knower and the known are interactive and inseparable; (c) only time- and context-bound hypotheses are possible; (d) all entities are in a state of mutual simultaneous shaping, so that it is impossible to distinguish causes from effects; (e) inquiry is value-bound.

Perhaps the strongest force, however, in emphasizing the critical role that systems thinking and system design ought to play in the design of evaluation studies has been the work of philosophers and theorists who have developed the new intellectual technology dealing with the design of social systems. This relatively new technology, a decision-oriented disciplined inquiry, is the manifestation of `open-systems' thinking and `soft-systems approaches' (Banathy, 1996). Banathy's review of approaches to `design evaluation' reveals an increasing number of system thinkers and designers concerned with (a) evaluation as contributing to stakeholder decision-making about alternative design features (Nadler, 1981; Ackoff, 1981); (b) evaluation as helping establish `acceptability zones' for design solutions that must confront multiple perspectives (Jones, 1980); (c) evaluation as `argumentation' (Rittel and Webber, 1984); (d) evaluation of design solutions through use of system criteria (Checkland and Scholes, 1990); (e) evaluation as a trade-off analysis at key decision points (Warfield, 1990); and (f) evaluation and design as complementary parts of the same process (Rowland, 1994).

Social system design and its evaluation is in sharp contrast to more traditional social planning. Social system design seeks to understand a problem situation as a system of interconnected, interdependent, and interacting issues; and seeks to create a design as a system of interconnected, interdependent, interacting, and internally consistent solution ideas. System designers envision the entity to be designed as a whole. A systems view suggests that the essential quality of a part of a system resides in its relationships with, and contribution to, the whole (Banathy, 1992).

Integration, then, is the sine qua non of a system design.

There are many other, equally important, characteristics of the `system design' technology; for example: (a) the importance of multiple perspectives and of how these might be integrated; (b) the importance of creating an `idealized image' of the system-to-be that reflects the core values and needs of the system stakeholders, one that will serve as a target for design and from which, over time, a feasible and implementable operating design is extracted.

A FRAMEWORK FOR EVALUATING SYSTEM DESIGNS

The main section of the paper is divided into three parts. The first part presents some general guidelines related to evaluating system designs. The second part describes a framework for thinking about evaluation as it applies to system designs. The final part discusses some limitations and complications that need to be recognized and taken into account by system design evaluators.

Guidelines for System Design Evaluation

A set of guidelines is proposed below. These are not seen as requirements. Instead, they should be viewed as `gentle guidelines' for consideration. Their purpose is to help ensure that the evaluation models to be utilized will respond to the complexity, the fullness, and the importance of the enterprise being evaluated.

1. System design evaluations can be directed toward (a) the appropriateness and worth of existing systems; (b) the appropriateness of new systems being designed but not yet implemented; (c) the implementation process for new systems; and (d) the `over time' appropriateness and worth of new systems.

2. The task of evaluating system designs begins with, is integrated with, and is concurrent with the system design process itself. This means that most aspects of a system design evaluation are accomplished with the full involvement of system stakeholders. The design needs to be responsive to stakeholder interests.

3. Social systems are in constant flux. Independent variables and, in some cases, dependent variables are vulnerable to change. Thus, capturing, understanding, and making sense of the process of change should have at least equal status to assessing effects.

4. When system components are the objects of inquiry, evaluation activities should address the performance of the component itself, the contributions of the component to the anticipated performance of the larger system, and the inevitable interactions that occur between the component and the rest of the system. Components that have no effect on a larger system are not really a part of that system.

5. Performance indicators for the `system as system' are needed in addition to indicators for component performance.

6. The complexity of the evaluation design needs to match the complexity of the system design itself at succeeding stages of development. As system design proceeds, the levels of specificity, and therefore complexity, tend to increase. Evaluation issues (questions, variables, activities) should be consistent with the information needs of designers at particular design and/or implementation stages.

7. Organizational commitment to evaluation through stakeholder involvement is absolutely necessary. To the extent possible, system members should be involved in, if not responsible for, evaluation design, planning, and conduct. Building the capacity for this function is crucial to developing the habits and skills of disciplined inquiry and to creating the conditions for organizational learning. The evaluation professional becomes the trainer, facilitator, and quality assurer of the evaluation process.

8. The organization needs to be readied for participation in an evaluation process that (a) engages in the complexities of system thinking, and (b) involves the use of multiple perspectives, an extensive knowledge base, large amounts of data, and extensive two-way communication procedures.

Framework for Thinking about Evaluation of System Designs

The first guideline given above states that system design evaluations can be directed toward four different targets: the appropriateness and worth of existing systems, the appropriateness of new systems being designed but not yet implemented, the implementation process, and the continued appropriateness and worth of new systems over time as experience, inquiry, and changing conditions provide new information.

The most basic and most difficult issue for system evaluators is how to define the system so that it can be evaluated. The framework below describes a set of system areas for creating a definition and toward which the design of an evaluation can be directed, regardless of evaluation purpose. The framework is based primarily on the integration of three lenses for viewing social systems (Banathy, 1992) and a set of features defined as common to all social systems (Churchman, 1971; Checkland, 1981). The framework, following Banathy, is organized into three systemic areas.

The System's Purposes, Functions, and Structure

This lens defines a system in terms of why it exists, what work is to be accomplished in carrying out its purposes, and how it will be organized for the work. These aspects of a system represent its most basic and core definition. The system features that relate to this area are as follows:

1. Mission and Purposes. Social systems have an ongoing mission and a set of purposes. General purposes, although not directly measurable, provide the starting point for the design of the entire system and for its evaluation. From these purposes, a set of system functions is selected that will ensure that purposes are being met.

2. Components. The system has components which are themselves systems having all the properties of the larger system. These components interact to some degree. The components are purposeful and are selected to perform the primary system functions and deliver key services to its clients. Organizational capacity building, resource development, monitoring, and evaluation are common functions of educational organizations for which components must be developed. The identification and organization of the key components help define the system.

3. Measures of Performance. Systems have measures of performance that are used to signify progress or regress in accomplishing their purposes. With schools, the primary measures of performance focus on the quality and effects of their primary services to clients; e.g., instruction and learning. There are other kinds of system performances and related measures that are useful in sustaining system performance; e.g., cost-effectiveness of operations, staff development, internal and external communication processes and outcomes, and stakeholder support. But these tend to be enabling of system performance and are not primary system properties themselves.

4. Designers and Decision-Makers. All systems have designers and decision-makers. The definition of this feature states who `owns the system', who has authority to change or redesign the system's major features, and what the decision-making process is for making design changes and taking action if the measures of performance are not matching expectations.

The Environment of the System

The environment of the system is the larger context in which the system exists, performs its functions, and delivers its services. Social systems are always embedded within larger environments and have boundaries (which may be physical or communicative) that set them apart from the rest of the environment. Generally, it is the more immediate environment with which the system must interact. Primary features related to the environment are:

1. Interactions with the environment. With regard to interactions with the environment, it is the intended nature, intensity, and frequency of the interactions that help define the system. Interactions might consist primarily of exchanges of information and resources. Alternatively, cooperative or collaborative efforts might be developed. At another level, some key functions of the desired educational system might be merged with other systems having similar purposes. The more intense and frequent the interactions, the more blurred become the boundaries between the system and its environment and the more likely fundamental systemic changes will occur.

2. System clients. Systems need to clearly define the clients whom the system is designed to serve. When a traditional client base is expanded or contracted, the system needs to be altered, perhaps even redesigned, to accommodate the changes. Changes in the client base may create a need to rethink purposes, organizational structures, services, and the relationships of the system with its environment.

The Process Lens

The `purpose, functions, structure' lens was depicted as revealing a `still-picture' model whereas the `process' lens is depicted as revealing a `moving-picture' model. The functions-structure lens is concerned with what the system is, what it does, and how it is organized. The process lens is concerned with how the system works through time in order to accomplish its mission. Features include:

1. Communication. The system communicates both externally with the environment and internally so as to bring about expected system performance. The appropriateness, clarity, timeliness, and quality of communication are critical measures of system health and potential survival.

2. Instruction. The procedures, practices, and arrangements utilized by the system to deliver services to its clients and bring about system performance constitute a key design feature of the system.

3. Monitoring and evaluation. Utilizing the defined measures of performance, the system observes itself, collects, analyzes, communicates, and uses information to evaluate performance and make needed adjustments.
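Taken together, the features above amount to a structured definition of the system from which an evaluation can be planned. As a purely illustrative aid (not part of the original framework), the sketch below shows one way such a definition might be recorded; all class and field names are hypothetical assumptions introduced here, and a design team would derive its own categories with stakeholders.

```python
# Purely illustrative sketch (not from the paper): one way to record the
# framework's system features as a structured definition for evaluation planning.
# All class and field names are hypothetical.
from dataclasses import dataclass, field
from typing import List


@dataclass
class MeasureOfPerformance:
    name: str             # e.g. "growth in student learning"
    system_level: bool    # True for 'system as system' indicators (guideline 5)


@dataclass
class Component:
    name: str                       # e.g. "instructional program"
    functions: List[str]            # primary system functions it performs
    interacts_with: List[str] = field(default_factory=list)  # components it affects


@dataclass
class SystemDefinition:
    # Lens 1: purposes, functions, structure
    mission: str
    purposes: List[str]
    components: List[Component]
    measures: List[MeasureOfPerformance]
    designers_and_decision_makers: List[str]
    # Lens 2: environment
    environment_interactions: List[str]
    clients: List[str]
    # Lens 3: process
    communication_processes: List[str]
    service_delivery_processes: List[str]
    monitoring_and_evaluation: List[str]
```

Making component interactions and system-level measures explicit fields echoes guidelines 4 and 5 above; an evaluation design can then be checked against each recorded feature rather than against client outcomes alone.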

APPLYING THE FRAMEWORK

The framework identifies a set of features which system designers need to define. These definitions, in turn, form the basis for designing an evaluation. Each feature represents a characteristic of the system that may or may not pass muster using criteria that will have been generated for them. In the process of designing the various system features and associated criteria, system stakeholders will have ideally generated and used large amounts of information relative to each for the purpose of choosing among possible designs. Thus, evaluation begins with the design process.

The primary basis for judging the appropriateness of a system design, whether an existing one, or one that is aspired to, is `goodness of fit'. Goodness of fit is an overall measure that can be addressed by such evaluation questions as:

1. How closely do relevant design features, e.g., mission, purposes, system clients, respond to an aggregate of stakeholder needs and interests? Is the design authentic?

2. To what extent is the design consistent with the relevant research and practice knowledge base, e.g., instruction? What is the balance between personal values and preferences of stakeholders and what is believed to be known about learning?

3. Is the design ethical? Are important client needs addressed? Are some clients rewarded at the expense of others? Are stakeholders, including staff, treated fairly, and will they continue to be?

4. Is the design systemic? Are all of the features defined? Have any features been defined in such a way that their contribution to the larger system functioning and sustainability is vague or problematic?

5. Is the design implemented (existing system) or implementable (new design)? What needs to be done in order to ensure that the design can be implemented? The choices tend to be (a) change the design to facilitate implementation or (b) spend time and resources developing system capability/readiness for implementation.

6. How well does the system perform? Are performance measures being met with reference to clients? How well is the system performing in terms of `system-level' indicators?

These questions are appropriate for use when either an existing system is being evaluated or a new system design is being created. As was stated above, the framework may also be used to evaluate an implementation process by using the overall design and its features as developmental targets and assessing progress towards them.
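To suggest how the six questions might be turned into a working instrument, the sketch below (an assumption of this edit, not an instrument proposed in the paper) treats each question as a criterion to be rated and summarizes the overall fit; the rating scale and the aggregation rule are hypothetical and would, in practice, be negotiated with stakeholders.

```python
# Hypothetical sketch (not an instrument from the paper): the six goodness-of-fit
# questions recast as criteria to be rated, with a deliberately simple summary rule.
from typing import Dict

CRITERIA = [
    "responsive to stakeholder needs and interests (authentic)",
    "consistent with the research and practice knowledge base",
    "ethical treatment of clients and staff",
    "systemic: all features defined and mutually consistent",
    "implemented or implementable",
    "performance measures met at client and system level",
]


def goodness_of_fit(ratings: Dict[str, int]) -> str:
    """Summarize ratings on a hypothetical 1 (poor fit) to 4 (strong fit) scale."""
    missing = [c for c in CRITERIA if c not in ratings]
    if missing:
        return f"incomplete: no rating for '{missing[0]}'"
    # Any single fundamental mismatch flags the design as a whole.
    if min(ratings[c] for c in CRITERIA) < 2:
        return "poor fit: at least one criterion shows a fundamental mismatch"
    return "acceptable fit: no criterion shows a fundamental mismatch"


# Example with hypothetical ratings supplied by stakeholders.
print(goodness_of_fit({c: 3 for c in CRITERIA}))
```

The conservative rule, under which one fundamental mismatch is enough to flag the whole design, is a design choice reflecting the earlier point that integration is the sine qua non of a system design.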

LIMITATIONS AND COMPLICATIONS

The barriers to evaluating system designs are closely related to the system design process itself. The most problematic issues are described briefly below:

1. Evaluation must be valued by the organization so that the use of data for decision-making becomes routine. Organizations must provide the necessary time and resources and be committed to organizational learning. People within organizations need to become competent to engage with the evaluation process because experience suggests that the more traditional role of the evaluator, acting in isolation from the organization, does not necessarily lead to high information use.

2. The evaluation of a system design requires that an explicit design exists; i.e., its features are known to stakeholders. Given the complications of designing social systems, the future of system design evaluation literally depends on the willingness of schools to engage in relatively new behavior. The various difficulties perceived by educational administrators represent a major challenge. Clearly, a critical design task will be for stakeholders to design an `inquiring system' capable of coping with the traditional barriers to change (Jenks, 1995).

3. The principle of `requisite variety' (Ashby, 1958) states that a control mechanism's capacity for control cannot exceed its capacity as a channel of communication. In other words, a control mechanism must be at least as complex as, and matched to, the system it wishes to control. If the control system has less variety than the system, then it can control only part of the system, and such limited control may lead to unanticipated consequences in other parts of the system. The concept of `requisite variety' underlines the importance of developing evaluation designs that are conceptually matched to the system designs being evaluated. As system designs advance in complexity (as a result of progress from conceptual testing), evaluation designs must keep pace (a minimal numerical illustration follows this list).

4. Most of the challenges faced by system evaluators are the same ones with which all evaluators must contend. In general, the challenge is to design and conduct a form of disciplined inquiry that meets professional standards and provides useful and credible information. But system evaluators have additional challenges. There is the need to examine system variables that are not well understood or easily observed. Defining and measuring the effects of component interactions, the contributions of components to system-level performance, or the degree and quality of systemic integration, all acknowledged to be important characteristics of systems, are very complicated. Understanding and measuring them will require considerable effort and cumulative experience on the part of system thinkers and practitioners.
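To make the requisite-variety point in item 3 concrete, the following minimal sketch uses hypothetical counts (not figures from the paper) for the distinct system states an evaluation must distinguish and the distinct responses the evaluation design supports, and computes the combinatorial lower bound implied by Ashby's law.

```python
# Minimal numerical illustration of requisite variety (counts are hypothetical,
# not figures from the paper).
from math import ceil

system_states = 12        # distinct system conditions the evaluation must distinguish
evaluation_responses = 4  # distinct readings/actions the evaluation design supports

# Combinatorial form of Ashby's law: attainable outcome variety is at least
# ceil(states / responses); full discrimination needs responses >= states.
minimum_outcome_classes = ceil(system_states / evaluation_responses)

print(f"{evaluation_responses} responses can at best collapse {system_states} "
      f"states into {minimum_outcome_classes} or more outcome classes.")
```

Read in evaluation terms, an evaluation design with too few distinct measures or decision options cannot discriminate among all of the system conditions that matter, which is the sense in which it must be `matched to' the design being evaluated.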

CONCLUSIONS

The problem-centered and program-oriented evaluation theory, models, and methodologies created in the 1960s and 1970s to facilitate policy decision-making are not providing the kind of information and level of understanding needed to make changes in our educational systems. The recent development of a theory of system design and an accompanying technology offers a foundation for the creation of more suitable models for evaluating systems and system designs. Moreover, a consideration of the nature of knowledge as suggested in the tenets of `naturalistic inquiry' and its associated methodology provides methods more suitable for understanding complex systems and sustaining systemic changes.

REFERENCES

Ackoff, R. L. (1981). Creating the Corporate Future, Wiley, New York.

Ashby, R. (1958). Requisite variety, and its implications for the control of complex systems. Cybernetica 1(2), 1-17.

Banathy, B. H. (1991). Systems Design of Education, Educational Technology Press, Englewood Cliffs, NJ, pp. 1-20.

Banathy, B. H. (1992). A Systems View of Education: Concepts and Principles for Effective Practice, Educational Technology Publications, Englewood Cliffs, NJ.

Banathy, B. H. (1996). Designing Social Systems in a Changing World, Plenum Press, New York.

Checkland, P. (1981). Systems Thinking, Systems Practice, Wiley, New York.

Checkland, P., and Scholes, J. (1990). Soft Systems Methodology, Wiley, New York.

Churchman, C. W. (1971). The Design of Inquiring Systems, Basic Books, New York.

Jenks, C. L. (1995). Educational systems design: making it more user friendly. Systems Practice, June 1995.

Jones, C. J. (1980). Design Methods: Seeds of Human Futures, Wiley-Interscience, New York.

Lincoln, Y., and Guba, E. (1985). Naturalistic Inquiry, Sage, Beverly Hills, CA.

McGlaughlin, M., and Phillips, D. C. (eds) (1991). Evaluation and Education: At Quarter Century, Ninetieth Yearbook of the National Society for the Study of Education, University of Chicago Press, Chicago, IL.

Nadler, G. (1981). The Planning and Design Approach, Wiley, New York.

Rittel, H., and Webber, M. (1984). Dilemmas in a general theory of planning. Policy Sciences, no. 4.

Rowland, G. (1994). Designing and evaluating. Educational Technology, January.

Shadish, W. R., Cook, T. D., and Leviton, L. C. (1991). Foundations of Program Evaluation: Theories of Practice, Sage, Beverly Hills, CA.

Warfield, J. (1987). Features relevant to effective system design. A paper presented at the annual meeting of the International Society of Systems Sciences.

Warfield, J. (1990). A Science of General Design, Intersystems, Salinas, CA.

Worthen, B. R., and Sanders, J. R. (1973). Educational Evaluation: Theory and Practice, Charles A. Jones, Worthington, OH.

C. Lynn Jenks, Correspondence to: C. L. Jenks, International Systems Institute, Buck Institute for Education, PO Box 734, Stinson Beach, CA 94970, USA
