Stakeholder utility: perspectives on school-wide data for measurement, feedback, and evaluation.

Abstract

More than 10,000 schools in the United States have adopted the multi-tiered model of behavioral and academic supports known as school-wide positive behavior interventions and supports (PBIS). Schools and districts adopting, implementing, and sustaining PBIS are charged with collecting and disseminating data generated by and related to students, parents, teachers, and administrators. Additionally, researchers and technical assistance providers collect school- and district-level data to measure outcomes related to PBIS implementation. The interests and needs of this broad range of stakeholders affect the usefulness of each piece of data for each stakeholder group. This paper presents a construct called stakeholder utility, driven by stakeholder role and purpose, which may help stakeholders design and appraise measures to be used for assessment, evaluation, and research.

**********

More than a decade has passed since Kauffman (1996) wrote about the failure of educational research to make much impact on what happens in the classroom. Other authors (Robbins, Collins, Liaupsin, Illback, & Call, 2003) have noted a twenty-year lag between the development of a best practice and its adoption into actual practice. More recently, however, school-wide positive behavior interventions and supports (PBIS) has been credited with bridging the gap between research and practice with astounding speed (Walker, 2004).

The National Technical Assistance Center on Positive Behavioral Interventions and Supports reports that more than 10,000 schools have received technical assistance to implement this multi-tiered model (http://www.pbis.org/). Closely related to the increasing adoption of PBIS is the growth in attendance at the international conference of the Association for Positive Behavior Support over the past eight years. Additionally, a national conference focused specifically on implementation of PBIS is hosted in Illinois each year, and across the United States, regional and state PBIS conferences are becoming more common as the model is more widely disseminated in schools and districts. The growth in adoption of PBIS practices has led to the need for multiple venues targeting various stakeholder groups.

The widespread acceptance of PBIS has been attributed to its efficacy of implementation, intelligence of design, and availability of technical support, among other factors (Walker, 2004). Hallmarks of the three-tiered PBIS model include (a) a pre-implementation agreement of at least 80% of building-level staff; (b) three distinct "levels" of support for students and decision rules for identifying student needs across the three levels; (c) practices which are proactive, based on principles of applied behavior analysis, and supported by empirical investigations (i.e., "evidence-based"); (d) the use of systems of reinforcement designed to draw attention to pro-social student behavior and reduce the focus on student misbehavior; (e) development of the "contextual fit" of the implementation to support increased community involvement, including parent and family involvement; and (f) the use of student and school-level data to make decisions about behavioral and academic practices within the school.

This last element of PBIS, student and school-level data, has offered empiricism a firm foothold in schools. The most commonly used outcome measures for assessing the impact of school-wide PBIS are reductions in student disciplinary events such as suspensions, expulsions, and office discipline referrals (ODRs). The use of ODRs, in particular, for decision making has been attributed to their ease of use and "utility for making a wide range of decisions at the school and individual level" (McIntosh, Campbell, Carter, & Zumbo, 2009, p. 101). The use of a single measure like ODRs by so many stakeholders (e.g., staff, school PBIS team, building and district administration) for a variety of purposes might seem problematic; after all, how can one measure have meaning or utility for so many, positioned as differently as they are from the data source? In this paper, we suggest that applying the construct of stakeholder utility may shed some light on how ODRs came to be a viable option for multiple evaluation and research purposes across a wide range of data users. In addition, stakeholder utility may be helpful in the development of new measures related to PBIS outcomes, implementation effectiveness, program evaluation, or research.

Stakeholders and the Playing Field of PBIS

In this paper, a construct is offered to explain the various factors affecting the degree to which a given PBIS evaluation measure (e.g., ODRs) can inform any invested group or individual in a way that would impact that group's or individual's professional performance, investment, and/or time commitment. Some of the promises, possibilities, and constraints of measuring various aspects of PBIS outcomes and products are collected here under the unifying construct of stakeholder utility. The general playing field of PBIS, in which stakeholders involved in implementation or evaluation interact with each other, places some innate restrictions on the kinds of stakeholders involved (roles) and their needs for measurement (purposes).

One way to define stakeholder utility is: the degree to which a given SWPBIS evaluation measure can serve any invested group or individual in a way that would impact that group's or individual's professional performance, investment, and time commitment. Although the definition above uses the term "degree," the stakeholder utility concept outlined in this paper does not have provisions for measurement on any kind of quantitative scale. In that way, stakeholder utility shares something in common with internal validity, social validity, or construct validity, which are all constructs designed to guide the design and execution of experimental social and educational research. A less complex way to think about stakeholder utility might be that it is how useful a given measure is for a given stakeholder and for a particular purpose. (1)

Stakeholder utility can be broken down into four factors impacting the use of measurement tools among PBIS stakeholders: role/purpose, reflexivity, stability, and contingency. Two potential applications of the construct of utility to stakeholder interests will be discussed here: descriptive and prescriptive. In a descriptive application of stakeholder utility, stakeholders identify for themselves measures already in use and map them according to the four factor areas. For the purpose of illustration, the measure of ODR data is mapped using the descriptive approach. Table 1 lists other potential data sources by level of measurement (i.e., student, classroom, school, district, and state/national levels).
Table 1
Potential sources of PBIS data.

Level of assessment   Measure

Student-level data    Test scores, DPR data, percent change data
                      (behavior), rates of engagement

Classroom data        Test scores by grade, normative engagement,
                      class climate

School-level data     PBIS implementation scores, test scores, ODRs,
                      attendance/absence, suspensions/expulsions,
                      Adequate Yearly Progress (AYP) scores, school
                      climate/safety data, time-saved measures,
                      perception data about safety, climate and
                      student behavior

District-level data   Test scores, ODRs, attendance/absence,
                      suspensions/expulsions, AYP, school
                      climate/safety data, time-saved measures.

State/National level  Implementation scores, suspensions/expulsions,
                      AYP, attendance, school climate/safety data,
                      State performance measures on targeted
                      initiatives, such as students with IEPs
                      relative to educational environment (EE) or
                      least restrictive environment (LRE).


The second application of stakeholder utility is prescriptive and can be used in planning for measurement. By identifying (1) the proposed stakeholders, (2) their roles and purposes in using the data in question, (3) their distance from the measure (reflexivity), (4) the intrinsic properties of the measure itself which allow it to be used for comparison/evaluation purposes (stability), and (5) the professional and personal rewards that exist for the use of such a measure (contingencies), the usefulness that measure will have for a given party and purpose, its stakeholder utility, can be assessed.
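
To make the prescriptive application concrete, the following minimal sketch (in Python) shows one way a planning team might record those five elements for a candidate measure. The class, field names, and example values are illustrative assumptions, not part of any published instrument.

# Illustrative sketch only: a minimal record for prescriptive planning.
# All names and example values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class UtilityProfile:
    measure: str                # the candidate measure (e.g., ODRs)
    stakeholder: str            # who will use the data
    role_purpose: str           # why the stakeholder needs the measure
    reflexivity: str            # "low"/"high": distance from the measure
    stability: str              # "low"/"high": fit of measure to purpose
    contingencies: list = field(default_factory=list)  # rewards/costs of use

# Example: a building-level administrator planning to use ODR data.
profile = UtilityProfile(
    measure="Office discipline referrals (ODRs)",
    stakeholder="Building-level administrator",
    role_purpose="Site-level decision-making",
    reflexivity="high",
    stability="high",
    contingencies=["easy to collect", "understood by the district office"],
)
print(profile)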

Stakeholder role/purpose. In keeping with suggestions from those involved in the study of stakeholder theory (Freeman, 1984; Weiss, 1983), the first step in measuring stakeholder utility is to locate the stakeholders. The second is to understand the professional/personal perspective of stakeholders: their "stakes" or "roles" in the activity that is being evaluated.

According to Freeman (1984), stakeholders are those who "can affect or are affected by the achievements of the organization's objectives" (p. 52). For a school or district initiative such as PBIS, researchers, classroom teachers, students, parents, district supervisors, state-level technical assistance team members, and administrators all belong to something that resembles an organization. These individuals and groups have at least one common purpose, and they represent a wide array of interests within the achievement of that purpose. This tension between separate and common interests, which come together in a single entity or organization to achieve its mission, is the linchpin of stakeholder theory (Phillips, 2003). The following is a (non-exhaustive) list of potential stakeholder roles specific to PBIS and possible purposes for using an evaluation measure (e.g., ODRs):

1. Student/Self-monitoring

2. Parent/Progress monitoring of son/daughter

3. Teacher/Student assessment, progress monitoring, classroom management assessment

4. Diagnostician/Student assessment, progress monitoring

5. Coach/Student assessment, PBIS team functioning, decision-making, implementation assessment (Site-level)

6. Leadership (Universal or Secondary/Tertiary) Team Member/Student assessment, PBIS team functioning, decision-making, implementation assessment (site-level)

7. Building-level Administrator/PBIS team functioning, decision-making, implementation assessment (site-level), district-wide assessment

8. District-level Administrator/Implementation assessment (site-level), district-wide assessment

9. Statewide Technical Assistance Personnel/Implementation assessment (site-level), implementation assessment (Technical Assistance-level), district-wide assessment, technical program evaluation, program evaluation, implementation research, experimental research

10. Researcher/Implementation assessment (site-level), implementation assessment (state-level), district-wide assessment, technical program evaluation, program evaluation, implementation research, experimental research

Accompanying each role is a purpose for which a given measure can be used. The two factors, role/purpose, are paired because although each has some level of independence, certain parameters tie the stakeholder role to the measurement purpose. For example, a building-level administrator might use ODR data for decision-making (at the site level) or to demonstrate gains (in a district-level meeting). A teacher would be more likely to use ODR data to make classroom decisions, but might use them at the district level when acting in the role of a school-wide PBIS team member. Role and purpose determine one dimension of utility; other dimensions, such as contingency, may actually override these two when it comes to determining stakeholder utility, although contingency is also affected by role and purpose. The dimensions of stakeholder utility and how they influence each other are outlined in Figure 1.
Dimension of Stakeholder Utility   Relationship to Other Factors

Role/Purpose                       Connects Stakeholder to Purpose

Reflexivity                        Connects Measure to Stakeholder

Stability                          Connects Measure to Purpose

Contingency                        Connects Stakeholder to Role, Purpose,
                                   and Measure

Figure 1. Inter-relationships among stakeholder utility dimensions.
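
As a purely illustrative aid, the role/purpose pairings can also be encoded as a simple mapping, which makes it easy to ask which roles share a given purpose. The labels below are a subset of the list above; the code itself is an assumption for demonstration, not part of the model.

# Illustrative only: selected role/purpose pairs from the list above.
ROLE_PURPOSES = {
    "Student": ["self-monitoring"],
    "Teacher": ["student assessment", "progress monitoring",
                "classroom management assessment"],
    "Building-level administrator": ["PBIS team functioning",
                                     "decision-making",
                                     "implementation assessment (site-level)",
                                     "district-wide assessment"],
    "Researcher": ["program evaluation", "implementation research",
                   "experimental research"],
}

def roles_with_purpose(purpose):
    """Return every stakeholder role that lists the given purpose."""
    return [role for role, purposes in ROLE_PURPOSES.items()
            if purpose in purposes]

print(roles_with_purpose("decision-making"))
# -> ['Building-level administrator']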


Reflexivity. Reflexivity is a stakeholder's potential to impact, and be impacted by, a given measure; it is affected by the relationship between the stakeholder and that measure. While role and purpose share some relation to one another, reflexivity is a function of the distance between the stakeholder and the measure itself. The farther away a stakeholder is (in their relevant role) from the data in which they are placing their interest, the more human error is involved and the less those data might be expected to impact that stakeholder's professional performance. Figure 2 provides a graphic map for an array of measures and stakeholders, placing each according to its distance from, and its ability to affect and be affected by, the consequences of various measures.

[FIGURE 2 OMITTED]

Reflexivity impacts a stakeholder's control over the measure in question, both as it relates to integrity and as it relates to job performance. For example, ODR data have high reflexivity for both students (for self-monitoring) and teachers (as PBIS team members). A student who is actively monitoring his or her own ODRs has a stake in whether or not an ODR is received, and also has a large degree of control over whether one is received (whether or not the student perceives that control is another issue). A classroom teacher working in a PBIS school whose team has set the reduction of ODRs from classroom settings as one of its goals also has some influence over the number of ODRs his or her classroom generates (control), and stands to contribute to solving a team-identified problem (professional performance). ODR data from hundreds of schools, however, have a lower degree of reflexivity for a researcher working on a large-scale project using those data; the researcher has a low level of control over the generation of those data, although he or she may still reap professional benefits through the findings generated from them.

Reflexivity has a kind of rubber-band quality: for a given stakeholder role and purpose, different kinds of data can be more or less reflexive. In the example of ODR data, reflexivity is differentiated by the stakeholder's distance from the measure in question, as in Figure 2, where ODRs have the highest degree of reflexivity for students (for self-monitoring) and teachers (for measuring implementation effectiveness).
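
A minimal sketch of this distance idea follows. The ordinal distance ranks and the cutoff separating high from low reflexivity are invented for demonstration; the construct itself proposes no quantitative scale.

# Illustrative sketch: reflexivity as (inverse) distance from the measure.
# Distance ranks and the cutoff below are assumptions for demonstration.
DISTANCE_FROM_ODR_DATA = {   # smaller = closer to the data source
    "student": 0,
    "teacher": 1,
    "building administrator": 2,
    "district administrator": 3,
    "researcher (multi-school dataset)": 4,
}

def reflexivity(stakeholder):
    """Crudely label reflexivity: closer to the data means more reflexive."""
    return "high" if DISTANCE_FROM_ODR_DATA[stakeholder] <= 1 else "low"

print(reflexivity("student"))                            # high
print(reflexivity("researcher (multi-school dataset)"))  # low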

Stability. Because reflexivity itself measures the distance between a stakeholder and a particular outcome or variable, it also affects the stability of that variable. Both in terms of integrity and objective measurement, stability is an important dimension for any outcome measure and refers to the relationship of the variable to its purpose. In other words, stability can be thought of as a special case of construct validity. For research purposes, applying considerations of construct validity might be sufficient to address the appropriateness of a given measure to its construct, but the playing field of PBIS includes a much wider range of stakeholders, not all of whom are researchers. Parents, students, teachers, and administrators alike should be able to answer the following questions when considering objective measures of progress:

1. How well does the variable measure what it is supposed to measure?

2. Is it easy to collect?

3. Does it work equivalently across contexts (with good reliability)?

4. Does it lend itself to cyclical use and dissemination?

All of these questions can help map out a measure's stability for a given purpose.
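
For illustration only, the four questions can be treated as a yes/no checklist, as in the sketch below. Counting affirmative answers is a demonstration device; the paper does not propose a quantitative stability score.

# Illustrative sketch: the four stability questions as a yes/no checklist.
STABILITY_QUESTIONS = [
    "Does the variable measure what it is supposed to measure?",
    "Is it easy to collect?",
    "Does it work equivalently across contexts (with good reliability)?",
    "Does it lend itself to cyclical use and dissemination?",
]

def stability_checklist(answers):
    """answers: four booleans, one per question above, in order."""
    unmet = [q for q, ok in zip(STABILITY_QUESTIONS, answers) if not ok]
    return {"met": len(STABILITY_QUESTIONS) - len(unmet), "unmet": unmet}

# Example: ODRs used by a school team for decision-making.
print(stability_checklist([True, True, True, True]))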

In the case of ODR data, we could say that for the role/purpose combination of PBIS school team member/decision-making, ODR data have a high degree of stability. In this case, we expect the data to reflect changes in the school's implementation of PBIS practices. On the other hand, for a researcher/evaluation purpose (measuring changes in student social behavior among schools and districts), we might say that ODR data have a lower level of stability; too much between-school variation may preclude getting reliable measures of student social behavior across schools (Kern & Manz, 2004). Stability alone is not enough to determine whether a given measure will have high stakeholder utility, however, as another dimension, contingency, may actually weigh more heavily on whether a tool or variable is valuable to stakeholders.

Contingency. Contingency is the relationship between a given behavior and its outcome. In the case of stakeholder utility, the factor of contingency involves outcomes that impact stakeholders in their use of various measures. This factor may have a controlling effect on what data get used by stakeholders for various purposes, and it comes directly from applied behavior analysis; mapping contingencies is a way of understanding the existing or potential reinforcers (positive and negative) that exist for stakeholders with respect to a given measure. These include, but are not limited to, professional and personal rewards, although it is important to remember that some rewards are not tangible. Professional rewards do not always appear in the form of access to resources or financial gain; for most educators, this category of reward is notoriously elusive. However, identifying potential reinforcers helps get to the root of contingency, and can help in planning for future measures that have the highest stakeholder utility.

For example, the following are all likely to affect the future data collection behavior of stakeholders: ease of data collection, information dissemination, and the communicability of results. Career and financial rewards are also potent sources of motivation for researchers, teachers, parents, and administrators alike. In comparison to the other utility dimensions, contingency likely carries the most weight. A measure may have very poor stability for its intended purpose and very low reflexivity, but the contingencies associated with its continued use may trump the limited influence of those factors on the likelihood of its continued use. One instance of this is the frequent use of ODRs as outcome data in the research literature on PBIS implementation, despite discussions of measurement in PBIS (Kern & Manz, 2004; Sasso, 2004) that question the validity of ODR data as outcome measures (their stability), especially in between-school comparisons.

Mapping out contingencies can shed light on why this is the case. First, ODR data are easy to collect on a large scale. Second, other stakeholder groups understand the measure; even outsiders (non-educators and international educators) understand ODR data as a measure of student behavior. Third, at the school level (for the role/purpose of PBIS team member/decision-making), ODR data can be very stable and have high reflexivity, helping team members understand their own job performance, which adds to the communicability of results from research to practice.
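
A minimal sketch of such contingency mapping follows. The entries echo the ODR example above; the net tally is a demonstration device rather than a validated metric.

# Illustrative sketch: "mapping contingencies" as lists of reinforcers
# and costs attached to a measure. Entries are examples, not findings.
def map_contingencies(measure, reinforcers, costs):
    """Summarize the contingencies operating on the use of a measure."""
    return {
        "measure": measure,
        "reinforcers": reinforcers,
        "costs": costs,
        "net": len(reinforcers) - len(costs),
    }

odr = map_contingencies(
    measure="ODRs",
    reinforcers=["easy to collect on a large scale",
                 "understood by outside stakeholder groups",
                 "informs team members' own job performance"],
    costs=["questioned validity for between-school comparisons"],
)
print(odr["net"])  # 2: in this sample, contingencies favor continued use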

The preceding constructs are intertwined: Reflexivity matches the stakeholder to the measure in question; stakeholder role/purpose marries the stakeholder with the rationale for measurement; stability links the measure to the purpose to gauge the appropriateness of the measure; and contingency connects the stakeholder to their environment by considering the effect of outcomes. Consequently, the stakeholder's professional role, the environment in which they operate, the measure itself, and the purpose for measurement are all related along different facets. Thus, the level of stakeholder utility for a single measure can differ drastically depending on the stakeholder's professional role and the purpose for which the data are being used.

Implementation of the PBIS model generates multiple potential sources of data (e.g., ODRs, suspensions, expulsions, daily behavior ratings, staff survey ratings, test scores, implementation scores) used by multiple stakeholders (e.g., teachers, administrators, team members, TA coordinators). Parents, students, and district- and state-level administrators are learning to become conversant in data, and as more schools adopt a multi-tiered approach to behavior and academics, this trend will continue. Despite the widespread availability of multiple measures, however, the principle of stakeholder utility suggests that measures for which the highest contingencies are operating, as in the ODR examples, will be the ones most commonly used by multiple stakeholders for multiple purposes. To prepare for this, mapping the various factors associated with stakeholder utility, with special attention to contingencies, around the use of data measures relative to stakeholder involvement may be a useful step for PBIS school, district, and technical assistance team members.

Applications of stakeholder utility can include other forms of validation as well. Technical assistance (TA) teams that assist schools and districts with planning implementation at start-up often use planning and goal-setting tools such as the Self-Assessment Survey (SAS; formerly known as the EBS Survey), the Benchmarks of Quality (BOQ; Cohen, Kincaid, & Childs, 2007), and the School-wide Evaluation Tool (SET; Horner, Todd, Lewis-Palmer, Irvin, Sugai, & Boland, 2004). By using Freeman's guiding question, "Who are the groups and individuals who can affect and are affected by the achievements of an organization's purpose?", groups of stakeholders in PBIS can be organized by categories of interest. Once identified, these groups (roles) can be placed relative to their distance from, and/or impact upon (reflexivity), various implementation assessments or evaluations of nested intervention effectiveness, thus mediating the utility of the data being collected with the role of the stakeholder.

For example, if a school's PBIS leadership team wishes to find out which data measures currently in use have the highest stakeholder utility, a preliminary step might be to conduct a stakeholder survey (or other equivalent assessment) prior to mapping out the dimensions of stakeholder utility that might be most influential. In addition to asking stakeholders to respond to a survey, other assessments of the tools most commonly used might include a review of the research literature or (in the case of school-based stakeholders) of any existing school-wide data collection tools. By identifying and mapping out stakeholder roles and purposes for evaluation, researchers and practitioners alike can explore myriad options for evaluating PBIS effectiveness, including measures of school social climate, organizational health, and student behavioral outcomes. One of the more exciting opportunities in the study and evaluation of PBIS is the novelty of being able to look at a host of outcomes as they emerge, not from the research literature as they have historically, but from the field.
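
As a minimal sketch of that preliminary survey step, the tally below shows how responses might be counted to see which measures are most widely used. The roles, items, and responses are invented for demonstration.

# Illustrative sketch: tallying a hypothetical stakeholder survey to see
# which measures respondents report using. All data are invented.
from collections import Counter

survey_responses = [          # (stakeholder role, measure in use)
    ("teacher", "ODRs"), ("teacher", "DPR data"),
    ("administrator", "ODRs"), ("coach", "ODRs"),
    ("parent", "test scores"),
]

usage = Counter(measure for _, measure in survey_responses)
print(usage.most_common())    # ODRs appear most often in this sample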

Once the exclusive province of researchers and program evaluators, measurements of implementation fidelity, progress, and effectiveness are beginning to be the concern of a growing body of stakeholders in school-wide implementation of PBIS. Considering the numerous combinations of roles and purposes for evaluation measures, the application of stakeholder-based considerations such as those outlined in this article may prove useful when designing future data-based measurement systems. Currently, researchers, teachers, district-level administrators, and parents are all invested in multiple facets of PBIS evaluation. The rewards (and punishments) that accompany the use of such measures, as well as the stability of each measure, can be mapped so that each stakeholder group can reap the maximum possible benefits of data collection and evaluation. The crux of this argument is that, regardless of impact, students, parents, teachers, administrators, and researchers are all responsive to differential barriers or reinforcers, professional or otherwise, in the use of data. By carefully weighing the vast options for data collection and measurement, all stakeholders might be better able to match data to purpose and purpose to people.

Limitations

As a model, stakeholder utility can provide a starting point for explaining and identifying the barriers and benefits that pave the way for various stakeholder purposes in PBIS implementation, but as explained here, several shortcomings can be noted. Most notably, as presented here, stakeholder utility is not yet a quantifiable construct, and categorization is limited to simple (and rather non-specific) ordinal levels of low and high. With respect to contingencies, simply counting the availability of rewards (both additive and subtractive) for the collection or use of a particular form of measurement may assist in facilitating and maintaining stakeholder-based evaluation, but will not necessarily lead to conclusions about a measure's value on a quantified scale.

Moreover, the dimensions of stakeholder utility, as explained here, are dominated by the single dimension of contingency. Without the mediating factor of contingency, simply mapping how reflexive or how stable a measure is will not tell us whether it will be used by any given stakeholder group. If a measure is difficult to collect and share, if it is hard to understand, or if its use incurs more costs than benefits for a particular stakeholder group, its stakeholder utility will be diminished. Further, the distance between a stakeholder and a measure (reflexivity) can itself be either a help or a hindrance for a given purpose. Take as an example the level of reflexivity of curriculum-based measurement (CBM) data used by a teacher to assess a student. Even after accounting for how well the material in question was addressed instructionally, the teacher will be measuring his or her own performance as well. One of the confounding aspects of high reflexivity is that it can also mean high subjectivity. Given the preeminence of contingency in the stakeholder utility construct, the other factors are subsumed under its weight; ODR data provide the best working example of the power of contingency to move the use of an outcome measure forward for multiple purposes.

Conclusion

Within the field of education in particular, students, parents, practitioners, and researchers face a host of different priorities and contingencies. A framework such as PBIS owes its continued success in schools to a variety of factors, not the least of which are that multiple stakeholders are involved in its maintenance and improvement, that objective data are used to make decisions (Horner, Sugai, & Todd, 2001; Scott & Martinek, 2006), and that the framework itself emphasizes efficiency and effectiveness in addressing student and school system needs simultaneously (Sugai et al., 2000). Shedding light on the utility of outcome measures can help stakeholders who use those measures to recognize the distinct professional (and nonprofessional) requirements and rewards with which others must contend, and thus add to the efficiency and effectiveness of PBIS practices in schools. A model like stakeholder utility can also provide students, parents, teachers, and researchers alike with a map of stakeholder role and purpose (their own and possibly those of other stakeholders) to predict which measures might be most useful to which parties, and for what purpose. While all have a vested interest in the reduction of ODRs in a school building, for example, different kinds of rewards exist for different stakeholders based on their level of involvement and the purpose of their interest in that measure (self-monitoring, program effectiveness, and program evaluation, to name a few).

With respect to the broad application of evidence-based practices in our schools, some very valuable school-wide and discipline data are being generated and shared regularly, in many cases with outstanding acuity. The number of schools implementing PBIS is expected to increase exponentially in the next ten years, and Response to Intervention (RTI) models for measuring student academic progress are expected to parallel data collection pertaining to student behavior. Additionally, the study of outcome measurement has changed in the last thirty years or so (Fixsen, Naoom, Blase, Friedman, & Wallace, 2005), narrowing (especially in the case of PBIS) the gap between the playing fields of research and program evaluation, and changing measurement models to fit field-based studies, which generate less homogeneously variant data.

A vast array of potential difficulties threatens a legitimate return from practice to research. Pinpointing the purpose of any field-initiated research so that it might more aptly address the appropriate audience can help school- and district-based stakeholders inquire into their own PBIS implementation progress with increased precision. If evaluations of existing implementations can speak from the point of view of program evaluation and not experimental research, the breadth of the literature on PBIS implementations may widen enough to include studies of program evaluation as legitimate examples of applied research. To that end, reflections on the role of stakeholder utility in PBIS evaluation may narrow the gap between practitioners and researchers, and provide a common language that honors each stakeholder's role in upholding a commitment to principles of effective, socially responsible intervention.

Notes

(1) The fields of business management and education have both maintained interest in the stakeholder concept for many years. The term stakeholder was coined around 1963 in an internal memorandum circulated within the Stanford Research Institute (Freeman, 1984). In 1983, Bryk and colleagues from the field of educational research and program evaluation revisited the term with a different lens, in an entire issue of New Directions for Program Evaluation devoted to stakeholder-based evaluation. Freeman's landmark 1984 book, Strategic Management: A Stakeholder Approach, spearheaded a new movement in business, away from the traditional top-down hierarchical model of management.

References

Bryk, A. S. (Ed.). (1983). Stakeholder-based evaluation (New Directions for Program Evaluation, No. 17). San Francisco, CA: Jossey-Bass.

Bryk, A. S., & Raudenbush, S. W. (1983). The potential contribution of program evaluation to social problem solving: A view based on the CIS and PUSH/Excel experiences. In A. S. Bryk (Ed.), Stakeholder-based evaluation (New Directions for Program Evaluation, No. 17, pp. 97-108). San Francisco, CA: Jossey-Bass.

Cohen, R., Kincaid, D., & Childs, K. E. (2007). Measuring school-wide positive behavior support implementation: Development and validation of the Benchmarks of Quality. Journal of Positive Behavior Interventions, 9(4), 203-213.

Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation research: A synthesis of the literature. Tampa, FL: University of South Florida, National Implementation Research Network (NIRN).

Freeman, R. E. (1984). Strategic management: A stakeholder approach. Boston, MA: Pitman.

Horner, R. H., Sugai, G., & Todd, A. W. (2001). "Data" need not be a four-letter word: Using data to improve school-wide discipline. Beyond Behavior, 11(1), 20-22.

Horner, R. H., Todd, A. W., Lewis-Palmer, T., Irvin, L. K., Sugai, G., & Boland, J. B. (2004). The School-Wide Evaluation Tool (SET): A research instrument for assessing school-wide positive behavior support. Journal of Positive Behavior Interventions, 6(1), 3-12.

Kauffman, J. M. (1996). Research to practice issues. Behavioral Disorders, 22(1), 55-60.

Kern, L., & Manz, P. (2004). A look at current validity issues of school-wide behavior support. Behavioral Disorders, 30(1), 47-59.

McIntosh, K., Campbell, A., Carter, D. R., & Zumbo, B. D. (2009). Concurrent validity of office discipline referrals and cut points used in schoolwide positive behavior support. Behavioral Disorders, 34(2), 100-113.

Office of Special Education Programs Technical Assistance Center on Positive Behavioral Interventions and Supports. (2010, January 15). Schools that are implementing SWPBIS and counting, 2010. Retrieved from http://www.pbis.org/

Phillips, R. (2003). Stakeholder theory and organizational ethics. San Francisco, CA: Berrett-Koehler.

Robbins, V., Collins, K., Liaupsin, C. J., Illback, R. J., & Call, J. (2003). Evaluating school readiness to implement positive behavior supports. Journal of Applied School Psychology, 20(1), 47-66.

Sasso, G. (2004). Measurement issues in EBD research: What we know and how we know it. Behavioral Disorders, 30(1), 60-71.

Scott, T. M., & Martinek, G. (2006). Coaching positive behavior support in school settings: Tactics and data-based decision making. Journal of Positive Behavior Interventions, 8(3), 165-172.

Sugai, G., Horner, R. H., Dunlap, G., Hieneman, M., Lewis, T. J., Nelson, C. M., Scott, T. M., et al. (2000). Applying positive behavior support and functional behavioral assessment in schools. Journal of Positive Behavior Interventions, 2(3), 131-143.

Walker, H. M. (2004). Commentary: Use of evidence-based interventions in schools: Where we've been, where we are, and where we need to go. School Psychology Review, 33(3), 398-407.

Weiss, C. H. (1983). The stakeholder approach to evaluation: Origins and promise. In A. S. Bryk (Ed.), Stakeholder-based evaluation (New Directions for Program Evaluation, No. 17, pp. 3-14). San Francisco, CA: Jossey-Bass.

Gita Upreti

Illinois PBIS Network

Carl Liaupsin

University of Arizona

Dan Koonce

Illinois PBIS Network, Chicago School of Professional Psychology

Correspondence to Gita Upreti, Ph.D., Dept. of Educational Psychology and Special Services, College of Education, 500 West University Avenue, University of Texas at El Paso, El Paso, TX 79968; e-mail: gitaupreti@gmail.com.