
Treatment fidelity in applied educational research: expanding the adoption and application of measures to ensure evidence-based practice.


In intervention research, treatment fidelity is defined as the strategies that monitor and enhance the accuracy and consistency of an intervention to ensure it is implemented as planned and that each component is delivered in a comparable manner to all study participants over time. Reviews of the literature in special education and other disciplines reveal that reports of treatment fidelity are limited. In this article, we examine some recommendations made by the National Institutes of Health Behavior Change Consortium that may be adapted to document treatment fidelity in educational research. We discuss the critical importance of planning for, collecting, and reporting treatment fidelity data at each stage of intervention research and discuss the implications of these practices for validity issues, efficacy and effectiveness studies, and cost-benefit considerations. Throughout the article, we use our own classroom-based research to provide examples of expanding treatment fidelity in randomized field trials.


There is increased emphasis on investigative and experimental rigor for educational researchers because of the No Child Left Behind Act of 2001 (NCLB), the reauthorization of the Elementary and Secondary Education Act, which demands the use of "scientifically-based research" as the basis for educational programming and classroom practices. As in the early developmental stages of research-based fields such as medicine and psychology, education has often relied on tradition, anecdotal evidence, and a collective sense of expert opinion to guide its instructional practices. These sources of evidence, however, do not provide educators with information precise or accurate enough to determine which educational practices are truly effective, for whom, and under what conditions.

Scientifically-based research, on the other hand, "... means research that involves the application of rigorous, systematic, and objective procedures to obtain reliable and valid knowledge relevant to education activities and programs" (NCLB, 2001). Such studies are judged on their scientific merit, which involves appropriate research design, thorough investigative methods, and valid and reliable measures to provide objective evidence from which to draw conclusions about an educational practice. Yet, scientifically-based research in education not only encompasses rigorously conducted efficacy studies (i.e., whether an intervention can work under specified conditions), but also includes effectiveness studies that show intervention outcomes under less controlled, authentic educational situations across a variety of conditions. As educational research continues to evolve, researchers need to determine how dissemination of scientifically-based effective practices can be achieved for the benefit of thousands of students, teachers, and other education professionals.

A critical factor in determining the efficacy, effectiveness, and successful dissemination of an educational practice is ensuring that the professionals who are responsible for its implementation deliver an intervention under study with accuracy and conformity. Treatment fidelity is defined as the strategies that monitor and enhance the accuracy and consistency of an intervention to (a) ensure it is implemented as planned and (b) make certain each component is delivered in a comparable manner to all participants over time (Bellg et al., 2004; Detrich, 1999; Dumas, Lynch, Laughlin, Smith, & Prinz, 2001; Lane, Bocian, MacMillan, & Gresham, 2004). If treatment fidelity is not measured, researchers cannot ascertain with confidence whether study outcomes were due to treatment or to factors incidental to the intervention (Bellg et al., 2004). Lack of treatment fidelity data also makes replication of, and comparisons between, interventions problematic (Hester, Baltodano, Gable, Tonelson, & Hendrickson, 2003). Thus, the purpose of this article is to (a) examine the current practice of treatment fidelity, (b) describe a framework proposed for advancing the definition, methodology, and measurement of treatment fidelity, (c) discuss the assessment of fidelity strategies, and (d) explore the implications of using thorough treatment fidelity measures to determine the outcomes of field-based interventions for the betterment of school-aged children and youth.

Documentation of Treatment Fidelity

Current Practice

The practice of reporting treatment fidelity in outcome research is limited. A meta-analysis of academic intervention research for students with Emotional or Behavioral Disorders (EBD) revealed that only 27 percent of studies selected between 1975 and 2002 reported treatment fidelity data (Mooney, Epstein, Reid, & Nelson, 2003). This trend is matched in youth psychotherapy outcome research, where only 32 percent of studies selected for meta-analyses between 1965 and 2002 included adherence checks (Weisz, Doss, & Hawley, 2005). In an examination of research methodology and practice in early intervention with children at risk for EBD, only 38 percent of studies reported on content and process fidelity (all components delivered, intervention delivered as designed) while 43 percent reported no fidelity measures at all (Hester et al., 2003). These findings are supported by other studies about students with disabilities (Gresham, MacMillan, Beebe-Frankenberger, & Bocian, 2000) and in related disciplines (Borelli et al., 2005). Although these reviews do not constitute a systematic sampling of special education, education, and psychotherapy research, their findings indicate a lack of focus on a research component necessary for establishing evidence-based practices in education. There are, however, multiple ways to establish and measure treatment fidelity adequately.

Importance of Planning

The assessment of treatment fidelity during the implementation phases of intervention research is critical, and researchers should begin by carefully integrating fidelity measurement during the planning stages of an intervention. Studies conducted in a variety of clinical and applied settings have provided some direction on strategies that work. For example, the Multimodal Treatment Study of Children With Attention Deficit Hyperactivity Disorder (MTA) concluded that treatment fidelity is enhanced by emphasis on rapport with study participants, manualization of treatments, videotaping sessions, and regular supervision of treatment providers (MTA Cooperative Group, 1999). Similarly, Detrich (1999) argues that contextual variables such as child characteristics, required resources, classroom culture, and similarity to current practice should be considered when implementing treatments in applied settings. Interventions that closely resemble current practices within the classroom increase the likelihood that treatments will be implemented with fidelity.

Researchers have also recommended the specification of treatment through operational definitions derived by a task analysis of essential components to ensure fidelity (Gresham et al., 2000). Hennessey & Rumrill (2003) suggest that (a) providing uniform training procedures and (b) convening panels of experts and/or consumers to evaluate treatment consistency across trainers, occasions, and sessions help control for threats to treatment validity. Further, Lane et al. (2004) note that minimizing needed resources, requiring a specific amount of implementation time, and maximizing teacher understanding of treatment usefulness all help to maintain higher levels of treatment fidelity.

A Conceptual Framework

Bellg et al. (2004), who comprise the Treatment Fidelity Workgroup of the National Institutes of Health Behavior Change Consortium (BCC), agree that these strategies are important and provide a framework for viewing fidelity components. The BCC is a multi-site effort to identify treatment fidelity concepts and strategies in health behavior intervention research. The Treatment Fidelity Workgroup was created to advance the definition, methodology, and measurement of treatment fidelity both within the BCC and for the field of health behavior change in general. The Workgroup reviewed the research literature, identified prominent techniques, and proposed standards of performance to ensure fidelity of clinical trials designed to test behavioral interventions. Their cumulative findings are offered as a framework for identifying and organizing best practices in treatment fidelity and promoting their adoption and application in the field of health behavior intervention research. Although proposed for use in clinical trials, many of the suggestions adapt appropriately to randomized field trial research in education.

We are currently integrating many of the recommendations from the Workgroup as part of our own randomized field trial research. Our study is focused on the effects of a universal cognitive-behavioral intervention (i.e., a classroom-based social problem-solving curriculum) on the disruptive/aggressive behavior of students in the 4th-5th grades who are at risk for developing serious behavior problems (see Daunic, Smith, Brank, & Penfield, 2006). We designed the study with the Workgroup's treatment fidelity considerations in mind and are incorporating them where appropriate as we progress through the stages of treatment implementation and evaluation. We will occasionally refer to our work throughout this paper to provide examples of how the treatment fidelity components can apply to prevention/intervention research.

Five Key Areas of Treatment Fidelity

The framework proposed by the BCC is intended to link treatment fidelity theory and application across five key areas, arranged chronologically: Study design, training, treatment delivery, treatment receipt, and treatment enactment. Bellg et al. (2004) suggest that careful attention to each step in the sequence is critical for the accurate appraisal of data. Inattention to any one of these categories could thus compromise the internal validity of the study or the accurate determination of whether observed differences could be attributed to intervention effects (see also Borelli et al., 2005). Each area comprises suggestions for measures that help ensure fidelity with a high degree of rigor.

Study design refers to the establishment of procedures that are consistent with relevant theory and practice and strategies that address and anticipate potential implementation setbacks. For example, interventions could be designed for prescribed lengths of time that are feasible in educational settings, and protocols could be put in place to ensure that the duration of treatment fits appropriately within a typical school calendar. In our randomized field trial, for example, we designed lessons for delivery in approximately 30-minute segments twice a week. The entire curriculum can be implemented within the context of a typical school year, accounting for disruptions such as high-stakes testing, holidays, field trips, and assemblies. Even with the best planning, however, an additional challenge in field-based educational research is that of teacher and student attrition. Researchers need to anticipate attrition during the design phase of a study, so that they can attend to the potential threat to fidelity by training extra teachers, obtaining larger than needed student populations, and tracking attrition throughout treatment implementation. These are proactive and indispensable strategies in attempting to ensure that treatment is delivered as planned.

Training issues related to treatment fidelity are relevant whenever human providers deliver intervention components. Fidelity of training covers the appraisal of whether treatment providers (e.g., teachers) are able to deliver an intervention as designed with an acceptable level of quality, or effectiveness. Quality in delivery and implementation can be accomplished with strategies such as identifying specific competencies required for successful implementation and designing the training procedures accordingly. Training can also be standardized to ensure systematic delivery across treatment providers (Hennessey & Rumrill, 2003). Researchers can assess provider competency during and after training sessions to ensure a minimal level of understanding and performance. In addition, these strategies are not exclusively pre-intervention considerations. Bellg et al. (2004) suggest that procedures be included to ensure provider skills do not decay once implementation begins. Similarly, Lane et al. (2004) note that direct observations to compare actual implementation to established criteria, weekly supervision or periodic meetings with providers, and/or requiring providers to reflect on their performance following implementation using process evaluation forms all contribute to the accurate measurement and enhancement of treatment fidelity. In our research, we evaluate provider competency by examining the permanent products they complete during and after training sessions. In addition, members of the research team conduct weekly meetings with teachers to answer questions about curriculum implementation.

Fidelity of treatment delivery consists of monitoring the implementation of the intervention and reporting the methods used to do so. This step is crucial in helping to establish internal validity. Bellg et al. (2004) describe four goals for monitoring and improving the delivery of treatment: (a) Minimize differences among providers in the selection process and assess participant feedback concerning provider characteristics that might have an impact on treatment delivery (see also Devilly & Spence, 1999), (b) reduce differences within treatment conditions by ensuring that providers in the same condition are delivering the same intervention, (c) ensure adherence to treatment delivery protocols by providing manuals or other specific instructions, and (d) minimize contamination between conditions. As part of our research, we monitor treatment delivery by direct observation using a checklist of essential components derived from the curriculum manual and lesson plans given to the teachers during training. Observations of treatment and control settings by observers blind to experimental condition are also conducted to assess possible contaminating variables that could obscure or override treatment effects. These assessments and adequate evidence that the intervention is delivered as intended in treatment groups are key to the preservation of internal validity (Lane et al., 2004; MTA Cooperative Group, 1999).

Treatment receipt is the fourth step in the process and involves ensuring that treatment recipients understand the information provided during the intervention. Especially important when treatment recipients have lower levels of literacy or education (Borelli et al., 2005), it involves monitoring participants' ability to comprehend and perform the skills and strategies taught to them during treatment delivery. In keeping with these recommendations, we have employed several strategies to assess student acquisition of curriculum content during the intervention phase of our study. First, we administer pre- and post-test knowledge questionnaires that assess student understanding. We also examine students' permanent products, such as worksheets and written activities provided in the curriculum and completed throughout the school year. Finally, we encourage teachers to perform ongoing assessments of student knowledge and understanding through informal questioning and observations of students during the role-plays that are embedded throughout the curriculum.

Treatment enactment is the final consideration in monitoring treatment fidelity. In this phase, researchers use processes to monitor and improve the ability of clients to perform treatment-related strategies in their daily lives, when and where appropriate. This area of treatment fidelity is primarily concerned with evidence that trained participants carry out strategies learned in training outside the intervention conditions (i.e., in "real-life" settings and situations). Evidence that they do so does not by itself demonstrate treatment efficacy (e.g., lower ratings of classroom disruption or aggression), but it does indicate that participants have at least learned some of the skills prerequisite to better adjustment. An assessment of treatment enactment can be accomplished with structured interviews, participant self-reporting, or direct observation.

In attempting to measure treatment enactment in our research on aggression prevention, we developed an "On-the-Spot" assessment protocol that guides teachers in documenting occurrences during which they observe students using the knowledge and skills taught in the curriculum. Similar to a "catch them being good" strategy, teachers are trained to look for examples of student skill enactment, such as stopping to think, calming down, or using a non-aggressive action such as walking away or seeking adult help, in a situation that might previously have elicited an aggressive response. When teachers witness such an occurrence, they are directed to follow up with the student at an appropriate time (i.e., On-the-Spot) with questions designed to explore what they were thinking about when they enacted their response. This step is particularly critical in reinforcing the cognitive skills taught in a social problem-solving curriculum, and in increasing the chance that these skills will generalize to situations outside the classroom. Teachers are also taught to attribute the action to the student by noting what he or she accomplished in a challenging situation. This "attribution training" reinforces the newly acquired skills and helps build self-esteem.

Despite providing a useful framework for a discussion of treatment fidelity, the Bellg et al. (2004) recommendations are not without their critics. Leventhal and Friedman (2004) warned that strict adherence to each of these steps may inhibit development of interventions that can be readily implemented in real world settings. They caution that rigid observance of study design, training, and treatment delivery recommendations may promote the practice of placing more importance on provider adherence to protocol, or manualization, than on treatment parameters feasible in realistic delivery contexts such as schools. Leventhal and Friedman assert that applied research settings sometimes demand adjusting procedures to the context, setting, or individual participants, and that the study of those adjustments is worthy of researchers' efforts. In addition, they are critical of the BCC's view of treatment receipt. They argue that in some cases, participant understanding may not be necessary for enactment, and that an inflexible standard of treatment receipt may thwart the study of important variables that contribute to positive treatment outcomes. It is their contention that the distinction between treatment adherence and enactment is not always clear and that it tends to present individuals as reactive organisms rather than active problem solvers. Despite these reservations, Leventhal and Friedman commend the BCC for taking steps to provide a comprehensive framework and acknowledge that in specific types of studies, "rigid adherence to these standards will advance the field" (Leventhal & Friedman, 2004, p. 455).

Assessment of Treatment Fidelity

In general, the assessment of treatment fidelity can take various forms, each with its own advantages and disadvantages. Lane et al. (2004) outline five common methods: Direct observation, feedback from consultants, self-monitoring and reporting from teachers, review of permanent products, and treatment manualization. Direct observation is perhaps the most accurate assessment, but also the most costly. In direct observation, a protocol is typically created so that observers can note the absence or presence of each treatment component. In the consultant feedback method, a consultant, typically an expert in an area relevant to the study, observes the implementation of treatment and provides corrective feedback to the teacher/treatment agent.

Treatment agents can self-monitor and report whether or not they are implementing each component. This method, while cost-effective, often inflates estimated levels of integrity relative to direct observation findings. Evaluating permanent products that result from an intervention helps determine content fidelity but does not provide much information about process fidelity. Finally, manualized treatments may facilitate the accuracy with which interventions are implemented. Use of treatment manuals allows trained treatment providers a reference to consult when delivering instruction, thereby increasing the probability that the treatment will be implemented as designed (see also Gresham et al., 2000). The methods suggested by Lane et al. (2004) are not mutually exclusive, and implementation of multiple methods concurrently should enhance fidelity.

The calculation of treatment fidelity is dependent on the types of fidelity assessed. The most common form of fidelity reported in the intervention literature is treatment delivery. This is typically reported as content delivery, with measures across days of treatment to ensure both component integrity and process fidelity, and with measures calculated within sessions reported as session integrity (Gresham et al., 2000). Researchers can also (a) measure the effectiveness of training by administering post training assessments to treatment providers and (b) assess treatment receipt by administering post-treatment knowledge checks that measure whether students received the content of instruction and the extent to which they understood it. Finally, treatment enactment is measured by the extent to which treatment recipients use intervention skills in situations outside the treatment implementation context such as a classroom.
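To make these calculations concrete, the two most common integrity indices just described can be sketched in a few lines of code. This is an illustrative sketch only; the component names and checklist data below are hypothetical, not drawn from any study discussed in this article:

```python
# Hypothetical sketch of two common fidelity calculations:
# component integrity (a component's delivery rate across sessions) and
# session integrity (the share of planned components delivered within a
# single session). Checklist entries here are purely illustrative.

def component_integrity(sessions, component):
    """Percentage of observed sessions in which a component was delivered."""
    delivered = sum(1 for s in sessions if s.get(component, False))
    return 100.0 * delivered / len(sessions)

def session_integrity(session):
    """Percentage of planned components delivered within one session."""
    return 100.0 * sum(session.values()) / len(session)

# An observer's checklists for three sessions of a two-component lesson
observed = [
    {"stated_objective": True, "modeled_skill": True},
    {"stated_objective": True, "modeled_skill": False},
    {"stated_objective": True, "modeled_skill": True},
]

print(round(component_integrity(observed, "modeled_skill"), 1))  # 66.7
print(session_integrity(observed[1]))                            # 50.0
```

In practice, such percentages would be derived from direct-observation checklists of essential treatment components, like those described in the preceding sections, with one entry per component per session.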

While fidelity strategies and guidelines exist in the professional literature, we found no professional consensus on "acceptable levels" that document the fidelity of treatment. Typically, literature reviews on the topic focus on the inclusion or exclusion of treatment fidelity components and not on actual levels of fidelity achieved. For example, Borelli et al. (2005) reviewed treatment fidelity in health behavior research from 1990 to 2000 using the Bellg et al. (2004) framework and reported only the percentage of articles that included treatment fidelity strategies. Borelli et al. did, however, define studies with 80% or greater adherence across all five fidelity categories as having "high treatment fidelity."
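One reading of the Borelli et al. (2005) criterion (adherence of at least 80% in each of the five fidelity categories, rather than averaged across them) can be expressed as a simple check. The area labels and adherence figures below are illustrative assumptions, not data from their review:

```python
# Hypothetical sketch of the "high treatment fidelity" criterion reported
# by Borelli et al. (2005): 80% or greater adherence across all five BCC
# fidelity areas. Area names and adherence scores are illustrative.

AREAS = ("design", "training", "delivery", "receipt", "enactment")

def high_fidelity(adherence, threshold=80.0):
    """True only if adherence meets the threshold in every fidelity area."""
    return all(adherence.get(area, 0.0) >= threshold for area in AREAS)

study = {"design": 90.0, "training": 85.0, "delivery": 82.0,
         "receipt": 80.0, "enactment": 75.0}

print(high_fidelity(study))  # False: enactment falls below 80%
```

The check deliberately treats a missing area as 0% adherence, reflecting the reviews cited above in which unreported fidelity components counted against a study.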

When we reviewed quality indicators for reporting research (e.g., Chambless & Hollon, 1998; Davidson et al., 2003; Gersten et al., 2005; Kratochwill et al., 2003), we found that treatment fidelity is considered important, but we found no specific levels that would establish acceptable standards. For example, the Consolidated Standards of Reporting Trials (CONSORT) Statement, developed to improve the design, reporting, and reviewing of interventions using randomized clinical trials in medical journals, points out that treatment delivery and adherence should be monitored and reported, but the Statement does not specify acceptable levels of concurrence with prescribed intervention guidelines (see Davidson et al., 2003). Likewise, the Quality Indicators for Group Experimental and Quasi-Experimental Research in Special Education (Gersten, Fuchs, Compton, Coyne, Greenwood, & Innocenti, 2005) indicate the importance of reporting fidelity but do not offer specific guidelines as to acceptable levels, what percentage of intervention sessions should be monitored, or what level of integrity is considered desirable. Thus, acceptable standards for desired levels or amounts of fidelity are issues in need of further investigation in randomized field trial research.


The ultimate purpose of experimental research in education is to improve the lives of children and youth. Although there are many components that constitute quality educational research design, the assessment of treatment fidelity in intervention studies helps researchers understand, as unequivocally as possible, how the intervention relates to child outcomes (Gersten et al., 2005). The necessary and sufficient assessment of treatment fidelity, then, helps determine whether interventions can contribute to desirable outcomes for school-aged children and youth.

The critical importance of treatment fidelity in educational research has several noteworthy implications. First, doctoral level training should include treatment fidelity as an essential aspect of designing and conducting intervention studies. Treatment fidelity cannot be adequately addressed as an afterthought but must be part of study planning from the outset. Researchers should carefully consider its measurement and the acceptable parameters of treatment variability prior to implementing treatment. Thus, fidelity at all stages of research should be included in doctoral level coursework discussions of internal and external validity and research design.

Second, enhanced treatment fidelity assurance, measurement, and dissemination should be reflected in professional standards for peer-reviewed publications and proposals for funding in educational research. Manuscript reviewers should note the extent to which fidelity measures are included and the type of measures used. For example, direct observation of treatment implementation would be considered stronger evidence than indirect measures such as teacher/implementer surveys. Similarly, the research community can continue to require the inclusion of fidelity measures as part of rigorous standards for fundable applications. The assessment of treatment fidelity in intervention conditions goes hand in hand with the assessment of "business as usual" or alternative treatments in comparison groups. Without these measures, researchers cannot adequately determine the impact of treatment or the possible contamination from other sources of variance in outcomes.

Rigid adherence to treatment protocols with adequate fidelity measures in studies designed to test the efficacy of an intervention does not, however, come without a price. Although rigid protocols can enhance internal validity, requiring them could impede the development of effectiveness studies that provide essential information about treatments in naturally occurring settings such as schools and classrooms, given the vast differences among treatment providers and students. Essentially, a demand for rigid adherence to prescribed treatment protocols can affect teachers' or other treatment providers' acceptance of a proposed intervention, because these practitioners may not feel free to improvise, adapt, or otherwise deviate from prescribed protocols according to their knowledge about the idiographic nature of their classrooms. Thus, researchers need to consider the importance of establishing efficacy while also considering the parameters of appropriate and realistic adaptation within diverse contexts.

Finally, the move from typical, relatively simple, fidelity measures of the extent to which an intervention occurs (i.e., dose, or how much of the intervention was delivered) to more comprehensive assessments of design quality, provider training, treatment receipt, and treatment enactment, involves the development and implementation of more sophisticated and complex measurement. There may be, however, a cost-benefit ratio that researchers need to consider because of the more complicated and task intensive work required to expand fidelity activities across the five areas outlined by Bellg et al. (2004). Since treatment fidelity as described by Bellg et al. includes areas not typically assessed in contemporary field trials, increased budgetary considerations will be needed to cover associated costs. Increased staff and/or staff time, staff and teacher training, material development, observer time, increased travel time, employment of external evaluators, and the required data analysis expertise associated with measuring treatment fidelity adequately require increased funding. When faced with budget constraints, researchers will need to examine carefully the extent to which they can afford treatment fidelity assessment and still conduct basic research activities necessary for determining treatment outcomes.

In summary, there is little doubt that treatment fidelity is expanding as a substantive component of experimental research in education and other disciplines. Past efforts at assessing fidelity were often limited to treatment integrity, or the extent to which a treatment was delivered as intended (Resnick et al., 2005b). We are currently expanding the assessment of treatment integrity in our own research to include the recommendations of Bellg et al. (2004), and we are beginning to understand the necessity of the increase in research activities to support documentation of this broader framework. By attending to the careful documentation and reporting of this critical aspect of research validity, the research community will continue to advance educational and behavioral practice.


Bellg, A. J., Borrelli, B., Resnick, B., Hecht, J., Minicucci, D. S., Ory, M., Ogedegbe, G., Orwig, D., Ernst, D., & Czajkowski, S. (2004). Enhancing treatment fidelity in health behavior change studies: Best practices and recommendations from the NIH Behavior Change Consortium. Health Psychology, 23, 443-451.

Borelli, B., Sepinwall, D., Bellg, A. J., Breger, R., DeFrancesco, C., Sharp, D. L., Ernst, D., Czajkowski, S., Levesque, C., Ogedegbe, G., Resnick, B. & Orwig, D. (2005). A new tool to assess treatment fidelity and evaluation of treatment fidelity across 10 years of health behavior research. Journal of Consulting and Clinical Psychology, 73, 852-860.

Chambless, D. L., & Hollon, S. D. (1998). Defining empirically supported therapies. Journal of Consulting and Clinical Psychology, 66(1), 7-18.

Daunic, A. P., Smith, S. W., Brank, E. M., & Penfield, R. D. (2006). Classroom-based cognitive-behavioral intervention to prevent aggression: Efficacy and social validity. Journal of School Psychology, 44, 123-139.

Davidson, K. W., Goldstein, M., Kaplan, R. M., Kaufmann, P. G., Knatterud, G. L., Orleans, C. T., Spring, B., Trudeau, K. J., & Whitlock, E. P. (2003). Evidence-based medicine: What is it and how do we achieve it? Annals of Behavioral Medicine, 26(3), 161-171.

Detrich, R. (1999). Increasing treatment fidelity by matching interventions to contextual variables within the educational setting. School Psychology Review, 28, 608-620.

Devilly, G. J., & Spence, S. H. (1999). The relative efficacy and treatment distress of EMDR and a cognitive-behavior trauma treatment protocol in the amelioration of posttraumatic stress disorder. Journal of Anxiety Disorders, 13, 131-157.

Dumas, J. E., Lynch, A. M., Laughlin, J. E., Smith, E. P., & Prinz, R. J. (2001). Promoting intervention fidelity: Conceptual issues, methods, and preliminary results from the early alliance prevention trial. American Journal of Preventive Medicine, 20(1), 38-47.

Gersten, R., Fuchs, L. S., Compton, D., Coyne, M., Greenwood, C., & Innocenti, M. S. (2005). Quality indicators for group experimental and quasi-experimental research in special education. Exceptional Children, 71, 149-164.

Gresham, F. M., MacMillan, D. L., Beebe-Frankenberger, M. E., & Bocian, K. M. (2000). Treatment integrity in learning disabilities intervention research: Do we really know how treatments are implemented? Learning Disabilities Research and Practice, 15, 198-205.

Hennessey, M. L., & Rumrill, P. D. (2003). Treatment fidelity in rehabilitation research. Journal of Vocational Rehabilitation, 19, 123-126.

Hester, P. P., Baltodano, H. M., Gable, R. A., Tonelson, S. W., & Hendrickson, J. M. (2003). Early intervention with children at risk of emotional/behavioral disorders: A critical examination of research methodology and practices. Education and Treatment of Children, 26(4), 362-381.

Kratochwill, T. R., Stoiber, K. C., Christenson, S., Durlack, J., Levin, J. R., Talley, R., et al. (2002). Procedural and coding manual for review of Evidence-Based Interventions. Developed in conjunction with the Division 16 and Society for the Study of School Psychology Task Force on Evidence-Based Interventions. (Retrieved July 18, 2007 from

Lane, K. L., Bocian, K. M., MacMillan, D. L., & Gresham, F. M. (2004). Treatment integrity: An essential but often forgotten component of school based interventions. Preventing School Failure, 48, 36-43.

Leventhal, H., & Friedman, M. A. (2004). Does establishing fidelity of treatment help in understanding treatment efficacy? Comment on Bellg et al. (2004). Health Psychology, 23(5), 452-456.

Mooney, P., Epstein, M., Reid, R., & Nelson, J. R. (2003). Status and trends in academic intervention research for students with emotional disturbance. Remedial and Special Education, 24, 273-287.

MTA Cooperative Group (1999). Moderators and mediators of treatment response for children with attention-deficit/hyperactivity disorder. Archives of General Psychiatry, 56, 1088-1096.

No Child Left Behind Act of 2001, P.L. 107-110, 115 Stat. 1425 (2002).

Resnick, B., Inguito, P., Orwig, D., Yahiro, J. Y., Hawkes, W., Werner, M., Zimmerman, S., & Magaziner, J. (2005a). Treatment fidelity in behavior change research: A case example. Nursing Research, 54, 139-143.

Resnick, B. et al. (2005b). Examples of implementation and evaluation of treatment fidelity in the BCC studies: Where we are and where we need to go. Annals of Behavioral Medicine (Special Supplement), 29, 46-54.

Weisz, J. R., Doss, A. J., & Hawley, K. M. (2005). Youth psychotherapy outcome research: A review and critique of the evidence base. Annual Review of Psychology, 56, 337-363.

Stephen W. Smith

Ann P. Daunic

Gregory G. Taylor

University of Florida

Correspondence to Stephen W. Smith, Dept. of Special Education, University of Florida, G315 Norman Hall, PO Box 117040, Gainesville FL 32611; e-mail:
COPYRIGHT 2007 West Virginia University Press, University of West Virginia

Article Details
Author: Smith, Stephen W.; Daunic, Ann P.; Taylor, Gregory G.
Publication: Education & Treatment of Children
Article Type: Report
Date: Nov 1, 2007