Intervention-based assessment: evaluation rates and eligibility findings.
Traditional methods for determining special education eligibility employ test results to classify children's characteristics under a number of disability categories specified by the Individuals with Disabilities Education Act (IDEA; U.S. Department of Education, 1995). This traditional assessment approach, which employs norm-referenced tests measuring child characteristics, serves a gatekeeping function for special education placement, where (it is presumed) effective interventions are more likely to occur. Problem-solving (or intervention-based) assessment, in contrast, employs direct measurement of student performance in natural settings for the design and evaluation of interventions, which may or may not incorporate the specialized resources associated with special education. Consequently, assessment resources are devoted to discovering and documenting effective intervention methods, rather than simply documenting a match between child characteristics and eligibility criteria. Since traditional eligibility determination activities have been criticized as yielding information that is not helpful in the process of intervention design (Reschly, 1988a, 1988b), problem-solving methods are viewed as more consistent with the goal of providing effective services to children who have disabilities or who are at educational risk.
In his argument for the adoption of problem-solving assessment methods, Reschly (1988b) predicted that they "have the potential to be as good as traditional measures in regard to eligibility determination and provide information related to interventions" (p. 498), and that they "will identify the same students currently found to be eligible through expensive procedures that are largely unrelated to interventions" (p. 498). Reductions in inappropriate referrals for eligibility determination and increases in service delivery in the least restrictive general education environment are reflective of these outcomes and consistent with the goals of educational reform (Sindelar, Griffin, Smith, & Watanabe, 1992).
Research has demonstrated that problem-solving assessment models yield outcomes consistent with special education reform goals, including increased services in general education settings and reductions in evaluation rates (Fuchs, Fuchs, Harris, & Roberts, 1996; Graden et al., 1985; Graden, Zins, & Curtis, 1988; Gutkin, Henning-Stout, & Piersel, 1988; Nelson, Smith, Taylor, Dodd, & Reavis, 1991; Ponti, Zins, & Graden, 1988; Reschly & Starkweather, 1997). However, three principal concerns limit the relevance of these findings to current initiatives promoting problem-solving methods in place of more traditional special education eligibility determination procedures.
First, reductions in evaluation rates may be an inadequate or inappropriate goal for reform initiatives, especially if they reflect a loss of access to services needed by children who have disabilities or who are at educational risk. A more suitable variable for study may be a reduction in "inappropriate" evaluations or decreases in the proportion of children referred for multifactored evaluation who are found to be ineligible for special education. Conversely, an increase in "appropriate" evaluations would be evident if greater proportions of evaluated children were found eligible for special education (Sindelar et al., 1992).
Second, most states already encourage or require some form of problem-solving assessment and prereferral intervention (Carter & Sugai, 1989), although only a few have articulated specific policies and procedures for incorporating these approaches in the process of special education eligibility determination (Allen & Graden, 1995; Nunn, 1998; Reschly & Ysseldyke, 1995). Consequently, the "baseline" condition against which problem-solving assessment's impact on reform-oriented outcomes probably should be compared is no longer the absence of any form of prereferral intervention, but the absence of policy-sanctioned procedures linking problem-solving activities with eligibility determination.
Third, there is a paucity of data describing outcomes associated with statewide, field-based implementation of policy initiatives promoting a problem-solving orientation. One reason for this may be a reluctance to devote research efforts to studies in which strict control over such design elements as sample selection and measurement/data collection activities is limited by the sheer magnitude (and field-based nature) of the project. On the other hand, survey data describing phenomena associated with a major shift in special education policy (which is operationalized at the state level) seem sufficiently valuable as to warrant their collection in spite of these limitations. Although interpretations of such data are limited by aspects of research design, they provide a window through which the changing nature of the special education landscape can be observed.
INTERVENTION-BASED ASSESSMENT (IBA) AND INTERVENTION ASSISTANCE TEAM (IAT) MODELS FOR DETERMINING SPECIAL EDUCATION ELIGIBILITY
In the present study, school-based multidisciplinary teams (MDTs) served as the primary resource in every school for systematically addressing the needs of children with learning and behavior problems. Referrals were made to MDTs when classroom teachers were unable to resolve such problems through informal consultation with colleagues and resource personnel. Although their missions were similar, MDTs employed special education eligibility determination procedures under the IBA model that differed in several important ways from those that had been used by teams under the IAT model.
At any point in the problem-solving process, MDTs employing the IBA process might suspect the presence of a disability. If this occurred, they were required to conduct a multifactored evaluation to determine eligibility on the basis of the following criteria:
* The interventions provided to the child, which are necessary to help the child attain targeted goals or objectives or to maintain that performance, are of such a unique and extraordinary nature and intensity, as determined through an analysis of the use of instructional methods, materials, equipment, services, personnel, or environmental or physical adaptations, that they qualify as specially designed instruction;
* The child's characteristics meet the federal definition of one or more of the disability conditions specified under the Individuals with Disabilities Education Act;
* Without special education and related services, the child's condition has an adverse effect on his or her educational performance (Ohio Department of Education, 1995, p. 44).
State regulations required the IBA/multifactored evaluation (MFE) team to conduct an MFE by conducting observations; assessing the effects of the child's environment on learning; conducting structured interviews; systematically implementing and monitoring planned interventions, and identifying as a result those interventions found to be effective; and considering data from the parents and any other assessments available to the team (Ohio Department of Education, 1995).
MDTs employing IBA were required to document eligibility in a written report that included a display of data describing baseline levels of behavior, as well as data describing such behavior under intervention conditions (i.e., progress monitoring). Multifactored evaluation procedures were prescribed to include consideration of data from "any intervention provided to the child prior to referral and/or implemented during the ... evaluation process" (Ohio Department of Education, 1995, p. 42). Interventions requiring the use of specialized technology, modifications of the curriculum, individual instruction by educational specialists, and custom-made or specialized instructional materials would be considered "unique and extraordinary," and would therefore serve as the "specially designed instruction" defining special education. As a matter of policy, then, MDTs not only examined the characteristics of children, but placed primary emphasis on the quality of interventions shown to be necessary to address identified problems in children's school performance. The latter requirement necessitated (a) documented evaluation of interventions and their results, and (b) persistence in the problem-solving process until an effective intervention was identified.
Although the IAT method encouraged the delivery of interventions prior to determining a child's eligibility for special education services, it did not require teams to identify effective interventions, and required only an informal written summary of attempted interventions. There was no provision for collecting data describing behavior under baseline and progress monitoring conditions, nor for using intervention data in any specified manner to make eligibility decisions. Moreover, multifactored evaluation methods employed under the IAT model were described as "instruments" and "tests" that were "administered by trained and qualified personnel in accordance with any instruction provided by the producer of the tests" (Ohio Department of Education, 1995, p. 7). The results of such tests were reviewed to determine "whether the child has a particular category of disability; the present levels of performance and education needs of the child; recommended strategies for improving instruction; and whether the child needs special education and related services" (Ohio Department of Education, 1995, pp. 14-15). As an example of the criteria used by teams under the IAT model, eligibility criteria for "specific learning disability" stated that the child's characteristics must be consistent with the definition of a learning disability presented in IDEA (i.e., "a disorder in one or more of the basic psychological processes involved in understanding or in using language"; 34 CFR 300.7(c)(10)). In addition, the team was required to provide evidence of a severe discrepancy between achievement and ability through the application of a discrepancy formula to test scores obtained during the multifactored evaluation (Ohio Department of Education, 1995). Clearly, the IAT method placed primary emphasis on the results of standardized, norm-referenced measures of child characteristics to determine special education eligibility.
Although teams maintained written records of children referred to teams, and for whom interventions were implemented, the role of interventions in the IAT eligibility determination process was limited to a review of prior efforts to address the child's difficulties.
PURPOSE OF THE STUDY
This study examined the implementation of an eligibility-linked problem-solving assessment project in Ohio with respect to several outcomes consistent with special education reform goals. It addressed whether schools reported differences in a number of outcome variables when their performance under a structured, eligibility-linked problem-solving model (IBA) was compared with their earlier performance under a prereferral intervention model (IAT) that emphasized standardized, norm-referenced testing, and that required neither data collection to document interventions and their results, nor specific linkages between interventions and eligibility determination.
Outcome variables examined in the study included: (a) proportion of children suspected of having disabilities and undergoing the eligibility determination process; (b) proportion of children undergoing eligibility determination procedures found eligible for special education; and (c) proportion of children receiving interventions that involved removal from the general education classroom setting for at least a portion of the school day.
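The three outcome variables above are simple proportions of a team's yearly caseload. As a minimal sketch (the counts and function name below are hypothetical illustrations, not study data), they can be computed from the totals a school would report for one year:

```python
def outcome_proportions(discussed, evaluated, eligible, removed):
    """Return the study's three outcome proportions for one school year.

    discussed -- children discussed by the MDT for initial problem-solving
    evaluated -- children who underwent multifactored evaluation (MFE)
    eligible  -- children found eligible for special education via MFE
    removed   -- children whose interventions involved removal from the
                 general education classroom for part of the school day
    """
    return {
        "evaluation_rate": evaluated / discussed,   # outcome (a)
        "eligibility_rate": eligible / evaluated,   # outcome (b), the "hit rate"
        "removal_rate": removed / discussed,        # outcome (c)
    }

# Hypothetical example: 100 cases discussed, 26 evaluated, 20 found
# eligible, 35 receiving interventions outside the general classroom.
rates = outcome_proportions(discussed=100, evaluated=26, eligible=20, removed=35)
```

Framing the variables this way (proportions rather than raw counts) is what lets schools of different sizes be compared, as the analysis section notes.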
Of the 155 schools participating in the IBA waiver program, 80 reported the presence of an IAT during the baseline year immediately preceding implementation of IBA. These schools comprised the sample for the present study. No school served children beyond the sixth grade, and children in sample schools ranged in age from 5 through about 12. Sample schools were primarily located in suburban or metropolitan regions of the state (61.5%), with 28.8% located in rural areas and 9.6% in urban areas. Across the sample, 16.7% of schools reported their student populations as primarily "lower" income; 45.8% as "low-middle to middle" income; 18.7% as "upper-middle to upper" income; and 18.7% reported serving a mixed-income population. Schools reported percentages of ethnic representation that were averaged to yield descriptions of the student populations across sample schools: 8.7% of students were African American; 1.1% Latino Hispanic; 0.7% Asian American; 0.1% Native American; and 88.4% White Caucasian. Only 0.7% of the student population across sample schools was reported as speaking English as a second language. The sample of schools was biased in favor of those in their earliest years of implementation of the IBA problem-solving model, with 44 (55%) in their first year, 23 (29%) in their second year, and 13 (16%) in their third year of implementation.
Schools throughout Ohio volunteered and were approved for participation in the IBA project by the Ohio Department of Education through its network of 16 Special Education Regional Resource Centers (SERRCs). Characteristics upon which selection decisions were based included the presence of an IAT, strong building leadership, and a plan for staff training and support. All schools received technical support from the local SERRC, including training in the problem-solving sequence devised by a state planning team. In addition, minimal requirements for problem-solving MDT membership were prescribed to include the school principal, at least one special education teacher, at least one general education teacher, and a school psychologist. Membership of parents on school teams was strongly encouraged and required when a disability was suspected. Finally, schools were required to make a commitment to participate in state-sponsored evaluation activities, which served as the data source for the present study (Tolan, 1998).
Adherence to the recommended IBA problem-solving sequence (i.e., integrity) was promoted through requirements for MDTs to document cases on standardized Problem-Solving Worksheets disseminated through the SERRC network. The Worksheets were devised by a state planning team to serve as a step-by-step guide to the problem-solving sequence. MDTs also furnished copies of Evaluation Team Reports for students who had undergone multifactored evaluation of a suspected disability.
Schools participating in Ohio's IBA pilot project submitted information from the year immediately preceding implementation of the IBA process (in which the IAT model had been employed), and for the current year of IBA implementation. The Student Data Form submitted by MDTs requested information about the number of students for whom a disability was suspected and who underwent multifactored evaluation to determine special education eligibility; the number of students found eligible or ineligible as a result of such evaluation; and the number of students whose interventions required placement outside of the general education classroom setting for at least a portion of the school day. Each MDT submitted Student Data Forms to its local SERRC, which transmitted them to the statewide evaluation team for analysis. Results for each relationship examined in this study are based only on data obtained from schools that submitted complete information describing each variable for both their baseline and IBA implementation years; consequently, sample sizes for each analysis were reduced from the entire sample of 80 schools. Because schools entered the multiyear IBA project at different times, they differed in their years of experience with the IBA approach at the time this study was conducted.
To eliminate years of participation in the statewide IBA project as a potential explanation for findings of differences in relationships between the IAT and IBA conditions, a preliminary ANOVA using a General Linear Models procedure was conducted. Results indicated no differences as a function of schools' years of IBA participation in the number of children undergoing eligibility determination procedures during either the IAT year (F(2, 47) = 0.89, p = .42) or the IBA year (F(2, 47) = 2.78, p = .07).
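The shape of that preliminary check can be sketched in a few lines. This is not the study's data or code, only a hedged illustration of a one-way ANOVA across groups defined by years of IBA participation, using hypothetical per-school counts of children undergoing eligibility determination:

```python
# One-way ANOVA: do MFE counts differ by years of IBA participation?
# The counts below are hypothetical illustrations, not study data.
from scipy.stats import f_oneway

year1 = [12, 9, 15, 11, 8]   # schools in their 1st year of IBA
year2 = [10, 14, 9, 13]      # schools in their 2nd year
year3 = [11, 10, 12]         # schools in their 3rd year

f_stat, p_value = f_oneway(year1, year2, year3)
# A nonsignificant result (p > .05) would support pooling schools
# across years of participation, as the study did.
```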
Because sample schools were expected to vary in the number of children whose problems were addressed by each problem-solving team, hypothesized relationships were analyzed using proportions of team caseloads, rather than raw numbers of children reported on the Student Data Form. Analyses employed Guilford's Comparison of Proportions (Guilford, 1965), a procedure in which percentages of cases can be analyzed for statistical significance across conditions. The formula for this procedure is shown in Figure 1.
Formula for Guilford's Comparison of Proportions
z = (p_1 - p_2) / sqrt[ p_e q_e ((N_1 + N_2) / (N_1 N_2)) ]

where p_e = (N_1 p_1 + N_2 p_2) / (N_1 + N_2) and q_e = 1 - p_e
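The procedure is the familiar pooled two-proportion z statistic, and can be sketched directly from the formula. The sample sizes in the example are hypothetical (the article reports proportions, not per-condition Ns), and the function name is mine:

```python
import math

def guilford_z(p1, n1, p2, n2):
    """Pooled two-proportion z statistic (Guilford, 1965)."""
    pe = (n1 * p1 + n2 * p2) / (n1 + n2)             # pooled proportion
    qe = 1 - pe
    se = math.sqrt(pe * qe * (n1 + n2) / (n1 * n2))  # pooled standard error
    return (p1 - p2) / se

# Hypothetical example: 53% of 200 IAT cases vs. 26% of 200 IBA cases
# underwent MFE. With these made-up Ns, z is roughly 5.5.
z = guilford_z(0.53, 200, 0.26, 200)
```

Note that identical proportions yield z = 0, and larger samples inflate z for the same difference in proportions, which is why the study's large caseloads produce the sizable z values reported below.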
Findings related to the first outcome variable are presented in Table 1. Of children discussed by MDTs for initial problem-solving, a smaller proportion of those whose cases were addressed using the IBA problem-solving model underwent MFE to determine eligibility for special education (z = 10.30, p < .01). For the IAT problem-solving method, 53% of cases discussed for initial problem-solving underwent MFE, while only 26% of cases discussed for initial problem-solving underwent MFE under the IBA problem-solving method.
The second outcome variable examined was the proportion of "eligible" versus "ineligible" findings for children undergoing MFE to determine special education eligibility (see Table 2). Of children who underwent MFE, 63% were found eligible by teams using the IAT eligibility determination procedure, while 77% were found eligible when the IBA eligibility determination procedure was employed (z = -3.48, p < .01). For "ineligible" findings, the IAT procedure found 36% of children ineligible for special education, in comparison to only 17% found ineligible by teams when the IBA procedure was employed (z = 5.18, p < .01).
Results for the final outcome variable are presented in Table 3. Although differences were found between the two problem-solving models in the proportion of children discussed for initial problem-solving who received interventions requiring removal from the general education classroom setting, results favored IAT over the IBA model (i.e., under IAT, a greater proportion of children received intervention services exclusively in general education class settings). While more than half (54%) of children whose cases were discussed for initial problem-solving received interventions involving time spent outside of general education classrooms when the IBA model was employed, only 35% received such interventions when the IAT model was employed (z = -7.09, p < .01).
Several features of this study distinguish it from earlier research. First, the study examined reductions in "inappropriate" multifactored evaluation rates, rather than reductions in overall evaluation rates. It has been suggested that improving the "hit rate" (or proportion of cases evaluated that result in a finding of eligibility) is a more appropriate goal for reform initiatives (Sindelar et al., 1992). Second, the study compared outcomes under the pilot model of IBA with the same outcomes under an earlier, prereferral intervention model (IAT), rather than to a condition in which there was no problem-solving provision. This is important because states' initial efforts to promote some form of problem-solving have been largely successful (Carter & Sugai, 1989) so that the logical next step is the creation of policies (such as Ohio's IBA plan) to integrate intervention procedures in the process of special education eligibility determination. Although "resistance to intervention" is a frequently used indicator of suspected disability (or even eligibility), the IBA procedure employed in Ohio requires teams to proceed through interventions of increasing sophistication until they arrive at interventions that work. This policy creates a direct and critical role for interventions in the eligibility determination process. Third, this study of broad implementation of problem-solving policy initiatives differs from many studies that examine smaller, more homogeneous samples. Thus, in spite of its methodological limitations, this study provides valuable information about outcomes associated with the "real world" implementation across an entire state of a policy-based problem-solving model for team activities, including eligibility determination.
The problem-solving models examined in this study differed from one another in several important ways. The IBA procedure required direct measurement and documentation of children's performance problems under baseline (pre-intervention) and progress-monitoring (intervention) conditions, requirements that were not present in the IAT prereferral intervention procedure. In addition, and especially pertinent to the purposes of this study, determination of special education eligibility under the IBA procedure required teams to identify and examine successful interventions (i.e., interventions that had resulted in goal attainment or satisfactory progress toward goal attainment), to judge whether such interventions constituted "specially designed instruction." In contrast, although interventions were recorded and documented in the eligibility determination process under IAT, decisions were based primarily on the results of tests, which were analyzed by teams to determine whether they matched eligibility criteria for one or more disability conditions (e.g., a significant discrepancy in scores on IQ and achievement tests as an indicator of specific learning disability). Consequently, findings of this study are interpreted in light of each model's degree of rigor, as well as its linkage with determinations of children's eligibility for special education.
In this study, a smaller proportion of children discussed by teams for initial problem-solving underwent MFE to determine special education eligibility when the IBA procedure was employed, as compared to the proportion of children who underwent MFE when the IAT procedure was employed. The IBA procedure undoubtedly improves the validity of referrals for eligibility determination through its use of direct measures of student performance (e.g., observation of behavior in natural settings; assessment of academic performance using curriculum-based measures), its requirement that problem-solving efforts continue until an effective intervention can be identified, and its stipulation that eligibility decisions must be based on evidence produced through intervention delivery and evaluation.
Reductions in the proportion of MFEs conducted by IBA teams also may be related to the more demanding and time-consuming nature of the problem-solving and eligibility determination procedures required by IBA, in comparison to those required under IAT. Indeed, this has been a criticism of problem-solving methods (Fuchs et al., 1990; Wilson, Gutkin, Hagen, & Oats, 1998). IAT teams may have viewed MFE for eligibility determination as a method to more quickly respond to the needs of children displaying school performance problems, and were thus more likely to conduct MFEs for a greater proportion of referred children.
Findings of this study also indicated that IBA (in comparison to IAT) was associated with a greater proportion of "appropriate" referrals (i.e., those resulting in eligibility decisions) and a smaller proportion of "inappropriate" referrals (i.e., those resulting in ineligibility decisions). These findings are almost certainly attributable to the "screening" function served by elements of the IBA process. MFEs conducted under the IAT model relied on two primary data sources--teacher or parent referral and test scores--and the literature provides evidence of the limitations of both of these sources (Reschly, 1988a; Reschly & Ysseldyke, 1995; Shinn, Tindal, & Spira, 1987). The IAT procedure did not require teams to validate referral concerns through direct measurement of children's performance; moreover, test scores require an unacceptably high degree of inference as predictors of actual performance. In contrast, the IBA procedure required validation of referral concerns through direct measurement of problematic performance, as well as documentation of attempted interventions and their results.
Use of the IBA problem-solving model did not result in a greater proportion of interventions delivered exclusively in general education classroom settings. In fact, the IAT model yielded superior results in this regard, raising questions about the conditions under which a major goal of special education reform--an increase in the delivery of interventions in general education--is likely to be achieved. A number of factors negatively influence implementation of problem-solving interventions in schools, including resistance on the part of general education teachers owing to a lack of skill, knowledge, or ownership of the intervention process; inadequate resources to maintain needed interventions in general education class settings; institutional barriers to flexible intervention design; and persistent belief in special education as a panacea for children's school performance problems (Fuchs et al., 1996; Pugach & Johnson, 1989; Wilson et al., 1998). However, such factors would be assumed to have interfered with the delivery of general education class interventions under both the IAT and IBA problem-solving conditions, and therefore provide an inadequate explanation for findings in this study.
Why did the IBA problem-solving model produce a smaller proportion of interventions implemented in general education classrooms than did the IAT model? The answer may lie in differences in the perceived mission and activities of each type of problem-solving team. Perhaps MDTs employing the IAT model perceived their mission as one of developing interventions for problems that were judged as unlikely to result in special education eligibility and amenable to intervention in general education class settings. The finding of a greater proportion of IAT cases receiving interventions exclusively in general education classrooms is consistent with this hypothesis. Under conditions where there is no meaningful link between IAT problem-solving and eligibility determination procedures, teams would have no mandate to address (in any persistent, structured, or documented manner) problems of a moderate and severe nature, except to refer them for MFE. Further support for this explanation is found in research showing that prereferral intervention is often viewed in schools as little more than a bureaucratic prelude to eventual multifactored evaluation for purposes of accessing the "magic bullet" of special education (Wilson et al., 1998). Although a greater proportion of IAT cases might have received interventions in general education class settings, this explanation also would predict the proportion undergoing MFE (i.e., those judged as moderate and severe) to be greater, as occurred in this study. However, this explanation, coupled with the finding of a poorer "hit rate" under IAT, raises the possibility that a sizable proportion of moderately severe cases discussed by IAT teams resulted neither in an "eligible" finding as a result of MFE (which would have led to intervention via special education), nor in interventions applied in general education class settings.
Since previous research (Wilson et al., 1998) found that, following referral, "the development of new strategies all but ceased in favor of continuing prior, unsuccessful efforts" (p. 56), the probability that these children would have received subsequent meaningful interventions in general education classrooms seems discouragingly remote.
Teams employing the IBA procedure, in contrast, may have perceived their mission as one of conducting problem-solving for all cases, regardless of team judgments of their severity. The IBA eligibility determination process required documentation of effective interventions; consequently, referral for MFE would not have been perceived by teams as an expedient alternative to problem-solving. Moreover, the interventions documented and reported by IBA teams probably reflected a much broader range of intensity and intrusiveness (i.e., removal from the classroom) than those reported for the IAT process, since IBA teams had a mandate to pursue increasingly more sophisticated interventions until success, or satisfactory progress, had been attained. Unfortunately, this study did not address whether there were differences in the quality of interventions planned under IBA and IAT, although earlier studies suggest that both problem-solving models (IBA and IAT) likely generated interventions that were inadequate in quality and fidelity of implementation, regardless of the setting in which they were provided (Flugum & Reschly, 1994; Telzrow, McNamara, & Hollinger, 2000; Wilson et al., 1998). In the absence of skilled intervention-planning and delivery, a problem-solving procedure could not be expected to achieve the goal of increasing the extent of service delivery in general education class settings, especially when applied to problems reflecting a heterogeneous range of severity.
Characteristics of the sample employed in this study raise concerns about external validity. First, schools participated in the IBA project on a voluntary basis, introducing selection bias; second, the entire sample consisted of elementary schools, with a majority located in suburban areas; and, third, the sample was biased in favor of schools that were relative novices in applying the IBA problem-solving procedure. In addition, the failure of some schools to submit complete information on Student Data Forms resulted in reduced sample sizes for the analyses conducted in this study. Consequently, findings describe results for schools whose demographic characteristics differ somewhat from those reported for the entire sample of schools, though it is unlikely that schools supplying incomplete data differ in any systematic way from those supplying complete information. Finally, the use of a self-report instrument to gather data in this study limits internal validity, insofar as data reported by teams may include some inaccuracies. Attempts were made to minimize this threat by directing teams to use records of MDTs' case-related activities as the source of information for the Student Data Form.
IMPLICATIONS FOR PRACTICE
This study demonstrated that refinement of a problem-solving model to include specific linkages with special education eligibility determination and mandatory documentation of intervention-planning resulted in reductions in rates of multifactored evaluation and an increase in the proportion of "appropriate" referrals for such evaluation (Sindelar et al., 1992). These findings differ from those of earlier studies in that this problem-solving model (i.e., IBA) was compared not with a traditional "test and place" model, but with an earlier version of problem-solving (i.e., IAT) employed in the same sample of schools across the state of Ohio. Consequently, results support the notion that team-based problem-solving is not a unitary construct, and should not be treated as such in research and practice. Approaches to problem-solving differ in many respects, including the extent and quality of case-related data collection, adherence to a systematic problem-solving sequence, and degree of perseverance in devising and documenting effective interventions. Results of this study suggest that MDTs should define their missions in terms of desired outcomes (i.e., screening for "resistance to intervention" versus devising effective interventions) and the procedures to be followed to reach those outcomes (i.e., behavioral definition of problems with baseline data; detailed intervention plan; training and coaching interventionists; and collecting and evaluating progress monitoring data).
The lower proportion of multifactored evaluations conducted by IBA teams (in comparison with IAT teams) suggests that the team was able to identify effective interventions for many children referred to the MDT, thereby eliminating the need for a multifactored evaluation. However, it appears that teams continued to rely on interventions involving removal from the general education classroom to a greater degree than expected. Assuming that delivery of interventions in general education settings is both adequate and desirable for greater numbers of children (Reschly, 1988a), analysis of this study's findings suggests that policy support for persistence in attempting meaningful interventions must be supplemented by efforts to improve problem-solving teams' skill in designing, implementing, and achieving acceptance of such interventions.
Results of this study led to the prediction that, as long as special education policies allow problem-solving teams to exercise the option of referral for a traditional ("test and place") MFE, teams will continue to select this option, especially for problems judged to be of moderate or extreme severity, reserving meaningful problem-solving efforts for mild problems. Teams should ensure that a rigorous problem-solving process is applied to all referred cases of children experiencing difficulty, regardless of differences in problem severity and (possibly inaccurate) predictions of the likely success of interventions in general education class settings. Enrollment in a special education program is not itself an intervention, nor should the simple fact of placement be expected to yield favorable outcomes in the absence of strategies determined to be effective for a given child. This fact, more than any other, should determine how, and with what goal, problem-solving activities are implemented. Absent a commitment to effective intervention planning, the time-consuming and half-hearted process of sifting through a series of inadequate and inappropriate interventions can hardly compete with the perceived efficiency and rewards of "test and place" models: placement without delay, receipt of special education funds, and transfer of problem ownership away from already-overburdened general education teachers. Unless teams are charged with the responsibility of implementing progressively more sophisticated interventions until an effective approach has been identified, we can expect the perceived advantages of a "test and place" model to prevail.
TABLE 1
Results of Comparison Between Intervention Assistance Teams (IAT) and Intervention-Based Assessment (IBA): Number of Children Undergoing Multifactored Evaluation (MFE) as a Proportion of Number Discussed for Initial Problem-Solving

IAT
  Dependent Variable                              Teams N   M       SD      Total Cases   Raw Proportion
  Number undergoing MFE                           43        10.70   7.87    460           --
  Number discussed for initial problem-solving    43        20.28   12.92   872           .53

IBA
  Dependent Variable                              Teams N   M       SD      Total Cases   Raw Proportion
  Number undergoing MFE                           43        4.00    4.49    172           --
  Number discussed for initial problem-solving    43        15.12   17.06   650           .26

Comparison of proportions undergoing MFE (IAT vs. IBA): Z = 10.30*
* p < .01 (2-tailed test)

TABLE 2
Results of Comparison Between Intervention Assistance Teams (IAT) and Intervention-Based Assessment (IBA): Number of Children Eligible and Ineligible for Special Education as a Proportion of Number Undergoing Multifactored Evaluation (MFE)

IAT
  Dependent Variable                              Teams N   M       SD      Total Cases   Raw Proportion
  Number undergoing MFE                           43        11.81   8.85    508           --
  Number eligible for special education           43        7.47    6.27    321           .63
  Number ineligible for special education         43        4.28    4.32    184           .36

IBA
  Dependent Variable                              Teams N   M       SD      Total Cases   Raw Proportion
  Number undergoing MFE                           43        4.79    5.52    206           --
  Number eligible for special education           43        3.67    3.60    158           .77
  Number ineligible for special education         43        .79     1.30    34            .17

Comparison of proportions (IAT vs. IBA)
  Number eligible for special education: Z = -3.48*
  Number ineligible for special education: Z = 5.18*
* p < .01 (2-tailed test)

TABLE 3
Results of Comparison Between Intervention Assistance Teams (IAT) and Intervention-Based Assessment (IBA): Number of Children Receiving Interventions Requiring Placement Outside of General Education as a Proportion of Number Discussed for Initial Problem-Solving

IAT
  Dependent Variable                                             Teams N   M       SD      Total Cases   Raw Proportion
  Number receiving interventions outside of general education    42        6.98    6.24    293           --
  Number discussed for initial problem-solving                   42        20.02   13.00   841           .35

IBA
  Dependent Variable                                             Teams N   M       SD      Total Cases   Raw Proportion
  Number receiving interventions outside of general education    42        7.67    10.82   322           --
  Number discussed for initial problem-solving                   42        14.31   15.51   601           .54

Comparison of proportions receiving interventions outside of general education (IAT vs. IBA): Z = -7.09*
* p < .01 (2-tailed test)
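The tables do not name the statistical procedure behind the reported Z values, but they are consistent with a pooled two-proportion z-test applied to the raw case counts. A minimal sketch (the function name is ours, not the authors'), reproducing the Z statistics for Tables 1, 2, and 3 from the total-case columns:

```python
import math

def two_prop_z(x1, n1, x2, n2):
    """Pooled two-proportion z-test: z statistic for H0: p1 == p2."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)               # pooled proportion under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Table 1: proportion undergoing MFE, IAT (460 of 872) vs. IBA (172 of 650)
z_mfe = two_prop_z(460, 872, 172, 650)           # ~ 10.30

# Table 2: proportion eligible, IAT (321 of 508) vs. IBA (158 of 206)
z_elig = two_prop_z(321, 508, 158, 206)          # ~ -3.48

# Table 3: proportion placed outside general education,
# IAT (293 of 841) vs. IBA (322 of 601)
z_out = two_prop_z(293, 841, 322, 601)           # ~ -7.09
```

That the recomputed values match the tabled Z statistics to two decimal places supports reading the comparisons as tests on the aggregated case counts rather than on team-level means.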
Algozzine, B., & Maheady, L. (1985). When all else fails, teach! Exceptional Children, 52, 487-488.
Allen, S. J., & Graden, J. L. (1995). Best practices in collaborative problem-solving for intervention design. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology-III (pp. 667-678). Washington, DC: National Association of School Psychologists.
Carlberg, C., & Kavale, K. (1980). The efficacy of special versus regular class placement for exceptional children: A meta-analysis. Journal of Special Education, 14, 295-309.
Carter, J., & Sugai, G. (1989). Survey on prereferral practices: Responses from state departments of education. Exceptional Children, 55, 298-302.
Flugum, K. R., & Reschly, D. J. (1994). Prereferral interventions: Quality indices and outcomes. Journal of School Psychology, 32, 1-14.
Fuchs, D., Fuchs, L. S., Bahr, M. W., Fernstrom, P., & Stecker, P. M. (1990). Pre-referral intervention: A prescriptive approach. Exceptional Children, 56, 493-513.
Fuchs, D., Fuchs, L. S., Harris, A. H., & Roberts, F. H. (1996). Bridging the research-to-practice gap with mainstream assistance teams: A cautionary tale. School Psychology Quarterly, 11, 244-266.
Graden, J. L., Casey, A., & Christenson, S. L. (1985). Implementing a prereferral intervention system: Part I: The model. Exceptional Children, 51, 377-384.
Graden, J. L., Zins, J. E., & Curtis, M. J. (1988). The need for alternatives in educational services. In J. L. Graden, J. E. Zins, & M. J. Curtis (Eds.), Alternative educational delivery systems: Enhancing instructional options for all students (pp. 3-13). Washington, DC: National Association of School Psychologists.
Guilford, J. P. (1965). Fundamental statistics in psychology and education (4th ed.). New York: McGraw-Hill.
Gutkin, T. B., Henning-Stout, M., & Piersel, W. C. (1988). Impact of a district-wide behavioral consultation prereferral intervention service on patterns of school psychological service delivery. Professional School Psychology, 3, 301-308.
Kavale, K. A., & Forness, S. R. (1999). Effectiveness of special education. In C. Reynolds & T. Gutkin (Eds.), Handbook of School Psychology (pp. 984-1024). New York: Wiley.
Marston, D. (1987). Does categorical teacher certification benefit the mildly handicapped child? Exceptional Children, 53, 423-431.
Nelson, J. R., Smith, D. J., Taylor, L., Dodd, J. M., & Reavis, K. (1991). Prereferral intervention: A review of the research. Education and Treatment of Children, 14, 243-253.
Nunn, G. D. (1998, March). 'IDEAL' problem-solving: A collaborative needs-based intervention approach for special needs and at-risk students. The Psychologist, 21, 2-13.
Ohio Department of Education, Division of Special Education. (1995). Model policies and procedures for the education of children with disabilities. Columbus, OH: Author.
Ponti, C. R., Zins, J. E., & Graden, J. L. (1988). Implementing a consultation-based service delivery system to decrease referrals for special education: A case study of organizational considerations. School Psychology Review, 17, 89-100.
Pugach, M. C., & Johnson, L. J. (1989). Prereferral interventions: Progress, problems, and challenges. Exceptional Children, 56, 217-226.
Reschly, D. J. (1988a). Special education reform: School psychology revolution. School Psychology Review, 17, 459-475.
Reschly, D. J. (1988b). Obstacles, starting points, and doldrums notwithstanding: Reform/revolution from outcomes criteria. School Psychology Review, 17, 495-501.
Reschly, D. J., & Starkweather, A. R. (1997). Evaluation of an alternative special education assessment and classification program in the Minneapolis Public Schools. Minnesota State Board of Education, Department of Children, Families, and Learning.
Reschly, D. J., & Ysseldyke, J. E. (1995). School psychology paradigm shift. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology-III (pp. 17-31). Washington, DC: National Association of School Psychologists.
Shinn, M. R., Tindal, G. A., & Spira, D. A. (1987). Special education referrals as an index of teacher tolerance: Are teachers imperfect tests? Exceptional Children, 54, 32-40.
Sindelar, P. T., Griffin, C. C., Smith, S. W., & Watanabe, A. K. (1992). Prereferral intervention: Encouraging notes on preliminary findings. The Elementary School Journal, 92, 245-259.
Telzrow, C. F., McNamara, K., & Hollinger, C. L. (2000). Fidelity of problem-solving implementation and relationship to student performance. School Psychology Review, 29, 443-461.
Tolan, P. (1998, April). Ohio's Intervention-Based Assessment (IBA) and IBMFE Initiative: History, demographics, and current status. In D. Reschly (Chair), Ohio's implementation of intervention-based assessment. Symposium conducted at the meeting of the National Association of School Psychologists, Orlando, FL.
U.S. Department of Education, Office of Special Education Programs. (1995). Improving the Individuals With Disabilities Education Act: IDEA Reauthorization. Washington, DC: Author. (ERIC Document Reproduction Service No. ED 386 018).
Wilson, C. P., Gutkin, T. B., Hagen, K. M., & Oats, R. G. (1998). General education teachers' knowledge and self-reported use of classroom interventions for working with difficult-to-teach students: Implications for consultation, prereferral intervention and inclusive services. School Psychology Quarterly, 13, 45-62.
KATHY MCNAMARA, Associate Professor; and CONSTANCE HOLLINGER, Professor, Department of Psychology, Cleveland State University, Ohio.
Correspondence concerning this article should be addressed to Kathy McNamara, Department of
Psychology, Cleveland State University, Euclid Avenue at East 24th Street, Cleveland, OH 44115 (216-687-2521) or via Internet at k. firstname.lastname@example.org
The preparation of this manuscript was supported by funding and information provided by the Ohio Department of Education, Office for Exceptional Children. Comments contained herein are solely the authors', and should not be interpreted as having agency endorsement. The contributions of Jim DeLamatre and Colleen McMahon to the preparation of this manuscript are gratefully acknowledged.
Manuscript received November 2001; accepted July 2002.
Author: McNamara, Kathy; Hollinger, Constance
Date: Jan 1, 2003