"Old habits die hard": Past and current issues pertaining to response-to-intervention.
In December 2004, Congress passed the Individuals with Disabilities Education Improvement Act (IDEIA, 2004), which permitted local education agencies to use a Response-to-Intervention (RtI) approach for identifying children with possible learning disabilities for special education. With the passage of IDEIA, 2004, educators were essentially given the choice of using the traditional IQ-achievement discrepancy model or RtI for identifying students at risk for a Specific Learning Disability (SLD). Unfortunately, the passage of IDEIA, 2004 has not resolved the debate regarding the best approach for identifying children with SLD. On one side of the debate, some scholars argue that RtI should be used for identifying children with suspected learning disabilities for numerous reasons, including: (1) RtI relies on early screening and identification, which leads to better intervention outcomes; (2) RtI employs empirically validated screening and progress monitoring procedures such as curriculum-based measurement (CBM), Dynamic Indicators of Basic Early Literacy Skills (DIBELS), Screening to Enhance Equitable Placement (STEEP), and AIMSweb; (3) RtI employs assessment procedures that are directly linked to intervention; (4) RtI relies on evidence-based interventions for both younger and older children in areas such as reading fluency, reading comprehension, mathematics, and written expression; (5) RtI moves the field away from Refer-Test-Place logic to Refer-Assess-Intervene-Evaluate logic; and (6) RtI moves the field away from a highly questionable and highly inferential within-child explanation of learning difficulties (Fletcher et al., 2002; Fuchs & Fuchs, 1998; Gresham, 2002; Gresham, 2007; National Association of State Directors of Special Education [NASDSE], 2005).
For reasons that will be discussed in more detail, many researchers have called for the abandonment of the IQ-achievement discrepancy model and intelligence tests for the purposes of identifying learning disabilities (Fletcher, Coulter, Reschly, & Vaughn, 2004; Fletcher et al., 2002; Gresham, 2002). On the other side of the debate, at least three scholars have argued that the IQ-achievement discrepancy model affords educators the most valid approach for preserving the construct of SLD and identifying those students with "real" learning disabilities (Kavale, 2002; Mastropieri & Scruggs, 2002). Despite a viable second option for the assessment of children at risk for SLD, most states and local school districts continue to use the IQ-achievement discrepancy model. School personnel may persist in using the IQ-achievement discrepancy model for a number of reasons, including waiting for state code and regulations on RtI to be developed, awaiting guidance on how best to transition to and implement RtI, and insufficient RtI-related knowledge and skills. The purpose of the current article is to (a) provide a brief review of the discrepancy model, (b) provide a compendium of the issues related to the IQ-achievement discrepancy model for school psychology practitioners in California, (c) review the issues related to the use of intelligence tests within an RtI model, and (d) provide a rationale for applying RtI across school districts in California.
The Discrepancy Model
In general, the discrepancy model employed in most states, including California, requires that the following four criteria be met before determining eligibility for SLD: (a) establishing a significant discrepancy between intellectual/cognitive ability and academic achievement, (b) identifying the existence of a psychological/cognitive processing deficit, (c) determining whether the child's educational needs can be met without special education and related services, and (d) ruling out exclusionary factors. Once these four criteria are met, the student may be eligible for special education services as a child with SLD. Most school psychologists and educators involved in this type of assessment would agree that the IQ-achievement discrepancy and the identification of psychological or cognitive processing strengths and weaknesses are the most heavily weighted considerations. Although this type of assessment process is not always so simple and clear-cut, it represents a common assessment process used daily by thousands of school psychologists and multidisciplinary teams (MDTs) across the United States. Interestingly, the Education for All Handicapped Children Act (1975; renamed the Individuals with Disabilities Education Act [IDEA] in 1990) did not require the assessment of intelligence or psychological processing for determining eligibility for SLD. Although IDEA never required this type of assessment as part of SLD, the IQ-achievement discrepancy model was implemented in an arbitrary fashion in the 1977 federal regulations as a way to operationalize the construct of SLD and prevent a de facto prevalence cap of 2% from being enacted automatically (U.S. Office of Education, 1977).
From its inception, the discrepancy model has been problematic for numerous reasons. Over the past 30 years dozens of research articles (many of which will be discussed below) have provided empirical evidence of the problems inherent with the IQ-achievement discrepancy model. As previously stated, the forthcoming discussion is meant to provide practitioners with a compendium of the issues related to the IQ-achievement discrepancy model and a brief review of the research literature supporting each of the points.
First and foremost, use of the IQ-achievement discrepancy model has made early identification of, and intervention with, children with suspected SLDs difficult. For the most part, young children experiencing academic problems in kindergarten, first, and second grades do not demonstrate the IQ-achievement discrepancy necessary to meet eligibility as SLD (Speece, 2002). As a result, it is not uncommon for these students to continue to fail for an additional two or three years, and often longer, before their academic achievement is sufficiently low compared to their IQ and they become eligible to receive special education services. In fact, special education identification rates indicate that the odds of being classified as SLD peak in the third and fourth grades (Lyon, Fletcher, Fuchs, & Chhabra, 2006). This model represents a "wait-to-fail" approach, which results in students not being provided with appropriately intense general and/or special education interventions in a timely manner (Fletcher et al., 2002; Gresham, 2002; Torgesen et al., 2001). For example, in the area of reading, children at risk for later reading difficulties can be reliably identified as early as the beginning of first grade (Juel, 1988). When these children do not receive intervention early in their academic careers, there is a high probability (>70%) that they will continue to be poor readers into the secondary grades and beyond (Fletcher & Lyon, 1998). On the other hand, when educators are able to meet the academic achievement needs of children early on, the likelihood of positive, long-term educational outcomes is greatly increased (Fletcher et al., 2002; Stanovich, 2000). Furthermore, when educators are able to meet the academic needs of children early on, the likelihood of negative long-term outcomes such as school drop-out, delinquency, and unemployment is significantly reduced (Alexander, Entwisle, & Horsey, 1997; Williams & McGee, 1994).
Although the ability to provide early intervention and prevention for all children at risk for school failure alone should justify moving to an RtI approach, it is just one of many problems with the IQ-achievement discrepancy approach for identifying SLD.
A second major criticism of the IQ-achievement discrepancy model is that there is little scientific basis for using this approach (Francis, Fletcher, & Stuebing, 2005; Stuebing et al., 2002). That is, empirical evidence demonstrating the reliability and validity of the ability-achievement discrepancy model for identifying SLD is virtually non-existent (Fletcher et al., 2002; Stuebing et al., 2002; Vellutino, Scanlon, & Lyon, 2000). On the contrary, a rather substantial body of evidence has concluded that ability-achievement discrepancy models do not accurately identify SLD (Hoskyn & Swanson, 2000; Peterson & Shinn, 2002; Stuebing et al., 2002; Vellutino et al., 2000). With respect to the reliability of the ability-achievement discrepancy model, Fletcher et al. (2002) concluded that making a decision based on a single test score, at a single point in time, with an instrument that has measurement error is not a reliable or psychometrically sound practice. Since a student is generally administered a given measure (e.g., an IQ or achievement test) only once, the repeated measurements necessary to establish the reliability (consistency) of his or her performance are unavailable. Without repeated measures, issues such as examinee characteristics, examiner characteristics, and situational conditions are difficult to account for, making the reliability of ability-achievement discrepancy models particularly problematic. In discussing the unreliability of the discrepancy approach, Shepard (1980) proposed that students be administered at least four separate combinations of IQ and achievement tests in order to derive a reliable estimate of students' discrepancy scores. However, this procedure would take school psychologists up to 12 hours of testing just on IQ and achievement tests, which has little practical appeal.
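The psychometric concern can be made concrete with the classical-test-theory formula for the reliability of a difference score between two standardized measures. This is a standard result, not one derived in the article, and the numerical values below are assumed for illustration:

```latex
% Reliability of a discrepancy (difference) score D = X - Y
% between two standardized, equal-variance measures:
%   r_xx, r_yy : reliabilities of the IQ and achievement tests
%   r_xy       : correlation between the two tests
r_{DD'} = \frac{\tfrac{1}{2}\,(r_{xx} + r_{yy}) - r_{xy}}{1 - r_{xy}}
% Example (assumed values): r_xx = r_yy = .90, r_xy = .60
%   r_DD' = (.90 - .60) / (1 - .60) = .75,
% noticeably lower than either test's own reliability.
```

The more strongly the two tests correlate, the less reliable their difference becomes, which is precisely the situation with IQ and achievement measures.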
A substantial body of research has concluded that using an ability-achievement discrepancy model is not a valid approach for identifying SLD (Fletcher et al., 2002; Francis, Shaywitz, Stuebing, Shaywitz, & Fletcher, 1996; Hoskyn & Swanson, 2000; Vellutino et al., 2000). Most researchers would agree that discrepant and nondiscrepant low-achievers differ to some degree. The central issue is to determine whether those differences are meaningful enough to warrant continued research and to influence practice. A consistent theme in the research literature on discrepant versus non-discrepant low-achievers is how to distinguish "real LD" students from low-achieving students. We have become so intent on preserving the construct of LD, as arbitrarily operationalized by the discrepancy model, that we seem to have forgotten that the more important goal is to help children who are struggling academically. Whereas studies comparing IQ discrepant and non-discrepant students have demonstrated no meaningful differences between the two groups, studies of students defined as responders and nonresponders to interventions using RtI procedures have clearly demonstrated differences on key variables among the groups. For example, a number of studies found that nonresponders to early intervention differed from responders in both preintervention achievement scores and preintervention cognitive tasks (Stage, Abbott, Jenkins, & Berninger, 2003; Vaughn, Linan-Thompson, & Hickman, 2003; Vellutino et al., 2000). Moreover, Fletcher et al. (2004) found through neuroimaging procedures that intervention nonresponders tend to have deficient left hemispheric activity in areas of the brain that are consistent with the development of reading skills, providing further evidence of the differentiation between responders and nonresponders to high quality interventions.
Previous research also suggests that the discrepancy approach lacks strong evidence of external validity with respect to achievement, behavior, neurobiological factors, prognosis, and response to intervention (Fletcher et al., 2002).
A third criticism of the ability-achievement discrepancy model for identifying SLD is the inconsistent manner in which this approach is applied by practitioners. Gresham, MacMillan, and colleagues concluded that over half of the school-identified SLD children included in their studies did not meet federal or state eligibility criteria (Gresham, MacMillan, & Bocian, 1996; MacMillan, Gresham, & Bocian, 1998; MacMillan & Speece, 1999). That is, many of the children included in these studies did not demonstrate a significant discrepancy, had IQ scores below 75 (i.e., mild mental retardation [MMR]), or were Emotionally Disturbed (ED). In addition, Gresham, MacMillan, and colleagues reported that an unknown number of children who did in fact meet the criteria for SLD were not identified as such. Furthermore, a number of researchers have concluded that SLD eligibility criteria are not uniformly applied within and across states and local school districts (Bocian, Beebe, MacMillan, & Gresham, 1999; Gottlieb, Alter, Gottlieb, & Wishner, 1994; MacMillan et al., 1998; Peterson & Shinn, 2002). However well-intentioned, school personnel who select children for special education in such an inconsistent and subjective manner negate the very objectivity and precision the discrepancy model purports to offer. Furthermore, it is reasonable to assume that school personnel will continue to identify students for special education based on their perceptions regarding the individual needs of their students.
A fourth criticism of the ability-achievement discrepancy approach is that many students who experience long-term academic achievement problems never receive special education services because of below-average intellectual ability (i.e., the "slow learner"). This is a problem with which school psychologists working under an ability-achievement discrepancy mandate are all too familiar. For example, a child with an IQ score of 85 and a reading decoding score of 70 is not likely to receive special education services. In this scenario, the student's IQ score is not low enough to warrant special education placement as MR, nor does the student demonstrate the necessary discrepancy between ability and achievement to qualify for special education as SLD. Although few would argue that such a child demonstrates an urgent need for the type of support available from special education, school psychologists and educators have been hamstrung for nearly 30 years by laws and regulations that prevent them from helping a child with this all-too-common profile. The result of this scenario is that school psychologists and educators are presented with a serious ethical dilemma: either qualify a child for special education as SLD who does not meet the criteria for such a placement, or deny services to a child who would clearly benefit from them. As a result, many school psychologists engage in questionable practices in their effort to address the academic achievement needs of such children. This conclusion is consistent with those of MacMillan, Gresham, Lopez, and Bocian (1996) and Gottlieb et al. (1994), who indicated that school personnel tend to base their decisions on an "absolute low achievement" criterion, thereby ignoring the discrepancy component.
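The arithmetic behind this example can be made explicit. Assuming standard scores (mean 100, SD 15) and a hypothetical 1.5-SD severe-discrepancy criterion (actual cutoffs vary by state and district):

```latex
% Standard scores: mean = 100, SD = 15
% Hypothetical severe-discrepancy criterion: 1.5 SD = 22.5 points
\text{discrepancy} = \text{IQ} - \text{achievement} = 85 - 70 = 15 \;\; (1.0\ SD)
% 15 < 22.5: the child fails the SLD discrepancy criterion, while an
% IQ of 85 is above typical MR cutoffs (roughly 70-75), so the child
% qualifies under neither category despite clear academic need.
```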
Although use of the ability-achievement discrepancy approach is problematic due to the previously described issues inherent in this approach, perhaps the more troubling part of the ability-achievement equation is the use of intelligence tests for the identification of SLD at all.
A fifth criticism questions the use of intelligence tests in any manner as part of the SLD definition. Originally, the rationale for including intelligence tests as part of the definition of SLD was to determine whether a student's underachievement in a given area of academic achievement was expected or unexpected. This conceptualization can be traced to the Isle of Wight studies by Rutter and Yule (1975). These authors identified two types of reading underachievement that they termed general reading backwardness (GRB) and specific reading retardation (SRR). GRB was defined as reading below the level expected of a child's chronological age, whereas SRR was defined as reading below the level predicted by a child's intelligence (i.e., discrepant underachievement). This conceptualization formed the basis for current notions of expected and unexpected underachievement.
The concept of unexpected underachievement has been a central premise in the conceptualization of SLD. That is, it is reasonable to expect that if a child performs within the average range on some measure of intelligence, his or her performance in the various areas of academic achievement should also be in the average range. Following this logic, it is also reasonable to assume that if a child performs within the average range on some measure of intelligence, but his or her performance in an area of academic achievement is significantly below average, then his or her performance in that area is unexpected. The latter scenario represents the most fundamental component of the construct of SLD. The logic of this, however, rests on the faulty premise that IQ and academic achievement are perfectly correlated. In fact, at best, the correlation between measures of cognitive ability and academic achievement rarely exceeds .60, thereby accounting for only 36% of shared variance (Sattler, 2001). Although determining expected or unexpected underachievement was a major reason for including intelligence tests in the identification of SLD, over the past 30 years the use of intelligence tests has expanded far beyond its original intent.
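The 36% figure follows directly from squaring the correlation coefficient (the coefficient of determination):

```latex
r^{2} = (0.60)^{2} = 0.36 \quad \Longrightarrow \quad 36\% \text{ shared variance}
% Equivalently, roughly 64% of the variance in academic achievement
% is left unexplained by IQ, which is why average intelligence does
% not guarantee average achievement.
```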
IQ Tests and RtI
A number of researchers have argued that IQ tests should continue to be an integral component of a comprehensive assessment for identifying children with suspected learning disabilities (Flanagan, Ortiz, Alfonso, & Dynda, 2006; Hale, Naglieri, Kaufman, & Kavale, 2006). More specifically, these researchers posit that children who do not respond to research-based interventions within an RtI framework should be given intelligence tests to help school psychologists and other invested professionals identify the cognitive or psychological processes that are adversely impacting each child's academic performance. With this perspective in mind, we believe there are a number of reasons why IQ tests should not necessarily be included as part of the assessment process for children who have not responded to interventions in the initial phases of the RtI process.
First, as previously stated, the authors and publishers of popular intelligence tests such as the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV; Wechsler, 2003), Cognitive Assessment System (CAS; Naglieri & Das, 1997), Kaufman Assessment Battery for Children-Second Edition (KABC-II; Kaufman & Kaufman, 2004), and Woodcock-Johnson III Tests of Cognitive Ability (W-J III COG; Woodcock, McGrew, & Mather, 2001) assert that their measures can assist school psychologists and other educators in identifying the cognitive or psychological processes that have led to a child's academic underachievement (Kaufman & Kaufman, 2004; Naglieri & Das, 1997, 2005; Woodcock et al., 2001). Furthermore, these researchers posit that once these underlying processing strengths and weaknesses are identified, instructional treatments can be developed to produce positive academic achievement outcomes. The assumption that instructional treatments can be matched to aptitudes or cognitive processes to produce unique and positive academic outcomes is not new. This idea can be traced back to Cronbach's research on aptitude × treatment interactions (ATI; Cronbach, 1957, 1975). The basic logic of ATIs is that instructional treatments can be matched to aptitudes or modalities (e.g., auditory processing, visual processing). It is believed that if instructional treatments are matched to processing strengths, or if aptitude weaknesses are targeted for remediation, improved academic performance will result. Although the idea of matching instructional treatments to aptitudes is intuitively appealing, empirical evidence supporting the existence of ATIs is weak and, for the most part, nonexistent (Ayers & Cooley, 1986; Cronbach, 1975; Gresham, 2002; Kavale & Forness, 1987; Torgesen, 2002).
For example, Torgesen (2002) concluded that speculation about processing weaknesses as they relate to a child's academic difficulties is often not supported by scientific evidence and represents "psychometric phrenology" that has limited reliability and instructional utility. To our knowledge, there is not a single randomized clinical trial using the Institute of Education Sciences (IES) evidence-based standards that has related processing strengths to effective intervention outcomes. Despite the paucity of research supporting the existence of ATIs, school psychologists continue to focus on cognitive strengths and weaknesses and their presumed relevance to treatment. We have reached a point in school psychology and education at which, when we discuss a child's achievement difficulties, we automatically attribute them to some "processing deficit" inherent within the child. Although intelligence tests were originally developed to determine individuals' overall cognitive ability and used by educators to determine special education eligibility (i.e., expected versus unexpected underachievement), school psychologists now regularly use and "interpret" intelligence tests for the purposes of identifying the processing strengths and weaknesses that "cause" a child to perform poorly in some area of academic achievement.
The authors of popular intelligence tests such as the CAS (Naglieri & Das, 1997), WISC-IV (Wechsler, 2003), KABC-II (Kaufman & Kaufman, 2004), and the W-J III COG (Woodcock et al., 2001) actually discourage the use of their overall scores (i.e., Full Scale IQ) and strongly urge the user to use their tests for the purposes of identifying processing strengths and weaknesses. These authors imply that their tests are not necessarily measures of intelligence, but rather measures of processing. Ironically, when these tests were validated, they were not validated against other tests purporting to measure similar constructs such as auditory processing or memory, but rather against other well-established tests of intelligence. Although advocates of intelligence testing argue that the core procedure of a comprehensive evaluation of LD is an objective, norm-referenced assessment of the presence and severity of any cognitive processing strengths and weaknesses (Flanagan et al., 2006; Hale et al., 2006), there is no corpus of evidence showing that such a practice enhances SLD identification, controls prevalence, translates into more effective instruction, or improves prediction of intervention outcomes (Cronbach, 1975; Fletcher et al., 2002; Gresham, 2002; Gresham, in press; Torgesen, 2002). Absent such evidence, the benefits of intelligence and psychological process testing simply do not outweigh the costs in terms of school personnel time, resources, and student outcomes (Gresham & Witt, 1997; Reschly & Wilson, 1995).
A second important issue regarding the use of intelligence tests within an RtI framework pertains to the manner in which the authors of intelligence tests recommend that we use their measures. As previously stated, the authors and publishers of intelligence tests have assured us that by using their methods of interpretation, not only can their tests help us identify processing strengths and weaknesses, but in doing so we can create instructional interventions that will help children who are struggling academically to improve their academic performance. Methods of interpretation recommended by Kaufman and others include ipsative or profile analysis (i.e., subtest analysis; Kaufman, 1994). The two common methods of subtest analysis involve: (a) comparing individual subtest scores to the child's own mean subtest score and (b) directly comparing one subtest score to another for the purposes of identifying specific patterns of subtest scores. Proponents of subtest analysis posit that subtest scores significantly higher or lower than a child's own average represent relative and/or cognitive strengths and weaknesses. Additionally, certain subtest patterns are thought to be unique and indicative of learning and emotional problems. Although a thorough review of the subtest analysis literature is beyond the scope of this paper, research on the topic has reached the following conclusions. First, subtests have low reliability and specificity; therefore, making decisions regarding cognitive strengths and weaknesses based on the scores produced from these measures is an unsound practice (Macmann & Barnett, 1997; Watkins et al., 2005). Second, ipsative subtest scores do not contribute anything to the prediction of academic achievement not already accounted for by the global Full Scale score (Macmann & Barnett, 1997; McDermott et al., 1990).
Third, ipsative scores cannot be interpreted as if they possess the same psychometric properties as normative scores; therefore, such interpretation is not recommended (McDermott & Glutting, 1997; Watkins et al., 2005). Fourth, it is not uncommon for children to demonstrate a considerable degree of variation across subtests; thus, score differences obtained from subtests should not be used to make diagnostic decisions. Fifth, not all children from a particular diagnostic category will exhibit the profile thought to be unique to that diagnostic category (Watkins et al., 2005). Overall, proponents of subtest analysis have not demonstrated that this practice has adequate reliability, diagnostic utility, or treatment validity. Despite overwhelming evidence to the contrary, many school psychologists continue to use, or rather misuse, intelligence tests in a manner that is inconsistent with these research findings.
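For readers unfamiliar with the mechanics, the following is a minimal sketch of method (a), ipsative comparison, using invented scaled scores and a 1-SD flagging rule. The subtest names, scores, and cutoff are illustrative assumptions only, not tied to any specific instrument, and the procedure itself is the one the research above finds unreliable:

```python
# Hypothetical sketch of ipsative (profile) subtest analysis as
# described in the text -- NOT an endorsed diagnostic procedure.
# Subtest scaled scores (mean 10, SD 3) for one child; names and
# values are invented for illustration.
subtests = {
    "vocabulary": 13,
    "block_design": 7,
    "digit_span": 9,
    "coding": 11,
}

# Method (a): compare each subtest to the child's own mean.
ipsative_mean = sum(subtests.values()) / len(subtests)
deviations = {name: score - ipsative_mean for name, score in subtests.items()}

# A deviation of +/- 3 points (1 SD of scaled scores) is sometimes
# treated as a "relative strength/weakness" -- the practice the
# cited research finds psychometrically unsound.
flagged = {n: d for n, d in deviations.items() if abs(d) >= 3}
print(ipsative_mean)  # 10.0
print(flagged)        # {'vocabulary': 3.0, 'block_design': -3.0}
```

Note that the flags are relative to the child's own mean: a child whose scores are uniformly low would show no "weaknesses" at all, one illustration of why ipsative scores lack the properties of normative scores.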
A third issue regarding the use of intelligence testing within an RtI framework is the assumption that RtI alone will not provide the information necessary to develop appropriate interventions for students who do not respond sufficiently to initial attempts to prevent or remediate their academic difficulties. Proponents of intelligence testing argue that without IQ tests and other measures of psychological processing, school psychologists and teachers will not have the information needed for developing interventions for children experiencing academic difficulties (Flanagan et al., 2006; Hale et al., 2006). Specifically, these researchers argue that children who do not respond to research-based interventions within an RtI framework should be given intelligence tests to help identify the cognitive/psychological processes that are causing their academic underachievement. Although a number of these researchers have recently conceded that the discrepancy approach for identifying SLD is flawed beyond repair, they insist that measures of intelligence and psychological processing complement RtI. Many proponents of RtI would counter that traditional SLD assessments that incorporate the use of intelligence tests lack treatment validity or instructional utility (Fuchs & Fuchs, 1998; Gresham, 2002; Reschly & Ysseldyke, 2002). Whether used within a discrepancy approach or as a complement to RtI, measures of intelligence and/or psychological processing do not add the information necessary for developing instructional interventions (Fletcher et al., 2004; Gresham, 2006; National Association of State Directors of Special Education, 2005; Reschly & Ysseldyke, 2002). In short, many proponents of RtI take issue not only with the use of IQ tests within a discrepancy approach, but with the use of IQ tests in and of themselves for the purposes of identifying SLD.
It is precisely because measures of intelligence and psychological processing fail to provide school psychologists and teachers with the information necessary to develop instructional interventions that researchers were compelled to seek and explore alternative approaches for identifying SLD. One such approach was RtI. Within an RtI approach, instead of asking, "What kind of processing deficit does the child have?" we ask, "What problems is the child demonstrating, where and when do they occur, and what can we do about them?"
Response to Intervention
Proponents of RtI acknowledge that this approach is unlikely to address the academic needs of all children in its initial phases and that some children will require long-term intensive interventions. It is, however, important to recognize that RtI serves a number of important functions. First, RtI allows educators the opportunity to address the academic needs of children at the first signs of problems. The only criterion necessary for a student to receive additional support within an RtI framework is that he or she demonstrate a need. Concerns such as discrepant versus non-discrepant underachievement, expected versus unexpected underachievement, processing strengths and/or weaknesses, and levels of intelligence become non-issues within an RtI approach. Since we are referring to the long-term academic health of our children, this point cannot be over-emphasized. As previously stated, educators are often forced to wait for children to continue to fail before providing them with the support they need and to which they are legally entitled; such unwillingness or inability to act in a timely manner is equivalent to educational malpractice.
Second, RtI is a decision-making framework predicated on a systematic assessment process. By employing a problem-solving approach within an RtI framework, school psychologists and teachers are able to directly and accurately identify a child's problems, analyze and determine why the problems are occurring, develop and implement interventions to address the child's needs, and monitor the effects of the interventions. RtI will allow educators to gather meaningful information that is directly related to the child's academic underachievement, thereby reducing the amount of inference necessary when making decisions based on the results of tests of intelligence and processing. Perhaps the most compelling aspect of RtI assessment practices is that they allow educators to proactively provide assistance to students on an as-needed basis before they have developed a well-ingrained pattern of academic problems and failure. Moreover, when data show that a student has not responded to a well-delivered set of empirically sound intervention strategies, the intervention team has, at their fingertips, a comprehensive collection of data by which they can make well-informed decisions as to the need for more intensive services and supports including, but not limited to, special education and related services (Gresham, 2004).
Third, due to the previously described problems with the discrepancy model, RtI provides educators with a viable alternative for the assessment and treatment of children at risk for SLD. In addition to questionable empirical support, incorporating measures of intelligence and processing into an RtI framework runs counter to the core principles of RtI. At the foundation of RtI is an assessment process that employs direct, repeated measures of a student's academic progress. A traditional assessment approach that uses measures of intelligence and processing employs indirect measures that are typically administered only once. When school psychologists and other educators use indirect measures such as tests of intelligence and processing, which require unacceptable amounts of inference and guesswork for making educational decisions, the likelihood of making incorrect educational decisions is significantly increased. A secondary, but equally important, premise of RtI is providing research-based interventions to students demonstrating academic underachievement, whereas the focus of traditional testing models is classification and compliance. Since RtI uses direct measures of assessment such as CBM, DIBELS, and STEEP to evaluate a student's difficulties, the likelihood of developing an intervention that actually addresses the student's needs is greatly increased. That is, RtI has treatment validity because it employs assessment measures that are linked to intervention. As previously stated, the authors of intelligence and processing tests have not empirically demonstrated that matching instructional treatments to cognitive processes or aptitudes leads to positive educational outcomes.
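To illustrate what "direct, repeated measurement" looks like in practice, the sketch below summarizes a series of hypothetical CBM oral-reading-fluency probes with an ordinary least-squares growth slope, one common (though not the only) way intervention teams quantify a student's response. All numbers are invented for illustration:

```python
# Hypothetical sketch: summarizing repeated CBM probes with an
# ordinary least-squares growth slope. Weekly oral-reading-fluency
# scores (words correct per minute); values are invented.
weeks = [1, 2, 3, 4, 5, 6]
wcpm  = [22, 25, 24, 28, 31, 33]

n = len(weeks)
mean_x = sum(weeks) / n
mean_y = sum(wcpm) / n

# OLS slope: covariance of (weeks, wcpm) over variance of weeks.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(weeks, wcpm)) / \
        sum((x - mean_x) ** 2 for x in weeks)

# The slope (gain in words correct per minute per week) is compared
# against a growth benchmark chosen by the team to judge "response."
print(round(slope, 2))  # 2.2
```

Because the same brief probe is repeated over time, the team evaluates both level and rate of growth directly, rather than inferring instructional needs from a one-time indirect measure.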
Fourth, from a logistical viewpoint, incorporating testing into an RtI approach presents school psychologists and other educators involved in the special education eligibility process with a time-management and organizational nightmare. School psychologists and educators involved in this process must decide how best to use their time. If, in addition to their responsibilities within an RtI approach, school psychologists are also expected to conduct evaluations that include a battery of tests yielding little, if any, additional useful information, the likely result is a poorly executed RtI model. For RtI to be successful, school psychologists must be allotted the time to consult with teachers throughout the course of the RtI approach, which will not be the case if testing loads remain as they have for the past 30 years. School psychologists have historically operated from an assessment- and/or diagnosis-based orientation; however, it is of utmost importance that school psychologists shift their thinking to an intervention-based perspective. This shift must move from a Refer-Test-Place logic to a Refer-Assess-Intervene-Evaluate logic. School psychologists, educators, and parents must learn to trust the empirical research to guide practice; otherwise, conducting research becomes an activity with no practical implications.
Fifth, an RtI approach is likely to reduce identification biases. A teacher's decision to refer a child for SLD assessment is typically guided by the student's performance relative to the modal performance of the other students in the class or to that of other low-performing students (MacMillan & Siperstein, 2002). This method of referral is based largely on teacher opinion, which is likely to lead to differential rates of referral due to teacher tolerance, teachers' perceptions of student progress, and teachers' optimism about their capacity to effectively teach a student within the context of a larger group (Zigmond, 1993). Furthermore, teacher referral may also be influenced by factors such as a student's gender, socioeconomic status, and/or ethnicity (MacMillan & Siperstein, 2002). Donovan and Cross (2002) contend that an RtI approach to referral has the potential to reduce, and possibly eliminate, the overrepresentation of certain minority groups in special education that stems from the biases inherent in the teacher referral process. Universal screening of all students, incorporated into a problem-solving model within an RtI framework, has the potential to reduce disproportionate identification of academic difficulties by ethnicity and gender and is superior to other identification methods such as teacher referral (Donovan & Cross, 2002; VanDerHeyden, Witt, & Naquin, 2003).
Sixth, the focus of RtI is on child outcomes. RtI emphasizes direct measurement of achievement and the instructional environment as the focus of a comprehensive evaluation of a child's academic difficulties, and it emphasizes assessment of measurable and changeable aspects of a child's academic performance and instructional environment. Within an RtI approach, changeable aspects of a child's instructional environment that are considered in the assessment process include: alterable instructional variables such as pace of instruction and opportunity to respond, prior and current instructional opportunities, and the application of evidence-based instructional strategies (National Reading Panel, 2000; Witt, VanDerHeyden, & Gilbertson, 2007). Clay (1987) suggests that many children learn to be learning disabled because they are exposed to ineffective or marginally effective general education reading curricula and instruction that either have not been empirically validated or have been implemented with poor integrity (National Reading Panel, 2000). Treatment integrity is also a key component of RtI and must be directly measured over time to ensure that interventions are being implemented as planned (Gresham, 1989).
To reiterate, the primary contention proffered against employing an RtI model without tests of intelligence or psychological processing is that RtI methods alone do not constitute a comprehensive evaluation (Hale et al., 2006). That is, individuals supporting this position insist that the administration of IQ and/or psychological processing tests is a necessary condition for a comprehensive evaluation. What these individuals fail to recognize, however, is that the student response data resulting from RtI procedures are not the only components of an RtI-based comprehensive evaluation. Rather, data obtained from record reviews, interviews, direct observations, rating scales, and/or medical screenings are combined with student response data to constitute the comprehensive evaluation and to inform decisions as to whether the student has an underlying disability and a need for special education services (Gresham, 2002; Gresham et al., 2004). Knowing a child's overall cognitive ability will likely be deemed important for some special education referrals. However, the notion that tests of intelligence and/or psychological processing are necessary for all children who are at risk for a learning disability and may need long-term, intensive intervention is without empirical merit. The key features of a comprehensive evaluation under an RtI model are the direct measurement of achievement, behavior, and the instructional environment in relevant domains. This shifts the focus of assessment from a search for a within-child pathology to one concerned primarily with measurable and changeable aspects of the instructional environment that are related to child outcomes. The authors of the current paper concur with the many researchers who have called for the abandonment of intelligence tests for the purposes of identifying children with SLD.
Many researchers and practitioners in school psychology and special education would agree that moving from a classification and/or eligibility-based assessment approach to one that focuses on intervention would be in the best interest of children experiencing academic achievement difficulties (e.g., Burns & Ysseldyke, 2005; Fuchs, Mock, Morgan, & Young, 2003; Gresham, 2001, 2002, 2005; Kovaleski & Prasse, 2004). In light of the numerous problems with the IQ-achievement discrepancy model, RtI may offer the most viable approach for making this shift. RtI has not only garnered the support of many researchers and practitioners across the country, but it has been endorsed by the President's Commission on Excellence in Special Education (PCESE, 2001) and the National Association of School Psychologists (NASP, 2007).
RtI offers an improved approach to assessment that allows educators to help children they know are struggling while circumventing the problems that many school psychologists and special education teachers face when using the IQ-achievement discrepancy approach. Further, an RtI approach to eligibility determination moves away from measures that yield minimal benefits with respect to treatment and instead focuses on direct measures of student achievement and the instructional environment, which produce data that are in the best interest of both the children served and the educators who serve them. The data resulting from the application of RtI methods allow school psychologists and teachers to focus on issues related to intervention rather than issues related to classification and eligibility. Although RtI is not a perfect system, it is an approach with promising empirical support, which is not the case for the traditional, testing-oriented IQ-achievement discrepancy model. The authors of this paper have been unable to locate an empirically based rationale for the inclusion of measures of intelligence or psychological processing within a properly conducted RtI approach. Old habits die hard, but when it becomes clear that old habits are also bad habits, the time for making the paradigm shift that Reschly and Ysseldyke (1995) spoke of over a decade ago is truly upon us.
Alexander, K. L., Entwisle, D. R., & Horsey, C. S. (1997). From first grade forward: Early foundations of high school dropout. Sociology of Education, 70, 87-107.
Ayers, R. R., & Cooley, E. J. (1986). Sequential versus simultaneous processing on the K-ABC: Validity in predicting learning success. Journal of Psychoeducational Assessment, 4, 211-220.
Bocian, K. M., Beebe, M. E., MacMillan, D. L., & Gresham, F. M. (1999). Competing paradigms in learning disabilities classification by schools and the variations in the meaning of discrepant achievement. Learning Disabilities Research and Practice, 14, 1-14.
Burns, M. K., & Ysseldyke, J. E. (2005). Comparison of existing responsiveness-to-intervention models to identify and answer implementation questions. The California School Psychologist, 10, 9-20.
Clay, M. (1987). Learning to be learning disabled. New Zealand Journal of Educational Studies, 22, 155-173.
Cronbach, L. J. (1957). The two disciplines of scientific psychology. American Psychologist, 12, 671-684.
Cronbach, L. J. (1975). Beyond the two disciplines of scientific psychology. American Psychologist, 30, 116-127.
Donovan, S., & Cross, C. (Eds.). (2002). Minority students in gifted and special education. Washington, DC: National Academy Press.
Education for All Handicapped Children Act, Pub. L. No. 94-142 (1975). Regulations appeared in 1977.
Flanagan, D. P., & Kaufman, A. S. (2004). Essentials of WISC-IV assessment. Hoboken, NJ: John Wiley and Sons, Inc.
Flanagan, D. P., Ortiz, S. O., Alfonso, V. C., & Dynda, A. M. (2006). Integration of response to intervention and norm-referenced tests in learning disability identification: Learning from the Tower of Babel. Psychology in the Schools, 43, 807-825.
Fletcher, J. M., Coulter, W. A., Reschly, D. J., & Vaughn, S. (2004). Alternative approaches to the definition and identification of learning disabilities: Some questions and answers. Annals of Dyslexia, 54, 304-331.
Fletcher, J. M., & Lyon, G. R. (1998). Reading: A research-based approach. In W. M. Evers (Ed.), What's gone wrong in America's classrooms (pp.49-90). Stanford, CA: Hoover Institution Press.
Fletcher, J. M., Lyon, G. R., Barnes, M., Stuebing, K. K., Francis, D. J., Olson, R. K., Shaywitz, S. E., & Shaywitz, B. A. (2002). Classification of learning disabilities: An evidence-based evaluation. In R. Bradley, L. Danielson, & D. Hallahan (Eds.), Identification of learning disabilities: Research to practice (pp. 467-519). Mahwah, NJ: Erlbaum.
Francis, D. J., Fletcher, J. M., & Stuebing, K. K. (2005). Psychometric approaches to the identification of LD: IQ and achievement scores are not sufficient. Journal of Learning Disabilities, 38, 98-108.
Francis, D. J., Shaywitz, S. E., Stuebing, K. K., Shaywitz, B. A., & Fletcher, J. M. (1996). Developmental lag versus deficit models of reading disability: A longitudinal individual growth curves analysis. Journal of Learning Disabilities, 24, 495-500.
Fuchs, L. S., & Fuchs, D. (1998). Treatment validity: A unifying concept for reconceptualizing the identification of learning disabilities. Learning Disabilities Research & Practice, 13, 204-219.
Fuchs, D., Mock, D., Morgan, P. L., & Young, C. L. (2003). Responsiveness-to-intervention: Definitions, evidence, and implications for the learning disabilities construct. Learning Disabilities Research & Practice, 18, 157-171.
Gottlieb, J., Alter, M., Gottlieb, B., & Wishner, J. (1994). Special education in urban America: It's not justifiable for many. Journal of Special Education, 27, 453-465.
Gresham, F. M. (1989). Assessment of treatment integrity in school consultation and prereferral intervention. School Psychology Review, 18, 37-50.
Gresham, F. (2001). Responsiveness to intervention: An alternative to the identification of learning disabilities. Paper presented at the 2001 Learning Disabilities Summit: Building a Foundation for the Future. Retrieved March 8, 2007, from http://www.air.org/ldsummit/download.
Gresham, F. M. (2002). Responsiveness to intervention: An alternative approach to the identification of learning disabilities. In R. Bradley, L. Danielson, & D. Hallahan (Eds.), Identification of learning disabilities: Research to practice (pp. 467-519). Mahwah, NJ: Erlbaum.
Gresham, F. M. (2004). Current status and future directions of school-based behavioral interventions. School Psychology Review, 33, 326-343.
Gresham, F. M. (2005). Response to intervention: An alternative means of identifying students as emotionally disturbed. Education and Treatment of Children, 28, 328-344.
Gresham, F. M. (2006). Response to intervention. In G. Bear & K. Minke (Eds.), Children's needs III (pp. 525-540). Bethesda, MD: National Association of School Psychologists.
Gresham, F. M. (in press). RTI and identification of Specific Learning Disabilities: An analysis of some critical issues. Communique.
Gresham, F. M., Reschly, D. J., Tilly, D., Fletcher, J., Burns, M., Christ, T., Prasse, D., Vanderwood, M., & Shinn, M. (2004). Viewpoint: Response to AASP. Comprehensive evaluation of learning disabilities: A response-to-intervention perspective. Communique, 33(4), 34-35.
Gresham, F. M., & Witt, J. C. (1997). Utility of intelligence tests for treatment planning, classification, and placement decisions: Recent empirical findings and future directions. School Psychology Quarterly, 12, 249-267.
Hale, J. B., Kaufman, A., Naglieri, J. A., & Kavale, K. A. (2006). Implementation of IDEA: Integrating response to intervention and cognitive assessment methods. Psychology in the Schools, 43, 753-770.
Hoskyn, M., & Swanson, H. L. (2000). Cognitive processing of low achievers and children with reading disabilities: A selective meta-analytic review of the published literature. School Psychology Review, 29, 102-119.
Individuals with Disabilities Education Act, Pub. L. No. 101-476 (1990).
Individuals with Disabilities Education Improvement Act, Pub. L. No. 108-446 (2004).
Juel, C. (1988). Learning to read and write: A longitudinal study of 54 children from first to fourth grades. Journal of Educational Psychology, 80, 437-447.
Kaufman, A.S. (1994). Intelligent testing with the WISC-III. New York: Wiley.
Kaufman, A. S., & Kaufman, N. L. (2004). Manual for the Kaufman Assessment Battery for Children--Second Edition (KABC-II), Comprehensive Form. Circle Pines, MN: American Guidance Service.
Kavale, K. (2002). Discrepancy models in the identification of learning disability. In R. Bradley, L. Danielson, & D. Hallahan (Eds.), Identification of learning disabilities: Research to practice (pp. 369-426). Mahwah, NJ: Erlbaum.
Kavale, K. A., & Forness, S. R. (1987). Substance over style: A quantitative synthesis assessing the efficacy of modality testing and teaching. Exceptional Children, 54, 228-234.
Kovaleski, J., & Prasse, D. P. (2004). Response to instruction in the identification of learning disabilities: A guide for school teams. Communique, 32(5), 14-18.
Lyon, G. R., Fletcher, J. M., Fuchs, L. S., & Chhabra, V. (2006). Learning disabilities. In E. Mash & R. Barkley (Eds.), Treatment of childhood disorders (pp. 512-591). New York: Guilford Press.
Macmann, G. M., & Barnett, D. W. (1997). Myth of the master detective: Reliability of interpretations of Kaufman's "intelligence testing" approach to the WISC-III. School Psychology Quarterly, 12, 197-234.
MacMillan, D. L., Gresham, F. M., & Bocian, K. M. (1998). Discrepancy between definitions of learning disabilities and school practices: An empirical investigation. Journal of Learning Disabilities, 31, 314-326.
MacMillan, D. L., Gresham, F. M., Lopez, M. F., & Bocian, K. M. (1996). Comparison of students nominated for prereferral interventions by ethnicity and gender. The Journal of Special Education, 30, 133-151.
MacMillan, D. L., & Siperstein, G. N. (2002). Learning disabilities as operationally defined by schools. In R. Bradley, L. Danielson, & D. Hallahan (Eds.), Identification of learning disabilities: Research to practice (pp. 287-333). Mahwah, NJ: Erlbaum.
MacMillan, D. L., & Speece, D. (1999). Utility of current diagnostic categories for research and practice. In R. Gallimore, C. Bernheimer, D. MacMillan, D. Speece, & S. Vaughn (Eds.), Developmental perspectives on children with high incidence disabilities (pp. 111-133). Mahwah, NJ: Lawrence Erlbaum.
Mastropieri, M. A., & Scruggs, T. E. (2002). Discrepancy models in the identification of learning disability: A response to Kavale. In R. Bradley, L. Danielson, & D. Hallahan (Eds.), Identification of learning disabilities: Research to practice (pp. 449-455). Mahwah, NJ: Erlbaum.
McDermott, P. A., Fantuzzo, J. W., & Glutting, J. J. (1990). Just say no to subtest analysis: A critique on Wechsler theory and practice. Journal of Psychoeducational Assessment, 8, 290-302.
McDermott, P. A., & Glutting, J. J. (1997). Informing stylistic learning behavior, disposition, and achievement through ability subtests--or, more illusions of meaning? School Psychology Review, 26, 163-175.
Naglieri, J. A., & Das, J. P. (1997). Cognitive Assessment System. Chicago: Riverside Publishing.
Naglieri, J. A., & Das J. P. (2005). Planning, Attention, Simultaneous, Successive (PASS) theory: A revision of the concept of intelligence. In D. Flanagan & P. Harrison (Eds.), Contemporary intellectual assessment: Theories, tests, and issues (pp. 120-136). New York: Guilford Press.
National Association of State Directors of Special Education (2005). Response to intervention: Policy considerations and implementation. Alexandria, VA: National Association of State Directors of Special Education, Inc.
National Association of School Psychologists (2007). NASP position statement on identification of students with specific learning disabilities. Bethesda, MD: National Association of School Psychologists.
National Reading Panel (2000). Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction. Washington, DC: National Institute of Child Health and Human Development.
Peterson, K. M. H., & Shinn, M. R. (2002). Severe discrepancy models: Which best explains school identification practices for learning disabilities? School Psychology Review, 31, 459-476.
President's Commission on Excellence in Special Education. (2001). A new era: Revitalizing special education for children and their families. Washington, DC: U.S. Department of Education.
Reschly, D. J., & Wilson, M. S. (1995). School psychology practitioners and faculty: 1986 to 1991-92 trends in demographics, roles, satisfaction, and system reform. School Psychology Review, 24, 62-80.
Reschly, D. J., & Ysseldyke, J. E. (2002). Paradigm shift: The past is not the future. In A. Thomas, & J. Grimes (Eds.), Best practices in school psychology IV (pp. 3-20). Bethesda, MD: National Association of School Psychologists.
Rutter, M., & Yule, W. (1975). The concept of specific reading retardation. Journal of Child Psychology and Psychiatry, 16, 181-197.
Sattler, J. M. (2001). Assessment of children: Cognitive applications. San Diego, CA: Jerome Sattler Publisher.
Shepard, L. (1980). An evaluation of the regression discrepancy method for identifying children with learning disabilities. Journal of Special Education, 14, 79-91.
Speece, D. (2002). Classification of learning disabilities: Convergence, expansion, and caution. In R. Bradley, L. Danielson, & D. Hallahan (Eds.), Identification of learning disabilities: Research to practice (pp. 467-519). Mahwah, NJ: Erlbaum.
Stage, S. A., Abbott, R. D., Jenkins, J. R., & Berninger, V. W. (2003). Predicting response to early reading intervention from verbal IQ, reading-related language abilities, attention ratings, and verbal IQ-word reading discrepancy: Failure to validate discrepancy method. Journal of Learning Disabilities, 36, 24-33.
Stanovich, K. E. (2000). Progress in understanding reading: Scientific foundations and new frontiers. New York: Guilford Press.
Stuebing, K. K., Fletcher, J. M., LeDoux, J. M., Lyon, G. R., Shaywitz, S. E., & Shaywitz, B. A. (2002). Validity of IQ-discrepancy classification of reading disabilities: A meta-analysis. American Educational Research Journal, 39, 469-518.
Torgesen, J. K., Alexander, A. W., Wagner, R. K., Rashotte, C. A., Voeller, K. K., & Conway, T. (2001). Intensive remedial instruction for children with severe reading disabilities: Immediate and long-term outcomes from two instructional approaches. Journal of Learning Disabilities, 34, 33-58.
Torgesen, J. K. (2002). Empirical and theoretical support for direct diagnosis of learning disabilities by assessment of intrinsic processing weaknesses. In R. Bradley, L. Danielson, & D. Hallahan (Eds.), Identification of learning disabilities: Research to practice (pp. 565-613). Mahwah, NJ: Erlbaum.
U.S. Office of Education. (1977). Assistance to states for education of handicapped children: Procedures for evaluating specific learning disabilities. Federal Register, 42(250), 65082-65085.
VanDerHeyden, A. M., Witt, J. C., & Naquin, G. (2003). Development and validation of a process for screening referrals to special education. School Psychology Review, 32, 204-227.
Vaughn, S., Linan-Thompson, S., & Hickman, P. (2003). Response to instruction as a means of identifying students with reading/learning disabilities. Exceptional Children, 69, 391-409.
Vellutino, F. R., Scanlon, D. M., & Lyon, G. R. (2000). Differentiating between difficult-to-remediate and readily remediated poor readers: More evidence against the IQ-achievement discrepancy definition for reading disability. Journal of Learning Disabilities, 33, 223-238.
Watkins, M. W., Glutting, J. J., & Youngstrom, E. A. (2005). Issues in subtest profile analysis. In D. Flanagan & P. Harrison (Eds.), Contemporary intellectual assessment: Theories, tests, and issues (pp. 251-268). New York: The Guilford Press.
Wechsler, D. (2003). Wechsler Intelligence Scale for Children--Fourth Edition: Technical and interpretive manual. San Antonio, TX: Psychological Corporation.
Williams, S., & McGee, R. (1994). Reading attainment and juvenile delinquency. Journal of Child Psychology and Psychiatry, 35, 441-459.
Witt, J. C., VanDerHeyden, A. M., & Gilbertson, D. (2007). A multi-year evaluation of the effects of a Response to Intervention (RTI) model on identification of children for special education. Journal of School Psychology, 45, 225-256.
Woodcock, R. W., McGrew, K. S., & Mather, N. (2001). Examiner's manual. Woodcock-Johnson III Tests of Cognitive Ability. Itasca, IL: Riverside Publishing.
Zigmond, N. (1993). Learning disabilities from an educational perspective. In G. R. Lyon, D. B. Gray, J. F. Kavanaugh, & N. A. Krasnegor (Eds.), Better understanding learning disabilities: New views from research and their implications for education and public policies (pp. 251-272). Baltimore, MD: Paul H. Brookes Publishing Co.
Alberto F. Restori
California State University, Northridge
Frank M. Gresham
Louisiana State University
Clayton R. Cook
University of California, Riverside
Address correspondence to Alberto Restori, Ph.D., Assistant Professor, Co-coordinator--School Psychology Program, Department of Educational Psychology and Counseling, Michael D. Eisner College of Education, California State University, Northridge, Northridge, California 91330-8265