Cognitive processing deficits and students with specific learning disabilities: a selective meta-analysis of the literature.

Abstract. Many practitioners and state education agency staff would likely agree that the accuracy and consistency of specific learning disability (SLD) eligibility decisions are in need of improvement. One component of the SLD definition that is particularly controversial in identification procedures is the evaluation of cognitive processes, primarily because of a lack of information about the role they might play in informing an SLD diagnosis and eligibility for special education services. A meta-analysis of 32 studies was conducted to examine the cognitive processing differences between students with SLD and typically achieving peers. The analysis found moderately large to large effect sizes in cognitive processing differences between groups of students with SLD and typically achieving students. These differences are of sufficient magnitude to justify including measures of cognitive processing ability in the evaluation and identification of SLD.
Ideally, diagnosis of a specific learning disability (SLD) should consist of a three-step process: (a) categorical diagnosis, (b) explanatory diagnosis, and (c) treatment planning (Witteman, Harries, Bekker, & VanAarle, 2007). As currently practiced, however, SLD diagnosis places nearly exclusive emphasis on treatment planning.
This is problematic on several levels. Most directly for the student, this can lead to treatment planning that does not appropriately address an individual student's needs (Swanson, 2009). Over time, the emphasis on treatment without first ensuring accurate classification erodes our understanding, treatment, research, and prevention of specific learning disabilities. The criteria for diagnosing an SLD are outlined in federal regulations (IDEA, 2004) as well as in clinical references such as the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) (American Psychiatric Association [APA], 1994), but these criteria have been neither easily nor consistently applied in practice.
Research on practitioners' assessment of SLD confirms that the outcomes of this approach have been less than optimal (Gerber, 1988, 2005; Hallahan & Mercer, 2002; MacMillan & Siperstein, 2002; Ysseldyke, Algozzine, Richey, & Graden, 1982). The poor application of the diagnostic process can be attributed to a lack of clarity of both the environmental information about students and the relevant, scientifically established knowledge (Witteman et al., 2007).
Factors Contributing to Problems with SLD Identification
The factors identified as contributing to problems with SLD identification generally fall into three main categories: (a) resources, (b) stakeholder values, and (c) measurement issues (Johnson, Mellard, & Byrd, 2006). Constraints on resources have a significant impact on SLD eligibility decisions. At the classroom level, resource constraints are primarily related to a teacher's ability to adequately meet the needs of all the students in the room. A teacher without a broad range of instructional strategies is less likely to be successful in reaching all students and, therefore, more students in his or her class may be identified as having an SLD (Gerber, 2005). At the school level, low-achieving students may be identified as having an SLD because resources to provide services to other categories of struggling learners are not available (MacMillan, Gresham, & Bocian, 1998).
Stakeholder values also impact identification procedures. The role that stakeholder values play is clearly demonstrated in research examining differences between researcher- vs. school-identified populations (Mellard, Deshler, & Barth, 2004; MacMillan & Siperstein, 2002). This is further evidenced in the current debate over using a response to intervention (RTI) approach or a cognitive hypothesis testing approach to eligibility decisions (see, e.g., Hiscock & Kinsbourne, 2009).
Finally, measurement issues are evident across the four stages of SLD determination: prereferral, referral, evaluation, and eligibility determination. Research has shown that problems exist across these four stages and that these problems result in inaccurate identification of students with learning disabilities. At the prereferral stage, many practitioners use nonvalidated or incomplete prereferral processes (Fuchs, Mock, Morgan, & Young, 2003). The focus on RTI in IDEA 2004 is on one level an attempt to bring more standardization to the prereferral stage. Such standardization can reduce the number of referrals, which is important because evidence has shown that once a student is referred, she or he is highly likely to be found eligible for services (Ysseldyke et al., 1982). At the evaluation stage, Hallahan and Mock (2003) noted the limited utility of the discrepancy formula for SLD identification. Although RTI is emerging as a possible alternative, many researchers caution that its exclusive use as an SLD determination model could result in many of the same issues the field faces now (Fletcher, Morris, & Lyon, 2003; Gerber, 2005; Hallahan, 2006). Others caution that even if RTI is implemented with fidelity, it fails to provide the explanatory diagnosis (e.g., why hasn't this student responded to instruction?) critical to improving treatment planning and long-term research efforts on SLD.
Calls for an Improved Classification Model
Together, these issues complicate the way in which SLD is diagnosed and have led to numerous calls for an improved classification model. However, while there is agreement that current methods are not optimal, there is not a strong consensus on what an improved model should look like.
Calls from the field to improve SLD identification center around the need to align operational definitions of SLD with theories about what SLD is and what the construct should accomplish (Francis, Fletcher, Catts, & Tomblin, 2005). Additionally, there is a need to move closer to (a) adoption of a systematic measurement framework, (b) identification of primary measurement constructs, and (c) a set of reference tests at a national level that represents the current best available measures of the constructs of interest (Morris, 1994). The concerns that hinder accurate SLD identification are evident in these calls.
Diagnostic classifications serve many purposes. Their primary function is to provide a commonly accepted method for grouping signs and symptoms into recognized clusters and assigning an identifying term (e.g., SLD) that is accepted by most practitioners. Such labeling facilitates the transmission of clinically relevant information to providers of treatment (e.g., teachers/intervention specialists) (Witteman et al., 2007). Consistent classification also serves as a commonly agreed-upon clinical definition to ensure uniformity in research and statistical evaluation. This is critical information, especially for teachers, because SLD represents a fairly unique disorder in which teachers play an important role in the evaluation and eligibility process. If the approach to identification is purely pragmatic (i.e., the child needs to be identified in order to receive intervention), the explanatory component (i.e., why does this particular child fail to respond to instruction?) of the classification process is lost. The result is fewer advances in the intervention and treatment research that can continue to move the field forward.
Accurate SLD determination is the most important outcome in improving SLD identification and subsequent service delivery. To achieve accuracy, a classification model must reliably distinguish students who have an SLD from those who do not. A primary difficulty in classification is that some conditions are not clearly defined and easily categorized. SLD is an example of such difficulties, due in large part to the heterogeneity of the disability.
A review of state-level regulations for SLD eligibility readily highlights the difficulty with reaching consensus on operational definitions. For example, some states use an RTI and noncategorical approach to identification (e.g., Iowa); some states have patterned their policy language on the federal regulations, which allow either an RTI approach or a pattern of strengths and weaknesses approach (e.g., Michigan); other states require a comprehensive evaluation that includes documentation of low achievement, an assessment of the instructional environment, and documentation of a cognitive processing deficit (e.g., Maine). The range in operational definitions may result in less clarity about SLD, because a student who is identified by one procedure may not be identified when a different procedure is used (Fuchs, Fuchs, & Compton, 2004; McMaster, Fuchs, Fuchs, & Compton, 2005; Sparks & Lovett, 2009).
Classification models for SLD consist of inclusion and exclusion criteria for group formation. However, these criteria often only reflect a symptom or set of symptoms rather than the condition itself. For example, the focus on low achievement and the ruling out of other explanations for the low achievement has led to an "SLD by default" approach to eligibility decisions, where SLD is primarily defined by low achievement that is not explained by hearing or vision problems and cultural or environmental factors.
The shift towards low achievement-only definitions of SLD is on the rise because the effects of a neuropsychological disorder, which are thought to underlie SLD, are usually manifested in the symptom of low achievement in reading, writing, or performing math calculations (Swanson, 2009). Given the relative ease of identifying low achievement compared to identifying cognitive processing deficits, models that rely on low achievement-only definitions of SLD can be appealing to practitioners. Thus, in a process that should include three steps (categorical diagnosis, explanatory diagnosis, treatment planning), there has been a steady shift away from the explanatory diagnosis (Hiscock & Kinsbourne, 2009).
Ideally, an SLD classification model would clearly specify which conditions to include or exclude, and provide clear rules about when these conditions were present. Conditions exist along a continuum, and clearly defined cut scores may not be a realistic goal. Instead, guidelines on a variety of factors will likely be needed. Achieving such a classification system may require a renewed emphasis on the explanatory diagnosis component of the classification process, as discussed below.
Low academic achievement is typically the first marker of an SLD (Fletcher, Lyon, Fuchs, & Barnes, 2007; Kavale, 2005; Mastropieri, 2001). Reading and math intervention research shows that many students respond well to evidence-based interventions (Fuchs et al., 2007; Torgesen et al., 1999), indicating that such intervention programs are the logical next step with low-achieving students. Studies also show that a small number of students do not respond as well to high-quality, evidence-based interventions (e.g., Torgesen et al.; Vellutino et al., 1996). Thus, they are underachieving based on an expectation that they should respond. Many of these treatment resisters have IQ scores in the average range (i.e., > 85) but achievement scores below the 25th percentile (Swanson, 2006). We suspect that some students, despite being of average intelligence, fail to achieve even when provided with high-quality instruction because of underlying cognitive processing deficits.
Cognitive processing deficits have long been considered to be at the core of SLD (Individuals with Disabilities Education Act [IDEA] 2004, Public Law 108-446), but these processes are not frequently assessed as a part of SLD determination. This omission in practice is worthy of reconsideration in light of a rapidly growing body of research that provides an understanding of how underlying cognitive processes are related to academic functioning as well as the difficulties children with learning disabilities tend to experience with them (Semrud-Clikeman, 2005).
Research suggests that deficits in various cognitive processes are manifested in low basic academic skills achievement (e.g., reading, math). In reading, for example, deficit processes commonly identified in the research include expressive and receptive language, phonological processing, processing speed (Flanagan, Ortiz, Alfonso, & Dynda, 2006; Pennington, 2009), and verbal working memory (Swanson, 2009). Phonological processing deficits have perhaps received the most attention in the research literature. Interventions that address deficits in these commonly researched processes have been shown to be effective for a significant number of poor readers (Torgesen, 2000).
Research also demonstrates that students with reading disabilities (RD) may experience deficits in cognitive processes that are not as thoroughly researched. For example, recent advances in cognitive neuroscience have increased our understanding of how children with SLD access information (Swanson, 2009). This concept of accessibility assumes that when a child is taught information needed to perform a task, that information resides in the child's mind and may be accessed to perform the task at a later time. Several researchers have converged on the notion that children with RD have difficulty drawing on previously taught information and that, therefore, the information remains relatively inert, and some reading tasks are difficult to perform. Key executive processing and self-monitoring strategies provided to such students during instruction can improve their information accessing processes (Swanson).
The research in mathematics is not as extensive as that in reading. Yet, deficits in verbal working memory, visual working memory (Hitch & McAuley, 1991), processing speed (Bull & Johnston, 1997; Swanson & Jerman, 2006), attention (Fuchs, Compton, Fuchs, Paulsen, Bryant, & Hamlett, 2005), and executive function (Geary, 2004) have been demonstrated to differentiate between average achievers and children with math disabilities (MD). These deficits manifest in difficulties with math fact fluency (Geary, Brown, & Samaranayake, 1991), problem solving (Geary), and number sense.
Despite this body of research and the inclusion of disorders in basic psychological processes in the federal definition, cognitive processes are not routinely assessed as part of the SLD identification process. The absence of clearly defined criteria for determining when a cognitive processing weakness constitutes a disability may be the reason for this practice. With IDEA's (2004) renewed emphasis on improving approaches to SLD identification, the debate about and interest in the role of cognitive processes has reemerged (Mather & Kaufman, 2006). There is strong support for a combined approach to SLD identification that is well summarized in the following three-step approach (Mather & Gregg, 2006):
1. Observe a limitation in one or more of the following achievement areas: reading (basic skills, fluency, comprehension); written language (basic skills, fluency, expression); or mathematics (basic skills, fluency, application). Rule out alternative explanations for the limitation (e.g., cognitive impairment, lack of opportunity).
2. Document the limitation using multiple sources of data (e.g., standardized or curriculum-based measurements using multiple test formats; response to intervention; teacher, student, and parent reports; class work samples; educational history).
3. Identify specific cognitive or linguistic correlates that appear to be related to the identified area of underachievement. Rule out alternative explanations for the cognitive or linguistic difficulties. Key to this approach is a focus on explanatory diagnoses that can help inform treatment planning. Cognitive processing assessments offer the potential for providing explanations about why and how individual differences occur (Swanson, 2009).
Evidence supports the view that students with SLD have cognitive process deficits when compared to their typically developing peers (Berninger, 2006; Semrud-Clikeman, 2005; Swanson & Jerman, 2006; Swanson, 2009). However, the research to date has not been systematically examined and synthesized to determine how these differences might inform comprehensive evaluation for special education eligibility decisions. Thus, the primary purpose of this study was to determine if differences in cognitive processes between children with SLD and average achievers are of sufficient magnitude to justify inclusion of such measures in SLD assessment batteries.
The central research questions in the present study were as follows: Is there evidence that ...
1. Differences in cognitive processing ability are of sufficient magnitude to warrant including them in the evaluation and identification of specific learning disabilities?
2. If significant differences exist, they vary as a function of definitional criteria?
To judge whether important differences exist between the cognitive processes of students with SLD and typically achieving students, we chose to conduct a meta-analysis. Meta-analysis allows conclusions to be drawn from a pattern of findings across several studies and provides stronger evidence for its conclusions than if drawn from a single study. Further, meta-analysis can help control for methodological issues that might limit the conclusions and generalizations from a single study. (1)
The criteria for judging differences between groups on effect sizes were based on Cohen's (1988) recommendations, in which effect sizes greater than 0.80 are considered a large difference, 0.50 moderate, and 0.20 small. Further, because research has identified that different cognitive processes underlie different academic areas, we opted to separately analyze studies that addressed students with reading disabilities (RD) and students with math disabilities (MD).
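Cohen's benchmarks, as applied in this analysis, can be expressed as a small helper function. This is an illustrative sketch only; the function name is ours, not part of the CMA software or the original analysis.

```python
def cohen_label(es):
    """Label the absolute magnitude of an effect size per Cohen's (1988)
    benchmarks: >= 0.80 large, >= 0.50 moderate, >= 0.20 small."""
    magnitude = abs(es)  # direction (sign) is ignored when judging magnitude
    if magnitude >= 0.80:
        return "large"
    if magnitude >= 0.50:
        return "moderate"
    if magnitude >= 0.20:
        return "small"
    return "negligible"
```

For example, the phonological processing effect reported later (ES = -1.276) would be labeled large under these benchmarks.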
Although a meta-analysis can synthesize research findings across a large number of studies, parameters for conducting the analysis must be delineated, and the selection of these parameters limits the generalization of findings to studies that fall within those parameters.
To identify relevant studies published in peer-reviewed journals, we used three approaches. First, we conducted a literature search of the ERIC, PsycINFO, EBSCO, and Academic Search Premier databases using keywords with a subject delimiter and Boolean truncation (see Figure 1). Second, we searched the reference lists of initially identified articles to locate studies not included in the initial pool and conducted a hand search of frequently cited authors such as Berninger, Fletcher, Fuchs, D., Fuchs, L., Geary, Shaywitz, Siegel, Swanson, and Vellutino. Third, we manually searched the journals in which many of the initially identified articles were published, including Journal of Educational Psychology, Journal of Experimental Child Psychology, Review of Educational Research, Journal of Child Psychology and Psychiatry, Reading and Writing, Child Development, Applied Psycholinguistics, Journal of Special Education, Cognition, Reading Psychology, Journal of Research in Reading, Journal of Abnormal Child Psychology, Developmental Neuropsychology, Journal of Learning Disabilities, Learning Disabilities Research & Practice, and Learning Disability Quarterly. We limited selected articles to publication dates from 1974 to 2008.
The initial search yielded more than 5,000 citations. We next reviewed the abstracts of these citations and selected only those articles that measured cognitive processing differences between students with SLD and students who were typically achieving and/or students with low achievement. This review narrowed the list to 177 potential studies.
Study Inclusion Criteria
To determine the relevance of the 177 potential studies to the current meta-analysis, we evaluated each study using the 10 inclusion criteria shown in Figure 2. Two independent readers reviewed the articles to determine if they met the selection criteria.
A total of 32 studies fully met the inclusion criteria for the meta-analysis, with inter-rater agreement exceeding 95%. The raters identified 26 studies that addressed RD (denoted in the references by *) and 9 studies that addressed MD (denoted in the references by =), three of which were also studies of RD.
Five general categories were established and applied to coding the 32 studies: (a) definitions of groups, (b) demographics, (c) definitions of academic achievement, (d) assessments used, and (e) cognitive processes assessed. Inter-rater coding agreements across categories exceeded 90%. However, the variety of definitions and measures encountered when coding achievement and cognitive processes posed a challenge for further analysis. The planned academic coding categories were based on the current federal definition of SLD (i.e., basic reading skills, reading comprehension, reading fluency, math calculation, math problem solving), but proved to be insufficient to describe these studies. In particular, all of the math studies' academic achievement results were measured by broad math ability measures that did not fit our coding scheme. As a result, we revised our cognitive process codes from 15 to 7 reading-related codes and 9 to 5 math-related codes, shown in Figure 3. Recoding the studies using these subcategories yielded an inter-rater agreement exceeding 90% for academic achievement, and exceeding 85% for cognitive processes.
Effect Size Calculation
The Comprehensive Meta-Analysis (CMA; Borenstein, Hedges, Higgins, & Rothstein, 2005) software supported our effect size (ES) calculations and analysis. CMA computes ES using researcher-entered data (i.e., M, SD, N) for treatment (in our case, students with SLD) and control (in our case, typically achieving students) groups. We assumed a fixed-effects model because the goal was to compute the common effect size, which would then be generalized to other examples of the same population (Borenstein, Hedges, & Rothstein, 2007). We used Hedges' g (Hedges & Olkin, 1985) as the primary index of ES. This inferential measure is computed by using the square root of the mean square error from the analysis of variance testing for differences between the two groups (Rosenthal, 1994). Based on each group's M, SD, and N, CMA computes the raw difference in means, the pooled standard deviation, and then Hedges' g, as shown.
Raw difference in means: RawDiff = M1 - M2

Pooled standard deviation: SDpooled = sqrt(((N1 - 1) x SD1^2 + (N2 - 1) x SD2^2) / (N1 + N2 - 2))

Standardized difference in means: d = RawDiff / SDpooled

Standard error of d: SE(d) = sqrt(1/N1 + 1/N2 + d^2/(2 x (N1 + N2)))

The CMA software then multiplies the standardized mean difference, d, by a correction factor, J, to compute g:

Correction factor: J = 1 - 3/(4 x df - 1), where df = N1 + N2 - 2

g = d x J

SE(g) = SE(d) x J

Variance(g) = [SE(g)]^2
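The computation above can be sketched as a short Python function. This is an illustrative reimplementation of the standard Hedges' g formulas, not the CMA software itself.

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g: standardized mean difference d scaled by the
    small-sample correction factor J. Returns (g, SE(g), Variance(g))."""
    # Pooled standard deviation across the two groups
    sd_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    d = (m1 - m2) / sd_pooled                      # standardized difference
    se_d = math.sqrt(1 / n1 + 1 / n2 + d**2 / (2 * (n1 + n2)))
    df = n1 + n2 - 2
    j = 1 - 3 / (4 * df - 1)                       # correction factor J
    g = d * j
    se_g = se_d * j
    return g, se_g, se_g**2
```

For example, with an SLD group mean of 90 and a control mean of 100 (both SD = 15, n = 30 per group), g is approximately -0.66, a moderate effect by Cohen's benchmarks.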
A fixed-effects model, such as the one used in this study, assumes that variability across studies is due to sampling error only, and not to differences in key characteristics of the study design (Hedges, 1994). Fixed-effects model analyses produced a Q statistic, which, if significant, suggests that a random-effects model should be used or that important moderator variables may exist (Hedges; Hedges & Olkin, 1985; Raudenbush, 1994). However, the Q statistic has been criticized in the literature because it is directly tied to the number of studies included in the analysis, resulting in poor power to detect differences when analyzing a small number of studies and excessive power to detect negligible differences when analyzing a large number of studies (Huedo-Medina, Sanchez-Meca, Marin-Martinez, & Botella, 2006).
Critics of the Q statistic recommend reporting the I^2 statistic, which expresses as a percentage the amount of total variability across studies due to heterogeneity rather than chance, and is calculated using the following formula:

I^2 = (Q - (k - 1))/Q x 100%, where k is the number of studies.

I^2 indices of 25%, 50%, and 75% are classified as low, medium, and high heterogeneity, respectively (Higgins & Thompson, 2002).
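The heterogeneity statistics just described can be sketched as follows. This is an illustrative implementation of the standard fixed-effects Q and I^2 formulas, not the CMA software's code.

```python
def fixed_effect_q(effects, variances):
    """Cochran's Q: inverse-variance-weighted squared deviations of study
    effect sizes around the fixed-effects weighted mean."""
    weights = [1 / v for v in variances]
    mean = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return sum(w * (e - mean) ** 2 for w, e in zip(weights, effects))

def i_squared(q, k):
    """I^2 as a percentage, truncated at 0 when Q < k - 1
    (k = number of studies)."""
    return max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
```

For instance, two studies with effects 0.2 and 0.8 and equal variances of 0.1 yield Q = 1.8 and I^2 of about 44%, i.e., roughly medium heterogeneity by the Higgins and Thompson (2002) thresholds.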
As shown in Table 1, for the categories in which the Q statistic was significant, most of the corresponding I^2 levels were high, with two in the medium range (processing speed and executive function). This finding prompted us to determine whether there were important moderator variables across studies. As discussed earlier in this article, we hypothesized that differences in the way in which SLD was defined might moderate effect size, as has been the case in other meta-analytic reviews of reading disability (Swanson, 2006). Using the three SLD definitional categories described previously, we next analyzed outcomes by SLD definition.
Definitions of SLD Groups
For the RD studies, we primarily found definitions of RD to be based on a discrepancy formula, with discrepancies ranging from 15 to 17 points (and others between one and two SD) between IQ and reading achievement. Other studies defined the RD group as students with an average IQ at or above 85 or 90 and with reading achievement below the 25th percentile. In some studies of reading comprehension, RD was defined as one standard deviation or more below the mean for students with reading skills in the average range. Other studies simply identified students with low achievement (e.g., reading achievement below the 25th percentile) as RD.
A variety of SLD identification models are currently under debate. Such models include those based on (a) achievement-aptitude discrepancy formulas, (b) intraindividual differences, and (c) low achievement that is not readily explained (Fletcher, Lyon, Fuchs, & Barnes, 2007). Differences in evaluation procedures result in differences in who is identified as having an SLD (Fuchs et al., 2005; McMaster et al., 2005; Mellard & Deshler, 1984; Sparks & Lovett, 2009).
This ongoing debate about the role of IQ in SLD eligibility decisions (2) raised the question of whether the method of identification moderated differences in outcomes. Therefore, we conducted subanalyses using RD definition as a moderator variable to assess differences in effect sizes. Too few studies on MD were available to conduct similar meaningful subanalyses.
In our subanalysis we defined the three groups of RD studies as follows:
1. Studies that used low achievement-only definitions of RD (e.g., reading achievement below the 25th percentile) and in which no IQ score was reported.
2. Studies that relied on discrepancy definitions of SLD, where a stated difference (e.g., 15 points, 1.5 SD) in the standard scores on achievement and IQ tests existed, or where the study used a school-identified population and indicated that the school used a discrepancy approach to identification.
3. Studies that defined SLD as having intelligence within the average range (IQ > 85) and reading performance one SD or more below the mean (e.g., reading achievement < 85).
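The three definitional categories above can be expressed as a simple decision rule. This is an illustrative sketch of the coding logic; the function and field names (e.g., achievement_ss) are hypothetical, and the actual coding relied on reader judgment across study reports.

```python
def rd_definition_group(iq=None, achievement_ss=None, discrepancy_points=None):
    """Assign a study's RD sample to one of the three definitional groups.

    Group 1: low achievement only, no IQ score reported.
    Group 2: stated IQ-achievement discrepancy (e.g., >= 15 standard-score points).
    Group 3: average IQ (> 85) with achievement one SD or more below the mean (< 85).
    Returns None when a study fits none of the three categories.
    """
    if iq is None:
        return 1  # no IQ reported: low achievement-only definition
    if discrepancy_points is not None and discrepancy_points >= 15:
        return 2  # discrepancy-based definition
    if iq > 85 and achievement_ss is not None and achievement_ss < 85:
        return 3  # average IQ with low achievement
    return None
```

In practice, studies using school-identified samples under a stated discrepancy policy were also coded into Group 2 even without reported point values, a nuance this sketch omits.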
Our meta-analysis found moderate to large differences in the cognitive processing abilities of students with SLD compared to typically achieving students. We first present the overall findings for reading studies, followed by our analysis of studies using a moderator variable of SLD definition. Next, we present the overall findings for math studies.
Differences in Cognitive Processes for Students with Reading Disabilities
Table 1 reports effect sizes in total and for 11 achievement and cognitive processing categories of RD. As illustrated, large or moderately large effects (Cohen, 1988) were detected across all categories. Students with RD had reading achievement levels nearly two SD below those of typically achieving students, with slightly larger differences when achievement was measured by reading comprehension (ES = -1.921) rather than reading basic skills (ES = -1.839). Students with RD had the largest deficits in phonological processing (ES = -1.276), followed by processing speed (ES = -.947) and verbal working memory (ES = -.920). Of interest was the moderately large effect size for receptive and expressive language (ES = -.782, K = 8, where K = the number of dependent measures across all studies).
As shown in Table 1, for the categories in which the Q statistic was significant, most of the corresponding I^2 levels were high (all reading measures, basic skills, comprehension, intelligence, and phonological processing), with two I^2 values in the medium range (processing speed, executive function). This finding contributed to our decision to determine whether important moderator variables might exist across studies. Using the three SLD definitional categories described previously, we analyzed outcomes by SLD definition. The results of this analysis are presented in Table 2.
Differences in effect sizes across the three categorical definitions were evident for all reading achievement measures, and largest for reading basic skills measures. Additionally, although only two groups (Group 2--students defined as SLD through a discrepancy approach--and Group 3--students defined as SLD with average IQ and low achievement) included measures of intelligence in their studies, differences in effect sizes for reading achievement were evident. In studies that used a discrepancy definition of SLD, ES = -.253, and in studies that used average IQ and low achievement definitions of SLD, ES = -.983. Differences in the effect sizes for phonological processing were also evident, with effect sizes for SLD Groups 2 and 3 approximately double the effect sizes for Group 1--students with low achievement.
Differences in Cognitive Processes for Students with MD
Table 3 reports the effect sizes for each of the seven cognitive categories for studies of MD. The nine studies in this analysis included a range of definitions of MD, from students with math achievement below the 15th percentile to students with math achievement below the 35th percentile. Additionally, the studies used a variety of math measures; some focused on basic computational skills while others examined more complex problem solving abilities.
Effect sizes for all MD categories were moderate to large. The main finding of this analysis was that students with MD were severely impaired in math ability, despite having intelligence scores within the average range. These students also had difficulties with executive functioning, processing speed, and short-term memory.
As outlined in our rationale for conducting this meta-analysis, an ideal SLD classification model would specify clearly which conditions to include or exclude, and would provide clear rules about when these conditions were present. To date, no such model exists. Cognitive processing deficits have long been a core component of the SLD definition, but they are not consistently assessed because there are no clear rules about when a processing deficit is present.
The present meta-analysis represents an attempt to synthesize the literature to determine the current state of the research base in this area and to obtain an estimate of the magnitude of such cognitive differences. The results show that the differences in cognitive processes between students with SLD and their typically developing peers are large (ES > .80) and, therefore, justify inclusion in SLD assessment. When synthesized across studies, deficits in processes and achievement areas range from moderate to large. On all cognitive measures, students with reading disabilities tended to perform significantly lower than their typically developing peers (see Table 1). Findings in the math studies were similar to those of the reading studies, in that students with MD, on average, had processing deficits that ranged from moderate effect sizes for visual working memory to large effect sizes for executive function (see Table 3). Across many cognitive areas (e.g., working memory, processing speed, executive function) differences were of high magnitude (> .80) for both RD and MD.
Our findings provide support for including cognitive processes related to the suspected area of disability in the explanatory component of an SLD diagnostic process. Clinicians should assess processes strongly related to a specific area of academic achievement (Flanagan et al., 2006; Mather & Gregg, 2006; Pennington, 2009). The magnitude of effect sizes suggests that the key cognitive areas on which to focus include working memory, processing speed, executive function, and receptive and expressive language.
While the findings in this meta-analysis do not imply that all processes included in this study should systematically become part of the classification model for SLD, or that clinicians should simply look for scores approximately one SD below the mean of typically achieving students, they do suggest that substantial, important differences in cognitive processes exist and warrant an intentional approach to selecting cognitive processing assessment procedures as part of SLD identification. As Pennington (2009) noted,
... it is unlikely that we will be able to reduce any developmental learning disorder to a single cognitive component. Any disorder will present us with a range of cognitive deficits, some more global and some more specific. Because of the fundamental interactivity of cognitive processing and development, it is unlikely that we will be able to reduce individual differences to initial differences in either bottom-up processes or top down processes. (p. 14)
When taken in the context of meta-analyses examining treatment outcomes for students with RD, the effect sizes reported in the present study take on even greater import. Meta-analyses conducted by Swanson and colleagues found that outcomes in best-evidence studies of instructional interventions are moderated by a host of environmental and individual difference variables, making it difficult to directly translate those findings into assessing children at risk for RD as a function of their response to educational intervention (i.e., RTI). Although RTI relies on evidence-based instruction in each instructional tier, even under the most optimal instructional conditions for teaching reading, less than 21% of the variance in outcomes is related to instruction (Swanson, 1999).
This finding suggests that, while a number of struggling readers will respond favorably when provided evidence-based instruction, continued work to develop interventions that support students with SLD is needed. The students who tend to be the greatest treatment resisters are those with IQs in the average range and achievement below the 25th percentile (Swanson, 2006). The findings of our reading subanalyses confirm this pattern; students defined by average IQ and low achievement had the largest deficits of the three RD groups in reading ability and most cognitive processes.
Much of the debate surrounding changes to SLD identification procedures stems from research that has called the aptitude-achievement discrepancy approach into question (e.g., Siegel, 1992; Stage, Abbott, Jenkins, & Berninger, 2003). As previously noted, the reauthorization of IDEA moved the field away from this discrepancy component. Thus, a fundamental question in the redesign of an SLD classification model has been, "Should IQ be maintained in current models of SLD?"
As this question pertains to the current study, of particular concern was the finding of an ES = -.73 for intelligence between RD and typically achieving groups (see Table 1) and an ES = -.90 for intelligence between MD and typically achieving groups (see Table 3). Although, across studies, students with SLD had IQ scores within the normal range (e.g., > 85), our findings suggest that the comparisons made across studies are between groups of students with IQs nearly one SD below those of their typically developing peers.
This finding led us to analyze groups according to the way in which SLD was defined, since research in this area suggests that different approaches to identification lead to different groups of students identified as SLD (Fuchs et al., 2004; McMaster et al., 2005; Sparks & Lovett, 2009).
In studies that used a discrepancy approach to RD identification, the difference in intelligence between students with and without RD was small (ES = -.253), but differences in cognitive processes were similar to those of the other RD subgroups (see Table 2). For example, when all studies were included in the analysis, the processing speed effect size was -.947. Analyzed by RD definition subgroup, processing speed yielded ES = -.958 for students identified by a discrepancy approach and ES = -.781 for students identified by low-achievement-only procedures. For phonological processing, studies with students in Group 1 had an ES = -.688; Group 2, ES = -1.193; and Group 3, ES = -1.434. Further analysis was precluded by the lack of IQ information in studies defining RD by low achievement.
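The confidence intervals reported alongside each effect size in Tables 1-3 follow from the summary effect and its standard error under a normal approximation (ES ± 1.96 × SE). A minimal sketch, checked here against the intelligence row of Table 1:

```python
def ci95(es, se):
    """Approximate 95% confidence interval for a summary effect size,
    using the normal-theory bound of 1.96 standard errors."""
    half_width = 1.96 * se
    return (es - half_width, es + half_width)

# Intelligence row of Table 1: ES = -.734, SE = .052.
lower, upper = ci95(-0.734, 0.052)
print(round(lower, 2), round(upper, 2))  # ≈ -0.84 -0.63
# The interval excludes zero, indicating a reliable RD/typical IQ difference.
```

The same arithmetic explains why subgroup estimates built on very few effect sizes (e.g., Group 1 processing speed, K = 1, SE = .242) carry much wider intervals and should be interpreted cautiously.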
Our findings are consistent with prior meta-analyses related to the diagnosis and treatment of RD. Traditionally, the case for RD rests on three assumptions: (a) reading difficulties are not due to inadequate opportunity to learn, general intelligence, or physical or emotional/behavior disorders, but to basic disorders in specific cognitive information processes; (b) these specific information processing deficits are a reflection of neurological, constitutional, and/or biological factors; and (c) these specific information processing deficits underlie a limited aspect of academic behavior (e.g., reading). Thus, to assess RD at the cognitive level, systematic efforts are needed to establish (a) normal psychometric intelligence, (b) below-normal achievement in reading, (c) below-normal performance in specific cognitive processes, (d) that evidence-based instruction has been presented under optimal conditions but deficits in isolated cognitive processes remain, and (e) that cognitive processing deficits are not directly caused by environmental factors or contingencies (e.g., low SES).
One limitation of the current review is that SLD groups were predefined by research protocols using a wide variety of criteria. The definitions shared the common trait of low achievement, but studies included different cut-off scores for IQ, discrepancy, and other factors related to SLD identification. However, it appears that regardless of the RD definition used, students with SLD demonstrated substantial cognitive processing deficits compared to typically achieving students.
A comparison to "garden-variety" low-achieving students is important in order to differentiate between students with general low achievement and students with an SLD. However, our literature search did not find a sufficient number of studies conforming to our inclusion criteria that compared the cognitive processes of students with SLD to students with general low achievement. This limitation is particularly important in that school staff are rarely asked to differentiate students with SLD from students with low achievement at referral: referral and screening practices are more likely to focus on low achievement as a marker, and it is the multidisciplinary team that is then asked to conduct a comprehensive evaluation to differentiate these two groups. Further research is needed to understand the role that cognitive processing deficits might play for different types of reading and math problems.
The purpose of this study was to determine whether reliable differences exist in the cognitive processes of students with SLD when compared to typically achieving students and, if so, whether such differences are of sufficient magnitude to justify including those measures in assessment of SLD.
The study supports the following conclusions that can inform the development of an SLD classification model: (a) students with SLD significantly underachieve in reading and math; (b) across all cognitive processes, students with SLD tend to have deficits when compared to typically achieving students; and (c) these deficits tend to be large. Therefore, we contend that inclusion of cognitive processing measures as part of an SLD identification process is warranted.
Areas in reading that evidence the most difference include phonological processing, verbal working memory, and processing speed. Language ability also emerged as an area in which large effect sizes were found. As suggested by Flanagan et al. (2006) and Mather and Gregg (2006), diagnosticians should assess these areas when they are suspected of underlying a student's reading problems. Areas in math are more difficult to determine, given the heterogeneity of participants and research studies, though deficits in executive function appear to be large. This finding is consistent with research suggesting that measures of attention are strongly related to math ability.
Finally, this study supports the need for measures with better reliability and validity to assess some cognitive processes. As with all SLD classification components, a standard set of cognitive processing measures with established cut scores likely cannot be applied across the board in an SLD classification model. Rather, clinicians will have to assess processes that are known to underlie specific academic areas to help identify and explain the particular problems an individual student faces. The assessed scores need to demonstrate reliability, evidential validity, and, particularly for a differential diagnosis, a meaningful difference from the norm group in terms of the profile's elevation, pattern, and scatter.
*Ackerman, P., & Dykman, R. (1993). Phonological processes, confrontational naming, and immediate memory in dyslexia. Journal of Learning Disabilities, 26(9), 597-609.
American Psychiatric Association. (2000). Diagnostic and statistical manual of mental disorders (4th ed., text revision). Washington, DC: Author.
Berninger, V. (2006). Research supported ideas for implementing reauthorized IDEA with intelligent professional psychological services. Psychology in the Schools, 43(7), 781-796.
Borenstein, M., Hedges, L., Higgins, J., & Rothstein, H. (2005). Comprehensive meta-analysis, version 2. Englewood, NJ: Biostat, Inc.
Borenstein, M., Hedges, L., & Rothstein, H. (2007). Meta-analysis: Fixed effect vs. random effects. Retrieved January 24, 2010, from http://www.Meta-Analysis.com.
Bull, R., & Johnston, R. (1997). Children's arithmetical difficulties: Contributions from processing speed, item identification, and short-term memory. Journal of Experimental Child Psychology, 65(1), 1-24.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.
[dagger] D'Angiulli, A., & Siegel, L. (2003). Cognitive functioning as measured by the WISC-R: Do children with learning disabilities have distinctive patterns of performance? Journal of Learning Disabilities, 36(1), 48-58.
*Das, J., Mishra, R., & Kirby, J. (1994). Cognitive patterns of children with dyslexia: A comparison between groups with high and average nonverbal intelligence. Journal of Learning Disabilities, 27(4), 235-242.
*Eisenmajer, N., Ross, N., & Pratt, C. (2005). Specificity and characteristics of learning disabilities. Journal of Child Psychology and Psychiatry, 46(10), 1108-1115.
Flanagan, D., Ortiz, S., Alfonso, V., & Dynda, A. (2006). Integration of response to intervention and norm-referenced tests in learning disability identification: Learning from the Tower of Babel. Psychology in the Schools, 43(7), 807-825.
Fletcher, J. M., Lyon, G. R., Fuchs, L. S., & Barnes, M. A. (2007). Learning disabilities: From identification to intervention. New York: The Guilford Press.
Fletcher, J. M., Morris, R. D., & Lyon, G. R. (2003). Classification and definition of learning disabilities: An integrative perspective. Handbook of Learning Disabilities (pp. 30-57). New York: The Guilford Press.
*Fletcher, J., Shaywitz, S., Shankweiler, D., Katz, L., Liberman, I., Stuebing, K., et al. (1994). Cognitive profiles of reading disability: Comparisons of discrepancy and low achievement profiles. Journal of Educational Psychology, 86(1), 6-23.
*Floyd, R., Bergeron, R., & Alfonso, V. (2006). Cattell-Horn-Carroll cognitive ability profiles of poor comprehenders. Reading and Writing: An Interdisciplinary Journal, 19(5), 427-456.
Francis, D. J., Fletcher, J. M., Catts, H., & Tomblin, J. B. (2005). Dimensions affecting the assessment of reading comprehension. In S. G. Paris & S. A. Stahl (Eds.), Children's reading comprehension and assessment (pp. 369-394). Mahwah, NJ: Lawrence Erlbaum Associates.
Fuchs, D., Fuchs, L., & Compton, D. (2004). Identifying reading disabilities by responsiveness-to-instruction: Specifying measures and criteria. Learning Disability Quarterly, 27(4), 216-228.
Fuchs, D., Mock, D., Morgan, P., & Young, C. (2003). Responsiveness-to-intervention: Definitions, evidence, and implications for the learning disabilities construct. Learning Disabilities Research & Practice, 18(3), 157-171.
Fuchs, L., Fuchs, D., Compton, D., Bryant, J., Hamlett, C., & Seethaler, P. (2007). Mathematics screening and progress monitoring at first grade: Implications for responsiveness to intervention. Exceptional Children, 73(3), 311-330.
Fuchs, L. S., Compton, D. L., Fuchs, D., Paulsen, K., Bryant, J. D., & Hamlett, C. L. (2005, March/April). Responsiveness to intervention: Preventing and identifying mathematics disability. Teaching Exceptional Children, 60-63.
Geary, D. (2004). Mathematics and learning disabilities. Journal of Learning Disabilities, 37(1), 4-15.
Geary, D., Brown, S., & Samaranayake, V. (1991). Cognitive addition: A short longitudinal study of strategy choice and speed-of-processing differences in normal and mathematically disabled children. Developmental Psychology, 27(5), 787-797.
*[dagger] Geary, D., Hamson, C., & Hoard, M. (2000). Numerical and arithmetical cognition: A longitudinal study of process and concept deficits in children with learning disability. Journal of Experimental Child Psychology, 77(3), 236-263.
[dagger] Geary, D., Hoard, M., Byrd-Craven, J., Nugent, L., & Numtee, C. (2007). Cognitive mechanisms underlying achievement deficits in children with mathematical learning disability. Child Development, 78(4), 1343-1359.
Gerber, M. M. (1988). Tolerance and technology of instruction: Implications of the NAS report for special education. Exceptional Children, 54, 309-314.
Gerber, M. M. (2005). Teachers are still the test: Limitations of response to instruction strategies for identifying children with learning disabilities. Journal of Learning Disabilities, 38, 516-524.
Gresham, F., MacMillan, D., & Bocian, K. (1996). Learning disabilities, low achievement, and mild mental retardation: More alike than different? Journal of Learning Disabilities, 29(6), 570-581.
*Griffiths, Y., & Snowling, M. (2001). Auditory word identification and phonological skills in dyslexic and average readers. Applied Psycholinguistics, 22(3), 419-439.
Hallahan, D. P. (2006, April). Challenges facing the field of learning disabilities. Presentation at the National Research Center on Learning Disabilities SEA Conference on SLD Determination, Kansas City, MO.
Hallahan, D. P., & Mercer, C. D. (2002). Learning disabilities: Historical perspectives. In R. Bradley, L. Danielson, & D. Hallahan (Eds.), Identification of learning disabilities: Research to practice (pp.1-65). Mahwah, NJ: Erlbaum.
Hallahan, D. P., & Mock, D. R. (2003). A brief history of the field of learning disabilities. In H. L. Swanson, K. R. Harris, & S. Graham (Eds.), Handbook of learning disabilities (pp. 16-29). New York: Guilford Press.
Hedges, L. (1994). Fixed effects models. In H. Cooper & L. V. Hedges (Eds.), The handbook of research synthesis (pp. 285-299). New York: Russell Sage Foundation.
Hedges, L., & Olkin, I. (1985). Statistical methods for meta-analysis. Orlando, FL: Academic Press.
Higgins, J., & Thompson, S. (2002). Quantifying heterogeneity in a meta-analysis. Statistics in Medicine, 21(11), 1539-1558.
Hiscock, M., & Kinsbourne, M. (2009). The education empire strikes back: Will RTI displace neuropsychology and neuroscience from the realm of learning disabilities? In E. Fletcher-Janzen & C. R. Reynolds (Eds.), Neuropsychological perspectives on learning disabilities in the era of RTI: Recommendations for diagnosis and intervention (pp. 54-65). Hoboken, NJ: Wiley & Sons, Inc.
Hitch, G., & McAuley, E. (1991). Working memory in children with specific arithmetical learning difficulties. British Journal of Psychology, 82(3), 375-386.
Hoskyn, M., & Swanson, H. L. (2000). Cognitive processing of low achievers and children with reading disabilities: A selective meta-analytic review of the published literature. School Psychology Review, 29(1), 102-119.
*Howes, N., Bigler, E., Burlingame, G., & Lawson, J. (2003). Memory performance of children with dyslexia: A comparative analysis of theoretical perspectives. Journal of Learning Disabilities, 36(3), 230-246.
Huedo-Medina, T., Sanchez-Meca, J., Marin-Martinez, F., & Botella, J. (2006). Assessing heterogeneity in meta-analysis: Q statistic or I² index? Storrs: University of Connecticut, Center for Health, Intervention, and Prevention (CHIP). Retrieved June 22, 2009, from http://digitalcommons.uconn.edu/chip_docs/19.
Hurford, D., Johnston, M., Nepote, P., Hampton, S., Moore, S., Neal, J., et al. (1994). Early identification and remediation of phonological processing deficits in first grade children at risk for reading disabilities. Journal of Learning Disabilities, 27(10), 647-659.
Individuals with Disabilities Education Act of 2004 (IDEA). (2004). Public Law 108-446.
Johnson, E., Mellard, D. F., & Byrd, S. E. (2006). Challenges with SLD identification: What is the SLD problem? Teaching Exceptional Children Plus, 3(1).
Kavale, K. (2005). Identifying specific learning disability: Is response to intervention the answer? Journal of Learning Disabilities, 38(6), 553-562.
Kavale, K., Fuchs, D., & Scruggs, T. (1994). Setting the record straight on learning disabilities and low achievement. Learning Disabilities Research & Practice, 9(2), 70-77.
*King, W., Giess, S., & Lombardino, L. (2007). Subtyping of children with developmental dyslexia via bootstrap aggregated clustering and the gap statistic: Comparison with the double-deficit hypothesis. International Journal of Language & Communication Disorders, 42(1), 77-95.
*Kirby, J., Booth, C., & Das, J. (1996). Cognitive processes and IQ in reading disability. Journal of Special Education, 29(4), 442-456.
*[dagger] Landerl, K., Bevan, A., & Butterworth, B. (2004). Developmental dyscalculia and basic numerical capacities: A study of 8-9 year old students. Cognition, 93(2), 99-125.
*Littlefield, L., & Klein, E. (2005). Examining visual-verbal associations in children with and without reading disorder. Reading Psychology, 26(4-5), 363-385.
[dagger] Mabbott, D., & Bisanz, J. (2008). Computational skills, working memory, and conceptual knowledge in older children with mathematics learning disabilities. Journal of Learning Disabilities, 41(1), 15-28.
MacMillan, D. L., Gresham, F. L., & Bocian, K. M. (1998). Discrepancy between definitions of learning disability and school practices: An empirical investigation. Journal of Learning Disabilities, 31, 314-326.
MacMillan, D. L., & Siperstein, G. N. (2002). Learning disabilities as operationally defined by schools. In R. Bradley, L. Danielson, & D. Hallahan (Eds.), Identification of learning disabilities: Research to practice (pp. 287-333). Mahwah, NJ: Erlbaum.
Mastropieri, M. (2001). Discrepancy models in the identification of learning disabilities. Paper presented at the U.S. Department of Education L.D. Summit, Washington, DC.
Mather, N., & Gregg, N. (2006). Specific learning disabilities: Clarifying not eliminating a construct. Professional Psychology: Research and Practice, 37(1), 99-106.
Mather, N., & Kaufman, N. (2006). Introduction to the special issue, part two: It's about the what, the how well, and the why. Psychology in the Schools, 43(8), 829-834.
[dagger] McLean, J., & Hitch, G. (1999). Working memory impairments in children with specific arithmetic learning disabilities. Journal of Experimental Child Psychology, 74(3), 240-260.
McMaster, K., Fuchs, D., Fuchs, L., & Compton, D. (2005). Responding to nonresponders: An experimental field trial of identification and intervention methods. Exceptional Children, 71(4), 445-463.
*McNamara, J., & Wong, B. (2003). Memory for everyday information in students with learning disabilities. Journal of Learning Disabilities, 36(5), 394-406.
Mellard, D. F., & Deshler, D. D. (1984). Modeling the condition of learning disabilities on post-secondary populations. Educational Psychologist, 19, 188-197.
Mellard, D. F., Deshler, D. D., & Barth, A. (2004). LD identification: It's not simply a matter of building a better mousetrap. Learning Disability Quarterly, 27(4), 229-242.
Morris, R. (1994). A review of critical concepts and issues in the measurement of learning disabilities. In R. G. Lyon (Ed.), Frames of reference for the assessment of learning disabilities: New views on measurement issues (pp. 530-563). Baltimore: Paul H. Brookes Publishing.
*[dagger] Murphy, M., Mazzocco, M., Hanich, L., & Early, M. (2007). Cognitive characteristics of children with mathematics learning disability (MLD) vary as a function of the cutoff criterion used to define MLD. Journal of Learning Disabilities, 40(5), 458-478.
*Palmer, S. (2000). Phonological recoding deficit in working memory of dyslexic teenagers. Journal of Research in Reading, 23(1), 28-40.
Pennington, B. (2009). Diagnosing learning disorders: A neuropsychological framework (2nd ed.). New York: Guilford Press.
Raudenbush, S. (1994). Random effects models. In H. Cooper & L. V. Hedges (Eds.), The handbook of research synthesis (pp. 301-321). New York: Russell Sage Foundation.
Rosenthal, M. (1994). The fugitive literature. In H. Cooper & L. V. Hedges (Eds.), The handbook of research synthesis (pp. 86-94). New York: Russell Sage Foundation.
*Savage, R., & Frederickson, N. (2006). Beyond phonology: What else is needed to describe the problems of below-average readers and spellers? Journal of Learning Disabilities, 39(5), 399-413.
Semrud-Clikeman, M. (2005). Neuropsychological aspects for evaluating learning disabilities. Journal of Learning Disabilities, 38(6), 563-568.
*Shanahan, M., Pennington, B., Yerys, B., Scott, A., Boada, R., Willcutt, E., et al. (2006). Processing speed deficits in attention deficit/hyperactivity disorder and reading disability. Journal of Abnormal Child Psychology, 34(5), 584-601.
Siegel, L. (1992). An evaluation of the discrepancy definition of dyslexia. Journal of Learning Disabilities, 25(10), 618-629.
[dagger] Siegel, L., & Ryan, E. (1989). The development of working memory in normally achieving and subtypes of learning disabled children. Child Development, 60, 973-980.
[dagger] Sikora, D., Haley, P., Edwards, J., & Butler, R. (2002). Tower of London test performance in children with poor arithmetic skills. Developmental Neuropsychology, 21(3), 243-254.
Sparks, R., & Lovett, B. (2009). Objective criteria for classification of postsecondary students as learning disabled. Journal of Learning Disabilities, 42(3), 230-239.
Stage, S., Abbott, R., Jenkins, J., & Berninger, V. (2003). Predicting response to early reading intervention from verbal IQ, reading-related language abilities, attention ratings, and verbal IQ-word reading discrepancy: Failure to validate discrepancy method. Journal of Learning Disabilities, 36(1), 24-33.
*Stanovich, K., Siegel, L., & Gottardo, A. (1997). Converging evidence for phonological and surface subtypes of reading disability. Journal of Educational Psychology, 89(1), 114-127.
*Swanson, H. L. (1999). Reading comprehension and working memory in learning-disabled readers: Is the Phonological loop more important than the executive system? Journal of Experimental Child Psychology, 72(1), 1-31.
Swanson, H. L. (1999). Reading research for students with LD: A meta-analysis in intervention outcomes. Journal of Learning Disabilities, 32(6), 504-532.
*Swanson, H. L. (2003). Age-related differences in learning disabled and skilled readers' working memory. Journal of Experimental Child Psychology, 85(1), 1-31.
Swanson, H. L. (2006, April). Who is the student with SLD? Paper presented at the National Research Center on Learning Disabilities National SEA Conference on SLD Determination, Kansas City, MO.
Swanson, H. L. (2009). Neuroscience and RTI: A complementary role. In E. Fletcher-Janzen & C. R. Reynolds (Eds.), Neuropsychological perspectives on learning disabilities in the era of RTI: Recommendations for diagnosis and intervention (pp. 28-53). Hoboken, NJ: Wiley & Sons, Inc.
Swanson, H. L. (in press). Meta-analysis of research on children with reading disabilities. In R. Allington & A. McGill-Franzen (Eds.), Handbook of reading disabilities research. New York: Routledge.
*Swanson, H. L., & Alexander, J. E. (1997). Cognitive processes as predictors of word recognition and reading comprehension in learning-disabled and skilled readers: Revisiting the specificity hypothesis. Journal of Educational Psychology, 89(1), 128-158.
*Swanson, H. L., & Howard, C. B. (2005). Children with reading disabilities: Does dynamic assessment help in the classification? Learning Disability Quarterly, 28(1), 17-31.
Swanson, H. L., & Jerman, O. (2006). Math disabilities: A selective meta-analysis of the literature. Review of Educational Research, 76(2), 249-274.
*Swanson, H. L., & Jerman, O. (2007). The influence of working memory on reading growth in subgroups of children with reading disabilities. Journal of Experimental Child Psychology, 96(4), 249-283.
*Tiu, R., Jr., Thompson, L., & Lewis, B. (2003). The role of IQ in a component model of reading. Journal of Learning Disabilities, 36(5), 424-436.
Torgesen, J. (2000). Individual differences in response to early interventions in reading: The lingering problem of treatment resisters. Learning Disabilities Research & Practice, 15(1), 55-64.
Torgesen, J., Wagner, R., Rashotte, C., Rose, E., Lindamood, P., Conway, T., et al. (1999). Preventing reading failure in young children with phonological processing disabilities: Group and individual responses to instruction. Journal of Educational Psychology, 91(4), 579-593.
*Vellutino, F., Scanlon, D., & Spearing, D. (1995). Semantic and phonological coding in poor and normal readers. Journal of Experimental Child Psychology, 59, 76-123.
Vellutino, F. R., Scanlon, D. M., Sipay, E. R., Small, S. G., Pratt, S., Chen, R., & Denckla, M. B. (1996). Cognitive profiles of difficult-to-remediate and readily remediated poor readers: Early intervention as a vehicle for distinguishing between cognitive and experiential deficits as basic causes of specific reading disability. Journal of Educational Psychology, 88(4), 601-638.
*Willcutt, E., Pennington, B., Olson, R., Chhabildas, N., & Hulsander, J. (2005). Neuropsychological analyses of comorbidity between reading disability and attention deficit hyperactivity disorder: In search of the common deficit. Developmental Neuropsychology, 27(1), 35-78.
Witteman, C.L.M., Harries, C., Bekker, H. L., & Van Aarle, E.J.M. (2007). Evaluating psychodiagnostic decisions. Journal of Evaluation in Clinical Practice, 13, 10-15.
Ysseldyke, J. E., Algozzine, B., Richey, L., & Graden, J. (1982). Declaring students eligible for learning disability services: Why bother with the data? Learning Disability Quarterly, 5, 37-44.
(1.) Despite the advantages that a meta-analysis can bring to more fully understanding a particular area of research, some limitations warrant consideration when interpreting findings. In the current context, consider (a) the Dodo verdict (Swanson, 2006), in which studies that report positive outcomes (i.e., differences between groups) tend to be published at a much higher rate than those that do not support the use of the targeted intervention; (b) definitional differences between studies (e.g., one study may examine short-term memory and another working memory); (c) methodological flaws in individual studies (e.g., no clear definition of groups, use of non-standardized measures); and (d) insufficient reporting of the data required to calculate effect sizes.
(2.) The role of IQ in the evaluation of an SLD has been contentious. Historically, SLD has been identified through an achievement-aptitude discrepancy approach. Limitations of this approach are well documented and have led to the recent focus on employing RTI and low achievement models for SLD identification. Nevertheless, evidence suggests that measures of IQ should not be abandoned in an SLD assessment process (Fuchs, Mock, Morgan, & Young, 2003; Swanson, 2006). For example, evidence supports that SLD, low-achieving, and intellectually disabled groups can be reliably differentiated using measures of cognitive ability and tested academic achievement (Gresham, MacMillan, & Bocian, 1996; Kavale, Fuchs, & Scruggs, 1994). The meta-analytic literature in reading (Swanson, in press) provides evidence that children with higher IQ scores (> 91) than reading scores (< 25th percentile; i.e., a discrepancy between IQ and academic achievement) are less responsive to treatment than children whose IQ and achievement scores are in the same low range (e.g., both IQ and reading below 90). Hoskyn and Swanson (2000) found that verbal IQ moderates the overall level of cognitive performance of RD and low-achieving readers; although differences between groups on specific cognitive variables yielded unstable results, differences between groups on composite scores of verbal ability were robust for all comparisons.
Please address correspondence about this article to: Evelyn S. Johnson, Department of Special Education, Boise State University, 1910 University Drive, Mailstop 1725, Boise, ID 83725; e-mail: EvelynJohnson@boisestate.edu
EVELYN S. JOHNSON, Ed.D., Department of Special Education, Boise State University.
MICHAEL HUMPHREY, Ed.D., Department of Special Education, Boise State University.
DARYL F. MELLARD, Ph.D., Center for Research on Learning, University of Kansas.
KARI WOODS, Center for Research on Learning, University of Kansas.
H. LEE SWANSON, Ph.D., School of Education, University of California, Riverside.
Table 1. Effect Sizes, Standard Errors, Confidence Intervals, and Homogeneity of Categories for Comparisons Between Children with RD and Typically Achieving Students
Total: K = 213, ES = -1.042, SE = .016, 95% CI [-1.074, -1.010], Q = 1732.037*
Reading (all measures): K = 48, ES = -1.872, SE = .039, 95% CI [-1.948, -1.795], Q = 562.295*, I² = 92%
Reading (basic skills): K = 29, ES = -1.839, SE = .050, 95% CI [-1.938, -1.741], Q = 369.365*, I² = 92%
Reading (comprehension): K = 19, ES = -1.921, SE = .062, 95% CI [-2.043, -1.799], Q = 191.890*, I² = 91%
Intelligence: K = 22, ES = -.734, SE = .052, 95% CI [-.836, -.631], Q = 157.565*, I² = 87%
Verbal working memory: K = 32, ES = -.920, SE = .045, 95% CI [-1.008, -.832], Q = 33.724
Visual working memory: K = 8, ES = -.637, SE = .098, 95% CI [-.828, -.446], Q = 7.662
Short-term memory: K = 12, ES = -.624, SE = .083, 95% CI [-.787, -.460], Q = 10.230
Phonological processing: K = 17, ES = -1.276, SE = .051, 95% CI [-1.377, -1.176], Q = 107.453*, I² = 85%
Receptive and expressive language: K = 8, ES = -.782, SE = .103, 95% CI [-.984, -.580], Q = 10.942
Processing speed: K = 36, ES = -.947, SE = .031, 95% CI [-1.009, -.885], Q = 66.381*, I² = 47%
Executive function: K = 13, ES = -.595, SE = .052, 95% CI [-.696, -.494], Q = 27.064*, I² = 52%
* p < .001.
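The Q statistics and I² values reported with each category quantify between-study heterogeneity (Higgins & Thompson, 2002; Huedo-Medina et al., 2006). I² expresses the share of total variability across the K effect sizes that reflects true between-study differences rather than sampling error; a minimal sketch, checked against the phonological processing row:

```python
def i_squared(q, k):
    """Higgins & Thompson (2002) I² heterogeneity index for a category
    pooling k effect sizes with Cochran's Q: I² = (Q - df) / Q * 100,
    where df = k - 1 and negative values are truncated to zero."""
    df = k - 1
    return max(0.0, (q - df) / q) * 100

# Phonological processing row of Table 1: Q = 107.453 across K = 17 effects.
print(round(i_squared(107.453, 17)))  # ≈ 85, matching the reported I² = 85%
```

By the usual benchmarks, values around 25%, 50%, and 75% indicate low, moderate, and high heterogeneity, which is why I² is reported only for the categories whose Q was significant.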
Table 2. Effect Sizes, Standard Errors, Confidence Intervals, and Homogeneity of Categories for Comparisons Between Children with RD (by RD Definition) and Typically Achieving Students
Total
  Group 1: K = 11, ES = -.602, SE = .075, 95% CI [-.748, -.455], Q = 98.088*
  Group 2: K = 63, ES = -1.066, SE = .032, 95% CI [-1.129, -1.002], Q = 483.653*
  Group 3: K = 139, ES = -1.063, SE = .019, 95% CI [-1.100, -1.025], Q = 1113.825*
Reading (all measures)
  Group 1: K = 5, ES = -.462, SE = .114, 95% CI [-.686, -.237], Q = 94.150*
  Group 2: K = 16, ES = -1.845, SE = .066, 95% CI [-1.974, -1.716], Q = 95.032*
  Group 3: K = 27, ES = -2.201, SE = .054, 95% CI [-2.306, -2.095], Q = 183.939*
Reading (basic skills)
  Group 1: K = 3, ES = -.191, SE = .139, 95% CI [-.462, .081], Q = 10.165*
  Group 2: K = 12, ES = -1.838, SE = .079, 95% CI [-1.993, -1.682], Q = 68.749*
  Group 3: K = 14, ES = -2.307, SE = .074, 95% CI [-2.452, -2.163], Q = 107.776*
Reading (comprehension)
  Group 1: K = 2, ES = -1.043, SE = .203, 95% CI [-1.441, -.645], Q = 71.965*
  Group 2: K = 4, ES = -1.861, SE = .118, 95% CI [-2.092, -1.630], Q = 25.255*
  Group 3: K = 13, ES = -2.079, SE = .079, 95% CI [-2.233, -1.925], Q = 71.686*
Intelligence
  Group 2: K = 6, ES = -.253, SE = .090, 95% CI [-.429, -.078], Q = 11.875*
  Group 3: K = 16, ES = -.983, SE = .064, 95% CI [-1.109, -.856], Q = 101.951*
Phonological processing
  Group 1: K = 2, ES = -.688, SE = .172, 95% CI [-1.025, -.351], Q = 1.005
  Group 2: K = 6, ES = -1.193, SE = .083, 95% CI [-1.357, -1.030], Q = 8.827
  Group 3: K = 9, ES = -1.434, SE = .070, 95% CI [-1.573, -1.296], Q = 79.860*
Processing speed
  Group 1: K = 1, ES = -.781, SE = .242, 95% CI [-1.255, -.307], Q = 0
  Group 2: K = 6, ES = -.958, SE = .093, 95% CI [-1.140, -.777], Q = 27.439*
  Group 3: K = 29, ES = -.949, SE = .034, 95% CI [-1.015, -.882], Q = 38.454
Executive function
  Group 2: K = 4, ES = -.781, SE = .175, 95% CI [-1.125, -.438], Q = 18.636*
  Group 3: K = 9, ES = -.578, SE = .054, 95% CI [-.683, -.472], Q = 7.198*
* p < .001.
Note. Group 1 includes studies that used low achievement definitions only (e.g., reading achievement < 25th percentile); Group 2 includes studies that relied on discrepancy definitions of SLD, with either a 15- to 17-point difference in standard scores on achievement and IQ tests or a school-identified population with indication that the school used a discrepancy approach; Group 3 includes studies that defined SLD as intelligence within the average range (IQ > 85) and reading performance one SD or more below the mean (e.g., reading achievement < 85).
Table 3
Effect Sizes, Standard Errors, Confidence Intervals, and Homogeneity of Comparisons Between Children With MD and Typically Achieving Students

Measured Process            K      ES     SE    Lower    Upper          Q
Total                      44   -0.892   .037  -0.965   -0.819   481.328**
Intelligence                6   -0.904   .091  -1.082   -0.726    18.877*
Math ability                7   -2.655   .115  -2.880   -2.430    68.420**
Working memory (verbal)     3   -0.909   .219  -1.338   -0.479     7.843*
Working memory (visual)     4   -0.441   .095  -0.626   -0.256    17.870**
Executive function          9   -1.049   .091  -1.226   -0.871    17.256*
Processing speed            4   -0.453   .082  -0.614   -0.293    26.753**
Short-term memory          11   -0.594   .008  -0.766   -0.422    22.647*

* p < .05; ** p < .001.

Figure 1. Meta-analysis search terms and subject delimiters.

Learning Disability-Related Terms
  General terms: low achiev*, remedi*, LD, ld, learn* disabil*
  Specific terms: high-incidence disability*, HID, math disability, dyscalculia, math* performance, math* ability*, math* assessment, math* exam, math* tests*, reading disability, dyslexia, read* performance, read* ability, read* assessment, read* exam, read* tests

Cognitive Processing-Related Terms
  cognitiv* process*, menta* process*, though* process*, working memory, verbal IQ, processing speed, phonological process*, fact retrieval, automatic retrieval, visual memory, visuo-spatial

Subject delimiters: research*, stud*, comparison, test*

Figure 2. Meta-analysis study inclusion criteria.

1. The study was published in a refereed journal.
2. The study compared an SLD group with RD or MD with an average-achieving and/or a low-achieving group.
3. The study reported criteria for defining the SLD group, along with means and standard deviations for each measure.
4. The study reported criteria for defining the control or typically achieving group.
5. The study reported a standardized measure of reading or math achievement.
6. The study included an assessment of a cognitive process. Examples of tasks are provided in Figure 3.
7. The study reported norm-referenced scores for the cognitive measures or reported psychometric information (e.g., sample reliability) on the measures, specifically excluding researcher-developed measures with no reported psychometric information (e.g., Hurford et al., 1994).
8. The study provided sufficient quantitative information to permit the calculation of effect sizes.
9. The study measured the performance of English-speaking students.
10. The study sampled only school-aged students (ages 5-18 years) or reported data disaggregated by age so that only school-aged students could be included in the meta-analysis.

Figure 3. Achievement and cognitive processing categories and measures.

Achievement Categories
  Basic reading skills: This category included measures of decoding and word-reading skill.
  Reading comprehension: This category included measures of reading comprehension.
  General reading ability: Some studies used broad reading scores on standardized measures without disaggregating performance on the subtests.
  Broad mathematics ability: This category included all measures of math ability, due to the small number of effect sizes.

Cognitive Processing Categories
  Verbal working memory: This category included measures that require recall of sets of words, digits, and sentences after a distracter question has been asked.
  Visual working memory: This category included measures that require recall of sets of matrices, figures, designs, or objects after a distracter question has been asked.
  Short-term memory: This category included only measures that required recall of digits, words, or sentences.
  Phonological processing: This category included a range of measures, including those that asked students to identify initial sounds, segment words, blend sounds, and/or rhyme words.
  Receptive and expressive language: This category included measures of vocabulary and listening comprehension.
  Processing speed: This category included measures that asked for the rapid naming of letters, numbers, and objects, as well as speeded tests such as coding and symbol search.
  Executive function: This category included measures that require mental activities such as planning, organizing, strategizing, and paying attention to and remembering details. Common assessments included the Wisconsin Card Sorting Test and the Tower of London.
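Inclusion criterion 8 in Figure 2 requires that a study report enough quantitative information (group means, standard deviations, and sample sizes) to compute a standardized mean difference. A minimal sketch of that computation follows, assuming a pooled-SD Cohen's d with Hedges' small-sample correction; the exact estimator and the function name are assumptions for illustration, and the usage values are hypothetical, not drawn from the reviewed studies.

```python
import math

def standardized_mean_difference(m_sld, sd_sld, n_sld, m_ta, sd_ta, n_ta):
    """Standardized mean difference (SLD minus typically achieving group).

    Returns Hedges' bias-corrected g and its approximate sampling variance.
    """
    # Pooled within-group standard deviation.
    pooled_sd = math.sqrt(((n_sld - 1) * sd_sld**2 + (n_ta - 1) * sd_ta**2)
                          / (n_sld + n_ta - 2))
    d = (m_sld - m_ta) / pooled_sd
    # Hedges' small-sample correction factor.
    g = d * (1 - 3 / (4 * (n_sld + n_ta) - 9))
    # Approximate sampling variance of g (used as a meta-analytic weight).
    variance = (n_sld + n_ta) / (n_sld * n_ta) + g**2 / (2 * (n_sld + n_ta))
    return g, variance

# Hypothetical illustration: an SLD group one SD below a comparison group
# on a standard-score metric (M = 100, SD = 15), n = 30 per group.
g, variance = standardized_mean_difference(85, 15, 30, 100, 15, 30)
```

The negative sign convention here matches Tables 2 and 3, where negative effect sizes indicate lower performance by the SLD group.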