
Evaluating the evidence base for cognitive strategy instruction and mathematical problem solving.

Cognitive strategy instruction focuses on teaching youngsters a range of cognitive and metacognitive processes, strategies, or mental activities to facilitate learning and improve performance. These strategies may be relatively simple or complex as a function of the level of the task and the contextual conditions. Cognitive strategies appear to meet the learning needs of many students with disabilities. In particular, students with learning disabilities (LD) typically have not acquired strategies that facilitate problem solving or may have difficulty selecting strategies that are appropriate to the task, orchestrating their use, and following through with their execution (Swanson, 1990). These students infrequently abandon and replace ineffective strategies, rarely adapt previously learned strategies, and typically do not generalize strategy use across domains (Swanson, 1993). In contrast, strategic learners have a repertoire of strategies and use them effectively and efficiently. They are self-directed, self-regulating, and motivated problem solvers who can also generalize strategy use (Pressley, Borkowski, & Schneider, 1987). Students with other disabilities such as cognitive delay, behavioral disorders, and attention deficit hyperactivity disorder also display problems with strategic learning and self-regulation (Morris & Mather, 2008). Like students with LD, these students may benefit from cognitive strategy instruction (e.g., Mesler, 2004).

Cognitive strategy instruction and direct instruction, which have many procedural commonalities, emerged as the two most powerful instructional approaches in Swanson's meta-analyses of 20 years of intervention research in LD (Swanson, 1999; Swanson & Sachs-Lee, 2000). Unlike direct instruction, which is based primarily on behavioral theory, cognitive strategy instruction is based on both behavioral and cognitive theory (i.e., information processing and developmental theory). Cognitive strategy instruction combines instruction in cognitive processes (e.g., visualization) and metacognitive or self-regulation strategies (e.g., self-questioning). To illustrate, Montague's (1992) model includes seven cognitive processes critical to solving mathematical word problems: (a) reading the problem for understanding, (b) paraphrasing by putting the problem into one's own words, (c) visualizing by drawing a schematic representation, (d) hypothesizing or setting up a plan, (e) estimating or predicting the answer, (f) computing, and (g) checking that the plan and answer are correct. The model also includes a self-regulation component, a SAY, ASK, CHECK procedure whereby students give themselves instructions, ask themselves questions, and monitor their performance as they solve problems.
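Read procedurally, the model is a fixed sequence of processes, each wrapped in SAY, ASK, and CHECK prompts, so it can be captured as simple data. The minimal Python sketch below is one hypothetical rendering: the seven process names follow Montague (1992), but the prompt wording and all identifiers are ours, not the published instructional materials.

```python
# Hypothetical encoding of Montague's (1992) seven-process routine with its
# SAY, ASK, CHECK self-regulation component. Process names follow the
# article; the prompt wording is illustrative only.

ROUTINE = [
    ("Read (for understanding)",
     "Say: Read the problem; reread it if I do not understand.",
     "Ask: Have I read and understood the problem?",
     "Check: That I understand the problem as I solve it."),
    ("Paraphrase (own words)",
     "Say: Put the problem into my own words.",
     "Ask: Have I identified the important information?",
     "Check: That the information goes with the question."),
    ("Visualize (schematic representation)",
     "Say: Draw a picture or diagram of the problem.",
     "Ask: Does the picture fit the problem?",
     "Check: The picture against the problem information."),
    ("Hypothesize (set up a plan)",
     "Say: Decide how many steps and which operations are needed.",
     "Ask: If I do this, what will I get? What comes next?",
     "Check: That the plan makes sense."),
    ("Estimate (predict the answer)",
     "Say: Round the numbers and estimate in my head.",
     "Ask: Did I write down my estimate?",
     "Check: That I used the important information."),
    ("Compute (do the arithmetic)",
     "Say: Do the operations in the right order.",
     "Ask: How does my answer compare with my estimate?",
     "Check: That every operation was done correctly."),
    ("Check (verify plan and answer)",
     "Say: Check the computation.",
     "Ask: Have I checked every step? Is my answer right?",
     "Check: That everything is right; if not, go back."),
]

def model_think_aloud(routine=ROUTINE):
    """Print a think-aloud script of the kind a teacher might model."""
    for step, (process, say, ask, check) in enumerate(routine, start=1):
        print(f"{step}. {process}")
        for prompt in (say, ask, check):
            print(f"   {prompt}")

if __name__ == "__main__":
    model_think_aloud()
```

Representing the routine as data, rather than hard-coding it, mirrors the pedagogical point: the same explicit modeling procedure can be applied to any strategy sequence.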

The purpose of cognitive strategy instruction is to teach students how to think and behave like proficient problem solvers and strategic learners. As proficient problem solvers, students must be able to understand, analyze, represent, execute, and evaluate problems. The procedural basis of cognitive strategy instruction is explicit instruction, which is characterized by highly structured and organized lessons, appropriate cues and prompts, guided and distributed practice, cognitive modeling, interaction between teachers and students, immediate and corrective feedback on performance, positive reinforcement, overlearning, and mastery (Montague, 2003).

The self-regulated strategy development model (SRSD; Graham & Harris, 2003), designed in the early 1980s to improve composition skills in students with LD, contains the basic components of all cognitive strategy instructional routines. The model has six stages: (a) developing and activating background knowledge; (b) discussing the strategy; (c) modeling the strategy; (d) memorizing the strategy; (e) supporting the strategy (e.g., guided practice using scaffolded instructional techniques); and (f) independent performance. Modeling the strategy is critical to the success of cognitive strategy instruction. Cognitive modeling, sometimes referred to as process modeling, is simply thinking aloud while demonstrating a cognitive activity. The teacher models how successful problem solvers/strategic learners think and behave as they engage in academic tasks like solving mathematical problems. This technique stresses learning by imitation and provides students the opportunity to observe and hear how successful problem solvers understand and analyze a problem or task, develop a plan to complete the task, and evaluate the outcome. In sum, cognitive strategy instruction teaches students both cognitive and metacognitive processes and strategies using a specific and explicit instructional routine.

In a comprehensive review of cognitive strategy instruction research, Wong, Harris, Graham, and Butler (2003) discussed the applications of cognitive strategy instruction across age levels and domains, focusing particularly on students with LD. Some of the notable variations included reciprocal teaching (Palincsar & Brown, 1984) and transactional strategies instruction (Pressley et al., 1992) to improve reading comprehension; SRSD (Harris & Graham, 1992) and the cognitive strategy instruction for writing model (Englert, Raphael, & Anderson, 1992) to improve composition; the Kansas learning strategies curriculum (Schumaker & Deshler, 1992) to improve the reading and writing skills of adolescents with LD; genre-specific writing strategies (Wong, Butler, Ficzere, & Kuperis, 1996); strategic content learning (SCL; Butler, 1994); and cognitive strategy instruction for math problem solving (Montague, 1992). Cognitive strategy instruction is particularly well suited to improving mathematical problem solving for students with strategic learning problems because it provides the cognitive and metacognitive tools necessary for higher order thinking and comprehension tasks.

Researchers in special education have recently proposed specific quality indicators for high-quality research in special education as well as standards for evidence-based practices (e.g., Gersten et al., 2005; Horner et al., 2005). The quality indicators proposed for single-subject and group intervention research focus on meeting specific criteria associated with the participants and settings, dependent and independent variables, internal and external validity, social validity, and data analysis. Given (a) that no systematic review of the research literature has been conducted specifically for cognitive strategy instruction in math problem solving for students with disabilities and (b) that cognitive strategy instruction as an instructional approach in the critical area of mathematics seems promising, the purpose of this review is to apply the proposed quality indicators and standards to the body of research on cognitive strategy instruction in math problem solving for students with disabilities.

METHOD

IDENTIFICATION OF STUDIES

We conducted a review of the literature using PsycINFO and Education Full Text electronic databases from 1969 through 2006 using the keywords mathematical problem solving and cognitive strategy instruction and examined reference lists from studies located through these databases. This search resulted in 42 articles for potential study. We then applied the following criteria to identify research studies suitable for the analysis:

* The article reported a research study.

* The study was published in a peer-reviewed journal.

* The independent variable (intervention) met the definition of cognitive strategy instruction as described in the introduction (i.e., the intervention presents a routine for mathematical problem solving that incorporates both cognitive processes and metacognitive strategies for solving problems and utilizes explicit instructional procedures with an emphasis on process modeling and verbalization of the routine during the acquisition stage).

* The study design was either single-subject or group experimental (i.e., true experimental or quasi-experimental).

* Participants included students identified as having a disability.

* Mathematical problem solving was an outcome variable.

We discussed each study with regard to the criteria for inclusion in the analysis and reached 100% agreement on inclusion and exclusion of studies. We limited our review to published, peer-reviewed studies as that is one of the criteria for determining evidence-based practice as proposed by Horner et al. (2005). Seven of the 42 published articles met all the criteria for inclusion in the analysis; 5 studies used a single-subject design and 2 used a group design. The other articles did not qualify primarily because they did not meet the definition for cognitive strategy instruction and/or did not focus specifically on mathematical problem solving. Three dissertations met all but the publication criterion (Daniel, 2003; Mesler, 2004; Tarraga, 2007).
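The six criteria amount to a conjunctive filter over the 42 candidate articles. The sketch below is a hypothetical encoding of that filter; the field names and the example record are illustrative, not data from the review.

```python
from dataclasses import dataclass

@dataclass
class Article:
    """One candidate article from the database search (fields are ours)."""
    is_research_study: bool
    peer_reviewed: bool
    meets_csi_definition: bool     # cognitive + metacognitive routine, explicit instruction
    design: str                    # e.g., "single-subject", "true-experimental", "quasi-experimental"
    participants_have_disability: bool
    math_problem_solving_outcome: bool

ELIGIBLE_DESIGNS = {"single-subject", "true-experimental", "quasi-experimental"}

def meets_inclusion_criteria(a: Article) -> bool:
    """All six criteria must hold for an article to enter the analysis."""
    return (a.is_research_study
            and a.peer_reviewed
            and a.meets_csi_definition
            and a.design in ELIGIBLE_DESIGNS
            and a.participants_have_disability
            and a.math_problem_solving_outcome)

# A dissertation such as Mesler (2004) fails only the peer-review criterion:
dissertation = Article(True, False, True, "single-subject", True, True)
print(meets_inclusion_criteria(dissertation))  # False
```

Applied to the 42 articles, a filter of this kind retains exactly the 7 studies analyzed here.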

CONTENT ANALYSIS OF THE STUDIES

We examined and described each of the seven selected studies individually with respect to its purpose and research questions, participants and setting, design, dependent and independent variables, and results. Additionally, we were interested in whether the studies examined maintenance and generalization of the strategy and performance. Table 1 summarizes these components for each study: the purpose and/or research questions, participants and setting, design, dependent variables, independent variables, and results, including maintenance and generalization. The Results section summarizes the components across studies.

IDENTIFYING THE PRESENCE OF QUALITY INDICATORS

We reviewed each study for content and methodology and then applied the quality indicators to each study as appropriate (i.e., Horner et al., 2005, for the five single-subject research studies and Gersten et al., 2005, for the two group experimental studies). Three independent raters (i.e., the two authors and a professor in special education from another university with expertise in cognitive interventions in mathematics) assessed the presence of quality indicators in each study using forms that we developed.

Each evaluator used the following questions to assess the quality (as defined by Horner et al., 2005) of the single-subject studies:

1. Participants and setting. Were the participants and the selection process described with sufficient detail to allow replication? Was the setting described with sufficient detail to allow replication?

2. Dependent variables. Were the dependent variables operationalized, quantifiable, valid, and measured repeatedly over time? Were reliability or interobserver agreement data for the dependent measures collected and reported?

3. Independent variables. Was the independent variable described in sufficient detail for replicability? Was the independent variable systematically manipulated and under the control of the experimenter? Was there evidence of fidelity of implementation through continuous direct measurement of the independent variable or through an equivalent technique?

4. Baseline. Was the baseline phase described in detail, and did it establish a pattern of responding that would enable prediction of future performance if no intervention were provided?

5. Experimental control/internal validity. Did the design control for internal validity and provide at least three demonstrations of experimental effect at three different points in time? Did the results indicate a pattern that demonstrated experimental control?

6. External validity. Were experimental effects replicated across participants, settings, or materials to demonstrate external validity?

7. Social validity. Were the independent variable and the change associated with it socially important, cost efficient, feasible, and practical (i.e., can they be implemented by typical agents in typical settings over extended periods of time)?

Each evaluator used the following questions to assess the quality (as defined by Gersten et al., 2005) of the group design studies:

1. Participants. Were the participants described with sufficient detail to determine whether they demonstrated the disability presented? Were procedures used to ensure the comparability of participant characteristics across conditions? Were procedures used to ensure the comparability of intervention agents across conditions?

2. Intervention. Was the intervention clearly described and specified? Was implementation fidelity described and measured? Was the nature of services provided in comparison conditions described?

3. Outcome measures. Were multiple measures included that provided a balance between measures tightly aligned with the intervention and measures of generalized performance? Were measures administered at appropriate intervals for determining effects of the intervention?

4. Data analysis. Were the data analyses closely and appropriately linked to the research questions? Was the unit of analysis linked to the statistical analyses? Were both inferential statistics and effect sizes reported?

Some questions assessed multiple components of quality indicators. For these questions, we judged and rated each component individually. We calculated interrater reliability as a percentage agreement score reflecting the frequency of the occurrence of unanimous agreement out of all possible opportunities to agree on each indicator. For the single-subject studies, the raters unanimously agreed 113 times out of 121 possible scoring opportunities across the five studies, an interrater reliability of 93%. For the two group experimental studies, the raters unanimously agreed 17 times out of 22 possible times (77%). Taken together, the raters unanimously agreed on 130 scoring opportunities out of a possible 143, resulting in interrater agreement of 91% across the seven studies of cognitive strategy instruction and mathematical problem solving. The final ratings for items that did not have unanimous agreement across raters reflected the ratings of the two raters who did agree. For example, two raters did not think that Chung and Tam (2005) described the participants in the study sufficiently, whereas one rater did. The final rating was "no," reflecting the rating of the two raters in agreement.
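The agreement statistic is simply the number of unanimous agreements divided by the number of scoring opportunities. The following check reproduces the reported figures (the counts come from the text; the small function is ours):

```python
def percent_agreement(unanimous: int, opportunities: int) -> float:
    """Percentage of scoring opportunities with unanimous three-rater agreement."""
    return 100.0 * unanimous / opportunities

print(round(percent_agreement(113, 121)))            # single-subject studies -> 93
print(round(percent_agreement(17, 22)))              # group design studies   -> 77
print(round(percent_agreement(113 + 17, 121 + 22)))  # overall: 130/143       -> 91
```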

APPLICATION OF STANDARDS FOR EVIDENCE-BASED PRACTICE

For a practice to be considered evidence-based on the basis of single-subject research, Horner et al. (2005) proposed that experimental control must be established across at least five single-subject studies that meet the quality indicators for acceptable methodological rigor (see Table 2). In addition, the studies must include a total of at least 20 participants and must be conducted by at least three different researchers in at least three different geographical locations.

Gersten et al. (2005) divided the quality indicators for group experimental studies into "essential" or "desirable" quality indicators and developed standards for studies to be judged as "high quality" or "acceptable" studies. The four essential quality indicators focus on study participants, intervention and comparison conditions, outcome measures, and data analysis (see Table 3). The eight desirable quality indicators provide specific criteria pertaining to attrition rates, reliability and validity of measures, follow-up measurement of outcomes, fidelity of treatment, documentation of comparison conditions, the nature of the intervention, and clarity of presentation (see Gersten et al., p. 152). "High-quality" studies must meet all but one of the essential indicators and at least four of the desirable indicators. "Acceptable" studies must meet all but one of the essential indicators and at least one of the desirable indicators. Specifically, Gersten et al. suggested that for a practice to be categorized as evidence-based or promising, the body of research supporting the practice must include at least two high-quality studies or four acceptable studies that support the practice. In addition, for a practice to be evidence-based, the weighted effect size of acceptable and high-quality studies examining the practice must be significantly greater than zero; for a promising practice, there must be at least a 20% confidence interval for the weighted effect size that is greater than zero. We examined the five single-subject studies and two group studies to determine whether, as a "body of research," they met the standards for cognitive strategy instruction being an evidence-based practice for math problem solving for students with disabilities.
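To illustrate the quantitative part of these standards, the sketch below computes an inverse-variance weighted mean effect size and a confidence interval. Gersten et al. (2005) do not prescribe a particular formula in the passage summarized here, so the fixed-effect weighting, the normal approximation, and the reading of the 20% interval criterion are all our assumptions, as are the example values.

```python
import math

def weighted_effect_size(effects, variances, z=1.96):
    """Inverse-variance weighted mean effect size with a symmetric CI.

    z = 1.96 gives a 95% CI (evidence-based criterion: lower bound > 0);
    z ~= 0.25 gives a central 20% interval, one reading of the weaker
    "promising practice" criterion.
    """
    weights = [1.0 / v for v in variances]
    mean = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return mean, (mean - z * se, mean + z * se)

# Hypothetical effect sizes and sampling variances from acceptable studies:
mean_es, (lo, hi) = weighted_effect_size([0.45, 0.60], [0.04, 0.06])
print(f"weighted ES = {mean_es:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```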

RESULTS

Table 1 summarizes the purpose, participants, setting, design, dependent and independent variables, and results for each of the studies included in the review. Tables 2 and 3 summarize the results of analyses for determining the presence of the seven quality indicators for the single-subject design studies and the four essential quality indicators for the group experimental design studies, respectively.

CONTENT FINDINGS

The purpose of all seven studies was to investigate the effects of a cognitive strategy routine on the mathematical problem solving of students with disabilities. The number of steps involved in the cognitive strategy routines utilized in the studies reviewed ranged from five to nine. A total of 142 students participated in the studies; mean ages across studies ranged from 8-4 to 16-7 (years-months). The majority of students (n = 110) were described as having LD, 2 were identified as having mild mental retardation (Cassel & Reid, 1996), and 30 as having mild intellectual disabilities (Chung & Tam, 2005). Students were instructed individually or in small groups in a resource or classroom setting. One study was conducted in a "special school for students with mild intellectual disabilities in Hong Kong" (Chung & Tam, p. 209).

All single-subject design studies utilized a multiple baseline across participants design. One of these studies (Hutchinson, 1993) also included a control group (n = 8) that completed the pretests and posttests only. Although Hutchinson used nonparametric statistical procedures for between-group comparisons, the primary design of the study was single-subject. The two group experimental design studies each compared three types of instruction. Chung and Tam (2005) compared conventional instruction, worked example instruction, and cognitive strategy instruction. Montague, Applegate, and Marquard (1993) compared cognitive instruction only, metacognitive instruction only, and a combination of both. The primary dependent variable in each study was a curriculum-based measure of word problems, which varied as a function of the level of instruction (e.g., five types of addition and subtraction problems, three types of algebra problems). Other dependent variables included completion time, types of errors, strategy use, metacognitive awareness, the Math Problem Solving Assessment-Short Form (Montague, 1992), and the British Columbia Mathematics Achievement Test, Grade 7/8 Applications (Robitaille, Sherrill, Kelleher, Klassen, & O'Shea, 1980).

In all studies, the independent variable was a cognitive routine in the form of a sequence of cognitive and metacognitive activities designed to facilitate problem solving. One study included a worked example routine (Chung & Tam, 2005). Explicit instruction, which includes techniques such as distributed practice and progress monitoring, was used across studies to teach the routine to participants. In general, findings indicated that the strategies were successful and problem solving improved. Some variation was evident across single-subject studies, suggesting that individuals may need additional instruction or modifications to meet the criterion for mastery. In the group studies, Chung and Tam found that the group receiving the intervention significantly outperformed the group receiving conventional instruction, and Montague et al. (1993) found that students with LD improved to the level of average achieving students on the posttest. All studies measured performance maintenance, and all but two (Case, Harris, & Graham, 1992; Hutchinson, 1993) measured generalization (e.g., setting generalization or level of problem solving). Montague and colleagues (1993) provided booster lessons when performance fell below a certain level, which increased performance to criterion.

METHODOLOGICAL FINDINGS

Single-Subject Design Quality Indicators. We investigated the presence of the quality indicators for single-subject studies (Horner et al., 2005) in the five single-subject studies reviewed (i.e., Case et al., 1992; Cassel & Reid, 1996; Hutchinson, 1993; Montague, 1992; Montague & Bos, 1986). The first quality indicator focuses on participants and setting. The criteria were that the participants, the process for selection, and the physical setting must be described in sufficient detail for replication. All three raters agreed that each of the five studies met the three criteria and thus agreed that all studies met the first quality indicator. With respect to the level of detail provided by the researchers, one rater noted that the selection process as described by Cassel and Reid could have been more detailed. Also, ethnicity data were missing in four studies, and one study did not provide the degree of discrepancy used to define LD. However, the ratings were positive and, thus, the studies met the criteria for this indicator.

The second quality indicator requires that the dependent variable be described with operational precision and measured with a procedure that generates a quantifiable index. The measurement must also be valid and described with replicable precision and be administered repeatedly over time. Interrater reliability or interobserver agreement must be reported and minimal standards met. Although all raters concurred that measurement was sufficiently described and quantifiable and occurred repeatedly, there were some comments. One rater mentioned that the description of the measures could have been more detailed in Montague and Bos (1986) and Montague (1992). Another noted that, in many of the phases across studies, data were collected on only three occasions rather than the five recommended by Horner et al. (2005). However, this same rater recognized that Kazdin (1982) recommended a minimum of three data points, particularly during baseline, which was the basis for the decision in the Hutchinson (1993), Montague (1992), and Montague and Bos studies. Only Cassel and Reid (1996) met the last criterion, interrater reliability. Case et al. (1992) reported a reliability coefficient, but, according to Horner et al., "reporting interobserver agreement ... only as one score across all measures in a study would not be appropriate" (p. 167). Likewise, Montague reported "interrater agreement averaged 82%" (1992, p. 233) for the 12 protocols across the study. Hutchinson reported reliability data only for the metacognitive interviews.

To meet the criteria for the third quality indicator, the independent variable must be described with replicable precision and be manipulated systematically under the control of the experimenter. Also, treatment fidelity must be ascertained either through continuous and direct measurement of implementation or an equivalent (Gresham, Gansle, & Kurtz, 1993). Although raters agreed that all five studies adequately described the independent variable and that its manipulation was under the control of the experimenter, none of the five studies reported measurement of treatment fidelity.

The fourth quality indicator requires the baseline to be described with replicable precision and to provide evidence of a pattern prior to treatment. Although Horner et al. (2005) recommended five or more data points for the baseline phase, they also suggested that fewer data points are acceptable in specific cases. We used three data points as the criterion based on Kazdin's (1982) recommendation. Four studies collected data on at least three occasions during baseline, whereas the fifth (Hutchinson, 1993) collected data only twice during baseline. However, Hutchinson's rationale seemed sound in that, according to Kazdin, "requiring a subject to complete a task for assessment purposes may be difficult for an extended baseline" (p. 146) and "the clearest instance of stability would be if the behavior never occurs or reflects a complex skill that is not likely to change over time without special training" (p. 148). Hutchinson's participants had low and consistent baseline scores, which indicated a distinct pattern. Indeed, most participants did not get a single problem correct during the initial baseline phase. Also, one rater noted that in three of the studies (i.e., Case et al., 1992; Montague, 1992; Montague & Bos, 1986) one or two participants appeared to have an upward trend during baseline. She noted, though, that the performance level of these participants was very low. All studies received positive ratings and, thus, met the criteria for the fourth quality indicator.

The criteria for the fifth, sixth, and seventh quality indicators have to do with validity. To address experimental control and internal validity (the fifth quality indicator), there must be at least three demonstrations of experimental effect, a design that controls threats to internal validity, and a pattern that demonstrates experimental control (as judged by visual analysis). The criterion related to external validity (the sixth quality indicator) requires that effects be replicated across participants, settings, or materials. In addition, for social validity (the seventh quality indicator), the dependent variable and the magnitude of change due to the intervention must be socially important. As well, the independent variable must be cost effective and implemented over time, in typical contexts, and by typical intervention agents. The raters unanimously voted yes with respect to the criteria related to internal and external validity issues and recognized the social importance of these interventions for improving mathematical problem solving for students with disabilities. However, there was some variation among the raters regarding the final criterion related to social validity. Only two studies (i.e., Cassel & Reid, 1996; Montague & Bos, 1986) were conducted in typical contexts by typical intervention agents: in the participants' resource classrooms by their teacher, who was also one of the researchers. The other three studies were conducted either by the researchers, both former teachers, or by graduate assistants in school settings described as a self-contained classroom, small classroom, or resource room. Two of the three raters accepted these intervention agents and settings as typifying instruction that students with disabilities usually receive (i.e., individualized or small group instruction in small classrooms that serve students with disabilities). Thus, all five studies met all criteria for the quality indicators related to internal, external, and social validity.

Single-Subject Standards for Evidence-Based Practice. The single-subject studies failed to meet all criteria for two quality indicators (i.e., dependent and independent variables). Four studies did not sufficiently report interrater agreement, and none appeared to measure treatment fidelity. For a practice to be evidence-based, Horner et al. (2005) recommended that the body of single-subject research reflect the following standards (restated as a simple checklist in the sketch after the quoted standards):

(1) a minimum of five single-subject studies that meet minimally acceptable methodological criteria and document experimental control have been published in peer-reviewed journals,

(2) the studies are conducted by at least three different researchers across at least three different geographical locations, and

(3) the five or more studies include a total of at least 20 participants. (p. 176)
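These standards reduce to a four-part checklist. A hypothetical rendering follows; the function and parameter names are ours, and the participant count sums the five single-subject studies in Table 1.

```python
# Checklist rendering of the three Horner et al. (2005) standards quoted above.

def meets_horner_standards(qualifying_studies: int,
                           research_teams: int,
                           locations: int,
                           total_participants: int) -> bool:
    return (qualifying_studies >= 5        # Standard 1: 5+ methodologically acceptable studies
            and research_teams >= 3        # Standard 2: 3+ different researchers ...
            and locations >= 3             # ... across 3+ geographical locations
            and total_participants >= 20)  # Standard 3: 20+ participants in total

# The five reviewed studies span at least three research teams and locations
# and include 40 participants (4 + 4 + 20 + 6 + 6; see Table 1), but none
# documented treatment fidelity, so Standard 1 is not satisfied:
print(meets_horner_standards(qualifying_studies=0, research_teams=3,
                             locations=3, total_participants=40))  # False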

In sum, although the five studies reviewed met Standards 2 and 3, they did not meet the standard related to the methodological criteria. Four did not report interobserver agreement for the dependent variables, and none addressed measurement of treatment fidelity. Therefore, cognitive strategy instruction does not meet proposed standards for being an evidence-based practice for improving mathematical problem solving for students with disabilities based on the extant single-subject research literature.

Group Experimental Design Quality Indicators. We evaluated both group design studies selected for this review (Chung & Tam, 2005; Montague et al., 1993) for the presence of four essential quality indicators proposed by Gersten et al. (2005) for group experimental research (see Table 3). The first essential quality indicator has to do with participant description. Montague et al. (1993) sufficiently described participants, setting, and interventionists and described comparability of groups (students and interventionists). For example, the authors reported participant demographics and conducted analyses indicating no significant group differences on ability and reasoning measures. In addition, the two graduate assistants and the researcher had previously taught students with disabilities, the assistants were trained by the researcher, and they used scripted lessons developed by the researcher. In contrast, Chung and Tam (2005) did not sufficiently describe the disability diagnosis of "mild intellectual disabilities" (p. 209). The only criteria the authors used were scores of "around 55-70" (p. 209) on the WISC-III (Wechsler, 1991) and "around 70%" (p. 209) on a mathematics screening measure. Previous records were used to determine low ability, and no specific range of scores was provided. Participant description, therefore, lacked sufficient specificity. Moreover, although Chung and Tam randomly assigned participants to groups, they did not perform statistical analysis of group equivalency and gave no background information about the researcher who was the interventionist for all three conditions.

The second essential quality indicator for group design studies has to do with the intervention. All raters agreed that the interventions for both studies were described clearly. However, neither study measured fidelity of implementation, an important aspect of intervention research. In the Montague et al. (1993) study, the researcher trained the research assistants, and all intervention agents (two research assistants and the researcher) used scripted lessons across conditions. In the Chung and Tam (2005) study, the same researcher was the interventionist across conditions. Despite these apparent safeguards with respect to training, scripted lessons, and consistency of the interventionist, no formal measures of fidelity were collected. Thus, there was no documentation that the intervention was implemented with integrity across intervention agents and conditions. With regard to the requirement that the comparison condition be adequately described, Montague et al. conducted a component analysis study rather than the typical experimental-versus-control group study by comparing cognitive instruction to metacognitive instruction only and then combining the two components for comparison. However, they included a "normally achieving" group for comparison on one outcome measure of math problem solving. As a result, raters scored this criterion as not applicable.

The third essential quality indicator has to do with the outcome measures. Both studies collected data at appropriate points in time. However, both studies used only researcher-developed measures of math problem solving, and these were closely aligned with the treatment. According to Gersten et al. (2005), a combination of aligned measures and generalizable measures (e.g., a standardized math problem-solving test) is required. Neither study reported reliability and validity information for the measures, although Montague et al. (1993) provided some information on the validity of the six problems included in one of their measures.

The fourth essential quality indicator concerns data analysis. Both studies were quasi-experimental and, thus, the analysis seemed appropriate to their design and purpose. Both studies used a repeated measures design addressing two factors: condition and trial. Both studies used the student as the unit of analysis; that is, in the Montague et al. (1993) study, intact groups of students with LD were instructed, and the mean posttest scores of students were the outcome measures. Chung and Tam (2005) randomly assigned students to one of three groups, provided one individualized session to each student, then instructed students in groups for the remaining four sessions, and used mean posttest scores as the dependent variable. Neither study, however, reported effect size, thereby not meeting all criteria for this quality indicator.

The analysis of these two group studies indicated both were uneven in their adherence to the criteria for the quality indicators and, consequently, did not sufficiently meet the essential quality indicator criteria required for high quality or acceptable research status. Indeed, Montague et al. (1993) met only the first quality indicator, the indicator associated with participants. With respect to the other essential quality indicators, Montague et al. failed to document treatment fidelity, include generalizable posttest measures, and report effect sizes. Chung and Tam (2005) fell short on these same indicators and, in addition, did not sufficiently describe the participants and interventionist, thereby failing to meet any of the four quality indicators. Generally, there was consensus among the raters. However, for the Chung and Tam study, one rater scored all three criteria related to the participants positively, whereas the other two did not. One rater thought that both studies included multiple measures; the other two did not. Because Montague et al. did not explicitly state the research questions, one rater scored the criterion related to the linkage of the techniques to the research questions negatively.

Group Experimental Standards for Evidence-Based Practice. Because neither of the group design studies met Gersten et al.'s (2005) initial standard of addressing three of four essential quality indicators, we did not evaluate these studies formally for the presence of the eight desirable quality indicators, nor did we calculate a weighted effect size. Because neither group study evaluating the practice was rated as high quality or acceptable, cognitive strategy instruction in math for students with disabilities could not be considered evidence-based (or promising) on the basis of the extant group experimental research base, regardless of the number of desirable quality indicators met or the weighted effect size.

DISCUSSION

We reviewed seven studies that utilized cognitive strategy instruction to improve mathematical problem solving for students with disabilities against the benchmarks for determining the quality of the research (Gersten et al., 2005; Horner et al., 2005), with the ultimate aim of determining whether, on the basis of research conducted thus far, the practice is evidence-based. The five single-subject studies stood up relatively well against the quality indicators developed by Horner et al. (2005) to evaluate single-subject research. Specifically, our analysis suggests that all five single-subject studies met most of the "minimally acceptable methodological criteria" (p. 176). However, none of the studies met the criterion for describing and reporting measurement of treatment fidelity, and only Cassel and Reid (1996) met the criterion for reporting interrater agreement for scoring the dependent variables. Consequently, there is no quantifiable way to determine whether the intervention was applied systematically and consistently across groups or whether the measures were scored reliably.

For the two group design studies, the most salient problems had to do with the outcome measures and fidelity of implementation. Both studies used only researcher-developed outcome measures. Gersten et al. (2005) specified that multiple measures be used (e.g., for these two studies, a standardized measure of mathematical problem solving would have been appropriate). Like the single-subject studies, neither study described procedures to measure treatment fidelity nor did they report level of fidelity. If only for reasons having to do with outcome measures and treatment fidelity, we concluded that neither the single-subject studies nor the group design studies supported cognitive strategy instruction as an evidence-based practice for improving mathematical problem solving for students with disabilities. This analysis suggests that future intervention studies must be designed more stringently with particular attention to clearly specifying the procedures used to measure and report treatment fidelity to ensure that the intervention is implemented with integrity.

With respect to participant descriptions, two of the seven studies included students with mild cognitive delays; the majority focused on students with LD. It is important to note the wide variation in ability, achievement, and grade level in the students participating in these studies. The variation among participants affects generalizability of findings, which is a concern in most intervention studies. The critical question for intervention researchers is: For whom is the intervention appropriate and effective? One criterion that both Horner et al. (2005) and Gersten et al. (2005) included is that participants must be described sufficiently; this may need further explication. If a practice is identified as evidence-based, it may also be important to identify specifically for whom the practice was effective. For example, one recommendation for single-subject studies could be that the study be replicated with individuals who have characteristics similar to those individuals in the previous study and/or who meet certain criteria for participation. In group design studies, researchers could be required to describe not only the sample as a whole, but also to describe in detail subsets of participants within the larger group and then conduct appropriate analyses relevant to the subsets. In this way, educators can determine who benefited and who did not, and make informed decisions about using the practice with students with disabilities who vary considerably in cognitive, behavioral, and social characteristics.

Certainly, the demographics and characteristics of participants should be described in detail, but there is another important consideration regarding criteria for participation in research studies. We recommend including both inclusion and exclusion criteria for selecting participants. For example, in all of her studies, Montague established criteria for inclusion (i.e., in addition to meeting criteria for district eligibility for LD, participants had to score at least 85 on the IQ test, have knowledge of the four basic mathematical operations using whole numbers and decimals, meet a certain preset criterion for determining poor math problem-solving performance, and achieve a reading stanine of at least 3 or a grade equivalent score of at least 3.5 on an individualized reading test). Exclusion criteria precluded participation of students enrolled in English as a Second Language programs. Additionally, as Test, Fowler, Brewer, and Wood (2005) and Gersten et al. (2005) suggested, any attrition occurring during the study and reasons for it should also be reported. A description of the participants who drop out of a study and the reasons for the attrition may provide insight into the appropriateness of the intervention for individuals with disabilities.

There was some disagreement among the raters about the description of the intervention agents. For the single-subject studies, one rater thought the intervention agents in the Montague (1992) and Hutchinson (1993) studies were not "typical." In both studies, the intervention was implemented by the researcher who was the students' LD resource teacher. The question remains as to whether researchers can be considered "typical" intervention agents. For the group studies, one rater thought the Montague et al. (1993) study did not sufficiently describe the intervention agents. One rater thought Chung and Tam (2005) sufficiently described the intervention agents; the other two did not. Further operationalizing the criteria regarding "typical" intervention agents may avoid confusion.

Threats to internal validity, such as possible bias on the part of the interventionist and bias in measurement, are of particular importance. To control for these methodological concerns, according to the proposed quality indicators for both group experimental and single-subject research, researchers must (a) describe the independent variable (i.e., the intervention) so readers will know exactly what was implemented and (b) assess treatment fidelity. We believe this is necessary but insufficient. It is also vital that structured and stringent observations of the intervention across time be conducted by at least two impartial observers to ensure the fidelity and integrity of the intervention. Interrater agreement for observations of treatment fidelity must be at a suitable level (i.e., at least 80% agreement).

No effect sizes were reported for the group studies, a criterion for the quality indicator pertaining to data analysis. We recommend that effect sizes also be reported and discussed for single-subject studies to determine if the body of research qualifies as evidence-based practice. Current methodologies are under investigation for computing effect size for single-subject research (e.g., Van den Noortgate & Onghena, 2003).
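For illustration only, the sketch below computes one simple, and much debated, candidate: a standardized mean difference between phases. This is not the hierarchical-linear-model approach of Van den Noortgate and Onghena (2003); it is the kind of crude baseline such approaches aim to improve on, and the data are invented.

```python
import math
import statistics

def phase_smd(baseline, treatment):
    """Standardized mean difference between baseline and treatment phases.

    A deliberately crude single-case effect size: the phase mean difference
    divided by the pooled within-phase SD. It ignores autocorrelation and
    trend, which is precisely why HLM-based alternatives are being studied.
    """
    mb, mt = statistics.mean(baseline), statistics.mean(treatment)
    nb, nt = len(baseline), len(treatment)
    pooled_var = ((nb - 1) * statistics.variance(baseline)
                  + (nt - 1) * statistics.variance(treatment)) / (nb + nt - 2)
    return (mt - mb) / math.sqrt(pooled_var)

# Hypothetical percent-correct scores for one participant; values of this
# magnitude are common for single-case SMDs and are not comparable to a
# group-design d.
print(round(phase_smd([0, 10, 10], [60, 70, 80, 90]), 2))  # ~6.42
```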

With respect to one of the criteria relating to social validity (i.e., conducting the research in "typical contexts"), all of the studies reviewed were conducted in resource room settings or separate classrooms either with individual students or small groups. The reader should keep in mind that these studies, for the most part, were conducted in the 1990s. With the current move toward inclusion in most school districts, it is essential that the practice be tested in general education classrooms that include students with disabilities. A caveat here is that these interventions were designed for students with disabilities and may not be appropriate for nondisabled students, especially those who are average or high performing. Horner et al.'s (2005) criterion related to conducting the intervention in "typical contexts" may need further explication. Again, there are implications for generalizability. If the context is the general education classroom, as it will be in many future studies, then a description of the logistics involved in implementing an intervention designed specifically for students with disabilities in general education classrooms seems warranted for understanding how to implement the practice with integrity.

These interventions are cognitive strategy instructional packages; as such, they include a variety of components and instructional procedures. The "packages" in these studies had common elements but varied to some degree as well. It is important that researchers sufficiently describe the components and elements of a complex intervention like cognitive strategy instruction. It may also be important in the identification of evidence-based practice to attempt to "unpack" these interventions and identify the salient content and procedures with a goal of identifying the most efficient or parsimonious intervention. To do this, a systematic series of componential analysis studies would be ideal. Componential analysis is complicated in that it entails isolating particular elements of an intervention to determine their independent impact on outcomes and also to determine the optimal combination of components that are appropriate for specific populations of students. However, intervention research is time-consuming and expensive, which limits the feasibility of conducting a series of componential analyses of the optimal variation in cognitive strategy routines for improving mathematical problem solving. Nonetheless, we recommend that researchers not lose sight of this goal.

Particularly apparent in this review were problems associated with reporting reliability of the outcome measures, ensuring treatment fidelity, establishing and describing baseline performance, and reporting effect sizes. However, again, it should be noted that most of these studies were conducted in the 1990s when the requirements for reporting research were not nearly as stringent as they are now. The raters agreed that applying many of these indicators and standards to these studies was a challenge, and thought that many of the criteria needed further clarification, particularly as they pertain to these different research methodologies. For example, what constitutes treatment fidelity for single-subject research as compared with group design research, and what is the bottom line for cost effectiveness? Another question had to do with the "essential" and "desirable" quality indicators proposed by Gersten et al. (2005). To be judged "high quality" or "acceptable," a study must meet all but one of the essential quality indicators. Although that criterion is clear, the raters thought that all of the essential quality indicators were equally important. In addition, to be judged as "high quality," the study must demonstrate at least four of the desirable quality indicators; to be judged as "acceptable," the study must demonstrate at least one of the desirable quality indicators. There are eight desirable quality indicators. Are these indicators comparable or are some more important than others? These are questions that surely will be answered as the process for judging intervention research becomes more refined.

This review was, in essence, a "field test" of the application of the quality indicators and standards proposed by Horner et al. (2005) and Gersten et al. (2005) to a literature base. The literature base on cognitive strategy instruction for improving mathematical problem solving for students with disabilities is small but promising. We found that the five single-subject studies met most of the quality indicators for high-quality research. Indeed, our review revealed several general strengths of both the single-subject and group design studies (e.g., description of participants and the setting, description of the dependent variable). However, all the studies were flawed in some way when we applied the criteria for the quality indicators. Researchers in both general and special education are now being held accountable for conducting high-quality research. The quality indicators proposed by Horner et al. and Gersten et al. provide an important foundation for analyzing the methodology of individual studies and further developing and refining research standards that will be used to determine if a practice is evidence-based. Now, more than ever, school districts are pressed to select only programs that have a solid research base. With more methodological rigor and adherence to the recently proposed quality indicators, future research studies should be able to provide substantive evaluation of educational practice.

REFERENCES

References marked with an asterisk were included in the study.

Butler, D. L. (1994). From learning strategies to strategic learning: Promoting self-regulated learning by postsecondary students with learning disabilities. Canadian Journal of Special Education, 4, 69-101.

*Case, L. P., Harris, K. R., & Graham, S. (1992). Improving the mathematical problem-solving skills of students with learning disabilities: Self-regulated strategy development. The Journal of Special Education, 26, 1-19.

*Cassel, J., & Reid, R. (1996). Use of a self-regulated strategy intervention to improve word problem-solving skills of students with mild disabilities. Journal of Behavioral Education, 6, 153-172.

*Chung, K. H., & Tam, Y. H. (2005). Effects of cognitive-based instruction on mathematical problem solving by learners with mild intellectual disabilities. Journal of Intellectual and Developmental Disability, 30, 207-216.

Daniel, G. E. (2003). Effects of cognitive strategy instruction on the mathematical problem solving of middle school students with learning disabilities. Unpublished doctoral dissertation, Ohio State University, Columbus.

Englert, C. S., Raphael, T. E., & Anderson, L. M. (1992). Socially mediated instruction: Improving students' knowledge and talk about writing. Elementary School Journal, 92, 411-449.

Gersten, R., Fuchs, L. S., Compton, D., Coyne, M., Greenwood, C., & Innocenti, M. S. (2005). Quality indicators for group experimental and quasi-experimental research in special education. Exceptional Children, 71, 149-164.

Graham, S., & Harris, K. R. (2003). Students with learning disabilities and the process of writing: A meta-analysis of SRSD studies. In H. L. Swanson, K. R. Harris, & S. Graham (Eds.), Handbook of learning disabilities (pp. 323-334). New York: Guilford Press.

Gresham, F. M., Gansle, K. A., & Kurtz, P. F. (1993). Treatment integrity in applied behavior analysis with children. Journal of Applied Behavior Analysis, 26, 257-263.

Harris, K. R., & Graham, S. (1992). Self-regulated strategy development: A part of the writing process. In M. Pressley, K. R. Harris, & J. T. Guthrie (Eds.), Promoting academic competence and literacy in school (pp. 277-309). New York: Academic Press.

Horner, R. H., Carr, E. G., Halle, J., McGee, G., Odom, S., & Wolery, M. (2005). The use of single-subject research to identify evidence-based practice in special education. Exceptional Children, 71, 165-179.

*Hutchinson, N. L. (1993). Effects of cognitive strategy instruction on algebra problem solving of adolescents with learning disabilities. Learning Disability Quarterly, 16, 34-63.

Kazdin, A. E. (1982). Single-case research designs: Methods for clinical and applied settings. New York: Oxford University Press.

Mesler, J. (2004). The effects of cognitive strategy instruction on the mathematical problem solving of students with spina bifida. Unpublished doctoral dissertation, University of Miami, Coral Gables, Florida.

*Montague, M. (1992). The effects of cognitive and metacognitive strategy instruction on the mathematical problem solving of middle school students with learning disabilities. Journal of Learning Disabilities, 25, 230-248.

Montague, M. (2003). Solve It!: A practical approach to teaching problem solving skills. Reston, VA: Exceptional Innovations.

*Montague, M., Applegate, B., & Marquard, K. (1993). Cognitive strategy instruction and mathematical problem-solving performance of students with learning disabilities. Learning Disabilities Research & Practice, 8, 223-232.

*Montague, M., & Bos, C. S. (1986). The effect of cognitive strategy training on verbal math problem solving performance of learning disabled adolescents. Journal of Learning Disabilities, 19, 26-33.

Morris, R. J., & Mather, N. (2008). Evidence-based interventions for students with learning and behavioral challenges. New York: Routledge.

Palincsar, A. S., & Brown, A. L. (1984). Reciprocal teaching of comprehension-fostering and comprehension-monitoring activities. Cognition and Instruction, 1, 117-175.

Pressley, M., Borkowski, J. G., & Schneider, W. (1987). Cognitive strategies: Good strategy users coordinate metacognition and knowledge. In R. Vasta & G. Whitehurst (Eds.), Annals of child development (Vol. 5, pp. 89-129). New York: JAI Press.

Pressley, M., El-Dinary, P. B., Gaskins, I. W., Schuder, T., Bergman, J. L., Almasi, J., et al. (1992). Beyond direct instruction: Transactional instruction of reading comprehension strategies. Elementary School Journal, 92, 513-555.

Robitaille, D. F., Sherrill, J. M., Kelleher, H. J., Klassen, J., & O'Shea, T. J. (1980). British Columbia mathematics achievement tests, grade 7/8 applications. Victoria, BC, Canada: Ministry of Education.

Schumaker, J. B., & Deshler, D. D. (1992). Validation of learning strategy interventions for students with LD: Results of a programmatic research effort. In B. Y. L. Wong (Ed.), Contemporary intervention research in learning disabilities: An international perspective (pp. 22-46). New York: Springer-Verlag.

Swanson, H. L. (1990). Instruction derived from the strategy deficit model: Overview of principles and procedures. In T. E. Scruggs & B. Y. L. Wong (Eds.), Intervention research in learning disabilities (pp. 34-65). New York: Springer-Verlag.

Swanson, H. L. (1993). Principles and procedures in strategy use. In L. Meltzer (Ed.), Strategy assessment and instruction for students with learning disabilities (pp. 61-92). Austin, TX: Pro-Ed.

Swanson, H. L. (1999). Interventions for students with learning disabilities: A meta-analysis of treatment outcomes. New York: Guilford Press.

Swanson, H. L., & Sachs-Lee, C. (2000). A meta-analysis of single-subject-design intervention research for students with LD. Journal of Learning Disabilities, 33, 114-136.

Tarraga, R. (2007). The effects of cognitive strategy instruction on mathematical problem solving of low performing elementary school students. Unpublished doctoral dissertation, University of Valencia, Spain.

Test, D. W., Fowler, C. H., Brewer, D. M., & Wood, W. (2005). A content and methodological review of self-advocacy intervention studies. Exceptional Children, 72, 101-125.

Van den Noortgate, W., & Onghena, P. (2003). Hierarchical linear models for the quantitative integration of effect sizes in single-case research. Behavior Research Methods, Instruments, and Computers, 35, 1-10.

Wechsler, D. (1991). WISC-III: Wechsler intelligence scale for children (3rd ed.). San Antonio, TX: Psychological Corporation.

Wong, B. Y. L., Butler, D. L., Ficzere, S. A., & Kuperis, S. (1996). Teaching adolescents with learning disabilities and low achievers to plan, write, and revise opinion essays. Journal of Learning Disabilities, 29, 197-212.

Wong, B. Y. L., Harris, K. R., Graham, S., & Butler, D. L. (2003). Cognitive strategies instruction research in learning disabilities. In H. L. Swanson, K. R. Harris, & S. Graham (Eds.), Handbook of learning disabilities (pp. 383-402). New York: Guilford Press.

MARJORIE MONTAGUE

SAMANTHA DIETZ

University of Miami

Address correspondence to Marjorie Montague, School of Education, Merrick Building, 5202 University Drive, Coral Gables, FL 33146 (e-mail: mmontague@miami.edu).

The authors wish to thank Dr. Asha Jitendra, Rodney Wallace Professor for the Advancement of Teaching and Learning at the University of Minnesota, for serving as the third rater. Her expertise and willingness are greatly appreciated.

Manuscript received August 2007; accepted May 2008.

MARJORIE MONTAGUE (CEC FL Federation), Professor; and SAMANTHA DIETZ (CEC FL Federation), Research Associate, Department of Teaching and Learning, School of Education, University of Miami, Coral Gables, Florida.
TABLE 1
Summary of Cognitive Strategy Instruction and Mathematical Problem-Solving Studies

Case, Harris, & Graham, 1992
 Purpose/Research Question(s): Examine the effects of a 5-step cognitive strategy for solving 1-step addition and subtraction word problems, with a focus on operation errors.
 Participants & Setting: 4 students with LD (3 boys, 1 girl; mean CA, 11-3; mean IQ, 79); intervention provided individually in a small classroom; large metropolitan city.
 Design: Single-subject multiple baseline across participants.
 Dependent Variable(s): Math word problem-solving measures; strategy use (count of strategy types); count of correct and incorrect operations used to solve problems.
 Independent Variable(s): 5-step cognitive strategy routine using SRSD procedures.
 Results/Maintenance/Generalization: Phase 1: 3 of 4 students improved on addition-only problems, and 2 of 4 improved on subtraction-only problems. Phase 2: All 4 students improved on both addition and subtraction problems. Mixed results for maintenance; positive results on the generalization measure.

Cassel & Reid, 1996
 Purpose/Research Question(s): Investigate the effects of SRSD instruction on students' performance on 4 types of addition and subtraction word problems.
 Participants & Setting: 4 elementary school students (3 girls, 1 boy): 2 with LD (mean CA, 8-4; mean IQ, 104) and 2 with MMR (mean CA, 9-9; mean IQ, 71); rural elementary school; students received 60 min/day of resource classroom math instruction.
 Design: Single-subject multiple baseline across participants.
 Dependent Variable(s): Math word problem-solving measures: tests of 1-step addition and subtraction problems using 16 variations of 4 basic problem types.
 Independent Variable(s): 9-step SRSD routine.
 Results/Maintenance/Generalization: All students reached mastery and maintained gains for 8 weeks.

Chung & Tam, 2005
 Purpose/Research Question(s): Compare performance on 2-step addition and subtraction word problems among three groups/conditions.
 Participants & Setting: 30 students with mild intellectual disabilities (22 boys, 8 girls; mean CA, 10-6; mean IQ, 63) in a special school in Hong Kong; instruction given individually for the first session, followed by 5 group practice sessions.
 Design: Group quasi-experimental posttest-only comparison design; students randomly assigned to groups.
 Dependent Variable(s): Math problem-solving measures: tests of 5 two-step addition and subtraction word problems.
 Independent Variable(s): (a) 5-step cognitive strategy instruction; (b) worked example instruction (worked solutions provided; students taught to use shapes and symbols to represent problems and to use specific problem-solving steps); (c) conventional instruction (control group: 2 problem solutions modeled by the instructor, practice provided).
 Results/Maintenance/Generalization: Both the cognitive strategy instruction group and the worked example group scored significantly higher than the conventional instruction group on immediate and delayed tests.

Hutchinson, 1993
 Purpose/Research Question(s): Investigate the effects of a 2-phase cognitive strategy on algebra word problem solving.
 Participants & Setting: 20 students (Grades 8-10) with LD (IQ range, 85-115) attending 2 junior high schools in suburban Vancouver, randomly assigned to the intervention or comparison condition.
 Design: Single-subject multiple baseline across participants and 2-group comparison design.
 Dependent Variable(s): Algebra word problem-solving measures incorporating 3 types of problems; 25 word problems from the BC Mathematics Achievement Test, Grade 7/8; 13 open-ended problems from the Q2 BC Achievement Test, Grade 10; 10-question metacognitive interview; problem type classification/sorting task.
 Independent Variable(s): 10-step cognitive strategy routine with metacognitive components.
 Results/Maintenance/Generalization: Single-subject: All 12 students reached criterion on Relational Problems, and 10 maintained performance (6 weeks); 10 students reached criterion on Proportion Problems and maintained performance; 5 students reached criterion on 2-Variable 2-Equation Problems and maintained performance. 2-group comparison: Statistically significant differences were found between the treatment and comparison groups on all posttest measures.

Montague, 1992
 Purpose/Research Question(s): Investigate the effects of cognitive and metacognitive strategy instruction on mathematical problem solving.
 Participants & Setting: 6 students (Grades 6-8) with LD (3 boys, 3 girls; mean CA, 13-7; mean IQ, 98); large metropolitan school in the southeastern U.S.; individual instructional and test sessions provided by the researcher.
 Design: Single-subject multiple baseline across participants.
 Dependent Variable(s): Math word problem-solving measures: 1-, 2-, and 3-step word problems; Mathematical Problem Solving Assessment-Short Form (MPSA-SF; Montague, 2003); minutes to complete tests; error analysis (process and computation errors).
 Independent Variable(s): (a) 7-step cognitive strategy routine (CSI); (b) three metacognitive strategies (MSI: SAY, ASK, CHECK); (c) combination CSI-MSI.
 Results/Maintenance/Generalization: The CSI-MSI combination was more effective than either component alone; 3 of 5 students generalized the strategy to the classroom; booster sessions were provided to maintain strategy use 4 months later. Completion time improved; strategy knowledge, use, and control improved; fewer process errors were noted after treatment.

Montague, Applegate, & Marquard, 1993
 Purpose/Research Question(s): Determine the differential effects of 3 treatment conditions and 2 cycles of treatment on word problem solving, and identify the most salient instructional components.
 Participants & Setting: 72 students with LD (53 boys, 19 girls; mean CA, 14-2; mean IQ, 99) and 24 average-achieving students (12 boys, 12 girls; mean CA, 13-7; mean IQ, 99) from 4 public schools in southeast Florida; separate classroom instruction in groups.
 Design: Group quasi-experimental repeated measures design comparing 3 treatment conditions and 2 cycles of treatment; groups randomly assigned to treatment conditions.
 Dependent Variable(s): Math word problem-solving measures: tests of 1-, 2-, and 3-step word problems.
 Independent Variable(s): (a) 7-step cognitive strategy routine (CSI); (b) 3 metacognitive strategies (MSI: SAY, ASK, CHECK); (c) combination CSI-MSI.
 Results/Maintenance/Generalization: No significant performance differences were found among conditions on the first posttest. Sequence-of-instruction findings indicated significant differences between trials favoring the cognitive and combined cognitive-metacognitive conditions over the metacognitive condition. All students improved on the first maintenance measure 3 weeks after instruction; performance declined after 7 weeks but returned to a satisfactory level after a booster session.

Montague & Bos, 1986
 Purpose/Research Question(s): Investigate the effects of an 8-step cognitive strategy on math problem solving.
 Participants & Setting: 6 students with LD (5 boys, 1 girl; mean CA, 16-7; mean IQ, 93) attending a southern Arizona metropolitan high school; intervention provided in a resource setting.
 Design: Single-subject multiple baseline across participants.
 Dependent Variable(s): Math word problem-solving measures: 2-step problem tests administered during baseline, treatment, maintenance, and retraining phases, with 3-step problems administered as a generalization test; minutes to complete tests; type of errors.
 Independent Variable(s): 8-step cognitive strategy routine using explicit instruction techniques.
 Results/Maintenance/Generalization: All 6 students substantially improved performance; 4 students successfully generalized the strategy; all students improved in test completion time, stabilizing at 50 min or less.

Note. LD = learning disabilities; CA = chronological age; SRSD = self-regulated strategy development; MMR = mild mental retardation; CSI = cognitive strategy instruction; MSI = metacognitive strategy instruction.

TABLE 2
Quality Indicators for Single-Subject Studies

                                             Case,
                                             Harris, &  Cassel &
                                             Graham,    Reid,     Hutchinson, Montague,  Montague &
Quality Indicator                            1992       1996      1993        1992       Bos, 1986

Participants
 Described sufficiently                      yes        yes       yes         yes        yes
 Selection described sufficiently            yes        yes       yes         yes        yes
 Setting described sufficiently              yes        yes       yes         yes        yes
Dependent Variable
 Described with replicable precision         yes        yes       yes         yes        yes
 Quantifiable                                yes        yes       yes         yes        yes
 Measurement valid and described             yes        yes       yes         yes        yes
  with replicable precision
 Measurement occurred repeatedly             yes        yes       yes         yes        yes
 IOA data reported and met                   no         yes       no          no         no
  minimal standards
Independent Variable
 Described with replicable precision         yes        yes       yes         yes        yes
 Systematically manipulated                  yes        yes       yes         yes        yes
 Procedural fidelity measured                no         no        no          no         no
  and described
Baseline
 Conditions described with                   yes        yes       yes         yes        yes
  replicable precision
 Baseline provided evidence of               yes        yes       yes         yes        yes
  pattern prior to intervention
Experimental Control/Internal Validity
 Three demonstrations of                     yes        yes       yes         yes        yes
  experimental effect
 Design controlled threats to                yes        yes       yes         yes        yes
  internal validity
 Pattern demonstrates experimental           yes        yes       yes         yes        yes
  control (as judged by visual analysis)
External Validity
 Effects replicated across participants,     yes        yes       yes         yes        yes
  settings, or materials
Social Validity
 DV socially important                       yes        yes       yes         yes        yes
 Magnitude of change in DV due to            yes        yes       yes         yes        yes
  intervention socially important
 IV is cost effective                        yes        yes       yes         yes        yes
 IV implemented over time, in typical        yes        yes       yes         yes        yes
  contexts, and by typical intervention agents

Note. Criteria for quality indicators as proposed by Horner et al. (2005); IOA = interobserver agreement; DV = dependent variable; IV = independent variable.

TABLE 3
Quality Indicators for Group Studies

                                             Chung &    Montague,
                                             Tam,       Applegate, &
Indicator                                    2005       Marquard, 1993

Participants
 Described sufficiently                      no         yes
 Equivalency of groups                       no         yes
 Intervention agents described               no         yes
  sufficiently and shown to be equivalent
Intervention
 Described clearly                           yes        yes
 Procedural fidelity measured                no         no
  and described
 Difference between intervention             yes        n/a
  and control described clearly
Outcome Measures
 Multiple outcome measures                   no         no
 Timing appropriate                          yes        yes
Data Analysis
 Techniques linked to research questions     yes        yes
  and unit of analysis linked to statistical analysis
 Effect sizes reported                       no         no

Note. Criteria for quality indicators as proposed by Gersten et al. (2005).
