
In 2001, President George W. Bush announced his endorsement of federal legislation meant to address the nation's educational shortcomings and remediate students' deficiencies in the core academic subjects of math and reading. This legislation, the No Child Left Behind (NCLB) Act of 2001, reauthorized the Elementary and Secondary Education Act of 1965 and outlined principles and strategies meant to strengthen the American educational system (U.S. Department of Education, 2004). Among these principles and strategies was a new emphasis on accountability systems concerning students' math and reading proficiency (U.S. Department of Education, 2004).

In order to determine schools' progress toward the mandated outcomes, many states have implemented end-of-year statewide testing protocols aligned to their learning standards. These "high stakes" tests play a large role both in measuring the progress of individual students and in determining funding. They have also pushed some educators to place an increased emphasis on the early identification and remediation of students' educational difficulties. The logic behind this shift is that the earlier educational difficulties are detected, the greater the probability that they will be successfully remediated (Juel, 1988). To identify students as early as possible, many school districts have turned to benchmarking with an assessment technology known as curriculum-based measurement (CBM).

Review of the Literature

CBM and Benchmarking

CBM is often administered within a benchmarking process, in which every student is administered the same measure at the same time. These scores are then compiled to develop norms for each grade and class (Shinn, 1988). Individual student scores are compared to these norms to determine a student's need for intervention (January, Ardoin, Christ, Eckert, & White, 2016). CBM utilizes numerous short and easily collected measurements of students' academic skills, which provide general outcome measures (GOMs) of student growth toward broad basic skill acquisition (Deno, 1992). GOMs focus on a composite skill rather than its separate subcomponents and are collected through continuous measurement on equivalent forms across time (Deno, 1992). These data model student growth within the domain being measured and represent improvement on the many skills subsumed by that broad domain (Deno, Espin, & Fuchs, 2002; January et al., 2016).

Benchmarking is typically administered on a trimester basis (fall, winter, and spring) to the entire student body. Students' performances are then compared against national norms or against some form of local norms. If a student's score falls within the bottom quarter of the national or local norms, the student may then begin to receive remedial educational services (Shinn, Shinn, Hamilton, & Clarke, 2002).
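The bottom-quarter screening rule above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration of flagging against local norms, not a published procedure; the function name and cutoff handling are assumptions.

```python
import numpy as np

def flag_for_intervention(scores, cutoff_pct=25):
    """Flag students whose benchmark score falls in the bottom
    quarter of the norm group (here, a local norm built from the
    scores themselves). Returns a boolean array, one entry per student."""
    scores = np.asarray(scores, dtype=float)
    cutoff = np.percentile(scores, cutoff_pct)
    return scores <= cutoff

# Hypothetical norm group of 100 benchmark scores (1..100):
flags = flag_for_intervention(list(range(1, 101)))
print(int(flags.sum()))  # 25 students flagged for possible intervention
```

In practice the cutoff would be computed once from the grade-level norm group and then applied to individual students.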

CBM elucidates how to select, create, and administer stimulus materials, as well as how to score student performance. The emphasis on standardized construction and measurement within CBM means that the stimulus materials are based on students' curriculum (Deno, 1992), unlike the traditional approach to assessment using norm-referenced tests. The two CBM procedures of primary focus in this study are oral reading fluency (ORF) and MAZE. ORF is a simple, individually administered method of assessment. According to Deno (1985), students are presented with a reading passage that has either been developed from reading materials utilized locally in the classroom or is a generic passage. Students read the printed text aloud for one minute. The individual's score is the total number of words read by the student minus the total number of errors made (words correct per minute). An error occurs when a student omits, inserts, substitutes, or mispronounces a word (Deno, 1985). Hesitations (pauses longer than three seconds) are also counted as errors (Deno, 1985). Scores on ORF typically range from 0 to approximately 250.

MAZE was developed as an alternative to ORF. Students are administered the MAZE task in a class-wide setting. They silently read a passage for three minutes in which the first sentence is fully intact. Throughout the rest of the passage every seventh word is deleted and replaced with three options (Wayman, Wallace, Wiley, Ticha, & Espin, 2007). One of the three words is both grammatically and logically correct, while the other two are meant to distract the student (Wayman et al., 2007). Students select the one word that makes the most sense within the context of the presented sentence. The student's total score is the total number of items completed minus the number of incorrect answers (number of correct selections). Scores typically range from 0 to 50.
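Both scoring rules reduce to simple subtraction. A minimal sketch, with hypothetical student numbers:

```python
def orf_score(words_read, errors):
    """ORF words correct per minute: total words read aloud in one
    minute minus errors (omissions, insertions, substitutions,
    mispronunciations, and hesitations longer than three seconds)."""
    return words_read - errors

def maze_score(items_completed, incorrect):
    """MAZE score: items completed in three minutes minus incorrect
    selections, i.e., the number of correct selections."""
    return items_completed - incorrect

# Hypothetical examples: a student reads 118 words with 6 errors;
# another completes 25 MAZE items with 4 incorrect selections.
print(orf_score(118, 6))   # 112 words correct per minute
print(maze_score(25, 4))   # 21 correct selections
```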

Currently, within the applied setting there is much controversy concerning which measure is most appropriate for benchmarking at the upper elementary grade levels. Some school districts have elected to administer only ORF, some only MAZE, and still others both tools. Many teachers have raised concerns about "word callers," students who read fluently but do not understand the text, being misidentified by ORF measures (Knight-Teague, Vanderwood, & Knight, 2014; Meisinger, Bradley, Schwanenflugel, & Kuhn, 2010). MAZE has been posited as a more authentic assessment encompassing not only fluency but also comprehension (Jenkins & Terjeson, 2011; Shinn et al., 2002; Wayman et al., 2007). Unlike ORF, MAZE can be group administered (Wayman et al., 2007), which may help preserve instructional time throughout the educational day.

CBM and Reading

CBM was developed for use within a problem-solving approach to remediating students' educational difficulties across the core academic areas, but much of the current research has focused on reading (Wayman et al., 2007). ORF involves quickly and accurately reading text within a passage. Fluent readers abide by punctuation, alter their pitch, and make no or minimal mistakes. Once text can be decoded fluently, students are better able to gain meaning from it, possibly due to a bridge effect that links working memory to comprehension (Swanson & O'Connor, 2009; Taylor, Meisinger, & Floyd, 2016).

Fluent reading ability allows individuals to focus more on understanding the meaning of words (vocabulary) and comprehending the concepts present within the reading passage (Joseph, 2015). Comprehension of written text allows readers to obtain new information to which they may not have been exposed. Furthermore, as students read they may activate long-term memory in order to integrate newly presented information with knowledge they had previously attained. Finally, short-term memory is also implicated in adequate reading comprehension: in order to make sense of text being read, individuals must be able to retrieve encoded information in the correct sequence within and between sentences and words (McMaster, Espin, & van den Broek, 2014; Swanson, Zheng, & Jerman, 2009).

Numerous researchers from across the United States have sought to examine the ability of ORF to predict student performance on statewide standardized tests. ORF has been shown to be an efficient predictor of general reading performance on statewide high stakes tests across the country (Hintze & Silberglitt, 2005; Neese, Park, Alonzo, & Tindal, 2011; Tindal, Nese, Stevens, & Alonzo, 2016; Wayman et al., 2007). Despite this, Jenkins and Jewell (1993) previously indicated an inverse relationship between ORF and standardized achievement tests as a function of grade. Fuchs and Fuchs (1993) reported similar findings while attempting to establish weekly growth goals for students utilizing both ORF and MAZE: the inverse relationship between ORF growth and grade was noted, but growth on MAZE was steady regardless of students' grade level. Silberglitt, Burns, Madyun, and Lail (2006) conducted a study that further supports previous conclusions concerning the inverse relationship between ORF and performance on statewide standardized tests. These results may be evidence of a shift from fluent reading to reading for understanding within the upper elementary grades.

Shinn, Good, Knutson, Tilly, and Collins (1992) conducted a confirmatory factor analysis to determine the best fit of varied reading data to four differing theoretical models of reading for third and fifth grade students. For the fifth grade sample, the authors observed a two-factor model of reading consisting of Reading Comprehension and Reading Decoding. Given this evidence, it may be more appropriate to utilize MAZE than ORF in the prediction of statewide reading performance at higher grade levels.

CBM and Mathematics

It has been indicated that reading may be an important access skill to mathematics proficiency (Helwig, Rozeck-Tedesco, Tindal, Heath, & Almond, 1999; Thurber, Shinn, & Smolkowski, 2002). Robinson, Menchetti, and Torgesen (2002) suggested a similar path for reading and math difficulties that involves primary weaknesses in phonological processing and in encoding and retrieval. Central to attaining adequate calculation skill is the ability to retrieve verbal information from long-term memory (Koponen, Aunola, Ahonen, & Nurmi, 2007). Poor phonological skills may severely hinder individuals' abilities to encode this information. This concept has been supported by research indicating that phonological awareness, rate of retrieval of phonological codes from long-term memory, and phonological memory possibly explain nearly all of the covariance between calculation and reading (Koponen et al., 2007).

Further, Helwig et al. (1999) provide a possible explanation of the cognitive processing that may underlie the relationship between reading and mathematics performance on tests involving word problems. Students with established reading fluency are able to decode words more automatically. This decreased processing demand allows more cognitive resources to be devoted to higher order operations such as comprehension and synthesis of text. Thus, individuals with weaker word decoding abilities may devote so much effort to determining what a word is that they are less likely to comprehend what is being read (Helwig et al., 1999). Koponen et al. (2007) state that, like reading, mathematical calculation is a multi-component skill. In order to solve multi-digit problems, children must be able to automatically and accurately retrieve single-digit calculations, as these are often intermediary steps (Koponen et al., 2007).

Several researchers have extended the use of CBM reading measures to predict student mathematics performance on statewide tests. Thurber et al. (2002) conducted a confirmatory factor analysis with data from fourth grade students to determine whether math computation or general math achievement was measured by math curriculum-based measurement (M-CBM). The correlations between Reading and Computation ranged from .69 to .79, while the correlations between Reading and Applications ranged from .76 to .77. The authors state that their results support the conclusion that reading is necessary for overall mathematics proficiency.

In line with the above evidence, Crawford, Tindal, and Stieber (2001) conducted a study to predict not only third grade students' reading performance on a statewide achievement test through the use of ORF, but also their performance on the test's mathematics portion. Moderate correlations were observed between the CBM and both reading (r = .60) and mathematics (r = .46) performance. Jiban and Deno (2007) utilized MAZE to predict student performance on a statewide mathematics assessment in Minnesota. Once again, moderately strong correlations (.40 to .60) were observed between reading-based CBM and performance on the high stakes mathematics assessment.

Purposes and Hypotheses

The first purpose of the current study is to extend the use of two reading CBM tools in the prediction of upper elementary students' performance on high stakes reading assessments. ORF was selected due to its extensive research base, while MAZE was selected because an inverse relationship between ORF's predictive power and grade level has been established (Fuchs & Fuchs, 1993; Jenkins & Jewell, 1993; Silberglitt et al., 2006). Therefore, it was hypothesized that MAZE would be a better predictor than ORF of student performance on the reading section of the high stakes test.

The second purpose of this study is to examine the relationship between these two CBM methods and fifth and sixth grade students' performance on the mathematics sections of a high stakes assessment. This relationship is of interest for several reasons. First, reading and mathematics may share similar cognitive processes (Koponen et al., 2007), and reading may be an important access skill to mathematics proficiency (Helwig et al., 1999; Thurber et al., 2002). Furthermore, researchers have established a relationship between mathematics performance on statewide multiple-choice assessments and ORF (Crawford et al., 2001), as well as MAZE (Jiban & Deno, 2007), but these studies have utilized samples comprised of early elementary school students.

Method

Participants of the study consisted of 197 students in the fifth (n = 102) and sixth (n = 95) grades from three schools in Illinois, one rural and two urban. The school-wide student population of the rural elementary school was 95% Caucasian and the school-wide teacher population was 100% Caucasian. The school-wide student population of one of the urban schools was 68% Caucasian, while the teacher population was also predominately Caucasian (90%). The second urban school's student population was 44% Caucasian and 48% African American; its teacher population was 91% Caucasian.

Measures

The CBM materials utilized were the Oral Reading Fluency subsection of the Dynamic Indicators of Basic Early Literacy Skills, 6th edition, or DIBELS (Good & Kaminski, 2002), and a MAZE selection task created by the staff of the two schools from which the data were collected. The CBM data were collected through universal benchmarking conducted during September of the 2008-2009 school year. The fall benchmark data were utilized because they provide educators with the greatest amount of time to implement any required interventions. The high stakes mathematics and reading data were collected from the spring administration of the ISAT during the same school year. A discussion of the technical properties of each measure follows.

Oral reading fluency. ORF has been shown to possess technical adequacy and a strong relationship to overall reading proficiency (Wayman et al., 2007). Numerous researchers have sought to establish the psychometric properties of ORF data, and it has been shown to possess adequate levels of reliability. Tindal, Marston, and Deno (1983) evaluated ORF's reliability through three different indexes: test-retest reliability, alternate forms reliability, and inter-judge agreement. All coefficients were adequate: the test-retest coefficient was .92, the alternate forms coefficient was .89, and the inter-judge agreement was .99.

MAZE. Much like ORF, MAZE has been shown to be a technically adequate tool. Shin, Deno, and Espin (2000) evaluated MAZE with forty-three second graders. The results indicated that MAZE possesses adequate alternate forms reliability. The correlations between forms administered from one to nine months apart ranged from .69 to .91 (M = .81). Coefficients for forms administered within one month ranged from .75 to .90 (M = .83). The correlation range for the two-month interval was .75 to .87, while the range for the three-month interval was .69 to .91, with .80 being the mean for both interval conditions.

Illinois Standards Achievement Test (ISAT). In order to determine if schools were meeting the prescribed percentage of children demonstrating grade level skills in reading and mathematics, students were administered the Illinois Standards Achievement Test (ISAT) during the spring of each school year prior to the adoption of the Common Core State Standards. The ISAT measured the extent to which students were meeting the Illinois Learning Standards (Illinois State Board of Education Division of Assessment, 2008). Internal consistencies for the fifth and sixth grade reading and mathematics assessments ranged from .89 to .93 (Illinois State Board of Education Division of Assessment, 2008). The ISAT was evaluated for concurrent validity with the SAT 10: the correlation between the two measures was .78 for fifth grade reading, .85 for fifth grade mathematics, .77 for sixth grade reading, and .86 for sixth grade mathematics (Illinois State Board of Education Division of Assessment, 2008).

Design and Data Analysis

Given the exploratory nature of the research questions, a correlational design was utilized. Means and standard deviations were computed for all measures at both grade levels, and Pearson product-moment correlations were computed between all measures. Individual regression formulas for each CBM were derived from simple linear regression analyses. This was done to obtain standardized beta values in order to determine which CBM showed the strongest relationship with the high stakes tests.
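The analysis steps above can be sketched in Python. This is a hypothetical illustration, not the authors' code; note that with a single predictor, the standardized beta equals Pearson's r, so its square is the proportion of variance accounted for.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation: mean product of z-scores."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xz = (x - x.mean()) / x.std()
    yz = (y - y.mean()) / y.std()
    return float(np.mean(xz * yz))

def simple_regression(x, y):
    """Simple linear regression of y on x. Returns the raw slope and
    intercept plus the standardized beta (slope rescaled by the ratio
    of standard deviations); beta**2 is the variance accounted for."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    slope = np.cov(x, y, bias=True)[0, 1] / x.var()
    intercept = y.mean() - slope * x.mean()
    beta = slope * x.std() / y.std()
    return slope, intercept, beta

# Toy data with a perfect linear relationship:
x, y = [1, 2, 3, 4], [2, 4, 6, 8]
print(pearson_r(x, y))        # ~1.0
print(simple_regression(x, y))  # slope ~2, intercept ~0, beta ~1
```

For the benchmark data, x would be a CBM score vector and y the corresponding ISAT subsection scores for one grade level.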

Results

Descriptive Statistics

The fifth grade sample showed a mean ORF score of 122.42, indicating that this sample of students was displaying adequate fluency performance (Good & Kaminski, 2002). This was also true of the sixth grade sample, which showed a mean ORF performance of 126.46. On MAZE, the fifth grade sample showed a mean performance of 17.03 and the sixth grade sample a mean of 19.38.

With regard to the ISAT reading subsection, both grade levels' mean performance was higher than the minimum score needed to meet standards. The fifth graders required a minimum score of 215 to meet standards, while the mean performance was 234.61. The sixth grade sample required a minimum score of 220, but the mean score was 231.99. Within fifth grade math, the minimum score to meet standards was 214, and the mean performance of this sample surpassed this with an average score of 237.80. The sixth grade minimum score needed was 225, while the mean performance of this sample was 238.41. This information is summarized in Tables 1 and 2.

CBM and ISAT Subsection Correlations for Fifth Grade

Pearson's r was computed for all measures within the study at both grade levels. Concerning the relationship between the two CBM measures for fifth grade, a significant and positive correlation was obtained, r(100) = .72, p < .01 (one-tailed). This was also the case for the relationship between the two subsections of the ISAT for fifth grade, r(100) = .74, p < .01 (one-tailed).

Pearson's r was also computed between MAZE and ISAT reading, as well as between ORF and ISAT reading. Using an alpha level of .05, all correlations were significant, p < .01 (one-tailed). The correlation between MAZE and the ISAT reading subsection was moderately strong, r(100) = .67. ORF displayed an identical correlation with ISAT reading.

Coefficients were also calculated between each CBM measure and ISAT mathematics. All correlations were once again significant, p < .01 (one-tailed). ORF showed a moderately strong positive correlation with ISAT mathematics, r(100) = .55. MAZE also showed a somewhat attenuated, but still moderately strong, relationship with ISAT mathematics, r(100) = .53. The difference in correlations was not significant, t(99) = .33, p = .75. See Table 3.
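The comparison of two correlations that share a variable (two CBMs correlated with the same ISAT subsection) is consistent with Hotelling's t for dependent correlations, which has n - 3 degrees of freedom. A minimal sketch, an assumption rather than the authors' stated procedure, which reproduces the fifth grade mathematics comparison:

```python
from math import sqrt

def hotelling_t(r_xz, r_yz, r_xy, n):
    """Hotelling's t for two dependent correlations r_xz and r_yz that
    share variable z, given the correlation r_xy between the two
    predictors and sample size n (df = n - 3)."""
    det = 1 - r_xz**2 - r_yz**2 - r_xy**2 + 2 * r_xz * r_yz * r_xy
    return (r_xz - r_yz) * sqrt(((n - 3) * (1 + r_xy)) / (2 * det))

# Fifth grade mathematics: ORF r = .55, MAZE r = .53,
# ORF-MAZE r = .72, n = 102 (df = 99).
t = hotelling_t(.55, .53, .72, 102)
print(round(t, 2))  # 0.33, matching the reported t(99) = .33
```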

CBM and ISAT Subsection Correlations for Sixth Grade

Pearson's r analyses were also performed on each CBM measure and each ISAT subsection at the sixth grade level. Once again, ORF and MAZE showed a significant relationship, r(93) = .65, p < .01 (one-tailed). The correlation between the two ISAT subsections also revealed a significant relationship, r(93) = .78, p < .01 (one-tailed).

Significant (p < .01, one-tailed) and positive correlations were obtained between each CBM and each ISAT subsection. Within the realm of reading, ORF showed a higher correlation with the ISAT, r(93) = .64, than did MAZE, r(93) = .62, but this difference was not significant (t(92) = .31, p = .75). In mathematics, the pattern reversed: MAZE showed a higher correlation with the ISAT, r(93) = .53, than did ORF, r(93) = .49, but once again the difference was not significant (t(92) = .55, p = .58). The results of the sixth grade correlations are summarized in Table 4.

Simple Regression Analyses

Simple regression analyses were conducted at each grade level to examine how well the previously discussed benchmarking tools predicted performance on each subsection of the ISAT. To determine whether there was a meaningful difference between the amount of variance in each ISAT subsection accounted for by the respective CBMs, the difference between the squared standardized betas had to equal or surpass 5%. For fifth grade reading, ORF accounted for 15% of the variance in ISAT performance, t(100) = 3.91, p = .00, while MAZE accounted for 14%, t(100) = 3.83, p = .00. The difference between the two CBMs was not significant. For fifth grade math, ORF accounted for 12% of the variance in ISAT performance, t(100) = 3.00, p = .00, and MAZE accounted for less, 8%, t(100) = 2.34, p = .02. Once again the difference was not significant. See Tables 5 and 6.
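The 5% decision rule applied to these comparisons can be sketched as follows; this is a hypothetical illustration using the reported variance percentages, and the function name is an assumption:

```python
def meaningful_difference(var_a, var_b, threshold=0.05):
    """Decision rule used in this study: two CBMs differ meaningfully
    only if the proportions of ISAT variance they account for (their
    squared standardized betas) differ by at least 5 percentage points."""
    return abs(var_a - var_b) >= threshold

# Fifth grade reading: ORF 15% vs. MAZE 14% -> 1-point gap
print(meaningful_difference(0.15, 0.14))  # False
# Sixth grade mathematics: MAZE 13% vs. ORF 7% -> 6-point gap
print(meaningful_difference(0.13, 0.07))  # True
```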

For sixth grade reading, ORF accounted for more variance, 17%, t(93) = 4.11, p = .00, than did MAZE, 13%, t(93) = 3.60, p = .00, but this difference was not significant. The opposite was true for sixth grade ISAT mathematics: MAZE accounted for 13% of the variance in ISAT scores, t(93) = 3.18, p = .00, whereas ORF accounted for 7%, t(93) = 2.26, p = .03. This difference was significant. These results are summarized in Tables 7 and 8.

Discussion and Limitations

The purpose of this study was to examine the ability of two CBMs, ORF and MAZE, to predict 5th and 6th grade students' performances on the reading and mathematics subsections of a high stakes test. At both grade levels, the two CBMs showed similar correlations with both subsections of the ISAT. The strength of these relationships replicates the results of previous studies (Hintze & Silberglitt, 2005; Neese et al., 2011; Wayman et al., 2007). The results suggest the two measures are essentially equivalent when predicting ISAT performance in reading and math. It was not until the 6th grade that MAZE accounted for significantly more variance in ISAT performance than did ORF, and this was only true for the mathematics subsection. This may be a product of the content of the ISAT mathematics subsection: at the 5th grade level, reading may be required simply to access the information needed to complete each item, whereas at the 6th grade level students may require greater comprehension abilities to understand more complex tasks. Until this question is further investigated, however, this remains speculation.

The results of this study at both grade levels also indicate the need to consider students' reading abilities when instructing them in mathematics. Reading may be a necessary access skill for obtaining mathematical instruction and knowledge (Helwig et al., 1999; Koponen et al., 2007). Further, the relationship between reading and mathematics may be of vital importance for performing adequately on statewide standardized tests that use timed multiple-choice questions (Jiban & Deno, 2007; Silberglitt et al., 2006). This is important when one considers that such tests are those likely to be used for holding schools accountable to the general public.

These results only partially support the hypothesis that MAZE accounts for more variance in ISAT performance than ORF. It appears that it is not until the very end of elementary school that reading comprehension as measured by MAZE is more strongly related to mathematics performance on high stakes assessments than is ORF. These points are important given the time restrictions placed on educators: MAZE may represent a more time-efficient method to accurately benchmark students at the upper elementary grade levels.

Despite the above discussion, this study contains several limitations that temper any conclusions based on its results. First, the small sample sizes may have reduced the variability of the study's scores, which may have influenced the statistics calculated and weakened the conclusions drawn. Furthermore, MAZE scores were derived from simple counts of correct selections as opposed to percent correct selections, which may have restricted the range of MAZE scores. Finally, the CBM data were collected by teachers and other school personnel not typically involved in standardized test administration, and no formal check of implementation integrity was conducted. CBM scores could therefore have varied not only with individual student performance, but also with the individual administering the assessment.

The above discussion helps to inform future investigations. Future research concerning the ability of CBM measures to predict student performance on high stakes tests should consider several areas. First, it should utilize a larger and more representative sample to increase the variability of the CBM and high stakes assessment scores. For the same reason, it may be advantageous to utilize percent correct selections instead of raw correct selections when examining MAZE's predictive abilities. Finally, it would be prudent to implement some form of integrity check on the administration of the CBM assessments.

References

Crawford, L., Tindal, G., & Stieber, S. (2001). Using oral reading rate to predict student performance on statewide achievement tests. Educational Assessment, 7, 303-323.

Deno, S. L. (1985). Curriculum-based measurement: The emerging alternative. Exceptional Children, 52, 219-232. doi: 10.1177/001440298505200303

Deno, S. L. (1992). The nature and development of curriculum-based measurement. Preventing School Failure: Alternative Education for Children and Youth, 36(2), 5-10. doi: 10.1080/1045988X.1992.9944262

Deno, S. L., Espin, C. A., & Fuchs, L. S. (2002). Evaluation strategies for preventing and remediating basic skill deficits. In M. R. Shinn, H. M. Walker, & G. Stoner (Eds.), Interventions for academic and behavior problems II: Preventative and remedial approaches (pp. 213-241). Bethesda, MD: NASP.

Fuchs, L. S., & Fuchs, D. (1993). Formative evaluation of academic progress: How much growth can we expect? School Psychology Review, 22(1), 1-30.

Good, R., & Kaminski, R. (Eds.). (2002). Dynamic Indicators of Basic Early Literacy Skills (6th ed.). Eugene, OR: Institute for the Development of Academic Achievement.

Helwig, R., Rozeck-Tedesco, M. A., Tindal, G., Heath, B., & Almond, P. J. (1999). Reading as an access to mathematics problem solving on multiple-choice tests for sixth-grade students. The Journal of Educational Research, 93, 113-125. doi: 10.1080/00220679909597635

Hintze, J. M., & Silberglitt, B. (2005). A longitudinal examination of the diagnostic accuracy and predictive validity of R-CBM and high-stakes testing. School Psychology Review, 34, 372-386.

Illinois State Board of Education Division of Assessment. (2008). Illinois standards achievement test: 2008 technical manual.

January, S. A., Ardoin, S. P., Christ, T. J., Eckert, T. L., & White, M. J. (2016). Evaluating the interpretation and use of curriculum-based measurement in reading and word lists for universal screening in first and second grade. School Psychology Review, 45, 310-326.

Jenkins, J. R., & Jewell, M. (1993). Examining the validity of two measures for formative teaching: Reading aloud and maze. Exceptional Children, 59, 421-432. doi: 10.1177/001440299305900505

Jenkins, J., & Terjeson, K. J. (2011). Monitoring reading growth: Goal setting, measurement frequency, and methods of evaluation. Learning Disabilities Research & Practice, 26, 28-35. doi: 10.1111/j.1540-5826.2010.00322.x

Jiban, C. L., & Deno, S. L. (2007). Using math and reading curriculum-based measurements to predict state mathematics test performance: Are simple one-minute measures technically adequate? Assessment for Effective Intervention, 32, 78-89. doi: 10.1177/15345084070320020501

Joseph, L. (2015). Understanding, assessing, and intervening on reading problems (2nd ed.). Bethesda, MD: NASP.

Juel, C. (1988). Learning to read and write: A longitudinal study of 54 children from first through fourth grades. Journal of Educational Psychology, 80, 437-447. doi: 10.1037/0022-0663.80.4.437

Knight-Teague, K., Vanderwood, M. L., & Knight, E. (2014). Empirical investigation of word callers who are English learners. School Psychology Review, 43, 3-18.

Koponen, T., Aunola, K., Ahonen, T., & Nurmi, J. (2007). Cognitive predictors of single-digit and procedural calculation skills and their covariation with reading skill. Journal of Experimental Child Psychology, 97, 220-241. doi: 10.1016/j.jecp.2007.03.001

McMaster, K. L., Espin, C. A., & van den Broek, P. (2014). Making connections: Linking cognitive psychology and intervention research to improve comprehension of struggling readers. Learning Disabilities Research & Practice, 29, 17-24. doi: 10.1111/ldrp.12026

Meisinger, E. B., Bradley, B. A., Schwanenflugel, P. J., & Kuhn, M. R. (2010). Teachers' perceptions of word callers and related literacy concepts. School Psychology Review, 39, 54-68.

Neese, J., Park, B., Alonzo, J., & Tindal, G. (2011). Applied curriculum-based measurement as a predictor of high-stakes assessment: Implications for researchers and teachers. The Elementary School Journal, 111, 608-624. doi: 10.1086/659034

Robinson, C. S., Menchetti, B. M., & Torgesen, J. K. (2002). Toward a two-factor theory of one type of mathematics disabilities. Learning Disabilities Research & Practice, 17, 81-89. doi: 10.1111/1540-5826.00035

Shinn, M. R. (1988). Development of curriculum-based local norms for use in special education decision-making. School Psychology Review, 17, 61-80.

Shin, J., Deno, S. L., & Espin, C. A. (2000). Technical adequacy of the MAZE task for curriculum-based measurement of reading growth. The Journal of Special Education, 34, 164-172. doi: 10.1177/002246690003400305

Shinn, M. R., Shinn, M. M., Hamilton, C., & Clarke, B. (2002). Using curriculum-based measurement in general education classrooms to promote reading success. In M. R. Shinn, H. M. Walker, & G. Stoner (Eds.), Interventions for academic and behavior problems II: Preventative and remedial approaches (pp. 113-139). Bethesda, MD: NASP.

Shinn, M. R., Good, R. H., Knutson, N., Tilly, W. D., & Collins, V. L. (1992). Curriculum-based measurement oral reading fluency: A confirmatory analysis of its relation to reading. School Psychology Review, 21, 459-479.

Silberglitt, B., Burns, M. K., Madyun, N. H., & Lail, K. E. (2006). Relationship of reading fluency assessment data with state accountability test scores: A longitudinal comparison of grade levels. Psychology in the Schools, 43, 527-535. doi: 10.1002/pits.20175

Silberglitt, B., & Hintze, J. (2005). Formative assessment using CBM-R cut scores to track progress toward success on state-mandated achievement tests: A comparison of methods. Journal of Psychoeducational Assessment, 23, 304-325. doi: 10.1177/073428290502300402

Swanson, H. L., Zheng, X., & Jerman, O. (2009). Working memory, short-term memory, and reading disabilities: A selective meta-analysis of the literature. Journal of Learning Disabilities, 42, 260-287. doi: 10.1177/0022219409331958

Swanson, H. L., & O'Connor, R. (2009). The role of working memory and fluency practice on the reading comprehension of students who are dysfluent readers. Journal of Learning Disabilities, 42, 548-574. doi: 10.1177/0022219409338742

Taylor, C. D., Meisinger, E. B., & Floyd, R. G. (2016). Disentangling verbal instructions, experimental design, and sample characteristics: Results of curriculum-based measurement of reading research. School Psychology Review, 45, 53-72.

Thurber, R. S., Shinn, M. R., & Smolkowski, K. (2002). What is measured in mathematics tests? Construct validity of curriculum-based mathematics measures. School Psychology Review, 31, 498-513.

Tindal, G., Marston, D., & Deno, S. L. (1983). The reliability of direct and repeated measurement. Minneapolis, MN: University of Minnesota Institute for Research on Learning Disabilities.

Tindal, G., Nese, J. F., Stevens, J. J., & Alonzo, J. (2016). Growth on oral reading fluency measures as a function of special education and measurement sufficiency. Remedial and Special Education, 37, 28-40. doi: 10.1177/0741932515590234

U.S. Department of Education. (2004). Executive summary.

Wayman, M. M., Wallace, T., Wiley, H. I., Ticha, R., & Espin, C. A. (2007). Literature synthesis on curriculum-based measurement in reading. The Journal of Special Education, 41, 85-120. doi: 10.1177/00224669070410020401

Wiley, H. I., & Deno, S. L. (2005). Oral reading and maze measures as predictors of success for English language learners on a state standards assessment. Remedial and Special Education, 26, 207-214. doi: 10.1177/07419325050260040301


Table 1 Sample Sizes, Means, and Standard Deviations for CBM Scores and
ISAT Subscales for Fifth Grade

Measure   N     M        SD

ORF       102   122.42   35.92
MAZE      102    17.03    7.10
ISAT-R    102   234.61   21.92
ISAT-M    102   237.80   26.83

Note. ORF = Oral Reading Fluency. ISAT-R = ISAT Reading. ISAT-M = ISAT Mathematics Test.

Table 2 Sample Sizes, Means, and Standard Deviations for CBM Scores and
ISAT Subscales for Sixth Grade

Measure   N    M        SD

ORF       95   126.46   37.90
MAZE      95    19.38    9.61
ISAT-R    95   231.99   26.98
ISAT-M    95   238.41   27.74

Note. ORF = Oral Reading Fluency. ISAT-R = ISAT Reading. ISAT-M = ISAT Mathematics Test.
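
Tables 1 and 2 report the sample size, mean, and sample standard deviation for each measure. As a minimal sketch of how such summary statistics are computed (the scores below are hypothetical illustrations, not the study's data):

```python
import numpy as np

# Hypothetical ORF scores for illustration only (not the study's data)
orf = np.array([95, 110, 130, 150, 88, 142])

n = orf.size
m = orf.mean()
sd = orf.std(ddof=1)  # ddof=1 gives the sample SD, as reported in the tables
print(n, round(float(m), 2), round(float(sd), 2))  # → 6 119.17 25.41
```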

Table 3 Correlation Matrix between CBM Scores and ISAT Subscales for
Fifth Grade (N = 102)

         ORF        MAZE       ISAT-R     ISAT-M

ORF                 .72 (**)   .67 (**)   .55 (**)
MAZE     .72 (**)              .67 (**)   .53 (**)
ISAT-R   .67 (**)   .67 (**)              .74 (**)
ISAT-M   .55 (**)   .53 (**)   .74 (**)

Note. ORF = Oral Reading Fluency. ISAT-R = ISAT Reading. ISAT-M = ISAT Mathematics Test.
(**) = Correlation is significant at the 0.01 level (1-tailed).

Table 4 Correlation Matrix between CBM Scores and ISAT Subscales for
Sixth Grade (N = 95)

         ORF        MAZE       ISAT-R     ISAT-M

ORF                 .65 (**)   .64 (**)   .49 (**)
MAZE     .65 (**)              .62 (**)   .53 (**)
ISAT-R   .64 (**)   .62 (**)              .78 (**)
ISAT-M   .49 (**)   .53 (**)   .78 (**)

Note. ORF = Oral Reading Fluency. ISAT-R = ISAT Reading. ISAT-M = ISAT Mathematics Test.
(**) = Correlation is significant at the 0.01 level (1-tailed).
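
Correlation matrices like those in Tables 3 and 4 are built from pairwise Pearson coefficients. As a minimal sketch (again using hypothetical scores, not the study's data):

```python
import numpy as np

# Hypothetical fifth-grade scores for illustration only (not the study's data)
orf  = np.array([95, 110, 130, 150, 88, 142])
maze = np.array([14, 12, 22, 20, 11, 25])

# np.corrcoef returns the full correlation matrix; [0, 1] is r(ORF, MAZE)
r = np.corrcoef(orf, maze)[0, 1]
print(round(float(r), 2))  # → 0.86
```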

Table 5 Summary of Simple Linear Regression Analysis for 5th Grade CBM
Predicting ISAT Reading Performance (N = 102)

Variable   B      SE B   β        t      p

ORF        0.24   0.06   0.39     3.91   <.01
MAZE       1.19   0.31   0.38     3.83   <.01

R² = .52, p < .001

Table 6 Summary of Simple Linear Regression Analysis for 5th Grade CBM
Predicting ISAT Mathematics Performance (N = 102)

Variable   B      SE B   β        t      p

ORF        0.26   0.09   0.35     3.00   <.01
MAZE       1.04   0.44   0.28     2.34    .02

R² = .34, p < .001

Table 7 Summary of Simple Linear Regression Analysis for 6th Grade CBM
Predicting ISAT Reading Performance (N = 95)

Variable   B      SE B   β        t      p

ORF        0.29   0.07   0.41     4.11   <.01
MAZE       1.00   0.28   0.36     3.60   <.01

R² = .48, p < .001

Table 8 Summary of Simple Linear Regression Analysis for 6th Grade CBM
Predicting ISAT Mathematics Performance (N = 95)

Variable   B      SE B   β        t      p

ORF        0.19   0.07   0.26     2.26    .03
MAZE       1.04   0.25   0.36     3.20   <.01

R² = .32, p < .001
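
Each regression table reports unstandardized coefficients (B), standardized coefficients (β), and R² from a model with ORF and MAZE as joint predictors of an ISAT subscale. A minimal sketch of those computations using ordinary least squares (hypothetical scores, not the study's data):

```python
import numpy as np

# Hypothetical scores for illustration only (not the study's data)
orf  = np.array([ 95., 110., 130., 150.,  88., 142., 120., 105.])
maze = np.array([ 14.,  12.,  22.,  20.,  11.,  25.,  18.,  15.])
isat = np.array([210., 215., 240., 250., 205., 255., 230., 220.])

# Design matrix with an intercept column; least squares yields the
# unstandardized coefficients B reported in the tables
X = np.column_stack([np.ones_like(orf), orf, maze])
b, *_ = np.linalg.lstsq(X, isat, rcond=None)

# Standardized beta = B * SD(predictor) / SD(outcome)
beta_orf  = b[1] * orf.std(ddof=1)  / isat.std(ddof=1)
beta_maze = b[2] * maze.std(ddof=1) / isat.std(ddof=1)

# R^2 = 1 - SS_residual / SS_total
resid = isat - X @ b
r2 = 1 - (resid @ resid) / ((isat - isat.mean()) @ (isat - isat.mean()))
print(round(float(r2), 2))
```
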
COPYRIGHT 2019 Project Innovation (Alabama)

Author: Whitley, Samuel
Publication: Reading Improvement
Date: Mar 22, 2019
