
Effects of curriculum within curriculum-based measurement.

Curriculum-based measurement (CBM) is a methodology for indexing student proficiency in the curriculum (Deno, 1985; Deno & Fuchs, 1987; Shinn, 1989). CBM has been used successfully for a variety of psychoeducational assessment purposes, including screening and referral (Marston, Mirkin, & Deno, 1984), identification (Germann & Tindal, 1985; Marston, 1988), individual education plan (IEP) formulation (Deno, Mirkin, & Wesson, 1983), progress monitoring and program development (Fuchs, L. S., Deno, & Mirkin, 1984), program evaluation (Germann & Tindal, 1985), and reintegration of handicapped students (Fuchs, D., Fuchs, & Fernstrom, in press).

CBM differs from other forms of curriculum-based assessment in several important ways. Distinguishing features include (a) CBM's focus on the long-term instructional goal, rather than on a series of short-term objectives, and (b) its blending of traditional and alternative assessment models (see Fuchs, L. S., & Deno, 1991, for discussion). But perhaps the most salient feature of CBM is that, whereas other forms of curriculum-based assessment require teachers to design their own measurement procedures, CBM relies on a standardized measurement methodology. That is, using material drawn from the student's own curriculum, CBM prescribes methods for creating measurement samples, for administering and scoring tests, and for analyzing the measurement data base. Research supports the reliability and validity of these standardized CBM methods. (For summaries of the literature on the technical adequacy of CBM, see Deno, 1985; Fuchs, L. S., 1986; Shinn, 1989.)

The purpose of the current study was to extend the literature on technical features of CBM in the area of reading by focusing on the effects of differences in "curriculum" materials used for CBM in reading. In this article, curriculum denotes the material students read during measurement. Using standard methods, the primary CBM datum in reading is the number of words read aloud correctly in 1 minute (min) from passages randomly sampled from a constant level of the student's curriculum. A diverse and replicated literature documents the reliability and validity of this standard CBM methodology in reading (e.g., Shinn, 1989). Nevertheless, almost all of the research on the reliability and validity of CBM in reading has been conducted by having all students, regardless of instructional level or age, read from a single reading level--typically third-grade material. In addition, little attention has been directed at contrasting psychometric features across different curricula. To document the scope of the validity of CBM methods, which are designed to be used at many different grade levels and within many different curricula, direct contrasts of the reliability and validity of the CBM procedures in reading across grade levels and curricula are necessary.

The current study investigated the effects of curriculum differences on the technical features of CBM in reading. Curriculum was defined in two ways. First, it was defined in terms of difficulty level: Students were measured on material sampled from each level within series, including preprimer through Grade 7. Second, curriculum was defined in terms of textbook series: Students were measured in two basal series with different foci. The technical features investigated in this study were criterion validity and developmental growth rate.


Criterion Validity

To establish the validity of the measures employed within CBM, the primary focus has been the criterion validity of alternative curriculum-based tasks compared with widely used, commercially available achievement tests (Deno, 1985). For example, Deno, Mirkin, and Chiang (1982) compared criterion validity coefficients for cloze, word meaning, isolated oral word reading, and passage oral reading tasks. L. S. Fuchs, Fuchs, and Maxwell (1988) employed similar methodology, but looked at additional criteria and curriculum-based measures, including cloze, retell, question answering, and passage oral reading measures. Across these and other (see Shinn, 1989) criterion validity studies, one finding recurs: Reading aloud from a text demonstrates the strongest relation with socially important, widely used criterion measures of reading. Because of this finding, as well as other technical features of the measure (see Fuchs, L. S., 1982; Shinn, 1989), reading aloud from passages is employed most frequently with CBM (Deno & Fuchs, 1987).

Despite the literature strongly supporting the criterion validity of the number of words read aloud correctly from text in 1 min, the effects of variations in the curriculum on the criterion validity of this datum are relatively unknown. One salient dimension of curriculum is the difficulty of the material presented to the student. A basic tenet of traditional teaching lore is that placement of students in material of "the right" difficulty level (usually called "instructional level") is of critical importance to their learning. One might hypothesize, then, that differences in the level of passage difficulty when conducting CBM in reading should affect obtained scores.

The optimal level of difficulty of material on which to conduct CBM has been an ongoing concern (see Taylor, Willits, & Richards, 1988, for discussion). CBM research has addressed this issue in terms of the effect of passage difficulty on sensitivity to student growth. Mirkin and Deno (1979), for example, measured students at three difficulty levels: independent, where students read 50 to 75 words correct per minute (wpm); instructional, where students read 35 to 60 wpm; and frustration, where students read 10 to 30 wpm. Students read at each level for 18 days, and the average growth rates were 1.00, 1.03, and .48 for independent, instructional, and frustration levels, respectively. Mirkin and Deno concluded that independent- or instructional-level material was superior to frustration-level material in terms of the sensitivity of the measurement to actual student growth. No research, however, has addressed the effect of passage difficulty on the criterion validity of CBM. The importance of this issue becomes evident when one considers that almost all CBM criterion-validity studies have been conducted with students reading third-grade passages. Consequently, one major purpose of the current study was to compare correlation coefficients for students' reading performance in material ranging from preprimer/primer (PP-P) to seventh-grade level.

A second dimension of curriculum is the particular basal reading series used. Because a persistent question in the measurement field is that of curriculum bias--or the extent to which a particular curriculum series affects the validity or usefulness of measurement--the curriculum used might affect the criterion validity of CBM. In critiquing the field of curriculum-based assessment, for example, Taylor, Willits, and Richards (1988) cited the potential for curriculum bias as a serious concern. This concern has not been borne out in the area of spelling, where L. S. Fuchs, Allinder, Hamlett, and Fuchs (1990) assessed the comparability of four commonly used, mainstream spelling curricula and found substantial similarity across programs. In contrast, Tindal, Marston, Deno, and Germann (1982) identified differences in mean performance levels as a function of reading curricula. Nevertheless, it is possible to have mean differences in level of performance with similar criterion validity coefficients. That is, the rank ordering of pupils may be preserved even though one text is more difficult than another. Consequently, an additional purpose of the current study was to contrast criterion-validity coefficients for different reading curricula.


Developmental Growth Rate

Another important technical feature addressed in the CBM literature is developmental growth rate. Previous work has indicated that patterns of growth across grade levels on the CBM measures are similar to those documented for most physical and intellectual skills (see Figures 2-4 in Deno, 1985). Developmental growth rate patterns indicate that (a) reading aloud from basal text reliably indexes growth in reading proficiency through the elementary years, and (b) samples of reading aloud from text may be used as a "vital sign" of reading achievement in much the same way that heart rate or body temperature is used as a vital sign of physical health (Deno, 1985).

Unfortunately, research demonstrating strong developmental growth rates has been conducted exclusively using third-grade passages (i.e., a midrange level of difficulty). Moreover, growth rates for different curricula never have been contrasted. Consequently, the extent to which developmental growth rates vary as a function of curricular dimensions, such as the difficulty of material students read or the series from which passages are drawn, is unknown. The final purpose of the current study was to investigate the robustness of developmental growth rates within CBM by comparing those rates when students read material from alternative curricula, ranging in difficulty from PP-P to seventh grade. Across the questions posed in the current study, then, the primary focus was the potential for curriculum bias on CBM validity in reading.

Method

Subjects

Subjects were 91 students in one metropolitan public elementary school (N = 504), in which 37.59% of families received Aid to Families with Dependent Children (AFDC). To identify the study sample, 134 students with limited English proficiency were eliminated from the population. From the remaining 370 pupils, a stratified sample of 162 children (27 at each grade level) was selected randomly, and consent forms were mailed to parents of these children. Written consent was returned for 91 (56%) children. These students (51 males, 40 females) were at Grades 1 (n = 14), 2 (n = 17), 3 (n = 15), 4 (n = 18), 5 (n = 16), and 6 (n = 11). Of the 91 pupils, 15 (16%) received special education resource service and were classified as learning disabled according to state guidelines that conformed to Public Law 94-142. Twenty-three students (25%) were enrolled in Chapter 1 programs for children "seriously behind" in reading. Grade equivalency scores on the Word Identification Test of the Woodcock Reading Mastery Tests (administered within 2 weeks of the current study) were associated with students' mean raw score performance. For students in Grades 1-6, these grade scores were 1.4, 2.0, 2.9, 3.7, 3.3, and 7.4, respectively. Students' average Ginn placement scores (as reported by their teachers within 2 weeks of the current study) were 1.01, 2.34, 3.52, 4.43, 5.08, and 6.58.

Measures

Two types of measures were employed: a commercial standardized achievement test and curriculum-based measures.

Commercial Test. The Passage Comprehension (PC) test of the Woodcock Reading Mastery Test-Form A (Woodcock, 1973) was employed. The PC contains 85 items of a modified cloze procedure (Bormuth, 1969). The subject's task is to read silently a passage from which a word has been deleted and to supply orally to the examiner an appropriate missing word. The passages range in difficulty from first grade to college level (Woodcock, 1973). As reported by Woodcock (1973), technical data are as follows. Split-half reliability coefficients (calculated on Grades 2.9 and 7.9 and corrected by the Spearman-Brown formula) ranged from .93 to .96 for Form A, whereas the test-retest alternate form reliability coefficient was .88 for PC. Construct validity examined by employing a multimethod-multitrait matrix at Grades 2.9 and 7.9 supported the relationships among similar and dissimilar measurement tasks that had been hypothesized by the test's author. The PC was selected as the criterion measure because it is widely used to index reading comprehension (i.e., students' understanding of text)--the commonly agreed-on purpose of reading.

Curriculum-Based Measures. Reading passages from the Ginn 720 (1976) and the Scott-Foresman (1976) series were used in measurement. The Ginn basal series was selected because it was employed within the target school. As described by the authors of the program, the Ginn series approaches reading instruction eclectically, addressing skills in decoding (phonemic and structural analysis), comprehension, and study skills. The relative emphases among these components shift from decoding to comprehension and study skills as the levels increase in difficulty. The series comprises 13 levels.

The Scott-Foresman Unlimited series was selected as a contrast to the Ginn program. As described by the Scott-Foresman authors, the series places greater emphasis on comprehension skills and less emphasis on decoding skills. The skills are categorized into comprehension (decoding and language experience), critical reading, study skills, literary skills, and attitudes and habits. The relative emphases among these components remain similar through the levels. The Scott-Foresman series is segmented into 21 books.

For 10 levels in Ginn and 9 levels in Scott-Foresman (see Table 1), two 100-word passages were selected as representative of the average readability level of the material from which the passages were drawn. Representative passages were employed because, as Fitzgerald (1980) demonstrated for seven series, substantial variability exists in the readability of passages from the same books. The following procedures, adapted from D. Fuchs and Balow (1974), were employed to select these passages: (a) from the last 25% of each level, five pages were selected randomly from all pages without excessive phonics exercises, dialogue, indentations, or proper nouns; (b) from each page, one 100-word passage was identified; (c) for each passage, a readability score was calculated using the Spache formula (1953) for passages in books from preprimer through Grade 3, and using the Dale-Chall formula (1948) for passages in books from Grades 4-6; (d) the average readability over the passages within a level was calculated; (e) if readability scores for two passages were within 1 month grade level of the mean, then these two passages were selected; and (f) if, however, no two passages' readabilities were within 1 month grade level of the mean, then another passage was selected randomly and the preceding steps were repeated. Table 1 displays level numbers, publishers' grade levels, and readability information for each level within each series.
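The selection procedure in steps (a) through (f) amounts to an iterative resampling loop. The sketch below illustrates the logic of steps (b) through (f) under stated assumptions: the `readability` scoring function, the function name, and the tolerance of 0.1 grade level (taken as one school month) are hypothetical stand-ins, not the authors' exact implementation.

```python
import random

def select_passages(pool, readability, tolerance=0.1):
    """Select two passages whose readability scores fall within
    `tolerance` grade levels of the candidates' mean readability,
    drawing additional candidates from `pool` until two qualify.

    pool: list of candidate 100-word passages
    readability: function mapping a passage to a grade-level score
    """
    candidates = random.sample(pool, min(5, len(pool)))  # steps (a)-(b)
    while True:
        scores = [readability(p) for p in candidates]
        mean = sum(scores) / len(scores)                 # step (d)
        close = [p for p, s in zip(candidates, scores)   # step (e)
                 if abs(s - mean) <= tolerance]
        if len(close) >= 2:
            return close[:2]
        remaining = [p for p in pool if p not in candidates]
        if not remaining:
            raise ValueError("candidate pool exhausted")
        candidates.append(random.choice(remaining))      # step (f)
```

Note that the loop can exhaust a small pool when readabilities are highly variable, which mirrors the practical difficulty the authors report for the Scott-Foresman levels.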

In the Ginn series, passages increased an average .44 readability grade score per level; in Scott-Foresman, .43. As shown in Table 2, within the Ginn series, the difference in readability was statistically significant in 8 of 10 consecutive pairs of levels. In Scott-Foresman, differences were significant for only three of nine pairs. This greater unreliability in the Scott-Foresman Unlimited series was due to greater variability (a larger standard error) for the Scott-Foresman series, not to smaller differences in the mean readability scores between levels.

From each passage, students were required to read aloud for 1 min, while examiners recorded errors and the last word read. Omissions, insertions, substitutions, and mispronunciations were counted as errors. If a student completed a passage in less than 1 min, the examiner recorded the number of seconds in which the student finished. After administration of each passage, examiners scored the number of words read correctly per minute and the number of errors per minute for each passage. When students finished in less than 1 min, scores were adjusted to 1-min rates. Test-retest reliability for this curriculum-based measurement procedure ranged from .93 to .96, as reported by L. S. Fuchs, Deno, and Marston (1983).
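The adjustment to 1-min rates described above is simple proration. A minimal sketch (the function name is ours; the scoring rules follow the paragraph above):

```python
def per_minute_rate(words_correct, errors, seconds):
    """Prorate raw counts to 1-min rates. If the student read for the
    full minute, seconds is 60 and the counts pass through unchanged;
    if the passage was finished early, counts are scaled up."""
    if seconds <= 0:
        raise ValueError("seconds must be positive")
    factor = 60.0 / seconds
    return words_correct * factor, errors * factor
```

For example, a student who read 50 words correctly with 2 errors before finishing a passage in 30 seconds would be scored as 100 words correct and 4 errors per minute.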

Procedure

The commercial and the curriculum-based measures were administered individually in one 45- to 60-min testing session. Examiners administered the PC according to standard manual directions, and they gave the curriculum-based measures in standard format (see Mirkin et al., 1984). Examiners, who were master's and doctoral students in special education, were trained in one 1-hour (hr) session. The three measures (i.e., PC, Ginn passages, and Scott-Foresman passages) were administered in random order. In addition, within the Ginn and the Scott-Foresman series, the reading passages were administered in random order. Orders for the three measures and for passages within series were counterbalanced.

Results

Average Performance

For Grades 1 through 6, respectively, grade equivalency scores associated with students' mean raw score performance on the PC were 1.3, 2.3, 3.2, 4.1, 3.9, and 7.4. Means and standard deviations on each reading passage are shown in Table 3.

Effect of Curriculum on the Criterion Validity of Curriculum-Based Measures

A correlation matrix was generated between the PC scores and the oral reading rate CBM scores on each of the 19 passages. These correlations, shown in Table 4, were examined by passage difficulty level and by curriculum series (Ginn vs. Scott-Foresman).

As shown in Table 4, when averaged by passage difficulty across series, correlations for the different grade-level materials were of similar magnitude: For PP-P, .92; for Grade 1, .93; for Grade 2-1, .92; for Grade 2-2, .92; for Grade 3-1, .91; for Grade 3-2, .92; for Grade 4, .89; for Grade 5, .91; for Grade 6, .90; and for Grade 7, .89. Tests of differences between correlations for dependent samples (Walker & Lev, 1953), applied to these average correlations, revealed no reliable difference between any of the grade levels.

Across difficulty level, the average correlation for the Ginn series was .913; for the Scott-Foresman series, .914. A test of differences between correlations for dependent samples (Walker & Lev, 1953) indicated that this small difference was not statistically significant.
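Walker and Lev's (1953) procedure tests the difference between two correlations computed on the same students that share one variable (here, the PC criterion). A common modern formulation of this test is the Hotelling-Williams t; the sketch below assumes that formulation rather than reproducing Walker and Lev's exact computation, and the r23 value in the usage example is hypothetical (the correlation between the two CBM scores is not reported above).

```python
from math import sqrt

def williams_t(r12, r13, r23, n):
    """Hotelling-Williams t for comparing dependent correlations r12
    and r13 that share variable 1, where r23 is the correlation
    between the two non-shared variables and n is the sample size.
    Returns (t, df) with df = n - 3."""
    detR = 1 - r12**2 - r13**2 - r23**2 + 2 * r12 * r13 * r23
    rbar = (r12 + r13) / 2
    denom = 2 * ((n - 1) / (n - 3)) * detR + rbar**2 * (1 - r23) ** 3
    t = (r12 - r13) * sqrt((n - 1) * (1 + r23) / denom)
    return t, n - 3

# Hypothetical example: Ginn vs. Scott-Foresman mean coefficients,
# assuming r23 = .85 between the two CBM scores and n = 91.
t, df = williams_t(0.913, 0.914, 0.85, 91)
```

With correlations this close, the resulting t is near zero at 88 degrees of freedom, matching the nonsignificant difference reported above.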

Effect of Curriculum on Developmental Growth Rate

Effect of series. Using a common third-grade passage within each series, as typically has been done in research on CBM (e.g., Deno, 1985), a least-squares regression formula was used to compute a slope for the scores shown in Table 3. This slope (or the average yearly increase in score) was 30.15 (standard error of estimate [SEE] = 18.80) for the Ginn program and 27.28 (SEE = 19.72) for the Scott-Foresman. Thus, the developmental growth rate to be expected, when all students are tested on third-grade curriculum material, is approximately 30 wpm for the Ginn series and 27 wpm for Scott-Foresman. The left-hand panel of Figure 1 illustrates this developmental growth rate by series on the common third-grade task. As can be seen for both series, the trend is (a) similar, (b) relatively stable, and (c) dramatically linear, with the exception of a fifth-grade drop in performance across both reading series.
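The slope here is an ordinary least-squares regression of mean words correct per minute on grade level. A minimal computation in pure Python (the mean-WPM values below are illustrative only, not the study's Table 3 data):

```python
def ols_slope(x, y):
    """Ordinary least-squares slope of y on x: cov(x, y) / var(x)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

# Mean WPM on a common third-grade passage by grade (hypothetical):
grades = [1, 2, 3, 4, 5, 6]
mean_wpm = [20, 55, 85, 115, 110, 170]
growth_per_year = ols_slope(grades, mean_wpm)  # average yearly WPM gain
```

The slope is the "developmental growth rate": the average increase in words correct per minute associated with each additional grade level.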

Effect of difficulty: Developmental growth on contrasting common grade levels. To determine the extent to which developmental growth rate depends on the use of a third-grade (i.e., midlevel) task, slopes also were calculated for each grade level, for both series. Slopes and SEEs by grade level and series are shown in Table 5. As can be seen, within the Ginn series, the slopes ranged from a low of 22.57 at Grade 4 to a high of 33.98 at Grade 1. Within the Scott-Foresman series, slopes ranged from 22.62 at Grade 5 to 30.92 at the PP-P level. The correlation between grade and slope was -.65 (ns) for Ginn and -.83 (p < .01) for Scott-Foresman. Thus, as the difficulty of reading material increases, developmental growth scores may decrease.

Effect of difficulty: Developmental growth rate on students' individual grade-appropriate passage level. To assess the effect of having students read common material (i.e., all students reading one level of material) as opposed to grade-appropriate material (i.e., each student reading the level of material appropriate to his or her grade level), a slope was calculated for each series using first-grade passage scores for students at Grade 1, second-grade passage scores for students at Grade 2, and so forth. For Ginn, this slope was 19.80 (SEE = 18.06); for Scott-Foresman, 24.10 (SEE = 20.64). The right-hand panel of Figure 1 illustrates these trends in performance within each curriculum.

Discussion

The purpose of this study was to examine the effects of "curriculum" on the technical adequacy of CBM in reading. "Curriculum" was defined in two ways: as the level of difficulty of the material and as the basal reading series. The dimensions of technical adequacy investigated were (a) criterion validity with respect to a widely used, commercial, standardized test of reading comprehension, that is, the Passage Comprehension Test of the Woodcock Reading Mastery Tests, and (b) developmental growth rate. Overall, the findings reveal a robustness in the technical adequacy of CBM across curricular dimensions.

Regarding criterion validity, results indicated that passage difficulty did not affect the criterion validity of CBM. Despite the fact that third-grade material typically has been used in CBM criterion validity studies (e.g., Deno et al., 1982; Fuchs, L. S., et al., 1988), findings here support the conclusion that the high criterion validity of CBM is robust. Coefficients were similar when calculated on students' performance on material ranging across the elementary school years. Consequently, this study demonstrates that the criterion validity of CBM is not dependent on passage difficulty and not related to the use of midlevel difficulty material.

TABLE 4

For Each Difficulty Level Within Two Basal Text Series, Correlations Between Reading Aloud and PC Scores

                            Correlations (a)
 Difficulty Level    Ginn 720    Scott-Foresman Unlimited
 PP-P                  .93                .91
 1                     .93                .93
 2-1                   .93                .92
 2-2                   .93                .91
 3-1                   .92                .90
 3-2                   .92                .92
 4                     .89                .90
 5                     .90                .92
 6                     .89                .92
 7                     .89                --

(a) All correlations are statistically significant, p <

Note: PC = Passage Comprehension Test of the Woodcock Reading Mastery Tests-Form A (Woodcock, 1973); PP-P = preprimer/primer grade level.

Second, findings revealed that, given two reading curricula with contrasting foci, the criterion validity of CBM was preserved. The mean coefficient for the Ginn series, a program with an eclectic focus, was .913; the mean coefficient for the contrasting Scott-Foresman series, a program with a heavier focus on comprehension-related skills, was .914. The similarity of these coefficients is striking. It supports the tenability of CBM across reading curricula and suggests that users of CBM need not worry extensively about the possibility of curriculum bias. Of course, the current study was conducted with two mainstream reading programs. It is unknown whether criterion validity would be supported for specialized reading curricula, such as the Merrill Linguistics or SRA series, which have passages with highly controlled vocabulary. Criterion validity research is necessary before CBM should be applied to such specialized programs.

TABLE 5

Slopes by Curriculum by Difficulty Level of Two Basal Text Series

 Level                      Slope (SEE)
Ginn 720 Series
 Preprimer/Primer           25.64 (16.72)
 Grade 1                    33.98 (21.68)
 Grade 2-1                  30.86 (18.69)
 Grade 2-2                  28.55 (20.33)
 Grade 3-1                  30.15 (18.80)
 Grade 3-2                  26.52 (14.66)
 Grade 4                    22.57 (16.97)
 Grade 5                    24.38 (15.72)
 Grade 6                    23.13 (12.71)
 Grade 7                    24.83 (17.85)
 Across Grade Levels        19.80 (18.06)
Scott-Foresman Unlimited Series
 Preprimer/Primer           30.92 (20.25)
 Grade 1                    28.32 (14.44)
 Grade 2-1                  27.82 (17.15)
 Grade 2-2                  26.73 (14.68)
 Grade 3-1                  27.28 (19.72)
 Grade 3-2                  27.52 (20.24)
 Grade 4                    24.29 (17.18)
 Grade 5                    22.62 (16.15)
 Grade 6                    25.56 (17.03)
 Across Grade Levels        24.10 (20.63)

Note: SEE is Standard Error of Estimate, indicating the average variability around the slope.

In terms of developmental growth rates, findings also support the robustness of the CBM reading measure. Using the conventional methods for investigating CBM developmental growth rates (i.e., having all students, regardless of grade level or age, read common third-grade passages), developmental growth rates appeared generally stable, linear, and similar to those previously reported (see Deno, 1985). Importantly, however, when calculated on students' scores from contrasting reading levels and series (i.e., having students read common passages from any other elementary grade level from either of two curricula), developmental growth rates were similar and strong. Even put to an extreme test of calculating developmental growth rates by having students of different ages read different levels of material (using scores for first graders calculated on first-grade passages, scores for second graders calculated on second-grade passages, etc.), developmental growth rates (although lower) remained respectably strong. Consequently, not even the difficulty of the material from which students read could negate the strong patterns of developmental growth evident with the oral reading fluency measure.

Nevertheless, when students read common material from different levels of curriculum (e.g., all students reading Grade 1 material vs. all students reading Grade 6 material), negative, moderate to strong correlations were demonstrated between grade level and slope. For Ginn, the correlation was a nonsignificant -.65; for Scott-Foresman, a significant -.83. This suggests the possibility that, when students read lower level material, CBM may be more sensitive to student growth. This finding echoes the work of Mirkin and Deno (1979) and indicates the need for additional, related research. Despite this association, however, the developmental growth rates at all levels, even the more difficult passages, were strong.

Together, findings of the current study suggest an overall robustness for the criterion validity and the developmental growth rates of reading aloud from text as an index of proficiency. Neither feature of curriculum (i.e., difficulty of material or series) disturbed the strong criterion validity or developmental growth rate associated with the CBM reading measure. Results support continued development and research using the passage reading measure to index reading achievement and to monitor student growth in reading. Moreover, findings indicate that technical features of measurement may not be influenced in major ways by curricular dimensions. Consequently, as indicated by the work of Mehrens and associates (Mehrens, 1984; Phillips & Mehrens, 1987, 1988), the issue of curriculum bias--or the extent to which a particular curriculum affects the validity of measurement--may be less important than some research, which is largely based on hypothetical rather than empirical analyses, has indicated (e.g., Armbruster, Stevens, & Rosenshine, 1977; Jenkins & Pany, 1978; Shapiro & Derr, 1987). Of course, the current study examined curriculum bias within the context of CBM, whereas related research has focused on curriculum bias within commercial tests.

One additional, interesting point warrants attention. The fifth graders in this study appeared to have lower reading skills than their fourth-grade counterparts. The methodology of this study does not permit us to speculate on whether this drop was due to sampling error or to some generalizable phenomenon. In either case, however, the deterioration in fifth-grade performance was indexed by every objective measure of achievement used in this study: the Word Identification Test of the Woodcock Reading Mastery Tests (see description in "Subjects" section), the PC scores (see scores reported in the beginning of the "Results" section), and the curriculum-based measures within both series (see Table 3 and Figure 1). Interestingly, the one measure in which this fifth-grade drop was not observed was the teacher-reported index of reading level (see description in "Subjects"). Consequently, despite corroborating, objective information indicating that fifth graders' reading skills were lower than those of fourth graders, teachers still placed the fifth graders in more difficult reading material than they did fourth graders. Previous research corroborates this pattern in which teachers overestimate student skills, especially when performance is less than desirable or expected (Einhorn & Hogarth, 1978; Fuchs, L. S., & Fuchs, 1984). This finding appears to support the need for objective measurement of achievement, such as CBM, to assist teachers in their instructional decision making.

References

Armbruster, B. B., Stevens, R. J., & Rosenshine, B. (1977). Analyzing content coverage and emphasis: A study of three curricula and two tests (Tech. Rep. No. 26). Urbana-Champaign: Center for the Study of Reading, University of Illinois.

Bormuth, J. R. (1969). Factor validity of cloze tests as measures of reading comprehension ability. Reading Research Quarterly, 4, 358-365.

Dale, E., & Chall, J. (1948). A formula for predicting readability. Educational Research Bulletin, 27, 11-20.

Deno, S. L. (1985). Curriculum-based measurement: The emerging alternative. Exceptional Children, 52, 219-232.

Deno, S. L., & Fuchs, L. S. (1987). Developing curriculum-based measurement systems for data-based special education problem solving. Focus on Exceptional Children, 19(8), 1-16.

Deno, S. L., Mirkin, P. K., & Chiang, B. (1982). Identifying valid measures of reading. Exceptional Children, 49, 35-45.

Deno, S. L., Mirkin, P. K., & Wesson, C. (1983). Procedures for writing data-based IEPs. TEACHING Exceptional Children, 16(2), 99-104.

Einhorn, H. J., & Hogarth, R. H. (1978). Confidence in judgment: Persistence of the illusion of validity. Psychological Bulletin, 85, 395-416.

Fitzgerald, G. G. (1980). Reliability of the Fry sampling procedure. Reading Research Quarterly, 15, 489-503.

Fuchs, D., & Balow, B. (1974). Formulating an informal reading inventory. Unpublished manuscript. (Available from D. Fuchs, Box 328, Peabody College, Vanderbilt University, Nashville, TN 37203.)

Fuchs, D., Fuchs, L. S., & Fernstrom, P. (in press). Responsible reintegration of learning disabled students with curriculum-based measurement and trans-environmental programming to achieve responsible mainstreaming. Elementary School Journal.

Fuchs, L. S. (1982). Reading. In P. K. Mirkin, L. S. Fuchs, & S. L. Deno (Eds.), Considerations for designing a continuous evaluation system: An integrative review (Monograph No. 20). Minneapolis: University of Minnesota Institute for Research on Learning Disabilities. (ERIC Document Reproduction Service No. ED 226 042)

Fuchs, L. S. (1986). Monitoring the performance of mildly handicapped students: Review of current practice and research. Remedial and Special Education, 7(5), 5-12.

Fuchs, L. S., Allinder, R. M., Hamlett, C. L., & Fuchs, D. (1990). An analysis of spelling curricula and teachers' skills in identifying error types. Remedial and Special Education, 11(1), 42-53.

Fuchs, L. S., & Deno, S. L. (1991). Paradigmatic distinctions between instructionally relevant measurement models. Exceptional Children, 57, 488-500.

Fuchs, L. S., Deno, S. L., & Marston, D. (1983). Improving the reliability of curriculum-based measures of academic skills for psychoeducational decision making. Diagnostique, 8, 135-149.

Fuchs, L. S., Deno, S. L., & Mirkin, P. K. (1984). Effects of frequent curriculum-based measurement and evaluation on pedagogy, student achievement, and student awareness of learning. American Educational Research Journal, 21, 449-460.

Fuchs, L. S., & Fuchs, D. (1984). Criterion-referenced assessment without measurement: How accurate for special education? Remedial and Special Education, 5(4), 29-32.

Fuchs, L. S., Fuchs, D., & Maxwell, L. (1988). The validity of informal reading comprehension measures. Remedial and Special Education, 9(2), 20-29.

Germann, G., & Tindal, G. (1985). An application of curriculum-based assessment: The use of direct and repeated measurements. Exceptional Children, 52, 244-256.

Ginn and Company. (1976). Reading 720. Lexington, MA: Ginn (Xerox Corp.).

Jenkins, J. R., & Pany, D. (1978). Standardized achievement tests: How useful for special education? Exceptional Children, 44, 448-453.

Marston, D. (1988). The effectiveness of special education: A time series analysis of reading performance in regular and special education settings. The Journal of Special Education, 21, 13-26.

Marston, D., Mirkin, P. K., & Deno, S. L. (1984). Curriculum-based measurement: An alternative to traditional screening, referral, and identification of learning disabled students. The Journal of Special Education, 18, 109-118.

Mehrens, W. A. (1984). National tests and local curriculum: Match or mismatch? Educational Measurement: Issues and Practice, Fall, 9-15.

Mirkin, P. K., & Deno, S. L. (1979). Formative evaluation in the classroom: An approach to improving instruction (Research Report No. 10). Minneapolis: University of Minnesota Institute for Research on Learning Disabilities.

Mirkin, P. K., Deno, S. L., Fuchs, L. S., Wesson, C., Tindal, G., Marston, D., & Kuehnle, K. (1984). Procedures to develop and monitor progress on IEP goals. Minneapolis: University of Minnesota.

Phillips, S. E., & Mehrens, W. A. (1987). Curricular differences and unidimensionality of achievement test data: An exploratory analysis. Journal of Educational Measurement, 24(1), 1-16.

Phillips, S. E., & Mehrens, W. A. (1988). Effects of curricular differences on achievement test data at item and objective levels. Applied Measurement in Education, 1(1), 33-51.

Scott-Foresman Systems, Revised. (1976). Unlimited Series. Glenview, IL: Scott, Foresman & Co.

Shapiro, E. S., & Derr, T. (1987). An examination of overlap between reading curricula and standardized achievement tests. The Journal of Special Education, 21, 59-68.

Shinn, M. R. (Ed.). (1989). Curriculum-based measurement: Assessing special children. New York: Guilford.

Spache, G. (1953). A new readability formula for primary grade materials. The Elementary School Journal, 53, 410-413.

Taylor, R. L., Willits, P. P., & Richards, S. B. (1988). Curriculum-based assessment: Considerations and concerns. Diagnostique, 14, 14-21.

Tindal, G., Marston, D., Deno, S. L., & Germann, G. (1982). Curricular differences in direct repeated measures of reading (Research Report No. 93). Minneapolis: University of Minnesota Institute for Research on Learning Disabilities.

Walker, H. M., & Lev, J. (1953). Statistical inference. New York: Holt.

Woodcock, R. (1973). Woodcock Reading Mastery Tests manual. Circle Pines, MN: American Guidance Service, Inc.

LYNN S. FUCHS (CEC Chapter #185) is an Associate Professor in the Department of Special Education of Peabody College, Vanderbilt University, Nashville, Tennessee. STANLEY L. DENO is a Professor in the Department of Educational Psychology of the University of Minnesota, Minneapolis.
COPYRIGHT 1991 Council for Exceptional Children

Article Details
Author: Fuchs, Lynn S.; Deno, Stanley L.
Publication: Exceptional Children
Date: Dec 1, 1991