
The Contribution of Skills Analysis to Curriculum-Based Measurement in Spelling

Curriculum-Based Measurement (CBM) is used to measure the development of pupils' basic skills. Using standardized CBM procedures for creating, administering, and scoring tests (see Deno & Fuchs, 1987), practitioners can summarize CBM information in different ways to formulate traditional special education eligibility decisions and to develop students' instructional programs (see Fuchs, in press). In this research, we focus on strategies for analyzing CBM data to formulate effective instructional programs.

Within CBM, the primary datum is the performance indicator, or the student's score on the test; this score represents the student's overall proficiency in the annual curriculum. To monitor student progress, teachers administer CBM repeatedly over time and graph the scores. Increasing scores signify the acquisition and retention of skills; flat or decelerating scores indicate the lack of acquisition or the failure to maintain skills.

Practitioners can use the graphed CBM performance indicator data base in at least three ways to monitor and develop educational programs: (a) to determine the appropriateness of goals and revise them as necessary (Fuchs, Fuchs, & Hamlett, 1989a); (b) to judge the adequacy of student growth and modify instruction, when warranted, to enhance student growth (Fuchs, Fuchs, & Hamlett, 1989b); and (c) to compare the efficacy of different interventions and to develop more effective components and eliminate less effective dimensions (see Casey, Deno, Marston, & Skiba, 1988).

Research indicates that when practitioners systematically use CBM in progress monitoring, the graphed CBM performance indicators can be employed as a dependent variable to describe student growth in the curriculum, to evaluate program success, and to determine when and how to revise plans. More than a decade of CBM research and development (e.g., Deno, 1985, 1986; Shinn, 1989) has focused on the performance indicator.

Despite this attention to the performance indicator, additional, more qualitative information about the student's performance is available within CBM. Each CBM test samples across the skills of the annual curriculum. Consequently, one can summarize student performance by type of curricular skill. At an informal level, practitioners may observe student performance on the multiple skills of the curriculum as they score tests and take these observations into account when designing instructional improvements. In addition to informal observations, an analysis of student responses can be systematized. For example, one might order words recently spelled during CBM tests from most to least correct, or one might count types of errors to determine which types occur frequently, as the following sketch illustrates.
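
For illustration only, this minimal Python sketch shows both operations; the words, scores, and error labels are hypothetical and are not data or categories from the study.

```python
# Hypothetical illustration of ordering words by correctness and
# tallying error types; not the study's data or error taxonomy.
from collections import Counter

# (target word, student's spelling, proportion of letter sequences correct)
attempts = [
    ("because", "becuz", 0.38),
    ("friend",  "frend", 0.86),
    ("letter",  "leter", 0.86),
    ("caught",  "cot",   0.29),
]

# Order words recently spelled from most to least correct.
for target, spelling, proportion in sorted(attempts, key=lambda a: -a[2]):
    print(f"{target:10s}{spelling:10s}{proportion:.0%}")

# Count types of errors to determine which occur frequently.
observed_errors = ["vowel omission", "phonetic substitution",
                   "phonetic substitution", "vowel omission"]
print(Counter(observed_errors).most_common())
```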

Unfortunately, at present, the value of additional analysis within CBM remains unknown: Research has not addressed whether teachers can use diagnostic spelling information to improve the quality of instruction. In math, limited research indicates that teachers do not successfully incorporate analysis of students' faulty algorithms into their instruction (Putnam, 1987). Consequently, the extent to which teachers can benefit from the analysis of spelling errors is an open question. The purpose of the current study was to explore different levels of additional analysis of student errors within CBM.

METHOD

Design

Teachers were assigned randomly to three groups: CBM with skills analysis (Group 1), CBM without skills analysis (Group 2), and control (Group 3). Each CBM teacher selected four pupils to participate; the four students then were assigned randomly to two subgroups. Group 1 teachers used the performance indicator plus skills analysis for both subgroups (A and B). Group 2 teachers proceeded differently with each subgroup: For one subgroup (A), they used the performance indicator analysis alone; for the second subgroup (B), they used the performance indicator analysis but also saw ordered lists of student spellings. Each CBM group's performance was compared with that of the same control group of teachers (Group 3), who each selected two pupils to participate. The design treated the teacher as the unit of analysis. The between-subjects factor was CBM (i.e., CBM with skills analysis [Group 1] vs. CBM without skills analysis [Group 2] vs. control [Group 3]). One within-subjects factor was subgroup (skills analysis vs. performance indicator alone; and skills analysis vs. ordered lists of errors); the second within-subjects factor was time (pretreatment vs. posttreatment).

Subjects

Teachers. Participants were 30 special educators in 16 schools who taught self-contained or resource programs. Teachers were assigned randomly to the three groups. One-way analyses of variance (ANOVAs) revealed no reliable differences on the following variables: age level, total years teaching, years teaching special education, years in current position, and personal or general teacher efficacy (see Gibson & Dembo, 1984). Descriptive statistics are shown in Table 1. A chi-square test applied to teachers' highest educational degree revealed no reliable relation. In Groups 1, 2, and 3, respectively, 7, 6, and 6 teachers had bachelor's degrees; 3, 3, and 3 had master's degrees; and 0, 1, and 1 had specialist's certificates. In both CBM groups, five teachers had CBM experience.

Students. Each CBM teacher selected four students to participate; each control teacher, two students. All pupils had mild or moderate disabilities, were in Grades 3-9, had a current IEP (Individualized Education Program) spelling goal, and were classified as learning disabled or emotionally disturbed according to state regulations, conforming to Public Law 94-142. Within the CBM groups, two of each teacher's four students were assigned randomly to one subgroup; two, to a second subgroup. During the study, 10 of the 100 students moved, leaving student ns of 19 and 20 in the two subgroups of Group 1; 18 and 17 in the two subgroups of Group 2; and 18 in the control group.

ANOVAs on students' age, grade, teacher-estimated spelling level, years in special education, and individually measured IQ (available for 60 students) revealed no significant differences. Chi-square tests applied to race, sex, and handicapping condition indicated that groups were comparable (see Table 2).

Measures

Fidelity of Treatment. The accuracy with which teachers implemented the treatment was assessed using the Spelling Modified Accuracy of Implementation Rating Scale-Revised (S-MAIRS; Fuchs, 1988), which comprises three subscales: Structure, Measurement, and Data Utilization. Items are rated on a 5-point Likert-type scale (0 = "low"; 4 = "high"), in accordance with detailed scoring guidelines. Staff were trained in scoring during one 3-hour session. Percentage of agreement, calculated on 15 protocols, was 92. (See Coulter, cited in Thompson, White, & Morgan, 1982, for the formula.)
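
The cited source gives the exact computation; a common point-by-point version of the agreement index, which we assume is the one intended, is

$$\text{percentage of agreement} = \frac{\text{agreements}}{\text{agreements} + \text{disagreements}} \times 100.$$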

Number of CBM measurements was isolated for analysis, because previous work (Fuchs, Fuchs, & Hamlett, 1989b; Jenkins, Mayhall, Peshka, & Townsend, 1974; Wesson et al., 1988) indicated that measurement alone may affect achievement outcomes. Number of measurements was counted from computer files that stored students' scores. Percentage of agreement (see Coulter formula), calculated on 20 cases, was 100.

Program Development. The number of goal changes introduced for each student was counted from computer files on which teachers recorded these changes. The level of goal ambitiousness was calculated by dividing the final goal level by the baseline median, both of which were derived from computer files on which this information was stored. The number of instructional changes introduced by teachers during the study was counted from computer files, and the number of specific spelling skills referenced by teachers in instructional changes was counted from plan sheets teachers maintained during the study. Percentages of agreement (see Coulter formula), calculated for 20 cases, were 100, 100, 100, and 91, respectively, for these four measures.
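
A hypothetical worked example of the goal ambitiousness index (all values invented for illustration):

```python
# Hypothetical worked example of the goal ambitiousness index:
# final goal level divided by the baseline median.
from statistics import median

baseline_scores = [38, 41, 45]       # LSs correct on three baseline tests
final_goal_level = 70                # goal level in place at study's end

goal_ambitiousness = final_goal_level / median(baseline_scores)
print(round(goal_ambitiousness, 2))  # 1.71: the goal is about 1.7x baseline
```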

Achievement. The Word Spelling Test-Revised (WST; Deno, Mirkin, Kuehnle, & Lowry, 1980) comprises 60 words randomly selected from Grades 1-6 of Basic Elementary Reading Vocabularies (Harris & Jacobson, 1972). Words are dictated at 10-second (s) intervals. Performance is scored as number of letter sequences (LSs) correct. LSs (i.e., correct pairs of letters in words; see White & Haring, 1980) award credit for partially correct responses. As an index of achievement, LSs are more sensitive than words (see Deno, 1985).
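
The following sketch approximates LS scoring under one simple reading of the White and Haring (1980) procedure (adjacent pairs, including word boundaries, matched as a multiset); the published scoring rules may differ in detail.

```python
# Approximate correct-letter-sequence (CLS) scoring. Boundary markers
# make a word of n letters worth n + 1 letter sequences; each adjacent
# pair in the attempt earns credit if it matches an unused pair in the
# target. One simple reading of the procedure, not authoritative rules.
from collections import Counter

def pairs(word: str) -> Counter:
    padded = f"^{word}$"              # ^ and $ mark the word boundaries
    return Counter(padded[i:i + 2] for i in range(len(padded) - 1))

def correct_letter_sequences(target: str, attempt: str) -> int:
    # Multiset intersection: each target pair is credited at most once.
    return sum((pairs(target) & pairs(attempt)).values())

print(correct_letter_sequences("letter", "letter"))  # 7 = n + 1, full credit
print(correct_letter_sequences("letter", "leter"))   # 6, partial credit
print(correct_letter_sequences("letter", "lettr"))   # 5, partial credit
```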

The WST was selected because it is one of the few spelling achievement tests requiring a production rather than a recognition task. Criterion validity with the Test of Written Spelling, the Peabody Individual Achievement Test-Spelling, and the Stanford Achievement Test-Spelling was between .73 and .99 (Deno et al., 1980). Test-retest reliability was .92 or higher (Fuchs, Deno, & Marston, 1983), and interscorer agreement (see Coulter formula), assessed on 28% of the protocols, was 97%. The highest possible score was 395. No ceiling effect was observed.

Curriculum-Based Measurement Treatment

CBM teachers tracked pupil progress toward spelling goals for 15 weeks. The CBM treatment comprised goal selection, ongoing measurement on the goal material, and evaluation of the data base to develop instructional programs.

Goal Selection and Ongoing Measurement. Teachers determined the appropriate curriculum and spelling level on which to establish student goals; the goal level represented the pool of words teachers hoped the student would master by year's end. Spelling performance was assessed at least twice weekly for a maximum of 180 s, each time on an alternate form of a standard CBM test, which comprised randomly selected words from the goal-level material. Correct letter sequences were graphed. Students were trained to take the CBM spelling test at computers, using Basic Spelling software (Fuchs, Hamlett, & Fuchs, 1990). This program automatically administers and scores the tests and saves the scores.
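
A minimal sketch of how such alternate forms can be produced by random sampling follows; the word pool and test length are illustrative assumptions, not the software's actual parameters.

```python
# Illustrative generation of alternate-form CBM spelling tests by random
# sampling from the goal-level word pool. Pool and test length are
# assumptions for the sketch, not Basic Spelling's actual parameters.
import random

goal_level_pool = ["because", "friend", "letter", "happy", "though",
                   "caught", "enough", "minute", "answer", "listen"]

def alternate_form(pool, n_words=5, seed=None):
    rng = random.Random(seed)
    return rng.sample(pool, n_words)  # each call yields a fresh random form

print(alternate_form(goal_level_pool, seed=1))
print(alternate_form(goal_level_pool, seed=2))  # a different alternate form
```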

When students demonstrated mastery with Basic Spelling, teachers calculated a median baseline performance, using the most recent three scores. Next, teachers set a performance criterion, representing their best estimate of what the student might accomplish by year's end.

Evaluating the Data Base. Each week teachers employed Basic Spelling (Fuchs, Hamlett, & Fuchs, 1990), which automatically (a) graphs the performance indicators, (b) applies decision rules to the graphed performance indicators, (c) communicates those decisions, and (d) describes student responses. The first three components constituted the performance indicator analysis in this study; the last component varied depending on CBM group.

Basic Spelling shows a graph on the computer screen, with (a) the pupil's scores over time, (b) a goal line reflecting the desired slope of improvement from baseline to goal, and (c) a quarter-intersect (White & Haring, 1980) line of best fit superimposed over the scores collected since the last instructional change and extrapolated to the goal date (see Figure 1).

Decision rules guided the development of instructional programs. After 8 scores were collected, if the line of best fit was flatter than the goal line, the teacher introduced a teaching change to improve the rate of progress, and collected 8 new scores. If the line of best fit was steeper than the goal line, the teacher raised the goal and collected 8 new scores.
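
A minimal sketch of these decision rules follows, using one common formulation of the quarter-intersect line (a line through each half's median week and median score); all values are hypothetical.

```python
# Sketch of the CBM decision rules. The quarter-intersect line of best
# fit (White & Haring, 1980) passes through each half's median week and
# median score; its slope is compared with the goal line's slope.
# Scores, goal, and timeline are hypothetical.
from statistics import median

def quarter_intersect_slope(scores):
    half = len(scores) // 2
    first, second = scores[:half], scores[-half:]
    x1, y1 = median(range(half)), median(first)
    x2, y2 = median(range(len(scores) - half, len(scores))), median(second)
    return (y2 - y1) / (x2 - x1)

scores = [40, 42, 41, 45, 44, 46, 48, 47]   # the 8 most recent LS scores
baseline_median, goal, weeks_to_goal = 41, 70, 15
goal_slope = (goal - baseline_median) / weeks_to_goal   # desired gain/week

if quarter_intersect_slope(scores) < goal_slope:
    print("Trend flatter than goal line: introduce a teaching change.")
else:
    print("Trend at or above goal line: raise the goal.")
```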

For students in the first subgroup (performance indicator alone) of Group 2, teachers received only this performance indicator analysis. For students in the second subgroup (ordered word lists) of Group 2, teachers received the performance indicator analysis along with ordered listings of recently spelled words (see Figure 2). For teachers in both subgroups of Group 1, the computer also provided a skills analysis. This skills analysis showed (a) the ordered lists of words (see Figure 2), (b) numbers of words correct, almost correct, and far from correct (see Figure 3), and (c) the three most frequent types of errors, with examples of each error (see Figure 3). (For information on a revised skills analysis, which provides more information, see Fuchs, Hamlett, & Fuchs, 1990.)
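
A hypothetical sketch of these three skills-analysis components follows; the correctness thresholds and error labels are invented for illustration and do not reproduce the program's 27-category taxonomy.

```python
# Hypothetical sketch of the three skills-analysis components: (a) the
# ordered word list, (b) counts of correct / almost correct / far from
# correct words, and (c) the most frequent error types with examples.
# Thresholds and labels are invented, not the program's 27 categories.
from collections import Counter

# (target, attempt, proportion of letter sequences correct, error label)
responses = [
    ("letter",  "letter", 1.00, None),
    ("happy",   "hapy",   0.83, "double consonant"),
    ("because", "becuz",  0.38, "phonetic substitution"),
    ("friend",  "frend",  0.86, "vowel omission"),
    ("caught",  "cot",    0.29, "phonetic substitution"),
]

# (a) Ordered list, most to least correct.
for target, attempt, prop, _ in sorted(responses, key=lambda r: -r[2]):
    print(f"{target:10s}{attempt:10s}{prop:.0%}")

# (b) Words correct, almost correct, and far from correct.
correct = sum(r[2] == 1.0 for r in responses)
almost  = sum(0.5 <= r[2] < 1.0 for r in responses)   # threshold assumed
far     = sum(r[2] < 0.5 for r in responses)
print(f"correct: {correct}, almost: {almost}, far: {far}")

# (c) The most frequent error types, each with one example.
for label, n in Counter(r[3] for r in responses if r[3]).most_common(3):
    example = next(r for r in responses if r[3] == label)
    print(f"{label} ({n}): e.g., {example[0]} -> {example[1]}")
```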

Control Treatment

Control (Group 3) teachers set goals using standard IEP forms. On open-ended posttreatment questionnaires, teachers reported that they monitored pupil progress toward goals using weekly teacher-made spelling tests and workbook/worksheet performance. Such descriptions match those reported by larger samples of special educators (e.g., Mirkin & Potter, 1982).

CBM Training and Consultation

CBM teachers (Groups 1 and 2) were provided initial training to implement CBM over 8 weeks, with two 2-hour after-school workshops, teacher practice, and staff observation of teachers. Then, staff met with teachers in their classrooms once every 2 to 3 weeks to help them solve implementation problems. During the study, Group 1 teachers received 5.73 visits (SD = 2.01); Group 2 teachers, 6.60 visits (SD = 2.12), F(1, 18) = .94, ns.

An additional purpose of these visits was to provide consultation to teachers concerning the development of instructional changes. Instructional packets were prepared to provide teaching suggestions related to the 27 spelling error types potentially identified by the skills analysis (see Fuchs, Allinder, Hamlett, & Fuchs, in press, for a listing of error types), along with one packet with suggestions for types of drill. Project staff employed these packets in their consultation according to the following guidelines: They never showed the file of instructional packets to any teacher, and they offered a packet to a teacher, regardless of treatment group, only when that teacher independently identified a particular skill for instruction. Because the skills analysis categorized students' responses into error types, a Group 1 teacher was more likely to identify one of the 27 skills for instruction; however, whenever a Group 2 teacher identified a skill (even by a different name), the corresponding instructional packet was offered.

Data Collection and Analysis

S-MAIRS observations were conducted 10 weeks into the study, and scoring from documents was completed after the study. An S-MAIRS was completed for two randomly selected students per teacher, one from each subgroup. The number of measurement points and the four program development measures were tallied for all students from documents after the study. The WST was administered in small groups preceding and following the study.

All analyses were conducted with "teacher" as the unit of analysis. One-between (Group 1 vs. Group 2), one-within (subgroup vs. subgroup) subjects ANOVAs were applied to the S-MAIRS subscales. For number of CBM measurements and the four program development measures, data were aggregated by teacher subgroups and analyzed with analogous ANOVAs. WST scores were aggregated by teacher subgroups. Then, a one-between (Group 1 vs. Group 2 vs. Group 3), two-within (subgroup vs. subgroup; pretreatment vs. posttreatment) subjects ANOVA was applied.
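
As an illustration of this aggregation step (hypothetical data; pandas assumed for convenience):

```python
# Hypothetical illustration of aggregating student WST scores to the
# teacher-by-subgroup level, since the teacher is the unit of analysis.
import pandas as pd

scores = pd.DataFrame({
    "teacher":  [1, 1, 1, 1, 2, 2, 2, 2],
    "subgroup": ["A", "A", "B", "B", "A", "A", "B", "B"],
    "pre":      [110, 95, 102, 88, 120, 101, 99, 93],
    "post":     [140, 120, 118, 101, 151, 138, 122, 120],
})

# One pre and one post mean per teacher per subgroup; these cell means
# feed the one-between, two-within subjects ANOVA.
cell_means = scores.groupby(["teacher", "subgroup"])[["pre", "post"]].mean()
print(cell_means)
```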

RESULTS

Fidelity of Treatment

ANOVAs conducted on the S-MAIRS subscales revealed no significant main effects or interactions. See Table 3 for means and standard deviations.

Program Development

For number of intervention changes, number of goal changes, and goal ambitiousness, only one significant difference was revealed (see Table 3). Main effects for the Group 1/Group 2 factor produced F values of .11, .88, and .88 for the three indexes, respectively. Main effects for the subgroup factor produced F ratios of .30, .00, and 4.94, respectively. Interaction F values were .11, 1.29, and 1.03, respectively. The one significant difference was the subgroup main effect for goal ambitiousness, F(1, 18) = 4.94, p < .05. However, this difference is not of particular interest within the context of this study and is not discussed further.

For number of skills specified in the instructional changes, a significant difference of importance was identified. For the Group 1/Group 2 factor, the F ratio was 4.97, p < .05. Group 1 teachers (with skills analysis) cited a greater number of specific skills than did Group 2 (without skills analysis) teachers. F values for the subgroup main effect and the interaction were .20 and 1.45, ns (see Table 3).

Achievement

Table 3 displays WST scores by treatment group. Significant effects were identified for (a) time, F(1, 27) = 59.68, p < .001; (b) the CBM factor X time interaction, F(2, 27) = 5.38, p < .05; and (c) the three-way interaction, F(2, 27) = 3.76, p < .05. To identify the nature of the three-way interaction, which takes precedence over main effects and the simple interaction, a pre-post difference was calculated for each treatment group. Scheffe tests were applied to two contrasts: CBM with skills analysis vs. CBM with no additional analysis (performance indicator only) vs. control, and CBM with skills analysis vs. CBM with ordered lists vs. control. In the first contrast, the achievement of the CBM with skills analysis group exceeded that of the control group and that of the CBM with no additional analysis group. In the second contrast, the achievement of both CBM groups (with skills analysis and with ordered lists) surpassed that of the controls, with no reliable difference between the two CBM treatments.

DISCUSSION

Fidelity information revealed that teachers in different CBM groups implemented their treatments with similar and high levels of accuracy. In terms of program development, teachers made comparable numbers of intervention and goal changes and implemented similar levels of goal ambitiousness across CBM groups. However, compared with teachers in Group 2, who received no additional analysis or who received ordered word lists, teachers who had the skills analysis (Group 1) cited a greater number of specific skills for instruction. The skills analysis provided listings of the most frequent types of students' phonetic errors. Specific reference to these error types probably prompted teachers to identify related skills for instruction. Support for this explanation is derived from Fuchs, Allinder, Hamlett, and Fuchs (1990), who found that teachers who received skills analysis became more proficient in independently identifying phonetic errors than teachers who did not receive skills analysis.

In addition to citing a greater number of skills for instruction, teachers who received the supplemental skills analysis also effected better achievement. Mean levels of student growth increased as the supplemental CBM information became more descriptive: The mean growth for students in the control group, whose teachers received no systematic information about student performance in the curriculum, was 11.65 LSs; for students in the subgroup whose teachers received only the graphed performance indicators, 13.65; for students in the subgroup whose teachers saw ordered spellings, 25.83; and for students in both skills analysis subgroups, whose teachers saw ordered word lists with frequent errors, 35.05. In addition, inferential statistics revealed the following. First, achievement associated with the Group 1 (skills analysis) teachers was reliably better than that of the control group and of the Group 2 subgroup with no additional analysis. Second, achievement associated with the Group 2 subgroup with the ordered word lists was reliably better than that of the controls, but not significantly different from that of Group 1 (skills analysis).

Results are important for the development of a CBM technology. Current work (Fuchs, in press) suggests that the CBM performance indicator analysis may be essential for describing rates of student progress in the curriculum, for evaluating program effectiveness, and for prompting teachers about the need to revise goals and teaching strategies. Nevertheless, results indicate that, with increasingly rich, supplemental analyses of student responses to the CBM tests, teachers may be able to design better programs to enhance achievement. Previous research lends credence to this conclusion. Tindal, Fuchs, Mirkin, Christenson, and Deno (1981) found that, when graphs indicated that current instructional programs were ineffective and that teaching changes were warranted, teachers frequently found it difficult to use the graphed CBM information independently to formulate promising, substantial plans for programmatic changes. The CBM skills analysis provides teachers with more qualitative information and appears to facilitate more effective program formulation.

With continuing press to identify feasible, promising strategies for enhancing the quality of special education, work demonstrating the effectiveness of special technologies is critical to our field. Current results are important, therefore, because they support the effectiveness of CBM in improving educational outcomes in spelling for children with disabilities, and because they provide an innovative strategy for enhancing CBM applications through skills analysis. Moreover, findings appear robust, with additional research, conducted concurrently with this study, supporting the usefulness of the supplemental CBM skills analysis in reading and math (Fuchs, Fuchs, & Hamlett, 1989c; Fuchs, Fuchs, Hamlett, & Stecker, 1990).

In this study, teachers who received only the CBM performance indicator analysis failed to effect differential student achievement. At first glance, this appears to contradict previous research indicating that teachers can use the CBM performance indicator analysis to enhance growth (e.g., Fuchs, Deno, & Mirkin, 1984; Jones & Krouse, 1988). Yet, in this study, all teachers used computerized data collection, which automatically administered and scored tests. This technology saves teachers substantial amounts of time (Fuchs, Hamlett, Fuchs, Stecker, & Ferguson, 1988); yet it fails to provide an opportunity for teachers to inspect student responses as they routinely score tests. Therefore, the current study differs from previous CBM work because, with automatic data collection, teachers saw graphs without any additional opportunity to informally view student responses. Results of the current study suggest that, with no opportunity to inspect student responses, teachers may find it especially difficult to use CBM graphs to develop better instructional programs.

Consequently, computerized test administration or scoring may reduce the quality of information derived from assessment and may limit the usefulness of assessment for instructional planning. Nevertheless, the current study, which provided teachers with systematic, organized, qualitative information about student responses to test items through skills analysis, represents one example of how technology can be developed to facilitate practitioners' meaningful use of computerized assessment information.

REFERENCES

Casey, A., Deno, S. L., Marston, D., & Skiba, R. (1988). Experimental teaching: Changing teacher beliefs about effective instructional practices. Teacher Education and Special Education, 11, 123-132.

Deno, S. L. (1985). Curriculum-based measurement: The emerging alternative. Exceptional Children, 52, 219-232.

Deno, S. L. (1986). Formative evaluation of individual student programs: A new role for school psychologists. School Psychology Review, 15, 358-374.

Deno, S. L., & Fuchs, L. S. (1987). Developing curriculum-based measurement systems for data-based special education problem solving. Focus on Exceptional Children, 19(8), 1-16.

Deno, S. L., Mirkin, P.K., Kuehnle, K., & Lowry, L. (1980). Relationships among simple measures of spelling and performance on standardized achievement tests (Research Report No. 22). Minneapolis: University of Minnesota, Institute for Research on Learning Disabilities.

Fuchs, L. S. (1988). Effects of computer-managed instruction on teachers' implementation of systematic monitoring programs and student achievement. Journal of Educational Research, 81, 294-304.

Fuchs, L. S. (in press). Enhancing instructional programming and student achievement with curriculum-based measurement. In J. Kramer (Ed.), Curriculum-based assessment: Examining old problems, evaluating new solutions. Hillsdale, NJ: Lawrence Erlbaum.

Fuchs, L. S., Allinder, R., Hamlett, C. L., & Fuchs, D. (1990). Analysis of spelling curriculum and teachers' skills in identifying phonetic error types. Remedial and Special Education, 11(1), 42-53.

Fuchs, L. S., Deno, S. L., & Marston, D. (1983). Improving the reliability of curriculum-based measures of academic skills for psychoeducational decision making. Diagnostique, 6, 135-149.

Fuchs, L. S., Deno, S. L., & Mirkin, P. K. (1984). Effects of frequent curriculum-based measurement and evaluation on pedagogy, student achievement, and student awareness of learning. American Educational Research Journal, 21, 449-460.

Fuchs, L. S., Fuchs, D., & Hamlett, C. L. (1989a). Effects of alternative goal structures within curriculum-based measurement. Exceptional Children, 55, 229-238.

Fuchs, L. S., Fuchs, D., & Hamlett, C. L. (1989b). Effects of instrumental use of curriculum-based measurement to enhance instructional programs. Remedial and Special Education, 10(2), 43-52.

Fuchs, L. S., Fuchs, D., & Hamlett, C. L. (1989c). Monitoring student progress with student recalls: Effects of two teacher feedback systems. Journal of Educational Research, 83, 103-111.

Fuchs, L. S., Fuchs, D., Hamlett, C. L., & Stecker, P. M. (1990). The role of skills analysis to curriculum-based measurement in math. School Psychology Review, 19, 6-22.

Fuchs, L. S., Hamlett, C., & Fuchs, D. (1990). Basic Spelling [Computer program]. Austin, TX: PRO-ED.

Fuchs, L. S., Hamlett, C., Fuchs, D., Stecker, P. M., & Ferguson, C. (1988). Conducting curriculum-based measurement with computerized data collection: Effects on efficiency and teacher satisfaction. Journal of Special Education Technology, 9(2), 73-86.

Gibson, S., & Dembo, M. H. (1984). Teacher efficacy: A construct validation. Journal of Educational Psychology, 76, 569-582.

Harris, A. J., & Jacobson, M. D. (1972). Basic elementary reading vocabularies. New York: Macmillan.

Jenkins, J. R., Mayhall, W., Peshka, C., & Townsend, V. (1974). Using direct and daily measures to increase learning. Journal of Learning Disabilities, 10, 604-608.

Jones, E. D., & Krouse, J. P. (1988). The effectiveness of data-based instruction by student teachers in classrooms for pupils with mild handicaps. Teacher Education and Special Education, 11(1), 9-19.

Mirkin, P. K., & Potter, M. L. (1982). A survey of program planning and implementation practices of LD teachers (Research Report No. 80). Minneapolis: University of Minnesota, Institute for Research on Learning Disabilities.

Putnam, R. T. (1987). Structuring and adjusting content for students: A study of live and simulated tutoring of addition. American Educational Research Journal, 24, 13-48.

Shinn, M. R. (Ed.). (1989). Curriculum-based measurement: Assessing special children. New York: Guilford Press.

Thompson, R. H., White, K. R., & Morgan, D. P. (1982). Teacher-student interaction patterns in classrooms with mainstreamed mildly handicapped students. American Educational Research Journal, 19, 220-236.

Tindal, G., Fuchs, L. S., Mirkin, P. K., Christenson, S., & Deno, S. L. (1981). The effect of measurement procedures on student achievement (Research Report No. 61). Minneapolis: University of Minnesota, Institute for Research on Learning Disabilities. (ERIC Document Reproduction Service No. ED 218 846)

Wesson, C., Deno, S. L., Mirkin, P. K., Maruyama, G., Skiba, R., King, R., & Sevcik, B. (1988). A causal analysis of the relationships among ongoing curriculum-based measurement and evaluation, the structure of instruction, and student achievement. The Journal of Special Education, 22, 330-343.

White, O. R., & Haring, N. G. (1980). Exceptional teaching (2nd ed.). Columbus, OH: Merrill.

LYNN S. FUCHS (CEC Chapter #185) and DOUGLAS FUCHS (CEC Chapter #185) are Associate Professors; CAROL L. HAMLETT (CEC Chapter #242) is a Research Associate in the Department of Special Education at Peabody College, Vanderbilt University, Nashville, Tennessee. ROSE M. ALLINDER (CEC Chapter #236) is an Assistant Professor in the Department of Special Education at the University of Nebraska, Lincoln.