Patterns of Growth: Cluster Analysis of Written Expression Curriculum-Based Measurement.
Written expression curriculum-based measurement is a quick and effective way to screen for writing difficulties and evaluate student progress during an intervention. In this study the scores of third and fourth grade students in the fall, winter, and spring were analyzed as related to grade level, gender, special education participation, and handwriting style. A cluster analysis was used to examine patterns of student scores. Implications related to gender and handwriting style are discussed. Writing scores indicate the need to examine accuracy and skill improvement in addition to overall fluency rates.
Written expression is a complex process that is essential to obtaining well-paying jobs and communicating in today's society (National Commission on Writing, 2004). In order to prepare students for positive post-school outcomes, teachers must work with students to build knowledge and proficiency across many areas related to written expression, including generating and organizing ideas, producing legible handwriting, spelling words correctly, forming grammatically correct sentences, monitoring one's own performance, including interesting and specific vocabulary, and understanding the audience's perspective and needs. The expectations for clear and effective writing must begin early in school, which is reflected in the Elementary English Language Arts Common Core standards. By fourth grade, standards relate to correct spelling and production of a coherent piece of writing addressing a topic and accounting for the task, purpose, and audience (National Governors Association Center for Best Practices, Council of Chief State School Officers, 2010). Unfortunately, many students with and without disabilities do not meet these standards or achieve requisite levels of writing proficiency (Salahu-Din, Persky, & Miller, 2008). Current National Assessment of Educational Progress data indicate only 27% of U.S. eighth graders are writing at or above a grade-proficient level (National Center for Education Statistics, 2012). Specifically, students with disabilities often demonstrate skill deficits in planning, organizing, and writing conventions, and show insufficient metacognitive awareness to write strategically (Troia, 2006).
To understand students' strengths and needs and develop meaningful instruction, teachers require effective assessments. Evaluation of a student's writing can include an analysis of content or ideas, organization, vocabulary or word choice, voice, sentence fluency, and conventions (Culham, 2003). Although analytic rubrics, such as those used in popular writing programs, including The Six +1 Trait Writing Model (Culham, 2003), are structured, detailed, and comprehensive, it is often difficult to demonstrate progress over time or to identify specific areas of need with these rubrics alone (Parker, Tindal, & Hasbrouck, 1991). In addition, using class time for students to develop a revised and final product for every assessment is not always feasible. Difficulties with quickly and accurately assessing students' writing performance may contribute to students' limited progress.
Written Expression Curriculum-Based Measurement
A possible assessment option for teachers is Curriculum-Based Measurement (CBM), which was initially developed to aid in the teachers' decision-making processes regarding the impact of instruction on student learning (Deno, 2003). The measures are short and easy to administer, complete, and score, and can be used with students in early elementary through high school (Gansle et al., 2004; Lopez & Thompson, 2011; Mercer, Martinez, Faust, & Mitchell, 2012). Students write for three minutes in response to a story starter prompt, and the sample is scored using a number of indices (e.g., total words written, grammatically correct sequences, etc.). These scores related to spelling, grammar, or fluency are generally examined individually, rather than considering a pattern of student scores across several components of written expression. Although research on written expression CBM is not as robust as for reading or mathematics (see Christ, Scullin, Tolbize, & Jiban, 2008; Wayman, Wallace, Wiley, Ticha, & Espin, 2007 for reviews), it reflects adequate reliability and validity when considering the complexity of written expression (McMaster & Espin, 2007).
The most straightforward indices from written expression CBM involve counting the total number of words written (TWW), the number of those words that are spelled correctly (WSC), the number of correct punctuation marks, or the number of grammatically correct word sequences (CWS). The TWW is the simplest score to calculate and provides a quick way to evaluate fluency of writing (Malecki & Jewell, 2003). Both TWW and WSC are valid and reliable indicators of writing performance, with moderate correlations with other measures of writing ability (Marston, 1989). Student scores of TWW, WSC, and CWS increase over time in elementary school as students gain more experience and skills with writing (Malecki & Jewell, 2003). Therefore, these scores are useful in considering student progress. Technical adequacy for TWW results is mixed for differentiating between students with and without disabilities (Deno, Mirkin, & Marston, 1980; Tindal & Parker, 1989; Watkinson & Lee, 1992), which limits its usefulness as a screening measure (Parker, et al., 1991).
Correct Word Sequences (CWS) is a valid and reliable indicator of narrative writing for students in grades three and above (McMaster & Campbell, 2008) and is a useful screening measure since it differentiates between students with and without learning disabilities (Watkinson & Lee, 1992). While scoring procedures for TWW and WSC require less time (Gansle, Noell, VanDerHeyden, Naquin, & Slider, 2002), CWS is more strongly correlated with other established writing assessments (Deno, Marston, & Mirkin, 1982), teacher rankings of student writing (Gansle et al., 2002), and language arts grades (Jewell & Malecki, 2005). The number of Correct Punctuation Marks (CPM) is a count of periods, commas, apostrophes, and other punctuation applied appropriately. For third graders, TWW, CWS, and CPM predicted teachers' rankings of student writing; however, CPM was not significantly correlated with other measures of writing for fourth graders (Gansle et al., 2002).
These commonly used CBM scores evaluate spelling and grammar, which are important components for a writer being understood; however, effective communication requires more than correct spelling and grammar. A three-minute first draft may be too short to fully evaluate a student's organization and topic development, but CBM as an assessment of vocabulary usage has been previously evaluated with two scores: the number of different words (DW) and the number of long words (LW). In a study of elementary-aged students who are deaf or have hearing impairments, Chen (2002) found DW to have adequate reliability and to be significantly correlated with scores from standardized writing tests and teacher ratings of student writing. Gansle et al. (2002) found LW to be significantly correlated with standardized writing subtests for third graders, but not fourth graders.
The scores described above are dependent upon how much a student writes in three minutes. Students who are not as fluent will have lower scores, not necessarily because they have fewer ideas or skills related to grammar or spelling, but because they cannot get their ideas on paper as quickly. As a result, it is beneficial to use percentages to consider the accuracy of what was written, regardless of how much was produced. In a study of fourth grade students' written expression CBM scores, the percentage of words spelled correctly (PWSC) and the percentage of correct word sequences (PCWS) in the fall were significantly correlated with other standardized writing scores and with students' language arts grades (Jewell & Malecki, 2005). It is important to note there are concerns that percentage scores are not sensitive to growth (Espin, Shin, Deno, Skare, Robinson, & Benner, 2000; Tindal & Parker, 1989), particularly for students in grades three through five (Malecki & Jewell, 2003).
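The accuracy indices above are simple ratios of the raw fluency counts. A minimal sketch in Python (function and variable names are illustrative, not from the study; the PCWS convention of dividing correct sequences by all scored sequences is the one described in the CBM scoring literature):

```python
def percentage_scores(tww, wsc, cws, icws):
    """Convert raw written-expression CBM counts into accuracy percentages.

    tww: total words written; wsc: words spelled correctly;
    cws: correct word sequences; icws: incorrect word sequences.
    Returns (PWSC, PCWS) as percentages.
    """
    # PWSC: share of all words written that were spelled correctly.
    pwsc = 100.0 * wsc / tww if tww else 0.0
    # PCWS: correct sequences out of all scored sequences,
    # i.e., correct / (correct + incorrect).
    pcws = 100.0 * cws / (cws + icws) if (cws + icws) else 0.0
    return pwsc, pcws
```

Because the denominators are the student's own output, these percentages describe accuracy independent of how much text was produced in the three minutes.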
Variations in Written Expression Scores
It is beneficial when writing assessments reflect changes in scores as students become more proficient and accurate during the school year or as they advance in grade (e.g., from third to fourth grade). It is also useful for scores to differentiate between those with and without disabilities in written expression. However, there are other factors that may influence CBM scores and result in inaccurate conclusions about a student's progress or areas of need. Specifically, handwriting and gender may be reflected in evaluations of written expression in general, and CBM scores in particular.
Overall, females tend to write faster than males (Graham, Berninger, & Weintraub, 1998), and specifically on CBM, females have higher scores related to both fluency and accuracy. Malecki and Jewell (2003) examined the written expression CBM scores of students in first through eighth grades and found that females wrote more words, spelled more words correctly, and had more grammatically correct sequences. A study by Fearrington et al. (2014) of students in third through eighth grades corroborated these findings. Malecki and Jewell (2003) also reported that females experienced more growth over the three administrations (fall, winter, and spring) on measures of fluency in writing for several grade levels, including students in third and fourth grades. Females outperformed males, at the fall administration and over the school year, on measures of the percentage of words spelled correctly and percentage of correct word sequences.
A student's handwriting proficiency can influence their performance on timed tasks, such as CBMs. When children first learn how to write, the process of creating meaningful marks on the paper is time consuming (Kandel & Perret, 2015). As students develop their handwriting skills, they become more proficient and generate text more quickly. Some students continue to struggle, even after substantial instruction, due to fine motor and processing difficulties, which can inhibit fluency and pose a challenge to effective communication. Learning a new handwriting script when changing from print to cursive may also influence writing speed. In the United States, students often initially receive instruction on how to print in manuscript and then they learn cursive in second or third grade (Graham et al., 2008; Supon, 2009). When switching script style, students have to relearn how to form the letters. As students who are new to cursive encounter novel vocabulary words, there is an increase in processing time for both producing writing and spelling the words correctly (Kandel & Perret, 2015). As a result, the production of words on the page can be impeded. Classroom teachers vary in their introduction of and instruction regarding cursive, which may impact students' CBM scores as they attempt new handwriting scripts without sufficient practice.
The discussion of handwriting in CBM has been limited to the analysis of handwriting legibility within CBMs and the relationship between using print or cursive during text copying tasks. Tindal and Parker (1989) examined the legibility of middle school students' handwriting. The percentage of legible words written was moderately correlated with teachers' holistic ratings of student writing. On a copying task, Graham et al. (1998) examined legibility across four handwriting styles--(a) manuscript, (b) mostly manuscript, (c) cursive, or (d) mostly cursive--for students in fourth through ninth grades. There were no significant differences in speed between those who wrote in print, cursive, or combinations of both script styles. The researchers did not investigate speed with generative text tasks, which would have implications for CBMs, since the writing sample is timed and relatively short.
Commonly used programs to monitor student performance, such as Aimsweb, and teacher-targeted books on CBM (e.g., Hosp, Hosp, & Howell, 2006) include only three scores of fluency: TWW, WSC, and CWS. While this is a helpful start, there is more to be understood about the strengths and needs of student writing from written expression CBM than just how much was written. The reliance on these three measures ignores other valuable pieces of information about students' written expression, such as their use of vocabulary or their accuracy. Furthermore, these limited scores are considered in isolation, which may not capture the complexity of written expression. While teachers might readily account for factors such as grade level and disability when interpreting students' scores and progress, other factors such as student gender and the handwriting style used may be overlooked as influences on the scores.
In this study, third and fourth grade students completed written expression CBMs at three time points: fall, winter, and spring. The purpose of the study was to examine patterns of student performance from several different writing scores over time and to investigate characteristics that may be related to student scores including commonly considered factors of grade and disability, and less examined factors of gender and handwriting.
Participants included 324 third and fourth grade students in a midwestern school district. Specific information related to race and ethnicity and free and reduced lunch status was not available for individual students. District-level statistics indicate approximately 90% of the students in the school were white and a third received free or reduced lunch. Of the 154 third graders, 51.9% (n = 80) were male. Of the 170 fourth graders, 56.5% (n = 96) were male. About 7% of the students participating in the study received special education services, which is less than the national average (U.S. Department of Education, National Center for Education Statistics, 2015). Several students receiving special education services were not included in this analysis because they did not respond to the prompt using the class-wide standardized directions or were not in a general education English language arts classroom during the assessments.
CBM administration. Students completed the assessments in their general education classroom at three points during the school year: late September, late January/early February, and late April. Two make-up days were offered for students who were absent. Each assessment administration took approximately 10 minutes. Three narrative story starter prompts were selected from the Aimsweb manual (Powell-Smith & Shinn, 2004), and addressed topics familiar to all students (sleeping, sitting in a chair, going to the park).
The first and third authors administered the writing CBMs to the eight third-grade and eight fourth-grade English language arts classes. The administrators followed scripted directions for how to complete the cover page and writing assessment. Once the entire class completed the cover page, the Aimsweb directions were read (Powell-Smith & Shinn, 2004). After the story starter was read aloud, students were given one minute to think about the prompt. At the end of the planning minute, students were instructed to begin writing. After three minutes, time was called, students stopped writing, and the papers were collected.
CBM scoring. Each writing sample was transcribed and checked by another reader for accuracy. Three trained scorers evaluated the three-minute drafts: two graduate students and one advanced undergraduate student. In addition to having completed coursework in educational assessment, the scorers received an additional three-hour training that included (a) reading and discussing an article about written expression CBMs (Benson & Campbell, 2009) and (b) practicing the scoring procedures on 25 writing samples, following a detailed scoring procedure derived from the Aimsweb scoring guidelines (Powell-Smith & Shinn, 2004), previous research (Gansle et al., 2002; Gansle, VanDerHeyden, Noell, Resetar, & Williams, 2006; Minner, Prater, Sullivan, & Gwaltney, 1989), and additional guidelines based on rules of grammar and punctuation (Strauss, 2008). The scorers' results on the training samples were compared to the first author's scores, and discrepancies were resolved through discussion.
Scorers individually evaluated the samples for TWW, WSC, CWS, CPM, DW, and LW. Table 1 includes descriptions of each of the scores and an example of the scoring procedure. Scorers also tallied the number of incorrect word sequences, which was used to calculate the percentage of correct sequences. All writing samples were scored twice, and the average of the scores was used in further analyses.
A variety of statistical methods were employed to analyze the collected data. Interscorer reliability for the six fluency scores was determined by calculating the Pearson correlation coefficient. Interscorer reliability ranged from .90 to .99 (see Table 2), which is consistent with extant research (Gansle et al., 2002).
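The Pearson correlation used here as an interscorer reliability index is straightforward to compute. A minimal sketch in Python (argument names are illustrative; each list holds one scorer's values for the same set of writing samples):

```python
from statistics import mean

def pearson_r(scorer_a, scorer_b):
    """Pearson correlation between two scorers' values for the same
    writing samples -- a simple interscorer reliability index."""
    ma, mb = mean(scorer_a), mean(scorer_b)
    # Sum of cross-products of deviations (unnormalized covariance).
    cov = sum((a - ma) * (b - mb) for a, b in zip(scorer_a, scorer_b))
    # Sums of squared deviations for each scorer.
    var_a = sum((a - ma) ** 2 for a in scorer_a)
    var_b = sum((b - mb) ** 2 for b in scorer_b)
    return cov / (var_a * var_b) ** 0.5
```

Values near 1.0, such as the .90 to .99 range reported here, indicate the two scorers ranked and spaced the samples almost identically.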
Statistical inference focused on hypothesis testing to explore a series of bivariate associations between key covariates and writing outcomes. A combination of t-tests and one-way analyses of variance (ANOVA) was employed to compare written expression CBM scores across subgroups (e.g., gender or grade). In addition, a combination of Chi-Square tests and Fisher's exact tests was used to analyze bivariate associations between subgroups and written expression CBM scores. In general, the Chi-Square test was employed for such comparisons, except in scenarios with small counts or an imbalance in comparison groups, where Fisher's exact test was the chosen alternative. A significance level of .05 was used for all statistical testing in this exploratory analysis.
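The one-way ANOVA comparisons reduce to an F statistic contrasting between-group and within-group variability. A minimal sketch in Python (not the authors' code; group contents are illustrative, e.g., fall TWW scores for each grade):

```python
from statistics import mean

def one_way_anova_F(*groups):
    """F statistic for a one-way ANOVA comparing group means."""
    all_scores = [x for g in groups for x in g]
    grand = mean(all_scores)
    n, k = len(all_scores), len(groups)
    # Between-group sum of squares: how far group means sit from the grand mean.
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: spread of scores around their own group mean.
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df_between, df_within = k - 1, n - k
    return (ss_between / df_between) / (ss_within / df_within)
```

A large F indicates the subgroup means (e.g., by gender or grade) differ by more than within-group variability alone would suggest.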
It is important to note that there are some variations in sample size across the results tables. Data were collected on school children across multiple time points, with student absenteeism responsible for missing values at some time points. The authors assume that data are missing completely at random (Little & Rubin, 2002). That is to say, not only was there no specific mechanism by which students were made unavailable for testing (such as a competing event), but the instances where students were not assessed occurred independent of the fact that they were being studied. More concretely, the assumption is that a student's absence on any particular testing date was unrelated to how that student would have performed.
In addition to traditional bivariate analysis, the scores at each time point, along with changes in fluency scores across time points, were used in a multivariate cluster analysis (Rencher & Christensen, 2012) to explore and form distinct clusters of similar observations. Cluster analysis is a popular technique for exploring multivariate data in a framework outside of traditional statistical testing. Many clustering methods exist, all based on different criteria for determining associations between data values. The authors focused on distance-based clustering methods, fit several different clustering algorithms to the data, and explored several different cluster solutions. Distance-based clustering is generally preferred in scenarios where cluster groupings are formed for quantitative data that range across a continuum and there is an advantage in creating groups whose association reflects geometric proximity. These cluster solutions were explored with the original data as well as with centered and scaled versions of the data. While many techniques were explored, the results presented here come from Ward's method with unstandardized original data values (not centered or scaled). The decision to use this specific solution centered on its clinical interpretability. Specifically, the five-cluster solution provided by Ward's method was most interpretable in terms of student ability and timeline. JMP Pro 11 software was used for all statistical analyses.
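Ward's method merges, at each step, the two clusters whose union produces the smallest increase in within-cluster sum of squares. A naive greedy sketch in Python illustrates the criterion (not the authors' implementation; each point would be one student's vector of scores and change scores):

```python
def ward_cluster(points, k):
    """Greedy agglomerative clustering with Ward's minimum-variance criterion.
    points: list of equal-length numeric tuples; k: desired number of clusters.
    """
    clusters = [[p] for p in points]

    def sse(cluster):
        # Within-cluster sum of squared deviations from the centroid.
        d = len(cluster[0])
        centroid = [sum(p[j] for p in cluster) / len(cluster) for j in range(d)]
        return sum((p[j] - centroid[j]) ** 2 for p in cluster for j in range(d))

    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # Ward's criterion: increase in total SSE if i and j merge.
                increase = (sse(clusters[i] + clusters[j])
                            - sse(clusters[i]) - sse(clusters[j]))
                if best is None or increase < best[0]:
                    best = (increase, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters.pop(j)
    return clusters
```

In practice an optimized implementation such as `scipy.cluster.hierarchy.linkage(X, method='ward')` would be used; the sketch above only makes the merge criterion explicit.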
Differences in CBM Scores at Fall Administration
A series of t-tests was conducted with the fall three-minute written expression CBM scores to determine if there were differences based on grade level, gender, special education identification, type of handwriting style used, or teacher handwriting requirement (see Table 3). Third and fourth grade students demonstrated similar performance in terms of the number of words they produced (i.e., TWW). However, fourth graders wrote significantly more correct word sequences (t(1) = 3.98, p = .045). In addition, fourth grade students were more accurate regardless of how much they wrote, with a greater percentage of correctly spelled words (t(1) = 16.29, p < .001) and percentage of correct word sequences (t(1) = 9.81, p = .002).
The amount written varied by gender. Females, on average (M = 30.4, SD = 12.0), produced significantly more words than males (M = 26.0, SD = 11.0; t(1) = 18.67, p < .0001). Females also wrote more correctly spelled words, correct word sequences, different words, and long words, and they included more correct punctuation. The third and fourth grade females also had higher accuracy, with significantly greater PWSC and PCWS than males.
With the exception of CPM, students not receiving special education services scored higher than students who received special education services. On average, students not receiving special education services wrote about 10 more words in the three-minute period. Students receiving special education services wrote samples with more errors in grammar and spelling; they had an average PCWS of 57.2% (SD = 22.4%), while those not receiving special education services averaged 70.1% (SD = 18.8%).
While there were not significant differences in accuracy (i.e., PWSC, PCWS) between students using print and students using cursive, the amount of writing varied by handwriting script. Students had a choice of handwriting style, manuscript or cursive, for the assessment, even if their general education teacher typically required cursive on classroom assignments. The majority (n = 256) chose to write in manuscript during the fall writing assessment. Students who used manuscript wrote significantly more words (TWW; M = 29.4, SD = 11.9) than those who used cursive (M = 22.7, SD = 9.0). Students using manuscript also had higher scores on WSC, CWS, and DW.
The results of the t-tests based on the classroom handwriting requirement indicate significant differences between the groups in the number of words written (t(1) = 2.28, p = .023), different words (t(1) = 2.52, p = .012), accuracy of spelling of the words that were written (t(1) = 2.26, p = .025), and accuracy of correct word sequences (t(1) = 2.70, p = .007). Students in classrooms that did not require cursive in the fall wrote more words overall than students in classrooms requiring cursive. However, students in cursive classrooms were more accurate in what they wrote in terms of spelling and grammar.
Patterns of Writing Scores During the School Year
While a wide range of demographic variables was collected, some of the intended subgroup analyses were not performed due to small n values, the potential for imbalance when examining writing scores over all three assessment administrations, and the relative homogeneity of this sample. Therefore, the focus is on indicators of performance and not simply on comparisons across gender, grade level, or disability. To avoid a series of potentially underpowered and less informative single-item analyses, a cluster analysis was conducted to group participants based on similar performances on the writing assessment. Students' scores at all three time points, as well as differences in scores across time points, were used for the analysis. After exploring multiple solutions, a five-group cluster solution was most interpretable for this sample. A graph depicting each group's scores on three commonly used writing indices (TWW, WSC, and CWS) over the three administrations is shown in Figure 1.
One cluster of students demonstrated low fall scores but a large increase in scores during the school year (Low Initial, High Change; LIHC). For example, on average this group wrote 18.5 words in the fall and increased by about 25 words to average 43.6 words written in the spring. The second cluster included students with low initial scores in the fall who demonstrated a more moderate score increase during the year (Low Initial, Medium Change; LIMC). The average fall TWW score for this second group, 19.6 words written, was similar to the LIHC cluster, but LIMC students increased by less than 12 words to an average of 31.2 in the spring. Other writing assessment scores followed a similar pattern for students in these two clusters.
A third cluster included students whose scores on the initial assessment were in the middle of the range and who then demonstrated a large increase during the year (Medium Initial, High Change; MIHC). On average, MIHC students initially wrote about 15 more words in the fall than the LIHC cluster, for an average TWW of 33.7, and demonstrated a large increase of almost 22 words. The fourth cluster of students also had average scores in the middle of the range on the fall assessment, but they demonstrated a mixed pattern of growth that varied by score (Medium Initial, Variable Change; MIVC). While this MIVC group demonstrated little change in the total number of words written (from an average of 31.5 in the fall to 32.7 in the spring), they demonstrated a greater, yet still modest, increase on scores related to spelling and grammar fluency (i.e., WSC, CWS). For instance, the average number of grammatically correct sequences increased from 16.4 in the fall to 24.4 in the spring, a gain of eight correct word sequences.
The final cluster included students who demonstrated the highest scores initially but experienced very little change in scores during the school year (High Initial, Small Change; HISC). Specifically, the average TWW score for this HISC group changed from 40.5 in the fall to just 41.0 in the spring. On average, students in clusters with a large change in fluency during the school year had spring scores similar to or higher than those of students in this initially high HISC group: the LIHC students had average spring TWW scores of 43.6, and the MIHC group had the highest spring scores with an average TWW of 55.1, compared with the HISC group's spring average of 41.0 on TWW.
Cluster Differences in Student Characteristics
The percentages of students included in each cluster based on their characteristics are shown in Table 4. Fisher's exact tests were used to determine if there were significant associations between cluster status and gender, grade level, or special education services (see Table 4). If a subgroup were evenly distributed across clusters, 20% of that subgroup would fall in each cluster. The results indicated no significant differences between the clusters based on gender (p > .05); males and females were evenly distributed across the five clusters. However, there were significant differences based on grade level (p = .003). While Fisher's exact test only illustrates broad associations, note that more third graders (26.80%) than fourth graders (10.06%) are in the MIVC cluster, which indicates that although these students did not write many more words over the three administrations, what they did write included more correctly spelled words and more correct grammatical sequences over time. Special education service participation also varied significantly across clusters (p = .04). Students receiving special education services tended to be in clusters with lower fall scores (22.73% in LIHC; 31.82% in LIMC) or in the cluster with more change in grammar and spelling than in overall words written (36.36% in MIVC). Students not receiving special education services were generally evenly distributed across clusters, with a slightly greater-than-expected percentage in the LIMC cluster (26.67%) and a slightly lower percentage (16.67%) in the MIVC cluster.
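For a 2x2 table, the Fisher's exact test used in these comparisons can be computed directly from the hypergeometric distribution. A minimal two-sided sketch in Python (not the authors' software; the table values are illustrative, e.g., special education status crossed with membership in one cluster versus the rest):

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test p-value for the 2x2 table [[a, b], [c, d]].

    Sums the probabilities of all tables with the same margins whose
    probability does not exceed that of the observed table.
    """
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def p_table(x):
        # Hypergeometric probability of observing x in the top-left cell.
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = p_table(a)
    lo = max(0, row1 + col1 - n)   # smallest feasible top-left count
    hi = min(row1, col1)           # largest feasible top-left count
    # Small tolerance so tables exactly as likely as the observed one count.
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))
```

Because it enumerates exact table probabilities rather than relying on a large-sample approximation, this test remains valid for the small cell counts that ruled out Chi-Square tests in some of these subgroup comparisons.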
Results of a Chi-Square test indicated significant differences in cluster membership depending on the handwriting style students used across the assessment administrations (manuscript only, a mix of manuscript and cursive, or cursive only; χ²(2) = 25.91, p = .001). Students who consistently used manuscript were generally evenly distributed across the clusters (between 15.66% and 23.23%). Almost half (43.9%) of the students who consistently wrote in cursive, and about one third (33.73%) of students who wrote in cursive for at least one administration, were in the same cluster. These students began with lower scores and demonstrated only a medium increase in scores during the school year.
When examining the percentages of students in each cluster based on the classroom requirement of writing in cursive all year, part of the year, or not at all, there were also significant differences in cluster composition (χ²(2) = 16.40, p = .04). Over a third (37.62%) of students required to use cursive all year were in the cluster with the second lowest fall scores and only a medium increase during the year. Comparatively, students not required to use cursive and students required to use it for only part of the school year were more evenly distributed (18.03% and 23.36%, respectively).
This study examined the written expression CBM scores of third and fourth grade students at three points during the school year. Specifically, this study extended the research to further examine gender, handwriting style, and written expression CBM scores, and the patterns of change over a school year in clusters of scores.
In the fall, students in the third and fourth grades wrote about the same number of words, but fourth graders were more accurate in what they produced. Third graders were more likely to be in the cluster with a pattern of school-year performance indicating that although the increase in the overall number of words was modest, the spelling and grammar of those words improved. When considering gender differences, fall scores alone indicate that females were more fluent and accurate with writing; however, over time males and females were generally evenly distributed across cluster subgroups.
Consistent with previous research findings, at the fall administration students with disabilities had lower scores on most fluency measures than those without disabilities (Watkinson & Lee, 1992), including measures of vocabulary (i.e., DW, LW). In addition, students with disabilities demonstrated significantly lower percentages of correctly spelled words and correct grammatical sequences, indicating that even what they did produce included significantly more errors than the writing of those without disabilities. Patterns of performance during the school year indicate students with disabilities were more likely to be in the clusters with lower initial scores or in the cluster (MIVC) that did not increase in fluency but did show an increase in accuracy.
This study analyzed the writing produced when students generated original ideas in response to a story prompt. The results indicate that students who used manuscript produced more writing in the three-minute time limit, but the spelling and grammatical accuracy of what was written was not significantly higher than that of students who used cursive. This pattern of performance was not found in an analysis of writing produced in a copying task (Graham et al., 1998). These differences suggest that written expression production rates may vary by handwriting script depending on the task demands. Additional research is needed to evaluate differences in writing production depending on handwriting script, experience with that script, and writing task expectations. When considering performance over time, additional patterns emerged related to cursive use. Students who consistently wrote in cursive were overrepresented in clusters with low initial fluency scores and made less growth than other clusters. It is not possible from this study to determine whether that limited growth is due to the overall writing skills of the cluster or to specific difficulties related to producing cursive writing. Additional research is needed comparing timed and untimed cursive and manuscript writing samples from elementary school writers.
These data support previous findings that females produce more writing than males (Graham et al., 1998), and specifically write more during CBMs. Unlike in previous research (e.g., Malecki & Jewell, 2003), females in this study were also more accurate. When clusters of performance on the CBMs over the year were considered, there were no significant differences in cluster membership related to gender. Additional research is needed in this area, but this study further supports caution in drawing conclusions from norms that do not account for gender (Malecki & Jewell, 2003), especially when only one assessment is considered. Evaluating patterns of several scores over time may provide a more accurate understanding of students' writing skills and progress than a single fluency score.
The results of this study suggest several considerations for written expression CBM: (1) examine multiple scores when evaluating student writing over time; (2) account for the handwriting script a student used, the amount of instruction received in that script, and the ease with which the student uses it; and (3) consider alternative types of assessments for students with the highest scores at the beginning of the school year.
Examining student scores in isolation can provide insight into student performance. In particular, CWS is a valuable indicator of student performance and is moderately correlated with other, more robust measures of student writing (McMaster & Espin, 2007). This score can be used to create meaningful Individual Education Program goals for students receiving special education services to evaluate student progress and performance toward an annual goal (Hessler & Konrad, 2008). However, it is also important to consider multiple scores and particularly not to rely on the number of words a student wrote. For many younger students in this study, the number of words might not increase but the accuracy of spelling and grammar might improve. Considering percentages of correctly spelled words or correct word sequences will also provide a better understanding of the difficulties faced by students as well as their improvements.
The issue of teaching handwriting has received considerable attention in education, with arguments both for and against cursive instruction. The Common Core Standards for Language Arts (National Governors Association Center for Best Practices, Council of Chief State School Officers, 2010) do not include specific goals related to handwriting production. However, some states now require handwriting instruction, while other states have removed cursive from their required curriculum (Shapiro, 2013). Individual teachers express varying views on whether cursive should be taught and may not have received extensive instruction on how to teach handwriting in their preparation programs (Graham et al., 2008). The results of this study indicate those students using cursive, and those being regularly required to use cursive, are more likely to have lower fluency scores. This may be due to difficulty integrating emerging handwriting skills and spelling and grammar skills at the same time (Kandel & Perret, 2015). When considering writing scores, teachers should make note of the script used. If a student had low scores in cursive, comparing scores of a manuscript sample and considering differences in the production and accuracy may be warranted.
Finally, four of the five clusters included patterns of scores indicating some progress during the school year. As a group, students who wrote the most words and produced generally grammatically correct writing in the fall demonstrated little change in scores during the school year. This pattern is similar to reading CBMs, in which the highest performing students demonstrate limited change in scores during the year (e.g., Silberglitt & Hintze, 2005). Continued use of the brief CBM may not be beneficial in providing instructional information or monitoring performance for these students. Instead, longer samples scored with primary trait rubrics (e.g., Culham, 2003) may identify other areas of need related to sentence structure or idea development and may be of greater benefit for planning instruction and monitoring student performance.
Limitations and Directions for Future Research
The current study only examined the writing samples of students in third and fourth grades. More third grade than fourth grade students demonstrated a pattern of grammar and spelling improvement without an increase in overall fluency. Additional research is needed to examine the writing samples of younger and older students to determine if these differences in growth patterns are related to grade level. In addition, the sample was limited in terms of race and ethnicity, as indicated by district-level statistics. Research is specifically needed with a more diverse sample, particularly since CBM directly measures student skills and can be used to evaluate student progress, which may be a useful alternative to other standardized tests (Fore, Burke, & Martin, 2006).
Although students with disabilities participated in this study, further investigation of the progress and performance trends of students with disabilities and those at risk for disability identification is warranted. Increasingly, states are using Multi-Tiered Systems of Support, or similar models, to assist in identifying students with learning disabilities. Curriculum-based measurement is often one of the integral assessments used to determine whether a student is making progress in an intervention, as well as to refer students for special education evaluation (Hoover, 2011; Shinn, 2007). More research is needed specifically examining patterns of performance for students with writing difficulties and with learning disabilities in written expression.
This study did not examine all possible written expression CBM scores (see Gansle et al., 2002; Jewell & Malecki, 2005) related to student performance. Additional research is needed comparing students' written expression CBM scores and legibility, similar to previous studies with older students (e.g., Tindal & Parker, 1989). Samples in this study were transcribed before being scored, which controlled for legibility and handwriting style influencing the scores. Investigating legibility would add to understanding about handwriting and CBM scores.
Furthermore, the results of this study are descriptive and do not indicate a causal relationship. Cluster analysis is data-set specific and traditionally an exploratory technique; follow-up studies are needed to refine and confirm what is presented here. Although the data appear to be good candidates for latent growth curve modeling, missing data at various time points made traditional cluster analysis a better fit for this early exploration. In addition, this study did not include detailed information about the amount or type of instruction provided during the school year in handwriting and written expression. Additional research is needed to compare student production in both manuscript and cursive to examine whether some students produce more during the three minutes in one handwriting type than the other, and to consider the instruction provided over time in addition to overall changes in student scores.
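To make the exploratory technique concrete, a brief sketch of trajectory clustering follows. The article does not report which clustering algorithm was used (Rencher & Christensen, 2012, describe several), so the plain k-means procedure and the score trajectories below are illustrative assumptions only, not the study's method or data.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal k-means over (fall, winter, spring) score vectors.
    Returns a cluster label for each point. Illustrative sketch only."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # initialize centers from the data
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest center by squared Euclidean distance.
        labels = [
            min(range(k),
                key=lambda c: sum((p[d] - centers[c][d]) ** 2
                                  for d in range(len(p))))
            for p in points
        ]
        # Update step: move each center to the mean of its members.
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centers[c] = tuple(sum(m[d] for m in members) / len(members)
                                   for d in range(len(members[0])))
    return labels

# Made-up CWS trajectories (fall, winter, spring): a low-initial/low-growth
# group and a higher-growth group -- NOT the study's data.
trajectories = [(10, 12, 13), (11, 12, 14), (9, 11, 12),
                (20, 28, 35), (22, 30, 36), (19, 27, 34)]
labels = kmeans(trajectories, k=2)
print(labels[0] == labels[1] == labels[2])  # True: similar trajectories group
print(labels[3] == labels[4] == labels[5])  # True
print(labels[0] != labels[3])               # True: the two groups separate
```

Grouping students by the shape of their score trajectories, rather than by a single fluency score, is what allows patterns such as "low initial, high change" to emerge.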
Written expression CBM can be a useful tool in monitoring student progress and screening for writing difficulty. CBM is limited in the areas of writing assessed and is based on the brief, initial draft. Although scores such as CWS have consistently been shown to be correlated with other writing measures, those correlations tend to be moderate (McMaster & Espin, 2007). For high-achieving students, writing CBM scores may not capture improvements in student writing. When identifying students at-risk for writing difficulty, factors such as handwriting script used and gender may lead teachers and school-based teams to incorrect conclusions when evaluating students' performances. Using multiple measures of writing performance and considering those factors that may lead to lower writing fluency will better assist teachers in identifying and meeting students' writing needs.
Benson, B. J., & Campbell, H. M. (2009). Assessment of student writing with curriculum-based measurement. In G. A. Troia (Ed.), Instruction and assessment for struggling writers: Evidence-based practices. New York: The Guilford Press.
Chen, Y. (2002). Assessment of reading and writing samples of deaf and hard of hearing students by curriculum-based measurements (Doctoral dissertation). Retrieved from ProQuest Information and Learning Company. UMI Number: 3069192.
Christ, T. J., Scullin, S., Tolbize, A., & Jiban, C. L. (2008). Implications of recent research: Curriculum-based measurement of math computation. Assessment for Effective Intervention, 33, 198-205. doi:10.1177/1534508407313480
Culham, R. (2003). 6+1 traits of writing: The complete guide, grades 3 and up. New York: Scholastic Professional Books.
Deno, S. L. (2003). Developments in curriculum-based measurement. Journal of Special Education, 37, 184-192. doi:10.1177/00224669030370030801
Deno, S. L., Marston, D., & Mirkin, P. (1982). Valid measurement procedures for continuous evaluation of written expression. Exceptional Children, 48, 368-371.
Deno, S. L., Mirkin, P. K., & Marston, D. (1980). Relationships among simple measures of written expression and performance on standardized achievement tests. The University of Minnesota, Minneapolis, MN: Institute for Research on Learning Disabilities. Retrieved from ERIC database. (ED197509)
Espin, C., Shin, J., Deno, S. L., Skare, S., Robinson, S., & Benner, B. (2000). Identifying indicators of written expression proficiency for middle school students. Journal of Special Education, 34, 140-153. doi:10.1177/002246690003400303
Fearrington, J. Y., Parker, P. D., Kidder-Ashley, P., Gagnon, S. G., McCane-Bowling, S., & Sorrell, C. A. (2014). Gender differences in written expression curriculum-based measurement in third- through eighth-grade students. Psychology in the Schools, 51, 85-96. doi:10.1002/pits.21733
Fore, C., Burke, M. D., & Martin, C. (2006). Curriculum-based measurement: An emerging alternative to traditional assessment for African American children and youth. The Journal of Negro Education, 75(1), 16-24.
Gansle, K. A., Noell, G. H., VanDerHeyden, A. M., Naquin, G. M., & Slider, N. J. (2002). Moving beyond total words written: The reliability, criterion validity, and time cost of alternate measures for curriculum-based measurement in writing. School Psychology Review, 31, 477-497.
Gansle, K. A., Noell, G. H., VanDerHeyden, A. M., Slider, N. J., Hoffpauir, L. D., Whitmarsh, E. L., & Naquin, G. M. (2004). An examination of the criterion validity and sensitivity to brief intervention of alternate curriculum-based measures of writing skill. Psychology in the Schools, 41, 291-300. doi:10.1002/pits.10166
Gansle, K. A., VanDerHeyden, A. M., Noell, G. H., Resetar, J. L., & Williams, K. L. (2006). The technical adequacy of curriculum-based and rating-based measures of written expression for elementary school students. School Psychology Review, 35, 435-450.
Graham, S., Berninger, V. A., & Weintraub, N. (1998). The relationship between handwriting style and speed and legibility. The Journal of Educational Research, 91(5), 290-296. doi:10.1080/00220679809597556
Graham, S., Harris, K. R., Mason, L., Fink-Chrzempa, B., Moran, S., & Saddler, B. (2008). How do primary grade teachers teach handwriting? A national survey. Reading and Writing, 21, 49-69. doi:10.1007/s11145-007-9064-z
Hessler, T., & Konrad, M. (2008). Curriculum-based measurement to drive IEPs and instruction in written expression. Teaching Exceptional Children, 41(2), 28-37.
Hoover, J. J. (2011). Response to Intervention Models: Curricular implications and interventions. New York: Pearson.
Hosp, M. K., Hosp, J. L., & Howell, K. W. (2006). The ABCs of CBM: A practical guide to curriculum-based measurement. New York: The Guilford Press.
Jewell, J., & Malecki, C. K. (2005). The utility of CBM written language indices: An investigation of production-dependent, production-independent, and accurate-production scores. School Psychology Review, 34(1), 27-44.
Kandel, S., & Perret, C. (2015). How do movements to produce letters become automatic during writing acquisition? Investigating the development of motor anticipation. International Journal of Behavioral Development, 39, 113-120. doi:10.1177/0165025414557532
Lopez, F. A., & Thompson, S. S. (2011). The relationship among measures of written expression using curriculum-based measurement and the Arizona Instrument to Measure Standards (AIMS) at the middle school level. Reading & Writing Quarterly, 27, 129-152.
Malecki, C. K., & Jewell, J. (2003). Developmental, gender, and practical considerations in scoring curriculum-based measurement writing probes. Psychology in the Schools, 40, 379-390. doi:10.1002/pits.10096
Marston, D. (1989). Curriculum-based measurement: What is it and why do it? In M. R. Shinn (Ed.), Curriculum-based measurement: Assessing special children (pp. 112-156). New York: Guilford Press.
McMaster, K., & Espin, C. (2007). Technical features of curriculum-based measurement in writing: A literature review. The Journal of Special Education, 41, 68-84.
McMaster, K. L., & Campbell, H. (2008). New and existing curriculum-based writing measures: Technical features within and across grades. School Psychology Review, 37, 550-566.
Mercer, S. H., Martinez, R. S., Faust, D., & Mitchell, R. R. (2012). Criterion-related validity of curriculum-based measurement in writing with narrative and expository prompts relative to passage copying speed in 10th-grade students. School Psychology Quarterly, 27, 85-95. doi:10.1037/a0029123
Minner, S., Prater, G., Sullivan, C., & Gwaltney, W. (1989). Informal assessment of written expression. Teaching Exceptional Children, 22(2), 76-79.
National Center for Education Statistics. (2012). The Nation's Report Card: Writing 2011. Institute of Education Sciences, U.S. Department of Education, Washington, D.C. Retrieved from https://nces.ed.gov/nationsreportcard/pdf/main2011/2012470.pdf
National Commission on Writing. (2004). Writing: A ticket to work ... or a ticket out: A survey of business leaders. Retrieved from www.collegeboard.com
National Governors Association Center for Best Practices, & Council of Chief State School Officers (2010). Common Core State Standards for English Language Arts. Retrieved from http://www.corestandards.org/ELA-Literacy/
Parker, R. I., Tindal, G., & Hasbrouck, J. (1991). Countable indices of writing quality: Their suitability for screening-eligibility decisions. Exceptionality, 2, 1-17. doi:10.1080/09362839109524763
Powell-Smith, K. A., & Shinn, M. R. (2004). Administration and scoring of written expression curriculum-based measurement (WE-CBM) for use in general outcome measurement. Eden Prairie, MN: Edformation Inc.
Rencher, A. C., & Christensen, W. F. (2012). Methods of multivariate analysis. New York: Wiley.
Little, R. J. A., & Rubin, D. B. (2002). Statistical analysis with missing data (2nd ed.). New York: Wiley.
Salahu-Din, D., Persky, H., & Miller, J. (2008). The Nation's Report Card: Writing 2007 (NCES 2008-468). National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education, Washington, D.C. Retrieved from https://nces.ed.gov/nationsreportcard/pdf/main2007/2008468.pdf
Shapiro, T. R. (2013, April 4). Cursive handwriting is disappearing from public schools. The Washington Post. Retrieved from https://www.washingtonpost.com
Shinn, M. R. (2007). Identifying students at risk, monitoring performance, and determining eligibility within Response to Intervention: Research on educational need and benefits from academic intervention. School Psychology Review, 36, 601-617.
Silberglitt, B., & Hintze, J. (2005). Formative assessment using CBM-R cut scores to track progress toward success on state-mandated achievement tests: A comparison of methods. Journal of Psychoeducational Assessment, 23, 304-325. doi:10.1177/073428290502300402
Strauss, J. (2008). The blue book of grammar and punctuation: An easy-to-use guide with clear rules, real-world examples, and reproducible quizzes. San Francisco: Jossey-Bass.
Supon, V. (2009). Cursive writing: Are its last days approaching? Journal of Instructional Psychology, 36, 357-359.
Tindal, G., & Parker, R. (1989). Assessment of written expression for students in compensatory and special education programs. The Journal of Special Education, 23, 169-183. doi:10.1177/002246698902300204
Troia, G. A. (2006). Writing instruction for students with learning disabilities. In C. A. MacArthur, S. Graham, & J. Fitzgerald (Eds.), Handbook of writing research (pp. 324-336). New York: Guilford Press.
U.S. Department of Education, National Center for Education Statistics. (2015). Digest of Education Statistics, 2013 (NCES 2015-011), Table 204.30.
Watkinson, J. T., & Lee, S. W. (1992). Curriculum-based measures of written expression for learning-disabled and nondisabled students. Psychology in the Schools, 29, 184-191.
Wayman, M. M., Wallace, T., Wiley, H. I., Ticha, R., & Espin, C. A. (2007). Literature synthesis on curriculum-based measurement in reading. The Journal of Special Education, 41, 85-120. doi:10.1177/00224669070410020401
Stacy L. Weiss
East Carolina University
Solomon Schechter Day School of Greater Boston
Author note: Preparation for this paper was supported in part by a grant from the Proffitt Endowment through Indiana University-Bloomington's School of Education Research and Development Office.
Address correspondence to: Stacy L. Weiss, Department of Special Education, Foundations, and Research, Mailstop 504, East Carolina University, Greenville, NC 27858. E-mail: email@example.com
Caption: Figure 1. A cluster analysis was completed to identify patterns of student performance on the writing curriculum-based measurement during the school year. The five-cluster solution is shown with trend lines for the three most commonly used scores: total words written, words spelled correctly, and correct word sequences.
Table 1
Description of indices used in scoring of students' writing curriculum-based measurements

Total Words Written (TWW). Overall fluency: the total number of individually distinguishable words, regardless of spelling. Does not include numerals.
  Sample scoring: "We playd tag for 2 hors." TWW = 5

Words Spelled Correctly (WSC). Fluent spelling: the number of words spelled correctly for their context.
  Sample scoring: "We playd tag for 2 hors." WSC = 3

Correct Word Sequences (CWS). Grammar: the number of correct adjacent words, or word and ending punctuation. Includes spelling and capitalization of sentences and proper nouns.
  Sample scoring: "My friend Bill won./ he are rlly fast./" CWS = 5

Correct Punctuation Marks (CPM). Grammar: the number of correctly used punctuation marks, including ending punctuation, commas, and quotation marks.
  Sample scoring: "I was fast, but he was faster. He won!" CPM = 3

Long Words (LW). Vocabulary: the number of words with more than 7 letters.
  Sample scoring: "I ran to the basketball goal. Then he ran to me." LW = 1

Different Words (DW). Vocabulary: the number of words, counting any repeated word only once.
  Sample scoring: "I ran to the basketball goal. Then he ran to me." DW = 5

Percentage of Words Spelled Correctly (PWSC). Accuracy of spelling: the number of words spelled correctly divided by the total number of words.
  Sample scoring: "We playd tag for 2 hors." PWSC = 60%

Percentage of Correct Word Sequences (PCWS). Accuracy of grammar: the number of correct word sequences divided by the total number of word sequences.
  Sample scoring: "My friend Bill won.| he are rlly fast." PCWS = 56%
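Because the fluency and accuracy indices in Table 1 are simple counts, a minimal sketch may help clarify the scoring rules. The function names and the hard-coded word list below are illustrative assumptions, not the scoring procedure or software used in the study; a real scorer would judge spelling in context rather than against a fixed lexicon.

```python
def total_words_written(sample: str) -> int:
    """TWW: count individually distinguishable words, regardless of
    spelling. Numerals are excluded, per the rules in Table 1."""
    tokens = [t.strip(".,!?\"'") for t in sample.split()]
    return sum(1 for t in tokens if t and not t.isdigit())

def words_spelled_correctly(sample: str, lexicon: set) -> int:
    """WSC: count words spelled correctly; here approximated by
    membership in an illustrative word list."""
    tokens = [t.strip(".,!?\"'").lower() for t in sample.split()]
    return sum(1 for t in tokens if t and not t.isdigit() and t in lexicon)

def pwsc(sample: str, lexicon: set) -> float:
    """PWSC: percentage of words spelled correctly (WSC / TWW * 100)."""
    return 100 * words_spelled_correctly(sample, lexicon) / total_words_written(sample)

# Sample sentence from Table 1; "playd" and "hors" are misspellings.
LEXICON = {"we", "tag", "for"}  # illustrative stand-in for a dictionary check
sample = "We playd tag for 2 hors."
print(total_words_written(sample))               # 5 (the numeral "2" is excluded)
print(words_spelled_correctly(sample, LEXICON))  # 3
print(pwsc(sample, LEXICON))                     # 60.0
```

The counts reproduce the Table 1 sample scores: five words written, three spelled correctly, and a PWSC of 60%.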
Table 2
Interscorer Reliability

                           Fall   Winter   Spring
Total Words Written         .99     .99      .99
Words Spelled Correctly     .99     .99      .99
Correct Word Sequences      .98     .98      .98
Different Words             .98     .99      .98
Long Words                  .91     .93      .90
Correct Punctuation Marks   .96     .96      .96

Table 3
Fall writing curriculum-based measurement indices (means and standard deviations) for grade level, gender, special education services, handwriting used during the assessment, and classroom teachers' handwriting requirement

                        TWW             WSC             CWS             DW              CPM
Grade
  Third (n = 154)       27.8 (10.0)     23.4 (10.0)     19.8 (10.3)*    21.6 (7.0)      1.7 (1.8)
  Fourth (n = 170)      28.1 (13.0)     24.5 (11.9)     22.2 (11.9)*    21.5 (8.2)      2.2 (2.4)
Gender
  Male (n = 176)        26.0 (11.0)***  21.9 (10.2)***  18.8 (10.2)***  20.2 (7.3)***   1.7 (1.7)*
  Female (n = 148)      30.4 (12.0)***  26.5 (11.5)***  23.8 (11.8)***  23.2 (7.8)***   2.3 (2.6)*
Special Education Services
  No (n = 301)          28.7 (11.6)***  24.8 (10.9)***  21.9 (11.1)***  12.0 (7.6)***   2.1 (2.2)
  Yes (n = 23)          18.8 (7.7)***   14.0 (6.4)***   10.7 (5.9)***   15.2 (5.7)***   1.2 (2.0)
Handwriting Used in Fall
  Print (n = 256)       29.4 (11.9)***  25.2 (11.3)***  22.0 (11.5)**   22.4 (7.8)***   2.1 (2.3)
  Cursive (n = 68)      22.7 (9.0)***   19.6 (8.6)***   17.6 (9.3)**    18.4 (6.2)***   1.6 (1.6)
Cursive Requirement in Fall
  No (n = 199)          29.5 (11.9)*    25.1 (11.4)     21.7 (11.7)     22.6 (7.9)**    2.0 (2.3)
  Yes (n = 102)         26.4 (10.3)*    23.1 (9.6)      20.9 (9.9)      20.4 (6.5)**    2.1 (2.0)

                        LW              PWSC             PCWS
Grade
  Third (n = 154)       1.0 (1.1)       82.7 (12.9)***   64.8 (20.2)**
  Fourth (n = 170)      0.9 (1.0)       86.7 (10.2)***   73.2 (17.5)**
Gender
  Male (n = 176)        0.8 (0.9)**     83.5 (11.7)*     67.2 (19.6)*
  Female (n = 148)      1.1 (1.2)**     86.3 (11.6)*     71.6 (18.7)*
Special Education Services
  No (n = 301)          1.0 (1.1)*      85.5 (11.1)***   70.1 (18.8)***
  Yes (n = 23)          0.5 (0.7)*      75.2 (15.5)***   57.2 (22.4)***
Handwriting Used in Fall
  Print (n = 256)       1.0 (1.1)       84.6 (12.1)      68.6 (19.6)
  Cursive (n = 68)      1.6 (0.9)       85.7 (10.4)      71.6 (17.9)
Cursive Requirement in Fall
  No (n = 199)          0.9 (1.1)       83.8 (12.2)*     67.0 (19.7)**
  Yes (n = 102)         1.0 (1.1)       86.9 (10.9)*     73.2 (18.2)**

Note. The variation in sample size is due to missing data on the specific variable.
* p < .05. ** p < .01. *** p < .001.

Table 4
Percentage of students in each cluster by characteristics

Clusters: Low Initial, High Change (LIHC); Low Initial, Medium Change (LIMC); Medium Initial, High Change (MIHC); Medium Initial, Variable Change (MIVC); High Initial, Small Change (HISC).

Variable                n     LIHC     LIMC     MIHC     MIVC     HISC
Gender
  Male                  176   18.86%   32.00%   13.71%   18.86%   16.57%
  Female                148   18.37%   21.09%   21.09%   17.01%   22.45%
Grade Level
  Third                 154   16.34%   24.84%   13.73%   26.80%   18.30%
  Fourth                170   20.71%   28.99%   20.12%   10.06%   20.12%
Special Education
  No                    301   18.33%   26.67%   18.00%   16.67%   20.33%
  Yes                   23    22.73%   31.82%    4.55%   36.36%    4.55%
Handwriting Used
  Print Only            199   15.66%   20.71%   19.70%   23.23%   20.71%
  Mixed                 83    20.48%   33.73%   14.46%    9.64%   21.69%
  Cursive Only          42    29.27%   43.90%    9.76%    9.76%    7.32%
Cursive Requirement
  None                  61    14.75%   18.03%   14.75%   24.59%   27.87%
  Part-year             138   17.52%   23.36%   16.06%   23.36%   19.71%
  All year              102   16.83%   37.62%   18.81%    9.90%   16.83%

Note. Variation in sample size due to missing data on the specific variable.
Author: Weiss, Stacy L.; Brinkley, Jason; Bock, Josh
Publication: Education & Treatment of Children
Date: May 1, 2019