Impact of a technology-infused middle school writing program on sixth-grade students' writing ability and engagement.
Over the last few decades, efforts to improve writing instruction in schools have employed what is known as the process-oriented approach (Applebee & Langer, 2006; Pritchard & Honeycutt, 2006). This approach focuses on the skills, strategies, and techniques writers use (Collins, Brown, & Newman, 1989), including prewriting, drafting, peer editing, publishing activities, and, more recently, computer technologies such as word processing and Internet research. These steps closely match many of the elements identified by Graham and Perin (2007) in their meta-analysis of effective strategies for improving adolescent writing.
One example of the writing process approach that has been gaining traction in schools is known as the writer's workshop or writing workshop model. In it, teachers take students through the writing process steps several times during a school year, using a variety of genres such as expository writing, persuasive writing, narrative writing, and poetry. This approach places a tremendous demand on teachers to produce original lesson and unit plans throughout the year and to find relevant resources, such as texts that model the genre. Teachers also must develop minilesson activities focused on skills and strategies for each genre taught. Moreover, teachers who are not familiar with or are uncomfortable writing in a particular genre find it especially difficult to conceptualize how to teach students to write in the genre. They also find it challenging to develop their own writing pieces to model the writing process to students.
This approach has been incorporated into learning standards in many districts and states (Patthey-Chavez, Matsumura, & Valdes, 2004) as well as the new Common Core State Standards (English Language Arts Standards, 2011), but there has been little empirical research on the writing process approach, especially at the K-12 level (Pritchard & Honeycutt, 2006).
The most recent National Assessment of Educational Progress in Writing results offer some indication of middle school student performance in writing. Scores show improvement from prior years; the average score was six points higher in 2007 than in 1998 (National Center for Education Statistics, 2008). However, performance in large cities trailed the nation as a whole. Student performance is classified into one of three achievement levels: basic, proficient, and advanced. Using New York City as an example, one quarter of eighth graders scored "proficient" in 2007 compared with 31% nationwide, and one fifth of New York City's eighth grade public school students scored below "basic" compared with 13% nationwide (National Center for Education Statistics), suggesting that fewer students in New York City attained basic writing skills than elsewhere and that more lacked even a general grasp of the writing endeavor.
This article presents results of a study that evaluated the impact of a writing program on middle school students' writing abilities and engagement with writing. The program was developed to support teachers--especially newer teachers in hard-to-staff urban schools--in implementing the process approach to learning to write. The program, Writing Matters, integrates process writing and a writer's workshop approach within units of study that focus on different genres of writing, such as personal narrative, persuasive writing, expository writing, and responding to literature. These units incorporate technological supports such as animations, samples of student writing, and online exercises that allow students to share ideas. In addition, the program provides resources and professional development for teachers.
The study used a quasi-experimental approach to compare students of teachers who used the writing program, Writing Matters (www.writingmatters.org), with students of teachers who used a writer's workshop approach without the program. The research questions were:
1. Does any change in writing ability of sixth-grade students whose teachers participate in the Writing Matters program differ from that of students whose teachers do not participate in the program?
2. Does any change in writing engagement of sixth grade students whose teachers participate in the Writing Matters program differ from that of students whose teachers do not participate in the program?
The following sections describe the research methods used and then present the results organized around the research questions. A discussion of the findings in light of the methodology, literature, and limitations follows, along with implications and conclusions.
The study used a quasi-experimental design to compare the students of two groups of sixth grade teachers: (1) the Writing Matters group, consisting of three English language arts teachers who used the Writing Matters program; and (2) a comparison group, consisting of three English language arts teachers at another school, whose teachers used the writer's workshop approach for writing instruction but did not use the Writing Matters program.
Site Selection and Participant Description
Participating schools. Researchers selected one Writing Matters (WM) school and one comparison school for the study, with the help of colleagues at the organization responsible for Writing Matters. The treatment school was selected from a pool of schools in New York City that had at least three sixth-grade (1) teachers who had used the WM program the prior school year (2006-2007). Next, a comparison school, which used the writer's workshop approach to teach writing (but not WM specifically), was recruited based on how similar it was to the treatment school in context and student composition, elements of schools that research shows can make a difference in student outcomes (see, for example, Konstantopoulos, 2006). By selecting schools that were similar, researchers were attempting to control for potential school effects on changes in students' writing ability and attitudes toward writing.
The comparison school was matched to the treatment school in community setting, student demographic data and percentages of sixth-grade students proficient in English language arts and mathematics, based on state exams. Table 1 displays the characteristics of each school, demonstrating the similarities they share in context and student composition. Both schools were similarly sized and had similar percentages of students from the four ethnic groups that comprised the bulk of their populations. The percentage of proficient students in sixth grade in ELA and mathematics, as measured by the New York State Testing Program, was similar across the two schools in the 3 years examined: percentages are within four percentage points for ELA across the 3 years, and within seven percentage points for mathematics across the 3 years. The only major difference was that the Writing Matters school had slightly more English language learners.
Participating teachers. Six teachers--three from the WM school and three from the comparison school--participated in the study. The teachers from the two schools were matched based on their teaching experience and educational attainment, to reduce the impact of teacher background on study results. Specifically, all teachers had at least 5 years of teaching experience and all had obtained master's degrees in the field of education. Researchers obtained informed consent from all participating teachers. Two of the three participating teachers in the treatment school had at least 1 year of prior experience with WM. The three teachers in the comparison school had never used the WM program but had received training in the writing workshop model through a nationally recognized, New York City-based program.
Participating students. The sample comprised 371 students: 256 from the WM school and 115 from the comparison school. Nine classes from the WM school and four classes from the comparison school participated. In both schools, each class had on average 31 sixth grade students. At the start of the study, no students had prior exposure to Writing Matters, since the program is for middle school students only. Students and their parents or guardians in all classes received written information about the study prior to students' involvement, and were given the option of not participating. At the beginning of data collection, students also listened to a verbal statement about the study tasks they would engage in and how their responses would be confidential. They were again given the option of not participating. Table 2 shows the number of students within each class of each teacher at the two schools.
About the Intervention
Writing Matters is designed to assist teachers in using an instructional model that incorporates process writing (prewriting through publishing) and the workshop approach (whole-group minilesson, independent work time, and sharing) with digital supports. Its curriculum for students in Grades 5-9 contains a roadmap of lessons, technology-based activities that correspond with the lessons, assessment resources to be used by teachers, grade-appropriate writing examples ("mentor texts"), and a tool for publishing student work once completed. Participating teachers typically receive weekly school-based professional development as they implement the program.
For each lesson, there is a 10- to 15-minute whole-group activity led by the teacher, known as the minilesson, which emphasizes a specific skill or strategy that the teacher models, usually with her own writing. This is followed by independent work time, where individual writing and teacher and peer conferencing take place. During the last 5 to 10 minutes of the class, students convene and share their work based on the minilesson and the teacher's observations of students' progress. The program also features supports for performance assessments and student self-assessment.
Writing Matters offers a series of genre study units that cover narrative writing (memoir, short fiction), persuasive writing (editorials), and informational/expository writing (feature articles), as well as literary criticism (response to literature). Genre study provides an opportunity to explore specific authentic text structures in terms of their key features and intentions, while also giving students the chance to make their writing relevant and personal by generating their own topics, developing their own points of view, and expressing themselves coherently by practicing not only their genre-based skills but also the fundamentals of effective writing related to organization, language use, and conventions (Lattimer, 2003). Each unit takes students through the steps of the writing process; Figure 1 shows an example.
Each genre-based unit within Writing Matters contains between 18 and 24 lessons that provide teachers with a roadmap through the entire writing process, from generating ideas to publishing completed pieces of writing in an online publication that may be public or can remain private to the classroom community. Embedded in the lessons are digital resources, such as introductory character-based animations, think alouds, sample student notebooks, and online activities that give students the chance to interact with one another and the teacher about their writing. Ideally, students use computers for planning, researching, drafting, and revising their writing pieces, and, to the degree possible, sharing with peers and publishing online through the Writing Matters ezine. Teachers and students have access to curriculum resources electronically in the Writing Matters Online Classroom (see Figure 2) and in a print guide.
Teachers at the treatment school implemented a minimum of six Writing Matters units and followed the roadmap of lessons. For each unit, teachers projected to the whole class animations, student notebooks, and other digital visual resources. They did not use the interactive activities in the online classroom with any regularity. Students published their work to the accompanying online ezine from home.
[FIGURE 2 OMITTED]
Three types of professional development activities were available to teachers: (1) Summer institute (3 days)--teachers practice the process writing approach as learners, are introduced to the digital resources, and receive a print copy of the curriculum; (2) Genre study institute (1 day for each genre)--available throughout the school year to immerse teachers in a particular genre and facilitate planning prior to implementation with students; and (3) In-school job-embedded professional development (weekly)--staff developers work with teachers in their classrooms, modeling activities, coteaching, and facilitating the integration of technology into the lessons. Educators in the treatment school participated in the professional development as follows: Each of the three teachers received weekly in-class support from the Writing Matters professional development consultant assigned to the school, one period a week for approximately 40 weeks. The consultant attended the grade-level department meeting at least once a month, where she distributed materials, helped determine the pacing of the instructional units, assisted in determining student needs, and coached on the effective use of technology. One or two teachers attended each of the 1-day genre study units, and only the school's literacy coach attended the 3-day summer institute.
The study used the data sources described below to assess students' growth in writing ability as well as changes in engagement with writing. In addition, researchers examined fidelity of classroom implementation for the Writing Matters program and teachers participated in a group interview to provide contextual information about teaching practice in both WM and comparison classrooms.
Writing prompts. In order to answer research question 1, the study used timed writing prompts. The prompts required students to use the explanatory writing genre, presenting students with a topic and asking them to write an essay explaining their point of view on the topic, based on their personal knowledge and experience. The prompt directions encouraged prewriting and planning; space was provided for this endeavor. Prompt A was derived from the National Writing Project's archive of writing assessments. Prompt B was developed by the research team in consultation with the research director of the National Writing Project to be parallel to Prompt A. The Appendix contains the writing prompts.
Student attitude surveys. In order to answer research question 2, the study used a survey based on a validated and reliable measure of writing engagement, the Writing Apprehension Test (Daly & Miller, 1975). The term writing apprehension, coined by Daly and Miller, refers to a "person's general tendencies to approach or avoid situations perceived to demand writing accompanied by some amount of evaluation" (Daly, 1978, p. 10). This instrument has been found to detect changes in students' writing apprehension before and after receiving an intervention targeting writing skills (Mouritzen, 1993; Schweiker-Marra & Marra, 2000), which supports the utility of the instrument for assessing changes in writing apprehension in the current study. Of the 26 items that comprise the survey, 13 are positively worded and 13 are negatively worded. To score the survey, researchers used the formula suggested by Daly and Miller (1975). The reliability of the assessment was demonstrated using Cronbach's alpha, a measure of the consistency of item responses. Reliability was assessed for both presurvey and postsurvey data. Table 3 shows the reliability statistics, demonstrating the high reliability of the measure in line with previous research using this survey (Daly, 1978; Faigley, Daly, & Witte, 1981).
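The scoring and reliability procedure described above can be sketched in code. This is an illustrative sketch, not the study's actual analysis: the item-polarity indices are hypothetical, and the scoring constant of 78 is the commonly cited form of the Daly-Miller formula, assumed here for illustration.

```python
# Illustrative sketch of Writing Apprehension Test scoring and a
# Cronbach's alpha reliability check. Assumptions (not from the study):
# 26 items rated 1-5; the first 13 items are positively worded and the
# last 13 negatively worded; scoring constant of 78.
import numpy as np

POSITIVE_ITEMS = list(range(0, 13))   # hypothetical polarity assignment
NEGATIVE_ITEMS = list(range(13, 26))

def wat_score(responses):
    """Composite score: constant + positive-item sum - negative-item sum."""
    r = np.asarray(responses)
    return 78 + r[POSITIVE_ITEMS].sum() - r[NEGATIVE_ITEMS].sum()

def cronbach_alpha(item_matrix):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    X = np.asarray(item_matrix, dtype=float)  # rows: students, cols: items
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)
```

In practice, alpha would be computed separately for the presurvey and postsurvey administrations, as reported in Table 3.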
Teacher fidelity checklists and interviews. To assess the extent to which teachers employed the resources and recommended strategies put forward by the Writing Matters program, checklists were completed by both WM and comparison school teachers at the end of each unit taught, throughout the school year. Two checklists were developed, one for teachers from the WM school and one for those from the comparison school; checklists were identical except for a series of additional questions about the specific features or elements of the Writing Matters program (e.g., use of the Online Classroom publishing tool) for the WM teachers. Both checklists contained items such as "teacher models her own writing most of the time (yes/no)." An example of a WM-specific item from the Writing Matters checklist is "teacher implemented at least 75% of lessons." In addition, researchers interviewed teachers in both schools to gain a better understanding of the teachers' backgrounds, as well as the classroom context of the writing instruction. The findings obtained through the checklists and interviews helped researchers to better contextualize the results of the student writing and student surveys.
This research project was approved by the school district and considered exempt from oversight by the researchers' institutional review board.
The two timed writing prompts were administered to treatment and comparison students as a pretest-posttest measure in October 2008 (2) and in May 2009. Participating teachers administered the writing prompts to their students, following procedures provided by the research team. Students were given 45 minutes to write their essays after reading the prompts. To control for practice effects, Prompts A and B were administered in a counterbalanced fashion within each individual and within each class. Specifically, for the pretest, roughly half of students in each class were randomly assigned Prompt A and the remaining students Prompt B. For the posttest, students given Prompt A in the pretest were given Prompt B. Conversely, students given Prompt B in the pretest were given Prompt A in the posttest. Therefore, no student received the same prompt at pretest and posttest.
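The counterbalancing scheme described above can be expressed as a short sketch, assuming only what the text states: within each class, roughly half of students are randomly assigned Prompt A at pretest (and B at posttest), and the remainder the reverse.

```python
# Illustrative sketch (not the study's code) of counterbalanced prompt
# assignment within a single class.
import random

def assign_prompts(student_ids, seed=None):
    """Return {student_id: (pretest_prompt, posttest_prompt)}."""
    rng = random.Random(seed)
    ids = list(student_ids)
    rng.shuffle(ids)                      # random order within the class
    half = len(ids) // 2
    assignment = {}
    for sid in ids[:half]:
        assignment[sid] = ("A", "B")      # Prompt A first, then B
    for sid in ids[half:]:
        assignment[sid] = ("B", "A")      # Prompt B first, then A
    return assignment
```

Because each student switches prompts between administrations, no student sees the same prompt twice, and any prompt-specific difficulty is balanced across pretest and posttest.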
The writing responses were scored alongside middle school writing samples from National Writing Project sites at NWP's national scoring conference in June 2009. Student essays were first assigned an ID number, and all identifying information (e.g., site of origin, program or comparison group, pre- or posttest, grade) was removed. Each essay was scored analytically and holistically using NWP's Analytic Writing Continuum, a rubric adapted for research purposes from the 6+1 Traits of Writing model (Culham, 2003). The Analytic Writing Continuum measures six attributes of student writing on a scale of 1 to 6 (National Writing Project, 2008a; Singer & Scollay, 2007):
* Ideas/Content (including quality and clarity of ideas and meaning): establishing purpose and focus, selecting and integrating ideas, and including evidence and details
* Structure: establishing a logical arrangement, coherence, and unity within the work
* Stance: communicating a perspective through an appropriate level of formality, elements of style, and tone appropriate for the audience and purpose
* Sentence Fluency: constructing sentences to serve the intent of the writing, in terms of rhetorical purpose, rhythm, and flow
* Diction (Language): choosing words and expressions appropriate for the writing task
* Conventions: demonstrating age-appropriate control of usage, punctuation, spelling, capitalization, and paragraphing
Each essay was rated on these six characteristics and also assigned an overall holistic score on a scale of 1 to 6: 6 (superior writing, may have very minor flaws); 5 (clearly competent, may have minor flaws); 4 (generally competent, with occasional flaws); 3 (developing competence, but flawed in some significant ways); 2 (emerging competence, but seriously flawed); 1 (not yet competent: a base beginning, not yet showing much control).
Scorers participated in 6 hours of training and their scoring was recalibrated at regular intervals during the scoring conference. Ten percent of essays were scored twice by trained raters. Any scores that differed by two or more points on a given attribute for an essay were adjudicated by a third rater. These double scores were found to be highly reliable, with agreement at 94% for the holistic score and ranging from 92% to 96% across the attributes, confirming the reliability of the rubric and of the rater training.
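The double-scoring check above can be sketched as follows. This is an illustrative sketch under one assumption not stated explicitly in the text: two raters' scores are counted as agreeing when they differ by less than two points, the same threshold that triggers third-rater adjudication.

```python
# Illustrative sketch of the double-scoring reliability check.
# Assumption: "agreement" means the two scores differ by at most one
# point, since differences of two or more points were adjudicated.

def agreement_rate(scores_1, scores_2, tolerance=1):
    """Fraction of essay pairs whose scores differ by <= tolerance."""
    pairs = list(zip(scores_1, scores_2))
    agree = sum(1 for a, b in pairs if abs(a - b) <= tolerance)
    return agree / len(pairs)

def needs_adjudication(a, b, threshold=2):
    """Flag a score pair for a third rater when it differs by >= threshold."""
    return abs(a - b) >= threshold
```

Applied per attribute over the 10% double-scored sample, this kind of calculation yields the 92%-96% agreement figures reported above.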
A number of studies have been conducted to validate this continuum for use in research, demonstrating its reliability and validity, including face validity as determined by measurement experts, concurrent validity by assessing its association with other writing measures, and predictive validity by relating it to state standardized ELA assessment scores (P. LeMahieu, personal communication, April, 2008).
In order to answer research question 2 (Does any change in writing engagement of sixth-grade students whose teachers participate in Writing Matters differ from that of students whose teachers do not participate in the program?), students took the Writing Apprehension Test survey twice, once in October 2008 and again in May 2009. The survey took about 10 minutes to complete.
Students received both the prompt and the survey in one booklet with information about the study and directions for completing the tasks.
Teacher fidelity checklists and interviews. Teachers were asked to complete instructional checklists after each unit they taught. Researchers emailed a new checklist to teachers after they reported finishing a unit.
Descriptive statistics of the pre- and post-essay scores and the pre- and postwriting engagement scores were calculated, overall, by group (Writing Matters or comparison), teacher (T1, T2, T3, T4, T5, T6), and class (1-12). Independent samples t-tests were conducted to examine whether pretest essay scores differed between the two groups and whether the prewriting engagement scores differed between the two groups. Repeated measures analysis of variance tests (ANOVAs) were conducted to assess the group differences in scores over the two time points for the Writing Matters and comparison group students (within-subjects analysis), and the differences in the patterns of student pre- and post-test scores by group, teacher, and class (between-subjects analysis). (3) In order to control for factors potentially influencing the students' essay scores and writing engagement scores, student gender was included as a covariate in the repeated measures ANOVAs. A similar set of analyses was performed for students scoring in the bottom fifth (below the first quintile) of the holistic scores on the pretest, to see if Writing Matters had any impact on the targeted student population in these schools.
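The key group-by-time test in this analysis plan can be illustrated with a minimal sketch. For a two-group, two-timepoint design, the interaction term in a repeated measures ANOVA is statistically equivalent to an independent samples t test on pre-to-post gain scores (F = t-squared); the sketch below uses that equivalence and omits the gender covariate that the study's ANOVAs included. It is not the study's code, and all data names are hypothetical.

```python
# Illustrative sketch of the group-by-time interaction test. For a
# 2 (group) x 2 (time) mixed design without covariates, testing the
# interaction is equivalent to an independent samples t test comparing
# the two groups' gain scores (post minus pre).
import numpy as np
from scipy import stats

def interaction_test(pre_treat, post_treat, pre_comp, post_comp):
    """Return (t, p) for the group difference in pre-to-post gains."""
    gains_treat = np.asarray(post_treat) - np.asarray(pre_treat)
    gains_comp = np.asarray(post_comp) - np.asarray(pre_comp)
    t, p = stats.ttest_ind(gains_treat, gains_comp)
    return t, p
```

A nonsignificant result from this kind of test corresponds to the "no interaction" findings reported for the full sample below, while a significant result corresponds to the interaction found for the bottom-quintile subgroup.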
Teacher interview data were analyzed for Writing Matters teachers' fidelity of implementation of the Writing Matters program and comparison teachers' fidelity of implementation of the writing-process approach. The results of this analysis were used to contextualize the quantitative analysis results.
This study used a rigorous quasi-experimental design. Still, as in any research, there were limitations to this study related to its design, sample, and implementation that constrain the generalizability of the results. These are explored in depth in the discussion section, below.
Research question 1: Does any change in writing ability of sixth-grade students whose teachers participate in the Writing Matters program differ from that of students whose teachers do not participate in the program?
The pre- and posttest essay scores indicate that, as a group, students who were exposed to Writing Matters over a school year did not make gains in writing ability, nor did their scores change differently from those of students who were not exposed to Writing Matters. However, the students with the lowest initial writing ability who were exposed to Writing Matters over the year did make significantly greater gains than those with the lowest initial writing ability who were not exposed to Writing Matters. A detailed description of the results related to research question 1 follows, first for the whole group and then for the students who scored in the bottom quintile on the holistic pretest score.
There were no statistically significant increases or decreases in essay scores, in either the Writing Matters group or the comparison group of students. In fact, the scores of students in both groups declined slightly, though the Writing Matters students' scores decreased by less than the scores of the comparison students.
Descriptive statistics were calculated on the pre- and posttest essay scores. The distribution of data on students' scores was found to be normal, confirming the suitability of employing repeated measures ANOVAs to examine changes in students' pre- and posttest scores over time by group, teacher, and class. In addition, independent samples t tests showed no difference in holistic scores on Prompts A and B at pre- and at posttest (prescores by prompt: t = -1.83, p = .07; posttest scores by prompt: t = .44, p = .66), indicating no significant prompt effects on the results.
Average holistic and domain scores for all students who had both pre- and posttest scores by group are represented in Table 4, which shows that, on average, the holistic scores of students in both the Writing Matters and comparison groups decreased slightly between pre- and posttest administration. In the Writing Matters group, the mean holistic scores went down .04 score points from pre- to posttest essay, and in the comparison group, the mean holistic scores went down .11 score points--slightly more than the Writing Matters students' scores. Figure 3 displays these average pre- and posttest holistic scores graphically.
Comparison students' average domain scores all declined slightly from pre- to posttest; among the Writing Matters students, only the average Content, Stance, and Diction scores declined. The Writing Matters students' average Sentence Fluency scores did not change, and both their average Structure scores and average Conventions scores went up .02 score points.
To determine whether the two groups began the study at a similar level of writing ability, independent samples t tests were conducted on the students' prescores by group. Note that the sample on which the t tests and upcoming ANOVAs are based includes only students for whom both pre- and posttest scores were obtained (ranging from 298 to 299 depending on the score type). Although the comparison group's pretest scores were slightly higher, the tests showed that the means of students' holistic prescores in each group did not differ significantly (t = 3.08, p = .08). The same result was found for the domain scores. (4) Because both groups began the study at a similar level of writing ability, changes from pre- to posttest essay administration can be interpreted more confidently than if the groups had started at different levels.
[FIGURE 3 OMITTED]
With regard to changes in students' scores over time, across holistic, content, stance, sentence fluency, and diction categories, the Writing Matters students declined less than the comparison students. For structure and conventions, the Writing Matters students actually had slight increases, whereas the comparison students declined in all categories. However, despite these differences in pre- and posttest scores, the changes were not significant. There was no interaction between the changes in students' pre- and posttest scores over time and whether the students were in the program or comparison group, suggesting that the Writing Matters program did not make a difference in students' writing ability. To arrive at these findings, repeated measures ANOVAs were conducted, with group (Writing Matters or comparison) as the between-subjects factor, pre- and posttest holistic and domain scores as the dependent or outcome variables, and gender of student as a covariate. See Table 5 for an overview of these ANOVA results. (5)
Next, the analysis was carried out for students whose holistic pretest essay scores fell in the bottom 20% of the Writing Matters group and of the comparison group. Table 6 contains the raw pre- and posttest scores for students in the bottom 20% by group. It shows that both groups of students who started out in the lowest 20% of holistic pretest scores made gains over the school year; the Writing Matters students increased .79 points, while the comparison students increased only .31 points. Figure 4 demonstrates this finding graphically. Scores also increased from pre- to posttest for both groups on all domain scores; students in the lowest 20% of the Writing Matters group made slightly greater gains than the students in the lowest 20% of the comparison group.
Independent samples t tests were run to determine whether the prescores of students in the lowest 20% in each group differed. The means of students' holistic prescores in each group did not differ significantly, t(58) = -1.48, p = .15, confirming that both groups in the bottom 20% began the study at a similar level of writing ability and allowing changes from pre- to posttest to be assessed on comparable groups. The same result was found for the domain scores, (6) except for the Structure scores. Students in the lowest 20% of the comparison group had significantly higher mean Structure pretest scores than the students in the lowest 20% of the Writing Matters group; therefore, an ANOVA was not run for this domain score.
With regard to changes in students' scores over time, students in the lowest 20% of the Writing Matters group made significantly greater gains in holistic scores compared with students in the lowest 20% of the comparison group. There was a significant interaction between the changes in students' pre- and posttest scores over time and whether the students were in the program or comparison group, suggesting that the Writing Matters program made a difference in the lowest-performing students' writing ability. In addition, although the lowest-performing students in both groups made increases in each of the domain scores, they did not differ in these gains. As with the full sample, to arrive at these findings researchers conducted repeated measures ANOVAs, with group (Writing Matters or comparison) as the between-subjects factor, pre- and posttest holistic and domain scores as the dependent or outcome variables, and gender of student as a covariate. See Table 7 for an overview of these ANOVA results. (7)
Research question 2: Does any change in writing engagement of sixth grade students whose teachers participate in the Writing Matters program differ from that of students whose teachers do not participate in the program?
Findings indicate that, overall, students in the Writing Matters group did not differ significantly from the comparison students in their writing engagement over time. An analysis of students from the Writing Matters and comparison groups who scored in the lowest quintile of the holistic pretest scores also showed no significant differences. Descriptive statistics for both sets of analyses are below.
Descriptive statistics of the pre- and posttest writing engagement scores (8) show that students from the Writing Matters group began the study with higher writing engagement scores than those from the comparison group (average prescores of 63.79 and 60.29, respectively; Table 8 and Figure 5). The writing engagement scores of students from the comparison group increased slightly from the beginning to the end of the school year, by only .76 points, while the average scores of Writing Matters students declined slightly (.36 points), a very small change on a scale from 30 to 130.
The pretest writing engagement scores of the Writing Matters and comparison students were not significantly different, meaning that students in both groups had similar attitudes toward writing at the start of the study. Independent samples t tests were run to assess whether students whose teachers used Writing Matters had different average pretest engagement scores from students whose teachers used the writing-process approach but not Writing Matters. No differences were found (t = 1.31, p = .19), suggesting that both groups felt similarly about writing, even though the Writing Matters group's mean engagement scores were higher at both the pre- and posttest administrations of the survey.
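A baseline-equivalence check of this kind can be sketched as follows; the scores are fabricated values on the survey's 30-130 engagement scale, not the study data:

```python
# Sketch of a baseline-equivalence check: compare pretest engagement
# means across the two groups with an independent samples t test.
# Scores are fabricated for illustration (30-130 engagement scale).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
wm = rng.normal(63.79, 19.25, 147)   # hypothetical WM pretest scores
comp = rng.normal(60.29, 17.50, 73)  # hypothetical comparison pretest scores

t, p = stats.ttest_ind(wm, comp)
if p > .05:
    print("no significant pretest difference: groups start out comparable")
```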
[FIGURE 5 OMITTED]
Although engagement with writing among students in the Writing Matters group was essentially unchanged over time and the comparison group's engagement increased slightly, students in the two groups had similar patterns of pre- and postscores. Repeated measures ANOVAs were conducted on the pre- and postsurvey writing engagement scores with group as the between-subjects factor. There was no significant interaction between the group students belonged to and their patterns of pre- and postsurvey writing engagement scores: F(1, 218) = .51, p = .48. (9)
Students in the bottom fifth of holistic pretest scores in each group were compared on their changes in engagement over time. Descriptive statistics of the pre- and posttest writing engagement scores show that the students with the lowest pretest essay scores in the Writing Matters group were more engaged at pretest than their counterparts in the comparison group, by an average of 2.27 points. At posttest, writing engagement had declined in both groups: by an average of 2.54 points for the Writing Matters students and 5.07 points for the comparison students (see Table 9 and Figure 6).
As with the writing engagement results for the full sample, the pretest writing engagement scores for Writing Matters students with the lowest holistic essay scores at pretest were not significantly different from comparison students with the lowest holistic essay scores at pretest. Students from both groups had similar attitudes toward writing at the start of the study. Independent samples t tests demonstrate this result (t = .35, p = .73).
The students with the lowest writing ability from the Writing Matters group and the comparison group had similar patterns of pre- and postsurvey scores. Repeated measures ANOVAs were conducted on the pre- and postsurvey student writing engagement survey scores, with group as the between-subjects factor, and gender as a covariate. There was no significant interaction between the group these students belonged to and changes in their engagement scores over time: F(1, 37) = .27, p = .61. (10)
[FIGURE 6 OMITTED]
Results show that there were no significant differences overall between the full sample of students in the Writing Matters group and those in the comparison group in writing ability, as measured by a timed writing task, or in writing engagement, as measured by the Writing Apprehension Test. However, among the lowest-performing writers, there was a statistically significant difference in the change in writing ability between Writing Matters students scoring in the bottom fifth on the holistic pretest essay and their counterparts in the comparison school. On one hand, the null effect for the full sample can be interpreted as meaning that Writing Matters was at least as good as the "business as usual" approach used at the comparison school, whose teachers had previously been trained by a nationally known writing program. On the other hand, although this study employed a rigorous quasi-experimental design, several elements of the study--including its design, the sample population, the instruments used, data collection issues, and the fidelity of implementation by participants--may have influenced the results and must be taken into consideration.
Recently, several larger and better resourced studies funded by the Institute of Education Sciences have reported "no effects" (e.g., Agodini et al., 2009; Garet et al., 2008; Viadero, 2009). It is very difficult to find significant effects in school settings; researchers typically hope for even a small effect because a variety of individual student, family, school, and community factors can have a strong impact on students' abilities and attitudes when it comes to learning. This makes it very difficult for a curriculum taught over 1 year (or parts of 1 year) to have an effect that prevails over these factors. Having students receive a heavier dose of the program and its components, via more units, more time per unit, and more professional development for teachers, might increase the likelihood that the program will make a difference in students' abilities and engagement with writing. Furthermore, assessing impact over more than 1 year may be desirable, as the effects of the program may be cumulative or show up later in children's schooling. Similarly, effects may appear in subsequent years of teachers' implementation.
The null effect of WM on the full sample of participating students may also be due to the measurement of writing ability in this study. The on-demand writing prompt measured students' ability to carry out a timed writing task and perhaps did not measure additional aspects of writing ability that Writing Matters students may have acquired. Such aspects include the ability to develop a written piece over time, generate their own ideas for writing topics, provide peer feedback, revise drafts, and edit writing. Future research might additionally explore changes in students' ability to carry out these writing-related tasks over time.
Elements of the data collection for the study also may have influenced the null results obtained. Students completed the pretest approximately 6 weeks after teachers had begun using Writing Matters in their classrooms. A technical report for the U.S. Department of Education's Institute of Education Sciences (Schochet, 2008) explored the late pretest problem for randomized controlled trials in education and concluded that late pretests are preferable to no pretests at all. Nevertheless, the pretest in this study was administered later than was optimal, potentially contributing to the lack of gains by either the full cohort of Writing Matters students or the comparison students. Furthermore, researchers were unable to collect data on individual student demographics other than gender (e.g., ethnic background and native language) for this study. Study researchers recognize this as a limitation, since these variables may be associated with student outcomes despite the study's attempt to match schools based on student demographics. However, we make the assumption that the matching of the two schools at least reduces these potential individual influences on changes in the writing ability and apprehension of the average student.
The significant difference found for low-performing students suggests that Writing Matters was effective in influencing the writing of its target population of struggling students. On the other hand, some studies have shown that lower-performing students make greater gains overall than higher-performing students, since there is simply more room for improvement (Buckley, Ehrlich, Midouhas, & Brodesky, 2008; Ehrlich, Buckley, Midouhas, & Brodesky, 2008). Under this "ceiling effect," potential gains by higher-performing students are almost by necessity smaller than potential gains by lower-performing students--perhaps explaining the lack of gains by either group overall while contributing to the gains made by students in the bottom quintile.
Three main issues related to fidelity may have influenced the findings. First, teachers reported implementing certain key strategies (e.g., teaching a minilesson at the beginning of each class). However, it is not clear that simply using a strategy is enough; rather, it may be that how a teacher uses a strategy determines student impact (Marzano, 2009). The Teacher Instructional Checklists assessed self-reported amount of use but not the quality of implementation. In addition, the study was not designed to clarify the number of Writing Matters units that are essential to make a difference for student learning.
Second, teachers may have adapted the curriculum in ways that did not support student learning. Students had access to computers only on a sporadic basis, constraining the amount of time students spent individually on the Writing Matters website and on writing and revising on computers. In interviews, teachers said that they skipped sections of the curriculum, citing time constraints. The optimal amount of teacher and student computer use in the Writing Matters program is not clear; however, we know that students in this study got relatively little exposure to computers for writing.
Third, schools implementing Writing Matters can take full advantage of all program resources only if they have consistent access to computer technology and Internet access. Even a relatively well-resourced middle school such as the treatment school still found access to technology to be a barrier to complete implementation. The number of working computers in classrooms limited drafting, revision, and publishing of work. Internet access for research, exploration of resources, and peer-to-peer interaction in the Online Classroom was inconsistent. While teachers embraced the lessons in the program, the media assets that are both highly motivating and instrumental in reinforcing new strategies and concepts often went unused in class. Some teachers abandoned technology use while persevering with genre-study lesson plans. In this particular school, many students had computers with Internet access at home. Teachers encouraged students to comment on their peers' work and to publish their writing from their home computers, yet this strategy did not address classroom use of the technology-based materials.
This study investigated the potential for a technology-infused writing program. The findings demonstrated its potential to have an impact on students' writing ability especially for those students struggling the most. The gains of students in the bottom quintile in the intervention group were significantly greater than those of the comparison group, pointing to the strength of Writing Matters for its target population of struggling students.
This study is important for two key reasons. First, while the process approach to writing and the writing workshop model are promising for student learning, they are immensely challenging for teachers, most of whom need support to implement them effectively. In school systems that regularly experience high teacher turnover, getting every educator up to speed on these approaches to writing instruction is an enormous task. Many teachers, especially those working in schools serving low-income communities where many students score below grade level in reading and writing, are new to the profession. Teachers, especially new teachers with fewer than 5 years in the classroom, require significant scaffolding if they are to execute an innovative and rigorous literacy curriculum with a population of students who struggle with basic English literacy. Professional learning supports can help keep new teachers in the system, thus reducing the cycle of constant retraining and uncertainty, and can introduce veteran teachers to more effective instructional methods.
This study begins to shed light on the effectiveness of programs like Writing Matters to provide supports for teachers, thereby influencing their students' ability to learn to write. Research results on effective writing programs will become even more critical as states begin to implement the Common Core Standards, which emphasize the explanatory, argumentative, and narrative writing genres; highlight the necessity of writing over time as well as over short periods; and encourage the use of technology to produce writing.
The second reason this study is important is the scarcity of research on student outcomes related to writing instruction in middle school (Pritchard & Honeycutt, 2006). The Teachers College Reading and Writing Project (n.d.) reports that, in schools participating in their program, the number of students who do not meet learning and performance standards on New York City's English Language Arts test decreases in all grades, and the number of students who meet or exceed standards increases by 10% to 15%. The National Writing Project reports that in nine studies of local Writing Project networks, students whose teachers participated in professional development outperformed students of comparison teachers on an independent writing measure (National Writing Project, 2008b). These results, taken along with those in the present study, point to the potentially promising practice of using a writing process approach and workshop model along with a variety of teacher learning experiences that include coaching, formal professional development sessions, and teacher inquiry groups.
The decline of reading achievement scores from elementary school to middle school, especially for children from economically disadvantaged families, is well documented (cf. Biancarosa & Snow, 2006) and is commonly known as the "fourth grade slump" and the "eighth grade cliff." Given the close relationship between reading and writing (cf. Graham & Hebert, 2010), we speculate that a similar decrease may occur in writing achievement. With renewed attention to the importance of writing skills, empirical studies on writing instruction will be increasingly important.
In conclusion, Writing Matters had an impact on the writing ability of those students who struggle the most. The study limitations point to directions for future research and development: varying the amount (dosage) of Writing Matters (i.e., a standard number of units, more time per unit, and more consistent professional development activities for teachers); controlling for fidelity in more nuanced ways; using other methodologies, such as rigorous and in-depth case studies that collect teaching and learning artifacts over time; and focusing on developing and testing the Writing Matters professional development model.
APPENDIX: WRITING PROMPTS
There are good things and bad things about being a student. Write a multi-paragraph essay that explains to a new student the good things and the bad things about being in your grade in your school. Be sure to include examples.
Plan your writing using the blank space below and on the back of this paper. Write your final essay on the lined paper. Only what you write on the lined paper will count. You will have 45 minutes in all--plan and use your time well.
Your reader will be paying attention to:
* What you say
* The organization of your writing
* How well you express yourself
Do your very best work! Be sure to check spelling and punctuation before you finish.
School is not the only place we learn. Write a multi-paragraph essay to inform your teacher about something you learned outside of school. Be sure to tell what you learned and the ways it has helped you or has been important to you.
Plan your writing using the blank space below and on the back of this paper. Write your final essay on the lined paper. Only what you write on the lined paper will count. You will have 45 minutes in all--plan and use your time well.
Your reader will be paying attention to:
* What you say
* The organization of your writing
* How well you express yourself
Do your very best work! Be sure to check spelling and punctuation before you finish.
Agodini, R., Harris, B., Atkins-Burnett, S., Heaviside, S., Novak, T., & Murphy, R. (2009). Achievement effects of four early elementary school math curricula: Findings from first graders in 39 schools (NCEE 2009-4052). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education.
Applebee, A. N., & Langer, J. A. (2006). The state of writing instruction in America's schools: What existing data tell us. Albany, NY: Center on English Learning & Achievement.
Biancarosa, G., & Snow, C. E. (2006). Reading next--A vision for action and research in middle and high school literacy: A report to Carnegie Corporation of New York (2nd ed.). Washington, DC: Alliance for Excellent Education.
Buckley, K., Ehrlich, S., Midouhas, E., & Brodesky, A. (2008). Performance patterns for students with disabilities in grade 4 mathematics education in New York State (Issues & Answers Report, REL 2008-No. 050). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Northeast and Islands. Retrieved from http://ies.ed.gov/ncee/edlabs/projects/project.asp?ProjectID=158
Collins, A., Brown, J. S., & Newman, S. E. (1989). Cognitive apprenticeship: Teaching the craft of reading, writing and mathematics. In L. B. Resnick (Ed.), Knowing, learning and instruction: Essays in honor of Robert Glaser (pp. 453-494). Hillsdale, NJ: Erlbaum.
Culham, R. (2003). 6 + 1 traits of writing: The complete guide. New York, NY: Scholastic Professional Books.
Daly, J. A. (1978). Writing apprehension and writing competency. Journal of Educational Research, 72(1), 10-14.
Daly, J. A., & Miller, M. D. (1975). The empirical development of an instrument to measure writing apprehension. Research in the Teaching of English, 9, 242-249.
Ehrlich, S., Buckley, K., Midouhas, E., & Brodesky, A. (2008). Performance patterns for students with disabilities in grade 4 mathematics education in Massachusetts (Issues & Answers Report, REL 2008-No. 051). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Northeast and Islands. Retrieved from http://ies.ed.gov/ncee/edlabs/projects/project.asp?ProjectID=160
English Language Arts Standards >> Writing >> Grade 6. (2011). Retrieved from http://www.corestandards.org/the-standards/english-language-arts-standards/writing-6-12/grade-6/
Faigley, L., Daly, J. A., & Witte, S. P. (1981). The role of writing apprehension in writing performance and competence. Journal of Educational Research, 75(1), 16-21.
Garet, M. S., Cronen, S., Eaton, M., Kurki, A., Ludwig, M., Jones, ... Sztejnberg, L. (2008). The impact of two professional development interventions on early reading instruction and achievement (NCEE 2008-4030). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education.
Graham, S., & Hebert, M. A. (2010). Writing to read: Evidence for how writing can improve reading. Washington, DC: Alliance for Excellent Education.
Graham, S., & Perin, D. (2007). Writing next: Effective strategies to improve writing of adolescents in middle and high schools--A report to Carnegie Corporation of New York. Washington, DC: Alliance for Excellent Education.
Konstantopoulos, S. (2006). Trends of school effects on student achievement: Evidence from NLS:72, HSB: 82, and NELS:92. Teachers College Record, 108, 2550-2581.
Lattimer, H. (2003). Thinking through genre: Units of study in reading and writing. Portland, ME: Stenhouse.
Marzano, R. J. (2009). Six steps to better vocabulary instruction. Educational Leadership, 67(1), 83-84.
Mouritzen, G. (1993). Improving writing skills in alternative high school English classes through writer's workshops. Fort Lauderdale-Davie, FL: Nova University. (ERIC Document Reproduction Service No. ED 354 543)
National Center for Education Statistics. (2008). The Nation's Report Card: Writing 2007 (NCES 2008-468). Washington, DC: National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education.
National Writing Project. (2008a). Local site research initiative report: Cohort III (2005-2006). Berkeley, CA: Author.
National Writing Project. (2008b). 2008 research brief. Retrieved from http://www.nwp.org/cs/public/print/resource/2668
Patthey-Chavez, G. G., Matsumura, L. C., & Valdes, R. (2004). Investigating the process approach to writing instruction in urban middle school classrooms. Journal of Adolescent and Adult Literacy, 47(6), 462-477.
Pritchard, R. J., & Honeycutt, R. L. (2006). The process approach to writing instruction: Examining its effectiveness. In C. A. MacArthur, S. Graham, & J. Fitzgerald (Eds.), Handbook of writing research (pp. 275-290). New York, NY: Guilford Press.
Schochet, P. Z. (2008). The late pretest problem in randomized control trials of education interventions (NCEE 2009-4033). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education.
Schweiker-Marra, K. E., & Marra, W. T. (2000). Investigating the effects of pre-writing activities on writing performance and anxiety of at-risk students. Reading Psychology, 21, 99-114.
Singer, N. R., & Scollay, D. (2007). Building a district-based secondary writing program through the National Writing Project Model. Berkeley, CA: National Writing Project. Retrieved from http://www.nwp.org/cs/public/print/resource/2583
Teachers College Reading & Writing Project Website. (n.d.). Project success study. Retrieved from http://rwproject.tc.columbia.edu/default.aspx?pageid=1097
Viadero, D. (2009, April 1). "No effects" studies raising eyebrows. Education Week, 28(27), 14-15.
Lauren Goldenberg and Terri Meade
Education Development Center
Institute of Education, University of London
* Lauren Goldenberg, Education Development Center, Center for Children & Technology, 96 Morton Street, 7th floor, New York, NY 10014. E-mail: LGoldenberg@edc.org
(1.) We targeted the sixth grade to ensure that students had not been exposed to WM in prior years. The WM program is for middle school students, and sixth grade is the first year of middle school.
(2.) Due to circumstances outside our control, we were unable to administer the pretest until 6 weeks after students began school and teachers had already begun implementing the Writing Matters program.
(3.) Levene's test for equality of variances, calculated for the t tests and ANOVAs, showed that the variances in performance for the students from the two groups--Writing Matters and comparison--were homogeneous for all analyses conducted.
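The assumption check described in this note can be illustrated as follows, using fabricated scores rather than the study data:

```python
# Illustration (hypothetical data) of a Levene's test for homogeneity
# of variance between two groups, as referenced in the note above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
wm_scores = rng.normal(3.41, 0.89, 193)   # fabricated WM holistic scores
cmp_scores = rng.normal(3.60, 0.95, 106)  # fabricated comparison scores

# A large p value indicates no evidence against equal variances
stat, p = stats.levene(wm_scores, cmp_scores)
print(f"Levene W = {stat:.2f}, p = {p:.3f}")
```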
(4.) Domain score t-test results are t(296) = -1.47, p = .14 (Content); t(296) = -1.84, p = .07 (Structure); t(297) = -1.82, p = .07 (Stance); t(296) = -1.60, p = .11 (Sentence Fluency); t(297) = -1.00, p = .31 (Diction); t(296) = -1.29, p = .20 (Conventions).
(5.) There was no main effect of group on the average of students' pre- and posttest holistic scores (F(1, 296) = 3.08, p = .08). In other words, writing ability overall did not differ for students in the Writing Matters group compared with those in the comparison group. The same results were found for the domain scores. Domain score ANOVA group main effect results are F(1, 295) = 2.56, p = .11 (Content); F(1, 296) = 3.10, p = .08 (Structure); F(1, 296) = 3.15, p = .08 (Stance); F(1, 294)= 1.56, p = .21 (Sentence Fluency); F(1, 296) = .45, p = .50 (Diction); F(1, 295) = 1.53, p = .22 (Conventions).
(6.) Domain score t-test results are t(58) = 1.15, p = .25 (Content); t(57) = -3.22, p < .01 (Structure); t(57) = -1.43, p = .19 (Stance); t(57) = -.66, p = .52 (Sentence fluency); t(57) = -.06, p = .95 (Diction); t(58) = -.98, p = .33 (Conventions).
(7.) There was no main effect of group on the average of students' pre- and posttest holistic scores, F(1, 57) = .06, p = .81. Writing ability overall did not differ for the lowest-performing students in the Writing Matters group compared with the lowest-performing students in the comparison group. The same results were found for the domain scores. Domain score ANOVA group main effect results are F(1, 57) = .87, p = .35 (Content); F(1, 56) = .10, p = .75 (Stance); F(1, 56) = .25, p = .62 (Sentence fluency); F(1, 56) = .35, p = .56 (Diction); F(1, 57) = .12, p = .73 (Conventions).
(8.) The distribution of the survey data was positively skewed, with students clustered on the left side of the distribution. Because an approximately normal distribution is an assumption of repeated measures ANOVA, the data were trimmed by removing any outlier students whose scores were two or more standard deviations from the mean, resulting in a nearly normal distribution.
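The trimming rule described in this note can be sketched as follows; the two-standard-deviation cutoff follows the text, but the scores themselves are fabricated:

```python
# Sketch of the outlier-trimming rule described above: drop any student
# whose engagement score falls two or more standard deviations from the
# sample mean before running the ANOVAs. Scores below are fabricated.
import numpy as np

def trim_outliers(scores, k=2.0):
    """Keep scores strictly within k standard deviations of the mean."""
    scores = np.asarray(scores, dtype=float)
    z = np.abs(scores - scores.mean()) / scores.std()
    return scores[z < k]

# A positively skewed batch of engagement scores (30-130 scale):
raw = np.array([38, 42, 45, 47, 50, 52, 55, 58, 60, 63, 66, 70, 128])
trimmed = trim_outliers(raw)
print(len(trimmed))  # the single extreme high score (128) is removed
```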
(9.) In addition, there was no significant main effect of group on students' averaged pre- and postsurvey scores (F[1, 218] = 1.42), suggesting that students from the two groups did not differ in their overall writing engagement.
(10.) The main effect of group membership, defined by the average of students' pre- and postsurvey writing engagement scores, was not significant either: F(1, 43) = .07, p = .80.
TABLE 1
Characteristics of Writing Matters School and Comparison School

Characteristic              Writing Matters School           Comparison School
Community setting           Suburban within a large          Suburban within a large
                            urban district                   urban district
School size                 1,254                            1,081
Demographics (a)            11% black, 13% Hispanic,         13% black, 15% Hispanic,
                            14% white, 62% Asian             19% white, 53% Asian
                            10% English language learners    3% English language learners
                            8% students with disabilities    8% students with disabilities
Percent of proficient       2007-2008: 80%                   2007-2008: 80%
sixth-graders in ELA (b)    2006-2007: 77%                   2006-2007: 81%
                            2005-2006: 78%                   2005-2006: 75%
Percent of proficient       2007-2008: 94%                   2007-2008: 92%
sixth-graders in            2006-2007: 90%                   2006-2007: 87%
mathematics (b)             2005-2006: 89%                   2005-2006: 82%

Notes: (a) Obtained through the NYC Department of Education Office of Accountability website at http://schools.nyc.gov. (b) Obtained through publicly available data on the New York State Education Department's website at http://www.emsc.nysed.gov/irts/reportcard/

TABLE 2
Participants

Teacher    School               N Classes    Total N Participating Students
1          Writing Matters      3            94
2          Writing Matters      3            75
3          Writing Matters      3            87
4          Comparison school    2            53
5          Comparison school    1            33
6          Comparison school    1            29
Total      --                   13           371

TABLE 3
Interitem Reliability of Writing Engagement Survey

                    Presurvey    Postsurvey
Cronbach's alpha    .93          .93
N                   286          283

Note: All available pre- and postsurvey scores were included from participating students, regardless of whether they completed both pre- and postsurveys. The total number of items in the measure is 26.
TABLE 4
Means and Standard Deviations of Pre- and Posttest Essay Scores by Group (on a Scale of 1-6)

                    Writing Matters Group (n = 193)    Comparison Group (n = 106)
                    Pretest        Posttest            Pretest        Posttest
Holistic            3.41 (.89)     3.37 (.82)          3.60 (.95)     3.49 (.81)
Content             3.47 (.92)     3.40 (.87)          3.63 (.97)     3.51 (.82)
Structure           3.24 (.90)     3.26 (.89)          3.44 (.91)     3.36 (.89)
Stance              3.50 (.91)     3.46 (.92)          3.70 (.95)     3.57 (.90)
Sentence fluency    3.45 (.91)     3.45 (.90)          3.63 (.98)     3.49 (.91)
Diction             3.35 (.88)     3.31 (.83)          3.46 (.96)     3.32 (.82)
Conventions         3.44 (1.02)    3.46 (.94)          3.59 (1.03)    3.55 (.93)

Note: For each score type, only students with both pre- and posttest scores were included. Due to missing data, the numbers for each domain vary slightly; the n for structure and sentence fluency in the Writing Matters pretest is 192, and the n for the comparison group in the content pretest and the stance and conventions posttests is 105.

TABLE 5
Repeated Measures ANOVA Interactions of Time and Group

Score               Group              Mean Pretest    Mean Posttest    F Value    p
Holistic            Writing Matters    3.41            3.37             .33        .57
                    Comparison         3.60            3.49
Content             Writing Matters    3.47            3.40             .17        .69
                    Comparison         3.63            3.51
Structure           Writing Matters    3.24            3.26             .62        .43
                    Comparison         3.44            3.36
Stance              Writing Matters    3.50            3.46             .50        .48
                    Comparison         3.70            3.57
Sentence fluency    Writing Matters    3.45            3.45             1.12       .29
                    Comparison         3.63            3.49
Diction             Writing Matters    3.35            3.31             .86        .35
                    Comparison         3.46            3.32
Conventions         Writing Matters    3.44            3.46             .27        .61
                    Comparison         3.59            3.55

Note: For each score type, only students with both pre- and posttest scores were included. Due to missing data, the numbers for each domain vary slightly; the n for structure and sentence fluency in the Writing Matters pretest is 192, and the n for the comparison group in the content pretest and the stance and conventions posttests is 105.
TABLE 6
Means and Standard Deviations of Pre- and Posttest Essay Scores for Students With Holistic Pretest Scores in Bottom 20% (on a Scale of 1-6)

                    WM Group (bottom 20%)          Comparison Group (bottom 20%)
                    Pretest       Posttest         Pretest       Posttest
Holistic            2.17 (.57)    2.96 (.82)       2.38 (.47)    2.69 (.68)
Content             2.19 (.62)    3.19 (.90)       2.38 (.57)    3.33 (.87)
Structure           1.89 (.47)    2.66 (.82)       2.29 (.41)    2.90 (.93)
Stance              2.21 (.55)    2.83 (.93)       2.40 (.49)    2.74 (1.00)
Sentence fluency    2.18 (.59)    3.00 (1.01)      2.29 (.54)    2.69 (.87)
Diction             2.16 (.52)    2.79 (.78)       2.17 (.56)    2.57 (.97)
Conventions         2.04 (.61)    2.71 (.72)       2.19 (.49)    2.67 (.98)

Note: For each score type, only students with both pre- and posttest scores were included. Due to missing data, the numbers for each domain vary slightly. The n for the Writing Matters group varies from 38 to 39; for all domains, the n for the comparison group is 21.

TABLE 7
Repeated Measures ANOVA Interactions of Time and Group (Bottom-Scoring 20%)

Score               Group              Mean Pretest    Mean Posttest    F Value    p
Holistic            Writing Matters    2.17            2.96             4.25       .04
                    Comparison         2.38            2.69
Content             Writing Matters    2.19            3.19             .05        .83
                    Comparison         2.38            3.33
Structure           Writing Matters    1.89            2.66             N/A (a)    N/A
                    Comparison         2.29            2.90
Stance              Writing Matters    2.21            2.83             .90        .35
                    Comparison         2.40            2.74
Sentence fluency    Writing Matters    2.18            3.00             1.92       .17
                    Comparison         2.29            2.69
Diction             Writing Matters    2.16            2.79             .63        .43
                    Comparison         2.17            2.57
Conventions         Writing Matters    2.04            2.71             .65        .42
                    Comparison         2.19            2.67

Notes: For each score type, only students with both pre- and posttest scores were included. Due to missing data, the numbers for each domain vary slightly. The n for the Writing Matters group varies from 38 to 39; for all domains, the n for the comparison group is 21. (a) An independent samples t test showed that structure pretest scores were significantly different for the Writing Matters and comparison students; therefore, an ANOVA was not conducted using structure scores.
TABLE 8
Means (a) and Standard Deviations, Pre- and Posttest Writing Engagement Scores

                      Writing Matters Group      Comparison Group
                      Pretest     Posttest       Pretest     Posttest
Mean                  63.79       63.43          60.29       61.05
Standard deviation    19.25       18.74          17.50       14.59
N                     147         147            73          73

Note: Writing engagement scores can range from 30 to 130. (a) The mean scores presented here are lower than mean scores found in other studies (e.g., Daly, 1978); however, those studies were conducted with high school and college student populations, whom we might expect to have less apprehension and more engagement with writing, as they have significantly more experience with writing. For example, one study using the Writing Apprehension Test with college students reported a mean of 75.59 (SD = 13.35).

TABLE 9
Means and Standard Deviations of Pre- and Postsurvey Writing Engagement Scores by Group (Bottom 20% for Holistic Pretest Scores)

                      Writing Matters Group      Comparison Group
                      Presurvey   Postsurvey     Presurvey   Postsurvey
Mean                  65.27       62.73          63.00       58.07
Standard deviation    19.72       24.87          20.00       13.26
N                     26          26             14          14

Note: Writing engagement scores can range from 30 to 130.
Authors: Goldenberg, Lauren; Meade, Terri; Cooperman, Naomi; Midouhas, Emily
Publication: Middle Grades Research Journal
Date: June 22, 2011