Research - Summer Loss: The Phenomenon No One Wants to Deal With.
Interest in the phenomenon of "summer loss" seems to have been largely limited to researchers, but it is certainly long-standing. Twenty years ago, Donald Hayes and Judith Grether reported that a seven-month difference in reading achievement between poor and middle-class students in the second grade had widened to two years and seven months by the end of sixth grade. As in the research reported in the March column, the gap grew in spite of the fact that the two groups made similar progress during the school year. Hayes and Grether concluded that "the differential progress made during the four summers between second and sixth grade accounts for upwards of 80% of the achievement difference between economically advantaged and ghetto schools."
Eighty percent! Just that finding should have been sufficient to make people sit up and take notice, but now there's more. The phenomenon of summer loss has major implications for the measurement of "Adequate Yearly Progress" that is required of states accepting federal money through the reauthorized Elementary and Secondary Education Act (ESEA), otherwise known as the No Child Left Behind Act. (In my opinion, states ought to decline the funds, allowing them to ignore ESEA's outrageous requirements.) Most states and districts do not test twice yearly, but without twice-a-year testing, differential summer loss cannot be detected. Schools whose poor children are learning over the year will suffer because summer loss will cause them to fall farther and farther behind their middle-class peers and to fail to show much growth in reading and math. It will thus appear that the schools are failing, and they will be blamed for what is happening - or, more accurately, is not happening - in the family and the community.
That said, I must also immediately add that there is a lot more to schools than test scores (hard as that might be to believe these days). In the May 2001 Research column, I reported on a study that showed enormous differences in beginning reading instruction in wealthy and poor classrooms. Freelance writer Michael Sokolove has now captured the essential differences nicely in the 24 February 2002 issue of the Washington Post Sunday Magazine. "As I observed them [poor students receiving Direct Instruction], it occurred to me that high-stakes tests are imposing a kind of instructional divide. The wealthy kids get the full package - instruction that is not rote, books that are rich in content. And the poor kids get the stripped-down model - only what they are perceived to need." I think the operative word in the preceding sentence is "perceived."
In a review of research on summer loss, Richard Allington and Anne McGill-Franzen of the University of Florida take note of a meta-analysis of 13 studies. The report of the meta-analysis also provided a narrative analysis of 24 other studies that were available but failed to provide sufficient data for the meta-analytic procedures. The conclusion of the meta-analysis is stark: one summer equals a three-month loss. Over the five summers between grades 1 and 6, this adds up to 1½ years.
A study that did not directly examine summer loss found that, in Title I programs, gains measured by spring-to-spring testing were much smaller than those measured by fall-to-spring testing. This finding implies summer loss and led the researchers to conclude: "Title I interventions during the regular school year alone may not sustain their relatively large Fall/Spring achievement improvements."
Allington and McGill-Franzen believe that much can be improved by increasing the quality of instruction in low-income schools. However, they also argue that "the scientific evidence on the accumulating impact of summer loss on the achievement gap is compelling." (Allington can be reached at email@example.com.)
They echo the findings of the National Reading Panel that hundreds of studies find that good readers read the most and poor readers, the least. The problem, of course, is figuring out causality. Since ethical considerations preclude researchers from systematically restricting a control group's access to literature, all the studies are perforce correlational. One study did find that the volume of summer reading was the best predictor of summer loss or gain.
Even highly motivated readers in poor neighborhoods are at a disadvantage, though. They get most of their reading material from school libraries, and these have older, smaller, and less diverse collections than school libraries in more affluent neighborhoods.
Outside of schools, affluent communities have three times as many stores that sell children's titles as poor communities. In one study, the worst-case scenario found that affluent families could purchase 16,000 children's titles; poor families, just 55. Naturally, income has an impact on how many books can actually be purchased, no matter how many are available.
The problem of access to books is magnified during summer. Even in high-poverty schools that have summer programs, the library is often locked. Allington and McGill-Franzen argue that poor students must have easy access to many books and that they must also be taught well.
As I said at the outset, no one in the policy arena appears willing to confront the reality of summer loss. The cynic in me says it's just easier to blame the schools. Schools are concrete and can be pointed to, while "family" and "community" are much harder to target. When I published an op-ed article about it in the Washington Post (16 January 2002), I got just a couple of letters and a few phone calls in response. When I posted the March column on my website, a couple of teachers wrote to say that I had confirmed their intuitions. Princeton University economist Alan Krueger wrote about summer loss in the New York Times and got more attention, but, apparently, nothing that affected policy or programs over the long haul.
More Trouble for Schools: The Volatility of Test Scores
David Grissmer of the RAND Corporation wrote, "The question is, are we picking out lucky schools or good schools, and unlucky schools or bad schools? The answer is, we're picking out lucky and unlucky schools." Grissmer was reacting to a paper by Thomas Kane and Douglas Staiger in which they showed that only about 20% of year-to-year changes in a school's test scores had anything to do with what was going on in the school. The rest was "noise" created by nonpersistent events such as a teachers' strike, the extended sick leave of one teacher, a group of disruptive students, or changes in inclusion rules for who gets tested. (I described this study in the 11th Bracey Report, which appeared in the October 2001 Kappan.)
Now come Robert Linn of the University of Colorado and Carolyn Haug of the Colorado Department of Education to affirm the conclusions of Kane and Staiger. Their research appears in the spring 2002 issue of Educational Evaluation and Policy Analysis. Using data from the Colorado Student Assessment Program, they found that schools that have large percentages of students who score well in one year are likely to gain less than schools that have lower percentages. This negative correlation of initial status with change is common in testing, but it's not supposed to happen that way in high-stakes programs.
When Linn and Haug looked at change scores over time, they found that the change in one two-year period was essentially uncorrelated with the change in the next, overlapping two-year period. "Knowing the magnitude of the gain or loss in percent proficient or advanced from 1997 to 1999 tells you essentially nothing about the change from 1998 to 2000." This means that schools rated high one year might well be rated low the next - and vice versa.
Indeed, the improvements in schools proved to be quite volatile. "This volatility results in some schools being recognized as outstanding and other schools identified as in need of improvement simply as the result of random fluctuations," Linn and Haug write. "It also means that strategies of looking to schools that show large gains for clues of what other schools should do to improve student achievement will have little chance of identifying those practices that are most effective." In other words, it's a waste of time.
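The statistical mechanism behind this volatility can be illustrated with a minimal simulation. This is a sketch of the general principle, not a reanalysis of the Colorado data: the school names, sample sizes, and noise levels below are invented for illustration. If each school's "true" proficiency is stable and observed scores merely add year-to-year noise from nonpersistent events, then high scorers in year one will tend, on average, to show smaller gains - the negative correlation of initial status with change that Linn and Haug report - purely as an artifact of random fluctuation.

```python
import random

# Hypothetical simulation (not Linn & Haug's actual data): each school has a
# stable "true" proficiency; observed scores add noise from nonpersistent
# events (a strike, one teacher's sick leave, a disruptive cohort).
random.seed(42)
n_schools = 2000
true_score = [random.gauss(50, 5) for _ in range(n_schools)]

def noise():
    # Noise variance chosen to dominate, echoing Kane & Staiger's finding
    # that most year-to-year change is not about the school itself.
    return random.gauss(0, 8)

year1 = [t + noise() for t in true_score]
year2 = [t + noise() for t in true_score]
change = [y2 - y1 for y1, y2 in zip(year1, year2)]

def corr(xs, ys):
    # Pearson correlation, computed directly.
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Schools that scored high in year 1 "gain" less in year 2: the correlation
# of initial status with change comes out clearly negative, even though no
# school actually got better or worse.
print(round(corr(year1, change), 2))
```

Under these assumptions the printed correlation is clearly negative, so ranking schools by one-year gains would "reward" and "punish" schools for nothing but luck - exactly the pattern Linn and Haug describe.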
Things could be improved, claim Linn and Haug, if states changed how they measure change. They should stop just testing, say, each year's fourth-graders and instead track the same students over time. Although the "longitudinal tracking of students presents logistical problems and conceptual ones, such as the appropriate accounting for and attribution of results for students who change schools, longitudinal analyses are the most direct and valid way to account for changes in student achievement." It seems to me that the testing required by the reauthorized ESEA will already drain more money from schools than the law provides, and, while longitudinal studies would improve accuracy, they would be even more costly - and perhaps impossible in regions of high mobility. All of this at a time when 45 of 50 states report budget deficits totaling $50 billion.
Although Linn and Haug don't discuss it, longitudinal studies also raise issues of state power and control: all children would have to have IDs that permit states to track them.
GERALD W. BRACEY is a research psychologist, a freelance writer, and an associate for the High/Scope Foundation. He lives in the Washington, D.C., area (e-mail: firstname.lastname@example.org). His newest book is The War Against America's Public Schools (Allyn and Bacon/Longman, 2002).
Author: Bracey, Gerald W. | Publication: Phi Delta Kappan | Date: Sep 1, 2002