Making a Difference: The Great Reading Adventure Revisited

In 2013, the idea of public libraries having a fresh platform to gamify summer reading programs became a reality with the Great Reading Adventure (GRA). The software was a game changer, introducing digital badges, robust reporting, and embedded literacy content into an open source framework, which remains free to use, modify, and share. It addressed a need to bring library services into the modern era of computing and user experience. It kick-started new conversations within communities of educators, developers, and librarians. It even won some awards.

But above all, it worked.

The GRA drew 64,987 participants in its pilot year for the Maricopa County Library District, a user base that grew to 77,880 last summer. Since it was first announced in CIL (May 2014), the software has seen a full cycle of code, sweat, and tears.

The first year of the GRA was all about getting the platform off the ground. It was a new application that had to be put through its paces in order to refine the user experience and make it a truly viable platform for summer reading programs. In the second year, our aim was to confirm that it worked. The GRA introduced a number of new concepts to summer reading programs, and we wanted to make sure they did what we intended.

Method

One of the GRA's most compelling features is its built-in assessment tool. This functionality gives us the long-sought-after ability to measure our objective impact on the infamous "summer slide," the learning loss experienced by a large portion of our nation's students as they transition between school years. With this feature, we were able to introduce literacy assessments into summer reading programs.

Through an innovative and award-winning partnership with the Maricopa County Education Service Agency, we developed a standardized literacy assessment to gauge the change in the reading comprehension scores of students exiting first and second grade. A full technical brief is available on the GRA development site, which explains the assessment blueprints, test maps, and standards.

Reading skills were measured with multiple-choice questions, based on reading levels from the Developmental Reading Assessment, over a 2-month window during the summer. Two pre-test and post-test pairs were constructed in parallel to offer the best possible comparison of raw scores. The tests were administered in the first and last weeks of the program and were taken by 286 students: 39 (age 6), 147 (age 7), and 100 (age 8).
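The article does not include the scoring pipeline itself, but the raw-score comparison it describes reduces to counting correct multiple-choice responses on matched pre- and post-tests. The following minimal sketch assumes a simple answer-key format; the function names, data layout, and example values are hypothetical.

    # Minimal sketch of the raw-score comparison described above: count correct
    # multiple-choice responses on a pre-test and its parallel post-test, then take
    # the difference. The answer-key format and all names are hypothetical.

    def raw_score(responses, answer_key):
        """One point per multiple-choice response that matches the answer key."""
        return sum(1 for given, correct in zip(responses, answer_key) if given == correct)

    def score_change(pre_responses, post_responses, answer_key):
        """Raw-score change for one student between matched pre- and post-tests."""
        return raw_score(post_responses, answer_key) - raw_score(pre_responses, answer_key)

    # Example with a hypothetical 15-item answer key
    key = ["A", "B", "C"] * 5
    pre = ["A", "B", "D"] * 5                     # 10 of 15 correct
    post = ["A", "B", "C"] * 4 + ["A", "D", "C"]  # 14 of 15 correct
    print(score_change(pre, post, key))           # 4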

Limitations

Before launching into the results of our work, we'd like to make note of our study's limitations. First and foremost, it looked at very few demographic factors outside of age and geography. We made no effort to collect socioeconomic information (outside of readily available Census data). For this study, we were looking exclusively at children's pre-test and post-test scores, correlating their performance with the amount of activity that was logged.

The literacy measure we used is still in its infancy. The tool was created by assessment development experts, but it has yet to go through the rigorous testing and analysis necessary to prove its efficacy. We were not looking for a definitive instrument for measuring children's literacy levels, merely a quick-and-dirty assessment to tell us whether our program was on the right track. With time and additional study, it will become more effective in measuring changes in reading comprehension scores.

There are also some technical limitations that will need to be addressed both by software updates and an assessment that's wider in scope. The measure we used doesn't target an individual at his or her own reading level. Instead, we took a more standardized middle path that had a number of questions designed for different reading levels, all delivered in the same assessment for participants of a specific age. In the future, assessments should be adapted to the user's reading level.
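The article does not prescribe how a level-targeted assessment would work, but one possible shape of such a feature is sketched below: each question is tagged with a reading level, and the set served to a participant is filtered by an estimated level rather than delivered as one fixed, mixed-level set per age group. The question tagging, level estimate, and all names are assumptions, not a GRA feature.

    # Hypothetical sketch of level-targeted question selection, contrasted with the
    # fixed, mixed-level assessment per age group described above.

    def select_questions(question_bank, reading_level, per_test=15, tolerance=4):
        """Pick questions whose tagged level is within `tolerance` of the child's
        estimated reading level, instead of serving one mixed set per age group."""
        eligible = [q for q in question_bank
                    if abs(q["level"] - reading_level) <= tolerance]
        return eligible[:per_test]

    # Example: a bank tagged with Developmental Reading Assessment-style levels
    bank = [{"id": i, "level": level, "text": "..."}
            for i, level in enumerate([4, 6, 8, 10, 12, 14, 16, 18, 20, 24])]
    print([q["id"] for q in select_questions(bank, reading_level=10)])  # [1, 2, 3, 4, 5]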

Finally, this study lacked a control group. In order to truly compare scores, we would need to have a group of children who did not participate in the summer reading program take the assessments as well.

Results

In general, children who participated in the summer reading program saw an increase in their reading comprehension scores by the end of the summer. The average pre-test score was 87.47%, and the average post-test score was 91.33%, a gain of 3.86 percentage points.

Children who earned between 500 and 1,000 points showed the greatest gains. Those logging 500 points or fewer increased their scores by 2.29%, and children logging more than 500 points increased theirs by 4.67%. In the Maricopa County Reads summer reading program, a free book was unlocked at 500 points, and other drawing prizes were awarded through 1,000 points. In addition, most of the assessment participants fell into these point ranges. This suggests that the offered incentives were effective at meaningfully increasing participation.

The GRA was designed to capture more than just reading. The application was designed with the idea that a good summer reading program should include modules that push beyond the standard reading log. Library programming, in-app activities, and community partnerships all contribute to the execution of a quality program, and our assessment results speak to that.

Children who participated in an offsite "community experience" showed an increase of 8.8% in their test scores. These experiences took place at local museums, science centers, and other community organizations, where children engaged in hands-on educational activities. They were rewarded with secret codes that could be entered into the GRA to earn points and badges. This was a way to incorporate experiential learning into our program. Similarly, children who attended programs inside the library experienced an increase of 4.4%.

The version of the GRA that was used for this study offered in-app engagement by way of electronic books, literacy games, and reading lists. Of these, the librarian-curated reading lists had the most impact, as children completing them saw an increase of 2.2%. The electronic books were correlated with an increase of 2%, and the games came in last with an increase of 1.4%.

The greatest gains experienced by our participants were correlated with tightly planned, mission-based activities. Our Queen Creek branch developed a superhero-themed mission system that allowed participants to collect multiple badges in order to level up and unlock secret powers. These badges were earned in a variety of ways, but mainly by redeeming secret codes offered through both active and passive programming. Children who participated in these activities had an average score increase of 10.8%.

However, the most compelling results come from comparing the GRA assessment data to Census information to contextualize the scores. Children in low-income areas are often a focus of library efforts, and our summer reading program is no exception. Children residing in ZIP codes in which 31% to 50% of households are living in poverty showed the greatest gains in their assessment scores, a leap of 6.2 percentage points, from 83.8% on the pre-test to 90% on the post-test.
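The article does not show how scores were matched to Census data, but the comparison amounts to a join on ZIP code followed by averaging within poverty bands. The sketch below assumes hypothetical field names, made-up example values, and a simplified banding of household poverty rates.

    # Sketch of contextualizing assessment gains with Census poverty data: join
    # per-student records to ZIP-level poverty rates, then average within bands.
    # The data layout is assumed for illustration, not the district's actual schema.

    from statistics import mean

    def poverty_band(rate):
        """Bucket a ZIP code's household poverty rate into simple bands."""
        if rate <= 0.30:
            return "0-30%"
        if rate <= 0.50:
            return "31-50%"
        return "51%+"

    def gains_by_poverty_band(students, zip_poverty):
        """Average pre-/post-test percentages per poverty band.

        students: iterable of dicts with 'zip', 'pre_pct', 'post_pct'
        zip_poverty: dict mapping ZIP code -> household poverty rate (0.0-1.0)
        """
        bands = {}
        for s in students:
            band = poverty_band(zip_poverty.get(s["zip"], 0.0))
            bands.setdefault(band, []).append(s)
        return {
            band: {
                "pre": mean(s["pre_pct"] for s in group),
                "post": mean(s["post_pct"] for s in group),
            }
            for band, group in bands.items()
        }

    # Example with two hypothetical students and made-up ZIP-level poverty rates
    students = [
        {"zip": "85201", "pre_pct": 83.0, "post_pct": 90.0},
        {"zip": "85251", "pre_pct": 92.0, "post_pct": 93.0},
    ]
    zip_poverty = {"85201": 0.42, "85251": 0.12}
    print(gains_by_poverty_band(students, zip_poverty))
    # {'31-50%': {'pre': 83.0, 'post': 90.0}, '0-30%': {'pre': 92.0, 'post': 93.0}}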

Best Practices

To ensure a positive and cohesive user experience, the assessments were incorporated into a digital badging theme that ran throughout the entire program. Badges act as a supplement within the online platform, but they also signify achievement, engagement, and personal identity. Secret codes were embedded into the assessments, which, when entered into reading logs, allowed students to unlock "hero" badges and serial installments of an original short story. The badges and stories were unlocked in increments of 250 points; one point was equivalent to one minute of reading time.
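As a rough illustration of that progression (not the GRA's actual API), the mapping from logged reading minutes to unlocked badge thresholds can be sketched as follows; the function names and return format are ours.

    # Sketch of the point-to-badge progression described above: one point per minute
    # of reading, with a badge (and a story installment) unlocked every 250 points.

    BADGE_INTERVAL = 250  # points between unlocks

    def points_from_minutes(minutes_read):
        """One point per minute of reading time, per the program's rules."""
        return minutes_read

    def unlocked_badges(total_points):
        """Badge thresholds reached so far: 250, 500, 750, ..."""
        count = total_points // BADGE_INTERVAL
        return [BADGE_INTERVAL * i for i in range(1, count + 1)]

    # A student who has logged 520 minutes has 520 points and has unlocked the
    # 250- and 500-point badges (the 500-point badge also earned a free book
    # in Maricopa County Reads).
    print(unlocked_badges(points_from_minutes(520)))  # [250, 500]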

More than 332,000 badges were earned throughout the summer. Students who unlocked the 500-point badge saw a jump of 2.36% in assessment scores, reinforcing the notion, long shared by libraries, that reading an average of 20 minutes a day pays off. In addition, organized badging campaigns more than doubled gains in comprehension scores, amplifying the impact of the literacy assessments. They seemed to enhance the fun of joining a story-based challenge with an unlockable narrative, overriding ideas of summer homework or the negative emotions associated with standardized tests.

Students who earned badges by redeeming secret codes from library events and from learning experiences provided by partner organizations, such as museums and other affiliates within our communities, saw a positive impact as well: a combined boost of 4.53% in assessment scores.

Raw literacy-score data ought to be reviewed before analysis. In this case, results were adjusted for duplicate accounts and activities, outlier scores, students who did not have both a pre-test and a post-test score, and students with a run of four or more A responses, A being the default response.
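Below is a minimal sketch of that review step applied to hypothetical per-student records; the field names and the simple out-of-range outlier rule are assumptions, not the authors' published procedure.

    # Sketch of the data-review rules described above, applied to hypothetical records.

    def has_default_response_run(responses, run_length=4):
        """True if four or more consecutive 'A' answers appear (the form's default)."""
        run = 0
        for r in responses:
            run = run + 1 if r == "A" else 0
            if run >= run_length:
                return True
        return False

    def clean_records(records, outlier_low=0, outlier_high=100):
        """Keep records with both test scores, no duplicate student IDs, no runs of
        default 'A' answers, and scores inside a plausible percentage range."""
        seen = set()
        kept = []
        for rec in records:
            if rec["student_id"] in seen:
                continue  # drop duplicate accounts/activities
            seen.add(rec["student_id"])
            if rec.get("pre_pct") is None or rec.get("post_pct") is None:
                continue  # need both a pre-test and a post-test score
            if (has_default_response_run(rec["pre_responses"])
                    or has_default_response_run(rec["post_responses"])):
                continue  # likely click-through: 4+ consecutive default 'A' answers
            if not (outlier_low < rec["pre_pct"] <= outlier_high
                    and outlier_low < rec["post_pct"] <= outlier_high):
                continue  # out-of-range/outlier scores (cutoffs are an assumption)
            kept.append(rec)
        return kept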

Future Development

Our goal in creating the GRA was to deliver a free, open source framework for libraries to use as an online foundation for summer reading programs. Since the software has demonstrated that a digital platform can successfully combat summer slide, the main goal is to improve engagement.

The challenge for development is twofold: create a system that is inviting for both administrators and end users. According to Harald Nagel, web developer for the Maricopa County Library District, modernizing and simplifying the user experience will not only make participation easier for students, but it will also streamline the theming and adding of content for librarians.

A mobile app is in the works to ensure a responsive design and a seamless experience across computers, tablets, and smartphones. In March 2016, a substantial overhaul of the existing code expanded the project to new frontiers with a more responsive Bootstrap layout, security improvements, and a new, mission-based Challenges module.

Access

Discover more about the Great Reading Adventure at greatreadingadventure.com. Version 2.2.1 is available now, featuring improved stability and security from the pilot version. A forum for help and details on contributing is also available at forum.greatreadingadventure.com.

Caris O'Malley

(comalley@spokanelibrary.org) is the innovation and outcomes director at the Spokane Public Library.

Antonio Apodaca

(antonioramonapodaca@gmail.com) is the makerspace librarian for the Ventura County Library.

Points Earned      Average Pre-Test Score   Average Post-Test Score   Change

Less than 250      13.33                    13.50                     +1.25%
250-500            13.04                    13.38                     +2.56%
500-1,000          12.91                    13.76                     +6.58%
1,000-1,500        13.37                    13.94                     +4.26%
More than 1,500    13.32                    13.70                     +2.85%

Assessment Scores (overall averages)

Start  87.47%
End    91.33%

Note: Tables made from bar graphs.