
Effects of Behavioral Skills Training on Teacher Implementation of a Reading Racetrack Intervention.

Abstract

This study examined the effects of behavioral skills training (BST) on teachers' implementation fidelity of a reading racetrack (a board game designed to increase sight word fluency) with elementary students identified as struggling readers. BST, an alternative to traditional professional development, is a performance-based protocol incorporating instruction, modeling, rehearsal, and feedback. A multiple probe design across teacher-student dyads demonstrated that BST was functionally related to the teachers' implementation of a reading racetrack with 100% fidelity on at least three consecutive sessions. Additionally, students met mastery criteria for sight word acquisition and demonstrated maintenance one to two weeks post-intervention.

Keywords: reading, intervention, fidelity, behavioral skills training

**********

Factors affecting student academic achievement are numerous and complex. Variables such as family socioeconomic status and parenting practices are frustratingly beyond the control of school systems and educators (Lee & Shute, 2010; Oxford & Lee, 2011). One area that can be improved is teacher competency (Coles, Owens, Serrano, Slavec, & Evans, 2015; Shaffer & Thomas-Brown, 2015). Yet, teachers can only be as competent as they know how to be. Teachers must be well-versed in core content, instructional strategies, and current evidence-based practices. For practicing teachers, professional development is the avenue through which they gain and maintain competency.

However, the importance of professional development stands in contrast to how most teachers view it. According to a 2004 survey of teachers, 42% of respondents indicated that professional development "leaves something to be desired" or "is a waste of time" (Peter D. Hart Research Associates & Harris Interactive, 2004). In another, more recent national survey (n = 890), 51% of teachers agreed that, in terms of improving teacher effectiveness, improving professional development would be "very effective," and 44% indicated improving professional development would be "somewhat effective" (Coggshall, Ott, & Lasagna, 2010).

High levels of dissatisfaction with professional development among teachers, combined with insufficient resources, have created a need for efficient solutions. How can teachers be provided high-quality professional development they believe is beneficial to them while being cost effective? And although high-quality and cost-effective professional development is critical, of equal importance is the need for evidence-based professional development. The National Comprehensive Center for Teacher Quality (Archibald, Coggshall, Croft, & Goe, 2011) conducted a review of the literature on professional development and identified five characteristics of effective professional development: (a) alignment with school goals, state and district standards and assessments, and other professional learning activities, including formative teacher evaluation; (b) focus on core content and modeling of teaching strategies for the content; (c) inclusion of opportunities for active learning of new teaching strategies; (d) provision of opportunities for collaboration among teachers; and (e) inclusion of embedded follow-up and continuous feedback.

One method of professional development that has the potential to meet these criteria is behavioral skills training (BST). BST requires active participant engagement and includes instruction, modeling, rehearsal, and feedback (Parsons, Rollyson, & Reid, 2012). These primary elements reflect the literature on coaching, much of which focuses on the role of performance-based feedback (Conroy, Sutherland, Vo, Carr, & Ogston, 2014; Lewis & Newcomer, 2002; Sterling-Turner, Watson, & Moore, 2002). For example, Conroy et al. (2014) examined the effectiveness of BEST in CLASS (Behavioral, Emotional, and Social Training: Competent Learners Achieving School Success), a coaching model incorporating performance feedback. Results indicated coaching with feedback was effective for training early-childhood teachers to use praise, provide feedback, and increase opportunities for students to respond.

With roots in applied behavior analysis, BST provides a framework for effectively teaching a new skill. Parsons et al. (2012) outlined the following BST training steps: (a) verbally describe the target skill; (b) provide a succinct, written description of the skill; (c) demonstrate the target skill; (d) require trainees to practice the target skill; (e) provide feedback during practice; and (f) repeat the practice and feedback steps until mastery is achieved. This method stands in contrast to traditional professional development (e.g., one-time workshops), which yields mixed results in terms of impact on teacher performance or student outcomes (Darling-Hammond, Wei, Andree, Richardson, & Orphanos, 2009; Yoon, Duncan, Lee, Scarloss, & Shapley, 2007).

Research has demonstrated BST is an effective training method for adults and children with disabilities, and for staff who work with individuals with disabilities (Fetherston & Sturmey, 2014; Parsons et al., 2012). Studies conducted with adults with disabilities have primarily targeted maladaptive behavior, such as stereotypy (e.g., Dib & Sturmey, 2007), and communication skills, such as vocal and nonvocal conversation skills (e.g., Nuernberger, Ringdahl, Vargo, Crumpecker, & Gunnarsson, 2013). BST studies with children have demonstrated that the training protocol is effective for teaching abduction-prevention skills (Beck, Miltenberger, & Ninness, 2009), preventing gun play (Jostad, Miltenberger, Kelso, & Knudson, 2008), and teaching safety skills (Vanselow & Hanley, 2014).

The outcomes of studies evaluating the use of BST with staff are particularly robust (e.g., Fetherston & Sturmey, 2014; Nigro-Bruzzi & Sturmey, 2010; Parsons et al., 2012; Rosales, Stone, & Rehfeldt, 2009). Parsons et al. (2012) used BST to teach a group of staff working with adults with developmental disabilities to use most-to-least prompting and to use sign language when working with their clients. Staff increased their most-to-least prompting skills from a mean of 50% steps completed correctly in baseline to 92% steps completed correctly in post-intervention. BST also resulted in staff increasing their percentage of correctly produced signs from a mean of 20% at baseline to 93% post-intervention.

Aiming to systematically replicate previous BST studies (e.g., Dib & Sturmey, 2007; Lafasakis & Sturmey, 2007; Ryan, Hemmes, Sturmey, Jacobs, & Grommet, 2008; Sarokoff & Sturmey, 2004; Ward-Horner & Sturmey, 2008), Fetherston and Sturmey (2014) conducted three experiments with a small group of instructors at a school for students with developmental disabilities. Using BST to train staff to implement discrete trial training (DTT), incidental teaching, and activity schedules, they demonstrated a functional relation between BST and instructor responses. Data showed instructors improved DTT implementation from an average of 32% accuracy in baseline to 89% accuracy post-BST. Instructor implementation of incidental teaching increased from an average of 28% of their attempts pre-training to an average of 90% following the first BST session. Post-training data demonstrated that the instructors increased the percentage of steps implemented for activity schedules from 36.5% in baseline to 91% or greater in intervention. Additionally, student participants demonstrated an increase in responding (i.e., independent correct responses as defined in the individual student's program) for all three teaching procedures and a decrease in disruptive behaviors (e.g., repetitive movements, refusals).

BST has also been used to teach mand training (i.e., teaching students to request preferred items; Skinner, 1957) to special education teachers and speech therapists (Nigro-Bruzzi & Sturmey, 2010). With the exception of one participant and her assigned student, who were removed from the study because of an increasing trend in baseline, staff showed a marked improvement in percentage of steps implemented correctly between baseline and post-training. Furthermore, although outcomes were somewhat variable, student participants demonstrated an increase in unprompted mands post-training.

The current study seeks to extend the BST literature by evaluating its effects on teacher implementation of an instructional intervention targeting an academic skill. Specifically, teachers were trained to implement a reading fluency intervention, a reading racetrack, with an elementary student. In contrast to previous studies conducted with students with developmental disabilities, the student participants in this study were struggling readers, but otherwise typically developing. Teachers identified struggling readers by reviewing student data from Accelerated Reader and DIBELS results. Students read sight words printed in cells around a board game. The game provides a convenient framework for incorporating direct teaching, repeated practice, and systematic feedback, all components of effective reading fluency intervention (Archer & Hughes, 2011; National Reading Panel [NRP], 2000).

Results from several studies have provided evidence that a reading racetrack is an effective reading intervention for elementary students receiving special education services in general education settings (e.g., Crowley, McLaughlin, & Kahn, 2013; Higgins, McLaughlin, Derby, & Long, 2012). For instance, Falk, Band, and McLaughlin (2003) demonstrated that a reading racetrack increased sight word vocabulary and decreased errors in 1-min timings for three third graders with specific learning disabilities. In a more recent study, Crowley et al. (2013) incorporated Direct Instruction flashcards into the game with a first grader and a kindergartener, both diagnosed with autism spectrum disorder. Each participant increased the number of words read correctly in their respective word sets.

To date, interventionists in most reading racetrack studies have been researchers rather than practicing classroom teachers (e.g., Crowley et al., 2013; Falk et al., 2003). In order to examine the extent to which practitioners can apply this intervention with fidelity, the purpose of this study was to extend previous reading racetrack research by examining the effects of BST on procedural fidelity of teacher implementation of reading racetracks. Specifically, this study was designed to answer the following research questions.

1. What are the effects of BST on procedural integrity and maintenance of fidelity of teacher implementation of a reading racetrack with elementary students?

2. What are the effects of a reading racetrack on student acquisition and maintenance of new sight words?

3. What are the opinions of teachers and students about the intervention and its effects on the participants?

Method

Participants and Setting

This study was conducted at a private, parochial Montessori school that serves students in pre-kindergarten through eighth grade. The school adhered to the primary principles of Montessori education, including Montessori-trained teachers, multi-age classrooms (i.e., 3-year groupings), child-directed activities, and uninterrupted work time (Core Components of Montessori Education, 2017). All classrooms in the current study reflected those principles and were characterized by highly organized physical spaces featuring specialized (i.e., Montessori) instructional materials. Instead of students working at desks, they worked either on the floor on rugs or at larger tables where multiple students could gather.

In Montessori classrooms, including those in which the current study was conducted, academic content is delivered in large- and small-group formats. For small-group instruction, students are grouped based on skill rather than age. For example, a first grade student who is excelling in reading may be paired with second and third graders who are reading at a similar level. If a student is behind in a particular content area, that student may join a group of younger students when the teacher provides instruction. Additionally, teachers help students create individualized weekly lists of tasks that are divided by content area. Students are expected to complete these tasks independently or in pairs.

Three teacher-student dyads participated in this study. All teacher training sessions were conducted in empty classrooms or the staff lounge. All reading racetrack sessions with students were conducted in their respective classrooms (i.e., three separate classrooms) while the other students were engaged in various activities. Each teacher was certified through the American Montessori Society. Ms. Lee (pseudonyms are used for all teachers and students) taught in a 6- to 9-year-old classroom. She had a master's degree in education and had been teaching for six years. Ms. Monroe taught at the preschool/kindergarten level. She had a bachelor's degree in international studies and Spanish and had taught for 16 years. Ms. Beeker also taught at the preschool/kindergarten level. She had a bachelor's degree in early childhood education, a reading endorsement, and six years of teaching experience.

Parent permission forms and a description of the proposed study were sent home with all students in the preschool/kindergarten class and the primary (grades 1-3) class; however, only students identified by their teachers as struggling readers were eligible to participate. Three parents provided permission for their children to participate. Based on classroom assessment data (Accelerated Reader levels and DIBELS scores) and teacher recommendation, each of these students was identified as a struggling reader who would benefit from targeted sight word instruction. Aaron was a third grader (paired with Ms. Lee), and Alexander (paired with Ms. Monroe) and Nathan (paired with Ms. Beeker) were in kindergarten. All students were white and from middle-class socio-economic backgrounds and participated in regular education classrooms.

The experimenter (i.e., first author), a special education doctoral student, conducted the teacher training sessions individually with each of the three teachers. An additional staff member, the school art teacher, joined each training session, providing the opportunity for the teacher trainee to practice implementing the game. None of the teachers had any prior knowledge of the intervention.

Definition and Measurement of Dependent Variables

Percentage of correctly implemented game steps. The primary dependent variable was the percentage of steps teachers implemented correctly when using a reading racetrack with a student. After teachers were trained on implementation procedures via BST, they were instructed to play the game with their student. The experimenter used a 12-step task analysis (see Table 1) of the game procedures and checked observed/correct if the teacher implemented the step correctly. If the teacher did not implement the step, or implemented the step incorrectly, that step was counted as not observed/incorrect. For example, the first step in the task analysis required teachers to point to and read aloud each of the first 10 words on the racetrack. If the teacher read each of the 10 words while pointing to each word, that step was scored as correct. If she read the words, but did not point to the words (or failed to implement any other part of that step), that step was scored as incorrect.
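As a worked illustration (the session values below are hypothetical and do not refer to any specific observed session), a session's fidelity score follows directly from the 12-step task analysis:

\[
\text{Fidelity} = \frac{\text{number of steps implemented correctly}}{12} \times 100\%, \qquad \text{e.g., } \frac{9}{12} \times 100\% = 75\%.
\]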

Number of words read correctly. A secondary dependent variable was the number of sight words read correctly by student participants. Each racetrack had 30 cells targeting 10 sight words; each of the 10 words was printed three times in random order around the racetrack. The first 10 cells on the racetrack were filled with the 10 unknown sight words. To determine words read correctly, the teacher pointed to those 10 words on the racetrack and asked the student to read each word. If the student responded correctly within 3 s, the word was counted as correct. If the student did not respond within 3 s, misread the word, or said "I don't know," it was counted as incorrect.

Experimental Design and Procedures

We used a multiple probe design across teacher-student dyads. The experimental conditions were baseline, intervention, and maintenance. Prior to baseline, a preassessment was conducted to determine unknown words.

Student preassessment. To create a bank of unknown words for each student, the experimenter presented each student with 40 flashcards, each with a Dolch sight word printed on it (Dolch, 1948). Participants were shown all 40 cards during one visit. Word lists were individualized for each participant: Aaron and Nathan were presented with sight words from the Dolch third grade list, and Alexander was presented with sight words from the kindergarten list. The experimenter recorded the sight words read correctly and incorrectly. If a student read a word correctly during the preassessment, that word was removed and replaced with a new, unknown word from the same word list selected for that student. Each student ended up with three sets of 10 unknown words to use for the intervention.

Teacher baseline. Prior to BST, teacher participants were provided a written description of a reading racetrack (see Table 2). The description included general information about the purpose of a reading racetrack game and the instructional components. Teachers were given 10 minutes to read the description. Then, they were instructed to implement the game with another staff member. The experimenter recorded on a checklist the number of steps teachers implemented correctly and incorrectly.

Student baseline. During student baseline sessions, which took place on the same day as teacher baseline sessions, the experimenter administered sight word identification probes using a set of 10 words. If the student read any of the sight words correctly during a baseline probe, those words were removed from the list, replaced with new sight words from the bank, and the revised set of 10 words was administered the following session. The experimenter sat across from the student and gave the student the following instruction: "I will show you flashcards one at a time. Do your best to read the word. After I say, 'ready, go,' that is when you begin reading the words. Ready, go." If the student was not ready, the experimenter repeated, "Ready, go." The experimenter placed flashcards read correctly and incorrectly in separate piles. At the conclusion, the experimenter said, "Good job, thank you for playing." The experimenter recorded sight words read correctly and incorrectly on the data sheet.

BST intervention. The independent variable in this study was BST. The experimenter trained each teacher in a one-on-one setting with an additional staff member present to serve as the student for training purposes. First, the experimenter verbally described the purpose of a reading racetrack, including materials and procedures. Second, the experimenter provided a written description of how to implement a reading racetrack. Next, the experimenter demonstrated playing the game with the teacher by following the 12-step task analysis. Then, the teacher practiced playing the game with the additional staff member. During the practice session, the experimenter provided feedback. If a step was implemented correctly the experimenter said, "Nice job on that step." If the teacher implemented the step incorrectly, the experimenter read the description of that step, answered any questions from the teacher, and directed the teacher to try again. This process continued until the teacher performed the step correctly without feedback. All steps of the task analysis were repeated until the teacher performed all steps to 100% mastery.

Post-intervention. At the beginning of each post-intervention session, the experimenter told the teacher to implement a reading racetrack with her student as best she could. During this phase, the experimenter did not provide any instruction or feedback. Post-intervention sessions lasted less than 10 min.

Maintenance. Maintenance data were collected two weeks post-intervention for Ms. Lee and Ms. Monroe. Due to time constraints (i.e., the end of the school year approaching), maintenance data were collected one week post-intervention for Ms. Beeker. For each maintenance probe, the teacher implemented a reading racetrack with her student using the same game procedures from the intervention sessions, without instructions or feedback from the experimenter. Maintenance data were collected with each student using the same sets of 10 sight words that were directly taught and assessed during the reading racetrack intervention.

Interobserver Agreement (IOA)

The experimenter and an additional staff member from the study site scored IOA on intervention steps for 33% of baseline, intervention, and maintenance sessions across teacher-student dyads. Prior to data collection, the experimenter and staff member verbally reviewed each step of a reading racetrack procedure. The experimenter provided examples and non-examples of each step of the game task analysis. During teacher implementation of the game, the experimenter and staff independently scored each step carried out as observed/correct or not observed/incorrect. The experimenter and staff compared agreements and disagreements on an item-by-item basis for the primary and secondary dependent variables.

IOA was calculated by dividing the number of agreements by the sum of agreements and disagreements and multiplying by 100. IOA for game procedures implemented correctly was 100% for all sessions and all three teachers. IOA for words read correctly by students was 100% across all participants.
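Expressed as an equation (a standard point-by-point agreement formula, written out here only to make the order of operations explicit):

\[
\text{IOA} = \frac{\text{number of agreements}}{\text{number of agreements} + \text{number of disagreements}} \times 100\%
\]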

Procedural Integrity

Procedural integrity of the teacher training was assessed for 33% of intervention sessions. The same staff member who collected IOA data was provided with a six-step checklist (see Figure 1) of the BST steps. She observed the experimenter provide BST to each teacher and scored each training step as observed or not observed. Procedural integrity was calculated by dividing the number of observed steps by the total number of steps and multiplying by 100%. Mean procedural integrity was 95% (range: 86-100%).

Consumer Satisfaction

To assess consumer satisfaction, at the end of the study the experimenter provided the teachers and students with questionnaires related to the effectiveness of BST and a reading racetrack. Consumer satisfaction for the teachers was assessed by having them respond to statements using a five-point rating scale (ranging from "strongly agree" to "strongly disagree"). Specifically, the teachers were asked to rate the effectiveness of the BST training they received and the effectiveness of a reading racetrack on student sight word acquisition. They were also asked to rate the ease with which the intervention could be implemented and the ease of making the game materials. Additionally, an open-ended question was included: "Would you use this intervention to increase sight word recognition in other students, why or why not?" Success was measured by whether the teachers agreed or strongly agreed with the majority of the statements and whether they indicated they would implement the game in the future.

To assess student satisfaction, post-intervention, the experimenter read three questions to each student individually and instructed the student to respond by marking either a smiley face picture indicating "agree" or a sad face picture indicating "disagree." The statements were as follows: (1) I like the reading racetrack, (2) The reading racetrack helped me become a better reader, and (3) I think playing the reading racetrack game is a good way for me to learn sight words.

Results

Figure 2 shows the percentage of correctly implemented game steps by the teacher participants across experimental conditions. Training sessions with each teacher lasted approximately 15 to 20 min. Ms. Lee performed none of the procedures on the task analysis across three consecutive baseline sessions. Her data demonstrate an immediate and marked increase in steps implemented correctly (i.e., 75% correct) in the first session post-BST; during that first post-intervention session, she omitted two steps related to providing the student specific praise. She performed 100% of the steps correctly during the second post-intervention session and continued to implement 100% of the steps correctly across consecutive sessions (i.e., Sessions 5-8).

Ms. Monroe also met mastery criteria within two sessions. She increased the percentage of steps implemented correctly from 0% throughout baseline to 75% during her first post-intervention session. She did not provide her student with specific praise during the first session, but did so after being given feedback from the experimenter. She performed 100% of the steps correctly for the four subsequent consecutive sessions (i.e., Sessions 8-11).

Ms. Beeker demonstrated the most immediate and marked response post-intervention, increasing from 0% of steps implemented correctly during baseline to 100% across consecutive post-intervention sessions (i.e., Sessions 7-11). Ms. Beeker maintained 100% of steps implemented correctly one week post-intervention, and Ms. Lee and Ms. Monroe maintained 100% of steps implemented correctly two weeks post-intervention.

Figure 3 illustrates the number of words read correctly by student participants. After stable responding in baseline, Aaron's data demonstrate an immediate increase in sight words read correctly, from 0 in baseline to 5 in the first intervention session. He reached the mastery criterion of 10 words read correctly in the second intervention session, then read between 9 and 10 words correctly across consecutive sessions (i.e., Sessions 6-9).

Alexander's data demonstrate a high level of correct responding after intervention. His data path shows an ascending trend, with 8 words read correctly in Sessions 9 and 10, 9 correct words in Session 11, and 10 correct words in Sessions 12 and 13. One maintenance probe was conducted with Aaron and Alexander one week post-intervention: Aaron maintained all 10 words, and Alexander maintained 9 words. Nathan's data demonstrate he acquired 5 sight words during the first intervention session. His data trend upward, with 8 to 10 words read correctly across six consecutive sessions. He maintained the majority of his sight words one week post-intervention.

Consumer Satisfaction

All teacher participants responded to the questionnaire. Each of the teachers rated the training method (BST) as very effective. One teacher commented that the modeling and feedback components of the method were helpful in being able to carry out the game with her student and that BST was especially helpful for understanding the specifics of the error correction procedure.

In response to the question about the extent to which a reading racetrack was instrumental in improving their students' abilities to read the target sight words, two out of the three teachers strongly agreed and the third agreed. All three teachers also strongly agreed that a reading racetrack is relatively simple to implement and that it could be easily constructed without consuming many resources. Each said that she would be willing to use the intervention with other students. One teacher commented that the game was easy to use, did not use much class time, and held the student's interest. Another teacher commented that her student enjoyed the game, it was simple for her to implement, and it was effective.

Students' responses to their questionnaire were overall positive. All three students indicated they liked playing a reading racetrack, that it helped them become better readers, and that it was a good way to learn sight words. One of the students commented that he liked learning new words by playing a game.

Discussion

The primary purpose of this study was to determine the effects of BST on teacher implementation of a reading racetrack with elementary students. Results demonstrate a functional relation between BST and implementation of a reading racetrack with procedural fidelity. Teacher participants correctly implemented the racetrack intervention with their respective students only after being trained to do so via BST, and maintained their skills with 100% fidelity one to two weeks post-intervention. These results support previous BST research indicating BST is an effective training method (e.g., Fetherston & Sturmey, 2014; Nigro-Bruzzi & Sturmey, 2010; Rosales et al., 2009; Vanselow & Hanley, 2014), including studies that assessed maintenance (i.e., Rosales et al., 2009; Vanselow & Hanley, 2014).

The results of the current study support past findings that demonstrate BST is an effective method of training staff and teachers (i.e., Fetherston & Sturmey, 2014; Nigro-Bruzzi & Sturmey, 2010; Parsons et al., 2012). However, results from the current study indicate that the experimenter needed to prompt some teacher participants to provide feedback and praise. One reason for this may be the relatively long task analysis (i.e., 12 steps), whereas previous studies trained staff using shorter task analyses (i.e., 8 or 10 steps; e.g., Nigro-Bruzzi & Sturmey, 2010; Sarokoff & Sturmey, 2004). Had the current experiment used a somewhat shorter task analysis, reminders for particular steps may not have been necessary. Nonetheless, the teachers in the present study were able to correctly use the intervention post-BST training either immediately or within two sessions.

The present study extends the literature on BST in two ways. First, previous studies have used BST to train staff to implement behavioral interventions with children or adults with disabilities. Given that BST is rooted in applied behavior analysis, whose primary purpose is improving socially significant behaviors (Baer, Wolf, & Risley, 1968), it is not surprising that researchers have focused on two areas of social significance: social skills (e.g., abduction-prevention; Beck, Miltenberger, & Ninness, 2009) and communication skills (e.g., vocal and nonvocal conversation skills; Nuernberger, Ringdahl, Vargo, Crumpecker, & Gunnarsson, 2013). The current study broadens the extant literature by effectively applying BST to the implementation of an academic intervention (i.e., a reading racetrack). Naturally, more research is required to establish BST as an evidence-based training method for teachers implementing reading interventions. However, this study demonstrates that BST is a viable method for training teachers to deliver such interventions.

A second way in which the current study extends the BST literature is the inclusion of student participants without disability diagnoses. Previous BST studies involved training staff to implement an intervention with children with disabilities (e.g., autism spectrum disorder; Dib & Sturmey, 2007) and adults with disabilities (e.g., developmental disabilities; Parsons et al., 2012). All three boys in the present study were struggling readers, with their respective teachers recommending them for this study because they believed they would benefit from extra sight word instruction.

The secondary purpose of this study was to evaluate the effectiveness of a reading racetrack on the acquisition and maintenance of novel sight words. The current study supports previous reading racetrack research by demonstrating that a reading racetrack was indeed effective for each of the student participants. Student data show an immediate and marked improvement in sight word reading after the first intervention session with a steady increase over subsequent sessions. All participants reached 100% mastery within two to four sessions. Moreover, students maintained the majority of sight words one to two weeks post-intervention.

In addition to positive teacher and student outcomes, teachers' and students' opinions revealed that both groups of participants viewed the intervention as effective. Teachers appreciated the quick pace of BST, found the training to be effective, and indicated they would use the intervention in the future. One teacher expressed that it was a good way to learn how to implement a novel intervention. From the student perspective, the game was enjoyable and helped them become better readers.

BST is a performance-based training protocol, deviating from traditional training approaches to professional development for teachers and staff, such as workshops and lectures. The primary rationale for using BST is that it is highly effective. With multiple training steps, BST takes more time to implement than lectures and workshops and requires trainees to practice all steps and receive feedback until they can complete them with 100% fidelity. However, the time investment is likely to be worthwhile given the robust effects of training and decreased need for retraining. This method also has the potential to meet the criteria for effective professional development as reported by the National Comprehensive Center for Teacher Quality (Archibald et al., 2011). BST can be used to train teachers and staff to implement practices related to core content, incorporating modeling of those teaching strategies. BST is an active method of learning a new skill: Teachers practice the new skill as part of BST and receive continuous feedback throughout the training.

Limitations and Future Research

Although this study demonstrated a functional relation between BST and reading racetrack implementation fidelity, several limitations should be considered. First, teacher baseline scores may have been deflated because teachers were not actively trying to implement the intervention. Although all teacher participants were able to implement the reading racetrack after training, none of the teachers implemented any of the steps correctly during the baseline condition, including those steps they likely could have gleaned from the general description provided (e.g., using a timer). Instead, they tended to point to each word and have the student imitate, or to ask the student to read the words he knew. Future studies might emphasize to teachers that they must try to teach the target words.

Second, there were limitations related to maintenance and generalization. Only one maintenance probe was conducted per dyad, and it occurred only one or two weeks post-intervention. Due to the school schedule (i.e., testing periods, spring break), further maintenance probes could not be conducted. It is important to program for maintenance so that behavior changes continue over time; evidence shows that when an intervention comes to an abrupt end, maintenance of the behavior is not likely to occur naturally (Walker & Buckley, 1972). Future studies on BST should include additional maintenance probes and longer maintenance phases (e.g., 4-6 weeks).

Third, due to time constraints, the experimenters did not assess generalization for teacher participants or students. Including a generalization component, such as having teachers implement a reading racetrack with other students or in different contexts, would strengthen future studies. Future research should also address generalization for students. For example, future researchers might have students read their newly acquired sight words in sentences or paragraphs, rather than in isolation.

Finally, experimenters limited social validity data collection to consumer satisfaction. Future research should go beyond collecting opinions of participants and attempt to include behavioral observations, preference assessments, and/or comparisons to successful peers. For instance, when given a choice between practicing words on a racetrack or on flashcards, which option would the students choose? Following implementation of the racetrack intervention, are students' gains sufficient to advance them to a reading level comparable to their successful classmates? A more robust generalization measure might also contribute to documenting social validity of treatment outcomes.

Implications for Practitioners

BST is an evidence-based method used in the field of applied behavior analysis to train staff who work with children and adults with disabilities. This study demonstrated that BST was effective in training three teachers to implement a reading intervention, a reading racetrack, with struggling readers. Teachers were able to rapidly acquire the steps of the racetrack game and implement them with 100% fidelity.

Similar to findings from other studies (e.g., Parsons et al., 2012), teachers in the current study reported they found this type of hands-on training acceptable. Along with appealing to teachers, there are aspects of the training that may appeal to school districts as well. With BST, current teachers and staff can be utilized to train others: individuals who are trained to implement BST can then provide training to numerous staff. The concept is similar to various coaching models that have proven effective for training teachers (e.g., Conroy et al., 2014; Sailors & Price, 2015; Stormont, Reinke, Newcomer, Marchese, & Lewis, 2015). Although BST requires substantial time, admittedly a precious resource in schools, this study provides further evidence that it is worth the time. In conclusion, for teachers who are required to attend regular professional development opportunities, BST may not fully replace traditional professional development, but it may counterbalance typical workshop-style presentations and be a smart use of the resources available to school districts.

Carrie A. Davenport

Sheila R. Alber-Morgan

Moira Konrad

The Ohio State University

Author note: This research was supported in part through a grant from the National Institutes of Health (NIDCD R01 DC014956).

Address correspondence to: Carrie Davenport, 110 Pressey Hall, 1070 Carmack Road, Columbus, OH 43210. E-mail: davenport.182@osu.edu

References

Archer, A. L., & Hughes, C. A. (2011). Explicit instruction: Effective and efficient teaching. New York: The Guilford Press.

Archibald, S., Coggshall, J. G., Croft, A., & Goe, L. (2011). High-quality professional development for all teachers: Effectively allocating resources. Research & Policy Brief. National Comprehensive Center for Teacher Quality. Retrieved from http://eric.ed.gov/?id=ED520732

Baer, D. M., Wolf, M. M., & Risley, T. R. (1968). Some current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis, 1, 91-97. doi:10.1901/jaba.1968.1-91

Beck, K. V., Miltenberger, R. G., & Ninness, C. (2009). Evaluation of a commercially available program and in situ training by parents to teach abduction-prevention skills to children. Journal of Applied Behavior Analysis, 42, 761-772. doi:10.1901/jaba.2009.42-761

Coggshall, J. G., Ott, A., & Lasagna, M. (2010). Retaining teacher talent: Convergence and contradictions in teachers' perceptions of policy reform ideas. Naperville, IL: Learning Point Associates and New York, NY: Public Agenda. Retrieved from https://www.publicagenda.org/files/Convergence_Contradiction.pdf

Coles, E. K., Owens, J. S., Serrano, V. J., Slavec, J., & Evans, S. W. (2015). From consultation to student outcomes: The role of teacher knowledge, skills, and beliefs in increasing integrity in classroom management strategies. School Mental Health, 7, 34-48. doi:10.1007/s12310-015-9143-2

Conroy, M. A., Sutherland, K. S., Vo, A. K., Carr, S., & Ogston, P. L. (2014). Early childhood teachers' use of effective instructional practices and the collateral effects on young children's behavior. Journal of Positive Behavior Interventions, 16, 81-92. doi:10.1177/1098300713478666

Core Components of Montessori Education. (2017). Retrieved February 2, 2018, from https://amshq.org/Montessori-Education/Introduction-to-Montessori/Core-Components-of-Montessori-Education

Crowley, K., McLaughlin, T., & Kahn, R. (2013). Using direct instruction flashcards and reading racetracks to improve sight word recognition of two elementary students with autism. Journal of Developmental and Physical Disabilities, 25, 297-311. doi:10.1007/s10882-012-9307-z

Darling-Hammond, L., Wei, R. C., Andree, A., Richardson, N., & Orphanos, S. (2009). Professional learning in the learning profession. Washington, DC: National Staff Development Council. Retrieved from http://impact.sp2.upenn.edu/ostrc/docs/document_library/ppd/Professionalism/Professional%20Learning%20in%20the%20Learning%20Profession.pdf

Dib, N., & Sturmey, P. (2007). Reducing student stereotypy by improving teachers' implementation of discrete-trial teaching. Journal of Applied Behavior Analysis, 40, 339-343. doi:10.1901/jaba.2007.52-06

Dolch, E. W. (1948). Problems in reading. Champaign, IL: The Garrard Press.

Falk, M., Band, M., & McLaughlin, T. F. (2003). The effects of reading racetracks and flashcards on sight word vocabulary of three third grade students with a specific learning disability: A further replication and analysis. International Journal of Special Education, 18, 57-61.

Fetherston, A. M., & Sturmey, P. (2014). The effects of behavioral skills training on instructor and learner behavior across responses and skill sets. Research in Developmental Disabilities, 35, 541-562. doi:10.1016/j.ridd.2013.11.006

Higgins, M., McLaughlin, T. F., Derby, K. M., & Long, J. (2012). The differential effects of direct instruction flashcards on sight-word identification for two preschool students with autism spectrum disorders. Academic Research International, 2, 394.

Jostad, C. M., Miltenberger, R. G., Kelso, P., & Knudson, P. (2008). Peer tutoring to prevent firearm play: Acquisition, generalization, and long-term maintenance of safety skills. Journal of Applied Behavior Analysis, 41, 117-123. doi:10.1901/jaba.2008.41-117

Lafasakis, M., & Sturmey, P. (2007). Training parent implementation of discrete-trial teaching: Effects on generalization of parent teaching and child correct responding. Journal of Applied Behavior Analysis, 40, 685-689. doi:10.1901/jaba.2007.685-689

Lee, J., & Shute, V. J. (2010). Personal and social-contextual factors in K-12 academic performance: An integrative perspective on student learning. Educational Psychologist, 45, 185-202. doi:10.1080/00461520.2010.493471

Lewis, T. J., & Newcomer, L. L. (2002). Examining the efficacy of school-based consultation: Recommendations for improving outcomes. Child & Family Behavior Therapy, 24, 165-181. doi:10.1300/J019v24n01_11

National Reading Panel. (2000). Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction. Jessup, MD: National Institute for Literacy.

Nigro-Bruzzi, D., & Sturmey, P. (2010). The effects of behavioral skills training on mand training by staff and unprompted vocal mands by children. Journal of Applied Behavior Analysis, 43, 757-761. doi:10.1901/jaba.2010.43-757

Nuernberger, J. E., Ringdahl, J. E., Vargo, K. K., Crumpecker, A. C., & Gunnarsson, K. F. (2013). Using a behavioral skills training package to teach conversation skills to young adults with autism spectrum disorders. Research in Autism Spectrum Disorders, 7, 411-416. Retrieved from https://www.journals.elsevier.com/research-in-autism-spectrum-disorders

Oxford, M. L., & Lee, J. O. (2011). The effect of family processes on school achievement as moderated by socioeconomic context. Journal of School Psychology, 49, 597-612. doi:10.1016/j.jsp.2011.06.001

Parsons, M. B., Rollyson, J. H., & Reid, D. H. (2012). Evidence-based staff training: A guide for practitioners. Behavior Analysis in Practice, 5(2), 2-11. Retrieved from https://www.abainternational.org/journals/behavior-analysis-in-practice.aspx

Peter D. Hart Research Associates & Harris Interactive. (2004). [Teaching Commission survey, study 7445b, conducted November 19-23, 2004, teacher sample]. Unpublished raw data.

Rosales, R., Stone, K., & Rehfeldt, R. A. (2009). The effects of behavioral skills training on implementation of the picture exchange communication system. Journal of Applied Behavior Analysis, 42, 541-549. doi:10.1901/jaba.2009.42-541

Ryan, C. S., Hemmes, N. S., Sturmey, P., Jacobs, J. D., & Grommet, E. K. (2008). Effects of a brief staff training procedure on instructors' use of incidental teaching and students' frequency of initiation toward instructors. Research in Autism Spectrum Disorders, 2, 28-45. Retrieved from https://www.journals.elsevier.com/research-in-autism-spectrum-disorders

Sailors, M., & Price, L. (2015). Support for the improvement of practices through intensive coaching (SIPIC): A model of coaching for improving reading instruction and reading achievement. Teaching and Teacher Education, 45, 115-127. doi:10.1016/j.tate.2014.09.008

Sarokoff, R. A., & Sturmey, P. (2004). The effects of behavioral skills training on staff implementation of discrete-trial teaching. Journal of Applied Behavior Analysis, 37, 535-538. doi:10.1901/jaba.2004.37-535

Shaffer, L., & Thomas-Brown, K. (2015). Enhancing teacher competency through co-teaching and embedded professional development. Journal of Education and Training Studies, 3, 117-125. doi:10.11114/jets.v3i3.685

Skinner, B. F. (1957). Verbal behavior. New York, NY: Appleton-Century-Crofts.

Sterling-Turner, H. E., Watson, T. S., & Moore, J. W. (2002). The effects of direct training and treatment integrity on treatment outcomes in school consultation. School Psychology Quarterly, 17, 47-77. doi:10.1521/scpq.17.1.47.19906

Stormont, M., Reinke, W. M., Newcomer, L., Marchese, D., & Lewis, C. (2015). Coaching teachers' use of social behavior interventions to improve children's outcomes: A review of the literature. Journal of Positive Behavior Interventions, 17, 69-82. doi:10.1177/1098300714550657

Vanselow, N. R., & Hanley, G. P. (2014). An evaluation of computerized behavioral skills training to teach safety skills to young children. Journal of Applied Behavior Analysis, 47, 51-69. doi:10.1002/jaba.105

Walker, H. M., & Buckley, N. K. (1972). Programming generalization and maintenance of treatment effects across time and across settings. Journal of Applied Behavior Analysis, 5, 209-224. doi:10.1901/jaba.1972.5-209

Ward-Horner, J., & Sturmey, P. (2008). The effects of general-case training and behavioral skills training on the generalization of parents' use of discrete-trial teaching, child correct responses, and child maladaptive behavior. Behavioral Interventions, 23, 271-284. doi:10.1002/bin.268

Yoon, K. S., Duncan, T., Lee, S. W.-Y., Scarloss, B., & Shapley, K. L. (2007). Reviewing the evidence on how teacher professional development affects student achievement. Regional Educational Laboratory Southwest (NJ1). Retrieved from http://eric.ed.gov/?id=ED498548

Figure 2. Teacher results.

Figure 3. Student results.
Table 1
How to Implement a Reading Racetrack

1. Teacher models pointing to the first 10 words on the racetrack and saying each word aloud.

2. Teacher directs the student to point to the words on the racetrack and read as many as he/she can.

3. If the student reads the first 10 words correctly, the teacher provides praise after each word. If the student pauses for longer than 3 seconds, misreads the word, or mispronounces the word, the teacher points to the word and reads it aloud. (If no errors, skip to Step 8.)

4. Teacher points to the word and says, "I'll read the word to you." Teacher reads the word.

5. Teacher says, "Let's read the word together." Student and teacher read the word together.

6. Teacher says, "Point to the word and read it 3 times."

7. Teacher gives student praise and thanks him/her for playing.

8. Teacher says, "Now we are going to play the game again, and I want you to read the words as quickly as you can. On your mark, get set, go!"

9. Teacher starts timer.

10. When timer sounds, the teacher says, "Stop reading."

11. Teacher counts the total words read and the number of words read incorrectly and records them on the reading racetrack data sheet. Teacher tells the student how many words he/she read correctly.

12. Teacher gives student praise and thanks him/her for playing.

Table 2
Written Description of a Reading Racetrack

About a reading racetrack

A reading racetrack is a novel approach to teaching sight words. Materials include a racetrack-style board game with empty cells in which a teacher writes sight words, repeating each word multiple times around the racetrack. This technique has several benefits:

* It includes aspects of explicit instruction, a validated approach to teaching reading: model, lead, test, and retest.

* It promotes reading fluency by having students participate in timed racetrack readings.

* Teachers have the flexibility to customize the racetracks for their students, selecting individualized lists of target words and/or varying the number of target words.

* It's fun! Students love "racing" around the track as they read.

Figure 1. Behavioral Skills Training for reading racetrack.

Training steps (each scored observed/not observed)

1. Experimenter verbally describes how to implement a reading racetrack.

2. Experimenter provides trainee with a written description of a reading racetrack.

3. Experimenter allows trainee up to 10 minutes to read the description.

4. Using a research assistant, experimenter demonstrates how to play the game (follows steps 1-13 on the checklist).

5. During the participant's turn playing the game with the research assistant, the experimenter provides feedback: the trainer says "nice job" if a step is implemented correctly. If a step is implemented incorrectly, the experimenter reads the description of that step and has the teacher try again. This is done until the teacher performs the step correctly without feedback.

6. Repeat steps 1-6 until the participant achieves 100% mastery (playing the game implementing each step correctly).

Score: _____ / _____ (# observed / # opportunities)