Evaluation of the effectiveness of an early literacy program for students with significant developmental disabilities.

For the last 2 decades, educators have placed strong emphasis on teaching students to read using scientifically based interventions, implemented through the federal government's Reading First initiative and defined in research synthesis reports (Adams, 1990; National Institute for Literacy, 2001). These reports provide converging evidence that learning to read is influenced by foundational, emergent literacy skills. The National Reading Panel (NRP, 2000) identified five essential components of reading instruction: (a) phonemic awareness, (b) phonics, (c) fluency, (d) vocabulary, and (e) comprehension. In a comprehensive review of research on reading instruction for students with significant developmental disabilities, Browder, Wakeman, Spooner, Ahlgrim-Delzell, and Algozzine (2006) found that the majority of studies for this population focused on sight word acquisition; only a small portion targeted comprehension of these words.

Additional research also suggests that the "science of reading" that emerged during this same time period bypassed students with significant intellectual and developmental disabilities. Qualitative research, including content analyses of textbooks (Katims, 2000) and ethnographic studies of children's school experiences (Kliewer, 1998), reveals a consistent lack of focus on reading for this population--who are probably the least likely to learn to read without carefully planned, explicit instruction. In contrast, the evidence-based practices Browder et al. (2006) identified in their comprehensive review used systematic prompting and feedback procedures during massed trials of sight word instruction (e.g., flash card drills). Many of these prompting procedures, such as time delay, promoted errorless learning by providing preresponse prompts that were systematically faded across teaching trials (e.g., by delaying the time before the prompt was introduced).

As noted by NRP (2000), decoding skills, not just sight word recognition, are essential to gaining competence as a reader. A few studies have taught phonics to students with moderate intellectual disabilities. Barudin and Hourcade (1990) and Lane and Critchfield (1998) found that students with moderate intellectual disabilities benefited from phonemic awareness training and phonics instruction. Hoogeveen, Smeets, and Lancioni (1989) and Hoogeveen, Smeets, and van der Houven (1987) also demonstrated positive outcomes introducing letter-sound correspondences to students with moderate intellectual disabilities. Bracey, Maggs, and Morath (1975) found that 6 of 8 students with moderate intellectual disabilities made significant improvement in three reading skills: (a) reading sounds, (b) blending sounds into words, and (c) word reading. Two additional studies found that phonic analysis paired with error correction helped this population decrease word recognition errors (J. Singh & Singh, 1985; N. N. Singh & Singh, 1988). Recently, Bradford, Shippen, Alberto, Houchins, and Flores (2006) successfully used the Corrective Reading Program (Engelmann, Becker, Hanner, & Johnson, 1980) to teach decoding to middle school students with moderate intellectual disabilities.

A commonality in this research is that the students had the verbal skills to respond to instruction in the format typically used for phonics instruction. For example, they could articulate the initial consonant sound or say a word by blending sounds. In contrast, many students with significant developmental disabilities are either nonverbal or, even if verbal, use augmentative communication systems. Existing commercial programs do not provide guidance on how to adapt student responses for nonverbal learners. A second challenge is that this population often has language deficits that make deriving meaning from printed text difficult. For example, they may have limited picture identification and poor listening comprehension. Although there are literacy programs developed for young children that promote language skills concurrently with skills such as phonemic awareness, these also lack guidance for nonverbal responders and for students who need systematic prompting with multiple opportunities to respond to acquire new skills (e.g., Reading Mastery, Engelmann & Bruner, 2003; Early Reading Tutor, Gibbs, Miller, Helf, & Cooke, 2004).

In this study, we developed and evaluated an early literacy program for students with significant disabilities--adapting strategies that have been found effective for the nondisabled population (Ball & Blachman, 1991; Treiman & Baron, 1983)--that would be responsive to these communication challenges and address some compelling questions of literacy instruction for this population. Specifically, the program would include the NRP (2000) components of reading and focus on the early literacy skills that build each of these components. In selecting instruments to measure change, it became apparent that most measures of early reading or early literacy also assume verbal ability. For this reason, our research also included the development of a standardized nonverbal assessment of early literacy. This report summarizes the findings and activities of the first year of a 5-year study on teaching reading to this population and focuses primarily on (a) the development of the curriculum, (b) the development and selection of appropriate measures, and (c) the comparative effects of the curriculum versus a traditional sight word approach. Future questions to be addressed over the course of this longitudinal study are whether these skills, considered essential building blocks for typically developing readers and adapted for students with significant disabilities, will lead to fluent reading and, if so, how many years of instruction students will need to learn to read.

METHOD

CURRICULUM DEVELOPMENT

Research suggests that children entering first grade with phonemic awareness skills will experience more success in learning to read than their peers who enter first grade with little or no phonemic awareness (e.g., Hiebert & Pearson, 2000; Lyon, 1998; Perfetti, Beck, Bell, & Hughes, 1987; Smith, Simmons, & Kame'enui, 1998; Troia, 1999). In contrast, most students with significant developmental disabilities need instruction to develop phonemic awareness in the elementary grades. The curriculum developed for the current study was based on the premise that it is not too late to begin promoting phonemic awareness skills for these students, ages 5 to 10, building a bridge to reading by late elementary school.

To develop the curriculum, we reviewed research on early literacy (e.g., Neuman & Dickinson, 2001; Smith et al., 1998); existing early literacy programs (e.g., Reading Mastery; Engelmann & Bruner, 2003; Language for Learning; Engelmann & Osburn, 1999); and early literacy assessments (e.g., Dynamic Indicators of Basic Early Literacy Skills, DIBELS; Good & Kaminski, 2002; Test of Phonological Awareness, TOPA-2+; Torgesen & Bryant, 2004; Test of Early Reading Ability, TERA-3; Reid, Hresko, & Hammill, 2001). From this review, we developed a tentative list of instructional objectives and methods and submitted it to a panel of experts who had published in one of the following areas: (a) early literacy for typically developing students, (b) direct instruction reading for students with high incidence disabilities, (c) reading for students with significant disabilities, and (d) augmentative communication.

In June 2005, a panel of national experts in augmentative communication, early literacy, direct instruction, and progress monitoring participated in a full-day discussion and provided subsequent written feedback on proposed and missing objectives and the planned instructional approach. In revising the curriculum, we incorporated their suggestions, such as repeating skills across levels, adding opportunities for writing, and including more print awareness. We attempted the recommendation to embed some of the skills in music, but it proved not feasible. Some recommendations lacked consensus and were not incorporated (e.g., whether concurrently teaching spelling and rhyming was essential).

We developed the objectives into a scope and sequence chart with scripted lessons. The resulting curriculum, the Early Literacy Skills Builder (ELSB; Browder, Gibbs, Ahlgrim-Delzell, Courtade, & Lee, 2007), originally contained five levels with five lessons at each level. Each level introduced progressively more difficult skills, with some easier skills dropped at higher levels in favor of harder ones. For example, Level 4 replaces clapping out syllables in words with clapping out letter sounds. Individual lessons within a level were built around stories featuring a frog named "Moe" and offered students multiple opportunities to practice skills before moving on to more difficult ones.

Table 1 summarizes the specific ELSB objectives. ELSB begins with a sight word game based on a constant time delay strategy (Collins, 2007). In the first round, the teacher prompts the correct response by pointing to the word as it is presented (zero delay). The next round uses a 5-s delay. For motivation, the puppet Moe "helps" the students as needed (prompting the correct response). The teacher then reads brief stories about Moe. To build text awareness, students assist by finding the missing word for a sentence or pointing to text read by the teacher. These skills are taught using the system of least prompts, a method of instruction that has been effective in teaching a variety of skills to students with significant disabilities (Doyle, Wolery, Ault, & Gast, 1988). Using the Moe stories, ELSB also promotes comprehension and vocabulary development by teaching students to answer literal comprehension questions after listening to brief passages and to identify a variety of pictures that depict the same spoken word (e.g., pictures of "happy").

Evidence suggests that phonemic awareness is strongly related to success in both reading and spelling (Ball & Blachman, 1991; Treiman & Baron, 1983); ELSB teaches students blending and segmenting skills using direct instruction strategies (Carnine, Silbert, Kame'enui, & Tarver, 2004). Although phonemic awareness can be taught without a visual referent through auditory training, Hohn and Ehri (1983) found that kindergarten students trained with letters learned to segment better than those who used blank counters or no visual referent. ELSB introduces letter sounds concurrently with printed letters and uses pictures as referents for blending so that students who are nonverbal or who need visual support (e.g., some students with autism) can demonstrate learning.

In our experience, many teachers of students with significant developmental disabilities had limited training in literacy, so we developed ELSB as a scripted curriculum. Each lesson's suggested teacher script includes suggestions for additional supports that can be used (e.g., ways to incorporate eye gazing, suggestions to enlarge materials). Although the suggested text may be used verbatim, teachers also can alter the script to accommodate students' needs (e.g., if more prompting is needed). The Moe stories were designed for use on a story easel; the program includes student response cards for each objective, which also can be adapted for use with an eye gaze board or voice output communication device.

INSTRUMENTATION

Nonverbal Literacy Assessment (NVLA). Our exhaustive review of Mental Measurements Yearbook and Tests in Print did not reveal a general measure of literacy for students who are nonverbal. We designed NVLA as a standardized measure of literacy, knowing that many of the participants would not be able to respond to the standardized administration procedures of available literacy measures. We generated items for NVLA based on the five components of reading proposed by NRP (2000) and by reviewing how these skills were measured in available literacy measures using verbal responses (e.g., DIBELS; Good & Kaminski, 2002; TOPA-2+; Torgesen & Bryant, 2004; TERA-3; Reid et al., 2001). NVLA uses a receptive response format with answers provided in two- to four-choice arrays. Four selection responses can be used in the standard administration: (a) finger pointing with a response book, (b) eye gazing with responses affixed to a Plexiglass board, (c) pulling the response from attached Velcro cards, or (d) pulling the response from a fan of responses displayed by the tester. Correct verbalized answers are also accepted.

NVLA has scripted administration directions and is administered in three sessions to accommodate the attention difficulties and variability of responding frequently observed in this population. Administration time varies because of the nature of the students who may require nonverbal assessment. In this study, for students who used the finger pointing response, each session took approximately 20 min; for students using the eye gaze system, the administration time was longer because of the manipulation of the materials. NVLA consists of 221 items divided into two sections, the Conventions of Reading (CVR) and Phonemic Skills (PhonSk). CVR includes 41 items with skills such as book orientation, text pointing, turning pages, completing repeated story lines, listening comprehension, prediction, sequencing, and identifying characters in the story. The PhonSk section includes 180 items with skills of word study (e.g., matching words, picture--word matching, sight words, and reading vocabulary); alphabetic principle and beginning phonics (e.g., letter matching and identification and letter sounds); breaking words into syllables and phonemic awareness (e.g., identifying first and last letters of words, identifying first and last sounds of words, and identifying words with the same and different first and last sounds); and blending sounds to form words. This study used scores from the CVR, PhonSk, and a total score as dependent variables. We calculated a score for each section by summing the number of correct responses and the total score by summing the scores of the two sections.

We conducted a test-retest study (with 2 weeks between administrations) to examine NVLA stability. The test-retest reliability coefficient for the total test score of the NVLA was .97 (p < .001). Alpha coefficients for CVR, PhonSk, and the total test were .798, .972, and .979, respectively. To establish fidelity of administration and interrater reliability, an observer recorded the demonstration of proper administration procedures and scored student responses. Fidelity was calculated as an item-by-item agreement percentage. The mean fidelity of administration was 95.5%, with a range of 93.1% to 98.5%. Interrater reliability was high at .96.
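
To make the agreement index concrete, the following minimal sketch (Python, with hypothetical observer scores rather than study data) computes an item-by-item agreement percentage of the kind used here for fidelity and interrater agreement.

```python
# Minimal sketch of an item-by-item agreement percentage.
# The observer scores below are hypothetical, not study data.

def percent_agreement(rater_a, rater_b):
    """Return the percentage of items scored identically by two raters."""
    if len(rater_a) != len(rater_b):
        raise ValueError("Raters must score the same set of items.")
    agreements = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100.0 * agreements / len(rater_a)

# Example: dichotomous scores (1 = step implemented/correct, 0 = not)
observer_1 = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]
observer_2 = [1, 1, 0, 1, 0, 1, 0, 1, 1, 1]
print(percent_agreement(observer_1, observer_2))  # 90.0
```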

A national panel of six experts in early literacy, severe disabilities, and assessment reviewed the assessment items to establish content validity. The panel agreed that items reflected the range of early literacy skills. Suggestions included renaming sections to better reflect the construct, adding verbal response sections, adding additional items, ensuring systematic use of distractors, and establishing a basal and ceiling. Another expert in literacy assisted in changing section names. We did not add verbal response sections given the availability of published assessments already accessible to students with verbal ability; NVLA was designed specifically for nonverbal responding. Distractors were applied in a systematic fashion, moving from picture/word pairs to words only and progressively increasing the difficulty of the distractor options from clearly wrong options (e.g., pictures/words of objects, when asked to identify characters in the story) to finer discriminations (e.g., words beginning with b, w, and h, when asked to find the word that begins with the /d/ sound). We intend to address the suggestions of adding items and establishing a basal and ceiling as work on the instrument progresses.

Early Literacy Skills Assessment (ELSA). We developed ELSA as a pretest/posttest for the ELSB curriculum. It contains 152 items exactly matched to the skills taught in the experimental curriculum. Each of the nine sections of ELSA corresponds to one of the nine objectives of the ELSB curriculum. We calculated a score for each section by summing the number of correct responses and the total score by summing the scores of each section.

For this measure, we conducted studies of fidelity of administration, interrater reliability, test-retest reliability, and internal reliability in the same manner as for NVLA. Mean fidelity of administration was 96.6%, with a range of 91.2% to 98.9%; interrater reliability was 96.4%, with a range of 92.5% to 99.2%. Test-retest reliability coefficients for the subtests ranged from .689 to .797, with .763 for the total score. Internal reliability coefficients for the individual subtests ranged from .149 (the 4-item literal questioning subtest) to .980, and the coefficient for the total test was .896.

Peabody Picture Vocabulary Test-III (PPVT-III). PPVT-III (Dunn & Dunn, 1997) is a standardized measure of receptive vocabulary for standard English with acceptable technical qualities. It allows students who are nonverbal to participate by pointing to a response and provides two equivalent forms and one total test raw score. The respondent points to a picture that best corresponds to a word orally given by the examiner. The total raw score is obtained by subtracting the number of errors from the numerical value of the ceiling item (i.e., highest word correctly identified). The raw score can be converted to a standard score, percentile rank, normal curve equivalent, or age equivalent.
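
As a worked illustration of this scoring rule (the item number and error count below are hypothetical, not drawn from the study), the raw score is the ceiling item's numerical value minus the errors made:

```python
# Hypothetical illustration of PPVT-III raw scoring as described above:
# raw score = numerical value of the ceiling item - number of errors.
ceiling_item = 84   # item number of the highest word correctly identified (illustrative)
errors = 15         # total errors made (illustrative)
raw_score = ceiling_item - errors
print(raw_score)    # 69
```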

PPVT-III technical adequacy is documented by the test publisher in the Examiner's Manual (Dunn & Dunn, 1997). For the ages included in this study, internal consistency, as measured with coefficient alpha, ranged from .93 to .95. Split-half coefficients ranged from .86 to .95. Alternate forms reliability coefficients for raw scores ranged from .92 to .95, and corrected test-retest reliability coefficients ranged from .93 to .94 for the ages included in this study.

Evidence of validity of PPVT-III (Dunn & Dunn, 1997) scores is provided in the manual by correlating the scores with other language and cognitive measures: the Oral and Written Language Scales (OWLS; Carrow-Woolfolk, 1995); the Wechsler Intelligence Scale for Children-III (WISC-III; Wechsler, 1991); the Kaufman Adolescent and Adult Intelligence Test (KAIT; Kaufman & Kaufman, 1993); and the Kaufman Brief Intelligence Test (K-BIT; Kaufman & Kaufman, 1990). PPVT-III had an average correlation of .70 with the OWLS Listening Comprehension scale and .67 with the OWLS Oral Expression scale. Its correlations with measures of verbal ability were .91 (WISC-III VIQ), .87 (KAIT crystallized IQ), and .82 (K-BIT Vocabulary). The Technical References supplement (Pearson's Assessments, n.d.) compares PPVT-III scores of eight special populations (speech impaired; language delayed; language impaired; intellectual disabilities, child and adult; reading disabled; hearing impaired; and gifted) with demographically matched control groups.

Woodcock Language Proficiency Battery (WLPB). The WLPB (Woodcock, 1991) comprises 13 subtests in the areas of oral language, reading, and written language. The manual indicates that this instrument may be used to "determine and describe the status of an individual's language" in these three areas (p. 5). Raw scores for each subtest can be converted to age/grade equivalents, W score, standard score, percentile rank, or Relative Mastery Index. Because of the verbal language requirement for responding to test items, we used raw scores for only two subtests in this study: Memory for Sentences and Letter--Word Identification.

Each subtest provides standardized administration procedures. In Memory for Sentences, the respondent is given words and phrases of increasing difficulty to repeat verbatim and receives a score of 0, 1, or 2 depending upon the accuracy of the oral repetition. We selected this subtest as a standardized measure of oral language. Letter--Word Identification, selected as a standardized reading measure, comprises three sets of responses. In the beginning items, the respondent is to match one of three black-and-white rebus symbols to a larger, color picture of the intended object. Later, the respondent is asked to read individual letters and then words of increasing difficulty and receives a score of 0 or 1 depending upon the accuracy of the response. The Examiner's Manual reports technical adequacy of the WLPB. Internal consistency for the subtests used in this study, as measured with coefficient alpha, ranges from .81 to .96 for ages 6 and 9. Test stability, measured by readministering the subtest to individuals 1 to 17 months between testing sessions, ranges from .78 to .94.

The manual reports evidence of validity by correlating the WLPB with other measures of reading (Peabody Individual Achievement Test, PIAT; Dunn & Markwardt, 1970; Kaufman Assessment Battery for Children, K-ABC; Kaufman & Kaufman, 1983; Kaufman Test of Educational Achievement, K-TEA; Kaufman & Kaufman, 1985; and Wide Range Achievement Test-Revised, WRAT-R; Jastak & Wilkinson, 1984). Concurrent validity coefficients for basic reading skills for Grades 3 and 4 were .84 with the PIAT, .79 with the K-ABC, .82 with the K-TEA, and .85 with the WRAT-R Level 1. Construct validity is provided in the manual by examining subtest intercorrelations at selected ages and patterns of score differences for subjects from selected populations. Intercorrelations for the subtests used in this study for ages 6 and 9 ranged from .27 for passage comprehension and listening comprehension at age 6 and .33 for word attack and listening comprehension at age 4, to .80 for both word attack and letter-word identification at age 3 and letter-word identification and passage comprehension at age 4. The manual also reports means and standard deviations of scores across four comparison groups of gifted, typically developing, learning disabled, and mentally retarded students for skill clusters.

INTERVENTION STUDY

This study utilized a randomized control group design, randomly assigning students to either a treatment or control group. All participants were pretested at the beginning of the academic year before treatment was implemented and posttested at the end of the school year.

INCLUSION CRITERIA

Seven self-nominated special education teachers in three disability areas (severe/profound intellectual disabilities, moderate intellectual disabilities, and autism) in a large urban school district in the southeastern United States volunteered to participate in the study. These seven teachers identified students they believed met the eligibility criteria: (a) IQ 55 or below with comparable deficits in adaptive behavior (if no IQ score could be obtained, developmental screening reflected severe deficits in intellectual functioning); (b) enrolled in Grades K to 4; (c) reading below first-grade level; (d) adequate hearing and vision to respond to curricular materials and instruction; (e) responsive to ongoing instruction in English if a nonnative English speaker; and (f) parental informed consent to participate in the research. From the initial pool of 35 teacher-nominated students, 24 met the criteria for inclusion in the study, but 1 attended school only a few days during the first month and therefore was dropped from the study.

PARTICIPANTS

The 23 student participants were enrolled in Grades K to 4 and attended school in self-contained special education classrooms. IQ scores obtained for 19 of the 23 participants from school records were derived from a number of different psychological tests, some of which only provided a mental age equivalent. In cases where a mental age equivalent was provided in place of an IQ score, we calculated a deviation IQ by dividing the mental age by the chronological age and then multiplying by 100. We recorded deviation IQs below 20, as well as reports containing notations of "IQ below 20," as "20" in the database; therefore, the obtained mean IQ somewhat overestimates the true mean IQ. The estimated mean IQ for the total group of students was 41, with a standard deviation of 12.67 and a range from less than 20 to 54. Six students were included in general education classes ranging from 30 min to 7 hr per week. All the participants had estimated intellectual disabilities in the moderate to severe/profound range, although many could not participate in traditional testing because of restricted verbal and behavioral repertoires. None of the students qualified for English as a second language instruction; 6 qualified for the school's free lunch program. Table 2 provides a description of student participants by group assignment. Chi-square analyses indicated no statistically significant differences (p > .05) between the control and treatment groups for gender, ethnicity, verbal status, grade, lunch status, and classroom type; t-test analyses indicated no statistically significant differences (p > .05) between the groups for IQ and age. Comparison of group differences at pretest found no significant differences between the groups on any of the dependent variables.
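
The following sketch (Python, with illustrative ages rather than participant data) shows the ratio described above for deriving an IQ from a mental age equivalent and the flooring of values below 20 used when building the database.

```python
# Illustrative sketch of the IQ derivation described above; the ages are
# hypothetical, not participant data.

def derived_iq(mental_age_months, chronological_age_months, floor=20):
    """Mental age / chronological age x 100, with values below 20 recorded as 20."""
    iq = (mental_age_months / chronological_age_months) * 100
    return max(iq, floor)

# Example: mental age of 30 months, chronological age of 100 months -> IQ 30
print(derived_iq(30, 100))   # 30.0
# Example: a very low ratio is recorded as the floor value of 20
print(derived_iq(12, 100))   # 20 (true ratio would be 12)
```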

The seven teachers who administered the control and treatment intervention were all self-contained, elementary school special education teachers with an average of 8.83 years teaching (8.67 teaching special education), range 1-19 years. One teacher taught general education for 1 year before teaching special education. Four teachers had bachelor's degrees and three had master's degrees. Five teachers had a regular special education teaching license and two had a provisional entry license. One teacher also had a general education teaching license. Six of the teachers were White and one was Hispanic.

RANDOM ASSIGNMENT OF STUDENTS

To help control for a teacher effect, we randomly selected half of the students in each classroom for the treatment group; the other half formed the control group. The names of all eligible students in each classroom were written on pieces of paper and placed into a box; without looking into the box, each teacher pulled the predetermined number (i.e., half) of names from the box to form the treatment group within her own class. The remaining names in the box formed the control group. When a classroom had an uneven number of students, the number of students in the disability group across classrooms (there were at least two classrooms per disability type) was used to divide the number of students in half: the teacher within the disability group with the smaller number of students randomly selected an extra student for the treatment group, and the paired teacher of the same disability type with the larger number of students selected one less treatment student. This resulted in 12 students in each group at the initial selection. A student with absentee problems was dropped from the treatment group, leaving 11 treatment students.
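
A minimal sketch of the within-classroom drawing described above appears below (Python; the classroom rosters are hypothetical). It simplifies the handling of odd-sized classes, which in the study was balanced across paired classrooms of the same disability type.

```python
# Minimal sketch of within-classroom random assignment; rosters are hypothetical.
# Odd-sized classes are handled more simply here than in the study, where the
# extra treatment slot was balanced across paired classrooms of the same
# disability type.
import random

classrooms = {
    "class_A": ["s01", "s02", "s03", "s04"],
    "class_B": ["s05", "s06", "s07"],
}

random.seed(1)  # reproducible illustration only
treatment, control = [], []
for roster in classrooms.values():
    names = roster[:]
    random.shuffle(names)        # draw names "from the box"
    k = len(names) // 2          # half of each classroom to treatment
    treatment.extend(names[:k])
    control.extend(names[k:])    # remaining names form the control group

print("Treatment:", treatment)
print("Control:", control)
```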

This simple sampling method was feasible within the logistics of the applied context. Further matching by type of disability or level of functioning was not feasible given the small sample size. Because of the small sample sizes in each group, we conducted statistical tests to examine mean differences between the treatment and control groups on the pretest measures. Initial statistical analyses indicated that the treatment and control groups were equivalent on all pretest measures. We present additional details of these analyses in the Results section.

DEPENDENT VARIABLES

The dependent variables in this study included the two measures we created, the NVLA and the ELSA. We also used standardized language measures: the PPVT-III (Dunn & Dunn, 1997) and two subtests of the WLPB (i.e., Memory for Sentences and Letter--Word Identification; Woodcock, 1991), administered following the standardized procedures. We used raw scores for all the measures in this study.

INDEPENDENT VARIABLE: INTERVENTION

The independent variable in this study was the type of reading instruction. The classroom teachers conducted all reading instruction after attending training workshops on implementation. Teachers also received ongoing classroom consultation to ensure procedural fidelity.

Shared Intervention. As noted earlier, students with significant developmental disabilities often receive minimal literacy instruction (Katims, 2000; Kliewer, 1998). To develop literacy, children need exposure to literature including both narrative and expository works (Morrow & Gambrell, 2002). Children who are read to daily tend to score higher on measures of vocabulary, comprehension, and decoding (Bus, van Ijzendoorn, & Pellegrini, 1995; Senechal, Thomas, & Monker, 1995). The primary purpose of this read-aloud event is the construction of meaning through interaction between the adult and child (Vygotsky, 1978). Consistent exposure to read-alouds contributes to improved comprehension and vocabulary development (Vacca et al., 2006). Before introducing the experimental curriculum, we decided to ensure that all students in the study were receiving a foundational level of literacy instruction through shared stories, which we called story-based lessons (SBL).

Teachers selected grade-appropriate literature from a variety of sources, including the recommended reading lists accompanying the Open Court reading program (Bereiter et al., 2000) used by the school district and suggestions from general education teachers. Selected books were adapted as needed to accommodate student access, including adaptations for physical challenges, text length, and pictures to support comprehension of text. We developed a 10-step task analysis for engaging students in the reading and comprehension of the book that included (a) anticipatory set, (b) opening the book, (c) turning pages, (d) identifying the author, (e) identifying the title, (f) completing a repeated story line, (g) pointing to text, (h) answering a prediction question, (i) pointing to/saying a vocabulary word, and (j) answering comprehension questions.

In the first full-day training event, held in November, we demonstrated the steps of the task analysis, and the teachers practiced it with each other until fluent. We described methods for adapting grade-appropriate literature and provided examples of different adaptations. Although teachers selected and adapted their own books for the shared stories, they received ongoing observations and feedback on their fidelity in following the task analysis for the shared story approach. We conducted a total of 55 SBL observations across the seven teachers. Teachers typically shared stories with their entire class, or a small group within the class, including students in both the experimental and control groups. Teachers conducted SBLs with the students throughout the school year, from early November to early June. We interviewed the teachers to ensure that both groups received comparable time in shared stories (daily except during special events). We provide additional information regarding time spent in SBL and teacher fidelity in the Results section.

Experimental Group: ELSB. In mid-October, the teachers received the scripted ELSB curriculum, including teacher directions, student response materials, and the story easel, along with training on each objective of the curriculum. During a second full-day training event, we demonstrated following the script, prompting, and error correction procedures for each objective, and teachers practiced each objective with each other until fluent. We conducted ongoing classroom observations using a task analysis of teacher and student behaviors for each objective and provided feedback on maintaining fidelity with the objectives. We conducted a total of 58 observations of the ELSB intervention across the seven teachers from October to May. Teachers implemented ELSB either 1:1 or in a small group of 2 to 4 students, depending on the number of students in their class randomly assigned to this intervention. Teachers could repeat each lesson on a 2-, 4-, or 10-day cycle depending on the pace of the group. Students did not move to the next level until they had 75% correct responding on the lessons in the prior level. This criterion was based on data taken by a member of the research team while the teacher implemented the lesson. If an individual student's progress was slower than the group's, he or she received additional practice on deficient skills or a separate lesson. Teachers delivered ELSB lessons to students throughout the school year, from mid-October to late May. We provide additional information regarding time spent in ELSB lessons and teacher fidelity of instruction in the Results section.

Control Group: Sight Words and Pictures. Students in the control group received sight word or picture instruction using Edmark, a commercial sight word curriculum (Austin & Boekman, 1990), or sight words and pictures related to the students' needs and preferences. Edmark uses a whole-word approach to learning to read words, in software and print versions. Many of the teachers in the study had this program available in their classrooms prior to implementation of the intervention. In fact, in all cases the sight word and picture training implemented for the control students was an ongoing intervention prescribed by the students' individualized education programs. We tracked the amount of time these students spent in sight word training but did not record the number and type of sight words and pictures taught because of the variation across students. The sight word lessons also were implemented in either a 1:1 or small group format depending on the number of students assigned to this condition in the classroom.

ANALYTIC TECHNIQUES

We conducted a series of mixed analyses of variance (ANOVAs) with one between-subjects and one within-subjects factor to determine differences between the treatment and control groups on the eight outcome measures. For all ANOVAs, the between-subjects factor was instructional type (treatment vs. control intervention). The within-subjects factor consisted of the repeated measures obtained from participants across the school year (pretest and posttest). Because the primary purpose of this study was to examine a differential effect between the treatment and control groups, the statistical tests of interest were the interaction terms. In other words, we hypothesized that the students in the treatment group would have greater gains (i.e., greater mean differences from pretest to posttest) than the control group, resulting in an interaction.
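
As a hedged sketch of the interaction test of interest (the scores below are hypothetical, not study data): in a two-group pretest-posttest design, the group-by-time interaction F from a mixed ANOVA equals the squared t from an independent-samples test on gain scores, which is one simple way to compute it.

```python
# Hypothetical sketch of the group x time interaction test: in a two-group
# pretest-posttest design, the mixed-ANOVA interaction F equals t**2 from an
# independent-samples t test on gain scores. Scores below are made up.
import numpy as np
from scipy import stats

pre_treatment  = np.array([10, 12,  8, 15, 11])
post_treatment = np.array([18, 20, 15, 25, 19])
pre_control    = np.array([11,  9, 14, 10, 13])
post_control   = np.array([13, 10, 16, 12, 15])

gain_treatment = post_treatment - pre_treatment
gain_control   = post_control - pre_control

# Directional (one-tailed) test of the a priori hypothesis that treatment
# gains exceed control gains -- the interaction of interest in the ANOVAs.
t, p_two_tailed = stats.ttest_ind(gain_treatment, gain_control)
p_one_tailed = p_two_tailed / 2 if t > 0 else 1 - p_two_tailed / 2
print("interaction F =", t**2, " one-tailed p =", p_one_tailed)
```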

We did not conduct multivariate analyses because of insufficient sample sizes and did not attempt to adjust for conducting multiple univariate statistical tests. The statistical power based on the small sample sizes suggested that statistical significance would only be found for large effect sizes. Adjusting for multiple statistical tests using a Bonferroni correction would have decreased the statistical power to an unacceptable level. Because a priori hypotheses were stated, we conducted one-tailed statistical tests, which increased the statistical power. Because of the challenges in applying statistical tests to a low incidence population, including small sample size and large individual variance, more emphasis should be placed on interpreting the effect sizes.

RESULTS

All 11 students in the treatment group progressed through at least one level of the five-level curriculum by the end of the academic year; 6 students progressed to Level 2, 3 students progressed to Level 3, 1 student progressed to Level 4, and 1 student completed all five levels.

INSTRUCTIONAL TIME AND TREATMENT DIFFUSION

Every 2 weeks, teachers reported the amount of instructional time and the types of literacy skills included in the instruction for the preceding day (approximately 10 reports in total). Table 3 provides the mean and range of minutes per day of literacy instruction for ELSB, SBL, sight word/picture instruction, phonics instruction, and other literacy instruction. The most common other literacy instruction was extra group instruction focusing on days of the week and months of the year, weather words, and daily schedule words. Treatment and control group participants received comparable time in literacy instruction (approximately 1 hr per day). Because the Edmark or other sight word instruction was shorter than ELSB, teachers augmented sight word drills with other types of literacy skill instruction to give control students comparable time in literacy instruction per day. No control students received the ELSB, but one did inadvertently receive some exposure to a computerized phonics program. There were no significant differences between the groups in time spent in literacy instruction.

TEACHER FIDELITY OF INSTRUCTION

Both experimental and control groups received shared stories. Across 55 observations of all seven teachers, the mean fidelity for following the prescribed template was 85% of the steps implemented, with a range of 30% to 100% (lower scores occurred early in the intervention). A second observer concurrently and independently observed about a third of these lessons. The mean interrater reliability for the story fidelity measure was 94.9%, with a range of 80% to 100%. The mean fidelity for ELSB was 93%, with a range of 53% to 98%, across 58 observations of all the teachers; interrater reliability for this fidelity measure was 93.5%, with a range of 89% to 97%. Measuring fidelity for the control group's sight word intervention was not feasible because of the diversity of methods the teachers employed. Instead, we focused on comparability of instructional time.

ANOVA RESULTS

Prior to running the ANOVAs, we examined the dependent variables for accuracy of data entry, outliers, missing values, normality of distribution, and other assumptions. All values were within acceptable ranges, and the assumptions were tenable. The first series of ANOVAs examined the group interaction effects on the NVLA total score, CVR, PhonSk, and ELSA. Table 4 reports the means, standard deviations, and effect sizes for the control and treatment groups. The effect sizes were Cohen's d based on a pooled standard deviation and indicate the magnitude of differences between the pretest and posttest for each group. The effect sizes for the treatment group were large on all measures, ranging from 1.15 to 1.57. The effect sizes for the control group were small (.39) to moderate (.65) except for CVR, which was quite large (1.24). This was not surprising, as both groups received the shared stories intervention.
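
For reference, one common form of this within-group effect size, assuming the pooled standard deviation is the equal-weight average of the pretest and posttest standard deviations (the study does not specify its exact pooling convention), is:

```latex
d = \frac{M_{\mathrm{posttest}} - M_{\mathrm{pretest}}}{s_{\mathrm{pooled}}},
\qquad
s_{\mathrm{pooled}} = \sqrt{\frac{s_{\mathrm{pretest}}^{2} + s_{\mathrm{posttest}}^{2}}{2}}
```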

Table 5 shows the results of the mixed ANOVAs. Box's tests of equality of covariance matrices were not significant for any analysis, satisfying the assumption of equivalent covariance matrices. The statistical tests of interest for determining nonparallel control and treatment slopes are the interaction effects. Three of the four interaction effects (NVLA total score, PhonSk, and ELSA) were statistically significant; Figure 1 illustrates the interactions. For the three dependent variables with statistically significant interactions, the slope of the treatment group was steeper than the slope of the control group, suggesting greater growth on these measures.

The treatment group's effect sizes for the PPVT-III and WLPB measures (see Table 4) were moderate, ranging from .46 to .66. The effect sizes for the control group ranged from extremely small (.02) to moderate (.41). Two of the four interaction effects (PPVT-III and WLPB Memory for Sentences; see Table 5 and Figure 2) were statistically significant. Both significant interactions were disordinal: the treatment group had a lower pretest mean than the control group but obtained a higher posttest mean.

[FIGURE 1 OMITTED]

DISCUSSION

Although the "science of reading" provides important guidance for phonemic awareness and other early literacy skills that can build a bridge to reading, translating this instruction for students who are nonverbal or have limited language and who need intensive instruction to master new skills is a current challenge. The literature has well established the lack of focus on decoding (Browder et al., 2006; Joseph & Seery, 2004) and the lack of attention to literacy in general for this population (Kliewer, 1998; Kliewer, Biklen, & Kasa-Hendrickson, 2006). The first outcome of this study was the creation of an early literacy curriculum that could be used with students with significant disabilities and was acceptable to a panel of experts in severe disabilities, early literacy, direct instruction, and augmentative communication. Significant gains on the ELSA indicated that students who received the curriculum learned significantly more of the objectives than students who did not. Although not a surprising finding, this outcome was important for demonstrating that students receiving ELSB acquired new skills.

In addition, the students also made significant gains on the NVLA phonemic awareness section. This result suggests that the experimental curriculum promoted skills known to be bridges to early reading. It should be noted that this gain occurred despite the fact that nearly all students had mastered only one or two levels of the five-level curriculum in the first year. In contrast, one student with autism completed all levels of the curriculum, which made him a candidate for kindergarten-level reading instruction in the second year. All other students were targeted for ongoing ELSB instruction in their second year. This outcome suggests that the path to reading may be possible for students with significant disabilities but may require more years of instruction. Additional research from the planned 5-year study will be critical to determine whether these early gains eventually produce fluent readers. This pace of skill acquisition also suggests that literacy instruction for this population needs to begin early and continue at least through the middle school years. At the same time, because of the significant developmental delay characteristic of this population, beginning this intensive instruction may not be feasible until the early elementary years for some students.

[FIGURE 2 OMITTED]

There were no statistically significant differences between the groups on the CVR, the assessment most closely aligned with the SBL intervention that both experimental and control students received. This finding could be interpreted as supporting the comparability of the instruction the two groups received through the shared stories. Because this measure reflects students' developing listening comprehension, it may be important for future research to focus on this specific intervention itself. Browder et al.'s (2006) review of experimental studies on reading for this population found no examples of a literature-based reading intervention. In contrast, qualitative research by Skotko, Koppenhaver, and Erickson (2004) and Kliewer and Biklen (2001) found that social engagement with stories enhanced communication skills for this population. Future research might determine the effect of engaging students with significant developmental disabilities in the reading of stories on their communication and emergent literacy skills.

Groups also differed in their performance on the PPVT-III and the WLPB Memory for Sentences subtest; there was no statistically significant difference between the groups on the WLPB Letter--Word Identification subtest. One aspect of ELSB is the presentation of multiple pictorial examples of a given concept/object. For example, three different photos/clip art of children at play (playing soccer, children on a slide, children running through a sprinkler) illustrate the concept of "play." Typical picture vocabulary instruction uses one picture to illustrate a communication need. Demonstrating that concepts/objects have multiple types of representation may have impacted the PPVT-III score. In the Memory for Sentences subtest, students are given a short phrase or sentence to repeat verbatim. Participants receiving the ELSB curriculum may have benefited from an increased focus on, or knowledge of, sentence structure gained from the fill-in-the-missing-word tasks of Objectives 2 and 4.

Another important outcome of this study was the demonstration that this population could participate in a standardized assessment modified for nonverbal response. The use of standardized assessment for students with significant developmental disabilities is challenging because of the varied ways students need to demonstrate learning given their sensory and physical challenges. In this study we did not try to include students with hearing and vision impairments, although we did adapt response repertoires for students with significant physical, as well as intellectual, disabilities. It was especially encouraging that consistent responding could be established through eye gazing for some students. Placing the response options on the four corners of a piece of Plexiglass made it possible for students to turn their eyes and heads slightly to indicate a distinct response.

LIMITATIONS OF THE RESEARCH DESIGN

Conducting randomized trials research with a low incidence population presents substantial challenges. If these challenges can be surmounted, the potential advantage is strong evidence of intervention effectiveness. An alternative would be to use a series of single subject studies. However, most single subject studies have focused on one specific component of reading (e.g., sight words or only initial consonant sounds) rather than the impact of a full curriculum. Although a group research design lends itself well to comparing two approaches to reading, application of inferential statistics is difficult because of the small sample size. For example, differences may have existed among the subgroups in this study (e.g., students with autism vs. those with severe intellectual disabilities), but such differences could not be analyzed given the small sample. Multivariate statistics also were not feasible, and conducting multiple univariate tests introduced the possibility of Type I error. As noted earlier, effect sizes may be the most credible statistic for making inferences about the impact of the intervention with this small sample.

A second limitation in the design was that our primary findings were based on instruments we developed. We addressed this threat to internal validity by providing some support for the reliability of the instruments and through the use of published instruments, which also indicated significant differences between the treatment and control groups. Additional research is needed on the standardization and validity of the NVLA in particular, given the need for instruments that can be used to show gains for this population.

IMPLICATIONS FOR PRACTICE

Although the outcomes of the study are encouraging in suggesting that students with significant developmental disabilities can gain early literacy skills through intensive instruction, much more research is needed to determine if these skills will lead to learning to read and applying those skills to meaningful life contexts. Because experimental research that focuses on the NRP (2000) components of reading is new for this population, practitioners are left with many unanswered questions for planning literacy instruction. Until there is additional research to define evidence-based practice, the most logical approach may be to utilize and adapt strategies that have been found effective with students who are nondisabled. For example, research on shared stories (Bus et al., 1995; Senechal et al., 1995; Vacca et al., 2006) would support utilizing literature-based instruction. Sharing stories may build leisure enjoyment of books and promote language skills. Intensive instruction in phonemic awareness, decoding skills, and comprehension should also be carefully considered for elementary-age students with significant disabilities. Prior research has shown that students with moderate intellectual disabilities can gain phonics skills (Barudin & Hourcade, 1990; Bracey et al., 1975; Bradford et al., 2006; Hoogeveen et al., 1989; Hoogeveen et al., 1987; Lane & Critchfield, 1998; J. Singh & Singh, 1985; N. N. Singh & Singh, 1988). Our research shows early promise that students with severe intellectual disabilities and autism also can acquire some phonemic awareness and phonics skills, which are strong predictors of learning to read (Ball & Blachman, 1991; Treiman & Baron, 1983). If this population is to have the opportunity to learn to read, they will need instruction that teaches them to decode and comprehend printed text.

SUGGESTIONS FOR FUTURE RESEARCH

Moving beyond sight words is a new venture in reading research for students with significant developmental disabilities. There is so much to discover that substantial energy and resources need to be invested to begin exploring this opportunity for this population. What will be important is to build on the science of reading that is already available. Until research indicates otherwise, the best starting point will be in adapting interventions proven effective for typically developing students. This requires gaining deeper knowledge of this literature as well as knowing what has been effective in teaching reading to students with significant disabilities. Research is especially needed on comprehensive and longitudinal curricula that school systems can adapt and modify for this population. It will be difficult for teachers to piece together a reading program from studies that focus on only one component of reading. Research also is needed on measures that show gains relevant to reading, but that are not biased against students who are nonverbal or have sensory or physical impairments. Through such research more students with significant disabilities may gain the skills needed to become readers.

Manuscript received March 2007; accepted July 2007.

REFERENCES

Adams, M. J. (1990). Beginning to read: Thinking and learning about print. Cambridge, MA: MIT Press.

Austin, P., & Boekman, K. (1990). Edmark functional word series. Redmond, WA: PCI Educational Publishing.

Ball, E. W., & Blachman, B. A. (1991). Does phoneme awareness training in kindergarten make a difference in early word recognition and developmental spelling? Reading Research Quarterly, 26, 49-66.

Barudin, S. I., & Hourcade, J. J. (1990). Relative effectiveness of three methods of reading instruction in developing specific recall and transfer skills in learners with moderate to severe mental retardation. Education and Training in Mental Retardation and Developmental Disabilities, 21, 286-291.

Bereiter, C., Brown, A., Campione, J., Carruthers, I., Case, R., Hirshberg, J., et al. (2000). Open court. Worthington, OH: SRA/McGraw-Hill.

Bracey, S., Maggs, A., & Morath, P. (1975). The effects of direct phonics approach in teaching reading to six moderately retarded children: Acquisition and mastery learning stages. Exceptional Child, 22, 83-90.

Bradford, S., Shippen, M. E., Alberto, P. A., Houchins, D. E., & Flores, M. M. (2006). Using systematic instruction to teach decoding skills to middle school students with moderate intellectual disabilities. Education and Training in Developmental Disabilities, 41, 323-324.

Browder, D., Gibbs, S., Ahlgrim-Delzell, L., Courtade, G., & Lee, A. (2007). Early literacy skills builder. Verona, WI: Attainment Company.

Browder, D. M., Wakeman, S., Spooner, F., Ahlgrim-Delzell, L., & Algozzine, B. (2006). Research on reading instruction for individuals with significant cognitive disabilities. Exceptional Children, 72, 392-408.

Bus, A., van Ijzendoorn, M. H., & Pellegrini, A. (1995). Joint book reading makes for success in learning to read: A meta-analysis on intergenerational transmission of literacy. Review of Educational Research, 65, 1-21.

Carnine, D., Silbert, J., Kame'enui, E., & Tarver, S. (2004). Direct instruction reading (3rd ed.). Columbus, OH: Merrill.

Carrow-Woolfolk, E. (1995). Oral and written language scales: Listening comprehension and oral expression. Circle Pines, MN: American Guidance Service.

Collins, B. (2007). Moderate and severe disabilities: A foundational approach. Upper Saddle River, NJ: Pearson.

Doyle, P. M., Wolery, M., Ault, M. J., & Gast, D. L. (1988). System of least prompts: A literature review of procedural parameters. Journal of the Association for Persons with Severe Handicaps, 13, 28-40.

Dunn, L. M., & Dunn, L. M. (1997). Peabody picture vocabulary test (3rd ed.). Circle Pines, MN: American Guidance Service.

Dunn, L. M., & Markwardt, F. C. (1970). Peabody individual achievement test. Circle Pines, MN: American Guidance Service.

Engelmann, S., Becker, W. C., Hanner, S., & Johnson, G. (1980). Corrective reading program. Chicago: Science Research Associates.

Engelmann, S., & Bruner, E. (2003). Reading mastery. Worthington, OH: SRA/McGraw-Hill.

Engelmann, S., & Osburn, J. (1999). Language for learning. Columbus, OH: SRA/McGraw-Hill.

Gibbs, S. L., Miller, M., Helf, S., & Cooke, N. L. (2004). Early reading tutor. Columbus, OH: SRA/McGraw-Hill.

Good, R. H., & Kaminski, R. A. (Eds.). (2002). Dynamic indicators of basic early literacy skills (6th ed.). Eugene, OR: Institute for the Development of Education Achievement. Available: http://dibels.uoregon.edu/

Hiebert, E., & Pearson, P. (2000). Building on the past, bridging to the future: A research agenda for the center for the improvement of early reading achievement. Journal of Education Research, 93, 133-145.

Hohn, W. E., & Ehri, L. C. (1983). Do alphabet letters help prereaders acquire phonemic segmentation skill? Journal of Educational Psychology, 75, 752-762.

Hoogeveen, F. R., Smeets, P. M., & van der Houven, J. E. (1987). Establishing letter-sound correspondences in children classified as trainable mentally retarded. Education and Training in Mental Retardation, 22, 77-84.

Hoogeveen, J., Smeets, P., & Lancioni, G. (1989). Teaching moderately mentally retarded children. Research in Developmental Disabilities, 10, 1-18.

Jastak, S. R., & Wilkinson, G. S. (1984). Wide range achievement test-revised. Wilmington, DE: Jastak Associates, Inc.

Joseph, L. M., & Seery, M. E. (2004). Where is the phonics? A review of the literature on the use of phonetic analysis with students with mental retardation. Remedial and Special Education, 25, 88-94.

Katims, D. S. (2000). Literacy instruction for people with mental retardation: Historical highlights and contemporary analysis. Education and Training in Mental Retardation and Developmental Disabilities, 35, 3-15.

Kaufman, A. S., & Kaufman, N. L. (1983). Kaufman assessment battery for children. Circle Pines, MN: American Guidance Service.

Kaufman, A. S., & Kaufman, N. L. (1985). Kaufman tests of educational achievement. Circle Pines, MN: American Guidance Service.

Kaufman, A. S., & Kaufman, N. L. (1990). Kaufman brief intelligence test. Circle Pines, MN: American Guidance Service.

Kaufman, A. S., & Kaufman, N. L. (1993). Kaufman adolescent and adult intelligence test. Circle Pines, MN: American Guidance Service.

Kliewer, C. (1998). Citizenship in the literate community: An ethnography of children with Down syndrome and the written word. Exceptional Children, 64, 167-180.

Kliewer, C., & Biklen, D. (2001). "School's not really a place for reading": A research synthesis of the literate lives of students with severe disabilities. Journal of the Association for Persons with Severe Handicaps, 26, 1-12.

Kliewer, C., Biklen, D., & Kasa-Hendrickson, C. (2006). Who may be literate? Disability and resistance to the cultural denial of competence. American Educational Research Journal, 43, 163-192.

Lane, S. D., & Critchfield, T. S. (1998). Classification of vowels and consonants by individuals with moderate mental retardation: Development of arbitrary relations via match-to-sample training with compound stimuli. Journal of Applied Behavior Analysis, 31, 21-41.

Lyon, G. R. (1998). Why reading is not a natural process. Educational Leadership, 3, 15-18.

Morrow, L. M., & Gambrell, L. B. (2002). Literature-based instruction in the early years. In S. B. Neuman & D. K. Dickinson (Eds.), Handbook of early literacy research (pp. 348-360). New York: Guilford.

National Institute for Literacy. (2001). Put reading first: The research building blocks for teaching children to read. Washington, DC: Author. Retrieved April 25, 2005, from http://www.nifl.gov/partnershipforreading/publications/PFRbooklet.pdf

National Reading Panel. (2000). Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction. Washington, DC: U.S. Department of Health and Human Services (NIH Pub. No. 00-4754).

Neuman, S. B., & Dickinson, D. K. (Eds.). (2001). Handbook of early literacy research. New York: Guilford.

Pearson Assessments. (n.d.). PPVT-III technical information. Retrieved February 15, 2007, from http://ags.pearsonassessments.com/assessments/technical/ppvt.asp

Perfetti, C. A., Beck, I., Bell, L., & Hughes, C. (1987). Phonemic knowledge and learning to read are reciprocal: A longitudinal study of first grade children. Merrill-Palmer Quarterly, 33, 283-320.

Reid, K., Hresko, W. P., & Hammill, D. D. (2001). Test of early reading ability (3rd ed.). Austin, TX: Pro-Ed.

Senechal, M., Thomas, E., & Monker, J. (1995). Individual differences in 4-year-old children's acquisition of vocabulary during storybook reading. Journal of Educational Psychology, 97, 218-229.

Singh, J., & Singh, N. N. (1985). Comparison of word-supply and word-analysis error-correction procedures on oral reading by mentally retarded children. American Journal of Mental Deficiency, 90, 64-70.

Singh, N. N., & Singh, J. (1988). Increasing oral reading proficiency through overcorrection and phonic analysis. American Journal on Mental Retardation, 93, 312-319.

Skotko, B. G., Koppenhaver, D. A., & Erickson, K. A. (2004). Parent reading behaviors and communication outcomes in girls with Rett syndrome. Exceptional Children, 70, 145-166.

Smith, S. B., Simmons, D. C., & Kame'enui, E. J. (1998). Phonological awareness: Instructional and curricular basics and implications. In D. Simmons & E. J. Kame'enui (Eds.), What reading research tells us about children with diverse learning needs: Bases and basics (pp. 129-140). Mahwah, NJ: Lawrence Erlbaum.

Torgesen, J. K., & Bryant, B. R. (2004). Test of phonological awareness (2nd ed. PLUS). Austin, TX: Pro-Ed.

Treiman, R., & Baron, J. (1983). Phonemic-analysis training helps children benefit from spelling-sound rules. Memory & Cognition, 11, 382-389.

Troia, G. A. (1999). Phonological awareness intervention research: A critical review of the experimental methodology. Reading Research Quarterly, 34, 28-52.

Vacca, J., Vacca, R., Gove, M., Burkey, L., Lenhart, L., & McKeon, C. (2006). Reading and learning to read (6th ed.). Boston: Allyn & Bacon.

Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.

Wechsler, D. (1991). The Wechsler intelligence scale for children (3rd ed.). San Antonio, TX: The Psychological Corporation.

Woodcock, R. W. (1991). Woodcock language proficiency battery. Itasca, IL: Riverside.

DIANE M. BROWDER

LYNN AHLGRIM-DELZELL

University of North Carolina at Charlotte

GINEVRA COURTADE

West Virginia University

SUSAN L. GIBBS

CLAUDIA FLOWERS

University of North Carolina at Charlotte

DIANE M. BROWDER (CEC NC Federation), Professor; and LYNN AHLGRIM-DELZELL (CEC NC Federation), Research Associate, Department of Special Education, University of North Carolina at Charlotte. GINEVRA COURTADE (CEC WV Federation), Assistant Professor, West Virginia University, Morgantown. SUSAN L. GIBBS (CEC NC Federation), Clinical Assistant Professor, Office of Field Experiences; and CLAUDIA FLOWERS, Professor, Educational Leadership, University of North Carolina at Charlotte.

This article uses the term "intellectual disability" instead of the term "mental retardation"; the term "developmental disability" is used to refer to the broader population of individuals with both intellectual disabilities and autism. This article focuses on students with developmental disabilities at or below the moderate range of intellectual functioning. Support for this research was provided in part by Grant No. H324K040004 of the U.S. Department of Education, Institute of Education Sciences, awarded to the University of North Carolina at Charlotte. The opinions expressed do not necessarily reflect the position or policy of the Department of Education, and no official endorsement should be inferred.

Address correspondence to Diane Browder, Department of Special Education and Child Development, University of North Carolina at Charlotte, 9201 University City Blvd., Charlotte, NC 28223 (e-mail: dbrowder@uncc.edu).

TABLE 1
Early Literacy Skills Builder (ELSB) Objectives

Objective 1: Read vocabulary sight words.
  Rationale (NRP component): Some words are irregular and must be learned on sight; students also benefit from early word mastery to participate in reading the stories. (Vocabulary)
  Increasing difficulty across lessons and levels: New words introduced across lessons and levels.
  Method used to teach the objective: Flash card drill with constant time delay (one round at zero delay; one at 5 s).

Objective 2: Point to sight words to complete sentences.
  Rationale (NRP component): Students use sight words from Objective 1 to fill in the blank to promote comprehension/meaning of words. (Vocabulary)
  Increasing difficulty across lessons and levels: Students receive more distractors in answer choices as levels progress.
  Method used to teach the objective: System of least prompts: (a) wait for student to point without help; (b) model pointing and have student imitate; (c) if needed, physically guide to point. (a)

Objective 3: Point to words as teacher reads them aloud.
  Rationale (NRP component): Text pointing promotes the concept of print: text moves from left to right and top to bottom; each printed word can be spoken. For nonverbal students it may build toward the use of technology support to read aloud. (Concept of Print)
  Increasing difficulty across lessons and levels: Students progress from pointing to a phrase, to a sentence, to moving down the page, to a second line of text as the teacher reads. In the upper levels, students point to each word individually within the sentence.
  Method used to teach the objective: System of least prompts (same as above).

Objective 4: Point to or say a word to fill in a repeated story line.
  Rationale (NRP component): Promotes concept of word and listening comprehension as the student fills in the missing word. (Comprehension)
  Increasing difficulty across lessons and levels: Placement of the word in the sentence varies (last/middle word). At the early levels, the missing word is highlighted. Words change across lessons and levels.
  Method used to teach the objective: System of least prompts (same as above).

Objective 5: Respond to a question about the story by selecting the correct picture (in later lessons, the correct word). May answer verbally.
  Rationale (NRP component): Builds listening comprehension: as students practice text pointing to help "read" the story (see Objective 3), answering questions conveys the idea of reading comprehension. (Comprehension)
  Increasing difficulty across lessons and levels: At first, students respond to literal questions directly relating to the text; later, students are asked harder questions (main idea, sequencing) and draw inferences; students answer questions using words versus pictures.
  Method used to teach the objective: Scaffolding (find the answer in the sentence); system of least prompts if needed.

Objectives 6-7: Demonstrate understanding of segmentation by clapping out syllables in words (Objective 6) and by tapping out phonemes in CVC words (Objective 7).
  Rationale (NRP component): Segmenting is a critical component of phonemic awareness; teaches distinguishing by auditory cues including rhythm and stress. Auditorially segmenting sounds in words is the primary precursor in learning to read CVC words. (Phonemic Awareness)
  Increasing difficulty across lessons and levels: Early lessons use one- and two-syllable words; words increase to four syllables and then CVC words.
  Method used to teach the objective: Direct instruction model/lead/test strategy. If the response is incorrect, the teacher physically guides the clapping. "Clapping" is adapted to student response ability (e.g., a student with physical challenges may tap a foot or hit the side of the wheelchair).

Objective 8: Identify letter-sound correspondence.
  Rationale (NRP component): Students who are nonverbal (and some with autism) will need a visual referent to indicate letter sounds. Use of the letters themselves may be more efficient than some other concrete referent. (Phonics)
  Increasing difficulty across lessons and levels: New letters and sounds introduced across lessons and levels; distractors begin with non-letter options; later, students choose from multiple letters.
  Method used to teach the objective: Easy-to-hard discrimination with increasingly more difficult distractors; system of least prompts used for incorrect responses.

Objectives 9-10: Identify first and last sounds in words (Objective 9); find pictures that begin/end with a specific sound (Objective 10).
  Rationale (NRP component): Isolating beginning sounds is a critical phonemic awareness skill and a precursor to beginning reading. (Phonemic Awareness)
  Increasing difficulty across lessons and levels: Sounds change across letters and lessons; consideration given to order of phonemes from easy to hard.
  Method used to teach the objective: Direct instruction model/lead/test strategy; system of least prompts used for incorrect response; first and last sounds highlighted.

Objectives 11-12: Point to letters in words that have been segmented (Objective 11); point to pictures that represent segmented words (Objective 12).
  Rationale (NRP component): Blending is one of the most difficult skills to translate for nonverbal students. Voice output devices do not require the student to think about the blending itself. If students can hear a segmented word and identify a picture of the word that was said, this demonstrates having internally blended the sounds. Although more difficult than simple verbal blending, it ensures students are not just "hitting a switch" to say a word. (Phonemic Awareness)
  Increasing difficulty across lessons and levels: Sounds to be blended change over lessons and levels, concurrent with those introduced in Objective 8.
  Method used to teach the objective: Direct instruction model/lead/test strategy; system of least prompts used for incorrect response.

Objective 13: Point to pictures of spoken words.
  Rationale (NRP component): Builds conceptual understanding of vocabulary by using a variety of pictures for the same spoken word. (Vocabulary)
  Increasing difficulty across lessons and levels: First words are people; less concrete words are introduced, including feelings, places, and actions; each level has a theme that is meaningful to children (friends, pets, community outings, birthday).
  Method used to teach the objective: System of least prompts (same as above).

Note. NRP = National Reading Panel, CVC = consonant-vowel-consonant.

(a) Students who respond using eye gazing (minimal or no use of hands and arms) can be guided to the correct answer by showing the correct response with a stimulus prompt such as a light pointer or colored frame.
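
The prompting procedures named in the Method column of Table 1 are behavioral teaching protocols rather than software, but their logic can be written out procedurally. The following Python sketch is illustrative only and is not part of the published ELSB materials; the function and variable names are hypothetical. It expresses the system-of-least-prompts trial described for Objective 2: give the student a chance to point independently, then model, then physically guide.

PROMPT_LEVELS = [
    "independent: wait for the student to point without help",
    "model: teacher points to the answer and has the student imitate",
    "physical: teacher gently guides the student's hand to point",
]

def least_prompts_trial(observe_response):
    """Move from the least to the most intrusive prompt until the student
    responds correctly.

    observe_response(level) stands in for the teacher judging the student's
    response at the given prompt level; it returns True when the response is
    correct. The return value is the prompt level at which the correct
    response occurred, which is the datum typically recorded for each trial.
    """
    for level, prompt in enumerate(PROMPT_LEVELS):
        # Deliver this prompt, then give the student an opportunity to respond.
        print(f"Prompt level {level}: {prompt}")
        if observe_response(level):
            return level
    # Full physical guidance always ends the trial with a correct response.
    return len(PROMPT_LEVELS) - 1

# Example: a student who points correctly only after seeing a model (level 1).
level_needed = least_prompts_trial(lambda level: level >= 1)

The constant time delay drill for Objective 1 follows the same pattern, except that the controlling prompt is delivered immediately (0-s delay) in one round and only after a 5-s wait in the other.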

TABLE 2
Description of Treatment and Control Groups

 Control Treatment

Characteristic N % N %

Gender Male 6 50.0 7 63.6
 Female 6 50.0 4 36.4

Ethnicity African American 6 50.0 6 54.5
 Caucasian 4 33.3 4 36.4
 Other 2 16.7 1 9.1

Verbal status Verbal 5 41.7 6 54.5
 Nonverbal 7 58.3 5 45.5

Class type SAC 6 50.0 6 54.5
 Autism 3 33.3 3 36.4
 Severe/profound 2 16.7 2 9.1

Grade K 4 33.3 0 0
 1 6 50.0 4 36.4
 2 1 8.3 5 45.5
 4 1 8.3 2 18.2

Free/reduced lunch None 4 33.3 4 36.4
 Reduced 0 0 0 0
 Free 3 25.0 3 27.3
 Did not answer 5 41.7 4 36.4

 Mean Range Mean Range

Age 8.75 8-10 9.36 9-11
IQ 37.55 18-54 36.50 20-50

Note. SAC = specialized academic curriculum for students with moderate
intellectual disability.

TABLE 3
Minutes Spent in Types of Literacy Instruction Per Day

                ELSB          SBL      Sight Words/    Phonics       Other          Total
                                         Pictures                   Literacy

Group         M     SD      M     SD     M      SD     M     SD     M     SD      M      SD

Treatment   18.49  10.23   9.27  6.08  10.51   4.34   4.25  6.96  13.71  7.33   56.23  16.38
Control      0      --    11.62  6.60  16.15  14.28   5.47  7.02  19.67  5.72   52.91  22.75

Note. ELSB = Early Literacy Skills Builder, SBL = story-based
lessons.

TABLE 4
Means, Standard Deviations, and Cohen's d for Control and
Treatment Groups

 Pretest Posttest
 Cohen's
Group M SD M SD d

NVLA Total
 Control 40.92 30.94 63.58 39.13 .65
 Treatment 36.27 21.42 72.55 37.92 1.22
CVR
 Control 9.92 5.53 17.00 5.86 1.24
 Treatment 11.82 4.40 19.00 4.77 1.57
PhonSk
 Control 32.27 25.50 47.36 33.49 .51
 Treatment 25.30 16.51 56.60 30.00 1.35
ELSA
 Control 40.33 35.40 54.08 35.73 .39
 Treatment 42.64 30.80 79.00 32.69 1.15
PPVT III
 Control 18.83 15.76 18.42 18.31 .02
 Treatment 14.36 12.18 20.82 15.76 .46
WLPB Total
 Control 12.58 13.50 15.58 17.92 .19
 Treatment 12.00 12.30 21.45 16.30 .66
Memory for Sentences
 Control 9.83 11.67 9.83 12.80 <.01
 Treatment 7.73 9.14 14.18 10.70 .65
Letter-Word Identification
 Control 1.83 2.98 3.42 4.80 .41
 Treatment 3.18 4.35 5.55 5.54 .48

Note. NVLA = Nonverbal Literacy Assessment, CVR = Conventions of
Reading section, PhonSk = Phonics and Phonemic Awareness section,
ELSA = Early Literacy Skills Assessment of the Early Literacy Skills
Builder curriculum, PPVT-III = Peabody Picture Vocabulary Test-III,
WLPB = Woodcock Language Proficiency Battery.
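
The article does not print the formula behind the Cohen's d values in Table 4, but the tabled values are consistent with dividing each group's pretest-to-posttest gain by the average of its pretest and posttest standard deviations. The short Python sketch below is illustrative only (it is not the authors' analysis code) and recomputes two of the tabled values under that assumption.

def cohens_d_prepost(m_pre, sd_pre, m_post, sd_post):
    """Standardized pre/post gain: mean change divided by the average of the
    pretest and posttest standard deviations (assumed formula)."""
    return (m_post - m_pre) / ((sd_pre + sd_post) / 2.0)

# NVLA total scores copied from Table 4.
d_control = cohens_d_prepost(40.92, 30.94, 63.58, 39.13)    # approximately .65
d_treatment = cohens_d_prepost(36.27, 21.42, 72.55, 37.92)  # approximately 1.22

print(f"NVLA control d = {d_control:.2f}, treatment d = {d_treatment:.2f}")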

TABLE 5
Results of Repeated Measures ANOVAs

Outcome Effect F-Ratio [[eta].sup.2]

NVLA Within-Ss Pre/post 40.47 ** .66
 Interaction 3.47 * .14
 Between-Ss Instruction .21 .01

CVR Within-Ss Pre/post 24.82 ** .54
 Interaction .01 <.01
 Between-Ss Instruction 1.01 .05

PhonSk Within-Ss Pre/post 32.83 ** .63
 Interaction 5.57 ** .23
 Between-Ss Instruction .22 .01

ELSA Within-Ss Pre/post 17.42 ** .45
 Interaction 3.56 * .15
 Between-Ss Instruction 1.14 .05

PPVT Within-Ss Pre/post 2.80 .12
 Interaction 3.63 * .15
 Between-Ss Instruction .03 <.01

WLPB Total Within-Ss Pre/post 8.23 ** .28
 Interaction 2.21 .10
 Between-Ss Instruction .20 .01

Memory for Within-Ss Pre/post 4.59 * .18
 Sentences Interaction 4.59 * .18
 Between-Ss Instruction .06 <.01

Letter-Word Within-Ss Pre/post 8.25 ** .28
 Identification Interaction .32 .02
 Between-Ss Instruction .99 .04

Note. NVLA = Nonverbal Literacy Assessment, CVR = Conventions of
Reading section, PhonSk = Phonics and Phonemic Awareness section,
ELSA = Early Literacy Skills Assessment of the Early Literacy Skills
Builder curriculum, PPVT III = Peabody Picture Vocabulary Test-III,
WLPB = Woodcock Language Proficiency Battery.

* p < .05; ** p < .01. Degrees of freedom for all tests of significance were 1, 21.
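
Table 5 summarizes a 2 (instruction group: treatment vs. control) x 2 (time: pretest vs. posttest) design in which time is the repeated factor. As an illustration only (this is not the authors' analysis code, and the file name and column names below are hypothetical), such a design can be analyzed as a mixed ANOVA, for example with the Python pingouin library, which reports F ratios, uncorrected p values, and partial eta squared for the time effect, the group effect, and the time x group interaction.

import pandas as pd
import pingouin as pg

# Hypothetical long-format file: one row per student per testing occasion,
# with columns "student", "group" (treatment/control), "time" (pre/post),
# and "score" (e.g., the NVLA total).
scores = pd.read_csv("nvla_long.csv")

aov = pg.mixed_anova(
    data=scores,
    dv="score",         # outcome measure
    within="time",      # pretest vs. posttest (repeated factor)
    subject="student",  # identifies the repeated measurements
    between="group",    # treatment vs. control
)

print(aov[["Source", "F", "p-unc", "np2"]])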