
Processing of bottom-up and top-down information by skilled and average deaf readers and implications for whole language instruction.

If we identify those components of competence that enable skillful reading by the relatively few deaf youth who are proficient readers, this knowledge may suggest instructional goals that will help the many deaf youth with low reading ability. (Although the word deaf has been used in recent years to designate cultural group membership [see Dolnick, 1993], here the term is used exclusively to denote a measured hearing loss that is severe to profound in degree.) Guided by a cognitive view of literacy processes and by previous research with deaf readers, I have studied possible differences between skilled and average deaf secondary readers. Specifically, the two ability groups were compared on two categories of reading tasks. The first category reveals fluency in processing the detailed, visual information of text. A second category measures a reader's tendency to engage in higher-level processes, which focus on meaning. A finding that skilled readers can be distinguished by their effective use of higher-level processes would argue for accelerating the use of whole language instructional strategies. This increasingly popular approach to literacy development emphasizes readers' use of the semantic information that they bring to or generate from text. In the whole language view, processing of visual information is accorded secondary importance. However, if fluency in processing the visual information of text can distinguish between the two groups, this would argue for the necessity of instruction that develops efficient processing of visual information.


According to cognitive views of the reading process (see Carpenter & Just, 1981; Rumelhart, 1977; Stanovich, 1980), readers may use various kinds of information, which are available either on the printed page or from the long-term memory of the reader. They construct mental representations of text that integrate their knowledge of English letter combinations, phonology, syntax, semantics, pragmatics, discourse structure, episodic knowledge, and domain knowledge. The kinds of information appearing early in the preceding taxonomy - letter combinations, phonology, and syntax - are frequently referred to as "bottom-up" constraints. These are the specific visual data that readers actually perceive on the printed page. The types of information later in the taxonomy are often characterized as "top-down." Once activated in the mind of a reader, these concepts work "downward" to guide interpretation of the detailed information on the printed page.

The various kinds of constraints interact with each other, sometimes informing and sometimes compensating for one another. However, these interactions occur within the limited space of the reader's "working memory," the faculty used by the mind to temporarily store data that hold our attention. The limited capacity of working memory makes it difficult for readers to process many textual constraints at the same time, and the impermanence of working memory contents necessitates swift processing of words and constraints to avoid decay of information before it contributes to the meaning that is constructed. Research by Daneman and Carpenter (1980) and Perfetti (1985) has indicated that slow readers are susceptible to decay of working memory contents. According to Shankweiler and Crain (1986), working memory capacity is functionally enlarged by use of phonological strategies for sustaining words in working memory, a phenomenon with obvious implications for deaf readers.

Stanovich (1980) theorized a "compensatory" interaction among the constraints of a text, and this is another critical concept of theory related to reading and cognition. He described a compensatory relationship as occurring when a reader's deficient knowledge relative to one constraint is offset by greater reliance on knowledge of one or more other constraints in which competence is relatively superior. As a consequence, the reader is able to construct a reasonable understanding of the text. Garner (1988) gave the example of a weak decoder with considerable prior knowledge about Topic X being better off using top-down processes to read about Topic X. Conversely, strong decoders with limited knowledge of the topic are better off using their bottom-up processes. In a compensatory relationship, a reader's attention is consciously or automatically channeled toward the constraint where knowledge seems to be the most accurate or which seems to be contributing best to comprehension of the text.

Even this brief discussion of a cognitive view of literacy reveals a number of its advantages as a framework for conducting research on the reading processes of deaf youth. The recognition of reading as a composite of multiple sub-processes is particularly relevant to deaf readers, because each of these component processes represents a potential avenue for improving overall reading competence. However, this analytical view of competence does not necessarily call for an explicit drill-and-practice instructional approach as the sole vehicle for developing those processes. The cognitive view also recognizes that human minds are particularly challenged by those tasks, such as reading, that require extremely rapid management of multiple kinds of information. Timely processing of this information is a must, and, by extension, it is informative to measure reading speed as a way of identifying factors that may facilitate or obstruct a reader's processing of the words of a text. Thus, the cognitive view accepts the merits of studying reading speed as an observable indicator of the normally covert mental processes of reading. Haberlandt (1984) provides an excellent review of this work.

Finally, theory related to the compensatory phenomenon invites the question of whether skilled deaf readers are making exceptionally effective use of one or more selected reading processes, even though they may be deficient on certain other processes heavily used by hearing readers. Instruction in those processes that are critical to skilled deaf readers might allow less able deaf readers to become proficient at them and to improve their reading comprehension as a result.



Reading comprehension is a problem for many people who are profoundly deaf. According to the most current data published by the Center for Assessment and Demographic Studies (1991), the average 18-year-old with a severe to profound hearing loss reads with the comprehension of a normally hearing child in the early months of 3rd grade. The same source indicates that only 3% of deaf 18-year-olds read at the same level as an "average" hearing reader of the same age. If these statistics had excluded the performance of students who had experienced a hearing loss after the age of 2 years, the prevailing reading levels would be even lower. The field of deaf education is far from totally agreed on how to address the reading problems of its students. A survey by LaSasso (1987) revealed that a basal reader approach has been the most frequently used method for promoting the literacy of deaf children. The basal approach normally prescribes specific sequences of language-controlled readings, accompanied by drills in vocabulary development and word attack. One basal series, Reading Milestones (Quigley & King, 1982), was developed specifically for deaf children; and it has been heavily used by the field. The series claims to introduce skills according to a scope and sequence dictated by the research of Quigley, Wilbur, Power, Montanelli, and Steinkamp (1976). In addition, as an aid to syntactic development, the first three levels of the series "chunk" the words of sentences into their phrasal constituents. The continuing bleakness of reading achievement scores suggests that the basal approach is not the total solution.

There is more recent evidence of the rising popularity of the whole language approach to literacy development, generally recognized as having been conceived by Goodman (1968). The published literature related to deafness shows many instances of whole language adoption by teachers of deaf students. For a compendium of 19 separate articles, see Abrams (1991). Beyond the spread of whole language through the literature, Pre-College Programs of Gallaudet University (1991) reported that instructional programs for deaf children made 43 separate requests for teacher inservice training in whole language teaching strategies during 1990 and 1991 alone. Although the whole language philosophy has also taken hold in schools for hearing children, the impetus for adoption in reading programs for deaf children seems to have come primarily from the research with four deaf subjects by Ewoldt (1981), a colleague of Kenneth Goodman.

Proponents of whole language stress the need to emphasize communication and meaning when conducting activities aimed at improving a child's literacy competence. Shapiro (1992) described a true whole language program as one that integrates language across the curriculum, surrounds children with print, and emphasizes the communicative and social functions of language. According to Pearson's (1989) interpretation of essential whole language practices, teachers need to use reading tasks and texts that are functional, natural, genuine, and authentic. Teachers also should avoid heavy use of workbooks, rigid sequences of skill teaching, or isolated skill practice. In general, whole language and basal reader advocates differ regarding how much literacy experiences need to be grounded in meaningful context, how much instruction in bottom-up skills should be direct or indirect, and whether bottom-up skills are addressed incidentally or systematically.

Criticism of whole language has steadily emerged in the literature focused on the reading acquisition of children with normal hearing. The approach has been criticized on theoretical grounds by Jager-Adams (1990) and Shapiro (1992) because, first, whole language's incidental, "as-needed" approach to bottom-up skills threatens to leave gaps in the learner's basic skill repertoire and, second, because indirect instruction is considered inadequate for teaching certain basic skills. Vellutino (1991) based his critical assessment of whole language on a meta-analysis of empirical studies and warned that whole language approaches were less effective for developing basic, bottom-up skills.

The use of whole language practices with deaf readers also has attracted a certain amount of criticism. Dolman (1992) reviewed the research with hearing children on the effectiveness of whole language compared to basal reader approaches and concluded that the relatively indirect methods of whole language were not as effective as more direct methods in promoting the reading development of children from disadvantaged backgrounds. Dolman reasoned that deaf children may share some of the same obstacles to literacy development as disadvantaged children, including deficient exposure to print, infrequent interaction with books, and relative inexperience with standard English. For this reason, he concluded that deaf children, too, might be inappropriate candidates for a pure, whole language instructional program. Although whole language does provide an alternative to the basal approach, the field clearly does not yet know the best tactics for helping the average deaf child become a competent reader.

Guidance from Research on Component Processes


Regardless of the reading approach that is adopted, it should focus on those cognitive processes that can be effectively exploited by deaf readers - cognitive processes that are being exploited by the skilled deaf reader and not by the average deaf reader. There are some deaf readers who, although virtually isolated from the sounds of spoken English, have still managed to become adept at comprehending English text. Whatever experiences guided these skillful students to their current levels of competence, they presently command arrays of cognitive processes that permit proficient reading of written English. Research that identifies the component reading processes that are critical to the performance of skilled deaf readers may suggest appropriate goals for the literacy instruction of average deaf readers, and a review of existing research provides some guidance in this regard.

Most of the past research on deaf readers has focused on their weaknesses and has attributed their reading comprehension problems to deficiencies in a variety of component processes. The completeness, accuracy, and use of syntactic knowledge is one of those deficient processes (Berent, 1988; Israelite, 1981; Moores et al., 1987; Power & Quigley, 1973; Quigley & King, 1980; Quigley et al., 1976; Robbins & Hatcher, 1981; Schmitt, 1968; Scholes, Cohen, & Brumfield, 1978; Wilbur, 1977). These studies show, for one thing, that many deaf readers do not recognize certain standard English phrase and sentence patterns; thus, they do not comprehend the meaningful interword relationships signalled by those patterns.

Past research has also implicated deficient lexical processes as an explanation for inferior reading performance (LaSasso & Davey, 1987; Moores et al., 1987). According to Strassman, Kretschmer, and Bilsky (1987), deaf readers have been documented as retrieving excessively specific word meanings where more general meanings are called for. King and Quigley (1985) reviewed the literature showing the difficulties deaf readers have in comprehending figurative language. Thus, many deaf readers either retrieve word meanings at great costs in time and attention, temporarily derailing higher-level comprehension processes, or they often retrieve and apply wrong word meanings, which also misguides their comprehension.

Another strand of research has focused on the potential importance of phonological recoding, even for readers with a profound hearing loss. Several researchers have discovered some evidence that relatively skilled deaf readers may use a phonological strategy for temporary storage of words in working memory (Conrad, 1979; Hanson, 1990; Hanson & Fowler, 1987; Hanson, Goodell, & Perfetti, 1991; Hanson & Lichtenstein, 1990; Lichtenstein, 1983). Hanson and Fowler hypothesized that some deaf readers may acquire an idiosyncratic, yet consistent, sound-based strategy, despite profound hearing losses, through lipreading experiences and speech training. In contrast, deaf readers who use a less enduring iconic/spatial strategy for sustaining the contents of working memory face a greater danger that the words of a sentence may partly decay before their combined meaning can be constructed and stored in long-term memory. Kelly (1993) found that average deaf readers were significantly less able than skilled deaf readers to demonstrate verbatim recall of just-read sentences.

These studies inferred the importance of certain component reading processes by identifying differences either between hearing and deaf subjects or, less frequently, among samples composed exclusively of deaf subjects who varied in the level of their reading comprehension ability. With the exception of the study by Kelly (1993), the measurement of component skills did not actually occur during the reading process itself. Nevertheless, the weaknesses identified by these studies are likely to disrupt the reading process. Documented deficiencies in syntactic knowledge, lexical knowledge, and use of a phonological code imply that the average deaf reader may process sentences inaccurately or inefficiently, consuming working memory capacity and obstructing effective comprehension.

Not all of the research has found deficiencies in the reading processes of deaf children. Quinn (1981) concluded that deaf children use strategies for processing stories that are qualitatively similar to those of children with normal hearing. Gormley (1981) suggested that topic familiarity or world knowledge "strongly impinges" on deaf students' understanding of written text. Her subjects more accurately comprehended texts about familiar topics compared with unfamiliar ones. Ewoldt (1981) found that her deaf subjects "were also capable, in some instances, of by-passing the syntactic deep structure and moving directly to meaning.... There is anecdotal evidence suggesting that there are occasions when the print-to-meaning leap is unmediated by syntax" (pp. 75-76). These findings regarding the favorable contributions of world knowledge and story structure, as well as the supposed redundancy of syntax, raise the question of whether certain deaf readers may be using top-down processes in a compensatory manner to offset weaknesses in bottom-up processes.

Purpose of This Research

Past research has revealed that many deaf readers may have deficits related to certain bottom-up reading processes, and these deficits may explain their problems comprehending English text. However, several questions still remain. For example, does the condition of hearing loss common to skilled and average deaf readers dictate that both groups also share certain deficits in bottom-up processes? Stanovich's (1980) theory of compensatory interaction of component reading processes allows for the possibility that skilled deaf readers could incur such deficits but still compensate for them by making exceptionally effective use of top-down processes. Alternatively, it could be that skilled deaf readers have achieved relative fluency in bottom-up processing, whereas average readers have not.

Toward the end of identifying potentially critical component processes, the present study investigated whether indicators of top-down processing competence, the sort usually targeted by the whole language instructional approach, seemed to distinguish between skilled and average deaf readers. In addition, it studied whether these two groups were distinguished by measures of bottom-up processing fluency - competence that would be considered more as a by-product, rather than as an explicit goal, of whole language instructional programs. The study built on the previous research by testing bottom-up and top-down processing, both of which have been identified by the earlier work as potentially influential. The current study also extended the prior research by recording a measure of actual processing - reading time - while the act of reading was taking place. In contrast, earlier research with deaf subjects has focused primarily on post-reading outcome measures. This study also contrasted two groups of readers who were dramatically different in reading ability, increasing the chances of identifying component process differences that are critical to reading performance.

The study addressed two major questions, as follows:

* Are skilled and average deaf readers distinguished by indicators of bottom-up processing? Specifically, when compared to average deaf readers, do skilled deaf readers demonstrate significantly faster and more stable rates of processing incoming words of text? As stated earlier, research by Daneman and Carpenter (1980) and Perfetti (1985) has indicated that slow readers are highly susceptible to decay of working memory contents. In addition, according to LaBerge and Samuels (1974), fluent reading should be characterized by a "smoother profile" from one word to the next. This was borne out by Aaronson and Ferres (1984), who showed that reading times of children had significantly larger standard deviations than those of adults and that reading times of both groups were more variable during task conditions designed to impair fluency.

* Are skilled and average deaf readers distinguished by indicators of top-down processing? Specifically, do the two groups demonstrate varying uses of world knowledge and knowledge constructed from prior text to facilitate processing of incoming words? In addition, do both groups demonstrate a tendency toward elevated reading time - sentence "wrap-up" - at the conclusion of sentences? Researchers have documented the facilitating influence of high levels of domain knowledge or world knowledge (McNeil, 1984; Spilich, Vesonder, Chiesi, & Voss, 1979; Walker, 1987; Yekovich & Walker, 1987). According to this literature, concepts related to familiar topics stored in long-term memory may be activated during reading, easing the integration of words and concepts as they are encountered by the reader's gaze. It follows, then, that readers who do benefit from their stored knowledge will read words more swiftly in passages about familiar topics, compared to unfamiliar ones.

The potential facilitation from prior text has been studied by Fischer and Glanzer (1986) and Glanzer, Fischer, and Dorfman (1984). They have shown that proficient readers actively keep in mind the information of just-read text, and they use it to facilitate the processing of new words. Readers will read more swiftly when this prior text information is readily available, compared to when availability of the information is obstructed.

A third top-down question relates to sentence wrap-up. According to Carpenter and Just (1981), a phenomenon observed in normal, skilled reading is an elevation in reading time at the last word of sentences. Carpenter and Just contended that the last word of a sentence is used by readers for higher-level processes, such as inference making, consistency checking, and integration of new information.

Delimitations of the Study

This study is intended to detect potentially influential component reading processes, rather than educational, social, and familial experiences that affect the development of those processes. The field of deaf education is rife with debate regarding which educational and familial influences foster the reading competence of deaf children and youth. Just a few of the areas of contention and uncertainty are the possible effects of parent hearing status and signing skill, school signing policy, choice of a reading instructional philosophy, and residential or mainstream placement.

A review of that literature is beyond the scope of this article. Rather than seeking to identify the factors that may lead to competent reading, this study is an attempt to identify possible components of competent reading. Given that a person with a profound hearing loss has become a competent reader, what are the component processes that he or she may be exploiting better than a reader who has not achieved competence?



Subjects were 18 students enrolled at a metropolitan secondary school for the deaf. Nine students constituted the average group. School records indicated that each of these students had scored near the 50th percentile (compared to same-age deaf children) on the reading comprehension subtest of the Stanford Achievement Test-Hearing Impaired Edition (Gardner, Rudman, Karlsen, & Merwin, 1982). Their average scaled score on their most recent testing was 583 (range: 574-593), and their average percentile rank was 47 (range: 39-54). This group average is equivalent to the performance of an average hearing child at the beginning of the 3rd grade.

The nine students in the skilled group were all close to the top of the scale on the same test. Their average scaled score was 747 (range: 713-827). Eight of these students had a percentile rank of 99, and one scored at the 98th percentile compared to same-age deaf readers. When expressed in grade equivalents, the average difference between the two groups of students was slightly more than 9 years. To confirm differences between the groups in reading-comprehension ability, the two groups were administered an alternate measure of reading competence, the Degrees of Reading Power (The College Board, 1986), which has a maximum score of 99. The mean of the average group was 39.6 units on the measurement scale, whereas that of the skilled group was 81.3, confirming the dramatic intergroup difference in comprehension ability.

All students had experienced a hearing loss at the age of 2 years or younger. The mean Better Ear Average loss for the average group was 108 decibels (range: 92-125); for the skilled group the mean Better Ear Average loss was 101 decibels (range: 90-108). The mean age of the average subjects was 17.4 years (range: 16-19) and of the skilled subjects, 16.2 years (range: 15-18). The younger ages of the skilled readers suggest that their superiority on measures of overall reading comprehension, and possibly on measures of component processes, was not a likely result of advanced age or a greater number of years in school. Two of the nine average readers were male, while three of the skilled readers were male. Three of the skilled group came from families with deaf parents. The parents of the remaining subjects all had normal hearing.

Data Collection

Apparatus. Reading time for words in short paragraphs was used as the dependent variable in the group comparisons. Subjects read experimental texts on the monitor of a personal computer, which had a screen-refresh cycle of 16 milliseconds (ms). Using two buttons on a computer peripheral - a serial mouse - the subjects controlled the display of words on the screen. A press of the right mouse button revealed the next word to the right. The computer screen displayed one word at a time. As each new word appeared, the preceding one disappeared. At various times during and following reading of a passage, subjects were required to respond to true/false questions, and they used the two buttons of the computer mouse for this purpose as well. While the subject controlled the forward-only display on the computer screen, the time-keeping mechanism of the computer, unknown to the subjects, recorded the time in ms that each word was kept in view.

Each word appeared in the same position as it would in normal text; and words not in view were represented as a series of dashes, one for each letter of the unseen words. Thus, data on the length and location of all words were available in peripheral view. This technique has been named the "moving window" by its originators, Just, Carpenter, and Woolley (1982). The accompanying figure shows what one display looked like to the subjects after five presses of the mouse button.

It is important to note that, although the word-by-word Moving Window technique tends to be slower than reading from complete lines of text, it has much in common with more natural forms of reading. It is subject-paced, and it generally proceeds from left to right. Although words are brought into view separately, this does not mean that they are necessarily processed separately in working memory. The view time for a word is largely an indicator of the working-memory processing required before the subject is ready to access a new word on the screen for the purpose of adding it to those in working memory.
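As an illustration of the display logic just described, the following sketch renders one frame of a moving-window display: only the current word is shown, and every other word is replaced by one dash per letter, preserving word length and position. The function name and all details are illustrative assumptions, not the study's actual software.

```python
def moving_window_display(words, visible_index):
    """Render one frame of a moving-window display.

    Only the word at visible_index is visible; every other word is
    replaced by one dash per letter, so length and location cues
    remain available in peripheral view.
    """
    return " ".join(
        word if i == visible_index else "-" * len(word)
        for i, word in enumerate(words)
    )

sentence = "The boy walked to the store".split()
# After the third button press, the third word is in view:
print(moving_window_display(sentence, 2))  # → "--- --- walked -- --- -----"
```

In the actual apparatus, each button press would also log the elapsed view time for the outgoing word, which served as the dependent variable.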

Materials. Subjects read a total of 30 experimental passages that were each eight sentences in length. To test the relative effects on reading time of students' use of world knowledge, 16 passages were designed to address relatively familiar topics, whereas 14 paragraphs dealt with relatively unfamiliar topics. (One subject's data from two unfamiliar passages were damaged by a computer malfunction, and data from those two passages were excluded from the analyses for all subjects, reducing the number of unfamiliar passages to the 14 indicated.)

To prepare passages about topics that were indeed familiar to subjects, we developed "script-based" texts. According to Bower, Black, and Turner (1979), script-based texts are passages about routine activities, such as doing laundry, going to a restaurant, and visiting a doctor's office. These experiences are highly familiar to most members of a society or culture; and according to the research by Yekovich and Walker (1987), central concepts of script-based texts are highly likely to be retrieved rather spontaneously from long-term memory and brought into an active status in working memory. In an effort to identify concepts that this population considered central to script-based topics, we conducted a prior survey of slightly older students enrolled in the same institution who were not subjects in the main study. We then incorporated the concepts identified as central by this survey into the experimental texts.

Another portion of the experimental passages focused on topics that were intended to be relatively unfamiliar. Topics were from a variety of knowledge domains, the kinds of writing one might find in a social studies text. Examples of the topics were Indian weapons, Viking explorers, and the invention of the steam engine. We assumed that, although readers in the study might have passing familiarity with some of the topics, they would not have in-depth knowledge of the information in these specific texts. Twelve of the passages about unfamiliar topics were taken from the study by Glanzer et al. (1984).

To make the information in all texts more accessible to the average subjects in the study, passages excluded vocabulary that was judged by an experienced teacher of deaf children to be somewhat difficult. In addition, we avoided the use of passive voice and relative clauses.

After the subjects read each passage, the computer posed four comprehension questions about the facts of the preceding text. Average subjects responded correctly to 86% of the questions that followed familiar passages and to 67% of those following unfamiliar passages. This suggests that the familiar passages were indeed easier for these subjects to comprehend, although the level of accuracy achieved on the unfamiliar passages also suggests that the average readers understood much of what they read in these passages as well. Rates of response accuracy for the skilled subjects were 95% for the familiar passages and 90% for the unfamiliar passages.

Procedures. Subjects read the first two sentences of each paragraph without any experimental manipulations other than the word-by-word constraint imposed by the Moving Window method. These "warm-up" sentences were designed to give the subjects a sense of what the passage was about before they encountered critical experimental sentences.

To determine use of prior context to facilitate processing of incoming words, we designated two of the remaining six sentences in each paragraph as target sentences; and students read each of these under one of two experimental conditions - interrupted and continuous. In the interrupted condition, the computer removed the text display from the screen without warning and presented three simple arithmetic questions prior to the reading of a target sentence. This strategy was used by Fischer and Glanzer (1986) and Glanzer et al. (1984) to remove the meaning of prior text from the working memories of subjects in their studies. Sentences read under the continuous condition were read without a preceding distractor task. Elevations in reading times for sentences processed under the interrupted condition compared to those read under the continuous condition were taken as evidence of the facilitating effect of information from prior text held active in working memory.

The arithmetic questions were phrased as equations, which subjects were required to evaluate as either true or false. These equations appeared one at a time on the computer screen, and responses were entered by a press of either the left or right button of the computer mouse. After each response, the computer was automatically prompted to present the next equation until all three had been presented; then the subject was returned to the text screen at a location one space before the target sentence. The precaution of returning the subject to the screenful of dashes, instead of the first word of the target sentence, reduced the potential inflation of first-word reading times due to the possible reactive effects of switching from arithmetic processing to reading.
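The interruption task just described can be sketched as follows. The equation format, operand ranges, and the size of the error offsets are assumptions made for illustration; the article does not specify the exact items used.

```python
import random

def make_distractor(rng):
    """Generate one true/false arithmetic equation of the kind used
    to displace prior-text meaning from working memory.

    Illustrative sketch only: operands 1-9 and error offsets of 1 or 2
    are assumptions, not the study's documented item parameters.
    """
    a, b = rng.randint(1, 9), rng.randint(1, 9)
    is_true = rng.random() < 0.5
    # A "false" item shows a sum that is off by a small nonzero amount.
    shown = a + b if is_true else a + b + rng.choice([-2, -1, 1, 2])
    return f"{a} + {b} = {shown}", is_true

rng = random.Random(0)
items = [make_distractor(rng) for _ in range(3)]  # three per interruption
```

Subjects would evaluate each displayed equation with a left or right button press before being returned to the text.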

The two target sentences in each passage were always separated by two nontarget sentences. If the third sentence of the passage was a target read after an arithmetic interruption, the sixth sentence was the continuous target for that passage. The other target combinations, randomly assigned to passages, were the fourth paired with the seventh sentence and the fifth sentence paired with the eighth. When the interrupted target was the first of the two encountered in the paragraph, this design feature allowed the subject to reestablish a pattern of continuous reading before the second target sentence was read without the arithmetic interruption.

At the end of each passage, subjects answered four true/false comprehension questions about the specific facts of the passage. Using the two buttons of the computer mouse, subjects classified each statement as either true or false. Subjects understood prior to the experiment that their accuracy on these items would be scored and reported to them.

Prior to reading experimental passages, subjects were given manually signed instructions and practice with three practice passages. The practice session was interactive in nature: The investigator monitored the subject's manipulation of the computer mouse and responses to the arithmetic and reading questions to confirm understanding of procedures for viewing text and responding to questions. Practice data were not analyzed.

Passages were administered during sessions on 2 different days, and each session lasted approximately 1 hr. During each session, subjects processed four passages and then took a short break until the reading time and response data were transferred from computer memory to a storage disk. This pattern was continued until all passages for the session were completed.

Data Reduction and Analysis

Reading times generated by the foregoing experimental procedures were processed in a manner that was responsive to the research questions that initially prompted the investigation. In addition, statistical procedures were used - analysis of variance (ANOVA) with repeated measures - that could simultaneously accommodate the multiple potential sources of variation in reading time.

A number of studies that used processing time as the dependent variable have taken steps to stabilize a subject's performance across multiple trials and to reduce the potential distortion caused by exceptionally long times - outliers - that may have resulted from a subject's being temporarily distracted on one or more individual trials. Hanson and Fowler (1987) opted to calculate a mean time after eliminating the times of those trials that differed from the cell mean by more than 2 standard deviations. However, because this could have had the result of comparing subjects on different sets of trials, I decided on the approach used by Belmont, Karchmer, and Bourg (1983) and Belmont, Karchmer, and Pilkonis (1976), who reduced the effect of outliers simply by calculating each subject's median processing time for the trials that constituted the same set. These intrasubject medians were then used to calculate the mean across subjects.
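As a sketch, the median-based reduction might be implemented as follows; the subjects, trial counts, and reading times are hypothetical, and the function name is my own invention:

```python
import statistics

def summarize_group(subject_trials):
    """Reduce each subject's trials to an outlier-resistant median,
    then average those medians across subjects, as in the approach
    of Belmont et al. adopted here."""
    medians = [statistics.median(trials) for trials in subject_trials]
    return statistics.mean(medians)

# Hypothetical reading times (ms); a long, distracted trial barely
# moves a subject's median, whereas it would inflate a raw mean
group = [
    [420, 435, 410, 1900],   # subject 1, one distracted trial
    [515, 500, 530, 520],    # subject 2
    [610, 640, 2200, 625],   # subject 3, one distracted trial
]
group_mean = summarize_group(group)
```

Because no trials are discarded, every subject contributes the same set of trials to the summary, which is the property that motivated this choice over a trimmed mean.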

Core Reading Rate. Each subject's usual "core" rate of reading while using the Moving Window apparatus was operationalized as the median reading time for the words of all target sentences within each level of topic familiarity. This median was based on 348 words from the unfamiliar topics and 356 words from the familiar topics after extreme times - outliers - had been excluded from each subject's two data sets, according to procedures explicated by Shewhart (1931). Accordingly, the following routine was applied to each subject's distribution of times.

1. Reading times within the distribution were ranked according to their duration.

2. The difference in the reading times at the 25th and 75th percentiles was calculated.

3. Because the 25th/75th percentile difference is equal to 1.35 standard deviation units in a normal distribution, the result of Step 2 was divided by 1.35, yielding the standard deviation of the subject's estimated distribution of reading times without the distortion of outliers.

4. Reading times that were more than 3 standard deviations greater than the median for the distribution were considered outside of the subject's normal distribution and were excluded from the distribution.

5. The medians of the remaining times were then calculated separately for each subject's two sets of paragraphs, and these represented each subject's Core Reading Rate for those passages.
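The five steps might be sketched in code as follows. The reading times are hypothetical, the percentile estimates are deliberately crude (rank positions rather than interpolation), and the function name is my own:

```python
import statistics

def core_reading_rate(times_ms):
    """Estimate a Core Reading Rate (median word reading time) after
    excluding outliers, following the five steps described above."""
    ranked = sorted(times_ms)              # Step 1: rank times by duration
    n = len(ranked)
    q25 = ranked[n // 4]                   # crude rank-based percentiles
    q75 = ranked[(3 * n) // 4]
    iqr = q75 - q25                        # Step 2: 25th/75th percentile difference
    sd_est = iqr / 1.35                    # Step 3: IQR spans 1.35 SDs in a normal curve
    median = statistics.median(ranked)
    cutoff = median + 3 * sd_est           # Step 4: drop times > median + 3 SD
    retained = [t for t in ranked if t <= cutoff]
    return statistics.median(retained)     # Step 5: median of the remaining times

# Hypothetical reading times (ms per word) with one obvious outlier
times = [310, 295, 330, 305, 320, 300, 315, 2400]
rate = core_reading_rate(times)
```

Note how the IQR-based cutoff excludes the 2400-ms time without letting it inflate the estimated spread, which is the point of Shewhart's procedure.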

Core Reading Rate data were analyzed in a 2 (Ability Group) x 2 (Topic Familiarity) ANOVA with repeated measures, with accompanying F-test. Topic Familiarity was treated as a within-subject factor.

Stability of Reading Rate. Each subject's Stability of Reading Rate was operationalized as the standard deviation of the word reading times used to calculate the Core Reading Rate. However, in a refinement of the statistic used by Aaronson and Ferres (1984), the results are expressed as a percentage of the mean of those reading times. The use of a percentage rather than the actual value of the standard deviation prevented a subject's relative standing on the Stability measure from being dictated by the magnitude of the Core Reading Rate, given that reading rates with larger values will naturally tend to have larger standard deviations. Stability of Reading Rate was analyzed in a 2 (Ability Group) x 2 (Topic Familiarity) ANOVA with repeated measures, with F-test.
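A minimal sketch of the statistic, with hypothetical times and an invented function name, follows. Dividing by the mean makes the measure a coefficient of variation, so subjects with proportionally similar variability score alike regardless of raw speed:

```python
import statistics

def stability(times_ms):
    """Stability of Reading Rate: the standard deviation of word
    reading times expressed as a percentage of their mean, so slower
    readers are not penalized by naturally larger raw SDs."""
    return 100 * statistics.stdev(times_ms) / statistics.mean(times_ms)

# Two hypothetical subjects with identical relative variability
fast = [300, 330, 270, 315, 285]
slow = [600, 660, 540, 630, 570]   # twice as slow, twice the raw SD
```

Despite the slow reader's larger raw standard deviation, both subjects receive the same Stability score.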

Facilitation from the Information of Prior Text. The temporary loss of prior text from working memory logically would tend to have the greatest effect on the first word of sentences read following an interruption. At the first word, it would be most difficult for the subject to have reinstated the substance of prior text from long-term memory, whereas with each new word read, there would be an increased likelihood of recovering the displaced information. Performance was summarized as the median time for reading the first word of each sentence under each of the two conditions, continuous or interrupted, within each level of topic familiarity, familiar or unfamiliar. Thus, in the unfamiliar passages, the medians for the interrupted and continuous conditions were each based on the reading times for the first words in the 14 target sentences of each type. In the familiar passages, the medians were based on the reading times for first words of the 16 target sentences of each type.

To accommodate the different potential sources of variation in reading times, the data from the 2 (Ability Group: Skilled or Average) x 2 (Reading Condition: Interrupted or Continuous) x 2 (Topic Familiarity: Familiar or Unfamiliar) design were analyzed using an ANOVA with repeated measures, with accompanying F-test. Reading Condition and Topic Familiarity were treated as within-subject factors.

Sentence Wrap-up Time. According to the conception of Carpenter and Just (1981), sentence wrap-up is an elevation in reading time at the end of a sentence. Here we tested for the presence of this elevation by comparing reading times for the last word of sentences to times for reading the next-to-last word. Performance was summarized as the median time based on all target sentence words at each of two word positions - last or next-to-last - within each level of topic familiarity, familiar or unfamiliar. The medians for the two word positions in the unfamiliar passages thus were each based on 28 target sentences, whereas those for the two word positions in the familiar passages were based on 32 target sentences.
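The comparison can be sketched as follows, with hypothetical per-word times and an invented function name; a positive difference is the elevation taken as evidence of wrap-up processing:

```python
import statistics

def wrapup_elevation(sentences):
    """Compare the median reading time for the last word of each
    sentence with that for the next-to-last word; a positive result
    suggests sentence wrap-up (Carpenter & Just, 1981)."""
    last = statistics.median(s[-1] for s in sentences)
    next_to_last = statistics.median(s[-2] for s in sentences)
    return last - next_to_last

# Hypothetical per-word reading times (ms) for three target sentences;
# the final word of each is read noticeably more slowly
sentences = [
    [310, 295, 320, 305, 410],
    [330, 315, 300, 325, 445],
    [290, 305, 310, 295, 430],
]
elevation_ms = wrapup_elevation(sentences)
```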

Data from the 2 (Ability Group) x 2 (Word Position: Last or Next-to-last) x 2 (Topic Familiarity) design were analyzed using an ANOVA with repeated measures, with F-test. Word Position and Topic Familiarity were treated as within-subject factors.


Results

The following two sections respond in turn to the two primary research questions of the study: whether skilled and average deaf readers are distinguished (1) by bottom-up reading processes or (2) by top-down reading processes.

Bottom-up Processing

Table 1 shows mean Reading Rates (ms per word) for the skilled and average subjects when reading familiar and unfamiliar passages. Skilled subjects read significantly faster than did the average readers in both familiar and unfamiliar passages. The ANOVA produced a significant main effect for the Ability Group factor, F(1, 16) = 20.47, p = .000. In addition, both groups of subjects demonstrated significantly swifter processing rates while reading familiar passages compared to unfamiliar passages, F(1, 16) = 9.82, p = .006. The interaction between the Ability Group and Topic Familiarity factors was not significant, indicating that the Topic Familiarity factor did not produce a differential benefit for either group of subjects.

Table 2 shows results related to a second measure of bottom-up processing fluency, namely, the Stability of each reader's word-to-word reading times. Again, this is summarized as the standard deviation of reading times for all words in target sentences expressed as a percentage of the mean for those times. Like the Reading Rate statistic, the Stability data show that the skilled readers are significantly more fluent than average readers.


The ANOVA for the Ability Group factor produced a significant main effect, F(1, 16) = 11.75, p = .003. The data do reveal slight improvements in Stability of Reading Rate for familiar passages, as indicated by lower percentage values. However, the ANOVA revealed neither a significant main effect for the Topic Familiarity factor, F(1, 16) = 1.68, p = .213, nor a significant interaction of Topic Familiarity and Ability Group.

Top-Down Processing

The results shown in Table 3 deal primarily, but not exclusively, with the first issue related to top-down processing: whether the two reader groups benefit in a similar way from meaning constructed from earlier sentences in a passage. The table shows mean reading times (based on each subject's median) for the first word of sentences read continuously, when prior meaning is assumed to be more available, and those read following an interruption, when the prior meaning is assumed to be temporarily displaced from working memory. The times for familiar and unfamiliar topics are presented separately.


Both skilled and average readers read the first word of continuous sentences significantly faster than the first word of sentences read following an arithmetic interruption. This finding was obtained in both familiar and unfamiliar passages. The ANOVA for the Reading Condition factor produced a significant main effect, F(1, 16) = 20.38, p = .000, whereas the F for the Reading Condition x Ability Group interaction did not reach significance when p < .05 is set as the criterion, F(1, 16) = 4.23, p = .056. This finding suggests that both groups use the meaning of prior text to facilitate processing of incoming words, so that the reading process is significantly slowed when that information is removed through experimental manipulation.

This ANOVA also indicated that both groups read the first words of sentences in familiar passages significantly faster than those of unfamiliar passages, regardless of whether they followed an interruption, F(1, 16) = 5.34, p = .035. Combined with the significant main effect found for topic familiarity reported in Table 1, this is additional evidence that the text processing of both groups of readers is facilitated by the retrieval and application of prior knowledge. The interactions between Reading Condition and Topic Familiarity and between Topic Familiarity and Ability Group were not significant. The three-way interaction between Reading Condition, Topic Familiarity, and Ability Group was also not significant.

The results reported in Table 3 also illuminate the issue of bottom-up fluency differences between the two groups. As in Table 1, the Ability Group main effect showed significantly faster reading rates for the skilled readers, F(1, 16) = 18.20, p = .001. This again indicates that the average readers are significantly less fluent in the rate at which they process the incoming words of text.

A second analysis of top-down processes focused on possible differences in reading time elevations for the final word of sentences - an indication of appropriate "wrap-up" processing. Table 4 shows the mean of the subjects' median reading times for the last and next-to-last words of target sentences, shown separately for the unfamiliar and familiar passages.


Both groups processed the last word of target sentences significantly more slowly than the next-to-last word, regardless of whether the topic was familiar. The ANOVA indicated a significant main effect for Word Position, F(1, 16) = 20.65, p = .000; there was no interaction between Word Position and Ability Group, F(1, 16) = 2.08, p = .168, indicating that both groups seemed to engage in sentence wrap-up to an equal degree.

This analysis also yielded a significant main effect for Topic Familiarity, F(1, 16) = 5.85, p = .028, but not a Topic Familiarity x Ability Group interaction. This adds to the evidence from Tables 1 and 3 that both groups tend to retrieve and use familiar topic information to facilitate processing. The two-way interaction between Word Position and Topic Familiarity and the three-way interaction between Word Position, Topic Familiarity, and Ability Group were not significant. In contrast, the significant Ability Group main effect reported in Table 4, F(1, 16) = 20.64, p = .000, indicates that the skilled subjects were significantly swifter in their processing of incoming words; this bolsters the likelihood of bottom-up differences between the groups, as reported in Tables 1, 2, and 3.


Discussion

These results revealed a number of marked similarities between samples of skilled and average deaf secondary students with respect to top-down processes and several differences relative to bottom-up processes. Both groups tended to use the information of prior text to facilitate processing of new words, as indicated by the significant F-value for the Reading Condition factor reported in Table 3. Both groups also seemed to retrieve stored topic knowledge to process passages about familiar topics more fluently compared to passages about relatively unfamiliar topics. This is supported by significant findings for the Topic Familiarity main effect reported for three different sets of words in Tables 1, 3, and 4. Both groups also engaged in sentence wrap-up, as indicated by a significant main effect for the Word Position factor reported in Table 4. The absence of a significant interaction between any of the latter three factors and Ability Group indicates that neither group was engaging in these processes more than the other. These findings support a conclusion that these two groups of readers shared a number of productive top-down reading processes.

The measures of bottom-up processing, in contrast, revealed some obvious differences between these two groups of readers. The skilled group read significantly more swiftly than the average group; a significant main effect was found for the Ability Group factor in each of the analyses reported in Tables 1, 3, and 4. In addition, the Stability of Reading Rate analysis of Table 2 indicates that the skilled subjects exhibited a significantly smoother, more fluent processing profile. The average subjects revealed relatively halting processing rhythms that were interrupted much more frequently by relatively long reading times for individual words. The latter results support a conclusion that the processing of incoming words is a far more laborious undertaking for average deaf readers. Thus, in contrast to the top-down components, these bottom-up processes seem to better distinguish between these skilled and average deaf readers.


These findings have definite implications for explaining the difference in overall reading competence - nine grade levels - that originally defined membership in the two ability groups of the study. The findings also provide some guidance in designing instruction intended to help low-ability deaf readers.

First, the average readers showed evidence of top-down processing resembling that of the skilled readers in the sample. This is not to argue that the average readers enjoy world knowledge even remotely equivalent to that of the skilled readers. The latter group has no doubt enhanced world knowledge through their more extended experiences of skillful reading comprehension. In addition, under normal reading conditions, the skilled readers probably generate a more accurate understanding of prior text and more deftly use it to facilitate the processing of incoming words.

However, the performance of the average readers indicated that they do use their world knowledge and the meaning they construct from prior text - when these resources are available - and these processes may have contributed significantly to the average readers' appreciable understanding of the texts used in this study. Their accuracy when responding to the ending comprehension questions was reasonably high, particularly on those related to the familiar topics. Though it might be instructionally sound to augment the world knowledge of the average readers to improve comprehension, teaching them strategies for accessing what they already know would seem of little advantage, because they already seem to engage in that process. This finding reflects favorably on the whole language program of the school where all of the subjects were enrolled, and one wonders whether the measured ability of the average group might have been worse without exposure to this instructional approach.

It must also be emphasized, however, that the texts of these experiments were designed to allow top-down processes to surface if indeed they were being used by the readers; the passages were intentionally made rather easy. Texts more appropriate for secondary school students, which would focus on many novel topics and use more challenging vocabulary and syntax, would likely present immense comprehension problems for the average subjects in this study. The decline from 86% correct on the ending comprehension questions following the familiar paragraphs to 67% correct on the unfamiliar paragraphs hints at the problems imposed by more challenging text. The skilled group declined only from 95% on the familiar passages to 90% on the unfamiliar ones.

The use of two categories of processing measures suggests that the primary contributors to the discrepancy in reading competence between the two groups are differences in the efficiency of processing the visual information of text. The slow, irregular word reading rates of the average readers suggest that they are vulnerable to a variety of obstructions to comprehension, which will likely be aggravated by texts appropriate for secondary level students. First, the additional time required to process each incoming word places a greater burden on working memory, increasing the danger of words decaying before meanings of cohesive word segments can be constructed. The problem may be further exacerbated by the failure of these readers to use a phonological strategy for temporary storage of words in working memory, the phenomenon being studied by Hanson and colleagues. The labored rhythms of word processing also suggest that retrieval of word meanings by these readers is difficult and perhaps inaccurate, chronically contributing wrong lexical entries to the meaning that is constructed.

Cognitive theory and empirical research have suggested an additional drawback imposed by the excessive use of attention for bottom-up processing. The interactive theories of Carpenter and Just (1981), Rumelhart (1977), and Stanovich (1980) and studies by Daneman and Carpenter (1980) and Perfetti (1985), among others, indicate that the increased draw on attention imposed by the labored processing of incoming words circumscribes the amount of working memory capacity available for higher-level processes related to comprehension of the entire text. Ironically, then, the readers who have the greatest need for the facilitating effect of higher-level processes often have the least access to them; these readers thus may be forced to depend unduly on their underdeveloped bottom-up skills. The results of this study indicate that although productive top-down processes may be necessary for comprehension, they are not sufficient. Reasonable bottom-up fluency is also a necessary component of reading competence.

The need for bottom-up fluency does not automatically justify a resurgence of the drill-and-practice approaches that have been so prevalent in reading programs for deaf children and youth. Recall that explicitly skill-focused programs have been largely unsuccessful in routinely developing reading competence of deaf students. The goal of bottom-up fluency does, however, level a three-fold challenge to those who use whole language practices with deaf students. First, a logical, theoretical basis for expecting that whole language practices will lead to fluency requires explication; second, methods for monitoring effective development of bottom-up fluency need to be developed and implemented; and, third, the criticisms of whole language that cite insufficient amounts of direct and systematic instruction in basic, bottom-up skills need to be answered.

Turning to the first issue, cognitive theory does include reason to suspect that whole language approaches may contribute to bottom-up competence by increasing the exposure to print that is essential for the development of automaticity. The top-down processes stimulated by whole language practices can be relied on to compensate - but only temporarily, during reading acquisition - for weaknesses in processing the visual information in texts. This compensation renders reading a much less tedious experience, increases the incidence of meaningful and enjoyable reading, and thus expands the frequency of interaction with print. The favorable effect of increased exposure to print has been documented by Allington (1980). Thus, in theory, whole language can foster bottom-up fluency.

Effective monitoring of emerging fluency is the second issue; and the results of this investigation indicate that bottom-up fluency, because it is so vital, cannot be taken for granted, regardless of the instructional approach. Thus, to monitor program effectiveness and to inform teachers of specific instructional needs, teachers need measures of bottom-up processing fluency to complement the tests of accuracy and global comprehension that already exist. Adaptations of the tasks used in this research could be used to confirm development of fluent processing of visual information during reading. Measurement of reading time has rarely been used as a diagnostic method in deaf education. However, Cross and Paris (1987) specifically mentioned reaction time as a vehicle for "penetrating" normally covert reading processes and rendering them more observable to the diagnostician and to the classroom teacher. The growing availability of computers in schools suggests that implementation of such measurement is quite feasible.

The third issue is whether whole language is sufficiently direct and systematic in dealing with bottom-up skills. Contrary to the assumptions of some educators, the whole language philosophy does not preclude explicit instruction in bottom-up skills - even phonics - provided that the instruction is grounded in a meaningful communication context. For example, Edelsky (1990) stated, "Whole language kindergarten teachers certainly teach sound-letter correspondences - not because it is 'H' week but because a child is writing directions classmates will use in caring for the hamster" (p. 9).

One specific version of whole language demonstrates direct and systematic treatment of basic skills. The Reading Recovery Program (Clay, 1985) is one example of whole language proponents approaching linguistic skills in a direct manner. In Reading Recovery, a teacher works with a student in daily one-to-one sessions, where they read together and apply strategies that promote comprehension and competence. Pinnell (1989) stated that, when necessary, "teachers call students' attention to the conventions of print" (p. 165). In one exemplary interaction between child and teacher, the child initially read "kids" in the sentence beginning, "All the children..." Then the child returned to correct herself, reading the correct version, "children," the second time around. The teacher's reaction: "You said 'kids.' That was a really good guess, wasn't it? That would make sense there. But you checked again and fixed it yourself" (p. 178). Obviously, this teacher was not averse to a faithful reading of the words that appeared on the page. Although Reading Recovery has been criticized because it is so labor intensive, the relative lack of success in the field of deaf education, despite significant investments of teacher and student time, suggests that the net efficiency of the one-to-one program might surpass that of earlier efforts.

One thing is certain in the field of deaf education: At present, we cannot say with confidence which reading approach is the most effective for teaching deaf students. One reason is the lack of precision in specifying exactly the essential practices of alternative approaches. The whole language approach itself has not escaped disagreement about what constitutes its key ingredients. Bergeron (1990) reviewed the literature on whole language practices and found wide divergence of opinion regarding what constituted the essence of the approach. Dramatizing these differences was the fact that on separate occasions Kenneth Goodman, himself one of the founders of whole language, referred to it as "a theory" (1979), "an approach" (1987), and "a philosophy" (1989).

Even if the essential practices of competing reading approaches were well explicated, it would still be difficult to subject them to the scrutiny of a fair test in a single evaluation study. Only rarely does a single instructional program enroll large numbers of deaf students who do not vary on one or more demographic characteristics known to affect literacy competence, and these varying characteristics complicate the separation of background influences from instructional effects. Given these realities, we must continuously and gradually expand our knowledge base and make educated programming decisions based on that accumulating knowledge. The results of this study add to that knowledge base by indicating the need for literacy development strategies that address both bottom-up and top-down competence.

LEONARD P. KELLY, Research Scientist, Center for Studies in Education and Human Development, Gallaudet Research Institute, Gallaudet University, Washington, DC.

Gratitude is expressed to Karen Saulnier, Linda Stamper, David Snyder, and Thomas Clem for their contributions to this investigation. I am also grateful to the editor of Exceptional Children and to an anonymous reviewer for their helpful suggestions for improving earlier versions of this paper.


References

Aaronson, D., & Ferres, S. (1984). Reading strategies for children and adults. Journal of Verbal Learning and Verbal Behavior, 23, 189-220.

Abrams, M. (Ed.). (1991). Whole language: A folio of articles from Perspectives in Education and Deafness. Washington, DC: Pre-College Programs, Gallaudet University.

Allington, R. L. (1980). Poor readers don't get to read much in reading groups. Language Arts, 57, 872-876.

Belmont, J., Karchmer, M., & Bourg, J. (1983). Structural influences on deaf and hearing children's recall of temporal/spatial incongruent letter strings. Educational Psychology, 3, 259-274.

Belmont, J., Karchmer, M., & Pilkonis, P. (1976). Instructed rehearsal strategies' influence on deaf memory processing. Journal of Speech and Hearing Research, 19, 36-46.

Berent, G. P. (1988). An assessment of syntactic capabilities. In M. Strong (Ed.), Language, learning, and deafness (pp. 133-161). Cambridge: Cambridge University Press.

Bergeron, B. (1990). What does the term whole language mean? Constructing a definition from the literature. Journal of Reading Behavior, 22(4), 301-329.

Bower, G., Black, J., & Turner, T. (1979). Scripts in memory for text. Cognitive Psychology, 11, 177-220.

Carpenter, P., & Just, M. (1981). Cognitive processes in reading: Models based on readers' eye fixations. In A. Lesgold & C. Perfetti (Eds.), Interactive processes in reading (pp. 177-213). Hillsdale, NJ: Lawrence Erlbaum.

Center for Assessment and Demographic Studies. (1991). Stanford achievement test, eighth edition: Hearing-impaired norms booklet. Washington, DC: Gallaudet Research Institute, Gallaudet University.

Clay, M. (1985). The early detection of reading difficulties. Portsmouth, NH: Heinemann.

The College Board. (1986). Degrees of reading power. New York: Author.

Conrad, R. (1979). The deaf school child. London: Harper & Row.

Cross, D., & Paris, S. (1987). Assessment of reading comprehension: Matching test purposes and test properties. Educational Psychologist, 22, 313-332.

Daneman, M., & Carpenter, P. (1980). Individual differences in working memory and reading. Journal of Verbal Learning and Verbal Behavior, 19, 450-456.

Dolman, D. (1992). Some concerns about using whole language approaches with deaf children. American Annals of the Deaf, 137(3), 278-282.

Dolnick, E. (1993). Deafness as culture. The Atlantic Monthly, 272(3), 37-53.

Edelsky, C. (1990). Whose agenda is this anyway? A response to McKenna, Robinson, and Miller. Educational Researcher, 19(8), 7-11.

Ewoldt, C. (1981). A psycholinguistic description of selected deaf children reading in sign language. Reading Research Quarterly, 1, 58-89.

Fischer, B., & Glanzer, M. (1986). Short-term storage and the processing of cohesion during reading. The Quarterly Journal of Experimental Psychology, 38, 431-460.

Gardner, E., Rudman, H., Karlsen, B., & Merwin, J. (1982). Stanford Achievement Test (7th ed.). Cleveland: Psychological Corporation.

Garner, R. (1988). Metacognition and reading comprehension. Norwood, NJ: Ablex.

Glanzer, M., Fischer, B., & Dorfman, D. (1984). Short-term storage in reading. Journal of Verbal Learning and Verbal Behavior, 23, 467-486.

Goodman, K. (1968). The psycholinguistic nature of the reading process. Detroit: Wayne State University Press.

Goodman, K. (1979). The know-more and the know-nothing movements in reading: A personal response. Language Arts, 56, 657-663.

Goodman, K. (1987). Beyond basal readers: Taking charge of your own teaching. Learning, 16, 63-65.

Goodman, K. (1989). Whole language research: Foundations and development. The Elementary School Journal, 90, 207-221.

Gormley, K. (1981). On the influence of familiarity on deaf students' text recall. American Annals of the Deaf, 126, 1024-1030.

Haberlandt, K. (1984). Components of sentence and word reading times. In D. Kieras & M. Just (Eds.), New methods in reading comprehension research (pp. 219-252). Hillsdale, NJ: Lawrence Erlbaum.

Hanson, V. (1990). Recall of order information by deaf signers: Phonetic coding in temporal order recall. Memory and Cognition, 18(6), 604-610.

Hanson, V., & Fowler, C. (1987). Phonological coding in word reading: Evidence from hearing and deaf readers. Memory and Cognition, 15, 199-207.

Hanson, V., Goodell, E. W., & Perfetti, C. (1991). Tongue-twister effects in the silent reading of hearing and deaf college students. Journal of Memory and Language, 30, 319-330.

Hanson, V., & Lichtenstein, E. (1990). Short-term memory coding by deaf signers: The primary language coding hypothesis reconsidered. Cognitive Psychology, 22, 211-224.

Israelite, N. (1981). Direct antecedent context and comprehension of reversible passive voice sentences. Unpublished doctoral dissertation, University of Pittsburgh.

Jager-Adams, M. (1990). Beginning to read: Thinking and learning about print. Urbana: Center for the Study of Reading, The Reading Research and Education Center, University of Illinois at Urbana-Champaign.

Just, M., Carpenter, P., & Woolley, J. (1982). Paradigms and processes in reading comprehension. Journal of Experimental Psychology: General, 111, 228-238.

Kelly, L. (1993). Recall of English function words and inflections by skilled and average deaf readers. American Annals of the Deaf, 138(3), 288-296.

King, C., & Quigley, S. (1985). Reading and deafness. San Diego: College Hill Press.

LaBerge, D., & Samuels, S. (1974). Toward a theory of automatic information processing in reading. Cognitive Psychology, 6, 293-323.

LaSasso, C. (1987). Survey of reading instruction for hearing-impaired students in the United States. The Volta Review, 89, 85-98.

LaSasso, C., & Davey, B. (1987). The relationship between lexical knowledge and reading comprehension for prelingually, profoundly hearing-impaired students. Volta Review, 89, 211-220.

Lichtenstein, E. H. (1983). The relationships between reading processes and English skills of deaf college students: Parts I and II. Rochester, NY: National Technical Institute for the Deaf, Communication Program.

McNeil, J. (1984). Reading comprehension: New directions for classroom practice. Glenview, IL: Scott Foresman.

Moores, D., Kluwin, T., Johnson, R., Cox, P., Blennerhassett, L., Kelly, L., Ewoldt, C., Sweet, C., & Fields, L. (1987). Factors predictive of literacy in deaf adolescents (Project No. NIH-NINCDS-83-19; Final report to the National Institute of Neurological and Communicative Disorders and Stroke). Washington, DC: Gallaudet University.

Pearson, D. (1989). Reading the whole-language movement. The Elementary School Journal, 90(2), 231-241.

Perfetti, C. (1985). Reading ability. New York: Oxford University Press.

Pinnell, G. S. (1989). Reading Recovery: Helping at-risk children learn to read. The Elementary School Journal, 90(2), 161-183.

Power, D., & Quigley, S. (1973). Deaf children's acquisition of the passive voice. Journal of Speech and Hearing Research, 16, 5-11.

Pre-College Programs. (1991). Carrying out our national mission: Pre-College Programs annual report. Washington, DC: Gallaudet University.

Quigley, S., & King, C. (1980). Syntactic performance of hearing impaired and normal hearing individuals. Applied Psycholinguistics, 1(4), 329-356.

Quigley, S., & King, C. (Eds.). (1982). Reading milestones: Level 5. Beaverton, OR: Dormac.

Quigley, S., Wilbur, R., Power, D., Montanelli, D., & Steinkamp, M. (1976). Syntactic structure in the language of deaf children. Urbana: University of Illinois, Institute for Child Behavior and Development.

Quinn, L. (1981). Reading skills of hearing and congenitally deaf children. Journal of Experimental Child Psychology, 32(1), 139-161.

Robbins, N., & Hatcher, C. (1981, February-March). The effects of syntax on the reading of hearing impaired children. Volta Review, 83, 105-115.

Rumelhart, D. (1977). Toward an interactive model of reading. In S. Dornic (Ed.), Attention and performance VI. Hillsdale, NJ: Lawrence Erlbaum.

Schmitt, P. (1968). Deaf children's comprehension and production of sentence transformations and verb tenses. Unpublished doctoral dissertation, University of Illinois.

Scholes, R., Cohen, M., & Brumfield, S. (1978). Some possible causes of syntactic deficits in the congenitally deaf English user. American Annals of the Deaf, 123(5), 528-535.

Shankweiler, D., & Crain, S. (1986). Language mechanisms and reading disorders: A modular approach. Cognition, 24, 139-168.

Shapiro, H. (1992). Debatable issues underlying whole-language philosophy: A speech-language pathologist's perspective. Language, Speech, and Hearing Services in Schools, 23, 308-311.

Shewhart, W. (1931). Economic control of quality of manufactured product. Princeton, NJ: Van Nostrand Reinhold.

Spilich, G., Vesonder, G., Chiesi, H., & Voss, J. (1979). Text processing of domain related information for individuals with high and low domain knowledge. Journal of Verbal Learning and Verbal Behavior, 18, 275-290.

Stanovich, K. (1980). Toward an interactive-compensatory model of individual differences in the development of reading fluency. Reading Research Quarterly, 16, 32-71.

Strassman, B., Kretschmer, R., & Bilsky, L. (1987). The instantiation of general terms by deaf adolescents/adults. Journal of Communication Disorders, 20, 1-13.

Vellutino, F. (1991). Introduction to three studies of reading acquisition: Convergent findings on the theoretical foundations of code-oriented versus whole language approaches to reading instruction. Journal of Educational Psychology, 83(4), 437-443.

Walker, C. (1987). Relative importance of domain knowledge and overall aptitude on acquisition of domain-related information. Cognition and Instruction, 4(1), 25-42.

Wilbur, R. (1977). An explanation of deaf children's difficulty with certain syntactic structures. Volta Review, 79(2), 85-92.

Yekovich, F. R., & Walker, C. H. (1987). The activation of scripted knowledge in reading about routine activities. In B. Britton & S.
Glynn (Eds.), Executive control processes in reading (pp. 145-171). Hillsdale, NJ: Lawrence Erlbaum.
COPYRIGHT 1995 Council for Exceptional Children

Article Details
Author: Kelly, Leonard P.
Publication: Exceptional Children
Date: Feb 1, 1995