
Implementing a successful writing program in public schools for students who are deaf.

The difficulties that young students who are deaf have with writing English are well documented in a history that goes back several decades (Heider & Heider, 1940; Kluwin, 1979; Stuckless & Birch, 1966; Taylor, 1969; Thompson, 1936; Walter, 1955). More recently, in a study of the changes made by students who are deaf to narratives shared on a computer system, Livingston (1989) showed that the students engaged in surface word changes or rephrasings of entries to respond to teachers' inquiries for clarification rather than any major restructuring of the text. The writers who were deaf tended to make surface changes by adding or substituting words rather than through deletions as was characteristic of some hearing writers. In addition, other research on the composing process of young children who are deaf suggests that some apparent errors in the writing of these children may result from the process of thinking in one language and writing in another when there is no clear concept of how to compose in either situation (Mather, 1989). Although researchers have repeatedly observed that young writers who are deaf have a poor command of written English syntax, more recent work suggests that their problems with writing may be related to an ignorance of how to compose effectively.

Parallel to these investigations of poor writing skills of children who are deaf is an extensive research history on the process of writing, as both a theoretical construct and a curricular innovation (Applebee, 1982; Hillocks, 1987; Humes, 1983). The thrust of this work is that writing, as a process, is not a linear sequence of steps but rather a recursive process that has identifiable subprocesses. This approach to writing both as theory (Humes) and as pedagogy (Applebee) has a considerable and successful tradition. The goal of teaching writing as a process is to get students to work through the same general steps in composing that skilled writers go through rather than teaching writing through correcting finished compositions. In other words, process approaches to writing instruction are effective in that they promote the thinking process of the individual student.

In a review of 20 years of writing research (about 2,000 studies), Hillocks (1987) came to some specific conclusions about the effects of such an intervention. Hillocks reported that the use of editing skills, such as grammar and mechanics corrections, as the primary focus of writing instruction had a negative effect on outcomes. Writing programs that focused on a study of writing as "products" were more effective--but not as effective as forms of writing instruction that focused on the production of discourse or on activities that fostered the production of discourse, such as planning or organizing.

In a large-scale study, Baxter and Kwalick (1984) reported that the writing of 1,029 high school students improved after only 15 weeks of instruction using a process approach to composing. They reported "contradictory" results in that holistic scores for papers increased, but at the same time the number of grammatical errors made by the students also increased. Working with 48 high school students in a one-semester project, Moriarity (1978) reported that instruction in any component of the writing process led to an improvement in compositions that were rated impressionistically. Though Moriarity's study was flawed by a possible "Hawthorne" effect, it fits the regular pattern of findings for this kind of evaluation. Working with college-age students, Clifford (1981) reported that a modified process approach was successful in an experimental/control group study. After covarying for initial between-group differences, Clifford found significantly greater gains in the experimental students' holistic scores, but no differences in their knowledge of the mechanics of writing or their use of the mechanical conventions of writing.

Humes (1983) offered a possible explanation for the differential effects of process approaches to writing instruction. In her review of the research in this area, she commented that the biggest impact of this type of composing was on planning, with considerably less emphasis on "translating," or putting words to paper. In addition, for those who were successful in this type of composition instruction, most revisions involved conceptual restructuring and responding to audience interests. Writers placed less explicit emphasis on revision of formal aspects of the composition. Knudson (1988) supported this contention: She reported that reduced amounts of direct teacher involvement led to better compositions when the approach was to teach writing as a process.

This history of process approaches to teaching writing suggests three general findings:

* Studies regularly cite positive effects for this approach when holistic or impressionistic scoring is employed, even for relatively short periods of instruction.

* These approaches report very mixed results in the improvement of specific grammatical or mechanical skills; that is, improvements in grammatical skills are occasionally reported but are difficult to link to the instructional procedure used.

* Some information exists concerning the use of process approaches with writers with learning disabilities but not with writers who are deaf. That is not to say that the approach has not been used with these writers, but rather, there are no formal evaluations of these attempts using student writing as an outcome measure.

Consequently, to improve the English composing skills of young writers who are deaf, we conducted a 2-year intervention program in 10 public school programs for students who are deaf by training teachers to use the process approach to teaching writing. We assumed that the method would be generally effective in improving the overall quality of students' compositions as measured by impressionistic scoring of overall writing quality and hoped that improvements in grammatical complexity would be seen as well.

METHOD

Teacher Training

Fifty-two teachers from 10 school districts around the United States with an average of 10 years of teaching experience participated in the project. Participants came from every region of the United States. Forty percent of the teachers had master's degrees, 31% had done postgraduate work, and the remaining teachers had bachelor's degrees. Eighty-two percent had permanent certificates as teachers of the deaf; 3 were certified to teach English, and 7 were certified to teach secondary-level classes. One participant teacher identified herself as hard of hearing and 2 identified themselves as deaf.

During the 2 years of the project, two separate workshop sequences were conducted for the teachers of the deaf involved in this project, both on and off the Gallaudet University campus. The first-year workshops focused on developing a rationale for writing instruction, teaching writing as a process rather than as a product, and the promotion of writing through dialogue journal writing. The point of the second-year workshops was threefold: to review the first year's training goals with an emphasis on a clearer definition of the goals of a writing program, to learn to use specific rationales in the selection of writing topics, and to learn how to provide clear and useful feedback to the students about their compositions. During the second-year workshops, the participants were taught how to express nonjudgmental acceptance of the content of students' writing while discussing revisions to the form and how to create classroom dialogue about form as a means to convey content in a specific fashion. Two forms of feedback were stressed during the second year. First, the teachers were given additional training in the preparation of scoring guides. Second, the face-to-face writing conference was introduced as a technique to provide feedback.

Each training session consisted of two 8-hr days. Training procedures included a mix of lectures, discussion, and feedback from the teachers. Posttraining feedback questionnaires indicated a positive response to the content of the training and the presentations.

Sample

This was a quasi-experimental study of the implementation of a teaching method under local schooling conditions; consequently, the movement of students into and out of the project was not controlled for. As a result, four types of student groups emerged naturally as the project went along:

1. Students who started the project but left after 1 year.

2. Students who entered the project at the start of the second year.

3. Students who were in both years of the project but who changed teachers from Year 1 to Year 2.

4. Students who kept the same teacher for both years of the project.

Because of these differences in degree of participation, the variable exposure to instruction was defined as having these four values.

Table 1 compares the four groups of students by age, gender, ethnicity, severity of hearing loss, and reading level. Some systematic differences could be seen among the four groups. Group 2 was considerably younger than any of the other groups, but the difference in age between Group 3 and Group 4 was not significant. Group 1 was significantly older than the other groups. The gender differences did not appear to be substantial, but the number of minority group students was considerably lower in Group 4. Group 3 had more students with severe hearing losses than the other groups. Group 1 had a reading level that was significantly lower than that of the other groups. This group was also the most difficult to collect posttest data from because we did not know its members had left the project until the fall of the following year. As a result, they were not used in later comparative analyses. On the whole, there were a number of scattered but significant differences among the four groups. These differences were compensated for in later analyses by adjusting pretest and posttest scores statistically for the differences in age, ethnicity, hearing loss, and reading ability.

Instrumentation

Demographic Information. With the permission of the students' parents and the cooperation of the schools and the Center for Assessment and Demographic Studies, we obtained background information on the students. This included the date of birth, sex, ethnic group, degree of hearing loss, etiology, and onset of deafness for each child. In addition, we obtained students' current reading achievement scores from school records. The measure of reading skill was the Stanford Achievement Test, Hearing Impaired version. We asked the schools for the scores on the reading comprehension subtest. Because not all schools provided us with scaled scores, we used grade equivalent scores (see Table 1).

Writing Assessment. During the fall of 1987, each student in the project was scheduled to be given the descriptive, persuasive, and business letter test stimuli appropriate to his or her age level that had been developed by the Educational Testing Service (ETS) for use by the National Assessment of Educational Progress (NAEP) (Mullis, 1980).

The tests were administered locally by the teachers in the project and returned by mail to the research team during the fall of 1987. Students were allowed as much time as needed but generally completed each test within half an hour. With the exception of one group of essays (submitted by a teacher who encouraged students to rewrite their essays), all essays were the product of a single draft. The test administrators were told to encourage the children to write and to explain to the students what was expected but not to tell them what or how to write. The stimulus was provided to the students in print and was read to them using total communication. The process was repeated in the spring of 1989.

Teacher Logs. To assist teachers in monitoring their writing instruction, we developed a self-report system that requested information from the teachers about the quantity of the writing the students were doing. This information included a description of the books used, pages covered, and amount of classwork and homework. On a monthly chart, the teachers were to estimate what portion of their class time was devoted to a specific activity in 15-min increments.

The purpose of the coding system was to gather estimates of the amount of writing instruction taking place and the degree to which the teachers followed the procedures for teaching writing as a process. Six categories of writing instruction were to be coded by the teachers: dialogue journal writing, prewriting or organizing activities, writing in class, revision activities, publishing or any class time devoted to the production of material in a finished form, and other writing activities.

ANALYSIS

Essay Scoring

Several scoring systems were used. For the persuasive and descriptive essays, counts were made of the number of words, sentences, and clauses, both grammatical and ungrammatical, and the number of "t-units." Words and sentences were defined orthographically; t-units were defined as a main verb clause and any subordinate clauses. Grammatical clauses were defined as complete verbs with their subjects and subordinating or coordinating conjunctions, if appropriate. If a group of words functioned as a clause in a sentence but lacked a major element such as a complete verb or an appropriate subordinate conjunction, it was counted as an ungrammatical clause.

The persuasive and descriptive essays were also coded using holistic scoring systems. The holistic scoring system for the descriptive and persuasive essays was a 6-point scale, ranging from outstanding papers to barely comprehensible papers. A seventh category was used when the paper was comprehensible but off the topic, which happened more with the persuasive essays than with the descriptive essays. The operational definitions of these scales are shown in Tables 2 and 3.

The business letters represented a distinct type of writing from the persuasive and descriptive papers; thus, two different types of scoring systems were used. Because variations in missing information in the business letter made holistic scoring difficult, we developed an individual-feature-analysis system, which counted the presence or absence of specific pieces of information such as the greeting, the internal address, or the closing. Two primary categories were used: form and content. The form categories included the internal address, date, greeting, writer's name, return address, and closing. The contents were coded for a reference to the calendar, a request for the item, statement of a specific choice, and the addition of extraneous information. In addition, there were specific content requirements for effective communication about the topic: the writer had to mention a particular time, to request that the item be sent, and to provide information as to where to send it. The coding system involved only checking for the presence of key words or phrases. An explanation of this system is provided in Table 4.
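The presence/absence coding amounts to a simple checklist. The following is a minimal sketch under our own assumptions: the category names follow the study's description, but the function, data layout, and keyword matching are illustrative, not the study's actual coding sheet.

```python
# Illustrative sketch of the individual-feature-analysis coding for the
# business letters. Category names follow the study's description; the
# function and data layout are our own assumptions.

FORM_FEATURES = [
    "internal_address", "date", "greeting",
    "writer_name", "return_address", "closing",
]
CONTENT_FEATURES = [
    "calendar_reference", "request_for_item",
    "specific_choice", "extraneous_information",
]

def code_letter(features_present):
    """Return a 0/1 code for every form and content category."""
    return {f: int(f in features_present)
            for f in FORM_FEATURES + CONTENT_FEATURES}

codes = code_letter({"date", "greeting", "request_for_item"})
print(codes["date"], codes["closing"])  # 1 0
```

Coding each letter as a vector of 0/1 features in this way is what makes the later factor analysis of the form and content categories possible.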

Because the business letters were so brief, it was impractical to do grammatical counts on them. Instead, they were evaluated using a 6-point primary trait scoring system for grammatical correctness, which rated the business letters on a scale ranging from being virtually free of grammatical or mechanical errors to having substantial deletions of major syntactic elements and a failure to observe orthographic conventions (see Table 5). A seventh category was used, which indicated that the letter was too brief to be evaluated.

The same general scoring procedure was followed in the case of all three impressionistic scoring systems: the holistic scoring system for the descriptive essay, the holistic scoring system for the persuasive essay, and the primary trait rating system for the grammaticality of the business letter. Before papers were scored, we explained the scoring system to two readers. We discussed the criteria and the anchor papers. Then the readers practice-scored 20 papers in groups of 5 to develop reliability. This process was repeated until the desired level of reliability was achieved.

During the scoring session, each reader assigned a score from 1 to 6 to a paper. If the scores of the two readers were within 1 point of each other, the scores were accepted as being in agreement. The score for the paper was the sum of the two readers' scores. In the event of a disagreement, the referee "pulled" the paper to discuss the discrepancy and to attempt to reestablish reliability between the two readers. However, after training, the readers usually did not differ by more than 1 point on any paper. Consequently, discrepant scores occurred less than 3% of the time. When the readers were consistent with each other and the criteria, they then scored blocks of 20 papers each to check consistency with the referee.
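The agreement rule above can be sketched in a few lines; the function and variable names here are ours, not code from the study.

```python
# Sketch of the two-reader scoring rule described above: each reader
# assigns 1-6; scores within 1 point of each other are summed into the
# paper's score (range 2-12), and larger discrepancies go to the referee.

def score_paper(reader1, reader2):
    """Return the summed paper score, or None if the paper must be "pulled"."""
    if abs(reader1 - reader2) <= 1:
        return reader1 + reader2
    return None  # referee resolves the discrepancy

print(score_paper(4, 5))  # 9
print(score_paper(2, 5))  # None
```

Summing the two readers' scores is what yields the 12-value composite quality scale discussed later in the analysis.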

This system was developed by ETS (Mullis, 1980) as a way of ensuring greater reader agreement in testing situations where individual subject or program evaluations were concerned. In theory, reader agreement is 100% because no disagreements beyond the 1-point discrepancy limit are allowed; however, in practice, uncorrected reliability across all three separate scoring systems was 97%.

Outcome Measures

Grammatical Complexity. From the counts of the grammatical categories for the descriptive and persuasive essays, three measures of syntactic complexity were computed: words per clause, words per t-unit, and clauses per t-unit. Consequently, there were nine measures of grammatical complexity for the pretest papers, including a measure of syntactic errors for both the descriptive and persuasive pretest essays and the primary trait grammatical rating for the pretest business letter. A factor analysis, computed to reduce the number of variables, generated a grammatical complexity factor, which was a measure of syntactic complexity and general grammatical accuracy in that it consisted of the clauses-per-t-unit variables, the overall quality rating for the grammaticality of the business letter, and the syntactic complexity measures for the descriptive essay. This factor score was used as a global description of grammatical complexity because it included a range of writing settings and measures. To create posttest scores, the factor loadings for the pretest grammatical complexity measure were used as weights in computing the posttest factor scores for grammatical complexity.
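The three ratio measures follow directly from the counts described under Essay Scoring. A sketch with invented counts (the study's actual tallies were made by hand from each essay):

```python
# The three syntactic-complexity ratios computed from each essay's counts
# of words, clauses, and t-units. The counts here are invented examples.

def complexity_measures(words, clauses, t_units):
    return {
        "words_per_clause": words / clauses,
        "words_per_t_unit": words / t_units,
        "clauses_per_t_unit": clauses / t_units,
    }

m = complexity_measures(words=120, clauses=15, t_units=10)
print(m["words_per_clause"], m["clauses_per_t_unit"])  # 8.0 1.5
```

Longer clauses and more clauses per t-unit both push these ratios upward, which is why they serve as indexes of increasing syntactic complexity.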

Overall Writing Quality. To develop a composite quality measure, as opposed to a measure of grammatical complexity, the holistic scores for the descriptive pretest and the persuasive pretest were factor analyzed, along with the three pretest factor scores for the business letter described below.

In the process of generating a single score for the general quality of the business letter, a factor analysis of the trait counts for the business letter form and content categories produced three separate factor scores. First, the content mastery factor included all the content measures and the essential information of the return address. Scoring well on this trait would mean that the individual would receive the product described in the stimulus, whereas a low score would mean that one would not get the product. Second, the formal mastery factor included the formalism of the internal address and the date as well as the return address and the name. Third, the social mastery factor score included the elements that were descriptive of a social letter as well as of a business letter, whereas the categories in the other factors seemed more unique to business letters. The social mastery factor also contained a large loading for extraneous information.

The composite quality score consisted of the persuasive holistic score and the descriptive holistic score, as well as the business content mastery score and the business form mastery score. The composite quality score was then used in further analyses as the measure of overall writing improvement. In summary, the measure of writing quality used in this study addressed three questions: Did the students' ability to write a description improve? Did the students' ability to construct a persuasive argument improve? Did a student's chances of receiving a product as the result of writing a business letter improve?

Evaluation Questions

What Evidence Was There That the Teachers Actually Taught in the Way They Were Trained? The teachers in the project kept logs of their teaching activities during the 2 years of the project as an inexpensive check on the impact of the training.

Teachers reported the number of minutes they spent in writing instruction during the day. On the average, about 40% of all available class time for literacy-related subjects was devoted to teaching writing. During the first year of the project, an average of 22.21 min per day was devoted to the teaching of writing; the average for the second year was 16.45 min per day.

To assess the implementation of teaching writing as a process, the logs were analyzed for the completion of the various categories of teaching writing as a process; that is, did the teachers engage in the cycle of prewriting-writing-revision-publishing irrespective of other writing activities? Of the 52 teachers involved in the project across the 2 years, 60% regularly taught all four phases of the writing process by their own report. Thirty percent of the teachers regularly taught the first three phases of the process but did not regularly engage in "publishing" as an activity. Eight percent of the teachers engaged only in prewriting and writing in the classroom. One teacher either did not follow the writing process approach or failed to properly note her activities on the self-reporting log system. This teacher entered the project in the second year, and her students are not included in the subsequent inferential analyses.

Was Instruction Effective? The primary hypothesis of the study was that if instruction were effective, a posttest score adjusted for the effects of maturation and differences in the implementation of the training would be greater than a pretest score. Since there was variability in the amount of instruction provided, it was possible that the students who were only in the second year of the project would show the smallest amount of change in their writing skills and those students who had 2 years of instruction with the same project teacher would show the greatest amount of change. To assess the impact of additional training on writing change, this difference was kept in the analysis.

To test the primary hypothesis, the pretest factor scores for the overall quality of the composition and for the grammatical complexity of the essays were adjusted for between-subjects differences described earlier in this article. Adjusted pretest scores were computed in a multiple-regression analysis using the students' beginning reading ability as measured by their grade equivalent score, their degree of hearing loss as measured by better ear average, the gender of the student, the age of the student when the data collection was done, and the ethnicity of the student as predictor variables. Maturation was controlled for by using the child's age at the two testing points in the prediction equations to create the "adjusted scores." In other words, the posttest score would be reduced because the child was older. The equation to adjust the pretest composite quality score accounted for nearly 60% of the variance. Because these were the variables that appeared to differentiate the groups who participated in the study, we are confident that we have controlled for sources of variance not related to instruction.
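The adjustment procedure amounts to regressing each score on the background variables and keeping the residual. The following is a sketch with entirely synthetic data, assuming ordinary least squares; the paper does not report its exact computational details, and the variable layout here is our own.

```python
# Covariate adjustment as described above: regress the pretest factor score
# on reading level, hearing loss, gender, age, and ethnicity, then treat
# the residual as the "adjusted" score. All data below are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 100
X = np.column_stack([
    rng.normal(3.0, 1.0, n),     # reading grade equivalent
    rng.normal(90.0, 10.0, n),   # better ear average (dB)
    rng.integers(0, 2, n),       # gender
    rng.normal(13.0, 2.0, n),    # age at testing
    rng.integers(0, 2, n),       # minority status
    np.ones(n),                  # intercept
])
pretest = rng.normal(0.0, 1.0, n)        # pretest factor score

beta, *_ = np.linalg.lstsq(X, pretest, rcond=None)
adjusted = pretest - X @ beta            # demographic effects removed
print(abs(adjusted.mean()) < 1e-8)       # True: residuals center on zero
```

Because the residuals are, by construction, uncorrelated with the predictors, any remaining pretest-to-posttest gain cannot be attributed to the demographic differences among the groups.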

The F value for the multiple-regression equation to adjust the grammar pretest factor score was statistically significant, although only 18% of the variance was accounted for in this equation. What this suggests is that, although there may be between-groups differences on demographic variables, these same demographic variables are not substantially related to the grammatical complexity of the students' writing.

With one addition, the same multiple-regression procedure was used to adjust for between-subjects differences on posttest factor scores: because some of the students had been in the study for only 1 year, whereas others had been in the study for 2 years, it was necessary to adjust the posttest results for the amount of time between testing sessions. For the students who were in the study for 2 years, this interval was 18 months; for the other students, it was 9 months.

This equation predicted about 60% of the variance in the posttest composite quality score, suggesting that the holistic scores were more sensitive to between-student differences than were the grammar measures, which predicted only 14% of the posttest grammatical complexity score variance.

In summary, both pretest and posttest scores were adjusted for the between-groups differences noted earlier, specifically, age, sex, ethnicity, degree of hearing loss, and reading ability. In addition, the posttest score was adjusted for the amount of instruction, as measured by the months of instruction that the students received. Because both the adjusted pretest scores and posttest scores included corrections for the age of the students, maturational effects were eliminated from the subsequent analysis. Figure 1 shows the adjusted pretest and posttest group means for both the composite quality score and the grammatical complexity score.

To test the hypothesis stated previously, a repeated-measures analysis of variance was computed with two within-subjects factors: the time of testing (adjusted pretest versus adjusted posttest) and the measure used to judge progress (composite quality versus grammatical complexity). The between-subjects factor was the degree of exposure to instruction, which had three levels: students who were only in the second year of the project; students who were in both years of the project but had different teachers; and students who had the same teacher for both years of the project. This factor was included as a secondary control for the effects of amount of instruction.
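Laying the design out in long format makes the factors concrete. A sketch with placeholder scores and our own column names (not the study's data file):

```python
# The 2 (time) x 2 (test) within-subjects design with exposure to
# instruction as the three-level between-subjects factor. Scores are
# placeholders; one illustrative student is shown per exposure group.
import pandas as pd

rows = []
for subject, exposure in enumerate(
        ["year2_only", "both_years_diff_teacher", "both_years_same_teacher"], 1):
    for time in ["adjusted_pretest", "adjusted_posttest"]:
        for test in ["composite_quality", "grammatical_complexity"]:
            rows.append({"subject": subject, "exposure": exposure,
                         "time": time, "test": test, "score": 0.0})

design = pd.DataFrame(rows)
print(design.shape)  # (12, 5): 3 students x 2 times x 2 tests, 5 columns
```

Each student contributes four cells (two times crossed with two measures), while exposure varies only between students; this is the structure the F tests in Table 6 decompose.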

In Table 6, Exposure refers to the three groups of students who had varying degrees of exposure to instruction, that is, 9 months or 18 months. This difference should have been controlled for in the process of creating the adjusted scores described earlier. It is clear from the F value for exposure that the process of adjusting for between-groups differences on this factor was successful in controlling for these differences.

Time refers to the time of testing, that is, before or after training. There is a statistically significant F value for time of testing in Table 6. In other words, adjusted posttest scores, which included an adjustment for maturation effects, were higher than the adjusted pretest scores. Instruction was effective because the factor that measured the difference between the adjusted pretest scores and the adjusted posttest scores was statistically significant.

There was a statistically significant effect for the type of test, identified as Test in Table 6. As can be seen in Figure 1, adjusted posttest scores were higher for the grammatical complexity measure than for the composite quality measure; however, the magnitude of the differences between the pretest and posttest scores was greater than the differences among the three groups.

There was a two-way interaction of Exposure to instruction and Test, with the bulk of this effect traceable to pretest/posttest differences. The group that had only 1 year of instruction showed little change between their pretest and posttest composite quality scores; however, they showed dramatic improvements in the complexity of their grammar. For the other two groups, those with 2 years of instruction, the improvement in grammatical complexity was nearly as great as for the group with only 1 year of instruction, but their composite quality scores also rose substantially. The three-way interaction of exposure, time of testing, and type of test is reflected in Figure 1; it comes primarily from the pretest/posttest differences and the substantial difference between the composite quality score change and the grammatical complexity score change.

The apparent changes in the adjusted grammar scores are partially explicable as artifactual results because similar findings have been noted during a reanalysis of the 13-year-olds' essays from the NAEP (Soltis & Walberg, 1989). Specifically, the discrepancy in results may lie in the nature of the scales that were used. The holistic scores that generated the composite quality score were a discrete, ordinal scale with a range of 12 values. The counts for the grammatical measures had a maximum range of 30 points on a continuous, interval scale. There is more variance between scores for the grammatical measures, thus creating the possibility for greater score differences. In addition, the multiple regression to adjust for various extraneous effects accounted for less of the variance for the grammatical complexity score than it did for the qualitative score. What is probably true of these data is that the differences and the directions of the differences are real, but that the degree of discrepancy between measures may be artifactual. In other words, there was an increase in the grammatical complexity of all of the students, but the magnitude of that change in relation to the change in the composite quality score may not be as great as it appears.

DISCUSSION

It is apparent from our study that teaching writing as a process results in improvements in the writing of students who are deaf. When the teacher's focus was on providing specific steps that the students could take to improve their compositions, the students' writing improved beyond what would be expected from normal maturation, as measured both by overall impressionistic measures and by measures of increasing grammatical complexity.

Teaching writing as a process improved the overall quality of the writing of students who are deaf. Previous research on teaching writing through a process approach had strongly suggested that we would in fact achieve such results using impressionistic scoring techniques which focus on the overall quality of the writing. It was not as apparent from reviewing the previous research that changes in grammatical complexity could be consistently expected, but such a finding has on occasion been reported. When writers become preoccupied with grammatical correctness, they have a tendency to use simpler constructions, employ more familiar or common words, and experiment less with language. The apparent change in grammatical complexity may be due to greater experimentation on the part of the students or to a greater sense of freedom of expression, which could be seen in a major increase in the length of students' sentence elements, especially the number of words per clause and per t-unit.

Because writing is a complex task with many components that might be taught, there is a tendency on the part of teachers to either avoid teaching writing or to attempt to substitute other more manageable tasks for actual composing or instruction in composing. By giving the teachers an approach that produced rapid, apparent improvements in the writing of individual students, we encouraged composing as an activity. Consequently, not only did the students become better writers, but many of the teachers were able to have a positive and rewarding teaching experience. Because this activity was sustained over a 2-year period, it was possible to document a substantive change in writing quality.

REFERENCES

Applebee, A. N. (1982). Writing and learning in school settings. In M. Nystrand (Ed.), What writers know: The language, process, and structure of written discourse (pp. 365-382). New York: Academic Press.

Baxter, M., & Kwalick, B. (1984). Holism and the teaching of writing (Research Report). New York: Scribner Educational Publishers.

Clifford, J. (1981). Composing in stages: The effects of a collaborative pedagogy. Research in the Teaching of English, 15(1), 37-53.

Heider, F., & Heider, G. (1940). A comparison of sentence structure of deaf and hearing children. Psychological Monographs, 52(1), 42-103.

Hillocks, G. (1987). Synthesis of research on teaching writing. Educational Leadership, 44, 71-76.

Humes, A. (1983). Research on the composing process. Review of Educational Research, 53(2), 201-216.

Kluwin, T. (1979). The effects of selected errors on the written discourse of deaf adolescents. Directions, 1(2), 46-53.

Knudson, R. E. (1988). The effects of highly structured versus less structured lessons on student writing. Journal of Educational Research, 81(6), 365-368.

Livingston, S. (1989). Revision strategies of deaf student writers. American Annals of the Deaf, 134, 21-26.

Mather, S. (1989). Visually oriented teaching strategies with deaf preschool children. In C. Lucas (Ed.), The sociolinguistics of the deaf community (pp. 165-190). New York: Academic Press.

Moriarity, D. J. (1978). An investigation of the effects of instruction in five components of the writing process on the quality and syntactic complexity of students' writing (Research Report). Framingham, MA: Framingham State University.

Mullis, I. (1980). Using the primary trait system for evaluating writing. Princeton, NJ: Educational Testing Service.

Soltis, J., & Walberg, H. (1989). Thirteen-year-olds' writing achievements: A secondary analysis of the fourth national assessment of writing. Journal of Educational Research, 83(1), 22-29.

Stuckless, E., & Birch, J. (1966). The influence of early manual communication on the linguistic development of deaf children. American Annals of the Deaf, 111(4), 425-460, 499-504.

Taylor, L. (1969). A language analysis of the writing of deaf children (Final Report). Tallahassee: Department of English, Florida State University.

Thompson, W. (1936). Analysis of errors in written composition by deaf children. American Annals of the Deaf, 81(2), 95-99.

Walter, J. (1955). A study of the written sentence construction of a group of profoundly deaf children. American Annals of the Deaf, 100(3), 235-252.
COPYRIGHT 1992 Council for Exceptional Children
Author: Kluwin, Thomas N.; Kelly, Arlene Blumenthal
Publication: Exceptional Children
Date: Sep 1, 1992